ScanVibe Team · 14 min read

What Is Vibe Coding Security? The Complete Guide for AI-Built Apps

Vibe coding security explained: why apps built with Lovable, Bolt, Cursor, and Replit have unique vulnerabilities — with real data from published studies and CVEs.


What Is Vibe Coding Security?

In February 2025, OpenAI co-founder Andrej Karpathy posted on X: "There's a new kind of coding I call 'vibe coding', where you fully give in to the vibes, embrace exponentials, and forget that the code even exists." The term took off — Collins Dictionary named it Word of the Year 2025.

Tools like Lovable, Bolt.new, Cursor, Replit, and v0 now let anyone build full-stack apps in hours using natural language prompts. No CS degree needed. Ship by the end of the day.

But there's a problem that most vibe coders discover too late: AI-generated code ships with serious, well-documented security vulnerabilities. Not theoretical ones — vulnerabilities that have already exposed hundreds of thousands of real users' data.

Vibe coding security is the practice of identifying, understanding, and fixing the security flaws that AI coding tools introduce. This guide covers what the research shows, what the real incidents prove, and how to protect your app.


The Evidence: Why Vibe Coding Creates Security Risks

This isn't speculation. Multiple independent studies and real-world incidents paint a clear picture.

What the Research Shows

Escape.tech (2025) analyzed 5,600 publicly available vibe-coded apps and found:

- 2,000+ vulnerabilities found
- 400+ exposed secrets (API keys, DB credentials)
- 175 instances of exposed PII

The platforms scanned included Lovable (~4,000 apps), Base44, Create.xyz, and Bolt.new.

Apiiro (2025) tracked 7,000 developers across 62,000 repositories and found:

- 10,000+ new security findings per month by June 2025
- +322% increase in privilege escalation paths
- +153% increase in design flaws

Developers using AI exposed cloud credentials nearly twice as often as those coding manually.

A Stanford University study showed that participants with access to AI assistants wrote significantly less secure code than those without — across four of five programming tasks. Worse: developers using AI were more confident their code was secure, despite it being less secure. A dangerous false sense of security.

CSO Online / InfoWorld (December 2025) assessed five major vibe coding tools — Claude Code, OpenAI Codex, Cursor, Replit, and Devin — by building the same three test applications with each. Result: 69 vulnerabilities across 15 apps, with patterns so consistent the researchers concluded the problem is structural, not incidental.

The Root Cause

💡
AI coding assistants optimize for functionality, not security. As CSO Online puts it: "AI agents are optimized to provide a working answer, fast. This prioritization of function over security is a fundamental issue."

When you prompt an AI to "build me a SaaS with Supabase auth and Stripe payments," it will deliver a working app. But under the hood, that app likely has exposed API keys, missing database security rules, and no security headers — because none of those are needed for the app to work.


Real-World Incidents: When Vibe Coding Goes Wrong

CVE-2025-48757: 170+ Lovable Apps Exposed

CVE-2025-48757 Assigned in the National Vulnerability Database for Lovable-generated projects shipping without Supabase RLS

In 2025, security researcher Matt Palmer discovered that Lovable-generated projects were systematically shipping without Supabase Row Level Security (RLS).

The scope: 303 endpoints across 170 projects were accessible to unauthenticated attackers, who could read, modify, and delete all data in affected databases.

The Moltbook Incident (January 2026)

Moltbook, a "social network for AI agents" built with vibe coding tools, launched on January 28, 2026. Within three days, it was compromised.

- 1.5M API keys exposed
- 35,000 user emails exposed
- 3 days from launch to breach

A misconfigured Supabase database with RLS never enabled exposed everything directly to the public internet.

18,697 Students Exposed (February 2026)

In February 2026, The Register reported that a Lovable-built exam application exposed the data of 18,697 users, including students and educators from top U.S. universities. The security researcher found 16 vulnerabilities — six rated critical.

The most alarming finding: the AI had written authentication logic backwards. The code blocked legitimate authenticated users while allowing unauthenticated attackers full access. This logic inversion was replicated across multiple critical functions in the app.

The Most Common Vibe Coding Vulnerabilities

Based on the published research and the incidents above, here are the vulnerabilities that appear again and again:

1. Missing Supabase RLS (Critical): 83% of exposed Supabase databases. Caused CVE-2025-48757, the Moltbook breach, and the 18K student exposure.
2. Exposed API Keys & Secrets (Critical): 400+ exposed secrets found in 5,600 vibe-coded apps, including OpenAI, Stripe, and SendGrid keys.
3. Missing Security Headers (High): enables XSS, clickjacking, MIME sniffing, and protocol downgrade attacks.
4. Insecure Firebase Configuration (High): default rules allow anyone to read and write the entire database.
5. Authentication Logic Flaws (High): AI-generated auth can invert access control, blocking legitimate users while allowing attackers.
6. Exposed Sensitive Files (High): .env files, .git directories, backup files; 175 instances of exposed PII found.
7. Vulnerable Dependencies (Medium): AI suggests packages based on outdated training data with known CVEs.
8. Server-Side Request Forgery (Medium): AI fails on context-dependent issues; there is no universal rule to distinguish legitimate from malicious URLs.

1. Missing Supabase Row Level Security (RLS)

Severity: Critical — This is the #1 vulnerability in the vibe coding ecosystem.

Supabase is the most popular backend for vibe-coded apps, especially on Lovable. By default, RLS is disabled on new tables. This means anyone who knows your Supabase URL and anon key — both of which are in your client-side JavaScript by design — can read, write, and delete all data in that table.

According to research, 83% of exposed Supabase databases involve RLS misconfigurations. This is what caused CVE-2025-48757, the Moltbook breach, and the 18K student data exposure.

Why AI skips it: Enabling RLS requires writing PostgreSQL policies — additional complexity that doesn't affect whether the app "works." AI optimizes for a functional demo, not production security.

The fix: Go to your Supabase dashboard. Enable RLS on every table. Write policies that restrict access based on auth.uid(). Test by attempting to query data without authentication — it should fail.
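The steps above can be sketched in SQL. This is a minimal illustration, not a drop-in policy set — the `profiles` table and `user_id` column are hypothetical; adapt them to your own schema:

```sql
-- Enable RLS on the table (new Supabase tables ship with it disabled)
alter table profiles enable row level security;

-- Allow users to read only their own row
create policy "Users can read own profile"
  on profiles for select
  using (auth.uid() = user_id);

-- Allow users to update only their own row
create policy "Users can update own profile"
  on profiles for update
  using (auth.uid() = user_id);
```

With RLS enabled and no matching policy, queries simply return no rows — which is why the unauthenticated-query test described above should come back empty.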

2. Exposed API Keys and Secrets

Severity: Critical

The Escape.tech study found 400+ exposed secrets across 5,600 vibe-coded apps. These aren't just Supabase anon keys (which are designed to be public when combined with RLS). They include OpenAI API keys, Stripe secret keys, SendGrid credentials, and database connection strings.

AI tools place these inline because tutorials and docs show them that way for simplicity. The AI replicates the pattern.

The fix: Move all secrets to environment variables (.env.local). Never import secret keys in client-side code. Use NEXT_PUBLIC_ prefix only for truly public values.
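The NEXT_PUBLIC_ convention can be made mechanical. Here's a sketch of a hypothetical helper (not part of any framework) that filters an environment so only explicitly public values can ever reach the browser bundle:

```typescript
// Hypothetical guard: only variables explicitly marked public may be
// forwarded to client-side code. Everything else stays on the server.
function clientSafeEnv(env: Record<string, string>): Record<string, string> {
  return Object.fromEntries(
    Object.entries(env).filter(([key]) => key.startsWith("NEXT_PUBLIC_"))
  );
}

// Example: the Stripe secret never makes it into the filtered set
const exposed = clientSafeEnv({
  NEXT_PUBLIC_SUPABASE_URL: "https://example.supabase.co",
  STRIPE_SECRET_KEY: "sk_live_...",
});
```

The same idea applies regardless of framework: secrets are read from `process.env` in server-only code, never embedded as string literals in files the bundler ships to clients.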

3. Missing Security Headers

Severity: High

Security headers instruct browsers how to handle your content. Without them, your app is vulnerable to cross-site scripting (XSS), clickjacking, MIME sniffing, and protocol downgrade attacks.

Most vibe-coded apps deploy to Vercel, Netlify, or Railway with zero custom headers. The hosting platform doesn't add them by default.

The fix: Add these headers to your hosting configuration (e.g., vercel.json or next.config.ts):

Content-Security-Policy: default-src 'self'; script-src 'self' 'unsafe-inline'
X-Frame-Options: DENY
X-Content-Type-Options: nosniff
Strict-Transport-Security: max-age=31536000; includeSubDomains
Referrer-Policy: strict-origin-when-cross-origin
Permissions-Policy: camera=(), microphone=(), geolocation=()
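In vercel.json, those headers might be wired up like this — a sketch applying them to every route; tighten the CSP to your app's actual script and asset sources before shipping:

```json
{
  "headers": [
    {
      "source": "/(.*)",
      "headers": [
        { "key": "Content-Security-Policy", "value": "default-src 'self'; script-src 'self' 'unsafe-inline'" },
        { "key": "X-Frame-Options", "value": "DENY" },
        { "key": "X-Content-Type-Options", "value": "nosniff" },
        { "key": "Strict-Transport-Security", "value": "max-age=31536000; includeSubDomains" },
        { "key": "Referrer-Policy", "value": "strict-origin-when-cross-origin" },
        { "key": "Permissions-Policy", "value": "camera=(), microphone=(), geolocation=()" }
      ]
    }
  ]
}
```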

4. Insecure Firebase Configuration

Severity: High

Firebase's default security rules allow anyone to read and write your entire database:

{
  "rules": {
    ".read": true,
    ".write": true
  }
}

AI tools deploy with these defaults because writing proper rules requires understanding your data model — context that's hard to derive from a natural language prompt.

The fix: Write granular rules for each collection. Use Firebase Authentication to validate users. Test with the Firebase Emulator before deploying.
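For a Realtime Database, a per-user rule set might look like this sketch (the `users` path is illustrative — mirror your actual data model):

```json
{
  "rules": {
    ".read": false,
    ".write": false,
    "users": {
      "$uid": {
        ".read": "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid"
      }
    }
  }
}
```

The top-level `false` defaults mean anything not explicitly granted stays locked — the opposite of the defaults shown above.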

5. Authentication Logic Flaws

Severity: High

The February 2026 Lovable incident revealed something deeply concerning: the AI wrote authentication that inverted access control — blocking authenticated users while allowing unauthenticated ones. This wasn't a missing check; it was a check that did the opposite of what was intended.

The CSO Online study confirmed that while AI handles basic patterns well (parameterized queries, framework-level XSS prevention), it struggles with context-dependent security logic — like knowing which users should access which resources.

The fix: Manually review all authentication and authorization logic. Test each protected route both with and without valid credentials. Never trust AI-generated auth code without verification.
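The inversion described above is easy to picture in code. Here's a sketch of the bug class — the `Session` shape and function names are hypothetical, but the flipped condition is exactly the kind of flaw a manual review catches:

```typescript
// Hypothetical session type for illustration
type Session = { userId: string } | null;

// BUG (the inverted pattern): grants access when there is NO session,
// blocking logged-in users while admitting unauthenticated attackers
function canAccessInverted(session: Session): boolean {
  return session === null;
}

// FIX: require a valid session before granting access
function canAccess(session: Session): boolean {
  return session !== null;
}
```

Both versions compile, both "work" in a quick demo — only testing each protected route with and without credentials exposes the difference.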

6. Exposed Sensitive Files

Severity: High

Vibe-coded apps frequently expose files that should never be public: .env files, .git/ directories, package.json, backup files. The Escape.tech study found 175 instances of exposed PII through such files.

The fix: Configure .gitignore properly. Block access to dotfiles in your web server config. Test by accessing yourdomain.com/.env in a browser — it should return 404.
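As a starting point, a .gitignore along these lines keeps secrets and leftovers out of the repository (note it does not stop an already-deployed server from serving such files — block dotfiles at the server level too, as described above):

```gitignore
# Secrets — never commit environment files
.env
.env.local
.env.*.local

# Backups and database dumps
*.bak
*.dump
```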

7. Vulnerable Dependencies

Severity: Medium to High

AI tools suggest packages based on training data that may predate security patches. The Apiiro study showed AI-assisted development leads to more vulnerabilities across all categories — including open-source dependencies.

The fix: Run npm audit after every install. Set up Dependabot or Snyk for automated alerts. Remove unused packages.

8. Server-Side Request Forgery (SSRF)

Severity: Medium to High

The CSO Online assessment found that while AI tools handle "solved" vulnerability classes well (SQL injection, basic XSS), they fail on context-dependent issues like SSRF. As the researchers noted: "There's no universal rule for distinguishing legitimate URL fetches from malicious ones."

The fix: Validate and sanitize all user-provided URLs. Maintain allowlists for external services. Never pass user input directly to server-side fetch calls.
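One common shape for the allowlist approach is sketched below — the hostnames are placeholders for whatever external services your app actually calls:

```typescript
// Only fetch from hosts we explicitly trust, over HTTPS.
const ALLOWED_HOSTS = new Set(["api.example.com", "cdn.example.com"]);

function isAllowedUrl(raw: string): boolean {
  try {
    const url = new URL(raw);
    // Require HTTPS and an exact hostname match — no substring checks,
    // which attackers bypass with hosts like "api.example.com.evil.net".
    return url.protocol === "https:" && ALLOWED_HOSTS.has(url.hostname);
  } catch {
    return false; // not a parseable URL at all
  }
}
```

Run this check server-side before any fetch that involves a user-supplied URL; rejecting unparseable input by default is what keeps the failure mode safe.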


How to Secure Your Vibe-Coded App

Step 1: Scan Your App

Before you can fix problems, you need to find them. Use a security scanner that checks for the specific vulnerabilities vibe coding creates — not just generic web security issues.

🔎
Scan your app now with ScanVibe — free, takes 30 seconds, no signup required. You get an A-F security grade with specific findings.

Step 2: Fix Critical Issues First

Prioritize by impact:

  1. Enable Supabase RLS on all tables (this alone would have prevented CVE-2025-48757)
  2. Remove exposed secret keys from client-side code
  3. Lock down Firebase rules if applicable
  4. Review authentication logic — test it manually, don't trust the AI

Step 3: Add Security Headers

Five minutes of configuration that blocks entire categories of attacks. Copy the headers above into your hosting config.

Step 4: Update Dependencies

npm audit
npm audit fix

Remove packages you're not using. Each dependency is additional attack surface.

Step 5: Set Up Ongoing Monitoring

Security isn't a one-time fix. New vulnerabilities appear when you update code, add features, or change configurations.


Vibe Coding Security by Platform

Lovable

The most-studied platform for vibe coding security. CVE-2025-48757 directly targeted Lovable-generated projects. Primary risks: missing Supabase RLS, exposed anon keys without proper policies, no security headers. Lovable released a "security scan" feature in Lovable 2.0, but researchers found it only detects the presence of RLS — not whether the policies actually work.

Bolt.new

Part of the Escape.tech study sample. Common issues: hardcoded API keys, minimal error handling that leaks internal details, permissive CORS configurations.

Cursor

The CurXecute vulnerability (2025) showed that Cursor itself could be exploited to execute arbitrary commands on a developer's machine via a malicious MCP server. For generated code: AI-suggested packages with known CVEs, inconsistent security patterns across files, test credentials left in production.

Replit

Instant deployment means apps go live before security review. Environment variables can leak through Replit's public forking mechanism. Missing rate limiting is nearly universal.

v0 (Vercel)

Generates frontend components that are generally safer. But when combined with backend services: API routes without auth middleware, server actions without input validation, unprotected database queries.


The Vibe Coding Security Checklist

Before Launch

- Enable Supabase RLS on every table and write auth.uid()-based policies
- Remove secret keys from client-side code; move them to environment variables
- Add security headers to your hosting configuration
- Lock down Firebase security rules (if applicable)
- Manually review all authentication and authorization logic
- Run npm audit and fix or remove vulnerable dependencies
- Confirm sensitive files (.env, .git/) are not publicly accessible

After Launch

- Scan after every deployment
- Schedule weekly security scans
- Enable automated dependency alerts (Dependabot or Snyk)

Frequently Asked Questions

What is vibe coding?

Vibe coding is building software by describing what you want in natural language to AI tools like Lovable, Bolt.new, Cursor, Replit, and v0. The AI generates the code, and you accept it without deeply reviewing the implementation. The term was coined by Andrej Karpathy in February 2025 and named Collins Dictionary Word of the Year 2025.

Is vibe coding safe?

The code vibe coding produces often contains security vulnerabilities. A December 2025 assessment found 69 vulnerabilities across 15 apps built with major vibe coding tools. However, with proper security scanning and manual review, vibe-coded apps can be secured for production use.

What are the biggest security risks?

Based on published research: missing Supabase RLS (caused CVE-2025-48757, affecting 170+ apps), exposed API keys and secrets (400+ found in 5,600 apps by Escape.tech), missing security headers, authentication logic flaws, and vulnerable dependencies. The Apiiro study showed AI-assisted code introduces 10,000+ new security findings per month.

How do I check if my vibe-coded app is secure?

Scan it with a tool designed for AI-built apps. ScanVibe checks 8 security categories — SSL, headers, secrets, libraries, exposed files, Supabase, Firebase, and auth endpoints — in under 30 seconds. Free, no signup needed.

Can I make my Lovable/Bolt app production-ready?

Yes, but it requires manual security work. The three Lovable apps with good security in the Escape.tech study all had developers who manually reviewed and hardened the AI-generated code. Enable RLS, add headers, remove exposed keys, review auth logic.

How often should I scan?

After every deployment, at minimum. Weekly scheduled scans are ideal. The Apiiro study showed the volume of AI-introduced vulnerabilities is accelerating, not stabilizing — so continuous monitoring is essential.


Sources

The data in this article comes from:

- Escape.tech (2025): analysis of 5,600 publicly available vibe-coded apps
- Apiiro (2025): study of 7,000 developers across 62,000 repositories
- Stanford University: study of code security among developers using AI assistants
- CSO Online / InfoWorld (December 2025): assessment of five vibe coding tools across 15 test apps
- The Register (February 2026): report on the Lovable-built exam app exposure
- National Vulnerability Database: CVE-2025-48757


Start Securing Your App Today

Every incident above started the same way: someone shipped AI-generated code without checking it. The Moltbook breach took three days. The student data exposure affected 18,697 people. CVE-2025-48757 hit 170+ apps at once.

The fix starts with knowing where you stand.


Scan your app for free with ScanVibe — A-F security grade in 30 seconds.

Your AI wrote the code. Let's make sure it's safe.

Is your AI-built app secure?

Run a free security scan and find out in 30 seconds.

Scan Your App Now