“There’s a new kind of coding I call ‘vibe coding,’ where you fully give in to the vibes, embrace exponentials, and forget that the code even exists,” claimed Andrej Karpathy in a post on X back in February. The post prompted many people to share their “vibe coded” applications on social media and to comment on how well the approach works.

Curious, I downloaded Cursor to my home computer. The setup was easy. My first prompt was “create an application that asks for a zip code and returns the weather for that location.” Cursor replied with clarifying questions: Did I want the temperature in Fahrenheit? Did I want to show the humidity? Did I want a blue button? I said yes to it all. In minutes, Cursor was done, having generated three new files.
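To give a sense of what this kind of output looks like, here is a minimal sketch of the core of such a generated script. The endpoint, parameters, and key below are hypothetical placeholders I made up for illustration, not Cursor’s actual output:

```python
# weather.py - illustrative sketch only; the endpoint and key are hypothetical
import requests

API_KEY = "abc123-plaintext-key"  # hard-coded secret sitting in source code

def get_weather(zip_code):
    # Ask a (hypothetical) weather service for current conditions at this zip code
    response = requests.get(
        "https://api.example-weather.test/current",
        params={"zip": zip_code, "units": "imperial", "key": API_KEY},
    )
    return response.json()  # no input validation, no error handling

if __name__ == "__main__":
    print(get_weather(input("Enter a zip code: ")))
```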

Yes, there were issues, but Cursor and I fixed them without me so much as glancing at the code, just as Karpathy described in his post: “Sometimes the LLMs can’t fix a bug so I just work around it or ask for random changes until it goes away.”

I was very proud of my creation and immediately sent it to family and friends for group testing. I got feature requests such as “what to wear,” which I quickly added. But when I went to add another feature, Cursor prompted me to purchase more tokens; I had used up all my free ones. And that was the end of my vibe coding.

From Fun To Functional To… Fortified? It’s Not By Default

I had prompted Cursor to do a security review and grade its own homework. To its credit, Cursor came back with findings such as a lack of input sanitization, no rate limiting, no proper error handling, and an API key in plain text, which Cursor then fixed.
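In practice, fixes for findings like these amount to the difference between the earlier sketch and something like the version below (again illustrative, with the same hypothetical endpoint and names): the key moves to an environment variable, the zip code is validated, and failures are handled instead of ignored. Rate limiting is omitted here for brevity, as it typically belongs on the server side.

```python
# weather.py - hardened version of the earlier sketch (endpoint and names still hypothetical)
import os
import re
import requests

API_KEY = os.environ["WEATHER_API_KEY"]  # secret comes from the environment, not source code
ZIP_RE = re.compile(r"^\d{5}$")          # basic input sanitization: US five-digit zip only

def get_weather(zip_code: str) -> dict:
    if not ZIP_RE.match(zip_code):
        raise ValueError("zip code must be exactly five digits")
    try:
        response = requests.get(
            "https://api.example-weather.test/current",
            params={"zip": zip_code, "units": "imperial", "key": API_KEY},
            timeout=10,
        )
        response.raise_for_status()           # surface HTTP errors instead of ignoring them
        return response.json()
    except requests.RequestException as exc:  # proper error handling instead of a raw stack trace
        raise RuntimeError("weather lookup failed") from exc
```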

Why didn’t Cursor write secure code from the start? Why did it have to be prompted to run a security review? This is a huge “gotcha”: developers cannot assume that generated code is secure by default.

LLMs Are Not Secure Either

Cursor is not alone. While AI is getting better at coding syntax, its security improvements have plateaued: one study found that 45% of AI coding tasks produced code with security weaknesses. A different study found that open-source LLMs suggest non-existent packages over 20% of the time and commercial models about 5% of the time. Attackers exploit this by publishing malicious packages under those hallucinated names, leading developers to unknowingly introduce vulnerabilities.
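One lightweight guardrail is to verify that an LLM-suggested dependency actually exists in the package registry before installing it. A minimal sketch using PyPI’s public JSON API (the package names passed on the command line are whatever an LLM suggested to you):

```python
# check_deps.py - sanity-check LLM-suggested package names against PyPI before installing
import sys
import requests

def exists_on_pypi(package: str) -> bool:
    # PyPI's public JSON API returns 404 for packages that do not exist
    response = requests.get(f"https://pypi.org/pypi/{package}/json", timeout=10)
    return response.status_code == 200

if __name__ == "__main__":
    # Example: python check_deps.py requests flask some-hallucinated-package
    for name in sys.argv[1:]:
        status = "found" if exists_on_pypi(name) else "NOT FOUND - do not install blindly"
        print(f"{name}: {status}")
```

Note that existence alone is not sufficient: the attack works precisely because adversaries register packages under hallucinated names, so maintainer reputation, release history, and download counts still need a human look.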

Vibe Coding Is Not Ready For Business Applications… Yet

Are we taking vibe coding too far? For example, are product managers, design professionals, and non-software developers vibe coding the next mobile banking application and putting it into production? Hopefully not. I share Karpathy’s sentiment: “[vibe coding] is not too bad for throwaway weekend projects.” In the professional world, product managers, designers, software developers, and testers can use AI-powered software tools to assist in building applications, from prototyping and design to coding, testing, and even delivery. But for now, humans must remain in the loop.

What happens to the role of application security? With LLMs helping companies release faster (Microsoft and Google both boast that over 25% of their code is now written by AI), the amount of vulnerable code will only increase, especially in the short term. DevSecOps best practices must be adopted for all code regardless of how it is developed: with AI or without, by full-time developers or a third party, or downloaded from open-source projects. Otherwise, organizations will fail to innovate securely.

“Vibe coding” tools such as Cursor, Cognition Windsurf, and Claude Code are already entrenched in professional software development. There will be a convergence with low-code platforms (solutions that allow technical and non-technical users to quickly build and iterate on applications with visual models). In the next three to five years, the software development lifecycle will collapse and the role of the software developer will evolve from programmer to agent orchestrator. AI-native AppGen platforms that integrate ideation, design, coding, testing, and deployment into a single generative act will rise to meet the challenge of AI-enhanced coding within guardrails. AI security agents will emerge to help security and development professionals avoid a tsunami of insecure, poor-quality, and unmaintainable code, whether low coded or vibed.

Join Us In Austin To Learn How To Secure AI-Generated Code

Interested in learning what the future holds? Attend Forrester’s Security & Risk Summit in Austin, Texas, on November 5–7, 2025, where my colleague Chris Gardner and I will provide a look into Application Security In The Age Of AI-Generated Code and beyond.