Vibe Coding: When Developers Stop Typing and Start Talking

I mass-deleted 200 lines of boilerplate last week and replaced them with a single English sentence. The AI wrote the code. It worked. I didn't touch the keyboard for twenty minutes.

That's vibe coding. And it's either the future of software development or a really efficient way to ship bugs -- depending on what you're building.

What vibe coding actually means

The term comes from Andrej Karpathy, who tweeted about it in early 2025. The idea is simple: instead of writing code character by character, you describe what you want in natural language and let an AI model generate the implementation. You talk (or type prompts), the AI codes, and you review what comes out.

It sounds like autocomplete on steroids, but in practice it's a different way of working. You stop thinking about syntax and start thinking about intent. Instead of "how do I write a recursive function to traverse this tree," you say "walk this tree depth-first and collect all nodes where status is active." The model figures out the implementation details.
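For that tree prompt, the kind of implementation a model typically hands back might look something like this (the node shape -- a dict with `status` and `children` keys -- is my assumption, not anything the model is guaranteed to infer):

```python
def collect_active(node):
    """Depth-first walk collecting every node whose status is 'active'."""
    found = []
    if node.get("status") == "active":
        found.append(node)
    for child in node.get("children", []):
        found.extend(collect_active(child))  # recurse into subtrees
    return found
```

You never specified recursion, an accumulator, or a traversal order beyond "depth-first" -- the model filled in those details, which is exactly the point.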

Tools like Cursor, Claude Code, and Bolt.new have made this practical. You can describe a feature, watch the code appear, run it, see what breaks, describe the fix, and iterate. The feedback loop is fast enough that it feels like pair programming with someone who types 10,000 words per minute but occasionally gets confused about your project structure.

Where this actually works

Vibe coding is genuinely good at a specific category of work: stuff you already know how to build but don't want to type out.

CRUD endpoints. Form validation. Database migrations. Config files. Test scaffolding. The kind of code where the hard part isn't figuring out what to write -- it's the tedium of actually writing it. I've used it to generate entire API routes from a schema description, and the output was cleaner than what I would've written by hand because the model doesn't get lazy halfway through and start cutting corners on error handling.
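To make the category concrete, here is a sketch of the kind of validation boilerplate I mean -- the field names and rules are invented for illustration, but the shape is typical of what a model generates from a one-sentence schema description:

```python
import re

# Deliberately simple email check -- a sketch, not a spec-compliant validator.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_signup(payload):
    """Return a dict of field -> error message; an empty dict means valid."""
    errors = {}
    email = payload.get("email", "")
    if not EMAIL_RE.match(email):
        errors["email"] = "invalid email address"
    age = payload.get("age")
    if not isinstance(age, int) or age < 0:
        errors["age"] = "age must be a non-negative integer"
    return errors
```

Nothing here is hard. It's just tedious, and the tedium is precisely where models shine -- including the error-handling branches a tired human skips.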

Prototyping is the other sweet spot. When I'm trying to figure out if an idea works, I don't want to spend three hours setting up a project. Describe the thing, let the AI scaffold it, poke around, see if the concept holds up. If it doesn't, I've lost fifteen minutes instead of an afternoon.

Scripts and one-off tooling are great candidates too. Need a script to parse a CSV, rename 500 files, or migrate data between two formats? Describe the input, describe the output, let the model connect the dots. These are tasks where correctness is easy to verify -- you run it and check the output.
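A minimal sketch of that describe-input, describe-output pattern -- here, "turn a CSV into JSON," with the column names invented for the example:

```python
import csv
import io
import json

def csv_to_json(csv_text):
    """Parse CSV text into a pretty-printed JSON array of row objects."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return json.dumps(rows, indent=2)
```

This is the easy-to-verify category: run it on a sample file, eyeball the output, done. There's no hidden state to get subtly wrong.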

Where it falls apart

Here's the part that doesn't get enough attention: vibe coding works best when you can verify the output quickly. The further you get from "run it and see if it works," the more dangerous it gets.

Complex business logic is where I've been burned. The code looks right. It passes the obvious test cases. But there's an edge case buried in a conditional that the model got subtly wrong, and you don't catch it because the code was so fluent that you trusted it more than you should have. I've shipped bugs this way. Not proud of it.
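Here's a toy reconstruction of that failure mode -- the business rule and numbers are invented, but the bug pattern is the real one:

```python
# Hypothetical rule: orders of 100 units or more get a 10% bulk discount.

def unit_price_buggy(quantity, base=10.0):
    # Fluent, plausible, wrong: strict '>' silently misses the boundary case.
    return base * 0.9 if quantity > 100 else base

def unit_price_fixed(quantity, base=10.0):
    # '>=' matches the stated rule, including exactly 100 units.
    return base * 0.9 if quantity >= 100 else base
```

Test with 50 and 200 units and both versions agree. The divergence lives at exactly 100 -- the one input your obvious test cases never exercise.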

Performance-sensitive code is another weak spot. The model will give you something that works, but "works" and "works at scale" are different problems. I asked an AI to write a search function last month. It returned something that was O(n^2). Technically correct. Would have crawled at production volume. The model had no concept of the data size it was working with.
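I can't share the actual function, but the shape of the problem was roughly this -- which query strings appear in a corpus. Both versions below are "correct"; only one survives production volume:

```python
def matches_quadratic(queries, corpus):
    # What the model handed back: a scan of the corpus per query,
    # O(len(queries) * len(corpus)).
    return [q for q in queries if any(q == doc for doc in corpus)]

def matches_linear(queries, corpus):
    # Build a set once, then each lookup is constant time on average,
    # O(len(queries) + len(corpus)).
    corpus_set = set(corpus)
    return [q for q in queries if q in corpus_set]
```

On ten items you'd never notice the difference. On a few hundred thousand, the quadratic version is the difference between milliseconds and minutes -- and nothing in the prompt told the model which world it was writing for.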

Security is the one that keeps me up at night. Models are trained on public code, including public code with vulnerabilities. When you vibe-code an auth flow or a data sanitization layer, you're trusting that the model didn't learn from a Stack Overflow answer that was subtly wrong. I still write security-critical code by hand, and I still have someone else review it.

The skill shift nobody talks about

Vibe coding doesn't make programming easier. It changes which parts are hard.

Typing speed stops mattering. Prompt clarity matters a lot. The difference between a useful AI output and a useless one is often the difference between "make a login page" and "create a login page with email and password fields, client-side validation that checks email format before submission, a loading state on the submit button, and error handling that shows server-side validation messages below each field." The more precisely you describe what you want, the less time you spend fixing what you get.

Code review becomes the actual job. When you're writing code yourself, review is a sanity check. When an AI writes the code, review is where the engineering happens. You're reading someone else's implementation of your idea, and that someone has no context about your system beyond what you told them in the last thirty seconds.

This changes the junior-senior dynamic in interesting ways. A senior developer who vibe-codes is dangerous in a good way -- they know what to ask for, they can spot when the output is wrong, and they can describe edge cases the model would miss. A junior developer who vibe-codes is dangerous in a bad way. Not because the tool is bad, but because they don't have the experience to know when the code is lying to them.

I've seen junior devs ship AI-generated code that they couldn't explain when it broke. That's not the AI's fault. But it's a real problem, and telling them to "just review the code" isn't enough when they can't yet distinguish good code from code that merely looks good.

My honest take

I use vibe coding every day. It has genuinely made me faster at a certain kind of work. I build prototypes in an hour that used to take a day. I generate boilerplate without the soul-crushing boredom. I write throwaway scripts without opening documentation.

But I don't vibe-code anything I can't verify, and I don't trust the output more than I'd trust a pull request from a new hire. The code needs the same scrutiny regardless of who or what wrote it.

The developers who are getting the most out of this aren't the ones who stopped thinking. They're the ones who shifted what they think about -- from syntax to architecture, from typing to reviewing, from "how do I write this" to "is this actually correct."

That shift is real and it's worth paying attention to. Just don't let the speed trick you into skipping the part where you actually understand what got built.