There's a video on YouTube where a vibe coder and a senior iOS developer - ten years of experience - both try to clone Granola, a $250M AI meeting notetaker, using only five prompts each. No manual code edits. Just AI tools, strict rules, and everyone watching.
The vibe coder nearly won.
I say "nearly" carefully, because the honest answer at the end of the video is that nobody definitively won. They left it to the comments. And that ambiguity is actually the most interesting thing about the whole challenge.
The Setup
Riley (vibe coder) and Vishall (senior iOS dev) were building an app with these features:
- Voice recording and audio transcription
- Calendar sync - record a meeting, it shows up at the right time slot
- AI-generated summaries after transcription
- Folders for organizing recordings
Snake-draft style, ten total prompts split between them. No touching the code directly. Only the AI gets to write.
Vishall used Claude Code in terminal alongside Xcode, building in native Swift. Riley used Vibe Code, a tool built specifically for AI-first mobile development on top of Expo. Both used Claude Opus 4.1.
The tools were different. The experience levels were very different. The outputs were not that different.
What Each Person Understood That the Other Didn't
Vishall knew what he was looking at when Claude Code scaffolded out an architecture with EventKit, AVAudioRecorder, and the Apple Speech framework. He knew why it chose those frameworks. He caught, almost immediately, that Apple's native speech-to-text is worse than OpenAI Whisper - and flagged it on camera, planning to fix it later.
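For readers who haven't used these frameworks, here's a minimal sketch of what file-based transcription with Apple's Speech framework looks like - illustrative only, not the code Claude Code actually generated in the video:

```swift
import Speech

// Minimal sketch: transcribe an already-recorded audio file with Apple's
// native Speech framework. Function name and flow are illustrative.
func transcribe(fileURL: URL, completion: @escaping (String?) -> Void) {
    SFSpeechRecognizer.requestAuthorization { status in
        guard status == .authorized,
              let recognizer = SFSpeechRecognizer(), recognizer.isAvailable else {
            completion(nil)
            return
        }
        let request = SFSpeechURLRecognitionRequest(url: fileURL)
        recognizer.recognitionTask(with: request) { result, error in
            // Partial results stream in repeatedly; wait for the final one.
            if let result = result, result.isFinal {
                completion(result.bestTranscription.formattedString)
            } else if error != nil {
                completion(nil)
            }
        }
    }
}
```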
Riley didn't know EventKit from Event Horizon. But Riley knew something else: prompt strategy.
From the very first prompt, Riley structured the ask to cover core features (voice, transcription, AI summary, folders) while explicitly deferring calendar: "in the future we'll add calendar - build with that in mind, but not now." Vishall's first prompt tried to get calendar sync working from day one. The code came back with a build error on the first run.
Prompt 2, for both of them, was a bug fix. So each effectively had three functional feature prompts, not five.
This is a pattern in every serious AI coding session: errors eat your budget. Senior-level knowledge helps partly because it lets you recognize when you've painted yourself into a corner, and partly because it helps you write prompts that don't create corners in the first place. But if you're vibe coding and you defer the hard parts strategically, you can sidestep those corners entirely.
Or maybe you just get lucky. Hard to say.
The Apple Speech Moment
This one is worth sitting with.
When Vishall entered his first prompt, Claude Code chose Apple's native Speech framework for transcription - not OpenAI Whisper. Vishall knew this was probably worse quality. He said so. He let it ride, because he was already one error and two prompts deep, and switching felt like wasted budget.
Riley, using Vibe Code on Expo, had Whisper from the start. The AI just picked it.
There's something interesting there about how tools choose defaults. Claude Code, running in a native Swift context, reached for the most idiomatic Apple option. Vibe Code, built around cross-platform Expo, pulled in OpenAI's API instead. Neither was wrong exactly. But one was objectively better for transcription quality, and it happened entirely by accident - based on which tool you opened.
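For a sense of how small the difference is in code terms, here's a hedged Swift sketch of the other default - uploading the same audio file to OpenAI's hosted Whisper model via the /v1/audio/transcriptions endpoint (the Expo app would do the equivalent in JavaScript; the function name and file handling here are illustrative):

```swift
import Foundation

// Sketch: send a recorded audio file to OpenAI's transcription endpoint
// (model "whisper-1") and pull the transcript out of the JSON response.
func transcribeWithWhisper(fileURL: URL, apiKey: String,
                           completion: @escaping (String?) -> Void) throws {
    let boundary = "Boundary-\(UUID().uuidString)"
    var request = URLRequest(url: URL(string: "https://api.openai.com/v1/audio/transcriptions")!)
    request.httpMethod = "POST"
    request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
    request.setValue("multipart/form-data; boundary=\(boundary)", forHTTPHeaderField: "Content-Type")

    // Build the multipart body: a "model" field plus the audio file itself.
    var body = Data()
    body.append("--\(boundary)\r\nContent-Disposition: form-data; name=\"model\"\r\n\r\nwhisper-1\r\n".data(using: .utf8)!)
    body.append("--\(boundary)\r\nContent-Disposition: form-data; name=\"file\"; filename=\"recording.m4a\"\r\nContent-Type: audio/m4a\r\n\r\n".data(using: .utf8)!)
    body.append(try Data(contentsOf: fileURL))
    body.append("\r\n--\(boundary)--\r\n".data(using: .utf8)!)
    request.httpBody = body

    URLSession.shared.dataTask(with: request) { data, _, _ in
        // On success the endpoint returns JSON shaped like {"text": "..."}.
        guard let data = data,
              let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any] else {
            completion(nil)
            return
        }
        completion(json["text"] as? String)
    }.resume()
}
```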
The quality difference showed. Riley recorded himself for a couple of minutes during filming, and the transcript came back clean. Vishall's Apple Speech output was, in his own words, "significantly worse."
Where the Experience Still Showed
Vishall knew when Xcode's derived data cache was the problem. He knew how to add an API key in Xcode's scheme environment variables - not obvious if you've never touched Xcode. He understood, without being told, that EventKit is Apple's native calendar framework and why it matters for iOS.
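That scheme trick is worth knowing if you've never used Xcode: set the key under Edit Scheme > Run > Arguments > Environment Variables, then read it at runtime. A two-line sketch, assuming a variable named OPENAI_API_KEY (my placeholder name, not necessarily the one used in the video):

```swift
// Reads a value set in Edit Scheme > Run > Arguments > Environment Variables.
// Scheme environment variables only exist when launching from Xcode, so a
// shipped build needs a different mechanism (Keychain, remote config, etc.).
let apiKey = ProcessInfo.processInfo.environment["OPENAI_API_KEY"] ?? ""
```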
When Riley's calendar prompt ran, untangling the render errors and sync issues ate a full prompt. Vishall got calendar working with less friction - not because Claude Code is smarter than Vibe Code, but because Vishall could interpret errors faster and prompt more precisely.
Actually, that's probably the cleanest description of what experience buys you in an AI coding world: it compresses the diagnosis loop. The AI writes the code. Your job is to read what went wrong, understand it, and craft the next prompt without wasting it.
Or maybe the job is to avoid wrong turns entirely, like Riley did by deferring calendar. Both approaches worked. They just reflect different mental models.
The Apps at the End
Vishall's app - "Serial," riffing on the Granola clone theme - ended up clean and professional. Solid calendar integration with a today-view and a peek at the coming week. Multiple recordings per meeting. Folders that saved correctly after one dedicated fix prompt. The kind of thing you'd open in a business meeting without second-guessing yourself.
Riley's - "Oatmeal" - had a gradient animation on the recording screen that was genuinely more fun. Colorful. The kind of thing you'd demo at a hackathon and someone says "oh, this is actually nice." Calendar integration had some edge-case render errors, but the core loop worked.
Neither app came close to Granola's actual product. Both were remarkable for five prompts.
The honest read: Vishall got more features working reliably. Riley's felt more like something a consumer might enjoy. The video ended with a vote. The comments will decide.
What the Vibe Coding Debate Gets Wrong
Here's what the vibe coding conversation usually misses: this wasn't really a test of vibe coding versus traditional development. It was a test of two different prompt strategies, using two different AI tools, in two different frameworks.
Vishall's edge wasn't his iOS knowledge per se. It was that his iOS knowledge made him a better prompter for this specific domain. He could describe errors precisely. He knew which framework names to drop. He could tell when something was good enough to ship and when it needed a fix.
That's not a transferable advantage. Put Vishall in a React Native project, or a backend Rust codebase, and his iOS expertise doesn't follow him. The AI does roughly the same job regardless of who's typing.
So maybe the real question isn't whether vibe coders can beat experienced developers. Maybe it's: what does experience actually mean when the machine writes the code?
Neither Vishall nor Riley had a clean answer at the end of the video. The comments section probably won't settle it either.
But watching two people ship working apps in five prompts - one with a decade of iOS experience, one with none - makes the question feel a lot more urgent than it did a year ago.
This article was originally published by DEV Community and written by Visesh.