Hey,

Welcome to issue #1 of Vibe Shipped.

Here's the deal: I'm a Japanese student. I'm not a "10x engineer." I don't work at a FAANG. I have zero VC funding and exactly one collaborator (myself, at 2 AM).

But over the last 90 days, I shipped this:

tabenavi.jp — a Japanese restaurant nutrition tracking app: fully responsive web + native iOS via Capacitor, a real-time PFC (protein/fat/carb) database for 33+ chains, 45 SEO-optimized guide articles, an account deletion flow for App Store compliance, and the entire submission package ready for Apple review.

I wrote about 13% of the code by hand.

The other 87% came from Cursor and Claude Code. Some from v0 when I was too tired to think about Tailwind.

This isn't a brag. It's the opposite. I want to show you the gap between demos that look magical on Twitter and actually shipping something users can break.

Because that gap is where 90% of vibe coders quit.

Let's get into the 5 things nobody told me.

1. AI writes the code. You write the boundaries.

Every YouTube tutorial shows the happy path: "Build me a TODO app" → 30 seconds later, beautiful Tailwind interface, working CRUD.

In production, the prompts that mattered weren't "build me X". They were:

  • "Here's our existing data model. Don't break the foreign key on user_id."

  • "We use Server Components. Don't add 'use client' unless this needs interactivity."

  • "Match the styling conventions in src/components/guide/. Don't introduce new utility classes."

The skill isn't prompting harder. It's defining smaller — feeding the AI just enough context that it can't drift, while keeping the surface area you have to review small.

I now think of every Claude Code session as a negotiation about what NOT to touch.
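One way to make that negotiation repeatable is to keep the boundaries in code instead of retyping them. A minimal sketch (the constraint strings and helper are hypothetical, not my actual setup):

```typescript
// Sketch: a reusable "boundaries" preamble pasted at the top of every
// AI session, so the constraints travel with the task (hypothetical details).
const BOUNDARIES = [
  "Here is the existing data model. Do not break the foreign key on user_id.",
  "We use Server Components. Do not add 'use client' unless this needs interactivity.",
  "Match the styling conventions in src/components/guide/. No new utility classes.",
].join("\n- ");

function buildPrompt(task: string): string {
  // Constraints first, task last: the model reads what NOT to touch
  // before it reads what to build.
  return `Constraints:\n- ${BOUNDARIES}\n\nTask: ${task}`;
}
```

Same idea as a `.cursorrules` file, just explicit per session.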

2. The killer is integration, not generation.

Cursor will write you a beautiful Stripe checkout in 15 seconds. It will not, however, tell you that:

  • Your webhook needs to be idempotent (Stripe retries)

  • iOS App Store requires you to use IAP for digital goods, not Stripe (Apple takes 30%)

  • Your Capacitor wrapper needs server.url configured and the App Transport Security exception in Info.plist

I lost an entire afternoon to a Capacitor + WKWebView + cookie issue that no LLM could debug because the bug existed at the intersection of three systems.

Lesson: AI is great at generating units. The senior skill is now connecting them. Spend your reading time on integration docs (Stripe webhooks, Apple HIG, Capacitor lifecycle), not on syntax.
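The idempotency bullet above is the kind of thing no generator volunteers. A minimal sketch of the pattern, with an in-memory store standing in for a database (handler and type names are hypothetical):

```typescript
// Stripe retries delivery until it receives a 2xx, so the same event
// can arrive more than once. Re-processing must be a no-op.
type StripeEvent = { id: string; type: string };

const processedEventIds = new Set<string>(); // use a DB table in production

function handleWebhook(event: StripeEvent): "processed" | "duplicate" {
  // Key idea: check-and-record keyed on event.id BEFORE any side effects.
  if (processedEventIds.has(event.id)) {
    return "duplicate"; // already handled; ack with 200 and do nothing
  }
  processedEventIds.add(event.id);
  // ... fulfill the order, grant access, send the email, etc.
  return "processed";
}
```

The second delivery of the same event id falls through to "duplicate" and causes no double fulfillment.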

3. "Working code" ≠ "shippable code"

Claude wrote me an account deletion flow in one shot. It compiled. It worked in the demo.

It would have failed Apple's review immediately.

Why? Because Apple Guideline 5.1.1(v) requires:

  • Deletion must be initiated in-app (not by email request)

  • Confirmation step required

  • Server-side data deletion within 30 days

  • Account deletion does NOT cancel active App Store subscriptions (you must explicitly tell the user)

None of that was in the AI-generated code. None of it would have been caught by tests. It would have been caught by a human reviewer at Apple, costing me a 1-2 week resubmission cycle.

The AI generated correct code for the wrong specification. Specifying correctly is the new bottleneck.
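For contrast, here is what "correct for the right specification" roughly looks like, as a sketch that encodes the four requirements above (in-memory store and function names are hypothetical, not my production code):

```typescript
// Sketch of a deletion flow shaped by Apple 5.1.1(v): in-app initiation,
// explicit confirmation, server-side purge within 30 days, and an explicit
// notice that deletion does NOT cancel App Store subscriptions.
type Account = { id: string; deleteAfter?: Date };

const accounts = new Map<string, Account>();
const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;

function requestDeletion(userId: string, confirmed: boolean): string {
  const account = accounts.get(userId);
  if (!account) throw new Error("unknown account");
  if (!confirmed) {
    // Confirmation step is required; never delete on the first tap.
    return "Please confirm: this permanently deletes your data.";
  }
  // Schedule the server-side purge inside the 30-day window.
  account.deleteAfter = new Date(Date.now() + THIRTY_DAYS_MS);
  // The warning Apple expects you to surface explicitly:
  return "Account scheduled for deletion. Note: this does NOT cancel " +
    "active App Store subscriptions; manage them in iOS Settings.";
}
```

None of those branches exist unless the spec put them there first.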

4. The model is not the product. The eval is.

Here's a thing I learned the hard way:

I built tabenavi's nutrition database by feeding chain restaurant menus into Claude and asking it to estimate PFC values. Beautiful. Fast. Plausible-looking JSON.

Then I cross-referenced 50 entries against the actual Sukiya official site.

13 of them were wrong. Not catastrophically — the protein on a "牛丼ライト" (Gyudon Light) was off by 4g, the sodium on an 親子丼 (oyakodon) was 100mg high, etc. But for an app marketed on accuracy, 26% wrong is a brand-killing number.

Now my entire nutrition pipeline runs through a manual diff against the official chain PDF before anything ships. The model is the draft. The eval is the product.

If you're shipping anything where correctness matters (medical, financial, nutrition, legal), build the verification harness before the generator. Otherwise you're shipping plausible-sounding lies.
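A verification harness doesn't have to be fancy. A minimal sketch of the diff step (field names and tolerances are illustrative, not my actual thresholds):

```typescript
// Compares AI-generated nutrition entries against official chain values
// and flags anything outside tolerance before it ships.
type Nutrition = { protein: number; fat: number; carbs: number; sodium: number };

// Per-field tolerance: grams for PFC, milligrams for sodium (illustrative).
const TOLERANCE: Nutrition = { protein: 1, fat: 1, carbs: 1, sodium: 50 };

function findMismatches(
  generated: Record<string, Nutrition>,
  official: Record<string, Nutrition>,
): string[] {
  const bad: string[] = [];
  for (const [item, gen] of Object.entries(generated)) {
    const ref = official[item];
    if (!ref) { bad.push(`${item}: missing from official data`); continue; }
    for (const key of ["protein", "fat", "carbs", "sodium"] as const) {
      if (Math.abs(gen[key] - ref[key]) > TOLERANCE[key]) {
        bad.push(`${item}: ${key} ${gen[key]} vs official ${ref[key]}`);
      }
    }
  }
  return bad; // non-empty => block the release
}
```

Wire the return value into CI and a 4g protein drift fails the build instead of reaching users.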

5. Vibe coding has a hidden tax: documentation rot

I'm working on tabenavi alone. Every feature I shipped, Cursor wrote me a comment block. Every refactor, Claude updated the docstring.

Three months later, I have 200+ comments and roughly half are wrong because the AI updated the code but not the related comment elsewhere in the file.

Fix: stop letting AI write comments by default. The code is the source of truth. Comments only when the why is non-obvious. Less noise, less drift.

The actual stack (with affiliate links — disclosure at the bottom)

If you want to vibe-code something real (not demos), here's exactly what I use today:

| Layer | Tool | Why I picked it |
| --- | --- | --- |
| Code editor + AI | Cursor | Composer + multi-file edit is irreplaceable |
| Long-context refactors | Claude Code (via Anthropic API) | Best for "refactor the entire auth flow" |
| UI scaffolding | v0 | Generates Tailwind components matching your design system |
| Hosting + deploy | | Zero-config Next.js, preview URLs |
| Newsletter | | What you're reading right now is built on this |

For each of these I tried 2-3 alternatives, and each time I came back. None are sponsored; I picked them because they're what I actually opened today.

What's coming next week

Issue #2 will be a tactical breakdown:

"Cursor Composer vs Claude Code for refactors: I ran the same migration in both. Here's which one shipped."

Real diffs. Real timings. Real bugs.

If that sounds useful, the best thing you can do right now is:

  1. Reply to this email and tell me what you're shipping. I read every one.

  2. Forward this to one developer friend who's also vibe-coding their way to production.

  3. Try Cursor if you haven't — link here. The 14-day Pro trial is the only way to know if it's for you.

—Hiro
