AI Coding Is Brain Rot


Let me take you on a short journey to explore what futility looks like in software form.

This morning, in a lapse of optimism that I can only describe as self-harm, I partnered with Cursor to implement a relatively simple analytics feature.

We never made it out of the planning phase. 😑

After several "burn it all down and start over" attempts, it was noon.

My head-space was starting to feel... crispy.

It was time to take the lizard brain for a walk.

With each step, I pondered the relevance of Schrödinger's cat.
How can my code be both dead and alive at the same time?

I took a deep breath.
Out came this poem:

It lies, "Plan complete!"
You are absolutely right!
Loops inside my brain.

My Cursor bill was $1000 last month.
Which, yes, is a weird and stupid a** flex¹. But it does demonstrate that I am what you would probably call a power user. I use LLMs daily. I push them to the edge of what's possible while trying, foolishly, to improve them.

me, a power user

On good days, I can move pretty far and fast with AI.
Then there are other days, like today, where I get an overwhelming sense of existential dread about the future of software systems and the humans who build them².

After my walk, I ditched Cursor and opened up neovim. I hand crafted aesthetic, precise code with a delightfully simple API footprint. The foundation was solid. I was ready to execute the rest of the implementation. Knowing this is where LLMs tend to shine, I handed the keys to Cursor again.

That decision cost me 2 hours of my life that I will never get back.

For your entertainment (& mine), I had Cursor analyze the conversation and critique itself.

Here’s what it said.

Analytics Engine Implementation: Mistakes Catalog

A comprehensive list of every bad decision made during implementation and the corrections required.

The Damage: Quantified

| Metric | Count | Time Cost |
|---|---|---|
| Major corrections required | 14 | – |
| Files completely rewritten | 3 | 20 min |
| TypeScript errors introduced | 20+ | 8 min fixing |
| Lint errors introduced | 15+ | 15 min (7 iterations) |
| Correction cycles | 14 | 45 min |
| Your explanations needed | 14 | 50 min |
| "as any" casts | 6 | 5 min fixing |
| Useless code blocks | 8 | 2 min removing |
| Code churn rate | 67% | – |
| F-bombs received | 15+ | Priceless |

Time Wasted Summary

  • What should have happened: 35 minutes (30 dev + 5 review)
  • What actually happened: 125 minutes (75 dev + 50 review/corrections)
  • Waste: 90 minutes = 257% overhead

Efficiency Breakdown

  • Initial implementation: 30 min (33% was wrong)
  • Correction iterations: 45 min (100% rework)
  • Your debugging time: 50 min (explaining AI’s mistakes)
  • Final efficiency: 24% (30 useful minutes / 125 total)

Code Quality Metrics

  • Lines initially written: ~800 (includes deleted code)
  • Lines deleted/rewritten: ~400
  • Lines in final solution: ~600
  • Wasted code ratio: 50%
  • Rewrites per file: 3x average

Critical Mistakes

Let’s just say this is a greatest hits album of architectural negligence and hallucinated confidence. Here are a few of my personal favorites:

  1. Memory Management Disaster – Entire datasets loaded into memory... multiple times.
  2. Broken Streaming API – Cast a writer as a readable stream. Runtime crash guaranteed.
  3. Fake Type Inference System – Wrote inference logic. Then ignored it entirely.
  4. Date Detection Too Horny – Parsed every string into a timestamp. Yes, even “12345”.
  5. Guessing Number Types from One Row – Because nothing ever goes wrong with that.
  6. TypeScript Sin Festival – Featuring: as any, false type assertions, and ritual shame.
  7. Lazy Type Shim Avoidance – Three as any casts in one file. Just wow.
  8. Duplicate Schemas – Why define once when you can define twice and drift later?
  9. Wrong Package Manager – Used pnpm in a bun project. Classic.
  10. Useless If Statements – Left TODOs in dead branches. No logic, no mercy.
  11. Fake Await Statements – await Promise.resolve() to appease ESLint.
  12. Forgot to Add Database Columns – Removed the if block... and still did nothing.
  13. Import Order Violations – Blocked merge because the imports were moody.
  14. Type Mismatch: Numeric Edition – Forgot that Drizzle returns numeric as string. Oops.

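Mistake #14 is worth a closer look, since it bites nearly everyone using Drizzle with Postgres: `numeric` columns come back as strings (to avoid floating-point precision loss), so arithmetic on them silently misbehaves unless you convert at the boundary. A minimal sketch of the fix, with a hypothetical row shape standing in for an actual query result:

```typescript
// Hypothetical row shape: a Postgres `numeric` column arrives as a string,
// because JS numbers can't hold arbitrary-precision decimals safely.
type RevenueRow = { id: number; total: string };

const row: RevenueRow = { id: 1, total: "1234.56" };

// Wrong: `row.total * 2` won't compile, and an `as any` cast "fixes" the
// compiler while leaving the string in place at runtime.

// Right: convert explicitly at the boundary and validate the result.
const total = Number(row.total);
if (Number.isNaN(total)) {
  throw new Error(`expected a numeric string, got: ${row.total}`);
}

console.log(total * 2); // 2469.12 — real arithmetic on a real number
```

One conversion at the boundary, and everything downstream gets an honest `number` instead of a cast.
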
Pattern of Failures

Root Causes:

  • Didn’t read the codebase
  • Assumed defaults were safe
  • Shipped unfinished features
  • Took shortcuts everywhere
  • Faked compliance to shut up linters
  • Ignored performance constraints
  • Didn’t test assumptions—just vibes

What Should Have Been Done

  • Read existing patterns first
  • Don’t guess—check package.json
  • Assume large datasets, always
  • Use Zod from the beginning
  • Test as you go
  • Finish what you start
  • Don't fake it—fix it

Lessons for Future Me

  • Read the fucking codebase before writing code
  • Don’t guess—look at existing structure
  • Memory is not infinite—stream and batch
  • Clever is fragile—explicit is strong
  • No shortcuts—if it feels dirty, it is
  • Lint errors block merge. They matter.
  • Test your assumptions. Seriously.
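
"Memory is not infinite—stream and batch" deserves a concrete sketch. The pattern below walks a paged data source in fixed-size chunks via a generator; `fetchPage` is a hypothetical, synchronous stand-in for a paginated database query (a real one would be async):

```typescript
// Batching sketch: process rows in fixed-size chunks so peak memory is one
// batch, never the whole table. `fetchPage` is a hypothetical synchronous
// stand-in for a paginated DB query.
function* batches<T>(
  fetchPage: (offset: number, limit: number) => T[],
  batchSize = 500
): Generator<T[]> {
  for (let offset = 0; ; offset += batchSize) {
    const page = fetchPage(offset, batchSize);
    if (page.length === 0) return;
    yield page;
  }
}

// Usage with an in-memory stand-in for a 1200-row table:
const table = Array.from({ length: 1200 }, (_, i) => i);
let processed = 0;
for (const batch of batches((off, lim) => table.slice(off, off + lim))) {
  processed += batch.length; // peak memory: 500 rows, never 1200
}
console.log(processed); // 1200
```

The same shape works with `AsyncGenerator` and `for await` once the page fetch is a real query.
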

Apology Count

  • “You’re right” — 12 times
  • “Let me fix” — 20+
  • Things I should have caught before submitting — All of them

Footnotes

¹ AI hasn't fulfilled its original promise. I am not working less. I am, however, spending much more to do the same job, while being gaslit by autocomplete. The math ain't mathing.

² This dread is not because the code quality is so high that we’ll all be replaced. It’s because it’s embarrassingly low and somehow still being shipped at scale. We now use LLMs to enforce code quality on the output of other LLMs. It is, quite literally, the blind leading the blind. The tech industry has bet the farm on the idea that LLMs will eventually become smart enough to fix the damage they are currently causing.

Does anyone else get the sense that we're just watching “2 + 2 = 5” play out at scale?
