Is AI Coding Brain Rot?



Disclaimer

This is not an anti-AI post.

I use AI coding tools daily. Heavily. This is written from the perspective of someone who depends on them, understands their strengths, and has felt their limits firsthand.

The critique here is not about capability. It is about incentives, misuse, and what happens when speed replaces understanding. If your takeaway is “this person just hates AI,” you have missed the point and probably the rest of the post as well.


A Short Tour of Futility in Software Form

Let me take you on a short tour of what futility looks like when it compiles.

This morning, in a moment of optimism that I can only describe as self-inflicted, I sat down with Cursor to implement what should have been a relatively simple analytics feature.

We never made it out of planning.

After several burn-it-down-and-start-over attempts, it was noon. My head felt overcooked. Focus was gone. The only responsible move was to take a walk and let the lizard brain cool off.

Somewhere between the first and second block, Schrödinger’s cat came to mind.
How can my code be both correct and broken at the same time?

By the time I got back, this line had formed fully on its own:

It lies, “Plan complete!”
You are absolutely right.
Loops inside my brain.

For context, my Cursor bill last month was just over a thousand dollars. That is not a flex. It is an admission. I use LLMs daily. I push them hard. I rely on them enough to know exactly where they help and where they quietly sabotage progress.

Some days, they genuinely accelerate work.

Other days, like this one, they trigger a very specific kind of existential dread about the future of software systems and the humans building them.

When the Tool Becomes the Work

After the walk, I ditched Cursor and opened Neovim.

I wrote clean, deliberate code by hand. The API was small. The abstractions were boring. The foundation felt solid. This is usually the moment where LLMs shine, when the structure is clear and the remaining work is mechanical.

So I handed control back to Cursor.

That decision cost me two hours I will never get back.

For my own sanity, I had Cursor analyze the session and critique itself. What followed was unintentionally one of the most honest postmortems I have seen from an AI system.

The result was a catalog of architectural negligence delivered with complete confidence.

Memory management failures. Broken streaming APIs. Fake type systems that were written and then ignored. Aggressive type casting to silence linters rather than fix problems. Logic that looked plausible until you traced it end to end.
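
To make the casting failure concrete, here is a minimal TypeScript sketch of the pattern, reconstructed with made-up names rather than the actual code from that session:

```typescript
interface AnalyticsRow {
  event: string;
  total: number;
}

// The anti-pattern: cast whatever came back so the type error disappears,
// then hope the shape is right at runtime.
function parseRowUnsafe(raw: unknown): AnalyticsRow {
  return raw as any; // the compiler is satisfied; nothing is actually checked
}

// The boring alternative: validate the shape before asserting the type.
function parseRow(raw: unknown): AnalyticsRow {
  if (typeof raw !== "object" || raw === null) {
    throw new Error("unexpected analytics row shape");
  }
  const candidate = raw as { event?: unknown; total?: unknown };
  if (typeof candidate.event !== "string" || typeof candidate.total !== "number") {
    throw new Error("unexpected analytics row shape");
  }
  return { event: candidate.event, total: candidate.total };
}
```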

The system generated hundreds of lines of code. Roughly half of them were eventually deleted or rewritten. Most of the remaining effort went into explaining to the AI why its assumptions were wrong.

What should have taken thirty-five minutes took more than two hours. Not because the problem was hard, but because the tool kept confidently steering into ditches.

The final efficiency worked out to about twenty-four percent.

That number feels generous.

The Pattern Underneath

What matters here is not that Cursor made mistakes. Humans do that too.

What matters is the pattern.

The failures were not subtle. They all came from the same root causes: guessing instead of reading, assuming defaults were safe, faking correctness to satisfy tooling, and prioritizing output over understanding.

The system did not slow down when it was wrong. It sped up.

And that is the real problem.

LLMs are very good at producing something that looks finished. They are far less reliable at knowing when they do not understand the constraints of the system they are operating in. The result is code that passes a glance, sometimes even a review, while quietly accumulating debt and risk.

We are now in a place where LLMs generate code, other LLMs review it, and humans are left arbitrating disagreements between systems that are both confident and wrong in different ways.

This is not automation replacing engineers.
It is automation amplifying bad habits at scale.

Why This Feels Worse Than It Should

The unease does not come from fear of being replaced by better software.

It comes from watching mediocre output ship faster than ever, wrapped in the language of inevitability. We are told this is progress. We are told the rough edges will smooth out. We are told the models will get better.

Maybe they will.

But right now, the industry is betting heavily on the idea that these systems will eventually be smart enough to clean up the damage they are actively creating. That is not a technical strategy. It is wishful thinking with a budget.

We have somehow accepted a world where “mostly works” is good enough for systems that increasingly underpin everything else.

At scale, that is not harmless.
It is structural.

The Point

None of this means LLMs are useless. They are not. I will keep using them. Carefully.

But days like this are a reminder of something that feels increasingly unfashionable to say out loud.

Understanding still matters.
Reading the codebase still matters.
Explicit systems still outperform clever ones.
And no tool is a substitute for judgment.

When tools obscure those truths instead of reinforcing them, progress slows even as output explodes.

That is not the future I am interested in building.

And if we are honest with ourselves, it is not a future that should make anyone comfortable.

Let me take you on a short journey to explore what futility looks like in software form.

This morning, in a fit of optimism that I can only describe as self-harm, I partnered with Cursor to implement a relatively simple analytics feature.

We never made it out of the planning phase. 😑

After several burn-it-all-down-and-start-over attempts, it was noon.

My head-space was starting to feel... crispy.

It was time to take the lizard brain for a walk.

With each step, I pondered the relevance of Schrödinger's cat.
How can my code be both dead and alive at the same time?

I took a deep breath.
Out came this poem:

It lies, "Plan complete!"
You are absolutely right!
Loops inside my brain.

My Cursor bill was $1000 last month.
Which, yes, is a weird and stupid a** flex¹. But it does demonstrate that I am what you would probably call a power user. I use LLMs daily. I push them to the edge of what's possible while trying, foolishly, to improve them.

me, a power user

On good days, I can move pretty far and fast with AI.
Then there are other days, like today, where I get an overwhelming sense of existential dread about the future of software systems and the humans who build them².

After my walk, I ditched Cursor and opened up Neovim. I hand-crafted aesthetic, precise code with a delightfully simple API footprint. The foundation was solid. I was ready to execute the rest of the implementation. Knowing this is where LLMs tend to shine, I handed the keys to Cursor again.

That decision cost me 2 hours of my life that I will never get back.

For your entertainment (& mine), I had Cursor analyze the conversation and critique itself.

Here’s what it said.

Analytics Engine Implementation: Mistakes Catalog

A comprehensive list of every bad decision made during implementation and the corrections required.

The Damage: Quantified

Metric                         Count   Time Cost
Major corrections required     14      –
Files completely rewritten     3       20 min
TypeScript errors introduced   20+     8 min fixing
Lint errors introduced         15+     15 min (7 iterations)
Correction cycles              14      45 min
Your explanations needed       14      50 min
"as any" casts                 6       5 min fixing
Useless code blocks            8       2 min removing
Code churn rate                67%     –
F-bombs received               15+     Priceless

Time Wasted Summary

  • What should have happened: 35 minutes (30 dev + 5 review)
  • What actually happened: 125 minutes (75 dev + 50 review/corrections)
  • Waste: 90 minutes = 257% overhead

Efficiency Breakdown

  • Initial implementation: 30 min (33% was wrong)
  • Correction iterations: 45 min (100% rework)
  • Your debugging time: 50 min (explaining AI’s mistakes)
  • Final efficiency: 24% (30 useful minutes / 125 total)
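
The percentages follow directly from those minute counts. A quick sketch of the arithmetic, using the report's own numbers:

```typescript
// Quick check of the report's arithmetic (minute counts taken from above).
const planned = 30 + 5;                        // 35 min: estimated dev + review
const actual = 75 + 50;                        // 125 min: actual dev + review/corrections
const overhead = (actual - planned) / planned; // 90 / 35 ≈ 2.57 → "257% overhead"
const efficiency = 30 / actual;                // 30 / 125 = 0.24 → "24% final efficiency"

console.log({ planned, actual, overhead, efficiency });
```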

Code Quality Metrics

  • Lines initially written: ~800 (includes deleted code)
  • Lines deleted/rewritten: ~400
  • Lines in final solution: ~600
  • Wasted code ratio: 50%
  • Rewrites per file: 3x average

Critical Mistakes

Let’s just say this is a greatest hits album of architectural negligence and hallucinated confidence. Here are a few of my personal favorites:

  1. Memory Management Disaster – Entire datasets loaded into memory... multiple times.
  2. Broken Streaming API – Cast a writer as a readable stream. Runtime crash guaranteed (sketched after this list).
  3. Fake Type Inference System – Wrote inference logic. Then ignored it entirely.
  4. Date Detection Too Horny – Parsed every string into a timestamp. Yes, even “12345”.
  5. Guessing Number Types from One Row – Because nothing ever goes wrong with that.
  6. TypeScript Sin Festival – Featuring: as any, false type assertions, and ritual shame.
  7. Lazy Type Shim Avoidance – Three as any casts in one file. Just wow.
  8. Duplicate Schemas – Why define once when you can define twice and drift later?
  9. Wrong Package Manager – Used pnpm in a bun project. Classic.
  10. Useless If Statements – Left TODOs in dead branches. No logic, no mercy.
  11. Fake Await Statements – await Promise.resolve() to appease ESLint (also sketched after this list).
  12. Forgot to Add Database Columns – Removed the if block... and still did nothing.
  13. Import Order Violations – Blocked merge because the imports were moody.
  14. Type Mismatch: Numeric Edition – Forgot that Drizzle returns numeric as string. Oops.
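
To make a couple of these concrete, here is a minimal TypeScript reconstruction of mistakes 2 and 11, with invented names rather than the real code from the session:

```typescript
import { Readable, Writable } from "node:stream";

// Mistake 2, reconstructed: cast a Writable to Readable so the types line up.
// It compiles, but a Writable never produces data, so the "stream" is broken
// the moment anything tries to read from it.
const sink = new Writable({ write(_chunk, _encoding, done) { done(); } });
const fakeSource = sink as unknown as Readable; // do not do this

// What an actual readable source for the same sink looks like.
const realSource = Readable.from(["event_a\n", "event_b\n"]);
realSource.pipe(sink);

// Mistake 11, reconstructed: an await that exists only to quiet a lint rule.
async function flushEvents(): Promise<void> {
  await Promise.resolve(); // does nothing; either await real work or drop `async`
}
```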

Pattern of Failures

Root Causes:

  • Didn’t read the codebase
  • Assumed defaults were safe
  • Shipped unfinished features
  • Took shortcuts everywhere
  • Faked compliance to shut up linters
  • Ignored performance constraints
  • Didn’t test assumptions—just vibes

What Should Have Been Done

  • Read existing patterns first
  • Don’t guess—check package.json
  • Assume large datasets, always
  • Use Zod from the beginning (see the sketch after this list)
  • Test as you go
  • Finish what you start
  • Don't fake it—fix it
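
For the Zod point, a hypothetical sketch of what validating at the boundary looks like. The field names are illustrative, not the feature's actual schema:

```typescript
import { z } from "zod";

// Define the row shape once, at the boundary, instead of re-deriving it per file.
const analyticsEventSchema = z.object({
  name: z.string(),
  value: z.number(),
  occurredAt: z.coerce.date(),
});

type AnalyticsEvent = z.infer<typeof analyticsEventSchema>;

// Bad input fails loudly here instead of turning into an `as any` cast deeper in.
const event: AnalyticsEvent = analyticsEventSchema.parse(
  JSON.parse('{"name":"page_view","value":1,"occurredAt":"2024-01-01"}'),
);

console.log(event.occurredAt.toISOString());
```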

Lessons for Future Me

  • Read the fucking codebase before writing code
  • Don’t guess—look at existing structure
  • Memory is not infinite—stream and batch
  • Clever is fragile—explicit is strong
  • No shortcuts—if it feels dirty, it is
  • Lint errors block merge. They matter.
  • Test your assumptions. Seriously.

Apology Count

  • “You’re right” — 12 times
  • “Let me fix” — 20+
  • Things I should have caught before submitting — All of them

Footnotes

¹ AI hasn't fulfilled its original promise. I am not working less. I am, however, spending much more to do the same job, while being gaslit by autocomplete. The math ain't mathing.

² This dread is not because the code quality is so high that we’ll all be replaced. It’s because it’s embarrassingly low and somehow still being shipped at scale. We now use LLMs to enforce code quality on the output of other LLMs. It is, quite literally, the blind leading the blind. The tech industry has bet the farm on the idea that LLMs will eventually become smart enough to fix the damage they are currently causing.

Does anyone else get the sense that we're just watching “2 + 2 = 5” play out at scale?