AI Doesn't Write Spaghetti Code. You Do.
There’s a type of LinkedIn post I’ve been seeing a lot lately. It goes something like this: “AI-generated code is garbage. I spent three days refactoring what Copilot wrote. AI slop is killing codebases. Real developers write real code.”
Every time I read one, I have the same reactions.
First: okay, doomer.
Second: you’re doing it wrong.
Third: if you can’t tell an AI what good code looks like, I genuinely wonder if you can tell your developers, either.
And look. The doomers aren’t entirely wrong about the outcome. There are companies right now shipping unreviewed, unguided AI-generated code into production at scale. No standards. No guardrails. No architectural direction. They are absolutely mortgaging their future on a codebase that will cost them dearly to unwind. That’s real, and it’s worth saying clearly.
But here’s what they’re getting wrong: that’s not what AI does to you. That’s what you’re choosing to do with AI. The shoddy codebase isn’t the tool’s fault. It’s the absence of engineering leadership.
AI Writes Like a Junior Developer. That’s Not an Insult. It’s a Diagnosis.
Here’s what AI actually is, when we’re honest about it: a perpetual junior developer.
The mechanical capability is exceptional. Fast. Consistent. Usually works the first time. It doesn’t get tired, doesn’t have a bad day, doesn’t push sloppy code because it’s 4:55 on a Friday.
But the internalized judgment? Zero. No native CLEAN. No native SOLID. No native DRY. No architectural instincts built from years of code reviews and hard lessons.
The principles that senior engineers apply automatically, the ones so internalized they shape how you think before you write a single line? AI doesn’t have those. Not natively.
Because AI didn’t go through the maturation process a human engineer goes through: five years of getting code-reviewed, getting mentored, reading the books, making the mistakes, and gradually internalizing what good looks like. That process is what produces a senior engineer. AI skipped it. It has the output speed of a senior and the internalized standards of someone on day one.
So Whose Fault Is It?
When AI writes spaghetti code for your team, here’s what actually happened: you handed a capable but uncorrected junior developer a keyboard, gave them zero standards documentation, skipped code reviews, and were surprised by the output.
That’s not an AI problem. That’s a management problem.
The fix for a junior developer who writes sloppy code is the same fix that’s always worked: make the standards explicit. Write them down. Review work against them. Build feedback loops. Give the developer a clear picture of what good looks like.
The fix for AI is identical. You put the standards in the instructions file. CLEAN code principles. SOLID architecture patterns. DRY as a hard rule. Your specific conventions: naming, layering, dependency injection style, whatever your codebase demands. You write it down and you make it part of every session.
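To make that concrete, here is a minimal sketch of what such an instructions file might look like. The filename and the specific rules are illustrative, not prescriptive: use whatever file your tooling actually reads, and swap in the conventions your codebase demands.

```markdown
<!-- .github/copilot-instructions.md (hypothetical example; adapt the path and rules to your stack) -->
# Coding Standards — apply to every change in this repo

## Architecture
- Follow SOLID: one responsibility per class; depend on abstractions, not concrete types.
- New features live in their own module. No cross-layer shortcuts.

## Conventions
- Naming: PascalCase for types, camelCase for locals.
- Dependency injection: constructor injection only. No service locators.

## Hard rules
- DRY: if the same logic appears twice, extract it before opening the PR.
- Keep methods short and focused; split anything that does two things.
- Every public method gets a unit test in the matching test project.
```

The point isn’t these particular rules. The point is that they’re written down, versioned with the code, and loaded into every session, so the agent starts each task with your definition of good instead of its own.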
If you haven’t done that, the spaghetti code is not AI’s fault. It’s yours.
Here’s the Question That Cuts Deepest
Writing an AI instructions file that produces consistent, clean output requires being able to articulate what good code looks like. Specifically. Precisely. In writing.
If you can’t do that for the AI, I want you to think about what that means for your team.
I’ve worked with a lot of engineering teams. A significant number of them have “standards” that live entirely in one senior developer’s head or don’t exist at all. Unwritten, uncommunicated, enforced only through code review comments that say “this isn’t how we do things here.” Junior developers on these teams struggle. Onboarding takes forever. Code quality is inconsistent.
Those teams were failing at standards before AI showed up. AI just made the gap visible and put it on LinkedIn.
Here Are My Receipts
I have over 400,000 lines of AI-generated production code in active use. The architecture is consistent. Code reviews are fast because the patterns are predictable. New features land where they’re supposed to land. My team uses AI daily to enhance existing products and fix issues that are the direct result of a lack of good standards. They ship small, easily reviewable PRs with clean code, fast.
That didn’t happen by accident. It happened because before any agent wrote a single line, I told it exactly what good looked like. Architecture standards. Coding conventions. SOLID principles as non-negotiables. A code map so the agent understood the structure it was working within.
I’ve published the framework. The agentic coding standards repo shows the full setup. NuSpec.AI extends this for NuGet packages specifically, giving agents the intelligence they need to work with your dependencies correctly.
These aren’t magic. They’re exactly what a good engineering manager does with a new developer. Make the expectations explicit. Document the standards. Build the feedback loops. Review the output.
The Doomerism Isn’t About AI
It’s about teams that never built a culture of explicit standards, and are now surprised that a tool without internalized judgment produced output without quality.
AI is not going to fix a broken engineering culture. Neither is any other tool.
But if you build the right foundation (documented standards, consistent review, explicit coaching), AI will execute to those standards faster and more consistently than any junior developer you’ve ever hired.
The question was never “can AI write good code?”
The question was always “do you know what good code looks like well enough to define it?”
If the answer is yes, AI is the most powerful tool you’ve ever had.
If the answer is no, that’s the conversation worth having. And it has nothing to do with AI.
Coach's Playbook
AI workflows, team systems, and engineering leadership. Practical. Actionable. Weekly. Get it in your inbox — free.
Subscribe to Coach's Playbook →