John Crickett on AI Coding and Understanding Code
A deeper look at John Crickett's viral argument that engineers rarely write most code, and why AI tools still demand strong review skills.
John Crickett, an engineering leader whose roles have spanned CTO, VP Engineering, and Head of Software Engineering, recently posted something that made me stop scrolling. He quoted a familiar objection: "You didn't write the code, so you don't truly understand it." Then he followed it with a blunt assessment: "But it doesn't hold up." And honestly, I think he is right.
John's point is simple, but it cuts through a lot of performative anxiety around AI-assisted coding. The claim goes like this: if an AI generates the code, you did not write it, so you cannot truly understand it, and therefore you should not ship it. John replies with the reality every working engineer already lives in: most of the code we touch was written by someone else anyway. The job has always been to understand, debug, and build on top of unfamiliar code.
"Most of the code you work with, you didn't write." - John Crickett
In this post, I want to expand on what John is really saying, why the argument against AI coding tools is often misframed, and what the better standard should be for teams that want speed without sacrificing correctness.
The real work: understanding code you did not author
John shared an estimate that will feel familiar to anyone who has moved between teams or inherited systems: in many codebases he worked on, less than 1 percent was his. That is not a confession. It is the norm.
Think about a typical week:
- You investigate a production bug in a service written years ago.
- You add a feature by threading behavior through three existing layers.
- You modify a data model without breaking migrations.
- You read a flaky test and figure out why it is failing only in CI.
In each case, authorship is not the gating factor. Comprehension is. A great engineer is not the person who only works in code they wrote. A great engineer can build a correct mental model of code that already exists.
This is why the phrase "you didn't write it" is not a serious critique on its own. If we applied it consistently, we would have to stop using:
- Open source libraries
- SDKs and platform code
- Internal shared services
- Code written by teammates
- Code written by previous generations of the company
And yet, modern software is built from exactly those ingredients.
Why the anti-AI argument feels persuasive, but misses the mark
The objection John quotes sounds reasonable because it points at a real risk: people can paste code they do not understand and ship defects at scale. That risk exists. It just is not unique to AI.
Before AI assistants, teams already had:
- Copy-paste programming from Stack Overflow
- Boilerplate generated by frameworks and CLIs
- Code written under deadline pressure and never revisited
- "Fixes" made by someone on-call at 2 a.m.
AI changes the speed and volume, not the category of problem. The right question is not "Did you write it?" The right question is "Can you justify it?" Can you explain what it does, how it fails, how it is tested, and how it fits the architecture?
The standard should be: you can explain it, test it, and maintain it.
Code review was always the skill. AI just puts it center stage.
John wrote: "Reading and reviewing code has always been a core skill. AI-assisted coding doesn't change that." I would go further: AI makes that skill more visible, because it removes the illusion that writing is the main thing.
In many teams, "strong engineer" becomes synonymous with "fast typist" or "ships lots of lines." But that is not how real leverage works. The leverage comes from:
- Asking the right questions
- Seeing edge cases early
- Understanding invariants and failure modes
- Designing interfaces that reduce coupling
- Writing tests that lock in behavior
AI can help produce a first draft quickly. It cannot guarantee that the draft matches your system's constraints, performance requirements, security posture, or operational realities. That gap is where engineering judgment lives.
A helpful mental model: AI is a junior engineer that never sleeps
If you treat an AI coding tool like a superpower, you will ship junk faster. If you treat it like an eager junior engineer, you will get value.
A junior engineer can:
- Draft a function
- Translate patterns you already use
- Suggest a refactor
- Generate tests you can review
But you still have to:
- Provide context and constraints
- Review the output critically
- Reject the parts that do not fit
- Take responsibility for the merged code
That is exactly John's argument, reframed: authorship is not the point. Ownership is.
Practical ways to use AI without losing understanding
If your team wants the productivity upside while staying honest about quality, here are concrete practices that align with John's message.
1) Require "explainability" before merge
If I cannot explain the code in plain language, I should not merge it. This can be informal, but it must be real. In review, ask:
- What is the input and output contract?
- What invariants must hold?
- What are the failure cases?
- What is the worst-case performance?
- What did we choose not to handle?
If the author cannot answer, the code is not ready. It does not matter if the author is a human or an AI assistant.
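One lightweight way to make those answers durable is to write them down where the code lives. Here is a minimal sketch of what "explainable before merge" can look like in practice, using a hypothetical `apply_discount` function (the function and its rules are illustrative, not from John's post): the docstring answers the review questions directly, and the code enforces the contract it states.

```python
from decimal import Decimal, ROUND_HALF_UP


def apply_discount(price: Decimal, percent: Decimal) -> Decimal:
    """Return the price after applying a percentage discount.

    Contract:
        Input:  price >= 0, 0 <= percent <= 100, both Decimal.
        Output: discounted price, rounded to 2 decimal places.
    Invariants:
        The result is never negative and never exceeds the input price.
    Failure cases:
        Raises ValueError for inputs outside the documented ranges.
    Chosen not to handle:
        Currency conversion and stacked discounts.
    """
    if price < 0 or not (Decimal(0) <= percent <= Decimal(100)):
        raise ValueError("price must be >= 0 and percent in [0, 100]")
    discounted = price * (Decimal(100) - percent) / Decimal(100)
    return discounted.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
```

Whether a human or an assistant drafted the body, the reviewer can now check the code against an explicit contract instead of guessing intent.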
2) Make tests the receipt, not an afterthought
AI can generate plausible code that is subtly wrong. Tests force precision.
A simple standard that works: if AI helped write the implementation, AI should also help propose tests, and a human should review both. Then run them, extend them, and add at least one test that reflects a real production edge case you have seen.
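As a concrete illustration of tests as the receipt, here is a small, self-contained sketch (the `chunk` helper is hypothetical, not from the post): the asserts pin down exactly the edge cases a plausible first draft tends to get subtly wrong.

```python
def chunk(items, size):
    """Split items into consecutive chunks of at most `size` elements."""
    if size <= 0:
        raise ValueError("size must be positive")
    return [items[i:i + size] for i in range(0, len(items), size)]


# The "receipt": tests that lock in behavior a reviewer actually checked.
def test_chunk():
    assert chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]  # trailing partial chunk
    assert chunk([], 3) == []                                   # empty input
    assert chunk([1, 2], 5) == [[1, 2]]                         # size larger than input
```

The point is not the helper itself: it is that each assert encodes a decision someone made and can defend, which is exactly the standard John describes.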
3) Constrain the solution space with patterns and linters
Most "AI wrote weird code" problems happen because the model is guessing your style and architecture.
You can reduce this by:
- Having clear internal patterns (service boundaries, error handling, logging)
- Using formatters and linters
- Keeping functions small and names explicit
- Providing a short "project conventions" file for the team
This is also good discipline even without AI.
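One way to make an internal pattern concrete enough for both teammates and AI tools to follow is to encode it in shared code rather than prose. Here is a minimal sketch of one such convention, a `Result` return type for service-layer calls (the names and the convention itself are illustrative assumptions, not something John prescribes):

```python
from dataclasses import dataclass
from typing import Generic, Optional, TypeVar

T = TypeVar("T")


@dataclass(frozen=True)
class Result(Generic[T]):
    """Shared return convention for service-layer calls.

    Every service function returns a Result instead of raising across
    the boundary, so callers (and any tool prompted with this file)
    handle failures in one consistent way.
    """
    value: Optional[T] = None
    error: Optional[str] = None

    @property
    def ok(self) -> bool:
        return self.error is None


def ok(value: T) -> Result[T]:
    return Result(value=value)


def err(message: str) -> Result[T]:
    return Result(error=message)
```

A model that sees this file stops guessing your error-handling style, and a reviewer can reject any generated code that bypasses it.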
4) Prefer small diffs over big drops
The easiest way to lose understanding is to accept a large chunk of code all at once. Instead:
- Ask the tool for a minimal change
- Integrate step by step
- Run tests and benchmarks at each step
- Keep reviewable boundaries
If a change cannot be reviewed, it cannot be trusted.
The uncomfortable truth John is pointing at
John's post pokes at a quiet reality: many people already ship code they do not deeply understand. They do it because the incentives reward speed, not comprehension. AI just makes that more obvious.
So when someone says, "You didn't write the code, so you don't truly understand it," it is worth responding with John's question: how much of your codebase did you actually write?
If the honest answer is "not much," then the debate shifts. The skill is not typing. The skill is building understanding quickly and validating behavior. The teams that win with AI will be the ones that:
- Invest in review quality
- Improve test coverage and reliability
- Document constraints and decisions
- Treat ownership as non-negotiable
A better principle than "you wrote it"
Here is the principle I take from John's post:
You do not need to have written the code to understand it, but you do need to be able to defend it.
That means you can ship AI-assisted code responsibly, just like you can ship code written by a previous employee, an open source maintainer, or a teammate in another time zone. The bar is the same: you own the behavior in production.
If your team adopts that bar, AI becomes a force multiplier, not a trust collapse.
This blog post expands on a viral LinkedIn post by John Crickett. View the original LinkedIn post →