
John Crickett on AI Code: Clarity Still Wins

AI in Software Engineering

A deeper take on John Crickett's viral post: why humans must own readability, code review standards, and mentoring in the AI era.

Tags: AI-generated code · code quality · software maintainability · code review · developer mentoring · LinkedIn content · viral posts · content strategy · social media marketing

John Crickett recently shared something that caught my attention: Martin Fowler's line, "Any fool can write code that a computer can understand. Good programmers write code that humans can understand." John added that as AI-generated code becomes normal, the quote feels "prophetic" - and I think he is right.

What made John Crickett's post resonate is that it does not argue about whether AI can code. It starts from the more realistic premise: AI will write a lot of the first draft. The real question becomes what we do next, and who is accountable when that draft turns into production software.

John framed it with three questions that matter more than ever when AI agents help build systems: Who owns clarity? Does the source matter? How do we mentor now? I want to expand on each of those, because they are not just philosophical. They change how teams should review code, how engineers should be evaluated, and how we keep systems safe at 3 AM.

The Fowler quote hits harder in the AI era

AI makes it cheap to produce code that "works" on the happy path. That is not the same as code that is easy to read, safe to change, and resilient under real-world pressure.

The cost curve has flipped:

  • Generating code is getting cheaper.
  • Understanding code is not getting cheaper.
  • Operating code in production is still expensive.

So if we let speed become the only metric, we will create a new class of legacy systems: not hand-written legacy, but machine-spun legacy. The code compiles, tests might pass, demos look good, but the system slowly becomes harder to reason about.

Key idea: AI accelerates output. Humans still pay the comprehension bill.

Who owns clarity? Humans do

John Crickett asked, "Who owns clarity?" and answered it plainly: an AI might write the first draft, but responsibility stays human. That is the heart of the issue.

Tools do not carry pager duty. People do. When something breaks at 3 AM, nobody wants to debug a tangle of clever abstractions, inconsistent naming, or magical behavior that only exists because the generator produced it that way.

Owning clarity means someone is explicitly responsible for making AI-produced code:

  • Readable: names, structure, and control flow communicate intent.
  • Maintainable: changes are localized, dependencies are obvious, complexity is bounded.
  • Safe: edge cases are handled, error behavior is explicit, security assumptions are documented.
  • Operable: logging, metrics, and failure modes are designed, not accidental.

In practice, I like to turn "own clarity" into a merge requirement: every PR must have a human-readable story.

A simple clarity checklist for AI-assisted code

Before approving, ask:

  1. "Can I explain what this code does without rereading it three times?"
  2. "If I delete this, what breaks?" (dependencies and contracts)
  3. "What are the failure modes and how are they surfaced?"
  4. "Where are the boundaries?" (modules, layers, ownership)
  5. "What is the smallest change I could make safely later?"

If the answer is "I am not sure," the code is not done yet, even if it passes tests.

Does the source matter? The bar is the bar

John Crickett wrote: "Does the source matter? In a PR, it shouldn't. Human or AI, the bar is the bar. If it doesn't clear it, don't merge it." That is exactly the stance teams need.

If reviewers lower standards because "the AI wrote it," they create two dangerous incentives:

  • The author stops thinking critically because the tool is treated as an authority.
  • Reviewers stop pushing for improvements because the output feels inevitable.

Neither is acceptable. A PR is not a provenance contest. It is a quality gate.

What should "the bar" include?

I would define the bar across five dimensions:

  1. Correctness: does it meet requirements, including edge cases?
  2. Clarity: will another engineer understand it quickly?
  3. Consistency: does it match existing conventions and architecture?
  4. Testability: are there appropriate tests, and are they meaningful?
  5. Risk: does it introduce security, performance, or operational hazards?

AI tends to be strongest at generating plausible structure and weakest at correctly capturing context: existing invariants, subtle business rules, and non-obvious performance constraints. That makes code review more important, not less.

A concrete example: the "looks fine" trap

Imagine an AI generates a caching layer with a default TTL, but your domain requires strict freshness for a subset of requests. The code might be "clean" and idiomatic, yet semantically wrong. Or it may add retries on network calls that accidentally amplify load during an incident.

A reviewer who focuses only on formatting and style will miss the real risk. Holding the same bar means reviewing intent, not just syntax.
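To make the trap concrete, here is a minimal sketch of the caching scenario above. The names (`SimpleCache`, the TTL parameters) are hypothetical, not from any real library; the point is that a blanket default TTL is "clean" code that can still be semantically wrong, and the fix is letting freshness-sensitive callers override it per entry.

```python
import time

class SimpleCache:
    """Hypothetical cache where every entry shares one default TTL.

    An AI-generated draft might stop at the default. The domain fix is the
    optional per-entry ttl, so strictly-fresh requests are not served stale
    data just because the blanket default says 5 minutes is fine.
    """

    def __init__(self, default_ttl=300.0):
        self._store = {}  # key -> (value, expires_at)
        self.default_ttl = default_ttl

    def set(self, key, value, ttl=None):
        # Callers with strict freshness requirements pass their own ttl
        # instead of inheriting the blanket default.
        effective_ttl = self.default_ttl if ttl is None else ttl
        self._store[key] = (value, time.monotonic() + effective_ttl)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # expired: treat as a miss
            return None
        return value
```

A reviewer checking only style would approve either version; only a reviewer who knows the freshness requirement notices that the default-TTL path is the wrong one for some requests.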

Mentoring now: AI is a co-writer, not a ghostwriter

John Crickett also asked, "How do we mentor now?" and said we should teach developers that AI is a co-writer, not a ghostwriter. I like that framing because it preserves agency.

If AI becomes a ghostwriter, engineers may produce more code while learning less. Over time, that creates a team that can ship, but cannot debug, design, or simplify. And the moment the tool is wrong, they cannot tell.

Mentoring in the AI era should focus on judgment:

  • Knowing what to ask for (good requirements and constraints)
  • Knowing what to reject (wrong assumptions, unnecessary complexity)
  • Knowing how to reshape output into a coherent system

Coaching techniques that work well

Here are practical ways to mentor with AI in the loop:

  1. Require a written rationale
    Ask authors to include a short PR note: what changed, why this approach, what alternatives were considered, and what risks remain.

  2. Make "reduce complexity" a deliverable
    When AI generates a lot of code, the next step is often deletion and consolidation. Reward the refactor that removes 30% of the generated scaffolding.

  3. Teach engineers to interrogate code
    During pairing, do "read aloud" reviews: explain each function's purpose and invariants. If it cannot be explained, it is not clear enough.

  4. Focus tests on behavior, not implementation
    AI can generate tests that mirror implementation details and still miss real scenarios. Mentor engineers to write tests from requirements and failure cases.

  5. Practice "prompt to spec" thinking
    Instead of treating prompts as magic spells, treat them as drafts of a specification: inputs, outputs, constraints, performance targets, and security boundaries.
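Point 4 above is easiest to see side by side. The sketch below uses a made-up function, `normalize_email`, purely for illustration: the tests are derived from the requirement ("trim whitespace, lowercase, reject input without an @") and from a failure case, not from the shape of the implementation.

```python
def normalize_email(raw):
    """Hypothetical function under test: trims whitespace, lowercases,
    and rejects anything that is not plausibly an email address."""
    email = raw.strip().lower()
    if "@" not in email:
        raise ValueError(f"not an email address: {raw!r}")
    return email

# Behavior-focused tests: each one maps to a requirement or a failure
# case, so they survive a rewrite of the implementation.
def test_normalizes_case_and_whitespace():
    assert normalize_email("  Alice@Example.COM ") == "alice@example.com"

def test_rejects_input_without_at_sign():
    try:
        normalize_email("not-an-address")
    except ValueError:
        pass  # expected: invalid input is reported, not silently passed
    else:
        raise AssertionError("expected ValueError for invalid input")
```

Implementation-mirroring tests (asserting that `strip` is called, or that a specific regex matches) would pass today and break on any refactor, while telling you nothing about whether the requirement is met.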

Value is not lines produced. It is problems solved and complexity removed.

Speed vs clarity: the trade you cannot afford

John Crickett ended with a bottom line I keep coming back to: "AI makes coding faster. Humans make software maintainable. Don't trade long-term clarity for short-term speed."

That trade shows up in subtle ways:

  • Shipping faster today can mean slower incident response tomorrow.
  • Skipping design work can mean repeated rewrites later.
  • Accepting unclear code can make onboarding painfully expensive.

The 3 AM test is the most honest one: can someone safely change this system while under pressure? If the answer is no, the organization has accumulated operational debt, even if the feature backlog looks great.

A practical playbook for AI-assisted teams

If you want to apply John Crickett's principles, here is a lightweight approach that does not require a process overhaul:

1) Declare ownership explicitly

In every PR, identify a human owner responsible for clarity and correctness. AI can assist, but it cannot be the accountable party.

2) Add a "readability budget"

Treat complexity like spending. If a change adds complexity, it must buy something: clearer boundaries, better performance, fewer bugs, or reduced future work.

3) Review for intent first

Start reviews with: "What is the behavior and why is it needed?" Then check whether the implementation matches that intent.

4) Normalize refactoring AI output

Make it culturally normal to say: "This is a good draft, now let's make it ours." That means consistent naming, fewer layers, and tighter APIs.

5) Measure outcomes, not output

If AI increases velocity but defect rates rise or on-call pain increases, you did not win. Track reliability, change failure rate, and lead time for safe changes.

Why John Crickett's post spread

A quick note on the communication side: this is also a strong example of LinkedIn content that earns engagement without hype.

John Crickett did three things that often drive viral posts in engineering circles:

  • Anchored the idea in a well-known quote.
  • Asked three sharp questions that invite discussion.
  • Landed a clear conclusion people can repeat: clarity over speed.

That is a simple content strategy: provide a memorable hook, a framework, and a principle that teams can adopt.

Closing thought

AI will keep getting better at producing code. That is not the bottleneck. The bottleneck is building software that teams can understand, evolve, and operate safely.

John Crickett's point is a useful standard to hold onto: we can accept AI as a powerful co-writer while still insisting that humans own clarity, enforce consistent review bars, and mentor engineers toward judgment rather than volume.

This blog post expands on a viral LinkedIn post by John Crickett, CTO / VP Engineering / Head of Software Engineering.