John Crickett on When Best Practices Become Cargo Cults
A deeper look at John Crickett's viral post on best practices, context, and avoiding cargo-cult engineering decisions.
John Crickett recently shared something that caught my attention: "Following best practice is great.
Following the wrong best practice? Not so much." That short punchline lands because most of us have lived the downside. We adopt a popular pattern, process, or principle because it is widely praised, and then we wonder why it slows us down or creates new problems.
John Crickett explained where things go wrong: we "grab a best practice without understanding the specific problem it was designed to solve." We implement it because a respected company did it, or because a well-known person championed it. And then we treat it like a universal truth.
I want to expand on that idea, because in engineering management this is one of the most expensive ways to waste effort: doing the right thing for someone else.
Best practices are not laws; they are solutions
When John says "best practices aren't universal truths. They're contextual solutions," he is pointing at a simple mismatch that hides in plain sight.
A practice earns the label "best" because:
- It solved a real problem for a real team
- It was repeatable enough to be taught
- It produced better results than the alternatives in that context
But context includes things like team size, domain complexity, risk tolerance, latency requirements, compliance constraints, deployment frequency, staff experience, and even organizational incentives. Change the context, and the same practice can go from helpful to harmful.
"The best practice is only 'best' when it fits your context."
That sentence is an engineering management compass. It is not anti-best-practice. It is pro-thinking.
Where cargo culting shows up in software teams
John uses the term "cargo-culting," and it is the perfect word. Cargo culting is copying the visible rituals without understanding the causal mechanisms behind them.
Here are a few places I see it most often.
Microservices because "that is what scale looks like"
Microservices are a best practice in certain environments: high scale, many independent teams, clear bounded contexts, strong platform tooling, mature observability, and a real need for independent deployability.
But if you have a small team, a fuzzy domain model, or weak operational maturity, microservices can create:
- Slower delivery due to coordination and integration overhead
- More production risk due to distributed failure modes
- Harder debugging due to fragmented logs and unclear ownership
The practice is not wrong. The fit is wrong.
Strict code coverage targets as a proxy for quality
Some teams adopt rules like "90% coverage" because it sounds like rigor. Coverage is useful when it incentivizes testing meaningful behavior and protects critical workflows.
But as a universal mandate it can push people toward:
- Shallow tests that assert implementation details
- Mock-heavy suites that break on refactors
- The illusion of safety while critical paths remain untested
The problem coverage solves is "we are shipping changes without reliable feedback." If you already have fast integration tests, strong review, and low change risk, your best investment might be elsewhere.
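To make the contrast concrete, here is a minimal sketch of a shallow, implementation-coupled test next to a behavior-focused one. The `apply_discount` function and both tests are hypothetical illustrations, not from John's post; the shallow test uses Python's standard `unittest.mock` to pin *how* the code computes, which is exactly what breaks on refactors.

```python
from unittest.mock import patch

def apply_discount(price: float, code: str) -> float:
    """Hypothetical example: 10% off with code 'SAVE10', otherwise full price."""
    return round(price * 0.9, 2) if code == "SAVE10" else price

# Shallow test: pads coverage by asserting an implementation detail.
# It breaks if round() is swapped out, even though behavior is unchanged.
def test_discount_calls_round():
    with patch("builtins.round", side_effect=round) as mock_round:
        apply_discount(100.0, "SAVE10")
    assert mock_round.called  # asserts *how*, not *what*

# Behavior test: asserts the outcome users actually depend on.
def test_discount_applies_ten_percent():
    assert apply_discount(100.0, "SAVE10") == 90.0
    assert apply_discount(100.0, "BOGUS") == 100.0

test_discount_calls_round()
test_discount_applies_ten_percent()
```

Both tests count identically toward a coverage number, which is the point: the metric cannot tell them apart, so it cannot be the goal.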
Agile ceremonies without the feedback loop
Daily standups, sprints, retrospectives, and story points can be powerful when they help a team inspect and adapt.
But if the underlying problem is actually "we cannot ship small increments" or "stakeholders keep changing priorities mid-week," more meetings do not fix the system. They often just add overhead and frustration.
"Use X architecture" because a famous company uses it
Well-known organizations publish their practices after years of iteration and with a whole supporting ecosystem: internal tooling, training, platform teams, budget, and organizational design. Copying the visible technique without the invisible support can be a trap.
A practical way to evaluate a best practice
John suggests three questions:
- "What problem does this solve?"
- "Do I actually have that problem?"
- "Does my situation match the context where this works?"
I like these questions because they force you to move from imitation to diagnosis. Here is a slightly expanded version you can run in 10 to 30 minutes before committing.
1) Name the problem in your own words
If you cannot describe the problem clearly, you are not choosing a solution. You are choosing a vibe.
Write it down in one sentence, for example:
- "Incidents keep recurring because changes are hard to roll back"
- "Delivery is slow because deployments are manual and risky"
- "Code ownership is unclear, so reviews become a bottleneck"
2) Identify the symptoms and the cost
Ask: what is the measurable pain? Time-to-merge, lead time, defect rate, on-call load, customer churn, audit findings, missed deadlines.
If the cost is small, avoid heavy process. Lightweight fixes often outperform complex frameworks.
3) Confirm the mechanism
A good best practice has a reason it works. For example:
- Trunk-based development works because it reduces long-lived branches and forces frequent integration.
- Blameless postmortems work because they increase learning and reduce fear, which increases reporting and transparency.
If you cannot articulate the mechanism, you cannot predict whether it will work for you.
4) Compare contexts, not headlines
This is the step John is warning us not to skip. Ask:
- Team size and experience: are we 5 engineers or 200?
- Change frequency: weekly releases or hourly deploys?
- Risk profile: consumer app or regulated healthcare?
- System shape: monolith, modular monolith, distributed services?
- Tooling maturity: do we have CI/CD, observability, feature flags?
A best practice that assumes strong tooling will fail in a low-tooling environment, and you will blame the practice instead of the missing prerequisites.
5) Pilot, then decide
Instead of rolling out a sweeping mandate, pilot with one team or one codebase slice.
Define success criteria upfront:
- "Reduce time-to-deploy from 2 hours to 20 minutes"
- "Cut incident recurrence by 30%"
- "Decrease review queue time by 1 day"
If it works, scale it. If it does not, you learned cheaply.
What engineering leaders can do differently
The hardest part is cultural: teams often want certainty, and "best practice" sounds like certainty. As a leader, you can keep the benefits of best practices without turning them into dogma.
Treat best practices as hypotheses
Language matters. "We will adopt X because it is a best practice" shuts down thinking. "We will try X because it might solve Y" invites feedback.
Make the context explicit
When a practice is proposed, ask people to include:
- The problem statement
- The assumed prerequisites
- The expected tradeoffs
This turns adoption into an engineering decision, not an act of faith.
Reward reasoning, not memorization
Cargo culting thrives when people get social credit for citing famous companies or popular frameworks. Create space for someone to say, "That worked there, but our constraints are different."
Keep a "no default" mindset
Many organizations build defaults like "all services must use X" or "all teams must run Scrum." Defaults can be helpful, but they should be revisited. A default that never changes becomes dogma.
A simple checklist you can reuse
If you want something quick to copy into your decision doc, here is my condensed version of John Crickett's point:
- What specific problem are we solving?
- What evidence shows we have that problem?
- What is the mechanism by which this practice helps?
- What prerequisites does it assume?
- How does our context differ from the success stories?
- What tradeoffs are we accepting?
- How will we pilot it, and what metrics define success?
If you can answer these, you are not cargo culting. You are leading.
Closing thought
John Crickett's post is a reminder that maturity is not about collecting more practices. It is about choosing fewer practices with better fit. Best practices are a great starting point, but they should never be the ending point.
This blog post expands on a viral LinkedIn post by John Crickett, CTO / VP Engineering / Head of Software Engineering. View the original LinkedIn post →