
Jonny Longden on the Speedometer Trap in Growth
A practical expansion of Jonny Longden’s speedometer metaphor for CRO: balance metrics, context, and experimentation to grow.
Jonny Longden recently shared something that caught my attention: an image of driving a car while ONLY looking at the speedometer, blacking out the windows, and pressing the accelerator to hit a target speed. He asks what would happen.
Imagine if you drove a car whilst ONLY looking at and trying to 'optimise' the speedometer... Imagine you blacked-out all the windows and just pressed the accelerator and watched the speedometer to try and optimise a certain speed.
His answer is blunt: you would crash, because you are not paying attention to what is really going on. That metaphor is one of the cleanest explanations I have seen for a common failure mode in growth, CRO, and product: treating a single metric as the mission.
In this post, I want to expand on Jonny’s point and turn it into something you can use day to day. Because most teams are not literally blacking out the windows. They are doing something subtler: building operating rhythms, incentives, and experimentation roadmaps around one number and calling it strategy.
The metric is not the journey
Jonny points out that the speedometer is (a) incredibly specific and limited, and (b) not representative, on its own, of whether the journey is successful. You cannot separate time, destination, route conditions, and driving skill from the experience of getting somewhere.
Growth programs fail for the same reason. Conversion rate, revenue, retention, LTV, CAC, NPS, activation, MAU: each can be useful. None of them is the business.
When a single metric becomes the goal, two things tend to happen:
- Teams start optimizing for what is measurable rather than what is valuable.
- The organization stops noticing second order effects until they show up as a crisis.
Increasing a metric says absolutely nothing about what you are doing, why you are doing it, or whether it is the right thing to do.
A metric is an instrument. It tells you something. It does not tell you everything.
Why single-metric optimization is so seductive
Jonny also calls out a mental trap: our brains want to believe that simple leading indicators exist, and that if we increase one number, everything else will fall into place. That belief is comforting because it reduces uncertainty.
In practice, metrics are connected systems. Changing one number often changes the meaning of another:
- A conversion rate increase might be great, or it might mean you stopped acquiring cold traffic.
- A revenue increase might be real demand growth, or it might be discounting that pulls revenue forward at the expense of margin and retention.
- A retention increase might be real product value, or it might be a lock-in mechanism that hurts long-term brand trust.
That is why the speedometer analogy lands. Driving well requires integrating multiple inputs at once, not worshipping one dial.
The classic example: conversion rate goes up when you turn off ads
Jonny gives a deliberately ridiculous example that is still painfully instructive: you can increase conversion rate by switching off paid media.
That sounds absurd until you remember how dashboards are usually interpreted. If the CRO team is rewarded on conversion rate, turning off low-intent traffic can look like a win. The metric improves. The business may not.
Here are a few less obvious versions of the same pattern:
- You simplify checkout by removing options, and conversion rises, but AOV and repeat purchase fall because customers can no longer buy the configuration they actually want.
- You reduce form fields to boost lead conversion, but lead quality drops, sales cycles lengthen, and the sales team loses trust in marketing.
- You push more aggressive urgency messaging, and short-term revenue spikes, but refund rates and negative reviews rise.
The point is not that these changes are always wrong. The point is that without context and counter-metrics, you cannot tell what you actually improved.
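The "turn off the ads" version of this is easy to verify with arithmetic. Here is a small sketch with invented channel numbers: dropping the low-intent paid channel lifts the blended conversion rate while total conversions fall.

```python
# Hypothetical numbers: cutting a low-intent channel "improves" the
# blended conversion rate while the business actually shrinks.

channels = {
    "organic": {"visitors": 10_000, "conversions": 400},  # 4.0% CR
    "paid":    {"visitors": 20_000, "conversions": 200},  # 1.0% CR
}

def blended_cr(chans):
    """Return (blended conversion rate, total conversions)."""
    visitors = sum(c["visitors"] for c in chans.values())
    conversions = sum(c["conversions"] for c in chans.values())
    return conversions / visitors, conversions

cr_all, conv_all = blended_cr(channels)
cr_organic, conv_organic = blended_cr(
    {k: v for k, v in channels.items() if k != "paid"}
)

print(f"with paid: CR={cr_all:.1%}, conversions={conv_all}")
print(f"paid off:  CR={cr_organic:.1%}, conversions={conv_organic}")
# CR jumps from 2.0% to 4.0%, but total conversions drop from 600 to 400.
```

A team rewarded only on the first number in each line would celebrate; a team watching both would not.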
A better way: treat growth like driving with instruments and a windshield
Jonny makes an important balancing statement: you cannot drive on pure intuition either. You need a map, signs, and telemetry. The skill is balance.
If I translate that into a practical growth operating model, it looks like this:
- The windshield: qualitative reality. Customer conversations, session replays, usability studies, support tickets, sales calls.
- The map: strategy. Who you serve, what problems you solve, where you compete, what you will not do.
- The instruments: metrics. Signals that confirm or challenge what you think is happening.
- The road conditions: constraints and risks. Tech debt, legal/compliance, seasonality, channel dynamics, competitor moves.
When any one of these dominates, you drive poorly.
Build a metric system, not a metric target
Single metrics can still be useful if you place them inside a system. Here is a simple structure that keeps teams from staring at the speedometer.
1) Pick a primary outcome (your destination)
Choose one primary business outcome for the period, such as profitable revenue growth, expansion revenue, or improved retention in a core segment. This is not your only measurement. It is your primary destination.
2) Add guardrails (your safety constraints)
Guardrails are the metrics that must not degrade while you pursue the primary outcome. Examples:
- If optimizing conversion rate, guardrail gross margin and refund rate.
- If optimizing revenue, guardrail retention and customer satisfaction.
- If optimizing activation, guardrail support burden and time to value.
This is how you stop the car from going fast into a wall.
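The guardrail idea can be made mechanical. This is a minimal sketch, not a standard; the metric names, the "relative change" encoding, and the tolerances are all illustrative, and it assumes every guardrail metric is one where a drop is bad.

```python
# Judge an experiment by its primary metric plus guardrails.
# `result` maps metric name -> relative change (e.g. +0.04 means +4%).
# A guardrail is breached if its metric falls more than its tolerance.

def judge_experiment(result, primary, guardrails):
    if result[primary] <= 0:
        return "no win: primary metric did not improve"
    breached = [
        metric for metric, tolerance in guardrails.items()
        if result.get(metric, 0.0) < -tolerance
    ]
    if breached:
        return "blocked: guardrails breached: " + ", ".join(breached)
    return "ship: primary up, guardrails intact"

result = {"conversion_rate": 0.04, "gross_margin": -0.06, "retention": 0.0}
print(judge_experiment(
    result,
    primary="conversion_rate",
    guardrails={"gross_margin": 0.02, "retention": 0.01},
))
# -> blocked: guardrails breached: gross_margin
```

A +4% conversion lift that costs 6% of gross margin is exactly the fast-into-a-wall case the guardrails exist to catch.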
3) Use diagnostic metrics (your instruments)
Diagnostic metrics help you understand why changes are happening. They are not goals; they are explanations. Examples include:
- Mix shifts by channel, segment, and device
- Step conversion in the funnel
- Latency, error rate, and performance measures
- Pricing and promotion exposure
When revenue moves, diagnostics tell you whether the business got healthier or just louder.
4) Tie experiments to decisions, not just lifts
One of Jonny’s underlying messages is that a metric increase does not prove the decision was right. I would take that further: experiments should reduce uncertainty about an important decision.
Instead of framing a test as "can we lift conversion by 2%", frame it as:
- "Will simplifying plan selection reduce confusion without increasing cancellations?"
- "Does this onboarding change improve time to value for our target segment?"
- "Are we losing trust when we add this urgency pattern?"
Then you measure the primary metric plus guardrails to judge the decision.
What to do next week (a quick checklist)
If your team suspects you are over-optimizing a single metric, here is a practical way to reset without boiling the ocean:
- Audit incentives: What are teams praised for? What gets budget? If one metric dominates rewards, expect speedometer behavior.
- Add two guardrails immediately: Pick the two most likely places your optimization could cause harm (often margin and retention).
- Segment your reporting: If conversion rate is your headline, break it down by channel, intent level, new vs returning, and core segment. A single blended number is where meaning goes to die.
- Pair quant with qual: For every metric movement you review, bring one piece of reality: five customer quotes, ten replay clips, or a short support theme summary.
- Write decision-focused hypotheses: Make your next five experiments about clarifying choices, not chasing lifts.
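To see why a single blended number misleads, here is a toy mix-shift with invented figures: every segment's conversion rate holds perfectly steady, yet the headline number falls 25% purely because the traffic mix changed.

```python
# Each tuple is (visitors, conversions) for a hypothetical segment.
# Segment-level conversion rates are identical across both periods;
# only the traffic mix moves.

before = {"high_intent": (2_000, 120), "low_intent": (8_000, 80)}
after  = {"high_intent": (1_000, 60),  "low_intent": (9_000, 90)}

def report(period):
    visitors = sum(v for v, _ in period.values())
    conversions = sum(c for _, c in period.values())
    print(f"blended CR: {conversions / visitors:.2%}")
    for segment, (v, c) in period.items():
        print(f"  {segment}: {c / v:.2%}")

report(before)  # blended 2.00%; high_intent 6.00%, low_intent 1.00%
report(after)   # blended 1.50%; per-segment rates unchanged
```

A team staring at the blended line would "fix" a conversion problem that does not exist; the segmented view shows the real story is acquisition mix.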
The real takeaway
Jonny’s analogy sticks because it names the deeper issue: we mistake measurement for understanding. The speedometer is useful, but it is not the road, the map, the weather, the engine, or the driver.
If you want experimentation and CRO to create durable growth, treat metrics as a cockpit, not a single needle. Optimize the system, and use numbers to stay honest about what is happening, not to pretend the world is simpler than it is.
This blog post expands on a viral LinkedIn post by Jonny Longden, Chief Growth Officer at Speero.