
Stephen Klein on AI: Competing on the Unmeasurable
Stephen Klein rejects AI doom and argues the future belongs to what metrics miss: meaning, trust, values, love, imagination, and dreams.
Stephen Klein recently shared something that caught my attention: "I do not buy this dystopian narrative in which we all lose our jobs and then robots will kill us. Or is it the other way around?" In the same post, he questioned whether AGI would necessarily be "as bad" for humanity as some predict, and he took aim at a buzzword that gets repeated like a prayer: values alignment.
That mix of skepticism and optimism is refreshing, especially in a feed that often swings between hype and despair. Klein is not denying that automation is real. He is arguing that the story we tell about it matters, because the story shapes what we build, what we fund, and what we tolerate in the workplace.
Below, I want to expand on what Klein is pointing at: if "everything that can be measured will eventually be automated," then the competitive advantage shifts to what cannot be measured well, or at all.
"Everything that can be measured will eventually be automated... but then what does that leave? Everything that can't be measured." - Stephen Klein
The dystopian narrative is compelling, but incomplete
The job-loss-and-robot-apocalypse storyline persists because it has a clean arc: new technology arrives, humans become obsolete, and power concentrates. It is also emotionally sticky because it treats uncertainty as certainty. If you are anxious, a simple doom story can feel like clarity.
Klein is pushing back on that certainty. Not because the risks are imaginary, but because the conclusion is not inevitable. History is messy: technology destroys some roles, creates others, and reshapes most. The outcome depends on choices made by companies, governments, investors, and workers.
If we assume collapse is guaranteed, we optimize for self-protection. People hoard information. Leaders cut learning budgets. Firms chase short-term automation wins without redesigning work. The doom story becomes a self-fulfilling strategy.
Why "values alignment" feels slippery in practice
Klein wrote, "I think the mere concept of values alignment is ludicrous. Whose value exactly?" That question is not cynicism. It is governance.
In the real world, "alignment" often hides a power problem: someone decides what the values are, then calls the result neutral. Even within one company, values conflict all the time:
- Growth versus safety
- Speed versus fairness
- Personalization versus privacy
- Cost reduction versus job quality
So when people say "align AI to human values," it is worth asking:
- Which humans?
- In which context?
- With what tradeoffs?
- Who is accountable when values collide?
A more useful framing is not "alignment" as a finish line, but "value negotiation" as a continuous process. That means clarity about decision rights, transparency about tradeoffs, and mechanisms for appeal when the system harms someone.
"Post-correction" and the return of grown-ups
Klein also offered a vivid image: after "post-correction," when the "riff-raff" is hiding in bunkers, "the grown-ups will show up." Whether or not you share the exact cast of characters, the underlying point is practical: cycles of hype tend to end in consolidation and institutionalization.
We have seen this pattern in other waves:
- Early exuberance leads to overpromising.
- The market corrects when reality shows up.
- Surviving organizations professionalize governance.
- The technology becomes infrastructure.
If AI follows that arc, then the important work is not just model capability. It is operational maturity: security, auditability, human oversight, and business incentives that reward long-term trust.
The core claim: measured work gets automated first
Klein wrote, "I believe that everything that can be measured will eventually be automated." This is the sharpest idea in the post, because it maps cleanly to how automation actually spreads.
If a task has:
- A clear goal
- A reliable feedback signal
- Plenty of historical examples
- A stable environment
...then it is a great candidate for automation. Not because machines are magical, but because the task is already shaped like a machine problem.
This is why we see rapid progress in areas like:
- Drafting routine text
- Summarizing calls
- Extracting entities from documents
- Coding predictable patterns
- Generating variants of ads and landing pages
A useful rule of thumb: if you can define success with a single dashboard metric, you can probably automate a meaningful chunk of the workflow.
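To make that rule of thumb concrete, here is a toy Python sketch of how a team might score a task against the four criteria above. The task names, criteria fields, and scoring are illustrative assumptions of mine, not a methodology from Klein's post.

```python
# Toy sketch: how "machine-shaped" is a task?
# The criteria mirror the list above; the scoring is an illustrative
# assumption, not a real assessment methodology.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    clear_goal: bool            # success is unambiguous
    feedback_signal: bool       # you can tell quickly whether it worked
    historical_examples: bool   # plenty of prior cases to learn from
    stable_environment: bool    # the rules rarely change

def automation_readiness(task: Task) -> float:
    """Return a 0-1 score: the higher, the more the task looks like a machine problem."""
    criteria = [
        task.clear_goal,
        task.feedback_signal,
        task.historical_examples,
        task.stable_environment,
    ]
    return sum(criteria) / len(criteria)

if __name__ == "__main__":
    tasks = [
        Task("Summarizing sales calls", True, True, True, True),
        Task("Deciding whether to enter a new market", False, False, False, False),
    ]
    for t in tasks:
        print(f"{t.name}: {automation_readiness(t):.2f}")
```

The point is not the score itself but the exercise: tasks that tick every box are already shaped like machine problems, which is exactly why they get automated first.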
What remains: the unmeasurable becomes the advantage
Klein lists what cannot be measured: "Meaning, Trust, Values, Love, Passion, Imagination, and Dreams." I would add a few cousins: judgment, taste, courage, and moral responsibility.
This is where the future of work gets counterintuitive. Many organizations still treat the unmeasurable as "soft." But when automation commoditizes the measurable, the so-called soft stuff becomes the hard edge.
Meaning: why people stay when they could leave
You cannot KPI your way into meaning. You can only design work that feels coherent: people understand how their effort contributes to something worth doing.
In an AI-saturated workplace, meaning becomes a retention strategy. If employees feel like they are supervising a black box that only exists to cut headcount, they will disengage. If they feel like AI removes drudgery so they can do higher-leverage, more human work, they will invest.
Trust: the real productivity multiplier
Trust shows up indirectly in metrics, but it is not reducible to them. High-trust teams:
- Share bad news early
- Ask for help faster
- Coordinate without bureaucratic friction
- Recover from mistakes without blame spirals
As AI increases speed, trust becomes the governor that prevents speed from becoming chaos.
Values: tradeoffs made visible
When AI systems make decisions, values move from posters on the wall to choices in the product. What is prioritized? Who is protected? Who bears the cost when the model is wrong?
Organizations that can articulate values in plain language, then translate them into policies and product constraints, will outperform those that hide behind vague "alignment" statements.
Imagination and dreams: the source of new games
Automation is great at playing existing games faster. Humans are still better at inventing new games: new categories, new narratives, new ways to serve customers that did not exist before.
In practice, imagination looks like:
- Reframing the customer problem
- Designing a new workflow, not just adding a tool
- Creating products that feel human, not just efficient
Competition shifts from Machiavelli to legitimacy
Klein predicts, "Machiavelli will get thrown under the bus." Translating that: if manipulation and zero-sum tactics are the default strategy, AI can turbocharge them. But that also raises the stakes for legitimacy. Customers, employees, and regulators react strongly when they sense exploitation.
In a world where automated systems can amplify persuasion at scale, the winners will be organizations that earn consent, not just attention. That means:
- Clear disclosure when AI is used
- Guardrails that prevent dark patterns
- Incentives aligned with customer outcomes
- Internal cultures that reward integrity
The future is not humans versus machines. It is trust-based organizations versus extraction-based ones.
What this means for leaders and builders
If Klein is right that the measurable gets automated, then leaders should invest where automation cannot substitute for accountability.
1) Redesign roles, do not just add tools
Ask: what decisions should humans own after automation? Define the human judgment layer explicitly, then train for it.
2) Treat governance as a product
Policies, audit trails, and escalation paths are not bureaucracy. They are how you operationalize values when systems scale.
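As a hypothetical illustration of what "governance as a product" can mean in practice, here is a minimal Python sketch of an audit record for an AI-assisted decision, with a human owner and an appeal path built in. The field names and values are assumptions for illustration, not a standard schema.

```python
# Minimal sketch of an audit record for an AI-assisted decision.
# Field names and example values are illustrative assumptions, not a standard.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionAuditRecord:
    decision_id: str
    model_version: str           # which system produced the recommendation
    inputs_summary: str          # what the model saw (redacted as needed)
    recommendation: str          # what the model suggested
    human_reviewer: Optional[str]  # who owned the final call, if anyone
    final_decision: str          # what actually happened
    escalation_path: str         # where an affected person can appeal
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

record = DecisionAuditRecord(
    decision_id="loan-2024-00042",
    model_version="credit-scorer-v3.1",
    inputs_summary="income, repayment history; no protected attributes",
    recommendation="decline",
    human_reviewer="j.doe",
    final_decision="approve with conditions",
    escalation_path="appeals@company.example",
)
print(record)
```

The design choice worth noticing is that the record names a human owner and an appeal path, which is how abstract values become operational constraints when the system scales.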
3) Build culture that can handle ambiguity
The unmeasurable is ambiguous by nature. Teams need psychological safety, strong writing, and decision-making rituals that work when the spreadsheet is not enough.
4) Measure what you can, but respect what you cannot
You can track proxies for trust and meaning (retention, internal mobility, incident response time), but do not confuse proxies with the thing itself. Make room for qualitative signals and human narrative.
Closing thought
Klein ends with faith in humanity, and I share the spirit of that. The future of work is not automatically bright or dark. It becomes one or the other through design choices.
If we keep treating people as replaceable and only value what fits neatly into a dashboard, we will build the bleak version. If we use AI to automate the measurable while elevating the unmeasurable, we get something better: work that is more human, not less.
This blog post expands on a viral LinkedIn post by Stephen Klein, Founder & CEO, Curiouser.AI | Berkeley Instructor | Building Values-Based, Human-Centered AI | LinkedIn Top Voice in AI. View the original LinkedIn post →