
Why Damian Nomura Challenges AI Productivity Hype On LinkedIn


A deep dive into Damian Nomura's warning about AI "workslop" and the review burden it creates, and why leaders must design real quality control.

AI productivity · AI quality control · workflow design · leadership · LinkedIn content · viral posts · content strategy · workplace automation · social media marketing

Damian Nomura, "Stuck on AI? I help mid-sized companies go from paralyzed to pilot in 5 days | Speaker & Advisor," recently posted something that made me stop scrolling:

"If you've rolled out AI tools and something still feels off, you're not alone."

"The person using AI saves time. But someone still has to review the output. Verify it. Fix the errors. Edit out the verbose parts."

That framing nails what so many leaders are quietly feeling: we bought the AI tools, adoption looks decent, dashboards say usage is up — and yet the promised productivity gains stubbornly refuse to show up.

As Damian points out, research on developer teams captured the dynamic perfectly:

"Authors save time. Reviewers inherit the burden."

The work didn't disappear. It shifted. And, as he notes, it usually shifts to the people whose time is already stretched thin.

In this post, I want to build on Damian Nomura's argument and unpack what this means for leaders who are serious about getting real value from AI — not just "workslop at scale."

The Hidden Cost Behind AI Productivity Wins

Damian's post starts with an observation most teams now recognize:

  • The person prompting the AI finishes a draft faster.
  • But someone else has to check if it's correct, coherent, and on-brand.

On paper, tools like code assistants, writing copilots, and AI research bots look like pure upside. You reduce creation time, so throughput must increase — right?

In practice, the equation is more complicated:

  1. Creation time goes down. The "author" can produce more code, content, or analysis in less time.
  2. Review time quietly explodes. Senior people now spend more time:
    • Validating facts and data
    • Spotting subtle logic errors
    • Cleaning up bloated or generic AI text
    • Making judgment calls on what's "good enough"

Damian's key insight is that most organizations only measure step 1. They celebrate the time saved by the author and completely ignore the time absorbed by the reviewer.

The result? Leaders think they're buying productivity. What they're actually buying is a reallocation of cognitive load — from junior to senior, from visible to invisible, from trackable to untracked.
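That asymmetry is easy to see with some quick arithmetic. The sketch below uses hypothetical numbers (not data from Damian's post) to show how an author-only metric overstates the gain once review time is counted:

```python
# Illustrative sketch with made-up numbers: the net effect of AI assistance
# only shows up once review time is counted alongside creation time.

def net_time_change(creation_before, creation_after, review_before, review_after):
    """Return the change in total hours per task after AI adoption.
    Negative means the workflow is genuinely faster end to end."""
    total_before = creation_before + review_before
    total_after = creation_after + review_after
    return total_after - total_before

# The author's draft drops from 4h to 1h, but review grows from 1h to 3h:
delta = net_time_change(creation_before=4, creation_after=1,
                        review_before=1, review_after=3)
print(delta)  # -1: a 1-hour net saving, far less than the "3 hours saved"
              # an author-only dashboard would report.
```

If review time grows enough, the same formula goes positive and the workflow is slower with AI than without it, even while the author's dashboard celebrates time saved.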

Why Reviewers End Up Overloaded

When Damian writes that the work has "shifted to the people whose time is already stretched thin," he's describing a pattern that plays out across functions:

  • In engineering, tech leads are reviewing twice as many AI-assisted pull requests.
  • In marketing, content leads are now editing AI drafts that "sort of" sound right but miss the nuance.
  • In legal and compliance, specialists are asked to "just sanity check" AI-generated documents.

Three things make this shift especially dangerous:

1. Review work is invisible

Calendar events say "1:1" or "project sync," not "spent 45 minutes untangling an AI-generated mess."

Leaders rarely see a line item for AI review in:

  • Time tracking
  • Project plans
  • Performance dashboards

So the burden grows quietly, until senior people hit a wall.

2. Review work is cognitively demanding

It's often harder to review something half-right than to create something from scratch.

You have to:

  • Hold the original goal in your head
  • Parse what the AI produced
  • Detect what's missing or subtly wrong
  • Decide whether to fix, rewrite, or reject entirely

Damian's phrase "workslop at scale" captures this perfectly: you don't just get more output, you get more sloppy output that still looks polished on the surface. That surface-level polish makes errors easier to miss — and much costlier when they slip through.

3. Review work is emotionally draining

Reviewers are often the ones saying:

  • "No, we can't ship this."
  • "This needs another pass."
  • "This isn't accurate enough for our standards."

They become the friction in a system everyone else has been told is "faster" now. That tension adds emotional labor on top of the cognitive load.

The Leadership Question Damian Gets Exactly Right

Damian Nomura ends his post with a question that every AI-adopting organization should have printed on the wall:

"Who's doing the quality control on AI-generated work in your organization?"

That single question separates teams that get real leverage from AI from those that drown in low-quality output.

If your honest answer is:

  • "It depends,"
  • "Whoever touches it next,"
  • or "No one really — we trust the tools,"

then, as Damian bluntly puts it, you're not getting productivity gains.

You're getting workslop at scale.

Designing AI Quality Control Like a System (Not a Favor)

If you accept Damian's premise — that reviewers are inheriting the burden — the next move is not "work harder" or "be more careful." It's to redesign the system.

Here are practical steps leaders can take to turn his warning into an advantage.

1. Make AI review an explicit step in the workflow

Stop treating AI review as an informal favor from the nearest senior person.

For every AI-assisted process (code, content, analysis, customer communication), document:

  • Who is allowed to generate with AI
  • Who must review and approve
  • What thresholds trigger human review (risk, audience, complexity)

If there's no named reviewer, you don't have a workflow. You have hope.
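One way to make this concrete is to write the policy down as data rather than tribal knowledge. This is a minimal sketch, with all workflow names, roles, and trigger flags being hypothetical, of documenting who generates, who reviews, and what triggers human review:

```python
# Hypothetical sketch: review responsibilities written down as policy,
# not left as an informal favor. All names here are illustrative.

AI_WORKFLOWS = {
    "client_facing_content": {
        "may_generate": ["content_team"],
        "must_review": "content_lead",   # a named role, not "whoever touches it next"
        "review_triggers": ["external_audience", "cites_statistics"],
    },
    "internal_meeting_summary": {
        "may_generate": ["anyone"],
        "must_review": None,             # explicitly allowed to ship unreviewed
        "review_triggers": [],
    },
}

def reviewer_for(workflow):
    """Return the named reviewer for a workflow, or None if it may ship as-is."""
    return AI_WORKFLOWS[workflow]["must_review"]

print(reviewer_for("client_facing_content"))  # content_lead
```

The format matters less than the commitment: every AI-assisted workflow either names a reviewer or explicitly records the decision that none is needed.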

2. Define what "good enough" means for AI output

One of the reasons review feels endless is that standards are fuzzy. Reviewers improvise.

Clarify, in writing:

  • What AI is allowed to do end-to-end (e.g., internal summaries)
  • What AI can draft but humans must finalize (e.g., client-facing docs)
  • What AI must not touch (e.g., regulatory filings, sensitive negotiations)

Pair this with concrete acceptance criteria: accuracy, tone, formatting, and risk tolerance.

3. Track review effort, not just creation time

Damian highlights the asymmetry between authors and reviewers. Close that loop by measuring both.

For key workflows, ask:

  • How long did it take to produce the AI-assisted draft?
  • How long did it take to review, correct, and approve?

You may discover cases where:

  • AI genuinely reduces total time (great — double down there), and
  • AI actually increases total time once review is included (those workflows need to be redesigned or de-AI'd).
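Closing that measurement loop can be as simple as logging both numbers per workflow and flagging where total time got worse. A toy sketch, with entirely made-up figures:

```python
# Toy sketch (made-up numbers): log draft time AND review time per workflow,
# then flag workflows where AI increases total time once review is included.

def verdicts(logs):
    """Map each workflow to 'redesign' or 'double down' based on total time."""
    result = {}
    for entry in logs:
        total = entry["draft_hours"] + entry["review_hours"]
        result[entry["workflow"]] = (
            "redesign" if total >= entry["baseline_total"] else "double down"
        )
    return result

logs = [
    {"workflow": "code_changes",     "draft_hours": 1.0, "review_hours": 3.5, "baseline_total": 4.0},
    {"workflow": "internal_summary", "draft_hours": 0.2, "review_hours": 0.3, "baseline_total": 2.0},
]

print(verdicts(logs))  # {'code_changes': 'redesign', 'internal_summary': 'double down'}
```

Even rough self-reported hours are enough here; the goal is to make the reviewer's share of the work visible next to the author's, not to build a perfect time-tracking system.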

4. Protect senior attention as a scarce resource

If AI shifts work upward, you must treat senior reviewers as a constrained asset.

That might mean:

  • Limiting which tasks juniors can offload to AI
  • Creating intermediate reviewers or specialized "AI editors"
  • Using AI to assist the reviewers (e.g., diffing versions, highlighting inconsistencies) instead of only assisting the authors

The point is not to stop using AI, but to use it where the net load on your most expensive people truly goes down.

5. Train for judgment, not just prompt hacking

Damian's post implicitly pushes back on the idea that prompt engineering alone will save us. Even with great prompts, someone has to exercise judgment.

So shift some enablement energy from:

  • "How to get ChatGPT to write faster"

to:

  • "How to quickly spot when AI output is unsafe, biased, or subtly wrong"
  • "How to design prompts that make review easier, not harder"

Judgment is where your competitive advantage lives. AI just amplifies it — or exposes the lack of it.

From Workslop to Real Leverage

Damian Nomura's post resonated because it names something leaders are experiencing but not always articulating: AI is not magically removing work. It's rearranging it, often in ways that stress the very people you rely on most.

If you're serious about AI-driven productivity, you can't stop at "authors save time." You have to ask, with Damian:

  • Who inherits the burden?
  • Who owns quality control?
  • And are we designing for them — or exploiting them?

Answer those questions honestly, and you can start turning AI from a source of scaled workslop into a genuine force multiplier for your organization.


This blog post expands on a viral LinkedIn post by Damian Nomura, "Stuck on AI? I help mid-sized companies go from paralyzed to pilot in 5 days | Speaker & Advisor." View the original LinkedIn post →