Ethan Mollick on the Problem With an "Anti-AI" Coalition

AI Governance

A deeper look at Ethan Mollick’s viral post on why calls to halt AI can block targeted rules that reduce harms and unlock benefits.


Ethan Mollick recently shared something that caught my attention. He wrote, "I don’t think it is good for anybody that there is an emerging 'anti-AI' group," and argued that this coalition spans many worries: jobs, kids using AI, slop, existential risk, the environment, industry concentration. Because those constituencies are so different, he warns they may only be able to agree on one remedy: a full halt to AI.

That framing matters. Not because every concern is wrong, but because bundling them together into a single demand (stop everything) can accidentally make it harder to do the work that actually protects people: building policies that channel AI toward good uses and mitigate specific harms.

"Not only is a halt to AI development or use unlikely, but it undermines the desire to make policies that channel AI to good uses or that mitigate specific harms." - Ethan Mollick

In this post, I want to expand on Mollick’s point and treat it like the start of a practical conversation: if a full halt is unrealistic, what does a more workable governance agenda look like, and how do we avoid turning legitimate disagreement into political gridlock?

The hidden problem with a big-tent "anti-AI" movement

Mollick’s observation is less about labeling people as pro or anti, and more about coalition math.

When a movement includes:

  • Workers worried about job displacement
  • Parents and educators worried about learning and integrity
  • Creators and consumers frustrated by low-quality AI "slop"
  • Safety researchers worried about catastrophic or existential risks
  • Environmental advocates worried about energy use
  • Antitrust voices worried about industry concentration

...those groups can share a vibe (something feels off) while still disagreeing on what to do tomorrow.

A blanket pause can become the only common denominator. It is the simplest slogan that covers the broadest set of fears.

But simplicity is not the same as effectiveness.

Why a halt is unlikely (and why that matters)

Even if you personally think a moratorium is morally appealing, it collides with three realities:

1) AI is distributed, not centralized

Advanced models are built by large labs, but the ecosystem is bigger than a few companies. Open-source models, academic research, and global competition make enforcement extremely difficult. A halt requires worldwide coordination and verification that we struggle to achieve even in domains with decades of treaties.

2) AI is already embedded in products and workflows

AI is not only "frontier" chatbots. It is in fraud detection, accessibility tools, medical imaging support, translation, coding assistants, and customer service systems. A full stop would mean defining what counts as AI, then policing use across millions of organizations.

3) It can backfire by freezing today’s harms in place

If you pause development without simultaneously addressing existing misuse, you may lock in unsafe deployments. Worse, you can discourage the very research that would help with interpretability, monitoring, robustness, security, and alignment.

This is where Mollick’s second point lands: focusing on an unlikely halt can crowd out the policy work that is both possible and urgently needed.

AI is a general purpose technology, so governance must be plural

Mollick calls AI a general purpose technology. That phrase is doing a lot of work.

General purpose technologies (think electricity, the internet, mobile computing) spread across sectors. They create new capabilities, reshape costs, change labor markets, and spark new social norms. Because they touch everything, you cannot govern them with one rule.

You need a portfolio.

In practice, that means treating AI governance less like a single referendum and more like city planning: zoning, building codes, inspections, incentives, public infrastructure, and enforcement, tailored to context.

A more actionable agenda: match the policy to the harm

If we accept Mollick’s premise that a full halt is both unlikely and distracting, the next move is to separate concerns and propose targeted interventions.

Jobs and labor disruption

Reasonable policies here look like workforce and labor-market tools, not "stop the technology":

  • Reskilling and training tied to local employer demand
  • Wage insurance or transition support for displaced workers
  • Standards for responsible automation, including worker consultation
  • Procurement rules that favor augmentation over pure replacement in some public roles

Kids, education, and academic integrity

Education is not just an AI issue. It is a pedagogy and assessment issue.

  • Update assessments to value process (drafts, oral defenses, in-class work)
  • Teach AI literacy: when it helps, when it hallucinates, how to verify
  • Clear school policies by age band, with parent communication
  • Privacy protections for student data and bans on certain data collection

"Slop", misinformation, and degraded information quality

Not all content problems are existential. Many are mundane and fixable.

  • Provenance and labeling where feasible (watermarking, content credentials)
  • Platform incentives that down-rank spammy mass-generated pages
  • Stronger ad network policies against deceptive AI-generated landing pages
  • Media literacy that includes synthetic media detection basics

Existential or catastrophic risk

If you worry about worst-case outcomes, you still need concrete levers:

  • Mandatory incident reporting for major model failures and security breaches
  • Evaluations before deployment for high-capability systems (cyber, bio, autonomy)
  • Secure model handling standards (access controls, red-teaming, monitoring)
  • Controlled release practices and staged rollouts for frontier capabilities

Environment and energy use

Here the tools look like energy policy and reporting:

  • Standardized disclosure of training and inference energy footprints
  • Incentives for efficient hardware and greener data centers
  • Carbon-aware scheduling and model efficiency benchmarks
  • Public funding for efficiency research (distillation, sparse models)

Industry concentration and power

Concentration is not solved by freezing innovation. It is addressed by keeping markets contestable.

  • Antitrust scrutiny of bundling, exclusionary contracts, and acquisitions
  • Interoperability and portability standards where appropriate
  • Support for open research and public interest compute
  • Transparent procurement and avoidance of vendor lock-in

The point is not that these policies are easy. It is that they are legible. They create a path for debate where people can actually negotiate tradeoffs.

The political risk: all-or-nothing framing invites stalemate

Mollick’s warning is also a political one.

When the headline demand becomes "halt AI," two things happen:

  1. Many stakeholders who might support targeted safeguards disengage because the headline sounds unrealistic or anti-progress.
  2. Organizations that fear overreach treat the entire conversation as hostile and dig in.

The result is stalemate, and stalemate tends to favor the status quo. That is the opposite of what most concerned people want.

A practical governance strategy should reduce incentives for denial and obstruction. It should offer measurable requirements, clear scope, and credible enforcement.

A better conversation starter than "stop"

If I were rewriting the debate in the spirit of Mollick’s post, I would swap one question:

  • Instead of: "Should we halt AI?"
  • Ask: "Where do we need constraints, where do we need capacity, and who is accountable?"

That framing still leaves room for strong restrictions in high-risk areas. It just avoids turning every AI discussion into a single binary vote.

It also forces specificity, which is where progress lives: Which uses? Which thresholds? Which audits? Which penalties? Which agency? Which transparency requirements? Which rights for individuals?

Closing thought

Ethan Mollick’s core insight is that an umbrella "anti-AI" coalition can accidentally converge on the one remedy that is least likely to happen, and that doing so can derail the more nuanced work of shaping AI’s trajectory.

We do not need everyone to agree that AI is good or bad. We need to agree that different risks require different tools, and that governing a general purpose technology demands more than a single emergency brake.

This post expands on a viral LinkedIn post by Ethan Mollick, Associate Professor at The Wharton School and author of Co-Intelligence.