
Ethan Mollick on the Two Clicks Most AI Users Miss

AI Adoption

Ethan Mollick highlights why default AI settings limit value, and how two clicks, better habits, and teams unlock faster results.

Tags: LinkedIn content, viral posts, content strategy, AI adoption, chatbots, AI tools, user behavior, productivity, social media marketing

Ethan Mollick, Associate Professor at The Wharton School and author of Co-Intelligence, recently shared something that caught my attention: "Watching enough AI users, even experienced ones, use chatbots leads to the revelation that essentially zero percent of people change the default model and you can significantly increase the value of AI to them by clicking twice."

That line is funny because it is true. It is also a little unsettling. We talk about AI adoption like it is a deep cultural transformation, but Ethan is pointing at a much smaller bottleneck: most people never touch the settings. They open a chatbot, accept the default model, default tone, and default capabilities, then judge the entire experience based on that first impression.

In other words, the ROI of AI for many knowledge workers is being capped by a behavior that takes ten seconds to fix.

The default model problem is an adoption problem

Ethan Mollick is not saying that model choice is the only thing that matters. He is highlighting something more practical: real users, including experienced ones, behave like most software users. They rarely customize.

Why? A few predictable reasons:

  • Defaults feel safe. Choosing something else feels like you might break it.
  • People do not know what changes. "Model" sounds technical.
  • The interface does not make the tradeoffs obvious.
  • Teams do not train for it. They train for prompts, not setup.

The result is that a large number of people are evaluating AI based on a model that may be optimized for general chat, not their specific task.

"Essentially zero percent of people change the default model."

If that is even close to accurate, it explains why so many AI rollouts plateau. Users try it, get mixed results, and conclude the tool is inconsistent. But the inconsistency is often coming from a mismatch between the task and the chosen mode or model.

What are the "two clicks" in practice?

Ethan Mollick kept it intentionally simple: "clicking twice" can significantly increase value. Depending on the platform, those clicks might be:

  1. Opening the model picker (or mode selector)
  2. Selecting a better model for the job

On some tools it might be switching from a lightweight model to a more capable reasoning model. On others it could be changing from "chat" to "analysis" mode, enabling web browsing, turning on file tools, or selecting a specialized assistant.

The point is not which brand or which model. The point is that many users never access the capability they already have.

A useful way to think about it:

  • Defaults are designed to work for most people most of the time.
  • Your work is rarely "most people" work.

If you write, analyze, code, plan, research, summarize meetings, or draft client emails, the best model for those tasks may not be the cheapest, fastest default.

When the default is fine, and when it is costing you

Changing models is not automatically better. Sometimes the default is exactly what you want:

  • Quick brainstorming where speed matters more than precision
  • Simple rewrites or tone changes
  • Short Q and A where you already know the domain

But defaults become expensive when you need:

1) Fewer errors and more careful reasoning

If you are using AI for policy drafts, financial explanations, technical writing, or anything where mistakes create rework, a stronger model can pay for itself in time saved.

2) Longer context and document handling

If your workflow includes pasting long threads, uploading files, or extracting insights from a PDF, the better option is often not the default chat experience.

3) Tool use, retrieval, and web context

If you are asking for citations, current events, competitive research, or synthesis across sources, you may need browsing or retrieval features that the default mode does not activate.

4) Consistent format and structured outputs

When you want tables, JSON, project plans, meeting agendas, user stories, or test cases, some models and modes follow constraints more reliably.

The hidden cost of staying on default is not just lower quality. It is the second-order cost: the follow-up prompts, the manual cleanup, the "let me try again" loop, and the loss of trust.

A simple decision rule for model choice

One reason people do not change the model is they do not have a mental model for choosing. Here is a lightweight rule I recommend:

  • Use the default for speed and low-stakes tasks.
  • Switch to a more capable model when the cost of being wrong is higher than the cost of waiting a few seconds.

If you want something even more concrete, ask yourself two questions before you start:

  1. Will I copy-paste this output into something important?
  2. Will a subtle mistake cost me more than 5 minutes?

If the answer is yes, change the model. That is often the "two clicks" Ethan Mollick is talking about.
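The two-question rule above can be written down as a tiny decision function. This is just a sketch of the heuristic, not any vendor's API; the tier names "default" and "capable" are illustrative.

```python
# A hypothetical helper encoding the two-question rule:
# switch models when the output matters or a subtle mistake is expensive.
def pick_model(output_is_important: bool, subtle_mistake_costs_minutes: float) -> str:
    """Return which model tier to use for a task."""
    if output_is_important or subtle_mistake_costs_minutes > 5:
        return "capable"  # the "two clicks": open the picker, choose a stronger model
    return "default"      # low stakes: speed wins

# Quick brainstorm: the default is fine.
print(pick_model(output_is_important=False, subtle_mistake_costs_minutes=1))   # default
# Client-facing draft where errors mean rework: switch.
print(pick_model(output_is_important=True, subtle_mistake_costs_minutes=30))   # capable
```

The point of writing it out is that the rule is cheap to apply: one boolean, one threshold, answered before you type the first prompt.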

The bigger insight: users do not optimize, they satisfice

What I love about Ethan Mollick's observation is what it implies about AI adoption inside organizations.

Most companies are focusing on:

  • Prompt libraries
  • Use case lists
  • Policies and risk controls

Those matter, but Ethan is pointing to something more basic: if you want better outcomes, you sometimes just need to help people make one better choice at the start.

This is classic user behavior. People satisfice. They pick the first acceptable option and move on. That is rational, especially when they are busy.

So the real question becomes: how do you design adoption so the default path produces good outcomes?

Make "two clicks" a team habit

If you lead a team rolling out AI tools, you can operationalize Ethan Mollick's point quickly. A few practical moves:

1) Create a one-page "model map"

Not a technical document. A simple cheat sheet:

  • Default model: best for quick drafts and fast back-and-forth
  • Strong model: best for reasoning, accuracy, and final drafts
  • Research mode: best for web context and citations
  • File mode: best for document analysis

The goal is to reduce the fear of choosing wrong.
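A model map is really just a lookup table, and it can even live as one in an internal tool. Here is a minimal sketch; the task names and mode labels are assumptions for illustration, not tied to any particular platform.

```python
# A one-page "model map" as data. Labels are illustrative placeholders.
MODEL_MAP = {
    "quick draft":       "default model",   # fast back-and-forth
    "final draft":       "strong model",    # reasoning and accuracy
    "web research":      "research mode",   # browsing and citations
    "document analysis": "file mode",       # uploads and extraction
}

def recommend(task: str) -> str:
    """Suggest a mode for a task, falling back to the default for anything unmapped."""
    return MODEL_MAP.get(task, "default model")

print(recommend("final draft"))   # strong model
print(recommend("status update")) # default model (not on the map)
```

The fallback matters: an unmapped task should quietly get the default, not an error, because the whole point of the cheat sheet is to remove fear of choosing wrong.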

2) Add a pre-flight checklist to workflows

Before sending an important email draft or proposal outline, include:

  • Did you select the right model for accuracy?
  • Did you turn on tools you need (files, browsing)?
  • Did you specify format constraints?

This takes seconds and prevents the most common failure mode: asking the right question in the wrong mode.

3) Teach people to notice quality signals

Train users to recognize when the model is struggling:

  • Vague answers when you need specifics
  • Confident claims without sources
  • Missing constraints (word count, tone, structure)
  • Inconsistent formatting across iterations

When they see these signals, the fix is often not "prompt harder." It is "switch models or modes."

4) Measure adoption beyond logins

If you track only usage, you miss the point. Track behaviors that correlate with value:

  • Percentage of sessions using non-default models
  • Percentage using tools (files, browsing) when relevant
  • Rework rates (how often outputs are discarded)

If Ethan Mollick is right, even a small increase in non-default usage can create an outsized improvement in perceived usefulness.
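The three metrics above are straightforward to compute if your platform exports session logs. A minimal sketch, assuming a hypothetical log format (the field names here are invented for illustration, not a real export schema):

```python
# Hypothetical session log; field names are assumptions, not a real schema.
sessions = [
    {"model": "default", "used_tools": False, "output_discarded": True},
    {"model": "strong",  "used_tools": True,  "output_discarded": False},
    {"model": "default", "used_tools": False, "output_discarded": False},
    {"model": "strong",  "used_tools": False, "output_discarded": False},
]

total = len(sessions)
# Percentage of sessions where someone made the "two clicks".
non_default_pct = 100 * sum(s["model"] != "default" for s in sessions) / total
# Percentage of sessions using tools (files, browsing).
tool_use_pct = 100 * sum(s["used_tools"] for s in sessions) / total
# Rework rate: how often outputs were thrown away.
rework_pct = 100 * sum(s["output_discarded"] for s in sessions) / total

print(f"non-default: {non_default_pct:.0f}%, tools: {tool_use_pct:.0f}%, rework: {rework_pct:.0f}%")
```

Tracked over time, you would hope to see the first two numbers rise and the third fall; if non-default usage climbs and rework does not drop, the model map probably needs revisiting.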

Prompting is overrated when setup is wrong

A lot of AI advice assumes that better prompts fix everything. Prompts help, but there is a hierarchy:

  1. Right tool and right mode
  2. Right model
  3. Clear context and constraints
  4. Prompt craft

If the model is underpowered for the task, your prompt becomes a workaround. That is exhausting. The "two clicks" are often the highest leverage move because they change the underlying capability, not just the instructions.

Ethan Mollick's post is a reminder that small interface choices can dominate outcomes.

Closing thought

I keep coming back to Ethan Mollick's phrasing because it captures a truth about technology adoption: people do not need more features. They need fewer hidden decisions.

If you want to get more value from AI this week, do the simplest experiment possible. Open your chatbot, find the model selector, and try a more capable option for one important task. Compare the number of follow-ups, the amount of cleanup, and how confident you feel in the result.

Those two clicks will not solve every AI problem. But they might solve the most common one: users judging AI based on an avoidable default.

This blog post expands on a viral LinkedIn post by Ethan Mollick, Associate Professor at The Wharton School and author of Co-Intelligence.