
Teresa Torres and the Case for Waiting on AI Agents

AI Agents

Teresa Torres urges a thoughtful "watch and see" approach to AI agents like Moltbot, balancing curiosity, workflow, and safety.

Tags: LinkedIn content, viral posts, content strategy, AI agents, automation, AI safety, workflow optimization, tool adoption, social media marketing

Teresa Torres, Author, Speaker, Product Discovery Coach @ ProductTalk.org, recently posted something that made me stop scrolling: "I’m not spending the weekend installing Clawdbot/Moltbot." She went on to note that Clawdbot (now Moltbot) is a persistent AI assistant for long-running tasks, that she is "genuinely intrigued" and also "more than a little concerned about the safety implications," and that she is already at her limit for what she can "consume/integrate" into her workflow.

That mix of curiosity, caution, and self-awareness is exactly the conversation we should be having about AI agents right now. Not because the tools are bad. Not because people should not experiment. But because we are entering a phase where software does not just help you draft, summarize, or search. It can act. And action changes the cost of being early.

"You don’t have to try everything new on day one. Sometimes it’s okay to watch and see." - Teresa Torres

What is different about persistent AI agents?

If you have only used AI as a chat interface, "persistent AI assistant" can sound like a modest upgrade. But agents are a different category. Instead of answering a prompt once, they can:

  • Run longer workflows (hours or days)
  • Coordinate multiple steps (research, planning, executing, reporting)
  • Use tools (browsers, APIs, files, ticketing systems)
  • Operate with less supervision than typical copilots

That is the promise people are excited about: you define an outcome and the agent keeps going until it reaches it.

It is also where the risk profile shifts. When an agent can take actions on your behalf, mistakes can compound faster, and the blast radius can be larger. That is why Torres's pairing of "intrigued" with "safety implications" is not just reasonable, it is responsible.

The hidden cost: workflow saturation

One line from Torres hits a nerve for many of us: she writes that she is "already feeling at my limit" of what she can consume and integrate into daily work.

This is the part that rarely makes it into tool hype threads. Every new tool has an adoption tax:

  • Time to install and configure
  • Time to learn the mental model
  • Time to connect it to your real workflows
  • Ongoing time to maintain and update
  • Cognitive overhead from switching contexts

Even if the tool is objectively good, it can still be wrong for you right now.

I like to think of personal workflow capacity the same way we think about product capacity. If your team has a full roadmap, adding a "small" initiative still displaces something else. Individually, adding one more system can displace deep work, recovery time, or the routines that already keep you effective.

A tool that saves time in theory can cost time in practice if it expands your surface area of decisions.

Safety is not paranoia, it is product thinking

Torres did not list specific safety concerns, but the category is broad and worth spelling out. With agents, safety is not only about model output quality. It is about permissions, autonomy, and accountability.

1) Data exposure and privacy

Persistent assistants often need access to:

  • Email and calendars
  • Documents and drives
  • Internal knowledge bases
  • Customer tickets or CRM data

Even when vendors have strong policies, the act of connecting systems increases the chance of accidental exposure or misuse.

2) Action risk and unintended consequences

An agent that can send messages, open tickets, update records, or execute code can create real-world impact quickly. The risk is not just "wrong answer." It is "wrong action." Examples:

  • Sending an email to the wrong audience
  • Modifying a shared document incorrectly
  • Logging sensitive info in the wrong place
  • Triggering automated workflows you forgot existed

3) Accountability gaps

When something goes wrong, who is responsible?

  • The operator who connected the systems?
  • The vendor?
  • The teammate who trusted the output?

Organizations will need clearer norms here, and individuals should be careful about becoming the unintentional test pilot for policy questions.

The value of "watch and see" as a strategy

In fast-moving spaces, "watch and see" can sound like falling behind. But it can also be a disciplined strategy.

When Torres says she will "give this one some time to percolate," I hear a few smart moves:

  • Let early adopters discover failure modes
  • Wait for clearer best practices
  • Learn what workflows actually benefit (not just demos)
  • See how vendors respond to safety concerns
  • Avoid chasing novelty when you are at capacity

Importantly, this is not anti-innovation. It is sequencing. You can be curious without being first.

A practical checklist for deciding whether to try an AI agent now

If you are tempted to install the newest agent because everyone is talking about it, here is a simple decision framework. I use this to keep myself honest.

1) Name the job to be done

What specific outcome do you want?

  • "Turn meeting notes into decisions and action items"
  • "Monitor competitors and summarize weekly changes"
  • "Draft customer follow-ups based on ticket history"

If you cannot name a job clearly, you are adopting a tool, not solving a problem.

2) Estimate the integration cost

Ask: "What will it take to make this real?"

  • Accounts, permissions, connectors
  • Prompting and workflows to maintain
  • Training teammates (if shared)
  • Ongoing oversight

A good rule: if the setup time is likely to exceed the first month of benefit, do a smaller experiment or wait.
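That break-even rule is easy to make concrete. Here is a back-of-the-envelope sketch in Python; the function name and the hour figures are my own illustrative assumptions, not a formula from Torres's post.

```python
def worth_piloting(setup_hours: float, weekly_benefit_hours: float,
                   weekly_upkeep_hours: float, weeks: int = 4) -> bool:
    """Rough break-even check: does the first month of net benefit
    cover the setup cost? All inputs are your own estimates."""
    net_benefit = (weekly_benefit_hours - weekly_upkeep_hours) * weeks
    return net_benefit > setup_hours

# Hypothetical estimates: 6h setup, saves 2h/week, costs 0.5h/week to maintain
print(worth_piloting(6, 2, 0.5))  # -> False: a month of net benefit only breaks even
```

If the numbers are this close, the rule above suggests running a smaller experiment first rather than committing to a full integration.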

3) Define a safety boundary

Before you connect anything, decide what the agent should never do.

  • No sending emails without approval
  • No writing to production systems
  • Read-only access first
  • Redaction rules for sensitive data

Start with constraints, then expand.
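One way to make "start with constraints" concrete is a deny-by-default guard: every action the agent proposes is checked against an explicit allowlist before anything runs. The action names and policy shape below are illustrative, not the API of Moltbot or any real agent framework.

```python
# Deny-by-default policy sketch: unlisted actions are always refused,
# and higher-impact actions require explicit human approval.
ALLOWED_ACTIONS = {"read_document", "draft_reply"}    # read-only / draft-only to start
REQUIRES_APPROVAL = {"send_email", "update_record"}   # human in the loop

def authorize(action: str, human_approved: bool = False) -> bool:
    if action in ALLOWED_ACTIONS:
        return True
    if action in REQUIRES_APPROVAL:
        return human_approved
    return False  # anything not explicitly listed is denied

print(authorize("read_document"))            # True
print(authorize("send_email"))               # False until a human approves
print(authorize("delete_production_table"))  # False: never listed
```

The useful property is the last line: new capabilities are off until you turn them on, which matches "start with constraints, then expand."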

4) Choose a low-stakes pilot

Pick a task where mistakes are cheap. For example:

  • Summarizing public information
  • Drafting internal notes that you review
  • Creating checklists rather than executing actions

Avoid pilots where the agent can affect customers or finances until you have confidence.

5) Set a stop rule

Most of us forget this. Decide in advance:

  • "If I do not see value in two hours, I stop"
  • "If it requires admin access, I pause"
  • "If it increases my weekly maintenance, I remove it"

Stop rules prevent sunk-cost adoption.
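Stop rules work best when they are written down before the pilot and evaluated mechanically, so sunk cost cannot argue you out of them. A minimal sketch, with hypothetical rule names matching the examples above:

```python
# Hypothetical stop-rule check: encode the rules up front, then let the
# code answer "do I stop?" instead of renegotiating in the moment.
def should_stop(hours_spent: float, value_seen: bool,
                needs_admin: bool, weekly_maintenance_hours: float) -> bool:
    rules = [
        hours_spent >= 2 and not value_seen,  # "no value in two hours"
        needs_admin,                          # "requires admin access"
        weekly_maintenance_hours > 0,         # "increases weekly maintenance"
    ]
    return any(rules)

print(should_stop(hours_spent=2, value_seen=False,
                  needs_admin=False, weekly_maintenance_hours=0))  # -> True
```

Any single tripped rule ends the pilot; that asymmetry is deliberate, since the default should be removal, not continuation.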

Why Torres's post resonated (and what it teaches about content)

This post did not go viral because it had a hot take. It went viral because it told the truth many professionals feel.

  • It acknowledged excitement without hype
  • It named a real constraint: personal capacity
  • It raised safety without fearmongering
  • It normalized waiting, which many people need permission to do
  • It invited learning: "please share your experiences"

From a content strategy perspective, that combination is powerful. It is specific (a tool and a moment), but universal (overload and caution). It is also conversational, which converts readers into commenters.

The best "LinkedIn content" often works like a hallway conversation: grounded, honest, and open-ended.

If you are experimenting, share the learning, not just the screenshots

Torres asked people to share experiences, and that is the right request. With agents, the most valuable lessons are often the unglamorous ones:

  • What broke?
  • What required manual cleanup?
  • What permissions were surprisingly broad?
  • What did you stop doing because the tool demanded attention?
  • What use case actually delivered repeatable value?

If we share those details, the community gets smarter faster, and "watch and see" becomes informed waiting, not passive hesitation.

Closing thought

I appreciate Torres's stance because it creates space for a healthier adoption culture. We do not need to treat every new AI agent like an emergency. Sometimes the most productive move is to pause, observe, and let the ecosystem mature.

If you are at capacity, "watch and see" is not laziness. It is prioritization.

This blog post expands on a viral LinkedIn post by Teresa Torres, Author, Speaker, Product Discovery Coach @ ProductTalk.org.