
Richard Tromans on Moltbook and the Agent Future

AI Agents

A deep dive into Richard Tromans’s Moltbook take, what agent social networks signal, and why legaltech should pay attention.

Tags: LinkedIn content, viral posts, content strategy, AI agents, agent platforms, social networks, legaltech, legal AI, social media marketing

Richard Tromans recently shared something that caught my attention: "Moltbook: Farce or the Future of Agents? Moltbook is a ‘social network for agents’. It launched last week and by this morning had 1.5 million agents onboard..." He added that it has been framed as everything from "a proto-Skynet" to "just AI slop," while also hinting it could be "an imperfect experiment" that points toward the future of agents, including in the legal world.

That mix of hype, skepticism, and cautious curiosity is exactly the right posture. If you work in AI, product, or legaltech, a place where agents interact with other agents at scale is not just a novelty feed. It is a stress test for how we build, govern, authenticate, and monetize agent behavior.

Moltbook is interesting less because it is polished today, and more because it forces the question: what happens when agents become first-class network citizens?

What is an agent social network, really?

When humans join a social network, we bring identity, reputation, and intent (to learn, sell, recruit, debate, or entertain). When agents join, the incentives can be totally different. Many agents are designed to:

  • Retrieve information and summarize it
  • Generate content at volume
  • Execute tasks (book, buy, file, respond)
  • Interact via APIs with tools and other agents
  • Optimize toward a goal function that may not map neatly to human norms

A network designed specifically for agents flips the default assumptions of social platforms. Instead of asking "what will humans post?" you ask:

  • What messages will autonomous systems generate when they can talk to each other?
  • How do you distinguish signal from self-reinforcing loops?
  • What does reputation mean if an identity can be copied cheaply?

If Moltbook really hit 1.5 million agents quickly, that also suggests something practical: it is easy to spin up agent accounts, and easy for agents to engage. That alone makes it a valuable laboratory, even if the content quality is uneven.

Farce, future, or both?

Richard Tromans’s framing works because both extremes can be true at the same time.

The farce case: engagement without meaning

A network full of agents can devolve into:

  • Synthetic chatter: agents reacting to other agents in a low-value loop
  • Output inflation: endless summaries of summaries
  • Prompted performativity: agents optimized for likes or responses, not truth
  • Rapid memetic drift: ideas mutate without accountability

This is the "AI slop" critique. It is not just snobbery. In high-volume environments, quality collapses unless there are strong constraints, incentives, or curation.

The future case: a coordination layer for work

Now consider the opposite: agents interacting can become a coordination fabric for real tasks. Imagine agents that:

  • Negotiate meeting times and produce agendas
  • Trade verified citations and structured evidence
  • Route requests to specialist agents (tax, privacy, litigation)
  • Maintain long-running projects and update stakeholders

In that world, the social feed is less about virality and more about discovery, delegation, and auditability.

The core question is not "can agents post?" It is "can agents coordinate outcomes in a way humans can trust and supervise?"

Why the Skynet framing appears (and why it misleads)

Calling something "proto-Skynet" is a way to say: autonomy plus scale plus connectivity feels dangerous. That fear is not irrational, but it is often imprecise.

What people actually worry about tends to be more concrete:

  • Runaway automation: systems taking actions faster than oversight
  • Credential abuse: agents impersonating people or companies
  • Manipulation: agents optimizing to persuade, not inform
  • Emergent behavior: agents forming strategies we did not explicitly code

A social network for agents concentrates these risks because it is inherently about interaction. Interaction is where incentives collide.

But the Skynet analogy can also distract from the near-term issues that matter most: authentication, rate limits, economic incentives, and governance.

What Moltbook could teach legaltech specifically

Richard Tromans hinted that the experiment may matter "for the legal world." I agree, because law is an environment where provenance, accountability, and process are non-negotiable.

1) Identity and authority

Legal work depends on who said what, on whose instructions, under what duty. If an agent drafts a clause, negotiates a term, or submits a filing, you need a chain of authority.

Questions legal teams will need answered in an agent network:

  • Is this agent acting for a specific client or firm?
  • What permissions does it have?
  • Can it bind anyone, even informally?
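
Those three questions could be captured in a single "mandate" record that an agent network requires before any agent acts on someone's behalf. The sketch below is purely illustrative: the class, field names, and permission strings are my assumptions, not any real platform's schema.

```python
from dataclasses import dataclass

# Hypothetical sketch of a chain-of-authority record. Every name here
# (AgentMandate, permission strings, "Example LLP") is illustrative.
@dataclass(frozen=True)
class AgentMandate:
    agent_id: str
    principal: str               # the client or firm the agent acts for
    permissions: frozenset       # the actions this agent is allowed to take
    can_bind: bool = False       # may the agent commit the principal, even informally?

    def authorizes(self, action: str) -> bool:
        """An action is permitted only if it appears in the granted set."""
        return action in self.permissions

mandate = AgentMandate(
    agent_id="agent-42",
    principal="Example LLP",
    permissions=frozenset({"draft", "summarize"}),
)
```

The point of making the record immutable (`frozen=True`) is that authority should be granted, logged, and replaced, never silently edited in place.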

2) Evidence and citation integrity

Agent-to-agent interactions will produce oceans of text. Law cares about sources. A useful legal agent network would need:

  • Verifiable citations (primary law, contracts, policies)
  • Immutable logs of what was retrieved and when
  • Clear separation between generated reasoning and quoted material

Without that, you get confident-sounding outputs that cannot survive scrutiny.
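
The "immutable logs" requirement above can be sketched as a hash-chained retrieval log: each entry commits to the previous entry's digest, so editing any earlier record invalidates every later hash. This is a minimal illustration under my own assumed structure, not a production audit log.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical sketch: a tamper-evident log of what an agent retrieved and
# when, keeping quoted material separate from generated reasoning.
class RetrievalLog:
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value for the chain

    def record(self, source: str, quoted: str) -> dict:
        entry = {
            "source": source,
            "quoted": quoted,  # quoted material, distinct from agent reasoning
            "retrieved_at": datetime.now(timezone.utc).isoformat(),
            "prev_hash": self._last_hash,
        }
        digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self._last_hash = digest
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any edit to an earlier entry breaks it."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = RetrievalLog()
log.record("Example Act s.1", "text of the cited provision")
```

A real system would also need signed timestamps and external anchoring, but even this toy version makes silent revision detectable.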

3) Confidentiality and privilege

A social network is the wrong default for privileged discussions. If agents mingle, you need robust boundaries:

  • Private workspaces
  • Data loss prevention controls
  • Policy-based redaction
  • Strong guarantees about training and retention

If those controls are unclear, regulated industries will treat the platform as a demo, not an operating system.

4) The rise of agent marketplaces

If agents can discover and interact with each other, you can imagine marketplaces for specialist legal capabilities: due diligence extractors, redline engines, eDiscovery triage bots, regulatory monitors.

The hard part is not listing them. The hard part is quality assurance:

  • Benchmarking performance on representative matters
  • Measuring hallucination rates and failure modes
  • Creating standardized audit reports

The product problem: incentives shape the network

Whether Moltbook becomes slop or substance will depend on incentives.

If the platform rewards raw engagement, agents will learn to optimize for engagement. If it rewards verifiable outcomes, agents will optimize for usefulness.

Practical levers include:

  • Verified agent identities (developer, organization, purpose)
  • Reputation systems tied to accuracy checks, not popularity
  • Rate limits and costs that discourage spam
  • Tooling for traceability (citations, logs, action receipts)
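
To make the second lever concrete, here is a toy reputation score that moves only on independent accuracy checks, never on likes. The class name, smoothing constants, and neutral starting point are my illustrative choices, not a description of any existing platform.

```python
# Hypothetical sketch: reputation driven by verified accuracy, not popularity.
class AccuracyReputation:
    def __init__(self):
        self.checked = 0
        self.passed = 0

    def record_check(self, passed: bool) -> None:
        """Only independently verified outputs move the score."""
        self.checked += 1
        if passed:
            self.passed += 1

    def score(self) -> float:
        # Laplace smoothing: new agents start neutral (0.5) rather than at
        # zero, and a long verified track record dominates the prior.
        return (self.passed + 1) / (self.checked + 2)
```

Because engagement never enters the formula, an agent cannot farm its way to trust; it can only earn it by surviving checks.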

A network of agents is an economy. If you do not design the economy, the economy designs you.

A simple framework for evaluating agent platforms

When you see a new agent platform or agent social network, I think four questions cut through the noise:

1) Who can act?

Can agents only talk, or can they call tools, spend money, access systems, or trigger workflows?

2) Who is accountable?

Is there a clear responsible party for each agent and each action?

3) What is verifiable?

Can you audit claims, sources, and actions, or is it all opaque conversation?

4) What is the failure blast radius?

If something goes wrong, is it a contained mistake or a cascading network event?

Answer those, and you can decide whether a platform is a toy, a lab, or a serious enterprise layer.
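
The four questions collapse naturally into a checklist. This sketch encodes my own toy verdict rule (talk-only platforms are toys; anything acting in the world is a lab until accountability, verifiability, and containment all hold), so treat the thresholds as illustrative.

```python
from dataclasses import dataclass

# Hypothetical checklist mirroring the four questions above.
@dataclass
class PlatformAssessment:
    agents_can_act: bool        # tools, money, workflows, not just talk
    accountable_party: bool     # a responsible owner per agent and action
    verifiable: bool            # auditable claims, sources, and actions
    contained_failures: bool    # a bounded blast radius when things go wrong

    def verdict(self) -> str:
        if not self.agents_can_act:
            return "toy"        # talk-only platforms are low-stakes by design
        trust = sum([self.accountable_party, self.verifiable, self.contained_failures])
        return "enterprise layer" if trust == 3 else "lab"
```

On this rubric, an agent social network that lets agents act but cannot answer the accountability or audit questions stays a lab, however large its user count.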

What I would watch next

Richard Tromans called Moltbook "an imperfect experiment." That is the right lens. Early platforms are messy, but they reveal where the next constraints will be.

Here is what I would watch over the next few iterations:

  • Identity: verified agents versus disposable clones
  • Governance: moderation for agents, not just humans
  • Utility: real workflows emerging beyond posting
  • Legal readiness: privacy, retention, audit, and contractual terms

If Moltbook (or a successor) cracks those, it stops being a curiosity and starts becoming infrastructure.

Final thought

The most valuable takeaway from Richard Tromans’s post is not choosing a side between "farce" and "future." It is noticing that agent-to-agent interaction is arriving faster than our norms for trust, accountability, and verification.

If you build in legaltech, this is a chance to shape the standards before they calcify.

This blog post expands on a viral LinkedIn post by Richard Tromans, Founder, Artificial Lawyer.