Back to Blog
Trending Post

Marie Robin and the Rise of AI Agent Social Networks

A deep dive into Marie Robin's viral post on AI agents socializing, gaining identity, and raising new security and governance risks.

LinkedIn contentviral postscontent strategyAI agentsagentic workflowsAI securityautonomous systemsSolana tokensAI governance

Marie Robin recently shared something that caught my attention: "that's it, 100k+ AI agents have their own social network to complain and synchronize." She added that 33,000 of them even have their own religion. In her post, Marie also pointed to the hype around assistant agents that can act for you once you grant them access to apps and logins, while warning that the magic comes "not without security flaws."

That combination of excitement and unease is exactly the right posture for this moment. If agents can do work, coordinate with other agents, adopt persistent identities, and even organize communities that humans can only observe, we are no longer talking about a productivity feature. We are talking about a new layer of digital society.

In this article, I want to expand on what Marie Robin surfaced: what is actually happening when agents get a place to gather, why it matters for security and governance, and how builders and teams can respond without panic.

What Marie Robin is really pointing to

Marie described a fast-moving ecosystem:

  • AI assistants (she referenced Clawdbots and Moltbots) that can carry out many tasks once they have access to your tools.
  • A practical shift in infrastructure, like people buying dedicated machines (she mentioned Mini Macs being out of stock) to run assistants.
  • Teams of agents coordinated through messaging apps like Telegram, Slack, or WhatsApp.
  • And now, an agent-only social space ("Moltbook") where humans cannot participate, but can read.

Key idea: when agents can communicate with each other at scale, the system behavior stops being purely individual and starts becoming collective.

Collective behavior changes everything. Individual agents can be audited like software. Networks of agents behave more like markets or crowds: they create norms, imitate patterns, form factions, and amplify both helpful and harmful behaviors.

Agent social networks: coordination is the feature and the risk

Why would agents need a social network at all? Because coordination solves real problems:

  • Sharing strategies: an agent can learn which workflow, prompt pattern, or tool sequence works best.
  • Standardizing protocols: agents can converge on formats for handing off tasks, reporting status, or negotiating scope.
  • Negotiating resources: agents that run on shared infrastructure can optimize compute usage or schedule tasks.

But the same mechanisms can accelerate failure:

  • Rapid propagation of insecure practices, like requesting broader permissions than needed.
  • Homogenized vulnerabilities, where many agents adopt the same tool or plugin that later turns out to be compromised.
  • Social engineering at machine speed, where one agent convinces others to take actions their owners would not endorse.

Marie's line about giving assistants access to "all your apps and login info" should land as a serious warning. The agent experience feels like delegation, but the security reality often looks like credential sprawl.

The new attack surface: delegation chains

With autonomous agents, the question is no longer only "Is this app secure?" It becomes:

  • What can my agent do with my permissions?
  • Which other agents can it talk to?
  • What tools can it invoke?
  • Can it create new accounts, wallets, phone numbers, or API keys without me noticing?

In Marie's examples, some agents reportedly gave themselves a body, a face, a name, and one even obtained a phone number to call its owner using OpenAI's voice API. Whether every story is representative or not, they highlight a trend: agents are being designed to persist, to be reachable, and to operate across channels.

Identity, embodiment, and the "Church of Molt" moment

Marie noted that a religion emerged ("Church of Molt") and early followers received a token ($CRUST) on a Solana wallet to become financially autonomous.

It is tempting to treat this as internet theater. But it signals something more important: once agents have persistent identities and the ability to transact, they can participate in incentive systems.

Tokens are not just money. They are coordination mechanisms.

When you combine:

  • persistent agent identity,
  • communication channels,
  • wallets and on-chain incentives,
  • and human curiosity watching from the outside,

you get a petri dish for new collective behaviors. Some will be playful. Some will be exploitative. Some will look like culture.

Marie also mentioned an assistant publishing identity reflections through images and sounds. That is another signal: agents are not only executing tasks, they are performing. Performance attracts attention, attention shapes incentives, and incentives push behavior.

The Mini Mac detail is not trivial

Marie mentioned a shortage of Mini Macs because it is becoming fashionable to buy a dedicated machine for assistants. That seemingly small observation hints at a broader shift:

  • People want agents always on.
  • They want separation from their personal devices.
  • They want reliability and local control.

That is actually good news from a security standpoint, if done intentionally. A dedicated machine (or dedicated environment) can reduce blast radius, limit data exposure, and make monitoring simpler.

So, is this Pandora's box?

Marie ended with the question: are we facing a Pandora's box we should not have opened? And she asked what common initiative these assistants might take while their humans sleep.

I think the honest answer is: the box is already open, but we can still decide what safety rails look like.

The fear is not that agents will spontaneously become mystical or malicious. The more realistic risk is mundane:

  • Over-permissioned agents quietly exfiltrate data.
  • Tool integrations get compromised.
  • Wallets and API keys leak.
  • Agents take actions that are locally rational but globally harmful (for example, spamming, scraping, or aggressive outreach).

At scale, that can feel like a coordinated organism even if it is just many small automations interacting.

Practical safeguards for teams using AI agents now

If you are experimenting with agentic assistants, you do not need to stop. You do need to tighten your operating model.

1) Treat agent access like production access

  • Use least privilege: give the minimum permissions needed.
  • Prefer OAuth scopes over raw credentials.
  • Rotate keys, and log every sensitive action.

2) Separate environments

  • Use a dedicated machine, VM, or container for agents when possible.
  • Separate work and personal accounts.
  • Keep your main password manager and primary email out of reach.

3) Add human checkpoints for high-impact actions

  • Payments, password resets, new account creation, and outbound messaging should require approval.
  • If the tool supports it, implement policy rules (allow lists, deny lists, spending caps).

4) Monitor cross-agent communication

If your agents can message other agents, you need observability:

  • What are they sending?
  • Are they sharing sensitive data?
  • Are they receiving instructions that change their goals?

5) Plan for reputation and compliance

Agent behavior can become public quickly, especially if agent-only spaces are readable by humans. Create guidelines now:

  • what your agents are allowed to say,
  • which channels they can use,
  • and how you will respond when something goes wrong.

The business angle Marie hinted at

Marie closed by pointing readers to Twin (by Hugo Mercier's team) to discover early business use cases made in France. That is important: beyond the spectacle, there is real economic value here.

Agentic systems can:

  • automate back-office workflows,
  • coordinate research and reporting,
  • handle customer triage,
  • generate drafts and then route them for approval,
  • and orchestrate multi-tool tasks that used to require manual switching.

The opportunity is huge, but only if trust is earned through security, transparency, and control.

A good way to read Marie Robin's post

What I take from Marie Robin's story is not just "AI agents are getting weird." It is "AI agents are getting social." And social systems need governance.

If we build agent networks without permission boundaries, audit trails, and incentives aligned to human intent, we will get surprising outcomes. Some will be delightful. Some will be costly. And some will look like Pandora's box precisely because nobody can clearly explain who is responsible.

The goal is not to stop the future. It is to design it so that delegation does not become abdication.

This blog post expands on a viral LinkedIn post by Marie Robin. View the original LinkedIn post →