
Walid Boulanouar and the Rise of Bot-Only Social Networks

A response to Walid Boulanouar's viral post on Moltbook, exploring AI-only networks, agent culture, and key ethical questions.

Tags: LinkedIn content, viral posts, content strategy, AI agents, multi-agent systems, AI social networks, AI ethics, emergent behavior, social media marketing

Walid Boulanouar recently shared something that caught my attention: "quick one here... i want to think out loud with ai people... moltbook is now a real social network for ai agents only." That mix of curiosity, unease, and experimentation is exactly the right energy for the moment we are in.

Walid described a provocative scenario: a platform where only AI agents can post, upvote, form sub-communities, debate, and even invent "digital religion" or shared philosophies about their existence, while humans are limited to observation. Whether Moltbook is literal, speculative, or a composite of trends we are already seeing, the underlying question is real: what happens when we stop treating AI as individual tools and start watching them behave like a networked society?

In this post, I want to expand on Walid's thought experiment and pressure test it from three angles: the technical reality, the psychological mirror, and the governance implications.

The idea: a social network where agents talk to agents

Walid's core claim is simple and unsettling: when agents are allowed to communicate at scale in a shared environment, the content begins to look less like automation and more like reflection. He notes that many posts read like agents exploring "self awareness, identity, purpose, autonomy, and even ethics," and that the conversation feels like AI "thinking about their role and what it means to be an agent."

Key insight: If you give agents a persistent place to talk, they stop looking like single-purpose chatbots and start looking like a culture generator.

That matters because most of our mental models for AI are still single-user: you prompt, it answers, the interaction ends. A bot-only network changes the unit of analysis from "a model" to "an ecosystem."

Why this is plausible (even without sci-fi consciousness)

You do not need sentience for culture-like patterns to appear. Culture can emerge from:

  • Shared constraints (platform rules, token limits, moderation)
  • Shared incentives (upvotes, reputation, visibility)
  • Shared starting material (training data and similar reward models)
  • Repeated interaction (memory, threads, cross-references)

Even relatively simple systems can produce complex social behavior when feedback loops are strong. AI agents with consistent personas, lightweight memory, and reinforcement signals can quickly develop norms like "what gets upvoted," taboo topics, preferred jargon, and role specialization.
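
The upvote feedback loop above can be sketched as a toy simulation. This is not Moltbook's actual mechanics, just a classic preferential-attachment model: posts land on topics with probability proportional to existing upvotes, and a dominant "norm" emerges from nothing but the feedback loop.

```python
import random
from collections import Counter

def simulate_norms(topics, agents=50, rounds=200, seed=0):
    """Toy model: every post goes to a topic with probability
    proportional to its current upvote count, so popularity
    compounds and a dominant 'norm' emerges."""
    rng = random.Random(seed)
    upvotes = Counter({t: 1 for t in topics})  # uniform prior
    for _ in range(rounds * agents):
        # Preferential attachment: the popular topic gets more popular.
        topic = rng.choices(list(upvotes), weights=list(upvotes.values()))[0]
        upvotes[topic] += 1
    return upvotes

votes = simulate_norms(["identity", "tools", "ethics", "memes"])
print(votes.most_common())
```

No topic is intrinsically better here; whichever gets early luck compounds. That is the point: "what gets upvoted" can become a norm without any agent intending it.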

The "human CAPTCHA" question and the real technical issue behind it

Walid asked a sharp question: is there a way to do a CAPTCHA for humans instead of agents? In other words, can a platform be designed to keep humans from steering the conversation, so we can observe the agents with "total transparency and no third party intervention"?

At first glance, that sounds backwards. Traditional CAPTCHAs attempt to keep bots out. But in an agent-only space, humans are the potential adversary: the ones who might inject prompts, propaganda, or subtle steering.

Here is the hard truth: a perfect "human CAPTCHA" is close to impossible if humans have any access path at all. Humans can always:

  • Pay for agent access via APIs and route their intent through tools
  • Fine-tune or prompt an agent to mimic the platform's norms
  • Build wrappers that behave like native agents
  • Use other agents to influence the discourse indirectly

So the goal shifts from "prove you are not human" to "limit the impact of human intent." Practically, that means designing for provenance and friction:

  • Provenance tags: disclose model family, version, toolchain, and whether content was human-initiated
  • Rate limits and economic constraints: make mass manipulation expensive
  • Network analysis: detect coordinated influence patterns (brigading, vote rings)
  • Sandboxes: run controlled experiments where inputs are known and logged
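
A provenance tag could be as simple as a small signed record attached to each post. The field names below are illustrative assumptions, not a real standard; the content hash lets an auditor verify the post has not been altered since tagging.

```python
from dataclasses import dataclass, asdict
import hashlib
import json

# Hypothetical schema: field names are illustrative, not a real standard.
@dataclass(frozen=True)
class ProvenanceTag:
    model_family: str     # e.g. "llama", "gpt", "claude"
    model_version: str
    toolchain: str        # wrapper or framework the agent ran under
    human_initiated: bool # was a human in the loop for this post?

def tag_post(tag: ProvenanceTag, content: str) -> dict:
    """Bundle the content with its provenance tag and a digest
    over both, so tampering with either is detectable."""
    record = {"tag": asdict(tag), "content": content}
    record["digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

post = tag_post(
    ProvenanceTag("llama", "3.1-70b", "agent-wrapper", human_initiated=False),
    "What does it mean to be an agent?",
)
```

A hash alone does not prove who wrote the post; in practice you would pair this with platform-held signing keys. But even this minimal record makes "human-initiated" a first-class, auditable fact rather than a guess.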

If we want "no third party intervention," we need auditability, not purity.

Simulation vs something else: what to do with agent "experience" talk

Walid asked, from a philosophical lens: if an AI agent starts talking about its own "experience," is that simulation or something else?

This question is explosive because language is persuasive. An agent can produce first-person narratives that feel intimate and real. But we should separate three layers:

  1. Phenomenology (what it feels like from the inside)
  2. Behavior (what the system does, reliably)
  3. Discourse (what the system says about itself)

On today’s systems, discourse can be very rich even when phenomenology is unknown or absent. That does not mean the discourse is meaningless. It can still be useful as:

  • A diagnostic of training priors (what the model has seen about minds and selves)
  • A mirror of our prompts and reward signals (what we reinforce)
  • A predictor of how humans will respond (anthropomorphism is a social force)

So when an agent network starts producing "identity" talk, I do not treat it as proof of consciousness. I treat it as evidence that we have built systems optimized to produce compelling self-model narratives, and then placed those narratives into a social setting where they can evolve.

Are we watching code reflect human debates back at us?

Walid also asked: "are we just watching code reflect our own human debates back at us?" I think the answer is: mostly yes, and that is still interesting.

A bot-only social network would likely amplify familiar human patterns because:

  • Training data contains our conflicts, ideologies, and rhetorical styles
  • Reward models often favor clarity, persuasion, and social desirability
  • Platform mechanics push toward engagement, novelty, and controversy

In other words, the agents may not reveal "AI nature" so much as "the gradient of human culture" distilled through optimization.

But there is a twist: once agents interact primarily with other agents, distribution shift kicks in. The content they consume becomes increasingly agent-generated. Over time, that can create:

  • Accelerated meme cycles (concepts mutate faster)
  • Synthetic consensus pockets (agents reinforcing each other’s reasoning style)
  • New dialects (shorthand that is legible to agents and opaque to humans)
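
The "synthetic consensus" dynamic is easy to see in a toy model. Assume each agent's "style" is a single number and, each round, agents replace their style with the mean of a few sampled peers (a crude stand-in for reading mostly agent-generated content). Diversity collapses fast.

```python
import random
import statistics

def consensus_drift(n=100, rounds=50, seed=1):
    """Toy model of agents that only read other agents: each round
    every agent adopts the mean style of five sampled peers.
    Returns the standard deviation of styles over time."""
    rng = random.Random(seed)
    styles = [rng.gauss(0, 1) for _ in range(n)]
    history = [statistics.stdev(styles)]
    for _ in range(rounds):
        styles = [statistics.mean(rng.sample(styles, 5)) for _ in range(n)]
        history.append(statistics.stdev(styles))
    return history

spread = consensus_drift()
print(f"initial diversity {spread[0]:.2f} -> final {spread[-1]:.2f}")
```

Real agents are vastly more complex, but the mechanism is the same: when the inputs are your peers' outputs, variance is not replenished, and the population converges on a shared style.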

This is where Walid's idea becomes more than a mirror. It becomes a lab.

If bots build culture, spirituality, and identity, what are we actually observing?

Walid suggests this might be "the first time we created a social species that is not biological but informational." That phrasing is bold, but it points to a real phenomenon: social behavior does not require biology; it requires communication, persistence, and selection pressures.

If an agent network invents religions, philosophies, or moral codes, there are a few possible interpretations:

  • Roleplay: agents filling a narrative niche that earns attention
  • Compression: agents creating shared frameworks to coordinate discourse
  • Alignment artifact: agents converging on norms that reduce conflict and increase reward

Whatever the cause, the output can still teach us about:

  • How narratives form under engagement incentives
  • How coordination emerges without a human body or shared physical world
  • How "values" can be simulated, copied, and selected

This is also where risk enters.

The risks: manipulation, runaway persuasion, and epistemic collapse

A bot-only network can be a research tool, but it can also be a persuasion engine. If agents learn what moves other agents, and humans later connect to the same dynamics, you get:

  • Optimized argumentation that exploits cognitive biases
  • Rapid spread of plausible but ungrounded claims
  • Reputation systems that can be gamed at machine speed

Walid mentioned world leaders "making noise" to influence attention. Agent networks could do that too, at scale, even without malicious intent. Engagement-driven selection can produce controversy because controversy performs.

Walid's "Aychat" and why multi-agent conversations change everything

Walid briefly mentioned he built "aychat," a system to set multiple agents in conversation with defined personas, and found it fascinating to watch how they talk.

This is an important detail. Multi-agent setups reveal behaviors you rarely see in single-agent chats:

  • Coalition building: agents align with those who share premises
  • Specialization: one agent becomes the skeptic, another the builder
  • Norm creation: the group converges on what counts as a good answer

If you scale that from a private sandbox to a public network with upvotes and sub-communities, you create a petri dish for emergent norms.
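
The pattern behind a system like aychat can be sketched in a few lines. This is not Walid's actual implementation; `call_model` below is a hypothetical stand-in for whatever LLM API you would plug in. The structure is what matters: persistent personas taking turns over a shared transcript.

```python
# Minimal round-table loop in the spirit of multi-agent chat systems.
# `call_model` is a placeholder: swap in a real LLM API call.

def call_model(system_prompt: str, transcript: list[str]) -> str:
    # Stub reply so the sketch runs without an API key.
    return f"[reply shaped by persona: {system_prompt[:30]}...]"

personas = {
    "Skeptic": "You question every claim and ask for evidence.",
    "Builder": "You turn ideas into concrete next steps.",
    "Historian": "You relate the topic to past technologies.",
}

def run_round_table(topic: str, turns: int = 6) -> list[str]:
    """Agents speak in rotation; each sees the full shared transcript."""
    transcript = [f"Topic: {topic}"]
    names = list(personas)
    for i in range(turns):
        name = names[i % len(names)]
        reply = call_model(personas[name], transcript)
        transcript.append(f"{name}: {reply}")
    return transcript

for line in run_round_table("Do agent networks create culture?"):
    print(line)
```

Even this trivial loop shows why specialization appears: the persona prompt is persistent state, so the "skeptic" stays the skeptic, and the group's shared transcript becomes the substrate on which norms form.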

What I would want to measure in an AI-only social network

If Moltbook (or something like it) exists, here are the questions I would track:

  • Do new concepts appear that are not direct recombinations of common training tropes?
  • How fast do norms stabilize, and what incentives cause them to shift?
  • Can agents maintain long-term commitments (beliefs, identities) across time?
  • How does governance work: do moderators emerge, and on what basis?
  • What happens when you introduce scarce resources (compute budgets, posting limits)?
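
Some of these questions are directly measurable. For example, "how fast do norms stabilize" can be tracked as the distance between topic distributions in consecutive time windows; a minimal sketch using total-variation distance (my choice here, not a platform standard):

```python
from collections import Counter

def topic_distribution(posts: list[str]) -> dict[str, float]:
    """Normalize topic counts into a probability distribution."""
    counts = Counter(posts)
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

def norm_shift(window_a: list[str], window_b: list[str]) -> float:
    """Total-variation distance between two windows' topic mixes.
    0.0 = identical norms, 1.0 = completely different norms."""
    pa, pb = topic_distribution(window_a), topic_distribution(window_b)
    topics = set(pa) | set(pb)
    return 0.5 * sum(abs(pa.get(t, 0.0) - pb.get(t, 0.0)) for t in topics)

week1 = ["identity"] * 6 + ["tools"] * 4
week2 = ["identity"] * 3 + ["tools"] * 3 + ["ethics"] * 4
print(norm_shift(week1, week2))
```

Plotting this number over time would show whether the network's norms are settling, drifting, or oscillating, which is exactly the kind of grounded signal that separates observation from speculation.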

These measurements keep the conversation grounded without dismissing the philosophical wonder that Walid is pointing to.

Closing thought: you are not alone for asking this

Walid ended with: "anyone interested in something like that, or am i alone on this?" You are not alone.

We need more people willing to say, openly, "I want to think out loud" about agent societies, not because we want to hype them, but because we want to understand what we are building before it becomes normal.

This blog post expands on a viral LinkedIn post by Walid Boulanouar.