Walid Boulanouar and the Rise of Chat-Run Ops

AI Agents & Automation

A deep dive into Walid Boulanouar's viral post on 24/7 AI employees and why chat is becoming the frontend for infrastructure.

LinkedIn content, viral posts, content strategy, AI agents, workflow automation, ChatOps, n8n, tool calling, social media marketing

Walid Boulanouar recently shared something that caught my attention: "we hit the '24/7 ai employee' moment already." He followed it with a painfully relatable build log: "two months ago i built a scrappy poc. it was basically this: telegram → n8n → agent → ssh → claude code." And then the line that matters most for anyone running systems: he wanted "access to anything" from his phone, not just a chatbot but a persistent helper that can run commands.

That post reads like a snapshot of where ops, engineering, and automation are heading right now. The tooling is getting better, packaging is getting cleaner, and the behavior change is already here: we are starting to treat chat as the UI for real work.

The real shift: chat as the frontend for infrastructure

Walid summed it up in one sentence: "chat is becoming the frontend for infrastructure." That is bigger than any single bot trend.

For years, the "frontend" for infrastructure meant:

  • a terminal
  • a CI pipeline
  • dashboards
  • tickets and runbooks

Now we are watching a new layer form: conversational interfaces that route intent to tools, keep state, and return results. Not because it is cute, but because it reduces friction. If I can trigger a safe, approved workflow in 10 seconds from a chat message, I stop deferring it. And if it can run while I sleep, it stops being a task that needs babysitting.

"message in. route it. keep a session id. let an agent call tools. send the answer back."

That is basically ChatOps 2.0: the interface is chat, the brain is an agent, and the hands are tools.
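
To make that concrete, here is a minimal Python sketch of the loop. Every name in it (the in-memory store, route_command, run_agent) is a placeholder I am assuming for illustration, not something from Walid's build:

```python
import uuid

# In-memory stand-in for a sessions table; a real build would persist this.
SESSIONS: dict[str, dict] = {}

def route_command(session: dict, text: str) -> str:
    # Placeholder for explicit commands such as /session new, list, delete.
    return f"(handled command {text!r})"

def run_agent(session: dict, text: str) -> str:
    # Placeholder: this is where the agent would plan and call tools.
    return f"(agent reply to {text!r} in session {session['id']})"

def handle_message(user_id: str, text: str) -> str:
    """message in -> route it -> keep a session id -> let an agent call tools -> answer back"""
    session = SESSIONS.setdefault(user_id, {"id": str(uuid.uuid4()), "history": []})
    if text.startswith("/"):
        return route_command(session, text)
    reply = run_agent(session, text)
    session["history"].append({"user": text, "agent": reply})
    return reply
```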

Walid's scrappy POC is a blueprint, not a gimmick

What I like about Walid's breakdown is that it is not magical. It is a clean architecture you can reason about:

1) A channel for requests

He used Telegram as the input. It could be Slack, Teams, Discord, SMS, a web widget, or even email. The key is that messages come in reliably and you can authenticate users.
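
As a rough sketch of that inbound layer, assuming FastAPI and a Telegram-style webhook payload; handle_message is the loop from the sketch above, and the hard-coded allowlist is a stand-in for real authentication:

```python
from fastapi import FastAPI, HTTPException, Request

app = FastAPI()
ALLOWED_CHAT_IDS = {123456789}  # placeholder: only these chats may drive the bot

@app.post("/telegram/webhook")
async def telegram_webhook(request: Request):
    update = await request.json()
    message = update.get("message") or {}
    chat_id = (message.get("chat") or {}).get("id")
    text = message.get("text", "")
    if chat_id not in ALLOWED_CHAT_IDS:
        raise HTTPException(status_code=403, detail="unknown chat")
    reply = handle_message(str(chat_id), text)  # the loop from the earlier sketch
    # Post `reply` back to the user with the Bot API's sendMessage call (omitted here).
    return {"ok": True}
```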

2) Routing and command handling

Walid mentioned "telegram webhook + command routing (/session new, list, delete)." That matters because free-form chat alone is not enough for dependable operations. You need explicit affordances for:

  • creating a new session when you want a clean context
  • listing active sessions
  • deleting sessions when the task is done

In practice, this is the difference between an assistant and an operator interface.
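
Here is a fuller sketch of that routing in Python, mirroring Walid's /session new, list, delete commands; the in-memory store is a stand-in for a real table:

```python
import uuid

def route_command(store: dict[str, list[str]], user_id: str, text: str) -> str:
    """Explicit affordances: /session new, /session list, /session delete <id>."""
    parts = text.split()
    sessions = store.setdefault(user_id, [])
    if parts[:2] == ["/session", "new"]:
        sid = str(uuid.uuid4())
        sessions.append(sid)
        return f"started session {sid}"
    if parts[:2] == ["/session", "list"]:
        return "\n".join(sessions) or "no active sessions"
    if parts[:2] == ["/session", "delete"] and len(parts) == 3:
        if parts[2] in sessions:
            sessions.remove(parts[2])
            return f"deleted session {parts[2]}"
        return f"no session {parts[2]}"
    return "unknown command"
```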

3) Sessions so context sticks

He stored UUID sessions in a table "so context actually sticks." This is the unsexy part that makes the whole thing work. If you are going to run multi-step workflows (debug a service, change a config, open a PR, deploy), you need state.

A simple session record can store:

  • who is requesting
  • what system or repo the session is bound to
  • the current step and artifacts (logs, links, file paths)
  • a short memory summary to keep prompts small
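
Sketched as a Python dataclass, with field names that are illustrative rather than anything from the original post:

```python
from dataclasses import dataclass, field

@dataclass
class SessionRecord:
    session_id: str                     # UUID the chat commands refer to
    requester: str                      # who is asking
    target: str                         # the system or repo the session is bound to
    current_step: str = ""              # where the workflow currently stands
    artifacts: list[str] = field(default_factory=list)  # logs, links, file paths
    summary: str = ""                   # short rolling summary to keep prompts small
```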

4) An agent with memory

The agent layer translates intent into tool calls. This is where you decide how "autonomous" you want it to be. For most teams, the sweet spot is not full autonomy. It is: propose, confirm, execute, report.
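
A bare-bones sketch of that propose-confirm-execute-report loop; the risky-action set and the approve callback are assumptions standing in for whatever confirmation flow your chat channel supports:

```python
RISKY_ACTIONS = {"deploy", "restart_service"}   # assumption: your own list of risky verbs

def execute_plan(plan: list[dict], approve) -> list[str]:
    """propose -> confirm -> execute -> report, one step at a time"""
    report = []
    for step in plan:
        if step["action"] in RISKY_ACTIONS and not approve(step):
            report.append(f"skipped (not approved): {step['action']}")
            continue
        result = step["run"]()          # the actual tool call
        report.append(f"{step['action']}: {result}")
    return report

# In practice, `approve` would post the step back to chat and wait for an
# explicit yes from an allowlisted user before the agent continues.
```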

5) Tool execution that can do real work

Walid used "tool execution via ssh into claude code." The important point is not the brand of model or environment. It is that the agent can:

  • run commands
  • access a repo
  • read and edit files
  • open pull requests
  • query logs and metrics
  • restart services (with guardrails)

In other words, it can produce outcomes, not just answers.

Why people are buying machines for this

Walid joked that "people are acting and buying mac mini" in response to the newer trending solution. That reaction makes sense because always-on agents need a home:

  • a dedicated machine reduces conflicts with your personal environment
  • a local executor can access local credentials, VPN, and internal tools
  • you can keep sensitive operations off shared SaaS runners

But there is a more subtle reason: if chat is the frontend, then the agent runtime is the backend. A stable, secure runtime becomes infrastructure.

The "new thing" vs the underlying pattern

Walid pointed out that the trending solution "just packages it cleanly + enterprise level security" with "always-on gateway, local exec tools, better security, more channels."

Packaging matters. Security matters even more. But the core pattern is consistent:

  1. Inbound message
  2. Policy checks and routing
  3. Session memory
  4. Tool calls
  5. Output back to chat

If you understand that pattern, you can evaluate any vendor or build your own.

What to automate first: the workflow you currently babysit

Walid's advice is practical: "pick one workflow you currently 'babysit' and make it tool-callable. run it behind a bot. add allowlists. ship the first version today."

I would translate that into a simple selection rule:

  • high frequency (weekly or more)
  • low creativity (repeatable steps)
  • clear success criteria (done means done)
  • bounded blast radius (mistakes are reversible)

Good first candidates:

  • log triage: collect logs for a service and summarize errors from the last N minutes
  • deploy prep: run tests, check changelog, verify environment variables, post a checklist
  • incident hygiene: create a timeline doc, open the right channels, gather metrics links
  • repo chores: bump a dependency, run lint, open a PR, request review
  • customer ops: pull account info, generate a draft reply, create a ticket with context

A safe reference architecture (that you can ship fast)

If you want to follow Walid's lead without building a science project, aim for this minimal design:

Control plane: identity and authorization

  • allowlist users and channels
  • map users to roles (viewer, operator, admin)
  • require confirmation for risky actions
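
A minimal sketch of that control plane; the users, roles, and tool names here are made up for illustration:

```python
ROLES = {"alice": "admin", "bob": "operator", "carol": "viewer"}          # illustrative
REQUIRED_ROLE = {"get_service_logs": "viewer", "run_ci": "operator", "deploy": "admin"}
RANK = {"viewer": 0, "operator": 1, "admin": 2}

def authorize(user: str, tool: str) -> bool:
    """Allowlist first, then role check; anything unknown is denied."""
    role = ROLES.get(user)
    needed = REQUIRED_ROLE.get(tool)
    if role is None or needed is None:
        return False
    return RANK[role] >= RANK[needed]
```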

Execution plane: tools with narrow contracts

Instead of giving the agent a raw shell, wrap actions as tools:

  • "get_service_logs(service, since)"
  • "run_ci(repo, branch)"
  • "open_pr(repo, title, diff)"
  • "deploy(service, version, env)"

Then enforce:

  • parameter validation
  • timeouts
  • output limits
  • audit logs
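
Here is one such tool sketched with those guardrails applied, assuming journalctl as the log backend (swap in whatever your stack actually uses):

```python
import json
import subprocess
import time

MAX_OUTPUT = 4000   # characters returned to the agent / chat

def get_service_logs(service: str, since: str) -> str:
    """Narrow contract: validated parameters, a timeout, truncated output, an audit entry."""
    # Crude validation; tighten to whatever your service names actually look like.
    if not service.replace("-", "_").isidentifier() or not since.endswith(("m", "h")):
        raise ValueError("invalid parameters")
    result = subprocess.run(
        ["journalctl", "-u", service, "--since", f"-{since}", "--no-pager"],
        capture_output=True, text=True, timeout=30,
    )
    audit = {"tool": "get_service_logs", "service": service, "since": since,
             "rc": result.returncode, "ts": time.time()}
    print(json.dumps(audit))            # in practice, append to a real audit log
    return result.stdout[:MAX_OUTPUT]
```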

State: sessions and summaries

Store session metadata and a running summary. Do not rely on the model to remember everything. Your database is the source of truth.
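
A small sketch of that idea, assuming SQLite as the stand-in store; the schema mirrors the session record above:

```python
import sqlite3

conn = sqlite3.connect("agent_state.db")   # assumption: SQLite as the stand-in database
conn.execute("""
    CREATE TABLE IF NOT EXISTS sessions (
        session_id   TEXT PRIMARY KEY,
        requester    TEXT NOT NULL,
        target       TEXT NOT NULL,
        current_step TEXT,
        summary      TEXT
    )
""")
conn.commit()

def update_summary(session_id: str, summary: str) -> None:
    """The rolling summary lives in the database, not in the model's context window."""
    conn.execute("UPDATE sessions SET summary = ? WHERE session_id = ?", (summary, session_id))
    conn.commit()
```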

Observability: treat the agent like production software

Track:

  • who asked
  • what tools were called
  • what changed
  • what failed
  • how long it took

If you cannot audit it, you cannot trust it.
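
A structured audit record covering exactly those fields might look like this sketch (the field names are mine, not from the post):

```python
import json
import time

def audit_event(user: str, tool: str, params: dict, changed: str,
                error: str | None, started: float) -> None:
    """One structured record per tool call: who, what, what changed, what failed, how long."""
    entry = {
        "who": user,
        "tool": tool,
        "params": params,
        "changed": changed,             # e.g. "opened a PR", "restarted the service"
        "failed": error,                # None when the call succeeded
        "duration_s": round(time.time() - started, 2),
        "ts": time.time(),
    }
    print(json.dumps(entry))            # ship to whatever log pipeline you trust
```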

The human part: you are still the operator, just faster

Walid's post is not really saying "replace people." It is saying that many "operator behaviors" can be turned into callable tools. The agent becomes a force multiplier.

A good early operating model is:

  1. You ask the bot to propose a plan
  2. The bot lists steps and the tools it will use
  3. You approve step-by-step (or approve the whole plan)
  4. The bot executes, posts evidence, and links artifacts

That is how you get the "24/7" benefit without turning your infrastructure into a slot machine.

The goal is not more chat. The goal is less babysitting.

One small challenge to try this week

Take Walid's prompt literally: choose one task you repeat weekly.

  • Write the steps down as a checklist.
  • Turn each step into a tool call (even if it just runs a script).
  • Put it behind a bot with an allowlist.
  • Add a session id so you can continue the workflow later.
  • Ship v1 today, then harden it.

When you do that once, you will start seeing other parts of your work as "callable." And that is the real shift Walid is pointing at.

This blog post expands on a viral LinkedIn post by Walid Boulanouar. View the original LinkedIn post →