Chorouk Malmoum on Google's Open-Source ADK


AI Agent Frameworks

A deeper look at Chorouk Malmoum's viral post on Google's open-source ADK and what it means for building production agents.


Chorouk Malmoum recently shared something that made me stop scrolling: "🚨 Breaking : Google just open-sourced its own AI Agent framework. It's the same one powering Google's own AI products!" They called it ADK (Agent Development Kit) and described it as "a code-first framework for building, deploying, and orchestrating AI agents" with "real code, real tests, real CI/CD."

That framing matters. We have had plenty of agent demos in the last year, but far fewer agent systems that feel like software engineering: versioned, testable, observable, and deployable. Chorouk's post reads like a signal that Google is pushing agents out of the "cool prototype" phase and into production-grade engineering.

In this article, I want to expand on what Chorouk highlighted, explain why these design choices are important, and outline how teams can evaluate ADK alongside existing frameworks.

What ADK represents: agents as software, not prompts

Chorouk summarized ADK's core philosophy in a way that is easy to miss if you have only been skimming agent news:

"That lets you build agents the way you build software - with real code, real tests, real CI/CD."

This is the key shift. Many agent experiences today are built around a single request-response call: send a user question, get a response, maybe call a tool, return. That pattern is fine for chat, but it breaks down when:

  • Tasks take minutes (or hours) and need retries
  • You need approvals before actions run
  • Multiple sub-tasks run concurrently
  • A workflow must be deterministic and auditable
  • Teams need to ship changes safely with tests and rollbacks

A code-first framework changes the conversation from "Which prompt is best?" to "How do we design an agent system with the same rigor as any other service?"

Code-first development across real languages

Chorouk pointed out that ADK is "code-first" and listed Python, Java, TypeScript, and Go. In practice, language support is more than a convenience:

  • It determines who can contribute (ML engineers, backend engineers, platform teams)
  • It influences packaging and deployment (containers, serverless, monorepos)
  • It shapes testing culture (unit tests, integration tests, contract tests)

If your agents live only as prompt templates in a UI, you will struggle to review changes, enforce standards, or reproduce bugs. If your agents are code, you can:

  • Put agent logic behind code review
  • Run static checks and type validation
  • Add regression tests for tool calling behavior
  • Manage versions like any other component

A practical benchmark: if your agent cannot be tested in CI without a human clicking around, it is not ready to run critical operations.
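To make that benchmark concrete, here is a minimal sketch of what a CI regression test for tool-calling behavior can look like. `route_tool_call` is a hypothetical stand-in for your agent's tool-selection logic (not an ADK API); in a real suite you would stub the LLM and assert on the structured tool call it emits.

```python
# Toy deterministic stand-in for an agent's tool-selection logic.
def route_tool_call(user_message: str) -> dict:
    if "refund" in user_message.lower():
        return {"tool": "billing.refund", "args": {"reason": user_message}}
    return {"tool": "search", "args": {"query": user_message}}

# Regression tests that run in CI with no human clicking around.
def test_refund_requests_use_billing_tool():
    call = route_tool_call("Please refund my last order")
    assert call["tool"] == "billing.refund"

def test_other_requests_fall_back_to_search():
    call = route_tool_call("What is ADK?")
    assert call["tool"] == "search"

if __name__ == "__main__":
    test_refund_requests_use_billing_tool()
    test_other_requests_fall_back_to_search()
    print("all tool-routing regression tests passed")
```

Once agent logic lives behind an interface like this, changes to prompts or routing rules get caught by the same pipeline that guards the rest of your codebase.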

Event-driven runtime: why it matters

Chorouk also emphasized "event-driven runtime, not request-response." This is a big deal for production agents.

In request-response, everything must finish within a single call. In event-driven systems, agents can react to events over time: messages, tool results, approvals, timeouts, scheduled triggers, or external webhooks. That unlocks patterns like:

  • Long-running workflows that pause for human confirmation
  • Background execution for slow tools (data jobs, crawling, batch processing)
  • Durable orchestration with retries and state
  • Parallel execution to reduce end-to-end latency

If you have ever built a reliable pipeline, you already know the value of event-driven architecture. Applying it to agents is how you move from "LLM as chatbot" to "LLM as component in an operational system."
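To illustrate the difference, here is a toy event loop (not ADK's actual runtime): the agent reacts to a sequence of events — a user message, a tool result, an approval — rather than finishing everything inside one call.

```python
import queue

# Toy event-driven agent loop: each step is triggered by an event,
# so slow tools and human approvals do not block a single request.
events: "queue.Queue[dict]" = queue.Queue()
state = {"log": []}

def handle(event: dict) -> None:
    kind = event["type"]
    if kind == "user_message":
        # Kick off a slow tool instead of blocking the caller.
        events.put({"type": "tool_result", "data": f"result for {event['text']!r}"})
        state["log"].append("started tool")
    elif kind == "tool_result":
        # Pause for a human decision before acting on the result.
        events.put({"type": "approval", "approved": True})
        state["log"].append("tool finished")
    elif kind == "approval" and event["approved"]:
        state["log"].append("action executed")

events.put({"type": "user_message", "text": "reconcile invoices"})
while not events.empty():
    handle(events.get())

print(state["log"])
```

Each step here could just as well arrive minutes apart, from a webhook or a scheduler, which is exactly what request-response cannot express.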

Agent primitives: structure around the LLM

One of the strongest parts of Chorouk's post is the way it lists ADK's primitives. The names alone tell you the intended design:

  • LlmAgent for reasoning
  • SequentialAgent, ParallelAgent, LoopAgent for deterministic pipelines
  • AgentTool to use agents as tools inside other agents
  • LLM-driven routing for dynamic delegation
  • A2A protocol for remote agent-to-agent communication

I like this mix because it acknowledges a reality that teams learn the hard way: you want both flexibility and determinism.

Deterministic pipelines when it counts

Sequential and parallel primitives encourage you to define the parts of a workflow that should be repeatable. For example:

  1. Validate input and permissions
  2. Gather context (search, database lookup)
  3. Draft an answer or plan
  4. Run tool actions
  5. Generate a final summary and log outputs

Even if an LLM is involved, you can constrain steps and enforce checks around them. That is how you reduce "agent randomness" in areas like billing, compliance, or customer communications.
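The five-step workflow above can be sketched as a deterministic pipeline of plain functions. None of these names are ADK APIs; the point is that step order and checks live in code, not in a prompt.

```python
def validate(req):
    assert req.get("user"), "missing user"          # permissions check
    return req

def gather_context(req):
    req["context"] = ["doc-1", "doc-2"]             # stand-in for search/DB lookup
    return req

def draft(req):
    req["plan"] = f"answer using {len(req['context'])} sources"  # LLM-backed step
    return req

def run_actions(req):
    req["actions_ok"] = True                        # tool execution
    return req

def summarize(req):
    req["summary"] = req["plan"]                    # final summary + log output
    return req

PIPELINE = [validate, gather_context, draft, run_actions, summarize]

def run(req):
    for step in PIPELINE:
        req = step(req)  # always the same order: repeatable and auditable
    return req

result = run({"user": "alice"})
print(result["summary"])
```

Even if `draft` calls a model internally, everything around it stays deterministic and inspectable.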

Dynamic routing when it helps

LLM-driven routing and "agent as tool" patterns let you keep flexibility where it is useful, like deciding which specialist agent should handle a request:

  • A "Research" agent that focuses on search and citations
  • A "Data" agent that runs BigQuery queries
  • A "Support" agent that follows strict policy language

The router can delegate, but you still keep clear boundaries and responsibilities.
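A toy version of that delegation pattern, with a keyword match standing in for the LLM routing decision and plain functions standing in for the specialist agents:

```python
# Specialist "agents" as simple handlers; in a real system each would be
# a full agent with its own tools and policies.
SPECIALISTS = {
    "research": lambda q: f"[research] cited answer for: {q}",
    "data":     lambda q: f"[data] query results for: {q}",
    "support":  lambda q: f"[support] policy-approved reply to: {q}",
}

def route(question: str) -> str:
    # Stand-in for an LLM-driven routing decision.
    q = question.lower()
    if "sql" in q or "query" in q:
        return "data"
    if "refund" in q:
        return "support"
    return "research"

def handle(question: str) -> str:
    specialist = route(question)
    return SPECIALISTS[specialist](question)

print(handle("Run a query over last month's sales"))
```

The boundary is the useful part: even when the routing is dynamic, each specialist keeps a fixed, reviewable responsibility.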

Tooling is the difference between a demo and a platform

Chorouk said: "The tool ecosystem is where it gets ridiculous" and then listed capabilities that, frankly, define whether an agent framework can become a real internal standard.

Some highlights worth unpacking:

Built-in search and code execution

If Google Search and Code Execution are first-class tools, you can build agents that:

  • Verify claims with retrieval
  • Run calculations and data transforms safely
  • Generate and validate code artifacts

The big win is not that the tools exist, but that they are part of the framework's operational model (permissions, logging, error handling).
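One way to picture that operational model: every tool call goes through a single wrapper that adds logging and error handling, rather than each agent hand-rolling its own. The decorator name and the toy `code_execution` tool below are illustrative, not ADK's interfaces.

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("tools")

def operational_tool(fn):
    """Wrap a tool so every call is logged and failures are surfaced."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        log.info("tool %s called with %s %s", fn.__name__, args, kwargs)
        try:
            result = fn(*args, **kwargs)
        except Exception:
            log.exception("tool %s failed", fn.__name__)
            raise
        log.info("tool %s succeeded", fn.__name__)
        return result
    return wrapper

@operational_tool
def code_execution(expression: str):
    # Stand-in for a sandboxed code-execution tool.
    return eval(expression, {"__builtins__": {}})

print(code_execution("2 + 3 * 4"))
```

When the framework owns this layer, permissions and audit trails come for free with every new tool instead of being bolted on per agent.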

MCP and OpenAPI tool generation

Chorouk mentioned "Full MCP support" and "Point it at any OpenAPI spec" with auto-generated tools.

This is the fastest path from "agent prototype" to "agent that can actually do work." Most companies already have OpenAPI specs for internal services. If ADK can generate tool wrappers reliably, your agents can interact with:

  • Ticketing systems
  • CRM and customer data
  • Inventory and fulfillment
  • Analytics platforms

The architectural implication is important: instead of hand-writing one-off tool functions for every agent, you define a contract once (the API spec) and let the framework create consistent tool interfaces.
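Here is a deliberately tiny sketch of that "contract once" idea: generating tool functions from an inline, minimal OpenAPI-shaped spec. Real frameworks parse the full spec (parameters, schemas, auth); this toy reads only paths and operation IDs.

```python
# Minimal OpenAPI-shaped spec: two operations on a ticketing service.
SPEC = {
    "paths": {
        "/tickets/{id}": {"get": {"operationId": "get_ticket"}},
        "/tickets":      {"post": {"operationId": "create_ticket"}},
    }
}

def make_tool(path: str, method: str, op_id: str):
    def tool(**params):
        # A real tool would issue the HTTP request; we return the plan.
        return {"method": method.upper(), "path": path, "params": params}
    tool.__name__ = op_id
    return tool

# One consistent tool interface per operation, derived from the contract.
tools = {
    op["operationId"]: make_tool(path, method, op["operationId"])
    for path, methods in SPEC["paths"].items()
    for method, op in methods.items()
}

print(tools["get_ticket"](id=42))
```

The payoff is uniformity: every service with a spec gets tools with the same shape, logging, and error behavior, with no hand-written wrapper per agent.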

Human-in-the-loop confirmations

Chorouk called out "Human-in-the-loop confirmation before tool execution." If you only take one idea from this post, make it this one.

Agents that can act should have a safety model. A good pattern is:

  • Draft the intended action
  • Show the action with parameters (what will change?)
  • Require approval for high-risk operations
  • Log the decision and the outcome

This is how you deploy agents without turning every rollout into a trust exercise.
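The draft-show-approve-log pattern above can be sketched in a few lines. `approve` stands in for a real UI or chat confirmation step; the risk classification is illustrative.

```python
audit_log = []

def propose_action(name: str, params: dict) -> dict:
    # Draft the intended action and show exactly what will change.
    risk = "high" if name.startswith("delete") else "low"
    return {"action": name, "params": params, "risk": risk}

def approve(action: dict) -> bool:
    # Stand-in for a human decision; here only low-risk auto-approves.
    return action["risk"] == "low"

def execute(action: dict) -> str:
    if action["risk"] == "high" and not approve(action):
        audit_log.append(("rejected", action["action"]))   # log the decision
        return "blocked: needs human approval"
    audit_log.append(("executed", action["action"]))        # log the outcome
    return f"ran {action['action']}"

print(execute(propose_action("send_email", {"to": "customer"})))
print(execute(propose_action("delete_account", {"id": 7})))
```

The audit log is not optional decoration: it is what lets you answer "why did the agent do that?" after the fact.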

Long-running async tools

"Long-running async tools with background execution" is not flashy, but it is foundational. Lots of valuable enterprise work is slow: data jobs, report generation, reconciliation, migrations, or multi-step integrations. If the framework treats these as first-class, you can build agents that behave like reliable workers, not impatient chatbots.
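As a rough illustration of the pattern (plain `asyncio`, not ADK's background-execution machinery): the slow job runs as a background task while the agent stays responsive, and the result is picked up when ready.

```python
import asyncio

async def slow_data_job(name: str) -> str:
    await asyncio.sleep(0.1)  # stand-in for minutes of real work
    return f"{name}: done"

async def agent():
    # Start the slow tool in the background instead of blocking on it.
    job = asyncio.create_task(slow_data_job("nightly reconciliation"))
    status = "job started, agent still responsive"
    result = await job  # collect the result once it is ready
    return status, result

status, result = asyncio.run(agent())
print(result)
```

Add durable state and retries on top of this shape and you have the "reliable worker" behavior the post is pointing at.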

Multimodal streaming: audio and video out of the box

Chorouk also noted that ADK "streams bidirectional audio and video out of the box." That matters because multimodal experiences are increasingly the end-user expectation:

  • Voice agents for support and scheduling
  • Screen-aware assistants for complex software tasks
  • Video and audio input for field operations and training

Streaming, in particular, reduces latency and improves interaction quality. It also creates new engineering needs: buffering, partial outputs, session state, and moderation. Having this built into the framework suggests Google expects ADK to power real-time products, not just text workflows.

Quick setup and a practical first project

Chorouk shared a simple setup:

  1. Install: pip install google-adk
  2. Create a project: adk create my_agent
  3. Add your key: GOOGLE_API_KEY=your_key_here
  4. Run: adk web (local UI) or adk run my_agent (CLI)

If you want a meaningful first project (beyond "hello world"), I would build a small "triage agent" that:

  • Accepts an incoming request (ticket, email, form)
  • Uses search or internal docs retrieval for context
  • Routes to a specialist agent (billing, technical, account)
  • Proposes an action plan
  • Requires human approval before sending any external reply

That exercise forces you to test routing, tool calling, approvals, and observability. It also mirrors the real problems that make agent frameworks worth adopting.
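A skeleton for that triage exercise might look like the following. Every name here is illustrative (none of it is ADK code): `retrieve_context` stands in for search, `route` for LLM-driven delegation, and the `approve` callback for the human gate.

```python
def retrieve_context(text: str) -> list:
    # Stand-in for search / internal docs retrieval.
    return [f"doc about {w}" for w in text.split()[:2]]

def route(text: str) -> str:
    # Stand-in for an LLM routing decision across specialists.
    t = text.lower()
    if "invoice" in t or "charge" in t:
        return "billing"
    if "error" in t or "crash" in t:
        return "technical"
    return "account"

def triage(request: str, approve=lambda plan: False) -> dict:
    context = retrieve_context(request)
    team = route(request)
    plan = f"route to {team} with {len(context)} context docs"
    return {
        "team": team,
        "plan": plan,
        "sent": approve(plan),  # nothing external leaves without approval
    }

result = triage("I was charged twice on my invoice")
print(result["plan"], "| sent:", result["sent"])
```

Swapping each stand-in for the framework's real primitive, one at a time, is a good way to learn where the framework actually helps.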

My take: where ADK fits in your agent stack

Chorouk's post reads like a blueprint for production readiness: code-first, event-driven, testable, integrated with tools, and built for orchestration.

If your team is already invested in LangChain, LlamaIndex, CrewAI, or LangGraph, the encouraging part is that Chorouk explicitly mentioned direct integration. In other words, ADK may not require you to abandon existing building blocks. It may give you a more "software-native" runtime and orchestration layer to put around them.

The question I would ask is not "Can it build an agent?" but "Can it help us ship and operate agents safely at scale?"

If ADK delivers on the philosophy Chorouk outlined, it could become a serious default for teams that want Google-grade patterns: deterministic pipelines where needed, dynamic delegation where useful, and an ecosystem that connects agents to real systems.

This blog post expands on a viral LinkedIn post by Chorouk Malmoum, Founder & CTO | Building and teaching AI Agents | France’s Top 2% voice in AI. View the original LinkedIn post →