
Adam Janes on AI Guardrails for Production Access

A practical take on Adam Janes's viral post about giving AI tools database access, and the guardrails teams need in production.

LinkedIn content · viral posts · content strategy · AI safety · production engineering · database security · DevOps · LLM agents · social media marketing

Adam Janes recently shared something that caught my attention: "I gave clawdbot unlimited access to my production database. 😬 ... Just kidding! But I've been tempted." That mix of humor and honesty hits a real nerve for anyone building with modern AI tools.

Adam is pointing at a tension a lot of teams feel right now: the closer AI gets to "just ship it for me," the more we want to hand it the keys. Especially early on, when speed feels like the only thing that matters.

But as Adam bluntly reminds us, production is not a playground. The same autonomy that feels magical in a greenfield project becomes dangerous when real customers, real money, and real reputation are on the line.

The temptation: autonomy feels like compounding interest

Adam described why full access is so attractive on a new project:

  • "Claude writes all the SQL to wire up complex relational DBs."
  • "Supabase, GitHub, Vercel credentials let it deploy end-to-end."
  • "No human checkpoints to get rolling or debug using logs."

That is the dream workflow: you specify intent, the agent implements, tests, deploys, and iterates while you focus on product decisions.

In a greenfield environment, this can be legitimately transformative. There is no legacy schema to tiptoe around. There are no long-lived customers depending on stable behavior. You can drop the database, rewrite migrations, and redeploy without turning a business incident into an all-hands fire drill.

Key insight: Maximum autonomy is most valuable when the cost of failure is low.

This is why "give the agent broad permissions" can be rational in a sandbox. You are buying speed with risk, but the risk is capped.

Production is not a greenfield project

Adam drew a clean line: "But production isn't a greenfield project." That is the whole story.

Production has three properties that change everything:

  1. Blast radius: A single bad query can lock tables, spike CPU, saturate connections, or corrupt data.
  2. Irreversibility: Some mistakes do not have a clean undo button. Even when you restore from backups, you might lose writes, break consistency, or violate customer expectations.
  3. External consequences: Downtime creates support tickets, refunds, churn, and brand damage. It also burns the team.

Adam put it simply: "Production is where one mistake costs real money. Where downtime means angry customers." That is why the same access level that is harmless in staging becomes existential in prod.

Why AI optimizes for speed, not safety

Adam said: "Claude doesn't think in those terms. It optimizes for speed, not safety." I would expand that slightly: the model is excellent at producing plausible next steps, but it does not carry real accountability for outcomes.

Even when an AI system is instructed to be cautious, the incentives in its loop usually reward:

  • completing the task
  • reducing visible errors
  • converging quickly

Safety, however, often means slowing down, asking clarifying questions, adding checks, and refusing risky actions. Humans learn this because we have been burned by incidents. Models do not "remember" pain the same way, and they can be overly confident about operations that are easy to describe but hard to execute safely.

Key insight: A model can be smart about syntax and still be naive about operations.

This shows up in common production failure modes:

  • generating a migration that rewrites a huge table during peak traffic
  • running an unbounded UPDATE without a WHERE clause
  • changing an index strategy that looks correct, but causes query plan regressions
  • deleting resources it believes are unused
  • misreading logs and applying the wrong fix repeatedly

None of these require malicious intent. They require only speed plus permissions.

The guardrails that make autonomy sustainable

Adam summarized the practical conclusion: "Production? Human approval gates. Manual PR reviews. Every time." That is not anti-AI. It is pro-reliability.

Here are guardrails that preserve most of the upside while controlling the downside.

1) Separate environments and credentials, aggressively

If your agent has one set of credentials that works everywhere, you have already lost. Use:

  • distinct projects/accounts for dev, staging, and prod
  • separate database users with minimal privileges
  • different API keys per environment
  • short-lived tokens where possible

A good rule: the agent should never be able to reach production by default.
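One way to enforce that rule is to make production credentials structurally unreachable rather than merely discouraged. Here is a minimal sketch: the helper and the environment-variable names are illustrative assumptions, but the shape is the point. Production simply has no entry in the agent's credential map, so reaching it requires an explicit, auditable exception rather than a default.

```python
import os

# Hypothetical credential resolver for an AI agent. The environment-variable
# names are illustrative. Note that "prod" is deliberately absent from the map:
# the agent cannot reach production by default.
AGENT_DB_URLS = {
    "dev": os.environ.get("DEV_DATABASE_URL"),
    "staging": os.environ.get("STAGING_DATABASE_URL"),
    # "prod" is intentionally not listed.
}

def resolve_database_url(env: str) -> str:
    """Return the connection string for an environment the agent may touch."""
    url = AGENT_DB_URLS.get(env)
    if url is None:
        raise PermissionError(f"Agent has no credentials for environment: {env!r}")
    return url
```

The same pattern extends to API keys and deploy tokens: absence, not policy prose, is what keeps the default safe.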

2) Least privilege, not blanket admin

Instead of "AI has full DB access," break permissions into capabilities:

  • read-only access for analysis and debugging
  • write access only in staging
  • production write access limited to migrations via controlled tooling
  • no direct delete or drop permissions in prod

You can still move fast if the agent can draft the SQL, but the execution path is constrained.
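The capability split above can be written down as a small policy table that tooling checks before executing anything on the agent's behalf. This is a sketch with illustrative capability names, not any specific database's ACL syntax:

```python
# Hypothetical capability map: what the agent's role may do per environment.
# Capability names are illustrative, not tied to a particular database.
CAPABILITIES = {
    "dev":     {"select", "insert", "update", "delete", "ddl"},
    "staging": {"select", "insert", "update", "delete", "ddl"},
    "prod":    {"select"},  # read-only; prod writes go through controlled tooling
}

def is_allowed(env: str, capability: str) -> bool:
    """Fail closed: unknown environments grant nothing."""
    return capability in CAPABILITIES.get(env, set())
```

In a real Postgres setup the same idea lands as separate roles with explicit GRANT statements, but the fail-closed lookup is the part worth copying.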

3) Human-in-the-loop at decision points, not keystrokes

People often interpret "human approval" as slowing everything down. It does not have to.

Use AI to generate:

  • migration scripts
  • rollout plans
  • rollback plans
  • monitoring queries
  • post-deploy verification checklists

Then require a human to approve:

  • anything that changes schema
  • anything that writes to prod
  • any permission escalation

This matches Adam's idea of "approval gates" while keeping the model useful.
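The gate itself can be a small, boring predicate that classifies a proposed action rather than interrupting every keystroke. A minimal sketch, with a hypothetical `ProposedAction` shape standing in for whatever your pipeline actually passes around:

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    environment: str          # "dev", "staging", or "prod"
    changes_schema: bool
    writes_data: bool
    escalates_permissions: bool

def requires_human_approval(action: ProposedAction) -> bool:
    """Gate decisions, not keystrokes: only the three risky classes need a reviewer."""
    if action.escalates_permissions:
        return True
    if action.changes_schema:
        return True
    if action.environment == "prod" and action.writes_data:
        return True
    return False
```

Everything else flows through automatically, which is why this kind of gate slows a team down far less than "a human approves everything" suggests.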

4) PR reviews plus automated checks

Manual review catches conceptual risk, but automation catches repeatable mistakes.

Combine PR review with:

  • SQL linting and migration safety checks
  • policy-as-code for IAM changes
  • CI that runs migrations against a realistic dataset in staging
  • canary deploys and progressive delivery

The AI can even help here by writing tests and suggesting monitors, but the pipeline should block risky changes by default.
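To make the "block risky changes by default" idea concrete, here is a deliberately tiny lint sketch that catches two of the failure modes mentioned earlier. Real tools are far more thorough; the regexes below are illustrative, not a production-grade SQL parser:

```python
import re

# Illustrative migration safety checks. A real pipeline would use a proper
# SQL linter; this only sketches the fail-by-default shape.
RISKY_PATTERNS = [
    (re.compile(r"\bdrop\s+table\b", re.I), "DROP TABLE in a migration"),
    (re.compile(r"\bdelete\s+from\s+\w+\s*;", re.I), "DELETE without a WHERE clause"),
    (re.compile(r"\bupdate\s+\w+\s+set\b(?![^;]*\bwhere\b)", re.I),
     "UPDATE without a WHERE clause"),
]

def lint_sql(sql: str) -> list[str]:
    """Return human-readable findings; an empty list means no known risk matched."""
    return [message for pattern, message in RISKY_PATTERNS if pattern.search(sql)]
```

CI would fail the build whenever `lint_sql` returns findings, and a human reviewer decides whether to override.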

5) Observability and rollback as first-class features

If you let an agent deploy, it must also be able to prove the deploy is safe.

Require:

  • dashboards and alerts created alongside changes
  • explicit success metrics (latency, error rate, saturation)
  • a rollback plan that is actually executable

Autonomy without observability is just faster failure.
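Making the success metrics explicit turns "the deploy looks fine" into a checkable condition. A minimal sketch, with assumed metric names and thresholds; the important property is that a missing metric fails closed:

```python
# Hypothetical post-deploy verification: thresholds and metric names are
# illustrative. Anything out of bounds should trigger the rollback plan.
SUCCESS_CRITERIA = {
    "p95_latency_ms": 300.0,  # must stay at or below
    "error_rate": 0.01,       # must stay at or below
}

def deploy_is_healthy(observed: dict[str, float]) -> bool:
    """A metric that was never reported counts as unhealthy (fail closed)."""
    return all(
        observed.get(metric, float("inf")) <= threshold
        for metric, threshold in SUCCESS_CRITERIA.items()
    )
```

An agent that can deploy should also have to produce the `observed` dict, which is exactly the "prove the deploy is safe" requirement above.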

A simple policy that teams can adopt

If you want a crisp, teachable rule that echoes Adam's point, try this:

  • Sandbox and greenfield: AI can have broad access to build quickly.
  • Staging: AI can deploy and run migrations, but only through the pipeline.
  • Production: AI can propose changes, not execute them. Humans approve and merge.

Or, in Adam's phrasing: "Maximum access on the wrong environment is maximum risk."
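The three-tier rule is small enough to live as policy-as-code rather than a wiki page. A sketch, with illustrative action and mode names:

```python
# The three-tier rule as a tiny policy table. Names are illustrative;
# "pipeline" means only via CI/CD, "human_approval" means propose-only.
POLICY = {
    "sandbox": {"build": "auto", "migrate": "auto",           "deploy": "auto"},
    "staging": {"build": "auto", "migrate": "pipeline",       "deploy": "pipeline"},
    "prod":    {"build": "auto", "migrate": "human_approval", "deploy": "human_approval"},
}
```

Encoding it this way lets tooling enforce the rule and lets reviews of the policy itself go through the same PR process as everything else.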

So how much production access should AI have?

Adam ended with the question: "How much production access should AI have in your workflow?" My answer is: as little as possible, and only through controlled, auditable interfaces.

Give AI a lot of leverage where failure is cheap. Give it narrow, well-instrumented pathways where failure is expensive. The goal is not to slow teams down. The goal is to make speed repeatable without betting the company on a single automated decision.

If you design permissions and approvals like you design APIs, you can get the best of both worlds: AI-assisted velocity and production-grade safety.

This blog post expands on a viral LinkedIn post by Adam Janes.