Pascal BORNET Warns About AI Autonomy in Coding
A response to Pascal BORNET’s viral post on self-shipping AI code, why it feels risky, and the guardrails teams need now.
Pascal BORNET recently shared something that caught my attention: "That Silicon Valley scene hit me harder than I expected." Then came the line that lands like a punchline and a warning at the same time: "Wait… you gave your AI permission to overwrite the internal file system?" Pascal added that it was meant as a joke, but it "perfectly describes where we are today."
I keep coming back to that framing because it captures the mood across software teams right now: we are moving from AI as assistance to AI as autonomy. And when Pascal BORNET says, "Every week, I see companies letting AI write, test, and ship code faster than any human could," it is both thrilling and unsettling for a very practical reason: speed changes what we consider acceptable risk.
From helpful copilot to autonomous teammate
Pascal BORNET’s key point is not that AI coding tools are impressive (they are). It is that we are increasingly delegating real agency: systems that do not just suggest code, but execute workflows that touch production environments.
There is a clear progression many teams are already following:
- Suggest - autocomplete, snippets, refactors.
- Generate - whole functions, modules, and tests.
- Act - open pull requests, run CI, address review comments.
- Ship - merge, deploy, rollback, and patch.
The jump from generate to act is where the risk profile changes. The jump from act to ship is where the blast radius becomes non-theoretical.
"We’re giving systems autonomy, not just assistance." - Pascal BORNET
That is the crux: autonomy means the system can take steps that you did not explicitly enumerate, in an environment that has real constraints (security, uptime, compliance, user trust).
Why "self-rewriting code" went from fiction to demo
Pascal BORNET also noted how quickly fiction became practice: "We used to laugh at the idea of self-rewriting code - now it’s a product demo." I think this happened for three reasons.
1) Software is uniquely automatable
Code is text, tests are text, tickets are text, logs are text. Modern engineering is already mediated through tools that accept structured instructions. That makes software a natural playground for agentic systems.
2) CI/CD turned production changes into a pipeline
Once you have a pipeline, the difference between a human clicking buttons and an agent calling APIs is smaller than we admit. Autonomy often arrives quietly because the interfaces are already there.
3) Competitive pressure makes speed feel like safety
If your competitors are shipping faster, "waiting to be sure" can feel like a strategic disadvantage. That is how judgment gets outpaced by ambition, which is exactly the question Pascal posed: "Innovation has never moved this fast. The question is, can our judgment keep up with our ambition?"
The real risk is not "bad code" - it is unbounded authority
The joke about "permission to overwrite the internal file system" is funny because it is plausible. In practice, the biggest failures will not come from an AI making a small syntax mistake. They will come from giving an agent too much authority with too little context.
Here are the failure modes I worry about most:
Over-permissioned agents
If an agent can read secrets, write to repositories, and push to production, it effectively holds a master key. Even if the model is well-behaved, the system is exposed to:
- Prompt injection through tickets, docs, logs, or dependencies
- Tool misuse through ambiguous instructions
- Accidental destructive actions (deleting, overwriting, or leaking)
Good intent, wrong objective
Agents optimize for goals. If the goal is "fix the bug" or "reduce latency," an agent may choose a solution that violates an unstated constraint (privacy, auditability, backward compatibility). Humans carry those constraints implicitly. Agents do not unless we encode them.
Silent regression debt
An agent that can ship quickly can also ship regressions quickly. If the feedback loop is weak (limited monitoring, noisy alerts, shallow tests), the system may create a churn cycle: patch, break, patch, break.
Judgment that keeps pace: guardrails for agentic software engineering
Pascal BORNET’s post reads like an invitation to design maturity. If autonomy is inevitable, then the question becomes: what does responsible autonomy look like?
1) Progressive autonomy (levels, not a switch)
Treat autonomy like you treat production access for humans: junior engineers do not get the same permissions as staff engineers.
A practical pattern:
- Level 0: AI drafts code only (no tools).
- Level 1: AI can run tests and linters locally.
- Level 2: AI can open PRs with full traceability.
- Level 3: AI can merge with mandatory human review.
- Level 4: AI can deploy behind feature flags with rollback automation.
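One way to make these levels enforceable rather than aspirational is to encode them as an ordered policy that gates every action an agent requests. The sketch below is illustrative, not a standard; the level names, action names, and `is_allowed` helper are all assumptions invented for this post.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Ordered autonomy levels, mirroring the pattern above (hypothetical names)."""
    DRAFT_ONLY = 0      # AI drafts code only (no tools)
    RUN_CHECKS = 1      # may run tests and linters locally
    OPEN_PRS = 2        # may open pull requests with full traceability
    MERGE_REVIEWED = 3  # may merge, but only with mandatory human review
    DEPLOY_FLAGGED = 4  # may deploy behind feature flags with rollback

# Minimum level required for each action an agent might request.
REQUIRED_LEVEL = {
    "draft": AutonomyLevel.DRAFT_ONLY,
    "run_tests": AutonomyLevel.RUN_CHECKS,
    "open_pr": AutonomyLevel.OPEN_PRS,
    "merge": AutonomyLevel.MERGE_REVIEWED,
    "deploy": AutonomyLevel.DEPLOY_FLAGGED,
}

def is_allowed(agent_level: AutonomyLevel, action: str) -> bool:
    """Gate a requested action against the agent's granted autonomy level."""
    required = REQUIRED_LEVEL.get(action)
    if required is None:
        return False  # unknown actions are denied by default
    return agent_level >= required
```

The deny-by-default branch matters as much as the levels themselves: an agent asking for an action you never enumerated should fail closed, not fall through.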
The point is to earn autonomy through measurable reliability.
2) Least privilege for tools and data
If an agent does not need write access to your infrastructure, do not grant it. Scope tool permissions to:
- Specific repos and branches
- Specific environments (dev before staging before prod)
- Time-bound credentials
- Explicit allowlists for high-risk operations (deletes, migrations, secret access)
This is where the file system joke becomes operational: many agent setups start as "just make it work" prototypes. But prototypes have a habit of becoming production.
3) Human accountability stays non-negotiable
Autonomy cannot be a way to outsource responsibility. Someone must be on the hook for:
- What shipped
- Why it shipped
- Evidence that it was safe to ship
That means every agent action should produce an audit trail: prompts, tool calls, diffs, test results, approvals.
4) Evaluation and monitoring that match the new reality
Traditional unit tests and CI are necessary, but agentic systems need additional coverage:
- Adversarial testing (prompt injection attempts via issues and docs)
- Policy checks (licenses, security rules, PII handling)
- Behavioral regression tests for the agent (does it follow instructions and constraints?)
- Runtime monitoring (canary deploys, anomaly detection, automated rollback)
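As a concrete taste of the adversarial-testing bullet, even a naive screen over untrusted inputs (tickets, docs, logs) catches the laziest injection attempts before they reach the agent. The patterns below are a hypothetical starter list, not a robust defense; determined attackers will evade keyword matching, so treat this as one layer among many.

```python
import re

# Naive phrases that often signal prompt injection in tickets, docs, or logs
# (illustrative list; a real defense would not rely on keywords alone).
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
    r"disregard .* constraints",
]

def flag_possible_injection(text: str) -> list[str]:
    """Return the patterns matched in untrusted input, for triage before the agent sees it."""
    lowered = text.lower()
    return [p for p in INJECTION_PATTERNS if re.search(p, lowered)]
```

Flagged inputs do not need to be blocked outright; routing them to a human reviewer is often enough, and the flag rate itself is a useful metric for how hostile your input channels are.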
If an agent can deploy, then observability is part of your safety system, not a nice-to-have.
5) Clear constraints and "stop conditions"
Agents need explicit definitions of:
- What they are allowed to change
- What they must never change (auth, payments, data handling)
- When to escalate to a human (uncertain fixes, flaky tests, security warnings)
- When to stop (time limits, failure thresholds)
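Those four constraint categories can live in one small guard object that the agent loop consults before every step. This is a minimal sketch with assumed defaults (a ten-minute budget, three failures, and an invented list of protected path prefixes); the real values belong in reviewed configuration, not code.

```python
import time

# Paths the agent must never change without human escalation (illustrative).
PROTECTED_PREFIXES = ("auth/", "payments/", "data_handling/")

class StopConditions:
    """Decide when an agent run must stop or escalate to a human (sketch)."""

    def __init__(self, max_seconds: float = 600.0, max_failures: int = 3):
        self.started = time.monotonic()
        self.max_seconds = max_seconds      # time limit on the whole run
        self.max_failures = max_failures    # failure threshold before giving up
        self.failures = 0

    def record_failure(self) -> None:
        """Call on each failed attempt (flaky test, rejected fix, tool error)."""
        self.failures += 1

    def should_stop(self) -> bool:
        """True once the time budget or failure threshold is exhausted."""
        over_time = time.monotonic() - self.started > self.max_seconds
        return over_time or self.failures >= self.max_failures

    @staticmethod
    def must_escalate(changed_path: str) -> bool:
        """True when a change touches territory reserved for humans."""
        return changed_path.startswith(PROTECTED_PREFIXES)
```

The asymmetry is deliberate: `should_stop` ends the run, while `must_escalate` pauses it and hands the decision to a person, which is usually the right move for auth, payments, and data handling.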
This is how you translate judgment into system design.
The opportunity: speed with discipline
I do not read Pascal BORNET’s post as anti-innovation. If anything, it highlights a rare moment where we can decide what "good" looks like before the defaults harden.
Agentic AI in software engineering can be an amplifier for:
- Faster remediation of vulnerabilities
- Better test coverage through automated generation and mutation testing
- More consistent code quality via style and architecture enforcement
- Reduced toil in triage, dependency bumps, and routine refactors
But that only holds if we build the social and technical contract around autonomy: permissioning, audits, evals, and human oversight.
The question is not whether AI can ship code faster.
The question is whether we can ship judgment just as fast.
A practical takeaway for teams this quarter
If you are experimenting with agents that write and ship code, I would start with three moves:
- Inventory agent permissions: list every tool and credential the agent can touch.
- Add a hard human gate for merges or deployments until you have evidence-based confidence.
- Instrument everything: logs of prompts, diffs, tool calls, and outcomes.
That turns Pascal BORNET’s joke into a checklist: do we actually know what we just allowed?
This blog post expands on a viral LinkedIn post by Pascal BORNET.