Aravind Sriram on What Not to Automate with AI
A response to Aravind Sriram's viral post on using AI for coding while reserving models, architecture, and security for humans.
Aravind Sriram recently shared something that caught my attention:
It’s knowing what NOT to automate. I use AI for 80% of my coding tasks. But here’s what I NEVER let AI do.
That framing hits because it separates two things we often blur together: automation and delegation. AI can absolutely accelerate execution, but it cannot carry responsibility. When Aravind says AI is “incredible for execution” and “terrible for strategy,” I read it as a leadership reminder for engineers: the goal is not to eliminate thinking, it is to allocate it.
In other words, we should use AI to move faster on the HOW, while keeping the WHAT and WHY human.
AI is great at execution, not ownership
When a tool produces code, documentation, tests, or queries quickly, it is tempting to treat it like a junior teammate who can run independently. But AI is not a teammate. It does not share your incentives, understand your business constraints, or feel the consequences of a bad call at 2 a.m.
Execution work often has clear input and output:
- Given an API spec, generate a client.
- Given a function signature, write the implementation.
- Given a table schema, write a query.
- Given a failing test, propose a fix.
Strategy work is different. It requires context and judgment:
- What does the business actually need?
- What failure modes are acceptable?
- Who is accountable if this goes wrong?
- Which trade-offs match our roadmap and risk tolerance?
That is why Aravind’s list of “never automate” tasks is so useful. Each item maps to a place where context, accountability, and long-term consequences matter more than speed.
The five areas Aravind refuses to automate (and why)
Below, I expand on each point Aravind shared, with practical examples of what to delegate to AI and what to keep human.
1) Data model design
Aravind’s call: do not let AI design data models because it lacks business context.
A data model is not just tables and columns. It is a contract between teams and a reflection of how the organization defines reality. AI can propose a schema, but it cannot reliably answer questions like:
- What does “active” mean in our domain?
- Which identifiers are stable over time?
- What are the regulatory retention rules?
- How do we reconcile conflicting sources of truth?
Where AI helps:
- Generating draft DDL once you have decided entities and relationships.
- Proposing naming conventions or documenting fields.
- Creating sample queries and validation checks.
Where I stay hands-on:
- Defining entities, grain, and canonical definitions.
- Deciding the boundaries between domains.
- Making the model resilient to future product changes.
If your model encodes the wrong business meaning, perfect SQL only makes the wrong answer faster.
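To make the split concrete: once humans have fixed the entities, grain, and definitions, generating validation checks is exactly the kind of mechanical work that is safe to delegate. A minimal sketch of such checks, using Python's built-in sqlite3 (the `customers` table, its columns, and the status values are hypothetical examples, not from Aravind's post):

```python
import sqlite3

# Hypothetical schema: humans decided the entities, the grain, and what
# "active" means. The checks below are rote work that AI can draft and
# a reviewer can verify at a glance.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customers (
        customer_id TEXT PRIMARY KEY,  -- stable identifier (human decision)
        status      TEXT NOT NULL,     -- 'active' per the agreed domain definition
        created_at  TEXT NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?, ?)",
    [("c1", "active", "2024-01-01"), ("c2", "churned", "2024-02-01")],
)

# Mechanical validation queries: each should return a count of 0 violations.
checks = {
    "no_unknown_status": "SELECT COUNT(*) FROM customers "
                         "WHERE status NOT IN ('active', 'churned')",
    "no_null_ids": "SELECT COUNT(*) FROM customers WHERE customer_id IS NULL",
}
failures = {name: conn.execute(sql).fetchone()[0] for name, sql in checks.items()}
print(failures)  # every count should be 0
```

The valuable part is not the SQL; it is the human-agreed list of allowed statuses and identifier rules that the checks encode.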
2) Choosing architectural approaches
Aravind’s call: do not let AI choose between architectural approaches because trade-offs need judgment.
Architecture is a series of bets. The right answer depends on load patterns, team maturity, on-call capacity, data sensitivity, cost constraints, and how quickly requirements will change.
Example: Should you adopt event-driven ingestion or scheduled batch? AI can list pros and cons, but it cannot know your org’s operational reality. A theoretically elegant system can be a practical nightmare if it exceeds your team’s ability to run it.
Where AI helps:
- Listing common patterns and their typical failure modes.
- Drafting an ADR (architecture decision record) based on your inputs.
- Generating diagrams or explaining a pattern to a new team member.
Where humans decide:
- Which risks you can afford.
- How to balance build time vs operational complexity.
- What constraints are non-negotiable (latency, compliance, cost).
3) Debugging production incidents end-to-end
Aravind’s call: do not let AI debug production incidents end-to-end because you need to learn what broke.
This is an underrated point. Incidents are where systems teach you their true behavior. If you outsource the entire investigation, you might restore service, but you lose the lesson, and the same class of incident returns.
AI can speed up pieces of incident response, but it should not be the sole driver of the narrative.
Where AI helps during incidents:
- Summarizing logs, dashboards, and alert timelines.
- Proposing likely root causes based on symptoms.
- Drafting a postmortem template and action items.
Where humans must lead:
- Establishing the ground truth and ruling out false correlations.
- Coordinating with stakeholders and making risk calls.
- Deciding whether to roll back, hotfix, or degrade features.
Speed matters in incidents, but understanding matters more if you do not want a repeat.
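As one concrete instance of the "AI helps" side: summarizing where error symptoms cluster is mechanical and fast to verify. A toy sketch in plain Python (the log format and service names are invented; real lines would come from your log aggregator):

```python
from collections import Counter

# Hypothetical log lines in an assumed "<time> <level> <service> <message>" format.
logs = [
    "12:01 ERROR payments timeout calling ledger",
    "12:01 WARN  checkout retry scheduled",
    "12:02 ERROR payments timeout calling ledger",
    "12:03 ERROR ledger connection pool exhausted",
]

# Count errors per service to surface where the symptoms cluster.
# This suggests a hypothesis; a human still establishes ground truth
# and rules out false correlations before acting on it.
errors = Counter(
    line.split()[2] for line in logs if line.split()[1] == "ERROR"
)
print(errors.most_common())  # [('payments', 2), ('ledger', 1)]
```

The summary narrows the search space; deciding whether `payments` is the cause or just the loudest victim remains human work.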
4) Security and privacy decisions
Aravind’s call: do not let AI make security or privacy decisions because it is too critical.
Security is about adversaries, incentives, and worst-case thinking. Privacy is about law, ethics, and user trust. AI can suggest best practices, but it cannot accept liability or understand the legal and reputational stakes unique to your company.
Where AI helps:
- Producing secure coding checklists.
- Highlighting common vulnerabilities in a snippet.
- Drafting threat models that you then validate.
Where humans must be accountable:
- Data classification and access policies.
- Encryption and key management choices.
- Vendor and third-party risk decisions.
- Privacy reviews, retention, and consent mechanisms.
If you are wrong here, the impact is not just a bug. It can be an incident with legal consequences.
5) Explaining technical decisions to stakeholders
Aravind’s call: do not let AI explain technical decisions to stakeholders because relationships matter.
This is not about writing ability. It is about trust. Stakeholder communication is where engineering earns credibility by being clear about trade-offs, timelines, uncertainty, and risk.
AI can draft a message, but it cannot read the room. It cannot know which detail will trigger concern, what context a leader lacks, or where a partner team has been burned before.
Where AI helps:
- Drafting an outline for a decision memo.
- Translating jargon into simpler language.
- Creating a FAQ of anticipated questions.
Where humans should speak:
- Owning the decision and its consequences.
- Negotiating scope, sequencing, and risk acceptance.
- Building long-term alignment.
A simple delegation rule: AI for the HOW, humans for the WHAT and WHY
Aravind ends with a line I keep coming back to:
Master delegation, not elimination.
Here is the practical version I use:
- WHAT: Define the problem precisely and the success metrics. Human-owned.
- WHY: Explain the rationale, constraints, and trade-offs. Human-owned.
- HOW: Generate options, implement, test, refactor, document. AI-assisted.
If I cannot clearly state the WHAT and WHY, I do not open the AI prompt yet. Otherwise I risk getting a fast answer to the wrong question.
A workflow that keeps you in control
If you want to use AI for a large portion of coding work without giving up judgment, try this sequence:
1. Write a short decision brief first. Include constraints (latency, cost, privacy), non-goals, and success metrics.
2. Ask AI for options, not a single solution. Prompt for 2-3 approaches and require explicit trade-offs and risks.
3. Decide, then delegate implementation. Once you choose, let AI help generate code, tests, and docs.
4. Review like you would review a pull request from a strong but uninformed engineer. Check assumptions, edge cases, and operational impact.
5. Close the loop with post-decision learning. After deploys or incidents, update your decision brief and your team's standards.
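The brief-first gate in step 1 can even be enforced mechanically. A toy sketch (the `DecisionBrief` structure and its field names are my own illustration, not from the post): refuse to build an AI prompt until the WHAT and WHY are written down:

```python
from dataclasses import dataclass, field

@dataclass
class DecisionBrief:
    what: str = ""                 # problem statement and success metrics (human-owned)
    why: str = ""                  # rationale, constraints, trade-offs (human-owned)
    constraints: list[str] = field(default_factory=list)

def build_prompt(brief: DecisionBrief) -> str:
    # Gate: no WHAT and WHY, no AI prompt.
    if not brief.what or not brief.why:
        raise ValueError("Fill in WHAT and WHY before opening the AI prompt")
    return (
        f"Problem: {brief.what}\n"
        f"Rationale and constraints: {brief.why}; {', '.join(brief.constraints)}\n"
        "Propose 2-3 approaches with explicit trade-offs and risks."
    )

# An empty brief is rejected; a complete one yields a prompt that asks
# for options rather than a single answer.
try:
    build_prompt(DecisionBrief())
except ValueError as e:
    print(e)
```

The point is not the code but the discipline it encodes: the prompt is the last step, not the first.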
The real question: what do you refuse to automate?
Aravind’s post ends with a challenge: what do you refuse to automate?
My answer is anything that quietly transfers accountability away from me or my team: defining meaning (data models), accepting risk (security), making bets (architecture), learning from failure (incidents), and building trust (stakeholder communication).
AI can be a force multiplier, but only if you keep your hands on the steering wheel.
This blog post expands on a viral LinkedIn post by Aravind Sriram.