Steve Pritchard on AI Outreach That Actually Gets Replies
A deep dive into Steve Pritchard's take on AI outreach, lead-gen agents, and the real bottleneck: time to personalize well.
Steve Pritchard recently shared something that caught my attention: "If you didn't open yesterday's newsletter, here's what you missed:" followed by a tight list of lessons, from "the full architecture of a Lead Generation Agent" to why most "automated outreach" is just mail merge.
That framing matters because it cuts through the noise. Plenty of people talk about AI for sales. Steve is pointing at a more specific problem: the gap between having contacts and actually reaching them properly. He even noted he got 14 replies from readers describing their prospecting bottlenecks, and the pattern was clear: "the problem isn't finding contacts. It's finding time to reach them properly."
Below is my expansion on Steve's points, written as a practical guide for anyone trying to modernize outbound without turning it into spam.
The real constraint is not leads but execution time
Most teams are not suffering from a shortage of names in a CRM. They are suffering from a shortage of minutes.
If you are juggling pipeline reviews, internal meetings, follow-ups, and admin work, outreach becomes the thing you do "when you have time". And when you finally do it, you do it quickly. Quick tends to mean generic. Generic gets ignored.
Steve Pritchard's hidden thesis: speed is only useful if it buys you relevance, not volume.
AI can reduce time per message, but only if you build a process that turns raw data into a reason to talk to someone.
The 6-stage architecture of a Lead Generation Agent (expanded)
Steve mentioned "the full architecture of a Lead Generation Agent (6 stages, from ICP to personalised email in 90 seconds)". The key is that the email is not stage one. It is stage six.
Here is a clean, modern version of that 6-stage flow that works whether you implement it with an AI agent, a set of automations, or simply a disciplined playbook.
1) ICP definition (who you will not contact)
Start with constraints. A useful Ideal Customer Profile is as much about exclusion as inclusion:
- Firmographics: size, industry, geography
- Technographics: tools they use (when relevant)
- Triggers: hiring patterns, new funding, leadership changes, compliance deadlines
- Disqualifiers: no budget signals, wrong business model, poor fit segments
If your ICP is fuzzy, your personalization will be random. Random personalization is just noise with extra steps.
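The exclusion-first idea can be sketched as a simple filter that runs disqualifiers before inclusion rules. The field names and thresholds below are illustrative assumptions, not a fixed schema:

```python
# Minimal ICP filter: disqualifiers run first, inclusion rules second.
# All field names and cutoff values are assumptions for illustration.

DISQUALIFIERS = {
    "industries": {"gambling", "crypto"},     # poor-fit segments (example)
    "max_size": 10_000,                       # too large to win (example)
    "blocked_signals": {"hiring_freeze"},     # no-budget signal (example)
}

INCLUSION = {
    "industries": {"saas", "fintech"},
    "regions": {"EMEA", "NA"},
    "min_size": 50,
}

def passes_icp(account: dict) -> bool:
    """True only if the account clears every disqualifier AND matches the ICP."""
    if account["industry"] in DISQUALIFIERS["industries"]:
        return False
    if account["size"] > DISQUALIFIERS["max_size"]:
        return False
    if set(account.get("signals", [])) & DISQUALIFIERS["blocked_signals"]:
        return False
    return (
        account["industry"] in INCLUSION["industries"]
        and account["region"] in INCLUSION["regions"]
        and account["size"] >= INCLUSION["min_size"]
    )

accounts = [
    {"name": "Acme", "industry": "saas", "region": "EMEA", "size": 200, "signals": ["funding"]},
    {"name": "BetCo", "industry": "gambling", "region": "EMEA", "size": 300},
]
shortlist = [a for a in accounts if passes_icp(a)]
```

The point of encoding it this way is that the disqualifiers are explicit and testable, so "who we will not contact" is a decision, not an accident.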
2) Account selection (where you can actually win)
An agent should not only find accounts. It should prioritize them. That means scoring:
- Fit score (ICP match)
- Intent or trigger score (something changed)
- Accessibility score (can you reach decision makers)
A short list you can execute beats a long list you cannot.
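The three scores above can be combined into a single priority number. This is a minimal sketch; the weights are assumptions you would tune per segment, not a recommended split:

```python
# Weighted priority across the three axes: fit, trigger, accessibility.
# Each score is assumed to be normalized to 0-1; the weights are illustrative.
WEIGHTS = {"fit": 0.5, "trigger": 0.3, "access": 0.2}

def priority(account: dict) -> float:
    """Weighted sum of fit, trigger, and accessibility scores."""
    return sum(WEIGHTS[k] * account[k] for k in WEIGHTS)

accounts = [
    {"name": "Acme",   "fit": 0.9, "trigger": 0.8, "access": 0.4},
    {"name": "Globex", "fit": 0.6, "trigger": 0.2, "access": 0.9},
]

# Take only the top N you can actually work this week: a short list you execute.
top = sorted(accounts, key=priority, reverse=True)[:1]
```

Capping the list at what you can execute is the point: the sort gives you order, the slice gives you focus.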
3) Contact mapping (who matters and why)
Most outreach fails because the contact is wrong, not because the copy is wrong. Good mapping answers:
- Who owns the problem?
- Who influences the decision?
- Who blocks the decision?
This is also where you decide whether you are going top-down (VP first) or bottom-up (operator first), and how that changes your message.
4) Context gathering (the raw ingredients for relevance)
This is where AI should shine, but only with guardrails. Pull a small set of high-signal facts:
- A recent company initiative (press release, product launch, expansion)
- A role-specific clue (job post, team structure, KPI hints)
- A constraint (regulatory change, seasonality, competitive pressure)
The goal is not to collect everything. It is to collect the one or two details that explain why reaching out now makes sense.
5) Hypothesis and offer (your "reason to believe")
Before writing, generate a hypothesis:
- "Given X, you might be trying to achieve Y, but Z makes it hard."
Then align a simple offer:
- A quick benchmark
- A short teardown
- A relevant case study
- A tiny diagnostic question
If you skip this step, your message becomes a feature dump. If you do this step well, your message becomes a conversation starter.
6) Message assembly (personalized, but controlled)
Only now do you write the email. And the output should be structured:
- One sentence of context
- One sentence of hypothesis
- One sentence of proof or example
- One low-friction call to action
Ninety seconds is believable if stages 1-5 are automated or templated and the agent only needs to assemble and polish.
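The four-sentence structure can be enforced in code so the agent assembles rather than free-writes. A minimal sketch, where the example content (including the proof line) is hypothetical:

```python
# Assemble the four-part structure: context, hypothesis, proof, CTA.
# The dataclass fields mirror the structure; the sample content is hypothetical.
from dataclasses import dataclass

@dataclass
class MessageParts:
    context: str      # one sentence of observable context
    hypothesis: str   # one sentence of hypothesis
    proof: str        # one sentence of proof or example
    cta: str          # one low-friction call to action

def assemble(parts: MessageParts) -> str:
    """Join the parts in fixed order; the agent polishes, it does not invent."""
    return "\n".join([parts.context, parts.hypothesis, parts.proof, parts.cta])

email = assemble(MessageParts(
    context="Noticed you're hiring 4 SDRs in EMEA.",
    hypothesis="Teams often do that right before tightening territory rules.",
    proof="We helped a similar team fix territories before ramp.",  # hypothetical proof block
    cta="Is territory design on your plate this quarter?",
))
```

Because the structure is fixed, a reviewer can check each sentence against its role in seconds, which is what makes the ninety-second claim plausible.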
Why most "automated outreach" is mail merge with extra steps
Steve called it out directly: most automated outreach is still mail merge, just wrapped in new tools.
Mail merge fails for a simple reason: it personalizes the wrong thing.
- It personalizes tokens (first name, company name)
- It does not personalize intent (why now, why you, why this)
If your sequence is built around fields instead of insight, the recipient feels it immediately. The email reads like it was generated to be sent, not written to be read.
A quick litmus test: if the only unique elements in your email come from a spreadsheet, you are not doing personalization. You are doing formatting.
The technique that makes AI emails get replies (it is not the first name)
Steve teased this: "The specific technique that makes AI-written emails actually get replies (hint: it's not the first name)."
Here is what consistently works in practice: make the first two lines about an evidence-based hypothesis, not an identity-based compliment.
Bad personalization:
- "Loved your recent post"
- "Congrats on the funding"
- "I saw you are the VP of Sales"
Better personalization:
- "Noticed you're hiring 4 SDRs in EMEA. Teams often do that right before tightening territory rules and redefining ICP. Is that happening?"
The technique is simple:
- Choose one observable signal
- Connect it to a plausible initiative or pain
- Ask a narrow question that a real human can answer quickly
The point is to earn a reply by being specific enough to be falsifiable: the prospect can confirm or correct you, and either answer starts a conversation.
AI helps because it can generate multiple hypotheses fast. Your job is to pick the one that is most likely true and least likely to feel creepy.
Process inconsistency is rarely a people problem
Steve also wrote: "Why process inconsistency is almost never a people problem." I agree, and I would go further: inconsistency is usually an incentives and clarity problem.
Common causes:
- The process is undocumented, so everyone improvises
- The process is documented, but not usable in real time
- Tools are fragmented, so reps create workarounds
- Metrics reward volume, so quality naturally decays
If leadership says "be more personalized" but dashboards only celebrate "emails sent", the system will always produce generic outreach.
A practical fix is to standardize just enough:
- Define a minimum research checklist (2 signals, 1 hypothesis)
- Define message structure (context, hypothesis, proof, CTA)
- Define quality gates (no claims without a source, no fluffy compliments)
- Review a small sample weekly, not just top-line numbers
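The quality gates above can be automated as a pre-send check. This is a sketch under assumptions: the banned phrases and minimum-signal rule are examples, not a complete policy:

```python
# Pre-send quality gates for the checklist above.
# The fluff list and thresholds are illustrative assumptions.
FLUFF = ("loved your recent post", "congrats on", "hope this finds you well")

def gate_failures(draft: str, num_signals: int, num_sources: int) -> list[str]:
    """Return the list of failed gates; an empty list means the draft may ship."""
    fails = []
    if num_signals < 2:
        fails.append("needs at least 2 research signals")
    if num_sources < 1:
        fails.append("no claims without a source")
    lowered = draft.lower()
    fails += [f"fluffy compliment: {p!r}" for p in FLUFF if p in lowered]
    return fails

draft = "Congrats on the funding! We help companies grow."
fails = gate_failures(draft, num_signals=1, num_sources=0)
```

A gate like this does not replace the weekly sample review, but it catches the cheapest failures before a human has to.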
The pattern Steve saw: time to reach them properly
The most useful line in Steve's post might be the simplest: the bottleneck is time.
So the goal is not "send more". It is "spend the same time and get more relevance per minute".
Tactics that help:
- Build a reusable library of hypotheses by segment (industry + role)
- Create trigger-based plays (funding, hiring, expansion, tool change)
- Pre-write proof blocks (1-2 sentence micro case studies)
- Use AI for drafts, then enforce a strict edit pass for truth and specificity
If you do this, AI becomes a leverage tool, not a spam cannon.
Closing thought
Steve Pritchard's post reads like a newsletter teaser, but the underlying message is bigger: outbound is not failing because people lack tools. It fails because teams confuse automation with relevance.
When you treat outreach like a six-stage system, the email becomes the output of thinking, not the substitute for it. That is the difference between "automated outreach" and effective prospecting.
This blog post expands on a viral LinkedIn post by Steve Pritchard.