
Kyle Poyar on AI Plays That Actually Drive Pipeline
A practical breakdown of Kyle Poyar's AI-for-GTM plays, plus guardrails to avoid AI slop and generate pipeline with confidence.
Kyle Poyar recently shared something that caught my attention: "Many GTM leaders frankly don’t trust AI outputs and are worried about AI slop. Yet a small group are already seeing outsized returns from AI." That tension is real. Most go-to-market teams have experimented with AI, but plenty have been burned by generic copy, hallucinated details, or workflows that look impressive in a demo and fall apart in production.
Kyle’s point is that the winners are not waiting for perfect AI. They are building practical systems that generate pipeline, improve conversion with 1:1 messaging, and create efficiency across marketing and sales. He and Maja Voje interviewed 30 GTM experts and pulled together 40 AI x GTM plays across content creation, growth and product marketing, prospecting, and sales engagement.
In this post, I want to expand on what Kyle said and make it more actionable: why trust is the bottleneck, what "AI slop" really means in GTM, and how to implement a handful of the plays he highlighted without needing to be an AI engineer.
The real issue is not AI quality; it is GTM risk
When a GTM leader says they do not trust AI outputs, they are usually reacting to risk in one of four forms:
- Brand risk: Off-voice messaging that cheapens positioning
- Revenue risk: Wrong claims, wrong persona, wrong problem framing
- Compliance risk: Making statements you cannot substantiate
- Focus risk: Busywork automation that does not move pipeline
AI slop is not just bad writing. In GTM, slop is any output that is unmoored from your product truth, your ICP reality, and your specific proof points. The fix is not "use less AI." The fix is to put AI inside constraints: better inputs, clear evaluation criteria, and a workflow that forces grounding.
Kyle Poyar’s most important unlock: you do not need to be an AI expert to see value. Most plays can be built with general-purpose LLMs (ChatGPT, Claude, Gemini) plus affordable tools.
That matters because the advantage is shifting from model access to system design.
The four categories Kyle outlined (and why they work)
Kyle and Maja grouped the best AI x GTM use cases into four buckets. Here is how I think about them in practice:
- Content creation: Increase output without lowering standards by using AI for structure, first drafts, and repurposing, then add human judgment and evidence.
- Growth and product marketing: Turn messy qualitative inputs (calls, reviews, notes) into consistent insights, positioning angles, and experiments.
- Prospecting: Automate research and segmentation so reps spend time on reasoning and relevance, not tab switching.
- Sales engagement: Improve conversion by personalizing what actually matters (context, stakes, and next step), not just swapping in company names.
The throughline is simple: AI is strongest when it transforms inputs you already have into decisions and actions you already want.
Nine AI plays worth copying (expanded)
Kyle listed nine favorites. Below is my expanded, blog-ready version of what each play can look like, what it is good for, and a simple way to start.
1) AI content assistant (via Maja Voje)
A content assistant is not "write me a LinkedIn post." It is a guided workflow that knows your topics, points of view, examples, and voice.
What it does well: outlines, hooks, content repurposing, and turning raw notes into publishable structure.
How to start:
- Feed 5 to 10 examples of your best writing
- Create a checklist for quality (specificity, POV, proof, CTA)
- Use AI to draft, then force a human pass for evidence and tone
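The quality checklist in the steps above works best when it is explicit rather than implied. Here is a minimal sketch of what that gate could look like, assuming a human reviewer supplies the verdicts; the criterion names and wording are illustrative, not a prescribed rubric.

```python
# Illustrative pre-publish quality gate. The four criteria mirror the
# checklist above (specificity, POV, proof, CTA); a human reviewer
# supplies the True/False verdicts - the point is to define "good"
# before anything ships.

CHECKLIST = {
    "specificity": "Does it name a concrete situation, number, or example?",
    "pov": "Does it take a position a reader could disagree with?",
    "proof": "Is every claim backed by a source or first-hand result?",
    "cta": "Is there one clear next step for the reader?",
}

def passes(review: dict) -> bool:
    """review maps each criterion to the reviewer's True/False verdict."""
    return all(review.get(criterion, False) for criterion in CHECKLIST)

# A draft with an unproven claim fails the gate
verdict = passes({"specificity": True, "pov": True, "proof": False, "cta": True})
```

Even this toy version changes behavior: a draft cannot ship until someone has answered all four questions, which is exactly the human pass the workflow calls for.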
2) AI competitor comparison pages (via Matteo Tittarelli)
Comparison pages win high-intent search, but they are painful to maintain. AI can help generate first drafts and keep them fresh.
What it does well: page scaffolding, feature mapping, and summarizing public competitor claims.
Guardrail: never invent competitor features. Use citations and include an internal review step.
3) AI competitive intelligence copilot (via Justin Norris)
Instead of sporadic competitive docs, create a copilot that answers questions like "How do we position against X for persona Y?" grounded in your battlecards, win-loss notes, and approved messaging.
What it does well: quick retrieval and consistent talk tracks.
How to start:
- Centralize approved assets (battlecards, objection handling)
- Add rules: only answer from sources, otherwise say "I don’t know"
- Track which questions reps ask most and update assets accordingly
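The "only answer from sources, otherwise say I don't know" rule above can be sketched as a thin wrapper around your approved assets. This is a deliberately naive illustration with made-up asset keys and keyword matching; a real copilot would use retrieval (embedding search) over battlecards, but the refusal behavior is the part worth copying.

```python
# Minimal sketch of a grounded Q&A wrapper: answer only from approved
# assets, otherwise refuse. The asset store and keyword matching are
# illustrative stand-ins for real retrieval over battlecards.

APPROVED_ASSETS = {
    "pricing objection": "Lead with total cost of ownership, not list price.",
    "competitor x enterprise": "Emphasize SSO, audit logs, and our SLA.",
}

def answer(question: str) -> str:
    q = question.lower()
    # An asset matches only if every word of its key appears in the question
    matches = [text for key, text in APPROVED_ASSETS.items()
               if all(word in q for word in key.split())]
    if not matches:
        return "I don't know - no approved source covers this."
    return " ".join(matches)

grounded = answer("How do we handle the pricing objection?")
refused = answer("What is our roadmap for 2030?")
```

The second question gets a refusal instead of a hallucinated answer, which is the whole point: reps learn the copilot is either right or silent, never confidently wrong.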
4) AI digital twin for customer research (via Kieran Flanagan)
A digital twin is a synthetic representation of a customer segment based on real inputs: interviews, call transcripts, reviews, and survey responses.
What it does well: rapid hypothesis generation for messaging and experimentation.
How to use it safely:
- Treat outputs as hypotheses, not facts
- Require every insight to map back to a quote or source snippet
5) AI SAM scoring algorithm (via Liam Gandelsman)
A SAM scoring play helps you prioritize accounts by fit and likelihood to buy, not just firmographics.
What it does well: combining signals (stack, hiring, intent, tech usage, triggers) into a ranked list.
Practical tip: start with a transparent rules-based model, then iterate. If you cannot explain why an account scored high, your team will not trust it.
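The transparent rules-based starting point could look like the sketch below. The signal names and weights are assumptions for illustration; what matters is that the output includes the reasons, so anyone can see exactly why an account ranked where it did.

```python
# Illustrative rules-based account scorer: every signal has an explicit
# weight, and the result carries the matching signals as reasons, so the
# score is always explainable. Signal names and weights are assumptions.

WEIGHTS = {
    "uses_target_stack": 30,
    "hiring_for_relevant_role": 25,
    "recent_funding": 20,
    "intent_signal": 15,
    "icp_industry": 10,
}

def score_account(signals: dict) -> dict:
    hits = [name for name, present in signals.items()
            if present and name in WEIGHTS]
    total = sum(WEIGHTS[name] for name in hits)
    return {"score": total, "reasons": hits}

result = score_account({
    "uses_target_stack": True,
    "recent_funding": True,
    "intent_signal": False,
})
# result carries both the score (50) and the two signals that drove it
```

Once the team trusts the explainable version, you can layer in learned weights, but keep the reasons field; that is what makes reps act on the list.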
6) AI personalized ABM campaign (via Dave Rigotti)
Personalized ABM should not mean "write 500 custom emails." It should mean tailoring the narrative and proof to a micro-segment.
What it does well: generating segment-specific angles, landing page variants, and ad copy aligned to one hypothesis.
Workflow idea:
- Pick 1 segment, 1 pain, 1 proof point
- Generate 3 angle options
- Run small tests, then scale the winner
7) AI outbound micro-campaigns (via Mike Ryan)
This is where teams see immediate pipeline impact. Micro-campaigns are small, targeted sequences built around a trigger and a tight persona.
What it does well: turning a trigger list into messaging that actually references the trigger.
Example triggers:
- New executive hire
- Recent funding
- Tool adoption detected
- Job posts that imply a new initiative
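One way to enforce "messaging that actually references the trigger" is to keep an approved opener per trigger and fail loudly for anything else. The trigger names and templates below are illustrative, not a recommended library of copy.

```python
# Sketch of a trigger-to-opening-line map for micro-campaigns: the first
# sentence of every email is forced to reference the trigger, and an
# unknown trigger raises rather than falling back to generic copy.
# Trigger names and templates are illustrative.

OPENERS = {
    "new_exec_hire": ("Saw {company} just brought on a new {role} - "
                      "that transition usually reshuffles priorities."),
    "recent_funding": ("Congrats on the {round} - scaling pressure "
                       "tends to hit {team} first."),
}

def opening_line(trigger: str, **context) -> str:
    template = OPENERS.get(trigger)
    if template is None:
        raise ValueError(f"No approved opener for trigger: {trigger}")
    return template.format(**context)

line = opening_line("recent_funding", round="Series B", team="RevOps")
```

The hard failure on unknown triggers is intentional: it forces someone to write a real opener for each new trigger instead of letting a sequence ship with filler.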
8) AI meeting prep with a custom GPT (via Joey Maddox)
Meeting prep is repetitive research. A custom GPT (or similar agent) can create a one-page brief: company context, likely priorities, relevant case studies, and suggested discovery questions.
What it does well: speed plus consistency.
Non-negotiable: require links to sources for any claim so reps can verify in seconds.
9) AI re-engagement for closed-lost deals (via Elaine Zelby)
Closed-lost is an underused goldmine. AI can cluster loss reasons, detect timing signals, and generate re-engagement messaging that reflects the actual last conversation.
What it does well: summarizing deal history and creating a credible reason to re-open.
Easy starting point: for each closed-lost opportunity, auto-generate:
- 3-bullet recap of why you lost
- 2 potential new triggers to re-engage
- 1 short email that references the original context
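The three-part output above can be enforced in the prompt itself. Here is a hypothetical prompt builder; the deal field names are assumptions, not a specific CRM schema, and the key lines are the "ONLY the deal history below" constraint and the instruction to say so rather than guess.

```python
# Hypothetical prompt builder for closed-lost re-engagement: it pins the
# model to the actual deal history and asks for exactly the three
# artifacts described above. Deal field names are illustrative.

def build_reengagement_prompt(deal: dict) -> str:
    return (
        "Using ONLY the deal history below, produce:\n"
        "1. A 3-bullet recap of why we lost.\n"
        "2. Two potential new triggers to re-engage.\n"
        "3. One short email that references the original context.\n"
        "If the history lacks a loss reason, say so instead of guessing.\n\n"
        f"Account: {deal['account']}\n"
        f"Closed: {deal['closed_date']}\n"
        f"Loss notes: {deal['loss_notes']}\n"
    )

prompt = build_reengagement_prompt({
    "account": "Acme Corp",
    "closed_date": "2024-03-01",
    "loss_notes": "Chose incumbent; budget frozen until Q1.",
})
```

Running this once per closed-lost opportunity gives reps a ready-made, context-aware starting point instead of a cold generic nudge.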
How to avoid AI slop while scaling output
If you implement only one idea from Kyle’s post, let it be this: build constraints.
Here are practical guardrails that make AI outputs trustworthy:
- Grounding rules: require sources (call snippets, CRM fields, approved docs). If missing, the model must ask for them.
- Voice and positioning rubric: define what "good" looks like before you generate anything.
- Human-in-the-loop for claims: AI can draft, but humans must approve factual statements, comparisons, and pricing.
- Small tests first: run micro-campaigns and limited experiments, then scale what works.
- Instrumentation: tie every play to a metric (reply rate, meeting rate, conversion, cycle time saved).
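The first guardrail (grounding rules) can be partially automated with a hard gate before anything ships. The sketch below assumes a simple `[source: ...]` marker convention and crude keyword heuristics for spotting claims; both are assumptions for illustration, and a human reviewer still makes the final call.

```python
# Minimal grounding gate: flag any sentence that makes a measurable
# claim without a source reference. The "[source: ...]" marker and the
# claim-keyword heuristics are assumed conventions, not a standard.

import re

CLAIM_MARKERS = ("% ", "faster", "reduces", "saves")

def ungrounded_claims(draft: str) -> list[str]:
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", draft):
        has_claim = any(m in sentence.lower() for m in CLAIM_MARKERS)
        has_source = "[source:" in sentence.lower()
        if has_claim and not has_source:
            flagged.append(sentence)
    return flagged

draft = ("Our onboarding reduces ramp time [source: call-0412]. "
         "Teams ship 40% faster.")
flagged = ungrounded_claims(draft)
# Only the second sentence is flagged: it makes a claim with no source
```

Even a crude check like this changes reviewer behavior: instead of scanning whole drafts, humans focus on the flagged sentences, which is where the brand and compliance risk actually lives.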
When GTM leaders say they do not trust AI, what they really want is predictability. These guardrails create predictability.
A simple way to start this week
The best news, which Kyle calls out directly: you can do this with general-purpose LLMs and off-the-shelf tools. I would start with a two-week sprint:
- Pick one workflow in one category (prospecting or meeting prep tends to show ROI fastest)
- Define inputs (what the AI can use) and outputs (what good looks like)
- Build a first version with your existing stack
- Review 20 outputs manually and refine prompts and rules
- Ship to a small group, measure, and iterate
If you want Kyle and Maja’s exact prompts and step-by-step workflows, he linked them in the Growth Unhinged newsletter: https://lnkd.in/es9pNkm5
This blog post expands on a viral LinkedIn post by Kyle Poyar of Growth Unhinged. View the original LinkedIn post →