
Suganthan Mohanadasan on SEO: Ask the LLM

SEO & AI Search

A practical take on Suganthan Mohanadasan's post: stop overtracking AI rankings and ask LLMs why they recommend competitors.

LinkedIn content, viral posts, content strategy, SEO, AI search, LLM optimization, ChatGPT, brand monitoring, social media marketing

Suganthan Mohanadasan, Co-founder @ Snippet Digital // Search Journey Optimization, recently posted something that made me stop scrolling: "SEOs in 2025: 'We built a 47-tab spreadsheet tracking every prompt variation across 6 LLMs with sentiment scoring, citation mapping, and a custom Zapier workflow that pings Slack every time our brand drops from position 3 to position 4 in a ChatGPT response.'" Then he delivers the punchline: "Cool. Have you actually tried asking ChatGPT why it recommends your competitor instead of you?"

That one question captures a pattern I see everywhere in SEO and AI search right now: we are building elaborate monitoring systems before we do the simplest, most revealing diagnostic. We measure the output, but we do not interrogate the cause. And as Suganthan joked, if the answer is "No, that's phase 2," you might want to check the calendar: "Phase 2 was 4 months ago mate."

In this post, I want to expand on what he is really saying: AI visibility is not just a ranking problem. It is an explanation problem. Your fastest path to improving how LLMs talk about your brand often starts by directly asking the model what it thinks you are, what it trusts, and what it believes your competitor does better.

The trap: treating LLM answers like classic SERP rankings

Traditional SEO trained us to obsess over positions, volatility, and alerts. That mindset is useful, but only up to a point. In AI Overviews, ChatGPT-style answers, and other LLM surfaces, the "position" concept is fuzzy: the model may reorder, summarize, or omit brands based on context, retrieval sources, or its internal preferences.

So when teams build tracking that resembles rank monitoring (just with more tabs), they risk optimizing for the metric rather than the mechanism. You can watch your brand slide from "3" to "4" all week and still have no clue what to change to move back up.

Key idea I hear in Suganthan's post: monitoring is not strategy. Diagnosis comes first.

Phase 2 should be phase 0: ask the model "why"

Suganthan's point is not "do not measure." It is "do not skip the conversation." LLMs are interactive. You can query them the way you would question a customer, a reviewer, or even a journalist.

Here are a few prompts that turn vague anxiety into actionable insight (and yes, you should run them across multiple models):

1) Identify the decision criteria

Ask:

  • "When recommending [category] tools, what criteria matter most to you?"
  • "Rank the top factors you use to choose between [Brand] and [Competitor]."

If the model says "pricing, ease of use, integrations, customer support," you have a checklist. Now you can go confirm whether your site, docs, reviews, and third-party mentions clearly support those claims.

2) Surface missing information

Ask:

  • "What information would you need to confidently recommend [Brand] over [Competitor]?"
  • "What are you unsure about regarding [Brand]?"

This often reveals gaps like unclear positioning, weak proof, missing comparison content, outdated feature pages, or inconsistent naming across the web.

3) Audit perceived weaknesses (even if you disagree)

Ask:

  • "What are the most common complaints about [Brand]?"
  • "What risks should a buyer consider with [Brand]?"

Then you do the real work: separate hallucinations from real perception. If the model repeats a misconception, that misconception is now an SEO problem and a PR problem.

4) Force citations and sources (where available)

Ask:

  • "Cite the sources you used to make that recommendation."
  • "Which reviews, articles, or pages most influenced your answer?"

In retrieval-based systems, this can point you to the exact pages shaping the narrative. In other cases, you will learn that the model cannot cite, which is also useful: it signals you should focus more on broadly consistent signals across the open web.

A simple workflow: explain, then measure

If your current setup is "track everything, then panic," flip it. Here is a lightweight workflow that aligns with Suganthan's nudge.

Step 1: Create a prompt set that mirrors real buyer intent

Instead of infinite prompt variations, pick 10 to 20 that map to your funnel:

  • "Best [category] for [use case]"
  • "[Brand] vs [Competitor]"
  • "Is [Brand] good for [industry]?"
  • "Alternatives to [Competitor]"

Make sure at least half of these are competitor-first prompts. That is where the truth comes out.
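If you want to generate that prompt set programmatically, here is a minimal sketch in Python. The brand, competitor, and category names are hypothetical placeholders, and the templates simply mirror the funnel-intent patterns above; adapt both to your own market.

```python
def build_prompt_set(brand, competitors, category, use_cases):
    """Expand a handful of buyer-intent templates into a concrete prompt set.

    Competitor-first prompts make up at least half of the output, since
    those are the answers where the real gaps show up.
    """
    prompts = []
    for use_case in use_cases:
        prompts.append(f"Best {category} tool for {use_case}")
    for competitor in competitors:
        prompts.append(f"{brand} vs {competitor}")
        prompts.append(f"Alternatives to {competitor}")
    prompts.append(f"Is {brand} good for {category}?")
    return prompts

# Hypothetical example values; substitute your own brand and market.
prompts = build_prompt_set(
    brand="Acme",
    competitors=["RivalCo", "OtherTool"],
    category="project management",
    use_cases=["remote teams", "agencies"],
)
```

With two use cases and two competitors this yields seven prompts, four of which are competitor-first, matching the "at least half" guideline.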

Step 2: For each prompt, ask the "why" follow-ups

For any response where you are not mentioned, or where you are mentioned but positioned poorly, immediately follow with:

  • "Why did you choose those brands?"
  • "What would make you include [Brand]?"
  • "What evidence supports that?"

Save the conversation, not just the output. The reasoning is your roadmap.
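A sketch of what "save the conversation, not just the output" looks like in practice. The `send_to_llm` function below is a stand-in for whichever model API you actually use (it just echoes here, so the structure is the point, not the answers), and the brand name in the follow-ups is a placeholder.

```python
import json

def send_to_llm(messages):
    """Placeholder for a real model call (OpenAI, Anthropic, etc.).

    Here it echoes the last user message so the sketch runs offline.
    """
    return f"[model answer to: {messages[-1]['content']}]"

WHY_FOLLOW_UPS = [
    "Why did you choose those brands?",
    "What would make you include Acme?",  # "Acme" is a stand-in brand
    "What evidence supports that?",
]

def run_diagnostic(prompt):
    """Ask the buyer-intent prompt, then the 'why' follow-ups, keeping
    the full thread -- the reasoning is the roadmap, not the ranking."""
    messages = [{"role": "user", "content": prompt}]
    messages.append({"role": "assistant", "content": send_to_llm(messages)})
    for follow_up in WHY_FOLLOW_UPS:
        messages.append({"role": "user", "content": follow_up})
        messages.append({"role": "assistant", "content": send_to_llm(messages)})
    return messages

thread = run_diagnostic("Best project management tool for agencies")

# Persist the whole conversation, not just the final answer.
with open("diagnostic_thread.json", "w") as f:
    json.dump(thread, f, indent=2)
```

The JSON file becomes your raw material for the next step: every assistant turn that names a criterion or a gap is a candidate backlog item.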

Step 3: Convert answers into an "evidence backlog"

Every reason the model gives should translate into an asset or signal you can strengthen. Examples:

  • "[Competitor] is better for enterprise" - publish enterprise case studies, security pages, procurement docs
  • "[Competitor] integrates with X" - build the integration, document it, get listed in the partner directory
  • "[Brand] is mainly for small teams" - clarify ICP messaging and pricing architecture

Step 4: Strengthen third-party corroboration

LLMs tend to reward consensus. You need more than on-site claims. Build and refresh signals like:

  • Independent reviews and comparison posts
  • Credible directory listings (with accurate descriptions)
  • Partner pages, integrations, and marketplace entries
  • Original research and statistics others cite
  • Founder or product expert content that gets referenced

This is where classic SEO, PR, and content marketing merge.

Step 5: Then bring back the tracking (but track the right things)

Now your spreadsheet can help, because it is measuring the impact of specific changes. Track:

  • Mention rate by intent cluster
  • Accuracy of brand description (does it get your positioning right?)
  • Citation presence (are you cited, and where?)
  • Competitor comparison outcomes (are you winning the stated criteria?)

If you cannot name the reason you lost, your alerts are just noise.
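Mention rate by intent cluster is straightforward to compute from the threads you saved in step 2. A minimal sketch, assuming you have tagged each saved answer with an intent cluster; the records and brand name below are illustrative toy data.

```python
from collections import defaultdict

# Toy records of saved model answers; in practice, load these from your
# diagnostic logs. The brand "Acme" and the clusters are illustrative.
RESPONSES = [
    {"cluster": "comparison",   "answer": "Acme and RivalCo both fit..."},
    {"cluster": "comparison",   "answer": "RivalCo is the stronger pick..."},
    {"cluster": "alternatives", "answer": "Consider RivalCo or OtherTool."},
    {"cluster": "alternatives", "answer": "Acme is a popular alternative."},
]

def mention_rate_by_cluster(responses, brand):
    """Share of answers in each intent cluster that mention the brand."""
    totals, hits = defaultdict(int), defaultdict(int)
    for record in responses:
        totals[record["cluster"]] += 1
        if brand.lower() in record["answer"].lower():
            hits[record["cluster"]] += 1
    return {cluster: hits[cluster] / totals[cluster] for cluster in totals}

rates = mention_rate_by_cluster(RESPONSES, "Acme")
```

Because the metric is tied to clusters rather than individual prompts, a drop now points you to a category of buyer intent to investigate, not a single noisy answer.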

What this means for SEO teams in 2025 and beyond

Suganthan's joke lands because it is painfully true: we love complexity. Complexity feels like progress. But AI search rewards clarity, consistency, and evidence.

If you are responsible for "AI visibility" at your company, your job is shifting from purely optimizing pages to optimizing understanding. That includes:

  • Making sure your positioning is unambiguous
  • Ensuring your differentiators are repeated across trusted sources
  • Closing information gaps that cause models to default to competitors
  • Testing not just outputs, but explanations

And the most practical part is this: you do not need permission, budget, or a new tool to start. Open the model your customers are using and ask, plainly, "Why them, not us?" Then keep asking until you have a list of fixable inputs.

A final challenge (in the spirit of the post)

If you have a dashboard that reports your "position" in LLM answers, try this for one week: every time you drop, do not add another tab. Start a thread with the model and collect the reasons. By Friday, you should have a concrete backlog of messaging fixes, content gaps, and third-party signals to build.

That is phase 0. Phase 2 can wait.

This blog post expands on a viral LinkedIn post by Suganthan Mohanadasan, Co-founder @ Snippet Digital // Search Journey Optimization. View the original LinkedIn post ->