
Chris Marrano's Claude Audit Trick for Facebook Ads

AI Marketing Analytics

Explore Chris Marrano's viral Claude audit workflow using Facebook Ads exports to spot hidden patterns in ecommerce accounts.

LinkedIn content, viral posts, content strategy, AI marketing analytics, Claude, Facebook Ads, ecommerce marketing, ad account audit, social media marketing

Chris Marrano recently shared something that caught my attention: "I have been auditing eight figure ecommerce ad accounts with the same Claude skill." He followed it with a promise that feels almost too simple for how powerful it is: "No dashboards. No new tools. No custom reporting. Just a simple export from Facebook Ads Manager dropped into Claude."

That line hit because it describes what most performance teams actually need right now. Not another interface, not another KPI waterfall chart, and not another attribution debate disguised as a dashboard. What we need is faster clarity.

Chris also nailed the emotional core of a good audit when he wrote that the reaction is always the same: "Wait... how did it see that?" If you have ever lived inside Ads Manager (or tried to explain what is happening in an account to a founder who wants an answer in 30 seconds), you know exactly what he means.

Below is my expanded take on what Chris is doing, why it works, and how to apply the same idea responsibly in a seven- or eight-figure Shopify brand.

The real problem with Ads Manager audits

An ad account audit is rarely blocked by missing data. It is blocked by attention.

Ads Manager is great at letting you drill down, but it is not great at helping you notice relationships across slices of data. You can click endlessly between campaigns, ad sets, placements, breakdowns, and time ranges, yet still miss the pattern that actually explains performance.

Chris summed up the advantage of an LLM perfectly:

"Patterns jump out that are almost impossible to spot when you are clicking around Ads Manager."

That is the key: a good model is not "better" at math than your spreadsheets. It is better at scanning, comparing, and surfacing anomalies across many columns and rows without getting tired.

Why a simple export can beat a fancy dashboard

Dashboards are typically built around predefined questions:

  • What is ROAS by campaign?
  • What is CPA week over week?
  • Which creative has the best CTR?

Those are useful, but they assume you already know what to look for.

A Claude-based audit (using a raw export) can support a different mode: discovery. Instead of only answering questions, it helps you find the questions you should be asking.

Chris described it in a way I like a lot:

"It does not tell you what to do. It shows you what you cannot unsee after."

That is an audit done right. Not prescriptions first, but visibility first.

What to export from Facebook Ads Manager

To make this work, the export has to be audit-friendly. You want enough granularity to diagnose, but not so much noise that the file becomes unreadable.

A practical export setup

If I were following Chris's approach, I would typically export at two levels:

  1. Ad level (last 30-90 days)
  • Spend, impressions, reach, frequency
  • CPM, CTR (link), CPC (link)
  • Adds to cart, purchases, purchase conversion value
  • Cost per purchase, ROAS
  • Creative identifiers (ad name, image/video name if available)
  • Placement breakdown (or a second export with placements)
  2. Ad set level (same date range)
  • Audience name, targeting notes if your naming is clean
  • Optimization event, bid strategy
  • Spend and conversion outcomes

If your account is complex, add one more export for time-based trends (daily) so the model can spot sudden shifts.
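Before dropping an export into Claude, it helps to confirm the file actually contains the columns the audit needs. A minimal sketch, assuming a CSV export and a hypothetical column-naming scheme (adjust `REQUIRED` to your account's actual headers):

```python
import csv
import io

# Columns an audit-friendly ad-level export should carry.
# (Hypothetical minimal set; rename to match your export's headers.)
REQUIRED = {"ad_name", "spend", "impressions", "frequency",
            "ctr_link", "purchases", "purchase_value"}

def check_export(csv_text: str) -> set:
    """Return the set of required columns missing from the export."""
    reader = csv.DictReader(io.StringIO(csv_text))
    headers = set(reader.fieldnames or [])
    return REQUIRED - headers

# Example export missing one required column:
sample = "ad_name,spend,impressions,frequency,ctr_link,purchases\nAd A,100,5000,2.1,1.2,4\n"
missing = check_export(sample)
print(sorted(missing))  # → ['purchase_value']
```

Thirty seconds of validation up front saves a round of "the model could not find purchase data" confusion later.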

What Claude can "see" that humans often miss

When Chris says patterns jump out, he is talking about relationships that are hard to notice manually. Here are examples I frequently see in audits, and why an LLM can surface them quickly.

1) Spend concentration risk

A model can quickly flag when performance is being carried by a tiny set of ads or ad sets.

  • Example: 70 percent of spend sits in 3 ads, and those ads are aging
  • Hidden risk: one creative fatigue event can tank the entire month
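This check is trivial to verify yourself once flagged. A sketch with hypothetical ad-level rows, showing how concentrated spend can be even in a "diversified" account:

```python
# Hypothetical ad-level rows: (ad_name, spend).
ads = [("Ad A", 4200.0), ("Ad B", 2100.0), ("Ad C", 700.0),
       ("Ad D", 500.0), ("Ad E", 300.0), ("Ad F", 200.0)]

def top_n_spend_share(rows, n=3):
    """Fraction of total spend carried by the n highest-spend ads."""
    spends = sorted((spend for _, spend in rows), reverse=True)
    total = sum(spends)
    return sum(spends[:n]) / total if total else 0.0

share = top_n_spend_share(ads)
print(f"Top 3 ads carry {share:.0%} of spend")
```

Here the top 3 of 6 ads carry 87.5 percent of spend, exactly the concentration profile worth flagging.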

2) The frequency cliff

Humans see frequency, but often fail to connect it to downstream efficiency across many segments.

  • Pattern: as frequency crosses a threshold (say 2.5 to 3.5), CPA spikes
  • The insight is not "lower frequency" but "this audience is saturating faster than you think"
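The frequency-to-CPA relationship becomes obvious once you bucket it. A sketch with hypothetical ad-set rows (frequency, spend, purchases) and illustrative bucket edges:

```python
# Hypothetical ad-set rows: (frequency, spend, purchases).
rows = [(1.4, 900.0, 30), (1.8, 800.0, 25), (2.2, 700.0, 20),
        (2.9, 950.0, 15), (3.4, 600.0, 7), (3.8, 500.0, 5)]

def cpa_by_frequency_bucket(rows, edges=(2.0, 3.0)):
    """Group rows into frequency buckets and return CPA per bucket."""
    buckets = {}
    for freq, spend, purchases in rows:
        label = "<2.0" if freq < edges[0] else "2.0-3.0" if freq < edges[1] else ">3.0"
        s, p = buckets.get(label, (0.0, 0))
        buckets[label] = (s + spend, p + purchases)
    return {label: round(s / p, 2) for label, (s, p) in buckets.items() if p}

print(cpa_by_frequency_bucket(rows))
# CPA roughly doubles once frequency crosses 3.0 in this sample
```

In this sample, CPA climbs from about $31 below frequency 2.0 to about $92 above 3.0: the "cliff" in one table.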

3) Creative fatigue masked by blended results

Blended campaign ROAS can look stable while individual ads decay.

  • Pattern: CTR falls gradually, CPM rises, CPC rises, but purchases hold because spend shifts to retargeting
  • This is where you get the "Wait... how did it see that?" moment because the trend is spread across multiple columns and time periods
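Fatigue hiding inside blended numbers is easy to expose with a per-ad trend. A sketch over hypothetical weekly CTR values, using a least-squares slope to flag decaying creatives:

```python
# Hypothetical weekly link CTR (%) per ad. Blended CTR can look stable
# while individual creatives decay; a simple slope per ad exposes it.
weekly_ctr = {
    "Ad A": [1.8, 1.6, 1.3, 1.1],   # decaying
    "Ad B": [1.2, 1.2, 1.3, 1.2],   # stable
}

def slope(values):
    """Least-squares slope of values over equally spaced weeks."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Flag ads losing more than 0.1 CTR points per week (illustrative threshold).
fading = {ad: round(slope(v), 3) for ad, v in weekly_ctr.items() if slope(v) < -0.1}
print(fading)  # → {'Ad A': -0.24}
```

The same idea extends to CPM and CPC trends; the point is per-ad trajectories, not blended averages.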

4) Placement or device mismatch

Placement breakdowns are tedious to analyze manually.

  • Pattern: Reels spend is high but conversion value per click is low
  • Or: Android has worse CVR but better AOV, changing how you should interpret CPA
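The Reels pattern above is a one-liner once the breakdown is in tabular form. A sketch with hypothetical placement rows (spend, link clicks, conversion value):

```python
# Hypothetical placement breakdown: (placement, spend, link_clicks, conv_value).
placements = [
    ("Feed",    3000.0, 2400, 9600.0),
    ("Reels",   2500.0, 3100, 3100.0),
    ("Stories",  800.0,  600, 1500.0),
]

def value_per_click(rows):
    """Conversion value per link click, by placement."""
    return {name: round(value / clicks, 2)
            for name, spend, clicks, value in rows if clicks}

print(value_per_click(placements))
# Reels earns $1.00 per click vs $4.00 on Feed in this sample,
# despite carrying a large share of spend.
```

Cheap clicks that do not convert are exactly the kind of relationship that hides in Ads Manager's default views.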

5) Naming conventions reveal operational problems

This is underrated. If your naming is inconsistent, the model will still notice.

  • Pattern: multiple "testing" ad sets running for months
  • Pattern: duplicated audiences under different names, causing internal competition
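Even before any metrics, a crude scan of naming surfaces operational drift. A sketch with hypothetical ad-set names and days-active counts:

```python
# Hypothetical ad-set names mapped to days active.
ad_sets = {
    "US | Lookalike 1% | testing": 94,
    "US | Broad | testing": 12,
    "EU | Retargeting 30d": 180,
}

# "Tests" still running after 30 days are worth questioning
# (the 30-day cutoff is an illustrative threshold, not a rule).
stale_tests = [name for name, days in ad_sets.items()
               if "testing" in name.lower() and days > 30]
print(stale_tests)  # → ['US | Lookalike 1% | testing']
```

A model scanning the export does the same thing implicitly; the naming column is data too.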

A simple prompt framework that matches Chris's intent

Chris mentioned he uses the same "skill" each time. You can think of that as a repeatable prompt and checklist.

Here is a lightweight framework that stays true to his idea (show me what I cannot unsee) without pretending the model is an omniscient media buyer:

  1. Set the role and goal
    Ask Claude to act as a senior paid social auditor and identify non-obvious patterns, anomalies, and risk areas.

  2. Define the constraints
    Tell it: do not propose tactics until after identifying patterns and evidence.

  3. Ask for ranked findings with proof
    Require each finding to include:

  • the metric relationship (what changed with what)
  • the segments involved (which campaigns, ads, placements)
  • the size of impact (percent of spend, percent of conversions)
  4. Ask for follow-up questions
    A good audit ends with better questions. For example:
  • Are we mixing prospecting and retargeting in one campaign?
  • Did attribution settings change?
  • Were there offer or landing page changes on the same dates?
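The four steps above can be packed into one reusable template. This is a sketch of what a repeatable "skill" might look like, not Chris's actual prompt; the wording is mine:

```python
# A reusable audit prompt built from the four-step framework above.
# (Illustrative wording; adapt to your own checklist.)
AUDIT_PROMPT = """\
Act as a senior paid social auditor reviewing the attached Facebook Ads export.

Constraints:
- Do not propose tactics until after identifying patterns and evidence.

For each finding, include:
1. The metric relationship (what changed with what).
2. The segments involved (campaigns, ads, placements).
3. The size of impact (percent of spend, percent of conversions).

Rank findings by impact, then end with the follow-up questions a human
operator should investigate (attribution changes, offer or landing page
changes, prospecting/retargeting mix).
"""

print(AUDIT_PROMPT)
```

Keeping the prompt fixed is what makes audits comparable across accounts and months.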

What this approach is not (and how to use it safely)

It is worth adding guardrails, because an LLM can hallucinate or overstate causality.

Treat findings as hypotheses, not truth

If Claude flags a pattern, validate it:

  • re-check in Ads Manager with the same date range
  • confirm tracking integrity (Pixel, CAPI, Shopify)
  • compare against external events (promo calendar, inventory, site speed)

Be careful with sensitive data

If you are exporting customer data, do not upload PII. Stick to aggregated performance exports.
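If there is any chance a file contains identifying columns, strip them before upload. A sketch that drops hypothetical PII columns from a CSV export (extend `PII_COLUMNS` to whatever identifiers your exports actually contain):

```python
import csv
import io

# Columns that should never leave your machine (illustrative list).
PII_COLUMNS = {"email", "phone", "customer_name"}

def strip_pii(csv_text: str) -> str:
    """Return the CSV with any PII columns dropped."""
    reader = csv.DictReader(io.StringIO(csv_text))
    keep = [c for c in (reader.fieldnames or []) if c.lower() not in PII_COLUMNS]
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=keep)
    writer.writeheader()
    for row in reader:
        writer.writerow({c: row[c] for c in keep})
    return out.getvalue()

raw = "ad_name,spend,email\nAd A,100,buyer@example.com\n"
print(strip_pii(raw))  # email column is gone
```

Aggregated performance data is all the audit needs anyway.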

Do not let the model "drive"

Chris's framing is the right one:

"It does not tell you what to do."

Use it to see, then decide with your operator judgment.

Why this resonates with seven- and eight-figure Shopify teams

At that scale, your biggest constraint is not access to metrics. It is speed to insight.

  • Founders want clarity fast.
  • Teams need repeatable processes.
  • Agencies and in-house buyers need a shared language for what is actually happening.

A simple export plus a consistent Claude workflow can become a common audit layer across accounts, months, and team members. That is likely why Chris can run the "same Claude skill" and get consistent reactions.

If you try this, focus on one goal: reduce the time it takes to spot the few patterns that matter, then use humans to choose the best response.

This blog post expands on a viral LinkedIn post by Chris Marrano, Scaling 7 & 8 Figure DTC Brands Profitably | Building AI-enhanced systems | Founder@BlueWaterMarketing | Founder@ADIQ.AI. View the original LinkedIn post →