Neil Patel on Incremental vs Attributed Conversions

Marketing Analytics

A practical breakdown of attributed vs incremental conversions, and how to measure channel impact when analytics misses the full story.

Tags: LinkedIn content, viral posts, content strategy, marketing analytics, marketing attribution, incrementality, conversion tracking, performance marketing, social media marketing

Neil Patel recently shared something that caught my attention: "Some marketing channels show they drive many direct conversions. And some drive conversions, but they don't always show up the way you want in your analytics. Check out how channels perform from incremental versus attributed conversions."

That short post nails a problem most growth teams run into sooner or later: we confuse what gets credit in analytics with what actually caused growth. Neil is pointing to the gap between attributed conversions (what your tools can assign to a channel) and incremental conversions (the lift you would not have gotten without that channel).

Below, I want to expand on Neil's point and translate it into an actionable way to think about channel performance, reporting, and budget decisions.

Attributed conversions: what your analytics can see

Attribution is a reporting system. It answers: "Which touchpoint gets credit for this conversion based on the rules and data available?"

Most teams live inside some mix of these:

  • Last-click (common in many analytics views)
  • First-click
  • Linear, time decay, position-based
  • Data-driven attribution (when available)
  • Platform attribution (Google Ads, Meta, TikTok, etc.)

This is useful, but it is not the same as causality. Attributed conversions are constrained by tracking, identity resolution, and the model rules. If the data is missing, the conversion cannot be credited correctly, even if the channel played a major role.

Why attribution often undercounts certain channels

If a channel influences intent but does not close the deal, it tends to look weak in last-click reports. Common reasons include:

  • Cross-device journeys (ad on mobile, purchase on desktop)
  • Cookie loss and consent limitations
  • Walled gardens with partial visibility
  • Long consideration cycles (B2B, high AOV ecommerce)
  • Brand demand that converts through direct or organic later

In other words, your dashboards can end up rewarding the channels that are closest to the conversion event, not necessarily the channels that created the demand.

Incremental conversions: what actually changed because of the channel

Incrementality asks a different question: "If we stopped this channel, how many conversions would we lose?"

That counterfactual is the heart of Neil's point about incremental versus attributed conversions. A channel can look like it drives tons of conversions in attribution and still be mostly cannibalizing conversions that would have happened anyway. The reverse is also true: a channel can look invisible in attribution and still be driving meaningful lift.

Key idea: Attribution is about assigning credit. Incrementality is about proving impact.
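To make the counterfactual concrete, here is a minimal sketch (with made-up numbers) of how a simple holdout estimates incremental conversions: scale the control group's conversion rate up to the size of the exposed group, and the difference is the lift the channel actually caused.

```python
def incremental_lift(test_conversions, test_size,
                     control_conversions, control_size):
    """Estimate lift from a holdout: scale the control group's conversion
    rate to the test group's size (the counterfactual), then subtract."""
    counterfactual = (control_conversions / control_size) * test_size
    return test_conversions - counterfactual

# Hypothetical numbers: 10,000 users exposed to the channel, 10,000 held out.
lift = incremental_lift(test_conversions=600, test_size=10_000,
                        control_conversions=450, control_size=10_000)
print(f"Estimated incremental conversions: {lift:.0f}")  # 150
```

Note that attribution might have credited this channel with all 600 conversions; the holdout suggests only 150 of them were truly caused by it.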

Examples of the mismatch

Here are a few patterns I see repeatedly:

  1. Brand search looks amazing in last-click
    Someone sees a YouTube ad, a podcast sponsorship, or a LinkedIn post, then later searches your brand and converts. Analytics credits paid search or organic search, while the earlier channel did the persuading.

  2. Retargeting looks like a hero
    Retargeting often captures users who already intend to buy. It can show high ROAS, but the incremental lift may be smaller than the report suggests.

  3. Upper-funnel channels look weak
    YouTube, connected TV, influencer, PR, podcasts, and some LinkedIn campaigns can drive significant demand that later converts via direct, organic, or branded search. Attribution can miss or minimize that contribution.

What to do with Neil Patel's insight: measure both views

If you only look at attributed conversions, you risk over-investing in channels that harvest demand and under-investing in channels that create it. If you only look at incrementality tests, you may move too slowly or miss day-to-day optimization signals. The practical answer is to run a two-layer measurement approach.

Layer 1: Use attribution for optimization, not truth

Attribution is still valuable, especially for:

  • Creative testing (which messages drive more downstream action)
  • Landing page and funnel improvements
  • Keyword and audience refinement
  • Week-to-week pacing

But treat these reports as directional. A good operating principle is: attribution helps you steer, incrementality helps you decide.

Make attribution less misleading

A few improvements that reduce bad conclusions:

  • Separate brand vs non-brand search in reporting
  • Track micro-conversions (email signups, demo starts) to capture earlier intent
  • Use consistent UTM governance
  • Compare platform-reported conversions vs analytics-reported conversions, and document the gap
  • Watch "direct" and "organic" trends when you change spend elsewhere

Layer 2: Use incrementality to decide budgets

Incrementality can sound intimidating, but you can start simple. The goal is to estimate lift, not to build a perfect model.

Methods to measure incrementality (from simplest to strongest)

1) Geo tests (regional holdouts)

Run ads in some regions and pause in matched regions, then compare conversion changes. This works well for larger businesses with enough volume.
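One common way to read a geo test is a difference-in-differences estimate, which strips out seasonality shared by both groups. A sketch with hypothetical weekly numbers:

```python
def diff_in_diff(test_before, test_after, control_before, control_after):
    """Lift = change in test regions minus change in matched control
    regions, so trends affecting both groups cancel out."""
    return (test_after - test_before) - (control_after - control_before)

# Made-up weekly conversion counts for matched regions:
lift = diff_in_diff(test_before=1_000, test_after=1_150,
                    control_before=1_000, control_after=1_030)
print(f"Estimated weekly incremental conversions: {lift}")  # 120
```

A naive before/after comparison of the test regions alone would have claimed 150; the control regions show 30 of those would have happened anyway.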

2) Time-based tests (on-off or spend pulses)

Pause or reduce spend for a period, then restore it. It is noisier than geo tests but can still reveal whether a channel moves outcomes beyond normal variation.

3) Platform lift studies

Many ad platforms offer conversion lift or brand lift studies. They are not perfect, but they are often better than trusting last-click alone.

4) Matched-market experiments

More rigorous versions of geo tests that use statistical matching and controls. Great for high-stakes budget decisions.

5) Marketing mix modeling (MMM)

MMM uses historical data to estimate channel contribution, often helpful when user-level tracking is limited. It is best for strategic allocation, not daily optimization.
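At its core, MMM is a regression of outcomes on channel spend. The toy sketch below (entirely hypothetical data) shows only that core idea; real MMMs add adstock, saturation curves, seasonality, and far more history than five weeks.

```python
import numpy as np

# Hypothetical weekly data: columns are search, social, video spend ($k).
spend = np.array([
    [10, 5, 0],
    [12, 6, 2],
    [ 8, 4, 4],
    [15, 7, 6],
    [11, 5, 3],
])
conversions = np.array([520, 640, 480, 790, 590])

# Least-squares fit with an intercept for baseline (non-marketing) demand.
X = np.column_stack([np.ones(len(spend)), spend])
coef, *_ = np.linalg.lstsq(X, conversions, rcond=None)
baseline, per_channel = coef[0], coef[1:]
print("Estimated conversions per $k of spend, by channel:",
      per_channel.round(1))
```

The estimated coefficients are the model's view of each channel's marginal contribution, which is why MMM suits strategic allocation rather than daily optimization.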

What to report from incrementality

When you run these tests, translate results into decisions:

  • Incremental conversions (lift)
  • Incremental CPA
  • Incremental ROAS
  • Confidence or uncertainty range
  • What changed in other channels (for example, did branded search drop when YouTube paused?)
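Turning a lift result into the first three metrics above is simple arithmetic. A sketch with hypothetical numbers:

```python
def incremental_metrics(spend, incremental_conversions,
                        revenue_per_conversion):
    """Incremental CPA and incremental ROAS from a lift-test result."""
    icpa = spend / incremental_conversions
    iroas = (incremental_conversions * revenue_per_conversion) / spend
    return icpa, iroas

# Say $12,000 of spend drove an estimated 150 incremental conversions
# worth $120 each (made-up figures).
icpa, iroas = incremental_metrics(spend=12_000,
                                  incremental_conversions=150,
                                  revenue_per_conversion=120)
print(f"Incremental CPA: ${icpa:.2f}, incremental ROAS: {iroas:.2f}x")
# Incremental CPA: $80.00, incremental ROAS: 1.50x
```

The point of using incremental (rather than attributed) conversions in the denominator is that the resulting CPA and ROAS describe what your spend actually bought.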

A simple framework: the Incremental vs Attributed matrix

Neil's post hints at a very practical way to evaluate channels. I like to map each channel into one of four buckets:

1) High attributed, high incremental

Keep investing. These are your proven growth drivers.

2) High attributed, low incremental

Be careful. Often includes brand search and some retargeting. Optimize for efficiency, cap frequency, tighten audiences, and avoid over-crediting it in budget meetings.

3) Low attributed, high incremental

These are your hidden winners. Upper-funnel often lands here. Protect budget with lift tests and use leading indicators (reach, engaged sessions, branded search lift) alongside experiments.

4) Low attributed, low incremental

Candidates to cut, pause, or rethink. But validate with a test before you eliminate something that might have delayed effects.
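The four buckets above can be sketched as a simple classifier. The thresholds here are placeholders; in practice you would set them from your own channel benchmarks and lift-test confidence.

```python
def channel_bucket(attributed, incremental,
                   attr_threshold, incr_threshold):
    """Map a channel's attributed and incremental conversion counts
    into one of the four matrix buckets."""
    high_attr = attributed >= attr_threshold
    high_incr = incremental >= incr_threshold
    if high_attr and high_incr:
        return "proven growth driver: keep investing"
    if high_attr and not high_incr:
        return "likely demand harvester: optimize for efficiency"
    if not high_attr and high_incr:
        return "hidden winner: protect with lift tests"
    return "candidate to cut: validate with a test first"

# A channel that attribution barely credits but lift tests love:
print(channel_bucket(attributed=50, incremental=400,
                     attr_threshold=200, incr_threshold=200))
# hidden winner: protect with lift tests
```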

Common mistakes teams make (and how to avoid them)

Mistake 1: Treating last-click ROAS as the budget allocator

Last-click tends to overweight the bottom of the funnel. Use it to manage execution, not to determine where growth comes from.

Mistake 2: Running "tests" without a clear counterfactual

If you change five things at once, you cannot attribute lift to one channel. Define the holdout, the duration, and the success metric before you start.

Mistake 3: Ignoring lag

Some channels have delayed impact. Build in a post-period to catch conversions that happen after exposure.

Mistake 4: Measuring the wrong outcome

If your sales cycle is long, the true outcome might be qualified pipeline, not purchases in a 7-day window. Choose metrics that match reality.

A practical checklist for your next reporting cycle

If you want to apply Neil Patel's point immediately, here is a lightweight plan you can execute this month:

  1. Add an "Attribution view" and an "Incrementality view" to your performance deck
  2. Split brand vs non-brand search everywhere
  3. Identify one channel you suspect is under-credited (often video, influencer, or LinkedIn)
  4. Design a small holdout: geo or time-based
  5. Report both: attributed conversions AND incremental lift
  6. Update budget rules: optimize with attribution, allocate with incrementality

Neil's post is short, but the implication is big: if you want sustainable growth, you have to stop equating visibility in analytics with real impact.

This blog post expands on a viral LinkedIn post by Neil Patel, Co-Founder at Neil Patel Digital.