
Causal Inference in Marketing Attribution: Beyond Correlation


Last Updated: October 13, 2025

Thursday evening. You're preparing your monthly report for the board.

Investor asks: "You spent €200K on ads with 4.5x blended ROAS (return on ad spend). That should be €900K in revenue. But you only did €600K. What's going on?"

You try to explain: "Well, some of that revenue would have happened anyway... organic traffic, direct visits, people who already knew about us..."

Investor hears: "I don't actually know which marketing works."

This is the fundamental problem with traditional attribution.

It measures correlation, not causation. It tells you which touchpoints were present when someone bought, not which touchpoints caused them to buy.

The difference? About €300K in your example above.

Let's fix it.

The Correlation vs. Causation Problem

Here's a thought experiment:

You run a branded search campaign on Google. Someone searches "Causality Engine," clicks your ad, and buys.

Google Ads says: This campaign drove the sale. Full credit to that click.

But here's the question: Would they have bought anyway if you didn't run the ad?

Probably. They were already searching for your brand name.

Traditional attribution: Gives full credit to the ad (correlation)
Causal inference: Recognizes most of that revenue would have happened anyway (causation)

This is the difference between looking smart in dashboards and actually being profitable.

Real Example: The €180K Mistake

Fashion brand was celebrating their branded search campaign:

  • Spend: €15K/month
  • Revenue: €78K/month
  • ROAS: 5.2x
  • Conclusion: "Our best-performing channel!"

Then they ran an incrementality test. Paused the campaign for 2 weeks.

Result:

  • Branded search traffic: Dropped 15% (not 100%, since organic results were still there)
  • Revenue attributed to branded search: €78K → €67K per month
  • Incremental revenue from the campaign: €11K/month
  • Ad spend: €15K/month
  • Net impact: -€4K/month (they were losing money)

True incremental ROAS? 0.73x

They were spending €15K/month to generate €11K in incremental revenue. Losing €4K/month. €48K/year.

Over 3 years? €144K burned.

But their dashboard said 5.2x ROAS. Brilliant.

What Is Causal Inference?

Simple definition: Measuring what would have happened without your marketing, then calculating the difference.

The formula:

Incremental Impact = (Outcome with Marketing) - (Outcome without Marketing)

Example:

  • With TikTok ads: 1,000 sales
  • Without TikTok ads: 750 sales (estimated via control group)
  • Incremental impact: 250 sales
  • TikTok ad spend: €25K
  • Incremental revenue: €50K
  • True incremental ROAS: 2.0x

This is the only number that matters for profitability.
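If you want this in a script rather than a spreadsheet, the arithmetic is a one-liner. A minimal sketch in Python, using the TikTok figures above (the €200 average order value is the one implied by the example, and the function name is ours, not any tool's API):

```python
def incremental_roas(revenue_with_ads: float,
                     revenue_without_ads: float,
                     ad_spend: float) -> float:
    """Incremental ROAS = revenue caused by the ads / ad spend."""
    incremental_revenue = revenue_with_ads - revenue_without_ads
    return incremental_revenue / ad_spend

# TikTok example above: 1,000 sales with ads vs. 750 without,
# at the implied €200 average order value, on €25K of spend.
aov = 200
print(incremental_roas(1_000 * aov, 750 * aov, 25_000))  # 2.0
```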

Why Traditional Attribution Fails

Problem 1: It assigns credit by position, not by impact

Customer journey:

  1. Sees TikTok ad (awareness)
  2. Clicks Meta retargeting ad (consideration)
  3. Googles brand name (intent)
  4. Clicks branded search ad (conversion)

Last-click attribution: Google gets 100% credit
Reality: TikTok started the journey, Meta nurtured it, Google just collected the sale

Problem 2: It can't separate organic from paid

Someone was going to buy anyway (organic demand). Your ad happened to be there. Traditional attribution gives the ad full credit.

Problem 3: It over-credits retargeting

Retargeting targets people who already visited your site. Many would have returned anyway. Traditional attribution assumes 100% of retargeting conversions are incremental.

They're not. Usually 40-60% would have happened anyway.

The Wednesday Morning Optimization Dilemma (Solved)

Remember this scenario?

Campaign A: 2.1x ROAS, cold prospecting
Campaign B: 5.2x ROAS, retargeting

Traditional thinking: Cut A, scale B.

Causal inference reveals:

Campaign A (Prospecting):

  • Reported ROAS: 2.1x
  • Incremental ROAS: 1.9x (90% of conversions are truly incremental)
  • Why: These are new customers who wouldn't have found you otherwise

Campaign B (Retargeting):

  • Reported ROAS: 5.2x
  • Incremental ROAS: 2.3x (only 44% of conversions are truly incremental)
  • Why: Many would have returned and bought anyway

Actual best performer: Campaign B still wins, but not by as much as you thought. And Campaign A is more valuable than it looks.

Cut Campaign A and your Campaign B performance will tank in 3 weeks when you run out of people to retarget.
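The adjustment itself is simply reported ROAS multiplied by the share of conversions that are truly incremental. A quick sketch with the two campaigns above (in practice the incrementality share comes from a lift test, not a guess):

```python
campaigns = {
    # name: (reported ROAS, incrementality share from a lift test)
    "A (prospecting)": (2.1, 0.90),
    "B (retargeting)": (5.2, 0.44),
}

for name, (reported, incrementality) in campaigns.items():
    print(f"Campaign {name}: reported {reported:.1f}x -> incremental {reported * incrementality:.1f}x")
# Campaign A (prospecting): reported 2.1x -> incremental 1.9x
# Campaign B (retargeting): reported 5.2x -> incremental 2.3x
```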

How to Implement Causal Inference

Method 1: Incrementality Testing (The Gold Standard)

How it works:

  1. Split your audience: Test group (sees ads) vs. Control group (no ads)
  2. Run for 2-4 weeks (longer for low-volume)
  3. Measure the difference in conversion rate and revenue between the groups
  4. Calculate incremental impact

Example setup:

Test Group (100,000 people):

  • See your TikTok ads normally
  • Conversions: 3,200 (3.2% conversion rate)
  • Revenue: €160,000

Control Group (100,000 people):

  • Don't see your TikTok ads (excluded via platform)
  • Conversions: 2,100 (2.1% conversion rate)
  • Revenue: €105,000

Calculation:

  • Incremental conversions: 3,200 - 2,100 = 1,100
  • Incremental revenue: €160,000 - €105,000 = €55,000
  • Ad spend: €25,000
  • Incremental ROAS: €55,000 ÷ €25,000 = 2.2x

Interpretation: Your TikTok ads drove €55K in truly incremental revenue. The other €105K would have happened anyway (organic, other channels, direct).
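Before acting on a lift test, check that the gap between groups is bigger than noise. A minimal sketch of the calculation on the numbers above, using a standard two-proportion z-test (assumes scipy is available; groups here are equal-sized, so raw counts can be compared directly):

```python
from math import sqrt
from scipy.stats import norm

def lift_test(test_conv, test_n, ctrl_conv, ctrl_n,
              revenue_test, revenue_ctrl, spend):
    """Incremental conversions, revenue, ROAS, plus a two-proportion z-test p-value."""
    p_test, p_ctrl = test_conv / test_n, ctrl_conv / ctrl_n
    # Pooled standard error for the difference in conversion rates
    p_pool = (test_conv + ctrl_conv) / (test_n + ctrl_n)
    se = sqrt(p_pool * (1 - p_pool) * (1 / test_n + 1 / ctrl_n))
    z = (p_test - p_ctrl) / se
    return {
        "incremental_conversions": test_conv - ctrl_conv,  # scale control first if group sizes differ
        "incremental_revenue": revenue_test - revenue_ctrl,
        "incremental_roas": (revenue_test - revenue_ctrl) / spend,
        "p_value": 2 * norm.sf(abs(z)),
    }

print(lift_test(3_200, 100_000, 2_100, 100_000, 160_000, 105_000, 25_000))
# incremental_conversions: 1100, incremental_revenue: 55000,
# incremental_roas: 2.2, p_value: far below 0.05 at this volume
```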

How to Set Up Incrementality Tests

On Meta:

  1. Go to Experiments in Ads Manager
  2. Create "Conversion Lift Test"
  3. Select campaign to test
  4. Set test/control split (usually 90/10 or 80/20)
  5. Run for 2-4 weeks
  6. Review results in Experiments dashboard

On Google:

  1. Use "Campaign Experiments" (formerly Drafts & Experiments)
  2. Create experiment from existing campaign
  3. Set control group percentage
  4. Run for 2-4 weeks
  5. Compare performance

On TikTok:

  1. Currently no native incrementality testing
  2. Use geo-based testing (run ads in some regions, not others)
  3. Or use third-party tools (Causality Engine, GeoLift)

Best practices:

  • Minimum audience size: 200K+ for reliable results (see the sample-size sketch below)
  • Test duration: 2-4 weeks (longer for low conversion volume)
  • What to test: High-spend channels first, then retargeting, then branded search
  • Frequency: Quarterly for each major channel
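Where does a figure like 200K+ come from? It is a power calculation: the smaller the lift you want to detect, the more people you need per group. A rough sketch using the standard formula for comparing two conversion rates (the 2% baseline rate and 10% minimum detectable lift below are assumptions; replace them with your own):

```python
from math import ceil
from scipy.stats import norm

def sample_size_per_group(baseline_cr, min_detectable_lift,
                          alpha=0.05, power=0.80):
    """People needed per group to detect a relative lift in conversion rate."""
    p1 = baseline_cr
    p2 = baseline_cr * (1 + min_detectable_lift)
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided test
    z_beta = norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Assumption: 2% baseline conversion rate, detect a 10% relative lift
print(sample_size_per_group(0.02, 0.10))  # ~81,000 per group, ~160K total
```

At a 2% baseline that lands in the same ballpark as the 200K+ guideline; lower conversion rates or smaller expected lifts push the number up fast.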

Method 2: Geo-Based Testing

How it works: Run ads in some geographic regions, not in others. Compare performance.

Example:

Test regions (ads running):

  • London, Manchester, Birmingham
  • Conversions: 450
  • Population: 15M
  • Conversion rate: 0.003%

Control regions (no ads):

  • Leeds, Sheffield, Liverpool
  • Conversions: 180
  • Population: 6M
  • Conversion rate: 0.003%

Wait, same conversion rate? That means your ads had zero incremental impact. Ouch.
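The geo comparison is the same arithmetic done per capita, so regions of different sizes stay comparable. A minimal sketch on the figures above (a real geo test should also compare each region against its own pre-test baseline):

```python
def geo_lift(test_conversions, test_population, ctrl_conversions, ctrl_population):
    """Relative lift in per-capita conversion rate, test regions vs. control regions."""
    test_rate = test_conversions / test_population
    ctrl_rate = ctrl_conversions / ctrl_population
    return (test_rate - ctrl_rate) / ctrl_rate

# Example above: London/Manchester/Birmingham vs. Leeds/Sheffield/Liverpool
lift = geo_lift(450, 15_000_000, 180, 6_000_000)
print(f"{lift:+.0%}")  # +0%: no detectable incremental impact
```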

When to use geo testing:

  • Platforms without native incrementality testing (TikTok, YouTube)
  • Testing brand awareness campaigns (long-term impact)
  • Offline businesses (test in-store traffic by region)

Limitations:

  • Requires similar regions (demographics, seasonality)
  • Doesn't work for small geographic footprints
  • Can't isolate channel effects if running multiple channels

Method 3: Time-Based Testing (Holdout Tests)

How it works: Pause a channel for 2-4 weeks, measure impact on total revenue.

Example:

Weeks 1-4 (ads running):

  • TikTok spend: €10K/week
  • Total revenue: €50K/week
  • TikTok-attributed revenue: €22K/week

Weeks 5-6 (ads paused):

  • TikTok spend: €0
  • Total revenue: €44K/week
  • Revenue drop: €6K/week

Calculation:

  • TikTok claimed: €22K/week
  • Actual incremental: €6K/week
  • Incrementality: 27% (only 27% of attributed revenue was truly incremental)
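The holdout arithmetic in one place, as a minimal sketch on the weekly figures above:

```python
def holdout_incrementality(revenue_during_ads, revenue_during_pause,
                           attributed_revenue):
    """Share of platform-attributed revenue that was actually incremental."""
    actual_incremental = revenue_during_ads - revenue_during_pause
    return actual_incremental / attributed_revenue

# Weekly figures from the example above
share = holdout_incrementality(50_000, 44_000, 22_000)
print(f"{share:.0%}")  # 27%: only 27% of attributed revenue was truly incremental
```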

When to use holdout tests:

  • Quick validation of channel performance
  • When you can't set up proper test/control splits
  • Testing branded search, retargeting (channels likely to over-report)

Limitations:

  • Doesn't account for seasonality (run during stable periods)
  • Short-term only (can't measure long-term brand impact)
  • Risky for high-performing channels (might lose revenue during test)

Method 4: Marketing Mix Modeling (MMM)

How it works: Statistical modeling that analyzes historical data to determine each channel's incremental contribution.

What it does:

  • Analyzes 1-3 years of marketing spend and revenue data
  • Uses regression analysis to isolate each channel's impact
  • Accounts for seasonality, external factors, lag effects
  • Outputs: Incremental ROAS per channel, optimal budget allocation

Example output:

| Channel | Reported ROAS | Incremental ROAS | Incrementality % |
|---|---|---|---|
| TikTok Prospecting | 2.8x | 2.5x | 89% |
| Meta Prospecting | 3.2x | 2.7x | 84% |
| Meta Retargeting | 5.1x | 2.3x | 45% |
| Google Branded Search | 5.8x | 0.9x | 16% |
| Google Shopping | 4.2x | 3.1x | 74% |
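Professional MMMs are typically Bayesian models with saturation curves and seasonality terms, but the core idea is a regression of revenue on (adstocked) channel spend. A toy sketch of that core idea only, on made-up weekly data; the channel names, coefficients, and decay rate are illustrative assumptions, not a recommended setup:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def adstock(spend, decay=0.5):
    """Carry part of each week's spend into following weeks (geometric adstock)."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for i, s in enumerate(spend):
        carry = s + decay * carry
        out[i] = carry
    return out

rng = np.random.default_rng(0)
weeks = 104
# Made-up weekly spend per channel (in €)
spend = {
    "tiktok": rng.uniform(5_000, 15_000, weeks),
    "meta": rng.uniform(10_000, 30_000, weeks),
    "google_brand": rng.uniform(2_000, 5_000, weeks),
}
# Made-up revenue: baseline demand + channel effects + noise
revenue = (40_000
           + 2.5 * adstock(spend["tiktok"])
           + 2.7 * adstock(spend["meta"])
           + 0.9 * adstock(spend["google_brand"])
           + rng.normal(0, 5_000, weeks))

X = np.column_stack([adstock(s) for s in spend.values()])
model = LinearRegression().fit(X, revenue)
for channel, coef in zip(spend, model.coef_):
    print(f"{channel}: estimated incremental revenue per € of adstocked spend ≈ {coef:.2f}")
```

On real data the hard part is everything this sketch leaves out: collinear spend across channels, seasonality, promotions, and diminishing returns. That is what you pay an MMM vendor for.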

When to use MMM:

  • Spending €100K+/month across multiple channels
  • Want to understand long-term effects (brand building)
  • Can't run incrementality tests (too disruptive)
  • Need board-level strategic insights

Limitations:

  • Requires 1-3 years of clean data
  • Expensive (€20K-100K for professional MMM)
  • Less granular than incrementality tests (channel-level, not campaign-level)
  • Backward-looking (tells you what worked, not what will work)

The Friday Morning "Should I Pause This?" Moment (Solved with Causal Inference)

You're looking at that TikTok campaign. €7,000 spent, 2.3x ROAS.

Traditional thinking: "2.3x seems low, maybe pause it?"

Causal inference approach:

  1. Check incrementality: Is this prospecting or retargeting?
  2. Prospecting: Likely 80-90% incremental → True ROAS probably 1.8-2.1x
  3. Retargeting: Likely 40-60% incremental → True ROAS probably 0.9-1.4x
  4. Decision: If prospecting, keep running. If retargeting, pause or optimize.

Better yet, run a quick holdout test:

  • Pause for 1 week
  • Measure impact on total revenue
  • If revenue drops significantly → campaign is incremental, restart it
  • If revenue stays flat → campaign was stealing credit, leave it paused

Now you're making the €7K decision based on causation, not correlation.
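If you have not run a test yet, you can at least sanity-check the call with the rule-of-thumb incrementality ranges above. A rough sketch (the ranges are this article's typical figures, not measured values; a holdout test beats any heuristic):

```python
# Rule-of-thumb incrementality ranges from this article (replace with your own test results)
TYPICAL_INCREMENTALITY = {
    "prospecting": (0.80, 0.90),
    "retargeting": (0.40, 0.60),
}

def estimated_incremental_roas(reported_roas, campaign_type):
    low, high = TYPICAL_INCREMENTALITY[campaign_type]
    return reported_roas * low, reported_roas * high

low, high = estimated_incremental_roas(2.3, "prospecting")
print(f"Likely incremental ROAS: {low:.1f}x to {high:.1f}x")  # 1.8x to 2.1x
```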

Case Studies: Brands That Implemented Causal Inference

Case Study 1: Beauty Brand Saves €144K/Year

Problem: Spending €15K/month on branded search with 5.2x reported ROAS. Looked like best channel.

Causal inference test:

  1. Ran incrementality test (paused for 2 weeks)
  2. Discovered only 16% of conversions were truly incremental
  3. True incremental ROAS: 0.83x (losing money)

Action:

  • Cut branded search budget from €15K → €3K/month
  • Reallocated €12K to prospecting channels

Result:

  • Saved: €144K/year on branded search
  • Incremental revenue from reallocation: €180K/year
  • Net impact: +€324K/year

Case Study 2: Fashion Brand Scales TikTok 4x

Problem: TikTok showing 2.4x ROAS. Looked weak compared to Meta (4.1x) and Google (5.2x). Considering cutting it.

Causal inference test:

  1. Ran geo-based test (ads in 50% of regions)
  2. Discovered TikTok was driving 85% incremental conversions
  3. True incremental ROAS: 2.0x (vs. Meta at 2.3x after incrementality adjustment)

Action:

  • Scaled TikTok from €10K → €40K/month
  • Maintained Meta and Google at current levels

Result:

  • TikTok incremental revenue: €20K → €80K/month
  • Total revenue: +€720K/year
  • Avoided cutting a channel that was actually working

Causal Inference vs. Multi-Touch Attribution

You might be thinking: "Isn't multi-touch attribution supposed to solve this?"

Not quite. Here's the difference:

| Aspect | Multi-Touch Attribution | Causal Inference |
|---|---|---|
| What it measures | Which touchpoints were present | Which touchpoints caused conversions |
| Method | Distributes credit across touchpoints | Compares outcomes with/without marketing |
| Organic demand | Can't separate | Explicitly accounts for it |
| Accuracy | 60-70% | 85-95% |
| Best for | Understanding customer journey | Making budget decisions |

The ideal approach: Use both.

  • Multi-touch attribution: Understand how customers move through your funnel
  • Causal inference: Determine which channels drive incremental revenue

Together, they give you the complete picture.

What to Do This Week

  1. Identify your highest-spend channel (probably Meta or Google)
  2. Set up an incrementality test (use native platform tools)
  3. Run for 2-4 weeks
  4. Calculate true incremental ROAS
  5. Adjust budget allocation based on results

Most brands discover their "best" channels are 40-60% less incremental than they thought. And their "worst" channels are actually performing better than they look.

The difference between correlation and causation? Usually about 30-50% of your marketing budget.

Your choice: Keep optimizing for correlation, or start measuring causation.

Quick Answers

What is causal inference in marketing?

Causal inference measures what would have happened WITHOUT your marketing, then calculates the difference. It separates incremental impact (revenue you caused) from organic demand (revenue that would have happened anyway). This is the only way to know if your marketing actually works.

What's the difference between correlation and causation in attribution?

Correlation: "This ad was present when someone bought." Causation: "This ad caused someone to buy." Traditional attribution measures correlation (which touchpoints were there). Causal inference measures causation (which touchpoints actually drove the sale).

How do I run an incrementality test?

1) Split audience into test (sees ads) and control (no ads), 2) Run for 2-4 weeks, 3) Measure conversion rate difference, 4) Calculate incremental revenue. Most platforms (Meta, Google) have native incrementality testing tools. Minimum audience: 200K+.

What is incremental ROAS?

Incremental ROAS = (Incremental Revenue) ÷ (Ad Spend). It measures revenue you wouldn't have gotten WITHOUT the ads. Example: Reported ROAS 5.2x, but 60% would have bought anyway → Incremental ROAS 2.1x. This is your true profitability metric.

Why is my reported ROAS higher than incremental ROAS?

Because traditional attribution gives credit to ads even when customers would have bought anyway (organic demand). Branded search, retargeting, and bottom-funnel campaigns typically show 40-70% lower incremental ROAS than reported ROAS.

Should I pause low-ROAS campaigns?

Not necessarily. Check incrementality first. Prospecting campaigns often show lower reported ROAS but higher incrementality (80-90% of conversions are truly incremental). Retargeting shows higher reported ROAS but lower incrementality (40-60% incremental). Pause based on incremental ROAS, not reported ROAS.

How often should I run incrementality tests?

Quarterly for each major channel. More frequently if you're making big budget changes. Always test before cutting a channel or making major reallocation decisions. One test can save you €50K-200K/year in wasted spend.

What's the difference between incrementality testing and A/B testing?

A/B testing compares two versions of the same thing (ad creative A vs. B). Incrementality testing compares ads vs. no ads to measure if ads work at all. Both are valuable, but incrementality testing answers the more fundamental question: "Should I run ads?"

CFO breathing down your neck about marketing ROI? Get the undeniable, accurate data you need to justify your ad spend and secure future budgets. Learn how attribution clarity builds credibility.

Ready to measure true causal impact? Causality Engine uses advanced causal inference to show you which marketing drives real, incremental revenue—not just correlated conversions.

See Your True Impact
