Last Updated: October 13, 2025
Thursday evening. You're preparing your monthly report for the board.
Investor asks: "You spent €200K on ads with 4.5x blended ROAS (return on ad spend). That should be €900K in revenue. But you only did €600K. What's going on?"
You try to explain: "Well, some of that revenue would have happened anyway... organic traffic, direct visits, people who already knew about us..."
Investor hears: "I don't actually know which marketing works."
This is the fundamental problem with traditional attribution.
It measures correlation, not causation. It tells you which touchpoints were present when someone bought, not which touchpoints caused them to buy.
The difference? About €300K in your example above.
Let's fix it.
Here's a thought experiment:
You run a branded search campaign on Google. Someone searches "Causality Engine," clicks your ad, and buys.
Google Ads says: This campaign drove the sale. Full credit, 100% of the revenue, attributed to that click.
But here's the question: Would they have bought anyway if you didn't run the ad?
Probably. They were already searching for your brand name.
Traditional attribution: Gives full credit to the ad (correlation)
Causal inference: Recognizes most of that revenue would have happened anyway (causation)
This is the difference between looking smart in dashboards and actually being profitable.
A fashion brand was celebrating its branded search campaign: €15K/month in spend, 5.2x ROAS on the dashboard.
Then they ran an incrementality test. Paused the campaign for 2 weeks.
Result:
True incremental ROAS? 0.73x (€11K ÷ €15K).
They were spending €15K/month to generate €11K in incremental revenue. Losing €4K/month. €48K/year.
Over 3 years? €144K burned.
But their dashboard said 5.2x ROAS. Brilliant.
Simple definition: Measuring what would have happened without your marketing, then calculating the difference.
The formula:
Incremental Impact = (Outcome with Marketing) - (Outcome without Marketing)
Example: revenue with marketing €160K; revenue without (the baseline) €105K; incremental impact = €55K. (These figures mirror the TikTok test later in this article.)
This is the only number that matters for profitability.
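To make the formula concrete, here's a minimal Python sketch. The revenue figures mirror the TikTok example later in this article; the spend is an assumed, illustrative number, not from any real account.

```python
def incremental_impact(outcome_with: float, outcome_without: float) -> float:
    """Revenue your marketing caused = observed outcome minus the baseline."""
    return outcome_with - outcome_without

def incremental_roas(outcome_with: float, outcome_without: float, ad_spend: float) -> float:
    """Incremental ROAS = incremental revenue / ad spend."""
    return incremental_impact(outcome_with, outcome_without) / ad_spend

# Revenue figures mirror the TikTok example below; the spend is assumed.
revenue_with_ads = 160_000  # EUR, observed while the campaign ran
revenue_baseline = 105_000  # EUR, what would have happened anyway
assumed_spend = 25_000      # EUR, hypothetical ad spend

print(incremental_impact(revenue_with_ads, revenue_baseline))                         # 55000
print(round(incremental_roas(revenue_with_ads, revenue_baseline, assumed_spend), 2))  # 2.2
```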
Problem 1: It assumes all touchpoints matter equally
Customer journey: discovers you through a TikTok ad → clicks a Meta retargeting ad a few days later → finally searches on Google and buys.
Last-click attribution: Google gets 100% credit
Reality: TikTok started the journey, Meta nurtured it, Google just collected the sale
Problem 2: It can't separate organic from paid
Someone was going to buy anyway (organic demand). Your ad happened to be there. Traditional attribution gives the ad full credit.
Problem 3: It over-credits retargeting
Retargeting targets people who already visited your site. Many would have returned anyway. Traditional attribution assumes 100% of retargeting conversions are incremental.
They're not. Usually 40-60% would have happened anyway.
Remember this scenario?
Campaign A: 2.1x ROAS, cold prospecting
Campaign B: 5.2x ROAS, retargeting
Traditional thinking: Cut A, scale B.
Causal inference reveals:
Campaign A (Prospecting): typically ~90% of conversions are truly incremental → roughly 1.9x incremental ROAS.
Campaign B (Retargeting): only ~45% incremental → roughly 2.3x incremental ROAS.
Actual best performer: Campaign B still wins, but not by as much as you thought. And Campaign A is more valuable than it looks.
Cut Campaign A and your Campaign B performance will tank in 3 weeks when you run out of people to retarget.
How it works: randomly split your audience into a test group that sees your ads and a control group that doesn't, run for 2-4 weeks, and measure the difference in conversions and revenue.
Example setup:
Test Group (100,000 people): sees your TikTok ads → €160K in revenue.
Control Group (100,000 people): ads withheld → €105K in revenue.
Calculation: €160K − €105K = €55K in incremental revenue.
Interpretation: Your TikTok ads drove €55K in truly incremental revenue. The other €105K would have happened anyway (organic, other channels, direct).
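The same calculation as a minimal Python sketch. Group sizes and revenues come from the setup above; the conversion counts are assumed purely for illustration.

```python
# Conversion-lift math for the test above. Group sizes and revenues come
# from the example; the conversion counts are assumed for illustration.
test_size, control_size = 100_000, 100_000
test_revenue, control_revenue = 160_000, 105_000      # EUR
test_conversions, control_conversions = 2_400, 1_600  # assumed

# Scale the control baseline to the test group's size (equal here).
baseline = control_revenue * (test_size / control_size)
incremental_revenue = test_revenue - baseline

conversion_lift = (test_conversions / test_size) / (control_conversions / control_size) - 1

print(f"Incremental revenue: EUR {incremental_revenue:,.0f}")  # EUR 55,000
print(f"Conversion lift: {conversion_lift:.0%}")               # 50%
```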
On Meta: native Conversion Lift tests are available (via the Experiments tool, for eligible accounts).
On Google: Conversion Lift studies are available for eligible accounts, often set up through your Google rep.
On TikTok: Conversion Lift Studies are available, typically arranged through your TikTok rep.
Best practices: use a combined audience of 200K+ (roughly 100K per group), run for at least 2-4 weeks, and test one channel at a time.
How it works: Run ads in some geographic regions, not in others. Compare performance.
Example (illustrative rates):
Test regions (ads running): 2.1% conversion rate.
Control regions (no ads): 2.1% conversion rate.
Wait, same conversion rate? That means your ads had zero incremental impact. Ouch.
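Here's the geo comparison as a minimal Python sketch. Visitor and conversion counts are assumed to match the illustrative 2.1% rates above.

```python
# Geo-test sketch: compare conversion rates across matched market groups.
# Visitor/conversion counts are assumed to match the rates above.
test_markets = {"visitors": 500_000, "conversions": 10_500}     # ads running
control_markets = {"visitors": 480_000, "conversions": 10_080}  # no ads

cr_test = test_markets["conversions"] / test_markets["visitors"]
cr_control = control_markets["conversions"] / control_markets["visitors"]
lift = cr_test / cr_control - 1

print(f"Test: {cr_test:.2%} | Control: {cr_control:.2%} | Lift: {lift:+.1%}")
# Test: 2.10% | Control: 2.10% | Lift: +0.0%  -> zero incremental impact
```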
When to use geo testing: when user-level splits aren't possible, e.g. privacy constraints or offline channels like TV and out-of-home.
Limitations: regions are never perfectly comparable (demographics, seasonality, local events), you need enough volume per region, and cross-region spillover can contaminate results.
How it works: Pause a channel for 2-4 weeks, measure impact on total revenue.
Example (illustrative figures, since the original numbers aren't shown here):
Weeks 1-4 (ads running): €60K/week average revenue, on €5K/week ad spend.
Weeks 5-6 (ads paused): €48K/week average revenue.
Calculation: €60K − €48K = €12K/week in incremental revenue → €12K ÷ €5K = 2.4x incremental ROAS.
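The same holdout math as a Python sketch, using the same assumed figures:

```python
# Holdout math with the illustrative figures above (assumed, not real data).
weekly_revenue_ads_on = 60_000   # EUR, average over weeks 1-4
weekly_revenue_ads_off = 48_000  # EUR, average over weeks 5-6
weekly_ad_spend = 5_000          # EUR

weekly_incremental = weekly_revenue_ads_on - weekly_revenue_ads_off  # 12,000
incremental_roas = weekly_incremental / weekly_ad_spend              # 2.4

print(f"Incremental ROAS: {incremental_roas:.1f}x")
# Caveat: correct for trend/seasonality before trusting a raw before/after gap.
```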
When to use holdout tests: for channels without native lift tools, or when you want a simple, direct read on one channel's total contribution.
Limitations: you sacrifice the channel's traffic during the pause, seasonality can distort the before/after comparison, and lagging effects (like brand awareness) may not surface within 2-4 weeks.
How it works: Statistical modeling that analyzes historical data to determine each channel's incremental contribution.
What it does: separates each channel's paid effect from baseline (organic) demand, seasonality, and promotions, then estimates the incremental contribution per channel.
Example output:
| Channel | Reported ROAS | Incremental ROAS | Incrementality % |
| --- | --- | --- | --- |
| TikTok Prospecting | 2.8x | 2.5x | 89% |
| Meta Prospecting | 3.2x | 2.7x | 84% |
| Meta Retargeting | 5.1x | 2.3x | 45% |
| Google Branded Search | 5.8x | 0.9x | 16% |
| Google Shopping | 4.2x | 3.1x | 74% |
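To make the mechanics concrete before the practical notes below, here's a toy version of what an MMM does under the hood: fitting per-channel coefficients by ordinary least squares on synthetic data. Real MMMs add adstock (carryover), saturation curves, seasonality, and uncertainty estimates; this is only a sketch.

```python
import numpy as np

# Toy MMM: weekly revenue = baseline (organic) + sum(beta_i * spend_i) + noise.
rng = np.random.default_rng(0)
weeks = 104  # two years of weekly data
spend = rng.uniform(1_000, 10_000, size=(weeks, 3))  # e.g. TikTok, Meta, Google
true_beta = np.array([2.5, 2.7, 0.9])                # per-channel incremental ROAS
revenue = 80_000 + spend @ true_beta + rng.normal(0, 5_000, weeks)

# Fit intercept + channel coefficients via ordinary least squares.
X = np.column_stack([np.ones(weeks), spend])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)

print("Estimated organic baseline:", round(coef[0]))
print("Estimated incremental ROAS per channel:", np.round(coef[1:], 2))
```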
When to use MMM: when you spend across many channels and want a portfolio-level view; it typically requires 1-2+ years of historical data.
Limitations: it's only as good as your historical data, estimates carry real uncertainty, and it suits strategic budget allocation better than day-to-day campaign decisions.
You're looking at that TikTok campaign. €7,000 spent, 2.3x ROAS.
Traditional thinking: "2.3x seems low, maybe pause it?"
Causal inference approach: ask how much of that 2.3x is truly incremental before touching anything. Prospecting campaigns are typically 80-90% incremental (see the table above), which puts incremental ROAS near 2.0x, comfortably above breakeven.
Better yet, run a quick holdout test: pause the campaign for two weeks, watch total revenue, and calculate the true incremental ROAS before making the call.
Now you're making the €7K decision based on causation, not correlation.
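As a rough pre-test sanity check, you can discount the reported ROAS by an assumed incrementality share. The numbers here come from this example and the table above; the 1.0x breakeven follows this article's spend-vs-incremental-revenue framing (your real breakeven depends on margins).

```python
# Rough pre-test check: discount reported ROAS by an assumed incrementality.
reported_roas = 2.3     # from the TikTok campaign above
incrementality = 0.89   # assumed from the table; confirm with a real test
breakeven = 1.0         # incremental revenue just covers spend

estimated_incremental_roas = reported_roas * incrementality  # ~2.05
verdict = "keep" if estimated_incremental_roas > breakeven else "pause"
print(f"Estimated incremental ROAS: {estimated_incremental_roas:.2f}x -> {verdict}")
```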
Problem: Spending €15K/month on branded search with 5.2x reported ROAS. Looked like best channel.
Causal inference test: paused branded search for 2 weeks (the fashion brand example above). True incremental ROAS: 0.73x, not 5.2x.
Action: cut the branded search budget.
Result: stopped losing roughly €4K/month, about €48K/year back in the budget.
Problem: TikTok showing 2.4x ROAS. Looked weak compared to Meta (4.1x) and Google (5.2x). Considering cutting it.
Causal inference test: a conversion lift test showed roughly 89% of TikTok conversions were truly incremental (versus ~45% for retargeting, per the table above), putting incremental ROAS around 2.1x.
Action: kept TikTok running instead of cutting it.
Result: the channel that looked weakest on the dashboard turned out to be one of the most incremental.
You might be thinking: "Isn't multi-touch attribution supposed to solve this?"
Not quite. Here's the difference:
| Aspect | Multi-Touch Attribution | Causal Inference |
| --- | --- | --- |
| What it measures | Which touchpoints were present | Which touchpoints caused conversions |
| Method | Distributes credit across touchpoints | Compares outcomes with/without marketing |
| Organic demand | Can't separate | Explicitly accounts for it |
| Accuracy | 60-70% | 85-95% |
| Best for | Understanding customer journey | Making budget decisions |
The ideal approach: Use both.
Together, they give you the complete picture.
Most brands discover their "best" channels are 40-60% less incremental than they thought. And their "worst" channels are actually performing better than they look.
The difference between correlation and causation? Usually about 30-50% of your marketing budget.
Your choice: Keep optimizing for correlation, or start measuring causation.
Causal inference measures what would have happened WITHOUT your marketing, then calculates the difference. It separates incremental impact (revenue you caused) from organic demand (revenue that would have happened anyway). This is the only way to know if your marketing actually works.
Correlation: "This ad was present when someone bought." Causation: "This ad caused someone to buy." Traditional attribution measures correlation (which touchpoints were there). Causal inference measures causation (which touchpoints actually drove the sale).
1) Split audience into test (sees ads) and control (no ads), 2) Run for 2-4 weeks, 3) Measure conversion rate difference, 4) Calculate incremental revenue. Most platforms (Meta, Google) have native incrementality testing tools. Minimum audience: 200K+.
Incremental ROAS = (Incremental Revenue) ÷ (Ad Spend). It measures revenue you wouldn't have gotten WITHOUT the ads. Example: Reported ROAS 5.2x, but 60% would have bought anyway → Incremental ROAS 2.1x. This is your true profitability metric.
Because traditional attribution gives credit to ads even when customers would have bought anyway (organic demand). Branded search, retargeting, and bottom-funnel campaigns typically show 40-70% lower incremental ROAS than reported ROAS.
Not necessarily. Check incrementality first. Prospecting campaigns often show lower reported ROAS but higher incrementality (80-90% of conversions are truly incremental). Retargeting shows higher reported ROAS but lower incrementality (40-60% incremental). Pause based on incremental ROAS, not reported ROAS.
Quarterly for each major channel. More frequently if you're making big budget changes. Always test before cutting a channel or making major reallocation decisions. One test can save you €50K-200K/year in wasted spend.
A/B testing compares two versions of the same thing (ad creative A vs. B). Incrementality testing compares ads vs. no ads to measure if ads work at all. Both are valuable, but incrementality testing answers the more fundamental question: "Should I run ads?"
CFO breathing down your neck about marketing ROI? Get the undeniable, accurate data you need to justify your ad spend and secure future budgets. Learn how attribution clarity builds credibility.
Ready to measure true causal impact? Causality Engine uses advanced causal inference to show you which marketing drives real, incremental revenue—not just correlated conversions.
