Price Objections
1. "€99 for a one-time report? I can pull reports from Google Analytics for free."
GA4 shows you correlations — who clicked what. Causality Engine shows you causation — what actually drove the purchase. That distinction is typically worth thousands in recovered ad spend. The €99 is a diagnostic, not a report.
2. "$299/month is expensive for a brand doing €50K/month in revenue."
If you're spending even €10K/month on ads, a 10% improvement in allocation — which is conservative — returns €1,000 a month: more than three times the subscription cost, and enough to cover the full year within four months. Most brands find five figures in misallocated spend within the first analysis.
3. "We can't justify another subscription — we already pay for Shopify, Klaviyo, Meta, GA4, and Triple Whale."
That's exactly why you need this. You're paying for five tools that each tell you a different story about what's working. We sit on top of all of them and tell you which story is actually true. Most customers cut or reallocate spend on at least one tool after seeing the results.
4. "Our agency already does this kind of analysis as part of their retainer."
Your agency uses the same platform dashboards you do — Meta Ads Manager, GA4. They're dressing up the same flawed last-click or modeled data. Causality Engine uses Bayesian causal inference, which is a fundamentally different methodology.
5. "I'd rather spend that $299 on more ad budget."
If even 15% of your current ad spend is going to channels that aren't actually driving revenue — which is what we find on average — then $299 to identify that waste is the highest-ROAS investment you can make this month.
6. "We'll look at attribution tools once we hit $1M in revenue."
By the time you hit $1M, you'll have wasted $50K–$150K on misallocated ad spend getting there. The brands that scale fastest are the ones that fix their marketing attribution early, when the cost of being wrong is still manageable.
7. "Can't we just build this in-house with a data analyst?"
You could — if you have a data scientist who specializes in Bayesian causal inference, six months to build the model, and ongoing engineering time to maintain data pipelines. Most DTC brands don't. Our $299/month replaces a $120K/year hire.
8. "The ROI isn't clear enough for me to sign off on this."
If you're spending $30K/month on ads, even a 5% improvement in allocation efficiency — well below what we typically find — saves $1,500/month. That's 5x return on the subscription cost in month one. The ROI is clearer than almost any other tool in your stack.
9. "We need to cut costs right now — this isn't a priority."
Cutting costs is exactly what causal attribution does. We typically identify 15–25% of ad spend that's generating zero incremental revenue. On $50K/month in spend, that's $7,500–$12,500/month in wasted ad spend recovered. This tool pays for itself 25x over.
10. "I can get similar insights from free tools like Google's Attribution reports."
Google's attribution reports only measure what Google can see — and they systematically overcredit Google channels. They can't measure Meta, TikTok, or Klaviyo's true contribution. You need a tool that sits above all platforms and measures incrementality with causal inference, not just redistributes click credit.
Tool Switching Objections
11. "We already use Triple Whale for attribution."
Triple Whale is a great dashboard, but it relies on pixel-based tracking and modeled click data. In a world where 30–40% of conversions are invisible to pixels — iOS opt-outs, Safari ITP, ad blockers — you're making decisions on partial data. We use Bayesian causal inference on your actual revenue data, no pixel required.
12. "We just implemented Northbeam three months ago."
Give Northbeam a fair shake — but ask yourself: can it tell you what would have happened to revenue if you'd paused Meta entirely last month? That's a causal question, and click-based tools can't answer it. Run us alongside for €99 and compare.
13. "Our Meta ROAS in Ads Manager looks great — why would I question it?"
Meta has a financial incentive to make your Meta ROAS look great. They count view-through conversions, cross-device modeled conversions, and conversions that would have happened organically. We've seen brands where Meta claims 5x ROAS but the true incremental ROAS is 1.8x.
14. "We use Google Analytics 4 and we trust the data."
GA4's standard reports lean heavily on last-click attribution, and even its data-driven model only sees what it can track via cookies and consent. It systematically undercredits upper-funnel channels like TikTok and influencers, and overcredits branded search.
15. "We spent six months setting up our current attribution stack — I can't switch now."
You don't have to switch anything. We connect to your existing GA4 and Shopify data in two minutes. No pixels to install, no UTM taxonomy to rebuild. Run us alongside your current stack and compare.
16. "We use Rockerbox and it covers multi-touch attribution already."
Rockerbox is multi-touch attribution — it distributes credit across touchpoints that were tracked. But in a cookieless world, 30–40% of touchpoints are invisible. We use causal inference on aggregate data, which doesn't require tracking individual journeys at all. Different methodology, different answers.
17. "Our Shopify Plus plan includes built-in attribution features."
Shopify's native attribution shows first-touch and last-touch source. That's useful for basic reporting but doesn't answer the budget allocation question: where does the next marginal dollar generate the most incremental revenue? That requires causal modeling, not click tracking.
18. "We're happy with our current MMM provider — they deliver quarterly reports."
Quarterly reports are stale by the time you act on them. CPMs, creative performance, and audience saturation change weekly. We deliver monthly causal analyses that reflect current market conditions, not conditions from three months ago. Plus we cost 90% less than traditional MMM.
19. "We built custom dashboards in Looker/Tableau that our team already uses."
Keep them. Your dashboards are great for monitoring campaign-level metrics. What they can't do is tell you incremental contribution per channel or predict what happens if you shift $10K from Meta to TikTok. We answer the allocation question; your dashboards answer the execution question.
20. "We already invested in a customer data platform — shouldn't that solve attribution?"
CDPs like Segment or Rudderstack unify first-party data and build customer profiles. But unifying data isn't the same as measuring causation. You still need a causal layer that answers: did this channel actually cause this purchase, or would it have happened anyway? That's what we do with the data your CDP collects.
Methodology Skepticism
21. "Bayesian inference sounds like a black box."
Every analysis comes with a plain-English explanation of what the model found and why. You don't need to understand Bayesian math — you need to understand that Meta's claimed ROAS is 4.2x but its incremental ROAS is 2.1x, and here's why.
22. "How can you measure attribution without tracking individual users?"
The same way epidemiologists measure whether a treatment works in a population without tracking every individual in it. We look at aggregate patterns: when spend on channel X goes up, does revenue go up more than expected? Individual tracking is one approach. Causal inference is a better one.
23. "Doesn't causal inference require controlled experiments?"
We use observational causal inference — specifically, Bayesian structural time-series models that exploit natural variation in your spend patterns. Your spend already varies week to week; that variation is our experiment.
24. "What about confounding variables? Seasonality, promotions, virality?"
Our model explicitly accounts for seasonality, trends, day-of-week effects, and promotional events. Handling confounders is the very problem causal inference exists to solve, and the Bayesian framework makes those adjustments explicit.
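A minimal sketch of why confounder adjustment matters, using plain least squares with a day-of-week dummy as a stand-in for the full Bayesian model. All numbers are synthetic: revenue depends on spend AND a weekend lift, and spend also happens to be higher on weekends, so a spend-only model over-credits the channel.

```python
import random

random.seed(0)

# Synthetic daily data. True model: revenue = 500 + 2.0*spend + 300*weekend.
# Spend is deliberately correlated with the weekend (the confounder).
rows = []
for t in range(180):
    weekend = 1.0 if t % 7 in (5, 6) else 0.0
    spend = 100 + 50 * weekend + random.gauss(0, 10)
    revenue = 500 + 2.0 * spend + 300 * weekend + random.gauss(0, 20)
    rows.append((spend, weekend, revenue))

def solve(A, b):
    """Gauss-Jordan elimination for small linear systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ols(X, y):
    """Least-squares coefficients via the normal equations."""
    k, n = len(X[0]), len(X)
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)] for a in range(k)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    return solve(XtX, Xty)

y = [r for _, _, r in rows]
naive_beta = ols([[1.0, s] for s, _, _ in rows], y)[1]   # no controls
adj_beta = ols([[1.0, s, w] for s, w, _ in rows], y)[1]  # weekend controlled
print(f"naive spend coefficient:    {naive_beta:.2f}")   # inflated by the weekend confound
print(f"adjusted spend coefficient: {adj_beta:.2f}")     # close to the true value of 2.0
```

The naive coefficient lands far above 2.0 because the weekend lift leaks into the spend estimate; adding the control recovers the true effect. The production model does the same thing for seasonality, trend, and promotions simultaneously.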
25. "How is this different from just running incrementality tests?"
Incrementality tests require you to pause spend on a channel for 2–4 weeks — costing real revenue. Our analysis estimates incrementality continuously from natural variation, without requiring you to turn anything off. Always-on incrementality measurement.
26. "Your sample size is too small — we only have 12 months of data."
Twelve months of daily data is 365 data points across multiple channels with natural spend variation. Bayesian methods are specifically designed for smaller sample sizes — they incorporate prior knowledge to produce reliable estimates even with limited observations. Six months is our minimum; twelve is ideal.
27. "How do you handle new channels we just launched with only 8 weeks of data?"
For new channels, we use informative priors based on industry benchmarks and gradually update as your data accumulates. After 8 weeks, the model provides estimates with wider confidence intervals. After 12 weeks, the intervals narrow significantly. We're transparent about uncertainty rather than faking precision.
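The prior-plus-updating logic above can be sketched with a conjugate normal-normal update: start from a benchmark prior on the new channel's incremental ROAS and tighten the interval as weekly estimates arrive. Every number here is an illustrative stand-in, not a real benchmark.

```python
import math

# Prior belief about a new channel's incremental ROAS (assumed benchmark),
# updated weekly as noisy observations arrive.
prior_mean, prior_var = 2.0, 1.0   # illustrative industry-benchmark prior
obs_var = 0.5 ** 2                 # assumed noise variance of one weekly estimate

def update(mean, var, observation):
    """One conjugate update for a normal mean with known observation noise."""
    post_var = 1.0 / (1.0 / var + 1.0 / obs_var)
    post_mean = post_var * (mean / var + observation / obs_var)
    return post_mean, post_var

weekly_roas = [1.6, 1.8, 1.5, 1.9, 1.7, 1.6, 1.8, 1.7]  # 8 weeks of data
mean, var = prior_mean, prior_var
widths = []
for w in weekly_roas:
    mean, var = update(mean, var, w)
    widths.append(2 * 1.645 * math.sqrt(var))  # 90% interval width

print(f"posterior mean after 8 weeks: {mean:.2f}")
print(f"90% interval width: week 1 = {widths[0]:.2f}, week 8 = {widths[-1]:.2f}")
```

The interval shrinks every week the data accumulates — which is exactly the "wider intervals at 8 weeks, narrower at 12" behavior described above, made visible.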
28. "Multi-touch attribution is the industry standard — why go against it?"
Multi-touch attribution was the industry standard when cookies tracked 90%+ of journeys. In a cookieless world where 30–40% of touchpoints are invisible, MTA is building an incomplete puzzle and pretending the missing pieces don't exist. Causal inference doesn't need to track individual journeys — it measures aggregate impact directly.
29. "How do I know your model isn't just overfitting to noise?"
We use Bayesian regularization and cross-validation to prevent overfitting. Every estimate includes posterior predictive checks — we verify that the model's predictions match held-out data before delivering results. If the model can't reliably separate signal from noise for a channel, we say so explicitly rather than inventing a number.
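The held-out check can be sketched in a few lines: fit on the first stretch of days, then require the model to beat a naive baseline on unseen days before trusting it. The data is synthetic and the fit is a simple least-squares line, standing in for the full posterior predictive check.

```python
import random
import statistics

random.seed(1)

# Synthetic daily spend -> revenue data: fit on 140 days, validate on 40.
spends = [100 + random.gauss(0, 15) for _ in range(180)]
data = [(s, 400 + 3.0 * s + random.gauss(0, 30)) for s in spends]
train, test = data[:140], data[140:]

# Least-squares slope/intercept on the training window only.
xs = [s for s, _ in train]
ys = [r for _, r in train]
mx, my = statistics.fmean(xs), statistics.fmean(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# The fitted model must beat "predict the training mean" on unseen days.
mae_model = statistics.fmean(abs(r - (intercept + slope * s)) for s, r in test)
mae_naive = statistics.fmean(abs(r - my) for _, r in test)
reliable = mae_model < mae_naive  # only report the channel estimate if this holds
print(f"held-out MAE: model={mae_model:.1f}, baseline={mae_naive:.1f}, reliable={reliable}")
```

If `reliable` came back false for some channel, the honest move is the one the answer above describes: say the signal can't be separated from noise rather than invent a number.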
30. "Media mix modeling is outdated — it's what P&G used in the 1990s."
The Bayesian structural time-series models we use are nothing like 1990s regression-based MMM. Modern causal inference incorporates daily granularity, adstock decay modeling, saturation curves, and full posterior distributions. It's the same leap as comparing a 1990s spreadsheet to modern machine learning — same category, completely different sophistication.
31. "Can your model separate organic growth from paid channel contribution?"
Yes — that's exactly what causal inference is designed to do. The model estimates a counterfactual baseline (what revenue would have been with zero paid spend) and attributes the lift above baseline to specific channels proportionally. Separating organic from paid is literally the first thing the model does.
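Once per-channel coefficients are estimated, the counterfactual baseline and per-channel lift fall out directly. A toy sketch with made-up coefficients and spends (the real model estimates these from your data, with uncertainty):

```python
# Illustrative stand-ins: modeled organic baseline and per-channel
# incremental dollars of revenue per dollar of spend.
baseline = 40_000                                     # revenue at zero paid spend
coef = {"meta": 1.9, "google": 2.4, "tiktok": 0.8}    # incremental $/$ spent
spend = {"meta": 12_000, "google": 8_000, "tiktok": 5_000}

lift = {ch: coef[ch] * spend[ch] for ch in coef}      # incremental revenue per channel
total_revenue = baseline + sum(lift.values())
for ch in coef:
    share = lift[ch] / (total_revenue - baseline)
    print(f"{ch:7s} incremental revenue ≈ ${lift[ch]:,.0f} ({share:.0%} of paid lift)")
print(f"organic baseline ≈ ${baseline:,} of ${total_revenue:,.0f} total")
```

Everything above the baseline is paid lift, split across channels; everything at or below it is revenue you'd have earned anyway.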
32. "What happens if we change our marketing strategy — does the model break?"
Strategy changes create the variation that makes causal inference work. If you shift budget dramatically, increase spend on a new channel, or pause a channel entirely, those changes produce strong natural experiments that sharpen the model's estimates. The more variation, the better our measurement.
33. "Your approach can't capture creative-level performance."
Correct — and by design. Causal inference answers the channel allocation question: how much incremental revenue does Meta drive in total? For creative-level performance (which ad within Meta works best), use Meta Ads Manager's own A/B testing tools. These are different questions requiring different methodologies. We solve allocation; platforms solve optimization within their walls.
34. "How do you account for the halo effect between channels?"
Our model explicitly estimates interaction effects between channels. When Meta prospecting drives awareness that makes Google Shopping convert better, the model detects this interaction in the data and attributes the lift appropriately. This inter-channel halo effect is one of the key insights that click-based multi-touch attribution completely misses.
35. "I talked to a data scientist who said your approach has limitations."
Every methodology has limitations — including click-based attribution, which has the fundamental limitation of requiring visible user tracking in a cookieless world. The question isn't whether causal inference is perfect; it's whether it produces better budget allocation decisions than the alternative. For DTC brands with 4+ channels and $20K+ monthly spend, the answer is consistently yes.
Timing Objections
36. "We're heading into Q4/BFCM — we can't change anything right now."
Don't change anything. But run the €99 analysis on your Q3 data so you know which channels were actually performing going into BFCM. The worst time to figure out attribution is after you've spent your biggest budget.
37. "Let me get through this current campaign first."
Every campaign you run without causal attribution is a campaign where you're guessing about what worked. Start the analysis now; results arrive after your campaign ends.
38. "We're switching agencies — let's wait until the new agency is onboarded."
This is the perfect time. Give your new agency a causal attribution baseline on day one. You'll cut their ramp-up time in half.
39. "Summer is our slow season — let's wait until things pick up."
Slow season is when you have breathing room to analyze and optimize. Fix your attribution now, so when volume picks up, every euro is already going to the right channel.
40. "I want to wait until we have more data."
Our Bayesian model works with as little as 6 months of Shopify and GA4 data. Waiting for "more data" usually means waiting for "more wasted budget."
41. "We're in the middle of a Shopify migration — let's revisit after."
Migrations are the perfect time to establish a causal attribution baseline on your old platform. When the new store launches, you'll have a clean comparison point. Otherwise you'll spend months post-migration wondering whether performance changes are due to the migration or your marketing.
42. "We just hired a new CMO — they'll want to evaluate tools themselves."
Give your new CMO a causal attribution analysis as their onboarding gift. They'll walk in with a clear picture of what's actually working instead of spending their first 90 days untangling conflicting platform dashboards. It accelerates their impact.
43. "We're launching a new product line next quarter — let's wait."
Run the analysis now to optimize your existing channels. When the new product launches, you'll already have a baseline understanding of channel incrementality — so you can allocate launch budget intelligently instead of spraying and praying.
44. "Our fiscal year resets in January — we'll budget for this then."
Every month you wait between now and January is a month of misallocated spend. If we find even $5K/month in wasted ad spend — which is typical — waiting four months costs $20K. The €99 diagnostic costs less than a day of wasted Meta spend.
45. "We need to fix our creative pipeline first — attribution won't help if our ads are bad."
How do you know which ads are bad without causal measurement? Platform-reported metrics inflate performance. You might be killing creatives that are actually working and scaling creatives that are just capturing existing demand. Fix measurement first, then optimize creative with accurate data.
46. "Let's wait for Google's Privacy Sandbox to settle before investing in attribution."
Google's Privacy Sandbox has been delayed repeatedly since 2020. Waiting for the privacy landscape to "settle" means making allocation decisions on broken data indefinitely. Causal inference works regardless of cookie policy because it doesn't depend on individual user tracking. It's already cookieless attribution.
47. "We're about to raise our Series A — can't add new tools right now."
Investors will ask about your unit economics and channel-level CAC during diligence. Having causal attribution data showing your true CAC per channel — instead of inflated platform numbers — strengthens your fundraise narrative and prevents a post-raise reckoning when the numbers don't hold at scale.
48. "We're testing too many things right now — adding attribution would be another variable."
Causal attribution isn't another variable — it's the measurement layer that tells you whether your other tests are actually working. Without it, you're running tests but evaluating them with flawed data. It doesn't add complexity; it reduces the noise in everything else you're doing.
49. "Our retention is the bottleneck, not acquisition — let's focus on LTV first."
Understanding channel-level LTV requires knowing which channel truly acquired each customer. If Meta claims credit for customers who came organically, your Meta LTV looks artificially high and your organic LTV looks artificially low. Causal attribution is the foundation of accurate LTV analysis by channel.
50. "We only allocate budget annually — come back in October for next year's planning."
Annual planning based on this year's flawed attribution data bakes in the same mistakes for another 12 months. Run the €99 analysis now to inform your October planning with actual causal data instead of repeating the same allocation errors at larger scale.
Trust & Size Objections
51. "I've never heard of Causality Engine."
Start with the €99 one-time analysis. It's a low-risk way to evaluate both the methodology and the output before committing to anything. The analysis either shows you something valuable or it doesn't.
52. "We're only doing €50K/month — we probably don't need attribution yet."
At €50K/month, you're likely spending €10K–€20K on ads. Getting attribution right now sets the foundation for scaling. Brands that wait until €200K/month have already wasted six figures.
53. "We're doing €500K/month — we need an enterprise solution."
Enterprise MMM solutions cost $50K–$100K/year and deliver quarterly updates. We deliver monthly causal analyses for $3,588/year. The methodology is comparable. Test us with the €99 analysis and compare.
54. "We're worried about GDPR compliance."
Our analysis works with aggregated, anonymized data — we don't process individual user-level data. No PII, no cookies, no individual tracking. Fully compatible with GDPR.
55. "What if your analysis is wrong and I make bad decisions?"
Our analysis comes with confidence intervals — we don't just say "Meta ROAS is 2.3x," we say "between 1.8x and 2.9x with 90% confidence." We always recommend testing changes incrementally.
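The interval reporting described above can be sketched from posterior draws: take the 5th and 95th percentiles of the sampled ROAS values instead of quoting a single point. The draws here are simulated from an assumed posterior, purely for illustration.

```python
import random
import statistics

random.seed(42)

# Simulated posterior draws of a channel's incremental ROAS (assumed
# distribution; the real draws come from the fitted model).
posterior_draws = sorted(random.gauss(2.3, 0.33) for _ in range(10_000))
lo = posterior_draws[int(0.05 * len(posterior_draws))]
hi = posterior_draws[int(0.95 * len(posterior_draws))]
point = statistics.median(posterior_draws)
print(f"Meta incremental ROAS: {point:.1f}x (90% interval {lo:.1f}x–{hi:.1f}x)")
```

Reporting the interval alongside the point estimate is what lets you size a test change to the downside case, not just the headline number.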
56. "We need case studies from brands in our exact vertical."
The causal inference methodology works across all DTC verticals on Shopify because it measures the relationship between spend and revenue — which is universal. Whether you sell supplements, apparel, or home goods, the model adapts to your data. That said, we can share anonymized results from brands at similar spend levels.
57. "How do I know you won't just tell me what I want to hear?"
We've told brands that their favorite channel was actually underperforming. We've shown CMOs that their pet project had zero incremental impact. Our value depends on telling the truth, not on validating existing beliefs. If everything you're doing is actually working, the analysis will confirm that — and that's valuable too.
58. "We're a small team — we don't have bandwidth to onboard a new tool."
Setup takes two minutes — you connect your Shopify and GA4, and we do the rest. There's no pixel to install, no dashboard to learn, no daily workflow to change. You receive a monthly analysis with clear recommendations. It's less work than reading your weekly Meta Ads Manager digest.
59. "Our CTO wants to vet the methodology before we commit."
We publish our methodology — Bayesian structural time-series models for causal inference, similar to Google's CausalImpact R package but adapted for DTC marketing data. We're happy to do a technical deep-dive with your CTO. The €99 analysis lets them evaluate the output quality before any ongoing commitment.
60. "We tried an attribution tool before and it didn't deliver value."
Most attribution tools are multi-touch attribution — they redistribute click credit but don't answer the causal question. If you tried Triple Whale, Northbeam, or Rockerbox and found the output unhelpful, it's because click-based MTA and causal inference are fundamentally different methodologies answering fundamentally different questions.
61. "We don't trust any attribution tool — the whole category is snake oil."
Healthy skepticism. Here's the difference: click-based tools claim precision they can't deliver because they can't see 30–40% of user journeys. We quantify uncertainty explicitly with confidence intervals. If we can't measure something reliably, we say so. The €99 analysis lets you judge the output yourself.
62. "Our competitors seem to do fine without fancy attribution."
Your competitors are likely spending 15–25% more than necessary to achieve the same revenue because they can't identify their wasted ad spend. You just can't see their inefficiency from the outside. The brands that quietly adopt better measurement gain a compounding edge quarter over quarter.
63. "We're bootstrapped — every dollar matters and we can't take risks."
That's exactly why you can't afford to waste 15% of your ad budget on channels that aren't driving incremental revenue. The €99 one-time analysis is lower risk than a single day of wasted Meta spend. If it doesn't show you something valuable, you're out one dinner.
64. "I need to get buy-in from my co-founder/CFO/board before adding tools."
Share this with them: for every $10K/month in ad spend, platforms over-report performance by $3K–$5K due to double-counting and non-incremental conversions. A €99 diagnostic quantifies exactly how much of your specific budget is misallocated. The buy-in conversation writes itself when you can show the number.
65. "We're an omnichannel brand with retail stores — does this still apply?"
Yes — in fact, omnichannel makes causal attribution more valuable because the cross-channel interactions are more complex. Online ads drive in-store visits; in-store experiences drive online repeat purchases. Click-based attribution misses all offline conversion paths. Causal inference captures the total revenue impact regardless of where the sale closes.
66. "We sell on Amazon too — can you measure that?"
If increased Meta or TikTok spend drives Amazon sales (which it often does for DTC brands), causal inference can detect that relationship in the data. We model total revenue — including Amazon — against channel spend to capture cross-marketplace halo effects that click-based Shopify attribution completely misses.
67. "Our business model is subscription-based — attribution is different for us."
Subscription brands need attribution even more because the initial CAC must be justified by multi-month LTV. Platform-reported CAC is inflated by non-incremental conversions. If your true incremental CAC is higher than reported, your LTV:CAC ratio is worse than you think — which changes your scaling math entirely.
68. "We don't spend enough on ads to justify this — we're mostly organic."
If you're spending anything on paid acquisition — even $5K/month — causal attribution tells you whether that spend is actually driving incremental revenue above your organic baseline, or whether you're paying for conversions that would have happened through organic anyway. Many "mostly organic" brands discover their paid spend is entirely redundant.
69. "Your tool seems US-focused — we sell primarily in Europe."
Our methodology is geography-agnostic — it works on any Shopify store with revenue and ad spend data. In fact, European brands benefit more because GDPR consent rates make pixel-based attribution even less reliable. Cookieless attribution via causal inference is essential for any EU-focused DTC brand.
70. "We've already committed our tech budget for this year."
The €99 one-time analysis doesn't require a tech budget allocation — it's a single diagnostic that can come from the marketing budget. If the results show $10K+/month in recoverable wasted ad spend, the ongoing subscription pays for itself from marketing efficiency gains, not from tech budget.
71. "I'm worried about data security — we don't want to share our revenue data."
We use aggregated daily-level data — total revenue and total spend per channel per day. We don't access individual order data, customer PII, or product-level details. The data we need is less sensitive than what your agency already sees in your ad accounts.
72. "What happens if we outgrow you? Will we be locked into your ecosystem?"
There's no lock-in. We use your existing GA4 and Shopify data — not proprietary pixels or tracking infrastructure. If you ever decide to switch to an enterprise MMM provider, your data stays yours and nothing breaks. You can also cancel monthly with no commitment.
73. "We ran a media mix model with a consultancy and the results were inconclusive."
Traditional MMM consultancies work from monthly-granularity data delivered in quarterly reports, which limits statistical power. We use daily granularity with Bayesian methods that extract more signal from less data. Inconclusive results from traditional MMM often become clear, actionable results with modern causal inference approaches.
74. "Our industry has long consideration periods — can causal inference handle 90-day purchase cycles?"
Yes. Our models incorporate adstock decay parameters that measure how channel spend continues to influence revenue over weeks and months. For high-AOV brands with long consideration periods, we adjust the decay curves to match your observed purchase timeline. Longer cycles actually produce clearer causal signals because there's more data in each decision window.
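The carry-over idea above is the classic geometric adstock transform: each day's effective exposure retains a fraction of the previous day's. The decay rate here is an illustrative constant; in practice it is fitted per channel to match the observed purchase cycle.

```python
# Geometric adstock: effect[t] = spend[t] + decay * effect[t-1], so spend
# keeps influencing revenue for weeks after the day it was incurred.
def adstock(spends, decay):
    """Apply a geometric carry-over transform to a daily spend series."""
    effect, out = 0.0, []
    for s in spends:
        effect = s + decay * effect
        out.append(effect)
    return out

# One $1,000 burst on day 0, then nothing: the effect decays gradually.
burst = [1000] + [0] * 9
print([round(x) for x in adstock(burst, decay=0.8)])
```

A 90-day consideration cycle simply calls for a decay closer to 1.0 than a 7-day impulse-purchase cycle, which is what "adjusting the decay curves to your purchase timeline" means concretely.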
75. "We need real-time attribution — your monthly cadence is too slow."
Real-time click attribution tells you who clicked what in the last hour. Monthly causal attribution tells you which channels are actually driving incremental revenue. These serve different purposes. Use real-time platform data for daily campaign management; use monthly causal analysis for budget allocation decisions. You shouldn't be reallocating budgets in real time anyway — that causes allocation whiplash.
Additional Price Objections
76. "We'd need this to deliver at least 10x ROI for it to be worth the conversation."
On $30K/month in ad spend, we typically identify $4,500–$7,500/month in wasted spend or reallocation opportunities. That's $54K–$90K per year in recovered value against a $3,588 annual subscription. That's 15x–25x ROI. The 10x bar is comfortably cleared.
77. "Can we do a free trial instead of paying €99 upfront?"
The €99 analysis is effectively a trial — it's a one-time diagnostic with no ongoing commitment. If the results don't show clear value, you walk away having spent less than a single underperforming Meta ad set spends in a day. We can't do it free because the analysis requires meaningful compute and analyst review time.
78. "We'd rather put that money toward incrementality testing on Meta directly."
Meta's Conversion Lift studies require pausing ads for a holdback group for 2–4 weeks, costing real revenue. Our analysis estimates incrementality from your existing data without requiring you to turn anything off. You get incrementality measurement without the revenue sacrifice. Run both if you want validation.
79. "Our finance team won't approve a marketing tool without a guaranteed ROI."
No tool can guarantee ROI — but we can guarantee that if you're spending $20K+/month on ads across multiple channels, there's misallocation. Every brand we've analyzed has found at least 10% of spend going to non-incremental conversions. At $20K/month, that's $2K/month minimum in identifiable waste — 7x the subscription cost.
80. "We're spending too little to have meaningful waste — we only run Meta."
Even single-channel brands waste money. On Meta specifically, retargeting campaigns over-claim by 40–70% because they take credit for conversions that were already going to happen. If you're spending $5K/month on Meta retargeting, $2K–$3.5K of that might be paying for conversions you'd get organically. That's worth knowing.
Additional Tool Switching Objections
81. "Our pixel-based attribution works fine for us — why switch to something different?"
Pixel-based attribution worked before iOS 14.5. Since then, 30–40% of conversions are invisible to pixels. If you're making budget decisions on 60–70% of your data and assuming it represents 100%, you're systematically misallocating. "Works fine" means "we can't see what we're missing."
82. "We use Hyros and it tracks everything including phone calls and offline."
Hyros excels at tracking individual conversions across touchpoints. But tracking and causation are different things. Hyros tells you which touchpoints preceded a purchase. Causal inference tells you which touchpoints actually caused the purchase versus which were incidental. A customer who saw a Meta ad, received a Klaviyo email, and then bought — which touchpoint was causal?
83. "We're locked into an annual contract with our current attribution tool."
Run our €99 diagnostic alongside your current tool. Compare the recommendations. When your contract expires, you'll have months of side-by-side data to make an informed decision about which methodology produced better allocation outcomes.
84. "Our media buyer manages attribution — they prefer the tools they know."
Media buyers naturally prefer tools that validate their existing strategy. An independent causal analysis provides an objective check on platform-reported performance that protects both you and your media buyer from making decisions on inflated data. Frame it as quality assurance, not replacement.
85. "We just built a custom attribution model with our data team."
What methodology does it use? If it's multi-touch attribution based on tracked touchpoints, it has the same cookieless blind spot as every other MTA tool. If it's regression-based media mix modeling, it likely lacks the Bayesian uncertainty quantification that prevents overconfident recommendations. Run our €99 analysis alongside and compare outputs.
Additional Methodology Objections
86. "If causal inference is so good, why doesn't everyone use it?"
Adoption is accelerating rapidly. The bottleneck was accessibility — traditional causal inference required $50K+ engagements with PhD statisticians. We've made it accessible at $299/month because DTC brands on Shopify have standardized data formats that allow automation. You're early, not contrarian.
87. "How do you handle creative changes within a channel? The model only sees spend."
Creative changes affect channel efficiency, which shows up in the relationship between spend and revenue. If new creative makes Meta more efficient, the model detects a structural change in Meta's contribution. We can't tell you which specific creative drove the improvement — use platform A/B testing for that — but we can tell you that Meta's incremental ROAS improved, and when.
88. "Your model assumes diminishing returns — but what if a channel has increasing returns at our spend level?"
Our saturation curves are fit to your actual data, not assumed a priori. If a channel shows increasing returns at your current spend level (common for brands far below optimal spend), the model will reflect that and recommend increasing budget. Diminishing returns is the general pattern, but your specific curve is estimated from your data.
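A Hill-type curve is one common way to express the fitted saturation shape. The parameters below are illustrative stand-ins; the point is that the marginal return of the next $1K depends on where you sit on the curve, which is what drives the allocation recommendation.

```python
# Hill-type saturation: incremental revenue approaches `cap` as spend grows.
# cap, half_sat, and slope are fitted per channel in practice.
def hill(spend, cap, half_sat, slope):
    """Diminishing-returns response curve."""
    return cap * spend ** slope / (half_sat ** slope + spend ** slope)

# Marginal return of the next $1K at two spend levels on the same curve:
low = hill(11_000, cap=60_000, half_sat=20_000, slope=1.2) - hill(10_000, 60_000, 20_000, 1.2)
high = hill(41_000, cap=60_000, half_sat=20_000, slope=1.2) - hill(40_000, 60_000, 20_000, 1.2)
print(f"next $1K returns ${low:,.0f} at $10K spend vs ${high:,.0f} at $40K spend")
```

If your channel's fitted curve were still in its steep early region — increasing returns — the same comparison would flip, and the recommendation would be to spend more, exactly as the answer above says.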
89. "Can you separate the impact of discounting from channel performance?"
Yes. We include promotional events as explicit variables in the model. When you run a 20% off sale, the model estimates the lift from the promotion separately from the lift from channel spend. This prevents promotional spikes from being incorrectly attributed to whichever channel happened to be scaled up during the sale period.
90. "What about attribution for influencer campaigns that aren't tracked with UTMs?"
Influencer spend drives revenue through multiple unmeasured paths: branded search, direct visits, social proof. Causal inference detects the total revenue lift during and after influencer campaigns by measuring the deviation from expected revenue. You don't need UTMs or promo codes when you're measuring aggregate causal impact.
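"Deviation from expected revenue" reduces to a simple idea: forecast the campaign window from the pre-period, then sum actual minus expected. The sketch below uses a plain linear trend on invented numbers purely to show the shape of the calculation — the production approach is Bayesian structural time-series with proper uncertainty, not a two-parameter trend line.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily revenue: 60 pre-campaign days, then a 14-day
# influencer campaign lifting revenue through untracked paths.
pre = 5000 + 10 * np.arange(60) + rng.normal(0, 150, 60)
campaign = 5000 + 10 * np.arange(60, 74) + 900 + rng.normal(0, 150, 14)

# Fit a simple trend on the pre-period to get "expected" revenue.
slope, intercept = np.polyfit(np.arange(60), pre, 1)
expected = intercept + slope * np.arange(60, 74)

# Total lift = actual minus expected over the campaign window.
lift = float(np.sum(campaign - expected))
```

Note that no UTM, promo code, or click path appears anywhere in the calculation — the lift is measured in aggregate, which is the whole point of the answer above.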
Additional Trust & Size Objections
91. "Your company is too small — what if you shut down?"
We understand the concern. Our analysis uses your own GA4 and Shopify data, which you always retain. The methodology — Bayesian structural time-series — is open-source and published in academic literature. Even if we disappeared tomorrow, your data and the analytical approach are fully portable. Zero lock-in.
92. "Can you handle brands with 50+ products across different categories?"
Absolutely. We model at the total revenue level and channel level — product mix complexity doesn't affect the core methodology. If you need to segment analysis by product category (e.g., skincare vs. supplements), we can run separate models per category to give you category-level channel attribution.
93. "We sell B2B — DTC attribution doesn't apply to us."
If you're running Meta, Google, and LinkedIn ads to drive purchases on your Shopify store — whether B2B or B2C — the causal relationship between spend and revenue works identically. B2B purchase cycles may be longer, but our adstock decay parameters accommodate that. The methodology is about measuring cause and effect, not about who's buying.
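The "adstock decay parameters accommodate that" line is worth being able to unpack. A geometric adstock — the standard textbook form — carries a fraction of each day's ad pressure into the next day, and a slower decay rate is what stretches the model to fit a longer B2B purchase cycle. The decay values below (0.3 vs 0.8) are illustrative, not our fitted parameters.

```python
import numpy as np

def adstock(spend, decay):
    """Geometric adstock: each day's effective spend carries over a
    fraction `decay` of the previous day's effective spend, modelling
    the lag between ad exposure and purchase."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for i, s in enumerate(spend):
        carry = s + decay * carry
        out[i] = carry
    return out

burst = np.array([1000.0, 0, 0, 0, 0, 0])
short_cycle = adstock(burst, 0.3)  # DTC-style fast decay
long_cycle = adstock(burst, 0.8)   # slower B2B purchase cycle
```

The same one-day spend burst keeps exerting pressure for weeks under the slower decay — which is how the identical methodology fits both a same-day skincare purchase and a month-long B2B consideration window.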
94. "We need custom reporting that integrates with our existing BI stack."
Our output is a monthly analysis with clear channel-level incrementality estimates and recommendations. If you need raw data for your BI tools, we can provide the model outputs in a format that integrates with Looker, Tableau, or Google Sheets. The analysis doesn't replace your BI stack — it feeds better numbers into it.
95. "How do I explain causal inference to my CEO who just wants a simple ROAS number?"
You give them the simple ROAS number — just the correct one. "Meta's true incremental ROAS is 2.3x" is just as simple as "Meta Ads Manager says 4.5x." The methodology complexity is under the hood. The output is a clear, actionable number your CEO can use without understanding Bayesian math.
96. "We have investors who already trust our current attribution numbers."
That's a liability, not an asset. If your current numbers overstate performance by 30–50% — which platform-reported numbers typically do — your investors have a false picture of unit economics. Better to present accurate numbers now than face a reckoning when you try to scale and the economics don't hold.
97. "We operate in a regulated industry (supplements/alcohol/CBD) — does this work differently?"
Regulated industries face more advertising restrictions, which often means fewer available channels and higher CPMs. This makes efficient allocation even more critical — you have less margin for error. The causal methodology works identically; the strategic value is actually higher because waste is more costly when your options are limited.
98. "We're pre-revenue / just launched — is it too early?"
If you haven't been running ads for at least six months, yes — we need historical data to build the model. But once you have six months of spend and revenue data, starting causal attribution early means you'll scale with accurate signals from day one instead of building bad habits based on platform-reported vanity metrics.
99. "What if the analysis shows all our channels are actually performing well?"
Then you get the confidence to scale aggressively, knowing your current allocation is sound. That's worth €99 on its own. Most brands live in a state of anxious uncertainty about whether their spend is working. Confirmation is valuable — it lets you invest more boldly with lower risk.
100. "I need to see a demo before I make any decisions."
The €99 one-time analysis is the demo. It runs on your actual data and delivers actionable results specific to your brand. A generic product demo with sample data tells you nothing about whether the output will be valuable for your specific channel mix, spend levels, and business model. Your data is the demo.