When marketers start thinking about incrementality testing, one of the first questions they ask is: "What should my results look like?"
The short answer is that while some aggregate data exists, incrementality benchmarks will not tell you what you need to know about your own marketing performance. What works for one business often does not work for another, even within the same industry or channel. The only way to understand whether your incrementality test result was "good" or "bad" is to test for yourself.
Incrementality = causality. It measures the direct causal impact your marketing has on business outcomes. This is fundamentally different from correlation or attribution, which simply track what happened without proving your marketing caused it to happen.
Because incrementality is causality, it can only be measured through controlled experiments that establish a true counterfactual: What would have happened without your marketing? This causal relationship is unique to your specific situation because it depends on factors that vary dramatically across businesses.
Your competitive landscape shapes how much incremental value your ads can generate. For example, a brand facing heavy competition on branded search terms may see different incrementality than one with little to no competition. The same channel can be highly incremental for one business and nearly worthless for another based solely on competitive dynamics, omnichannel influences, or other consumer behavior factors.
Distribution channels influence which marketing tactics drive incremental sales, and where. A brand selling primarily through Amazon may see different incrementality patterns than one focused on direct-to-consumer sales, even if both use identical advertising strategies.
These factors combine in ways that make your business fundamentally different from any benchmark population. Even businesses that appear similar on the surface (say, two DTC beauty brands with similar revenue) can have wildly different incrementality profiles.
Experiments, not models, reveal causality. This is a critical distinction when thinking about incrementality benchmarks.
Attribution models and traditional marketing mix models (MMM) rely on historical correlations. They observe patterns in your data and make assumptions about what caused what. These tools can be useful, but they do not establish causality. They cannot tell you with certainty that your marketing caused a specific outcome because they lack a true control group.
Incrementality experiments, by contrast, use test and control methodology. You hold out marketing activity from some regions while continuing to serve it in others. The difference in outcomes between these groups, controlling for all other variables, reveals the causal impact of your marketing.
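To make the test-and-control idea concrete, here is a minimal sketch of the comparison in Python. The region names, conversion counts, and the assumption that regions are otherwise comparable are all hypothetical; a real geo experiment would also account for region size and noise.

```python
# Hypothetical geo-holdout comparison. All region names and numbers
# are illustrative, not real benchmark data.

# Weekly conversions per region during the test window.
treatment = {"region_a": 1200, "region_b": 950, "region_c": 1100}  # ads running
holdout = {"region_d": 900, "region_e": 880, "region_f": 860}      # ads held out

avg_treatment = sum(treatment.values()) / len(treatment)
avg_holdout = sum(holdout.values()) / len(holdout)

# The holdout average estimates the counterfactual: what would have
# happened without the marketing.
incremental = avg_treatment - avg_holdout
lift = avg_treatment / avg_holdout

print(f"Incremental conversions per region: {incremental:.0f}")
print(f"Lift: {lift:.2f}x")
```

The holdout group is what separates this from attribution: the counterfactual is measured, not modeled.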
This is why incrementality benchmarks are particularly misleading. A benchmark might tell you something like "the average brand sees a 1.2x lift from Channel ABC advertising," but this number comes from experiments run under completely different conditions than yours. The causal relationship those experiments measured applies to those specific businesses, with their specific competitive dynamics, brand strength, seasonality, and consumer behavior. It does not transfer to your business.
Some research has been published on incrementality across groups of brands, but the findings consistently demonstrate variation rather than reliable benchmarks. Consider what Haus found when analyzing 640 incrementality tests for Meta advertising:
Manual campaigns outperformed Advantage+ Shopping campaigns 58% of the time. This finding made headlines and led to recommendations that brands should stop using Advantage+ campaigns. But the complete picture is more nuanced. For 42% of brands in the study, Advantage+ actually outperformed Manual campaigns. For 39% of brands, Advantage+ receives the majority of their Meta budget because it works better for their specific business.
The correct takeaway is not "Manual is better than Advantage+." The correct takeaway is that what works varies by business. Some brands benefit from Manual campaigns, others from Advantage+. The only way to know which camp you're in is to test both approaches for your business.
Similar patterns emerge in branded search testing, where incrementality varies with factors like competitive pressure on branded terms and existing brand strength.
The alternative to benchmarks is not flying blind. The alternative is building your own incrementality practice through continuous testing. This approach gives you insights that are actually actionable for your specific business.
Start with baseline tests for your core channels. The goal is understanding whether each major tactic (Meta, Google branded search, YouTube, etc.) is driving incremental value or simply taking credit for sales that would have happened anyway. These baseline reads tell you which channels deserve continued investment and which might be wasting budget.
A well-designed baseline test compares business outcomes between regions where you run marketing (treatment) and regions where you do not (holdout). The difference between these groups, adjusted for pre-existing trends, reveals the true incremental impact of your marketing.
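The "adjusted for pre-existing trends" step can be sketched as a simple difference-in-differences calculation. All numbers below are illustrative assumptions; the point is the structure of the adjustment, not the values.

```python
# Hypothetical difference-in-differences sketch: adjusting the
# treatment-vs-holdout comparison for pre-existing differences
# between the two groups. Numbers are illustrative.

# Average weekly sales per region, before and during the test.
pre_treatment, post_treatment = 1000.0, 1250.0   # regions with marketing
pre_holdout, post_holdout = 950.0, 1000.0        # regions without marketing

# Change within each group over the test window.
treatment_delta = post_treatment - pre_treatment
holdout_delta = post_holdout - pre_holdout

# The holdout's change estimates what would have happened anyway;
# subtracting it isolates the incremental effect of the marketing.
incremental_effect = treatment_delta - holdout_delta

print(f"Incremental weekly sales per region: {incremental_effect:.0f}")
```

Without the pre-period adjustment, any pre-existing gap between the regions would be misread as marketing impact.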
Once you have baseline incrementality data, you can iterate to improve performance.
Each test builds on previous learnings, creating a compounding knowledge advantage. After six months of continuous testing, you'll understand your marketing effectiveness better than any external benchmark could ever tell you.
A successful incrementality practice starts with clear business objectives. Are you focused on efficiency, growth, or something else? Your goals shape which tests matter most.
Efficiency-focused businesses should prioritize identifying and cutting waste. Run baseline tests to find channels or tactics with low incrementality, then reallocate that budget to stronger performers or reduce total spend.
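For an efficiency-focused program, the reallocation decision can be sketched as ranking channels by the incremental ROAS measured in baseline tests. The channel names, iROAS values, and the 1.0 cutoff below are all hypothetical assumptions for illustration.

```python
# Hypothetical sketch: flagging low-incrementality channels as
# reallocation candidates. Channel names, iROAS values, and the
# cutoff are illustrative assumptions, not benchmarks.
baseline_iroas = {
    "meta": 2.1,
    "branded_search": 0.4,
    "youtube": 1.3,
    "display": 0.7,
}

CUTOFF = 1.0  # below this, a dollar of spend returns less than a dollar

# Channels below the cutoff, worst first.
cut_candidates = sorted(
    (ch for ch, iroas in baseline_iroas.items() if iroas < CUTOFF),
    key=baseline_iroas.get,
)
print("Reallocate budget from:", cut_candidates)
```

The cutoff itself is a business decision; a brand optimizing for growth might tolerate a lower iROAS than one optimizing for profit.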
Growth-focused businesses should test scaling opportunities. Once you know which channels are incremental, push them harder to find the point where returns diminish, then test new channels or tactics to find additional growth levers.
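Finding the point where returns diminish can be sketched by comparing marginal incremental ROAS across spend levels tested in successive experiments. The spend levels and incremental revenue figures below are hypothetical.

```python
# Hypothetical sketch: checking for diminishing returns as a channel
# scales. Each tuple is (weekly spend, incremental revenue measured
# in a geo test at that spend level). Numbers are illustrative.
spend_tests = [
    (10_000, 35_000),
    (20_000, 60_000),
    (40_000, 90_000),
]

# Marginal iROAS: incremental revenue gained per extra dollar of
# spend between consecutive tested spend levels.
marginal_iroas = [
    (r1 - r0) / (s1 - s0)
    for (s0, r0), (s1, r1) in zip(spend_tests, spend_tests[1:])
]
print("Marginal iROAS between spend levels:", marginal_iroas)
```

A declining marginal iROAS signals the channel is saturating at that spend level, which is the cue to test new channels or tactics instead of pushing further.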
Your testing roadmap should balance quick-win tests with strategic tests that build longer-term knowledge.
The key is consistency. Incrementality is not a one-time project but a continuous practice. Each test answers specific questions while generating new hypotheses to explore. Over time, this builds a comprehensive understanding of your marketing effectiveness that no external benchmark could provide.
Incrementality benchmarks exist, but they will not tell you what you need to know. The variation across businesses is too great, the factors influencing incrementality too specific, and the strategic implications too important to rely on someone else's numbers.
Because incrementality is causality, it must be established through experiments specific to your business. What works for one business may not work for another. The brands that win do not chase benchmarks; they build their own data through rigorous, continuous testing. This testing reveals not just what is working but why, creating strategic advantages that compound over time.
The question is not "What's the benchmark for my channel?" The question is "What's incremental for my business?" And there's only one way to answer it: test for yourself.
Make better ad investment decisions with Haus.