Incrementality.
What's the first thing you think of?
It might be testing; it might be attribution or marketing measurement; it might be something associated with experimentation, statistics, or advertising spend that you're not quite sure how to connect. Or maybe you've heard the word thrown around, feel like it's something you should be familiar with, but aren't really sure what it means or how to use incrementality experiments in practice. If so, you certainly wouldn't be alone.
That's why we're here: to not only help explain this concept, but to walk you through how to use incrementality experiments in marketing and business decision-making. We have a strong viewpoint on this stuff – so much so that we're launching a new series about it.
Welcome to Incrementality School.
A snack-sized, tactical guide to incrementality
Congratulations, you've stumbled upon the very first piece in our series aimed at helping marketers, CFOs, and other stakeholders wrap their heads around this funny phrase that online dictionaries have yet to recognize (sidenote: we're working on this). Together, we'll cover:
- What incrementality is (insert "you are here" marker)
- What you can test with incrementality and what the consequences of not testing might look like
- How brands measure incrementality today (and the approaches that don't measure incrementality)
- Who actually needs incrementality testing
- The difference between incrementality experimentation types (such as geo-testing, conversion lift testing, and natural experiments)
- How to foster a culture of incrementality experimentation at your own organization
… and maybe even a thing or two more. These aren't meant to be 201s or 301s – this is incrementality 101 for you, the marketer or financial stakeholder who's heard the word and needs a primer that's deeper than surface level, but not scientist-dense. Snack-sized, if you will.
Get ready for class – you're in good company.
What is Incrementality?
Let's start with the basics: Haus' Head of Strategy Olivia Kory describes incrementality by likening the concept to randomized controlled trials in healthcare:
When you're rolling out a new drug, you're going to give one group of people a placebo drug (the control group) and you're gonna give another group of people – statistically indistinguishable from the control group – the drug (the treatment group).
Then, you observe the difference in behavior between those two groups to validate the efficacy of that drug – that's incrementality testing.
With Haus, we have a counterfactual to understand what would have happened anyway in the absence of this marketing intervention, whether it's search, or video, or YouTube, or OOH. What was that group going to do anyway? And that's fundamentally what we mean when we talk about incrementality testing.
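The placebo-versus-treatment logic above can be sketched as a tiny simulation. Every number here is invented for illustration: a 5% baseline conversion rate, a one-point true lift from seeing ads, and two statistically indistinguishable groups of 100,000 people.

```python
import random

random.seed(42)

def simulate_group(n, conversion_rate):
    """Count how many of n statistically similar people convert."""
    return sum(random.random() < conversion_rate for _ in range(n))

# Invented rates: a 5% baseline, plus one point of true lift from seeing ads.
control = simulate_group(100_000, 0.05)    # the counterfactual: no ads
treatment = simulate_group(100_000, 0.06)  # exposed to ads

print(f"Incremental conversions: {treatment - control}")
```

The control group tells you what "was going to happen anyway"; the gap between the two groups is the incremental effect.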
Haus Principal Economist Phil Erickson takes us one step further, transitioning us from the concept of incrementality to how we might use incrementality testing in a practical sense:
Incrementality measures how a change in strategy causes a change in business outcomes. For example, how would my revenue increase if I increased my ad budget by 10%? Or how many more units would I sell on Amazon if I moved 50% of my ad spend from YouTube to Google PMax?
Lastly, chew on this – from our no-nonsense Head of Science Joe Wyer:
Attribution and incrementality are two frameworks for understanding marketing impacts. Attribution frames impact as "crediting" each customer event to some strategy or tactic. Incrementality frames impact as the change in customer events caused by a change in the application of strategy or tactic.
The thing about "incrementality" is that it's just the non-scientist-friendly way of saying "causality."
Causality. Incrementality = causality.
Here's an open secret: Whereas traditional marketing measurement solutions are rooted in correlational or observational data (yes, even traditional MMMs), incrementality is rooted in causation. Actual causation – the kind that lets you know exactly what's working instead of what may or may not be working.
An incremental conversion is one that results specifically from ad exposure.
How to Measure Incrementality
Measuring incrementality requires controlled experiments that establish causality. There are three primary methodologies:
Geo-Experiments
Geo-experiments compare geographical sets by creating treatment and control groups across different markets or regions. This is Haus' core methodology for incrementality testing.
Key features:
- Standardized across all channels
- Flexible metrics (new vs existing customers, retail vs Amazon)
- Privacy-durable (doesn't rely on user-level tracking)
Example: A DTC brand wants to test if increasing Meta spend drives incremental sales. They divide the US into treatment regions (with increased spend) and control regions (normal spend), then measure the difference in sales between the two groups.
What makes geo-experiments rigorous:
- Stratified sampling ensures balanced representation across treatment and control
- Placebo tests (A/A tests) build confidence that effects are real and not just noise
- Synthetic controls combine and weight multiple control regions for better precision
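As a stylized illustration of the arithmetic behind a geo-test (not Haus' actual model, which relies on synthetic controls), here's a simple difference-in-differences sketch with invented sales figures: the control markets' growth estimates what the treatment markets would have done anyway.

```python
# Invented weekly sales for a stylized before/after comparison (dollars).
pre = {"treatment": 100_000, "control": 100_000}   # before the spend increase
post = {"treatment": 118_000, "control": 106_000}  # during the test

# Scale the treatment group's baseline by the control group's growth to
# estimate what would have happened anyway, then subtract it out.
counterfactual = pre["treatment"] * (post["control"] / pre["control"])
incremental_sales = post["treatment"] - counterfactual
print(f"Estimated incremental sales: ${incremental_sales:,.0f}")
```

Here the treatment markets grew $18K, but $6K of that growth shows up in control markets too – so only $12K is credited as incremental.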
User-Level Experiments
User-level experiments compare different sets of users, typically run by ad platforms themselves (example: Meta Conversion Lift).
Key features:
- Can detect smaller lifts due to larger sample sizes
- Typically run by ad platforms
- Limited to digital channels
Example: Meta's platform randomly assigns users to see or not see your ads, then measures the difference in conversions between exposed and unexposed groups.
Observational Studies
Observational studies compare performance before and after a marketing event or change.
Key features:
- No control group
- Natural experiments based on business changes
- Useful for understanding major shifts
Example: Measuring how a significant price increase or product launch impacted revenue by comparing periods before and after the change.
Why Experiments Matter
Unlike attribution (which tracks user behavior) or traditional MMM (which identifies correlations), experiments are the only method that establishes causality – telling you what actually drives business outcomes.
As detailed in our measurement fundamentals guide:
- Attribution has no holdout group, is limited by privacy changes, can't measure offline channels, and tends to undervalue the upper funnel
- Traditional MMM has no holdout group, suffers from multicollinearity problems, and can be slow to update
- Experiments use holdout groups to answer "what would have happened without marketing?," employ test/control methodology, and are tied to actual sales data and business outcomes
Incrementality: Key Terms
Understanding incrementality requires familiarity with several key concepts:
Incrementality Factor (IF)
The ratio of incremental conversions to attributed conversions. An IF of 1.0 means every attributed conversion was truly incremental; an IF of 0.5 means only half were incremental.
Example: If Meta reports 750 attributed orders from $100K spend, but an experiment shows 900 incremental orders, the IF is 1.2 (900 ÷ 750).
Cost Per Incremental Acquisition (CPIA)
The true cost of acquiring a customer through your marketing efforts, calculated using incremental conversions rather than attributed conversions.
Formula: Total Spend ÷ Incremental Conversions = CPIA
Why it matters: CPIA can be compared apples-to-apples across all channels – even out-of-home advertising and offline tactics – because it's grounded in causal experiments. This enables true cross-channel optimization.
Incremental Return on Ad Spend (iROAS)
The revenue generated per dollar spent, calculated using only incremental conversions. Unlike platform-reported ROAS, iROAS tells you the true return on your investment.
Formula: Incremental Revenue ÷ Total Spend = iROAS
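All three formulas are one-liners. Here they are applied to the IF example's numbers ($100K spend, 900 incremental orders), plus an assumed $90K of incremental revenue for the iROAS line:

```python
def incrementality_factor(incremental, attributed):
    """IF: incremental conversions divided by attributed conversions."""
    return incremental / attributed

def cpia(total_spend, incremental_conversions):
    """Cost per incremental acquisition."""
    return total_spend / incremental_conversions

def iroas(incremental_revenue, total_spend):
    """Incremental return on ad spend."""
    return incremental_revenue / total_spend

print(incrementality_factor(900, 750))  # 1.2
print(cpia(100_000, 900))               # ~111.1 dollars per incremental order
print(iroas(90_000, 100_000))           # 0.9
```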
Power and Precision
Two critical metrics for experiment design:
- Power: The probability of detecting an effect if one truly exists – like the probability of hearing music in a crowded park. If the music is soft or you're far away, you may not hear it even if it's there. Haus recommends achieving at least 80% power for reliable test results.
- Precision: How accurately you can measure the size of that effect – like how clearly you can make out the specific song and lyrics. If the sound is fuzzy, you may not understand exactly what's being sung.
What improves power:
- Better precision
- Bigger lifts
- Larger holdout sizes
- Longer test duration
What improves precision:
- Larger holdout sizes
- Longer test duration
- Stable historical data
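One way to build intuition for power is a quick Monte Carlo sketch: simulate many hypothetical experiments and count how often a given lift clears a conventional significance bar. Everything below – the rates, the lift, the sample sizes, and the simplified z-test – is illustrative, not Haus' methodology.

```python
import random

random.seed(0)

def experiment_detects_lift(n, base_rate, lift, z_crit=1.96):
    """One simulated test: does the observed lift look statistically significant?"""
    control = sum(random.random() < base_rate for _ in range(n))
    treatment = sum(random.random() < base_rate + lift for _ in range(n))
    pooled = (control + treatment) / (2 * n)
    se = (2 * pooled * (1 - pooled) / n) ** 0.5  # std error of the rate difference
    return (treatment - control) / n > z_crit * se

def power(n, base_rate, lift, runs=200):
    """Share of simulated experiments that detect the lift."""
    return sum(experiment_detects_lift(n, base_rate, lift) for _ in range(runs)) / runs

p_small = power(5_000, 0.05, 0.01)   # smaller sample: the lift is often missed
p_large = power(20_000, 0.05, 0.01)  # larger sample: the lift is caught far more often
print(p_small, p_large)
```

Same true lift, same baseline – but quadrupling the sample pushes power from "coin flip-ish" to near certainty. That's exactly why holdout size and test duration matter.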
Stratified Sampling
A technique that ensures balanced representation across treatment and control groups by accounting for multiple variables (size, seasonality, demographics) rather than just population size.
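A minimal sketch of the idea, with invented markets and a single stratification variable (size tier): randomizing within each stratum guarantees both groups get the same mix of large and small markets, which pure random assignment can't promise.

```python
import random

random.seed(7)

# Hypothetical markets tagged with a size tier (the stratification variable).
markets = [
    ("NYC", "large"), ("LA", "large"), ("Chicago", "large"), ("Dallas", "large"),
    ("Tucson", "small"), ("Boise", "small"), ("Omaha", "small"), ("Reno", "small"),
]

treatment, control = [], []
# Split each stratum separately, so both groups end up with the same tier mix.
for tier in ("large", "small"):
    stratum = [name for name, t in markets if t == tier]
    random.shuffle(stratum)
    half = len(stratum) // 2
    treatment += stratum[:half]
    control += stratum[half:]

print(sorted(treatment), sorted(control))
```

A real design would balance several variables at once (size, seasonality, demographics), but the principle is the same.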
Synthetic Control
A method that combines and weights multiple control regions to create a better comparison group, resulting in more precise measurements than simple matched markets. Haus' research shows this approach is 4x more precise than matched market tests.
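The intuition can be sketched with a toy least-squares fit over invented sales histories. Real synthetic control methods typically constrain the weights (for example, non-negative and summing to one); this unconstrained version only shows the shape of the idea: blend the control markets so the blend tracks the treatment market's history.

```python
import numpy as np

# Invented weekly sales histories: one treatment market and three control markets.
treatment_hist = np.array([10.0, 11.0, 12.0, 11.5])
controls_hist = np.array([
    [9.0, 10.0, 11.0, 10.5],   # control market A
    [20.0, 22.0, 24.0, 23.0],  # control market B
    [5.0, 5.5, 6.0, 5.8],      # control market C
]).T  # shape: (weeks, markets)

# Solve for weights so the blended controls track the treatment market's history.
weights, *_ = np.linalg.lstsq(controls_hist, treatment_hist, rcond=None)
synthetic = controls_hist @ weights  # the "synthetic" comparison series
print(np.round(weights, 2))
```

During a test, the weighted blend keeps evolving as the counterfactual, and the treatment market's departure from it is the measured lift.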
What's Next?
That's where we'll leave things for now (told you: snack-sized). Tune in next week to learn more about what kinds of things you can measure with incrementality testing – and the potential real-world business consequences of punting on experimentation.
Frequently Asked Questions
What's the difference between incrementality and attribution?
Attribution tracks which touchpoints a customer interacted with before converting. Incrementality measures which marketing actually caused the conversion. An incremental conversion is one that results specifically from ad exposure – meaning it wouldn't have happened without your marketing.
Attribution tells you correlation (what happened), while incrementality tells you causation (what you made happen).
Learn more in our guide: Incrementality vs. Attribution
Can I rely on benchmarks or other companies' test results?
No. Law #5 of Incrementality states that "incrementality is unique to your business." What works for one brand may not work for another due to differences in brand strength, competition, distribution channels, and objectives.
For example, Haus' analysis of branded search shows that brands in high-competition markets see significant lift 82% of the time, while brands in low-competition markets only see significant lift 35% of the time. The only way to know where you stand is to test for yourself.
Do I need to run experiments continuously?
Yes. Law #4 states that "incrementality is a continuous practice." Marketing effectiveness changes over time due to seasonality, competition, creative fatigue, and platform changes.
As shown in the Jones Road Beauty case study, running multiple tests over time (they ran 3 Meta baseline tests) led to a 31% improvement in iROAS by identifying and fixing issues iteratively. Their first test diagnosed a customer exclusion issue, their second identified a reach problem, and their third optimized for mid-funnel events.
How is incrementality different from Marketing Mix Modeling (MMM)?
Traditional MMM uses historical correlations to estimate channel performance but cannot establish causation. Incrementality testing uses controlled experiments with holdout groups to prove what actually drives results.
The key difference: experiments reveal causality while traditional MMMs reveal correlation.
Learn more: Incrementality Testing vs. Traditional MMM
What does "actioning on incrementality" mean?
It means using your test results to make concrete changes to your media mix – like reallocating budget from underperforming channels to winners. Law #3 states: "You can't get ROI unless you action."
For example, if testing reveals that:
- Meta Advantage+ has a CPIA of $111 (16% better than average)
- Google Brand Search has a CPIA of $333 (153% worse than average)
Shifting 20% of budget from Brand Search to Meta creates significant compounding value over time. Even maintaining the same total budget, you can unlock a 6% increase in incremental order volume by moving money to more efficient channels.
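The arithmetic behind that kind of reallocation is straightforward. The sketch below uses the CPIAs from the example plus an assumed 50/50 starting split of $200K across the two channels, and treats CPIA as constant (real returns diminish as spend scales). With these assumed budgets the gain works out to 10%; the 6% figure above reflects a different underlying budget mix.

```python
# CPIAs from the example above; the budget split is an assumption for illustration.
cpia = {"meta": 111, "brand_search": 333}
budget = {"meta": 100_000, "brand_search": 100_000}

def incremental_orders(b):
    """Total incremental orders implied by a budget, holding CPIAs fixed."""
    return sum(spend / cpia[channel] for channel, spend in b.items())

before = incremental_orders(budget)

# Shift 20% of the Brand Search budget into Meta; total spend stays flat.
shift = 0.20 * budget["brand_search"]
after = incremental_orders({"meta": budget["meta"] + shift,
                           "brand_search": budget["brand_search"] - shift})

print(f"{after / before - 1:.1%} more incremental orders for the same spend")
```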
How long does an incrementality test take?
It depends on your business volume and the experiment design. Tests typically run for 2-4 weeks, with the following considerations:
- Holdout size: 10-50% of markets (larger holdouts = faster results but more opportunity cost)
- Test duration: 2-4 weeks (longer tests = more precision)
- Business volume: Higher volume enables shorter tests with smaller holdouts
For example, a 3-week test with a 20% holdout achieves 80% power, while a 2-week test with the same holdout achieves 77% power. The specific design depends on your risk tolerance and need for speed.
What can I test with incrementality experiments?
You can test any marketing tactic, channel, or strategy change:
- Branded search effectiveness
- Channel-level spend (Meta, Google, TikTok, YouTube)
- Out-of-home campaigns
- Promotional effectiveness
- Creative variations
- Audience targeting strategies
- New vs. existing customer focus
- Omnichannel impact (DTC + Retail + Amazon)
Read more: What Can You Incrementality Test?
What are some channels that are difficult to test?
Some channels present unique measurement challenges:
- Host-read podcasts: Cannot be geo-segmented (use time tests instead)
- Out-of-home: Markets are pre-determined (use fixed geo tests)
- Linear TV: Converting from national to local is expensive (use geo-tests with CPM premium, fixed geo, or time tests)
- Influencer: Cannot geo-segment organic creator posts (time tests for organic content, geo-tests for boosted partnership ads)
However, with the right methodology, even these "tricky" channels can be measured for incrementality. Learn more: Can You Measure OOH?
