Raise your hand if you've ever lost sleep over a marketing campaign. (We can't see you right now, but we're guessing your hand is up.)
Now raise your other hand if it's specifically measurement that keeps you up at night. (Okay, you can put your hands down. People are staring.)
We get it: marketing measurement isn't easy. Today's customer journey is a tangled web of cross-channel touchpoints. Plus, a steady drumbeat of new privacy regulations has made it harder than ever to actually understand this journey. That's why more and more marketers are ditching outdated attribution models and choosing incrementality to measure their marketing.
Incrementality testing takes an experiment-driven approach, which helps you understand the true, causal impact of your campaign. Did seeing a marketing campaign cause a customer to make a purchase? If the answer is yes, that's an incremental conversion.
This comprehensive guide is designed to help you get started. We'll lay out the shortcomings of traditional models, the benefits of incrementality, and some tactical steps for building an incrementality testing plan. No, we can't promise we'll cure your marketing-fueled insomnia, but we'll get you that much closer to building a modern, science-backed measurement practice.
Today's marketing measurement hurdles
In 2025, marketing measurement is a bit like one of those American Ninja Warrior obstacle courses. You're trying to understand your customers, but they're too busy jumping from platform to platform. And just when you think you're nearing the finish line, a new privacy initiative pops up and blocks your path to accurate measurement.
It's an exhausting process, and many of today's popular tools aren't ready to meet the moment. So before we dive into the benefits of incrementality testing, let's first look at why so many of these tools are frustratingly outdated.
Traditional measurement limitations
First, let's rewind. The year is 1992. A man (probably wearing a flannel shirt and listening to Nirvana on his Discman) sees an ad for a candy bar on the side of a city bus. He isn't planning on buying a candy bar, but the ad convinces him. So he goes into a store and buys that candy bar. Boom: incremental conversion.
Fast-forward to today and the path to purchase isn't so simple. Now a person might see an ad for that candy bar before a YouTube video, not even realizing that they had gotten an email about that same candy bar earlier that day. Which campaign convinced them to purchase?
For brands that are advertising on multiple channels or platforms, incrementality can offer answers. Otherwise, you might have to rely on platforms that mistake correlation for causation, which often leads to inflated, inaccurate results and inefficient ad spend.
Privacy updates change everything
It's not just that the customer journey is more complex. Growing support for consumer privacy protections has also complicated things. Laws like the General Data Protection Regulation (GDPR) in the EU and the California Consumer Privacy Act (CCPA) are affecting how companies collect, store, and use consumer data.
And no, this desire for enhanced privacy isn't just going to go away. The proverbial horse is out of the barn. Public sentiment and legislative momentum mean the laws will keep coming. Marketers have no choice but to accept this reality and evolve their tactics.
Luckily, platforms like Haus are privacy-durable. That's because incrementality tests don't rely on pixels, PII, or other vulnerable information. Instead of asking "How is our campaign affecting this user's behavior?" incrementality tests geo-segment conversion data so that consumer behavior and business outcomes are analyzed through regional test-and-control experiments.
Existing tools aren't cutting it
Marketers have no shortage of measurement tools at their disposal. The problem is that these tools lack accuracy, don't account for new privacy regulations, and don't show causation, meaning they don't tell us if an outcome happened because of an ad.
So let's walk through these options one by one and explain how they come up short.
Platform reporting
If you're relying on the platforms themselves for conversion data, you'll need to take it all with a big grain of salt. After all, these platforms are grading their own homework and will happily take credit for a conversion that would have happened anyway, without an ad. Because of this over-reporting of ROI, many marketing teams end up spending way too much on these channels. Meanwhile, incrementality testing will tell you how many conversions they actually drive so you can optimize your spend.
Multi-touch attribution (MTA)
MTA was billed as a cure-all for traditional attribution's flaws. But MTA leans heavily on cookies and pixel tracking, which today's privacy regulations have made much harder to rely on. Plus, MTA tends to underrate the upper funnel and bases its metrics on correlation, which still doesn't explain the true impact of your marketing. Oh, and it doesn't do well accounting for seasonality, which can drastically affect your conversion rates.
Traditional media mix models (MMMs)
Traditional MMMs rely on aggregate daily KPI data, which doesn't give the model enough data to make precise conclusions. (That's why traditional MMMs rarely give a confidence interval with their results; it would just be too wide of an interval.) They're also another model driven by correlation, which doesn't offer an accurate read into marketing performance. The cherry on top: traditional MMMs tend to be "resource-intensive," which is a fancy way of saying "expensive." For all these reasons, we're building a causal MMM.
Okay, now that we've completed our roast of traditional measurement solutions, let's explain how incrementality can help you overcome these marketing hurdles.
The power of incrementality testing
Brands have gravitated toward incrementality testing because it's powered by causality, it's privacy-durable, it's based on current sales data, and it's fast. With the right mix of incrementality tests, brands are able to allocate budget efficiently and maximize growth.
But before we go any further, let's make sure we know exactly what incrementality is and how it tends to work.
So, what is incrementality?
Haus' Principal Economist Phil Erickson offers an elegant definition: Incrementality measures how a change in strategy causes a change in business outcomes.
For instance, if we invest more in a certain channel, how does that affect conversions? What about if we prioritize more upper-funnel creative? Or include branded search terms? When it comes to testing, the options are pretty much endless.
So we get what incrementality does. But what does it look like in practice? Here we like to use the analogy of randomized controlled trials for drug development. In such a trial, one group gets the drug (the treatment group) and one group gets a placebo (the control group). Then you measure both groups' reactions to understand the efficacy of the drug. If both groups have similar outcomes, the drug likely isn't very effective.
An incrementality test is the same sort of thing (minus the drugs). Instead, the treatment group sees a marketing campaign, while the control group doesn't. Then you compare KPIs. Did the treatment group convert significantly more than the control group? If so, the ad was incremental to some degree. If there was minimal or no lift, that might be a campaign worth rethinking.
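If you like seeing the arithmetic, here's a minimal sketch of that comparison in Python. (The numbers are made up purely for illustration; real tests involve far more careful statistics.)

```python
# Hypothetical conversion counts from a finished geo test.
treatment_conversions = 1_200   # conversions in regions that saw the campaign
treatment_population = 500_000
control_conversions = 900       # conversions in held-out regions
control_population = 500_000

treatment_rate = treatment_conversions / treatment_population
control_rate = control_conversions / control_population

# Lift: how much more the treatment group converted than the control group.
lift = (treatment_rate - control_rate) / control_rate
print(f"Lift: {lift:.0%}")  # ~33% in this made-up example

# Incremental conversions: purchases that wouldn't have happened without the ads.
incremental = treatment_conversions - control_rate * treatment_population
print(f"Incremental conversions: {incremental:.0f}")  # 300 here
```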
Incrementality: The hard way
You're sold on incrementality and have decided to start your own in-house incrementality program. We'll warn you: It won't be easy. To ensure you reap incrementality's rewards, you'll need to nail the following steps.
1. Assemble an expert team
Your org is likely packed with growth marketing experts. But, all due respect, it's less likely your team is full of experts in causal inference. And that's okay. But in order to run powerful incrementality tests, you'll need advanced data scientists to design an experiment that controls for variables, reduces bias, and selects randomized samples.
In short, you need a team with a wealth of experience running rigorous experiments. This will require recruiting and hiring talented PhD-level data scientists and economists.
2. Home in on your goals
Now that you've spent seven figures assembling the Avengers of causal inference expertise, it's time to put their brain power to use. That starts with setting a clear experimental goal. Just saying "I want to know if my marketing is working" won't cut it. You can test lots of different things. You may go a bit more general and test whether a new channel is incremental. Or you may go a bit granular and test whether you should include branded terms in your PMAX campaigns. (We have…lots of thoughts on that.)
You'll probably want to hire some strategists who can help you prioritize experiments and build a testing roadmap. The clearer and more focused your goals are, the more likely you are to surface useful insights. (Spoiler alert: If you cut corners here, you'll have a much harder time uncovering valuable takeaways.)
3. Structure the exact experiment you need
Once you've figured out the first experiment you want to run, it's time to pick test groups. These will vary based on the question you're trying to answer.
For instance, say you want to figure out if TikTok is incremental. You'll want two groups: one that gets the TikTok campaign and one that doesn't. In Haus-speak, this is known as a 2-cell geo-holdout: "2-cell" just means you have two groups, and "geo-holdout" means one region doesn't receive the marketing intervention (i.e., it serves as the control group).
Say instead you're trying to figure out the optimal spend level on TikTok. Then you need three groups: two treatment groups and a control. (This is called, you guessed it, a 3-cell test.) A common 3-cell test involves one group that receives your business-as-usual (BAU) TikTok ad spend, another that receives double this BAU ad spend, and a third group that receives no TikTok ads.
Bottom line: You'll need an incrementality platform that lets you easily toggle between 2-cell tests and 3-cell tests, with or without holdouts.
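To make the cell structure concrete, here's a toy sketch of dealing regions into the three cells of a spend test. (This is simple random assignment for illustration only; it's not how Haus actually builds its matched groups, and every name below is hypothetical.)

```python
import random

# Hypothetical test regions; in practice these might be DMAs or states.
regions = ["Austin", "Boise", "Charlotte", "Denver", "Eugene",
           "Fresno", "Hartford", "Madison", "Tulsa"]

cells = ["control (no TikTok ads)", "treatment (BAU spend)", "treatment (2x BAU spend)"]

random.seed(42)          # fixed seed so the assignment is reproducible
random.shuffle(regions)

# Deal the shuffled regions round-robin into the three cells.
assignment = {cell: regions[i::3] for i, cell in enumerate(cells)}

for cell, geos in assignment.items():
    print(f"{cell}: {geos}")
```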
4. Don't forget to control for… everything
If a drug company is testing new allergy meds, they would never choose a control group full of people with a long history of allergies and a treatment group full of people who have never experienced allergies. That would introduce a big, fat confounding variable.
The same goes for incrementality testing. You want control groups and treatment groups that are very similar. Use frontier econometric methods to create characteristically indistinguishable groups. And be sure to take the extra step of cleaning up your data to smooth out noise, account for seasonality, and weed out outliers, which ensures your metrics are accurate and actionable.
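One basic sanity check, far simpler than the frontier methods mentioned above but useful for intuition, is comparing pre-period KPIs across your groups before the campaign ever launches. A sketch with invented numbers:

```python
from statistics import mean

# Hypothetical pre-period daily conversions for each group (no campaign running yet).
treatment_pre = [112, 98, 105, 120, 101, 99, 110]
control_pre = [108, 101, 103, 117, 104, 97, 112]

t_mean, c_mean = mean(treatment_pre), mean(control_pre)
gap = abs(t_mean - c_mean) / c_mean

print(f"Treatment pre-period mean: {t_mean:.1f}")
print(f"Control pre-period mean: {c_mean:.1f}")
print(f"Relative gap: {gap:.1%}")

# A crude threshold for this sketch (our invention, not an official rule):
if gap > 0.05:
    print("Groups look imbalanced; rematch them before launching the test.")
```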
5. Flexibly calibrate testing power
There are tradeoffs when it comes to experimental design. For instance, a four-week holdout experiment is more "powerful" than a two-week holdout experiment. That just means that the four-week experiment has a narrower confidence interval. We can be more confident in the results because we've collected more data in four weeks (compared to two).
That said, you might not always have the bandwidth to run a longer test. You might need results in two weeks, not four. And you might not feel comfortable turning off advertising in a region for four weeks.Â
So you'll need to design a program that allows you to flexibly change your experimentation period and holdout size on an experiment-by-experiment basis, helping you find that sweet spot between quick results and reliable data.
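To see why longer tests are more powerful, note that under a simplifying assumption of roughly independent daily observations, the uncertainty around your lift estimate shrinks with the square root of the number of test days. A quick illustration (not Haus' actual power calculation):

```python
import math

daily_noise = 40.0  # hypothetical day-to-day std dev of the treatment-vs-control gap

def ci_halfwidth(days: int, z: float = 1.96) -> float:
    """Half-width of a 95% confidence interval for average daily lift, assuming i.i.d. days."""
    return z * daily_noise / math.sqrt(days)

for days in (14, 28):
    print(f"{days}-day test: lift estimate +/- {ci_halfwidth(days):.1f} per day")

# Doubling the test from two weeks to four narrows the interval by a factor
# of 1/sqrt(2), i.e. roughly 29%: that's the tradeoff described above.
```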
6. Get your results (and use them)
You run the experiment. A few weeks pass, then you get your results and email them to your team. Then? You all high-five and head out to happy hour to celebrate your first ever incrementality test!
Not so fast. It's time to analyze those results and act on them. (Then you can go to happy hour. We promise.) Ultimately, your next move will depend on the questions you were trying to answer and the KPIs you were tracking. Below are some possible moves to make after analyzing results.
Recalibrate platform metrics
After you complete an incrementality test, you'll get a nifty metric called the incrementality factor (IF). Your IF signifies the proportion of total conversions that were incremental. So an IF of 0.6 means that if you had 1,000 total conversions, 600 of those were incremental.
Applying this IF to your platform-attributed metrics is a great way to get a more realistic view of your marketing's impact. After all, platform-reported metrics tend to overstate impact.
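Here's what that recalibration looks like in miniature, using invented numbers:

```python
# Hypothetical platform-reported results for one channel.
platform_conversions = 1_000
ad_spend = 20_000.0

# Incrementality factor from a completed test: 60% of conversions were incremental.
incrementality_factor = 0.6

incremental_conversions = platform_conversions * incrementality_factor

platform_cpa = ad_spend / platform_conversions        # the CPA the platform claims
incremental_cpa = ad_spend / incremental_conversions  # cost per *true* conversion

print(f"Platform-reported CPA: ${platform_cpa:.2f}")            # $20.00
print(f"Incrementality-adjusted CPA: ${incremental_cpa:.2f}")   # $33.33
```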
Refine channel budgets
Are you over-spending on a channel? Under-spending? A powerful incrementality platform can give you the insights needed to correctly allocate your ad budget. For instance, you might learn that more spend doesn't exactly equal more conversions. In which case, you can scale back the channel and lower your CPA, without lowering conversions.
Channel response curves tend to flatten as spend climbs. If your spend falls on the flat part of the curve, spending more doesn't mean you'll get more conversions, which means it's time to scale back and save.
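As a toy illustration of that flattening, imagine a channel whose conversions follow a saturating response to spend. The curve and every number here are invented for the example:

```python
def conversions(spend: float) -> float:
    """Toy saturating response curve (invented): conversions plateau near 1,000."""
    return 1_000 * spend / (spend + 20_000)

for spend in (10_000, 20_000, 40_000, 80_000):
    c = conversions(spend)
    print(f"${spend:>6,} spend -> {c:.0f} conversions (CPA ${spend / c:.2f})")

# Each doubling of spend buys fewer and fewer extra conversions as the curve
# flattens, so CPA balloons: the further right you sit, the stronger the case
# for scaling back.
```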
7. Plan informed future tests based on what you learn
Say you've learned you can scale back your spend on a certain channel. This produces an obvious next question: Where should we reallocate that ad spend? Maybe this money could go toward a brand new channel. You've always been YouTube-curious, so now's your chance to run a 2-cell holdout test to see if YouTube's incremental.
In short, insights tend to build on one another as you test more and more. Often you'll find that answers you get from one incrementality test will inspire the central question of your next test. Before you know it, you'll have developed a culture of experimentation.
Incrementality: The easy way
As you can see, incrementality is as easy as 1-2-3…4…5-6-7. Whew. Okay. Maybe it's not that easy. Especially when step #1 involves assembling a dream team of causal inference experts. And that's before you even get into building your own experimentation software, then designing intentional, effective, non-noisy experiments.
Thankfully, there's an easier, one-step solution.
Step 1: Choose a partner that already has those experts, already has an automated self-serve incrementality platform, and has already helped brands big and small save millions in ad spend.
That's it. No more steps. With Haus, you can get up and running fast, then enjoy the following benefits that come with precise, expert-backed incrementality testing.
More accurate budget planning
No more basing decisions on correlative data, which can overstate or understate impact. Instead, you'll be working off of your marketing's true impact, and you can budget accordingly.
When it comes to testing, you can learn much more than just "Does this campaign work?" Maybe you want to find out if increasing spend by 10% on a certain channel moves the needle. Don't be afraid to get granular, then zoom out and fill in the big picture. (Haus comes packed with experimental templates so that you can get started faster.)
A full view of omnichannel impact
If you only measure DTC, you might be missing out on some incremental conversions and underestimating impact. That's why Haus ingests .com data and Amazon data, so you can understand your marketing's true impact across all sales channels (including physical retail locations).
Preparation for the next big privacy initiative
Now every news article about privacy controls won't fill you with dread. Instead, you'll have left individual user-based tracking in the past and designed experiments based on group behavior, which keeps any single user anonymous. For more on how Haus helps teams handle the ongoing privacy initiatives, check out the tail end of this Open Haus episode.
An extra layer of precision
"Attributing changes in data (ex: 'my KPI went up or down') to certain factors (ex: 'we upped spend in an ad channel') is incredibly hard to do well, and extremely easy to screw up," explains Haus' Economist Simeon Minard. "The world is huge and chaotic, and there are a million things that could be messing up your measurement."
That's why Haus uses synthetic control methodology to take your precision from good to great. In fact, Haus' results are 4x more precise than platforms that use matched market test design. Then, some extra data-cleaning and outlier management steps add yet another layer of precision.
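For intuition, here's a stripped-down sketch of the general idea behind synthetic control: fit non-negative weights so a blend of untreated geos tracks the treated geo before the test, then use that blend as the counterfactual during the test. (This is a simplified textbook version that skips the usual sum-to-one constraint and much else; it is not Haus' production methodology, and all the data below is simulated.)

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
true_weights = np.array([0.5, 0.3, 0.0, 0.2, 0.0])

# Simulated daily KPIs: 30 pre-period days for 5 control geos and 1 treated geo.
controls_pre = rng.normal(100, 10, size=(30, 5))
treated_pre = controls_pre @ true_weights + rng.normal(0, 2, 30)

# Fit non-negative weights so a blend of control geos mimics the treated geo.
weights, _ = nnls(controls_pre, treated_pre)
print("Fitted weights:", np.round(weights, 2))

# During the 14-day test, the weighted blend serves as the counterfactual.
controls_test = rng.normal(100, 10, size=(14, 5))
counterfactual = controls_test @ weights
treated_test = controls_test @ true_weights + 8  # simulate +8/day of true lift

estimated_lift = (treated_test - counterfactual).mean()
print(f"Estimated lift: {estimated_lift:.1f} conversions/day (true effect: 8)")
```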
Help when you need it
From laying out a testing roadmap during onboarding to interpreting your results, Haus' team of customer success pros will make sure you're getting the most out of your experiment and prepare you for the next one. Have we mentioned they're former growth marketers, agency pros, and platform experts? In other words: They know the ropes.
Haus passes the test for customers
As marketing science enthusiasts, it's no surprise we like evidence. So instead of just telling you about Haus' benefits, let's show you how customers have used the platform to improve their measurement, surface valuable insights, and optimize their ad spend.
Ritual rethinks TikTok
Health and wellness brand Ritual wanted to know how incremental TikTok was for their business. To get some answers, they ran a 2-cell test in the Haus app and learned that TikTok drove no lift to their business. Instead of turning off the channel, they made some key changes to their TikTok ads, then retested. These adjustments boosted lift from 0% to 8%, all at a highly efficient CPIA.
Jones Road Beauty finds YouTube's true impact
Cosmetics brand Jones Road Beauty knew YouTube was a major driver of sales, but given the limited attribution capabilities of YouTube viewing on TVs, they wondered if the channel's value was being understated. They ran a 3-cell test in the Haus platform, testing three different levels of YouTube ad spend. It turned out that doubling their YouTube ad spend led to 2.26x more new customer orders: a clear sign they could up their investment in the channel.
FanDuel finds a winning spend level
YouTube was a big factor for online sportsbook FanDuel, so they wanted to make sure they were getting their spend level right on the platform. To find that sweet spot, they set up a 3-cell no-holdout test where a third of their markets had "low" spend levels, another third saw "medium" spend levels, and the final third saw "high" spend levels. They found that there was no difference in lift between the "medium" spend and "high" spend tiers. This meant they could stick to their medium spend knowing they wouldn't be missing out on conversions.
Newton Baby uncovers key halo effects
Infant sleep brand Newton Baby sees a large share of sales on Amazon. To better understand TikTok's incremental impact on Amazon and .com sales, they ran a 2-cell geo experiment for four weeks. They realized that only looking at direct sales from TikTok understated the channel's impact. When accounting for Amazon sales as well, they found they could increase their spend.
For more examples of Haus' impact across industries, check out our library of success stories.
Getting started with incrementality testing
For years, major brands like Netflix and Amazon have had science-backed internal tools that measure incremental impact. But Haus believes these tools should be democratized so that any brand can get started with incrementality.
And no, Haus doesn't just hand over the results and say "you do you." Instead, we help you dive deep into your results and develop a complete view of your impact across all sales channels. This expert-backed approach is how we've helped marketing and finance teams reallocate millions of dollars toward what's working so they can make smarter decisions.
So as the terrain shifts under marketers' feet, you can stick with outdated tools, or you can shift away from attribution and toward a culture of experimentation.