5 Ways to Take Action on Your Incrementality Results

Madeline Dault - Product Marketing Lead @ Haus
March 22, 2023

You've successfully gotten your team bought in on experimentation. You run your first geo-experiment with Haus and get high-confidence results, but NOW WHAT? Applying your experiment results is a whole new can of worms. In this article, we'll walk through five ways you can use your Haus incrementality results and apply your learnings. 

To start, let’s review the metrics available to you at the end of a Haus geo-experiment:

  • Lift Percent: The incremental proportion of the KPI observed in the targeted regions during the analysis period
  • Lift Amount: The incremental number of KPI units observed in the targeted regions during the analysis period
  • CPI (Cost per Incremental): The cost, in dollars, of driving one more incremental unit of this KPI
  • Lift Likelihood: The likelihood that the true incremental lift is greater than 0
  • Incrementality Factor: The estimated incremental conversions divided by the platform-reported conversions
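
To make these definitions concrete, here's a minimal sketch in Python with hypothetical inputs; the formulas follow a common convention rather than Haus's exact methodology, and the numbers are chosen to line up with the TikTok example later in this article.

```python
# Hypothetical experiment read; formulas are illustrative, not Haus's exact methodology.

test_spend = 120_000        # incremental media spend during the test ($)
baseline_kpi = 9_400        # modeled counterfactual conversions (hypothetical)
observed_kpi = 10_000       # conversions observed in treated regions (hypothetical)
platform_reported = 1_000   # conversions the ad platform claimed

lift_amount = observed_kpi - baseline_kpi                 # 600 incremental conversions
lift_percent = lift_amount / baseline_kpi                 # lift relative to the counterfactual
cpi = test_spend / lift_amount                            # $200 cost per incremental
incrementality_factor = lift_amount / platform_reported   # 0.60 IF

print(lift_amount, f"{lift_percent:.1%}", f"${cpi:.0f}", f"{incrementality_factor:.2f}")
```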

Now the question is: how do you use these results in practice? 

1. Create a new proxy metric using your incrementality factor 

At the completion of a test, you receive an Incrementality Factor. This metric tells you, of all the conversions the platform took credit for, how many were actually incremental. 

Let’s say the TikTok platform says you drove 1,000 conversions from your TikTok ads, but your test results show an Incrementality Factor, or “IF,” of 0.6. This means only 600 of the 1,000 conversions were actually incremental. The TikTok platform is over-crediting performance by 400 conversions, or 40%. 

You can apply this incrementality factor to all platform-attributed conversions moving forward to create a more realistic picture of the channel’s impact. While the test results and IF only accurately reflect what happened over the duration of the test, utilizing the IF thereafter is a great way to assess the ongoing performance of a channel and to ensure you’re not relying purely on platform attribution, which tends to over-credit itself. 

Note that your incrementality factor is likely to be different every time you retest – and you should retest to account for changes in customer mood, your own marketing strategies, or seasonality. Whenever you retest and get a new incrementality factor, update the IF to reflect the latest results.

Let’s run through an example. 

The TikTok platform says your TikTok ads spent $120K and drove 1,000 conversions at a $120 CPA. But you ran a geo-experiment measuring the incrementality of your TikTok ads, and the results say that only 600 conversions were incremental, at a cost per incremental acquisition (CPIA) of $200. This means the Incrementality Factor (IF) is 0.60. Heading into March, you decide to continue running TikTok ads and focus experimentation elsewhere, but you want to more accurately reflect TikTok’s impact. Instead of reporting on platform attribution, you create a proxy metric (CPIA) to quantify TikTok’s results. You are tasked with reporting TikTok’s performance by day for the first week of March, so you build the table below:
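
Here's a minimal sketch of how that daily table could be assembled. Only the 0.60 IF carries over from the test results above; the day-by-day spend and platform-attributed conversion figures are hypothetical.

```python
# Apply the experiment's IF to daily platform reporting to get a proxy CPIA.
IF = 0.60  # Incrementality Factor from the TikTok geo-experiment

# Hypothetical daily TikTok reporting: (spend in $, platform-attributed conversions)
daily = {
    "Mar 1": (4_000, 35),
    "Mar 2": (4_200, 33),
    "Mar 3": (3_800, 30),
    "Mar 4": (4_500, 41),
    "Mar 5": (4_100, 36),
    "Mar 6": (3_900, 28),
    "Mar 7": (4_300, 37),
}

print(f"{'Day':<7}{'Spend':>9}{'Platform conv.':>17}{'Est. incremental':>19}{'CPIA':>9}")
for day, (spend, platform_conv) in daily.items():
    est_incremental = platform_conv * IF   # platform conversions scaled by the IF
    cpia = spend / est_incremental         # proxy cost per incremental acquisition
    print(f"{day:<7}{spend:>9,}{platform_conv:>17}{est_incremental:>19.1f}{cpia:>9.2f}")
```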


Utilizing a proxy metric adjusted with an incrementality factor is a great way to calibrate your platform attribution. 

2. Determine new platform CPA thresholds

Beyond calculating a proxy metric, you can also use your incrementality results to simply determine your ideal ROAS or CPA threshold across different channels. 

Using our previous example, let's say your company’s goal is to acquire new customers at a cost under $150. Prior to running this test, you thought your TikTok ads were driving customers at a CPA of $120. Now that you've run your incrementality test, you realize that TikTok was actually driving new customer acquisition at a cost of $200. 

Prior to this understanding, maybe you kept spending in this channel so long as the platform indicated a CPA of $120. You thought there was room to continue scaling the platform while still staying under the $150 goal. Now, with this test under your belt, you have an accurate picture of the channel’s impact. You realize you need to lower the platform CPA and initially think to pull back spend a bit, but what platform CPA would best reflect a true $150 cost per incremental acquisition? Here, we can use our Incrementality Factor once again. Multiply the target CPA by your IF to get the new platform CPA threshold you should optimize against:

($150 CPA goal) × (0.60 IF) = $90 target platform CPA

This means that as long as you are seeing a TikTok-reported CPA of $90 or below, you can be confident that you are optimizing that platform effectively to hit the program-wide CPA goal.
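
The same calculation generalizes across channels, as in the quick sketch below; the Snapchat and Pinterest IFs are hypothetical placeholders you would replace with your own test results.

```python
# Derive the platform-reported CPA each channel can show while still hitting
# the program-wide goal for true incremental acquisitions.
CPA_GOAL = 150.0  # target cost per *incremental* acquisition ($)

incrementality_factors = {
    "TikTok": 0.60,     # from the experiment above
    "Snapchat": 0.45,   # hypothetical
    "Pinterest": 0.80,  # hypothetical
}

for channel, if_value in incrementality_factors.items():
    platform_cpa_threshold = CPA_GOAL * if_value
    print(f"{channel}: optimize to a platform-reported CPA <= ${platform_cpa_threshold:.0f}")
```

Re-run the calculation whenever a retest updates a channel's IF so your thresholds stay current.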

3. Optimize spend levels across channels

At some point, most marketing tactics will experience diminishing marginal returns. Depending on your investment level, you can’t just double your budget and expect your conversions to double proportionally. The next conversion is typically harder to acquire than the last. It’s important to understand where your current investment level lies on the diminishing-returns curve.

Maybe your current investment level sits on the part of the curve where returns haven’t yet flattened out, and you have room to scale spend without sacrificing efficiency. Alternatively, maybe you discover that at your current spend level you are not hitting your goals (i.e., your CPA is too high). Try pulling budget back and retesting to determine whether simply decreasing spend will drive more efficient conversions. 

With Haus, you can easily run diminishing marginal return experiments so you can determine the optimal spend level for each tactic and identify areas where there is room to scale.
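
As a rough illustration (not Haus's methodology), here's a minimal sketch that fits a simple concave response curve to hypothetical spend and incremental-conversion pairs from repeated geo-experiments, then estimates the marginal CPA at different spend levels:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical weekly spend (in $K) and incremental conversions measured
# at each spend level via geo-experiments.
spend = np.array([40.0, 80.0, 120.0, 160.0])
incremental_conversions = np.array([280.0, 480.0, 600.0, 680.0])

def response(s, a, b):
    """Concave power curve: diminishing returns when 0 < b < 1."""
    return a * np.power(s, b)

(a, b), _ = curve_fit(response, spend, incremental_conversions, p0=[10.0, 0.7])

def marginal_cpa(s):
    """Cost (in $K) of the next incremental conversion at spend level s."""
    # d(conversions)/d(spend) = a * b * s^(b-1); marginal CPA is its inverse.
    return 1.0 / (a * b * np.power(s, b - 1.0))

for s in [80, 120, 160, 200]:
    print(f"Spend ${s}K -> marginal CPA ~= ${marginal_cpa(s) * 1000:.0f}")
```

The curve shape and numbers here are illustrative; the point is to find the spend level where the marginal CPA crosses your goal.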

4. Calibrate your MMM

In general, running geo-experiments helps improve your Media Mix Model (MMM) by creating spend variation. But you can also use the results of your experiments to up-weight or down-weight each channel in your model. Since your incrementality reads reflect true business impact, use them to anchor your media mix model: an MMM can take in your incrementality reads and adjust the model accordingly.
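
As one illustration of what that calibration could look like, here's a minimal sketch that turns an experiment read into an informative prior on a channel's coefficient in a Bayesian MMM; the uncertainty figure is hypothetical, and the exact mechanics depend on your MMM.

```python
# Convert a geo-experiment read into a prior on a channel coefficient
# (incremental conversions per dollar of spend). Illustrative only.

test_spend = 120_000            # spend during the test window ($)
incremental_conversions = 600   # lift amount from the experiment
lift_uncertainty = 150          # hypothetical +/- on the lift estimate

prior_mean = incremental_conversions / test_spend   # conversions per dollar
prior_sd = lift_uncertainty / test_spend            # propagated uncertainty

print(f"TikTok coefficient prior: Normal(mean={prior_mean:.5f}, sd={prior_sd:.5f})")
# In a Bayesian MMM, this prior would replace a vague default on the TikTok
# coefficient, anchoring that channel's modeled contribution to the experiment.
```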

5. Determine the best next test to run

Sometimes, getting the answer to one of the questions you’ve mulled over for a long time only sparks more questions. One test’s results can help you determine which test is best to prioritize next. 

Using our previous example, let’s say you just discovered that TikTok is not as incremental as you thought. Maybe this makes you question the incrementality of other smaller social channels like Snapchat or Pinterest. Maybe you wonder if you could improve TikTok’s incrementality with a few campaign adjustments. 

When you partner with Haus, we help you interpret your experiment results and provide insights and recommendations for each so you can effectively implement your learnings and move the needle on your bottom line. Want to see the magic for yourself? Book a demo with Haus today.
