Understanding Demand Gen incrementality testing

Demand Gen incrementality testing measures the real impact of marketing by comparing outcomes between groups that were and weren't exposed to your campaigns. You create test (exposed) and control (unexposed) groups, run your campaign, then measure the difference in conversion rates, sales, or other metrics. The difference represents the true incremental impact of your marketing.
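To make the arithmetic concrete, here is a minimal sketch in Python; the group sizes and conversion counts are invented for illustration:

```python
# Hypothetical numbers: the core incrementality calculation is a
# simple difference between exposed and unexposed groups.
test_conversions, test_users = 1_150, 50_000        # exposed group
control_conversions, control_users = 900, 50_000    # holdout group

test_rate = test_conversions / test_users           # 2.30%
control_rate = control_conversions / control_users  # 1.80%

absolute_lift = test_rate - control_rate            # incremental conversion rate
relative_lift = absolute_lift / control_rate        # lift as a % of baseline

print(f"Absolute lift: {absolute_lift:.2%}")        # 0.50%
print(f"Relative lift: {relative_lift:.1%}")        # 27.8%
```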

People do this testing because marketing data is often misleading. Attribution models typically overstate marketing impact by claiming credit for actions that would have happened anyway. Users who see your ads might already be more likely to convert. Incrementality testing helps separate correlation from causation by revealing what additional value your marketing actually created.

To run these tests, you need to randomly divide your audience, maintain similar conditions across groups except for the marketing exposure, and measure relevant outcomes. Digital platforms often provide tools for this. The results help you understand which marketing activities genuinely drive business growth versus those that merely reach people who would have converted regardless. This means more efficient spending and better ROI.
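As an illustration of the random split, one common pattern is deterministic hashing of a user ID, sketched below in Python. The IDs and the 50/50 split are assumptions for the example, not any platform's built-in mechanism:

```python
import hashlib

def assign_group(user_id: str, test_share: float = 0.5) -> str:
    """Deterministically assign a user to test or control.

    Hashing the ID (rather than calling random()) means a user lands
    in the same group every time they are seen, which keeps exposure
    conditions stable for the duration of the experiment.
    """
    digest = hashlib.md5(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 10_000  # uniform bucket in [0, 9999]
    return "test" if bucket < test_share * 10_000 else "control"

print(assign_group("user-42"))  # stable across runs
print(assign_group("user-43"))
```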

Getting started

Demand Gen incrementality testing can be approached through various methodologies, each designed to isolate the true impact of marketing activities. Holdout tests, where a control group is excluded from specific campaigns, offer a clean comparison but require sufficient audience size. Geo-experiments isolate marketing efforts by region, while time-based tests compare performance before, during, and after campaign periods. More sophisticated approaches include PSA (Public Service Announcement) tests, which replace marketing messages with non-commercial content for the control group, and multi-touch attribution models, which algorithmically assign credit across touchpoints in the customer journey (a modeled complement to these experiments rather than a randomized test in its own right).

A mid-sized software company might implement a geo-based incrementality test when launching a new product feature campaign. They could select ten similar metropolitan areas, randomly designate five as test markets receiving the full campaign (paid search, display ads, and email), and five as control markets receiving only organic touchpoints. By comparing new customer acquisition rates and conversion metrics between these groups over a 90-day period, they could calculate that their campaign drove a 23% lift in conversions with statistical significance, providing clear ROI justification for expanding the campaign nationwide.
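A simplified readout of such a test might look like the following sketch. The per-market conversion rates are invented, and production geo-tests typically use more sophisticated models than a plain Welch's t-test:

```python
from scipy import stats

# Hypothetical per-market conversion rates over the 90-day window:
# five test metros (full campaign) vs. five control metros (organic only).
test_markets    = [0.031, 0.029, 0.033, 0.030, 0.032]
control_markets = [0.026, 0.025, 0.026, 0.025, 0.024]

test_mean = sum(test_markets) / len(test_markets)
control_mean = sum(control_markets) / len(control_markets)
lift = test_mean / control_mean - 1

# Welch's t-test: do the two sets of markets differ beyond noise?
t_stat, p_value = stats.ttest_ind(test_markets, control_markets, equal_var=False)

print(f"Relative lift: {lift:.1%}")  # ~23%
print(f"p-value: {p_value:.4f}")     # well below 0.05 here
```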

Designing effective geo-experiments

Geo-experiments remain the gold standard for incrementality testing in Demand Gen. The best practices below will help marketing teams design and execute more effective geo-tests.

Set up comparable test groups

Creating balanced test groups is crucial for reliable results. Use synthetic controls to establish treatment and control groups with similar characteristics: comparable historical conversion rates, audience demographics, and purchasing behaviors, so that campaign exposure is the only meaningful difference between them. Carefully designed geo-experiments of this kind, such as the YouTube tests discussed below, can reveal that platforms may be underreporting true impact by as much as 70%.
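The synthetic-control idea can be sketched in a few lines: weight the candidate control geos so their combined pre-period trend tracks the test geo, then reuse those weights as the campaign-period counterfactual. All data below is simulated:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical pre-period weekly conversions: rows = 12 weeks,
# columns = 4 candidate control geos.
rng = np.random.default_rng(0)
controls_pre = rng.normal(100, 10, size=(12, 4))
test_pre = 0.5 * controls_pre[:, 0] + 0.5 * controls_pre[:, 2] + rng.normal(0, 1, 12)

# Fit non-negative weights so the weighted control mix tracks the
# test geo during the pre-period.
weights, _ = nnls(controls_pre, test_pre)
print("control weights:", weights.round(2))

# During the campaign, the same weighted mix serves as the
# counterfactual: what the test geo would have done without ads.
controls_campaign = rng.normal(100, 10, size=(8, 4))  # 8 campaign weeks
counterfactual = controls_campaign @ weights

test_campaign = counterfactual * 1.23  # pretend the campaign drove a 23% lift
lift = test_campaign.sum() / counterfactual.sum() - 1
print(f"measured lift: {lift:.0%}")
```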

Determine appropriate test duration

For a YouTube campaign test, consider running experiments for at least 3-4 weeks, with an additional post-treatment observation window of 2 weeks. Data from 190 YouTube incrementality tests showed that iROAS improved by an average of 79% during the post-treatment window. A hypothetical luxury skincare brand might see conversion rates continue to climb for weeks after exposure to its video ads, a latent effect that only a sufficient observation window can capture.
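A toy calculation shows why the post-treatment window matters; the daily incremental revenue and spend figures are made up:

```python
# Hypothetical daily incremental revenue (test minus counterfactual)
# for a 4-week campaign plus a 2-week post-treatment window.
campaign_incremental = [1_200.0] * 28  # during the flight
post_incremental = [900.0] * 14        # latent conversions after it ends
spend = 25_000.0

iroas_campaign_only = sum(campaign_incremental) / spend
iroas_with_post = (sum(campaign_incremental) + sum(post_incremental)) / spend

print(f"iROAS, campaign window only: {iroas_campaign_only:.2f}")  # 1.34
print(f"iROAS, incl. post-treatment: {iroas_with_post:.2f}")      # 1.85
```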

Track omnichannel impact

Don't limit measurement to direct site conversions. In the YouTube tests analyzed, brands experienced an average additional sales lift of 99% beyond DTC impact when measuring across channels like retail and Amazon. A hypothetical athletic apparel company might discover their YouTube campaigns drive significant in-store purchases that would be completely missed by platform attribution, providing crucial data for holistic budget allocation decisions.
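Measuring that halo is largely a matter of aggregating incremental conversions per channel from the geo-test readout; the channel names and counts below are hypothetical:

```python
# Hypothetical incremental conversions by channel from a geo-test readout.
incremental_by_channel = {
    "dtc_site": 1_000,  # what platform attribution can see
    "retail":     610,  # halo picked up only by the geo-test
    "amazon":     380,
}

dtc = incremental_by_channel["dtc_site"]
halo = sum(v for k, v in incremental_by_channel.items() if k != "dtc_site")

print(f"Halo beyond DTC: {halo / dtc:.0%}")  # 99% extra lift off-platform
```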

Develop clear action plans

Before running your test, determine what actions you'll take based on different outcome scenarios. If a hypothetical meal delivery service discovers their Demand Gen campaigns have an incrementality factor of 3.4x compared to platform reporting, they might immediately recalibrate their in-platform ROAS target from 2.0 to roughly 0.6 (2.0 ÷ 3.4), so that the platform-reported figure still corresponds to the same true incremental value. Having clear decision thresholds established before seeing results prevents analysis paralysis and ensures insights translate to action.
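The recalibration itself is one line of arithmetic, sketched here with the hypothetical numbers above:

```python
def platform_roas_target(true_iroas_target: float, incrementality_factor: float) -> float:
    """Translate a true incremental ROAS goal into the equivalent
    platform-reported ROAS target.

    If the experiment shows real impact is `incrementality_factor`
    times what the platform reports, the in-platform target can be
    divided by that factor.
    """
    return true_iroas_target / incrementality_factor

print(platform_roas_target(2.0, 3.4))  # ~0.59, i.e. the "0.6" target above
```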
