Understanding Google Ads incrementality testing

Google Ads incrementality testing measures the true causal impact of your ads on business results. It shows how many conversions would not have happened without ads by comparing test groups who see your ads with control groups who don't. The difference between these groups represents your advertising's actual incremental value.

People run incrementality tests to avoid wasting money on customers who would have purchased anyway. Without such testing, you're flying blind—conventional metrics like ROAS often count conversions that would have occurred naturally. Testing reveals which ad campaigns genuinely create new business versus merely taking credit for existing demand.

The process typically involves creating matched experiment and control groups, running ads only to the experiment group, measuring the difference in outcomes, and calculating the incremental lift percentage. This tells you what share of your conversions is truly caused by ads. Armed with this knowledge, you can reallocate budgets to genuinely effective campaigns and stop funding ineffective ones.
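
To make the arithmetic concrete, here is a minimal sketch of the lift calculation in Python. The conversion counts and group sizes are hypothetical, and a real test also needs a statistical significance check.

```python
# Minimal sketch of the core lift calculation; all figures are hypothetical.
def incremental_lift(test_conversions, test_users, control_conversions, control_users):
    test_rate = test_conversions / test_users
    control_rate = control_conversions / control_users
    lift_points = (test_rate - control_rate) * 100              # absolute lift, in percentage points
    incremental_share = (test_rate - control_rate) / test_rate  # share of test conversions caused by ads
    return lift_points, incremental_share

# 5.0% conversion rate with ads vs. 4.0% without:
lift, share = incremental_lift(3_000, 60_000, 2_400, 60_000)
print(f"{lift:.1f} pp lift; {share:.0%} of exposed-group conversions are incremental")
```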

Getting started

Google Ads incrementality testing can be conducted through various methodologies including geo-based experiments, PSA (Public Service Announcement) tests, and holdout groups. Geo-based testing involves selecting comparable geographic regions and applying different advertising intensities, while PSA tests replace actual ads with non-commercial messages in control groups. Alternatively, advertisers can implement randomized holdout groups where a percentage of the target audience is excluded from seeing ads, allowing for direct comparison of conversion rates between exposed and unexposed segments.

For example, a clothing retailer might implement a 90/10 holdout test for their summer collection campaign, where 90% of their target audience receives the ads while 10% is excluded. After running the campaign for six weeks, they might find that the exposed group had a 5.2% conversion rate compared to 3.8% in the unexposed group, indicating a true incremental lift of 1.4 percentage points. This would demonstrate that 27% of their conversions would not have occurred without the advertising, helping justify their ad spend and refine their attribution modeling.
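
The numbers above can be sanity-checked with a standard two-proportion z-test. The group sizes below (90,000 exposed, 10,000 held out) are assumptions chosen to match the 90/10 split and the stated conversion rates.

```python
# Two-proportion z-test on the retailer example; audience sizes are assumed.
from math import sqrt, erfc

def two_proportion_test(x_test, n_test, x_ctrl, n_ctrl):
    p_test, p_ctrl = x_test / n_test, x_ctrl / n_ctrl
    pooled = (x_test + x_ctrl) / (n_test + n_ctrl)
    se = sqrt(pooled * (1 - pooled) * (1 / n_test + 1 / n_ctrl))
    z = (p_test - p_ctrl) / se
    p_value = erfc(abs(z) / sqrt(2))                 # two-sided p-value
    incremental_share = (p_test - p_ctrl) / p_test   # ~27% in this example
    return z, p_value, incremental_share

z, p, share = two_proportion_test(4_680, 90_000, 380, 10_000)  # 5.2% vs. 3.8%
print(f"z = {z:.1f}, p = {p:.1e}, incremental share = {share:.0%}")
```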

Geo-experiment design best practices

When setting up Google Ads incrementality testing, proper geo-experiment design is critical. Start by identifying similar geographic regions that can be paired based on historical performance data. For example, if you're testing a new Video Action Campaign (VAC), you might pair Dallas with Houston based on similar past conversion rates and customer demographics. Create test and control groups with equal revenue potential, ensuring they're geographically isolated to prevent spillover effects. Maintain a minimum test duration of 3-4 weeks, plus a 2-week post-treatment observation window to capture YouTube's full incremental impact; measured lift typically improves by 79% during that post-treatment period.
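
One simple way to surface candidate pairs is to rank geos by how closely their historical conversion series track each other. The sketch below assumes a weekly conversions table and uses plain correlation; production matching tools are typically more robust than this.

```python
# Rank candidate geo pairs by correlation of historical weekly conversions.
# The data layout (weeks as rows, geos as columns) is an illustrative assumption.
import pandas as pd

def rank_geo_pairs(history: pd.DataFrame) -> list[tuple[str, str, float]]:
    corr = history.corr()
    geos = list(history.columns)
    pairs = [(a, b, corr.loc[a, b]) for i, a in enumerate(geos) for b in geos[i + 1:]]
    return sorted(pairs, key=lambda p: p[2], reverse=True)

weekly = pd.DataFrame({
    "Dallas":  [410, 395, 430, 455, 420, 440],
    "Houston": [405, 390, 425, 450, 415, 445],
    "Phoenix": [310, 360, 300, 340, 280, 330],
})
print(rank_geo_pairs(weekly)[0])  # Dallas/Houston should rank as the closest pair
```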

Data collection and measurement framework

Establish a comprehensive measurement framework before launching your test to accurately capture incremental effects. Beyond Google Ads reporting, incorporate first-party sales data across all channels, including your website, Amazon, and retail locations. For instance, a DTC skincare brand testing YouTube might track not only direct website sales but also Amazon orders and in-store purchases at Sephora locations within test regions. This approach reveals YouTube's true impact, which is typically 3.4x higher than Google Ads reports, with an additional 99% lift from omnichannel halo effects. Track both new and returning customer metrics separately, as YouTube tends to drive 85% higher lift for new customer acquisition.
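
A lightweight way to assemble that framework is a single order-level table spanning every channel, rolled up by geo and customer type. The schema below is an assumption for illustration, not any specific platform's export format.

```python
# Roll omnichannel orders up by geo and new-vs-returning customer.
# The schema (channel, geo, revenue, new_customer) is an illustrative assumption.
import pandas as pd

orders = pd.DataFrame(
    [
        ("website", "Dallas",  120.0, True),
        ("amazon",  "Dallas",   80.0, False),
        ("retail",  "Houston",  95.0, True),
        ("website", "Houston",  60.0, False),
    ],
    columns=["channel", "geo", "revenue", "new_customer"],
)

# Total revenue per geo, split new vs. returning, so test and control regions
# are compared on all sales rather than website sales alone.
summary = orders.groupby(["geo", "new_customer"])["revenue"].sum().unstack(fill_value=0.0)
print(summary)
```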

Campaign configuration and budget allocation

Configure your test campaigns carefully to ensure clean experimental conditions. Maintain consistent campaign settings between test and control regions, varying only the independent variable you're testing. For example, when testing whether Demand Gen outperforms VAC, keep all other parameters identical: creative assets, daily budgets, audience targeting. Set aside sufficient budget to achieve statistical significance; underfunded tests produce inconclusive results. A retail brand with $100,000 monthly YouTube spend might allocate $25,000 specifically for a four-week incrementality test, ensuring equal daily spend across test regions while maintaining standard campaigns in control regions.
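
Before committing the budget, a rough power calculation helps confirm the chosen regions can supply enough traffic. The sketch below uses the standard two-proportion sample-size approximation; the baseline rate and target lift are illustrative assumptions.

```python
# Approximate users needed per group to detect a given lift (80% power, 5% alpha).
# Baseline rate and target lift are illustrative assumptions.
from math import ceil

def users_needed_per_group(baseline_rate, lift_pp, z_alpha=1.96, z_power=0.84):
    p1 = baseline_rate
    p2 = baseline_rate + lift_pp / 100
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2)

# Detecting a 1.4 pp lift over a 3.8% baseline:
print(users_needed_per_group(0.038, 1.4))  # roughly 3,400 users in each group
```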

Analysis and implementation strategy

Analyze test results with a focus on incremental ROAS across both immediate and delayed conversion windows. Don't rush to judgment based on early performance; YouTube's impact typically unfolds over time, with significant improvement during the post-treatment window. A hypothetical fitness equipment brand testing YouTube might see a modest 1.2x ROAS during the active campaign period, only to find it grows to 2.1x ROAS once the post-treatment window is included. Use test insights to recalibrate attribution models, adjusting channel investment based on true incremental value rather than platform-reported metrics. For multi-channel retailers, factor in the omnichannel halo effect when calculating YouTube's total ROI, potentially justifying higher investment despite seemingly modest direct response metrics.
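
In code form, the roll-up amounts to recomputing incremental ROAS with and without the post-treatment window. The revenue figures below are hypothetical, and the control revenue is assumed to be already scaled to the test group's size.

```python
# Incremental ROAS = (test revenue - scaled control revenue) / test spend.
# All figures are hypothetical.
def incremental_roas(test_revenue, control_revenue, spend):
    return (test_revenue - control_revenue) / spend

spend = 25_000
campaign_only = incremental_roas(80_000, 50_000, spend)      # 1.2x during the flight
with_post_window = incremental_roas(115_000, 62_500, spend)  # 2.1x including post-treatment
print(f"Campaign-only iROAS: {campaign_only:.1f}x; with post-treatment window: {with_post_window:.1f}x")
```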
