If an ad is shown on your TV but no one is home to see it, does it make an incremental impact?
The answer, it turns out, depends on how it's being measured.
In the first two installments of our incrementality school series, we discussed the concept of incrementality and the kinds of questions it can help answer.
We learned that incrementality is "just the non-scientist-friendly way of saying causality" and that "marketing without knowing your incremental returns is just lighting money on fire".
Yikes. Given that destroying currency is a crime, we'd say it's worth figuring out what's incremental and what's not, lest we waste away in prison over an ill-measured sock ad.
Attribution as an "incomplete and inaccurate solution"
Before we dive into the wonderful world of randomized controlled trials, let's take a look at the most common tools marketers are using today to drive media buying decisions: Google Analytics and platform reporting.
These tools are cheap and easy to implement, but the convenience has a hidden cost: Google Analytics supports the narrative that paid search is the most effective advertising channel, while platform reporting is known for suggesting that advertising is somehow collectively driving more sales than appear on the business's P&L.
These tools belong to the "attribution" class of measurement products, implying they can appropriately attribute credit to the ads responsible for driving sales. Yet such products are largely unthinking and rules-based, relying on tracking digital ad interactions, which are then linked to web conversions via cookies (akin to invisible stalkers that live in your web browser, waiting and watching from the moment you see an ad until you buy).
Attribution products don't care if the customer is metaphorically "home" when the ad is served: if a cookie is dropped and still stored in the browser when a customer converts, the ad is credited, regardless of whether the person who converted looked at the ad, let alone was influenced by it.
"Ad served. User convert. Ad get credit." (Best read with the voice of Hulk in mind.)
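To make Hulk's rules-based logic concrete, here's a minimal sketch of cookie-based last-touch attribution in Python. The channel names and 30-day lookback window are invented for illustration; this is our own caricature, not any vendor's actual code:

```python
from datetime import datetime, timedelta

COOKIE_WINDOW = timedelta(days=30)  # hypothetical lookback window; varies by platform

def attribute(conversion_time, ad_cookies):
    """Rules-based last-touch attribution: credit the most recent ad whose
    cookie is still alive at conversion time, with no check on whether
    anyone actually watched (or was influenced by) the ad."""
    live = [c for c in ad_cookies
            if conversion_time - c["served_at"] <= COOKIE_WINDOW]
    if not live:
        return "organic"
    # Ad served. User convert. Ad get credit.
    return max(live, key=lambda c: c["served_at"])["channel"]

# Example: both cookies are still in the browser at purchase time,
# so the most recent touch takes all the credit.
cookies = [
    {"channel": "ctv", "served_at": datetime(2024, 5, 1)},
    {"channel": "paid_search", "served_at": datetime(2024, 5, 10)},
]
print(attribute(datetime(2024, 5, 12), cookies))  # -> paid_search
```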
Multi-touch attribution (MTA) software that intelligently splits credit across all advertising touchpoints was once hailed as the cure-all for these limitations, but as Feliks Malts, a Solutions Engineer at Haus, puts it: "The rise of consumer privacy policies and the loss of data availability that followed has rendered MTA an incomplete and inaccurate solution going forward."
Not only are MTAs largely blind to ad interactions that don't result in a click (e.g., impressions), but they also lack the ability to connect these touchpoints to sales that happen off-site (e.g., on Amazon or in-store). (Hulk has been blindfolded by regulators and stripped of his favorite kind of cookies; it's not a good environment for accurate smashing… err, recognition of causal impact.)
In short, while these tools check an optimize-day-to-day-performance box, they don't tell a story rooted in causality; in other words, they can't tell you whether a campaign is incremental to your business.
Media modeling shortcomings
If the basic logic of attribution is too pedestrian, wait for this one: a traditional (aka not causal) media mix model (MMM) is filled to the brim with opaque mathematics that'll make your head spin.
Although it's a powerful tool for legacy consumer packaged goods (CPG) brands with global distribution and decades of historical data, a traditional MMM applied to a modern brand is all too often a CPG-shaped peg in a DTC-shaped hole.
While an MTA identifies relationships based on clicks and conversions, a traditional MMM doesn't like to get its hands dirty dealing with pixels and messy user-level data. With just two variables, spend and sales, one can build a traditional MMM using linear regression. By identifying how sales respond to shifts in spend on a channel over time, the model aims to predict how increasing budget on a channel will impact sales.
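To see how simple that core recipe really is, here's a hedged sketch in Python: an ordinary least squares fit of weekly sales on channel spend, using synthetic data we invented for illustration (a real MMM would layer on adstock, saturation curves, seasonality, and more):

```python
import numpy as np

# Synthetic weekly data for two ad channels, invented for illustration.
rng = np.random.default_rng(0)
weeks = 104
spend = rng.uniform(1_000, 10_000, size=(weeks, 2))   # [ctv, paid_search]
sales = 50_000 + spend @ np.array([1.2, 0.4]) + rng.normal(0, 5_000, weeks)

# Ordinary least squares: sales ~ intercept + spend per channel.
X = np.column_stack([np.ones(weeks), spend])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(dict(zip(["intercept", "ctv", "paid_search"], coef.round(2))))
# The coefficients summarize how sales move with spend -- a correlation.
# Nothing in this math says whether spend drove sales or vice versa.
```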
The trouble is that linear regression cannot prove causation on its own. It can illustrate correlations, but it can't tell you which way the arrow of causality points.
For example, you could decide to ratchet up your connected TV (CTV) spend every time your business achieves a new record-high sales figure. Unchecked, a traditional MMM would pretty soon tell you that if you want to increase sales, you should increase your spend on CTV, even though the chain of causality in this case was exactly the opposite.
Statistical hazards like this create a lot of mental pretzels for marketers trying to apply these models to their business. Absent a deep understanding of the math behind a traditional MMM, the best we can do is gut-check what it's recommending: if it agrees with our expectations, we accept it; if it disagrees, we reject it. In either case, we haven't learned anything new. However, pair MMMs with experiments and a glimmer of promise emerges. (Pssst. Learn more about our forthcoming Causal MMM here.)
Just test it
Many of us have been burned by negative experiences with attribution and traditional MMM solutions, but they're not entirely hopeless. As Feliks will tell you, "These solutions are only as accurate as the incrementality tests calibrating them."
While both attribution and traditional MMM tools are inherently biased and flawed (the former for its love of clicks, the latter for its reliance on correlation), they can be saved and harnessed for good. All they need is the assistance of an old-fashioned experiment. Or, in the case of Haus, a new-fashioned one.
An incrementality test is just shorthand for a randomized controlled trial, the gold standard for evaluating the effectiveness of medical interventions. It's the simplest and most accurate way to understand the cause-and-effect relationship between an intervention and an outcome.
In a geo experiment, the primary type of experiment we run here at Haus, we randomly assign markets across a specific country to receive the treatment (advertising on a specific channel) or the placebo (no advertising). By analyzing the lift in sales for the markets receiving ads, we're able to determine their true, causal impact.
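For intuition, here's a toy sketch of those mechanics in Python, with made-up market names and simulated sales figures. Consider it a cartoon of the idea; an actual geo analysis is considerably more sophisticated than a simple difference in means:

```python
import random
import statistics

# Toy geo experiment: market names and sales numbers are invented.
markets = ["austin", "boise", "denver", "omaha", "reno", "tampa", "tulsa", "waco"]
random.seed(7)
random.shuffle(markets)
treatment, control = markets[:4], markets[4:]  # randomize: ads on vs. ads off

# Run the campaign only in treatment markets, then observe sales.
# We simulate an underlying lift here so the example has an answer.
sales = {m: random.gauss(100_000, 8_000) for m in control}
sales |= {m: random.gauss(112_000, 8_000) for m in treatment}

lift = (statistics.mean(sales[m] for m in treatment)
        - statistics.mean(sales[m] for m in control))
print(f"Estimated incremental lift: {lift:,.0f} per market")
# Random assignment is what lets us read this difference causally;
# a real analysis would also quantify uncertainty (e.g., confidence intervals).
```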
In other words: Unless the TV ad playing in an empty home set off an inexplicable chain reaction that made someone want to buy your product, an incrementality test wouldn't attribute any credit to it, which is just how it should be.
And that is the beauty of experimentation. For more on the types of businesses, brands, and teams that are ideal candidates for incrementality testing, stay tuned for the next installment of Incrementality School.