In the second session of Open Haus, Zach Epstein and Phil Erickson, Principal Economist at Haus, discuss the different methods of marketing measurement, including traditional MMM, MTA, and incrementality testing. They explore the strengths and limitations of each approach, the importance of causality in measurement, and how these methods can work together to provide a comprehensive view of marketing effectiveness.
Data is changing, and measurement needs to catch up
The world of marketing measurement has changed dramatically in just a few years.
The loss of third-party cookies, iOS privacy updates, and rising consumer concerns around tracking have all weakened traditional methods.
Playbooks that once felt tried-and-true now look shaky. At the same time, marketing spend is bigger and more complex than ever: brands are investing across dozens of platforms, channels, and markets. Yet many of the measurement methods still in use show correlation, not causation.
As Phil put it: “Economists obsess over causality. That’s our job: to understand how actions actually cause outcomes.”
Attribution: fast signals, not causal truth
Attribution was the first modern measurement framework. It started with single-touch models, which are as simple as they sound:
- First-touch attribution gives credit to the first ad a customer interacted with.
- Last-touch attribution (still the most common) gives all the credit to the final click or view before purchase.
These models are easy to understand and quick to implement. If the last thing a customer clicked was your search ad, that ad gets the win. Marketers liked this clarity, even if it oversimplified reality.
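To make the mechanics concrete, here is a minimal sketch, in Python, of how last-touch and first-touch credit assignment work. The journeys and channel names are made up for illustration.

```python
from collections import Counter

# Hypothetical touchpoint journeys: each is an ordered list of channels
# a customer interacted with before converting (illustrative data).
journeys = [
    ["display", "email", "paid_search"],
    ["social", "paid_search"],
    ["email", "display", "email"],
]

def last_touch_credit(journeys):
    """Give 100% of the conversion credit to the final touchpoint."""
    credit = Counter()
    for journey in journeys:
        credit[journey[-1]] += 1
    return credit

def first_touch_credit(journeys):
    """Give 100% of the conversion credit to the first touchpoint."""
    credit = Counter()
    for journey in journeys:
        credit[journey[0]] += 1
    return credit

print(last_touch_credit(journeys))   # Counter({'paid_search': 2, 'email': 1})
print(first_touch_credit(journeys))  # Counter({'display': 1, 'social': 1, 'email': 1})
```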
As Haus CEO Zach Epstein put it: “Attribution is useful for quick signals but it doesn’t tell you if marketing spend actually changed behavior.”
Multi-touch attribution tries to add some nuance
Multi-touch attribution (MTA) tried to solve the oversimplification problem. Rather than focusing on a single touchpoint, it maps the entire sequence of interactions leading up to conversion.
Linear MTAs give each touchpoint equal weight, while machine learning–driven MTAs reweight them based on observed patterns in the data.
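As a rough sketch of the linear version, the example below splits each conversion's credit evenly across every touchpoint in the journey; an ML-driven model would replace those equal shares with learned weights. The data is illustrative.

```python
from collections import defaultdict

def linear_mta_credit(journeys):
    """Linear multi-touch attribution: split each conversion's credit
    equally across every touchpoint in the journey."""
    credit = defaultdict(float)
    for journey in journeys:
        share = 1.0 / len(journey)
        for channel in journey:
            credit[channel] += share
    return dict(credit)

journeys = [
    ["display", "email", "paid_search"],  # each touch gets 1/3 of the credit
    ["social", "paid_search"],            # each touch gets 1/2
]
print(linear_mta_credit(journeys))
# {'display': 0.33, 'email': 0.33, 'paid_search': 0.83, 'social': 0.5} (approx.)
```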
But one big limitation remains: attribution is not causal. Just because a user clicked a branded search ad before converting doesn’t mean the ad caused the purchase. They might have been on their way to buy regardless.
Privacy changes have made attribution even trickier
Early models relied heavily on third-party cookies to track users across sites. With those disappearing, modern MTAs rely on a patchwork of first-party data, identity resolution, and statistical modeling.
That makes the models more complex but also introduces more noise. At Haus, we see attribution as directional—good for short-term signals, but never the final word.
MMM: broad view, correlational core
Marketing Mix Modeling (MMM) takes a broader view. Rather than following individual clicks, it looks at the relationship between channel spend and outcomes over time. The goal is to understand how all your channels interact, and how changes in budget allocation affect sales.
MMM is handy for understanding diminishing returns but, just like multi-touch attribution, it’s fundamentally correlational.
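For intuition, here is a toy sketch of an MMM-style regression on synthetic data: sales are regressed on a fixed diminishing-returns transform of each channel’s spend. Real MMMs also model adstock, seasonality, and priors, and typically estimate the saturation curves themselves, so treat this purely as an illustration of the correlational core.

```python
import numpy as np

# Illustrative weekly data: spend per channel and total sales (not real figures).
rng = np.random.default_rng(0)
weeks = 104
spend = {
    "search": rng.uniform(10_000, 50_000, weeks),
    "social": rng.uniform(5_000, 30_000, weeks),
}

def saturate(x, half_point):
    """Simple diminishing-returns curve: response grows quickly at low spend
    and flattens as spend rises (a Hill-type transform, assumed fixed here)."""
    return x / (x + half_point)

# Design matrix: intercept plus a saturated term per channel.
X = np.column_stack([
    np.ones(weeks),
    saturate(spend["search"], 30_000),
    saturate(spend["social"], 15_000),
])

# Synthetic sales series so the example runs end to end.
true_coefs = np.array([200_000, 150_000, 60_000])
sales = X @ true_coefs + rng.normal(0, 10_000, weeks)

# Ordinary least squares fit: the "MMM" coefficients.
coefs, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(dict(zip(["base", "search", "social"], coefs.round(0))))
```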
Is prediction really possible without causality?
An MMM can predict sales with impressive accuracy, yet that doesn’t mean it has captured true causation. Different models can “fit” the data while disagreeing completely on which channels matter most.
“Two models can hit the same forecast with opposite coefficients,” Phil said. “Prediction isn’t the same as proof.” Without experiments to anchor them, MMMs risk becoming exercises in model preference rather than scientific truth.
Our team does think MMM is valuable, but it’s too flawed to trust without experiments.
That’s why we’re building Causal MMM, where incrementality results serve as the ground truth.
Incrementality: experiments for causal lift
Incrementality testing asks a basic but crucial question: Did this marketing tactic cause the outcome, or would it have happened anyway?
By splitting audiences into treatment and holdout groups, marketers can isolate the true effect of a campaign. If sales rise more in the treatment group, the difference is the incremental lift.
Unlike attribution or MMM, incrementality provides a causal estimate: it shows how outcomes actually change when spend changes.
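A stripped-down sketch of the lift calculation, assuming market-level sales for a treatment group and a holdout group (the numbers are illustrative):

```python
import numpy as np

def incremental_lift(treatment_sales, control_sales):
    """Estimate incremental lift as the difference in mean outcomes between
    the treatment (ads on) and holdout (ads off) groups."""
    treatment_sales = np.asarray(treatment_sales, dtype=float)
    control_sales = np.asarray(control_sales, dtype=float)
    lift = treatment_sales.mean() - control_sales.mean()
    # A rough standard error for the difference in means (assumes independent groups).
    se = np.sqrt(treatment_sales.var(ddof=1) / len(treatment_sales)
                 + control_sales.var(ddof=1) / len(control_sales))
    return lift, se

# Illustrative per-market sales during the test window (not real data).
treatment = [120, 135, 128, 142, 150]
holdout = [118, 121, 115, 125, 130]
lift, se = incremental_lift(treatment, holdout)
print(f"incremental lift per market: {lift:.1f} (± {1.96 * se:.1f} at ~95%)")
```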
Incremental experiments can go way beyond basic A/B setups:
- Stratified sampling ensures treatment and control groups are well matched.
- Synthetic controls create weighted composites that more closely mirror the test group.
These techniques make results more precise and more reliable.
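As a simplified illustration of the synthetic-control idea, the sketch below fits non-negative weights on control markets so their weighted combination tracks the treated market in the pre-period, then uses that composite as the counterfactual during the test. The full method constrains the weights to sum to one inside the optimization; here they are simply renormalized after the fit, and the data is made up.

```python
import numpy as np
from scipy.optimize import nnls

def synthetic_control_weights(pre_treated, pre_controls):
    """Find non-negative weights on control markets so their weighted
    combination tracks the treated market's pre-period sales.
    Simplification: renormalize to sum to one instead of constraining
    the optimization directly."""
    weights, _ = nnls(pre_controls, pre_treated)
    return weights / weights.sum()

# Illustrative pre-period weekly sales (rows = weeks, columns = control markets).
pre_controls = np.array([
    [100.0,  80.0, 120.0],
    [110.0,  85.0, 118.0],
    [105.0,  90.0, 125.0],
    [115.0,  88.0, 130.0],
])
pre_treated = np.array([104.0, 109.0, 108.0, 114.0])

w = synthetic_control_weights(pre_treated, pre_controls)

# During the test, the weighted control composite serves as the counterfactual.
test_controls = np.array([[112.0, 92.0, 128.0],
                          [118.0, 95.0, 133.0]])
counterfactual = test_controls @ w
print("weights:", w.round(2), "counterfactual:", counterfactual.round(1))
```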
The clarity is powerful, but there are trade-offs
Experiments are costly to run because they require withholding spend in some markets.
They also provide only a snapshot—one campaign, one timeframe—not a continuous view across every channel.
That’s why Haus has focused on making incrementality faster, more automated, and easier to interpret. Our platform turns what used to take months of planning into a matter of weeks. And our economists work directly with brands to ensure test design fits business goals.
How they fit together: incrementality vs MMM vs attribution
Each approach has strengths and weaknesses:
- Attribution is quick and frequent, but not causal.
- MMM is broad and good for planning, but correlation-based.
- Incrementality is causal and rigorous, but slower and more resource-intensive.
Smaller brands with under $100,000 in annual spend should keep things simple. “If you’re under ~$100k a year, just use platform tools,” Phil said. At that stage, consistency matters more than complex modeling.
As spend grows, the approaches work best in combination:
- Attribution provides fast feedback.
- MMM helps optimize across channels and budgets.
- Experiments validate both and provide the causal ground truth.
The key is calibration. Rather than “triangulating” by picking the model you like, align MMM and attribution results with experiments.
Incrementality tests can act as filters, eliminating MMM variants that don’t match reality, or serve as priors for Bayesian models. Any move toward data-driven judgment, especially one rooted in causal estimates, makes your measurement better.
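One simple way to picture calibration: treat the experiment’s lift estimate and the MMM’s coefficient as two noisy reads on the same quantity and combine them by precision. The sketch below is a toy normal-normal update, not Haus’s Causal MMM methodology, and the numbers are illustrative.

```python
import numpy as np

def calibrate_with_experiment(mmm_estimate, mmm_se, test_estimate, test_se):
    """Shrink an MMM channel coefficient toward an incrementality test result
    using a precision-weighted (normal-normal) update. A toy stand-in for
    using experiments as priors in a full Bayesian MMM."""
    w_mmm = 1.0 / mmm_se ** 2
    w_test = 1.0 / test_se ** 2
    posterior_mean = (w_mmm * mmm_estimate + w_test * test_estimate) / (w_mmm + w_test)
    posterior_se = np.sqrt(1.0 / (w_mmm + w_test))
    return posterior_mean, posterior_se

# Illustrative numbers: the MMM says $3.00 of incremental revenue per dollar,
# the geo experiment says $1.80 with a tighter confidence interval.
mean, se = calibrate_with_experiment(3.0, 1.0, 1.8, 0.4)
print(f"calibrated return per dollar: {mean:.2f} ± {1.96 * se:.2f}")
```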
Consistency matters as much as accuracy. A horizontal framework that applies the same measurement standards across channels is more useful than a patchwork of conflicting methodologies.
The bottom line
Used together, these three approaches form a stronger measurement stack. Attribution keeps you agile, MMM helps with planning, and incrementality ensures your models are anchored in truth.
In a privacy-first world, causality isn’t just a technical detail—it’s the foundation for smart, efficient growth. At Haus, we’re building the future of measurement: Causal MMM, rooted in experiments and powered by economists.
It’s how brands can stop guessing, start trusting, and finally measure what truly matters.
FAQs about incrementality vs attribution
What is the core difference between marketing attribution and incrementality?
Marketing attribution assigns credit to various touchpoints in the customer journey leading to a conversion. It shows where a conversion originated. Incrementality measures the true causal impact or uplift of marketing efforts. Incrementality determines if specific activities drove new conversions that would not have occurred otherwise.
In other words: attribution provides a historical view of conversion paths, while incrementality focuses on the net new value created by marketing spend.
What are the core differences between MMM and incrementality?
As with attribution, the core difference between MMM (Media Mix Modeling) and incrementality is the difference between correlation and causality. MMM uses historical, aggregate data to estimate how different channels correlate with outcomes. It provides a broad, strategic view of marketing impact. Incrementality uses experiments (like treatment vs. control groups) to directly measure causal lift. MMM is useful for planning, while incrementality proves whether spend truly changed behavior.
What are the main challenges associated with last-touch attribution?
Last-touch attribution assigns 100% of the conversion credit to the final marketing touchpoint a customer interacts with before converting. This method often overvalues channels that are close to the point of conversion, such as paid search or retargeting. At the same time, it ignores the crucial role of earlier touchpoints in the customer journey.
Because of those oversights, last-touch attribution can lead to misinformed budget allocation and an incomplete understanding of true marketing effectiveness. Simply put: it doesn't account for the full customer experience or organic conversions.
How have recent changes in user-level tracking impacted marketing measurement?
The deprecation of user-level tracking, driven by new privacy regulations and browser changes, has significantly limited marketers' ability to attribute conversions to individual users across platforms.Â
As a result, methods like incrementality testing and Media Mix Modeling have become increasingly vital for understanding the overall impact of marketing campaigns and optimizing spend.
Which is better for smaller companies, attribution or incrementality?
For brands spending under ~$100,000 annually, attribution or even built-in platform reporting is usually sufficient—consistency matters more than sophistication at that scale.Â
Incrementality testing is powerful but resource-intensive, so it becomes more valuable as spend grows and budgets diversify.