LevelUP Your Research

October 10, 2023

How Can You Tell If Your Advertising is Working?

Learn how to increase measurement sensitivity with five principles of ad effectiveness measurement. Don't let flawed approaches hinder advertising success.


by Joel Rubinson

President at Rubinson Partners, Inc.

It is not uncommon for marketers to get reports that their advertising campaign showed no benefit. Then they ask, "Why did my advertising fail?" But maybe it didn't fail...maybe the measurement approach did.

Here are five principles of ad effectiveness measurement that can help increase measurement sensitivity:

1. Behavioral lift is more likely to be observed than changes in survey-based brand measures. If you track sales movements or conversions at an ID level, you are likely to see more movement than in brand or ad-recognition survey measures.

Also, you are likely to be working with behavioral data at a much larger scale than a survey (often millions of IDs exposed to advertising), so even small movements reach statistical significance.
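The scale point can be made concrete with a two-proportion z-test on exposed vs. unexposed conversion rates. This is a minimal sketch with invented conversion counts (the helper name and the specific numbers are mine, not from the article): the same underlying rates that are highly significant across millions of IDs are statistically invisible in a survey-sized sample.

```python
import math

def two_prop_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a - p_b, z, p_value

# behavioral scale: 2M exposed vs. 2M unexposed IDs (2.10% vs. 2.05%)
lift_big, z_big, p_big = two_prop_ztest(42_000, 2_000_000, 41_000, 2_000_000)

# the same rates in 1,000-person survey cells (21 vs. 20 conversions)
lift_small, z_small, p_small = two_prop_ztest(21, 1_000, 20, 1_000)
```

With identical underlying rates, the behavioral-scale comparison is significant well below the 1% level, while the survey-scale comparison is nowhere close.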

2. Individual-level data reveals more than aggregated data. Building on the last point, the most sensitive methods for ad effectiveness set up a test of exposed vs. unexposed participants, or use MTA (multi-touch attribution), and look for differences in sales or other behavioral metrics at the tactic and creative level.

Individual-level data also shows more pattern variability, which busts up the multicollinearity problems that plague marketing mix modeling.
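One way to see this: channel exposures that are weakly (here, negatively) correlated at the user level can look almost perfectly correlated once rolled up to daily totals, which is the pattern that cripples aggregate models. A toy illustration with made-up exposure flags and daily totals:

```python
import math

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# user-level exposure flags: video and display rarely co-occur per user
video_user   = [1, 0, 1, 0, 1, 0]
display_user = [0, 1, 0, 1, 1, 0]

# daily totals of the same channels move together (shared DSP, shared inventory)
video_day   = [10, 20, 30]
display_day = [11, 19, 31]

r_user = pearson(video_user, display_user)  # weak / negative
r_day = pearson(video_day, display_day)     # near +1
```

The user-level variability is exactly what an individual-level model can exploit and an aggregate model cannot.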

3. Attitudinal and behavioral data don't tell you the same thing. If the campaign is primarily intended to improve brand perceptions (i.e., brand/upper-funnel) rather than to generate more sales, you need a different measurement strategy.

Think of it in terms of vehicles.  Maybe 5% of drivers are in the market to buy or lease at any point in time; that 5% accounts for nearly all of the sales. 

On the other hand, your tracker data mostly reflects advertising's effect on brand positivity among the other 95% who do not contribute to sales. Attitudes move more slowly, so ask yourself: are attitudes what you are trying to move?

If so, rely more heavily on surveys. If sales need to move to prove the success of the campaign, you need to design a behavioral experiment. In that case, you work with the marketer's AdTech partner to create matched ID lists, advertising to one list and suppressing advertising to the other.
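A matched advertise/suppress split like the one described above is often implemented as a deterministic hash of the user ID, so the same ID always lands in the same cell across systems. This is a sketch under that assumption; the function name and the 10% holdout share are illustrative, not from the article:

```python
import hashlib

def assign_cell(user_id: str, holdout_pct: float = 0.10) -> str:
    """Deterministically map an ID to 'exposed' or 'holdout' via SHA-256."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # ~uniform on [0, 1]
    return "holdout" if bucket < holdout_pct else "exposed"

# the AdTech partner suppresses ads to the holdout list and serves to the rest
ids = [f"user_{i}" for i in range(10_000)]
holdout = [uid for uid in ids if assign_cell(uid) == "holdout"]
holdout_share = len(holdout) / len(ids)
```

Hashing (rather than random draws) matters operationally: any system that sees the same ID reproduces the same assignment without a shared lookup table.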

4. Test vs. control experiments sound like a gold standard, but they are easy to get wrong. The control cell is a counterfactual: what would have happened had those exposed to the advertising NOT been exposed? That means you need a "perfectly matched" control cell for comparison.

Rarely can someone conduct a true RCT (randomized controlled trial), so usually you are in the realm of quasi-experimentation, matching unexposed consumers to those who were naturally exposed. In this situation, a carefully constructed control becomes critical. A clean control (i.e., no ad exposure at all) is NOT a good idea if you are assessing the impact of one particular tactic among the many used in the campaign.

The exposed cell will have been exposed to multiple tactics, so the lift is inflated. Even if you block the control cell from the tactic under study, you still face the covariance pattern across tactics. For example, online video and display are often highly correlated because the same DSP is working with the same inventory partners.

Hence, the exposed cell and the forensic control are not matched on those confounders. I use counterfactual regression approaches, in conjunction with data weighting, to clean that up.
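The spirit of that cleanup can be approximated by direct standardization: reweight the control cell's conversion rates to the exposed cell's mix of confounder strata before computing lift. This is a simplified sketch with invented strata and conversion flags, not the author's actual counterfactual regression:

```python
from collections import Counter, defaultdict

def adjusted_lift(exposed, control):
    """Lift after reweighting control conversion rates to the exposed
    cell's mix of confounder strata (direct standardization).
    Each record is (stratum, converted_flag)."""
    exp_mix = Counter(stratum for stratum, _ in exposed)
    n_exp = len(exposed)

    ctl = defaultdict(lambda: [0, 0])  # stratum -> [conversions, count]
    for stratum, conv in control:
        ctl[stratum][0] += conv
        ctl[stratum][1] += 1

    # counterfactual conversion rate: control rates at the exposed mix
    cf_rate = sum((exp_mix[s] / n_exp) * (ctl[s][0] / ctl[s][1])
                  for s in exp_mix if s in ctl)
    exp_rate = sum(conv for _, conv in exposed) / n_exp
    return exp_rate - cf_rate

# stratum = a co-exposure confounder (also saw display); exposed skews toward it
exposed = [("also_display", 1), ("also_display", 1), ("also_display", 0),
           ("also_display", 1), ("no_display", 0), ("no_display", 1)]
control = [("also_display", 1), ("also_display", 0), ("no_display", 0),
           ("no_display", 0), ("no_display", 1), ("no_display", 0)]

naive_lift = (4 / 6) - (2 / 6)       # ignores the confounder: inflated
adj_lift = adjusted_lift(exposed, control)
```

In this toy data, the naive comparison overstates lift because the exposed cell is richer in the high-converting co-exposed stratum; standardization removes that imbalance.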

5. Marketing mix modeling is not well suited to determining whether a campaign is working, or which pieces are working...but MTA is. This question is exactly what MTA was designed to address, and it uses millions of data points that bust through the intercorrelations of the predictor variables. MMM is more of an annual budget-setting tool that can be used only inferentially ("Am I outperforming the model?").
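To give a flavor of what attribution does, here is a toy comparison of last-touch vs. linear (equal-credit) attribution over converting paths. The path data is invented, and production MTA is far more sophisticated (e.g., algorithmic or Shapley-value credit), but the sketch shows how touch-level data surfaces the contribution of smaller tactics that last-touch hides:

```python
from collections import Counter

def attribute(paths, model="linear"):
    """Split one unit of conversion credit across each converting path."""
    credit = Counter()
    for touches in paths:
        if model == "last":
            credit[touches[-1]] += 1.0       # all credit to the last touch
        else:
            for t in touches:                # linear: equal credit per touch
                credit[t] += 1.0 / len(touches)
    return credit

# hypothetical converting paths, in touch order
paths = [["display", "video", "search"],
         ["video", "search"],
         ["search"]]

linear = attribute(paths)
last_touch = attribute(paths, model="last")
```

Last-touch hands all three conversions to search; the linear model credits display and video as well, which is how user-level attribution can find impact even for smaller-investment media tactics.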

To help guide my readers, I have prepared a handy reference guide below.

 

| Sensitivity to ad effects (in order) | Ad research method | Rationale |
| --- | --- | --- |
| 1 | RCT experiments | A gold standard, but hard to execute. |
| Tied for 2 | Quasi-experiments on conversion and ad-serving data | User-level matching of exposed and control cells, often across millions of records. Be careful to build a well-matched, balanced unexposed cell and a strong counterfactual modeling approach. |
| Tied for 2 | Multi-touch attribution | User-level data, often at a scale of millions of records. Finds ad impact even for smaller-investment media tactics. Can serve as a counterfactual model. |
| 4 | Conventional test markets | Looks for the effect of advertising on sales, but uncontrolled factors often confound the sales effects. |
| 5 | Surveys triggered by ad exposure | Survey measures of brand favorability have a relationship to sales that I have only recently studied via the MMA. Smaller sample sizes can leave insufficient statistical power to detect small movements in outcome measures. |
| Tied for 6 | Marketing/media mix models | Ad effects can be swamped by promotion effects and masked by multicollinearity. Not sensitive enough to differentiate the effects of granular media tactics. |
| Tied for 6 | Brand trackers | Ongoing surveys that produce brand equity KPIs that simply do not move very much in practice. |
| Lowest | Click-through reports and sales tracking | Click-through has been shown to correlate poorly with the complete benefit of advertising, and it does not separate out conversions that would have happened anyway. Looking for bumps in sales tracking while a lot of other things are going on is also not very telling. |

 



