Attribution models for mobile advertising

A well-built media plan faces the difficult challenge of balancing the different advertising channels used to acquire and retain a customer, and of attributing the right merit to each of those channels in order to achieve a sustainable and scalable customer strategy. Therein lies the challenge: knowing the real contribution made by each channel, the value obtained from the advertising spend invested, and which attribution models to use.

The promise of “measurability” in digital marketing has led many to believe that the problem can be greatly simplified. This is partly true: the power to measure digital channels has made it possible to find better solutions to the media-planning process. However, overly simplistic approaches (where numbers replace reasoning) may achieve the opposite of what was intended.

A simplistic (and, in our view, wrong) approach to this complex challenge is to try different channels simultaneously, measure them all with the same last-click attribution model, and keep eliminating those with the worst last-click CPA results.

The benefit of this method is its extraordinary simplicity. The problem is that it may achieve the opposite of what we want.

Measuring the sales funnel with different attribution models

Going back to marketing basics (the AIDA [Awareness, Interest, Desire, Action] framework is probably a good common reference): acquiring a user always follows a sales-funnel approach, where we prospect potential leads at the top of the funnel (Awareness), find those that are interested in our product (Interest), follow them through the decision process (Desire), and finally convert leads into buying customers (Action) at the bottom of the funnel.

Each of these steps in the sales funnel has different objectives and, hence, can’t be measured the same way. The objective of the prospecting phase is to make people aware of a product (top of the funnel), so that eventually the lead becomes interested enough to progress through the sales funnel and end up buying. We can’t compare the cost of acquiring a brand-new user with the cost of acquiring a user who is already familiar with the product (bottom of the funnel) and is ready to purchase.

If we do make the mistake of comparing these costs directly, we will tend to believe that the tool or channel that helped us convert a lead at the bottom of the sales funnel is always better than the tool that built awareness with a new lead at the top of the funnel.

The catch is that if we follow the simple strategy described above and end up only with bottom-of-the-funnel channels and tools, we will get very good CPA metrics but very little scale. Indeed, once everyone who already knows our product (the leads at the bottom of the sales funnel) has been acquired, there is no one else left to convert.

This is typical of poorly designed media plans. It’s easy to quickly eliminate the channels that help us drive product awareness and end up only with bottom-of-the-funnel tools, such as search and retargeting channels, that show great CPAs. The problem arises when we try to get scale: that’s when we start pouring money into these channels and CPAs get worse.

The real key to the very complex problem of designing the perfect media plan is not only which channels to use, but also how to attribute merit to each channel, measuring its effectiveness against the distinct goal of the acquisition stage it serves.

That’s where programmatic buying can play a significant role, using its data-processing capabilities to achieve properly measured results at each stage of the funnel, in conjunction with other marketing channels.

Ideally, last-click attribution should only be used to measure the effectiveness of bottom-of-the-funnel channels (search, retargeting). View or algorithmic attribution should be used to attribute merit to, and measure the effectiveness of, top- and mid-funnel channels such as programmatic.
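To make that rule of thumb concrete, here is a small illustrative mapping of funnel stages to channels and the attribution model used to measure them. The channel names and groupings are examples chosen for this post, not an exhaustive taxonomy or any platform’s configuration format.

```python
# Illustrative only: choose the attribution model per funnel stage rather than
# applying last-click everywhere. Channel lists are examples, not a taxonomy.
ATTRIBUTION_BY_STAGE = {
    "top":    {"channels": ["programmatic display", "video"],  "model": "view / algorithmic"},
    "middle": {"channels": ["programmatic display", "social"], "model": "view / algorithmic"},
    "bottom": {"channels": ["search", "retargeting"],          "model": "last-click"},
}

for stage, cfg in ATTRIBUTION_BY_STAGE.items():
    channels = ", ".join(cfg["channels"])
    print(f"{stage:>6} of funnel: measure {channels} with {cfg['model']} attribution")
```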

The idea is to measure “awareness” in a way that allows marketers to choose the right platforms without compromising the scale and efficiency of a campaign. This measurement is possible: we should look at which of the leads that arrived at the bottom of the funnel were impacted by top-of-the-funnel channels.

For example, if I impact one million people with my awareness campaign, I should make sure that the 5,000 people who end up converting were among that million impacted a priori.
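As an illustration of that check, the sketch below compares the set of devices reached by an awareness campaign with the set of devices that later converted. The file names and the device_id column are assumptions about how such logs might be exported, not a specific platform’s format.

```python
# Minimal sketch: estimate how many converters were previously reached by the
# awareness campaign. Assumes two hypothetical CSV exports sharing a
# "device_id" column; adjust names to your own logs.
import csv

def load_ids(path, id_column="device_id"):
    """Read a CSV export and return the set of device identifiers it contains."""
    with open(path, newline="") as f:
        return {row[id_column] for row in csv.DictReader(f)}

impacted = load_ids("awareness_impressions.csv")   # e.g. ~1,000,000 devices reached
converters = load_ids("app_installs.csv")          # e.g. ~5,000 devices that installed

overlap = converters & impacted
print(f"{len(overlap)} of {len(converters)} converters "
      f"({len(overlap) / len(converters):.1%}) were reached by the awareness campaign")
```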

By properly combining channels, with the right measurements and attribution logic, one can achieve the holy grail of a scalable and effective marketing strategy.

View attribution: building a case for using it

There are two types of events that can lead to an action in display advertising: a click or a view. Conversions attributed to a campaign can therefore be measured with either post-click or post-view attribution. Generally speaking, most marketers accept the post-click attribution model as the default way of working, but many still do not fully buy into the post-view attribution model.

Building the case for view attribution

As consumers, many of us still prefer not to click on mobile display ads, for reasons ranging from a lack of trust to not wanting to be pulled away from the task at hand. That is not to say we don’t register the ad in our minds. Instead of clicking, many of us go directly to the relevant app store to download the app shown in the campaign. Under the post-click attribution model, that app install would not be attributed to the campaign, and may even be attributed to some other platform. This is the problem: determining how to attribute an install to a campaign when there was no click, but a very high likelihood that the user downloaded the app because they had just seen the ad minutes earlier.
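As a rough illustration of the logic, the sketch below attributes an install to a campaign on a post-view basis when the same device saw the ad within a lookback window but never clicked. The function, field names and the 24-hour window are illustrative assumptions, not any particular tracking platform’s API.

```python
# Minimal sketch of post-view ("view-through") attribution, assuming we already
# have per-device view and click timestamps from a tracking export.
from datetime import datetime, timedelta

LOOKBACK = timedelta(hours=24)  # assumed lookback window

def attribute_install(install_time, clicks, views):
    """Return 'post-click', 'post-view', or None for a single install.

    clicks/views are lists of datetimes for the same device and campaign.
    Clicks take precedence over views, mirroring the usual attribution order.
    """
    if any(install_time - LOOKBACK <= c <= install_time for c in clicks):
        return "post-click"
    if any(install_time - LOOKBACK <= v <= install_time for v in views):
        return "post-view"
    return None  # install not attributable to this campaign

# Example: no click, but a view 40 minutes before the install.
install = datetime(2015, 6, 1, 12, 0)
print(attribute_install(install, clicks=[], views=[install - timedelta(minutes=40)]))
# -> post-view
```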

At Smadex, we conducted an analysis of a number of campaigns that ran with view attribution turned on and a lookback window of 24 hours. As you can see from the histogram below, the data demonstrated a strong correlation between an ad being displayed and the app being downloaded within two hours. In fact, 95% of downloads were completed within 90 minutes of the ad being displayed. From a statistical point of view, the shape of the distribution is strong evidence that we should be counting views and not just clicks. If the downloads shown in the graph were not correlated with the ads displayed, the distribution of downloads would be flat (i.e. there would be roughly the same number of downloads in each time bucket, independently of when the ads were shown).

 

[Figure: View distribution]
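For readers who want to reproduce this kind of check on their own logs, a minimal sketch follows. It buckets the delay between an ad view and the subsequent install; the field names and bucket sizes are assumptions. The point is simply that a heavy concentration in the first buckets (versus a flat distribution) indicates the views and the installs are related.

```python
# Minimal sketch of the sanity check described above: histogram the delay
# between the last ad view and the install within a lookback window.
from collections import Counter

def delay_histogram(events, bucket_minutes=30, window_hours=24):
    """events: iterable of (view_time, install_time) datetime pairs."""
    buckets = Counter()
    for view_time, install_time in events:
        delay_min = (install_time - view_time).total_seconds() / 60
        if 0 <= delay_min <= window_hours * 60:
            buckets[int(delay_min // bucket_minutes)] += 1
    return buckets

# Example usage (pairs would come from the tracking logs):
# hist = delay_histogram(pairs)
# share_within_90_min = sum(n for b, n in hist.items() if b < 3) / sum(hist.values())
```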

Better attribution = increased efficiency and effectiveness

Simply counting clicks does not give marketers a robust measurement foundation to work with. They need data that tells them which marketing channels (social, display, search, etc.) contribute volume, and at what price. Excluding view attribution can, on average, mean discounting half of new users and generating a significantly inflated Cost Per Acquisition (CPA) number, the key metric marketers use to decide whether an agency or platform is delivering value.
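To make the effect on CPA concrete, here is a small, purely illustrative calculation (the numbers are invented for the example, not taken from the analysis above): if roughly half of new users arrive via view attribution, counting only click-attributed installs roughly doubles the reported CPA for the same spend.

```python
# Illustrative arithmetic only; the figures are made up for the example.
spend = 10_000.0
click_attributed_installs = 500
view_attributed_installs = 500   # assumption: about half of installs are view-attributed

cpa_clicks_only = spend / click_attributed_installs
cpa_all = spend / (click_attributed_installs + view_attributed_installs)

print(f"CPA counting clicks only:   ${cpa_clicks_only:.2f}")  # $20.00
print(f"CPA counting clicks + views: ${cpa_all:.2f}")         # $10.00
```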

Conclusion

The good news for advertisers is that both Smadex and most independent tracking platforms now support multiple attribution models, and these can be set on a per-campaign basis. Advertisers can configure the lookback window as needed. To compare apples with apples, an important condition for any campaign is that all display partners run under the same attribution settings.