Having A Poor Signal Results In Poor Measurement

Grant Simmons
6 min read · May 2, 2018


Why do you hire an attribution company? It sounds like a silly question: Why do advertisers pay for attribution? As a data analyst, I'd argue that the main reason is to get a handle on all of your marketing efforts, to make associations that weren't previously available, and to measure user engagement.

Marketers collect clicks, impressions, installs, and (post-install) events for attribution to measure user engagement and the effectiveness of their partner networks. They then send this data (signal) to their attribution provider via the SDK or server-to-server method.

Once all the marketing touchpoints are collected, marketers can infer the strengths and weaknesses of publisher sources based on cost to acquire, cost to retain, revenue generated, and so on, tied back to the marketing source. Based on this measurement, the marketer can weigh quality against scale, optimize ad spend, and figure out where to place the next ad dollar.
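
As a rough illustration of that roll-up (the column names, spend figures, and report layout below are hypothetical, not Kochava fields), tying installs, revenue, and spend back to each source might look something like this:

```python
import pandas as pd

# Hypothetical attributed-install records; real data would come from
# your attribution provider's reporting, with your own column names.
installs = pd.DataFrame({
    "network":    ["NetworkA", "NetworkA", "NetworkB", "NetworkB", "NetworkB"],
    "device_id":  ["d1", "d2", "d3", "d4", "d5"],
    "revenue_d7": [4.99, 0.00, 1.99, 0.00, 0.00],  # post-install revenue
})
spend = pd.DataFrame({
    "network": ["NetworkA", "NetworkB"],
    "spend":   [10.00, 6.00],  # hypothetical campaign cost per network
})

# Roll installs and revenue up to the network level, then join spend.
by_network = (installs.groupby("network")
              .agg(installs=("device_id", "nunique"),
                   revenue=("revenue_d7", "sum"))
              .reset_index()
              .merge(spend, on="network"))

by_network["cpi"]  = by_network["spend"] / by_network["installs"]  # cost to acquire
by_network["roas"] = by_network["revenue"] / by_network["spend"]   # revenue per ad dollar
print(by_network.sort_values("roas", ascending=False))
```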

Poor signals

Unfortunately, attribution isn't a cut-and-dried process, and several issues can muddy the signal enough that attribution and measurement become inaccurate. For instance, if you're marketing at scale and you aren't deduplicating installs or distinguishing winners from influencers, multiple network sources will inevitably hit the same devices, and multiple parties will claim credit for the same install.

Deduplication also includes eliminating users who previously installed an app and have either gone dormant or uninstalled it, and who re-engage or re-install at a later time. For this reason, data retention proves important: Without a way to deduplicate users over time, advertisers will “buy back” their already-acquired customers once the data retention window closes.
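
A minimal sketch of that idea, assuming a persisted history of attributed device IDs and a configurable retention window (the IDs and the window here are made up):

```python
from datetime import datetime, timedelta

# Hypothetical history of devices we've already attributed, keyed by device ID,
# with the date of the original install. A real system would persist this.
seen_devices = {
    "IDFA-1111": datetime(2017, 6, 1),
    "IDFA-2222": datetime(2018, 3, 15),
}

RETENTION = timedelta(days=365)  # how far back device history is kept

def is_new_user(device_id: str, install_time: datetime) -> bool:
    """Treat an install as 'new' only if we have no record of this device
    inside the retention window; otherwise it's a re-install/re-engagement."""
    first_seen = seen_devices.get(device_id)
    if first_seen is None:
        return True
    return (install_time - first_seen) > RETENTION

# A device seen two years ago falls outside a 365-day window and gets
# "bought back" as if it were a brand-new user -- exactly the problem
# an unlimited retention window avoids.
print(is_new_user("IDFA-1111", datetime(2019, 7, 1)))  # True  (outside window)
print(is_new_user("IDFA-2222", datetime(2018, 5, 2)))  # False (known device)
```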

That said, attribution is first a deduplication exercise: The attribution partner is there to ingest a good signal and award credit based on the attribution waterfall. The attribution waterfall, most of the time, rewards credit to the most recent (last) click. Of course, there are exceptions: Some advertisers weight the impression as well as the click, and some allocate credit fractionally across all touchpoints.

The attribution waterfall prioritizes the highest-fidelity touchpoint (the click) that falls nearest the time of install.

The above graphic shows the logic Kochava uses to perform attribution. The system follows the last-click attribution model where the highest-fidelity source with the last click (touchpoint) wins the attribution.
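
For intuition, here's a minimal sketch of last-click logic, with a simplified fidelity ranking (click over impression) standing in for the full waterfall; the touchpoint records are hypothetical:

```python
from datetime import datetime

# Simplified stand-in for the waterfall: clicks outrank impressions,
# and within the same fidelity tier the most recent touchpoint wins.
FIDELITY = {"click": 2, "impression": 1}

touchpoints = [  # hypothetical touchpoints collected before one install
    {"network": "NetworkA", "type": "impression", "time": datetime(2018, 5, 1, 9, 0)},
    {"network": "NetworkB", "type": "click",      "time": datetime(2018, 5, 1, 9, 30)},
    {"network": "NetworkC", "type": "click",      "time": datetime(2018, 5, 1, 9, 45)},
]

def attribute_last_click(touchpoints):
    """Return the touchpoint that wins credit under last-click logic:
    highest fidelity first, then recency."""
    return max(touchpoints, key=lambda t: (FIDELITY[t["type"]], t["time"]))

winner = attribute_last_click(touchpoints)
print(winner["network"])  # NetworkC: the last click before the install
```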

My last post detailed the misaligned incentives of cost per install (CPI) pricing combined with last-click attribution. The takeaway is that within the app attribution space, publishers are over-incentivized to be the last touchpoint prior to the install occurring. This over-incentivization leads to fraudulent activity in the form of click flooding or sniping of organic installs.

Dubious activity goes beyond excessive click volume to include the way in which signals are sent to an attribution partner like Kochava. First, let’s define a click. At Kochava, a click represents a purposeful action taken by the prospective customer (user). The user is presented an ad unit, and if the ad piques the prospect’s interest, they click on it. This action represents intent and engagement. The ad unit (an impression) influenced the customer to take purposeful action (a click). Therefore, most attribution — in direct response marketing — relies on that click (intent) signal.

The click signal becomes complicated when partners waver from that definition. For instance, some media partners have sent impressions as clicks for attribution. If impressions are being called clicks, how much intent or engagement is there? The only “engagement” is that the prospective customer was online and was served an ad. There is no user intent inherent in being served an ad; the impression does not represent customer engagement. Yet if impressions are sent as clicks, the least engaged action (an impression) is treated as the most engaged action (a click). This is a prime example of a poor signal.

Regardless of the attribution logic marketers execute (e.g., last-click, fractional, multi-touch), the process boils down to signal. If you trust the signal, you can answer questions about your marketing mix and about the overall efficiency and scale provided by each of your network partners.

To answer the question of why hire an attribution company: You use an attribution company to ingest a clean signal tied to marketing outputs. To be clear, having a clean signal means clicks sent as clicks, impressions as impressions, etc. If the signal isn’t clean, attribution can become poor or misleading.

When we talk about obtaining truthful attribution, what we're really talking about is having a reliable, trustworthy signal. If you believe the metrics tied to attributable events are trustworthy, you can look at your network mix and stack rank it from top performers to bottom, either by overall network or at the sub-publisher level. However, if your signal is untrustworthy, how can you properly separate good from bad?

More simply put: If you can’t trust your signal, you can’t trust your attribution. If you can’t trust your attribution, you can’t trust your measurement. And, if you can’t trust your measurement, you can’t market effectively.

So — What should marketers do about it?

Get the right signal

First step: deduplicate. Educate yourself on how long your attribution provider retains its data. Kochava maintains it in perpetuity.

Second: Run a simple analysis to answer the question: Are my signal and attributed installs/registrations/purchases, etc. correlated? In the direct response space for apps, the question is, “Do my clicks and attributed installs move in the same direction?”

The top graph represents an R-squared of 0.38, or poor-to-no correlation, but note the downward-sloping trendline. That means there's a loose but NEGATIVE relationship between clicks and attributed installs: what movement exists goes in the opposite direction. Loosely interpreted, the more clicks, the fewer installs. We don't believe that is reasonable. At best, there's no correlation between signal (clicks) and effect (installs attributed against those clicks).
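
A minimal sketch of that check, assuming you can export daily clicks and attributed installs for a network (the figures below are made up):

```python
import numpy as np

# Hypothetical daily clicks and attributed installs for one network,
# exported from your attribution reporting.
clicks   = np.array([12000, 15500, 9800, 21000, 17500, 14200, 19800])
installs = np.array([  310,   395,  260,   540,   450,   365,   505])

# Pearson correlation and R-squared between the two series.
r = np.corrcoef(clicks, installs)[0, 1]
r_squared = r ** 2

print(f"r = {r:.2f}, R^2 = {r_squared:.2f}")
# A positive r with a healthy R^2 means clicks and installs move together.
# A low R^2 -- or a negative r, as in the graph described above -- is a
# red flag that the click signal doesn't reflect real user intent.
```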

Third: Deploy an anti-fraud strategy. Within Kochava, the marketer has several options:

  • Subscribe to the Global Fraud Blacklist: If we see traffic violate thresholds across multiple accounts, we add the device, sub-publisher (site), or IP address to the Blacklist, and we won't attribute from those sources.
  • Use the Fraud Console to curate your own Blacklist: You may observe bad behavior specific to your apps. Blacklist sub-publishers, IPs, or device IDs directly from the console.
  • Employ frequency capping using Traffic Verifier: Cap impressions or clicks globally, by network, or by site (the sketch after this list shows the general idea).
  • Establish thresholds across our anti-fraud algorithms to automatically block sources that violate your custom thresholds. Typically, you'd work with our Client Insights team to analyze recent trends and identify appropriate thresholds. Having those baseline measurements makes it easier to judge when data flagged by the console is truly unreasonable.
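
To make the filtering concrete, here is a simplified illustration only, not Kochava's implementation: a blacklist plus a per-site frequency cap applied to an incoming click stream, with hypothetical identifiers and limits.

```python
from collections import Counter

# Hypothetical blacklist and cap; in practice these would come from the
# Global Fraud Blacklist, your account-level Blacklist, and Traffic Verifier.
BLACKLISTED_SITES   = {"site_9999"}
BLACKLISTED_IPS     = {"203.0.113.7"}
MAX_CLICKS_PER_SITE = 3  # illustrative per-site frequency cap

clicks_per_site = Counter()

def accept_click(click: dict) -> bool:
    """Drop clicks from blacklisted sources or from sites over the cap."""
    if click["site"] in BLACKLISTED_SITES or click["ip"] in BLACKLISTED_IPS:
        return False
    clicks_per_site[click["site"]] += 1
    return clicks_per_site[click["site"]] <= MAX_CLICKS_PER_SITE

stream = [
    {"site": "site_1001", "ip": "198.51.100.4"},
    {"site": "site_9999", "ip": "198.51.100.5"},   # blacklisted site
    {"site": "site_1001", "ip": "203.0.113.7"},    # blacklisted IP
] + [{"site": "site_1001", "ip": "198.51.100.4"}] * 5  # click flood

accepted = [c for c in stream if accept_click(c)]
print(len(accepted))  # only the capped, non-blacklisted clicks survive
```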

Fourth: Truthful attribution requires working with your media partners to clearly define what you consider a click and an impression, and ensure that the signal is sent according to your definition.

Cleaning up the signal will provide:

1. A much stronger correlation between your marketing efforts and the attributed actions

2. A cleaner reporting layer to understand which traffic sources have meaningful reach and quality

Ensuring a clean signal means analyzing campaigns to understand what an acceptable click-to-install ratio looks like. Use that analysis to establish click and impression thresholds, and enter them into Traffic Verifier before launching a campaign.
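
One way to derive such a threshold from historical campaigns, sketched with made-up numbers and an arbitrary cutoff rule (not a Kochava default):

```python
# Hypothetical historical campaigns: (clicks, attributed installs)
history = [
    (50_000, 1_000),    # 50 clicks per install
    (80_000, 1_450),    # ~55
    (30_000,   640),    # ~47
    (120_000, 2_200),   # ~55
]

ratios = [clicks / installs for clicks, installs in history]
baseline = sum(ratios) / len(ratios)

# Flag anything far above the baseline click-to-install ratio; the 3x
# multiplier is an arbitrary illustration, to be tuned per app.
threshold = 3 * baseline
print(f"baseline clicks/install: {baseline:.0f}, flag above: {threshold:.0f}")

new_campaign_clicks, new_campaign_installs = 400_000, 900
if new_campaign_clicks / new_campaign_installs > threshold:
    print("Ratio looks like click flooding; tighten Traffic Verifier caps.")
```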

Prevent fraudulent entities from being wrongfully attributed by using the Global Fraud Blacklist and an account-level Blacklist. Use the Fraud Console to visualize your data and understand where it deviates from the norm for your apps.

Last, marketers should ensure that their definitions of clicks, impressions, and events are upheld by their media partners. Only when everyone shares the same understanding can marketers trust that their attribution is true and accurate.


Grant Simmons

Former Head of Retail Analytics at Oracle Data Cloud. Currently Head of Client Analytics at Kochava. Your data speaks volumes if you know how to listen.