MAU 2018 — A Programmatic Tale

I first attended Grow.co’s Mobile Apps Unlocked (MAU) in 2015 and was impressed even then with Adam’s focus on quality content. Over the years I’ve seen the conference grow in size, which at times made me question the overwhelming number of sponsors that brought too many “pitching” people and sessions into the mix. But this year the Grow.co team proved that they’ve mastered growth: they hit scale whilst maintaining the original 2015 quality in both content and attendance. And the execution was impeccable: not a single hiccup with mics, sound, food, drinks, schedule, wifi… unless the new mParticle logo “mis-appearance” was their work, in which case there might be some tiny room for improvement 🤓 (for those who never got around to seeing it, it looks pretty cool!)

Part of the credit is also due to app marketers, who have driven the shifts in our industry over the past 12 months: for once the keynotes and conversations were not centered on network fraud (or network traffic in general). Hallelujah! 🙏

As part of the industry’s overall shift to programmatic (which we fully adopted in June 2017 when we became 100% programmatic), we decided to partner with MoPub for a joint panel to discuss the learnings that came from that shift, and from the newfound “impression transparency”.

We were fortunate enough to have Ibotta’s Cassie Chernin and Tophatter’s Drew Lehman join us on stage, sharing their experience around incrementality testing, creative testing and overall mobile display campaign optimization through programmatic buying (both for User Acquisition (UA) and Retargeting).

Here are the key takeaways from the session:

#1 Transparency is Non-Negotiable

Transparency is no longer perceived as a benefit; it’s a requirement. Marketers understand there is no real reason for publishers to withhold placement information.

“[We are] really trying to move away from partners that don’t share that transparency. We know you have it, we want to see it too.” — DREW
“Transparency is very important, really understanding where your ads are being served. And from a brand safety perspective, is it in the right context? A major difference between the network space and the programmatic space is just knowing where your ads are shown and where are we reaching our users.” — CASSIE

Aside from brand safety and insights into audience behavior, impression level data can also provide valuable data points for creative optimization.

“Transparency is super helpful and we feel more informed about our campaigns and what publishers are working well. It helps influence our creatives, and it helps us influence our marketing strategies.” — DREW

#2 Making Creatives Great Again

A few years ago, most marketers did not focus on creatives. It was all about the sources and the publishers: which were the good ones, which were the bad ones, “do you have unique inventory? Let’s see how it performs!” With programmatic came a deeper understanding of the KEY role creatives play in campaign performance. Programmatic is making creatives great again!

“I remember years ago it was really big to buy direct, [we’d say] this site is super relevant, and the CPM is very expensive, but we are going to do it because that’s the site where I want to show my ads. “This site is super relevant to my company…” But it’s starting to change. It is transitioning — it’s not really “where” your ads are shown but also “what” ads are shown. Although there are certain sites where you don’t want to be on, and that’s where transparency comes in play.” — CASSIE
“The main aspect we focus on is creatives: we really drive impressions, then need to get enough conversions to really understand which creatives are performing better. We then use that information to influence all our other campaigns (search/social/etc).” — DREW

It’s important to really tell a story across all different formats put together, understanding that each format has its unique potential to engage users.

#3 Relentless Testing

With programmatic’s (reliable) transparency it is much easier to understand what creatives are working, and to go one step further and understand why those creatives are performing better.

“Testing is not a one time thing. Marketers should test tirelessly — test all theories. You should be constantly rethinking what you are being told, questioning everything.” — CASSIE

It’s also important to determine the best way to “measure” these tests, to ensure results are actionable.

“We want to make as many variants as we can, just throw it at the wall to see what sticks. But when you are looking back and trying to analyze, (…) [you realize] it’s hard when you do too many variations at the same time. We’ve actually found that keeping the amount of variants that we are testing at about 5–8 actually helps us get more insights into what we are doing (…) because creative iteration isn’t only dependent on which one is winning day to day, but WHY. WHY does this creative do better than another. An overload of variants (testing too many variations at once) can really hinder that.” — DREW

Creative A/B testing can be automated with templates, but to turn the results into new iterations it’s important to understand why one template or call to action gets better results.
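To make Drew’s point about variant counts concrete, here’s a minimal sketch of how one might compare a handful of creative variants with a two-proportion z-test. All the variant names and numbers are hypothetical, purely for illustration; they are not from the panel.

```python
# Hypothetical impressions and conversions per creative variant (illustrative only).
from math import sqrt

variants = {
    "video_15s":  {"impressions": 120_000, "conversions": 540},
    "video_30s":  {"impressions": 118_000, "conversions": 470},
    "static_cta": {"impressions": 121_500, "conversions": 610},
}

def conversion_rate(v):
    return v["conversions"] / v["impressions"]

def z_score(a, b):
    """Two-proportion z-test: is variant a's rate significantly different from b's?"""
    pa, pb = conversion_rate(a), conversion_rate(b)
    pooled = (a["conversions"] + b["conversions"]) / (a["impressions"] + b["impressions"])
    se = sqrt(pooled * (1 - pooled) * (1 / a["impressions"] + 1 / b["impressions"]))
    return (pa - pb) / se

best = max(variants, key=lambda name: conversion_rate(variants[name]))
for name, v in variants.items():
    if name == best:
        continue
    z = z_score(variants[best], v)
    verdict = "significant" if abs(z) > 1.96 else "keep testing"
    print(f"{best} vs {name}: z = {z:.2f} ({verdict})")
```

With only 5–8 variants running at once, each gets enough impressions to reach significance quickly, which is exactly why an overload of variants hinders the “WHY” analysis.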

#4 Pricing in Programmatic

Making the switch to programmatic means looking at pricing from a new perspective. CPI pricing has long been a widely accepted metric in the industry, largely due to the nature of network traffic and how campaigns were measured. But focusing on the top of the funnel can be quite deceptive…

“When you’re bidding CPI you are going to see the top of the funnel (…) the actual cost to acquire that install tends to be better than programmatic, but in programmatic you see more of the ROI. Programmatic allows us to better measure ROI, our real return on Investment.” — CASSIE

When we made the shift to programmatic at Jampp, we consciously stopped looking at installs as a primary metric — a download is just another event in the funnel. Now that we can measure everything from impression cost to final in-app purchase revenue, there is no need to treat the install as an acquisition (or ROI) performance metric.
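The point about ignoring the install as an ROI metric can be shown with a toy calculation. The campaign names and figures below are hypothetical, invented only to mirror the pattern Drew describes (cheap installs, poor return):

```python
# Hypothetical campaign numbers (illustrative only): ROI needs spend and
# down-funnel revenue, not cost-per-install.
campaigns = [
    # (name, media spend USD, installs, in-app purchase revenue USD)
    ("programmatic_ua", 10_000, 800, 14_500),
    ("network_cpi",     10_000, 2_500, 7_200),
]

for name, spend, installs, revenue in campaigns:
    cpi = spend / installs          # looks great for the network campaign...
    roi = (revenue - spend) / spend # ...until you look at actual return
    print(f"{name}: CPI ${cpi:.2f}  ROI {roi:+.0%}")
```

In this made-up example the network campaign has a CPI of $4.00 yet loses money, while the $12.50-CPI programmatic campaign is profitable, which is the shape of the trade-off Cassie and Drew describe.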

As EA’s Belinda Smith pointed out in her article “The Transparency Hangover”, “Drunk for so long off cheap, seemingly efficient digital media, we’re now suffering through the morning-after hangover. […] Quality does not come cheap. Many of us carried the banner of programmatic to our organizations as an exercise in cost savings, efficiency or following the recommendation of media-mix models that favor low-cost inventory. We are now tasked with helping our teams understand why programmatic isn’t cheap and what that means for how we should value and activate it.”

“We experienced something similar with higher cost per installs, but as we waited longer we saw it was still profitable for us. So we are taking different signals early on in programmatic and using that to measure ROI and optimize campaigns. From an LTV perspective different CPIs often bring different quality users, and we’re seeing $15 CPI users often have better LTV than the under $5 CPI ones we get from network traffic.” — DREW

As long as there are clear KPIs that should be optimized towards, one can (and should!) pretty much ignore install cost altogether on programmatic. Then comes the real question: “how do you measure those KPIs?” And that is where attribution has an important role to play in what needs to change in the coming year…

#5 Rethinking Attribution

Programmatic transparency opens up the possibility of tracking impression level data, which wasn’t available on network traffic. While many marketers understand the value of View-Through Attribution (VTA) to measure campaigns at the bidding level, there is still valid skepticism as to what sort of cannibalization it may imply.

“We’ve played around with click and view and have skepticism of the blanket view window, that’s where the space needs to develop some multi-touch attribution. I don’t want to set things at one day view through because I’m not sure what happened after a user saw the ad. Right now I think unfortunately what happens is that the view-through windows get larger and larger and partners are asking for 30-day view, which is crazy. You are just kind of setting yourself up to pay for a lot of installs that are probably organic. Can you actually remember the ad you saw three days ago? I mean maybe 5% but not more…” — CASSIE

Cassie is spot on. We’ve been just as skeptical as anyone else, in fact we were pretty sure users who didn’t click on an ad would never actually go look for it in the app store and download it later…

Turns out we were wrong.

We ran an experiment to test View-Through Attribution — that means we had to be SURE that any install associated with an impression could NOT be organic. That meant we had to build a “dummy” app from scratch, with absolutely ZERO organic installs, to trust the results. Turns out 28% of users who installed our video recipe app never clicked on a single ad…

But what’s really interesting here is how long it took between impression and install: the bulk happens under 5 hours. Yes, that’s right, 5 hours, not the 24 hours the industry has set as the “standard”! And the same applies to clicks; in fact that window should be EVEN SHORTER on clicks, not longer…

Anything above 6 hours pretty much cannibalizes organics, on both last-click and view-through (we know this from various other incrementality and uplift tests we’ve run for our clients, not just this experiment).
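Picking a sensible attribution window comes down to looking at the distribution of impression-to-install delays. A minimal sketch, using made-up delay values (not our experiment’s data), of how one might check what share of installs each window captures:

```python
# Hypothetical impression-to-install delays in hours (illustrative only).
delays_hours = [0.2, 0.5, 0.8, 1.1, 1.4, 2.0, 2.5, 3.3, 4.0, 4.8,
                5.5, 7.0, 12.0, 20.0, 30.0, 47.0]

def share_within(window_hours, delays):
    """Fraction of attributed installs that happened within the window."""
    return sum(1 for d in delays if d <= window_hours) / len(delays)

for window in (1, 5, 24, 48):
    print(f"<= {window:>2}h window captures {share_within(window, delays_hours):.0%} of installs")
```

If most of the curve flattens after a few hours, everything a longer window adds is increasingly likely to be organic, which is the cannibalization argument in a nutshell.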

“Partners across the space need to have more flexibility with the VTA windows. We can’t really go lower, and we should be able to go to 1–2 hs.” — CASSIE

#6 Incrementality & Uplift

Which brings us to incrementality… Or uplift. Whatever name you’ve been giving it, you’ve likely been playing with the idea of measuring exactly how much your current attribution settings cannibalize your organic conversions…

“I think it’s really important to take a step back before running these tests, and define exactly what you are trying to measure. If you’re trying to measure cannibalization of your app campaigns, then you need to make sure you’re measuring that accurately. Desktop is very different from mobile — on desktop you can serve an ad to almost anybody, whilst you’ll never be able to reach more than 40% of the smartphone population on mobile display. You absolutely need to use PSA ads on VTA to make sure you’re comparing apples to apples, especially on retargeting campaigns.” — DREW

Drew hit the nail on the head.

I wish I’d spoken to him before we started running our first uplift tests in March 2017; it would have saved me a lot of time and headaches! We hit our heads against the wall multiple times trying to understand why the retargeting uplift tests weren’t giving us conclusive results (uplift would bounce from positive to negative week over week on a single campaign). We eventually understood that we had to serve PSA ads to the control group, rather than simply track its users and assume they would all have been “reachable” with an ad. We had to clear the noise in both treatment and control by comparing only exposed users (whilst on web retargeting campaigns you can measure uplift by just tracking your control group).
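The exposed-vs-exposed comparison boils down to very little arithmetic once the PSA control is in place. A sketch with hypothetical counts (the group sizes and conversion numbers are invented for illustration):

```python
# Hypothetical counts from one retargeting uplift test (illustrative only).
# The control group is served PSA ads, so "exposed" means the same in both groups.
treatment = {"exposed_users": 50_000, "converters": 1_250}   # saw our ads
control   = {"exposed_users": 50_000, "converters": 1_000}   # saw PSA ads

def uplift(t, c):
    """Relative lift of the treatment conversion rate over the control baseline."""
    rate_t = t["converters"] / t["exposed_users"]
    rate_c = c["converters"] / c["exposed_users"]
    return (rate_t - rate_c) / rate_c

print(f"uplift: {uplift(treatment, control):+.0%}")
```

Without the PSA ads, the control rate would mix reachable and unreachable users, which is exactly the noise that made our early results bounce between positive and negative.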

We also learned a lot with one of our European clients about “how to read” the results of UA incrementality tests, which can show you not only the incrementality in %, but also WHERE that incrementality is (in terms of minutes or hours from the impression).

And don’t get me started on how to randomize the retargeting control and treatment groups, or I’ll type for another 2 pages! (For those who like technical papers feel free to read our data science team’s post on the subject.)

It seems most marketers are still trying to decide not just how to run these tests but also how often: some try to measure incrementality on an ongoing basis with a 5 to 10% holdout group (a real challenge in terms of efficiency…), whilst others apply the findings of an isolated test to their ongoing media spend (knowing their “incremental user” cost them X% more than the attributed CPA).

“We typically run an uplift test once per quarter, to see how it changes, what it’s affecting, with all these other signals that we are giving to our users. So we test it every quarter, for about a month.” — DREW

#7 Fighting Fraud

Let’s not fool ourselves, people: there is no such thing as a world without fraudsters… not even in programmatic! Luckily for us, most programmatic fraud is still targeted at branding campaigns that only track impressions (for now, anyway…), so we can easily pick it up by simply tracking conversions for optimization purposes.

“There’s always going to be the next step in fraud, they are making money, they are smart obviously, and so they are going to figure it out. We have to try our best and accept that there will always be a risk in what we are doing.” — CASSIE

Serious exchanges like MoPub have a series of checkpoints in place to ensure publishers have real content before making them available to DSPs like Jampp. MoPub specifically hired Forensiq to keep an even closer eye on suspicious placements, making things safer even for branding advertisers.

One of the things we were able to develop on programmatic (and couldn’t roll out on network traffic) is heatmaps showing where users clicked on an ad. They not only help us optimize campaigns by making some areas non-clickable, but also help us identify “suspicious” publishers with unusually concentrated click areas.
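The “concentrated click area” check is easy to picture in code. A minimal sketch, with made-up click coordinates and a hypothetical concentration threshold (not our production logic):

```python
from collections import Counter

# Hypothetical normalized click coordinates (x, y in [0, 1]) for one publisher;
# 46 of 50 clicks land on the exact same spot (illustrative only).
clicks = [(0.12, 0.93)] * 46 + [(0.5, 0.5), (0.3, 0.7), (0.8, 0.2), (0.6, 0.9)]

def heatmap(points, bins=10):
    """Bucket clicks into a bins x bins grid -- a crude click heatmap."""
    grid = Counter()
    for x, y in points:
        grid[(min(int(x * bins), bins - 1), min(int(y * bins), bins - 1))] += 1
    return grid

def suspicious(points, bins=10, threshold=0.5):
    """Flag a publisher if a single cell holds more than `threshold` of all clicks."""
    grid = heatmap(points, bins)
    return max(grid.values()) / len(points) > threshold

print("suspicious publisher:", suspicious(clicks))
```

Human clicks spread across a creative; a single hot cell absorbing most of the clicks is the kind of pattern that warrants a closer look at the placement.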

What’s next for programmatic?

When we asked Cassie and Drew what they wanted to see evolve in programmatic over the next 12 months…

“I want to see multi-touch attribution!” — CASSIE

If you’ve read this far, kudos! I hope it was entertaining, if not insightful.

If you have any questions about programmatic performance marketing for mobile apps, request a demo or reach out to Jamppers you might know in your region.