Just How Bad IS Mobile Ad Data?

An experiment that took place during Hurricane Harvey has given us some worthwhile data on the status of programmatic advertising on mobile.

We tend to forget that programmatic tools are still in their relative infancy, and that there’s more to advertising than simply data. Augustine Fou’s Houston v. Bozeman test should remind us how much we still have to learn.

On Monday morning [August 31], the torrential rains and flooding caused by tropical storm Harvey gave Houston residents plenty to worry about. Yet that didn’t seem to keep them from using photo-filtering and music-discovery apps between 4 a.m. and 5 a.m. local time — at largely the same rate as people who were out of harm’s way some 1,500 miles northwest in Bozeman, Mont.
At least that’s what it looked like when programmatic digital buys were placed across 18 exchanges early Monday in a test conducted by cybersecurity researcher Augustine Fou of Marketing Science Consulting Group. Buys in the two cities went to the exact same group of 15 apps, despite the very different circumstances.

Fou’s experiment showed that fraudulent traffic to a forest-fire public service announcement came from both cities in equal proportions, despite the time and weather differences. The ad ran on 18 exchanges, and the traffic came from fake devices in data centers such as Amazon Web Services and Microsoft Azure, using proxies to make it appear to come from various residential IP addresses.
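The data-center signal Fou describes is one of the simplest fraud checks available: traffic that claims a residential geo-location but originates from a cloud-hosting network is almost certainly not a person on a phone. A minimal sketch of that check might look like the following — the impression log, field names, and helper function are all hypothetical, though the ASN numbers shown (Amazon, Microsoft, Comcast) are real network assignments:

```python
# Hypothetical sketch: flag ad impressions whose source network is a
# data center (e.g. AWS, Azure) rather than a residential ISP.
# In practice the ASN would come from a commercial IP-intelligence lookup.

DATA_CENTER_ASNS = {
    16509,  # Amazon (AWS)
    8075,   # Microsoft (Azure)
}

def is_suspect(impression, dc_asns=DATA_CENTER_ASNS):
    """An impression claiming a residential geo but originating from a
    data-center ASN is a strong invalid-traffic signal."""
    return impression["asn"] in dc_asns

# Illustrative impression log (values are made up for the sketch)
impressions = [
    {"ip": "203.0.113.5",  "asn": 16509, "geo": "Houston, TX"},   # AWS
    {"ip": "198.51.100.7", "asn": 7922,  "geo": "Bozeman, MT"},   # Comcast
]

flagged = [i for i in impressions if is_suspect(i)]
print(len(flagged))  # prints 1: only the data-center impression is flagged
```

The catch, as Fou’s test shows, is that proxies can mask the originating network, which is why a naive ASN check alone isn’t sufficient.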

The test showed that all the geo-located traffic he bought was fraudulent. Even though Fou didn’t specify a device type, 100% of the buys came from Android mobile apps. The traffic was proportional to the relative populations of Bozeman and Houston, despite all the power, cellular-service, and evacuation issues in the latter. And none of the ads generated a single click, even though accidental “fat thumb” clicks always occur when human traffic is involved, Fou says. “Common sense,” he adds, “says this cannot be real.”

Notably, all the fake traffic came from Android mobile apps, and none from iPhone. There’s plenty of awareness about the security leaks in the Android ecosystem, but its numbers are so large that the ad buys remain attractive for brands that need scale. The lesson here is that there’s a big difference between “scale” and “real,” even with geo-fencing.

It’s not that geo-fencing never works. It’s that we’re not yet at a stage where fraud detection has much visibility into mobile apps, and geo-fencing isn’t a guarantee. Advertisers may pay higher CPMs for geo-targeted inventory, yet they still have no assurance that the underlying data is good.

Most fraud detection systems were designed to work on desktops, and despite the fact that most advertising dollars have now shifted to mobile, fraud detection hasn’t yet caught up. It will. It must.

The good news is that the major apps, like Google Maps, Facebook, and Foursquare, are not among those sending the fraudulent traffic.