Exploding Old Contexts With AI
According to IBM, we produce 2.5 exabytes of data each day. That is equivalent to 250,000 Libraries of Congress or 90 years of HD video — each day. This data exhaust results from our continual digital interactions, whether explicit — such as typing a search into Google — or implicit, like the location signals we give off as we move through the world with our smartphones.
In advertising, we use these data signals for demand capture — which is less expensive than demand generation — usually in programmatic contexts, for targeting, analytics and attribution. From a user’s point of view, much of that data output is visual and text-based, such as a list of Google search results. This visual environment is good for a brand, too, since it contextualizes and dimensionalizes its offering.
Big Data is also an important driver of advances in artificial intelligence. AI is nothing if not data-hungry, and cheap access to the exponentially growing, cloud-stored motherlode of data means that machine learning and deep learning systems, with their sophisticated algorithms and parallelized processors, can be trained that much more effectively.
It’s easy to get breathless when talking about AI (it happens to me all the time). What people spend less time talking about are the near-term collisions between AI and various professional fields, like medical diagnostics or finance or media and marketing.
Take Alexa, for example, the brain inside Amazon’s Echo and Dot. Alexa lives in your device but also in the cloud, and without sophisticated deep-learning algorithms trained on massive amounts of data, Alexa wouldn’t exist at all.
But when you ask Alexa for the best restaurant in Brooklyn, she names exactly four restaurants, one of them Shake Shack. And that’s it. When you type the same query into Google, you get a huge number of results, richly contextualized with ratings and descriptions and locations on a map.
The results returned in text and image are far more useful to the user, and the brand, than those returned in voice. The point isn’t that Alexa should use Google for its search results rather than Bing, but that, as an interface, voice is decontextualized.
You have to wonder how brands will manage discovery in a world increasingly dominated by voice. If this seems hyperbolic, consider that, by 2020, 30% of Web browsing will happen without a screen (according to Gartner) and 50% of searches will be voice (according to comScore) — and that besides Alexa we have Google Assistant, Cortana, Siri, and Ozlo, to name a few.
These new ecosystems will change how we connect buyers and sellers of media, as well as the fundamental role of publishers.
The main thing a brand can do today to prepare is to get its data strategy in order. A data strategy starts with a DMP and extends from that foundation into all of the new, AI-enabled contexts its customers are going to be in — whether voice, messaging, cars, homes, AR, or whatever comes next.
The time to prepare for this is now. Fundamental goals like reach, engagement, discovery and sales will be the same in these new contexts, but only the brands that have adopted a sound data strategy, informed by ongoing advances in machine learning, computer vision and natural language understanding, will benefit.
In closing, I should note that Alexa did answer one of my queries with the perfect response: “Alexa, what is the meaning of life?”
“The traditional answer is 42.”
Originally published at www.mediapost.com.