Will brands do the right thing with the power of artificial intelligence? Brands already use AI to collect information on us: through our browsing habits, through beacons pinging our phones in stores, even with face and voice recognition. Information is power. Morality is not relative, but anyone’s view of AI will shift depending on where they stand.
What got me thinking about AI and brands was a keynote by Jeremy Gutsche delivered at Trendhunter’s Future Festival in Los Angeles. (Disclosure: They invited me to the festival and I said I would write this article about it and do a podcast interview with one of the other Trendhunter analysts, Ady Floyd.)
“We use the word AI,” Gutsche said during his talk, “not realizing that it’s already embedded in so much of our life.” AI helps us swipe to find a match and gives us directions to a party. It seems friendly, ready to help, benign. That might start to change when AIs build other AIs, when AI programs itself, when systems create new systems. What happens when there is a super intelligence? “It will get scary or awesome. Depends on how you look at it,” said Gutsche, and he got a laugh with that line. It was uneasy laughter. My guess is that none of us in the audience knew where to go with the idea of super intelligence. We didn’t truly know where we stood on AI.
A moral equation to solve
The audience members attending the talk were there to represent brands. They were in charge of innovation, tasked with dreaming up new ideas so their company would be positioned to move into the future. That advancement wouldn’t be simple, though, because of a moral equation that none of us had solved. Maybe we didn’t want to solve it because it was too technical, but then freezing up and doing nothing about AI — not examining our position on it — was the worst choice. As Gutsche pointed out in his talk, “I would argue that if we in this room are complacent, that’s what could lead us to the more dystopian future. I believe AI could solve a lot of things you’re working on. And I believe that your involvement and your brand’s involvement is important to think about, to shape this in the right direction.”
The right direction. Which way would that be? Gutsche pointed out that AI is developing fast because so much data is available to analyze. Pick your statistic: Ninety percent of all the data in the world has been created in the last two years. More data was created last year than in the previous five thousand years. In just one day, 500 million Tweets are sent, four million gigabytes of data are created on Facebook, and 294 billion emails go out. Much of it is vacuumed up by companies like IBM, Microsoft, Facebook, Amazon, and Google, all of which are in a race to develop artificial intelligence and sell their AI-generated insights to brands.
What are the brands going to do with the power AI brings them?
There will come a point when artificial intelligence will move faster and think more deeply than the humans who created it. This is already true with computers that beat humans at chess and the complex game of Go. The AI driving your autonomous car will need to decide which bystander will die when the car goes out of control and runs off the road. The AI controlling the power grid will need to decide which hospital will stay operational and which will go dark. These are decisions of efficiency weighed down by morality, but they are mere extensions of the decisions AI is already making for us. It’s not that much of a jump.
When brands begin using more AI-generated insights and data, they will gain power rapidly. They will know more about us, and not just our preferences, but the keys to our identity.
I unlock my phone and computer with my fingerprint. I trust that Apple has secured my biometric data. But what about when we hand over our face and retina data to AI systems? To recognize me, these systems don’t need my touch, my proximity, or my consent. Delta Air Lines is already checking in passengers on some flights by scanning their faces. Governments are using facial recognition to identify protesters in a crowd. This is data grabbed from a distance. No permission or contact required. And the repositories of biometric data are growing.
Can brands be trusted with that treasure trove of information? If somebody has my voice and can synthesize what I say, who will post fake interviews with me? What if the unique identifiers that make me, well, me are lost, leaked, or misplaced?
Accidents happen, but this is deeper than a credit card data breach. Retail stores use beacons — wireless gizmos that send a Bluetooth signal to your phone. If you have the right app, you walk into a store and your phone pings to tell you what’s on sale. Take it one step further: Imagine that a car rental brand has your facial scan or voiceprint. You can check in quickly at the airport. Now imagine that brand sells that data to someone else, or gets hacked. My credit card bill fills up with people renting cars in my name, using my face or voice. Unlike a password, I can’t change my face or voice. What happens then?
Who is warehousing your identity?
Most people like the idea of a brand knowing who they are. You can log in easily. You get emails tuned to your preferences. You feel recognized and appreciated. None of that is bad, but those choices make us willing conspirators. As brands learn more about us, how will they safeguard our data? When they have the choice to collect data without our permission, like facial biometrics, will they collect it? Of course they will. Data bring power. More data mean more power.
Brands often do what is right for them, not what is right for their customers. There are notable exceptions like Lyft, Patagonia, the dating app Bumble, and the Canadian clothing brand Roots — a few examples of companies with social-positive corporate cultures and a desire to make the world better. These are the sort of brands, I’d like to think, that really want to do the right thing. Then there are brands, like Postmates and Uber, that have to be forced to do the right thing. Postmates recently changed its policy on tipping because of social pressure. Uber forced out toxic management only after scandal upon scandal made it necessary. What will the not-nice brands do with their AI power? Facebook, a not-nice brand, has set a bad example: losing data, misusing data, selling data to third parties without permission. You wonder what a company like Wells Fargo, which pushed accounts on people who didn’t need them, would do with biometric data like voiceprints and facial scans.
The brands will leverage AI as much as they can, and it will come to them cheaply; even tech tools once reserved for spies are now available at low cost. Regulations around AI will be slow in coming. We need regulation faster, because if the banking industry is any indicator, regulation is what drives positive corporate behavior. Banks only do the right thing when confronted with regulation forcing them to do it. Most are not good corporate citizens by choice or design.
A slippery question
When questions around AI are framed around good-guy brands serving consumers they love, AI looks like a win for everyone. Not all brands are good, though. Some pretend to serve customers but are only interested in serving themselves. The decisions they make to benefit themselves will become only more powerful and pervasive when boosted with AI.
You would think “What is the right thing?” is a question that balances on a moral absolute. Something is right or it isn’t. But with AI there is a lot of gray. If consumers have no objection to their information being used by brands, for whatever purpose, then what is stopping brands from using AI, even letting AI run amok if they want? Consumers say they want it. But do we know what we are saying yes to?