Not too long ago, Apple implemented a feature on the iPhone to help you understand how long it will take you to get to your frequently visited locations based on current traffic conditions. When my iPhone connects to Bluetooth as I start my car, I see a screen that looks something like this:
I’m not 100% sure how the underlying architecture works, but I’m guessing it’s a relatively simple machine learning-based piece of artificial intelligence (AI). Every time you visit a location, Apple caches that history (securely) on your phone, and it collectively remembers which places you visit most often at certain points in the week. For example, on weekday mornings, it gives me directions to my workplace, and on Sunday mornings, it gives me directions to church.
(Funny story… on Thursday evenings, it used to tell me how long it took to get to one of my favorite pizza places, and that’s when I knew I had to stop eating so much pizza.)
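To make that guess concrete, here’s a minimal sketch of the kind of pattern-matching I’m imagining: nothing Apple has published, just a toy that counts visits by day of week and time-of-day bucket and suggests the most frequent destination for the current bucket. The class and method names are my own invention.

```python
from collections import Counter, defaultdict

class DestinationPredictor:
    """Toy on-device predictor: tallies visits by (weekday, 4-hour bucket)
    and suggests the most frequent destination for the current bucket."""

    def __init__(self):
        # (weekday, hour_bucket) -> Counter of visited places
        self.visits = defaultdict(Counter)

    def record_visit(self, weekday, hour, place):
        self.visits[(weekday, hour // 4)][place] += 1

    def predict(self, weekday, hour):
        counts = self.visits.get((weekday, hour // 4))
        if not counts:
            return None  # no history for this time slot yet
        return counts.most_common(1)[0][0]

p = DestinationPredictor()
for day in ("Mon", "Tue", "Wed"):
    p.record_visit(day, 8, "work")
p.record_visit("Sun", 9, "church")
print(p.predict("Mon", 9))  # -> work
```

All the “learning” here is just frequency counting on-device, which is why an army of humans pushing out recommendations by hand makes no sense as an explanation.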
I’m okay with this because while I can’t 100% verify that a real human being isn’t spying on where I’m going, it just isn’t feasible to have an army of humans pushing out all these direction recommendations by hand. I also just don’t believe anybody is interested in seeing how often I go to Chipotle!
But apparently, not everybody shares my same mindset. Recently, my parents questioned me about this precise feature since it started showing up on their phones, and while I tried explaining how it was likely due to machine learning, they remained skeptical. Not understanding the underlying technology, they couldn’t get away from this “man in the box” concept. They wanted to turn off the feature altogether, fearing that they were being tracked by a real human being. And mind you, my parents are the last people to have any reason to fear being tracked. All anybody would see from them is my mom’s love of visiting her grandkids and my dad’s fondness for Menards.
They’re definitely not alone. 2018 was not a good year for the IT community, particularly in the spaces of information security and artificial intelligence. Between the congressional hearings involving Facebook and Google, all the stuff to do with Russia, and more, the media did not paint a pretty picture of this landscape. So folks like my parents, who aren’t knowledgeable about the underlying technology, are just going off what the media tells them.
I hate to say it, folks, but 2019 isn’t looking much better. I’ve already read a handful of articles questioning the security and “creepy factor” around AI, and I bet we see another congressional grilling within the next few months. I’m not one to guess how politics will play out, but this is one place where I’d be willing to wager that some sort of legislation will happen, in one form or another. If you pressed me on what I think will actually happen, I’d guess that companies’ algorithmic models will be audited to analyze what user information goes into making the “secret sauce” happen. I do expect Congress to be relatively reasonable and not force companies to reveal the fundamental proprietary knowledge that builds an artificial intelligence solution.
Again, I’m taking some shots in the dark here, but I think this is what we’re building toward. The reputations of Google, Facebook, and others are at an all-time low, and we’re probably just scratching the surface of these congressional hearings. It doesn’t help that Mark Zuckerberg recently announced that his 2019 personal goal is to “host a series of public discussions about the future of technology in society,” which honestly feels a lot like conveniently timed overcompensation from a guilty party, whether he truly is guilty or not.
Congress doesn’t exactly move fast, so we’re at least a year (if not two) away from any action. That said, what should companies be doing in the meantime, if anything happens at all?
To explain that, let’s talk a bit about Charity: Water.
Charity: Water is a nonprofit started by former nightclub promoter Scott Harrison. In his great book, Thirst, Harrison chronicles his journey from promoter to the nonprofit world after finding the nightlife world ultimately lacking. Harrison aptly noted that his friends didn’t particularly like giving to nonprofits because people really don’t know what happens to their money. As a result, Charity: Water adopted a 100% model, meaning that 100% of contributions from regular givers go directly toward actual outcomes (i.e., building wells), and administrative funds for the nonprofit are provided by a separate group of donors.
During an interview on Rob Bell’s RobCast podcast, Bell asked Harrison how he got those donors on board to fund the administrative side of the nonprofit. And his answer was dead on: people mind far less where their money goes so long as they know exactly what it’s going toward.
In a word, trust.
People are much more forgiving and understanding about actions if you are transparent and upfront about what you’re doing. So with Charity: Water, Harrison notes that people are actually glad to pay the salary of the administrative assistant because while it might not be as “glamorous” as building a well, they understand that the administrative assistant still plays a role in making that ultimate vision happen.
Now, just as trust is in short supply in the nonprofit world, so it is in the IT world. It’s probably much worse for the IT world, actually. It’s not that often you hear about a nonprofit getting dinged for misusing donation funds, but I feel like we hear something new almost weekly about an IT company misusing its users’ data.
I don’t want to seem cold or inhumane, but there’s a huge business opportunity on the table here. In a world where trust is in short supply, it would behoove a company to be upfront about its artificial intelligence practices. Ideologically, your moral compass should point you toward being ethical about your business practices, but if I can’t appeal to your morals, I can appeal to your bottom line.
There are a number of ways a company could implement this, and I’ll share one idea here. For each AI solution you implement, you provide an upfront explanation of a) how the AI solution will benefit you as the customer and b) what specific data elements your company is feeding into the solution as inputs. You don’t have to share your “secret sauce” proprietary inner workings, so there’s nothing to worry about from an intellectual property perspective.
And alongside this explanation, provide an opportunity (e.g. instructions, toggle) for the user to opt out of the solution.
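To show how lightweight this could be, here’s a minimal sketch of a per-feature disclosure record plus an opt-out toggle. Everything here (the class names, the `eta_suggestions` feature, the listed data elements) is hypothetical, just one possible shape for the idea.

```python
from dataclasses import dataclass

@dataclass
class AIFeatureDisclosure:
    """One transparency record per AI feature: the plain-language benefit,
    the specific data elements used as inputs, and the user's opt-out flag."""
    name: str
    benefit: str          # (a) how the feature benefits the customer
    data_inputs: list     # (b) specific data elements fed into the solution
    opted_out: bool = False

class ConsentRegistry:
    """Tracks disclosures and honors each user's opt-out choice."""

    def __init__(self):
        self.features = {}

    def register(self, disclosure):
        self.features[disclosure.name] = disclosure

    def opt_out(self, name):
        self.features[name].opted_out = True

    def is_enabled(self, name):
        f = self.features.get(name)
        return f is not None and not f.opted_out

registry = ConsentRegistry()
registry.register(AIFeatureDisclosure(
    name="eta_suggestions",
    benefit="Shows travel time to places you visit often",
    data_inputs=["visited locations", "day of week", "time of day"],
))
print(registry.is_enabled("eta_suggestions"))  # True until the user opts out
registry.opt_out("eta_suggestions")
print(registry.is_enabled("eta_suggestions"))  # False
```

The disclosure record doubles as the text you’d surface to users, so the explanation and the toggle stay in sync by construction.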
Some people might shudder at that last one in fear of everybody toggling off the solution, but think back to the Charity: Water model. Those donors are totally okay with funding the administrative assistant because the organization is upfront about where their funds go. Along those same lines, I’m willing to bet that if you were transparent about your AI solution, the number of people who would intentionally opt out would be minimal.
Again, I think legislation is going to mandate this eventually anyway, so your company might as well make a business opportunity out of it while you still can. It doesn’t necessarily have to come in the form I suggested. Hey, you’re creative folks; I’m sure you could come up with an even better solution.
(And again… I feel weird addressing it as a business opportunity, but the whole moral argument obviously isn’t making a dent if we keep hearing about misuses of data.)
That wraps up this post. I don’t normally speculate on concrete things like this, so I’m curious to hear if you agree or disagree on these thoughts. Or how you might tweak a business plan on implementing transparency in your AI solutions. Catch you in the next one.