
The Next Big Opportunity in Artificial Intelligence

David Hundley
Jan 9, 2019 · 6 min read

Not too long ago, Apple implemented a feature on the iPhone that estimates how long it will take you to reach your frequently visited locations based on current traffic conditions. When my iPhone connects to my car's Bluetooth, I see a screen that looks something like this:

[Screenshot: the iPhone's suggested-destination notification showing an estimated drive time]

I’m not 100% sure how the underlying architecture works, but I’m guessing it’s a relatively simple piece of machine learning-based artificial intelligence (AI). Every time you visit a location, Apple caches that history (securely) on your phone, and the model learns which places you visit most often at certain points in the week. For example, on weekday mornings it gives me directions to my workplace, and on Sunday mornings it gives me directions to church.

(Funny story… on Thursday evenings, it used to tell me how long it would take to get to one of my favorite pizza places, and that’s when I knew I was eating too much pizza.)
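Again, Apple hasn’t published how this feature actually works, so take this with a grain of salt, but the simplest version of my guess is just counting visits per time-of-week slot and suggesting the most frequent destination. Here’s a minimal Python sketch under that assumption; the class, methods, and time buckets are all hypothetical illustrations, not Apple’s implementation:

```python
from collections import Counter, defaultdict

class DestinationPredictor:
    """Hypothetical sketch: suggest the most-visited place per time-of-week slot."""

    def __init__(self):
        # Visit counts keyed by (day_of_week, time_bucket), e.g. ("Sun", "morning")
        self.history = defaultdict(Counter)

    @staticmethod
    def _bucket(hour):
        # Coarse time-of-day buckets; real systems would be far more granular
        return "morning" if hour < 12 else "afternoon" if hour < 18 else "evening"

    def record_visit(self, day, hour, place):
        self.history[(day, self._bucket(hour))][place] += 1

    def suggest(self, day, hour):
        counts = self.history[(day, self._bucket(hour))]
        # Most frequently visited place for this slot, if any history exists
        return counts.most_common(1)[0][0] if counts else None

predictor = DestinationPredictor()
predictor.record_visit("Sun", 9, "church")
predictor.record_visit("Sun", 9, "church")
predictor.record_visit("Thu", 19, "pizza place")
print(predictor.suggest("Sun", 10))  # -> "church"
```

The point of the sketch is that nothing about it requires a human in the loop: it’s bookkeeping on your own visit history.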

I’m okay with this because while I can’t 100% verify that a human being isn’t spying on where I’m going, it’s simply infeasible to have an army of humans pushing out all these direction recommendations by hand. I also don’t believe anybody is interested in seeing how often I go to Chipotle!

But apparently, not everybody shares my same mindset. Recently, my parents questioned me about this precise feature since it started showing up on their phones, and while I tried explaining how it was likely due to machine learning, they remained skeptical. Not understanding the underlying technology, they couldn’t get away from this “man in the box” concept. They wanted to turn off the feature altogether, fearing that they were being tracked by a real human being. And mind you, my parents are the last people to have any reason to fear being tracked. All anybody would see from them is my mom’s love of visiting her grandkids and my dad’s fondness for Menards.

They’re definitely not alone. 2018 was not a good year for the IT community, particularly in the spaces of information security and artificial intelligence. Between the congressional hearings with Facebook and Google, everything to do with Russia, and more, the media did not paint a pretty picture of this landscape. So folks like my parents, who aren’t knowledgeable about the underlying technology, are just going off what the media tells them.

I hate to say it, folks, but 2019 isn’t looking much better. I’ve already read a handful of articles questioning the security and “creepy factor” of AI, and I’d bet we see another congressional grilling within the next few months. I’m not one to guess how politics will play out, but this is one place where I’d wager that some sort of legislation is coming. If you pressed me on what I think will happen, my guess is that companies will be audited on their algorithmic models to analyze what user information goes into making the “secret sauce” happen. I do expect Congress to be relatively reasonable and not force companies to reveal the proprietary knowledge that underpins an artificial intelligence solution.

Again, I’m taking some shots in the dark here, but I think this is what we’re building toward. The reputations of Google, Facebook, and others are at an all-time low, and we’re probably just scratching the surface of these congressional hearings. It doesn’t help that Mark Zuckerberg recently announced that his 2019 personal goal is to “host a series of public discussions about the future of technology in society,” which honestly feels a lot like conveniently timed overcompensation from a guilty party, whether he truly is guilty or not.

Congress doesn’t exactly move fast, so we’re at least a year (if not two) from something happening. That said, what should companies be doing prior to this happening, if anything even happens at all?

To explain that, let’s talk a bit about Charity: Water.

Charity: Water is a nonprofit started by former nightclub promoter Scott Harrison. In his great book, Thirst, Harrison chronicles his journey from promoter to the nonprofit world after finding the nightlife scene ultimately lacking. Harrison aptly noted that his friends didn’t particularly like giving to nonprofits because people rarely know what happens to their money. As a result, Charity: Water adopted a 100% model: 100% of contributions from regular givers go directly toward actual outcomes (i.e., building wells), while the nonprofit’s administrative funds are provided by a separate group of donors.

During an interview on Rob Bell’s RobCast podcast, Bell asked Harrison how he got those donors on board to fund the administrative side of the nonprofit. His answer was dead on: people are happy to give so long as they know exactly what their money is going toward.

In a word, trust.

People are much more forgiving of your actions if you are transparent and upfront about what you’re doing. So with Charity: Water, Harrison notes, people are actually glad to pay the salary of an administrative assistant: it might not be as “glamorous” as building a well, but they understand that the administrative assistant still plays a role in making that ultimate vision happen.

Now, just as trust is in short supply in the nonprofit world, so it is in the IT world. Actually, it’s probably much worse for IT. It’s not often you hear about a nonprofit getting dinged for misusing donation funds, but it feels like we hear something new almost weekly about an IT company misusing its users’ data.

I don’t want to seem cold or inhumane, but there’s a huge business opportunity on the table here. In a world where trust is in short supply, it would behoove a company to be upfront about its artificial intelligence practices. Ideally, your moral compass should point you toward ethical business practices, but if I can’t appeal to your morals, I can appeal to your bottom line.

There are a number of ways a company could implement this; I’ll share one idea here. For each AI solution you implement, you provide an upfront explanation of a) how the AI solution will benefit you as the customer and b) what specific data elements your company feeds into the solution as inputs. You don’t have to share your “secret sauce” proprietary inner workings, so there’s nothing to worry about from an intellectual property perspective.

And alongside this explanation, provide an opportunity (e.g., instructions or a settings toggle) for the user to opt out of the solution.
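To make that concrete, here’s a minimal Python sketch of what such a disclosure could look like in code: each AI feature declares its benefit and its data inputs up front, and the user gets a toggle. Every name here (the class, its fields, the example feature) is hypothetical, not any real product’s API:

```python
from dataclasses import dataclass, field

@dataclass
class AIFeatureDisclosure:
    """Hypothetical sketch: an AI feature that explains itself before it runs."""
    name: str
    benefit: str                                      # a) how it benefits the customer
    data_inputs: list = field(default_factory=list)   # b) what data feeds the model
    opted_in: bool = True                             # user-facing opt-out toggle

    def opt_out(self):
        self.opted_in = False

commute_suggestions = AIFeatureDisclosure(
    name="Commute time suggestions",
    benefit="Estimates travel time to places you visit often.",
    data_inputs=["visit history (stored on device)", "current traffic conditions"],
)

# Show the disclosure before the feature ever runs
print(f"{commute_suggestions.name}: {commute_suggestions.benefit}")
print("Uses:", ", ".join(commute_suggestions.data_inputs))
```

The design choice worth noticing is that the disclosure lists only *what* data goes in, never *how* the model uses it, so the proprietary inner workings stay private.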

Some people might shudder at that last one, fearing everybody will toggle the solution off, but think back to the Charity: Water model. Those donors are totally okay with funding the administrative assistant because the organization is upfront about where their funds go. Along those same lines, I’m willing to bet that if you were transparent about your AI solution, the number of people who would intentionally opt out would be minimal.

Again, I think legislation is going to mandate this eventually anyway, so your company might as well make a business opportunity out of it while you still can. It doesn’t necessarily have to come in the form I suggested. Hey, you’re creative folks; I’m sure you could come up with an even better solution.

(And again… I feel weird addressing it as a business opportunity, but the whole moral argument obviously isn’t making a dent if we keep hearing about misuses of data.)

That wraps up this post. I don’t normally make concrete predictions like this, so I’m curious to hear whether you agree or disagree with these thoughts, or how you might tweak a business plan for implementing transparency in your AI solutions. Catch you in the next one.
