Democratize AI (Part I)

Richard Whitt
Jun 3, 2019

How to ensure human autonomy over our computational “screens, scenes, and unseens.”

Who is really in control here?

Digital assistants such as Alexa, Siri, and Google Assistant can be quite helpful, but their actual allegiance is to Amazon, Apple, and Google, not to the ordinary people who use them. By introducing AI-based digital agents that truly represent and advocate for us as individuals, rather than for corporate or government institutions, we can make the Web a more trustworthy and accountable place.


In the 2004 film “I, Robot,” Will Smith’s character, the enigmatic Detective Del Spooner, harbors an animosity toward the humanoid robots operating in his society. Over the course of the film we learn why. Once, while Spooner was driving on a rainy street, a crash sent his car and another careening into a torrential river. A rescue robot was deployed to pull Spooner and his car to safety, but Spooner implored it to instead retrieve a young girl still alive in the other car.

The rescue robot didn’t listen. It turns out this artificial intelligence was programmed to rescue the human with the best chance of survival, and since Spooner’s odds were deemed better (45 percent to 11 percent), he was the one chosen to be saved. The 12-year-old girl perished. As Spooner bitterly put it later, “Eleven percent is more than enough. A human being would have known that.”

Parts of “I, Robot” remain sci-fi lore, like AI first responders with physical bodies. But the ubiquity of AI is no longer fiction. Today, artificial intelligence powers search engines, social media platforms, smart speakers, drones, and much more. The ethical conundrums presented in “I, Robot” are no longer fiction, either: AI algorithms now do everything from curating journalism, to diagnosing our health, to determining who gets a loan, a job, or parole. Think of these AIs as incredibly advanced, and hugely impactful, selection engines.

Ceding any kind of human decision-making to machines is ethically complex territory. This is especially the case when the machines are proverbial “black boxes” that allow little transparency into how they are programmed. Who decides who designs and builds the algorithms, and what data they are trained on, are deep questions without easy answers. And today there’s a complication making this complex realm even more problematic: the AIs embedded in and shaping our everyday lives are typically beholden to the priorities and control of large institutions, not to the “end user.”

Device-embedded AIs like Alexa and Siri can be useful, and even delightful. They’re impressive feats of engineering, too. But it’s important to recognize that their real allegiance is to Amazon and Apple, not to the individual person. As a result, the tech giants of Silicon Valley and elsewhere have their virtual agents perched in our living rooms and embedded within our phones and laptops, constantly vying for our attention, our data, and our money.

We already can see the consequences of this dynamic, from privacy breaches to technology addiction to the spread of misinformation. As AIs become more advanced, and make more decisions for us and about us, what will these problems look like on a larger scale? And is there a way to prevent them?

First, it’s helpful to understand the algorithmic systems in our lives, and the consequences of ceding them unilateral authority to make decisions on our behalf. There are roughly three types of institutional AI we interact with each day: the online, the bureaucratic, and the environmental. It may be helpful to think of them as our computational screens, scenes, and unseens.

The first type of AI, the online, powers the digital systems we interact with every day through our screens, like Facebook, Twitter, and YouTube. This AI often functions as a recommendation engine, suggesting which news articles we read, which videos we watch, and even whom we match with on dating apps. It can benefit both the company and the user: we are recommended (ostensibly relevant) content, and the company gets to sell us more (ostensibly relevant) ads. But it can also benefit the company while harming the user: recommending, say, a video we find difficult to resist, even as it distracts, misinforms, or even radicalizes us.
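To make the allegiance point concrete, here is a minimal sketch in Python. It is hypothetical, not any platform’s actual code; the fields and weights are invented for illustration. The point is that whose interests a recommender serves comes down entirely to which objective function it ranks by.

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_time: float    # minutes the model expects the user to watch
    predicted_ad_revenue: float    # dollars the platform expects to earn
    predicted_user_benefit: float  # hypothetical well-being score, 0..1

def platform_score(v: Video) -> float:
    # The platform's objective: engagement and revenue.
    # User benefit appears nowhere in this function.
    return 0.7 * v.predicted_watch_time + 0.3 * v.predicted_ad_revenue

def user_agent_score(v: Video) -> float:
    # What a user-aligned agent might optimize instead.
    return v.predicted_user_benefit

candidates = [
    Video("Outrage compilation", predicted_watch_time=38.0,
          predicted_ad_revenue=0.9, predicted_user_benefit=0.1),
    Video("How to repair a bike", predicted_watch_time=7.0,
          predicted_ad_revenue=0.2, predicted_user_benefit=0.8),
]

print(max(candidates, key=platform_score).title)    # -> "Outrage compilation"
print(max(candidates, key=user_agent_score).title)  # -> "How to repair a bike"
```

Same candidates, same predictions; only the ranking function changes, and with it the answer to “who is really in control here?”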

The second type of AI, the bureaucratic, is embedded within corporations and governments. It may be largely unseen, but its influence on our lives is outsized: it can determine whether someone is hired, qualifies for a loan, or receives medical treatment. Again, this AI can benefit both the institution and the individual, if it makes bureaucracies more efficient and effective. But these systems typically aren’t built to accommodate the needs of the individual human being. Trained on biased data sets, for example, they can discriminate against women, misdiagnose dark-skinned patients, and wrongly incriminate African Americans, as the toy example below illustrates.
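Here is that mechanism in miniature, with entirely made-up data: if a model’s only target is “reproduce the historical decision,” it inherits whatever skew that history contains.

```python
# Hypothetical hiring records: (years_experience, group, was_hired),
# drawn from a past process that favored group "A".
history = [
    (5, "A", True), (5, "B", False),
    (3, "A", True), (3, "B", False),
    (8, "A", True), (8, "B", True),
]

def hire_rate(group: str) -> float:
    outcomes = [hired for (_, g, hired) in history if g == group]
    return sum(outcomes) / len(outcomes)

# A model that simply learns to predict the historical outcome
# reproduces the skew: identical candidates from group B score lower.
print(hire_rate("A"))  # 1.0
print(hire_rate("B"))  # ~0.33
```

No one wrote “discriminate” into the code; the bias rides in on the training data.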

The third type of AI, the environmental, is the most like that in “I, Robot”: the smart speaker on our kitchen counter, or the connected cameras, sensors, drones, and even vehicles scattered around our towns and cities. This AI can use advanced biometrics to track our location, map our heartbeats, listen to our conversations, and more. AI with access to this sort of intimate data from our daily scenes obviously should be accountable, first and foremost, to the humans it affects. But today that’s not the case: this personal data is usually vacuumed up and used to sell ads or enhance commercial technology.

Today’s institution-controlled AIs present a number of challenging problems; tomorrow’s will bring even more. As just one example, the autonomous vehicle promises to transform our lives for the better: smoother traffic flows, reduced accident rates, fewer fatalities. And yet the newly bestowed autonomy of such vehicles must come from somewhere else, mainly from us. Such a displacement of personal choice and decision-making, if complete, is quite troubling.

Here’s one scenario. I rent an autonomous car to drive me from San Francisco to Los Angeles. Along the way, another car loses control and is about to collide with mine. In those intervening split seconds, when the speed and relative position of my car can still be modified, the vehicle’s computational system swings into action. What happens next?

Maybe the rental car company programmed the car to minimize structural damage, thereby increasing the risk of bodily harm. Or perhaps the insurance company provided overriding instructions to protect the policy holder over anyone else, including children in the back seat. Or maybe the automotive manufacturer concluded it is best to “deprioritize” bystanders outside the vehicle. Or the original software programmer could have left it in some completely unknown “default” mode. These are all decisions that would affect the participants immensely, and yet the humans involved have no real agency or consent anywhere in this chain of reasoning.
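The scenario can be captured in a few lines of code. This sketch is entirely hypothetical (the party names and rankings are invented), but it shows how the crash outcome reduces to whichever priority table an institution installed, with no entry for the passenger’s own preferences.

```python
PARTIES = ["driver", "passengers", "bystanders", "vehicle"]

# Lower rank means protected first. Each institution in the scenario
# could ship a different table; these orderings are invented.
PRIORITY_POLICIES = {
    "rental_company": {"vehicle": 0, "driver": 1, "passengers": 2, "bystanders": 3},
    "insurer":        {"driver": 0, "passengers": 1, "vehicle": 2, "bystanders": 3},
    "manufacturer":   {"driver": 0, "passengers": 0, "vehicle": 1, "bystanders": 2},
}

def protection_order(policy_name: str) -> list:
    # The car's split-second behavior is just a sort over someone else's ranks.
    ranks = PRIORITY_POLICIES[policy_name]
    return sorted(PARTIES, key=lambda p: ranks[p])

for name in PRIORITY_POLICIES:
    print(f"{name:15} -> {protection_order(name)}")
```

Notice what is missing: there is no `"passenger_preferences"` key anywhere. Whoever writes the table decides; the person in the car does not.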

What’s the solution? Can we create a future where the AI behind our digital “screens, scenes, and unseens” doesn’t automatically place an institution’s priorities and incentives above people’s well-being? There is no shortage of suggestions, from introducing government oversight to implementing ethical training for programmers. These proposals are hugely important, and together many of them could have a positive impact. But they all share one approach: modifying AIs that remain largely under the control of the underlying institutions. In other words, incrementally adjusting an already-powerful status quo rather than challenging it more directly. There is another approach worth considering: creating new AIs with true allegiance to the end user.

More on that “how” in Part II, coming soon.

Richard Whitt is a former Googler with a passion for making the open Web a more trustworthy and accountable place for human beings.
