Why hasn’t Artificial Intelligence been democratized?

3 Factors Preventing the Democratization of AI

Bill Su
Sep 22, 2017 · 12 min read

The potential impact of AI on society can be compared with that of nuclear power. On the one hand, it can create a massive boost in productivity and quality of life for human societies. On the other hand, it could also fall into the wrong hands and be used to create weapons of mass destruction that endanger billions of lives.

That’s why, before the full onset of the AI revolution, it is important for us as a society to consider ways to prevent the potential negative effects of AI from becoming a reality.

Elon Musk, a thought leader in predicting the effects of AI on society, proposed the democratization of AI as a solution to these risks in an interview with Y Combinator president Sam Altman.

His view can be summarized by the following snippet:

“It becomes a very unstable situation if you’ve got any incredibly powerful AI. You just don’t know who’s going to control that. It’s not that it will develop a will of its own right off the bat, the concern is that someone may use it in a way that’s bad… So we must have democratization of AI technology and make it widely available. And that’s the reason that… [we] made OpenAI, to help spread out AI technology so it doesn’t get concentrated in the hands of a few.”

(Video: OpenAI cofounders Elon Musk and Sam Altman talk about the future of AI.)

The democratization of AI, that is, making AI technologies widely available to all businesses and individuals at an affordable cost, has become one of the most popular topics of discussion in recent years. However, while OpenAI has gained much traction since its founding, the progress we have made toward AI democratization has been uninspiring at best.

According to a recent survey by the McKinsey Global Institute, only 20% of companies use AI technologies in their day-to-day operations. Given that the respondents were mostly larger enterprises in their industries, AI adoption among small and medium-sized businesses is likely even bleaker.

This article is going to explore three primary factors preventing companies, large and small alike, from adopting AI technology in their day-to-day operations:

  1. Few people really know what AI is
  2. A lack of large, publicly available datasets makes adopting AI difficult
  3. The true power of current AI technologies is very limited

For each of the factors, I will also propose potential solutions that we can use to bypass the adoption hurdles of AI and make its democratization truly possible.

Factor 1: Few people really know what AI is

A broad public understanding of the technology does not currently exist for AI.

According to a survey conducted by Qualtrics, only 10% of the internet users surveyed consider themselves experts on AI (i.e., they understand the ins and outs of the technology), compared with 34% for 3D printing. On the other hand, over 30% of respondents have only heard of AI but don’t really know what it entails.

(Image credit: Tom Fishburne)

Honestly, I am not surprised by the results of this survey, since the term “Artificial Intelligence” is very general and covers a wide range of computing innovations dating back to the 1950s.

AI pioneer John McCarthy defined the field as the “science and engineering of making intelligent machines, especially intelligent computer programs.”

Based on this definition, even some of the simplest things in everyday life, such as vending machines and gas pumps, could fall under some category of AI, because they mimic tasks that could previously be accomplished only by human intelligence.

Even if we only count well-known AI systems that are “changing the world,” they can be broken down into many categories, such as game playing (Deep Blue and AlphaGo), natural language processing (Amazon Alexa), anomaly detection (fraud detection technology), and computer vision (facial recognition technology).

Each of these categories has its own distinctive impact on our society. For example, computer vision and game-playing systems are the main technologies behind self-driving cars, while natural language processing is a potential disruptor of many reading-heavy jobs, such as financial analysis.

Therefore, without a deep understanding of the history and components of AI, it is hard for a layperson to carefully consider the impact of each individual category of AI technology and to offer an informed opinion on how to prevent its negative effects.

Proposed Solutions

The first prong is education and engagement.

As AI experts and technologists, it is our duty to create more resources and opportunities for the public to learn more about AI.

Through those learning experiences and public conversations, we will not only be able to inform the general population about AI, but also gain a fresh perspective on how the technologies we are creating benefit or harm their lives.

Luckily, many initiatives are already underway to further this objective.

In addition to the aforementioned OpenAI project, Andrew Ng (co-founder of Coursera and a top authority on artificial intelligence) recently launched deeplearning.ai to help provide resources for more people to use AI in their work and everyday life.

While these new developments are great starts, we need to do more.

Currently, most of the discussions and meetups related to AI are concentrated in high-tech cities such as Seattle and San Francisco. The topics of discussion are also often too technical for people without a coding background.

To solve this problem, we need to expand this conversation to less tech-centric cities such as Cleveland and Birmingham, and make the discussion more approachable and friendly for laypeople without an intensive computer science background.

The second prong calls for the redefinition of the term AI.

As illustrated above, AI really means a lot of different things, and talking about “AI’s impact on society” is like talking about “electricity’s impact on society” — it is too vague to get any meaningful conversations started.

Instead of being so general, I encourage discussion leaders to frame conversations around specific aspects of AI, such as “the impact of computer vision on this aspect of society” or “the effects of natural language processing on society.”

With these more specific areas defined, more productive conversations will ensue.

Factor 2: A lack of large, publicly available datasets makes adopting AI difficult

According to Forbes, the amount of data created from 2013 to 2015 is greater than the amount of data created in all of human history before that. Most of these data are created on social media platforms such as Facebook, search engines such as Google, and online/mobile applications.

With this collected data, companies such as Facebook and Google are able to create extremely sophisticated algorithms that help their core advertising platforms target ads to precise audiences. The same data also lets these tech giants predict other interesting phenomena, such as when flu season will hit.

However, for everyone to have access to AI technologies that pay the kind of incredible dividends they do for companies like Google and Facebook, smaller players in the field must also have access to the volume of data available to the big players. Today, they do not.

Having worked in the field of open data before founding Humanlytics, I can tell you with certainty that open data is in its nascent stage at best, with only a select few datasets available to support meaningful innovation.

Right now, while open data initiatives such as data.gov provide many open datasets for the public to analyze, these datasets are usually too small, too general, or too unclean to enable any meaningful AI innovation.
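
To make this concrete, here is a rough sketch of the kind of sanity check an analyst might run on a downloaded open dataset before committing to a project. The file name and columns are purely illustrative, not a real data.gov dataset:

```python
import pandas as pd

# Hypothetical CSV downloaded from an open data portal; the file name and
# columns are illustrative, not a real dataset.
df = pd.read_csv("city_business_licenses.csv")

print(f"Rows: {len(df)}")              # often only a few thousand records
print(f"Columns: {list(df.columns)}")  # often aggregated rather than event-level

# Share of missing values per column, worst offenders first.
print(df.isna().mean().sort_values(ascending=False).head())

# A few thousand aggregated rows with heavy missingness is rarely enough
# to train a useful machine learning model.
```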

(Image credit: data.gov)

Without enough open data, companies have to purchase datasets from data vendors such as Nielsen and The Weather Company at prohibitively high prices, making AI innovation effectively impossible for smaller enterprises.

Proposed Solutions

In short, we need to democratize data as well (this is, in fact, the idea behind the founding of Humanlytics; read more below).

Here I will borrow an argument made by Tak Lo, the founder of Zeroth.ai, a Hong Kong-based venture fund with the goal of facilitating AI democratization through the democratization of data.

In a recent Medium post, Tak Lo proposed two solutions to the data divide:

  1. Develop a more organized way to store and manage public data
  2. Help implement AI solutions in data-rich but technology-poor organizations

(Image credit: Zeroth.ai)

While I agree with both solutions, I would like to add a few of my own insights to each point.

First of all, while creating a systematic way of organizing public data is extremely important, making this data available is not enough for AI to be democratized.

In addition to creating a systematic way of sharing public data, there should also be a mechanism for organizations to share use cases of public data with the community.

Then, either the company that needs AI help or a third-party organization can create open-source AI libraries with plug-and-play data solutions. This would enable other players in the field to use public data without spending hundreds of hours parsing it and building models around it.

One of the best examples of this is Facebook’s Prophet library. Released as open source by Facebook, Prophet is a Python library that gives users access to sophisticated time-series forecasting algorithms without having to implement them manually.


Personally, Prophet saved us an enormous amount of time at Humanlytics when we built a time-series model to forecast metrics such as user traffic, and it elevated our “AI” capabilities in the process.
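
For readers curious what that looks like in practice, here is a minimal sketch of the kind of daily-traffic forecast Prophet makes almost trivial. The CSV file and column names are illustrative, and I am using the fbprophet package name from the original release (newer releases publish it simply as prophet):

```python
import pandas as pd
from fbprophet import Prophet  # newer releases: `from prophet import Prophet`

# Prophet expects a dataframe with two columns: `ds` (date) and `y` (value).
# The CSV below is illustrative; any export of daily user traffic works.
df = pd.read_csv("daily_sessions.csv")  # columns: date, sessions
df = df.rename(columns={"date": "ds", "sessions": "y"})

model = Prophet()  # sensible defaults: trend changepoints, weekly/yearly seasonality
model.fit(df)

# Forecast the next 30 days and inspect the predictions with uncertainty intervals.
future = model.make_future_dataframe(periods=30)
forecast = model.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```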

Secondly, I believe helping data-rich companies adopt AI is also necessary but not sufficient. If these companies build AI capabilities while refusing to share their data with the public, the data gap in AI will only widen.

Therefore, instead of building a tailor-made AI for each organization, I propose building a general-purpose AI for each major industry. Companies can plug their data into these general-purpose AI systems to derive insights, but they must also share their data anonymously to contribute to the development of the AI.
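
To illustrate the idea, here is a purely hypothetical sketch of what the interface to such a shared, industry-level AI could look like. Every class and method name below is invented for illustration; the point is simply that companies contribute anonymized records and, in return, query insights computed over everyone’s pooled data.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class MarketingRecord:
    channel: str       # e.g. "search", "social"
    spend: float       # ad spend in dollars
    conversions: int   # conversions attributed to that spend

class IndustryAI:
    """Hypothetical shared model for one industry (all names are illustrative)."""

    def __init__(self) -> None:
        self._pool: List[MarketingRecord] = []  # anonymized records from all participants

    def contribute(self, records: List[MarketingRecord]) -> None:
        # In a real system, identifying fields would be stripped before pooling.
        self._pool.extend(records)

    def insight(self, channel: str) -> Dict[str, float]:
        # Toy "insight": conversions per dollar for a channel, computed
        # across every contributor's data, not just your own.
        rows = [r for r in self._pool if r.channel == channel and r.spend > 0]
        if not rows:
            return {"conversions_per_dollar": 0.0}
        total_spend = sum(r.spend for r in rows)
        total_conversions = sum(r.conversions for r in rows)
        return {"conversions_per_dollar": total_conversions / total_spend}

# A company plugs in its own data and benefits from the pooled knowledge.
shared = IndustryAI()
shared.contribute([MarketingRecord("search", 1200.0, 36), MarketingRecord("social", 800.0, 12)])
print(shared.insight("search"))
```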

This is exactly what we are trying to do at Humanlytics (you can learn more at humanlytics.co) for the digital marketing industry. I believe this is the only way we can prevent the data divide from growing as a result of AI development.

Factor 3: The true power of current AI technologies is very limited

Recently, Sam Harris, the famous podcaster and philosopher, was interviewed on NPR’s TED Radio Hour (starting at 35:54).

I recommend you listen to the entire episode, but the part I want to reference here starts at 39:40. There he explains that we are currently living in the age of “narrow AI,” in which machines can achieve superhuman intelligence in a very narrow spectrum of tasks, rather than being the omnipotent force in all areas that many imagine AI to be.

(Image credit: TED)

I found this point about “narrow AI” extremely important, because while we fear AI surpassing human intelligence altogether, current AI technologies can only really perform a few restricted tasks, such as recognizing images, understanding language, and detecting potentially fraudulent activity.

Furthermore, today’s AI systems cannot learn on their own. Data scientists and AI engineers have to manually program the relevant features and variables into a model for it to work as intended. If those variables and features change in the future, current AI systems have no way to adapt themselves to the change.
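
A toy example makes this limitation concrete. In the fraud-detection sketch below (the features, transactions, and labels are all invented for illustration), a human decides exactly which signals the model sees; if fraud patterns shift toward signals the engineer never encoded, the model has no way to notice on its own.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Every feature below was chosen and computed by a human engineer.
def make_features(tx):
    return [
        tx["amount"],                       # hand-picked feature 1
        tx["hour_of_day"],                  # hand-picked feature 2
        1.0 if tx["new_account"] else 0.0,  # hand-picked feature 3
    ]

# Tiny, invented training set: 1 = fraud, 0 = legitimate.
transactions = [
    {"amount": 25.0,   "hour_of_day": 14, "new_account": False},
    {"amount": 980.0,  "hour_of_day": 3,  "new_account": True},
    {"amount": 12.5,   "hour_of_day": 11, "new_account": False},
    {"amount": 1500.0, "hour_of_day": 2,  "new_account": True},
]
labels = [0, 1, 0, 1]

X = np.array([make_features(t) for t in transactions])
model = LogisticRegression().fit(X, labels)

# The model only ever "sees" the three hand-coded features; it cannot
# invent a new one when fraudsters change their behavior.
print(model.predict(X))
```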

What this tells us is that one reason AI has not been democratized is that it is not yet powerful enough to be useful to everyone. Soon, however, it will be.

Proposed Solutions

For AI to evolve, its development process must be transparent to the general population, because that development requires ordinary people to constantly interact with the AI and teach it about the human world.

In other words, AI democratization is not just a nice-to-have; it is a necessary step toward a general-purpose AI that can learn from its environment to improve itself.

I discussed this point in more detail in my recent blog post linked below. To summarize, for AI technologies to improve to a level where they understand human thinking, AI engineers and designers need to pay much more attention to designing how AI interacts with humans, rather than focusing on complex algorithms to improve prediction models.

These AIs would then be sent into the human world untrained or under-trained, and would learn from humans as we go about our day-to-day activities. This is how AI will steadily improve itself toward our level.
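
As a purely illustrative sketch of this interaction-driven loop, the toy classifier below starts out knowing nothing and improves only because humans correct it. The class, its methods, and the example messages are all hypothetical:

```python
from collections import defaultdict

class InteractiveClassifier:
    """Hypothetical under-trained classifier that learns only from user corrections."""

    def __init__(self) -> None:
        # keyword counts per label, built entirely from human feedback
        self.counts = defaultdict(lambda: defaultdict(int))

    def predict(self, text: str) -> str:
        scores = {
            label: sum(keywords.get(word, 0) for word in text.lower().split())
            for label, keywords in self.counts.items()
        }
        return max(scores, key=scores.get) if scores else "unknown"

    def feedback(self, text: str, correct_label: str) -> None:
        # The human correction is the training signal: each interaction nudges
        # the model a little closer to how people actually talk.
        for word in text.lower().split():
            self.counts[correct_label][word] += 1

bot = InteractiveClassifier()
print(bot.predict("refund my order"))           # "unknown": the model starts untrained
bot.feedback("refund my order", "billing")      # a human supplies the correct label
print(bot.predict("please refund this order"))  # now routed to "billing"
```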

Only with this level of intimate understanding between human and machine can both entities improve without cannibalizing each other. Only then will AI democratization occur as a by-product.


Wrapping Up

We attributed the lack of AI democratization to three factors:

  1. Few people really know what AI is
  2. A lack of large, publicly available datasets makes adopting AI difficult
  3. The true power of current AI technologies is very limited

For the lack of public knowledge, we proposed a two-pronged solution: expand AI education beyond high-tech hubs to cities around the world, and increase the specificity of AI-related discussion topics.

For the lack of open data, we advocated for a more structured way to manage not only public data, but also use cases and open-source AI projects surrounding that data. We also called for creating a “public AI” that companies can access at a low price in exchange for contributing their data to train the AI.

Finally, for the constraints of current AI technologies, we proposed an interaction-based framework that calls for AI engineers to focus on the interaction design of AI, rather than the algorithms they are currently focusing on.

With these solutions, I believe we can drastically accelerate the process of AI democratization. We can then use AI democratization as a tool to prevent the potential adverse effects of AI and ensure that it benefits all of humankind.

(Image credit: Teslarati)

This article was produced by Humanlytics. Looking for more content just like this? Check us out on Twitter and Medium, and join our Analytics for Humans Facebook community to discuss more ideas and topics like this!

Analytics for Humans

We examine how technologies can work with humans to create a brighter future for everyone. To that end, we showcase augmented analytics tools we are building to bring us closer to that vision. Beta test our AI-powered marketing analytics tool for free: bit.ly/HMLbetatest

Written by Bill Su

CEO, Humanlytics. Bringing data analytics to everyone.
