Another Decade of AI

My analysis and prediction of trends in AI for the 2020s

Sebastian Theiler
Dec 30, 2019 · 11 min read
Adapted from: [1], [2], [3], [4], [5], [6], [7], [8], [9]

AI has seen a tremendous amount of improvement in the last 10 years and has the power to absolutely transform our lives within the next decade.

In this article I have compiled my top predictions for the future of artificial intelligence, how AI will change our lives and when all of this will happen.

Topics

  • Machine Learning Frameworks
  • Chatbots and Virtual Assistants
  • Hardware
  • AI Explainability
  • The Medical Industry
  • Education
  • Reinforcement Learning
  • Space

Machine Learning Frameworks

TensorFlow 2.0/Keras vs. PyTorch

I find TensorFlow much more readable than PyTorch and more straightforward for writing quick examples, which is a high priority for explaining ideas to the public.

On top of this, TensorFlow has TensorBoard, one of the most powerful visualization tools available for machine learning. PyTorch has only recently gained official TensorBoard support (torch.utils.tensorboard), and community solutions such as tensorboardX exist as well.
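To illustrate what I mean, a quick Keras example with TensorBoard logging attached takes only a handful of lines. This is a minimal sketch using the MNIST dataset that ships with Keras; the layer sizes and log directory are arbitrary choices:

```python
import tensorflow as tf

# Load a small standard dataset (MNIST ships with Keras).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A quick Sequential model -- the kind of short, readable example
# that is easy to show when explaining an idea.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Log training curves to TensorBoard (view them with `tensorboard --logdir logs`).
tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir="logs")
model.fit(x_train, y_train, epochs=3,
          validation_data=(x_test, y_test),
          callbacks=[tensorboard_cb])
```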

In the 2020s, TensorFlow and PyTorch will likely remain the two dominant frameworks for machine learning — TensorFlow for more broad applications and PyTorch for more research-oriented uses.

As of late 2019, there is no niche for a new machine learning framework to occupy; you can already get pretty much everything done with TensorFlow or PyTorch. This means that we probably won’t be seeing any new large ML frameworks for Python anytime soon.

Mobile Development

Because smartphones have much less computing power than the workstations ML models are developed on, I believe there will be a considerable amount of development towards less computationally expensive models.

This may take the form of model pruning, or of redesigning existing models (or designing entirely new ones) that require less computation. The latter is exemplified by ALBERT, which beat the previous state of the art, BERT, on many NLP tasks while having only a fraction as many parameters.
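For example, one common way to shrink a trained model for phones is post-training quantization. A minimal sketch with TensorFlow Lite, assuming `model` is an already-trained Keras model such as the one sketched earlier, looks something like this:

```python
import tensorflow as tf

# Convert a trained Keras model to TensorFlow Lite and apply
# post-training quantization, which typically cuts model size
# roughly 4x by storing weights in 8-bit integers instead of 32-bit floats.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Write the compact model to disk for bundling with a mobile app.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```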

Julia


Python is an easy language to write, but it runs slowly compared to lower-level languages; Julia was created to remedy this. Julia was designed with the speed of low-level languages like C in mind, while keeping the high-level syntax of a language like Python.

Julia already has support for data visualization, machine learning tools, differential equation solvers, and even support for user interfaces. It can also be statically compiled and deployed on web servers without much trouble. Python can even interact with Julia through interfaces such as PyJulia.
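As a rough illustration of that interoperability, calling into Julia from Python through PyJulia looks something like the sketch below (it assumes Julia and the PyJulia package are already installed and configured):

```python
# PyJulia exposes Julia's Main module to Python.
from julia import Main

# Run Julia code directly and get the result back as a Python object.
result = Main.eval("sum(x^2 for x in 1:10)")
print(result)  # 385

# Define a Julia function and call it like a normal Python function.
Main.eval("square(x) = x^2")
print(Main.square(12))  # 144
```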

A visualization example built with Julia | Source

Over the next decade, Julia will undoubtedly grow in capabilities and popularity. Whether or not it will be able to overtake Python is hard to say, but I believe that it will certainly get close.

Chatbots and Virtual Assistants

By the end of the 2020s, I believe that chatbots and virtual assistants will have a profound effect on our lives. Here’s why:

Natural Language Processing (NLP)

State-of-the-art NLP systems are being developed at a remarkable pace, with new models leapfrogging one another every few months.

Within a few years, I predict that these sorts of NLP systems will be advanced and trustworthy enough to be widely deployed in traditional virtual assistants such as Siri and Cortana.

This will allow these currently rudimentary chatbots to handle a much greater array of tasks and to respond with information in a way that, today, only humans can.
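To give a sense of how accessible modern NLP has become, a pre-trained question-answering model can already be run in a few lines with Hugging Face's transformers library. This is just a rough sketch of the idea, not a production assistant; the default model it downloads and the example text are placeholders:

```python
from transformers import pipeline

# Download a pre-trained question-answering model and wrap it
# in a ready-to-use pipeline.
qa = pipeline("question-answering")

context = ("TensorFlow and PyTorch are the two dominant machine learning "
           "frameworks. TensorBoard is TensorFlow's visualization tool.")

# Ask a question against the context and inspect the model's answer
# and its confidence score.
answer = qa(question="What is TensorBoard?", context=context)
print(answer["answer"], answer["score"])
```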

Text to Speech (TTS)

Recent advances in TTS take us a step closer to synthesizing genuinely human-sounding speech. Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis by Jia, Zhang, and Weiss et al. introduces an approach (which I briefly cover in my article here) for cloning a voice given a few seconds of input audio and synthesizing human-sounding speech in that voice.

If you would like to take a listen to some samples created using this method, you can find them here. The samples created through speakers in the VCTK dataset are truly incredible.

Humanoid Chatbots

So far, most attempts at humanoid robots fall into the uncanny valley: too similar to humans to be obviously robotic, but not close enough to pass as human. Just unsettling.

The Uncanny Valley

Not to mention, the field of robotics must take significant strides (which it has been doing) before robots can navigate a home environment, which is necessary for a truly humanoid robotic assistant to become widespread. I do not see robotics completing such tasks any earlier than the late 2020s or 2030s.

Because of all this, the most successful virtual assistants will be just that, virtual, possibly having some human features like an animated face.

With the power of much more advanced NLP and TTS, the virtual assistants of the future will have much more control over our computers, and quite possibly our homes. Our voice may even become a third medium (along with our keyboards and mice) for interacting with our computers.

  • “Hey ____, play that video I watched earlier about machine learning on my TV”
  • “Hey ____, summarize this article for me”
  • “Hey ____, find a study on the effect of sleep on productivity, and pull it up on my smaller monitor”

Not to mention hundreds more capabilities that even Iron Man couldn’t have thought of when designing Jarvis.


Hardware

Currently, NVIDIA practically owns the market for ML GPUs. However, it is possible that a competitor, e.g., AMD, begins to move more into the ML arena.

I’m no economist, but my prediction is that increased competition between the two (or more) companies would significantly accelerate innovation while driving down costs.

I can’t say with any certainty that this will happen, but if it does, reduced GPU prices will make high-level deep learning much more accessible to the public.

Tensor Processing Units (TPUs)

Google’s custom TPU | Source

Announced in 2016, Google’s Tensor Processing Unit (TPU) is a chip designed specifically for the TensorFlow framework. TPUs have remained accessible to the public via Google’s cloud services; however, Google has made it clear they do not plan to sell TPUs commercially.
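For anyone curious, using a Cloud TPU from TensorFlow 2 looks roughly like the sketch below; the TPU name is a placeholder, and the details vary slightly between TensorFlow versions:

```python
import tensorflow as tf

# Connect to a Cloud TPU. On Colab or GCP the address is usually
# discovered automatically; "my-tpu" here is a placeholder name.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# TPUStrategy replicates computation across the TPU's cores.
strategy = tf.distribute.experimental.TPUStrategy(resolver)

with strategy.scope():
    # Any Keras model defined under the strategy scope is placed on the TPU.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```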

It is highly unlikely that the public will ever be able to buy a physical TPU outright. Still, it is within the realm of possibility that some company will create a new form of hardware, marketed publicly, designed explicitly for ML workloads.

Quantum Computing

Some argue that the idea of consciousness is inherently quantum, and can thus only be modeled on quantum devices. Whether or not this is true, quantum computers allow us to model the real world in its true form, not abstracted away by 1s and 0s.

With this said, I don’t believe quantum computers will truly become useful anytime soon.

Google’s announcement a few months ago that they had achieved “quantum supremacy” got a lot of people hyped up. But, as Scott Aaronson says on his blog, Google’s claim of quantum supremacy is analogous to the Wright brothers’ flight at Kitty Hawk, “about which rumors and half-truths leaked out in dribs and drabs between 1903 and 1908, the year Will and Orville finally agreed to do public demonstration flights.”

Google’s achievement doesn’t mean we will have practical quantum computers within the next decade; it just means they’re possible.

AI Explainability (XAI)

For most people, performance will matter more than explanation: if AI can predict the weather with a tested 99.99% accuracy, people won’t care why it works; they will care about what it says.

Trusting something we don’t understand sounds ridiculous. However, researchers still don’t fully understand how some common treatments, e.g., general anesthetics, work, and nonetheless we rely on them in life-saving procedures every day.

Only huge corporations and governments will have the money and time to invest in explainable AI; startups will continue to use unexplained models because they simply can’t afford to develop explainable ones. If companies like Google can sell tools for explaining existing models at a low enough price, and those tools genuinely add value to their buyers’ solutions, explainable AI will develop quickly. If that doesn’t happen, explainable AI will advance much more slowly than many people expect.
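Tooling in this direction is already appearing. As a rough illustration, a library like SHAP can attribute a model's prediction to individual input features in a few lines; this is a minimal sketch using a tree-based model on a toy dataset, and SHAP is only one example of such tools:

```python
import shap
import xgboost
from sklearn.datasets import load_breast_cancer

# Train a simple tree-based model on a toy dataset.
data = load_breast_cancer()
model = xgboost.XGBClassifier().fit(data.data, data.target)

# SHAP values estimate how much each feature pushed each prediction
# up or down, giving per-prediction explanations.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data)

# Summarize which features matter most across the whole dataset.
shap.summary_plot(shap_values, data.data, feature_names=data.feature_names)
```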

The Medical Industry

In wealthy countries, heavy regulation and established healthcare systems will slow the adoption of medical AI. In countries where the supply of modern medicine is limited, e.g., parts of sub-Saharan Africa, AI will come into play much faster: it can provide large-scale diagnoses at an incredibly low cost (once the model is built), and combined with lighter regulation, that makes such countries prime candidates for AI-based diagnosis and treatment.

Home Diagnoses

Imagine describing your symptoms to a virtual assistant at home and receiving a preliminary diagnosis. As we discussed earlier, NLP is developing rapidly, which is what enables these sorts of systems. They may grow slowly at first; the real boom will come when well-known medical brands officially endorse them, and that will likely not happen until such systems are explainable.

Medicine Development

With developments such as Google’s AlphaFold for simulating protein folding, a reality where AI can discover new medicines and treatments is getting closer every day.

AlphaFold predicting protein structures | Source

In its current state, AI is not advanced enough to provide real assistance to doctors and researchers, but within a few decades, and possibly by the end of the 2020s, we may see AI creating cures for diseases that humans could only have dreamed of.

Education

I believe that education will not be greatly affected by AI in the 2020s, and if it is, the change will come much closer to the 2030s.

With this said, AI will revolutionize online learning much sooner than it will the traditional classroom. Online learning platforms collect detailed statistics about their users, and what works best with large amounts of data? AI.

With AI, online learning platforms can spot what works best far better than any human can. AI can understand each individual learner and assign them materials accordingly.

Students will be able to learn in their own way, with AI administering human-made resources in the order and format that works best for each of them.
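As a toy illustration of assigning materials in a data-driven way, here is a minimal epsilon-greedy bandit that learns which of several hypothetical materials helps students most. The material names and success rates are made up for the sketch; a real platform would measure actual outcomes such as quiz scores:

```python
import random

# Candidate learning materials (hypothetical names).
materials = ["video", "worked_example", "practice_quiz"]
counts = {m: 0 for m in materials}
mean_reward = {m: 0.0 for m in materials}

def observed_outcome(material):
    # Stand-in for real data, e.g. whether a student passed the next quiz.
    success_rate = {"video": 0.5, "worked_example": 0.7, "practice_quiz": 0.6}
    return 1.0 if random.random() < success_rate[material] else 0.0

epsilon = 0.1
for _ in range(10_000):
    # Mostly pick the material that has worked best so far,
    # but explore the alternatives 10% of the time.
    if random.random() < epsilon:
        choice = random.choice(materials)
    else:
        choice = max(materials, key=lambda m: mean_reward[m])
    reward = observed_outcome(choice)
    counts[choice] += 1
    mean_reward[choice] += (reward - mean_reward[choice]) / counts[choice]

print(mean_reward)  # the bandit converges on the most effective material
```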

Minutes spent on the online learning platform Khan Academy by country | Source

Reinforcement Learning

RL will advance to a point where it can learn from much less data and incorporate prior experience into its learning. These two steps will allow RL to see more real-world use and enter mainstream applications.

Frameworks

Today, RL tooling is fragmented across many smaller frameworks, none of which dominates the way TensorFlow and PyTorch do for deep learning. As more and more is added to these frameworks and they become better known, the best ones will rise to the top and become the “TensorFlows” and “PyTorches” of RL. I suspect the winners will clearly separate themselves from the rest within the 2020s.

The Lunar Lander v2 environment provided by OpenAI Gym | Source
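Most RL frameworks build on OpenAI Gym's environment interface, like the Lunar Lander pictured above. Here is a minimal sketch of that interface with a random agent and no learning involved; LunarLander-v2 additionally requires Gym's Box2D extras:

```python
import gym

# LunarLander-v2 requires `pip install gym[box2d]`.
env = gym.make("LunarLander-v2")

for episode in range(3):
    observation = env.reset()
    total_reward, done = 0.0, False
    while not done:
        # A real RL agent would choose actions from a learned policy;
        # here we just sample random actions to show the interface.
        action = env.action_space.sample()
        observation, reward, done, info = env.step(action)
        total_reward += reward
    print(f"Episode {episode}: total reward {total_reward:.1f}")

env.close()
```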

Space

With a more developed space industry, the public may start to see faster internet with wider coverage (for example, SpaceX’s Starlink service), vast reservoirs of resources made available through asteroid mining, and whole hosts of other technologies, much as all sorts of everyday items have already come out of NASA projects.

AI in Space

Why does AI matter here? The answer, as it always seems to be with data science and AI, is data.

If we thought there was a lot of data on Earth, now we have a whole universe of data to explore, enabled by more advanced space technologies. Far too much data for humans to analyze and draw connections from; the perfect amount for AI.

AI will be able to analyze data about everything, from more advanced views of Earth through satellites, to information about extraterrestrial soil and rocks, to data about strange stars light-years away. No human, even with “perfect” visualization algorithms, can look through all this data with the meticulousness and power of AI.

Apart from that, AI (more specifically RL) may assist in designing rockets. However, I find this slightly less likely to happen on a large scale in the 2020s than what I’ve described above.


Conclusion

The above were my predictions for the next decade of AI — what sort of advancements we will see and how those advancements will affect our lives.

It may turn out that all these predictions are utterly wrong, and if they are, so be it. It feels good to speculate about what the future may have in store for humanity.


One thing, though: none of these predictions will come remotely true without us working toward them. So with that said, use your skills and build the future.

And as always, until next time,

Happy coding!


Thank you so much for 100 followers, I couldn’t have done it without each one of you. Happy New Year’s!
