AI has seen a tremendous amount of improvement in the last 10 years and has the power to absolutely transform our lives within the next decade.
In this article I have compiled my top predictions for the future of artificial intelligence, how AI will change our lives and when all of this will happen.
In this article, we will cover:
- Machine Learning Frameworks
- Chatbots and Virtual Assistants
- AI Explainability
- The Medical Industry
- Reinforcement Learning
Machine Learning Frameworks
Behind every AI application is a framework powering it.
TensorFlow 2.0/Keras vs. PyTorch
Personally, I will continue using TensorFlow 2.0/Keras (which I will just refer to as TensorFlow from now on) for the majority of my projects, as far as I can see.
I find TensorFlow much more readable than PyTorch and more straightforward for writing quick examples, which is a high priority for explaining ideas to the public.
On top of this, TensorFlow has TensorBoard, one of the most potent visualization tools available for machine learning. PyTorch gained an official TensorBoard integration (torch.utils.tensorboard) only recently, and community solutions remain common.
In the 2020s, TensorFlow and PyTorch will likely remain the two dominant frameworks for machine learning — TensorFlow for more broad applications and PyTorch for more research-oriented uses.
As of late 2019, there is no niche for a new machine learning framework to occupy; you can already get pretty much everything done with TensorFlow or PyTorch. This means that we probably won’t be seeing any new large ML frameworks for Python anytime soon.
As AI gets to the point where it will start to become more useful to the public, a large number of startups will integrate AI applications into their apps. This will drive the demand for mobile ML frameworks, i.e., TensorFlow Lite and PyTorch Mobile.
Because smartphones have much less computing power than the workstations ML models are developed on, I believe there will be a considerable amount of development towards less computationally expensive models.
This may take the form of model pruning, or of redesigning (or designing entirely new) models that require less computation. The latter can be seen in examples such as ALBERT, which beats the previous state of the art, BERT, on many NLP tasks while having only a fraction as many parameters.
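To give a sense of the pruning idea: magnitude pruning zeroes out the weights closest to zero, on the assumption that they contribute least to the output. A minimal sketch in plain Python (the weights and sparsity level are invented for illustration):

```python
def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with the smallest magnitude."""
    ranked = sorted(abs(w) for w in weights)
    cutoff = int(len(ranked) * sparsity)
    threshold = ranked[cutoff] if cutoff < len(ranked) else float("inf")
    return [0.0 if abs(w) < threshold else w for w in weights]

# Half of these toy weights get zeroed; in a real model, pruned weights
# can be stored sparsely and skipped at inference time.
pruned = magnitude_prune([0.9, -0.05, 0.4, 0.01, -0.7, 0.03], sparsity=0.5)
```

Real pruning tools (for example, in the TensorFlow Model Optimization toolkit) apply this gradually during training, but the core criterion is the same.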
Python (along with other languages like R) has been the go-to language in data science for quite some time now. With that said, a common complaint about Python is its speed: Python is slow.
To remedy this, Julia was created. Julia was designed with the speed of low-level languages like C in mind, while also having the high-level syntax of a language like Python.
Julia already has support for data visualization, machine learning tools, differential equation solvers, and even support for user interfaces. It can also be statically compiled and deployed on web servers without much trouble. Python can even interact with Julia through interfaces such as PyJulia.
Over the next decade, Julia will undoubtedly grow in capabilities and popularity. Whether or not it will be able to overtake Python is hard to say, but I believe that it will certainly get close.
Chatbots and Virtual Assistants
Chatbots have existed in their basic forms for quite some time now. We can laugh at Siri and Cortana’s programmed attempts at emotion for now, but such systems may evolve into nearly human companions sooner than we may think.
By the end of the 2020s, I believe that chatbots and virtual assistants will have a profound effect on our lives. Here’s why:
Natural Language Processing (NLP)
I would tell you what model is at the top of the GLUE leaderboards, but by the time you read this, it’s probably changed a few times.
That’s how fast state-of-the-art NLP systems are being developed.
Within a few years, I predict that these sorts of NLP systems will be advanced and trustworthy enough to be widely deployed in traditional virtual assistants such as Siri and Cortana.
This will allow these rudimentary chatbots to handle a much greater array of tasks and to respond with information in a way that, currently, only humans can.
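For contrast, today's rudimentary chatbots largely map requests onto a fixed set of hand-written intents, roughly like this sketch (the intents and keywords here are invented for illustration):

```python
# A hypothetical, hand-written intent table -- the opposite of learned NLP.
INTENTS = {
    "weather": ["weather", "forecast", "rain"],
    "timer": ["timer", "alarm", "remind"],
}

def match_intent(utterance):
    """Return the first intent whose keyword appears in the utterance, else None."""
    words = utterance.lower().split()
    for intent, keywords in INTENTS.items():
        if any(kw in words for kw in keywords):
            return intent
    return None
```

Anything outside the table simply fails, which is why modern learned NLP systems are such a leap for assistants.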
Text to Speech (TTS)
No chatbot could pass a Turing test sounding as robotic as they currently do. And without a genuinely human voice, the quality of interaction between a chatbot and a human will be forever limited.
Recent advances in TTS take us a step closer to solving this. Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis by Jia, Zhang, and Weiss et al. introduces an approach (which I briefly cover in my article here) for cloning a voice given a few seconds of input data, and synthesizing human-sounding audio.
If you would like to take a listen to some samples created using this method, you can find them here. The samples created through speakers in the VCTK dataset are truly incredible.
Trying to make chatbots human hasn’t seemed to work too well.
So far, most attempts fall into the uncanny valley; too similar to humans to be obviously robotic, but not close enough to be human. Just unsettling.
Not to mention, the field of robotics must take significant strides (which it has been doing) to be able to navigate a home environment, which is necessary for a truly humanoid robotic assistant to become widespread. I do not see robotics completing such tasks before the late 2020s or 2030s.
Because of all this, the most successful virtual assistants will be just that, virtual, possibly having some human features like an animated face.
With the power of much more advanced NLP and TTS, the virtual assistants of the future will have much more control over our computers, and quite possibly our homes. Our voice may even become a third medium (along with our keyboards and mice) for interacting with our computer.
- “Hey ____, play that video I watched earlier about machine learning on my TV”
- “Hey ____, summarize this article for me”
- “Hey ____, find a study on the effect of sleep on productivity, and pull it up on my smaller monitor”
Not to mention hundreds more capabilities that even Iron Man couldn’t have thought of when designing Jarvis.
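Commands like those above would ultimately have to be mapped onto concrete actions on the machine. A toy dispatcher, assuming a hypothetical pair of supported actions:

```python
import re

# Hypothetical command patterns an assistant might support.
COMMANDS = [
    (re.compile(r"play (?P<what>.+) on my (?P<device>.+)"), "play_media"),
    (re.compile(r"summarize (?P<what>.+)"), "summarize"),
]

def dispatch(request):
    """Return (action, captured arguments) for the first matching pattern."""
    for pattern, action in COMMANDS:
        match = pattern.search(request.lower())
        if match:
            return action, match.groupdict()
    return "unknown", {}
```

A real assistant would replace the regexes with a learned NLP model, but the output contract (an action plus its arguments) would look much the same.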
Behind every machine learning project lies computing power.
Currently, NVIDIA practically owns the market for ML GPUs. However, it is possible that a competitor, e.g., AMD, begins to move more into the ML arena.
I’m no economist, but I predict that increased competition between the two (or more) companies would significantly increase the rate of innovation while decreasing costs.
I can’t say with any certainty that this will happen, but if it does, reduced GPU prices will make high-level deep learning much more accessible to the public.
Tensor Processing Units (TPUs)
Announced in 2016, Google’s Tensor Processing Unit (TPU) is a chip designed specifically for the TensorFlow framework. TPUs have remained accessible to the public via Google’s cloud services; however, Google has made it clear they do not plan to sell TPUs commercially.
It is highly unlikely that the public will ever be able to get their hands on a TPU. Still, it is within the realm of possibility that some company will create a new form of hardware, marketed publicly, explicitly designed for ML technologies.
Quantum computing has the power to speed up computations beyond anything even imaginable today. On top of that, quantum computing has merged with the study of AI into quantum neural networks, or QNNs for short.
Some argue that the idea of consciousness is inherently quantum, and can thus only be modeled on quantum devices. Whether or not this is true, quantum computers allow for us to model the real world in its true form, not abstracted away by 1s and 0s.
With this said, I don’t believe quantum computers will truly become useful anytime soon.
Google’s announcement a few months ago that they had achieved “quantum supremacy” got a lot of people hyped up. But, as Scott Aaronson says in his blog, Google’s claim of quantum supremacy is equivalent to the Wright Brothers’ flight at Kitty Hawk, “about which rumors and half-truths leaked out in dribs and drabs between 1903 and 1908, the year Will and Orville finally agreed to do public demonstration flights.”
Google’s achievement doesn’t mean we will have quantum computers within the next decade; it just means they’re possible.
AI Explainability (XAI)
Personally, I don’t think explainable AI will be as important as many make it out to be. Don’t get me wrong, explainable AI will undoubtedly be required in some forms, e.g., healthcare, hiring, and law, but this doesn’t apply to everything.
For example, if AI can predict the weather with a tested 99.99% accuracy, people won’t care why it works; they will care about what it says.
Trusting something we don’t understand sounds ridiculous. However, researchers still don’t fully understand how some widely used drugs, e.g., general anesthetics, actually work, but we nonetheless rely on them every day.
Only huge corporations and governments will have the money and time to invest in explainable AI. Startups will continue to use unexplained AI simply because they can’t afford the resources to develop explainable alternatives. If companies like Google can sell tools that explain existing models at a price cheap enough to add real value to their buyers’ solutions, explainable AI will develop. If not, it will advance much more slowly than many people believe.
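For a sense of how cheap such tools could be: for linear models (and for the local linear approximations that tools like LIME build around complex models), an explanation is just each feature's weighted contribution to the score. A toy sketch with invented weather-model weights:

```python
def explain_linear(weights, features, names):
    """Per-feature contribution of a linear model: weight * feature value."""
    contributions = {n: w * x for n, w, x in zip(names, weights, features)}
    score = sum(contributions.values())
    return score, contributions

# Invented weights and readings, purely for illustration.
score, contribs = explain_linear(
    weights=[2.0, -1.0, 0.5],
    features=[3.0, 4.0, 2.0],
    names=["humidity", "pressure", "wind"],
)
```

The contributions dictionary is the "explanation": which inputs pushed the prediction up or down, and by how much.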
The Medical Industry
Throughout the 2020s, AI will slowly become more prominent in medicine; however, I do not see AI becoming a standard part of modern medicine until the late 2020s or even later.
Despite this, in countries where the supply of modern medicine is limited, e.g., parts of sub-Saharan Africa, AI will come into play much faster. AI can provide large-scale diagnoses at an incredibly cheap cost (once the model is built), and combined with fewer regulations, this makes such countries perfect candidates for AI-based diagnosis and treatment.
Although it will take a while for AI to enter clinics and doctors’ offices around the world, tools that provide diagnoses without you having to leave your house will likely become prominent more quickly.
As we discussed earlier, NLP is developing rapidly, which enables these sorts of systems. Such systems may grow slowly at first; the real boom will come when well-known medical brands officially endorse them, and that will likely not happen until such systems are explainable.
Other than diagnosing patients and providing according treatments, AI may be the one creating the treatments.
With developments such as Google’s AlphaFold for simulating protein folding, a reality where AI can discover new medicines and treatments is getting closer every day.
At its current state, AI is not advanced enough to provide real assistance to doctors and researchers, but within a few decades, and possibly by the end of the 2020s, we may see AI creating cures to diseases humans could have only dreamed of inventing.
Teachers will not be replaced anytime soon.
I believe that education will not be greatly affected by AI in the 2020s, and if it is, that will come much closer to the 2030s.
With this said, AI will revolutionize online learning much sooner than expected. Online learning has the power of statistics about its users, and what works best with large amounts of data? AI.
With AI, online learning can spot correlations in what works best better than any human can. AI can understand each individual learner far better than any human and can assign them materials accordingly.
Students will be able to learn in their own way with AI administering human-made resources in the scientifically optimal way.
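At its simplest, such a system tracks per-topic performance and serves material for the weakest topic first; a minimal sketch with invented topics and scores:

```python
def next_topic(scores):
    """Pick the topic with the lowest average score to study next."""
    averages = {topic: sum(s) / len(s) for topic, s in scores.items()}
    return min(averages, key=averages.get)

# Invented per-topic quiz scores for one student.
student_scores = {
    "algebra": [0.9, 0.8],
    "geometry": [0.4, 0.5],
    "statistics": [0.7, 0.6],
}
```

A production system would model far more than averages (forgetting curves, question difficulty, and so on), but the loop of measure, rank, and assign is the core of adaptive learning.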
Reinforcement Learning
Reinforcement learning has made incredible progress in the past decade, and with AlphaGo, AlphaStar, and OpenAI Five beating human champions at Go, StarCraft II, and Dota 2 respectively, I don’t see it slowing down anytime soon.
RL will advance to a point where it can learn from much less data, and incorporate prior experiences in its learning. These two steps will allow RL to see more real-world applications and will let it enter mainstream applications.
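As a baseline for how data-hungry today's RL is, here is tabular Q-learning, the textbook algorithm underlying many of these systems, on a toy five-state corridor; notice that even this trivial task takes repeated episodes of experience to learn (all hyperparameters are illustrative):

```python
import random

def train_q_learning(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning on a 5-state corridor: start at state 0,
    reward 1.0 for reaching state 4, actions are move left/right."""
    n_states, moves = 5, [-1, +1]
    q = [[0.0, 0.0] for _ in range(n_states)]
    rng = random.Random(0)  # seeded for reproducibility
    for _ in range(episodes):
        state = 0
        while state != 4:
            # Epsilon-greedy action selection (ties break toward "right").
            if rng.random() < epsilon:
                action = rng.randrange(2)
            else:
                action = 0 if q[state][0] > q[state][1] else 1
            nxt = min(max(state + moves[action], 0), n_states - 1)
            reward = 1.0 if nxt == 4 else 0.0
            # Standard Q-learning temporal-difference update.
            q[state][action] += alpha * (reward + gamma * max(q[nxt]) - q[state][action])
            state = nxt
    return q
```

After training, the learned values prefer moving right in every state, which is the optimal policy; the data-efficiency advances predicted above are about getting results like this from far fewer interactions.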
There are a lot of different available frameworks for RL, and unlike TensorFlow and PyTorch (and a few others) for general machine learning, there seem to be no clear winners.
Despite this, as more and more is added to these frameworks and they become more well known, the best ones will rise to the top and become the “TensorFlows” and the “PyTorches” of RL. I suspect that the clear separation of these frameworks will happen within the 2020s.
I believe that space travel will truly begin to take off in the 2020s. With NASA’s Artemis missions aiming to return humans to the Moon by 2024 and private corporations like SpaceX rapidly developing space technology, it won’t take long for the space industry to flourish.
With a more developed space industry, the public may start to see faster internet with wider coverage (for example, SpaceX’s Starlink service), vast reservoirs of resources made available through asteroid mining, and a whole host of other technologies, as seen in the many products already spun off from NASA projects.
AI in Space
With all of the great developments space will bring humanity, what role will AI play?
The answer, as it always seems to be with data science and AI, is data.
If we thought there was a lot of data on Earth, now we have a whole universe of data to explore, enabled by more advanced space technologies. Far too much data for humans to analyze and draw connections from; the perfect amount for AI.
AI will be able to analyze data about everything, from more advanced views of Earth through satellites, to information about extraterrestrial soil and rocks, to data about strange stars light-years away. No human, even with “perfect” visualization algorithms, can look through all this data with the meticulousness and power of AI.
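A concrete example of the kind of sifting AI does at scale is automated anomaly detection: flagging, say, a star whose brightness dips far from its baseline. A stdlib sketch over made-up brightness readings:

```python
from statistics import mean, stdev

def find_anomalies(readings, threshold=2.0):
    """Return indices of readings more than `threshold` standard
    deviations away from the mean of the series."""
    mu, sigma = mean(readings), stdev(readings)
    return [i for i, r in enumerate(readings) if abs(r - mu) > threshold * sigma]

# Invented normalized brightness readings; the dip at index 4 is the
# kind of signal a planet transit would produce.
brightness = [1.00, 1.02, 0.99, 1.01, 0.35, 1.00, 0.98]
dips = find_anomalies(brightness)
```

Real pipelines use far more robust statistics and learned models, but the principle, letting the machine scan what no human team could, is the same.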
Apart from that, AI (more specifically RL) may assist in designing rockets. However, I find this slightly less likely to happen on a large scale in the 2020s than what I’ve described above.
The future of AI is bright, and as it advances, it is destined to take a greater and greater hold in our daily lives.
The above were my predictions for the next decade of AI — what sort of advancements we will see and how those advancements will affect our lives.
It may turn out that all these predictions are utterly wrong, and if they are, so be it. It feels good to speculate about what the future may have in store for humanity.
One thing, though: none of these predictions will turn out remotely true without us working towards them. So with that said, use your skills and build the future.
And as always, until next time,
Thank you so much for 100 followers, I couldn’t have done it without each one of you. Happy New Year’s!