AI trends to pay attention to in 2018
Let’s take a look at the top trends in AI that will be making a boom in the year to come.
As 2017 makes way for the new year, we’d like to look at the main AI trends that will make headlines in 2018, a year in which AI will continue to be as relevant as ever.
Let’s start by having a look at what the researchers at Gartner had to say in their “Top 10 Strategic Technology Trends for 2018” report, where AI Foundation is listed as number one:
The ability to use AI to enhance decision making, reinvent business models and ecosystems, and remake the customer experience will drive the payoff for digital initiatives through 2025.
Gartner, December 2017
More specifically, Gartner sees Machine Learning systems as the most concrete application of Artificial Intelligence. These automatic learning systems are evolving very quickly and, although there is still a long way to go, companies looking to stay at the forefront of digital transformation should factor this technology into their business development plans for 2018. According to Gartner’s study, this type of tech will be consolidated by 2025.
If you're looking to understand more about how Machine Learning works, check out Machine Learning — The conjuring code Episode #1 by Vishal Ranjan, where he runs you through the basics of this exciting tech.
But Gartner is not alone in thinking that systems that learn, adapt and potentially act autonomously will be a great battlefield for technology vendors, at least until 2020. Trend studies carried out by PwC also concur that using AI to improve decision making and to reinvent business models and working ecosystems will shape industries across the board by 2025.
Let’s discuss some of the most exciting AI trends that PwC has identified for 2018.
Deep learning theory: demystifying how neural networks work
According to PwC, deep neural networks, which mimic the human brain, have demonstrated their ability to “learn” from image, audio, and text data. This gives them the potential to be adapted to solve different tasks, which boosts the technology’s success rate.
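To make “learning from data” concrete, here is a toy sketch: a single artificial neuron trained by gradient descent to reproduce the logical AND function from labelled examples. Real deep networks stack thousands of such units and learn from images, audio and text rather than four tiny examples, but the underlying principle is the same.

```python
import math
import random

# Labelled examples of the AND function: inputs -> target output.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # weights, randomly initialised
b = 0.0                                             # bias
lr = 0.5                                            # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Gradient-descent training loop: nudge the weights after each example.
for _ in range(5000):
    for (x1, x2), target in data:
        pred = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = pred - target            # cross-entropy gradient for a sigmoid output
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

for (x1, x2), target in data:
    pred = sigmoid(w[0] * x1 + w[1] * x2 + b)
    print((x1, x2), round(pred))       # rounded predictions match the AND targets
```

The neuron was never told the rule for AND; it recovered it purely from the labelled data, which is the essence of the approach PwC describes.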
Deep reinforcement learning: interacting with the environment to solve business problems
Kai Arulkumaran believes that deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher-level understanding of the visual world. Currently, deep learning is enabling reinforcement learning to scale to problems that were previously intractable, such as learning to play video games directly from pixels.
A great example of this technology is the famous AlphaGo program, which beat a human champion at the board game Go.
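The core trial-and-error idea can be shown without any deep network at all. Below is a minimal tabular Q-learning sketch (plain reinforcement learning, not the deep variant used by AlphaGo): an agent in a five-cell corridor discovers, purely from reward feedback, that walking right reaches the goal.

```python
import random

# A 5-cell corridor: the agent starts somewhere, and a reward of 1
# waits at the last cell. Actions: -1 (step left) or +1 (step right).
random.seed(1)
n_states, actions = 5, [-1, +1]
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}  # value table
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate

for _ in range(2000):                               # episodes
    s = random.randrange(n_states - 1)              # random non-goal start
    while s != n_states - 1:
        # Epsilon-greedy: mostly exploit the table, sometimes explore.
        if random.random() < eps:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)       # walls clamp the move
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Q-learning update: move the estimate toward reward + discounted future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
        s = s2

policy = [max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)]
print(policy)   # learned greedy action for each non-goal state
```

Deep reinforcement learning replaces the lookup table with a neural network, which is what lets the same principle scale from a five-cell corridor to raw game pixels.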
Lean and augmented data learning: addressing the labeled data challenge
Sebastien C. Wong from the University of South Australia describes data augmentation as a regularizer that prevents overfitting in neural networks and can help improve performance on imbalanced-class problems. This helps address the biggest challenge in machine learning (deep learning in particular): the availability of large volumes of labeled data to train the system.
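Here is a toy sketch of the idea, assuming tiny 3×3 grayscale “images” with pixel values in [0, 1]: from one labelled example we manufacture extra training examples via a label-preserving horizontal flip and small pixel jitter, multiplying the labelled data at no annotation cost.

```python
import random

def hflip(img):
    """Mirror the image left-to-right; edge-detection labels survive a flip."""
    return [list(reversed(row)) for row in img]

def jitter(img, rng, amount=0.05):
    """Add small random noise to each pixel, clamped to the valid [0, 1] range."""
    return [[min(1.0, max(0.0, px + rng.uniform(-amount, amount)))
             for px in row] for row in img]

def augment(dataset, copies=3, seed=0):
    """Expand each (image, label) pair into 1 original + 1 flip + `copies` jittered versions."""
    rng = random.Random(seed)
    out = []
    for img, label in dataset:
        out.append((img, label))
        out.append((hflip(img), label))
        for _ in range(copies):
            out.append((jitter(img, rng), label))
    return out

data = [([[1, 0, 0], [1, 0, 0], [1, 0, 0]], "left-edge")]
print(len(augment(data)))   # 1 labelled example becomes 5
```

Which transformations preserve the label is domain knowledge (a flipped digit “6” is no longer a “6”, for instance), so real augmentation pipelines are chosen per task.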
Automated machine learning (AutoML): model creation without programming
Google’s approach to AutoML works by “allowing a controller neural net to propose a child model architecture, which can then be trained and evaluated for quality on a particular task. That feedback is then used to inform the controller how to improve its proposals for the next round. We repeat this process thousands of times — generating new architectures, testing them, and giving that feedback to the controller to learn from”.
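A heavily simplified sketch of that loop follows. Google’s real controller is a recurrent neural net trained with reinforcement learning; here the “controller” just mutates the best configuration found so far, and the scoring function is a stand-in for actually training and evaluating a child model. The search space and score below are invented for illustration.

```python
import random

random.seed(42)
SEARCH_SPACE = {"layers": [1, 2, 3, 4], "width": [16, 32, 64, 128]}

def evaluate(cfg):
    # Stand-in for "train the child model and measure its accuracy".
    # Invented landscape: the best architecture is 3 layers of width 64.
    return 1.0 - abs(cfg["layers"] - 3) * 0.1 - abs(cfg["width"] - 64) / 256

def propose(best):
    """Controller step: random config at first, then mutate one choice of the best."""
    if best is None:
        return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
    child = dict(best)
    key = random.choice(list(SEARCH_SPACE))
    child[key] = random.choice(SEARCH_SPACE[key])
    return child

best_cfg, best_score = None, -1.0
for _ in range(500):                 # "thousands of times" in the real system
    cfg = propose(best_cfg)          # controller proposes a child architecture
    score = evaluate(cfg)            # child is "trained and evaluated"
    if score > best_score:           # feedback informs the next proposals
        best_cfg, best_score = cfg, score

print(best_cfg, round(best_score, 2))
```

The feedback loop is the essential part: each evaluation result steers what gets proposed next, so the search concentrates on promising regions instead of sampling blindly.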
Digital twins: virtual models of physical objects
Digital twins are a metaphor used in the design of IoT (Internet of Things) architectures that has proven quite useful in practice. The idea is very simple: for each physical object involved in the system, a virtual twin is created.
PwC describes a digital twin as a virtual model used to facilitate detailed analysis and monitoring of physical systems. The concept of the digital twin originated in the industrial world, where it has been used widely to analyze and monitor things like windmill farms or industrial systems.
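A minimal sketch of the idea, using a hypothetical wind turbine with invented telemetry fields and thresholds: the twin mirrors every state report from the physical asset, so all analysis and monitoring runs against the virtual copy rather than the hardware.

```python
class TurbineTwin:
    """Virtual twin of one physical wind turbine (illustrative only)."""

    def __init__(self, turbine_id, max_rpm=18, max_temp_c=80):
        self.turbine_id = turbine_id
        self.max_rpm = max_rpm
        self.max_temp_c = max_temp_c
        self.history = []            # every telemetry report from the field

    def update(self, rpm, temperature_c):
        """Ingest a telemetry reading sent by the physical turbine."""
        self.history.append({"rpm": rpm, "temperature_c": temperature_c})

    def alerts(self):
        """Detailed analysis happens on the twin, not on the physical asset."""
        found = []
        for i, state in enumerate(self.history):
            if state["rpm"] > self.max_rpm:
                found.append((i, "overspeed"))
            if state["temperature_c"] > self.max_temp_c:
                found.append((i, "overheating"))
        return found

twin = TurbineTwin("WT-07")
twin.update(rpm=12, temperature_c=55)   # normal reading
twin.update(rpm=21, temperature_c=85)   # both thresholds exceeded
print(twin.alerts())                     # -> [(1, 'overspeed'), (1, 'overheating')]
```

In a real deployment the twin would also carry a physics or ML model of the asset, letting operators simulate “what if” scenarios before touching the machine.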
Explainable AI: making machine learning transparent
Finally, Explainable AI is a relatively new approach that seeks to develop machine learning techniques that produce more explainable models while maintaining prediction accuracy. This is relevant because, as PwC puts it, AI that is explainable, provable, and transparent will be critical to establishing trust in the technology and will encourage wider adoption of machine learning techniques.
DARPA has pioneered a program called XAI (Explainable Artificial Intelligence) whose central purpose is to increase the effectiveness of AI-operated machines while allowing the user to receive explanations of the individual decisions the machine makes, leading to an overall understanding of the system’s behaviour.
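To see what “explaining an individual decision” can look like, here is a toy sketch built around a hypothetical linear loan-scoring model with invented weights. For a linear model the per-feature contributions are exact (weight × value); explanation tools for complex models approximate this same kind of breakdown.

```python
# Invented weights for an illustrative loan-scoring model.
weights = {"income": 0.5, "debt": -0.8, "age": 0.1}
bias = 0.2

def predict_with_explanation(applicant):
    """Return both the score and how much each feature pushed it up or down."""
    contributions = {f: weights[f] * applicant[f] for f in weights}
    score = bias + sum(contributions.values())
    return score, contributions

score, why = predict_with_explanation({"income": 4.0, "debt": 1.5, "age": 3.0})
print(round(score, 2))                              # 1.3
for feature, c in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature}: {c:+.2f}")                   # largest influences first
```

Instead of a bare score, the user sees that income pushed the decision up and debt pulled it down, which is exactly the kind of per-decision transparency XAI is after.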
As you can see, AI will continue to make headlines as a protagonist of the major tech developments of 2018. Currently, 2.5 billion gigabytes of data are generated every day, and 40 zettabytes are expected by 2020; you can bet your hat that all this data will be processed with the help of AI-based solutions.
Catch ya, in the future brought to us all by the awesomeness of AI! 👋