10 Machine Learning, Artificial Intelligence Advances in 2017

Alejandro Alex Jaimes
Jan 5, 2018 · 5 min read

2017 was a big year for Artificial Intelligence (AI) and Machine Learning (ML), and although some of it was pure hype, a lot of what was achieved will be significant as the field continues to grow and have an impact on every industry. Below I summarize what I consider the main achievements or events of the year, and briefly explain why they’re important. The list includes technical achievements as well as commercial ones because, in the end, the real impact of technological transformation happens when technology reaches the masses. While some of these advances are preliminary in nature, and most were not realized from scratch in 2017, they plant important seeds for the future.

  1. ML with little supervision. AlphaGo Zero and CMU’s Libratus made important progress in problem solving. AlphaGo Zero learned by playing against itself rather than relying on training data, while Libratus mastered Poker, a game of “hidden” (imperfect) information. These advances are significant because they reduce the reliance on training data and on “perfect” information, two major limitations of current ML systems. They encourage further research in these areas and open the door to the many applications in which training data is scarce and information is imperfect (which happens to be the case in almost everything!). A toy self-play sketch follows this list.
  2. AI got more creative. We saw a much wider range of applications of Generative Adversarial Networks (GANs), a technique in which one network judges what another network generates (see the minimal GAN sketch after this list). Impressive results were obtained in automatic music composition and in generating paintings and realistic images, among others. The use of AI in creative endeavors will push us to be even more creative, and to question the authenticity of the content we are presented with. We’re not quite there yet, but fake images and videos in news settings will likely be a challenge we’ll face in the near future.
  3. Vision and Speech recognition on demand. A lot of the progress in AI in recent years has its roots in Computer Vision (labeling objects in images and video) and Speech Recognition. In 2017, these technologies took a stronger step toward commoditization as cloud providers began offering them as on-demand services, which has far-reaching implications for future applications across several industries. Both are still active research areas, and only a few companies have the technical power to develop and commercialize the algorithms. But it is easier than ever for any developer or business to use these services without in-house expertise (a hypothetical API call is sketched after this list), creating countless opportunities for new applications of the technology.
  4. Hardware wars heat up. Although it’s still early, advances were made in specialized ML hardware, with several traditional hardware manufacturers announcing chips and initiatives optimized for Machine Learning (Intel, IBM, ARM, AMD), a slew of startups working on different hardware architectures, and new players in the chip space such as Google with its TPUs. The flurry of activity in this space is significant because, as competition to build better hardware intensifies, prices will continue to fall while performance rises, opening the door to new and exciting applications.
  5. ML tackled data structures. Data structures are critical in most large-scale computing tasks, and in 2017 researchers at Google applied Deep Learning to index structures, training models that learn the distribution of the data in order to predict where records are stored, and achieving significant performance gains. This ignited a heated debate on whether ML can automatically replace or optimize index structures (a simplified sketch follows this list). It is interesting because it’s yet another new area of application for Deep Learning, which could translate into very significant gains in performance and savings, possibly shaking up that part of the market.
  6. Machine Translation. In 2017 we saw significant activity and continued improvements in Neural Machine Translation, which is based on Deep Learning, with promising gains in speed and accuracy in both academic and industrial systems. Reaching near-human performance on automatic translation has significant economic implications at very large scale. On one hand, it increases access to information, which is necessary for development; on the other, it facilitates real-time communication. As such technologies improve, preserving human languages and human knowledge of multiple languages will become a bigger challenge.
  7. Interpretability in the front row. Although we’re far from understanding how some algorithms actually make decisions, there was renewed attention to, and effort in, understanding Deep Learning systems in particular. This included heated debates on the theoretical aspects as well as progress in understanding when those algorithms fail. Techniques were developed that can fool object-recognition systems by changing a single pixel or adding a “sticker” to the image, among others (see the adversarial-example sketch after this list). While these efforts help us understand how algorithms make decisions, they also spur work on defending AI systems against such attacks, which is critical in many applications, including healthcare and security.
  8. AI entered the home. It was a big year for voice assistants such as Alexa and Google Home as they became mainstream. The success of these products is significant because it’s likely to pave the way for major changes in how we interact with a wide range of devices, at home and on the go. What is most important is not the technology itself, but that these devices have succeeded in changing habits, a very significant shift if the success continues. In the home, such new ways of interacting are a matter of convenience; in many other scenarios they bring significant safety implications (for example, while driving) and efficiency gains (for example, in manufacturing and healthcare).
  9. AI became mainstream. From a second Super Bowl ad featuring AI, to an advertising campaign that cast an AI researcher, to news on an almost daily basis, it was the year in which AI achieved its highest level of public notoriety. This is leading some to think that its magic will solve every problem, and others to dismiss it all as a hyped bubble likely to crash soon. The biggest challenge with this trend is that it has created knowledge vacuums at every level, from developers, executives, and investors, to average workers (some of whom now fear that technology will take away their jobs). In general, the attention is a good thing, but there are risks, including unrealistic expectations in both directions, which could hamper further innovation.
  10. Ethics in AI. Countless conversations and debates ensued, not only on topics such as bias in AI systems, but also on the impact of AI on the future of work. While most of the fears are unfounded, 2017 planted the seeds for seriously considering the impact of AI, whether and how it should be regulated, and other important questions. It’s very early, and as the technology becomes more commonplace we’re likely to see big failures, which might push the ethics conversations further. What’s most positive about the debate, however, is that it brings some of the real issues into the spotlight, at every level.
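
As promised in item 1, here is a toy illustration of learning by self-play. This is not AlphaGo Zero’s actual algorithm (which combines deep networks with Monte Carlo tree search); it’s a minimal sketch of the underlying idea, tabular value learning through self-play on the game of Nim. The game choice, hyperparameters, and names are all my own illustrative assumptions.

```python
# Minimal self-play sketch: tabular value learning on Nim (take 1-3 stones,
# whoever takes the last stone wins). Not AlphaGo Zero's algorithm, just the
# core idea: the agent improves by playing against itself, with no training data.
import random
from collections import defaultdict

ACTIONS = (1, 2, 3)          # stones a player may remove per turn
Q = defaultdict(float)       # shared value table: (stones_left, action) -> value
EPS, ALPHA = 0.1, 0.5        # exploration rate and learning rate (arbitrary)

def pick(stones):
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < EPS:
        return random.choice(legal)              # explore
    return max(legal, key=lambda a: Q[(stones, a)])  # exploit

for episode in range(50_000):
    stones, history = 15, []                     # record (state, action) per move
    while stones > 0:
        a = pick(stones)
        history.append((stones, a))
        stones -= a
    # The player who took the last stone wins (+1). Moves alternate players,
    # so the outcome flips sign as we walk back through the history.
    reward = 1.0
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward

# Optimal play avoids leaving a multiple of 4; from 15 stones, take 3.
print(max(ACTIONS, key=lambda a: Q[(15, a)]))    # expect 3
```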
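
Next, the GAN idea from item 2 in miniature: a generator network tries to produce samples that a discriminator network cannot tell apart from real data. This toy PyTorch sketch matches a one-dimensional Gaussian rather than images or music; the architecture sizes and learning rates are illustrative assumptions, not any published system.

```python
# Toy GAN: the generator G learns to mimic samples from N(4, 1.25) while the
# discriminator D learns to tell real samples from generated ones.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(3000):
    real = 4 + 1.25 * torch.randn(64, 1)   # samples from the target distribution
    fake = G(torch.randn(64, 8))           # generator maps noise to samples

    # Discriminator step: label real data 1 and generated data 0.
    opt_d.zero_grad()
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator output 1 on fakes.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# The mean of generated samples should drift toward the target mean of 4.
print(float(G(torch.randn(1000, 8)).mean()))
```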
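
For item 3, here is what “recognition on demand” looks like from a developer’s side. The endpoint, parameters, and response shape below are entirely made up to illustrate the pattern; real providers each have their own APIs and client libraries.

```python
# Hypothetical example of calling an image-labeling service over HTTP.
# "vision.example.com" and the request/response format are invented for
# illustration; substitute your provider's real API.
import base64
import json
from urllib import request

with open("photo.jpg", "rb") as f:
    payload = json.dumps({"image": base64.b64encode(f.read()).decode()})

req = request.Request(
    "https://vision.example.com/v1/label",   # hypothetical endpoint
    data=payload.encode(),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer <API_KEY>"},
)
labels = json.load(request.urlopen(req))     # e.g. [{"label": "dog", "score": 0.97}]
print(labels)
```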
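
For item 5, a deliberately simplified sketch of the learned-index idea from the Google/MIT paper “The Case for Learned Index Structures”: a model predicts a key’s position in a sorted array, and a bounded search around the prediction corrects the model’s error. The paper uses a hierarchy of models; a single linear fit stands in here as an assumption for brevity.

```python
# Learned-index sketch: replace a B-tree-style lookup with "model prediction
# plus local search within the model's worst-case error".
import numpy as np

keys = np.sort(np.random.lognormal(size=100_000))
positions = np.arange(len(keys))

# "Train": fit key -> position with least squares (stand-in for the paper's
# model hierarchy), then record the model's worst-case prediction error.
slope, intercept = np.polyfit(keys, positions, deg=1)
max_err = int(np.max(np.abs((slope * keys + intercept) - positions))) + 1

def lookup(key):
    guess = int(slope * key + intercept)            # model prediction
    lo = max(0, guess - max_err)                    # search only inside the
    hi = min(len(keys), guess + max_err + 1)        # model's error bound
    i = lo + np.searchsorted(keys[lo:hi], key)      # binary search in the window
    return i if i < len(keys) and keys[i] == key else -1

print(lookup(keys[1234]) == 1234)   # True
```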
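
Finally, for item 7, a small sketch of one well-known way to fool a classifier: the fast gradient sign method. The single-pixel and “sticker” attacks mentioned above work differently; this is just the simplest member of the family. The model here is an untrained toy network, purely for illustration.

```python
# Fast gradient sign method: take the gradient of the loss with respect to
# the *input* (not the weights) and nudge every pixel a small step in the
# direction that increases the loss.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in input image
label = torch.tensor([3])                             # its (assumed) true class

loss = loss_fn(model(image), label)
loss.backward()                                       # populates image.grad
epsilon = 0.05
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# A perturbation this small is nearly invisible to a person, yet on trained
# models it often flips the prediction.
print(model(image).argmax(1).item(), model(adversarial).argmax(1).item())
```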

I hope this list is helpful in identifying what was most important in 2017; there were many other advances, of course. But, looking ahead, how might each of these affect you, your business, and society?

Alejandro Alex Jaimes

SVP of AI & Data Science at Dataminr. Ex-Head of R&D @ DigitalOcean, ex-Director at Yahoo, ex-CTO/Chief Scientist (Machine Learning, Computer Vision)