Machine Learning: What to Watch Out for in 2019

By Ivan Danov, Ines Marusic, Erik Pazos, Maren Eckhoff, Paul Beaumont, Stavros Tsalides, QuantumBlack

We were privileged to recently attend NeurIPS, one of the most important AI and machine learning events, providing an intensive week of workshops, tutorials and guest lectures alongside an archive of freshly published research papers. With thousands of the brightest minds in data science gathered in Montréal, there’s no better place to assess the biggest themes and issues facing data science in the year ahead.

With this in mind, the QuantumBlack team has curated a rundown of the five big themes facing machine learning in 2019, based on what we heard, learned and discussed at the conference.

The Future Is Bright For Meta-Learning

Ivan Danov, Machine Learning Engineer

Historically, machine learning involved training a model for a specific task using a single dataset. This was often done from scratch, training could be time-consuming, and it’s not exactly the fast-paced, adaptive image that springs to mind when we think about sophisticated AI.

As humans, we learn from experience; in effect, we learn how to learn. Passing this ability on to machine learning models is now a key focus of the data science community. We want algorithms that use past learning experiences to adapt to new tasks without being retrained from scratch, and eventually without needing to be fed additional enormous banks of data.
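To make the idea of "learning to learn" a little more concrete, here is a minimal, illustrative sketch of a gradient-based meta-learning loop in the spirit of first-order MAML, using only NumPy and a toy family of linear-regression tasks. The task distribution, model and learning rates are all invented for illustration; this is a sketch of the general mechanism, not a method presented at the conference.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Each 'task' is a 1-D linear regression y = a*x + b with its own (a, b)."""
    a, b = rng.uniform(0.5, 1.5, size=2)
    def draw(n):
        x = rng.uniform(-1, 1, size=(n, 1))
        X = np.hstack([x, np.ones_like(x)])   # add a bias column
        return X, a * x + b
    return draw(10), draw(10)                 # (support set, query set)

def mse_grad(w, X, y):
    """Gradient of mean squared error for the linear model y_hat = X @ w."""
    return 2 * X.T @ (X @ w - y) / len(y)

w_meta = np.zeros((2, 1))                     # shared meta-initialisation
inner_lr, outer_lr = 0.1, 0.01

for _ in range(3000):
    (X_s, y_s), (X_q, y_q) = sample_task()
    # Inner loop: adapt quickly to the new task, starting from w_meta.
    w_task = w_meta.copy()
    for _ in range(5):
        w_task -= inner_lr * mse_grad(w_task, X_s, y_s)
    # Outer loop (first-order approximation): nudge w_meta so that this kind
    # of quick adaptation performs well on the task's held-out query set.
    w_meta -= outer_lr * mse_grad(w_task, X_q, y_q)

print("learned meta-initialisation (slope, intercept):", w_meta.ravel().round(2))
```

The point of the sketch is the two nested loops: the inner loop adapts to a single task from the shared starting point, while the outer loop improves that starting point so adaptation works well across the whole family of tasks.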

This process is known as meta-learning, and it was well represented at NeurIPS. One of the most impressive tutorials I was fortunate enough to attend was the ‘Automatic Machine Learning’ session, where Joaquin Vanschoren presented an extensive overview of the field of meta-learning and the state-of-the-art research currently in development. In the same tutorial session, Frank Hutter revealed that AlphaGo, the computer program created by DeepMind that faced South Korea’s Go grandmaster Lee Sedol, improved its win rate in self-play games from 50% to 66.5% by applying a form of meta-learning just before the match. It went on to record three consecutive victories over one of the most talented human Go players on the planet.

In many ways, the conference was a double-edged sword: while the research was certainly impressive, it’s clear that we are still some way from developing the ultimate meta-learning model. However, to my mind, meta-learning is one of the most exciting and potentially transformative fields in our sector, one that will impact all areas of machine learning and could eventually lead to truly intelligent agents. With so much research currently at a pivotal phase of development, 2019 could be the year that meta-learning makes the leap from the predominantly theoretical to reality!

A Holistic Approach To Algorithmic Fairness

Ines Marusic, Senior Data Scientist

We have previously shared that ensuring AI models remain free from bias is one of the biggest issues facing data scientists, and it proved to be a popular theme at NeurIPS, with more than eight papers and two full-day workshops dedicated to fairness.

Algorithmic fairness is an area that has matured rapidly in recent years as the impact of AI decisions has become more significant, with machines now deciding whether a loan is underwritten or a candidate’s CV is flagged to a prospective employer. It has become increasingly important to ensure these models are making justified calls, free from unintended bias.

At NeurIPS we saw fresh theoretical definitions of fairness proposed, and there was plenty of discussion around which metrics can be used to measure it, whether we’re assessing the risks of discrimination in the datasets used or in the features that differentiate one candidate from another. Progress has also been made on the practical side, with the development of tools that help identify or alleviate issues such as bias and discrimination.
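As an example of what a fairness metric can look like in practice, here is a small sketch computing the demographic parity difference and the disparate impact ratio for a hypothetical loan-approval model. The predictions, group labels and the 0.8 threshold are invented purely for illustration, and these are only two of many competing fairness definitions discussed at the conference.

```python
import numpy as np

# Hypothetical model outputs (1 = loan approved, 0 = declined) and a binary
# protected attribute splitting applicants into two illustrative groups.
predictions = np.array([1, 0, 1, 1, 0, 0, 0, 0, 1, 1])
group       = np.array(["A", "A", "A", "B", "B", "B", "B", "A", "B", "A"])

rate_a = predictions[group == "A"].mean()   # P(approved | group A)
rate_b = predictions[group == "B"].mean()   # P(approved | group B)

# Demographic parity difference: how far apart the approval rates are.
parity_diff = abs(rate_a - rate_b)

# Disparate impact ratio: the "80% rule" flags ratios below 0.8.
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"approval rates: A={rate_a:.2f}, B={rate_b:.2f}")
print(f"demographic parity difference: {parity_diff:.2f}")
print(f"disparate impact ratio: {impact_ratio:.2f}")
```

Even this toy example shows why the choice of metric matters: a model can look acceptable on one definition of fairness while failing another, which is exactly why use-case context and domain expertise are needed.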

To my mind, the standout point from the event was the enormous emphasis on making algorithmic fairness a truly interdisciplinary effort. Data scientists need to collaborate with domain experts in the fields where these models will be implemented, whether that involves law, medicine or financial services. Another takeaway from the conference is that there is no silver bullet when it comes to developing fair machine learning models, as solutions are increasingly use-case specific.

A Compromise In The Explainability Trade-Off

Erik Pazos, Data Scientist

As machine learning models increase in sophistication, they also become less understandable to humans. While black box models such as neural networks are often more performant than linear regressions or decision trees, they also come with an added layer of complexity — and this means there’s a risk that the professionals depending on them cannot interpret them.

Explainability is an area of research that aims to address this trade-off between performance and interpretability, and it commanded a big presence at NeurIPS 2018. There was a particular focus on developing models with built-in explainability, which involves models that have been trained to explain their complex predictions at each step.

It’s clear that many are now thinking about explainability beyond the two usual options: either keeping a model simple enough to understand in the first place, or applying methods to ‘translate’ a machine’s decision after it has made a prediction. I expect that 2019 will see built-in explainability emerge as a natural compromise between the two, and NeurIPS included some fantastic ideas on how this option could provide stability to even the most complex neural network models.
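To illustrate the post-hoc ‘translate the decision afterwards’ family of methods, here is a short sketch of permutation feature importance applied to a black-box model. The synthetic data and the choice of a random forest are arbitrary; this is a generic, well-known technique used for contrast with built-in explainability, not a specific method presented at NeurIPS.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic data: the target depends strongly on feature 0, weakly on
# feature 1, and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = 3 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

black_box = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

def permutation_importance(model, X, y, n_repeats=10):
    """Post-hoc explanation: how much does shuffling each feature hurt accuracy?"""
    baseline = np.mean((model.predict(X) - y) ** 2)
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            drops.append(np.mean((model.predict(X_perm) - y) ** 2) - baseline)
        importances.append(np.mean(drops))
    return np.array(importances)

print("importance per feature:", permutation_importance(black_box, X, y).round(3))
```

The appeal of built-in explainability is that it avoids bolting this kind of after-the-fact analysis onto an opaque model: the explanation is produced as part of the prediction itself.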

More Refined Causal Inference Techniques

Maren Eckhoff, Principal Data Scientist and Paul Beaumont, Senior Data Scientist

Estimating causal effects from observational data is an important open problem. The large and comprehensive datasets available today come with the caveat that they have been collected through natural behaviour rather than scientific testing, and any inference on this data faces the challenge of distinguishing correlation from causation when aiming to recommend interventions. At NeurIPS, new approaches were presented for handling confounding, i.e. how to measure the correct effect a driver has on the target in the presence of other factors that influence both driver and target in the training data.
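To make the confounding problem concrete, here is a small simulated sketch: a confounder drives both the driver and the target, so a naive regression overstates the driver’s effect, while adjusting for the confounder recovers (approximately) the true effect. The numbers and linear model are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Ground truth: the driver has a causal effect of 1.0 on the target,
# but a confounder influences both, inflating the naive estimate.
confounder = rng.normal(size=n)
driver = 2.0 * confounder + rng.normal(size=n)
target = 1.0 * driver + 3.0 * confounder + rng.normal(size=n)

def ols(X, y):
    """Ordinary least squares coefficients via a least-squares solve."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive estimate: regress target on driver only (correlation, not causation).
naive = ols(np.column_stack([driver, np.ones(n)]), target)[0]

# Adjusted estimate: control for the confounder as well.
adjusted = ols(np.column_stack([driver, confounder, np.ones(n)]), target)[0]

print(f"naive effect estimate:    {naive:.2f}")    # biased, well above 1.0
print(f"adjusted effect estimate: {adjusted:.2f}") # close to the true 1.0
```

In real observational data the difficulty, of course, is that the confounders are rarely measured this cleanly, which is why the new approaches presented at the conference are so valuable.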

Causal inference can be greatly assisted by using structural causal models, often represented as directed acyclic graphs, and this is a method we frequently leverage at QuantumBlack. At the conference we saw many interesting advances in structure learning, the methodology used to infer the relationships between variables, and techniques to estimate or predict the causal direction between variables also featured prominently. Assessing and ensuring that models respect causality is becoming increasingly mainstream in data science, and we expect further improvements to the accuracy and effectiveness of these explanatory techniques over the next year.
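As a toy illustration of the conditional-independence reasoning that constraint-based structure learning builds on, the sketch below generates data from a known chain A -> B -> C and shows that A and C look correlated until we condition on B; a structure-learning algorithm would use tests like this to decide which edges to keep in the graph. This is a simplified illustration of the general idea, not the specific techniques presented at the conference or the methods we use at QuantumBlack.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Data generated from a known chain:  A -> B -> C  (no direct A -> C edge).
A = rng.normal(size=n)
B = 0.8 * A + rng.normal(scale=0.5, size=n)
C = 0.8 * B + rng.normal(scale=0.5, size=n)

def residual(y, x):
    """Residual of y after regressing out x (with an intercept)."""
    X = np.column_stack([x, np.ones_like(x)])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return y - X @ beta

# Marginal correlation suggests A and C are related...
marginal = np.corrcoef(A, C)[0, 1]

# ...but they become (almost) independent once we condition on B, which is
# the kind of test constraint-based structure learning relies on.
partial = np.corrcoef(residual(A, B), residual(C, B))[0, 1]

print(f"corr(A, C)     = {marginal:.2f}")  # clearly non-zero
print(f"corr(A, C | B) = {partial:.2f}")   # close to zero -> drop the A - C edge
```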

Reinforcement Learning’s Deadly Triad

Stavros Tsalides, Senior Data Scientist

Reinforcement Learning (RL) constitutes one of machine learning’s key branches and involves developing algorithms that learn and optimise their behaviour without explicit instruction.

Following nature’s paradigm, RL agents (algorithms), much like trained animals, receive rewards depending on their behaviour and learn to maximise success by performing optimal actions. In business applications, reinforcement learning is crucial: ultimately we want a model that learns how to maximise revenues for a company over time, without a human stepping in to teach it.

At NeurIPS, one of the most challenging aspects of RL was explored: the ‘deadly triad’. This essentially concerns how to keep an algorithm learning stably when there are far too many potential situations for it to evaluate exhaustively and it cannot be fed continually with refreshed data. To put it more simply: as humans, if we were to fall down a hole while walking on the pavement, we would likely learn not to fall down it again when treading the same path. But we also learn not to fall down holes on grass, carpet and an array of other surfaces; we generalise from past experience to make fresh evaluations of new situations. For machines, this need to generalise presents an issue, and unless a model has the time to test and learn from near-infinite possibilities, the most complex tasks remain tricky for RL algorithms. The ‘deadly triad’ refers to the instability that can arise when three ingredients are combined in one algorithm: function approximation (to generalise across situations rather than memorise them), bootstrapping (updating value estimates from other estimates rather than waiting for complete outcomes) and off-policy learning (learning from data generated by a behaviour other than the one being optimised).
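For readers newer to RL, the sketch below is a minimal tabular Q-learning loop on a five-state corridor (an invented toy environment). It already contains two of the triad’s ingredients, bootstrapping and off-policy updates, but it stays stable because a lookup table stands in for function approximation; swapping that table for a neural network is exactly where the deadly triad starts to bite.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny 5-state corridor: the agent starts at state 0 and earns a reward of 1
# only when it reaches state 4. Actions: 0 = move left, 1 = move right.
n_states, n_actions = 5, 2
gamma, alpha, epsilon = 0.9, 0.1, 0.1

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward, next_state == n_states - 1  # (state, reward, done)

Q = np.zeros((n_states, n_actions))  # tabular value estimates (no function approximation)

for _ in range(500):
    state, done = 0, False
    while not done:
        # Behaviour policy: epsilon-greedy, with random tie-breaking early on.
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(rng.choice(np.flatnonzero(Q[state] == Q[state].max())))
        next_state, reward, done = step(state, action)
        # Bootstrapping: the update target uses our own estimate of the next state.
        target = reward + gamma * (0.0 if done else Q[next_state].max())
        Q[state, action] += alpha * (target - Q[state, action])
        state = next_state

print("greedy action per non-terminal state (1 = right):", Q[:-1].argmax(axis=1))
```

With only five states the table converges reliably; the trouble described above begins when the state space is far too large to enumerate and the table must be replaced by a learned approximation.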

There are clearly still challenges with this branch of machine learning — however, I was heartened to see plenty of discussion around how we can all work from more accurate baselines and ultimately share progress. Joëlle Pineau called for higher standards and checks when assessing RL research and also asked that researchers provide code for the algorithms used so that experiments can be reproduced and tested. Ultimately, I believe that 2019 will be the year we all become far more rigorous with assessing not just RL results but all machine learning research — and we’ll progress further and faster as a result.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

With so much to discuss, this is only a summary of what we heard at NeurIPS, but it offers a glimpse of the key areas to watch in the year ahead. We’re looking forward to NeurIPS 2019 at the end of the year, and to the fresh developments and challenges that will be tackled between now and then!

If you are interested in learning more about us, please go to our QuantumBlack website, or if you are interested in specific roles, please contact us at careers@quantumblack.com.

QuantumBlack, AI by McKinsey

We are the AI arm of McKinsey & Company. We are a global community of technical & business experts, and we thrive on using AI to tackle complex problems.