As the popularity of machine learning grows, there is rising recognition that it exposes new security and privacy issues in software systems. While artificially intelligent algorithms are providing new opportunities in business and society, addressing these security flaws will be integral to a successful uptake of machine learning and deep learning across industry.

This means that, in addition to the global race to create the best intelligent interfaces, companies and institutions are placing a stronger focus on security than ever before. After all, artificial intelligence will be incredibly vulnerable, and useless to some, if the enormous amounts of data fed into machines are not private and secure.

At the Deep Learning Summit in Singapore, Nicolas Papernot, Google PhD Fellow in Security at Penn State University, will share expertise on security and privacy risks when using machine learning, and how we can protect against them. In his session Nicolas will address the attack surface of systems deploying machine learning; how an attacker can force models to make incorrect predictions; frameworks for learning privately; and more.

I asked Nicolas some questions about his work ahead of the summit on 27–28 April.

What started your work in machine learning?

I joined Patrick McDaniel’s lab in 2014 to work on computer security. A few months after I started, he recommended that I look for potential vulnerabilities in systems built using machine learning. As I was delving into the details of the algorithms used to train newer machine learning architectures like the ones used in deep learning, I stumbled upon a video of a lecture by Geoffrey Hinton. He was explaining how the error of a model can be propagated through its architecture to update its parameters and make better predictions. I then realized that it may be possible to adapt the technique to find damaging perturbations of the inputs. This is how Patrick and I started the research that led to our first paper on the topic in the summer of 2015.
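The adaptation Nicolas describes, reusing the gradient computation from training to perturb the input rather than the parameters, underlies gradient-based attacks such as the fast gradient sign method. As a rough sketch only (the toy logistic-regression weights, input, and step size below are invented for illustration and are not from his paper), the core idea looks like this:

```python
import numpy as np

# Hypothetical fixed weights of a tiny logistic-regression "model".
# Real attacks target trained deep networks, but the mechanics are the same.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Probability that x belongs to class 1."""
    return sigmoid(np.dot(w, x) + b)

def fgsm_perturb(x, y, eps):
    """Step the input in the direction that increases the loss,
    bounded by eps per feature (fast gradient sign method).

    For logistic loss, d(loss)/d(input) = (p - y) * w, which is the
    same gradient machinery training uses, just aimed at x instead
    of at the parameters."""
    p = predict(x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

x = np.array([0.5, -0.5, 1.0])  # a made-up input the model classifies as class 1
y = 1.0                          # its true label
x_adv = fgsm_perturb(x, y, eps=1.0)

print(predict(x))      # confident, correct prediction on the clean input
print(predict(x_adv))  # the perturbed input flips the prediction
```

Taking only the sign of the gradient bounds how far each feature moves, so the perturbed input stays close to the original while still pushing the loss up, which is what makes such examples "damaging" yet hard to spot.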

What are the key factors that have enabled recent advancements in machine learning and deep learning?

The machine learning community benefits from an exceptionally open environment where ideas and algorithms flow freely between research groups, including academia of course but also several industrial research groups. In my opinion, this is one of the key factors that have enabled rapid advancements of the field in the last few years. Historically, benchmarks like the ImageNet dataset for computer vision have fostered a healthy competition between researchers. This is why Ian Goodfellow and I decided to create cleverhans to facilitate the benchmarking of security vulnerabilities in machine learning. We hope that this open-source library will facilitate progress in the many challenging open problems that lie at the intersection of security and machine learning.

How have machine learning advancements affected security and privacy issues?

We have reached sufficient maturity in machine learning research for models to perform very well on many challenging tasks, sometimes surpassing human performance. Hence, machine learning is becoming pervasive in many applications, and is increasingly a candidate for innovative cybersecurity solutions. Yet, as long as vulnerabilities like adversarial examples are not fully understood and mitigated, predictions made by machine learning models will remain difficult to trust. In a way, the security of machine learning is a prerequisite to the application of machine learning to security.

In addition, machine learning algorithms work by studying a lot of data and updating their parameters to encode the relationships in that data. Ideally, we would like the parameters of these machine learning models to encode general patterns ("patients who smoke are more likely to have heart disease") rather than facts about specific training examples ("Jane Smith has heart disease"). Unfortunately, machine learning algorithms do not learn to ignore these specifics by default. If we want to use machine learning to solve an important task, like making a cancer diagnosis model, then when we publish that machine learning model (for example, by making an open source cancer diagnosis model for doctors all over the world to use) we might also inadvertently reveal information about the training set. A malicious attacker might be able to inspect the published model and learn private information about Jane Smith. Interested readers are invited to take a look at our recent paper on the topic.
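The risk Nicolas outlines, a model behaving measurably differently on examples it memorized, is what membership-style inference attacks exploit. The following is a deliberately minimal sketch, not his paper's method: a hypothetical nearest-neighbour "model" stands in for any overfit classifier, and the attacker simply checks whether the model's confidence on a point is suspiciously high.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented training data; a 1-nearest-neighbour "model" that memorizes
# its training set stands in for any classifier that overfits.
train_x = rng.normal(size=(20, 5))

def confidence(memorized, x):
    """Model confidence on x: here, inversely related to the distance
    from the nearest memorized training point. A point the model saw
    during training sits at distance 0 and gets confidence exactly 1."""
    d = np.linalg.norm(memorized - x, axis=1).min()
    return np.exp(-d)

# The attacker's test: query the published model and flag inputs whose
# confidence is suspiciously high as likely training-set members.
member_scores = [confidence(train_x, x) for x in train_x]
outsider = rng.normal(size=5)               # a point the model never saw
outsider_score = confidence(train_x, outsider)

print(min(member_scores))  # every training point scores the maximum
print(outsider_score)      # the unseen point scores strictly lower
```

The gap between the two scores is the leak: nothing about Jane Smith is printed directly, yet her presence in the training set is detectable from the model's behaviour alone. Defenses such as differentially private training aim to shrink exactly this gap.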

What developments can we expect to see in computer security and machine learning in the next 5 years?

The state of adversarial machine learning is at a turning point. We have several effective attack algorithms but few strong countermeasures. Yet, we’ve only scratched the attack surface so we can expect to see many new exploits. Like many disciplines in computer security, we find ourselves in an arms race between those who are identifying attacks (like algorithms for generating adversarial samples) and those building defenses.

Outside of your field, what area of ML and deep learning advancements excites you most?

I find the pace at which progress is made with generative modeling to be fascinating, especially within the framework of generative adversarial networks introduced by Ian Goodfellow et al. In the next few years, I hope, like many other researchers, that we can build a better theoretical understanding of deep learning.

Enjoyed the content? Head over to our RE•WORK blog to read more. Join Nicolas at the Deep Learning Summit by using the discount code MEDIUM20, exclusive for our Medium readers to get 20% off all passes!

There are just three weeks to go until the Deep Learning Summit, taking place alongside the Deep Learning in Finance Summit in Singapore on 27–28 April. Explore how deep learning will impact communications, manufacturing, healthcare, transportation and more. View more information here.

Confirmed speakers include Jeffrey de Fauw, Research Engineer at DeepMind; Vikramank Singh, Software Engineer at Facebook; Nicolas Papernot, Google PhD Fellow at Penn State University; Brian Cheung, Researcher at Google Brain; Somnath Mukherjee, Senior Computer Vision Engineer at Continental; and Ilija Ilievski, PhD Student at NUS.

Tickets are limited for this event. Register your place now.

Can’t join us in Singapore? The Deep Learning Summit will also take place in Boston on 25–26 May, London on 21–22 September, and Montreal on 12–13 October.

Opinions expressed in this interview may not represent the views of RE•WORK. Some opinions may even run counter to the views of RE•WORK, but they are posted to encourage debate and well-rounded knowledge sharing, and to allow alternative views to be presented to the RE•WORK community.