AMLD – Applied Machine Learning Days 2018

Over the past few days I attended the AMLD event in Lausanne, Switzerland, where I held one of the workshops on Saturday, January 27th.

The workshop aimed to present the DATA RING CANVAS, the tool I developed within TOP-IX a couple of years ago, together with my colleague Leonardo, to design and manage data projects. The workshop was not technical, so the room was definitely not fully packed. I can imagine that some other tech topics – PyTorch, TensorFlow, … – generated greater interest among the geek attendees. Nevertheless I really appreciated the opportunity to discuss and share this conceptual framework with such a high-level audience, so I was definitely satisfied with the workshop results. Also, let me thank my colleague Stefania Delprete for supporting me in this session.

The workshop gave me the chance to attend some other tech sessions on Sunday, like the “TensorFlow basic” one, as well as the conference panels scheduled on Monday and Tuesday.

I collected some notes and thoughts on what impressed me during the conference. Just to be clear with readers: this post is not meant to be an exhaustive report of the event, so not all the speakers and panelists will be mentioned in the following lines.

Jeremiah Harmsen – Google

“Big G” has implemented a very nice internal training program named ML ninja rotation, in which engineers and software developers from different Google units can practice Machine Learning for six months with top experts inside the company.

I can imagine the heavy investment this program represents for the company, but according to Jeremiah’s speech it generates amazing results among the rotation participants.

Furthermore, Jeremiah proposed the so-called “wide and deep” paradigm for AI / Machine Learning problems.

“Wide” means learning through memorisation, while “Deep” means being able to generalise to unseen combinations. This paradigm is fully integrated and released inside the TensorFlow package.
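To make the idea concrete, here is a minimal NumPy sketch of a wide & deep forward pass (a toy illustration only, not Google’s implementation; all names and sizes below are made up): a linear logit over sparse crossed features (memorisation) is summed with the logit of a small embedding-based network (generalisation).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "wide & deep" forward pass (illustrative sketch, not Google's code).
n_cross = 100      # vocabulary size of crossed features (wide input)
n_ids = 50         # vocabulary size of categorical ids (deep input)
emb_dim = 8
hidden = 16

W_wide = rng.normal(0, 0.1, n_cross)          # one weight per crossed feature
E = rng.normal(0, 0.1, (n_ids, emb_dim))      # embedding table
W1 = rng.normal(0, 0.1, (emb_dim, hidden))
W2 = rng.normal(0, 0.1, hidden)

def wide_and_deep_logit(cross_idx, id_idx):
    """Sum the wide (linear) and deep (MLP) logits into one joint logit."""
    wide_logit = W_wide[cross_idx].sum()       # memorise sparse crosses
    h = np.maximum(E[id_idx] @ W1, 0.0)        # ReLU hidden layer
    deep_logit = h @ W2                        # generalise via embeddings
    return wide_logit + deep_logit

def predict_proba(cross_idx, id_idx):
    return 1.0 / (1.0 + np.exp(-wide_and_deep_logit(cross_idx, id_idx)))

p = predict_proba(cross_idx=[3, 17], id_idx=42)
```

In the real TensorFlow implementation the two parts are trained jointly; the sketch above only shows the forward pass.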

Olga Russakovsky – Princeton University

Olga gave a nice talk on the “human side in computer vision”, exploring what is currently missing in traditional AI vision systems.

She focused on the importance of building collaborative and interpretable AI systems. This approach would definitely help prevent a loss of trust in these types of systems.

Olga drew attention to two important questions to keep in mind:

  • Who is deciding where the data come from?
  • Who is deciding what problems to work on?

Actually, we can observe a strong bias in who builds and manages AI systems (e.g. the presence of women, as well as African Americans, is very low), while it would be nice to have a more diverse and inclusive community of AI engineers!

To explore some of Olga’s research, I recommend checking out the AI4ALL program.

Claudiu Musat – Swisscom

Claudiu focused on the importance of having useful and positive AI cycles designed around the people.

He also summed up very well some of the common problems a company might encounter when implementing ML projects. The two main categories are:

  • Data constraints
  • Unsuitable algorithms

Claudiu finally presented some nice results from several master’s theses and related research articles, showing Swisscom’s commitment to collaborating with universities.

Aude Billard – EPFL

The main topic of this talk was “Machine learning for robots”.

Here are some of the current open challenges in this space:

  • Data are robot dependent
  • Robots are environment dependent
  • How to interpret outliers

I found the final slide very helpful for understanding the differences between robot and human learning.

Alexandra Gunders – Arundo Analytics

In her effective talk Alexandra highlighted the barriers to address when applying ML in asset-heavy industries. In particular, there are two main gaps between data scientists and engineers:

  1. Aligning expectations
  2. Sharing results (which tools? In which format?)

Soumith Chintala – Facebook – PyTorch Core Team

AI implementations are growing day by day, as are the tools to work with them. Nevertheless we can observe an emerging trend in AI. Let’s call it “the static vs. the dynamic”:

  1. Static means static datasets and static models (train the model, then deploy it to production).
  2. Dynamic means live data plus continuous online training. The model’s prediction capacity changes at runtime. In this case the system might self-add memory based on real-time demand.
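The contrast can be sketched with a toy linear model (a hypothetical minimal NumPy example, not any specific production system): the static model is fitted once on a frozen dataset and then frozen, while the dynamic one keeps updating with a gradient step on each incoming sample.

```python
import numpy as np

rng = np.random.default_rng(2)

# "Static vs. dynamic" in miniature (illustrative sketch only).
d = 3
w_true = np.array([1.0, -2.0, 0.5])   # made-up ground-truth weights

def static_fit(X, y):
    """Static regime: one-shot least-squares fit, then freeze and deploy."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def online_update(w, x, y, lr=0.02):
    """Dynamic regime: one SGD step per live sample; the model changes at runtime."""
    return w - lr * 2 * (x @ w - y) * x

# Static: train once on a frozen dataset.
X0 = rng.normal(size=(100, d))
w_static = static_fit(X0, X0 @ w_true)

# Dynamic: keep learning from a stream of live samples.
w_dynamic = np.zeros(d)
for _ in range(2000):
    x = rng.normal(size=d)
    w_dynamic = online_update(w_dynamic, x, x @ w_true)
```

Both end up near the same weights here, but only the dynamic model would track a target that drifts over time.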

Nicola Rohrseitz – Cisco

In his super-fast talk, Nicola addressed one single big question:

AI adoption correlates very well with digitalisation… so why aren’t many big companies digitalised yet?

Panel: Jeremiah Harmsen, Olga Russakovsky, Claudiu Musat

The speakers’ experience and domain knowledge were definitely not fully exploited in the panel, and some key ML topics were probably not addressed. In any case, some nice considerations were raised during the conversation.

About the skills problem:

  • Changes need time, years… Don’t trust fast changes!
  • The importance of internal training programs within the company.
  • The importance of the relationship between companies and universities.
  • Working with academia requires being prepared to open the code and to dedicate time to publishing papers after the project ends. If companies are not ready for this, it is a huge waste of time for everyone.

About the data problem:

  • Make people the owners of their data. This is the key to mitigating the “lack of data” risk.

About the gap – between expectations and implementation – problem:

  • The importance of having a business person inside the team who understands the real customers’/users’ problems and needs.
  • Start with simple algorithms instead of choosing the “coolest” approaches. Be sure to understand the present before trying to predict the future.

Joanna Bryson – Universities of Bath & Princeton

Joanna’s talk focused mainly on the bias issue.

She confirmed that AI right now contains implicit bias, but at the same time she said that we need to remember that various forms of bias are definitely implicit in our language semantics and are hard-coded in our culture.

There are at least three sources of bias in AI:

  • Implicit: it can be compensated with design and architecture.
  • Accidental: it can be mitigated with proper test and control systems.
  • Deliberate: right now it can be controlled only through a proper regulation framework.

Martin Vetterli – EPFL

Martin’s talk was a nice reflection on the evolution of computer science training. I really appreciated this nice statement:

Computer science is different from informatics!

To face today’s and tomorrow’s challenges, a new pillar named computational thinking has been introduced at EPFL. Computational thinking is about reasoning on how to address problems using the right algorithms and data, and it is not tied to any specific programming language.

The photo here is pretty clear.

Finally he mentioned the DIGITAL GENEVA CONVENTION initiative. This is something I will probably investigate in detail over the coming weeks.

Christopher Bishop – Microsoft Research

What is the right algorithm for a specific problem?

Christopher started by mentioning Wolpert’s “No Free Lunch” theorem, a mathematical way of stating that no universally best algorithm can exist!

To tackle a new application, researchers typically try to map their problem onto one of the existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations.

In his talk Christopher described an alternative methodology for applying Machine Learning, in which the solution is expressed through a compact modelling language, and the corresponding custom Machine Learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios as well as rapid prototyping and comparison of a range of alternative models.

This method is described in a free online book.

Raia Hadsell – DeepMind

Raia opened her talk with a question: “Is it possible to implement end-to-end deep learning for robots?”

A great push to solve this challenge has been given by the adoption of Deep Reinforcement Learning. But some open questions are still present.

I collected here below some of the problems raised by Raia:

  • Catastrophic forgetting: The ability to learn multiple tasks in a sequential fashion is crucial to the development of artificial intelligence. Neural networks are not, in general, capable of this and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models.
  • How to speed up learning? This is particularly relevant when learning to navigate complex environments (e.g. an artificial maze).
  • Using real-world data, which usually contains a lot of noise, to train AI systems.
  • Continuous control and adaptive behaviour.
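Catastrophic forgetting is easy to reproduce even in a toy setting. The following NumPy sketch (a made-up minimal example, not DeepMind’s actual setup) trains a single linear model sequentially on two incompatible regression tasks: once it has learned task B, its performance on task A collapses.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy demonstration of catastrophic forgetting (illustrative sketch only).
X = rng.normal(size=(200, 5))
w_a, w_b = rng.normal(size=5), rng.normal(size=5)   # ground truth for tasks A, B
y_a, y_b = X @ w_a, X @ w_b

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

def train(w, X, y, lr=0.05, steps=300):
    """Plain gradient descent on the mean squared error."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(X)
        w = w - lr * grad
    return w

w = np.zeros(5)
w = train(w, X, y_a)            # learn task A first
loss_a_before = mse(w, X, y_a)  # near zero: task A is learned
w = train(w, X, y_b)            # now learn task B, with no replay of A
loss_a_after = mse(w, X, y_a)   # task A performance collapses
```

The model simply overwrites the weights that encoded task A; techniques for mitigating this (e.g. keeping some weights anchored to their task-A values) were among the directions Raia discussed.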

Panel: Joanna Bryson, Martin Vetterli, Raia Hadsell, Christopher Bishop

Some nice thoughts were discussed here:

  • In order to benefit from AI, we need to better understand the difference between humans and machines in terms of learning.
  • There is a huge mismatch between the people studying AI and ML and those working on real big-challenge problems. Lots of companies are still working on minor issues, with a limited impact.
  • It is important to keep a human-centric vision, even in the future when some great improvements in AI will be inevitable.
  • It is easier to focus on adverse effects than to imagine happy scenarios; nevertheless we must force ourselves to design those positive examples.
  • The gap between the Hype and the Real implementation is still huge.
  • The issue is about who owns the data!
  • It is highly important to publish results and to share them with the community.
  • AlphaGo now plays Go better than humans, but machines didn’t invent Go, and machines probably don’t have fun playing Go the way humans do. Creativity is still a human-only feature.
  • Don’t forget that humans are HIGHLY SPECIALISED, more than what we usually think.
  • Intelligent doesn’t mean human-like.
  • Researchers’ responsibility is not only to create new technologies but also to reduce the digital divide and to foster a participatory dialogue with society.

To conclude, let me thank the organisers for having me and, in general, for the great event. This is only the second edition, but personally I think AMLD has all the foundations to become an awesome European event on applied ML and AI.

Last but not least, an applause goes to Chiara Enderle, who played amazing classical music in the cello interludes during the conference. No Machine Learning, no AI, simply Bach’s music.

Brainstorming about the future and innovation means setting the mindset and the discussion on a “higher” level: in this sense music is absolutely a very nice and effective mind-elevator.