DiSummit Talk Summaries

DiSummit 2019
Jul 12

Cleaner air for Brussels (Jose Gonzalez)

Jose Gonzalez works on a community-driven project around air pollution data gathered by “Les chercheurs d’air”. After filtering and cleaning the data, they found big differences in air pollution depending on location and time of day. The next step is to use machine learning to couple the measurements with extra information, such as traffic data, and explain why pollution varies (and how it can be reduced).

Put Trusted AI to Work for business (Ann-Elise Delbecq)

Ann-Elise Delbecq is putting AI to work for business. In a use case around an application to approve loans, she explained how machine learning can improve speed, convenience, and thus customer satisfaction. It’s still important to think of the four pillars of Trusted AI: Fairness, Explainability, Robustness, and Lineage.

Some human intelligence on ethics and AI (Wannes Rossiers)

Wannes Rossiers explained that we shouldn’t do AI for AI’s sake, but use it wisely. We should act ethically and convince the customer that it adds value. And the human should always be at the centre.

Liability and Risk: Why Explainability & Ethical AI Should Matter to Industry (Rachel Alexander)

Rachel Alexander told us about ethical AI at Omina. They focus on explainability and accountability, so no black-box AI. It’s important to ensure privacy and the ethical treatment of data, which means thinking about an audit system from the get-go.

Human & AI: Collective decision making (Liliana Carrillo)

Liliana Carrillo started off with a small biology lesson on swarm intelligence, where collective behaviour emerges from individual abilities working towards a common goal. This can be applied in a variety of cases, such as ant-inspired optimisation algorithms for routing. People can swarm as well: this can be used to create an artificial expert on top of individual knowledge and get joint intelligence from the whole team.
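
The ant-inspired routing idea can be sketched in a few lines of Python. This is a toy illustration of ant colony optimisation on an invented three-road network, not the algorithm from the talk: ants repeatedly walk from A to D, shorter routes accumulate more pheromone, and later ants are biased towards the better route.

```python
import random

# Hypothetical road network: node -> {neighbour: distance}.
GRAPH = {
    "A": {"B": 2.0, "C": 5.0},
    "B": {"D": 6.0},
    "C": {"D": 1.0},
}

def ant_walk(pheromone, alpha=1.0, beta=2.0):
    """One ant walks A -> D, choosing edges by pheromone level and closeness."""
    path, node = ["A"], "A"
    while node != "D":
        nbrs = list(GRAPH[node])
        weights = [pheromone[(node, n)] ** alpha * (1.0 / GRAPH[node][n]) ** beta
                   for n in nbrs]
        node = random.choices(nbrs, weights=weights)[0]
        path.append(node)
    return path

def path_length(path):
    return sum(GRAPH[a][b] for a, b in zip(path, path[1:]))

def optimise(n_ants=50, n_iters=30, evaporation=0.5):
    """Classic ACO loop: walk, evaporate pheromone, deposit on visited edges."""
    pheromone = {(u, v): 1.0 for u in GRAPH for v in GRAPH[u]}
    best = None
    for _ in range(n_iters):
        paths = [ant_walk(pheromone) for _ in range(n_ants)]
        for edge in pheromone:            # evaporation weakens old trails
            pheromone[edge] *= (1 - evaporation)
        for p in paths:                   # shorter path -> stronger deposit
            for a, b in zip(p, p[1:]):
                pheromone[(a, b)] += 1.0 / path_length(p)
        shortest = min(paths, key=path_length)
        if best is None or path_length(shortest) < path_length(best):
            best = shortest
    return best
```

Here the colony converges on A → C → D (length 6) rather than the greedy-looking first hop to B (total length 8), which is exactly the point of letting the swarm, not any single ant, do the optimising.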

How to control your digital twin? (Christophe Cop)

Christophe Cop talked about the massive amount of data that exists about each person. It is currently held by monopolies or governments. He proposed a system where you, as a person, take control of your data and can monetise it for yourself, with a sort of data bank acting as broker in between. This way your digital twin serves your interests, not those of a government or company.

Introducing Hermes! Predicting which type of business will succeed where (Jan Van Looy)

Jan Van Looy introduced us to Hermes, an application that aims to predict which type of business will succeed based on location and building size. This way it can suggest the types of business that should occupy a vacant lot.

Programming cochlear implants with Artificial Intelligence (Justine Wahtour)

Justine Wahtour explained how cochlear implants can help with hearing loss, but it takes a lot of time to fit it just right. The Fitting to Outcome eXpert (FOX) aims to improve this, by providing intelligent assistance in CI fitting.

Katya Vladislavleva

Katya Vladislavleva explained how they use augmented intelligence: a marriage of BI and AI. Collaboration between the two is key, and they provide a platform that lets them work together, resulting in quick returns on investment and substantial cost savings.

Responsible AI in the era of Human+Machine society (David Bruyneel)

David Bruyneel explained that while 60% of executives believe that adopting AI is necessary, 45% are hesitant to apply it at scale. This is why we should be able to TRUST a machine:

  • Trustworthy
  • Reliable
  • Understandable
  • Secure
  • Teachable

While they use an algorithmic assessment tool to measure this, the people developing the system should still bring a human-centric mindset.

Why do AI and big data need social sciences? (Tuba Bircan)

Tuba Bircan explained how film and TV show us how AI will change society, but can the social sciences impact AI in turn? It’s difficult to translate human constructs into data rules, but it’s important to create AI with human values:

  • Unbiased
  • Fair
  • Ethical
  • Safe

We need to think about the people who will be affected by our code.

What to teach our kids at the age of AI? (Maryse Colson)

Maryse Colson painted a picture of parenting in the age of AI: Netflix as the babysitter, recommendations from Amazon, and birthday reminders from Facebook. She explained that while AI is convenient, it can’t do everything; we still need to teach kids the important things. That’s why we need to understand AI to master its power, and be aware of its risks.

5 reasons why AI is overrated (Jennifer Roelens)

Jennifer Roelens believes in AI in the long run, but there are inflated expectations from customers. She listed reasons that contribute to this and why AI is currently overrated: companies and developers buying into the hype themselves, the paradox of speed, blind trust in algorithms without interpretability, and a knowledge gap between managers and data scientists.

Applied behavioural science and machine learning (Bertrand Fontaine)

Bertrand Fontaine explained how we can mix behavioural science and machine learning in an ethical way. They transform data from low-level sensors (like wearables or smart devices) into actionable insights, use gamified engagement to try to change behaviour, and then measure that change through the same sensors. Since the technology can be used for the wrong goals, and since AI should make life better, they screen companies so they only work on applications that will contribute to this. Their ethical AI is based on the following:

  • Fairness
  • Privacy
  • Explainability
  • Robustness

AI can suggest jobs. Humans want to know why. (Enias Cailliau)

Enias Cailliau showed an application that uses deep learning to parse a CV and find the best-matching jobs based on skills. But we need to explain why someone matches these jobs, and the same analysis can reveal which skills they need to work on for a given job.
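
The explainability idea can be illustrated with a minimal sketch. The skill sets and job names below are invented for the example (a real system would extract them from the CV and vacancies with NLP), but the core point carries over: the same overlap that produces the match score also explains it and exposes the gaps.

```python
# Hypothetical data: skills extracted from one CV and from three vacancies.
CV_SKILLS = {"python", "sql", "statistics"}

JOBS = {
    "Data Analyst":  {"sql", "statistics", "excel"},
    "ML Engineer":   {"python", "statistics", "deep learning"},
    "Web Developer": {"javascript", "html", "css"},
}

def match(cv_skills, job_skills):
    """Score a job by skill overlap and explain the match."""
    matched = cv_skills & job_skills
    missing = job_skills - cv_skills
    return {
        "score": len(matched) / len(job_skills),
        "because_you_know": sorted(matched),  # the "why" behind the match
        "skills_to_learn": sorted(missing),   # gaps to work on for this job
    }

# Rank jobs from best to worst match for this CV.
ranked = sorted(JOBS, key=lambda j: match(CV_SKILLS, JOBS[j])["score"],
                reverse=True)
```

For the ML Engineer vacancy this yields a 2/3 score "because you know" Python and statistics, with "deep learning" listed as the skill to work on.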

From paper to digital: the power of AI to extract data from financial documents (Segolene Martin)

Segolene Martin talked about the way humans learn to read: we learn to recognise letters, then words, and then the meaning behind them. Since reading and paying an invoice takes a lot of time, attempts have been made to automate it. Current technologies need a template before they can identify information, but their product fyn.ai improves on this.

Making Machine Learning more Human? (Patrice Latinne)

Patrice Latinne is looking for ways to include the human element in AI, as it matters a lot. Through two use cases (an autonomous bus and a loan-approval application) he showed different reasons why we need the human element, such as getting users to trust the system and the importance of explaining its predictions.

Embedding AI in operational teams’ day-to-day workflow (Tem Bertels)

Tem Bertels showed some examples of how to embed AI in operational teams’ day-to-day workflow. In their work for a telecommunications company they translated the wealth of available data into something useful for the company’s employees. It’s still important not to do AI for AI’s sake, but to start with a business problem. By analysing data they were able to find actionable points to improve the company and its employees’ experience. In short, the users need to benefit from the intelligence you’ve introduced.

A recipe to embark the AI train rapidly, safely and sustainably before talking about AI strategy. (Kevin Françoisse)

Kevin Françoisse explained that when it comes to making the step towards AI, there are good resources for developers but fewer for managers. Based on Andrew Ng’s AI Transformation Playbook, he went over the steps a company can take to build its AI capabilities.

How data science can stop energy wastage (Nele Verbiest)

Nele Verbiest talked about the way Niko offers control of your home through smartphones or touch screens. The next step is to develop this further into smart homes that analyse the data and help their customers.

Data Democracy — A path to build sustainable AI Ethics (Astrid Van Lierde)

Astrid Van Lierde explained that AI is made up of three parts:

  • Data is collected
  • The data is used to develop technology
  • The technology is used in a certain context

People don’t know the value of their data and assume that a small group of experts knows what to do with it. But can these experts make the right decisions? It’s important for them to spend time with the people they’re developing the solution for, and to keep the individual at the centre of their decisions.

diSummit 2019, June 26: Being Human in the Age of AI, the annual conference of the #ai4belgium and #datascience community of Belgium.