Machine Learning Intern Journal — Responsible AI

As the title indicates, this is the journal of a Machine Learning (ML) intern at the impactIA Foundation. I'll be attempting to keep a weekly journal of my activities at the Foundation, to keep track of my progress and leave a roadmap for the interns who come after me.

Léo de Riedmatten
impactIA
Nov 20, 2020


Earlier this week, a friend sent me a link to a blog post by TensorFlow, “Learn how to integrate Responsible AI practices into your ML workflow using TensorFlow”. This should be a nice follow-up to last week's blog on Algorithmic Bias. Without further ado, let's explore this blog post and see if TensorFlow provides any useful practices for building responsible AI systems.

[Image: TensorFlow's Responsible AI workflow. Source: https://www.tensorflow.org/resources/responsible-ai]

The first step is to define “responsible”. What is responsible AI? With the ever-increasing pace of AI system integration, new opportunities to solve real-world problems are flowering, but as Uncle Ben wisely told a young Peter Parker, “With great power comes great responsibility”. Throughout the workflow, we have to keep asking whether the system will benefit everyone equally. TensorFlow proposes five areas to focus on when defining responsible AI: recommended best practices, fairness, interpretability, privacy, and security.

TensorFlow proposes questions to be asked at every step of the five-part workflow:

Define Problem: Who is my ML system for? A fundamental question to ask yourself at the very beginning. It is essential for assessing the true implications of the system's predictions, recommendations, and decisions.

Construct & Prepare: Am I using a representative dataset and is there real-world/human bias in my data?It is important to use a dataset that matches the real-world input once the model is deployed. Training a system on middle-aged people but then deploying the model for all-ages is a no no. For the second part, as we discussed in last week’s blog, great care must be taken in the construction of the dataset, identifying the presence of bias and fixing them before the system reinforces existing stereotypes.

Build & Train Model: What methods should I use to train my model? Another important part of building responsible AI. This is especially important for privacy and security. TensorFlow provides a library called Privacy, which includes implementations of TensorFlow optimisers for training machine learning models with differential privacy.In the privacy aspect, federated learning (which TensorFlow implements) is probably the most exciting and promising new approach (I might get into this in another blog, in the mean time if you’re interested in learning more, read this great article).

Evaluate Model: How is my model performing? Testing and evaluating your model in real-world scenarios across various users, use cases and contexts of use is crucial to building a robust system. TensorFlow provides libraries such as Fairness Indicators, which enables easy computation of commonly-identified fairness metrics for binary and multiclass classifiers. Google is also beta-testing Explainable AI, a set of tools and frameworks to help you understand and interpret predictions made by your machine learning models. An important part of building AI systems is understanding what is going on under the hood, and this is a great initiative to demystify deep learning's black box.
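To be clear about what “fairness metrics” means here, the snippet below is not the Fairness Indicators library itself, just a hand-rolled illustration of the kind of sliced metric it computes (false positive rate per group), using made-up labels and predictions:

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    """FPR = fraction of negative-labelled examples predicted positive."""
    negatives = (y_true == 0)
    if negatives.sum() == 0:
        return float("nan")
    return float(np.mean(y_pred[negatives] == 1))

# Illustrative labels, predictions, and a sensitive attribute.
y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1])
y_pred = np.array([0, 1, 1, 0, 1, 1, 0, 1])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# Slice the metric by group; a large gap between groups is a red flag.
for g in np.unique(group):
    mask = (group == g)
    fpr = false_positive_rate(y_true[mask], y_pred[mask])
    print(f"group {g}: FPR = {fpr:.2f}")
```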

Deploy & Monitor: Are there complex feedback loops? Some people believe that once you deploy your model, the job is done. But really, deploying the model is just the beginning. Monitoring your system as it interacts with the real-world over time is indispensable in creating responsible AI systems. Model Cards is a really interesting concept that was first introduced in a Google research paper and is now implemented in TensorFlow. Model Cards increase transparency by gathering in one place all the useful information about the inner workings of a machine learning model.

To conclude, AI systems have many promising applications for solving real-world problems, but that doesn't mean we should push on without carefully assessing what is being done and how. TensorFlow is one of the most widely used machine learning libraries, and it is reassuring to see them building the tools necessary for fair and responsible AI systems. I'm excited to start including these tools in my future work.


BSc in Computer Science & Artificial Intelligence with Neuroscience from Sussex University, currently a Machine Learning Intern at impactIA in Geneva (CH).