A quick tour through the ethical and governance issues

Part 1: Quick Basics and Advantages & Costs


Advances in AI have spilled over into defense applications, and this rightly raises many ethical concerns. While there are many detailed documents available that discuss specific areas of concern in the use of AI in warfighting applications, I’d like to give an overview of those issues here and cover some basic ideas that will help you discuss and address problems in this space in a more informed manner.

Let’s talk through the following items to build up an understanding of why the use of AI in war raises so many ethical concerns:


Design decisions for AI systems involve value judgements and optimization choices. Some relate to technical considerations like latency and accuracy; others relate to business metrics. But each requires careful consideration, because each shapes the final outcome of the system.

To be clear, not everything has to translate into a tradeoff. There are often smart reformulations of a problem so that you can meet the needs of your users and customers while also satisfying internal business considerations.

Take for example an early LinkedIn feature that encouraged job postings by asking connections to recommend specific job postings to target users…


A recent piece in the MIT Technology Review, titled “Worried about your firm’s AI ethics? These startups are here to help.”, spotlights startups in the technology ecosystem that are developing tools and services to help other companies implement AI ethics within their organizations.

I firmly believe that when we don’t have internal expertise to develop something, we should acknowledge that and bring in external experts who have ample experience in the domain and can help us address the challenges that we face.

But, in normalizing the practice of having to rely…

Why are you building the product that you are working on right now?

Yes, take a second to reflect on that; I’ll be right here!


Do you have a clear answer for this? If not, do you think that is a problem?

Product managers make this a core part of their work: articulating why a product should exist. Yet sometimes we get excited about new technology and jump head-first into finding a problem that we can solve with our shiny new toy.

The lack of clarity on why a team is working on a project can have consequences for adherence to responsible AI principles.

Why goal setting?

The exercise of goal setting helps…

Data for Change


The recently concluded FAccT 2021 conference regaled us with many papers and workshops covering statistical techniques, intersectional discussions, interdisciplinary ideas, and many different attempts at putting responsible AI into practice. There were many interesting discussions on Twitter that helped unearth even more insights from the talks, panels, and presentations, which definitely elevated my own understanding of AI ethics, and I encourage everyone to check them out.

But something was still left wanting at the end of it all.

I was constantly nagged by the question: “What if we could make taking…

Moving from the “why” to the “how” of ethical practices


So you’ve heard about AI ethics and its importance in building AI systems that are aligned with societal values. There might be many reasons why you choose to embark on the journey of incorporating responsible AI principles into your design, development, and deployment phases. But we’ve talked about those before here, and you’ll find tons of literature elsewhere that articulates why you should be doing it.

I want to take a few moments and talk about how you should be doing it. I want to zero in on one aspect of that: transparency about your AI ethics methodology.

I think there…


We don’t remember the words of our enemies but the silence of our friends. (after Martin Luther King Jr.)


As we’ve seen the enormous upheaval in the field of AI ethics over the past 3 months, I think it behooves us to think a little deeply about the role all of us can play in making a meaningful, positive impact on the world. This idea of becoming an upstander in AI ethics is particularly powerful and I believe that in 2021, this is the right way to help create a healthier ecosystem for us all.

A. Why is this important?

As I had spoken about in my piece…


Responsible AI can seem overwhelming to achieve. I am with you on that. It comes with so many challenges that it is easy to get lost and feel disheartened trying to get anything done. But, as they say, a journey of a thousand miles begins with a single step, and I believe there are some small steps we can take toward actually achieving Responsible AI in a realistic manner.

Essential to this strategy is an emphasis on starting with partial solutions, which may be imperfect to begin with but provide the necessary fertile ground that can aid…

We’ve had an outpouring of interest in the field of AI ethics over 2019 and 2020, which has led to many people sharing insights, best practices, tips, and tricks that can help us achieve Responsible AI.

But, as we head into 2021, it seems that there are still huge gaps in how AI ethics is being operationalized. A part of this stems from what I call the believability gap that needs to be bridged before we can realize our goal of having widespread adoption of these practices in a way that actually creates positive change.

Fragmentation in the field…

This article was co-written with Muriam Fancy, AI Ethics Researcher at the Montreal AI Ethics Institute

Governments are increasingly turning to AI-enabled systems to streamline services that are often labor- and time-intensive, implementing them across multiple sectors. However, procuring these systems, and the way they are deployed, carries significant implications. Because of gaps in understanding what these systems imply and how to properly measure their risks, governments will oftentimes procure and deploy solutions that are biased and risk-heavy, with the potential to cause significant harm to the public.

When things go horribly wrong …


A recent case where this…

Abhishek Gupta

SE #machinelearning @Microsoft | Founder @mtlaiethics | #AI #Ethics researcher @mcgillu | @mcgillu grad #deeplearning
