An idea for AI regulation, step one

Monika Mani Swiatek
My 52 problems
Jan 14, 2020 · 4 min read

In this 21st post of my My 52 problems series, I want to describe what I would suggest as a first step towards AI transparency. It's a surprisingly simple concept.

There’s something wrong and we can see it

Artificial Intelligence (AI) is the ability of a computer program or a machine to "think" (make calculations) and learn.
AI is quite often intelligent in name only. It may inherit the biases of its creators or of the data it's fed. Many people know this, but how can we make AI more human-friendly, or at least more transparent?

People still can't agree on what Artificial Intelligence (AI) actually is. I mean, there is a concept, but there's no final definition. Quite often AI is just productivity or automation software: it does things people used to do, only much faster and with less hassle.

There are various types of AI: lifestyle applications, such as creating lists of suggested songs or identifying people in pictures, and more serious ones that make decisions about people's lives. I'll focus on the lifestyle stuff for now.

Because the way AI systems work is a secret (which companies don't want to disclose, crying that it would destroy their business, never mind people's lives), we're not able to find out what is wrong and why.

There’s hope

2020 has been announced as the year when governments are about to do something about AI, especially about the ethical aspects of its use and outcomes. I know we're waiting for proper assessments to sort out appropriate policies, but there are things we already know about. Why not do something about them?!

Know it, show it

Sometimes we are able to figure out when something's wrong. Quite often, when it relates to a big brand (remember the Apple credit card?), it lands on the front pages of daily newspapers and tech magazines. Then the company says they had no idea! Of course they did, or they just didn't care. They were simply waiting until more people got upset and drew attention to the issue, or stopped buying their product (yes, it's always about money).

I believe that, just as many industries have been told to stop certain practices or act according to certain standards, the AI industry should follow.
Pharma companies have to disclose the side effects of medicines, so why doesn't the AI industry have similar regulations? In some cases it would be easy to write the disclosures down right now, because we already know what's wrong!

Of course, this would apply only to certain types of AI systems or algorithms, but hey, it's good to start with little steps!

Simple examples

Have you heard about cases of a car's voice-command system listening only to men's voices (or to women's, once they lowered the pitch of their voice)? It's a big, well-known problem with voice recognition software that has existed for a long time; even back in 2003, people noticed that medical dictation software worked worse for female voices.

Imagine you were a woman planning to buy a car and wondering whether to pay more for the advanced version with a voice assistant or to take the basic one. Would this information impact your choice? Of course it would.

Recently I had a chat with a friend of mine; she's Irish with a strong accent. She laughed that she had tried to use a voice assistant, but it wasn't able to recognize most of the words she was saying. Do you think she would buy an Alexa for a family member for Christmas?

Even Google Maps, which we use a lot, can cause trouble. Whatever we're using, we should remember not to follow it blindly and to keep our common sense switched on. Many people have learned the hard way that at night Google can direct them through a pitch-dark area, as it doesn't care about safety, just about getting from point A to point B (although recently I've heard they're working on an option to filter out routes through dark streets). If I knew that beforehand, I'd plan my trip more carefully (well, it wouldn't impact my trip, but I know many people who don't feel comfortable walking in an unknown area in the dark).

Transparency push

Products will be as biased as the data they are fed with or the people who create them, but disclosing existing biases, and watching the sales numbers of such flawed products, would (hopefully) push companies to work harder and to provide better products tested with diverse types of customers.

We're fed up with mediocre stuff that is advertised as groundbreaking and put everywhere it can be, until the moment a huge group of people starts to rant, or someone writes a book listing all the flaws that were ignored (at the end of this article I'm giving you a list of books devoted to this topic).

Usability statement

Imagine if we had a system where we could report an issue we came across with an AI product. The report would be reviewed by a committee and, if justified, added to a list of known issues that would have to be included in the product description.
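To make the idea concrete, here's a rough sketch of that flow in Python. All the names here are hypothetical, just an illustration of the process (report → committee review → public disclosure), not a real system or API:

```python
from dataclasses import dataclass, field

@dataclass
class Product:
    """An AI product with a public 'usability statement' of known issues."""
    name: str
    known_issues: list = field(default_factory=list)

@dataclass
class IssueReport:
    """A user-submitted report about a problem with a product."""
    product: Product
    description: str

def committee_review(report: IssueReport, justified: bool) -> None:
    # If the committee finds the report justified, the issue must be
    # added to the product's public description.
    if justified:
        report.product.known_issues.append(report.description)

# Example: the voice-assistant accent problem from above.
assistant = Product("Voice assistant")
report = IssueReport(assistant, "Fails to recognize strong Irish accents")
committee_review(report, justified=True)
print(assistant.known_issues)  # the disclosure a buyer would see
```

The point of the sketch is that the product's internals (the black box) never appear anywhere; only the confirmed issue does.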

AI designers wouldn't need to disclose their black boxes, their treasure, but they would have a clear indication of what they f***ed up and what they should work on.

Thanks for reading. Here's a short, subjective list of books:

Weapons of Math Destruction, Cathy O'Neil

Hello World, Hannah Fry

Invisible Women: Exposing Data Bias in a World Designed for Men, Caroline Criado-Perez
