Why we should all talk about AI, now

Eolenka · Published in The Startup · Nov 4, 2019
Photo by Andy Kelly on Unsplash

The discussion on the shape of the future is happening right now, and our views play a very limited role in it, if any. Everyday life, work, interactions with other humans, and emotional wellbeing may come to look diametrically different from what they are, or what we can imagine, today. Nevertheless, the lack of engagement with and control over something so important does not seem to bother us too much.

Still, it is not too late to get on track with the fast-paced fourth industrial revolution. The voices calling for greater AI awareness and literacy are getting stronger; we just have to start listening to them.

They want us to care

In its AI principles published in May 2019, the Organisation for Economic Co-operation and Development (OECD) sets the first intergovernmental standard for AI policies. The OECD recognises that AI, with its “pervasive, far-reaching and global implications that are transforming societies, economic sectors and the world of work”, may cause changes with “disparate effects within, and between societies and economies.”

According to the OECD, everybody is a stakeholder in the AI debate, as everyone is either involved in, or affected by, AI systems, be it directly or indirectly. The OECD deems that “a well-informed whole-of-society public debate is necessary for capturing the beneficial potential of the technology, while limiting the risks associated with it.” It emphasises the need for a “general understanding of AI systems” and encourages governments to “empower people to effectively use and interact with AI systems across the breadth of applications.”


In June 2019, the European Commission’s High-Level Expert Group on AI published its Policy and investment recommendations for trustworthy Artificial Intelligence, a follow-up to the Ethics Guidelines for Trustworthy AI from April 2019. In its very first section, the policy document recommends that Member States “empower humans by increasing knowledge and awareness of AI.”

More specifically, countries should invest in elementary AI training courses (machine learning, data protection, AI system robustness), which should be made widely accessible. People are to be kept informed about “free available resources on AI that they can use to learn and experiment (e.g. algorithms and data), to discuss (e.g. via blogs) and to share best practices.” Additionally, a dialogue between regulators, AI system developers, and users should be established to discuss the “most ethically sensitive issues revolving around AI systems with a significant impact on society or individuals.”

Some countries have already understood the importance of involving citizens in the conversation on AI and have taken the first steps to implement practical solutions. French president Emmanuel Macron thinks that “we need a fair discussion between service providers and consumers, who are also citizens and will say: ‘I have to better understand your algorithm [that plays a role in my life] and be sure that this is trustworthy.’” Finland, whose education system has traditionally ranked among the best in the world, kicked off a nationwide education campaign aiming to train 1% of the population in basic AI concepts, and to gradually build on that. And that is just the start.

Diversity matters

Diversity should be one of the leading principles applied in the data creation and collection process when building AI systems: big data and the wisdom of the crowds can be compatible, but they are not synonyms. We should aim for diversity not just in the input to the algorithm (the more varied the data, the better), but also in its output (diversity of thoughts and ideas).

AI Now refers to the current lack of diversity in the AI industry as a diversity crisis. Poor diversity in the data used and in the system development process can result in biased systems that amplify the polarisation of public opinion, increase ideological segregation, undermine democratic processes, or entrench further bias.


Policy makers should get more involved with the diverse groups of stakeholders engaged in designing, building, and implementing AI models in order to draft meaningful, well-tailored regulation that avoids both over- and under-regulation.

More diversity of opinion can help mitigate the racial, age, gender, and other biases inherent in the data or in the algorithm applied, leading to fairer systems. Such AI can then help us prevent, analyse, and eventually mitigate bias both in human decision-making, such as court rulings, and in the decisions made by AI systems themselves.
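To make “bias in the data or the algorithm” a little more concrete, here is a minimal sketch in plain Python of one common fairness check, the demographic parity gap: the difference in positive-decision rates between two groups. The data, group labels, and numbers below are entirely hypothetical and purely for illustration; real audits use far richer metrics and data.

```python
# Minimal sketch of a simple fairness check: demographic parity.
# All records below are hypothetical, purely for illustration.

# Each record: (group, model_decision), where decision 1 = approved.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

def positive_rate(records, group):
    """Share of positive decisions the model gives to one group."""
    outcomes = [d for g, d in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = positive_rate(decisions, "group_a")  # 0.75
rate_b = positive_rate(decisions, "group_b")  # 0.25

# A gap of 0 means equal treatment; a large gap is a warning sign
# worth investigating (it is not, by itself, proof of unfairness).
print(f"group_a: {rate_a:.2f}, group_b: {rate_b:.2f}")
print(f"parity gap: {abs(rate_a - rate_b):.2f}")
```

Even a check this simple shows why the question matters: anyone who can read a few lines of code, or the numbers they print, can start asking whether a system treats groups differently.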

Inform ourselves

To make sure AI technology does not get out of hand, and that the input to and output from these systems are as just as possible, it certainly helps to have high-level principles and, where possible, a more specific, including technology-based, set of rules for AI actors to follow (see e.g. AI Now’s recommendations). However, no matter how many principles, rules, and awareness campaigns we have out there, if the addressees of such efforts, the citizens and customers who stand to benefit from (or potentially suffer from) the technology at stake, are not ready to listen, those efforts may very well miss their target.

To be able to protect ourselves from the potential negative effects of AI systems, including the lack of diversity and the proliferation of dangerous biases, we need to empower ourselves with knowledge and an understanding of what AI is about. However, sufficient capability and know-how on the topic do not come by themselves, and no voluntary governmental initiative, no matter how attractive, will force you to care unless you decide to do so.


Yet it is not difficult to make things easier for our policy-makers and get one step ahead. The next time we have a spare moment, let’s dive into the fascinating world of machine learning, big data, or robotics, with all their diverse facets, uses, and ethical implications. We may learn practical ways of protecting our data, think of ways to use neural networks to solve problems that matter to us, or simply investigate whether our politicians actually know or do anything about AI and its impact on our own and our country’s future.

To reach a diversity of opinions, we should also diversify our conversations. The way we interact with others nowadays, especially on social media, disproportionately rewards a lack of diversity, leading to the so-called filter bubbles; let’s try to engage with friends and people outside our closest circle and hear their opinions, as the sketch below illustrates.
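The filter-bubble mechanism is easy to see in a toy simulation. The following sketch is purely illustrative (the topics, the 0.9 “more of the same” probability, and the assumption that the user engages with everything shown are all made up): a feed that mostly repeats whatever the user last engaged with quickly collapses onto one or two topics.

```python
# Toy sketch of how similarity-driven feeds narrow what we see.
# All topics and probabilities here are hypothetical.
import random

random.seed(42)

TOPICS = ["politics", "sports", "science", "art", "tech"]

def next_item(last_liked, bias=0.9):
    """With probability `bias`, recommend more of the same topic;
    otherwise pick a topic at random."""
    if last_liked is not None and random.random() < bias:
        return last_liked
    return random.choice(TOPICS)

last = None
seen = []
for _ in range(50):
    item = next_item(last)
    seen.append(item)
    last = item  # assume the user engages with everything shown

# After a few steps the feed locks onto a narrow slice of topics.
for topic in TOPICS:
    print(f"{topic:8s} {seen.count(topic)}")
```

The point is not the numbers themselves but the feedback loop: engagement feeds similarity, and similarity feeds engagement, which is exactly why deliberately seeking out different voices takes conscious effort.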

Last but not least, let’s not forget: artificial intelligence 1) is not, and does not have to be, an exclusive playground for white male nerds, and 2) has no fiduciary duty: it does not have to act in our best interest per se. We may end up bearing the cost of ill-designed AI systems ourselves. The engagement of every single one of us matters, so let’s start caring.

Now.

Eolenka

“May I never stop being so damn curious.” I write about law, tech & mind. Here to share what I find interesting and learn from others. Twitter: @eo_lenka