Artificial intelligence (AI)-driven innovation requires ethics and sustainability at its heart

Josefin Rosén
Innovation at Scale
5 min read · Sep 5, 2022

Artificial intelligence has the potential to revolutionize the way we work and live, across all businesses and walks of life. From smart speakers to photo-sorting software and algorithms that can diagnose cancer, AI is already quietly a part of our lives. However, as the use of AI spreads, there is also growing recognition that it cannot be allowed to do so unchecked. If we are to succeed with AI-based innovation, we must put ethics and sustainability at the heart of the AI revolution.

A place for regulation

The EU’s proposed regulation on artificial intelligence, the AI Act, is an important milestone. The regulation will contribute to increased use of AI among organizations and companies. In particular, it will allow people to develop greater trust in the use of AI, and therefore help them to become more comfortable with its further use and development. This trust is key to the spread of AI across sectors and industries.

Photo by Christian Lue on Unsplash

However, successful AI innovation requires more than just laws and regulations. If we are to harness the full potential of AI, companies and developers must build in sustainability and ethical requirements from the start and throughout the AI lifecycle.

For example, we know that one of the biggest issues with responsible AI is potential bias. AI-based algorithms are trained on data, and if we give them biased data, then the output will also be biased. As our CTO Bryan Harris has pointed out, the algorithm does not understand our goals. It is up to us to build models with the right objectives. Subtle errors can easily creep in, and they are not always obvious.
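To make the point concrete, here is a minimal sketch (Python, entirely synthetic data, a hypothetical two-group scenario rather than any real system) of how a skewed training sample becomes a skewed model: a classifier trained almost exclusively on one group performs noticeably worse on the other.

```python
# Minimal sketch: biased training data in, biased predictions out.
# All data is synthetic; the "groups" are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_group(n, shift):
    # Two-feature samples; the true class boundary differs per group.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Training sample: 95% group A, 5% group B -- the biased data.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=2.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Evaluate on fresh, equal-sized samples from each group.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    X, y = make_group(1000, shift)
    print(f"{name} accuracy: {model.score(X, y):.3f}")
# Group A scores high; group B scores far lower, because the model has
# effectively learned group A's boundary and applies it to everyone.
```

Note that nothing in the code refers to either group by name when making predictions. The disparity comes entirely from who was, and was not, in the training data.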

In her book Wanted: Human–AI Translators, Mieke De Ketelaere describes being invited by some of her male colleagues to see a demo of their new voice-activated AI application. Unfortunately, despite everyone’s best efforts, the application simply would not respond to Mieke at all. The reason soon became clear: all her colleagues were male, so the algorithm had been trained on male voices and did not recognize a woman’s voice. On a smaller scale, my colleague Thomas Keil described asking a smart speaker to play jazz, and getting music from a band called Chess. These mistakes are amusing when they do not matter, but they would not be so funny if they affected a medical diagnosis, access to banking services, or a decision in policing or criminal justice.

We also cannot see AI in isolation. We must look at the ethics of the use of data and algorithms as a whole. We need to be thinking about responsible innovation, not just responsible use of AI. In this sense, the EU’s new regulation leaves gaps, and it is up to us as responsible developers to fill those gaps, not to exploit them.

Succeeding with AI and innovation

There are a number of possible checklists for developing responsible AI. My colleague Fadi Glor suggests that the key issues that must be addressed are privacy, bias, explainability and governance. He also proposes several ways in which each of these can and should be addressed.

I suggest that there are also some broad principles that must be applied to innovate successfully with credible and sustainable AI. These are:

Awareness. Communication is essential for AI products: difficult questions must not be avoided, and transparency must be present from the beginning. Potential risks from the use of AI or analytics should be highlighted, and addressing and managing them should be a central part of the strategy. This enables developers to build in explainability and governance as part of the development process.

Diversity. Diversity is key to reducing bias, as the experiences reported by both Mieke De Ketelaere and Thomas Keil highlight. AI applications should be built by a broad and diverse team, with a range of backgrounds and experiences, both ‘life’ and functional. This reduces the risk of unseen bias, where the team is not even aware of the bias it has built in. More input data will also decrease the risk of errors, because it gives the algorithm more to ‘learn’ from. However, the data must also come from diverse sources: more of the same will not help.

Continuous control. To ensure that AI applications work correctly and fairly, they must be continuously monitored against their objectives, across the entire life cycle from data to decision (a minimal monitoring sketch follows this list).

The human factor. To take the really big steps forward, we must work with artificial intelligence, not alongside or behind it. A successful human–machine collaboration requires AI to be treated as part of the team, with the same checks and balances as all the other team members. Human involvement is essential to building trust in AI.
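On the continuous-control point above, here is a minimal monitoring sketch (Python; the objective and tolerance values, the record format, and the check_batch helper are all hypothetical illustrations, not any standard API). The idea is simply to score each batch of labelled production outcomes per group and raise an alert when any group drifts below the agreed objective, or when the gap between groups grows too wide.

```python
# Minimal sketch of continuous monitoring: per-group accuracy on each
# production batch, checked against an agreed objective. The thresholds
# and the (group, correct) record format are hypothetical.
from collections import defaultdict

ACCURACY_OBJECTIVE = 0.90  # assumed target agreed at design time
MAX_GROUP_GAP = 0.05       # assumed fairness tolerance between groups

def check_batch(records):
    # records: iterable of (group, correct) pairs from one batch of
    # model decisions whose true outcomes are now known.
    totals, hits = defaultdict(int), defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        hits[group] += int(correct)
    accuracy = {g: hits[g] / totals[g] for g in totals}
    worst, best = min(accuracy.values()), max(accuracy.values())
    alerts = []
    if worst < ACCURACY_OBJECTIVE:
        alerts.append(f"accuracy below objective: {accuracy}")
    if best - worst > MAX_GROUP_GAP:
        alerts.append(f"gap between groups is {best - worst:.2f}")
    return alerts

# Example batch in which group B has quietly degraded.
batch = ([("A", True)] * 95 + [("A", False)] * 5
         + [("B", True)] * 80 + [("B", False)] * 20)
for alert in check_batch(batch):
    print("ALERT:", alert)
```

In a real deployment, checks like these would run on a schedule against live data, and an alert would trigger human review rather than an automatic fix, in keeping with the human factor above.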

Building for success

The potential to gain competitive advantage from AI is huge. If we are to make our businesses and operations sustainable, we need to meet the demand for innovation from both the market and the wider population. The proposed EU regulation may look like excessive control. However, I am convinced that, because it forces trust and transparency into business models, it will actually give European companies a competitive advantage over the Chinese and American companies that are currently ahead of Europe on AI.

How will this work? Strong rules and regulations will give AI products a guarantee of quality and safety. At the end of the day, AI must be usable by humans, which means that it must be trusted. As trust develops, people will want to use AI more, and its potential will be developed further. This creates a positive feedback loop of trust, innovation and sustainability.

I am convinced that European AI development faces a bright future. Not in spite of EU rules, but because of them, and the strong ethical framework that they put in place. Regulations are not everything, but the current direction of travel shows the vital place of ethics and sustainability in the future of AI and innovation.
