Ethical Engineering

Luca Collalti
Published in FARSIGHT
Aug 9, 2021

Inese Podgaiska is the Secretary General of the Association of Nordic Engineers, which represents more than 500,000 engineers across the Nordic countries. She is an advocate for strengthening the role of engineering in advancing responsible AI and achieving sustainable development goals.

We met online with Inese for a talk about ethics in technology, challenging the tech industry, what the social sciences and technical fields can learn from each other, and what a Nordic approach to AI should look like.

You have said that you believe democracy needs to be encoded into the process of technological development. Can you explain what you mean by this?

It means making sure that the new generation of technical developers has a broader perspective on the societal implications of new technology. This includes ethics, democracy, freedom, and human rights, as well as the risk scenarios stemming from their products. These principles should be understood before development begins. It also means asking: do we always need to develop new technology just because we can? Very few workplaces engage actively with these questions today.

One of the things we do at the Association of Nordic Engineers is to gather technical developers and engineers to initiate this shift in mindset. That said, it is of course not just the engineers who bear the responsibility. There is also a political responsibility in deciding how the technology is going to be used, by whom, and for what purpose.

Take facial recognition. Today, in the EU, some Member States deploy facial recognition in public spaces, and it is being used actively by the police. More than 50 different organisations have written a letter to the European Commission calling for a ban, stating that this is an infringement of our privacy and human rights. We still need to see what the result will be, as security is used as an excuse for a wider use of this biometric technology.

You believe that both ordinary people and technical developers need to be empowered to challenge the authority of the tech industry. How do you see this happening?

In April 2021, the European Commission tabled a proposal for AI regulation, which is going to be under negotiation between the Council of the EU and the European Parliament. It is the first proposal for AI regulation at EU level that has been tabled, so it’s quite an important milestone.

But when you dig into the proposal, you see that its overall focus is on providing a legal framework for industries, and that the whole supply and demand chain is missing. Notably, the impact on people, workers, and end users is not properly addressed. This perspective is important because people who are not technicians and specialists often lack a basic knowledge of how new technology impacts their life. Again, let us take privacy as an example. When we talk about the consequences of facial recognition, many people reply: ‘well, I am not doing anything wrong, why should I be sceptical of it?’ We already give so much data away without knowing it, and we have no control over how that data is used or misused. Empowerment means bringing our experts closer to the public and providing insights on the benefits and challenges of new technology — including informing people of new risks.

From a trade union’s perspective, empowerment means providing knowledge of how the deployment of technology impacts workers. It’s not only the automation of work and jobs that’s relevant here, but also labour and employment conditions, which are changing with the emergence of new kinds of employment contracts and changes to labour protection. We also see a growth in workplace surveillance and performance tracking. In this context, a big task for trade unions now is to step up in their role and safeguard workers’ working conditions, rights to privacy, and the ‘right to disconnect’. These things, also, are not mentioned in the EU regulation at all.

The Association of Nordic Engineers has campaigned for ethics to be included in the curriculum of engineers. What are the obstacles in doing this?

It’s already being implemented. For instance, the Technical University of Denmark (DTU) already has ethics in engineering on the curriculum. For me, it’s difficult to see the obstacles; it is a matter of willingness to see the added value in doing so. But it’s not only engineering that needs an understanding of the social sciences and humanities. These fields should also include a curriculum component that gives students a basic knowledge of technology. A lawyer specialising in technology today probably didn’t learn it in school because it wasn’t taught. It’s no longer a spiritual statement to say that everything is connected. It is a fact in our interconnected society, and that needs to be embraced in our education systems.

Ethics in AI is often considered a hindrance, or a luxury that can’t be afforded. Do you think that making AI that is somehow encoded with Nordic values could provide the Nordics with a competitive advantage?

I think a Nordic cooperation on AI would be great. And I do think the Nordic countries could gain a lot if there was a Nordic lighthouse of research on AI. It would boost innovation and allow us to retain our talents instead of having them move to Silicon Valley.

When it comes to AI research and development, we need a triple helix approach encompassing academia, business, and politicians instead of the current, very scattered approach. It is usually the industry’s business models and development strategies that define what kinds of products they end up developing. And politicians talk about technology either in terms of dystopian scenarios or as something that will solve all problems.

Regarding the claim that regulation will hinder innovation: I don’t buy it. Rules within a well-defined framework don’t necessarily prevent actors from innovating, because they all know and play by the same rules. Some years ago, I discussed this topic with someone working in the industry who told me: ‘I and many others want to have dialogues with you and with politicians. We want to know what the rules are so we can play by them.’

We also took up this topic last year at our hackathon. Here, we asked: should we certify companies, so they can be labelled as ethically friendly AI companies, or should we certify products as ethically aligned, which in turn could be used as a competitive advantage? It was the latter approach that prevailed. In the Nordics, trust is a societal cornerstone. If there is no trust, there is nothing. A certification of products could likewise make trust the cornerstone of a Nordic approach to AI.

In my work with engineers, I have been met with different reactions to the possibility of including approaches from the social sciences in what they do. Some are sceptical of the idea because they ‘want to do math and build stuff’. What do we say to them?

My response would be: do you really see the world as black and white? True, some engineers do, as people do in other professions. But it’s a question of integrating it into the curriculum in a reasonable way. You could have a course on societal issues (law, politics, ethics, philosophy) assessed through reflection or dialogue rather than a written exam; I don’t see how that would be a problem. My daughter studies political science, but she also takes a course on macroeconomics even though she is not going to be an economist, because it is needed for a general understanding of how politics and the economy are interlinked. It’s part of broadening your field of view.

There is quite a lively debate about ethics in AI, but we don’t really see this in other facets of the engineering profession, even those connected to the Fourth Industrial Revolution, such as collaborative robots. What’s your take on this?

I think you are right. The deployment of AI and automation was the catalyst for examining their impact on our values, ethics included. So I imagine that the focus on ethics, privacy, and human rights will only increase in the coming years, and that it will encompass other fields of engineering as well. The essence of engineering is to be at the forefront of technological development, and the pressure for responsible development of technology is ever growing. I represent engineers and I have an enthusiasm for progress and innovation, but in recent years there has been, in my view, too much focus on productivity and economic growth and too little focus on people. The Covid-19 pandemic and its aftermath have heightened this discrepancy. We need to ask: does our economic system really function if it doesn’t benefit people in the lower part of the income chain? I am almost tempted to say that the consumer society has consumed itself. I think we need to rethink our approach going forward.

Luca Collalti

I am a Techno-Anthropologist with a strong interest in Science and Technology Studies and the politics of techno-science.