Part 3: Privacy Talk with Maksim Karliuk, Research specialist at International Development Law Organization: How is data protection interconnected with AI and ethics?

Kohei Kurihara · Published in Privacy Talk · 9 min read · Apr 26, 2024

“This interview was recorded on 18 March 2024 and discusses international organizations, law, technology and ethics”

  • How is data protection interconnected with AI and ethics?
  • What is needed to enhance and secure the rule of law in digital contexts?
  • Message to listeners
  • How is data protection interconnected with AI and ethics?

Kohei: Yeah, thanks to your work and UNESCO’s efforts, a lot of companies are trying to start the discussion on AI and ethics at this moment. Besides that, there is a very important announcement by the European Parliament about the AI Act becoming law, and a lot of companies are working seriously on AI and data protection as well.

So the next topic is about an important space. You may have included data protection within AI and ethics in your program. Could you share how data protection is interconnected with AI and ethics from your perspective?

Maksim: It definitely was included. Privacy is a right essential to the protection of human dignity, human autonomy and human agency.

And since the major AI methods currently employed are based on processing ever-increasing amounts of data, data protection becomes a particular concern, both in terms of the use of data for the development of AI systems and in terms of providing impacted people with agency over their data and the decisions made with it.

To give you an example of certain tradeoffs that might appear, and already do appear in practice: I already spoke about biases that technologies can introduce. One way to reduce biases is to use more data.

It’s generally true that a machine learning algorithm is more accurate and relevant if the input data is precise and extensive, which can potentially reduce bias as well.

This, however, may at a certain point come into conflict with personal data protection, because there is a clear tension between privacy and using more specific and detailed datasets to offset AI bias. A very careful balancing exercise has to be made here.
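To make this balancing exercise concrete, here is a minimal, hypothetical Python sketch (not from the interview): it applies the standard Laplace mechanism from differential privacy to a simple statistic over a synthetic dataset, showing how stronger privacy (a smaller epsilon) means noisier, less accurate output. All data, names and parameters are illustrative.

```python
# Minimal, illustrative sketch (not from the interview): the Laplace mechanism
# from differential privacy applied to a simple statistic. Stronger privacy
# (smaller epsilon) means noisier output, i.e. lower accuracy/utility.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "sensitive" dataset: 10,000 individuals, one binary attribute.
data = rng.integers(0, 2, size=10_000)
true_mean = data.mean()

def dp_mean(values, epsilon):
    """Release the mean with Laplace noise; for values in [0, 1] the
    sensitivity of the mean is 1/n."""
    sensitivity = 1.0 / len(values)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return values.mean() + noise

for eps in (0.01, 0.1, 1.0):
    estimate = dp_mean(data, eps)
    print(f"epsilon={eps:<5}: estimate={estimate:.4f} "
          f"(error vs. true mean: {abs(estimate - true_mean):.4f})")
```

The tension the interview describes appears here in miniature: lowering epsilon protects individuals more strongly but degrades the statistic that a downstream model, or a bias audit, would rely on.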

Another example is mass surveillance, because large amounts of data combined with AI can be used for surveillance at scale. In fact, the Recommendation I was talking about explicitly says that AI systems should not be used for social scoring or mass surveillance.

The Recommendation has quite strong provisions on data protection and privacy as such. In fact, it has a principle on the right to privacy and data protection, and a separate data policy with an explanation of what states should do to ensure data protection.

Among those, it requires privacy impact assessments for algorithmic systems that also take societal and ethical considerations into account; a privacy-by-design approach; proper accountability of AI actors; redress mechanisms to protect people’s rights; and ensuring that individuals retain rights over their personal data.

And essentially, it requires ensuring data protection throughout the whole lifecycle of AI systems, because, as I mentioned at the beginning, the core point is that we should look at all the stages of AI systems, from design and development to the implementation and use of those technologies. So privacy is important at all the stages.

  • What is needed to enhance and secure the rule of law in digital contexts?

Kohei: I agree with your idea, and this is becoming serious because AI is becoming normal in our society. The need for privacy protection is also becoming important not only for lawyers, but also for developers, businesses and institutions, which should cooperate and collaborate for our future society and new technologies.

There is an interesting post on LinkedIn about your involvement in a forum hosted by Politico, where you spoke on a very interesting theme. Could you share what is needed to enhance the rule of law in the digital context?

Maksim: Yes, sure. It’s a very large question in fact. I will say a few foundational points, without which you cannot proceed in this direction of ensuring the rule of law in the digital context.

First of all, previously we thought that we needed to build the rule of law online as we have it offline, meaning in the digital world as we have it in the physical world.

But nowadays, in my view, that is no longer enough, simply because we have seen that the rule of law in the physical world has been in persistent decline globally for several years in a row. So simply transferring it into the digital world is no longer enough.

That’s one thing. A second thing is that digital contexts are quite different from the physical ones and might require different approaches. These are two points to bear in mind.

The second major foundational point is that we have to have a very strong and common understanding in terms of value-neutrality. There is this so-called value-neutrality thesis, which says that technology is morally and politically neutral and is neither good nor bad on its own.

I still hear it to this day that it’s just a tool used by people, and you can use it for good or bad purposes, in good or bad ways.

But this is not the case. In fact, there is fairly wide agreement among philosophers and theorists of technology from various schools that technology is not morally and politically neutral.

In fact, if we speak about artificial intelligence, this is even more so, because people are making choices at every step along the way of AI development; their personal ideas and the ideologies of their groups are being passed down through the AI ecosystem.

To put it simply, if you or someone like you are not in the room where this happens, whatever gets built won’t necessarily reflect who you are. Many AI systems are biased, as we already discussed, due to the data on which they are trained, among other things, so they essentially depend on what kind of data is used.

And data can be incomplete, or it can itself include societal biases or stereotypes of different sorts, which can have really far-reaching consequences, because these technologies, the way they are being applied, can extrapolate really widely and really fast.

But even beyond that, AI systems can learn from the information they receive from the outside world and can act in ways their creators could not have predicted. Essentially they can operate independently of their creators or operators.

And this, in the end, complicates the task of determining responsibility: it creates problems of predictability and of systems that can act independently while no one is liable.

These are the issues that undermine the possibility of building a proper rule of law in this domain.

Long story short, the important part is not to look only at the use of these technologies for the purposes of assessment or regulation, because by then it is already very late. That is still important, but it is late.

It’s important to look at them at all stages of the lifecycle, as I was saying previously, from research, design and development to deployment and use. And even these stages have their own sub-stages, like maintenance, operation, trade, financing, monitoring, disassembly and termination.

If we’re to build a proper rule of law in this domain, we have to ensure that we look at all of the stages and not only at the stage of usage of these technologies.

This is my second foundational point — these technologies are not value-neutral, they’re not morally or politically neutral.

The third and last point is something that I’ve been thinking about recently. It is about the regulatory presuppositions that we have.

Presently, laws operate on the basis of assumptions fixed in their normative foundations, such as those about freedom of will and human autonomy, which are essentially the basis even for new, technology-driven regulation.

At the same time, as we as individuals are continuously invited to outsource more and more of our agency to digital tools, we might be losing, to a certain degree, the ability to exercise judgment and to make decisions independently of these systems.

This poses both practical and moral challenges. On the one hand, a regulatory intervention might not achieve the anticipated result, since the technologies can undermine this fixed assumption about our autonomy and freedom of will.

On the other hand, such interventions place unrealistic regulatory expectations on us, on human agency, or on structures involving human agency, which can result in unfair treatment.

To use an analogy, we can end up building more and more rules on false assumptions or foundations, which might shatter the whole regulatory structure that we’re trying to build.

So to me, it’s not really clear how we can rely on regulatory frameworks based on established assumptions to cope with emerging technologies, while at the same time these frameworks are themselves being challenged by technology through the disruption of those very normative foundations.

It’s something that I have been thinking about recently, and I don’t have solutions for it yet. But it really has to be taken care of if we are to make the regulatory frameworks we are building effective in addressing the issues that come with the advent of new technologies.

  • Message to listeners

Kohei: That’s a very big insight, thank you for sharing. It’s also important to consolidate the different models, not just in tech, but also in politics and law-making.

Lastly, I’d like you to share your message for the listeners. Throughout your great experiences, you have accumulated a lot of things to share with the listeners of this interview. So could you share your insights and a final message for them?

Maksim: Yes. There are many things to say here, but I would say that a very important thing is to think long-term. It comes from my experience, from my conversations with people throughout the years and in different places.

It is very common for people to prioritize short-term thinking, which is absolutely normal because it is human psychology essentially — it is easier to think short-term, and it is very difficult to think long-term. But we have to make a really conscious effort to think long-term.

What I also see is that people often put more weight on short-term benefits than on long-term risks, when in reality it should be the reverse. Long-term risks, if they materialize, can create a situation where you are no longer choosing, as you can now, between a good and a bad option, but between a bad and a worse option.

And if they materialize, they can cancel out all of the short-term benefits you have ever received, and you won’t be able to gain them in the future anymore.

This applies not only to new technologies, to the decline of the rule of law around the world, or to the way technologies are currently being developed and deployed. It is relevant in a broader context as well.

I would say it’s very important to make this conscious effort, however difficult it is, to think really long-term and put more weight on long-term benefits and risks.

That would be my message. Over the years I have also met quite a few people who came to understand the importance of this, but at some point it really becomes too late. It applies in a broader context, but it’s also very relevant to the issues we talked about today.

Kohei: Yeah, that’s a brilliant message. We need a long-term vision for the tech industry at this moment, and that’s a very good context for your important message. Again, thank you for joining from Italy, Maksim. It’s been great to have this conversation.

Maksim: My pleasure, thank you very much!

Kohei: Thank you!

Thank you for reading, and please contact me if you would like to join an interview together.

Privacy Talk is a global community of diverse experts. Contact me on LinkedIn below if we can work together!
