Privacy Talk with Kris Shrishak, Technology Fellow at the Irish Council for Civil Liberties: What is your take on rights and enforcement of the regulation?

Kohei Kurihara
Published in Privacy Talk · 7 min read · Jun 27, 2022


“This interview, recorded on 19 May 2022, is about technology policy and privacy technology.”

Kohei had a great time discussing technology policy and privacy technology with Kris Shrishak.

This interview outline:

  • What does it take to protect the right to complain?
  • Who should represent consumer rights?
  • What is the discussion about facial recognition in Europe?
  • What is your take on rights and enforcement of the regulation?
  • What does it take to protect the right to complain?

Kris: A related thing is what is known as the right to judicial remedy, and that is another thing we propose. Both of these aspects, Union-level enforcement powers being given to either the Commission or another entity at the Union level, and providing the right to complain and the right to judicial remedy, have currently been included in the Parliament’s draft.

One aspect that I personally am thinking about, but that isn’t currently included anywhere, is the issue of reporting what are known as near misses.

There is a requirement in the regulation that if there is a serious incident actually caused by an AI system, it needs to be reported to the regulators by the manufacturers; I think the term here is ‘providers’.

But I’m concerned that might be too late in some cases. We should have a mechanism for near misses, that is, cases where something went wrong but no major harm was done.

If those are also reported, potentially anonymously, by the providers, then there is scope for a real learning mechanism, and we can prevent larger, serious incidents.

This is not a new concept. We have similar concepts in aviation safety, for example, and in railway safety, at least in Europe. But they were introduced much, much later, not in the first regulations.

So I think it would be a good thing if we could actually include it in the first AI regulation, rather than waiting for issues to pop up. We have already had enough issues with AI systems, so I think we should be considering including near misses in the first AI regulation as well.

Kohei: From the Japanese perspective, Europe is quite advanced, since you have proposed AI regulations, but there are still some points that need improvement to protect consumer rights.

I assume it is very important for consumers to be aware of this, and for regulatory enforcement as well. That is very insightful; thank you for sharing the information.

  • Who should represent consumer rights?

Kris: If I may quickly add one thing: you mentioned consumer rights. In the European Union you have consumer rights, but you also have individual fundamental rights in addition.

The two things that I mentioned go into the second category, but you can also include certain aspects of consumer rights protection.

For instance, there is what is known as collective redress, if I’m not wrong that’s the right term, wherein nonprofit organizations can essentially represent a group of consumers. That is an additional thing that can be done.

I’m not sure if that has already been included in the Parliament’s text or not, but that is also a proposal that, for instance, the consumer rights organization BEUC has been suggesting in Europe.

Kohei: Thank you. Perhaps consumer organizations and consumer rights should be included in political decisions. Policy decisions are made across the community, but the process is sometimes not very democratic.

So we need to include the voices of consumers and citizens more, to create more democratic public regulation. That message of yours is quite important. The next question is about facial recognition AI in Europe.

  • What is the discussion about facial recognition in Europe?

You have proposed some requests through letters, which is quite important. I am also quite interested in the discussion about facial recognition AI in Europe. How has this topic been discussed in Europe, and what is the current status?

Kris: Right. From a regulatory standpoint, facial recognition falls under what is known as biometric recognition.

Biometrics can include other aspects, not only facial recognition. In the Commission’s proposal, the first proposal, it was categorized as what is known as high risk. A high-risk case would be part of the regulation, but it is not prohibited.

There are some nuances that I will leave aside, but essentially that was the case. Then there was a vote, I think late last year, in the European Parliament, which was a non-binding vote, which means it does not become law immediately.

But it shows the position taken by the majority of the members. And there, there was an emphasis on actually banning facial recognition in the public domain.

So if you think of city squares or town squares, facial recognition there using CCTV cameras should be banned; that was at least the non-binding position.

It will be interesting to see what comes out of the Parliament as well as the Council in Europe, because I think some of the ministers have voiced repeatedly that they are not in favor of using facial recognition in public.

One particular aspect is that the European Commission, for instance, makes a distinction between what is known as ‘real-time’ and ‘post’ remote biometric identification. I would personally not make that distinction, because there are so many ways to actually get around restrictions like that.

I just think of it as biometric surveillance. So that is one angle, the regulatory standpoint, but there have also been cases in the past. There are a couple of angles here when you think of the issues.

The first is the database itself: a database of facial images. You might think of them not as images but as what is technically known as templates, because your face is essentially transformed into a biometric template.
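As a rough illustration of this idea, here is a minimal sketch (the numbers and function names are made up for illustration, not a real face-recognition pipeline): a face image is reduced to a fixed-length feature vector, the template, and two faces are declared a match when their templates are similar enough.

```python
import math

def cosine_similarity(a, b):
    # Similarity between two templates (feature vectors).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_match(template_a, template_b, threshold=0.9):
    # Two templates are declared the same person when their
    # similarity exceeds a tuned threshold.
    return cosine_similarity(template_a, template_b) >= threshold

# Hypothetical templates: in a real system these come from a
# neural network that maps a face image to a vector.
enrolled = [0.12, 0.87, 0.45, 0.31]
probe_same = [0.11, 0.85, 0.47, 0.30]   # similar face
probe_other = [0.90, 0.10, 0.05, 0.70]  # different face

print(is_match(enrolled, probe_same))   # True
print(is_match(enrolled, probe_other))  # False
```

The privacy point is that the template, not the photo, is what gets stored and compared, and it can identify a person just as effectively.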

  • What is your take on rights and enforcement of the regulation?

And you might have heard of cases such as the company ClearView AI, which has been selling its products especially to law enforcement agencies. That has raised a lot of red flags, including fines from multiple data protection authorities: if I remember correctly, the Italian authority issued a fine some months back, and the Australian and Canadian authorities have as well.

In addition, I think some of them have also ordered that ClearView AI delete the images of citizens of their countries. The challenge that I see there is: how would they actually enforce it?

We know that the data protection agencies have requested these images be deleted, but I have not seen any evidence of whether they have been deleted, or of anyone being able to check. So that is a challenge, and it links back to the first thing I mentioned about enforcement.

Enforcement first needs to be good on paper, and then we need to figure out how to actually enforce what is on paper. Two angles there.

Kohei: Yeah, thank you. In terms of ClearView AI, I have also done some research: there have been many arguments in other countries, in Canada and Australia, and even the US has tried to regulate it.

Its activity of selling data to governments and police has drawn stronger enforcement, because citizens are very cautious about misinformation and misunderstanding of algorithmic decisions.

So we need to examine exactly how it works in terms of data processing, and then understand it. Thank you.

Kris: The challenge there is this: a lot of the images in the database were not collected with the consent of the individuals, right. And that is the first step of data collection, at least in the GDPR. You should not be collecting people’s private information without their consent.

That is the first step; everything else comes later. So yeah.

Kohei: Yeah, absolutely. To protect consumer data, I suppose there need to be many technical advancements. For example, there are privacy enhancing technologies such as homomorphic encryption, or technologies like Federated Learning, which Google is developing right now.

So the next question is about privacy enhancing technologies, or PETs. Could you tell us what a privacy enhancing technology is, and what is interesting about this tech field?
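As a rough sketch of the kind of privacy-enhancing technique used around federated learning, here is toy additive secret sharing for secure aggregation (a simplified illustration with invented names like `make_shares`, not Google’s actual protocol, and not homomorphic encryption proper): each client splits its private value into random shares, so an aggregator only ever learns the sum, never any individual value.

```python
import random

MODULUS = 2**31  # all arithmetic is done modulo a public constant

def make_shares(secret, n_shares):
    # Split a private value into random-looking shares that
    # add up to the secret modulo MODULUS.
    shares = [random.randrange(MODULUS) for _ in range(n_shares - 1)]
    last = (secret - sum(shares)) % MODULUS
    return shares + [last]

def aggregate(shares):
    # Summing shares reveals nothing about any single secret
    # until all partial sums are combined.
    return sum(shares) % MODULUS

# Three clients with private values (e.g. local model updates).
secrets = [42, 7, 100]
shares_per_client = [make_shares(s, 3) for s in secrets]

# Each of three aggregation servers receives one share per client;
# combining the partial sums yields only the total.
total = sum(aggregate(col) for col in zip(*shares_per_client)) % MODULUS
print(total)  # 149, the sum of the secrets, with no secret revealed
```

Each individual share is indistinguishable from random noise, which is the sense in which such techniques let a service compute over data without seeing it.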

To be continued…

Thank you for reading, and please contact me if you want to join an interview together.

Privacy Talk is a global community of diverse experts. Contact me on LinkedIn below if we can work together!
