Privacy Talk with Marielza Oliveira, UNESCO Director for Digital Inclusion, Policies and Transformation in the Communication and Information Sector: How did UNESCO start working on AI and ethics, and where does that work stand today?

Kohei Kurihara
Published in Privacy Talk · Jan 14, 2023

“This interview, recorded on 20 December 2022, covers digital human rights and freedom of expression.”

Kohei had a great time discussing digital human rights and freedom of expression.

This interview outline:

  • How did UNESCO start working on AI and ethics, and where does that work stand now?
  • What do you think is needed for freedom of expression and privacy?
  • What raises concerns about surveillance technology?

  • How did UNESCO start working on AI and ethics, and where does that work stand now?

Marielza: Thank you for the question. And you’re absolutely right. Today, artificial intelligence does play a role in the lives of billions of people, actually almost all of us in one way or another.

Sometimes we don’t even know it. But AI is having really profound consequences in how it is transforming our lives and our societies. You can do amazing things with AI, like offer customized education pathways to millions of students at the same time, or create new jobs, and it can help us tackle global challenges such as climate change or the COVID pandemic. But it also has big risks.

It generates big risks, such as the possibility of deepening existing inequalities between and within countries. For example, as you of course know, there are only a few countries in the world that are actually capable of generating advanced AI technologies.

Most countries are actually being left behind in AI. They don’t have the capacity for AI development. So they become users of other countries’ technologies, and that makes for even higher inequality between countries.

And within countries, we know of course, that at times, vulnerable members of society are not well represented in datasets, in data that feed into these systems. Therefore, these systems don’t really take into account their needs.

And minorities, as well as women, tend to be underrepresented in all the stages of producing, using, and even disposing of artificial intelligence systems. In addition, these systems also contribute to spreading misinformation, disinformation, and hate speech at scale.

We were looking at ChatGPT, and how that system can actually generate an incredible amount of misinformation. We also know that some deepfakes “out there” are generated with the help of artificial intelligence, and so on.

So AI has downsides. We need to address these downsides. At the same time, we should enable and promote the potential that AI has to help society, to contribute to prosperity, to peace, to development in general.

So in November 2021, the 193 member states of UNESCO adopted the Recommendation on the Ethics of Artificial Intelligence. They actually adopted it by acclamation, as it was the very first global standard-setting instrument on artificial intelligence ethics. There are around 170 other instruments, but none of them is actually global.

This is still the only one, and it’s important that we have global norms we can all accept and share, because AI “reaches out” globally. So we need ways of protecting and promoting human rights and human dignity at scale, just as artificial intelligence works at scale.

And that’s why the AI ethics recommendation of UNESCO is such an important guiding compass: it is a global normative framework that helps us strengthen human dignity and the rule of law in the digital world.

It was produced through a deep global consultation that involved experts, civil society, tech companies, and of course all UNESCO Member States. All stakeholders together set principles, including existing and new ones such as proportionality, and also defined policies that countries need to follow in order to really extract the potential of AI while addressing its risks.

These include policies in the areas of education, culture, and communication and information, as well as other areas, including data protection.

The Recommendation also gave us two instruments that are very important. One instrument is a readiness assessment for artificial intelligence ethics that countries can take to understand whether they are ready to really develop and deploy ethical artificial intelligence that is aligned with the human rights framework.

And the other instrument is the impact assessment, which, before you deploy a system, looks at who could possibly be harmed and how we can mitigate those risks. It also looks at what the potential upsides are, and why and how we can maximize the positive impact.

So it’s not just principles. It’s really important to say that the Recommendation is a very practical instrument that goes beyond principles for the development of artificial intelligence: it looks across the lifecycle of artificial intelligence systems, from development all the way to disposal, and sets standards for how to ensure these systems contribute to human dignity, to human rights, to human life.

Kohei: Thank you. I think the UNESCO approach is very diverse. It is very helpful for including minority participants across different jurisdictions, because that is not easy for a single country to do on its own.

  • What do you think is needed for freedom of expression and privacy?

An organization like UNESCO, having set out those kinds of principles, is very helpful for supporting the creation of a more universal framework and for joining up different perspectives.

So I think your work provides very important support for that kind of inclusion. The next topic also concerns one of UNESCO’s primary areas of action.

Freedom of expression is a fundamental right for all of us. Its constitutional importance was already being discussed centuries ago, but today the debate is moving onto the internet.

We post a lot of content on the internet in the name of freedom, but that can sometimes be intrusive and create problems of privacy and data protection. So could you tell us your ideas on what is needed for freedom of expression and privacy to protect human rights?

Marielza: You’re absolutely right. And this is a very important question. So thank you for that, Kohei. Privacy is actually a really fundamental human right, recognized in Article 12 of the Universal Declaration of Human Rights.

It is also recognized in Article 17 of the International Covenant on Civil and Political Rights, and in a lot of other international and regional treaties. It really is the cornerstone of human dignity and of other key values such as freedom of association and freedom of expression.

Privacy is what lets you freely express yourself, which you cannot do if you are afraid of exposing yourself in some way, or if you know that you are going to be surveilled or prosecuted for your expression, and so on.

So nowadays most countries in the world recognize that the right to privacy exists, and even explicitly recognize it in their constitutions. And many of these laws are actually based on how the Organisation for Economic Co-operation and Development and the Council of Europe have developed their privacy laws.

But the problem is that, as you know, one of the most difficult rights to define is the right to privacy, and thus definitions vary widely, depending on context, depending on environment, and so on.

But privacy is becoming even more important now that, as you mentioned, we operate in a digital world. The increasing sophistication of information and communication technologies, and their capacity to collect, analyze, and disseminate information on people, has really created a sense of urgency for us to demand effective legislation in this area.

  • What raises concerns about surveillance technology?

So, digital transformation is really a key reason why, in many countries, the concepts of freedom of expression, access to information, and privacy have been changing so much. The concept of digital transformation is actually being infused with issues of data protection, particularly because new technologies “feed” on data.

These new technologies are essentially made possible because there is a massive amount of data, what we call big data. But much big data is collected in ways that are actually invasive to privacy, and it includes personal information that reveals details about people that legally should be kept private.

With evolving digital technologies, the right to privacy has actually become one of the most important, most demanded human rights, and one of the biggest issues of our time.

People are concerned about surveillance technologies, including wiretapping, video monitoring, biometrics, personal ID systems, data mining, and all that comes with them, and about the capacity of these systems to really invade people’s privacy and enable control over human activity.

And the thing is that, in many countries, the laws have really not kept up with the technology and leave tremendous gaps in the protections that may exist. What UNESCO does is support countries in developing adequate regulatory frameworks, including building their capacities for that.

And we look at how to protect rights in digital worlds, and we also advocate for open data within a human rights framework, because open data can help us solve a lot of problems.

As you may remember, at the beginning of the pandemic, scientists were able to sequence the genome of the COVID virus very fast, exactly because they shared data across countries. And, you know, scientific collaboration speeds up the process of finding a solution.

But for that to happen, one really needs to also have protections for personal data, and protections for data and privacy in general, so that we can build trust and confidence in society, so that people can really share data they may have that can help us develop solutions to common problems, without the fear that they will be manipulated, controlled, or invaded by systems.

So you know, this is a really important area of UNESCO’s work, and we do quite a lot in that regard, including a new guidance framework on open data that we developed recently, and a competency framework for civil servants called the Artificial Intelligence and Digital Transformation competency framework.

One of its key areas is exactly data, covering the management of data from the design of data frames all the way to collection, analysis, etc. But it is covered within the human rights perspective, making sure that data protection and privacy are a key part of how systems are developed. We are helping countries to build those capacities. At the same time, we’re helping people to protect themselves through data literacy.

We also monitor, globally, the number of countries that have information laws and how effectively those laws are monitored and implemented.

Because having a law is not enough: countries should have a budget behind that law, civil servants who are capable of enforcing and monitoring it, and so on. So we help with that, all over the world, as an essential part of the work we do.

Kohei: Thank you. I think these days there are a lot of questions about freedom of expression and privacy, and some of the concerns relate to journalism: when journalists’ own privacy is restricted, it is not easy for them to express themselves.

So is there any solution that protects freedom of expression? Privacy is of course a very important part of protecting it, but do you have any other options for journalists to distribute important expression, through any activities or actors that carry the content?

To be continued…

Thank you for reading, and please contact me if you would like to join an interview.

Privacy Talk is a global community of diverse experts. Contact me on LinkedIn below if you would like to work together!
