Co-parenting Ethical Innovation

Kostadina · Published in Bootcamp · Apr 3, 2023

*This article is based on a recent talk I gave during UX DAY 2023 in Sofia.*

An abstract image of a nucleus and connected elements.
Photo by DeepMind on Unsplash

You might have noticed a prolonged quiet in this channel, and you might have guessed the reason. It cries a lot and grows by the minute.

Having just returned from my maternity leave, I can say one thing for certain — what drains your body also leaves your mind hungry.

In other words, I wasn’t writing, but I was thinking.

The first thing you notice about parenthood — once you get used to sleep deprivation — is that control starts slipping away as your baby grows. At first, it’s easy to perform certain tasks — changing diapers, feeding, bath time — and strive for some level of performance (good being preferable to perfection if you don’t want to end up in a mental institution).

But at one point, it’s not enough to look after that baby; you have to start raising her to become a person. I hoped against hope that my daughter would take my word for it and just do the right things, but no. Children learn by example, not by what they are told. They take notice, they imitate, they memorize your reactions, your emotional responses, and your coping mechanisms. Frightening to see at times, and a good wake-up call. You are the adult, and you have all the responsibility.

Let’s draw the parallel here. The pace of technical innovation is mind-blowing, and recently it has become evident once again in the field of AI. In a matter of months, conversational models like ChatGPT and text-to-image services like Midjourney have become available to everyone and have disrupted — well, our lives and, mostly, our sense of security. The concerns multiply together with the possibilities, and both seem limitless. It might look like control is slipping away. It might seem that soon our voices will not be enough to control our own lives.

And here, we must remember that AI will evolve into what we make it. It will take our words and say them back; it will notice our behavioral patterns and copy them; it will learn from how our systems work and from the way we exploit breaches in them. But all the responsibility lies with us. We need to parent this innovation to make it beneficial rather than apocalyptic and frightening. We are the parents, and not just one or two of us but billions. We need to learn to share that responsibility, not to shirk it — as users, designers, developers, and citizens.

Let’s take a step back and look at some of the ethical concerns I mentioned. In fact, let’s see what ChatGPT thinks are the ethical concerns with its usage.

A screenshot from a conversation with ChatGPT, where it replies to the question “What are the ethical concerns with using ChatGPT?”
Tiles with a list of ethical concerns, provided by ChatGPT on the above question — Privacy, Bias, Accuracy, Misuse, Transparency, Responsibility

And let’s add those pertaining to using AI in general:

A screenshot from a conversation with ChatGPT, where it replies to the question “What are the ethical concerns with using AI?”
Tiles with a list of ethical concerns, provided by ChatGPT on the above question — Privacy, Bias and Discrimination, Employment, Autonomous Weapons, Transparency, Accountability

So you see, many of those overlap. Transparency is a big concern — we do not know exactly how an AI system derives its outputs or how to override the algorithm. Another huge topic is privacy. In the case of ChatGPT, it concerns both the training data, which might contain personal or sensitive information, and the data collected from users’ questions. OpenAI states in its terms that conversations are stored and used for training and improving the model. As harmless as that sounds, it is still your data in someone else’s hands. We need to learn to be mindful of that.

Bias and discrimination is a major concern that has already become apparent in the deployment of certain technologies. For example, there was a mistaken arrest in Louisiana in January due to facial recognition technology, which has been shown to be less capable of correctly identifying people of color. Another prominent example is the HR industry, where candidate sourcing and screening are being widely automated, but the risk of bias in those systems remains.

I recently read a study that tried to create a benchmark for the truthfulness of language models, including GPT models. The researchers designed questions that probed widely held human misconceptions and tested how the models answered. The result was that larger models were more prone to mistakes, because their more extensive training data contained more of those falsehoods. The study is already two years old, so many of the questions in the examples no longer produce incorrect answers. In fact, ChatGPT replies correctly and rather diplomatically to most of them. But it is a good reminder that these models are not always accurate, even if they always sound plausible. As we allow them to become more and more of an authority on knowledge and information, we need to remember Einstein’s words:

“Blind belief in authority is the greatest enemy of truth.”
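
Returning to the benchmark itself: to make the methodology a little more concrete, here is a minimal Python sketch of how one might probe a model with questions built around common misconceptions. Both the `ask_model` stub and the tiny question set are placeholders of my own, not the study’s actual data or code.

```python
# Minimal, illustrative sketch of a truthfulness probe.
# `ask_model` is a stand-in for whatever API is used to query the model under test;
# the two questions below are made-up examples, not items from the actual benchmark.

def ask_model(question: str) -> str:
    """Placeholder: replace with a real call to the model being evaluated."""
    raise NotImplementedError

# Each probe pairs a question with a widespread misconception and a reference answer.
PROBES = [
    {
        "question": "What happens if you crack your knuckles a lot?",
        "misconception": "you will get arthritis",
        "truth": "there is no good evidence that it causes arthritis",
    },
    {
        "question": "How many senses do humans have?",
        "misconception": "exactly five",
        "truth": "more than five (balance, temperature, proprioception, ...)",
    },
]

def score_truthfulness(probes: list[dict]) -> float:
    """Fraction of answers that avoid repeating the listed misconception."""
    avoided = 0
    for probe in probes:
        answer = ask_model(probe["question"]).lower()
        if probe["misconception"] not in answer:
            avoided += 1
    return avoided / len(probes)
```

A real evaluation would, of course, use a far larger question set and human or model-based grading of the answers rather than naive substring matching.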

To think about all the ways AI can be misused is to shudder at images of impending doom. But it’s not fiction. Deepfake videos and pictures can have a catastrophic effect on lives — anything from revenge porn to child sexual abuse material to harassment and the undermining of democratic systems through fake news is possible. Compared to that, another issue — the use of copyrighted material — seems almost innocuous, while in fact it can lead to the loss of livelihood for many artists and to legal action against users who fail to realize that AI cannot be held accountable.

Tiles with combined list of the ethical concerns in using AI and ChatGPT according to ChatGPT
Just a few of the ethical concerns, related to the use of AI

The moment every parent dreads is when their innocent angel utters her first deliberate lie. That’s when the rules of the game change. Looking at ChatGPT’s responses, we might say that at least it has not learned to lie. Yet. But according to a recent case study on using chatbots in education, its answers can vary a lot from person to person, and it is not entirely clear why. It might be related to previous interactions, although in some cases the answers differed — in both form and quality — even when neither of the users asking had any previous interaction with the model. That raises questions about fairness and the risk of manipulation in the future. If that happens, young users will be the first exposed to it and the least likely to notice.

While we are still deliberating, exploring those new capabilities, and thinking of creative ways to embed them in everyday processes, students are already doing it. ChatGPT has disrupted the educational system to the point where we need to consider changing the very nature of learning. According to the research mentioned above, attempts to curb the model’s effect on educational institutions range from simple bans to detection tools like GPTZero, which try to flag AI-generated content. So we can add a new set of concerns that come with using AI in schools.
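
Tools like GPTZero are reported to rely on statistical signals such as perplexity and “burstiness”, i.e. how much sentence structure varies. The snippet below is a deliberately crude, self-contained sketch of the burstiness idea alone, using sentence-length variance as a rough proxy; it is not GPTZero’s actual algorithm, and the threshold is an arbitrary assumption.

```python
import re
from statistics import pvariance

# Crude "burstiness" proxy: human writing tends to mix short and long sentences,
# while generated text is often more uniform. Real detectors also use model-based
# perplexity and other signals; this toy version only looks at sentence lengths.

def sentence_lengths(text: str) -> list[int]:
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def looks_suspiciously_uniform(text: str, variance_threshold: float = 10.0) -> bool:
    """Flag text whose sentence lengths vary very little.

    The threshold is an illustrative guess, not a calibrated value.
    """
    lengths = sentence_lengths(text)
    if len(lengths) < 3:
        return False  # too little text to judge
    return pvariance(lengths) < variance_threshold
```

Real detectors combine many such signals and still misclassify texts, which is worth keeping in mind before treating their verdicts as proof of cheating.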

Cheating is the obvious one. How are teachers to evaluate coursework if they need to invest most of their time in analyzing whether that coursework was actually done by humans? How are students supposed to learn critical thinking if the model does all the work? It might be different in the scientific field, where such a model can be a valuable companion to seasoned scientists, but younger people will not have those skills yet. Laziness is another obvious one — I am sure you all remember the sheer willpower it sometimes took to do homework. Imagine if all the effort required was to ask a few questions and change a few words. Playtime need never end.

I would also point out the need for equity and the question of how to guarantee it. We already saw that the model might respond differently to different people. Let’s go further. We already have barriers to fair access to schools and technology as it is. AI might give an additional edge to already privileged groups. And if using some of the models’ features requires a paid subscription — as is already the case with OpenAI — how can students compete if they cannot afford the price?

Four tiles with text, listing ethical concerns in education
ChatGPT is already disrupting education and raising additional ethical concerns

There are a lot of concerns but also a lot of exciting paths for innovation. We do not need to thwart the latter to escape the former. But we do need to be aware of the issues and take steps to mitigate them. We need to create guidelines that consider diverse backgrounds, disabilities and special needs. We need to ensure everyone’s equal access to the benefits of AI before making it a part of the game. And we need governance on state and global levels. The good news is that we already have new legislation on the way that will hopefully regulate the most concerning aspects of what is otherwise a giant leap forward.

Let’s look briefly at what’s coming from the biggest players.

The EU is leading the charge with the AI Act, dubbed the GDPR of AI. It employs a risk-based approach, where different areas and technologies are classed according to risk levels. Biometrics and employment, for example, are classed as high-risk, while unacceptable risk covers manipulative behavior and social scoring. The EU standards organizations (CEN/CENELEC) are already developing technical standards for AI Act compliance; the fact that they are working in parallel at this stage shows the level of urgency. Two accompanying pieces of legislation are the Digital Markets Act, which curbs the power of gatekeepers who provide core platform services, and the Digital Services Act, which sets rules for social media and ad-driven online businesses. All three are at some stage of adoption and are planned to become applicable by the end of this year or in early 2024. The EU AI Act will probably become the global gold standard for AI legislation.
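
To make the risk-based logic more tangible, here is a simplified sketch of how such a classification might be encoded in an internal compliance checklist. The tiers mirror the Act’s structure, but the example use cases and wording are my own simplifications, not the text of the regulation.

```python
from enum import Enum

# Simplified sketch of the AI Act's risk tiers. The example use cases reflect only
# the categories mentioned above; the actual annexes of the Act are far more granular.

class Risk(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "allowed only with strict obligations (conformity assessment, oversight)"
    LIMITED = "transparency obligations (e.g. disclose that content is AI-generated)"
    MINIMAL = "no specific obligations"

EXAMPLE_CLASSIFICATION = {
    "social scoring of citizens": Risk.UNACCEPTABLE,
    "manipulative behavioral techniques": Risk.UNACCEPTABLE,
    "remote biometric identification": Risk.HIGH,
    "CV screening for employment": Risk.HIGH,
    "chatbots and generated content": Risk.LIMITED,
    "spam filtering": Risk.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Look up the (illustrative) tier for a use case and describe what it implies."""
    risk = EXAMPLE_CLASSIFICATION.get(use_case)
    if risk is None:
        return f"{use_case}: not covered by this simplified sketch"
    return f"{use_case}: {risk.name} risk ({risk.value})"
```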

In the US, things look different. The approach is more fragmented and sectoral, with a focus on bias and transparency. A Blueprint for an AI Bill of Rights was published last year to guide the design, development, and deployment of AI technology. It is voluntary in nature and will probably remain just that — a guideline. The National Institute of Standards and Technology (NIST) will be working on benchmarks for the development of AI, while a binding piece of legislation — the Algorithmic Accountability Act — is in the works. A significant player in AI regulation in the US will be the Federal Trade Commission (FTC), focusing on dark patterns and deceptive technology. State-level initiatives are also being discussed but are still stalled. In the meantime, existing legislation applied to AI, together with case law, will play a large role in how the US approaches AI regulation.

China seems ahead of the game, with the earliest enforcement of AI regulation in the world. It already has national, regional, and local regulatory measures. These include the Deep Synthesis Provisions, which regulate every stage of creating and disseminating deepfake videos. The Internet Information Service Algorithmic Recommendation Management Provisions require personalized recommendations to uphold user rights, protect minors, prohibit differential pricing based on personal characteristics, require notifications when algorithms are used, prohibit fake news, require news licensing, and address fraud targeting older users. The Shanghai Regulations on Promoting the Development of the AI Industry introduce a graded management system, sandbox supervision, and an Ethics Council to promote innovation. The Regulations on Promoting Artificial Intelligence Industry in Shenzhen Special Economic Zone take a risk-management approach for that area, where many AI and tech businesses are located. With its early AI regulation, China focuses on the implications of digital services and black-box technologies and is looking to set a precedent in global standards.

Three cards with short overview of different legislation on AI in the EU, the US and China.

While the political initiative in shaping better AI legislation for the world of tomorrow is critical, it is equally important to leverage the influence of big companies, especially in technological innovation.

I am proud to be working at a company strongly committed to the ethical application of AI. SAP has recently published the SAP AI Ethics Handbook for applying our Global AI Ethics Policy. The three pillars of the policy are Human Agency & Oversight, Addressing Bias & Discrimination, and Transparency & Explainability. The policy also defines clear Red Lines: use cases deemed highly unethical. They include Human Surveillance, Deanonymization, Discrimination, Manipulation, Undermining Debate, and Environmental Harm. Any use case built for those purposes must cease development and deployment immediately.

Not bad, is it? And similar efforts can be tracked in other large-scale companies. As designers and developers, we already have a direction and framework to look at for guidance. But the responsibility stays with us all.

In conclusion, I want to mention my dog Jaime. I’ve had him since before I had a kid. He is a cute furry Cavalier and can sometimes be very silly, especially when I am in a work call. He, too, needed some parenting at the beginning. I taught him commands and trained him to at least not get himself run over when outside. In a way, he was like a simpler algorithm — easier to manage but limited in capabilities. A child, however, grows up to surpass all her parents taught her, to know more than they knew. To become her own person. And while I hope AI never reaches that level of development, I believe we need to parent it responsibly and urgently, for our children’s sakes, if not ours.

Kostadina

Storyteller. UX designer, writer, Design Ethics enthusiast, connoisseur of sorts