What is an AI designer?

And how can they influence ethical AI?

Solveig Neseth
daios
6 min read · Sep 2, 2022


I must admit, this series has been a daunting task. I feel like every piece I produce exhausts my mind a little more on this field of AI and machine learning. The more I write, the less I feel like I know, and, to be honest, the less I feel like I want to know. But I made a commitment to educating myself and the masses. ’Tis the cross I chose to bear. So, as they say in the biz, the show must go on, and so shall I.

And let me tell you, I’m glad I did, because my next interview was a true pleasure. A couple of weeks ago, I had another interesting conversation, this time with Professor Qian Yang. An associate professor at Cornell, Qian is a human-AI interaction designer and researcher. She had a particularly clear and interesting perspective that I found super graspable. Below are some of her ideas; I’m sure you’ll find them as interesting as I did.

Before we go any further, let me first emphasize, as she did to me, that her job description means she designs how things work, not necessarily how they look. Just to make that perfectly clear before we get started, because I was not clear on it until she explained it to me.

Qian’s AI journey began when she worked at a consultancy that helped design cars. The thing about car design is that any new model takes years to develop, which means tons of research over the course of several years. Essentially, her job was to predict how technologies were going to evolve around cars — what features were going to become popular or essential. “It made me really wonder if we can truly just predict what people are going to want,” she told me. “This was back when big data started to become a thing; with so much technology expansion it was becoming really difficult to predict. This is ultimately what drove me to pursue a Ph.D. in human-computer interaction with a focus on AI.”

Qian made several interesting points during the course of our conversation. One was her concept of “unremarkable AI.” In essence, when something becomes so ingrained in society and our daily lives that we hardly notice it, it becomes unremarkable. Qian wrote a paper on this concept, in which she uses electricity as an example. Electricity, a force so wild and powerful it can be lethal, has been integrated seamlessly into our everyday lives. Your phone charger, your bathroom light, your electric razor — all of these things utilize, in some way, the same basic energy source that we now take completely for granted. This is what Qian considers to be the goal for AI systems. “In an ideal world, they should be so seamlessly embedded into people’s lives that we don’t notice them. I think that the fact that we’re having so many discussions about AI right now indicates that we’re definitely not there yet.”

With all of the chatter regarding AI these days, this feels like a difficult feat, but Qian thinks it’s achievable. “Academically, there are two commonly referred-to definitions of AI. The first, more technical definition is that the rules governing the behavior of the system are dependent on the data. The second, more folksy definition is a machine that does something that is typically associated with human behavior. A good example of a really simple system is customer service chats.”
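That first, more technical definition clicked for me once I saw what it looks like in code. Here’s a tiny sketch of my own (a made-up “complaint detector” for a customer service chat, with toy data; none of this comes from Qian) showing the difference between a rule a programmer fixes by hand and a rule the system derives from its data:

```python
from statistics import mean

# Hypothetical training data: (message length, was it a complaint?)
training_data = [(12, False), (40, False), (85, True), (95, True), (120, True)]

# Rule-by-hand: the behavior is fixed by the programmer.
# Feed it different data and its behavior never changes.
def is_complaint_hardcoded(message_length: int) -> bool:
    return message_length > 100  # the 100 is hard-coded forever

# The "technical" definition of AI: the rule itself comes from the data.
# Here the threshold is the midpoint between the two classes' average lengths.
def learn_threshold(data):
    complaint_lengths = [n for n, is_complaint in data if is_complaint]
    other_lengths = [n for n, is_complaint in data if not is_complaint]
    return (mean(complaint_lengths) + mean(other_lengths)) / 2

threshold = learn_threshold(training_data)  # 63.0 for this particular data

def is_complaint_learned(message_length: int) -> bool:
    return message_length > threshold

print(is_complaint_hardcoded(90))  # False: the fixed rule misses it
print(is_complaint_learned(90))    # True: the data-derived rule catches it
```

Swap in different training data and the learned rule changes with it; the hard-coded one never will. That, in miniature, is the “dependent on the data” part.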


“Oftentimes, when AI systems become unremarkable, we don’t think of them as AI anymore. In my Intro to Human AI class, I always ask my students what AI systems they interact with on a daily basis. No one says, ‘Google search,’ because they don’t think of it as AI, but it is, and a pretty sophisticated AI at that. Not to say it’s perfect, but it works well, so we don’t even think about it anymore. When we stop thinking of things as AI, that’s a good sign that we’re getting there.”

So, I asked Qian what she considers the biggest challenge right now when it comes to the world of AI design. Her response: scale. Small-scale AI is simpler, and accounting for its ethical problems is easier. What’s more difficult are things like fake news on Facebook.

“There are two things at play here. One is the large scale of the platform itself. The second is that everyone is dealing with a slightly different system. The data is constantly changing and evolving for each individual user. Every day there are new types of data. Every day you’re dealing with a new AI.” She then compared the system to the world of healthcare. Qian does a lot of research into how AI can help doctors make better healthcare decisions. “In healthcare, the system is ‘fixed.’ You have to have FDA approval for any change, including new data. In consumer AI, however, the platforms and technology change very rapidly.”

The point she drove home the most was that there is a difference between AI as a system and AI as a platform. Essentially, yes, there are a lot of issues with social media, but many of the issues we’re seeing there are platform technology issues and can’t just be boiled down to ‘fixing the AI.’ As Tom, a previous interviewee of mine, said to me: if the goal of the system is misaligned, tweaking only the system isn’t going to work.

What Qian really wants to see is more intentionality in how we make these kinds of distinctions when it comes to AI. “Regulating individual systems for biases and regulating a platform are very different tasks.” We then discussed how best to approach this regulation. Like the other experts I’ve interviewed, she thinks there needs to be a cooperative effort between government, corporations, and the public. “In my dream world, there will be more deliberate distinctions between types of AI systems and how they are used. We can have a policy for ‘all AI,’ but what are the distinctions? Can we categorize them in a way that is relevant to policy making and regulation, or to design activities that prevent unintended consequences? We don’t even have that yet.”

Once again, the weight of responsibility pressed on me. She did emphasize that the public is going to need more awareness when it comes to AI issues, especially those surrounding social media, but she was optimistic that we’re already starting to see that shift. Then she said something to quell my AI anxiety. “When a driver drives a car, it’s not realistic to expect the driver to know how the car is working. It’s for policy makers and car designers to make sure the system hits a baseline of ethics and functionality. The requirements for the driver of the car are very different from the requirements of the policy makers or system designers. That distinction is not as commonly made in AI as you’d expect. There’s a lot of activism around AI about how people need to understand how AI works,” then she jested, “but you don’t necessarily know how much math went into it. It’s unrealistic. We deal with so many AIs in our lives every day. We can divide the responsibilities between a policy level, a corporation level, and a user-awareness level.”

And just like that, I breathed a sigh of relief for the first time since I started this series. No longer must I feel guilty about not wanting to spend my days learning to program in order to understand this better. And I hope you feel that way, too. As we near the end of this series, this will be something I try to hold onto: the reality that, while I am affected by AI, fixing it is not my job. I’m going to try to take comfort in relying on the experts to fight for humanity on this one. After all, if they’re anything like Qian Yang, we’re in good hands.

If you’re curious about who I’ve talked to so far and what I’ve learned, be sure to check out the other installments of my series!
