Image background generated using Midjourney … with a crap-ton of human iteration

The Humans Behind AI

AI Demystification with Machine Learning Engineer Israel Knight

--

At Craft, a digital experience design and strategy agency, we continue to develop our own perspectives on how to create products and services for and alongside artificial intelligence. A critical step in our process is to seek perspectives from other experts in the field. At our core, we are all curious problem solvers who thrive on gathering data so that we can do right by our users.

In an internal survey, Craft employees identified a desire to better understand how AI models work and how AI engineers build and train them. Fortunately, a few Crafters’ former colleagues are now successful product people working directly on experiences incorporating AI. The Craft team recently had the privilege of inviting Israel Knight, Principal Machine Learning Engineer at Riot Games, to speak with us about the myths and realities of AI. Israel plays a hybrid role, partnering with numerous product teams across Riot to support their product goals through all things data, including the use of AI. It’s through this opportunity and others that we continue to evolve our thinking, our AI experience principles, and our product execution to make good products, with good people, for good clients, with good AI.

The Big Myth: AI Has Agency

Perhaps the most common misconception about AI, whether you’re an everyday user or a product professional, is that AI is some kind of living entity capable of making its own decisions and forming its own opinions. But you can’t blame us for making this assumption, right? (Israel would rightfully argue “wrong!”) After all, we’re humans: social, empathetic creatures with strong survival instincts. It’s easiest for us to interpret the world around us through our own experiences and our understanding of ourselves.

Media would have us believe AI alone is after us. Source: Demystifying AI with Israel Knight

And if that weren’t enough, we’re told daily, through just about every AI headline, that AI is responsible for all the good and bad its influence has on the world.

So, let us debunk.

#1. It’s all just pattern matching.

Today, AI is simply the ability to identify and replicate patterns in data. That’s it. But the sheer vastness of the data sets that these large language models (LLMs) parse is what makes them seem so magical to you and me.

The gradient descent graph illustrates how a machine learning algorithm is optimized during training. Its primary purpose is to minimize an error, or loss, by iteratively adjusting parameters and maneuvering around hills until the gradient is near zero. Source: Gradient Descent GIF, Wikipedia
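To make that concrete, here is a minimal gradient descent sketch in Python, assuming a simple one-dimensional quadratic loss. It’s a toy stand-in for intuition, not any production training loop:

```python
# Minimal gradient descent: step against the gradient until it is near zero.
def gradient_descent(grad, x0, learning_rate=0.1, tolerance=1e-6, max_steps=10_000):
    x = x0
    for _ in range(max_steps):
        g = grad(x)
        if abs(g) < tolerance:  # gradient near zero: we've settled into a minimum
            break
        x -= learning_rate * g  # adjust the parameter downhill
    return x

# Example: minimize loss(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
print(gradient_descent(grad=lambda x: 2 * (x - 3), x0=0.0))  # ~3.0
```

Real models do exactly this, just with billions of parameters instead of one.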

These large language models build on the labor of billions of humans (more on this later) who’ve written logically patterned language over millennia. While Israel couldn’t go into detail on many of the internal uses for machine learning at Riot Games, he pointed to a public example from Legends of Runeterra, where AI’s pattern matching helped product teams better understand how players would experience the game, which card decks dominate, and which underperform. With this pattern matching, designers could better balance the gameplay, improving the player experience.

Riot Games’ Legends of Runeterra used deep reinforcement learning to optimize user gameplay. Source: Macworld.com
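Riot’s system used deep reinforcement learning, which is well beyond a blog snippet, but the underlying pattern-matching idea can be hinted at with plain frequency counting. Here is a hedged sketch of how a team might surface dominant decks from match logs; the deck names and data are entirely hypothetical:

```python
from collections import Counter

# Hypothetical match log: (deck_name, won) pairs standing in for real telemetry.
matches = [
    ("Demacia Elites", True), ("Demacia Elites", True), ("Demacia Elites", False),
    ("Spider Swarm", False), ("Spider Swarm", True), ("Spider Swarm", False),
]

games = Counter(deck for deck, _ in matches)
wins = Counter(deck for deck, won in matches if won)

# Win rate per deck: the simplest "pattern" a balance team might look at.
for deck in games:
    print(f"{deck}: {wins[deck] / games[deck]:.0%} win rate over {games[deck]} games")
```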

It’s important to acknowledge that it might seem like we’re downplaying the power of AI as just a pattern matcher. We’re not. Pattern matching is insanely powerful; it feels magical. It’s just not AI exercising independent decision-making.

#2. There’s so much human effort behind AI.

All AI is designed by humans, programmed by humans, and calibrated by humans, sometimes unethically. AI is nothing but the statistical predictability of what humanity has written. AI might seem magical to you and me, but that is because product teams have made the experiences utilizing AI magical. Those of us in the product design world who have interacted with and designed delightfully magical experiences can appreciate the human ingenuity behind them. AI products are no less reliant on human ingenuity. And in the case of Riot Games and the gaming industry, it was actually gamers who spawned the rise of AI: their need to run massively parallel calculations set off the GPU arms race.

Blame gamers for AI. Not Sam Altman. Source: Demystifying AI with Israel Knight

Take ChatGPT, for example: you may think it’s simply a chatbot experience between you and a magical AI, but therein lies a criminal underestimation of the technical product and engineering effort it takes for you to ask ChatGPT to write your next email. It starts with training processes that cluster petabytes of text data. Engineers then build predictive sequencing systems fed by those data clusters so that the AI spits out semi-coherent words. From there, engineers custom-build the training, overrides, and post-generation content filters that prevent racist diatribes from reaching you and me. A company like OpenAI directly employs 800+ team members, plus far more outsourced individuals, to create the products that leverage AI today. It takes a lot of human effort.
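None of this reflects OpenAI’s actual architecture, but a toy sketch shows the shape of the pipeline: learn word-to-word patterns from text, predict the next word, then filter what comes out. Everything here (corpus, blocklist, names) is hypothetical:

```python
import random
from collections import defaultdict, Counter

# Tiny corpus standing in for the petabytes of human-written text a real LLM trains on.
corpus = "the model predicts the next word and the model learns word patterns".split()

# "Training": count which word tends to follow which (a bigram model, the
# simplest possible stand-in for an LLM's next-token predictor).
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def generate(start, length=8):
    """Build a sequence by repeatedly predicting a likely next word."""
    words = [start]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break
        choices, weights = zip(*followers.items())
        words.append(random.choices(choices, weights=weights)[0])
    return words

# Post-generation filtering: a crude stand-in for the safety layers built after training.
BLOCKLIST = {"some_banned_word"}  # hypothetical
print(" ".join(w for w in generate("the") if w not in BLOCKLIST))
```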

It takes a [human] team to create even some of the simpler machine learning programs that, yes, replace other human teams. Source: Demystifying AI with Israel Knight

So when “AI does something evil,” it’s actually a large workforce, one influenced by its own incentive structures, that created the products and tools leveraging AI to do so, intentionally or not. We must “credit” AI’s impact (good or bad) to us, not AI.

OK sure, but isn’t all this human effort only required for narrow AI? What about Artificial General Intelligence?

#3. All AI today is narrow, not general.

The popular large language models today (Google’s Gemini, Meta’s Llama, and OpenAI’s GPT) are ensembles of narrow AI applications working together as excellent pattern predictors. Narrow AIs are task-based: they do specific things very well (thanks to robust pattern matching) and may seem to you and me to exhibit the reasoning and opining that Artificial General Intelligence (AGI) promises, but they simply don’t. LLMs have the potential to become even more reliable through planning, but even then, they’re still narrow in their execution.

Artificial General Intelligence systems are not on the market yet, and Israel doesn’t necessarily believe LLMs are the answer to AGI.

So as easy as it might be to say that we’re just too narrowly focused (pun intended) on the AI of the past, the AI we interact with today is still just narrow AI, brought to you by, well, humanity over eons.

OK, so what? Why is this important?

We need to build it better.

The human effort behind AI is often overshadowed by the false perception that AI is running the show. AI isn’t unethical. It’s not evil. It’s not racist. It’s not taking our jobs. Rather, our models, our tools, and our products for AI have the potential to be unethical, evil, racist, and job-replacing.

ChatGPT (and others) are products, not AIs. ChatGPT is not simply a UI layer on top of an AI, either. Product teams have designed incredibly robust experiences that make your interaction with AI what it is. Because of this, humans are absolutely still in control of shaping AI. Humans, not AI, are 100% behind AI’s outputs to you as a user.

So let’s build it better. Let’s build it better with more data, and more diverse data. Let’s build it better by making it more fair. Let’s build it better by designing for (against) AI dark patterns. Let’s build it better by asking the tough questions. Are you ready for AI? Does your company have the right data to leverage AI? Is AI even the right solution for the problem you’re solving for your user?

There has never been a better time for designers and researchers to play an active role in shaping the products that use AI. Researchers and designers are (and must be) AI experience ethicists.

Where we go from here.

We continue to refine Craft’s Principles for Good AI with each additional expert we consult. Our principles are living and breathing, just like our influence over AI products. We also continue to seek the perspectives of other experts in the fields of AI and product experience design. We’re actively putting our principles to work on client projects and internal initiatives.

Craft’s Principles for Good AI guide our AI-enabled product creation process internally and with our clients.

We want to thank Israel for sharing his expertise with our team. It’s folks like him who make us optimists about our AI-enabled future. Perhaps more importantly though, it’s our evolved understanding that motivates us to stay responsible in shaping AI products and experiences into the future.

Join us in evolving our thinking. We welcome your thoughts and feedback here and at hello@madebycraft.co
