Human-Centered AI is Within Reach

IDE researcher offers a primer on keeping AI interactions customer-centric, not friction-free

MIT IDE
MIT Initiative on the Digital Economy
6 min read · Jul 30, 2024


By Irving Wladawsky-Berger

Psychological studies have shown that our judgment is affected by the many decisions we make every day and by the friction-based pain points of making those decisions. So, not surprisingly, we look to technology and algorithms to help make our decisions as frictionless as possible, said Renée Richardson Gosline (https://www.reneegosline.com/biography) in her keynote at the 2024 MIT IDE Annual Conference.

Gosline is a Research Scientist and Senior Lecturer at the MIT Sloan School of Management, and head of the Human-First AI group at MIT’s Initiative on the Digital Economy. “But when AI is involved, sometimes we need to add beneficial friction. Sometimes, it’s useful for us to engage System 2,” she added, referencing the pioneering behavioral research on human judgment and decision-making by Daniel Kahneman, for which he was awarded the 2002 Nobel Prize in Economics.

In his 2011 bestseller Thinking, Fast and Slow, Kahneman wrote that our mind is composed of two very different systems for reasoning and making decisions, which he called System 1 and System 2. System 1 is the intuitive, fast and emotional part of our mind. Thoughts come automatically and very quickly to System 1, without us doing anything to make them happen. System 2 is the slower, logical, more deliberate part of the mind. It’s where we make complex decisions by evaluating and choosing among multiple options.

Gosline’s Human-First AI research group aims to “add beneficial friction in AI-mediated systems to ensure that humans are in the loop,” as well as to reduce bad friction. When a choice is being made,

“we might want to remove friction [to] make the best choice with AI’s help. But there may be circumstances when we want to add friction to reduce harm.”

“No doubt removing friction-based pain points can be beneficial, as in the case of simplifying healthcare systems, voter registration, and tax codes,” wrote Gosline in “Why AI Customer Journeys Need More Friction,” an article published in the Harvard Business Review in June of 2022. “But, when it comes to the adoption of AI and machine learning, ‘frictionless’ strategies can also lead to harm, from privacy and surveillance concerns, to algorithms’ capacity to reflect and amplify bias, to ethical questions about how and when to use AI.”

“Cognitive biases can undermine optimal decision-making, and the decisions around the application of AI are no different.

Humans can hold biases against or in favor of algorithms, but in the latter case, they presume greater AI neutrality and accuracy despite being made aware of algorithmic errors. Even when there is an aversion to algorithmic use in consequential domains (like medical care, college admissions, and legal judgments), the perceived responsibility of a biased decision-maker can be lowered if they incorporate AI input.”

Identify Good Friction

“Good friction is a touch point to a goal that gives humans the agency and autonomy to improve choice, rather than automating the humans out of decision-making,” Gosline said. This is a very important consideration for firms as they seek to deploy AI to improve their customer experiences, particularly given the “black box” nature of AI systems — that is, that we generally cannot explain how AI models arrive at their decisions.

“The promise of AI is tremendous, but if we are to be truly customer-centric, its application requires guardrails, including systemic elimination of bad friction and the addition of good friction,” added Gosline. “Companies should analyze where humans interact with AI and investigate areas where harm could occur, weigh how adding or removing friction would change the process, and test these modified systems via experimentation and multi-method analyses.”

She clearly explained the difference between good and bad friction in her HBR article. Good friction should enhance the overall customer journey by helping consumers better understand the implications of their choices. Bad friction, on the other hand, places obstacles that undermine human agency; it “disempowers the customer and introduces potential harm, especially to vulnerable populations.” To better understand the impact of AI on their customers, firms should conduct “friction audits” to identify touchpoints where good friction could be deliberately employed to benefit the user, or where bad friction has nudged customers into “dark patterns.”

Influence or Manipulation?

When assessing the positive or negative role of friction, firms should keep in mind the welfare of their customers. Nudging, for example, is a powerful way to influence the behavior and decision making of groups or individuals, but it can become manipulative if not wielded carefully. To reduce such risks to their reputation, Gosline offers three suggestions:

  1. When it comes to the deployment of AI, practice acts of inconvenience. Offering more choice can be less convenient, but affirmative consent must be preserved, as in the case of default cookie acceptance, where consent is assumed rather than actively given. Ultimately, firms need to consider very carefully when it is appropriate to use AI and when it is not: “Should AI be doing this? And can it do what is being promised? Deliberately place kinks in the processes that we have made automatic in our breathless pursuit of frictionless strategy and incorporate ‘good friction’ touchpoints that surface the limitations, assumptions, and error rates for algorithms.”
  2. Experiment (and fail) a lot to prevent auto-pilot applications of machine learning. “This requires a mindset shift to a culture of experimentation throughout the organization, but too often, only the data scientists are charged with embracing experimentation. … Re-acquaint yourself and your team with the scientific method and encourage all members to generate testable hypotheses, testing small and being precise about the variables. … Plan for lots of experimental failure — the learning is friction. But it’s not just about failing fast, it’s about incorporating the lessons, so make the dissemination of these learnings as frictionless as possible.”
  3. Be on the lookout for dark patterns. “Gather your team, map your digital customer journey, and ask:

Is it easy for customers to enter a contract or experience, yet disproportionately difficult or inscrutable to exit?

If the answer is yes, they are likely in a digital version of a lobster trap. This entry/exit asymmetry undermines a customer’s ability to act with agency, and nudging along this type of customer journey can start to resemble manipulation. … Increased transparency into customer options, though not frictionless, preserves customer agency and, eventually, trust. This is critical for customer loyalty.”

In her IDE keynote, Gosline noted that in physics, friction is neither good nor bad. It’s a force acting on a runner’s feet that serves a very important purpose: without friction, the runner would slide all over the place, but with beneficial friction, the runner can accelerate and change direction as needed. Similarly, that should be the role of friction for all organizations: reduce or eliminate bad friction, and add beneficial friction in AI-mediated systems to ensure that humans are in the loop.

She further explained that there will be circumstances under which we want to add friction to minimize harm. For example, a pop-up that appears when we interact with a chatbot, warning us to be careful because the chatbot’s response may contain errors: adding friction by highlighting potential errors is warranted when it improves accuracy.
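As a concrete illustration of this kind of beneficial friction, the sketch below wraps a chatbot reply with a visible caveat before showing it to the user. This is a hypothetical example, not code from Gosline’s group; the function name and the error-rate figure are illustrative assumptions.

```python
# Hypothetical sketch of "beneficial friction": attaching a visible
# caveat to a chatbot reply so the user pauses (engages System 2)
# before acting on it. Names and numbers are illustrative only.

def with_beneficial_friction(reply: str, error_rate: float) -> str:
    """Return the reply plus a caveat that surfaces the model's
    estimated error rate, a 'good friction' touchpoint."""
    caveat = (
        f"Note: this answer may contain errors "
        f"(estimated error rate ~{error_rate:.0%}). "
        "Please verify important details before relying on it."
    )
    return f"{reply}\n\n{caveat}"

print(with_beneficial_friction("Paris is the capital of France.", 0.05))
```

The design choice mirrors Gosline’s point: the warning slightly slows the interaction, but it preserves the user’s agency by surfacing the system’s limitations rather than presenting its output as frictionless truth.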

3 Ways to Put Humans First

“What does it all mean?” asked Gosline at the end of her keynote. She answered with three key takeaways based on the research of her Human-First AI group:

· Beneficial friction can serve as a cognitive speed bump that improves accuracy without sacrificing time;

· Despite the proliferation of GenAI, human involvement is still valued and pervasive; and

· AI does not have to be perfect to be useful, so be transparent about any shortcomings, weaknesses, or errors that AI may have.

“We should be human-first and use friction to our advantage,” concluded Gosline. “AI does not mean we should automate the human element out; quite the opposite. It’s time we viewed friction as a tool that, when harnessed effectively, can spark the fire of empowerment and agency, as well as convenience — not as something to eradicate. This will lead your firm to become not only customer-centric, but human-first.”

This blog first appeared July 11 here.
