The Case for Human-Centered Algorithm Design

Tessa Forshaw
People Rocket
12 min read · Jan 31, 2024
Bringing stakeholders inside the Algorithm Design process.

The Role of Algorithms in our Everyday Lives

If you’ve ever used a professional networking platform, you’re familiar with how quickly your feed evolves. Spend a few moments engaging with a post about innovative sales strategies, and suddenly, your feed becomes a hub for sales content.

These computational algorithms are deliberately designed to present you with information they predict you’ll find interesting; this often means content you’ve shown an interest in, however fleeting, like those innovative sales strategies. But apart from consciously avoiding similar posts, your ability to influence the content you see, or to comprehend the reasons behind its appearance, is quite limited.

Sure, scrolling fast past content you don’t like isn’t a horrible consequence, but it’s crucial to recognize that this algorithmic approach doesn’t just influence our social network interactions; it shapes our personal lives and business outcomes in ways we might not even be aware of.

Judges are using algorithms in sentencing decisions, recruiters are using them to select talent to interview, and search engines use them to regulate what results you see. Algorithms are shaping the world around us and yet we have very little influence over them and how they work.

Accuracy or Transparency?

Most algorithms we encounter daily are developed in a ‘black box’ environment, crafted by technical professionals who are often detached from the individuals most impacted by their designs. These algorithms process data and produce ‘optimized’ outcomes without their creators — let alone users — fully understanding the mechanics behind these conclusions.

This lack of transparency in a business context is problematic. Sure, when your feed is suddenly filled with content about innovative sales strategies, it is not deeply problematic. However, when similar principles are applied in judicial, employment, or consumer behavior contexts, the consequences are more substantive and far-reaching.

Still rare but increasingly sought after are ‘glass box’ algorithms, which aim to address these challenges. In response to the criticisms of black box models, the glass box approach endeavors to provide transparency into the algorithm’s methodology, allowing other computer scientists and, potentially, some technical business professionals to understand its inner workings. Imagine being able to peek inside the professional networking platform algorithm’s decision-making process and understand why certain posts are being shown to you.

Beyond the Existing Paradigm

Simply observing the logic of an algorithm from the outside is vastly different from being immersed in the experience of designing it.

From Black Box to Glass Box, to Beyond the Box

By divorcing the design of the algorithm from the human experience, as traditional algorithm design does, we produce out-of-touch products at best or reproduce inequities at worst. These algorithms raise concerns about transparency, trustworthiness, bias, equity, and the potential for unintended consequences. As an example, professional networking platforms have been criticized for perpetuating echo chambers and biases in job recommendations, where their algorithms prioritize opportunities based on existing networks, often unintentionally sidelining diverse applicants and reinforcing existing disparities.

Remember, algorithms are designed to achieve an objective or outcome. Their primary consideration is optimization. Morality and ethics are not considerations for an algorithm.

To address these concerns, we must design algorithms with and alongside the people they affect. By pushing beyond computer scientists designing “with users in mind” or in “users’ best interests,” we can ensure humans are included in the “why” and “how” of the original algorithm design, and are also involved when the technical implementation introduces new challenges that need moral and ethical guidance.

That’s where human-centered algorithm design comes in.

A Framework for Human-centered Algorithm Design (HCAD)

Drawing on the tenets of human-centered design and the quantitative elements of traditional algorithm design, HCAD brings stakeholders into the process to co-design algorithms that are explicitly human-centered, built on human needs and contexts, and tested for potential negative implications or unintended consequences.

This way, users don’t just see an algorithm through a glass box; they are involved in its end-to-end design, fully grasp how it operates, and have a say in the cost-benefit analysis of possible impacts. In the context of our professional networking platform, this would mean users get to see all the jobs they may be relevant for, regardless of connection proximity, while also having a say in how the algorithm operates and a role in the design and technical improvements that ensure it serves their networking needs effectively and ethically.

To this end, ‘Pauses for Equity’ are an essential part of this practice. These are built-in moments of reflection to work against unconscious bias and hold ourselves accountable for the impact and unintended consequences of our design decision-making.

Eight Elements of Human-Centered Algorithm Design

Of course, all of these practices mean that the process can take longer, require involvement from more stakeholders, and need more thoughtfulness. There is no denying that. However, if the opportunity to ensure that algorithms work for humans and not against them isn’t enough of a sell, think about the savings from mitigating a significant risk before it becomes an issue you need to clean up. Given the chance, Facebook might have preferred to spend a few extra dollars and days thinking through its Like-driven feed algorithm rather than face ad revenue losses, public criticism, and congressional inquiries for accidentally creating a news echo chamber.

Like all good human-centered design processes, HCAD is highly iterative. So keep that in mind as we discuss this in the linear format that written text dictates. Let’s go.

Establish Human Embodiment: Engage with and include people traditionally excluded from technological spaces, teaching them to think like an algorithm. This step involves interactive and physical activities to demystify algorithm processes and address potential biases and unintended consequences in design.

Activity to get you started:
Divide your inclusive design team into two groups, each on opposite sides of the room. Display a series of photos showing either hotdogs or human legs. Each time a photo is displayed, participants must quickly move to the side of the room that represents their guess — one side for hotdogs, the other for legs. This physical movement embodies the process of a classification algorithm, making the concept tangible.
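For teams who want to connect the embodied exercise back to code, here is a minimal sketch of the same “which side of the room?” decision. The features and scoring rule are hypothetical placeholders; a real image classifier would learn them from labeled photos rather than use hand-picked rules.

```python
# A toy, rule-based stand-in for the "hotdog or legs?" classification exercise.
# Each call to classify() is one participant walking to a side of the room.

def classify(photo_features: dict) -> str:
    """Guess 'hotdog' or 'legs' from a few made-up features of a photo."""
    score = 0
    if photo_features.get("has_bun"):
        score += 2                      # a bun strongly suggests a hotdog
    if photo_features.get("mustard_present"):
        score += 1
    if photo_features.get("skin_tone_ratio", 0) > 0.5:
        score -= 1                      # lots of skin tone suggests legs
    return "hotdog" if score > 0 else "legs"

photos = [
    {"has_bun": True, "mustard_present": True, "skin_tone_ratio": 0.2},
    {"has_bun": False, "skin_tone_ratio": 0.8},
]
for photo in photos:
    print(classify(photo))
```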

Questions to ask during an equity pause:
- Who is in the room, and what, if any, voices are missing?
- Are we elevating some voices over others?
- Who are our current and historical selves? What biases do we hold and perpetuate?

Deep Dive into the Problem Context: Using interviews and other qualitative methods, gather a broad range of perspectives and begin to shape the problem statement. This step emphasizes understanding the societal context in which the algorithm will operate and ensuring decisions are informed and user-centric. It is also a place to gather requirements and jobs to be done.

Activity to get you started:
Create a system map to understand all of the players in your ecosystem and how they relate to one another. Then create a stakeholder matrix that includes all the diverse stakeholders and how you will connect with them.
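The matrix can also live as structured data so the team can query it as the project evolves; the fields and example stakeholders below are hypothetical, not a prescribed schema.

```python
# A hypothetical sketch of a stakeholder matrix captured as structured data.
from dataclasses import dataclass

@dataclass
class Stakeholder:
    name: str          # group or role, e.g. "job seekers"
    influence: str     # how much say they currently have: "high"/"medium"/"low"
    impact: str        # how strongly the algorithm's output affects them
    engagement: str    # how we plan to involve them in the design

matrix = [
    Stakeholder("job seekers", influence="low", impact="high",
                engagement="co-design workshops and interviews"),
    Stakeholder("recruiters", influence="medium", impact="medium",
                engagement="feedback sessions on ranked results"),
    Stakeholder("platform engineers", influence="high", impact="low",
                engagement="joint reviews of model decisions"),
]

# Flag groups who are heavily affected but currently have little say.
for s in matrix:
    if s.impact == "high" and s.influence == "low":
        print(f"Prioritize outreach to: {s.name}")
```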

Questions to ask during an equity pause:
- Are we addressing the root causes of the problem or just its symptoms?
- How does this problem and its potential solutions vary across different societal contexts?
- What are the long-term societal impacts of solving this problem with an algorithm?

Mindfully Select Your Data: Carefully choose the data that will power the algorithm, considering the variety of available options and focusing on the data that best serves the identified problem and technological solution. This step involves a critical evaluation of data sources and their alignment with the project’s goals.

Activity to get you started:
Brainstorm potential data sources. Then review every source you brainstormed and mark each one’s relevance, potential biases, and representativeness.
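One way to make that marking concrete is a quick representativeness check. The group labels and percentages below are hypothetical; in practice the reference shares would come from census data or platform-wide statistics.

```python
# A minimal sketch of checking a candidate dataset against a reference population.
dataset_shares = {"group_a": 0.70, "group_b": 0.20, "group_c": 0.10}
reference_shares = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}

for group, reference in reference_shares.items():
    observed = dataset_shares.get(group, 0.0)
    gap = observed - reference
    if abs(gap) > 0.05:   # flag anything more than 5 points off the reference
        direction = "over" if gap > 0 else "under"
        print(f"{group} is {direction}-represented by {abs(gap):.0%}")
```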

Questions to ask during an equity pause:
- How might the data we select reinforce existing societal biases?
- Are there gaps in our data that could lead to underrepresentation of certain groups?
- What types of power are at play in the creation and promotion of each of these datasets?

Creatively Consider Models: Examine different analytical models to determine which best suits the project’s needs. This step encourages creative thinking and exploration of various modeling techniques, whether for classification, regression, clustering, or other purposes.

Activity to get you started:
Sketch different model concepts by creating flowcharts representing how the model will work and illustrating how each would process and interpret data.
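When the sketches narrow things down to a couple of plausible model families, a small throwaway comparison on stand-in data can ground the discussion. This sketch uses scikit-learn with a synthetic dataset; the models and metric are placeholders, not a recommendation.

```python
# A hedged sketch of comparing two candidate model families on the same data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

candidates = {
    "logistic regression (easier to explain)": LogisticRegression(max_iter=1000),
    "decision tree (captures interactions)": DecisionTreeClassifier(max_depth=4),
}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.2f}")
```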

Questions to ask during an equity pause:
- Can we anticipate any unintended consequences of the chosen model?
- How do different types of models unintentionally prioritize or de-prioritize different attributes in the data?
- Are we making the invisible visible?

Design the Algorithm: Methodically design the algorithm, focusing on step-by-step development. This includes physically mapping out the algorithm and building its technical architecture, with a focus on detail and optimization.

Activity to get you started:
Using a large board or a digital tool like Miro, physically map out the algorithm’s architecture, step by step. Then pick a decision point and ask yourself: if I changed this, what would the impact be? Keep going until you have considered every decision point and whether its parameters are fit for purpose.
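Here is a tiny, hypothetical version of that exercise in code: vary a single decision point (how much connection proximity should matter in a job-recommendation score, echoing the earlier networking example) and watch how the output shifts. The scoring rule, weights, and candidates are all made up for illustration.

```python
# A minimal sensitivity check on one decision point in a toy ranking algorithm.
candidates = [
    {"name": "A", "skills_match": 0.9, "connection_proximity": 0.1},
    {"name": "B", "skills_match": 0.6, "connection_proximity": 0.9},
    {"name": "C", "skills_match": 0.7, "connection_proximity": 0.4},
]

def recommend(people, proximity_weight):
    """Rank candidates with a tunable weight on connection proximity."""
    scored = [
        (p["skills_match"] * (1 - proximity_weight)
         + p["connection_proximity"] * proximity_weight, p["name"])
        for p in people
    ]
    return [name for _, name in sorted(scored, reverse=True)]

# One decision point: how much should proximity matter?
for weight in (0.0, 0.3, 0.7):
    print(f"proximity_weight={weight}: ranking = {recommend(candidates, weight)}")
```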

Questions to ask during an equity pause:
- Does each parameter in the algorithm behave as we expect?
- What are the unintended consequences for each parameter?
- Do any of our parameters reinforce systemic or institutional biases?

Iterate with People: Continuously refine the algorithm based on feedback from real human beings. A lot of them. This phase involves an iterative process where each version of the algorithm is improved upon, akin to a feedback loop in product development. This process continues until those most impacted by the algorithm’s outputs are sufficiently confident that their experiences and needs are accurately reflected.

Activity to get you started:
Run the algorithm and get some output. Set up time with a stakeholder and have them think aloud as they review the algorithm’s output. Capture their thinking and insights.
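It can help to log that thinking against specific outputs so each iteration is traceable to what people actually said. The record structure below is a hypothetical sketch, not a prescribed format.

```python
# A hypothetical sketch of logging think-aloud feedback against specific outputs.
feedback_log = []

def record_feedback(output_id, stakeholder, comment, concern=False):
    """Append one observation from a think-aloud session."""
    feedback_log.append({
        "output_id": output_id,
        "stakeholder": stakeholder,
        "comment": comment,
        "concern": concern,   # flag items that need a design change
    })

record_feedback("rec-104", "job seeker",
                "I never see roles outside my existing network", concern=True)
record_feedback("rec-221", "recruiter", "Ranking matches my expectations")

open_concerns = [f for f in feedback_log if f["concern"]]
print(f"{len(open_concerns)} concern(s) to address before the next iteration")
```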

Questions to ask during an equity pause:
- Are we amplifying and recognizing all those involved in the design process?
- Are we being intentional about bringing diverse stakeholders together? And are we identifying barriers that exclude them from participating?
- How can we ensure that the feedback loop isn’t dominated by the most vocal users?

Scrutinize the Results and Impact: By this point, the algorithm you have designed performs well. Making it both precise and fair has two parts. First, benchmark the algorithm’s output against a trusted dataset to gauge accuracy. Second, delve into the subtleties of the output by rigorously testing specific outputs. This in-depth analysis will help you address the broader implications of the algorithm and make explicit, agreed-upon tradeoffs.

Activity to get you started:
Generate the output of the algorithm. Double-click on a selected data point or outcome and scrutinize it for potential biases, inaccuracies, or unfair results. Document the algorithm’s effectiveness and any deviations from expected or fair results.
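A minimal sketch of the two-part scrutiny, with entirely made-up evaluation data: benchmark overall accuracy against a trusted set, then slice the same outputs by group to see who is being selected.

```python
# (prediction, true_label, group) triples from a hypothetical trusted evaluation set.
results = [
    (1, 1, "group_a"), (0, 0, "group_a"), (1, 0, "group_a"), (1, 1, "group_a"),
    (0, 1, "group_b"), (0, 0, "group_b"), (1, 1, "group_b"), (0, 1, "group_b"),
]

# Part one: overall accuracy against the trusted labels.
accuracy = sum(pred == truth for pred, truth, _ in results) / len(results)
print(f"overall accuracy: {accuracy:.0%}")

# Part two: does the algorithm select (predict 1) at similar rates per group?
for group in ("group_a", "group_b"):
    preds = [pred for pred, _, g in results if g == group]
    print(f"selection rate for {group}: {sum(preds) / len(preds):.0%}")
```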

Questions to ask during an equity pause:
- What are the unintended consequences of our work?
- Who is being unfairly impacted or marginalized by our work?
- What pre-existing structures, systems, or institutions are we recreating unintentionally?

Use Responsibly: Publicly acknowledge the algorithm’s limitations and intended uses, and be transparent about potential unintended consequences. This step emphasizes the ethical deployment and ongoing evaluation of the algorithm in real-world settings.

Activity to get you started:
Create an impact cascade map that outlines the potential positive and negative consequences of the algorithm in various contexts. This helps in foreseeing and mitigating negative impacts and enhancing positive ones.
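Alongside the impact cascade map, the limitations and intended uses can be published as a lightweight, model-card-style record. Everything in this sketch, including the fields, values, and contact address, is hypothetical.

```python
# A hypothetical "algorithm card" documenting intended use and known limitations.
algorithm_card = {
    "name": "job recommendation ranker (example)",
    "intended_use": "suggest relevant roles to consenting platform members",
    "not_intended_for": ["automated hiring decisions", "salary setting"],
    "known_limitations": [
        "trained on historical application data that may encode past bias",
        "less reliable for members with sparse profiles",
    ],
    "monitoring": "quarterly review of selection rates across member groups",
    "contact": "responsible-ai@example.com",
}

for field, value in algorithm_card.items():
    print(f"{field}: {value}")
```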

Questions to ask during an equity pause:
- How will we communicate the limitations and intended use of the algorithm to users?
- Could our ideas be misunderstood and used to inadvertently cause harm to others?
- How can we remain accountable for the algorithm’s impact post-deployment?

Standing on the Shoulders of Giants to Reach Forward

In the current discourse surrounding algorithm design, tension exists between those who advocate for stringent government regulation and those championing user empowerment in the development process. This debate underscores a shared goal: to promote innovation and technological potential while safeguarding society from harms that algorithms might unleash, such as breaches of data privacy and job displacement.

On one side, there’s a call for governments to use policy to mitigate these risks and safeguard against threats. Simultaneously, there’s a rising call for a more inclusive approach to technology education, suggesting that involvement in design, particularly at the K-12 level, could democratize technology and make it more accessible and inclusive.

But this isn’t an either/or situation; both government regulatory frameworks and participatory design processes are vital. HCAD emerges as a key strategy to drive a shift in focus from the end products of algorithms to the processes behind their creation.

Building upon the Work of Others

Others are seeing HCAD emerge as a key strategy as well, and thinkers and institutions alike have started to explore it. Take Dr. Eric Baumer, whose NSF-backed research has dived deep into related thinking. Harvard’s AI-Kitchen group even ran a module on a similar topic, and Fortune journalist Mahesh Saptharishi wrote about its pivotal role in the future of Responsible AI. Others, like the Stanford Human-Centered AI Institute, are also exploring similar concepts, focusing on the importance of the relationship between humans and AI.

Of course, this doesn’t negate the global urgency of addressing immediate societal concerns like job disruption and the impact of social media on democratic processes. It proposes that we also future-proof by cultivating a culture where algorithms are designed responsibly and can help humans flourish.

Building on this foundation and our own expertise and experience, we propose a pragmatic HCAD framework aimed at making these principles actionable in real-world settings. This framework acknowledges that while perfection in HCAD may not always be feasible, incorporating even one of its steps, a single activity, or an equity pause can make an algorithm significantly more responsible toward its users, its stakeholders, and the world. Our objective is to co-create a future where digital technologies are technically and ethically sound. We achieve that future by designing responsibly.

Questions about HCAD answered below:

How are ethical considerations integrated into the initial stages of algorithm design?

When we talk about mixing ethics right into the start of making algorithms, it’s like setting the moral compass for the project from day one. This means gathering a team that’s not just tech-savvy but also knows a thing or two about what’s fair and right. They use big ideas like fairness and transparency to guide every decision, making sure the technology we create does good and avoids harm. Before they even start coding, they think hard about who might be affected and in what ways, making sure to steer clear of any potential mess-ups. So, ethics isn’t just an afterthought; it’s part of the algorithm’s DNA.

What are the challenges and barriers to implementing ‘glass box’ algorithms?

Switching to ‘glass box’ algorithms, where everything is open and clear, isn’t easy. It’s not just about the technical stuff; it’s about changing the whole vibe in places where secrets are usually kept close. Companies have to be okay with showing off how their algorithms work, which can feel like giving away trade secrets. Plus, making complex algorithms easy for everyone to understand is really tricky. There’s also worry about people misusing these transparent algorithms. Overcoming these challenges means building trust, creating easy-to-understand transparency standards, and maybe even getting some rules in place to help everyone play fair.

How Do We Know if Human-Centered Algorithm Design is Working?

Figuring out if human-centered algorithm design is really hitting the mark involves looking at more than just the numbers. It’s about asking if the algorithm is fair, if it listens to what different people need, and if it’s making things better in the long run. This means keeping the conversation going with the people who use or are affected by the algorithm, checking in to see if it’s living up to its ethical goals. Success looks like everyone feeling good about how the algorithm works, seeing less bias, and making sure the algorithm can grow and change with society. Long-term studies help us see if these efforts are truly making a difference, ensuring technology makes life better and fairer for everyone.

• As always, these pieces are not the work of one human — they are the written product of many discussions, debates, and intellectual contributions. Huge shout out to Emily Meland, Jake Hale, Julia Henrikson, Victoria Lee, Rich Braden, Meredith Caldwell, Fiona Duerr, and the People Rocket team. Extra special call out to Fiona Duerr for the visual design.
