Use Your Nightmares to Create Actionable AI Guidelines

Justin Pfeifer
SAS Product Design
5 min read · Oct 25, 2023


Illustration by Mark Pernice

Although it is important for an organization to state its values regarding Artificial Intelligence (AI), values alone are not sufficient for building responsible AI systems. Current guidelines for AI governance are largely abstract, vague, and ungrounded. They typically incorporate the latest buzzwords such as “transparency,” “reliability,” and “explainability.”

As an industry leader, SAS is committed to implementing AI responsibly, ethically, and effectively. The Data Ethics Practice (DEP) serves as the guiding light of responsible innovation at SAS. Our DEP comprises a multidisciplinary “Trustworthy AI” team that includes product designers specifically dedicated to determining how and when AI should be used in SAS products. Articulating the company’s approach to responsible innovation, and the outcomes of these efforts, is a key factor in its success.

Reid Blackman’s Ethical Machines provides ideas and suggestions to aid this process. In an attempt to bridge the gap between organizational principles and actionable AI guidelines, I’ve outlined Blackman’s four steps to follow when writing an AI ethics statement.

Step 1. State your values by thinking about your ethical nightmares

A seemingly reasonable approach to articulating ethics statements is to focus on values rather than potential faults, thus producing a positive message. However, pinpointing your ethical nightmares early in the process guides the discussion to a level of specificity that is otherwise hard to reach. By practicing this type of thinking, discussions evolve from statements such as “We respect our clients” into descriptions of the specific ways an organization might fail to do so. That shift grounds each ethical failure in its own context. Composing ethics statements informed by ethical nightmares is like designing an interface with a particular persona in mind: it increases empathy with your clients.

For example, a healthcare software company may draft its AI ethics statement as: “We design software that is accurate and trustworthy.” After identifying its AI ethical nightmare, a patient failing to receive proper care, the company revises the statement to: “We hold patients as a top priority in how we deliver trustworthy AI systems.”

In a recent blog post, Reggie Townsend, Vice President of the SAS Data Ethics Practice, provided the groundwork for how SAS is approaching AI. He defined a three-pronged approach to fostering “AI common sense”: (1) recognize human nature and AI, (2) combat automation bias, and (3) promote critical thinking. Each of these prongs is informed by undesirable outcomes. For example, “AI algorithms used in hiring processes may inadvertently favor certain demographics over others if trained on biased data.” In health care, “a doctor might rely on an AI system to diagnose a patient, despite evidence contradicting the AI’s recommendation.” By stating ethical nightmares, Townsend created context around AI ethics and operationalized his ethics statements.
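
To make the hiring example concrete, the ethical nightmare “our model favors certain demographics” can be translated into a measurable check. The sketch below is my own illustration, not something from Townsend’s post or a SAS product; the field names (“group”, “hired”) are hypothetical, and it simply compares per-group selection rates against the widely used four-fifths (80%) rule for adverse impact.

```python
from collections import defaultdict

def selection_rates(applications):
    """Compute the hiring rate for each demographic group."""
    totals, hired = defaultdict(int), defaultdict(int)
    for app in applications:
        totals[app["group"]] += 1
        hired[app["group"]] += int(app["hired"])
    return {g: hired[g] / totals[g] for g in totals}

def adverse_impact_flags(applications, threshold=0.8):
    """Flag any group whose selection rate falls below the
    four-fifths rule relative to the highest-rate group."""
    rates = selection_rates(applications)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical screening decisions produced by an AI hiring model.
apps = [
    {"group": "A", "hired": True}, {"group": "A", "hired": True},
    {"group": "A", "hired": False}, {"group": "B", "hired": True},
    {"group": "B", "hired": False}, {"group": "B", "hired": False},
]
print(adverse_impact_flags(apps))  # {'A': False, 'B': True} -> group B is flagged
```

A check like this turns a vague worry about bias into a boundary that can be monitored, which is exactly the kind of specificity an ethical nightmare is meant to provoke.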

Illustration by Mark Pernice

Step 2. Explain why you value what you do in a way that connects to your organization’s mission or purpose

The concerns you surface while writing your ethics statement should be kept in mind throughout the entire process of designing and using an AI system. If you fail to do this, your AI values and goals may be perceived as a bonus rather than a necessity. Tying an AI ethics statement’s strategy to the mission of the organization ensures those strategies remain a priority.

For example, an enterprise software company may focus on the trustworthiness, usability, and capability of an AI system, while a company using AI-generated art may put more emphasis on copyright and the sociological implications of its AI products. AI ethics statements should be specific to their context if stakeholders are to view them as actionable and essential.

At SAS, our Visual Data Mining and Machine Learning product is designed to align with the company’s value of ensuring stakeholders maintain power over the AI they use in their decision-making process. Mary Beth Ainsworth, an AI specialist at SAS, explains in her article 3 Essential Steps for AI Ethics, “Humans solve problems, not machines. Machines can surface the information needed to solve problems and then be programmed to address that problem in an automated way — based on the human solution provided for the problem.” The design of this product focuses on keeping humans in control.
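
As one possible illustration of that goal (a sketch of my own, not the product’s actual implementation), a human-in-the-loop pattern surfaces the model’s recommendation and rationale but requires an explicit human decision before anything is executed:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str        # what the model suggests
    confidence: float  # the model's confidence in that suggestion
    rationale: str     # information surfaced to the human reviewer

def act_on(recommendation: Recommendation, human_approves) -> str:
    """Never execute an AI suggestion automatically; a human reviews
    the surfaced rationale and makes the final call."""
    print(f"AI suggests: {recommendation.action} "
          f"(confidence {recommendation.confidence:.0%})")
    print(f"Because: {recommendation.rationale}")
    if human_approves(recommendation):
        return f"Executed: {recommendation.action}"
    return "Deferred to human judgment; no automated action taken."

# Hypothetical usage: the reviewer declines a low-confidence suggestion.
rec = Recommendation("flag transaction for review", 0.62,
                     "pattern resembles prior fraud cases")
print(act_on(rec, human_approves=lambda r: r.confidence > 0.9))
```

The design choice is that the machine surfaces information while the human owns the decision, echoing Ainsworth’s point above.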

Step 3. Connect your values to what you take to be ethically impermissible

Value statements make up the bulk of an AI ethics statement; however, naming what you deem ethically impermissible is often equally important. Declaring which consequences of an AI system, or of its development, are off-limits creates a boundary that makes the value statements more substantive.

For example, a company deploying AI systems in a healthcare setting could state that it would be ethically impermissible for its algorithm to result in wrongly prescribed medications. This creates an easily conceivable, highly contextualized boundary.

As noted in 3 Essential Steps for AI Ethics, “AI can enhance automobile safety and diagnose cancer — but it can also choose targets for cruise missiles. All AI capabilities have considerable ethical ramifications that need to be discussed from multiple points of view.”

Illustration by Mark Pernice

Step 4. Articulate how you will realize your ethical goals or avoid ethical nightmares

Once you understand your ethical nightmares and intolerable outcomes, it is crucial to write concrete statements about how your values and principles will be upheld.

At SAS, we hold human centricity, inclusivity, accountability, transparency, privacy, security, and robustness as our core principles of responsible innovation. These principles were formed in part by imagining scenarios in which they are not upheld. In the case of privacy and security, it would be intolerable for a stakeholder’s privacy or security to be breached. As such, upholding the privacy and security of our stakeholders is treated as an ethical imperative. This goal is realized in our research and design processes, ensuring that ethical dilemmas are proactively addressed and avoided.

Although this article does not cover every challenge in AI ethics, my intent was to provide a quick, helpful approach to getting started with writing AI ethics statements. Once you’ve completed the four steps, Blackman recommends reviewing your ethics statement for common mistakes, highlighting any weaknesses, and repeating the steps. Iterate as many times as necessary to produce the best AI ethics statement you can.


Justin Pfeifer
SAS Product Design

UX design intern at SAS Institute. All opinions are my own.