The Ethics of Designing a (Generative) AI Product
A brief exploration of a (kinda) new design material: Part 2
Since the advent of generative artificial intelligence (genAI), its application has often veered towards sensationalism rather than practicality. Artificial intelligence (AI) should not be about seemingly creative tasks; rather, its deployment should focus on areas where it can provide relief and augmentation in tasks that are resource-intensive or monotonous for humans.
AI excels at recognizing patterns, processing vast amounts of data, and executing repetitive tasks with precision and speed. This makes AI particularly effective in areas such as data analysis, image and speech recognition, and predictive analytics. However, even genAI lacks the ability to understand problems on a deeper, contextual level. It does not possess intuition, empathy, or the creative thinking that humans bring to problem-solving. And while AI can identify trends and generate insights from data, it is the human ability to interpret these insights, apply them to real-world scenarios, and make nuanced decisions that drives true innovation. It is not a matter of AI versus humans, but rather a symbiotic relationship where each complements the other. It should not be about replacing human capabilities but enhancing them to free us to focus on areas where human intuition and creativity are irreplaceable. It should be about pain relief.
Disclaimer: I’m not an AI expert by trade, just a fellow designer trying to keep up with the world we live in.
Core Ethical Considerations
When discussing the integration of genAI into our lives, several core ethical considerations must be addressed to ensure responsible and equitable use.
The Frankenstein Complex
One of the foundational ethical considerations in AI is what is colloquially known as the “Frankenstein Complex” — the fear that what we create may get out of control and cause harm. The potential for creating systems that operate beyond our intended boundaries poses real risks, and this fear is a prominent part of the public perception of AI. Designers must consider safeguards, oversight, and corrective mechanisms to prevent and mitigate such risks, ensuring that AI systems do not evolve or behave in unintended, harmful ways.
Privacy, Surveillance, Manipulation and Copyright
The capabilities of AI to collect, analyze, and store vast amounts of personal data raise significant concerns about surveillance and manipulation. This ability can threaten individual freedoms and societal norms, making it essential to establish strong privacy protections and transparent data usage policies to safeguard personal information. Designers are tasked with the critical responsibility of embedding privacy by design and being transparent about data usage.
Data ownership is a complex issue, particularly concerning the data used to train AI models. Under Directive (EU) 2019/790, for example, as implemented in German copyright law (§§ 44b and 60d UrhG), the use of data for training AI is, within limits, legally permissible. Ethically, that is a different discussion. Additionally, most current copyright laws around the world do not grant ownership rights to AI-generated works, further complicating the landscape of data ownership and intellectual property.
Compliance with regulations such as the General Data Protection Regulation (GDPR) is paramount. GDPR mandates strict guidelines for data collection, processing, and storage, ensuring that personal data is handled with the utmost care. AI systems must be designed to respect these regulations, incorporating features like data minimization, user consent, and the right to be forgotten.
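As a minimal sketch of what ‘privacy by design’ can mean at the code level, consider a hypothetical pre-processing step that checks consent and strips every field a feature does not strictly need before anything reaches a model. All names, fields, and the storage shape here are invented for illustration, not a real API:

```python
# Minimal sketch of "privacy by design": consent gating, data
# minimization, and deletion before/around any AI processing.
# All field names and structures are illustrative placeholders.

ALLOWED_FIELDS = {"language", "accessibility_prefs"}  # data minimization

def prepare_for_model(profile: dict, consent: dict) -> dict:
    """Return only the data the user consented to and the feature needs."""
    if not consent.get("ai_processing", False):
        raise PermissionError("User has not consented to AI processing.")
    # Strip everything that is not strictly required for the task.
    return {k: v for k, v in profile.items() if k in ALLOWED_FIELDS}

def forget_user(store: dict, user_id: str) -> None:
    """Right to be forgotten: delete all stored personal data."""
    store.pop(user_id, None)

profile = {"name": "Ada", "email": "ada@example.com", "language": "de",
           "accessibility_prefs": {"font_scale": 1.5}}
print(prepare_for_model(profile, consent={"ai_processing": True}))
# -> {'language': 'de', 'accessibility_prefs': {'font_scale': 1.5}}
```

The point of the sketch is the ordering: consent is checked and data is minimized before the model ever sees anything, not cleaned up afterwards.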
Opacity and Bias
AI systems, especially those based on any kind of machine learning, can often be opaque — making it difficult to understand how decisions are made. This “black box” issue is compounded by the biases that these systems can inherit from their training data. While technology itself does not discriminate, the historical and societal biases embedded in the data used to train AI models do. Designers must strive for transparency and interpretability in AI systems, actively working to identify and correct biases, and ensuring that AI decisions are fair and equitable.
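What ‘actively working to identify biases’ can mean in practice is easiest to show with a toy audit. The sketch below compares a model’s positive-decision rate across two groups (a simple demographic parity check); the data and the tolerance are invented for the example, and real audits require far more care and context:

```python
# Illustrative bias check: compare a model's positive-decision rate
# across groups (demographic parity). Data and threshold are invented
# for the example; real fairness audits are much more involved.

decisions = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

def approval_rate(group: str) -> float:
    rows = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in rows) / len(rows)

gap = abs(approval_rate("A") - approval_rate("B"))
print(f"demographic parity gap: {gap:.2f}")
if gap > 0.1:  # arbitrary illustrative tolerance
    print("warning: decisions differ notably between groups -> investigate")
```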
Hallucination
GenAI in particular, but also many other types of AI, can produce fabricated or misleading outputs, commonly known as “hallucinations”. These inaccuracies can be harmless, or they can lead to significant misunderstandings or misrepresentations. It is vital for designers to implement checks and balances within AI systems to detect and mitigate such issues, ensuring the reliability and accuracy of AI-generated information. On the other side, we need design patterns that let users build calibrated trust in the system and understand what they can believe and what they cannot.
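One possible shape for such a check, sketched here with a deliberately naive keyword overlap (a real system would use proper retrieval and verification), is to only label claims as verified when they can be matched against a trusted source, and to surface everything else as unverified so users can calibrate their trust:

```python
# Sketch of one "check and balance" against hallucinations: match
# claims against trusted sources and label the rest as unverified.
# The overlap matching below is purely illustrative.

TRUSTED_SOURCES = {
    "gdpr": "GDPR mandates strict guidelines for data collection.",
    "dsm": "Directive (EU) 2019/790 regulates text and data mining.",
}

def is_grounded(claim: str, min_overlap: int = 2) -> bool:
    words = set(claim.lower().split())
    return any(len(words & set(src.lower().split())) >= min_overlap
               for src in TRUSTED_SOURCES.values())

for claim in ["GDPR mandates strict guidelines for data collection.",
              "The moon is made of training data."]:
    label = "verified against sources" if is_grounded(claim) else "unverified"
    print(f"[{label}] {claim}")
```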
Accountability
A critical ethical question in AI development is determining who is responsible for the actions of AI systems. As designers, we have an obligation not only to create systems that are ethical by design but also to establish clear guidelines and accountability frameworks that assign responsibility and address potential issues, ensuring that AI operates within ethical and legal boundaries.
Our Responsibility
We have the duty to consider the impact of our creations on the world. Every decision we make can have far-reaching consequences, influencing how people interact with technology and each other. AI in particular is so powerful that it is imperative we take this responsibility seriously and think critically about the ethical implications of our work. As Spidey’s Uncle Ben reminds us, “With great power comes great responsibility”. Some areas where we should be careful with genAI:
Independent Autonomous Decision-Making
Especially the way foundation models are trained makes them a sort of “black box”: we don’t know exactly how they work or which connections between data points they draw on. Incorporating genAI into the autonomous handling of tasks can therefore easily lead to scenarios where the decision-making process is opaque and not easily understood by users or even by the developers of the AI itself. This lack of transparency raises significant concerns about accountability, especially when decisions have serious implications. Determining who is responsible for AI’s actions — whether it’s the developers, the company, or the machine itself — remains a contentious issue.
AI-led Creation
As we already discussed: AI can’t handle tasks that require an understanding of deeper context or emotional nuance. While AI can generate content based on patterns it has learned, it does not genuinely ‘understand’ the problem it is solving. This can lead to outputs that are technically correct but contextually inappropriate, missing the mark in terms of meeting true user needs and expectations.
Overreliance on AI
Relying heavily on AI can lead to what is academically known as the ‘automation paradox’, where increasing the level of automation actually decreases overall efficiency because of over-dependence on technology. This risk is particularly evident in critical areas such as semi-autonomous cars, and Air France Flight 447 offers a very non-AI example of how real this risk is. On that flight, the autopilot disengaged after ice crystals blocked the aircraft’s airspeed sensors, and the pilots, despite years of experience, were not prepared for manual control of the aircraft and were unable to prevent the crash.
Value Driver
So, what should we actually use genAI for? Where can genAI bring actual value? This is, of course, a question (as with everything in this exploration) with no final answer, but there are a couple of value propositions that are feasible as of now, which include (but are not limited to):
Intent-Based Outcome Specifications
AI can enable intent-based outcome specifications, transforming how products and services meet user needs. By recognizing patterns and linking them to the intention behind user interactions, AI systems can provide tailored responses and functionalities that align with user goals. This means AI can ‘anticipate’ user needs and deliver personalized outcomes, improving satisfaction and efficiency.
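To make this less abstract, here is a toy sketch of mapping free-form input to an intent and then to an outcome, instead of exposing dozens of manual controls. The intents, keywords, and outcomes are all invented for the example; a real system would use a trained model rather than keyword matching:

```python
# Toy illustration of intent-based outcome specification: the system
# maps free-form input to an intent and applies an outcome, rather
# than making the user operate settings. Intents are invented.

INTENTS = {
    "focus":  {"keywords": {"focus", "quiet", "concentrate"},
               "outcome": "mute notifications, dim UI, start focus timer"},
    "travel": {"keywords": {"trip", "flight", "vacation"},
               "outcome": "enable roaming hints, surface itinerary"},
}

def resolve_intent(utterance: str) -> str:
    words = set(utterance.lower().split())
    best_name, best_spec = max(
        INTENTS.items(),
        key=lambda item: len(words & item[1]["keywords"]))
    if not words & best_spec["keywords"]:
        return "no intent recognized -> fall back to manual controls"
    return f"intent '{best_name}' -> {best_spec['outcome']}"

print(resolve_intent("I need to focus for two hours"))
print(resolve_intent("show me the settings menu"))
```

Note the fallback: when no intent is recognized, the system hands control back to the user instead of guessing.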
Monkey Jobs
If there is one thing where AI really excels, it’s at automating repetitive and mundane tasks. By taking over these routine activities, AI allows human workers to focus on more complex and strategic tasks that require creativity and critical thinking. Automation not only boosts productivity and efficiency but also reduces the risk of human error in monotonous processes.
More of the Same
One of the most apparent benefits of AI in products lies in pattern recognition and reproduction. Spotify’s personalized playlists are a well-known example. But there are also more generative cases: Perplexity uses AI to provide highly personalized and contextually relevant search results, enhancing the user’s ability to find information that precisely matches their needs. Similarly, “creative AIs” like the music AI Suno generate custom music tracks based on user prompts, ultimately by reproducing established patterns and thus producing more of the same.
Alien Perspectives
In terms of fostering creativity and exploring diversity, AI introduces what James Evans refers to as “Alien AI”. The idea is that AI systems can offer novel perspectives and solutions by ‘thinking’ in ways that differ fundamentally from human cognitive patterns, thereby adding to the creative discussion. Yes, this stands in stark contrast to the fear discussed earlier, and it is an expression of what we can win if we ride this dragon instead of hiding from it.
Accessibility
With AI-supported interaction patterns, users can master complex systems with relative ease. By automating intricate processes and simplifying user interfaces, AI can allow individuals to manage sophisticated systems without extensive technical knowledge. This democratization of technology empowers a broader range of users to engage with and benefit from advanced tools, making expert-level control broadly accessible and enhancing the potential for innovation and excellence, as more individuals can contribute creatively and effectively. And of course, this is not just about bridging gaps in expertise, but about every sort of divergence.
A Crucial Role
Establishing calibrated trust in AI-integrated products is crucial to ensure that users have clear expectations of what AI can and cannot do. This involves transparently communicating both the capabilities and the limitations of AI. We should provide insight into the data sources AI uses, helping users understand the information that influences AI decisions and what information remains beyond its scope.
By making the workings of AI systems more transparent, we can help build user confidence and ensure a more informed interaction with the technology. Our work is key not just in shaping user experiences but in fostering a deeper understanding and trust in AI functionalities.
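One way to operationalize this transparency, sketched here with invented field names, is to let every AI answer travel with metadata the interface can render alongside it: the sources consulted, a confidence signal, and known limitations.

```python
# Hypothetical sketch: every AI answer carries transparency metadata
# that the UI renders alongside it, so users can calibrate their
# trust. Field names and values are illustrative only.

from dataclasses import dataclass, field

@dataclass
class AIAnswer:
    text: str
    sources: list = field(default_factory=list)   # what informed the answer
    confidence: str = "unknown"                   # e.g. "high" / "low"
    limitations: str = ""                         # what the model cannot see

answer = AIAnswer(
    text="Your usage peaked on weekends.",
    sources=["usage_logs_2024"],
    confidence="high",
    limitations="No data after May 2024.",
)
print(f"{answer.text}\nBased on: {', '.join(answer.sources)} "
      f"(confidence: {answer.confidence}; note: {answer.limitations})")
```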
This article is part of a three-part series in which I look at what AI is, what it can be used for, and how it can be used as a design material: