The Ethics Washing of AI

Aphie Gover
12 min read · Nov 30, 2023


[Image: a brightly coloured illustration of power and profit in AI. Clarote & AI4Media / Better Images of AI / Power/Profit / CC-BY 4.0]

This story was co-written by Mark Okhakumwe and Aphie Gover.

Artificial intelligence has undeniably revolutionised innovation, productivity, and convenience, with AI-driven solutions becoming an integral part of our daily lives. From virtual personal assistants that make our tasks more manageable, to advanced healthcare diagnostics that enhance patient care, to autonomous vehicles that promise safer and more efficient transportation, the impact of AI is profound. This transformation, however, is accompanied by a looming concern, one that has come to be known as “ethics washing”.

Ethics washing can undermine trust in AI technologies and in those who develop and deploy them. It is not enough for organisations to simply pay lip service to ethical principles; they must actively demonstrate their commitment through tangible actions and policies. When ethical considerations are merely an afterthought or a marketing ploy, the consequences can be severe, potentially resulting in bias, discrimination, and privacy violations, among other harms.

As we wholeheartedly embrace AI’s transformative potential, it becomes increasingly imperative to distinguish authentic ethical commitment from superficial claims of it.

What is Ethics Washing?

The Carnegie Council defines ethics washing as the “…practice of feigning ethical consideration to improve how a person or organisation is perceived.” It is sometimes also described as “ethics laundering”: insincerely emphasising ethical considerations to present a facade of compliance with ethical standards while not genuinely acting on them in any meaningful way. The term highlights the gap between symbolic gestures and real action, and brings to the fore the need for stakeholders to critically assess whether organisations truly embody the ethical commitments they profess.

This practice of instrumentalising ethical language to create a facade of ethical conduct, most often by technology companies, is done to deflect criticism or meet perceived public expectations without making substantive changes. Such companies go as far as building internal structures with no real power to enforce ethical practices, all in a bid to appear ethical. The best-known example is Google, which in December 2020 abruptly fired Timnit Gebru, a Google AI researcher, after she sent an internal message to colleagues discussing her superiors’ unexplained resistance to the publication of a research paper she co-authored. The dismissal set off a wave of backlash, and a number of Google employees subsequently quit. Ironically, Gebru had received an impact award only the previous year. Throughout this episode, Google appeared to punish an employee who was championing the implementation of ethical standards while outwardly projecting an image of ethical behaviour.

Unmasking AI Ethics Washing

The tech industry has witnessed several instances of ethics teams, or entire departments dedicated to ethical practices, being downsized or dissolved, seemingly in response to economic pressures or shifting company priorities. In March 2023, Microsoft laid off its entire ethics and society team. In November 2022, Twitter (now X) cut an entire team of artificial intelligence researchers, leaving only one person, alongside a 15% reduction in its trust and safety department. Meta was also reported to have terminated the contracts of about 200 content moderators in early January 2023. According to documents filed with the U.S. Department of Labor, over 100 positions related to trust, integrity, and responsibility were eliminated, along with at least 16 members of Instagram’s well-being group.

These actions suggest a troubling pattern: when fiscal challenges arise, ethical considerations are among the first to be compromised. A common rationale is the anticipation of an economic downturn, sometimes a knee-jerk reaction, particularly among smaller tech firms taking their cue from larger counterparts’ predictions of a recession. Furthermore, because most tech companies perceive trust, safety, and AI ethics as areas that do not generate profit, they often deem it practical to eliminate entire teams in these domains when downsizing looms.

The lumping together of trust and safety departments with AI ethics teams underscores how tech companies often view these areas as cost centres. That framing can lead to less emphasis on maintaining ethical standards and safety measures in products and services.

Another prevalent reason for these layoffs is overhiring during the pandemic. The surge in online activity triggered a hiring frenzy to meet demand: by August 2022, Meta had nearly doubled its pre-pandemic headcount. Yet by November 2022, the company announced it was laying off 11,000 employees, including a significant portion of its ethics staff.

These layoffs, particularly those involving ethics teams, send a message that the organisations concerned are deprioritising ethical considerations. This can erode the ethical culture within a company, as employees may interpret the cuts as a sign that ethical behaviour is no longer a top priority. Furthermore, since ethics teams often play a critical role in holding employees and leadership accountable for ethical breaches, their absence can lead to a loss of transparency and accountability within the organisation.

Ethical considerations are crucial for long-term sustainability and success. Ignoring ethics may yield short-term gains, but it can lead to significant long-term consequences, including reputational damage, a loss of customer loyalty, and, most importantly, the risk of building technology with harmful, unintended consequences for the wider public.

Many tech companies establish these departments to create a smokescreen of compliance. They often serve more as a public relations strategy than as a genuine attempt to integrate ethics into business practices, which makes it easier to eliminate entire ethics teams when downsizing becomes necessary. Even where AI ethics committees exist, they often have limited power to enforce or implement their findings. Examples include Microsoft’s Aether (AI, Ethics, and Effects in Engineering and Research) Committee and Meta’s Oversight Board, which reviews contested content moderation decisions. These bodies aim to address the complex and evolving ethical challenges posed by AI and to strike a balance between technological innovation and responsible use, but their effectiveness and independence remain subjects of debate and scrutiny.

Another example is the Partnership on AI (PAI), a consortium including Google, Meta, and other technology companies, which aims to address AI’s societal impact. PAI has faced criticism over the transparency of its decision-making processes and a perceived lack of public input in its governance. Such situations raise questions about how far companies are willing to adhere to their ethical guidelines when those guidelines clash with business interests.

Importance of Genuine AI Ethics Efforts

Genuine AI ethics efforts are of paramount importance in unlocking the full potential of AI while mitigating its risks and harms. They help ensure that AI technologies are developed and deployed in a manner consistent with our core values, respectful of human rights, and directed towards a fair and just society. When individuals and society at large have confidence that AI systems are being developed and used ethically, they are more inclined to embrace AI applications and integrate them into their lives.

The documentary “Coded Bias” brought to the forefront the insidious issue of AI bias and its manifestation in facial recognition discrimination. It underscores the need to regulate AI development and ensure its ethical underpinnings, so that innovators do not inadvertently build biases into AI products. Genuine AI ethics initiatives are indispensable for designing and using AI systems that uphold human values, rights, and principles, encompassing the critical facets of fairness, accountability, transparency, and privacy. Such initiatives also help mitigate the unintended biases AI can perpetuate, ensuring that AI systems do not discriminate against individuals based on race, gender, or other protected characteristics; the sketch below shows one simple way this can be checked in practice.
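
As a minimal illustration of what a bias audit can look like in practice, the Python sketch below compares a classifier’s false positive rates across demographic groups, one of several standard group-fairness checks. Everything here is hypothetical: the records, the group labels, and the single-metric framing are simplifying assumptions, not a description of any real audit.

```python
# A minimal group-fairness check: compare false positive rates (FPR)
# across demographic groups. All records below are hypothetical.
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, y_true, y_pred) with binary labels."""
    fp = defaultdict(int)    # false positives per group
    neg = defaultdict(int)   # actual negatives per group
    for group, y_true, y_pred in records:
        if y_true == 0:
            neg[group] += 1
            if y_pred == 1:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg}

# Hypothetical model outputs: (group, true label, predicted label).
records = [
    ("group_a", 0, 0), ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 0, 0), ("group_b", 1, 1),
]

rates = false_positive_rates(records)
gap = max(rates.values()) - min(rates.values())
print(rates)                   # {'group_a': 0.33..., 'group_b': 0.66...}
print(f"FPR gap: {gap:.2f}")   # a large gap flags a potential disparity
```

A real audit would go much further (multiple metrics, confidence intervals, intersectional groups), but even this toy check turns “do not discriminate” into something measurable rather than rhetorical.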

The emergence of technologies like deepfakes has ushered in a new era of profound concern, as AI is leveraged to create strikingly lifelike videos of real people. The technology has been used to generate pornographic content from the likenesses of people who are often unaware it exists, and there are growing fears that deepfake videos could be used to manipulate election outcomes around the world. Such concerns have led companies like Meta to release new guidelines for election advertising: Meta now requires advertisers to disclose when AI has been used in a political advert on Facebook and Instagram. The potential risks of AI often overshadow its benefits, and addressing ethical considerations is pivotal to mitigating those risks. Retrofitting ethical solutions after development, however, is often too late; a more effective approach is to integrate ethics into the very design and core functionality of AI technology.

Case study

One of the most notable examples of ethics washing in recent memory is Google’s short-lived AI ethics board of 2019. The board, which was dissolved just over a week after its formation, has been a source of vocal frustration for DeepMind co-founder Mustafa Suleyman, who has attributed the collapse of the long-anticipated initiative to the challenges inherent in incorporating a diverse array of perspectives into the governance process.

Suleyman laments the resistance to this change, emphasising the board’s intent to democratise oversight of AI technologies, a move ultimately stymied by a combination of internal power dynamics and external social pressures. The controversy that led to the board’s dissolution stemmed from the appointment of a member with anti-LGBTQ and conservative views, which ignited a firestorm of public and internal backlash and raised the question of who gets to decide what makes an AI system ethical.

The incident not only triggered an outcry but also cast a shadow on Google’s commitment to genuine ethical oversight in AI development. This sequence of events is emblematic of the ethics washing phenomenon — where companies make public overtures towards ethical behaviour or social responsibility, yet their commitment appears superficial when tested against challenging scenarios. Google’s swift disbandment of the ethics board following the controversy raises questions about the depth of the company’s dedication to ethical discourse, especially when it invites public scrutiny or internal dissent.

The term “ethics washing” is often used to describe actions that give a veneer of ethical consideration without enacting meaningful change. In Google’s case, the formation and rapid dissolution of the ethics board could be interpreted as a performative act rather than a sincere attempt to embed ethical considerations into the AI development process. This performative aspect is especially concerning given the already realised risks associated with AI technology.

These risks, exemplified by the opacity of AI systems, their potential for bias, privacy invasion, and security vulnerabilities, cast a long shadow over technologies that are becoming deeply entrenched in our daily lives. The dissolution of Google’s AI ethics board did not just represent a setback for one company; it symbolised the tech industry’s broader struggle to reconcile the relentless pace of innovation with the societal and ethical implications of its applications.

The UK Department for Work and Pensions’ (DWP) use of AI in managing Universal Credit has come under scrutiny for perpetuating systemic biases, reflecting a grim reality in which algorithms can entrench inequality under the guise of impartiality. Similarly, the deployment of live facial recognition by police forces, a technology infamous for its misidentification rates, highlights the tangible risks of unregulated AI. These are not isolated incidents but part of a pattern in which the absence of robust ethical oversight leads to real-world harm.

It is no secret, then, that effective AI governance is paramount to shaping our relationship with AI moving forward. For AI ethics to evolve from a box-ticking exercise into a fundamental component of technology development, it must be built into the core of AI systems. This shift requires more than ethics boards; it demands a multidisciplinary approach in which ethicists, sociologists, technologists, and affected communities collaborate to guide AI development.

Towards a considered AI ethics

The global dialogue around AI ethics is intensifying. In October 2022, the US government unveiled the Blueprint for an AI Bill of Rights. This framework, put forth by the White House Office of Science and Technology Policy, outlined a set of principles designed to ensure that automated systems serve the public while upholding democracy and protecting civil rights, a notable step forward for AI governance.

A year later, the UK hosted the AI Safety Summit, a landmark event that drew together international governments and industry leaders to discuss the frontier risks of AI. The Summit was an attempt by the UK to assert its position as a leader in AI safety discussions.

One of the most contentious debates in the AI safety space centres on the potential existential threats posed by AI versus the pressing issues it presents in the here and now. The fear-mongering narrative of AI leading to catastrophic outcomes for humanity, a view popularised by figures like Elon Musk, has been met with pushback from those who argue that present-day harms such as algorithmic bias, the spread of disinformation, and the misuse of AI in surveillance demand immediate attention. There is also a tension between the need for stringent regulation to safeguard societal welfare and the fear that such measures could stifle growth and free exchange within the AI technology sector.

The ethical challenges associated with AI development and deployment are pressing and multifaceted, and the practice of ethics washing undermines the legitimacy and urgency of these issues. Companies that engage in ethics washing risk not only their own reputation but also the broader (already tentative) trust in AI technologies. To move beyond performative gestures, solutions must be integrated and systemic, involving internal guidelines and external policy.

One potential solution to counter ethics washing and reinforce responsible AI efforts is the development and enforcement of comprehensive regulatory frameworks. Governments and international bodies can play a pivotal role by establishing standards that mandate ethical AI practices, including a requirement for tech companies to conduct impact assessments that evaluate the potential harms of AI systems before they are deployed. Beyond this, there is a need for independent oversight bodies with the power to audit companies and hold them accountable. These bodies should be diverse in composition, representing various sectors of society, including the marginalised communities most often impacted by AI systems, and should be granted the authority to impose sanctions on companies that fail to meet ethical standards.

One potential punitive measure is the implementation of substantial fines for companies found guilty of ethics washing. This approach mirrors the enforcement mechanisms of data protection laws such as the European Union’s General Data Protection Regulation (GDPR), which can levy heavy fines against entities that violate privacy and data protection standards. Fines can be scaled with the severity of the ethical breach and the size of the company, ensuring that penalties are a significant deterrent rather than a negligible cost of doing business. Furthermore, just as environmental certifications exist for products that meet certain ecological standards, there could be a certification process for ethical AI. Certification would be revoked if a company were found to be engaging in ethics washing, damaging its reputation and potentially barring it from certain markets or contracts, particularly government tenders where ethical compliance is a prerequisite.

Whistleblower protections are also vital: employees who expose ethics washing should be safeguarded against retaliation, encouraging more individuals to come forward with information about unethical practices.
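
To make the scaled-fine idea above concrete, here is a toy sketch of a GDPR-style penalty formula: the greater of a flat cap or a percentage of annual turnover, weighted by breach severity. The rates, cap, and severity scale are invented for illustration only; they do not come from the GDPR or any proposed AI regulation.

```python
# Hypothetical GDPR-style scaled fine. The cap, maximum rate, and
# severity scale are invented for illustration, not taken from any law.
def scaled_fine(annual_turnover: float, severity: int,
                max_rate: float = 0.04, flat_cap: float = 20_000_000) -> float:
    """severity runs from 1 (minor) to 5 (egregious)."""
    if not 1 <= severity <= 5:
        raise ValueError("severity must be between 1 and 5")
    weight = severity / 5
    # Take the greater of a flat cap or a turnover percentage, so the
    # penalty stays a real deterrent even for very large firms.
    return max(flat_cap * weight, annual_turnover * max_rate * weight)

# A firm with 50bn in turnover and a serious (4/5) breach:
print(f"{scaled_fine(50e9, severity=4):,.0f}")  # 1,600,000,000
```

The point of the structure, as with the GDPR, is that the turnover term dominates for large companies, which prevents fines from being written off as a routine cost of doing business.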

To support these measures, there is a need for meaningful collaboration between the public sector, private companies, academia, and civil society. Such collaboration could foster the exchange of best practices and provide a platform for addressing complex ethical dilemmas. Initiatives like the Partnership on AI and UNESCO’s Recommendation on the Ethics of Artificial Intelligence are examples of such collaborations, aiming to create an inclusive dialogue on AI ethics.

Education and awareness are also critical. Educational institutions should integrate ethics into technology curricula, both to prepare the next generation of AI developers to prioritise ethical considerations and to raise ethical tech literacy in future generations. Public awareness campaigns can help citizens understand the potential impacts of AI and advocate for ethical practices; we are even beginning to see the mobilisation of responsible tech social movements such as the All Tech is Human (ATIH) community.

Investors also have a crucial role in ensuring AI ethics compliance, since they influence the direction and behaviour of the companies they fund. They can conduct thorough due diligence on prospective investments, specifically assessing AI development and deployment practices: a company’s ethical guidelines, its data privacy policies, and its overall commitment to responsible AI. By leveraging their financial influence and engaging with companies on ethical AI practices, investors can help create a culture of responsibility and accountability in the development and deployment of AI technologies.

Finally, embedding ethicists directly into AI research and development teams can ensure that ethical considerations are integrated throughout the design and implementation process. Rather than being an afterthought or a compliance measure, ethics becomes a foundational component of AI development. This is something we are seeing more of on the ATIH job board and on similar platforms such as 80,000 Hours, though job titles for AI ethicists are notoriously varied, as are the disciplinary backgrounds different organisations find desirable for the role.

As the world grapples with the rapid advancement of AI technologies, the need for genuine, robust ethical oversight is urgent. Ethical AI must transcend rhetoric and become woven into the fabric of technological innovation, guided by regulatory measures, industry standards, and societal values. Moving forward, the challenge lies not in the creation of ethics boards or proclamations of intent, but in the steadfast application of ethical principles in the face of adversity and the relentless pursuit of accountability. The dissolution of Google’s AI ethics board serves as an example of what can happen when ethics are sidelined — trust is eroded, and the societal costs can ripple outwards. To ensure that AI serves the greater good, all stakeholders — governments, businesses, academia, and civil society — must unite in the commitment to a future where AI is equitable, transparent, and, above all, for the benefit of humanity. This is our collective responsibility and the most critical endeavour of the digital age so far.
