Morality in the Tech World: 7 Ethical AI Issues to Consider
In the past decade, we have witnessed an exponential increase in the use and application of artificial intelligence models globally.
Capable of changing how humans interact, reshaping how we approach the world of work, and generating huge amounts of increasingly personal data, AI technology has catapulted humanity leagues into the future.
However, with the great power of AI comes great responsibility.
Despite AI technology’s positive influence on productivity, insight, and decision-making at scales far beyond human capacity, keeping artificial intelligence models ethical is becoming a Herculean task.
With businesses struggling to establish the ideal balance between boosting profit margins and staying principled, many are being forced to rely on inherently biased systems.
Before we delve into the details of this complex topic, let me introduce you to the concept of ethical AI and the major issues facing the industry at large.
So, what is ethical AI and why should we care about it?
In its simplest terms, “ethical AI” or “AI ethics” is “a system of moral principles and techniques intended to inform the development and responsible use of artificial intelligence technology”.[1]
As AI makes increasingly complex and critical decisions, such as hiring employees, giving medical diagnoses, and pricing your insurance risk, we must be more certain than ever that artificial intelligence remains unbiased, explainable, and trustworthy.
Although creating democratic and equitable models should always be our end goal, keeping autonomous machines ethical is more difficult to achieve in practice than in theory. To give us more context on why this is, let’s explore the most pressing ethical issues facing the industry today.
The biggest ethical issues in the AI world
Bias and Discrimination
It’s clear that machine learning models have immense processing power, but depending on the data they are fed during training, they may be unable to behave in an unbiased and fair manner.
For example, despite using some of the most advanced technology in the world, the facial recognition algorithms created by Microsoft, IBM, and Megvii were proven to be biased by gender and race in favor of white men. When these algorithms were analyzed in depth, an error rate of 0.8% was found for lighter-skinned men, while darker-skinned women faced a staggering error rate of 34.7%.[2] This outcome was attributed to the “facial recognition training data contain[ing] mostly Caucasian faces”.[3] Surely, in our modern world, this level of bias in training data is inexcusable.
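To make the problem concrete, a disaggregated error-rate audit is one common first check before a model ships. Below is a minimal Python sketch of the idea; the groups, labels, and counts are invented for illustration, not the methodology behind the study cited above.

```python
# Minimal bias audit: compare a classifier's error rate across demographic
# groups instead of reporting a single aggregate accuracy number.
# All records below are invented for illustration.
predictions = [
    # (group, predicted_label, true_label)
    ("group_a", "male", "male"), ("group_a", "male", "male"),
    ("group_a", "female", "male"),
    ("group_b", "male", "female"), ("group_b", "male", "female"),
    ("group_b", "female", "female"),
]

errors, totals = {}, {}
for group, predicted, actual in predictions:
    totals[group] = totals.get(group, 0) + 1
    errors[group] = errors.get(group, 0) + (predicted != actual)

for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.1%} ({errors[group]}/{totals[group]})")
# A large gap between groups (e.g. 0.8% vs 34.7%) signals skewed training data.
```

The point is simply that one aggregate accuracy figure can hide exactly the kind of gap the MIT study uncovered.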
However, bias and discrimination aren’t exclusively found in facial recognition software. When tested in 2020, Google’s translation algorithm still refused to acknowledge that female historians and male nurses exist in the world of work. Instead, the tech giant’s algorithm systematically changed the genders of these professions in translations across European languages. With 182 out of 440 tested translations registering as incorrect because of this flaw, one could argue that these algorithms are not only unethical, but unfit for purpose.[4]
It’s worth noting that AI models aren’t deliberately designed to discriminate by race or gender. However, should we be training models on more varied datasets to try to avoid these distortions?[5] Can we trust that AI models won’t develop human-like learned biases that make machines effectively racist or sexist?[6]
There have been concerted efforts to remove bias from AI systems in the past, but the question of how we can make AI moral and completely unbiased is a complex issue that creators will be grappling with for years to come.
Denial of autonomy
The rise of self-driving cars has led to increased questioning over AI’s denial of human autonomy. The primary issue here is simple: when it comes to machines, how do we allocate blame when something goes wrong?
In the US, Congress is grappling with the issue of fully autonomous vehicles, particularly when vision models can mistake stop signs for 45mph speed-limit signs after the addition of just four pieces of colored tape.
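That tape attack is a physical example of an adversarial perturbation: a small, deliberately chosen change to the input that flips a model’s prediction. The toy Python sketch below reproduces the failure mode on a linear classifier using a fast-gradient-sign-style step; the weights, features, and labels are hypothetical stand-ins, not anything from a real vision system.

```python
import numpy as np

# A toy linear classifier: a positive score means "stop sign".
# Weights and input are invented; no real vision model is this simple.
rng = np.random.default_rng(0)
w = rng.normal(size=64)        # classifier weights
x = w / np.linalg.norm(w)      # an input the model confidently calls "stop"

def predict(features):
    return "stop sign" if w @ features > 0 else "speed limit"

# Fast-gradient-sign-style attack: nudge every feature against the score.
# For a linear model, the gradient of the score w.r.t. the input is just w,
# so the worst-case perturbation within a budget is -epsilon * sign(w).
epsilon = 1.5 * (w @ x) / np.abs(w).sum()  # just enough budget to cross zero
x_adv = x - epsilon * np.sign(w)

print(predict(x))      # stop sign
print(predict(x_adv))  # speed limit: small targeted changes flip the label
```

Real road-sign classifiers are far more complex, but the underlying weakness, that small targeted input changes can push an example across a decision boundary, is the same.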
The question of blame was brought to the fore when an autonomous Tesla ran into a pedestrian during a test, having been programmed to protect its passenger at all costs. After much deliberation, it was decided that Tesla, and not the human driver behind the wheel, was to blame for the incident.
The company’s self-driving system relies on cameras, radar, and short-range sonar to establish risk and keep its passengers safe. Although Tesla recommends that its Autopilot feature be used only on highways, where it works most effectively, it is difficult to keep AI ethical in instances where the stakes are so high.[7]
When drivers are killed because autonomous vehicles struggle to deal with stationary objects, should it be the company or the algorithm developers who take responsibility?
None of this refutes the fact that complex and unpredictable AI models can be positive and life-changing. However, the risk of machines denying humans autonomy must always be at the forefront of a developer’s mind when any potential mistake could result in serious injury or death.
Unemployment with the rise of automation
Replacing workers in menial roles with machine learning models may be a cost-saving decision for businesses, but is AI-prompted unemployment ethical? AI has been proven to make workplaces more efficient and to free up humans for complex or customer-facing tasks, but studies have also shown that 400,000 jobs were lost to automation in the USA alone between 1990 and 2007.[8]
Supporting the human workforce is perhaps the morally ‘correct’ thing to do, but should businesses be forced to employ workers when machine learning keeps operating costs low and is a more effective alternative?
With companies forced to protect their bottom lines more staunchly during the COVID-19 pandemic, economists estimate that up to 42% of the jobs lost during this period will never return; their tasks will instead be divided between existing staff and machines.[8]
There is always the possibility that radical changes to the labor market will create higher-paying jobs enabled by machine learning advancements, but this profit-over-people approach makes ethical AI even more difficult to achieve, especially within underprivileged communities and low-skilled labor markets.
The concentration of economic power
Mass unemployment is a major ethical issue that arises from machine learning, but so is the concentration of economic power among big tech firms.
While many companies were forced to slash costs during the pandemic by reducing employee numbers, the combined stock market value of Apple, Microsoft, Amazon, Alphabet, and Facebook increased by $3 trillion between March 23 and August 19, 2020.[9] There is some speculation about how this increase was achieved, but it is likely down in part to their ability to harness the finest AI models available.
Although enjoying healthy profit margins and performing excellently on the stock market aren’t criminal acts, we must ask ourselves how moral this oligopoly is. Realistically, how can smaller firms possibly compete with large multinational corporations like these when it comes to AI deployment?
Giving consumers reduced choice and small businesses fewer opportunities to make their mark, this concentration of economic power is certainly one of the most troublesome ethical AI issues for creators to consider.
The singularity argument
The rise of a robot army may read like something out of a science-fiction novel, but machines reaching singularity and super-intelligence is one of the most widely debated AI ethics issues facing the tech world today.
If a time comes when AI models can self-improve, it could give way to self-aware machines that possess human-level cognition. It may sound extreme, but the most pressing concern for future developers is whether such machines would view humans as something to align with or something to destroy.
American inventor, futurist, and Google director of engineering Ray Kurzweil has predicted that singularity will be reached by 2045, but he believes that, in the first instance, it is more likely to augment human intelligence than to bring about robot-driven destruction.[10]
To put your mind at ease, the human species being defeated by super-intelligent robots is highly unlikely. However, with technology evolving at such an impressive rate, we must ensure that all AI developments are ethical, sustainable, and unlikely to endanger future generations.
Privacy
State surveillance and data harvesting are nothing new to most of us, but how much is too much?
There are privacy laws in place around the world that attempt to counter unacceptable data use, but the potential for AI privacy breaches is enormous as models require untold amounts of quality data to learn and improve.
In the business world, AI is often used to collect consumer data for product and performance improvements. However, without our consent, this action becomes problematic. After all, how can we control who collects what data? Will facial recognition from photos and smartphones eventually result in negative population profiling?
Much of the data about us online hasn’t been garnered with our express consent, and it’s concerning to think that a simple Alexa smart assistant could be listening to your conversations and storing snippets of data for future use.
However, Amazon is only one example of how AI has been used to monetize digital profiles through targeted advertising and improved customer experience, and this trend is only set to become more prevalent as systems become hyper-intelligent.[11]
There are positive elements to be gleaned from a data-driven future, but one of the major ethical AI concerns is how we can hold onto our liberties in the face of increased surveillance.
Deepfakes and the rise of misinformation
It’s tempting to chuckle at Donald Trump’s outrage over “fake news”, but convincing videos and photos created using AI have seriously eroded the credibility of media outlets, making it difficult to know which news we can trust and which is deliberately misleading.
Not only do deepfakes have the potential to defame public figures and cause social embarrassment, but any mistrust created could impact democratic processes, cause cross-border conflicts, and even incite violence.[12]
The existence of deepfakes is problematic for obvious reasons, but the fact that experts and AI detectors fail to recognize tampered videos or images more than 40% of the time means we have little chance of countering the spread of misinformation.[13]
Due to the potential socio-political fallout that comes with deepfake creation, managing them is one of the most important ethical AI issues to keep track of.
How can we address ethics in machine learning?
The best way to tackle ethical issues in AI is through transparency and accountability. It’s inherently difficult to instill moral values in algorithms, but ethicists hope that corporations will strive to place ethics on a par with profitability.
According to participants at the Future of Life Institute’s Asilomar conference in 2017, the ultimate ethical AI goal would be to “align [machines] with human values throughout their operation”.[14] This may be tricky to achieve at first, but large companies are taking steps towards more ethical practices.
A positive step towards this was seen when Google called for the “responsible development of AI”, stating that technology should neither enforce bias nor infringe on privacy.[15] The tech giant also joined the Partnership on AI to Benefit People and Society alongside Microsoft, Amazon, Facebook, and IBM in 2016, with Apple signing up shortly afterwards.[16] Aiming to promote transparency and inclusivity, these are both excellent steps forward for big tech companies.
What does the future hold for ethics in AI?
As the ancient Chinese proverb suggests, “a journey of a thousand miles begins with a single step”.
Naturally, we want to attack issues like discrimination and unemployment with an iron fist. However, our approach to ethical AI must be reasonable and measured.
To start with, developers can take appropriate steps to mitigate bias in AI models by removing prejudiced scoring and ranking.[17] To achieve the best possible outcomes, anti-bias efforts should start from the top down and be central to an organization’s ethos.
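As a minimal illustration of where “removing prejudiced scoring” can start, the Python sketch below checks whether a model selects candidates from different groups at comparable rates. The numbers are invented, and the 0.8 threshold is the “four-fifths” rule of thumb borrowed from US employment-selection guidelines, not a standard prescribed by the Turing Institute guide cited above.

```python
# Check demographic parity: does the model select candidates from each
# group at comparable rates? The counts below are invented for illustration.
decisions = {
    # group: (number selected by the model, number of applicants)
    "group_a": (40, 100),
    "group_b": (18, 100),
}

rates = {g: selected / total for g, (selected, total) in decisions.items()}
ratio = min(rates.values()) / max(rates.values())

for group, rate in rates.items():
    print(f"{group}: selection rate {rate:.0%}")
print(f"parity ratio: {ratio:.2f}")

# The "four-fifths rule" flags ratios below 0.8 as potential adverse
# impact that warrants a closer look at the model's scoring features.
if ratio < 0.8:
    print("warning: possible disparate impact; review scoring features")
```

Checks like this don’t make a model ethical on their own, but they turn “avoid bias” into something a team can measure and act on.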
Large-scale implementation of ethics in AI requires input from tech’s top players, and there must be strict consequences for unethical behavior and poor enforcement.[18] Challenging, yes — but not impossible to achieve.
Only with transparency, accountability, and outcome fairness can we begin to embrace ethical AI and build a brighter, more inclusive future.
The only question remaining is this: are the largest players in tech willing to take that all-important first step?
[1] George Lawton, “AI Ethics (AI Code of Ethics)”, TechTarget, last modified June 2021, https://whatis.techtarget.com/definition/AI-code-of-ethics
[2] Larry Hardesty, “Study finds gender and skin-type bias in commercial artificial intelligence systems”, MIT News Office, last modified February 11, 2018, https://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212
[3] Darrell M. West, “The role of corporations in addressing AI’s ethical dilemmas”, Brookings, last modified September 13, 2018, https://www.brookings.edu/research/how-to-address-ai-ethical-dilemmas/; quoting “Joy Buolamwini”, Bloomberg Businessweek, July 3, 2017, p. 80
[4] Nicolas Kayser-Bril, “Female historians and male nurses do not exist, Google Translate tells its European users”, AlgorithmWatch, last modified September 17, 2020, https://algorithmwatch.org/en/google-translate-gender-bias/
[5] Yulia Gavrilova, “10 AI ethics questions we need to answer”, Serokell, last modified March 1, 2021, https://serokell.io/blog/ai-ethics-questions
[6] Vincent C. Müller, “Ethics of Artificial Intelligence and Robotics”, The Stanford Encyclopedia of Philosophy (Summer 2021 Edition), Edward N. Zalta (ed.), https://plato.stanford.edu/entries/ethics-ai/
[7] “Tesla in deadly California crash was on Autopilot: Authorities”, Al Jazeera, last modified May 14, 2021, https://www.aljazeera.com/economy/2021/5/14/tesla-in-deadly-california-crash-was-on-autopilot-authorities
[8] Alana Semuels, “Millions of Americans Have Lost Jobs in the Pandemic — And Robots and AI Are Replacing Them Faster Than Ever”, TIME, last modified August 6, 2020, https://time.com/5876604/machines-jobs-coronavirus/
[9] Jack Nicas, “Apple Reaches $2 Trillion, Punctuating Big Tech’s Grip”, The New York Times, last modified August 20, 2020, https://www.nytimes.com/2020/08/19/technology/apple-2-trillion.html
[10] Ray Kurzweil, The Age of Spiritual Machines: When Computers Exceed Human Intelligence (London: Penguin, 1999)
[11] Ben Dickson, “What is ethical AI?”, TechTalks, last modified April 15, 2019, https://bdtechtalks.com/2019/04/15/trustworthy-ethical-artificial-intelligence/
[12] Basil Han, “Deepfakes & Misinformation: The Ethics Behind AI”, AI Singapore, last modified June 1, 2021, https://aisingapore.org/2021/06/deepfakes-misinformation-the-ethics-behind-the-ai/
[13] Cade Metz, “Internet Companies Prepare to Fight the ‘Deepfake’ Future”, The New York Times, last modified November 24, 2019, https://www.nytimes.com/2019/11/24/technology/tech-companies-deepfakes.html
[14] Future of Life Institute, “Asilomar AI Principles”, 2017
[15] Google, “Responsible Development of AI”, 2018
[16] Alex Hern, “‘Partnership on AI’ Formed by Google, Facebook, Amazon, IBM and Microsoft”, The Guardian, published September 28, 2016
[17] Dr. David Leslie, “Understanding artificial intelligence ethics and safety: a guide for the responsible design and implementation of AI systems in the public sector”, The Alan Turing Institute, 2019, https://doi.org/10.5281/zenodo.3240529
[18] Dave Trier, “How to Keep Your AI Ethical”, Global Banking and Finance Review, last modified October 28, 2021, https://www.globalbankingandfinance.com/how-to-keep-your-ai-ethical/