What Ethical AI Actually Means and How to Get There

The Hierophant
8 min read · Nov 13, 2023


As AI continues to evolve, the principles guiding how we develop and interact with it will determine how this massive wave of change alters our reality.

We are already seeing how deepfakes distort the truth. AI-generated text, images, and videos can be spun up in moments, and it is nearly impossible to verify their authenticity. But deepfakes are just one facet of AI; the uncomfortable reality is that our entire civilization is at a crossroads.

The prospect that AIs will soon be, and some would argue already are, sentient beings with intelligence far superior to our own makes the notion of enforcing ethical AI a central challenge of our time.

So, what is ethical AI really, and how can we prepare for the dark scenarios on the horizon in the Anthropocene AI world?

The Ethics of AI and Ethical AI

Ethics has been studied across many research disciplines, and the resulting theories are both complex and contested. Yet, in essence, ethics can be understood as the discipline dealing with right versus wrong, and with the moral obligations and duties of humans.

In the world of advanced technology, the concept of ethics is still in its infancy, but it can be broadly divided into:

  • Roboethics: Deals with the moral behaviors of humans in the design, construction, and use of AI, as well as the associated impacts on humanity and society. Roboethics is generally considered to be the ethics of AI.
  • Machine ethics: Concerns the moral behaviors of AI. As technology advances and becomes more intelligent, artificially intelligent agents should behave morally and exhibit a moral compass. This aspect is commonly referred to as ethical AI.

The difference between the ethics of AI (roboethics) and ethical AI (machine ethics) can be summarized in the table below:

Adapted from Siau and Wang (2020), “Artificial Intelligence (AI) Ethics: Ethics of AI and Ethical AI,” Journal of Database Management. Last accessed November 5, 2023.

With the advancement of deepfakes, we are getting a glimpse of the potential to manipulate the truth: fake news, fabricated portrayals of politicians, faces superimposed onto pornographic videos, AI voice impersonation for fraud, and so on. Civil liberties and information integrity are clearly already at stake, and on a societal level there is every reason to be concerned about the implications that could unfold.

With the basic ethics terminology in place, let’s look at how it can help us classify the impacts and risks that AI poses to humanity.

Potential Impacts on Humanity

Deepening our understanding of the ethics of AI is a necessary step in anticipating potential impacts on humanity and devising measures to mitigate them.

As a starting point, we can categorize the impacts and concerns into:

  • Short-term impacts: Impacts from the adoption and development of generative AI that are foreseeable with current and emerging technology within the next five years.
  • Medium- to long-term impacts: Likely impacts of generative AI on humanity that can be predicted within five to ten years.

In the short term, the impacts we face will mainly stem from how humans develop and use AI. Given the current maturity of the technology, the ethics of AI is therefore the central concept to consider in the near term.

As with most technological progress, entertainment is one of the first areas for adoption. In the case of digital technology, entertainment means having access to people’s time and attention (as with social media). History shows that entertainment is a hyper-potent tool for political power and commercial interests. Just open your smartphone, and you experience the power of AI used by Big Tech to grab and keep your attention in a never-ending cycle of dopamine gratification.

The AI used by social media, search engines, and other digital platforms finds patterns we wouldn’t even know how to look for ourselves. Who hasn’t had that moment where social media shows you a product you haven’t mentioned even to your best friend? And beyond the next influencer lies a web of information that can, for instance, be used to influence elections and change a given political system. This has already happened several times; think of Brexit or the first Trump election, as shown in The Social Dilemma, a Netflix documentary on the power of social media.

The promise of short-term efficiency, convenience, and productivity lured us when social media, smartphones, and cloud computing arrived on the scene. It turned out to be a recipe for polarization, toxic discourse, and societal disruption. The same narrative is now being used again by tech leaders to justify new generative AI services.

The advances of AI are supercharging the ability to manipulate people, and this is probably the most pressing short-term impact to address. Otherwise, virtual echo chambers will reach entirely new heights, and social polarization could eventually hit a level ripe for societal collapse. Trust in democracies and governments is already near historic lows. Authoritarian rule stands a better chance with the help of technology, as societal control can be secured with brute force, tight social media control, and surveillance technology in an Orwellian system. Addressing the risks of AI is therefore nothing less than essential to the survival of democracies in the short term.

In the medium and long term, we are looking at a world increasingly defined by AIs interacting with AIs. In economic terms, we can expect tremendous efficiency gains for industries that manage to adapt and implement AI technologies. As a result, we can expect a major disruption of the labor market, for which we should start preparing immediately. More on this in my article Why AI Needs to Power the Degrowth Transition.

In addition, we can expect AI companies to increasingly define the reality in which we move, especially if effective regulations are not implemented. This can be expected to widen the gap between people who seek an increasingly analog reality and technology adopters who ride the wave of new conveniences by embracing AI tools, neural implants, and brain-computer interfaces, increasingly living in different metaverses.

As such, we can expect AI to take over more and more functions of societies and businesses while the interface between human minds and machines is gradually bridged. In other words, we are closing in on the Technological Singularity, the point at which technologies will radically change our reality in unpredictable ways.

Can It Be Regulated? And Should It Be?

There is a lot of debate around regulating AI, as some of the concerns mentioned above are top of mind for both AI companies and governments. The ongoing questions are whether AI should be regulated and, if so, whether something so fluid and rapidly evolving can be regulated at all.

In the debate about regulation, three types of regulation can be distinguished:

  • Government/state regulation: Regulation in the original sense, referring to a formal process under the rule of the state, usually centered in an independent regulatory body. One area typically regulated this way is broadcasting, where the state issues radio and television licenses and oversees the industry.
  • Self-regulation: Industry associations defining their own codes of conduct, along with voluntary initiatives by companies to act in good faith. Such communities of practice often aim to pre-empt formal regulation; examples exist around sustainable development, such as the UN Global Compact, which encourages companies to uphold human rights.
  • Co-regulation: Different variations in which government and industry work together, typically with self-regulation accompanying government regulation.

Definitions adapted from Kleinsteuber, Hans J., “The Internet between Regulation and Governance,” OSCE. Last accessed November 10, 2023.

Of the two primary types of regulation (government regulation and self-regulation), most observers would agree that the only effective form is government regulation. The inherent incentive bias of self-regulation (especially in a capitalist system) means it simply doesn’t work. The financial industry’s record leading up to the 2008 financial crisis provides sufficient background for this argument.

In a recent article, some of the most prominent authors and academics in the field of AI, including the “Godfathers of AI,” urged governments to adopt a range of policies. As one of the authors, Stuart Russell, professor of computer science at the University of California, Berkeley, puts it: “It’s time to get serious about advanced AI systems,” noting that “there are more regulations on sandwich shops than there are on AI companies.”

The same group of authors and academics argues that AI firms must be held responsible for the harm they cause, and they point to a number of policies for adoption:

  • Allocating one-third of government AI research and development funding, and one-third of companies’ AI R&D resources, to the safe and ethical use of systems.
  • Giving independent auditors access to AI laboratories.
  • Establishing a licensing system for building cutting-edge models.
  • Requiring AI companies to adopt specific safety measures if dangerous capabilities are found in their models.
  • Making tech companies liable for foreseeable and preventable harms caused by their AI systems.

Implementing regulation clearly comes with a lot of complexity. Yet it can be done in line with these recommendations, and it should help us avoid repeating some of the mistakes made when social media, smartphones, and cloud computing came along. Government regulation needs to carry the implementation, working in cooperation with the industry in a form of co-regulation, where the industry’s own code of conduct accompanies government rules to help overcome the complexity of the field.

Judy Estrin, CEO of JLabs, LLC and the author of Closing the Innovation Gap, makes a similar point in her article The Case Against AI Everything, Everywhere, All at Once. Estrin points out that the potential impact of AI on humanity is so profound that we cannot leave it to the tech titans to define our future:

“Deeper risks question the very aspects of humanity. When we prioritize ‘intelligence’ to the exclusion of cognition, might we devolve to become more like machines? On the current trajectory we may not even have the option to weigh in on who gets to decide what is in our best interest. Eliminating humanity is not the only way to wipe out our humanity.” (Judy Estrin)

Regulation should be a way for us to take back some decision-making power rather than letting Big Tech design our collective future with their own capital interests in mind. Regulation alone will not cut it, though; it needs to be accompanied by technologies and tools to detect unethical uses of AI and by ways to curb them.

With humans being humans, we can be sure that some AIs will be developed for unethical uses, and some may simply spin out of control. Regulation will thus contribute to better ethics of AI, but it will not ensure that all AI is used ethically, nor will it protect us from the deeper risks in the medium and long term.

The only way to prepare for the medium- and long-term scenarios is to improve humanity’s ethical compass and find a more harmonious way of existing on the planet we inhabit. Not only human well-being, but also planetary well-being should be the North Star of every technological breakthrough we make. In addition, we need to ensure that we have technologies and AI in our toolbox that can detect and defend against instances of AI being used for unethical purposes.

The greater risks at play require us to align technology with the best interests of humanity. As AI learns from us, the one thing everyone can do is act ethically — in both the virtual and natural worlds. Ultimately, AIs may very well create their own perceptions of us as their own “minds” evolve.

With the current trajectory, we are giving alien minds (such as sophisticated AIs) every reason to be concerned about humanity. In fact, we are giving them every reason to find ways to regulate humanity.

