The Rise of Artificial Intelligence: Regulating Ethics in Tech

By Lika Gegenava, Core Writers’ Group

When ChatGPT became available to the public earlier this year, a revolutionary AI tool became accessible to virtually anyone with an internet connection. Yet Artificial Intelligence has featured in policy and legislative debates for over a decade, and ChatGPT, while it commands enormous popularity and a vast consumer base, is far from the only form of AI in use today. Artificial Intelligence is not a specific technological tool; rather, it is “a general-purpose technology combining software and hardware in systems that enable technologies.” These technologies are emerging as effective new tools across almost all professional fields. The McKinsey Global Institute, for instance, estimates that AI tools could increase global output by up to 16% by 2030. Yet the fast-paced development of such technologies raises concerns about the ethical principles on which they are based. The need for ethical AI grows more urgent as these tools reach into ever more aspects of social life, and questions of privacy, accountability, and fairness arise as more tools become available to the public and are integrated into the systems of major global corporations.

In light of these developments, governments and international organizations around the world face the challenge of ensuring that the fast-paced growth of the AI field stays within the limits of ethical principles. As these tools gain popularity, governments are attempting to define the ethical principles they will be expected to follow. On one hand, global superpowers aim to win the “AI race” to secure an advantage over adversaries. On the other hand, policymaking and legislation require a certain caution, so that AI development proceeds safely and to the benefit of all.

Currently, the US leads the “AI race.” Many of the publicly available AI tools, including OpenAI’s ChatGPT, are products of American corporations, and the US maintains an open system of collaboration with its Western allies and with partners worldwide. The US, and with it the West, is not an uncontested leader, however. China has named the development of AI a strategic and economic priority. Taking advantage of the global openness of AI cooperation while investing in its domestic capabilities and human capital, China has emerged as one of the leading AI powers today. As such, China gains a greater say in shaping the norms of AI usage, and that usage often conflicts with the ethical principles and guidelines of the West, as in the case of expanded surveillance technology.

Yet China and the US are not the only countries expanding their AI capacity. As EU member states make strides in innovative AI technologies, legislation on how to regulate these tools is also developing. While such regulations are imperative to ensure an ethical future for these tools, concerns about overly strict rules have emerged: if AI policies are too restrictive, they could hinder innovation and the further evolution of the field, leaving the West at a disadvantage.

Both the US and the EU have begun developing tools for ensuring that the future of AI is ethical. The US has issued guidelines on ethical AI for specific government institutions and agencies, and individual states have implemented regulations protecting the privacy of consumers who engage with AI tools, aiming to increase corporate accountability and algorithmic transparency. Similarly, the EU and several European countries have developed, or are developing, specific ethical guidelines for AI. But what exactly does it mean for an AI to be ethical? The guidelines and principles already in place focus on three main requirements an AI must fulfill to be considered ethical: accountability, privacy, and fairness. Most such guidelines also call for transparency about the specifics of AI operations and define the goals of AI as increased safety and the common good.

Thus, as the field of Artificial Intelligence continues to advance rapidly, the world’s governments and international organizations must ensure that AI norms are ethical and implement regulations to achieve this goal. As China emerges as a contender for leading AI power, the West must take steps to counterbalance its influence on AI norms and promote ethical AI globally. Cooperation among the stakeholders, chiefly governments, international organizations, and corporations specializing in AI, is crucial to developing common ethical principles and enforcement mechanisms that will ensure that the future of AI is ethical.

Lika Gegenava is a member of the European Horizons Core Writer’s Team. She is a student at Barnard College of Columbia University, studying Political Science. Her main research interests involve democratisation processes in post-Soviet states, and international and regional security.

--

The European Horizons Editorial Board
Transatlantic Perspectives

European Horizons empowers youth to foster a stronger transatlantic bond and a more united Europe.