Regulatory and Ethical Issues in the Age of AI

Ted Ling Hu
Published in Prime Movers Lab · 4 min read · Mar 24, 2023

[Header image courtesy of DALL-E]

With ChatGPT becoming an overnight cultural phenomenon, artificial intelligence suddenly seems to be the talk of the town. What is often overlooked, however, is that AI has been researched for decades, dating back to the 1950s, when Frank Rosenblatt created the perceptron, the building block of neural networks. More recently, researchers such as Yann LeCun, Geoffrey Hinton, and Yoshua Bengio have pioneered novel algorithms that have greatly advanced the field. Couple this with advances in the hardware industry (e.g., the steady progress described by Moore’s law) and you have a perfect storm for the explosion of AI.

But what really is AI? AI, and more specifically machine learning, refers to algorithms that iteratively “learn” from data; in theory, the more data, the more the algorithm “learns.” These algorithms boil down to mathematical operations, grounded in linear algebra and calculus, that can recognize patterns at a level of detail far surpassing human capability. Because AI is repeatable, consistent, automatic, and scalable, many corporations have begun integrating it into their businesses: a 2021 McKinsey survey found that 56% of responding companies reported AI adoption in at least one business function.
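To make “learning” concrete, here is a minimal Python sketch of gradient descent, the calculus-driven procedure at the heart of most modern AI, fitting a straight line to data. It is purely illustrative: the synthetic data, learning rate, and step count are made-up assumptions, not any particular production system.

```python
import numpy as np

# Synthetic data: inputs x and targets y related by y ≈ 2x + 1 plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=200)

# Model: y_hat = w * x + b. "Learning" means adjusting w and b to reduce error.
w, b = 0.0, 0.0
learning_rate = 0.1

for step in range(500):
    y_hat = w * x + b
    error = y_hat - y
    # Gradients of the mean squared error with respect to w and b (calculus).
    grad_w = 2.0 * np.mean(error * x)
    grad_b = 2.0 * np.mean(error)
    # Take a small step "downhill" (gradient descent).
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned w ≈ {w:.2f}, b ≈ {b:.2f}")  # should approach 2 and 1
```

Real systems run essentially the same loop, only with billions of parameters and far richer data than points on a line.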

However, because AI has only recently expanded into so many sectors, it is creating unprecedented use cases with ethical and regulatory complications. For example, if an autonomous vehicle causes a crash, is the driver or the company to blame?

Generative AI, a subfield of AI in which algorithms are given the ability to “create,” is a novel field capable of many spectacular feats, such as producing art. DALL-E, for instance, is a popular image-generation model that can take any prompt and return a collection of images. But what happens when an artist’s unique style is used to help these algorithms learn, and a user then asks DALL-E to generate a new image in that artist’s style? Does the artist get royalties, or is this copyright infringement?

Another problem stems from the data used. An AI algorithm is only as good as its data, and if the data is biased, the algorithm will likely produce biased outcomes. This is especially difficult in health-related AI, where the dataset may not be fully representative of minority populations. Yet with stringent data privacy regulations, it is extremely difficult for external parties to validate how representative the dataset actually is, creating an uncomfortable tension between transparency and privacy.
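As a toy illustration of the representativeness problem, the sketch below compares how often each demographic group appears in a training set against its assumed share of the broader population. The group names, counts, and population shares are all hypothetical.

```python
import pandas as pd

# Hypothetical training set: 1,000 records with a demographic label.
# Group names, counts, and population shares below are made up.
df = pd.DataFrame({"group": ["A"] * 900 + ["B"] * 80 + ["C"] * 20})

in_dataset = df["group"].value_counts(normalize=True)         # share of each group in the data
in_population = pd.Series({"A": 0.60, "B": 0.25, "C": 0.15})  # assumed real-world shares

comparison = pd.DataFrame({"in_dataset": in_dataset, "in_population": in_population})
comparison["under_represented"] = comparison["in_dataset"] < comparison["in_population"]
print(comparison)
```

A check like this is trivial for whoever holds the data, but it is exactly the kind of validation that privacy rules can put out of reach for outside reviewers.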

To address these problems, the European Union has already taken action by drafting the “EU AI Act” (AIA), which aims to identify risks and regulate how AI technologies are developed, deployed, and governed while simultaneously promoting AI uptake. The act prohibits four specific uses of AI: distorting a person’s behavior in a way that causes harm, exploiting vulnerable groups, social scoring, and biometric identification for law enforcement purposes. It then provides a broad definition of high-risk AI that may be deployed only under certain restrictions. Though detailed, this first draft will almost certainly change as the AI landscape continues to evolve.

The U.S. has also called for AI regulation, though not yet on the scale of the EU. In 2022, regulatory efforts targeted automated hiring processes. The U.S. Chamber of Commerce recently released a comprehensive report documenting the promise of AI but also the need for a risk-based regulatory framework that allows for responsible and ethical development, deployment, and governance. The National Institute of Standards and Technology (NIST) has also begun to standardize AI risk management by establishing the AI Risk Management Framework, which calls for certain risk-mitigating characteristics to be incorporated into all AI systems.

While governments continue to develop laws governing the use of AI, companies can begin to review, manage, and regulate their own AI. Google, for example, has released a framework expressing the need to build responsible and well-regulated AI. However, much like the AIA and the NIST framework, much of the language in this document is broad and vague, a reflection of the unprecedented nature of AI. Companies can, however, take actionable steps to ensure proper regulation of AI use by designing internal accountability and governance checks. They can also perform routine audits and random system checks to ensure that the algorithm is producing accurate and unbiased results. Transparent training data and stringent documentation of protocols can further mitigate AI risk.
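As one concrete example of what such an audit might look at, the sketch below computes per-group accuracy and selection rates from a hypothetical log of model decisions. The column names and data are placeholders, not any particular company’s system; large gaps between groups would flag the model for closer review.

```python
import pandas as pd

# Hypothetical audit log: one row per model decision, recording the model's
# prediction, the eventual true label, and the protected attribute.
audit = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "prediction": [1,   0,   1,   1,   0,   0,   1,   0],
    "label":      [1,   0,   0,   1,   0,   1,   1,   0],
})

audit["correct"] = audit["prediction"] == audit["label"]

# Accuracy per group: large gaps suggest the model serves some groups worse.
accuracy = audit.groupby("group")["correct"].mean()

# Selection rate per group (share of positive decisions): a simple
# demographic-parity style check.
selection_rate = audit.groupby("group")["prediction"].mean()

report = pd.DataFrame({"accuracy": accuracy, "selection_rate": selection_rate})
print(report)
print("selection-rate gap:", selection_rate.max() - selection_rate.min())
```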

But even with these efforts, some questions remain unanswered. What minimum performance must an algorithm reach before it can be ethically deployed? The answer will likely vary from task to task, but who gets to decide on that minimum standard? What novel metrics can we deploy to measure bias and fairness? Even though these questions are largely unanswerable with current methodologies, new fields of research at the intersection of ethics and AI will soon emerge, and AI-related curricula should incorporate ethics modules so that the next generation of AI engineers has an ethical framework at the foundation of their training. AI will inevitably be woven into our daily lives; how to point it toward good and make it a force for human flourishing will be the next wave of questions.

Prime Movers Lab invests in breakthrough scientific startups founded by Prime Movers, the inventors who transform billions of lives. We invest in companies reinventing energy, transportation, infrastructure, manufacturing, human augmentation, and agriculture.

Sign up here if you are not already subscribed to our blog.
