The Pursuit of Artificial Intelligence and the Ethical Frameworks it Demands


By Michelle Archuleta, PhD

Michelle Archuleta is the Director of Data Science at RS21. She has worked in the field of AI for over 15 years and recently presented at the NASEM Arab-American Frontiers of Science, Engineering, and Medicine Symposium to discuss advances and opportunities in artificial intelligence. She shares some of those ideas here.

The field of Artificial Intelligence is rapidly evolving and full of bold innovations.

AI has already been used to advance the treatment of diseases that were previously untreatable.

It has been used to improve humanitarian aid, helping reach more people and freeing up human capacity to focus on strategic, high-priority work.

And it’s been leveraged by local, state, and federal governments to improve emergency and disaster response and tackle issues such as climate change and pollution.

AI is disrupting every industry and could be an immense help in making people’s lives better — improving things like the quality of healthcare and government services, enhancing decision-making, and even making communities and infrastructure more resilient.

As AI continues to revolutionize products and services, it’s important to consider its impact on society and to develop ethical AI frameworks that help inform policy and regulation and respond to cultural expectations around AI.

“Perhaps we should all stop for a moment and focus our thinking on not only making AI more capable and successful but maximizing its societal benefit.”

— Stephen Hawking

The Pursuit of AI

Why the pursuit of AI? Because AI enables us to solve so many challenges that were once out of reach.

Along with the benefits of AI, however, there’s also a significant amount of distrust. In 2017, Stephen Hawking famously said:

“Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know… Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization.”

He went on to urge creators of AI to “employ best practice and effective management” and focus on how to “maximize its societal benefit”.

As AI develops further, there needs to be a broader conversation about how we build ethical AI that benefits society as a whole. I’d like to spend some time describing how we might respond to Stephen Hawking’s call to employ best practices.

Two Paths for AI

The way I see it, we are faced with a tale of two roads. One path is focused on speed of development and the race to Artificial General Intelligence (AGI): the pursuit of technology that mirrors human intelligence and thinking. The other, less traveled path is focused on building ethical AI frameworks and advancing best practices, which have been largely left behind in the quest for more sophisticated models.

Let’s start with definitions of a few types of AI and some basic technical concepts.

Narrow AI, also known as weak AI, is AI that outperforms humans at some very narrowly defined task. It focuses on a single subset of cognitive abilities (e.g., natural language processing tools like mobile phone assistants).

Artificial General Intelligence (AGI) is AI capable of applying knowledge and skills across different contexts. It more closely resembles human intelligence, allowing for autonomous learning and problem solving. We have not achieved AGI to date, but with recent advancements, some AI researchers, like Blake Yan, say that “large-scale pre-trained models are one of our best shortcuts to artificial general intelligence.”

Rapid Progress in Artificial Intelligence

AI/ML models are trained on thousands, or even millions, of labeled examples so that they perform well on unseen datasets. This is retrospective learning, and it is very different from the way humans learn.
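
As a concrete illustration of that retrospective workflow, here is a minimal supervised-learning sketch in Python with scikit-learn; the dataset is synthetic, and the whole example is illustrative rather than any specific production pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A synthetic stand-in for thousands of labeled examples.
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)

# Hold out unseen data to estimate how well the model generalizes.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Retrospective learning: the model fits only previously labeled examples.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Accuracy on unseen data: {model.score(X_test, y_test):.2f}")
```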

Researchers then discovered that the lower hidden layers of a deep neural network capture core knowledge and abstractions. The weights of those lower layers can be frozen, preserving the general features they have learned, and transferred to another neural network. The resulting network keeps the frozen lower layers and is fine-tuned on context-specific data; this technique is known as transfer learning.

An example of transfer learning in natural language processing (NLP): train a neural network on general linguistic data, freeze the weights of the shallower layers that capture syntactic relationships, and then transfer those frozen layers to a new network that is fine-tuned on, say, electronic health records for healthcare applications. Transfer learning still falls under narrow AI.
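
A minimal sketch of that freeze-and-fine-tune mechanic in PyTorch appears below. The layer sizes, the 300-dimensional text embeddings, and the two-class clinical head are all hypothetical choices for illustration, not taken from any real system.

```python
import torch
import torch.nn as nn

# Pretend these lower layers were pre-trained on general linguistic data:
# they map 300-dim text embeddings to features capturing broad structure.
pretrained_layers = nn.Sequential(
    nn.Linear(300, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
)

# Freeze the pre-trained weights so fine-tuning cannot overwrite them.
for param in pretrained_layers.parameters():
    param.requires_grad = False

# Attach a new task-specific head, e.g. a classifier for health records.
model = nn.Sequential(pretrained_layers, nn.Linear(128, 2))

# Only the unfrozen head is updated during fine-tuning.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
```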

The ability of a model to solve a task despite never being trained on examples of that specific task, leveraging its skills in a new context, is called zero-shot learning.

Massive pre-trained deep neural networks, like the language generator GPT-3 or Wu Dao 2.0, the largest neural network created to date, do not require fine-tuning on task-specific training data. These models can perform zero-shot learning, in which the algorithm handles a new concept without receiving any specific examples beforehand. This is a step closer to AGI and begins to look more like human learning.
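
Zero-shot behavior is easy to experiment with using openly available models. Here is a minimal sketch with the Hugging Face transformers library (it downloads a pre-trained model on first run; the example sentence and candidate labels are arbitrary choices for illustration):

```python
from transformers import pipeline

# The underlying model was never trained on these labels as a task;
# it leans on general language understanding from pre-training.
classifier = pipeline("zero-shot-classification")

result = classifier(
    "The patient reports chest pain and shortness of breath.",
    candidate_labels=["cardiology", "dermatology", "billing"],
)
print(result["labels"][0])  # best-scoring label, with zero task examples
```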

The Path to Speed of Development + Race to AGI

So what can more sophisticated AI models do, and why should we care?

GPT-3 uses deep learning to produce human-like text; The Guardian used it to write a remarkably convincing article. It has also been used to answer questions, write essays, summarize long texts, translate languages, and even generate computer code.

Wu Dao 2.0 takes AI a step further. It is multi-modal, so it can tackle tasks that include data from both text and images. A virtual student was built using Wu Dao 2.0 and is capable of learning continuously, composing poetry, drawing pictures, and even learning to code.

As organizations race toward more advanced AI, still short of AGI but arguably on the path to it, we should note the huge gap in who can build, own, and access such models, and the tremendous resources they require. For example, GPT-3 cost an estimated $12 million to train and produced over 78,000 pounds of CO2 emissions. Wu Dao 2.0, meanwhile, is ten times larger.

[Figure: a timeline of a decade of breakthroughs in natural language processing (NLP), beginning in 2013 with word embeddings and ending with zero-shot learning via Wu Dao 2.0 in 2021.]

The Path to Building Frameworks for Ethical AI

Let’s switch our focus now to an alternative path for AI — one that prioritizes ethical frameworks to ensure AI is practiced thoughtfully and in a manner that is transparent to auditors and the public.

Despite huge breakthroughs in AI over the past decade, the maturity of AI ethics has not kept pace. This is evidenced by examples of unintended bias in models that have real impact on people’s lives, including biased credit application decisions and automated hiring tools that discriminate based on gender or race.

There has been some movement in terms of new regulation, such as the EU’s human-centric approach, which bans some uses of AI, heavily regulates high-risk uses, and lightly regulates less risky AI systems. Separately, Idaho passed a law requiring that the data and methods used in pretrial risk assessment algorithms be made public to promote transparency.

Examples of such regulation and oversight are fewer than they should be. And when we consider the potential impact of the more sophisticated models now on the path to AGI, we lack the frameworks to even begin serious discussions about how to regulate them.

  • How do we ensure AI algorithms benefit society?
  • How do we explain or trust conclusions of AI without enforcing transparency standards?
  • Who gets to own these increasingly complex and resource intensive algorithms?

It is a major oversight to let ethical frameworks trail AI development. If we allow ethical standards to be sidelined, we risk squandering tremendous opportunities in AI, and with them the very reason so many of us pursued this type of research in the first place.

Consider some of the breakthroughs in the field.

  • AI is enabling earlier detection of Alzheimer’s disease so treatments can be leveraged at much earlier stages.
  • AI algorithms can predict protein structures, which has major implications for drug discovery, and can even contribute to the scientific community’s understanding of how viruses like SARS-CoV-2, which causes COVID-19, function.
  • Machine learning and robotics are used to sort plastics and help address pollution concerns.

AI is an incredible technology with enormous potential to do good. So where do we start?

I believe there’s an alternative to the headlong pursuit of more advanced models, and ultimately AGI. We should first work through a framework for narrow AI in which we define ethical standards that ensure applications are transparent, explainable, fair, and interpretable.

Rather than publishing AI ethics manifestos that most people can agree on but that are impossible to measure and enforce, we should go much further. For example, at RS21, we are currently developing policies and procedures that give us a framework for practicing ethical AI.

Specifically, we have developed a playbook, which currently includes four ethics plays (a rough sketch of one appears after the list):

  1. Client Communication Strategies for Explaining the Limitations of an AI/ML Model
  2. Framework for Evaluating Bias in Datasets
  3. Framework for Evaluating the Performance of ML Models
  4. Framework for Evaluating Bias in Model Predictions
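
The plays themselves are internal, but as a rough illustration of the kind of check play 4 calls for, here is a minimal demographic-parity audit in pandas. The column names, the toy data, and the 0.8 rule of thumb are illustrative assumptions, not RS21’s actual procedure.

```python
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Ratio of the lowest to highest positive-prediction rate across groups.
    A common (imperfect) rule of thumb flags ratios below 0.8 for review."""
    rates = df.groupby(group_col)[pred_col].mean()
    return rates.min() / rates.max()

# Hypothetical model outputs for a credit-approval use case.
predictions = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "M", "F"],
    "approved": [1, 0, 1, 1, 1, 0],
})

ratio = disparate_impact(predictions, "gender", "approved")
print(f"Disparate impact ratio: {ratio:.2f}")  # low ratios warrant a closer look
```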

There is a need for standards and procedures for implementing ethical AI. Codifying them forces us to be proactive rather than reactive, and it will help keep us on the path to “maximizing [AI’s] societal benefit”.

This is as daunting and rigorous a task as pursuing artificial general intelligence, but I believe we need to get started on ethical AI frameworks now if we’re going to build bigger and better AI systems that benefit humanity.

About Michelle Archuleta, PhD

Michelle Archuleta is the Director of Data Science at RS21. She has been working in the field of AI for over 15 years, specifically in the areas of drug development, natural language processing, bioinformatics, humanitarian resource allocation, and most recently at RS21, social equality and satellite fault prediction.

She also leads the company’s AI Ethics Working Group, an employee-led initiative intended to ensure the development of transparent, explainable, accountable AI and machine learning technologies that can be used ethically to inform high-impact decisions.

About RS21

RS21 is a rapidly growing data science company that uses artificial intelligence, data engineering, design, and modern software development methods to empower organizations to make data-driven decisions that positively impact the world. Our innovative solutions are insightful, intuitive, inspiring, and intellectually honest.

With offices in Albuquerque, NM and Washington, DC, RS21 is an Inc. 5000 fastest-growing company two years in a row and a Fast Company Best Workplace for Innovators.
