Ethical AI can change the world — if we act now

We are at a crossroads. We can be dissuaded by continued misuse of AI and data. Or we can choose to demand change.

RS21 Blog · Aug 24, 2021


Written by Charles Rath, RS21 President + CEO


This article first appeared in Fast Company.

Together, we can pave the way for AI to help us address complex issues and realize its ultimate promise.

I’m driven by two realities today. First, we’re generating more data than ever before — we’re on track to quadruple the current amount of data by 2025. And second, data is handing us an amazing opportunity to make our world a better place. Data fuels artificial intelligence (AI) technologies that are changing the face of healthcare, climate research, community resilience, and even space. AI can literally save the world.

Look at the role AI has played in our COVID-19 response, from contact tracing to variant prediction to rapid vaccine development. Consider the breakthroughs in cancer diagnostics thanks to AI revealing subtle patterns that humans can’t perceive. Notice the momentum in outer space, where my team is working on AI for autonomous satellite failure detection that will support ground operators as they carry out missions thousands of miles away. And we haven’t even touched on how AI is driving climate research.

All these innovations in AI are based on massive amounts of data. Algorithms depend on data to “learn” and produce reliable outputs that we use to better understand our world and devise solutions for some of our greatest challenges. The wealth of data that’s fueling rapid progress, however, is under threat of being cut off if we don’t fix a mega problem in tech: trust.

If we’ve learned anything about digital technologies this past decade, it’s that conversations about ethical practices haven’t kept pace. People are concerned, rightfully so, about how their information is tracked.

The issue has grabbed national attention. Notable documentaries, news articles, and congressional hearings expose our failure to ensure transparency and ethical standards for data collection, AI, and automation. Some of the most prominent issues include unintended bias in algorithms, job displacement, and social media’s influence on our behaviors and information flow.


Now’s the time for us to get smarter — as consumers, technologists, and policymakers — before data abuses undermine the promise of AI. Without this three-pronged approach, we’re eroding trust in the very technology that can help us solve our most urgent issues.

INCREASE TRANSPARENCY

As 21st-century consumers, we need more transparency and better policy: no more default opt-ins or terms and conditions that hide under the veil of six-point font agreements. Consumers must know — must demand to know — how their data is being used. Data should only be collected with explicit consent and a general understanding of intended applications. With more transparency, everyday consumers can better evaluate ethical practices and have more agency in deciding whether the technology is enhancing or hurting our society and environment.

More inclusive dialogue will also increase awareness among young people and attract them to AI research and development. I frequently meet students and discuss opportunities in tech, and I’m heartened that the next generation will strengthen the way forward in responsible, ethical AI.

Ultimately, greater transparency will give us greater confidence in emerging technologies and push forward important AI work.

ETHICAL AI IS GOOD BUSINESS

Technology companies need to operate according to a clear set of ethics and adopt practices that prioritize the common good. This means, in part, taking algorithms out of the black box. It also means that organizations in the business of AI must define and abide by their values and ethical standards.
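What “out of the black box” looks like varies by application, but even a simple audit is a start. The sketch below is my own illustration, not RS21’s tooling or any specific product: it uses scikit-learn’s permutation importance on synthetic data, with hypothetical feature names, to report which inputs a trained model actually relies on, in terms a non-expert can inspect.

```python
# A minimal, hypothetical sketch of one way to open up a model:
# report which inputs it actually relies on. The data and feature
# names are synthetic stand-ins, not any real application.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

feature_names = ["age", "income", "tenure", "usage", "region_code"]  # hypothetical

X, y = make_classification(
    n_samples=1000, n_features=len(feature_names), n_informative=3, random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# held-out accuracy drops. Bigger drops mean the model leans harder on that input.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

ranked = sorted(
    zip(feature_names, result.importances_mean, result.importances_std),
    key=lambda row: row[1],
    reverse=True,
)
for name, mean, std in ranked:
    print(f"{name:12s} importance = {mean:.3f} +/- {std:.3f}")
```

A report like this doesn’t make a model fully interpretable, but publishing even simple audits of what drives a system’s decisions is one concrete way to practice the transparency described above.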

I know first-hand the power of leading with values. Values drive behaviors — from hiring, to partnerships, to deciding on work to execute. Because AI has the power to amplify an organization’s values, it’s crucial to live by a set of ethics that prioritize people and planet and incentivize investments in “good” tech.

Moreover, tech alone shouldn’t drive AI applications. AI companies need to partner with domain experts who have devoted their careers to understanding complex issues and can steer the use of technology in the right direction. AI alone doesn’t fix the problem. AI made in collaboration with experts is what will get us from today’s limitations and threats to tomorrow’s solutions.

AI FOR HUMANS

Policymakers must set enforceable standards for the technology we invite into our daily lives. We’ve seen early indications that governments are moving in this direction with the EU’s GDPR, arguably the leading data privacy and security regulation, and California’s CPRA “right to know” legislation. More recently, we’ve also seen more technology experts testify and contribute to shaping guidelines.

Let’s continue this momentum. Let’s invite AI practitioners to be part of the policy-making process so we can be confident that oversight and legislation set specific data protection requirements and support better practices.

A bottom-up approach driven by consumers forces change through demand. A top-down approach from government ensures industry focuses on developing AI that empowers all people, not just those who can profit the most. A public, human-centered perspective can guide responsible use of AI rather than letting it stoke fears and aggravate existing inequities.

A BETTER WORLD

The promise of AI is that it can improve the human condition, and you don’t have to look far to find good examples of this. NASA is using its massive volumes of data and AI to accelerate research on Earth’s systems. The European Union is creating a digital replica of Earth, enhanced with AI, to predict environmental change. Microsoft’s AI for Earth initiative is supporting research programs around the world. Companies like mine are working with resilience experts to create models that will safeguard infrastructure and help communities bounce back from natural disasters.

It’s time we have a real conversation about the tremendous benefits of AI, demystify it rather than use it to scare people, and commit to a path forward where AI is used for good.

We’re at a crossroads. We can be dissuaded by continued misuse of AI and data. Or we can choose to demand change that positions all of us to be better agents in detecting nefarious activities and better advocates for ethical AI. Together, we can pave the way for AI to help us address complex issues and realize its ultimate promise.

Charles Rath is President & CEO of RS21, an Inc. 500 fastest-growing data science and AI company and a Fast Company Best Workplace for Innovators.


