At the crossroads of AI ethics and innovations — how did we get here?

Tarun Chopra
4 min read · Aug 15, 2023


When it comes to technology, new innovations always walk the line between reality and the hypothetical. We’ve watched the rise and fall of crypto. We’ve experienced the short-lived buzz that everyone would soon be living in the metaverse. And now we find ourselves under a constant barrage of promises about generative AI. When it comes to AI, however, we’ve moved past the hypotheticals. AI is here. Beneath the hype there’s a reality: we are in the midst of the next AI revolution.

That’s right, this isn’t the first AI revolution. Because before ChatGPT, there was IBM Watson.

Making AI history with IBM Watson

In 2010, we released IBM Watson, a system that uses natural language processing (NLP) and machine learning (ML) to answer questions. It wasn’t until the following year, however, that Watson became famous when it dethroned former champions Ken Jennings and Brad Rutter on Jeopardy!.

What’s often overlooked about Watson’s Jeopardy! win is that the remarkable part wasn’t its ability to come up with the answers; it was the process that led to them. If you recall, Watson displayed the candidate answers it was considering, including the one it ultimately gave, along with a confidence score for each response. That transparency is a key component when scaling AI, because it helps explain how the decisions were made.
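The pattern Watson popularized, surfacing every candidate answer with its confidence rather than a single opaque result, can be pictured with a minimal sketch. The candidates and scores below are illustrative, not Watson’s actual pipeline:

```python
# Minimal sketch of confidence-ranked candidate answers.
# Candidates and scores are illustrative, not Watson's real system.
def rank_candidates(candidates):
    """Sort (answer, confidence) pairs by confidence, highest first."""
    return sorted(candidates, key=lambda pair: pair[1], reverse=True)

candidates = [("Toronto", 0.14), ("Chicago", 0.97), ("Omaha", 0.05)]
ranked = rank_candidates(candidates)
best_answer, confidence = ranked[0]

# Surfacing all candidates, not just the winner, is what makes
# the decision explainable to a human reviewer.
for answer, score in ranked:
    print(f"{answer}: {score:.0%}")
```

Showing the runners-up and their scores is what lets a human judge whether the system’s top answer deserved its confidence.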

In the 13 years since we launched Watson, we’ve completed tens of thousands of AI engagements with companies around the world and across industries, from healthcare to digital advertising. And we’ve learned a lot. A lot. One of the main takeaways from these engagements is that there’s a big difference between demonstrations (like chess and Jeopardy!), consumer applications (like Amazon’s Alexa or OpenAI’s ChatGPT), and enterprise applications.

Operationalizing AI in the enterprise is harder than it looks

In the enterprise, companies need to use AI to reinvent their business. They’re exploring new, unknown territory, trying to rebuild established processes from the ground up, instead of layering AI on top of them. The reality is — it’s not easy to operationalize AI in large-scale enterprise environments. It’s expensive and time-consuming, not to mention the number of corporate regulations and mandates that business leaders must account for.

That’s just skimming the surface. The biggest piece of feedback we received from our customers was that if they were going to prioritize AI, they needed to be able to break open the black box so they could explain why it makes the decisions it makes. They needed to be able to trust it.

As a result, we’ve spent the last decade focusing on developing AI Governance, or Trusted AI — ways for companies to be able to not only operationalize AI across their enterprise, but also to do so with trust and transparency.

Living at the crossroads of AI ethics and innovations

Today, we find ourselves once again in the midst of the next AI revolution, but this time it’s driven not by traditional ML models but by foundation models. These large language models (LLMs) are trained on ever-growing datasets and achieve better and better results, promising to make massive AI scalability possible, even at the enterprise level. However, this huge potential comes with risks as well, such as the generation of fake content and harmful text, possible privacy leaks, amplification of bias, and a profound lack of transparency into how these systems operate.

In order to foster societal trust in AI, companies must embed ethical principles into their AI development and deployment processes, and they must do so without waiting for government mandates. This requires companies to:

1. Govern across the AI lifecycle and promote transparency

It is imperative for people to know why AI is making the decisions it is making. For example, a bank needs to be able to tell a consumer the factors behind a denied loan (required in the European Union under Article 14 of the GDPR) and what the consumer would need to do to change that decision. By automating and consolidating tools, applications, and platforms across the AI lifecycle, businesses can gain visibility into their AI models for transparent and explainable outcomes. This allows companies to audit the lineage of the models and the associated training data, along with the inputs and outputs for each AI recommendation.
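One way to picture that kind of auditable lineage is a simple decision record capturing the model version, the inputs it saw, its output, and the top contributing factors. This is a hypothetical sketch with illustrative field names, not the schema of any specific governance product:

```python
# Hypothetical sketch of an auditable AI decision record; the field
# names are illustrative, not any specific product's schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    model_id: str    # which model (and version) made the call
    inputs: dict     # the features the model actually saw
    output: str      # the decision, e.g. "loan_denied"
    factors: list    # top factors behind the decision, for explainability
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DecisionRecord(
    model_id="credit-risk-v3",
    inputs={"income": 42_000, "debt_ratio": 0.61},
    output="loan_denied",
    factors=["debt_ratio above 0.45 threshold"],
)

# An auditor can now trace what the model saw and why it decided.
print(record.output, record.factors)
```

Persisting one such record per recommendation is what makes the loan-denial conversation with a consumer possible after the fact.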

2. Manage risk and protect brand reputation

Customers, employees, and shareholders expect organizations to use AI responsibly. This is becoming even more critical now, as companies can find their brand reputation on the line when it comes to their use of AI. No one wants to be in the news for the wrong reasons. By automating workflows to better detect bias, drift, and fairness issues, companies can identify, manage, monitor, and report risks at scale.
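Drift monitoring can be as simple as comparing the distribution of a feature at scoring time against its training baseline. The Population Stability Index (PSI) below is one common choice; the 0.2 alert threshold is a rule of thumb, not a standard:

```python
# Sketch of drift detection via the Population Stability Index (PSI).
# The 0.2 alert threshold is a common rule of thumb, not a standard.
import math

def psi(expected, actual):
    """PSI between two binned distributions (lists of bin proportions)."""
    total = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # avoid log(0) on empty bins
        a = max(a, 1e-6)
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]   # training-time bin proportions
current  = [0.10, 0.20, 0.30, 0.40]   # what the model sees in production
score = psi(baseline, current)
drifted = score > 0.2  # rule of thumb: > 0.2 suggests significant drift
```

Running a check like this on a schedule, and routing alerts into a review workflow, is the kind of automation that lets risk be monitored and reported at scale.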

3. Adhere to regulatory compliance

With the growing number of AI regulations, responsibly implementing and scaling AI is an increasingly complicated challenge, especially for global entities governed by diverse requirements and highly regulated industries such as financial services, healthcare, and telecom. Companies should look to automate the translation of AI regulations into enforceable standards and policies to help simplify compliance with industry and regulatory requirements.

It won’t be easy. It takes a lot of work, skill, and investment, but it’s essential to ensure enterprises can put AI to work on a massive scale with trust and transparency. Because while we might find ourselves overrun by generative AI headlines, there’s a clear imperative: filter out the hype and seize the opportunity.

What’s next for AI?

Now that we know how we’ve gotten here, I want to focus on where we go next. In my upcoming blogs, I’ll be inviting subject matter experts in the AI space to join me as we discuss how companies can get started with generative AI and foundation models, as well as a responsible approach to building ethical AI by developing guardrails through principles of trust and transparency.


Tarun Chopra

Tarun Chopra is an accomplished and goal-oriented IT executive with end-to-end technological know-how and extensive experience leading teams.