The need for new AI standards

Tarun Chopra
7 min read · Oct 30, 2023


Written by: Tarun Chopra & Christina Montgomery

Just last month, U.S. Senate Majority Leader Chuck Schumer hosted a number of high-profile tech CEOs in a closed-door meeting to discuss the major AI issues affecting the world and how the federal government could help solve them. It was just one of many such meetings over the past year, as governments and corporations alike grapple with how to regulate AI.

AI is not new, as I discussed in the first blog of this series, but it has advanced to the point where we need new AI standards. The technology's dramatic surge in public attention through consumer tools like OpenAI's ChatGPT has, rightfully, raised serious questions: What do we do about bias? What about misinformation, misuse, or harmful and abusive content generated by AI systems?

As we discussed in my last blog, while generative AI brings great promise, it is also forcing business leaders, governments, and everyday people alike to seek assurances that the AI systems they use and interact with are trustworthy. For more than a century, IBM has been at the forefront of the ethical use of new technologies. We pride ourselves on being trusted stewards for enterprises, and we've worked to earn the trust of society by responsibly ushering in new technologies under principles of trust and transparency.

Based on my meetings with clients around the world, many understand the obligation we collectively face, but the question weighing on their minds is how to manage risk and reputation today while preparing for the legal obligations of tomorrow.

To help answer that question, I'm thrilled to sit down today with Christina Montgomery, IBM's Chief Privacy and Trust Officer, to discuss the current and future state of AI policy.

Meet a global leader in AI ethics and governance, Christina Montgomery

As IBM’s Chief Privacy and Trust Officer, Christina oversees IBM’s privacy program, compliance, and strategy on a global basis and directs all aspects of IBM’s privacy policies. She also chairs IBM’s AI Ethics Board, a multi-disciplinary team responsible for the governance and decision-making process for AI ethics policies and practices. Recently, Christina testified before a U.S. Senate Judiciary Committee subcommittee at its hearing on the “Oversight of AI: Rules for Artificial Intelligence.”

Tarun: Generative AI raises several regulatory, legal, and ethical questions for businesses and society. How can organizations establish ethical guidelines to ensure accountability and transparency in developing and using these systems?

Christina: AI has been around for over a decade, but in the last year or so it has advanced to the point where it is certainly having a moment. This new wave of generative AI tools has given people a chance to experience it first-hand: people are using it for help with emails, homework, and even resumes. From a business standpoint, we’re seeing generative AI optimize marketing copy, generate product ideas, and design industrial equipment.

However, generative AI is also raising ethical issues around bias, misinformation, copyright, and privacy. This is particularly true when it’s delivered as a black box, with no transparency into the data used to train the models and no explanation of how they reach a recommendation. To foster societal trust in AI, organizations must embed ethical principles into their end-to-end AI development and deployment processes. That means every company should have an accountability process in place, whether that’s an ethics board or a more general governance system. Finally, companies need to establish these guidelines and processes without waiting for government mandates. It is within their power to act now, and those that do will earn the trust of their clients and be better prepared for the legal obligations of tomorrow.
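
To make the idea of an "accountability process" a bit more concrete, here is one minimal sketch of what a reviewable record for each model might look like in code. This is purely illustrative Python; the structure and field names are my own assumptions, not an IBM or watsonx artifact.

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """A minimal documentation record an ethics board or governance
    function might require before a model ships. All fields here are
    illustrative assumptions, not IBM's actual schema."""
    model_name: str
    intended_use: str
    training_data_sources: list[str]
    known_limitations: list[str]
    risk_level: str     # e.g. "low", "medium", "high"
    reviewed_by: str    # accountable owner or board sign-off
    approved: bool = False

# Hypothetical example of a record a review board could sign off on:
card = ModelCard(
    model_name="support-ticket-classifier-v2",
    intended_use="Route customer support tickets to the right team",
    training_data_sources=["2022-2023 anonymized ticket archive"],
    known_limitations=["English-only", "untested on voice transcripts"],
    risk_level="medium",
    reviewed_by="AI Ethics Board",
)
```

The point is less the specific fields than that transparency becomes auditable: anyone can ask what a model was trained on, what it is for, and who approved it.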

Tarun: From a regulation standpoint, some companies are calling for the U.S. government to establish a separate federal agency to regulate AI. Why would that add to the challenges and complexities?

Christina: A new federal agency in the U.S. won’t work, for a few pragmatic reasons grounded in historical precedent rather than ideology or corporate self-interest. First, a new agency would lack the institutional knowledge to effectively regulate every domain and application of AI. Second, it would be plagued by the challenges that vex today’s regulators, from budget and resource constraints to redundancy and inefficiency. And creating a vast new bureaucracy would require years of planning and broad bipartisan support that currently does not exist.

AI is already becoming ubiquitous in society, and time isn’t on our side. So, rather than create a new federal regulatory agency from scratch, Congress should focus on making every agency an AI agency. At IBM, we firmly believe Congress can do so by enacting a precision regulation approach that strikes the right balance between protecting society from potential harm and allowing innovation to flourish.

Tarun: Can you elaborate on what you mean by a “precision regulation approach”?

Christina: I want to start by saying that precision regulation does not mean no regulation. It means making the best use of current regulatory capacity, updating it for a new era of technology, and ensuring that our existing government agencies can stay current on the technology and its implications, so we can break the cycle where innovation outpaces regulation. In short, what we’re advocating for is this:

  • Regulating AI risk, not algorithms — We’ve urged Congress to establish rules to govern high-risk, harmful uses of AI, not lines of code. Not all uses of AI carry the same level of risk, so we should regulate when, where, and how AI products are used (a toy illustration of this idea follows the list below).
  • Making AI creators and deployers accountable, not immune to liability — Legislation should consider the different roles of AI creators and deployers and hold them accountable in the context in which they develop or deploy AI. Section 230 stands as a cautionary tale; we cannot create another broad shield against legal liability, irrespective of reasonable care.
  • Supporting open AI innovation, not an AI licensing regime — A licensing regime would inadvertently increase costs, hinder innovation, disadvantage smaller players and open-source developers, and cement the market power of a few incumbents. A vibrant open AI ecosystem is good for competition, innovation, skilling, and security, and it helps ensure that AI models are shaped by diverse, inclusive voices.
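
As a hedged sketch of the first point, the snippet below shows what "regulating uses, not code" can mean in practice: the obligation attaches to the deployment context, so the same model can land in different tiers. The categories are my own illustrative assumptions, loosely echoing risk-based frameworks such as the EU AI Act, and are not IBM's official taxonomy.

```python
def risk_tier(use_case: str) -> str:
    """Toy risk-tiering: the tier depends on the use, not the model.
    Categories are illustrative assumptions, not a legal standard."""
    high_risk = {"credit scoring", "hiring", "medical triage"}
    limited_risk = {"chatbot", "content recommendation"}
    if use_case in high_risk:
        return "high: strict obligations (audits, human oversight)"
    if use_case in limited_risk:
        return "limited: transparency obligations"
    return "minimal: no extra obligations"

# The same underlying model faces different obligations
# depending on where it is deployed:
print(risk_tier("hiring"))   # high-risk use
print(risk_tier("chatbot"))  # limited-risk use
```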

Tarun: How has IBM been at the forefront of building trustworthy AI?

Christina: IBM has strived for more than a century to bring powerful new technologies like AI into the world responsibly and with clear purpose. We follow long-held principles of trust and transparency that make clear the role of AI is to augment, not replace, human expertise and judgment. IBM was the first major company in our industry to establish a Chief Privacy Officer, and one of the first to establish an AI Ethics Board. Our Board infuses the company’s principles and ethical thinking into business and product decision-making. It provides centralized governance and accountability while remaining flexible enough to support decentralized initiatives across IBM’s global operations. That’s no small thing when you remember that we have 250,000 employees operating in 170 countries. IBM’s Chief Privacy Office has also put industry-leading AI and data capabilities into practice, building on a strong combination of privacy, security, AI governance, ethics, processes, and tooling, and works to bring these capabilities to our clients worldwide.

Tarun: What solutions does IBM offer to help companies on their AI governance journey?

Christina: Guided by our principles, we have built groundbreaking, trusted AI systems for our clients that power innovation. For example, our AI tools are helping NASA unlock insights from geospatial images so the world can better understand and prepare for the impacts of climate change. And we’ve partnered with Moderna to explore ways that AI can accelerate the development of new therapeutics and vaccines using mRNA technology. Earlier this year, we combined the power of generative AI, our trusted technologies, and a proven governance model into one platform for our clients: watsonx. Watsonx is IBM’s enterprise-ready AI and data platform, designed to multiply the impact of AI across our clients’ businesses through three components: watsonx.ai, watsonx.data, and watsonx.governance (which will be available later this year).

With watsonx, organizations can perform complex tasks that empower teams, engage customers, and save time. Recently, at Wimbledon 2023, our watsonx technology enhanced fans’ and players’ experiences through AI-generated commentary and analysis. Ultimately, our tools are transparent and explainable: we share how our training data is obtained and how our models are trained, and we provide insight into how our AI arrives at decisions or recommendations. This helps companies deploy trusted, responsible, and accountable AI.
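
As a hedged illustration of the kind of explainability Christina describes, here is a minimal sketch using scikit-learn's permutation importance to surface which inputs actually drive a model's recommendations. This is a generic technique on a public dataset, not a description of how watsonx computes its explanations.

```python
# Minimal explainability sketch: shuffle each input feature and
# measure how much model accuracy drops. Illustrative only; it does
# not reflect watsonx's internal explanation methods.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Report the five features the model relies on most:
top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"{X.columns[i]:<25} importance={result.importances_mean[i]:.3f}")
```

Surfacing this kind of evidence alongside a model's output is one simple way to move a system from "black box" toward the transparency the answer above calls for.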

Tarun: What are some of the first steps you think companies need to take when considering how to deploy trusted AI?

Christina: I think it’s important to recognize that waiting for government entities to establish regulatory guardrails does not, by any stretch, let business off the hook for its role in enabling the responsible deployment of AI. Every company looking to leverage the promise of generative AI should consider taking the following steps:

1. Designate a lead AI ethics official. Hire or assign one of your company leaders to be responsible for your trustworthy AI strategy.

2. Stand up an AI Ethics Board, or a similar function. Your company needs a centralized clearinghouse of resources to guide implementation of the trustworthy AI strategy defined by your lead AI ethics official.

3. Adopt strong internal governance practices. At IBM, we offer a range of products and services specifically designed to help organizations adopt strong internal governance practices so their AI outputs are transparent and explainable.

How to get started?

To elaborate on what Christina stated above, we have been working hard to help companies deploy trusted AI. Later this year, we are excited to announce the general availability of watsonx.governance, a toolkit designed to help organizations build responsibility, transparency, and explainability into their data and AI workflows by helping them direct, manage, and monitor all of their AI activities. If you’re interested in a sneak preview, you can sign up for the waitlist to access the tech preview.
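
While the post doesn't detail how watsonx.governance implements "direct, manage, and monitor," the monitoring idea can be sketched generically. Below is an illustrative Python wrapper that logs every prediction with enough metadata to audit it later; all names here are my own assumptions, not the watsonx.governance API.

```python
import json
import time
import uuid

def audited(model_fn, model_name, log_path="predictions.log"):
    """Wrap a prediction function so every call leaves an audit trail.
    Hypothetical sketch; not the watsonx.governance API."""
    def wrapper(inputs):
        output = model_fn(inputs)
        record = {
            "id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "model": model_name,
            "inputs": inputs,
            "output": output,
        }
        # Append one JSON line per prediction for later review.
        with open(log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return output
    return wrapper

# Usage: wrap any callable model and keep serving predictions as before.
score = audited(lambda x: {"approve": x["income"] > 50_000},
                model_name="toy-credit-model")
print(score({"income": 62_000}))
```

An audit trail like this is the raw material of governance: it lets an organization answer, after the fact, which model produced which decision, when, and from what inputs.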

