Who’s in Charge? Ensuring Responsible Development for AI

CLX Forum
Sep 16, 2019 · 4 min read

With the pace of technological development in artificial intelligence (AI) increasing at a staggering rate, concerns about responsible AI development have grown in recent years. Several international bodies, including the EU and the OECD, have issued frameworks seeking to define what makes AI ethical or ‘trustworthy’. Tech giant Microsoft has issued statements calling for industry regulation and for measures to ensure that AI technology is developed in ways that generate the broadest benefits while minimizing the risks of abuse. Our recent conversation with Andrew Gardner and Keith Rayle (heads of AI at Symantec and Fortinet, respectively) examined this topic in detail.

Defining the problem

The first challenge is simply setting the parameters of the discussion. Keith Rayle questions how regulations focused on the technology itself would function, given that the technology is already out there and accessible: “Are we talking about point-specific usage, or are we looking to place limits on aspects of the technology, such as the number of layers and nodes an AI can contain? How should we limit the size of an AI engine?” Rayle believes it might be more sensible to focus on the use cases, specifically how the AI is being used. And in that case, says Rayle, “We need to not only review our laws, but look at ways to apply pressures and set limits. National or global regulatory bodies typically lag technological innovation by a wide timeframe, such as we see with digital currencies.”

Should AI have a warning label?

In terms of government oversight, Andrew Gardner suggested that we might use “a safety label that tells consumers how the AI is used, what data it collects, and how it’s processed. There should be enough information to help the consumer understand the risk.” This would place the government in a regulatory role regarding the reporting of AI for prevention and forensic purposes, and Gardner argues that “this is the kind of oversight that the government could do.”
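To make the idea concrete, here is a minimal, purely hypothetical sketch of what a machine-readable version of such a safety label might contain, loosely modelled on nutrition labels and model cards. The field names below are illustrative assumptions, not part of any existing standard or regulation.

```python
# A hypothetical "AI safety label" as a simple data structure.
# All field names are illustrative; no existing standard defines them.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AISafetyLabel:
    system_name: str                  # what the consumer is interacting with
    intended_use: str                 # how the AI is used
    data_collected: List[str]         # what data it collects
    processing_description: str       # how that data is processed
    shares_with_other_ai: bool        # whether data flows on to other AI systems
    known_risks: List[str] = field(default_factory=list)  # risks the consumer should weigh

# Example label a vendor might publish alongside a product.
label = AISafetyLabel(
    system_name="Example voice assistant",
    intended_use="Answering spoken queries and setting reminders",
    data_collected=["audio recordings", "device identifiers"],
    processing_description="Audio is transcribed and analysed in the vendor's cloud",
    shares_with_other_ai=True,
    known_risks=["recordings may be retained for model training"],
)
print(label)
```

A machine-readable format like this is only one way the idea could be realized, but it would give consumers the risk information Gardner describes and give regulators something concrete to audit.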

There would be challenges, however. Keith Rayle questions how notification and reporting would work, when there are already so many interconnected systems: “What happens if one AI is simply connecting to another, and it’s that second AI that is using (or even misusing) the data? How do you handle those chains of connections, and who’s responsible for making the notifications?”

Encouraging responsible AI development

Another strategy, says Gardner, would be to “employ initiatives or incentives to encourage the responsible development of AI.” Gardner points out that this is already being done in China, as part of that government’s plan to become the global AI leader.

Still, it’s important to consider that AI operates primarily in the digital world, and the challenges of controlling that world are significant. While you cannot stop people from sharing information regarding AI, you can control the points where the AI intersects with real data or physical devices, such as drones and cars. Gardner suggests that this would be a good place to start when thinking about regulations or control, “because physical devices are built in factories, and factories have economic barriers. So that would give the government a runway to get involved and think about best practices for regulating AI.”

What happens if we get it wrong?

Keith Rayle’s biggest fear is the greed factor, “for instance, online traders using AI for their personal gain.” Andrew Gardner, for his part, is critical of the ‘if we can do it, we should’ mindset, especially in industry and product development, where there is a danger of building capabilities without any thought for the possible risks or costs involved.

Make no mistake, there are potential pitfalls involved in the use of AI. Speaking about facial recognition technology, for example, Microsoft highlighted the risks in three main areas: first, that it could be used in ways that violate laws prohibiting discrimination; second, that it can result in “new intrusions into people’s privacy”; and third, that it could be used by governments for mass surveillance, with the aim of limiting democratic freedoms. These risks are not unique to facial recognition; they apply to the use of AI more generally. Indeed, earlier this year, news reports surfaced that AI-enabled facial recognition technology had been weaponized for the purpose of suppressing minority groups in China.

Determining what counts as ‘responsible’ AI usage is not easy. As Keith Rayle asks, “should it be fair? Should it be transparent? And what do ‘fair’ and ‘transparent’ mean in the context of AI?” Perhaps the biggest role government could play is providing the necessary pressures, through positive and negative incentives, to keep pushing for responsible development. Positive incentives can fund, guide and encourage research in particular areas. Negative repercussions can deter organizations or individuals from using AI in ways that run contrary to the standards and norms that we, as a society, deem acceptable. If such protocols can be established, we should face a future that is both safer and open to greater possibilities for the human race.

Download your FREE copy of Canadian Cybersecurity 2020: https://secure.e-ventcentral.com/event.registry/CanadianCybersecurity2020/

Check out the CLX Forum blog and follow the CLX Forum on LinkedIn and Facebook to keep up to date with the latest happenings in the world of cyber security.

Interested in becoming a contributor? If you’ve got a topic which you feel is important to your peers, we want to hear from you! Get involved today by visiting: https://www.clxforum.org/get-involved/


CLX Forum

The Cybersecurity Leadership Exchange Forum (CLX Forum) is a thought leadership community created by Symantec.