New Power Means New Responsibility: A Framework for AI Governance

Jean-Francois Gagné
Element AI
Oct 31, 2018 · 12 min read

It used to be that people would study a process, its inputs and its outputs, to write code that could automate that process. Building such software is a way of capturing intellectual property in digital form, and until now it has been a cognitive task mostly driven by humans. Today AI is writing its own software, extracting signal from noise and figuring out the rules by itself; it’s taking on that cognitive task of digitally codifying the world. It is revolutionizing what can be automated and the scale at which it can be deployed. With that new reach come new responsibilities to make sure AI is serving the right objectives.

Modern AI can figure out patterns, classify objects, make decisions and evaluate the results. It can learn and adapt to new situations using feedback loops. It’s awesome software. You don’t need to pay someone to make an analysis, then pay someone to create a piece of software, then pay someone to validate the outcome, then realize it’s not doing what you expected and go back and tweak the software. That whole cycle can take years in large organizations, yet a properly designed process powered by artificial intelligence can do it in a matter of days. It can recode itself, roll the changes out, and verify whether it’s actually moving the needle in a direction that you want — all at an accuracy and speed beyond human ability. Quite powerful. Quite threatening.

This new approach is introducing new risk into organizations at a scale not seen before. The way you code it is by showing examples of data, not by writing or editing the system yourself. At this point, there is very little monitoring of the consequences, because the current field of data governance only concerns the quality, integrity and security of the data itself. AI governance, on the other hand, looks more broadly at the signals in the data and the outcomes they drive.

In contrast to data supporting human-driven processes, AI will be operating processes, sometimes 10x better or faster than they used to run, leading to scenarios that you’ve never seen before. Simply from a value standpoint, this can be problematic. If you become super efficient at selling to a specific consumer segment, you might lock yourself in and forget about the rest of the market. Is your model seeing the entire problem?

Those responsible for managing these new information systems have three main focuses:

  1. Accurately defining the problem and objectives the agent is solving for and the outcomes it should be seeking. This includes the right metrics of performance and also what IP (insights, models of the world, etc.) should be extracted and the gradients of ownership between user and vendor.
  2. Orchestrating the feedback loops that drive the learning and improvement of the model, from the raw collection to the interpretation of results and connection with other intelligent systems’ insights.
  3. Assessing the risk: all the points where your agent system can go wrong. How will the model self-assess at every point in the process? How are you monitoring the automated system to make sure it is doing the right things? (See the sketch after this list.)
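
To make these three focuses concrete, here is a minimal sketch of how an organization might record them for each agent. It is only an illustration; every name in it (GovernanceSpec, feedback_sources, risk_checkpoints, and so on) is hypothetical, not part of any particular product or standard.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class GovernanceSpec:
    """Hypothetical per-agent governance record covering the three focuses above."""
    # 1. Problem definition: objective, performance metrics, and who owns the extracted IP.
    objective: str
    performance_metrics: Dict[str, float]            # metric name -> target threshold
    ip_ownership: Dict[str, str]                      # artifact ("model", "insights", ...) -> owner
    # 2. Feedback loops: where raw data comes from and which systems consume the results.
    feedback_sources: List[str] = field(default_factory=list)
    downstream_consumers: List[str] = field(default_factory=list)
    # 3. Risk: named checkpoints, each a self-assessment callable returning pass/fail.
    risk_checkpoints: Dict[str, Callable[[], bool]] = field(default_factory=dict)

    def run_risk_checks(self) -> Dict[str, bool]:
        """Run every self-assessment and return the results for monitoring."""
        return {name: check() for name, check in self.risk_checkpoints.items()}
```

In practice, a record like this would be filled in with the business unit during problem/solution definition and revisited whenever the agent is retrained.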

It’s quite simple to get AI to learn and perform the basic functions of a car. The challenge is whether it can do so in all the different possible contexts, such as with changing road conditions, stormy weather, pedestrians, etc. Coding with data examples can drastically reduce the cost, but it still requires a good deal of human ingenuity to consider how it is applied, and the job of managing AI systems will become much more about considering whether the AI is dealing with the whole picture. Managing the value creation and the risks is what governance frameworks are for.

So, here’s a framework of what I think governance should look like for AI to be trusted both inside and outside the organization.*

Each area of consideration will depend on the amount of autonomy you are building an agent for.

Levels of Autonomy

0. Disconnected

There is activity in your organization that AI has no clue about. You need to consider how even disconnected activities may become connected eventually in your risk assessment. Example: Handwritten notes made 10 years prior or old store video footage may be analyzed by an AI system for relevant information.

1. Watching

Watching is one of the basic implementations of AI, collecting and classifying information that will drive other processes. What is it watching, what is it paying attention to or ignoring? What processes does its information collection connect to? Example: An AI agent that watches a hockey game and automatically records the stats, including ones a human can’t see such as the force of a check.

2. Coaching

Coaching is about making suggestions without taking action. AI coaches can still be powerful, thanks in part to what we know about nudging human behavior. Example: Say I’m presenting and a camera is watching the audience, analyzing their body language to tell if they’re bored. It can tell what they like and dislike, and when it’s time for me to try another joke.

3. Collaborating

This is where a machine can’t yet fully automate a scenario, but can still drive most of the show. Example: In insurance claims processing, you may get 60% or 70% automation. That means many of the tasks are performed by an AI agent, while a human is very much in the loop to process and analyze the remaining 30% or 40%.

4. Autonomous

When an agent is fully autonomous, things are going to happen so quickly that the human cannot be in the loop. The interaction will be through adjusting the system, monitoring the results and providing feedback. Example: This is what much of cybersecurity or high-frequency robotrading looks like today, machines making huge numbers of decisions on their own at speeds beyond any detailed human oversight.

How these levels of autonomy are classified will vary across organizations, but this captures the broad scope of it.
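
As a lightweight illustration of the taxonomy above, the five levels could be encoded directly so every agent in an inventory carries its autonomy level. This is only a sketch; the class and function names are made up for the example.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """The five levels of autonomy described above, ordered by increasing independence."""
    DISCONNECTED = 0   # activity the AI has no visibility into (yet)
    WATCHING = 1       # collects and classifies information that drives other processes
    COACHING = 2       # makes suggestions but takes no action
    COLLABORATING = 3  # drives most of the process, with a human handling the rest
    AUTONOMOUS = 4     # acts at speeds that keep the human out of the loop

def human_in_the_loop(level: AutonomyLevel) -> bool:
    """Rough rule of thumb: beyond COLLABORATING, oversight shifts to after-the-fact monitoring."""
    return level <= AutonomyLevel.COLLABORATING
```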

An AI Governance Framework for Industry

The sections below explain what I mean by each category. Every organization will need to think through its own general principles for each of these sections, but also apply them individually to each of its agents to make specific rules for a given situation. For the individual considerations, I would add: role of the agent, requirements for deployment, risks to watch for, parameters for adversarial governance models, and also how it connects with broader, existing corporate governance, especially around data and ethics.

Performance

AI needs to do what it says on the tin. That is to say, it needs to perform properly and meet expectations. Only AI that performs predictably and accurately will be able to gain trust in the actual outcome that it delivers.

Accuracy

Accuracy pertains to an AI’s confidence and ability to correctly classify one or more data points into the right categories, as well as its ability to make the correct predictions, recommendations or decisions based on those data points and classifications. Accuracy is relative. You want to determine what level of accuracy makes sense for your business or product in a given context. 70% accuracy is very good if you’re forecasting calls coming into a call center at the minute level. It’s terrible if you’re trying to forecast sales in a grocery store for the week.
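
To illustrate the point that accuracy is relative, here is a hedged sketch in which the same measured accuracy clears one use case’s bar and fails another’s. The threshold values are illustrative only, not industry standards.

```python
# Illustrative, context-dependent accuracy thresholds (hypothetical numbers).
ACCURACY_THRESHOLDS = {
    "call_center_minute_forecast": 0.70,    # 70% is very good at the minute level
    "grocery_weekly_sales_forecast": 0.95,  # the same 70% would be unacceptable here
}

def meets_accuracy_bar(use_case: str, measured_accuracy: float) -> bool:
    """Compare a measured accuracy against the bar chosen for that business context."""
    return measured_accuracy >= ACCURACY_THRESHOLDS[use_case]

print(meets_accuracy_bar("call_center_minute_forecast", 0.70))   # True
print(meets_accuracy_bar("grocery_weekly_sales_forecast", 0.70)) # False
```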

Bias

There are so many ways bias makes its way into systems, and it can never be eliminated entirely. All data is collected with some kind of bias, an intent or world view of what’s important. In many cases, unwanted bias can be removed at the stage of collection. Given that data always carries some kind of bias, it is also a matter of accounting and controlling for it within the model. It is important to make sure bias does not build up enough to sway the outcome to one that is detrimental to the business. Depending on the application and circumstance, there are several ways to counter bias in AI, including adding more diverse datasets and inputs, as well as ensuring the sub-goals of the objective are correctly described.
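
One common way of accounting for bias, sketched here under the assumption that the data carries a group label and a binary favorable/unfavorable outcome, is to compare the rate of favorable outcomes across groups and flag the model when that ratio drifts too far from parity. The function below is a minimal illustration, not a complete fairness audit.

```python
from collections import defaultdict
from typing import Iterable, Tuple

def disparate_impact(outcomes: Iterable[Tuple[str, int]], reference_group: str) -> dict:
    """Ratio of each group's favorable-outcome rate to the reference group's rate.

    `outcomes` is an iterable of (group_label, outcome) pairs with outcome in {0, 1}.
    A ratio well below 1.0 suggests that group receives favorable outcomes less often.
    """
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in outcomes:
        totals[group] += 1
        favorable[group] += outcome
    rates = {g: favorable[g] / totals[g] for g in totals}
    return {g: rates[g] / rates[reference_group] for g in rates}

# Example: group "B" receives favorable outcomes at half the rate of group "A".
data = [("A", 1), ("A", 1), ("A", 0), ("A", 1), ("A", 1),
        ("B", 1), ("B", 0), ("B", 1), ("B", 0), ("B", 0)]
print(disparate_impact(data, reference_group="A"))  # {'A': 1.0, 'B': 0.5}
```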

Completeness

The notion of completeness is closely tied to “fairness”, though it does not encapsulate prejudicial outcomes so much as missing useful information. An AI that is not complete is missing data inputs that it needs to be effective at its task. For example, a traffic app that predicts congestion patterns without taking the weather into account. This is a particular area of focus that will determine the appropriate level of autonomy.

Security

To protect performance, AI needs to be secure in its processes, data and outcomes. A decision-making AI must not be compromised by adversarial data, unforeseen scenarios, outside influence or other manipulations that would negatively affect its decision-making abilities.

Adaptability

This is the ability of the AI to deal with changing situations. If I’m introducing a new product or a competitor opens nearby, how reliable are the forecasts and how reliable is any decision to adjust staffing? A properly adaptable AI can be used for the same use case in new situations. An effective measure for determining a threshold might be the volume and diversity of situations it has seen before going into deployment. The set of edge cases the model needs to be shown will expand over time and should be regularly added to the training regimen.

Adversarial Robustness

A particular type of adaptability concerns agents (human or artificial) trying to corrupt the model. This is essentially the focus of cybersecurity. By exposing the AI to various situations or agents with malign, or even simply unaligned, objectives, an organization can prepare for adversarial probes before it encounters them post-deployment.
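
A minimal sketch of that idea: before deployment, expose the model to perturbed inputs and measure how often its decisions flip. It assumes a generic model object with a predict(inputs) method; real adversarial testing uses deliberately crafted, not just random, perturbations, but the monitoring principle is the same.

```python
import numpy as np

def robustness_under_noise(model, inputs: np.ndarray, noise_scale: float = 0.05,
                           trials: int = 20, seed: int = 0) -> float:
    """Fraction of predictions that stay unchanged when inputs are randomly perturbed.

    A low score means small input changes flip decisions, a warning sign that
    deliberately crafted adversarial inputs would do far worse.
    """
    rng = np.random.default_rng(seed)
    baseline = model.predict(inputs)
    stable = 0.0
    for _ in range(trials):
        perturbed = inputs + rng.normal(scale=noise_scale, size=inputs.shape)
        stable += np.mean(model.predict(perturbed) == baseline)
    return stable / trials
```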

Privacy

Privacy needs to be guaranteed at all moments of interaction with the user. This includes all the information that is put in by the user, as well as all information generated about the user over the course of their interactions with the AI.

IP Capture

Intellectual property rights are core to the business models of many AI-developing organizations. The parameters for IP capture need to be tied to the business value they drive to make sure the right stuff is being captured, but they also need to determine who owns what. These IP rights need to be clearly defined between vendors, employees, organizations and end users. What constitutes the “correct” division of rights depends on the specific context and implies agreement upon concepts of control and ownership of data, for which no consensus is currently in place. In a GDPR world, these determinations will also need to be disclosed more and more.

Impacted Users

With those IP rights defined, any AI-enabled organization needs to be mindful of how different information is used and impacts various levels of user. Tracing the flow of data (i.e. where it comes from and where it goes) and how it is used inside and outside the organization is essential to guaranteeing privacy. There need to be mechanisms in place to allow users to either take their data elsewhere (portability) or erase their data (right to be forgotten).
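
Here is a minimal sketch of what those two mechanisms could look like inside a service. The registry, method names and record format are hypothetical, and a real implementation would also have to purge derived data held by downstream systems and backups.

```python
import json
from typing import Dict, List

class UserDataRegistry:
    """Hypothetical registry tracing which stored records belong to which user."""

    def __init__(self) -> None:
        self._records: Dict[str, List[dict]] = {}

    def record(self, user_id: str, item: dict) -> None:
        """Trace every piece of data back to the user it concerns."""
        self._records.setdefault(user_id, []).append(item)

    def export(self, user_id: str) -> str:
        """Portability: hand the user everything held about them, in a portable format."""
        return json.dumps(self._records.get(user_id, []), indent=2)

    def erase(self, user_id: str) -> int:
        """Right to be forgotten: delete the user's records and report how many were removed."""
        return len(self._records.pop(user_id, []))
```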

Transparency

These efforts will only build trust in the system if they are properly communicated. Without transparency about the values, processes and outcomes, trust-building is going to be limited.

Explainability

Regarding explainability, the goal should not be to expose the exact, inner technical workings of the algorithms used to get to a certain outcome. Rather, the goal should be to expose why certain standards for the application at hand are met or not met. Example: the autonomous car did not see a small post and hit it. Why? Because the specific sensors it is equipped with cannot process the reflective coating of the paint on the post. In this case, it does not matter how the autonomous driving AI processed the data exactly. The relevant explanation is how the accident could occur, after which adjustments can be made both in the moment by the user and after the fact by the supervisors of the system.
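
To illustrate reporting at the level of standards rather than model internals, here is a hedged sketch: each application-level standard is a named check over the incident context, and the explanation states which standards were not met. Every name here is invented for the example.

```python
from typing import Callable, Dict

# Hypothetical standards for an autonomous-driving deployment. Each check looks at the
# incident context, not at the model's internal workings.
STANDARDS: Dict[str, Callable[[dict], bool]] = {
    "detects_reflective_obstacles": lambda ctx: ctx.get("reflective_obstacle_detected", False),
    "slows_in_low_visibility": lambda ctx: ctx.get("speed_reduced_in_low_visibility", True),
}

def explain_incident(context: dict) -> str:
    """Report which application-level standards failed, not how the model computed its output."""
    failed = [name for name, check in STANDARDS.items() if not check(context)]
    if not failed:
        return "All defined standards were met; review whether the standards themselves are complete."
    return "Standards not met: " + ", ".join(failed)

print(explain_incident({"reflective_obstacle_detected": False}))
# Standards not met: detects_reflective_obstacles
```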

Intent

An organization working on an AI should document its intentions, as well as underwrite them with standards of certain desirable values such as human rights, transparency and harm-avoidance. Not only does this force a conscious design process, stated intent also ensures the user (whether internal or external) understands how the tool should be applied. Misapplying the tool can disrupt all the other considerations made in this framework.

Here’s Some Last Food for Thought

If you want to roll out AI at scale, governance cannot be an afterthought. It has to be part of the strategy and well documented. Personally, I think this will be built through the execution of different projects. Governance needs to be worked out collaboratively with the business units in the company during the problem/solution definition. Yet it is necessarily the ultimate responsibility of whoever is in charge of the cyber systems in the organization, and they need to clearly define it for everyone.

Accountability

Even with the best of efforts, things will go wrong. Good AI governance should include accountability mechanisms, which can be diverse in choice depending on the goals. Mechanisms can include monetary compensation (no-fault insurance), fault finding, and reconciliation without monetary compensation. The choice of accountability mechanisms may also depend on the nature and weight of the activity, as well as the level of autonomy at play. An instance in which a system misreads a medical claim and wrongly decides not to reimburse may be compensated for with money. In a case of discrimination, however, an explanation and apology may be at least as important.

Ethics and Corporate Governance

One crucial area of collaboration is with the broader corporate governance structure, as that is where an organization’s guiding ethics come from. With AI, organizations have the ability to take actions that would be difficult or impossible to hire people to do. You could call every single one of your customers with a very human-like, friendly bot with augmented recommendations without telling them it’s a bot. Would you do that? It’s a hell of an engineering feat, but it comes with ethical issues that aren’t immediately obvious. The point is that it can’t be the technical team determining the ethics underpinning the governance framework, because it is a subjective question. Most likely the governance board will need its own ethics committee, if it doesn’t already have one, to answer these questions.

Augmented Governance

I don’t think AI governance will be able to work without itself being augmented with AI. Governance will likely need its own swarm of adversarial agents whose role it is to test and challenge the infrastructure and systems. I think this is the next generation of regulation, ever present to monitor robustness across all governance considerations. They will be able to actively force explanations of decisions, measure biases and estimate completeness independently, and may take different forms for corporate governance and for external auditors. These will be both the unit tests and the ongoing stress tests, rolling out across organizations and even enforcing accountability.
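
A rough sketch of that idea, with every name invented for the example: a set of governance probes runs continuously against a deployed model, and each probe returns a finding that feeds the accountability process. Real probes would reuse the bias, robustness and completeness checks sketched earlier, applied to fresh production data rather than training data.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Finding:
    probe: str
    passed: bool
    detail: str

def run_governance_probes(model, probes: List[Callable]) -> List[Finding]:
    """Run every adversarial governance probe against the deployed model and collect findings."""
    return [probe(model) for probe in probes]

# Placeholder probes for illustration; each would wrap an independent measurement in practice.
def bias_probe(model) -> Finding:
    return Finding("bias", passed=True, detail="disparate impact within tolerance")

def robustness_probe(model) -> Finding:
    return Finding("robustness", passed=False, detail="predictions unstable under small perturbations")

for finding in run_governance_probes(model=None, probes=[bias_probe, robustness_probe]):
    print(finding)
```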

Of course, the framework needs to be implemented first. Bring these considerations into the development process of your agents, and begin the practice of sharing the results across your organization and with your users. The payoff will not just be more powerful models, but also fewer regulatory snares and more trust in the system by all its users.

////

*If you’ve been following my blog you may be asking how this relates to PEST (link). I’ve designed this as a more detailed framework for the organization to use in its assessment. I think PEST is still quite relevant and effective as a list of guidelines for how the organization should think about making systems the public can trust, and the AI Governance framework reflects many of the same ideas.

I also post these blogs on LinkedIn, and send them out first via the subscriber list available on my site.

Originally published at www.jfgagne.com on October 31, 2018.
