Responsible AI in the Government of Canada: a Sneak Peek
Note: While I am an employee of the Government of Canada, this is not an official publication of the Government of Canada. It’s just my personal soapbox.
If you wanted to revitalize the service offerings of a large organization such as a government, but were laden with technology debt and archaic processes, what would you do? And if your organization were ready to accept significant change, understanding that the trajectory it had been on for the past several years wasn’t working, what would you recommend?
Talk to users for starters. Talk to experts and industry. Hire fresh talent with relevant skills. Upgrade the skills of others. Crumple up the rules that failed you and rewrite them. Look at your tools and your levers for change.
One of these tools is artificial intelligence/machine learning (AI/ML). While there is a lot of hype to cut through, tangible advances in AI/ML can help the Government of Canada deliver services that are more timely, meaningful, and informed than ever before. That means information that is easier to find, benefits that are processed more quickly (maybe even instantly), and more relevant service offerings to make your life easier. It also means more efficient services and better-informed policy design. This is a win-win scenario.
AI applications can be tremendously powerful, but as machine intelligence can be harnessed to advise, recommend, or decide on things that can have fundamental consequences for people’s lives, we have to tread carefully. When people provide advice or make decisions, they do so imbued with values. Some values are their own and some belong to the organization that they represent; in our case it’s the Values and Ethics Code for the Public Sector.
As machines begin to complement human tasks in government, they should be expected to meet standards of values similar to, or even higher than, those of the people with whom they’re working. So, in consultation with other governments, academia, industry, and the public, we will need to set a framework for how machines can provide advice and recommendations in public services: one that is practical for federal institutions and vendors, and accessible to laypersons. This task is going to be as challenging as (or more challenging than!) any enterprise architecture or application development effort, but it is that important.
People need to be governed by a government that consists of their peers, both in perception and in reality. If we get to a point where machines make decisions about people and we don’t understand why, or how to challenge them, then in my opinion we have failed to deliver good public service. As we strive for more effective and efficient digital services, we should not do so at the expense of our duty to act fairly.
AI is like a long set of hurdles laid in front of us. We can’t bulldoze through them because they pose inconveniences. Nor can we just give up at the first one and whine that we can’t do it. Instead, we need to study the course, train, and deftly jump each one. Here’s the thing: we can do this. Yes, we have screwed up big and have done so recently, but in no way should that make us declare failure before even trying.
Here’s a sneak peek of what we’ll be up to
We will be embarking on a “Digital Disruption White Paper” that will outline some of the key policy and ethical implications of the use of artificial intelligence in government. We are going to start with how we can use AI to improve the delivery of services and data to the public. This will include client-centric illustrations of where AI can empower service and data delivery, followed by discussions on data quality and bias, maintaining accountability for decisions, keeping decision-making processes transparent, and what governance and recourse look like in a post-AI government. While this is a pretty broad scope, it’s only a fraction of the AI-related work that governments around the world need to address.
A discussion is good; a discussion followed by action is better. The paper will provide a Government of Canada position on the use of AI for our work, and it will be a first step in actively working with federal institutions to pilot AI applications. For the pilots already under way, I’m going to nag the leads to capture lessons along the way. If we choose to issue formal guidance to federal institutions (a decision that’s way above my pay grade), then this paper will go a long way toward informing that, too.
I’m going to be relying on, and building on, the plethora of content written about this subject internationally. This is a crowded room of ideas, and my primary job will be sorting through them and negotiating them into a single document. So it’s probably not going to go far enough to please everyone, but I’ll do my best.
Another thing I’m excited (and totally nervous) about is how we are going to do this: way out in the open. We’re going to be looking for ongoing feedback and collaboration from experts around the world in data science, privacy, human rights, customer service, and more. This is going to be iterative and collaborative to a degree that I personally have never attempted before. I’m sure it’s going to chafe some people and I’ll be put on some proverbial hit lists, but it will be worth it. The end goal will hopefully be a thorough, but accessible, guide on how we can use AI responsibly.
Along the way, I’m going to use Medium as a travelogue through the process and muse on some sections as they develop. I’m giving you my personal take, in my personal space, on how we’re proceeding, so you get a window into my thought processes as I work with you to develop this strategy. We’ll be using our suite of GCTools for formal project contributions with interested parties on an ongoing basis (in both official languages, I promise!), but from time to time I may post raw content in development here for contemplation, discussion, or total rewriting.
This long journey starts next week. I look forward to working with you.