The scope trope
Note: While I am an employee of the Government of Canada, this is not an official publication of the Government of Canada. It’s just my personal soapbox.

So first of all, I really want to thank everyone who read and recommended my previous post. I also want to apologize to anyone who contacted me and hasn't heard back from me yet. Needless to say, interest in the white paper initiative was very high, something I'm both proud of and thankful for. You'll also want to watch this space, which will be our official, bilingual Medium stream. It will feature content on the work we are doing on disruptive technology, profiling projects and people throughout the Government of Canada.
Today, we begin
The number one question I receive from people interested in our project is about scope, so I wanted to address that right away. All of this is, of course, subject to change.
Considerable ink has been spilled recently about the future of work and whether to regulate AI, the latter a topic raised by Elon Musk to much public debate. So it's with some trepidation that I have to mention that the Responsible AI in the Government of Canada white paper will probably not contain a thorough discussion of either of these very important topics.
Sorry!
This paper can’t talk about autonomous vehicles or universal basic income or whether to regulate private sector algorithms because my department doesn’t do those things. My colleagues in the Department of Transport or Employment and Social Development Canada are very smart people and I trust them to lead the discussions on these topics when they are ready, bringing their own expertise to the table. My amazing (seriously, they are; you should meet them) colleagues at Innovation, Science and Economic Development Canada are working with CIFAR and Canada’s AI superclusters on the Pan-Canadian AI Strategy. The National Research Council is a world leader in AI research. It’s a crowded room and — being the polite Canadian — I am keen not to step on too many toes.
So I'm broadly proposing a scope that aligns with the Treasury Board's mandate of issuing general administrative policy for the Government of Canada (section 7 of the Financial Administration Act for you wonks). These are the rules by which we federal institutions comport ourselves to serve you. Using our mandate as a scope boundary allows me and the contributors (aka "we") to examine how government uses AI to develop policy and administer services. Insert AI into our organizational DNA, and the results can be profound.
So don't fret! This is a good start, and it will definitely address some of the key policy, ethical, and legal issues that will extend to other areas. Plus, I hope that the depth and detail of the white paper can in fact provide thought leadership to other areas in government. I know that it will disappoint some, but I would prefer to focus on doing some good with a high chance of success rather than extraordinary good with a mediocre chance of success.
“AI”: A term undergoing cellular division
The other part of the scope discussion is what exactly we mean by "artificial intelligence." There was a time when this term perhaps had more specificity to it, but now it's been carried away by decades of field specialization, marketing, and occasional misuse. The suite of business applications we are examining may in fact have a more accurate name entirely. Since this paper is going to explore recommended practices rather than binding guidance, I also don't need to define AI with a scalpel at this juncture. That is, unless my bosses request otherwise!
We can start with something pithy and vague like "computer systems that operate autonomously and seek to mimic human intelligence," but that's a bit reductive and tautological. On the other hand, we can take a page from the UK Government Office for Science report and simply describe what AI is and isn't over a couple of pages. A government report should be informative, professional, direct, and aimed at as wide an audience as possible, so I really need to nail down a definition that's comprehensible to that audience.
I'm also staying away from artificial general intelligence and its terrifying big brother, superintelligence. While I am personally fascinated by concepts of exponential machine intelligence (I made the mistake of reading Nick Bostrom's epic on vacation in Mexico…), alas, I think they are a bit too theoretical to go into a serious government paper in 2017. Part of innovation change management in the Government of Canada is the idea that, in this age, the gap between science fiction and market maturity is narrower than ever before.
So there you go. We now have a scope, which, while vast and complex, has a defined boundary. That's a good first step.
