Why Talking Innovation Sucks
Happy holidays, my friends! In the spirit of the season, I come bearing gifts of innovation discourse, and the question of whether defining innovation is half of our problem. Why talk innovation? Well, partly because I work in a government innovation lab and think about it often. And partly because the amazing Dr. Pal is letting me write this blog as the very last deliverable of my Master's program. (Thanks Les!) Still reading? Cool. Let's talk about defining innovation.
First, let me ask you a question — is the government truly innovative? Is what it creates brand-spanking new? Something that gains ten times the efficiency or effectiveness? Something with widespread adoption? Do we as the public service deliver products or processes that meet those three criteria? So sure… I realize that it was government that landed on the moon and even developed the foundation of the internet. But are these the exception or the norm? Is our way of working designed to perpetuate these types of innovations?
Brad Rebelo recently wrote a blog post about a conversation we had where I said that innovation in the public sector is not incredibly likely to occur. Given that I work to foster innovation in the public service — that’s problematic. But I was referring to a definition that was offered up by Alex Ryan and Jerry Koh of the MaRS Solutions Lab. They state that innovation is “the invention of a new way of creating significantly more value and the adoption of the invention at scale”. By their definition, innovations have to be novel (and cannot have existed beforehand), should aim for 10x gains in value creation, and must eventually become the new normal.
Essentially, their argument is that improvement is not innovation. According to them, “if your organisation uses these terms interchangeably, or has an innovation strategy that manages both innovation and improvement in the same way, it will very likely fail to achieve anything truly innovative”. I think they are right. We as the public service have a tendency to call everything new to us an innovation — regardless of whether it existed in the private sector or didn’t really deviate from our normal course of business. Doing so potentially demotivates our workforce — calling everything an innovation devalues the meaning of the word and promotes our complacency with minor variations of the status quo. The figure above demonstrates this difference. Improvement merely refines the normal, while innovation creates an entirely new one. They argue that a “10x [gain] forces you to re-examine fundamental assumptions, diverge from business as usual, forge unlikely partnerships, and experiment with new technologies, business models, and organisational forms”.
But if we use this definition to re-evaluate everything the public service has been calling innovation, is anything actually left? Maybe not. And that, my friends, is kind of depressing.
Cue the Organisation for Economic Co-operation and Development with its November 20 report, The Innovation System of the Public Service of Canada. Is their definition somehow broader? I’d say so. (Full disclosure — their report endorses the Government of Canada Entrepreneurs, of which I am one. While getting a shout-out from the OECD was cool, I’d like to think my opinion would have been the same regardless.) Their definition of innovation has three criteria: novelty (i.e., introduce new approaches in a defined context); implementation (i.e., not just an idea); and impact (i.e., aim to better public results including efficiency, effectiveness and user or employee satisfaction). All easy-to-use concepts, right?
But then I started to overthink things (as I sometimes do). As part of the Accelerated Business Solutions Lab at the Canada Revenue Agency, I try to foster innovation throughout the CRA. And while that’s partly promoting innovative behaviours (e.g., prototyping, iterating, accepting failure, etc.), I personally believe that we also need to be able to clearly identify what we’re talking about when we use the word innovation. Do these criteria help us to do that? Their report identifies some immediate gains, as it “distinguishes innovation from creativity (coming up with new ideas) and invention (the creation of new things that may not be used)”. So, if it’s new, implemented and had an impact, we can now define and identify our innovations!
Is that enough though? We live in a world with limited resources, and the public service is no different. If someone came to you with multiple innovations, how would you compare one initiative to another? How do we map out and manage a portfolio of innovations? Alex and Jerry from MaRS rightly point out that “unsuccessful ones should be terminated — and their learnings harvested — when they fail to achieve liftoff [freeing] up the resources needed for new innovation”. So let’s look at those criteria again.
Novelty. Is this binary? Are things either new or not? Well no… their criteria refer to new approaches in a defined context. So maybe novelty is on a spectrum — from the public service adopting practices developed elsewhere (private sector or another government) to creating a brand new process itself. And what about implementation? I know ideas don’t count, but what about proofs of concept? Does this include pilots? Or is it only innovations that have widespread adoption? To contrast this definition with the MaRS one, I came up with the following visual:
These figures showcase the real problem — why we have so much confusion talking about innovation. Innovation is different things to different people. And some believe innovation is simply a buzzword because it’s unclear what we’re actually talking about. (An old Nesta blog put it nicely — “when you say innovation, you say nothing”.)
For argument’s sake, let’s assume incremental innovation exists (i.e., “new to you” counts). We haven’t explored the final aspect of their definition — impact. How are we even able to tackle that? How is one proof of concept or pilot deemed more impactful than another? I realize this is getting into the well-documented field of performance measurement and evaluation, but if I’m managing a portfolio of projects, how will I identify which one is the right one to begin scaling up? How would you go about comparing an internal-facing innovation to an external one? And to complicate matters even further, my colleague Tracey Snow raised an excellent point — what timeframe are we looking at? When do we start to realize the benefits of a tested innovation? Immediately? In three years? Five? How are we going to know when we have been successful?
And then, like a true Christmas miracle, the OECD’s Observatory on Public Sector Innovation was already refining the definition (I’m referring to it as the OPSI definition to differentiate it from the above). In a series of blogs, Alex Roberts broke innovation down into the four facets below:
The vertical axis refers to the degree to which an innovation is directed (e.g., aligned with an existing business objective). Mission-oriented innovation is “a driving ambition to achieve an articulated goal, though the specifics of how it might be done are still unclear”, while adaptive innovation is “trying new approaches in order to respond to a changing operating environment”. The horizontal axis speaks to the degree of uncertainty (e.g., a known solution for a complicated problem versus an unknown solution for a complex problem). Enhancement-oriented innovation is focusing on “upgrading practices, achieving efficiencies and better results, and building on existing structures, rather than challenging the status quo” (i.e., the “new to you”). At the other end is anticipatory innovation, which “is not so much about solving a problem, but about trying to understand if and how there might be a problem” (i.e., the brand-spanking new).
What’s cool about this model is that OPSI is also looking at the interactions between these facets (i.e., optimizing, sustaining, transformative and disruptive change). Referring back to portfolio management, this is an incredibly cool framework by which we could begin to categorize and contrast our innovations. Moreover, it lends itself to a fascinating area of future research — would a balanced portfolio have certain proportions of initiatives in each facet? It also has very practical applications — what change should your organization be effecting?
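For illustration only, the two OPSI axes can be read as a simple two-question classifier: is the initiative directed toward an articulated goal, and is the solution uncertain? Here is a minimal sketch of that idea. The boolean scoring and the example initiative names are my own hypothetical assumptions, not OPSI's method.

```python
# A hypothetical sketch: mapping the two OPSI axes onto the four facets.
# "directed" = aligned with an articulated goal (vs. responding/exploring);
# "uncertain" = unknown solution for a complex problem (vs. known solution).
from collections import Counter

def classify_facet(directed: bool, uncertain: bool) -> str:
    """Place an initiative in one of the four OPSI facets."""
    if directed:
        return "mission-oriented" if uncertain else "enhancement-oriented"
    return "anticipatory" if uncertain else "adaptive"

# A made-up portfolio: (initiative, directed?, uncertain?)
portfolio = [
    ("digital filing upgrade", True, False),
    ("net-zero operations goal", True, True),
    ("remote-service pivot", False, False),
    ("foresight pilot", False, True),
]

# Tally how the portfolio spreads across the facets.
tally = Counter(classify_facet(d, u) for _, d, u in portfolio)
print(tally)
```

Even a toy tally like this hints at the balanced-portfolio question above: once initiatives are placed in facets, you can at least see whether everything is clustered in the safe enhancement-oriented quadrant.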
To recap, innovation is different things to different people. If you’re talking about innovation and think that everyone’s saying something different, they probably are. I’ve presented three models — MaRS, OECD and OPSI. Which is the right one? Short answer — they all are. Context matters, my friends. Think of your organization. Are you just starting out in this space? Are you trying to demonstrate the value of innovative practices? The MaRS definition serves to help us set ambitious goals, recognizing that “innovation is high uncertainty, high return work”. Or maybe you’re trying to socialize the concept, helping people develop a common vernacular. The OECD definition is easily relatable and simple to understand. Or maybe your environment is mature and you’re looking to manage your portfolio — OPSI offers different facets which may help you to categorize innovation. Each has its own specific use. It’s up to us to know who our audience is, explain the definition we’re using, and make the discussion understandable for anyone listening.
My parting Christmas wish? That you found this somewhat helpful in understanding these different definitions and which one may be most applicable and beneficial to your context. (And, with that, I am done my Master’s… perhaps the greatest Christmas gift of all.)