The Network of Innovators Experiment: What We Are Learning about Expertise Matching

with Andrew Young

reposted from the UK Cabinet Office Open Policymaking Blog

Increasingly, “experimentalism is no longer confined to formal scientific labs,” writes Charles Leadbeater. “It has become an organising method for social policy, startup businesses, venture capitalists, tech companies and the creative arts.” In order to improve how we govern, we must embrace experimentation in how we make decisions and solve problems. Such experiments will not be neat or perfect. But, as MIT Professor Kurt Lewin famously said, “research that produces nothing but books will not suffice.”

The 3rd annual International Open Data Conference in Ottawa, Canada, attended by 1,200 open data practitioners and enthusiasts inside and outside government, presented an opportunity to experiment with and test the Network of Innovators (NoI), an expert network designed to make expertise in innovative governing easily searchable. (For more background about NoI, read this earlier post.)

By rolling out an alpha version of this tool to help conference attendees 1) get matched to those most like them; 2) get matched to those who possess the expertise they lack; or 3) search for people with specific know-how, we wanted to begin to test whether and how expert networking tools could improve how we govern.

By the end of the two-day conference, over 20% of the participants had tried NoI. Users spanned the private, civil, academic, and governmental sectors, 51 countries, and six continents; the most-represented countries on NoI were the United States, Canada, Great Britain, Finland, and Spain.

The testing provided useful insights:

  • Assessing What People Know With Questions — Rather than asking people to self-assess their level of expertise (notoriously unreliable), NoI asks them to go through a list of twenty questions about implementing an open data project and identify which questions they could answer if asked. People found this to be a more compelling and useful way to uncover someone’s expertise than asking them to create a simple laundry list of skills (e.g., my skills are: open data, intellectual property). More than 70% of users responded to questions in the open data questionnaire.
  • Too Many Questions, Too Few Questions — Perhaps the biggest challenge was adequately capturing the full gamut of potentially useful open data expertise in a single questionnaire. Many users felt that the questionnaire gave them the ability to articulate their open data skills comprehensively, but others felt that more questions were needed. This, of course, creates a user experience challenge, with a tension between the comprehensiveness of the questionnaire and the usability that comes from a shorter, more digestible collection of questions.
  • Matching — The backend functionality that matches people to those with complementary know-how, skills, and experiences worked, but not well. Improving the quality of matches is a priority for the next iteration.
  • Searching Across Borders — There was equal demand for the ability to search for one’s peers domestically and for colleagues internationally to find collaborators, soul-mates, and co-conspirators.
  • Projects and People — We also learned that people want to learn more about relevant projects, not just individuals. Knowing that someone can answer a question on data scraping is useful, but knowing the specific project in which that individual scraped data could be even more useful.
  • Authentication — One last nitty-gritty item. We thought that social media-based authentication would make things easier. But the extra access controls imposed on users when traveling or using a shared device actually made it harder. We need email-based authentication.
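To make the matching idea above concrete, here is a minimal sketch (not NoI's actual backend, whose implementation we have not described here) of how a questionnaire-based network might rank matches. It assumes each user's expertise is represented simply as the set of questionnaire questions they say they can answer; the function and parameter names are illustrative.

```python
# Illustrative sketch of questionnaire-based expertise matching.
# Assumption: each user's expertise is the set of question IDs they can answer.

def similarity(a, b):
    """Jaccard overlap: how alike two users' expertise sets are (mode 1)."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def complement(seeker, candidate):
    """How many questions the candidate can answer that the seeker cannot (mode 2)."""
    return len(set(candidate) - set(seeker))

def rank_matches(seeker_qs, others, mode="complement"):
    """Rank other users for a seeker.

    others: mapping of user name -> iterable of answerable question IDs.
    mode: "similar" (most like me) or "complement" (expertise I lack).
    """
    if mode == "similar":
        score = lambda qs: similarity(seeker_qs, qs)
    else:
        score = lambda qs: complement(seeker_qs, qs)
    return sorted(others, key=lambda name: score(others[name]), reverse=True)

# Example: a seeker who can answer q1 and q2.
seeker = {"q1", "q2"}
others = {
    "alice": {"q1", "q2", "q3"},   # very similar, one new answer
    "bob": {"q4", "q5", "q6"},     # entirely complementary
    "carol": {"q1"},               # a subset of the seeker's expertise
}
print(rank_matches(seeker, others, mode="complement"))  # bob first
print(rank_matches(seeker, others, mode="similar"))     # alice first
```

The same question-set representation supports the third mode from the conference test (searching for specific know-how) as a simple membership check, which is part of what makes question-based profiles more machine-searchable than free-text skill lists.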

We are continuing the conversation with users who signed up in Ottawa and with new users across the seven collaborating countries participating in NoI. We want to identify the features to add to the next iteration of the tool before its initial deployments.

In particular, we are trying to understand:

  • How do government officials get expertise today and where are the gaps?
  • How might searchable directories of people’s talents, skills, and experiences accelerate the rate of innovation in governance?
  • How might civil servants use such “technologies of expertise” and in which contexts and when?
  • What features do civil servants need for such tools to help them to do their jobs better?
  • Looking ahead, how can such tools bridge the gap between government and citizens to enable more co-creation?
  • How do we test the use of NoI and measure its impact?

The first step toward answering these questions is broadening the use of NoI within the civil service and learning how it can be better crafted to meet the needs of civil servants, whether for mutual learning or for support from colleagues within and across jurisdictions. We will be launching version 1.2 of NoI later this summer. In the meantime, we encourage you to join and test the alpha version of NoI here, and use this form to share your feedback on the questionnaires, desired functionalities, and use cases. With your help, we can enable more and better knowledge-sharing around innovative governance and, we hope, improve people’s lives as a result.
Email us at