Governing SkyNet

The use of machine learning is quickly becoming a standard part of digital solutions across government, providing insights, reducing the marginal cost of repeatable work and helping to deliver amazing user experiences.

Given government’s unique access to data, its impact on end users’ lives and its accountability to the taxpayer, implementing the right level of governance is critical to growing trust in such solutions.

Governance that is too heavy-handed will restrict growth and innovation as machine learning scales throughout the government ‘stack’.

Governance that is too light-touch risks unintended consequences for end users and increases the risk of failing to meet the strict auditing requirements our decisions sit under.

To strike this balance, we’ve adopted three broad guidelines for how we want to approach the scaling of machine learning governance.

ML governance should be an enabler

We currently treat ML projects as exceptions to our standard development practices. However, we are moving into an environment where ML will be the default choice for capabilities across government, because:

  • The ‘cost’ of machine learning is falling – User-friendly, mature toolsets are becoming commonplace (e.g. TensorFlow, as sketched after this list), alongside cloud services which take infrastructural complexity out of the equation.
  • The supply of ML-ready developers is increasing – Universities and online courses now teach mature ML curricula, and graduates are increasingly gaining real-world experience in industry.
  • The evidence base is broadening – Communities of practitioners are now commonplace to learn from, with a huge range of open-source projects to contribute to and proven patterns for many common challenges.
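
To give a sense of how far toolsets like TensorFlow have matured, here is a minimal sketch of a complete train-and-evaluate loop using its bundled Keras API. The dataset (MNIST) and model architecture are illustrative choices only, not a recommendation:

```python
# A minimal sketch of how little code a working model now takes,
# using the Keras API bundled with TensorFlow. The dataset and
# layer sizes are illustrative choices, not a recommendation.
import tensorflow as tf

# Load a built-in benchmark dataset and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Define, compile, train and evaluate a small classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1)
model.evaluate(x_test, y_test)
```

A working classifier in roughly a dozen lines, with no infrastructure to stand up, is a large part of why the marginal cost of ML keeps falling.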

ML governance should be proportional to the use case

We don’t classify information just because it’s processed using an SQL database. Equally, we shouldn’t apply heavy-handed governance to a capability just because it uses machine learning.

Governance should be based on the impact of decisions made by the end-to-end capability, for example (a sketch of proportional controls follows the list):

  • A back-office function to optimise infrastructure power efficiency – Low
  • An operational function to prioritise customer service tickets – Medium
  • An operational function to flag evidential content for review in a court case – High
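
As a minimal sketch of what proportional governance could look like in practice, the snippet below maps an impact tier to a set of controls. The tier names, the controls and the mapping are all hypothetical, illustrating the idea rather than prescribing a policy:

```python
# A hypothetical sketch of impact-proportional governance, assuming a
# simple three-tier model. Tier names, controls and the mapping are
# illustrative, not an agreed policy.
from dataclasses import dataclass
from enum import Enum

class Impact(Enum):
    LOW = "low"        # e.g. back-office power-efficiency optimisation
    MEDIUM = "medium"  # e.g. customer service ticket prioritisation
    HIGH = "high"      # e.g. flagging evidential content in a court case

@dataclass
class GovernanceRequirements:
    human_in_the_loop: bool
    audit_logging: bool
    independent_review: bool

# Controls scale with the impact of the end-to-end capability,
# not with the mere fact that ML is used.
REQUIREMENTS = {
    Impact.LOW: GovernanceRequirements(False, True, False),
    Impact.MEDIUM: GovernanceRequirements(True, True, False),
    Impact.HIGH: GovernanceRequirements(True, True, True),
}

def requirements_for(impact: Impact) -> GovernanceRequirements:
    return REQUIREMENTS[impact]
```

The point of the structure is that the same model family could sit in any tier; it is the decision the capability makes, not the algorithm, that sets the controls.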

ML governance should be adaptive to change

A few years ago Microsoft released an ML-powered ‘bot’, Tay, that enabled free-form Q&A on Twitter. It learnt from the questions it was asked and the syntax and language used to converse with it. It started off as a ‘polite’ conversationalist, but it quickly became a rude, obnoxious and racist monster as it was ‘gamed’ by its users.

Hence an ML capability which has been approved or governed under one set of assumptions or inputs might be inappropriate for a new set of inputs or use cases. Governance has to be an iterative process which evolves with the capability as it develops and scales.
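
One practical way to make governance iterative is to monitor whether live inputs still resemble the data the capability was approved against, and to trigger a re-review when they drift. Below is a minimal sketch assuming a single numeric feature, using a two-sample Kolmogorov-Smirnov test from SciPy; the test choice, the threshold and the function names are illustrative assumptions:

```python
# A hypothetical sketch of adaptive governance: compare live inputs
# against the distribution the capability was approved on, and flag
# it for re-review when they drift apart. The two-sample KS test and
# the 0.05 threshold are illustrative choices, not an agreed standard.
import numpy as np
from scipy.stats import ks_2samp

def needs_re_review(baseline: np.ndarray, live: np.ndarray,
                    alpha: float = 0.05) -> bool:
    """Return True if live inputs differ significantly from the
    inputs under which the governance sign-off was given."""
    statistic, p_value = ks_2samp(baseline, live)
    return p_value < alpha

# Illustrative usage: baseline captured at approval time, live from
# production. The synthetic shift stands in for real-world drift.
rng = np.random.default_rng(seed=0)
baseline = rng.normal(loc=0.0, scale=1.0, size=10_000)
live = rng.normal(loc=0.8, scale=1.0, size=10_000)  # inputs have shifted
if needs_re_review(baseline, live):
    print("Input drift detected: trigger governance re-review")
```

A production capability would monitor many features and outputs rather than one, but the principle is the same: the original approval is only valid while its input assumptions still hold.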