Can we automate “personal”?

Leigh Whittaker
Published in Design Voices
Dec 18, 2017 · 8 min read

Considerations for applying AI & automation in government services and the importance of human interactions

Despite recent breakthroughs in artificial intelligence (AI), humans continue to play a central role in the delivery of government services. In fact, their role is even more crucial as we continue to transform the experience of citizens, update platforms and even train and apply AI itself.

Automation, simplification and personalisation of services — and the data and digital platforms that enable them — can transform the effectiveness of government and should be embraced. Done well, they can balance the potentially competing goals of applying scarce resources well and meeting the expectation that government services look after citizens and their interests. With this in mind, government has a responsibility to make the best use of AI and data. It is rare for a technology to come along that can transform expectations of what government can do while also potentially reducing the overall cost of providing services. However, we now also face the question of which actions and interactions we automate or hand over to AI, and which we reserve for highly skilled people.

A number of technological advancements mean we are increasingly automating, personalising and digitising many interactions and actions that traditionally were undertaken by humans — or at least involved a number of humans. Some of these interactions with government are important and highly valuable, while others may be categorised as low-value or occur due to bad design of systems, processes or policies. They persist because we have accepted the status quo, allowed bad practices to evolve over time, or simply never had the capacity or capability to reimagine them. It’s now more possible than ever to reimagine these experiences.

“AI has the potential to transform every aspect of how we live, work and do business.”

If we are to embrace these new technologies and see the gains in efficiency, cost saving and potential reduction in frustrating interactions, then we cannot mindlessly build new services or replace old ones without properly considering unintended consequences of automation or personalisation.

In our work, we are increasingly exploring the leveraging of data and intelligent systems to ‘personalise’ services and interactions. Personalisation, in this context, refers to the ability to plan, orchestrate and deliver tailored and relevant experiences to individuals — most often automatically.

Leading the design of these experiences has surfaced some questions I have had for a while:

  • Where does the work of AI or decision engines end and the role of the human begin?
  • What do we want to reserve for the human?
  • How does group intelligence form, where is it visible and how do we harness it?
  • If we are to introduce a new form of intelligence (artificial intelligence that drives automation and personalisation), then what does this mean for group intelligences?
  • How do we design ‘Living Services’ that learn from our experiences and improve on them?
  • How do we design better systems/organisations to nurture multiple intelligences?

These questions are pertinent, and we must approach our work with a balanced view and deep thought about the application of ‘humanness’ and shared intelligence that has built up over a long period of time in many teams or departments. I will write about this specifically soon. In this piece though, I want to focus on a small slice of what the above questions represent. How might we better design the application of AI and personalisation within government that balances the needs of the individual, their community and government policy?

Governments need to balance efficient service provision with effective outcomes — this has traditionally resulted in imbalance

For government in particular, there is an expectation to provide ‘platforms’ for the population to achieve their goals and address their needs — the infrastructure required for the functioning of society as a whole. There is also an expectation to use taxpayers’ resources efficiently and to moderate intervention with freedom (i.e. intervene, but not too much). This opens up a potential contradiction, or competition, in strategic drivers. There is also a risk that technologies implemented by governments (and organisations as service providers) address one side of the equation more than the other — i.e. service efficiency and resource efficiency.

As ‘consumers’ we are now expecting a lot from the organisations we choose to interact with, and, slowly, due to liquid expectations, we as ‘citizens’ are expecting more from our interactions with government.

No matter the context, any less-than-downright-delightful touchpoint or experience will be automated — eventually. And customers and citizens will welcome the automation. But this process will require some deftness in working with the interplay of personalised interactions, trust in the relationship, permission to gather data and agreement that effective outcomes are being achieved. Getting this right may form the basis for a ‘virtuous cycle’ where we drive digital adoption and automation by providing better and more personalised experiences. We then reinvest these resources in enhanced human interactions and dealing with complex cases. The result of this is better outcomes and experiences, which drive trust and potentially greater comfort with automated and digital solutions.

This will take some planning. And learning.

The new wave of technology may help governments create the desired ‘virtuous cycle’ — if done well

Progressive governments around the world are starting to think about how best to leverage the combined power of AI, personalisation and big data. As a start, governments should:

  • Establish well-designed policy and strategic frameworks that provide the right balance of capability and enablers, addressing the potential contradiction in drivers so that it forms a virtuous cycle
  • Explore new forms of partnering, organising and structuring that support more flexible provision of services, change employee expectations and shift away from predictable interactions (which will be automated)
  • Provide stable infrastructure and platforms that support the experimentation with and delivery of new forms of services
  • Develop viable action plans that translate the vision into actions and new experiences for citizens, informed and designed with the intuition and expertise of highly experienced departmental staff
  • Embrace mechanisms to continually assess and adjust services or ways of working based on the mutual learning of both intelligences (artificial intelligence and the shared group intelligence formed from experience). This would involve departmental experts helping to design and train AI and personalisation frameworks, as well as allowing the technologies to provide ‘recommendations’ or richer information for decisions and actions that we ultimately choose to keep in the realm of human interactions
  • Provide access to open and linked data sources
  • Reduce the hierarchy of departments so that focus can form around the new forms of interaction these technologies enable

The above list — while not exhaustive — represents the more actionable and pragmatic requirements. What is also needed is a conversation about, and exploration of, some more philosophical and ethical challenges. Governments — and we as advisors and designers — need to be mindful of:

  • The role that intuition and empathy play in the interaction with a citizen — especially those who are vulnerable or facing unique or complex circumstances.
  • The extended benefits of dealing with a human, slowing things down and paying attention rather than just speeding things up, automating or providing efficient services.
  • The underlying assumptions and value systems we use to design the technologies and how that may play out when applied ‘automatically’ and across a population.
  • Biases that may be present in the designer or programmer that play a part in determining outcomes. The frameworks, algorithms or data may not be fully representative of what we are trying to teach AI or the outcome the department is after.
  • The processes or actions that government workers undertake that are not adding value or are no longer necessary. Knowing this gives us the choice to reimagine them, automate them or stop doing them altogether.
  • The way we will deal with broader challenges, where AI can be seen as a ‘job destroyer’ on one hand or, on the other (at least in the early stages), a ‘worker liberator’ from mindless administrative tasks.
  • The opportunity to use AI to help us get better at what we do (the activities reserved for ourselves) and how it can augment our ability to learn.

[For now] We need to consider the limits of ‘the bots’ to undertake the caring, intuitive and personalised experiences that our ‘human actors’ do today

Any effective strategy or program that seeks to create more personalised experiences needs to consider both the pragmatic and philosophical design challenges above. I also believe we need a framework that weighs a number of factors to determine when it is suitable to leverage a digital channel and when it is better to provide the personalised information or recommendation to a service agent or case manager, for example.

In highly complex situations, when the citizen is vulnerable or we are uncertain about their context, it is ultimately the staff member who is going to apply their intuition and experience, in the moment, to identify the best action and solution for the citizen. This doesn’t mean the technology doesn’t play a crucial role; it’s just that in some cases we shift the lever slightly towards enabling the human interaction.
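A routing framework of this kind could be sketched very simply. The factor names and thresholds below are invented for illustration — any real framework would need to be designed and tuned with departmental experts — but the shape of the decision is the point: automation handles the routine, and complexity, vulnerability or uncertainty escalate to a human.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    complexity: float          # 0.0 (routine) .. 1.0 (highly complex)
    vulnerability: float       # 0.0 (not vulnerable) .. 1.0 (highly vulnerable)
    context_confidence: float  # how certain we are about the citizen's context

def choose_channel(interaction: Interaction) -> str:
    """Route to a human case manager when the situation is complex, the
    citizen is vulnerable, or our picture of their context is uncertain;
    otherwise offer the automated, personalised digital channel.
    (Thresholds are illustrative, not drawn from any real policy.)"""
    if (interaction.complexity > 0.7
            or interaction.vulnerability > 0.5
            or interaction.context_confidence < 0.6):
        return "human case manager"
    return "automated digital channel"

print(choose_channel(Interaction(0.2, 0.1, 0.9)))  # routine case: automated
print(choose_channel(Interaction(0.9, 0.1, 0.9)))  # complex case: human
```

Note that even when the human channel is chosen, the same personalisation engine can supply the case manager with richer information rather than acting on its own.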

Personalising experiences — choosing the level of automation

The personalised experiences that government will increasingly seek to provide will be orchestrated around what is known about the person, their situation, the actions and the channels available. This orchestration will be driven by algorithms, but should not be left solely to the platforms and machines we design. We need to put the citizen at the centre, alongside the experienced employees who leverage intuition and collective intelligence to deliver effective interventions.

There are some exciting advancements in technology which indicate that these new technologies and artificial intelligences are learning more human qualities. For example, scientists are experimenting with methods of instilling ethical principles in AI using ‘inverse reinforcement learning’ that ‘involves letting the system observe how people behave in various situations and figure out what people actually value, allowing the system to make decisions consistent with our underlying ethical principles.’
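The core idea of inverse reinforcement learning — infer what people value from how they behave, rather than programming values in — can be illustrated with a deliberately minimal toy. This is not a real IRL algorithm (those recover a reward function from trajectories under an environment model); it is a frequency-based sketch of the same intuition, with invented example data.

```python
# Toy sketch of the intuition behind inverse reinforcement learning:
# observe behaviour, then infer what the demonstrator values.
# Deliberately minimal — not a real IRL algorithm.
from collections import Counter

def infer_state_values(demonstrations):
    """Estimate how much a demonstrator values each state from how often
    their trajectories visit it, normalised so the values sum to 1."""
    visits = Counter(state for trajectory in demonstrations
                     for state in trajectory)
    total = sum(visits.values())
    return {state: count / total for state, count in visits.items()}

# Hypothetical example: a caseworker repeatedly routes cases through
# triage to a human agent rather than ending at the automated channel.
demos = [
    ["intake", "triage", "human"],
    ["intake", "human"],
    ["intake", "triage", "human"],
]
values = infer_state_values(demos)
# The inferred values rank "human" above "triage", reflecting what the
# demonstrator's repeated choices reveal they value.
```

A real system would go further — learning a reward function that generalises to unseen situations — but the premise is the same one quoted above: let the system observe behaviour and figure out what people actually value.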

For now, the systems we are training are not capable of performing the function of the human in these complex interactions. However, with change arriving at an exponential pace, it won’t be long before we need to seriously consider:

  • Which services are ready to be automated or heavily assisted by AI?
  • What principles and paradigms do we want them to operate under?
  • Which experiences do we want to keep in the realm of the ‘imperfect humans’?

My last suggestion — find some strategic designers (*Fjord!) to start working on this now. Before the bots design the future for us. ;)

Leigh Whittaker
Business & Service Design Lead.
Fjord
Melbourne (& everywhere!)
