The SOIF Retreat 2019 — Days 2 and 3


Source: Kristel Van der Elst. Image: Chris Skelly.

Chris Skelly is a member of the facilitation team at this year’s SOIF Retreat, where policy-makers and practitioners come together to learn with us about how to use futures to improve outcomes. It’s held at Hartwell House, in the south of England. He meant to post daily, but the facilitator’s role is a demanding one. Here are his thoughts on Days 2 and 3.

Chris Skelly writes: Days 2 and 3 were long, hard days. I am finishing this blog on the morning of day 4.

Last night, after the formal sessions finished, we had drinks in the library listening to some words of wisdom from José Manuel Barroso, former President of the European Commission, followed by a fantastic dinner, music, and conversation… where does one find the time to write?

José Manuel Barroso reflects on politics, foresight and Weber in the library of Hartwell House

Too Long; Didn’t Read (TL;DR)

We covered the fundamental steps in developing a futures project: horizon scanning, trend analysis, project scoping, and identifying, ordering and framing the key drivers of change that are used to build a picture of possible futures through scenario development.

Not rocket science, but something you can only learn by doing. We’ve now all got a basic framework that we can use to learn about the future. A lot was covered over the past two days, and more potential readings than I can handle this year.

It is starting to come together.

What we learned

Thinking like a futurist

Complexity is the primary intellectual challenge for everyone. Experts and leaders too. It is making the achievement of “long-term goals in a shifting pluralistic, ambitious and novel world” increasingly challenging. My group at Public Health Dorset has invested significant resource in developing our Systems Thinking capability, and this is definitely a required capability in foresighting (or strategic insights) work.

Where experts feel safe: The familiar, predictable, and known.

Where leaders add value: The unfamiliar, uncertain, unknown.

Quantitative modelling can be used to create ‘baseline projections’ that can feed into a foresighting exercise called ‘trend impact analysis’.

Forecast failures are common: the OECD’s one-year GDP forecast for 2008–2009 was wrong, and three-monthly oil-price forecasts have been repeatedly wrong during falling-price phases because the same underlying ‘mental model’ was reused — it is often difficult to change those mental models, so you tend to repeat your failures several times before you are forced to act. Solar energy production has been forecast many times, by many organisations and individuals, and the range of predictions is huge. So the assumptions underlying these quantitative predictions must be examined.

Do not use quantitative models blindly.

Successful leadership and decision-making are required to shape the future. You want to shape the future because this is the best way of getting to your desired future, i.e. achieving your strategic vision.

This requires us to combine creative and analytical thinking.

It is not easy or natural for most of us.

A good way of thinking about foresight is that it is composed of three components:

  1. Trends — long rolling processes (think big waves as you swim to shore), ageing population, environmental degradation, climate heating, ageing infrastructure…
  2. Events/ideas ‘coming at us’ — cyber systems, big data, CRISPR technologies…
  3. Decisions we make — this defines our ‘degree of agency’; it is worth thinking explicitly about this (I never have).

Foresight tools

  • essential for strategic insights
  • providing space for discourse
  • bringing attention to ideas, trends, developments that are not in daily sight
  • However — this is mostly about a process, a journey on which you take clients from the ‘familiar’ to the ‘unfamiliar’, moving them and yourself from comfortable zones of discussion to areas of discomfort.

Foresight, as a process of thinking, challenges conventional thinking, and it involves four stages:

  1. scoping — a divergent exercise, creating choices
  2. ordering/prioritising
  3. implications — a convergent exercise, making choices
  4. integrating futures

We should not mix divergent and convergent exercises, because we don’t want to start reducing options before we have got them all out on the table!

Creating impact with foresight, like any other decision-support activity of importance, takes time (you need thinking time and time to work with your clients, be they internal or external), resources (staff time in particular) and political capital (to bring people and organisations along with you — or even to the table for the first time).

The client’s perspective

We had an interesting panel discussion providing foresighting experience and perspectives from Nigeria, Finland and Wales. Random snippets that resonated with me:

  • getting foresight going in your organisation is hard and slow (see ‘creating impact’, above) — this matches my experience
  • need to focus discussion rapidly on ‘our beliefs’ (our hidden mental models), as these are what can hold us back, and if people open up we can have a better conversation
  • need to be client-centred — don’t own the client’s challenges, keep them engaged and leading — whatever it takes
  • making change requires some base-lining exercises at some point in the development process — this can be quite confronting when it appears you are not making progress
  • anecdote — Einstein was asked how he would spend his time if he had only one hour to solve a critical problem. He replied that he would spend 55 minutes developing the question and 5 minutes trying to answer it.

A quick discussion around the table revealed that making change is challenging when:

  • there aren’t clearly defined goals
  • mental models remain hidden
  • there isn’t any ‘space’ for discourse in the decision-making process, which makes change hard to shape and focus in a useful way.

It seems that these problems are common to most organisations, and they often relate to leadership style. For example, a leader may say that they “will not take this legislation forward unless you can explain it to the public”. Without a process for building a common discourse between ‘they’ and ‘you’, you can get stuck quite quickly.

  • Foresighting must focus on the problems that you cannot afford to get wrong
  • Foresighting is a ‘learning by doing’ process
  • There are cultural differences in the way we see and discuss the future

A plenary session reminded us that you need to design insights work to have impact right from the beginning, by aiming it squarely at the choices the client is going to have to make at some point in the future. Don’t assume that clients know what the right questions are… (right = important).

“Foresight is about better decision-making today for a better future tomorrow”.

Scoping

Scoping is the critical start of each project. It is important to establish an appropriate client governance structure. This stage shapes the entire project: get it wrong (as in all project management) and you will struggle to deliver.

Stakeholder mapping and analysis — classic tools to look at who has the power to get things done or block change, and what their attitude to change might be…
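Purely as a toy illustration (nothing we did at the retreat; the stakeholder names, scores and threshold below are invented), that power/attitude grid might be sketched like this:

```python
# Toy illustration only: stakeholder names, scores (0-10) and the threshold
# are invented; the real exercise is done with judgement, not code.
stakeholders = {
    "Council leadership": {"power": 9, "attitude": 6},
    "Frontline teams":    {"power": 4, "attitude": 8},
    "External regulator": {"power": 8, "attitude": 3},
    "Community groups":   {"power": 3, "attitude": 9},
}

THRESHOLD = 5  # arbitrary cut separating 'high' from 'low'

def quadrant(scores):
    high_power = scores["power"] >= THRESHOLD
    supportive = scores["attitude"] >= THRESHOLD
    if high_power and supportive:
        return "engage closely (potential champions)"
    if high_power:
        return "manage carefully (potential blockers)"
    if supportive:
        return "keep informed (allies with little leverage)"
    return "monitor"

for name, scores in stakeholders.items():
    print(f"{name}: {quadrant(scores)}")
```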

Foresighting is done for one or more of the following reasons, which are worth bearing in mind when setting out the scope:

  1. Strategy development: contextual intelligence, vision and mission development
  2. Decision support: stress testing existing or designing new options
  3. Anticipatory capability: development of strategic awareness, increase bandwidth and diversity of mindsets, create indicators and signposts
  4. Social capital building: new relationship building, values and alignment work
  5. Influencing: setting an agenda, explaining choices

The question is everything (and anyone who knows me knows that this is my mantra! See — other people, including Einstein, believe(d) it too…). Getting the right question prevents costly mistakes.

A highly recommended read was Kees van der Heijden’s Scenarios: The Art of Strategic Conversation.

Horizon scanning

The process of seeking out ‘signals’ and stories of new and different futures arising. There are a number of frameworks around that help you avoid blind spots by reminding you to scan more widely. For example, STEEP — social, technological, environmental, economic and political — carves out often overlapping, but sometimes distinct, areas of reading. What we are most often searching for are potential ‘drivers of change’ in any of these areas. You might start by searching for information traditionally produced by organisations you have the most direct relationships with, but you need to go beyond that: look at your wider sector, and almost certainly wider than your sector, to get a bigger picture of what might be shaping the future of your sector from outside.
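As a minimal sketch of what a STEEP-tagged scanning log could look like (the signal entries are invented; only the categories come from the framework), tagging each signal by category makes blind spots easy to spot:

```python
from collections import Counter

# Minimal sketch: scanned 'signals' tagged with a STEEP category.
# The signal entries are invented; only the categories come from STEEP.
signals = [
    {"title": "DIY CRISPR kits sold online",        "category": "technology"},
    {"title": "Ageing population in coastal towns", "category": "social"},
    {"title": "Heat-related hospital admissions",   "category": "environment"},
    {"title": "Gig-economy growth in care work",    "category": "economy"},
]

STEEP = ["social", "technology", "environment", "economy", "political"]

coverage = Counter(s["category"] for s in signals)
for category in STEEP:
    count = coverage.get(category, 0)
    note = "" if count else "  <- blind spot: scan wider here"
    print(f"{category:12s} {count}{note}")
```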

Scanning process

  1. desktop research
  2. conversations with your network
  3. conversations with specialists
  4. future-facing interviews with clients’ staff
  5. field trips to ‘docking points’, i.e. places where you can connect with people involved in weak-signal developments (e.g. developing DIY CRISPR kits)
  6. review

Not all futures projects will allow you to run the full range of techniques — just know that you cannot do it all from the desktop. Get out and talk to people whenever you can. Stay engaged with people outside your local networks.

Your scoping of the scanning will also direct you down certain paths: a targeted search vs an open search; conversations with, and a focus on, expert opinion vs crowd-sourced opinion; official vs social media. And time horizons — 10, 20, or more years into the future?

Exploring trends in the drivers of change

A key bit of information that seems obvious once you are told it: trends become less reliable the further out you look. Weak signals that you might note now could go anywhere in 20 years, but they still might be useful within a 5-year time frame. But you need to look for and find them first!

There are interesting developments in using predictive data analytics as an input to the foresight process, and we had an evening session discussing this. Methods and practice are only just starting to develop here, but in my view this is just another way of trying to identify ‘signals’. It has the advantage of providing insight from what is happening on the ground around you now. It also has the potential downside of drawing too much attention to signals that are close at hand but unlikely to have impact across or beyond a sector. It is worth exploring if you have the time and resources.
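The session didn’t prescribe a specific method, but as one purely hypothetical sketch of what ‘analytics as signal input’ might look like, you could count how often candidate terms appear in your scanned documents each year and flag the ones whose mentions keep rising (the corpus snippets and terms below are invented):

```python
# Hypothetical sketch only: flag candidate 'weak signals' whose yearly mention
# counts keep rising. The corpus snippets and terms are invented.
corpus = {
    2017: "genomics pilot announced; telehealth trial in one practice",
    2018: "telehealth rollout; genomics service; first synthetic biology meetup",
    2019: "synthetic biology kits on sale; telehealth at scale; synthetic biology startups",
}
terms = ["telehealth", "synthetic biology", "genomics"]

def yearly_mentions(term):
    return [corpus[year].lower().count(term) for year in sorted(corpus)]

for term in terms:
    counts = yearly_mentions(term)
    rising = counts[-1] > counts[0] and all(a <= b for a, b in zip(counts, counts[1:]))
    flag = "  <- rising: worth a closer look" if rising else ""
    print(f"{term}: {counts}{flag}")
```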

For the UK health sector this has intriguing potential: connecting Population Health Management and its nascent data science capability with the practice of foresighting could truly supercharge strategic capability. One for another blog, I think!

Generating alternative images of the future — Scenario methods

Scenario development is particularly useful when one or more of the following is happening:

  1. accelerating technological change
  2. increasing complexity
  3. genuine uncertainty

Scenarios might be considered to be ‘pathways’ to get from ‘facts’ you might have at hand today to ‘the future’ you want to explore. Interestingly, it was suggested that this pathway isn’t about the future. There are many possible futures (as far as we know); rather this pathway is about shaping or creating a future, something we can work with in the ‘here and now’. It is a learning by doing process.

If you create several plausible scenarios of the future, you are learning ‘by doing’, and you are learning about what those futures may look like.

Hand-wavy, I know. But if you are still with me, hang on a moment longer.

Say for example, that you have four scenarios and everyone loves one of them. Everyone in the room is fairly confident that is where you are heading (whether it is desirable or not). Did you waste time developing the other three; do you leave the room and never work with the other three scenarios; do you stop sharing them with others who weren’t in the room?

No.

If you did any of these things — the learning stops. Learning, preparing and actively working towards your preferred future is a process. If you don’t embed this within your business processes/cycles then you are doing ‘one-off learning.’

Wildcards. You can always miss something. Weak signals, wildcard events or new developments can surprise you. They may not invalidate your scenarios — if you still have your scenarios then you have a touchstone, something you can refer to, update, bump up against. If you drop them when you find a ‘preferred scenario’ then you’ve discarded hard-won knowledge. And who knows, one of those less favoured scenarios… might just be the one that eventuates. Nice to have that in your back pocket.

Scenarios aren’t predictions.

To me they seem much more like logic models that you’ve sketched out to see whether something seems plausible, passes the ‘sniff test’, and if so what it might look, smell, feel and taste like.

Don’t discard that good thinking time you have invested — there isn’t enough of it to go around. Cultivate it. Stress test those ideas. Hold them up to the light of new information… wait, I think that’s what is on today’s agenda!

What we did

The Clown was at it again on both days 2 and 3. The Crazy Chicken. You can’t make it up, but it did wake us up. Public Health Dorset needs a clown. Our councils need official clowns.

90-second exercise: when have you experienced failure in predicting important events or future states (discussed with the participant next to you)? My discussion demonstrated that neither of us had attempted to do this for anything we’d then had an opportunity to evaluate (e.g. I’ve done work on climate change futuring, but it would be too early to say whether this was a failure or a success… whew).

However, we did discuss (1) that it would be a good idea to incorporate this into a routine process or we’d never gain experience, and (2) that there is very little ‘thinking’ time in the workplace to incorporate this, but we are going to have to find a way… 90 seconds up.

Contracting: A very brief session where we set out how we would work together. Behaviours we wanted to model. A very client-centred activity. Essential. Ignore it at your own peril, because this is a very people-centred pursuit. You need the client as much as they need you.

Scoping a futures project: We identified the stakeholders, timeline, risks to the project and governance structures. The client’s question (see my previous article) was: “In 2030, how will digital technologies be used and governed in pursuit of the common good?” This was broken down into four sub-areas:

  1. Governance, public service, surveillance and security
  2. Economies dependent on data and digital
  3. Communities, society and civic voice
  4. Changing global order

Our group got sub-area #4, and our scoping effort refined the client’s question to: “Given the multi-polar, bifurcating world [in terms of the actors that have power], how are digital technologies shifting balances of power [between these actors — traditional and non-traditional]?” We got here primarily by brainstorming what the drivers of change might be, using the STEEP categories (society, technology, economy, environment, politics) as a rough guide.

Drivers of change: We identified a number of potential drivers of change through a quick, silent brainstorm, dumped them onto post-it notes, and then discussed them.

Prioritisation and Clustering: Taking our drivers of change, we organised them on a two-axis grid of IMPACT and UNCERTAINTY. We drew a horizontal line to segment the drivers by an arbitrary level of impact — those below the line we ignored for now. We then divided the remaining drivers (those above the line) by their level of uncertainty.
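A minimal sketch of that sorting step (driver names and scores are invented; in the room this was done with post-it notes and judgement, not code):

```python
# Minimal sketch of the impact/uncertainty sort. Driver names and scores (1-10)
# are invented; the real exercise used post-it notes and group judgement.
drivers = [
    {"name": "State use of citizens' data", "impact": 9, "uncertainty": 8},
    {"name": "Quantum computing maturity",  "impact": 7, "uncertainty": 9},
    {"name": "Smartphone penetration",      "impact": 8, "uncertainty": 2},
    {"name": "Niche cryptocurrency fads",   "impact": 3, "uncertainty": 9},
]

IMPACT_CUT = 6       # the horizontal line: ignore low-impact drivers for now
UNCERTAINTY_CUT = 6  # splits the rest into 'fairly predetermined' vs 'critical'

high_impact   = [d for d in drivers if d["impact"] >= IMPACT_CUT]
critical      = [d["name"] for d in high_impact if d["uncertainty"] >= UNCERTAINTY_CUT]
predetermined = [d["name"] for d in high_impact if d["uncertainty"] < UNCERTAINTY_CUT]

print("Critical uncertainties:   ", critical)
print("Relatively predetermined: ", predetermined)
```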

Critical Uncertainties: Those drivers of change that had both high impact and high uncertainty were subjected to further scrutiny. Again, the dividing level of uncertainty was somewhat arbitrary, because we were simulating a process of discovery without the domain knowledge in the room that there would be when working with a client. Only 4 drivers were classified as critical uncertainties — ‘good enough for government work’ (possibly my favourite saying when we make choices for which we have limited, if any, evidence).

Building Scenarios: The first step is to create a ‘deep structure’ by taking your critical uncertainties and developing their polar opposites. For example, Effectiveness of Actors to use individuals’ data was a critical uncertainty of interest to us, and the polar opposites were Effective and Not Effective. Not rocket science. Likewise, Agency of individuals to control their data had the poles of High and Low Agency.

This allowed us to develop Deductive Scenarios with four quadrants (see the small sketch after this list) defined by:

  1. Effective actors and High Agency individuals
  2. Effective actors and Low Agency individuals
  3. Non-Effective actors and High Agency individuals
  4. Non-Effective actors and Low Agency individuals
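That quadrant step is simply the cross product of each critical uncertainty’s two poles. A tiny, purely illustrative sketch using the two uncertainties from our exercise:

```python
from itertools import product

# Tiny illustrative sketch: the 'deep structure' of deductive scenarios is the
# cross product of each critical uncertainty's two poles.
uncertainties = {
    "Effectiveness of actors to use individuals' data": ["Effective", "Not effective"],
    "Agency of individuals to control their data": ["High agency", "Low agency"],
}

for i, poles in enumerate(product(*uncertainties.values()), start=1):
    print(f"Scenario {i}: " + " / ".join(poles))

# Scenario 1: Effective / High agency
# Scenario 2: Effective / Low agency
# Scenario 3: Not effective / High agency
# Scenario 4: Not effective / Low agency
```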

From these quadrants we developed ‘use cases’ and created the beginnings of a story around each one. From this we created a simple narrative describing what we thought to be a plausible story, based on a very simple system:

AI develops -> Impacts population -> Actors respond, and then back to technological (AI) developments.

We used a simple, collective three-word storytelling method to quickly bash together a few ideas (time pressures). More on this tomorrow.

What were my takeaway messages?

Good leaders manage ambiguity.

Good decision-support (or public health intelligence) comes from the co-development of ‘strategic insight’ (systems, foresight, people) with leaders, to better understand, and indeed develop a shared understanding of, the complex world in which we live.

The answer is never 42, and we all need to acknowledge this.

A version of this article was originally published at https://www.publichealthdorset.org.uk on August 8, 2019.
