How to make our learning programs (even) better?

Four questions to grapple with and four successes to build on.

Nicole Barling-Luke
States of Change
9 min read · Jan 10, 2020


We recently published an evaluation of the States of Change learning program we ran in 2019. Put together by PaperGiant, it has a lot of rich insight, and if you are interested in capability initiatives of any form, I really encourage you to give it a read.

I would say that, though: I was the program manager, and it was my baby being evaluated. And it’s got me thinking, both about how to make the program better and about the future of ‘innovation learning programs’.

It’s daunting sharing my thoughts publicly. But I want to hear what questions and challenges you are facing, and what shared experiences or tactics have been working for you too. And I can’t do that without sharing the questions I’m grappling with.

Quick context: the program was a 6-month team-based learning program. Teams worked together to develop innovation capacity and capability via real-life government projects. It was learning by doing. We had a mix of face-to-face training and a significant amount of time spent learning as they worked on their projects. We supported them as they integrated these new skill sets into their existing environments.

Questions I’m asking myself:

The small fence between us and where our learners were. Photo by Sylwia Bartyzel, Unsplash.

1. Did we lose a bit of empathy for where our learners were at?

The evaluation said: Participants were often caught between two worlds: the new innovation practices and their operating environments. Barriers within their organisation (often at the executive level) were usually the reason they lost momentum or opportunity.

Sometimes there’s just a lot of tensions to hold.

I’m reflecting: There were inevitable tensions. Between the desire to deliver now! and the need to hold back and trust the process. Between the time scale of learning — slow at first, accelerating as we went on. Between putting something out there unfinished and waiting until it’s perfect. Sometimes this tension is unbearable.

As a program and faculty team, our bread and butter is recognising and tolerating these tensions. We exist in a culture where iterative development, experimentation, vulnerability, testing and failing forward are normal, but no less difficult. The teams we work with operate in a different culture. And we might have inadvertently become part of the ‘how to draw an owl’ meme:

I think we might have lost a bit of empathy for that and overlooked some easy things to make it simpler to traverse between those two cultures. It prompts questions for me like:

  • Who else in your environment needs to know what and how you’ve learnt? And what does learning look like as it’s happening?
  • How can we help to better articulate the change journey you’re on, for you, given you are right in the middle of it?
  • How do we make the most of this program as ‘protected space’ to demand a shift in expectations?
  • How can we support you to make the case for more space and time?

Any examples of how to make this transition and translation easier are welcome!

2. How do we value emergent goals while supporting those in a KPI heavy environment?

The evaluation said: co-created development plans and learning objectives, agreed between the teams, the program and their executives, could ease the apprehension and risk of participating in the program.

I’m reflecting: we took for granted what we think of as ‘good’. Shifting towards more effective behaviours and practices requires some recognition of, and aspiration for, what the end state is. We know this — we created the competency framework below! We then published an accompanying competency framework guide to get the most from it!

Nesta’s competency framework

Yet we didn’t set out individually “what good looks like for you is X” and revisit that again and again over the program.

At the beginning, we were grappling with how the teams would create self-directed development plans if they didn’t know what good looked like. We decided to let it emerge instead. In the spirit of experiential learning (a core pillar of our pedagogy), we’d ensure reflection on the growth they’d undergone (e.g. Kolb’s cycle).

This worked well. Much of the feedback reflects the complexity of the learning and how it couldn’t have been captured in specific narrow learning goals.

However, did we cause levels of anxiety over and above what was needed when going through inherently uncomfortable phases of unlearning and learning? Could we have better given them ‘a’ path to head towards and adapt as they gathered more information about what good looks like for them and their context?

We wanted the learning to be emergent and embodied and not treated like a ‘tick-box’ exercise, but it’s hard to develop that without a map of where you’re going. The trade-off will be the other way around next time.

3. How can we better focus on people’s strengths?

The evaluation said: often the skills most valued were not ‘new’, rather the program refined and gave permission to existing skills and enabled them to flourish within a team environment.

I’m asking: How might we shift the narrative to include a bit more kindness while keeping a sense of urgency for the necessary shifts ahead?

There is undoubtedly lots of expertise and latent potential in the public service. That’s clear in the evaluation and my experience of the program too.

So how do we maintain a burning platform with statements like “we have 19th Century institutions responding to 21st Century problems” while also encouraging, supporting and enabling the talent already inside the sector?

A desire for learning and adopting new practices can often be rooted in a deficit model. But it needn’t be.

How often do you hear something like, “we need X because there are fewer resources; we need to do Y because services are failing the community”? I agree that all this is real, present and necessary. But the recent program suggests that much of the impact we had was in refining, structuring and giving permission to skills, capabilities, relationships and connections that already existed. It wasn’t creating something entirely new; it was tapping existing dormant talents.

As a program, we demand high standards of ourselves and those who participate. Higher, perhaps, than what we can achieve. I’m thinking: how might we set the ambition high while recognising that what we need is what we already have?

4. Do we know enough about how learning transfers from individuals to teams to organisations?

The evaluation said: we are yet to see strong evidence of sustained organisational transfer and adoption from team-based learning.

I’m reflecting: whether our evaluation measures are the right way of paying attention to a broader organisational shift. We still don’t think we’ve got this nailed. And many others are asking this question.

Still, we can’t ignore that we’re coming up short and need to either readjust expectations or be explicit about what is an aspiration (which is notoriously difficult to measure).

Things I am encouraged by and want to build on:

It takes practice to build your practice

The evaluation said: participants found it necessary to have a ‘push beyond what you’re comfortable with’ and that being given permission to try something new with a structured process ensured a focus on action. They were forced to learn new practices, rather than absorbing the information in a more intellectual or passive way. Captured nicely by Dale’s cone of learning:

Inspired by Dale’s cone of learning. From Nesta’s innovation playbook.

What might have contributed to this: having real projects to work on, connected to our participants’ day jobs, means the program never gets caught up in theoretical or hypothetical situations; instead, lessons are rooted in real-world consequences. We also have the privilege of working with these teams for over six months, and it takes time to adopt new practices and unlearn previous behaviour.

And now: How else might we create opportunities for practice-based learning inside other ‘containers’ now that the program is over? This group had the excuse of the program to insist on action, with risk somewhat mitigated by expert facilitators guiding them through. How else might we create those conditions, without the formal structure and investment of the program, so that practice and unlearning/learning can continue?

There’s no avoiding that reflective practice is best practice

The evaluation said: participants found enormous value in the protected time and space to reflect. This enabled them to better identify biases and alternative routes to their challenge. Building a regular reflective practice helped them ask better questions and resist the urge to jump to action unnecessarily.

What might have contributed to this: Reflective practice is a pillar of the program design.

When you reflect, you question yourself, others, the processes, the systems — this seeding of doubt can lead to alternatives, by eliciting the ‘what if’ questions fundamental to experimentation.

We model whole-of-self reflection during the program through regular check-ins and report-backs, and we share tools for teams to integrate reflective questions into their own rhythms. This is all part of double-loop learning, outlined in Kelly’s fantastic post on reflection.

Double loop learning — adapted from Argyris and Schon (1974) Theory in Practice

And now: How might we advocate and make more visible the value of reflective practice and help this group, and others carve out time to meaningfully reflect?

The tension between ‘being still’ and ‘moving’ is a hard one to balance (particularly for increasingly time-poor public servants). In what ways can we better value the oscillation between the two states and transpose it into the everyday practices of workplaces?

Connection to purpose

The evaluation said: Taking a strengths-based approach and drawing on whole-of-self reflection exercises built a (re)commitment and (re)connection to the public service and the delivery of public value.

The conditions that contributed to this: Going back to the ‘why’ (constantly), at both the project level and the whole-of-self level, led almost all of our teams to think about their role as public servants, their role as change-makers, and how their project was modelling the bigger change they wanted to see in government.

And now: We never explicitly set out for this to be a program that prompts a re-identification with the purpose of the service, yet repeatedly our feedback suggests it is. This is a consistent and exciting emergence and rings true with Adrian Brown’s post ‘On being and doing in Government’, where we shift our focus from ‘what we do’ to the underlying assumptions that drive ‘why we do it’.

Doing describes “what” whereas Being explains “why”. Image by Alex Carabi

I’m going to find out what others are thinking about these questions:

  • Does experiencing other, alternate, viable ways of doing and being in government enable a richer connection into the purpose of the service?
  • Does an increase in sense of agency enable a more positive identification with the purpose of the service because they now feel enabled to better enact their contribution to that purpose?
  • Does closer proximity to users, citizens, stakeholders directly correlate to increased sense of purpose and agency?

Being vulnerable and trusting

The evaluation said: Participants experienced ‘ah-ha’ moments through struggle, through stripping away personal biases, through getting to a place of vulnerability and not-knowing — which is inherently uncomfortable.

What might have contributed to this: We have the privilege of working with the group for over six months. This, and the experience and charisma of the faculty, allowed a lot of trust to develop quickly and meaningfully. I would also say that, on the whole, everyone in the group wanted to be there; they leant into the uncomfortable parts of the program wholeheartedly. We can create the space and a path outside the comfort zone, but ultimately it’s up to them to go there. I’m hugely grateful for just how much effort they put into this.

And now: For learning to be effective and sustainable, it requires a deep loop of unlearning what you thought was true. I don’t take for granted how hard that is, and I’m interested in learning more about how to ‘let go’ of old beliefs in the learning/unlearning cycle and, inspired by Cassie and Penny in particular (two of our guest faculty), how to do that with care.

You made it to the end! Why not sign up to the monthly States of Change newsletter for articles, analysis and stories from a community of public innovators. We promise to respect your efforts to get to inbox zero. 🚀
