We’ve achieved a lot: but big questions still remain

So, two years on, I’m moving on from my role in Civil Service Learning.

I feel like we’ve achieved a lot. But this blog isn’t about what we’ve achieved.

This blog is about a big question that we haven’t yet managed to answer. I’m not too disappointed. As far as I can tell, it’s a question that the Learning and Development sector as a whole hasn’t answered. Nor can I even find a set of failed attempts to answer it.

Do leadership programmes make a difference?

In today’s Google world, the first stage of understanding is to head straight to Google. The first result takes us back to a Forbes report from 2012. It doesn’t seem to contain much evidence. In fact, the only study referenced appears to have a sample size of 11 people. Most economists would throw that evidence away within seconds.

Randomised Controlled Trials… where are they?

Any analyst worth their salt will know that the best way to work out the relationship between an intervention (doing some training, reading something, doing something) and the outcome (being better at your job, if that is the preferred outcome measure) is to run a Randomised Controlled Trial. These used to be pretty rare in the public policy field, as confident leaders frequently feel that they are right and do not need to test that assumption. The Behavioural Insights Team has changed all of that, making RCTs more common right across the board.
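To make the logic concrete, here is a minimal sketch of the idea behind an RCT, using entirely made-up numbers. The "capability score", the true effect of 5 points, and the sample size are all hypothetical assumptions for illustration; the point is the method: because assignment to the training is random, a simple difference in mean outcomes estimates the causal effect.

```python
import random
import statistics

random.seed(42)

# Hypothetical illustration: simulate an RCT for a training intervention.
# All numbers here are invented; only the method matters.
TRUE_EFFECT = 5.0   # assumed true uplift in a 0-100 "capability" score
N = 2000            # number of participants

# Randomly assign each participant to treatment (training) or control.
assignment = [random.random() < 0.5 for _ in range(N)]

# Simulate outcomes: a noisy baseline score, plus the effect if treated.
outcomes = [
    random.gauss(60, 10) + (TRUE_EFFECT if treated else 0)
    for treated in assignment
]

treated_scores = [y for y, t in zip(outcomes, assignment) if t]
control_scores = [y for y, t in zip(outcomes, assignment) if not t]

# Random assignment means the two groups are comparable on average,
# so the difference in means is an unbiased estimate of the causal effect.
estimated_effect = statistics.mean(treated_scores) - statistics.mean(control_scores)
print(f"Estimated effect: {estimated_effect:.1f} (true effect: {TRUE_EFFECT})")
```

The estimate lands close to the true effect, something no before-and-after comparison of volunteers on a course can guarantee, because volunteers who choose training differ from those who don’t.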

RCTs are frequently argued against on the basis that it is not equitable to deny some people an intervention that could work. It’s an odd argument. Is it fair to give people an intervention that might not work? With scarce resources, surely it is right to make sure we are giving people the right medicine?

The education establishment seems to have moved beyond this debate in children’s education, with the creation of the Education Endowment Foundation, which builds an evidence base of ‘what works’ in the sector. That organisation, coupled with the various ‘what works’ centres, shows a stronger desire to link together inputs, outputs and outcomes.

I can’t detect a similar desire in the corporate Learning and Development field. In fact, I haven’t come across one significant, sizeable RCT that does it well. There are a few that attempt it. But a key principle of evidence-based policy making is impartiality: if I’ve made something myself, I’ll want it to work, and the Hawthorne effect might be at play. Someone impartial will evaluate it far better.

Deliberate practice

Something else spooked me. Whilst on a long bus journey in Mexico, I was listening to a Freakonomics podcast that was discussing “deliberate practice”. It’s a good listen, drawing on 30 years of academic research by a psychologist called Anders Ericsson. I admit that I haven’t read all of his research papers, but he certainly sounded credible.

Why was I spooked? Well, I’d never heard of him. Yet some high calibre US economists were talking about him.

From my short stint in the L&D sector I had heard of the famed Kirkpatrick model and the 70/20/10 model. I’d heard plenty about both. The second claims to describe the best mix of how people learn. How did the researchers come to that conclusion? They asked 200 executives how they thought they best learnt! The obvious criticism is the lack of empirical data behind the model.

What really spooked me was that I’ve met many L&D experts in my two years in this role. Not one of them had mentioned the concept of ‘deliberate practice’.

So that leads me to think that the sector needs to do more with academia to work out what works and when.

Corporate athletes

‘Deliberate practice’ resonates with me. If you want to get better at something, do it. Lots. Learn about it. Apply your learning to doing it. See how it’s going. Reflect. Then learn more. Keep adjusting. That’s how most people improve and that’s supported by the concept of ‘practice’.

But how many leadership programmes provide deep interventions to support this? Not many.

One reason for this is that the concept of the ‘corporate athlete’ is so different from the typical learning that happens. Traditional athletes spend the vast majority of their time practising and resting; the real events come up at irregular intervals. For corporate athletes, there is no time to practise. It’s all real. The failure that practice requires is not encouraged. There simply isn’t time to practise like athletes do.

It’s true, though, that the 70/20/10 model actually supports deliberate practice (or at least isn’t contrary to it). The 70% is on-the-job learning; only the 10% is formal training. But how many organisations actually have L&D teams in place who supply interventions to support that 70%? Not many, I suspect. The closest is probably the model of executive coaching. But the vast majority of L&D spending is, I think, focussed on the 10% that it can control.

But there could be so many more interventions that help build that 70%. There are some innovations around (the DWP digital academy being one of them) but these are often seen as ‘niche’ and not applicable across all capability areas (‘it’s easier to build a site than it is to deliver a change programme’ is language often heard).

Beliefs

I declare an interest. I have my own belief systems at play here.

I doubt that a day’s training course with some e-learning will change capability. I suspect that coaching might be as much about therapy as it is about building capability. I believe that HR teams will focus on ‘bums in seats’ in the absence of any proper measure of capability, in order to show themselves doing a great job. I believe that there are better ways to help people deliberately practise at work, but these require radically different models from many L&D teams and contracts. I believe that L&D might be as much about staff engagement as it is about capability. I believe that different people respond to different interventions in completely different ways.

But, these are only my beliefs. I could be right. But I could be very wrong.

What I’d love to see is the sector start to grapple with some of these hypotheses so that it can understand ‘what works’, when, and in which circumstances, with the answers established in a robust, evidence-based, impartial way that stands up to scrutiny.

In the absence of that, poor providers will remain and the stars in the sector will fail to blossom. This is the one thing that I wish I had made more progress on. If the stars ever align around a consensus to look at this, I’ll be there.

Stu Bennett