Optimising organizational behaviour with the critical mind of Koen Smets

Koen Smets is an engineer turned behavioural change specialist who has been helping organizations deal with change and become better at what they do for 25 years. He works in industry sectors ranging from technology and healthcare to consumer goods, as well as in the public sector, and this has given him deep insights into the fundamental similarities, as well as the rich diversity, of different organizations.

His approach to Organization Development reflects his fascination with human behaviour and decision-making. In particular, he applies elements from economics, behavioural economics, and behavioural science in general to understand the underlying forces in organizational behaviour, and to develop interventions that bring about successful organization change.

Koen is known on Twitter and Medium as @koenfucius

Arjan: So you are a management consultant; why do you love experiments?

Koen: Yes, I have been a management consultant for longer than I like to remember. It’s a profession that some regard as being somewhere at the level of lawyers, bankers and real estate agents in terms of status. :-)

It’s funny you frame the question like that. I hadn’t really thought about this before, but you make me think that perhaps my profession has played a role in developing my interest in experiments. I wouldn’t necessarily say that I *love* experiments, but I think they are underrated in organization management in general. For much of my early career, we used to use off-the-shelf approaches, almost always based on some hyped-up book that we all had to read and then treat as if it were the gospel. Initially you don’t know any better of course, and you believe that’s the way it works. Some guru comes up with a new idea, sells lots of books, and you then simply apply what the book says to the best of your ability, because that’s apparently what the client wants.

“Some guru comes up with a new idea, sells lots of books, and you then simply apply what the book says to the best of your ability, because that’s apparently what the client wants.”

But soon I (and many of my colleagues) started questioning the wisdom of blindly following these precooked techniques, and looking more critically at whether, when, and how they really worked. I now no longer work for a large consultancy, so the environment in which I do my job is very different. But the experience from back then continues to shape my client work. The emphasis on first defining the problem without any solution in mind; then crafting a solution, not from a monolithic, predefined protocol but, like an artisan, using the tools that are most appropriate for the task; and finally during the execution, as much as possible testing different solutions and seeking robust feedback and evidence — it all goes back to the scepticism of panacea solutions that I developed as a young consultant.

I have to say that experimentation is not always easy in my job, though. There are often not enough people to properly randomize groups, and there are many facets of an organization that might affect the outcome of an intervention — from the personality of a department head and the nature of the work to the length of tenure and the level of education. But the thing is that you can still apply the experimental mindset.

Be clear about what you expect to change, make sure you have a good, objective description of the current situation (quantifying it as much as possible), and describe clearly the intervention you’re planning and the hypothesis of what will change and how. And then when you apply your intervention, look critically at the outcomes. The best way to combat confirmation bias, the worst of all biases, is to look for signs that the intervention does not work, or that any positive effects are not the result of the intervention.

“The best way to combat confirmation bias, the worst of all biases, is to look for signs that the intervention does not work, or that any positive effects are not the result of the intervention.”
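The experimental logic Koen describes can be sketched in code. The following is a minimal permutation test (the data and function are hypothetical illustrations, not something from the interview): it asks how often a difference at least as large as the observed one would arise if group labels were assigned at random, an approach that still works for the small groups typical of organizational interventions.

```python
import random

def permutation_test(treatment, control, n_permutations=10_000, seed=42):
    """Share of random relabellings whose mean difference is at least
    as large as the one actually observed (a two-sided p-value)."""
    rng = random.Random(seed)
    observed = sum(treatment) / len(treatment) - sum(control) / len(control)
    pooled = list(treatment) + list(control)
    hits = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # randomly reassign group labels
        t, c = pooled[:len(treatment)], pooled[len(treatment):]
        if abs(sum(t) / len(t) - sum(c) / len(c)) >= abs(observed):
            hits += 1
    return hits / n_permutations

# Hypothetical engagement scores after an intervention in one team
treated = [7.1, 6.8, 7.4, 7.9, 6.5, 7.2]
comparison = [6.2, 6.9, 6.4, 7.0, 6.1, 6.6]
p_value = permutation_test(treated, comparison)
# fraction of shuffles at least as extreme as the observed difference
```

A small p-value here would only tell you the difference is unlikely to be pure chance; it says nothing about whether the intervention, rather than some confounder, caused it — which is exactly why the critical look at outcomes matters.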

The key thing is to not take anything for granted. Of course, the more experienced you are, the more you will be able to recognize certain organizational patterns and zoom in on the kind of interventions that are more likely to work. But every organization is different in so many ways, and so whatever you do, you always need to tailor it, and that means trial and error. And that’s also a form of experimentation! :-)


Arjan: Nice! We share similar ideas about what experimentation is and why it’s useful to organizations. Breaking the habit of following fads in business and focusing on the real impact of your ideas is something that resonates with me.

But it makes me wonder why there is so much incentive to follow fads and so little incentive for a more sceptical approach to business.

As a longtime management consultant you probably have some sense of what motivates executives. Do you think experimentation is something that will be adopted by organizations in the near future? And could you elaborate on why, or why not?

Koen: I think one answer to why fads have such a following can probably be found in Richard Thaler’s summary of nudging in three words: “Make it easy”. All else being equal, people tend to do what is easiest. And let’s be honest, what is easier: starting from first principles, tailoring the known science into organization-specific hypotheses about staff behaviour and relationships at work, developing multiple interventions, designing experiments to validate them, and so on … or just adopting a fad? A second reason is the social proof that many fads carry — if the book sells well, and lots of other big organizations seem to adopt its ways, surely that’s enough of an endorsement? In a way it’s a modern variant on “nobody ever got fired for buying IBM”. The difference is that IBM made at least decent computers and software, whereas some fads have rather more questionable efficacy.

“It’s hard to say whether more experimentation will really become the norm in the near future.”

It’s hard to say whether more experimentation will really become the norm in the near future. The status quo bias — in this case going for the latest new hype — is powerful, and so if it happens it won’t be an easy evolution. But there is hope as well, I think. There is a small but growing group of people in the broad organization development/IO psychology/HR domain that is very critical of fads and that seeks to spread evidence-based management.

In my view, one of the possible catalysts for a positive change is also the interest in behavioural science and behavioural economics. I’ll be among the first people to warn against too much uncritical belief in the findings in this domain. Some studies have been found to be quite a bit less robust than was initially thought, and the recent replication crisis in the social sciences has not left it unaffected.

But for me, one of the most important consequences of the ascent of behavioural economics is not its content (which is not the new gospel), but the way in which it has led to a fundamental challenging of some of the long-held truths in mainstream economics. This represents a mindset of not simply accepting something because it has been accepted for a long time, or because it is accepted by a lot of people, but of demanding evidence. That critical mindset is very useful in a much wider sense.

And so behavioural science and behavioural economics could provide support in two ways.

  • First, it can provide a robust toolkit for understanding organizational behaviour, diagnosing dysfunction, and formulating interventions to address it.
  • Second, the critical mindset associated with it can help expose questionable fads and hypes.

I admit this is an optimistic view — other, more pessimistic views are possible. But I am an optimist at heart. :-)

“This represents a mindset of not simply accepting something because it has been accepted for a long time, or because it is accepted by a lot of people, but of demanding evidence.”

Arjan: I love optimists! I am one myself ;)

When I worked for Booking.com I was impressed by their flat structure. There was one manager for more than a thousand people, work was done in an agile way by self-organizing teams, and the innovation budget went straight to the senior product owners.

It was completely different from all the other companies I worked for.

Experimentation was important as well. It was actually the core of what they did. No long-term strategies, no opinions, just experiments that would be created every day to see if they could improve their products and services (mainly their website and app).

In large traditional organizations experiments can be a serious threat to the status quo, but in this case experiments were part of a very happy work culture. People were in charge of their work, not dependent on a manager who might or might not like their work. Individual experiments would have a positive impact or not, but the main performance indicator was “velocity”: the number of experiments the organization could run.

Could you comment on the positive impact experiments can have for the culture of an organization? And do you see any drawbacks?

Koen: I think experiments can be quite threatening. They have the power to undermine received wisdom, and sometimes managers rely on received wisdom, not just their own, but that of others in the organization, to reinforce and maintain their position.

I am hardly surprised that a relatively new company like Booking.com took to experimentation. There are at least two reasons for this, I think.

  1. One is the business they are in. Travel booking is an area in which lots of experiments can be run all the time: large volumes of customer transactions, relatively straightforward randomization and segmentation, and, perhaps most importantly, very quick feedback loops.
  2. The other one is that, as you say, it benefits from a non-traditional structure and culture, and that means that it’s a lot easier to adopt a kind of experimentation mindset.
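Koen’s first point, the quick quantitative feedback loop, is what makes online A/B testing tractable: with enough transactions, even small differences in conversion rate can be assessed quickly. The following is a sketch with entirely made-up numbers (not Booking.com data), using a standard two-proportion z-test.

```python
from math import erf, sqrt

def two_proportion_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # pooled rate under the null hypothesis that both variants convert equally
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Made-up example: 10,000 visitors per variant
z, p = two_proportion_test(480, 10_000, 540, 10_000)
```

The contrast with the organizational setting is the sample size: at tens of thousands of visitors a day, a test like this resolves in days, whereas an intervention on a department of twenty people never generates that kind of statistical power.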

But in other circumstances things may be a bit trickier. The more the senior people are entrenched in their position, the less inclined they will be to open themselves up to any intervention that might cast doubt on the way they have been running the show. And that is of course a potential consequence of experimentation and getting actual data.

Arguably, experiments are part of a wider “evidence-based” mindset, where people are prepared to question conventional wisdom. That is strongly linked with the culture of an organization. Organizations in which people are comfortable with being challenged, where colleagues play the ball and not the man, are more likely to offer an environment that is conducive to experimentation.

“Organizations in which people are comfortable with being challenged, where colleagues play the ball and not the man, are more likely to offer an environment that is conducive to experimentation.”

But as I said before, when the focus of change is not so much the commercial relationship with tens of thousands of customers, as with Booking.com, but the organizational relationships of departments and staff, it’s much harder to define robust experiments. That means you need to be more circumspect. Experiments may still be able to disprove a hypothesis, but you should be careful concluding too quickly that a hypothesis is proved by an experiment in this kind of context.

The replication crisis that has been raging through the social sciences domain for a good few months now shows how even proper, real scientific experiments are liable to be questioned. And if this is the case under controlled laboratory conditions, it is certainly no better in field experiments with small numbers of participants and huge challenges in separating treatment and control groups.

“That is why experimentation can never be the solution on its own.”

That is why experimentation can never be the solution on its own. You have to rely on the existing science (even if that too is not always 100% robust!), use your experience and insight to establish how best to apply it, and where appropriate, to establish what experimental approach to use to verify and validate what you are trying to do.

Arjan: I agree again! Note to self: maybe it’s good to interview someone next time who doesn’t share all your opinions… just to spice things up ;)

But let’s take a final look at this. Experiments, in my opinion, are “just” a tool within evidence-based management. As discussed earlier, the experimental mindset fits into a broader critical mindset in which people don’t take anything for granted.

Many executives have had an (Ivy League) scientific training, a training that fosters this exact same mindset. Why is it that this mindset doesn’t hold up in business?

Wasn’t the mindset taught right? Is there a role for universities here?

Koen: Heh heh — well perhaps I could try to be a bit more controversial… But seriously, I think the fact that we agree is probably not so surprising: if you have a scientific mindset, then seeking evidence is second nature.

Experiments may be “just” a tool, but they are a very important tool when there is no available evidence, or when there are reasons to believe it may not fully apply.

Your suggestion about the training of the management team is interesting. I have never really verified whether there is a correlation between the degree to which a company adopts an evidence-based mindset in management and the specialism of its bosses — that would certainly be an interesting thing to investigate!

But I think that one of the main reasons why you don’t often find a lot of scientific rigour in business management, even in high-tech companies that rely on science, is that management practice is seen as distinctly messy in comparison with hard, positive science and engineering. At first sight, it doesn’t fit a positive scientist’s or an engineer’s view of what can be scientifically investigated.

“one of the main reasons why you don’t often find a lot of scientific rigour in business management is that management practice is seen as distinctly messy”

There is an interesting parallel here with economics and behavioural economics. The neoclassical economics of the last half century has been criticized for regarding people as if they are always rational, self-interested, utility-maximizing economic agents — the so-called homo economicus.

That criticism is perhaps a bit unfair, but it contains enough truth to be quite valid. Thaler and Sunstein, who brought behavioural economics to a mainstream audience with their book Nudge, showed how the behaviour of real people, humans, can be quite different from the behaviour expected of econs. In the last 10 years or so, behavioural economics has successfully challenged some of the flakier assumptions in microeconomics, and there is a growing consensus that economics and behavioural economics will eventually come together in a new, more realistic, unified framework for describing, understanding and predicting behaviour.

I believe that organizational management is limping a bit behind in this respect. Many of the structural elements of an organization — roles and responsibilities, goals and targets, reporting lines and so on — assume implicitly that employees are rational, have access to all relevant information, and will automatically make the choices that are in the interest of the organization as a whole.

In the same way that neoclassical economics failed to face up to the messy nature of real people and their so-called irrational behaviour, and hung on to its rigorous models, much of current management practice is a bit deaf and blind to the messy nature of the interaction between people and departments and up and down the organizational hierarchy, and prefers to rely on structural measures to try and influence behaviour.

And to me it is more than just an interesting parallel. I have found the likeness between economics and organization management to be remarkably rich and useful in my work. A lot of organizational behaviour can be traced back to the choices and decisions people make, and these are very similar to the decisions economic agents make when they allocate scarce resources like money. Humans as economic agents make trade-offs, consciously or unconsciously, and the same applies to humans in organizations.

And in a sense, here you have your science — decision theory, game theory, or behavioural science more broadly are disciplines that try to codify and explain how and why people act the way they do. But these are not domains that managers are typically comfortable or even familiar with, even those who have a scientific background.

“In my experience wearing economics lenses can help bring a more scientific approach to understanding organizational effectiveness”

That can change, of course! In my experience wearing economics lenses can help bring a more scientific approach to understanding organizational effectiveness, diagnosing dysfunction and formulating interventions to improve business performance. It doesn’t dismiss the conventional structural mechanisms for managing and influencing behaviour, but it acknowledges their limitations.

Of course people do, to some extent, make reasoned trade-offs based on agreed objectives and personal self-interest. But they are also subject to their personal beliefs and to those of the organization: things like the mission statement or the corporate values, or even the unspoken beliefs carried in collective behaviour. For example, if controlling costs gets a lot of attention, that may influence the trade-offs of front-line staff who deal with customers, even if nobody tells them how to respond (or not to respond) to a customer issue.

And of course there are plenty of ways in which biases and choice architecture influence decisions in organizations. There are the obvious problems with the planning fallacy, the sunk cost fallacy and overconfidence that can easily hit the bottom line for example. But you can also look at a dysfunctional relationship between two departments and see signs of confirmation bias that reinforces the conflict, or at the role of social proof in the adoption or rejection of new ways of working.

Arjan: So, if I understand you correctly, in a sense you try to capture the ‘messiness’ of people, their choices and their behaviour indirectly in a kind of framework that has elements of both hard sciences and social sciences?

Koen: Yes, exactly! Once you start looking at an organization and how it operates through your economics and behavioural economics glasses, it becomes much easier to engage a scientific mindset. You can analyse behaviour and diagnose problems, you can use the existing body of behavioural science knowledge — as well as conventional economic concepts — to formulate hypotheses and interventions to test them, and you can define experiments to see what works or doesn’t work, even if lab conditions (large numbers of participants, randomly allocated to control and treatment groups) are often not possible. An experiment doesn’t have to be an RCT to be valid.

And to go back to your original question, that is how I think reluctant managers can be persuaded of the benefits of scientific thinking: start from what is familiar and understood, introduce a robust framework that lends itself to critical analysis, try things out and verify what works. Just like in real science!

Do you like what you have read? Get a quarterly update of what I am busy with.