Making the shift to expertise
Observations from training junior doctors
Over some years of training junior psychiatrists in consultation skills, I began to notice a pattern. My co-trainer Hannah Toogood and I were relatively successful at training junior doctors to a good standard: almost all the trainees reached a solid level of competence. But we were much less successful in helping trainees make the shift to expert practice. This was most striking when I observed our invited co-facilitators, more senior psychiatrists often with ten or more years of practice. There seemed to be a distinct, qualitative difference in how this group approached the task of psychiatric consultation compared with the trainees. In particular, they always seemed to have an intuitive sense of where the consultation was going; if they did get lost, they recovered a sense of direction very quickly. Unlike the trainees, they did not gather data blindly or ask questions without a sense that there was a reason for the question; their consultations seemed very efficient, with little wasted effort. Lastly, because they always seemed to know where the consultation was going, their own anxiety seemed lower. Their intuitions had a sense of ease, which sometimes rubbed off on the patient, often creating a virtuous circle: the emotional temperature of the consultation dropped over its course, so that it became easier for both parties. The experienced clinicians were clearly doing something very different from the beginners, and I started to wonder what this might be and whether it was possible to teach what they were doing from the get-go.
When I started to ask colleagues whom I regarded as experts what they were doing, I noticed something else. They did not seem able to describe how or why their own practice had developed in the direction it had. More interesting still, some of them seemed to feel a little guilty about how their practice deviated from what they had been taught as beginners. They shifted from foot to foot and talked about how they ‘took shortcuts’ and how they ‘wouldn’t recommend’ what they did to beginners learning psychiatry. It seemed that they couldn’t tell me why or how their practice was better than that of someone fresh to psychiatry, even though as an observer it was very clear to me that it was.
The null hypothesis
The null hypothesis was clear from the outset: expertise is hard to acquire and there are no shortcuts. After two years of psychiatry training, beginners were still beginners, and it was plainly unrealistic for them to function at the level of clinicians with ten or twenty years of experience. If I wanted experts, I should put my class out into the wild and come back in ten years’ time when (if they had been paying attention) they too would be experts. In essence, this is the 10,000-hours model of expertise popularised by Malcolm Gladwell. I wasn’t really buying it, not least because it seemed so very inefficient: we were teaching medical students and trainee doctors one way of interacting with patients, knowing that once they had learned their craft they would interact with patients in a completely different way, one that rested on implicit learning and wasn’t written down anywhere. I remained convinced that there had to be a better path to learning expertise.
Expertise: lessons from the literature
My next step was to look at some of the literature on expertise, in particular the work of Anders Ericsson, the main source for Malcolm Gladwell. It seemed that what I had discovered was already well known to researchers in the field of expertise across several different disciplines. Beginners and experts approach the same task with quite different sets of cognitive tools, and experts often have very limited insight (in terms of declarative knowledge) into their own expertise. It is not uncommon, when pressing experts to describe what they are doing, to force them back into the role of a beginner: they describe what they were taught when starting out in the field, even when it no longer bears any resemblance to what they actually do.
There was also evidence that experts reasoned in a particular way: an inductive and sometimes explicitly Bayesian way. This was interesting. Medical students are taught a process of data gathering that varies little from patient to patient: take a full history and perform a thorough examination, and only after gathering this information use it to make a diagnosis and a differential diagnosis. The Bayesians were doing something different: diving in straight away with a quick and dirty hypothesis and then taking a history guided by it — quickly ruling out key differentials, wasting no time on questions that didn’t help their calculus of probability, readily abandoning unproductive hypotheses, and reaching a reasonable degree of certainty about an accurate diagnosis very efficiently. Data collection wasn’t a separate process from diagnostic reasoning; diagnostic reasoning was embedded in data collection. This seemed relevant, useful and eminently teachable. It had some of the qualities of the expert practice I had noticed in my colleagues: in particular, it allowed for very lean, efficient consultations with the teleonomic quality I had observed. But it wasn’t a perfect match for psychiatry, where diagnosis is rarely more than part of the story — and often quite a small part. It was also a very expert-driven model, and not a good match for the way we had been teaching psychiatry, which was (and is) highly influenced by motivational interviewing (M.I.). M.I. eschews an expert-led approach in favour of a collaborative approach where real efforts are made to share power with the patient. I needed a Bayesian approach that wasn’t just about diagnosis.
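The Bayesian habit described above can be made concrete in a toy sketch (my own illustration, not from any clinical source): start with a rough prior over a few working hypotheses, multiply by how likely each hypothesis makes the patient’s answer, renormalise, and let unsupported hypotheses fade. The diagnoses and all the probabilities below are invented for illustration only.

```python
def update(prior, likelihoods):
    """One Bayesian step: weight each hypothesis by P(answer | hypothesis),
    then renormalise so the posterior sums to 1."""
    posterior = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# A rough initial hunch formed from the referral letter alone (made-up numbers).
prior = {"depression": 0.5, "anxiety": 0.3, "hypothyroidism": 0.2}

# The patient reports early-morning waking, which (in this toy model) is
# more likely under depression than under the alternatives.
posterior = update(prior, {"depression": 0.7,
                           "anxiety": 0.3,
                           "hypothyroidism": 0.2})

# The current working hypothesis guides the next question; hypotheses the
# data do not support shrink with each step and can be abandoned.
print(max(posterior, key=posterior.get))  # → depression
```

The point of the sketch is only that each question is chosen for its power to discriminate between live hypotheses, which is why the expert consultation feels so lean: questions that would not move the posterior are never asked.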
It was at this point that I turned to the literature on formulation. Most of what I read fell foul of the same issue I had with diagnosis: it was an expert-driven practice. Formulation is in its essence a way of integrating the disconnected pieces of information brought by the patient with a theoretical framework brought by the practitioner. There seemed to be a level of discomfort with this in some of the authors, but there wasn’t much getting away from it: for practitioners aspiring to patient-centred work, making a formulation carries much of the same baggage as making a diagnosis. Of all the accounts of formulation, one by Lucy Johnstone and Rudi Dallos stood out in its attempt to deal with this issue, which the authors partially resolved through clarity about process and outcome. It was possible to make a rudimentary formulation, what Johnstone and Dallos called a ‘micro-formulation’, with no intention of sharing it with the patient. A micro-formulation could be a very provisional sketch of what the practitioner thought might be going on in that moment. It might express a hunch on which the practitioner could base their next utterance, and perhaps give a rough sense of where the interview ought to be headed, with the proviso that it was just a hunch: if new facts conflicted with it, it could (and should) be jettisoned. Jettisoning a hunch on the basis of new data leads to a second, slightly better hunch: this was the kind of Bayesian process I was looking for. I was interested to discover it wasn’t new, having been described by the Milan school of family therapists in the 1980s.
First tries at teaching microformulation
We tried the first iteration of my microformulation exercise with the new intake of psychiatric trainees in September 2017. We split the trainees into groups and gave each group a realistic, typical ‘referral letter’ (patients usually come to psychiatrists in the UK via a referral from a primary care doctor) and had each group make a rough and ready diagnosis, differential diagnosis and formulation — stage 1 of the exercise — before thinking how this might influence the approach to the patient across the four processes of M.I. (Engagement, Focussing, Evoking and Planning) — stage 2.
It went reasonably well but was laboured and more confusing than it should have been, partly because I encouraged the groups to use some of the theoretical apparatus of a formal formulation — a grid of predisposing, precipitating, perpetuating and protective factors along one axis and biological, psychological and systemic factors along the other. I had also unhelpfully broken down some of the biological, psychological and systemic factors into sub-factors in an attempt to crowbar some theory into the grid. This was both conceptually muddled and far too detailed, particularly for trainees in their first month of psychiatry. It was, however, partially successful: each group was able to generate useful differential diagnoses from minimal information and to see how these might inform their approach. All the groups quickly saw that engagement was a priority, for different reasons, in all of the scenarios we used — a useful teaching point in itself.
With Helen Mentha’s help, I slimmed it down and tweaked it for presentation at the Motivational Interviewing Network of Trainers forum in Ireland in October. This was a very different audience of experienced clinicians and trainers. Most of them seemed to think I was on to something, which was encouraging. I got more useful feedback: chiefly that I shouldn’t be leading with diagnosis, because it elicited a very dry information set in which the patient’s story got lost. It was also helpful talking to Sam Malins, who had been coming at the idea of educating clinicians’ intuition from a slightly different angle. He was also wrestling with the idea of intuition as a useful but also highly problematic addition to rationality in the consultation. Like a microformulation, intuition can give lightning flashes of insight, but can also be hopelessly wide of the mark. Sam too was thinking about how to train people to develop hypotheses but not get overly attached to them. He had been encouraging people to make a guess about what is happening with the patient by paying attention to what is going on in their own body and emotional state.
I got a few more pointers at the UK MINT meeting in November and the version linked to below now includes that feedback too.
Revising the microformulation
My thinking about microformulation had partly come from wanting to develop expertise, but had also come from a particular pedagogical problem I had. After the initial three days of training, our trainees come back for ten half days of simulation: they split into small groups and practice scenarios in front of each other with an actor. I’ve written previously about how we structure this and the ways we give feedback.
Often, the first trainee of the afternoon does the worst interview of the session. Other trainees, with the benefit of watching first, do better. It doesn’t seem to matter much who goes first or second: even with a very strong trainee going first, the pattern tends to persist.
This suggested that the second and subsequent trainees were picking up information from the first iteration of the simulation and using it to improve their own performance — which fitted with some of the ideas I was having about Bayesian reasoning. The second trainee had observed, learned and revised their thinking: they had used what they had learned to improve their approach to the patient.
As a teacher, I was keen to improve the experience of the first trainee: I don’t think handing out experiences of failure willy-nilly contributes much to trainee development. So I became interested in finding the minimum time the first trainee needs to give the second trainee meaningful new information with which to revise their initial microformulation. My guess — as yet untested — is that it is about 90 seconds. This should be short enough that the first trainee doesn’t get burnt, but long enough that the second trainee sees enough to make a better go at the interview. I think it also provides a useful corrective to the idea that consultations are entirely rational and scientific — the only way you can get much useful information in 90 seconds is to start listening to intuition.
The current state of the exercise
I’ve simplified the exercise to get rid of all the traditional apparatus of formulation, to instead suggest that there are three stories to think about: the patient’s story, the referrer’s story and the clinician’s story. This is more helpful for beginners: no need for the theoretical meta-narratives of C.B.T. or psychoanalysis that they may not yet have learned, but room to incorporate them later as needed. Simplification in this way may also partly address the ethical worry that bringing theoretical meta-narratives risks distorting the patient’s story or drowning out their voice (is it still a formulation if it doesn’t contain any theory?).
The separate ‘referrer story’ is really there as a foundation stone for later teaching. Although considering the patient’s relationship to help is useful in itself (thanks to Nicole deZoysa for drawing my attention to Reder and Freeman’s paper on the relationship to help), I am introducing an idea that comes back later in our course: consultation triangles (in this case clinician, patient and referrer) and the idea that all sides of the triangle need to be addressed for a successful outcome — this being part of thinking systemically about the patient.
Having a ‘clinician’s story’ section allows me to fold some of Sam Malins’s ideas about intuition and one’s own emotional reactions into the exercise in a tidier way. The questions ‘What do you hope might be going on?’ and ‘What do you hope it isn’t?’ are his: he suggests using them as a springboard into talking about practitioner strengths and development needs (and possibly blind spots). I also borrowed some ideas from the Balint tradition here.
Going through these several iterations has helped me clarify my thinking on where diagnosis and assessment belong: as a parallel track across M.I.’s four processes (thanks Helen Mentha). Making a diagnosis essentially means doing an assessment, which can be a hard thing to combine with ‘classic’ M.I., where the focus is on the patient and their perspectives: hard, but not impossible. So I’ve conceptually separated assessment and diagnosis out in the exercise, to the extent that it is now a freestanding optional extra.
I’m still interested to know what people think about the process:
- is microformulation the right name given that there is no longer a marriage of patient data with expert knowledge?
- is it a good thing or a bad thing to have jettisoned all the theory? Has the baby been lost with the bathwater? Or, conversely, is the irretrievably subjective nature of the initial microformulation (analogous to the Bayesian prior) inimical to patient-centred practice? Should I teach ‘formulation as object/outcome/event’ as something quite separate?
- what do you think about revising a microformulation after 90 seconds? Should I do this bit of the session or should one of the trainees?
- what would make it a more useful exercise?