Guidelines for auditing crisis management
By Professor Arjen Boin, Leiden University
Crises can push societies to the brink. When a crisis goes bad, the result may be human tragedy, structural damage and financial disaster. After a crisis, people have a right to understand what happened and why things did not always happen as planned or as might have been expected. It is important to learn lessons so that future emergencies will not turn into disasters. Auditors can play an important role in providing such post-crisis insights, contributing to a learning government. Arjen Boin is Professor of Public Institutions and Governance at the Institute of Political Science at Leiden University and has published extensively on crisis and disaster management, including examining the management of Hurricane Katrina and, more recently, national responses to the COVID-19 crisis. In this article, Arjen Boin identifies which pivotal issues need to be addressed when evaluating a crisis. He introduces some core guidelines that can enable auditors to provide real added value instead of presenting obvious conclusions.
Evaluating crisis management: why it matters
As the smoke of the COVID-19 wreckage slowly clears, a set of pressing questions is coming into full view. Why were modern, European states unprepared to respond to the ‘crisis foretold’ that was COVID-19? How do we explain the differences in crisis regimes rolled out over Europe? What role did the EU play in supporting and coordinating the responses of member states? In short, the question is: how should we assess national and international responses to this transboundary crisis?
Auditors in every public sphere will be busy for a long time answering these and other questions. They will evaluate whether crisis managers stuck to their plans and procedures, and whether they improvised adequately in the face of inevitable planning shortcomings. They will assess whether crisis managers acted in time and made the right decisions, and whether leaders communicated clearly and convincingly.
Auditors know that their verdicts matter, as these will serve as key input for the accountability process. Politicians will have to account for the way they managed the biggest crisis in decades; audits will serve as primary exhibits of failure (crisis audits rarely give rise to success stories).
There are good reasons to take this audit task seriously. One reason is that the legitimacy of public governance, political leadership and democratic practices is strongly affected by perceptions of whether crises have been managed adequately. In times of crisis, as the saying goes, the public turns its hopeful gaze to its leaders. They had better perform. A second reason is more functional: we will likely see more crises that rate high on the ‘mega’ scale. We had better learn whatever lessons can be learned from the COVID-19 experience if we are to cope with cyber disturbances, energy shortages, new forms of terrorism, climate-fuelled migration and extreme weather events. There really is no time to lose.
The European Union cannot afford to lag behind in this respect. The entrenched devotion to integration is creating plenty of ‘externalities’ — negative effects — that may take on crisis-like proportions at ground level. The EU has slowly begun to build crisis management capacities, which it loves to tout in times of non-crisis. In the wake of COVID-19, new transboundary crises will undoubtedly cast light on the functionality and rationale of EU crisis mechanisms. Let’s just say a bit of scrutiny won’t hurt — especially in light of the new crisis powers and accompanying funding that undoubtedly will materialise.
How to evaluate a crisis?
The importance of the task begs the question of how best to perform a crisis audit. It is a task that auditors rarely have to perform and for which, it is fair to assume, they are not trained. They must assess how a government has coped with a unique event for which it could not have been prepared — or at least not fully. There is never a clear policy or plan — with schedules, clear aims and tools to be wielded — that can realistically and usefully serve as the baseline for a thorough evaluation. The best they can hope to have is an abstract set of principles and hopeful assumptions, which Professor Lee Clarke has famously dismissed as ‘fantasy plans’. Every crisis shows that governments start with a plan that cannot work and is thus quickly (and usually justifiably) jettisoned. Auditors are better off not paying too much attention to these plans, as they will only give rise to fantasy evaluations.
How, then, can we perform a proper audit of crisis management? Auditors need a clear idea of what crisis management entails, why it is so hard, and what may therefore reasonably be expected from all those who play a role in formulating and executing a response to unfolding disruption. Let us have a quick look at these building blocks of a good evaluation.
First, we need to unpack the idea of crisis management: what we mean are the efforts of government officials, in conjunction with citizens, businesses and non-governmental organisations, to arrest a developing threat and limit its consequences for society. These efforts serve different aims. The extent to which these aims are accomplished largely determines the effectiveness of crisis management.
Governments must work to prevent foreseeable and preventable crises from occurring. That sounds like an undeniable truth. Yet, two questions inevitably arise. First, which crises are really ‘foreseeable’ or predictable with a reasonable degree of accuracy (and what is ‘reasonable’ in this regard)? Second, should something that is preventable be prevented at every cost? How high a price tag will a society tolerate for prevention strategies and prevention failures?
The sooner an emerging crisis is recognised, the better the chances of nipping it in the bud. But the problem is how to recognise a crisis that has never been experienced before. Governments need to collect information on indicators that are usually only identifiable in hindsight. Even if the indicators are clear (think of a pandemic), it is not easy for crisis experts to focus the attention of decision-makers on the information that points to the need for action. Experts will quickly discover that threats are competing for attention and politicians are not always easily diverted from existing threat agendas.
Once a crisis has emerged in full view, it is essential to understand its causes, dynamics and effects. More information leads to better response strategies, at least in theory. Alas, a defining characteristic of crisis is the lack of verified and useful information. Many data points, little insight — this sums up the predicament of crisis decision-makers. It usually takes quite some time for a full picture of the situation to emerge. Experts are consulted, but they do not always offer useful clarifications.
Political leaders will have to make critical decisions without the information they would normally have. It is therefore understandable that some of these decisions will not work out as intended. They will have to get partners in the response network to collaborate, even if there is no clear framework of authority and these partners have never worked together before. A bit of confusion, overlap and inefficiency are to be expected. Solutions will have to be found for problems that are new or unsolvable (at least in the short term). Suboptimal improvisation is the norm during a crisis.
Leaders have to provide citizens with hope and direction. They have to suggest that things are under control, even if nobody thinks they are. Political leaders will want to project leadership, even if they are probing in the dark. Media and opposition figures will test the waters, criticising leaders as soon as they think it is acceptable. That happens quite a bit earlier on social media. In a large response network, multiple leaders will emerge. It is not always possible to have them all sing from the same hymn sheet.
If crisis management starts out as a bit of a fuzzy experiment — discovering what works in light of limited information — one might expect leaders and their assistants to display a keen desire to learn, quickly. But it is hard to learn during a crisis, especially if you cannot be sure of the feedback you receive. It can also be hard to act upon a seemingly adequate lesson: political leaders are loath to perform sudden U-turns in public, especially if they have spent political capital on convincing citizens to follow them down a sacrificial path.
In hindsight, failure is easier to spot than success
In short, it is hard to perform these crisis management tasks. Even if leaders manage to be fairly effective, they still may not appear successful. In a cruel paradox, it is rarely obvious in real time when crisis management measures have been effective, whereas chaos and unintended consequences are always visible and tend to inform public impressions of crisis leadership. Many leaders find out the hard way that effectiveness does not automatically translate into legitimacy.
That is another good reason for cool and detached auditing. But before auditors — often equipped with extensive authority to look behind the scenes — can begin with their important task, they will have to deliberate about two critical issues.
First, they need to take into account the conditions under which the crisis in question has to be managed. Every crisis is different and some are easier to manage than others. Clearly, crises that recur often and are dealt with by highly trained responders should be easier to control than a ‘black swan’ that surprises and outwits the response apparatus. Crises that originate in far-away domains may be harder to influence than ones closer to home. We may expect more from well-funded, stable governments than from leaders who oversee depleted coffers and a vulnerable population.
Second, they need to formulate criteria for measuring success. They must answer the question: ‘What can we reasonably expect from crisis responders on each crisis task, given the limitations under which they must operate?’ They must, in other words, formulate an answer to the following questions:
- Was this crisis actually preventable in light of the knowledge available at the time? If so, how could it have been prevented? At what price? What unintended consequences could such preventive efforts have had?
- How much information is enough for leaders to act on warnings about an impending crisis and how much ambiguity can they tolerate? What if leaders have made an explicit risk assessment but chosen to accept the risk (in light of a ‘false positive’)? Must leaders always act on the risk of a crisis occurring, even if it is only a possibility?
- Given that it always takes some time to understand a situation, how long might it take for an adequate picture of a crisis to emerge?
- Should leaders act before they have a complete and adequate picture of the situation? How long can they wait? If they do and their decision backfires, are they to blame?
- What can we reasonably expect from ad hoc, high-pressured cooperation between organisations that have never worked together before?
- Should leaders be expected to communicate everything they think they know and don’t know? If they do, how is miscommunication evaluated?
- Should leaders correct their strategy if new but unverified information suggests that it is not working or even counter-productive?
Guidelines for delivering real added audit value
Many evaluation reports produce tendentious conclusions and lessons that simply do not make any sense. These reports are often accepted at face value, which almost guarantees that they will serve as a template for future evaluation efforts. Auditors can avoid this sad state of affairs by sticking to a few simple guidelines.
Differentiate between evaluating, learning and blaming
A thorough evaluation forms the basis for learning and blaming (accounting). It should be nothing more than establishing the facts (what happened, exactly?) and determining whether the various crisis management processes deviated from reasonable expectations. Assigning guilt should be left to others. Learning lessons is probably also better done by others.
Specify your assessment criteria
Without a clear statement of standards used, the outcomes of any evaluation exercise are near meaningless. Unfortunately, too many evaluation reports appear to be the outcomes of impressionistic analysis rather than a thorough, criteria-driven evaluation. Many of these reports reinvent the wheel as they point to misunderstandings, faulty decision-making, lack of clear communication and insufficient coordination without explaining how they arrived at their — usually scathing — verdict.
Avoid N=1 thinking
A crisis is a unique situation. It is therefore tempting to view the management of that crisis as a unique effort, requiring inductive analysis. But such backward mapping will inevitably point to seemingly primary causes, in the same way that reading a detective novel back to front will quickly tell the reader who the perpetrator is. Crisis management consists of difficult but not unique tasks, which are best studied against a baseline that is derived from the study of other crises.
Use theory, involve academics
It is easy to identify failure factors. But these obvious factors are only valid when they don’t feature in similar crises that were well managed. A comparative perspective is critical to arrive at valid insights. Theory is required, which can be delivered by academics. Don’t reinvent the wheel — involve academics who have studied other crises and know which factors really made a difference.
Explain without blaming individuals
To explain is to blame, as the old saying goes. But it really is possible to analyse crisis management processes without assigning blame to individuals. It all starts with the assumption that all those involved deserve the benefit of the doubt. Assume they tried their hardest. Remember that everybody can make a mistake. Look for structural factors that bred individual errors. Look for evidence of failed checkpoints or cultures with blind spots. That will help organisations improve their crisis management preparedness.
Lessons are not simply the opposite of failures
Coordination failed, so we recommend improved coordination. This is one of the more simplistic lessons to appear perennially in crisis reports. A lesson tells the reader something about the underlying factors that enabled failure to persist. A lesson may be formulated without recourse to an accompanying recommendation. The lesson that plans often do not work is valuable as it tells the reader not to invest too much hope in planning alone.
But tell it like it is
Simplistic, uninformed analysis is bad. So is the tendency to politely hide critical findings behind a façade of woolly formulations and legalese. If auditors document abject failure — a dereliction of duty in clear violation of pre-formulated criteria — they should call it out. Crisis managers have a duty to do their best. Inexcusable behaviour must be clearly documented. The politics of the situation should never play a role in formulating the conclusions of a report.
Taking crisis auditing to the next level
In the wake of COVID-19, auditors will have to redefine the process of crisis evaluation. For too long it has been the preserve of opportunistic politicians with an agenda, catering to accountability concerns at least as much as to the need to understand what happened and how we can do better next time. Crisis evaluation is in need of professionalisation. Every crisis presents an opportunity, it is often said. Let this be the opportunity to take crisis auditing to the next level.
This article was first published in the 3/2021 issue of the ECA Journal. The contents of the interviews and the articles are the sole responsibility of the interviewees and authors and do not necessarily reflect the opinion of the European Court of Auditors.