Improving Governance by Peer Review: Food Safety and Beyond
By Daniel Ho, the William Benjamin Scott and Luna M. Scott Professor of Law, and Becky Elias of Seattle & King County's public health department
Trust in government has declined precipitously over the past few decades. In the Boston Review, Daniel Ho, the William Benjamin Scott and Luna M. Scott Professor of Law, and Becky Elias, manager of the food program in the public health department of Seattle & King County, discuss one possible reason for that erosion of trust: the seeming arbitrariness of some decisions by public officials. They argue that "peer review" — a process by which public officials review each other's work and deliberate based on that review — can help improve the consistency of such decisions. Scholars have long theorized about the benefits of peer review, but Elias and Ho set out to test such claims empirically. Over a period of two years, they worked with the food program in King County and designed a randomized controlled trial to evaluate the effects of a peer review program. Their study confirmed that inconsistency indeed plagues frontline decision making by public officials: Even when food inspectors observe identical conditions, they can disagree on health code implementation 60% of the time. The peer review intervention, however, improved this state of affairs. The intervention increased violations cited and, as Elias and Ho describe, "[b]ecause the increase was driven by inspectors who had previously been loath to cite violations, the net effect was [that] peer review improved consistency across inspectors." They argue that more widespread practice of peer review could promote the consistency of law and, ultimately, trust in government. The full findings are forthcoming in the Stanford Law Review (draft here). Ho and Elias discuss their research with Stanford Lawyer.
How did you two come to work together?
Elias: Three years ago, when I joined the Food Program, Seattle & King County was exploring expanding its information disclosure around food safety. Specifically, we were looking into restaurant grading, where we would place signs in the windows of restaurants to summarize the results of inspections. Many of my colleagues doubted that restaurant grades would actually provide the public with accurate information to make informed choices. My then-manager handed me a copy of Dan's Yale Law Journal piece, "Fudging the Nudge," which captured — through empirical analysis — the very concerns our staff were talking about: the consistency of the inspection process and how restaurant pressure for good grades might shift program resources away from food safety toward grade appeals. His research truly resonated with our staff. I wanted to learn more, so I gave him a call.
Ho: When Becky called me, my first reaction was astonishment. It isn't every day that county bureaucrats hold an impromptu seminar over a 115-page law review article of mine. We then had a series of conversations about the very real concerns that Seattle & King County recognized and what to do about them. I was impressed by the county's commitment to design a policy intervention that could potentially address the core challenges of inspection — accuracy and consistency — and that could also be rigorously evaluated.
How pervasive a problem is the accuracy and consistency of inspections?
Ho: The problem is pervasive and spans beyond food safety inspections. Anytime that public officials carry out complex legal provisions in decentralized and discretionary ways, there is a question of the consistency of such implementation. The underlying article discusses a wide range of examples, spanning federal, state, and local boundaries, such as immigration asylum adjudication, social security disability adjudication, child welfare determinations, nursing home inspections, and patent examinations. Many simply view these disparities as a basic fact of the public sector, and scholars have made little headway in developing proven solutions. Jerry Mashaw argued that these problems in front-line administration violate due process, and that due process requires some better management system — you can think of the peer review intervention as answering that call.
Elias: We see it on the ground all the time. The problem of inspection consistency was voiced by restaurant operators, the public, and even our own staff in a series of stakeholder meetings. Our staff had lengthy deliberations over the value of inspection consistency, and the main goal was very clear: to build trust and credibility — with each other, with restaurant operators and the public.
How did you come up with peer review as a way to address this problem?
Ho: The academic literature has long theorized about peer review as a way to manage so-called “street-level bureaucracy.” Bill Simon, emeritus at Stanford and now at Columbia, and Chuck Sabel at Columbia had written on the topic for a number of years as a central element of the influential theory of “democratic experimentalism.” In our early conversations, we developed a handful of potential interventions, and this one stuck.
Elias: It's easy to forget how independent, and potentially isolating, the job of a food inspector can be. Most of an inspector's time is spent doing inspections alone, engaging with people who are not happy to see you. And if you haven't been in a restaurant before and start marking violations, it is not uncommon to hear, "the last inspector didn't do it like that," which can cause inspectors to doubt each other. At the same time, each of our staff members has a wealth of experience and knowledge that could be shared with others. As a result, peer review seemed like a natural intervention to make use of all of the resident knowledge our team has. The Food and Drug Administration promotes a form of "standardization" of inspectors to improve the quality of inspections, and we saw peer review as an extension of that effort.
One of the unique aspects of the collaboration is that you structured it as a randomized intervention. Was that difficult?
Ho: Randomized controlled trials, such as the Washington experiment, are the gold standard for evaluating the causal effects of an intervention. Policymakers can sometimes be reluctant to engage in a randomized evaluation, but such an evaluation can be fair, cost-effective, and tailored to local constraints. For instance, we made sure here to design the intervention to minimize commute times and ensure equity across the treatment and control groups. This kind of evaluation can be cost-effective because it lets a jurisdiction assess whether an intervention has the intended benefits before deciding whether to invest the resources to implement the program wholesale.
Elias: Local governments don't always have the capacity to run these kinds of trials. Dan's research team helped us design, implement, and analyze the intervention, which made it much easier on the county side. They had the capacity to analyze our data in a way we never had, and what we learned from the experience felt as though the lights had been turned on for us. I strongly encourage other jurisdictions interested in implementing and evaluating initiatives to contact him!
Which findings are you most excited about?
Elias: What was most eye-opening was observing the amount of learning during the intervention. There were staff members who came alive during the process in a way we’d never seen before. Initially, we were afraid that reviewing each other’s work might cause the staff to grow frustrated, but the learning process really engaged people and boosted morale. People have become much more comfortable asking each other questions, sharing ideas, and being more interactive overall.
Ho: Many people have suggested that governments should engage in greater evaluation of policy initiatives, but the uptake has been low. The most exciting part about the project was to be able to design a randomized controlled trial in an actual regulatory enforcement setting.
Are there future projects in store?
Ho: I’ve been very impressed by King County’s openness to learning from and improving policy based on data. The data here allowed us to design a more evidence-based grading system, and I hope that we can design further evaluations to improve regulatory policy in King County and elsewhere.
Elias: We will be working to implement and evaluate the new grading system over the course of the next year. The County also has an Equity and Social Justice Initiative where we think there is incredible learning and evaluation potential. We’re hoping to possibly partner up around that.
Originally published at law.stanford.edu on July 6, 2016.