Better CoS: Decision-Making

Rob Dickins
9 min read · Oct 21, 2019


Note: this is the first in a planned “Better CoS” series, a collection of posts aimed at skill building for Chiefs of Staff.

There is no shortage of management literature regarding decision-making. The volume of books and articles ranges from “pop culture” psychology to peer-reviewed scientific journals to insightful research by top-tier consulting firms. Often, however, all this powerful guidance falls short of making an impact in day-to-day decision-making where it matters — in the leadership meeting.

As a Chief of Staff (CoS), I’ve observed the gap that can exist between understanding better decision-making intellectually and making changes to decision-making approaches operationally. Decision-makers always want more data to inform a better choice, but often the more impactful enabler is deliberate consideration of how the team will reach the decision.

In my experience, there are three ingredients that together form the recipe for better team decision-making in a general sense:

  • some understanding of how humans think and interact (i.e., an awareness of the body of knowledge related to human psychology, behavioral economics, and cognitive bias);
  • some understanding of appropriate facilitation techniques, given how humans think and interact (i.e., skills and practices designed to mitigate the potential negative effects of the above and/or amplify the positive); and
  • a nominated agent who has the capacity, capability, and authority to design or modify how the team approaches a decision.

This third element, having someone designated to take action, is often the biggest difference between wanting to approach decisions more thoughtfully and actually doing it. It’s also something a CoS is perfectly positioned to do.

As someone who has been in a CoS role for nearly eight years, I am passionate about cultivating and contributing to a community of people who see value in the role and have clarity on what it can entail. In my first post on the subject, I offered three potential orientations for a CoS role. In my second post, I described a potential 90-day onboarding plan for a new CoS. In this post, I’d like to unpack a key skill area: decision-making.

There is no one “perfect way” to approach decisions that will work for every context or situation. Below, I offer some thoughts on a “playbook” of sorts for decision-making, aimed at Chiefs of Staff (or other “nominated agents” who can affect the approach). This playbook is less of a step-by-step guide than a suggestion of specific elements to explicitly consider, based on my experience with this topic. The playbook consists of three main elements — The Owner, The Process, and The Outcome. My hope is that empowered Chiefs of Staff can leverage this thinking to improve decision-making in their leadership teams and more broadly across their organizations.

The Owner

“So, who owns this decision?”

Has this question ever come up in the middle of a project or review meeting? Clarifying who the decision owner is for an effort sounds obvious, but it’s surprisingly easy for complex initiatives or projects to arise organically in organizations without a clearly documented set of roles and responsibilities keeping pace.

Just think of the topic of data in any organization today. There are potentially various internal efforts (around data collection, access, security, privacy, and ethical use) amidst an evolving technology environment (with machine learning and artificial intelligence) and a dynamic regulatory environment (with SOC2, GDPR, and numerous other territory-specific efforts).

Such projects are visible reminders that it’s important to regularly review decision ownership around important initiative areas.

There are three tests, in particular, that merit validation:

  • The owner is explicitly identified. When ownership is unclear, it’s critical to identify and document who the decision-maker is and the scope of their authority so the broader stakeholder community is aware.
  • The owner is empowered. Once explicitly identified, the decision owner needs to be empowered by leadership to make decisions, even if such empowerment extends beyond the individual’s typical scope of responsibilities. The default decision-making style for an empowered owner should be consultative; consensus is not always possible and it should not be a goal. From a culture perspective, effective decision-making often requires those in the stakeholder community to be comfortable with the sentiment of “disagree and commit”.
  • The owner is made accountable. Of course, with great power comes great responsibility. Once explicitly identified and appropriately empowered, the decision owner must be made accountable for the design of the decision approach and the determination of stakeholders who will be engaged as a part of it. In the absence of these responsibilities, we are just empowering lackluster decision-making. The decision owner does not have to develop and manage these things herself — this is where a CoS can play a central role — but that owner must be accountable for the effort and what it yields in the end.

The Process

“How might we approach this decision?”

This is one of my favorite questions to pose as a CoS. I typically raise it as the team I’m working with considers a significant decision on the horizon. We will, of course, do all of the obvious things: gather insights from key stakeholders, collect relevant data, and determine the date by which we need to make the call. But the question above goes beyond these logical and expected elements.

Those first three words, “how might we”, are trigger words for those who have had any contact with the field of Human-Centered Design. They are a signal that we are facing a design challenge. In the context of a team decision on the horizon, the phrase implies we need to think more deeply about how we might design and structure our interactions in order to land on a desirable outcome. In this sense, an even better question could be “How might we use this decision opportunity to illustrate what great decision-making looks like for our teams?”

There are a few key attributes to consider when designing a referenceable process:

  • The process must be transparent. It’s important to design a process and methodology that is transparent to those in the stakeholder community. Inevitably, not everyone will be happy with the result, and that should not be the goal. That said, all stakeholders should have the opportunity to understand when, how, and why such a decision was made (more on this later under “The Outcome”).
  • The design should actively mitigate bias. Much has been written about cognitive bias (also referred to as implicit bias or unconscious bias) in the realm of decision-making. But there is a difference between being generally aware of cognitive bias and designing for it in terms of specific process steps or mechanisms. I have found that listing the specific biases we are worried about is a helpful step in designing planned interventions. We will never remove all bias, but any attempt to reduce it adds value.
  • The process should be calibrated to the need. Not all decisions are alike. Some decisions are routine and tactical; others are “big-bet”, strategic decisions for the entire company. It’s helpful to develop some form of framework that lets the team calibrate the appropriate level of process and preparation to apply to a given decision (see the sketch after this list).
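
As an illustration, here is a minimal sketch of what such a calibration framework might look like if written down explicitly. The attributes (reversibility, impact, stakeholder count) and the process tiers are hypothetical examples rather than a standard rubric; the point is simply to make “how much process does this decision deserve?” an explicit, repeatable question.

```python
# A minimal, hypothetical decision-calibration rubric. The attributes and
# tiers are illustrative assumptions, not a standard framework.

def calibrate_process(reversible: bool, impact: str, stakeholders: int) -> str:
    """Suggest a process tier for a decision.

    impact is an assumed three-level scale: "low", "medium", or "high".
    """
    if impact == "high" or (not reversible and stakeholders > 20):
        return "full process: owner, criteria, independent scoring, debriefs"
    if impact == "medium" or not reversible:
        return "light process: owner, criteria, one decision meeting"
    return "routine: owner decides and logs the decision"

# Example: an irreversible, high-impact call gets the full treatment.
print(calibrate_process(reversible=False, impact="high", stakeholders=40))
```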

The Outcome

“What was the decision (and why)?”

The ultimate objective at the end of any decision process is a quality outcome. “Quality” in this sense can mean a couple of different things. First, of course, is the concept of an optimal choice given our objectives and constraints. This is the substantive result that is unique to the circumstances in question. But even when an optimal outcome is selected among a range of choices, there are inevitably those in the stakeholder community who perceive themselves as “winners” or “losers”. This brings up the second aspect of quality that is broadly relevant — quality in the sense that we are actually proud to talk about the process we used.

I have found that it’s important to consider certain elements on the front end, when we are initially designing the decision approach:

  • The outcome must be clear. The choice that is made must be clear and explainable. In other words, “what” was decided must be simply stated and quickly communicated to those who are impacted by it.
  • The outcome must be defensible. “Why” the decision was taken is also critical. Decision-support tools are valuable here: not only can a set of guiding principles, objective decision criteria, or filtering tests help the leadership team actually make the decision, they also form the basis of a well-reasoned, objective rationale that can be shared with others post-decision.
  • The outcome is ideally summarized visually. Finally, humans are visual creatures. Visual artifacts that reflect the rationale and support the decision-making can also be used to communicate the outcome back to the stakeholder community. This is another area where learning and applying Human-Centered Design methods can add a lot of value.

Putting it All Together — An Example

As I said at the beginning of this post, it’s one thing to read about elements that enable better decision-making, but quite another to operationally implement such things. As a way to illustrate how a CoS might put such thinking into practice, I’ll offer a tangible, real-world example.

Here is the context: Our organization was running an internal competition to solicit new ideas for investment projects. This was widely publicized within the company, and we knew there would be broad interest in what we selected to advance and why. In other words, the importance of having a strong process and a good outcome was high, so we needed to calibrate our investment in the process accordingly.

Below is a breakdown of the process I designed in partnership with members of the decision committee; each step is followed by the reason we approached it that way, tying back to the points made above:

  1. The eight-person reviewing committee had already established governance protocols; we generally look for majority alignment, but strictly speaking, the decision-making style is consultative, with the CEO retaining the final vote. So decision ownership was clear.
  2. Prior to the competition starting, the reviewing committee defined a set of 5 tests that we could use to objectively assess the attractiveness of incoming proposals; such tests included questions like “is there a market?” and “can we win?”. These filtering tests provided an objective and defensible framework for selecting proposals.
  3. Upon announcing the competition, we published a clear timeline for when submissions were due, when the reviewing committee would make its decisions, and when proposal submitters would hear back. This provided transparency.
  4. As part of the instructions, we directed submitters to send proposals to an email alias that included only me (instead of the whole committee). This helped mitigate the risk of reviewers forming different impressions of earlier proposals than of those submitted closer to the due date (a form of Anchoring bias).
  5. In the end, we received 20 proposals by the published due date.
  6. With the proposals in hand, I then redacted the personal information in each proposal (which included employee name, role, manager, and organization). This step helped mitigate a range of potential biases around gender, ethnicity, geography, and past performance; this is similar to the rationale behind Blind Recruitment.
  7. Next, I loaded the redacted proposals into a web-based survey tool. Each committee member was asked to score each proposal against the 5 tests on a 4-point scale (from “Definitely Meets” to “Definitely Does Not Meet”). This step supported numerical scoring at both the proposal level and the test level, which enabled the next step below. Also, by having committee members do their assessments independently in advance, we were able to mitigate “groupthink” biases like Anchoring, Authority Bias, and the Bandwagon Effect.
  8. With all scoring complete, but still prior to our decision meeting, I took the scoring data and built representations to aid decision-making, including heatmaps and scatterplots (a minimal sketch of this aggregation and visualization follows this list). This allowed us to start the meeting with a highly visual summary of the collective thinking of the reviewing committee.
  9. The decision-making meeting to discuss the 20 proposals was scheduled for 2.5 hours. We spent the majority of the time pressure-testing how the data landed, seeing if our intuition matched the results and why (or why not). In the end, we confidently selected the proposals to advance and ended the meeting early.
  10. Last, but definitely not least, we held debriefs with all who had submitted, including those who were told their proposal would not advance. This crucial step provided transparency to those who submitted and others in the stakeholder community. In each debrief, we explained not only the outcome but the overall design of the decision process.
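
To make steps 6 through 8 concrete, here is a minimal sketch of how the independent scores might be aggregated and visualized. The column names, the 1 to 4 encoding of the rating scale, the sample data, and the pandas/matplotlib stack are all my assumptions for illustration; the actual scoring happened in a web-based survey tool, but any export with one row per (member, proposal, test) rating would support the same analysis.

```python
# A sketch of aggregating independent committee scores (step 7) and
# building a heatmap (step 8). Column names, the 1-4 encoding of the
# rating scale, and the sample data are illustrative assumptions.
import pandas as pd
import matplotlib.pyplot as plt

# One row per rating: 4 = "Definitely Meets", 1 = "Definitely Does Not Meet".
ratings = pd.DataFrame({
    "member":   ["A", "A", "A", "A", "B", "B", "B", "B"],
    "proposal": ["P1", "P1", "P2", "P2", "P1", "P1", "P2", "P2"],
    "test":     ["market", "can_win"] * 4,
    "score":    [4, 3, 2, 1, 3, 3, 2, 2],
})

# Average across members: one mean score per proposal per test.
matrix = ratings.pivot_table(index="proposal", columns="test",
                             values="score", aggfunc="mean")

# Overall mean per proposal, for a first-pass ranking.
print(matrix.mean(axis=1).sort_values(ascending=False))

# Heatmap: proposals as rows, tests as columns, mean score as color.
fig, ax = plt.subplots()
im = ax.imshow(matrix.values, cmap="RdYlGn", vmin=1, vmax=4)
ax.set_xticks(range(len(matrix.columns)))
ax.set_xticklabels(matrix.columns)
ax.set_yticks(range(len(matrix.index)))
ax.set_yticklabels(matrix.index)
fig.colorbar(im, ax=ax, label="mean score (1 = does not meet, 4 = meets)")
plt.show()
```

Because each member’s scores are captured before the meeting, the aggregate view reflects independent judgments rather than the loudest voice in the room.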

This was a significant amount of process because we calibrated it to a significant need. Many other types of decisions we wrestle with day-to-day might involve much less preparation and leverage only 1–2 of the steps above. It’s not difficult to imagine how elements of this process could be applied to common situations that bear similar characteristics (e.g., selecting a new hire from a field of qualified candidates, or deciding the best next steps on a specific project).

Decision-making is a complex and multi-faceted topic. Improving decision-making, though, is not an overwhelming task. In my view, this is a skill area that Chiefs of Staff can and should invest in; the result will contribute significantly to the performance of the leadership team and the broader organization.


Rob Dickins

Chief of Staff to the CEO @ Autodesk; passionate advocate, instructor, and facilitator of Human-Centered Design