Why Don’t They Trust Us?

John Cutler
6 min read · Nov 16, 2018

Have you ever walked into your favorite restaurant, ignored the menu, and asked the chef to surprise you? What made you trust the chef? Consider how you “do business” with a carpenter, plumber, or car mechanic you’ve come to trust. I love my car mechanic. I give him carte blanche to do whatever he sees fit to maximize my return on investment in my vehicle, and keep me and my family safe. He trusts me to pay up.

Now, imagine you are planning a wedding. You meet a caterer (or photographer) vying for your business. You’ve never met them before (or eaten their food). Do you ask them to surprise you? Or a new mechanic…do you ask them to “fix anything while you’re at it?” I’m guessing the answer in both cases is No.

So, let’s just cut straight to the chase. Most product development teams are not (fully) trusted to deliver a high level outcome or solve a problem. I say this with no value-judgement attached. A synonym for distrust is lack of confidence, which feels a bit better to say out loud. Why do we sometimes lack confidence in a team’s ability to deliver a high level outcome?

  • Context & Judgement: Concerned they don’t have all the required context. Not fully confident in their judgement regarding the solution. Concerned they may not reach out for help, ask the right questions, or notify others of their intentions. A sense that we know best, or that our specific idea is better.
  • Proof: We have not observed them generate successful outcomes. Or more broadly, we haven’t observed a pattern of reliably keeping promises.
  • Self-interest: A perceived sense of self-interest (“they don’t have my needs in mind”). A perceived sense of not caring (“I’m not sure their heart is in it”).
  • Risk: We fear being punished because of the team’s bad decisions. The stakes are high and we “cannot afford to be in the dark!” We “need this to be perfect” or “can’t afford this to go wrong”.
  • Coherence: The outcome is too distant! There’s no way to “measure progress”, so the only alternative is to agree on prescriptive solutions, and measure completion (as a proxy for progress).

Note: We all fall victim to fundamental attribution bias, whereby we’re more likely to attribute our own problems to system-level causes while blaming other people directly for theirs. And we may ourselves be lacking in experience.

So why, in internal software product development, do we see the New Mechanic, or even worse the Mechanic We Distrust But Are Forced To Use, dynamic take hold? Shouldn’t the working relationship more closely resemble Our Favorite Mechanic or Favorite Chef? The distrust goes both ways as well…all of our bullets above could be reversed to describe how “the teams” feel about the people managing/leading them.

Instead of a customer/service-provider analogy, let’s consider a healthy team of carpenters (a partnership situation). What does the lead-carpenter do when an apprentice “can’t be trusted” to do a particular job? The LC pairs, teaches, and coaches. The apprentice accepts help. Bingo!

Why does this work? Let’s take a look at Larry Maccherone’s Trust Algorithm:

Trust = (Credibility + Reliability + Empathy)/Apparent Self-Interest

Where…

  • Credibility = How well you actually know what you are talking about
  • Reliability = How often and quickly you do what you say
  • Empathy = How much you show that you care about someone else’s interests
  • Apparent self-interest = How apparent it is that your words and actions are in your own interest
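
To make the relationship concrete, here is a minimal sketch of the equation in Python, assuming an arbitrary 1–10 scoring scale (the scale and the example scores are my own illustration, not Larry’s): gains in credibility, reliability, and empathy add up, while apparent self-interest divides the whole thing down.

```python
# Larry Maccherone's trust equation, with ad-hoc 1-10 scores.
# The numbers below are made up; the point is the shape of the
# relationship, not precision.

def trust(credibility, reliability, empathy, apparent_self_interest):
    return (credibility + reliability + empathy) / apparent_self_interest

# A seasoned lead carpenter pairing with an apprentice:
print(trust(credibility=9, reliability=8, empathy=8, apparent_self_interest=2))   # 12.5

# A team perceived as protecting its own agenda:
print(trust(credibility=6, reliability=4, empathy=3, apparent_self_interest=6))   # ~2.2
```

Notice how quickly even strong credibility and reliability get eroded once self-interest appears to dominate.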

In our carpentry example, the lead carpenter has credibility, they show an interest in the apprentice’s success, and they reliably provide help when needed.

Now, let’s take a look at a common dynamic in software product development.

  • Project manager doesn’t understand software development. CEO understands a 20-year-old version of software development from a different domain and randomly overrules the team without rationale. Team member doesn’t understand sales. Only designers understand design. (Credibility, Reliability, Empathy)
  • Team has been hamstrung by technical debt (Reliability, Credibility)
  • Management hasn’t offered resources to help relieve the debt. (Reliability, Apparent self-interest, Credibility)
  • Manager doesn’t understand the intricacies of this particular technology challenge and keeps asking the team to represent/vouch for it (Credibility, Empathy, Apparent self-interest)
  • Team doesn’t have some of the requisite skills for this particular effort, so they opt to use a familiar but sub-optimal technology, which backfires (Credibility, Reliability, Apparent Self-Interest)
  • Team member is hogging all of the “interesting work” to gold-plate their resume for their next gig (Credibility, Apparent Self-Interest)
  • Product Manager seems unwilling to admit missteps (Apparent self-interest, Empathy, Reliability, Credibility)
  • Someone starts using feature velocity as a proxy for progress (Credibility, Empathy)

I could go on and on. You get the idea. See how easily distrust is sown? Part of the issue here is purely systemic. For example, in an environment optimized for high work-in-progress, you find all kinds of seemingly untrustworthy behavior:

Local agendas, back-channeling, more cooks in the kitchen, distrust, opacity, loss of morale, cutting corners…all of these can be easily mapped to lapses in Empathy, Reliability, Credibility, and Apparent Self-Interest.

It goes even deeper. Many of these issues boil down to what Amy Edmondson calls professional culture clash:

It seemed that teaming across industry boundaries was really, really hard. OK, so … We had inadvertently discovered what I call “professional culture clash” with this project. You know, software engineers and real estate developers think differently — really differently: different values, different time frames — time frames is a big one — and different jargon, different language. And so they don’t always see eye to eye. I think this is a bigger problem than most of us realize. In fact, I think professional culture clash is a major barrier to building the future that we aspire to build.

The culture clash can extend to members of the same “culture” (e.g. Engineers) who find themselves divided by generations, prior work environments, opinions on individualism vs. collectivism, etc. There’s also the influence of seniority…a senior engineering leader is experiencing a different dynamic (pressures, history, etc.) from, let’s say, a new engineering hire.

It is for all these reasons that we find many software product development organizations optimized for distrust: upfront planning, prescriptive missions, individual-level goals, and so on. It’s a wicked, self-perpetuating problem. Many processes are designed primarily to “hold people to account” (instead of to deliver outcomes) because there is an underlying sense that accountability is lacking. The only (depressing) saving grace is that systems tend to find a mediocre equilibrium instead of completely imploding.

So what does this all mean? What can you do? Optimizing for distrust is a race to the bottom. My experience is that we’re afraid to talk openly about trust and confidence (especially when trust and confidence are lacking). So attacking this issue head-on can be extremely difficult. Fundamental attribution bias also leaves us biased against a less personal, systems-level explanation.

The key, I think, is to create safe situations whereby cross-functional (culture-spanning) teams can 1) build empathy and credibility and establish a collective interest, and 2) reliably keep promises to each other (and the organization). The rest of the organization, in return, has to fulfill its side of the bargain, lest this be viewed as the stereotypical “the teams must prove they can deliver FIRST and then we’ll trust THEM” problem (which reduces credibility and empathy right off the bat).

Instead of creating hand-offs to silo cultures…take the plunge to integrate them. This will be hard in the short-term, as explained by Bob Putnam, Professor of Public Policy at Harvard University, in this fascinating Freakonomics episode on Trust:

In the short run, increases in diversity seem to be correlated with decreases in social capital. Diversity, in the long-run, is a big advantage.

Basically…this is Larry’s trust algorithm at work. This is why I am so passionate about Starting Together…it puts people on equal footing when confronting a problem together. With that established, teams can progressively shift to being more outcome-focused as the level of collective trust increases. You will need constraints, but with time they will become more implicit and less explicit.

And PS, you’ll always have someone talk about the “real world”.
