30 ways to think about a problem
I’ve long wanted to write an article called How to Disagree, laying out how I try to be good at offering thoughtful dissent rather than just being counterproductively negative. However, Gabriel Weinberg’s recent and excellent Mental Models I Find Repeatedly Useful made me realize that what I really wanted to write about wasn’t how to disagree, but to offer a few ideas for how to think: specifically, how to think critically, and how to explore and evaluate ideas.
Now why would you need any help thinking? In Gabriel’s article, he refers to a speech by the great Charlie Munger that outlines typical tendencies of human thought. There’s one tendency in there that I fall for all the time, and that I see other people falling for just as often: number 18, the Availability-Misweighing Tendency. Put simply, it’s your brain overvaluing easily available information over information that is less readily available.
In the special case of thinking, how does it play out? Well, I tend to evaluate an idea by comparing it to a mental model, for example by judging it against a set of criteria for that kind of idea. If I look at a project plan, I evaluate it based on what I know about project plans. The misweighing tendency kicks in here: if I’ve just read an article about opportunity costs, you bet I will look really hard for opportunity costs in the plan. Similarly, if I’ve just taken a course on coaching, you bet my management advice and 1:1s for the next two months will be overly heavy on coaching.
To avoid this, I’ve found it necessary to continuously refer back to a corpus of ideas and mental models to refresh them in my mind. Not that I’ll necessarily forget them otherwise, just that I’ll misweight their importance based on where in my mental LRU cache they happen to be.
This list is necessarily shorter than Gabriel’s; I can’t keep 80 ideas in my first-level cache. Some of the ideas are identical to his and some were not on his list, but I present all of them here since together they make up my general checklist.
So, without further ado, here are the 30 or so ideas I try to keep near at hand to help me think critically, coach, teach, reason and dissent. I’ve tried to group them roughly into categories.
When you evaluate plans and ideas for moving forward
- Opportunity Cost. What could we do if we weren’t doing this? Is this the best use of time / resources / energy? Could we seek alternative outcomes? What’s the marginal utility gain (see below) at various scopes?
- Sunk Cost. Are we continuing down this track simply because we’ve spent a bunch of time down this track already? If not — what has changed that now makes this option more viable than before? Knowledge, people, market, technology, …?
- Loss Aversion / Risk Aversion / Cover Your Ass. Are we trying to cut risk to the point where we spend more time minimizing risk than the expected cost of the risk itself? I.e. are we spending 40 hours to cover for a 5% chance of an 800-hour loss?
- Incentives & Systems thinking. Are there incentives put in place to avoid a tragedy of the commons? Assume people act in their rational self-interest, does the system still operate as expected? What feedback loops assist and enforce? Are there carrots and sticks in place? Prefer honey over vinegar and only put in place enforceable sticks.
- Worst possible outcome. What’s the worst possible thing that can happen? Can you make the decision smaller by minimizing the worst possible outcome? Relatedly, consider the likely worst outcome.
- Tactics or strategy. Is what’s being proposed a tactic or a strategy? On what level? Strategy is what gets you to the battlefield, tactics is what you do when you get there. To what strategy is this roadmap a tactic?
- Unknown unknowns. Have you considered unknown unknowns? Can you brainstorm your way to uncover some of them? If an unknown unknown causes the worst possible outcome, is that acceptable?
- Last responsible moment. Can this decision be made later? Does waiting give us more information? Can we take a decision now that maintains optionality? How much optionality do we need to sacrifice to conquer uncertainty?
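The loss-aversion bullet above is really an expected-value comparison, which is easy to sanity-check with a few lines of code. A minimal sketch; the function name is mine, and the numbers are just the 40-hour/5%/800-hour example from the bullet:

```python
def worth_mitigating(mitigation_hours, probability, loss_hours):
    """Compare the cost of mitigating a risk against its expected loss."""
    expected_loss = probability * loss_hours
    return mitigation_hours < expected_loss

# 40 hours to cover a 5% chance of an 800-hour loss:
# the expected loss is 0.05 * 800 = 40 hours, so mitigation only breaks even.
print(worth_mitigating(40, 0.05, 800))  # → False
```

If mitigation costs clearly less than the expected loss, it is probably worth doing; in this example the two sides are exactly equal, which is why the bullet reads as a warning.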
When you examine general arguments
- Questioning the Premise / Five Whys. Do you agree with the problem as stated? Is the world view reflected in the problem compatible with yours? Can the person stating the problem reasonably have the right information to properly state it? If the answer to any of these is no, ask Why five times to find your way to the root cause of the misalignment.
- Cognitive Biases (specifically selection bias, survivorship bias, availability bias, confirmation bias, hindsight bias). Are any of the biases mentioned in parentheses at play? Watch especially for hindsight bias (“this product was doomed to fail”) and confirmation bias (“of course React would be better, node is awesome”).
- Anchoring. Why do you believe it’s that number? Can you argue it from first principles, or did someone just pull something related out of their ass? Just because someone wants salary X, does putting a raise at 1(±0.05)*X give you a fair salary?
When you’re in the thick of being a leader
- Coaching & Learning on the job. Are you accidentally offering a solution to a problem, robbing someone of an opportunity to learn? Is the worst-case cost of failure small enough that you should let someone try and fail, to build longer-term leverage in your team? Would giving them the answer be a case of loss aversion?
- Leverage & leveraging. Is this activity leveraged? Does delegating it create leverage? Can I otherwise leverage it? If I write this down, do I go from 1 minute in, 1 minute out, to 60 minutes in, 100 hours out? Does this activity compound with, or draw leverage from, a company-wide activity?
- Does it look good naked? If this decision or action were known to a wider set of stakeholders, would you still do it? Even if it’s a tough choice, is it defensible? Can you mentally justify it and verbalise such a justification in a compelling way?
- Consider all parties. Have you considered the hidden stakeholders? I.e. if you promote someone or hand someone a job, did you run an open process? Was it done fairly? If not, what are the implications for all the people not in the room? See How To Manage.
- Information asymmetry. Do you know something that the transacting party does not? Can you gain leverage from it? Can you gain trust from removing the asymmetry and levelling the playing field? Are there ways to reveal the structure without revealing the contents? I.e. do you have latitude to show why things are the way they are without revealing what they are? See public salary ladders.
When you try to understand why things are failing
- Good Intent & Best Effort. In case of a fuck-up, assume everyone acted with their best intent and tried their hardest. Now evaluate the decisions they made. Why did they do them?
- Root Cause. Is what’s being described truly a root cause? Can you diagram it out? Ask why five times. What are the other dimensions of the problem? Have you properly considered people, process, product, culture, technology?
- Shipping solves all problems. Most teams become happier when they ship. When did the team last ship? Why can’t they ship? In general, if they have any big problem, first make sure they are shipping. Treat anything that causes them not to ship as a blocking impediment.
When you look at team setup, project plans etc
- Deadlines (Forcing functions). Does the team have a deadline? Is it honest? What happens if they miss it? Does an external stakeholder rely on you? What other forcing functions can you put in place?
- Order of Magnitude — Economies of scale — Addressable market — Scalability. Is this estimate orders of magnitude correct? 10 vs 100 vs 1000 hours? Does this solution scale linearly? Are there fixed costs that create barriers to entry and scalability? Is there enough of a TAM for this to make a dent at our baseline? Conversely, is this cost big enough to make a dent at our current scale?
- Optima. Is this a local or a global optimum? Does our organisation force local optima? Are teams correctly set up to overcome local optima? Conversely, does this choice sacrifice the whole to optimize for the parts? I.e. if writing the build system in Go makes the build system 10% faster, but requires everyone to learn Go, is it a false optimum?
- Divide and conquer. How do you slice the problem so that you get to a problem that is tractable? Can you start with one part and solve that completely? Is there a smaller problem that is both useful to solve and possible to solve tractably that would give comparable effects?
- Scope cutting — Scope, Time, Quality triangle — Marginal utility. Is this really the smallest increment forward? Remember the triangle of scope, time, quality. Remember that the requested scope does not change the available capacity. Anything that’s below the line capacity/week * #weeks gets cut — what do you want to cut? Is the marginal utility of a story such that it falls within the MVP? What happens if we remove any given story?
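The cut line in the last bullet is plain arithmetic: total capacity is capacity/week * #weeks, and anything below that line gets cut. A minimal sketch, assuming the backlog is already sorted by marginal utility; all story names and sizes here are made up:

```python
def cut_line(stories, capacity_per_week, weeks):
    """Keep stories (in priority order) until capacity/week * #weeks runs out."""
    budget = capacity_per_week * weeks
    keep, cut, used = [], [], 0
    for name, size in stories:
        if used + size <= budget:
            keep.append(name)
            used += size
        else:
            cut.append(name)
    return keep, cut

# Hypothetical backlog, highest marginal utility first, sizes in hours:
stories = [("login", 30), ("search", 40), ("export", 50), ("themes", 20)]
keep, cut = cut_line(stories, capacity_per_week=20, weeks=5)  # budget = 100 hours
print(keep)  # → ['login', 'search', 'themes']
print(cut)   # → ['export']
```

Note that this greedy version lets a smaller high-utility story ("themes") slip under the line after a bigger one misses it; a stricter reading of the bullet would cut everything after the first story that doesn’t fit.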
When you evaluate arguments, especially technical ones
- Technical debt & Big rewrites. Is the proposal a big rewrite in disguise? What institutional knowledge have we picked up that makes us think we’ll do it better this time? Relatedly, do you know the debt level of your organisation? How susceptible is any given team to a default?
- Not Invented Here / Build vs Buy / Transaction Costs. Are we building this because no other solution exists, because owning our own infrastructure would give us unique advantages, or because we don’t trust the external tech / team and assume their stuff is shitty without ever looking at their code base? Are those reasons valid?
- Work in Process & Cycle Time — Inventory — Little’s Law. How much inventory does this team have? How many things are in process and how long do they take on average? If they did one less thing in parallel, could they do more things? If not, that’s likely an overspecialized team. Is it correctly staffed? Can you cut inventory by shipping regularly?
- Cost/Benefit Analysis. Does doing this make sense? Are we paying too much for what we get? Can the team articulate an unbiased C/BA for all proposed alternatives? Is it non-bullshit?
- Nirvana Fallacy. Is the solution being rejected for falling short of an unattainable perfect solution? I.e. is the ugly hack that takes 5 hours being rejected because it falls short of a 1000 hour solution?
- Appeal to emotion / Exaggeration. Does this pitch contain too many adjectives? Does the data belie the argument, or does the argument build entirely on emotion? Special cases are “It’s the right thing to do” and “There is value in the long tail because Gladwell said so”.
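Little’s Law, mentioned in the Work in Process bullet above, is worth keeping around as an actual formula: average cycle time = average inventory (WIP) / average throughput. A minimal sketch with illustrative numbers:

```python
def average_cycle_time(wip, throughput_per_week):
    """Little's Law: average cycle time = average WIP / average throughput."""
    return wip / throughput_per_week

# A team with 12 items in process, finishing 3 items per week:
print(average_cycle_time(12, 3))  # → 4.0 weeks per item, on average
# Halving WIP halves the average wait, without anyone working faster:
print(average_cycle_time(6, 3))   # → 2.0
```

This is the arithmetic behind “if they did one less thing in parallel, could they do more things?”: cutting WIP doesn’t raise throughput by itself, but it shortens how long each item sits in the system.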
The categorisation above isn’t necessarily perfect or linearly independent, but hopefully it gives some structure to where my need for these ideas has arisen. And hopefully some of them are useful to you as well.
If you think there’s something obvious I’ve missed, or if there’s something you think is insane, please drop me a comment below.