Strategic Laziness

Rus H
13 min read · Feb 22, 2017


How I learned to run ambitious software projects

This article describes a simple, effective principle for developing new software that I learned from a very wise manager I once had. Anyone can apply it, and I’ve found it worth applying at any level of the project — from fixing one bug to launching a large project. In some ways this is merely a simplified re-explanation of the motivation for “agile” or “XP”, but I will highlight the key element that I think is most important. We came to call it Strategic Laziness.

Origin story

A few years ago I joined an ambitious new project to see if we could build an “indoor GPS”. Our goal was to let smartphones know their precise spatial position while inside buildings. The work would later become the indoor map and location features of Google Maps, but it was far from complete at that time. My career experience up to then was in building web-based consumer software, with mixed results. I didn’t know a damned thing about hardware, machine learning, maps, or GPS. “But,” I figured, “these guys must know what they’re doing or else we wouldn’t be getting paid to be here. Right?”

On my third day on the team, we drove out to a local auditorium. We were there to meet with its director and look at the interior of the theater building. We stood on the stage and looked with despair at the curved, steeply sloping rows of seats. The stage was flat, but the audience area was divided unevenly into at least 20 different ramps, balconies, sections, and platforms. Given that our technology worked by mapping and displaying rectangular buildings floor by floor, the auditorium was a nightmare. It didn’t have floors at all; it was a complex 3D space with an incoherent seating arrangement and random slopes. The layout completely broke our system’s assumptions about how mapping and location would work, and we had no real plan for handling it.

Our lead engineer then turned to the theater director and asked, “Suppose your customers’ phones could show them a detailed map of this building, and direct them to their seats. Would that be useful?” The director politely commented that it sounded helpful, but our engineer pressed the point. “Do people have trouble finding their seats now? Would this technology really help anyone?” The director eventually admitted that the signs on the walls were usually sufficient for people to find their way, and that since ushers were on hand all night, a high-tech map would probably not get much use from elderly theater attendees.

I was stunned. I realized that our lead engineer had brought us here specifically because the theater building might reveal fatal challenges for our technology. Not only that, but he spent the visit fishing for even bigger problems, like the possibility that even if we could handle this venue, it might not be of any interest to real customers. I had never before worked on a project where our founding purpose was openly questioned. Was he trying to get us all fired?

We returned to base. There, the team frankly discussed the lack of both feasibility and utility of our technology for theaters, stadiums, and other kinds of popular venues that had these problems. I felt like crying, but our lead engineer was calm. “Let’s focus on high-traffic venues like malls and airports for now. We know our model works better there, and businesses already want it. We can always build a more complex system that handles theaters later. In the meantime, let’s make sure our business development team knows about this limitation so that we don’t spend effort on those venues.”

It was a wise decision. We acknowledged our limitations, avoided a quagmire of unhappy customers, and sped up the project by taking a costly, low-value technical problem off our plate. It’s important to know that over the next few years we went on to have much bigger problems! We ended up making major direction changes as a result, so theater mapping was moot anyway. In retrospect, pre-building the infrastructure to handle stadium seating would have been a complete waste. And worse, it would have cost us the time we needed to handle the bigger problems that would come later.

Speak the devil’s name aloud

It was a career-changing experience for me to work with a team that methodically plans for its own failure. Before and since, I’ve had many coworkers who say things like “it has to work”, “failure is not an option”, “we’ll definitely need to do it this way”, “we must ship on time”, and so on. Given how frequently software projects are late, over budget, or fail entirely, it’s ridiculous to pretend that any project will go 100% as planned. What these people are really saying is:

“I don’t want to think about changing course. I would rather close my eyes and crash this plane into a hillside.”

These experiences taught me that we must be honest with ourselves about what the biggest risks are. What don’t we know? What might go wrong? What will we do about it? Even when it becomes obvious that our original goal is impossible, we still owe it to our executives and customers to pursue the best feasible outcome. Sometimes the best outcome is nothing more than “we quickly and cheaply discovered that this project wasn’t worthwhile. You’re welcome.” But often it’s better than that! There are many excellent “Plan B’s” out there… but we have to first be open to letting go of “Plan A”.

Radical self-honesty can be painful. It’s an emotional challenge to admit to failures and be willing to make necessary course corrections. I’ve noticed that many of my coworkers expect themselves to somehow make every engineering decision optimally, even though there is rarely enough information to do so. They sometimes even deny when a decision they’ve made isn’t turning out well! When I was a young, insecure engineer with a chip on my shoulder, I certainly did that.

But as I’ve worked on harder and harder projects, I’ve realized that wrong decisions are inevitable. Pretending otherwise only damages the team’s culture and lines of communication. I’ve been fortunate to find managers who encourage experimentation and accept that our mistakes are usually a result of doing something new and unknown, not incompetence.

Unknowns and learnings, not tasks and schedules

I’ve never seen an interesting software project begun with enough information to ensure its success. Not knowing exactly how to build the software is a big problem. Not knowing exactly what’s needed is an even bigger problem. That’s why most of the time, the right thing to do is not to plan out a long schedule based on what we know at the start. Instead, we identify the biggest risks and unknowns, and then launch cheap, experimental projects to teach ourselves about those unknowns as quickly as possible. This may involve learning about what our customers want, whether a piece of technology does what we expect, or whether a dataset we need really exists and is usable. Whatever.

Each time we de-mystify a key unknown, we are able to nudge our chances of success upward. Speed of learning is important! Because the sooner we learn, the sooner we’re headed in the right direction. The sooner we change direction, the less time we waste working from our old, flawed understanding of the problem. Any schedule made without the benefit of our latest knowledge should be thrown out the window.

Optimize for being wrong

On very mature software projects, it’s important to optimize for Big Software concerns: stability, scale, efficiency, polish, and collaboration amongst many teams in a large organization. These are hard problems, and there’s a fine art to handling them well.

But when we’re first getting started, working toward a first system version, or significantly reworking a software system for a second version, I’ve found that the major problem is not scale or stability or collaboration or efficiency. The major problem is the unknowns. Obviously we have our hunches, and we try our best ideas to solve the problems we can see from here. But we will be wrong a lot, despite our best efforts. That’s because we’re doing something new and unknown. This doesn’t just happen once at the start of the project; being wrong about how the project will go is a constant reality. You would not believe how much great code I’ve thrown away in my career.

For me, these risky decisions happen every day. Mostly they’re in the form of deciding what software systems and features to build next. I’ve realized that if we’re going to be wrong a lot, we should prefer decisions that are as cheap as possible to change later. (“Later” is when we often have useful new information!) In software design, all other things being equal, this means choosing an approach that doesn’t cost much to do, and doesn’t cost much to undo.

If the new software turns out to meet an important need, then we can always continue investing in it. We can even completely replace it later with something that is more sophisticated. Once we’ve learned more about what was needed from the first version, we’re more confident that it’s worthwhile to invest more deeply in the second, more complex version.

But if the new software turns out to be unused, unnecessary, or otherwise a bad choice, we have cheaply learned from that experience. Then we can shelve the unwanted component, or erase it, or keep it around if it isn’t annoying anyone. Often we learn that the right next step is to work on something else entirely.

You may have noticed that this approach does not optimize for minimal development costs. Even when our hunches are correct, it costs more overall to build a simple first version, replace it with an enhanced second version, and so on. If we had perfect information about the future, we could simply build the new software system perfectly the first time. No need to waste time iterating if you have a crystal ball! But when we are wrong, we know it sooner and we can change course more affordably.

The biggest risks are probably not technical

By the time I’m hired onto a project, there’s a presumption that there’s a hard software problem to solve. But the biggest failure of my career was caused by building a great solution to a hard software problem. It had high reliability, a thoughtful security design, an intuitive user experience, and was ready to scale to millions of users. For what it was, we built it reasonably quickly despite our small team. If the software part of the project were all that had mattered, we’d have been wildly successful.

Unfortunately, we eventually learned that the dataset upon which our entire product vision depended… didn’t exist. We needed data from partners, but they were only interested in doing press releases using our name to get positive “green energy” PR. Most of them had no intention of working with us beyond their one tiny pilot program. What’s worse, we eventually learned that their data quality was so bad that they often had to “synthesize” customer data, meaning they falsified missing data points! That was not acceptable for our product, which depended totally upon getting accurate data. The partners knew our product wasn’t going to work. But they didn’t tell us.

By the time we realized we needed to change direction, over two years had passed. We had ideas for new approaches, but there was no budget left to try them. We were all fired. The project was shut down and the company was embarrassed. What I should have done instead was use the software team’s talents to help our business team quickly discover these problems with their partnership strategy. It’s not particularly fun software work, but we could probably have built a quick-and-dirty system with real data in a month or two. We could have acquired data from a real partner by getting on an airplane and persuading them to work with us on a proof-of-concept. Perhaps if we’d raised the alarm sooner, we could have realized that we should move on to Plan B. Perhaps I would still be working on that project today.

Software can serve our goals in many ways. But it’s important to focus effort on the most important risks and unknowns — which often aren’t hard technical problems! When the business people don’t really understand who our customers are or what they want, the software work should probably be user research and throw-away prototyping to gather customer feedback. Compared to what the business can learn from a simple prototype, working on software foundations or infrastructure is a waste of time and money.

Practicing “Strategic Laziness”

The practices which I learned from that team — frankly considering failure, learning about unknowns, and assuming there will be future course changes — came to define everything we did on that project, and every project I’ve done since. When I think in these terms, my software development strategy may seem indistinguishable from the behavior of a lazy, impulsive hack. Self-respecting software engineers are often horrified by my proposals! I often appear to be hasty, short-sighted, and unwilling to lift a finger to prepare for the future. Exactly.

I think the key software development strategies are, roughly:

1. Ruthlessly cut preparations for the future.

Software engineers come in all temperaments. One common temperament likes to build software that handles every imagined possibility for all the ages to come. (I know this type well because I used to be one!) They prepare for future feature needs with extra abstractions. They prepare for repetitive processes by automating manual tasks. They prepare for larger data volumes with extra optimizations. These things feel like “quality” to a self-respecting software engineer, but there’s a key problem with doing a lot of future-proofing: we are often wrong about what’s needed in the future! Once we learn more about our unknowns, we often change our mind completely about the direction of the project. When that happens (and it has happened to me many times), all that preparation delayed our learnings, created useless complexity that we tripped over, and was generally a waste of time and money.

2. Prefer stop-gaps to paying costs up front.

On a project where we’re learning more every month about what we’re doing, this month’s seemingly necessary big costs often don’t seem so necessary next month. Buying an expensive piece of equipment, pre-paying for a year of compute costs, spending time building seemingly needed infrastructure, integrating a complicated third party system… no matter how obvious a choice these things seem at the time, doing them has certain cost but uncertain benefit. If there’s a cheaper, crappier stop-gap that could be put in place to serve the need for a few weeks or months, that’s probably worth doing first. Quickly un-blocking the project has a lot of value, because we can immediately move on to learn about other aspects of the problem which might matter more. When the stop-gap has turned out to be surprisingly good, or when we’ve stopped needing it entirely, we’ve been delighted that we didn’t waste our money on the expensive solution.

3. Dip a real toe into the real water ASAP.

We seem to learn the most from trying to build a working system that solves the real business problem. “Real” might mean serving real customers, processing real data sets, handling real loads, whatever. We could certainly spend lots of time doing theoretical feasibility assessments, reading academic papers, or other research. But these activities still leave open the risk that our assumptions are wrong in some way that we don’t understand yet. Learning from past work is important, but we won’t know for sure until we succeed for ourselves. (A working system under the team’s control is also an invaluable laboratory for teaching ourselves what the unknowns are, and iterating.) So we want to get our own real-world laboratory up and running as soon as possible.

4. Just a toe, though.

In the process of creating our first system, we’ve often been tempted to make the endeavor more elaborate. Maybe we want to prepare for more customers, more features, more scale, or more fanfare. We might even wish to make the effort into a “launch”, with public relations splashes and marketing and so on. In my experience, these scope expansions increase risk and needlessly delay the moment when we get to start learning about our unknowns. What’s worse, the increased scope, cost, and visibility also typically increase everyone’s reluctance to risk any failures. We learn much more when we find a way to have a small, cheap, private laboratory in which to fail quickly and quietly. On projects where every misstep is a potential public embarrassment for the company, I haven’t been able to get much done.

5. Cheap software, not buggy software.

Over the years I’ve employed software quality measures like unit tests, methodical production engineering, and code reviews to varying degrees. Too much means a slow project, but too little means the software is a buggy mess. Although skipping quality processes definitely accelerates the project at first, lots of bugs can undermine the learning process itself. If a buggy experimental system doesn’t work when sent into battle, it’s hard to be sure why. Was the failure due to our flawed understanding of the endeavor? Or was it just the bugs? We might learn the wrong lesson from our experiments. We might give up too easily, or we might gain false confidence from misleading data. There’s an art to finding this balance. My preferred strategies are to hire engineers who demonstrate high individual work quality, build whatever automated tests are cheap, accept that some manual testing is needed, and to generally keep the system design simple and unsurprising.

Does your project need some Strategic Laziness?

My career experience consists mostly of new, small, high-risk projects. Your needs might be different. Here is a simple experiment that you can conduct for yourself to judge whether you might benefit from this strategy:

  1. Keep a prioritized “TODO” list of work items for your project.
  2. Every time you add work to your TODO list, also jot down today’s date on that work item.
  3. When you complete a TODO list item, note its age. How long ago did you first decide you needed to do it?
  4. When you cancel a TODO list item, note its age. How long ago was it that you thought this was a good idea, even though now you don’t?
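If you want to run this experiment without spreadsheet bookkeeping, the steps above can be sketched as a small Python class. This is just an illustration of the record-keeping, not a real tool; the class and method names here are my own invention:

```python
from datetime import date, timedelta

class TodoList:
    """Tracks when each work item was added, so we can measure its age
    at the moment it is completed or cancelled."""

    def __init__(self):
        # Maps an item's description to the date it was added.
        self.items = {}

    def add(self, description, added=None):
        """Record a new work item, stamped with today's date by default."""
        self.items[description] = added or date.today()

    def _age_days(self, description):
        # Remove the item and return how many days ago it was added.
        return (date.today() - self.items.pop(description)).days

    def complete(self, description):
        """Finish an item; returns its age in days (question 3 above)."""
        return self._age_days(description)

    def cancel(self, description):
        """Abandon an item; returns its age in days (question 4 above)."""
        return self._age_days(description)
```

Reviewing the ages returned by `complete` and `cancel` over a few months tells you whether your old plans are surviving contact with reality, or whether most of your work was unknown to you until recently.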

Ask yourself: are you completing those large, old projects? Making good progress on those strategic priorities from three quarters ago? Not erasing many of those old items? They’re all still looking relevant and important to you, are they? If so, you’ve been doing a good job of predicting the future and you may not desperately need to change how you run your project.

On the other hand: Are you mostly doing things that you didn’t even know existed last week or last month? Are those old priorities looking kind of irrelevant given what you know now? Do you tend to wipe out your plans frequently, reacting to customer demands or reorgs or leadership summits or whatever? If so, you might be doing yourself more harm than good by trying to build complex software to handle your future needs. It might be time for some Strategic Laziness.
