I used to work for a company that valued shipping. I use “shipping” loosely — it meant getting through a checklist of things that defined my job. My compensation and advancement were tied neither to how well the product did in the market nor to my impact on the company. As long as I shipped on time, I was doing well.
I then worked at a company that focused on impact. It didn’t matter much what I shipped or when I shipped it, so long as it had impact on the business. I was rated every 6 months, which meant I needed to have real, measurable impact during those 6 months.
Think for a second about the implications of this:
At the ship company, a project could take 3 years, while I kept advancing.
At the impact company, I didn’t advance if I didn’t have meaningful impact for a year.
At the ship company, QA’s role was to stop engineering from shipping something that was clearly broken.
At the impact company, QA’s role was to work with engineering to make sure the right features work well.
At the ship company, release dates were sacred.
At the impact company, we shipped when we thought the product was good enough to have positive impact on users.
The ship company was a Fortune 500 company. It doesn’t exist anymore.
The impact company is here and doing very well.
What is Impact?
The obvious answer is business impact. Making more money for the company, saving money, increasing the number of customers, the number of transactions, reducing the cost of support, etc. Focus on the company’s top goals.
There are a lot of other ways to impact the company, so long as they tie to those goals:
- Reducing incidents
- Fixing a major set of bugs for a product area to make it better for users
- Increasing performance in user-noticeable areas
- Making infrastructure cheaper / faster / more stable
- Building systems and frameworks that help others build things faster (and are adopted)
- Reducing developer time to diagnose / build / ship
- Increasing the number of quality people you hire
- Mentoring (and having the mentees acknowledge that you helped them)
Not everything is measurable.
Fixing a bunch of UI glitches that affect a specific feature won’t be easily measurable, but it does increase overall brand sentiment and users’ enjoyment of your service. In the long term it will show up in Net Promoter Score.
Removing tech debt or making systems more reliable isn’t always measurable either. It’s still critical and impactful, since your development speed will slow down if you don’t keep the system easy to extend. The art is in picking changes that will actually help, not refactoring toward a ‘cool new technology’.
Doing code reviews helps your team get better by teaching others new things about the code, while driving for a high quality bar.
If you think it’s important and don’t know how to measure it, talk with your manager or your peers. See if they understand the value of doing it at this time. They might also have a suggestion for how to measure the impact.
While not everything is measurable, you should cultivate a bias towards measuring.
Why a Bias toward Measuring?
Consider the following statements, which typically show up in people’s self-reviews:
- “I shipped my features on time.”
- “I participate in architectural reviews for front-end projects within the company.”
- “Helped coordinate mobile and front-end engineering, analytics, and QA.”
- “I participate in the culture-setting meetings.”
These are checklist items. They are ways you can have impact, but not the actual impact you have.
Now consider these examples:
- “Enabled the checkout flow directly from the product page to increase purchases by 10%”
- “We used to take 20 minutes per product line, and 20% of the time the process failed and had to be re-run. After my change we were able to sync new product lines in 10 seconds. We never had to touch that process again.”
- “I participated in 20 interviews this half, leading to 2 new hires.”
- “I scheduled a sync meeting for all client engineers. We identified 3 areas that need refactoring. 2 of them are already complete and crashes have gone down 20%.”
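The before/after numbers in the sync-process example above can be sanity-checked with quick arithmetic. This sketch assumes (hypothetically — the original text doesn’t say) that failures are independent and that each failure forces a full re-run:

```python
# Back-of-envelope check of the sync-process example (numbers from the text).
# A run takes 20 minutes and fails 20% of the time; if every failed run must be
# fully redone, the expected number of attempts is 1 / (1 - failure_rate).

minutes_per_run = 20
failure_rate = 0.20

expected_attempts = 1 / (1 - failure_rate)                      # 1.25 attempts on average
expected_seconds_before = minutes_per_run * 60 * expected_attempts  # 1500 seconds
seconds_after = 10
speedup = expected_seconds_before / seconds_after               # ~150x

print(f"before: ~{expected_seconds_before:.0f}s per product line")
print(f"after:  {seconds_after}s (~{speedup:.0f}x faster)")
```

A ~150x speedup plus eliminating reruns is exactly the kind of concrete framing that makes an impact statement land.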
It is very clear why these projects had impact: each one increased a business metric, saved time and freed people up to work on other things, or otherwise made the company better.
But I’m not the PM!
Engineering and product management should be aligned on building the right products with minimum effort. Engineers look at the product every day, we read the code, and we go through the process of deploying and changing and fixing. Impactful things don’t have to be new products. They can be UI fixes, more stable systems, things that make it easier to use the product. For example, identifying critical actions that are below the fold on mobile devices, or improving a login funnel to transition people directly into an abandoned checkout flow, could have large impact on the company’s key metrics. These are all things engineers can identify, communicate and drive.
How to Think about Impact
Think about impact in what your company values:
- Did you “write a spec”, or “write a spec for a service that changed X”?
- Did you “review a lot of code changes”, or “review the sensitive code changes that A, B, and C seek you out for”?
- Did you “send out a survey”, or “send out a survey, identify the key problem area, and convince the tools team to add it to their roadmap”?
- Did you “build a system”, or “build a system that facilitated X and was adopted by teams A and B”?
- Did you “run a test”, or “run a test that taught us that users don’t like getting messages during their lunch hour”?
Without thinking about the impact you’re having, you might just be wasting your and other people’s time.
Thinking about impact throughout the product creation process takes different forms.
- Does this matter? How and Why?
- How will you measure success?
- Is success aligned with the company’s top goals?
- If you do this and get result X, do you know how to proceed? If you don’t know how you’ll deal with the results (especially if they don’t bear out what you expect), think about what other information you’d like to measure to understand what’s going on. Don’t just assume success.
- Is this really the MVP, or is this over-engineered?
- Can you create a tech-debty version in 2 weeks to see if the concept is good, before spending 6 months building a full system?
- Do you need to instrument anything to understand success / failure?
- How quickly will you know if this is a valuable avenue to pursue?
- As you build features, try them out. Do you think they’ll do what they’re meant to do? What would you change? What feels incorrect or suboptimal from a user perspective?
- Do you have suggestions that will make the prototype better? Suggestions to test the concept faster?
During Launch / Testing / Monitoring
- How early can you know if you’ve set up the system correctly?
- How often are you going to check on results? Waiting 4 weeks to discover that an experiment is not set up correctly is wasteful.
- Do you have results? What do they tell you you should do?
- Did the experiment fail? That’s great! What did you learn about your product or your customers that will help the company make better decisions? Who needs to know?
Isn’t This Rewarding Luck?
Consider what your company’s performance reviews emphasize. Do you get rewarded for motion (shipping), or for progress (impact)?
The goal is to reward people who make good decisions. Some luck is involved in everything we do, but think about the following two people’s performance over 3 consecutive performance review cycles.
The Engineer’s Engineer:
Term A: Great planning, project didn’t pan out.
Term B: Great coding skills, projects had minimal impact.
Term C: Architect-level thinking about the system forced 4 teams to rewrite their interfaces. A way to make the current system better was discarded because it wasn’t cool enough.
The “Lucky” Engineer:
Term A: Ran lots of short-term experiments, 10% increase to Widget Production.
Term B: Reworked the site’s design, 5% reduction in abandoned carts.
Term C: Refactored backend stack, page load time decreased by 20% driving 5% decrease in bounce rate.
No one is THAT lucky. The “Lucky” engineer is making smart choices. She’s picking the right projects, finding ways to verify whether something is worth doing, and evaluating work with the end result in mind.
The Engineer’s Engineer is wasting some amazing technical capabilities on the wrong things.
Do You Promote a Short-Term View? What About Long-Term Projects?
I’m promoting a balanced approach with a preference for the short term. If you start with a long-term approach, you’ll have lots of things that might be great next year, but the company might not be around long enough to see them realized. If you start with a short-term focus, you then find ways to accelerate projects, to refine your MVP, and to very quickly identify a wrong path and terminate it. Companies should define a mix of long-, medium-, and short-term bets. One mix could be 70% of projects with impact within 6 months, 20% within 1 year, and 10% longer term. Calibrate to your situation and your industry, and evaluate based on the product’s and company’s life cycle.
As for long-term projects, do them well! Sometimes you need to create a new database because you can see that your infra will fall over in a year. Sometimes you need to research a new technology that will unlock new product opportunities and business lines. Think about the following:
- Is this critical for the business? In what terms? Can you convince others working with you that this is more important than other things on your plate right now?
- Is there a simpler, shorter, maybe less cool way to achieve what you’re doing?
- Is there a way to quickly test the hypothesis before building the entire system? The last thing you want is to work for a year on a new ML infrastructure and train it on people’s preferences, only to discover that those don’t drive sales.
- Will the rest of the company keep using the ‘legacy’ technology and improving it, to the point that when you’re done with the new technology, you’ll have two well-functioning systems that now compete? Maybe incrementality is better and less disruptive than the new shiny thing?
- Will the impact of a big project be proportional to the time it takes? If not, why is it worth doing?
If you decide to pursue a project, set clear milestones and make sure you’re making real progress towards each, not just “still working on it”. Keep asking — does this still make sense?
Does This Mean You Punish Failure?
Our definition of failure might differ:
- An experiment that shows that a change in the system will not improve the business is not a failure so long as you’ve learned something about your customers or your product.
- An experiment that you ran for 4 weeks, only to discover that you set it up incorrectly and have to rerun it is a failure of execution.
- A year-long project that fails is hard. Consider why it failed. Was there something you could have done to either make it more successful, or to learn earlier that it would fail? This is a management failure as well — why did this project start, and why wasn’t it re-evaluated and stopped?
I don’t punish failure. I reward risk-taking and success. Making the right choices and executing well gives you a much higher chance of success.
P.S. Which companies was I talking about?
Most companies don’t fail for just one reason. The “ship” company had other issues that caused it to eventually be sold off. The impact company is Facebook. My current company, Lyft, is pursuing impact.