The Speed of Innovation

Nov 25, 2020

Author: Bhavini Soneji — VP of Engineering

Some of the biggest challenges faced by product technology leadership today are:

  • Balancing velocity and quality
  • Balancing product features and tune-ups (refactoring)
  • Balancing staffing investment in product teams and platform teams
  • Determining principles to drive prioritization & decision making for all of the above

There are various factors to consider in determining how to address these — e.g. the size of the company, the phase of the company (growth phase, still finding product-market fit, etc.), the maturity of the technology team, and tooling. There is no right or wrong answer, but ultimately, the largest determinant of success is “speed of innovation”.

Let’s start by looking at what constitutes the roadmap. If we simplify the world, there are two categories of items in the roadmap:

  • Innovation (product features)
  • Tune-Up Items driving the speed of Innovation

Let’s drill down into this second category (Tune-Up), which makes up the backbone that enables innovation in the first place. It consists of automating the mundane, time-consuming, manual workflows carried out by internal staff (developers, customer support, etc.).

For example, investing in:

  • Releasing fast with quality
  • Fast incident detection and response. You don’t want to fly blind.
  • Building common reusable components.
  • Continuously refactoring to ensure components are secure, scalable, and available, with low latency and continuous delivery. It is like servicing a car: some jobs are lightweight, like an oil change, and some are more invasive tune-ups.

So, what this amounts to is a flywheel that feeds into itself. Investing in Tune-Up (speed of innovation) items leads to higher development velocity, which leads to improved product innovation, which leads to scaling the team, which then leads to more investment in speed of innovation.


Now, let’s get back to product innovation. Prioritization for this category consists of evaluating impact by answering these questions:

  • What is the customer impact?
  • What is the business impact?
  • What is the impact on the operational efficiency of the business?
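
To make this concrete, here is an illustrative sketch of turning these three questions into a comparable score per roadmap item. The weights, ratings, and item names are my own placeholders, not an actual Headspace process:

```python
# Hypothetical sketch: scoring roadmap items against the three impact
# questions above. Weights and ratings are illustrative assumptions.

def impact_score(customer: float, business: float, ops_efficiency: float,
                 weights=(0.4, 0.4, 0.2)) -> float:
    """Weighted sum of impact ratings (each rated 0-10)."""
    return (weights[0] * customer
            + weights[1] * business
            + weights[2] * ops_efficiency)

# Placeholder backlog items with made-up ratings.
backlog = {
    "personalized onboarding": impact_score(9, 7, 3),
    "billing retry logic": impact_score(4, 8, 6),
    "internal admin tooling": impact_score(2, 3, 9),
}

# Rank highest-impact items first.
for item, score in sorted(backlog.items(), key=lambda kv: -kv[1]):
    print(f"{score:.1f}  {item}")
```

The point is not the specific formula; it is that writing the lens down as explicit weights forces the prioritization conversation into the open.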

Once the what has been decided, you need to move on to the how, while keeping the core essence of “done is better than perfect”. The key here is speed to market and quickly validating the hypothesis using data from customers. Here are some steps:

  • Step 1: Validate your hypotheses early using just sketches or prototypes (test builds) sent to a select few users (a beta).
  • Step 2: If that pans out, move to the next step of building an MVP and A/B testing.
  • Step 3: Once the feature gains traction and the decision is made to ship broadly, take the time to put the right operational and quality gates in place to scale the feature to a large audience.
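
One common way to implement Steps 2 and 3 is deterministic percentage bucketing: each user is hashed into a stable bucket, and the rollout percentage is widened over time. A minimal sketch, assuming a hash-based flagging scheme (the experiment name is a made-up example):

```python
# Hypothetical sketch: deterministically bucketing users into an experiment,
# then widening the rollout percentage from beta to full launch.
import hashlib

def bucket(user_id: str, experiment: str, buckets: int = 100) -> int:
    """Stable bucket in [0, buckets) derived from user id + experiment name."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % buckets

def in_rollout(user_id: str, experiment: str, percent: int) -> bool:
    """A user sees the feature if their bucket falls under `percent`."""
    return bucket(user_id, experiment) < percent

# Step 2: expose the MVP to 5% of users; Step 3: widen to 50%, then 100%.
for pct in (5, 50, 100):
    exposed = sum(in_rollout(f"user-{i}", "new-meditation-timer", pct)
                  for i in range(10_000))
    print(f"{pct:>3}% rollout -> {exposed} of 10000 users exposed")
```

Because the bucket is derived from a hash rather than a coin flip, a user who was in the 5% beta stays in the experiment as the percentage widens, keeping their experience consistent.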

The key is follow-through and not running after the next shiny object. Once the team has fine-tuned and adopted enhancements based on data and customer learnings, only then does the feature drive optimal outcomes for the customer. Give teams the space and time to complete this last mile of optimizations.



Just like product innovation, we need to evaluate these Tune-Up (“run the business”) items through a similar lens, asking questions like:

  • What is the customer impact? (e.g. bugs, latency, lengthy turnarounds, lengthy time to value)
  • What is the business impact? (e.g. brand trust, revenue impact due to downtimes)
  • What is the impact on the operational efficiency of the business? (e.g. productivity of developers and internal staff)


This is not one-size-fits-all. The investment that a company makes depends on its product/tech maturity, tech team size, and number of different business/product lines. For example, a young startup needs to create products and POCs instead of investing heavily in speed of innovation. On the other hand, a startup that is focusing on growth needs to invest more in it.

So, the key is alignment and prioritization between the Product and Engineering teams during company strategy planning. This should be part of the product and tech DNA of the company and not an afterthought. Alignment is critical in making sure everyone is rowing in the same direction.

Next comes metrics/measurement to inform data-driven decision making. This is achieved by capturing metrics around the product development phases and the overhead of production incidents, and by defining KPIs to measure success against. This metrics/measurement topic warrants a dedicated deep-dive blog post.
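The deep dive is for another post, but as an illustrative sketch of what “capturing metrics” can look like, here are two of them computed from simple event records. The data and metric choices (lead time, incident overhead) are my own placeholder examples:

```python
# Hypothetical sketch: computing two development-phase metrics from
# simple event records. All data below is made up for illustration.
from datetime import datetime, timedelta
from statistics import median

# Illustrative data: (work started, shipped to production) per change.
changes = [
    (datetime(2020, 11, 2), datetime(2020, 11, 4)),
    (datetime(2020, 11, 3), datetime(2020, 11, 10)),
    (datetime(2020, 11, 9), datetime(2020, 11, 11)),
]
# Illustrative data: engineer-hours spent on each production incident.
incident_hours = [1.5, 6.0, 0.5]

lead_times = [(shipped - started) / timedelta(days=1)
              for started, shipped in changes]
print(f"median lead time: {median(lead_times):.1f} days")
print(f"incident overhead: {sum(incident_hours):.1f} engineer-hours")
```

Tracking even a couple of numbers like these over time is enough to tell whether Tune-Up investments are actually moving the speed of innovation.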

As feature (application) teams grow or expand to different business/product lines, staffing horizontal teams becomes critical to laying the technology foundation and driving reusability and consistency. Two types of horizontal teams are:

  • Platform teams: these lay out the common building blocks that application teams build on or reuse.
  • Framework/infrastructure teams: these lay out the tooling and infrastructure that application teams integrate with to drive continuous releases with quality gates, and to enable fast incident detection and response.

Lastly, we are never done. We need to repeat the process of Align -> Prioritize -> Measure.

There will be times when teams might be focusing more on product innovation. Sometimes the pendulum has to swing toward refactoring to meet speed, scale, and operational excellence. Refactoring is like servicing a car: some jobs are lightweight, like oil changes, and some are more invasive tune-ups.

The bottom line: have transparency and clear communication on decisions and tradeoffs, while keeping the flexibility to align with business priorities and meet hard deadlines if needed.

Here is the YouTube recording of a talk I gave on this topic at the Elevate 2020 Summit, the largest virtual summit for engineering and product leaders.


Blog for Headspace Engineers.