Growth Opportunities for Data Product Management

Phil Hopkins · Agile Insider · Aug 14, 2019

Image by Gerd Altmann from Pixabay

The most satisfying days in a product manager’s life are often the earliest ones in a project, when we can shape the outcome by defining the product vision and strategy, by suggesting a key innovation, by creating a definitive document, or even by steering the team away from a bad decision.

But there is nothing like the feeling of delivering a completed product at the end of a long process and seeing it work in production, gain acceptance among users, and generate meaningful statistics that support its path to wide adoption.

Growth opportunities for internally facing products, including data pipelines, are sometimes hard to uncover or articulate. For a data product manager, they are best defined in terms of improvements in data availability and data quality, and reductions in latency.

Case Study: Growing the Value of a Bloomberg Data Pipeline

In my first product manager role, at an investment bank, I was responsible for a team that ingested, around the clock, every market price for every product the bank traded, from Bloomberg and other real-time data providers.

That challenge was sizable, but in order to grow the value of the data pipelines that we provided to the firm, the head of the market data group gave us a new goal.

We were asked to decrease the time, already measured in milliseconds, that our data pipeline needed to process market prices, reformat them, and pass them to trading systems. This would let the algorithmic trading programs that run without human intervention buy and sell securities much more quickly in response to price changes. If we accomplished this goal, the competitive advantage for the whole company would be measured in the tens of millions of dollars, perhaps more.

One morning over coffee with an engineering lead, I learned about the delta compression approach we used to store market prices on disk, a technique that saved storage space on our systems. Rather than storing every price as a 32-bit integer, we stored most of them as just a few bits reflecting only the deviation from the first price of the day for that stock.
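To make the idea concrete, here is a minimal Python sketch of that storage scheme. It is an illustration under my own assumptions, not the bank's actual code: prices are fixed-point integers, the first price of the day is kept in full, and every later price is stored as a small signed offset from it.

```python
# Minimal sketch of delta compression for a day's prices (illustrative only;
# function names and sample values are hypothetical, not the bank's code).

def delta_encode(prices: list[int]) -> tuple[int, list[int]]:
    """Keep the first price of the day in full; store each later price
    as its deviation from that base."""
    base = prices[0]
    deltas = [p - base for p in prices[1:]]
    return base, deltas

def delta_decode(base: int, deltas: list[int]) -> list[int]:
    """Reconstruct the original price series from the base and deltas."""
    return [base] + [base + d for d in deltas]

# Hypothetical fixed-point prices for one stock over a day.
prices = [4210050, 4210075, 4210025, 4210100]
base, deltas = delta_encode(prices)
print(deltas)  # [25, -25, 50] -- small values
print(max(abs(d) for d in deltas).bit_length())  # 6 bits (plus a sign bit), not 32
assert delta_decode(base, deltas) == prices
```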

We played with the idea of using delta compression to reduce latency by shrinking the messages that carried each newly ingested price to the trading systems. Could delta compression make the messages small enough to process faster? We tried the idea in a test environment, and sure enough, the smaller messages spent less time and occupied less space in the memory queues and network buffers, and throughput rose sharply.
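Here is a rough Python illustration of why the messages shrink. The wire layout, a 4-byte symbol id followed by either a full 4-byte price or a 1-byte delta, is a simplified assumption for the sake of the example, not the actual message schema we used.

```python
import struct

# Hypothetical wire formats for a single price update (assumed layout,
# for illustration only).

def pack_full(symbol_id: int, price: int) -> bytes:
    # 4-byte unsigned symbol id + 4-byte signed full price
    return struct.pack(">Ii", symbol_id, price)

def pack_delta(symbol_id: int, delta: int) -> bytes:
    # 4-byte unsigned symbol id + 1-byte signed delta (when it fits)
    return struct.pack(">Ib", symbol_id, delta)

full = pack_full(1001, 4210075)
small = pack_delta(1001, 25)
print(len(full), len(small))  # 8 vs. 5 bytes per update
```

Scaled across millions of price updates a day, even a few bytes saved per message means meaningfully less time spent in queues and buffers.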

I created the requirements with the engineering lead, we projected the return on investment across multiple years, and we received approval to implement a proof of concept. We ran it in parallel with production for three months, then rolled it out on one trading desk and saw significant improvements in trading speed: latency fell by two thirds. The bank profited tremendously from that first rollout and soon expanded it across all trading systems.

We had achieved our goal of growing the value added by our data pipelines while increasing the firm's competitive advantage through greater algorithmic trading profits. Our team's standing within the bank rose as well. Overall, a winning result for everyone.

Lesson Learned

What we learned is that for data products, growth is measured not in consumer software shipped and adopted, but in incremental changes that usually arrive without shiny new UI features. Nevertheless, the satisfaction of delivering value to the bottom line is always compelling.
