When your beloved product bites the dust…

[Image courtesy looper.com]

Thinking up and designing a product is akin to chasing your crush (I naively started to write 'planning your marriage', but that is a problem of much higher complexity). The key message is that you get extremely attached to the output, while the process is a roller-coaster.

Whenever you showcase a version to customers and they give any feedback, positive or negative, your blood pressure spikes. Every sprint drives towards something tangible; you want to see a fully functional, integrated result.

Failure is a reality; in fact, failure is the more probable outcome. You just prefer it to fail sooner rather than later, while hoping it doesn't fail at all.

Long build = Higher cost = Greater attachment = Super dejection

The ideal path is: User research → MVP → Rapid prototyping → User feedback → Iterate, which results in:

Short sprints = Low cost = Easy to detach = Low dejection

But it isn't always so easy, as a number of external factors influence the development sprints.

A case in point is the comprehensive smart meter data analysis application we built as a prototype for a large utilities player. The problem targeted was (near) real-time energy consumption monitoring for better supply and demand management.

Smart metering is on an ambitious journey as traditional meters are to be replaced. Replacement must be done with the customer's consent, and the cost of the meter is borne by the customer. All the data generated (half-hour consumption reads in this case, so 48 data points per day per customer) becomes available to the supplier, which can help with better load balancing, peak demand management and dynamic pricing.

In parallel, there are customer concerns about cost loading, monitoring and data misuse (the reads reveal when you are usually at home and when you are not). Not everyone is tech savvy, and privacy is a strong argument. That said, analysis of the consumption data can produce extremely pocket-friendly recommendations, making it a win-win for supplier and customer.

There were two sides to this application to start with: one for residences and commercial properties to understand their consumption behavior, and a second for the energy supplier to see aggregate demand and anomalies.

We built the first prototype on Tableau and, after the first round of customer feedback, started HTML/Python development. Our choice of visualizations and drill-downs with cross-filtering created a great first impression, and we were asked to create a functional web version. They shared anonymized and fudged (each read multiplied by a random factor between 0.9 and 1.1) consumption reads for the last 5 years for close to 100k properties, along with their bill plans. We could quickly create models for consumption segments (k-NN clustering), user profiles and peer benchmarks.
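For a flavor of the fudging and segmentation steps, here is a minimal sketch assuming pandas and scikit-learn. The post mentions k-NN clustering; this sketch swaps in scikit-learn's KMeans on normalized 48-point daily load signatures, so treat the library choices and parameters as illustrative, not our exact pipeline.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Stand-in for the shared reads: one row per property-day,
# 48 half-hourly kWh columns (the real data came from the supplier).
reads = pd.DataFrame(
    rng.gamma(shape=2.0, scale=0.15, size=(1000, 48)),
    columns=[f"hh_{i:02d}" for i in range(48)],
)

# The fudging step: every read multiplied by a random factor
# between 0.9 and 1.1 so true consumption is masked.
fudged = reads * rng.uniform(0.9, 1.1, size=reads.shape)

# Normalize each day so clusters capture the *shape* of consumption
# (when you use energy) rather than the volume (how much).
profiles = fudged.div(fudged.sum(axis=1), axis=0)

# Segment into consumption groups; k=5 is an arbitrary choice here.
segments = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(profiles)
print(pd.Series(segments).value_counts())
```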

Next came anomaly detection models to check for outages, spikes and outliers. Using consumption gradients we carried out appliance identification, e.g. room heating, water heating, lights and others. Lastly, we modeled the best-fit bill plan given the weekday and weekend hourly consumption signatures. These models helped us derive actionable energy-saving recommendations.
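The anomaly logic can be sketched simply. Below is an illustrative version (not our production models) that flags sustained zero reads as candidate outages and large rolling z-score deviations as spikes or outliers, assuming a pandas Series of half-hourly reads:

```python
import pandas as pd

def flag_anomalies(kwh: pd.Series, window: int = 48 * 7, z_thresh: float = 4.0) -> pd.DataFrame:
    """Flag candidate outages, spikes and outliers in half-hourly reads.

    kwh      : consumption series indexed by half-hour timestamps
    window   : rolling window for the baseline (default: one week)
    z_thresh : how many standard deviations count as anomalous
    """
    baseline = kwh.rolling(window, min_periods=48).mean()
    spread = kwh.rolling(window, min_periods=48).std()
    z = (kwh - baseline) / spread

    return pd.DataFrame({
        "kwh": kwh,
        # Two hours of consecutive zero reads: candidate outage (or dead meter).
        "outage": kwh.eq(0).astype(int).rolling(4).sum().ge(4),
        # Large positive deviation from recent behavior: spike.
        "spike": z.gt(z_thresh),
        # Large deviation in either direction: outlier worth reviewing.
        "outlier": z.abs().gt(z_thresh),
    })
```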

We also created dummy data for provisioned capacity to build a comparative view of actual aggregate consumption versus forecast for the supplier. Anomalies and consumption heatmaps were loaded onto a geolocation view using leaflet.js, with the ability to drill down to the customer level. A roadmap feature was to link actions directly with the selections made here. The toughest part was time series forecasting of consumption at the city level, for which we built an ARIMA model.
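As a sketch of the forecasting piece, here is how a city-level ARIMA fit might look with statsmodels; the order and frequency below are placeholders, since we tuned ours to the data:

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

def forecast_city_load(reads: pd.DataFrame, steps: int = 48) -> pd.Series:
    """Forecast aggregate half-hourly consumption for a city.

    reads : DataFrame with a DatetimeIndex, one kWh column per property
    steps : half-hour periods to forecast (default: one day ahead)
    """
    # Sum all properties into a single city-level series.
    city = reads.sum(axis=1).asfreq("30min")

    # The (p, d, q) order is a placeholder; in practice select it via
    # AIC, and consider seasonal terms for the strong daily cycle.
    fitted = ARIMA(city, order=(2, 1, 2)).fit()
    return fitted.forecast(steps=steps)
```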

Next, this application evolved to cover other services such as engineer support, billing, complaints, etc. The idea of uberisation came up in the context of engineer capacity optimisation, and we started to build the third side of this application: the engineer view. Making the application good for support services was necessary, given that the smart meter roll-out was also to be tracked with it. Customers get to see their allocated engineer and ETA, while engineers get to see their job schedule and route.

Hence came the problems of territory alignment (traveling salesman) and vehicle routing for the engineers, once a daily schedule is created from customer requests and routine jobs from the supplier. Even emergency alerts were to be provisioned for. The aim is to maximize attended jobs and minimize commute distance, while also factoring in collection of replacement parts and equipment from the warehouse. After formulating the problem mathematically, we implemented it with linear programming, using Python implementations of the max-p regions problem and the SKATER algorithm for territory creation and vehicle routing.
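Our exact formulation is out of scope here, but a toy version of the assignment step (every job attended exactly once, a daily cap per engineer, total commute distance minimized) might look like this with PuLP; the engineers, jobs and distances are made up for illustration:

```python
import pulp

engineers = ["E1", "E2"]
jobs = ["J1", "J2", "J3", "J4"]
max_jobs_per_engineer = 3

# Hypothetical travel distances (km) from each engineer's start point.
dist = {
    ("E1", "J1"): 2, ("E1", "J2"): 9, ("E1", "J3"): 4, ("E1", "J4"): 7,
    ("E2", "J1"): 8, ("E2", "J2"): 3, ("E2", "J3"): 6, ("E2", "J4"): 2,
}

prob = pulp.LpProblem("job_assignment", pulp.LpMinimize)
x = pulp.LpVariable.dicts("assign", list(dist), cat="Binary")

# Objective: minimize total commute distance across all assignments.
prob += pulp.lpSum(dist[k] * x[k] for k in dist)

# Every job must be attended by exactly one engineer.
for j in jobs:
    prob += pulp.lpSum(x[(e, j)] for e in engineers) == 1

# No engineer exceeds the daily job cap.
for e in engineers:
    prob += pulp.lpSum(x[(e, j)] for j in jobs) <= max_jobs_per_engineer

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for (e, j), var in x.items():
    if var.value() == 1:
        print(f"{e} -> {j}")
```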

Great story so far!

There is an underlying problem, though: a lot of this happened without regular interim feedback and was driven by competing product features. In fact, we were going through more internal reviews than proactive customer surveys. Feedback on this application was cut short due to a lack of sales reach. All this development and testing took close to 18 weeks. Now we had an appealing but bloated MVP at hand. We even set it up with scripts to upload part of the data in (near) real time, reflecting an actual consumption scenario; refreshing the application at certain intervals added more data points to the consumption views. Jazzy!
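The replay scripts were simple in spirit. Here is a hedged sketch of the idea, with file names as placeholders: append historical reads to the application's data feed at short intervals, so each refresh appears to pull live data.

```python
import time
import pandas as pd

# Historical half-hourly reads, sorted by timestamp (paths are placeholders).
history = pd.read_csv("consumption_reads.csv", parse_dates=["timestamp"])
history = history.sort_values("timestamp")

# Push one half-hour batch every few seconds so each application
# refresh appears to pull freshly arrived data.
for ts, batch in history.groupby("timestamp"):
    batch.to_csv("live_feed.csv", mode="a", header=False, index=False)
    print(f"pushed reads for {ts}")
    time.sleep(5)  # compresses half an hour of real time into seconds
```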

[A few screens from the customer view mode]

So, to put it all together: over 6 months we built an application with real-time data ingestion capability, three different interfaces (customer, supplier and engineer), and fully responsive HTML pages with DC.js/D3.js charts. It had clustering, segmentation, profiling, time series forecasting, territory alignment, scheduling, vehicle routing and geolocation views. A beautiful combination of technology and analytics development. Shoutout to the data scientists and developers on the team.

So, what made the product fail? Why did the hearts break?

Answer: go-to-market failure.

We picked a product idea in a space where we had only 3 clients to get feedback from, and there wasn't enough sales support to take it out to many other prospects. Even the analytics sales lead moved on during the same period. To make it worse, operations leadership decided to partner with another market-ready application provider in white-label mode, so they could make inroads into other utilities providers for service contracts. Yes, you can relate this to the issues mentioned in my last post.

Our home-grown, market-ready application was relegated to a cold box. It has been close to a year now. What is the future of this product?

A quick Google search on smart meter analytics will give you an insight into the current and future prospects of such an application. With the rise of IoT devices it will become imperative to utilize this flood of data to find useful, actionable patterns. Smart meter adoption in developing countries hasn't even started, and coverage in the US/EU is still only partial. The market potential for such an application is tremendous.

Guess we made something that couldn’t immediately gather confidence within the sales and operations leadership at that moment.

Back with a vengeance

To be optimistic, the product is just waiting for the right time to bounce back. There is always reusability of the components, depending on how modular you made them.

It was best for the team to move on (at least temporarily) and focus on another advanced analytics problem. Thus we started on an information extraction tool (so much client attention! everyone has unstructured documents and text they want made useful).

I will talk about that some other time, in the context of Bills of Lading. There is an interesting story there about document classification, OCR/XML parsing and word2vec deep learning.

Cheers!