“By Christmas I’ll either have a successful product or be out of a job.”
I clearly remember the conversation that I had with my wife at the start of 2018 about the new venture I had been asked to take on in my company and the risks it entailed. A year on and I’ve a wry satisfaction in my ability to predict the future so accurately. I’m also very well placed to reflect on the experiences of the last year and share some of the lessons learned from the highs and lows of creating an entirely new product from scratch.
A good track record
Late in 2017 I’d come off the back of establishing an entirely new Product Owner team at River. I was heading up a strong team, alongside having personally led the creation of both a high-revenue brand loyalty platform and a new central platform that strategically positioned the company to deliver all of its previously bespoke “Employee Engagement” programmes from one technology stack. I had established a solid reputation in my company for aligning teams and delivering valuable software even in challenging circumstances and the chaos of agency work.
Aside from the one loyalty platform, the company’s client programmes typically fell into one of two main categories. Along with employee engagement, they also provided programmes which were internally labelled “BI” but were realistically less about Business Intelligence than the distribution of Management Information, sometimes labelled “MI” (see here for an explanation of the difference). These programmes were based on distributing operational metrics to key roles to identify problem areas and to establish and communicate action plans to address them. They were focussed in the automotive sector and were typically more profitable as they were supported by operational rather than HR budgets.
Having done some early internal domain modelling of the existing client programmes I and my team were excited about the opportunity to create a new core platform based on the strong value proposition that our automotive customers were getting from their BI programmes, and it felt like the right time to do it.
A great idea at the time
Around the same time a new senior exec was brought in to commercially lead the BI offerings. Rather than progressing the previous trend of separating the BI and Engagement areas of the business, the new vision that they brought with them was to bring the two together. By applying employee engagement principles to distributed data, the thinking was that we could empower and motivate employees and teams to drive for measurable success.
The new vision was presented in all-company meetings as the new strategic direction of the business. It was a neat idea in principle, and tied into the rhetoric that was coming from some thought leaders in the field of employee engagement who were using terms such as “data democratisation” as the future of driving engagement and responsibility across the modern workforce.
Out of an off-site workshop with the sales and marketing team and the CEO/COO, the company vision was expanded on and the key marketing messages decided. The question of validating that there was a genuine market for the idea apparently did come up during this early workshop session, but the weight of belief in the idea was such that preliminary market identification and validation simply weren’t perceived as necessary.
Just build a product
Instead the approach decided upon was to go ahead and create a new software product to support the vision. This is the point at which I was brought in to head up the product development. It made sense for me to lead this as I was able to fulfil both product ownership and the overall running of the development and supporting operations, and had done an amount of preliminary work on analysing our BI offerings. One thing that I wasn’t able to provide was the ability to identify and validate the existence of new target markets for the product; however, given the prior experience of others in the senior team who were covering that aspect of the venture, it seemed like we had everything well covered.
Our approach to creating the product was simple, borrowing heavily from Eric Ries’s writing on “Lean Startup”:
- Create some user journeys to act as a prototype for the concept and validate the concept with early customers
- Create a minimum viable product (MVP) with a core feature set as quickly as possible
- Identify a set of friendly early adopters that reflected our target market and run small scale, well supported MVP trials
- Refine the core feature set based on feedback from the trials and productise these features
- Round out with core third party data integrations and any ‘hygiene’ features that would be standard expectations for a generally available SaaS product
- Progress to beta and GA launch
Ironically, while Ries’s book is centred on the importance of validated learning, and even though our approach was based on the creation of an MVP and trialling, at no point was it honestly considered that we wouldn’t progress beyond the MVP stage. As a company our collective confidence in the idea was such that any attempts at risk mitigation were perceived as negativity and quickly argued down.
- I recall suggesting the option of starting with throwaway prototyping to validate the idea. We decided that ‘just building it’ would allow us to launch quicker once we’d proved the idea.
- I was keen not to give the product the name of the parent company, as I felt it would alienate people working on other projects and prevent us from creating different products in future if this one wasn’t successful. Again, confidence that this product was the future of the company was so high that we not only named it after the company, but presented at an all-company meeting how this product was the future and would replace the existing revenue-generating business in 3 years.
- When I shared experiences from talking to venture capitalists, specifically around the low percentages of success in start-up software products and the need to take an experimental approach, this again fell against the rhetoric of our somehow being a special case with a much higher chance of success.
- When I discussed my concerns about staking my own reputation on creating a product to support an idea which we had no evidence would work, my boss agreed that I was taking a risk; however, it would be worth it if the product succeeded, and my reputation shouldn’t suffer if it didn’t — we agreed it was a gamble but a worthwhile one.
I’m not suggesting here that I was the only one raising concerns (I know that others were), or that I wasn’t caught up in the positivity of it myself — as a product leader you have to get behind your product, and the collective momentum behind the idea was very hard to resist. It was simply that we had a strong leadership group with an unwavering belief in our being able to improve business productivity, a belief based on little more than theoretical idealism and an unproven concept.
A positive start
A couple of months in and we seemed to be progressing well on a few fronts. We had conversations going with some promising potential trial partners and were getting positive feedback from contacts in various companies and fields about our MVP. After a false start an initial feature set was coming together into a coherent shape that we could really learn from.
Then coming into the summer our momentum started to slow. It seemed that, despite the encouraging conversations and feedback, it was proving very hard to get any trials going. The positive messaging that we were getting from high-level execs wasn’t translating into a genuine need or desire to progress when the responsibility for moving the trials forward was given to someone down the chain with more hands-on concerns. More than once a positive conversation with a C-level executive turned into a painful endeavour of chasing and harrying to try to progress to an active trial — folks simply had more pressing things to occupy their attention.
Even those partners that were keen to trial were large organisations with very slow processes of sign-off that meant starting the trials was a very drawn-out process and as we hit the summer holiday lull things inevitably stalled. Meanwhile we were still developing — we’d refined target features based on theoretical use by our known triallists but were still operating in the absence of genuine feedback. We’d gone well beyond what we could validly describe as an MVP and pressure was growing for me to improve the sales demo, on-boarding, styling and branding of the product to make it look more ‘finished’ despite not having validated the feature set in successful trials.
Our attempts to trial internally, or ‘eat our own dog food’, also dragged. Some of the challenges were practical, in terms of having the right data and internal time to support the implementation — we were after all a busy agency business. I personally think that there were also motivational challenges. There was a tangible reluctance in some folks to support something that people had been told was the future of the company and more important than their own jobs. Probably more telling was that people were less enthusiastic to engage and collaborate around their data than our vision suggested — even the few teams that were genuinely enthusiastic weren’t using it routinely. These combined factors left internal implementation as a challenging endeavour, and one we should have paid more attention to.
Getting Genuine feedback
Coming out of the summer lull we managed to regain momentum, get a few trials up and running and start to get objective and subjective feedback. The anecdotal feedback from those leading the trials was good and they and their teams were enthusiastic about the product. They understood what it was trying to achieve and found the experience of using the product intuitive. The raw numbers coming back from our usage analytics were less positive though. While people liked the idea of what the software delivered, and the experience of using it, it wasn’t giving them enough day to day value to be habit forming.
A number of concerns became apparent.
- People were less than enthusiastic about engaging around their data — Whilst a nice idea, in practical use the people we trialled with showed little interest in engaging, connecting and improving around performance data. Our attempts at internal use had provided an early indication of this, and further trials bore out the evidence that most folks are far less interested in understanding numbers that showed how they contribute to company success than we had hoped. We’d essentially invalidated the initial hypothesis upon which the decision to build a product had been based.
- We didn’t get expert UX researchers involved early enough — Our internal UX skills were very much at the UI design end of the UX spectrum, and from the start I knew we needed to bring more expertise in. My initial strategy of bringing in an expert consultant hit a stumbling block when none I spoke to had availability until the Autumn. When I changed focus to bring in a more senior permanent hire it was a 6-month process — a very long time in the world of a startup.
- Each triallist was trying to solve a different problem — One company was trying to use the product to manage a staged process and would have benefitted from features more aligned with a workflow management system; another wanted to do more metrics capture from the workforce, for which a survey action planning tool would have been more appropriate; whilst internally our needs were pulling more towards tracking future forecasts rather than past performance. Trying to deliver features to support each case in the short term would have led to a diluted system that weakly supported a number of use cases and was exposed to competition on each; however, any suggestion of narrowing our focus was resisted as it was felt it restricted our sales potential.
- We were trying to combine research and sales — Another challenge that we were facing was that, in order to progress more opportunities and open up more customer conversations I was coming under some pressure to optimise the sales demonstration and on-boarding process. At the same time I knew that we hadn’t established a strong enough concept of target market and value proposition to justify optimising delivery over discovery and learning. It was clear that we were attempting to use the same pipeline of leads to provide our research opportunities as our sales one and as a result were coming into conflict around how we managed those relationships. I was attempting to use our prospects to research, validate problems and understand their priorities, whilst at the same time others were trying to engage them in more traditional sales conversations and slick product demos.
- Agencies were struggling to understand — It wasn’t just internally that we were facing challenges. We’d engaged some external parties to assist with SEO (Search Engine Optimisation) and our website design. Each of them was struggling to identify the audience and key value proposition for the message. Again they identified the risk of being too broad in our target markets and proposition, and therefore not compelling enough in any one.
Changing the Approach
I was convinced that something needed to change. Aware of a gap in my own experience at the start, I’d focused a lot of energy on researching further into the subject of product-market validation. One book in particular, “The Mom Test”, came as a recommendation from a friend and was invaluable in opening my eyes to the dangers of taking compliments and positive feedback as any kind of validation. Everything I read suggested that our conversations should be 80% listening to the customer and their burning issues, yet we were taking up 90% of the conversation selling our idea.
The more that I built out my knowledge and spoke to other experts on the subject, the more convinced I became that we were neglecting the number one priority in the entire process: identifying and validating an initial target market and value proposition narrow enough to deliver with the small team that we had. Sharing this excellent post by Tren Griffin, I raised my concerns:
“What we need to do first is really establish a market opportunity — is the gap we perceive real and credible that we can establish a position of people wanting to buy software they don’t already have to fill it?”
One of our third-party agencies provided an opportunity to confirm this when they introduced us to a UX and product market research specialist who was available at short notice (we were still only halfway through waiting for our permanent UX researcher to start). I approached him with a proposal of refining our on-boarding journey whilst at the same time providing an objective critical assessment of our overall position. His conclusion was consistent with my own thoughts — our pressure to optimise delivery was inappropriate and our priorities should still be on market discovery and value proposition. We presented this challenging message back to the senior leaders. It was a difficult conversation; however, mostly they appreciated the bravery in challenging our position and were at least reassured that we had the best interests of the business very much at heart.
I immediately shifted focus to defining a narrower and more defined market to aim at. I redirected sales demos into research interviews. I categorised our existing leads into a clearer set of distinct opportunities and we commissioned further paid interviews to gather opinion and experience to pursue those opportunities. One use case in particular had come up repeatedly as one that provided clear and compelling value for multiple organisations, and the potential benefits were high. I was optimistic that pivoting towards this would be a strong positive step, require little alteration in terms of the software, and give a much clearer vision to aim for.
Whether this would have been the case I’ll never know. Unfortunately the company stopped being able to fund product investment in the face of uncertain revenue, both from the product and from our automotive sector programmes, and so my team was laid off. This saddened me greatly. I’d always known my own position carried a level of risk, but as we were an agency whose most valuable asset was development time, it never crossed my mind that our developers would not be found work elsewhere in the business.
A lot to learn
So in a year we envisioned, built, trialled and shut down an innovative and, I think, pretty cool data product that had great potential but didn’t deliver the immediate success that expectation had been built around. In terms of mistakes made and lessons learned, I’ve only scratched the surface here and will focus on key learnings in future posts. To summarise a few of the key things I’ll take away:
- Treat a product startup as an experiment or a series of experiments. Even if the venture has enthusiastic backing from the highest levels of leadership, nothing is above being tested, including the very assumptions on which the product is being built. Although we tried to incorporate learning into the creation of the product, ultimately we led with the assumption that our idea was the right one and invested in a strong marketing message around this, which made it much more difficult to adapt and pivot when that idea proved flawed.
- Don’t go too broad — our reluctance to focus on a specific market sector and problem meant that we were trying to solve too many problems for too many people. Rather than early trials giving us feedback to focus and drive forward, we ended up spreading our attention too thinly, delivering capabilities that progressed one early adopter at a time.
- Separate research from sales — we relied on sales channels to provide research subjects and feedback in the form of triallists, which presented challenges as sales interests and sensibilities conflict with the candid honesty of research. Sales cycles are slow, as people are suspicious of and reluctant to engage in them. Treating research as a separate activity supports faster feedback and more honest responses. Establishing user-research expertise, or at least techniques, in any start-up venture is a top priority and one that needs to be in place before development or design.
- Ignore compliments — We took a lot of validation in the early stages from positive feedback on our message from reasonably influential or senior folks. What we should have appreciated more is that compliments are free, and shouldn’t be treated as any kind of evidence for the validity of a product, unless they are tied to some kind of commitment. More than one ‘enthusiastic’ conversation dried up when we tried to progress to a trial or commercial discussion.
- Don’t build a product, solve a problem — our entire approach was based on creating a product based on an idea, which relied on an ideal. A lot of folks liked the product but no-one needed it urgently enough. One of my favourite quotes of last year comes from this post by Marcel van Es: “The truth is that nobody cares about your ideas, they care about the problems you can solve for them.” Most disruptive technologies don’t spring from an idea; they overcome a problem or limitation with the existing solutions that provides a compelling enough improvement to justify the effort of switching.
Some of these merit a more detailed examination and I may dive deeper into these areas in future posts.
Mistakes aren’t failure
The thing to bear in mind here is that we didn’t fail — yes, we made mistakes, however mistakes are essential to learning, which is essential to success. They only become failure when we don’t learn from them to drive future success. Very few products hit the bullseye first time, and having the capacity to identify opportunities and pivot towards them is fundamental to successful product innovation. A quote from the great post by Tren Griffin:
“Startups need 2–3 times longer to validate their market than most founders expect.”
We simply ran out of time to be successful. Sadly, therefore, the successes that will come from much of the learning of my team and me on this venture will not be in the same company — their loss will be someone else’s gain, as I had an awesome team and I’m pleased to say that I know most of them have rightly been snapped up into new roles.
For me personally, I have a vastly expanded knowledge of how to, and how not to, launch a new product from nothing. I know what I would do differently now; however, without going through this process I wouldn’t have that knowledge, so I can’t say I would ever have done things differently.
I am happy that I’ve taken an approach of integrity and honesty in a challenging situation. Commissioning a critical external appraisal of our position and presenting this back to the leadership group was one of the most difficult weeks of my career. It was also probably one that I’ll be most proud of looking back. Soon after I left I read this great post from Gibson Biddle describing the reasons why Netflix Friends persisted for 5 years in the face of no evidence of success. Gibson’s explanation of how hard it is to critically evaluate and potentially kill projects in the face of enthusiastic top-level support struck a familiar chord and gave me reassurance that the approach I took in the end was needed.
My prediction to my wife those many months ago eventually proved accurate, and in Jan 2019 I found myself looking for a new role. I’m now happily leading the product effort at a tech startup in Bristol, UK. Even coming off the back of a project that was ultimately unsuccessful, I took more experience with me from 2018 than from perhaps any year I’ve worked before.
On the subject of Lean Startup and Product-Market fit I recommend these posts:
- “12 Things about product market fit” by Tren Griffin https://a16z.com/2017/02/18/12-things-about-product-market-fit/
- Forbes — “How To Achieve Product-Market Fit” by Hayley Leibson https://www.forbes.com/sites/hayleyleibson/2018/01/18/how-to-achieve-product-market-fit/
The book that created the Lean Startup idea is a must-read
- “The Lean Startup” by Eric Ries
and the following book, recommended to me by a friend, shone a light for me on the flaws in our approach to talking to customers and provided a direction for the approach that we pivoted towards at the end
- “The Mom Test” by Rob Fitzpatrick
A simple explanation of “Data democratisation” can be found here
Originally published on Adam Knight’s blog at https://www.a-sisyphean-task.com.