How a startup builds products

Tyler Myracle · Published in Building RigUp · Feb 8, 2018

Whenever we’re talking to people during the interview process here at RigUp, one question that inevitably comes up is “what is the product process like?” We’re still a relatively young company, so I wouldn’t say we have it figured out by any means, but we’ve found something that works for us. Here’s how a feature makes it from an idea in someone’s head into the hands of our customers.

First Things First

Start with the business objectives. As a company we currently set goals and objectives on a roughly quarterly basis; we find this gives us enough time to pursue work that will produce a meaningful impact on the business without introducing excessively long feedback loops. Once it’s clear what the top-line goals for the business are for the next ~3 months, we have a discussion about how the product team can generate the most leverage in that time. Is it driving more sign-ups? Focusing on retention? Improving engagement to drive more organic invites and word-of-mouth inbounds? Is there a new feature we could de-risk that will open up a new sales channel in the future? Is NPS or customer satisfaction not trending in the right direction?

Once we’ve determined how the team can most meaningfully contribute to the growth of the business, it’s time to figure out the nuts and bolts of how we’re going to work towards our objective. Typically we’ll start with the product backlog, identify features and improvements that can contribute towards our team goals, and re-prioritize those accordingly. Occasionally, we’ll also conduct a brainstorming and ideation session if we feel like the product backlog leaves something to be desired. At RigUp we truly believe that the best ideas come from a diversity of perspectives, which means breaking out of our product bubble and talking to the other teams in the company.

User Feedback

The primary source of input for any of our product ideation and brainstorming sessions is user feedback. We’re big users of Slack, and we take advantage of some key integrations to ensure we’re keeping our finger on the pulse of our users. User feedback is communicated to the team through 3 primary channels:

  • A dedicated channel named #productfeedback for any ideas and feedback from within the company. Dogfooding wherever possible is crucial.
  • A channel integrated with our CRM that pipes in meeting notes from our sales team. This ensures that we’re staying on top of what our customers are asking for, what problems we’re not adequately solving, and what new opportunities might exist in the future. Trend spotting is the name of the game here.
  • Intercom has been invaluable for ensuring customer satisfaction, and we also have a dedicated channel for all customer conversations. To help with analyzing a large volume of conversations, our customer success team will make note of anything product-related that comes up from multiple people so we can quickly identify trends.

This results in more feature ideas and product improvements than we could possibly execute on in the near term. Often there are also ideas suggested that, while important, don’t necessarily align with the key objectives of the business at the time. One of the fun parts of working at a startup is that there are always fires burning; the trick is knowing which ones to fight and which ones to let burn.

Prioritization

At this point we typically have a list of potential features to go execute on for the next few weeks or months. Now comes the fun part of laying out the product roadmap and deciding what is going to be shipped and when. I’m a big fan of the RICE model as described by Intercom in a blog post here. We typically evaluate and prioritize features against 4 main criteria (a rough scoring sketch follows the list):

  • Reach: how many users is this work going to impact? This prevents us from working on projects that are “cool” but aren’t meaningful to a significant portion of our customers. Larger reach = higher score.
  • Impact: how much impact will this work have on each user that encounters it? More impact = higher score.
  • Confidence: how confident are we that this work is going to improve the metric we’re targeting? Is this more of an investigative feature or a known shortcoming of the existing product? Higher confidence = higher score.
  • Effort: how much work is this going to take? This isn’t just in terms of engineering resources, but marketing, sales, design, and operations as well. Lower effort = higher score.
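
If your team does want a concrete number, the standard RICE calculation is simply (reach × impact × confidence) ÷ effort. Here’s a minimal sketch in Python; the feature names, values, and scales are hypothetical stand-ins, not our actual backlog:

```python
# Minimal RICE scoring sketch. Reach is users affected per quarter,
# impact and confidence follow Intercom's suggested scales
# (impact roughly 0.25-3, confidence 0.5-1.0), effort is person-weeks.
features = [
    # (name, reach, impact, confidence, effort) -- hypothetical values
    ("Invite flow polish",     1200, 1.0, 0.8,  2),
    ("New sales-channel beta",  300, 3.0, 0.5, 10),
    ("Settings page cleanup",   150, 0.5, 0.9,  1),
]

def rice_score(reach, impact, confidence, effort):
    """RICE = (reach * impact * confidence) / effort."""
    return (reach * impact * confidence) / effort

# Rank the backlog from highest to lowest score.
for name, *factors in sorted(features, key=lambda f: rice_score(*f[1:]), reverse=True):
    print(f"{name:25s} {rice_score(*factors):8.1f}")
```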

Whether you list out and compute an exact score for each feature is up to your team; however, there should be agreement on the relative size of each of these factors so that it’s apparent to everyone on the team where things shake out. By going through this exercise, our features usually fall into one of four buckets:

  • High value, low effort. These are the no-brainers that your team has to do.
  • High value, high effort. These are bigger bets, and usually very few of them get executed within a quarterly scope. Prioritizing these usually comes down to confidence in outcomes, and the process often spurs further user research and market analysis.
  • Low value, low effort. These are the “junk food” of product work. They make the team feel accomplished by moving another task into the “done” bucket, but they aren’t really impacting the business. Stay away from these.
  • Low value, high effort. This is the stuff that sinks companies. Needless to say, don’t venture into this area.

Execution

At this point we’ve:

  • Identified the top level objectives for the company
  • Determined how the product team can contribute most effectively
  • Identified the feature work that contributes towards our goals
  • Prioritized features by reach, impact, confidence, and effort to ensure we’re operating with as much leverage as possible

Now it’s time to execute. None of what we’ve done to this point matters if we can’t ship, but first we need to figure out exactly what we’re going to ship to our users.

The amount of effort put into a feature spec depends on the scope of the change involved. It can vary from a well-thought-out description on the accompanying JIRA card to a multi-page product spec with an accompanying technical spec.

We’ve usually done some high-level scoping while prioritizing features; our goal at this stage is to identify the functionality that has to exist at ship time in order to test our hypothesis. Our team uses Google Docs for writing feature specs for ease of access and collaboration. The feature spec isn’t some grand, pixel-by-pixel description of the feature; it’s typically no longer than 6 pages.

When writing feature specs we aim to address each of the following:

  • Background and context: is there anything that will help the reader better understand what’s going on here? For us this is typically an explanation of a particular industry-specific phenomenon.
  • Problem statement: what problem are we trying to solve and why is it important?
  • Proposed solution: how are we going to solve this and why do we think this is a viable solution?
  • Risks and unknowns: we’re always making decisions with incomplete information. It’s important that the known unknowns are shared with the team so we can acknowledge them and attempt to minimize them.
  • Success criteria: how do we know if we got it right? How do we know when we need to move on and try something new?

New feature specs are shared not only with the product team but with the company as a whole. This gives everyone a chance to comment on and discuss the feature to ensure we’re taking advantage of the full brain power of the company. There are times when the product team scopes a feature that doesn’t fully address an issue customer success or sales has been encountering; sharing the spec lets us identify those changes now rather than after we’ve already built the thing.

Now we get into the design process, which is probably a whole blog post in and of itself. We utilize a pattern library that allows our design team to maintain a consistent design language and put together mockups incredibly fast. Once a first round of mockups is ready, they’re shared with the rest of the product and engineering team for comments. We utilize InVision at RigUp and have found it tremendously useful for facilitating conversations around feature designs. Things that typically come up during this process are:

  • Does the functionality exist to solve the problem as described in the spec? Have key aspects of the proposed solution been translated to the user interface?
  • Usability. Are users going to be able to find the feature when they need it and figure out how to use it? Our users span a wide range of computer know-how, so it’s paramount that things are as intuitive as possible.
  • Is everything in the UI technically viable? Where is the data being displayed coming from, what exactly happens when a user performs an action, etc? This is where having engineering involved in the design process is critical.

Once we feel that the designs are in a good place we’ll share them with the rest of the company for comments. This is particularly useful in ensuring usability is where we need it to be. It’s easy to get deep into the feature to the point where everything seems obvious. You say to yourself “of course when you want to do X, you just click this button over here”. The true test is when a fresh set of eyes reviews the designs. Do they immediately “get it”? Is there confusion on how to accomplish a particular task? We’ll typically take one more pass at the designs after collecting feedback from everyone.

The engineering team has already been fairly involved up to this point, so the hand-off from design to engineering is typically pretty seamless. Once engineering feels like they have enough context and definition to start hacking on a solution, we’re off to the races. We leave it up to the engineering managers and engineers to split up tasks in whatever way makes the most sense for their working style. Status updates are shared in the engineering Slack channel every morning; this allows the team to make sure everything is on track and to identify any potential blockers.

Every week we have our one required meeting for the product and engineering team where we get together to discuss progress on projects, demo new features and infrastructure changes, and communicate any important company news or events. We also alternate weekly between a product oriented deep dive and a technical engineering deep dive. In each of these meetings we’ll discuss upcoming projects, retrospectives, and progress towards metrics and KPIs.

Our development and deployment process is (again) a whole blog post of its own, but I’ll touch on a few key points here. We use a CI/CD service with automated testing that is kicked off any time an engineer pushes code to our hosted repository. All code changes are done on a new branch and submitted for code review (we use Bitbucket for hosting and review). Once the changes have been reviewed and approved and there’s been a successful build for that branch, we merge it into the master branch. Any time code is merged to master, a build process automatically kicks off that runs all of the tests and, if successful, deploys the code to a staging app. The staging app allows us to verify changes in a production-like environment and further test functionality internally before deploying to the production app.

Once the feature has been tested and verified internally, we ship it and start the learning process. We typically deploy changes to the production app on a daily basis; our CI/CD process helps us ship quickly while maintaining quality and platform integrity. OK, we shipped a feature. Now what?

Keeping Score

We’re not just building features and writing code for the hell of it; we’re trying to change the energy industry for the better. To do that, you have to keep score of whether your changes moved your metrics in the way you anticipated.

We get 0 points for just getting a feature out the door…

In one of my favorite blog posts, Stewart Butterfield, CEO at Slack, says “We get 0 points for just getting a feature out the door if it is not actually contributing to making the experience better for users…”. If you’re not measuring the impact of the features you shipped and learning from your users, you’re dead in the water as a product team.

One thing I’ve learned is that keeping score is often trickier than it seems on the surface, and it’s something we’re always working to improve. We tend to measure features by both high-level metrics and feature-specific ones. This allows us to quickly see whether a feature has impacted the business in a meaningful way and, if not, whether the failure was at the feature level or whether we were wrong about the feature’s anticipated impact on the business.

In order to answer these questions we pull in data from 3 primary sources:

  • The database. We make heavy use of SQL queries against a follower database to answer questions around feature adoption and results.
  • Heap and Mixpanel. Our team currently uses both. We’ve been using Heap for a while and love that we can ship something and it’ll start collecting data before we even know what we want to analyze. We started using Mixpanel recently because it can track across web, iOS, and Android, whereas Heap (at least as of now) only covers web and iOS.
  • Mandrill. This is what we use for email, and we’ve written some scripts that use their API to pull in data on how emails are performing: specifically how many times an email was sent, its open rate, and its click-through rate. We use this to A/B test subject lines and copy where needed (a rough sketch of that kind of comparison follows this list).
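
As an illustration of the kind of comparison those email scripts feed into, here’s a minimal sketch of checking whether one subject line’s open rate beats another’s using a two-proportion z-test. The counts are made up, and the helper is plain Python rather than anything pulled from our actual tooling:

```python
from math import sqrt
from statistics import NormalDist

def open_rate_ztest(sent_a, opens_a, sent_b, opens_b):
    """Two-proportion z-test on open rates for subject lines A and B.
    Returns (difference in open rate, two-sided p-value)."""
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    pooled = (opens_a + opens_b) / (sent_a + sent_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a - p_b, p_value

# Hypothetical send/open counts for two subject line variants.
diff, p = open_rate_ztest(sent_a=4800, opens_a=1392, sent_b=4750, opens_b=1254)
print(f"Open-rate lift for variant A: {diff:+.1%}, p-value: {p:.3f}")
```

A significance check only guards against noise in the email numbers; it doesn’t rule out the kinds of confounders described below.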

Some things are really easy to measure and attribute to product changes. For example: did invites and referrals increase after shipping a change intended to do so? Look at the count before and after deployment and you can quickly figure that out. Others can be a little muddier, like the number of sign-ups in a particular cohort. Was marketing running a campaign that could have impacted this? Did the sales team roll out a new strategy? This is why clearly defining how you’re going to measure something, and what the source of truth for the data is, is so important.
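
For the easy cases, the “count before and after deployment” check is usually a single query against the follower database. Here’s a minimal sketch using psycopg2; the connection string, table, and column names (invites, created_at) are hypothetical stand-ins rather than our actual schema:

```python
from datetime import date, timedelta
import psycopg2  # read-only connection to the follower (replica) database

SHIP_DATE = date(2018, 1, 15)   # hypothetical deploy date of the change
WINDOW = timedelta(days=14)     # compare equal windows on either side

# Hypothetical table/column names; swap in your real schema.
QUERY = """
    SELECT COUNT(*) FROM invites
    WHERE created_at >= %s AND created_at < %s
"""

with psycopg2.connect("postgresql://readonly@replica-host/app") as conn:
    with conn.cursor() as cur:
        cur.execute(QUERY, (SHIP_DATE - WINDOW, SHIP_DATE))
        before = cur.fetchone()[0]
        cur.execute(QUERY, (SHIP_DATE, SHIP_DATE + WINDOW))
        after = cur.fetchone()[0]

print(f"Invites {WINDOW.days} days before: {before}, after: {after} "
      f"({(after - before) / max(before, 1):+.0%})")
```

For the muddier cases, the same numbers are only a starting point; you still have to rule out the campaigns and sales pushes mentioned above before attributing the change to the feature.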

Establishing a consistent set of metrics and KPIs and reporting on them helps keep our team honest. When building something, it’s incredibly easy to lie to yourself: you say “we launched a lot of cool features this quarter, we got so much accomplished” and walk away feeling great even if none of those features moved the needle. This is something I’ve had to learn the hard way, and we’re constantly improving how we hold ourselves accountable for the work we’re doing.

Putting It All Together

While all of this sounds like a very linear, step-by-step process, in reality it’s much more fluid and continuous. Retrospectives prompt another round of improvements, which leads to more design and engineering work. Features get moved up and down the priority queue as we learn more about our users and the market. We’re a lean team, and the energy industry is a big space with lots of problems to go solve. One of the biggest challenges is making sure we’re saying no to all of the good ideas so that we can pursue the great ones.

P.S. Are you a software engineer, designer, or product manager interested in how you can help bring the energy industry into the 21st century? We’re hiring! Learn more and apply here.
