Learning How to Increase Conversion Rate at Hipcamp

Zach Conn
6 min read · Jul 11, 2019


On March 11th, 2019, Xanthe Travlos (product manager at Hipcamp) and I (software engineer at Hipcamp) were put in charge of leading conversion efforts. We were tasked with creating a 60% YoY increase in our conversion rate (orders completed divided by people on the website) in the month of July 2019.

I spent the next three months working on nothing but that goal. Here is what I learned.

Note: this post covers learnings about increasing conversion rate specifically. If you are interested in learnings about team dynamics, see this post.

TLDR

Start simple and move towards more complex, bigger wins once you get a handle on user behavior and your own analysis process. Be consistent and rigorous when analyzing results.

Picking experiments

Identify the biggest opportunities for improvement. Consider:

  1. Low conversion to the next step in the funnel. We found this to be the listing page and checkout.
  2. Highest overall impact on the end-of-funnel metric. We learned that a 1% improvement at checkout is worth more than a 1% improvement on the homepage, because every completed order goes through checkout, while not every completed order starts on the homepage.

Get some quick wins by adopting best practices in your industry. Take note of features that would be particularly relevant to your product. For example, showing recent searches would be helpful on Hipcamp, as it is on other travel sites. Promoting business travel options would not be helpful for Hipcamp.

Simplify the user flow to reduce cognitive load on the user. As Lenny Rachitsky from Airbnb said, “Some of the biggest guest conversion gains I’ve ever seen at Airbnb came from simple tweaks that gave users fewer things to think about.” Our pricing widget experiment, which removed pricing info from the date selection step, saw a 6% increase in checkouts started.

Understand the “why” behind the numbers on user behavior. Use tools such as FullStory and in-person customer empathy sessions. Find out why people are dropping off and what needs you are not serving properly. We identified a 52% drop-off at site selection, but it was not until we watched users fail at zooming on a site map that we understood why they were dropping off.

Think bigger about happy paths down the funnel. Start doing this once you have formed a habit of experimentation. This helps you to avoid pushing users down the funnel without understanding the deeper issue (e.g., landing in an empty room). We found that getting people from homepage to search was hiding a bigger problem — landing on unavailable inventory.

Test everything. If you do not test everything (or if you rely only on pre-post tests), you will not know whether your feature moved the metric. Was it your new UX, or did more people camp because it was sunny this weekend? Additionally, even if you know your feature is a winner, it is important to know the magnitude; this will help you find areas of sensitivity (see below). If you are limited by lack of data, consider bundling your experiments into a “new world” and an “old world” to keep track of the cumulative effect. The only reasons not to test are if it would slow down development significantly (e.g., too hard to test) or if it is a blocking bug (you don’t want to leave any users with the buggy experience).
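
To make “did it move the metric” concrete, here is a minimal sketch (not our production tooling, and with made-up numbers) of one common way to compare an A/B split: a two-proportion z-test. It gives you both the magnitude of the lift and a sense of whether it is distinguishable from weekend-sunshine noise:

```python
from statistics import NormalDist

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled rate under the null hypothesis that both variants convert equally.
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = (p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b - p_a, p_value

# Hypothetical numbers: control vs. a new variant.
lift, p_value = two_proportion_z_test(480, 10_000, 530, 10_000)
print(f"absolute lift: {lift:.3%}, p-value: {p_value:.3f}")
```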

Analyzing experiments

Establish good experiment hygiene. In the first week, we launched A/A experiments and A/B tests that were a one-word change. This helped us make sure we were setting them up right and that our experiments framework was trustworthy. Understanding your data takes time! You want to make sure you are assigning participants properly and handling bot traffic well. Other good hygiene practices include running tests against nearby metrics that do not interfere with other tests in the funnel, and including a downstream metric to make sure things don’t go haywire.
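
As an illustration of “assigning participants properly” (a toy sketch, not Hipcamp’s framework; the function name and salting scheme are assumptions), deterministic hash-based bucketing keeps a user in the same variant across sessions and keeps different experiments from splitting users the same way:

```python
import hashlib

def assign_variant(user_id: str, experiment_name: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user so they always see the same variant."""
    # Salt the hash with the experiment name so different experiments
    # do not bucket the same users together.
    key = f"{experiment_name}:{user_id}".encode()
    bucket = int(hashlib.md5(key).hexdigest(), 16) % len(variants)
    return variants[bucket]

print(assign_variant("user-123", "pricing-widget-v2"))  # stable across calls
```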

Use an in-house framework for deeper analysis. This allowed us to look beyond just a “converting metric”. For example, we wanted to know how many people reached the search experience with dates instead of just how many people reached the search experience.
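
For example (a toy sketch with invented events, not our actual framework), “reached search with dates” is just one more filter over the same event data as “reached search”:

```python
# Hypothetical pageview events: (user_id, page, did the search have dates?)
events = [
    (1, "search", True),
    (2, "search", False),
    (3, "search", True),
    (4, "homepage", False),
]

reached_search = {u for u, page, _ in events if page == "search"}
reached_search_with_dates = {u for u, page, dated in events
                             if page == "search" and dated}
print(len(reached_search), len(reached_search_with_dates))  # 3 searchers, 2 with dates
```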

Determine a framework for concluding experiments. There are many different ways to do this, each with their pros and cons. Selecting one will allow you to be consistent across experiments. Here are the ways I have seen:

  • Setting a specified amount of time (e.g., 2 weeks) after which you will have enough data points for statistical significance
  • Setting a specific number of data points for statistical significance
  • Making a probability weighted call before statistical significance

The last one occurs mostly in small companies when you don’t have a lot of data and you prioritize speed over precision. In all cases, this calculator is helpful.
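
For reference, here is a sketch of the arithmetic such a calculator does, using the standard sample-size approximation for a two-proportion test; the baseline rate and minimum detectable effect below are made up:

```python
from statistics import NormalDist

def sample_size_per_variant(baseline_rate, min_detectable_effect,
                            alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect a relative lift
    in conversion rate with a two-sided test."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_detectable_effect)  # relative lift
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    pooled_var = p1 * (1 - p1) + p2 * (1 - p2)
    return int(((z_alpha + z_power) ** 2 * pooled_var) / (p1 - p2) ** 2) + 1

# Hypothetical: 5% baseline checkout rate, hoping to detect a 10% relative lift.
print(sample_size_per_variant(0.05, 0.10))  # roughly 31k visitors per variant
```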

Additionally, it is helpful to think about temporal patterns when coming up with your framework for concluding experiments. At Hipcamp, we have large variance in behavior by day of the week, so we enforce a minimum of one week for all experiments.

Do not peek. Or if you peek (it can be fun to track), make sure to stick to your predetermined framework. I have seen many experiments flip from an early leader to a different winner at statistical significance; it only takes one of those to make you feel dumb, and then you won’t call experiments early again.

Measure the effect of your experiments on the overall metric. For example, if 30% of people saw this page, and it ran on desktop, which affects 50% of people, and it raised conversion by 10%, you have earned a 1.5% (30% x 50% x 10%) increase in overall conversion. Note that there are many independent variables that affect overall conversion (e.g., shifting traffic mix), so just watching overall conversion over time is not sufficient. It is important to ensure that your work is actually making a difference, and to show it.
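
As a sketch, that dilution math is just a product of three factors (the numbers below are the ones from the example above):

```python
def overall_lift(step_reach, platform_share, observed_lift):
    """Dilute an experiment's lift by the share of all converters it touched."""
    return step_reach * platform_share * observed_lift

# 30% of people saw the page, desktop is 50% of traffic,
# and the experiment raised conversion on that page by 10%.
print(f"{overall_lift(0.30, 0.50, 0.10):.1%}")  # 1.5%
```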

Measure conversion from different angles. Orders divided by people on the site is not enough — it is too noisy. One way to combat the noise is to look at conversion by funnel step. Another is to look at conversion by core action to core action (e.g., search to book).
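
A toy sketch of conversion by funnel step, with invented step names and counts:

```python
# Hypothetical unique-visitor counts at each funnel step for one week.
funnel = [
    ("homepage", 100_000),
    ("search", 40_000),
    ("listing", 18_000),
    ("checkout", 3_000),
    ("order_complete", 1_800),
]

# Step-to-step conversion is much less noisy than orders / people on the site.
for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    print(f"{step} -> {next_step}: {next_count / count:.1%}")
```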

Track areas of sensitivity. Changes in these parts of the product result in the largest variance in conversion. They won’t all be positive, but they move the needle a great deal. We found these to be our pricing widget, search UX and search algorithm. These areas of sensitivity are important to identify, so you can monitor them closely on a new experiment and come up with new ideas to improve them.

Final Thoughts

Conversion can be a scary world when you start looking under the hood. Avoid jumping in too deep right off the bat, and focus on getting your framework, process and analysis in order in the beginning. Understand the “why” behind user behavior and get some quick wins in the biggest opportunities for improvement.

Once you feel comfortable with the basics, you should go after bigger wins that enable a “happy path” for the user and target areas of sensitivity. Analyze your results with a consistent framework and from different angles, and understand how they bubble up to the overall goal.
