Being A Design Experimenter
It took us a while at Kindoma to get a good process down for measuring and evaluating the effectiveness of our user experience—but I think we’ve got it down to a pretty good science now. In this post, I’d like to share with you one of the most impactful design experiments I conducted while at Kindoma.
What follows is an account of how we optimized a specific metric in our on-boarding funnel, an issue that many game, app, and SaaS developers struggle with. And while the final solution may seem like a simple one, I think you’ll be surprised by the amount of work that went into producing it.
Each metric in our on-boarding funnel is a necessary step the user has to take before they can reach the intended action. But each step is also a point of friction, where we expect to see no more than a 25% drop-off rate (less than 10% in a perfect world).
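As a minimal sketch of how a drop-off check like this works: the step names and counts below are made up for illustration, not Kindoma’s actual data, and the 25% threshold is the target mentioned above.

```python
# Hypothetical funnel counts; names and numbers are illustrative only.
funnel = [
    ("App Installed", 1000),
    ("Account Created", 700),
    ("Invite Sent", 350),
    ("Call Made", 280),
]

# Drop-off between each pair of adjacent funnel steps.
for (step, n), (next_step, next_n) in zip(funnel, funnel[1:]):
    drop_off = 1 - next_n / n
    flag = "  <-- exceeds 25% target" if drop_off > 0.25 else ""
    print(f"{step} -> {next_step}: {drop_off:.0%} drop-off{flag}")
```

Run weekly against real counts, a report like this makes the worst funnel step obvious at a glance.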
Below is a chart showing the specific metrics we track in our Drawtime app’s on-boarding funnel.
In version 2.2 of Drawtime there was a glaringly obvious problem with the percentage of users making it through the funnel to ‘% Invite Sent’, which was ultimately limiting the number of calls being made.
In response, our priority for v2.3 and beyond was clearly to move that metric upward and see whether it would positively affect our ‘Calls Made’ KPI, our single most important metric.
Though a user having a call isn’t directly tied to our monetization strategy, when one person calls another in Drawtime or Storytime it sets up the defining user experience. It tells us that people don’t just want to share a video call; they want to play with someone who’s far away. This is the principle on which Kindoma is founded. Therefore, if users are making calls in our app, it shows us that they agree with our primary value proposition and are likely to pay for it.
Even though a user having a call isn’t directly tied to our monetization strategy …it sets up the defining user experience.
So how were we going to encourage people to send an invite?
We used usertesting.com and set up observation labs with friends and family, in which we passively watched them run through the on-boarding. But all that observation showed us was that our UI wasn’t the problem, at least not for sending invites. No one was confused by what we were asking them to do in the on-boarding process. There had to be something deeper going on in our users’ minds.
So we added a new script to Drawtime: three days after a user successfully registered an account without sending an invite, it would send them an email. The email simply said…
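The trigger logic behind a follow-up email like this is simple to sketch. The user records and the mailer stub below are hypothetical, assumed for illustration rather than taken from Drawtime’s actual schema:

```python
from datetime import datetime, timedelta

# Hypothetical user records; a real system would query these from a database.
users = [
    {"email": "a@example.com", "registered": datetime(2014, 1, 1), "invite_sent": False},
    {"email": "b@example.com", "registered": datetime(2014, 1, 9), "invite_sent": False},
    {"email": "c@example.com", "registered": datetime(2014, 1, 2), "invite_sent": True},
]

def users_to_email(users, now, wait=timedelta(days=3)):
    """Users who registered at least `wait` ago but never sent an invite."""
    return [u for u in users
            if not u["invite_sent"] and now - u["registered"] >= wait]

now = datetime(2014, 1, 10)
for u in users_to_email(users, now):
    print("would email", u["email"])  # stand-in for the real mailer
```

A job like this would run daily; only the first user above qualifies, since the second registered too recently and the third already sent an invite.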
Though we gained some valuable insights this way, it didn’t generate any consistent or specific responses that would help us move the % Invite Sent metric.
it wasn’t really the experience of friction that our users were reacting to but the promise of it.
We always suspected the real problem was friction. But we had already gone to great lengths in our design to reduce friction as much as possible without completely reengineering the way the app fundamentally worked: an expensive and lengthy task we were unwilling to undertake until we had exhausted every other possible solution within our current framework.
So our next step was to take what we had learned and generate a few hypotheses to test.
- We tested a graphic that we thought better communicated the value proposition.
- We tested other methods for sending invites—like text messages or direct link copying.
Neither really made an impact.
We made another pass at reducing friction, and it finally occurred to me that it wasn’t really the experience of friction that our users were reacting to but the promise of it.
So I proposed an A/B test of a simple language change that would have minimal impact on our on-boarding flow…
By changing the wording of the button on the landing screen from “Create an Account” (the promise of friction) to “Start a Call” (the value proposition), I hypothesized that users in our B split would accept the friction of creating an account and be more likely to send an invite, because their focus would be redirected to the desired end result… having a call.
Initially, our B split showed a big drop-off in the number of users pressing ‘Create an Account’, which now only came up as a prompt after the user tapped “Start a Call.” But we also saw that, of those who did tap ‘Create an Account’, more users were sending invites and having calls. A lot more than in split A.
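A standard way to check that a difference like this is real rather than noise is a two-proportion z-test on the invite rates of the two splits. The counts below are invented for illustration; the experiment’s actual numbers weren’t published.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical counts: users who created an account, and how many then sent an invite.
a_accounts, a_invites = 1000, 300   # split A: "Create an Account" button
b_accounts, b_invites = 600, 270    # split B: "Start a Call" button

p_a = a_invites / a_accounts
p_b = b_invites / b_accounts

# Pooled proportion and standard error under the null hypothesis (no difference).
p_pool = (a_invites + b_invites) / (a_accounts + b_accounts)
se = sqrt(p_pool * (1 - p_pool) * (1 / a_accounts + 1 / b_accounts))

z = (p_b - p_a) / se
p_value = 1 - NormalDist().cdf(z)   # one-sided test: is B's rate higher than A's?

print(f"A: {p_a:.0%}  B: {p_b:.0%}  z = {z:.2f}  p = {p_value:.4f}")
```

With these made-up counts the difference is clearly significant; with a real experiment you’d run the same test on the observed conversions before shipping the winning split.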
This meant that though we initially reduced the top of the funnel, our flow to the bottom of the funnel was better tuned to our core users.
Fixing the top of the funnel is still a concern, but that’s a marketing problem now, and much less of a concern than ensuring we have a proper on-boarding flow that gives our users exactly what they expect.
We decided to implement split B across the platform for versions 2.4 and 2.5. The metrics speak for themselves.
While this shows how instinct plays a big part in the design experimentation process, it also shows how the research and the metrics are key to understanding how good those instincts are.
Never, through all of our user observations, surveys, and direct questioning, did anyone say anything that would have led us to the final solution more quickly. Yet I can say with a high degree of confidence that if we hadn’t followed the path we took, things wouldn’t have turned around as well as they did.
What do you think? Anything you would add to this? Is there something you would have done differently? We’re always looking to fine-tune our processes and would love to get your feedback!