BookComb to JANE

Kirsten Gale
Published in ux bootcamp · May 3, 2020 · 13 min read

My role: UX Consultant | Duration: 3 weeks

Overview

My team and I worked with a pre-seed startup, BookComb, to validate or invalidate its initial product and feature ideas. We were presented with the problem of how to connect readers to more personalized recommendations. BookComb’s goal is to change the way books are categorized so readers can search for books beyond the genre system, using more nuanced classifications. Through user research we were able to validate our client’s original feature ideas and expand on them further.

Problem Research

Before we could begin ideating on a solution or iterating on BookComb’s existing feature ideas, we needed to familiarize ourselves with the problem space and its nuances. We began by outlining a hypothesis and assumptions to take with us into our user research.

Hypothesis and Assumptions

“Super Readers” want a quick and easy way to find personalized book recommendations.

  • Super readers get recommendations in person at bookstores.
  • Super readers want more categorization in the current book organization system.
  • Super readers want personalized and nuanced suggestions based on their unique preferences.
  • The lack of a tagging system for books creates a “best seller phenomenon,” leading readers to buy books that are popular but not necessarily ones they enjoy reading.
  • Super readers prefer to support independent sellers, publishers, and authors.

Initial Problem Statement

Current book categorization systems and experiences do not give readers apposite suggestions.

How might we connect super readers to highly personal and educated book suggestions that match their unique preferences?

User Interviews

Before beginning user interviews, we had many ideas about our would-be user: the super-reader. Super-readers typically read about 10 or more books per year; however, this definition raised several questions, all revolving around how super-readers differ from average readers. Suspecting that the two groups may differ in their shopping and reading habits, we wanted to explore both types of readers in our interviews.

We sought out both super-readers and average readers by sending out a screener survey, setting the threshold at four or more books read per year for pleasure. We captured 34 responses in total, 29 of which met the criteria. Of the eligible candidates, we interviewed 17 users: 12 super-readers and 5 average readers.

Affinity Map

With a total of 17 interviews, we had a ton of data to work with, so to better understand all of it we consolidated our notes and organized them in an affinity map. We completed the exercise blind, meaning we did not label which notes came from average readers until after we formed groupings, to ensure the original groups were created without bias.

Affinity map groupings excluding the outliers

Although we developed a number of insights from this exercise, we highlighted a few as key.

Key Insights

  • Readers enjoy passively discovering books.
  • Readers find books by personalized recommendations.
  • Readers consult authoritative sources for book recommendations.
  • Readers engage in research before reading or purchasing a book.
  • Although readers believe that book tastes are highly subjective, they value the opinions of their peers.
  • Accessibility is very important to readers.
  • Readers find reading to be a way to relax and escape everyday life.
  • Readers want to know what the popular books are but do not assume it means they’re good.

Persona

A few differences between super-readers and average readers did come up in our affinity map, but all were small and did not warrant developing two personas. Super-readers buy fewer books, with cost being their main prohibitor, and more often practice due diligence to ensure they’re reading a quality book. Average readers more often choose to read to learn something new and are hesitant to recommend books to others.

With the insights above we were able to form a persona, the primary representation of our user. Meet Maya: the architect, New Yorker, and frustrated book lover.

User Journey

To understand the issues Maya faces, we mapped out her user journey. By combining all of our interview notes and stories, we found common themes in readers’ discovery processes. Influenced by environmental factors and reading goals, readers followed one of three discovery processes: passive discovery, active discovery, or a combination of both.

For Maya’s user journey, we homed in on the final process: the combination of passive and active discovery. This let us model both the most common process and the greatest number of problems a reader faces.

Maya’s journey starts when Maya herself doesn’t even know she’s looking for a book. She’s at lunch with a friend when he casually mentions a book he recently read, but she doesn’t think much of it at the time. When she finishes her current book, she sets out to find a new one. She loved reading about a woman of color in NYC, so she seeks a book with the same content. Unfortunately, she’s taken from list to list, constantly thinking, “this wasn’t what I was looking for.” Eventually she lands on a review of a book by her favorite author. The book isn’t what she was looking for, but she trusts the author. As she begins researching the book, she remembers: “Oh yeah! My friend recommended this book too!” Feeling better, she looks at a few user reviews for some final due diligence. Finally, she purchases the book.

Maya’s full user journey

The clear areas of opportunity for our product are…

  • Connect Maya to friends’ book recommendations
  • Help Maya remain aware of books she wants to read
  • Connect Maya to books based on her specific interests
  • Provide Maya with curated recommendations
  • Connect Maya to others’ reviews of books (critics and peers)
  • Help Maya purchase the book

With a better understanding of the problem space and of Maya, we were in a position to let this data shape our product decisions.

Revised Problem Statement

Our research gave us a better understanding of the problem space and of Maya, so we re-evaluated our original hypothesis and assumptions and revised our initial problem statement.

Readers want more individualized book recommendations that coalesce their favorite genres, personal interests, preferences, and mood. Current book categorizations focus too heavily on genre or are not nuanced enough.

How might we help Maya generate and maintain an awareness of books that concisely meet her unique search criteria?

Business Research

Understanding the problem space also meant investigating the competitive landscape to see what solutions are already on offer. We began with Sinek’s Golden Circle exercise to understand our product’s unique value. With a clear idea of the why behind BookComb, we were able to conduct analyses of competitive and comparative businesses.

Sinek’s Golden Circle

To help BookComb differentiate itself from competitors, we turned to brand strategy practices such as Sinek’s Golden Circle, in which we defined the what, how, and why behind BookComb.

This framework provided a rationale for later decisions when selecting a feature set for the MVP, and it immediately allowed us to survey the competitive and comparative landscape.

Competitive Analysis: Petal Diagram

We looked to companies that currently solve aspects of our problem space to further our understanding of the space and the opportunities it holds.

Competitive and Comparative Analysis: Features

We compared BookComb to several competitors and comparators on the basis of specific product features to better understand the strengths and weaknesses of the current technology within the space.

Competitive Analysis

We compared JANE to several competitors, including Goodreads, Barnes & Noble, and BookBub, analyzing each against a specific set of features to determine industry standards as well as potential opportunities for growth. The key insights we took from this method were that book discovery websites must include a search bar, curated lists (editor’s picks, best of, trending, etc.), and a way to save books to personalized lists. The last was an essential insight, as saving was not part of JANE’s original concept. Not only do all of the competitors give their users a way to save items, but most even recommend books based on those already saved. These book discovery websites put a strong emphasis on learning who the user is and predicting what they’ll want to read next, serving those predictions before the users know themselves.

That said, there are a few areas where JANE can differentiate itself from its competitors: offering an onboarding process that immediately learns the user’s interests and preferences, since only a few competitors currently do that; building social connections between users; and eventually building out dedicated author pages to keep tabs on favorite authors, which no competitor currently offers.

Comparative Analysis

We compared JANE to several comparators, businesses with similar functions, to see what features they offer their users. The key insights we took from this are that JANE should use an onboarding process to learn users’ interests at sign-up, and that the landing page should let users passively browse products. Almost all comparators give their users an onboarding process, notably Netflix and Artsy. All comparators then send users to a browse landing page where items are categorized in some way. Lastly, although the tagging system on IMDb seems somewhat neglected, it is worth noting that IMDb shows five tags per item with the option to see the remaining (30–100+).

Design Ideation

Informed by our problem and business research, we headed into our design phase, where we began to translate our insights into features and to look more closely at BookComb’s initial feature ideas.

Insights to Features

We transformed our insights into actionable features that address our users’ needs. The original ideas presented in the brief were validated and supported at this stage, but were expanded to include users’ needs around research and suggestion preferences.

MoSCoW Map

With our actionable features in mind, we organized them into categories to see what was vital to our client and our users. We employed the MoSCoW method, in which every feature is placed into one of four categories: must, should, could, or won’t. Priority was assigned by answering the question: how critical is this feature to helping users accomplish the fundamental goal of finding books related to a specific, personal interest?

Feature Prioritization

Knowing which features landed in our must category wasn’t enough. We then mapped every feature on a graph of low-to-high effort/expense against whether it was essential or a nice-to-have. From there we could say that all items within the top-left quadrant, the essential and low-effort features, should be included in our MVP.
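To make the quadrant rule concrete, here is a minimal sketch of the selection logic, assuming effort is scored 1–5; the feature names and scores below are hypothetical placeholders, not our actual map.

```python
# Hypothetical prioritization sketch: each feature gets an effort/expense
# score (1 = low, 5 = high) and a flag for essential vs. nice-to-have.
# Names and numbers are illustrative placeholders, not the real feature map.
features = [
    {"name": "Feature A", "effort": 1, "essential": True},
    {"name": "Feature B", "effort": 4, "essential": True},
    {"name": "Feature C", "effort": 2, "essential": False},
    {"name": "Feature D", "effort": 2, "essential": True},
]

# The MVP quadrant: essential AND low effort (the top-left of our graph).
mvp = [f["name"] for f in features if f["essential"] and f["effort"] <= 2]
print(mvp)  # -> ['Feature A', 'Feature D']
```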

Design Studio

The feature practices above informed and oriented us going into our round of design studio. Here we were fortunate to be joined by our client, who originated the two feature ideas that make up BookComb and commissioned us to validate and improve upon them. The design studio moved quickly, with rapid creation and iteration of several ideas in a collaborative space.

Design studio

To close out our design studio, we all converged on a single design for the two features of BookComb.

Minimum Viable Product

When BookComb became JANE: the site where you discover your next great read.

Brand Creation

Our client tasked us not only with bringing her two feature ideas to life, but also with bringing BookComb to life through a brand and identity. Being open to new names as well is what led us from BookComb to JANE.

So why JANE? We landed on this name because we wanted it to represent a matchmaking system for readers and books, much like Jane Austen was for the characters in her novels. We also wanted the site to elicit the feeling that book recommendations come from someone who has already read the book… bringing us to “Read by JANE.”

In the middle you can see the illustration style we chose for JANE; we leveraged these free online graphics to create a friendly tone within the interface. On the right are our typeface choices: Playfair was used strictly for the name and logo, and Inter for the rest of our copy.

Design Testing

Before landing on JANE and a brand look, we had to complete a few rounds of usability testing to figure out the site’s structure, specifically the home page layout and how the features would work within the site. This was done in three rounds of testing, the first two in mid-fidelity and the last in high-fidelity. All tests focused on our client’s two main features: the filtered search and the book recommendation search.

Usability Test: Round 1

Below are the mid-fidelity wireflows tested during this first round; the flows depict both tasks we gave our users to complete.

During this first round of testing, users had a good sense of what the site was about; however, both features performed poorly overall. Users found the process of completing both tasks overwhelming and long, and did not understand the purpose of a few pages. General design issues that came to our attention included on-screen elements being too large and confusion around the feature names. At this stage the filtered search was named “Smart Search,” and the book recommendation feature was called the “Tarot” search.

In the first task, we found that users had difficulty locating the sidebar filter menu. We also found that users did not know where to locate specific tag words, such as “infidelity” and “modernist,” within the categorization system. The second task is where we found users thinking the overall process was long-winded. They were surprised to be taken to the tagging step, where they were asked to tag the books based on why they liked them. Finally, the review step also confused them, and some even wondered whether those were the recommendations.

Usability Tests: Rounds 2 & 3

Our second round of testing closely resembled our high-fidelity designs, so below are the hi-fi flows for the same two tasks.

The feature names went from “Smart Search” and “Tarot” to “Enhanced Search” and “Recommendation Engine,” both of which tested extremely well, as users were less confused about what each feature did. The filtered search also switched from a sidebar menu to a horizontal mega menu, with each category having its own mini search bar. The recommendation engine flow was shortened overall by removing the final review step and by having each book tagged in a pop-up immediately after it is selected.
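To make the two features concrete, here is a minimal sketch of the logic as we understand it: Enhanced Search keeps only books carrying every selected tag, while the Recommendation Engine scores candidates by overlap with the “why I liked it” tags a user assigns. The data model, names, sample catalog, and scoring rule here are our own illustrative assumptions, not JANE’s actual implementation.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Book:
    title: str
    tags: set[str] = field(default_factory=set)  # nuanced tags beyond genre

# A toy catalog; titles and tags are made up for illustration.
CATALOG = [
    Book("Book A", {"fiction", "nyc", "woman of color", "modernist"}),
    Book("Book B", {"fiction", "infidelity", "modernist"}),
    Book("Book C", {"nonfiction", "nyc", "memoir"}),
]

def enhanced_search(catalog, wanted):
    """Enhanced Search: keep only books carrying every selected tag."""
    return [b for b in catalog if wanted <= b.tags]

def recommend(catalog, liked, k=5):
    """Recommendation Engine: score candidate books by overlap with the
    'why I liked it' tags the user assigned to their selected books."""
    weights = Counter(tag for tags in liked.values() for tag in tags)
    candidates = [b for b in catalog if b.title not in liked]
    return sorted(candidates,
                  key=lambda b: sum(weights[t] for t in b.tags),
                  reverse=True)[:k]

print([b.title for b in enhanced_search(CATALOG, {"nyc"})])
print([b.title for b in recommend(CATALOG, {"Book A": {"nyc", "modernist"}})])
```

Even this stripped-down version shows why the tagging step matters: without the reason-tags, the engine has nothing personal to rank against.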

Many users still found the search feature difficult. It could be presumed that success rates would have been higher if the search bar had been functional, as most users in every test immediately gravitated to search first. Some users mentioned that the granular level of detail in the recommendation engine was a bit too much, suggesting it isn’t a feature they would use every time they search for a book, but one they would use once as an onboarding experience. Yet, by and large, testing showed success.

Looking Ahead

Read by JANE is a solution for readers who are looking for highly personalized recommendations that match their reading tastes. It also strives to help users understand their own literary tastes better by providing a more informative and helpful book classification system.

Although we learned a lot about our target user group and validated portions of our client’s idea, there is still more work to be done before JANE can launch.

Next Steps

Screens to be designed next:

  • Sign-in
  • User profile
  • Book product page
  • User lists (favorites, wish list/reading list)
  • Search bar

Feature considerations:

  • User-to-user messaging
  • User generated reviews and ratings

In addition to building out more screens and considering new features, a few things still require testing to validate the full idea: the mobile site, the site from the perspective of a signed-in user, the classification/categorization system used for the filters, and lastly a personalized testing environment.

In terms of testing the site from the perspective of a signed-in user, we believe there are ways to improve the recommendation engine. Since users expressed that they would not use this feature every time they look for a recommendation, we think it should be tested as an onboarding experience. We also think it should be tested by leveraging books that users have saved to their personalized favorites list; that way, the tagging step can potentially be skipped if it was already done when saving.

For the filter classification, we think a tree test and card sort could help our client determine what works best. Also, since the features are built on providing such specific, personalized recommendations, we feel the effect gets lost on users during testing when we supply their interests and favorite books for them. For this reason, we think testing in a personalized environment would help users get the full effect of the features.

Lastly, we believe that JANE will have to expand its product offerings beyond elite books, as we found that users, even super-readers, read a variety of book types.
