Fern was a personal styling platform born in Lyst’s internal incubator programme (known as “catalyst”). Fern was to be like having your own personal shopper: “imagine walking into a store where everything had been chosen specifically for you”, we often said. This was achieved by connecting users with their own personal stylists, combined with algorithms that would learn a user’s taste over time.
We wanted to build a product with strong user retention and product-market fit — quickly. How? By rapidly experimenting and iterating over a short period of time. This way we would determine whether Fern was a viable business as soon as possible.
The app itself, our technology, and ways of working were all built around the idea of speed.
Fern was iOS only, to narrow the scope and the support burden. For a long time Fern was menswear only, to narrow the problem space we were trying to solve.
We built the application using React so we could:
- Develop very quickly (and use existing frontend developers), and
- Ship updates on iOS without going through the usual rigmarole of App Store updates which could sometimes take two days or more to be approved.
We tried to reduce overheads as much as possible. Time spent messing around in meetings or finagling tickets is time wasted, mostly. This meant:
- Dumping the usual JIRA/Confluence combo in favour of the much simpler Trello,
- One team meeting scheduled per week: Sprint planning. A second meeting was introduced later to analyse OKR progress. Team members still had their usual one-on-ones during the week.
One-week sprints. It was unusual to build anything that took longer than two days; if it did, there needed to be a strong rationale for doing so. QA was done on an ad-hoc basis. Updates were available on TestFlight almost immediately.
When we did ship a feature, pre-configured paid adverts (usually via Facebook) were activated. This pushed volume to the new feature, allowing us to gather data quickly. The data was then analysed in Amplitude. Finally, a call could be made on whether the feature was working as intended and whether it was a success.
User research mostly took the form of usability testing carried out via usertesting.com. At first we used prototypes, but as we picked up speed we began testing live versions. Results were generally gathered quickly, although we began to run out of suitable UK participants towards the end of the project.
The team setup changed slightly over the course of the project, but generally we settled at ten to twelve people:
- One product manager
- One product designer
- Two full stack engineers
- Two engineers focused on recommendations
- Five full time stylists
- One to two other remote stylists who covered odd working hours.
A note on designing at speed

Not everything we shipped was pixel perfect. In fact, a lot of it wasn’t. Although this was difficult at first, we had to make our peace with it. “Is it better than what we have?” was often the approach in these matters (hat tip to Shape Up: Decide When to Stop). Usually the answer was yes.

As a designer I was compelled to try to fix little details and deviations from the original designs. But our approach necessitated aggressive prioritisation to keep moving forward. Would moving this element a few pixels to the left move the right needles? Probably not. What about spending time thinking about how to create a buying process? Probably yes. I eschewed minor UI imperfections to buy myself time to think about the next big problem.
Anatomy of Fern
Because Fern was based on the idea of showing users things they would like, personalisation played a key role. There were three main types of personalisation:
- 1:1 — One stylist choosing items for one user. Highly relevant and high quality items but also very time consuming to create.
- 1:many — One or more stylists choosing items for a “segment” (see below). Not particularly relevant to users, but very high quality products. See “discover edits” below.
- Algorithmic — Our recommendation algorithm choosing items based on signals gathered from the user (likes, dislikes, brands, categories, colours, materials, etc). Highly relevant to the user (often too much so), however the quality of products could vary significantly.
In order to help us understand user styles at a macro level we created “style buckets” and budget levels. There were six styles for men, seven for women, and four budget levels. Combining a style with a budget gave us a “segment”. Segments would determine the kind of “discover edits” a user sees, as well as giving stylists a broad-strokes understanding of a user’s taste and price point.
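As a rough sketch (the real style and budget names aren’t documented here, so these are placeholders), a segment is just the cross product of a style bucket and a budget level:

```python
from dataclasses import dataclass
from itertools import product

# Hypothetical names — the actual style buckets and budget labels are not
# documented in this post.
MENS_STYLES = [f"mens_style_{i}" for i in range(1, 7)]      # six styles for men
WOMENS_STYLES = [f"womens_style_{i}" for i in range(1, 8)]  # seven for women
BUDGETS = [f"budget_{i}" for i in range(1, 5)]              # four budget levels

@dataclass(frozen=True)
class Segment:
    style: str
    budget: str

mens_segments = [Segment(s, b) for s, b in product(MENS_STYLES, BUDGETS)]
womens_segments = [Segment(s, b) for s, b in product(WOMENS_STYLES, BUDGETS)]

print(len(mens_segments))    # 6 styles x 4 budgets = 24 segments
print(len(womens_segments))  # 7 x 4 = 28 segments
```

Keeping the segment count this small (a few dozen) is what made 1:many curation feasible: stylists could realistically build a discover edit for every segment.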
Fern’s core feature set centred on showing users collections of products. These collections were internally known as “edits”. There were four types of edits:
- “Weekly picks”, chosen by an algorithm for individual users
- “Private edits” created by a stylist based on an individual user’s specific request. Initially this was the only kind of edit, until the introduction of weekly picks and discover edits
- “Category edits” (introduced later), items chosen for individual users by algorithm for a particular category
- “Discover edits” created by the team of stylists for a whole segment of users.
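This taxonomy boils down to two axes: who chooses the products, and for whom. A hedged sketch (the identifiers are illustrative, not from the actual codebase):

```python
# Each edit type, by who chooses the products and who the audience is.
# Names are illustrative, not Fern's actual identifiers.
EDIT_TYPES = {
    "weekly_picks":  {"chosen_by": "algorithm", "audience": "individual"},
    "private_edit":  {"chosen_by": "stylist",   "audience": "individual"},
    "category_edit": {"chosen_by": "algorithm", "audience": "individual"},
    "discover_edit": {"chosen_by": "stylist",   "audience": "segment"},
}

# e.g. everything that requires stylist hours to produce:
stylist_made = [name for name, t in EDIT_TYPES.items()
                if t["chosen_by"] == "stylist"]
print(stylist_made)  # ['private_edit', 'discover_edit']
```

Seen this way, the product history is a march along the cost axis: Fern started with the most expensive cell (stylist × individual) and added the cheaper algorithmic and per-segment cells later.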
When a user entered an edit, they entered a special product browsing mode. Users would then browse the products in that edit, one product at a time. Actions (buying, size selection) and information (product description, brand, etc.) were on each product page.
What we did
Below is a summary of some of the work we shipped over the six months or so I was on the team. Note that this is only what I had documented and to hand. Not included is the work we did on recommendations, paid advertising, the Trojan work stylists did to create edits, and probably some other work that has fallen through the cracks.
Fern existed in a couple of different forms before I joined the team: first as a simple Facebook Messenger app, then as an initial version of the app itself.
The first three months, March to May 2019
I joined the team at this time, after completing a design sprint for Lyst’s mobile app team. Being the first product designer to join the team meant there was some low hanging fruit to tackle out of the gate (e.g. cleaning up UI, implementing standard mobile design patterns, etc.). But I also had to become familiar with the product, the team and their ways of working.
Work for this quarter comprised some fairly foundational features that were critical to our success. Major projects included:
- Algorithmic recommendations — Using an algorithm to make picks for users. This necessitated an IA overhaul, as there were now several different types of edits.
- Product level feedback — Letting users “dislike” products in an effort to gather more signals.
- Onboarding — A hugely important piece of the puzzle where we gather user preferences. We went through many, many variations of our onboarding experience to try and strike a balance between getting high quality signals and keeping friction relatively low.
- Buy in Fern — Launched the first version of our in-app buying experience (initially Apple Pay only). Before this, users were directed off to partner websites to purchase items. This was a tricky feature due to the unique way purchasing worked on Fern.
- Various other quality of life features including new item badges, creating user stores instantly post-onboarding, in-app notifications, various UI tweaks to product pages, showing sale prices, etc.
Second three months, June to August 2019
The coming of a new quarter meant one big change for the team: we would have OKRs (objectives and key results) to meet for the first time. The reason for this was that we now needed to prove to the business that continued investment in Fern was a worthwhile endeavour. We had three OKRs:
- User engagement — 50% of users who enter the app (post-onboarding) would like five items within three days.
- Scale — Bring 10,000 users per month into the app.
- Revenue — Average $10 GMV per user.
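The engagement key result is concrete enough to express as a computation over like events. A minimal sketch, assuming a simple event log of (user, event, timestamp) tuples — in practice this lived in Amplitude, not code like this:

```python
from datetime import datetime, timedelta

# Hypothetical event log; the real data came from Amplitude events.
events = [
    ("u1", "onboarded", datetime(2019, 6, 1)),
    ("u1", "like", datetime(2019, 6, 1)),
    ("u1", "like", datetime(2019, 6, 2)),
    ("u1", "like", datetime(2019, 6, 2)),
    ("u1", "like", datetime(2019, 6, 3)),
    ("u1", "like", datetime(2019, 6, 3)),
    ("u2", "onboarded", datetime(2019, 6, 1)),
    ("u2", "like", datetime(2019, 6, 5)),  # outside the three-day window
]

def engagement_rate(events, n_likes=5, window=timedelta(days=3)):
    """Share of onboarded users who liked >= n_likes items within the window."""
    onboarded = {u: t for u, e, t in events if e == "onboarded"}
    hits = 0
    for user, start in onboarded.items():
        likes = [t for u, e, t in events
                 if u == user and e == "like" and start <= t <= start + window]
        if len(likes) >= n_likes:
            hits += 1
    return hits / len(onboarded)

print(engagement_rate(events))  # 0.5 — u1 hits the target, u2 does not
```

The key result was met when this rate reached 50% of post-onboarding users.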
With a strong foundation to build upon, and metrics to meet, we had much stronger direction in this quarter. The flagship features we shipped included:
- Onboarding — Continued adjustment and optimisation of onboarding. With more volume coming through this quarter there was increased pressure to have a more efficient funnel.
- Product browsing simplification — A dramatically simpler approach to browsing products. We had known the previous iteration was not working as intended and finally had the chance to change it.
- Forced like/dislikes — In order to gather even more signals, we forced users to like or dislike an item to see the next one. This was a big call by the team as the previous product browsing experience (above) was working well.
- Category edits — Adding the ability for users to generate edits for themselves based on specific categories. This proved to be a big driver in engagement.
- Launched womenswear — Launched womenswear across Fern, learning a huge amount in the process. Unsurprisingly the shopping habits of women were different to those of men!
- Referral scheme — Created a referral scheme. After much research into similar schemes we settled on a fairly standard approach: users share their unique URL from the “invite” tab; users being referred enter their email address into a website and then download the app.
- Restructuring IA — Changed structure of the app a couple of times to try and relate “rating” and “liking” to each other more.
- Shipping, returns, etc. — Made shipping free and communicated how Fern works, in an effort to be more upfront and transparent with users about the buying process.
- Sizing — Gathered user sizing preferences to only show them items in their size and know when their size was out of stock.
Pulling the plug
The quarter ended and we had met one OKR (engagement), mostly met another (scale), and completely missed the third (revenue). Unfortunately the result we missed was the one about making money. Unsurprisingly the business decided it would no longer be pursuing Fern and shut the project down.
While it was sad to end something that we had put so much work into, it was also the nature of the beast.
Reflecting on these few months of work has surfaced a few areas we could have improved on.
1. We were getting the wrong users
The majority of people coming onboard were getting a disappointing experience. Our ads (showing products to users, with a link to download the app) implied that Fern was a typical fashion ecommerce app, which it wasn’t. Newly onboarded users would look for the product from the ad they clicked (we didn’t know which product that was) or would look for a search bar to find that item (we didn’t have one). A more targeted approach to user acquisition, where we were clearer about Fern’s value proposition, could have alleviated this (albeit at a much higher cost-per-install).
In addition nearly all of our traffic was paid, which would likely have created headaches for us down the line.
2. Too much quant, not enough qual
We lost sight of the experience individual users were getting. Analytics were always combed through in detail but qualitative testing was more ad-hoc. When we did qualitative testing it was often of dubious quality (e.g. with non-Fern users, or with internal employees). This was much better than nothing, but rarely did we talk to our own users. Looking back it seems obvious, but we should have made more effort to talk to more Fern users.
This was compounded by the fact that so many features were being shipped that it was difficult to stay on top of what the user experience was even supposed to be. Slowing down (which wasn’t really possible) or keeping an up-to-date record of the intended user experience might have helped.
3. It turns out a lot of our images sucked
Late in the game we shipped some analytics that tracked the number of images we were displaying per product. We found that around twenty-five percent of the products users were viewing had only a single image. This was exacerbated by the fact that many of our top retailers had this problem for all their products!
Would you buy an item of clothing if there was only a single image to go on? Neither would I.
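The measurement itself was simple: count the share of viewed products with exactly one image. A sketch with a hypothetical product feed (our real counts came from view events, not a static list):

```python
# Hypothetical sample of viewed products; in practice the image count was
# attached to product view events in our analytics.
viewed_products = [
    {"id": "p1", "images": 4},
    {"id": "p2", "images": 1},
    {"id": "p3", "images": 3},
    {"id": "p4", "images": 1},
]

single_image = [p for p in viewed_products if p["images"] == 1]
share = len(single_image) / len(viewed_products)
print(f"{share:.0%} of viewed products have a single image")  # 50% here; ~25% for us
```

Cheap instrumentation like this, added earlier, would have flagged the retailer-level image problem months sooner.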
4. Maybe our business model was wrong
Our business model was the same as Lyst’s: an affiliate model with ecommerce flavouring. This proved very difficult to make successful in our app environment. Perhaps a different approach, such as subscription, could have worked better.
The implication is that fixing these issues would have meant success for the project, but reality is more complex and grey than that. Perhaps fixing them would have meant success; perhaps the product would merely have lasted another quarter or two; maybe it wouldn’t have helped at all. The truth is nobody knows.
We failed, of that there can be no doubt. But there is a nagging, tantalising possibility that what we sought was just over the horizon. That if we had a little more time, or had been just a little faster, or had just a little more luck, that we would have succeeded.
I personally learned so much in the trials and tribulations of this project. But more important to note is the amazing team that willed this product into reality. They are a brilliant bunch of people and it was so much fun coming to work every day with them. There was never a dull moment and I would jump at the chance to work with them again.