The Fern Expedition

Our setup

The app itself, our technology, and ways of working were all built around the idea of speed.

  • Develop very quickly (and use existing frontend developers)
  • Ship updates on iOS without going through the usual rigmarole of App Store review, which could sometimes take two days or more
  • Dump the usual JIRA/Confluence combo in favour of the much simpler Trello
  • One team meeting scheduled per week: sprint planning. A second meeting was introduced later to analyse OKR progress. Team members still had their usual one-on-ones during the week.
  • One product manager
  • One product designer
  • Two full stack engineers
  • Two engineers focused on recommendations
  • Five full time stylists
  • One to two other remote stylists who covered odd working hours.
A note on designing at speed

Not everything we shipped was pixel perfect. In fact a lot of it wasn’t. Although this was difficult at first, we had to make our peace with it. “Is it better than what we have?” was often the approach in these matters (hat tip to Shape Up: Decide When to Stop). Usually the answer was yes.

As a designer I was compelled to try and fix little details and deviations from the original designs. But our approach necessitated aggressive prioritisation to keep moving forward. Would moving this element a few pixels to the left move the right needles? Probably not. Would spending time thinking about how to create a buying process? Probably yes. I let minor UI imperfections slide to buy myself time to think about the next big problem.

Anatomy of Fern

Because Fern was built on the idea of showing users things they would like, personalisation played a key role. There were three main types of personalisation:

  • 1:1 — One stylist choosing items for one user. Highly relevant and high quality items but also very time consuming to create.
  • 1:many — One or more stylists choosing items for a “segment” (see below). Not particularly relevant to users, but very high quality products. See “discover edits” below.
  • Algorithmic — Our recommendation algorithm choosing items based on signals gathered from the user (likes, dislikes, brands, categories, colours, materials, etc). Highly relevant to the user (often too much so), however quality of products can vary significantly.
The various menswear and womenswear styles. Combining one of these with a budget gave us a “segment” for a user.
  • “Weekly picks” — items chosen by an algorithm for individual users.
  • “Private edits” — created by a stylist based on an individual user’s specific request. Initially this was the only kind of edit, until the introduction of weekly picks and discover edits.
  • “Category edits” (introduced later) — items chosen for individual users by an algorithm for a particular category.
  • “Discover edits” — created by the team of stylists for a whole segment of users.
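To make the segment idea concrete, here is a minimal sketch of how a style and a budget could combine into a segment. All names, styles, and budget buckets here are illustrative assumptions, not Fern’s actual code.

```python
from dataclasses import dataclass

# Hypothetical budget buckets: (lower bound, upper bound, label).
BUDGET_BUCKETS = [(0, 100, "low"), (100, 300, "mid"), (300, float("inf"), "high")]

@dataclass(frozen=True)
class Segment:
    style: str   # e.g. "minimal", "streetwear" — illustrative style names
    budget: str  # bucket label from BUDGET_BUCKETS

def bucket_budget(spend: float) -> str:
    """Map a user's typical spend to a budget bucket label."""
    for lo, hi, label in BUDGET_BUCKETS:
        if lo <= spend < hi:
            return label
    raise ValueError("spend must be non-negative")

def segment_for(style: str, spend: float) -> Segment:
    """Combine a style with a budget bucket to get a user's segment."""
    return Segment(style=style, budget=bucket_budget(spend))

print(segment_for("minimal", 250))  # Segment(style='minimal', budget='mid')
```

A stylist-made “discover edit” would then target one `Segment` value, while 1:1 edits target a single user directly.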

What we did

Below is a summary of some of the work we shipped over the six months or so I was on the team. Note that this is only what I had documented and to hand. Not included is the work we did on recommendations, paid advertising, the Trojan work the stylists did to create edits, and probably some other work that has fallen through the cracks.

Before joining

Fern existed in a couple of different forms before I joined the team: first as a simple Facebook Messenger app, and then as an initial version of the app.

Fern v1

The first three months, March to May 2019

I joined the team at this time, after completing a design sprint for Lyst’s mobile app team. Being the first product designer to join the team meant there was some low-hanging fruit to tackle out of the gate (e.g. cleaning up UI, implementing standard mobile design patterns, etc.). But I also had to become familiar with the product, the team, and their ways of working.

All the screens of the roughly thirty-two features we shipped from March to May 2019.
  • Algorithmic recommendations — Using an algorithm to make picks for users. This necessitated an information architecture (IA) overhaul, as there were now several different types of edits.
  • Product level feedback — Letting users “dislike” products in an effort to gather more signals.
  • Onboarding — A hugely important piece of the puzzle where we gather user preferences. We went through many, many variations of our onboarding experience to try and strike a balance between getting high quality signals and keeping friction relatively low.
  • Buy in Fern — Launched the first version of the in-app buying experience (initially Apple Pay only). Before this, users were directed off to partner websites to purchase items. This was a tricky feature due to the unique way purchasing worked on Fern.
  • Various other quality of life features including new item badges, creating user stores instantly post-onboarding, in-app notifications, various UI tweaks to product pages, showing sale prices, etc.

Second three months, June to August 2019

The coming of a new quarter meant one big change for the team: we would have OKRs (objectives and key results) to meet for the first time. We now needed to prove to the business that continued investment in Fern was worthwhile. We had three OKRs:

  1. User engagement — 50% of users who enter the app (post-onboarding) would like five items within three days.
  2. Scale — Bring 10,000 users per month into the app.
  3. Revenue — Average $10 GMV per user.
All the screens of the roughly thirty-six features we shipped from June to August 2019.
  • Onboarding — Continued adjustment and optimisation of onboarding. With more volume coming through this quarter there was increased pressure to have a more efficient funnel.
  • Product browsing simplification — A dramatically simpler approach to browsing products. We had known the previous iteration was not working as intended and finally had the chance to change it.
  • Forced like/dislikes — In order to gather even more signals, we forced users to like or dislike an item to see the next one. This was a big call by the team as the previous product browsing experience (above) was working well.
  • Category edits — Adding the ability for users to generate edits for themselves based on specific categories. This proved to be a big driver in engagement.
  • Launched womenswear — Launched womenswear across Fern, learning a huge amount in the process. Unsurprisingly the shopping habits of women were different to those of men!
  • Referral scheme — Created a referral scheme. After much research into similar schemes we settled on a fairly standard approach: users share their unique URL from the “invite” tab, and referred users enter their email address on a website and then download the app.
  • Restructuring IA — Changed structure of the app a couple of times to try and relate “rating” and “liking” to each other more.
  • Shipping, returns, etc. — Made shipping free and communicated how Fern works, in an effort to be more upfront and transparent with users about the buying process.
  • Sizing — Gathered user sizing preferences to only show them items in their size and know when their size was out of stock.

Pulling the plug

The quarter ended and we had met one OKR (engagement), mostly met another (scale), and completely missed the third (revenue). Unfortunately, the one we missed was the one about making money. Unsurprisingly, the business decided it would no longer pursue Fern and shut the project down.

Problems identified

Reflecting on these few months of work has surfaced a few areas where we could have improved.

1. We were getting the wrong users

The majority of people coming onboard were getting a disappointing experience. Our ads (showing products to users, with a link to download the app) implied that Fern was a typical fashion ecommerce app, which it wasn’t. Newly onboarded users would look for the product from the ad they had clicked (we had no way of knowing which product that was), or would look for a search bar to find it (we didn’t have one). A more targeted approach to user acquisition, where we were clearer about Fern’s value proposition, could have alleviated this (albeit at a much higher cost-per-install).

In addition nearly all of our traffic was paid, which would likely have created headaches for us down the line.

2. Too much quant, not enough qual

We lost sight of the experience individual users were getting. Analytics were always combed through in detail, but qualitative testing was more ad hoc. When we did qualitative testing it was often of dubious quality (e.g. with non-Fern users, or with internal employees). This was much better than nothing, but we rarely talked to our own users. Looking back it seems obvious: we should have made more effort to talk to Fern users.

3. It turns out a lot of our images sucked

Late in the game we shipped some analytics that tracked the number of images we were displaying per product. We found that around twenty-five percent of the products users were viewing had only a single image. The problem was exacerbated by the fact that many of our top retailers had it across all of their products!
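As a sketch of the kind of check involved (the event shape and field names here are hypothetical, not our actual analytics schema), counting the share of product views where only one image was available might look like:

```python
# Hypothetical view events: each records a product and its available image count.
views = [
    {"product_id": "p1", "image_count": 1},
    {"product_id": "p2", "image_count": 4},
    {"product_id": "p1", "image_count": 1},
    {"product_id": "p3", "image_count": 1},
]

def single_image_share(view_events):
    """Fraction of product views where the product had only one image."""
    single = sum(1 for v in view_events if v["image_count"] == 1)
    return single / len(view_events)

print(single_image_share(views))  # 0.75
```

In our case this kind of number came out at roughly 0.25, which was enough to flag image quality as a real problem.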

4. Maybe our business model was wrong

Our business model was the same as Lyst’s: an affiliate model with ecommerce flavouring. This proved very difficult to make successful in our app environment. Perhaps a different approach, such as subscription, could have worked better.

Conclusion

We failed, of that there can be no doubt. But there is a nagging, tantalising possibility that what we sought was just over the horizon. That if we had a little more time, or had been just a little faster, or had just a little more luck, that we would have succeeded.

Yours truly (left) and the stylist team.
