During my 2017 Product Design internship at Paperless Post, I had the opportunity to work with many amazing people and to lead my own research project: implementing and designing a recommendation system for the product’s asset tools.
I chose the Product Design (PD) internship at Paperless Post (PP) because PP was the only place offering an internship with the flexibility to work cross-functionally across multiple disciplines. When I applied, I was torn between the PD internship and the Data Science internship. At the time, I was a rising fifth-year in the Brown University | Rhode Island School of Design Dual Degree Program, working toward a BFA in Painting and a BA in Applied Mathematics. Ultimately, I chose the PD internship because one of the Senior Designers told me it was going to be the more interesting role. In the end, PP decided not to hire a Data Science intern, so it all worked out.
I came into PP at an interesting time, right after a reorganization and restructuring. Luckily, this meant there was a lot of white space for me to work with. I arrived with fairly specific goals, chief among them a question I wanted to answer for myself: “How can you incorporate data and computational processes into a design workflow?” The answer took the form of a fairly comprehensive summer project, born from countless one-on-ones and conversations: a smarter way to present backdrops to the product’s users so that they’re incentivized to actually click on them rather than being overwhelmed by choice. For the next 11 weeks, I built my own process (one-week design sprints) and a roadmap with success metrics, deliverables, and value propositions.
Through those conversations, I came to understand that PP is very design-centric and would never want algorithms and machines to put designs in front of our users. At the same time, asking our designers to hand-match tens of thousands of cards is neither efficient nor the best use of their time. I therefore wanted to frame my backdrop recommendation system within the following framework:
Technology should always help the designer, not take away agency. This project isn’t intended to replace human work or the eye of a designer with an algorithm but to augment it.
First, I conducted a heuristic analysis of the site, along with competitive analyses of other products in PP’s market that used recommendation systems. I also ran several user tests, which helped validate my hypothesis to a degree: “If there is a smarter selection of backdrops a user can choose from, will there be an overall increase in attachment and conversion rates?” The real problem then became how exactly we might give our users that smarter, more curated selection.
With this in mind, I started thinking about how to implement something by the end of (by then) 7 weeks. I began by building out a Rails 5 app (learning Rails along the way, since I had never touched it before, though I knew Java and Python) that worked as a functioning survey application: it showed users a random or particular card and its corresponding recommended backdrops. Then, with the help of a senior developer, I got it hosted on Heroku in time for the company’s hackathon (and won!).
This part is more technical, so if that’s not your thing, feel free to skip ahead to the Design section.
The basic logic for scoring a backdrop was based on color analysis (drawing on several aspects of color theory that every art student knows by heart) and the Google Vision API. As a designer, I considered color the biggest priority for a backdrop to look reasonable with a card, above pattern or theme. The diagram below shows the data models in my Rails 5 app, which also functioned as a stand-alone app. Having been denied access to the AWS cloud storage (S3), I hacked together a script to scrape all the image URLs and card IDs from the public PP Rails API endpoints and populate my own database with them, in the Image model. I then hit the Google Vision API with those image URLs to populate the color data in ColorProfile, which I needed in order to make the initial recommendations. The Rule model contains all the color analysis algorithms (such as ‘complementary color’), which form the basis for the recommendations that make a Match. These were fed into the Survey controller, and every time a user submitted their preferred backdrops through a form, they were recorded as a Response in my database.
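To give a flavor of what one of those Rule algorithms might look like, here is a simplified sketch (with hypothetical names, not the actual Rule model): a ‘complementary color’ rule could score a backdrop by how close its dominant hue sits to the card’s dominant hue rotated 180° around the color wheel.

```ruby
# Simplified sketch of a color-analysis rule (hypothetical names, not
# the production Rule model). Scores a backdrop by how closely its
# dominant hue matches the complement (180° rotation) of the card's.

def hue_distance(h1, h2)
  # Hue is circular (0-360), so take the shorter way around the wheel.
  d = (h1 - h2).abs % 360
  [d, 360 - d].min
end

def complementary_score(card_hue, backdrop_hue)
  target = (card_hue + 180) % 360
  # 1.0 = perfect complement, 0.0 = identical hue (worst for this rule).
  1.0 - hue_distance(target, backdrop_hue) / 180.0
end

# A card with a red-orange dominant hue (20°) pairs best with
# backdrops whose hue sits near cyan-blue (200°).
complementary_score(20, 200) # => 1.0 (perfect complement)
complementary_score(20, 20)  # => 0.0 (same hue)
```

The same shape works for other color-theory rules (analogous, triadic, and so on) by swapping out the target-hue calculation.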
Here is an example of what a Survey might look like hosted on Heroku:
Additionally, I set up endpoints on my Heroku app so that the PP Rails app could make an API call to fetch the recommended backdrops, since I had incorporated Paperless Post’s IDs for each card into my data model. It’s slow, because it makes the call synchronously on the backend, so there are obviously many areas for optimization and improvement; but since the basic functionality works, it could be rolled into the Rails app or turned into a service.
Implementing a Learning Algorithm
After the survey was up and running and collecting user data, I wanted to use the deterministic color analysis algorithm as the basis for a reinforcement learning algorithm. Of the 209 surveys taken, the most popular rule was “Highlight.” I took this single rule and refactored my code into a system that offered just one recommendation, as a proof of concept. After rapidly working through different candidate models with a Senior Data Scientist at PP, I decided on a Bayesian update, a form of reinforcement learning based on the Beta-Binomial model, for rewarding and penalizing recommendations. I only implemented this locally, but got it working for a small number of successes (submitting with the recommended backdrop) and failures (submitting with a backdrop other than the recommended one).
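The core of that Beta-Binomial update is simple enough to sketch here (a hypothetical class, not the internship code): each recommendation keeps a Beta(α, β) posterior over the probability a user accepts it, a success increments α, a failure increments β, and the posterior mean α / (α + β) becomes the recommendation’s score.

```ruby
# Minimal sketch of the Beta-Binomial reward/penalty scheme
# (hypothetical class, not the actual internship code). Each
# recommendation tracks a Beta(alpha, beta) posterior over the
# probability that a user accepts it.

class BetaBinomialArm
  attr_reader :alpha, :beta

  def initialize(alpha: 1.0, beta: 1.0)
    @alpha = alpha # prior "successes" (uniform prior when both are 1)
    @beta  = beta  # prior "failures"
  end

  # Success: user hit submit with the recommended backdrop.
  def reward!
    @alpha += 1.0
  end

  # Failure: user hit submit with some other backdrop.
  def penalize!
    @beta += 1.0
  end

  # Posterior mean = expected acceptance probability.
  def score
    @alpha / (@alpha + @beta)
  end
end

arm = BetaBinomialArm.new
3.times { arm.reward! }
arm.penalize!
arm.score # => 4/6, i.e. roughly 0.667 after 3 successes and 1 failure
```

The nice property of this model is that recommendations with little data hover near the prior (0.5) and only drift toward the extremes as evidence accumulates.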
Finally, though the more technical aspects of this project were placed within a design process and the framework of design principles, I wanted to bookend the data- and engineering-heavy parts with actual design work and UX/UI solutions. In parallel with the implementation of the Rails app, I held a design session to start thinking about how these recommendations would surface in the browsing/creation funnel. The Senior Designer I was working with helped me gather all the designers in a room, and we hashed out some designs that I could then iterate into a UX/UI map and prototypes with short-, medium-, and long-term solutions.
The following is a map of all the possible solutions for end user interaction starting from the current one at the bottom, and building complexity towards the long-term implementation near the top.
This was followed by prototypes in Sketch with click-through videos on InVision. Below are the click-through design prototype screen recordings.
Short Term Prototype
Medium Term Prototype
Long Term Prototype
I think the Long Term Prototype can be iterated on most extensively, because it has the most room to take many shapes and to be A/B tested. The long-term view introduces the idea of a live preview that lets a user enter their details, hit preview, and then make some final additions or changes. Given the time and resources I had, I was only able to prototype the inclusion of the Backdrop asset, specifically because that is where I think a set of automated recommendations could really shine.
These proofs of concept provide a foundation for eventually offering smarter assets to our users. Actually integrating them into the current product was outside the scope of an internship, but I kept in mind how my research and work might be productionized. Looking toward the future, these designs could be rolled out as A/B tests in the form of short-, medium-, and long-term goals.
Additionally, if PP were to implement a recommendation system based on image analysis, it is important to keep in mind that the highest-selling packages are actually “Upload Your Own” packages (which give users full control over what their card looks like). Real-time, on-the-fly analysis of images right after a user finishes uploading their own media assets would therefore offer the greatest impact on upsells and conversion rate.
Since hitting the Google Vision API thousands of times a day is not the most cost-effective approach, this could feasibly be implemented in-house with a k-means clustering algorithm that extracts the most dominant RGB colors.
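A rough sketch of that in-house alternative might look like this (plain Ruby with hypothetical helper names; a production version would sample pixels from the uploaded image rather than take an in-memory array, and would use random or k-means++ seeding instead of the deterministic initialization used here for clarity):

```ruby
# Sketch of dominant-color extraction via k-means (hypothetical
# helpers, not production code). Pixels are [r, g, b] triples; the
# returned cluster centers approximate the dominant colors.

def squared_distance(a, b)
  a.zip(b).sum { |x, y| (x - y)**2 }
end

def mean_point(points)
  n = points.size.to_f
  points.transpose.map { |coords| coords.sum / n }
end

def dominant_colors(pixels, k, iterations: 10)
  # Deterministic init for this sketch; real k-means would use
  # random restarts or k-means++ seeding.
  centers = pixels.uniq.first(k)
  iterations.times do
    # Assign each pixel to its nearest center...
    clusters = pixels.group_by do |px|
      centers.min_by { |c| squared_distance(px, c) }
    end
    # ...then move each center to the mean of its assigned pixels.
    centers = centers.map { |c| clusters[c] ? mean_point(clusters[c]) : c }
  end
  centers
end

# Toy "image": mostly red pixels with a small blue region.
pixels = [[250, 10, 10]] * 80 + [[10, 10, 240]] * 20
dominant_colors(pixels, 2) # => centers at pure red and pure blue
```

The resulting centers could then feed directly into the same color-analysis rules used for the catalog cards, keeping the recommendation logic identical for uploaded media.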
Furthermore, as far as the learning algorithm is concerned, more complexity would be needed: a multivariate analysis of recommendations and an improved color analysis algorithm, to account for certain rules applying more to one card than another, since not all cards are made equal.
At Paperless Post, I was surrounded by helpful, smart people who gave me everything I needed to push myself as far as I could go. I’m glad that I was able, in a way, to structure my own internship to incorporate design, engineering, and data. The intersection of these three is rarely something a typical internship asks for or teaches, but it was a profound learning experience to see how all three function in a product company, to move fluidly between them, and to use them to create a harmonious process of building, measuring, and learning.