SNAPSTYLE for SHOPSTYLE
One more project crossed off the GA UXDI list. This time around, we were given the freedom to identify our own problem and apply it to the company of our choice. Partnered with two other fashion-loving, shopping-savvy students, Tatum Rehorn and Stephanie Del Rio, I was excited to see where our similar interests and creative minds would take us. After several hours of brainstorming, this is where we landed:
FEATURE IMPLEMENTATION | SNAPSTYLE for SHOPSTYLE
ShopStyle is a leading online retail aggregator that gives its users access to over 12 million items on a single platform. Its fashion-savvy, price-conscious consumers are always searching for the best styles at the best prices across a multitude of retailers. ShopStyle presents a seamless way for customers and retailers to connect, but its customers are still not immune to frustrating shopping experiences. From the research we gathered, my team and I identified the problem:
Fashion consumers are not always sure how to describe what they are looking for. They are inspired by trends seen on the streets or in the media, but are often unsure of where they can purchase these items and/or at the price points they are looking for.
Our solution? Visual Search through a native mobile platform — SnapStyle.
Using advanced image recognition technology available today, ShopStyle has the opportunity to empower shoppers to search for, discover and buy anything they can photograph.
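Under the hood, visual search systems like the ones described here typically convert each image into a numeric feature vector (an "embedding") and rank catalog items by how close their vectors are to the query photo's. The sketch below is purely illustrative — the toy three-dimensional vectors, item names, and helper functions are invented for this example, and real systems use vision models producing vectors with hundreds of dimensions — but it shows the matching idea:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def most_similar(query_vec, catalog):
    """Return catalog item IDs ranked by similarity to the query photo's vector."""
    ranked = sorted(catalog.items(),
                    key=lambda item: cosine_similarity(query_vec, item[1]),
                    reverse=True)
    return [item_id for item_id, _ in ranked]

# Toy "embeddings" -- real ones come from a trained vision model.
catalog = {
    "red-jacket": [0.9, 0.1, 0.3],
    "blue-dress": [0.1, 0.8, 0.5],
    "red-blazer": [0.8, 0.2, 0.4],
}
photo_vec = [0.85, 0.15, 0.35]  # embedding of the user's snapped photo
print(most_similar(photo_vec, catalog))  # red items rank above the dress
```

The ranking step is why a snapped photo can surface visually similar products even when the shopper has no keywords for what they saw.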
How did we get here? What was our process?
STEP ONE | USER RESEARCH
We sent out a survey about online shopping to the GA community and our social networks and found that our target audience consisted mainly of female millennials, so we focused on this demographic in our user interviews. From there, we took the main points from each interview and created an affinity map to efficiently understand our users and pinpoint their key frustrations with online shopping.
We found that 60% of all survey respondents named “finding what I’m looking for” as a central frustration, and 100% of our interviewees ranked it among their top two online shopping frustrations. Interviewees stated:
“I hate searching because it’s only accurate if I type the keywords 100% correctly.”
“I don’t always know the words to describe the item I want to find. Sometimes when I try to look up something like winter work attire, the results are completely different from what I was expecting.”
“I’m not always sure which categories to use… is the jacket I’m looking for considered a ‘Jacket’ or ‘Outerwear’?”
Leading to our persona creation…
STEP TWO | MARKET RESEARCH
Why visual search?
- The image recognition market is estimated to grow from $9.65 billion in 2014 to $25.65 billion by 2019.
- 90% of information transmitted to the brain is visual, and it’s processed 60,000 times faster than text.
We looked into two leading companies that specialize in image recognition technology: (1) Slyce and (2) Cortexica. Both have worked with some of the biggest names in retail, e.g., Neiman Marcus, Amazon, Macy’s, Urban Outfitters, and Net-a-Porter. The technology allows the user to find the items on the web most similar to their searched image. Based on business needs and opportunities, ShopStyle would partner with one of these providers to implement SnapStyle.
We also reviewed several would-be competitors among mobile apps and gathered mobile commerce statistics:
- Mobile commerce is expected to generate up to $252 billion between 2015–2020 (Women’s Wear Daily)
- Based on our survey results, millennials are the central user demographic: (1) 60% of millennials say it is convenient to use a smartphone to research or purchase a product on the go, and (2) more than 50% use their smartphones to research products while shopping.
STEP THREE | FEATURE DESIGN
From our competitive analysis and our user and market research, we prioritized the features to implement in our MVP (minimum viable product). To the left, the features we selected are circled, leaving the rest for future, more technologically challenging development.
Below is our central user flow for the user’s main goal: getting relevant search results from a photo search.
Lastly, in our interface design, we wanted to maintain some conventions that we already see in other photo-taking apps such as Instagram. In the wireframes below, we have centered the “SnapStyle” icon in the middle of the app’s global navigation, as well as the photo-taking button on the camera page, similar to that of Instagram. We have also maintained the ShopStyle aesthetic consistency throughout the flow.
From our user testing, we identified three main findings:
- First iteration: users attempted to scroll through search suggestions horizontally rather than vertically as designed
- Both iterations: most users gestured a two-finger zoom on the cropping page, but some did not feel inclined to zoom and jumped straight to the scanning page
- Second iteration: tests were more successful, and users found the flow intuitive
Based on our findings, we concluded with the final updated prototype presented in a demo version below:
STEP FOUR | NEXT STEPS
- More Testing
- Build Error Message Implementation
- Restructure Profile Pages — make more engaging
- Saved pictures/styles
- Detailed outfit search
- Instagram/Social Media — “Share My Style”
- Search Results Advanced Filtration Options
- Make categories clickable on results page
- Enhance visuals
- Internal check-out
- Explore more biometric technology to help users in other areas (e.g., size/fit)
I learned so much from collaborating with my teammates on this project. It has been a wonderful journey learning to work with so many different minds and personalities in the UX world so far, and I’m excited to see what’s next. Even with the best group dynamics, there will always be hiccups along the way that require strong communication and clarity, and those can sometimes spark even greater creativity. With that, on to our next and final UXDI project…