Visual search as a trend for the 2020s
Jul 28, 2019

My name is Alexander Krivolap, I am an engineer and an entrepreneur. I am a co-founder and CTO at Oyper, where we work on a lot of computer vision and visual search problems. I would like to talk about how Visual Search is powering a new era in e-commerce. I am going to cover what is happening in the market as a whole, and touch on specific trends right now.


With the advent of social networks and mobile phones with large screens convenient for consuming images and videos, visual content came to dominate our attention. We take for granted our ability to go online and download any picture, interact with it and share it with friends. This applies to online shopping, where we choose and buy clothes based on their photos.

Services such as Instagram and Pinterest have turbocharged the proliferation of visual content, triggering a new wave of e-commerce development in the process. Bloggers started to produce content advertising makeup and fashion, and their popularity grew exponentially because it was so easy and accessible for audiences to consume. Any fan could easily ‘participate’ by leaving a comment for their favorite celebrity or influencer, for example inquiring about the brand of a dress in a photo. Images became so important that Pinterest, for example, added search-by-image functionality.

Photos taken from open sources

Liketoknowit began as an online platform where influencers could blog and link the looks and clothes they featured. The company later developed its own app, which helps users find and buy clothes from photos or screenshots by searching for similar images in its database of pre-tagged photos, each linked to a specific product page in an online store through an affiliate link.


Awareness was emerging that there was a lot of commercial potential in this space. At the same time (2012–2013), it was understood that manually tagging and mapping images was slow and inefficient, and that better technology was needed to do it at scale. Enterprising data scientists began developing more powerful AI models that could detect and classify clothes and other objects in photos more accurately and economically, and offering this capability to fashion retailers. Retailers could clearly see the potential to increase sales and conversion in their online stores with the help of such technology. It also presented an opportunity to experiment with diverse user experiences on their sites, with the goal of increasing engagement and sales.

Some retailers chose to work on this technology in-house and some experimented with third party solutions.

According to the latest research from Amazon, by 2021, early adopters of voice and visual search techniques stand to realize a 30% increase in revenue compared to the market at large.

The visual search market continues to mature and expand. Pinterest and Instagram, which were early to realize the power of visual content, remain the leaders, handsomely rewarded by the market for it. We are now very used to consuming online content ‘with our eyes’, and Oyper’s role in the ecosystem is to make this easy and convenient.

1. Lamoda case

The main goal of introducing visual search was to help customers find things faster. Additionally, it was important to keep up with the times and offer users new, interesting ways of shopping.


Search by image also encourages purchases of multiple items to complete a look, which increases the average order value and the number of goods in the basket. It also increased the number of product views and of unplanned impulse purchases.

Finally, owing to this new mode of finding and buying things, downloads of the Lamoda app increased as well.

2. Alibaba case

Alibaba was one of the first to integrate image search. Their approach was to announce competitions and adopt the winning solutions from some of the top talent in China.


3. SnapChat

Snapchat has been piloting a new approach to finding items on Amazon, rolling it out gradually to a limited number of users. The implementation is fairly straightforward — a user points their phone camera at an object while long-pressing on the screen. If a match is found, a card appears on screen with a link to the product or to something similar. Tapping the card redirects the user to the Amazon app (if it is installed) or to Amazon’s website. This functionality gave Amazon merchants a whole new promotion channel without any effort on their part.

Snap is consistently and diligently working on e-commerce innovation, competing fiercely with Instagram and Pinterest. The company already offers a number of self service tools to advertisers and is trying to make the buying process as user friendly as possible.


4. Amazon announcement

Amazon announced a new tool called StyleSnap, based on deep learning algorithms, which will allow the company to grow its foothold in the fashion retail space. Amazon made the introduction at the 2019 annual MARS conference hosted by founder Jeff Bezos. StyleSnap searches for clothes close in style to a source image provided by the user. It is coming soon to the Amazon app for iOS and Android, although the exact timing has not been announced. The feature will be activated by tapping the camera icon in the corner of the app. The user can take their own photo or screenshot, or upload one found on the internet. An object-recognition algorithm then determines and classifies the items of clothing that are present, for example skirts or dresses, and looks for the most similar products on Amazon.


5. Google Lens

Google Lens is an image recognition technology developed by Google, designed to surface information relevant to the objects it identifies through neural-network-based visual analysis. First announced at Google I/O 2017, it was initially released as a separate application and later integrated into the standard Android camera app.

Google has said in its blog that its library initially contained 250 thousand objects; their total number now exceeds 1 billion. Interestingly, a large number of these objects were added through the Google Shopping service, which is designed to search for products at online stores. Since the Google Lens algorithm matches a query against thousands of search engine images, identifying an object from a picture does not take long.

This method allowed Google to work with an impressive amount of data, but it has its drawbacks. For example, it does not cover old items that do not appear in online stores, such as retro consoles or cassette players, so Google Lens is unable to recognize them. Additionally, Google’s representatives note that the algorithm may not always work correctly due to other factors, such as a lack of similar user-submitted photographs in the dataset on which the model was trained, the viewing angle, and the quality of the picture. According to them, potential solutions include feeding the model more images taken on smartphone cameras.

6. Yandex’s Sloy

Yandex started a closed beta test of the Sloy (“Layer”) application themed around fashion and style.

Using an augmented reality layer, the service automatically recognizes garments in the frame, including in video. Sloy also allows the user to virtually try on accessories and masks. The service includes a feed with subscriptions and recommended videos. Clicking on a video from the feed surfaces additional information about an item featured in it, along with other similar videos. The service also allows users to discuss what they find with others on the platform.

What do we do? Who are we? Where are we going?

Oyper operates under the B2B model — we offer our partners the following solutions:

Find Similar — search by image.

Similar Products — a solution to the “out of stock” problem. When a customer clicks on a product, our technology immediately surfaces replacement recommendations from the catalog.

Smart Recommendations — stylistically complementary products to the selected one.

Style DNA — the user chooses an occasion (currently 14, but can be scaled to many more) and sets a price range. AI generates stylish occasion-appropriate looks from the images of available products.

Autotagging — Oyper’s AI automatically identifies and labels a large number of attributes for each product in a picture.

We currently offer our solutions to partners on a monthly subscription through our API. Integration is super simple and with proper access, takes no more than a week. Oyper assumes the cost of integration.
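To make the integration story concrete, here is a minimal sketch of what a search-by-image call to an API like Oyper’s Find Similar could look like. The endpoint URL, parameter names (`image`, `top_k`) and payload shape are all hypothetical — the source does not document the real API — but the pattern (base64-encode the image so it fits in a JSON body, authorize with a bearer token) is a common one for this kind of service:

```python
import base64
import json

# Hypothetical endpoint — a placeholder, not Oyper's real API.
FIND_SIMILAR_URL = "https://api.example.com/v1/find-similar"

def build_find_similar_request(image_bytes: bytes, api_key: str, top_k: int = 10) -> dict:
    """Assemble the URL, headers, and JSON body for a search-by-image call.

    The image is base64-encoded so it can travel inside a JSON payload;
    `top_k` caps how many visually similar catalog items are requested.
    """
    return {
        "url": FIND_SIMILAR_URL,
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "image": base64.b64encode(image_bytes).decode("ascii"),
            "top_k": top_k,
        }),
    }

request = build_find_similar_request(b"\x89PNG...", api_key="demo-key")
print(request["headers"]["Authorization"])  # Bearer demo-key
```

The returned dict could then be handed to any HTTP client; keeping request construction separate from transport makes the integration easy to test, which is consistent with an integration effort measured in days rather than months.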

Underlying all these solutions is Oyper’s core visual search technology. Our primary goal is to create the most accurate technology which recognizes the greatest number of attributes of each item. The accuracy of the recommendations depends on the model’s ‘understanding’ of even the most subtle details of clothing, both in the context of searching for similar things and creating new looks.
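The idea of a model ‘understanding’ fine-grained attributes can be illustrated with a toy sketch of embedding-based visual search: each catalog item is represented by a feature vector (in production this would come from a neural network; here the three-dimensional vectors and item names are invented for illustration), and the query image’s vector is ranked against the catalog by cosine similarity:

```python
import math

# Toy catalog: item name -> feature vector. Real embeddings would be
# hundreds of dimensions and produced by a trained vision model.
CATALOG = {
    "red-midi-dress": [0.9, 0.1, 0.3],
    "blue-denim-jacket": [0.1, 0.8, 0.2],
    "red-mini-dress": [0.85, 0.15, 0.35],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def find_similar(query_vec, top_k=2):
    """Rank catalog items by similarity to the query embedding."""
    scored = sorted(
        CATALOG.items(),
        key=lambda item: cosine_similarity(query_vec, item[1]),
        reverse=True,
    )
    return [name for name, _ in scored[:top_k]]

print(find_similar([0.88, 0.12, 0.3]))  # ['red-midi-dress', 'red-mini-dress']
```

The more attributes the embedding captures — cut, color, fabric, pattern — the closer ‘visually similar’ gets to what a human stylist would pick, for both find-similar search and look generation.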

Power of video content

It would be a mistake to overlook the role of video content in this story. Oyper leaped boldly into previously uncharted territory: we recognize things in video. We call this StreamAPI. A click while watching a movie, or immediately after it has ended, can take the viewer straight to the relevant store.

The background doesn’t matter: whatever it is, we will detect and classify all the clothes.

Oyper essentially built a tool for video content owners that converts viewers into shoppers, enabling impulse purchases that were previously impossible. This is a new monetization model for video content owners and a large leap forward in technology; it takes the act of watching movies to a new level of user involvement. Movies and e-commerce become intertwined into one large cinema e-commerce industry, which lets the user multitask and save time. Video content providers who use our technology will be able to forgo a paid subscription model, since they can earn much more by monetizing their content directly. The users, in turn, get an excellent multifunctional service, watching high-quality, non-pirated video and shopping at the same time.
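A StreamAPI-style pipeline can be sketched as: sample frames from the video at a fixed interval, run a clothing detector on each sampled frame, and emit timestamped detections that a player can later turn into clickable links. Everything below is a hypothetical illustration — `detect_clothes` is a stub standing in for a real detection model, and the frame format is invented:

```python
def detect_clothes(frame):
    """Stub detector: a real implementation would run an object
    detection network on the frame's pixels."""
    return frame.get("items", [])

def index_video(frames, fps=25, sample_every_sec=1.0):
    """Scan frames at a fixed sampling interval and return
    (timestamp_in_seconds, item) pairs for every detected garment."""
    step = max(1, int(fps * sample_every_sec))
    detections = []
    for i, frame in enumerate(frames[::step]):
        timestamp = (i * step) / fps
        for item in detect_clothes(frame):
            detections.append((timestamp, item))
    return detections

# 50 frames at 25 fps = 2 seconds of video; a coat appears at the 1 s mark.
frames = [{"items": []}] * 25 + [{"items": ["trench-coat"]}] + [{"items": []}] * 24
print(index_video(frames))  # [(1.0, 'trench-coat')]
```

Sampling (rather than detecting on every frame) is the standard trade-off here: garments persist across many consecutive frames, so a one-second stride cuts compute by an order of magnitude with little loss of coverage.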

Oyper detects and classifies the clothes from the movie

Future of visual search

Our ambitions are not limited to fashion — people buy almost everything online. We want to create more opportunities to buy accessories, furniture and even food using the advantages of visual search; our technology scales and customizes easily, helping us attract partners and customers. We are currently working on our first pilot implementations. We have also shown our work to colleagues from Google, Amazon and Apple, and they all found it very promising. We are very proud of this and want to work even harder to bring all our ideas to life and join the list of ‘unicorns’ in the future.

In the last couple of months we’ve been seeing a lot more interest in our products compared to even six months ago. The trend of visual search is gaining momentum.

Written by

We make all visual content shoppable. Yes, video streams, too.
