Billion Scale Exhaustive Similarity Search

Chris McCormick
Nearist.ai
Sep 12, 2018


Nearist recently benchmarked exhaustive (or “brute force”) k-NN search on a dataset of 1 billion image descriptors (the Deep1B dataset). A single server containing Nearist’s Vector Search Accelerator (VSX) cards found the 100 nearest neighbors to a query vector in 698 ms. To put this search task into perspective, it would take 21 Tesla K80 GPUs (each with 24 GB of RAM) to store and search these 1 billion descriptors.
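As a rough sanity check on that GPU count (assuming Deep1B’s published format of 96-dimensional float32 descriptors), the raw storage alone works out as follows. The gap between this lower bound and the 21-GPU figure presumably covers distance buffers and other working memory:

```python
# Back-of-envelope memory estimate for storing Deep1B as raw float32.
num_vectors = 1_000_000_000   # 1 billion descriptors
dims = 96                     # assumption: Deep1B's published dimensionality
bytes_per_float = 4           # float32

total_gb = num_vectors * dims * bytes_per_float / 1e9
print(f"Raw dataset size: {total_gb:.0f} GB")   # ~384 GB

k80_ram_gb = 24
print(f"K80 cards for storage alone: {total_gb / k80_ram_gb:.0f}")  # ~16
```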

We at Nearist believe that the ability to easily perform exact, rather than approximate, nearest neighbor search at this scale has important implications for businesses with large search tasks.

Background

Similarity search, known more technically as k-Nearest Neighbor (k-NN) search, is used in many industry applications. k-NN makes it possible to compare content based on its abstract qualities rather than just structured metadata. This is accomplished using machine learning techniques that create vector representations of the content, which can then be compared using a simple distance metric such as Euclidean distance.
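As a minimal sketch of what exact k-NN looks like in code (toy data and NumPy here, not Nearist’s implementation):

```python
import numpy as np

def exact_knn(query, dataset, k=5):
    """Exhaustive k-NN: compare the query against every dataset vector."""
    # Euclidean distance from the query to each vector in the dataset.
    dists = np.linalg.norm(dataset - query, axis=1)
    # Indices of the k smallest distances, sorted nearest-first.
    nearest = np.argpartition(dists, k)[:k]
    return nearest[np.argsort(dists[nearest])]

# Toy example: 10,000 vectors of dimension 128.
rng = np.random.default_rng(0)
dataset = rng.standard_normal((10_000, 128)).astype(np.float32)
query = rng.standard_normal(128).astype(np.float32)
print(exact_knn(query, dataset, k=5))
```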

As the amount of content managed by companies continues to grow, many of these similarity search applications require searching through tens to hundreds of millions of items. Examples include searching through 4.5 million listings at Airbnb, comparing 126M unique search queries to 43M unique advertisements at Yahoo, or searching billions of images at Facebook.

Approximation Techniques

The industry-standard approach to handling k-NN search at this scale is to research, develop, and fine-tune an approximation technique that dramatically reduces search time at the cost of a modest drop in accuracy. Approximate Nearest Neighbor (ANN) techniques are intellectually satisfying: they are clever in design and very efficient at search time. However, they incur a number of costs to a business:

(1) Approximation techniques require valuable developer time to research and develop:

  • Studying different approaches
  • Conducting experiments to compare their effectiveness
  • Fine-tuning parameters to optimize their speed vs. accuracy trade-off

This is valuable time that the ML team could be spending on higher-value tasks, but ANN search is generally perceived as the only feasible solution.
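To make that tuning effort concrete, here is a sketch of a typical speed-vs-accuracy experiment, assuming the open-source faiss library (not part of Nearist’s stack): build an approximate IVF index, then sweep its nprobe parameter and measure recall against exact ground truth.

```python
import numpy as np
import faiss  # assumption: the open-source faiss library is installed

d, nb, nq, k = 64, 100_000, 100, 10
rng = np.random.default_rng(0)
xb = rng.standard_normal((nb, d)).astype(np.float32)
xq = rng.standard_normal((nq, d)).astype(np.float32)

# Ground truth from an exhaustive (exact) search.
flat = faiss.IndexFlatL2(d)
flat.add(xb)
_, gt = flat.search(xq, k)

# An IVF index must first be trained; nprobe then trades speed for recall.
quantizer = faiss.IndexFlatL2(d)
ivf = faiss.IndexIVFFlat(quantizer, d, 256)
ivf.train(xb)
ivf.add(xb)
for nprobe in (1, 4, 16, 64):
    ivf.nprobe = nprobe
    _, approx = ivf.search(xq, k)
    recall = np.mean([len(set(a) & set(g)) / k for a, g in zip(approx, gt)])
    print(f"nprobe={nprobe:3d}  recall@{k}={recall:.3f}")
```

Every point in that sweep is an experiment someone has to run, evaluate, and re-run whenever the data changes.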

(2) Approximation techniques require a time-consuming indexing step:

  • Changes to the dataset cannot be made as frequently because the index may take hours to rebuild.

(3) Approximation techniques sacrifice accuracy.

While a modest drop in accuracy may be acceptable in some applications, many others are closely tied to revenue, where improvements in accuracy translate into real dollars:

  • Improved accuracy in ad targeting can measurably improve click through rate.
  • Improved accuracy in product recommendation can measurably increase conversion rates.
  • Improved content recommendations can measurably improve user retention.

Another Way

Advances in GPU technology, as well as the release of Nearist’s own hardware and VSX cloud service, are enabling exact k-NN search at a much larger scale than has previously been feasible. If your team is building a feature requiring k-NN search, consider whether the opportunity cost, indexing time, and lost accuracy of an ANN technique might outweigh the cost of a simpler, hardware-accelerated, exact search solution.
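For a sense of how simple exact search can be at the software level on commodity accelerators, here is a hedged sketch using PyTorch on a GPU (toy data; Nearist’s VSX interface differs):

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Toy data standing in for a real descriptor set.
dataset = torch.randn(1_000_000, 128, device=device)
queries = torch.randn(32, 128, device=device)

# Exhaustive search: every query is compared against every dataset vector.
dists = torch.cdist(queries, dataset)    # (32, 1_000_000) Euclidean distances
top = dists.topk(k=100, largest=False)   # the 100 nearest neighbors per query
print(top.indices.shape)                 # torch.Size([32, 100])
```

There is no index to build or tune, and the results are exact; the only cost is the raw distance computation, which is exactly what dedicated hardware accelerates.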

[Image: Nearist’s VSX Card]

Thanks for reading! We’d really appreciate it if you recommend this post (by clicking the ❤ button) so other people can find it.

P.S. If you enjoyed this article, please consider checking out Nearist.ai, or get in touch to discuss how we can help accelerate your machine learning efforts.
