How the World Looks to Your Phone
[Cross-posted from a Quora answer I wrote here.]
One of Foursquare’s most useful features is its ability to send you just the right notification at just the right time, whether you’re visiting a new restaurant, arriving in a city, or hanging out with old friends at the neighborhood bar:
We take a lot of pride in our location technology (also known as Pilgrim) being the best in the industry, enabling us to send these relevant, high-precision, contextual notifications.
Pilgrim is actually not just a single technology, but a set of them, including:
- The Foursquare database (7 billion check-ins at 80 million places around the world)
- Stop detection (has the person stopped at a place, or is the person just stopped at a traffic light?)
- “Snap-to-place” (given a lat/long, wifi, and other sensor readings, at which place is the person located?)
- Client-side power management (do this all without draining your battery!)
- Content selection (given that someone has stopped at an interesting place, what should we send them?)
- Familiarity (has the person been here before? have they been in the neighborhood? or is it their first time?)
- (and much more…)
We could write a whole post about each of these, but perhaps the most interesting technology is “snap-to-place.” It’s a great example of how our unique data set and passionate users allow us to do things no one else can do.
We have these amazing little computers that we carry around in our pockets, but they don’t see the world in the same way that you and I do. Instead of eyes and ears, they have GPS, a clock, wifi, bluetooth, and other sensors. Our problem, then, is to take readings from those sensors and figure out which of those 80 million places in the world that phone is located.
Most companies start with a database of places that looks like this:
(That’s Washington Square Park in the middle, with several NYU buildings and great coffee shops nearby.)
For every place, they have a latitude and longitude. This is great if your business is giving driving directions or making maps. But what if you want to use these pins to figure out where a phone is?
The naive thing to do is to just drop those pins on a map, draw circles around them, and say the person is “at” a place if they are standing inside the circle. Some implementations also resize the circles based on how big the place is:
This works fine for big places like parks or Walmarts. But in dense areas like cities, airports, and malls (not to mention multi-story buildings, where places are stacked on top of each other), it breaks down. All these circles overlap and there’s no good way to tell places apart.
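To make the circle approach concrete, here is a minimal sketch of that naive matcher. The place names, coordinates, and per-place radii below are hypothetical, and the haversine formula is just the standard great-circle distance; this is an illustration of the idea, not Foursquare's code:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/long points, in meters."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def places_within_radius(lat, lon, places):
    """Naive matcher: a person is 'at' every place whose circle contains them.

    `places` is a list of (name, lat, lon, radius_m) tuples, where radius_m
    is a made-up per-place size estimate (bigger for parks than for cafes).
    """
    return [name for name, plat, plon, r in places
            if haversine_m(lat, lon, plat, plon) <= r]

# Hypothetical pins around Washington Square Park.
places = [
    ("Washington Square Park", 40.7308, -73.9973, 200),
    ("Coffee Shop A",          40.7312, -73.9967, 30),
    ("Coffee Shop B",          40.7313, -73.9966, 30),
]

# One reading in a dense block falls inside several overlapping circles,
# so the naive matcher can't tell the places apart.
print(places_within_radius(40.7312, -73.9967, places))
```

Running this on a point near the two coffee shops returns all three places at once, which is exactly the ambiguity described above.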
So if that’s not working, you might spend a bunch of time and money looking at satellite photos and drawing the outline of all the places on the map:
This is incredibly time consuming, but it’s possible. Unfortunately, our phones don’t see the world the way a satellite does. GPS bounces off of buildings, giving noisy readings and unreliable accuracy estimates. Different mobile operating systems have different wifi and cell tower maps and translate those in different ways into latitude and longitude. And in multi-story buildings, these polygons sometimes encapsulate dozens of places stacked vertically. The world simply doesn’t look like nice neat rectangles to a phone.
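The polygon approach boils down to a point-in-polygon test against each traced footprint. A minimal ray-casting sketch, using a hypothetical rectangular footprint and treating lat/long as planar (a fine approximation at city scale), shows why bounced GPS breaks it: a reading a few meters outside the true outline simply fails the test.

```python
def point_in_polygon(lat, lon, polygon):
    """Ray-casting (even-odd) point-in-polygon test.

    `polygon` is a list of (lat, lon) vertices. Casts a horizontal ray
    from the query point and counts how many edges it crosses.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        lat1, lon1 = polygon[i]
        lat2, lon2 = polygon[(i + 1) % n]
        if (lat1 > lat) != (lat2 > lat):  # edge spans the query latitude
            cross_lon = lon1 + (lat - lat1) * (lon2 - lon1) / (lat2 - lat1)
            if lon < cross_lon:
                inside = not inside
    return inside

# Hypothetical footprint traced from a satellite photo.
footprint = [(40.7300, -73.9990), (40.7300, -73.9950),
             (40.7316, -73.9950), (40.7316, -73.9990)]

print(point_in_polygon(40.7308, -73.9973, footprint))  # clean reading: inside
print(point_in_polygon(40.7320, -73.9973, footprint))  # GPS bounce: outside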
So what does Foursquare do? Well, our users have crawled the world for us and have told us more than 7 billion times where they’re standing and what that place is called. Each time they do, we attach a little bit more data to our models about how those places look to our phones out in the real world. To our phones, the world looks like this:
This is just a projection into a flat image of a model with hundreds of dimensions, but it gives an idea of what the world actually looks like to our phones. We use this and many other signals, (like nearby wifi, personalization, social factors, and real-time check-ins) to help power the push notifications you see when you’re exploring the city. Glad you’re enjoying them!
Interested in the machine learning, search, and infrastructure problems that come with working with such massive datasets on a daily basis? Come join us!
– Andrew Hogue, Blake Shaw, Berk Kapicioglu, and Stephanie Yang