Introducing Geo Street Talk

Making getting to places in NYC a little easier.

Over the years of living in New York City and getting around exclusively by public transit, I've built a spatial memory cache of the city in my head. This cache contains just enough spatial information to move without relying on the richer, more explicit turn-by-turn navigation our smartphones offer.

For example, if I were in Times Square and had to get to Central Park, I'd know to head north; if I were in Dumbo and had to get to Prospect Park, I'd have a general sense of the direction to go without checking my phone.

Let's say you had to go to some place in New York City and were given two addresses for it:

111 8th Avenue in Manhattan
8th Avenue between 15th & 16th Streets in Manhattan

Address #1 requires me to pull out my phone and enter the address into a maps app to get a general sense of where I need to go.

Address #2, on the other hand, contains that general sense of direction pre-baked into its representation. For someone even vaguely familiar with NYC, this can make the decision to move that much easier.

I have lived in NYC for ~10 years, and however awesome the grid system is, it's hard to maintain a spatial memory cache good enough to recall that 111 8th Avenue is between 15th and 16th Streets.

Let’s create another hypothetical.

Say, in an autonomous-vehicle future, you needed to get somewhere quickly. Wouldn't it be great if you could just hop into the vehicle and tell it to go to West 3rd between Sullivan and Thompson?

Vehicle autonomy would be that much more accessible if the user experience involved a simple voice command telling your vehicle to move to a destination, the same way you would direct a taxicab.

Today, this is not possible because map providers like Google store addresses in a format that limits representations to individual house numbers, like 111 8th Avenue, which happens to be Google NYC's office address. ;)

Geo Street Talk

Geo Street Talk (GST) enables a slightly more intuitive way of communicating points on a street. We created GST to convert any location coordinate along any NYC street to:

“On Street” between “From Street” and “To Street” 
On-Street between From-Street and To-Street

…pretty much the usual way anyone casually expresses a location in any city. If the coordinate happens to fall on a street intersection, GST outputs that too:

You are at the Intersection of X and Y street
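Under the hood, the forward lookup amounts to finding the street segment nearest a coordinate and reading off its street names. Here is a minimal sketch of that idea in Python; the sample segments, coordinates, and function names are all hypothetical illustrations, not GST's actual implementation:

```python
import math

# Hypothetical sample of street segments: each segment knows the street it
# lies on, its two cross streets, and its endpoint coordinates (lon, lat).
SEGMENTS = [
    {"on": "8th Avenue", "from": "W 15th Street", "to": "W 16th Street",
     "ends": ((-74.0029, 40.7401), (-74.0023, 40.7410))},
    {"on": "W 3rd Street", "from": "Sullivan Street", "to": "Thompson Street",
     "ends": ((-74.0005, 40.7295), (-73.9995, 40.7298))},
]

def _dist_to_segment(p, a, b):
    """Distance from point p to line segment a-b (simple planar approximation)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamped to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def geo_street_talk(lon, lat):
    """Return the GST-style phrase for the segment nearest the coordinate."""
    seg = min(SEGMENTS, key=lambda s: _dist_to_segment((lon, lat), *s["ends"]))
    return f'{seg["on"]} between {seg["from"]} and {seg["to"]}'

print(geo_street_talk(-74.0026, 40.7405))
# 8th Avenue between W 15th Street and W 16th Street
```

A real implementation would search thousands of LION segments with a spatial index rather than a linear scan, but the output is the same casual phrase.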

GST does the reverse too (somewhat janky at the moment). When you provide an On-Street, From-Street, and To-Street as inputs, GST returns the center of that street segment, so that this supposed autonomous vehicle can start moving on a simple voice command.
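The reverse direction can be sketched as a lookup from street names to a segment's endpoints, returning their midpoint as the destination. Again, the data and names below are hypothetical stand-ins, not GST's actual code:

```python
# Hypothetical lookup table: (on-street, from-street, to-street) -> the
# segment's endpoint coordinates as (lon, lat) pairs.
SEGMENTS = {
    ("W 3rd Street", "Sullivan Street", "Thompson Street"):
        ((-74.0005, 40.7295), (-73.9995, 40.7298)),
}

def segment_center(on_street, from_street, to_street):
    """Return the midpoint of the named street segment as (lon, lat)."""
    # The two cross streets are unordered in speech, so try both orientations.
    for key in ((on_street, from_street, to_street),
                (on_street, to_street, from_street)):
        if key in SEGMENTS:
            (ax, ay), (bx, by) = SEGMENTS[key]
            return ((ax + bx) / 2, (ay + by) / 2)
    raise KeyError("unknown street segment")

print(segment_center("W 3rd Street", "Sullivan Street", "Thompson Street"))
```

That midpoint is the coordinate a vehicle (or any routing service) could actually navigate to.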

GST is built on top of the NYC Department of City Planning's LION file, a dataset that describes every NYC street segment and intersection.

GST can be built for any city that has similar data about its streets and intersections.

Try out Geo Street Talk here

GST is a work in progress, and we hope that with further development, feedback, and support, any digital service that needs street-level location intelligence can consume GST to output addresses readable by the Humans of New York in their apps or websites. We plan to integrate GST with voice platforms such as Google Speech and Amazon Alexa to allow a truly conversational geo-interface.

GST even lends itself to a more intuitive method of digitally surveying streets for citywide maintenance, a.k.a. our SQUID project.

The SQUID project, towards a new digital standard for citywide street and bike lane maintenance

We even have a standup routine to go along with GST. For the next iteration, we will aim for 11 up and 1 over, sans obscenities. ;)

Let us know (in the comments) how you plan to use it and how we can improve.

Geo Street Talk was built by Manushi Majumdar and Aaron D'souza, bona fide street data heroes passionate enough about cities, streets, and data to deliver this project alongside busy grad school deadlines and job hunts.

Special thanks to Akshay Penmatcha, Nicola Machitela, Felipe Diego, Sichen Tang & Geoff Perrin.