The Symphony of the Sleepless City

Tracking the sounds of New York City.

Nick Ey
VisUMD
4 min read · Oct 31, 2022


Image by Felix Dilly from Pixabay.

The bass-line rumble of traffic, the shrill wail of an emergency vehicle, and the syncopated rhythms of construction equipment fill the air of America's most densely populated city: New York, New York. For years, New York City (NYC) has been known as "the city that never sleeps," an endearing nickname for its 24/7 street vendors, clubs, and restaurants. But such liveliness comes at a cost, namely the constant buzz of noise littering the air, meandering through the streets and lingering in the ears of a weary, overstimulated populace. Noise pollution is nothing new, but recent legislation across the world's largest cities suggests that its impacts are being felt, and that improvements are needed to protect the well-being of those who live there.

NYC's approach to reducing noise has primarily centered on noise codes and quiet areas. These codes limit a variety of activities, from boom-box bans to construction cancellations, and everything in between. While these measures have been widely adopted across the city, a team of researchers from NYU argues that more can be done, and their new audio-sensing framework, Urban Rhapsody, may be the missing piece of the puzzle.

In collaboration with the Sounds of New York City (SONYC) team, a group dedicated to capturing exactly what its name suggests, researchers on the Urban Rhapsody project are using audio samples from various boroughs of NYC to identify which instruments make up the symphony of ever-present background noise.

The spatial distribution of sensors in New York City.

If this data could be effectively sorted through and tagged, the resulting noise patterns could inform a variety of civil, legal, and infrastructure solutions to combat noise pollution.

To start, the researchers began analyzing the sound clips captured by sensors placed around the city. These sensors are designed to capture ten-second bursts of audio at varying intervals, ensuring a representative sample of the city soundscape throughout the day. Over the past five years, the sensors have collected more than 60 terabytes of data, the equivalent of roughly 15 million streaming-quality songs.
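That comparison checks out with some quick back-of-envelope math, assuming a typical streaming-quality song weighs in at about 4 MB (an illustrative figure of my own, not one from the paper):

```python
total_bytes = 60 * 10**12   # 60 terabytes of captured audio
song_bytes = 4 * 10**6      # assumed ~4 MB per streaming-quality song

print(f"{total_bytes / song_bytes:,.0f} songs")  # -> 15,000,000 songs
```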

The sheer scale of the captured audio presented the Urban Rhapsody and SONYC teams with two significant problems. First, interpreting audio clips is a manual, labor-intensive task. Second, without a frame of reference, such as a map, calendar, or other visual indicator, analysts lose all context for their reviews, potentially skewing the results. It was therefore essential for the Urban Rhapsody team to build a system that could sift through the captured audio and tag it for them. But how do you build such a system?

The simple answer is: you get help from whomever you can. Enter community-driven analysis. By building a system that is accessible to anyone, you can outsource the analysis to laypeople. Armed with a web-connected device and a pair of headphones, any NYC resident can access the audio logs and offer an interpretation of what they are hearing; they just need an app to do so. Through the magic of data visualization, the Urban Rhapsody team transformed a database of recordings into a visual medium that users can interact with. One may choose to focus on a specific time, place, or day of the year, or sort through previously tagged data to spot trends and learn from them.
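Under the hood, that kind of exploration can be pictured as filtering a giant table of clips. Here is a minimal sketch in Python with pandas, using invented column names and values rather than SONYC's actual schema:

```python
import pandas as pd

# Hypothetical table of ten-second clips; the sensor IDs, timestamps,
# and tags below are made up for illustration.
clips = pd.DataFrame({
    "sensor_id": ["mn-03", "bk-11", "mn-03"],
    "timestamp": pd.to_datetime([
        "2022-07-04 09:10", "2022-07-04 22:40", "2022-07-05 07:55",
    ]),
    "tag": ["jackhammer", "siren", "bird song"],
})

# "Focus on a specific time and place": morning clips from one
# sensor that analysts have already tagged.
morning = clips[
    (clips["sensor_id"] == "mn-03")
    & clips["timestamp"].dt.hour.between(6, 10)
    & clips["tag"].notna()
]
print(morning)
```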

The user interface of the Urban Rhapsody system.

As more feedback is provided, the Urban Rhapsody system builds a model of the sounds and audio clips. This model actively learns and grows more accurate with every input and, in theory, will one day be able to identify sounds on its own.
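One common way to build such a learn-from-every-tag loop is active learning with uncertainty sampling: the model asks a human to tag the clip it is least sure about, then retrains. The sketch below illustrates that general idea with scikit-learn on made-up "embeddings"; it is a generic illustration, not the authors' actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical audio embeddings: 500 clips, 16 features each.
X = rng.normal(size=(500, 16))
labels = (X[:, 0] > 0).astype(int)  # stand-in ground truth: "siren or not"

# Seed the model with a handful of human-tagged clips from each class.
seed_pos = np.flatnonzero(labels == 1)[:5]
seed_neg = np.flatnonzero(labels == 0)[:5]
labeled = list(seed_pos) + list(seed_neg)

for _ in range(20):
    model = LogisticRegression().fit(X[labeled], labels[labeled])

    # Uncertainty sampling: find the clip the model is least sure
    # about and ask a human analyst to tag it next.
    proba = model.predict_proba(X)[:, 1]
    uncertainty = np.abs(proba - 0.5)
    uncertainty[labeled] = np.inf   # skip clips that are already tagged
    next_clip = int(np.argmin(uncertainty))
    labeled.append(next_clip)       # the "analyst" supplies labels[next_clip]

print(f"model trained on {len(labeled)} human-tagged clips")
```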

The capabilities of the Urban Rhapsody tool were put to the test in two case studies: one focusing on the identification of construction noise, the other tracking the migratory patterns of songbirds. When filtering for construction noise, the researchers found a significant correlation between days with increased construction noise and New York City's "311" non-emergency noise complaints, demonstrating the tool's ability to pick out distinctive sounds. When filtering for bird songs, the researchers found they could track where migratory songbirds took up residence and when during the year they arrived and departed, demonstrating the tool's spatial acuity, its audio-parsing features, and the potential for machine learning to surface audible trends.
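At its core, the construction-noise finding is a comparison of two daily time series: tagged construction clips versus 311 complaints. A toy version of that comparison in pandas, with invented numbers standing in for the study's data:

```python
import pandas as pd

# Invented daily counts standing in for the study's actual data.
days = pd.date_range("2022-06-01", periods=7, freq="D")
daily = pd.DataFrame({
    "construction_clips": [12, 30, 28, 5, 33, 8, 26],  # tagged clips per day
    "complaints_311":     [4, 11, 9, 2, 13, 3, 10],    # noise complaints per day
}, index=days)

# Pearson correlation between detected construction noise and
# 311 complaints, computed day by day.
r = daily["construction_clips"].corr(daily["complaints_311"])
print(f"correlation: {r:.2f}")
```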

Though these examples demonstrate the immense capabilities of the Urban Rhapsody framework, the researchers at SONYC hope to continue developing the system and its machine learning capabilities. They envision that, in coordination with researchers from other fields, Urban Rhapsody can provide the insights needed to drive positive change in human-centered machine learning, acoustics, and urban science. With the Urban Rhapsody framework, the researchers have handed policymakers the musical notation and instrumentation of NYC's noise pollution. The city now just needs the right conductor to manage the swirl of sound and produce a well-balanced symphony.

If you are interested in learning more about the SONYC project, or even participating in the analysis of audio clips, please visit their website at https://wp.nyu.edu/sonyc/.

References:

  • Rulff, J., Miranda, F., Hosseini, M., Lage, M., Cartwright, M., Dove, G., Bello, J., & Silva, C. (2022). Urban Rhapsody: Large-scale exploration of urban soundscapes. arXiv:2205.13064. https://arxiv.org/abs/2205.13064
