It’s Only 6 Degrees

Exploring the NYC Floodplain with VR and augmented haptics

Iltimas Doha
5 min read · Jan 31, 2018

For this project I looked at the 2020 100-year floodplain and the 2050 500-year floodplain maps provided by FEMA and the New York City government through the NYC Open Data portal. A floodplain map shows the areas that would flood in a large storm of a given likelihood: the 100-year floodplain is the flooding expected from a “once-in-100-years” storm, meaning a storm with a 1% chance of occurring in any given year. Having lived through Hurricane Sandy in 2012, I saw that this data had the potential to reshape urban planning. But even as someone who experienced Sandy, I felt the map did little to nothing to help me understand the direct impact of the flooding.

2050 500-year floodplain
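
The “once-in-100-years” framing actually understates the risk, because that 1% chance compounds year over year. As a quick illustration (my own arithmetic, not part of the project or the dataset), the snippet below shows that over a 30-year span, roughly the length of a mortgage, the odds of seeing at least one such flood are about one in four.

```csharp
using System;

// Quick illustration (not project code): a "100-year" flood has a 1% chance
// of occurring in any given year, so the odds accumulate over longer spans.
class FloodOdds
{
    static void Main()
    {
        double annualChance = 0.01;  // 1% per year
        int years = 30;              // roughly the length of a mortgage
        double atLeastOnce = 1 - Math.Pow(1 - annualChance, years);

        // Prints roughly 26%
        Console.WriteLine(
            $"Chance of at least one 100-year flood in {years} years: {atLeastOnce:P1}");
    }
}
```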

Before setting out on my own journey to build a closer relationship to the floodplain, I looked at what other organizations and artists were doing to help people better understand the impact of natural disasters.

Precedents

ProPublica’s WebGL Flood Map of Sandy

An interactive web infographic that shows which buildings were affected by Hurricane Sandy. It was state of the art in WebGL and data visualization at the time, and ProPublica also wrote an article on how they built the interactive: https://www.propublica.org/nerds/how-we-made-the-3-d-new-york-city-flood-map

Landscape Metrics’ Advancing Waters

An interactive web infographic that allows users to manipulate the amount of flooding and see the areas impacted, along with a running count of the people, schools, and transportation and waste facilities affected.

Jeffrey Linn’s N. Y. Sea, 100' Sea Rise Map Poster

This detailed map shows NYC reduced to large bays and small islands, as it would appear in the case of a 100-foot sea level rise.

Eve Mosher’s HighWaterLine

Artist Eve Mosher drew the floodplain line in chalk through Brooklyn, bringing otherwise intangible data into the real world and building that understanding with local communities.

Synthesizing

After gathering relevant precedents, I began mapping them out as prototypes using Stephanie Houde and Charles Hill’s Model of What Prototypes Prototype.

Here I organized the projects based on how strongly they lean toward “Role”, “Look and Feel”, and “Implementation” (“Tech” in the photo).

Gaps in purple

Once they were mapped out, I began to label and identify key features the projects had that made them effective.

The key features were high impact, dynamism, time frame (short or long), and readability.

I then found a gap in projects demonstrating the impact of flooding in NYC, a place where I could add to the conversation, and formed this driving question:

How might we explore New York City’s susceptibility to flooding, via FEMA Floodplain maps, to create an experience that shows the magnitude of vulnerability in a tangible form?

I returned to the Model of What Prototypes Prototype as a framework to ideate on what the final project would look like.

Here I brainstormed different approaches to the Technology, Role, and Look and Feel prototypes. I then ordered the ideas in each category from most lo-fi to most hi-fi, noting how each iteration moved toward another category. These nine iterations spiral toward the center, which represents the ideal product, balancing Tech, Look and Feel, and Role.

For Role, three iterations refined what the user could take away from the experience, ranging from visualizing climate change to physically feeling its impact. For Tech, the range starts with generating 3D environments, continues to VR, and ends with AR. Finally, for Look and Feel, the levels of interaction start with adding flooding, continue to manipulating it, and end with being able to fully explore an environment.

The first prototype was a Wizard of Oz prototype, exploring VR/3D environments as well as the ability to manipulate water levels. Using paper and transparency film, I created a headset with a static view of a block, and the user then slid the transparency to manipulate the water level. After validating that this approach was worth the time to build with real technology, I moved toward a more refined prototype.

The final prototype attempts to integrate augmented haptics and move the environment into VR.

My inspiration for looking into augmented haptics came from a presentation by Daniel Leithinger, a researcher in the Tangible Media Group at the MIT Media Lab. The talk presented ways in which physical presence and forces can augment and enhance computational experiences. I decided to extend this idea to add another dimension to the floodplain data. In this project, the data is water, so what better element to add than water itself!

Utilizing Unity, the Mapbox API, and Google Cardboard, this final prototype places the user on a New York City block with rapidly rising water levels. While they watch the water rise around and above them in the headset, they stand in a tub as water is poured over their feet.
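
Since the original source code was lost (more on that below), here is a minimal sketch of the core behavior: in Unity, the rising water can be as simple as a script that moves a flat water plane upward each frame. Everything in it is a hypothetical reconstruction; the class and field names (FloodRise, waterPlane, riseSpeed, maxFloodHeight) are placeholders, the block geometry in the real prototype came from the Mapbox integration, and the Google Cardboard viewer provided the stereo view.

```csharp
using UnityEngine;

// Minimal sketch (not the original, lost source): raise a flat "water" plane
// over time until it reaches a target flood depth. Names and values here are
// illustrative placeholders.
public class FloodRise : MonoBehaviour
{
    [SerializeField] private Transform waterPlane;         // flat mesh covering the block
    [SerializeField] private float riseSpeed = 0.25f;      // meters per second
    [SerializeField] private float maxFloodHeight = 3.0f;  // target depth, e.g. from the floodplain data

    void Update()
    {
        // Move the plane upward each frame until it reaches the target height.
        float newY = Mathf.MoveTowards(
            waterPlane.position.y, maxFloodHeight, riseSpeed * Time.deltaTime);
        waterPlane.position = new Vector3(
            waterPlane.position.x, newY, waterPlane.position.z);
    }
}
```

In the actual build, a value like maxFloodHeight would be driven by the floodplain data for that block rather than hard-coded.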

Based on feedback from users, the prototype brings a more visceral and emotional element to this problem. The next step is to implement a more direct one-to-one mapping between the haptics and the data they reflect.

:(

Unfortunately, all of my source code and much of my documentation, including the app itself, a screen recording of the app, and documentation of the Wizard of Oz prototype, were lost when my hard drive became corrupted.

At least I still have the gif :)

--

Iltimas Doha

EYEBEAM Resident ’14-’15 · Parsons’ Design+Technology BFA ’15-’19 · Eid and Chill 🕋