Stonewall Forever

Arnaud Tanielian
Published in Team Stink
Jun 28, 2019

Building interactive experiences that live forever

In 2019, to commemorate the legacy of the Stonewall Riots, considered a galvanizing force in the fight for LGBTQ+ equality in the United States and around the world, Stink Studios partnered with The LGBT Community Center, with support from Google, to create Stonewall Forever, an immersive digital experience that features previously unheard perspectives from the LGBTQ+ community and expands access to key narratives from LGBTQ+ history.

At the center of the experience is the first “living monument” to 50 years of Pride, which users can explore through a website or augmented reality (AR) app. Users can learn about the impact of the Stonewall Riots on the past, present, and future of Pride and add their own reflection on LGBTQ+ history to the ever-growing monument (User Generated Content, or UGC).

Visit the website here, or download the AR app on the App Store or Google Play.

A safe and solid architecture

To tackle this challenge, we started by setting up an architecture optimized for static data, shared between the website and the AR app.

Behind the scenes, we have a staging environment using App Engine, with a CMS to update the content and moderate the UGC.

The production website is a Firebase application, with static data deployed on a bucket for the AR app to use. The videos are hosted on YouTube, and the images are hosted on a bucket using the Google Image API to serve optimized images.

To tie everything together, we used Docker for local development and deployment.

Staging environment

First, we created an App Engine application based on Django (Python), which made it easy to build an administration interface. On top of integrating well with the App Engine ecosystem, Django also offers static builds out of the box with django-bakery. We used Jinja2 templates, shared with the front-end via nunjucks-loader.
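
A minimal sketch of how that template sharing can be wired up, assuming a webpack build (the .njk extension and rule shape are illustrative):

```js
// webpack.config.js (partial): compile the shared templates for the browser.
// The .njk extension and loader options are assumptions.
module.exports = {
  module: {
    rules: [
      {
        test: /\.njk$/,
        use: ['nunjucks-loader'],
      },
    ],
  },
};
```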

Data is stored in an SQL database. Both the local and staging environments are plugged into it, so developers can work with the latest data and the client can see any change made in the CMS directly reflected on the staging environment.

Finally, App Engine allowed us to deploy different versions (prototypes, alpha, beta…) for review throughout the process without taking down the main stage.

Public environment

We use Firebase, which is an excellent solution for hosting static websites. In order to make sure it would work on launch day, we created a temporary Firebase instance so we could build and deploy to this instance for QA.

Because the AR app needed to be submitted to the different app stores (the App Store and Google Play) before the website went live, we duplicated all the static data on a bucket so the app could tap into it without waiting for the final website deployment.

Building much?

Building and deploying was one of the more challenging efforts of the project. Using Google Cloud Build and Docker, it took us multiple iterations to find the most optimized way to build and deploy. One of the main issues was the number of files: as UGC flowed in, the number of files to build and optimize increased as well. We ended up dividing the build into 3 builds, because a single build timed out after 10 minutes (even when we specified a longer timeout…):

  • Build #1 — Front-end. This build is triggered automatically when we push to the master branch of the project. It creates bundles (JS, CSS…) ready for production. Those files are zipped and put on a bucket.
  • Build #2 — HTML. In the CMS, we created a deploy button that triggers builds #2 and #3. This build uses django-bakery to generate all the HTML pages and JSON files needed for the applications. We had to optimize the bakery step: instead of making multiple SQL queries per page (which was considerably slowing down the build), we only made a few at the beginning of the process and passed the data down.
  • Build #3 — Optimization and deployment. Once all the HTML and JSON files are generated, we used critical.js to generate custom critical CSS for every HTML page and minified the HTML. As Node.js is single-threaded, we leveraged Worker Threads so the script could use as many workers (one core = one worker… in theory) as the machine could handle. On Cloud Build, it used over 32 workers! See the simplified sketch below and our gist for more details. Finally, we deployed the public folder to Firebase.
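
A simplified sketch of that worker pool, not the actual gist (the page list, paths, and critical options are illustrative):

```js
// critical-workers.js: split the baked HTML pages across one worker per CPU core
// and let each worker inline critical CSS with the `critical` package.
const os = require('os');
const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

if (isMainThread) {
  // Hypothetical list of generated HTML files, relative to the public/ folder
  const pages = require('./pages.json');
  const workerCount = Math.min(os.cpus().length, pages.length);
  const chunkSize = Math.ceil(pages.length / workerCount);

  for (let i = 0; i < workerCount; i++) {
    const chunk = pages.slice(i * chunkSize, (i + 1) * chunkSize);
    const worker = new Worker(__filename, { workerData: chunk });
    worker.on('message', (msg) => console.log(msg));
    worker.on('error', (err) => { console.error(err); process.exit(1); });
  }
} else {
  const critical = require('critical');

  (async () => {
    for (const page of workerData) {
      // Inline this page's critical CSS in place
      await critical.generate({ base: 'public', src: page, target: page, inline: true });
    }
    parentPort.postMessage(`Processed ${workerData.length} pages`);
  })();
}
```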

In the end, it now takes around 6 minutes to build and deploy over 2,000 optimized pages and 2,000 JSON files, when it was taking over 15 minutes before all of our optimizations 🚀

Front-end

Before diving into WebGL, let’s look a bit at the front-end. As mentioned before, in order to fully embrace static rendering, we shared templates with the back-end using Jinja2.

Therefore, it’s a full Vanilla JS application: no framework, only small libraries here and there. We used Redux to make our application state-driven, coupled with Immutable.
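
A minimal sketch of that state-driven setup (store shape and action names are illustrative):

```js
// store.js: a plain Redux store holding an Immutable Map
import { createStore } from 'redux';
import { Map } from 'immutable';

const initialState = Map({ route: 'home', monumentReady: false });

function appReducer(state = initialState, action) {
  switch (action.type) {
    case 'ROUTE_CHANGED':
      return state.set('route', action.route);
    case 'MONUMENT_READY':
      return state.set('monumentReady', true);
    default:
      return state;
  }
}

export const store = createStore(appReducer);

// Views subscribe to the store and render from state instead of keeping their own
store.subscribe(() => {
  console.log('route is now', store.getState().get('route'));
});
```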

We worked with Samsy to handle the WebGL part. He set up his own application and built an API as a bridge between the main front-end application and the WebGL one.

And as always, we’ve kept accessibility in mind (still ongoing) while building the website. With keyboard navigation, semantic tags, and aria-* attributes, we aimed to make this experience as accessible as possible for all users.

Performance is always a top priority. We split the code into multiple chunks: a vendor chunk containing all the libraries, as well as a main one to kick off the app. Every page was also split into its own chunk, containing the template and CSS of the page, injected at runtime. The WebGL application was split too, loaded only when you enter the monument.
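
A sketch of that lazy loading with a dynamic import (the module path, chunk name, and init() API are assumptions):

```js
// Load the WebGL chunk only when the user actually enters the monument.
async function enterMonument(container) {
  const { default: initWebGL } = await import(/* webpackChunkName: "webgl" */ './webgl/app');
  return initWebGL(container);
}
```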

We also leveraged the Google Image API: every time a user uploads an image, we store the picture on a bucket and use the get_serving_url() function (see doc) to serve a dynamic URL to the website and AR app. Therefore, depending on the user context (device, viewport size, connection quality, WebP support…), we can serve the most optimized image.
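
A hedged client-side sketch of requesting an appropriately sized variant from such a serving URL (the =s size suffix is the Images API sizing parameter; the helper and cap are illustrative):

```js
// Append an Images API size parameter to a get_serving_url() URL so the browser
// only downloads roughly the pixels it needs.
function responsiveImageUrl(servingUrl, displayWidth) {
  const size = Math.min(1600, Math.ceil(displayWidth * (window.devicePixelRatio || 1)));
  return `${servingUrl}=s${size}`;
}

// Usage (illustrative):
// img.src = responsiveImageUrl(story.imageServingUrl, img.clientWidth);
```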

Finally, we turned the JS app into a PWA so users can experience the website offline on their phone.
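
A minimal sketch of the PWA hook, assuming a service worker file named sw.js:

```js
// Register a service worker so previously visited pages and assets can be served
// from cache when the user is offline.
if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    navigator.serviceWorker
      .register('/sw.js')
      .catch((err) => console.warn('Service worker registration failed', err));
  });
}
```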

A quick Lighthouse audit:

Make it dynamic: Cloud Functions to the rescue

As users can submit their own piece of content (UGC), we needed to create an endpoint so both the website and the AR app could submit user content to the CMS for moderation.

To prevent spamming, we used reCAPTCHA v3, which provides a token-based bot-prevention system out of the box, pluggable into any architecture.

To help moderators, we submit images to the Vision API and text to the Natural Language API. They flag any potentially inappropriate content, especially when it comes to language. But as human moderation is still the best way to go, these services are only there to flag content.
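
A hedged sketch of what such an endpoint can look like as a Cloud Function; the field names, score threshold, and flagging logic are assumptions, and the Natural Language text check is omitted for brevity:

```js
// functions/index.js: sketch of a UGC submission endpoint
const functions = require('firebase-functions');
const fetch = require('node-fetch');
const vision = require('@google-cloud/vision');

const visionClient = new vision.ImageAnnotatorClient();

exports.submitStory = functions.https.onRequest(async (req, res) => {
  const { token, imageUrl, text } = req.body;

  // 1. Verify the reCAPTCHA v3 token before accepting anything
  const verify = await fetch('https://www.google.com/recaptcha/api/siteverify', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: `secret=${process.env.RECAPTCHA_SECRET}&response=${token}`,
  });
  const { success, score } = await verify.json();
  if (!success || score < 0.5) {
    return res.status(403).json({ error: 'Failed bot check' });
  }

  // 2. Flag (never auto-reject) potentially inappropriate images for human moderators
  let flagged = false;
  if (imageUrl) {
    const [result] = await visionClient.safeSearchDetection({
      image: { source: { imageUri: imageUrl } },
    });
    const safe = result.safeSearchAnnotation;
    flagged = ['LIKELY', 'VERY_LIKELY'].includes(safe.adult) ||
              ['LIKELY', 'VERY_LIKELY'].includes(safe.violence);
  }

  // 3. Store { text, imageUrl, flagged } for review in the CMS (omitted here)
  return res.json({ status: 'pending-moderation', flagged });
});
```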

WebGL

WebGL developers, your Creative Partners

These types of projects aren’t very common. This one is definitely a unique piece, and everyone in the team contributed to every single aspect of it.

WebGL projects, because of their nature, provide more challenges for the team, but also more solutions. Samsy is a very talented developer who brings more than just technical skills to the table. From prototypes to creative input, developers are always there to take your projects to the next level ;)

Rendering a living monument

We ended up running 10,000 moving particles, with an occlusion and selective bloom pass, a scrolling camera path, color washing… all running smoothly, even on low-end devices.

With that in mind, the rendering pipeline of this experience is quite complex. Performance-wise, making it available on a wide range of devices was definitely one of the biggest technical challenges.

We solved these challenges with a ton of optimizations: instancing shapes, reducing WebGL calls and bindings, reducing overdraw, more efficient post-processing effects… all cooked up in a custom version of three.js:

Multi render target rendering (WEBGL_draw_buffers WebGL extension)
The app rendered multiple texture outputs per pass. This let us write several textures (diffuse + occlusion) in a single draw call, which was useful for the selective bloom post-processing effect.

This extension is useful for anyone who wants to set up deferred shading.
More about multi target rendering here: https://hacks.mozilla.org/2014/01/webgl-deferred-shading/
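
In raw WebGL 1 terms, the extension works roughly like this (gl, framebuffer, and the textures are assumed to be set up elsewhere):

```js
// Attach two textures to one framebuffer and declare both as draw buffers,
// so a single pass writes diffuse and occlusion.
const ext = gl.getExtension('WEBGL_draw_buffers');

gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
gl.framebufferTexture2D(gl.FRAMEBUFFER, ext.COLOR_ATTACHMENT0_WEBGL, gl.TEXTURE_2D, diffuseTexture, 0);
gl.framebufferTexture2D(gl.FRAMEBUFFER, ext.COLOR_ATTACHMENT1_WEBGL, gl.TEXTURE_2D, occlusionTexture, 0);

// The fragment shader can now write to gl_FragData[0] and gl_FragData[1]
ext.drawBuffersWEBGL([ext.COLOR_ATTACHMENT0_WEBGL, ext.COLOR_ATTACHMENT1_WEBGL]);
```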

Vertex array object (OES_vertex_array_object extension)
This was another extension that encapsulated all of a geometry’s attribute buffers (position, normal, custom…) into a single object.
It dramatically reduced the number of geometry-related bindings.
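
A rough sketch of the pattern in raw WebGL 1 (buffers and attribute locations are assumed to exist):

```js
// Record a geometry's attribute setup once in a VAO, then restore it with a single bind per draw.
const vaoExt = gl.getExtension('OES_vertex_array_object');

const vao = vaoExt.createVertexArrayOES();
vaoExt.bindVertexArrayOES(vao);

// Everything set while the VAO is bound gets captured by it
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.enableVertexAttribArray(positionLocation);
gl.vertexAttribPointer(positionLocation, 3, gl.FLOAT, false, 0, 0);

vaoExt.bindVertexArrayOES(null);

// At render time, one call replaces all the per-attribute bindings
vaoExt.bindVertexArrayOES(vao);
gl.drawArrays(gl.POINTS, 0, particleCount);
```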

Big Triangle post-processing
Instead of a quad, we used a big triangle to render 2D passes on screen.

More here: https://michaldrobot.com/2014/04/01/gcn-execution-patterns-in-full-screen-passes/
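
A sketch of the full-screen triangle in three.js (the post-processing material is assumed):

```js
// Three vertices that over-cover clip space: no diagonal seam, fewer vertices than a quad.
import * as THREE from 'three';

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.Float32BufferAttribute([-1, -1, 0, 3, -1, 0, -1, 3, 0], 3));
geometry.setAttribute('uv', new THREE.Float32BufferAttribute([0, 0, 2, 0, 0, 2], 2));

const fullscreenPass = new THREE.Mesh(geometry, postProcessingMaterial);
fullscreenPass.frustumCulled = false; // the triangle always covers the screen
```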

Recycled particles
As the monument is “infinite”, the particles were recycled depending on the camera height. They start from the bottom and move to the top, without any CPU intervention.
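
A hedged sketch of the idea in a vertex shader chunk: wrap each particle’s height into a window that follows the camera, entirely on the GPU (uniform and function names are made up):

```js
// Vertex shader chunk (illustrative): particles whose y falls behind the camera window
// are wrapped back to the top.
const recycleChunk = /* glsl */ `
  uniform float uCameraHeight; // updated from JS every frame
  uniform float uRange;        // height of the recycling window

  float recycledY(float y) {
    // keeps y inside [uCameraHeight - uRange / 2.0, uCameraHeight + uRange / 2.0]
    return mod(y - uCameraHeight + uRange * 0.5, uRange) + uCameraHeight - uRange * 0.5;
  }
`;
```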

Glslify
We used glslify imports to share common logic and common functions between different shaders. It works the same way as a classic JS import, but for shaders.
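
For instance, using glslify as a tagged template (assuming the glslify transform in the build), a shared noise function can be pulled in like a module; glsl-noise here is just an example dependency:

```js
const glsl = require('glslify');

const fragmentShader = glsl`
  #pragma glslify: snoise = require('glsl-noise/simplex/3d')

  varying vec3 vPosition;

  void main() {
    float n = snoise(vPosition);
    gl_FragColor = vec4(vec3(n), 1.0);
  }
`;
```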

Global uniforms system
Every shader extended a set of global uniforms such as time, scroll position, etc. It helped set a global state, unified across all shaders.

For instance, we used this system when a user enters a collection and sees the title screen (see later). It helped us create the “color-wash” effect while avoiding a post-processing pass: a global color is blended into every fragment shader output in the scene (360 background, interactive and non-interactive particles).
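
A sketch of such a registry: one shared set of uniform objects referenced by every material, so mutating them once per frame updates all shaders (names and values are illustrative):

```js
import * as THREE from 'three';

const globalUniforms = {
  uTime: { value: 0 },
  uScroll: { value: 0 },
  uGlobalColor: { value: new THREE.Color(0x000000) }, // drives the "color-wash" blend
};

function createParticleMaterial(vertexShader, fragmentShader) {
  return new THREE.ShaderMaterial({
    // Spreading copies references to the same { value } objects, so every material
    // stays in sync with the global state.
    uniforms: { ...globalUniforms, uSize: { value: 2.0 } },
    vertexShader,
    fragmentShader,
  });
}

function tick(time, scroll) {
  globalUniforms.uTime.value = time;
  globalUniforms.uScroll.value = scroll;
}
```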

Prototypes! Prototypes everywhere!

In the early stages of the project, we worked on a lot of prototypes, going back and forth between the creative team, the dev team and the client: defining the monument shape, the shape of one particle, the color system, the motion etc.

Clusters and distribution

Organizing the content within the monument in a logical and meaningful way was a big challenge. As the content is organized into collections, we translated that concept to 3D space, gathering the content into clusters. In between them, we left some “space” to introduce each collection with a title screen.

Within a cluster, we wanted the content to be close to the camera at all times. As the placement of the interactive particles is procedural, we distributed the content along the camera path so users would be able to find the content, even on small viewports.

Custom tools and editors

We quickly realized we needed some tools to be able to fully customize the experience.

Starting with debug panels, we were able to play around with simple attributes, such as colors or placement. In the end, we exposed more values, such as the bloom intensity.

But one of the most challenging parts was being able to control the camera path during the ascent through the monument. For this, Samsy created a custom view so we could edit the camera path, as well as the camera “look at”.
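
A sketch of how such an editable camera path can be represented, for example with Catmull-Rom splines whose control points the editor moves around (the points and function names are illustrative):

```js
import * as THREE from 'three';

const cameraPath = new THREE.CatmullRomCurve3([
  new THREE.Vector3(0, 0, 10),
  new THREE.Vector3(2, 20, 8),
  new THREE.Vector3(-2, 40, 10),
]);

const lookAtPath = new THREE.CatmullRomCurve3([
  new THREE.Vector3(0, 5, 0),
  new THREE.Vector3(0, 25, 0),
  new THREE.Vector3(0, 45, 0),
]);

// t in [0, 1] is driven by the scroll progress through the monument
function updateCamera(camera, t) {
  camera.position.copy(cameraPath.getPointAt(t));
  camera.lookAt(lookAtPath.getPointAt(t));
}
```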

#StonewallForever #Stonewall50

In honor of this Pride Month and World Pride | Stonewall 50, please consider supporting The LGBT Community Center, Heritage of Pride, It Gets Better Project, GLAAD, Human Rights Campaign, the Trevor Project, GLSEN, or any organization of your choice that benefits the LGBTQIA+ community.

If you would like to volunteer for the 2019 World Pride | Stonewall 50 March, sign up at this link.

And don’t forget to add your own story to the monument!

Arnaud Tanielian, Team Stink
Also known as Danetag on the Internets. Engineering Manager @ Shopify