WebSocket Network

Active Theory Case Studies
Sep 15, 2016

Paper Planes and Paint Party were interactive installations created for the big screen at Google I/O 2016. Just prior to the keynote, as attendees gathered and viewing parties around the world began streaming the event live, both experiences took the stage.

Attendees participating in the Paint Party installation were able to pick a color and throw paint splats from their mobile devices at the interactive screen. Audience members participating in the Paper Planes installation were able to catch, fold, stamp and throw virtual paper planes from their mobile devices onto the big screen, where the planes would instantly appear.

Paper Planes is now back online here.

The Challenge

7,000 attendees on site and 530 organized viewing parties in over 100 countries around the world, including over 1 million viewers in China, were expected to tune in for the event. We needed to ensure that every device and screen would connect reliably and that every message would send instantly.

For this global experience to work and perform at scale, we set up a network of WebSocket servers to manage connections, route messages and handle the magic behind the scenes.

What is a WebSocket?

A WebSocket is a web technology that allows for a persistent “socket” connection between a web browser and a server. Once established, both sides can send and receive messages directly over the network with very low latency and a message footprint of just a few bytes.
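As a rough sketch of the browser side (the endpoint and message shape here are hypothetical, not the event's actual protocol), opening a socket and exchanging messages looks like this:

```
// Open a persistent socket from the browser; once the handshake completes,
// either side can push messages at any time with only a few bytes of overhead.
const socket = new WebSocket('wss://example.com/socket'); // hypothetical endpoint

socket.addEventListener('open', () => {
  socket.send(JSON.stringify({ type: 'throw', angle: 42 })); // illustrative payload
});

socket.addEventListener('message', (event) => {
  console.log('server said:', event.data);
});
```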

The Network

The core of the network was the funnel server. All messages were routed through the funnel before being displayed on the 50 foot screen on stage. Messages from attendees at I/O, both paint splats and paper planes, were routed directly to the funnel. To cater to the global audience, additional relay servers handled connections from outside of the event location in Mountain View. Global participants were routed to the nearest relay server based on their location. These relay servers would then dispatch collections of messages to the funnel server.
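A minimal sketch of that relay-to-funnel pattern, assuming Socket.IO and hypothetical addresses (the production code was not published), might look like this:

```
// Relay sketch: accept regional client connections and periodically
// dispatch the collected messages to the central funnel server.
import { Server } from 'socket.io';
import { io as connectToFunnel } from 'socket.io-client';

const funnel = connectToFunnel('http://funnel.example.internal:3000'); // hypothetical
const relay = new Server(8080); // regional clients connect here

let batch = [];
relay.on('connection', (client) => {
  client.on('plane', (msg) => batch.push(msg)); // collect planes and splats
});

// Forward collections of messages to the funnel on a short interval.
setInterval(() => {
  if (batch.length > 0) {
    funnel.emit('batch', batch);
    batch = [];
  }
}, 100);
```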

Hardware

This server infrastructure was set up using Compute Engine on the Google Cloud Platform.

The main funnel server comprised 32 CPU cores, each serving a dedicated Socket.IO server on its own process and port. Relay servers, each with 16 CPU cores, were set up in Google Cloud Platform regions around the world to create subnetworks in us-central, us-east, asia-east, and europe-west.

Software

The WebSocket servers were Linux VMs running multiple Node.js processes, each serving Socket.IO on its own port. The number of processes scaled with the number of CPU cores.
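A minimal sketch of that per-core layout, using Node's cluster module (the base port and event names are assumptions, not the production setup):

```
// Fork one Socket.IO server per CPU core, each listening on its own port.
import cluster from 'node:cluster';
import { cpus } from 'node:os';
import { Server } from 'socket.io';

const BASE_PORT = 3000; // hypothetical base port

if (cluster.isPrimary) {
  // One worker per core, each told which port to bind.
  cpus().forEach((_, i) => cluster.fork({ PORT: String(BASE_PORT + i) }));
} else {
  const io = new Server(Number(process.env.PORT));
  io.on('connection', (socket) => {
    socket.on('plane', (msg) => io.emit('plane', msg)); // fan messages back out
  });
}
```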

Just like the client-side codebase, the servers also ran from a single codebase with custom modules and configurations set depending on each server’s function and region. Code was deployed across all servers in one go, allowing for rapid development and testing.
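The per-server configuration might have looked something like this in shape; the fields below are illustrative, since the actual modules were not described:

```
// Illustrative per-server configuration driven by environment variables.
const config = {
  role: process.env.ROLE || 'relay',          // 'funnel' or 'relay'
  region: process.env.REGION || 'us-central', // which subnetwork this serves
  funnelUrl: process.env.FUNNEL_URL || '',    // relays forward to this address
};

export default config;
```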

As connections were established, the server would register the client, allowing each end to keep track of the other. In the case of an intermittent network failure, broken connections would reconnect as soon as the network was restored. Each process could safely restart in just a few seconds, and all connections from mobile devices and screens to the server would be automatically re-established without the user noticing.
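On the client side, Socket.IO's built-in reconnection handles this pattern; the options shown are real Socket.IO client options, while the registration event and identifier are illustrative:

```
// Client sketch: reconnect automatically after network failures and
// re-register so the server can track this client again.
import { io } from 'socket.io-client';

const clientId = crypto.randomUUID(); // illustrative client identifier

const socket = io('https://relay.example.com', {
  reconnection: true,         // retry automatically when the link drops
  reconnectionDelay: 1000,    // wait 1s before the first retry
  reconnectionDelayMax: 5000, // back off to at most 5s between retries
});

// Fires on the first connection and again on every reconnection.
socket.on('connect', () => {
  socket.emit('register', { id: clientId });
});
```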

Outside of these servers, Google App Engine and Datastore were used to save and retrieve plane and stamp data. Google Cloud Storage was used to store all static assets.
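For persistence, a sketch with the Datastore client library might look like this (the Plane kind and its fields are hypothetical; the real schema was not published):

```
// Sketch: saving a thrown plane's stamp trail to Datastore.
import { Datastore } from '@google-cloud/datastore';

const datastore = new Datastore();

async function savePlane(stamps) {
  await datastore.save({
    key: datastore.key('Plane'), // incomplete key; Datastore assigns an ID
    data: { stamps, created: new Date() },
  });
}
```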

Geolocation

Google App Engine provides approximate geolocation data in each request's headers as a built-in service. This data, along with IP detection for known Google IP addresses, was used to determine whether a user was at the event or, if not, which city they were connecting from.
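App Engine exposes this data through its X-AppEngine-* request headers; a minimal sketch of reading them (the route and response shape are illustrative):

```
// Sketch: reading App Engine's geolocation headers on an incoming request.
import express from 'express';

const app = express();

app.get('/join', (req, res) => {
  const country = req.get('X-AppEngine-Country'); // e.g. 'US'
  const city = req.get('X-AppEngine-City');       // e.g. 'mountain view'
  res.json({ country, city });
});

app.listen(8080);
```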

With this location data, planes were caught, stamped and thrown across the globe, waiting to be caught again. The city and country location was stamped on the plane along the way.

In addition, there was no need to fiddle around with entering codes to establish the mobile-to-desktop connection. By simply loading the experience on a phone or desktop, the user connected right away to the same subnetwork based on their location.

Load Balancing

Real-time stats, including the number of concurrent connections and average response times, were maintained for each WebSocket server. Tracking this data allowed us to continually route new connections to the server with the most available capacity.

From our load tests, we recognized that each relay server was capable of handling 16,000 concurrent socket connections safely without any performance drop. Triggers were in place to switch over to additional relay servers instantly if any server reached the maximum number of concurrent connections.
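In sketch form, the routing decision is a least-connections pick with a hard ceiling; the stats shape and URLs below are illustrative:

```
// Sketch: route each new client to the relay with the most headroom.
const MAX_CONNECTIONS = 16000; // safe ceiling found in load tests

const serverStats = [
  { url: 'wss://relay-us.example.com', connections: 4200 },
  { url: 'wss://relay-eu.example.com', connections: 3900 },
];

function pickServer() {
  const open = serverStats.filter((s) => s.connections < MAX_CONNECTIONS);
  if (open.length === 0) {
    throw new Error('all relays at capacity'); // triggers would spin up more here
  }
  // Least connections means the most room for new clients.
  return open.reduce((a, b) => (a.connections <= b.connections ? a : b)).url;
}
```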

[Photo: Inside the production trailer. Graphics machines receiving socket messages and rendering both experiences to the big screen.]

The Result

The socket network performed well, handling 40,000 planes thrown during the 30-minute period preceding the keynote. Messages were throttled to a maximum number visible per second and rendered on the big screen by powerful graphics machines.
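A minimal sketch of that kind of display throttle (the per-second cap and render hand-off are hypothetical; the real limit was not published):

```
// Sketch: cap how many incoming messages reach the renderer per second.
const MAX_VISIBLE_PER_SECOND = 60; // hypothetical cap
let renderedThisSecond = 0;

setInterval(() => { renderedThisSecond = 0; }, 1000); // reset the window

function render(msg) {
  // Hand off to the graphics pipeline (stubbed here).
}

function onMessage(msg) {
  if (renderedThisSecond < MAX_VISIBLE_PER_SECOND) {
    renderedThisSecond += 1;
    render(msg);
  }
  // Messages over the cap are skipped on screen; this sketch simply drops them.
}
```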

If you threw a paper plane, you can be sure it made it to Google I/O.
