Alone with Others: shared online experiences, decentralised

Random Studio · Apr 19, 2021

Some counterintuitive choices allowed us to deliver a live online experience without complex backend architecture.

Sketching out our plans

Over a year into the Covid pandemic, we know the playbook: an event gets cancelled IRL (calling it “in real life” seems oddly ironic now), so we move it online. The challenge is making the virtual event more engaging than another fatigue-inducing Zoom call.

At worst, we get performers seemingly trapped inside actual cages made of smartphones so they can do video calls with their fans. At best, some artists have produced impressive stage shows inside the game Fortnite or mind-bending motion capture performances using custom software.

But not all audiences have virtual reality headsets hooked up to gamer-spec PCs. And not all performers have the expertise, time or budget for elaborate digital worldbuilding.

Nobody, it seems, has found the single and undisputed best technological approach to live events online; that leaves us free to experiment.

My name is Serious Constraint and I am Legion

This is the fun niche we found ourselves supporting: the Fashion Department of the Royal Academy of Fine Arts Antwerp wanted to present their 2020 graduation show, but online. These are students, not professionals, and their works are as diverse as they are. Much of the material and media would be delivered last-minute, leaving no time to meticulously craft hundreds of assets.

Cinema4D can do a lot… but how to translate this all onto the web?

The students needed a catwalk. The creatives on our team wanted high-fidelity 3D environments and interesting, lifelike animated models — in other words, the “worst of both worlds” when it comes to time and computing resources. Another constraint: it all needed to run in the browser (desktop and mobile), and with WebGPU nowhere near ready we had to be extremely cautious about how much 3D rendering we asked of each device.

Sketching some UX prototypes helped us to focus on the essentials: what would the user actually see, and what would they be able to do?

How would the browser cope with a 3-hour performance? And how would we recover (gracefully) if something went wrong?

Challenge 1: stream more vertices than you deserve

The “models” on our virtual catwalk would need to be dynamically animated and rendered. They needed to shimmy in a convincing way, although when representing 200+ designers we could cut ourselves some slack by going very abstract rather than lifelike. Once we had delightful moving clothes hangers to which the students could attach their works, we had to consider the resources left over for the environments.
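To give a flavour of what “dynamically animated” means here, a minimal BabylonJS sketch of a procedurally swaying stand-in model might look something like this. The mesh, sway amplitude and catwalk length are illustrative, not our production assets, and for brevity it is driven by local time here; in the show, everything referenced the shared clock, as described below.

```typescript
import { Engine, Scene, ArcRotateCamera, HemisphericLight, MeshBuilder, Vector3 } from "@babylonjs/core";

const canvas = document.getElementById("stage") as HTMLCanvasElement;
const engine = new Engine(canvas, true);
const scene = new Scene(engine);

new ArcRotateCamera("cam", Math.PI / 2, Math.PI / 3, 10, Vector3.Zero(), scene);
new HemisphericLight("light", new Vector3(0, 1, 0), scene);

// An abstract "clothes hanger" stand-in: a thin bar that sways and drifts.
const hanger = MeshBuilder.CreateBox("hanger", { width: 1.5, height: 0.1, depth: 0.1 }, scene);

scene.onBeforeRenderObservable.add(() => {
  const t = performance.now() / 1000;
  hanger.rotation.z = 0.2 * Math.sin(t * 1.3); // gentle shimmy
  hanger.position.x = ((t * 0.5) % 12) - 6;    // drift along the catwalk
});

engine.runRenderLoop(() => scene.render());
```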

The 3D avatars would carry the weight of the whole show on their, uh, shoulders.

Our approach here was to cheat, artfully. We pre-rendered high-fidelity sets to video (using proper 3D graphics software rather than rendering in-browser) and lined up the camera angles with the 3D scene. This was possible because our camera angles were predefined: we narrowed them down to four, which the user could switch between manually or leave to follow a “curated” view automatically.

Previsualising the combination of video and 3D rendering

That meant four HD video streams would need to play in sync with… predefined-yet-also-realtime animated models? And that’s where the real fun begins.
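Keeping a backdrop video locked to a shared wall clock turns out to be the simpler half of the problem. A sketch of that half (not our production code; the start time and drift threshold are placeholders):

```typescript
// Sketch: keep a backdrop <video> locked to the shared wall clock.
// SHOW_START and the drift threshold are placeholder values.
const SHOW_START = Date.parse("2021-06-12T14:00:00Z"); // hypothetical start time

function syncBackdrop(video: HTMLVideoElement): void {
  const target = (Date.now() - SHOW_START) / 1000; // seconds into the show
  if (Math.abs(video.currentTime - target) > 0.3) {
    video.currentTime = target; // hard seek only when drift gets noticeable
  }
  if (video.paused) {
    void video.play();
  }
}

// Re-check a few times per second; small drifts are left to the
// video element's own playback clock.
const backdrop = document.querySelector<HTMLVideoElement>("video#backdrop")!;
setInterval(() => syncBackdrop(backdrop), 250);
```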

Challenge 2: deterministic live playback

In a typical game engine, the “state” of everything in the current frame is derived from the state of the previous frame, transformed by a combination of player input, agent behaviour (e.g. enemies or NPCs) and simulated physics.
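In toy form, that contract might look like this (illustrative names, not any particular engine):

```typescript
// Toy version of the usual game-loop contract: the next frame's state is
// computed from the previous frame plus whatever has happened since.
interface WorldState { positions: number[] }
interface Inputs { steering: number[] }

function step(prev: WorldState, inputs: Inputs, dtSec: number): WorldState {
  return {
    positions: prev.positions.map((p, i) => p + (inputs.steering[i] ?? 0) * dtSec),
  };
}
```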

The usual approach to a shared online experience (think Fortnite or World of Warcraft), then, is to collect all of the player inputs, compute the state centrally and stream it back to every player. This is “many-to-many”, and it is not cheap, especially as you add thousands of users.

On the other end of the spectrum is “one-to-many”, for example a live-streamed video. This requires one authoritative, 100% reliable source.

We did neither of these.

Not many-to-many, not one-to-many, but something you might describe as “one show, many show instances”. We replicated the same show on every single viewer’s device, with entirely static assets, and with only the clock (time of day) determining the state of the virtual scene in front of you.

This seems to run counter to common notions of “live” and even “event”. Every viewer was seeing the same show, at the same time, but the pixels, sound and vertices on screen were indeed being generated on their own devices, in realtime. It was like delivering thousands of mechanical music boxes, each with the same music: if thousands of people start at the same time and turn the lever at precisely the same speed, they experience the “same” performance, but separately. Our website performed the same trick, but of course with the “lever” turned in precise unison, by the clock.
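In code terms, the trick boils down to making the scene a pure function of the clock. A minimal sketch (the field names and timings are illustrative, not our actual cue lengths):

```typescript
// Sketch: scene state as a pure function of wall-clock time. Every device
// evaluates the same function against the same clock, so there is nothing
// to synchronise. Numbers and field names are illustrative.
interface SceneState {
  lookIndex: number;      // which designer's look is on the catwalk
  walkerProgress: number; // 0..1 along the runway
}

const SECONDS_PER_LOOK = 45; // assumed cue length, not the real timing

function stateAt(clockMs: number, showStartMs: number): SceneState {
  const t = Math.max(0, (clockMs - showStartMs) / 1000);
  return {
    lookIndex: Math.floor(t / SECONDS_PER_LOOK),
    walkerProgress: (t % SECONDS_PER_LOOK) / SECONDS_PER_LOOK,
  };
}

// The renderer only ever asks "what time is it?",
// never "what happened last frame?".
```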

Consider the benefits of time travel

Having gleefully glossed over the existential implications of putting on a live show that was both a shared experience and unique to each viewer, we turn to some interesting practical consequences of this approach.

Firstly, it meant that our role as developers became less about “rule making” and more about “choreography”. We had to make student-provided show assets, BabylonJS 3D graphics, React components and HLS video streams dance in perfect harmony to a score written in JSON. We largely automated the process of turning carefully named folders and files (provided by the students) into a show with the appropriate sections and timings. There was therefore no Content Management System, and strictly speaking no need to manually plan around every asset delivered on the deadline just before the show started (we did end up doing some manual fine-tuning). The positions of models on the catwalk and the cueing of video content were interpolated purely in reference to the clock.
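To make that concrete, a JSON score and its clock-driven lookup could be as small as the following sketch (field names, IDs and timings are assumptions, not our actual show format):

```typescript
// Sketch of a JSON score and a clock-driven lookup. Field names, IDs and
// timings are assumptions, not the real show data.
interface Cue {
  startSec: number;                             // seconds from show start
  studentId: string;                            // which folder of assets to load
  camera: "front" | "side" | "top" | "curated"; // predefined angle
}

const score: Cue[] = [
  { startSec: 0,  studentId: "student-001", camera: "curated" },
  { startSec: 45, studentId: "student-002", camera: "front" },
  { startSec: 90, studentId: "student-003", camera: "side" },
];

const DEFAULT_CUE_LENGTH = 45; // used for the final cue only

// Which cue is active, and how far into it are we? Derived purely from the clock.
function cueAt(showSeconds: number): { cue: Cue; progress: number } | null {
  for (let i = score.length - 1; i >= 0; i--) {
    if (showSeconds >= score[i].startSec) {
      const end = i + 1 < score.length ? score[i + 1].startSec : score[i].startSec + DEFAULT_CUE_LENGTH;
      const progress = Math.min(1, (showSeconds - score[i].startSec) / (end - score[i].startSec));
      return { cue: score[i], progress };
    }
  }
  return null; // the show hasn't started yet
}
```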

Secondly, it meant we could “rehearse” the performance ahead of time, as often as we liked, to test and debug. This meant literally setting the clocks on our computers to the future, to check what the show would look like at various moments in its multi-hour duration.
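The same “time travel” can also be scripted with a single offset that the whole app reads its time through, rather than touching the system clock. A sketch (the query parameter is an assumption for illustration):

```typescript
// Sketch: a single "time travel" offset in place of changing the system
// clock. The ?timeTravel query parameter is an assumption for illustration.
const REHEARSAL_OFFSET_MS =
  Number(new URLSearchParams(location.search).get("timeTravel") ?? 0) * 1000;

function showClock(): number {
  return Date.now() + REHEARSAL_OFFSET_MS;
}

// e.g. ?timeTravel=7200 previews the show as it will look two hours from
// now, without changing the real clock.
```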

Thirdly, it made archiving and replay straightforward: just load up and run the whole show against a different clock. No need to store video frames.

By embracing rather than fighting against the constraints of the technology, we ended up with a unique format. Our curiously “distributed real-time” show helped Antwerp’s Fashion Department present their work on a worthy platform, all in the time of Covid.

Watch the full show as it happened (minus the interactivity).
