Pottermore: Discover your Patronus

Technical case study

Active Theory
Active Theory Case Studies
Dec 12, 2016 · 11 min read


Discover Your Patronus Case Study Video

Pottermore.com launched the Patronus experience in September 2016, allowing fans to access one of the most famous magical elements in J.K. Rowling’s Wizarding World.

This is the one and only authentic Patronus experience, devised with detailed input and direction from J.K. Rowling, who has had a very clear and well-documented vision of what a Patronus is, how it behaves and what it looks like, for almost twenty years. Pottermore’s challenge was to take that detailed, intricate description, a piece of J.K. Rowling’s imagination, and create an experience that worked on all screen sizes, ran smoothly on mobile, and had a file size small enough to work all over the world, in areas of varied internet penetration, on a wide variety of browsers and devices, while remaining fully accessible. We were happy to help Pottermore realise this vision.

Users are rewarded with their animal Patronus for taking part in the experience. This required the right balance of corporeal animal form and non-corporeal magical ‘particles’, as well as accurate animation cycles. For example: a hummingbird Patronus has to fly like a hummingbird, not like a magpie. And with all of those considerations, it still had to feel utterly magical for the fans.

Building an optimized infinite forest

One of the first Patronus scenes for Potter fans is in the third book, Prisoner of Azkaban. In that scene, Harry, in the Forbidden Forest, discovers that his Patronus is a stag. Inspired by this scene, without trying to recreate it, we wanted to give fans that feeling by creating an environment characterised by darkness and uncertainty, similar to the one Harry confronted. The environment is deliberately unpolished: a dark and misty forest, accompanied by an originally scored, binaural and responsive soundscape, suggesting both discovery and danger.

By summoning a single happy memory and then responding to a series of short, time-sensitive multiple-choice questions on screen, users discover and are ultimately presented with their Patronus, which has been chosen for them using J.K. Rowling’s own secret algorithm. The music, and the danger, subside when the user’s Patronus forms on screen.

In building the environment, we needed a solution that enabled an infinite forest, as the amount of time spent in it could vary from user to user depending on their responses.

The user had to be able to exit the forest and return to the lake scene at any moment, once the logic had determined a suitable Patronus. This environment needed to feel huge and endless. At the same time, it had to be efficient enough to run on mobile.

We came upon the solution of using repeatable tiles, which could be placed in any direction, much like the tiles in the board game Carcassonne. With this technique, the user can fly through the forest without the feeling of repetition.

Carcassonne uses repeatable tiles that can be placed in any direction.

Each tile was made up of a base-ground geometry, some custom tree geometry, and the positions of hundreds of trees that could be placed dynamically. Below is one of the tiles used in production.

Multiple camera paths were created for each possible orientation, meaning that you almost never see the same location in a forest tile twice from the same point of view.

Exporting all of the trees together in the geometry would have blown out our file size. Instead, we created two master trees, which are duplicated and placed following the exported positions with some randomized rotation and scale. Then, all of the geometries were merged together in order to reduce draw calls as much as possible. The process of loading and combining the geometries was executed on a separate thread, leaving the main thread available to load other assets and generate the WebGL scene earlier.
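Below is a minimal sketch of that duplicate-and-merge step, assuming three.js and the exported tree positions as a flat array. Names like masterTrees and treePositions are illustrative, not the production code.

```typescript
import * as THREE from 'three';
import { mergeGeometries } from 'three/examples/jsm/utils/BufferGeometryUtils.js';

// Duplicate the two master trees at the exported positions with random
// rotation and scale, then merge everything so a tile's trees can be drawn
// with as few draw calls as possible. (mergeGeometries was called
// mergeBufferGeometries in older three.js releases.)
function buildTileTrees(
  masterTrees: THREE.BufferGeometry[],
  treePositions: Float32Array // [x0, y0, z0, x1, y1, z1, ...] exported from Maya
): THREE.BufferGeometry {
  const copies: THREE.BufferGeometry[] = [];
  const up = new THREE.Vector3(0, 1, 0);
  const matrix = new THREE.Matrix4();
  const position = new THREE.Vector3();
  const rotation = new THREE.Quaternion();
  const scale = new THREE.Vector3();

  for (let i = 0; i < treePositions.length; i += 3) {
    // Alternate between the master trees for variety.
    const source = masterTrees[(i / 3) % masterTrees.length];

    position.set(treePositions[i], treePositions[i + 1], treePositions[i + 2]);
    rotation.setFromAxisAngle(up, Math.random() * Math.PI * 2);
    const s = 0.8 + Math.random() * 0.4;
    scale.set(s, s, s);
    matrix.compose(position, rotation, scale);

    copies.push(source.clone().applyMatrix4(matrix));
  }

  return mergeGeometries(copies);
}
```

In production this loading and combining ran on a separate thread, so the main thread stayed free to load other assets and build the WebGL scene earlier.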

Initially, we had made several tiles that would interchange, but the technique worked so well that we felt we could get away with one forest tile and one lake tile. This minimized the loading and generation time.

The Discovery Journey: Text in WebGL

The logic that determines a Patronus, created by J.K. Rowling herself, is the core content of the experience, so we aimed to present it in the best possible way. We decided to blend the text into the 3D environment and write completely custom text shaders to render the questions.

There are both pros and cons to this approach. With the power of shaders, any effect imaginable is possible, allowing you to create new and striking interactions; rendering and animation performance is also much higher than trying to animate text in the DOM, and no extra compositing between WebGL and other DOM elements is necessary. On the other hand, the quality of the text depends on the WebGL scene’s rendering quality, meaning text might appear blurry if that level doesn’t match the screen’s density. In the worst case, small text can become illegible. As our text was very large, it still looks clear even when we reduce the quality of the render to improve performance.
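The production pipeline used fully custom text shaders; as a much simpler stand-in, the sketch below shows the general idea of keeping text inside the WebGL scene by drawing it to an offscreen canvas and using it as a texture on a plane (three.js assumed; sizes and styling are illustrative).

```typescript
import * as THREE from 'three';

// Draw a question to an offscreen canvas and place it in the 3D scene,
// so no DOM compositing is needed for the text.
function createTextPlane(text: string): THREE.Mesh {
  const canvas = document.createElement('canvas');
  canvas.width = 1024;
  canvas.height = 256;

  const ctx = canvas.getContext('2d')!;
  ctx.fillStyle = '#bfe3ff';
  ctx.font = '96px serif';
  ctx.textAlign = 'center';
  ctx.textBaseline = 'middle';
  ctx.fillText(text, canvas.width / 2, canvas.height / 2);

  const texture = new THREE.CanvasTexture(canvas);
  const material = new THREE.MeshBasicMaterial({
    map: texture,
    transparent: true, // lets the forest show through around the glyphs
    depthWrite: false,
  });

  // Plane proportions match the canvas so the glyphs are not stretched.
  return new THREE.Mesh(new THREE.PlaneGeometry(4, 1), material);
}
```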

As we work more heavily in the 3D web, we’re noticing more and more that compositing CSS on top of WebGL slows things down. We have to leave a fairly large slice of the frame budget for this process, which is unfortunate, as the elements rendered with CSS are usually minor.

Edging ever closer with each WebGL project, we have to assume that we are not far from projects rendered 100% in WebGL, allowing for an optimal level of control over performance and a development process closer to that of a video game than a website.

Making authentic Patronuses

We were tasked with creating a great many individual Patronuses. We knew the timeline was going to be very tight, but we were determined to give the fans the best possible experience by making each one a fully animated 3D model.

Some of the Patronuses on the list were animals we had never heard of, and finding reference material was a challenge.

Others were fantastical creatures that are unique to J.K. Rowling’s Wizarding World, requiring our artists to rely heavily on J.K. Rowling’s detailed descriptions, the ultimate execution of which happily met with her approval.

Each Patronus had its own set of parameters, including the speed of the run cycle and the distance that it should cover in order to make the movement as realistic as possible.

Earlier in the year, we developed a tool called Antimatter, which makes the complex task of GPGPU coding flexible enough to enable creative coding. By writing specific particle behaviors, we developed a system that placed particles along the shape of the animal model and let them flow through a field of curl noise. We discovered that when animating the mesh, the natural changes in vertex positions from frame to frame created a natural velocity that enhanced the particle flow.
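Antimatter runs these simulations on the GPU; as a rough CPU illustration of the same idea, the sketch below advects a particle through a curl field built from a cheap analytic potential (a stand-in for proper 3D noise). None of these functions are from the actual Antimatter codebase.

```typescript
import * as THREE from 'three';

// Placeholder potential field; a real implementation would evaluate 3D
// simplex/perlin noise, typically in a shader.
function potential(p: THREE.Vector3): THREE.Vector3 {
  return new THREE.Vector3(
    Math.sin(p.y * 1.7) + Math.cos(p.z * 2.3),
    Math.sin(p.z * 1.3) + Math.cos(p.x * 1.9),
    Math.sin(p.x * 2.1) + Math.cos(p.y * 1.1)
  );
}

// Curl of the potential via central differences: a divergence-free velocity
// field, which gives the particles their swirling, smoke-like motion.
function curl(p: THREE.Vector3, eps = 0.001): THREE.Vector3 {
  const x0 = potential(new THREE.Vector3(p.x - eps, p.y, p.z));
  const x1 = potential(new THREE.Vector3(p.x + eps, p.y, p.z));
  const y0 = potential(new THREE.Vector3(p.x, p.y - eps, p.z));
  const y1 = potential(new THREE.Vector3(p.x, p.y + eps, p.z));
  const z0 = potential(new THREE.Vector3(p.x, p.y, p.z - eps));
  const z1 = potential(new THREE.Vector3(p.x, p.y, p.z + eps));
  const d = 1 / (2 * eps);

  return new THREE.Vector3(
    (y1.z - y0.z - (z1.y - z0.y)) * d,
    (z1.x - z0.x - (x1.z - x0.z)) * d,
    (x1.y - x0.y - (y1.x - y0.x)) * d
  );
}

// Each frame, a particle drifts along the curl field from its origin point
// on the animal mesh.
function step(particle: THREE.Vector3, dt: number): void {
  particle.addScaledVector(curl(particle), dt);
}
```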

At one point, we found that increasing the number of particles beyond a certain limit made no visual difference. Looking deeper, we found the reason was that we were using the geometry’s vertices as the origin points for the particles. As there are only so many vertices, once there are many hundreds of thousands of particles, they start to double up on top of each other.

So we created a script to disperse the particles across the faces of the geometry. Not only did this allow each particle to be unique — greatly improving the visualization — it also meant that the even spread of particles helped to obscure the underlying geometry, and models with a lower polygon density still had the same level of particle detail as those with many more polygons.

Particles from vertices (gold) and evenly spread (blue)
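A minimal sketch of that face-scattering step, assuming an indexed three.js BufferGeometry. It uses the standard reflected-barycentric trick to pick a uniform point inside each chosen triangle; face selection here is uniform rather than area-weighted, which is a simplification.

```typescript
import * as THREE from 'three';

// Scatter particle origins across triangle faces instead of reusing vertices,
// so every particle gets a unique starting position on the model's surface.
function scatterOnFaces(geometry: THREE.BufferGeometry, count: number): Float32Array {
  const pos = geometry.getAttribute('position');
  const index = geometry.index!;
  const a = new THREE.Vector3();
  const b = new THREE.Vector3();
  const c = new THREE.Vector3();
  const out = new Float32Array(count * 3);

  for (let i = 0; i < count; i++) {
    // Pick a random face.
    const face = Math.floor(Math.random() * (index.count / 3)) * 3;
    a.fromBufferAttribute(pos, index.getX(face));
    b.fromBufferAttribute(pos, index.getX(face + 1));
    c.fromBufferAttribute(pos, index.getX(face + 2));

    // Uniform barycentric coordinates inside the triangle.
    let u = Math.random();
    let v = Math.random();
    if (u + v > 1) {
      u = 1 - u;
      v = 1 - v;
    }

    out[i * 3] = a.x + (b.x - a.x) * u + (c.x - a.x) * v;
    out[i * 3 + 1] = a.y + (b.y - a.y) * u + (c.y - a.y) * v;
    out[i * 3 + 2] = a.z + (b.z - a.z) * u + (c.z - a.z) * v;
  }
  return out;
}
```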

For the solid “body” of the Patronus, our goal was to create a shader that looked like a combination of plasma and light. To do this, we started with a Phong shader with subsurface scattering as a base. From there, we introduced two shades of blue that are interpolated depending on the direction of a vertex normal. A bright area in the center of the mesh gives a strong light source.

To give the mesh translucency, we created a faked refraction effect by rendering the background trees to a texture and sampling that texture based on the screen space UV coordinates of the mesh by using gl_FragCoord. We offset that texture a bit by using a normal map to create a refracted plasma effect.
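A hedged sketch of that faked refraction, assuming three.js and that the forest has already been rendered to a render target for the current frame. The uniform names, offset strength and blue tint below are illustrative, not the production shader.

```typescript
import * as THREE from 'three';

const refractionMaterial = new THREE.ShaderMaterial({
  uniforms: {
    tBackground: { value: null }, // render target containing the forest
    tNormal: { value: null },     // normal map used to bend the lookup
    uResolution: { value: new THREE.Vector2(1, 1) },
  },
  vertexShader: /* glsl */ `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    uniform sampler2D tBackground;
    uniform sampler2D tNormal;
    uniform vec2 uResolution;
    varying vec2 vUv;

    void main() {
      // Screen-space UV of this fragment.
      vec2 screenUv = gl_FragCoord.xy / uResolution;

      // Offset the background lookup by the normal map to bend it slightly.
      vec2 offset = (texture2D(tNormal, vUv).rg - 0.5) * 0.05;
      vec3 refracted = texture2D(tBackground, screenUv + offset).rgb;

      // Tint towards the Patronus blue so it reads as plasma rather than glass.
      gl_FragColor = vec4(mix(refracted, vec3(0.55, 0.8, 1.0), 0.35), 1.0);
    }
  `,
  transparent: true,
});
```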

Translucent Dolphin Patronus

As some animations could be shared between several Patronuses using the same rig, we kept the model and animation data in separate files. This modularity also sped up the development process, as we could work on the animation separately from the models themselves.

Custom tools

We wrote a number of custom exporters to facilitate the communication between our artists’ 3D software and Threejs. These included geometry buffers, animation rigs and cycles, object locations, and lighting information. Writing the exporters ourselves also meant we could omit any unnecessary information, further contributing to a reduced file size.

The time it took to write these tools paid off tenfold in improved efficiency. The goal was to let the artists do as much of the 3D preparation and manipulation as possible in proven 3D software rather than in hand-written code.

A simple example of a Maya exporter written in Python code.

To use this script, a user selects a group in Maya and then runs the script, exporting the positions of each of the group’s children into one long array. This is the exact script used on this project to export the tree locations for each of the tiles in the smallest possible file size.

Maximizing Performance on Every Device

To cater to the broad audience of Harry Potter fans around the world, we developed the project to scale in visual rendering quality. This enables the best visual fidelity a device can render while maintaining the target 60 frames per second. Using information about a device’s GPU, as well as a quick CPU performance test on load, we categorize the device into a performance tier.
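A minimal sketch of that load-time check, assuming a WebGL context. The GPU substrings, CPU workload and thresholds below are illustrative only, not the production heuristics.

```typescript
type Tier = 'low' | 'medium' | 'high';

function detectTier(gl: WebGLRenderingContext): Tier {
  // Read the GPU name via the debug renderer extension, where available.
  const ext = gl.getExtension('WEBGL_debug_renderer_info');
  const gpu = ext
    ? String(gl.getParameter(ext.UNMASKED_RENDERER_WEBGL)).toLowerCase()
    : '';

  // Quick CPU test: time a fixed chunk of math.
  const start = performance.now();
  let sum = 0;
  for (let i = 0; i < 2e6; i++) sum += Math.sqrt(i);
  const cpuMs = performance.now() - start;

  // Illustrative tiering rules.
  if (/adreno 3|mali-4|intel hd/.test(gpu) || cpuMs > 60) return 'low';
  if (cpuMs > 25) return 'medium';
  return 'high';
}
```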

We also created dozens of tests for features within the project such as:

  • Number of trees within the forest
  • Density of particle simulation
  • Visual effects, such as water reflection
  • Additional lighting effects and post-processing

By taking the device’s performance tier into account within each of these tests, we are able to manually adjust each feature, such as reducing the density of trees for older mobile devices or toggling water reflections off. Scaled across the whole project, this gives us precise control over rendering and performance.
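A hypothetical per-tier settings map in the spirit of the feature tests above; the real feature names and values are not public.

```typescript
type Tier = 'low' | 'medium' | 'high';

// Illustrative quality settings keyed by performance tier.
const featureSettings: Record<Tier, {
  treeCount: number;
  particleCount: number;
  waterReflections: boolean;
  postProcessing: boolean;
}> = {
  low:    { treeCount: 150, particleCount: 50_000,  waterReflections: false, postProcessing: false },
  medium: { treeCount: 300, particleCount: 150_000, waterReflections: true,  postProcessing: false },
  high:   { treeCount: 500, particleCount: 400_000, waterReflections: true,  postProcessing: true  },
};
```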

Accessibility enabled

With heavy use of WebGL and text rendered in 3D scenes, we took the approach of including a hidden accessibility view that runs in parallel with the WebGL context, for a best-of-both-worlds approach. This accessibility view consists of bare-bones HTML, and handles keyboard input and DOM-specific accessibility support according to the WCAG 2.0 accessibility guidelines.

This approach allowed us to provide enhanced accessibility support: screen readers read out additional instructional prompts and information beyond what is displayed on screen, additional buttons are made available as alternatives to gesture input, and keyboard input can be used to complete the experience. To see this accessibility view in action, click here.
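A minimal sketch of such a parallel view, assuming the WebGL canvas is marked aria-hidden and a visually hidden live region mirrors each question. Element ids, class names and the selectAnswer hook are illustrative, not the production markup.

```typescript
// Mirror the current question into a hidden, screen-reader-friendly layer.
// Assumes markup like:
//   <div id="a11y-live" aria-live="polite" class="visually-hidden"></div>
//   <div id="a11y-answers" class="visually-hidden"></div>
declare function selectAnswer(index: number): void; // hook into the quiz logic

function announceQuestion(text: string, answers: string[]): void {
  const live = document.getElementById('a11y-live')!;
  live.textContent = text;

  const list = document.getElementById('a11y-answers')!;
  list.innerHTML = '';

  answers.forEach((answer, i) => {
    const button = document.createElement('button');
    button.textContent = answer;
    button.addEventListener('click', () => selectAnswer(i));
    list.appendChild(button);
  });
}
```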

Audio - by Plan8

We wanted this experience to feel magical, to represent the dreamy and mysterious character associated with the Patronus, but also to add a cold and almost gloomy vibe.

One challenge was how to create a sound design that would work for varying lengths of time and still build up the tension as the user advances through the experience, all while keeping the file size to a bare minimum.

Plan8 Sound Studio

We ended up using six layers of short loops, each with a different intention, that together create the full ambience. For each question completed, we change the mix of these loops to build up the suspense towards the end.

Since the camera can take a different path each time, we couldn’t use a static sound design for the camera motion. Creating multiple tracks for each possible camera path would be too much to load. The solution was to use a shorter wind loop run through a lowpass filter controlled by the camera position. In this way you can actually hear when the camera ducks under a fallen tree or rises up to the tree tops.
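A small Web Audio sketch of that idea: a looping wind buffer runs through a lowpass filter whose cutoff follows the camera’s height. The frequency range and smoothing values are illustrative.

```typescript
// Returns an update function to call every frame with the camera's
// normalised height above the forest floor (0..1).
function createWind(ctx: AudioContext, windBuffer: AudioBuffer): (height: number) => void {
  const source = ctx.createBufferSource();
  source.buffer = windBuffer;
  source.loop = true;

  const lowpass = ctx.createBiquadFilter();
  lowpass.type = 'lowpass';

  source.connect(lowpass).connect(ctx.destination);
  source.start();

  return (height: number) => {
    // Low near the ground, opening up as the camera rises to the treetops.
    lowpass.frequency.setTargetAtTime(400 + height * 4000, ctx.currentTime, 0.1);
  };
}
```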

For the particles following the wand we wanted a glittering sound that was responsive to the user’s movement. Playing one sound per particle was not an option for performance reasons. Instead, we use the speed of the wand to control the volume and panning of a loop. This method works really well for this type of interaction as the volume follows the movement in a convincing way.
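An equivalent sketch for the wand glitter loop, with pointer speed driving gain and horizontal position driving a stereo panner; again, the constants are illustrative.

```typescript
// Returns an update function to call on pointer movement.
// speed: normalised wand speed (0..1), x: horizontal position (-1..1).
function createGlitter(ctx: AudioContext, glitterBuffer: AudioBuffer): (speed: number, x: number) => void {
  const source = ctx.createBufferSource();
  source.buffer = glitterBuffer;
  source.loop = true;

  const gain = ctx.createGain();
  gain.gain.value = 0; // silent until the wand moves

  const panner = ctx.createStereoPanner();

  source.connect(gain).connect(panner).connect(ctx.destination);
  source.start();

  return (speed: number, x: number) => {
    gain.gain.setTargetAtTime(Math.min(speed, 1), ctx.currentTime, 0.05);
    panner.pan.setTargetAtTime(x, ctx.currentTime, 0.05);
  };
}
```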

For the tension to build throughout, the six layers consisted of drones, strings, noise and sound FX made with Native Instruments Damage, which stack on top of each other as you progress through the quiz. To even out the gloomy, dark feeling these elements give, we added a celesta to make it feel magical and mysterious, which comes in when the Patronus is summoned.

A lot of glass sounds were used for the magic sounds. When you add a lot of reverb & delays to them (SoundToys Echo Boy) and reverse some of them, it gets really ambient and dreamy, yet defined. The wand (mouse pointer) had to be made into a constant loop of glittering sounds that would fade in and out as you move the mouse.

The sound that plays when the Patronus appears is a mix of noise with a lot of flanger & phaser, and a sub that gives it some bottom, mixed in with some glass sounds as mentioned above.

Lastly, some wind chimes, processed similarly to the glass, were added to give it some extra sparkle in the top end.

Launched!

Fans worldwide have discovered their Patronuses, with the experience trending for days on numerous major online publications. We are extremely grateful to have been a part of Pottermore’s Wizarding World by creating this experience for their fans and look forward to collaborating and creating more magic with them in the future.

View the experience.
