AR Winter Wonderland: Celebrating the Holidays With WebAR

Jason Webb
Bluecadet
Feb 18, 2022
Edited video showing screencasts captured from multiple devices running the same AR holiday scene in different environments including Los Angeles, Minneapolis, and multiple locations in Philadelphia.

Every year at Bluecadet we like to make a festive technology project to celebrate the holidays and ring in the new year, and 2021 was no different! We’re always researching and experimenting with new technologies to stay at the top of our game, and recently AR has been on our minds a lot.

Due to COVID, museums and brands have been eager to find new ways to engage their audiences using contactless, or even remote, interactions. Many are wondering whether AR is a viable option, and we think it definitely has a lot of creative potential to explore.

However, at the moment the fastest, most stable, and most robust AR engines (Apple’s ARKit and Google’s ARCore) are only available to native apps, which means creators have to navigate the world of app store publishing and visitors have to install yet another app that may only work on newer, more expensive phones.

Fortunately, a few platforms and technologies (8th Wall, Zappar, AR.js, and others) have been popping up recently that are using web technologies to provide at least some of the same features as their native counterparts. Now we can build fun AR experiences that nearly anyone can try out without any special apps or fancy hardware!

Go ahead and scan the QR code below to see for yourself!

Cartoon snowman peeking out from inside an opened present, with a QR code linking to the live app shown on the side of the present.
Scan the QR code above, or go to the following URL to try the app: https://bluecadet.8thwall.app/holiday-card-2021

In case the app isn’t working or you don’t have a phone nearby, here is a recording of what you would see:

Screen recording of a phone screen’s camera feed showing a digital present placed on the ground of a real pedestrian bridge spanning a large, semi-frozen river in Minneapolis. As the phone is moved around, the present appears to stay in one spot on the ground. The user taps the present, causing it to shrink away and be replaced by a large cartoon winter scene with dancing gingerbread people, trees, bouncing snowmen, balloons, and an arch with bells and the text “Happy New Year”.

How it works

After evaluating a few different platforms and code packages, we ended up landing on 8th Wall. Their Cloud Editor, live app previews, multiple 3D framework integrations, and built-in project hosting and deployment all make it easy to set up, build, and launch small to mid-sized projects like ours in no time!

World tracking

At the core of our AR experience is 8th Wall’s magical world tracking functionality, which uses a custom SLAM engine built in C++ and compiled for the web with WebAssembly. A JavaScript package developed by 8th Wall interacts with the engine and exposes an API with interfaces for ThreeJS, A-Frame, and other modern 3D web frameworks.

We used 8th Wall’s Cloud Editor with ThreeJS for this experience. Getting up and running was easy thanks to 8th Wall’s sample projects, which can be used as starter templates for new projects.

8th Wall’s JavaScript package provides a scene object that automatically stays aligned with the floor of the environment around you in real time, adjusting both its orientation and scale to fit as best it can. For me, the “a-ha” moment came when I realized that the scene’s XZ plane is constantly being realigned to the horizontal surface found by the SLAM engine: every mesh you add to the scene’s local coordinate system gets transformed along with it as 8th Wall aligns the scene to the environment!
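To make that concrete, here’s a rough sketch of how an 8th Wall + ThreeJS project is typically wired up, based on 8th Wall’s public sample code rather than our actual project (the module name, placeholder box mesh, and canvas id are all illustrative):

```javascript
// Illustrative wiring only: names like 'holiday-scene' and 'camerafeed' are placeholders.
// In a real project this runs once the XR8 library has loaded (e.g. in an 'xrloaded' handler).
const initScenePipelineModule = () => ({
  name: 'holiday-scene',
  onStart: () => {
    // 8th Wall creates and manages the ThreeJS scene, camera, and renderer for us.
    const { scene, camera } = XR8.Threejs.xrScene();

    // Anything added to the scene lives in its local coordinate system, and 8th Wall
    // keeps that whole coordinate system aligned with the real-world floor.
    const present = new THREE.Mesh(
      new THREE.BoxGeometry(0.5, 0.5, 0.5),
      new THREE.MeshStandardMaterial({ color: 0xcc3344 })
    );
    present.position.set(0, 0.25, -2); // on the floor, two meters in front of the camera
    scene.add(present);

    // Tell the tracking engine where the camera starts out.
    XR8.XrController.updateCameraProjectionMatrix({
      origin: camera.position,
      facing: camera.quaternion,
    });
  },
});

XR8.addCameraPipelineModules([
  XR8.GlTextureRenderer.pipelineModule(), // renders the camera feed
  XR8.Threejs.pipelineModule(),           // sets up the ThreeJS scene
  XR8.XrController.pipelineModule(),      // enables SLAM world tracking
  initScenePipelineModule(),
]);

XR8.run({ canvas: document.getElementById('camerafeed') });
```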

Assets

The 3D models and animations that you see in the experience were all created by our wonderful Motion Designer Siji Chen using Blender, then exported to compressed GLB files. On the web, asset size makes a huge difference in UX because the assets have to be downloaded by the browser on page load. Our models were ~7–10MB normally, but only ~2MB compressed!

In ThreeJS the assets were loaded with GLTFLoader and DRACOLoader, then animated by simply playing back all of the baked-in animation clips Siji had created, using ThreeJS’s animation system. Having the animations, materials, and geometry in a single file really saved us time!
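For reference, here’s roughly what that loading and playback looks like in ThreeJS (the file path and Draco decoder location are placeholders, and `scene` is the 8th Wall-managed scene from earlier):

```javascript
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
import { DRACOLoader } from 'three/examples/jsm/loaders/DRACOLoader.js';

// GLTFLoader needs a DRACOLoader to decode Draco-compressed geometry.
const dracoLoader = new DRACOLoader();
dracoLoader.setDecoderPath('/draco/'); // placeholder: wherever the decoder files are hosted

const loader = new GLTFLoader();
loader.setDRACOLoader(dracoLoader);

let mixer;
loader.load('assets/winter-scene.glb', (gltf) => {
  scene.add(gltf.scene);

  // Play every animation clip that was baked into the GLB in Blender.
  mixer = new THREE.AnimationMixer(gltf.scene);
  gltf.animations.forEach((clip) => mixer.clipAction(clip).play());
});

// In the render loop, advance the animations by the elapsed time.
const clock = new THREE.Clock();
function onFrame() {
  if (mixer) mixer.update(clock.getDelta());
}
```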

The lighting and environment map setup was borrowed from Don McCurdy’s glTF Viewer app, whose source code helped in a pinch.

Screenshot of the entire winter scene model loaded into the glTF Viewer web app, with a black toolbar shown at the top.
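If you’re curious, a similar image-based lighting setup in ThreeJS can look something like the sketch below. This is an approximation of the general approach, not the viewer’s (or our project’s) exact code; `renderer` and `scene` come from the 8th Wall setup above.

```javascript
import * as THREE from 'three';
import { RoomEnvironment } from 'three/examples/jsm/environments/RoomEnvironment.js';

// Generate a prefiltered environment map and use it to light all PBR materials.
const pmremGenerator = new THREE.PMREMGenerator(renderer);
scene.environment = pmremGenerator.fromScene(new RoomEnvironment(), 0.04).texture;

// A couple of simple lights round out the look.
scene.add(new THREE.AmbientLight(0xffffff, 0.3));
const keyLight = new THREE.DirectionalLight(0xffffff, 0.8);
keyLight.position.set(0.5, 1, 0.5);
scene.add(keyLight);
```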

UI

The experience also includes some flat 2D UI components (built with standard HTML, CSS, and vanilla JavaScript) that sit on top of the canvas, providing some additional features like:

  • A “Help” dialog with usage tips, including tips for screen reader users.
  • A “reset” button that allows you to force the world tracking engine to reset, in case the 3D models get “stuck” somewhere you don’t want them (a rough sketch of the wiring is shown below).
  • A manual control interface allowing you to explore the objects in the scene using crosshairs without having to move your phone.
From left to right: (left) initial screen showing a modal dialog with introductory text; (middle) the same modal dialog with different text containing numbered accessibility tips; (right) phone camera view of a wooden desk with a virtual present placed on it, with the app’s UI components visible.
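The reset button, for example, just asks 8th Wall’s tracking controller to recenter; something along these lines (the button id is made up):

```javascript
// Restart world tracking from the camera's current position and orientation.
document.querySelector('#reset-button').addEventListener('click', () => {
  XR8.XrController.recenter();
});
```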

Accessibility

We always want to make sure that the projects we create can be enjoyed equally by the widest possible audience, which means creating content and interfaces that are accessible to people with a wide range of abilities.

We work hard to incorporate guidance from the Web Content Accessibility Guidelines (WCAG 2.1 level AA) into our work, and for normal web apps they are a tremendous resource. For AR experiences, though, there are some challenges that are not so easy to solve, so we have to get creative.

There is still a lot of research to be done on the topic of AR accessibility, so we recommend looking into communities like XR Access and A11yVR to see what people are thinking about. The XR Accessibility User Requirements Working Group Note published by the W3C’s APA working group is worth a read too!

UI accessibility

All of the flat 2D UI components were built using simple, semantic HTML elements and minimal ARIA so that screen reader users can easily find, understand, and operate all the controls on the screen.

Using <button>s for buttons, text alternatives for icons, and standard patterns for common components (like modal dialogs) are all easy ways to make a huge difference in accessibility!
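As a tiny, hypothetical example of that kind of wiring (element ids are made up, and focus trapping and Escape handling are omitted for brevity):

```javascript
const helpButton = document.querySelector('#help-button');
const helpDialog = document.querySelector('#help-dialog'); // e.g. role="dialog" aria-modal="true"
const closeButton = helpDialog.querySelector('button');

helpButton.addEventListener('click', () => {
  helpDialog.hidden = false;
  closeButton.focus(); // move focus into the dialog so screen reader users land on it
});

closeButton.addEventListener('click', () => {
  helpDialog.hidden = true;
  helpButton.focus(); // return focus to the control that opened the dialog
});
```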

Manual control interface

In the bottom left corner of the screen is a small gamepad controller icon button that toggles a set of crosshairs and some directional control buttons.

As the crosshairs pass over objects in the scene, human-friendly labels are shown identifying what those objects are. These labels also get read out automatically by screen readers so that blind and visually-impaired visitors can explore the scene too.

App screenshot with a virtual present appearing to be placed on wooden desk, with two dotted lines (one horizontal, one vertical) spanning screen edge to edge. The manual control interface is expanded, with four directional arrow buttons and a pointer finger icon next to the triggering button.
The manual control interface in action.

For blind and visually-impaired visitors, it’s important that content that is being displayed visually is also available as text somehow (see WCAG 1.1.1). This is especially tricky in AR because part of what a visitor sees on screen is the actual room around them, which is hard (maybe impossible) to appropriately describe in an automated way.

For the digital content though, we tried to include information that seemed relevant like general descriptions of the scene, announcements of state changes (like opening the present), and contextual instructions explaining how to interact with the scene.
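Under the hood, those announcements can be as simple as writing text into a visually hidden ARIA live region. Here is a minimal sketch of that idea (the element id and example message are illustrative):

```javascript
// Assumes markup like: <div id="sr-announcements" aria-live="polite" class="visually-hidden"></div>
const liveRegion = document.querySelector('#sr-announcements');

function announce(message) {
  liveRegion.textContent = '';
  // A short delay helps some screen readers notice the change and re-announce repeated messages.
  window.setTimeout(() => { liveRegion.textContent = message; }, 50);
}

// e.g. when the present is tapped and the winter scene appears:
announce('The present opened and the winter scene appeared. Move your phone or use the manual controls to explore.');
```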

Here is a recording of what it’s like to use the manual control interface with TalkBack on an Android phone. It’s not perfect, but it’s a start!

Screen recording of me (a novice TalkBack user) exploring the app with TalkBack on Android. I’m swiping left/right to navigate between elements and double tapping to activate them.

Object labelling

In future AR apps, we’re not sure whether a manual control interface will be the way to go, but we are confident that it’ll be important for digital content to be labelled with metadata somehow to convey what it represents for assistive technology users, kind of like alt text for 2D images.

In theory we could maintain a separate tree of label data whose structure matches the structure of the models in the glTF file, but keeping them synchronized as designs are worked on would be difficult.

In our models I noticed that the designer had labelled various “collections” of meshes with human-readable names just to help organize them in Blender, and those names actually get included in the glTF data. With a little post-processing (like removing the numbers) these could work as labels!

As a visitor moves the crosshairs around in the manual control interface, a ray is cast from the camera into the scene (through the crosshairs’ intersection point). When an object is found, its parent model (“collection” in Blender) is identified, and its human-readable name is displayed as a visible text label and announced through an ARIA live region.
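In code, that lookup is roughly the following. This is a sketch of the idea rather than our exact implementation; the crosshair coordinates, model root reference, label element id, and suffix-stripping regex are all illustrative, and `announce()` is the hypothetical live-region helper sketched earlier.

```javascript
import * as THREE from 'three';

const raycaster = new THREE.Raycaster();
const labelEl = document.querySelector('#object-label'); // visible label element (made-up id)

// ndcX/ndcY are the crosshairs' position in normalized device coordinates (-1 to 1),
// and modelRoot is the gltf.scene object added to the 8th Wall scene earlier.
function labelObjectUnderCrosshairs(camera, modelRoot, ndcX, ndcY) {
  raycaster.setFromCamera(new THREE.Vector2(ndcX, ndcY), camera);
  const hits = raycaster.intersectObjects(modelRoot.children, true);
  if (hits.length === 0) return;

  // Walk up from the hit mesh to the top-level group under the model root,
  // which corresponds to one of the named "collections" from Blender.
  let node = hits[0].object;
  while (node.parent && node.parent !== modelRoot) node = node.parent;

  // Strip Blender's numeric suffixes, e.g. "Snowman.003" becomes "Snowman".
  const label = node.name.replace(/[._]\d+$/, '').replace(/_/g, ' ');

  labelEl.textContent = label; // visible label near the crosshairs
  announce(label);             // spoken by screen readers via the live region
}
```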

For the most part this works great, except when multiple models get grouped together at the last minute under an object named “_center”! Maybe in future projects this can be mitigated by the designer, but even better would be a file format that decouples object names from their nesting structure.

Blender screenshot of the 3D scene with a tree view of the mesh objects visible. The tree view is highlighted with a red border.

Future work

Accessibility in AR is still an emerging field, with lots of work ahead to come up with inclusive standards and proven patterns that professionals can follow without having to become researchers themselves.

This presents a real challenge for institutions that want to be at the cutting edge, delivering inclusive experiences that attract and excite visitors. Many of these institutions want their work to be as inclusive as possible, and are legally obligated to do so, but simply don’t know how when it comes to AR.

Over time, standards and best practices will inevitably emerge as the industry continues to build and share solutions, and we’re excited to see what that brings! Until then, we’ll continue to do our best to experiment with AR, follow the on-going research, and listen to feedback to help create the future of accessible immersive experiences!

Cartoon illustration of two small, happy snowmen standing on either side of a set of two Christmas present boxes.
