User-generated content (UGC) for Augmented Reality (AR) — Social media
We often talk about powering UGC for AR. However, these words may sound abstract to most people. This article explains what they mean in plain English and dives deeper into the first example of how we are putting this into practice: social media.
Note: if you are not yet familiar with augmented reality, you can check this plain-English definition, Wikipedia’s comprehensive introduction or this short video by Mashable (from 2015, but still relevant).
Content for augmented reality
Before we dive into “user-generated content for AR” in particular, let’s first discuss what we mean by “content for AR” in general. Uniquely, AR involves two fundamental content components:
- The reality that you are experiencing.
- The digital content that you are bringing into it to augment it.
As a user of AR, I can freely decide on my reality (the real environment where I play the digital content). For instance, I can take a selfie with puppy ears at home or in a cool restaurant, or I can play with my AR Pokémon on my hand or at a beautiful outdoor spot.
However, for the digital content, most users (except for AR creators/developers) are limited to a set of pre-defined options. For example, you may have another AR filter that adds glasses instead of puppy ears, but if you want kangaroo ears and no one has created a filter for that, you can’t do it without specialised tools and training.
If you think about it, this is the same limitation video/film content had some decades back. Yes, people could watch a variety of films and video pieces in theatres and on TV (equivalent to the AR content pushed by brands and big tech companies today). Also, independent professionals could film a clip with more humble equipment, and some lucky people had their own video cameras (equivalent to current professional and prosumer AR creators/developers). But normal users were not able to shoot their own video back then. Today, in contrast, thanks to smartphone cameras, a normal user can record their own video; that’s how platforms like YouTube and TikTok have become so incredibly popular. In other words, users’ ability to create their own content has opened video platforms to user-generated content (UGC).
Is there an equivalent to a smartphone camera for AR that enables all users to create/capture the digital content for it? In other words, is there UGC for AR platforms?
Where are we today?
Tools to generate content for AR have evolved massively in the last few years, and different solutions have emerged for different platforms. To get us started, this article will focus on social media AR.
With over 200 million users engaging with AR content every day on Snapchat and at least 600 million doing the same every month on Facebook and Instagram, social media seems to be the most widely adopted type of AR by far. These content creation and sharing apps use AR to give their cameras many new creative functionalities, known as AR lenses or AR effects. These were very simple in origin (like puppy ears and lipstick colours) and could only be applied to faces and flat surfaces. Today, the AR engines of these platforms can identify the full human body and fairly complex spaces, and can even use machine learning to add amazing AR elements on top of all that. They have also introduced more degrees of AR interactivity between what the camera is seeing and the digital elements, enabling more complex use-cases, like shopping and gaming. For example, you can digitally place on your real table a lamp you are thinking of buying, and it automatically scales to the size it would actually have. Or you can play with a digital character that runs away from you and hides behind real objects. With such creative possibilities, big tech companies like Snap Inc. and Facebook would be limiting the growth of their platforms if they centralised all the AR content creation themselves. Instead, they have opened their platforms to the global community of studios and creators.
In order to empower this community and make content creation more dynamic, social media companies have built amazing tools for AR content creation. Two of the best-known examples are Lens Studio (by Snap) and Spark (by Facebook). These tools can be used by professionals and prosumers to create their own AR lenses/effects; in fact, hundreds of thousands of creators around the world already use them. They require a bit of training to get started and an accessible setup (a computer to run them), and they enable a high level of complexity for the most advanced creators.
The question for us: can we go one step further? Can there be tools that complement the existing ones and empower the hundreds of millions of social media AR users to become AR content creators? Can there be something as straightforward as a smartphone camera app to create content for AR?
UGC for AR
This is an especially interesting approach to AR user-generated content for a few reasons. First, it’s human content: a quick look at the most popular platforms shows that human content is key. How many pictures or videos include real people, versus other types of content? Bringing our real selves into our content is our favourite form of expression, and that’s what these tools enable. Second, it’s full 3D: content only makes proper AR if it can integrate with the real environment. Even simple lenses with puppy ears are 3D for a good reason: flat content doesn’t look great in our real (3D) spaces. Third, it’s mobile: most users create content where and when they can and want to. Asking them to pre-create content on a computer and then share it on mobile creates a critical point of friction that blocks UGC at scale.
Are we there yet?
While Volu has the potential to unlock UGC for AR, for now users can only enjoy their AR content in the app itself. To realise our vision, we are working to integrate with the best AR platforms so that users’ content can be shared and enjoyed wherever they want. These are the few remaining steps to make this come true for social media AR:
- File size: currently, a 5-second vologram (a piece of AR content representing a real person) may require around 30MB, while AR lenses/effects are limited to a fraction of that. Different techniques will enable the reconciliation of these two constraints. For example, as part of a cutting-edge initiative inspired by Ignite, our friends at Repronauts found a way to massively reduce the size of a vologram thanks to their 3D post-processing pipeline. For now, as the process is not yet fully automated, we do this only for brands who want to bring their key people into AR content. However, this already proves that we are very close to overcoming the file-size limitations.
- Format: volumetric video (the format of Volu’s captures) is very new. Since it is a sequence of static 3D models (in the same way that a video is a sequence of pictures), it needs a software component, called a player, to be displayed (just as there are dedicated video players). The limitation is that social media apps do not include such a player yet. So far, our work mostly involves manually playing one 3D model after the other: not efficient, but it works. Using this technique, our friends at Makerlounge posted the first Instagram effect with a real person moving in it back in May 2019. Technically, it was a very low-resolution version of one of our volograms playing 3D model after 3D model in a loop, but with their creativity and tech skills, it looked great!
- Direct integration: while AR content can be created with a smartphone thanks to Volu, as of today it still needs to be exported and manually imported into a lens/effect. Creators like Studio ANRK are pioneering the integration of longer sequences of volumetric video (e.g., the small-size ones by Repronauts) into impressive lenses using Lens Studio. Artists/creators of their level will always be at the forefront, creating professional pieces for the most demanding clients. For more casual everyday content, we can’t wait to integrate our technology directly with social media platforms, so that any user can create their own AR lens directly with their smartphone!
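To put the file-size bullet above in perspective, here is a back-of-the-envelope sketch. The 5-second/30MB figure comes from the article; the per-lens size budget is a made-up illustrative assumption, not an official platform limit:

```python
# Back-of-the-envelope sketch of the file-size gap.
# VOLOGRAM_* figures are from the article; LENS_BUDGET_MB is a
# hypothetical assumption, since exact platform limits vary.

VOLOGRAM_SECONDS = 5
VOLOGRAM_MB = 30          # ~30 MB for a 5-second vologram
LENS_BUDGET_MB = 8        # hypothetical per-lens size limit

mb_per_second = VOLOGRAM_MB / VOLOGRAM_SECONDS       # raw "bitrate"
required_compression = VOLOGRAM_MB / LENS_BUDGET_MB  # reduction factor needed

print(f"raw size per second: {mb_per_second} MB/s")
print(f"compression factor needed: {required_compression:.2f}x")
```

Whatever the exact budget turns out to be, the point is the same: a post-processing pipeline like the one described above only needs to shrink captures by a modest constant factor, not by orders of magnitude.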
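The manual playback technique described in the format bullet above (showing one static 3D model after another in a loop) can be sketched in a few lines. This is a minimal illustration under assumptions of our own: the file names, the 24 fps rate and the `frame_to_show` helper are all hypothetical, not Volu’s actual pipeline:

```python
# Minimal sketch of "playing" volumetric video by cycling static 3D
# frames in a loop. Frame names, fps and helper name are illustrative
# assumptions, not the real player implementation.

def frame_to_show(elapsed_s: float, n_frames: int, fps: float = 24.0) -> int:
    """Return the index of the 3D model to display at a given time,
    wrapping back to frame 0 when the sequence ends (a loop)."""
    return int(elapsed_s * fps) % n_frames

# A 2-second capture at 24 fps becomes 48 static meshes.
frames = [f"frame_{i:03d}.glb" for i in range(48)]

# At t = 2.5 s the loop has wrapped: 2.5 * 24 = 60, and 60 % 48 = 12.
print(frames[frame_to_show(2.5, len(frames))])  # frame_012.glb
```

A real AR effect would swap which mesh is visible on every render tick using this kind of index, which is exactly why a dedicated volumetric player inside the social apps would be so much more efficient.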
I want to know more!
If you want to see the creativity that AR user-generated content is enabling, follow @getvolu on Twitter, Instagram and TikTok. Also, if you want to be the first to know about our progress powering this content revolution and the integration of Volu with AR platforms, including social media, you can subscribe to our Medium blog (the one you are reading) or follow us (Volograms) on social media: tw, li, ig, fb. Thanks for reading!