Ordinary Objects 🫖— prototyping notes

Using augmented reality in Lens Studio to transform everyday objects into fictional devices

Jasmine Roberts
Jul 24, 2023 · 5 min read

“Ordinary Objects” is a Lens that recognizes objects in the environment and retrieves relevant quotes from stories related to those objects.

The insights from the design process of this text-driven AR experience hold valuable lessons for a diverse array of narrative-driven projects, whether in Lens Studio or on other platforms.

Lens Studio by Snap provides distinctive features for creating augmented reality experiences, along with a set of readily available templates. If you are familiar with the Unity game engine, you will find that Lens Studio follows a similar framework.

Despite the minimalism of the final Lens, bringing this idea to life involved many underlying considerations, both of user experience and of Lens Studio constraints.

Here is how I created this experience in Lens Studio:

In March 2021, I received a prompt from Snap to develop a Lens utilizing scanning features (object detection) for release in May 2021.

Prior to May, a Lens creator could train their own models with TensorFlow and bring them into Lens Studio. The newly released Lens Studio build includes a “Scan Object” example containing a library of about 80 recognizable objects.

So, what inspired the Lens concept?

One of the first concepts that came to mind was a scavenger hunt or escape-room type experience; however, after reviewing pre-existing Lenses, I found many scavenger hunt Lenses already developed by other creators. (I strongly recommend doing an application review before you begin developing so you have an idea of what is currently available.)

I wanted to leverage the object detection feature and differentiate my Lens for the Partner Summit.

AR developers often overlook the potential of object recognition, an underutilized feature of contextual AI.

Augmented reality presented itself as a platform with unique affordances, distinct from what can be accomplished with traditional apps. When brainstorming ideas, I found inspiration in my immediate environment and sought to explore unconventional possibilities.

I was watching Fahrenheit 451. In the story, books are outlawed and the dissemination of knowledge is heavily controlled. This led me to contemplate ways to counteract such restrictions and promote the importance of information and literature.

The Lens concept that developed, which identifies objects and retrieves quotes from stories, is a creative response to that inspiration, allowing users to explore and access information in a visually engaging and interactive manner.

By leveraging technology to encourage curiosity and provide access to a broader range of narratives, the Lens concept aligns with the themes of intellectual freedom and the power of storytelling depicted in “Fahrenheit 451.”

Fahrenheit 451 (2018): panning over collected book quotes
Captain Beatty (Michael Shannon) reading a quote by Emil Cioran

Once I settled on the concept, the associations came naturally: when I looked at roses, I thought of the Shakespeare quote “A rose by any other name would smell as sweet”; when I saw grapes, “In the souls of the people the grapes of wrath are filling and growing heavy, growing heavy for the vintage”; and so on.

Objects clearly play a significant role in storytelling: they evoke emotions, convey symbolism, and provide context within narratives.

To keep the scope practical, I limited my database to three books with freely available licenses: “The Adventures of Pinocchio,” “Do Androids Dream of Electric Sheep?,” and “Alice in Wonderland.” From these texts, I extracted quotations to incorporate into the Lens.
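As an illustrative sketch (not the shipped Lens code), that database can be as simple as a lookup table in a Lens Studio script, keyed by the object labels the scanner reports. Cycling the index means re-scanning the same object surfaces a different quote, which is the behavior shown in the screen captures further down:

```javascript
// Illustrative sketch, not the shipped Lens code: quotes keyed by
// the object labels the detector reports. The real entries came
// from the three books named above.
var quotesByObject = {
    "cup": [
        "\"Take some more tea,\" the March Hare said to Alice, very earnestly.",
    ],
    "table": [
        "The table was a large one, but the three were all crowded together at one corner of it.",
    ]
};

// Track which quote each label showed last, so scanning the same
// object again surfaces a different quote.
var nextIndex = {};

function getQuoteFor(label) {
    var candidates = quotesByObject[label];
    if (!candidates || candidates.length === 0) {
        return null; // none of the three books mentions this object
    }
    var i = nextIndex[label] || 0;
    nextIndex[label] = (i + 1) % candidates.length; // cycle on repeat scans
    return candidates[i];
}
```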

Sample slides from pitching the concept

The initial priority was to enable the scanning functionality and display full sentences on a canvas.

(Left) Identifies bottle as “drink” (Right) Identifies dogs
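In script form, the glue between detection and display is small. The sketch below is a guess at the shape of that glue, not the actual project code: onObjectDetected is a hypothetical entry point standing in for however your detection setup (the Scan Object example or a custom SnapML model) reports a label, the //@input line is Lens Studio's standard way of exposing a Text component to a script, and getQuoteFor is the lookup from the earlier sketch.

```javascript
// @input Component.Text quoteText

// Hypothetical entry point: called with the label reported by the
// object-detection setup, e.g. "drink", "dog", or "table".
function onObjectDetected(label) {
    var quote = getQuoteFor(label); // lookup from the earlier sketch
    script.quoteText.text = quote !== null
        ? quote
        : "Looking for objects..."; // fall back to the idle prompt
}
```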

Design

Since this was primarily a text-based experience, one of the most important design choices was the typeface itself.

I searched “best book fonts.” The search returned “old style” typefaces: serif fonts with moderate contrast between the thick and thin strokes of the letters, such as Baskerville, Garamond, Caslon, Sabon, and Bembo. Of these, I selected Bembo, which is very common in movie and book titling.

The Bembo font in use

For the user interface, meaning the buttons and text anchored to the screen, I wanted a complementary font. Old-style fonts are typically paired with “humanist” fonts like Optima, Gill Sans, Frutiger, and Basetica. I selected Basetica for this text.

Below are the completed mocks made in Blender:

(Left) Lens identifies “cup” object in camera view (Middle) User places reticle, triggering the book animation with a sentence containing the word “cup” (Right) User translates the animation and another quote containing the word “cup” appears
(Left) Screen capture of Lens: initial state (looking for objects) (Middle) Lens finds a “table,” so it pulls a quote containing a table (Right) Lens finds “table” again, so it pulls another quote about a table

Development Challenges

Lens Studio’s scripting capabilities rely primarily on visual scripting and predefined behaviors, which restricted the level of text customization and interactivity. In Lens Studio, there is no text handler analogous to Unity’s TextMesh Pro, so I handled line layout myself in three steps (a simplified sketch follows the list):

  1. Instantiating Lines as Separate Objects: To have text appear on separate lines, I instantiated individual line objects instead of rendering the entire text as a single block.
  2. Managing Line Length and Truncation: To ensure that lines didn’t exceed a specific length, I implemented logic to measure the length of each line of text. By monitoring the length of the current line, I could determine when to truncate the text and move on to the next line.
  3. Word Completion before Truncation: To maintain word integrity and prevent words from being split across lines, I made sure that words were completed before truncating the text. If a word couldn’t fit entirely, I moved it to the next line.
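Putting the second and third points together, the logic reduces to a standard greedy word wrap. Here is a simplified sketch; a character budget stands in for the measured line lengths the actual Lens used:

```javascript
// Simplified sketch of the wrapping described above: break text
// into lines of at most maxChars characters, only at word
// boundaries, so no word is ever split across lines.
function wrapWords(text, maxChars) {
    var words = text.split(" ");
    var lines = [];
    var current = "";
    for (var i = 0; i < words.length; i++) {
        var word = words[i];
        var candidate = current === "" ? word : current + " " + word;
        if (candidate.length <= maxChars) {
            current = candidate; // the word still fits on this line
        } else {
            if (current !== "") {
                lines.push(current); // truncate here, the line is full
            }
            current = word; // carry the whole word to the next line
        }
    }
    if (current !== "") {
        lines.push(current); // flush the final line
    }
    return lines;
}
```

Each string in the returned array then drives one of the instantiated line objects from the first point.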

I hope detailing the process helped you; let me know if there are any questions! Post them in the comments or send them to me directly via email. If you enjoyed this article, please leave a 👏🏽


Jasmine Roberts

AR/VR Engineer ॐ Microsoft, Google, Unity, PlayStation, NASA