How We Turn Physical Products into Realistic 3D Models for AR

Sam Cribbie
Shopify AR/VR
Dec 6, 2017 · 7 min read

Hello all! My name is Sam and I’m a 3D artist on the VR/AR team at Shopify.

Recently we collaborated with Magnolia, a home and lifestyle brand by Chip and Joanna Gaines, to add augmented reality (AR) functionality to their new shopping app: Magnolia Market.

Magnolia Market’s “Preview in your home” functionality.

The app lets you preview select homeware accessories in your space to give you a better idea of whether they're the right fit. Each product available for AR was 3D modeled by our team, and we put an emphasis on detail to make sure the models were as accurate and realistic as possible.

Since launching the app, we’ve been getting a lot of questions about how we went about turning those products into photo-realistic 3D models.

In this post I'll give a brief and simplified summary. This won't be a step-by-step guide, but hopefully it will shed a bit of light on the process.


Available Methods

There are a couple of different options to consider when it comes to creating a 3D model from a real-world object.

Option One: Photogrammetry

Using a series of photos taken of a real-world object, photogrammetry software can create an accurate high-density mesh of most objects. The "mesh" is a group of triangles that define the shape of the object. Along with the mesh, the software also creates texture images that define the colour of the object.

These photos also define how light interacts with that object, letting the program know how rough or smooth the object is. However, while photogrammetry can be extremely effective for some objects, it can be highly ineffective for others. So what makes an object a good or bad candidate for photogrammetry?

Bad Candidates

  • smooth surface
  • transparent
  • reflective/shiny
  • featureless (i.e. one solid colour; no visual patterns to detect)

Good Candidates

  • rough surface
  • opaque
  • lots of visual patterns on surface (i.e. colour changes, texture, depth)

It is important to mention that even with a good candidate, the model output will be non-optimized: the mesh will be made up of more triangles than necessary. For most use cases, the model needs far fewer triangles to be usable in things like video games or, in our case, a mobile AR app.

Option Two: 3D Scanning

3D scanning is similar to photogrammetry, but uses more specialized hardware. This technology shares many of the same pitfalls as photogrammetry. Though it can be highly effective when it comes to accuracy, it produces a non-optimized model and texture set. This means the file size will be larger than necessary and the output will potentially require manual work to make it ready for use. Furthermore, it can be expensive to buy a good 3D scanner or to have your object 3D scanned elsewhere.

Photo credit: Makerbot.com

Option Three: 3D Modeling Programs

In this option an artist starts with a blank digital space and creates the model from scratch. This can be a time-consuming process and needs the skills of an experienced modeler to get right. However, the results can be visually accurate and fully optimized for purposes such as ours.

The majority of our products were modeled in a 3D modeling program called Maya, then brought into Substance Painter or Mudbox for texture painting. For the few products we saw as good candidates for photogrammetry, we used a program called RealityCapture.

Our toolbox: Autodesk Maya and Mudbox, Allegorithmic's Substance Painter, and RealityCapture

Our Process

Step One: Taking Reference Photos and Measurements

The first step for each product is taking good reference photos. We were lucky enough to have Magnolia send us each product, which was a huge help during the entire process.

Just a few of the products sent to us by Magnolia.

There are two things to keep in mind when trying to shoot good reference photos:

  1. Long focal length: it's important to use a lens with a long focal length. Otherwise the photo will be distorted by perspective, making the parts of the object closest to the camera appear much larger than they really are (see the sketch after this list). This kind of photo is not ideal to model against.
Photo credit: Stephen Eastwood

  2. Varying views: usually front, back, left side, right side, and bottom are sufficient to create an accurate model.

Photo credit: Andrei Serghiuta
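To make that first point concrete, here's a toy Swift calculation (with made-up numbers) of how much larger the near side of an object appears than the far side, depending on how far away the camera is:

```swift
// Apparent size is proportional to 1 / distance, so the front of an object
// is exaggerated relative to the back when the camera is close.

// Size ratio between the nearest and farthest points of an object that is
// `depth` metres deep, photographed from `distance` metres away.
func perspectiveExaggeration(distance: Double, depth: Double) -> Double {
    return (distance + depth) / distance
}

// A 30 cm deep object shot from 0.5 m away: the front appears ~60% larger.
print(perspectiveExaggeration(distance: 0.5, depth: 0.3))  // 1.6

// The same object from 3 m away with a longer lens: only ~10% larger.
print(perspectiveExaggeration(distance: 3.0, depth: 0.3))  // 1.1
```

Standing farther back with a longer lens flattens that ratio toward 1, which is exactly what you want in a reference photo.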

These photos are then imported into the Maya scene for reference when we build the model. The goal here is accuracy. If the model does not reflect the real-world proportions of the product, the AR representation becomes misleading.

Then we take careful measurements of the height, length, and width of each part. Sometimes this requires drawing up an extensive diagram, depending on how complex the object is.

Step Two: Modeling

For the full modeling and painting process check out this link: https://youtu.be/t68hb1alb7g.

After we have our measurements and scene file set up, we start with a primitive shape (i.e. a sphere, cylinder, or cube) and add detail until we have an accurate representation of the product. It's also important to remember that the mesh complexity has to stay relatively low so the app can load it quickly.
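As a rough illustration of keeping that budget in check, here's how you might sanity-check a model's triangle count on the app side with SceneKit. This is a sketch: the file name is a placeholder, and the acceptable triangle count depends on your app.

```swift
import SceneKit

// Sum the triangles across every geometry in the node hierarchy.
func triangleCount(of node: SCNNode) -> Int {
    var total = 0
    node.enumerateHierarchy { child, _ in
        guard let geometry = child.geometry else { return }
        for element in geometry.elements where element.primitiveType == .triangles {
            total += element.primitiveCount
        }
    }
    return total
}

// "WateringCan.dae" is a placeholder asset name.
let scene = try! SCNScene(url: URL(fileURLWithPath: "WateringCan.dae"))
print("Triangles: \(triangleCount(of: scene.rootNode))")
```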

Reference Image compared to the created mesh.

After the modeling portion is finished, we export two files, one with a low-density mesh and one with a high-density mesh, and import them into Substance Painter, where we add texture.

Step Three: Painting the Textures

The high-density mesh is used to generate smooth texture maps. Substance Painter then uses these initial texture maps to generate various effects, like edge wear, scratches, and rust.

Texturing in Substance Painter

Then we adjust our Substance Painter viewport to match the environmental lighting the models will be viewed in. We do this using a 360° photo of the office.

Substance Painter can generate realistic imperfections

Substance works a lot like Photoshop: you add detail, textures, and colour adjustments in layers. What makes a model look real is capturing the imperfections: the layers of scratches, fingerprints, and chipped paint of the real-world object. Rarely is anything in real life one colour, completely clean, or perfectly reflective.

Some of the layers that make up the watering can texture

Magnolia products generally have a rustic look that gives them a well-worn, antique feel. Capturing the same feeling in the 3D models was extremely important.

For an object like this watering can, a nicely polished surface would look out of place

Prepping Models for AR

Before the models can be used, they must be exported in a format supported by your 3D engine of choice. In our case, we used the Collada (DAE) format, as we were building a native Swift app in Xcode, and the textures were exported separately as JPEGs.

Texture maps for the watering can from left to right: diffuse, roughness, occlusion, normal, metalness
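To give an idea of how those maps come together on the app side, here's a minimal SceneKit sketch that loads a DAE file and assigns the exported textures to a physically based material. The file and node names are hypothetical:

```swift
import SceneKit
import UIKit

let scene = try! SCNScene(url: URL(fileURLWithPath: "WateringCan.dae"))
let productNode = scene.rootNode.childNode(withName: "wateringCan", recursively: true)!

let material = SCNMaterial()
material.lightingModel = .physicallyBased            // enables the PBR slots below
material.diffuse.contents          = UIImage(named: "wateringCan_diffuse.jpg")
material.roughness.contents        = UIImage(named: "wateringCan_roughness.jpg")
material.ambientOcclusion.contents = UIImage(named: "wateringCan_occlusion.jpg")
material.normal.contents           = UIImage(named: "wateringCan_normal.jpg")
material.metalness.contents        = UIImage(named: "wateringCan_metalness.jpg")
productNode.geometry?.firstMaterial = material
```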

An additional texture is needed for the product’s contact shadow. Without shadows, products look like they’re floating above the surface. These are generated beforehand in Maya, and saved as a texture to be displayed underneath the product mesh.

3D model with contact shadow (left), 3D model without contact shadow (right)
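Here's a sketch of one way this could be wired up in SceneKit: a small plane, textured with the baked shadow image, laid flat underneath the product. The image name and plane size are made up for the example:

```swift
import SceneKit
import UIKit

func addContactShadow(to productNode: SCNNode) {
    let plane = SCNPlane(width: 0.4, height: 0.4)    // roughly the product's footprint, in metres

    let material = SCNMaterial()
    material.diffuse.contents = UIImage(named: "wateringCan_shadow.png")
    material.blendMode = .multiply                   // darken whatever is beneath the product
    material.writesToDepthBuffer = false             // avoid z-fighting with the floor
    plane.firstMaterial = material

    let shadowNode = SCNNode(geometry: plane)
    shadowNode.eulerAngles.x = -.pi / 2              // rotate the plane to lie flat
    shadowNode.position.y = 0.001                    // sit just above the surface
    productNode.addChildNode(shadowNode)
}
```

Using a multiply blend mode means the shadow simply darkens whatever surface it sits on, so it holds up on different floor colours.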

The last asset needed is the environment texture. This is a 360° photo that provides information for lighting and reflections. Without it, materials such as metals and plastics will appear lifeless and plain.
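In SceneKit, hooking up that environment texture only takes a couple of lines. The asset name here is hypothetical:

```swift
import SceneKit
import UIKit

let scene = SCNScene()
scene.lightingEnvironment.contents = UIImage(named: "office_360.jpg")  // 360° photo of the office
scene.lightingEnvironment.intensity = 1.0   // scale the image-based lighting up or down
```

Physically based materials like the ones above pick up their reflections from this image, which is what keeps metals and plastics from looking flat.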

Moving Forward

While we found a good workflow for creating high-quality 3D models of products, the process is still time-consuming and manual. The challenge is going to be scaling this to our merchants. Our next step is to look at ways of adding 3D artists to our Partner program, and to find affordable techniques that merchants can use themselves.

Have you tried the Magnolia Market app? Let us know what you think in the comments!
