An Introduction to 3D Scanning

A simple explanation of the basics behind the technology

In the last few months — ever since I signed on to work at a mobile 3D scanning company — I’ve had to explain to my friends what exactly 3D scanning is and how it is applied to real world situations. They all know that 3D scanning sounds vaguely cool, but they don’t have any idea what it really means.

I quickly found that it’s tricky to find the right balance between providing a surface level description and an in-depth seminar that’s way too long and way too technical. The goal of this article is to strike that balance — to break down the basics of 3D scanning in a simple and understandable manner. I’ll talk about what enables 3D scanning, some of the ways it’s currently used, and what the future holds for the technology.

What is 3D scanning?

3D scanning is a computer-vision tool for recognizing and reproducing three-dimensional content: it enables computers to see the world. What do I mean by "see the world"? Using technology most easily compared to echolocation, computers can take in 3D information about their surroundings, parse it into an understandable format, and then re-create that physical space as a digital one. The scanner stores this information as "points": coordinates that mark locations along the x-, y-, and z-axes.
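To make the idea of "points" concrete, here's a minimal sketch in Python. The names and structure here are purely illustrative (no scanner stores data exactly this way), but a point cloud really is just a big list of x, y, z coordinates.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Point:
    """A single 3D sample: a location along the x-, y-, and z-axes (in meters)."""
    x: float
    y: float
    z: float


# A point cloud is simply a collection of these samples.
PointCloud = List[Point]

# A tiny, hand-made "scan" of three nearby surface samples.
scan: PointCloud = [
    Point(0.12, 0.05, 1.43),
    Point(0.13, 0.05, 1.44),
    Point(0.12, 0.06, 1.42),
]

# Each point records where, in 3D space, the scanner saw a surface.
for p in scan:
    print(f"x={p.x:.2f} m, y={p.y:.2f} m, z={p.z:.2f} m")
```

A real scan contains millions of these points, which is why the processing step described below matters so much.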

Check out this awesome video to learn more about computer vision.

The end goal is to take the world around us — what our eyes see every day — and transfer that vision to the digital sphere. 3D scanning allows the digital representation of what’s around us to also be in three dimensions, unlike a flat drawing or blueprint. What you choose to do with this information is what makes 3D scanning so powerful.

Visualization of a complex, multi-room 3D scan.

How does 3D scanning happen?

There are two sides to every 3D scanning application: collecting the data and processing it. Acquiring point clouds (the term used for sets of 3D data points) requires a 3D camera system; processing the collected information requires algorithms that organize the data into whatever structure you need. The methods of acquiring 3D scan data are myriad, but modern 3D cameras for mobile devices like the iPhone X and the Sony Xperia XZ1 are starting to include one of two methods for real-time depth capture: Time-of-Flight (ToF) or Structured Light (SL).

Apple's just-announced iPhone X utilizes a Structured Light 3D camera to recognize faces.

Here's a quick, simple explanation. Both Structured Light and Time-of-Flight systems have a camera (sometimes more than one) and a light projector; the projector shines infrared light at the object you want to scan, while the cameras in the scanner record the light that is reflected back. The difference: Structured Light projects a known pattern, and the way that pattern appears distorted on the object is used to triangulate depth, whereas Time-of-Flight shines diffuse light and measures how long the light takes to bounce back; because the speed of light is known, that travel time converts directly into distance, and those distances form the point cloud. Just like different camera lenses, ToF and Structured Light IR projectors suit different purposes, and each has its pros and cons in different situations.
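To see how little math underlies the core idea, here's a simplified sketch of both measurements in Python. Real sensors measure phase shifts, calibrate carefully, and do far more sophisticated processing; the numbers and function names below are just illustrative assumptions.

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second


def tof_distance(round_trip_time_s: float) -> float:
    """Time-of-Flight: light travels to the surface and back, so the
    distance is half of (speed of light * round-trip travel time)."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0


def structured_light_depth(focal_length_px: float,
                           baseline_m: float,
                           disparity_px: float) -> float:
    """Structured Light, reduced to its essence (triangulation): the projected
    pattern appears shifted by some number of pixels between the projector and
    the camera, and depth falls out of similar triangles."""
    return focal_length_px * baseline_m / disparity_px


# Light that returns after about 10 nanoseconds hit a surface ~1.5 m away...
print(f"ToF depth: {tof_distance(10e-9):.2f} m")

# ...and a 40-pixel pattern shift, with a 600 px focal length and a 7.5 cm
# projector-to-camera baseline, corresponds to roughly 1.1 m of depth.
print(f"SL depth:  {structured_light_depth(600.0, 0.075, 40.0):.2f} m")
```

Repeat either measurement for every pixel in the sensor and you end up with a depth value per pixel, which is exactly the raw material for a point cloud.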

If you're eager to get into the nitty-gritty details, I encourage you to check out this article describing Time-of-Flight or this one describing Pattern Projectors.

It's possible to directly measure and visualize the point cloud data acquired from a 3D scan in real time; in most cases, however, we want to record the scan data so we can do more with it later. We do this by feeding the point cloud into algorithms that construct models of the scanned object in real time. The result might be a polygon mesh model, a surface model (such as a NURBS model), or a solid model for computer-aided design.
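As a concrete illustration of that point-cloud-to-mesh step, here's a short sketch using the open-source Open3D library. This is just one of many possible tools (not necessarily what any given scanner uses), and the file names and the depth parameter are placeholder assumptions.

```python
import open3d as o3d

# Load a point cloud saved from a scan (hypothetical file name).
pcd = o3d.io.read_point_cloud("scan.ply")

# Surface reconstruction needs normals; estimate them from neighboring points.
pcd.estimate_normals()

# Poisson reconstruction turns the unordered points into a polygon mesh;
# a higher 'depth' gives a finer (and slower to compute) surface.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=8
)

# Save the mesh so it can be viewed, edited, measured, or 3D printed later.
o3d.io.write_triangle_mesh("scan_mesh.ply", mesh)
```

The same point cloud could instead be fit with NURBS surfaces or converted into a solid CAD model, depending on what you plan to do with the scan.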

For a deeper dive into the details of different model types, take a look at this description of polygon meshes. Here’s an explanation of NURBS curves and surfaces, and finally, here’s one for solid modeling.

Point cloud of a 3D scan of a rabbit (L); polygon mesh visualization of the 3D scan data (R).

How is 3D scanning being used?

3D scanning is useful for almost any application or objective that requires accurate 3D modeling. Consider, for example, the construction industry: 3D scanning can help designers, engineers, and builders develop and manipulate realistic models of their projects more easily, and better visualize their end goals. It can help law enforcement document crime scenes and forensic evidence more accurately; it can model and recreate historical artifacts for preservation and documentation; and it can improve medical prosthetics by capturing the shape of a patient's limbs or other body parts in three dimensions.

A granite bust of Amenemhat III of Egypt (c. 1800 BC), from the British Museum. The original (L) and the 3D scan (R).

All of this is enabled simply by using the scan itself. Going a step further, you can imagine using the raw data of 3D scans to reverse engineer existing parts, or to correct and extend surfaces that would otherwise be locked in their original form. When you give computers an understanding of space and the 3D vision to map three-dimensional data in real time, programs can use that data to make informed decisions. So what's next?

Why does 3D scanning matter?

The reason why we care so much about 3D scanning right now is the current push to make these capabilities mobile — simplifying 3D scanning to the level of using the camera on your smartphone. The decreasing size of 3D cameras will open the door to a lot of new possibilities and advanced applications.

A year ago, there wasn't a single smartphone with any form of 3D scanning capability. Today, four phones have built-in 3D camera technology: Lenovo's Phab 2 Pro, Asus's recently released ZenFone AR, Sony's Xperia XZ1, and Apple's newly announced iPhone X. And this is just the beginning: more and more phones will include depth-sensing cameras in their hardware, putting 3D scanning in the hands of you, me, and everyone else walking down the street. It's not just smartphones, either; there will also be 3D cameras in VR headsets, game consoles, and various other devices. That's an enormous amount of power, and potential, in your hands.

3D cameras will soon be ubiquitous in smartphones, game consoles, and other devices, allowing anyone, even you or me, to easily capture 3D scans with the devices in our pockets.

Previously, only a few people had access to all those potential uses of 3D scanning that I described earlier: the design work, the reverse engineering, the artifact preservation. You had to bring in a large device to scan an object, then transfer the file to a computer where you could manipulate and work with the three-dimensional model. And people had neither the time nor the resources (access to a scanning device or the right modeling software) to do all that. But with 3D cameras starting to go mobile, that's all changing: everyone will now have the ability to create realistic models through 3D scanning and to work with and create things using those models.

Let’s go back to the example of improving medical prosthetics. Let’s say you injured your wrist — imagine being able to scan your wrist with your own phone and then receive a customized wrist brace that fits your wrist perfectly, instead of some generic, ill-fitting, off-the-shelf brace. Take that same scenario, and scale it to creating a cast or fitting a prosthetic joint. That’s what’s next.

I hope that this short introduction has given you a better sense of some of the principles and technology that underlie 3D scanning, and pointed toward where the field is headed. It's an exciting time for the industry, right as 3D scanning technology becomes ever more mobile, and as a whole new world of possibilities opens up alongside that accessibility. We're entering a new age of mass customization, one in which we'll all be able to personally customize, manipulate, and design the world around us. Those of us who work with 3D scanning are excited to see how the technology will develop and grow as more and more people, even my friends who are still confused about what exactly 3D scanning is, start to innovate and create with it.