Girls rocking some 3D glasses at Women in Computing Day 2018.

3D Visualization, Computing, and Teaching Young Women — By Clark Dorman

Laurian Vega
Women in Computing Newsletter
10 min read · May 1, 2018

--

Clark Dorman is a Chief Engineer at Next Century and a longtime session lead for Women in Computing Day. I’m sure Clark is amazing at his day job of leading cutting-edge research products, but there is something I find even more amazing about Clark. I love the fact that Clark has been bringing his two daughters to Women in Computing Day for years, and recently they have begun helping him run sessions. Nothing, in my mind, is more important than raising the next generation of female engineers, and he is raising two amazing technologists who are bootstrapping even more young engineers. Together, he and his daughters have run this event about 3D Modeling a couple of times, and it is a real crowd pleaser. Clark wrote up this explanation of his session and materials. It is fascinating and covers the history of viewing 3D images and how he ran his session. The presentation Clark made for the event is also available online.

This is the first post in a series detailing the sessions run at the recent WCD in Virginia. There are a few more to come. Enjoy!

Introduction

When we think of computers, we usually think about what they do for us, like showing us a web page or movie, or problem solving, or sending an email. In turn, programmers think about how to get the computers to do those things. However, an important area of computer science is figuring out how people interact with computers, both as users and programmers. And to improve our interaction with computers, we have to understand both how computers work and how people work, and the strengths and weaknesses of both.

One of the hottest areas of computer science, programming, and human-computer interaction (HCI) right now is 3D. The Oculus Rift started the latest rise in popularity of 3D interaction, and in the past couple of years, we have seen the HTC Vive, the Playstation 4 VR, and others. Using them, we can explore immersive worlds and manipulate objects in space in ways that are just not possible with standard screens. But how do they work? This year in Women in Computing, we explored 3D visualizations, discovered how they work, and learned how to make our own.

How 3D Vision Works

When we experience the world, it does not just have width and height (two dimensions), it has depth (the third); that is, the distance that objects are from us. However, vision works by detecting light that hits the back of our eyes, on a 2D surface called the retina. The appearance of a 3D world is reconstructed by our brain based on the information that comes from the eyes. We use several different features of the incoming light to reconstruct the 3D world, including focus, perspective (the size of objects), and object recognition. Most importantly, though, we use binocular disparity, the difference in what our two eyes see.

Hold your index finger about a foot away from your face and look at it. Close your left eye, while continuing to look at your finger, and notice what is in the background. Now, open your left eye and close your right eye, and notice what is in the background. The backgrounds have changed, because each eye is looking at your finger from a different location, so each sees different things. When you look at a computer screen, each eye sees the same thing, so the screen does not look 3D.
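How much the background appears to shift, called the binocular disparity, shrinks as objects get farther away, and that shrinking shift is the depth cue the brain uses. As a rough illustration (not part of the original session), here is a toy pinhole-camera calculation, assuming a hypothetical focal length of 1000 pixels and a typical eye separation of about 6.5 cm:

```python
# Toy pinhole-camera model of binocular disparity: the horizontal shift,
# in pixels, between what the left and right eyes see for an object at a
# given distance. The focal length and eye separation are illustrative
# assumptions, not measured values.

def disparity_pixels(eye_separation_m, distance_m, focal_px):
    """Approximate shift between the left- and right-eye views."""
    return focal_px * eye_separation_m / distance_m

finger = disparity_pixels(0.065, 0.3, 1000)      # finger about a foot away
background = disparity_pixels(0.065, 3.0, 1000)  # wall a few meters back

print(round(finger, 1), round(background, 1))  # the finger shifts ~10x more
```

The nearby finger shifts about ten times more than the distant background, which is exactly the difference you notice when you alternate eyes.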

A Long History

So, the secret to making pictures 3D is to show the eyes different things. But not just any different things: they have to show what each eye would see. It turns out that people discovered this quite a while ago, and have been inventing ways to show the left and right eyes different things for over 150 years. This is a stereoscope:

By User Davepape on en.wikipedia — Photo by Davepape, Public Domain, https://commons.wikimedia.org/w/index.php?curid=961098

The stereoscope works by showing the user two different pictures. The pictures were taken from two different viewpoints, with the left picture from the viewpoint of a left eye, and the right picture from the viewpoint of a right eye.

Here is an example picture that can be viewed in a stereoscope, called a stereograph card:

By Underwood & Underwood [Public domain], via Wikimedia Commons

When you first look at the pictures above, you may think ‘They are the same picture’. But they are not. They differ slightly because they were taken a couple of inches apart, so things that are closer are shifted slightly compared to things behind them. That’s all it takes for the eyes to combine the two pictures and form a 3D impression of the scene.

The stereoscope also has some magnifying lenses so your eyes think that they are looking far away, even though they are looking at pictures 8 inches from your face. The stereoscope works great, but of course the pictures don’t move. Also, only one person can use it at a time, and stereoscopes are fairly expensive.

An alternative approach is colored glasses, called anaglyph glasses. These were very popular in the 1950s for movies.

By Snaily (Own work) [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC-BY-SA-3.0 (http://creativecommons.org/licenses/by-sa/3.0/)], via Wikimedia Commons

The left eye of the glasses is normally red and the right is usually cyan (a blue-green color) or, sometimes, green. The glasses work by viewing a picture that consists of two images printed over each other on a piece of paper or a screen. The image shown to the left eye is printed in cyan. The left eye, with its red lens, can see that image. The cyan-colored lens means that the right eye cannot see it, because everything that it sees is cyan! The image shown to the right eye is printed in red, so the right eye can see it very well but the left eye cannot. Below is an example red-cyan anaglyph picture. It was taken by the Curiosity rover on Mars and is provided by NASA:

https://www.nasa.gov/multimedia/guidelines/index.html
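On a screen, composing a red-cyan anaglyph comes down to a per-pixel channel swap: keep the red channel from the left-eye photo and take the green and blue channels from the right-eye photo. Here is a minimal Python sketch of that idea using tiny hand-made images; it is an illustration of the principle, not the code any particular tool uses:

```python
# A red-cyan anaglyph is composed per pixel: the red channel comes from
# the left-eye photo, and the green and blue channels come from the
# right-eye photo. Images are nested lists of (R, G, B) tuples here.

def make_anaglyph(left, right):
    """Combine two same-sized RGB images into one red-cyan anaglyph."""
    anaglyph = []
    for left_row, right_row in zip(left, right):
        row = []
        for (lr, lg, lb), (rr, rg, rb) in zip(left_row, right_row):
            row.append((lr, rg, rb))  # red from left, green/blue from right
        anaglyph.append(row)
    return anaglyph

# Two tiny 1x2 'photos' taken a couple of inches apart:
left_image = [[(200, 10, 10), (50, 60, 70)]]
right_image = [[(190, 20, 30), (55, 65, 75)]]

print(make_anaglyph(left_image, right_image))
# [[(200, 20, 30), (50, 65, 75)]]
```

Viewed through the glasses, the red lens only passes the left image's contribution and the cyan lens only passes the right image's, so each eye recovers its own photo.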

When you go to see a 3D movie, you are using the latest version of this sort of technology. There are polarized lenses in the glasses that you wear. Two different movies are shown on the screen, each with its images polarized in a different direction. That means that each eye only sees one of the movies.
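For glasses with linearly polarized lenses, the reason each eye sees only one movie is Malus's law: a polarizing filter passes light in proportion to the cosine squared of the angle between the light's polarization and the filter's axis. (Many modern cinemas actually use circular polarization, but the idea is the same.) A quick Python sketch:

```python
import math

# Malus's law: a polarizing filter transmits I0 * cos^2(theta), where
# theta is the angle between the light's polarization and the filter's
# axis. This is the linear-polarization picture of 3D movie glasses.

def transmitted(intensity, angle_deg):
    return intensity * math.cos(math.radians(angle_deg)) ** 2

print(transmitted(1.0, 0))   # matching eye: full brightness -> 1.0
print(transmitted(1.0, 90))  # other eye: essentially zero
```

At 0 degrees the matching eye gets full brightness; at 90 degrees the other eye gets essentially nothing, which is why each eye sees only its own movie.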

Making Your Own

Making your own 3D images is surprisingly easy. You need:

  • A camera
  • A printer
  • Red-cyan glasses
  • Software to combine images into stereo pairs for anaglyph glasses

The camera can be a cell phone, digital SLR, or web cam. Any printer you have at home or school will work. The red-cyan glasses can be bought from Amazon or other places online. We provided glasses for the students in Women in Computing. There are several different free programs that you can use; we used StereoPhoto Maker. Just download and unzip it, and it is ready to go.

To use StereoPhoto Maker, you first take a picture of a scene. That will be the left photo. Then, move the camera a couple of inches to the right and take a picture of the same scene; that will be the right photo. You should do this as quickly as possible so that the two images look as similar as possible. The camera should be centered on the same location for each picture. Then, start up StereoPhoto Maker and import the left and right images. It looks like this:

One of the most important steps at this point is to make sure that the pictures are aligned. Because our eyes are very sensitive to differences in pictures, it will be difficult for them to combine the two images unless they are very close. If one of them is looking in a slightly different direction or tilted, then it makes it difficult for the brain to make a single image out of them. Fortunately, StereoPhoto Maker comes with an auto-alignment process. Click it, and the images will probably shift a little so that the differences are minimized except for the change in perspective.
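Under the hood, auto-alignment is a search: try candidate shifts and keep the one that makes the two images most similar. Here is a toy one-dimensional Python version of that idea, using rows of grayscale values; it is a sketch of the principle, not StereoPhoto Maker's actual algorithm:

```python
# Toy version of auto-alignment in one dimension: try every horizontal
# shift of the right image and keep the one that minimizes the average
# absolute difference of overlapping pixels. Real alignment tools also
# correct rotation and vertical offset.

def best_shift(left_row, right_row, max_shift=3):
    best, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        err, pairs = 0, 0
        for i, left_value in enumerate(left_row):
            j = i + s
            if 0 <= j < len(right_row):
                err += abs(left_value - right_row[j])
                pairs += 1
        if pairs and err / pairs < best_err:
            best, best_err = s, err / pairs
    return best

# A row of grayscale values, and the same row shifted one pixel right:
print(best_shift([1, 2, 3, 4, 5], [0, 1, 2, 3, 4]))  # -> 1
```

Once the best shift is found, the right image is moved by that amount, leaving only the depth-related disparity between the two pictures.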

Then, click on the anaglyph button, and you will see your anaglyph:

You can view this with glasses, or can print it out to view with your friends.

If you have a stereoscope, you can print out a piece of paper with the correct dimensions and shape to go into it. It looks like this:

Compare this to the original stereograph card.

At the Women In Computing Day, we had our participants pose and produce their own anaglyph and stereograph cards for viewing and to take home.

Modern 3D

During Women in Computing Day, we used two new ways to view 3D images. The underlying idea behind these is the same: they present slightly different images to the left and right eyes.

Google Cardboard

First, we used the Google Cardboard device. The Cardboard is a simple version of the stereoscope. It has two lenses and can hold a cell phone. The advantage of this is that the cell phone can show many different images or videos. In addition, by using the compass and gravity sensors built into modern cell phones, the cell phone can tell what direction you are looking, and show you what would be visible in that direction. So, if there is an application that shows you a forest scene, as you turn your head, you can see all around the forest, including up, down, and behind you. This causes you to have a sense of being in the scene.
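To decide what to render, the phone only has to turn its sensor readings into a viewing direction. As a simplified Python illustration (real head tracking uses full 3D rotations, typically quaternions, and these angle conventions are an assumption for the sketch):

```python
import math

# Simplified head tracking: turn a compass heading (yaw) and a tilt
# angle (pitch), in degrees, into a 3D unit vector pointing where the
# viewer is looking. Real headsets track full 3D orientation, usually
# with quaternions, but this captures the idea.

def view_direction(yaw_deg, pitch_deg):
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (math.cos(pitch) * math.sin(yaw),  # x: left/right
            math.sin(pitch),                  # y: up/down
            math.cos(pitch) * math.cos(yaw))  # z: straight ahead

print(view_direction(0, 0))   # looking straight ahead: (0.0, 0.0, 1.0)
print(view_direction(90, 0))  # turned 90 degrees to the right
```

The app then draws whatever part of the scene lies along that vector, so the forest keeps surrounding you as you turn.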

There are now a large number of free applications that work with the Cardboard. Here is a list of a number of them: https://thinkmobiles.com/blog/best-google-cardboard-apps/. During WIC, we looked at several of them, including Titans of Space and a rollercoaster application.

Google Cardboard image by Clark Dorman. Licensed under the Creative Commons Attribution 2.0 Generic license.

Playstation VR

The other modern device that we used was a headset connected to a computer or console. Below is an image of a modern headset, in this case the HTC Vive. Other devices include the Oculus Rift and the Playstation 4 VR.

(This image was originally posted to Flickr by pestoverde at https://www.flickr.com/photos/30364433@N05/16948507128. It was reviewed on 1 May 2015 by FlickreviewR and was confirmed to be licensed under the terms of the cc-by-2.0.)

The one that we used was the Playstation 4 VR (PS4 VR). Like the Google Cardboard, it has sensors that can tell what direction you are looking. However, unlike the Cardboard, the Playstation has a camera that can see the user, so it can also tell where the user’s head is. This allows the user to interact with the scene in an even more realistic way. For example, you can look around objects. If the forest scene that you are in has a tree in the way, you can lean left or right to look around it. Similarly, you can look over or under objects in the scene.

I thought that people at WIC would want a fairly calm and peaceful experience with the PS4 VR. I set up a scuba diving experience called Ocean Descent, so that the user could stand on an underwater platform and view the plants and animals as they go by. There is also a wonderful bird simulator called Eagle Flight, where you can fly over Paris. However, they were not interested in those nearly as much as they wanted to do Street Luge, where you go down a steep mountain road at high speed on a board, dodging cars and trucks.

Old and New Technologies

People figured out how the eyes work to create a 3D sense of the world many years ago. When photography became available, people made stereoscopes, and later anaglyph glasses, to allow users to view 3D scenes in static images.

Now, we have screens that can show us moving images, or change the images based on the direction or location of the user. This gives the user a sense of ‘immersion’ because they can interact with the scene in a more natural way.

However, the underlying principle of how 3D glasses work is exactly the same, because it is based on how humans work.

In the future, the immersiveness will improve, as developers learn more about how to give the user the sense of being in the scene and interacting with the world. Developers are also working on the next technological leap, which will be to overlay images and objects on the real world, called Augmented Reality. But that will be for a future WIC Day.
