Getting 3D Shapes for VR from 2D images with AI to make a Self Aware Robot’s brain!

Longer title: Getting 3D Shapes for VR from 2D Images with AI Neural Networks, for the purpose of building Self Aware Networks inside a Robot Brain, and also for making general AR/VR content such as interactive Volumetric Video.

This article in The Neural Lace Journal was written by Micah Blumberg. VRMA is powered in part by River Studios. Read more about Micah at the end of the page.

One idea here is that if we can get 3D shapes from 2D images with AI, then perhaps AI could reconstruct the entire human world in 3D from just 2D photos, including the insides of buildings and the insides of caves, provided we have the photos. I started thinking about this after looking at photos from the recent Burning Man 2017, photos of the different art projects that people created to stand in the desert. My question was: how can we integrate these into a VR app if the person who took the photos didn’t take enough of them for a traditional photogrammetry program?

Using AI to infer the geometry of a photo or a 360 movie is a very different way (different from photogrammetry or volumetric video) to turn content captured with a camera into a very solid-looking 3D scene in Virtual Reality, or in Google’s new ARCore tool for AR on Android phones.

Join the ARCore Group on Facebook

The goal is to use neural networks to create an internal representation of a 3D space inside an artificial brain from any kind of image, even without a depth sensor.
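To make that goal concrete, here is a minimal sketch (my own illustration, not code from any of the papers linked below) of the last step such a system needs: once a neural network has predicted a per-pixel depth map from a 2D image, that depth map can be back-projected into a 3D point cloud with the standard pinhole camera model. The focal lengths and principal point below are made-up toy values.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map into 3D camera-space points
    using the pinhole camera model: X = (u - cx) * Z / fx,
    Y = (v - cy) * Z / fy, Z = depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy example: a flat wall 2 meters away, seen by a 4x4-pixel "camera"
depth = np.full((4, 4), 2.0)
points = depth_to_point_cloud(depth, fx=2.0, fy=2.0, cx=1.5, cy=1.5)
print(points.shape)        # (16, 3)
print(points[:, 2].max())  # 2.0
```

Where the depth map comes from (a learned network, a depth sensor, or Make3D-style inference) is exactly what the papers in this article are about; this sketch only shows how depth becomes geometry.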

The eyesight in one of my eyes is pretty bad, and sometimes I notice that something I am looking at doesn’t have real depth (wearing glasses only fixes my depth perception somewhat). Something that I know is 3D, like a lamp, will sometimes look like a 2D photo to me, even though it is a real 3D object.

New AI technique creates 3-D shapes from 2-D images (July 25, 2017)

See this article from Berkeley Artificial Intelligence Research (BAIR) on “digitally reconstructing 3D geometry from images.” It’s called:

High Quality 3D Object Reconstruction from a Single Color Image

Aug 23, 2017

Next up is a pretty good paper on the topic, co-written by the well-known deep learning researcher Andrew Ng. It’s called Make3D:

Make3D: Learning 3D Scene Structure from a Single Still Image

Ashutosh Saxena, Min Sun and Andrew Y. Ng

Here is an additional resource from Stanford that includes links to more Andrew Ng papers on this topic:

3-D Reconstruction from a Single Still Image
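As a rough illustration of the core idea in Make3D (this is my simplified sketch, not the authors’ code): the model treats each small image patch as a 3D plane, and the depth of a pixel is recovered from the plane’s parameters along that pixel’s viewing ray. In the paper, a plane is encoded by a vector alpha such that the depth along a unit ray r is 1/(alpha · r). The alpha below is a hand-picked stand-in for what the learned model would actually predict from image features.

```python
import numpy as np

def ray_depth(alpha, ray):
    """Depth at which a viewing ray hits the plane parameterized by
    alpha, using the Make3D-style encoding d = 1 / (alpha . r_hat)."""
    ray = ray / np.linalg.norm(ray)     # normalize to a unit ray
    return 1.0 / np.dot(alpha, ray)

# A horizontal ground plane 1.5 m below the camera: alpha = n / d,
# where n is the unit plane normal and d the camera-to-plane distance.
alpha = np.array([0.0, 1.0, 0.0]) / 1.5

# A ray looking forward and slightly downward toward the ground
ray = np.array([0.0, 0.5, 1.0])
print(round(ray_depth(alpha, ray), 3))  # 3.354
```

The learning problem in Make3D is then to predict good plane parameters for every patch from local image cues, with a Markov random field enforcing consistency between neighboring patches.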

This next Apple paper is more about image recognition with AI in general, but I am sharing it because of its relevance to creating a robot brain. Such a brain would not only recognize images, letters, and words, and correctly combine the audio recognition of a word with the visual recognition of the written word; it would also link that data to 3D spatial representations that it can both recognize and create, connecting a 3D spatial representation correctly to that 2D word and/or to the sound of that word. Then, if you asked Siri for a castle that you could look at in ARKit, Siri would be able to turn the words from your mouth into a 3D image you could see on your iPhone. At some point a self aware robot would cycle on its own internal 3D representations of concepts, which might be transferred between bundles of neurons via brainwaves with a broad, distributed representation, perhaps sent in a lossy way because you are dealing with the physics of a brain. It’s still interesting to think about how we might achieve Self Aware Networks inside a Robot with these kinds of ideas.

Apple’s first AI paper focuses on creating ‘superrealistic’ image recognition

December 28, 2016

This link is a little bit outside the main topic, but it’s slightly related: it’s an article about the Advanced Technologies Group at Uber, whose job is to turn camera, laser, and other sensor data from Uber cars into a 3D visualization you can, for now, see on the web, but perhaps later see in VR or AR. Cars, specifically self driving cars, will generate a lot of useful images that we can turn into 3D data for a robot to build tempo-spatial concepts with. The big difference is that self driving cars start with 3D depth maps created by lidar; then neural networks are applied to predict objects from those 3D point clouds and how those objects are moving, including the speed and trajectory of their movement. Brains predict causes, and in general that means animals and people predict objects with tempo-spatial characteristics. We also predict the trajectory and speed of those objects in our world, and we do a whole lot more than that: we predict the motivations and the minds of other humans. Some day Death Star Robot will do this as well.


AUGUST 28, 2017
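To illustrate the “predict how objects are moving” step in the simplest possible terms, here is a toy sketch (my own, not anything from Uber’s stack): track an object’s centroid across successive lidar frames and extrapolate its next position with a constant-velocity model. Real self-driving systems use learned detectors and much richer motion models.

```python
import numpy as np

def predict_next_centroid(frames, dt=0.1):
    """frames: list of (N, 3) point clouds of the same object over time,
    one per lidar sweep, dt seconds apart. Returns the predicted
    centroid one time step after the last frame, assuming the object
    keeps moving at its most recent velocity."""
    centroids = np.array([f.mean(axis=0) for f in frames])
    velocity = (centroids[-1] - centroids[-2]) / dt   # meters per second
    return centroids[-1] + velocity * dt

# An object (50 lidar points) moving +1 m in x per frame
# (dt = 0.1 s, so it is traveling at 10 m/s)
base = np.zeros((50, 3))
frames = [base + np.array([i, 0.0, 0.0]) for i in range(3)]
print(np.round(predict_next_centroid(frames), 2))  # [3. 0. 0.]
```

Everything hard in the real problem lives upstream of this: segmenting which points belong to which object, and learning motion priors for cars, cyclists, and pedestrians rather than assuming constant velocity.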

Here is an example of deep learning being applied to a 360 video, but this time it’s not recognizing shapes so much as creating art. I guess this is similar in concept to Google’s DeepDream.

How Facebook Used AI To Make The Trippy Effects In This VR Film

The idea behind Jérôme Blanquet’s innovative film Alteration was for viewers to experience what it’s like to be an AI.

Don’t pass up this good Wikipedia article on the general topic of 3D reconstruction, although I think it could apply to multiple approaches: “3D reconstruction from multiple images is the creation of three-dimensional models from a set of images. It is the reverse process of obtaining 2D images from 3D scenes.”

It’s interesting to think about computers creating tempo-spatial concepts when thinking about how to build a self aware robot. The human brain seems to have a lot of tempo-spatial brainwave activity that may be one of the best neural correlates of our time- and space-based sensory modalities. Vision, hearing, smells, tastes: all of these tempo-spatial patterns have peaks and lows, like a wave, as they arrive in the canvas of your consciousness and then leave again. Emotions have peaks and valleys, but internal representations are also located spatially in relationship to your sense of self, which might be a directional concept, such that you might perceive a sound or a smell to be above or below you, or to have a certain texture, size, or volume. So if you want to help solve creating Self Aware Networks for Self Aware Robots, please join Death Star Robot.

There are many more similar papers, some of them from as far back as 2010, that are interesting, and there are probably articles that I haven’t discovered yet. So if you like this type of discussion, you can also join my Self Aware Networks group on Facebook.


About the author Micah Blumberg
I study the brain in order to think about how to build Neural Lace, Artificial Cortex, and Artificial Brains. That is also part of why I started writing and talking about Virtual Reality, Augmented Reality, Deep Learning Neural Networks, Self Driving Cars, and all the topics on the frontiers of Science & Technology. My journalism & research is powered in part by River Studios!