Depth maps, micropsia and virtualized cities


This is a collection of resources — an aide-memoire for an account of some recent developments in virtual reality photography and viewing that allow novel simulations of depth-vision processes which are usually subconscious.

Stereo 360 depth map capture … camera rigs

Here is one of my camera rigs for capturing over-under spherical panoramic sets of views of different kinds of scenes. This setup has the two fisheye cameras arranged one on top of the other, each panning around the same vertical lens axis, shooting videos from which I extract frames and stitch 360 panorama pairs: over-under stereo pairs with vertical parallax instead of the usual horizontal parallax.

The advantage of this approach for stereoscopic panoramic depth map capture, compared with the conventional side-by-side stereo camera arrangement, is that it allows easier, parallax-free, identical automatic stitching of the individual camera rotation sets (frames), even for action-filled scenes (with Smartblend or Photoshop panorama stitching, for instance). With side-by-side pairs there are stitching difficulties because one or both cameras rotate off-axis, and in action-filled scenes the close action elements occur at different lateral positions in the L/R equirectangular frame pairs, so action autoblending algorithms can act on different areas of the left and right views, creating content-wise stereo mismatches in that part of the frame.

Google “depth map” Image Search

Another advantage is that the distribution of depth within the scene can easily be altered by locally and/or globally manipulating the depth map's contrast or density. You can redistribute the parallax of different regions of the panorama (by changing the grey tone of the depth map) so that even very close features can be viewed comfortably, and very distant features, such as a far building, can have some apparent depth modelling. With a plugin like Fixel Zone Selector in Photoshop you can move particular areas of the scene forward or back, or extend or restrict them to a wider or narrower range of depths, as sketched below.
http://aescripts.com/fixel-zone-selector-1-ps/
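In code terms, that kind of zone adjustment is just a local remapping of depth values. A minimal numpy sketch of the idea, not the plugin's actual method (the filenames, the 8-bit near-is-white convention, and the zone ranges are all assumptions):

```python
import numpy as np
from PIL import Image

# Load an 8-bit grayscale depth map (assumed convention: white = near,
# black = far; some tools use the reverse) and normalise to 0..1.
depth = np.asarray(Image.open("pano_depth.png").convert("L"), dtype=np.float32) / 255.0

def remap_zone(d, lo, hi, new_lo, new_hi):
    """Linearly remap depths in [lo, hi] to [new_lo, new_hi], leaving the rest untouched."""
    out = d.copy()
    mask = (d >= lo) & (d <= hi)
    t = (d[mask] - lo) / max(hi - lo, 1e-6)
    out[mask] = new_lo + t * (new_hi - new_lo)
    return out

adjusted = remap_zone(depth, 0.0, 0.3, 0.15, 0.3)  # flatten the far background into a narrower band
adjusted = remap_zone(adjusted, 0.7, 1.0, 0.6, 1.0)  # give near features a wider band of depths

Image.fromarray((adjusted * 255).astype(np.uint8)).save("pano_depth_adjusted.png")
```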

On a page of suggested Google Summer of Code projects for Natron there is a proposal for a Defocus (from depth map) plugin in which that kind of depth slicing is described:

http://github.com/MrKepzie/Natron/wiki/Google-Summer-of-Code-GSoC-ideas

Extract “depth-slices” from the image, using the depth image. A depth slice is black and transparent everywhere except where the depth is within some range, where it is the original RGBA data (it has to be premultiplied if input is unpremultiplied). If the value of stepBlend (see below) is not zero, one pixel may belong to two slices, with linear interpolation between slices.
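A sketch of that slicing in numpy — my reading of the description above, not Natron's implementation. It assumes a float premultiplied RGBA array and a matching depth array; stepBlend here widens each slice boundary into a linear cross-fade, so adjacent slices share boundary pixels with weights summing to one:

```python
import numpy as np

def depth_slices(rgba, depth, n_slices, step_blend=0.0):
    """Split a premultiplied RGBA image into depth slices.

    Each slice keeps the original pixels whose depth falls in its range and is
    transparent elsewhere. With step_blend > 0, pixels near a slice boundary
    belong to two slices, cross-faded linearly over a band of
    step_blend * slice_width centred on the boundary.
    """
    slices = []
    edges = np.linspace(depth.min(), depth.max(), n_slices + 1)
    half_band = step_blend * (edges[1] - edges[0]) / 2.0
    for i in range(n_slices):
        lo, hi = edges[i], edges[i + 1]
        if half_band > 0:
            # 0 outside the band, 0.5 exactly on a boundary, 1 well inside the slice
            w = np.clip(np.minimum(depth - lo, hi - depth) / (2 * half_band) + 0.5, 0.0, 1.0)
        else:
            upper = (depth <= hi) if i == n_slices - 1 else (depth < hi)
            w = ((depth >= lo) & upper).astype(rgba.dtype)
        # rgba is premultiplied, so scaling all four channels fades colour and alpha together
        slices.append(rgba * w[..., None])
    return slices
```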

The disadvantage of the depth-map-from-stereo-panorama approach is that calculating accurate, detailed depth maps from stitched stereo panorama image pairs is currently an imperfect science, and some subjects are not suitable at all.

http://3dstereophoto.blogspot.com.au/2015/06/depth-map-automatic-generator-7-dmag7.html
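For a rough idea of what such tools do, OpenCV's semi-global block matcher can produce a (noisy) disparity map from a rectified stereo pair. This is a stand-in sketch, not DMAG7; the filenames and parameters are assumptions. For the over-under rig above you would rotate both views 90° first so the parallax runs horizontally:

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,  # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,        # smoothness penalties, OpenCV's suggested values for 1 channel
    P2=32 * 5 * 5,
)
disp = stereo.compute(left, right).astype(np.float32) / 16.0  # SGBM returns fixed-point x16

# Normalise to an 8-bit depth-map-style image (near = bright).
disp_vis = cv2.normalize(disp, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("depthmap.png", disp_vis)
```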

But if you do have a good depth map of a spherical scene plus the corresponding image (a Left or Right or Top or Bottom 360 view), then there are significant advantages for 3d interactive playback (eg. in a vr headset): natural stereoscopic viewing at nadir and zenith, compensation for head roll in vr headset viewing, and zero problems from stereo rivalry (from differences between L/R views). The image+depth format also lends itself to a wide variety of stereo playback devices, eg. autostereoscopic multiview displays.
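The playback side can be sketched as naive depth-image-based rendering: shift each pixel by a disparity proportional to its depth. This toy version only shifts horizontally and leaves disocclusion holes black; a real head-tracked renderer reprojects every frame along the current interocular axis, which is where the head-roll compensation comes from. The function name and shift scale are my assumptions:

```python
import numpy as np

def synth_view(img, depth, max_shift):
    """Naive DIBR: shift each pixel horizontally by a disparity proportional
    to its depth (0..1, near = 1). Rows are painted far-to-near so near
    pixels win occlusions; holes from disocclusion stay black."""
    h, w = depth.shape
    out = np.zeros_like(img)
    cols = np.arange(w)
    for y in range(h):
        order = np.argsort(depth[y])  # ascending depth value = far first
        x = cols[order]
        nx = np.clip(x + (max_shift * depth[y, order]).astype(int), 0, w - 1)
        out[y, nx] = img[y, x]        # duplicate targets keep the last (nearest) pixel
    return out

# One image + depth map yields both eye views (opposite shift directions):
# left_eye  = synth_view(img, depth, +8)
# right_eye = synth_view(img, depth, -8)
```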

Depth maps

A depth map is an image that encodes the distances of the objects depicted in a companion image as shades of grey, or with different colors for different distances. eg.

— for Google Street View

http://0xef.files.wordpress.com/2013/05/depthmap.jpg

from: https://0xef.wordpress.com/2013/05/01/extract-depth-maps-from-google-street-view/
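Street View delivers its depth data in its own packed format (see the post above), but the plain grayscale convention assumed elsewhere on this page is just a mapping from metric distance to pixel value. A minimal sketch, where the near/far range and the near-is-white convention are assumptions that vary between tools:

```python
import numpy as np

def encode_depth(dist_m, near=0.5, far=100.0):
    """Map metric distances (metres) to an 8-bit grayscale depth map,
    near = white, far = black (one common convention; some tools invert it)."""
    d = np.clip((far - dist_m) / (far - near), 0.0, 1.0)
    return (d * 255).astype(np.uint8)

def decode_depth(gray, near=0.5, far=100.0):
    """Inverse mapping: 8-bit gray back to approximate metres."""
    return far - (gray.astype(np.float32) / 255.0) * (far - near)
```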

A tool for extracting panoramas from Google Street View:
https://runkit.com/npm/extract-streetview

Using Google depth data for augmenting Street View images with forest etc:

http://www.inear.se/2014/03/urban-jungle-street-view/

GSVPanoDepth.js: https://github.com/proog128/GSVPanoDepth.js/blob/master/examples/example.html

http://callumprentice.github.io/

Panogetter:
http://callumprentice.github.io/apps/pano_getter/index.html

Micropsia

… visual distortions of scale — very common … “Episodes of micropsia or macropsia occur in 9% of adolescents”

also part of Alice in Wonderland Syndrome

https://www.youtube.com/watch?v=kXtJSwqb0j8

top-down visual processing theory … ‘highly unlikely objects tend to be mistaken for likely objects’ eg. the Charlie Chaplin hollow-mask video effect:

more masks … Adolf Menzel’s The Studio Wall

versus Gibson’s bottom-up Direct Theory of Perception (1966)

How much depth can we see?

Starting from Descartes:

He assigned the limits of visual resolution to the dimensions of the receptive fibres in the retina. Receptors were not known about at that time, and Descartes assumed that the retina consisted of the endings of the fibres of the optic nerve. Since these were of a particular size, Descartes argued that no object smaller than a fibre could be resolved. His correspondent, Marin Mersenne (1588–1648), stated that a grain of sand could be seen from a distance of 10 or 12 feet and calculated that it subtended an angle of 15” at the eye.
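Mersenne's 15 arcseconds is easy to sanity-check with the small-angle approximation; the grain diameter here is my assumption:

```python
import math

# Angle subtended by a grain of sand at 10-12 feet, angle ≈ size / distance.
grain = 0.25e-3                # assumed grain diameter: 0.25 mm
for feet in (10, 12):
    dist = feet * 0.3048       # feet to metres
    arcsec = math.degrees(grain / dist) * 3600
    print(f'{feet} ft: {arcsec:.0f}"')
# Prints roughly 17" at 10 ft and 14" at 12 ft, bracketing Mersenne's 15".
```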

People have more or less depth acuity. Important if you want to be an artillery officer, I guess.

https://en.wikipedia.org/wiki/Stereoscopic_acuity … stereo acuity can be trained!
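A related back-of-envelope: the smallest depth step stereopsis can resolve grows with the square of viewing distance, dz ≈ z² · dθ / a, where dθ is the stereo acuity in radians and a the interocular separation. A quick sketch with assumed values:

```python
import math

ipd = 0.065                           # assumed 65 mm interocular distance
d_theta = 20 / 3600 * math.pi / 180   # 20 arcseconds, a typical good acuity

for z in (1, 10, 100, 1000):          # viewing distance in metres
    dz = z**2 * d_theta / ipd
    print(f"at {z:>4} m: smallest resolvable depth step ≈ {dz:.3g} m")
# Depth resolution degrades with the square of distance, which is why far
# buildings look flat, and why remapping the depth map (above) can restore
# some apparent relief to them.
```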

the 3d fly (eg. the Titmus stereo fly test) for stereopsis testing

What kind of vr device was the stereoscope?

Krauss and Crary on the stereoscope in the 19C:

Sekula no doubt bases his reading on the descriptions offered by Oliver Wendell Holmes in the 1850s and ‘60s. These descriptions (not specifically discussed by Crary) speak of the stereograph as producing ‘a dream-like exaltation in which we seem to leave the body behind us and sail away into one strange scene after another, like disembodied spirits’ (qtd. in Krauss 138). Rosalind Krauss, on the other hand, stresses the way that the stereoscope gives an engrossed and isolated viewer the sensation of periodically refocussing one’s eyes as they look from plane to plane; by this means a movement of the eyes and a movement of the entire body are made synonymous.

These micromuscular efforts are the kinesthetic counterpart to the sheerly optical illusion of the stereograph. They are a kind of enactment, on a very reduced scale, of what happens when a deep channel of space is opened before one. The actual adjustment of the eyes from plane to plane within the stereoscopic field is the representation by one part of the body of what another part of the body (the feet) would do in passing through real space. (Krauss 138)

So it would seem that, contrary to Crary’s exclusive emphasis on embodiment, the key to the experience of the stereograph is that the eye is both disembodied and re-embodied. Or, to put it another way, in a single act of looking, the observer is moved back and forth between two separate but conjoined embodiments. Cut off from all distractions by the masked instrument held to the face, the eye of the viewer is dismembered from his or her immobilised body and induced to wander freely through the receding picture planes that unfold ahead. That same wandering eye simultaneously becomes a miniature prosthesis for another body; the viewer enjoys, as Holmes points out, the palpable sensation of turning into a flying phantom limb and thereby becoming an integral part of the representation being seen.

(vertical eyes are fine for stereo)

Owls are binocular vision superheroes, as are we: http://www.owlpages.com/owls/articles.php?a=5

The forward facing aspect of the eyes that give an owl its “wise” appearance, also give it a wide range of “binocular” vision (seeing an object with both eyes at the same time). This means the owl can see objects in 3 dimensions (height, width, and depth), and can judge distances in a similar way to humans. The field of view for an owl is about 110 degrees, with about 70 degrees being binocular vision.
By comparison, humans have a field of view that covers 180 degrees, with 140 degrees being binocular.

Poetic urban spaces — “The Philosopher’s Conquest”

Robert Hughes said … de Chirico could

…condense voluminous feeling through metaphor and association … a bright ball of vapor hovers directly above its smokestack. Perhaps it comes from the train and is near us. Or possibly it is a cloud on the horizon, lit by the sun that never penetrates the buildings, in the last electric blue silence of dusk. It contracts the near and the far, enchanting one’s sense of space. Early de Chiricos are full of such effects

and elsewhere Hughes said, about those kinds of places in that kind of de Chirico painting:

It is an airless place, and its weather is always the same. The sun has a late-afternoon slant, throwing long shadows across the piazzas. Its clear and mordant light embalms objects, never caressing them, never providing the illusion of well-being. Space rushes away from one’s eye, in long runs of arcades and theatrical perspectives; yet its elongation, which gives far things an entranced remoteness and clarity, is contradicted by a Cubist flattening and compression.

The Culture of Time and Space (Stephen Kern), chapter 6:

https://monoskop.org/images/d/db/Kern_Stephen_The_Culture_of_Time_and_Space_1880-1918_Chapter_6.pdf