Great article about an implausible hypothesis.
Inverse graphics seems to be a fairly accurate model of human sight given our current knowledge of this space. While hierarchical learning and parameter sharing have been around for a few years, the proof of concept of inverse graphics in computer vision opens many new avenues for development.
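For readers unfamiliar with the term, here is a minimal sketch of what "parameter sharing" means in this context: one small set of weights (a kernel) reused at every image position, as in a convolutional layer. This is an illustration of the general idea only, not of Hinton's specific model; the function name and data are invented for the example.

```python
def conv2d_valid(image, kernel):
    """Slide the SAME 2D kernel over the image (valid cross-correlation).

    Every output position is computed from the one shared set of
    kernel weights -- that reuse is parameter sharing.
    """
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

image = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 ramp
kernel = [[1, 0], [0, -1]]  # a single shared 2x2 filter: 4 weights total
print(conv2d_valid(image, kernel))  # -> [[-5, -5, -5], [-5, -5, -5], [-5, -5, -5]]
```

Instead of learning separate weights for every pixel neighborhood, the same four numbers detect the same local pattern everywhere, which is what makes the representation translation-tolerant with so few parameters.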
There is no way the brain can maintain a matrix-like 3D representation of the world around it. Its neurons are far too slow to perform the calculations such a complex structure would require. Besides, the brain does not need to model the world: the world is its own model, and the brain just learns how to sense it. Why calculate the world when the world already performs its own computations? The brain must be using a different, much simpler and less computationally intensive approach to invariant object recognition. Hinton does not understand it, and neither does Jeff Hawkins with his similar location-based hypothesis.