There are a few themes that I wanted to reflect in this work: i) what we perceive to be real, what we see, is a reconstruction in our minds, a simplified model of the world, limited by our biology and physiology; ii) perception, including vision, is an active process, requiring action and integration; iii) the actions that we take affect the reality and the meaning that we construct in our minds; iv) perhaps most importantly, even when presented with the same information, the same images, everybody will experience — will see — something unique and personal, which nobody else can see or maybe even understand.
I’m interested in these ideas both at a low level, regarding our senses and perception, and conceptually at a higher level: how we make meaning and what we consider to be truth; our biases and prejudices; how we’re unable to see ‘the whole truth’, or both sides of a story at the same time; how we interact with each other as a result of this; and its impact on society and politics.
The way our brain constructs a conscious visual percept is very complicated, and not necessarily a one-to-one representation of what is out there. In other words, what we see in our minds is a reconstruction. And this reconstruction is based on who we are. Even though everybody is presented with the exact same images, what you see in this experience is different to what I see. I cannot see what you see, and you cannot see what I see. Furthermore, what we both see is different to what is actually presented. We are both unable to see the ‘entirety’ of the ‘ground truth’, so to speak. And that’s what ties into my higher-level motivations.
Something that’s been troubling me for the past few years is our inability to empathise with those we don’t agree with. There’s a lot of talk about VR being the ultimate empathy machine. It’s true that being immersed in VR is potentially more effective than seeing the same scene in traditional film or photography (but that’s temporary of course; once upon a time film was deemed the ultimate empathy machine, and photography before that). But I think this is a limited view of empathy. I can empathise with you if you break your leg, because if I were to break my leg, it would also hurt. I can put myself — as I currently am — in your position, and imagine that pain. But I think it’s also very important to try to empathise with people who have radically different values — even opposing views — and empathise with their pain.
As a person who believes strongly in the eradication of borders, can I empathise with somebody who voted for Brexit? As a person who despises the increasingly authoritarian government in Turkey — my home country — can I empathise with those who adore it? Similar questions can be asked about the opposing factions in the US and all over Europe. It’s quite popular these days to talk about echo chambers, filter bubbles, fake news, alternative facts etc. These are all real phenomena, but I fear they are being used as some kind of excuse — even scapegoat — for some of the issues we’re having right now. I fear they are disguising what’s at the heart of the problem — which is in no way new — our own unconscious biases, our inability and unwillingness to listen to those whom we radically oppose, to try to understand their actions and feelings from their own point of view. These are difficult issues, but it feels like we’re unable to even have conversations around this right now, because empathy and sympathy get mixed up, and explanations and justifications get mixed up. As a result it feels like we’re seeing more and more social and political polarisation around so many issues, resulting in increasing amounts of hate from all sides. This VR piece was a reaction to all of this.
FIGHT explores these themes using Virtual Reality, Binocular Rivalry and various interaction models inspired by these ideas.
I have no idea what anybody ‘sees’ when they experience this work, even though everyone is presented with the same visuals. Of course one might point out that this is actually the case with everything. When you look at any image, or read any piece of text, or even as you read these very words, I have no idea what they mean to you — but that’s at a semantic level. Here I wanted to try and create something where the conscious visual experience itself is different for everyone.
Everybody literally sees something unique.
i) What we perceive to be real, what we see, is a reconstruction in our minds, a simplified model of the world, limited by our biology and physiology
I don’t know if you know the story of the Australian Jewel Beetle and the beer bottle.
It’s thought that the male identifies females based on three characteristics: i) she has to be brown, ii) covered in dimples, and iii) big. For millions of years these criteria worked for them. However, with the introduction of man-made beer bottles littering the outback — also brown, covered in dimples, and quite large — the beetle almost went extinct. It was only when the beer company changed its bottles that the males went back to mating with females instead of beer bottles. Evolution has equipped this beetle with a very simple perceptual model of the world — very simple, but also relatively cheap, such that it can fit and run in the body of a small insect. Most importantly, until beer bottles came along, this perceptual model served the animal incredibly well for millions of years; it didn’t need anything better.
Another example I like comes from German neurophysiologist Jörg-Peter Ewert, who researched how a toad decides whether a moving stimulus is prey or not.
He observed that a toad will attack a piece of cardboard as long as it looks and moves in a particular way. Namely, it attacks thin strips if they move in the direction of the strip’s long axis (Ewert calls this ‘worm-like’ configuration). And it ignores the same strips if they move sideways (called ‘anti-worm-like’ configuration). And square-ish pieces trigger escape behaviour.
Interestingly, this processing happens in the toad’s eyes. The retinal neurons process the visual stimulus, and transmit higher level information to the rest of the brain. The brain is not privy to the full visual information that enters the eye. It knows whether there is something worm-like or anti-worm-like in its visual field, and it triggers the appropriate behaviour: catch prey or avoid predator.
Clearly, these animals are not equipped to grasp the full reality of the world that they live in. They are only equipped with simple hacks of perception, adaptations to various constraints that have emerged during their evolution. When resources, space, processing power and energy are limited, what’s required is the optimal perceptual model and response, not the most comprehensive one, truest to reality.
Now why do I talk about such lowly animals as a beetle or a toad?
If you subscribe to the notion that God created Man in his image, then these examples are void.
But if, like me, you subscribe to the notion that we are products of an evolutionary random walk guided by natural selection — descent with modification — then we are merely a little bit further down this path than the beetle or toad. While it’s clear that our cognitive abilities are far superior, there’s no reason to believe that we are at the apex of creation, that we are gifted with the ultimate capacity to see, perceive and comprehend everything that the universe has to offer. In fact countless examples could be given of animals whose senses far exceed ours, such as bats, sharks, spiders and octopuses. Who knows what it feels like to be an octopus?
And I find that a very humbling thought: The vision that we perceive of the world, is a simplified model that has been shaped by evolution — a model that has proven to be advantageous for our survival, as we live, hunt, and avoid predators — just like the humble beetle or toad. And there’s no reason for this model to be a fully accurate representation of the universe.
The fact that now we’re able to question and reflect on this very notion — with the tools of science, maths, art, philosophy, culture — is itself quite astonishing, but does not reverse the sentiment. If anything, it underlines it, as we discover more and more ways to hack our own senses and perception.
ii) Perception, including vision, is an active process, it requires action and integration
Our eyes are often likened to cameras, and the act of seeing to taking photographs. As if light falls on our retina, forms an image, and that image gets sent to the brain for processing. While anatomically there are some similarities between the human eye and a camera, that’s not how seeing works.
Since the days of the ancient Greeks, it was believed that we shot rays out of our eyes, and that upon those rays hitting objects, we could see. “The eye obviously has fire within it, for when one is struck this fire flashes out” said Alcmaeon of Croton, c. 450 BCE. Plato added “Such fire has the property, not of burning, but of yielding a gentle light, […] coalesces with the daylight and is formed into a single homogeneous body in a direct line with the eyes, […] strikes upon any object it encounters outside […] passes on the motions of anything it comes in contact with […] throughout the whole body, to the soul and thus causes the sensation we call seeing”. Euclid, Ptolemy, and many other great thinkers elaborated on this idea for centuries, known as the Extramission theory of vision.
Of course, they were wrong. Even back then Democritus, Epicurus, Aristotle and many others believed in an Intromission theory of vision: our eyes don’t emit light; rather, light — or other ‘particles’ — bounces off objects and enters the eye.
However, the Extramission theory does capture and underline one aspect of vision that sometimes gets forgotten in modern interpretations, and something that I really wanted to portray: that seeing is an active process.
While there are roughly 130 million photoreceptors in the human eye, only on the order of 1 million fibres in the optic nerve carry signals to the brain. There’s a considerable amount of preprocessing happening in the human eye itself, not too dissimilar to the toad. Furthermore, about half of this information carries signal from a tiny area on the retina known as the fovea (fovea centralis), only a couple of millimetres in diameter. This is the high-resolution, full-colour section of the eye, covering only about 2 degrees of the field of view — if I hold my arm out at full length, it sees an area about two thumbnails wide. Everything outside of this is low resolution, and mostly black and white. Yet my conscious experience of vision seems to cover an enormous window, more than 180 degrees.
Several times a second, the brain sends messages to the eye muscles to make quick jerky movements, known as ‘saccades’. At the end of each saccade the eye fixates and focuses the fovea on various features of the scene that it’s looking at.
At each fixation, the brain integrates the current information it has about the scene with the movement of the head and the eyes, and with the new visual information from the high-resolution foveal vision and lower-resolution peripheral vision. At each fixation it can very quickly adjust exposure, focus and white balance, giving the illusion — creating a visual perception — of a very large, high-resolution image where everything is in focus, with a dynamic range exceeding 24 f-stops if we were to compare it to a camera.
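To put the figures above side by side, here is a back-of-envelope sketch; the numbers are the approximate values quoted in the text, not precise anatomical measurements:

```python
# Rough ratios from the approximate figures quoted above.
photoreceptors = 130e6       # rods + cones in one eye
optic_nerve_fibres = 1e6     # fibres carrying signal to the brain
fovea_fov_deg = 2.0          # high-resolution foveal field of view
total_fov_deg = 180.0        # rough horizontal extent of the visual field

compression = photoreceptors / optic_nerve_fibres
foveal_fraction = fovea_fov_deg / total_fov_deg

print(f"~{compression:.0f} photoreceptors per optic nerve fibre")      # ~130
print(f"fovea covers ~{foveal_fraction:.1%} of the horizontal field")  # ~1.1%
```

So roughly a 130:1 reduction happens before any signal even leaves the eye, and the sharp, full-colour part of vision covers only around one percent of the horizontal field — the rest is constructed.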
The brain also ‘fills in’ gaps, such as the blind spot that we all have in each eye where the optic nerve leaves the retina (or the many other little blind spots you might have if you’ve played with lasers, as I have).
In fact, during these saccades the flow of visual information to the brain is interrupted, so the brain fills in these temporal gaps as well. For example, in the famous stopped-clock illusion, a type of chronostasis, when your eyes land on a ticking clock, the second hand appears to stand still for longer than a second. This is because during the time it takes for your eyes to saccade over to and fixate on the clock, instead of your conscious visual experience being that of a whizzing motion blur, your brain retroactively fills in the post-saccadic image of the clock.
iii) The actions that we take affect the reality and the meaning that we construct in our minds
Alfred Yarbus, in his seminal research on eye tracking and vision in the 1950s and 1960s, used this apparatus — what appears to be a torture device — and found that the meaning we try to extract from a scene affects the way our eyes scan it: e.g. whether we want to guess the ages of the people, how long the visitor had been away, or what they were doing before the visitor came in. And the way we scan also depends on who we are.
Our eyes constantly, unconsciously scan the world. A lot of this information doesn’t make it to our conscious awareness. Instead, the information is processed and integrated to provide a single coherent model. Philosopher Alva Noë likens the act of seeing to ‘seeing with one’s hands’ — as a blind person might — rather than to a camera.
I was fortunate enough to recently experience the wonderful Door Into The Dark by artist collective Anagram, in which you’re blindfolded, barefoot, and — using only your hands, feet and ears — let loose to explore and find your way through the space. Besides the intended narrative that was embedded into the experience, I was constantly reminded of Alva Noë’s analogy of seeing. I was waving my arms and hands around, ‘saccading’, looking for salient features of the environment, ‘fixating’ on such features once I found them, running my hands up and down the various surfaces or objects which they encountered, constantly trying to build a mental image of what it was that I had come across. And I was thinking “this is exactly what my eyes usually do”, except that I had become so adept at integrating the movement of my eyes with the visual information received that I was oblivious to the process; all that was presented to my conscious experience was a single coherent visual percept of a 3D world. Whereas when trying to see with my hands, I was very conscious of every little movement, and of the integration between my movements and the response from my senses — in this case touch — and how that affected the way I built a mental picture.
iv) Everybody’s experience is unique and personal, nobody else can see or maybe even understand
Binocular Rivalry (BR)
Under normal circumstances, our two eyes receive information about the 3D scene in front of them from slightly different viewpoints, and the brain combines these two signals — along with some other information — to produce a single spatial model of that scene.
However, when each eye is presented with dissimilar monocular images (i.e. images which are more different than the same scene from slightly different viewpoints), the brain has a hard time making sense of these conflicting signals and is unable to combine them into a coherent vision. Instead of ‘seeing’ both images at the same time, these ‘rival’ images fight for perceptual awareness. The conscious mind ‘sees’ only one of the two images, and which image it sees alternates somewhat randomly, switching every few seconds, with unstable, patchy transitions.
E.g. when red vertical lines are presented to the left eye, and blue horizontal lines are presented to the right eye, the conscious visual experience might be similar to the image on the right — but with additional slow waves of movements as fragments from each image swipe across the visual field (this is my personal experience, and will be different for everybody).
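As an aside, rival stimuli of this kind are straightforward to generate programmatically. Below is a minimal sketch in Python with NumPy — the image size, stripe period and function names are my own illustrative choices, not how the piece itself was built (see the Carmel et al. reference for how such stimuli are constructed in research practice):

```python
import numpy as np

def grating(size=256, period=16, vertical=True):
    """Binary square-wave grating: alternating bands of 0s and 1s."""
    stripes = (np.arange(size) // (period // 2)) % 2  # 0/1 bands, half-period wide
    if vertical:
        return np.tile(stripes, (size, 1))            # varies along x: vertical lines
    return np.tile(stripes[:, None], (1, size))       # varies along y: horizontal lines

def rival_pair(size=256, period=16):
    """Left eye: red vertical lines. Right eye: blue horizontal lines.
    Returns two (size, size, 3) float RGB images with values in {0, 1}."""
    left = np.zeros((size, size, 3))
    right = np.zeros((size, size, 3))
    left[..., 0] = grating(size, period, vertical=True)    # red channel only
    right[..., 2] = grating(size, period, vertical=False)  # blue channel only
    return left, right

left, right = rival_pair()
```

Presenting `left` to one eye and `right` to the other (e.g. via a stereoscope or an HMD) is the classic orthogonal-grating rivalry setup: the two images cannot be fused into a single coherent scene, so they compete.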
It’s generally thought that the rivalry is resolved in the early stages of visual processing (i.e. at the ‘eye’ level): e.g. high-contrast, moving images are more likely to be dominant over low-contrast, static images. The relative ‘strength’ of the rival images determines roughly how long each will remain dominant before being suppressed. In fact it’s possible to design images such that one image is permanently dominant while the other is permanently suppressed.
But there is also evidence that rivalry is resolved in higher cortical areas as well (i.e. at the ‘image’ or ‘concept’ level). In fact there is even evidence that associating certain ‘neutral’ images of faces with negative actions causes those particular faces to remain more dominant independent of the visual features of the face.
This phenomenon is fascinating for many reasons.
Even when the stimulus is not moving (i.e. presented with static images), the conscious visual experience is dynamic. The mind sees two images alternating back and forth every few seconds, with patchy transitions swiping across the visual field. Something in the brain is causing the visual perception to oscillate.
This oscillation, and the conscious visual experience in general, is unique to each person, even if everybody is presented with the same rival stimulus. The duration of these perceptual oscillations, and the characteristics of the transitions, depend on the viewer’s physiology. For some people the transitions might be quick, sudden cuts, while for others they might be slow wipes. It is also thought that individuals with autism or bipolar disorder experience much slower rates of perceptual alternation, with longer transitions.
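In the rivalry literature, dominance durations are often described as roughly gamma-distributed, with individual differences showing up as different distribution parameters. The sketch below simulates two hypothetical observers on that assumption — the shape and scale values are invented for illustration, not fitted to any data:

```python
import random

def alternation_sequence(shape, scale, total_time=60.0, seed=0):
    """Simulate which eye's image is dominant over `total_time` seconds.
    Dominance durations are drawn from a gamma distribution — a common
    descriptive model of rivalry; parameter values here are illustrative."""
    rng = random.Random(seed)
    t, eye, events = 0.0, 0, []
    while t < total_time:
        duration = rng.gammavariate(shape, scale)
        events.append((round(t, 2), eye, round(duration, 2)))
        t += duration
        eye = 1 - eye  # perception flips to the other eye's image
    return events

# Two hypothetical observers viewing the same stimulus:
fast = alternation_sequence(shape=3.0, scale=0.8)  # mean dominance ~2.4 s
slow = alternation_sequence(shape=3.0, scale=2.0)  # mean dominance ~6 s
```

Same stimulus, same model, different parameters: the ‘fast’ observer experiences many more flips per minute than the ‘slow’ one, which is one way to think about the individual differences described above.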
Finally, signals from both images are still present somewhere in the brain, even though they aren’t fully elevated to conscious awareness. One of the images (or sections of both) remains in the unconscious. This raises the question of whether such unconscious signals can affect behaviour. And there is indeed evidence that suppressed images — which the viewer is not consciously aware of — can still affect behaviour: e.g. suppressed images of fearful faces can create signs of fear in the viewer. (This is perhaps linked to phenomena such as blindsight, in which a person with damage to the primary visual cortex has no conscious experience of vision, yet is able to respond to visual stimuli — guessing the locations of objects, catching them, or even detecting emotions in images of faces.)
So these are the motivations behind the work. It’s a Virtual Reality experience which slowly evolves through different abstract scenes and interaction models inspired by these ideas.
This is the room where you enter, sit down and put on the headset (Oculus Rift). When the guest is ready, the host presses a button to start the journey, which is a linear timeline nine minutes long.
Below are a few snippets. Needless to say, the VR experience of looking around is quite different to seeing it static as below, also because the interaction actively depends on where you are looking.
I didn’t want to tease people, and wanted to give them a simple experience of rivalry pretty much straight away — so that they know what it is early on and learn to deal with it; then I can get on with the business of doing interesting things. With a static scene, it takes about a minute for the rivalry to reach maximum intensity, so people can explore, look around, and see whether they have any control over dominance vs suppression. This is the section where people usually start cursing quite loudly as they experience this for the first time, around 20–30s into the video above.
The room very slowly rotates in opposite directions for each eye, on a multi-axis rotation, slowly changing colour. During this section, the perceptual experience of the people I’ve spoken to varies dramatically. Some people report giant swipes across their vision, revealing or hiding the left or right sides. One person reported seeing a sharp line down the middle of their vision, with the left side showing the left image, and the right side showing the right. One person reported sudden cuts between the two images. Personally, I usually see a flat background from one image, with a corner from the other image in a circular mask moving across it. Also, looking around often affects the dominance. I might see in my peripheral vision a small circle of blue lines against a backdrop of red, slowly moving across my field of vision. My eyes saccade over to it, and this triggers the circle of blue to start expanding, and one by one more and more blue lines start popping up, suppressing the red. Then this process is inverted as the red starts creeping back in.
It’s also worth mentioning that usually, when we see, the images we see appear to be taking place outside the head. I.e. when looking at the world, it appears that the world is external to us, and we are looking at it through a window — a hole in our head — which is our eyes. Even when looking at a flat 2D image, like a painting, a photograph, or an image on a screen, it still appears to be external; it appears as a flat image on an external medium — such as a canvas or screen. However, in my personal experience at least (and according to the dozen or so people I’ve spoken to about this), in this experience the visual percept doesn’t feel external; it feels physically located inside the head, or inside the eyes to be more specific. Which of course is exactly one of the notions that I wanted to portray: seeing is a phenomenon which happens inside the head; it is not a window onto an objective reality.
It’s also very interesting to play with the moments where this boundary is crossed. E.g. by slowly reducing the amount of rivalry between two images, a percept which felt to be floating inside the head all of a sudden materialises as an external, physical object, as happens about halfway through the video above.
Or, by slowly increasing the rivalry between two images, a normal 3D-looking scene which appears external and physical suddenly seems to be floating inside the mind, as happens about 16–17 seconds into the video below.
In this section, which I call the ‘Penetrating Gaze’, you literally push and deform wherever you look. This is the start of trying to make the act of looking feel like an active process.
Quite a few people have likened this to taking drugs and ‘tripping’. In a sense, this is understandable. Billions of years of evolution went into developing our sensorimotor system — particularly hundreds of millions of years for the development of vision, and the integration of vision with movement and other senses — to produce this coherent, singular conscious experience of an external world that somehow makes ‘sense’ to us. Hallucinogens, such as LSD or Psilocybin ‘magic’ mushrooms, create perceptual distortions. They ‘hack’ these mechanisms that give rise to our conscious perceptual experiences. Interestingly, it’s generally thought that hallucinogens don’t necessarily alter the sensitivity of our senses, but disrupt the brain’s ability to integrate multiple signals and generate a single cohesive percept. So in this sense, I can understand why people liken experiencing FIGHT to ‘tripping’, because it too disrupts the brain’s ability to generate a single cohesive percept of an external world. The likening to drugs or ‘tripping’ is also perhaps due to a lack of vocabulary: we just don’t have enough words in our day-to-day language to describe this kind of perceptual experience, which is alien to most of us. Of course, I’m sure that the imagery I’ve chosen also encourages these ideas.
This section which I refer to as ‘look painting’, is also quite a literal interpretation of the idea that seeing is an active process, but also that the act of moving and looking, is an act of creation. As you move and look around, you’re creating structure, a kind of network. On the network itself there is quite subtle rivalry, both in colour and texture, giving it an unusual shimmering effect.
It also seems that context has an effect on perception and the dominant signal. The background transitions from having very subtle colour rivalry, to quite intense colour, contour and movement rivalry. This has quite a profound effect on the perception of the foreground structure. E.g. while the background is mostly black with very subtle rivalry, I personally perceive the foreground structure as purple — a mix of red and blue (with a shimmering effect coming from the rivalry in both colour and texture). However, when the rivalry of the background intensifies, I personally see thick waves of colour propagate across my vision, affecting which background I see — and along with it, similar waves of colour propagate across the foreground. E.g. some sections of the network appear red against a green background, while other sections appear blue against a purple background. And as these perceptual waves move across my vision, so do the colours along with them. Furthermore, looking around the image creates movement and new structure, and also sends new perceptual waves propagating outwards from points of fixation.
While I was developing this piece, I was initially thinking that I would not share any images or personal experiences of it, so as not to influence anybody’s experience of it. Having now experienced it dozens (maybe hundreds) of times, I realise that no matter how much you know about the phenomenon, or even how many times you experience it, it will always feel unique.
For anyone wondering about technical implementation details: the piece was developed in the Unity game engine (C#), using the inbuilt Oculus Rift integration and a simple hack to allow different objects to be visible to different eyes (i.e. two cameras, and different layers for each eye). The audio uses HRTF binaural spatialisation, with an 8-track score distributed across 8 static virtual sound sources positioned around the centre, plus 3 additional moving virtual sound sources that depend on developments in the scene.
1. Anderson, E., Siegel, E. H., Bliss-Moreau, E. & Barrett, L. F. The Visual Impact of Gossip. Science 332, 1446–1448 (2011).
2. Bar, M. et al. Top-down facilitation of visual recognition. Proc. Natl. Acad. Sci. 103, 449–454 (2006).
3. Blake, R., Brascamp, J. & Heeger, D. J. Can binocular rivalry reveal neural correlates of consciousness? Philos. Trans. R. Soc. Lond. B. Biol. Sci. 369, 20130211 (2014).
4. Blake, R. & Tong, F. Binocular rivalry. Scholarpedia (2008). doi:10.4249/scholarpedia.1578
5. Buzsaki, G. Rhythms of the Brain. (Oxford University Press, 2006).
6. Carmel, D., Arcaro, M., Kastner, S. & Hasson, U. How to create and use binocular rivalry. J. Vis. Exp. 1–8 (2010). doi:10.3791/2030
7. Carter, O. L. et al. Meditation alters perceptual rivalry in Tibetan Buddhist monks (Presentation). Curr. Biol. 15, R412–R413 (2005).
8. Carter, O. L. et al. Meditation alters perceptual rivalry in Tibetan Buddhist monks. Curr. Biol. 15, R412–R413 (2005).
9. Carter, O. Binocular Rivalry Tutorial. (2006). Available at: http://visionlab.harvard.edu/Members/Olivia/tutorialsDemos/Binocular Rivalry Tutorial.pdf.
10. Carter, O. Hallucinogens & Perception Pharmacology of Perception. (2007).
11. Carter, O. L. et al. Modulating the Rate and Rhythmicity of Perceptual Rivalry Alternations with the Mixed 5-HT2A and 5-HT1A Agonist Psilocybin. Neuropsychopharmacology 30, 1154–1162 (2005).
12. Clark, A. Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behav. Brain Sci. 36, 181–204 (2013).
13. Dawkins, R. The Blind Watchmaker. (WW Norton & Company, 1986).
14. Dieter, K. C. & Tadin, D. Understanding Attentional Modulation of Binocular Rivalry: A Framework Based on Biased Competition. Front. Hum. Neurosci. 5, 155 (2011).
15. Durgin, F. H., Tripathy, S. P. & Levi, D. M. On the filling in of the visual blind spot: some rules of thumb. Perception 24, 827–840 (1995).
16. Gelder, B. de. Uncanny Sight in the Blind. Sci. Am. 302, 60–65 (2010).
17. Gershman, S., Vul, E. & Tenenbaum, J. Perceptual multistability as Markov chain Monte Carlo inference. in Advances in Neural Information Processing Systems 611–619 (2009).
18. Gross, C. G. The Fire That Comes from the Eye. The Neuroscientist 5, 58–64 (1999).
19. Hayashi, R. & Tanifuji, M. Which image is in awareness during binocular rivalry? Reading perceptual status from eye movements. J. Vis. 12, 5–5 (2012).
20. Hoffman, D. D. Visual intelligence: How we create what we see. (WW Norton & Company, 2000).
21. Hohwy, J., Roepstorff, A. & Friston, K. Predictive coding explains binocular rivalry: An epistemological review. Cognition 108, 687–701 (2008).
22. Hong, S. W. & Shevell, S. K. The influence of chromatic context on binocular color rivalry: Perception and neural representation. Vision Res. 48, 1074–1083 (2008).
23. Hossieni, H., Fatah, J. M. A., Mohammad, S. & Naby, M. From Hyperion to Photon, a brief survey in the timeline of photon. Vietnam J. Sci. 3, 13–23 (2016).
24. Kanizsa, G. Subjective contours. Sci. Am. 234, 48–52 (1976).
25. Kitaoka, A. Akiyoshi’s illusion pages. Available at: http://www.ritsumei.ac.jp/~akitaoka/index-e.html.
26. Kitaoka, A. & Ashida, H. Phenomenal characteristics of the peripheral drift illusion. Vision 15, 261–262 (2003).
27. Knapen, T. Research Interests (Binocular Rivalry). 2015–2016 (2017). Available at: http://tknapen.net/research.html.
28. Leopold, D. A. & Logothetis, N. K. Multistable phenomena: changing views in perception. Trends Cogn. Sci. 3, 254–264 (1999).
29. Logothetis, N. K. Single units and conscious vision. Philos. Trans. R. Soc. B Biol. Sci. 353, 1801–1818 (1998).
30. Lumer, E. D. Neural Correlates of Perceptual Rivalry in the Human Brain. Science 280, 1930–1934 (1998).
31. Malek, N., Mendoza-Halliday, D. & Martinez-Trujillo, J. Binocular rivalry of spiral and linear moving random dot patterns in human observers. J. Vis. 12, 16–16 (2012).
32. Maruya, K., Yang, E. & Blake, R. Voluntary action influences visual competition. Psychol. Sci. 18, 1090–1098 (2007).
33. Noë, A. Action in Perception. (MIT Press, 2004).
34. O’Regan, J. K. & Noë, A. A sensorimotor account of vision and visual consciousness. Behav. Brain Sci. 24, 939–973 (2001).
35. O’Shea, R. P., Parker, A., La Rooy, D. & Alais, D. Monocular rivalry exhibits three hallmarks of binocular rivalry: Evidence for common processes. Vision Res. 49, 671–681 (2009).
36. Paris, R., Blake, R. & Bodenheimer, B. A Pilot Study on Binocular Rivalry and Motion Using Virtual Reality. 18, 4503 (2014).
37. Pelekanos, V., Roumani, D. & Moutoussis, K. The effects of categorical and linguistic adaptation on binocular rivalry initial dominance. Front. Hum. Neurosci. 5, 1–8 (2012).
38. Pelphrey, K. A. et al. Visual scanning of faces in autism. J. Autism Dev. Disord. 32, 249–261 (2002).
39. Penny, W. Bayesian Models of Brain and Behaviour. Int. Sch. Res. Not. 2012, e785791 (2012).
40. Robertson, C. E., Kravitz, D. J., Freyberg, J., Baron-Cohen, S. & Baker, C. I. Slower Rate of Binocular Rivalry in Autism. J. Neurosci. 33, 16983–16991 (2013).
41. Robertson, C. E., Ratai, E. M. & Kanwisher, N. Reduced GABAergic Action in the Autistic Brain. Curr. Biol. 26, 80–85 (2016).
42. Sacks, O. A Neurologist’s Notebook: The Mind’s Eye. What the blind see. The New Yorker 48–59 (2003).
43. Sacks, O. A Neurologist’s Notebook: To See And Not See. The New Yorker (1993).
44. Schultz, W. & Dickinson, A. Neuronal Coding of Prediction Errors. Annu. Rev. Neurosci. 23, 473–500 (2000).
45. Scroggins, M. Binocular Rivalry and Luster. Available at: https://michaelscroggins.wordpress.com/explorations-in-stereoscopic-imaging/retinal-rivalry-and-luster/.
46. Tong, F. Competing Theories of Binocular Rivalry: A Possible Resolution. Brain Mind 2, 55–83 (2001).
47. Tong, F., Nakayama, K. & Vaughan, J. T. Binocular Rivalry and Visual Awareness in Human Extrastriate Cortex. Neuron 21, 753–759 (1998).
48. Tong, F., Meng, M. & Blake, R. Neural bases of binocular rivalry. Trends Cogn. Sci. 10, 502–511 (2006).
49. Wilson, H. R., Blake, R. & Lee, S. H. Dynamics of travelling waves in visual perception. Nature 412, 907–910 (2001).
50. Winer, G. A., Cottrell, J. E., Gregg, V., Fournier, J. S. & Bica, L. A. Fundamentally misunderstanding visual perception. Adults’ belief in visual emissions. Am. Psychol. 57, 417–424 (2002).
51. Yarbus, A. L. Eye movements and vision. (1967).
52. Binocular Rivalry — Wikipedia. Wikipedia Available at: https://en.wikipedia.org/wiki/Binocular_rivalry.