Is Machine Learning Ready to Scale?
Yuri Barzov

We don’t need to process the full image of a cat from the real world in order to classify it as a cat and to generate its model, because our brain can generalise. It has a general model — a kind of template — of a cat stored in its memory, and it can upload that template into its dynamic model of the world whenever it recognises the cat’s pattern in sensory input.

Researchers from Radboud University in the Netherlands recently demonstrated, using dynamic fMRI scans of the human primary visual cortex (V1), how our brain visualises its dynamic model of the real world: “We found that flashing only the starting point of a moving dot sequence triggered an activity wave in V1 that recreates the full stimulus sequence. This anticipatory activity wave was temporally compressed compared to the actual stimulus sequence and was present even when attention was diverted from the stimulus sequence. This preplay activity may reflect an automatic prediction mechanism for visual sequences.”
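The preplay effect can be caricatured in code. The sketch below is a loose analogy, not the researchers’ model: a remembered sequence (the stored “template”) is replayed in full as soon as its starting element is recognised, just as flashing only the first dot triggered the whole anticipatory wave. The sequence values and function name are invented for illustration.

```python
# Hypothetical stored motion template: dot positions along a remembered path.
STORED_SEQUENCE = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]

def preplay(stimulus_start, memory=STORED_SEQUENCE):
    """If the stimulus matches the start of a remembered sequence,
    return the whole sequence — the 'anticipatory wave' analogy.
    Otherwise return only what was actually seen (no prediction)."""
    if memory and stimulus_start == memory[0]:
        return list(memory)  # one cue recalls the full stored sequence
    return [stimulus_start]  # unrecognised input: nothing to predict

# Flashing only the starting point "recreates the full stimulus sequence".
print(preplay((0, 0)))
```

The point of the analogy is that prediction here is cheap: recognising a familiar prefix is enough to retrieve the rest, which is far less work than processing every frame of the input.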
