According to the “predictive processing” (PP) account of the brain’s basic mode of operation, there may be a simple explanation for why current neural network setups cannot cope with illusions like this: they are simply not built that way. The PP approach sees the brain as a hierarchical, multi-layer prediction-error minimization system, which is a different setup from the backpropagation-trained networks in use today. Top-down predictions are continuously checked against bottom-up sensory input, and over time the predictions improve — or, in other words, deviate less from the sensory input. When a prediction has proven very fit and stable over time, it may “win” even against sensory input that “proves” it wrong. That is why we fall for optical illusions even when we know we are being “fooled”: the internal model handles 99.99999…% of all other cases too well to be set aside even for a moment.

To make neural networks more human-like, one would have to make them work in a similar way. I think this is a good example of why we still do not model the brain with neural networks; we merely mimic certain functions (e.g. face recognition) very well, sometimes with even better results than humans achieve. But these systems are no more human than the mechanics of a robot walking on its electro-hydraulic legs. Perhaps new computer architectures built on the predictive processing approach will make neural networks work more like the human brain, which could help us understand ourselves, and especially our “malfunctions”, better.

For a comprehensive description of predictive processing theory I recommend two books: “The Predictive Mind” by Jakob Hohwy and “Surfing Uncertainty” by Andy Clark.
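The “strong prediction wins against contradicting input” idea can be sketched as a toy precision-weighted update rule. This is only a hypothetical illustration of the mechanism (the function and parameter names are my own, not taken from the books cited): the belief moves toward the sensory input in proportion to how much confidence (precision) is assigned to that input relative to the prior prediction.

```python
# Toy sketch of one precision-weighted prediction-error minimization step.
# Hypothetical illustration only; names and numbers are invented for clarity.

def update(prediction, sensory, prior_precision, sensory_precision):
    """Revise a top-down prediction using bottom-up sensory input.

    The prediction moves toward the input by an amount proportional to the
    relative precision (confidence) of the input versus the prior."""
    error = sensory - prediction  # bottom-up prediction error
    gain = sensory_precision / (prior_precision + sensory_precision)
    return prediction + gain * error  # revised top-down belief

# A well-proven model (high prior precision) barely budges, even when the
# sensory input contradicts it -- the "illusion" persists.
stable = update(prediction=1.0, sensory=0.0,
                prior_precision=99.0, sensory_precision=1.0)

# A weak model with trusted input defers to the senses instead.
weak = update(prediction=1.0, sensory=0.0,
              prior_precision=1.0, sensory_precision=99.0)

print(stable)  # stays near 1.0: the prediction "wins"
print(weak)    # moves near 0.0: the sensory input wins
```

With a prior that has proven itself across nearly all past cases, a single contradicting observation shifts the belief only marginally, which mirrors why knowing about an illusion does not make it go away.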