Why AlphaGo Zero is a Quantum Leap Forward in Deep Learning
Carlos E. Perez

Both notions of “incomprehensibility” would seem tantamount to the notion of irreducibility in complex systems. In other words, sometimes there’s no decomposition into simpler terms we can understand (or equivalently, we must actually execute the program to see what it does). In fact, I’d say that “sometimes” is more like “almost always” when you consider the scope of possibilities within either abstract or physical systems. Human consciousness and general intelligence just might be irreducible in this sense, and perhaps it’s human hubris to believe we can reduce them to terms we can grasp.
