The pursuit of true artificial intelligence, and the argument from physics
“God does not play dice,” said the man with the wackiest hairdo of his time, who was also one of the greatest scientists of the modern era: Albert Einstein. He did not mean, though, that there is a God, and he later clarified: “I do not believe in a personal God and I have never denied this but have expressed it clearly. If something is in me which can be called religious then it is the unbounded admiration for the structure of the world so far as our science can reveal it.” Stephen Hawking explained this further in one of his lectures: Einstein “was very unhappy about this apparent randomness in nature. His views were summed up in his famous phrase, ‘God does not play dice’. He seemed to have felt that the uncertainty was only provisional: but that there was an underlying reality, in which particles would have well defined positions and speeds, and would evolve according to deterministic laws”. Later Hawking concludes, “it seems Einstein was doubly wrong when he said, God does not play dice. Not only does God definitely play dice, but He sometimes confuses us by throwing them where they can’t be seen”.
In 1960, Nobel laureate Eugene Wigner published an article, “The Unreasonable Effectiveness of Mathematics in the Natural Sciences”. In it, he observed that mathematical concepts have applicability far beyond the context in which they were originally developed. Based on his experience, he says “it is important to point out that the mathematical formulation of the physicist’s often crude experience leads in an uncanny number of cases to an amazingly accurate description of a large class of phenomena”. An example is Maxwell’s equations, derived to model the elementary electrical and magnetic phenomena known as of the mid-19th century. These same equations also describe radio waves, discovered by David Edward Hughes in 1879. There are many similar examples of the same mathematical equations describing different natural phenomena (Poisson’s equation, the diffusion equation). Wigner sums up his argument by saying that “the enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious and that there is no rational explanation for it”.
So, in summary: a) it is not self-evident that “laws of nature” should exist at all, let alone be expressible as mathematical equations; moreover, the same underlying equations are observed across multiple natural phenomena; b) nature was believed to be deterministic until quantum theory, and later black holes, upended that view (although Einstein did propose a hidden-variable theory to explain away the apparent randomness in nature); and c) as far as we currently understand, the future of the universe is not completely determined by the laws of science and its present state. One can calculate probabilities but cannot make definite predictions.
A logical extension of the above is that our vast universe predominantly operates on the basis of mathematical equations (or perhaps just one large equation), with elements of randomness embedded at the quantum level, which is why the laws of science appear deterministic to a very good approximation. This leads us to the main topic: the pursuit of true Artificial Intelligence.
Machine learning (ML), the core of AI, is essentially a process for determining the underlying equation or function driving anything: identifying a cat or Donald Trump in a photo, recognising that it is Kurt Cobain playing just by listening to the music, or deciding whether an attempted credit card transaction is fraudulent. In each case, ML establishes the underlying equation. Considering that the universe is largely based on mathematical equations, it is fair to infer that human intelligence also works on the basis of mathematics. Therefore, if we feed all possible data to an extremely powerful computer running a solid learning algorithm, we could in theory replicate human intelligence to a good approximation by determining the underlying equation(s).
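To make the “finding the underlying equation” idea concrete, here is a minimal sketch (my own illustrative example, not from any particular ML library): given only sample points drawn from a hidden linear equation, gradient descent recovers its coefficients.

```python
# Learning the hidden equation y = 3x + 2 from data points alone,
# using gradient descent on a linear model w*x + b.
# The equation, learning rate and iteration count are illustrative choices.

data = [(x, 3 * x + 2) for x in range(-10, 11)]  # samples of the hidden equation

w, b = 0.0, 0.0   # model parameters, initially wrong
lr = 0.005        # learning rate

for _ in range(5000):
    # gradients of mean squared error with respect to w and b
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * gw
    b -= lr * gb

print(round(w, 2), round(b, 2))  # recovers roughly 3.0 and 2.0
```

Deep learning works on the same principle, just with vastly more parameters and non-linear functions stacked in layers.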
Now, we already have a very good learning algorithm, deep learning, which is loosely inspired by how the brain works. It has given excellent results on a variety of problems, including image processing and natural language processing (NLP). But while we continue to improve on the algorithm front, we are still far from hardware that can match the human brain’s capability: in raw parallelism and energy efficiency, the brain still outstrips the fastest supercomputer on Earth.
“Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical,” said the fabulous Richard Feynman (if you don’t know about him, you must read “Surely You’re Joking, Mr. Feynman!” and you will know why I used that adjective). Quantum computing is a concept built on two principles of quantum physics, superposition and entanglement. It takes advantage of the ability of subatomic particles to exist in more than one state at any time.
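To give a flavour of superposition, here is a minimal sketch in plain Python (no quantum libraries; the function names are my own illustrative ones): a qubit modelled as a two-amplitude state vector, put into an equal superposition by a Hadamard gate, so that measurements split roughly 50/50.

```python
import math, random

def hadamard(state):
    # the Hadamard gate mixes the |0> and |1> amplitudes equally
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def measure(state):
    # probability of reading 0 is |amplitude_0|^2; measurement is random
    p0 = abs(state[0]) ** 2
    return 0 if random.random() < p0 else 1

qubit = [1 + 0j, 0 + 0j]   # starts in the definite state |0>
qubit = hadamard(qubit)    # now in superposition (|0> + |1>) / sqrt(2)

random.seed(0)
counts = [0, 0]
for _ in range(10000):
    counts[measure(qubit)] += 1
print(counts)  # roughly [5000, 5000]
```

A real quantum computer does not simulate these amplitudes; the physics computes them directly, which is exactly the advantage Feynman was pointing at.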
Photo source: IBM Research
At present, Google is leading the way and has unveiled Bristlecone, a new quantum computing chip with 72 qubits. Google claims the new chip can achieve “quantum supremacy” in the next few months. Quantum supremacy is the point at which a quantum computer can do calculations beyond the reach of today’s fastest supercomputers. IBM has built a 50-qubit computer prototype and also has a 7-qubit computer available on the cloud for academic research. Google has also released a software toolkit, Cirq, that makes it possible for anyone to write a quantum algorithm without needing a background in quantum physics.
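A back-of-the-envelope calculation shows why 72 qubits puts supremacy within reach: simulating n qubits classically means storing 2^n complex amplitudes, and that memory requirement explodes past anything a supercomputer can hold. (The 16-bytes-per-amplitude figure assumes standard double-precision complex numbers.)

```python
def state_vector_bytes(n):
    # each of the 2^n complex amplitudes takes 16 bytes (two 64-bit floats)
    return 2 ** n * 16

print(state_vector_bytes(50) // 2 ** 50)  # 16 pebibytes: at the edge of feasibility
print(state_vector_bytes(72) // 2 ** 70)  # 64 zebibytes: far beyond any machine on Earth
```

Each additional qubit doubles the memory needed, which is why the jump from 50 to 72 qubits matters so much.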
There is a lot of investment and talent going into quantum computing research. IBM already has a long list of paying customers, including JPMorgan Chase, Daimler AG, Samsung, JSR Corporation, Barclays, Hitachi Metals, Honda and Nagase, who will access its 20-qubit computer. So it may not be very far off before we have hardware good enough to match the power of the human brain.
And that is when we would have, in a practical sense, the possibility of achieving true AI. Till then, we will keep improving incrementally, by enhancing the learning algorithms and by shrinking the size of transistors.