# Black Holes, Consciousness, and Deep Learning

## Neural Networks and the Holographic Principle

Why are renowned and respected physicists like Leonard Susskind consulting for tech giants like Google on artificial intelligence and machine learning? What do gravitational physics and string theory have to do with AI and machine learning? Is it just because physicists are good at math, or is there some deeper connection between the theoretical physics of black holes and neural networks? Is the link between a gravitational singularity like a black hole and the technological singularity of a sentient, self-aware AI deeper than the coincidence of both being named “the Singularity”? In this article we will address these questions and how they might help us better understand the “*black-box issue*” so infamous in machine learning.

## Deep Learning and Neural Networks

Deep neural networks have seen a recent explosion in applications to nearly every aspect of our lives. They model our financial markets with remarkable accuracy, driving investments and algorithmic stock trading. They classify images, performing facial recognition even on partial images. They write their own code and improve ours. They create webpages and apps for us from hand-drawn sketches.

There are AI agents that can see through walls and predict the position and movement of humans by observing the perturbations in WiFi signals.

They drive cars. They assist us in deciding what to watch on Netflix and YouTube. They serve as therapists and doctors to us, assisting in medical diagnoses, new drug discovery, and treatment analysis.

Google has published research suggesting that AI can match or outperform human doctors on certain diagnostic and treatment tasks.

They have conversations with us and serve as friends. They create music. They create anime. They paint.

They create false videos, images, and voice recordings of real people in deep fakes, enabling social engineering en masse, with both positive and negative outcomes.

Using the same technology, they can clone our voices from only five seconds of audio, allowing those of us who are camera shy to create flawless podcasts, YouTube videos, and lectures.

They perform music concerts for us, and millions of us attend.

They play video games better than any human ever has. They perform sentiment analysis of our texts, comments, and the things we say, and can even flag schizophrenic episodes and other mental illnesses from two minutes of smartphone video. AI can detect and interpret human emotions. They track our spending and recommend products to us on Amazon. Neural networks predict traffic patterns and offer us the quickest route, improving our commutes and affecting the price of an Uber or Lyft; in fact, the D-Wave quantum computer was recently used to tackle this particular problem. They adapt what our search engines show us based on our personal data. They help in job placement and hiring protocols. They are used for national defense. They discover new materials and chemical compounds. They optimize anything that can be optimized and solve physics problems formerly thought intractable or impossible. Using automated theorem proving and natural language processing, they even prove new theorems in mathematics and write academic papers. They learn how to correct errors in quantum computers.

AI aided us in capturing the first images of black holes.

This encounter between machine learning and black holes, however, is not our primary focus.

Here, we will be concerned with understanding deep neural networks in terms of some of the deepest and most fundamental physics known to humans, the physics of quantum gravity.

In this article we will discuss how layers of neural networks can be replaced with “*tensor networks*”, a tool well known to physicists since at least as far back as the early 1970s, when the famous mathematical physicist Sir Roger Penrose first used his graphical tensor notation to describe physics. This is a running theme in many of my articles, but for good reason, as we will see:

We all live in a giant quantum computer that is constantly running quantum machine learning algorithms.

Understanding how we can treat deep neural networks as tensor networks gives a direct connection to quantum computing and “*The Holographic Principle*,” a principle in the theory of quantum gravity and the AdS/CFT correspondence (which we will return to later on). This will not be a light read, but for the technically courageous it will be a rewarding one, helping to explain why deep learning is such an effective tool:

Neural Networks mimic quantum physics and the quantum machine learning algorithms running on the quantum computer we call the universe.

In this article we will also address the “*black-box*” issue so often spoken of in the machine learning community. We will explain what it is, and how it can be mitigated using theoretical physics fundamental to the understanding of black holes, gravity, and quantum physics.

We will also make direct connections to quantum computing and the sexy new topic of *quantum machine learning*. Replacing layers of deep neural networks with tensor networks is only one step away from replacing those same layers with quantum circuits and the methods of quantum neural networks. To top it all off, we will provide references to GitHub tutorials and open-source software that actually put all of this theory into action. We’ll look at how you can train your own quantum neural networks on real quantum computers. The takeaway: the black hole singularity and the technological singularity of sentient AI are very closely related via their mathematical structure. Moreover, this is not technology of the future. It is available now, on actual quantum computers, for the sufficiently curious and mathematically adept to use and explore.

## Black Hole Physics and Tensor Networks

Tensor networks are a visual computational tool used by theoretical physicists for at least 50 years now. They go back to the graphical notation Roger Penrose developed in a physics paper from the early 1970s. Penrose is a well-respected physicist working in areas like gravitational physics and quantum-physics-based theories of consciousness. He has written several books on these subjects and continues to be a prolific and influential thinker.

Penrose’s graphical tensor network notation is currently proving to be one of the most useful tools in machine learning. Tensor networks can replace layers in neural networks, serving as a fundamental component of deep learning models. In fact, Google has built an entire library called TensorNetwork that runs on top of its famous machine learning platform TensorFlow. TensorNetwork allows a user to replace layers of neural networks built in TensorFlow with, well, tensor networks. Without getting into too many of the mathematical details here, tensor networks provide an efficient way to compute operations on tensors such as tensor contractions, which generalize matrix multiplication and the dot products of vectors. They offer significant speed-ups in computation, allowing us to train neural networks much faster and more efficiently. They do this in part by encoding certain kinds of symmetries, reducing the search from all possible solutions to only those solutions respecting the symmetries encoded by the tensor network. To see a basic tutorial I created on using Google’s TensorNetwork library, check out my GitHub repository.
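To make “tensor contraction” concrete, here is a minimal sketch using plain NumPy’s `einsum` rather than the TensorNetwork library itself; the tensor shapes and the three-tensor loop are made up for illustration, but the operation is exactly what a tensor network node diagram encodes: sum over shared indices.

```python
import numpy as np

# A matrix-vector product is the simplest tensor contraction:
# contract the shared index j of A[i, j] and v[j].
A = np.arange(6.0).reshape(2, 3)
v = np.array([1.0, 2.0, 3.0])
mv = np.einsum("ij,j->i", A, v)  # same result as A @ v

# A small "network" of three tensors contracted over shared
# indices (a closed loop, so the result is a single number).
# einsum can optimize the contraction order, which is where
# the efficiency of tensor network methods comes from.
T1 = np.random.rand(4, 5)
T2 = np.random.rand(5, 6)
T3 = np.random.rand(6, 4)
loop = np.einsum("ij,jk,ki->", T1, T2, T3, optimize=True)

print(mv, loop)
```

In Penrose’s graphical notation, each tensor is a node and each shared index is a wire connecting two nodes; the `einsum` subscript string is just that picture written as text.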

Penrose is not the only person to have developed a graphical notation for computations in physics, though. Richard Feynman, widely regarded by fellow physicists as a first-rate genius, developed the famous Feynman diagram, a visual computational tool used in quantum mechanics to describe interactions between particles like electrons and photons. Feynman diagrams serve as an interesting segue into our next topic.

## Quantum Computing and Circuit Diagrams

We see similar diagrams being used in quantum computing to describe the changes qubits undergo when passing through quantum gates. These circuit diagrams are one of the fundamental visual tools used in Qiskit and other IBM quantum computing tools. They show up in Google’s quantum computing language Cirq as well. Practically every academic paper on quantum computing has at least one quantum circuit diagram in it.

It turns out that quantum circuit diagrams, tensor networks, and Feynman diagrams are all essentially the same thing: visual representations of quantum processes. They depict the structure of a process that a quantum system (such as a collection of qubits in a quantum computer) undergoes. What is fascinating is that it is this structure, encoded by the diagrams, that is truly of interest, not the particles or qubits themselves.

The structure of the process itself tells us much more about the problem we are studying than the initial or final states of the qubits or the system of particles.

For a thorough introduction to quantum computing and circuit diagrams, check out Quantum Computation and Quantum Information.
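To see what a circuit diagram actually computes, here is a minimal state-vector simulation in plain NumPy (not Qiskit or Cirq) of the standard two-qubit Bell-state circuit: a Hadamard gate followed by a CNOT. Each gate in the diagram is just a matrix applied to the state vector.

```python
import numpy as np

# Gates as matrices, states as length-2^n complex vectors.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# Two-qubit circuit: H on qubit 0, then CNOT (qubit 0 controls qubit 1).
state = np.array([1, 0, 0, 0], dtype=complex)  # start in |00>
state = np.kron(H, I) @ state                  # H acts on qubit 0 only
state = CNOT @ state                           # entangle the qubits

print(state)  # amplitudes in the order |00>, |01>, |10>, |11>
```

The result is the maximally entangled state (|00⟩ + |11⟩)/√2, which is exactly what reading the circuit diagram left to right promises.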

## Simulating Quantum Circuits with Tensor Networks

As I have discussed in other articles, we can simulate many quantum processes with tensor networks efficiently on classical computers. This runs counter to much of the hype around quantum computers and can easily lead to misunderstandings. There are many things we cannot simulate with a classical computer, many that can be simulated by quantum computers, and in some cases quantum computers do offer significant speed-ups and computational power. That doesn’t mean you need access to a quantum computer to reap the benefits of quantum computing, though. In fact, sometimes it’s simply a matter of rephrasing your problem in the language of quantum computing. It turns out that performing machine learning tasks using tensor networks on specialized hardware like Google’s Tensor Processing Units (TPUs) mimics the quantum machine learning processes in quantum variational circuits and can provide substantial improvements in machine learning performance. As mentioned above, Google’s TensorNetwork library, built on top of TensorFlow, allows users to replace layers of neural networks with tensor networks. Research has also shown that short-range neural networks are remarkably good at learning topological states of matter. And Google has invested in consultants such as the respected physicist Leonard Susskind, a well-known string theorist who works on developing our understanding of quantum gravity.
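A tiny sketch of what “simulating a quantum circuit with a tensor network” means, again in plain NumPy: instead of multiplying big state vectors by big matrices, we write each input qubit and each gate as a small tensor and contract the whole network at once. This reproduces the same Bell-state circuit as before, but as a single tensor contraction.

```python
import numpy as np

zero = np.array([1.0, 0.0])                    # the |0> state of one qubit
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard as a 2x2 tensor
# CNOT as a rank-4 tensor: indices [out0, out1, in0, in1]
CNOT = np.eye(4)[[0, 1, 3, 2]].reshape(2, 2, 2, 2)

# Contract the whole circuit as one tensor network:
# a, b = input wires; c = wire after H; d, e = output wires.
psi = np.einsum("a,b,ca,decb->de", zero, zero, H, CNOT)

print(psi.reshape(4))  # Bell state (|00> + |11>) / sqrt(2)
```

For small circuits this is identical to the matrix picture; the payoff of the tensor network view is that for large, weakly entangled systems a clever contraction order keeps the cost polynomial where the naive state vector would be exponential.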

## Deep Learning, Hamiltonians, and Surface Codes

An excellent example of how deep neural networks can efficiently and accurately model incredibly complex physical systems is the application of deep learning to quantum computing. For example, deep learning agents have shown an incredible ability to solve for Hamiltonians. Applying machine learning to topological quantum computing has shown that it can help researchers working on quantum error correction. For a good introduction to topological error-correcting codes, check out this reference.

One specific example of this is learning topological states of matter, which has applications in materials science and new drug discovery. This can be seen in the zero-shot learning in this Nature article, where a model reaches 97.4% accuracy when tested on topological states it has never seen before.

## Consciousness, Quantum Information Theory, and Holography

Quantum information theory shows up inside neural networks because layers of a neural network can be replaced by tensor networks. This means studying neural networks through the lens of quantum information theory is a good way to understand how much information and complexity can be stored inside a deep neural network. Understanding this helps us understand why neural networks are so good at what they do.
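The quantum-information quantity that makes “how much complexity can this network store” precise is entanglement entropy, which a tensor network’s bond dimension bounds. As a minimal sketch (plain NumPy, illustrative only), here is how to compute it for a two-qubit state via its Schmidt (singular value) decomposition:

```python
import numpy as np

# Entanglement entropy of a two-qubit state via singular values.
# A product state gives 0; more entanglement gives more entropy,
# and a tensor network's bond dimension caps how much it can hold.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)     # (|00> + |11>) / sqrt(2)

# Reshape the state vector into a 2x2 matrix (one index per qubit)
# and take its singular values (the Schmidt coefficients).
s = np.linalg.svd(bell.reshape(2, 2), compute_uv=False)
p = s**2                                        # Schmidt probabilities
entropy = -np.sum(p * np.log2(p))               # in bits

print(entropy)  # 1.0: the maximum for a single pair of qubits
```

A state (or dataset) whose entanglement entropy stays low across cuts is exactly the kind a compact tensor network layer can represent efficiently.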

In the quantum-informational view of gravity, there is a principle known as the holographic principle that relates to the information content of black holes. It describes, using tensor networks, how the information content on the surface of a black hole, i.e. the event horizon, is related to the interior of the black hole. In particular, the information content of any 3-dimensional region of space can be completely encoded on the 2-dimensional surface of that region. Since black holes pack the most information possible into a region of space, they can be thought of as incredibly efficient quantum computers. This idea of information content in a region extends to our universe as well.
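This surface scaling can be made quantitative with the Bekenstein–Hawking formula, S = k_B c³A / (4Għ), which says a black hole’s entropy grows with the *area* A of its horizon, not its volume. A rough back-of-the-envelope in Python for a solar-mass black hole (constants rounded; this is an order-of-magnitude estimate, not a precision calculation):

```python
import math

# Physical constants, SI units (rounded)
G = 6.674e-11        # gravitational constant
c = 2.998e8          # speed of light
hbar = 1.055e-34     # reduced Planck constant
k_B = 1.381e-23     # Boltzmann constant
M_sun = 1.989e30     # one solar mass, kg

# Schwarzschild radius and horizon area of a solar-mass black hole
r_s = 2 * G * M_sun / c**2
A = 4 * math.pi * r_s**2

# Bekenstein-Hawking entropy: proportional to horizon AREA
S = k_B * c**3 * A / (4 * G * hbar)
bits = S / (k_B * math.log(2))   # convert to bits of information

print(f"horizon radius ~ {r_s:.0f} m, information ~ 10^{math.log10(bits):.0f} bits")
```

The horizon comes out to roughly 3 km, holding on the order of 10⁷⁷ bits: vastly more than the same region of ordinary matter could store, which is why the article calls black holes the most efficient information stores in nature.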

Physicists like Leonard Susskind have on more than one occasion stated that we may very well live inside a black hole, and that we may simply be holographic projections from its outer surface.

As it turns out, we all live in a giant quantum computer that is constantly running quantum machine learning algorithms. The physicist Sir Roger Penrose and anesthesiologist Stuart Hameroff believe these quantum computations could be the origin of consciousness itself. They believe tiny fluctuations at the quantum level, with wave function collapses caused by gravitational forces, create moments of proto-consciousness. From this, small structures in our brains called microtubules are able to organize these tiny moments of consciousness that permeate our universe.

In essence, they have formulated a theory that implies every object in the universe, even the universe itself, has some degree of consciousness. Penrose and Hameroff believe our brains are squeezing out enough entanglement in these microtubules to be effective quantum computers with quantum error correction.

Although the theory was widely disputed and criticized for years, there have been recent discoveries supporting their ideas. One issue many took with the theory was whether quantum processes could have any influence at room temperature or at macroscopic scales. However, it was recently discovered that quantum physical processes play a key role in photosynthesis in plants. Another example is evidence that quantum algorithms may be fundamental to certain processes in biology, a fact that could be exploited by biocomputing technologies to realize certain quantum algorithms using currently available technology. So, while this may not yet be a definitive answer to the origin of consciousness, it can certainly serve as inspiration for those pursuing innovation in machine learning, quantum computing, and data processing. It may prove useful to use biophysical processes to perform parts of computations traditionally relegated to the realm of, well, machines.

We may find there is less division between “natural” and “artificial” intelligence than is generally assumed.

If you have ideas to share, questions, or if you are in need of consulting services for quantum machine learning, contact the author via LinkedIn, visit The Singularity website, or check out the GitHub tutorials.