The Eigenvalue Equation: Explained

An Introduction to Quantum Eigenstuffs

Yash
Quantaphy
7 min read · Jul 30, 2022


Schrödinger’s equation is often considered the most important equation in quantum mechanics — just as Newton’s second law takes the spotlight with classical systems.

Conceptually, the Schrödinger equation is the quantum counterpart of Newton’s second law in classical mechanics. Given a set of known initial conditions, Newton’s second law makes a mathematical prediction as to what path a given physical system will take over time. The Schrödinger equation does pretty much the same: it predicts the evolution of a wave function over time.

As for the inevitable questions, “What is a wavefunction?” and “What is Schrödinger’s equation?”, check out the introductions linked below.

The Mathematical Description

I will rush through this one fairly quickly because, well, the maths isn’t all that important. In principle, this is the same equation you will meet in the context of quantum mechanics.
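For reference, the equation in question is the standard eigenvalue equation of linear algebra, written here with the conventional symbol λ for the eigenvalue (the article’s own figure uses other letters later on):

```latex
A\,\vec{x} = \lambda\,\vec{x}
```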

Owing to the arrow on the x, we can make the reasonable assumption that we’re dealing with vectors, i.e., quantities that have both magnitude and direction. These are written as columns of components, here a horizontal and a vertical one.

Now, A can be thought of as a matrix that is applied to the vector x. Applying a matrix to a vector transforms it in some way. So when we apply A to x, we transform x into some other vector t: Ax = t. If t turns out to be a multiple of x, then x is called an eigenvector. Eigen is German for “own” or “characteristic”. More on that in a second.

If you’re unfamiliar with matrices, they’re essentially a way of encoding information with some finite number of rows and columns.

Each “x” represents an element of the matrix; here there are d rows and n columns.

For now, to not get ahead of ourselves, we’ll just keep to 2x2 matrices, i.e., matrices with two rows and two columns. Here’s a quick run-through of matrix-vector products:

An example of how that works:

The vector we start off with gets transformed into some new vector.
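As a concrete sketch, here is a matrix-vector product in NumPy. The numbers are my own illustrative choices, not the ones from the article’s figure:

```python
import numpy as np

# An illustrative 2x2 matrix and a vector (my own choice of numbers).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
x = np.array([1.0, 2.0])

# The matrix-vector product: each row of A is dotted with x.
t = A @ x
print(t)  # [4. 7.]
```

The vector x = (1, 2) lands on the new vector t = (4, 7): a transformation, but not (in this case) a simple stretch.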

For some matrices, however, we can find special vectors that aren’t really transformed into a different vector at all, apart from being stretched or shrunk.

Applying our matrix, A, to this vector yields the same vector stretched by a factor of four. Here, four is called an eigenvalue. When this happens, we say that we have found an eigenvector for our matrix. x is an eigenvector of A.
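To make this concrete, here is a small matrix that has 4 as an eigenvalue. The matrix is an illustrative assumption on my part, not necessarily the one from the article’s figure:

```python
import numpy as np

# Illustrative matrix with eigenvalues 0 and 4 (my own choice).
A = np.array([[2.0, 2.0],
              [2.0, 2.0]])
x = np.array([1.0, 1.0])

# A stretches x by a factor of 4: A @ x equals 4 * x.
print(A @ x)  # [4. 4.]

# NumPy can also find the eigenvalues and eigenvectors directly.
vals, vecs = np.linalg.eig(A)
print(np.sort(vals))  # approximately [0. 4.]
```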

This is why we’re working with what is called the eigenvalue equation. For a given matrix, we can sometimes find a vector that the matrix simply stretches by some factor: the eigenvalue. We don’t need to worry here about how such vectors are found; take me on blind faith.

The Quantum Mechanical Description

If we wanted to answer the question of what’s truly fundamental in this Universe, we’d need to investigate matter and energy on the smallest possible scales. On such scales, reality starts behaving in strange, counterintuitive ways. We can no longer describe reality as being made of individual particles with well-defined properties like position and momentum. Instead, we enter the realm of the quantum, where fundamental indeterminism rules and an entirely new description of how nature works is needed. Those quantum rules doomed Einstein’s greatest dream, a complete, deterministic description of reality, right from the start.

If we lived in an entirely classical, non-quantum Universe, making sense of things would be easy. As we divided matter into smaller and smaller chunks, we would never reach a limit. There would be no fundamental, indivisible building blocks of the Universe. Instead, our world would be made of continuous material: with a proverbial knife, we could always cut things into smaller and smaller chunks.

With quantum physics, new rules are needed, and to describe them, new counterintuitive equations. The idea of an objective reality goes out the window, replaced with notions like probability distributions rather than predictable outcomes, wavefunctions rather than positions and momenta, and Heisenberg uncertainty relations rather than individual properties.

These paradigm shifts necessitated something completely new. Something that helped describe the quantum world. Enter: the eigenvalue equation.

The eigenvalue equation

This is an equation that largely belonged in mind-numbing linear algebra courses until Schrödinger invoked it in his ideas. These concepts are absolutely central to quantum physics, so no short answer can do justice to the situation.

In general, the wavefunction stores all the information available to the observer about a quantum system. Often in discussions of quantum mechanics, the terms eigenstate and wavefunction are used interchangeably. This will make sense in a moment.

Breaking the equation down, Â is a placeholder for an arbitrary operator. In physics, an operator is a map from one space of physical states to another. In simpler words, applying an operator to a wavefunction yields another function. But it goes deeper than just this.

Any observable, i.e., any quantity that can be measured in a physical experiment, is associated with an operator. These operators must yield real eigenvalues, since those are the values that may come up as the result of an experiment. The eigenvalue is what is denoted by k on the right-hand side, but we’ll get to this in a moment. What’s important to note is that applying an operator can be thought of as making a measurement on a quantum system.
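In the finite-dimensional picture, observables correspond to Hermitian matrices, and Hermiticity is what guarantees real eigenvalues. A minimal sketch, with a matrix of my own illustrative choosing:

```python
import numpy as np

# An illustrative 2x2 Hermitian matrix: it equals its own
# conjugate transpose. Hermitian operators have real eigenvalues.
H = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])
assert np.allclose(H, H.conj().T)

vals = np.linalg.eigvalsh(H)  # eigvalsh is meant for Hermitian matrices
print(vals)                   # real numbers, approximately [1. 4.]
```

Even though the matrix has complex entries, every value a “measurement” could return is real.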

For a mathematical introduction to eigenstuffs, I’d recommend 3b1b’s videos on the essence of linear algebra:

Back to operators. Simply put, an operator is a generalization of the concept of a function. Whereas a function is a rule for turning one number into another, an operator is a rule for turning one function into another. For the time-independent Schrödinger equation, the operator of relevance is the Hamiltonian operator (often just called the Hamiltonian, or the big H). The Hamiltonian is the operator that represents the total energy of the system: its first term represents the kinetic energy and its second the potential. You don’t need to worry about what these mean yet.

The Hamiltonian equation for a single nonrelativistic particle in one dimension
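In standard notation, the Hamiltonian for a single nonrelativistic particle of mass m in one dimension is:

```latex
\hat{H} = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + V(x)
```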

And since in the case of Schrödinger’s equation, the eigenvalues are the possible energies that the system can have if it is in a state of well-defined energy, we can rewrite it in the time-independent form to conveniently give us this:
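In symbols, with E the energy eigenvalue and ψ the eigenfunction:

```latex
\hat{H}\,\psi = E\,\psi
```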

We read this as “the Hamiltonian operates on the eigenfunction to yield an energy eigenvalue times the same function”.

The above equation is a type of eigenvalue equation. Applying the operator Ĥ to an eigenfunction yields the energy eigenvalue, E, times that same eigenfunction. But what does this mean?

It is a general principle of quantum mechanics that there is an operator for every physical observable. If the wavefunction that describes a system is an eigenfunction of an operator, then the value of the associated observable is extracted by operating on the wavefunction with that operator. In other words, it is not true that all wavefunctions satisfy the eigenvalue equation; only the eigenfunctions do. These are a characteristic (pun intended) few: the functions that, when the operator is applied, are simply scaled by an eigenvalue.

The value of the observable for the system is then the eigenvalue, and the system is said to be in an eigenstate. In other words, applying an operator to an eigenfunction yields the experimental result of measuring that observable. The time-independent equation states this principle mathematically for the case of energy as the observable. If the wavefunction is not an eigenfunction of the operator, then a measurement will still give an eigenvalue (by definition), but not necessarily the same one for each measurement, and so the system is not said to be in an eigenstate. But what would it mean to get a different energy each time we measure?

Not every wavefunction is an eigenfunction of a given operator, but such a wavefunction can still be written as a superposition of eigenstates. The energy eigenfunctions are especially important because they provide a very convenient way to express the evolution of the system over time. When the Hamiltonian is time-independent, each energy eigenstate evolves in a very simple manner. So a good way to proceed is to write the state of the system as a superposition of energy eigenstates:
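In symbols, with c_n the expansion coefficients and ϕ_n the energy eigenfunctions with energies E_n, the superposition and its time evolution are:

```latex
\psi = \sum_n c_n\,\phi_n,
\qquad
\psi(t) = \sum_n c_n\,e^{-iE_n t/\hbar}\,\phi_n
```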

This reads “the wavefunction is a superposition sum of all eigenstates”.

We do this because, as opposed to the deterministic algebraic product of a matrix and a vector, we have an operator applied to a probabilistic quantum state. So each possible “measurement” might be met with a different eigenvalue.

You should pause to appreciate how powerful a result this is. We just solved the whole problem of the evolution of a wavefunction over time in one go. Really, what it says is that the investment of effort to find the eigenvalues and eigenfunctions, ϕ, is entirely worth it. A complete introduction to this would have the length of a small textbook, but I hope this is helpful.
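As a closing sketch of why that effort pays off, here is a minimal numerical example, entirely my own construction: discretize the Hamiltonian of a particle in a box of length 1 on a grid (in units where ħ = m = 1), diagonalize it, and compare against the exact box energies n²π²/2.

```python
import numpy as np

# Grid for a particle in a box on (0, 1); the wavefunction vanishes
# at the walls, so only the N interior points are kept.
N = 500
dx = 1.0 / (N + 1)

# Hamiltonian H = -(1/2) d^2/dx^2 (V = 0 inside the box), built from
# the second-difference stencil, in units where hbar = m = 1.
H = (np.diag(np.full(N, 1.0 / dx**2))
     + np.diag(np.full(N - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(N - 1, -0.5 / dx**2), -1))

# Solving the eigenvalue equation H phi = E phi in one go.
E, phi = np.linalg.eigh(H)

exact = 0.5 * np.pi**2 * np.arange(1, 4) ** 2  # n^2 pi^2 / 2
print(E[:3])   # numerically close to the exact box energies
print(exact)
```

Each column of phi is an energy eigenfunction sampled on the grid, and once the pairs (E, ϕ) are in hand, the time evolution of any superposition built from them follows immediately.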

Linked here is another great introduction to the eigenvalue equation. If this piqued your interest, check it out.

Feel free to tear me apart if I have made any errors! With that, I will end this here. Thank you for reading!


Yash
Quantaphy

Physics undergraduate | Top Writer in Space, Science, and Education