Motivating the Fourier Series Through the Wave Equation

Kensei S.
Aug 19, 2023


In my previous article I derived the (separable) solutions to the wave equation using separation of variables, but we can go further by imposing initial conditions. Doing so, however, requires an understanding of the Fourier series, as we will see in this article.

Initial Conditions

To see where the Fourier series comes into play, we will first define our initial conditions. Since the wave equation is second order in time, we need two initial conditions: one for u(x, 0) and one for uₜ(x, 0). Since these depend only on x, we can define a single-variable function for each:

u(x, 0) = f(x),  uₜ(x, 0) = g(x)

Initial conditions for the wave equation

Here f(x) describes the initial shape of the wave and g(x) describes the initial velocity of the wave. Now, to refresh your memory, the general solution to the wave equation (with certain boundary conditions) that I derived in my last post is shown below:

u(x, t) = Σₙ [Aₙ cos(nπct/l) + Bₙ sin(nπct/l)] sin(nπx/l)  (sum over n = 1, 2, 3, …)

Using this formula, we can set t = 0 to get an expression for f(x), and we can differentiate it with respect to t and then set t = 0 to get an expression for g(x). This gives us two equations that relate the constants Aₙ and Bₙ to the initial conditions f(x) and g(x):

f(x) = u(x, 0) = Σₙ Aₙ sin(nπx/l)

g(x) = uₜ(x, 0) = Σₙ (nπc/l) Bₙ sin(nπx/l)

Hence, we have two separate equations that determine the constants Aₙ and Bₙ based on our initial conditions. This is where the Fourier series becomes important.

According to Wolfram MathWorld, “A Fourier series is an expansion of a periodic function in terms of an infinite sum of sines and cosines”. If we look at the expressions above, they are precisely an infinite sum of sine functions, each term with its own coefficient. Now, you may be thinking, “but f(x) and g(x) aren’t always periodic, right?” It turns out we can make them periodic even if they aren’t, as long as we only care about the interval from x = 0 to x = l.

Here’s what I mean. Suppose we have some function of x (such as u(x, 0) = f(x)) and some interval within which our wave vibrates. Since we are only interested in the shape of this function within the interval, we don’t care how it behaves outside of it. So we can construct our desired shape within the interval and essentially copy and paste it across the space to the left and right of the interval, which gives us the required periodic behavior. Therefore, even if the function isn’t periodic on the interval, we can extend it so that it is periodic overall. This means that within a given interval, we can construct practically any function just by adding up an infinite number of sinusoidal functions. While the full story is a bit more involved and this explanation is admittedly hand-wavy, hopefully you can see the intuition behind it.
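To make the copy-and-paste idea concrete, here is a minimal numpy sketch (the function names are my own; the article itself contains no code). One detail the hand-wavy picture glosses over: for a sine series specifically, the natural extension is the odd periodic extension, reflecting f about x = 0 before repeating with period 2l, which is what sums of sin(nπx/l) automatically produce.

```python
import numpy as np

def odd_periodic_extension(f, l):
    """Extend f, defined on [0, l], to an odd function of period 2l.

    This is the extension implicitly built by a Fourier sine series:
    every sin(n*pi*x/l) is odd about x = 0 and 2l-periodic.
    """
    def f_ext(x):
        x = np.asarray(x, dtype=float)
        # Map x into the fundamental interval [-l, l)
        xm = ((x + l) % (2 * l)) - l
        # Odd reflection: f_ext(-x) = -f_ext(x)
        return np.where(xm >= 0, f(xm), -f(np.abs(xm)))
    return f_ext

l = 1.0
f = lambda x: x * (l - x)          # a hypothetical initial shape on [0, l]
f_ext = odd_periodic_extension(f, l)
```

With this, f_ext agrees with f on [0, l], equals −f(−x) on [−l, 0), and repeats with period 2l everywhere else.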

Fourier Coefficients

Now that we know the expressions above are just Fourier series, we can figure out how to determine the coefficients, also known as the Fourier coefficients. In this post I will focus on the equation with the Aₙ coefficients, but the same process works for the Bₙ. To determine the Fourier coefficients, we can use a technique sometimes called “Fourier’s trick”: multiply both sides of the series expansion for f(x) by sin(n’πx/l), where n’ ∈ ℤ⁺, and then integrate over the interval as shown below. Why we are doing this will become clear soon.

∫₀ˡ f(x) sin(n’πx/l) dx = ∫₀ˡ [Σₙ Aₙ sin(nπx/l)] sin(n’πx/l) dx = Σₙ Aₙ ∫₀ˡ sin(nπx/l) sin(n’πx/l) dx

Note that I interchanged the integral and summation signs in the last step above. This is not always valid, but for the sake of this post I will assume the interchange is justified. Now, focusing on the integral on the right, we can evaluate it using the product-to-sum identity sin A sin B = ½[cos(A − B) − cos(A + B)]:

∫₀ˡ sin(nπx/l) sin(n’πx/l) dx = ½ ∫₀ˡ [cos((n − n’)πx/l) − cos((n + n’)πx/l)] dx
= ½ [l/((n − n’)π) · sin((n − n’)πx/l) − l/((n + n’)π) · sin((n + n’)πx/l)] evaluated from 0 to l
= l/(2(n − n’)π) · sin((n − n’)π) − l/(2(n + n’)π) · sin((n + n’)π)

Now, since both n and n’ are integers, (n − n’) and (n + n’) must also be integers, so sin((n − n’)π) and sin((n + n’)π) are always equal to 0, and hence the entire expression seems to vanish. While it may look like the important Aₙ has just disappeared, it turns out the integral does not always equal zero. So where did we go wrong? We inadvertently divided by zero when antidifferentiating the cos((n − n’)πx/l) term, since (n − n’)π equals zero in the case n = n’. Therefore, in every other case the integral does equal zero as shown above, but when n = n’ we need to redo the integral. Going back to the point before we integrated and substituting n = n’:

∫₀ˡ sin²(n’πx/l) dx = ½ ∫₀ˡ [1 − cos(2n’πx/l)] dx = ½ [x − l/(2n’π) · sin(2n’πx/l)] evaluated from 0 to l = l/2

So the only surviving term in the sum is the one with n = n’, leaving

∫₀ˡ f(x) sin(nπx/l) dx = Aₙ · (l/2)

And hence, when n = n’, the integral actually equals a value involving the coefficient we are interested in. Rearranging gives the following:

Aₙ = (2/l) ∫₀ˡ f(x) sin(nπx/l) dx

General formula for the Fourier coefficients

This is precisely the expression for calculating the Fourier coefficients of the Fourier sine series. Given some initial shape function f(x), we multiply it by sin(nπx/l), integrate over the interval, and multiply by 2/l, which gives the general formula for the nth Fourier coefficient. For example, if we let f(x) = 1, you can perform the integral and find that the Fourier coefficients are Aₙ = 2(1 − (−1)ⁿ)/(nπ). I have plugged these into the Fourier series and plotted them on Desmos below:

Notice that we get a practically straight line at y = 1 within the given interval, which is very close to the shape of f(x). While I only summed up to n = 100000 in this image, if you were to sum to infinity, you would get precisely f(x) at every interior point (it’s a bit more complicated near the endpoints if you consider the Gibbs phenomenon, but we will ignore that for now).
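As a sanity check on the f(x) = 1 example, here is a short Python sketch (my own illustration, not the author’s Desmos graph) that evaluates the coefficients Aₙ = 2(1 − (−1)ⁿ)/(nπ) and sums a large but finite number of terms at a few interior points:

```python
import numpy as np

l = 1.0
N = 2001  # number of terms in the partial sum

def a_n(n):
    # Fourier sine coefficients of f(x) = 1 on [0, l]:
    # A_n = (2/l) * integral_0^l sin(n pi x / l) dx = 2 (1 - (-1)^n) / (n pi)
    return 2.0 * (1 - (-1) ** n) / (n * np.pi)

def partial_sum(x, N):
    # Evaluate sum_{n=1}^{N} A_n sin(n pi x / l) at a single point x
    n = np.arange(1, N + 1)
    return np.sum(a_n(n) * np.sin(n * np.pi * x / l))

for x in (0.25, 0.5, 0.75):
    print(x, partial_sum(x, N))   # each value should be close to 1
```

Increasing N pushes the partial sums closer to 1 in the interior, while the overshoot near x = 0 and x = l (the Gibbs phenomenon) never fully disappears.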

I hope you can see the power of the Fourier series: an infinite sum of sinusoidal functions can reproduce practically any function you wish on a given interval. While in this article I only went through the Fourier sine series, there also exists the Fourier cosine series (which is very similar) and a complex Fourier series that uses exp(inπx/l) in place of sin(nπx/l).

Orthogonality of Fourier Series

While our job is technically done here, I would like to discuss another way of looking at the Fourier series using terminology from linear algebra. Yes, linear algebra, even though we have not mentioned vectors or matrices a single time in this article. As you will soon see, the Fourier series has a deep connection with linear algebra: its sine terms form a complete basis of orthogonal functions.

To break down what this means, recall that the Fourier series is a linear combination of sinusoidal functions. With the sine series, for example, it is a linear combination of sin(πx/l), sin(2πx/l), sin(3πx/l), and so on, each with a corresponding coefficient. It turns out that any two distinct terms from this set of functions are orthogonal to each other, meaning their inner product is zero. We don’t even need to prove this, since we unknowingly already did when we showed that the integral is zero when n ≠ n’. However, when you take the inner product of a term with itself, you get a nonzero value (l/2), as we also showed earlier. At this point you might see where the linear algebra becomes relevant, since this is reminiscent of orthogonal basis vectors: the î, ĵ, k̂ unit vectors are orthogonal to each other, just like the distinct sine terms.
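This orthogonality is easy to verify numerically. The sketch below (my own; it uses a simple midpoint-rule quadrature) approximates the inner product ∫₀ˡ sin(nπx/l) sin(mπx/l) dx for distinct and equal mode numbers:

```python
import numpy as np

l = 1.0

def inner(n, m, num=100000):
    # Approximate <sin(n pi x/l), sin(m pi x/l)> = integral_0^l sin(n pi x/l) sin(m pi x/l) dx
    # using the composite midpoint rule with `num` subintervals.
    dx = l / num
    x = np.linspace(0.0, l, num, endpoint=False) + dx / 2
    return np.sum(np.sin(n * np.pi * x / l) * np.sin(m * np.pi * x / l)) * dx

print(inner(2, 5))  # distinct modes: approximately 0
print(inner(3, 3))  # same mode: approximately l/2
```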

Additionally, the set of functions in the Fourier series is also complete, meaning that any other function (under certain conditions) can be constructed from a linear combination of the set. Comparing this to vectors, it is exactly analogous to how the î, ĵ, k̂ unit vectors form an orthonormal basis that can produce any vector in ℝ³ through a linear combination (i.e. they span ℝ³).
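The analogy can even be written in code: extracting a vector’s components with dot products follows the same pattern as extracting Fourier coefficients with inner products. A hypothetical Python illustration:

```python
import numpy as np

# An orthogonal (here orthonormal) basis of R^3, like the i-hat/j-hat/k-hat example
basis = [np.array([1.0, 0.0, 0.0]),
         np.array([0.0, 1.0, 0.0]),
         np.array([0.0, 0.0, 1.0])]

v = np.array([2.0, -1.0, 3.0])

# Each component is an inner product with a basis vector divided by that
# vector's squared norm -- the same pattern as A_n = <f, sin_n> / <sin_n, sin_n>.
coeffs = [np.dot(v, e) / np.dot(e, e) for e in basis]

# Summing coefficient * basis vector rebuilds v, just as summing
# A_n sin(n pi x / l) rebuilds f(x).
reconstructed = sum(c * e for c, e in zip(coeffs, basis))
print(coeffs)          # [2.0, -1.0, 3.0]
print(reconstructed)   # equals v
```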

Comparing the orthogonality and completeness of the terms in the Fourier series to the analogous notions in linear algebra is quite illuminating in my opinion, as it provides a more intuitive way of understanding the Fourier series than just a result of several integrals. There is definitely still a lot more to say about the Fourier series and its linear-algebraic properties, but hopefully this article explains how the wave equation (and other PDEs like the heat equation, which is what Fourier was studying when he originally came across his famous series) leads to the notion of the Fourier series, and how the series really works. Thank you for reading.

References

Fourier Series. (2023). Wolfram MathWorld; Wolfram Research, Inc. https://mathworld.wolfram.com/FourierSeries.html

Strauss, W. A. (n.d.). Partial Differential Equations: An Introduction, 2nd edition. http://debracollege.dspaces.org/bitstream/123456789/414/1/PDE%2C%20WALTER%20A.%20STRAUSS.pdf
