Solving the One Dimensional Wave Equation Using Separation of Variables

Kensei
7 min read · Aug 17, 2023


Following my previous post, where I derived the one dimensional wave equation, I will now attempt to find a solution using the method of separation of variables. If you have not read my previous post, you can read it here:

While there are other methods of solving the wave equation that lead to different-looking forms of the solution (for example, look up d’Alembert’s formula), I will be focusing on the method of separation of variables, which is the approach most often used by physicists.

Separation of Variables

To begin, it is important to set up boundary conditions, as they are necessary for solving partial differential equations (initial conditions are actually required too, but I will leave that for a future post). Let’s say we have boundary conditions at x = 0 and x = l so that the displacement at those two points is always 0. In other words, u(0, t) = u(l, t) = 0. This is analogous to a string fixed at two points and left to vibrate, just like how guitar strings are vibrating strings fixed at two points.

Now we can start solving the wave equation. The first step in the method of separation of variables requires us to make an important assumption: that our solutions are ‘separable’. What separable means is that the solution can be split up as a product of a function of x alone and a function of t alone:
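
u(x, t) = X(x)T(t)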

Where X(x) and T(t) are functions to be determined. What this allows us to do is differentiate u(x, t) with respect to x or t just by differentiating the corresponding function and treating the other one as a constant. Therefore, if we plug this into the wave equation ∂²u/∂t² = c²∂²u/∂x² and rearrange, we get the following:
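
X(x)T’’(t) = c²X’’(x)T(t)

which, after dividing both sides by c²X(x)T(t), becomes

T’’(t)/(c²T(t)) = X’’(x)/X(x)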

Now, if we focus on this last equation, it can be seen that the left hand side is a function that depends only on t (remember c is just a constant) and the right hand side is a function that depends only on x. For these two sides to be equal for all possible values of x and t, they must both be constant: if they weren’t, we could hold the left side fixed at a certain value of t and change the value of x on the right hand side, which would break the equality. Hence, both of them must be equal to some constant k:
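
T’’(t)/(c²T(t)) = X’’(x)/X(x) = k

or equivalently, as two separate equations,

T’’(t) = kc²T(t),  X’’(x) = kX(x)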

Notice how we just transformed a single partial differential equation into two separate ordinary differential equations, which are significantly easier to solve. However, before actually moving on, it is important to think about whether this constant k is positive, negative, or even zero, as each possibility will affect the outcome of our solution. To spoil the answer, it is supposed to be negative, but this is definitely not obvious at first glance, so I will go through the other two options to see why they don’t work.

Starting with k = 0, we get X’’(x) = 0, which, if we integrate twice, gives X(x) = Ax + B for some constants A and B. If we now impose our boundary condition u(0, t) = 0, we get B = 0 and hence X(x) reduces to X(x) = Ax. Imposing the other boundary condition, u(l, t) = 0, we get Al = 0 and hence A = 0. But this means X(x) = 0 and hence u(x, t) = 0 regardless of our function for T(t). This result essentially states that if the string starts at equilibrium then it will stay at equilibrium forever, which is not a very interesting result to us. These types of results are what we call ‘trivial solutions’ as they are obvious and do not give us any new information.

We can now move on to the case where k > 0. In this case we can let k = β² for some real number β, as this ensures that k is indeed positive. This gives the following differential equations:
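
T’’(t) = β²c²T(t),  X’’(x) = β²X(x)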

We will focus on the one on the right for now, which we can solve by inspection as it requires a function whose second derivative is equal to itself times β². Two functions that satisfy this requirement are eᵝˣ and e⁻ᵝˣ, as you can easily confirm. Since the general solution to a differential equation like this is a linear combination of its independent solutions, the general solution to this differential equation is X(x) = Aeᵝˣ + Be⁻ᵝˣ for some constants A and B. We can again impose the boundary conditions as we did before. Starting with u(0, t) = 0, this gives A = -B, and secondly u(l, t) = 0 gives Aeᵝˡ + Be⁻ᵝˡ = 0. Combining these two results we get A(eᵝˡ - e⁻ᵝˡ) = 0. If we rewrite this in a slightly better way we get the following:
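
A(e²ᵝˡ - 1)/eᵝˡ = 0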

Looking at this equation, the expression inside the parentheses can only be zero if the exponential in the numerator is equal to 1, but the only time this is true is when β = 0, which returns us to the case where k = 0. Therefore, to avoid this, A must be zero, but since A = -B, B must also be zero, and hence we get the trivial solution X(x) = 0, which we are not interested in.

Finally, we can look at the correct case of k < 0. To do this, we can let k = -β² for some real number β to ensure it is negative. This time our two solutions will be of the form eⁱᵝˣ and e⁻ⁱᵝˣ, so if we combine these as a linear combination our general solution is:
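
X(x) = Aeⁱᵝˣ + Be⁻ⁱᵝˣ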

While this form can be useful at times, we can rewrite this in a more convenient way by making use of Euler’s formula:
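
X(x) = Ccos(βx) + Dsin(βx)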

Where C and D are new constants. The same process can be done for the differential equation for T(t) to get T(t) = Acos(βct) + Bsin(βct) for some constants A and B. Now, if we impose the boundary condition u(0, t) = 0, we get X(0) = Ccos(0) + Dsin(0) = C = 0, so C must be zero. Imposing the second boundary condition, u(l, t) = 0, gives Dsin(βl) = 0. Obviously, if D = 0 we return to the trivial solution, so instead we need sin(βl) = 0, which happens precisely when βl = nπ where n is an integer. Rearranging, this tells us that β = nπ/l. If we then combine all this information and recall that u(x, t) = X(x)T(t), we get the following:
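
uₙ(x, t) = [Aₙcos(nπct/l) + Bₙsin(nπct/l)]sin(nπx/l)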

(Note that I have absorbed the constant D into Aₙ and Bₙ.) As you can see, we now have an infinite number of possible solutions, one for each value of the integer n, with the coefficients Aₙ and Bₙ allowed to differ between values of n as well. Since the general solution is a linear combination of these solutions, our general solution is:
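
u(x, t) = Σ (n = 1 to ∞) [Aₙcos(nπct/l) + Bₙsin(nπct/l)]sin(nπx/l)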

What this equation tells us is that the shape of a vibrating string can be described as a linear combination of sinusoidal terms. This may not be surprising considering how sinusoidal functions behave like waves, but this equation can even describe the motion of triangular waves, depending on the constants Aₙ and Bₙ. This essentially means that we can build the sharp corner of a triangle just by adding up a bunch of sinusoidal functions, even though each individual sinusoid is perfectly smooth, as the short numerical sketch below illustrates.
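As a quick illustration (not part of the derivation above), here is a minimal Python sketch that assumes one particular example: a string of length l plucked to height h at its midpoint, whose standard Fourier sine coefficients are Aₙ = 8h/(nπ)²·sin(nπ/2) with Bₙ = 0. It simply checks numerically how fast the partial sums of the series at t = 0 approach the exact triangular shape:

```python
import numpy as np

# Assumed example: a string of length l plucked to height h at its midpoint.
# Its Fourier sine coefficients are A_n = 8h/(n*pi)^2 * sin(n*pi/2), B_n = 0.
l, h = 1.0, 1.0
x = np.linspace(0.0, l, 201)

def triangle(x):
    """Exact triangular shape at t = 0: up to h at x = l/2, back to 0 at x = l."""
    return np.where(x < l / 2, 2 * h * x / l, 2 * h * (l - x) / l)

def partial_sum(x, n_terms):
    """Sum the first n_terms sine modes of the general solution at t = 0."""
    u = np.zeros_like(x)
    for n in range(1, n_terms + 1):
        a_n = 8 * h / (n * np.pi) ** 2 * np.sin(n * np.pi / 2)
        u += a_n * np.sin(n * np.pi * x / l)
    return u

for n_terms in (1, 3, 10, 50):
    err = np.max(np.abs(partial_sum(x, n_terms) - triangle(x)))
    print(f"{n_terms:3d} terms: max deviation from the triangle = {err:.4f}")
```

Even with a handful of terms the corner is clearly taking shape, despite every individual term being a smooth sine curve.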

While I won’t talk about it in this post, since it has already become significantly longer than I originally anticipated, this is a great way to motivate the concept of the ‘Fourier series’, as it deals with exactly this idea of adding up an infinite number of sinusoidal functions to create almost any shape. In fact, if we impose initial conditions on this final solution, we will be required to explore the Fourier series, which I may write a post about soon. The initial conditions that we choose are precisely what determine the constants Aₙ and Bₙ.

Anyway, that is it for now and I hope you enjoyed this post. Thank you for reading.

