Ordinary Differential Equations/Second Order
In this chapter we will primarily be focused on linear second order ordinary differential equations. That is, we will be interested in equations of the form
- $y'' + p(x)y' + q(x)y = g(x)$.
While it does not often enter into the business of actually finding solutions to differential equations, it is important to know when there is even hope that a solution exists. The following theorem tells us at least one case where we can hope to find solutions.
- Theorem: Suppose $p(x)$, $q(x)$, and $g(x)$ are continuous functions defined on an open interval $I$ and that $x_0 \in I$. Then there exists a unique function $y(x)$ defined on $I$ that satisfies the ordinary differential equation $y'' + p(x)y' + q(x)y = g(x)$ and satisfies the initial conditions $y(x_0) = y_0$, $y'(x_0) = y_0'$.
Putting the proof of this fact aside for now, the statement itself still provides us with a lot of information. In particular it gives some idea of how many solutions there are. One way of looking at what this theorem is saying is that a solution is completely determined by two numbers, namely $y(x_0)$ and $y'(x_0)$.
We first reduce this problem to the homogeneous case, that is $g(x) = 0$. Later we will introduce methods that will allow us to leverage our understanding of the homogeneous problem to better understand the non-homogeneous case. Thus we are interested in the problem of finding solutions to
- $y'' + p(x)y' + q(x)y = 0$. (LH)
The first thing to notice is that if $y_1$ and $y_2$ are solutions to (LH), then for any two real numbers $c_1$ and $c_2$ the function $c_1 y_1 + c_2 y_2$ is also a solution. This may be directly verified by substituting $c_1 y_1 + c_2 y_2$ into the left hand side of (LH).
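Writing this verification out, and using $L[y]$ as a shorthand (introduced here) for the left hand side $y'' + p(x)y' + q(x)y$:
- $L[c_1 y_1 + c_2 y_2] = (c_1 y_1 + c_2 y_2)'' + p(x)(c_1 y_1 + c_2 y_2)' + q(x)(c_1 y_1 + c_2 y_2)$
- $= c_1\left(y_1'' + p(x)y_1' + q(x)y_1\right) + c_2\left(y_2'' + p(x)y_2' + q(x)y_2\right) = c_1 \cdot 0 + c_2 \cdot 0 = 0$.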
If you're familiar with linear algebra, then you'll recall that a transformation $T$ is called linear if $T(c_1 v_1 + c_2 v_2) = c_1 T(v_1) + c_2 T(v_2)$. So what we are really seeing is that the left hand side of the ODE is a linear transformation on functions, and it is for this reason the equation is called linear.
Now this gives us a very interesting fact for the homogeneous case. Recall we mentioned above that our existence theorem tells us all solutions are parametrized by two initial conditions. Putting this together with the fact that linear combinations of solutions to the homogeneous problem are again solutions, it becomes interesting to investigate what initial value problems we can solve simply by taking linear combinations of solutions that we already know.
That is, given fixed numbers $y_0$ and $y_0'$ we consider the problem
- $y'' + p(x)y' + q(x)y = 0, \quad y(x_0) = y_0, \quad y'(x_0) = y_0'$.
Suppose we know two solutions $y_1$ and $y_2$ to the homogeneous problem, but suppose that $y_1$ and $y_2$ don't satisfy the initial conditions. Since we are interested in solving the initial value problem, and we know that linear combinations of solutions are again solutions, we can ask the question: "Is it possible that $y = c_1 y_1 + c_2 y_2$ satisfies the initial conditions?"
If that were the case we could evaluate $y = c_1 y_1 + c_2 y_2$ and its derivative at $x_0$ to check the initial conditions. So we would need to have that:
- $c_1 y_1(x_0) + c_2 y_2(x_0) = y_0$ and
- $c_1 y_1'(x_0) + c_2 y_2'(x_0) = y_0'$.
But it is important not to lose sight of the fact that we are assuming that $y_1$ and $y_2$ are just fixed functions that we know. So $y_1(x_0)$, $y_2(x_0)$, $y_1'(x_0)$, and $y_2'(x_0)$ are simply four numbers that we know.
This means we are really trying to solve the following linear system of two equations in the two unknowns $c_1$ and $c_2$:
- $\begin{pmatrix} y_1(x_0) & y_2(x_0) \\ y_1'(x_0) & y_2'(x_0) \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} y_0 \\ y_0' \end{pmatrix}$.
From linear algebra we know that such a system can be solved for any initial conditions $y_0$ and $y_0'$ provided the determinant of the coefficient matrix is not zero. In this two by two case that determinant is simply $y_1(x_0)y_2'(x_0) - y_1'(x_0)y_2(x_0)$. In the subject of ODEs this determinant is named after the mathematician who first used it systematically: it is known as the Wronskian, which we will now define more formally.
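When that determinant is non-zero, Cramer's rule gives the coefficients explicitly (writing $W_0$ for the determinant above, a shorthand used only in this formula):
- $c_1 = \dfrac{y_0\, y_2'(x_0) - y_0'\, y_2(x_0)}{W_0}, \qquad c_2 = \dfrac{y_0'\, y_1(x_0) - y_0\, y_1'(x_0)}{W_0}$.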
- Definition: Given differentiable functions $f$ and $g$, the Wronskian of $f$ and $g$ is the function $W(f,g)(x) = f(x)g'(x) - f'(x)g(x)$.
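For a quick example of the computation, take $f(x) = \cos x$ and $g(x) = \sin x$. Then
- $W(\cos, \sin)(x) = \cos x \cdot \cos x - (-\sin x)\cdot \sin x = \cos^2 x + \sin^2 x = 1$,
which is never zero.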
Our discussion above can be summarized by the following theorem.
- Theorem: Suppose $y_1$ and $y_2$ are solutions of (LH) on an open interval $I$ containing $x_0$, and suppose that $W(y_1, y_2)(x_0) \neq 0$. Then every solution of (LH) on $I$ can be written as $c_1 y_1 + c_2 y_2$ for exactly one choice of the constants $c_1$ and $c_2$. Such a pair $y_1$, $y_2$ is called a fundamental solution set.
Constant Coefficients
The first tractable problem is to consider the case when $p$ and $q$ are constants. For convenience we also allow $y''$ to have a non-zero constant coefficient. Thus we are interested in the equation
- $ay'' + by' + cy = g(x)$,
where $a$, $b$, and $c$ are real numbers with $a \neq 0$.
The homogeneous equation associated with this is
- $ay'' + by' + cy = 0$.
Our experience with first order differential equations tells us that any solution to $y' = ky$ has the form $Ce^{kx}$, an exponential. It turns out to be worth the effort to see if such a function, $y = e^{rx}$, will ever be a solution to the equation we are considering. So we simply substitute $e^{rx}$ in to our equation to get:
- $a r^2 e^{rx} + b r e^{rx} + c e^{rx} = \left(ar^2 + br + c\right)e^{rx} = 0$.
Since $e^{rx}$ is never zero, the only way for the product to be zero is if $r$ happens to satisfy:
- $ar^2 + br + c = 0$.
This equation is known as the characteristic equation associated with the homogeneous differential equation, and the polynomial $ar^2 + br + c$ is called the characteristic polynomial. Since $a$, $b$, and $c$ are real numbers, there are three cases to consider, according to the sign of the discriminant $b^2 - 4ac$.
Real distinct roots
The first case is that $b^2 - 4ac > 0$, in which case the quadratic formula furnishes us with two distinct real numbers $r_1$ and $r_2$ so that $ar^2 + br + c = a(r - r_1)(r - r_2)$. In this case our calculation above shows us that $e^{r_1 x}$ and $e^{r_2 x}$ are two different solutions to our equation. As you will show in the exercises, the Wronskian of $e^{r_1 x}$ and $e^{r_2 x}$ is not zero in this case. Thus we have found two solutions to the equation, and by our theorem we can represent every solution as a linear combination of these two solutions.
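For instance, consider the illustrative equation $y'' - 3y' + 2y = 0$. Its characteristic equation is
- $r^2 - 3r + 2 = (r - 1)(r - 2) = 0$,
with roots $r_1 = 1$ and $r_2 = 2$, so the general solution is $y = c_1 e^{x} + c_2 e^{2x}$.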
Complex roots
The second case to consider is when $b^2 - 4ac < 0$. In this case the theory is almost identical. Since the coefficients of the characteristic equation are real, we know the roots come in a complex conjugate pair, so we may write $r_1 = \lambda + i\mu$ and $r_2 = \lambda - i\mu$, and $e^{r_1 x}$ and $e^{r_2 x}$ are two solutions, and in fact form a fundamental solution set.
This being said, it is perhaps a bit disturbing to some of us to describe a real valued solution to an ODE with real coefficients (and real initial data) using complex numbers. For this reason it is aesthetically pleasing to find two real valued solutions. In order to do this it helps to know a little bit about what it even means to raise a number to a complex power.
In our setting the answer is provided by Euler's formula, which states that for a real number $\theta$: $e^{i\theta} = \cos\theta + i\sin\theta$. Let's take a quick look at why this formula makes any sense at all. The idea is to examine the power series for $e^x$. Plugging in $i\theta$ for $x$ and collecting real and imaginary parts we get:
- $e^{i\theta} = \sum_{n=0}^{\infty} \frac{(i\theta)^n}{n!} = \sum_{n=0}^{\infty} \frac{(-1)^n \theta^{2n}}{(2n)!} + i\sum_{n=0}^{\infty} \frac{(-1)^n \theta^{2n+1}}{(2n+1)!} = \cos\theta + i\sin\theta$.
This calculation is justified because these power series are absolutely convergent, and so we may rearrange the terms as we see fit. For a general complex number $\lambda + i\mu$ we may define $e^{\lambda + i\mu}$ as $e^{\lambda}\left(\cos\mu + i\sin\mu\right)$. Thus using these definitions we may rewrite our two solutions as:
- $e^{r_1 x} = e^{\lambda x}\left(\cos(\mu x) + i\sin(\mu x)\right)$ and $e^{r_2 x} = e^{\lambda x}\left(\cos(\mu x) - i\sin(\mu x)\right)$.
Since any linear combination of these two solutions is again a solution we note that two particularly nice linear combinations are:
- $\dfrac{e^{r_1 x} + e^{r_2 x}}{2} = e^{\lambda x}\cos(\mu x)$ and $\dfrac{e^{r_1 x} - e^{r_2 x}}{2i} = e^{\lambda x}\sin(\mu x)$.
For those uncomfortable with complex variables the above discussion may seem a bit unclear, but it may simply be considered as motivation. That is, if we take $y_1 = e^{\lambda x}\cos(\mu x)$ and $y_2 = e^{\lambda x}\sin(\mu x)$, one may directly verify that $y_1$ and $y_2$ solve (LH). It is also left to the reader to verify that $W(y_1, y_2)(x) = \mu e^{2\lambda x} \neq 0$. Thus in this case as well we find a fundamental solution set.
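As a concrete illustration, consider the equation $y'' + 2y' + 5y = 0$. The characteristic equation $r^2 + 2r + 5 = 0$ has roots $r = -1 \pm 2i$, so $\lambda = -1$ and $\mu = 2$, and the general real solution is
- $y = e^{-x}\left(c_1 \cos(2x) + c_2 \sin(2x)\right)$.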
Repeated Real Roots
In the case when $b^2 - 4ac = 0$, finding two solutions is slightly more difficult. Here our characteristic polynomial factors into $a(r - r_1)^2$, so we have only one root, namely $r_1 = -\frac{b}{2a}$. We still obtain the solution $e^{r_1 x}$; the question becomes how do we find a second solution?
Luckily there is one very nice property of the characteristic polynomial. In general, if a polynomial has a repeated root, then the derivative of the polynomial also has this root. (Since the polynomial depends on $r$, we mean here a derivative with respect to $r$.) In our case this is easily seen: let $P(r) = a(r - r_1)^2$; then we have
- $P'(r) = 2a(r - r_1)$, and so
- $P'(r_1) = 0$.
Since our characteristic polynomial came from substituting $y = e^{rx}$ into the differential equation, we might hope that taking a derivative in $r$ will help us find another solution to try.
So we start by considering:
- $\dfrac{\partial}{\partial r}\left[\left(ar^2 + br + c\right)e^{rx}\right] = \left(2ar + b\right)e^{rx} + \left(ar^2 + br + c\right)x e^{rx}$.
Now if $r = r_1$ then $2ar_1 + b = 0$ and $ar_1^2 + br_1 + c = 0$. Hence
- $\dfrac{\partial}{\partial r}\left[\left(ar^2 + br + c\right)e^{rx}\right]\Big|_{r = r_1} = 0$.
On the other hand, remembering that mixed partial derivatives commute, we might have calculated this a bit differently to get:
- $\dfrac{\partial}{\partial r}\left[a\dfrac{\partial^2}{\partial x^2}e^{rx} + b\dfrac{\partial}{\partial x}e^{rx} + ce^{rx}\right] = a\dfrac{\partial^2}{\partial x^2}\left(xe^{rx}\right) + b\dfrac{\partial}{\partial x}\left(xe^{rx}\right) + c\left(xe^{rx}\right)$.
That is, we are really just looking at $xe^{rx}$ plugged into our differential equation, but we know from our first calculation that this should be zero when $r = r_1$. So it seems that $xe^{r_1 x}$ should be a solution.
Changing the order of the derivatives in $x$ and the derivative in $r$ is allowed because $e^{rx}$ has continuous derivatives of all orders in both $r$ and $x$. So we can let $y_2 = xe^{r_1 x}$. It can be checked that $W(e^{r_1 x}, xe^{r_1 x})(x) = e^{2r_1 x} \neq 0$, and so we again have that $e^{r_1 x}$ and $xe^{r_1 x}$ form a fundamental solution set.
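For example, the equation $y'' - 4y' + 4y = 0$ has characteristic polynomial
- $r^2 - 4r + 4 = (r - 2)^2$,
with the repeated root $r_1 = 2$, so the general solution is $y = \left(c_1 + c_2 x\right)e^{2x}$.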
More about the Wronskian
In the above discussion it was always necessary to check the Wronskian at the initial point $x_0$ in order to see if the set of functions formed a fundamental solution set. This leaves us with the uncomfortable possibility that perhaps our fundamental solution set at one point would not be a fundamental solution set if we chose to have our initial conditions at a different point $x_1$. Thankfully this turns out not to be the case.
To begin proving this we start by taking the derivative of $W = y_1 y_2' - y_1' y_2$.
- $W' = y_1' y_2' + y_1 y_2'' - y_1'' y_2 - y_1' y_2' = y_1 y_2'' - y_1'' y_2$.
Next we use the equation (LH) to work out what $y_1''$ and $y_2''$ are.
- $y_1'' = -p(x)y_1' - q(x)y_1$ and $y_2'' = -p(x)y_2' - q(x)y_2$.
Thus
- $W' = y_1\left(-p y_2' - q y_2\right) - \left(-p y_1' - q y_1\right)y_2 = -p\left(y_1 y_2' - y_1' y_2\right) = -pW$.
By inspection we see that $W' = -p(x)W$, a first order linear ODE. We know the solution to this ODE is given by
- $W(x) = Ce^{-\int_{x_0}^{x} p(t)\,dt}$.
Finally, if we plug in $x = x_0$ we get that $C = W(x_0)$. Thus we can write our final formula, known as Abel's formula, as
- $W(x) = W(x_0)\,e^{-\int_{x_0}^{x} p(t)\,dt}$.
The important thing for us to notice is that the exponential factor is never zero. So for any real number $x_1$ we see that $W(x_1) = 0$ if and only if $W(x_0) = 0$. This tells us exactly that either $y_1$ and $y_2$ form a fundamental solution set or they do not; where we take our initial data does not change that fact.
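As a quick consistency check, take the constant coefficient case with distinct roots, where the equation in normalized form has $p(x) = b/a$. Direct computation gives
- $W(e^{r_1 x}, e^{r_2 x})(x) = (r_2 - r_1)e^{(r_1 + r_2)x}$,
and since $r_1 + r_2 = -b/a$, this is exactly $W(x_0)e^{-(b/a)(x - x_0)}$, matching the formula above.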
2: Series Solutions
As mentioned when we began second order ODEs, equations of the form $y'' + p(x)y' + q(x)y = g(x)$ are guaranteed to have a unique solution when $p$, $q$, and $g$ are continuous on an open interval that includes the initial point. However, problems of this form are not guaranteed to have a closed-form solution, that is, a solution that can be expressed in terms of "well-known" functions like $e^x$ and $\sin x$. We can get around this by using Taylor's theorem from calculus. Because we don't know the solution itself, we try a solution of the form $y = \sum_{n=0}^{\infty} a_n (x - x_0)^n$, a power series, instead of using the definition of the Taylor series.
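A classical example of this phenomenon is the Airy equation $y'' - xy = 0$: its coefficients are continuous everywhere, so solutions exist on all of the real line, but they cannot be written in terms of elementary functions and are instead described by their power series.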
Series Solutions of Homogeneous ODE's
Much like the method used for constant coefficients, we take our assumed solution form, differentiate it, and plug it into the equation. We then collect everything into a single series after matching both the powers of $x$ and the indices. Because the collected series is equal to zero in the homogeneous case, each coefficient of $x^n$ must also be equal to zero. We then use this fact to find a recurrence relation between the successive values of $a_n$.
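To illustrate, here is a sketch of the method for the simple equation $y'' + y = 0$ about $x_0 = 0$. Substituting $y = \sum_{n=0}^{\infty} a_n x^n$ and re-indexing the series for $y''$ gives
- $y'' + y = \sum_{n=0}^{\infty}\left[(n+2)(n+1)a_{n+2} + a_n\right]x^n = 0$,
so each coefficient must vanish, yielding the recurrence $a_{n+2} = -\dfrac{a_n}{(n+2)(n+1)}$. With $a_0$ and $a_1$ left free, the even terms build up the series for $a_0\cos x$ and the odd terms the series for $a_1 \sin x$.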
Series Solutions of Nonhomogeneous ODE's
Series Solutions of ODE's about Regular Singular Points
Series Solutions of ODE's about Irregular Singular Points
3: Hypergeometric Equations
4: Frobenius Solution to the Hypergeometric Equation
5: Legendre Equation
6: Bessel Equation
The Bessel differential equation has the form
- $x^2 y'' + x y' + \left(x^2 - \nu^2\right)y = 0$,
where $\nu$ is a constant.