Ordinary Differential Equations/Second Order


In this chapter we will primarily be focused on linear second order ordinary differential equations. That is, we will be interested in initial value problems of the form

$$(\text{LIVP}) \qquad \begin{cases} y'' + p(x)\,y' + q(x)\,y = g(x) \\ y(x_0) = y_0, \quad y'(x_0) = y'_0. \end{cases}$$

While the question rarely enters directly into the business of finding solutions to differential equations, it is important to keep in mind when there is even hope that a solution exists. The following theorem tells us at least one case where we can hope to find solutions.

Theorem. Suppose $p(x)$, $q(x)$ and $g(x)$ are continuous functions defined on an open interval $I$ and that $x_0 \in I$. Then there exists a unique function $y(x)$ defined on $I$ that satisfies the ordinary differential equation $y''(x) + p(x)\,y' + q(x)\,y = g(x)$ and satisfies the initial conditions $y(x_0) = y_0$, $y'(x_0) = y'_0$.

Putting the proof of this fact aside for now, the statement alone already provides us with a lot of information. In particular it gives some idea of how many solutions there are: one way of reading the theorem is that a solution is completely determined by two numbers, namely $y_0$ and $y'_0$.

We first reduce this problem to the homogeneous case, that is $g(x) = 0$. Later we will introduce methods that allow us to leverage our understanding of the homogeneous problem to better understand the non-homogeneous case. Thus we are interested in the problem of finding solutions to

$$(\text{LH}) \qquad y'' + p(x)\,y' + q(x)\,y = 0.$$

The first thing to notice is that if $y_1$ and $y_2$ are solutions to (LH), then for any two real numbers $c_1$ and $c_2$ the combination $c_1 y_1 + c_2 y_2$ is also a solution. This may be verified directly by substituting into the left hand side of (LH):

$$\begin{aligned}
(c_1 y_1 + c_2 y_2)'' + p(x)(c_1 y_1 + c_2 y_2)' + q(x)(c_1 y_1 + c_2 y_2)
&= c_1 y_1'' + c_2 y_2'' + c_1 p(x) y_1' + c_2 p(x) y_2' + c_1 q(x) y_1 + c_2 q(x) y_2 \\
&= c_1 \bigl(y_1'' + p(x) y_1' + q(x) y_1\bigr) + c_2 \bigl(y_2'' + p(x) y_2' + q(x) y_2\bigr) \\
&= c_1 \cdot 0 + c_2 \cdot 0 = 0.
\end{aligned}$$

If you're familiar with linear algebra, then you'll recall that a transformation $T$ is called linear if $T(c_1 v + c_2 w) = c_1 T(v) + c_2 T(w)$. So what we are really seeing is that the left hand side of the ODE is a linear transformation on functions, and it is for this reason that the equation is called linear.

Now this gives us a very interesting fact for the homogeneous case. Recall we mentioned above that our existence theorem tells us all solutions are parametrized by two initial conditions. Putting this together with the fact that linear combinations of solutions to the homogeneous problem are again solutions, it becomes interesting to investigate what initial value problems we can solve simply by taking linear combinations of solutions that we already know.

That is, given fixed numbers $y_0$ and $y'_0$ we consider the problem

$$(\text{LHIVP}) \qquad \begin{cases} y'' + p(x)\,y' + q(x)\,y = 0 \\ y(x_0) = y_0, \quad y'(x_0) = y'_0. \end{cases}$$

Suppose we know two solutions $y_1$ and $y_2$ to the homogeneous problem, but suppose that neither $y_1$ nor $y_2$ satisfies the initial conditions. Since we are interested in solving the initial value problem $y(x_0) = y_0$, $y'(x_0) = y'_0$, and we know that linear combinations of solutions are again solutions, we can ask the question: "Is it possible that $y = c_1 y_1 + c_2 y_2$?"

If that were the case we could evaluate $y$ and $y'$ at $x_0$ to check the initial conditions. So we would need to have that:

$$y(x_0) = c_1 y_1(x_0) + c_2 y_2(x_0)$$

and

$$y'(x_0) = c_1 y_1'(x_0) + c_2 y_2'(x_0).$$

But it is important not to lose sight of the fact that we are assuming that $y_1$ and $y_2$ are just fixed functions that we know. So $y_1(x_0)$, $y_1'(x_0)$, $y_2(x_0)$, and $y_2'(x_0)$ are simply four numbers that we know.

This means we are really trying to solve the following linear system with two equations and two unknowns:

$$\begin{cases} c_1 y_1(x_0) + c_2 y_2(x_0) = y_0 \\ c_1 y_1'(x_0) + c_2 y_2'(x_0) = y'_0 \end{cases}$$

From linear algebra we know that such a system can be solved for any set of initial conditions $y_0$ and $y'_0$ provided the determinant of the coefficient matrix is not zero. In this two-by-two case that determinant is simply $y_1(x_0)\,y_2'(x_0) - y_2(x_0)\,y_1'(x_0)$. In the subject of ODEs this determinant is named after the mathematician who first used it systematically: it is known as the Wronskian, of which we now give a more formal definition.

Definition: Given differentiable functions $y_1$ and $y_2$, the Wronskian of $y_1$ and $y_2$ is the function $W(y_1,y_2)(x) := y_1(x)\,y_2'(x) - y_2(x)\,y_1'(x)$.
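For instance, when $W(y_1,y_2)(x_0) \neq 0$, solving the two-by-two system above (by Cramer's rule, say) gives the constants explicitly:

$$c_1 = \frac{y_0\,y_2'(x_0) - y'_0\,y_2(x_0)}{W(y_1,y_2)(x_0)}, \qquad c_2 = \frac{y'_0\,y_1(x_0) - y_0\,y_1'(x_0)}{W(y_1,y_2)(x_0)}.$$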

Our discussion above can be summarized by the following theorem.

Theorem. Suppose $y_1$ and $y_2$ are solutions to (LH) and that $W(y_1,y_2)(x_0) \neq 0$. Then for any choice of $y_0$ and $y'_0$ there are constants $c_1$ and $c_2$ so that $y = c_1 y_1 + c_2 y_2$ solves (LHIVP); in particular, every solution of (LH) can be written as such a linear combination. A pair of solutions whose Wronskian does not vanish is called a fundamental solution set for (LH).

Constant Coefficients

The first tractable problem is to consider the case when $p(x)$ and $q(x)$ are constants. For convenience we also allow $y''$ to carry a non-zero constant coefficient. Thus we are interested in equations of the form

$$a y'' + b y' + c y = g(x),$$

where $a$, $b$ and $c$ are real numbers with $a \neq 0$.

The homogeneous equation associated with this is

$$a y'' + b y' + c y = 0.$$

Our experience with first order differential equations tells us that any solution to $a y' + b y = 0$ has the form $e^{rx}$ (in this case $r = -b/a$). It turns out to be worth the effort to see whether such a function can ever be a solution to the equation we are considering. So we simply substitute $y = e^{rx}$ into our equation to get:

$$a r^2 e^{rx} + b r e^{rx} + c e^{rx} = (a r^2 + b r + c)\,e^{rx} = 0.$$

Since $e^{rx}$ is never zero, the only way for the product to be zero is if $r$ happens to satisfy:

$$a r^2 + b r + c = 0.$$

This equation is known as the characteristic equation associated with the homogeneous differential equation, and the polynomial $a r^2 + b r + c$ is called the characteristic polynomial. Since $a$, $b$, $c$ are real numbers there are three cases to consider.

Real distinct roots

The first case is that $b^2 - 4ac > 0$, in which case the quadratic formula furnishes us with two real numbers $r_1 \neq r_2$ so that $a r_1^2 + b r_1 + c = 0 = a r_2^2 + b r_2 + c$. In this case our calculation above shows us that $e^{r_1 x}$ and $e^{r_2 x}$ are two different solutions to our equation. As you will show in the exercises, the Wronskian of $e^{r_1 x}$ and $e^{r_2 x}$ is not zero in this case. Thus we have found two solutions to the equation, and by our theorem we can represent every solution as a linear combination of these two solutions.

In summary, when $b^2 - 4ac > 0$ the general solution of $a y'' + b y' + c y = 0$ is

$$y = c_1 e^{r_1 x} + c_2 e^{r_2 x},$$

where $r_1$ and $r_2$ are the two real roots of the characteristic equation.
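For example, consider $y'' - y' - 6y = 0$. The characteristic equation is $r^2 - r - 6 = (r - 3)(r + 2) = 0$, with roots $r_1 = 3$ and $r_2 = -2$, so the general solution is $y = c_1 e^{3x} + c_2 e^{-2x}$.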

Complex roots

The second case to consider is when $b^2 - 4ac < 0$. In this case the theory is almost identical. Since the coefficients of the characteristic equation are real, its two roots are complex conjugates, so we may write $r_1 = z + iw$ and $r_2 = z - iw$. Then $e^{r_1 x}$ and $e^{r_2 x}$ are two solutions, and in fact form a fundamental solution set.

This being said, it is perhaps a bit disturbing to some of us to describe a real valued solution to an ODE with real coefficients (and real initial data) using complex numbers. For this reason it is aesthetically pleasing to find two real valued solutions. In order to do this it helps to know a little bit about what it even means to raise a number to a complex power.

In our setting the answer is provided by Euler's formula, which states that for a real number $\theta$: $e^{i\theta} = \cos\theta + i\sin\theta$. Let's take a quick look at why this formula makes any sense at all. The idea is to examine the power series for $e^x = 1 + x + \frac{x^2}{2} + \frac{x^3}{3!} + \cdots = \sum_{n=0}^{\infty} \frac{x^n}{n!}$. Plugging in $i\theta$ for $x$ and collecting real and imaginary parts we get:

$$1 + i\theta - \frac{\theta^2}{2} - i\frac{\theta^3}{3!} + \frac{\theta^4}{4!} + i\frac{\theta^5}{5!} - \cdots = \left(1 - \frac{\theta^2}{2} + \frac{\theta^4}{4!} - \cdots\right) + i\left(\theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \cdots\right) = \cos\theta + i\sin\theta.$$

This calculation is justified because these power series are absolutely convergent, and so we may rearrange the terms as we see fit. For more general complex numbers we may define $e^{z + iw}$ as $e^z e^{iw}$. Thus using these definitions we may rewrite our two solutions as:

$$y_1 = e^{zx}\bigl(\cos(wx) + i\sin(wx)\bigr), \qquad y_2 = e^{zx}\bigl(\cos(wx) - i\sin(wx)\bigr).$$

Since any linear combination of these two solutions is again a solution we note that two particularly nice linear combinations are:

$$\tilde{y}_1 = \frac{y_1 + y_2}{2} = e^{zx}\cos(wx), \qquad \tilde{y}_2 = \frac{y_1 - y_2}{2i} = e^{zx}\sin(wx).$$

For those uncomfortable with complex variables the above discussion may seem a bit unclear, but it may simply be considered as motivation. That is, if we remember that $z = -\frac{b}{2a}$ and $w = \frac{\sqrt{4ac - b^2}}{2a}$, one may directly verify that $\tilde{y}_1$ and $\tilde{y}_2$ solve (LH). It is also left to the reader to verify that $W(\tilde{y}_1, \tilde{y}_2)(x_0) \neq 0$. Thus in this case as well we find a fundamental solution set.

In summary, when $b^2 - 4ac < 0$ the general solution of $a y'' + b y' + c y = 0$ is

$$y = e^{zx}\bigl(c_1\cos(wx) + c_2\sin(wx)\bigr), \qquad z = -\frac{b}{2a}, \quad w = \frac{\sqrt{4ac - b^2}}{2a}.$$
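For example, consider $y'' + 2y' + 5y = 0$. The characteristic equation $r^2 + 2r + 5 = 0$ has roots $r = -1 \pm 2i$, so $z = -1$ and $w = 2$, and the general solution is $y = e^{-x}\bigl(c_1\cos(2x) + c_2\sin(2x)\bigr)$.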

Repeated Real Roots

In the case when $b^2 - 4ac = 0$, finding two solutions is slightly more difficult. Here our characteristic polynomial factors as $a\left(r + \frac{b}{2a}\right)^2$, so we have only one root, namely $r_1 = -\frac{b}{2a}$. We still obtain the solution $y_1 = e^{r_1 x}$; the question becomes how to find a second solution.

Luckily there is one very nice property of the characteristic polynomial. In general, if a polynomial has a repeated root then the derivative of the polynomial also has that root. (Since the polynomial depends on $r$, we mean here the derivative with respect to $r$.) In our case this is easily seen: let $P(r) = a\left(r + \frac{b}{2a}\right)^2$; then we have

$$P'(r) = 2a\left(r + \frac{b}{2a}\right) \quad\text{and so}\quad P'(r_1) = 2a\left(r_1 + \frac{b}{2a}\right) = 2a\left(-\frac{b}{2a} + \frac{b}{2a}\right) = 0.$$

Since our characteristic polynomial came from considering $a(e^{rx})'' + b(e^{rx})' + c(e^{rx})$, we might hope that taking a derivative in $r$ will help us find another solution to try.

So we start by considering:

$$\frac{d}{dr}\Bigl(a(e^{rx})'' + b(e^{rx})' + c(e^{rx})\Bigr) = \frac{d}{dr}\Bigl((ar^2 + br + c)\,e^{rx}\Bigr) = (2ar + b)\,e^{rx} + (ar^2 + br + c)\,x e^{rx}.$$

Now if $r = r_1 = -\frac{b}{2a}$ then $(2ar_1 + b) = 0$ and $(ar_1^2 + br_1 + c) = 0$. Hence $(2ar_1 + b)\,e^{r_1 x} + (ar_1^2 + br_1 + c)\,x e^{r_1 x} = 0$.

On the other hand, remembering that the derivatives in $r$ and in $x$ commute, we might have calculated this a bit differently to get:

$$\frac{d}{dr}\Bigl(a(e^{rx})'' + b(e^{rx})' + c(e^{rx})\Bigr) = a\left(\frac{d}{dr}e^{rx}\right)'' + b\left(\frac{d}{dr}e^{rx}\right)' + c\left(\frac{d}{dr}e^{rx}\right) = a(x e^{rx})'' + b(x e^{rx})' + c(x e^{rx}).$$

That is, we are really just looking at $x e^{rx}$ plugged into our differential equation, and we know from our first calculation that at $r = r_1$ this should be zero. So it seems that $x e^{r_1 x}$ should be a solution.

Changing the order of the derivatives in $x$ and the derivatives in $r$ is allowed because $e^{rx}$ has continuous derivatives of all orders in $x$ and $r$. So we can let $y_2 = x e^{r_1 x}$. It can be checked that $W(y_1, y_2)(x_0) \neq 0$, and so we again have that $y_1$ and $y_2$ form a fundamental solution set.

In summary, when $b^2 - 4ac = 0$ the general solution of $a y'' + b y' + c y = 0$ is

$$y = (c_1 + c_2 x)\,e^{r_1 x}, \qquad r_1 = -\frac{b}{2a}.$$
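For example, consider $y'' - 4y' + 4y = 0$. The characteristic polynomial is $r^2 - 4r + 4 = (r - 2)^2$, with the repeated root $r_1 = 2$, so the general solution is $y = (c_1 + c_2 x)\,e^{2x}$.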

More about the Wronskian

In the discussion above it was always necessary to check the Wronskian at the initial point $x_0$ in order to see whether the set of functions formed a fundamental solution set. This leaves us with the uncomfortable possibility that a fundamental solution set for initial conditions posed at one point $x_0$ might fail to be a fundamental solution set if we choose to pose our initial conditions at another point $x_1$. Thankfully this turns out not to be the case.

Theorem (Abel's formula). Suppose $y_1$ and $y_2$ are solutions to (LH) on an open interval $I$ containing $x_0$. Then for every $x$ in $I$

$$W(y_1,y_2)(x) = W(y_1,y_2)(x_0)\,e^{-\int_{x_0}^{x} p(t)\,dt}.$$

In particular, $W(y_1,y_2)$ is either identically zero on $I$ or never zero on $I$.

To begin proving this we start by taking the derivative of $W(y_1,y_2)$.

$$W(y_1,y_2)' = (y_1 y_2')' - (y_2 y_1')' = y_1' y_2' + y_1 y_2'' - y_2' y_1' - y_2 y_1'' = y_1 y_2'' - y_2 y_1''.$$

Next we use the equation (LH) to work out what $y_1''$ and $y_2''$ are:

$$y_1'' = -p(x)\,y_1' - q(x)\,y_1 \quad\text{and}\quad y_2'' = -p(x)\,y_2' - q(x)\,y_2.$$

Thus

$$W(y_1,y_2)' = y_1\bigl(-p(x)\,y_2' - q(x)\,y_2\bigr) - y_2\bigl(-p(x)\,y_1' - q(x)\,y_1\bigr) = -p(x)\,y_1 y_2' + p(x)\,y_2 y_1'.$$

By inspection we see that $W(y_1,y_2)' = -p(x)\,W(y_1,y_2)$. We know the solution to this first order ODE is given by

$$W(y_1,y_2)(x) = C\,e^{-\int_{x_0}^{x} p(t)\,dt}.$$

Finally, if we plug in $x = x_0$ we get that $C = W(y_1,y_2)(x_0)$. Thus we can write our final formula as

$$W(y_1,y_2)(x) = W(y_1,y_2)(x_0)\,e^{-\int_{x_0}^{x} p(t)\,dt}.$$

The important thing for us to notice is that $e^{-\int_{x_0}^{x} p(t)\,dt}$ is never zero. So for any $x$ in the interval we see that $W(y_1,y_2)(x) = 0$ if and only if $W(y_1,y_2)(x_0) = 0$. This tells us that either $y_1$ and $y_2$ form a fundamental solution set or they do not; where we take our initial data does not change that fact.
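As a quick sanity check, here is a small sketch using the sympy library (the concrete equation is chosen only for illustration) that verifies this formula for $y'' + 2y' + 5y = 0$, whose real fundamental solutions were found above to be $e^{-x}\cos(2x)$ and $e^{-x}\sin(2x)$; in standard form $p(x) = 2$.

    # Check Abel's formula W(x) = W(x0) * exp(-integral of p) on a concrete example.
    import sympy as sp

    x = sp.symbols('x')
    y1 = sp.exp(-x) * sp.cos(2 * x)
    y2 = sp.exp(-x) * sp.sin(2 * x)

    # Wronskian computed directly from the definition W = y1*y2' - y2*y1'.
    W = sp.simplify(y1 * sp.diff(y2, x) - y2 * sp.diff(y1, x))
    print(W)  # expect 2*exp(-2*x)

    # Abel's formula with p(x) = 2 and x0 = 0 predicts W(x) = W(0)*exp(-2*x).
    W0 = W.subs(x, 0)
    print(sp.simplify(W - W0 * sp.exp(-2 * x)))  # expect 0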

2: Series Solutions

As mentioned when we began second order ODEs, equations of the form $y'' + p(x)\,y' + q(x)\,y = g(x)$ are guaranteed to have a unique solution when $p(x)$, $q(x)$, and $g(x)$ are continuous on an open interval that includes the initial condition. However, problems of this form are not guaranteed to have a closed-form solution, that is, a solution that can be expressed in terms of "well-known" functions like $x^2$ and $\sin(x)$. We can get around this by using Taylor's theorem from calculus. Because we don't know the solution itself, we try a solution of the form $y = \sum_{n=0}^{\infty} a_n (x - x_0)^n$, a power series, instead of using the definition of the Taylor series.

Series Solutions of Homogeneous ODE's

Much like the method used for constant coefficients, we take our assumed solution form, differentiate it, and plug it into the equation. We then collect each series into a single series after matching both the powers of $x$ and the indices. Because the collected series is equal to zero in the homogeneous case, the coefficient of each power of $x$ must also be equal to zero. We then use this fact to find a recurrence relation between successive values of $a_n$.

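As a simple illustration of the procedure (this particular example is chosen here for concreteness), consider $y'' + y = 0$ about $x_0 = 0$. Substituting $y = \sum_{n=0}^{\infty} a_n x^n$ and re-indexing $y'' = \sum_{n=0}^{\infty} (n+2)(n+1)\,a_{n+2}\,x^n$ gives

$$\sum_{n=0}^{\infty} \bigl[(n+2)(n+1)\,a_{n+2} + a_n\bigr]\,x^n = 0,$$

so every coefficient must vanish, which yields the recurrence $a_{n+2} = -\frac{a_n}{(n+2)(n+1)}$. Starting from arbitrary $a_0$ and $a_1$, this reproduces the Taylor series of $a_0\cos x + a_1\sin x$.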

Series Solutions of Nonhomogeneous ODE's

Series Solutions of ODE's about Regular Singular Points

Series Solutions of ODE's about Irregular Singular Points

3: Hypergeometric Equations

4: Frobenius Solution to the Hypergeometric Equation

5: Legendre Equation

6: Bessel Equation

The Bessel differential equation has the form

$$x^2 y'' + x y' + (x^2 - n^2)\,y = 0.$$

7: Mathieu Equation

8: Continued Fraction Solutions
