Ordinary Differential Equations/Preliminaries from calculus


In this section, we shall do some preparatory work that will come in handy later, when we need it in order to prove existence and uniqueness theorems. Those proofs rely heavily on techniques from calculus which are not usually taught within a calculus course; hence this section.

We shall begin with some very useful estimates, called Gronwall's inequalities or inequalities of Gronwall type. These allow us, if we are given one type of estimate (involving an integral of a product of functions), to conclude another type of estimate (involving the exponential function).

Gronwall's inequalities

Theorem 12.1 (Gronwall's inequality):

Let $f, h : [t_0, b] \to \mathbb{R}$ be continuous with $h \ge 0$, and let $M \in \mathbb{R}$ be such that

$$\forall t \in [t_0, b]: \quad f(t) \le M + \int_{t_0}^t f(s)h(s)\,ds.$$

Then

$$\forall t \in [t_0, b]: \quad f(t) \le M e^{\int_{t_0}^t h(s)\,ds}.$$

Proof:

We define a new function by

$$r(t) := M + \int_{t_0}^t f(s)h(s)\,ds.$$

By the fundamental theorem of calculus, we immediately obtain

$$r'(t) = f(t)h(t) \le h(t)\left(M + \int_{t_0}^t f(s)h(s)\,ds\right) = h(t)r(t),$$

where the inequality follows from the assumption on $f$ together with $h \ge 0$. It follows that

$$r'(t) - h(t)r(t) \le 0.$$

We may now multiply both sides of this inequality by $e^{-\int_{t_0}^t h(s)\,ds}$ and use the identity

$$\left(r(t)e^{-\int_{t_0}^t h(s)\,ds}\right)' = r'(t)e^{-\int_{t_0}^t h(s)\,ds} + r(t)\left((-h(t))e^{-\int_{t_0}^t h(s)\,ds}\right) = \left(r'(t) - h(t)r(t)\right)e^{-\int_{t_0}^t h(s)\,ds}$$

(by the product and chain rules) to justify

$$\left(r(t)e^{-\int_{t_0}^t h(s)\,ds}\right)' \le 0.$$

Hence, the function

$$t \mapsto r(t)e^{-\int_{t_0}^t h(s)\,ds}$$

is non-increasing. Furthermore, if we set $t = t_0$ in that function, we obtain

$$r(t_0)e^0 = M.$$

Hence,

$$r(t)e^{-\int_{t_0}^t h(s)\,ds} \le r(t_0) = M \implies r(t) \le M e^{\int_{t_0}^t h(s)\,ds}.$$

Since $f(t) \le r(t)$ by assumption, the claim follows.
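The theorem can be checked numerically on a concrete example. The following sketch uses illustrative assumptions that are not part of the theorem: $t_0 = 0$, $M = 1$, $h \equiv 1$ and $f(t) = e^{t/2}$, for which the hypothesis $f(t) \le M + \int_0^t f(s)h(s)\,ds$ holds, so the conclusion predicts $f(t) \le e^t$.

```python
# Numerical sanity check of Gronwall's inequality; the data t0 = 0,
# M = 1, h = 1, f(t) = exp(t/2) are assumptions made for this example.
import numpy as np

def cumtrapz(y, x):
    """Cumulative trapezoidal approximation of the integral of y from x[0]."""
    dx = np.diff(x)
    return np.concatenate(([0.0], np.cumsum(dx * (y[:-1] + y[1:]) / 2)))

t = np.linspace(0.0, 2.0, 201)   # grid on [t0, b] = [0, 2]
f = np.exp(t / 2)
h = np.ones_like(t)
M = 1.0

# hypothesis:  f(t) <= M + integral_{t0}^{t} f(s) h(s) ds
hypothesis = bool(np.all(f <= M + cumtrapz(f * h, t) + 1e-9))
# conclusion:  f(t) <= M * exp(integral_{t0}^{t} h(s) ds)
conclusion = bool(np.all(f <= M * np.exp(cumtrapz(h, t)) + 1e-9))
print(hypothesis, conclusion)    # both True
```

Here $M + \int_0^t e^{s/2}\,ds = 2e^{t/2} - 1 \ge e^{t/2}$ confirms the hypothesis, and indeed $e^{t/2} \le e^t$ for $t \ge 0$.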

This result was for functions extending from $t_0$ to the right. An analogous result holds for functions extending from $t_0$ to the left:

Theorem 12.2 (Gronwall's inequality, left version):

Let $f, h : [a, t_0] \to \mathbb{R}$ be continuous with $h \ge 0$, and let $M \in \mathbb{R}$ be such that

$$\forall t \in [a, t_0]: \quad f(t) \le M + \int_t^{t_0} f(s)h(s)\,ds.$$

Then

$$\forall t \in [a, t_0]: \quad f(t) \le M e^{\int_t^{t_0} h(s)\,ds}.$$

Note that this time we are not integrating from $t_0$ to $t$, but from $t$ to $t_0$. This is the natural choice here, since for $t \le t_0$ it means we are integrating in the positive direction.

Proof 1:

We rewrite the proof of theorem 12.1 for our purposes.

This time, we set

$$r(t) = M + \int_t^{t_0} f(s)h(s)\,ds,$$

reversing the limits of integration in contrast to the last proof.

Once again, we get $r'(t) = -f(t)h(t) \ge -h(t)r(t)$, i.e. $r'(t) + h(t)r(t) \ge 0$. This time we use

$$\left(r(t)e^{-\int_t^{t_0} h(s)\,ds}\right)' = r'(t)e^{-\int_t^{t_0} h(s)\,ds} + h(t)r(t)e^{-\int_t^{t_0} h(s)\,ds}$$

and multiply $r'(t) + h(t)r(t) \ge 0$ by $e^{-\int_t^{t_0} h(s)\,ds} > 0$ to obtain

$$\left(r(t)e^{-\int_t^{t_0} h(s)\,ds}\right)' \ge 0,$$

which is why

$$t \mapsto r(t)e^{-\int_t^{t_0} h(s)\,ds}$$

is non-decreasing. Now inserting $t_0$ into the thus defined function gives

$$r(t_0)e^0 = M,$$

and thus for $t \le t_0$

$$r(t)e^{-\int_t^{t_0} h(s)\,ds} \le M \implies r(t) \le M e^{\int_t^{t_0} h(s)\,ds}.$$

As before, the claim follows since $f(t) \le r(t)$ by assumption.

Proof 2:

We deduce the theorem from theorem 12.1. Indeed, for $t \ge -t_0$ we set $\tilde f(t) := f(-t)$ and $\tilde h(t) := h(-t)$. Then we have

$$\tilde f(t) = f(-t) \le M + \int_{-t}^{t_0} f(s)h(s)\,ds = M + \int_{-t_0}^{t} \tilde f(s)\tilde h(s)\,ds$$

by the substitution $s \mapsto -s$. Hence, we obtain by theorem 12.1 (applied with initial point $-t_0$) that

$$\tilde f(t) \le M e^{\int_{-t_0}^t \tilde h(s)\,ds}$$

for $t \ge -t_0$. Therefore, if now $t \le t_0$,

$$f(t) = \tilde f(-t) \le M e^{\int_{-t_0}^{-t} \tilde h(s)\,ds} = M e^{\int_t^{t_0} h(s)\,ds}.$$
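The left version can be checked numerically in the same spirit. Again the concrete data are illustrative assumptions: $t_0 = 0$, $M = 1$, $h \equiv 1$ and $f(t) = e^{-t/2}$ on $[-2, 0]$.

```python
# Numerical sanity check of the left-sided Gronwall inequality;
# t0 = 0, M = 1, h = 1, f(t) = exp(-t/2) are assumed example data.
import numpy as np

def cumtrapz(y, x):
    """Cumulative trapezoidal approximation of the integral of y from x[0]."""
    dx = np.diff(x)
    return np.concatenate(([0.0], np.cumsum(dx * (y[:-1] + y[1:]) / 2)))

t = np.linspace(-2.0, 0.0, 201)  # grid on [a, t0] = [-2, 0]
f = np.exp(-t / 2)
h = np.ones_like(t)
M = 1.0

# integral from t to t0 = (integral from a to t0) - (integral from a to t)
F = cumtrapz(f * h, t)
H = cumtrapz(h, t)
int_fh = F[-1] - F               # integral_t^{t0} f(s) h(s) ds
int_h = H[-1] - H                # integral_t^{t0} h(s) ds

hypothesis = bool(np.all(f <= M + int_fh + 1e-9))
conclusion = bool(np.all(f <= M * np.exp(int_h) + 1e-9))
print(hypothesis, conclusion)    # both True
```

Here $M + \int_t^0 e^{-s/2}\,ds = 2e^{-t/2} - 1 \ge e^{-t/2}$ confirms the hypothesis, and $e^{-t/2} \le e^{-t}$ for $t \le 0$ is the predicted bound.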

The Arzelà–Ascoli theorem

Theorem 12.3 (Arzelà–Ascoli):

Let $(f_n)_{n \in \mathbb{N}}$ be a sequence of functions in $\mathcal{C}([a,b])$ which is pointwise bounded and equicontinuous, i.e. for every $\epsilon > 0$ there exists $\delta > 0$ such that for all $n \in \mathbb{N}$ and all $x, y \in [a,b]$ with $|x - y| < \delta$ we have $|f_n(x) - f_n(y)| < \epsilon$. Then $(f_n)_{n \in \mathbb{N}}$ has a uniformly convergent subsequence.

Proof:

Let $(x_n)_n$ be an enumeration of the set $\mathbb{Q} \cap [a,b]$. The sequence $(f_n(x_1))_n$ is bounded, and hence has a convergent subsequence $(f_{k_{1,n}}(x_1))_n$ due to the Heine–Borel theorem. Now the sequence $(f_{k_{1,n}}(x_2))_n$ also has a convergent subsequence $(f_{k_{2,n}}(x_2))_n$, and successively we may define $f_{k_{m,n}}$ in that way.

Set $f_{l_m} := f_{k_{m,m}}$ for all $m \in \mathbb{N}$. We claim that the sequence $(f_{l_m})_m$ is uniformly convergent. Indeed, let $\epsilon > 0$ be arbitrary and, by equicontinuity, let $\delta > 0$ be such that $|x - y| < \delta \implies \forall n \in \mathbb{N}: |f_n(x) - f_n(y)| < \epsilon/3$.

Let $N_1 \in \mathbb{N}$ be sufficiently large that if we order $a, x_1, \ldots, x_{N_1}, b$ ascendingly, the maximum difference between successive elements is less than $\delta$ (possible since $\mathbb{Q}$ is dense in $\mathbb{R}$).

Let $N_2 \in \mathbb{N}$ be sufficiently large that for all $n \in \{1, \ldots, N_1\}$ and all $j \ge i \ge N_2$, $|f_{l_j}(x_n) - f_{l_i}(x_n)| < \epsilon/3$ (possible since each of the finitely many sequences $(f_{l_m}(x_n))_m$ converges; for $m \ge n$, $(f_{l_m})_m$ is a subsequence of $(f_{k_{n,m}})_m$).

Set $N := \max\{N_1, N_2\}$, and let $k \ge N$. Let $y \in [a,b]$ be arbitrary. Choose $x_n$ with $n \in \{1, \ldots, N_1\}$ such that $|x_n - y| < \delta$ (possible due to the choice of $N_1$). Due to the choice of $\delta$, the choice of $N_2$ and the triangle inequality we get

$$|f_{l_{N+k}}(y) - f_{l_N}(y)| \le |f_{l_{N+k}}(y) - f_{l_{N+k}}(x_n)| + |f_{l_{N+k}}(x_n) - f_{l_N}(x_n)| + |f_{l_N}(x_n) - f_{l_N}(y)| < \frac{\epsilon}{3} + \frac{\epsilon}{3} + \frac{\epsilon}{3} = \epsilon.$$

Hence, $(f_{l_m})_m$ is a Cauchy sequence with respect to the supremum norm, which converges due to the completeness of $\mathcal{C}([a,b])$.
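As an illustration (not a reenactment of the diagonal argument), consider the family $f_n(x) = \sin(n) \cdot x$ on $[0,1]$: it is uniformly bounded and equicontinuous (each $f_n$ is 1-Lipschitz), so the theorem guarantees a uniformly convergent subsequence. The sketch below exploits the special form of the family and picks indices along which the slopes $\sin(n)$ approach $1$, so the corresponding $f_{n_k}$ are uniformly close to $g(x) = x$.

```python
# Illustrative only: hand-picked subsequence of f_n(x) = sin(n) * x
# that is uniformly close to g(x) = x; the family and the selection
# rule are assumptions made for this example.
import numpy as np

xs = np.linspace(0.0, 1.0, 101)
n = np.arange(1, 5001)
slopes = np.sin(n)                         # f_n(x) = sin(n) * x

order = np.argsort(np.abs(slopes - 1.0))   # slopes closest to 1 first
sub = n[np.sort(order[:50])]               # indices n_1 < ... < n_50

# sup-norm distance of each f_{n_k} to the limit candidate g(x) = x;
# this equals |sin(n_k) - 1| and is uniformly small along the subsequence
sup_dev = np.max(np.abs(np.sin(sub)[:, None] * xs - xs[None, :]), axis=1)
print(float(sup_dev.max()))
```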

Convergence considerations

In this section, we shall prove two more or less elementary results from analysis, which aren't particularly exciting, but are useful preparations for the work to come.

Theorem 12.4:

Let $K \subseteq \mathbb{R}^d$ be compact, let $g : K \to \mathbb{R}^d$ be continuous, and let $(f_n)_{n \in \mathbb{N}}$, $f_n : [a,b] \to K$, be a sequence of functions converging uniformly to $f : [a,b] \to K$. Then $g \circ f_n \to g \circ f$ uniformly.

Proof: Let $\epsilon > 0$ be arbitrary. Since $g$ is a continuous function defined on a compact set, it is even uniformly continuous (this is due to the Heine–Cantor theorem). This means that we may pick $\delta > 0$ such that $\|x - y\| < \delta \implies \|g(x) - g(y)\| < \epsilon$ for all $x, y \in K$. Since $f_n \to f$ uniformly, we may pick $N \in \mathbb{N}$ such that for all $k \ge N$ and $t \in [a,b]$, $\|f_k(t) - f(t)\| < \delta$. Then we have for $k \ge N$ and $t \in [a,b]$ that

$$\|g(f_k(t)) - g(f(t))\| < \epsilon.$$
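A concrete instance can make this tangible. The choices below are illustrative assumptions: $g = \exp$ (continuous on the compact set $[0,2]$) and $f_n(t) = t + 1/n \to f(t) = t$ uniformly on $[0,1]$.

```python
# Sketch: sup-norm distance of g(f_n) to g(f) shrinks as f_n -> f
# uniformly; g = exp, f_n(t) = t + 1/n are assumed example data.
import numpy as np

t = np.linspace(0.0, 1.0, 201)
f = t                                      # uniform limit
sup_dists = [float(np.max(np.abs(np.exp(t + 1.0 / n) - np.exp(f))))
             for n in (1, 10, 100, 1000)]
print(sup_dists)                           # decreasing toward 0
```

Here the sup-norm distance is exactly $e \cdot (e^{1/n} - 1)$, which tends to $0$ as the theorem predicts.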

The next result is very similar; it is an extension of the former theorem making g time-dependent.

Theorem 12.5:

Let $K \subseteq \mathbb{R}^d$ be compact, let $g : [a,b] \times K \to \mathbb{R}^d$ be continuous, and let $(f_n)_{n \in \mathbb{N}}$, $f_n : [a,b] \to K$, be a sequence of functions converging uniformly to $f : [a,b] \to K$. Then $g(\cdot, f_n(\cdot)) \to g(\cdot, f(\cdot))$ uniformly.

Proof:

First, we note that the set $[a,b] \times K$ is compact. This can be seen either by noting that this set is still bounded and closed, or by noting that for a sequence in this space, we may first choose a convergent subsequence of the "induced" sequence in $K$ and then a convergent subsequence of what's left in $[a,b]$ (or the other way round).

Thus, the function $g$ is uniformly continuous as before. Hence, we may choose $\delta > 0$ such that $|t - s| + \|x - y\| < \delta$ implies $\|g(t,x) - g(s,y)\| < \epsilon$ (note that $|\cdot| + \|\cdot\|$ is a norm on $\mathbb{R} \times \mathbb{R}^d \supseteq [a,b] \times K$, and since this space is still finite-dimensional, all norms on it are equivalent; in particular equivalent to the norm with respect to which continuity is measured).

Since $f_n \to f$ uniformly, we may pick $N \in \mathbb{N}$ such that for all $k \ge N$ and $t \in [a,b]$, $\|f_k(t) - f(t)\| < \delta$. Then for $k \ge N$ and all $t \in [a,b]$, we have

$$\|g(t, f_k(t)) - g(t, f(t))\| < \epsilon.$$
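The same kind of check works for the time-dependent version. The data are again illustrative assumptions: $g(t,x) = \sin(tx)$, continuous on $[0,1] \times [0,2]$, and $f_n(t) = t + 1/n \to f(t) = t$ uniformly.

```python
# Sketch: sup over t of |g(t, f_n(t)) - g(t, f(t))| shrinks as n grows;
# g(t, x) = sin(t * x) and f_n(t) = t + 1/n are assumed example data.
import numpy as np

t = np.linspace(0.0, 1.0, 201)
g = lambda s, x: np.sin(s * x)
sup_dists = [float(np.max(np.abs(g(t, t + 1.0 / n) - g(t, t))))
             for n in (1, 10, 100, 1000)]
print(sup_dists)                           # tends to 0 uniformly in t
```

The mean value theorem gives the a priori bound $|\sin(t(t + 1/n)) - \sin(t^2)| \le t/n \le 1/n$, consistent with the printed values.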

Banach's fixed-point theorem

We shall later give two proofs of the Picard–Lindelöf theorem on the existence of solutions; one can be given using the machinery above, whereas a different one rests upon the following result by Stefan Banach.

Theorem 12.6 (Banach's fixed-point theorem):

Let $(X, d)$ be a complete metric space, and let $f : X \to X$ be a contraction, i.e. there exists $0 \le \lambda < 1$ such that for all $x, y \in X$, $d(f(x), f(y)) \le \lambda d(x, y)$. Then $f$ has a unique fixed point $x \in X$ (i.e. a point such that $f(x) = x$), and for any $y \in X$ the sequence $y, f(y), f(f(y)), \ldots$ converges to $x$.

Proof:

First, we prove uniqueness of the fixed point. Assume $x, y$ are both fixed points. Then

$$d(x,y) = d(f(x), f(y)) \le \lambda d(x,y) \implies (1 - \lambda)d(x,y) \le 0.$$

Since $0 \le \lambda < 1$, this implies $d(x,y) = 0$, i.e. $x = y$.

Now we prove existence and, simultaneously, the claim about the convergence of the sequence $y, f(y), f(f(y)), f(f(f(y))), \ldots$. For notation, we set $z_0 := y$ and, if $z_n$ is already defined, $z_{n+1} := f(z_n)$. Then the sequence $(z_n)_n$ is nothing but the sequence $y, f(y), f(f(y)), f(f(f(y))), \ldots$.

Let $n \in \mathbb{N}_0$. We claim that

$$d(z_{n+1}, z_n) \le \lambda^n d(z_1, z_0).$$

Indeed, this follows by induction on $n$. The case $n = 0$ is trivial, and if the claim is true for $n$, then $d(z_{n+2}, z_{n+1}) = d(f(z_{n+1}), f(z_n)) \le \lambda d(z_{n+1}, z_n) \le \lambda \cdot \lambda^n d(z_1, z_0)$.

Hence, by the triangle inequality,

$$d(z_{n+m}, z_n) \le \sum_{j=n+1}^{n+m} d(z_j, z_{j-1}) \le \sum_{j=n+1}^{n+m} \lambda^{j-1} d(z_1, z_0) \le \sum_{j=n+1}^{\infty} \lambda^{j-1} d(z_1, z_0) = d(z_1, z_0) \frac{\lambda^n}{1 - \lambda}.$$

The latter expression goes to zero as $n \to \infty$, and hence we are dealing with a Cauchy sequence. As we are in a complete metric space, it converges to a limit $x$. This limit is furthermore a fixed point, as the continuity of $f$ ($f$ is Lipschitz continuous with constant $\lambda$) implies

$$x = \lim_{n \to \infty} z_n = \lim_{n \to \infty} f(z_{n-1}) = f\left(\lim_{n \to \infty} z_{n-1}\right) = f(x).$$
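The iteration in the proof is easy to run. A standard illustrative choice (an assumption of this example, not part of the theorem) is $f(x) = \cos(x)$, which maps $[0,1]$ into itself and is a contraction there with $\lambda = \sin(1) \approx 0.841$ by the mean value theorem.

```python
# Fixed-point iteration z_{n+1} = f(z_n) for the assumed example
# f(x) = cos(x) on [0, 1], a contraction with lambda = sin(1).
import math

z = 0.5                       # starting point y = z_0
for _ in range(100):
    z = math.cos(z)           # z_{n+1} = f(z_n)

print(z)                      # converges to the unique fixed point ~ 0.739085
print(abs(math.cos(z) - z))   # residual d(f(z), z) is tiny
```

The proof's estimate $d(z_n, x) \le \lambda^n d(z_1, z_0)/(1 - \lambda)$ also gives an a priori bound on how many iterations are needed for a prescribed accuracy.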
