Numerical Methods Qualification Exam Problems and Solutions (University of Maryland)/August 2007


Problem 1

A set of functions $\{g_1,\ldots,g_n\}\subset C[a,b]$ is a Chebyshev system if


(i) The set is linearly independent.


(ii) If $g$ is a linear combination of $\{g_1,\ldots,g_n\}$ which is not identically zero, then $g$ has at most $n-1$ distinct zeros in $[a,b]$.


Problem 1a

Show that $\{g_1,\ldots,g_n\}$ is a Chebyshev system if and only if for any $n$ distinct points $x_1,\ldots,x_n\in[a,b]$ the matrix $A$ with $a_{i,j}=g_j(x_i)$, $1\le i,j\le n$, is non-singular.

Solution 1a

Forward Direction

We want to prove the following statement:


If $\{g_1,\ldots,g_n\}$ is a Chebyshev system, then for any $n$ distinct points $x_1,\ldots,x_n\in[a,b]$ the matrix $A$ with $a_{i,j}=g_j(x_i)$, $1\le i,j\le n$, is non-singular.


Writing out the matrix $A$ yields:


$$A=\begin{pmatrix} g_1(x_1) & g_2(x_1) & \cdots & g_n(x_1)\\ g_1(x_2) & g_2(x_2) & \cdots & g_n(x_2)\\ \vdots & \vdots & & \vdots\\ g_1(x_n) & g_2(x_n) & \cdots & g_n(x_n) \end{pmatrix}$$


Suppose, for the sake of contradiction, that $A$ is singular. Then there exists a non-zero vector $\alpha=(\alpha_1,\ldots,\alpha_n)^T$ with $A\alpha=0$, i.e. the function $g(x)=\sum_{j=1}^n \alpha_j g_j(x)$ vanishes at the $n$ distinct points $x_1,\ldots,x_n$. Since the set $\{g_1,\ldots,g_n\}$ is linearly independent and the $\alpha_j$ are not all zero, $g$ is not identically zero. But then, by (ii), $g$ has at most $n-1$ distinct zeros in $[a,b]$, a contradiction. Hence $A$ is non-singular.

Reverse Direction

Proof of (i)

Assume $A$ is non-singular for every choice of $n$ distinct points $x_1,\ldots,x_n\in[a,b]$.


Suppose $\sum_{j=1}^n \alpha_j g_j(x)=0$ for all $x\in[a,b]$. Evaluating at any $n$ distinct points $x_1,\ldots,x_n$ gives $A\alpha=0$, where


$$A=\begin{pmatrix} g_1(x_1) & g_2(x_1) & \cdots & g_n(x_1)\\ g_1(x_2) & g_2(x_2) & \cdots & g_n(x_2)\\ \vdots & \vdots & & \vdots\\ g_1(x_n) & g_2(x_n) & \cdots & g_n(x_n) \end{pmatrix}.$$


Since $A$ is non-singular, $\alpha=0$. Hence $\{g_1,g_2,\ldots,g_n\}$ is a linearly independent set, and we have shown (i).

Proof of (ii)

By hypothesis, $g(x)$ is a linear combination of $\{g_1,g_2,\ldots,g_n\}$ which is not identically zero, i.e.


$$g(x)=\alpha_1 g_1(x)+\alpha_2 g_2(x)+\cdots+\alpha_n g_n(x)$$


with $\alpha_i$ not all zero.


Assume for the sake of contradiction that $g(x)$ has $n$ distinct zeros $\hat{x}_1,\hat{x}_2,\ldots,\hat{x}_n$.


This implies the following $n$ equations:


$$\begin{aligned} g(\hat{x}_1)&=0=\alpha_1 g_1(\hat{x}_1)+\alpha_2 g_2(\hat{x}_1)+\cdots+\alpha_n g_n(\hat{x}_1)\\ g(\hat{x}_2)&=0=\alpha_1 g_1(\hat{x}_2)+\alpha_2 g_2(\hat{x}_2)+\cdots+\alpha_n g_n(\hat{x}_2)\\ &\ \ \vdots\\ g(\hat{x}_n)&=0=\alpha_1 g_1(\hat{x}_n)+\alpha_2 g_2(\hat{x}_n)+\cdots+\alpha_n g_n(\hat{x}_n) \end{aligned}$$


Rewriting the equations in matrix form yields


$$\begin{pmatrix} g_1(\hat{x}_1) & g_2(\hat{x}_1) & \cdots & g_n(\hat{x}_1)\\ g_1(\hat{x}_2) & g_2(\hat{x}_2) & \cdots & g_n(\hat{x}_2)\\ \vdots & \vdots & & \vdots\\ g_1(\hat{x}_n) & g_2(\hat{x}_n) & \cdots & g_n(\hat{x}_n) \end{pmatrix} \begin{pmatrix}\alpha_1\\ \alpha_2\\ \vdots\\ \alpha_n\end{pmatrix} = \begin{pmatrix}0\\ 0\\ \vdots\\ 0\end{pmatrix}$$


Since the $\alpha_i$ are not all zero, the matrix on the left, which is the matrix $A$ built from the distinct points $\hat{x}_1,\ldots,\hat{x}_n$, is singular, contradicting the hypothesis that $A$ is non-singular for every choice of $n$ distinct points.


Hence, $g$ has at most $n-1$ distinct zeros, and we have shown (ii).
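The equivalence in part (a) is easy to check numerically. The sketch below (not part of the original solution) uses the monomials $\{1,x,x^2,x^3\}$, a standard Chebyshev system on any interval, and verifies that the matrix $a_{i,j}=g_j(x_i)$ at four distinct points is non-singular:

```python
import numpy as np

# For the Chebyshev system {1, x, x^2, x^3} on [0, 1], the matrix
# a_ij = g_j(x_i) at any n = 4 distinct nodes is a Vandermonde matrix,
# which part (a) says must be non-singular.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, size=4))   # 4 distinct points in [a, b]
A = np.vander(x, N=4, increasing=True)       # columns: 1, x, x^2, x^3
print(abs(np.linalg.det(A)) > 1e-12)         # non-singular
```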

Problem 1b

Let $f\in C^{m+1}[a,b]$ be such that $f^{(m+1)}(x)\neq 0$ for all $x\in[a,b]$. Let $g_j(x)=x^{j-1}$, $j=1,\ldots,m+1$. Show that $\{g_1,\ldots,g_{m+1},f\}$ is a Chebyshev system. For this, you may use results from polynomial interpolation without proof.

Solution 1b

We have to prove:

(i) $\{1,x,x^2,\ldots,x^m,f\}$ is a linearly independent set.


(ii) Any linear combination of this set which is not identically zero has at most $m+1$ distinct zeros (the set has $m+2$ elements).

Proof of (i)

If we evaluate $g_1,g_2,\ldots,g_{m+1}$ at $m+1$ distinct points $\{x_1,x_2,\ldots,x_{m+1}\}$, we obtain the Vandermonde matrix


$$\begin{pmatrix} 1 & x_1 & x_1^2 & \cdots & x_1^m\\ 1 & x_2 & x_2^2 & \cdots & x_2^m\\ \vdots & \vdots & \vdots & & \vdots\\ 1 & x_{m+1} & x_{m+1}^2 & \cdots & x_{m+1}^m \end{pmatrix}$$


whose determinant is non-zero since the points are distinct. Hence, if a linear combination of $g_1,g_2,\ldots,g_{m+1}$ vanishes identically, evaluating it at these points forces all of its coefficients to be zero, so $g_1,g_2,\ldots,g_{m+1}$ are linearly independent.


Assume for the sake of contradiction that $f$ is a linear combination of $g_1,g_2,\ldots,g_{m+1}$, that is,


$$f(x)=\beta_0 g_1(x)+\beta_1 g_2(x)+\cdots+\beta_m g_{m+1}(x)=\beta_0+\beta_1 x+\beta_2 x^2+\cdots+\beta_m x^m.$$


Hence, $f(x)$ is a polynomial of degree at most $m$. Taking $m+1$ derivatives of $f(x)$ yields


$$f^{(m+1)}(x)=0,$$


which contradicts the hypothesis. Therefore $\{1,x,x^2,\ldots,x^m,f\}$ is a linearly independent set.

Proof of (ii)

Let $h(x)=\beta_0 g_1(x)+\beta_1 g_2(x)+\cdots+\beta_m g_{m+1}(x)+\beta_{m+1} f(x)$ be a linear combination which is not identically zero.

Suppose $h$ has $m+2$ (or more) distinct zeros in $[a,b]$. By repeated application of Rolle's Theorem, $h^{(m+1)}$ has at least one zero $\xi\in(a,b)$. Since the polynomial part of $h$ has degree at most $m$, its $(m+1)$st derivative vanishes, so

$$h^{(m+1)}(x)=\beta_{m+1} f^{(m+1)}(x).$$

If $\beta_{m+1}\neq 0$, then $f^{(m+1)}(\xi)=0$, contradicting the hypothesis. If $\beta_{m+1}=0$, then $h$ is a polynomial of degree at most $m$ with $m+2$ zeros, hence identically zero, again a contradiction. Hence $h$ has at most $m+1$ distinct zeros, and we have shown (ii).


(i) and (ii) show that $\{1,x,x^2,\ldots,x^m,f\}$ is a Chebyshev system.
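Part (b) can also be illustrated through part (a): taking $m=2$ and $f(x)=e^x$ (so $f'''(x)=e^x\neq 0$), the matrix $a_{i,j}=g_j(x_i)$ built from $\{1,x,x^2,e^x\}$ at any four distinct points should be non-singular. A small numerical sketch, with arbitrarily chosen nodes:

```python
import numpy as np

# m = 2 and f(x) = e^x, so f'''(x) = e^x != 0 on [0, 1]; hence
# {1, x, x^2, e^x} is a Chebyshev system, and by part (a) the 4x4
# collocation matrix at any four distinct nodes is non-singular.
x = np.array([0.1, 0.3, 0.6, 0.9])
A = np.column_stack([np.ones_like(x), x, x**2, np.exp(x)])
print(abs(np.linalg.det(A)) > 1e-12)
```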

Problem 2

Let

$$I_n(f)=\sum_{k=1}^n w_{n,k}\, f(x_{n,k}),\qquad a\le x_{n,k}\le b, \qquad (0)$$

be a sequence of integration rules.

Problem 2a

Suppose

$$\lim_{n\to\infty} I_n(x^k)=\int_a^b x^k\,dx,\qquad k=0,1,\ldots \qquad (1)$$

and

$$\sum_{k=1}^n |w_{n,k}|\le M,\qquad n=1,2,\ldots \qquad (2)$$

for some constant $M$. Show that

$$\lim_{n\to\infty} I_n(f)=\int_a^b f(x)\,dx$$

for all $f\in C[a,b]$.

Solution 2a

By the Weierstrass Approximation Theorem, given $\epsilon>0$, there exists a polynomial $p(x)$ such that

$$\max_{a\le x\le b}|f(x)-p(x)|\le \min\left\{\frac{\epsilon}{2}\frac{1}{M},\ \frac{\epsilon}{2}\frac{1}{b-a}\right\}. \qquad (3)$$

Let

$$I(f)=\int_a^b f(x)\,dx.$$

Adding and subtracting $I(p)$ and $I_n(p)$ yields


$$I(f)-I_n(f)=[I(f)-I(p)]+[I(p)-I_n(p)]+[I_n(p)-I_n(f)].$$


By the triangle inequality and equations (2) and (3),


$$\begin{aligned} |I(f)-I_n(f)| &\le |I(f-p)|+|I(p)-I_n(p)|+|I_n(p-f)|\\ &\le \int_a^b |f(x)-p(x)|\,dx+|I(p)-I_n(p)|+\sum_{k=1}^n |w_{n,k}|\,|f(x_{n,k})-p(x_{n,k})|\\ &\le \frac{\epsilon}{2(b-a)}\int_a^b dx+|I(p)-I_n(p)|+\frac{\epsilon}{2M}\sum_{k=1}^n |w_{n,k}|\\ &\le \frac{\epsilon}{2(b-a)}(b-a)+|I(p)-I_n(p)|+\frac{\epsilon}{2M}M\\ &= \epsilon+|I(p)-I_n(p)|. \end{aligned}$$

Since $p$ is a polynomial and both $I$ and $I_n$ are linear, applying equation (1) to each monomial of $p$ gives, as $n\to\infty$,

$$|I(p)-I_n(p)|\to 0.$$


Hence for arbitrary $\epsilon>0$,

$$\limsup_{n\to\infty}|I(f)-I_n(f)|\le\epsilon,$$


i.e.

$$I(f)=\lim_{n\to\infty} I_n(f).$$
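This theorem is exactly what makes classical composite rules converge for merely continuous integrands. As a numerical sketch (the composite trapezoidal rule is a standard example, not taken from the problem): its weights are positive and sum to $b-a$, so (2) holds with $M=b-a$, and it converges on every monomial, so (1) holds; the theorem then gives convergence for any $f\in C[a,b]$, even one with a kink:

```python
import numpy as np

# Composite trapezoidal rule on [0, 1]: positive weights summing to
# b - a = 1, so (2) holds with M = 1; it converges on each monomial,
# so (1) holds, and the theorem gives I_n(f) -> I(f) for continuous f.
def trapezoid(f, n):
    x = np.linspace(0.0, 1.0, n + 1)
    w = np.full(n + 1, 1.0 / n)
    w[0] = w[-1] = 0.5 / n                 # endpoint weights are halved
    return np.dot(w, f(x))

f = lambda x: np.abs(x - 0.5)              # continuous but not differentiable
exact = 0.25                               # integral of |x - 1/2| over [0, 1]
print(abs(trapezoid(f, 2000) - exact) < 1e-6)
```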

Problem 2b

Show that if all $w_{n,k}>0$, then (1) implies (2).

Solution 2b

For $k=0$, equation (1) yields

$$\lim_{n\to\infty} I_n(x^0)=\int_a^b x^0\,dx \quad\Longleftrightarrow\quad \lim_{n\to\infty} I_n(1)=\int_a^b 1\,dx=b-a.$$


Letting $f\equiv 1$ in equation (0) yields

$$I_n(1)=\sum_{k=1}^n w_{n,k}\cdot 1=\sum_{k=1}^n w_{n,k}.$$


Combining the two results above, the sequence $\left\{\sum_{k=1}^n w_{n,k}\right\}$ converges to $b-a$. A convergent sequence is bounded, so there exists a constant $M$ such that

$$\sum_{k=1}^n w_{n,k}\le M,\qquad n=1,2,\ldots$$


Since $w_{n,k}>0$,

$$\sum_{k=1}^n |w_{n,k}|=\sum_{k=1}^n w_{n,k}\le M,\qquad n=1,2,\ldots,$$

i.e. equation (2).
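A concrete instance of this argument (a standard fact, not part of the original solution): Gauss-Legendre weights are strictly positive, so their absolute sums are uniformly bounded by $b-a=2$ on $[-1,1]$:

```python
import numpy as np

# Gauss-Legendre weights are strictly positive, and applying the rule
# to f = 1 (the k = 0 case of (1)) shows they sum to b - a = 2 on
# [-1, 1], which is exactly the uniform bound (2).
for n in (2, 5, 10, 20):
    _, w = np.polynomial.legendre.leggauss(n)
    assert np.all(w > 0)                   # positive weights
    print(np.isclose(np.abs(w).sum(), 2.0))
```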

Problem 3

Consider the real system of linear equations

$$Ax=b \qquad (1)$$

where $A$ is non-singular and satisfies

$$(v,Av)>0$$

for all real $v\neq 0$, where the Euclidean inner product is used here.

Problem 3a

Show that $(v,Av)=(v,Mv)$ for all real $v$, where $M=\frac{1}{2}(A+A^T)$ is the symmetric part of $A$.

Solution 3a

Substituting for $M$ in $(v,Mv)$ and expanding the inner product, we have


$$(v,Mv)=\left(v,\tfrac{1}{2}(A+A^T)v\right)=\left(v,\tfrac{1}{2}Av+\tfrac{1}{2}A^Tv\right)=\tfrac{1}{2}(v,Av)+\tfrac{1}{2}(v,A^Tv).$$


From properties of the real inner product we have


$$(v,A^Tv)=(Av,v)=(v,Av).$$


Hence,


$$(v,Mv)=\tfrac{1}{2}(v,Av)+\tfrac{1}{2}(v,A^Tv)=\tfrac{1}{2}(v,Av)+\tfrac{1}{2}(v,Av)=(v,Av).$$
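The identity of part (a) is straightforward to check numerically; the matrix below is an arbitrary random example:

```python
import numpy as np

# (v, Av) = (v, Mv) with M = (A + A^T)/2: the skew-symmetric part of A
# contributes nothing to the quadratic form.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
M = 0.5 * (A + A.T)
v = rng.standard_normal(5)
print(np.isclose(v @ A @ v, v @ M @ v))
```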

Problem 3b

Prove that

$$\frac{(v,Av)}{(v,v)}\ge \lambda_{\min}(M)>0$$

where $\lambda_{\min}(M)$ is the minimum eigenvalue of $M$.

Solution 3b

First Inequality

By part (a),

$$\frac{(v,Av)}{(v,v)}=\frac{(v,Mv)}{(v,v)}=\frac{v^TMv}{v^Tv}.$$

Since $M$ is symmetric, it has an eigenvalue decomposition

$$M=Q^T\Lambda Q$$

where $Q$ is orthogonal and $\Lambda$ is a diagonal matrix containing the eigenvalues of $M$.

Substitution yields

$$\frac{(v,Av)}{(v,v)}=\frac{v^TQ^T\Lambda Qv}{v^Tv}.$$

Let

$$x=Qv.$$

This implies the following relationships:

$$v=Q^Tx,\qquad v^T=x^TQ.$$

Substituting,

$$\frac{v^TQ^T\Lambda Qv}{v^Tv}=\frac{x^T\Lambda x}{x^TQQ^Tx}=\frac{x^T\Lambda x}{x^Tx}.$$

Expanding the numerator, we have

$$x^T\Lambda x=\begin{pmatrix}x_1 & x_2 & \cdots & x_n\end{pmatrix}\begin{pmatrix}\lambda_1 & & \\ & \ddots & \\ & & \lambda_n\end{pmatrix}\begin{pmatrix}x_1\\ \vdots\\ x_n\end{pmatrix}=\lambda_1 x_1^2+\lambda_2 x_2^2+\cdots+\lambda_n x_n^2=\sum_{i=1}^n \lambda_i x_i^2.$$

Expanding the denominator yields

$$x^Tx=\sum_{i=1}^n x_i^2.$$

Substituting,

$$\frac{x^T\Lambda x}{x^Tx}=\frac{\sum_{i=1}^n \lambda_i x_i^2}{\sum_{i=1}^n x_i^2}\ge \frac{\lambda_{\min}(M)\sum_{i=1}^n x_i^2}{\sum_{i=1}^n x_i^2}=\lambda_{\min}(M).$$

Second Inequality

From part (a),

$$(v,Av)=(v,Mv)>0$$

for all real $v\neq 0$.


Therefore $M$ is positive definite, which implies all its eigenvalues are positive. In particular,


$$\lambda_{\min}(M)>0.$$
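Both inequalities of part (b) can be sanity-checked numerically. The matrix below is a random matrix shifted by a multiple of the identity so that its symmetric part is positive definite (an assumption chosen for the example, not taken from the problem):

```python
import numpy as np

# Rayleigh-quotient bound of part (b): (v, Av)/(v, v) >= lambda_min(M) > 0
# whenever the symmetric part M of A is positive definite.
rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((n, n)) + 10.0 * np.eye(n)  # shift makes M positive definite
M = 0.5 * (A + A.T)
lam_min = np.linalg.eigvalsh(M).min()
v = rng.standard_normal(n)
print(lam_min > 0)
print((v @ A @ v) / (v @ v) >= lam_min - 1e-12)
```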

Problem 3c

Now consider the iteration for computing a sequence of approximate solutions to (1),


$$x_{k+1}=x_k+\alpha r_k$$


where $r_k=b-Ax_k$ and $\alpha$ is chosen to minimize $\|r_{k+1}\|_2$ as a function of $\alpha$. Prove that


$$\frac{\|r_{k+1}\|_2}{\|r_k\|_2}\le \left(1-\frac{\lambda_{\min}(M)^2}{\lambda_{\max}(A^TA)}\right)^{\frac{1}{2}}.$$


Solution 3c

First, we want to write $\|r_{k+1}\|_2^2$ as a function of $\alpha$, i.e.


$$f(\alpha)=\|r_{k+1}\|_2^2.$$


Changing the index of $r_k$ from $k$ to $k+1$, substituting $x_{k+1}=x_k+\alpha r_k$, and applying the definition of the norm yields


$$\begin{aligned} \|r_{k+1}\|_2^2 &=\|b-Ax_{k+1}\|_2^2\\ &=\|b-Ax_k-\alpha Ar_k\|_2^2\\ &=\|r_k-\alpha Ar_k\|_2^2\\ &=(r_k-\alpha Ar_k,\ r_k-\alpha Ar_k)\\ &=(r_k,r_k)-\alpha(Ar_k,r_k)-\alpha(r_k,Ar_k)+\alpha^2(Ar_k,Ar_k). \end{aligned}$$


Since the Euclidean inner product of real vectors is symmetric (no symmetry of $A$ is needed),


$$(r_k,Ar_k)=(Ar_k,r_k).$$


Hence,


$$f(\alpha)=\|r_{k+1}\|_2^2=(r_k,r_k)-2\alpha(Ar_k,r_k)+\alpha^2(Ar_k,Ar_k).$$


Next we want to minimize $f(\alpha)$ as a function of $\alpha$. Taking its derivative with respect to $\alpha$ yields


$$f'(\alpha)=-2(Ar_k,r_k)+2\alpha(Ar_k,Ar_k).$$


Setting $f'(\alpha)=0$ and solving for $\alpha$ gives


$$\begin{aligned} 0&=-2(Ar_k,r_k)+2\alpha(Ar_k,Ar_k)\\ (Ar_k,r_k)&=\alpha(Ar_k,Ar_k)\\ \alpha&=\frac{(Ar_k,r_k)}{(Ar_k,Ar_k)}. \end{aligned}$$


Since $f''(\alpha)=2(Ar_k,Ar_k)>0$ for $r_k\neq 0$ ($A$ is non-singular), this critical point is indeed the minimizer.


Substituting for $\alpha$ into $\|r_{k+1}\|_2^2$ gives


$$\begin{aligned} \|r_{k+1}\|_2^2 &=(r_k,r_k)-2\frac{(Ar_k,r_k)}{(Ar_k,Ar_k)}(Ar_k,r_k)+\frac{(Ar_k,r_k)^2}{(Ar_k,Ar_k)^2}(Ar_k,Ar_k)\\ &=(r_k,r_k)-2\frac{(Ar_k,r_k)^2}{(Ar_k,Ar_k)}+\frac{(Ar_k,r_k)^2}{(Ar_k,Ar_k)}\\ &=(r_k,r_k)-\frac{(Ar_k,r_k)^2}{(Ar_k,Ar_k)}. \end{aligned}$$

By definition of the norm,


$$\|r_k\|_2^2=(r_k,r_k).$$


Hence


$$\frac{\|r_{k+1}\|_2^2}{\|r_k\|_2^2}=1-\frac{(Ar_k,r_k)^2}{(r_k,r_k)(Ar_k,Ar_k)}.$$


Multiplying and dividing the second term on the right-hand side by $(r_k,r_k)$ and using $(Ar_k,Ar_k)=(r_k,A^TAr_k)$ yields


$$\frac{\|r_{k+1}\|_2^2}{\|r_k\|_2^2}=1-\frac{(Ar_k,r_k)^2}{(r_k,r_k)^2}\cdot\frac{(r_k,r_k)}{(r_k,A^TAr_k)}.$$


From part (b), $\frac{(Ar_k,r_k)}{(r_k,r_k)}\ge\lambda_{\min}(M)$, and since $A^TA$ is symmetric, the same Rayleigh-quotient argument gives $\frac{(r_k,A^TAr_k)}{(r_k,r_k)}\le\lambda_{\max}(A^TA)$. Therefore


$$\frac{\|r_{k+1}\|_2^2}{\|r_k\|_2^2}\le 1-\frac{\lambda_{\min}(M)^2}{\lambda_{\max}(A^TA)},$$


and taking square roots gives the desired result:


$$\frac{\|r_{k+1}\|_2}{\|r_k\|_2}\le\left(1-\frac{\lambda_{\min}(M)^2}{\lambda_{\max}(A^TA)}\right)^{\frac{1}{2}}.$$
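The iteration of part (c) (often called the minimal residual or Orthomin(1) method) is short to implement, and the contraction factor proved above can be checked step by step. The test matrix below is a random matrix shifted so that $(v,Av)>0$, an assumption made for the example:

```python
import numpy as np

# Minimal residual iteration: x_{k+1} = x_k + alpha_k r_k with
# alpha_k = (A r_k, r_k) / (A r_k, A r_k), the minimizer derived above.
rng = np.random.default_rng(3)
n = 8
A = rng.standard_normal((n, n)) + 10.0 * np.eye(n)  # symmetric part positive definite
b = rng.standard_normal(n)

M = 0.5 * (A + A.T)
bound = np.sqrt(1.0 - np.linalg.eigvalsh(M).min() ** 2
                / np.linalg.eigvalsh(A.T @ A).max())  # factor from part (c)

x = np.zeros(n)
contracts = True
for _ in range(400):
    r = b - A @ x
    if np.linalg.norm(r) < 1e-12:
        break
    Ar = A @ r
    alpha = (Ar @ r) / (Ar @ Ar)
    x = x + alpha * r
    # each step must shrink the residual at least by the proven factor
    contracts &= np.linalg.norm(b - A @ x) <= bound * np.linalg.norm(r) + 1e-12
print(bool(contracts), np.linalg.norm(b - A @ x) < 1e-6)
```

Since `bound` is strictly below 1 whenever the symmetric part is positive definite, the residual norms decrease geometrically and the iteration converges.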
