Numerical Methods Qualification Exam Problems and Solutions (University of Maryland)/August 2005


Problem 1

Derive the one-, two-, and three-point Gaussian quadrature formulas such that

$$\int_{-1}^{1} f(x)\,x^4\,dx \approx \sum_{j=1}^{n} f(x_j)\,w_j$$

Give bounds on the error of these formulas.

Solution 1

Find Orthogonal Polynomials w.r.t. the Weight Function

For this problem, we need the first three orthogonal polynomials $P_1$, $P_2$, and $P_3$ with respect to the weighted inner product

$$\langle f, g\rangle = \int_{-1}^{1} f(x)\,g(x)\,w(x)\,dx$$

where $w(x) = x^4$ in our case. This can be done using Gram-Schmidt orthogonalization on the basis $\{1, x, x^2, x^3\}$.

Zero Order Polynomial

Let $P_0(x) = 1$.

First Order Polynomial

$$P_1(x) = x - \frac{\langle 1, x\rangle}{\langle 1, 1\rangle} = x - \frac{\int_{-1}^{1} x^5\,dx}{\int_{-1}^{1} x^4\,dx} = x - \frac{0}{2/5} = x$$

Second Order Polynomial

$$P_2(x) = x^2 - \frac{\langle 1, x^2\rangle}{\langle 1, 1\rangle} - \frac{\langle x, x^2\rangle}{\langle x, x\rangle}\,x = x^2 - \frac{\int_{-1}^{1} x^6\,dx}{\int_{-1}^{1} x^4\,dx} - x\,\frac{\int_{-1}^{1} x^7\,dx}{\int_{-1}^{1} x^6\,dx} = x^2 - \frac{2/7}{2/5} - x\,\frac{0}{2/7} = x^2 - \frac{5}{7}$$

Third Order Polynomial

$$P_3(x) = x^3 - \frac{\langle 1, x^3\rangle}{\langle 1, 1\rangle} - \frac{\langle x, x^3\rangle}{\langle x, x\rangle}\,x - \frac{\langle x^2 - \tfrac{5}{7},\, x^3\rangle}{\langle x^2 - \tfrac{5}{7},\, x^2 - \tfrac{5}{7}\rangle}\left(x^2 - \tfrac{5}{7}\right) = x^3 - 0 - x\,\frac{\int_{-1}^{1} x^8\,dx}{\int_{-1}^{1} x^6\,dx} - 0 = x^3 - \frac{7}{9}\,x$$

Find Roots of Polynomials

The roots of these polynomials will be the interpolation nodes used in Gaussian quadrature.

$$\begin{array}{ll} \text{Polynomial} & \text{Roots} \\ \hline P_1(x) = x & 0 \\ P_2(x) = x^2 - \frac{5}{7} & \pm\sqrt{\frac{5}{7}} \\ P_3(x) = x^3 - \frac{7}{9}x & 0,\ \pm\sqrt{\frac{7}{9}} \end{array}$$
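As a quick sanity check, the orthogonality of these polynomials under $w(x) = x^4$ can be verified with exact rational arithmetic. The sketch below (the coefficient-list encoding is an illustrative choice, not part of the problem) uses the fact that $\int_{-1}^{1} x^n\,dx = \frac{2}{n+1}$ for even $n$ and $0$ for odd $n$:

```python
from fractions import Fraction

def moment(n):
    # ∫_{-1}^{1} x^n dx = 2/(n+1) for even n, 0 for odd n
    return Fraction(2, n + 1) if n % 2 == 0 else Fraction(0)

def inner(p, q):
    # <p, q> = ∫_{-1}^{1} p(x) q(x) x^4 dx, with p, q given as coefficient
    # lists (index i holds the coefficient of x^i)
    total = Fraction(0)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            total += a * b * moment(i + j + 4)
    return total

# P0 = 1, P1 = x, P2 = x^2 - 5/7, P3 = x^3 - (7/9) x
P0 = [Fraction(1)]
P1 = [Fraction(0), Fraction(1)]
P2 = [Fraction(-5, 7), Fraction(0), Fraction(1)]
P3 = [Fraction(0), Fraction(-7, 9), Fraction(0), Fraction(1)]

polys = [P0, P1, P2, P3]
for i in range(4):
    for j in range(i):
        assert inner(polys[i], polys[j]) == 0
print("all pairs orthogonal w.r.t. w(x) = x^4")
```

Every distinct pair has inner product exactly zero, confirming the Gram-Schmidt computation above.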

Formula to Compute Coefficients

In Gaussian quadrature the weights are

$$w_i = \int_{-1}^{1} l_i(x)\,x^4\,dx, \quad \text{where} \quad l_i(x) = \prod_{\substack{j=1 \\ j\neq i}}^{n} \frac{x - x_j}{x_i - x_j}.$$

1 point formula

$$l_1(x) = 1$$


$$w_1 = \int_{-1}^{1} x^4\,dx = \frac{2}{5}$$


$$\int_{-1}^{1} f(x)\,x^4\,dx \approx \frac{2}{5}\,f(x_1)$$

where $x_1 = 0$ is the root of $P_1(x)$.

2 point formula

$$l_1(x) = \frac{x - x_2}{x_1 - x_2}$$

$$l_2(x) = \frac{x - x_1}{x_2 - x_1}$$


$$w_1 = \int_{-1}^{1} \frac{x - x_2}{x_1 - x_2}\,x^4\,dx = \frac{2x_2}{5(x_2 - x_1)}$$

$$w_2 = \int_{-1}^{1} \frac{x - x_1}{x_2 - x_1}\,x^4\,dx = \frac{2x_1}{5(x_1 - x_2)}$$


$$\int_{-1}^{1} f(x)\,x^4\,dx \approx w_1 f(x_1) + w_2 f(x_2)$$

where $x_1 = -\sqrt{5/7}$ and $x_2 = \sqrt{5/7}$ are the roots of $P_2(x)$; substituting these values gives $w_1 = w_2 = \frac{1}{5}$.

3 point formula

$$l_1(x) = \frac{x - x_2}{x_1 - x_2}\cdot\frac{x - x_3}{x_1 - x_3}$$

$$l_2(x) = \frac{x - x_1}{x_2 - x_1}\cdot\frac{x - x_3}{x_2 - x_3}$$

$$l_3(x) = \frac{x - x_1}{x_3 - x_1}\cdot\frac{x - x_2}{x_3 - x_2}$$


$$w_1 = \int_{-1}^{1} \frac{(x - x_2)(x - x_3)}{(x_1 - x_2)(x_1 - x_3)}\,x^4\,dx = \frac{1}{(x_1 - x_2)(x_1 - x_3)}\left(\frac{2}{7} + \frac{2 x_2 x_3}{5}\right)$$

$$w_2 = \int_{-1}^{1} \frac{(x - x_1)(x - x_3)}{(x_2 - x_1)(x_2 - x_3)}\,x^4\,dx = \frac{1}{(x_2 - x_1)(x_2 - x_3)}\left(\frac{2}{7} + \frac{2 x_1 x_3}{5}\right)$$

$$w_3 = \int_{-1}^{1} \frac{(x - x_1)(x - x_2)}{(x_3 - x_1)(x_3 - x_2)}\,x^4\,dx = \frac{1}{(x_3 - x_1)(x_3 - x_2)}\left(\frac{2}{7} + \frac{2 x_1 x_2}{5}\right)$$


$$\int_{-1}^{1} f(x)\,x^4\,dx \approx w_1 f(x_1) + w_2 f(x_2) + w_3 f(x_3)$$

where $x_1 = -\sqrt{7/9}$, $x_2 = 0$, and $x_3 = \sqrt{7/9}$ are the roots of $P_3(x)$; substituting these values gives $w_1 = w_3 = \frac{9}{49}$ and $w_2 = \frac{8}{245}$.
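These rules can be checked numerically: an $n$-point Gaussian rule must reproduce $\int_{-1}^{1} x^k \cdot x^4\,dx$ exactly for every $k \le 2n - 1$. A short Python sketch (the explicit 3-point weight values are those evaluated above):

```python
import math

# Nodes (roots of P_n) and weights for approximating ∫_{-1}^{1} f(x) x^4 dx
r = math.sqrt(5.0 / 7.0)
s = math.sqrt(7.0 / 9.0)
rules = {
    1: ([0.0], [2.0 / 5.0]),
    2: ([-r, r], [1.0 / 5.0, 1.0 / 5.0]),
    3: ([-s, 0.0, s], [9.0 / 49.0, 8.0 / 245.0, 9.0 / 49.0]),
}

def exact_moment(k):
    # ∫_{-1}^{1} x^k · x^4 dx = 2/(k+5) for even k, 0 for odd k
    return 2.0 / (k + 5) if k % 2 == 0 else 0.0

# An n-point Gaussian rule is exact for all monomials up to degree 2n - 1.
for n, (nodes, weights) in rules.items():
    for k in range(2 * n):
        approx = sum(w * x**k for x, w in zip(nodes, weights))
        assert abs(approx - exact_moment(k)) < 1e-12, (n, k)
print("each n-point rule is exact through degree 2n - 1")
```

Note also that each weight set sums to $\int_{-1}^{1} x^4\,dx = \frac{2}{5}$, as it must for exactness on constants.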

Derive Error Bound

We know that this quadrature is exact for all polynomials of degree at most $2n - 1$.


We choose a polynomial $p$ of degree at most $2n - 1$ that Hermite-interpolates $f$, i.e.


$$p(x_i) = f(x_i), \quad p'(x_i) = f'(x_i) \qquad (0 \le i \le n-1)$$


The error for this interpolation is


$$f(x) - p(x) = \frac{f^{(2n)}(\zeta(x))}{(2n)!} \prod_{i=0}^{n-1} (x - x_i)^2$$


Compute the error of the quadrature as follows:


$$\begin{aligned} E &= \int_{-1}^{1} f(x)\,x^4\,dx - \sum_{i=0}^{n-1} w_i f(x_i) \\ &= \int_{-1}^{1} f(x)\,x^4\,dx - \sum_{i=0}^{n-1} w_i p(x_i) \\ &= \int_{-1}^{1} f(x)\,x^4\,dx - \int_{-1}^{1} p(x)\,x^4\,dx \\ &= \int_{-1}^{1} \left(f(x) - p(x)\right) x^4\,dx \\ &= \int_{-1}^{1} \frac{f^{(2n)}(\zeta(x))}{(2n)!} \prod_{i=0}^{n-1}(x - x_i)^2\,x^4\,dx \\ &= \frac{f^{(2n)}(\xi)}{(2n)!} \int_{-1}^{1} x^4 \underbrace{\prod_{i=0}^{n-1}(x - x_i)^2}_{(p_n(x))^2}\,dx \end{aligned}$$

The third equality uses the fact that the quadrature is exact for $p$, since $\deg p \le 2n - 1$.


where the last line follows from the mean value theorem for integrals.


Notice that $p_n(x)$ is simply the $n$th orthogonal polynomial with respect to the weight function, since the $x_i$ are its roots.


Hence, the error for 1-point Gaussian quadrature is


$$E = \frac{f^{(2)}(\xi)}{2!} \int_{-1}^{1} x^4 \cdot x^2\,dx = \frac{f''(\xi)}{7}$$


The error for 2 point quadrature:


$$E = \frac{f^{(4)}(\xi)}{4!} \int_{-1}^{1} x^4 \left(x^2 - \frac{5}{7}\right)^2 dx = \frac{f^{(4)}(\xi)}{4!}\cdot\frac{8}{441} = \frac{f^{(4)}(\xi)}{1323}$$


The error for 3 point quadrature:


$$E = \frac{f^{(6)}(\xi)}{6!} \int_{-1}^{1} x^4 \left(x^3 - \frac{7}{9}x\right)^2 dx = \frac{f^{(6)}(\xi)}{6!}\cdot\frac{8}{891} = \frac{f^{(6)}(\xi)}{80190}$$
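The 1-point error formula can be sanity-checked with $f(x) = x^2$, for which $f'' \equiv 2$, so the predicted error is exactly $\frac{2}{7}$. A minimal Python check:

```python
# f(x) = x^2: the 1-point rule gives (2/5) f(0) = 0, while the true value of
# ∫_{-1}^{1} x^2 · x^4 dx is 2/7, so the error should equal f''(ξ)/7 = 2/7.
exact = 2.0 / 7.0
approx = (2.0 / 5.0) * 0.0**2
error = exact - approx
assert abs(error - 2.0 / 7.0) < 1e-15
print("observed error:", error)
```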

Problem 2

We wish to solve Ax=b iteratively where


$$A = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 2 & -1 \\ -1 & -1 & 1 \end{pmatrix}$$


Show that for this A the Jacobi method and the Gauss-Seidel method both converge. Explain why for this A one of these methods is better than the other.

Solution 2

Jacobi Method

Decompose $A$ into its diagonal, strictly lower, and strictly upper triangular parts, i.e.


$$A = D + (L + U)$$


Derive Jacobi iteration by solving for x as follows:


$$\begin{aligned} Ax &= b \\ (D + (L + U))x &= b \\ Dx + (L + U)x &= b \\ Dx &= b - (L + U)x \\ x &= D^{-1}\left(b - (L + U)x\right) \end{aligned}$$

Gauss-Seidel Method

Decompose $A$ into its diagonal, strictly lower, and strictly upper triangular parts, i.e.


$$A = (D + L) + U$$


Derive the Gauss-Seidel iteration by solving for $x$ as follows:


$$\begin{aligned} Ax &= b \\ ((D + L) + U)x &= b \\ (D + L)x + Ux &= b \\ (D + L)x &= b - Ux \\ x &= (D + L)^{-1}\left(b - Ux\right) \end{aligned}$$

Jacobi Convergence

Convergence occurs for the Jacobi iteration if the spectral radius of the iteration matrix $-D^{-1}(L+U)$ is less than 1, i.e.

$$\rho\left(-D^{-1}(L+U)\right) < 1$$


The eigenvalues of the matrix


$$-D^{-1}(L + U) = \begin{pmatrix} 0 & -1 & -1 \\ -\frac{1}{2} & 0 & \frac{1}{2} \\ 1 & 1 & 0 \end{pmatrix}$$


are $\lambda = 0, 0, 0$, i.e. the eigenvalue $0$ has algebraic multiplicity 3 (the iteration matrix is nilpotent).


Therefore, the spectral radius is $\rho = 0 < 1$.

Gauss-Seidel Convergence

Convergence occurs for the Gauss-Seidel iteration if the spectral radius of the iteration matrix $-(D+L)^{-1}U$ is less than 1, i.e.

$$\rho\left(-(D+L)^{-1}U\right) < 1$$


The eigenvalues of the matrix


$$-(D + L)^{-1}U = \begin{pmatrix} 0 & -1 & -1 \\ 0 & \frac{1}{2} & 1 \\ 0 & -\frac{1}{2} & 0 \end{pmatrix}$$


are $\lambda = 0,\ \frac{1}{4} + \frac{\sqrt{7}}{4}i,\ \frac{1}{4} - \frac{\sqrt{7}}{4}i$.

Therefore, the spectral radius is $\rho = \sqrt{\frac{1}{16} + \frac{7}{16}} = \frac{\sqrt{2}}{2} \approx 0.7 < 1$.

Comparison of Methods

In general, the Gauss-Seidel iteration converges faster than the Jacobi iteration, since Gauss-Seidel uses the new components of the iterate as soon as they become available. In this case, however, $\rho_{\text{Jacobi}} = 0 < \rho_{\text{Gauss-Seidel}} = \frac{\sqrt{2}}{2}$, so the Jacobi iteration is faster; in fact, since its iteration matrix is nilpotent of index 3, Jacobi produces the exact solution in at most three steps.
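The comparison can be checked numerically. The Python/NumPy sketch below uses the matrix $A$ analyzed above; the right-hand side $b$ and the zero initial guess are arbitrary illustrative choices:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, -1.0],
              [-1.0, -1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])       # arbitrary right-hand side
x_true = np.linalg.solve(A, b)

D = np.diag(np.diag(A))             # diagonal part
L = np.tril(A, -1)                  # strictly lower part
U = np.triu(A, 1)                   # strictly upper part

# Spectral radii of the two iteration matrices
rho_jacobi = max(abs(np.linalg.eigvals(-np.linalg.inv(D) @ (L + U))))
rho_gauss_seidel = max(abs(np.linalg.eigvals(-np.linalg.inv(D + L) @ U)))
print(f"rho(Jacobi) ~ {rho_jacobi:.1e}, rho(Gauss-Seidel) = {rho_gauss_seidel:.4f}")

# Run three sweeps of each method from the zero vector.
x_j = np.zeros(3)
x_gs = np.zeros(3)
for _ in range(3):
    x_j = np.linalg.solve(D, b - (L + U) @ x_j)    # Jacobi sweep
    x_gs = np.linalg.solve(D + L, b - U @ x_gs)    # Gauss-Seidel sweep

# The Jacobi iteration matrix is nilpotent of index 3, so the Jacobi error
# vanishes after three sweeps; Gauss-Seidel only converges geometrically.
assert np.allclose(x_j, x_true)
print("Jacobi error after 3 sweeps:", np.linalg.norm(x_j - x_true))
print("Gauss-Seidel error after 3 sweeps:", np.linalg.norm(x_gs - x_true))
```

The printed Gauss-Seidel spectral radius agrees with $\frac{\sqrt{2}}{2} \approx 0.7071$, while the Jacobi iterate matches the exact solution to machine precision.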

Problem 3

Consider the three-term polynomial recurrence

$$p_{k+1}(x) = (x - \mu_k)\,p_k(x) - \nu_k^2\,p_{k-1}(x) \quad \text{for } k = 1, 2, \ldots,$$

initialized by $p_0(x) = 1$ and $p_1(x) = x - \mu_0$, where each $\mu_k$ and $\nu_k$ is real and each $\nu_k$ is nonzero.

Problem 3a

Prove that each $p_k(x)$ is a monic polynomial of degree $k$, and for every $n = 0, 1, \ldots$ one has

$$\operatorname{span}\{p_0(x), p_1(x), \ldots, p_n(x)\} = \operatorname{span}\{1, x, x^2, \ldots, x^n\}$$


Solution 3a

We prove the claim by induction.

Base Case

$p_0(x) = 1$ is monic of degree zero, and $\operatorname{span}\{p_0(x)\} = \operatorname{span}\{1\}$.


$p_1(x) = x - \mu_0$ is monic of degree one, and $\operatorname{span}\{p_0(x), p_1(x)\} = \operatorname{span}\{1, x\}$.


$p_2(x) = (x - \mu_0)(x - \mu_1) - \nu_1^2$ is monic of degree two, and $\operatorname{span}\{p_0(x), p_1(x), p_2(x)\} = \operatorname{span}\{1, x, x^2\}$.

Induction Step

Assume $p_{k-1}(x)$ is monic of degree $k-1$, and $\operatorname{span}\{p_0(x), p_1(x), \ldots, p_{k-1}(x)\} = \operatorname{span}\{1, x, \ldots, x^{k-1}\}$.


Also, assume $p_k(x)$ is monic of degree $k$, and $\operatorname{span}\{p_0(x), p_1(x), \ldots, p_k(x)\} = \operatorname{span}\{1, x, \ldots, x^k\}$.


Then from the recurrence, $p_{k+1}(x) = (x - \mu_k)\,p_k(x) - \nu_k^2\,p_{k-1}(x)$ is monic of degree $k+1$.


Also,


$$\operatorname{span}\{p_0(x), p_1(x), \ldots, p_{k+1}(x)\} = \operatorname{span}\{1, x, \ldots, x^{k+1}\}$$


since $p_{k+1}(x)$ is just $x^{k+1}$ plus a linear combination of lower-order terms already in $\operatorname{span}\{1, x, \ldots, x^k\}$.

Problem 3b

Show that for every $k = 0, 1, \ldots$ the polynomial $p_k(x)$ has $k$ simple real roots that interlace with the roots of $p_{k-1}(x)$.

Solution 3b

We prove the claim by induction.

Base Case

Let's consider the case k=2. We know that


$$p_2(x) = (x - \mu_0)(x - \mu_1) - \nu_1^2.$$


Since the discriminant $(\mu_0 - \mu_1)^2 + 4\nu_1^2$ is strictly positive, the quadratic formula shows that $p_2(x)$ has two simple real roots.


Let $r_{2,1}$ and $r_{2,2}$ be the zeros of $p_2(x)$. Then we have (because of sign changes) that:

$p_2(\mu_0) = -\nu_1^2 < 0$ and $\lim_{x\to\infty} p_2(x) = +\infty$, so there exists $r_{2,2} \in (\mu_0, \infty)$ such that $p_2(r_{2,2}) = 0$.

$p_2(\mu_0) = -\nu_1^2 < 0$ and $\lim_{x\to-\infty} p_2(x) = +\infty$, so there exists $r_{2,1} \in (-\infty, \mu_0)$ such that $p_2(r_{2,1}) = 0$.


In conclusion, $r_{2,1} < \mu_0 = r_{1,1} < r_{2,2}$.

Induction Step

Let $r_{k-1,1}, \ldots, r_{k-1,k-1}$ and $r_{k,1}, \ldots, r_{k,k}$ be the simple real roots of $p_{k-1}(x)$ and $p_k(x)$, respectively, such that the roots of $p_k$ are interlaced with the roots of $p_{k-1}$, i.e.,


$$r_{k,1} < r_{k-1,1} < r_{k,2} < r_{k-1,2} < \cdots < r_{k-1,k-1} < r_{k,k}.$$


Then, we have that


$$p_{k+1}(r_{k,1}) = (r_{k,1} - \mu_k)\underbrace{p_k(r_{k,1})}_{=0} - \nu_k^2\,p_{k-1}(r_{k,1}) = -\nu_k^2\,p_{k-1}(r_{k,1}),$$


$$p_{k+1}(r_{k,2}) = (r_{k,2} - \mu_k)\underbrace{p_k(r_{k,2})}_{=0} - \nu_k^2\,p_{k-1}(r_{k,2}) = -\nu_k^2\,p_{k-1}(r_{k,2}).$$


From the induction hypothesis, $p_{k-1}(r_{k,1})$ and $p_{k-1}(r_{k,2})$ have different signs, since $r_{k,1} < r_{k-1,1} < r_{k,2}$ and $r_{k-1,1}$ is a simple root of $p_{k-1}$; hence $p_{k+1}(r_{k,1})$ and $p_{k+1}(r_{k,2})$ also have different signs. By the intermediate value theorem there exists $r_{k+1,1} \in (r_{k,1}, r_{k,2})$ such that $p_{k+1}(r_{k+1,1}) = 0$.


Proceeding in the same manner for all the intervals $(r_{k,j}, r_{k,j+1})$, we obtain that there exists $r_{k+1,j} \in (r_{k,j}, r_{k,j+1})$ such that $p_{k+1}(r_{k+1,j}) = 0$ for $j = 1, \ldots, k-1$.


We now consider the smallest and largest roots of $p_{k+1}$, i.e. $r_{k+1,0}$ and $r_{k+1,k}$.


Since for $k = 0, 1, 2, \ldots$ each $p_k$ is a monic polynomial,

$$\lim_{x\to\infty} p_k(x) = +\infty.$$

Hence for any $x > r_{k,k}$ (any $x$ larger than the largest root of $p_k$),

$$p_k(x) > 0.$$


Therefore


$$\lim_{x\to\infty} p_{k+1}(x) = +\infty \quad \text{and} \quad p_{k+1}(r_{k,k}) = -\nu_k^2\,p_{k-1}(r_{k,k}) < 0$$

(where $p_{k-1}(r_{k,k}) > 0$ since $r_{k,k}$ exceeds the largest root of $p_{k-1}$), which implies there exists $r_{k+1,k} \in (r_{k,k}, \infty)$ such that $p_{k+1}(r_{k+1,k}) = 0$.


If $k+1$ is even, then by similar reasoning

$$\lim_{x\to-\infty} p_{k+1}(x) = +\infty \quad \text{and} \quad p_{k+1}(r_{k,1}) = -\nu_k^2\,p_{k-1}(r_{k,1}) < 0,$$

so there exists $r_{k+1,0} \in (-\infty, r_{k,1})$ such that $p_{k+1}(r_{k+1,0}) = 0$.


If $k+1$ is odd,

$$\lim_{x\to-\infty} p_{k+1}(x) = -\infty \quad \text{and} \quad p_{k+1}(r_{k,1}) = -\nu_k^2\,p_{k-1}(r_{k,1}) > 0,$$

so there exists $r_{k+1,0} \in (-\infty, r_{k,1})$ such that $p_{k+1}(r_{k+1,0}) = 0$.
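The interlacing property can be observed numerically by generating the $p_k$ from the recurrence. In the Python/NumPy sketch below, the specific $\mu_k$ and $\nu_k$ values are arbitrary illustrative choices, not taken from the problem:

```python
import numpy as np

mu = [0.0, 1.0, -0.5, 0.3]   # arbitrary real mu_k
nu = [1.0, 0.7, 1.2]         # arbitrary nonzero nu_k

# Build p_0, p_1, ... via p_{k+1} = (x - mu_k) p_k - nu_k^2 p_{k-1},
# storing coefficients highest-degree first (numpy's poly convention).
polys = [np.array([1.0]), np.array([1.0, -mu[0]])]
for k in range(1, len(mu)):
    polys.append(np.polysub(np.polymul([1.0, -mu[k]], polys[-1]),
                            nu[k - 1] ** 2 * polys[-2]))

for k in range(2, len(polys)):
    inner_roots = np.sort(np.roots(polys[k - 1]).real)  # roots of p_{k-1}
    outer_roots = np.sort(np.roots(polys[k]).real)      # roots of p_k
    # Strict interlacing: one root of p_{k-1} between consecutive roots of p_k
    assert all(outer_roots[j] < inner_roots[j] < outer_roots[j + 1]
               for j in range(len(inner_roots)))
print("roots of p_{k-1} strictly interlace the roots of p_k, for each k")
```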

Problem 3c

Show that for every $n = 0, 1, \ldots$ the roots of $p_{n+1}(x)$ are the eigenvalues of the symmetric tridiagonal matrix


$$T = \begin{pmatrix} \mu_0 & \nu_1 & & & 0 \\ \nu_1 & \mu_1 & \nu_2 & & \\ & \nu_2 & \ddots & \ddots & \\ & & \ddots & \ddots & \nu_n \\ 0 & & & \nu_n & \mu_n \end{pmatrix}$$


Solution 3c

Let $T_{n+1} = \begin{pmatrix} \mu_0 & \nu_1 & & & 0 \\ \nu_1 & \mu_1 & \nu_2 & & \\ & \nu_2 & \ddots & \ddots & \\ & & \ddots & \ddots & \nu_n \\ 0 & & & \nu_n & \mu_n \end{pmatrix}$


Then $\det(xI_{n+1} - T_{n+1})$ is a monic polynomial in $x$ of degree $n+1$.


Expanding this determinant along the last row, we have

$$\det(xI_{n+1} - T_{n+1}) = (x - \mu_n)\det(xI_n - T_n) - \nu_n^2\,\det(xI_{n-1} - T_{n-1}) \qquad (1)$$

where $\det(xI_n - T_n)$ and $\det(xI_{n-1} - T_{n-1})$ are monic polynomials of degree $n$ and $n-1$, respectively.


Notice that $\det(xI_1 - T_1) = x - \mu_0 = p_1(x)$, and if we let $\det(xI_0 - T_0) \equiv 1 = p_0(x)$, then $(1)$ is equivalent to the three-term recurrence stated in the problem.


Thus $p_{n+1}(x) = \det(xI_{n+1} - T_{n+1})$, which shows that the roots of $p_{n+1}(x)$ are precisely the eigenvalues of $T_{n+1}$.
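This equivalence is easy to verify numerically: build $p_{n+1}$ from the recurrence, assemble $T_{n+1}$, and compare roots against eigenvalues. In the Python/NumPy sketch below, the coefficient values are arbitrary illustrative choices:

```python
import numpy as np

mu = [0.5, -1.0, 2.0, 0.0]   # arbitrary real diagonal entries mu_0 .. mu_n
nu = [1.0, 0.5, 1.5]         # arbitrary nonzero off-diagonal entries nu_1 .. nu_n

# p_{n+1} via the three-term recurrence (coefficients highest-degree first)
p_prev = np.array([1.0])             # p_0
p_curr = np.array([1.0, -mu[0]])     # p_1 = x - mu_0
for k in range(1, len(mu)):
    p_prev, p_curr = p_curr, np.polysub(np.polymul([1.0, -mu[k]], p_curr),
                                        nu[k - 1] ** 2 * p_prev)

# Symmetric tridiagonal matrix with diagonal mu and off-diagonals nu
T = np.diag(mu) + np.diag(nu, 1) + np.diag(nu, -1)

roots = np.sort(np.roots(p_curr).real)
eigenvalues = np.sort(np.linalg.eigvalsh(T))
assert np.allclose(roots, eigenvalues)
print("roots of p_{n+1}:", roots)
print("eigenvalues of T:", eigenvalues)
```

The sorted roots and eigenvalues agree to machine precision, as the determinant recurrence predicts.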
