Numerical Methods Qualification Exam Problems and Solutions (University of Maryland)/January 2007


Problem 1

Let $x_1, x_2, \ldots, x_n$ be distinct real numbers, and consider the matrix

\[
V = \begin{pmatrix}
1 & x_1 & x_1^2 & \cdots & x_1^{n-1} \\
1 & x_2 & x_2^2 & \cdots & x_2^{n-1} \\
\vdots & \vdots & \vdots & & \vdots \\
1 & x_n & x_n^2 & \cdots & x_n^{n-1}
\end{pmatrix}
\]

Prove that $V^{-1}$ can be expressed as

\[
V^{-1} = \bigl( \alpha^{(1)}, \alpha^{(2)}, \alpha^{(3)}, \ldots, \alpha^{(n)} \bigr)
\]

where the column vector $\alpha^{(j)} = \bigl( \alpha_1^{(j)}, \alpha_2^{(j)}, \ldots, \alpha_n^{(j)} \bigr)^T$ generates a polynomial

\[
p_j(x) = \sum_{i=1}^{n} \alpha_i^{(j)} x^{i-1}
\]

satisfying

\[
p_j(x) = \frac{\prod_{i=1,\, i \neq j}^{n} (x - x_i)}{\prod_{i=1,\, i \neq j}^{n} (x_j - x_i)}
\]

You may cite anything you know about polynomial interpolation to justify that $V$ is nonsingular.

Solution 1

We want to show that

\[
V V^{-1} = I,
\]

or equivalently that the $(i,j)$ entry of $V V^{-1}$ is $1$ for $i = j$ and $0$ for $i \neq j$, i.e.

\[
\{ V V^{-1} \}_{ij} = \delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases}
\]

First notice that

\[
\{ V V^{-1} \}_{ij}
= \begin{pmatrix} 1 & x_i & x_i^2 & \cdots & x_i^{n-1} \end{pmatrix}
\begin{pmatrix} \alpha_1^{(j)} \\ \alpha_2^{(j)} \\ \vdots \\ \alpha_n^{(j)} \end{pmatrix}
= \sum_{k=1}^{n} \alpha_k^{(j)} x_i^{k-1}
\]

Also notice that $p_j(x_i) = \delta_{ij}$: in the product formula, the numerator vanishes at every node $x_i$ with $i \neq j$, while at $x_j$ the numerator equals the denominator. Writing $p_j$ in the monomial basis with coefficients $\alpha^{(j)}$,

\[
p_j(x_i) = \sum_{k=1}^{n} \alpha_k^{(j)} x_i^{k-1} = \delta_{ij}
\]

Hence,

\[
\{ V V^{-1} \}_{ij} = p_j(x_i) = \delta_{ij}
\]
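A small numerical check of this identity (the nodes $0, 1, 3$ are an arbitrary illustrative choice): expanding each Lagrange basis polynomial $p_j$ into monomial coefficients $\alpha^{(j)}$ and evaluating at the nodes reproduces $p_j(x_i) = \delta_{ij}$, i.e. $V \alpha^{(j)} = e_j$, so the $\alpha^{(j)}$ are indeed the columns of $V^{-1}$.

```python
nodes = [0.0, 1.0, 3.0]          # distinct x_i (chosen arbitrarily)
n = len(nodes)

def lagrange_coeffs(j):
    """Monomial coefficients (low degree first) of
    p_j(x) = prod_{i != j} (x - x_i) / (x_j - x_i)."""
    coeffs = [1.0]               # start from the constant polynomial 1
    for i, xi in enumerate(nodes):
        if i == j:
            continue
        d = nodes[j] - xi        # denominator factor (x_j - x_i)
        # multiply the current polynomial by (x - xi) / d
        new = [0.0] * (len(coeffs) + 1)
        for k, c in enumerate(coeffs):
            new[k]     += -xi * c / d
            new[k + 1] +=        c / d
        coeffs = new
    return coeffs

# Row i of V is (1, x_i, x_i^2, ...); check {V V^{-1}}_ij = p_j(x_i) = delta_ij.
for j in range(n):
    alpha = lagrange_coeffs(j)
    for i, xi in enumerate(nodes):
        pj_xi = sum(a * xi**k for k, a in enumerate(alpha))
        expected = 1.0 if i == j else 0.0
        assert abs(pj_xi - expected) < 1e-12
```

Evaluating $p_j$ at $x_i$ is exactly the dot product of row $i$ of $V$ with column $j$ of the candidate inverse, so the assertions verify $VV^{-1} = I$ entrywise.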

Problem 2

Let $A$ be a real matrix of order $n$ with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ and $n$ linearly independent eigenvectors $v_1, v_2, \ldots, v_n$.

Suppose you want to find an eigenvector $v_j$ corresponding to the eigenvalue $\lambda_j$, and you are given $\sigma$ such that

\[
|\lambda_j - \sigma| < |\lambda_i - \sigma| \quad \text{for all } i \neq j
\]

Specify a numerical algorithm for finding $v_j$, and give a convergence proof demonstrating that the algorithm converges under appropriate circumstances.

Solution 2

Shifted Inverse Power Method

Let

\[
\tilde{A} = (A - \sigma I)^{-1}
\]

Then the eigenvalues of $\tilde{A}$ are

\[
\tilde{\lambda}_i = \frac{1}{\lambda_i - \sigma},
\]

which, by the hypothesis on $\sigma$, implies

\[
|\tilde{\lambda}_j| > |\tilde{\lambda}_i| \quad \text{for all } i \neq j
\]

Since shifting the eigenvalues and inverting the matrix does not affect the eigenvectors,

\[
\tilde{A} v_i = \tilde{\lambda}_i v_i, \quad i = 1, 2, \ldots, n
\]


Assume $\|v_i\| = 1$ for all $i$. Generate a sequence $w_0, w_1, w_2, \ldots$ that converges to $\pm v_j$. Start with an arbitrary $w_0$ such that $\|w_0\| = 1$.


For $k = 0, 1, 2, \ldots$
  $\hat{w}_{k+1} = \tilde{A} w_k$  (in practice, solve $(A - \sigma I)\hat{w}_{k+1} = w_k$)
  $w_{k+1} = \dfrac{\hat{w}_{k+1}}{\|\hat{w}_{k+1}\|}$
  $\lambda_j^{(k+1)} = \dfrac{(\hat{w}_{k+1}, w_k)}{(w_k, w_k)} = \dfrac{(\tilde{A} w_k, w_k)}{(w_k, w_k)}$  (Rayleigh quotient)
End

Note that this Rayleigh quotient approximates $\tilde{\lambda}_j$; the corresponding eigenvalue of $A$ can be recovered as $\lambda_j = \sigma + 1/\tilde{\lambda}_j$.
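As a concrete sketch of the algorithm above (the $2 \times 2$ matrix, the shift $\sigma = 2.9$, and the iteration count are illustrative choices, not part of the problem), the step $\hat{w}_{k+1} = \tilde{A} w_k$ is carried out by solving $(A - \sigma I)\hat{w}_{k+1} = w_k$, here via Cramer's rule:

```python
import math

# Shifted inverse power method on a 2x2 example (chosen for illustration).
# A has eigenvalues 1 and 3; the eigenvector for lambda = 3 is (1,1)/sqrt(2).
A = [[2.0, 1.0],
     [1.0, 2.0]]
sigma = 2.9                   # closest to the target eigenvalue lambda_j = 3

# M = A - sigma I; each step solves M w_hat = w_k
M = [[A[0][0] - sigma, A[0][1]],
     [A[1][0], A[1][1] - sigma]]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]

def solve2(b):
    """Cramer's rule for the 2x2 system M w_hat = b."""
    x = (b[0] * M[1][1] - M[0][1] * b[1]) / det
    y = (M[0][0] * b[1] - M[1][0] * b[0]) / det
    return [x, y]

w = [1.0, 0.0]                # arbitrary unit starting vector
for _ in range(50):
    w_hat = solve2(w)         # w_hat = (A - sigma I)^{-1} w
    norm = math.hypot(*w_hat)
    w = [c / norm for c in w_hat]

# Rayleigh quotient of A itself recovers the eigenvalue lambda_j = 3
Aw = [A[0][0] * w[0] + A[0][1] * w[1],
      A[1][0] * w[0] + A[1][1] * w[1]]
rq = Aw[0] * w[0] + Aw[1] * w[1]
assert abs(rq - 3.0) < 1e-10            # converged to lambda_j = 3
assert abs(abs(w[0]) - abs(w[1])) < 1e-10   # w parallel to (1, 1)
```

With this shift, $(A - \sigma I)^{-1}$ has eigenvalues $1/(1 - 2.9) \approx -0.53$ and $1/(3 - 2.9) = 10$, so the component of $w_k$ along $v_j$ dominates very quickly.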



Convergence of the Power Method

We prove convergence of the basic power method for a matrix $A$ with a dominant eigenvalue; applied to $\tilde{A}$, whose dominant eigenvalue is $\tilde{\lambda}_j$ with eigenvector $v_j$, this gives convergence of the algorithm above. If $|\lambda_1| > |\lambda_i|$ for all $i \neq 1$, then $A^k w_0$ will be dominated by $v_1$.


Since $v_1, v_2, \ldots, v_n$ are linearly independent, they form a basis of $\mathbb{R}^n$. Hence,

\[
w_0 = \alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_n v_n
\]

Assume $\alpha_1 \neq 0$, which holds for almost every starting vector.


From the definition of eigenvectors,

\begin{align}
A w_0 &= \alpha_1 \lambda_1 v_1 + \alpha_2 \lambda_2 v_2 + \cdots + \alpha_n \lambda_n v_n \\
A^2 w_0 &= \alpha_1 \lambda_1^2 v_1 + \alpha_2 \lambda_2^2 v_2 + \cdots + \alpha_n \lambda_n^2 v_n \\
&\;\;\vdots \\
A^k w_0 &= \alpha_1 \lambda_1^k v_1 + \alpha_2 \lambda_2^k v_2 + \cdots + \alpha_n \lambda_n^k v_n
\end{align}


To find a general form of $w_k$, the approximate eigenvector at the $k$-th step, examine a few steps of the algorithm:

\begin{align}
\hat{w}_1 &= A w_0, & w_1 &= \frac{\hat{w}_1}{\|\hat{w}_1\|} = \frac{A w_0}{\|A w_0\|} \\
\hat{w}_2 &= A w_1 = \frac{A^2 w_0}{\|A w_0\|}, & w_2 &= \frac{\hat{w}_2}{\|\hat{w}_2\|} = \frac{A^2 w_0 / \|A w_0\|}{\|A^2 w_0\| / \|A w_0\|} = \frac{A^2 w_0}{\|A^2 w_0\|} \\
\hat{w}_3 &= \frac{A^3 w_0}{\|A^2 w_0\|}, & w_3 &= \frac{A^3 w_0 / \|A^2 w_0\|}{\|A^3 w_0\| / \|A^2 w_0\|} = \frac{A^3 w_0}{\|A^3 w_0\|}
\end{align}


By induction,

\[
w_k = \frac{A^k w_0}{\|A^k w_0\|}
\]

Hence,

\[
w_k = \frac{A^k w_0}{\|A^k w_0\|}
= \frac{\alpha_1 \lambda_1^k v_1 + \alpha_2 \lambda_2^k v_2 + \cdots + \alpha_n \lambda_n^k v_n}{\|A^k w_0\|}
= \frac{\lambda_1^k \left( \alpha_1 v_1 + \alpha_2 \left( \frac{\lambda_2}{\lambda_1} \right)^k v_2 + \cdots + \alpha_n \left( \frac{\lambda_n}{\lambda_1} \right)^k v_n \right)}{\|A^k w_0\|}
\]

Comparing a rescaled $w_k$ with $v_1$,

\[
\left\| \frac{\|A^k w_0\|}{\lambda_1^k} w_k - \alpha_1 v_1 \right\|
= \left\| \alpha_2 \left( \frac{\lambda_2}{\lambda_1} \right)^k v_2 + \cdots + \alpha_n \left( \frac{\lambda_n}{\lambda_1} \right)^k v_n \right\|
\leq |\alpha_2| \left| \frac{\lambda_2}{\lambda_1} \right|^k + \cdots + |\alpha_n| \left| \frac{\lambda_n}{\lambda_1} \right|^k
\]

since $\|v_i\| = 1$ by assumption.

The right-hand side goes to $0$ as $k \to \infty$ since $|\lambda_1| > |\lambda_i|$ for all $i \neq 1$. Hence as $k$ grows, $w_k$ becomes parallel to $v_1$; because $\|w_k\| = 1$, it must be that $w_k \to \pm v_1$.
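The bound above predicts that the power-method error decays geometrically at the rate $|\lambda_2/\lambda_1|$. A quick numerical sketch (the symmetric $2 \times 2$ matrix is an arbitrary illustrative choice, with $\lambda_1 = 3$, $\lambda_2 = 1$, so the predicted rate is $1/3$):

```python
import math

# Plain power method: w_{k+1} = A w_k / ||A w_k||.
# A (chosen for illustration) has eigenvalues 3 and 1;
# the dominant eigenvector is v1 = (1, 1)/sqrt(2).
A = [[2.0, 1.0],
     [1.0, 2.0]]
v1 = [1 / math.sqrt(2), 1 / math.sqrt(2)]

w = [1.0, 0.0]                # unit starting vector with alpha_1 != 0
errors = []
for k in range(1, 21):
    w = [A[0][0] * w[0] + A[0][1] * w[1],
         A[1][0] * w[0] + A[1][1] * w[1]]
    norm = math.hypot(*w)
    w = [c / norm for c in w]
    errors.append(math.hypot(w[0] - v1[0], w[1] - v1[1]))

# successive error ratios approach |lambda_2 / lambda_1| = 1/3
assert errors[-1] < 1e-8
assert abs(errors[-1] / errors[-2] - 1/3) < 0.05
```

Here $w_0$ has a positive component along $v_1$, so the iterates converge to $+v_1$ and the measured ratio of successive errors matches the predicted rate $|\lambda_2/\lambda_1| = 1/3$.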


Problem 3

Suppose $A$ is an upper triangular, nonsingular matrix. Show that both Jacobi iteration and Gauss-Seidel iteration converge in finitely many steps when used to solve $Ax = b$.

Solution 3

Derivation of Iterations

Let $A = D + L + U$, where $D$ is a diagonal matrix, $L$ is a lower triangular matrix with zero diagonal, and $U$ is an upper triangular matrix, also with zero diagonal.


The Jacobi iteration can be found by substituting the splitting into $Ax = b$, grouping $L + U$, and solving for $x$:

\begin{align}
Ax = b &\iff (D + L + U)x = b \\
&\iff Dx + (L + U)x = b \\
&\iff Dx = b - (L + U)x \\
&\iff x = D^{-1}\bigl(b - (L + U)x\bigr)
\end{align}


Since $L = 0$ by hypothesis, the iteration is

\[
x^{(i+1)} = D^{-1}\bigl(b - U x^{(i)}\bigr)
\]


Similarly, the Gauss-Seidel iteration can be found by substituting into $Ax = b$, grouping $D + L$, and solving for $x$:

\begin{align}
Ax = b &\iff (D + L + U)x = b \\
&\iff (D + L)x + Ux = b \\
&\iff (D + L)x = b - Ux \\
&\iff x = (D + L)^{-1}(b - Ux)
\end{align}


Since $L = 0$ by hypothesis, the iteration has the identical form as Jacobi:

\[
x^{(i+1)} = D^{-1}\bigl(b - U x^{(i)}\bigr)
\]

Convergence in Finite Number of Steps

Jacobi and Gauss-Seidel are iterative methods that split the matrix $A \in \mathbb{R}^{n \times n}$ into $D$, $U$, and $L$: the diagonal, the upper triangular part (everything above the diagonal), and the lower triangular part (everything below the diagonal), respectively. Their iterations are


\[
x^{(i+1)} = (D + L)^{-1}\bigl(b - U x^{(i)}\bigr) \quad \text{(Gauss-Seidel)}
\]
\[
x^{(i+1)} = D^{-1}\bigl(b - (U + L) x^{(i)}\bigr) \quad \text{(Jacobi)}
\]


In our case $A$ is upper triangular, so $L$ is the zero matrix. As a result, the Gauss-Seidel and Jacobi methods take on the following identical form:

\[
x^{(i+1)} = D^{-1}\bigl(b - U x^{(i)}\bigr)
\]


Additionally, the exact solution $x$ is a fixed point of this iteration:

\[
x = D^{-1}(b - Ux)
\]


Subtracting $x$ from $x^{(i+1)}$, the error $e^{(i+1)} = x^{(i+1)} - x$ satisfies

\begin{align}
e^{(i+1)} &= x^{(i+1)} - x \\
&= D^{-1}\bigl(b - U x^{(i)}\bigr) - D^{-1}(b - Ux) \\
&= -D^{-1}U\bigl(x^{(i)} - x\bigr) \\
&= -D^{-1}U e^{(i)} \\
&= \bigl(-D^{-1}U\bigr)\bigl(-D^{-1}U e^{(i-1)}\bigr) = \bigl(-D^{-1}U\bigr)^2 e^{(i-1)} \\
&\;\;\vdots \\
&= \bigl(-D^{-1}U\bigr)^{i+1} e^{(0)}
\end{align}


In our problem, $D^{-1}$ is diagonal and $U$ is upper triangular with zeros along the diagonal. Notice that the product $D^{-1}U$ will therefore also be upper triangular with zeros along the diagonal.

Let $R = D^{-1}U$, $R \in \mathbb{R}^{n \times n}$:

\[
R = \begin{pmatrix}
0 & * & \cdots & * \\
0 & 0 & \ddots & \vdots \\
\vdots & & \ddots & * \\
0 & \cdots & 0 & 0
\end{pmatrix}
\]


Also, let $\tilde{R} \in \mathbb{R}^{n \times n}$ be a related matrix whose nonzero entries lie on and above the $k$-th superdiagonal, i.e. $\tilde{R}_{lj} = 0$ unless $j \geq l + k$. For example, with $n = 5$ and $k = 2$,

\[
\tilde{R} = \begin{pmatrix}
0 & 0 & * & * & * \\
0 & 0 & 0 & * & * \\
0 & 0 & 0 & 0 & * \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0
\end{pmatrix}
= \begin{pmatrix}
\mathbf{0} & T \\
\mathbf{0} & \mathbf{0}
\end{pmatrix}
\]

where the block $T$ is an $(n-k) \times (n-k)$ upper triangular matrix occupying the last $n - k$ columns of the first $n - k$ rows, and the remaining blocks are zero.

Finally, consider the product $R\tilde{R}$ (call it $(1)$). Entry $(i,j)$ of $R\tilde{R}$ is $\sum_{l} R_{il} \tilde{R}_{lj}$; since $R_{il} \neq 0$ only when $l \geq i + 1$ and $\tilde{R}_{lj} \neq 0$ only when $j \geq l + k$, the product can be nonzero only when $j \geq i + k + 1$. With $n = 5$ and $k = 2$,

\[
R\tilde{R} = \begin{pmatrix}
0 & * & * & * & * \\
0 & 0 & * & * & * \\
0 & 0 & 0 & * & * \\
0 & 0 & 0 & 0 & * \\
0 & 0 & 0 & 0 & 0
\end{pmatrix}
\begin{pmatrix}
0 & 0 & * & * & * \\
0 & 0 & 0 & * & * \\
0 & 0 & 0 & 0 & * \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0
\end{pmatrix}
= \begin{pmatrix}
0 & 0 & 0 & * & * \\
0 & 0 & 0 & 0 & * \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0
\end{pmatrix}
\]

so $R\tilde{R}$ has the same structure as $\tilde{R}$, except that one more superdiagonal has been zeroed out.

At this point the convergence in $n$ steps (the size of the starting matrix) should be apparent: $R$ itself is just $\tilde{R}$ with $k = 1$, and each multiplication by $R$ pushes the first nonzero superdiagonal up by one. After $n - 1$ further applications of $R$, every superdiagonal has been zeroed out and the result is the zero matrix of size $n$.

In brief, $R^n$ is the zero matrix of size $n$.


Therefore $e^{(n)} = \bigl(-D^{-1}U\bigr)^n e^{(0)} = (-1)^n R^n e^{(0)} = 0$, i.e. the Jacobi and Gauss-Seidel methods used to solve $Ax = b$ converge in at most $n$ steps when $A \in \mathbb{R}^{n \times n}$ is upper triangular.
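A short numerical sketch of this conclusion (the $3 \times 3$ upper triangular system is an arbitrary illustrative choice): iterating $x^{(i+1)} = D^{-1}(b - U x^{(i)})$ from any starting guess reaches the exact solution after $n$ steps.

```python
# For upper triangular A, the common Jacobi/Gauss-Seidel iteration
# x <- D^{-1}(b - U x) is exact after at most n steps.
A = [[2.0, 1.0, -1.0],
     [0.0, 3.0,  2.0],
     [0.0, 0.0,  4.0]]
b = [3.0, 7.0, 8.0]
n = 3

# exact solution by back substitution, for comparison
x_exact = [0.0] * n
for i in reversed(range(n)):
    s = sum(A[i][j] * x_exact[j] for j in range(i + 1, n))
    x_exact[i] = (b[i] - s) / A[i][i]

x = [0.0] * n                      # arbitrary starting guess
for _ in range(n):
    # D^{-1}(b - U x): U x uses only entries strictly above the diagonal;
    # the comprehension reads the old x throughout (Jacobi form)
    x = [(b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
         for i in range(n)]

assert all(abs(x[i] - x_exact[i]) < 1e-12 for i in range(n))
```

The error obeys $e^{(k)} = (-D^{-1}U)^k e^{(0)}$, and $D^{-1}U$ here is strictly upper triangular, so after $n = 3$ iterations the error is exactly zero (up to rounding).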

Examples of (1)

n=3


\[
R = \begin{pmatrix} 0 & * & * \\ 0 & 0 & * \\ 0 & 0 & 0 \end{pmatrix}
\]

\[
R^2 = \begin{pmatrix} 0 & * & * \\ 0 & 0 & * \\ 0 & 0 & 0 \end{pmatrix}
\begin{pmatrix} 0 & * & * \\ 0 & 0 & * \\ 0 & 0 & 0 \end{pmatrix}
= \begin{pmatrix} 0 & 0 & * \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}
\]

\[
R^3 = \begin{pmatrix} 0 & * & * \\ 0 & 0 & * \\ 0 & 0 & 0 \end{pmatrix}
\begin{pmatrix} 0 & 0 & * \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}
= \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}
\]


n=4


\[
R = \begin{pmatrix} 0 & * & * & * \\ 0 & 0 & * & * \\ 0 & 0 & 0 & * \\ 0 & 0 & 0 & 0 \end{pmatrix}
\]

\[
R^2 = \begin{pmatrix} 0 & * & * & * \\ 0 & 0 & * & * \\ 0 & 0 & 0 & * \\ 0 & 0 & 0 & 0 \end{pmatrix}
\begin{pmatrix} 0 & * & * & * \\ 0 & 0 & * & * \\ 0 & 0 & 0 & * \\ 0 & 0 & 0 & 0 \end{pmatrix}
= \begin{pmatrix} 0 & 0 & * & * \\ 0 & 0 & 0 & * \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}
\]

\[
R^3 = \begin{pmatrix} 0 & * & * & * \\ 0 & 0 & * & * \\ 0 & 0 & 0 & * \\ 0 & 0 & 0 & 0 \end{pmatrix}
\begin{pmatrix} 0 & 0 & * & * \\ 0 & 0 & 0 & * \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}
= \begin{pmatrix} 0 & 0 & 0 & * \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}
\]

\[
R^4 = \begin{pmatrix} 0 & * & * & * \\ 0 & 0 & * & * \\ 0 & 0 & 0 & * \\ 0 & 0 & 0 & 0 \end{pmatrix}
\begin{pmatrix} 0 & 0 & 0 & * \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}
= \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}
\]
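The same pattern can be checked numerically (the entries of $R$ are chosen arbitrarily): each further product with $R$ zeroes one more superdiagonal, and $R^n = 0$.

```python
# Strictly upper triangular R for n = 4 (entries chosen arbitrarily).
n = 4
R = [[0, 1, 2, 3],
     [0, 0, 4, 5],
     [0, 0, 0, 6],
     [0, 0, 0, 0]]

def matmul(X, Y):
    """Plain n x n matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = R
for k in range(2, n + 1):
    P = matmul(P, R)          # P = R^k
    # after k factors, entries survive only on and above the k-th
    # superdiagonal, i.e. P[i][j] == 0 whenever j < i + k
    assert all(P[i][j] == 0 for i in range(n) for j in range(n) if j < i + k)

assert all(P[i][j] == 0 for i in range(n) for j in range(n))  # R^n = 0
```

The loop invariant mirrors the block argument above: multiplying by $R$ pushes the first nonzero superdiagonal up by one, so $R^4$ vanishes entirely.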
