Numerical Methods Qualification Exam Problems and Solutions (University of Maryland)/January 2005
Problem 1
Let $f \in C[a,b]$, and let $p^*$ denote the best uniform approximation to $f$ among polynomials of degree at most $n$, i.e.

$$\max_{x \in [a,b]} |f(x) - p^*(x)| = \min_{\deg p \le n}\; \max_{x \in [a,b]} |f(x) - p(x)|.$$

Show that the error $f - p^*$ satisfies

$$\max_{x \in [a,b]} \bigl(f(x) - p^*(x)\bigr) = -\min_{x \in [a,b]} \bigl(f(x) - p^*(x)\bigr).$$
Solution 1
Since both $f$ and $p^*$ are continuous, so is the error function
$$e(x) = f(x) - p^*(x),$$
and therefore $e$ attains both a maximum and a minimum on $[a,b]$.

Let $M = \max_{x \in [a,b]} e(x)$ and $m = \min_{x \in [a,b]} e(x)$.

If $M \neq -m$, then the polynomial $q = p^* + \frac{M+m}{2}$ (a vertical shift of the supposed best approximation) is a better approximation to $f$: its error $e - \frac{M+m}{2}$ has maximum $\frac{M-m}{2}$ and minimum $-\frac{M-m}{2}$, and
$$\frac{M-m}{2} < \max(M, -m) = \|f - p^*\|_\infty \quad \text{whenever } M \neq -m,$$
contradicting the optimality of $p^*$.

Therefore $M = -m$, i.e.,
$$\max_{x \in [a,b]} \bigl(f(x) - p^*(x)\bigr) = -\min_{x \in [a,b]} \bigl(f(x) - p^*(x)\bigr).$$
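As a quick numerical illustration (not part of the original solution), consider the degree-zero case: the best constant approximation to $f$ on $[a,b]$ is $(\max f + \min f)/2$, and the resulting error has $M = -m$. A minimal sketch in Python, using $f(x) = e^x$ on $[0,1]$:

```python
# Degree-0 best uniform approximation: p* = (max f + min f) / 2.
# We check that the error e = f - p* satisfies max e = -min e.
import math

f = math.exp                                   # f(x) = e^x on [0, 1]
xs = [i / 10000 for i in range(10001)]
M_f, m_f = max(map(f, xs)), min(map(f, xs))
p_star = (M_f + m_f) / 2                       # best constant approximation

errs = [f(x) - p_star for x in xs]
M, m = max(errs), min(errs)
print(M, m)                                    # M and -m agree: (e - 1)/2
assert abs(M + m) < 1e-12
```

Here both extremes of the error are attained (at $x = 0$ and $x = 1$) with opposite signs, exactly as the shift argument predicts.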
Problem 2a
Find the node $x_1$ and the weight $w_1$ so that the integration rule
$$\int_0^1 f(x)\,dx \approx w_1 f(x_1)$$
is exact for all linear functions $f$.
Solution 2a
Let $f(x) = 1$. Then
$$\int_0^1 1\,dx = 1 \quad \text{and} \quad w_1 f(x_1) = w_1,$$
which implies, after equating the two, $w_1 = 1$.
Let $f(x) = x$. Then
$$\int_0^1 x\,dx = \frac{1}{2} \quad \text{and} \quad w_1 f(x_1) = w_1 x_1 = x_1,$$
which implies $x_1 = \frac{1}{2}$, i.e., the rule is the midpoint rule.
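As a sanity check (assuming, as in the reconstruction above, the interval $[0,1]$, so that the rule is $f \mapsto f(1/2)$), exact rational arithmetic confirms the rule integrates every linear function exactly:

```python
# Verify w1 = 1, x1 = 1/2 makes the one-point rule exact for f(x) = a*x + b.
from fractions import Fraction

w1, x1 = Fraction(1), Fraction(1, 2)          # the weight and node found above

def rule(f):
    return w1 * f(x1)

def exact_integral_linear(a, b):              # ∫_0^1 (a*x + b) dx = a/2 + b
    return Fraction(a, 2) + b

for a in range(-3, 4):
    for b in range(-3, 4):
        f = lambda x, a=a, b=b: a * x + b
        assert rule(f) == exact_integral_linear(a, b)
print("exact on all tested linear functions")
```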
Problem 2b
Show that no one-point rule for approximating $\int_0^1 f(x)\,dx$ can be exact for all quadratics.
Solution 2b
Suppose there is a one-point rule $w_1 f(x_1)$ for approximating $\int_0^1 f(x)\,dx$ that is exact for quadratics.
Let $f(x) = (x - x_1)^2$.
Then,
$$w_1 f(x_1) = w_1 \cdot 0 = 0,$$
but
$$\int_0^1 (x - x_1)^2\,dx > 0,$$
a contradiction.
Problem 2c
In fact, for $f \in C^2[0,1]$ the midpoint rule satisfies the error bound
$$\left| \int_0^1 f(x)\,dx - f\!\left(\tfrac{1}{2}\right) \right| \le \frac{1}{24} \max_{0 \le x \le 1} |f''(x)|.$$
Show that the constant $\frac{1}{24}$ cannot be improved.
Solution 2c
Let $f(x) = \left(x - \tfrac{1}{2}\right)^2$, so that $f''(x) = 2$ for all $x$. Then
$$f\!\left(\tfrac{1}{2}\right) = 0 \quad \text{and} \quad \int_0^1 \left(x - \tfrac{1}{2}\right)^2 dx = \frac{1}{12}.$$
Using the values computed in part (b), we have
$$\left| \int_0^1 f(x)\,dx - f\!\left(\tfrac{1}{2}\right) \right| = \frac{1}{12} = \frac{2}{24} = \frac{1}{24} \max_{0 \le x \le 1} |f''(x)|,$$
which implies that the bound is attained for this $f$, and hence the constant $\frac{1}{24}$ cannot be improved.
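The sharpness computation can be checked in exact arithmetic; this sketch assumes the interval $[0,1]$ and the test function $f(x) = (x - 1/2)^2$ used above:

```python
# Midpoint-rule error for f(x) = (x - 1/2)^2 on [0, 1]: it equals 1/12,
# which is exactly (1/24) * max|f''| since f'' is identically 2.
from fractions import Fraction

half = Fraction(1, 2)
f = lambda x: (x - half) ** 2

# ∫_0^1 (x - 1/2)^2 dx = [(x - 1/2)^3 / 3] evaluated from 0 to 1 = 1/12
integral = (Fraction(1) - half) ** 3 / 3 - (Fraction(0) - half) ** 3 / 3
error = integral - f(half)                    # the midpoint rule gives f(1/2) = 0
print(error)                                  # 1/12
assert error == Fraction(2, 24)               # = f''/24 with f'' = 2
```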
Problem 2d
Let $p$ and $q$ be two polynomials of degree at most $3$. Suppose $p$ and $q$ agree at $0$, $\tfrac{1}{2}$, and $1$. Show
$$\int_0^1 p(x)\,dx = \int_0^1 q(x)\,dx.$$
Solution 2d
There exist weights $w_0, w_1, w_2$ (those of Simpson's rule: $\tfrac{1}{6}, \tfrac{4}{6}, \tfrac{1}{6}$) such that if $r$ is a polynomial of degree at most $3$, then
$$\int_0^1 r(x)\,dx = w_0\, r(0) + w_1\, r\!\left(\tfrac{1}{2}\right) + w_2\, r(1).$$
Hence, writing $x_0 = 0$, $x_1 = \tfrac{1}{2}$, $x_2 = 1$,
$$\int_0^1 p(x)\,dx = \sum_{i=0}^{2} w_i\, p(x_i) = \sum_{i=0}^{2} w_i\, q(x_i) = \int_0^1 q(x)\,dx.$$
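A sketch assuming the nodes are $0, \tfrac{1}{2}, 1$ and the polynomials have degree at most $3$ (one natural reading of the statement): Simpson's rule is exact for cubics, so two cubics that agree at the three nodes must have equal integrals.

```python
# Verify Simpson's rule on [0, 1] is exact for cubics, then apply it to two
# cubics p and q that agree at the nodes 0, 1/2, 1.
from fractions import Fraction

F = Fraction
nodes = [F(0), F(1, 2), F(1)]
weights = [F(1, 6), F(4, 6), F(1, 6)]          # Simpson's rule weights

def simpson(r):
    return sum(wt * r(x) for wt, x in zip(weights, nodes))

def integral(c):                                # ∫_0^1 Σ c_k x^k dx, exact
    return sum(ck / F(k + 1) for k, ck in enumerate(c))

def poly(c):
    return lambda x: sum(ck * x ** k for k, ck in enumerate(c))

# Exactness for every monomial up to degree 3:
for k in range(4):
    mono = [0] * k + [1]
    assert simpson(poly(mono)) == integral(mono)

# Two cubics agreeing at the nodes: q = p + x(x - 1/2)(x - 1).
p = [F(1), F(-2), F(0), F(4)]                   # p(x) = 1 - 2x + 4x^3
w = [F(0), F(1, 2), F(-3, 2), F(1)]             # x(x - 1/2)(x - 1)
q = [pc + wc for pc, wc in zip(p, w)]
assert all(poly(w)(x) == 0 for x in nodes)      # p and q agree at the nodes
assert integral(p) == integral(q)
print("equal integrals:", integral(p))
```

The added term $x(x - 1/2)(x - 1)$ vanishes at all three nodes, so $p$ and $q$ agree there yet differ as polynomials; their integrals still coincide.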
Problem 3
Let $A \in \mathbb{R}^{n \times n}$ be nonsingular and $b \in \mathbb{R}^n$. Consider the following iteration for the solution of $Ax = b$:
$$x_{k+1} = x_k + \alpha\,(b - A x_k), \qquad k = 0, 1, 2, \ldots$$
Problem 3a
Show that if all eigenvalues of $A$ have positive real part, then there is some real $\alpha$ such that the iterates converge for any starting vector $x_0$. Discuss how to choose $\alpha$ optimally in case $A$ is symmetric, and determine the rate of convergence.
Solution 3a
Convergence for any starting vector

Using the iteration $x_{k+1} = x_k + \alpha(b - Ax_k)$ and the fact that the solution $x$ satisfies $x = x + \alpha(b - Ax)$, we have that
$$x_{k+1} - x = (I - \alpha A)(x_k - x).$$
Then, computing norms, we obtain
$$\|x_{k+1} - x\| \le \|I - \alpha A\|\, \|x_k - x\|.$$
In particular, considering the error $e_k = x_k - x$, we have that
$$\|e_k\| \le \|I - \alpha A\|^k\, \|e_0\|.$$
Now, in order to study the convergence of the method, we need to analyze the spectral radius $\rho(I - \alpha A)$. We use the following characterization: the iteration converges for every starting vector $x_0$ if and only if $\rho(I - \alpha A) < 1$.
Let's denote by $\lambda = u + iv$ an eigenvalue of the matrix $A$, so that $1 - \alpha\lambda$ is an eigenvalue of $I - \alpha A$. Then
$$|1 - \alpha \lambda|^2 < 1$$
implies
$$(1 - \alpha u)^2 + \alpha^2 v^2 < 1,$$
i.e.
$$\alpha^2 (u^2 + v^2) - 2 u \alpha < 0.$$
The left-hand side is quadratic with respect to $\alpha$ and opens upward. Hence, using the quadratic formula, we can find the roots of the corresponding equation to be
$$\alpha = 0 \quad \text{and} \quad \alpha = \frac{2u}{u^2 + v^2} = \frac{2\operatorname{Re}(\lambda)}{|\lambda|^2}.$$
Then, if $\operatorname{Re}(\lambda) = u > 0$ for all the eigenvalues $\lambda$ of the matrix $A$, we can find
$$0 < \alpha < \min_{\lambda} \frac{2\operatorname{Re}(\lambda)}{|\lambda|^2}$$
such that $\rho(I - \alpha A) < 1$, i.e., the method converges.

Choosing $\alpha$ if $A$ is symmetric

On the other hand, if the matrix $A$ is symmetric, we have that its eigenvalues are real and, by the assumption on their real parts, positive: $0 < \lambda_1 \le \lambda_2 \le \cdots \le \lambda_n$. Then, using the Schur decomposition, we can find an orthogonal matrix $Q$ and a diagonal matrix $D$ such that $A = Q D Q^T$, and in consequence,
$$\rho(I - \alpha A) = \max_{1 \le i \le n} |1 - \alpha \lambda_i| = \max\bigl(|1 - \alpha\lambda_1|,\; |1 - \alpha\lambda_n|\bigr).$$
This last expression is minimized if $1 - \alpha\lambda_1 = -(1 - \alpha\lambda_n)$, i.e., if
$$\alpha_{\mathrm{opt}} = \frac{2}{\lambda_1 + \lambda_n}.$$
With this optimal value of $\alpha$, we obtain that
$$\rho(I - \alpha_{\mathrm{opt}} A) = 1 - \frac{2\lambda_1}{\lambda_1 + \lambda_n} = \frac{\lambda_n - \lambda_1}{\lambda_n + \lambda_1},$$
which is the rate of convergence.
Finally, we've obtained that
$$\|e_k\|_2 \le \left( \frac{\lambda_n - \lambda_1}{\lambda_n + \lambda_1} \right)^{k} \|e_0\|_2.$$
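The predicted rate can be observed numerically; this sketch uses the (assumed, illustrative) diagonal test matrix $A = \operatorname{diag}(1, 5)$, for which $\lambda_1 = 1$ and $\lambda_n = 5$, so $\alpha_{\mathrm{opt}} = 1/3$ and the rate is $2/3$:

```python
# Richardson iteration x_{k+1} = x_k + alpha*(b - A x_k) for A = diag(1, 5).
# With the optimal alpha = 2/(1 + 5) = 1/3, the error should shrink by
# exactly the factor (5 - 1)/(5 + 1) = 2/3 per step.
lam = [1.0, 5.0]                   # eigenvalues of the diagonal matrix A
b = [2.0, 10.0]                    # chosen so the exact solution is (2, 2)
x_exact = [bi / li for bi, li in zip(b, lam)]

alpha = 2 / (lam[0] + lam[1])      # optimal alpha = 1/3
x = [0.0, 0.0]
err = lambda v: max(abs(vi - ei) for vi, ei in zip(v, x_exact))

prev = err(x)
for k in range(20):
    x = [xi + alpha * (bi - li * xi) for xi, bi, li in zip(x, b, lam)]
    ratio = err(x) / prev          # per-step error reduction factor
    prev = err(x)
print(ratio)                       # ≈ 2/3
assert abs(ratio - 2 / 3) < 1e-10
```

For a diagonal $A$ each error component is multiplied by $1 - \alpha\lambda_i$ per step, so the observed ratio matches $(\lambda_n - \lambda_1)/(\lambda_n + \lambda_1)$ exactly up to rounding.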
Problem 3b
Show that if some eigenvalues of $A$ have negative real part and some have positive real part, then there is no real $\alpha$ for which the iterations converge.
Solution 3b
From part (a), convergence requires $|1 - \alpha\lambda| < 1$, i.e., $\alpha^2 |\lambda|^2 - 2\alpha \operatorname{Re}(\lambda) < 0$, for every eigenvalue $\lambda$ of $A$.
If $\operatorname{Re}(\lambda_i) > 0$ for some $i$, then convergence occurs only if
$$0 < \alpha < \frac{2\operatorname{Re}(\lambda_i)}{|\lambda_i|^2}.$$
Hence $\alpha > 0$.
Similarly, if $\operatorname{Re}(\lambda_j) < 0$ for some $j$, then convergence occurs only if
$$\frac{2\operatorname{Re}(\lambda_j)}{|\lambda_j|^2} < \alpha < 0.$$
Hence $\alpha < 0$.
If some eigenvalues have positive real part and some negative, there does not exist a real $\alpha$ for which the iterations converge, since $\alpha$ cannot be both positive and negative simultaneously.
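A concrete (illustrative, not from the original) example: for the diagonal matrix $A = \operatorname{diag}(1, -1)$, one eigenvalue of each sign, the spectral radius of $I - \alpha A$ is $\max(|1-\alpha|, |1+\alpha|) = 1 + |\alpha| \ge 1$ for every real $\alpha$:

```python
# For A = diag(1, -1), rho(I - alpha*A) = max(|1 - alpha|, |1 + alpha|),
# which is never below 1 for any real alpha, so the iteration cannot converge.
def spectral_radius(alpha):
    return max(abs(1 - alpha * 1), abs(1 - alpha * (-1)))

alphas = [i / 100 for i in range(-300, 301)]
assert all(spectral_radius(a) >= 1 for a in alphas)
print(min(spectral_radius(a) for a in alphas))   # 1.0, attained at alpha = 0
```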
Problem 3c
Let $\rho = \|I - \alpha A\| < 1$ for a matrix norm associated to a vector norm. Show that the error can be bounded in terms of the difference between consecutive iterates, namely
$$\|x - x_k\| \le \frac{\rho}{1 - \rho}\, \|x_k - x_{k-1}\|.$$
(The proof of this is short but a little tricky.)
Solution 3c
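A short argument, assuming the claimed bound is $\|x - x_k\| \le \frac{\rho}{1-\rho}\|x_k - x_{k-1}\|$; the "tricky" step is splitting $x - x_{k-1}$ as $(x - x_k) + (x_k - x_{k-1})$ before applying the triangle inequality:

```latex
% Since x = x + \alpha(b - Ax), subtracting the iteration gives
%   x - x_k = (I - \alpha A)(x - x_{k-1}).
\begin{align*}
\|x - x_k\| &\le \rho \,\|x - x_{k-1}\|
             = \rho \,\|(x - x_k) + (x_k - x_{k-1})\| \\
            &\le \rho \,\|x - x_k\| + \rho \,\|x_k - x_{k-1}\|,
\end{align*}
% and since \rho < 1 we may divide by 1 - \rho > 0:
\begin{equation*}
(1 - \rho)\,\|x - x_k\| \le \rho\,\|x_k - x_{k-1}\|
\quad\Longrightarrow\quad
\|x - x_k\| \le \frac{\rho}{1-\rho}\,\|x_k - x_{k-1}\|.
\end{equation*}
```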
Problem 3d
Let $A$ be the $n \times n$ tridiagonal matrix
$$A = \begin{pmatrix} 3 & -1 & & \\ -1 & 3 & \ddots & \\ & \ddots & \ddots & -1 \\ & & -1 & 3 \end{pmatrix}.$$
Estimate the optimal $\alpha$ for this matrix and the corresponding rate of convergence.
Solution 3d
By Gerschgorin's theorem, every eigenvalue of $A$ lies in a disk centered at $3$ with radius at most $2$, so the eigenvalues are all between $1$ and $5$, i.e.
$$1 \le \lambda_i \le 5.$$
Therefore, since $A$ is symmetric,
$$\rho(I - \alpha A) \le \max\bigl(|1 - \alpha|,\; |1 - 5\alpha|\bigr).$$
We want the $\alpha$ that equalizes the two terms, $1 - \alpha = -(1 - 5\alpha)$, which by part (a) is
$$\alpha_{\mathrm{opt}} = \frac{2}{1 + 5} = \frac{1}{3}.$$
Therefore,
$$\rho(I - \alpha_{\mathrm{opt}} A) \le \frac{5 - 1}{5 + 1} = \frac{2}{3}.$$
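The Gerschgorin estimate can be checked against the actual spectrum; this sketch assumes $A = \operatorname{tridiag}(-1, 3, -1)$, whose eigenvalues have the known closed form $\lambda_k = 3 - 2\cos\!\bigl(\tfrac{k\pi}{n+1}\bigr)$, $k = 1, \ldots, n$:

```python
# Check that the eigenvalues of tridiag(-1, 3, -1) lie in the Gerschgorin
# interval [1, 5], and that alpha = 1/3 gives rho(I - alpha*A) <= 2/3.
import math

n = 10
alpha = 1 / 3
lams = [3 - 2 * math.cos(k * math.pi / (n + 1)) for k in range(1, n + 1)]

assert all(1 <= lam <= 5 for lam in lams)       # Gerschgorin bound
rho = max(abs(1 - alpha * lam) for lam in lams)
print(rho)                                      # strictly below 2/3
assert rho <= 2 / 3
```

The actual spectral radius is $\tfrac{2}{3}\cos\!\bigl(\tfrac{\pi}{n+1}\bigr)$, slightly better than the Gerschgorin-based bound $2/3$.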