Statistics/Numerical Methods/Basic Linear Algebra and Gram-Schmidt Orthogonalization


Introduction

Basically, all the sections found here can also be found in any linear algebra book. However, Gram-Schmidt orthogonalization is used in statistical algorithms and in the solution of statistical problems. Therefore, we briefly review the linear algebra theory which is necessary to understand Gram-Schmidt orthogonalization.

The following subsections also contain examples. It is very important for further understanding that the concepts presented here are valid not only for typical vectors, i.e. tuples of real numbers, but also for functions, which can be considered vectors as well.

Fields

Definition

A set $R$ with two operations $+$ and $*$ on its elements is called a field (or, in short, $(R,+,*)$), if the following conditions hold:

  1. For all $\alpha,\beta\in R$ holds $\alpha+\beta\in R$
  2. For all $\alpha,\beta\in R$ holds $\alpha+\beta=\beta+\alpha$ (commutativity)
  3. For all $\alpha,\beta,\gamma\in R$ holds $\alpha+(\beta+\gamma)=(\alpha+\beta)+\gamma$ (associativity)
  4. There exists a unique element $0$, called zero, such that for all $\alpha\in R$ holds $\alpha+0=\alpha$
  5. For all $\alpha\in R$ there exists a unique element $-\alpha$, such that $\alpha+(-\alpha)=0$
  6. For all $\alpha,\beta\in R$ holds $\alpha*\beta\in R$
  7. For all $\alpha,\beta\in R$ holds $\alpha*\beta=\beta*\alpha$ (commutativity)
  8. For all $\alpha,\beta,\gamma\in R$ holds $\alpha*(\beta*\gamma)=(\alpha*\beta)*\gamma$ (associativity)
  9. There exists a unique element $1$, called one, such that for all $\alpha\in R$ holds $\alpha*1=\alpha$
  10. For all non-zero $\alpha\in R$ there exists a unique element $\alpha^{-1}$, such that $\alpha*\alpha^{-1}=1$
  11. For all $\alpha,\beta,\gamma\in R$ holds $\alpha*(\beta+\gamma)=\alpha*\beta+\alpha*\gamma$ (distributivity)

The elements of $R$ are also called scalars.

Examples

It can easily be proven that the real numbers with the well-known addition and multiplication $(\mathbb{R},+,*)$ form a field. The same holds for the complex numbers with complex addition and multiplication. Actually, there are not many other sets with two operations which fulfill all of these conditions.

For statistics, only the real and complex numbers with addition and multiplication are important.

Vector spaces

Definition

A set $V$ with two operations $+$ and $*$ on its elements is called a vector space over $R$, if the following conditions hold:

  1. For all $x,y\in V$ holds $x+y\in V$
  2. For all $x,y\in V$ holds $x+y=y+x$ (commutativity)
  3. For all $x,y,z\in V$ holds $x+(y+z)=(x+y)+z$ (associativity)
  4. There exists a unique element $\mathbb{O}$, called origin, such that for all $x\in V$ holds $x+\mathbb{O}=x$
  5. For all $x\in V$ there exists a unique element $-x$, such that $x+(-x)=\mathbb{O}$
  6. For all $\alpha\in R$ and $x\in V$ holds $\alpha*x\in V$
  7. For all $\alpha,\beta\in R$ and $x\in V$ holds $\alpha*(\beta*x)=(\alpha*\beta)*x$ (associativity)
  8. For all $x\in V$ and $1\in R$ holds $1*x=x$
  9. For all $\alpha\in R$ and for all $x,y\in V$ holds $\alpha*(x+y)=\alpha*x+\alpha*y$ (distributivity wrt. vector addition)
  10. For all $\alpha,\beta\in R$ and for all $x\in V$ holds $(\alpha+\beta)*x=\alpha*x+\beta*x$ (distributivity wrt. scalar addition)

Note that we used the same symbols $+$ and $*$ for different operations in $R$ and $V$. The elements of $V$ are also called vectors.

Examples:

  1. The set $\mathbb{R}^p$ of real-valued vectors $(x_1,\dots,x_p)$ with elementwise addition $x+y=(x_1+y_1,\dots,x_p+y_p)$ and elementwise multiplication $\alpha*x=(\alpha x_1,\dots,\alpha x_p)$ is a vector space over $\mathbb{R}$.
  2. The set of polynomials of degree at most $p$, $P(x)=b_0+b_1x+b_2x^2+\dots+b_px^p$, with the usual addition and multiplication is a vector space over $\mathbb{R}$.

Linear combinations

A vector $x$ can be written as a linear combination of vectors $x_1,\dots,x_n$, if

$x=\sum_{i=1}^n \alpha_i x_i$

with $\alpha_i\in R$.

Examples:

  • $(1,2,3)$ is a linear combination of $(1,0,0),(0,1,0),(0,0,1)$ since $(1,2,3)=1*(1,0,0)+2*(0,1,0)+3*(0,0,1)$
  • $1+2x+3x^2$ is a linear combination of $1+x+x^2,\; x+x^2,\; x^2$ since $1+2x+3x^2=1*(1+x+x^2)+1*(x+x^2)+1*(x^2)$
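
To make the first example concrete, here is a minimal NumPy sketch (the basis is made up for the illustration): finding the coefficients of a linear combination with respect to a basis amounts to solving a linear system.

```python
import numpy as np

# Columns of B are the (hypothetical) basis vectors (1,1,1), (0,1,1), (0,0,1).
B = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])

x = np.array([1.0, 2.0, 3.0])

# Solve B @ alpha = x for the coefficients alpha_1, ..., alpha_n.
alpha = np.linalg.solve(B, x)
print(alpha)  # [1. 1. 1.], i.e. x = 1*(1,1,1) + 1*(0,1,1) + 1*(0,0,1)
```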

Basis of a vector space

A set of vectors $x_1,\dots,x_n$ is called a basis of the vector space $V$, if

  1. for each vector $x\in V$ there exist scalars $\alpha_1,\dots,\alpha_n\in R$ such that $x=\sum_i \alpha_i x_i$, and
  2. there is no proper subset of $\{x_1,\dots,x_n\}$ such that 1. is fulfilled.

Note that a vector space can have several bases.

Examples:

  • Each vector $(\alpha_1,\alpha_2,\alpha_3)\in\mathbb{R}^3$ can be written as $\alpha_1*(1,0,0)+\alpha_2*(0,1,0)+\alpha_3*(0,0,1)$. Therefore $\{(1,0,0),(0,1,0),(0,0,1)\}$ is a basis of $\mathbb{R}^3$.
  • Each polynomial of degree at most $p$ can be written as a linear combination of $\{1,x,x^2,\dots,x^p\}$, which therefore forms a basis for this vector space.

Actually, for both examples we would have to prove condition 2., but it is clear that it holds.

Dimension of a vector space

The dimension of a vector space is the number of vectors which are necessary for a basis. A vector space has infinitely many bases, but the dimension is uniquely determined. Note that a vector space may also be infinite-dimensional, e.g. consider the space of continuous functions.

Examples:

  • The dimension of $\mathbb{R}^3$ is three, the dimension of $\mathbb{R}^p$ is $p$.
  • The dimension of the polynomials of degree at most $p$ is $p+1$.

Scalar products

A mapping $\langle\cdot,\cdot\rangle : V\times V \to R$ is called a scalar product if the following holds for all $x,x_1,x_2,y,y_1,y_2\in V$ and $\alpha_1,\alpha_2\in R$:

  1. $\langle \alpha_1 x_1+\alpha_2 x_2,\, y\rangle = \alpha_1\langle x_1,y\rangle + \alpha_2\langle x_2,y\rangle$
  2. $\langle x,\, \alpha_1 y_1+\alpha_2 y_2\rangle = \alpha_1\langle x,y_1\rangle + \alpha_2\langle x,y_2\rangle$
  3. $\langle x,y\rangle = \overline{\langle y,x\rangle}$ with $\overline{\alpha+\imath\beta} = \alpha-\imath\beta$
  4. $\langle x,x\rangle \ge 0$ with $\langle x,x\rangle = 0 \Leftrightarrow x=\mathbb{O}$

Examples:

  • The typical scalar product in $\mathbb{R}^p$ is $\langle x,y\rangle = \sum_i x_i y_i$.
  • $\langle f,g\rangle = \int_a^b f(x)*g(x)\,dx$ is a scalar product on the vector space of polynomials of degree at most $p$.
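
A small sketch of the second example, with the integral evaluated numerically via SciPy's quad (the two test polynomials are arbitrary):

```python
from scipy.integrate import quad

# Scalar product <f, g> = integral of f(x)*g(x) over [a, b].
def scalar_product(f, g, a=-1.0, b=1.0):
    value, _error = quad(lambda x: f(x) * g(x), a, b)
    return value

# <x, x^2> on [-1, 1] vanishes because the integrand x^3 is odd.
print(scalar_product(lambda x: x, lambda x: x ** 2))  # ~0.0
print(scalar_product(lambda x: 1.0, lambda x: 1.0))   # 2.0
```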

Norm

A norm of a vector is a mapping $\|\cdot\|: V\to R$, if it holds that

  1. $\|x\|\ge 0$ for all $x\in V$ and $\|x\|=0 \Leftrightarrow x=\mathbb{O}$ (positive definiteness)
  2. $\|\alpha x\| = |\alpha|\,\|x\|$ for all $x\in V$ and all $\alpha\in R$
  3. $\|x+y\| \le \|x\| + \|y\|$ for all $x,y\in V$ (triangle inequality)

Examples:

  • The $L_q$ norm of a vector in $\mathbb{R}^p$ is defined as $\|x\|_q = \sqrt[q]{\sum_{i=1}^p |x_i|^q}$.
  • Each scalar product generates a norm by $\|x\| = \sqrt{\langle x,x\rangle}$; therefore $\|f\| = \sqrt{\int_a^b f^2(x)\,dx}$ is a norm for the polynomials of degree at most $p$.
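
A quick numerical check of the $L_q$ norm (a sketch; NumPy's norm accepts the order $q$ directly, and the vector is arbitrary):

```python
import numpy as np

x = np.array([3.0, -4.0])

# ||x||_q = (sum_i |x_i|^q)^(1/q)
print(np.linalg.norm(x, ord=1))                # 7.0 (L1 norm)
print(np.linalg.norm(x, ord=2))                # 5.0 (L2, Euclidean norm)
print((np.abs(x) ** 3).sum() ** (1.0 / 3.0))   # L3 norm, computed directly
```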

Orthogonality

Two vectors $x$ and $y$ are orthogonal to each other if $\langle x,y\rangle = 0$. In $\mathbb{R}^p$ it holds that the cosine of the angle between two vectors can be expressed as

$\cos(\angle(x,y)) = \dfrac{\langle x,y\rangle}{\|x\|\,\|y\|}.$

If the angle between $x$ and $y$ is ninety degrees (orthogonal), then the cosine is zero and it follows that $\langle x,y\rangle = 0$.

A set of vectors $x_1,\dots,x_p$ is called orthonormal, if

$\langle x_i, x_j\rangle = \begin{cases} 0 & \text{if } i\ne j \\ 1 & \text{if } i=j. \end{cases}$

If we consider a basis $e_1,\dots,e_p$ of a vector space, then we would like to have an orthonormal basis. Why?

Since we have a basis, each pair of vectors $x$ and $y$ can be expressed as $x=\alpha_1 e_1+\dots+\alpha_p e_p$ and $y=\beta_1 e_1+\dots+\beta_p e_p$. Therefore the scalar product of $x$ and $y$ reduces to

$\begin{aligned} \langle x,y\rangle &= \langle \alpha_1 e_1+\dots+\alpha_p e_p,\; \beta_1 e_1+\dots+\beta_p e_p\rangle \\ &= \sum_{i=1}^p \sum_{j=1}^p \alpha_i\beta_j \langle e_i,e_j\rangle \\ &= \sum_{i=1}^p \alpha_i\beta_i \langle e_i,e_i\rangle \\ &= \alpha_1\beta_1 + \dots + \alpha_p\beta_p. \end{aligned}$

Consequently, the computation of a scalar product is reduced to simple multiplication and addition if the coefficients are known. Remember that for our polynomials we would have to solve an integral!
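
This reduction is easy to verify numerically (a sketch: a random orthonormal basis of $\mathbb{R}^5$ is obtained here from a QR decomposition):

```python
import numpy as np

rng = np.random.default_rng(0)

# Columns of E form an orthonormal basis e_1, ..., e_5 of R^5.
E, _ = np.linalg.qr(rng.normal(size=(5, 5)))

alpha = rng.normal(size=5)   # coefficients of x in this basis
beta = rng.normal(size=5)    # coefficients of y in this basis
x = E @ alpha
y = E @ beta

# <x, y> reduces to alpha_1*beta_1 + ... + alpha_p*beta_p.
print(np.allclose(x @ y, alpha @ beta))  # True
```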

Gram-Schmidt orthogonalization

Algorithm

The aim of Gram-Schmidt orthogonalization is to find, for a set of vectors $x_1,\dots,x_p$, an equivalent set of orthonormal vectors $o_1,\dots,o_p$ such that any vector which can be expressed as a linear combination of $x_1,\dots,x_p$ can also be expressed as a linear combination of $o_1,\dots,o_p$:

1. Set $b_1 = x_1$ and $o_1 = b_1/\|b_1\|$

2. For each $i>1$ set $b_i = x_i - \sum_{j=1}^{i-1} \dfrac{\langle x_i, b_j\rangle}{\langle b_j, b_j\rangle}\, b_j$ and $o_i = b_i/\|b_i\|$. In each step the vector $x_i$ is projected onto $b_j$ and the result is subtracted from $x_i$.
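
A minimal NumPy sketch of this classical algorithm; since $o_j = b_j/\|b_j\|$, the projection $\frac{\langle x_i,b_j\rangle}{\langle b_j,b_j\rangle} b_j$ is written equivalently as $\langle x_i,o_j\rangle\, o_j$:

```python
import numpy as np

def gram_schmidt(X):
    """Classical Gram-Schmidt: orthonormalize the columns of X."""
    X = np.asarray(X, dtype=float)
    O = np.zeros_like(X)
    for i in range(X.shape[1]):
        b = X[:, i].copy()
        for j in range(i):
            # Project the ORIGINAL x_i onto o_j and subtract.
            b -= (X[:, i] @ O[:, j]) * O[:, j]
        O[:, i] = b / np.linalg.norm(b)
    return O

O = gram_schmidt(np.array([[1.0, 1.0],
                           [0.0, 1.0]]))
print(np.round(O.T @ O, 10))  # identity matrix: the columns are orthonormal
```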

Example

Consider the polynomials of degree at most two on the interval $[-1,1]$ with the scalar product $\langle f,g\rangle = \int_{-1}^1 f(x)g(x)\,dx$ and the norm $\|f\| = \sqrt{\langle f,f\rangle}$. We know that $f_1(x)=1$, $f_2(x)=x$ and $f_3(x)=x^2$ form a basis of this vector space. Let us now construct an orthonormal basis:

Step 1a: $b_1(x) = f_1(x) = 1$

Step 1b: $o_1(x) = \dfrac{b_1(x)}{\|b_1(x)\|} = \dfrac{1}{\sqrt{\langle b_1(x), b_1(x)\rangle}} = \dfrac{1}{\sqrt{\int_{-1}^1 1\,dx}} = \dfrac{1}{\sqrt{2}}$

Step 2a: $b_2(x) = f_2(x) - \dfrac{\langle f_2(x), b_1(x)\rangle}{\langle b_1(x), b_1(x)\rangle}\, b_1(x) = x - \dfrac{\int_{-1}^1 x\cdot 1\,dx}{2}\, 1 = x - \dfrac{0}{2}\, 1 = x$

Step 2b: $o_2(x) = \dfrac{b_2(x)}{\|b_2(x)\|} = \dfrac{x}{\sqrt{\langle b_2(x), b_2(x)\rangle}} = \dfrac{x}{\sqrt{\int_{-1}^1 x^2\,dx}} = \dfrac{x}{\sqrt{2/3}} = x\sqrt{3/2}$

Step 3a: $b_3(x) = f_3(x) - \dfrac{\langle f_3(x), b_1(x)\rangle}{\langle b_1(x), b_1(x)\rangle}\, b_1(x) - \dfrac{\langle f_3(x), b_2(x)\rangle}{\langle b_2(x), b_2(x)\rangle}\, b_2(x) = x^2 - \dfrac{\int_{-1}^1 x^2\cdot 1\,dx}{2}\, 1 - \dfrac{\int_{-1}^1 x^2\cdot x\,dx}{2/3}\, x = x^2 - \dfrac{2/3}{2}\, 1 - \dfrac{0}{2/3}\, x = x^2 - 1/3$

Step 3b: $o_3(x) = \dfrac{b_3(x)}{\|b_3(x)\|} = \dfrac{x^2-1/3}{\sqrt{\langle b_3(x), b_3(x)\rangle}} = \dfrac{x^2-1/3}{\sqrt{\int_{-1}^1 (x^2-1/3)^2\,dx}} = \dfrac{x^2-1/3}{\sqrt{\int_{-1}^1 x^4 - (2/3)x^2 + 1/9\,dx}} = \dfrac{x^2-1/3}{\sqrt{8/45}} = \sqrt{\dfrac{5}{8}}\,(3x^2-1)$

It can be proven that $1/\sqrt{2}$, $x\sqrt{3/2}$ and $\sqrt{5/8}\,(3x^2-1)$ form an orthonormal basis with the above scalar product and norm.
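
We can confirm this symbolically (a sketch using SymPy; the three polynomials are the $o_1, o_2, o_3$ just derived):

```python
import sympy as sp

x = sp.symbols('x')
o = [1 / sp.sqrt(2),
     x * sp.sqrt(sp.Rational(3, 2)),
     sp.sqrt(sp.Rational(5, 8)) * (3 * x ** 2 - 1)]

# <o_i, o_j> = integral_{-1}^{1} o_i(x)*o_j(x) dx: 1 on the diagonal, else 0.
for i, f in enumerate(o):
    for j, g in enumerate(o):
        print(i, j, sp.integrate(f * g, (x, -1, 1)))
```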

Numerical instability

Consider the vectors $x_1=(1,\epsilon,0,0)$, $x_2=(1,0,\epsilon,0)$ and $x_3=(1,0,0,\epsilon)$. Assume that $\epsilon$ is so small that $1+\epsilon^2=1$ holds numerically on a computer (see http://en.wikipedia.org/wiki/Machine_epsilon). Let us compute an orthonormal basis for these vectors in $\mathbb{R}^4$ with the standard scalar product $\langle x,y\rangle = x_1y_1+x_2y_2+x_3y_3+x_4y_4$ and the norm $\|x\| = \sqrt{x_1^2+x_2^2+x_3^2+x_4^2}$.

Step 1a. $b_1 = x_1 = (1,\epsilon,0,0)$

Step 1b. $o_1 = \dfrac{b_1}{\|b_1\|} = \dfrac{b_1}{\sqrt{1+\epsilon^2}} = b_1$ with $\sqrt{1+\epsilon^2}=1$

Step 2a. $b_2 = x_2 - \dfrac{\langle x_2,b_1\rangle}{\langle b_1,b_1\rangle}\, b_1 = (1,0,\epsilon,0) - \dfrac{1}{1+\epsilon^2}(1,\epsilon,0,0) = (0,-\epsilon,\epsilon,0)$

Step 2b. $o_2 = \dfrac{b_2}{\|b_2\|} = \dfrac{b_2}{\sqrt{2\epsilon^2}} = \left(0,-\tfrac{1}{\sqrt{2}},\tfrac{1}{\sqrt{2}},0\right)$

Step 3a. $b_3 = x_3 - \dfrac{\langle x_3,b_1\rangle}{\langle b_1,b_1\rangle}\, b_1 - \dfrac{\langle x_3,b_2\rangle}{\langle b_2,b_2\rangle}\, b_2 = (1,0,0,\epsilon) - \dfrac{1}{1+\epsilon^2}(1,\epsilon,0,0) - \dfrac{0}{2\epsilon^2}(0,-\epsilon,\epsilon,0) = (0,-\epsilon,0,\epsilon)$

Step 3b. $o_3 = \dfrac{b_3}{\|b_3\|} = \dfrac{b_3}{\sqrt{2\epsilon^2}} = \left(0,-\tfrac{1}{\sqrt{2}},0,\tfrac{1}{\sqrt{2}}\right)$

It is obvious that for the vectors

- $o_1 = (1,\epsilon,0,0)$

- $o_2 = \left(0,-\tfrac{1}{\sqrt{2}},\tfrac{1}{\sqrt{2}},0\right)$

- $o_3 = \left(0,-\tfrac{1}{\sqrt{2}},0,\tfrac{1}{\sqrt{2}}\right)$

the scalar product $\langle o_2,o_3\rangle = 1/2 \ne 0$. All other pairs are not exactly zero either, but they are multiplied by $\epsilon$, so the result is near zero.
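
The loss of orthogonality is easy to reproduce (a sketch; $\epsilon = 10^{-10}$ is small enough that $1+\epsilon^2$ rounds to $1$ in double precision):

```python
import numpy as np

eps = 1e-10  # 1 + eps**2 rounds to 1.0 in double precision

# Columns are x_1, x_2, x_3 from the example above.
X = np.array([[1.0, 1.0, 1.0],
              [eps, 0.0, 0.0],
              [0.0, eps, 0.0],
              [0.0, 0.0, eps]])

O = np.zeros_like(X)
for i in range(3):
    b = X[:, i].copy()
    for j in range(i):
        b -= (X[:, i] @ O[:, j]) * O[:, j]  # classical: project the original x_i
    O[:, i] = b / np.linalg.norm(b)

print(O[:, 1] @ O[:, 2])  # ~0.5 instead of 0
```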

Modified Gram-Schmidt

To solve this problem, a modified Gram-Schmidt algorithm is used:

  1. Set $b_i = x_i$ for all $i$
  2. For each $i$ from $1$ to $n$ compute
    1. $o_i = \dfrac{b_i}{\|b_i\|}$
    2. for each $j$ from $i+1$ to $n$ compute $b_j = b_j - \langle b_j, o_i\rangle\, o_i$

The difference is that we compute the new $o_i$ first and immediately subtract its component from all remaining vectors $b_j$. Thus even a wrongly computed $o_i$ is taken into account in all following vectors, instead of each $b_i$ being computed separately from the original $x_i$.
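
A sketch of the modified algorithm in NumPy; rerun on the vectors from the instability example, it keeps $\langle o_2, o_3\rangle$ at zero:

```python
import numpy as np

def modified_gram_schmidt(X):
    """Modified Gram-Schmidt: orthonormalize the columns of X."""
    B = np.array(X, dtype=float)  # working copy of the b_i
    n = B.shape[1]
    for i in range(n):
        B[:, i] /= np.linalg.norm(B[:, i])            # o_i = b_i / ||b_i||
        for j in range(i + 1, n):
            # Subtract the o_i component from the RUNNING b_j, not from x_j.
            B[:, j] -= (B[:, j] @ B[:, i]) * B[:, i]
    return B

eps = 1e-10
X = np.array([[1.0, 1.0, 1.0],
              [eps, 0.0, 0.0],
              [0.0, eps, 0.0],
              [0.0, 0.0, eps]])
O = modified_gram_schmidt(X)
print(O[:, 1] @ O[:, 2])  # ~0: orthogonality is preserved
```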

Example (recomputed)

Step 1. $b_1 = (1,\epsilon,0,0)$, $b_2 = (1,0,\epsilon,0)$, $b_3 = (1,0,0,\epsilon)$

Step 2a. $o_1 = \dfrac{b_1}{\|b_1\|} = \dfrac{b_1}{\sqrt{1+\epsilon^2}} = b_1 = (1,\epsilon,0,0)$ with $\sqrt{1+\epsilon^2}=1$

Step 2b. $b_2 = b_2 - \langle b_2, o_1\rangle\, o_1 = (1,0,\epsilon,0) - (1,\epsilon,0,0) = (0,-\epsilon,\epsilon,0)$

Step 2c. $b_3 = b_3 - \langle b_3, o_1\rangle\, o_1 = (1,0,0,\epsilon) - (1,\epsilon,0,0) = (0,-\epsilon,0,\epsilon)$

Step 3a. $o_2 = \dfrac{b_2}{\|b_2\|} = \dfrac{b_2}{\sqrt{2\epsilon^2}} = \left(0,-\tfrac{1}{\sqrt{2}},\tfrac{1}{\sqrt{2}},0\right)$

Step 3b. $b_3 = b_3 - \langle b_3, o_2\rangle\, o_2 = (0,-\epsilon,0,\epsilon) - \tfrac{\epsilon}{\sqrt{2}}\left(0,-\tfrac{1}{\sqrt{2}},\tfrac{1}{\sqrt{2}},0\right) = (0,-\epsilon/2,-\epsilon/2,\epsilon)$

Step 4a. $o_3 = \dfrac{b_3}{\|b_3\|} = \dfrac{b_3}{\sqrt{3\epsilon^2/2}} = \left(0,-\tfrac{1}{\sqrt{6}},-\tfrac{1}{\sqrt{6}},\tfrac{2}{\sqrt{6}}\right)$

We can easily verify that $\langle o_2, o_3\rangle = 0$.


Application

Exploratory Project Pursuit

In the analysis of high-dimensional data we usually analyze projections of the data. The approach results from the Cramér-Wold theorem, which states that a multidimensional distribution is fixed if we know all one-dimensional projections. Another theorem states that most (one-dimensional) projections of multivariate data look normal, even if the multivariate distribution of the data is highly non-normal.

Therefore, in exploratory projection pursuit we judge the interestingness of a projection by comparison with a (standard) normal distribution. If we assume that the one-dimensional data $x$ are standard normally distributed, then after the transformation $z = 2\Phi(x) - 1$, with $\Phi(x)$ the cumulative distribution function of the standard normal distribution, $z$ is uniformly distributed in the interval $[-1;1]$.

Thus the interestingness can be measured by $\int_{-1}^1 (f(z) - 1/2)^2\,dz$ with $f(z)$ a density estimated from the data. If the density $f(z)$ is equal to $1/2$ in the interval $[-1;1]$, then the integral becomes zero and we have found that our projected data are normally distributed. A value larger than zero indicates a deviation from the normal distribution of the projected data and, hopefully, an interesting distribution.

Expansion with orthonormal polynomials

Let $L_i(z)$ be a set of orthonormal polynomials with the scalar product $\langle f,g\rangle = \int_{-1}^1 f(z)g(z)\,dz$ and the norm $\|f\| = \sqrt{\langle f,f\rangle}$. What can we derive about a density $f(z)$ in the interval $[-1;1]$?

If $f(z) = \sum_{i=0}^I a_i L_i(z)$ for some maximal degree $I$, then it holds that

$\int_{-1}^1 f(z) L_j(z)\,dz = \int_{-1}^1 \sum_{i=0}^I a_i L_i(z) L_j(z)\,dz = a_j \int_{-1}^1 L_j(z) L_j(z)\,dz = a_j$

We can also write $\int_{-1}^1 f(z) L_j(z)\,dz = E(L_j(z))$, or empirically we get the estimator $\hat{a}_j = \frac{1}{n}\sum_{k=1}^n L_j(z_k)$.

We expand the term $1/2 = \sum_{i=0}^I b_i L_i(z)$ and get for our integral

$\int_{-1}^1 (f(z)-1/2)^2\,dz = \int_{-1}^1 \left(\sum_{i=0}^I (a_i-b_i) L_i(z)\right)^2 dz = \sum_{i,j=0}^I \int_{-1}^1 (a_i-b_i)(a_j-b_j) L_i(z) L_j(z)\,dz = \sum_{i=0}^I (a_i-b_i)^2.$

So using an orthonormal function set allows us to reduce the integral to a sum of coefficients which can be estimated from the data by plugging $\hat{a}_j$ into the formula above. The coefficients $b_i$ can be precomputed in advance.

Normalized Legendre polynomials

The only problem left is to find the set of orthonormal polynomials $L_i(z)$ up to degree $I$. We know that $1, x, x^2, \dots, x^I$ form a basis of this space, and we have to apply the Gram-Schmidt orthogonalization to find the orthonormal polynomials. This was started in the first example.

The resulting polynomials are called normalized Legendre polynomials. Up to a scaling factor, the normalized Legendre polynomials are identical to the Legendre polynomials. The Legendre polynomials have a recursive expression of the form

$L_i(z) = \dfrac{(2i-1)\, z\, L_{i-1}(z) - (i-1)\, L_{i-2}(z)}{i}$

So computing our integral reduces to computing $L_0(z_k)$ and $L_1(z_k)$ and using the recursive relationship to compute the $\hat{a}_j$'s. Please note that the recursion can be numerically unstable!
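
Putting the pieces together, here is a sketch of the index computation (the sample and degree are made up; $\Phi$ is taken as scipy.stats.norm.cdf, and the normalization $L_i = \sqrt{(2i+1)/2}\,P_i$ of the standard Legendre polynomials $P_i$ is assumed):

```python
import numpy as np
from scipy.stats import norm

def normalized_legendre(z, degree):
    """Values of L_0, ..., L_degree at the points z via the recursion."""
    z = np.asarray(z, dtype=float)
    P = [np.ones_like(z), z]  # standard Legendre: P_0 = 1, P_1 = z
    for i in range(2, degree + 1):
        P.append(((2 * i - 1) * z * P[i - 1] - (i - 1) * P[i - 2]) / i)
    # Normalize so that the integral of L_i^2 over [-1, 1] equals 1.
    return [np.sqrt((2 * i + 1) / 2) * P[i] for i in range(degree + 1)]

rng = np.random.default_rng(0)
x = rng.normal(size=10000)   # a projection that is in fact normal
z = 2 * norm.cdf(x) - 1      # uniform on [-1, 1] under normality

# a_hat_j = (1/n) sum_k L_j(z_k); here b_j = 0 for j >= 1 and a_hat_0 = b_0
# exactly, so the index reduces to the sum of squared a_hat_j for j >= 1.
a_hat = [L.mean() for L in normalized_legendre(z, degree=4)]
print(sum(a ** 2 for a in a_hat[1:]))  # near 0: the projection looks normal
```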
