Linear Algebra/Row Equivalence


Template:Navigation

We will close this section and this chapter by proving that every matrix is row equivalent to one and only one reduced echelon form matrix. The ideas that appear here will reappear, and be further developed, in the next chapter.

The underlying theme here is that one way to understand a mathematical situation is by being able to classify the cases that can happen. We have met this theme several times already. We have classified solution sets of linear systems into the no-elements, one-element, and infinitely-many elements cases. We have also classified linear systems with the same number of equations as unknowns into the nonsingular and singular cases. We adopted these classifications because they give us a way to understand the situations that we were investigating. Here, where we are investigating row equivalence, we know that the set of all matrices breaks into the row equivalence classes. When we finish the proof here, we will have a way to understand each of those classes— its matrices can be thought of as derived by row operations from the unique reduced echelon form matrix in that class.

To understand how row operations act to transform one matrix into another, we consider the effect that they have on the parts of a matrix. The crucial observation is that row operations combine the rows linearly.
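To make the observation concrete, here is a small Python sketch (not part of the original text; the example matrix and the helper combine are invented for the illustration). It applies each of the three Gaussian operations and exhibits every resulting row as an explicit linear combination of the starting rows.

    # Illustration: each Gaussian row operation yields rows that are
    # linear combinations of the rows it started from.
    def combine(coefficients, row_list):
        """Return the linear combination c1*row1 + c2*row2 + ... of the given rows."""
        width = len(row_list[0])
        return [sum(c * r[j] for c, r in zip(coefficients, row_list))
                for j in range(width)]

    A = [[2, 3, 7], [0, 1, 5], [4, 0, 1]]

    # Swapping rows 1 and 2: each new row is still a combination of the old rows.
    swapped = [combine([0, 1, 0], A), combine([1, 0, 0], A), combine([0, 0, 1], A)]

    # Rescaling row 2 by 3: the new second row is 0*a1 + 3*a2 + 0*a3.
    rescaled = [A[0], combine([0, 3, 0], A), A[2]]

    # Adding -2 times row 1 to row 3: the new third row is -2*a1 + 0*a2 + 1*a3.
    pivoted = [A[0], A[1], combine([-2, 0, 1], A)]

    print(pivoted[2])    # [0, -6, -13]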

Template:TextBox

(We have already used the phrase "linear combination" in this book. The meaning is unchanged, but the next result's statement makes a more formal definition in order.)

Template:TextBox

Template:TextBox

In this subsection we will use the convention that, where a matrix is named with an upper case roman letter, the matching lower-case greek letter names the rows.

<math>A=\begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_m \end{pmatrix} \qquad B=\begin{pmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_m \end{pmatrix}</math>

Template:TextBox

The proof below uses induction on the number of row operations used to reduce one matrix to the other. Before we proceed, here is an outline of the argument (readers unfamiliar with induction may want to compare this argument with the one used in the "General=Particular+Homogeneous" proof).[1] First, for the base step of the argument, we will verify that the proposition is true when reduction can be done in zero row operations. Second, for the inductive step, we will argue that if being able to reduce the first matrix to the second in some number <math>t \geq 0</math> of operations implies that each row of the second is a linear combination of the rows of the first, then being able to reduce the first to the second in <math>t+1</math> operations implies the same thing. Together, the base step and the inductive step prove this result: by the base step the proposition is true in the zero operations case; by the inductive step, truth in the zero operations case implies truth in the one operation case; applying the inductive step again gives the two operations case, and so on.
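The conclusion of that argument can also be watched numerically. The following Python sketch (not from the text; the matrix and the function add_multiple are made up for the illustration) carries a coefficient matrix along with each row operation, so that every row of the result is visibly a linear combination of the rows of the original.

    from fractions import Fraction

    A = [[Fraction(x) for x in row] for row in [[1, 2, 1], [2, 5, 0], [0, 1, 3]]]
    m, n = len(A), len(A[0])

    B = [row[:] for row in A]                                          # working copy
    C = [[Fraction(int(i == j)) for j in range(m)] for i in range(m)]  # so far, B = C*A

    def add_multiple(k, i, j):
        """Replace row j by k*(row i) + (row j), in the matrix and in the coefficients."""
        B[j] = [k * x + y for x, y in zip(B[i], B[j])]
        C[j] = [k * x + y for x, y in zip(C[i], C[j])]

    add_multiple(Fraction(-2), 0, 1)    # add -2 times row 1 to row 2
    add_multiple(Fraction(-1), 1, 2)    # add -1 times row 2 to row 3

    # Each row of the reduced matrix is the linear combination of A's rows
    # whose coefficients sit in the corresponding row of C.
    for i in range(m):
        rebuilt = [sum(C[i][k] * A[k][j] for k in range(m)) for j in range(n)]
        assert rebuilt == B[i]
    print(C[2])    # [2, -1, 1]: the final third row is 2*(row 1 of A) - (row 2 of A) + (row 3 of A)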

Template:TextBox

Template:TextBox

The prior result gives us the insight that Gauss' method works by taking linear combinations of the rows. But to what end; why do we go to echelon form as a particularly simple, or basic, version of a linear system? The answer, of course, is that echelon form is suitable for back substitution, because we have isolated the variables. For instance, in this matrix

<math>R=\begin{pmatrix} 2&3&7&8&0&0 \\ 0&0&1&5&1&1 \\ 0&0&0&3&3&0 \\ 0&0&0&0&2&1 \end{pmatrix}</math>

<math>x_1</math> has been removed from <math>x_5</math>'s equation. That is, Gauss' method has made <math>x_5</math>'s row independent of <math>x_1</math>'s row.

Independence of a collection of row vectors, or of any kind of vectors, will be precisely defined and explored in the next chapter. But a first take on it is that we can show that, say, the third row above is not a combination of the other rows, that <math>\rho_3 \neq c_1\rho_1 + c_2\rho_2 + c_4\rho_4</math>. For, suppose that there are scalars <math>c_1</math>, <math>c_2</math>, and <math>c_4</math> such that this relationship holds.

<math>\begin{pmatrix} 0&0&0&3&3&0 \end{pmatrix} = c_1\begin{pmatrix} 2&3&7&8&0&0 \end{pmatrix} + c_2\begin{pmatrix} 0&0&1&5&1&1 \end{pmatrix} + c_4\begin{pmatrix} 0&0&0&0&2&1 \end{pmatrix}</math>

The first row's leading entry is in the first column, and narrowing the above relationship to only the entries from the first column, <math>0=2c_1+0c_2+0c_4</math>, gives that <math>c_1=0</math>. The second row's leading entry is in the third column, and the equation of entries in that column, <math>0=7c_1+1c_2+0c_4</math>, along with the knowledge that <math>c_1=0</math>, gives that <math>c_2=0</math>. Now, to finish, the third row's leading entry is in the fourth column, and the equation of entries in that column, <math>3=8c_1+5c_2+0c_4</math>, along with <math>c_1=0</math> and <math>c_2=0</math>, gives an impossibility.
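Here is the same column-by-column check written out as a short Python sketch (not part of the text; the names rho1 through rho4 simply mirror the rows of <math>R</math> above).

    from fractions import Fraction

    rho1 = [2, 3, 7, 8, 0, 0]
    rho2 = [0, 0, 1, 5, 1, 1]
    rho3 = [0, 0, 0, 3, 3, 0]
    rho4 = [0, 0, 0, 0, 2, 1]   # its entries in columns 1, 3, and 4 are zero, so c4 never enters

    # Column 1 (rho1's leading entry): rho3[0] = c1*rho1[0], so c1 = 0/2 = 0.
    c1 = Fraction(rho3[0], rho1[0])
    # Column 3 (rho2's leading entry): rho3[2] = c1*rho1[2] + c2*rho2[2], so c2 = 0.
    c2 = rho3[2] - c1 * rho1[2]
    # Column 4 (rho3's leading entry): we would need 3 = 8*c1 + 5*c2 + 0*c4 = 0,
    # and no choice of c4 can repair that.
    assert rho3[3] != c1 * rho1[3] + c2 * rho2[3]
    print("rho3 is not a linear combination of rho1, rho2, and rho4")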

The following result shows that this effect always holds. It shows that what Gauss' linear elimination method eliminates is linear relationships among the rows.

Template:TextBox

Template:TextBox

We can now prove that each matrix is row equivalent to one and only one reduced echelon form matrix. We will find it convenient to break the first half of the argument off as a preliminary lemma. For one thing, it holds for any echelon form whatever, not just reduced echelon form.

Template:TextBox

For the proof we rephrase the result in more technical terms. Template:Anchor Define the form of an <math>m \times n</math> matrix to be the sequence <math>\langle \ell_1, \ell_2, \ldots, \ell_m \rangle</math> where <math>\ell_i</math> is the column number of the leading entry in row <math>i</math>, and <math>\ell_i = \infty</math> if there is no leading entry in that row. The lemma says that if two echelon form matrices are row equivalent then their forms are equal sequences.
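As a computational aside (not in the original text), the form of a matrix is easy to read off. In this Python sketch the function name form and the numbering of columns from 1 are choices made for the illustration; infinity stands in for a row with no leading entry.

    import math

    def form(matrix):
        """Return the sequence of leading-entry column numbers, one per row."""
        result = []
        for row in matrix:
            leading = next((j for j, entry in enumerate(row) if entry != 0), None)
            result.append(leading + 1 if leading is not None else math.inf)
        return result

    R = [[2, 3, 7, 8, 0, 0],
         [0, 0, 1, 5, 1, 1],
         [0, 0, 0, 3, 3, 0],
         [0, 0, 0, 0, 2, 1]]
    print(form(R))                     # [1, 3, 4, 5]
    print(form([[1, 2], [0, 0]]))      # [1, inf]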

Template:TextBox

That lemma answers two of the questions that we have posed: (i) any two echelon form versions of a matrix have the same free variables, and consequently (ii) any two echelon form versions have the same number of free variables. There is no linear system and no combination of row operations such that, say, we could solve the system one way and get <math>y</math> and <math>z</math> free but solve it another way and get <math>y</math> and <math>w</math> free, or solve it one way and get two free variables while solving it another way yields three.

We finish now by specializing to the case of reduced echelon form matrices.

Template:TextBox

Template:TextBox

We end with a recap. In Gauss' method we start with a matrix and then derive a sequence of other matrices. We defined two matrices to be related if one can be derived from the other. That relation is an equivalence relation, called row equivalence, and so partitions the set of all matrices into row equivalence classes.

(There are infinitely many matrices in the pictured class, but we've only got room to show two.) We have proved there is one and only one reduced echelon form matrix in each row equivalence class. Template:Anchor So the reduced echelon form is a canonical form[2] for row equivalence: the reduced echelon form matrices are representatives of the classes.

We can answer questions about the classes by translating them into questions about the representatives.
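For instance, to decide whether two matrices are row equivalent we can compare their representatives. The sketch below is a plain Gauss-Jordan reduction written in Python for this illustration (the names reduce and row_equivalent, and the example matrices, are not from the text); exact fractions avoid rounding trouble.

    from fractions import Fraction

    def reduce(matrix):
        """Return the reduced echelon form of the matrix, using exact arithmetic."""
        M = [[Fraction(x) for x in row] for row in matrix]
        rows, cols = len(M), len(M[0])
        pivot_row = 0
        for col in range(cols):
            # Find a row at or below pivot_row with a nonzero entry in this column.
            pivot = next((r for r in range(pivot_row, rows) if M[r][col] != 0), None)
            if pivot is None:
                continue
            M[pivot_row], M[pivot] = M[pivot], M[pivot_row]      # swap it up
            lead = M[pivot_row][col]
            M[pivot_row] = [x / lead for x in M[pivot_row]]      # rescale the lead to 1
            for r in range(rows):                                # clear the rest of the column
                if r != pivot_row and M[r][col] != 0:
                    factor = M[r][col]
                    M[r] = [a - factor * b for a, b in zip(M[r], M[pivot_row])]
            pivot_row += 1
            if pivot_row == rows:
                break
        return M

    def row_equivalent(A, B):
        """Same-sized matrices are row equivalent iff they share a reduced echelon form."""
        return reduce(A) == reduce(B)

    A = [[1, 2, 3], [2, 4, 8]]
    B = [[3, 6, 9], [1, 2, 5]]       # row equivalent to A
    C = [[1, 2, 3], [0, 0, 0]]       # not row equivalent to A
    print(row_equivalent(A, B))      # True
    print(row_equivalent(A, C))      # False

Comparing representatives in this way is exactly the translation described above: a question about a class becomes a question about its unique reduced echelon form matrix.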

Template:TextBox

Template:TextBox

Template:TextBox

Exercises

Template:Linear Algebra/Book 2/Recommended Template:TextBox Template:TextBox Template:TextBox Template:TextBox Template:TextBox Template:TextBox Template:Linear Algebra/Book 2/Recommended Template:TextBox Template:Linear Algebra/Book 2/Recommended Template:TextBox Template:Linear Algebra/Book 2/Recommended Template:TextBox Template:TextBox Template:Linear Algebra/Book 2/Recommended Template:TextBox Template:TextBox Template:TextBox Template:Linear Algebra/Book 2/Recommended Template:TextBox Template:TextBox Template:Linear Algebra/Book 2/Recommended Template:TextBox Template:TextBox Template:Linear Algebra/Book 2/Recommended Template:TextBox

/Solutions/

Footnotes

  1. More information on mathematical induction is in the appendix.
  2. More information on canonical representatives is in the appendix.

References

Template:Navigation

Template:BookCat