Solutions To Mathematics Textbooks/Algebra (9780132413770)/Chapter 3


Exercise 1.2

Using an "educated guess" (testing the values $1, 2, \dots, p-1$) one finds $5^{-1}$ in $\mathbb{F}_p$ from the condition $5 \cdot 5^{-1} = 1$. With this, it is easy to see that $5^{-1} = 3$ in $\mathbb{F}_7$, $5^{-1} = 9$ in $\mathbb{F}_{11}$, $5^{-1} = 8$ in $\mathbb{F}_{13}$ and $5^{-1} = 7$ in $\mathbb{F}_{17}$.
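These inverses are quick to confirm by brute force (and Python 3.8+ exposes modular inverses directly through the built-in `pow`):

```python
# Check the modular inverses of 5 found above by trying all candidates x.
for p, expected in [(7, 3), (11, 9), (13, 8), (17, 7)]:
    x = next(x for x in range(1, p) if (5 * x) % p == 1)
    assert x == expected
    # Python 3.8+: modular inverse directly via three-argument pow
    assert pow(5, -1, p) == expected
```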

Exercise 1.3

$(x^3+3x^2+3x+1)(x^4+4x^3+6x^2+4x+1) = x^7+7x^6+21x^5+35x^4+35x^3+21x^2+7x+1 \equiv x^7+1 \pmod 7$, as all the coefficients divisible by 7 reduce to 0.
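The reduction can be checked by multiplying the coefficient lists directly mod 7:

```python
def poly_mul_mod(a, b, m):
    """Multiply polynomials given as coefficient lists (constant term first), reducing mod m."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % m
    return out

p3 = [1, 3, 3, 1]        # (x+1)^3, constant term first
p4 = [1, 4, 6, 4, 1]     # (x+1)^4
print(poly_mul_mod(p3, p4, 7))  # → [1, 0, 0, 0, 0, 0, 0, 1], i.e. 1 + x^7
```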

Exercise 1.10

Let us denote the matrices (appearing in the same order as in the book) by 0,1,A,B. We need to check the following:

  • $\{0, 1, A, B\}$ is a group with matrix addition and $0$ as the identity. We get the following addition table:

+ 0 1 A B
0 0 1 A B
1 1 0 B A
A A B 0 1
B B A 1 0

So we see that the elements with addition form an abelian group with 0 as the identity.

  • $\{1, A, B\}$ is a group with matrix multiplication and $1$ as the identity. Again we have the multiplication table:

· 1 A B
1 1 A B
A A B 1
B B 1 A

Again, we see that the elements with matrix multiplication form an abelian group with 1 as the identity.

  • The distributive law follows from the distributive law for matrices in general.
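The four matrices themselves are not reproduced here; as a sketch, assuming they are the standard model of the four-element field as $2 \times 2$ matrices over $\mathbb{F}_2$ (with $B = A \cdot A$), both tables can be verified mechanically:

```python
# Assumed matrices over F_2 (an assumption, not necessarily the book's exact choice):
# Z = zero matrix, I = identity, A and B = A·A as below.
def add(X, Y):
    return tuple(tuple((x + y) % 2 for x, y in zip(rx, ry)) for rx, ry in zip(X, Y))

def mul(X, Y):
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

Z = ((0, 0), (0, 0))
I = ((1, 0), (0, 1))
A = ((0, 1), (1, 1))
B = ((1, 1), (1, 0))

# spot-check the addition table: A + A = 0, A + B = 1, 1 + A = B
assert add(A, A) == Z and add(A, B) == I and add(I, A) == B
# spot-check the multiplication table: A·A = B, A·B = 1, B·B = A
assert mul(A, A) == B and mul(A, B) == I and mul(B, B) == A
```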

Exercise 1.11

Writing out a product and a sum of two elements from the given set, and noticing that the coefficients of both elements, and thus of their sums and products, lie in $\mathbb{F}_3$, shows that the sum and product are again in the set. That every element has an inverse under the $+$-operation is trivial. To see the same for the product operation, write the equations coming from the condition $z z^{-1} = 1$, where $z$ is a known non-zero element of the set and $z^{-1}$ a candidate for its inverse with unknown coefficients, as a linear system. By Corollary 3.2.8 this system has a solution. The distributive law is immediate.
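The set in question is not reproduced here; a minimal brute-force sketch, assuming the set is $\{a + bi : a, b \in \mathbb{F}_3\}$ with $i^2 = -1$ (note $x^2 + 1$ is irreducible over $\mathbb{F}_3$), confirms that every non-zero element has a unique multiplicative inverse:

```python
# Elements represented as pairs (a, b) meaning a + b·i with i² = -1, coefficients mod 3.
def mul(z, w):
    a, b = z
    c, d = w
    # (a + bi)(c + di) = (ac - bd) + (ad + bc)i
    return ((a * c - b * d) % 3, (a * d + b * c) % 3)

elements = [(a, b) for a in range(3) for b in range(3)]
for z in elements:
    if z != (0, 0):
        inverses = [w for w in elements if mul(z, w) == (1, 0)]
        assert len(inverses) == 1  # exactly one inverse for each non-zero element
```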

Exercise 2.2

a) The space of symmetric matrices is a vector space, since the sum of two symmetric matrices is symmetric, and any scalar multiple of a symmetric matrix is symmetric.

b) The space of invertible matrices is not a vector space, since it does not contain the zero matrix.

c) The space of upper triangular matrices is also a vector space by similar reasoning as used in part a).
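A concrete check for part b): the identity matrix and its negative are both invertible, yet their sum is the zero matrix, so the invertible matrices are not closed under addition.

```python
import numpy as np

I = np.eye(2)
assert np.linalg.det(I) != 0        # I is invertible
assert np.linalg.det(-I) != 0       # -I is invertible
assert np.linalg.det(I + (-I)) == 0  # but their sum, the zero matrix, is not
```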

Exercise 3.1

One possible basis for the space of symmetric matrices consists of the matrices $A_{ij}$, for $i = 1, \dots, n$ and $j \geq i$, that have zeros everywhere except ones in the $(i,j)$ and $(j,i)$ entries. There are $\binom{n+1}{2} = \frac{n(n+1)}{2}$ such matrices, and they are linearly independent, since no two of them have a one in the same entry. Furthermore, the matrices $A_{ij}$ are symmetric, and clearly any symmetric matrix can be written as a linear combination of the $A_{ij}$.
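The count and the decomposition can be checked for a small $n$; the coefficient of $A_{ij}$ is just the $(i,j)$ entry of the symmetric matrix:

```python
# Build the basis matrices A_ij (ones in entries (i,j) and (j,i), zeros elsewhere)
# for n = 4, check the count n(n+1)/2, and decompose an example symmetric matrix.
n = 4
basis = []
for i in range(n):
    for j in range(i, n):
        M = [[0] * n for _ in range(n)]
        M[i][j] = 1
        M[j][i] = 1
        basis.append(M)

assert len(basis) == n * (n + 1) // 2  # 10 matrices for n = 4

S = [[(i + 1) * (j + 1) for j in range(n)] for i in range(n)]  # an example symmetric matrix
recon = [[0] * n for _ in range(n)]
k = 0
for i in range(n):
    for j in range(i, n):
        for r in range(n):
            for c in range(n):
                recon[r][c] += S[i][j] * basis[k][r][c]  # coefficient = upper-triangle entry
        k += 1
assert recon == S  # S is the linear combination sum_{i<=j} S[i][j] * A_ij
```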

Exercise 3.7

Let $c_{ij} \in \mathbb{R}$ be coefficients such that

$\sum_{i,j} c_{ij} X_i Y_j^t = 0. \qquad (1)$

The matrix $\sum_{i,j} c_{ij} X_i Y_j^t$ has as its $k$th column the vector $\sum_{i,j} c_{ij} Y_{jk} X_i = \sum_i \left( \sum_j c_{ij} Y_{jk} \right) X_i$, where $Y_{jk}$ is the $k$th entry of the vector $Y_j$. Denote $\alpha_i = \sum_j c_{ij} Y_{jk}$; then (1), together with the condition that the vectors $X_i$ form a basis, implies that $\alpha_i = 0$ for all $i$. So we must have $\sum_j c_{ij} Y_{jk} = 0$ for all $k$ and $i$. This implies that $\sum_j c_{ij} Y_j = 0$ for all $i$, but since the vectors $Y_j$ form a basis, we must have $c_{ij} = 0$ for all $i$ and $j$.
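The claim can also be checked numerically for $n = 3$: stacking the flattened outer products $X_i Y_j^t$ into a $n^2 \times n^2$ matrix, linear independence is equivalent to full rank.

```python
import numpy as np

n = 3
X = np.array([[1., 0., 1.], [0., 1., 1.], [0., 0., 1.]])  # columns form a basis (det = 1)
Y = np.array([[1., 1., 0.], [0., 1., 0.], [1., 0., 1.]])  # columns form a basis (det = 1)
assert np.linalg.det(X) != 0 and np.linalg.det(Y) != 0

# flatten each outer product X_i Y_j^t into a row of an n² x n² matrix
rows = np.array([np.outer(X[:, i], Y[:, j]).ravel() for i in range(n) for j in range(n)])
assert np.linalg.matrix_rank(rows) == n * n  # full rank, i.e. the X_i Y_j^t are independent
```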

Exercise 3.8

Let $A$ be the matrix with the vectors $v_1, \dots, v_n$ as column vectors. Let $X, B \in F^n$. Then $AX = B$ is equivalent to saying that $B$ is a linear combination of the vectors $v_1, \dots, v_n$. By Theorem 1.2.21, $AX = B$ has a unique solution $X$ if and only if $A$ is invertible.

In particular, this means that also $AX = 0$ has the unique solution $X = (0, \dots, 0)$. This shows that 1) $v_1, \dots, v_n$ span the space $F^n$ and 2) $v_1, \dots, v_n$ are linearly independent.

Exercise 4.2

a) (1111).

b) $\begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}$.

c) (112032) or (112032).

Exercise 4.3

The given operations correspond to row operations on matrices. By Theorem 1.2.16, any matrix that is invertible can be reduced to the identity using such operations. In Exercise 3.8 we proved that the columns of a matrix form a basis if and only if the matrix is invertible.

Exercise 4.4

a) Any basis of $V$ corresponds to a matrix that is invertible, i.e., an element of $GL_2(\mathbb{F}_p)$. On the other hand, the column vectors of any element of $GL_2(\mathbb{F}_p)$ form a basis of $V$.

b) For $GL_2(\mathbb{F}_p)$: there are in total $p^4$ matrices in $\mathbb{F}_p^{2\times 2}$, of which we have to count the ones that are not invertible. Considering the columns of a non-invertible matrix in $\mathbb{F}_p^{2\times 2}$, we have:

  • $p^2-1$ choices of a first column that is not the $(0,0)^t$ vector, each combined with one of the $p-1$ scalings of the first column by a value other than zero as the second column.
  • If the first column is $(0,0)^t$, the second column can be chosen in $p^2-1$ ways such that it is not also the $(0,0)^t$ vector.
  • If the second column is $(0,0)^t$, the first column can likewise be chosen in $p^2-1$ ways.
  • There is exactly one matrix with both columns equal to $(0,0)^t$.

Combining these facts, we get that there are $p^4 - (p^2-1)(p-1) - (p^2-1) - (p^2-1) - 1 = p(p+1)(p-1)^2$ invertible matrices in $\mathbb{F}_p^{2\times 2}$.

For $SL_2(\mathbb{F}_p)$ we want to compute the number of matrices in $\mathbb{F}_p^{2\times 2}$ with determinant equal to $1$. In $GL_2(\mathbb{F}_p)$ there are equally many elements with determinant $1, 2, \dots, p-1$: the determinant is a surjective homomorphism onto $\mathbb{F}_p^\times$, so its fibres (the cosets of $SL_2(\mathbb{F}_p)$) all have the same size. Therefore, the number of elements in $GL_2(\mathbb{F}_p)$ is the number of elements with determinant $1$ times $p-1$. From the previous calculation we thus get that the number of elements in $SL_2(\mathbb{F}_p)$ is $p(p+1)(p-1)$.
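Both counts can be confirmed by brute force for small primes:

```python
# Count invertible and determinant-1 matrices in F_p^{2x2} for small p,
# and compare against the formulas p(p+1)(p-1)^2 and p(p+1)(p-1).
from itertools import product

for p in (2, 3, 5):
    gl = sl = 0
    for a, b, c, d in product(range(p), repeat=4):
        det = (a * d - b * c) % p
        if det != 0:
            gl += 1
        if det == 1:
            sl += 1
    assert gl == p * (p + 1) * (p - 1) ** 2  # |GL_2(F_p)|
    assert sl == p * (p + 1) * (p - 1)       # |SL_2(F_p)|
```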

Exercise 4.5

a) The key to finding the number of subspaces is to count the one-dimensional subspaces of $\mathbb{F}_p^3$.

  • Subspaces of dimension 0: 1.
  • Subspaces of dimension 1: Each such subspace is spanned by a non-zero vector $(a,b,c)^t$ with $a,b,c \in \mathbb{F}_p$. There are $p^3-1$ such vectors, and two of them span the same subspace exactly when one is one of the $p-1$ non-zero scalings of the other. Hence the number of one-dimensional subspaces is $(p^3-1)/(p-1) = p^2+p+1$.
  • Subspaces of dimension 2: Let $W$ be a set containing exactly one non-zero vector from each one-dimensional subspace of $\mathbb{F}_p^3$. We know that $|W| = p^2+p+1$, and any two distinct vectors from $W$ span a two-dimensional subspace of $\mathbb{F}_p^3$. We can choose two vectors from $W$ in $\binom{p^2+p+1}{2}$ ways, but this is not yet the number of two-dimensional subspaces. Indeed, a two-dimensional subspace $V$ of $\mathbb{F}_p^3$ contains $p^2$ points and hence $(p^2-1)/(p-1) = p+1$ one-dimensional subspaces, so $V$ arises from $\binom{p+1}{2}$ different pairs of vectors from $W$. Hence the number of two-dimensional subspaces of $\mathbb{F}_p^3$ is $\binom{p^2+p+1}{2} / \binom{p+1}{2} = p^2+p+1$. Another way to arrive at the same conclusion is as follows: every two-dimensional subspace of $\mathbb{F}_p^3$ is the kernel of a non-zero linear functional, unique up to scaling, so the two-dimensional subspaces are in bijection with the one-dimensional subspaces of the dual space, of which there are again $p^2+p+1$.
  • Subspaces of dimension 3: 1.

b) The case of 𝔽p4 can be generalised from the previous case:

  • Subspaces of dimension 0: 1.
  • Subspaces of dimension 1: The number of one-dimensional subspaces can be calculated similarly as in a), and we get $(p^4-1)/(p-1) = p^3+p^2+p+1$.
  • Subspaces of dimension 2: Similarly as in the case of $\mathbb{F}_p^3$, each two-dimensional subspace contains $p+1$ one-dimensional subspaces, so the number of two-dimensional subspaces is $\binom{p^3+p^2+p+1}{2} / \binom{p+1}{2} = (p^2+1)(p^2+p+1)$.
  • Subspaces of dimension 3: As in a), the three-dimensional subspaces are the kernels of non-zero linear functionals, so they correspond to the one-dimensional subspaces of the dual space. Therefore the number of three-dimensional subspaces is $p^3+p^2+p+1$.
  • Subspaces of dimension 4: 1.
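The counts above can be confirmed for $p = 2$ by enumerating spans directly:

```python
# Count k-dimensional subspaces of F_2^n by collecting distinct spans,
# and compare with the formulas derived above (at p = 2).
from itertools import product

def spans(n, k):
    """All k-dimensional subspaces of F_2^n, each as a frozenset of its vectors."""
    vectors = list(product(range(2), repeat=n))
    subspaces = set()
    for gens in product(vectors, repeat=k):
        # span of the chosen generators over F_2
        span = {tuple(sum(c * g[i] for c, g in zip(coeffs, gens)) % 2 for i in range(n))
                for coeffs in product(range(2), repeat=k)}
        if len(span) == 2 ** k:  # generators were linearly independent
            subspaces.add(frozenset(span))
    return subspaces

p = 2
assert len(spans(3, 1)) == p**2 + p + 1                  # 7 lines in F_2^3
assert len(spans(3, 2)) == p**2 + p + 1                  # 7 planes in F_2^3
assert len(spans(4, 2)) == (p**2 + 1) * (p**2 + p + 1)   # 35 planes in F_2^4
assert len(spans(4, 3)) == p**3 + p**2 + p + 1           # 15 hyperplanes in F_2^4
```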

Exercise 5.1

Let $V$ be the space of symmetric and $W$ the space of skew-symmetric matrices. It is clear that $\dim V = \frac{1}{2}n(n+1)$ and $\dim W = \frac{1}{2}n(n-1)$, and that $V \cap W$ contains only the zero matrix, so the spaces are independent. By Proposition 3.6.4 b), we have $\dim(V+W) = \dim V + \dim W = n^2 = \dim \mathbb{R}^{n \times n}$, and so by Proposition 3.4.23, $V + W = \mathbb{R}^{n \times n}$.
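The dimension count above is abstract; concretely, the decomposition $M = \frac{1}{2}(M + M^t) + \frac{1}{2}(M - M^t)$ exhibits any square matrix as a sum of a symmetric and a skew-symmetric matrix:

```python
import numpy as np

n = 4
rng = np.random.default_rng(1)
M = rng.standard_normal((n, n))
sym = (M + M.T) / 2    # symmetric part, lies in V
skew = (M - M.T) / 2   # skew-symmetric part, lies in W
assert np.allclose(sym, sym.T)
assert np.allclose(skew, -skew.T)
assert np.allclose(sym + skew, M)  # so V + W is all of R^{n x n}
# the dimension count matches: n(n+1)/2 + n(n-1)/2 = n^2
assert n * (n + 1) // 2 + n * (n - 1) // 2 == n * n
```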

Exercise 5.2

The condition $\mathrm{Tr}(M) = 0$ imposes one linear relation between the entries of the matrix. Therefore $\dim(W_1) = n^2 - 1$, and thus any one-dimensional subspace of $\mathbb{R}^{n \times n}$ that is independent of $W_1$ suffices. For example, we can take as $W_2$ the span of the matrix whose top-left entry is $1$ and whose other entries are $0$ (its trace is non-zero, so it does not lie in $W_1$). Then $W_1 + W_2 = \mathbb{R}^{n \times n}$.

Exercise 6.1

The given vectors span the set of sequences that are constant apart from a finite set of indices.

Exercise M.3

a) Let $x(t) = a_2 t^2 + a_1 t + a_0$ and $y(t) = b_2 t^2 + b_1 t + b_0$, and let $f(x,y) = c_{2,0} x^2 + c_{0,2} y^2 + c_{1,1} xy + c_{1,0} x + c_{0,1} y + c_{0,0}$. Then $f(x(t), y(t)) = d_4 t^4 + d_3 t^3 + d_2 t^2 + d_1 t + d_0$, where the coefficients $d_i$ are linear in the coefficients $c_{j,k}$. Expanding,

$d_4 = c_{2,0} a_2^2 + c_{0,2} b_2^2 + c_{1,1} a_2 b_2$
$d_3 = 2 c_{2,0} a_2 a_1 + 2 c_{0,2} b_2 b_1 + c_{1,1} (a_2 b_1 + a_1 b_2)$
$d_2 = c_{2,0} (a_1^2 + 2 a_2 a_0) + c_{0,2} (b_1^2 + 2 b_2 b_0) + c_{1,1} (a_1 b_1 + a_2 b_0 + a_0 b_2) + c_{1,0} a_2 + c_{0,1} b_2$
$d_1 = 2 c_{2,0} a_1 a_0 + 2 c_{0,2} b_1 b_0 + c_{1,1} (a_1 b_0 + a_0 b_1) + c_{1,0} a_1 + c_{0,1} b_1$
$d_0 = c_{2,0} a_0^2 + c_{0,2} b_0^2 + c_{1,1} a_0 b_0 + c_{1,0} a_0 + c_{0,1} b_0 + c_{0,0}$

Setting each $d_i$ to zero yields the system of equations

$\begin{pmatrix} a_2^2 & b_2^2 & a_2 b_2 & 0 & 0 & 0 \\ 2 a_2 a_1 & 2 b_2 b_1 & a_2 b_1 + a_1 b_2 & 0 & 0 & 0 \\ a_1^2 + 2 a_2 a_0 & b_1^2 + 2 b_2 b_0 & a_1 b_1 + a_2 b_0 + a_0 b_2 & a_2 & b_2 & 0 \\ 2 a_1 a_0 & 2 b_1 b_0 & a_1 b_0 + a_0 b_1 & a_1 & b_1 & 0 \\ a_0^2 & b_0^2 & a_0 b_0 & a_0 & b_0 & 1 \end{pmatrix} \begin{pmatrix} c_{2,0} \\ c_{0,2} \\ c_{1,1} \\ c_{1,0} \\ c_{0,1} \\ c_{0,0} \end{pmatrix} = 0.$

By Corollary 1.2.14 this system of five equations in six unknowns has a solution where at least one of the coefficients $c_{j,k}$ is non-zero, so there is a polynomial $f(x,y)$ that is not identically zero but satisfies $f(x(t), y(t)) = 0$ for every $t$.
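As a numerical sanity check (with arbitrarily chosen example coefficients $a_i$, $b_i$, not taken from the exercise), the $5 \times 6$ coefficient matrix of the system in part a) has a non-trivial nullspace vector $c$, and the corresponding $f$ vanishes along $(x(t), y(t))$:

```python
import numpy as np

a2, a1, a0 = 1.0, -2.0, 3.0   # x(t) = t^2 - 2t + 3 (arbitrary example)
b2, b1, b0 = 2.0, 1.0, -1.0   # y(t) = 2t^2 + t - 1 (arbitrary example)

# rows: coefficients of t^4 .. t^0 in f(x(t), y(t));
# columns: c_{2,0}, c_{0,2}, c_{1,1}, c_{1,0}, c_{0,1}, c_{0,0}
Mt = np.array([
    [a2**2,            b2**2,            a2*b2,                 0.0, 0.0, 0.0],
    [2*a2*a1,          2*b2*b1,          a2*b1 + a1*b2,         0.0, 0.0, 0.0],
    [a1**2 + 2*a2*a0,  b1**2 + 2*b2*b0,  a1*b1 + a2*b0 + a0*b2, a2,  b2,  0.0],
    [2*a1*a0,          2*b1*b0,          a1*b0 + a0*b1,         a1,  b1,  0.0],
    [a0**2,            b0**2,            a0*b0,                 a0,  b0,  1.0],
])
# a unit-norm nullspace vector: right singular vector for the smallest singular value
c = np.linalg.svd(Mt)[2][-1]
c20, c02, c11, c10, c01, c00 = c
assert np.allclose(Mt @ c, 0, atol=1e-9)  # c really solves the system, and |c| = 1 != 0

for t in np.linspace(-2.0, 2.0, 9):
    x = a2*t**2 + a1*t + a0
    y = b2*t**2 + b1*t + b0
    f = c20*x**2 + c02*y**2 + c11*x*y + c10*x + c01*y + c00
    assert abs(f) < 1e-8  # f vanishes along the parametrised curve
```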

b) Using a similar approach to part a), one finds for example $f(x,y) = x^3 + x^2 y^2$.

c) Let $x(t)$ be a polynomial of degree $d_x$ and $y(t)$ a polynomial of degree $d_y$, so that $x(t) = \sum_{i=0}^{d_x} a_i t^i$ and $y(t) = \sum_{i=0}^{d_y} b_i t^i$. Let $N = d_x + d_y$ and let $f(x,y) = \sum_{i=0}^{N} \sum_{j=0}^{N} c_{i,j} x^i y^j$ be a polynomial with unknown coefficients $c_{i,j} \in \mathbb{R}$. Every monomial $x(t)^i y(t)^j$ appearing in $f(x(t), y(t))$ has degree $i d_x + j d_y \leq N(d_x + d_y) = N^2$, so requiring $f(x(t), y(t)) = 0$ amounts to setting the coefficient of $t^k$ to $0$ for each $k \leq N^2$. These equations are linear in the $c_{i,j}$, and there are at most $N^2 + 1$ of them, while there are $(N+1)^2 = N^2 + 2N + 1$ variables $c_{i,j}$. Since there are more variables than equations, Corollary 1.2.14 gives a non-zero solution of the linear system. Note that in part a) we likewise restricted the degree of $f(x,y)$, which kept the number of unknowns above the number of equations.
