Commutative Algebra/Algebras and integral elements


Algebras


Template:TextBox

Within an algebra we thus have both an addition and a multiplication, and many of the usual rules of algebra remain valid; hence the name algebra.

Of course, there are algebras whose multiplication is not commutative or associative. If the underlying ring is commutative, however, the module operation satisfies a certain commutativity property, in the sense that

$$r(sa) = (rs)a = (sr)a = s(ra).$$

Template:TextBox

Note that this means that $Z$, together with the operations inherited from $A$, is itself an $R$-algebra; the necessary rules simply carry over from $A$.

Example 21.3: Let $R$ be a ring, let $S$ be another ring, and let $\varphi \colon R \to S$ be a ring homomorphism. Then $S$ is an $R$-algebra, where the module operation is given by

$$r \cdot s := \varphi(r)s,$$

and multiplication and addition for this algebra are given by the multiplication and addition of the ring $S$.

Proof:

The required rules for the module operation are verified as follows:

  1. $1_R \cdot s = \varphi(1_R)s = 1_S s = s$
  2. $r \cdot (s + t) = \varphi(r)(s + t) = \varphi(r)s + \varphi(r)t = r \cdot s + r \cdot t$
  3. $(r + r') \cdot s = \varphi(r + r')s = (\varphi(r) + \varphi(r'))s = r \cdot s + r' \cdot s$
  4. $r \cdot (r' \cdot s) = \varphi(r)(\varphi(r')s) = (\varphi(r)\varphi(r'))s = \varphi(rr')s = (rr') \cdot s$

Since all the ring axioms hold in $S$, the only thing we need to check for the $R$-bilinearity of the multiplication is compatibility with the module operation.

Indeed,

$$(r \cdot s)t = (\varphi(r)s)t = \varphi(r)(st) = r \cdot (st),$$

and analogously for the other argument.
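To make Example 21.3 concrete, here is a minimal Python sketch (all names are ours, purely illustrative): the reduction homomorphism $\varphi \colon \mathbb{Z} \to \mathbb{Z}/6\mathbb{Z}$ turns $\mathbb{Z}/6\mathbb{Z}$ into a $\mathbb{Z}$-algebra, and the module axioms from the proof can be spot-checked numerically.

```python
# Illustrative sketch of Example 21.3: the ring homomorphism
# phi: Z -> Z/6Z, r |-> r mod 6, makes Z/6Z a Z-algebra.

N = 6

def phi(r):
    """Ring homomorphism Z -> Z/6Z."""
    return r % N

def scalar(r, s):
    """Module operation r.s := phi(r) * s in Z/6Z."""
    return (phi(r) * s) % N

# Spot-check the four module axioms (r, r' in Z; s, t in Z/6Z),
# plus the compatibility (r.s)t = r.(st) used for bilinearity:
for r in range(-10, 10):
    for rp in range(-10, 10):
        for s in range(N):
            for t in range(N):
                assert scalar(1, s) == s
                assert scalar(r, (s + t) % N) == (scalar(r, s) + scalar(r, t)) % N
                assert scalar(r + rp, s) == (scalar(r, s) + scalar(rp, s)) % N
                assert scalar(r, scalar(rp, s)) == scalar(r * rp, s)
                assert (scalar(r, s) * t) % N == scalar(r, (s * t) % N)

print("all module axioms verified for Z/6Z as a Z-algebra")
```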

We shall note that if we are given an $R$-algebra $A$, then we can take a polynomial $p \in R[x_1, \ldots, x_n]$ and some elements $a_1, \ldots, a_n$ of $A$ and evaluate $p(a_1, \ldots, a_n) \in A$ as follows:

  1. Using the algebra multiplication, we form the monomials $a_1^{k_1} a_2^{k_2} \cdots a_n^{k_n}$.
  2. Using the module operation, we multiply each monomial by the respective coefficient: $r_{k_1, \ldots, k_n} a_1^{k_1} a_2^{k_2} \cdots a_n^{k_n}$.
  3. Using the algebra addition (= module addition), we add all these terms $r_{k_1, \ldots, k_n} a_1^{k_1} a_2^{k_2} \cdots a_n^{k_n}$ together.

The commutativity of multiplication (step 1) and of addition (step 3) ensures that this procedure does not depend on the order in which the multiplications and additions are carried out.
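As an illustration of this evaluation procedure (a sketch of ours, not part of the text), take $A = \mathbb{C}$ as an $\mathbb{R}$-algebra and store a polynomial as a dictionary mapping exponent tuples to coefficients; the three steps above then translate directly into code:

```python
# Evaluate p(a_1, ..., a_n) in an R-algebra, here A = C as an R-algebra.
# A polynomial is stored as {(k_1, ..., k_n): coefficient}.

def evaluate(p, points):
    total = 0  # the algebra's zero element
    for exponents, coeff in p.items():
        # Step 1: form the monomial a_1^{k_1} * ... * a_n^{k_n}.
        monomial = 1
        for a, k in zip(points, exponents):
            monomial *= a ** k
        # Step 2: multiply by the coefficient via the module operation.
        # Step 3: add everything up.
        total += coeff * monomial
    return total

# p(x1, x2) = 2*x1^2 + 3*x1*x2 - 1 evaluated at a1 = 1+2i, a2 = -i in C:
p = {(2, 0): 2.0, (1, 1): 3.0, (0, 0): -1.0}
print(evaluate(p, (1 + 2j, -1j)))   # (-1+5j)
```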

Template:TextBox

Template:TextBox

Proof:

The first claim follows directly from the definition of a subalgebra of $A$: closedness under the three operations. For if we are given any elements of $R[a_1, \ldots, a_n]$, applying any operation to them is just one further step of manipulation with the elements $a_1, \ldots, a_n$.

We go on to prove the equation

$$R[a_1, \ldots, a_n] = \bigcap_{\substack{\{a_1, \ldots, a_n\} \subseteq Z \subseteq A \\ Z \text{ subalgebra}}} Z.$$

For "" we note that since a1,,an are contained within every Z occurring on the right hand side. Thus, by the closedness of these Z, we can infer that all finite manipulations by the three algebra operations (addition, multiplication, module operation) are included in each Z. From this follows "".

For "" we note that R[a1,,an] is also a subalgebra of A containing {a1,,an}, and intersection with more things will only make the set at most smaller.

Now if any other subalgebra of A is given that contains a1,,an, the intersection on the right hand side of our equation must be contained within it, since that subalgebra would be one of the Z.
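A concrete instance (our own illustration): the $\mathbb{Z}$-subalgebra $\mathbb{Z}[\sqrt{2}] \subseteq \mathbb{R}$ generated by $\sqrt{2}$ consists exactly of the elements $a + b\sqrt{2}$ with $a, b \in \mathbb{Z}$. Encoding such elements as pairs shows that polynomial manipulations of $\sqrt{2}$ never leave this set:

```python
# The Z-subalgebra Z[sqrt(2)] of R generated by sqrt(2). Every element is
# a + b*sqrt(2) with integers a, b, encoded as the pair (a, b); the set is
# closed under addition, multiplication and the module operation.

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def mul(u, v):
    # (a + b*sqrt2)(c + d*sqrt2) = (ac + 2bd) + (ad + bc)*sqrt2
    return (u[0] * v[0] + 2 * u[1] * v[1], u[0] * v[1] + u[1] * v[0])

def scalar(n, u):
    return (n * u[0], n * u[1])

sqrt2 = (0, 1)
# The element 3*sqrt(2)^3 - 5*sqrt(2) + 7, computed inside Z[sqrt(2)]:
elt = add(add(scalar(3, mul(sqrt2, mul(sqrt2, sqrt2))), scalar(-5, sqrt2)), (7, 0))
print(elt)   # (7, 1), i.e. 7 + 1*sqrt(2)
```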

Exercises

  • Exercise 21.1.1:

Symmetric polynomials

Template:TextBox

That is, we may permute the variables arbitrarily and still obtain the same polynomial.

This section is devoted to proving a fundamental fact about these polynomials: there are certain so-called elementary symmetric polynomials, and every symmetric polynomial can be written as a polynomial in those elementary symmetric polynomials.

Template:TextBox
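Assuming, as the computations below indicate, that $s_{n,m}$ denotes the $m$-th elementary symmetric polynomial in $n$ variables (the sum of all products of $m$ distinct variables), the following sympy sketch (our own illustration) generates them:

```python
# Sketch of the elementary symmetric polynomials s_{n,m}, assumed here to be
# the sum of all products of m distinct variables out of x_1, ..., x_n.
from itertools import combinations
from sympy import symbols, Add, Mul

def elementary_symmetric(n, m, xs):
    """s_{n,m}(x_1, ..., x_n): sum over all m-element subsets of the variables."""
    return Add(*[Mul(*c) for c in combinations(xs[:n], m)])

x1, x2, x3 = symbols("x1 x2 x3")
for m in range(1, 4):
    print(f"s_3,{m} =", elementary_symmetric(3, m, (x1, x2, x3)))
# s_3,1 = x1 + x2 + x3
# s_3,2 = x1*x2 + x1*x3 + x2*x3
# s_3,3 = x1*x2*x3
```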

Without further ado, we shall proceed to the theorem that we promised:

Template:TextBox

Hence, every symmetric polynomial is a polynomial in the elementary symmetric polynomials.
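Before the proofs, a quick sanity check on a small case (our own illustration, using sympy): $x^2 + y^2$ is symmetric, and it indeed equals $s_{2,1}^2 - 2 s_{2,2}$.

```python
# Verify x^2 + y^2 = s_{2,1}^2 - 2*s_{2,2} with s_{2,1} = x + y, s_{2,2} = x*y.
from sympy import symbols, expand

x, y = symbols("x y")
s21 = x + y
s22 = x * y
assert expand(s21**2 - 2 * s22) == expand(x**2 + y**2)
print("x^2 + y^2 = s_{2,1}^2 - 2*s_{2,2}")
```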

Proof 1:

We start out by ordering all monomials (remember, those are polynomials of the form $x_1^{k_1} x_2^{k_2} \cdots x_{n-1}^{k_{n-1}} x_n^{k_n}$), using the following order:

$$x_1^{k_1} \cdots x_n^{k_n} < x_1^{m_1} \cdots x_n^{m_n} \;:\Leftrightarrow\; \begin{cases} k_1 + \cdots + k_n < m_1 + \cdots + m_n \\ \text{or} \\ (k_1 + \cdots + k_n = m_1 + \cdots + m_n) \wedge (k_j < m_j, \text{ where } j := \min\{1 \le i \le n \mid k_i \ne m_i\}). \end{cases}$$

With this order, the largest monomial of $s_{n,m}$ is given by $x_1 \cdots x_m$; this is because for all monomials of $s_{n,m}$ the sum of the exponents equals $m$, and the tie-breaking condition of the order is optimized by the monomial whose first zero exponent occurs as late as possible.
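The order just defined is what is commonly called the graded lexicographic order; here is a direct transcription into Python (our own sketch), together with a check of the claim about the largest monomial of $s_{n,m}$ for $n = 5$, $m = 3$:

```python
# The monomial order from the proof, with exponent tuples (k_1, ..., k_n)
# standing for monomials.
from itertools import combinations
from functools import cmp_to_key

def less(k, m):
    """True iff the monomial with exponents k is smaller than the one with m."""
    if sum(k) != sum(m):              # first criterion: total degree
        return sum(k) < sum(m)
    for ki, mi in zip(k, m):          # tie-break at the first differing index j
        if ki != mi:
            return ki < mi
    return False                      # equal monomials are not smaller

def compare(k, m):
    return -1 if less(k, m) else (1 if less(m, k) else 0)

# Monomials of s_{n,m} are products of m distinct variables; verify that
# the largest one is x_1 * ... * x_m for n = 5, m = 3:
n, m = 5, 3
monomials = [tuple(1 if i in c else 0 for i in range(n))
             for c in combinations(range(n), m)]
print(max(monomials, key=cmp_to_key(compare)))   # (1, 1, 1, 0, 0)
```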

Furthermore, for any given $r_1, \ldots, r_n \geq 0$, the largest monomial of

$$s_{n,1}^{r_1} \cdots s_{n,n}^{r_n}$$

is given by $x_1^{r_1 + \cdots + r_n} x_2^{r_2 + \cdots + r_n} \cdots x_{n-1}^{r_{n-1} + r_n} x_n^{r_n}$. This is because the sum of the exponents always equals $r_1 + 2r_2 + \cdots + (n-1)r_{n-1} + n r_n$, and the above monomial does occur (multiply the maximal monomials of all the elementary symmetric factors together). Moreover, if one of the factors of a given monomial of $s_{n,1}^{r_1} \cdots s_{n,n}^{r_n}$ coming from an elementary symmetric polynomial is not the largest monomial of that elementary symmetric polynomial, we may replace it by a larger one and obtain a strictly larger monomial of the product, since a part of the sum $r_1 + 2r_2 + \cdots + (n-1)r_{n-1} + n r_n$ is moved to the front.

Now, let a symmetric polynomial $f \in R[x_1, \ldots, x_n]$ be given. We claim that if $x_1^{k_1} x_2^{k_2} \cdots x_{n-1}^{k_{n-1}} x_n^{k_n}$ is the largest monomial of $f$, then we have $k_1 \geq k_2 \geq \cdots \geq k_{n-1} \geq k_n$.

For assume otherwise, say $k_j < k_{j+1}$. Then since $f$ is symmetric, we may exchange the exponents of the $j$-th and $(j+1)$-th variables and still obtain a monomial of $f$, and the resulting monomial is strictly larger, contradicting maximality.

Thus, if we define for $j = 1, \ldots, n-1$

$$d_j := k_j - k_{j+1}$$

and furthermore $d_n := k_n$, we obtain numbers that are non-negative. Hence, we may form the product

$$h(x) := s_{n,1}^{d_1} \cdots s_{n,n}^{d_n},$$

and if $c$ is the coefficient of the largest monomial of $f$, then the largest monomial of

$$f(x) - c\,h(x)$$

is strictly smaller than that of $f$; this is because the largest monomial of $h$ is, by our above computation and some telescoping sums (the exponent of $x_j$ is $d_j + \cdots + d_n = k_j$), equal to the largest monomial of $f$, and the two thus cancel out.

Since the elementary symmetric polynomials are symmetric, and since sums, scalar multiples and products of symmetric polynomials are again symmetric, we may repeat this procedure until we are left with nothing; the procedure terminates because the degree is bounded and only finitely many monomials of bounded degree lie below a given one. Everything we subtracted from $f$, collected together, then forms the polynomial in the elementary symmetric polynomials that we have been looking for.
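The proof is constructive, and the resulting algorithm can be transcribed into a short Python sketch (our own illustration; all function names are ours). The input polynomial must be symmetric, otherwise the exponent differences $d_j$ need not be non-negative:

```python
# Algorithm from Proof 1: repeatedly cancel the largest monomial of a
# symmetric f by subtracting c * s_{n,1}^{d_1} * ... * s_{n,n}^{d_n}.
# Polynomials are dictionaries mapping exponent tuples to coefficients.
from itertools import combinations

def poly_mul(p, q):
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = tuple(a + b for a, b in zip(e1, e2))
            r[e] = r.get(e, 0) + c1 * c2
    return {e: c for e, c in r.items() if c != 0}

def poly_sub(p, q):
    r = dict(p)
    for e, c in q.items():
        r[e] = r.get(e, 0) - c
    return {e: c for e, c in r.items() if c != 0}

def elem_sym(n, m):
    """s_{n,m} as a dictionary polynomial."""
    return {tuple(1 if i in c else 0 for i in range(n)): 1
            for c in combinations(range(n), m)}

def largest(p):
    """Largest monomial in the graded lexicographic order of the proof."""
    return max(p, key=lambda e: (sum(e), e))

def decompose(f, n):
    """Return {(d_1,...,d_n): c} with f = sum of c * s_{n,1}^{d_1}...s_{n,n}^{d_n}."""
    result = {}
    while f:
        k = largest(f)                 # exponents satisfy k_1 >= ... >= k_n
        c = f[k]
        d = tuple(k[j] - k[j + 1] for j in range(n - 1)) + (k[n - 1],)
        result[d] = result.get(d, 0) + c
        h = {(0,) * n: c}              # build c * s_{n,1}^{d_1}...s_{n,n}^{d_n}
        for j in range(n):
            for _ in range(d[j]):
                h = poly_mul(h, elem_sym(n, j + 1))
        f = poly_sub(f, h)             # largest monomial strictly decreases
    return result

# Example: f = x^2 + y^2 + z^2. Expect s_{3,1}^2 - 2*s_{3,2}:
f = {(2, 0, 0): 1, (0, 2, 0): 1, (0, 0, 2): 1}
print(decompose(f, 3))   # {(2, 0, 0): 1, (0, 1, 0): -2}
```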

Proof 2:

Let $f \in R[x_1, \ldots, x_n]$ be an arbitrary symmetric polynomial, let $d$ be the degree of $f$ and $n$ the number of variables of $f$.

In order to prove the theorem, we use induction on the sum $n + d$ of the degree and the number of variables of $f$.

If $n + d = 1$, we must have $n = 1$ (since $d = 1$ would imply the absurd $n = 0$). But any polynomial in one variable is already a polynomial in the symmetric polynomial $s_{1,1}(x) = x$.

Let now $n + d = k$, and assume the theorem holds whenever the sum of the number of variables and the degree is smaller than $k$. We write

$$f(x_1, \ldots, x_n) = g(x_1, \ldots, x_n) + x_1 \cdots x_n \, h(x_1, \ldots, x_n),$$

where every monomial occurring within $g$ lacks at least one variable, that is, is not divisible by $x_1 \cdots x_n$.

The polynomial $g$ is still symmetric: any permutation of a monomial that lacks at least one variable again lacks at least one variable, and hence occurs in $g$ with the same coefficient, since no part of it could have been sorted into the "$x_1 \cdots x_n \, h(x_1, \ldots, x_n)$" part.

The polynomial $h$ has the same number of variables, but the degree of $h$ is smaller than the degree of $f$. Furthermore, $h$ is symmetric because of

$$h(x_1, \ldots, x_n) = \frac{f(x_1, \ldots, x_n) - g(x_1, \ldots, x_n)}{x_1 \cdots x_n}.$$

Hence, by the induction hypothesis, $h$ can be written as a polynomial in the elementary symmetric polynomials:

$$h(x_1, \ldots, x_n) = p_1(s_{n,1}(x_1, \ldots, x_n), \ldots, s_{n,n}(x_1, \ldots, x_n))$$

for a suitable $p_1 \in R[x_1, \ldots, x_n]$.

If $n = 1$, then $f$ is a polynomial in the elementary symmetric polynomial $s_{1,1}(x)$ anyway. Hence, it is sufficient to consider only the case $n \geq 2$. In that case, we may define the polynomial

$$q(x_1, \ldots, x_{n-1}) := g(x_1, \ldots, x_{n-1}, 0).$$

Now $q$ has one variable less than $f$ and at most the same degree, which is why, by the induction hypothesis, we find a representation

$$q(x_1, \ldots, x_{n-1}) = p_2(s_{n-1,1}(x_1, \ldots, x_{n-1}), \ldots, s_{n-1,n-1}(x_1, \ldots, x_{n-1}))$$

for a suitable $p_2 \in R[x_1, \ldots, x_{n-1}]$.

We observe that for all $j \in \{1, \ldots, n-1\}$, we have $s_{n-1,j}(x_1, \ldots, x_{n-1}) = s_{n,j}(x_1, \ldots, x_{n-1}, 0)$; the monomials involving $x_n$ simply vanish. Hence,

$$g(x_1, \ldots, x_{n-1}, 0) = p_2(s_{n,1}(x_1, \ldots, x_{n-1}, 0), \ldots, s_{n,n-1}(x_1, \ldots, x_{n-1}, 0)).$$

We claim that even

$$g(x_1, \ldots, x_{n-1}, x_n) = p_2(s_{n,1}(x_1, \ldots, x_{n-1}, x_n), \ldots, s_{n,n-1}(x_1, \ldots, x_{n-1}, x_n)). \quad (*)$$

Indeed, by the symmetry of $g$ and of $s_{n,1}, \ldots, s_{n,n-1}$, and after renaming variables, the above equation holds whenever an arbitrary one of the variables is set equal to zero. But each monomial of $g$ lacks at least one variable. Hence, by successively equating coefficients in $(*)$ with one of the variables set to zero, we obtain that the coefficients on the right and left of $(*)$ are equal, and thus the polynomials are equal.

Putting everything together, and noting that $x_1 \cdots x_n = s_{n,n}(x_1, \ldots, x_n)$, we obtain

$$f = g + s_{n,n} h = p_2(s_{n,1}, \ldots, s_{n,n-1}) + s_{n,n} \, p_1(s_{n,1}, \ldots, s_{n,n}),$$

a polynomial in the elementary symmetric polynomials.

Integral dependence

Template:TextBox

A polynomial of the form

$$x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0 \qquad \text{(leading coefficient equal to } 1\text{)}$$

is called a monic polynomial. Thus, $r$ being integral over $S$ means that $r$ is a root of a monic polynomial with coefficients in $S$.
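For instance (our own illustration), the golden ratio $(1 + \sqrt{5})/2$ is integral over $\mathbb{Z}$, being a root of the monic polynomial $x^2 - x - 1$; by contrast, $1/2$ is a root of $2x - 1$, which is not monic, and $1/2$ is in fact not integral over $\mathbb{Z}$. A one-line numeric check:

```python
# The golden ratio satisfies the monic equation x^2 - x - 1 = 0 over Z.
import math

r = (1 + math.sqrt(5)) / 2
print(r**2 - r - 1)   # ~0 up to floating-point error
```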

Whenever we have a subring $S \subseteq R$ of a ring $R$, we regard $R$ as an $S$-module, where the module operation and addition are given by the ring operations of $R$.

Template:TextBox

Proof:

1. $\Rightarrow$ 2.: Let $r$ be integral over $S$, that is, $r^n = a_{n-1} r^{n-1} + \cdots + a_1 r + a_0$ with $a_0, \ldots, a_{n-1} \in S$. Let $b_k r^k + b_{k-1} r^{k-1} + \cdots + b_1 r + b_0$ be an arbitrary element of $S[r]$. If $j$ is larger than or equal to $n$, then we can express $r^j$ in terms of lower powers of $r$ using the integral relation. Repetition of this process yields that $1, r, r^2, \ldots, r^{n-1}$ generate $S[r]$ over $S$.

2. $\Rightarrow$ 3.: Take $T = S[r]$.

3. $\Rightarrow$ 4.: Set $M = T$; $T$ is faithful because if $u \in S[r]$ annihilates $T$, then in particular $u = u \cdot 1 = 0$.

4. $\Rightarrow$ 1.: Let $M$ be such a module. We define the morphism of modules

$$\phi \colon M \to M, \quad m \mapsto rm.$$

We may restrict the module operation of $M$ to $S$ to obtain an $S$-module; $\phi$ is then also a morphism of $S$-modules. Further, set $I = S$. Then $\phi(M) \subseteq M = IM$ (since $1 \in S$). The Cayley–Hamilton theorem gives an equation

$$r^n + a_{n-1} r^{n-1} + \cdots + a_1 r + a_0 = 0, \qquad a_{n-1}, \ldots, a_0 \in S,$$

where $r$ is to be read as the operator of multiplication by $r$ and $0$ as the zero operator; by the faithfulness of $M$, it follows that $r^n + a_{n-1} r^{n-1} + \cdots + a_1 r + a_0 = 0$ in the usual sense.
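The Cayley–Hamilton step can be illustrated numerically (our own sketch, using numpy): a square matrix satisfies its own characteristic polynomial, which is monic.

```python
# A square matrix satisfies its own (monic) characteristic polynomial.
import numpy as np

M = np.array([[2.0, 1.0],
              [0.0, 3.0]])
coeffs = np.poly(M)          # characteristic polynomial, leading coefficient 1
n = M.shape[0]
acc = np.zeros_like(M)
for c in coeffs:             # Horner evaluation with matrix powers
    acc = acc @ M + c * np.eye(n)
print(acc)                   # the zero matrix, up to rounding
```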

Template:TextBox

Proof:

Let $s \in S$ be nonzero. Since $\mathbb{F}$ is a field, we find an inverse $s^{-1} \in \mathbb{F}$; we do not yet know whether $s^{-1}$ is contained within $S$. Since $\mathbb{F}$ is integral over $S$, $s^{-1}$ satisfies an equation of the form

$$(s^{-1})^n + a_{n-1} (s^{-1})^{n-1} + \cdots + a_1 s^{-1} + a_0 = 0$$

for suitable $a_{n-1}, \ldots, a_1, a_0 \in S$. Multiplying this equation by $s^{n-1}$ yields

$$s^{-1} = -(a_{n-1} + a_{n-2} s + \cdots + a_1 s^{n-2} + a_0 s^{n-1}) \in S.$$

Hence $S$ is a field.

Template:TextBox

Proof 1 (from the Atiyah–Macdonald book):

If $x, y \in R$ are integral over $S$, then $y$ is integral over $S[x]$. By theorem 21.10, $S[x]$ is finitely generated as an $S$-module and $S[x][y] = S[x, y]$ is finitely generated as an $S[x]$-module. Hence, $S[x, y]$ is finitely generated as an $S$-module. Further, $S[x + y] \subseteq S[x, y]$ and $S[xy] \subseteq S[x, y]$. Hence, by theorem 21.10, $x + y$ and $xy$ are integral over $S$.

Proof 2 (Dedekind):

If $x, y$ are integral over $S$, then $S[x]$ and $S[y]$ are finitely generated as $S$-modules. Hence, so is

$$S[x] \cdot S[y] := \left\{ \sum_{j=1}^n a_j b_j \;\middle|\; n \in \mathbb{N},\ a_j \in S[x],\ b_j \in S[y] \right\}.$$

Furthermore, $S[xy] \subseteq S[x] \cdot S[y]$ and $S[x + y] \subseteq S[x] \cdot S[y]$. Hence, by theorem 21.10, $xy$ and $x + y$ are integral over $S$.
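As an illustration of the theorem (our own example, using sympy): $\sqrt{2}$ and $\sqrt{3}$ are integral over $\mathbb{Z}$, being roots of $x^2 - 2$ and $x^2 - 3$, and the theorem predicts that $\sqrt{2} + \sqrt{3}$ and $\sqrt{2}\sqrt{3}$ are integral as well. sympy produces the monic witnesses:

```python
# Monic integer polynomials witnessing integrality of sum and product.
from sympy import sqrt, minimal_polynomial, symbols

x = symbols("x")
print(minimal_polynomial(sqrt(2) + sqrt(3), x))   # x**4 - 10*x**2 + 1
print(minimal_polynomial(sqrt(2) * sqrt(3), x))   # x**2 - 6
```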

Template:TextBox

Template:TextBox

Template:BookCat