Topics in Algebra, Chapter 3 Supplementary Problems

2013-07-07 math algebra topics-in-algebra

This page covers the supplementary problems at the end of chapter 3.

The problems below are paraphrased from/inspired by those given in Topics in Algebra by Herstein. The solutions are my own unless otherwise noted. I will generally try, in my solutions, to stick to the development in the text. This means that problems will not be solved using ideas and theorems presented further on in the book.

Herstein 3.1: Let R be a commutative ring. Define a prime ideal P of R to be an ideal such that if a, b ∈ R have ab ∈ P, then a ∈ P or b ∈ P. Show that P is a prime ideal if and only if R/P is an integral domain.

Let a, b ∈ R. In R/P we have (a+P)(b+P) = ab+P, and this coset is the zero element of R/P exactly when ab ∈ P.

Therefore R/P is an integral domain (has no zero divisors) if and only if ab ∈ P implies a+P = P or b+P = P, i.e. a ∈ P or b ∈ P, which is precisely the statement that P is a prime ideal.

Herstein 3.2: Let R be a commutative, unital ring. Prove that a maximal ideal of R is prime.

Theorem 3.5.1 states that M is a maximal ideal of commutative, unital ring R if and only if R/M is a field. Therefore if M is a maximal ideal of R, we have R/M a field and thus an integral domain. By exercise 3.1, M is a prime ideal.

Herstein 3.3: Exhibit a ring R where a prime ideal fails to be a maximal ideal.

The ring R cannot be a PID, because a non-zero prime ideal is always maximal in that case: if P=(p) is a non-zero prime ideal, then an ideal I=(r) containing it has r ∣ p, which cannot happen in any non-trivial way. So we avoid PIDs and look for something somewhat more exotic, though non-commutative rings are probably too exotic. A natural choice for investigation is a polynomial ring.

One thought is something like (x^2+1) in ℝ[x]. This is a prime ideal, but it is also relatively easy to see that it is maximal. However, if we consider instead I = (x^2+2) in R = Z[x], we have it. The generator x^2+2 is irreducible so, because Z[x] is a UFD, the ideal is prime. The ideal J = (x^2, 2) = {2m + n x^2 : m, n ∈ Z[x]}

properly contains it because it contains elements (e.g. 2) of degree smaller than 2. However, J ≠ R because 1 is clearly not in J. Therefore I is prime but not maximal in R = Z[x].

Note that I = (x+2) and J = (x, 2) would work via the same argument.

Herstein 3.4: Let R be a finite, commutative, unital ring. Prove that a prime ideal of R is maximal.

Let P be a prime ideal of R. We have by exercise 3.1 that R/P is an integral domain, and by lemma 3.2.2 that a finite integral domain must be a field. Therefore R/P is a field, so that theorem 3.5.1 gives us that P is a maximal ideal of R.

Herstein 3.5: With F a field, prove that F[x] is isomorphic to F[t].

The meaning of this exercise is not clear. It seems that we just want to show that the name of the indeterminate is irrelevant. This can be done by considering the obvious mapping ϕ : F[x] → F[t] with ϕ(a_0 + a_1 x + ⋯ + a_n x^n) = a_0 + a_1 t + ⋯ + a_n t^n.

Then ϕ((fg)(x))=(fg)(t)=f(t)g(t)=ϕ(f(x))ϕ(g(x))

and ϕ((f+g)(x))=(f+g)(t)=f(t)+g(t)=ϕ(f(x))+ϕ(g(x)).

Therefore ϕ is a homomorphism, and it is clear that it is both injective and onto.

Herstein 3.6: Classify all σ ∈ Aut(F[x]) which fix the base field F. That is, which automorphisms σ of F[x] have σ(a) = a for all a ∈ F?

Let f ∈ F[x] be given by f(x) = a_0 + ⋯ + a_n x^n. Consider σ(f(x)) = σ(a_0 + a_1 x + ⋯ + a_n x^n) = a_0 + a_1 σ(x) + ⋯ + a_n σ(x)^n,

which has simplified because σ is a homomorphism and fixes the coefficients. From this we see that the F-fixing automorphism σ is determined entirely by where it maps the polynomial x = 0 + 1·x + 0·x^2 + ⋯. If deg(σ(x)) = 0, then the map cannot be surjective, because only constants are in the image of σ. If deg(σ(x)) > 1, then again the map σ cannot be surjective: the degree of a non-constant polynomial a_0 + a_1 σ(x) + ⋯ + a_n σ(x)^n is n·deg(σ(x)) ≥ deg(σ(x)) > 1. Hence no polynomial of degree 1 would be in the image of σ.

Therefore σ(x) = αx + β for some α, β ∈ F is the only remaining possibility. The question is whether σ(x) so chosen respects the fact that the map must be an automorphism. We have σ(f(x)) = σ(a_0 + a_1 x + ⋯ + a_n x^n) = a_0 + a_1(αx+β) + ⋯ + a_n(αx+β)^n = f(αx+β).

This is simply a composition and we easily see that σ is still a homomorphism: σ((f+g)(x))=(f+g)(αx+β)=f(αx+β)+g(αx+β)=σ(f(x))+σ(g(x))

and σ((fg)(x))=(fg)(αx+β)=f(αx+β)g(αx+β)=σ(f(x))σ(g(x)).

Suppose f, g ∈ F[x] are such that σ(f) = σ(g), and suppose α ≠ 0. Then 0 = (f−g)(αx+β) = b_0 + b_1(αx+β) + ⋯ + b_m(αx+β)^m,

where the b_i are the differences of the coefficients of f and g, and m is the maximum of the two degrees. Starting from the degree m term, it is clear that b_m = 0 because there is no way to cancel the x^m term otherwise. Next the degree m−1 term suffers the same fate, and so on, down the chain. Thus all of the coefficients must vanish identically, so that σ(f) = σ(g) implies f = g, i.e. σ is injective. This argument would fail if α = 0 because there are no powers of x to speak of. Indeed, σ is not injective in the α = 0 case: σ(x) = β = σ(β) while x ≠ β. Therefore we must restrict to α ≠ 0.

Observe that, if σ(f(x)) = f(αx+β), then σ(α^{-1}x − α^{-1}β) = α^{-1}(αx+β) − α^{-1}β = x.

Therefore σ is surjective because a_0 + a_1 x + ⋯ + a_n x^n

is the image under σ of a_0 + a_1(α^{-1}x − α^{-1}β) + ⋯ + a_n(α^{-1}x − α^{-1}β)^n.

To summarize, we have shown that if σ is an automorphism of F[x] which fixes the coefficient field F, then σ must be of the form σ(f(x)) = f(αx+β)

where α, β ∈ F and α ≠ 0.

If we wanted to replace F by a ring R, we note that if R is not an integral domain, then it is conceivable that deg(σ(x))>1 would be an acceptable situation, so there might be much more to study here. However, if R is an integral domain, the argument carries through but α must be a unit rather than simply non-zero.

Herstein 3.7: Let R be a commutative ring and let N = {r ∈ R : r^m = 0 for some m ∈ Z}.

Prove that

(a) N is an ideal of R.

(b) If (r+N)^m = N in R/N for some m ∈ Z, then r ∈ N.

N is called the nilradical of R.

(a) If r, s ∈ N with r^m = 0 and s^n = 0, then (r+s)^{m+n} = Σ_{k=0}^{m+n} C(m+n, k) r^k s^{m+n−k}

by the binomial theorem (C(·,·) denoting the binomial coefficient). The idea here is that every term in this sum is zero because either r is raised to a high enough power or s is raised to a high enough power. This is most clear if we change the variable of summation to ℓ = k − m, so that (r+s)^{m+n} = Σ_{ℓ=−m}^{n} C(m+n, m+ℓ) r^{m+ℓ} s^{n−ℓ}.

Now if ℓ ≥ 0 we have r^{m+ℓ} = 0, while ℓ < 0 gives s^{n−ℓ} = s^n s^{−ℓ} = 0. Hence (r+s)^{m+n} = 0 and so r+s ∈ N whenever r, s ∈ N.

In addition, if r ∈ N with r^m = 0 and α ∈ R, then (αr)^m = α^m r^m = 0 so αr ∈ N. Therefore N is an ideal of R.

(b) If (r+N)^m = N, then r^m ∈ N. Thus there exists n such that 0 = (r^m)^n = r^{mn}, which shows that r also belongs to N.

This is reminiscent of R/N being an integral domain. It’s not that we necessarily have no zero divisors in R/N, but we have no non-trivial nilpotent elements in R/N. Of course, nilpotent elements are a particular class of zero divisor. If R/N were an integral domain, then we could say by exercise 3.1 that N is a prime ideal. While this might not be true, it is indeed true that the nilradical is related to prime ideals. It turns out that N is the intersection of all prime ideals of R (proof).

Herstein 3.8: Let R be a commutative ring and let A be an ideal of R. Let N(A) = {r ∈ R : r^m ∈ A for some m ∈ Z}.

Prove that

(a) N(A) is an ideal of R containing A.

(b) N(N(A))=N(A).

N(A) is called the radical of A.

(a) If r ∈ A then r is immediately a member of N(A) because r^1 ∈ A. Thus A ⊆ N(A). If r, s ∈ N(A) then the proof of exercise 3.7a carries over almost verbatim to show that r+s ∈ N(A). The various terms in the binomial expansion do not vanish, but they all belong to A, which is an ideal. Hence r+s ∈ N(A). If r ∈ N(A) with r^m ∈ A and α ∈ R, then (αr)^m = α^m r^m ∈ A so that αr ∈ N(A). Therefore N(A) is an ideal containing A.

(b) Let r ∈ N(N(A)) so that r^m ∈ N(A) for some m ∈ Z. Then there exists n ∈ Z with (r^m)^n = r^{mn} ∈ A and hence r ∈ N(A). This shows that N(N(A)) ⊆ N(A). By part (a), we already know that N(A) ⊆ N(N(A)). Therefore N(N(A)) = N(A).

The nilradical is the radical of the zero ideal. Note that the parts of exercise 3.7 are the respective special cases of the parts of exercise 3.8, despite the fact that part (b) of exercise 3.7 is expressed somewhat differently.

Herstein 3.9: Describe the nilradical of Z/nZ in terms of n.

If r ∈ Z/nZ is an element of the nilradical, then there exist m, a ∈ Z with r^m = an. The prime factors of the left hand side are always exactly those of r. If one of the prime factors of n is missing from r, then this equation has no solution. To be more explicit, say that n = p_1^{i_1} ⋯ p_k^{i_k};

then we claim that r belongs to the nilradical if and only if r is a multiple of p_1 ⋯ p_k. This statement correctly describes even the trivial case of r = 0, but we exclude that case in what follows.

If r is non-zero and of this form, then take the power m large enough that each prime p_l is raised to a power greater than its power in n, and then choose the coefficient a to make up the deficit.

On the other hand, if non-zero r is known to be in the nilradical, then we have r^m = an for some m, a. If p is a prime dividing n, then it also must divide the left hand side, p ∣ r^m. As p is a prime, we have further that p ∣ r. This holds for each prime, so every prime dividing n must also appear, with at least one power, in r. This proves the claim.

As an example, consider Z/24Z where 24 = 2^3·3. The prime product is 2·3 = 6. The elements of the nilradical of Z/24Z are 6 (6^3 = 24·9), 2·6 = 12 (12^2 = 24·6), 3·6 = 18 (18^3 = 24·243) and of course 0. On the other hand, something like 2 = 2^1·3^0 is not in the nilradical because 24a is always divisible by 3 whereas 2^m never is.
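To double-check this description, here is a small brute-force script (the helper function is my own, not from the text) that computes the nilpotent elements of Z/24Z directly and compares them with the multiples of 6:

```python
def nilradical(n):
    """Return the nilpotent elements of Z/nZ by brute force."""
    return sorted(r for r in range(n) if any(pow(r, m, n) == 0 for m in range(1, n + 1)))

print(nilradical(24))                        # [0, 6, 12, 18]
print([r for r in range(24) if r % 6 == 0])  # the multiples of 2*3 = 6: the same list
```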

Herstein 3.10: Let R be a ring and let A, B ⊆ R be ideals with A ∩ B = (0). Show that ab = 0 whenever a ∈ A and b ∈ B.

Because A is an ideal (and therefore closed under external multiplication), ab ∈ A; because B is an ideal, ab ∈ B. Therefore ab ∈ A ∩ B = (0), so ab = 0.

Herstein 3.11: Let R be a ring and let Z(R) = {x ∈ R : xy = yx for all y ∈ R}. Prove that Z(R) is a subring of R.

This is the analogue of the center of a group, the set of all elements that commute with everything. Let x_1, x_2 ∈ Z(R) and let y be an arbitrary element of R. Then x_1 x_2 y = x_1(x_2 y) = x_1(y x_2) = (x_1 y)x_2 = (y x_1)x_2 = y x_1 x_2,

showing that x_1 x_2 ∈ Z(R). Similarly, (x_1 − x_2)y = x_1 y − x_2 y = y x_1 − y x_2 = y(x_1 − x_2), so that x_1 − x_2 ∈ Z(R). Of course, if there exists 1 ∈ R, then 1 ∈ Z(R). Therefore Z(R) is a subring of R. It is patently a commutative ring.

Herstein 3.12: Let R be a division ring. Prove that Z(R) is a field.

By exercise 3.11, we know that Z(R) is a commutative subring. If x ∈ Z(R) is non-zero then it has an inverse x^{-1} in R, and for any y ∈ R, multiplying xy = yx on both sides by x^{-1} gives y x^{-1} = x^{-1} y; hence x^{-1} ∈ Z(R). Thus Z(R) is itself a division ring, and a commutative division ring is a field, by definition.

Herstein 3.13: Construct a polynomial of degree 3, irreducible over F=Z/3Z. Use it to construct a field of 27 elements.

It seems a difficult problem in general to construct arbitrary irreducible polynomials. However, with small degree and a small field of coefficients, we can force it through. We observe that we may restrict attention to f ∈ F[x] monic, because F is a field so the highest coefficient is invertible and factoring it out does not affect reducibility. In addition, a reducible degree 3 polynomial must have a linear factor because the only non-trivial way to partition 3 is 1+2. Then if f is monic and reducible, we will be able to write f(x) = (x−α)(x^2+βx+γ),

where the latter polynomial may be further reducible, but that is of no concern. Clearly such a polynomial must map some element α ∈ F to zero. Returning to the problem at hand, we now know that a degree 3 polynomial f ∈ F[x] which has no root in F must be irreducible.

Now a simple brute force search is easy. We restrict to monic degree 3 polynomials, searching for one which doesn't map any of 0, 1, 2 to zero. For simplicity, we keep the x^2 term out of it. The first thing to try is f(x) = x^3 + x + 1, but it has f(1) = 0. Next, f(x) = x^3 + 2x + 1, which works: f(0) = 1, f(1) = 1 and f(2) = 1. Therefore one such irreducible polynomial is f(x) = x^3 + 2x + 1,

and there are other possibilities.
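The brute-force search described above is easy to carry out by machine; here is a sketch (the function name is my own) that enumerates the monic cubics over Z/3Z with no root in the field:

```python
from itertools import product

def monic_cubics_without_roots(p=3):
    """Yield coefficient tuples (a0, a1, a2) with x^3 + a2*x^2 + a1*x + a0 having no root mod p."""
    for a0, a1, a2 in product(range(p), repeat=3):
        if all((x**3 + a2 * x**2 + a1 * x + a0) % p != 0 for x in range(p)):
            yield (a0, a1, a2)

print(list(monic_cubics_without_roots()))  # includes (1, 2, 0), i.e. x^3 + 2x + 1
```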

By exercise 3.9.7, F[x]/(x^3+2x+1) is a field of 3^3 = 27 elements.

Note that the connection found here between roots and irreducibility is not of general use. There are sometimes polynomials over fields which have no roots but are nevertheless reducible, such as (x^2+1)^2 = x^4+2x^2+1 over ℝ. The observation is special to degree 3, where any factorization involves a degree one term. It does not even extend to higher odd degrees, because one can imagine a fifth degree polynomial that splits into irreducible factors of degree 2 and 3, neither of which has a root, e.g. (x^2−2)(x^3−2) = x^5−2x^3−2x^2+4 over Q (both factors are irreducible by exercise 3.10.2).

Herstein 3.14: Construct a field with 625 elements.

625 = 5^4, so exercise 3.9.7 suggests that we look for a degree 4 polynomial f irreducible over F = Z/5Z. Then F[x]/(f) will be the desired field.

For degree 4, the methods of the previous problem are not enough on their own. Therefore we try brute force: writing down the simplest polynomials and manually checking that they are irreducible, hoping to get lucky. A few observations are helpful. 1) If f ∈ F[x] were reducible, it could be factored into two degree 2 polynomials, or into a degree 1 and a degree 3 polynomial. In the latter case, f must have a root in F due to its linear factor. Therefore we look for candidates which have no roots in F, but that is necessary and not sufficient. 2) Fermat's little theorem says that x^4 = 1 for every non-zero x ∈ F. 3) The quadratic residues modulo 5 are {0, 1, 4}.

f(x) = x^4 + 1 has no root because f(0) = 1 and x^4 + 1 = 2 for non-zero x ∈ F. We then try to factor it as x^4 + 1 = (x^2+αx+β)(x^2+γx+δ)

and find that the equations for the coefficients have a consistent solution: α = γ = 0, β = 2, δ = 3. Thus x^4 + 1 = (x^2+2)(x^2+3) is reducible.

f(x) = x^4 + x + 1 has a root at x = 3. More generally, x^4 + kx + 1 takes the value 2 + kx at non-zero x, and quick inspection shows that any non-zero k gives a polynomial with a root, hence a reducible polynomial.

f(x) = x^4 + x^2 + 1 has no roots: f(0) = 1, and at non-zero x it takes the values 2 + x^2 ∈ {1, 3}. However, again writing down the equations for the coefficients in a product of quadratics, we find that x^4 + x^2 + 1 = (x^2+x+1)(x^2−x+1) is reducible.

f(x) = x^4 + x^2 + x + 1 has no roots: f(0) = 1, and at non-zero x it takes the values 2 + x + x^2 ∈ {2, 3, 4}. The equations for the coefficients are γ = −α, δ = β^{-1}, β + δ + αγ = 1 and αδ + βγ = 1. Combining these four equations, we see that β(β − α^2) = 0 and α(1 − β^2) = 1.

From the first of these new equations, we see that β = α^2, because βδ = 1 precludes the possibility that β = 0. Next, we see that α(1 − α^4) = 1. However, α must be non-zero, so α^4 = 1 by little Fermat, and we have our desired contradiction at long last. Thus f(x) = x^4 + x^2 + x + 1 is irreducible over F and F[x]/(f) is a field of order 625.
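Since the hand calculation above is fiddly, here is a small exhaustive check (my own sketch, not from the text) that x^4 + x^2 + x + 1 has no roots and no factorization into two monic quadratics over Z/5Z:

```python
from itertools import product

p = 5
f = [1, 1, 1, 0, 1]  # coefficients of 1 + x + x^2 + 0*x^3 + x^4, constant term first

def poly_mul(a, b, p):
    """Multiply two coefficient lists (low degree first) modulo p."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

# no roots in F:
print(any(sum(c * x**k for k, c in enumerate(f)) % p == 0 for x in range(p)))  # False

# no factorization into two monic quadratics (x^2 + a x + b)(x^2 + c x + d):
print(any(poly_mul([b, a, 1], [d, c, 1], p) == f
          for a, b, c, d in product(range(p), repeat=4)))                      # False
```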

Herstein 3.15: Let F be a field, let p ∈ F[x], and let R = F[x]/(p). Show that the nilradical N of R is (0) if and only if p is not divisible by the square of any polynomial.

For an element f ∈ F[x] to represent an element of the nilradical N of R, it means there exists an integer n such that p(x) divides f(x)^n. First we show that N being trivial implies that p cannot be divisible by a square. Consider the contrapositive statement: if p is divisible by a (non-constant) square, say p = f^2 g with f, g ∈ F[x] and deg f ≥ 1, then (fg)^2 = pg ∈ (p) while fg ∉ (p) because deg(fg) < deg(p), so N is non-trivial because the class of fg is a non-zero nilpotent. Hence if N is trivial then p must not be divisible by a square.

Now suppose that p is not divisible by any square. Using the fact that F[x] is a UFD, we can write p = π_1 ⋯ π_m with the π_i ∈ F[x] irreducible and pairwise distinct, π_i ≠ π_j if i ≠ j. If f ∈ N, then there is n ∈ Z with p ∣ f^n. Then by lemma 3.7.6 every π_i ∣ f and therefore p ∣ f. Viewed in the quotient ring R, f = 0. Hence if p is not divisible by a square, the nilradical of R is trivial.

Note that the squarefree property is truly necessary in the second paragraph. Take this example in the integers: p = 18 = 2·3^2 and f = 6. We have p ∣ f^2 but of course p ∤ f.

Herstein 3.16: Prove that f(x) = x^4 + x^3 + x + 1 is not irreducible over any field (note: no x^2 term).

We observe that −1, which belongs to any field, is a root of f: f(−1) = 1 − 1 − 1 + 1 = 0. Hence f(x) is divisible by the polynomial (x+1) and is therefore not irreducible.

Herstein 3.17: Prove that f(x) = x^4 + 2x + 2 is irreducible over Q.

This is immediate from the Eisenstein criterion with p=2.
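As a sanity check, the Eisenstein conditions at p = 2 can be verified mechanically (a minimal sketch of my own, listing the coefficients from the constant term up):

```python
coeffs = [2, 2, 0, 0, 1]  # 2 + 2x + 0x^2 + 0x^3 + x^4, constant term first
p = 2
eisenstein = (
    coeffs[-1] % p != 0                      # p does not divide the leading coefficient
    and all(c % p == 0 for c in coeffs[:-1])  # p divides every other coefficient
    and coeffs[0] % (p * p) != 0              # p^2 does not divide the constant term
)
print(eisenstein)  # True, so the polynomial is irreducible over Q
```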

Herstein 3.18: Let F be a finite field and let the characteristic of F be p. Prove that p is prime and that |F| = p^n for some n ∈ Z. Also prove that if a ∈ F then a^{p^n} = a.

First, the notion of characteristic (as defined by Herstein) is relevant here because a field is an integral domain (exercise 3.2.12). The characteristic of F is finite by the pigeonhole principle applied to the set {1, 2·1, 3·1, …}: the list must repeat because F is finite, so there exist α > β ∈ Z such that α·1 = β·1; therefore (α−β)·1 = 0. By exercise 3.2.6, the characteristic of F is prime because it is finite.

Because F has characteristic p, we know that F_0 = {0, 1, 2, …, p−1} ⊆ F.

Now, we are familiar with F_0 ≅ Z/pZ; it is a field containing p elements. If F = F_0 then we have shown that |F| = p^1 and we are done. However, suppose there exists x ∈ F with x ∉ F_0, and consider the set F_1 = {α_1 x + α_0 : α_0, α_1 ∈ F_0}.

Clearly F_0 ⊆ F_1 ⊆ F, and we will show in a moment that it contains p^2 elements. Note that we will not claim that F_1 is a field or even closed under multiplication. For instance, it is not clear at all at this point whether x^2 would belong to F_1. Nevertheless, we will find this construction useful. The sketch of the proof from this point is as follows: we repeat this procedure for as long as F contains an element outside our constructed subsets. The procedure surely terminates because each step generates a proper superset of the preceding step's set, and F is finite. Moreover, the set generated in any step is p times as large as the preceding set. Hence, when the procedure terminates, we realize that the size of F must be a power of p.

First we show that F_1 contains p^2 elements. It is clear that we can enumerate p^2 elements in F_1 because there are p choices for each of the coefficients α_0 and α_1. However, must they all be unique? Yes. If α_1 x + α_0 = α_1′ x + α_0′, then we see that (α_1 − α_1′)x = α_0′ − α_0.

If α_1 − α_1′ is non-zero, then it is an invertible element of F_0, so that we have x = (α_1 − α_1′)^{-1}(α_0′ − α_0) ∈ F_0,

a contradiction. Therefore α_1 = α_1′ and consequently α_0 = α_0′: the element is unique. This proves that there are p^2 elements in F_1 ⊆ F.

Now we would like to show the validity of the procedure in general. Suppose F_{k−1} ⊆ F with |F_{k−1}| = p^k, and suppose there exists y ∈ F with y ∉ F_{k−1}. Then construct F_k = {α_k y + β : α_k ∈ F_0, β ∈ F_{k−1}}.

We can enumerate p^k·p = p^{k+1} elements in F_k, and clearly F_k ⊆ F. Are any of the p^{k+1} elements duplicates? No. If α_k y + β = α_k′ y + β′,

then (α_k − α_k′)y = β′ − β and the argument from above applies again: if α_k − α_k′ is non-zero, then it is an invertible element of F_0 and y = (α_k − α_k′)^{-1}(β′ − β).

We know, however, that F_{k−1} is closed under differences and under multiplication by elements of F_0, because F_0 is a field. Thus, this line of reasoning has us conclude that y ∈ F_{k−1}, a contradiction. Therefore we have α_k = α_k′ and consequently β = β′: the two elements are identical. Now we have shown the recurrence |F_k| = p·|F_{k−1}|.

Because F is finite, it will eventually be exhausted of its elements and we will have some maximal n such that F_n ⊆ F but there does not exist another z ∈ F with z ∉ F_n. But this means that F ⊆ F_n, so that F = F_n and |F| = p^{n+1}. This is the desired result: the number of elements in a finite field must be a power of a prime.

Finally, suppose that |F| = p^n. The non-zero elements of F form a group under multiplication, and there are p^n − 1 of them. By Lagrange's theorem, the order of any element divides p^n − 1. Then if a ∈ F is non-zero, we surely have a^{p^n − 1} = 1

or, what is the same, a^{p^n} = a; the latter also holds trivially for a = 0.
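As a concrete illustration (my own sketch, reusing the irreducible cubic from exercise 3.13): build the 27-element field F_3[x]/(x^3+2x+1), with elements stored as coefficient triples, and check that a^27 = a for every element.

```python
from itertools import product

p = 3  # elements are triples [c0, c1, c2] representing c0 + c1*x + c2*x^2

def mul(a, b):
    """Multiply modulo x^3 + 2x + 1 over F_3 (so x^3 is replaced by x + 2)."""
    prod = [0] * 5
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    for k in (4, 3):                             # reduce degrees 4 and 3
        c, prod[k] = prod[k], 0
        prod[k - 2] = (prod[k - 2] + c) % p      # c * x^{k-3} * x
        prod[k - 3] = (prod[k - 3] + 2 * c) % p  # c * x^{k-3} * 2
    return prod[:3]

def power(a, e):
    """Repeated squaring in the 27-element field."""
    result, base = [1, 0, 0], list(a)
    while e:
        if e & 1:
            result = mul(result, base)
        base, e = mul(base, base), e >> 1
    return result

elements = [list(t) for t in product(range(p), repeat=3)]
print(all(power(a, 27) == a for a in elements))  # True: a^(3^3) = a for all 27 elements
```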

Herstein 3.19: Prove that a non-zero ideal in the Gaussian integers Z[i] must contain a positive integer.

Let I ⊆ Z[i] be a non-zero ideal and let z = a + ib ∈ I with a, b ∈ Z not both zero. Then z·z̄ = (a+ib)(a−ib) = a^2 + b^2 ∈ I

because I is closed under external multiplication. Because one of a or b is non-zero, a^2 + b^2 is a positive integer.

Herstein 3.20: Let R be a ring such that x^4 = x for every x ∈ R. Prove that R is commutative.

Like exercise 3.4.19 (where x^3 = x for all x implies commutativity), this problem is hard. We follow the great proof posted by Steve D. on math.stackexchange.com.

First observe that −x = (−x)^4 = x^4 = x, so that 2x = 0 for any x ∈ R. Next, we consider the magic combination x^2 + x, which was also of interest in 3.5.19. We can try to take the fourth power, but it gives no information. However, we have that (x^2+x)^2 = x^4 + 2x^3 + x^2 = x^4 + x^2 = x^2 + x.

Now we make some subtle, and seemingly unrelated, statements.

(1) If x, y ∈ R have xy = 0, then yx = 0. Using the special property of R, we have yx = (yx)^4 = y(xy)^3 x = 0.

(2) If x ∈ R has x^2 = x, then x commutes with every element of R: let y ∈ R and consider 0 = xy − x^2 y = x(y − xy), hence (y − xy)x = 0,

so that yx = xyx. In the final step, we used property (1). Now do this again, 0 = yx − yx^2 = (y − yx)x, hence x(y − yx) = 0,

which gives xy = xyx. Combining the two, we see that if x^2 = x then xy = xyx = yx for any y ∈ R.

We are not done, because not every x ∈ R satisfies x^2 = x. Let r, s ∈ R and expand the equality r((r+s)^2 + (r+s)) = ((r+s)^2 + (r+s))r,

which holds because t = (r+s)^2 + (r+s) satisfies t^2 = t. Now, canceling the identical terms, we are left with (r^2+r)s + rs^2 = s(r^2+r) + s^2 r,

but we know that r^2 + r commutes with s, so we have rs^2 = s^2 r for arbitrary r, s ∈ R. Finally, we can make the statement that rs = (r+r^2)s − r^2 s = s(r+r^2) − s r^2 = sr

for any r, s ∈ R, so that R is commutative.

Herstein 3.21: Let R, R′ be rings and let ϕ : R → R′ with (1) ϕ(x+y) = ϕ(x) + ϕ(y) for all x, y ∈ R, and (2) ϕ(xy) = ϕ(x)ϕ(y) or ϕ(xy) = ϕ(y)ϕ(x) for all x, y ∈ R. Prove that one of these two options must hold uniformly over the entire ring.

We follow Herstein’s hint, which is to fix aR and to consider the sets Wa={xRϕ(ax)=ϕ(a)ϕ(x)}

and Va={xRϕ(ax)=ϕ(x)ϕ(a)}.

That is, Wa is those x that fall into the first category and Va is those x that fall into the second category. We must have WaVa=R and, of course, a belongs to both so that neither is empty. We seek to prove that one or both of the sets is equal to R.

Suppose there exists b ∈ W_a with b ∉ V_a, so that ϕ(ab) = ϕ(a)ϕ(b) while ϕ(ab) ≠ ϕ(b)ϕ(a). If c ∈ R is arbitrary, consider ϕ(a(b+c)) = ϕ(ab) + ϕ(ac) = ϕ(a)ϕ(b) + ϕ(ac).

We can evaluate this quantity in another way: either ϕ(a(b+c)) = ϕ(a)ϕ(b+c) or ϕ(a(b+c)) = ϕ(b+c)ϕ(a).

If the first holds, we would have ϕ(a)ϕ(b) + ϕ(ac) = ϕ(a)ϕ(b) + ϕ(a)ϕ(c)

so that c ∈ W_a as desired. The second case leads to ϕ(ac) = ϕ(c)ϕ(a) + ϕ(b)ϕ(a) − ϕ(a)ϕ(b) ≠ ϕ(c)ϕ(a),

where we use the fact that ϕ(b)ϕ(a) − ϕ(a)ϕ(b) ≠ 0. Again, we must conclude that ϕ(ac) = ϕ(a)ϕ(c). Therefore if there exists b ∈ W_a with b ∉ V_a, then W_a = R. If the circumstance is reversed, so there exists b ∈ V_a with b ∉ W_a, the same argument gives that V_a = R in that case.

The only other case to consider is W_a ⊆ V_a or vice versa. Say W_a ⊆ V_a; then we know that R = W_a ∪ V_a = V_a. Therefore for any fixed a ∈ R, one of W_a or V_a is the entire ring.

Now we know that, for fixed a ∈ R, either W_a or V_a is the whole of R. We must extend this to a global statement about R itself. Following the hint of this math.stackexchange.com post, we consider the two sets A = {a ∈ R : W_a = R} and B = {a ∈ R : V_a = R}.

As we know, A ∪ B = R. It is also easy to see that each of A and B is closed under addition: e.g. a, a′ ∈ A implies ϕ((a+a′)x) = ϕ(ax) + ϕ(a′x) = (ϕ(a) + ϕ(a′))ϕ(x) = ϕ(a+a′)ϕ(x)

so that a + a′ ∈ A. Suppose that A ≠ R and B ≠ R. If that is the case, then there exists a ∈ A with a ∉ B, and there exists b ∈ B with b ∉ A. Then to which set does a + b belong? If a + b ∈ A, then b = (a+b) − a ∈ A is a contradiction. If a + b ∈ B, then a = (a+b) − b ∈ B is a contradiction. Therefore we must conclude that one (or both) of A or B is the entire ring, which is the desired result.

Note: This very verbose "elementary" argument can be greatly simplified by the result mentioned in the linked hint. Specifically, if G is a group and subgroups G_1, G_2 ≤ G satisfy G_1 ∪ G_2 = G, then G_1 = G or G_2 = G. The proof is essentially what was done in the preceding paragraph, namely this argument by contradiction: if G_1 ≠ G and G_2 ≠ G but G_1 ∪ G_2 = G, then there exists g_1 ∈ G_1 with g_1 ∉ G_2 and there exists g_2 ∈ G_2 with g_2 ∉ G_1. Now if g_1 g_2 ∈ G_1, then g_2 = g_1^{-1}(g_1 g_2) ∈ G_1 is a contradiction. On the other hand, if g_1 g_2 ∈ G_2, then g_1 = (g_1 g_2)g_2^{-1} ∈ G_2 is also a contradiction. Hence one or both subgroups is the entire group G.

In the context of this exercise, we can use this result twice. Note that W_a and V_a are additive subgroups of R whose union is R, so one or the other must be the entire ring. Then A and B are also additive subgroups of R whose union is R.

Herstein 3.22: Let R be a unital ring with (ab)^2 = a^2 b^2 for all a, b ∈ R. Show that R is commutative.

This problem is a standard follow-your-nose element manipulation exercise. In light of the subsequent exercises, it's clearly going to be important that 1 ∈ R, so we start by considering things like a(1+b). Let a, b ∈ R be arbitrary. We have [a(1+b)]^2 = (a+ab)^2 = a^2 + a^2 b + aba + (ab)^2,

but also, using the problem stipulation, [a(1+b)]^2 = a^2(1+b)^2 = a^2 + 2a^2 b + a^2 b^2.

This simplifies to give a^2 b = aba. Similarly, consideration of [(1+a)b]^2 gives ab^2 = bab.

Finally, if we expand [(1+a)(1+b)]^2 = (1+a)^2(1+b)^2

and cancel the obvious terms, we are left with ba + bab + aba = ab + a^2 b + ab^2.

Using the previous two results, this simplifies to ab = ba. Therefore R is commutative.

Herstein 3.23: Find a non-commutative ring R in which (ab)^2 = a^2 b^2 for all a, b ∈ R.

By exercise 3.22, the ring cannot be unital. The condition can be rewritten as a(ba − ab)b = 0,

so we want most or all of the elements of R to be zero divisors (in some cases ab − ba may be zero so we can't make a general statement). My go-to examples of non-commutative rings are the quaternions and rings of matrices. Making even the simplest computations in the quaternions, we have things like (ij)^2 = −1 while i^2 j^2 = +1. It is unlikely that a subring of the quaternions will satisfy the condition of this problem, so we set it aside.

I tried many things with 2×2 matrices which all ultimately failed to pan out. For instance, rings generated by simple (single non-zero entry) matrices with entries from the even integers did not satisfy the condition of the problem, and those matrices which square to zero (which easily satisfy the condition of the problem) end up giving a commutative subring.

The space of 3×3 matrices is a big one, so it is natural to restrict attention to one famous subring, the upper triangular matrices. It's even a good idea to restrict further still, and only consider matrices with zeroes on the diagonal. It turns out that this works. Let A, B and C be the 3×3 matrix units E_{12}, E_{13} and E_{23}, i.e. the matrices whose only non-zero entry is a 1 in position (1,2), (1,3) and (2,3), respectively.

Then we consider the set R = {αA + βB + γC : α, β, γ ∈ Z}.

Note that AC = B while every other product among A, B, C is zero. Thus it is trivial that R is closed under multiplication, because (αA + βB + γC)(α′A + β′B + γ′C) = αγ′B ∈ R.

Furthermore, R is non-commutative, because (α′A + β′B + γ′C)(αA + βB + γC) = α′γB

will generally not be the same as the previous product. Of course, R is closed under addition. Therefore R is a non-commutative subring of Mat_{3×3}(Z). It also satisfies the condition of the problem, because any product of four matrices in R is proportional to B^2 = 0. More explicitly, [(αA + βB + γC)(α′A + β′B + γ′C)]^2 = (αγ′B)^2 = 0

and [(αA + βB + γC)]^2 [(α′A + β′B + γ′C)]^2 = (αγB)(α′γ′B) = 0.

Note that the ring of coefficients for R is really immaterial. Even Z/2Z would suffice, furnishing us with an eight-element ring satisfying the condition of the problem.
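Here is an exhaustive check of that eight-element version (my own sketch; the matrices and helper functions are spelled out explicitly):

```python
from itertools import product

def mat_mul(X, Y, mod=2):
    """3x3 matrix product with entries reduced modulo mod."""
    return tuple(
        tuple(sum(X[i][k] * Y[k][j] for k in range(3)) % mod for j in range(3))
        for i in range(3)
    )

A = ((0, 1, 0), (0, 0, 0), (0, 0, 0))
B = ((0, 0, 1), (0, 0, 0), (0, 0, 0))
C = ((0, 0, 0), (0, 0, 1), (0, 0, 0))

def element(alpha, beta, gamma):
    """Return alpha*A + beta*B + gamma*C over Z/2Z."""
    return tuple(
        tuple((alpha * A[i][j] + beta * B[i][j] + gamma * C[i][j]) % 2 for j in range(3))
        for i in range(3)
    )

R = [element(*t) for t in product(range(2), repeat=3)]
print(len(R))                                                              # 8 elements
print(all(mat_mul(mat_mul(a, b), mat_mul(a, b)) ==
          mat_mul(mat_mul(a, a), mat_mul(b, b)) for a in R for b in R))    # True: (ab)^2 = a^2 b^2
print(any(mat_mul(a, b) != mat_mul(b, a) for a in R for b in R))           # True: non-commutative
```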

Herstein 3.24:

(a) Let R be a unital ring with (ab)^2 = (ba)^2 for all a, b ∈ R. If, for any x ∈ R, 2x = 0 implies x = 0, then show that R is commutative.

(b) Let R be a unital ring with (ab)^2 = (ba)^2 for all a, b ∈ R. Show that R may fail to be commutative if 2x = 0 does not imply x = 0 for all x ∈ R.

(c) Let R be a non-unital ring with (ab)^2 = (ba)^2 for all a, b ∈ R and such that 2x = 0 implies x = 0 for all x ∈ R. Provide an example to show that R need not be commutative.

This exercise illustrates just how fragile the conditions in (a) are for ensuring that R is commutative. If they are relaxed in any respect, R no longer needs to be commutative.

(a) This is another follow-your-nose elementary manipulation. Based on the problem statement, we would like to end up with a result like 2(ab − ba) = 0. First note that [a(1+b)]^2 = (a+ab)^2 = a^2 + a^2 b + aba + (ab)^2,

but this is also equal to [(1+b)a]^2 = (a+ba)^2 = a^2 + aba + ba^2 + (ba)^2.

Comparing the two, we have a^2 b = ba^2 for any a, b ∈ R. In other words, a square commutes with anything. Now, in particular, it is true that (1+a)^2 b = b(1+a)^2. We write 0 = (1+a)^2 b − b(1+a)^2 = b + 2ab + a^2 b − b − 2ba − ba^2 = 2(ab − ba),

again using the property derived above. Because 2x = 0 implies x = 0, we have that ab = ba for arbitrary a, b ∈ R and thus R is commutative.

(b) Here we seek to construct a non-commutative ring with (ab)^2 = (ba)^2 for all a, b ∈ R. By part (a), we have the hint that this ring must contain some non-zero element x with 2x = 0. This naturally suggests things like Z/2Z and Z/4Z. In order to get non-commutativity, it's then natural to look at matrices over those rings.

As in problem 3.23, I tried various rings of 2×2 matrices over Z/2Z and Z/4Z, but they always failed. One can easily write down explicit expressions for 2×2 matrices a, b with (ab)^2 = (ba)^2, and the resulting ring always ends up commutative. Convinced of the futility of 2×2 matrices, we look at 3×3 matrices over Z/2Z.

The solution of 3.23 does not work here because R is required to be unital and the strictly upper-triangular matrices lack an identity element. However, if we include the diagonal, then we have it: recall the notation A, B, C from the solution to 3.23, above, and consider the set R = {x·1 + αA + βB + γC : x, α, β, γ ∈ Z/2Z}

(1 is the identity matrix). R is the 16-element set of 3×3 upper-triangular matrices over Z/2Z whose diagonal entries are all equal, and it is easily checked to be a ring. It is unital (x = 1, α = β = γ = 0) and non-commutative: (1+A)(1+C) = 1 + A + B + C ≠ 1 + A + C = (1+C)(1+A).

Crucially, it also satisfies the condition (ab)^2 = (ba)^2 for all a, b ∈ R. Writing a = x·1 + αA + βB + γC and b = x′·1 + α′A + β′B + γ′C, we have ab = xx′·1 + (xα′ + αx′)A + (xβ′ + βx′ + αγ′)B + (xγ′ + γx′)C,

ba = x′x·1 + (x′α + α′x)A + (x′β + β′x + α′γ)B + (x′γ + γ′x)C.

Squaring, (ab)^2 = 2xx′·ab − (xx′)^2·1 + (xα′ + αx′)(xγ′ + γx′)B,

(ba)^2 = 2x′x·ba − (x′x)^2·1 + (x′α + α′x)(x′γ + γ′x)B.

Because 2 = 0, the first terms vanish; the remaining B terms agree because the coefficients lie in the commutative ring Z/2Z (xα′ + αx′ = x′α + α′x and xγ′ + γx′ = x′γ + γ′x), and the result is proven.
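This can also be confirmed exhaustively by machine; here is a sketch (my own code) that runs over all 16 × 16 pairs of elements:

```python
from itertools import product

def mul(X, Y, mod=2):
    """3x3 matrix product with entries reduced modulo mod."""
    return tuple(
        tuple(sum(X[i][k] * Y[k][j] for k in range(3)) % mod for j in range(3))
        for i in range(3)
    )

# upper-triangular matrices over Z/2Z with constant diagonal x
R = [((x, a, b), (0, x, c), (0, 0, x)) for x, a, b, c in product(range(2), repeat=4)]

print(len(R))                                                        # 16
print(all(mul(mul(a, b), mul(a, b)) == mul(mul(b, a), mul(b, a))
          for a in R for b in R))                                     # True: (ab)^2 = (ba)^2
print(any(mul(a, b) != mul(b, a) for a in R for b in R))              # True: non-commutative
```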

I was stuck on this problem until getting guidance from Jack Schmidt in this math.stackexchange.com post. A modification to the ring of 3.23 should have been an obvious candidate, but hindsight is always 20-20!

(c) Consider the ring R from 3.23, the 3×3 strictly upper-triangular matrices over Z. It is non-unital, non-commutative, and 2x = 0 implies x = 0 for any x ∈ R. We also have (see the solution to 3.23) that (ab)^2 = 0 = (ba)^2 for all a, b ∈ R, so it satisfies the requirements of this problem.

Herstein 3.25: Let R be a ring with no non-zero nilpotent elements such that (ab)^2 = a^2 b^2 for all a, b ∈ R. Prove that R is commutative.

After struggling with this problem for a while, I posted it on math.stackexchange.com where I was directed to a paper by John Wavrik. The paper takes up several problems of the form "supposing R has thus and such a property, prove R is commutative"; it's a nice companion to this stretch of problems in Topics in Algebra. Theorem 3 deals specifically with this problem. A rational argument was given ex post facto by Andreas Blass in the math.stackexchange.com thread, and I combine the ideas here.

To begin, and as a hint to anybody who might be trying this problem, you might say that the key observation in this problem is the following: it can be shown that (ab − ba)^3 = 0 for any a, b ∈ R. Therefore the condition of no non-zero nilpotents is stronger than it needs to be.

The condition of the problem can be rewritten a(ab − ba)b = a[a,b]b = 0,

showing that the commutator [a,b] = ab − ba is killed when sandwiched on the left by a and on the right by b. Now, with the hint from above, we write out (ab − ba)^3 = (ab − ba)(ab − ba)(ab − ba) = ab(ab − ba)ab − ab(ab − ba)ba − ba(ab − ba)ab + ba(ab − ba)ba.

The final term we already know to vanish, and the first term vanishes in the same way because 0 = (ba)^2 − b^2 a^2 = b(ab − ba)a.

One way to proceed with the argument is to show that a(ab − ba)a = b(ab − ba)b = 0. We use a nice trick pointed out by Andreas Blass. Note that [a+b, b] = [a, b], so use the first result to write 0 = (a+b)[a+b, b]b = (a+b)(ab − ba)b.

Subtracting a(ab − ba)b = 0 from this result, we find that b(ab − ba)b = 0.

Thus the commutator [a,b] is annihilated when sandwiched on both sides by b. By symmetry (the same argument applied to [b,a] = −[a,b]), left and right multiplying by a also sends the commutator to zero. Hence the second and third terms of the expansion are also zero, and we have (ab − ba)^3 = 0. By the stipulation of the problem, this implies that ab − ba = 0 for arbitrary a, b ∈ R, so that R is commutative.

Herstein 3.26: Let R be a ring with no non-zero nilpotent elements such that (ab)^2 = (ba)^2 for all a, b ∈ R. Prove that R is commutative.

I don't have a good solution to this problem. It is treated as theorem 5 in the above-mentioned paper by John Wavrik. As in 3.25, the assumption of no nilpotents is stronger than necessary: apparently, it can be shown that (ab − ba)^5 = 0 for all a, b ∈ R. I would be interested to hear of an elegant way of showing it.

Herstein 3.27: Let p_1, …, p_k ∈ Z be distinct primes, n = p_1 ⋯ p_k, and R = Z/nZ. Prove there exist exactly 2^k elements x ∈ R with x^2 = x.

First, observe that x^2 − x = 0 modulo n means that n = p_1 ⋯ p_k divides x(x−1).

Every prime in the list must individually divide either x or x−1 (these scenarios are mutually exclusive, or else we would have a prime dividing x − (x−1) = 1). Hence we have x ≡ 0 mod p_i or x ≡ 1 mod p_i

for every i ∈ {1, …, k}. Each of the k primes has two choices, and there are therefore 2^k potential solutions here. We must show that each choice gives rise to exactly one solution.

We can write a "choice tuple" C = (a_1, …, a_k), representing the fact that x ≡ a_i mod p_i for each i. It is clear that two distinct choices C and C′ will never give rise to the same solution x: for the choices to be distinct, they must differ in the remainder upon division by some p_i. Therefore we need only show (i) that a solution exists for any choice C, and (ii) that the solution is unique modulo n.

First, uniqueness: suppose x and x′ are two solutions for the same choice tuple C. Then they agree in every remainder upon division by the p_i and thus x − x′ ≡ 0 mod p_i

for all i ∈ {1, …, k}. As a result, the product n = p_1 ⋯ p_k must also divide x − x′. In other words, x ≡ x′ mod n, which is the desired statement of uniqueness.

Now fix i ∈ {1, …, k} and consider p_i and n/p_i = ∏_{j≠i} p_j. These two integers are coprime, so there exist r_i, s_i ∈ Z with r_i p_i + s_i (n/p_i) = 1.

Multiply through by a_i and define x_i = a_i(1 − r_i p_i) = a_i s_i ∏_{j≠i} p_j.

From the first equality, we see that x_i ≡ a_i mod p_i. From the second equality, we see that x_i ≡ 0 mod p_j for all j ≠ i. Now, if we define x_i in this way for each i and construct x = Σ_{i=1}^{k} x_i,

it must solve our system of simultaneous congruences.

Now we have shown that, given a choice tuple C, a solution exists and is unique modulo n. Because there are 2^k choice tuples, this proves that there are 2^k solutions of x^2 = x in R.
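For what it's worth, the count is easy to confirm by brute force (my own quick sketch):

```python
def idempotents(n):
    """All x in Z/nZ with x^2 = x."""
    return [x for x in range(n) if (x * x - x) % n == 0]

# n is a product of k distinct primes; the count should be 2^k
for n, k in [(2 * 3, 2), (3 * 5 * 7, 3), (2 * 3 * 5 * 7, 4)]:
    print(n, len(idempotents(n)) == 2**k)  # True in each case
```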

This problem boils down to a special case of the Chinese Remainder Theorem, whose proof is essentially the same as the proof of existence presented above. A related problem is Herstein’s exercise 1.3.15.

Herstein 3.28: Construct a non-zero polynomial q ∈ Z[x] which has no rational roots but for which there exists a solution x ∈ Z to q(x) ≡ 0 mod p for every prime p.

My first instinct here was to try the usual polynomials which have no roots over Q or ℝ, such as x^2 − 2 and x^2 + 1. Recall the results of chapter 3.8, where Lemma 3.8.2 and exercise 3.8.4 combine to state that, if p is a prime, then x^2 + 1 ≡ 0 mod p

has a solution if and only if p = 4k+1 for some k ∈ Z. Then our starting point is q_0(x) = x^2 + 1,

which has no rational roots but does have a solution x to q_0(x) ≡ 0 mod p for any 4k+1 prime p. Proceeding from here, the plan is to tack on additional factors to cover the cases of other sorts of primes.

One might hope that there is another common non-trivial (i.e. not a perfect square in Z) quadratic residue among all 4k+3 primes, as −1 was common to all 4k+1 primes. This is not the case. However, we will prove below that if p is of the form 4k+3 with k ∈ Z, then one of 2 or −2 is always a quadratic residue modulo p (hint from JavaMan on math.stackexchange.com). Therefore, if we augment: q_1(x) = (x^2+1)(x^2+2)(x^2−2),

then it still has no rational roots, but it has a solution x to q_1(x) ≡ 0 mod p for any odd prime p. The only remaining concern is the prime 2, but we see that x = 0 is already a root modulo 2.

In summary, the polynomial q(x) = (x^2+1)(x^2+2)(x^2−2) ∈ Z[x]

has no rational roots but has a solution x to q(x) ≡ 0 mod p for any prime p.
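As a quick numerical spot check (my own sketch), one can verify that this q has a root modulo every prime below some bound:

```python
def primes_below(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * n
    sieve[0:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def has_root_mod_p(p):
    return any(((x*x + 1) * (x*x + 2) * (x*x - 2)) % p == 0 for x in range(p))

print(all(has_root_mod_p(p) for p in primes_below(1000)))  # True
```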

Lemma: If p is an odd prime, then there are as many quadratic residues as quadratic non-residues modulo p.

Proof: Note: for purposes of symmetry, we do not consider 0 to be a quadratic residue. The candidates for quadratic residues modulo p are {1, …, p−1}; there are an even number of them. We can enumerate the actual residues by simply considering the set {x^2 : x ∈ {1, …, p−1}} ⊆ Z/pZ.

In doing so, we note that (p−x)^2 = p^2 − 2px + x^2 ≡ x^2 mod p, so fully half of those squares do not contribute distinct residues to the set. This puts an upper bound of (p−1)/2 on the number of quadratic residues modulo the odd prime p.

Are there any other duplicates? No. Suppose x and y square to the same residue: x^2 ≡ y^2 mod p. Then (x+y)(x−y) ≡ 0 mod p, so that p ∣ (x−y) or p ∣ (x+y). Restricting x, y ∈ {1, …, p−1}, the first case gives y = x and the second case gives y = p−x. Therefore, there are no other duplicates: exactly half, (p−1)/2, of the values {1, …, p−1} are quadratic residues modulo p. This leaves the other half as quadratic non-residues.

Lemma: If p = 4k+3 is a prime, with k ∈ Z, and a ∈ {1, …, p−1}, then exactly one of a or −a is a quadratic residue modulo p.

Proof: By the lemma above, there are (p−1)/2 quadratic residues modulo p. For each one, we will exhibit a complementary, unique quadratic non-residue, thus accounting for all p−1 values in {1, …, p−1}.

Let m ∈ {1, …, p−1} be a quadratic residue modulo p, with x such that x^2 ≡ m mod p. Suppose that m′ = p − m is also a quadratic residue, with y such that y^2 ≡ −m mod p. Then, because (m, p) = 1, we can invert, writing y^{-2} ≡ −m^{-1} mod p

and hence (xy^{-1})^2 ≡ −1 mod p.

This is impossible by exercise 3.8.4 because p ≡ 3 mod 4. Therefore it must be that m′ is a non-residue. To each of the (p−1)/2 quadratic residues m, we associate a quadratic non-residue m′ = p − m. In this way, we enumerate the whole set {1, …, p−1}. Every element is therefore a quadratic residue or its complementary non-residue.

Corollary: If p = 4k+3 is a prime, with k ∈ Z, then one of 2 or −2 is a quadratic residue modulo p.
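The two lemmas and the corollary are easy to sanity-check numerically; here is a short script (my own) that does so for all primes p ≡ 3 mod 4 below 500:

```python
def quadratic_residues(p):
    """The non-zero quadratic residues modulo p."""
    return {(x * x) % p for x in range(1, p)}

def exactly_one_of_a_and_minus_a(p):
    qr = quadratic_residues(p)
    return all((a in qr) != ((p - a) in qr) for a in range(1, p))

primes_3_mod_4 = [p for p in range(3, 500, 4)
                  if all(p % d for d in range(2, int(p**0.5) + 1))]

print(all(exactly_one_of_a_and_minus_a(p) for p in primes_3_mod_4))        # True
print(all(2 in quadratic_residues(p) or (p - 2) in quadratic_residues(p)
          for p in primes_3_mod_4))                                        # True: 2 or -2 is a residue
```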
