This page covers the supplementary problems at the end of chapter 3.
The problems below are paraphrased from/inspired by those given in Topics in Algebra by Herstein. The solutions are my own unless otherwise noted. I will generally try, in my solutions, to stick to the development in the text. This means that problems will not be solved using ideas and theorems presented further on in the book.
Let $a, b \in R$ be such that $ab \in P$. That $R/P$ is an integral domain is equivalent to the statement that $(a + P)(b + P) = P$ implies $a + P = P$ or $b + P = P$, which is equivalent to the statement that $ab \in P$ implies $a \in P$ or $b \in P$, i.e. that $P$ is a prime ideal.
Theorem 3.5.1 states that $M$ is a maximal ideal of the commutative, unital ring $R$ if and only if $R/M$ is a field. Therefore if $M$ is a maximal ideal of $R$, we have that $R/M$ is a field and thus an integral domain. By exercise 3.1, $M$ is a prime ideal.
The ring cannot be a PID, because a non-zero prime ideal will always be maximal in that case: if $(p)$ is a prime ideal, then an ideal $(a)$ containing it will have $a \mid p$, which won't happen in any non-trivial way. Then we avoid PIDs and look for things that are somewhat more exotic, but non-commutative rings are probably too exotic. A natural choice for investigation is polynomial rings.
One thought is something like $(x)$ in $F[x]$. This is a prime ideal but it's also relatively easy to see that it is maximal. However, if we consider instead $(x)$ in $\mathbb{Z}[x]$, we have it. The generator $x$ is irreducible so, because $\mathbb{Z}[x]$ is a UFD, the ideal $(x)$ is prime. The ideal $(x, 2)$ properly contains it because it contains elements (e.g. $2$) with degree smaller than $1$. However, $(x, 2) \neq \mathbb{Z}[x]$ because $1$ is clearly not in $(x, 2)$. Therefore $(x)$ is prime but not maximal in $\mathbb{Z}[x]$.
Note that $(x, 3)$ and $(x, 5)$ would work via the same argument.
Let $P$ be a prime ideal of the finite commutative ring $R$. We have by exercise 3.1 that $R/P$ is an integral domain, and by lemma 3.2.2 that a finite integral domain must be a field. Therefore $R/P$ is a field, so that theorem 3.5.1 gives us that $P$ is a maximal ideal of $R$.
The meaning of this exercise is not clear. It seems that we just want to show that the name of the indeterminate is irrelevant. This can be done by considering the obvious mapping $\phi : F[x] \to F[t]$ with $\phi\left(a_0 + a_1x + \cdots + a_nx^n\right) = a_0 + a_1t + \cdots + a_nt^n$. The arithmetic of coefficients is untouched, therefore $\phi$ is a homomorphism, and it is clear that it is both injective and onto.
Let $\phi$ be an automorphism of $F[x]$ which fixes every element of $F$. Consider $p(x) = a_0 + a_1x + \cdots + a_nx^n$; then $\phi(p(x)) = a_0 + a_1\phi(x) + \cdots + a_n\phi(x)^n,$ which has simplified because $\phi$ is a homomorphism and fixes the coefficients. From this we see that the $F$-fixing automorphism $\phi$ is determined entirely by where it maps the polynomial $x$. If $\deg \phi(x) = 0$, then the map can not be surjective, because only constants are in the image of $\phi$. If $\deg \phi(x) \geq 2$, then again the map can not be surjective: the degree of the image of a non-constant polynomial $p(x)$ is $\deg p(x) \cdot \deg \phi(x) \geq 2$. Hence no polynomials of degree $1$ would be in the image of $\phi$.
Therefore $\phi(x) = ax + b$ for some $a, b \in F$ is the only remaining possibility. The question is whether $a$ and $b$ so chosen respect the fact that the map must be an automorphism. We have $\phi(p(x)) = p(ax + b).$ This is simply a composition and we easily see that $\phi$ is still a homomorphism: $\phi(p(x) + q(x)) = p(ax+b) + q(ax+b) = \phi(p(x)) + \phi(q(x))$ and $\phi(p(x)q(x)) = p(ax+b)q(ax+b) = \phi(p(x))\phi(q(x)).$
Suppose $p, q \in F[x]$ are such that $\phi(p) = \phi(q)$, and suppose $p \neq q$. Then $0 = \phi(p) - \phi(q) = \sum_{i=0}^{n} c_i(ax+b)^i,$ where the $c_i$ are the differences of the coefficients of $p$ and $q$, and $n$ is the maximum of the two degrees. Starting from the degree $n$ term, it is clear that $c_n = 0$ because there is no way to cancel the $c_na^nx^n$ term otherwise. Next the degree $n-1$ term suffers the same fate, and so on, down the chain. Thus all of the coefficients $c_i$ must vanish identically, so that $\phi(p) = \phi(q)$ implies $p = q$, i.e. that $\phi$ is injective. This argument would fail if $a = 0$ because there are no powers of $x$ to speak of. However, $\phi$ is not injective in the $a = 0$ case: $\phi(x) = b$ and $\phi(b) = b$. Therefore we must restrict $a \neq 0$.
Observe that, if $a \neq 0$, then $\phi\left(a^{-1}(x - b)\right) = a^{-1}\left((ax + b) - b\right) = x.$ Therefore $\phi$ is surjective because any polynomial $p(x)$ is the image under $\phi$ of $p\left(a^{-1}(x - b)\right)$.
To summarize, we have shown that if $\phi$ is an automorphism of $F[x]$ which fixes the coefficient field $F$, then $\phi$ must be of the form $\phi(p(x)) = p(ax + b),$ where $a, b \in F$ and $a \neq 0$.
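The classification can be sanity-checked computationally. The following sketch (my own; the text contains no code) works over $\mathbb{Z}_5$ as a concrete coefficient field and verifies that the substitution map $p(x) \mapsto p(ax + b)$ is multiplicative and is inverted by $p(x) \mapsto p\left(a^{-1}(x - b)\right)$.

```python
# Polynomials over Z_5 as coefficient lists, lowest degree first.
P = 5

def poly_mul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % P
    return out

def poly_add(f, g):
    n = max(len(f), len(g))
    f = f + [0] * (n - len(f)); g = g + [0] * (n - len(g))
    return [(a + b) % P for a, b in zip(f, g)]

def subst(f, a, b):
    """phi_{a,b}(f) = f(a*x + b), computed by Horner's rule."""
    out = [0]
    for c in reversed(f):
        out = poly_add(poly_mul(out, [b % P, a % P]), [c % P])
    return out

def trim(f):
    while len(f) > 1 and f[-1] == 0:
        f.pop()
    return f

a, b = 2, 3               # phi(x) = 2x + 3
ainv = pow(a, -1, P)      # a^{-1} = 3 in Z_5

f = [1, 4, 0, 2]          # f(x) = 2x^3 + 4x + 1
g = [3, 0, 1]             # g(x) = x^2 + 3

# phi is multiplicative: phi(f g) = phi(f) phi(g)
lhs = trim(subst(poly_mul(f, g), a, b))
rhs = trim(poly_mul(subst(f, a, b), subst(g, a, b)))
print(lhs == rhs)         # True

# phi_{a,b} is inverted by phi_{a^{-1}, -a^{-1} b}
back = trim(subst(subst(f, a, b), ainv, (-ainv * b) % P))
print(back == f)          # True
```

The additive property holds for the same reason, so this is evidence (not proof, of course) for the substitution maps being automorphisms.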
If we wanted to replace $F$ by a ring $R$, we note that if $R$ is not an integral domain, then it is conceivable that $\deg p(ax + b) < \deg p(x)$ would be an acceptable situation, so there might be much more to study here. However, if $R$ is an integral domain, the argument carries through but $a$ must be a unit rather than simply non-zero.
(a) If $a, b \in N$ with $a^m = 0$ and $b^n = 0$, then $(a + b)^{m+n} = \sum_{k=0}^{m+n}\binom{m+n}{k}a^kb^{m+n-k}$ by the binomial theorem (valid because $R$ is commutative). The idea here is that every term in this sum is zero because either $a$ is raised to a high enough power or $b$ is raised to a high enough power. This is most clear if we split the sum at $k = m$, so that $(a + b)^{m+n} = \sum_{k=0}^{m-1}\binom{m+n}{k}a^kb^{m+n-k} + \sum_{k=m}^{m+n}\binom{m+n}{k}a^kb^{m+n-k}.$ Now if $k \geq m$ we have $a^k = 0$, while $k < m$ gives $m + n - k > n$ and hence $b^{m+n-k} = 0$. Hence $(a+b)^{m+n} = 0$ and so $a + b \in N$ whenever $a, b \in N$.
In addition, if $r \in R$ and $a \in N$ with $a^n = 0$, then $(ra)^n = r^na^n = 0$ so $ra \in N$. Therefore, $N$ is an ideal of $R$.
(b) If $x + N$ is nilpotent in $R/N$, then $x^k \in N$ for some $k$. Thus there exists $m$ such that $x^{km} = (x^k)^m = 0$, which shows that $x$ also belongs to $N$, i.e. $x + N$ is the zero element of $R/N$.
This is reminiscent of $R/N$ being an integral domain. It's not that we necessarily have no zero divisors in $R/N$, but we have no non-trivial nilpotent elements in $R/N$. Of course, nilpotent elements are a particular class of zero divisor. If $R/N$ were an integral domain, then we could say by exercise 3.1 that $N$ is a prime ideal. While this might not be true, it is indeed true that the nilradical is related to prime ideals. It turns out that $N$ is the intersection of all prime ideals of $R$ (proof).
(a) If $a \in A$ then $a$ is immediately a member of $\sqrt{A}$ because $a^1 \in A$. Thus $A \subseteq \sqrt{A}$. If $a, b \in \sqrt{A}$ then the proof of exercise 3.7a carries over almost verbatim to show that $a + b \in \sqrt{A}$. The various terms in the binomial expansion do not vanish, but they all belong to $A$, which is an ideal. Hence $(a+b)^{m+n} \in A$. If $r \in R$ and $a \in \sqrt{A}$ with $a^n \in A$, then $(ra)^n = r^na^n \in A$ so that $ra \in \sqrt{A}$. Therefore $\sqrt{A}$ is an ideal containing $A$.
(b) Let $x \in \sqrt{\sqrt{A}}$ so that $x^n \in \sqrt{A}$ for some $n$. Then there exists $m$ with $x^{nm} = (x^n)^m \in A$ and hence $x \in \sqrt{A}$. This shows that $\sqrt{\sqrt{A}} \subseteq \sqrt{A}$. By part (a), we already know that $\sqrt{A} \subseteq \sqrt{\sqrt{A}}$. Therefore $\sqrt{\sqrt{A}} = \sqrt{A}$.
The nilradical is the radical of the zero ideal. Note that the parts of exercise 3.7 are the respective special cases (taking $A = (0)$) of the parts of exercise 3.8, despite the fact that part (b) of exercise 3.7 is expressed somewhat differently.
If $a$ is an element of the nilradical of $\mathbb{Z}_n$, then there exists $k$ with $a^k \equiv 0 \pmod{n}$, i.e. $a^k = nm$ for some integer $m$. The prime factors of the left hand side will always be exactly those of $a$. If one of the prime factors of $n$ is missing from $a$, then this equation will have no solution. To be more explicit, say that $n = p_1^{e_1}p_2^{e_2}\cdots p_r^{e_r};$ then we claim that $a$ belongs to the nilradical if and only if $a$ is a multiple of $p_1p_2\cdots p_r$. This statement correctly describes even the trivial case of $a = 0$, but we exclude that case in what follows.
If $a$ is non-zero and of this form, then take the power $k$ large enough that each prime in $a^k$ is raised to a power greater than its power in $n$ (say, $k = e_1 + e_2 + \cdots + e_r$), and then $m = a^k / n$ makes up the deficit.
On the other hand, if non-zero $a$ is known to be in the nilradical, then we have $a^k = nm$ for some $k$ and $m$. If $p$ is a prime dividing $n$, then it also must divide the left hand side, $a^k$. As $p$ is a prime, we have further that $p \mid a$. This holds for each prime, so every prime dividing $n$ must also appear, with at least one power, in $a$. This proves the claim.
As an example, consider $\mathbb{Z}_{24}$ where $24 = 2^3 \cdot 3$. The prime product is $2 \cdot 3 = 6$. The elements of the nilradical of $\mathbb{Z}_{24}$ are $6$ ($6^3 = 216 = 9 \cdot 24$), $12$ ($12^2 = 144 = 6 \cdot 24$), $18$ ($18^3 = 5832 = 243 \cdot 24$) and of course $0$. On the other hand, something like $2$ is not in the nilradical because $24$ is always divisible by $3$ whereas $2^k$ never is.
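The claim about the nilradical of $\mathbb{Z}_n$ is easy to confirm by brute force. This Python sketch (my own addition) computes the nilradical directly from the definition and compares it against the multiples of the product of primes dividing $n$.

```python
# Brute-force check: in Z_n with n = p1^e1 ... pr^er, the nilradical is
# exactly the set of multiples of p1*p2*...*pr.
def nilradical(n):
    # an element is nilpotent iff some power of it is 0 mod n
    return {a for a in range(n) if any(pow(a, k, n) == 0 for k in range(1, 2 * n))}

def prime_product(n):
    # product of the distinct primes dividing n
    prod, p = 1, 2
    while p <= n:
        if n % p == 0:
            prod *= p
            while n % p == 0:
                n //= p
        p += 1
    return prod

n = 24
q = prime_product(n)                               # 2 * 3 = 6
print(sorted(nilradical(n)))                       # [0, 6, 12, 18]
print(sorted(a for a in range(n) if a % q == 0))   # the same list
```

The exponent bound `2 * n` is generous; any $k$ at least as large as the largest exponent in the factorization of $n$ would do.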
Because $A$ is an ideal (and therefore closed under external multiplication), $AB \subseteq A$; because $B$ is an ideal, $AB \subseteq B$. Therefore $AB \subseteq A$ and $AB \subseteq B$ so $AB \subseteq A \cap B$.
This is the analogue of the center of a group, the set of all elements that commute with everything. Let $a, b \in Z(R)$ and let $r$ be an arbitrary element of $R$. Then $(a + b)r = ar + br = ra + rb = r(a + b),$ showing that $a + b \in Z(R)$. Similarly, $(ab)r = a(br) = a(rb) = (ar)b = (ra)b = r(ab)$ so that $ab \in Z(R)$. Of course, if there exists a unit element $1 \in R$, then $1 \in Z(R)$; likewise $a \in Z(R)$ implies $-a \in Z(R)$. Therefore $Z(R)$ is a subring of $R$. It is patently a commutative ring.
By exercise 3.11, we know that $Z(R)$ is a commutative subring. If $R$ is a division ring then so too is $Z(R)$: for non-zero $a \in Z(R)$ and any $r \in R$, we have $a^{-1}r = a^{-1}(ra)a^{-1} = a^{-1}(ar)a^{-1} = ra^{-1}$, so $a^{-1} \in Z(R)$. A commutative division ring is a field, by definition.
It seems a difficult problem in general to construct arbitrary irreducible polynomials. However, with small degree and a small field of coefficients, we can force it through. We observe that we may restrict attention to monic polynomials, because $\mathbb{Z}_5$ is a field so the highest coefficient is invertible and factoring it out does not affect reducibility. In addition, a reducible degree $2$ polynomial must have a linear factor because the only non-trivial way to partition $2$ is $1 + 1$. Then if $p(x)$ is monic and reducible, we will be able to write $p(x) = (x - \alpha)q(x),$ where the latter polynomial may be further reducible, but that is of no concern. Clearly such a polynomial must map some element $\alpha$ to zero. Returning to the problem at hand, we now know that a degree $2$ polynomial which has no root in $\mathbb{Z}_5$ must be irreducible.
Now a simple brute force search is easy. We restrict to monic degree $2$ polynomials, searching for one which doesn't map any of $0, \pm 1, \pm 2$ to zero. For simplicity, we keep the linear term out of it. The first thing to try is $x^2 + 1$ but it has $(\pm 2)^2 + 1 = 0$. Next, $x^2 + 2$, which works: $0^2 + 2 = 2$, $(\pm 1)^2 + 2 = 3$ and $(\pm 2)^2 + 2 = 1$. Therefore one such irreducible polynomial is $p(x) = x^2 + 2,$ and there are other possibilities.
By exercise 3.9.7, $\mathbb{Z}_5[x]/\left(x^2 + 2\right)$ is a field of $25$ elements.
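The brute-force search is a few lines of code. The following sketch (mine, with $\mathbb{Z}_5$ as the coefficient field as in the discussion above) finds rootless monic quadratics, which for degree $2$ is the same as finding irreducible ones.

```python
# Search for monic quadratics over Z_5 with no roots; for degree 2,
# no root in the field means irreducible.
P = 5

def has_root(coeffs):
    # coeffs are (c0, c1, ...) lowest degree first
    return any(sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P == 0
               for x in range(P))

print(has_root((1, 0, 1)))   # x^2 + 1: True  (2^2 + 1 = 5 = 0 mod 5)
print(has_root((2, 0, 1)))   # x^2 + 2: False, so x^2 + 2 is irreducible

# All monic irreducible quadratics over Z_5, found the same way:
irred = [(b, a) for a in range(P) for b in range(P)
         if not has_root((b, a, 1))]
print(len(irred))            # (5^2 - 5) / 2 = 10 of them
```

The count $10 = (5^2 - 5)/2$ agrees with the standard formula for monic irreducible quadratics over a field of $5$ elements.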
Note that the connection found here between roots and irreducibility is not of general use. There are sometimes polynomials over fields which have no roots but are nevertheless reducible, such as a product of two irreducible quadratics. The observation is special for degrees $2$ and $3$, where any reduction involves a degree one term. It does not even extend to higher odd degrees, because one can imagine a fifth degree polynomial that splits into irreducible factors of degree $2$ and $3$, neither of which has a root.
We have $625 = 5^4$, so exercise 3.9.7 suggests that we look for a degree $4$ polynomial $p(x)$ irreducible over $\mathbb{Z}_5$. Then $\mathbb{Z}_5[x]/(p(x))$ will be the desired field.
For degree $4$, the methods of the previous problem are not enough. Therefore we try brute force: writing down the simplest polynomials and manually checking that they are irreducible, hoping to get lucky. A few observations are helpful. 1) If $p(x)$ were reducible, it could be factored into two degree $2$ polynomials, or into a degree $1$ and a degree $3$ polynomial. In the latter case, $p(x)$ must have a root in $\mathbb{Z}_5$ due to its linear factor. Therefore we look for candidates which have no roots in $\mathbb{Z}_5$, but that is necessary and not sufficient. 2) Fermat's little theorem says that $a^5 = a$, i.e. $a^4 = 1$ for $a \neq 0$, in our field. 3) The quadratic residues modulo $5$ are $1$ and $4$.
$x^4 + 1$ has no root because $a^4 + 1 = 2$ for $a \neq 0$, while $0^4 + 1 = 1$. We then try to factor it as $x^4 + 1 = (x^2 + ax + b)(x^2 + cx + d)$ and find that the equations for each coefficient have a consistent solution: $a = c = 0$, $b = 2$, $d = 3$. Thus $x^4 + 1 = (x^2 + 2)(x^2 + 3)$ is reducible.
The next few simple candidates fare no better. Some have roots in $\mathbb{Z}_5$, as quick inspection of the values they take shows, and a polynomial with a root is reducible. Another takes only non-zero values on $\mathbb{Z}_5$, so it has no roots; however, again writing down the equations for the coefficients in a product of quadratics, we find that it too is reducible.
Eventually we land on $x^4 + x + 4$, which takes the values $4, 1, 2, 3, 4$ on $0, 1, 2, 3, 4$, so it has no roots. The equations for the coefficients in $x^4 + x + 4 = (x^2 + ax + b)(x^2 + cx + d)$ are $a + c = 0$, $b + d + ac = 0$, $ad + bc = 1$ and $bd = 4$. Combining these four equations, we see that $a(d - b) = 1, \qquad d + b = a^2, \qquad bd = 4.$
From the first of these new equations, we see that $a \neq 0$, because $a = 0$ precludes the possibility that $a(d - b) = 1$. Next, multiplying $d - b = a^{-1}$ against $d + b = a^2$, we see that $16 = (2d)(2b) = (a^2 + a^{-1})(a^2 - a^{-1}) = a^4 - a^{-2}$. However, we know that $a^4 = 1$ by little Fermat, and $16 \equiv 1$, so we have our desired contradiction at long last: $a^{-2} = 0$ is impossible. Thus $x^4 + x + 4$ is irreducible over $\mathbb{Z}_5$ and $\mathbb{Z}_5[x]/\left(x^4 + x + 4\right)$ is a field of order $625$.
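The same kind of exhaustive check settles quartics, where rootlessness alone is not enough. This sketch (my own; the quartic $x^4 + x + 4$ is my example of a polynomial over $\mathbb{Z}_5$ that passes both tests) also checks for factorizations into two monic quadratics.

```python
# Exhaustive irreducibility test for a monic quartic over Z_5: check for
# linear factors (roots) and for splittings into two monic quadratics.
P = 5

def value(coeffs, x):
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

def is_irreducible_quartic(coeffs):
    # coeffs: (c0, c1, c2, c3) for x^4 + c3 x^3 + c2 x^2 + c1 x + c0
    c0, c1, c2, c3 = coeffs
    if any(value((c0, c1, c2, c3, 1), x) == 0 for x in range(P)):
        return False                       # linear factor exists
    for a in range(P):
        for b in range(P):
            for c in range(P):
                for d in range(P):
                    # compare against (x^2 + a x + b)(x^2 + c x + d)
                    if ((a + c) % P == c3 and (b + d + a * c) % P == c2 and
                            (a * d + b * c) % P == c1 and (b * d) % P == c0):
                        return False       # splits into two quadratics
    return True

print(is_irreducible_quartic((1, 0, 0, 0)))   # x^4 + 1: False (reducible)
print(is_irreducible_quartic((4, 1, 0, 0)))   # x^4 + x + 4: True
```

A degree $1$ times degree $3$ splitting is covered by the root check, so the two tests together are exhaustive for degree $4$.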
For an element $x + (a)$ to be in the nilradical of the quotient $R/(a)$, it means there exists an integer $k$ such that $a$ divides $x^k$. First we show that the nilradical being trivial implies that $a$ cannot be divisible by a square. Consider the contrapositive statement: if $a$ is divisible by a square, say $a = b^2c$ with $b$ not a unit, then $(bc)^2 = b^2c \cdot c = ac$ so that the nilradical is non-trivial because $bc + (a)$ is in it (and $bc \notin (a)$, since cancellation in $bc = b^2ct$ would force $b$ to be a unit). Hence if the nilradical is trivial then $a$ must not be divisible by a square.
Now suppose that $a$ is not divisible by any square. Using the fact that the ring is a UFD, we can write $a = \pi_1\pi_2\cdots\pi_r$ with the $\pi_i$ irreducible and all distinct. If $a \mid x^k$, then there is $y$ with $x^k = ay$. Then by lemma 3.7.6 every $\pi_i$ divides $x$ and therefore $a \mid x$. Viewed in the quotient ring, $x + (a) = (a)$ is the zero element. Hence if $a$ is not divisible by a square, the nilradical of $R/(a)$ is trivial.
Note that the squarefree property is truly necessary in the second paragraph. Take this example in the integers: $a = 12$ and $x = 6$. We have $12 \mid 6^2$ but of course $12 \nmid 6$.
We observe that $1$, which belongs to any field, is a root of $x^n - 1$. Hence $x^n - 1$ is divisible by the polynomial $x - 1$ and is therefore reducible.
This is immediate from the Eisenstein criterion.
First, the notion of characteristic (as defined by Herstein) is relevant here because a field is an integral domain (exercise 3.2.12). The characteristic of the finite field $F$ is finite by the pigeonhole principle applied to the set $\{1, 1 + 1, 1 + 1 + 1, \ldots\}$: the list must repeat because $F$ is finite, so there exist $m > n$ such that $m \cdot 1 = n \cdot 1$; therefore $(m - n) \cdot 1 = 0$. By exercise 3.2.6, the characteristic of $F$ is prime because it is finite.
Because $F$ has characteristic $p$, we know that $px = 0$ for every $x \in F$.
Now, we are familiar with $P = \{0, 1, 2 \cdot 1, \ldots, (p-1) \cdot 1\}$; it is a field containing $p$ elements. If $F = P$ then we have shown that $|F| = p$ and we are done. However, suppose there exists $a \in F$ with $a \notin P$, and consider the set $P_1 = \{\alpha + \beta a : \alpha, \beta \in P\}.$
Clearly $P \subseteq P_1$, and we will show in a moment that it contains $p^2$ elements. Note that we will not claim that $P_1$ is a field or even closed under multiplication. For instance, it is not clear at all at this point whether $a^2$ would belong to $P_1$. Nevertheless, we will find this construction useful. The sketch of the proof from this point is as follows: we repeat this procedure for as long as $F$ contains an element outside our constructed subsets. The procedure surely terminates because each step generates a proper superset of the preceding step's set, and $F$ is finite. Moreover, the set generated in any step is $p$ times as large as the preceding set. Hence, when the procedure terminates, we realize that the size of $F$ must be a power of $p$.
First we show that $P_1$ contains $p^2$ elements. It is clear that we can enumerate $p^2$ elements in $P_1$ because there are $p$ choices for each of the coefficients $\alpha$ and $\beta$. However, must they all be unique? Yes. If $\alpha + \beta a = \gamma + \delta a$, then we see that $(\beta - \delta)a = \gamma - \alpha.$ If $\beta - \delta$ is non-zero, then it is an invertible element of the field $P$, so that we have $a = (\beta - \delta)^{-1}(\gamma - \alpha) \in P,$ a contradiction. Therefore $\beta = \delta$ and consequently $\alpha = \gamma$, the element is unique. This proves that there are $p^2$ elements in $P_1$.
Now we would like to show the validity of the procedure in general. Suppose $P_k \subseteq F$ with $|P_k| = p^k$, and suppose there exists $b \in F$ with $b \notin P_k$. Then construct $P_{k+1} = \{x + \beta b : x \in P_k,\ \beta \in P\}.$
We can enumerate $p^{k+1}$ elements in $P_{k+1}$, and clearly $P_k \subseteq P_{k+1}$. Are any of the elements duplicates? No. If $x + \beta b = x' + \beta'b,$ then $(\beta - \beta')b = x' - x$ and the argument from above applies again: if $\beta - \beta'$ is non-zero, then it is an invertible element of $P$ and $b = (\beta - \beta')^{-1}(x' - x).$ We know, however, that $P_k$ is closed under multiplication by elements of $P$, because every element of $P_k$ is a linear combination with coefficients in the field $P$. Thus, this line of reasoning has us conclude that $b \in P_k$, a contradiction. Therefore we have $\beta = \beta'$ and consequently $x = x'$, the two elements are identical. Now we have shown the recurrence $|P_{k+1}| = p|P_k|$.
Because $F$ is finite, it will eventually be exhausted of its elements and we will have some maximal $n$ such that $P_n \subseteq F$, but there does not exist another $b \in F$ with $b \notin P_n$. But this means that $F \subseteq P_n$, so that $F = P_n$ and $|F| = p^n$. This is the desired result: the number of elements in a finite field must be a power of a prime.
Finally, suppose that $|F| = p^n$. We have that the non-zero elements of $F$ form a group under multiplication, and there are $p^n - 1$ of them. By Lagrange's theorem, the order of any element divides $p^n - 1$. Then if $a \neq 0$, we surely have $a^{p^n - 1} = 1,$ or, what is the same, $a^{p^n} = a$ (and the latter holds for $a = 0$ as well).
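Both conclusions can be observed in a small concrete field. Here a field of $9$ elements is built (my choice of model) as $\mathbb{Z}_3[x]/(x^2 + 1)$, with elements represented as pairs $(u_0, u_1)$ standing for $u_0 + u_1t$ where $t^2 = -1$.

```python
# Check a^(p^n) = a in a concrete field of p^n elements: F_9 built as
# Z_3[x]/(x^2 + 1); x^2 + 1 has no root mod 3, so it is irreducible.
P = 3

def mul(u, v):
    # (u0 + u1 t)(v0 + v1 t) with t^2 = -1
    u0, u1 = u; v0, v1 = v
    return ((u0 * v0 - u1 * v1) % P, (u0 * v1 + u1 * v0) % P)

def power(u, k):
    out = (1, 0)
    for _ in range(k):
        out = mul(out, u)
    return out

elements = [(a, b) for a in range(P) for b in range(P)]

# a^9 = a for every element, and a^8 = 1 for every non-zero element
print(all(power(u, P ** 2) == u for u in elements))                          # True
print(all(power(u, P ** 2 - 1) == (1, 0) for u in elements if u != (0, 0)))  # True
```

The second line is exactly the Lagrange consequence: the multiplicative group has $9 - 1 = 8$ elements.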
Let $A$ be a non-zero ideal of the Gaussian integers and let $a + bi \in A$ with $a$ and $b$ not both zero. Then $(a + bi)(a - bi) = a^2 + b^2 \in A$ because $A$ is closed under external multiplication. Because one of $a$ or $b$ is non-zero, $a^2 + b^2$ is a positive integer.
Like exercise 3.4.19 (where $x^3 = x$ for all $x$ implies commutativity), this problem — $x^4 = x$ for all $x$ implies commutativity — is hard. We follow the great proof posted by Steve D. on math.stackexchange.com.
First observe that $-x = (-x)^4 = x^4 = x$, so that $2x = 0$ for any $x \in R$. Next, we consider the magic combination $x^2 + x$, which was also of interest in 3.5.19. We can try to take the fourth power, but it gives no information. However, we have that $(x^2 + x)^2 = x^4 + 2x^3 + x^2 = x^4 + x^2 = x^2 + x,$ so $x^2 + x$ is idempotent.
Now we make some subtle, and seemingly unrelated, statements.
(1) If $a, b \in R$ have $ab = 0$, then $ba = 0$. Using the special property of $R$, we have $ba = (ba)^4 = b(ab)^3a = 0$.
(2) If $e \in R$ has $e^2 = e$, then $e$ commutes with every element of $R$: let $r \in R$ and consider $e(re - ere) = ere - ere = 0,$ so that $0 = (re - ere)e = re - ere$. In the final step, we used property (1). Now do this again, $(er - ere)e = ere - ere = 0,$ which gives $0 = e(er - ere) = er - ere$, again by property (1). Combining the two, we see that if $e^2 = e$ then $er = ere = re$ for any $r \in R$.
We are not done, because not every $x \in R$ satisfies $x^2 = x$. Let $x, y \in R$ and expand the equality $\left((x + y)^2 + (x + y)\right)z = z\left((x + y)^2 + (x + y)\right),$ which holds because $(x + y)^2 + (x + y)$ satisfies $e^2 = e$. Now, canceling the identical terms — $x^2 + x$ and $y^2 + y$ are idempotent, hence central — we are left with $(xy + yx)z = z(xy + yx),$ so $xy + yx$ is central. But we know that $xy + yx$ commutes with $x$, so expanding $x(xy + yx) = (xy + yx)x$ we have $x^2y = yx^2$ for arbitrary $x, y$. Finally, we can make the statement that $xy = (x^2 + x)y - x^2y = y(x^2 + x) - yx^2 = yx$ for any $x, y \in R$, so that $R$ is commutative.
We follow Herstein’s hint, which is to fix and to consider the sets
That is, is those that fall into the first category and is those that fall into the second category. We must have and, of course, belongs to both so that neither is empty. We seek to prove that one or both of the sets is equal to .
Suppose there exists $b$ with $b \in B$, $b \notin A$, so that $ab = -ba$ while $ab \neq ba$. If $x \in A$ is arbitrary, consider $a(x + b) = ax + ab = xa - ba.$ We can evaluate this quantity in another way: either $a(x + b) = (x + b)a \quad\text{or}\quad a(x + b) = -(x + b)a.$ If the first held, we would have $xa - ba = xa + ba$, so that $ab = -ba = ba$, contradicting $b \notin A$. The second case must therefore hold, and it leads to $xa - ba = -xa - ba$, i.e. $ax = xa = -xa$, where we use the fact that $x \in A$. Again, we must conclude that $x \in B$. Therefore if there exists $b \in B$ with $b \notin A$, then $B = R$. If the circumstance is reversed, so there exists $b \in A$ with $b \notin B$, the same argument gives that $A = R$ in that case.
The only other case to consider is $A \subseteq B$ or vice versa. Say $A \subseteq B$, then we know that $B = A \cup B = R$. Therefore for any fixed $a$, one of $A$ or $B$ is the entire ring.
Now we know that, for fixed $a$, either $A$ or $B$ is the whole of $R$. We must extend this to a global statement about $R$ itself. Following the hint of this math.stackexchange.com post, we consider the two sets $S = \{a \in R : ax = xa \text{ for all } x\} \quad\text{and}\quad T = \{a \in R : ax = -xa \text{ for all } x\}.$
As we know, $S \cup T = R$. It is also easy to see that each of $S$ and $T$ is closed under addition and negation: e.g. $a, b \in S$ implies $(a + b)x = ax + bx = xa + xb = x(a + b),$ so that $a + b \in S$. Suppose that $S \neq R$ and $T \neq R$. If that is the case, then there exists $s \in S$ with $s \notin T$, and there exists $t \in T$ with $t \notin S$. Then to which set does $s + t$ belong? If $s + t \in S$, then $t = (s + t) - s \in S$ is a contradiction. If $s + t \in T$, then $s = (s + t) - t \in T$ is a contradiction. Therefore we must conclude that one (or both) of $S$ or $T$ is the entire ring, which is the desired result.
Note: This very verbose "elementary" argument can be greatly simplified by the result mentioned in the linked hint. Specifically, if $G$ is a group and subgroups $H, K \leq G$ satisfy $H \cup K = G$, then $H = G$ or $K = G$. The proof is essentially what was done in the preceding paragraph. Namely, this argument by contradiction: If $H \neq G$ and $K \neq G$ but $H \cup K = G$, then there exists $h \in H$ with $h \notin K$ and there exists $k \in K$ with $k \notin H$. Now if $hk \in H$, then $k = h^{-1}(hk) \in H$ is a contradiction. On the other hand, if $hk \in K$, then $h = (hk)k^{-1} \in K$ is also a contradiction. Hence one or both subgroups is the entire group $G$.
In the context of this exercise, we can use this result twice. Note that $A$ and $B$ are additive subgroups of $R$ whose union is $R$, so one or the other must be the entire ring. Then $S$ and $T$ are also additive subgroups of $R$ whose union is $R$.
This problem is a standard follow-your-nose element manipulation exercise. In light of the subsequent exercises, it's clearly going to be important that $1 \in R$, so we start by considering things like $((1+a)b)^2$. Let $a, b \in R$ be arbitrary. We have $((1+a)b)^2 = (b + ab)^2 = b^2 + bab + ab^2 + (ab)^2,$ but also, using the problem stipulation, $((1+a)b)^2 = (1+a)^2b^2 = b^2 + 2ab^2 + a^2b^2.$ Because $(ab)^2 = a^2b^2$, this simplifies to give $bab = ab^2$. Similarly, consideration of $(a(1+b))^2$ gives $aba = a^2b$.
Finally, if we expand $((1+a)(1+b))^2 = (1+a)^2(1+b)^2$ and cancel the obvious terms, we are left with $ba + bab + aba = ab + ab^2 + a^2b.$ Using the previous two results, this simplifies to $ba = ab$. Therefore $R$ is commutative.
By exercise 3.22, the ring cannot be unital. The condition can be rewritten as $0 = (ab)^2 - a^2b^2 = a(ba - ab)b,$ so we want most or all of the elements of $R$ to be zero divisors (in some cases $ba - ab$ may be zero so we can't make a general statement). My go-to examples of non-commutative rings are the quaternions and rings of matrices. Making even the simplest computations in the quaternions, we have things like $(ij)^2 = k^2 = -1$ while $i^2j^2 = (-1)(-1) = 1$. It is unlikely that a subring of the quaternions will satisfy the condition of this problem, so we set it aside.
I tried many things with matrices which all ultimately failed to pan out. For instance, rings generated by simple (single non-zero entry) matrices with entries from the even integers did not satisfy the condition of the problem, and those matrices which square to zero (which easily satisfy the condition of the problem) end up giving a commutative subring.
The space of matrices is a big one, so it is natural to restrict attention to one famous subring, the upper triangular matrices. It's even a good idea to restrict further still, and only consider matrices with zeroes on the diagonal. It turns out that this works for $3 \times 3$ matrices. Let $E_{ij}$ denote the matrix with a $1$ in row $i$, column $j$ and zeroes elsewhere. Then we consider the set $M = \{aE_{12} + bE_{13} + cE_{23} : a, b, c \in \mathbb{Z}\},$ the strictly upper triangular $3 \times 3$ matrices.
Note that $E_{12}E_{23} = E_{13}$ while every other product of two of the $E_{ij}$ is zero. Thus it is trivial that $M$ is closed under multiplication, because $(aE_{12} + bE_{13} + cE_{23})(a'E_{12} + b'E_{13} + c'E_{23}) = ac'E_{13}.$ Furthermore, $M$ is non-commutative, because $(a'E_{12} + b'E_{13} + c'E_{23})(aE_{12} + bE_{13} + cE_{23}) = a'cE_{13}$ will generally not be the same as the previous product. Of course, $M$ is closed under addition. Therefore $M$ is a non-commutative subring of the $3 \times 3$ matrices. It also satisfies the condition of the problem, because any product of four matrices in $M$ is proportional to $E_{13}^2 = 0$. More explicitly, $(xy)^2 = 0 = x^2y^2$ for all $x, y \in M$.
Note that the ring of coefficients for $M$ is really immaterial. Even $\mathbb{Z}_2$ would suffice, furnishing us with an eight-element ring satisfying the condition of the problem.
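The eight-element version of this ring is small enough to check exhaustively. The following sketch (my own) verifies closure, the condition $(xy)^2 = x^2y^2$, and non-commutativity over $\mathbb{Z}_2$.

```python
# Exhaustive check of the strictly upper triangular 3x3 matrices over Z_2:
# the eight matrices a E12 + b E13 + c E23 form a non-commutative ring in
# which (xy)^2 = x^2 y^2 always holds (both sides are zero).
from itertools import product

def mat(a, b, c):
    return ((0, a, b), (0, 0, c), (0, 0, 0))

def mul(x, y):
    return tuple(tuple(sum(x[i][k] * y[k][j] for k in range(3)) % 2
                       for j in range(3)) for i in range(3))

M = [mat(a, b, c) for a, b, c in product(range(2), repeat=3)]

closed = all(mul(x, y) in M for x in M for y in M)
condition = all(mul(mul(x, y), mul(x, y)) == mul(mul(x, x), mul(y, y))
                for x in M for y in M)
noncomm = any(mul(x, y) != mul(y, x) for x in M for y in M)
print(closed, condition, noncomm)   # True True True
```

Closure under addition is clear from the parametrization, so only multiplication needs checking.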
This exercise illustrates just how fragile the conditions in (a) are for ensuring that $R$ is commutative. If they are relaxed in any respect, $R$ no longer needs to be commutative.
(a) This is another follow-your-nose elementary manipulation. Based on the problem statement, we would like to end up with a result like $2(ab - ba) = 0$. First note that $((1+a)b)^2 = (b + ab)^2 = b^2 + bab + ab^2 + abab,$ but this is also equal to $(b(1+a))^2 = (b + ba)^2 = b^2 + b^2a + bab + baba.$ Comparing the two, and using the stipulation $abab = (ab)^2 = (ba)^2 = baba$, we have $ab^2 = b^2a$ for any $a, b$. In other words, a square commutes with anything. Now, in particular, it is true that $a(ab - ba) + (ab - ba)a = a^2b - ba^2 = 0$ and likewise $b(ab - ba) + (ab - ba)b = ab^2 - b^2a = 0$. We write $((1+a)(1+b))^2 = ((1+b)(1+a))^2,$ expand both sides with $s = 1 + a + b$, and cancel: $0 = (s + ab)^2 - (s + ba)^2 = s(ab - ba) + (ab - ba)s = 2(ab - ba),$ again using the property derived above. Because $2x = 0$ implies $x = 0$, we have that $ab = ba$ for arbitrary $a, b$ and thus $R$ is commutative.
(b) Here we seek to construct a non-commutative unital ring with $(ab)^2 = (ba)^2$ for all $a, b$. By part (a), we have the hint that this ring must contain some non-zero element $x$ with $2x = 0$. This naturally suggests things like $\mathbb{Z}_2$ and $\mathbb{Z}_4$. In order to get non-commutativity, it's then natural to look at matrices over those rings.
As in problem 3.23, I tried various rings of $2 \times 2$ matrices over $\mathbb{Z}_2$ and $\mathbb{Z}_4$, but they always failed. One can easily write down explicit expressions for $2 \times 2$ matrices with $(ab)^2 = (ba)^2$, and the resulting ring always ends up commutative. Convinced of the futility of $2 \times 2$ matrices, we look at $3 \times 3$ matrices over $\mathbb{Z}_2$.
The solution of 3.23 does not work here because the ring is required to be unital and the strictly upper-triangular matrices lack an identity element. However, if we include the diagonal, then we have it: recall the notation from the solution to 3.23, above, and consider the set $M' = \{\alpha I + m : \alpha \in \mathbb{Z}_2,\ m \in M\}$ ($I$ is the identity matrix). $M'$ is a $16$-element set of upper-triangular matrices over $\mathbb{Z}_2$, which we may check to be a ring. It is unital ($\alpha = 1$, $m = 0$) and non-commutative: $E_{12}E_{23} = E_{13} \neq 0 = E_{23}E_{12}.$
Crucially, it also satisfies the condition $(ab)^2 = (ba)^2$ for all $a, b \in M'$: letting $a = \alpha I + m$ and $b = \beta I + n$, we have $ab = \alpha\beta I + \alpha n + \beta m + mn \quad\text{while}\quad ba = \alpha\beta I + \alpha n + \beta m + nm.$ Upon squaring, the terms involving $mn$ (respectively $nm$) contribute only $2\alpha\beta\,mn$ (respectively $2\alpha\beta\,nm$), because $mn$ and $nm$ are proportional to $E_{13}$, which squares to zero and is annihilated by every strictly upper-triangular matrix. Because $2x = 0$ in $M'$, these terms vanish and the result is proven: $(ab)^2 = (\alpha\beta I + \alpha n + \beta m)^2 = (ba)^2$.
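This sixteen-element ring is also small enough to verify exhaustively; the sketch below (my own) checks that it is unital, non-commutative, and satisfies $(ab)^2 = (ba)^2$ for every pair.

```python
# Exhaustive check of the 16 matrices alpha*I + m with alpha in Z_2 and m
# strictly upper triangular 3x3 over Z_2.
from itertools import product

def mul(x, y):
    return tuple(tuple(sum(x[i][k] * y[k][j] for k in range(3)) % 2
                       for j in range(3)) for i in range(3))

def elem(alpha, a, b, c):
    return ((alpha, a, b), (0, alpha, c), (0, 0, alpha))

R = [elem(al, a, b, c) for al, a, b, c in product(range(2), repeat=4)]
I = elem(1, 0, 0, 0)

unital = all(mul(I, x) == x == mul(x, I) for x in R)
noncomm = any(mul(x, y) != mul(y, x) for x in R for y in R)
condition = all(mul(mul(x, y), mul(x, y)) == mul(mul(y, x), mul(y, x))
                for x in R for y in R)
print(unital, noncomm, condition)   # True True True
```

Closure under multiplication also holds (the product of two such matrices again has constant diagonal), so this really is a ring.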
I was stuck on this problem until getting guidance from Jack Schmidt in this math.stackexchange.com post. A modification to the ring of 3.23 should have been an obvious candidate, but hindsight is always 20-20!
(c) Consider the ring $M$ from 3.23, the strictly upper-triangular matrices over $\mathbb{Z}$. It is non-unital, non-commutative, and $2x = 0$ implies $x = 0$ for any $x \in M$. We also have (see the solution to 3.23) that $(ab)^2 = 0 = (ba)^2$ for all $a, b \in M$, so it satisfies the requirements of this problem.