This page covers section 4.1 (“Elementary Basic Concepts” [of vector spaces and modules]).
Definition: Let $V$ be a non-empty set, let $F$ be a field, and let $+ : V \times V \to V$ and $\cdot : F \times V \to V$ be binary operations such that

1. $(V, +)$ is an abelian group;
2. $\alpha \cdot (v + w) = \alpha \cdot v + \alpha \cdot w$;
3. $(\alpha + \beta) \cdot v = \alpha \cdot v + \beta \cdot v$;
4. $\alpha \cdot (\beta \cdot v) = (\alpha \beta) \cdot v$;
5. $1 \cdot v = v$;

for all $v, w \in V$ and $\alpha, \beta \in F$. Then $V$ is said to be a vector space over $F$. The dot for scalar multiplication will generally be omitted in what follows.
Example: If $F \subseteq K$ are both fields, then $K$ may be viewed as a vector space over $F$.
Example: If $F$ is a field, then $F^{(n)} = \{(\alpha_1, \dots, \alpha_n) : \alpha_i \in F\}$, with the obvious componentwise operations, is a vector space over $F$.
Example: If $F$ is a field, then $F[x]$, the set of polynomials in $x$ over $F$, is a vector space over $F$.
Example: If $F$ is a field, then $P_n$ is a vector space over $F$, where $P_n$ is the set of polynomials over $F$ with degree less than $n$.
Definition: If $V$ is a vector space over $F$ and $W \subseteq V$ forms a vector space using the same operations as $V$, then $W$ is a subspace of $V$. This is equivalent to the condition that $\alpha w_1 + \beta w_2 \in W$ for all $w_1, w_2 \in W$ and $\alpha, \beta \in F$.
Definition: Let $U, V$ be vector spaces over $F$. A homomorphism of vector spaces is a map $T : U \to V$ such that $T(u_1 + u_2) = T(u_1) + T(u_2)$ and $T(\alpha u_1) = \alpha T(u_1)$ for all $u_1, u_2 \in U$ and $\alpha \in F$.
The set of all homomorphisms between vector spaces $U$ and $V$ will be denoted $\mathrm{Hom}(U, V)$.
Lemma 4.1.1: Let $V$ be a vector space over $F$. Then, for all $\alpha \in F$ and $v \in V$: (1) $\alpha 0 = 0$; (2) $0v = 0$; (3) $(-\alpha)v = -(\alpha v)$; (4) if $v \neq 0$, then $\alpha v = 0$ implies $\alpha = 0$.
Lemma 4.1.2: Let $V$ be a vector space over $F$ and let $W \subseteq V$ be a subspace. Then $V/W = \{v + W : v \in V\}$ is a vector space over $F$, called the quotient space of $V$ by $W$.
Theorem 4.1.1: Let $U, V$ be vector spaces over $F$ and let $T : U \to V$ be a surjective homomorphism with kernel $W$. Then $V \cong U/W$. Conversely, if $U$ is a vector space and $W \subseteq U$ a subspace, then there exists a surjective homomorphism of $U$ onto $U/W$ (namely $u \mapsto u + W$, whose kernel is exactly $W$).
Definition: Let $V$ be a vector space over $F$ and let $U_1, \dots, U_n$ be subspaces. If every $v \in V$ admits a unique representation $v = u_1 + \cdots + u_n$ with $u_i \in U_i$ for each $i$, then $V$ is the internal direct sum of the $U_i$.
Definition: Let $V_1, \dots, V_n$ be vector spaces over $F$. The external direct sum of the $V_i$ is the set $\{(v_1, \dots, v_n) : v_i \in V_i\}$, with addition and scalar multiplication defined componentwise.
Theorem 4.1.2: The internal and external direct sums of $V_1, \dots, V_n$ are isomorphic. Hence we can speak simply of the direct sum, which has both of the above descriptions.
The problems below are paraphrased from/inspired by those given in Topics in Algebra by Herstein. The solutions are my own unless otherwise noted. I will generally try, in my solutions, to stick to the development in the text. This means that problems will not be solved using ideas and theorems presented further on in the book.
We have
$$\alpha(v - w) = \alpha v + \alpha(-w) = \alpha v + \alpha((-1)w) = \alpha v + (-\alpha)w = \alpha v - \alpha w$$
by Lemma 4.1.1.
The map $P_n \to F^{(n)}$ given by $\alpha_0 + \alpha_1 x + \cdots + \alpha_{n-1} x^{n-1} \mapsto (\alpha_0, \alpha_1, \dots, \alpha_{n-1})$ is an isomorphism.
For a homomorphism $T : U \to V$ between vector spaces $U, V$, the kernel is $\ker T = \{u \in U : T(u) = 0\}$. Let $u_1, u_2 \in \ker T$ and $\alpha, \beta \in F$, with $F$ the base field. We have $T(\alpha u_1 + \beta u_2) = \alpha T(u_1) + \beta T(u_2)$ because $T$ is a homomorphism, and so $T(\alpha u_1 + \beta u_2) = \alpha \cdot 0 + \beta \cdot 0 = 0$ and $\alpha u_1 + \beta u_2 \in \ker T$. Thus $\ker T$ is a subspace of $U$.
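As a concrete numeric sanity check (my own, not from the text), take $T$ to be multiplication by a rank-one matrix over $\mathbb{R}$, so the kernel is easy to describe, and verify that a linear combination of kernel vectors stays in the kernel:

```python
import numpy as np

# T : R^3 -> R^2 given by a rank-one matrix, so ker T is two-dimensional.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])
T = lambda u: A @ u

u1 = np.array([-2.0, 1.0, 0.0])   # T(u1) = 0
u2 = np.array([-3.0, 0.0, 1.0])   # T(u2) = 0

alpha, beta = 3.0, -7.0
print(T(alpha * u1 + beta * u2))  # [0. 0.]: the combination is in ker T
```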
(a) Let $f, g \in C[0, 1]$, the set of continuous real-valued functions on $[0, 1]$, and let $\alpha, \beta \in \mathbb{R}$. We have $\alpha f + \beta g \in C[0, 1]$ because sums and scalar products of continuous functions are again continuous. The function $x \mapsto 0$ is the additive identity in this vector space. Other details can be taken for granted.
(b) The set of $n$-times differentiable functions is a subset of the continuous functions, so it is only necessary to check that the set is closed under linear combinations. Indeed, sums and scalar multiples of $n$-times differentiable functions are again $n$-times differentiable, so the set in question is a subspace of $C[0, 1]$.
(a) Let $V$ be the set of all infinite sequences $(a_1, a_2, \dots)$ of real numbers. Because $\mathbb{R}$ is closed under addition and multiplication, and the operations on $V$ are defined componentwise, all vector space axioms hold for $V$.
(b) If $a = (a_1, a_2, \dots)$ and $b = (b_1, b_2, \dots)$ are two elements of $W = \{a \in V : \lim_{n \to \infty} a_n = 0\}$ and $\alpha, \beta \in \mathbb{R}$, then $\alpha a + \beta b \in W$ because
$$\lim_{n \to \infty} (\alpha a_n + \beta b_n) = \alpha \lim_{n \to \infty} a_n + \beta \lim_{n \to \infty} b_n = 0.$$
(c) Let and let . We have that
The first two terms are finite by assumption. The third term can be bounded: for real numbers , rearrange to give
so that
Hence and is a subspace of .
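A quick numeric illustration of the bound (the truncated sequences $a_i = 1/i$ and $b_i = (-1)^i/i$ are my own choices, not from the text):

```python
import numpy as np

# Hypothetical square-summable sequences, truncated to N terms:
# a_i = 1/i and b_i = (-1)^i / i.
N = 10_000
i = np.arange(1, N + 1)
a, b = 1.0 / i, (-1.0) ** i / i

# The cross term is controlled by 2|a_i b_i| <= a_i^2 + b_i^2.
assert np.all(2 * np.abs(a * b) <= a**2 + b**2)

# So the truncated sum of (a_i + b_i)^2 is bounded by
# 2 * (sum a_i^2 + sum b_i^2), consistent with closure under addition.
lhs = np.sum((a + b) ** 2)
rhs = 2 * (np.sum(a**2) + np.sum(b**2))
print(lhs, "<=", rhs)   # holds for every truncation N
```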
To show that $U$ is contained in $W$, we must show that $\sum_i a_i^2 < \infty$ implies $\lim_{n \to \infty} a_n = 0$. Define the partial sums $s_n = \sum_{i=1}^n a_i^2$; we have that $s_n \to s$ for some $s \in \mathbb{R}$. Therefore, the sequence $(s_n)$ is Cauchy:

For any $\epsilon > 0$, there exists $N$ such that $m, n \geq N$ implies that $|s_m - s_n| < \epsilon$. Let $\epsilon > 0$ be given: by the previous statement, there exists $N$ so that $n > N$ implies $|s_n - s_{n-1}| = a_n^2 < \epsilon^2$. Thus for $n > N$, we also have $|a_n| < \epsilon$. This proves that $a_n \to 0$, i.e. that $U \subseteq W$.
$\mathrm{Hom}(U, V)$ is the set of homomorphisms $T : U \to V$. Given $S, T \in \mathrm{Hom}(U, V)$ and $\alpha \in F$, we can define a third homomorphism pointwise, i.e. by
$$(S + T)(u) = S(u) + T(u), \qquad (\alpha S)(u) = \alpha S(u).$$
It is straightforward to see that $S + T$ (likewise $\alpha S$) is again a homomorphism. $\mathrm{Hom}(U, V)$ is a vector space under this pointwise addition and scalar multiplication.
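A small sketch of the pointwise operations, with homomorphisms represented as plain Python functions (the sample maps are mine):

```python
import numpy as np

def add(S, T):
    return lambda u: S(u) + T(u)        # (S + T)(u) = S(u) + T(u)

def scale(alpha, T):
    return lambda u: alpha * T(u)       # (alpha T)(u) = alpha T(u)

S = lambda u: np.array([u[0] + u[1], u[1]])   # two sample homomorphisms R^2 -> R^2
T = lambda u: np.array([u[0], -u[0]])

u = np.array([2.0, 5.0])
print(add(S, T)(u), scale(3.0, S)(u))   # [9. 3.] [21. 15.]
```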
Let $e_i \in F^{(n)}$ be the vector with a $1$ in the $i$-th index and zeroes elsewhere. Similarly, let $f_j \in F^{(m)}$ be the analogous thing. Given $T \in \mathrm{Hom}(F^{(n)}, F^{(m)})$, we have $T(e_i) = \sum_{j=1}^m \alpha_{ij} f_j$, defining a matrix of coefficients $(\alpha_{ij})$ for each $T$. Now define $\Phi : \mathrm{Hom}(F^{(n)}, F^{(m)}) \to F^{(mn)}$ by
$$\Phi(T) = (\alpha_{11}, \dots, \alpha_{1m}, \alpha_{21}, \dots, \alpha_{nm}).$$
That $\Phi$ respects linear combinations is a rote computation. The kernel of $\Phi$ is trivial (such a $T$ kills every $e_i$ and hence, by linearity, all of $F^{(n)}$), and every tuple of coefficients arises from some $T$, so $\Phi$ is an isomorphism.
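In code, extracting the coefficients amounts to evaluating $T$ on the standard basis; a sketch with a sample map of my own choosing:

```python
import numpy as np

# Recover the coefficients alpha_ij of a homomorphism T : R^n -> R^m
# from its values on the standard basis e_1, ..., e_n, then flatten
# them into a vector in R^(m*n).
n, m = 3, 2
T = lambda u: np.array([u[0] + 2*u[1], 3*u[2]])   # a sample homomorphism

rows = np.array([T(e) for e in np.eye(n)])   # row i holds T(e_i)
phi_T = rows.flatten()                       # Phi(T) in R^(m*n)

print(rows)     # [[1. 0.], [2. 0.], [0. 3.]]
print(phi_T)    # [1. 0. 2. 0. 0. 3.]
```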
Define $T : F^{(n)} \to F^{(m)}$ (for $m \leq n$) by $T(a_1, \dots, a_n) = (a_1, \dots, a_m)$. This is a surjective homomorphism. The kernel of $T$ is the set $\{(0, \dots, 0, a_{m+1}, \dots, a_n) : a_i \in F\}$; a similar projection mapping establishes the isomorphism $\ker T \cong F^{(n-m)}$.
Given $0 \neq v \in F^{(n)}$: let $k$ be the index of the first non-zero entry in $v$, and let $T \in \mathrm{Hom}(F^{(n)}, F)$ be the projection of $F^{(n)}$ onto its $k$-th entry. Then $T(v) = v_k$ and $v_k \neq 0$.
This is (a special case of) the result that a vector space is isomorphic to its double-dual.
With the result of 4.1.7 (taking $m = 1$ there), we have
$$\mathrm{Hom}(\mathrm{Hom}(F^{(n)}, F), F) \cong \mathrm{Hom}(F^{(n)}, F) \cong F^{(n)}.$$
Given $v_1 = w_1 + w_2$ and $v_2 = w_1' + w_2'$ in $W_1 + W_2 = \{w_1 + w_2 : w_1 \in W_1, w_2 \in W_2\}$, and $\alpha, \beta \in F$, we have that
$$\alpha v_1 + \beta v_2 = (\alpha w_1 + \beta w_1') + (\alpha w_2 + \beta w_2') \in W_1 + W_2,$$
where the last step is justified because $W_1$ and $W_2$ are each subspaces. Therefore $W_1 + W_2$ is a subspace of $V$.
Let $W_1, W_2$ be subspaces of $V$ over the field $F$. If $v_1, v_2 \in W_1 \cap W_2$ and $\alpha, \beta \in F$, then $\alpha v_1 + \beta v_2 \in W_1$ because $W_1$ is a subspace and $\alpha v_1 + \beta v_2 \in W_2$ because $W_2$ is a subspace. Hence $\alpha v_1 + \beta v_2 \in W_1 \cap W_2$, and $W_1 \cap W_2$ is a subspace.
This is the second isomorphism theorem.
The elements of $(W_1 + W_2)/W_2$ look like $(w_1 + w_2) + W_2$ where $w_1 \in W_1$ and $w_2 \in W_2$. The elements of $W_1/(W_1 \cap W_2)$ look like $w_1 + (W_1 \cap W_2)$ with $w_1 \in W_1$. In both cases, elements of $W_1 \cap W_2$ get turned into the zero coset.
Define the map $T : W_1/(W_1 \cap W_2) \to (W_1 + W_2)/W_2$ by $T(w_1 + (W_1 \cap W_2)) = w_1 + W_2$. To see that this is well-defined, consider that $w_1, w_1'$ belong to the same coset: $w_1 + (W_1 \cap W_2) = w_1' + (W_1 \cap W_2)$, so that their difference is $w_1 - w_1' \in W_1 \cap W_2$, which then implies that $w_1 - w_1' \in W_2$. We have $w_1 + W_2 = w_1' + W_2$; thus any representative of a coset in the domain gets mapped to the same coset in the codomain.

$T$ is a homomorphism: for $\alpha, \beta \in F$ and $w_1, w_1' \in W_1$, we have $T(\alpha(w_1 + (W_1 \cap W_2)) + \beta(w_1' + (W_1 \cap W_2))) = T((\alpha w_1 + \beta w_1') + (W_1 \cap W_2)) = (\alpha w_1 + \beta w_1') + W_2$, while $\alpha T(w_1 + (W_1 \cap W_2)) + \beta T(w_1' + (W_1 \cap W_2)) = (\alpha w_1 + \beta w_1') + W_2$ as well.

The kernel of $T$ contains those elements which map to $0 + W_2$ in the codomain, i.e. those cosets of the domain where the representative $w_1$ belongs to $W_2$. We have that $w_1 \in W_1$, so $w_1 \in W_1 \cap W_2$ and $w_1 + (W_1 \cap W_2)$ is the zero coset, i.e. the kernel is trivial so that $T$ is injective.

$T$ is surjective because, given $(w_1 + w_2) + W_2 \in (W_1 + W_2)/W_2$, we have $T(w_1 + (W_1 \cap W_2)) = w_1 + W_2 = (w_1 + w_2) + W_2$.

Therefore $T$ is an isomorphism.
This is the fourth (“lattice”) isomorphism theorem.
There are a couple of natural-looking ways to map the objects in question. However, the first isomorphism theorem (Theorem 4.1.1) states that $V/W$ is the image of $V$ under a homomorphism with kernel $W$, so the subspaces of $V/W$ should probably look like $U/W$ where $U$ is a subspace of $V$. Naturally, $U/W$ only makes sense if $U$ contains $W$. Therefore, the map we define is $\Phi(U) = U/W$, and it makes sense because of the way we have chosen the domain (i.e. only considering subspaces of $V$ that contain $W$).
The map is injective: let $U_1, U_2$ be mapped the same by $\Phi$, i.e. $U_1/W = U_2/W$. We would like to show that this implies $U_1 = U_2$. If $u_1 \in U_1$, then $u_1 + W \in U_1/W = U_2/W$, so there exists $u_2 \in U_2$ with $u_1 + W = u_2 + W$. This implies that $u_1 - u_2 \in W \subseteq U_2$, so that
$$u_1 = u_2 + (u_1 - u_2) \in U_2.$$
This proves that $U_1 \subseteq U_2$. The argument, made in reverse, gives also that $U_2 \subseteq U_1$, so we have proven that $\Phi$ is injective.
$\Phi$ is also surjective: if $X$ is a subspace of $V/W$, then we can realize it as the image of a subspace of $V$. Consider $U = \{v \in V : v + W \in X\}$. It remains to show that $W \subseteq U$, that $U$ is a subspace of $V$, and that $\Phi(U) = X$. Because $X$ is a subspace, it contains the zero coset, and $w + W = 0 + W$ for every $w \in W$, so that $W \subseteq U$. If $u_1, u_2 \in U$ and $\alpha$ is a scalar, then $(u_1 + u_2) + W = (u_1 + W) + (u_2 + W) \in X$ and $(\alpha u_1) + W = \alpha(u_1 + W) \in X$, so $U$ is a subspace of $V$. Finally, $\Phi(U) = U/W = \{u + W : u + W \in X\}$, which is exactly $X$.
Therefore, $\Phi$ is a bijection between the subspaces of $V$ containing $W$ and the subspaces of $V/W$.
To say that $V$ is the internal direct sum of the $U_i$ is to say that every $v \in V$ has exactly one expression $v = u_1 + \cdots + u_n$ with each $u_i \in U_i$.

Because $V = U_1 + \cdots + U_n$, we have that $v$ has at least one such expression. It remains to show that this expression is unique. Therefore, suppose that $u_1 + \cdots + u_n = u_1' + \cdots + u_n'$ with $u_i, u_i' \in U_i$ for each $i$. Then we have
$$(u_1 - u_1') + \cdots + (u_n - u_n') = 0$$
and, rearranging,
$$u_i - u_i' = -\sum_{j \neq i} (u_j - u_j').$$
The left hand side belongs to $U_i$ while the right hand side belongs to $U_1 + \cdots + U_{i-1} + U_{i+1} + \cdots + U_n$. By assumption, those two spaces intersect trivially, so that $u_i = u_i'$ for each $i$. Hence the two representations are identical, and we are done.
$V$ is the external direct sum of the $V_i$, so it looks like
$$V = \{(v_1, \dots, v_n) : v_i \in V_i\}.$$
The subspaces
$$\bar{V}_i = \{(0, \dots, 0, v_i, 0, \dots, 0) : v_i \in V_i\},$$
which allow the $i$-th entry to range over $V_i$ while fixing the non-$i$ entries as zero, are the desired subspaces of $V$ isomorphic to the $V_i$. The conditions of exercise 4.1.15 are easily satisfied, so that $V$ is the internal direct sum of the $\bar{V}_i$.
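A minimal sketch of these embedded copies, taking as an arbitrary example of mine $V_1 = \mathbb{R}$ and $V_2 = \mathbb{R}^{(2)}$:

```python
# The external direct sum R (+) R^2 as triples, with embedded copies
# V1_bar = {(a, 0, 0)} and V2_bar = {(0, b, c)}.
def embed1(a):
    return (a, 0.0, 0.0)

def embed2(b, c):
    return (0.0, b, c)

v = (5.0, 1.0, -2.0)
# The unique decomposition of v as a sum of one element from each copy:
u1, u2 = embed1(v[0]), embed2(v[1], v[2])
print(tuple(x + y for x, y in zip(u1, u2)) == v)   # True
```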
(a) That $T(a, b) = (\alpha a + \beta b, \gamma a + \delta b)$ is a homomorphism of $F^{(2)}$ is a straightforward exercise.

(b) In the language of matrices, this is the familiar question of when a $2 \times 2$ matrix is invertible; the answer is “when the determinant is non-zero”. How does that come about from direct computation?
Let $(x, y) \in F^{(2)}$ and consider the simultaneous equations
$$\alpha a + \beta b = x, \qquad \gamma a + \delta b = y.$$
Multiplying the first equation by $\delta$ and the second by $\beta$, and then subtracting the second from the first, we find
$$(\alpha\delta - \beta\gamma)a = \delta x - \beta y.$$
Performing a similar computation, we also find
$$(\alpha\delta - \beta\gamma)b = \alpha y - \gamma x.$$
In order for $T$ to be injective, it must have a trivial kernel. If $T(a, b) = (0, 0)$, then
$$(\alpha\delta - \beta\gamma)a = 0, \qquad (\alpha\delta - \beta\gamma)b = 0.$$
These equations have non-trivial solutions $(a, b) \neq (0, 0)$ if and only if $\alpha\delta - \beta\gamma = 0$. Thus a necessary and sufficient condition for $T$ to be injective is that $\alpha\delta - \beta\gamma \neq 0$. As a result, this is also a necessary condition for $T$ to be an isomorphism.
The same condition is also sufficient for $T$ to be surjective, because the equations $(\alpha\delta - \beta\gamma)a = \delta x - \beta y$ and $(\alpha\delta - \beta\gamma)b = \alpha y - \gamma x$ are solvable for any $(x, y)$ by dividing by $\alpha\delta - \beta\gamma$.
Therefore the necessary and sufficient condition for $T$ to be an isomorphism is that $\alpha\delta - \beta\gamma$ be non-zero.
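The computation above doubles as an explicit inversion formula whenever $\alpha\delta - \beta\gamma \neq 0$; a numeric sketch (the example values are mine):

```python
# When alpha*delta - beta*gamma != 0, the displayed formulas recover
# (a, b) from (x, y) = T(a, b).
alpha, beta, gamma, delta = 2.0, 1.0, 5.0, 3.0
det = alpha * delta - beta * gamma            # 6 - 5 = 1, non-zero

def T(a, b):
    return (alpha * a + beta * b, gamma * a + delta * b)

def T_inverse(x, y):
    return ((delta * x - beta * y) / det, (alpha * y - gamma * x) / det)

x, y = T(4.0, -7.0)
print(T_inverse(x, y))    # (4.0, -7.0) recovered
```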
I haven’t done this exercise, but I would be surprised if it is different from 4.1.17 in a meaningful way.
Put another way, the exercise is to show that a homomorphism $T : U \to V$ between vector spaces $U$ and $V$ induces a natural homomorphism $T^*$ between their dual spaces $\hat{V} = \mathrm{Hom}(V, F)$ and $\hat{U} = \mathrm{Hom}(U, F)$.
A diagram helps:
$$U \xrightarrow{\;T\;} V \xrightarrow{\;f\;} F.$$
Here, the map $T : U \to V$ is provided by the problem, the map $f : V \to F$ is some representative of $\hat{V}$, and the desired map $U \to F$ can be made in a natural way by composition. That is, we define a map $T^* : \hat{V} \to \hat{U}$ by
$$T^*(f) = f \circ T.$$
It is easy to check that (1) the resulting $T^*(f)$ is indeed an element of $\hat{U}$, and (2) that $T^*$ itself is a homomorphism.
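Pre-composition is a one-liner in code; a sketch where the sample $T$ and $f$ are my own:

```python
# The induced map on duals is pre-composition. With U = R^2, V = R^3:
T = lambda u: (u[0], u[1], u[0] + u[1])      # a homomorphism U -> V
f = lambda v: 2 * v[0] - v[2]                # a functional in V-hat

def T_star(f):
    return lambda u: f(T(u))                 # T*(f) = f o T, in U-hat

g = T_star(f)
print(g((3.0, 4.0)))    # 2*3 - (3 + 4) = -1.0
```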
(a) Looking slightly ahead, the intuition here is that the image of $F$ under a homomorphism will be too low-dimensional. Therefore, consider a supposed isomorphism $T : F \to F^{(2)}$. Because it is surjective, there are $u_1, u_2 \in F$ with $T(u_1) = (1, 0)$ and $T(u_2) = (0, 1)$. Now, there exists $\alpha \in F$ such that $u_2 = \alpha u_1$, so we must have
$$(0, 1) = T(u_2) = T(\alpha u_1) = \alpha T(u_1) = (\alpha, 0).$$
This is a contradiction, so we conclude that no such $T$ exists.
(b) Suppose $T : F^{(2)} \to F^{(3)}$ is an isomorphism, and write $u = T(1, 0)$ and $v = T(0, 1)$. Then we have $T(x, y) = xu + yv$ for any $(x, y) \in F^{(2)}$. Because $T$ is surjective, there must exist $(x_1, y_1), (x_2, y_2), (x_3, y_3)$ such that
$$x_1 u + y_1 v = (1, 0, 0), \qquad x_2 u + y_2 v = (0, 1, 0), \qquad x_3 u + y_3 v = (0, 0, 1).$$
Taking the first and second equations, and eliminating the $v$ terms, we find that
$$(x_1 y_2 - x_2 y_1) u = (y_2, -y_1, 0).$$
However, taking the first and third equations, and eliminating the $v$ terms, we also find that
$$(x_1 y_3 - x_3 y_1) u = (y_3, 0, -y_1).$$
These two results are inconsistent unless $y_1 = 0$: otherwise the first forces the third entry of $u$ to be zero while the second forces it to be non-zero. In that case, we can explicitly solve the first equation for $u = (1/x_1, 0, 0)$; the second equation then forces the third entry of $v$ to vanish as well, and we derive a contradiction: $u_3 = v_3 = 0$ while the third equation requires $x_3 u_3 + y_3 v_3 = 1$.
Thus we have shown that the map $T$ is not truly surjective, and therefore not an isomorphism.
The laborious arguments above make one appreciate (1) the elegance of doing linear algebra without explicit coordinates/choice of basis, and (2) the simplicity and utility of the concepts of linear independence, basis and dimension, which we eschew here because they are not introduced until the next section of the book.
Let $W_1, \dots, W_n$ be proper subspaces of $V$ such that $V = W_1 \cup \cdots \cup W_n$. We can assume that each $W_i$ brings something of value to this union, i.e. that
$$W_i \not\subseteq \bigcup_{j \neq i} W_j.$$
In other words, for each $i$, there exists some $w_i$ which only belongs to $W_i$ and none of the other subspaces. If this is not the case, then we can omit this $W_i$: all of its elements are included elsewhere. In this sense, we can assume our set of subspaces to have minimal size.
Because the subspaces are proper, we know that $n \geq 2$. Consider elements $u$ with $u \in W_1$ only and $w$ with $w \in W_2$ only. Let $\alpha, \beta \in F$ be distinct. The elements $u + \alpha w$ and $u + \beta w$ belong to $V$, so each belongs to some $W_i$. Suppose both belong to the same $W_i$; then so must their difference: $(u + \alpha w) - (u + \beta w) = (\alpha - \beta)w \in W_i$. By assumption, $w$ only belongs to $W_2$, so that $W_i = W_2$. Taking a step back, we see that this would force $u = (u + \alpha w) - \alpha w$ to also live in $W_2$, a contradiction. Thus $u + \alpha w$ and $u + \beta w$ are forced to belong to different subspaces.
Now, we enumerate some infinite subset $\{\alpha_1, \alpha_2, \alpha_3, \dots\}$ of $F$ and construct the elements $u + \alpha_k w$. Considering the $u + \alpha_k w$ pairwise, we see that every one must live in a different subspace from every other one: no finite number of subspaces will suffice. We conclude that no vector space over an infinite field can be realized as the union of finitely many of its proper subspaces.