Nakayama’s lemma

by ayushkhaitan3437

Nakayama's lemma is present throughout commutative algebra, and truth be told, learning it is not easy. The proof contains a small trick that is deceptively simple, but throws off many people. It is also easy to dismiss this lemma as unimportant; as one would surely find out later, that would be an error in judgement. I am going to discuss this theorem and its proof in detail.

The statement of the theorem, as stated in Matsumura, is:

Let I be an ideal in R, and M be a finitely generated module over R. If IM=M, then there exists r\in R such that r\equiv 1\mod I, and rM=0.

What does this statement even mean? Why is it so important? Why are the conditions given this way? Are these conditions necessary conditions? These are some questions that we can ask. We will try and discuss as many of them as we can.

M is probably required to be finitely generated so that we can form a matrix of coefficients, which by definition has to be finite; where this matrix comes in will become clear when we discuss the proof. What does IM=M imply? This is a highly unusual situation. For instance, if M=\Bbb{Z} and I=(2), then (2)\Bbb{Z}\neq\Bbb{Z}. I can't think of examples in which I\neq (1) and IM=M, but that does not mean that there do not exist any. What does it mean for r\equiv 1\mod I? It just means that r=1+i for some i\in I. That was fairly simple!
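For what it is worth, such pairs do exist once the ring has zero divisors. Here is a quick computational sanity check of one (a toy example of my own, not from Matsumura: R=\Bbb{Z}/6\Bbb{Z}, I=(3), and M the ideal (3) viewed as an R-module); it also shows what the r promised by the lemma looks like in this case.

```python
# Toy example: R = Z/6Z, I = (3) = {0, 3}, M = the ideal (3) = {0, 3},
# viewed as an R-module.  Here IM = M even though I != (1).
R = range(6)
I = {(3 * x) % 6 for x in R}                      # the ideal (3)
M = {(3 * x) % 6 for x in R}                      # the module M = (3)

IM = {(i * m) % 6 for i in I for m in M}
print("I =", sorted(I), " IM == M?", IM == M)     # I = [0, 3]  IM == M? True

# r = 1 + 3 = 4 satisfies r = 1 mod I, and rM = 0 in Z/6Z.
r = 4
print("rM =", sorted({(r * m) % 6 for m in M}))   # [0]
```

Now let's get on with the proof.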

Let M be generated by the elements \{a_1,a_2,\dots,a_n\}. If IM=M, then for each generator a_i we have a_i=b_{i1}a_1+b_{i2}a_2+\dots+b_{in}a_n, where all the b_{ij}\in I. We then have b_{i1}a_1+b_{i2}a_2+\dots+(b_{ii}-1)a_i+\dots+b_{in}a_n=0. Let us now create a matrix A out of these n equations in the natural way, with the rows indexed by the i's: the (i,j) entry of A is b_{ij} for j\neq i, and b_{ii}-1 on the diagonal. The equations say precisely that A times the column vector (a_1,\dots,a_n)^T is 0. Multiplying on the left by the adjugate of A (this is the small trick mentioned at the start) gives (\det A)\,a_j=0 for every j, so \det A annihilates every generator of M. On expanding this determinant, we get an expression of the form (-1)^n+i, where i\in I: the only term that avoids all of the b_{ij} is the product of the -1's along the diagonal, which contributes (-1)^n, and every other term contains at least one factor b_{ij}\in I. If n is odd, then just multiply the expression by -1. In either case, you get 1+i', where i'\in I (i'=i or i'=-i).
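Here is a small symbolic sketch of that expansion (using sympy, with n=3; the symbols b11, b12, \dots stand for the coefficients b_{ij}\in I above). Every term of \det A that contains some b_{ij} lies in I, so setting all of them to 0 isolates the one remaining term, which is indeed (-1)^n.

```python
# Symbolic sketch: expand det(A) for n = 3, where A is the matrix of the
# relations above (b_ij off the diagonal, b_ii - 1 on the diagonal).
from sympy import Matrix, symbols, expand

n = 3
A = Matrix(n, n, lambda i, j: symbols(f"b{i+1}{j+1}") - (1 if i == j else 0))

det = expand(A.det())
print(det)                                          # (-1)^n plus terms involving some b_ij
print(det.subs({s: 0 for s in det.free_symbols}))   # -1, i.e. (-1)^3
```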

Now, since 1+i'=\pm\det A annihilates every generator of M, we have (1+i')M=0. Hence r=1+i' satisfies r\equiv 1\mod I and rM=0, which is exactly what the lemma claims.
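To tie this back to the toy \Bbb{Z}/6\Bbb{Z} example from before (again my own example, not from the original source): M is generated by a_1=3 alone, and 3=3\cdot 3 in \Bbb{Z}/6\Bbb{Z}, so we can take b_{11}=3\in I. The proof then hands us r explicitly.

```python
# Running the proof on R = Z/6Z, I = (3), M generated by a_1 = 3, with n = 1.
b11 = 3                            # since 3 = 3 * 3 in Z/6Z, b_11 = 3 lies in I
det = b11 - 1                      # det(A) for the 1x1 matrix [b_11 - 1]
r = (-det) % 6                     # n = 1 is odd, so multiply by -1: r = 4 = 1 + 3
print("r =", r, " r - 1 in I?", (r - 1) % 6 in {0, 3})   # r = 4, True
print("r * a_1 =", (r * 3) % 6)                          # 0, so rM = 0
```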

The reason why the proof usually looks slightly confusing is that it is done in greater generality. One first assumes that there exists a morphism \phi:M\to M such that \phi(M)\subset IM. The Cayley-Hamilton theorem is then used to produce a determinant identity in \phi, namely an equation \phi^n+c_1\phi^{n-1}+\dots+c_n=0 with all the c_j\in I, and only at the end does one specialize to \phi=1 (the identity map on M). Here I have directly taken \phi=1 from the start, which makes matters much simpler.
