Minimal Polynomials of Linear Transformations

by ayushkhaitan3437

I’m prepared to embarrass myself by writing about something that should have been clear to me a long time ago.

This is regarding something about the minimal polynomials of linear transformations that has always confused me. Let T\in L(V,V), where V is an n-dimensional vector space. Let us also assume that T has \{v_1,v_2,\dots,v_n\} as linearly independent eigenvectors, though the corresponding n eigenvalues need not be distinct. If the distinct eigenvalues are \{a_1,a_2,\dots,a_k\}, where k\leq n, it is then known that (x-a_1)(x-a_2)\dots (x-a_k) is the minimal polynomial of T.
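As a quick sanity check, here is a minimal numerical sketch of that fact using numpy (the specific matrix T is my own toy example, not from any text): a diagonalizable T with n=3 but only k=2 distinct eigenvalues, annihilated by the degree-k product.

```python
import numpy as np

# A diagonalizable T with n = 3 but only k = 2 distinct
# eigenvalues (the eigenvalue 2 is repeated).
T = np.diag([2.0, 2.0, 3.0])
I = np.eye(3)

# The degree-k product (T - 2I)(T - 3I) already annihilates T...
print(np.allclose((T - 2 * I) @ (T - 3 * I), np.zeros((3, 3))))  # True

# ...while no single linear factor does, so (x - 2)(x - 3)
# is indeed the minimal polynomial.
print(np.allclose(T - 2 * I, np.zeros((3, 3))))  # False
print(np.allclose(T - 3 * I, np.zeros((3, 3))))  # False
```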

We know that as polynomials, (x-a_1)(x-a_2)\dots (x-a_k) and (x-a_2)(x-a_1)\dots (x-a_k) are the same (note that I've exchanged the places of a_1 and a_2). However, when we substitute x=T, are (T-a_1I)(T-a_2I)\dots (T-a_kI) and (T-a_2I)(T-a_1I)\dots (T-a_kI) also the same? Remember that matrix multiplication is in general not commutative. In fact, if for matrices A and B we have AB=0, it is not necessary that BA=0 too.
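Both points are easy to check numerically. Below is a small numpy sketch (the matrices A and B are my own choice of counterexample): AB=0 while BA\neq 0, and yet factors of the form T-a_iI do commute for any single matrix T.

```python
import numpy as np

# AB = 0 does not imply BA = 0: a standard 2x2 counterexample.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[1.0, 0.0], [0.0, 0.0]])
print(A @ B)  # the zero matrix
print(B @ A)  # [[0, 1], [0, 0]] -- not zero

# Yet factors of the form T - a_i I always commute, because both
# are polynomials in the single matrix T. Check with a random T:
rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))
I = np.eye(3)
print(np.allclose((T - 1.0 * I) @ (T - 2.0 * I),
                  (T - 2.0 * I) @ (T - 1.0 * I)))  # True
```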

An earlier exercise in the book “Linear Algebra” by Curtis says that for f,g\in F[x], f(T)g(T)=g(T)f(T). Why is this? Because when we expand either product by distributivity, every term that appears is a scalar multiple of a power of T, and powers of T commute with one another; we ultimately get the same polynomial in T either way. My mental block came from imagining T-a_iI as some arbitrary matrix I knew nothing about. I forgot that T-a_iI is a decomposition of a single matrix into two, and that matrix multiplication, like the multiplication of complex numbers, distributes over addition. Hence everything works out as planned.
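To see this in the smallest case, take two factors with scalars a and b (stand-ins for any two of the eigenvalues above) and expand by distributivity:

(T-aI)(T-bI) = T^2 - bT - aT + abI = T^2 - (a+b)T + abI.

The right-hand side is symmetric in a and b, so it also equals (T-bI)(T-aI). The same expansion works for any number of factors, since every term produced is just a scalar multiple of a power of T.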
