We have seen that it is not always possible to find a basis of eigenvectors of a linear map from a finite-dimensional vector space to itself. We recall that a square matrix #A# has diagonal form, or is a diagonal matrix, if all #(i,j)#-elements with #i\neq j# are zero. We now discuss how to determine whether a linear map can be represented by a diagonal matrix.
A linear map #L: V\to V#, where #V# is a finite-dimensional vector space, is called diagonalizable if #V# has a basis #\alpha# such that #L_\alpha# is a diagonal matrix.
The matrix #A# is called diagonalizable if #L_A# is diagonalizable. If more precision is needed and #V# is real (respectively, complex), we say that #A# is diagonalizable over the real (respectively, complex) numbers.
The #(n\times n)#-matrix #A# is diagonalizable if and only if it is conjugate to a diagonal matrix. This means that there is an invertible #(n\times n)#-matrix #T# such that #TAT^{-1}# is diagonal. Thanks to the theorem Basis transitions in terms of matrices, this is equivalent to the statement that #L_A# has a diagonal matrix with respect to a suitable basis of #\mathbb{R}^n# (or #\mathbb{C}^n# if #A# is complex).
The matrix \[A = \matrix{1&1\\ 0&1}\] is not diagonalizable. For if it were, there would be numbers #a# and #b# such that #A# is conjugate to #D=\matrix{a&0\\ 0&b}#. But then we would have
\[\begin{array}{rclclclcl}a+b &=&\text{tr}(D) &=& \text{tr}(A )&=&1+1 &=& 2 \\ a\cdot b &=&\det( D) &=& \det(A) &=&1\cdot 1 -1\cdot 0 &=& 1 \end{array}\] Substituting #b = 2 - a#, an equality obtained from the first equation, into the second equation, we obtain the quadratic equation #a^2-2 a+1=0#, which has the single solution #a =1 #. This implies #b = 1#, so #D = I_2#, the identity matrix. Therefore, there is an invertible #(2\times2)#-matrix #T# with #A = T \,I_2T^{-1}=I_2 #. This contradicts the fact that the #(1,2)#-element of #A# is equal to #1#.
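The same conclusion can also be reached via eigenvectors: the only eigenvalue of #A# is #1#, and the corresponding eigenspace consists of the solutions of \[\left(A-I_2\right)\matrix{x\\ y} = \matrix{0&1\\ 0&0}\matrix{x\\ y}=\matrix{0\\ 0}\] that is, of the span of #\rv{1,0}#. Since this eigenspace has dimension #1#, there is no basis of #\mathbb{R}^2# consisting of eigenvectors of #A#.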
It may happen that a square matrix with real elements is not diagonalizable when we view it as a linear map on a real vector space #V#, but is diagonalizable when we view it as a linear map on a complex vector space (a complexification of #V#).
A well-known example is the matrix #\matrix{0&1\\ -1&0}# with complex eigenvalues #\ii# and #-\ii#.
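Indeed, over the complex numbers this matrix can be diagonalized explicitly: the columns #\rv{1,\ii}# and #\rv{1,-\ii}# of #T=\matrix{1&1\\ \ii&-\ii}# are eigenvectors with eigenvalues #\ii# and #-\ii#, respectively, and a direct computation gives \[T^{-1}\matrix{0&1\\ -1&0}T=\matrix{\ii&0\\ 0&-\ii}\] Over the real numbers no such #T# exists, since the matrix has no real eigenvalues.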
This is why we speak of diagonalizability over the real numbers and diagonalizability over the complex numbers.
The following statement is easy to understand but very important:
Let #V# be a vector space of finite dimension #n# with basis #\alpha#. The following statements regarding a linear map #L:V\to V# are equivalent.
- #L# is diagonalizable.
- #L_\alpha# is diagonalizable.
- The sum, over all eigenvalues of #L#, of the dimensions of the corresponding eigenspaces is equal to #n#.
- There is an invertible #(n\times n)#-matrix #T# such that #T^{-1}L_\alpha T# is a diagonal matrix.
In this case, the columns of the matrix #T# form a basis of #\mathbb{R}^n# or #\mathbb{C}^n# (depending on whether #V# is real or complex) consisting of eigenvectors of #L_\alpha#.
This repeats what we discussed previously: it follows from the theorem Basis transitions in terms of matrices that there is a basis #\beta# for #V# with respect to which #L_\beta# is diagonal, if and only if there is an invertible #(n\times n)#-matrix #S# such that #SL_\alpha S^{-1}# is diagonal. In statement 4 we take #T = S^{-1}#.
We saw that the matrix \(A = \matrix{0&1\\ 0&0}\) is not diagonalizable. This means that #\mathbb{R}^2# has no basis of eigenvectors of #L_A#; in fact, not even #\mathbb{C}^2# has such a basis, so #A# is not diagonalizable even over the complex numbers. The vector #\rv{1,0}# is an eigenvector of #L_A# with eigenvalue #0#, and any other eigenvector of #L_A# lies in the span of #\rv{1,0}#.
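This can be checked directly: if #\matrix{x\\ y}# is an eigenvector of #A# with eigenvalue #\lambda#, then \[\matrix{0&1\\ 0&0}\matrix{x\\ y}=\matrix{y\\ 0}=\lambda\matrix{x\\ y}\] so #y=\lambda x# and #\lambda y=0#. If #\lambda\neq0#, these equations force #y=0# and then #x=0#, which is impossible for an eigenvector; hence #\lambda=0# and #y=0#.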
If there is an invertible #(n\times n)#-matrix #T# such that #T^{-1}L_{\alpha}T# is a diagonal matrix, then #T# may be found as a matrix whose columns form a basis of eigenvectors of #L_{\alpha}#. The determination of this basis can be carried out by means of a previously described procedure.
A direct consequence of the theorem Recognizing diagonalizability is that, for a diagonalizable linear map #L#, we can find a basis with respect to which the matrix of the map is in diagonal form. We start with an arbitrary basis #\alpha# and find a matrix #T# conjugating #L_\alpha# to a diagonal matrix, that is, such that #T^{-1}L_{\alpha}T# is a diagonal matrix:
Let #V# be a finite-dimensional vector space with basis #\alpha# and #L:V\to V# a linear mapping.
If #L# is diagonalizable, then we can find an invertible matrix #T# whose columns are eigenvectors of #L_\alpha#, such that #T^{-1}L_{\alpha}T# is a diagonal matrix. Then, the composition #\beta = L_T^{-1}\,\alpha# is a coordinatization of #V# such that #L_\beta# is in diagonal form.
If #L_\alpha# is already a diagonal matrix, then #T = I_n#, where #n=\dim{V}#, suffices.
Suppose that #L# is diagonalizable. According to the above argument, the matrix #L_{\alpha}# is diagonalizable. This means that there is an invertible matrix #T# such that #D= T^{-1} L_\alpha T# is a diagonal matrix. After multiplying both sides by #T#, we find that \[L_\alpha\, T \vec{e}_i= T\, D\vec{e}_i\] for each #i=1,\ldots,n#, where #\basis{\vec{e}_1,\ldots,\vec{e}_n}# is the standard basis of #\mathbb{R}^n# or #\mathbb{C}^n# (depending on whether #V# is real or complex). On the left is the image of the vector \( T \vec{e}_i\) under #L_\alpha#, and on the right the scalar multiple of that vector by #d_{i}#, the #i#-th diagonal element of #D#. This shows that the #i#-th column \( T \vec{e}_i\) of #T# is an eigenvector of #L_\alpha# with corresponding eigenvalue #d_i#.
Because #L_T^{-1} = \beta\,\alpha^{-1}# we conclude, thanks to theorem Basis transition that \[L_\beta =\beta \alpha^{-1} \, L_\alpha \left(\beta \alpha^{-1}\right)^{-1} =T^{-1} L_{\alpha}\left(T^{-1}\right)^{-1} =T^{-1} L_{\alpha}T = D\]
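By way of illustration, take #V=\mathbb{R}^2#, let #\alpha# be the standard basis, and suppose that #L# has the matrix \[L_\alpha=\matrix{2&1\\ 1&2}\] The characteristic polynomial is #x^2-4x+3=(x-1)(x-3)#, with eigenvectors #\rv{1,-1}# and #\rv{1,1}#, respectively. Taking \[T=\matrix{1&1\\ 1&-1}\] whose columns are eigenvectors with eigenvalues #3# and #1#, we find \[T^{-1}L_\alpha T=\matrix{3&0\\ 0&1}\]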
If a basis of eigenvectors turns out to exist, setting up the matrix with respect to such a basis is simple: the matrix is a diagonal matrix along whose diagonal the eigenvalues appear (in exactly the same order in which the corresponding eigenvectors appear in the basis). Thus, no explicit transformation is needed for this calculation.
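For instance, if #\beta = \basis{\vec{v}_1,\vec{v}_2,\vec{v}_3}# is a basis of eigenvectors with eigenvalues #2#, #2#, and #5#, then \[L_\beta = \matrix{2&0&0\\ 0&2&0\\ 0&0&5}\]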
The method can also be used to determine whether the linear map is diagonalizable.
For what value of #b# is the #(2\times2)#-matrix #A# below not diagonalizable over the complex numbers?
\[ A = \matrix{5 & b \\ 3 & -5}\]
#b = # #-{{25}\over{3}}#
The matrix #A# is not equal to a scalar multiple of the identity. Therefore #A# is diagonalizable (over the complex numbers) if and only if it has two different eigenvalues (possibly complex), which is the case if and only if the characteristic polynomial has two different (possibly complex) roots. The characteristic polynomial is \[p_A(x) = x^2-\text{tr}(A)\,x+\det(A) =x^2-3 b-25 \] The discriminant of this quadratic polynomial is #12 b+100#. Hence #p_A# has exactly one root (which must be real) if and only if #12 b+100 = 0#. Solving this linear equation in #b# gives #b = -{{25}\over{3}}#.
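As a check: for #b=-{{25}\over{3}}# the characteristic polynomial becomes #p_A(x)=x^2#, so #0# is the only eigenvalue. The eigenspace consists of the solutions of \[\matrix{5&-{{25}\over{3}}\\ 3&-5}\matrix{x\\ y}=\matrix{0\\ 0}\] which form the span of #\rv{5,3}#. Its dimension #1# is smaller than #2#, so #A# is indeed not diagonalizable.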