### Invariant subspaces of linear maps: Diagonalizability

### The notion of diagonalizability

We have seen that it is not always possible to find a basis of eigenvectors of a linear map from a finite-dimensional vector space to itself. Recall that a square matrix #A# has *diagonal form*, or is a *diagonal matrix*, if all of its #(i,j)#-elements with #i\neq j# are zero. We now discuss how to determine whether a basis of eigenvectors exists.

Diagonalizability

A linear map #L: V\to V#, where #V# is a finite-dimensional vector space, is called **diagonalizable** if #V# has a basis #\alpha# such that #L_\alpha# is a diagonal matrix.

The matrix #A# is called **diagonalizable** if #L_A# is diagonalizable. If more precision is needed and #V# is real (respectively, complex), we say that #A# is **diagonalizable over the real **(respectively,** complex**)** numbers**.
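The distinction between diagonalizability over the real and over the complex numbers can be illustrated with a rotation matrix (an illustrative example, not taken from the text): a quarter-turn of the plane has no real eigenvalues, so it is not diagonalizable over the reals, yet it has two distinct complex eigenvalues and hence is diagonalizable over the complex numbers. A minimal sketch using NumPy:

```python
import numpy as np

# A 90-degree rotation of the plane (illustrative example):
# it has no real eigenvalues, so it is not diagonalizable over the
# real numbers, but its eigenvalues i and -i are distinct, so it is
# diagonalizable over the complex numbers.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)  # two distinct, purely imaginary eigenvalues
```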

The following statement is easy to understand but very important:

Recognizing diagonalizability

Let #V# be a vector space of finite dimension #n# with basis #\alpha#. The following statements regarding a linear map #L:V\to V# are equivalent.

- #L# is diagonalizable.
- #L_\alpha# is diagonalizable.
- The sum of the dimensions of the eigenspaces of #L#, taken over all eigenvalues of #L#, is equal to #n#.
- There is an invertible #(n\times n)#-matrix #T# such that #T^{-1}L_\alpha T# is a diagonal matrix.

In this case, the columns of the matrix #T# form a basis of #\mathbb{R}^n# or #\mathbb{C}^n# (according to #V# being real or complex) consisting of eigenvectors of #L_\alpha#.
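The last equivalence can be checked numerically: collect eigenvectors of #L_\alpha# as the columns of #T# and verify that #T^{-1}L_\alpha T# is diagonal. The matrix below is a hypothetical example (an assumption, not from the text) with two distinct eigenvalues:

```python
import numpy as np

# Hypothetical example matrix with distinct eigenvalues 5 and 2
# (chosen for illustration; not a matrix from the text).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, T = np.linalg.eig(A)  # columns of T are eigenvectors of A

# Since T is invertible, its columns form a basis of eigenvectors,
# so conjugating by T diagonalizes A.
D = np.linalg.inv(T) @ A @ T
print(np.round(D, 10))  # diagonal matrix of eigenvalues
```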

A direct consequence of the theorem *Recognizing diagonalizability* is that, for a diagonalizable linear map #L#, we can find a basis with respect to which the matrix of the map is diagonal: start with an arbitrary basis #\alpha# and find a matrix #T# conjugating #L_\alpha# to a diagonal matrix, that is, such that #T^{-1}L_{\alpha}T# is a diagonal matrix:

Diagonalization and conjugation

Let #V# be a finite-dimensional vector space with basis #\alpha# and #L:V\to V# a linear mapping.

If #L# is diagonalizable, then we can find an invertible matrix #T# whose columns are eigenvectors of #L_\alpha#, such that #T^{-1}L_{\alpha}T# is a diagonal matrix. Then, the composition #\beta = L_T^{-1}\,\alpha# is a coordinatization of #V# such that #L_\beta# is in diagonal form.

The method can also be used to determine whether the linear map is diagonalizable.
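When the map fails to be diagonalizable, the method detects it: the dimensions of the eigenspaces sum to less than #n#. A minimal sketch, using a hypothetical shear matrix (an assumption, not from the text) whose only eigenvalue is #1# with a one-dimensional eigenspace:

```python
import numpy as np

# A shear matrix (illustrative example): its characteristic polynomial
# is (x - 1)^2, but the eigenspace ker(A - I) is only one-dimensional,
# so the eigenspace dimensions sum to 1 < 2 and A is not diagonalizable.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# dim ker(A - I) = n - rank(A - I)
eigenspace_dim = 2 - np.linalg.matrix_rank(A - np.eye(2))
print(eigenspace_dim)  # 1
```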

The matrix #A# is not equal to a scalar multiple of the identity. Therefore, #A# is diagonalizable (over the complex numbers) if and only if it has two distinct eigenvalues (possibly complex). This is the case if and only if the *characteristic polynomial* has two distinct (possibly complex) roots.

The characteristic polynomial is

\[p_A(x) = x^2-\text{tr}(A)\,x+\det(A) =x^2+6 x-7 b+5 \]
The *discriminant* of this quadratic polynomial is #28 b+16#. Now #p_A# has exactly one root (which must be real) if and only if #28 b+16 = 0#. Solving this linear equation in #b# gives #b = -{{4}\over{7}}#.
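The computation above can be checked with exact rational arithmetic, using the coefficients #\text{tr}(A) = -6# and #\det(A) = 5 - 7b# read off from the characteristic polynomial in the text:

```python
from fractions import Fraction

# Characteristic polynomial from the text: p(x) = x^2 + 6x + (5 - 7b),
# so tr(A) = -6 and det(A) = 5 - 7b.
def discriminant(b):
    """Discriminant of x^2 + 6x + (5 - 7b); simplifies to 28b + 16."""
    return Fraction(6) ** 2 - 4 * (Fraction(5) - 7 * b)

# A double (hence real) root occurs exactly when the discriminant vanishes.
b = Fraction(-16, 28)   # solving 28b + 16 = 0
print(b)                # -4/7
print(discriminant(b))  # 0
```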
