Invariant subspaces of linear maps: Eigenvalues and eigenvectors
Determining eigenvalues and eigenvectors
The following theorem shows that we can find a maximal set of linearly independent eigenvectors by finding such a set for each eigenvalue separately.
Independence of eigenvectors for different eigenvalues
Let #V# be a vector space and # L :V\rightarrow V# a linear map. Suppose that #\lambda_1,\ldots ,\lambda_n# are mutually distinct eigenvalues of #L#. Suppose further that, for #i=1,\ldots,n#, #\alpha_i# is a set of linearly independent vectors in the eigenspace # E_i = \ker{\left(L-\lambda_i\,I_V\right)}#. Then the union of #\alpha_1,\ldots,\alpha_n# is linearly independent.
Determination of eigenvalues and eigenvectors can thus be carried out as follows.
Let #V# be an #n#-dimensional vector space, where #n# is a natural number, and let #L:V\to V# be a linear map. The eigenvalues of #L# and a maximal set of linearly independent eigenvectors can be found as follows:
- Set up the matrix #A = L_{\alpha}# of the linear map # L :V \rightarrow V# with respect to a basis #\alpha# of your choice.
- Set up the characteristic equation #\det (A-\lambda \cdot I_n)=0# and solve it.
- For each eigenvalue #\lambda# solve the system #(A-\lambda\cdot I_n)\vec{v}=\vec{0}#. Each solution provides the coordinates of a vector of #E_{\lambda}#, and together the solutions describe all of #E_{\lambda}# in coordinates. Choose a basis of the solution space. The union of these bases over all eigenvalues is a maximal set of linearly independent eigenvectors, given in terms of coordinates with respect to the basis #\alpha#.
- Go back, if desired, from coordinates to vectors in #V#.
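When coordinates are numeric, the procedure above can be carried out (or checked) with NumPy; this is a sketch, and the matrix used is the one from the worked example below.

```python
import numpy as np

# Matrix A of L with respect to the chosen basis (from the example below).
A = np.array([[-52.0, -108.0],
              [24.0, 50.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns
# are corresponding (normalized) eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

# Each column v satisfies A @ v = lambda * v, i.e. v spans part of E_lambda.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```

Note that a numerical routine returns normalized eigenvectors, so they may be scalar multiples of the basis vectors found by hand.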
Some of the examples below show that, for a given linear transformation # L :V \rightarrow V#, there is not always a basis of #V# consisting of eigenvectors, so # L# cannot always be represented by a diagonal matrix. Still, we can often find a fairly simple form of the matrix. Later we will discuss this in greater detail.
- List of eigenvalues: \([2,-4]\)
- Matrix whose columns are corresponding eigenvectors: \(\matrix{2&9\\ -1 & -4}\)
We start by solving the characteristic equation \(\det(A-\lambda\cdot I)=0\) of the matrix \(A\). Calculation and factorization of the characteristic polynomial gives
\[ \begin{array}{rcl}
\det(A-\lambda I) &=& \left\vert \begin{array}{cc} -52-\lambda & -108 \\ 24 & 50-\lambda \end{array} \right\vert\\ &=& (-52-\lambda)(50-\lambda)+108\cdot24 \\
&=& \lambda^2+2\lambda-8 \\
&=& \left(\lambda-2\right)\cdot \left(\lambda+4\right)
\end{array}\] This means that the eigenvalues are \(\lambda_1 = 2\) and \(\lambda_2 = -4\).
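As a quick check, the coefficients of the characteristic polynomial and its roots can be computed numerically; this sketch uses NumPy's `np.poly` and `np.roots`.

```python
import numpy as np

A = np.array([[-52.0, -108.0],
              [24.0, 50.0]])

# np.poly returns the coefficients of det(lambda*I - A),
# highest degree first: here lambda^2 + 2*lambda - 8.
coeffs = np.poly(A)

# Its roots are the eigenvalues of A.
roots = np.roots(coeffs)
```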
Next we calculate the corresponding eigenvector for each eigenvalue.
For \(\lambda_1 = 2\) we determine the kernel of \(A -2\cdot I_2\) by row reducing the coefficient matrix:
\[\begin{array}[t]{ll}A -2\cdot I_2&=
\matrix{
-54 & -108 \\
24 & 48 }\\
&
\begin{array}[t]{ll} \sim\left(\begin{array}{cc} 1 & 2 \\ 24 & 48 \end{array}\right) & \\ \sim\left(\begin{array}{cc} 1 & 2 \\ 0 & 0 \end{array}\right) \end{array}
\end{array}\] Thus the eigenspace for \(\lambda_1 = 2\) equals \(\linspan{ \cv{2\\ -1}} \).
For \(\lambda_2 = -4\) we determine the kernel of \(A +4\cdot I_2\) by row reducing the coefficient matrix:
\[\begin{array}[t]{ll}A +4\cdot I_2&=
\matrix{
-48 & -108 \\
24 & 54
}\\
&
\begin{array}[t]{ll} \sim\left(\begin{array}{cc} 1 & {{9}\over{4}} \\ 24 & 54 \end{array}\right) & \\ \sim\left(\begin{array}{cc} 1 & {{9 }\over{4}} \\ 0 & 0 \end{array}\right) \end{array}
\end{array}\] Thus the eigenspace for \(\lambda_2 = -4\) equals \(\linspan{ \cv{9\\-4}}\).
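The two eigenspace computations can be verified by checking that each basis vector found above is indeed mapped to the corresponding scalar multiple of itself:

```python
import numpy as np

A = np.array([[-52, -108],
              [24, 50]])

# Basis vectors of the eigenspaces found by row reduction.
v1 = np.array([2, -1])   # eigenvalue 2
v2 = np.array([9, -4])   # eigenvalue -4

# A maps each eigenvector to eigenvalue times that vector.
assert np.array_equal(A @ v1, 2 * v1)
assert np.array_equal(A @ v2, -4 * v2)
```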
We conclude that the list of eigenvalues is #\rv{2,-4}# and that a matrix whose columns are corresponding eigenvectors is \(\matrix{2&9\\ -1 & -4}\).
Or visit omptest.org if you are taking an OMPT exam.