Let #V# be a finite-dimensional vector space and #L :V\rightarrow V# a linear map. According to The matrix of a linear map, for each choice of a basis #\alpha# for #V#, the map #L# is determined by the matrix #L_\alpha#.
We will look for a basis #\alpha# such that the matrix #L_\alpha# has a computationally simple structure. The simplest such structure is the diagonal matrix. As we will see soon, it is not always possible to find such a basis.
The diagonal of a matrix is the sequence of its #(i,i)#-entries; that is, those entries whose column numbers equal their row numbers.
A square matrix #A# has diagonal form (or: is a diagonal matrix) if all of its #(i,j)#-entries with #i\neq j# are equal to zero.
The matrix of multiplication by a scalar #\lambda# on #\mathbb{R}^n# is diagonal. All of its diagonal entries are equal to #\lambda#.
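As a quick numerical illustration (a minimal Python sketch using numpy; the scalar #3# and the dimension #3# are arbitrary choices), the matrix of multiplication by #3# on #\mathbb{R}^3# is #3# times the identity matrix:

```python
import numpy as np

lam, n = 3.0, 3                      # arbitrary scalar and dimension
A = lam * np.eye(n)                  # matrix of the map v -> lam * v on R^n
print(A)                             # diagonal, with lam in every diagonal entry

v = np.array([1.0, -2.0, 0.5])
print(np.allclose(A @ v, lam * v))   # True: A v = lam v for every vector v
```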
The matrix \[A = \matrix{1&1\\ 0&-1}\] is not diagonal. But the linear map #L_A:\mathbb{R}^2\to \mathbb{R}^2# has diagonal matrix \[L_\beta = \matrix{1&0\\ 0&-1}\] with respect to the basis
\[\beta = \basis{\rv{\frac12,0},\rv{-\frac12,1}}\] Indeed, the matrix of the coordinate map corresponding to #\beta# is #\beta = \matrix{2&1\\ 0&1}# (the inverse of the matrix whose columns are the vectors of the basis #\beta#), so
\[L_\beta = \beta\, A\,\beta^{-1} =\matrix{2&1\\ 0&1}\,\matrix{1&1\\ 0&-1}\,\matrix{\frac12&-\frac12\\ 0&1} = \matrix{1&0\\ 0&-1}\]
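This change of basis can also be checked numerically. Here is a minimal sketch in Python (numpy); the names `A`, `B`, and `B_inv` are just illustrative choices for the matrix, the coordinate map of #\beta#, and its inverse:

```python
import numpy as np

A = np.array([[1.0,  1.0],
              [0.0, -1.0]])
B = np.array([[2.0, 1.0],            # matrix of the coordinate map of beta
              [0.0, 1.0]])
B_inv = np.linalg.inv(B)             # its columns are the basis vectors (1/2, 0), (-1/2, 1)

L_beta = B @ A @ B_inv               # matrix of L_A with respect to beta
print(L_beta)                        # [[ 1.  0.]
                                     #  [ 0. -1.]]
```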
The following statement is easy to understand but of crucial importance:
Let # L :V\rightarrow V# be a linear map and #\alpha =\basis{\vec{a}_1,\ldots ,\vec{a}_n}# a basis for the vector space #V#.
The matrix #L_\alpha# has the diagonal form
\[
L_\alpha =\left(\,\begin{array}{cccc}
\lambda_1 & 0 & \ldots & 0\\
0 & \lambda_2 & \ddots & \vdots\\
\vdots & \ddots & \ddots & 0\\
0 & \ldots & 0 & \lambda_n
\end{array}\,\right)
\]
if and only if # L( \vec{a}_i)=\lambda_i\vec{a}_i# for #i=1,\ldots ,n#.
The matrix #L_\alpha# has the above diagonal form if and only if, for every #i#, the #i#-th column of #L_\alpha#, that is, the vector of #\alpha#-coordinates of #L(\vec{a}_i)#, equals #\rv{0,\ldots ,0,\lambda_i,0,\ldots ,0}# with #\lambda_i# in the #i#-th position. This holds if and only if # L( \vec{a}_i)=0\cdot\vec{a}_1+\cdots +\lambda_i\vec{a}_i+\cdots +0\cdot \vec{a}_n=\lambda_i\vec{a}_i#.
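The equivalence can be illustrated numerically with a small Python sketch (numpy); the basis stored in the columns of `P` and the diagonal entries of `D` are arbitrary choices. If the matrix of a map with respect to the basis given by the columns of `P` is the diagonal matrix `D`, then with respect to the standard basis the map has matrix `P D P^{-1}`, and it sends the #i#-th basis vector to #\lambda_i# times itself:

```python
import numpy as np

D = np.diag([2.0, -1.0, 0.5])        # arbitrary diagonal entries lambda_i
P = np.array([[1.0, 1.0, 0.0],       # columns: an arbitrary basis a_1, a_2, a_3
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
A = P @ D @ np.linalg.inv(P)         # standard matrix of the map that is diagonal w.r.t. this basis

for i, lam in enumerate(np.diag(D)):
    a_i = P[:, i]
    print(np.allclose(A @ a_i, lam * a_i))   # True for every i: L(a_i) = lambda_i * a_i
```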
Later we will study in greater detail when a linear map has a basis with respect to which the matrix is diagonal. Here, it makes a difference whether we work with a complex or a real vector space. In order to see this, consider the scalar multiplication by #\ii# on the #1#-dimensional complex vector space #\mathbb{C}#. As we discussed above for #\mathbb{R}^n#, the complex #(1\times1)#-matrix of this linear map with respect to the basis #\basis{1}# is diagonal. We can also view #\mathbb{C}# as the #2#-dimensional real vector space with basis #\basis{1,\ii}#. In this case multiplication by #\ii# is still a linear map. It has matrix \[\matrix{0&-1\\ 1&0}\] with respect to the given basis. We will see below (in the example of a two-dimensional rotation) that there is no basis of the corresponding #2#-dimensional real vector space with respect to which the matrix of this linear map is a diagonal matrix.
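Here is a small sketch in Python (numpy) that builds this real #2\times2# matrix from the #\basis{1,\ii}#-coordinates of #\ii\cdot 1# and #\ii\cdot\ii#; the helper name `real_coords` is just an illustrative choice:

```python
import numpy as np

def real_coords(z):
    """Coordinates of a complex number with respect to the real basis {1, i}."""
    return np.array([z.real, z.imag])

# Columns: coordinates of i*1 and i*i with respect to the basis {1, i}
M = np.column_stack([real_coords(1j * 1), real_coords(1j * 1j)])
print(M)                      # [[ 0. -1.]
                              #  [ 1.  0.]]
print(np.linalg.eigvals(M))   # purely imaginary (roughly +1j and -1j): no real eigenvalues
```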
The diagonal form is related to the following notions.
Let # L:V\rightarrow V# be a linear map. A vector #\vec{v}\neq\vec{0}# is called an eigenvector of # L # with eigenvalue #\lambda# if # L (\vec{v}) = \lambda\vec{v}#.
If #A# is an #(n\times n)#-matrix, then a vector of #\mathbb{R}^n# is called an eigenvector of #A# if it is an eigenvector of #L_A# and a number is called an eigenvalue of #A# if it is an eigenvalue of #L_A#.
Thus, an eigenvector is a vector distinct from the zero vector that is mapped by # L # onto a scalar multiple of itself; the scalar involved is the corresponding eigenvalue.
Consider the matrix \(A = \matrix{1&1\\ 0&1}\). The linear map #L_A:\mathbb{R}^2\to\mathbb{R}^2# has eigenvector #\rv{1,0}# with eigenvalue #1#. Every other eigenvector of #L_A# is in the span of #\rv{1,0}#. In particular, there is no basis of eigenvectors.
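A minimal numerical check of this example (a Python sketch using numpy):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
v = np.array([1.0, 0.0])
print(np.allclose(A @ v, 1.0 * v))   # True: v is an eigenvector with eigenvalue 1

w, V = np.linalg.eig(A)
print(w)                             # both eigenvalues equal 1
print(V)                             # both computed eigenvectors are numerically parallel
                                     # to (1, 0): no basis of eigenvectors
```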
If #\vec{v}# is an eigenvector of a linear map, then so is every multiple of #\vec{v}# by a nonzero scalar. Thus, eigenvectors corresponding to a fixed linear map and a fixed eigenvalue are never unique.
The above theorem can also be formulated as follows:
Let # L :V\rightarrow V# be a linear map, where #V# is a vector space of finite dimension #n# and fix a basis #\alpha# for #V#.
The matrix #L_\alpha# is diagonal if and only if #\alpha# is a basis of eigenvectors. In the latter case, the eigenvalues appear on the diagonal.
Consider #\mathbb{R}^2# with the standard dot product and the orthogonal projection onto the line through the origin spanned by a vector #\vec{a}\neq\vec{0}#. If #\vec{b}# is a nonzero vector perpendicular to #\vec{a}#, then the vectors #\vec{a}# and #\vec{b}# are eigenvectors of the projection with eigenvalues #1# and #0#, respectively. The matrix of the projection with respect to the basis #\basis{\vec{a},\vec{b}}# is
\[
\matrix{
1 & 0\\
0 & 0
}
\] which is a diagonal matrix with the eigenvalues on the diagonal. Note the order of #1# and #0# on the diagonal: this corresponds to the order of the eigenvectors.
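A numerical sketch of this projection example in Python (numpy); the vectors #\vec{a}=\rv{3,4}# and #\vec{b}=\rv{-4,3}# are arbitrary choices of a direction vector and a perpendicular vector:

```python
import numpy as np

a = np.array([3.0, 4.0])             # arbitrary direction vector
b = np.array([-4.0, 3.0])            # a nonzero vector perpendicular to a
P = np.outer(a, a) / (a @ a)         # orthogonal projection onto the line spanned by a

print(np.allclose(P @ a, 1.0 * a))   # True: eigenvector a, eigenvalue 1
print(np.allclose(P @ b, 0.0 * b))   # True: eigenvector b, eigenvalue 0

C = np.column_stack([a, b])          # columns: the basis vectors a, b
print(np.linalg.inv(C) @ P @ C)      # [[1, 0], [0, 0]] up to rounding
```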
In #\mathbb{R}^2#, consider a rotation about the origin with an angle of #90^\circ# (anti-clockwise, so #270^\circ# clockwise). The corresponding matrix with respect to the standard basis is \[
\matrix{
0 & -1\\
1 & 0}
\] Not a single vector distinct from #\vec{0}# is mapped to a scalar multiple of itself. After all, if #\rv{x,y}# were an eigenvector with eigenvalue #\lambda#, then
\[\matrix{
0 & -1\\
1 & 0}\cv{x\\ y} =\lambda \cv{x\\ y}\] This gives the two linear equations #-y=\lambda x# (for the first coordinate) and #x=\lambda y# (for the second coordinate). Substituting the second equation into the first yields #-y=\lambda^2 y#, so #(1+\lambda^2)\cdot y = 0#. If #y=0#, then #x=\lambda y=0#, so that #\rv{x,y}# is not an eigenvector. We can therefore assume #y\ne0#. Then #1+\lambda^2=0#, that is, #\lambda^2=-1#; but this is impossible for a real number #\lambda#.
This linear map has no eigenvectors, so there certainly is no basis of eigenvectors. For no choice of basis whatsoever will the matrix of this rotation be diagonal.
This is immediate from the above theorem Recognition of diagonal form.
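One way to see the absence of eigenvectors numerically (a Python/numpy sketch): a vector #\rv{x,y}# and its image are parallel exactly when the determinant of the matrix with these two vectors as columns is zero, and for this rotation that determinant equals #x^2+y^2#, which only vanishes for the zero vector.

```python
import numpy as np

R = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# v and R v are parallel exactly when det([v, R v]) = 0; for this rotation the
# determinant equals x**2 + y**2, which vanishes only for v = (0, 0).
rng = np.random.default_rng(0)
for _ in range(5):
    v = rng.standard_normal(2)
    print(np.linalg.det(np.column_stack([v, R @ v])), v @ v)  # the two values agree
```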
Consider the linear map #L:P_2\to P_2# on the vector space #P_2# of polynomials in #x# with degree at most #2# given by \[L(p(x))= x\cdot\frac{\dd }{\dd x}\left(p(x)\right)\] The basis #\alpha =\basis{1,x,x^2}# for #P_2# consists of eigenvectors of #L#. What are the eigenvalues of #L# corresponding to these eigenvectors?
Give your answer in the form of a list of length #3#.
\(\rv{0,1,2}\)
This follows from the following calculation, where #i=0,1,2#: \[L(x^i) =x\cdot \dfrac{\dd}{\dd x}\left(x^i\right) = x\cdot i\cdot x^{i-1} = i\cdot x^i\]
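A quick symbolic check of this calculation (a Python sketch using sympy):

```python
import sympy as sp

x = sp.symbols('x')
for i in range(3):
    p = x**i
    L_p = sp.expand(x * sp.diff(p, x))          # L(p) = x * d/dx p(x)
    print(i, L_p, sp.simplify(L_p - i * p) == 0)  # True: L(x**i) = i * x**i
```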