Previously we have seen that a linear map #L:V\to W# between two finite-dimensional vector spaces #V# and #W# is fixed as soon as we know the images under #L# of the vectors of a basis #\alpha# for #V#. Now we show that the map itself can be described by a matrix once we also know a basis #\beta# for #W#.
Consider therefore the following diagram, in which #n=\dim{V}# and #m=\dim{W}#.
\[\begin{array}{ccccc}
V && \overset{L}{\longrightarrow} && W\\
\downarrow \alpha &&&& \downarrow\beta\\
\mathbb{R}^n && \overset{\beta L \alpha^{-1}}{\longrightarrow} && \mathbb{R}^m
\end{array}\]
Let #m# and #n# be natural numbers, #V# and #W# vector spaces of dimension #n# and #m#, respectively, and let #\alpha# be a basis for #V# and #\beta# a basis for #W#.
Consider a linear map #L :V\rightarrow W#. The composite map #\beta L \alpha^{-1}: \mathbb{R}^n\rightarrow\mathbb{R}^m# maps #\alpha(\vec{x})# to #\beta (L\vec{x})#. This is a linear map from #\mathbb{R}^n# to #\mathbb{R}^m# and is therefore determined by a matrix, whose columns are
\[
(\beta L \alpha^{-1})(\vec{e}_i)=\beta ( L \vec{a}_i)\quad \text{for}\quad i=1,\ldots ,n
\]
Here #\vec{a}_1,\ldots ,\vec{a}_n# denote the vectors of the basis #\alpha#. The #i#-th column of this matrix consists of the #\beta#-coordinates of the image #L\vec{a}_i#.
This matrix is denoted by #{}_\beta L_\alpha# and we call it the matrix of #L# with respect to the bases #\alpha# and #\beta#.
If #V=W# and #\beta=\alpha#, we denote the associated matrix also by #L_\alpha#; we speak of the matrix of #L# with respect to the basis #\alpha#.
To each vector #\vec{x}\in V# corresponds a unique coordinate vector #\alpha(\vec{x})#, and to the image vector #L \vec{x}# a unique coordinate vector #\beta(L \vec{x})#.
The matrix #{}_\beta L_\alpha# gives us the linear map #L# at the level of coordinates. For example, in order to find the image of a vector #\vec{a}\in V#, we determine the coordinate vector #\alpha(\vec{a})# of #\vec{a}# and multiply it by #{}_\beta L_\alpha#; this gives the coordinate vector of #L\vec{a}#. Finally, we convert this coordinate vector back to the corresponding vector of #W#. This recipe is carried out in the sketch below.
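The following is a minimal numerical sketch of this recipe, assuming for illustration that #V=W=\mathbb{R}^2# with a non-standard basis; the concrete basis and matrix below are our own choices, not part of the theory.

```python
import numpy as np

# Assumed basis alpha of V = R^2: its vectors are the columns of B,
# written in standard coordinates.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

def alpha(x):
    """Coordinatization alpha: vector of V -> its alpha-coordinates."""
    return np.linalg.solve(B, x)

def alpha_inv(c):
    """Inverse coordinatization: alpha-coordinates -> vector of V."""
    return B @ c

# Assumed matrix of some linear map L with respect to alpha.
L_alpha = np.array([[1.0, 4.0],
                    [-2.0, 3.0]])

x = np.array([3.0, 5.0])
# Recipe: coordinatize, multiply by the matrix, convert back.
Lx = alpha_inv(L_alpha @ alpha(x))
print(Lx)
```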
We will pay the most attention to the situation #V=W# and #\alpha=\beta#.
If #L: V=\mathbb{R}^n\rightarrow W=\mathbb{R}^m# is a linear map, and we use the standard bases #\varepsilon# for these spaces (in this notation we do not worry about the fact that #\varepsilon# has length #n# in #V# and length #m# in #W#), then the matrix #L_{\varepsilon}# relative to these bases is simply the matrix of #L# as defined in The matrix of a linear map in the coordinate space. This is because the coordinatizations of #V# and #W# are both the identity map.
The matrix of a linear map allows us to read off the image of a given vector. For example, if #V# is two-dimensional with basis #\alpha=\basis{\vec{a},\vec{b}}# and the linear map #A:V\rightarrow V# has matrix
\[
A_{\alpha}=
\left(\begin{array}{cc}
1 & 4 \\ -2 & 3
\end{array}\right)
\] relative to #\alpha#, then we read off from the matrix that #A\vec{a}=1\cdot\vec{a}-2\cdot \vec{b}# (first column) and #A\vec{b}=4\cdot\vec{a}+3\cdot\vec{b}# (second column). To determine the image of #\lambda \vec{a}+\mu\vec{b}# we must multiply #A_{\alpha}# by #\rv{\lambda ,\mu}#: \[
\left(\begin{array}{cc}
1 & 4 \\ -2 & 3
\end{array}\right)
\left(\begin{array}{c} \lambda \\ \mu \end{array}\right)=
\left(\begin{array}{c}
\lambda + 4\mu \\ -2\lambda +3\mu
\end{array}\right)
\] so we conclude that the image is #(\lambda + 4\mu) \vec{a} + (-2\lambda +3\mu)\vec{b}#.
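For readers who want to check this computation symbolically, here is a short sympy sketch (the variable names are ours):

```python
from sympy import Matrix, symbols

lam, mu = symbols('lambda mu')

# Matrix of A with respect to the basis alpha = (a, b)
A_alpha = Matrix([[1, 4],
                  [-2, 3]])

# alpha-coordinates of lambda*a + mu*b
coords = Matrix([lam, mu])

# Coordinates of the image: (lambda + 4*mu, -2*lambda + 3*mu)
print(A_alpha * coords)
```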
Consider the vector space #P_2# of real polynomials of degree at most #2# in #x# and the linear map #D:P_2\rightarrow P_2# defined by #Dp=p'#, the derivative of #p# with respect to #x#. Take the basis #\alpha=\basis{1,x,x^2}# for #P_2#. The derivatives of the basis vectors are:
\[
\begin{array}{rcl}
D(1)=0 &=& 0\cdot 1+0\cdot x+0\cdot x^2\\
D(x)=1 &=& 1\cdot 1+0\cdot x+0\cdot x^2\\
D(x^2)=2x &=& 0\cdot 1+2\cdot x+0\cdot x^2
\end{array}
\] The matrix #D_\alpha# is thus
\[
D_\alpha =\left(\,\begin{array}{rrr}
0 & 1 & 0\\
0 & 0 & 2\\
0 & 0 & 0
\end{array}\,\right)
\] As an illustration, we take the polynomial #p(x)=2x^2-3x+5#. The coordinate vector of #p# relative to #\alpha# is #\rv{5,-3,2}# and
\[
D_\alpha \left(\,\begin{array}{r}
5\\ -\,3 \\ 2
\end{array}\,\right)\ =\ \left(\,\begin{array}{rrr}
0 & 1 & 0\\
0 & 0 & 2\\
0 & 0 & 0
\end{array}\,\right)\ \left(\,\begin{array}{r}
5 \\ -\,3 \\ 2
\end{array}\,\right)\ =\ \left(\,\begin{array}{r}
-\,3\\ 4\\ 0
\end{array}\,\right)
\] The result #\rv{-3,4,0}# is the coordinate vector of #4x-3#, which is indeed the derivative of #p#.
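This coordinate computation is easy to reproduce with numpy; a small sketch (variable names ours):

```python
import numpy as np

# Matrix of D with respect to alpha = (1, x, x^2)
D_alpha = np.array([[0, 1, 0],
                    [0, 0, 2],
                    [0, 0, 0]])

# p(x) = 2x^2 - 3x + 5 has alpha-coordinates (5, -3, 2)
p_coords = np.array([5, -3, 2])

print(D_alpha @ p_coords)   # [-3  4  0], the coordinates of p'(x) = 4x - 3
```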
We work in #\mathbb{R}^2# with the standard dot product. We determine the matrix #P_\varepsilon#, relative to the standard basis #\varepsilon#, of the orthogonal projection #P_\ell# onto the straight line #\ell# with equation #2x-3y=0#.

Determining the coordinates of #P_\ell\vec{e}_1# and #P_\ell \vec{e}_2# directly is somewhat difficult. Therefore, we first consider a basis #\alpha=\basis{\vec{a}_1,\vec{a}_2}# with respect to which #{P_\ell}# can be described easily. We take #\vec{a}_1=\rv{3,2}\in\ell# and #\vec{a}_2=\rv{2,-3}\perp\ell#. Then we have #{P_\ell}\vec{a}_1=\vec{a}_1=1\cdot \vec{a}_1+0\cdot \vec{a}_2# and #{P_\ell}\vec{a}_2=\vec{0}=0\cdot \vec{a}_1+0\cdot \vec{a}_2#, so that the matrix #P_\alpha# is
\[ P_\alpha=\left(\,\begin{array}{rr} 1 & 0 \\ 0 & 0 \end{array}\,\right) \]
The transition matrices are
\[ {}_\varepsilon I_\alpha =\left(\,\begin{array}{rr} 3 & 2\\ 2 & -\,3 \end{array}\,\right) \quad \hbox{and}\quad {}_\alpha I_\varepsilon ={}_\varepsilon I_\alpha^{-1}=\frac{1}{13}\left(\,\begin{array}{rr} 3 & 2\\ 2 & -\,3 \end{array}\,\right)\]
so
\[ P_\varepsilon ={}_\varepsilon I_\alpha\ P_\alpha\ {}_\alpha I_\varepsilon =\frac{1}{13}\left(\,\begin{array}{rr} 9 & 6 \\ 6 & 4 \end{array}\,\right) \]
We can find this result in another way. We put the #\varepsilon#-coordinates of the requirements #{P_\ell}\vec{a}_1=\vec{a}_1# and #{P_\ell} \vec{a}_2=\vec{0}# as rows in a matrix
\[ \left(\,\begin{array}{rr} 3 & 2\\ 2 & -3 \end{array}\,\left|\,\begin{array}{rr} 3 & 2\\ 0 & 0 \end{array}\,\right.\right) \]
which can be brought into the following reduced echelon form
\[ \left(\,\begin{array}{rr} 1 & 0\\ 0 & 1 \end{array}\,\left|\,\begin{array}{rr} \frac{9}{13} & \frac{6}{13}\\\frac{6}{13} & \frac{4}{13} \end{array}\,\right.\right) \]
In general this procedure produces the transpose of the matrix sought in the right-hand block; here that makes no difference, since the matrix of an orthogonal projection with respect to the standard basis is symmetric. Indeed, we find the same result as before:
\[ P_\varepsilon=\frac{1}{13}\left(\,\begin{array}{rr} 9 & 6\\ 6 & 4 \end{array}\,\right) \]
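Both routes to #P_\varepsilon# are easy to check numerically. The following numpy sketch (variable names ours) computes the product of transition matrices and verifies that the result fixes #\vec{a}_1# and annihilates #\vec{a}_2#:

```python
import numpy as np

# Transition matrix from alpha to epsilon: columns a_1 = (3,2), a_2 = (2,-3)
eIa = np.array([[3.0, 2.0],
                [2.0, -3.0]])

# Matrix of the projection with respect to alpha
P_alpha = np.array([[1.0, 0.0],
                    [0.0, 0.0]])

# P_eps = eIa @ P_alpha @ aIe, where aIe is the inverse transition matrix
P_eps = eIa @ P_alpha @ np.linalg.inv(eIa)
print(P_eps * 13)                      # approximately [[9, 6], [6, 4]]

print(P_eps @ np.array([3.0, 2.0]))    # [3, 2]: a_1 is fixed
print(P_eps @ np.array([2.0, -3.0]))   # [0, 0]: a_2 is projected to zero
```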
Let #\alpha = \basis{\ln \left(x\right),1,{{1}\over{x}}}# and consider the subspace #V# of the vector space of real functions spanned by #\alpha#. Then #\alpha # is a basis for #V#.
Likewise, let #\beta = \basis{x-1,\ln \left(x\right),x\cdot \ln \left(x\right)}# and consider the subspace #W# of the vector space of real functions spanned by #\beta#. Then #\beta # is a basis for #W#.
The map #A:V\to W# given by #\left(A\,f\right)(x)=\int_{1}^x f(t)\,\dd t# is linear. Determine its matrix with respect to #\alpha# and #\beta#.
\[{}_\beta A_{\alpha} = \matrix{-1 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \\ }\]
Integration (by parts in the case of the first basis vector) gives the images under #A# of the three vectors of #\alpha#:
\[\begin{array}{rcl}
A\left(\ln \left(x\right)\right) &=& x\cdot \ln \left(x\right)-x+1\\&=&{-1}\cdot\color{blue}{(x-1)}+0\cdot\color{blue}{(\ln \left(x\right))}+1\cdot\color{blue}{(x\cdot \ln \left(x\right))}\\
A\left(1\right) &=&x-1\\&=&{1}\cdot\color{blue}{(x-1)}+0\cdot\color{blue}{(\ln \left(x\right))}+0\cdot\color{blue}{(x\cdot \ln \left(x\right))}\\
A\left({{1}\over{x}}\right) &=&\ln \left(x\right)\\&=&{0}\cdot\color{blue}{(x-1)}+1\cdot\color{blue}{(\ln \left(x\right))}+0\cdot\color{blue}{(x\cdot \ln \left(x\right))}\end{array}
\]
Here the elements of #\beta# are written in #\color{blue}{\text{blue}}#. Reading off the coordinates with respect to #\beta#, we find
\[{}_\beta A_\alpha = \matrix{-1 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \\ }\]
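The three integrals behind these coordinates can be verified with sympy; a minimal sketch:

```python
from sympy import symbols, integrate, log, simplify

x, t = symbols('x t', positive=True)

# Images under A of the basis vectors ln(x), 1, 1/x of alpha
for f in (log(t), 1, 1/t):
    print(simplify(integrate(f, (t, 1, x))))
# Output: x*log(x) - x + 1,   x - 1,   log(x)
```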