Linear maps: Matrices of Linear Maps
Basis transition
Let #V# be an #n#-dimensional vector space. Previously we discussed the coordinatization of #V# using a basis. Now we look at the relationship between coordinatizations using two bases:
\[
\alpha=\basis{\vec{a}_1,\ldots ,\vec{a}_n} \phantom{xx}\hbox{ and }\phantom{xx}\beta =\basis{\vec{b}_1,\ldots ,\vec{b}_n}
\] To each vector #\vec{x}\in V# there now correspond two coordinate vectors: #\alpha(\vec{x})# with respect to the basis #\alpha# and #\beta(\vec{x})# with respect to the basis #\beta#. \[\begin{array}{ccccccc}
&&&V&&&\\ &\alpha&\swarrow&&\searrow&\beta&\\ \mathbb{R}^n&&&&&&\mathbb{R}^n\end{array}\] The relationship between the #\alpha#-coordinates and the #\beta#-coordinates of #\vec{x}# is now clear: we start with the #\alpha#-coordinate vector, apply the map #\alpha^{-1}# to recover #\vec{x}\in V#, and then apply the map #\beta# to obtain the #\beta#-coordinate vector #\beta(\vec{x})#.
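To make this concrete numerically, here is a minimal sketch in which #V=\mathbb{R}^2# and the two bases are chosen purely for illustration (stored as the columns of matrices #A# and #B#); the coordinate maps are computed by solving linear systems, and passing from #\alpha#-coordinates to #\beta#-coordinates is exactly "apply #\alpha^{-1}#, then apply #\beta#".
```python
import numpy as np

# Illustrative bases of R^2 (chosen for this sketch), stored as columns:
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])    # basis alpha: a_1, a_2
B = np.array([[1.0, 1.0],
              [1.0, -1.0]])   # basis beta: b_1, b_2

x = np.array([3.0, 2.0])      # a vector of V in standard coordinates

alpha_x = np.linalg.solve(A, x)   # alpha(x): coordinates of x with respect to alpha
beta_x = np.linalg.solve(B, x)    # beta(x):  coordinates of x with respect to beta

# alpha^{-1} followed by beta: rebuild x from alpha(x), then take beta-coordinates.
print(np.allclose(np.linalg.solve(B, A @ alpha_x), beta_x))   # True
```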
Coordinate transformation
Let #\alpha# and #\beta# be two bases for an #n#-dimensional vector space #V#. Then the linear map \[\beta\,\alpha^{-1}:\mathbb{R}^n\rightarrow\mathbb{R}^n\] is called the coordinate transformation from #\alpha# to #\beta#.
The coordinate transformation can be described by a matrix:
Matrix of a coordinate transformation
Let #\alpha# and #\beta# be bases for an #n#-dimensional vector space #V# and let \[{}_\beta I_\alpha=\left(\beta\,\alpha^{-1}\right)_\varepsilon\] be the matrix of the linear map #\beta\,\alpha^{-1}#.
If #\vec{x}# is the #\alpha#-coordinate vector of a vector #\vec{v}# in #V#, then the #\beta#-coordinate vector of #\vec{v}# is equal to #{}_\beta I_\alpha \,\vec{x}#.
The matrix #{}_\beta I_\alpha# is called the transition matrix from basis #\alpha# to basis #\beta#.
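For bases of #\mathbb{R}^n# stored as the columns of matrices #A# and #B# (the same illustrative setup as in the sketch above), the matrix of #\beta\,\alpha^{-1}# works out to #B^{-1}A#; the following sketch checks the defining property of the transition matrix for that special case only.
```python
import numpy as np

# Same illustrative bases of R^2 as in the previous sketch.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[1.0, 1.0],
              [1.0, -1.0]])

# Matrix of beta o alpha^{-1} with respect to the standard basis: B^{-1} A.
T = np.linalg.solve(B, A)   # transition matrix from alpha to beta

# If c is the alpha-coordinate vector of v, then T @ c is the beta-coordinate vector of v.
v = np.array([3.0, 2.0])
c_alpha = np.linalg.solve(A, v)
c_beta = np.linalg.solve(B, v)
print(np.allclose(T @ c_alpha, c_beta))   # True
```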
In the example below, #V# is the vector space of polynomials of degree at most #2#, with bases #\alpha=\basis{1,x,x^2}# and #\beta=\basis{x-1,x^2-1,x^2+1}#; we determine the transition matrix from #\alpha# to #\beta# and the #\beta#-coordinate vector of #2x^2-3x+4#.
- transition matrix #{}_\beta I_\alpha=\matrix{0 & 1 & 0 \\ -{{1}\over{2}} & -{{1}\over{2}} & {{1}\over{2}} \\ {{1}\over{2}} & {{1}\over{2}} & {{1}\over{2}} \\ }#
- #\beta#-coordinate vector of #2x^2-3x+4#: \(\left[ -3 , {{1}\over{2}} , {{3}\over{2}} \right] \)
We can easily express the basis vectors of #\beta# as linear combinations of the basis vectors of #\alpha#:
\[
\begin{array}{rcl}
x-1&=& -1\cdot 1+1\cdot x+0\cdot x^2\\
x^2-1&=& -1\cdot 1+0\cdot x+1\cdot x^2\\
x^2+1&=& 1\cdot 1+0\cdot x+1\cdot x^2
\end{array}
\] Hence, we know the #\alpha#-coordinates of the basis vectors of #\beta#, and therefore the transition matrix from #\beta# to #\alpha#:
\[
{}_\alpha I_\beta = \matrix{-1 & -1 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 1 \\ }
\] The transition matrix #{}_\beta I_\alpha# is the inverse of this matrix, since the composition of the coordinate transformations from #\beta# to #\alpha# and from #\alpha# to #\beta# is the identity map on #\mathbb{R}^n#. We find
\[
{}_\beta I_\alpha = \matrix{0 & 1 & 0 \\ -{{1}\over{2}} & -{{1}\over{2}} & {{1}\over{2}} \\ {{1}\over{2}} & {{1}\over{2}} & {{1}\over{2}} \\ }
\] How do we determine the #\beta#-coordinates of the vector #2 x^2-3 x+4#? The #\alpha#-coordinates of this vector are #\rv{4,-3,2}#. We can convert these into #\beta#-coordinates using the matrix #{}_\beta I_\alpha#:
\[
{}_\beta I_\alpha\ \left(\,\begin{array}{r} 4\\ -3\\ 2
\end{array}\,\right) =
\frac{1}{2}\left(\,\begin{array}{rrr}
0 & 2 & 0\\
-1 & -1 & 1\\
1 & 1 & 1
\end{array}\,\right)
\left(\,\begin{array}{r}
4\\ -3\\ 2
\end{array}\,\right)\ =\matrix{-3 \\ {{1}\over{2}} \\ {{3}\over{2}} \\ }
\]
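Both steps, inverting #{}_\alpha I_\beta# and multiplying by the #\alpha#-coordinate vector, are easy to verify numerically; the following sketch uses only the numbers of this example.
```python
import numpy as np

# Columns: alpha-coordinates of the beta basis vectors x-1, x^2-1, x^2+1.
I_alpha_beta = np.array([[-1, -1, 1],
                         [ 1,  0, 0],
                         [ 0,  1, 1]], dtype=float)

# The transition matrix from alpha to beta is the inverse.
I_beta_alpha = np.linalg.inv(I_alpha_beta)
print(np.allclose(I_beta_alpha,
                  [[0, 1, 0], [-0.5, -0.5, 0.5], [0.5, 0.5, 0.5]]))   # True

# Convert the alpha-coordinates of 2x^2 - 3x + 4 into beta-coordinates.
c_alpha = np.array([4, -3, 2], dtype=float)
print(I_beta_alpha @ c_alpha)   # [-3.   0.5  1.5]
```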
The first column of #{}_\beta I_\alpha# should contain the #\beta#-coordinates of the first basis vector of #\alpha#. Those #\beta#-coordinates are \[\rv{0,-\frac12,\frac12}\] and these correspond to the vector \[0\, (x-1)-\frac12 (x^2-1)+\frac12(x^2+1)=1\] The first basis vector of #\alpha# is indeed equal to #1#. Verify for yourself that the second column contains the #\beta#-coordinates of #x# and the third column those of #x^2#.
Finally, we show that \(\left[ -3 , {{1}\over{2}} , {{3}\over{2}} \right] \) is indeed the #\beta#-coordinate vector of #2 x^2-3 x+4#:
\[
-3(x-1)+\frac12(x^2-1)+\frac32 (x^2+1)=2x^2-3x+4
\]
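The same verification can be carried out with coefficient vectors with respect to #1#, #x#, #x^2#; a short sketch:
```python
import numpy as np

# Coefficient vectors with respect to 1, x, x^2:
b1 = np.array([-1, 1, 0])       # x - 1
b2 = np.array([-1, 0, 1])       # x^2 - 1
b3 = np.array([ 1, 0, 1])       # x^2 + 1
target = np.array([4, -3, 2])   # 2x^2 - 3x + 4

# Linear combination with the beta-coordinates found above:
combo = -3 * b1 + 0.5 * b2 + 1.5 * b3
print(np.array_equal(combo, target))   # True
```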