### Linear maps: Matrices of Linear Maps

### Determining the matrix of a linear map

As a consequence of the theorem *Linear map determined by the image of basis*, a linear map from #\mathbb{R}^n# to #\mathbb{R}^m# is uniquely determined by the images of a basis #\basis{\vec{a}_1,\ldots ,\vec{a}_n}# of #\mathbb{R}^n#. If this basis is the standard basis #\basis{\vec{e}_1,\ldots,\vec{e}_n}#, then we can write down the corresponding matrix immediately using the theorem *Linear maps in coordinate space defined by matrices*. If the basis differs from the standard basis, however, we first have to perform some calculations. These calculations are based on the following technique.

*Matrix determination by row reduction.* Let #L:\mathbb{R}^n\to\mathbb{R}^m# be a linear map and let #\alpha=\basis{\vec{a}_1,\ldots,\vec{a}_n}# be a basis of #\mathbb{R}^n#. Form the #(n\times(n+m))#-matrix

\[\matrix{\vec{a}_1& L( \vec{a}_1)\\ \vec{a}_2& L( \vec{a}_2)\\ \vdots&\vdots\\ \vec{a}_n& L( \vec{a}_n)}\]

If we bring this matrix to *reduced echelon form*, then the #(n\times n)#-identity matrix appears on the left, while the images of the standard basis vectors under #L# appear on the right (as row vectors). In other words, the #(n\times m)#-submatrix on the right is the transpose of #L_{\varepsilon}#.
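The row reduction carries out, by hand, the solution of a matrix equation: the matrix with rows #\vec{a}_i# times #L_{\varepsilon}^\top# equals the matrix with rows #L(\vec{a}_i)#. A minimal sketch of this computation, assuming NumPy and using the data of the worked example below:

```python
import numpy as np

# Rows are the basis vectors a_i (data from the example below).
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 2.0],
              [-1.0, -2.0, -3.0]])
# Rows are the corresponding images L(a_i).
B = np.array([[2.0, 1.0, 0.0],
              [5.0, 0.0, 8.0],
              [-12.0, -2.0, -13.0]])

# Row-reducing (A | B) to (I | X) amounts to solving A @ X = B,
# and the solution X is the transpose of L_eps.
L_eps_T = np.linalg.solve(A, B)
L_eps = L_eps_T.T
# L_eps equals [[2, 5, 0], [1, 2, -1], [0, 2, 3]]
print(L_eps)
```

Since #A# has the basis vectors as rows, it is invertible, so `np.linalg.solve` always succeeds here.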

Consider the linear map #L:\mathbb{R}^3\to\mathbb{R}^3# determined by

\[\begin{array}{rcl}
L( \rv{ 1 , 0 , 0 } )&=&\rv{ 2 , 1 , 0 } \\
L( \rv{ 0 , 1 , 2 } )&=&\rv{ 5 , 0 , 8 } \\
L( \rv{ -1 , -2 , -3 } )&=&\rv{ -12 , -2 , -13 }
\end{array}\]

Determine the matrix #L_{\varepsilon}# of #L# with respect to the standard basis #\varepsilon=\basis{\vec{e}_1,\vec{e}_2,\vec{e}_3}#.

In terms of matrices, the conditions on #L_{\varepsilon}# can be written as

\[ \matrix{1 & 0 & 0 \\ 0 & 1 & 2 \\ -1 & -2 & -3 \\ }\, L_{\varepsilon}^\top = \matrix{2 & 1 & 0 \\ 5 & 0 & 8 \\ -12 & -2 & -13 \\ }\] where #L_{\varepsilon}^\top# is the *transpose* of #L_{\varepsilon}#. We find #L_{\varepsilon}^\top# by performing row reduction operations on the matrix

\[ \left(\left.\begin{array}{ccc}1 & 0 & 0 \\ 0 & 1 & 2 \\ -1 & -2 & -3 \end{array}\ \right|\begin{array}{ccc}2 & 1 & 0 \\ 5 & 0 & 8 \\ -12 & -2 & -13 \end{array}\right)\] until we reach the *reduced echelon form*:

\[\begin{array}{ll}
\left(\begin{array}{rrr|rrr} 1 & 0 & 0 & 2 & 1 & 0 \\ 0 & 1 & 2 & 5 & 0 & 8 \\ -1 & -2 & -3 & -12 & -2 & -13 \end{array}\right) & \\\\
\sim\left(\begin{array}{rrr|rrr} 1 & 0 & 0 & 2 & 1 & 0 \\ 0 & 1 & 2 & 5 & 0 & 8 \\ 0 & -2 & -3 & -10 & -1 & -13 \end{array}\right) & R_3\to R_3+R_1 \\\\
\sim\left(\begin{array}{rrr|rrr} 1 & 0 & 0 & 2 & 1 & 0 \\ 0 & -2 & -3 & -10 & -1 & -13 \\ 0 & 1 & 2 & 5 & 0 & 8 \end{array}\right) & R_2\leftrightarrow R_3 \\\\
\sim\left(\begin{array}{rrr|rrr} 1 & 0 & 0 & 2 & 1 & 0 \\ 0 & 1 & {{3}\over{2}} & 5 & {{1}\over{2}} & {{13}\over{2}} \\ 0 & 1 & 2 & 5 & 0 & 8 \end{array}\right) & R_2\to -{{1}\over{2}}R_2 \\\\
\sim\left(\begin{array}{rrr|rrr} 1 & 0 & 0 & 2 & 1 & 0 \\ 0 & 1 & {{3}\over{2}} & 5 & {{1}\over{2}} & {{13}\over{2}} \\ 0 & 0 & {{1}\over{2}} & 0 & -{{1}\over{2}} & {{3}\over{2}} \end{array}\right) & R_3\to R_3-R_2 \\\\
\sim\left(\begin{array}{rrr|rrr} 1 & 0 & 0 & 2 & 1 & 0 \\ 0 & 1 & {{3}\over{2}} & 5 & {{1}\over{2}} & {{13}\over{2}} \\ 0 & 0 & 1 & 0 & -1 & 3 \end{array}\right) & R_3\to 2R_3 \\\\
\sim\left(\begin{array}{rrr|rrr} 1 & 0 & 0 & 2 & 1 & 0 \\ 0 & 1 & 0 & 5 & 2 & 2 \\ 0 & 0 & 1 & 0 & -1 & 3 \end{array}\right) & R_2\to R_2-{{3}\over{2}}R_3
\end{array}

\] The #(3\times 3)#-submatrix on the right is equal to

\[L_{\varepsilon}^\top = \matrix{2 & 1 & 0 \\ 5 & 2 & 2 \\ 0 & -1 & 3 \\ }\] so the answer is:

\[L_{\varepsilon} = \matrix{2 & 5 & 0 \\ 1 & 2 & -1 \\ 0 & 2 & 3 \\ }\]
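As a quick check (a minimal sketch, assuming NumPy), we can apply #L_{\varepsilon}# to each basis vector and compare the results with the prescribed images:

```python
import numpy as np

# The matrix found above; it acts on column vectors: L(x) = L_eps @ x.
L_eps = np.array([[2, 5, 0],
                  [1, 2, -1],
                  [0, 2, 3]])

# Basis vectors a_1, a_2, a_3 as columns.
basis = np.array([[1, 0, -1],
                  [0, 1, -2],
                  [0, 2, -3]])
# Their prescribed images L(a_i) as columns.
images = np.array([[2, 5, -12],
                   [1, 0, -2],
                   [0, 8, -13]])

print(np.array_equal(L_eps @ basis, images))  # True
```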
