### Linear maps: Matrices of Linear Maps

### The matrix of a linear map

*Previously* we have seen that a linear map #L:V\to W# between two finite-dimensional vector spaces #V# and #W# is determined as soon as we know the images under #L# of the vectors of a basis #\alpha# for #V#. Now we show that, once we also fix a basis #\beta# for #W#, the map is described by a matrix.

Consider therefore the following diagram, in which #n=\dim{V}# and #m=\dim{W}#.

\[\begin{array}{ccccc}

V&& \overset{L}{\longrightarrow}&& W\\\downarrow \alpha&&&&\downarrow\beta\\ \mathbb{R}^n &&\overset{\beta L \alpha^{-1}}{\longrightarrow}&&\mathbb{R}^m\\

\end{array}\]

**The matrix of a linear map**

Let #m# and #n# be natural numbers, #V# and #W# vector spaces of dimension #n# and #m#, respectively, and let #\alpha# be a basis for #V# and #\beta# a basis for #W#.

Consider a linear map #L :V\rightarrow W#. The composite map #\beta L \alpha^{-1}: \mathbb{R}^n\rightarrow\mathbb{R}^m# maps #\alpha(\vec{x})# to #\beta (L\vec{x})#. This is a linear map from #\mathbb{R}^n# to #\mathbb{R}^m# and is therefore *determined by a matrix* whose columns are

\[

(\beta L \alpha^{-1})(\vec{e}_i)=\beta ( L (\vec{a}_i))\phantom{xxx} \text{ for }\phantom{x}i=1,\ldots ,n

\] where #\vec{a}_1,\ldots,\vec{a}_n# are the vectors of the basis #\alpha#. The #i#-th column of this matrix consists of the #\beta#-coordinates of the image #L\vec{a}_i#.

This matrix is indicated by #{}_\beta L_\alpha# and we call it the **matrix of #L# with respect to the bases #\alpha# and #\beta#**.

If #V=W# and #\beta=\alpha#, we denote the associated matrix also by #L_\alpha#; we speak of the **matrix of #L# with respect to the basis #\alpha#**.
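The definition can be carried out numerically. The following sketch (with made-up matrices #S#, #A#, #B#; none of these appear in the text) uses the fact that when a basis is stored as the columns of a matrix #B#, the #\beta#-coordinates of a vector #\vec{w}# are #B^{-1}\vec{w}#:

```python
import numpy as np

# Hypothetical example in R^2: L is given in standard coordinates by S,
# and the bases alpha, beta are stored as the columns of A and B.
S = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # L(x) = S @ x in standard coordinates
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # alpha = (a1, a2), columns of A
B = np.array([[1.0, 0.0],
              [1.0, 1.0]])   # beta = (b1, b2), columns of B

# The i-th column of the matrix of L with respect to alpha and beta
# is the beta-coordinate vector of L(a_i), i.e. B^{-1} (S a_i).
L_beta_alpha = np.linalg.solve(B, S @ A)
print(L_beta_alpha)   # [[ 2.  3.]
                      #  [-2.  0.]]
```

Solving the linear system with `np.linalg.solve` avoids forming #B^{-1}# explicitly, which is the standard numerical practice.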

Let #\alpha = \basis{x^2\cdot \euler^{x},x\cdot \euler^{x},\euler^{x}}# and let #V# be the subspace of the vector space of real functions spanned by #\alpha#, so that #\alpha# is a basis for #V#. Likewise, let #\beta = \basis{x^2\cdot \euler^{x},x\cdot \euler^{x},\euler^{x}-1}# and consider the subspace #W# of the vector space of real functions spanned by #\beta#. Then #\beta # is a basis for #W#.

The map #A:V\to W# given by #\left(A\,f\right)(x)=\int_{0}^x f(t)\,\dd t# is linear. Determine its matrix with respect to #\alpha# and #\beta#.

Integration by parts gives the images under #A# of the three vectors from #\alpha#:

\[\begin{array}{rcl}

A\left(x^2\cdot \euler^{x}\right) &=& \left(x^2-2\cdot x+2\right)\cdot \euler^{x}-2\\&=&{1}\cdot\color{blue}{(x^2\cdot \euler^{x})}-2\cdot\color{blue}{(x\cdot \euler^{x})}+2\cdot\color{blue}{(\euler^{x}-1)}\\

A\left(x\cdot \euler^{x}\right) &=&\left(x-1\right)\cdot \euler^{x}+1\\&=&{0}\cdot\color{blue}{(x^2\cdot \euler^{x})}+1\cdot\color{blue}{(x\cdot \euler^{x})}-1\cdot\color{blue}{(\euler^{x}-1)}\\

A\left(\euler^{x}\right) &=&\euler^{x}-1\\&=&{0}\cdot\color{blue}{(x^2\cdot \euler^{x})}+0\cdot\color{blue}{(x\cdot \euler^{x})}+1\cdot\color{blue}{(\euler^{x}-1)}\end{array}

\]

Here the elements of #\beta# are written in #\color{blue}{\text{blue}}#. Reading off the coordinates with respect to #\beta#, we find

\[{}_\beta A_\alpha = \matrix{1 & 0 & 0 \\ -2 & 1 & 0 \\ 2 & -1 & 1 \\ }\]
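As a quick check, this computation can be reproduced with SymPy. The symbol `E`, standing in for #\euler^{x}# so that coefficients can be compared, is our own device and not part of the text:

```python
import sympy as sp

x, t = sp.symbols('x t')
c1, c2, c3 = sp.symbols('c1 c2 c3')
E = sp.symbols('E')  # placeholder for exp(x), to compare coefficients

alpha = [t**2*sp.exp(t), t*sp.exp(t), sp.exp(t)]      # basis of V
beta = [x**2*sp.exp(x), x*sp.exp(x), sp.exp(x) - 1]   # basis of W

columns = []
for f in alpha:
    Af = sp.integrate(f, (t, 0, x))   # (A f)(x) = integral of f from 0 to x
    # Solve A f = c1*beta[0] + c2*beta[1] + c3*beta[2] for the coordinates
    residual = sp.expand(Af - (c1*beta[0] + c2*beta[1] + c3*beta[2]))
    poly = sp.Poly(residual.subs(sp.exp(x), E), x, E)
    sol = sp.solve(poly.coeffs(), [c1, c2, c3])
    columns.append([sol[c1], sol[c2], sol[c3]])

M = sp.Matrix(columns).T   # the columns hold the beta-coordinates
print(M)                   # Matrix([[1, 0, 0], [-2, 1, 0], [2, -1, 1]])
```

Requiring every coefficient of the residual (as a polynomial in #x# and `E`) to vanish gives a small linear system for each column, and the resulting matrix agrees with #{}_\beta A_\alpha# above.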
