We can now define a linear map between dual spaces that corresponds to the transposed matrix.

If # L :V\rightarrow W# is a linear map from the vector space #V# to the vector space #W#, then there is a corresponding linear map # L ^\star : W^\star \rightarrow V^\star #, defined as follows:

\[L ^\star (f) = f\, L \]

for each linear function #f:W\rightarrow \mathbb{R}# from #W^\star#. This map #L^\star# is called the **dual map** induced by #L#.

For #\vec{v}# in #V# and #f# in #W^\star# we have \[\left(L^\star (f)\right)(\vec{v}) = f(L(\vec{v}))\]

Let #P# be the vector space of all polynomials in #x# and let #s_a#, where #a# is a number, be the linear functional on #P# defined by #s_a(f(x)) = f(a)#; in words: substitution of #x# by #a#. The value of the dual map of differentiation #D:P\to P# at #s_a# is given by

\[\begin{array}{rcl} \left(D^\star(s_a)\right)(f(x))&=& s_a(D(f(x)))= s_a(f'(x))= f'(a)\end{array}\] That is, #D^\star(s_a)# is the linear functional assigning to a polynomial #f(x)# the value of its derivative at #a#.
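As a quick sanity check (not part of the original argument), the identity #\left(D^\star(s_a)\right)(f(x)) = f'(a)# can be verified with SymPy. The substitution point #a=2# and the test polynomial are arbitrary choices made here for illustration:

```python
import sympy as sp

x = sp.symbols('x')

def D(f):
    # Differentiation on the polynomial space P
    return sp.diff(f, x)

def s(a):
    # Substitution functional s_a: f(x) -> f(a)
    return lambda f: f.subs(x, a)

def dual(L):
    # Dual map: L*(phi) is the composition phi L
    return lambda phi: (lambda f: phi(L(f)))

# Arbitrary test data (hypothetical choices, not from the text)
a = 2
f = 3*x**3 - 5*x + 7

value = dual(D)(s(a))(f)             # (D*(s_a))(f) = s_a(f'(x))
expected = sp.diff(f, x).subs(x, a)  # f'(a)
print(value, expected)
```

Here #f'(x) = 9x^2 - 5#, so both routes give #f'(2) = 31#.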

Let #L:\mathbb{R}^2\to\mathbb{R}^2# be the linear map given by \[L(\rv{x,y})=\rv{a\cdot x+b\cdot y,c\cdot x+d\cdot y}\] Then the column vectors of the matrix #A=L_\varepsilon# of #L# are the images of the standard basis vectors under #L#:

\[A= \matrix{a&b\\ c & d}\] Using the coordinates #x# and #y# as the dual basis of the standard basis #\varepsilon#, we can calculate the dual map induced by #L# as follows:

\[\begin{array}{rcl}(L^\star(x))(\rv{p,q}) &=& x(L(\rv{p,q})) \\ &=& x(\rv{a\cdot p+b\cdot q,c\cdot p+d\cdot q})\\ & =& a\cdot p+b\cdot q\\ &=&(a\cdot x +b\cdot y)(\rv{p,q})\\ &&\text{and}\\ (L^\star(y))(\rv{p,q}) &=& y(L(\rv{p,q})) \\&=& y(\rv{a\cdot p+b\cdot q,c\cdot p+d\cdot q})\\ & =& c\cdot p+d\cdot q\\ &=&(c\cdot x +d\cdot y)(\rv{p,q})\end{array}\]

This means that\[L^\star(x) =a\cdot x +b\cdot y\quad\text{ and }\quad L^\star(y) =c\cdot x +d\cdot y\]

Therefore, the matrix of #L^\star# with respect to the dual basis #\delta = \basis{x,y}# is

\[L^\star_\delta = \matrix{a&c\\ b&d} = A^\top\]
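The computation above can be checked numerically. The sketch below (with hypothetical numeric values for #a#, #b#, #c#, #d#) treats the coordinate functions #x# and #y# as linear functionals on #\mathbb{R}^2# and confirms that #L^\star(x)# and #L^\star(y)# act as #a\cdot x + b\cdot y# and #c\cdot x + d\cdot y#:

```python
import numpy as np

# Hypothetical entries of the matrix A of L (not from the text)
a, b, c, d = 1.0, 2.0, 3.0, 4.0
A = np.array([[a, b], [c, d]])

def L(v):
    return A @ v

# The coordinate functions x and y as linear functionals on R^2
x = lambda v: v[0]
y = lambda v: v[1]

p, q = 5.0, 7.0
v = np.array([p, q])

# (L*(x))(p, q) = x(L(p, q)) should equal (a x + b y)(p, q), and similarly for y
assert np.isclose(x(L(v)), a*p + b*q)
assert np.isclose(y(L(v)), c*p + d*q)
```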

The theorem below shows that the dual map is associated with the transposed matrix.

Let #V# and #W# be finite-dimensional vector spaces. If the linear map #L:V\rightarrow W# has matrix #A# with respect to the bases #\alpha =\basis{ \vec{a}_1, \ldots , \vec{a}_n}# for #V# and #\beta =\basis{ \vec{b}_1, \ldots , \vec{b}_m}# for #W#, then the matrix of #L^\star# with respect to the dual bases #\beta^\star =\basis{ \vec{b}_1^\star , \ldots , \vec{b}_m^\star }# for #W^\star# and #\alpha^\star =\basis{\vec{a}_1^\star , \ldots , \vec{a}_n^\star}# for #V^\star# is equal to #A^\top#. In a formula: \[{}_{\alpha^\star}L^\star_{\beta^\star}= \left( {}_{\beta}L_{\alpha}\right)^\top\]

We need to determine the #i,j#-element of the matrix #A^\star# of #L^\star#. To this end we look for the #i#-th #\alpha^\star#-coordinate of the image vector #L^\star(\vec{b}_j^\star)# of the #j#-th #\beta^\star#-basis vector:

\[\begin{array}{rcl} A^\star_{ij} &=& \left(\alpha^\star L^\star (\vec{b}^\star_j)\right)_{i} \\ &=& \left(\alpha^\star \left(\vec{b}^\star_j L\right)\right)_{i} \\ &=& \vec{b}^\star_j L(\vec{a}_i) \\ &=& A_{ji} \\ &=& \left(A^{\top}\right)_{ij} \end{array}\] This shows #A^\star =A^\top#.

If #V # and #W# are coordinate spaces, then the result can be worded as follows: Let # L :\mathbb{R}^n \rightarrow \mathbb{R}^m# be a linear map with matrix #A#. Then the dual map # L^\star:\mathbb{R}^m \rightarrow \mathbb{R}^n# has matrix #A^\top#. In a formula: \(\left(L_A\right)^\star = L_{A^\top}\). This fact was announced *earlier*.
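For coordinate spaces, the identity \(\left(L_A\right)^\star = L_{A^\top}\) admits a direct numerical check: a functional #\varphi# on #\mathbb{R}^m# with coordinate vector #c# (with respect to the dual of the standard basis) acts as #\varphi(\vec{w}) = c\cdot\vec{w}#, and #L^\star(\varphi)# should have coordinate vector #A^\top c#. A minimal sketch with random data:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))   # matrix of L_A : R^2 -> R^3
c = rng.standard_normal(3)        # dual coordinates of a functional phi on R^3
v = rng.standard_normal(2)        # a test vector in R^2

# Direct definition: (L_A*(phi))(v) = phi(L_A(v)) = c . (A v)
lhs = c @ (A @ v)
# If L_A* has matrix A^T, then L_A*(phi) has coordinates A^T c
rhs = (A.T @ c) @ v
print(lhs, rhs)
```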

Below we list two properties of dual maps that we already know for matrix transposition. We recall that #L(V,W)# is a vector space.

Let #U#, #V#, and #W# be vector spaces.

- The map #L(V,W)\to L(W^\star,V^\star)# that assigns to every linear map #L:V\to W# the linear map #L^\star:W^\star\to V^\star#, is linear.
- If #L:V\to W# and #M:U\to V # are two linear maps, then #(L\,M)^\star = M^\star\,L^\star#.

Both properties can be shown to hold by writing out all the definitions involved:

1. For scalars #a# and #b#, linear maps #L# and #N# from #V# to #W#, and elements #\varphi# in #W^\star# and #\vec{v}# in #V# we have \[\begin{array}{rcl}\left((aL+bN)^\star(\varphi)\right)(\vec{v}) &=&\varphi((aL+bN)(\vec{v}))\\ &=& \varphi(aL(\vec{v}) + b N(\vec{v}))\\ &=& a\,\varphi(L(\vec{v})) + b\,\varphi(N(\vec{v})) \\ &=& a\left(L^\star(\varphi)\right)(\vec{v}) +b\left(N^\star(\varphi)\right)(\vec{v})\\ &=&\left( (aL^\star)(\varphi)+(bN^\star)(\varphi)\right)(\vec{v})\\ &=&\left( (aL^\star+bN^\star)(\varphi)\right)(\vec{v})\end{array}\] Because #\vec{v}# is arbitrary, we conclude that \((aL+bN)^\star(\varphi) = (aL^\star+bN^\star)(\varphi)\) for all #\varphi# in #W^\star#, so \((aL+bN)^\star = aL^\star+bN^\star\). This proves linearity of the map that assigns to #L# the dual map #L^\star#.

2. For #L# and #M# as given, arbitrary #\varphi# in #W^\star#, and #\vec{u}# in #U# we have \[\begin{array}{rcl}\left((L\,M)^\star(\varphi)\right)(\vec{u}) &=& \varphi((L\,M)(\vec{u}))\\ &=& \varphi(L(M(\vec{u}) ))\\ &=& \left(L^\star(\varphi)\right)(M(\vec{u}))\\ &=&\left(M^\star \left(L^\star(\varphi)\right)\right)(\vec{u})\\ &=&\left(\left(M^\star \,L^\star\right)(\varphi)\right)(\vec{u})\end{array}\] We conclude that \((L\,M)^\star(\varphi) = \left(M^\star \,L^\star\right)(\varphi)\) for all #\varphi# in #W^\star#, so \((L\,M)^\star= M^\star L^\star\).

If we apply the second law to linear maps #L_A# and #L_B# for matrices #A# and #B# of suitable sizes, then it reads, in terms of matrices: \[(A\,B)^\top = B^\top \, A^\top \] This is known from the *Rules of matrix multiplication*.
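Both properties, translated to matrices, can be confirmed numerically. The sketch below checks linearity of transposition and the reversal rule for products, using random matrices of compatible sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
N = rng.standard_normal((3, 4))   # same shape as A, for the linearity rule
B = rng.standard_normal((4, 2))   # compatible with A for the product rule

# Property 1: transposition is linear
assert np.allclose((2.0*A - 3.0*N).T, 2.0*A.T - 3.0*N.T)
# Property 2: (A B)^T = B^T A^T, with the order reversed
assert np.allclose((A @ B).T, B.T @ A.T)
```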

*Earlier* we saw that an elementary row operation on a matrix #A# corresponds to multiplication from the left by an elementary matrix #E#. In this case, #E^\top# is again an elementary matrix and, for #B = A^\top#, we have

\[ B\,E^\top =\left(E\,B^\top\right)^\top \]

We can interpret this for an arbitrary matrix #B# as follows: an **elementary column operation** (that is, multiplication from the right by #E^\top#) gives the same result as applying the corresponding elementary row operation (that is, multiplication from the left by #E#) to the transpose of #B# and transposing the result.

For example, below the first column is subtracted #4# times from the second column by subtracting the first row #4# times from the second row of the transposed matrix and transposing the result:

\[\matrix{1&4\\ 2&8\\ 3 & 12}\,\matrix{1&-4\\ 0&1} =\left(\matrix{1&0\\ -4&1}\,\matrix{1&2&3\\ 4&8&12}\right)^\top = \matrix{1&0\\ 2&0\\ 3 & 0}\]
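The same computation, carried out with NumPy on the matrices from the example, shows that the column operation on #B# and the transposed row operation on #B^\top# agree:

```python
import numpy as np

B = np.array([[1, 4], [2, 8], [3, 12]])
E = np.array([[1, 0], [-4, 1]])   # row operation: subtract 4 times row 1 from row 2

column_op = B @ E.T               # column operation directly on B
via_rows = (E @ B.T).T            # row operation on B^T, then transpose back

assert np.array_equal(column_op, via_rows)
print(column_op)
```

Both routes yield the matrix with columns #\rv{1,2,3}# and #\rv{0,0,0}#.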

Let #P# be the vector space of all polynomials in #x# and write #p(x)=x^3-x^2-3\cdot x-3#. By #L# we denote multiplication by #p(x)# on #P#. This is a linear map #P\to P#. By #s# we denote the linear functional determined by substitution of #x# by #-1#; in a formula: #s(f(x)) = f(-1)#.

There is a number #g# such that #L^\star(s) = g\cdot s#. What is this number?

#g = # #-2#

This follows from the calculation

\[\begin{array}{rcl} \left(L^\star(s)\right)(f(x)) & = & s(L(f(x)))\\ & = & s((x^3-x^2-3\cdot x-3)\cdot f(x))\\ & = & ((-1)^3-(-1)^2-3\cdot (-1)-3)\cdot f(-1)\\ & = & (-2)\cdot s(f(x))\\ & = & \left(-2 \,s \right)(f(x))\end{array}\]

This implies that \( L^\star(s) = -2 s\). Therefore, the answer is #-2#.
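The answer can also be checked with SymPy: the scalar #g# is #p(-1)#, and applying #s# after multiplication by #p(x)# gives #g# times #s#. The test polynomial below is an arbitrary choice made for illustration:

```python
import sympy as sp

x = sp.symbols('x')
p = x**3 - x**2 - 3*x - 3

def L(f):
    # Multiplication by p(x) on P
    return p * f

def s(f):
    # Substitution of x by -1
    return f.subs(x, -1)

f = 2*x**2 + x + 5        # hypothetical test polynomial

g = p.subs(x, -1)         # p(-1); the scalar with L*(s) = g . s
lhs = s(L(f))             # (L*(s))(f) = p(-1) . f(-1)
rhs = g * s(f)
print(g, lhs, rhs)
```

Here #g = p(-1) = -2#, and both sides evaluate to #-2\cdot f(-1)#.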