As with real functions, for mappings to a vector space we can define the sum of two mappings and the product of a mapping by a scalar.

Let # L :V\rightarrow W# and # M : V\rightarrow W# be two linear mappings. Then the **sum mapping** # L + M :V\rightarrow W# is determined by

\[

( L + M )\vec{x}= L \vec{x}+ M \vec{x}

\] If #\alpha# is a scalar, then the **scalar multiple** #\alpha \cdot L:V\rightarrow W# is determined by

\[

(\alpha\cdot L )\vec{x} = \alpha \cdot ( L \vec{x})

\]

The linear combination #2\cdot L_3-3\cdot L_4#, where #L_a:\mathbb{R}\to\mathbb{R}# for a real number #a# represents multiplication by #a#, is the mapping #L_{-6}#, because

\[\begin{array}{rcl}\left(2\cdot L_3-3\cdot L_4\right)(x) &=&\left(2\cdot L_3\right)(x)+\left(-3\cdot L_4\right)(x) \\ &&\phantom{xx}\color{blue}{\text{definition of addition}}\\&=& 2\cdot (L_3x)-3\cdot (L_4x)\\&&\phantom{xx}\color{blue}{\text{definition of scalar multiple}}\\&=& 2\cdot (3x)-3\cdot (4x)\\&&\phantom{xx}\color{blue}{\text{definition of }L_a}\\&=& -6x\\ &&\phantom{xx}\color{blue}{\text{simplified}}\\&=&L_{-6}(x)\\&&\phantom{xx}\color{blue}{\text{definition of }L_a}\end{array}\]
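The computation above can be checked numerically. The sketch below (with a hypothetical helper `L` standing in for #L_a#, multiplication by #a#) verifies that #2\cdot L_3-3\cdot L_4# and #L_{-6}# agree on a few sample inputs:

```python
# Numerical sketch: L(a) is the linear map L_a : R -> R, multiplication by a.
def L(a):
    """Return the map x -> a*x."""
    return lambda x: a * x

def combo(x):
    # (2*L_3 - 3*L_4)(x), following the definitions of sum and scalar multiple
    return 2 * L(3)(x) - 3 * L(4)(x)

# The linear combination agrees with L_{-6} at every sample point.
for x in [-2.0, 0.0, 1.5, 7.0]:
    assert combo(x) == L(-6)(x)
```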

These mappings are again linear:

Let #L# and #M# both be linear maps with the same domain and codomain, and let #\alpha# be a scalar.

The sum mapping #L+M# and the scalar multiple #\alpha\cdot L# are linear.

\[

\begin{array}{rcll}

(L+M)(\vec{x}+\vec{y})&=&L(\vec{x}+\vec{y})+M(\vec{x}+\vec{y})&\color{blue}{\text{definition of sum mapping}}\\

&=&L\vec{x}+L\vec{y}+M\vec{x}+M\vec{y}&\color{blue}{\text{linearity }L\text{ and }M}\\

&=&(L+M)\vec{x}+(L+M)\vec{y}&\color{blue}{\text{definition of sum mapping}}\\

(L+M)(\alpha\vec{x})&=&L (\alpha\vec{x})+M(\alpha\vec{x})&\color{blue}{\text{definition of sum mapping}}\\

&=& \alpha L\vec{x}+\alpha M\vec{x}&\color{blue}{\text{linearity of }L\text{ and }M}\\

&=& \alpha\left(L\vec{x}+M\vec{x}\right)&\color{blue}{\alpha\text{ factored out}}\\

&=& \alpha\left(L+M\right)\vec{x}&\color{blue}{\text{definition of sum mapping}}

\end{array}

\] The linearity of #\alpha\cdot L# follows in a similar manner.
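The two linearity properties of the sum mapping can also be verified numerically. In the sketch below, #L# and #M# are given by two arbitrarily chosen #(2\times 2)#-matrices (not taken from the text), and both properties are checked for sample vectors:

```python
import numpy as np

# Two concrete linear maps on R^2, defined by arbitrarily chosen matrices.
A = np.array([[1.0, 2.0], [0.0, -1.0]])   # defines L
B = np.array([[3.0, 0.0], [1.0, 4.0]])    # defines M

L = lambda v: A @ v
M = lambda v: B @ v
S = lambda v: L(v) + M(v)                 # the sum mapping L + M

x = np.array([1.0, -2.0])
y = np.array([0.5, 3.0])
alpha = -1.5

assert np.allclose(S(x + y), S(x) + S(y))       # additivity
assert np.allclose(S(alpha * x), alpha * S(x))  # homogeneity
```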

Consider the following linear differential equation for #y# as a function of #x#:

\[y''-x\cdot y'+2y=\sin(x)\] The left-hand side can be regarded as a linear mapping applied to the vector #y#. Indeed, let #D# again denote differentiation and define

\[

L _{f(x)}(y)=f(x)\cdot y

\] Check for yourself that the mapping #L _{f(x)}# is linear for every function #f(x)#. The left-hand side of the differential equation is

\[

y''-x\cdot y'+2y =(D^2+ L _{-x}D+ L_2) (y)

\] The mapping #D^2+ L_{-x}D+ L _2# is linear, because

- #D# and # L _{f(x)}# are linear
- #D^2# and #L _{-x}D# are linear due to the theorem
*Linearity of a composition of linear mappings*
- a linear combination of linear mappings is linear due to the theorem
*Linearity of sum and scalar multiple of linear mappings*

We did not say which vector spaces are the domain and codomain of the linear mappings. This may depend on the situation in which we view the differential equation. Often we can take the mappings from the vector space #V# of infinitely differentiable functions to itself.
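A symbolic sketch of this operator, using SymPy (the helper names `D` and `Lf` are illustrative, not from the text): applying #D^2+L_{-x}D+L_2# to an unspecified function #y(x)# should reproduce the left-hand side #y''-x\cdot y'+2y#.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# D is differentiation; Lf(f) is the map "multiplication by f(x)"
D = lambda g: sp.diff(g, x)
Lf = lambda f: (lambda g: f * g)

# the operator D^2 + L_{-x} D + L_2 applied to y(x)
expr = D(D(y(x))) + Lf(-x)(D(y(x))) + Lf(2)(y(x))

# it coincides with the left-hand side y'' - x*y' + 2*y
lhs = sp.diff(y(x), x, 2) - x * sp.diff(y(x), x) + 2 * y(x)
assert sp.simplify(expr - lhs) == 0
```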

The set #F# of all linear mappings from a vector space #V# to a vector space #W# is thus equipped with an addition and a scalar multiplication. With these operations, #F# is itself a vector space.

As with *compositions* of linear maps *determined by matrices*, the operations can be traced back to matrix operations:

Let #A# and #B# be two matrices of the same dimensions and let #L_ A# and #L_ B# be the corresponding linear mappings.

- The sum mapping # L_A +L_ B # is the linear mapping determined by the matrix #A+B#.
- For each scalar #\alpha#, the scalar multiple #\alpha\cdot L_A# is the linear mapping determined by the matrix #\alpha\cdot A #.

Suppose that both #A# and #B# are #(m\times n)#-matrices. For each vector #\vec{x}\in\mathbb{R}^n#, the vector #L_ A\vec{x}# is equal to the matrix product #A\vec{x}\in\mathbb{R}^m# and the vector #L_ B\vec{x}# to #B\vec{x}\in\mathbb{R}^m#. Therefore, for the sum mapping #L_ A +L_B :\mathbb{R}^n\rightarrow\mathbb{R}^m# we have

\[\begin{array}{rcl}

(L_A +L_B )\vec{x} &=& L_A \vec{x}+ L_B \vec{x} \\&&\phantom{xx}\color{blue}{\text{definition of the sum mapping}}\\&=& A\vec{x}+B\vec{x}\\&&\phantom{xx}\color{blue}{\text{definition of }L_A\text{ and } L_B}\\ &=&

(A+B)\vec{x} \\&&\phantom{xx}\color{blue}{\text{sum matrix property }}\\ &=&L_{A+B}\vec{x}\\ &&\phantom{xx}\color{blue}{\text{definition of }L_{A+B}}\end{array}

\] We conclude that the linear mapping #L_A+L_B# coincides with the linear mapping defined by the matrix #A+B#.

If #\alpha# is a scalar, then

\[\begin{array}{rcl}

(\alpha\cdot L_A)\vec{x} &=& \alpha\cdot (L_A \vec{x}) \\&&\phantom{xx}\color{blue}{\text{definition of scalar multiple of a mapping}}\\&=& \alpha\cdot A\vec{x}\\&&\phantom{xx}\color{blue}{\text{definition of }L_A}\\ &=&

(\alpha\cdot A)\vec{x}\\&&\phantom{xx}\color{blue}{\text{property of scalar multiple of a matrix}}\\ &=&L_{\alpha\cdot A}\vec{x}\\ &&\phantom{xx}\color{blue}{\text{definition of }L_{\alpha\cdot A}}\end{array}

\] We conclude that the linear mapping #\alpha\cdot L_A# coincides with the linear mapping defined by the matrix #\alpha\cdot A#.

In formulas:

\[\begin{array}{rcl}L_A+L_B &=& L_{A+B}\\ \alpha\cdot L_A &=&L_{\alpha\cdot A}\end{array}\]
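Both formulas are easy to check numerically. The sketch below uses arbitrarily chosen matrices and a sample vector (not taken from the text):

```python
import numpy as np

# Arbitrary test data: two (2x2)-matrices, a scalar, and a sample vector.
A = np.array([[2.0, -1.0], [0.0, 3.0]])
B = np.array([[1.0, 1.0], [4.0, -2.0]])
alpha = 2.5
x = np.array([1.0, -1.0])

# L_A + L_B coincides with L_{A+B}:
assert np.allclose(A @ x + B @ x, (A + B) @ x)
# alpha * L_A coincides with L_{alpha * A}:
assert np.allclose(alpha * (A @ x), (alpha * A) @ x)
```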

We conclude that linear combinations of linear mappings are linear mappings. Below are some examples.

Consider the linear maps #F# and #G# from #\mathbb{R}^2# to #\mathbb{R}^2# given by

\[\begin{array}{rcl}F(\rv{x,y}) &=& \rv{4y-3x,-3x-4y}\\

G(\rv{x,y}) &=& \rv{2y-x, -3x-3y}\end{array}\]

Determine a vector #\rv{ax+by,cx+dy}# with real numbers #a#, #b#, #c#, and #d# that gives a function rule for

\[2\cdot F -8\cdot G\]

#(2\cdot F -8\cdot G)(\rv{x,y})=# # \rv{2x-8y, 18x+16y}#

The answer can be found as follows:

\[\begin{array}{rcl}

(2\cdot F-8\cdot G)(\rv{x,y}) &=&2\cdot F(\rv{x,y})-8\cdot G(\rv{x,y})\\

&&\phantom{xxx}\color{blue}{\text{definitions of addition and scalar multiplication}}\\

&=&2\cdot \rv{4y-3x,-3x-4y}-8\cdot \rv{2y-x,-3x-3y}\\

&&\phantom{xxx}\color{blue}{\text{function rules for }F\text{ and } G\text{ filled in}}\\

&=& \rv{2x-8y,18x+16y}\\

&&\phantom{xxx}\color{blue}{\text{expression simplified}}\\

\end{array}\]
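Since #F# and #G# are linear maps on #\mathbb{R}^2#, the computation can also be carried out on their matrices, using the formulas #L_A+L_B=L_{A+B}# and #\alpha\cdot L_A=L_{\alpha\cdot A}#. A numerical sketch (the matrix names `A_F`, `A_G` are illustrative):

```python
import numpy as np

# Matrices of F and G with respect to the standard basis of R^2:
A_F = np.array([[-3.0, 4.0], [-3.0, -4.0]])   # F(x, y) = (4y - 3x, -3x - 4y)
A_G = np.array([[-1.0, 2.0], [-3.0, -3.0]])   # G(x, y) = (2y - x, -3x - 3y)

# 2*F - 8*G corresponds to the matrix 2*A_F - 8*A_G.
C = 2 * A_F - 8 * A_G

# C should represent (x, y) -> (2x - 8y, 18x + 16y), as computed above.
assert np.allclose(C, np.array([[2.0, -8.0], [18.0, 16.0]]))
```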