The usual concepts for a map, such as injectivity, surjectivity, bijectivity, and the inverse map, are discussed here. Below, #V# and #W# are vector spaces. We recall that, for #L :V\rightarrow V#, we write #L^2# for the composition #L\, L# and #L^n= L^{n-1} L# for #n=3,4,\ldots\ #. By #L^0# we mean the identity map #I:V\to V#.
The map #L:V\to W# is called
- injective if for every #\vec{x},\vec{y}\in V# the following holds: if #L(\vec{x})=L(\vec{y})# then #\vec{x}=\vec{y}#;
- surjective if for any #\vec{y}\in W# there exists a vector #\vec{x}\in V# with #L(\vec{x})=\vec{y}#;
- bijective if it is injective and surjective.
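To make these definitions concrete, here is a minimal Python sketch (the helper names `is_injective` and `is_surjective` are ours, and finite lists stand in for #V# and #W#) that tests the two properties by brute force:

```python
def is_injective(f, domain):
    # No two distinct inputs may share an image.
    images = [f(x) for x in domain]
    return len(images) == len(set(images))

def is_surjective(f, domain, codomain):
    # Every element of the codomain must be hit by some input.
    return set(codomain).issubset({f(x) for x in domain})

V = [-2, -1, 0, 1, 2]
print(is_injective(lambda x: 3 * x, V))    # True: 3x = 3y forces x = y
print(is_injective(lambda x: x * x, V))    # False: (-1)^2 = 1^2
print(is_surjective(lambda x: -x, V, V))   # True: y is hit by x = -y
```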
If the linear map #L# is bijective, then #L# is invertible (in other words, has an inverse), meaning there is a map #L^{-1}:W\to V# such that #L\, L^{-1} =I_W# and # L^{-1} L = I_V#. The map #L^{-1}# is called the inverse (map) of #L#. In this case we call #L# an isomorphism.
If there exists an isomorphism of #V# to #W#, then #V# and #W# are called isomorphic.
If #W=V# and #L# is invertible, then for each natural number #n# we denote by #L^{-n}# the composition #\left(L^{-1}\right)^n#.
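For instance, if #L:\mathbb{R}\to\mathbb{R}# is given by #L(x)=2x#, then #L^{-1}(x)=\frac{x}{2}#, so #L^{-3}=\left(L^{-1}\right)^3# is the map given by #L^{-3}(x)=\frac{x}{8}#.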
The statements about the existence of #L^{-1}# are valid for all bijective maps, not just linear ones; therefore we do not give a separate proof here.
Let #L:\mathbb{R}\to\mathbb{R}# be the map with #L(x)=a\cdot x+b# for fixed real numbers #a# and #b#.
Then #L# is injective if and only if #a\ne0#, because
- if #a\ne0# and #x,y\in\mathbb{R}# satisfy #L(x)=L(y)#, then #a\,x+b=a\,y+b# and thus #a\,(x-y)=0#, so that #x=y#, showing that #L# is injective, and
- if #a=0#, then #x=0# and #y=1# satisfy #L(x)=b=L(y)#, showing that #L# is not injective.
If #a\ne0#, then #L# is also surjective: for any #y\in\mathbb{R}#, the number #x=\frac{1}{a}y-\frac{b}{a}# satisfies #L(x)=a\left(\frac{1}{a}y-\frac{b}{a}\right)+b=y#.
In particular, #L# is bijective if and only if #a\ne0#, in which case the inverse map is equal to
\[L^{-1}(x) = \frac{1}{a}x-\frac{b}{a}\]
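As a quick sanity check of this formula, the following Python sketch (the choices #a=3#, #b=5# and the sample points are arbitrary, with #a\ne0#) verifies that #L# and #L^{-1}# compose to the identity in both orders:

```python
a, b = 3.0, 5.0                      # arbitrary, a must be nonzero
L     = lambda x: a * x + b          # L(x) = a*x + b
L_inv = lambda x: x / a - b / a      # the inverse derived above

for x in (-2.0, 0.0, 7.5):
    assert abs(L(L_inv(x)) - x) < 1e-12   # L(L^{-1}(x)) = x
    assert abs(L_inv(L(x)) - x) < 1e-12   # L^{-1}(L(x)) = x
```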
The map #L# is linear if and only if #b=0#. In that case, #L^{-1}# is also linear. This is no coincidence, as the theorem below shows.
For each vector space #V#, the identity map #I: V \rightarrow V# given by #I(\vec{v})=\vec{v}# is an isomorphism.
When #V# and #W# are both vector spaces, the zero map #O: V \rightarrow W# given by #O(\vec{v})=\vec{0}# is not an isomorphism if #V# is not trivial, since it then fails to be injective.
Below we will see that #L^{-1}# is a linear map as well.
Two isomorphic vector spaces are essentially identical. By this we mean that the names of the vectors and the like may vary, but that, after application of the bijective map (read: the renaming), one vector space becomes identical to the other.
We will see later that any real finite-dimensional vector space of dimension #n# is isomorphic to a coordinate space. Therefore, after an appropriate identification, such a vector space can be viewed as #\mathbb{R}^n#.
The inverse of an invertible linear map is also invertible and linear:
If the linear map #L: V\rightarrow W# is a bijection, then the inverse map #L^{-1}: W \rightarrow V# is also a linear map.
If #\vec{v}#, #\vec{w}\in W# and #\alpha#, #\beta# are scalars, then there are vectors #\vec{x}#, #\vec{y} \in V# with #L(\vec{x})=\vec{v}# and #L(\vec{y}) =\vec{w}# (because #L# is surjective). The linearity of #L# then yields \[ L(\alpha \vec{x} +\beta \vec{y} )=\alpha \vec{v} + \beta \vec{w}\] so # L^{-1}(\alpha \vec{v} + \beta \vec{w})= \alpha \vec{x} + \beta \vec{y}#. On the other hand we have #\vec{x}=L ^{-1}(\vec{v})# and #\vec{y}=L^{-1}(\vec{w})#. We conclude:
\[
L^{-1}(\alpha \vec{v} + \beta \vec{w}) = \alpha L ^{-1}(\vec{v}) +
\beta L ^{-1}(\vec{w})
\]
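The identity just derived can also be illustrated numerically. The following Python sketch (the invertible matrix, the vectors, and the scalars are arbitrary test data, not from the text) checks the linearity of #L^{-1}# for the map #L(\vec{x})=A\vec{x}#:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])           # det(A) = 1, so A is invertible
A_inv = np.linalg.inv(A)             # matrix of the inverse map

v = np.array([1.0, -3.0])
w = np.array([4.0, 0.5])
alpha, beta = 2.0, -1.5

# L^{-1}(alpha*v + beta*w) == alpha*L^{-1}(v) + beta*L^{-1}(w)
lhs = A_inv @ (alpha * v + beta * w)
rhs = alpha * (A_inv @ v) + beta * (A_inv @ w)
assert np.allclose(lhs, rhs)
```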
If a square matrix #A# has an inverse, then the corresponding linear map can also be inverted:
If #A# is an invertible #(n\times n)#-matrix, then the inverse of the linear map #L_A:\mathbb{R}^n \to \mathbb{R}^n# determined by #A# is equal to the linear map #L_{A^{-1}}:\mathbb{R}^n \to \mathbb{R}^n# determined by the matrix #A^{-1}#.
We need to verify that #L_AL_{A^{-1}}=L_{A^{-1}} L_A =I#. This follows immediately from the theorem Composition of maps defined by matrices:
\[\begin{array}{rcl}L_AL_{A^{-1}} &=&L_{A\,A^{-1}}\\ &&\phantom{xxx}\color{blue}{\text{composition of maps defined by matrices}}\\ &=&L_{I}\\ &&\phantom{xxx}\color{blue}{\text{definition inverse of a matrix}}\\ &=&I\\ &&\phantom{xxx}\color{blue}{\text{multiplication by the identity matrix is the identity map}}\end{array}\]
Likewise, it can be shown that \(L_{A^{-1}}L_A=I\).
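The theorem can be illustrated numerically as well. In the following Python sketch (the invertible matrix is an arbitrary example, not taken from the text), the products #A\,A^{-1}# and #A^{-1}A# are the identity matrix, so #L_A L_{A^{-1}}=L_{A^{-1}}L_A=I#:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 5.0]])           # det(A) = -1, so A is invertible
I = np.eye(2)
assert np.allclose(A @ np.linalg.inv(A), I)   # L_A L_{A^{-1}} = I
assert np.allclose(np.linalg.inv(A) @ A, I)   # L_{A^{-1}} L_A = I
```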
Later we will derive that every linear map, after passing to a coordinate space, can be written as #L_A# for a suitable matrix #A#, and that this map is invertible if and only if #A# is invertible.
The linear map #L_A:\mathbb{R}^2\to \mathbb{R}^2#
determined by the matrix \[ A = \matrix{22 & 97 \\ -5 & -22 \\ }\] is invertible. The inverse is a linear map #L_B# determined by a #(2\times2)#-matrix #B#.
Calculate this #(2\times2)#-matrix #B#.
#B=\matrix{-22 & -97 \\ 5 & 22 \\ }#
Because #L_A^{-1} = L_{A^{-1}}#, we can calculate #B# by inverting the matrix #A#. The determinant of #A# equals #22\cdot(-22)-97\cdot(-5)=-484+485=1#, so the standard formula for the inverse of a #(2\times2)#-matrix gives
\[ \begin{array}{rcl}A^{-1} &=&{ \matrix{22 & 97 \\ -5 & -22 \\ }}^{-1} \\ &=& \dfrac{1}{\det(A)}\matrix{-22 & -97 \\ 5 & 22 \\ } \\ &=& \matrix{-22 & -97 \\ 5 & 22 \\ }\end{array}\]
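The answer can also be verified with a short Python check, using the matrices from the exercise: #B# must satisfy #A\,B=B\,A=I#.

```python
import numpy as np

A = np.array([[22, 97],
              [-5, -22]])
B = np.array([[-22, -97],
              [5, 22]])
assert np.allclose(A @ B, np.eye(2))   # A B = I
assert np.allclose(B @ A, np.eye(2))   # B A = I
```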