A linear isometry leaves the inner products of pairs of vectors from a system unchanged when we pass to their images. In the following theorem we characterize the notion of linear isometry by means of orthonormal systems. Its usefulness will become clear when we discuss matrices of orthogonal maps.

Let #V# and #W# be real inner product spaces. For a linear map #L:V\rightarrow W#, the following two statements are equivalent:

- #L# is an isometry.
- For each orthonormal system #\vec{a}_1,\ldots ,\vec{a}_n# in #V#, the system #L(\vec{a}_1),\ldots , L(\vec{a}_n)# is orthonormal in #W#.

We prove each of the two implications separately.

#1\Rightarrow 2# If #L# is an isometry, then inner product and length are invariant under #L#. Let #\vec{a}_1,\ldots ,\vec{a}_n# be an orthonormal system. Then the inner product #\dotprod{L(\vec{a}_i)}{L(\vec{a}_j)}# is equal to #\dotprod{\vec{a}_i}{\vec{a}_j}#, and therefore equal to #1# if #i=j#, and equal to #0# if #i\neq j#. This means that #L(\vec{a}_1),\ldots,L(\vec{a}_n)# is an orthonormal system.

#2\Rightarrow 1# Let #\vec{x}\in V#. We show that #\norm{L(\vec{x})}=\norm{\vec{x}}#. If #\vec{x}=\vec{0}#, then this holds because both sides are equal to #0#. Suppose now that #\vec{x}# is not equal to the zero vector. Then #\frac{1}{\norm{\vec{x}}}\vec{x}# has length #1#, so it is an orthonormal system consisting of a single vector. By statement 2, therefore, \[\norm{L\left(\frac{1}{\norm{\vec{x}}}\vec{x}\right)}=1\] We deduce from this:

\[\begin{array}{rcl}\norm{L(\vec{x})}&=&\norm{\norm{\vec{x}}\cdot L\left(\frac{1}{\norm{ \vec{x}}}\vec{x}\right)}\\ &&\phantom{xx}\color{blue}{\text{linearity of }L}\\ &=&\norm{\vec{x}}\cdot \norm{L\left(\frac{1}{\norm{\vec{x}}}\vec{x}\right)}\\ &&\phantom{xx}\color{blue}{\text{homogeneity of the norm}}\\ &=&\norm{\vec{x}}\cdot 1\\ &&\phantom{xx}\color{blue}{\text{the equality deduced above}}\\&=&\norm{\vec{x}}\end{array}\] Since #L# is linear and preserves the norm of every vector, #L# is an isometry.
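The implication #1\Rightarrow 2# can be illustrated numerically. The following sketch (not part of the proof) takes a rotation of #\mathbb{R}^2# with the standard inner product, a familiar example of a linear isometry, and checks that it maps an orthonormal system onto an orthonormal system. The function names `dot` and `rotate` are our own choices for the illustration.

```python
import math

def dot(u, v):
    """Standard inner product on R^2."""
    return sum(x * y for x, y in zip(u, v))

def rotate(theta, v):
    """Rotation by angle theta: a linear isometry of R^2."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

# An orthonormal system in R^2: here the standard basis.
a1, a2 = (1.0, 0.0), (0.0, 1.0)

theta = 0.7  # an arbitrary angle
b1, b2 = rotate(theta, a1), rotate(theta, a2)

# The image system is again orthonormal (up to rounding):
print(dot(b1, b1), dot(b2, b2), dot(b1, b2))
```

The same check with any other angle, or with any other orthonormal starting system, gives the same pattern: unit lengths and zero inner product for the images.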

As in the case of orthogonal maps, #L# maps orthonormal systems onto orthonormal systems. If #V=W#, then, in view of the proposition *Orthogonal maps and orthonormal systems*, the theorem says that a linear map is a linear isometry if and only if it is orthogonal.

If #V# is finite-dimensional, then we need only study the image of a single orthonormal basis in order to conclude that the map is an isometry.

Let #V# and #W# be real inner product spaces. Suppose that #V# has finite dimension #m#. For a linear map #L:V\rightarrow W#, the following three statements are equivalent:

- #L# is an isometry.
- For each orthonormal system #\vec{a}_1,\ldots ,\vec{a}_n# in #V#, the system #L(\vec{a}_1),\ldots , L(\vec{a}_n)# is orthonormal in #W#.
- There is an orthonormal basis #\vec{a}_1,\ldots ,\vec{a}_m# of #V# such that the system #L(\vec{a}_1),\ldots , L(\vec{a}_m)# is orthonormal in #W#.

It is sufficient to prove the following chain of implications: #1\Rightarrow 2 \Rightarrow 3 \Rightarrow 1#.

#1.\Rightarrow 2.# This follows from the previous theorem.

#2.\Rightarrow 3.# By use of the *Gram-Schmidt procedure* we can find an orthonormal basis #\vec{a}_1,\ldots ,\vec{a}_m# of #V#. In particular, this is an orthonormal system, so it is mapped onto an orthonormal system in #W# because of the assumption that statement #2.# holds.
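The *Gram-Schmidt procedure* invoked here can be sketched numerically for #\mathbb{R}^n# with the standard inner product: subtract from each vector its projections onto the orthonormal vectors found so far, then normalize. This is an illustrative implementation under those assumptions; the name `gram_schmidt` is our own.

```python
import math

def dot(u, v):
    """Standard inner product on R^n."""
    return sum(x * y for x, y in zip(u, v))

def gram_schmidt(vectors):
    """Turn a basis of R^n into an orthonormal basis.

    Assumes the input vectors are linearly independent.
    """
    ortho = []
    for v in vectors:
        # Subtract the projections onto the orthonormal vectors found so far.
        w = list(v)
        for b in ortho:
            c = dot(w, b)
            w = [wi - c * bi for wi, bi in zip(w, b)]
        # Normalize the remainder to length 1.
        length = math.sqrt(dot(w, w))
        ortho.append([wi / length for wi in w])
    return ortho

basis = gram_schmidt([[1.0, 1.0], [1.0, 0.0]])
print(basis)  # an orthonormal basis of R^2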

#3.\Rightarrow 1.# Let #\basis{\vec{a}_1,\ldots ,\vec{a}_m}# be an orthonormal basis of #V#, so that #\basis{L(\vec{a}_1),\ldots ,L(\vec{a}_m)}# is an orthonormal system. Further, let #\vec{x}\in V#. If #\vec{x}=\lambda_1 \vec{a}_1 +\cdots + \lambda_m \vec{a}_m#, then #L(\vec{x})=\lambda_1 L(\vec{a}_1) + \cdots + \lambda_m L(\vec{a}_m)#. Now apply the *Pythagorean theorem* to both expressions (the square of the length of #\lambda_i \vec{a}_i# as well as of #\lambda_i L(\vec{a}_i)# is equal to #\lambda_i^2#):

\[\norm{\vec{x}}^2 =\lambda_1^2 + \cdots + \lambda_m^2 \quad \hbox{and} \quad \norm{L(\vec{x})}^2 = \lambda_1^2 + \cdots + \lambda_m^2\] From this, it immediately follows that #\norm{L(\vec{x})}=\norm{\vec{x}}#, and so (because #L# is linear) that #L# is an isometry.
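The argument in #3.\Rightarrow 1.# can be illustrated with a concrete numerical sketch (our own ad-hoc example, not part of the proof): a linear map #L:\mathbb{R}^2\to\mathbb{R}^3# that sends the standard basis to an orthonormal system in #\mathbb{R}^3# preserves the norm of every vector, with both inner product spaces carrying the standard inner product.

```python
import math

def norm(v):
    """Standard norm on R^n."""
    return math.sqrt(sum(x * x for x in v))

# Images of the standard basis vectors e1, e2: an orthonormal system in R^3.
La1 = (1 / math.sqrt(2), 1 / math.sqrt(2), 0.0)
La2 = (1 / math.sqrt(2), -1 / math.sqrt(2), 0.0)

def L(x):
    """The linear map determined by e1 -> La1, e2 -> La2."""
    return tuple(x[0] * a + x[1] * b for a, b in zip(La1, La2))

x = (3.0, -4.0)
print(norm(x), norm(L(x)))  # both equal 5.0 (up to rounding)
```

Here #\norm{\vec{x}}^2 = 3^2 + (-4)^2 = 25# and, by the Pythagorean theorem applied to the orthonormal images, #\norm{L(\vec{x})}^2 = 3^2 + (-4)^2 = 25# as well.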

Let #V=\mathbb{R}^2# be the inner product space with the standard inner product and let #W =\mathbb{R}^2# be the inner product space with inner product #B# given by \[B(\rv{x_1,y_1},\rv{x_2,y_2}) = 4 x_1\cdot x_2+ 9 y_1\cdot y_2\] Determine the matrix #L_{\varepsilon}# of a linear isometry #L:V\to W#.

#L_{\varepsilon} = # # \matrix{\frac{1}{2}&0\\ 0&\frac{1}{3}}#

According to theorem *Linear isometries of finite-dimensional inner product spaces and orthonormal systems*, the system #\basis{L(\vec{e}_1),L(\vec{e}_2)}#, where #\varepsilon = \basis{\vec{e}_1,\vec{e}_2}# is the standard basis of #V#, must be orthonormal in #W#. An obvious orthonormal basis for #W# is \[ \basis{\rv{\frac{1}{2},0}, \rv{0,\frac{1}{3} } } \] The matrix \[L_{\varepsilon}= \matrix{\frac{1}{2}&0\\ 0&\frac{1}{3}}\] transforms the standard basis of #V# to this orthonormal basis of #W# and so it is the matrix of a linear isometry #V\to W#.

The answer is not unique: the product of a correct answer with an orthogonal #(2\times2)#-matrix on the right is also the matrix of a linear isometry #V\to W#.
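The answer can be checked numerically. In this sketch, `B` implements the inner product of the exercise (reading it as #B = 4x_1x_2 + 9y_1y_2# in the components of the two vectors), and we verify that the columns of #L_{\varepsilon}#, the images of the standard basis vectors, form an orthonormal system with respect to #B#. Exact rational arithmetic avoids rounding issues.

```python
from fractions import Fraction

def B(u, v):
    """The inner product on W from the exercise: 4*x1*x2 + 9*y1*y2."""
    return 4 * u[0] * v[0] + 9 * u[1] * v[1]

# Columns of L_eps: the images of the standard basis vectors e1, e2.
Le1 = (Fraction(1, 2), Fraction(0))
Le2 = (Fraction(0), Fraction(1, 3))

# The image of the standard basis is orthonormal with respect to B,
# so L_eps is the matrix of a linear isometry V -> W.
print(B(Le1, Le1), B(Le2, Le2), B(Le1, Le2))  # prints: 1 1 0
```

The same check applied to #L_{\varepsilon}Q# for an orthogonal #(2\times2)#-matrix #Q# again yields an orthonormal image system, in line with the non-uniqueness remark above.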