### Inner Product Spaces: Orthogonal projections

### Gram-Schmidt in matrix form

The *Gram-Schmidt procedure* discussed earlier, and the corresponding relationship between a system of vectors and an orthonormal system, can also be captured in a matrix decomposition. We first formulate the decomposition as a theorem, and then show how the associated matrices can be found.

**QR decomposition** If #A# is an #(m \times n)#-matrix with linearly independent column vectors, then this matrix can be written as a product \[A=Q\,R\] where #R# is an upper triangular #(m\times n)#-matrix, and the columns of the #(m \times m)#-matrix #Q# form an orthonormal system. This factorization is called a **QR decomposition** of #A#.
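As a numerical illustration (a sketch assuming NumPy is available, with a hypothetical matrix not taken from the text), `numpy.linalg.qr` with `mode="complete"` computes exactly such a factorization: a square #Q# with orthonormal columns and an #(m\times n)# upper triangular #R#:

```python
import numpy as np

# Hypothetical 3x2 matrix with linearly independent columns
A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 0.0]])

# mode="complete" returns a square (m x m) matrix Q
# and an (m x n) upper triangular matrix R
Q, R = np.linalg.qr(A, mode="complete")

print(np.allclose(Q @ R, A))            # Q R reproduces A
print(np.allclose(Q.T @ Q, np.eye(3)))  # columns of Q are orthonormal
```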

There are several ways of finding the QR decomposition of a matrix. Here we discuss the method arising from the proof of the above theorem.

**Determination of a QR decomposition** For a given #(m\times n)#-matrix #A# whose columns are linearly independent, a QR decomposition can be found by the following extension of the *Gram-Schmidt procedure*:

Start with the matrices #P=A# and #S=I_n# (the identity matrix). For #k=1,\ldots,n# execute the following changes:

- Calculate #\vec{q}^*_k= \vec{a}_k - \sum_{j=1}^{k-1}(\dotprod{\vec{a}_k}{\vec{q}_j})\,\vec{q}_j#.
- Calculate the length #\norm{\vec{q}^*_k}#.
- Calculate #\vec{q}_k=\frac{1}{\norm{\vec{q}^*_k}}\vec{q}^*_k#.
- Replace the #k#-th column of #P# by #\vec{q}_k#.
- Replace #S# by #D_{k,\norm{\vec{q}_k^*}}\,E_{1,k,(\dotprod{\vec{a}_k}{\vec{q}_1})}\, E_{2,k,(\dotprod{\vec{a}_k}{\vec{q}_2})}\,\cdots\,E_{k-1,k,(\dotprod{\vec{a}_k}{\vec{q}_{k-1}})}\,S#.

Determine an #(m\times (m-n))#-matrix #B# whose #m-n# columns form an orthonormal basis for the *orthogonal complement* of the span of the #n# columns of #P#.

Write #Q = \matrix{P&B}# and #R = \matrix{S\\ 0}#, where #0# stands for the zero matrix with dimensions #{(m-n)\times n}#.

Then #A = Q\, R# is a QR decomposition of #A#.
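The loop above can be sketched in code. The following is a minimal sketch (assuming NumPy, with exact arithmetic replaced by floating point) that produces the matrix #P# with orthonormal columns and the #(n\times n)# upper triangular factor #S#, so that #A = P\,S#; the extension of #P# by the matrix #B# to a square #Q# is omitted. Instead of accumulating #S# as a product of the elementary matrices #D# and #E#, the entries #\dotprod{\vec{a}_k}{\vec{q}_j}# and #\norm{\vec{q}^*_k}# are written into #S# directly, which yields the same matrix.

```python
import numpy as np

def extended_gram_schmidt(A):
    """Gram-Schmidt loop as described above.

    Returns P (m x n, orthonormal columns) and S (n x n, upper
    triangular) with A = P @ S.  Assumes the columns of A are
    linearly independent.
    """
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    P = np.zeros((m, n))
    S = np.zeros((n, n))
    for k in range(n):
        q_star = A[:, k].copy()
        for j in range(k):
            S[j, k] = A[:, k] @ P[:, j]   # inner product <a_k, q_j>
            q_star -= S[j, k] * P[:, j]   # subtract the projection on q_j
        S[k, k] = np.linalg.norm(q_star)  # length of q*_k
        P[:, k] = q_star / S[k, k]        # normalize to get q_k
    return P, S
```

For #m>n#, the missing block #B# could be obtained, for instance, from an orthonormal basis of the null space of #P^\top#.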

We describe the steps of the *extended Gram-Schmidt procedure* in the table below. The left-hand column shows the changes to the columns of the matrix #A# prescribed by the Gram-Schmidt procedure, leading to a matrix #Q# whose columns form an orthonormal system. The middle column contains, for each step, the elementary matrix by which the matrix on that line must be multiplied from the right in order to recover the matrix on the previous line.

The first line shows the normalization of the first column of #A# (here this column already has length #1#, so the contribution is the identity matrix). The second line shows how a scalar multiple of the first column is subtracted from the second column so that the latter becomes perpendicular to the first. The corresponding contribution to #R# is a matrix whose #(1,2)# entry is the only entry differing from the corresponding entry of the identity matrix #I_3#. And so on.

\[ \begin{array}{rclcl}\text{from } A\text{ toward }Q&\phantom{xx}&\text{contribution to }R&\phantom{xx}&\text{build-up of }R\\

\hline

\matrix{-{{2}\over{3}} & -{{8}\over{3}} & -{{1}\over{3}} \\ -{{2}\over{3}} & -{{5}\over{3}} & -{{4}\over{3}} \\ {{1}\over{3}} & {{1}\over{3}} & -{{1}\over{3}} \\ } && \matrix{1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ } && \matrix{1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ } \\ \matrix{-{{2}\over{3}} & -{{2}\over{3}} & -{{1}\over{3}} \\ -{{2}\over{3}} & {{1}\over{3}} & -{{4}\over{3}} \\ {{1}\over{3}} & -{{2}\over{3}} & -{{1}\over{3}} \\ } && \matrix{1 & 3 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ } && \matrix{1 & 3 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ } \\ \matrix{-{{2}\over{3}} & -{{2}\over{3}} & -{{1}\over{3}} \\ -{{2}\over{3}} & {{1}\over{3}} & -{{4}\over{3}} \\ {{1}\over{3}} & -{{2}\over{3}} & -{{1}\over{3}} \\ } && \matrix{1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ } && \matrix{1 & 3 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ } \\ \matrix{-{{2}\over{3}} & -{{2}\over{3}} & {{1}\over{3}} \\ -{{2}\over{3}} & {{1}\over{3}} & -{{2}\over{3}} \\ {{1}\over{3}} & -{{2}\over{3}} & -{{2}\over{3}} \\ } && \matrix{1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ } && \matrix{1 & 3 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ } \\ \matrix{-{{2}\over{3}} & -{{2}\over{3}} & {{1}\over{3}} \\ -{{2}\over{3}} & {{1}\over{3}} & -{{2}\over{3}} \\ {{1}\over{3}} & -{{2}\over{3}} & -{{2}\over{3}} \\ } && \matrix{1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ } && \matrix{1 & 3 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ } \\ \matrix{-{{2}\over{3}} & -{{2}\over{3}} & {{1}\over{3}} \\ -{{2}\over{3}} & {{1}\over{3}} & -{{2}\over{3}} \\ {{1}\over{3}} & -{{2}\over{3}} & -{{2}\over{3}} \\ } && \matrix{1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ } && \matrix{1 & 3 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ } \end{array}\]

The matrix #Q# appears at the bottom of the left-hand column: #Q = {{1}\over{3}}\,\matrix{-2 & -2 & 1 \\ -2 & 1 & -2 \\ 1 & -2 & -2 \\ }#. The matrix #R# is the product of the matrices in the middle column in reverse order: the first line contains the right-most factor, and the last line contains the left-most factor. The right-hand column displays the running product of the contributions to #R#, up to and including the current line. The result is found at the bottom right: #R = \matrix{1 & 3 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ }#. Thus, we find

\[ A = {{1}\over{3}}\,\matrix{-2 & -2 & 1 \\ -2 & 1 & -2 \\ 1 & -2 & -2 \\ }\, \matrix{1 & 3 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ } \]
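This result can be verified by a quick computation (a sanity check, assuming NumPy):

```python
import numpy as np

Q = np.array([[-2, -2,  1],
              [-2,  1, -2],
              [ 1, -2, -2]]) / 3.0
R = np.array([[1.0, 3.0, 1.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

A = Q @ R  # recovers the original matrix A
print(np.allclose(Q.T @ Q, np.eye(3)))  # True: the columns of Q are orthonormal
```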
