### Orthogonal and symmetric maps: Applications of symmetric maps

### Quadratic forms

Symmetric maps can be used to define quadratic forms. A quadratic form on a vector space #V# is a second-degree homogeneous polynomial in the coordinates of a vector with respect to a fixed basis of #V#. We begin with a more intrinsic definition for the case of a real vector space. To this end we use the term *polarization* for the right-hand side of the *polarization formula of an inner product*.

**Quadratic form** A **quadratic form** on a real vector space #V# is a function #q:V\to\mathbb{R}# with the following two properties:

- **homogeneity**: for each scalar #\lambda# and each vector #\vec{x}# of #V# we have \(q(\lambda\cdot \vec{x}) = \lambda^2\cdot q(\vec{x})\).
- **bilinearity of polarization**: the real-valued map #f_q# on pairs of vectors from #V# defined by \( f_q(\vec{x},\vec{y}) =\frac12\left( q(\vec{x}+\vec{y}) - q(\vec{x})-q(\vec{y})\right)\) is bilinear.

The bilinear map #f_q# is symmetric and uniquely determined by #q#; it is called the **bilinear form** of #q#.

If #g# is a symmetric bilinear form on #V#, then #r(\vec{x}) =g(\vec{x},\vec{x})# is a quadratic form. Each quadratic form can be obtained in this way. Moreover, #g# is the bilinear form of #r#. We call #r# the **quadratic form defined by** #g#.
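The correspondence between symmetric bilinear forms and quadratic forms can be checked numerically. The sketch below (the symmetric matrix `S` is an arbitrary illustrative choice, not taken from the text) defines #g(\vec{x},\vec{y})# via `S`, sets #q(\vec{x})=g(\vec{x},\vec{x})#, and verifies that the polarization #f_q# recovers #g# and that #q# is homogeneous of degree #2#:

```python
import numpy as np

# An illustrative symmetric matrix; it defines the symmetric
# bilinear form g(x, y) = x . (S y) on R^3.
S = np.array([[2.0, 1.0,  0.0],
              [1.0, 3.0, -1.0],
              [0.0, -1.0, 1.0]])

def g(x, y):
    return x @ S @ y          # symmetric bilinear form

def q(x):
    return g(x, x)            # the quadratic form defined by g

def f_q(x, y):
    # polarization: recovers the bilinear form from q alone
    return 0.5 * (q(x + y) - q(x) - q(y))

x = np.array([1.0, -2.0, 3.0])
y = np.array([0.0,  1.0, 1.0])

print(np.isclose(f_q(x, y), g(x, y)))   # polarization recovers g
print(np.isclose(q(2 * x), 4 * q(x)))   # homogeneity of degree 2
```

Since #g# is symmetric, expanding #q(\vec{x}+\vec{y})# gives #q(\vec{x}) + 2\,g(\vec{x},\vec{y}) + q(\vec{y})#, which is exactly why the polarization returns #g(\vec{x},\vec{y})#.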

We will now show how the homogeneous polynomials of degree #2# appear after a basis has been fixed. Recall from *Coordinatization* that, if #\alpha# is a basis of #V#, the map #\alpha:V\to\mathbb{R}^n#, where #n=\dim{V}#, assigns the coordinate vector of a vector of #V# with respect to #\alpha#.

**Quadratic forms and symmetric matrices** Let #V# be a vector space of finite dimension #n# with basis #\alpha# and #q# a quadratic form on #V#.

- If #f# is the bilinear form of #q#, then there is a unique symmetric matrix #A# such that, for all #\vec{u},\vec{v}\in V#, we have \[ f(\vec{u},\vec{v}) =\dotprod{\alpha( \vec{u}) }{( A\,\alpha( \vec{v}))}\] We call #A# the **matrix of** #q# **with respect to** #\alpha#. In particular, #q\circ\alpha^{-1}# is of the form \[q(\alpha^{-1}(\vec{x})) =\dotprod{ \vec{x} }{( A\,\vec{x})} = \sum_{i,j=1}^n a_{ij}x_i x_j\phantom{xx}\text{for }\vec{x} = \rv{x_1,\ldots,x_n}\in\mathbb{R}^n\] where #a_{ij}# is the #(i,j)#-entry of #A#.
- If #\beta# is another basis of #V#, then the matrix #B# of #q# with respect to #\beta# is given by \[ B={}_\alpha I_{\beta}^\top\, A\,\; {}_\alpha I_{\beta}\]
- There exists a basis #\beta# for #V# such that the transformation matrix #{}_\alpha I_\beta# is orthogonal and the matrix #B# of #q# with respect to #\beta# is diagonal. In particular, #q\circ\beta^{-1}# is of the form \[q(\beta^{-1}(\vec{x})) = \sum_{i=1}^n b_ix_i^2\phantom{xx}\text{for }\rv{x_1,\ldots,x_n}\in\mathbb{R}^n\] where the #b_i# are the eigenvalues of #A#. We call such a form a **diagonal form** of #q#.
- The bilinear form #f# of #q# is an inner product on #V# if and only if all of the eigenvalues of #A# are positive.
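The diagonalization statement above can be illustrated with a small numerical sketch. Below, `numpy.linalg.eigh` produces an orthogonal matrix `Q` of eigenvectors for a symmetric matrix `A` (an arbitrary illustrative choice with eigenvalues #1# and #3#), so that #Q^\top A\, Q# is the diagonal matrix of eigenvalues, and the positivity test decides whether the bilinear form is an inner product:

```python
import numpy as np

# Illustrative symmetric matrix (eigenvalues 1 and 3),
# playing the role of the matrix A of a quadratic form.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# For a symmetric matrix, eigh returns real eigenvalues (ascending)
# and an orthogonal matrix Q whose columns are orthonormal eigenvectors.
eigvals, Q = np.linalg.eigh(A)

B = Q.T @ A @ Q                           # basis change: B = Q^T A Q

print(np.allclose(Q.T @ Q, np.eye(2)))    # Q is orthogonal
print(np.allclose(B, np.diag(eigvals)))   # B is diagonal with the eigenvalues

# The bilinear form of q is an inner product iff all eigenvalues are positive:
print(bool(np.all(eigvals > 0)))          # True here: A is positive definite
```

Here `Q` plays the role of the transformation matrix #{}_\alpha I_\beta#, and the diagonal entries of `B` are the coefficients #b_i# of the diagonal form.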

The collection of vectors at which a quadratic form assumes a fixed chosen value is called a *quadric*. It is the set of solutions of a quadratic polynomial equation in several unknowns. In general, the equation of a quadric also involves linear terms in addition to a quadratic form. *Later* we will go into this further.

As an example, consider the quadratic form #q# on #\mathbb{R}^3# given (with respect to the standard basis) by #q(x,y,z) =-x^2-2 x y-2 x z-y^2+z^2#. The matrix #A# of #q# is determined by

\[\begin{array}{rcl}q(x,y,z) &=& f(\rv{x,y,z},\rv{x,y,z})\\
&&\phantom{xxxxwwwwwwwxxxx}\color{blue}{f\text{ is the bilinear form of }q}\\
&=&\dotprod{\rv{x,y,z}}{\left(A\, \rv{x, y, z} \right)}\\
&&\phantom{xxxxwwwwwwwxxxx}\color{blue}{\text{definition of }A}\\
&=& {\matrix{x&y&z}}\,A\, \matrix{x\\ y\\ z}\\
&&\phantom{xxxxwwwwwwwxxxx}\color{blue}{\text{inner product rewritten as matrix product}}\\
&=&a_{11}x^2+(a_{12}+a_{21})xy+(a_{13}+a_{31})xz+a_{22}y^2+(a_{23}+a_{32})yz+a_{33}z^2\\
&&\phantom{xxxxwwwwwwwxxxx}\color{blue}{\text{matrix product worked out}}\\
&=& a_{11}x^2+2a_{12}xy+2a_{13}xz+a_{22}y^2+2a_{23}yz+a_{33}z^2\\
&&\phantom{xxxxwwwwwwwxxxx}\color{blue}{A\text{ is symmetric}}
\end{array}\] Comparison with the function rule #q(x,y,z) =-x^2-2 x y-2 x z-y^2+z^2# gives

\[\begin{array}{rclcr} a_{11}&=&\text{coefficient of } x^2 &=& -1 \\
a_{12}&=&\frac12(\text{coefficient of } x y) &=& -1 \\
a_{13}&=&\frac12(\text{coefficient of } x z )&=& -1 \\
a_{22}&=&\text{coefficient of } y^2 &=& -1 \\
a_{23}&=&\frac12(\text{coefficient of } y z) &=& 0 \\
a_{33}&=&\text{coefficient of } z^2 &=& 1
\end{array}\] The remaining entries of #A# now follow from the fact that #A# is symmetric. The conclusion is

\[\begin{array}{rcl} A &=& \matrix{-1 & -1 & -1 \\ -1 & -1 & 0 \\ -1 & 0 & 1 \\ }\end{array}\]
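The computed matrix can be checked by confirming that #\dotprod{\vec{x}}{(A\,\vec{x})}# reproduces the function rule for #q#; a minimal numerical verification:

```python
import numpy as np

# The matrix A found above for q(x, y, z) = -x^2 - 2xy - 2xz - y^2 + z^2.
A = np.array([[-1.0, -1.0, -1.0],
              [-1.0, -1.0,  0.0],
              [-1.0,  0.0,  1.0]])

def q(v):
    return v @ A @ v          # the dot-product form x . (A x)

def q_rule(x, y, z):
    return -x**2 - 2*x*y - 2*x*z - y**2 + z**2

# Compare the two expressions at a handful of random points.
rng = np.random.default_rng(0)
for _ in range(5):
    x, y, z = rng.standard_normal(3)
    assert np.isclose(q(np.array([x, y, z])), q_rule(x, y, z))
print("A reproduces the function rule of q")
```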
