The basis constructed in the following theorem is convenient for describing coordinates.

Let #\alpha = \vec{a}_1, \ldots , \vec{a}_n# be a basis of #V#.

- There is exactly one basis #a_1^\star, \ldots , a_n^\star# of #V^\star# satisfying

\[a_i^\star(\vec{a}_j)= \begin{cases} 1 & \quad \text{ if } i = j\\ 0 & \quad \text{ if } i\neq j\end{cases}\]

In particular, #\dim{V}=\dim{V^\star}# if #V# is finite-dimensional.
- If #\vec{a} \in V#, then #\alpha(\vec{a}) = \rv{a_1^\star(\vec{a}), \ldots , a_n^\star(\vec{a})}# is the coordinate vector of #\vec{a}#.

The basis #a_1^\star, \ldots , a_n^\star# of #V^\star# is called the **dual basis** of #\vec{a}_1, \ldots , \vec{a}_n#.

1. According to the theorem *Linear map determined by basis*, there is, for each #i=1, \ldots , n#, exactly one linear function #a_i^\star : V\rightarrow \mathbb{R}# with the property specified in the claim.

It remains to prove that the system #a_1^\star, \ldots , a_n^\star# is a basis of #V^\star#. First, we show that the system is independent. Suppose that, for certain scalars \(\lambda_1 ,\ldots,\lambda_n \), we have

\[
\lambda_1 a_1^\star + \cdots + \lambda_n a_n^\star = 0 \qquad (=\text{the null function})
\]

Apply both sides to the vector #\vec{a}_i#. On the left-hand side we find #\lambda_i#; on the right-hand side we find #0#. As this holds for each #i=1,\ldots , n#, we conclude that #\lambda_1 = \cdots =\lambda_n=0#. Therefore, the system is independent.

Next we show that every vector #\varphi \in V^\star# belongs to the span #\linspan{a_1^\star, \ldots , a_n^\star}#. It is easy to verify that #\varphi(\vec{a}_1) a_1^\star +\varphi(\vec{a}_2) a_2^\star + \cdots + \varphi(\vec{a}_n) a_n^\star# and #\varphi# have the same values at #\vec{a}_1, \ldots , \vec{a}_n#. Again, because of the theorem *Linear map determined by basis*, this means that #\varphi= \varphi(\vec{a}_1) a_1^\star +\varphi(\vec{a}_2) a_2^\star + \cdots + \varphi(\vec{a}_n) a_n^\star#.

2. If #\vec{a} = \lambda_1 \vec{a}_1 + \cdots +\lambda_n \vec{a}_n#, then

\[
a_i^\star(\vec{a}) = \lambda_1 a_i^\star(\vec{a}_1) + \cdots + \lambda_n a_i^\star(\vec{a}_n) = \lambda_i
\]

due to the linearity of #a_i^\star#.
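The statement of part 2 can be checked numerically. The sketch below, which assumes a hypothetical basis of #\mathbb{R}^3# stored as matrix columns, evaluates each dual functional #a_i^\star# by solving a linear system and confirms that the dual basis recovers the coordinates:

```python
import numpy as np

# Hypothetical basis of R^3: the columns of A are the basis vectors a_1, a_2, a_3.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

# A vector a with known coordinates (2, -1, 3) with respect to this basis.
coords = np.array([2.0, -1.0, 3.0])
a = A @ coords

# The i-th dual functional a_i^* sends x to the i-th coordinate of x,
# i.e. the i-th entry of the solution y of the system A y = x.
def dual(i, x):
    return np.linalg.solve(A, x)[i]

recovered = np.array([dual(i, a) for i in range(3)])
print(np.allclose(recovered, coords))  # the dual functionals recover the coordinates
```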

This basis plays a role somewhat similar to that of an *orthonormal basis in an inner product space*. If we use the dot product to identify a vector space #V# of finite dimension #n# with #V^\star# by means of the correspondence \[\begin{array}{rcl}V&\leftrightarrow&V^\star\\ \vec{a}&\leftrightarrow&a^\star (\vec{x}) = \dotprod{\vec{a}}{\vec{x}}\end{array}\] then, for an orthonormal basis, the dual basis coincides with the orthonormal basis.

Let #\basis{\vec{e}_1, \vec{e}_2}# be the standard basis of #\mathbb{R}^2#.

The dual basis of the basis #\alpha =\basis{ \vec{e}_1+ \vec{e}_2, \vec{e}_1}# of #\mathbb{R}^2# is #\alpha^\star =\basis{ e_2^\star, e_1^\star -e_2^\star}#, where #\basis{e_1^\star, e_2^\star}# is the dual basis of #\basis{\vec{e}_1, \vec{e}_2}#.

The calculation of the #\alpha#-coordinates of the vector #\rv{3,7}=3\, \vec{e}_1 + 7\, \vec{e}_2# is then simple: the first coordinate is # e_2^\star (3\, \vec{e}_1 + 7\, \vec{e}_2)= 7#, the second coordinate is #({e}_1^\star -{e}_2^\star)(3\, \vec{e}_1 + 7\, \vec{e}_2)= 3-7=-4#.

The map #a_i^\star# assigns to each vector #\vec{x}# of #V# the coefficient #x_i# of #\vec{a}_i# in the expression \[\vec{x} = x_1\vec{a}_1+\cdots + x_n\vec{a}_n\] of #\vec{x}# as a linear combination of the basis vectors.

By use of matrix techniques, we can easily determine the dual basis of a basis for #\mathbb{R}^n#:

Let #\vec{a}_1, \ldots , \vec{a}_n# be a basis for #\mathbb{R}^n#, and collect these vectors as the columns of the matrix #A#. Then the dual basis of #\vec{a}_1, \ldots , \vec{a}_n# consists of the rows of the inverse matrix of #A#.

The matrix #A# is equal to #{}_{\epsilon}I_{\alpha}#. Each vector of the dual basis #{a}_1^\star, \ldots , {a}_n^\star# of #\vec{a}_1, \ldots , \vec{a}_n# can be written as a linear combination of #{e}_1^\star, \ldots , {e}_n^\star#, say, #{a}_i^\star = b_{i1}\, {e}_1^\star + \cdots + b_{in}\, {e}_n^\star#. We collect the #b_{ij}# in the matrix #B#. It now follows that \[
{a}_i^\star (\vec{a}_j)= b_{i1}\cdot a_{1j}+ \cdots + b_{in}\cdot a_{nj}
\]

The right-hand side is the #(i,j)# entry of the product matrix #B\,A#. The left-hand side is equal to #1# if #i=j# and #0# otherwise. This means #B\,A=I#, so #B# is the inverse of #A#.
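The identity #B\,A=I# is easy to verify numerically. The sketch below uses a hypothetical basis of #\mathbb{R}^3# (any invertible matrix would do) and checks that the rows of #A^{-1}# behave as the dual basis:

```python
import numpy as np

# Hypothetical basis of R^3, stored as the columns of A.
A = np.array([[2.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 3.0]])

B = np.linalg.inv(A)  # row i of B holds the coefficients of a_i^* in e_1^*, ..., e_n^*

# a_i^*(a_j) is the dot product of row i of B with column j of A,
# i.e. the (i, j) entry of B A, which should be the identity matrix.
print(np.allclose(B @ A, np.eye(3)))  # True
```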

The rows of #B# provide the coefficients of the vectors with respect to the dual basis.

In order to determine the dual basis of #\vec{e}_1+ \vec{e}_2#, #\vec{e}_1#, we invert the matrix

\[
\left(\begin{array}{cc} 1 & 1\\ 1 & 0\end{array}\right)
\]

and find

\[
\left(\begin{array}{cc} 0 & 1\\ 1 & -1\end{array}\right)
\]

Thus, the dual basis is #{e}_2^\star# (first row), #{e}_1^\star -{e}_2^\star# (second row), as we have seen before.

Determine the dual basis of the basis #\basis{\rv{-1 , 0 }, \rv{-3 , 1 }}# for #\mathbb{R}^2#.

Give your answer in the form of a matrix whose rows are the coordinate vectors of the dual basis vectors with respect to the dual basis #\basis{e_1^\star, e_2^\star}# of the standard basis.

#\matrix{-1 & -3 \\ 0 & 1 \\ }#

In order to determine the dual basis of #\basis{\rv{-1 , 0 }, \rv{-3 , 1 }}#, we invert the matrix #A# whose columns are the basis vectors.

\[\begin{array}{rcl} A &=& \matrix{-1 & -3 \\ 0 & 1 \\ }\\
A^{-1} &=& \matrix{-1 & -3 \\ 0 & 1 \\ }\end{array}
\]

According to the theorem *Dual basis by use of the inverse matrix*, the dual basis consists of the rows of this matrix.
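In this exercise the matrix happens to be its own inverse, which a quick NumPy check confirms:

```python
import numpy as np

# Columns are the basis vectors (-1, 0) and (-3, 1).
A = np.array([[-1.0, -3.0],
              [0.0, 1.0]])

B = np.linalg.inv(A)
print(np.allclose(B, A))              # True: here A is its own inverse
print(np.allclose(B @ A, np.eye(2)))  # True: the rows of B form the dual basis
```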