The following result makes clear that the coefficients of a vector with respect to an orthonormal system in an infinite-dimensional inner product space describe the vector with ever-increasing precision.
Suppose that #\basis{f_1,f_2,\ldots}# is an orthonormal system in an inner product space #V# (possibly complex). Then, for every #f \in V#, the sequence of partial sums #\sum_{k=1}^{n} \left| f \cdot f_k \right|^2# converges as #n\to\infty# and its limit satisfies Bessel's inequality:
\[\begin{array}{rcl}\displaystyle \sum_{k=1}^{\infty}\left| f \cdot f_k \right|^2 \leq \displaystyle\left|\left| f \right|\right|^2 \end{array}\]
We will use the theorem Convergence for monotonic sequences. Since all terms #\left| f \cdot f_k \right|^2# of the series are non-negative, the sequence of partial sums is weakly increasing. So, if we prove the inequality \[\displaystyle \sum_{k=1}^{n}\left| f \cdot f_k \right|^2 \leq \displaystyle\left|\left| f \right|\right|^2 \] for each #n#, then we will have established that this sequence is also bounded, so that the convergence criterion can be applied. Thus, it remains to prove this inequality.
Consider the partial sum #s_n=\sum_{k=1}^n (f\cdot f_k) f_k#. Then, for #j\le n#, we have
\[\begin{array}{rcl}\displaystyle (f-s_n) \cdot f_j &=& f \cdot f_j - s_n \cdot f_j \\ &&\phantom{xx}\color{blue}{\text{linearity of the inner product}}\\&=&\displaystyle f \cdot f_j - \left(\sum_{k=1}^n (f \cdot f_k) f_k \right)\cdot f_j \\&&\phantom{xx}\color{blue}{\text{definition of }s_n}\\&=& \displaystyle f \cdot f_j - \sum_{k=1}^n (f \cdot f_k) (f_k \cdot f_j) \\&&\phantom{xx}\color{blue}{\text{linearity of the inner product}}\\&=& \displaystyle f \cdot f_j - f \cdot f_j \\&&\phantom{xx}\color{blue}{\text{orthonormality of }f_1,f_2,\ldots}\\ &=&0 \end{array}\]
This means that #f-s_n# is orthogonal to each #f_j# for #j\le n#. Since #s_n# is a linear combination of #f_1,f_2,\ldots,f_n#, this implies #(f - s_n) \cdot s_n = 0#. By the Pythagorean theorem, this gives the equality \[ \begin{array}{rcl}\displaystyle \left|\left| f \right|\right|^2=\left|\left| f - s_n + s_n \right|\right|^2= \left|\left| f - s_n\right|\right|^2 +\left|\left| s_n \right|\right|^2 \end{array} \]
In particular,
\[ \begin{array}{rcl}\displaystyle {\parallel s_n \parallel}^2\ \leq \ {\parallel f \parallel}^2 \end{array} \]
We now use the theorem Properties of orthogonal systems (or the complex version thereof):
\[\begin{array}{rcl}\displaystyle \sum_{k=1}^{n}\left| f \cdot f_k \right|^2 &=& {\big\Vert\sum_{k=1}^{n} (f \cdot f_k )\, f_k\big\Vert}^2 \\&&\phantom{xx}\color{blue}{\text{Pythagorean theorem}}\\ &=& {\parallel s_n\parallel}^2 \\&&\phantom{xx}\color{blue}{\text{definition of }s_n}\\ & \leq &\left|\left| f \right|\right|^2 \\&&\phantom{xx}\color{blue}{\text{the above inequality}} \end{array}\]
This proves that, for each #n#, we have \[\displaystyle \sum_{k=1}^{n}\left| f \cdot f_k \right|^2 \leq \displaystyle\left|\left| f \right|\right|^2 \] In particular, the sequence of partial sums is bounded. Since all terms #\left| f \cdot f_k \right|^2# of the series are non-negative, this sequence is also weakly increasing. Thus, it satisfies the conditions of the theorem Convergence for monotonic sequences, and applying that theorem gives the statement about convergence. This finishes the proof of the theorem.
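For a concrete feel for the inequality, here is a small check in the finite-dimensional space #\mathbb{R}^3# (not the infinite-dimensional setting of the theorem, but the same computation applies to any finite part of an orthonormal system). Take the orthonormal system #\basis{e_1,e_2}# consisting of the first two standard basis vectors and #f=(1,2,3)#. Then
\[\begin{array}{rcl}\displaystyle \left| f \cdot e_1 \right|^2+\left| f \cdot e_2 \right|^2 &=& 1+4\;=\;5\;\leq\;1+4+9\;=\;\left|\left| f \right|\right|^2 \end{array}\]
with strict inequality because the coefficient of the missing direction #e_3# is left out.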
If we were to ignore issues about infinity, we could view the equality \[f = \sum_{k=1}^{\infty} (\dotprod{f}{f_k})\cdot f_k\] as an instance of the Pythagorean theorem. The right-hand side, however, need not represent a vector of #V#. An example showing that it need not belong to #V# is the inner product space #P# of all polynomials on #\mathbb{R}# with the inner product that makes #\basis{1,x,x^2,\ldots}# orthonormal. The exponential function \[\ee^x = \sum_{k=0}^\infty \frac{1}{k!} x^k\] is known not to be a polynomial function, and so does not belong to #P#, although every finite sum #\sum_{k=0}^n \frac{1}{k!} x^k# does.
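Note that nothing goes wrong with the coefficients themselves in this example: since #k! \geq 1#, we have
\[\begin{array}{rcl}\displaystyle \sum_{k=0}^{\infty}\left(\frac{1}{k!}\right)^2 &\leq& \displaystyle\sum_{k=0}^{\infty}\frac{1}{k!}\;=\;\ee\;\lt\;\infty \end{array}\]
so the coefficient series converges. The problem is that the space #P# is not complete: the limit of the partial sums is missing from it. This is precisely the gap that the additional structure introduced below is meant to fill.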
We now turn our attention to answering the following question: given an orthonormal system, under which conditions can we expand #f# in terms of the #f_k#, that is, write #f=\sum_{k=1}^{\infty}(f\cdot f_k)f_k#? This question has an elegant answer provided we add some additional structure to our inner product space.
Let's start by slightly generalising the familiar notion of Fourier series for a function #f#.
Let #H# be a Hilbert space and #\{f_k\}_{k \in \mathbb{N}}# an orthonormal system in #H#. For any #f \in H# we say that #(f\cdot f_k)# is the #k#-th Fourier coefficient of #f# with respect to #\{f_k\}_{k \in \mathbb{N}}# and that the formal series
\[ \begin{array}{rcl}\displaystyle \sum_{k=1}^{\infty} (f\cdot f_k)f_k\end{array}\]
is the Fourier series of #f# with respect to #\{f_k\}_{k \in \mathbb{N}}#.
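As a quick illustration (in the Hilbert space #\ell^2# of square-summable sequences with its standard orthonormal system #e_1,e_2,\ldots# of unit sequences, used here only for this example): for #f=\left(1,\tfrac{1}{2},\tfrac{1}{3},\ldots\right)# we have #f\cdot e_k=\tfrac{1}{k}#, so the #k#-th Fourier coefficient of #f# is #\tfrac{1}{k}# and the Fourier series of #f# with respect to #\{e_k\}_{k \in \mathbb{N}}# is #\sum_{k=1}^{\infty}\tfrac{1}{k}\,e_k#.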
Now we state a theorem that, together with Bessel's inequality, will help formulate the solution.
Let #H# be a Hilbert space and let #\{f_k\}_{k \in \mathbb{N}}# be an orthonormal system in #H#. If #\{ c_k\}_{k \in \mathbb{N}}# is a sequence of scalars, then the series #\sum_{k=1}^{\infty} c_k f_k# converges in #H# if and only if #\sum_{k=1}^{\infty} |c_k|^2# converges.
Let's start by assuming that #\sum_{k=1}^{\infty} c_k f_k# converges, say to #x \in H#. Then, using the continuity of the inner product, we can write for each #k \in \mathbb{N}#:
\[\begin{array}{rcl}\displaystyle x \cdot f_k &=& \displaystyle \left(\lim_{n \rightarrow \infty} \sum_{i=1}^n c_i f_i \right)\cdot f_k \;=\;\displaystyle \lim_{n \rightarrow \infty} \left( \left(\sum_{i=1}^n c_i f_i\right) \cdot f_k \right) \;=\;\displaystyle \lim_{n \rightarrow \infty} \sum_{i=1}^n c_i\, (f_i \cdot f_k) \;=\; c_k \end{array}\]
and, by Bessel's inequality:
\[\begin{array}{rcl}\displaystyle \sum_{k=1}^{\infty} |c_k|^2 =\displaystyle \sum_{k=1}^{\infty} \left|\left( x \cdot f_k \right)\right|^2 \leq \displaystyle \left|\left| x\right|\right|^2\end{array}\]
Now we prove the converse. Assume that #\sum_{k=1}^{\infty} |c_k|^2# converges and fix #n,m \in \mathbb{N}#. By the Pythagorean theorem, we have
\[\begin{array}{rcl}\displaystyle \left|\left| \sum_{k=1}^{n+m} c_k f_k-\sum_{k=1}^n c_k f_k \right|\right|^2 =\displaystyle \sum_{k=n+1}^{n+m}\left|\left| c_k f_k \right|\right|^2 = \sum_{k=n+1}^{n+m} |c_k|^2 \end{array}\]
Now fix #\varepsilon \gt 0#. Since #\sum_{k=1}^{\infty} |c_k|^2# converges, we can choose #N# such that #\sum_{k=N+1}^{\infty} |c_k|^2 \lt \varepsilon^2#. For #n \geq N# and any #m#, the equality above then gives \[\begin{array}{rcl}\displaystyle \left|\left| \sum_{k=1}^{n+m} c_k f_k-\sum_{k=1}^n c_k f_k \right|\right|^2 =\displaystyle \sum_{k=n+1}^{n+m} |c_k|^2 \;\leq\; \sum_{k=N+1}^{\infty} |c_k|^2 \;\lt\; \varepsilon^2 \end{array}\] so the partial sums #\sum_{k=1}^{n} c_k f_k# form a Cauchy sequence in #H#. Since #H# is complete, the series #\sum_{k=1}^{\infty} c_k f_k# converges.
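For example, if #c_k=\tfrac{1}{k}#, then #\sum_{k=1}^{\infty} |c_k|^2=\sum_{k=1}^{\infty}\tfrac{1}{k^2}# converges, so #\sum_{k=1}^{\infty}\tfrac{1}{k}\,f_k# converges in #H#. By contrast, for #c_k=\tfrac{1}{\sqrt{k}}# the series #\sum_{k=1}^{\infty} |c_k|^2=\sum_{k=1}^{\infty}\tfrac{1}{k}# diverges, so #\sum_{k=1}^{\infty}\tfrac{1}{\sqrt{k}}\,f_k# does not converge, even though the individual terms tend to #0#.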
As an immediate corollary of this theorem, we obtain the following.
In a Hilbert space #H# endowed with an orthonormal system #\{f_k\}_{k \in \mathbb{N}}#, the "generalised" Fourier series defined above converges for every #f \in H#: by Bessel's inequality the series #\sum_{k=1}^{\infty}\left|f\cdot f_k\right|^2# converges, so the theorem above, applied with #c_k = f\cdot f_k#, shows that
\[ \begin{array}{rcl}\displaystyle \sum_{k=1}^{\infty} (f\cdot f_k)f_k\end{array}\]
converges in #H#.
Now let #g=f-\sum_{k=1}^{\infty} (f\cdot f_k)f_k#. Using again the continuity of the inner product, we find for each #i \in \mathbb{N}#
\[ \begin{array}{rcl}\displaystyle g \cdot f_i &=& \displaystyle(f \cdot f_i) - \left(\sum_{k=1}^{\infty} (f\cdot f_k)f_k \right)\cdot f_i \\ &=& \displaystyle(f \cdot f_i) - \sum_{k=1}^{\infty} (f\cdot f_k)(f_k \cdot f_i) \\ &=& (f \cdot f_i) - (f \cdot f_i) \;=\; 0 \end{array}\]
So if the only element #v \in H# satisfying #v\cdot f_k=0# for all #k# is #v=0#, then #g=0#, that is, #f=\sum_{k=1}^{\infty}(f\cdot f_k)f_k# for every #f \in H#.
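This condition cannot be dropped. For example, in the Hilbert space #\ell^2# with standard unit sequences #e_1,e_2,\ldots# (the same setting as in the illustration above), the system #\{e_2,e_3,e_4,\ldots\}# is orthonormal, but for #f=e_1# every Fourier coefficient #f\cdot e_k# with #k\geq 2# equals #0#, so the Fourier series of #f# with respect to this system is #0\neq f#.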
In light of this, we give the following definition of an orthonormal basis for a Hilbert space #H#.
Let #H# be a Hilbert space and let #S# be an orthonormal system in #H#. We call #S# a complete orthonormal system in #H# if the following holds: whenever #x \in H# satisfies #x\cdot f=0# for all #f \in S#, we have #x=0#. If, in addition, #S# can be indexed by #\mathbb{N}#, we call it an orthonormal basis for #H#.
Consider the inner product space #V# of all piecewise smooth real functions on \(\ivcc{-\pi}{\pi}\) with inner product given by \[\dotprod{f}{g}=\displaystyle\int_{-\pi}^{\pi}f(x)\cdot g(x)\,\dd x\] The system \[\basis{1,\sin( x),\cos( x),\sin(2\, x),\cos(2\, x),\ldots}\] is orthogonal; dividing each function by its norm (#\sqrt{2\pi}# for the constant function, #\sqrt{\pi}# for all the others) turns it into an orthonormal system. The corresponding coefficients #\dotprod{f(x)}{\cos(n\, x)}# and #\dotprod{f(x)}{\sin(n\, x)}# are, up to these constant factors, the so-called Fourier coefficients of #f# (in the complex version, the system is replaced by #\ee^{\complexi \, n x}# for #n=0,\pm 1,\pm 2,\ldots#).
Let's verify that this system is in fact complete, so that its normalisation constitutes an orthonormal basis. Suppose #f \in V# is orthogonal to every function in the system; this means that for each #n \in \mathbb{N}#
\[\begin{array}{rcl}\displaystyle \int_{-\pi}^{\pi} f(x)\cos(n\,x)\,\dd x =\displaystyle \int_{-\pi}^{\pi} f(x)\sin(n\,x)\,\dd x = 0 \end{array} \]
But, up to constant factors, these are exactly the formulas for the coefficients #a_n#, #b_n# of the Fourier series of #f#, so that Fourier series reduces to its constant term. Since
\[\begin{array}{rcl}\displaystyle \int_{-\pi}^{\pi} f(x) \cdot 1 \,\dd x =0 \end{array}\]
must hold as well, this constant term is #0# too, so the Fourier series of #f# vanishes identically.
But the convergence theorem applies to the functions of #V#, so this Fourier series converges pointwise to #f#, except possibly at a finite set of discontinuities. Hence #f# is #0# outside this finite set, and therefore #f=0# as an element of #V#. This shows that, after normalisation, #\basis{1,\sin( x),\cos( x),\sin(2\, x),\cos(2\, x),\ldots}# is indeed an orthonormal basis.
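As a worked check (with the smooth, hence piecewise smooth, function #f(x)=x# and the normalised system): the coefficients with respect to #\tfrac{1}{\sqrt{2\pi}}# and #\tfrac{\cos(n\,x)}{\sqrt{\pi}}# vanish because #f# is odd, while integration by parts gives
\[\begin{array}{rcl}\displaystyle \dotprod{x}{\tfrac{\sin(n\,x)}{\sqrt{\pi}}} &=& \displaystyle \frac{1}{\sqrt{\pi}}\int_{-\pi}^{\pi} x\,\sin(n\,x)\,\dd x \;=\; \frac{2\sqrt{\pi}\,(-1)^{n+1}}{n}\\ \displaystyle \sum_{n=1}^{\infty}\left|\dotprod{x}{\tfrac{\sin(n\,x)}{\sqrt{\pi}}}\right|^2 &=& \displaystyle \sum_{n=1}^{\infty}\frac{4\pi}{n^2}\;=\;\frac{2\pi^3}{3}\;=\;\displaystyle\int_{-\pi}^{\pi}x^2\,\dd x\;=\;\left|\left| x\right|\right|^2 \end{array}\]
so Bessel's inequality holds with equality here, as one expects when #f# equals its Fourier series.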