For every pair #y_1(t)#, #y_2(t)# of differentiable functions on a given interval the **Wronskian** is the function #W# defined by: \[W(t)=y_1(t)\cdot y_2'(t)-y_1'(t)\cdot y_2(t)\]

For example, if #y_1(t)=t# and #y_2(t)=t^2#, then their derivatives are #y_1'(t)=1# and #y_2'(t)=2t#, respectively, so their Wronskian is:

\[W(t) = t\cdot 2t-1\cdot t^2 = 2t^2-t^2 = t^2\]

Those who are familiar with linear algebra will recognize in the Wronskian the determinant of a #(2\times2)#-matrix: \[ W(t)=\det\begin{pmatrix} y_1(t)&y_2(t)\\ y_1'(t)&y_2'(t)\end{pmatrix}\]
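The two-function formula is easy to check numerically. Below is a minimal sketch (the helper name `wronskian` and the test points are our own choices, not part of the text) that evaluates #W(t)=y_1(t)\cdot y_2'(t)-y_1'(t)\cdot y_2(t)# for the example pair #t# and #t^2# and compares it with #t^2#:

```python
# Sketch: evaluate W(t) = y1(t)*y2'(t) - y1'(t)*y2(t) numerically
# for y1(t) = t, y2(t) = t^2, whose Wronskian should equal t^2.

def wronskian(y1, dy1, y2, dy2, t):
    """Two-function Wronskian at the point t."""
    return y1(t) * dy2(t) - dy1(t) * y2(t)

for t in [-2.0, 0.5, 3.0]:
    w = wronskian(lambda t: t,    lambda t: 1.0,   # y1 = t,   y1' = 1
                  lambda t: t**2, lambda t: 2 * t, # y2 = t^2, y2' = 2t
                  t)
    assert abs(w - t**2) < 1e-12
```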

More generally, for every integer #n\ge2#, there is a Wronskian of #n# functions #y_1,\ldots,y_n# which are #n-1# times differentiable. The Wronskian is defined as the determinant of the #(n\times n)#-matrix whose #j#-th row consists of the #(j-1)#-st derivatives of the functions #y_1,\ldots,y_n#: \[W(t)=\det\begin{pmatrix} y_1(t)&y_2(t)& \cdots & y_n(t) \\ y_1'(t)&y_2'(t)& \cdots & y'_n(t)\\ y_1''(t)&y_2''(t)& \cdots & y''_n(t)\\ \vdots & \vdots & \ddots & \vdots \\ y_1^{(n-1)}(t)&y_2^{(n-1)}(t)& \cdots & y^{(n-1)}_n(t) \end{pmatrix}\]

Here, #y^{(n-1)}# denotes the #(n-1)#-st derivative of #y#; in particular, #y^{(1)}=y'# and #y^{(2)}=y''#.
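As a concrete instance of the general definition, here is a sketch for #n=3# (the sample functions #1#, #t#, #t^2# and the helper names are our own choices). The rows of the matrix are the functions, their first derivatives, and their second derivatives, and the determinant is expanded by cofactors along the first row:

```python
# Sketch: the 3x3 Wronskian of the sample functions 1, t, t^2.
# Row j holds the (j-1)-st derivatives, as in the definition.

def det3(m):
    """Determinant of a 3x3 matrix via cofactor expansion."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def wronskian3(t):
    return det3([[1.0, t,   t**2],   # the functions themselves
                 [0.0, 1.0, 2 * t],  # first derivatives
                 [0.0, 0.0, 2.0]])   # second derivatives

# The Wronskian of 1, t, t^2 is the constant 2.
assert wronskian3(1.7) == 2.0
```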

We recall from linear algebra that two functions #y_1(t)# and #y_2(t)# defined on a given interval (which may also be the whole real line) are **linearly dependent** if there are constants #\lambda_1# and #\lambda_2#, not both equal to #0#, such that #\lambda_1\cdot y_1(t)+\lambda_2\cdot y_2(t) =0# for all #t# in the interval.

Let #y_1(t)# and #y_2(t)# be two differentiable functions on an interval.

If the pair of functions is linearly dependent, then their Wronskian is equal to #0# on the entire interval.

The differentiable functions #t^2# and #t\cdot |t|# are linearly independent on the interval #\ivcc{-1}{1}#, yet #W(t)=0# everywhere on that interval. This shows that the converse is false: a Wronskian that is equal to zero on an interval does not imply that the functions are linearly dependent.
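The counterexample can be verified directly. The sketch below (function names and sample points are our own) uses that the derivative of #t\cdot|t|# is #2|t|#, checks that the Wronskian vanishes across #\ivcc{-1}{1}#, and then confirms independence by showing the ratio #y_2/y_1# flips sign between the two halves of the interval:

```python
# Sketch: y1(t) = t^2 and y2(t) = t*|t| have Wronskian 0 on [-1, 1]
# even though they are linearly independent there.

def y1(t):  return t * t
def dy1(t): return 2 * t
def y2(t):  return t * abs(t)
def dy2(t): return 2 * abs(t)   # derivative of t*|t|, valid at t = 0 too

for t in [-1.0, -0.3, 0.0, 0.4, 1.0]:
    assert abs(y1(t) * dy2(t) - dy1(t) * y2(t)) < 1e-12

# Independence: y2/y1 is +1 for t > 0 but -1 for t < 0, so no single
# pair (lambda1, lambda2) != (0, 0) works on the whole interval.
assert y2(0.5) / y1(0.5) == 1.0 and y2(-0.5) / y1(-0.5) == -1.0
```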

The following alternative formulation of the theorem shows that a calculation of the Wronskian may suffice to establish that two functions are linearly independent:

If the Wronskian of #y_1(t)# and #y_2(t)# is not equal to #0# for a value of #t# in the interval, then the functions #y_1(t)# and #y_2(t)# are linearly independent on that interval.

Suppose that #y_1# and #y_2# are linearly dependent. Then there are constants #\lambda_1# and #\lambda_2#, not both equal to #0#, such that \[\lambda_1\cdot y_1(t)+\lambda_2\cdot y_2(t)=0\]

for all #t# in the given interval. Assume that #\lambda_1\ne0# (the case #\lambda_2\ne0# is analogous). Then #y_1(t)=-\frac{\lambda_2}{\lambda_1}\cdot y_2(t)#, so \(y_1'(t) = -\frac{\lambda_2}{\lambda_1}\cdot y_2'(t)\) and

\[\begin{array}{rcl}W(t) &=& y_1(t)\cdot y_2'(t)-y_1'(t)\cdot y_2(t) \\ &=&-\dfrac{\lambda_2}{\lambda_1}\cdot y_2(t)\cdot y_2'(t)+\dfrac{\lambda_2}{\lambda_1}\cdot y_2'(t)\cdot y_2(t)\\ & =& 0\end{array}\]
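The cancellation in this proof is easy to observe numerically. In the sketch below we pick a concrete dependent pair of our own, #y_1(t)=\sin t# and #y_2(t)=3\sin t# (so #\lambda_1=3#, #\lambda_2=-1# works), and check that the Wronskian vanishes at several points:

```python
import math

# Sketch: for the linearly dependent pair y1(t) = sin t, y2(t) = 3*sin t
# the Wronskian vanishes at every t, as the theorem predicts.

def wronskian_dependent(t):
    y1, dy1 = math.sin(t), math.cos(t)
    y2, dy2 = 3 * math.sin(t), 3 * math.cos(t)
    return y1 * dy2 - dy1 * y2   # = 3*sin*cos - 3*cos*sin = 0

for t in [-1.0, 0.2, 2.5]:
    assert abs(wronskian_dependent(t)) < 1e-12
```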

We know that a homogeneous linear differential equation of second order has two linearly independent solutions. We can calculate the Wronskian of such a pair of solutions:

Suppose that #y_1# and #y_2# are solutions of the homogeneous linear differential equation \[y''+p(t)\cdot y'+q(t)\cdot y=0\] defined on an interval around #a#. Then the Wronskian #W# of #y_1# and #y_2# satisfies \[W(t)=C\cdot\e^{-P(t)}\]

where \(C\) is a constant and \(P(t)=\int_a^t p(s)\,\dd s\).

If we now choose #y_1# and #y_2# such that \[y_1(a)=y_2'(a)=1\phantom{xxx}\text{ and }\phantom{xxx} y_1'(a)=y_2(a)=0\]

then these two solutions are linearly independent and their Wronskian satisfies \[W(t)=\e^{-P(t)}\]

First we verify that #W# satisfies a first order linear ODE:\[\begin{array}{rcl} W'(t) &=& \dfrac{\dd}{\dd t}\left(y_1(t)\cdot y_2'(t)- y_2(t)\cdot y_1'(t)\right)\\ &&\phantom{xx}\color{blue}{\text{definition of }W'}\\ &=& \dfrac{\dd}{\dd t}\left(y_1(t)\cdot y_2'(t)\right)- \dfrac{\dd}{\dd t}\left(y_2(t)\cdot y_1'(t)\right)\\ &&\phantom{xx}\color{blue}{\text{sum rule for differentiation}}\\ &=& y_1'(t)\cdot y_2'(t)+y_1(t)\cdot y_2''(t)- y_2'(t)\cdot y_1'(t)- y_2(t)\cdot y_1''(t)\\ &&\phantom{xx}\color{blue}{\text{product rule for differentiation (applied twice)}}\\ &=& y_1(t)\cdot y_2''(t) - y_2(t)\cdot y_1''(t)\\ &&\phantom{xx}\color{blue}{\text{simplified using } y_1'(t)\cdot y_2'(t) - y_2'(t)\cdot y_1'(t) = 0}\\ &=& y_1(t)\cdot\left(-p(t)\cdot y_2'(t)-q(t)\cdot y_2(t)\right) \\ &&\qquad -y_2(t)\cdot \left(-p(t)\cdot y_1'(t)-q(t)\cdot y_1(t)\right)\\ &&\phantom{xx}\color{blue}{\text{differential equation used to replace }y_1''\text{ and }y_2''}\\ &=& -y_1(t)\cdot p(t)\cdot y_2'(t) -y_1(t)\cdot q(t)\cdot y_2(t) \\ &&\qquad +\; y_2(t)\cdot p(t)\cdot y_1'(t) + y_2(t)\cdot q(t)\cdot y_1(t)\\ &&\phantom{xx}\color{blue}{\text{distributed } y_1(t) \text{ and } y_2(t)}\\ &=& -p(t)\cdot y_1(t)\cdot y_2'(t) + p(t)\cdot y_2(t)\cdot y_1'(t)\\ &&\phantom{xx}\color{blue}{\text{simplified using } y_2(t)\cdot q(t)\cdot y_1(t) - y_1(t)\cdot q(t)\cdot y_2(t) = 0}\\ &=& -p(t)\cdot\left(y_1(t)\cdot y_2'(t)- y_2(t)\cdot y_1'(t)\right)\\ &&\phantom{xx}\color{blue}{\text{factored out } -p(t)}\\ &=& -p(t)\cdot W(t)\\ &&\phantom{xx}\color{blue}{\text{definition of }W} \end{array}\]
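The identity #W'=-p\cdot W# derived above can be sanity-checked with a finite difference. The sample equation below, #y''-3y'+2y=0# with solutions #\e^{t}# and #\e^{2t}#, is our own choice, not taken from the text; its Wronskian is #\e^{t}\cdot 2\e^{2t}-\e^{t}\cdot\e^{2t}=\e^{3t}# and #p(t)=-3#:

```python
import math

# Sketch: check W' = -p(t)*W for the sample equation y'' - 3y' + 2y = 0
# (p(t) = -3) with solutions e^t and e^(2t), so W(t) = e^(3t).

def W(t):
    return math.exp(t) * 2 * math.exp(2 * t) - math.exp(t) * math.exp(2 * t)

p = lambda t: -3.0
h = 1e-6
for t in [0.0, 0.5, 1.0]:
    dW = (W(t + h) - W(t - h)) / (2 * h)   # central difference for W'(t)
    assert abs(dW + p(t) * W(t)) < 1e-3    # W' + p*W should be ~0
```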

As a consequence, #W# satisfies the *first order linear ODE* \(W'(t)=-p(t)\cdot W(t)\), whose general solution is known to be

\[W(t)=C\cdot\e^{-P(t)}\]

where #C# is an integration constant and \(P(t)=\int_a^t p(s)\,\dd s\).
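Abel's formula itself can also be checked on a concrete case. We again assume the sample equation #y''-3y'+2y=0# (our own choice) with solutions #\e^{t}# and #\e^{2t}#, base point #a=0#, so #P(t)=\int_0^t(-3)\,\dd s=-3t# and #C=W(0)=2-1=1#:

```python
import math

# Sketch: verify W(t) = C * e^(-P(t)) for y'' - 3y' + 2y = 0,
# solutions e^t and e^(2t), base point a = 0.

def W(t):
    y1, dy1 = math.exp(t), math.exp(t)
    y2, dy2 = math.exp(2 * t), 2 * math.exp(2 * t)
    return y1 * dy2 - dy1 * y2        # equals e^(3t)

a = 0.0
P = lambda t: -3.0 * (t - a)          # integral of p(s) = -3 from a to t
C = W(a)                              # here C = 2 - 1 = 1
for t in [-1.0, 0.0, 0.7, 2.0]:
    assert abs(W(t) - C * math.exp(-P(t))) < 1e-9 * max(1.0, abs(W(t)))
```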

If we select solutions #y_1# and #y_2# of the original ODE with initial conditions at #a# as stated, then we have \[\begin{array}{rcl}C &=& C \cdot\e^{-P(a)}\\ &&\phantom{xx}\color{blue}{P(a)=\int_a^ap(t)\,\dd t=0}\\ &=& W(a)\\&&\phantom{xx}\color{blue}{W(t)=C\cdot\e^{-P(t)}}\\ &=& y_1(a)\cdot y_2'(a)-y_1'(a)\cdot y_2(a)\\&&\phantom{xx}\color{blue}{\text{definition of }W}\\&=&1\cdot 1-0\cdot 0 \\ &&\phantom{xx}\color{blue}{\text{initial conditions at }a}\\&=& 1\end{array}\]

so the Wronskian #W# satisfies \(W(t) = 1\cdot \e^{-P(t)}= \e^{-P(t)}\).

Suppose that #y_1# and #y_2# are linearly independent solutions to \[y''+p(t)\cdot y'+q(t)\cdot y=0\] defined on an interval around #a#. Then we can find linear combinations #\alpha\cdot y_1+\beta\cdot y_2# and #\gamma\cdot y_1+\delta\cdot y_2# that are solutions satisfying the initial conditions specified in the statement of the theorem. This means that the initial conditions on a pair of solutions can always be met.

Linear algebra provides an efficient way to find the constants #\alpha#, #\beta#, #\gamma#, #\delta#: Since #y_1# and #y_2# are linearly independent, their Wronskian is nonzero, and so the corresponding Wronskian matrix is invertible for each #t# in the given interval, in particular for #t=a#. This means that the matrix equation

\[\begin{pmatrix}y_1(a)&y_2(a)\\ y_1'(a)&y_2'(a)\end{pmatrix}\begin{pmatrix}\alpha&\gamma\\ \beta&\delta\end{pmatrix}=\begin{pmatrix}1&0\\ 0&1\end{pmatrix}\]

has a unique solution. This solution expresses the initial conditions for the new linear combinations.
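A small sketch makes this recipe concrete. We assume (our own choice) the independent solutions #y_1(t)=\e^{t}#, #y_2(t)=\e^{2t}# with #a=0#, so the Wronskian matrix at #a# is #\begin{pmatrix}1&1\\1&2\end{pmatrix}#; inverting it yields the columns #(\alpha,\beta)# and #(\gamma,\delta)#:

```python
import math

# Sketch: find alpha, beta, gamma, delta for y1 = e^t, y2 = e^(2t), a = 0
# by inverting the 2x2 Wronskian matrix at a.
W_a = [[1.0, 1.0],
       [1.0, 2.0]]                      # [[y1(a), y2(a)], [y1'(a), y2'(a)]]
det = W_a[0][0] * W_a[1][1] - W_a[0][1] * W_a[1][0]
inv = [[ W_a[1][1] / det, -W_a[0][1] / det],
       [-W_a[1][0] / det,  W_a[0][0] / det]]

alpha, beta  = inv[0][0], inv[1][0]     # first column of the inverse
gamma, delta = inv[0][1], inv[1][1]     # second column of the inverse
assert (alpha, beta, gamma, delta) == (2.0, -1.0, -1.0, 1.0)

# The new combinations u = 2e^t - e^(2t) and v = -e^t + e^(2t)
# satisfy u(0) = v'(0) = 1 and u'(0) = v(0) = 0.
u  = lambda t: alpha * math.exp(t) + beta * math.exp(2 * t)
du = lambda t: alpha * math.exp(t) + beta * 2 * math.exp(2 * t)
v  = lambda t: gamma * math.exp(t) + delta * math.exp(2 * t)
dv = lambda t: gamma * math.exp(t) + delta * 2 * math.exp(2 * t)
assert (u(0), du(0), v(0), dv(0)) == (1.0, 0.0, 0.0, 1.0)
```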

Compute the Wronskian of the functions #y_1# and #y_2# defined by:

\[y_1(t)=\euler^{t}\qquad\text{ and }\qquad y_2(t)=t\cdot \euler^{-t}\]

#W= # #1-2\cdot t#

We start by computing the derivatives of the functions #y_1# and #y_2#:

\[\begin{array}{rcl} y_1'(t)&=&\displaystyle \euler^{t}\\ y_2'(t)&=&\displaystyle \left(1-t\right)\cdot \euler^{-t}\end{array}\]

Now we have all the ingredients needed to compute the Wronskian #W#:

\[\begin{array}{rcl}W(t) &=&y_1(t)\cdot y_2'(t)-y_1'(t)\cdot y_2(t)\\ &=& (\euler^{t})\cdot \left(\left(1-t\right)\cdot \euler^{-t}\right)-(\euler^{t})\cdot \left(t\cdot \euler^{-t}\right)\\ &=& (1-t)-t\\ &=& 1-2\cdot t\end{array}\]
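A numerical confirmation of the worked example, as a sketch (the function name `W` and the test points are our own), comparing the computed Wronskian against #1-2t#:

```python
import math

# Sketch: confirm W(t) = 1 - 2t for y1(t) = e^t and y2(t) = t*e^(-t).

def W(t):
    y1, dy1 = math.exp(t), math.exp(t)
    y2, dy2 = t * math.exp(-t), (1 - t) * math.exp(-t)
    return y1 * dy2 - dy1 * y2

for t in [-2.0, 0.0, 1.5]:
    assert abs(W(t) - (1 - 2 * t)) < 1e-12
```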