### Differential equations: Linear second-order differential equations

### Uniqueness of solutions of linear 2nd-order ODEs

In the remainder of this chapter, we will be mainly concerned with linear second-order differential equations. Many ordinary differential equations of interest in science and technology are of this kind.

We will focus on differential equations of the following form, where \(a(t)\), \(b(t)\), \(c(t)\), and \(f(t)\) are functions with #a(t)\ne0#: \[a(t)\cdot \frac{\dd^2y}{\dd t^2}+b(t)\cdot\frac{\dd y}{\dd t}+c(t)\cdot y=f(t)\] The equation is called *homogeneous* if \(f(t)=0\). The functions \(a(t)\), \(b(t)\), and \(c(t)\) are called **coefficients**. If these coefficients are constant, the general solution can be expressed in terms of standard functions.

We may assume that the coefficient #a(t)# is distinct from zero (that is, not the constant function #0#); otherwise the differential equation would be of first order. We can therefore divide by #a(t)#, so that in the resulting equation the coefficient of #y''# equals #1#. In this case, we say that the equation is in **standard form**.
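As a concrete illustration of putting an equation into standard form (the particular equation below is chosen here for the sketch and does not come from the text):

```latex
% Example: t^2 y'' + 2t y' + y = \sin t, with a(t) = t^2, nonzero for t \neq 0.
% Dividing by a(t) = t^2 gives the standard form (leading coefficient 1):
y'' + \frac{2}{t}\, y' + \frac{1}{t^2}\, y = \frac{\sin t}{t^2}
```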

**Uniqueness of solutions of linear second-order differential equations**

Let #t_0# be a point in an open interval #\ivoo{c}{d}# (that is to say: #c\lt t_0\lt d#) and let \(a_1\), \(a_2\), and \(b\) be continuous functions on this interval. Then the initial value problem \[ y'' + a_1(t)\cdot y' + a_2(t)\cdot y = b(t), \quad\text{with}\quad y(t_0) = \alpha\quad\text{ and }\quad y'(t_0) = \beta\] where #\alpha# and #\beta# are constants, has a unique solution defined on the entire interval #\ivoo{c}{d}#.
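The theorem guarantees exactly one solution once #y(t_0)# and #y'(t_0)# are prescribed. A minimal numerical sketch of this (not part of the text): rewrite the second-order equation as a first-order system in #(y, y')# and integrate it with the classical Runge–Kutta method, for the test problem \(y''+y=0\), \(y(0)=1\), \(y'(0)=0\), whose unique solution is \(\cos(t)\).

```python
import math

def rk4_second_order(a1, a2, b, t0, alpha, beta, t_end, n=1000):
    """Numerically solve y'' + a1(t) y' + a2(t) y = b(t) with
    y(t0) = alpha, y'(t0) = beta, using classical RK4 on the
    equivalent first-order system (y, v) with v = y'."""
    h = (t_end - t0) / n
    t, y, v = t0, alpha, beta

    def f(t, y, v):
        # y' = v,  v' = b(t) - a1(t)*v - a2(t)*y
        return v, b(t) - a1(t) * v - a2(t) * y

    for _ in range(n):
        k1y, k1v = f(t, y, v)
        k2y, k2v = f(t + h/2, y + h/2*k1y, v + h/2*k1v)
        k3y, k3v = f(t + h/2, y + h/2*k2y, v + h/2*k2v)
        k4y, k4v = f(t + h, y + h*k3y, v + h*k3v)
        y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        t += h
    return y

# IVP y'' + y = 0, y(0) = 1, y'(0) = 0: the unique solution is cos(t).
approx = rk4_second_order(lambda t: 0.0, lambda t: 1.0, lambda t: 0.0,
                          0.0, 1.0, 0.0, math.pi)
print(approx)  # close to cos(pi) = -1
```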

Here are some examples, including the famous differential equation of the mathematical pendulum, and an example showing that uniqueness cannot be enforced by specifying the function value #y(t)# for two different values of #t#.

**Example.** The general real solution of the equation \(y''+\omega^2\cdot y=0\), where \(\omega\) is a nonzero constant, is \(y(t)=A\cos(\omega\cdot t)+B\sin(\omega\cdot t)\), wherein \(A\) and \(B\) are constants. This can be derived as follows.

We seek solutions of the form \[y(t)=\e^{\lambda\cdot t}\]

where \(\lambda\) is a yet to be determined constant. Substituting this function rule for #y# and the associated function rule \(y''(t)=\lambda^2\cdot\e^{\lambda\cdot t}\) for the second derivative of #y# in the ODE provides the following condition for \(\lambda\): \[\lambda^2 + \omega^2=0\] This equation in the unknown \(\lambda\) has two complex solutions: \(\lambda=\ii\cdot\omega\) and \(\lambda=-\ii\cdot\omega\). So \(y_1(t)=\e^{\ii\cdot\omega\cdot t}\) and \(y_2(t)=\e^{-\ii\cdot\omega\cdot t}\) are the complex solutions of the given ODE. These solutions are each other's complex conjugate. Since each linear combination \(c_1\cdot y_1(t)+c_2\cdot y_2(t)\) (with constants \(c_1\) and \(c_2\) ) is also a solution, we can single out two real solutions of the ODE, namely the real and the imaginary part of #y_1(t)#:

\[\begin{array}{rcl}

y_{\text{cos}}(t)&=&\frac{\e^{\ii\cdot\omega\cdot t}+\e^{-\ii\cdot\omega\cdot t}}{2}=\cos(\omega\cdot t)\\

&\text{and}& \\

y_{\text{sin}}(t)&=&\frac{\e^{\ii\cdot\omega\cdot t}-\e^{-\ii\cdot\omega\cdot t}}{2\ii}=\sin(\omega\cdot t)\end{array}\] Each real linear combination of these two functions is also a solution of the ODE.
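The two identities above are Euler's formulas expressing \(\cos\) and \(\sin\) in terms of the complex exponentials \(y_1\) and \(y_2\). A quick spot-check at a few sample points (an illustration, not part of the text; the values of #\omega# and #t# are arbitrary):

```python
import cmath
import math

# Spot-check the identities used above at a few sample points.
omega = 2.0
for t in [0.0, 0.3, 1.7, -2.5]:
    z = cmath.exp(1j * omega * t)       # y_1(t) = e^{i omega t}
    zbar = cmath.exp(-1j * omega * t)   # y_2(t), its complex conjugate
    y_cos = (z + zbar) / 2
    y_sin = (z - zbar) / (2j)
    assert abs(y_cos - math.cos(omega * t)) < 1e-12
    assert abs(y_sin - math.sin(omega * t)) < 1e-12
    # The imaginary parts vanish (up to rounding): both are real functions.
    assert abs(y_cos.imag) < 1e-12 and abs(y_sin.imag) < 1e-12
print("Euler identities verified")
```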

We now use the fact that the dimension of the linear subspace of all solutions is equal to #2# in order to conclude that the general real solution of the given ODE can be described as follows with two arbitrary constants of integration #A# and #B#: \[y(t)=A\cos(\omega\cdot t)+B\sin(\omega\cdot t)\]
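To make the general solution tangible, one can check numerically that \(y(t)=A\cos(\omega\cdot t)+B\sin(\omega\cdot t)\) satisfies \(y''+\omega^2\cdot y=0\), approximating #y''# by a central finite difference. This is an illustrative sketch; the values of #A#, #B#, and #\omega# are chosen arbitrarily.

```python
import math

# Check that y(t) = A cos(w t) + B sin(w t) satisfies y'' + w^2 y = 0,
# using the central difference (y(t+h) - 2 y(t) + y(t-h)) / h^2 for y''.
A, B, w = 1.5, -0.7, 3.0

def y(t):
    return A * math.cos(w * t) + B * math.sin(w * t)

h = 1e-4
max_residual = 0.0
for t in [0.0, 0.5, 1.0, 2.0]:
    y2 = (y(t + h) - 2 * y(t) + y(t - h)) / h**2  # approximates y''(t)
    max_residual = max(max_residual, abs(y2 + w**2 * y(t)))
print(max_residual)  # small: y solves the ODE up to discretization error

# The constants are fixed by the initial conditions: A = y(0), B = y'(0)/w.
assert abs(y(0) - A) < 1e-12
```

Note how this ties back to the uniqueness theorem: prescribing \(y(0)\) and \(y'(0)\) determines \(A\) and \(B\), hence the solution, uniquely.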
