CLP-2 Integral Calculus

Section D.1 Simple ODE Solvers — Derivation

The first order of business is to derive three simple algorithms for generating approximate numerical solutions to the initial value problem
\begin{align*} y'(t)&=f\big(t,y(t)\big) \\ y(t_0)&=y_0 \end{align*}
The first is called Euler’s method because it was developed by (surprise!) Euler.
 1 
Leonhard Euler (1707–1783) was a Swiss mathematician and physicist who spent most of his adult life in Saint Petersburg and Berlin. He gave the name \(\pi\) to the ratio of a circle’s circumference to its diameter, and he introduced the notation \(e\) for the base of the natural logarithm.

Subsection D.1.1 Euler’s Method

Our goal is to approximate (numerically) the unknown function
\begin{align*} y(t) &= y(t_0) + \int_{t_0}^t y'(\tau)\,\dee{\tau} \\ &= y(t_0) + \int_{t_0}^t f\big(\tau,y(\tau)\big)\,\dee{\tau} \end{align*}
for \(t\ge t_0\text{.}\) We are told explicitly the value of \(y(t_0)\text{,}\) namely \(y_0\text{.}\) So we know \(f\big(\tau,y(\tau)\big)\big|_{\tau=t_0}=f\big(t_0,y_0\big)\text{.}\) But we do not know the integrand \(f\big(\tau,y(\tau)\big)\) for \(\tau>t_0\text{.}\) On the other hand, if \(\tau\) is close to \(t_0\text{,}\) then \(f\big(\tau,y(\tau)\big)\) will remain close
 2 
This will be the case as long as \(f(t,y)\) is continuous.
to \(f\big(t_0,y_0\big)\text{.}\) So pick a small number \(h\) and define
\begin{align*} t_1&=t_0+h\\ y_1&=y(t_0) + \int_{t_0}^{t_1} f(t_0,y_0)\,\dee{\tau} =y_0+f\big(t_0,y_0\big)(t_1-t_0) \\ &=y_0+f\big(t_0,y_0\big)h \end{align*}
By the above argument
\begin{equation*} y(t_1)\approx y_1 \end{equation*}
Now we start over from the new point \((t_1,y_1)\text{.}\) We now know an approximate value for \(y\) at time \(t_1\text{.}\) If \(y(t_1)\) were exactly \(y_1\text{,}\) then the instantaneous rate of change of \(y\) at time \(t_1\text{,}\) namely \(y'(t_1)=f\big(t_1,y(t_1)\big)\text{,}\) would be exactly \(f(t_1,y_1)\) and \(f\big(t,y(t)\big)\) would remain close to \(f(t_1,y_1)\) for \(t\) close to \(t_1\text{.}\) Defining
\begin{align*} t_2&=t_1+h=t_0+2h\\ y_2&=y_1 + \int_{t_1}^{t_2} f(t_1,y_1)\,\dee{t} =y_1+f\big(t_1,y_1\big)(t_2-t_1) \\ &=y_1+f\big(t_1,y_1\big)h \end{align*}
we have
\begin{equation*} y(t_2)\approx y_2 \end{equation*}
We just repeat this argument ad infinitum. Define, for \(n=0,1,2,3,\cdots\)
\begin{equation*} t_n=t_0+nh \end{equation*}
Suppose that, for some value of \(n\text{,}\) we have already computed an approximate value \(y_n\) for \(y(t_n)\text{.}\) Then the rate of change of \(y(t)\) for \(t\) close to \(t_n\) is \(f\big(t,y(t)\big)\approx f\big(t_n,y(t_n)\big)\approx f\big(t_n,y_n\big)\) and
\begin{equation*} y(t_{n+1})\approx y_{n+1}=y_n+f\big(t_n,y_n\big)h \end{equation*}
This algorithm is called Euler’s Method. The parameter \(h\) is called the step size.
Here is a table applying a few steps of Euler’s method to the initial value problem
\begin{align*} y'&=-2t+y \\ y(0) & = 3 \end{align*}
with step size \(h=0.1\text{.}\) For this initial value problem
\begin{align*} f(t,y)&=-2t+y \\ t_0&=0 \\ y_0&=3 \end{align*}
Of course this initial value problem has been chosen for illustrative purposes only. The exact solution is
 3 
Even if you haven’t learned how to solve initial value problems like this one, you can check that \(y(t)=2+2t+e^t\) obeys both \(y'(t)=-2t+y(t)\) and \(y(0)=3\text{.}\)
\(y(t)=2+2t+e^t\text{.}\)
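(Indeed, \(y'(t)=2+e^t\) and \(-2t+y(t)=-2t+\big(2+2t+e^t\big)=2+e^t\text{,}\) while \(y(0)=2+2\cdot 0+e^0=3\text{.}\))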
\(n\) \(t_n\) \(y_n\) \(f(t_n,y_n)=-2t_n+y_n\) \(y_{n+1}=y_n+f(t_n,y_n)*h\)
0 0.0 3.000 \(-2*0.0+3.000=3.000\) \(3.000+3.000*0.1=3.300\)
1 0.1 3.300 \(-2*0.1+3.300=3.100\) \(3.300+3.100*0.1=3.610\)
2 0.2 3.610 \(-2*0.2+3.610=3.210\) \(3.610+3.210*0.1=3.931\)
3 0.3 3.931 \(-2*0.3+3.931=3.331\) \(3.931+3.331*0.1=4.264\)
4 0.4 4.264 \(-2*0.4+4.264=3.464\) \(4.264+3.464*0.1=4.611\)
5 0.5 4.611
The exact solution at \(t=0.5\) is \(4.6487\text{,}\) to four decimal places. We expect that Euler’s method will become more accurate as the step size becomes smaller. But, of course, the amount of effort goes up as well. If we recompute using \(h=0.01\text{,}\) we get (after much more work) \(4.6446\text{.}\)
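Of course one does not do such computations by hand; they are meant for a computer. Here is a minimal Python sketch of Euler’s method for this example (the helper names f and euler are ours, not from any library); with the step sizes used above it reproduces, up to rounding, the values 4.611 and 4.6446.

    def f(t, y):
        # Right-hand side of the example ODE y' = -2t + y.
        return -2 * t + y

    def euler(f, t0, y0, h, n_steps):
        # Euler's method: repeatedly apply y_{n+1} = y_n + f(t_n, y_n) * h.
        t, y = t0, y0
        for _ in range(n_steps):
            y = y + f(t, y) * h
            t = t + h
        return y

    print(euler(f, 0.0, 3.0, 0.1, 5))    # about 4.611 at t = 0.5
    print(euler(f, 0.0, 3.0, 0.01, 50))  # about 4.6446 at t = 0.5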

Subsection D.1.2 The Improved Euler’s Method

Euler’s method is one algorithm which generates approximate solutions to the initial value problem
\begin{align*} y'(t)&=f\big(t,y(t)\big) \\ y(t_0)&=y_0 \end{align*}
In applications, \(f(t,y)\) is a given function and \(t_0\) and \(y_0\) are given numbers. The function \(y(t)\) is unknown. Denote by \(\varphi(t)\) the exact solution
 4 
Under reasonable hypotheses on \(f\text{,}\) there is exactly one such solution. The interested reader should search engine their way to the Picard-Lindelöf theorem.
for this initial value problem. In other words \(\varphi(t)\) is the function that obeys
\begin{align*} \varphi'(t)&=f\big(t,\varphi(t)\big) \\ \varphi(t_0)&=y_0 \end{align*}
exactly.
Fix a step size \(h\) and define \(t_n=t_0+nh\text{.}\) By turning the problem into one of approximating integrals, we now derive another algorithm that generates approximate values for \(\varphi\) at the sequence of equally spaced time values \(t_0,\ t_1,\ t_2,\ \cdots\text{.}\) We shall denote the approximate values by \(y_n\text{,}\) with
\begin{equation*} y_n\approx\varphi(t_n) \end{equation*}
By the fundamental theorem of calculus and the differential equation, the exact solution obeys
\begin{align*} \varphi(t_{n+1})&=\varphi(t_n)+\int_{t_n}^{t_{n+1}}\varphi'(t)\ \dee{t} \\ &=\varphi(t_n)+\int_{t_n}^{t_{n+1}}f\big(t,\varphi(t)\big)\ \dee{t} \end{align*}
Fix any \(n\) and suppose that we have already found \(y_0,\ y_1,\ \cdots,\ y_n\text{.}\) Our algorithm for computing \(y_{n+1}\) will be of the form
\begin{equation*} y_{n+1}=y_n+\text{ approximate value of } \int_{t_n}^{t_{n+1}}f\big(t,\varphi(t)\big)\ \dee{t} \end{equation*}
In Euler’s method, we approximated \(f\big(t,\varphi(t)\big)\) for \(t_n\le t\le t_{n+1}\) by the constant \(f\big(t_n,y_n\big)\text{.}\) Thus
\begin{align*} \amp\text{Euler's approximate value for } \int_{t_n}^{t_{n+1}}f\big(t,\varphi(t)\big)\ \dee{t}\text{ is }\\ \amp\hskip0.6in \int_{t_n}^{t_{n+1}}f\big(t_n,y_n\big)\ \dee{t} =f\big(t_n,y_n\big)h \end{align*}
So Euler approximates the area of the complicated region \(\ 0\le y\le f\big(t,\varphi(t)\big)\text{,}\) \(t_n\le t\le t_{n+1}\ \) (represented by the shaded region under the parabola in the left half of the figure below) by the area of the rectangle \(\ 0\le y\le f\big(t_n,y_n\big)\text{,}\) \(t_n\le t\le t_{n+1}\ \) (the shaded rectangle in the right half of the figure below).
Our second algorithm, the improved Euler’s method, gets a better approximation by using the trapezoidal rule. That is, we approximate the integral by the area of the trapezoid on the right below, rather than the rectangle on the right above.
The exact area of this trapezoid is the length \(h\) of the base multiplied by the average of the heights of the two sides, which is \(\frac{1}{2}\big[f\big(t_n,\varphi(t_n)\big)+f\big(t_{n+1},\varphi(t_{n+1})\big)\big]\text{.}\) Of course we do not know \(\varphi(t_n)\) or \(\varphi(t_{n+1})\) exactly.
Recall that we have already found \(y_0,\cdots,y_n\) and are in the process of finding \(y_{n+1}\text{.}\) So we already have an approximation for \(\varphi(t_n)\text{,}\) namely \(y_n\text{.}\) But we still need to approximate \(\varphi(t_{n+1})\text{.}\) We can do so by using one step of the original Euler method! That is
\begin{equation*} \varphi(t_{n+1})\approx \varphi(t_n)+\varphi'(t_n)h \approx y_n+f(t_n,y_n)h \end{equation*}
So our approximation of \(\frac{1}{2}\big[f\big(t_n,\varphi(t_n)\big)+f\big(t_{n+1},\varphi(t_{n+1})\big)\big]\) is
\begin{equation*} \frac{1}{2}\Big[f\big(t_n,y_n\big)+f\Big(t_{n+1},y_n+f(t_n,y_n)h\Big)\Big] \end{equation*}
and
\begin{align*} &\text{Improved Euler's approximate value for } \int_{t_n}^{t_{n+1}}f\big(t,\varphi(t)\big)\ \dee{t} \text{ is }\\ &\hskip0.7in \frac{1}{2}\Big[f\big(t_n,y_n\big)+f\Big(t_{n+1},y_n+f(t_n,y_n)h\Big)\Big]h \end{align*}
Putting everything together
 5 
Notice that we have made a first approximation for \(\varphi(t_{n+1})\) by using Euler’s method. Then improved Euler uses the first approximation to build a better approximation for \(\varphi(t_{n+1})\text{.}\) Building an approximation on top of another approximation does not always work, but it works very well here.
, the improved Euler’s method algorithm is
\begin{equation*} y_{n+1}=y_n+\frac{1}{2}\Big[f\big(t_n,y_n\big)+f\Big(t_{n+1},y_n+f(t_n,y_n)h\Big)\Big]h \end{equation*}
Here are the first two steps of the improved Euler’s method applied to
\begin{align*} y'&=-2t+y\qquad y(0) = 3 \end{align*}
with \(h=0.1\text{.}\) In each step we compute \(f(t_n,y_n)\text{,}\) followed by \(y_n+f(t_n,y_n)h\text{,}\) which we denote \(\tilde y_{n+1}\text{,}\) followed by \(f(t_{n+1},\tilde y_{n+1})\text{,}\) followed by \(y_{n+1}=y_n+ \frac{1}{2}\big[f\big(t_n,y_n\big)+f\big(t_{n+1},\tilde y_{n+1}\big)\big]h\text{.}\)
\begin{alignat*}{8} t_0&=0 & y_0&=3 & &\implies & f(t_0,y_0)&=-2*0+3 =3 \\ & & & & &\implies & \tilde y_1&=3+3*0.1 =3.3 \\ & & & & &\implies & f(t_1,\tilde y_1)&=-2*0.1+3.3 =3.1 \\ & & & & &\implies & y_1&=3+\frac{1}{2}[3+3.1]*0.1 =3.305 \\ t_1&=0.1\quad & y_1&=3.305 & &\implies & f(t_1,y_1)&=-2*0.1+3.305 =3.105 \\ & & & & &\implies & \tilde y_2&=3.305+3.105*0.1 =3.6155 \\ & & & & &\implies & f(t_2,\tilde y_2)&=-2*0.2+3.6155 =3.2155 \\ & & & & &\implies & y_2&=3.305+\frac{1}{2}[3.105+3.2155]*0.1 \\ & & & & & & &=3.621025 \end{alignat*}
Here is a table which gives the first five steps.
\(n\) \(t_n\) \(y_n\) \(f(t_n,y_n)\) \(\tilde y_{n+1}\) \(f(t_{n+1},\tilde y_{n+1})\) \(y_{n+1}\)
0 0.0 3.000 3.000 3.300 3.100 3.305
1 0.1 3.305 3.105 3.616 3.216 3.621
2 0.2 3.621 3.221 3.943 3.343 3.949
3 0.3 3.949 3.349 4.284 3.484 4.291
4 0.4 4.291 3.491 4.640 3.640 4.647
5 0.5 4.647
As we saw at the end of Section D.1.1, the exact \(y(0.5)\) is 4.6487, to four decimal places, and Euler’s method gave 4.611.
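The same loop as before, with the trapezoidal update derived above, gives a minimal Python sketch of the improved Euler method (again, the names f and improved_euler are ours); it reproduces, up to rounding, the value 4.647.

    def f(t, y):
        # Right-hand side of the example ODE y' = -2t + y.
        return -2 * t + y

    def improved_euler(f, t0, y0, h, n_steps):
        # Trapezoidal update: average the slope at (t_n, y_n) with the
        # slope at the preliminary Euler point (t_{n+1}, y-tilde).
        t, y = t0, y0
        for _ in range(n_steps):
            y_tilde = y + f(t, y) * h  # one plain Euler step
            y = y + 0.5 * (f(t, y) + f(t + h, y_tilde)) * h
            t = t + h
        return y

    print(improved_euler(f, 0.0, 3.0, 0.1, 5))  # about 4.647 at t = 0.5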

Subsection D.1.3 The Runge-Kutta Method

The Runge-Kutta
 6 
Carl David Tolmé Runge (1856–1927) and Martin Wilhelm Kutta (1867–1944) were German mathematicians.
algorithm is similar to the Euler and improved Euler methods in that it also uses, in the notation of the last subsection,
\begin{equation*} y_{n+1}=y_n+\text{ approximate value for } \int_{t_n}^{t_{n+1}}f\big(t,\varphi(t)\big)\ \dee{t} \end{equation*}
But rather than approximating \(\int_{t_n}^{t_{n+1}}f\big(t,\varphi(t)\big)\ \dee{t}\) by the area of a rectangle, as does Euler, or by the area of a trapezoid, as does improved Euler, it approximates it by the area under a parabola. That is, it uses Simpson’s rule. According to Simpson’s rule (which is derived in Section 1.11.3)
\begin{align*} &\int_{t_n}^{t_n+h}f\big(t,\varphi(t)\big)\ \dee{t}\\ &\hskip0.5in\approx \tfrac{h}{6}\Big[f\big(t_n,\varphi(t_n)\big) +4f\big(t_n+\tfrac{h}{2},\varphi(t_n+\tfrac{h}{2})\big) +f\big(t_n+h,\varphi(t_n+h)\big)\Big] \end{align*}
Analogously to what happened in our development of the improved Euler method, we don’t know \(\varphi(t_n)\text{,}\) \(\varphi(t_n+\tfrac{h}{2})\) or \(\varphi(t_n+h)\text{.}\) So we have to approximate them as well. The Runge-Kutta algorithm, incorporating all these approximations, is
 7 
It is well beyond our scope to derive this algorithm, though the derivation is similar in flavour to that of the improved Euler method. You can find more in, for example, Wikipedia.
\begin{align*} k_{1,n}&=f\big(t_n,y_n\big)\\ k_{2,n}&=f\big(t_n+\tfrac{h}{2},y_n+\tfrac{h}{2}k_{1,n}\big)\\ k_{3,n}&=f\big(t_n+\tfrac{h}{2},y_n+\tfrac{h}{2}k_{2,n}\big)\\ k_{4,n}&=f\big(t_n+h,y_n+hk_{3,n}\big)\\ y_{n+1}&=y_n+\tfrac{h}{6}\big[k_{1,n}+2k_{2,n}+2k_{3,n}+k_{4,n}\big] \end{align*}
That is, Runge-Kutta uses
  • \(k_{1,n}\) to approximate \(f\big(t_n,\varphi(t_n)\big)=\varphi'(t_n)\text{,}\)
  • both \(k_{2,n}\) and \(k_{3,n}\) to approximate \(f\big(t_n+\tfrac{h}{2},\varphi(t_n+\tfrac{h}{2})\big) =\varphi'(t_n+\tfrac{h}{2})\text{,}\) and
  • \(k_{4,n}\) to approximate \(f\big(t_n+h,\varphi(t_n+h)\big)\text{.}\)
Here are the first two steps of the Runge-Kutta algorithm applied to
\begin{align*} y'&=-2t+y\qquad y(0) = 3 \end{align*}
with \(h=0.1\text{.}\)
\begin{alignat*}{2} t_0&=0 & y_0&=3 \\ & \implies & k_{1,0}&=f(0,3)=-2*0+3 =3 \\ & \implies & &y_0+\tfrac{h}{2}k_{1,0}=3+0.05*3=3.15 \\ & \implies & k_{2,0}&=f(0.05,3.15)=-2*0.05+3.15 =3.05 \\ & \implies & &y_0+\tfrac{h}{2}k_{2,0}=3+0.05*3.05=3.1525 \\ & \implies & k_{3,0}&=f(0.05,3.1525)=-2*0.05+3.1525 =3.0525 \\ & \implies & &y_0+hk_{3,0}=3+0.1*3.0525=3.30525 \\ & \implies & k_{4,0}&=f(0.1,3.30525)=-2*0.1+3.30525 =3.10525 \\ & \implies & y_1&=3+\tfrac{0.1}{6}[3+2*3.05+2*3.0525+3.10525]=3.3051708 \\ t_1&=0.1 & y_1&=3.3051708 \\ & \implies & k_{1,1}&=f(0.1,3.3051708)=-2*0.1+3.3051708 =3.1051708 \\ & \implies & &y_1+\tfrac{h}{2}k_{1,1}=3.3051708+0.05*3.1051708=3.4604293 \\ & \implies & k_{2,1}&=f(0.15,3.4604293)=-2*0.15+3.4604293 =3.1604293 \\ & \implies & &y_1+\tfrac{h}{2}k_{2,1}=3.3051708+0.05*3.1604293=3.4631923 \\ & \implies & k_{3,1}&=f(0.15,3.4631923)=-2*0.15+3.4631923 =3.1631923 \\ & \implies & &y_1+hk_{3,1}=3.3051708+0.1*3.1631923=3.62149 \\ & \implies & k_{4,1}&=f(0.2,3.62149)=-2*0.2+3.62149 =3.22149 \\ & \implies & y_2&=3.3051708+\tfrac{0.1}{6}[3.1051708+2*3.1604293+ \\ & & &\hskip1.0in 2*3.1631923+3.22149]=3.6214025 \\ t_2&=0.2 & y_2&=3.6214025 \end{alignat*}
Now, while this might look intimidating written out in full like this, one should keep in mind that it is quite easy to write a program to do this. Here is a table giving the first five steps. The data is only given to three decimal places even though the computation has been done to many more.
\(n\) \(t_n\) \(y_n\) \(k_{1,n}\) \(y_n+\tfrac{h}{2}k_{1,n}\) \(k_{2,n}\) \(y_n+\tfrac{h}{2}k_{2,n}\) \(k_{3,n}\) \(y_n+hk_{3,n}\) \(k_{4,n}\) \(y_{n+1}\)
0 0.0 3.000 3.000 3.150 3.050 3.153 3.053 3.305 3.105 3.305
1 0.1 3.305 3.105 3.460 3.160 3.463 3.163 3.621 3.221 3.621
2 0.2 3.621 3.221 3.782 3.282 3.786 3.286 3.950 3.350 3.949
3 0.3 3.950 3.350 4.117 3.417 4.121 3.421 4.292 3.492 4.291
4 0.4 4.292 3.492 4.466 3.566 4.470 3.570 4.649 3.649 4.648
5 0.5 4.6487206
The exact \(y(0.5)\) is 4.6487213, to seven decimal places. As we saw at the ends of Sections D.1.1 and D.1.2, Euler’s method gave 4.611 and improved Euler gave 4.647.
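As noted above, it is quite easy to write a program to do this. Here is a minimal Python sketch (the name runge_kutta is ours); it reproduces, up to rounding, the value 4.6487206 from the table.

    def f(t, y):
        # Right-hand side of the example ODE y' = -2t + y.
        return -2 * t + y

    def runge_kutta(f, t0, y0, h, n_steps):
        # Classical Runge-Kutta: four slope evaluations per step,
        # combined with the Simpson-rule weights (h/6)[1, 2, 2, 1].
        t, y = t0, y0
        for _ in range(n_steps):
            k1 = f(t, y)
            k2 = f(t + h / 2, y + h / 2 * k1)
            k3 = f(t + h / 2, y + h / 2 * k2)
            k4 = f(t + h, y + h * k3)
            y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            t = t + h
        return y

    print(runge_kutta(f, 0.0, 3.0, 0.1, 5))  # about 4.6487206 at t = 0.5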
So far we have, hopefully, motivated the Euler, improved Euler and Runge-Kutta algorithms. We have not attempted to see how efficient and how accurate the algorithms are. A first look at those questions is provided in the next section.