Subsection D.2.1 Local Truncation Error for Euler's Method

We now try to develop some understanding as to why we got the above experimental results. We start with the error generated by a single step of Euler's method.

Definition D.2.3 Local truncation error

The (signed) error generated by a single step of Euler's method, under the assumptions that we start the step with the exact solution and that there is no roundoff error, is called the local truncation error for Euler's method. That is, if \(\phi(t)\) obeys \(\phi'(t) = f\big(t,\phi(t)\big)\) and \(\phi(t_n)=y_n\text{,}\) and if \(y_{n+1}=y_n+ hf(t_n,y_n)\text{,}\) then the local truncation error for Euler's method is

\begin{equation*} \phi\big(t_{n+1}\big) -y_{n+1} \end{equation*}

That is, it is the difference between the exact value, \(\phi\big(t_{n+1}\big)\text{,}\) and the approximate value generated by a single Euler method step, \(y_{n+1}\text{,}\) ignoring any numerical issues caused by storing numbers in a computer.
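As a concrete illustration of this definition (the choice of ODE here, \(y'=y\) with exact solution \(e^t\text{,}\) is our own and not from the text), the local truncation error of a single Euler step can be computed directly:

```python
import math

def euler_step(f, t_n, y_n, h):
    """One step of Euler's method: y_{n+1} = y_n + h f(t_n, y_n)."""
    return y_n + h * f(t_n, y_n)

# Toy problem (our own choice): y' = y, whose solution through
# y(0) = 1 is phi(t) = e^t.
f = lambda t, y: y
phi = math.exp

t_n, h = 0.0, 0.1
y_n = phi(t_n)                 # start the step on the exact solution
y_np1 = euler_step(f, t_n, y_n, h)

lte = phi(t_n + h) - y_np1     # local truncation error phi(t_{n+1}) - y_{n+1}
print(lte)                     # close to (1/2) phi''(t_n) h^2 = 0.005
```

Here the computed error is roughly \(\tfrac{1}{2}\phi''(t_n)h^2\text{,}\) anticipating the expansion derived below.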

Denote by \(\phi(t)\) the exact solution to the initial value problem

\begin{equation*} y'(t)=f(t,y)\qquad y(t_n)=y_n \end{equation*}

That is, \(\phi(t)\) obeys

\begin{equation*} \phi'(t) = f\big(t,\phi(t)\big)\qquad \phi(t_n)=y_n \end{equation*}

for all \(t\text{.}\) Now execute one more step of Euler's method with step size \(h\text{:}\)

\begin{equation*} y_{n+1}=y_n+hf\big(t_n,y_n\big) \end{equation*}

Because we are assuming that \(y_n=\phi(t_n)\)

\begin{equation*} y_{n+1}= \phi(t_n)+ hf\big(t_n,\phi(t_n)\big) \end{equation*}

Because \(\phi(t)\) is the exact solution, \(\ \phi'(t_n) = f\big(t_n,\phi(t_n)\big)= f(t_n,y_n)\ \) and

\begin{equation*} y_{n+1}= \phi(t_n)+ h\phi'(t_n) \end{equation*}

The local truncation error in \(y_{n+1}\) is, by definition, \(\phi(t_{n+1})-y_{n+1}\text{.}\)

Taylor expanding (see (3.4.10) in the CLP-1 text) \(\phi(t_{n+1})=\phi(t_n+h)\) about \(t_n\)

\begin{equation*} \phi(t_n+h)=\phi(t_n)+\phi'(t_n)h+\tfrac{1}{2} \phi''(t_n)h^2 +\tfrac{1}{3!}\phi'''(t_n)h^3+\cdots \end{equation*}

so that

\begin{align*} &\phi(t_{n+1})-y_{n+1}\\ &= \big[\phi(t_n)+\phi'(t_n)h+\tfrac{1}{2} \phi''(t_n)h^2 +\tfrac{1}{3!}\phi'''(t_n)h^3+\cdots\big] -\big[\phi(t_n)+ h\phi'(t_n)\big]\\ &= \tfrac{1}{2} \phi''(t_n)h^2+\tfrac{1}{3!}\phi'''(t_n)h^3+\cdots \tag{E2} \end{align*}

Notice that the constant and \(h^1\) terms have cancelled out. So the first term that appears is proportional to \(h^2\text{.}\) Since \(h\) is typically a very small number, the \(h^3\text{,}\) \(h^4\text{,}\) \(\cdots\) terms will usually be much smaller than the \(h^2\) term.

We conclude that the local truncation error for Euler's method is \(h^2\) times some unknown constant (we usually don't know the value of \(\frac{1}{2} \phi''(t_n)\) because we don't usually know the solution \(\phi(t)\) of the differential equation) plus smaller terms that are proportional to \(h^r\) with \(r\ge 3\text{.}\) This conclusion is typically written

\begin{equation*} \phi(t_{n+1})-y_{n+1}=Kh^2+O(h^3) \end{equation*}

The symbol \(O(h^3)\) is used to designate any function that, for small \(h\text{,}\) is bounded by a constant times \(h^3\text{.}\) So, if \(h\) is very small, \(O(h^3)\) will be a lot smaller than \(h^2\text{.}\)
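A quick numerical check of this scaling (again using the toy problem \(y'=y\text{,}\) our own choice): since the leading term of the one-step error is proportional to \(h^2\text{,}\) halving \(h\) should divide the error by roughly four.

```python
import math

def local_error(h, t_n=0.0):
    """One Euler step for y' = y, started on the exact solution e^{t_n}."""
    y_n = math.exp(t_n)
    y_np1 = y_n + h * y_n
    return math.exp(t_n + h) - y_np1

e1 = local_error(0.1)
e2 = local_error(0.05)
print(e1 / e2)   # roughly 4: halving h quarters the local error
```

The ratio is close to, but not exactly, 4 because of the \(O(h^3)\) terms.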

To get from an initial time \(t=t_0\) to a final time \(t=t_f\) using steps of size \(h\) requires \((t_f-t_0)/h\) steps. If each step were to introduce an error \(Kh^2+O(h^3)\text{,}\) then the final error in the approximate value of \(y(t_f)\) would be

\begin{equation*} \frac{t_f-t_0}{h}\ \Big[Kh^2+O(h^3)\Big] = K(t_f-t_0)\, h+O(h^2) \end{equation*}

(For simplicity, we are assuming that \(K\) takes the same value in every step. If, instead, there is a different \(K\) in each of the \(n=(t_f-t_0)/h\) steps, the final error would be \(K_1h^2\!+\!K_2h^2\!+\!\cdots\!+\!K_nh^2\!+\!n O(h^3)= \bar K nh^2\!+\!n O(h^3)= \bar K(t_f-t_0)\, h\!+\!O(h^2)\text{,}\) where \(\bar K\) is the average of \(K_1\text{,}\) \(K_2\text{,}\) \(\cdots\text{,}\) \(K_n\text{.}\))

This very rough estimate is consistent with the experimental data for the dependence of error on step size with \(t_f\) held fixed, shown on the first graph after Remark D.2.2. But it is not consistent with the experimental time dependence data above, which shows the error growing exponentially, rather than linearly, in \(t_f-t_0\text{.}\)
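This linear-in-\(h\) estimate can be tested on the text's own example \(y'=y-2t\text{,}\) \(y(0)=3\) (exact solution \(y=2+2t+e^t\)): halving the step size should roughly halve the error at a fixed final time. A minimal sketch:

```python
import math

def euler(f, t0, y0, tf, h):
    """Integrate y' = f(t, y) from t0 to tf with fixed-step Euler."""
    t, y = t0, y0
    for _ in range(round((tf - t0) / h)):
        y = y + h * f(t, y)
        t = t + h
    return y

# The text's example: y' = y - 2t, y(0) = 3, exact solution 2 + 2t + e^t.
f = lambda t, y: y - 2 * t
exact = lambda t: 2 + 2 * t + math.exp(t)

err_h  = abs(exact(1.0) - euler(f, 0.0, 3.0, 1.0, 0.1))
err_h2 = abs(exact(1.0) - euler(f, 0.0, 3.0, 1.0, 0.05))
print(err_h / err_h2)   # roughly 2: halving h roughly halves the final error
```

The ratio differs a little from 2 because of the \(O(h^2)\) corrections.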

We can get some rough understanding of this exponential growth as follows. The general solution to \(y'=y-2t\) is \(y=2+2t+c_0e^t\text{.}\) The arbitrary constant, \(c_0\text{,}\) is to be determined by initial conditions. When \(y(0)=3\text{,}\) \(c_0=1\text{.}\) At the end of step 1, we have computed an approximation \(y_1\) to \(y(h)\text{.}\) This \(y_1\) is not exactly \(y(h)=2+2h+e^h\text{.}\) Instead, it is a number that differs from \(2+2h+e^h\) by \(O(h^2)\text{.}\) We choose to write the number \(y_1=2+2h+e^h+O(h^2)\) as \(2+2h+(1+\epsilon)e^h\) with \(\epsilon=e^{-h} O(h^2)\) of order of magnitude \(h^2\text{.}\) That is, we choose to write

\begin{equation*} y_1=2+2t+c_0e^t\Big|_{t=h}\qquad\text{with }c_0=1+\epsilon \end{equation*}

If we were to make no further errors we would end up with the solution to

\begin{equation*} y'=y-2t,\qquad y(h)= 2+2h+(1+\epsilon)e^h \end{equation*}

which is

\begin{align*} y(t) &= 2+2t+(1+\epsilon)e^t =2+2t+ e^t +\epsilon e^t \\ &=\phi(t) +\epsilon e^t \end{align*}

(Note that this \(y(t)\) obeys both the differential equation \(y'=y-2t\) and the initial condition \(y(h)= 2+2h+(1+\epsilon)e^h\text{.}\))

So, once an error has been introduced, the natural time evolution of the solutions to this differential equation causes the error to grow exponentially. Other differential equations with other time evolution characteristics will exhibit different \(t_f\) dependence of errors. (For example, if the solution is polynomial, then we might expect, by a similar argument, that the error also grows polynomially in \(t_f\text{.}\)) In the next section, we show that, for many differential equations, errors grow at worst exponentially with \(t_f\text{.}\)
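The exponential growth in \(t_f\) can also be seen numerically on the same example: with \(h\) held fixed, the error at successive final times grows by a multiplicative factor rather than an additive one. A rough sketch:

```python
import math

def euler_final(tf, h=0.1):
    """Euler for y' = y - 2t, y(0) = 3 (exact solution 2 + 2t + e^t)."""
    t, y = 0.0, 3.0
    for _ in range(round(tf / h)):
        y += h * (y - 2 * t)
        t += h
    return y

# Error at t_f = 1, 2, 3, 4 with the step size fixed at h = 0.1.
errors = [abs((2 + 2*tf + math.exp(tf)) - euler_final(tf)) for tf in (1, 2, 3, 4)]
ratios = [b / a for a, b in zip(errors, errors[1:])]
print(ratios)  # each ratio well above 1: multiplicative, not additive, growth
```

Each successive error is several times the previous one, consistent with the \(\epsilon e^t\) picture above, rather than growing by a fixed increment as linear growth would give.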