
Subsection D.2.2 Global Truncation Error for Euler's Method

Suppose once again that we are applying Euler's method with step size \(h\) to the initial value problem

\begin{align*} y'(t)&=f(t,y) \\ y(0)&=y_0 \end{align*}

Denote by \(\phi(t)\) the exact solution to the initial value problem and by \(y_n\) the approximation to \(\phi(t_n),\ t_n=t_0+nh\text{,}\) given by \(n\) steps of Euler's method (applied without roundoff error).

Definition D.2.5 Global truncation error

The (signed) error in \(y_n\) is \(\ \phi(t_n)-y_n \ \) and is called the global truncation error at time \(t_n\text{.}\)

The word “truncation” is supposed to signify that this error is due solely to Euler's method and does not include any effects of roundoff error that might be introduced by our not writing down an infinite number of decimal digits for each number that we compute along the way. We now derive a bound on the global truncation error.
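To make the definition concrete, here is a short Python sketch (the test problem \(y'=y\text{,}\) \(y(0)=1\text{,}\) with exact solution \(\phi(t)=e^t\text{,}\) is an illustrative choice, not taken from the text) that runs Euler's method and records the global truncation error \(\phi(t_n)-y_n\text{:}\)

```python
import math

def euler(f, t0, y0, h, n):
    """Take n Euler steps of size h for y' = f(t, y), y(t0) = y0."""
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# Illustrative test problem: y' = y, y(0) = 1, exact solution phi(t) = e^t.
f = lambda t, y: y
phi = math.exp

h, n = 0.1, 10                # integrate up to t_n = t_0 + n*h = 1.0
y_n = euler(f, 0.0, 1.0, h, n)
eps_n = phi(n * h) - y_n      # global truncation error at t_n
```

For this problem \(y_n=(1+h)^n\text{,}\) so at \(t_n=1\) with \(h=0.1\) the error is \(e-1.1^{10}\approx 0.125\text{.}\)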

Define

\begin{equation*} \varepsilon_n=\phi(t_n)-y_n \end{equation*}

The first half of the derivation is to find a bound on \(\varepsilon_{n+1}\) in terms of \(\varepsilon_n\text{.}\)

\begin{align*} \varepsilon_{n+1}&=\phi(t_{n+1})-y_{n+1} \\ &=\phi(t_{n+1})-y_n-hf(t_n,y_n) \\ &=[{\color{red}{\phi(t_n)}}-y_n]+h[{\color{blue}{f\big(t_n,\phi(t_n)\big)}}-f(t_n,y_n)]\\ &\hskip1.5in +[\phi(t_{n+1})-{\color{red}{\phi(t_n)}}-h{\color{blue}{f\big(t_n,\phi(t_n)\big)}}] \tag{E3} \end{align*}

where we have massaged the expression into three manageable pieces.

  • The first \([\cdots]\) is exactly \(\varepsilon_n\text{.}\)
  • The third \([\cdots]\) is exactly the local truncation error. Assuming that \(|\phi''(t)|\le A\) for all \(t\) of interest (i.e. assuming that the derivative \(\phi'(t)\) doesn't change too rapidly, which will be the case if \(f(t,y)\) is a reasonably smooth function), we can bound the third \([\cdots]\) by
    \begin{equation*} \big|\phi(t_{n+1})-\phi(t_n)-hf\big(t_n,\phi(t_n)\big)\big|\le \half Ah^2 \end{equation*}
    This bound follows quickly from the Taylor expansion with remainder ((3.4.32) in the CLP-1 text),
    \begin{align*} \phi(t_{n+1})&=\phi(t_n)+\phi'(t_n)h+\half \phi''(\tilde t)h^2 \\ &=\phi(t_n)+h\, f\big(t_n,\phi(t_n)\big)+\half \phi''(\tilde t)h^2 \end{align*}
    for some \(t_n\lt \tilde t\lt t_{n+1}\text{.}\)
  • Finally, by the mean value theorem, the magnitude of the second \([\cdots]\) is \(h\) times
    \begin{align*} \big|f\big(t_n,\phi(t_n)\big)-f(t_n,y_n)\big| &= \big|F_{t_n}\big(\phi(t_n)\big)- F_{t_n}(y_n)\big| \quad \text{where}\quad F_{t_n}(y) = f\big(t_n,y\big)\\ &=\big|F'_{t_n}(\tilde y)\big| \,|\phi(t_n)-y_n|\\ &\hskip1.35in\text{for some }\tilde y\text{ between }y_n\text{ and }\phi(t_n) \\ &=\big|F'_{t_n}(\tilde y)\big|\,|\varepsilon_n|\\ &\le B|\varepsilon_n| \end{align*}
    assuming that \(\big|F'_t(y)\big| \le B\) for all \(t\) and \(y\) of interest (again, this will be the case if \(f(t,y)\) is a reasonably smooth function).

Substituting into (E3) gives

\begin{equation*} |\varepsilon_{n+1}| \le |\varepsilon_n| + Bh|\varepsilon_n| +\half Ah^2 = (1+Bh)|\varepsilon_n| +\half Ah^2 \tag{E4-n} \end{equation*}

Hence the (bound on the) magnitude of the total error, \(|\varepsilon_{n+1}|\text{,}\) consists of two parts. One part is the magnitude of the local truncation error, which is no more than \(\half Ah^2\) and which is present even if we start the step with no error at all, i.e. with \(\varepsilon_n=0\text{.}\) The other part is due to the combined error from all previous steps. This is the \(\varepsilon_n\) term. At the beginning of step number \(n+1\text{,}\) the combined error has magnitude \(|\varepsilon_n|\text{.}\) During the step, this error gets magnified by no more than a factor of \(1+Bh\text{.}\)
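The recursive bound can be sanity-checked numerically. Using the illustrative problem \(y'=y\text{,}\) \(y(0)=1\) (an assumption, not from the text), we have \(\phi(t)=e^t\text{,}\) so on \([0,1]\) we may take \(A=e\) (since \(|\phi''(t)|\le e\) there) and \(B=1\) (since \(F'_t(y)=1\)); iterating (E4-n) alongside the actual errors confirms the bound at every step:

```python
import math

h, n_steps = 0.1, 10
A, B = math.e, 1.0            # |phi''(t)| <= e on [0,1];  |dF/dy| = 1
y, t = 1.0, 0.0               # Euler iterate for y' = y, y(0) = 1
bound = 0.0                   # running bound from (E4-n); eps_0 = 0
for _ in range(n_steps):
    y += h * y                # one Euler step
    t += h
    bound = (1 + B * h) * bound + 0.5 * A * h * h
    eps = math.exp(t) - y     # actual global truncation error
    assert abs(eps) <= bound  # (E4-n) really does bound the error
```

At \(t_n=1\) the accumulated bound is about \(0.217\text{,}\) comfortably above the actual error of about \(0.125\text{.}\)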

The second half of the derivation is to repeatedly apply (E4-n) with \(n=0,1,2,\cdots\text{.}\) By definition \(\phi(t_0)=y_0\text{,}\) so that \(\varepsilon_0=0\text{.}\) Hence

\begin{alignat*}{2} &(\text{E4-0})\implies|\varepsilon_1|&&\le (1+Bh)|\varepsilon_0| +\tfrac{A}{2}h^2 =\tfrac{A}{2} h^2 \\ &(\text{E4-1})\implies|\varepsilon_2|&&\le (1+Bh)|\varepsilon_1| +\tfrac{A}{2} h^2 =(1+Bh)\tfrac{A}{2}h^2+\tfrac{A}{2}h^2\\ &(\text{E4-2})\implies|\varepsilon_3|&&\le (1+Bh)|\varepsilon_2| +\tfrac{A}{2}h^2 =(1+Bh)^2\tfrac{A}{2}h^2+(1+Bh)\tfrac{A}{2}h^2+\tfrac{A}{2}h^2 \end{alignat*}

Continuing in this way

\begin{equation*} |\varepsilon_n|\le (1+Bh)^{n-1}\tfrac{A}{2}h^2+\cdots+(1+Bh)\tfrac{A}{2}h^2+\tfrac{A}{2}h^2 =\sum_{m=0}^{n-1} (1+Bh)^m \tfrac{A}{2}h^2 \end{equation*}

This is the beginning of a geometric series, and we can sum it up by using \(\ \sum\limits_{m=0}^{n-1} ar^m=\frac{r^n-1}{r-1}a\ \) (which is Theorem 1.1.6(a)) with \(\ a=\tfrac{A}{2}h^2\ \) and \(\ r=1+Bh\text{:}\)

\begin{equation*} |\varepsilon_n|\le \frac{(1+Bh)^n-1}{(1+Bh)-1}\frac{A}{2}h^2 =\frac{A}{2B}\big[(1+Bh)^n-1\big]h \end{equation*}
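The geometric-sum algebra is easy to spot-check numerically, for arbitrary illustrative values of \(A\text{,}\) \(B\text{,}\) \(h\) and \(n\) (the specific numbers below are not from the text):

```python
import math

A, B, h, n = 2.0, 3.0, 0.01, 50   # arbitrary illustrative values

# Term-by-term geometric sum of (1+Bh)^m * (A/2) h^2 for m = 0, ..., n-1 ...
total = sum((1 + B * h) ** m * 0.5 * A * h * h for m in range(n))

# ... and the closed form (A / 2B) * [(1+Bh)^n - 1] * h
closed = A / (2 * B) * ((1 + B * h) ** n - 1) * h

assert math.isclose(total, closed)
```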

We are interested in how this behaves as \(t_n-t_0\) increases and/or \(h\) decreases. Now \(n=\frac{t_n-t_0}{h}\) so that \((1+Bh)^n=(1+Bh)^{(t_n-t_0)/h}\text{.}\) When \(h\) is small, the behaviour of \((1+Bh)^{(t_n-t_0)/h}\) is not so obvious. So we'll use a little trickery to make it easier to understand. Setting \(x=Bh\) in

\begin{equation*} x\ge 0\implies 1+x\le 1+x+\frac{1}{2}x^2+\frac{1}{3!}x^3+\cdots = e^x \end{equation*}

(the exponential series \(e^x= 1+x+\frac{1}{2}x^2+\frac{1}{3!}x^3+\cdots\) was derived in Example 3.6.5) gives \(1+Bh\le e^{Bh}\text{.}\) (When \(x=Bh\) is large, it is not wise to bound the linear \(1+x\) by the much larger exponential \(e^x\text{.}\) However when \(x\) is small, \(1+x\) and \(e^x\) are almost the same.) Hence \((1+Bh)^n\le e^{Bhn}=e^{B(t_n-t_0)}\text{,}\) since \(t_n=t_0+nh\text{,}\) and we arrive at the conclusion

\begin{equation*} |\varepsilon_n|\le \frac{A}{2B}\big[e^{B(t_n-t_0)}-1\big]h \end{equation*}

This is of the form \(K(t_f)h^k\) with \(k=1\text{,}\) where \(t_f=t_n\) is the final time, and with the coefficient \(K(t_f)\) growing exponentially in \(t_f-t_0\text{.}\) If we keep \(h\) fixed and increase \(t_n\text{,}\) we see exponential growth, but if we fix \(t_n\) and decrease \(h\text{,}\) we see the error decrease linearly. This is just what our experimental data suggested.
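Both behaviours are easy to observe numerically with the illustrative problem \(y'=y\text{,}\) \(y(0)=1\) (again an assumption, not a computation from the text): halving \(h\) at fixed \(t_n\) roughly halves the error, while increasing \(t_n\) at fixed \(h\) makes it grow rapidly.

```python
import math

def euler_error(h, t_final):
    """Global truncation error of Euler's method for y' = y, y(0) = 1 at t_final."""
    n = round(t_final / h)
    y = 1.0
    for _ in range(n):
        y += h * y            # one Euler step; t plays no role since f(t,y) = y
    return math.exp(t_final) - y

# Fix t_n = 1 and halve h: first-order behaviour, error ratio close to 2.
e1, e2, e4 = (euler_error(h, 1.0) for h in (0.1, 0.05, 0.025))
ratios = (e1 / e2, e2 / e4)

# Fix h = 0.1 and increase t_n: the error grows roughly exponentially.
growth = [euler_error(0.1, t) for t in (1.0, 2.0, 3.0)]
```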