CLP-2 Integral Calculus

Section D.3 Variable Step Size Methods

We now introduce a family of procedures that decide by themselves what step size to use. In all of these procedures the user specifies an acceptable error rate and the procedure attempts to adjust the step size so that each step introduces error at no more than that rate. That way the procedure uses a small step size when it is hard to get an accurate approximation, and a large step size when it is easy to get a good approximation.
Suppose that we wish to generate an approximation to the initial value problem
\begin{equation*} y'=f(t,y),\qquad y(t_0)=y_0 \end{equation*}
for some range of \(t\)’s, and we want the error introduced per unit increase of \(t\) to be no more than about some small fixed number \(\varepsilon\text{.}\)
 1 
We know that the error will get larger the further we go in \(t\text{.}\) So it makes sense to try to limit the error per unit increase in \(t\text{.}\)
This means that if \(y_n\approx y(t_0+nh)\) and \(y_{n+1}\approx y(t_0+(n+1)h)\text{,}\) then we want the local truncation error in the step from \(y_n\) to \(y_{n+1}\) to be no more than about \(\varepsilon h\text{.}\) Suppose further that we have already produced the approximate solution as far as \(t_n\text{.}\) The rough strategy is as follows. We do the step from \(t_n\) to \(t_n+h\) twice, using two different algorithms, giving two different approximations to \(y(t_{n+1})\) that we call \(A_{1,n+1}\) and \(A_{2,n+1}\text{.}\) The two algorithms are chosen so that
  1. we can use \(A_{1,n+1}-A_{2,n+1}\) to compute an approximate local truncation error and
  2. for efficiency, the two algorithms use almost the same evaluations of \(f\text{.}\) Remember that evaluating the function \(f\) is typically the most time-consuming part of our computation.
In the event that the local truncation error, divided by \(h\) (i.e. the error per unit increase of \(t\)), is smaller than \(\varepsilon\text{,}\) we set \(t_{n+1}=t_n+h\text{,}\) accept \(A_{2,n+1}\) as the approximate value for \(y(t_{n+1})\text{,}\) and move on to the next step.
 2 
Better still, accept \(A_{2,n+1}\) minus the computed approximate error in \(A_{2,n+1}\) as the approximate value for \(y(t_{n+1})\text{.}\)
Otherwise we pick, using what we have learned from \(A_{1,n+1}-A_{2,n+1}\text{,}\) a new trial step size \(h\) and start over again at \(t_n\text{.}\)
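This accept/reject loop is the same for every method in this appendix, so it may help to see it once as code. The following Python sketch is ours, not part of the text (the names adaptive_solve, step and p are our own invention); it anticipates the step size update rule \(\text{new }h=0.9(\varepsilon/r)^{1/p}h\) that is derived, method by method, in the subsections below.
```python
def adaptive_solve(step, p, t0, y0, t_f, eps, h0):
    """Generic variable step size driver (a sketch, not production code).

    step(t, y, h) must return (y_cand, r): a candidate value for y(t+h)
    and the estimated error rate r per unit increase of t.  p is the
    power appearing in the method's step size update rule.
    """
    t, y, h = t0, y0, h0
    while t < t_f:
        h = min(h, t_f - t)              # don't step past the final time
        y_cand, r = step(t, y, h)
        if r <= eps:                     # error rate acceptable: take the step
            t, y = t + h, y_cand
        # whether we accepted or not, choose a new trial step size,
        # guarding against division by zero when r is tiny
        h = 0.9 * (eps / max(r, 1e-12)) ** (1.0 / p) * h
    return y
```
Each method below supplies its own step function and its own value of \(p\text{.}\)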
Now for the details. We start with a very simple procedure. We will later soup it up to get a much more efficient procedure.

Subsection D.3.1 Euler and Euler-2step (preliminary version)

Denote by \(\phi(t)\) the exact solution to \(y'=f(t,y)\) that satisfies the initial condition \(\phi(t_n)=y_n\text{.}\) If we apply one step of Euler with step size \(h\text{,}\) giving
\begin{equation*} A_{1,n+1}=y_n+hf(t_n,y_n) \end{equation*}
we know, from (D.2.4), that
\begin{equation*} A_{1,n+1}=\phi(t_n+h)+Kh^2+O(h^3) \end{equation*}
The problem, of course, is that we don’t know what the error is, even approximately, because we don’t know what the constant \(K\) is. But we can estimate \(K\) simply by redoing the step from \(t_n\) to \(t_n+h\) using a judiciously chosen second algorithm. There are a number of different second algorithms that will work. We call the simple algorithm that we use in this subsection Euler-2step.
 3 
This name is begging for a dance related footnote and we invite the reader to supply their own.
One step of Euler-2step with step size \(h\) just consists of doing two steps of Euler with step size \(h/2\text{:}\)
\begin{equation*} A_{2,n+1} = y_n+\tfrac{h}{2}f(t_n,y_n) +\tfrac{h}{2}f\big(t_n+\tfrac{h}{2},y_n+\tfrac{h}{2}f(t_n,y_n)\big) \end{equation*}
Here, the first half-step took us from \(y_n\) to \(y_{\rm mid}=y_n+\frac{h}{2}f(t_n,y_n)\) and the second half-step took us from \(y_{\rm mid}\) to \(y_{\rm mid}+\frac{h}{2}f\big(t_n+\frac{h}{2},y_{\rm mid}\big)\text{.}\) The local truncation error introduced in the first half-step is \(K(h/2)^2+O(h^3)\text{.}\) That for the second half-step is \(K(h/2)^2+O(h^3)\) with the same \(K\text{,}\) though with a different \(O(h^3)\text{.}\)
 4 
Because the two half-steps start at values of \(t\) only \(h/2\) apart, and we are thinking of \(h\) as being very small, it should not be surprising that we can use the same value of \(K\) in both. In case you don’t believe us, we have included a derivation of the local truncation error for Euler-2step later in this appendix.
All together
\begin{align*} A_{2,n+1}&=\phi(t_n+h)+\big[ K\big(\tfrac{h}{2}\big)^2+O(h^3)\big] + \big[K\big(\tfrac{h}{2}\big)^2+O(h^3)\big] \\ &=\phi(t_n+h)+\half Kh^2+O(h^3) \end{align*}
The difference is
 5 
Recall that every time the symbol \(O(h^3)\) is used it can stand for a different function that is bounded by some constant times \(h^3\) for small \(h\text{.}\) Thus \(O(h^3)-O(h^3)\) need not be zero, but is \(O(h^3)\text{.}\) What is important here is that if \(K\) is not zero and if \(h\) is very small, then \(O(h^3)\) is much smaller than \(\half Kh^2\text{.}\)
\begin{align*} A_{1,n+1}-A_{2,n+1}&=\big[\phi(t_n+h)+Kh^2+O(h^3)\big] -\big[\phi(t_n+h)+\half Kh^2+O(h^3)\big] \\ &=\half Kh^2+O(h^3) \end{align*}
So if we do one step of both Euler and Euler-2step, we can estimate
\begin{equation*} \half Kh^2=A_{1,n+1}-A_{2,n+1}+O(h^3) \end{equation*}
We now know that in the step just completed Euler-2step introduced an error of about \(\half Kh^2\approx A_{1,n+1}-A_{2,n+1}\text{.}\) That is, the current error rate is about \(r=\frac{|A_{1,n+1}-A_{2,n+1}|}{h}\approx\half |K| h\) per unit increase of \(t\text{.}\)
  • If this \(r=\frac{|A_{1,n+1}-A_{2,n+1}|}{h}>\varepsilon\text{,}\) we reject \(A_{2,n+1}\) and repeat the current step with a new trial step size chosen so that \(\half |K|(\text{new }h)\lt\varepsilon\text{,}\) i.e. \(\frac{r}{h}(\text{new }h)\lt\varepsilon\text{.}\) To give ourselves a small safety margin, we could use
     6 
    The measured error rate, \(r\text{,}\) is bigger than the desired error rate \(\varepsilon\text{.}\) That means that it is harder to get the accuracy we want than we thought. So we have to take smaller steps.
     7 
    We don’t want to make the new \(h\) too close to \(\frac{\varepsilon}{r}h\) since we are only estimating things and we might end up with an error rate bigger than \(\varepsilon\text{.}\) On the other hand, we don’t want to make the new \(h\) too small because that means too much work, so we choose it to be just a little smaller than \(\frac{\varepsilon}{r}h\) \(\ldots\) say \(0.9\frac{\varepsilon}{r}h\text{.}\)
    \begin{equation*} \text{new }h=0.9\,\frac{\varepsilon}{r}\,h \end{equation*}
  • If \(r=\frac{|A_{1,n+1}-A_{2,n+1}|}{h}\lt\varepsilon\text{,}\) we can accept \(A_{2,n+1}\) as an approximate value for \(y(t_{n+1})\text{,}\) with \(t_{n+1}=t_n+h\text{,}\) and move on to the next step, starting with the new trial step size
     8 
    The measured error rate, \(r\text{,}\) is smaller than the desired error rate \(\varepsilon\text{.}\) That means that it is easier to get the accuracy we want than we thought. So we can make the next step larger.
     9 
    Note that in this case \(\frac{\varepsilon}{r}>1\text{.}\) So the new \(h\) can be bigger than the last \(h\text{.}\)
    \begin{equation*} \text{new } h=0.9\,\frac{\varepsilon}{r}\,h \end{equation*}
That is our preliminary version of the Euler/Euler-2step variable step size method. We call it the preliminary version, because we will shortly tweak it to get a much more efficient procedure.
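Here is one possible encoding of a single step of this preliminary method, as a Python sketch (the helper name is ours). It plugs into the generic driver sketched at the start of this section with \(p=1\text{,}\) and it evaluates \(f(t_n,y_n)\) only once, in the spirit of requirement 2 above.
```python
def euler_euler2step_prelim(f):
    """One step of the preliminary Euler/Euler-2step method."""
    def step(t, y, h):
        s = f(t, y)                      # f(t_n, y_n), reused below
        A1 = y + h * s                   # Euler: one step of size h
        y_mid = y + 0.5 * h * s          # Euler-2step: first half-step
        A2 = y_mid + 0.5 * h * f(t + 0.5 * h, y_mid)   # second half-step
        r = abs(A1 - A2) / h             # error rate per unit increase of t
        return A2, r                     # the preliminary version accepts A2
    return step
```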

Example D.3.1.

As a concrete example, suppose that our problem is
\begin{equation*} y(0)=e^{-2},\ y'=8(1-2t)y,\ \varepsilon=0.1 \end{equation*}
and that we have gotten as far as
\begin{equation*} t_n=0.33,\ y_n=0.75,\ \ \text{trial }h=0.094 \end{equation*}
Then, using \(E=|A_{1,n+1}-A_{2,n+1}|\) to denote the magnitude of the estimated local truncation error in \(A_{2,n+1}\) and \(r\) the corresponding error rate
\begin{align*} f(t_n,y_n)&=8(1-2\times 0.33) 0.75=2.04 \\ A_{1,n+1}&=y_n+hf(t_n,y_n)=0.75+0.094\times 2.04=0.942 \\ y_{\rm mid}&=y_n+\tfrac{h}{2}f(t_n,y_n) =0.75+\tfrac{0.094}{2}\times 2.04=0.846 \\ f\big(t_n+\tfrac{h}{2},y_{\rm mid}\big) &=8\Big[1-2\big(0.33+\tfrac{0.094}{2}\big)\Big]0.846=1.66 \\ A_{2,n+1}&=y_{\rm mid}+\tfrac{h}{2}f(t_n+\tfrac{h}{2},y_{\rm mid}) =0.846+\tfrac{0.094}{2}1.66=0.924\\ E&=|A_{1,n+1}-A_{2,n+1}|=|0.942-0.924|=0.018\\ r&=\frac{|E|}{h}=\frac{0.018}{0.094}=0.19 \end{align*}
Since \(r=0.19>\varepsilon=0.1\,\text{,}\) the current step size is unacceptable and we have to recompute with the new step size
\begin{equation*} \text{new } h=0.9\frac{\varepsilon}{r}(\text{old }h) =0.9\ \frac{0.1}{0.19}\ 0.094 =0.045 \end{equation*}
to give
\begin{align*} f(t_n,y_n)&=8(1-2\times0.33)0.75=2.04 \\ A_{1,n+1}&=y_n+hf(t_n,y_n)=0.75+0.045\times 2.04=0.842 \\ y_{\rm mid}&=y_n+\tfrac{h}{2}f(t_n,y_n) =0.75+\tfrac{0.045}{2}\times 2.04=0.796 \\ f\big(t_n+\tfrac{h}{2},y_{\rm mid}\big) &=8\Big[1-2\big(0.33+\tfrac{0.045}{2}\big)\Big]0.796=1.88 \\ A_{2,n+1}&=y_{\rm mid} +\tfrac{h}{2}f(t_n +\tfrac{h}{2},y_{\rm mid}) =0.796+\tfrac{0.045}{2}1.88 =0.838 \\ E&=|A_{1,n+1}-A_{2,n+1}|=|0.842-0.838|=0.004 \\ r&=\frac{|E|}{h}=\frac{0.004}{0.045}=0.09 \end{align*}
This time \(\,r=0.09\lt \varepsilon=0.1\,\text{,}\) is acceptable so we set \(t_{n+1}=0.33+0.045=0.375\) and
\begin{equation*} y_{n+1}=A_{2,n+1}=0.838 \end{equation*}
The initial trial step size from \(t_{n+1}\) to \(t_{n+2}\) is
\begin{equation*} \text{new }h = 0.9\frac{\varepsilon}{r}(\text{old }h) =0.9\,\frac{0.1}{0.09}\,0.045=0.045 \end{equation*}
By a fluke, it has turned out that the new \(h\) is the same as the old \(h\) (to three decimal places). If \(r\) had been significantly smaller than \(\varepsilon\text{,}\) then the new \(h\) would have been significantly bigger than the old \(h\text{,}\) indicating that it is (relatively) easy to estimate things in this region, making a larger step size sufficient.
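The numbers in this example can be reproduced, up to roundoff in the last digit (the text rounds intermediate values to three figures while the code does not), with the sketch above; the snippet below assumes the euler_euler2step_prelim helper defined earlier.
```python
f = lambda t, y: 8 * (1 - 2 * t) * y     # right hand side of the ODE

step = euler_euler2step_prelim(f)
A2, r = step(0.33, 0.75, 0.094)          # r ≈ 0.188 > eps = 0.1: reject
h_new = 0.9 * (0.1 / r) * 0.094          # ≈ 0.045
A2, r = step(0.33, 0.75, h_new)          # A2 ≈ 0.838, r ≈ 0.08: accept
```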
As we said above, we will shortly upgrade the above variable step size method, that we are calling the preliminary version of the Euler/Euler-2step method, to get a much more efficient procedure. Before we do so, let’s pause to investigate a little how well our preliminary procedure does at controlling the rate of error production.
We have been referring, loosely, to \(\varepsilon\) as the desired rate for introduction of error, by our variable step size method, as \(t\) advances. If the rate of increase of error were exactly \(\varepsilon\text{,}\) then at final time \(t_f\) the accumulated error would be exactly \(\varepsilon(t_f-t_0)\text{.}\) But our algorithm actually chooses the step size \(h\) for each step so that the estimated local truncation error in \(A_{2,n+1}\) for that step is about \(\varepsilon h\text{.}\) We have seen that, once some local truncation error has been introduced, its contribution to the global truncation error can grow exponentially with \(t_f\text{.}\)
Here are the results of a numerical experiment that illustrate this effect. In this experiment, the above preliminary Euler/Euler-2step method is applied to the initial value problem \(\ y'=t-2y,\ y(0)=3 \ \) for \(\varepsilon=\frac{1}{16},\frac{1}{32},\cdots\) (ten different values) and for \(t_f=0.2,\ 0.4,\ \cdots,\ 3.8\text{.}\) Here is a plot of the resulting \(\frac{\text{actual error at }t=t_f}{\varepsilon t_f}\) against \(t_f\text{.}\)
If the rate of introduction of error were exactly \(\varepsilon\text{,}\) we would have \(\frac{\text{actual error at }t=t_f}{\varepsilon t_f}=1\text{.}\) There is a small square on the graph for each different pair \(\varepsilon,t_f\text{.}\) So for each value of \(t_f\) there are ten (possibly overlapping) squares on the line \(x=t_f\text{.}\) This numerical experiment suggests that \(\frac{\text{actual error at }t=t_f}{\varepsilon t_f}\) is relatively independent of \(\varepsilon\) and starts, when \(t_f\) is small, at about one, as we want, but grows (perhaps exponentially) with \(t_f\text{.}\)
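This experiment is easy to rerun. The initial value problem has the exact solution \(y(t)=\frac{t}{2}-\frac{1}{4}+\frac{13}{4}e^{-2t}\) (substitute it into \(y'=t-2y\) to check), so the error ratio can be computed directly. The following sketch assumes the adaptive_solve driver and euler_euler2step_prelim helper from earlier; the initial trial step size h0=eps is our own arbitrary choice, and we print the ratios rather than plot them.
```python
from math import exp

def exact(t):                       # exact solution of y' = t - 2y, y(0) = 3
    return t / 2 - 0.25 + 3.25 * exp(-2 * t)

f = lambda t, y: t - 2 * y
step = euler_euler2step_prelim(f)

for k in range(4, 14):              # eps = 1/16, 1/32, ..., ten values
    eps = 2.0 ** (-k)
    for i in range(1, 20):          # t_f = 0.2, 0.4, ..., 3.8
        t_f = 0.2 * i
        y = adaptive_solve(step, 1, 0.0, 3.0, t_f, eps, h0=eps)
        print(t_f, abs(y - exact(t_f)) / (eps * t_f))
```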

Subsection D.3.2 Euler and Euler-2step (final version)

We are now ready to use a sneaky bit of arithmetic to supercharge our Euler/Euler-2step method. As in our development of the preliminary version of the method, denote by \(\phi(t)\) the exact solution to \(y'=f(t,y)\) that satisfies the initial condition \(\phi(t_n)=y_n\text{.}\) We have seen, at the beginning of §D.3.1, that applying one step of Euler with step size \(h\) gives
\begin{align*} A_{1,n+1}&=y_n+hf(t_n,y_n) \notag\\ &=\phi(t_n+h)+Kh^2+O(h^3) \tag{E5} \end{align*}
and applying one step of Euler-2step with step size \(h\) (i.e. applying two steps of Euler with step size \(h/2\)) gives
\begin{align*} A_{2,n+1} &= y_n+\tfrac{h}{2}f(t_n,y_n) +\tfrac{h}{2}f\big(t_n+\tfrac{h}{2}\,,\,y_n+\tfrac{h}{2}f(t_n,y_n)\big)\notag\\ &=\phi(t_n+h)+\tfrac{1}{2} Kh^2+O(h^3) \tag{E6} \end{align*}
because the local truncation error introduced in the first half-step was \(K(h/2)^2+O(h^3)\) and that introduced in the second half-step was \(K(h/2)^2+O(h^3)\text{.}\) Now here is the sneaky bit. Equations (E5) and (E6) are very similar and we can eliminate all \(Kh^2\)’s by subtracting (E5) from \(2\) times (E6). This gives
\begin{equation*} 2\text{(E6)} - \text{(E5):}\qquad 2A_{2,n+1}-A_{1,n+1} = \phi(t_n+h) +O(h^3) \end{equation*}
(no more \(h^2\) term!) or
\begin{equation*} \phi(t_n+h)= 2A_{2,n+1}-A_{1,n+1}+O(h^3) \tag{E7} \end{equation*}
which tells us that choosing
\begin{equation*} y_{n+1}=2A_{2,n+1}-A_{1,n+1} \tag{E8} \end{equation*}
would give a local truncation error of order \(h^3\text{,}\) rather than the order \(h^2\) of the preliminary Euler/Euler-2step method. To convert the preliminary version to the final version, we just replace \(y_{n+1}=A_{2,n+1}\) by \(y_{n+1} = 2A_{2,n+1}-A_{1,n+1}\text{,}\) leaving the error estimate and the step size update unchanged.
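In code, the upgrade is a one line change to our earlier sketch: the error estimate and the step size control are untouched, but the accepted value becomes \(2A_{2,n+1}-A_{1,n+1}\text{.}\) As before, the helper name is our own.
```python
def euler_euler2step_final(f):
    """One step of the final Euler/Euler-2step method (driver p = 1)."""
    def step(t, y, h):
        s = f(t, y)
        A1 = y + h * s
        y_mid = y + 0.5 * h * s
        A2 = y_mid + 0.5 * h * f(t + 0.5 * h, y_mid)
        r = abs(A1 - A2) / h             # step size control is unchanged ...
        return 2 * A2 - A1, r            # ... but we accept 2 A2 - A1, not A2
    return step
```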
Let’s think a bit about how our final Euler/Euler-2step method should perform.
  • The step size here, as in the preliminary version, is chosen so that the local truncation error in \(A_{2,n+1}\) per unit increase of \(t\text{,}\) namely \(r=\frac{|A_{1,n+1}-A_{2,n+1}|}{h}\approx\frac{|K|h^2/2}{h}=\frac{|K|}{2}h\text{,}\) is approximately \(\varepsilon\text{.}\) So \(h\) is roughly proportional to \(\varepsilon\text{.}\)
  • On the other hand, (E7) shows that, in the full method, local truncation error is being added to \(y_{n+1}\) at a rate of \(\frac{O(h^3)}{h}=O(h^2)\) per unit increase in \(t\text{.}\)
  • So one would expect the local truncation error to increase the total error at a rate proportional to \(\varepsilon^2\) per unit increase in \(t\text{.}\)
  • If the rate of increase of error were exactly a constant times \(\varepsilon^2\text{,}\) then the error accumulated between the initial time \(t=0\) and the final time \(t=t_f\) would be exactly a constant times \(\varepsilon^2\,t_f\text{.}\)
  • However we have seen that, once some local truncation error has been introduced, its contribution to the global error can grow exponentially with \(t_f\text{.}\) So we would expect, under the full Euler/Euler-2step method, \(\frac{{\rm actual\ error\ at\ }t=t_f}{\varepsilon^2 t_f}\) to be more or less independent of \(\varepsilon\text{,}\) but still to grow exponentially in \(t_f\text{.}\)
Here are the results of a numerical experiment that illustrate this. In this experiment, the above final Euler/Euler-2step method, (D.3.2), is applied to the initial value problem \(\ y'=t-2y,\ y(0)=3 \ \) for \(\varepsilon=\frac{1}{16},\frac{1}{32},\cdots\) (ten different values) and for \(t_f=0.2,\ 0.4,\ \cdots,\ 3.8\text{.}\) In the following plot, there is a small square for the resulting \(\frac{\text{actual error at }t=t_f}{\varepsilon^2 t_f}\) for each different pair \(\varepsilon,t_f\text{.}\)
It does indeed look like \(\frac{{\rm actual\ error\ at\ }t=t_f}{\varepsilon^2 t_f}\) is relatively independent of \(\varepsilon\) but grows (perhaps exponentially) with \(t_f\text{.}\) Note that \(\frac{{\rm actual\ error\ at\ }t=t_f}{\varepsilon^2 t_f}\) contains a factor of \(\varepsilon^2\) in the denominator. The actual error rate \(\frac{{\rm actual\ error\ at\ }t=t_f}{t_f}\) is much smaller than is suggested by the graph.

Subsection D.3.3 Fehlberg’s Method

Of course, in practice more efficient and more accurate methods than Euler and Euler-2step are used.
 10 
There are a very large number of such methods. We will only look briefly at a couple of the simpler ones. The interested reader can find more by search engining for such keywords as “Runge-Kutta methods” and “adaptive step size”.
Fehlberg’s method uses improved Euler and a second more accurate method.
 11 
E. Fehlberg, NASA Technical Report R315 (1969) and NASA Technical Report R287 (1968).
Each step involves three calculations of \(f\text{:}\)
\begin{align*} f_{1,n}&=f(t_n,y_n) \\ f_{2,n}&=f(t_n+h,y_n+hf_{1,n}) \\ f_{3,n}&=f\left(t_n+\tfrac{h}{2},y_n+\tfrac{h}{4}[f_{1,n}+f_{2,n}]\right) \end{align*}
Once these three evaluations have been made, the method generates two approximations for \(y(t_n+h)\text{:}\)
\begin{align*} A_{1,n+1}&=y_n+\tfrac{h}{2}\left[f_{1,n}+f_{2,n}\right] \\ A_{2,n+1}&=y_n+\tfrac{h}{6}\left[f_{1,n}+f_{2,n}+4f_{3,n}\right] \end{align*}
Denote by \(\phi(t)\) the exact solution to \(y'=f(t,y)\) that satisfies the initial condition \(\phi(t_n)=y_n\text{.}\) Now \(A_{1,n+1}\) is just the \(y_{n+1}\) produced by the improved Euler’s method. The local truncation error for the improved Euler’s method is of order \(h^3\text{,}\) one power of \(h\) smaller than that for Euler’s method. So
\begin{equation*} A_{1,n+1} = \phi(t_n+h) + Kh^3+O(h^4) \end{equation*}
and it turns out that
 12 
The interested reader can find Fehlberg’s original paper online (at NASA!) and follow the derivation. It requires careful Taylor expansions and then clever algebra to cancel the bigger error terms.
\begin{equation*} A_{2,n+1} = \phi(t_n+h) + O(h^4) \end{equation*}
So the error in \(A_{1,n+1}\) is
\begin{align*} E&=\big|Kh^3+O(h^4)\big| =\big|A_{1,n+1}-\phi(t_n+h)\big|+O(h^4) \\ &=\big|A_{1,n+1}-A_{2,n+1}\big|+O(h^4) \end{align*}
and our estimate for the rate at which error is being introduced into \(A_{1,n+1}\) is
\begin{equation*} r=\frac{|A_{1,n+1}-A_{2,n+1}|}{h}\approx |K|h^2 \end{equation*}
per unit increase of \(t\text{.}\)
  • If \(r>\varepsilon\) we redo this step with a new trial step size chosen so that \(|K|(\text{new }h)^2\lt\varepsilon\text{,}\) i.e. \(\frac{r}{h^2}(\text{new }h)^2\lt\varepsilon\text{.}\) With our traditional safety factor, we take
    \begin{equation*} \text{new }h=0.9\sqrt{\frac{\varepsilon}{r}}\,h\qquad \text{(the new }h\text{ is smaller)} \end{equation*}
  • If \(r\le \varepsilon\) we set \(t_{n+1}=t_n+h\) and \(y_{n+1}=A_{2,n+1}\) (since \(A_{2,n+1}\) should be considerably more accurate than \(A_{1,n+1}\)) and move on to the next step with trial step size
    \begin{equation*} \text{new }h=0.9\sqrt{\frac{\varepsilon}{r}}\,h\qquad \text{(the new }h\text{ is usually bigger)} \end{equation*}
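For concreteness, here is a sketch of a single Fehlberg step in the same style as our earlier code (the helper name is ours); it plugs into the generic driver with \(p=2\text{,}\) matching the square root in the update rule above.
```python
def fehlberg(f):
    """One step of Fehlberg's method (driver p = 2)."""
    def step(t, y, h):
        f1 = f(t, y)
        f2 = f(t + h, y + h * f1)
        f3 = f(t + 0.5 * h, y + 0.25 * h * (f1 + f2))
        A1 = y + 0.5 * h * (f1 + f2)              # improved Euler
        A2 = y + h / 6.0 * (f1 + f2 + 4.0 * f3)   # the more accurate method
        r = abs(A1 - A2) / h                      # error rate in A1
        return A2, r                              # accept the better A2
    return step
```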

Subsection D.3.4 The Kutta-Merson Process

The Kutta-Merson process uses two variations of the Runge-Kutta method.
 13 
R.H. Merson, “An operational method for the study of integration processes”, Proc. Symp. Data Processing, Weapons Res. Establ., Salisbury (1957), pp. 110–125.
 14 
Like the other methods described above, the coefficients \(1/3\text{,}\) \(1/6\text{,}\) \(1/8\) etc. are chosen so as to cancel larger error terms. While determining the correct choice of coefficients is not conceptually difficult, it does take some work and is beyond the scope of this appendix. The interested reader should search-engine their way to a discussion of adaptive Runge-Kutta methods.
Each step involves five calculations of \(f\text{:}\)
\begin{align*} k_{1,n}&=f(t_n,y_n) \\ k_{2,n}&=f\big(t_n+\tfrac{1}{3}h,y_n+\tfrac{1}{3}hk_{1,n}\big)\\ k_{3,n}&= f\big(t_n+\tfrac{1}{3}h,y_n+\tfrac{1}{6}hk_{1,n}+\tfrac{1}{6}hk_{2,n}\big) \\ k_{4,n}&= f\big(t_n+\tfrac{1}{2}h,y_n+\tfrac{1}{8}hk_{1,n}+\tfrac{3}{8}hk_{3,n}\big) \\ k_{5,n}&= f\big(t_n+h,y_n+\tfrac{1}{2}hk_{1,n}-\tfrac{3}{2}hk_{3,n}+2hk_{4,n}\big) \end{align*}
Once these five evaluations have been made, the process generates two approximations for \(y(t_n+h)\text{:}\)
\begin{align*} A_{1,n+1}&=y_n+h\left[\tfrac{1}{2}k_{1,n}-\tfrac{3}{2}k_{3,n}+2k_{4,n}\right] \\ A_{2,n+1}&=y_n+h\left[\tfrac{1}{6}k_{1,n}+\tfrac{2}{3}k_{4,n} +\tfrac{1}{6}k_{5,n}\right] \end{align*}
The (signed) error in \(A_{1,n+1}\) is \(\frac{1}{120}h^5K+O(h^6)\) while that in \(A_{2,n+1}\) is \(\frac{1}{720}h^5K+O(h^6)\) with the same constant \(K\text{.}\) So \(A_{1,n+1}-A_{2,n+1} = \frac{5}{720}Kh^5+O(h^6)\) and the unknown constant \(K\) can be determined, to within an error \(O(h)\text{,}\) by
\begin{equation*} K=\frac{720}{5\,h^5}(A_{1,n+1}-A_{2,n+1}) \end{equation*}
and the approximate (signed) error in \(A_{2,n+1}\) and its corresponding rate per unit increase of \(t\) are
\begin{align*} E&=\frac{1}{720}K h^5=\frac{1}{5}(A_{1,n+1}-A_{2,n+1}) \\ r=\frac{|E|}{h}&=\frac{1}{720}|K| h^4=\frac{1}{5\,h}\big|A_{1,n+1}-A_{2,n+1}\big| \end{align*}
  • If \(r>\varepsilon\) we redo this step with a new trial step size chosen so that \(\frac{1}{720}|K|(\text{new }h)^4\lt\varepsilon\text{,}\) i.e. \(\frac{r}{h^4}(\text{new }h)^4\lt\varepsilon\text{.}\) With our traditional safety factor, we take
    \begin{equation*} \text{new }h=0.9\left(\frac{\varepsilon}{r}\right)^{1/4}\,h \end{equation*}
  • If \(r\le \varepsilon\) we set \(t_{n+1}=t_n+h\) and \(y_{n+1}=A_{2,n+1}-E\) (since \(E\) is our estimate of the signed error in \(A_{2,n+1}\)) and move on to the next step with trial step size
    \begin{equation*} \text{new }h=0.9\left(\frac{\varepsilon}{r}\right)^{1/4}\,h \end{equation*}
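In the same style, here is a sketch of one Kutta-Merson step (the helper name is ours, and it plugs into the generic driver with \(p=4\)); note that, unlike the previous methods, the accepted value is \(A_{2,n+1}-E\text{.}\)
```python
def kutta_merson(f):
    """One step of the Kutta-Merson process (driver p = 4)."""
    def step(t, y, h):
        k1 = f(t, y)
        k2 = f(t + h / 3, y + h / 3 * k1)
        k3 = f(t + h / 3, y + h / 6 * k1 + h / 6 * k2)
        k4 = f(t + h / 2, y + h / 8 * k1 + 3 * h / 8 * k3)
        k5 = f(t + h, y + h / 2 * k1 - 3 * h / 2 * k3 + 2 * h * k4)
        A1 = y + h * (k1 / 2 - 3 * k3 / 2 + 2 * k4)
        A2 = y + h * (k1 / 6 + 2 * k4 / 3 + k5 / 6)
        E = (A1 - A2) / 5.0              # estimated signed error in A2
        r = abs(E) / h                   # error rate per unit increase of t
        return A2 - E, r                 # accept A2 minus its estimated error
    return step
```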

Subsection D.3.5 The Local Truncation Error for Euler-2step

In our description of Euler/Euler-2step above we simply stated the local truncation error without an explanation. In this section, we show how it may be derived. We note that very similar calculations underpin the other methods we have described.
In this section, we will be using partial derivatives and, in particular, the chain rule for functions of two variables. That material is covered in Chapter 2 of the CLP-3 text. If you are not yet comfortable with it, you can either take our word for those bits, or you can delay reading this section until you have learned a little multivariable calculus.
Recall that, by definition, the local truncation error for an algorithm is the (signed) error generated by a single step of the algorithm, under the assumptions that we start the step with the exact solution and that there is no roundoff error.
 15 
We should note that in serious big numerical computations, one really does have to take rounding errors into account because they can cause serious problems. The interested reader should search-engine their way to the story of Edward Lorenz’s numerical simulations and the beginnings of chaos theory. Unfortunately we simply do not have space in this text to discuss all aspects of mathematics.
Denote by \(\phi(t)\) the exact solution to
\begin{align*} y'(t)&=f(t,y) \\ y(t_n)&=y_n \end{align*}
In other words, \(\phi(t)\) obeys
\begin{align*} \phi'(t) &= f\big(t,\phi(t)\big)\qquad\text{ for all }t \\ \phi(t_n)&=y_n \end{align*}
In particular \(\phi'(t_n)=f\big(t_n,\phi(t_n)\big)=f(t_n,y_n)\) and, carefully using the chain rule, which is (2.4.2) in the CLP-3 text,
\begin{align*} \phi''(t_n)&=\diff{}{t}f\big(t,\phi(t)\big)\Big|_{t=t_n} =\Big[f_t\big(t,\phi(t)\big)+f_y\big(t,\phi(t)\big)\phi'(t)\Big]_{t=t_n} \\ &=f_t(t_n,y_n)+f_y(t_n,y_n)\,f(t_n,y_n) \tag{E9} \end{align*}
Remember that \(f_t\) is the partial derivative of \(f\) with respect to \(t\text{,}\) and that \(f_y\) is the partial derivative of \(f\) with respect to \(y\text{.}\) We’ll need (E9) below.
By definition, the local truncation error for Euler is
\begin{equation*} E_1(h)=\phi(t_n+h)-y_n-hf\big(t_n,y_n\big) \end{equation*}
while that for Euler-2step is
\begin{equation*} E_2(h)=\phi(t_n+h)-y_n-\tfrac{h}{2}f(t_n,y_n) -\tfrac{h}{2}f\big(t_n+\tfrac{h}{2},y_n+\tfrac{h}{2}f(t_n,y_n)\big) \end{equation*}
To understand how \(E_1(h)\) and \(E_2(h)\) behave for small \(h\) we can use Taylor expansions ((3.4.10) in the CLP-1 text) to write them as power series in \(h\text{.}\) To be precise, we use
\begin{equation*} g(h)=g(0)+g'(0)\,h+\half g''(0)\,h^2+O(h^3) \end{equation*}
to expand both \(E_1(h)\) and \(E_2(h)\) in powers of \(h\) to order \(h^2\text{.}\) Note that, in the expression for \(E_1(h)\text{,}\) \(t_n\) and \(y_n\) are constants — they do not vary with \(h\text{.}\) So computing derivatives of \(E_1(h)\) with respect to \(h\) is actually quite simple.
\begin{alignat*}{2} E_1(h)&=\phi(t_n+h)-y_n-hf\big(t_n,y_n\big)\qquad & E_1(0)&=\phi(t_n)-y_n=0 \\ E'_1(h)&=\phi'(t_n+h)-f\big(t_n,y_n\big) & E'_1(0)&=\phi'(t_n)-f\big(t_n,y_n\big)=0 \\ E''_1(h)&=\phi''(t_n+h) & E''_1(0)&=\phi''(t_n) \end{alignat*}
By Taylor, the local truncation error for Euler obeys
\begin{equation*} E_1(h)=E_1(0)+E_1'(0)\,h+\half E_1''(0)\,h^2+O(h^3) =\half\phi''(t_n)\,h^2+O(h^3) =Kh^2+O(h^3) \tag{D.3.3} \end{equation*}
where \(K=\half\phi''(t_n)\text{.}\)
Computing derivatives of \(E_2(h)\) with respect to \(h\) is a little harder, since \(h\) now appears in the arguments of the function \(f\text{.}\) As a consequence, we have to include some partial derivatives.
\begin{align*} E_2(h)&=\phi(t_n+h)-y_n-\frac{h}{2}f(t_n,y_n) -\frac{h}{2}f\Big(t_n+\frac{h}{2},y_n+\frac{h}{2}f(t_n,y_n)\Big) \\ E'_2(h)&=\phi'(t_n+h)-\frac{1}{2}f(t_n,y_n) -\frac{1}{2}f\Big(t_n+\frac{h}{2},y_n+\frac{h}{2}f(t_n,y_n)\Big) \\ &\hskip2.5in-\frac{h}{2} \underbrace{\diff{}{h} f\Big(t_n+\frac{h}{2},y_n+\frac{h}{2}f(t_n,y_n)\Big)}_{\text{leave this expression as is for now}} \\ E''_2(h)&=\phi''(t_n+h) -2\times\frac{1}{2} \underbrace{\diff{}{h}f\Big(t_n+\frac{h}{2},y_n+\frac{h}{2}f(t_n,y_n)\Big)}_{\text{leave this one too}} \\ &\hskip2.5in-\frac{h}{2} \underbrace{\frac{\mathrm{d^2}}{\mathrm{d}h^2}f\Big(t_n+\frac{h}{2},y_n+\frac{h}{2}f(t_n,y_n)\Big)}_{\text{and leave this one too}} \end{align*}
Since we only need \(E_2(h)\) and its derivatives at \(h=0\text{,}\) we don’t have to compute the \(\frac{\mathrm{d}^2 f}{\mathrm{d}h^2}\) term (thankfully) and we also do not need to compute the \(\diff{f}{h}\) term in \(E_2'\text{.}\) We do, however, need \(\diff{f}{h}\Big|_{h=0}\) for \(E_2''(0)\text{.}\)
\begin{align*} E_2(0)&=\phi(t_n)-y_n=0 \\ E'_2(0)&=\phi'(t_n)-\frac{1}{2}f(t_n,y_n)-\frac{1}{2}f(t_n,y_n)=0 \\ E''_2(0)&=\phi''(t_n)-\diff{}{h} f\Big(t_n+\frac{h}{2},y_n+\frac{h}{2}f(t_n,y_n)\Big)\Big|_{h=0} \\ &=\phi''(t_n)- \frac{1}{2} f_t\Big(t_n+\frac{h}{2},y_n+\frac{h}{2}f(t_n,y_n)\Big)\Big|_{h=0} \\ &\hskip1.5in -\frac{1}{2} f(t_n,y_n)\, f_y\Big(t_n+\frac{h}{2},y_n+\frac{h}{2}f(t_n,y_n)\Big)\Big|_{h=0} \\ &=\phi''(t_n)- \frac{1}{2}f_t(t_n,y_n)-\frac{1}{2}f_y(t_n,y_n)\, f(t_n,y_n) \\ &=\half\phi''(t_n)\qquad\text{by (E9)} \end{align*}
By Taylor, the local truncation error for Euler-2step obeys
\begin{equation*} E_2(h)=E_2(0)+E_2'(0)\,h+\half E_2''(0)\,h^2+O(h^3) =\tfrac{1}{4}\phi''(t_n)\,h^2+O(h^3) =\half Kh^2+O(h^3) \tag{D.3.4} \end{equation*}
with the same \(K=\half\phi''(t_n)\text{.}\)
Observe that the \(K\) in (D.3.4) is identical to the \(K\) in (D.3.3). This is exactly what we needed in our analysis of Sections D.3.1 and D.3.2.
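The Taylor expansions above are exactly the kind of computation a computer algebra system is happy to check. The following sympy sketch (the variable names are ours) verifies the key facts behind (D.3.3) and (D.3.4): writing the Euler-2step update as \(y_{n+1}=y_n+g(h)\text{,}\) it confirms that \(g(0)=0\text{,}\) \(g'(0)=f(t_n,y_n)\) and \(g''(0)=\half\phi''(t_n)\text{,}\) which is what gives \(E_2''(0)=\half\phi''(t_n)\text{.}\)
```python
import sympy as sp

t, y, h = sp.symbols('t y h')
f = sp.Function('f')

F = f(t, y)                                   # f(t_n, y_n)
phi2 = sp.diff(F, t) + sp.diff(F, y) * F      # phi''(t_n), by (E9)

# the Euler-2step increment: y_{n+1} = y_n + g(h)
g = h / 2 * F + h / 2 * f(t + h / 2, y + h / 2 * F)

g0 = g.subs(h, 0)                             # expect 0
g1 = sp.diff(g, h).subs(h, 0).doit()          # expect F
g2 = sp.diff(g, h, 2).subs(h, 0).doit()       # expect phi2 / 2

print(sp.simplify(g0))                        # 0
print(sp.simplify(g1 - F))                    # 0
print(sp.simplify(g2 - phi2 / 2))             # 0
```
Since \(E_2(h)=\phi(t_n+h)-y_n-g(h)\text{,}\) these three facts give \(E_2(0)=E_2'(0)=0\) and \(E_2''(0)=\phi''(t_n)-\half\phi''(t_n)=\half\phi''(t_n)\text{,}\) exactly as computed by hand above.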