\begin{equation*}
\cA=B(h)+\tilde K h^{k+1}+\tilde K_1 h^{k+2}+\cdots
\end{equation*}
which says that \(B(h)\) is an approximation to \(\cA\) whose error is of order \(k+1\text{,}\) one better than \(A(h)\)'s.\footnote{That is, the error decays as \(h^{k+1}\) as opposed to \(h^k\text{;}\) so, as \(h\) decreases, it gets smaller faster.}
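For concreteness, here is a minimal Python sketch of a single extrapolation step, assuming that (E6) has the form \(B(h)=\frac{2^kA(h/2)-A(h)}{2^k-1}\) (the form consistent with the formulae for \(T_1\text{,}\) \(T_2\text{,}\) \(T_3\) below).
\begin{verbatim}
def richardson_step(A, h, k):
    """One Richardson extrapolation step: combine A(h) and A(h/2),
    each accurate to order k, into B(h), accurate to order k+1
    (or better, if the order k+1 error coefficient happens to vanish)."""
    return (2**k * A(h / 2) - A(h)) / (2**k - 1)
\end{verbatim}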
If \(A(h)\) has been computed for three values of \(h\text{,}\) we can generate \(B(h)\) for two values of \(h\) and repeat the above procedure with a new value of \(k\text{.}\) And so on. One widely used numerical integration algorithm, called Romberg integration\footnote{Romberg integration was introduced by the German Werner Romberg (1909--2003) in 1955.}, applies this procedure repeatedly to the trapezoidal rule. It is known that the trapezoidal rule approximation \(T(h)\) to an integral \(I\) has error behaviour (assuming that the integrand \(f(x)\) is smooth)
\begin{align*}
T(h)& &&\text{ has error of order 2}\\
\intertext{so that, using (E6) with \(k=2\text{,}\)}
T_1(h)&=\tfrac{4T(h/2)-T(h)}{3}&&\text{ has error of order 4}\\
\intertext{so that, using (E6) with \(k=4\text{,}\)}
T_2(h)&=\tfrac{16T_1(h/2)-T_1(h)}{15}&&\text{ has error of order 6}\\
\intertext{so that, using (E6) with \(k=6\text{,}\)}
T_3(h)&=\tfrac{64T_2(h/2)-T_2(h)}{63}&&\text{ has error of order 8}
\end{align*}
and so on. We know another method which produces an error of order 4 — Simpson’s rule. In fact, \(T_1(h)\) is exactly Simpson’s rule (for step size \(\frac{h}{2}\)).
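To check that last claim, write \(f_j=f\big(a+j\tfrac{h}{2}\big)\) for the values of the integrand on the finer grid, so that \(T(h)\) uses the even-indexed points and \(T(h/2)\) uses all of them. Then
\begin{align*}
T_1(h)=\frac{4T(h/2)-T(h)}{3}
&=\frac{1}{3}\Big[4\cdot\frac{h}{2}\Big(\frac{f_0}{2}+f_1+f_2+\cdots+f_{2n-1}+\frac{f_{2n}}{2}\Big)
 -h\Big(\frac{f_0}{2}+f_2+f_4+\cdots+\frac{f_{2n}}{2}\Big)\Big]\\
&=\frac{h/2}{3}\big[f_0+4f_1+2f_2+4f_3+\cdots+2f_{2n-2}+4f_{2n-1}+f_{2n}\big]
\end{align*}
which is exactly the Simpson's rule sum with step size \(\frac{h}{2}\text{.}\)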
Equation C.2.1. Romberg integration.
Let \(T(h)\) be the trapezoidal rule approximation, with step size \(h\text{,}\) to an integral \(I\text{.}\) The Romberg integration algorithm is
\begin{align*}
T_1(h)&=\tfrac{4T(h/2)-T(h)}{3}\\
T_2(h)&=\tfrac{16T_1(h/2)-T_1(h)}{15}\\
T_3(h)&=\tfrac{64T_2(h/2)-T_2(h)}{63}\\
&\ \ \vdots\\
T_m(h)&=\tfrac{4^m\,T_{m-1}(h/2)-T_{m-1}(h)}{4^m-1}
\end{align*}
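Here is a minimal Python sketch of this algorithm (the helper names trapezoid and romberg, and the parameter n0 setting the coarsest step size, are our own choices, not prescribed by the text). It builds the triangular Romberg tableau row by row from trapezoidal rule values.
\begin{verbatim}
def trapezoid(f, a, b, n):
    """Trapezoidal rule T(h) with n subintervals, h = (b - a)/n."""
    h = (b - a) / n
    return h * ((f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n)))

def romberg(f, a, b, levels, n0=1):
    """Romberg tableau: T[m][j] is T_m(h) for h = (b - a)/(n0 * 2**j).
    Row 0 is the trapezoidal rule; row m is the m-th extrapolation."""
    T = [[trapezoid(f, a, b, n0 * 2**j) for j in range(levels)]]
    for m in range(1, levels):
        prev = T[-1]
        # T_m(h) = (4^m T_{m-1}(h/2) - T_{m-1}(h)) / (4^m - 1)
        T.append([(4**m * prev[j + 1] - prev[j]) / (4**m - 1)
                  for j in range(len(prev) - 1)])
    return T
\end{verbatim}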
Example C.2.2. Finding \(\pi\) by Romberg integration.
The following table\footnote{The second column, for example, of the table only reports \(5\) decimal places for \(T(h)\text{.}\) But many more decimal places of \(T(h)\) were used in the computations of \(T_1(h)\) etc.} illustrates Romberg integration by applying it to the integral \(A=\int_0^1\frac{4}{1+x^2}\dee{x}\text{.}\) The exact value of this integral is \(\pi\text{,}\) which is \(3.14159265358979\) to fourteen decimal places.
\begin{center}
\begin{tabular}{c|c|c|c|c}
\(h\) & \(T(h)\) & \(T_1(h)\) & \(T_2(h)\) & \(T_3(h)\) \\ \hline
\(1/4\) & \(3.13118\) & \(3.14159250246\) & \(3.14159266114\) & \(3.14159265359003\) \\
\(1/8\) & \(3.13899\) & \(3.141592651225\) & \(3.141592653708\) & \\
\(1/16\) & \(3.14094\) & \(3.141592653553\) & & \\
\(1/32\) & \(3.14143\) & & &
\end{tabular}
\end{center}
This computation required the evaluation of \(f(x)=\frac{4}{1+x^2}\) only for \(x=\frac{n}{32}\) with \(0\le n\le 32\) — that is, a total of \(33\) evaluations of \(f\text{.}\) Those \(33\) evaluations gave us \(12\) correct decimal places. By way of comparison, \(T\big(\frac{1}{32}\big)\) used the same \(33\) evaluations of \(f\text{,}\) but only gave us \(3\) correct decimal places.
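For the record, the hypothetical romberg helper sketched after Equation C.2.1 reproduces the table, up to the rounding the table uses:
\begin{verbatim}
f = lambda x: 4 / (1 + x * x)
T = romberg(f, 0.0, 1.0, levels=4, n0=4)  # h = 1/4, 1/8, 1/16, 1/32
print(T[0][0], T[0][3])  # T(1/4) ~ 3.13118 and T(1/32) ~ 3.14143
print(T[3][0])           # T_3(1/4) ~ 3.14159265359003
\end{verbatim}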
As we have seen, Richardson extrapolation can be used to choose the step size so as to achieve some desired degree of accuracy. We are next going to consider a family of algorithms that extend this idea to use small step sizes in the part of the domain of integration where it is hard to get good accuracy and large step sizes in the part of the domain of integration where it is easy to get good accuracy. We will illustrate the ideas by applying them to the integral \(\int_0^1 \sqrt{x}\ \dee{x}\text{.}\) The integrand \(\sqrt{x}\) changes very quickly when \(x\) is small and changes slowly when \(x\) is large. So we will make the step size small near \(x=0\) and make the step size large near \(x=1\text{.}\)
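As a small, purely illustrative preview, one way to realize variable step sizes is to run the trapezoidal rule on a graded grid such as \(x_i=(i/n)^2\text{,}\) whose spacing is about \(1/n^2\) near \(x=0\) and about \(2/n\) near \(x=1\text{.}\) This sketch is just the basic idea in code, not necessarily the algorithm developed below.
\begin{verbatim}
from math import sqrt

def graded_trapezoid(f, n):
    """Trapezoidal rule for the integral of f over [0, 1] on the graded
    grid x_i = (i/n)**2: small steps near 0, larger steps near 1."""
    x = [(i / n) ** 2 for i in range(n + 1)]
    return sum((x[i + 1] - x[i]) * (f(x[i]) + f(x[i + 1])) / 2
               for i in range(n))

# For f(x) = sqrt(x) the exact value is 2/3.  With 33 evaluations the
# graded grid gives 2/3 - 1/(6*32**2), noticeably closer to 2/3 than
# the uniform trapezoidal rule with the same 33 evaluations.
print(graded_trapezoid(sqrt, 32))
\end{verbatim}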