
CLP-2 Integral Calculus

Section C.2 Romberg Integration

The formulae (E4a,b) for \(K\) and \(\cA\) are, of course, only
 1 
“Only” is a bit strong. Don’t underestimate the power of a good approximation (pun intended).
approximate since they are based on (E2), which is an approximation to (E1). Let’s repeat the derivation that leads to (E4), but using the full (E1),
\begin{equation*} \cA=A(h)+Kh^k+K_1h^{k+1}+K_2h^{k+2}+\cdots \end{equation*}
Once again, suppose that we have chosen some \(h\) and that we have evaluated \(A(h)\) and \(A(h/2)\text{.}\) They obey
\begin{align*} \cA&=A(h)+Kh^k+K_1h^{k+1}+K_2h^{k+2}+\cdots \tag{E5a}\\ \cA&=A(h/2)+K\big(\tfrac{h}{2}\big)^k +K_1\big(\tfrac{h}{2}\big)^{k+1} +K_2\big(\tfrac{h}{2}\big)^{k+2}+\cdots \tag{E5b} \end{align*}
Now, as we did in the derivation of (E4b), multiply (E5b) by \(2^k\) and then subtract (E5a). This gives
\begin{equation*} \left(2^k-1\right)\cA=2^kA(h/2)-A(h)-\half K_1 h^{k+1}-\tfrac{3}{4} K_2 h^{k+2}+\cdots \end{equation*}
and then, dividing across by \(\left(2^k-1\right)\text{,}\)
\begin{equation*} \cA=\frac{2^kA(h/2)-A(h)}{2^k-1}-\frac{1/2}{2^k-1} K_1 h^{k+1}-\frac{3/4}{2^k-1} K_2 h^{k+2}+\cdots \end{equation*}
Hence if we define our “new improved approximation”
\begin{equation*} B(h)=\frac{2^kA(h/2)-A(h)}{2^k-1}\ \text{and}\ \tilde K = -\frac{1/2}{2^k-1}K_1\ \text{and}\ \tilde K_1 = -\frac{3/4}{2^k-1}K_2 \tag{E6} \end{equation*}
we have
\begin{equation*} \cA=B(h)+\tilde K h^{k+1}+\tilde K_1 h^{k+2}+\cdots \end{equation*}
which says that \(B(h)\) is an approximation to \(\cA\) whose error is of order \(k+1\text{,}\) one better
 2 
That is, the error decays as \(h^{k+1}\) as opposed to \(h^k\) — so, as \(h\) decreases, it gets smaller faster.
than \(A(h)\)’s.
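A single extrapolation step like (E6) is easy to carry out in code. Here is a minimal sketch in Python; the helper name `richardson_step` and the forward-difference demonstration are illustrative choices, not part of the text.

```python
import math

def richardson_step(A, h, k):
    """Given an approximation A(h) whose leading error is of order k,
    i.e. exact = A(h) + K*h**k + (higher-order terms), combine A(h) and
    A(h/2) as in (E6) to cancel the h**k term.  The result is an
    approximation whose error is of order k+1."""
    return (2**k * A(h / 2) - A(h)) / (2**k - 1)

# Illustrative demo: approximate the derivative of sin at x = 1 using the
# forward difference A(h) = (sin(1+h) - sin(1))/h, whose error is of order 1.
A = lambda h: (math.sin(1 + h) - math.sin(1)) / h
exact = math.cos(1)
print(abs(A(0.1) - exact))                      # order-1 error
print(abs(richardson_step(A, 0.1, 1) - exact))  # noticeably smaller error
```

The same function can then be applied again to the improved approximation, with `k` increased by one, which is exactly the repetition described next.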
If \(A(h)\) has been computed for three values of \(h\text{,}\) we can generate \(B(h)\) for two values of \(h\) and repeat the above procedure with a new value of \(k\text{.}\) And so on. One widely used numerical integration algorithm, called Romberg integration
 3 
Romberg integration was introduced by the German mathematician Werner Romberg (1909–2003) in 1955.
, applies this procedure repeatedly to the trapezoidal rule. It is known that the trapezoidal rule approximation \(T(h)\) to an integral \(I\) has error behaviour (assuming that the integrand \(f(x)\) is smooth)
\begin{equation*} I=T(h)+K_1h^2+K_2h^4+K_3h^6+\cdots \end{equation*}
Only even powers of \(h\) appear. Hence
\begin{align*} T(h)& &&\text{ has error of order 2} \end{align*}

so that, using (E6) with \(k=2\text{,}\)

\begin{align*} T_1(h)&=\tfrac{4T(h/2)-T(h)}{3}&&\text{ has error of order 4} \end{align*}

so that, using (E6) with \(k=4\text{,}\)

\begin{align*} T_2(h)&=\tfrac{16T_1(h/2)-T_1(h)}{15}&&\text{ has error of order 6} \end{align*}

so that, using (E6) with \(k=6\text{,}\)

\begin{align*} T_3(h)&=\tfrac{64T_2(h/2)-T_2(h)}{63}&&\text{ has error of order 8} \end{align*}
and so on. We know another method which produces an error of order 4 — Simpson’s rule. In fact, \(T_1(h)\) is exactly Simpson’s rule (for step size \(\frac{h}{2}\)).
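The claim that \(T_1(h)\) coincides with Simpson's rule is easy to check numerically. The sketch below, in Python, implements the composite trapezoidal and Simpson's rules directly (the function names are illustrative) and compares \(T_1(h)=\tfrac{4T(h/2)-T(h)}{3}\) against Simpson's rule with step size \(h/2\).

```python
def trapezoid(f, a, b, h):
    """Composite trapezoidal rule with step size h; (b-a)/h must be an integer."""
    n = round((b - a) / h)
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

def simpson(f, a, b, h):
    """Composite Simpson's rule with step size h; (b-a)/h must be an even integer."""
    n = round((b - a) / h)
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))  # odd nodes
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))  # interior even nodes
    return s * h / 3

f = lambda x: x**4
h = 0.25
T = lambda h: trapezoid(f, 0, 1, h)
T1 = (4 * T(h / 2) - T(h)) / 3
print(abs(T1 - simpson(f, 0, 1, h / 2)))  # essentially zero (round-off only)
```

The difference is zero up to floating-point round-off, for any smooth integrand, since the two formulae are algebraically identical.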

Example C.2.2. Finding \(\pi\) by Romberg integration.

The following table
 4 
The second column, for example, of the table only reports \(5\) decimal places for \(T(h)\text{.}\) But many more decimal places of \(T(h)\) were used in the computations of \(T_1(h)\) etc.
illustrates Romberg integration by applying it to the evaluation of the integral \(A=\int_0^1\frac{4}{1+x^2}\dee{x}\text{.}\) The exact value of this integral is \(\pi\text{,}\) which is \(3.14159265358979\) to fourteen decimal places.
\(h\)      \(T(h)\)      \(T_1(h)\)          \(T_2(h)\)          \(T_3(h)\)
1/4     3.13118     3.14159250246     3.14159266114     3.14159265359003
1/8     3.13899     3.141592651225    3.141592653708
1/16    3.14094     3.141592653553
1/32    3.14143
This computation required the evaluation of \(f(x)=\frac{4}{1+x^2}\) only for \(x=\frac{n}{32}\) with \(0\le n\le 32\) — that is, a total of \(33\) evaluations of \(f\text{.}\) Those \(33\) evaluations gave us \(12\) correct decimal places. By way of comparison, \(T\big(\frac{1}{32}\big)\) used the same \(33\) evaluations of \(f\text{,}\) but only gave us \(3\) correct decimal places.
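A computation of this kind takes only a few lines of code. The following Python sketch builds the Romberg tableau for this integral, starting from the trapezoidal rule and applying (E6) repeatedly with \(k=2,4,6,\dots\) (so the multipliers are \(4^j\)); the function name `romberg` and the tableau layout are illustrative choices.

```python
def romberg(f, a, b, nmax):
    """Build the Romberg tableau.  R[i][0] is the trapezoidal rule T(h)
    with h = (b-a)/2**i; R[i][j] is the j-times-extrapolated value,
    obtained via R[i][j] = (4**j * R[i][j-1] - R[i-1][j-1]) / (4**j - 1)."""
    R = []
    for i in range(nmax):
        n = 2**i                 # number of subintervals
        h = (b - a) / n
        T = h * (0.5 * (f(a) + f(b)) + sum(f(a + m * h) for m in range(1, n)))
        row = [T]
        for j in range(1, i + 1):
            row.append((4**j * row[j - 1] - R[i - 1][j - 1]) / (4**j - 1))
        R.append(row)
    return R

f = lambda x: 4 / (1 + x**2)
R = romberg(f, 0.0, 1.0, 6)      # uses step sizes down to h = 1/32
print(R[-1][-1])                 # agrees with pi to many decimal places
```

Note that each new row reuses all previous evaluations of \(f\text{;}\) only the function values at the new, finer grid points need to be computed.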
As we have seen, Richardson extrapolation can be used to choose the step size so as to achieve some desired degree of accuracy. We are next going to consider a family of algorithms that extend this idea to use small step sizes in the part of the domain of integration where it is hard to get good accuracy and large step sizes in the part of the domain of integration where it is easy to get good accuracy. We will illustrate the ideas by applying them to the integral \(\int_0^1 \sqrt{x}\ \dee{x}\text{.}\) The integrand \(\sqrt{x}\) changes very quickly when \(x\) is small and changes slowly when \(x\) is large. So we will make the step size small near \(x=0\) and make the step size large near \(x=1\text{.}\)