
CLP-2 Integral Calculus

Section C.1 Richardson Extrapolation

There are many approximation procedures in which one first picks a step size $h$ and then generates an approximation $A(h)$ to some desired quantity $A$. For example, $A$ might be the value of some integral $\int_a^b f(x)\,\mathrm{d}x$. For the trapezoidal rule with $n$ steps, $\Delta x=\frac{b-a}{n}$ plays the role of the step size. Often the order of the error generated by the procedure is known. This means
\[
A=A(h)+Kh^k+K_1h^{k+1}+K_2h^{k+2}+\cdots \tag{E1}
\]
with $k$ being some known constant, called the order of the error, and $K$, $K_1$, $K_2$, $\ldots$ being some other (usually unknown) constants. If $A(h)$ is the approximation to $A=\int_a^b f(x)\,\mathrm{d}x$ produced by the trapezoidal rule with $\Delta x=h$, then $k=2$. If Simpson’s rule is used, $k=4$.
Let’s first suppose that $h$ is small enough that the terms $K_1h^{k+1}+K_2h^{k+2}+\cdots$ in (E1) are small enough that dropping them has essentially no impact. (Typically, we don’t have access to, and don’t care about, the exact error; we only care about its order of magnitude. So if $h$ is small enough that $K_1h^{k+1}+K_2h^{k+2}+\cdots$ is a factor of at least, for example, one hundred smaller than $Kh^k$, then dropping it would not bother us at all.) This would give
\[
A=A(h)+Kh^k \tag{E2}
\]
Imagine that we know $k$, but that we do not know $A$ or $K$, and think of (E2) as an equation that the unknowns $A$ and $K$ have to solve. It may look like we have one equation in the two unknowns $K$, $A$, but that is not the case. The reason is that (E2) is (essentially) true for all (sufficiently small) choices of $h$. If we pick some $h$, say $h_1$, and use the algorithm to determine $A(h_1)$, then (E2), with $h$ replaced by $h_1$, gives one equation in the two unknowns $A$ and $K$; and if we then pick some different $h$, say $h_2$, and use the algorithm a second time to determine $A(h_2)$, then (E2), with $h$ replaced by $h_2$, gives a second equation in the two unknowns $A$ and $K$. The two equations then determine both $A$ and $K$.
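To see concretely that two evaluations pin down both unknowns, here is a minimal Python sketch. The specific values $A=2$, $K=0.5$, $k=2$ are made up purely for illustration: we generate $A(h_1)$ and $A(h_2)$ from the model (E2) and then solve the two resulting equations.

```python
# Hypothetical "true" values, chosen only for illustration.
A_true, K_true, k = 2.0, 0.5, 2

def A_approx(h):
    # The model (E2), rearranged: A(h) = A - K*h**k.
    return A_true - K_true * h**k

h = 0.1
Ah, Ah2 = A_approx(h), A_approx(h / 2)

# Solving the two equations (one with h, one with h/2) recovers both
# unknowns; these are exactly the formulas (E4a) and (E4b) derived below.
K = (Ah2 - Ah) / ((1 - 2**-k) * h**k)
A = (2**k * Ah2 - Ah) / (2**k - 1)
```

Running this recovers $A$ and $K$ to machine precision, because the fake data obeys (E2) exactly.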
To be more concrete, suppose that we have picked some specific value of $h$, and have chosen $h_1=h$ and $h_2=\frac{h}{2}$, and that we have evaluated $A(h)$ and $A(h/2)$. Then the two equations are
\[
A=A(h)+Kh^k \tag{E3a}
\]
\[
A=A(h/2)+K\Bigl(\frac{h}{2}\Bigr)^k \tag{E3b}
\]
It is now easy to solve for both $K$ and $A$. To get $K$, just subtract (E3b) from (E3a):
\[
(\text{E3a})-(\text{E3b}):\qquad 0=A(h)-A(h/2)+\Bigl(1-\frac{1}{2^k}\Bigr)Kh^k
\]
\[
K=\frac{A(h/2)-A(h)}{\bigl[1-2^{-k}\bigr]h^k} \tag{E4a}
\]
To get $A$, multiply (E3b) by $2^k$ and then subtract (E3a):
\[
2^k(\text{E3b})-(\text{E3a}):\qquad \bigl[2^k-1\bigr]A=2^kA(h/2)-A(h)
\]
\[
A=\frac{2^kA(h/2)-A(h)}{2^k-1} \tag{E4b}
\]
The generation of a “new improved” approximation for $A$ from two $A(h)$’s with different values of $h$ is called Richardson extrapolation. (It was introduced by the Englishman Lewis Fry Richardson (1881–1953) in 1911.) This works very well since, by computing $A(h)$ for two different $h$’s, we can remove the biggest error term in (E1), and so get a much more precise approximation to $A$ for little additional work.
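In code, the improved estimate (E4b) is a one-liner. A minimal Python sketch (the function name `richardson` is our own):

```python
def richardson(A_h, A_h2, k):
    """Return the improved approximation (E4b) to A, given A(h), A(h/2)
    and the order k of the error of the underlying method."""
    return (2**k * A_h2 - A_h) / (2**k - 1)
```

For the trapezoidal rule one would call it with `k=2`; for Simpson’s rule, with `k=4`.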

Example C.1.2. Richardson extrapolation with the trapezoidal rule.

Applying the trapezoidal rule (1.11.6) to the integral $A=\int_0^1\frac{4}{1+x^2}\,\mathrm{d}x$ with step sizes $\frac{1}{8}$ and $\frac{1}{16}$ (i.e. with $n=8$ and $n=16$) gives, with $h=\frac{1}{8}$,
\[
A(h)=3.1389884945 \qquad A(h/2)=3.1409416120
\]
So (E4b), with $k=2$, gives us the “new improved” approximation
\[
\frac{2^2\times 3.1409416120-3.1389884945}{2^2-1}=3.1415926512
\]
We saw in Example 1.11.3 that $\int_0^1\frac{4}{1+x^2}\,\mathrm{d}x=\pi$, so this new approximation really is “improved”:
  • $A(1/8)$ agrees with $\pi$ to two decimal places,
  • $A(1/16)$ agrees with $\pi$ to three decimal places and
  • the new approximation agrees with $\pi$ to eight decimal places.
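The whole computation fits in a few lines of Python. This is a sketch, assuming a straightforward composite trapezoidal rule (the helper `trapezoid` is our own, not the text’s implementation of (1.11.6)):

```python
def trapezoid(f, a, b, n):
    # Composite trapezoidal rule with n steps of size h = (b - a)/n.
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

f = lambda x: 4 / (1 + x**2)
Ah  = trapezoid(f, 0, 1, 8)    # A(1/8)  = 3.1389884945...
Ah2 = trapezoid(f, 0, 1, 16)   # A(1/16) = 3.1409416120...

# (E4b) with k = 2, the order of the trapezoidal rule's error.
improved = (2**2 * Ah2 - Ah) / (2**2 - 1)
```

Here `improved` agrees with $\pi$ to about eight decimal places, while `Ah` and `Ah2` manage only two and three.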
Beware that (E3b), namely $A=A(h/2)+K\bigl(\frac{h}{2}\bigr)^k$, is saying that $K\bigl(\frac{h}{2}\bigr)^k$ is (approximately) the error in $A(h/2)$, not the error in $A$. You cannot get an “even more improved” approximation by using (E4a) to compute $K$ and then adding $K\bigl(\frac{h}{2}\bigr)^k$ to the “new improved” $A$ of (E4b): doing so just gives $A+K\bigl(\frac{h}{2}\bigr)^k$, not a more accurate $A$.

Example C.1.3. Example 1.11.16 revisited.

Suppose again that we wish to use Simpson’s rule (1.11.9) to evaluate $\int_0^1 e^{-x^2}\,\mathrm{d}x$ to within an accuracy of $10^{-6}$, but that we do not need the degree of certainty provided by Example 1.11.16. Observe that we need (approximately) that $|K|h^4<10^{-6}$, so if we can estimate $K$ (using our Richardson trick) then we can estimate the required $h$. A commonly used strategy, based on this observation, is to
  • first apply Simpson’s rule twice with some relatively small number of steps and
  • then use (E4a), with $k=4$, to estimate $K$ and
  • then use the condition $|K|h^4\le 10^{-6}$ to determine, approximately, the number of steps required
  • and finally apply Simpson’s rule with the number of steps just determined.
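The steps above can be sketched in Python. This is a sketch, not the text’s own implementation: it assumes a standard composite Simpson’s rule (the helper `simpson` is our own) and rounds the final step count up to an even integer.

```python
import math

def simpson(f, a, b, n):
    # Composite Simpson's rule (n must be even), step size h = (b - a)/n.
    h = (b - a) / n
    return (h / 3) * (f(a) + f(b)
                      + 4 * sum(f(a + i * h) for i in range(1, n, 2))
                      + 2 * sum(f(a + i * h) for i in range(2, n, 2)))

f = lambda x: math.exp(-x**2)
k = 4                                  # order of the error of Simpson's rule

# Step 1: two runs with a small number of steps (h = 1/4 and h = 1/8).
h = 1 / 4
Ah, Ah2 = simpson(f, 0, 1, 4), simpson(f, 0, 1, 8)

# Step 2: estimate K via (E4a).
K = (Ah2 - Ah) / ((1 - 2**-k) * h**k)

# Step 3: pick a step size with |K| h^4 <= 1e-6.
h_max = (1e-6 / abs(K)) ** 0.25        # about 1/9.45
n = 2 * math.ceil(1 / (2 * h_max))     # smallest even n with 1/n <= h_max

# Step 4: one final Simpson run with that many steps.
A_final = simpson(f, 0, 1, n)
```

With the numbers of this example the estimate works out to $K\approx-7.99\times10^{-3}$ and $n=10$, i.e. $h=\frac{1}{10}$.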
Let’s implement this strategy. First we estimate $K$ by applying Simpson’s rule with step sizes $\frac{1}{4}$ and $\frac{1}{8}$. Writing $h=\frac{1}{4}$, we get
\[
A(h)=0.74685538 \qquad A(h/2)=0.74682612
\]
so that (E4a), with $k=4$ and $h=\frac{1}{4}$, yields
\[
K=\frac{0.74682612-0.74685538}{\bigl[1-2^{-4}\bigr](1/4)^4}=-7.990\times10^{-3}
\]
We want to use a step size $h$ obeying
\[
|K|h^4\le 10^{-6} \iff 7.990\times10^{-3}\,h^4\le 10^{-6} \iff h\le\frac{1}{\sqrt[4]{7990}}=\frac{1}{9.45}
\]
like, for example, $h=\frac{1}{10}$. Applying Simpson’s rule with $h=\frac{1}{10}$ gives
\[
A(1/10)=0.74682495
\]
The exact answer, to eight decimal places, is $0.74682413$, so the error in $A(1/10)$ is indeed just under $10^{-6}$.
Suppose now that we change our minds. We want an accuracy of $10^{-12}$, rather than $10^{-6}$. We have already estimated $K$. So now we want to use a step size $h$ obeying
\[
|K|h^4\le 10^{-12} \iff 7.99\times10^{-3}\,h^4\le 10^{-12} \iff h\le\frac{1}{\sqrt[4]{7.99\times10^{9}}}=\frac{1}{299.0}
\]
like, for example, $h=\frac{1}{300}$. Applying Simpson’s rule with $h=\frac{1}{300}$ gives, to fourteen decimal places,
\[
A(1/300)=0.74682413281344
\]
The exact answer, to fourteen decimal places, is $0.74682413281243$, so the error in $A(1/300)$ is indeed just over $10^{-12}$.