Section 3.6 Taylor Series
Subsection 3.6.1 Extending Taylor Polynomials
Recall that Taylor polynomials provide a hierarchy of approximations to a given function $f(x)$ near a given point $x=a$. Typically, the quality of these approximations improves as we move up the hierarchy.
1
Please review your notes from last term if this material is feeling a little unfamiliar.
- The crudest approximation is the constant approximation $f(x)\approx f(a)$.
- Then comes the linear, or tangent line, approximation $f(x)\approx f(a)+f'(a)\,(x-a)$.
- Then comes the quadratic approximation $f(x)\approx f(a)+f'(a)\,(x-a)+\frac{1}{2}f''(a)\,(x-a)^2$.
- In general, the Taylor polynomial of degree $n$
for the function $f(x)$ about the expansion point $a$ is the polynomial $T_n(x)$ determined by the requirement that $T_n^{(k)}(a)=f^{(k)}(a)$ for all $k\le n$. That is, $f$ and $T_n$ have the same derivatives at $a$, up to order $n$. Explicitly, $T_n(x)=\sum_{k=0}^{n}\frac{1}{k!}f^{(k)}(a)\,(x-a)^k$.
These are, of course, approximations — often very good approximations near $x=a$ — but still just approximations. One might hope that if we let the degree, $n$, of the approximation go to infinity then the error in the approximation might go to zero. If that is the case then the “infinite” Taylor polynomial would be an exact representation of the function. Let’s see how this might work.
Fix a real number $a$ and suppose that all derivatives of the function $f(x)$ exist. Then, we saw in (3.4.33) of the CLP-1 text that, for any natural number $n$,
\[ f(x) = T_n(x) + E_n(x), \]
where $T_n(x)$ is the Taylor polynomial of degree $n$ for the function $f(x)$ expanded about $a$, and $E_n(x)$ is the error in the approximation $f(x)\approx T_n(x)$. The Taylor polynomial is given by the formula
2
Did you take a quick look at your notes?
Equation 3.6.2.
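For reference, the equation referred to here is the standard formula for the degree-$n$ Taylor polynomial, as recalled above:
\[ T_n(x) = f(a)+f'(a)\,(x-a)+\frac{1}{2!}f''(a)\,(x-a)^2+\cdots+\frac{1}{n!}f^{(n)}(a)\,(x-a)^n = \sum_{k=0}^{n}\frac{1}{k!}f^{(k)}(a)\,(x-a)^k. \]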
while the error satisfies
3
This is probably the most commonly used formula for the error. But there is another fairly commonly used formula. It, and some less commonly used formulae, are given in the next (optional) subsection “More about the Taylor Remainder”.
Equation 3.6.3.
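For reference, the error bound referred to here is the Lagrange form of the remainder:
\[ E_n(x) = f(x)-T_n(x) = \frac{1}{(n+1)!}f^{(n+1)}(c)\,(x-a)^{n+1} \qquad\text{for some } c \text{ strictly between } a \text{ and } x. \]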
Note that we typically do not know the value of $c$ in the formula for the error. Instead we use bounds on $f^{(n+1)}$ to find bounds on $E_n(x)$ and so bound the error.
4
The discussion here is only supposed to jog your memory. If it is feeling insufficiently jogged, then please look at your notes from last term.
In order for our Taylor polynomial to be an exact representation of the function we need the error $E_n(x)$ to be zero. This will not happen when $n$ is finite unless $f(x)$ is a polynomial. However it can happen in the limit as $n\to\infty$, and in that case we can write $f(x)$ as the limit
\[ f(x) = \lim_{n\to\infty} T_n(x). \]
This is really a limit of partial sums, and so we can write
\[ f(x) = \lim_{n\to\infty}\sum_{k=0}^{n}\frac{1}{k!}f^{(k)}(a)\,(x-a)^k = \sum_{k=0}^{\infty}\frac{1}{k!}f^{(k)}(a)\,(x-a)^k, \]
which is a power series representation of the function. Let us formalise this in a definition.
Definition 3.6.4. Taylor series.
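In its standard form, the definition reads as follows. The Taylor series for the function $f(x)$ expanded around $a$ is the power series
\[ \sum_{n=0}^{\infty}\frac{1}{n!}f^{(n)}(a)\,(x-a)^n. \]
When the expansion point is $a=0$ it is also called the Maclaurin series of $f(x)$.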
Demonstrating that $\lim_{n\to\infty}E_n(x)=0$, for a given function, can be difficult, but for many of the standard functions you are used to dealing with, it turns out to be pretty easy. Let’s compute a few Taylor series and see how we do it.
Example 3.6.5. Exponential Series.
Find the Maclaurin series for $f(x)=e^x$.
Solution: Just as was the case for computing Taylor polynomials, we need to compute the derivatives of the function at the particular choice of $a$. Since we are asked for a Maclaurin series, $a=0$. So now we just need to find $f^{(k)}(0)$ for all integers $k\ge 0$.
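The computation is short: every derivative of $e^x$ is again $e^x$, so
\[ f^{(k)}(x)=e^x \quad\text{and}\quad f^{(k)}(0)=1 \qquad\text{for all integers } k\ge 0, \]
and the Maclaurin series is
\[ \sum_{n=0}^{\infty}\frac{1}{n!}f^{(n)}(0)\,x^n = \sum_{n=0}^{\infty}\frac{x^n}{n!} = 1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\cdots. \]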
We have now seen power series representations for the functions
We do not think that you, the reader, will be terribly surprised to see that we develop series for sine and cosine next.
Example 3.6.6. Sine and Cosine Series.
The trigonometric functions $\sin x$ and $\cos x$ also have widely used Maclaurin series expansions (i.e. Taylor series expansions about $a=0$). To find them, we first compute all derivatives at general $x$.
Now set $x=0$.
For $\sin x$, all even numbered derivatives (at $x=0$) are zero, while the odd numbered derivatives alternate between $1$ and $-1$. Very similarly, for $\cos x$, all odd numbered derivatives (at $x=0$) are zero, while the even numbered derivatives alternate between $1$ and $-1$. So, the Taylor polynomials that best approximate $\sin x$ and $\cos x$ near $x=0$ are
Reviewing the patterns we found in the derivatives, we conclude that, for all
and, in particular, both of the series on the right hand sides converge for all $x$.
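Written out, the two series are
\[ \sin x=\sum_{n=0}^{\infty}(-1)^n\frac{x^{2n+1}}{(2n+1)!}=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots, \qquad \cos x=\sum_{n=0}^{\infty}(-1)^n\frac{x^{2n}}{(2n)!}=1-\frac{x^2}{2!}+\frac{x^4}{4!}-\cdots. \]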
We could also test for convergence of the series using the ratio test. Computing the ratios of successive terms in these two series gives us
for sine and cosine respectively. Hence as $n\to\infty$ these ratios go to zero, and consequently both series are convergent for all $x$. (This is very similar to what was observed in Example 3.5.5.)
We have developed power series representations for a number of important functions. Here is a theorem that summarizes them.
5
The reader might ask whether or not we will give the series for other trigonometric functions or their inverses. While the tangent function has a perfectly well defined series, its coefficients are not as simple as those of the series we have seen — they form a sequence of numbers known (perhaps unsurprisingly) as the “tangent numbers”. They, and the related Bernoulli numbers, have many interesting properties, links to which the interested reader can find with their favourite search engine. The Maclaurin series for inverse sine is $\arcsin x=\sum_{n=0}^{\infty}\frac{1}{4^n}\binom{2n}{n}\frac{x^{2n+1}}{2n+1}$, which is quite tidy, but proving it is beyond the scope of the course.
Theorem 3.6.7.
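For the reader's convenience, here is the list of standard expansions that this theorem collects; the intervals of validity quoted are the usual ones (the theorem in the text may phrase the endpoints slightly differently):
\[ e^x=\sum_{n=0}^{\infty}\frac{x^n}{n!}, \qquad \sin x=\sum_{n=0}^{\infty}(-1)^n\frac{x^{2n+1}}{(2n+1)!}, \qquad \cos x=\sum_{n=0}^{\infty}(-1)^n\frac{x^{2n}}{(2n)!} \qquad\text{for all } x, \]
\[ \frac{1}{1-x}=\sum_{n=0}^{\infty}x^n \;\text{ for } -1<x<1, \qquad \log(1+x)=\sum_{n=1}^{\infty}(-1)^{n-1}\frac{x^n}{n} \;\text{ for } -1<x\le 1, \qquad \arctan x=\sum_{n=0}^{\infty}(-1)^n\frac{x^{2n+1}}{2n+1} \;\text{ for } -1\le x\le 1. \]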
Notice that the series for sine and cosine sum to something that looks very similar to the series for $e^x$:
So both series have coefficients with the same absolute value (namely $\frac{1}{n!}$), but there are differences in sign. This is not a coincidence and we direct the interested reader to the optional Section 3.6.3 where we will show how these series are linked through the complex exponential $e^{ix}$.
6
Warning: antique sign–sine pun. No doubt the reader first saw it many years syne.
Example 3.6.8. Optional — Why $e^x$ is $\sum_{n=0}^{\infty}\frac{x^n}{n!}$.
We have already seen, in Example 3.6.5, that
By (3.6.3),
\[ E_n(x) = \frac{1}{(n+1)!}f^{(n+1)}(c)\,x^{n+1} = \frac{e^{c}}{(n+1)!}\,x^{n+1} \]
for some (unknown) $c$ between $0$ and $x$. Fix any real number $x$. We’ll now show that $E_n(x)$ converges to zero as $n\to\infty$.
To do this we need to bound the size of $e^c$, and to do this, consider what happens if $x$ is positive or negative.
- If $x\le 0$,
then $x\le c\le 0$ and hence $e^x\le e^c\le e^0=1$. - On the other hand, if $x\ge 0$,
then $0\le c\le x$ and so $1=e^0\le e^c\le e^x$.
In either case we have that $e^c\le e^{|x|}$. Because of this the error term satisfies
\[ |E_n(x)| \le e^{|x|}\,\frac{|x|^{n+1}}{(n+1)!}. \]
We claim that this upper bound, and hence the error $E_n(x)$, quickly shrinks to zero as $n\to\infty$.
Call the upper bound (except for the factor $e^{|x|}$, which is independent of $n$) $Q_n(x)=\frac{|x|^{n+1}}{(n+1)!}$. To show that this shrinks to zero as $n\to\infty$, let’s write it as follows.
Now let
Since $k$ does not depend on $n$ (though it does depend on $x$), the function does not change as we increase $n$. Additionally, we know that and so Hence as we let $n\to\infty$, the above bound must go to zero.
Alternatively, compare $Q_n(x)$ and $Q_{n+1}(x)$.
When $n+1$ is bigger than, for example, $2|x|$, we have $Q_{n+1}(x)<\frac{1}{2}Q_n(x)$. That is, increasing the index on $Q_n(x)$ by one decreases the size of $Q_n(x)$ by a factor of at least two. As a result, $Q_n(x)$ must tend to zero as $n\to\infty$.
Consequently, $\lim_{n\to\infty}E_n(x)=0$ for all $x$, as claimed, and we really have
\[ e^x=\sum_{n=0}^{\infty}\frac{x^n}{n!} \qquad\text{for all real } x. \]
There is another way to prove that the series converges to the function $e^x$. Rather than looking at how the error term $E_n(x)$ behaves as $n\to\infty$, we can show that the series satisfies the same simple differential equation and the same initial condition as the function.
Example 3.6.9. Optional — Another approach to showing that $\sum_{n=0}^{\infty}\frac{x^n}{n!}$ is $e^x$.
We already know from Example 3.5.5, that the series $\sum_{n=0}^{\infty}\frac{x^n}{n!}$ converges to some function $f(x)$ for all values of $x$. All that remains to do is to show that $f(x)$ is really $e^x$. We will do this by showing that $f(x)$ and $e^x$ satisfy the same differential equation with the same initial conditions. We know that $y=e^x$ satisfies
\[ \frac{\mathrm{d}y}{\mathrm{d}x}=y \qquad\text{and}\qquad y(0)=1. \]
8
Recall that when we solve of a separable differential equation our general solution will have an arbitrary constant in it. That constant cannot be determined from the differential equation alone and we need some extra data to find it. This extra information is often information about the system at its beginning (for example when position or time is zero) — hence “initial conditions”. Of course the reader is already familiar with this because it was covered back in Section 2.4.
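Here is a sketch of the verification that the series satisfies the same equation; the presentation may differ slightly from the text's. Differentiating the series term by term (justified by the term-by-term operations on power series in Theorem 3.5.13) gives
\[ \frac{\mathrm{d}}{\mathrm{d}x}\sum_{n=0}^{\infty}\frac{x^n}{n!} = \sum_{n=1}^{\infty}\frac{n\,x^{n-1}}{n!} = \sum_{n=1}^{\infty}\frac{x^{n-1}}{(n-1)!} = \sum_{m=0}^{\infty}\frac{x^m}{m!}, \]
so $f'(x)=f(x)$, and setting $x=0$ in the series gives $f(0)=1$. Since $e^x$ is the unique solution of this initial value problem, $f(x)=e^x$.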
We can show that the error terms in the Maclaurin polynomials for sine and cosine go to zero as $n\to\infty$ using very much the same approach as in Example 3.6.8.
Example 3.6.10. Optional — Why and .
Let $f(x)$ be either $\sin x$ or $\cos x$. We know that every derivative of $f(x)$ will be one of $\pm\sin x$ or $\pm\cos x$. Consequently, when we compute the error term using equation 3.6.3 we always have $\big|f^{(n+1)}(c)\big|\le 1$ and hence
\[ |E_n(x)| \le \frac{|x|^{n+1}}{(n+1)!}. \]
In Example 3.6.5, we showed that as — so all the hard work is already done. Since the error term shrinks to zero for both and and
as required.
Subsubsection 3.6.1 Optional — More about the Taylor Remainder
In this section, we fix a real number $a$ and a natural number $n$, suppose that all derivatives of the function $f(x)$ exist, and we study the error
\[ E_n(x) = f(x) - T_n(x) \]
made when we approximate $f(x)$ by $T_n(x)$, the Taylor polynomial of degree $n$ for the function $f(x)$ expanded about $a$. We have already seen, in (3.6.3), one formula, probably the most commonly used formula, for $E_n(x)$. In the next theorem, we repeat that formula and give a second, commonly used, formula. After an example, we give a second theorem that contains some less commonly used formulae.
Theorem 3.6.11. Commonly used formulae for the Taylor remainder.
Notice that the integral form of the error is explicit: we could, in principle, compute it exactly. (Of course if we could do that, we probably wouldn’t need to use a Taylor expansion to approximate $f(x)$.) This contrasts with the Lagrange form, which is an ‘existential’ statement: it tells us that the number $c$ exists, but not how to compute it.
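For reference, the two formulae being discussed are, in their standard forms,
\[ \text{(a) (integral form)}\qquad E_n(x)=\int_a^x \frac{1}{n!}f^{(n+1)}(t)\,(x-t)^n\,\mathrm{d}t, \]
\[ \text{(b) (Lagrange form)}\qquad E_n(x)=\frac{1}{(n+1)!}f^{(n+1)}(c)\,(x-a)^{n+1} \quad\text{for some } c \text{ strictly between } a \text{ and } x. \]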
Proof.
-
We will give two proofs. The first is shorter and simpler, but uses some trickery. The second is longer, but is more straightforward. It uses a technique called mathematical induction.
Proof 1: We are going to use a little trickery to get a simple proof. We simply view
as being fixed and study the dependence of on To emphasise that that is what we are doing, we define and observe that So, by the fundamental theorem of calculus (Theorem 1.3.1), the function is determined by its derivative, and its value at a single point. Finding a value of for one value of is easy. Substitute into to yield To find apply to both sides of Recalling that is just a constant parameter,
So, by the fundamental theorem of calculus, and
Proof 2: The proof that we have just given was short, but also very tricky — almost no one could create that proof without big hints. Here is another much less tricky, but also commonly used, proof.
- First consider the case
When case of part (a). - Next fix any integer
and suppose that we already know that integration by parts gives in terms of That’s called a reduction formula. Combining the reduction formula with ( ) gives - Let’s pause to summarise what we have learned in the last two bullets. Use the notation
to stand for the statement “ ”. To prove part (a) of the theorem, we need to prove that the statement is true for all integers In the first bullet, we showed that the statement is true. In the second bullet, we showed that if, for some integer the statement is true, then the statement is also true. Consequently, is true by the first bullet, and then is true by the second bullet with and then is true by the second bullet with and then is true by the second bullet with and so on, for ever and ever.
That tells us that is true for all integers which is exactly part (a) of the theorem. This proof technique is called mathematical induction.
9
While the use of the ideas of induction goes back over 2000 years, the first recorded rigorous use of induction appeared in the work of Levi ben Gershon (1288–1344, better known as Gersonides). The first explicit formulation of mathematical induction was given by the French mathematician Blaise Pascal in 1665.
-
We have already seen one proof in the optional Section 3.4.9 of the CLP-1 text. We will see two more proofs here.
Proof 1: We apply the generalised mean value theorem, which is Theorem 3.4.38 in the CLP-1 text. It says that
for some
strictly between
10
In Theorem 3.4.38 in the CLP-1 text, we assumed, for simplicity, that To get (GMVT) when simply exchange and in Theorem 3.4.38.
and We apply (GMVT) with and This gives Don’t forget, when computing that is a function of with just a fixed parameter.
Proof 2: We apply Theorem 2.2.10 (the mean value theorem for weighted integrals). If we use the weight function which is strictly positive for all By part (a) this gives If we instead use the weight function which is strictly positive for all This gives
Theorem 3.6.11 has provided us with two formulae for the Taylor remainder $E_n(x)$. The formula of part (b), the Lagrange form, is probably the easiest to use, and the most commonly used, formula for $E_n(x)$. The formula of part (a), the integral form, while a bit harder to apply, gives a bit better bound than that of part (b) (in the proof of Theorem 3.6.11 we showed that part (b) follows from part (a)). Here is an example in which we use both parts.
Example 3.6.12.
In Theorem 3.6.7 we stated that
\[ \log(1+x)=\sum_{n=1}^{\infty}(-1)^{n-1}\frac{x^n}{n} \tag{S1} \]
for all $-1<x\le 1$. But, so far, we have not justified this statement. We do so now, using (both parts of) Theorem 3.6.11. We start by setting $f(x)=\log(1+x)$ and finding the Taylor polynomials and the corresponding errors for $f(x)$. The derivatives are $f^{(k)}(x)=(-1)^{k-1}\,\frac{(k-1)!}{(1+x)^k}$ for $k\ge 1$, so that $f^{(k)}(0)=(-1)^{k-1}(k-1)!$.
So the Taylor polynomial of degree $n$ for the function $f(x)=\log(1+x)$ expanded about $a=0$ is
\[ T_n(x)=\sum_{k=1}^{n}(-1)^{k-1}\frac{x^k}{k}. \]
Theorem 3.6.11 gives us two formulae for the error $E_n(x)$ made when we approximate $\log(1+x)$ by $T_n(x)$. Part (a) of the theorem gives
\[ E_n(x)=\int_0^x \frac{(-1)^{n}}{(1+t)^{n+1}}\,(x-t)^n\,\mathrm{d}t \tag{Ea} \]
and part (b) gives
\[ E_n(x)=\frac{(-1)^{n}}{(n+1)\,(1+c)^{n+1}}\,x^{n+1} \tag{Eb} \]
for some (unknown) $c$ between $0$ and $x$. The statement (S1), that we wish to prove, is equivalent to the statement
\[ \lim_{n\to\infty}E_n(x)=0 \quad\text{for all } -1<x\le 1 \tag{S2} \]
and we will now show that (S2) is true.
- The case $x=0$.
- This case is trivial, since, when $x=0$, $E_n(0)=0$
for all $n$. - The case $0<x\le 1$.
- This case is relatively easy to deal with using (Eb). In this case $0<x\le 1$,
so that the $c$ of (Eb) must be positive and
\[ |E_n(x)|=\frac{x^{n+1}}{(n+1)\,(1+c)^{n+1}}\le\frac{1}{n+1} \]
converges to zero as $n\to\infty$. - The case $-1<x<0$.
-
When $x$
is close to $-1$, (Eb) is not sufficient to show that (S2) is true. To see this, let’s consider the example All we know about the $c$ of (Eb) is that it has to be between $x$ and $0$. For example, (Eb) certainly allows $c$ to be and then
goes to as
Note that, while this does tell us that (Eb) is not sufficient to prove (S2), when $x$ is close to $-1$, it does not also tell us that (which would imply that (S2) is false) — $c$ could equally well be and then
goes to as
We’ll now use (Ea) (which has the advantage of not containing any unknown free parameter $c$) to verify (S2) when $-1<x<0$. Rewrite the right hand side of (Ea).
The exact evaluation of this integral is very messy and not very illuminating. Instead, we bound it. Note that, for
so that increases as increases. Consequently, the biggest value that takes on the domain of integration is
and the integrand
Consequently,
converges to zero as $n\to\infty$ for each fixed $x$.
So we have verified (S2), as desired.
As we said above, Theorem 3.6.11 gave the two most commonly used formulae for the Taylor remainder. Here are some less commonly used, but occasionally useful, formulae.
Theorem 3.6.13. More formulae for the Taylor remainder.
- If $g(t)$
is differentiable and
11
Note that the function $g(t)$ need not be related to $f(x)$. It just has to be differentiable with a nonzero derivative.
$g'(t)$ is nonzero for all $t$ strictly between $a$ and $x$, then the Taylor remainder
for some $c$ strictly between $a$ and $x$. - (Cauchy form)
for some $c$ strictly between $a$ and $x$.
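For reference, the standard statements of these two forms (the notation here may differ slightly from the text's) are: for some $c$ strictly between $a$ and $x$,
\[ \text{(a)}\qquad E_n(x)=\frac{g(x)-g(a)}{g'(c)}\cdot\frac{1}{n!}f^{(n+1)}(c)\,(x-c)^n, \qquad\qquad \text{(b) (Cauchy form)}\qquad E_n(x)=\frac{1}{n!}f^{(n+1)}(c)\,(x-c)^n\,(x-a). \]
Note that part (b) is just part (a) with the choice $g(t)=t$.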
Proof.
As in the proof of Theorem 3.6.11, we define
and observe that and and
- Recall that the generalised mean-value theorem, which is Theorem 3.4.38 in the CLP-1 text, says that
strictly between and We apply this theorem with and This gives - Apply part (a) with
This gives strictly between and
Example 3.6.14. Example 3.6.12, continued.
In Example 3.6.12 we verified that
\[ \log(1+x)=\sum_{n=1}^{\infty}(-1)^{n-1}\frac{x^n}{n} \]
for all $-1<x\le 1$. There we used the Lagrange form,
for the Taylor remainder to verify (S1) when $0\le x\le 1$, but we also saw that it is not possible to use the Lagrange form to verify (S1) when $x$ is close to $-1$. We instead used the integral form
We will now use the Cauchy form (part (b) of Theorem 3.6.13)
to verify
\[ \lim_{n\to\infty}E_n(x)=0 \]
when $-1<x<0$. We have already noted that (S2) is equivalent to (S1).
Write $f(x)=\log(1+x)$. We saw in Example 3.6.12 that
\[ f^{(n+1)}(x)=(-1)^{n}\,\frac{n!}{(1+x)^{n+1}}. \]
So, in this example, the Cauchy form is
for some $c$ between $x$ and $0$. When $-1<x<0$,
- $x$ and $c$ are negative and $1+x$ and $1+c$ are (strictly) positive so that
- the distance from $c$
to $-1$, namely $1+c$, is greater than the distance from $x$ to $-1$, namely $1+x$, so that
So, for $-1<x<0$,
goes to zero as $n\to\infty$.
Subsection 3.6.2 Computing with Taylor Series
Taylor series have a great many applications. (Hence their place in this course.) One of the most immediate of these is that they give us an alternate way of computing many functions. For example, the first definition we see for the sine and cosine functions is in terms of triangles. Those definitions, however, do not lend themselves to computing sine and cosine except at very special angles. Armed with power series representations, however, we can compute them to very high precision at any angle. To illustrate this, consider the computation of $\pi$ — a problem that dates back to the Babylonians.
Example 3.6.15. Computing the number $\pi$.
There are numerous methods for computing $\pi$ to any desired degree of accuracy. Many of them use the Maclaurin expansion of $\arctan x$. Since $\arctan(1)=\frac{\pi}{4}$, that series gives
\[ \pi = 4\arctan(1) = 4\Big(1-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\cdots\Big). \]
12
The computation of $\pi$ has a very, very long history and your favourite search engine will turn up many sites that explore the topic. For a more comprehensive history one can turn to books such as “A History of Pi” by Petr Beckmann and “The Joy of $\pi$” by David Blatner.
Unfortunately, this series is not very useful for computing $\pi$ because it converges so slowly. If we approximate the series by its partial sum, then the alternating series test (Theorem 3.3.14) tells us that the error is bounded by the first term we drop. To guarantee that we have 2 decimal digits of $\pi$ correct, we need to sum about the first 200 terms!
A much better way to compute $\pi$ using this series is to take advantage of the fact that
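One standard choice here, and one that is consistent with the ten-term accuracy quoted just below, is $\tan\frac{\pi}{6}=\frac{1}{\sqrt{3}}$, so that
\[ \pi = 6\arctan\Big(\frac{1}{\sqrt{3}}\Big) = 6\sum_{n=0}^{\infty}\frac{(-1)^n}{2n+1}\,\frac{1}{(\sqrt{3})^{2n+1}} = 2\sqrt{3}\sum_{n=0}^{\infty}\frac{(-1)^n}{(2n+1)\,3^n}. \]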
Again, this is an alternating series and so (via Theorem 3.3.14) the error we introduce by truncating it is bounded by the first term dropped. For example, if we keep ten terms, stopping at $n=9$, we get $\pi$ (to 6 decimal places) with an error between zero and
In 1699, the English astronomer/mathematician Abraham Sharp (1653–1742) used 150 terms of this series to compute 72 digits of $\pi$ — by hand!
This is just one of very many ways to compute $\pi$. Another one, which still uses the Maclaurin expansion of $\arctan x$, but is much more efficient, is
\[ \frac{\pi}{4} = 4\arctan\frac{1}{5}-\arctan\frac{1}{239}. \]
This formula was used by John Machin in 1706 to compute $\pi$ to 100 decimal digits — again, by hand.
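As a small numerical sketch of how quickly Machin's formula converges, here are a few lines of Python that sum the arctangent series with exact rational arithmetic; the helper name and the choice of ten terms are ours, not the text's.

```python
from fractions import Fraction

def arctan_series(x, terms):
    """Partial sum of the Maclaurin series arctan(x) = x - x^3/3 + x^5/5 - ..."""
    x = Fraction(x)
    return sum(Fraction((-1) ** n, 2 * n + 1) * x ** (2 * n + 1) for n in range(terms))

# Machin's formula: pi/4 = 4*arctan(1/5) - arctan(1/239)
pi_approx = 4 * (4 * arctan_series(Fraction(1, 5), 10) - arctan_series(Fraction(1, 239), 10))
print(float(pi_approx))  # about 3.14159265358979, already matching pi to roughly 14 decimal places
```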
Power series also give us access to new functions which might not be easily expressed in terms of the functions we have been introduced to so far. The following is a good example of this.
Example 3.6.16. Error function.
The error function
\[ \operatorname{erf}(x)=\frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}\,\mathrm{d}t \]
is used in computing “bell curve” probabilities. The indefinite integral of the integrand $e^{-t^2}$ cannot be expressed in terms of standard functions. But we can still evaluate the integral to within any desired degree of accuracy by using the Taylor expansion of the exponential. Start with the Maclaurin series for $e^x$:
\[ e^x=\sum_{n=0}^{\infty}\frac{x^n}{n!} \]
and then substitute $x=-t^2$:
\[ e^{-t^2}=\sum_{n=0}^{\infty}\frac{(-1)^n\,t^{2n}}{n!} \]
We can then apply Theorem 3.5.13 to integrate term-by-term:
\[ \operatorname{erf}(x)=\frac{2}{\sqrt{\pi}}\int_0^x\Big(\sum_{n=0}^{\infty}\frac{(-1)^n\,t^{2n}}{n!}\Big)\mathrm{d}t=\frac{2}{\sqrt{\pi}}\sum_{n=0}^{\infty}\frac{(-1)^n\,x^{2n+1}}{(2n+1)\,n!} \]
For example, for the bell curve, the probability of being within one standard deviation of the mean, is $\operatorname{erf}\big(\tfrac{1}{\sqrt{2}}\big)$, which the series above lets us compute:
13
If you don’t know what this means (forgive the pun) don’t worry, because it is not part of the course. Standard deviation is a way of quantifying variation within a population.
This is yet another alternating series. If we keep five terms, stopping at we get (to 5 decimal places) with, by Theorem 3.3.14 again, an error between zero and the first dropped term, which is minus
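The following short Python sketch sums the series above numerically; the function name and the number of terms are our choices for illustration.

```python
from math import factorial, pi, sqrt

def erf_series(x, terms):
    """Partial sum of erf(x) = (2/sqrt(pi)) * sum_{n>=0} (-1)^n x^(2n+1) / (n! (2n+1))."""
    s = sum((-1) ** n * x ** (2 * n + 1) / (factorial(n) * (2 * n + 1)) for n in range(terms))
    return 2 / sqrt(pi) * s

# Probability of being within one standard deviation of the mean: erf(1/sqrt(2))
print(erf_series(1 / sqrt(2), 5))   # five terms: about 0.68270
print(erf_series(1 / sqrt(2), 20))  # many terms: about 0.682689, the usual "68%" figure
```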
Example 3.6.17. Two nice series.
Evaluate
Solution. There are not very many series that can be easily evaluated exactly. But occasionally one encounters a series that can be evaluated simply by realizing that it is exactly one of the series in Theorem 3.6.7, just with a specific value of $x$. The given left hand series is
The series in Theorem 3.6.7 that this most closely resembles is
Indeed
The right hand series above differs from the left hand series above only in that the signs of the left hand series alternate while those of the right hand series do not. We can flip every second sign in a power series just by using a negative $x$:
which is exactly minus the desired right hand series. So
Example 3.6.18. Finding a derivative from a series.
Let Find the fifteenth derivative of at
Solution: This is a bit of a trick question. We could of course use the product and chain rules to directly apply fifteen derivatives and then set $x=0$, but that would be extremely tedious. There is a much more efficient approach that exploits two pieces of knowledge that we have.
14
We could get a computer algebra system to do it for us without much difficulty — but we wouldn’t learn much in the process. The point of this example is to illustrate that one can do more than just represent a function with Taylor series. More on this in the next section.
- From equation 3.6.2, we see that the coefficient of $(x-a)^n$
in the Taylor series of $f(x)$ with expansion point $a$ is exactly $\frac{1}{n!}f^{(n)}(a)$. So $f^{(15)}(0)$ is exactly $15!$ times the coefficient of $x^{15}$ in the Taylor series of $f(x)$ with expansion point $a=0$. - We know, or at least can easily find, the Taylor series for
Let’s apply that strategy.
- First, we know that, for all
- Just substituting
we have - So the coefficient of
in the Taylor series of with expansion point is
and we have
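Since the particular function in this example did not survive, here is the same strategy carried out in Python (with SymPy) for a hypothetical function of the same flavour, $f(x)=\sin(2x^3)$; only the method, not the function, is taken from the text.

```python
import sympy as sp

x = sp.symbols('x')
f = sp.sin(2 * x**3)   # hypothetical example function, chosen for illustration only

# Coefficient of x^15 in the Maclaurin series of f ...
coeff = f.series(x, 0, 16).removeO().coeff(x, 15)

# ... times 15! gives the fifteenth derivative at 0.
print(coeff * sp.factorial(15))

# Sanity check by brute-force differentiation (much slower for messier functions).
print(sp.diff(f, x, 15).subs(x, 0))
```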
Example 3.6.19. Optional — Computing the number .
Back in Example 3.6.8, we saw that
\[ e^x = 1+x+\frac{x^2}{2!}+\cdots+\frac{x^n}{n!}+E_n(x), \qquad\text{where } E_n(x)=\frac{e^c}{(n+1)!}\,x^{n+1} \]
for some (unknown) $c$ between $0$ and $x$. This can be used to approximate the number $e$ with any desired degree of accuracy. Setting $x=1$ in this equation gives
\[ e = 1+1+\frac{1}{2!}+\cdots+\frac{1}{n!}+E_n(1), \qquad\text{where } E_n(1)=\frac{e^c}{(n+1)!} \]
for some $c$ between $0$ and $1$. Even though we don’t know $c$ exactly, we can bound that term quite readily. We do know that $e^c$ is an increasing function of $c$, and so $e^0=1\le e^c\le e^1=e$. Thus we know that
\[ \frac{1}{(n+1)!}\le E_n(1)\le\frac{e}{(n+1)!}. \]
15
Check the derivative!
So we have a lower bound on the error, but our upper bound involves the number $e$ — precisely the quantity we are trying to get a handle on.
But all is not lost. Let’s look a little more closely at the right-hand inequality when $n=1$:
\[ e = 1+1+E_1(1) \le 2+\frac{e}{2} \implies \frac{e}{2}\le 2 \implies e\le 4. \]
Now this is a pretty crude bound but it isn’t hard to improve. Try this again with $n=2$:
\[ e = 1+1+\frac{1}{2}+E_2(1) \le \frac{5}{2}+\frac{e}{6} \implies \frac{5}{6}e\le\frac{5}{2} \implies e\le 3. \]
16
The authors hope that by now we all “know” that is between 2 and 3, but maybe we don’t know how to prove it.
Better. Now we can rewrite our bound: since $e\le 3$,
\[ \frac{1}{(n+1)!}\le E_n(1)=\frac{e^c}{(n+1)!}\le\frac{3}{(n+1)!}. \]
If we set in this we get
So the error is between and — this approximation isn’t guaranteed to give us the first 2 decimal places. If we ramp up to however, we get
Since the upper bound on the error is and we can approximate by
and it is correct to six decimal places.
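Here is a short Python sketch of this approximation scheme; the particular cut-offs $n=4$ and $n=9$ are illustrative choices on our part, with the error bound $\frac{3}{(n+1)!}$ coming from the bound $e\le 3$ derived above.

```python
from fractions import Fraction
from math import factorial

def e_partial_sum(n):
    """Partial sum 1 + 1/1! + 1/2! + ... + 1/n! of the series for e^x at x = 1."""
    return sum(Fraction(1, factorial(k)) for k in range(n + 1))

for n in (4, 9):
    approx = e_partial_sum(n)
    error_bound = Fraction(3, factorial(n + 1))   # since 1/(n+1)! <= E_n(1) <= 3/(n+1)!
    print(n, float(approx), float(error_bound))
# n = 9 gives 2.7182815..., with error at most 3/10!, which is under 10^-6.
```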
Subsection 3.6.3 Optional — Linking with trigonometric functions
Let us return to the observation that we made earlier about the Maclaurin series for sine, cosine and the exponential functions:
We see that these series are identical except for the differences in the signs of the coefficients. Let us try to make them look even more alike by introducing extra constants and into the equations. Consider
Let’s try to choose and so that these two expressions are equal. To do so we must make sure that the coefficients of the various powers of agree. Looking just at the coefficients of and we see that we need
Substituting this into our expansions gives
Now the coefficients of and agree, but the coefficient of tells us that we need to be a number so that or
We know that no such real number exists. But for the moment let us see what happens if we just assume that we can find so that Then we will have that
17
We do not wish to give a primer on imaginary and complex numbers here. The interested reader can start by looking at Appendix B.
If we now write this with the more usual notation we arrive at what is now known as Euler’s formula
Equation 3.6.20.
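The equation in question is the famous identity
\[ e^{i\theta}=\cos\theta+i\sin\theta. \]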
Euler’s proof of this formula (in 1740) was based on Maclaurin expansions (much like our explanation above). Euler’s formula is widely regarded as one of the most important and beautiful in all of mathematics.
18
It is worth mentioning here that the history of this topic is perhaps a little rough on Roger Cotes (1682–1716), who was one of the strongest mathematicians of his time and a collaborator of Newton. Cotes published a paper on logarithms in 1714 in which he states (after translating his results into more modern notation) the identity $ix=\log\big(\cos x+i\sin x\big)$. He proved this result by computing in two different ways the surface area of an ellipse rotated about one axis and equating the results. Unfortunately Cotes died only 2 years later at the age of 33. Upon hearing of his death Newton is supposed to have said “If he had lived, we might have known something.” The reader might think this a rather weak statement, however coming from Newton it was high praise.
Of course having established Euler’s formula one can find slicker demonstrations. For example, let
Differentiating (with product and chain rules and the fact that ) gives us
as required.
Equation 3.6.21. Euler’s identity.
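Setting $\theta=\pi$ in Euler's formula, and using $\cos\pi=-1$ and $\sin\pi=0$, gives the identity referred to here:
\[ e^{i\pi}+1=0. \]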
Subsection 3.6.4 Evaluating Limits using Taylor Expansions
Taylor polynomials provide a good way to understand the behaviour of a function near a specified point and so are useful for evaluating complicated limits. Here are some examples.
Example 3.6.22. A simple limit from a Taylor expansion.
In this example, we’ll start with a relatively simple limit, namely
\[ \lim_{x\to 0}\frac{\sin x}{x}. \]
The first thing to notice about this limit is that, as $x$ tends to zero, both the numerator, $\sin x$, and the denominator, $x$, tend to $0$. So we may not evaluate the limit of the ratio by simply dividing the limits of the numerator and denominator. To find the limit, or show that it does not exist, we are going to have to exhibit a cancellation between the numerator and the denominator. Let’s start by taking a closer look at the numerator. By Example 3.6.6,
\[ \sin x = x-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots \]
Consequently
\[ \frac{\sin x}{x} = 1-\frac{x^2}{3!}+\frac{x^4}{5!}-\cdots \]
19
We are hiding some mathematics behind this “consequently”. What we are really doing is using our knowledge of Taylor polynomials to write where and is between 0 and We are effectively hiding “ ” inside the “ ”. Now we can divide both sides by (assuming ): and everything is fine provided the term stays well behaved.
Every term in this series, except for the very first term, is proportional to a strictly positive power of $x$. Consequently, as $x$ tends to zero, all terms in this series, except for the very first term, tend to zero. In fact the sum of all terms, starting with the second term, also tends to zero. That is,
\[ \lim_{x\to 0}\Big(-\frac{x^2}{3!}+\frac{x^4}{5!}-\cdots\Big)=0. \]
We won’t justify that statement here, but it will be justified in the following (optional) subsection. So
\[ \lim_{x\to 0}\frac{\sin x}{x}=\lim_{x\to 0}\Big(1-\frac{x^2}{3!}+\frac{x^4}{5!}-\cdots\Big)=1. \]
The limit in the previous example can also be evaluated relatively easily using l’Hôpital’s rule. While the following limit can also, in principle, be evaluated using l’Hôpital’s rule, it is much more efficient to use Taylor series.
20
Many of you learned about l’Hôpital’s rule in school and all of you should have seen it last term in your differential calculus course.
21
It takes 3 applications of l’Hôpital’s rule and some careful cleaning up of the intermediate expressions. Oof!
Example 3.6.23. A not so easy limit made easier.
In this example we evaluate
Once again, the first thing to notice about this limit is that, as x tends to zero, the numerator tends to which is and the denominator tends to which is also So we may not evaluate the limit of the ratio by simply dividing the limits of the numerator and denominator. Again, to find the limit, or show that it does not exist, we are going to have to exhibit a cancellation between the numerator and the denominator. To get a more detailed understanding of the behaviour of the numerator and denominator near we find their Taylor expansions. By Example 3.5.21,
so the numerator
By Example 3.6.6,
so the denominator
and the ratio
Notice that every term in both the numerator and the denominator contains a common factor of which we can cancel out.
As tends to zero,
- the numerator tends to
which is not and - the denominator tends to
which is also not
so we may now legitimately evaluate the limit of the ratio by simply dividing the limits of the numerator and denominator.
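To make the mechanics concrete, here is the computation for the limit $\displaystyle\lim_{x\to 0}\frac{\arctan x - x}{\sin x - x}$; this particular ratio is our assumption about the example (chosen because it uses exactly the arctangent and sine expansions cited above), but the steps are the same for any ratio of this shape. The expansions give
\[ \arctan x - x = -\frac{x^3}{3}+\frac{x^5}{5}-\cdots, \qquad \sin x - x = -\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots, \]
so, cancelling the common factor $x^3$,
\[ \frac{\arctan x - x}{\sin x - x} = \frac{-\frac{1}{3}+\frac{x^2}{5}-\cdots}{-\frac{1}{6}+\frac{x^2}{120}-\cdots} \xrightarrow[x\to 0]{} \frac{-1/3}{-1/6}=2. \]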
Subsection 3.6.5 Optional — The Big O Notation
In Example 3.6.22 we used, without justification, that, as tends to zero, not only does every term in
22
Though there were a few comments in a footnote.
We’ll now develop some machinery that provides the justification. We start by recalling, from equation 3.6.1, that if, for some natural number $n$, the function $f(x)$ has $n+1$ derivatives near the point $a$, then
\[ f(x)=T_n(x)+E_n(x), \]
where
\[ E_n(x)=f(x)-T_n(x)=\frac{1}{(n+1)!}f^{(n+1)}(c)\,(x-a)^{n+1} \]
is the error introduced when we approximate $f(x)$ by the polynomial $T_n(x)$. Here $c$ is some unknown number between $a$ and $x$. As $c$ is not known, we do not know exactly what the error is. But that is usually not a problem.
In the present context we are interested in taking the limit as $x\to a$. So we are only interested in $x$-values that are very close to $a$, and because $c$ lies between $x$ and $a$, $c$ is also very close to $a$. Now, as long as $f^{(n+1)}$ is continuous at $a$, as $x\to a$, $f^{(n+1)}(c)$ must approach $f^{(n+1)}(a)$, which is some finite value. This, in turn, means that there must be constants $M,\epsilon>0$ such that $\big|f^{(n+1)}(c)\big|\le M$ for all $c$’s within a distance $\epsilon$ of $a$. If so, there is another constant $D$ (namely $\frac{M}{(n+1)!}$) such that
\[ \big|E_n(x)\big|\le D\,|x-a|^{n+1} \qquad\text{whenever } |x-a|\le\epsilon. \]
23
It is worth pointing out that our Taylor series must be expanded about the point to which we are limiting — i.e. a. To work out a limit as we need Taylor series expanded about and not some other point.
There is some notation for this behaviour.
Definition 3.6.24. Big O.
Let $a$ and $m$ be real numbers. We say that the function $f(x)$ “is of order $|x-a|^m$ near $a$” and we write $f(x)=O\big(|x-a|^m\big)$ if there exist constants $D,\epsilon>0$ such that
\[ \big|f(x)\big|\le D\,|x-a|^m \qquad\text{whenever } |x-a|\le\epsilon. \]
24
To be precise, $D$ and $\epsilon$ do not depend on $x$, though they may, and usually do, depend on $m$.
How should we parse the big O notation when we see it? Consider the following
First of all, we know from the definition that the notation only tells us something about $f(x)$ for $x$ near the point $a$. The equation above contains “$O(x^2)$”, which tells us something about what the function looks like when $x$ is close to $0$. Further, because it is “$x$” squared, it says that the graph of the function lies below a parabola $y=Dx^2$ and above a parabola $y=-Dx^2$ near $0$. The notation doesn’t tell us anything more than this — we don’t know, for example, that the graph of the function is concave up or concave down. It also tells us that the Taylor expansion of the function around $0$ does not contain any constant or linear term — the first nonzero term in the expansion is of degree at least two. For example, all of the following functions are $O(x^2)$:
In the next few examples we will rewrite a few of the Taylor polynomials that we know using this big O notation.
Example 3.6.25. Sine and the big O.
Let $f(x)=\sin x$ and $a=0$. Then
and the pattern repeats. So every derivative of $f(x)$ is plus or minus either sine or cosine and, as we saw in previous examples, this makes analysing the error term for the sine and cosine series quite straightforward. In particular, $\big|f^{(n+1)}(x)\big|\le 1$ for all real numbers $x$ and all natural numbers $n$. So the Taylor polynomial of, for example, degree 3 and its error term are
Equation 3.6.26.
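One safe way to state the displayed result (our restatement; the power inside the $O(\cdot)$ in the text may be sharper by one, since the $x^4$ coefficient of sine is zero) is
\[ \sin x = x-\frac{x^3}{3!}+E_3(x), \qquad \big|E_3(x)\big|\le\frac{|x|^4}{4!}, \qquad\text{so}\qquad \sin x = x-\frac{1}{3!}x^3+O\big(|x|^4\big). \]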
When we studied the error in the expansion of the exponential function (way back in optional Example 3.6.8), we had to go to some length to understand the behaviour of the error term well enough to prove convergence for all numbers $x$. However, in the big O notation, we are free to assume that $x$ is close to $0$. Furthermore we do not need to derive an explicit bound on the size of the coefficient $D$. This makes it quite a bit easier to verify that the big O notation is correct.
Example 3.6.27. Exponential and the big O.
Let $n$ be any natural number. Since $\frac{\mathrm{d}}{\mathrm{d}x}e^x=e^x$, we know that $\frac{\mathrm{d}^k}{\mathrm{d}x^k}e^x=e^x$ for every integer $k\ge 0$. Thus
for some $c$ between $0$ and $x$. If, for example, $|x|\le 1$, then $|e^c|\le e\le 3$, so that the error term
Equation 3.6.28.
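Written out, the conclusion (in our notation) is that, for $|x|\le 1$,
\[ e^x = 1+x+\frac{x^2}{2!}+\cdots+\frac{x^n}{n!}+O\big(|x|^{n+1}\big), \]
with the constant in the $O(\cdot)$ bounded by $\frac{3}{(n+1)!}$.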
You can see that, because we only have to consider $x$’s that are close to the expansion point (in this example, $x=0$) it is relatively easy to derive the bounds that are required to justify the use of the big O notation.
Example 3.6.29. Logarithms and the big O.
Let $f(x)=\log(1+x)$ and $a=0$. Then
We can see a pattern forming here — $f^{(n)}(x)$ is a sign times a ratio with
- the sign being $+$
when $n$ is odd and being $-$ when $n$ is even. So the sign is $(-1)^{n-1}$. - The denominator is $(1+x)^n$.
- The numerator is the product $1\cdot 2\cdot 3\cdots(n-1)=(n-1)!$.
25
Remember that $n!=1\cdot 2\cdot 3\cdots n$, and that we use the convention $0!=1$.
Thus, for any natural number
26
It is not too hard to make this rigorous using the principle of mathematical induction. The interested reader should do a little search-engine-ing. Induction is a very standard technique for proving statements of the form “For every natural number …”. For example or It was also used by Polya (1887–1985) to give a very convincing (but subtly (and deliberately) flawed) proof that all horses have the same colour.
so
with
If we choose, for example, $\epsilon=\frac{1}{2}$, then for any $x$ obeying $|x|\le\frac{1}{2}$ we have $|1+c|\ge\frac{1}{2}$ and
\[ \big|E_n(x)\big|=\frac{|x|^{n+1}}{(n+1)\,|1+c|^{n+1}}\le\frac{2^{n+1}}{n+1}\,|x|^{n+1} \]
so that
27
Since $|c|\le|x|\le\frac{1}{2}$, we have $-\frac{1}{2}\le c\le\frac{1}{2}$. If we now add 1 to every term we get $\frac{1}{2}\le 1+c\le\frac{3}{2}$ and so $|1+c|\ge\frac{1}{2}$. You can also do this with the triangle inequality, which tells us that for any $a,b$ we know that $|a+b|\le|a|+|b|$. Actually, you want the reverse triangle inequality (which is a simple corollary of the triangle inequality), which says that for any $a,b$ we have $|a+b|\ge|a|-|b|$.
Equation 3.6.30.
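In our notation, the expansion being recorded here is, for $|x|\le\frac{1}{2}$,
\[ \log(1+x)=x-\frac{x^2}{2}+\frac{x^3}{3}-\cdots+(-1)^{n-1}\frac{x^n}{n}+O\big(|x|^{n+1}\big). \]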
Remark 3.6.31.
The big O notation has a few properties that are useful in computations and taking limits. All follow immediately from Definition 3.6.24.
- If
then - For any real numbers
and ) In particular, and any integer - For any real numbers
and and then whenever ) - For any real numbers
and with any function which is is also because whenever - All of the above observations also hold for more general expressions with
replaced by i.e. for The only difference being in (a) where we must take the limit as instead of
Subsection 3.6.6 Optional — Evaluating Limits Using Taylor Expansions — More Examples
Example 3.6.32. Example 3.6.22 revisited.
In this example, we’ll return to the limit
\[ \lim_{x\to 0}\frac{\sin x}{x}, \]
this time using the big O notation, in which $\sin x = x+O(|x|^3)$.
That is, for small $x$, $\sin x$ is the same as $x$, up to an error that is bounded by some constant times $|x|^3$. So, dividing by $x$, $\frac{\sin x}{x}$ is the same as $1$, up to an error that is bounded by some constant times $x^2$ — see Remark 3.6.31(b). That is
\[ \frac{\sin x}{x}=1+O(x^2). \]
But any function that is bounded by some constant times $x^2$ (for all $x$ smaller than some constant $\epsilon$) necessarily tends to $0$ as $x\to 0$ — see Remark 3.6.31(a). Thus
\[ \lim_{x\to 0}\frac{\sin x}{x}=\lim_{x\to 0}\big(1+O(x^2)\big)=1. \]
Reviewing the above computation, we see that we did a little more work than we had to. It wasn’t necessary to keep track of the contribution to the $O(x^2)$ term so carefully. We could have just said that
\[ \sin x = x+O(|x|^3), \]
so that
\[ \frac{\sin x}{x}=\frac{x+O(|x|^3)}{x}=1+O(x^2). \]
We’ll spend a little time in the later, more complicated, examples learning how to choose the number of terms we keep in our Taylor expansions so as to make our computations as efficient as possible.
Example 3.6.33. Practicing using Taylor polynomials for limits.
In this example, we’ll use the Taylor polynomial of Example 3.6.29 to evaluate $\displaystyle\lim_{x\to 0}\frac{\log(1+x)}{x}$ and a second, closely related, limit. The Taylor expansion of equation 3.6.30 with $n=1$ tells us that
\[ \log(1+x)=x+O(|x|^2). \]
That is, for small $x$, $\log(1+x)$ is the same as $x$, up to an error that is bounded by some constant times $x^2$. So, dividing by $x$, $\frac{\log(1+x)}{x}$ is the same as $1$, up to an error that is bounded by some constant times $|x|$. That is
\[ \frac{\log(1+x)}{x}=1+O(|x|). \]
But any function that is bounded by some constant times $|x|$, for all $x$ smaller than some constant $\epsilon$, necessarily tends to $0$ as $x\to 0$. Thus
\[ \lim_{x\to 0}\frac{\log(1+x)}{x}=\lim_{x\to 0}\big(1+O(|x|)\big)=1. \]
We can now use this limit to evaluate
Now, we could either evaluate the limit of the logarithm of this expression, or we can carefully rewrite the expression as Let us do the latter.
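If, as the surrounding discussion suggests, the second limit is $\displaystyle\lim_{x\to 0}(1+x)^{a/x}$ for a constant $a$ (this is our assumption, since the expression itself did not survive), the rewrite goes as follows:
\[ (1+x)^{a/x}=e^{\frac{a}{x}\log(1+x)}=e^{\frac{a}{x}\left(x+O(|x|^2)\right)}=e^{a+O(|x|)} \xrightarrow[x\to 0]{} e^{a}. \]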
Example 3.6.34. A difficult limit.
In this example, we’ll evaluate the harder limit
28
Use of l’Hôpital’s rule here could be characterised as a “courageous decision”. The interested reader should search-engine their way to Sir Humphrey Appleby and “Yes Minister” to better understand this reference (and the workings of government in the Westminster system). Discretion being the better part of valour, we’ll stop and think a little before limiting (ha) our choices.
The first thing to notice about this limit is that, as tends to zero, the numerator
and the denominator
too. So both the numerator and denominator tend to zero and we may not simply evaluate the limit of the ratio by taking the limits of the numerator and denominator and dividing.
To find the limit, or show that it does not exist, we are going to have to exhibit a cancellation between the numerator and the denominator. To develop a strategy for evaluating this limit, let’s do a “little scratch work”, starting by taking a closer look at the denominator. By Example 3.6.29,
This tells us that looks a lot like for very small So the denominator looks a lot like for very small Now, what about the numerator?
- If the numerator looks like some constant times
with for very small then the ratio will look like the constant times and, as will tend to as tends to zero. - If the numerator looks like some constant times
with for very small then the ratio will look like the constant times and will, as tend to infinity, and in particular diverge, as tends to zero. - If the numerator looks like
for very small then the ratio will look like and will tend to as tends to zero.
The moral of the above “scratch work” is that we need to know the behaviour of the numerator, for small up to order Any contributions of order with may be put into error terms
Now we are ready to evaluate the limit. Because the expressions are a little involved, we will simplify the numerator and denominator separately and then put things together. Using the expansions we developed in Example 3.6.25, the numerator,
Similarly, using the expansion that we developed in Example 3.6.29,
Now put these together and take the limit as
The next two limits have much the same flavour as those above — expand the numerator and denominator to high enough order, do some cancellations and then take the limit. We have increased the difficulty a little by introducing “expansions of expansions”.
Example 3.6.35. Another difficult limit.
In this example we’ll evaluate another harder limit, namely
The first thing to notice about this limit is that, as tends to zero, the denominator tends to So, yet again, to find the limit, we are going to have to show that the numerator also tends to and we are going to have to exhibit a cancellation between the numerator and the denominator.
Because the denominator is any terms in the numerator, that are of order or higher will contribute terms in the ratio that are of order or higher. Those terms in the ratio will converge to zero as The moral of this discussion is that we need to compute to order with errors of order Now we saw, in Example 3.6.32, that
We also saw, in equation 3.6.30 with that
Substituting and using that (by Remark 3.6.31(b,c)), we have that the numerator
29
In our derivation of in Example 3.6.29, we required only that So we are free to substitute for any that is small enough that
and the limit
Example 3.6.36. Yet another difficult limit.
Evaluate
Solution: Step 1: Find the limit of the denominator.
This tells us that we can’t evaluate the limit just by finding the limits of the numerator and denominator separately and then dividing.
Step 2: Determine the leading order behaviour of the denominator near By equations 3.6.30 and 3.6.26,
Taking the difference of these expansions gives
This tells us that, for near zero, the denominator is (that’s the leading order term) plus contributions that are of order and smaller. That is
Step 3: Determine the behaviour of the numerator near to order with errors of order and smaller (just like the denominator). By equation 3.6.28
Substituting
by equation 3.6.26. Subtracting, the numerator
Step 4: Evaluate the limit.
Exercises 3.6.8 Exercises
Exercises — Stage 1 .
1.
Below is a graph of along with the constant approximation, linear approximation, and quadratic approximation centred at Which is which?
2.
3.
Below are a list of common functions, and their Taylor series representations. Match the function to the Taylor series and give the radius of convergence of the series.
function | series |
A. |
I. |
B. |
II. |
C. |
III. |
D. |
IV. |
E. |
V. |
F. |
VI. |
4.
- Suppose
for all real What is (the twentieth derivative of at )? - Suppose
for all real What is - If
what is What is
Exercises — Stage 2 .
5.
6.
7.
Using the definition of a Taylor series, find the Taylor series for centred at What is the interval of convergence of the resulting series?
8.
Using the definition of a Taylor series, find the Taylor series for centred at where is some constant. What is the radius of convergence of the resulting series?
Exercise Group.
9. (✳).
Find the Maclaurin series for
10. (✳).
i.e.
Find
11. (✳).
12. (✳).
13. (✳).
The first two terms in the Maclaurin series for are , where and are constants. Find the values of and
14. (✳).
Give the first two nonzero terms in the Maclaurin series for
15. (✳).
Find the Maclaurin series for
16. (✳).
Exercise Group.
17. (✳).
18. (✳).
Evaluate
19. (✳).
Evaluate
20. (✳).
Evaluate the sum of the convergent series
21. (✳).
Evaluate
22. (✳).
Evaluate
23.
Evaluate or show that it diverges.
24.
or show that it diverges.
25. (✳).
(b) Evaluate
26.
- Using the fact that
how many terms of the Taylor series for arctangent would you have to add up to approximate with an error of at most - Example 3.6.15 mentions the formulaUsing the Taylor series for arctangent, how many terms would you have to add up to approximate
with an error of at most - Assume without proof the following:
with an error of at most
27.
Suppose you wanted to approximate the number as a rational number using the Taylor expansion of How many terms would you need to add to get 10 decimal places of accuracy? (That is, an absolute error less than )
28.
Suppose you wanted to approximate the number as a rational number using the Maclaurin expansion of How many terms would you need to add to get 10 decimal places of accuracy? (That is, an absolute error less than )
You may assume without proof that
29.
Suppose you wanted to approximate the number as a rational number using the Taylor expansion of Which partial sum should you use to get 10 decimal places of accuracy? (That is, an absolute error less than )
30.
Suppose you wanted to approximate the number using the Maclaurin series of where is some number in Which partial sum should you use to guarantee 10 decimal places of accuracy? (That is, an absolute error less than )
You may assume without proof that
31.
for all
Give reasonable bounds (both upper and lower) on the error involved in approximating using the partial sum of the Taylor series for centred at
Remark: One function with this quality is the inverse hyperbolic tangent function.
30
Of course it is! Actually, hyperbolic tangent is and inverse hyperbolic tangent is its functional inverse.
Exercises — Stage 3 .
32. (✳).
Use series to evaluate
33. (✳).
Evaluate
34.
Evaluate using a Taylor series for the natural logarithm.
35.
36.
Evaluate the series or show that it diverges.
37.
Write the series as a combination of familiar functions.
38.
- Find the Maclaurin series for
What is its radius of convergence? - Manipulate the series you just found to find the Maclaurin series for
What is its radius of convergence?
39. (✳).
40. (✳).
41. (✳).
Using a Maclaurin series, the number is found to be an approximation for Give the best upper bound you can for
42. (✳).
43. (✳).
44. (✳).
- Find the Maclaurin series for
- It can be shown that
has an absolute maximum which occurs at its smallest positive critical point (see the graph of below). Find this critical point. -
Use the previous information to find the maximum value of
to within
45. (✳).
Let
- Find the Maclaurin series for
- Use this series to approximate
to within - Is your estimate in (b) greater than
Explain.
46. (✳).
Let
- Find the Maclaurin series for
- Use this series to approximate
to within - Is your estimate in (b) greater than or less than
47. (✳).
48. (✳).
Show that
49. (✳).
50.
The law of the instrument says “If you have a hammer then everything looks like a nail” — it is really a description of the “tendency of jobs to be adapted to tools rather than adapting tools to jobs”. Anyway, this is a long way of saying that just because we know how to compute things using Taylor series doesn’t mean we should neglect other techniques.
31
Quote from Silvan Tomkins’s Computer Simulation of Personality: Frontier of Psychological Theory. See also Birmingham screwdrivers.
- Using Newton’s method, approximate the constant
as a root of the function Using a calculator, make your estimation accurate to within 0.01. - You may assume without proof that
Using the fact that this is an alternating series, how many terms would you have to add for the partial sum to estimate with an error less than 0.01?
51.
Let Write as a sum of rational numbers with an error less than using the Maclaurin series for arctangent.
52.
- Sketch
- Assume (without proof) that
for all whole numbers Find the Maclaurin series for - Where does the Maclaurin series for
converge? - For which values of
is equal to its Maclaurin series?