
CLP-1 Differential Calculus

Section 3.4 Approximating Functions Near a Specified Point — Taylor Polynomials

Suppose that you are interested in the values of some function \(f(x)\) for \(x\) near some fixed point \(a\text{.}\) When the function is a polynomial or a rational function we can use some arithmetic (and maybe some hard work) to write down the answer. For example:
\begin{align*} f(x) &= \frac{x^2-3}{x^2-2x+4}\\ f(1/5) &= \frac{ \frac{1}{25}-3}{\frac{1}{25}-\frac{2}{5}+4 } = \frac{\frac{1-75}{25} }{\frac{1-10+100}{25}}\\ &= \frac{-74}{91} \end{align*}
Tedious, but we can do it. On the other hand, if you are asked to compute \(\sin(1/10)\) then what can you do? We know that a calculator can work it out
\begin{align*} \sin(1/10) &= 0.09983341\dots \end{align*}
but how does the calculator do this? How did people compute this before calculators? (Originally the word “calculator” referred not to the software or electronic (or even mechanical) device we think of today, but rather to a person who performed calculations.) A hint comes from the following sketch of \(\sin(x)\) for \(x\) around \(0\text{.}\)
The above figure shows that the curves \(y=x\) and \(y=\sin x\) are almost the same when \(x\) is close to \(0\text{.}\) Hence if we want the value of \(\sin(1/10)\) we could just use this approximation \(y=x\) to get
\begin{gather*} \sin(1/10) \approx 1/10. \end{gather*}
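Though not part of the text, this approximation is easy to check numerically; the following Python sketch compares \(x\) with \(\sin x\) at \(x=1/10\text{.}\)

```python
import math

x = 1 / 10
approx = x            # the approximation sin(x) ≈ x, valid near 0
exact = math.sin(x)   # what the "calculator" reports

# Near 0 the two agree to roughly four decimal places.
print(approx, exact, abs(approx - exact))
```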
Of course, in this case we simply observed that one function was a good approximation of the other. We need to know how to find such approximations more systematically.
More precisely, say we are given a function \(f(x)\) that we wish to approximate close to some point \(x=a\text{,}\) and we need to find another function \(F(x)\) that
  • is simple and easy to compute (it is no good approximating a function with something that is even more difficult to work with), and
  • is a good approximation to \(f(x)\) for \(x\) values close to \(a\text{.}\)
Further, we would like to understand how good our approximation actually is. Namely we need to be able to estimate the error \(|f(x)-F(x)|\text{.}\)
There are many different ways to approximate a function and we will discuss one family of approximations: Taylor polynomials. This is an infinite family of ever improving approximations, and our starting point is the very simplest.

Subsection 3.4.1 Zeroth Approximation — the Constant Approximation

The simplest functions are those that are constants, and our zeroth approximation will be by a constant function. (It barely counts as an approximation at all, but it will help build intuition. Because of this, and the fact that a constant is a polynomial of degree 0, we’ll start counting our approximations from zero rather than 1.) That is, the approximating function will have the form \(F(x)=A\text{,}\) for some constant \(A\text{.}\) Notice that this function is a polynomial of degree zero.
To ensure that \(F(x)\) is a good approximation for \(x\) close to \(a\text{,}\) we choose \(A\) so that \(f(x)\) and \(F(x)\) take exactly the same value when \(x=a\text{.}\)
\begin{gather*} F(x)=A\qquad\text{so}\qquad F(a)=A=f(a)\implies A=f(a) \end{gather*}
Our first, and crudest, approximation rule is the constant approximation (equation 3.4.1):
\begin{gather*} f(x) \approx f(a) \end{gather*}
An important point to note is that we need to know \(f(a)\) — if we cannot compute that easily then we are not going to be able to proceed. We will often have to choose \(a\) (the point around which we are approximating \(f(x)\)) with some care to ensure that we can compute \(f(a)\text{.}\)
Here is a figure showing the graphs of a typical \(f(x)\) and approximating function \(F(x)\text{.}\)
At \(x=a\text{,}\) \(f(x)\) and \(F(x)\) take the same value. For \(x\) very near \(a\text{,}\) the values of \(f(x)\) and \(F(x)\) remain close together. But the quality of the approximation deteriorates fairly quickly as \(x\) moves away from \(a\text{.}\) Clearly we could do better with a straight line that follows the slope of the curve. That is our next approximation.
But before then, an example:

Example 3.4.2. A (weak) approximation of \(e^{0.1}\).

Use the constant approximation to estimate \(e^{0.1}\text{.}\)
Solution First set \(f(x) = e^x\text{.}\)
  • Now we first need to pick a point \(x=a\) to approximate the function. This point needs to be close to \(0.1\) and we need to be able to evaluate \(f(a)\) easily. The obvious choice is \(a=0\text{.}\)
  • Then our constant approximation is just
    \begin{align*} F(x) &= f(0) = e^0 = 1\\ F(0.1) &= 1 \end{align*}
Note that \(e^{0.1} = 1.105170918\dots\text{,}\) so even this approximation isn’t too bad.

Subsection 3.4.2 First Approximation — the Linear Approximation

Our first approximation (recall that we started counting from zero) improves on our zeroth approximation by allowing the approximating function to be a linear function of \(x\) rather than just a constant function. That is, we allow \(F(x)\) to be of the form \(A+Bx\text{,}\) for some constants \(A\) and \(B\text{.}\)
To ensure that \(F(x)\) is a good approximation for \(x\) close to \(a\text{,}\) we still require that \(f(x)\) and \(F(x)\) have the same value at \(x=a\) (that was our zeroth approximation). Our additional requirement is that their tangent lines at \(x=a\) have the same slope — that the derivatives of \(f(x)\) and \(F(x)\) are the same at \(x=a\text{.}\) Hence
\begin{align*} F(x)&=A+Bx & &\implies & F(a)=A+Ba&=f(a)\\ F'(x)&=B & &\implies & F'(a)=\phantom{A+a}B&=f'(a) \end{align*}
So we must have \(B=f'(a)\text{.}\) Substituting this into \(A+Ba=f(a)\) we get \(A=f(a)-af'(a)\text{.}\) So we can write
\begin{align*} F(x) &= A+Bx = \overbrace{f(a)- af'(a)}^A+ f'(a) \cdot x\\ &= f(a) + f'(a) \cdot(x-a) \end{align*}
We write it in this form because we can now clearly see that our first approximation is just an extension of our zeroth approximation. This first approximation is also often called the linear approximation of \(f(x)\) about \(x=a\) (equation 3.4.3).
We should again stress that in order to form this approximation we need to know \(f(a)\) and \(f'(a)\) — if we cannot compute them easily then we are not going to be able to proceed.
Recall, from Theorem 2.3.4, that \(y=f(a)+f'(a)(x-a)\) is exactly the equation of the tangent line to the curve \(y=f(x)\) at \(a\text{.}\) Here is a figure showing the graphs of a typical \(f(x)\) and the approximating function \(F(x)\text{.}\)
Observe that the graph of \(f(a)+f'(a)(x-a)\) remains close to the graph of \(f(x)\) for a much larger range of \(x\) than did the graph of our constant approximation, \(f(a)\text{.}\) One can also see that we can improve this approximation if we can use a function that curves down rather than being perfectly straight. That is our next approximation.
But before then, back to our example:

Example 3.4.4. A better approximation of \(e^{0.1}\).

Use the linear approximation to estimate \(e^{0.1}\text{.}\)
Solution First set \(f(x) = e^x\) and \(a=0\) as before.
  • To form the linear approximation we need \(f(a)\) and \(f'(a)\text{:}\)
    \begin{align*} f(x) &= e^x & f(0) & = 1\\ f'(x) &= e^x & f'(0) & = 1 \end{align*}
  • Then our linear approximation is
    \begin{align*} F(x) &= f(0) + x f'(0) = 1 + x\\ F(0.1) &= 1.1 \end{align*}
Recall that \(e^{0.1} = 1.105170918\dots\text{,}\) so the linear approximation is almost correct to 3 digits.
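The linear approximation formula translates directly into code. Here is a short Python sketch (not from the text) that evaluates \(F(x)=f(a)+f'(a)(x-a)\) for this example, where the values \(f(0)=f'(0)=1\) are supplied by hand:

```python
import math

def linear_approx(f_a, df_a, a, x):
    """First (linear) approximation: F(x) = f(a) + f'(a)(x - a)."""
    return f_a + df_a * (x - a)

# f(x) = e^x about a = 0, where f(0) = f'(0) = 1
approx = linear_approx(1.0, 1.0, 0.0, 0.1)   # gives 1.1
error = abs(math.exp(0.1) - approx)
print(approx, error)
```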
It is worth doing another simple example here.

Example 3.4.5. A linear approximation of \(\sqrt{4.1}\).

Use a linear approximation to estimate \(\sqrt{4.1}\text{.}\)
Solution First set \(f(x)=\sqrt{x}\text{.}\) Hence \(f'(x) = \frac{1}{2\sqrt{x}}\text{.}\) Then we are trying to approximate \(f(4.1)\text{.}\) Now we need to choose a sensible \(a\) value.
  • We need to choose \(a\) so that \(f(a)\) and \(f'(a)\) are easy to compute.
    • We could try \(a=4.1\) — but then we need to compute \(f(4.1)\) and \(f'(4.1)\) — which is our original problem and more!
    • We could try \(a=0\) — but then \(f(0)=0\) and \(f'(0)\) does not exist.
    • Setting \(a=1\) gives us \(f(1)=1\) and \(f'(1)=\frac{1}{2}\text{.}\) This would work, but we can get a better approximation by choosing \(a\) closer to \(4.1\text{.}\)
    • Indeed we can set \(a\) to be the square of any rational number and we’ll get a result that is easy to compute.
    • Setting \(a=4\) gives \(f(4)=2\) and \(f'(4) = \frac{1}{4}\text{.}\) This seems good enough.
  • Substitute this into equation 3.4.3 to get
    \begin{align*} f(4.1) &\approx f(4) + f'(4) \cdot(4.1-4)\\ &= 2 + \frac{0.1}{4} = 2 + 0.025 = 2.025 \end{align*}
Notice that the true value is \(\sqrt{4.1} = 2.024845673\dots\text{.}\)
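As a quick check of this arithmetic, the computation can be reproduced in a few lines of Python (an illustrative addition, not part of the text):

```python
import math

a, x = 4.0, 4.1
f_a = math.sqrt(a)              # f(4) = 2
df_a = 1 / (2 * math.sqrt(a))   # f'(4) = 1/4
approx = f_a + df_a * (x - a)   # 2 + 0.1/4 = 2.025
print(approx, math.sqrt(x), abs(math.sqrt(x) - approx))
```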

Subsection 3.4.3 Second Approximation — the Quadratic Approximation

We next develop a still better approximation by now allowing the approximating function to be a quadratic function of \(x\text{.}\) That is, we allow \(F(x)\) to be of the form \(A+Bx+Cx^2\text{,}\) for some constants \(A\text{,}\) \(B\) and \(C\text{.}\) To ensure that \(F(x)\) is a good approximation for \(x\) close to \(a\text{,}\) we choose \(A\text{,}\) \(B\) and \(C\) so that
  • \(f(a)=F(a)\) (just as in our zeroth approximation),
  • \(f'(a)=F'(a)\) (just as in our first approximation), and
  • \(f''(a)=F''(a)\) — this is a new condition.
These conditions give us the following equations
\begin{align*} F(x)&=A+Bx+Cx^2 & &\implies & F(a)=A+Ba+\phantom{2}Ca^2&=f(a)\\ F'(x)&=B+2Cx & &\implies & F'(a)=\phantom{A+a}B+2Ca&=f'(a)\\ F''(x)&=2C & &\implies & F''(a)=\phantom{A+aB+a}2C&=f''(a) \end{align*}
Solve these for \(C\) first, then \(B\) and finally \(A\text{.}\)
\begin{align*} C &=\half f''(a) & \text{substitute}\\ B &= f'(a) - 2Ca = f'(a)-af''(a) & \text{substitute again}\\ A &= f(a)-Ba-Ca^2 = f(a)-a[f'(a)-af''(a)]-\half f''(a)a^2 \end{align*}
Then put things back together to build up \(F(x)\text{:}\)
\begin{align*} F(x)&=f(a)-f'(a)a+\half f''(a)a^2 && \text{(this line is $A$)}\\ &\quad+f'(a)\,x - f''(a)ax && \text{(this line is $Bx$)}\\ &\quad+\half f''(a)x^2 && \text{(this line is $Cx^2$)}\\ &=f(a)+f'(a)(x-a)+\half f''(a)(x-a)^2 \end{align*}
Oof! We again write it in this form because we can now clearly see that our second approximation is just an extension of our first approximation.
Our second approximation is called the quadratic approximation (equation 3.4.6):
\begin{gather*} f(x) \approx f(a)+f'(a)(x-a)+\half f''(a)(x-a)^2 \end{gather*}
Here is a figure showing the graphs of a typical \(f(x)\) and approximating function \(F(x)\text{.}\)
This new approximation looks better than both the zeroth and first approximations.
Now there is actually an easier way to derive this approximation, which we show you now. Any polynomial of degree two can be written in the form below; for example, when \(a=1\text{,}\) \(3 + 2x + x^2 = 6 + 4(x-1) + (x-1)^2\text{.}\) Let us rewrite \(F(x)\) so that it is easy to evaluate it and its derivatives at \(x=a\text{:}\)
\begin{align*} F(x) &= \alpha + \beta\cdot (x-a) + \gamma \cdot(x-a)^2 \end{align*}
Then
\begin{align*} F(x) &= \alpha + \beta\cdot (x-a) + \gamma \cdot(x-a)^2 & F(a) &= \alpha = f(a)\\ F'(x) &= \beta + 2\gamma \cdot(x-a) & F'(a)&=\beta = f'(a)\\ F''(x) &= 2\gamma & F''(a) &= 2\gamma = f''(a) \end{align*}
And from these we can clearly read off the values of \(\alpha,\beta\) and \(\gamma\) and so recover our function \(F(x)\text{.}\) Additionally if we write things this way, then it is quite clear how to extend this to a cubic approximation and a quartic approximation and so on.
Return to our example:

Example 3.4.7. An even better approximation of \(e^{0.1}\).

Use the quadratic approximation to estimate \(e^{0.1}\text{.}\)
Solution Set \(f(x) = e^x\) and \(a=0\) as before.
  • To form the quadratic approximation we need \(f(a), f'(a)\) and \(f''(a)\text{:}\)
    \begin{align*} f(x) &= e^x & f(0) & = 1\\ f'(x) &= e^x & f'(0) & = 1\\ f''(x) &= e^x & f''(0) & = 1 \end{align*}
  • Then our quadratic approximation is
    \begin{align*} F(x) &= f(0) + x f'(0) + \frac{1}{2} x^2 f''(0) = 1 + x + \frac{x^2}{2}\\ F(0.1) &= 1.105 \end{align*}
Recall that \(e^{0.1} = 1.105170918\dots\text{,}\) so the quadratic approximation is quite accurate with very little effort.
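The quadratic approximation is just as easy to mechanise. The Python sketch below (an illustrative addition; the derivative values \(f(0)=f'(0)=f''(0)=1\) are supplied by hand) repeats this estimate:

```python
import math

def quadratic_approx(f_a, df_a, d2f_a, a, x):
    """Second approximation: F(x) = f(a) + f'(a)(x-a) + (1/2) f''(a)(x-a)^2."""
    dx = x - a
    return f_a + df_a * dx + 0.5 * d2f_a * dx ** 2

approx = quadratic_approx(1.0, 1.0, 1.0, 0.0, 0.1)  # gives 1.105
print(approx, abs(math.exp(0.1) - approx))
```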
Before we go on, let us first introduce (or revise) some notation that will make our discussion easier.

Subsection 3.4.4 Whirlwind Tour of Summation Notation

In the remainder of this section we will frequently need to write sums involving a large number of terms. Writing out the summands explicitly can become quite impractical — for example, say we need the sum of the first 11 squares:
\begin{gather*} 1^2 + 2^2 + 3^2 + 4^2+ 5^2 + 6^2 + 7^2 + 8^2 + 9^2 + 10^2 + 11^2 \end{gather*}
This becomes tedious. Where the pattern is clear, we will often skip the middle few terms and instead write
\begin{gather*} 1^2 + 2^2 + \cdots + 11^2. \end{gather*}
A far more precise way to write this is using \(\Sigma\) (capital-sigma) notation. For example, we can write the above sum as
\begin{gather*} \sum_{k=1}^{11} k^2 \end{gather*}
This is read as
The sum from \(k\) equals 1 to 11 of \(k^2\text{.}\)
More generally

Definition 3.4.8.

Let \(m\leq n\) be integers and let \(f(x)\) be a function defined on the integers. Then we write
\begin{gather*} \sum_{k=m}^n f(k) \end{gather*}
to mean the sum of \(f(k)\) for \(k\) from \(m\) to \(n\text{:}\)
\begin{gather*} f(m) + f(m+1) + f(m+2) + \cdots + f(n-1) + f(n). \end{gather*}
Similarly we write
\begin{gather*} \sum_{i=m}^n a_i \end{gather*}
to mean
\begin{gather*} a_m+a_{m+1}+a_{m+2}+\cdots+a_{n-1}+a_n \end{gather*}
for some set of coefficients \(\{ a_m, \ldots, a_n \}\text{.}\)
Consider the example
\begin{gather*} \sum_{k=3}^7 \frac{1}{k^2}=\frac{1}{3^2}+\frac{1}{4^2}+\frac{1}{5^2}+ \frac{1}{6^2}+\frac{1}{7^2} \end{gather*}
It is important to note that the right hand side of this expression evaluates to a number (some careful addition shows it is \(\frac{46181}{176400}\)); it does not contain “\(k\)”. The summation index \(k\) is just a “dummy” variable and it does not have to be called \(k\text{.}\) For example
\begin{gather*} \sum_{k=3}^7 \frac{1}{k^2} =\sum_{i=3}^7 \frac{1}{i^2} =\sum_{j=3}^7 \frac{1}{j^2} =\sum_{\ell=3}^7 \frac{1}{\ell^2} \end{gather*}
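Sigma notation maps directly onto a programming language's sum-over-a-range construct. The Python sketch below (not part of the text) evaluates this sum exactly with rational arithmetic, confirming the value quoted above:

```python
from fractions import Fraction

# Sum of 1/k^2 for k = 3..7; range(3, 8) matches the summation limits.
total = sum(Fraction(1, k * k) for k in range(3, 8))
print(total)  # → 46181/176400
```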
Also the summation index has no meaning outside the sum. For example
\begin{gather*} k\sum_{k=3}^7 \frac{1}{k^2} \end{gather*}
has no mathematical meaning; it is gibberish. (Or possibly gobbledygook. For a discussion of statements without meaning and why one should avoid them we recommend the book “Bendable learnings: the wisdom of modern management” by Don Watson.)

Subsection 3.4.5 Still Better Approximations — Taylor Polynomials

We can use the same strategy to generate still better approximations by polynomials of any degree we like. (Polynomials are generally a good choice for an approximating function since they are so easy to work with. Depending on the situation other families of functions may be more appropriate. For example, if you are approximating a periodic function, then sums of sines and cosines might be a better choice; this leads to Fourier series.) As was the case with the approximations above, we determine the coefficients of the polynomial by requiring that, at the point \(x=a\text{,}\) the approximation and its first \(n\) derivatives agree with those of the original function.
Rather than simply moving to a cubic polynomial, let us try to write things in a more general way. We will consider approximating the function \(f(x)\) using a polynomial, \(T_n(x)\text{,}\) of degree \(n\) — where \(n\) is a non-negative integer. As we discussed above, the algebra is easier if we write
\begin{align*} T_n(x) &= c_0 + c_1(x-a) + c_2 (x-a)^2 + \cdots + c_n (x-a)^n\\ &= \sum_{k=0}^n c_k (x-a)^k & \text{using } \Sigma \text{ notation} \end{align*}
The above form makes it very easy to evaluate this polynomial and its derivatives at \(x=a\text{.}\) (Any polynomial in \(x\) of degree \(n\) can also be expressed as a polynomial in \((x-a)\) of the same degree \(n\) and vice versa, so \(T_n(x)\) really still is a polynomial of degree \(n\text{.}\) Furthermore, when \(x\) is close to \(a\text{,}\) \((x-a)^k\) decreases very quickly as \(k\) increases, which often makes the “high \(k\)” terms in \(T_n(x)\) very small. This can be a considerable advantage when building up approximations by adding more and more terms. If we were to rewrite \(T_n(x)\) in the form \(\sum_{k=0}^n b_k x^k\) the “high \(k\)” terms would typically not be very small when \(x\) is close to \(a\text{.}\)) Before we proceed, we remind the reader of some notation (see Notation 2.2.8):
  • Let \(f(x)\) be a function and \(k\) be a positive integer. We can denote its \(k^\mathrm{th}\) derivative with respect to \(x\) by
    \begin{align*} \ddiff{k}{f}{x} && \left( \diff{}{x}\right)^k f(x) && f^{(k)}(x) \end{align*}
Additionally we will need

Definition 3.4.9. Factorial.

Let \(n\) be a positive integer. (It is actually possible to define the factorial of positive real numbers and even negative numbers, but it requires more advanced calculus and is outside the scope of this course; the interested reader should look up the Gamma function.) Then \(n\)-factorial, denoted \(n!\text{,}\) is the product
\begin{align*} n! &= n \times (n-1) \times \cdots \times 3 \times 2 \times 1 \end{align*}
Further, we use the convention that
\begin{align*} 0! &= 1 \end{align*}
The first few factorials are
\begin{align*} 1! &=1 & 2! &=2 & 3! &=6\\ 4! &=24 & 5! &=120 & 6! &=720 \end{align*}
Now consider \(T_n(x)\) and its derivatives:
\begin{alignat*}{4} T_n(x) &=& c_0 &+ c_1(x-a) & + c_2 (x-a)^2 & + c_3(x-a)^3 &+ \cdots+ & c_n (x-a)^n\\ T_n'(x) &=& &c_1 & + 2 c_2 (x-a) & + 3c_3(x-a)^2 &+ \cdots +& n c_n (x-a)^{n-1}\\ T_n''(x) &=& & & 2 c_2 & + 6c_3(x-a) &+ \cdots +& n(n-1) c_n (x-a)^{n-2}\\ T_n'''(x) &=& & & & 6c_3 &+ \cdots + & n(n-1)(n-2) c_n (x-a)^{n-3}\\ & \vdots\\ T_n^{(n)}(x) &=& & & & & & n! \cdot c_n \end{alignat*}
Now notice that when we substitute \(x=a\) into the above expressions only the constant terms survive and we get
\begin{align*} T_n(a) &= c_0\\ T_n'(a) &= c_1\\ T_n''(a) &= 2\cdot c_2\\ T_n'''(a) &= 6 \cdot c_3\\ &\vdots\\ T_n^{(n)}(a) &= n! \cdot c_n \end{align*}
So now if we want to set the coefficients of \(T_n(x)\) so that it agrees with \(f(x)\) at \(x=a\) then we need
\begin{align*} T_n(a) &= c_0 = f(a) & c_0 &= f(a) = \frac{1}{0!} f(a)\\ \end{align*}

We also want the first \(n\) derivatives of \(T_n(x)\) to agree with the derivatives of \(f(x)\) at \(x=a\text{,}\) so

\begin{align*} T_n'(a) &= c_1 = f'(a) & c_1 &= f'(a) = \frac{1}{1!} f'(a)\\ T_n''(a) &= 2\cdot c_2 = f''(a) & c_2 &= \frac{1}{2} f''(a) = \frac{1}{2!}f''(a)\\ T_n'''(a) &= 6\cdot c_3 = f'''(a) & c_3 &= \frac{1}{6} f'''(a) = \frac{1}{3!} f'''(a)\\ \end{align*}

More generally, making the \(k^\mathrm{th}\) derivatives agree at \(x=a\) requires:

\begin{align*} T_n^{(k)}(a) &= k!\cdot c_k = f^{(k)}(a) & c_k &= \frac{1}{k!} f^{(k)}(a)\\ \end{align*}

And finally the \(n^\mathrm{th}\) derivative:

\begin{align*} T_n^{(n)}(a) &= n!\cdot c_n = f^{(n)}(a) & c_n &= \frac{1}{n!} f^{(n)}(a) \end{align*}
Putting this all together we have the following (equation 3.4.10):
\begin{align*} T_n(x) &= f(a) + f'(a)\cdot(x-a) + \frac{1}{2!}f''(a)\cdot(x-a)^2 + \cdots + \frac{1}{n!}f^{(n)}(a)\cdot(x-a)^n\\ &= \sum_{k=0}^n \frac{1}{k!} f^{(k)}(a) \cdot (x-a)^k \end{align*}
Let us formalise this definition.

Definition 3.4.11. Taylor polynomial.

Let \(a\) be a constant and let \(n\) be a non-negative integer. The \(n^\mathrm{th}\) order Taylor polynomial for \(f(x)\) about \(x=a\) is
\begin{align*} T_n(x) &= \sum_{k=0}^n \frac{1}{k!} f^{(k)}(a) \cdot (x-a)^k. \end{align*}
(It is sometimes called the \(n^\mathrm{th}\) degree Taylor polynomial, but its degree will actually be less than \(n\) if \(f^{(n)}(a)=0\text{.}\)) The special case \(a=0\) is called a Maclaurin polynomial. (The polynomials are named after Brook Taylor, who devised a general method for constructing them in 1715. Slightly later, Colin Maclaurin made extensive use of the special case \(a=0\text{,}\) with attribution of the general case to Taylor, and it is now named after him. The special case of \(a=0\) was worked on previously by James Gregory and Isaac Newton, and some specific cases were known to the 14th century Indian mathematician Madhava of Sangamagrama.)
Before we proceed with some examples, a couple of remarks are in order.
  • While we can compute a Taylor polynomial about any \(a\)-value (providing the derivatives exist), in order to be a useful approximation, we must be able to compute \(f(a),f'(a),\cdots,f^{(n)}(a)\) easily. This means we must choose the point \(a\) with care. Indeed for many functions the choice \(a=0\) is very natural — hence the prominence of Maclaurin polynomials.
  • If we have computed the approximation \(T_n(x)\text{,}\) then we can readily extend this to the next Taylor polynomial \(T_{n+1}(x)\) since
    \begin{align*} T_{n+1}(x) &= T_n(x) + \frac{1}{(n+1)!} f^{(n+1)}(a) \cdot (x-a)^{n+1} \end{align*}
    This is very useful if we discover that \(T_n(x)\) is an insufficient approximation, because then we can produce \(T_{n+1}(x)\) without having to start again from scratch.
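The coefficient formula \(c_k = \frac{1}{k!}f^{(k)}(a)\) lends itself to a direct computation. The following Python sketch (an illustrative addition; the derivative values must be supplied by hand) builds \(T_n\) from a list of derivatives at \(a\text{:}\)

```python
from math import factorial

def taylor_poly(derivs, a):
    """Return T_n as a function, given derivs = [f(a), f'(a), ..., f^(n)(a)]."""
    def T(x):
        return sum(d / factorial(k) * (x - a) ** k
                   for k, d in enumerate(derivs))
    return T

# e^x about a = 0: every derivative equals 1 there.
T2 = taylor_poly([1.0, 1.0, 1.0], 0.0)
print(T2(0.1))  # matches the quadratic approximation 1.105
```

Extending to \(T_{n+1}\) amounts to appending one more derivative value to the list.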

Subsection 3.4.6 Some Examples

Let us return to our running example of \(e^x\text{:}\)

Example 3.4.12. Taylor approximations of \(e^x\).

The constant, linear and quadratic approximations we used above were the first few Maclaurin polynomial approximations of \(e^x\text{.}\) That is
\begin{align*} T_0 (x) & = 1 & T_1(x) &= 1+x & T_2(x) &= 1+x+\frac{x^2}{2} \end{align*}
Since \(\diff{}{x} e^x = e^x\text{,}\) the Maclaurin polynomials are very easy to compute. Indeed this invariance under differentiation means that
\begin{align*} f^{(n)}(x) &= e^x & n=0,1,2,\dots && \text{so}\\ f^{(n)}(0) &= 1 \end{align*}
Substituting this into equation 3.4.10 we get
\begin{align*} T_n(x) &= \sum_{k=0}^n \frac{1}{k!} x^k \end{align*}
Thus we can write down the seventh Maclaurin polynomial very easily:
\begin{align*} T_7(x) &= 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24} + \frac{x^5}{120} + \frac{x^6}{720} + \frac{x^7}{5040} \end{align*}
The following figure contains sketches of the graphs of \(e^x\) and its Taylor polynomials \(T_n(x)\) for \(n=0,1,2,3,4\text{.}\)
Also notice that if we use \(T_7(1)\) to approximate the value of \(e^1\) we obtain:
\begin{align*} e^1 \approx T_7(1) &= 1 + 1 + \frac{1}{2} + \frac{1}{6} + \frac{1}{24} + \frac{1}{120} + \frac{1}{720} + \frac{1}{5040}\\ &= \frac{685}{252} = 2.718253968\dots \end{align*}
The true value of \(e\) is \(2.718281828\dots\text{,}\) so the approximation has an error of about \(3\times10^{-5}\text{.}\)
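The arithmetic above can be reproduced exactly with rational numbers; the short Python sketch below (not part of the text) does so and measures the error:

```python
from fractions import Fraction
from math import e, factorial

# T_7(1) = sum of 1/k! for k = 0..7, computed exactly
T7_at_1 = sum(Fraction(1, factorial(k)) for k in range(8))
print(T7_at_1)                   # → 685/252
print(abs(e - float(T7_at_1)))   # an error of about 3e-5
```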
Under the assumption that the accuracy of the approximation improves with \(n\) (an assumption we examine in Subsection 3.4.9 below) we can see that the approximation of \(e\) above can be improved by adding more and more terms. Indeed this is how the expression for \(e\) in equation 2.7.4 in Section 2.7 comes about.
Now that we have examined Maclaurin polynomials for \(e^x\) we should take a look at \(\log x\text{.}\) Notice that we cannot compute a Maclaurin polynomial for \(\log x\) since it is not defined at \(x=0\text{.}\)

Example 3.4.13. Taylor approximation of \(\log x\).

Compute the \(5^\mathrm{th}\) order Taylor polynomial for \(\log x\) about \(x=1\text{.}\)
Solution We have been told \(a=1\) and fifth order, so we should start by writing down the function and its first five derivatives:
\begin{align*} f(x) &= \log x & f(1) &= \log 1 = 0\\ f'(x) &= \frac{1}{x} & f'(1) &= 1\\ f''(x) &= \frac{-1}{x^2} & f''(1) &= -1\\ f'''(x) &= \frac{2}{x^3} & f'''(1) &= 2\\ f^{(4)}(x) &= \frac{-6}{x^4} & f^{(4)}(1) &= -6\\ f^{(5)}(x) &= \frac{24}{x^5} & f^{(5)}(1) &= 24 \end{align*}
Substituting this into equation 3.4.10 gives
\begin{align*} T_5(x)&= 0 + 1\cdot (x-1) + \frac{1}{2} \cdot (-1) \cdot (x-1)^2 + \frac{1}{6} \cdot 2 \cdot (x-1)^3\\ &\qquad+ \frac{1}{24} \cdot (-6) \cdot (x-1)^4 + \frac{1}{120} \cdot 24 \cdot (x-1)^5\\ &= (x-1) - \frac{1}{2}(x-1)^2 + \frac{1}{3}(x-1)^3 - \frac{1}{4}(x-1)^4 + \frac{1}{5}(x-1)^5 \end{align*}
Again, it is not too hard to generalise the above work to find the Taylor polynomial of order \(n\text{:}\) With a little work one can show that
\begin{align*} T_n(x) &= \sum_{k=1}^n \frac{(-1)^{k+1}}{k} (x-1)^k. \end{align*}
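This general formula is easy to try out numerically. The Python sketch below (an illustrative addition, not from the text) evaluates it and compares against the built-in logarithm:

```python
import math

def T_log(n, x, a=1.0):
    """n-th order Taylor polynomial of log x about a = 1."""
    return sum((-1) ** (k + 1) / k * (x - a) ** k for k in range(1, n + 1))

# Close to a = 1 the polynomial tracks log x well.
print(T_log(5, 1.5), math.log(1.5))
```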
For cosine:

Example 3.4.14. Maclaurin polynomial for \(\cos x\).

Find the 4th order Maclaurin polynomial for \(\cos x\text{.}\)
Solution We have \(a=0\) and we need to find the first 4 derivatives of \(\cos x\text{.}\)
\begin{align*} f(x) &= \cos x & f(0) &= 1\\ f'(x) &= -\sin x & f'(0) &= 0\\ f''(x) &= -\cos x & f''(0) &= -1\\ f'''(x) &= \sin x & f'''(0) &= 0\\ f^{(4)}(x) &= \cos x & f^{(4)}(0) &= 1 \end{align*}
Substituting this into equation 3.4.10 gives
\begin{align*} T_4(x)&= 1 + 1\cdot (0) \cdot x + \frac{1}{2} \cdot (-1) \cdot x^2 + \frac{1}{6} \cdot 0 \cdot x^3 + \frac{1}{24} \cdot (1) \cdot x^4\\ &= 1 - \frac{x^2}{2} + \frac{x^4}{24} \end{align*}
Notice that since the \(4^\mathrm{th}\) derivative of \(\cos x\) is \(\cos x\) again, we also have that the fifth derivative is the same as the first derivative, and the sixth derivative is the same as the second derivative and so on. Hence the next four derivatives are
\begin{align*} f^{(4)}(x) &= \cos x & f^{(4)}(0) &= 1\\ f^{(5)}(x) &= -\sin x & f^{(5)}(0) &= 0\\ f^{(6)}(x) &= -\cos x & f^{(6)}(0) &= -1\\ f^{(7)}(x) &= \sin x & f^{(7)}(0) &= 0\\ f^{(8)}(x) &= \cos x & f^{(8)}(0) &= 1 \end{align*}
Using this we can find the \(8^\mathrm{th}\) order Maclaurin polynomial:
\begin{align*} T_8(x) &= 1 - \frac{x^2}{2} + \frac{x^4}{24} -\frac{x^6}{6!} + \frac{x^8}{8!} \end{align*}
Continuing this process gives us the \(2n^\mathrm{th}\) Maclaurin polynomial
\begin{align*} T_{2n}(x) &= \sum_{k=0}^n \frac{(-1)^k}{(2k)!} \cdot x^{2k} \end{align*}
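The \(2n^\mathrm{th}\) Maclaurin polynomial for cosine translates into one line of code; the Python sketch below (not part of the text, and assuming \(x\) is in radians) checks it against the built-in cosine:

```python
import math

def T_cos(n, x):
    """2n-th order Maclaurin polynomial for cos x (x in radians)."""
    return sum((-1) ** k / math.factorial(2 * k) * x ** (2 * k)
               for k in range(n + 1))

print(T_cos(2, 0.5), math.cos(0.5))
```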
Warning 3.4.15.
The above formula only works when \(x\) is measured in radians, because all of our derivative formulae for trig functions were developed under the assumption that angles are measured in radians.
Below we plot \(\cos x\) against its first few Maclaurin polynomial approximations:
The above work is quite easily recycled to get the Maclaurin polynomial for sine:

Example 3.4.16. Maclaurin polynomial for \(\sin x\).

Find the 5th order Maclaurin polynomial for \(\sin x\text{.}\)
Solution We could simply work as before and compute the first five derivatives of \(\sin x\text{.}\) But set \(g(x) = \sin x\) and notice that \(g(x) = - f'(x)\text{,}\) where \(f(x) =\cos x\text{.}\) Then we have
\begin{align*} g(0) &= -f'(0) = 0\\ g'(0) &= -f''(0) = 1\\ g''(0) &= -f'''(0) = 0\\ g'''(0) &= -f^{(4)}(0) = -1\\ g^{(4)}(0) &= -f^{(5)}(0) = 0\\ g^{(5)}(0) &= -f^{(6)}(0) = 1 \end{align*}
Hence the required Maclaurin polynomial is
\begin{align*} T_5(x) &= x - \frac{x^3}{3!} + \frac{x^5}{5!} \end{align*}
Just as we extended to the \(2n^\mathrm{th}\) Maclaurin polynomial for cosine, we can also extend our work to compute the \((2n+1)^\mathrm{th}\) Maclaurin polynomial for sine:
\begin{align*} T_{2n+1}(x) &= \sum_{k=0}^n \frac{(-1)^k}{(2k+1)!} \cdot x^{2k+1} \end{align*}
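The sine polynomial can be coded the same way; here is a Python sketch (an illustrative addition, again assuming \(x\) is in radians):

```python
import math

def T_sin(n, x):
    """(2n+1)-th order Maclaurin polynomial for sin x (x in radians)."""
    return sum((-1) ** k / math.factorial(2 * k + 1) * x ** (2 * k + 1)
               for k in range(n + 1))

print(T_sin(2, 1.0), math.sin(1.0))
```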
Warning 3.4.17.
The above formula only works when \(x\) is measured in radians, because all of our derivative formulae for trig functions were developed under the assumption that angles are measured in radians.
Below we plot \(\sin x\) against its first few Maclaurin polynomial approximations.
To get an idea of how good these Taylor polynomials are at approximating \(\sin\) and \(\cos\text{,}\) let’s concentrate on \(\sin x\) and consider \(x\)’s whose magnitude \(|x|\le 1\text{.}\) There are tricks that you can employ to evaluate sine and cosine at values of \(x\) outside this range. (If you are writing software to evaluate \(\sin x\text{,}\) you can always use the trig identity \(\sin(x)=\sin(x-2n\pi)\) to restrict to \(|x|\le\pi\text{.}\) You can then use the trig identity \(\sin(x)=-\sin(x\pm\pi)\) to reduce to \(|x|\le\tfrac{\pi}{2}\text{.}\) Finally you can use the trig identity \(\sin(x)=\mp\cos(\tfrac{\pi}{2}\pm x)\) to reduce to \(|x|\le\tfrac{\pi}{4} \lt 1\text{.}\))
If \(|x|\le 1\) radians (recall that the derivative formulae that we used to derive the Taylor polynomials are valid only when \(x\) is in radians; the restriction \(-1 \leq x \leq 1\) radians translates to angles bounded by \(\tfrac{180}{\pi}\approx 57^\circ\)), then the magnitudes of the successive terms in the Taylor polynomials for \(\sin x\) are bounded by
\begin{alignat*}{3} |x|&\le 1 & \tfrac{1}{3!}|x|^3&\le\tfrac{1}{6} & \tfrac{1}{5!}|x|^5&\le\tfrac{1}{120}\approx 0.0083\\ \tfrac{1}{7!}|x|^7&\le\tfrac{1}{7!}\approx 0.0002\quad & \tfrac{1}{9!}|x|^9&\le\tfrac{1}{9!}\approx 0.000003\quad & \tfrac{1}{11!}|x|^{11}&\le\tfrac{1}{11!}\approx 0.000000025 \end{alignat*}
From these inequalities, and the graphs on the previous pages, it certainly looks like, for \(x\) not too large, even relatively low order Taylor polynomials give very good approximations. In Subsection 3.4.9 we’ll see how to get rigorous error bounds on our Taylor polynomial approximations.

Subsection 3.4.7 Estimating Change and \(\De x\text{,}\) \(\De y\) Notation

Suppose that we have two variables \(x\) and \(y\) that are related by \(y=f(x)\text{,}\) for some function \(f\text{.}\) One of the most important applications of calculus is to help us understand what happens to \(y\) when we make a small change in \(x\text{.}\)

Definition 3.4.18.

Let \(x,y\) be variables related by a function \(f\text{.}\) That is \(y = f(x)\text{.}\) Then we denote a small change in the variable \(x\) by \(\De x\) (read as “delta \(x\)”). The corresponding small change in the variable \(y\) is denoted \(\De y\) (read as “delta \(y\)”).
\begin{align*} \De y &= f(x+\De x) - f(x) \end{align*}
In many situations we do not need to compute \(\De y\) exactly and are instead happy with an approximation. Consider the following example.

Example 3.4.19. Estimate the increase in cost for a given change in production.

Let \(x\) be the number of cars manufactured per week in some factory and let \(y\) be the cost of manufacturing those \(x\) cars. Given that the factory currently produces \(a\) cars per week, we would like to estimate the increase in cost if we make a small change in the number of cars produced.
Solution We are told that \(a\) is the number of cars currently produced per week; the cost of production is then \(f(a)\text{.}\)
  • Say the number of cars produced is changed from \(a\) to \(a+\De x\) (where \(\De x\) is some small number).
  • As \(x\) undergoes this change, the costs change from \(y=f(a)\) to \(f(a+\De x)\text{.}\) Hence
    \begin{align*} \De y &= f(a+\De x) - f(a) \end{align*}
  • We can estimate this change using a linear approximation. Substituting \(x=a+\De x\) into equation 3.4.3 yields the approximation
    \begin{gather*} f(a+\De x)\approx f(a)+f'(a)(a+\De x-a) \end{gather*}
    and consequently the approximation
    \begin{gather*} \De y=f(a+\De x)-f(a)\approx f(a)+f'(a)\De x-f(a) \end{gather*}
    simplifies to the following neat estimate of \(\De y\text{:}\)
    \begin{gather*} \De y \approx f'(a)\De x \end{gather*}
  • In the automobile manufacturing example, when the production level is \(a\) cars per week, increasing the production level by \(\De x\) will cost approximately \(f'(a)\De x\text{.}\) The additional cost per additional car, \(f'(a)\text{,}\) is called the “marginal cost” of a car.
  • If we instead use the quadratic approximation (given by equation 3.4.6) then we estimate
    \begin{gather*} f(a+\De x)\approx f(a)+f'(a)\De x+\half f''(a)\De x^2 \end{gather*}
    and so
    \begin{align*} \De y&=f(a+\De x)-f(a) \approx f(a)+f'(a)\De x +\half f''(a)\De x^2-f(a) \end{align*}
    which simplifies to
    \begin{gather*} \De y \approx f'(a)\De x+\half f''(a)\De x^2 \end{gather*}
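The linear and quadratic estimates of \(\De y\) can be compared concretely. The Python sketch below uses a hypothetical cost function, chosen for illustration only (it is not from the text):

```python
# Hypothetical weekly cost of producing x cars, for illustration only.
def f(x):
    return 1000 + 50 * x - 0.01 * x ** 2

def df(x):              # f'(x)
    return 50 - 0.02 * x

a, dx = 100.0, 5.0      # current production level and the change in it
exact = f(a + dx) - f(a)                           # true Δy
linear = df(a) * dx                                # Δy ≈ f'(a) Δx
quadratic = linear + 0.5 * (-0.02) * dx ** 2       # adds (1/2) f''(a) Δx²
print(exact, linear, quadratic)
```

Since this particular \(f\) is itself quadratic, the second estimate recovers \(\De y\) exactly; the linear estimate \(f'(a)\De x\) is the marginal cost times the number of additional cars.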

Subsection 3.4.8 Further Examples

In this subsection we give further examples of computation and use of Taylor approximations.

Example 3.4.22. Estimating \(\tan 46^\circ\).

Estimate \(\tan 46^\circ\text{,}\) using the constant, linear and quadratic approximations (equations 3.4.1, 3.4.3 and 3.4.6).
Solution Note that we need to be careful to translate angles measured in degrees to radians.
  • Set \(f(x)=\tan x\text{,}\) \(x=46\tfrac{\pi}{180}\) radians and \(a=45\tfrac{\pi}{180}=\tfrac{\pi}{4}\) radians. This is a good choice for \(a\) because
    • \(a=45^\circ\) is close to \(x=46^\circ\text{.}\) As noted above, it is generally the case that the closer \(x\) is to \(a\text{,}\) the better various approximations will be.
    • We know the values of all trig functions at \(45^\circ\text{.}\)
  • Now we need to compute \(f\) and its first two derivatives at \(x=a\text{.}\) It is a good time to recall the special \(1:1:\sqrt{2}\) triangle
    So
    \begin{align*} f(x) &= \tan x & f(\pi/4) &= 1\\ f'(x) &= \sec^2 x = \frac{1}{\cos^2 x} & f'(\pi/4) &= \frac{1}{(1/\sqrt{2})^2} = 2\\ f''(x) &= \frac{2\sin x}{\cos^3 x} & f''(\pi/4) &= \frac{2/\sqrt{2}}{(1/\sqrt{2})^3} = 4 \end{align*}
  • As \(x-a=46\tfrac{\pi}{180}-45\tfrac{\pi}{180}=\tfrac{\pi}{180}\) radians, the three approximations are
    \begin{alignat*}{2} f(x)&\approx f(a) \\ \amp=1\\ f(x)&\approx f(a)+f'(a)(x-a) & &=1+2\tfrac{\pi}{180} \\ &=1.034907\\ f(x)&\approx f(a)+f'(a)(x\!-\!a)+\half f''(a)(x\!-\!a)^2& &=1+2\tfrac{\pi}{180}+\half 4\big(\tfrac{\pi}{180}\big)^2\\ & =1.035516 \end{alignat*}
    For comparison purposes, \(\tan 46^\circ\) really is \(1.035530\) to 6 decimal places.
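These numbers are quick to reproduce. The snippet below is our own check (not part of the text); it evaluates the three approximations and compares them with `math.tan`.

```python
import math

a  = math.pi / 4           # 45 degrees in radians
x  = 46 * math.pi / 180    # 46 degrees in radians
dx = x - a                 # pi/180 radians

f0, f1, f2 = 1.0, 2.0, 4.0   # f(a), f'(a), f''(a) for f = tan at a = pi/4

constant  = f0
linear    = f0 + f1 * dx
quadratic = f0 + f1 * dx + 0.5 * f2 * dx**2

print(round(constant, 6), round(linear, 6), round(quadratic, 6))
print(round(math.tan(x), 6))   # the true value, 1.035530 to 6 places
```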

Warning 3.4.23.

All of our derivative formulae for trig functions were developed under the assumption that angles are measured in radians. Those derivatives appeared in the approximation formulae that we used in Example 3.4.22, so we were obliged to express \(x-a\) in radians.

Example 3.4.24. Error inferring a height from an angle.

Suppose that you are ten meters from a vertical pole. You were contracted to measure the height of the pole. You can’t take it down or climb it. So you measure the angle subtended by the top of the pole. You measure \(\theta=30^\circ\text{,}\) which gives
\begin{gather*} h=10\tan 30^\circ=\tfrac{10}{\sqrt{3}}\approx 5.77\text{m}\qquad\qquad \end{gather*}
This is just standard trigonometry — if we know the angle exactly then we know the height exactly.
However, in the “real world” angles are hard to measure with such precision. If the contract requires your measurement of the pole to be accurate to within \(10\) cm, how accurate does your measurement of the angle \(\theta\) need to be?
Solution For simplicity
 16 
Mathematicians love assumptions that let us tame the real world.
, we are going to assume that the pole is perfectly straight and perfectly vertical and that your distance from the pole was exactly 10 m.
  • Write \(\theta=\theta_0+\De\theta\) where \(\theta\) is the exact angle, \(\theta_0\) is the measured angle and \(\De \theta\) is the error.
  • Similarly write \(h=h_0+\De h\text{,}\) where \(h\) is the exact height and \(h_0=\tfrac{10}{\sqrt{3}}\) is the computed height. Their difference, \(\De h\text{,}\) is the error.
  • Then
    \begin{align*} h_0&=10\tan\theta_0 & h_0+\De h&=10\tan(\theta_0+\De\theta)\\ \De h &= 10\tan(\theta_0+\De\theta) - 10\tan\theta_0 \end{align*}
    We could attempt to solve this equation for \(\De\theta\) in terms of \(\De h\) — but it is far simpler to approximate \(\De h\) using the linear approximation in equation 3.4.20.
  • To use equation 3.4.20, replace \(y\) with \(h\text{,}\) \(x\) with \(\theta\) and \(a\) with \(\theta_0\text{.}\) Our function \(f(\theta) = 10 \tan\theta\) and \(\theta_0 = 30^\circ = \pi/6\) radians. Then
    \begin{align*} \De y &\approx f'(a) \De x & \text{ becomes }&& \De h &\approx f'(\theta_0) \De \theta \end{align*}
    Since \(f(\theta)=10 \tan \theta\text{,}\) \(f'(\theta) = 10\sec^2\theta\) and
    \begin{gather*} f'(\theta_0) = 10\sec^2(\pi/6) = 10 \cdot \left(\frac{2}{\sqrt{3}} \right)^2 = \frac{40}{3} \end{gather*}
  • Putting things together gives
    \begin{align*} \De h &\approx f'(\theta_0) \De \theta & \text{ becomes }&& \De h & \approx \frac{40}{3} \De \theta \end{align*}
    We can then solve this equation for \(\De\theta\) in terms of \(\De h\text{:}\)
    \begin{align*} \De \theta & \approx \frac{3}{40} \De h \end{align*}
  • We are told that we must have \(|\De h| \lt 0.1\text{,}\) so we must have
    \begin{align*} |\De \theta| &\leq \frac{3}{400} \end{align*}
    This is measured in radians, so converting back to degrees
    \begin{align*} \frac{3}{400} \cdot \frac{180}{\pi} &= 0.43^\circ \end{align*}
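The arithmetic in the last two steps can be confirmed in a couple of lines; this is our own check, not part of the text.

```python
import math

theta0 = math.pi / 6                # measured angle: 30 degrees in radians
fp = 10 / math.cos(theta0)**2       # f'(theta0) = 10 sec^2(pi/6) = 40/3

dtheta_max = 0.1 / fp               # |Delta h| < 0.1 m forces this bound
print(dtheta_max, math.degrees(dtheta_max))   # 0.0075 rad, about 0.43 degrees
```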

Definition 3.4.25.

Suppose that you measure, approximately, some quantity. Suppose that the exact value of that quantity is \(Q_0\) and that your measurement yielded \(Q_0+\De Q\text{.}\) Then \(|\De Q|\) is called the absolute error of the measurement and \(100\frac{|\De Q|}{Q_0}\) is called the percentage error of the measurement. As an example, if the exact value is \(4\) and the measured value is \(5\text{,}\) then the absolute error is \(|5-4|=1\) and the percentage error is \(100\frac{|5-4|}{4}=25\text{.}\) That is, the error, \(1\text{,}\) was \(25\%\) of the exact value, \(4\text{.}\)

Example 3.4.26. Error inferring the area and volume from the radius.

Suppose that the radius of a sphere has been measured with a percentage error of at most \(\varepsilon\)%. Find the corresponding approximate percentage errors in the surface area and volume of the sphere.
Solution We need to be careful in this problem to convert between absolute and percentage errors correctly.
  • Suppose that the exact radius is \(r_0\) and that the measured radius is \(r_0+\De r\text{.}\)
  • Then the absolute error in the measurement is \(|\De r|\) and, by definition, the percentage error is \(100\tfrac{|\De r|}{r_0}\text{.}\) We are told that \(100\tfrac{|\De r|}{r_0}\le\varepsilon\text{.}\)
  • The surface area
     17 
    We do not expect you to remember the surface areas of solids for this course.
    of a sphere of radius \(r\) is \(A(r)=4\pi r^2\text{.}\) The error in the surface area computed with the measured radius is
    \begin{align*} \De A &=A(r_0+\De r)-A(r_0)\approx A'(r_0)\De r\\ &= 8\pi r_0 \Delta r \end{align*}
    where we have made use of the linear approximation, equation 3.4.20.
  • The corresponding percentage error is then
    \begin{gather*} 100\frac{|\De A|}{A(r_0)} \approx 100\frac{|A'(r_0)\De r|}{A(r_0)} = 100\frac{8\pi r_0|\De r|}{4\pi r_0^2} = 2\times 100\frac{|\De r|}{r_0} \le 2\varepsilon \end{gather*}
  • The volume of a sphere
     18 
    We do expect you to remember the formula for the volume of a sphere.
    of radius \(r\) is \(V(r)=\frac{4}{3}\pi r^3\text{.}\) The error in the volume computed with the measured radius is
    \begin{align*} \De V &=V(r_0+\De r)-V(r_0)\approx V'(r_0)\De r\\ &= 4\pi r_0^2 \Delta r \end{align*}
    where we have again made use of the linear approximation, equation 3.4.20.
  • The corresponding percentage error is
    \begin{gather*} 100\frac{|\De V|}{V(r_0)} \approx 100\frac{|V'(r_0)\De r|}{V(r_0)} = 100\frac{4\pi r_0^2|\De r|}{4\pi r_0^3/3} = 3\times 100\frac{|\De r|}{r_0} \le 3\varepsilon \end{gather*}
We have just computed an approximation to \(\Delta V\text{.}\) This problem is actually sufficiently simple that we can compute \(\Delta V\) exactly:
\begin{align*} \Delta V &= V(r_0 + \Delta r) - V(r_0) = \tfrac{4}{3} \pi (r_0 + \Delta r)^3 - \tfrac{4}{3} \pi r_0^3 \end{align*}
  • Applying \((a+b)^3=a^3+3a^2b+3ab^2+b^3\) with \(a=r_0\) and \(b=\De r\text{,}\) gives
    \begin{align*} V(r_0+\De r)-V(r_0)&=\tfrac{4}{3}\pi \left[r_0^3+3r_0^2\De r+3r_0\,(\De r)^2+(\De r)^3\right] - \tfrac{4}{3}\pi r_0^3\\ &=\tfrac{4}{3}\pi[3r_0^2\De r+3r_0\,(\De r)^2+(\De r)^3] \end{align*}
  • Thus the difference between the exact error and the linear approximation to the error is obtained by retaining only the last two terms in the square brackets. This has magnitude
    \begin{gather*} \tfrac{4}{3}\pi\big|3r_0\,(\De r)^2+(\De r)^3\big| =\tfrac{4}{3}\pi\big|3r_0+\De r\big|(\De r)^2 \end{gather*}
    or in percentage terms
    \begin{align*} 100\cdot \dfrac{1}{\tfrac{4}{3}\pi r_0^3} \cdot \tfrac{4}{3}\pi \big|3r_0\,(\De r)^2+(\De r)^3\big| &=100\left|3\frac{(\De r)^2}{r_0^2}+\frac{(\De r)^3}{r_0^3}\right|\\ &=\left(100 \frac{3|\De r|}{r_0}\right) \cdot \left(\frac{|\De r|}{r_0}\right) \left|1 +\frac{\De r}{3r_0}\right|\\ & \le 3\varepsilon \left(\frac{\varepsilon}{100}\right)\cdot \left(1+\frac{\varepsilon}{300}\right) \end{align*}
    Since \(\varepsilon\) is small, we can assume that \(1 + \frac{\varepsilon}{300} \approx 1\text{.}\) Hence the difference between the exact error and the linear approximation of the error is roughly a factor of \(\tfrac{\varepsilon}{100}\) smaller than the linear approximation \(3\varepsilon\text{.}\)
  • As an aside, notice that if we argue that \(\De r\) is very small and so we can ignore terms involving \((\De r)^2\) and \((\De r)^3\) as being really really small, then we obtain
    \begin{align*} V(r_0+\De r)-V(r_0) &=\tfrac{4}{3}\pi[3r_0^2\De r \underbrace{+3r_0\,(\De r)^2+(\De r)^3}_\text{really really small}]\\ &\approx \tfrac{4}{3}\pi \cdot 3r_0^2\De r = 4 \pi r_0^2 \De r \end{align*}
    which is precisely the result of our linear approximation above.
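To see how tight the linear estimate is, we can compare it with the exact volume error for concrete numbers. The radius and percentage error below are our own choices, purely for illustration.

```python
import math

r0, eps = 2.0, 1.0          # hypothetical: radius 2 m, measured to within 1%
dr = r0 * eps / 100         # worst-case absolute error in the radius

def V(r):
    return 4/3 * math.pi * r**3

exact_dV  = V(r0 + dr) - V(r0)         # exact error in the computed volume
linear_dV = 4 * math.pi * r0**2 * dr   # linear approximation V'(r0) * dr

pct_exact  = 100 * exact_dV / V(r0)    # slightly more than 3*eps
pct_linear = 100 * linear_dV / V(r0)   # exactly 3*eps = 3.0 %
print(pct_linear, pct_exact)
```

As predicted above, the exact percentage error (3.0301%) exceeds the linear estimate (3%) by roughly a factor of \(\varepsilon/100\) of that estimate.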

Example 3.4.27. Percentage error inferring a height.

To compute the height \(h\) of a lamp post, the length \(s\) of the shadow of a two meter pole is measured. The pole is 6 m from the lamp post. If the length of the shadow was measured to be 4 m, with an error of at most one cm, find the height of the lamp post and estimate the percentage error in the height.
Solution We should first draw a picture
 19 
We get to reuse that nice lamp post picture from Example 3.2.4.
  • By similar triangles we see that
    \begin{align*} \frac{2}{s} &= \frac{h}{6+s} \end{align*}
    from which we can isolate \(h\) as a function of \(s\text{:}\)
    \begin{align*} h &= \frac{2(6+s)}{s} = \frac{12}{s} + 2 \end{align*}
  • The length of the shadow was measured to be \(s_0=4\) m. The corresponding height of the lamp post is
    \begin{align*} h_0 &= \frac{12}{4} + 2 = 5\text{ m} \end{align*}
  • If the error in the measurement of the length of the shadow was \(\De s\text{,}\) then the exact shadow length was \(s=s_0+\De s\) and the exact lamp post height is \(h=f(s_0+\De s)\text{,}\) where \(f(s)=\tfrac{12}{s}+2\text{.}\) The error in the computed lamp post height is
    \begin{gather*} \De h=h-h_0=f(s_0+\De s)-f(s_0) \end{gather*}
  • We can then make a linear approximation of this error using equation 3.4.20:
    \begin{align*} \De h &\approx f'(s_0)\De s =-\frac{12}{s_0^2}\De s =-\frac{12}{4^2}\De s \end{align*}
  • We are told that \(|\De s|\le\frac{1}{100}\) m. Consequently, approximately,
    \begin{gather*} |\De h|\le \frac{12}{4^2}\frac{1}{100}=\frac{3}{400} \end{gather*}
    The percentage error is then approximately
    \begin{align*} 100\frac{|\De h|}{h_0} & \le 100\frac{3}{400\times 5}=0.15\% \end{align*}
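The whole computation fits in a few lines; the snippet below is our own numeric restatement of the solution above.

```python
s0, ds_max = 4.0, 0.01          # shadow 4 m, measured to within 1 cm

h0 = 12 / s0 + 2                # computed lamp post height (metres)
dh_max = (12 / s0**2) * ds_max  # linearised worst-case height error

pct = 100 * dh_max / h0         # percentage error in the height
print(h0, dh_max, pct)
```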

Subsection 3.4.9 The Error in the Taylor Polynomial Approximations

Any time you make an approximation, it is desirable to have some idea of the size of the error you introduced. That is, we would like to know the difference \(R(x)\) between the original function \(f(x)\) and our approximation \(F(x)\text{:}\)
\begin{align*} R(x) &= f(x)-F(x). \end{align*}
Of course if we know \(R(x)\) exactly, then we could recover \(f(x) = F(x)+R(x)\) — so this is an unrealistic hope. In practice we would simply like to bound \(R(x)\text{:}\)
\begin{align*} |R(x)| &= |f(x)-F(x)| \leq M \end{align*}
where (hopefully) \(M\) is some small number. It is worth stressing that we do not need the tightest possible value of \(M\text{,}\) we just need a relatively easily computed \(M\) that isn’t too far off the true value of \(|f(x)-F(x)|\text{.}\)
We will now develop a formula for the error introduced by the constant approximation, equation 3.4.1 (developed back in Section 3.4.1)
\begin{align*} f(x)&\approx f(a) = T_0(x) & \text{$0^\mathrm{th}$ Taylor polynomial} \end{align*}
The resulting formula can be used to get an upper bound on the size of the error \(|R(x)|\text{.}\)
The main ingredient we will need is the Mean-Value Theorem (Theorem 2.13.5) — so we suggest you quickly revise it. Consider the following obvious statement:
\begin{align*} f(x) &= f(x) & \text{now some sneaky manipulations}\\ & = f(a) + (f(x)-f(a))\\ &= \underbrace{f(a)}_{=T_0(x)} + (f(x)-f(a)) \cdot \underbrace{\frac{x-a}{x-a}}_{=1}\\ &= T_0(x) + \underbrace{\frac{f(x)-f(a)}{x-a}}_\text{looks familiar} \cdot (x-a) \end{align*}
Indeed, this equation is important in the discussion that follows, so we’ll highlight it
\begin{align*} f(x) &= T_0(x) + \frac{f(x)-f(a)}{x-a}\cdot (x-a) \end{align*}
The coefficient \(\dfrac{f(x)-f(a)}{x-a}\) of \((x-a)\) is the average slope of \(f(t)\) as \(t\) moves from \(t=a\) to \(t=x\text{.}\) We can picture this as the slope of the secant joining the points \((a,f(a))\) and \((x,f(x))\) in the sketch below.
As \(t\) moves from \(a\) to \(x\text{,}\) the instantaneous slope \(f'(t)\) keeps changing. Sometimes \(f'(t)\) might be larger than the average slope \(\tfrac{f(x)-f(a)}{x-a}\text{,}\) and sometimes \(f'(t)\) might be smaller than the average slope \(\tfrac{f(x)-f(a)}{x-a}\text{.}\) However, by the Mean-Value Theorem (Theorem 2.13.5), there must be some number \(c\text{,}\) strictly between \(a\) and \(x\text{,}\) for which \(f'(c)=\dfrac{f(x)-f(a)}{x-a}\) exactly.
Substituting this into formula 3.4.28 gives
\begin{align*} f(x) &= T_0(x) + f'(c)\cdot (x-a) & \text{for some } c \text{ strictly between } a \text{ and } x \end{align*}
Notice that this expression as it stands is not quite what we want. Let us massage this around a little more into a more useful form
\begin{align*} f(x) - T_0(x) &= f'(c)\cdot (x-a) & \text{for some } c \text{ strictly between } a \text{ and } x \end{align*}
Notice that the MVT doesn’t tell us the value of \(c\text{,}\) however we do know that it lies strictly between \(x\) and \(a\text{.}\) So if we can get a good bound on \(f'(c)\) on this interval then we can get a good bound on the error.

Example 3.4.31. Error in the approximation in 3.4.2.

Let us return to Example 3.4.2, and we’ll try to bound the error in our approximation of \(e^{0.1}\text{.}\)
  • Recall that \(f(x) = e^x\text{,}\) \(a=0\) and \(T_0(x) = e^0 = 1\text{.}\)
  • Then by equation 3.4.30
    \begin{align*} e^{0.1} - T_0(0.1) &= f'(c) \cdot (0.1 - 0) & \text{with $0 \lt c \lt 0.1$} \end{align*}
  • Now \(f'(c) = e^c\text{,}\) so we need to bound \(e^c\) on \((0,0.1)\text{.}\) Since \(e^c\) is an increasing function, we know that
    \begin{align*} e^0 & \lt f'(c) \lt e^{0.1} & \text{ when $0 \lt c \lt 0.1$} \end{align*}
    So one is tempted to write that
    \begin{align*} |e^{0.1} - T_0(0.1)| &= |R(0.1)| = |f'(c)| \cdot (0.1 - 0)\\ & \lt e^{0.1} \cdot 0.1 \end{align*}
    And while this is true, it is rather circular. We have just bounded the error in our approximation of \(e^{0.1}\) by \(\frac{1}{10}e^{0.1}\) — if we actually knew \(e^{0.1}\) then we wouldn’t need to estimate it!
  • While we don’t know \(e^{0.1}\) exactly, we do know
     20 
    Oops! Do we really know that \(e \lt 3\text{?}\) We haven’t proved it. We will do so soon.
    that \(1 = e^0 \lt e^{0.1} \lt e^1 \lt 3\text{.}\) This gives us
    \begin{gather*} |R(0.1)| \lt 3 \times 0.1 = 0.3 \end{gather*}
    That is — the error in our approximation of \(e^{0.1}\) is no greater than \(0.3\text{.}\) Recall that we don’t need the error exactly, we just need a good idea of how large it actually is.
  • In fact the real error here is
    \begin{align*} |e^{0.1} - T_0(0.1)| &=|e^{0.1} - 1| = 0.1051709\dots \end{align*}
    so we have over-estimated the error by a factor of 3.
But we can actually go a little further here — we can bound the error above and below. If we do not take absolute values, then since
\begin{align*} e^{0.1} - T_0(0.1) &= f'(c) \cdot 0.1 & \text{ and } 1 \lt f'(c) \lt 3 \end{align*}
we can write
\begin{align*} 1\times 0.1 \leq ( e^{0.1} - T_0(0.1) ) & \leq 3\times 0.1 \end{align*}
so
\begin{align*} T_0(0.1) + 0.1 &\leq e^{0.1} \leq T_0(0.1)+0.3\\ 1.1 &\leq e^{0.1} \leq 1.3 \end{align*}
So while the upper bound is weak, the lower bound is quite tight.
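These bounds can be confirmed directly (our own check, using `math.exp` as the reference value):

```python
import math

T0 = 1.0                # constant approximation of e^0.1 about a = 0
lower = T0 + 1 * 0.1    # from f'(c) = e^c > e^0 = 1
upper = T0 + 3 * 0.1    # from f'(c) = e^c < e < 3

actual = math.exp(0.1)  # 1.1051709...
print(lower, actual, upper)
```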
There are formulae similar to equation 3.4.29 that can be used to bound the error in our other approximations; all are based on generalisations of the MVT. The next one — for linear approximations — is
\begin{align*} f(x) & =\underbrace{f(a)+f'(a)(x-a)}_{=T_1(x)}+\half f''(c)(x-a)^2 & \text{for some } c \text{ strictly between } a \text{ and } x \end{align*}
which we can rewrite in terms of \(T_1(x)\text{:}\)
\begin{align*} f(x) - T_1(x) &= \half f''(c)\cdot (x-a)^2 & \text{for some } c \text{ strictly between } a \text{ and } x \end{align*}
It implies that the error that we make when we approximate \(f(x)\) by \(T_1(x) = f(a)+f'(a)\,(x-a)\) is exactly \(\half f''(c)\,(x-a)^2\) for some \(c\) strictly between \(a\) and \(x\text{.}\)
More generally
\begin{align*} f(x)=& \underbrace{f(a)\!+\!f'(a)\cdot(x\!-\!a)\!+\cdots+\!\frac{1}{n!}f^{(n)}(a)\cdot(x\!-\!a)^n}_{= T_n(x)} \!+\!\frac{1}{(n\!+\!1)!}f^{(n+1)}(c)\cdot (x\!-\!a)^{n+1} \end{align*}
for some \(c\) strictly between \(a\) and \(x\text{.}\) Again, rewriting this in terms of \(T_n(x)\) gives
\begin{align*} f(x) - T_n(x) &= \frac{1}{(n+1)!}f^{(n+1)}(c)\cdot (x-a)^{n+1} & \text{for some } c \text{ strictly between } a \text{ and } x \end{align*}
That is, the error introduced when \(f(x)\) is approximated by its Taylor polynomial of order \(n\) is precisely the last term of the Taylor polynomial of order \(n+1\text{,}\) but with the derivative evaluated at some point between \(a\) and \(x\text{,}\) rather than exactly at \(a\text{.}\) These error formulae are proven in the optional Section 3.4.10 later in this chapter.

Example 3.4.34. Approximate \(\sin 46^\circ\) and estimate the error.

Approximate \(\sin 46^\circ\) using Taylor polynomials about \(a=45^\circ\text{,}\) and estimate the resulting error.
Solution
  • Start by defining \(f(x) = \sin x\) and
    \begin{align*} a&=45^\circ=45\tfrac{\pi}{180} {\rm radians}& x&=46^\circ=46\tfrac{\pi}{180} {\rm radians}\\ x-a&=\tfrac{\pi}{180} {\rm radians} \end{align*}
  • The first few derivatives of \(f\) at \(a\) are
    \begin{align*} f(x)&=\sin x & f(a)&=\frac{1}{\sqrt{2}}\\ f'(x)&=\cos x & f'(a)&=\frac{1}{\sqrt{2}}\\ f''(x)&=-\sin x & f''(a)&=-\frac{1}{\sqrt{2}}\\ f^{(3)}(x)&=-\cos x & f^{(3)}(a)&=-\frac{1}{\sqrt{2}} \end{align*}
  • The constant, linear and quadratic Taylor approximations for \(\sin(x)\) about \(\frac{\pi}{4}\) are
    \begin{alignat*}{2} T_0(x) &= f(a) &&= \frac{1}{\sqrt{2}}\\ T_1(x) &= T_0(x) + f'(a) \cdot(x\!-\!a) &&= \frac{1}{\sqrt{2}} + \frac{1}{\sqrt{2}}\left(x\! -\! \frac{\pi}{4} \right)\\ T_2(x) &= T_1(x)\! +\! \half f''(a) \cdot(x\!-\!a)^2 &&=\! \frac{1}{\sqrt{2}} \!+\! \frac{1}{\sqrt{2}}\left(x\!-\! \frac{\pi}{4} \right) \!-\! \frac{1}{2\sqrt{2}}\left(x\! -\! \frac{\pi}{4} \right)^2 \end{alignat*}
  • So the approximations for \(\sin 46^\circ\) are
    \begin{align*} \sin46^\circ &\approx T_0\left(\frac{46\pi}{180}\right) = \frac{1}{\sqrt{2}}\\ &=0.70710678\\ \sin46^\circ &\approx T_1\left(\frac{46\pi}{180}\right) = \frac{1}{\sqrt{2}} + \frac{1}{\sqrt{2}} \left(\frac{\pi}{180}\right)\\ &=0.71944812\\ \sin46^\circ&\approx T_2\left(\frac{46\pi}{180}\right) = \frac{1}{\sqrt{2}} + \frac{1}{\sqrt{2}} \left(\frac{\pi}{180}\right) - \frac{1}{2\sqrt{2}}\left(\frac{\pi}{180}\right)^2\\ &=0.71934042 \end{align*}
  • The errors in those approximations are (respectively)
    \begin{alignat*}{3} &{\rm error\ in\ 0.70710678}& &=f'(c)(x-a)& &=\cos c \cdot \left(\frac{\pi}{180}\right)\\ &{\rm error\ in\ 0.71944812}& &=\frac{1}{2} f''(c)(x-a)^2& &=-\frac{1}{2} \cdot \sin c\cdot \left(\frac{\pi}{180}\right)^2\\ &{\rm error\ in\ 0.71934042}& &=\frac{1}{3!}f^{(3)}(c)(x-a)^3& &=-\frac{1}{3!}\cdot \cos c \cdot \left(\frac{\pi}{180}\right)^3 \end{alignat*}
    In each of these three cases \(c\) must lie somewhere between \(45^\circ\) and \(46^\circ\text{.}\)
  • Rather than carefully estimating \(\sin c\) and \(\cos c\) for \(c\) in that range, we make use of a simpler (but coarser) bound. No matter what \(c\) is, we know that \(|\sin c|\le 1\) and \(|\cos c|\le 1\text{.}\) Hence
    \begin{alignat*}{3} &\big|{\rm error\ in\ 0.70710678}\big|& &\le \left(\frac{\pi}{180}\right)& & \lt 0.018\\ &\big|{\rm error\ in\ 0.71944812}\big|& &\le\frac{1}{2} \left(\frac{\pi}{180}\right)^2& & \lt 0.00015\\ &\big|{\rm error\ in\ 0.71934042}\big|& &\le \frac{1}{3!} \left(\frac{\pi}{180}\right)^3& & \lt 0.0000009 \end{alignat*}
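The snippet below (our own check, not part of the text) computes the three approximations, the actual errors, and the bounds above, and confirms that each actual error sits inside its bound.

```python
import math

a  = math.pi / 4
x  = 46 * math.pi / 180
dx = x - a
s  = 1 / math.sqrt(2)        # sin(pi/4) = cos(pi/4) = 1/sqrt(2)

T0 = s                        # constant approximation
T1 = T0 + s * dx              # linear approximation
T2 = T1 - 0.5 * s * dx**2     # quadratic approximation

actual = math.sin(x)
errors = [abs(actual - T) for T in (T0, T1, T2)]
bounds = [dx, dx**2 / 2, dx**3 / 6]   # from |sin c| <= 1 and |cos c| <= 1
print(errors)
print(bounds)
```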

Example 3.4.35. Showing \(e \lt 3\).

In Example 3.4.31 above we used the fact that \(e \lt 3\) without actually proving it. Let’s do so now.
  • Consider the linear approximation of \(e^x\) about \(a=0\text{.}\)
    \begin{align*} T_1(x) &= f(0) + f'(0)\cdot x = 1 + x \end{align*}
    So at \(x=1\) we have
    \begin{align*} e &\approx T_1(1) = 2 \end{align*}
  • The error in this approximation is
    \begin{align*} e^x - T_1(x) &= \frac{1}{2} f''(c) \cdot x^2 = \frac{e^c}{2} \cdot x^2 \end{align*}
    So at \(x=1\) we have
    \begin{align*} e - T_1(1) &= \frac{e^c}{2} \end{align*}
    where \(0 \lt c \lt 1\text{.}\)
  • Now since \(e^x\) is an increasing
     21 
    Since the derivative of \(e^x\) is \(e^x\) which is positive everywhere, the function is increasing everywhere.
    function, it follows that \(e^c \lt e\text{.}\) Hence
    \begin{align*} e - T_1(1) &= \frac{e^c}{2} \lt \frac{e}{2} \end{align*}
    Moving the \(\frac{e}{2}\) to the left hand side and the \(T_1(1)\) to the right hand side gives
    \begin{gather*} \frac{e}{2} \lt T_1(1) = 2 \end{gather*}
    So \(e \lt 4\text{.}\)
  • This isn’t as tight as we would like — so now do the same with the quadratic approximation with \(a=0\text{:}\)
    \begin{align*} e^x & \approx T_2(x) = 1 + x + \frac{x^2}{2} \end{align*}

    So when \(x=1\) we have

    \begin{align*} e & \approx T_2(1) = 1 + 1 + \frac{1}{2} = \frac{5}{2} \end{align*}
  • The error in this approximation is
    \begin{align*} e^x - T_2(x) &= \frac{1}{3!} f'''(c) \cdot x^3 = \frac{e^c}{6} \cdot x^3 \end{align*}
    So at \(x=1\) we have
    \begin{align*} e - T_2(1) &= \frac{e^c}{6} \end{align*}
    where \(0 \lt c \lt 1\text{.}\)
  • Again since \(e^x\) is an increasing function we have \(e^c \lt e\text{.}\) Hence
    \begin{align*} e - T_2(1) &= \frac{e^c}{6} \lt \frac{e}{6} \end{align*}
    That is
    \begin{gather*} \frac{5e}{6} \lt T_2(1) = \frac{5}{2} \end{gather*}
    So \(e \lt 3\) as required.

Example 3.4.36. More on \(e^x\).

We wrote down the general \(n^\mathrm{th}\) order Maclaurin polynomial approximation of \(e^x\) in Example 3.4.12 above.
  • Recall that
    \begin{align*} T_n(x) &= \sum_{k=0}^n \frac{1}{k!} x^k \end{align*}
  • The error in this approximation is (by equation 3.4.33)
    \begin{align*} e^x - T_n(x) &= \frac{1}{(n+1)!} e^c \cdot x^{n+1} \end{align*}
    where \(c\) is some number between \(0\) and \(x\text{.}\)
  • So setting \(x=1\) in this gives
    \begin{align*} e - T_n(1) &= \frac{1}{(n+1)!} e^c \end{align*}
    where \(0 \lt c \lt 1\text{.}\)
  • Since \(e^x\) is an increasing function we know that \(1 = e^0 \lt e^c \lt e^1 \lt 3\text{,}\) so the above expression becomes
    \begin{align*} \frac{1}{(n+1)!} \leq e - T_n(1) &= \frac{1}{(n+1)!} e^c \leq \frac{3}{(n+1)!} \end{align*}
  • So when \(n=9\) we have
    \begin{align*} \frac{1}{10!} \leq e - \left(1 + 1 + \frac{1}{2} +\cdots + \frac{1}{9!} \right) &\leq \frac{3}{10!} \end{align*}
  • Now \(1/10! \lt 3/10! \lt 10^{-6}\text{,}\) so the approximation of \(e\) by
    \begin{gather*} e \approx 1 + 1 + \frac{1}{2} +\cdots + \frac{1}{9!} = \frac{98641}{36288} = 2.718281\dots \end{gather*}
    is correct to 6 decimal places.
  • More generally we know that using \(T_n(1)\) to approximate \(e\) will have an error of at most \(\frac{3}{(n+1)!}\) — so it converges very quickly.
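The bound for \(n=9\) is easy to verify numerically (our own check, with `math.e` as the reference value):

```python
import math

# T_9(1) = sum over k = 0..9 of 1/k!
T9 = sum(1 / math.factorial(k) for k in range(10))

err = math.e - T9
lower, upper = 1 / math.factorial(10), 3 / math.factorial(10)
print(T9, err)   # err should land between 1/10! and 3/10!
```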

Example 3.4.37. Example 3.4.24 Revisited.

Recall
 22 
Now is a good time to go back and re-read it.
that in Example 3.4.24 (measuring the height of the pole), we used the linear approximation
\begin{align*} f(\theta_0+\De\theta)&\approx f(\theta_0)+f'(\theta_0)\De\theta \end{align*}
with \(f(\theta)=10\tan\theta\) and \(\theta_0=30\dfrac{\pi}{180}\) to get
\begin{align*} \De h &=f(\theta_0+\De\theta)-f(\theta_0)\approx f'(\theta_0)\De\theta \quad \text{which implies that} \quad \De\theta \approx \frac{\De h}{f'(\theta_0)} \end{align*}
  • While this procedure is fairly reliable, it did involve an approximation. So you could not guarantee, with 100% certainty, to your client’s lawyer that an accuracy of 10 cm was achieved.
  • On the other hand, if we use the exact formula 3.4.29, with the replacements \(x\rightarrow \theta_0+\De\theta\) and \(a\rightarrow\theta_0\)
    \begin{align*} f(\theta_0+\De\theta)&=f(\theta_0)+f'(c)\De\theta & \text{for some $c$ between $\theta_0$ and $\theta_0+\De\theta$} \end{align*}
    in place of the approximate formula 3.4.3, this legality is taken care of:
    \begin{align*} \De h &=f(\theta_0\!+\!\De\theta)-f(\theta_0) =f'(c)\De\theta \quad \text{for some $c$ between $\theta_0$ and $\theta_0+\De\theta$} \end{align*}
    We can clean this up a little more since in our example \(f'(\theta) = 10\sec^2\theta\text{.}\) Thus for some \(c\) between \(\theta_0\) and \(\theta_0 + \De\theta\text{:}\)
    \begin{gather*} |\De h| = 10 \sec^2(c) |\De \theta| \end{gather*}
  • Of course we do not know exactly what \(c\) is. But suppose that we know that the angle was somewhere between \(25^\circ\) and \(35^\circ\text{.}\) In other words suppose that, even though we don’t know precisely what our measurement error was, it was certainly no more than \(5^\circ\text{.}\)
  • Now on the range \(25^\circ \lt c \lt 35^\circ\text{,}\) \(\sec(c)\) is an increasing and positive function. Hence on this range
    \begin{gather*} 1.217\dots = \sec^2 25^\circ \leq \sec^2 c \leq \sec^2 35^\circ = 1.490\dots \lt 1.491 \end{gather*}
    So
    \begin{align*} 12.17 \cdot |\De \theta| &\leq |\De h| = 10 \sec^2(c) \cdot |\De \theta| \leq 14.91 \cdot | \De \theta| \end{align*}
  • Since we require \(|\De h| \lt 0.1\text{,}\) we need \(14.91 |\De \theta| \lt 0.1\text{,}\) that is
    \begin{gather*} |\De \theta| \lt \frac{0.1}{14.91} = 0.0067\dots \end{gather*}
    So we must measure angles with an error of no more than \(0.0067\) radians — which is
    \begin{gather*} \frac{180}{\pi} \cdot 0.0067 = 0.38^\circ. \end{gather*}
    Hence a measurement error of \(0.38^\circ\) or less is acceptable.
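The \(\sec^2\) bounds and the resulting angle tolerance can be reproduced directly; this check is ours, not part of the text.

```python
import math

sec2 = lambda t: 1 / math.cos(t)**2
lo, hi = sec2(math.radians(25)), sec2(math.radians(35))   # 1.217..., 1.490...

# |Delta h| < 0.1 requires |Delta theta| < 0.1 / (10 sec^2 35 degrees)
dtheta = 0.1 / (10 * hi)
print(lo, hi, math.degrees(dtheta))   # tolerance of about 0.38 degrees
```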

Subsection 3.4.10 (Optional) — Derivation of the Error Formulae

In this section we will derive the formula for the error that we gave in equation 3.4.33 — namely
\begin{align*} R_n(x) = f(x) - T_n(x) &= \frac{1}{(n+1)!}f^{(n+1)}(c)\cdot (x-a)^{n+1} \end{align*}
for some \(c\) strictly between \(a\) and \(x\text{,}\) and where \(T_n(x)\) is the \(n^\mathrm{th}\) order Taylor polynomial approximation of \(f(x)\) about \(x=a\text{:}\)
\begin{align*} T_n(x) &= \sum_{k=0}^n \frac{1}{k!} f^{(k)}(a)\cdot (x-a)^k. \end{align*}
Recall that we have already proved a special case of this formula for the constant approximation using the Mean-Value Theorem (Theorem 2.13.5). To prove the general case we need the following generalisation
 23 
It is not a terribly creative name for the generalisation, but it is an accurate one.
of that theorem: if \(F(x)\) and \(G(x)\) are continuous on \(a\le x\le b\) and differentiable on \(a \lt x \lt b\text{,}\) with \(G'(x)\neq 0\) for all \(a \lt x \lt b\text{,}\) then there is a number \(c\) strictly between \(a\) and \(b\) so that
\begin{gather*} \frac{F(b)-F(a)}{G(b)-G(a)} = \frac{F'(c)}{G'(c)} \end{gather*}
Notice that setting \(G(x) = x\) recovers the original Mean-Value Theorem. It turns out that this theorem is not too difficult to prove from the MVT using some sneaky algebraic manipulations:

Proof.

  • First we construct a new function \(h(x)\) as a linear combination of \(F(x)\) and \(G(x)\) so that \(h(a)=h(b)=0\text{.}\) Some experimentation yields
    \begin{gather*} h(x)=\big[F(b)-F(a)\big]\cdot \big[G(x)-G(a)\big]- \big[G(b)-G(a)\big] \cdot \big[F(x)-F(a)\big] \end{gather*}
  • Since \(h(a)=h(b)=0\text{,}\) the Mean-Value theorem (actually Rolle’s theorem) tells us that there is a number \(c\) obeying \(a \lt c \lt b\) such that \(h'(c)=0\text{:}\)
    \begin{align*} h'(x) &= \big[F(b)-F(a)\big] \cdot G'(x) - \big[G(b)-G(a)\big] \cdot F'(x) & \text{ so}\\ 0 &= \big[F(b)-F(a)\big] \cdot G'(c) - \big[G(b)-G(a)\big] \cdot F'(c) \end{align*}
    Now move the \(G'(c)\) terms to one side and the \(F'(c)\) terms to the other:
    \begin{align*} \big[F(b)-F(a)\big] \cdot G'(c) &= \big[G(b)-G(a)\big] \cdot F'(c). \end{align*}
  • Since we have \(G'(x) \neq 0\text{,}\) we know that \(G'(c) \neq 0\text{.}\) Further the Mean-Value theorem ensures
     24 
    Otherwise if \(G(a)=G(b)\) the MVT tells us that there is some point \(c\) between \(a\) and \(b\) so that \(G'(c)=0\text{.}\)
    that \(G(a) \neq G(b)\text{.}\) Hence we can move terms about to get
    \begin{align*} \big[F(b)-F(a)\big] &= \big[G(b)-G(a)\big] \cdot \frac{F'(c)}{G'(c)}\\ \frac{F(b)-F(a)}{G(b)-G(a)} &= \frac{F'(c)}{G'(c)} \end{align*}
    as required.
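The generalised MVT can also be checked numerically for a concrete pair of functions. The pair \(F(x)=x^3\text{,}\) \(G(x)=x^2\) on \([1,2]\) below is our own choice; since \(F'/G' = \tfrac{3x}{2}\) is increasing, bisection finds the promised \(c\text{.}\)

```python
# Numeric check of the generalised MVT for a sample pair of functions
# (our own choice, not from the text): F(x) = x^3, G(x) = x^2 on [1, 2].
a, b = 1.0, 2.0
F,  G  = lambda x: x**3,   lambda x: x**2
Fp, Gp = lambda x: 3*x**2, lambda x: 2*x

ratio = (F(b) - F(a)) / (G(b) - G(a))   # (8 - 1)/(4 - 1) = 7/3

# Solve F'(c)/G'(c) = ratio; here F'/G' = 3c/2 is increasing, so bisect.
lo, hi = a, b
for _ in range(60):
    mid = (lo + hi) / 2
    if Fp(mid) / Gp(mid) < ratio:
        lo = mid
    else:
        hi = mid
c = (lo + hi) / 2
print(c)   # c = 14/9 = 1.555..., strictly between a and b
```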
Armed with the above theorem we can now move on to the proof of the Taylor remainder formula.

Proof of equation 3.4.33.

We begin by proving the remainder formula for \(n=1\text{.}\) That is
\begin{align*} f(x) - T_1(x) &= \frac{1}{2}f''(c) \cdot(x-a)^2 \end{align*}
  • Start by setting
    \begin{align*} F(x) &= f(x)-T_1(x) & G(x) &= (x-a)^2 \end{align*}
    Notice that, since \(T_1(a)=f(a)\) and \(T'_1(x) = f'(a)\text{,}\)
    \begin{align*} F(a) &= 0 & G(a)&=0\\ F'(x) &= f'(x)-f'(a) & G'(x) &= 2(x-a) \end{align*}
  • Now apply the generalised MVT with \(b=x\text{:}\) there exists a point \(q\) between \(a\) and \(x\) such that
    \begin{align*} \frac{F(x)-F(a)}{G(x)-G(a)} &= \frac{F'(q)}{G'(q)}\\ \frac{F(x)-0}{G(x) - 0} &= \frac{f'(q)-f'(a)}{2(q-a)}\\ 2 \cdot \frac{F(x)}{G(x)} &= \frac{f'(q)-f'(a)}{q-a} \end{align*}
  • Consider the right-hand side of the above equation and set \(g(x) = f'(x)\text{.}\) Then we have the term \(\frac{g(q)-g(a)}{q-a}\) — this is exactly the form needed to apply the MVT. So now apply the standard MVT to the right-hand side of the above equation — there is some \(c\) between \(q\) and \(a\) so that
    \begin{align*} \frac{f'(q)-f'(a)}{q-a} &= \frac{g(q)-g(a)}{q-a} = g'(c) = f''(c) \end{align*}
    Notice that here we have assumed that \(f''(x)\) exists.
  • Putting this together we have that
    \begin{align*} 2 \cdot \frac{F(x)}{G(x)} &= \frac{f'(q)-f'(a)}{q-a} = f''(c)\\ 2 \frac{f(x)-T_1(x)}{(x-a)^2} &= f''(c)\\ f(x) - T_1(x) &= \frac{1}{2!} f''(c) \cdot (x-a)^2 \end{align*}
    as required.
Oof! We have now proved the case \(n=1\) (and we did \(n=0\) earlier).
To proceed — assume we have proved our result for \(n=1,2,\cdots, k\text{.}\) We realise that we haven’t done this yet, but bear with us. Using that assumption we will prove the result is true for \(n=k+1\text{.}\) Once we have done that, then
  • we have proved the result is true for \(n=1\text{,}\) and
  • we have shown if the result is true for \(n=k\) then it is true for \(n=k+1\)
Hence it must be true for all \(n \geq 1\text{.}\) This style of proof is called mathematical induction. You can think of the process as something like climbing a ladder:
  • prove that you can get onto the ladder (the result is true for \(n=1\)), and
  • if I can stand on the current rung, then I can step up to the next rung (if the result is true for \(n=k\) then it is also true for \(n=k+1\))
Hence I can climb as high as I like.
  • Let \(k \gt 0\) and assume we have proved
    \begin{align*} f(x) - T_k(x) &= \frac{1}{(k+1)!} f^{(k+1)}(c) \cdot (x-a)^{k+1} \end{align*}
    for some \(c\) between \(a\) and \(x\text{.}\)
  • Now set
    \begin{align*} F(x) &= f(x) - T_{k+1}(x) & G(x) &= (x-a)^{k+2}\\ \end{align*}

    and notice that, since \(T_{k+1}(a)=f(a)\text{,}\)

    \begin{align*} F(a) &= f(a)-T_{k+1}(a)=0 & G(a) &= 0 & G'(x) &= (k+2)(x-a)^{k+1} \end{align*}
    and apply the generalised MVT with \(b=x\text{:}\) hence there exists a \(q\) between \(a\) and \(x\) so that
    \begin{align*} \frac{F(x)-F(a)}{G(x)-G(a)} &= \frac{F'(q)}{G'(q)} &\text{which becomes}\\ \frac{F(x)}{(x-a)^{k+2}} &= \frac{F'(q)}{(k+2)(q-a)^{k+1}} & \text{rearrange}\\ F(x) &= \frac{(x-a)^{k+2}}{(k+2)(q-a)^{k+1}} \cdot F'(q) \end{align*}
  • We now examine \(F'(q)\text{.}\) First carefully differentiate \(F(x)\text{:}\)
    \begin{align*} F'(x) &= \diff{}{x} \bigg[f(x) - \bigg( f(a) + f'(a)(x-a) + \frac{1}{2} f''(a)(x-a)^2 + \cdots \\ \amp\hskip2.5in+ \frac{1}{(k+1)!}f^{(k+1)}(a)(x-a)^{k+1} \bigg) \bigg]\\ &= f'(x) - \bigg( f'(a) + \frac{2}{2} f''(a)(x-a) + \frac{3}{3!} f'''(a)(x-a)^2 + \cdots \\ \amp\hskip2.5in+ \frac{k+1}{(k+1)!}f^{(k+1)}(a) (x-a)^{k} \bigg)\\ &= f'(x) - \bigg( f'(a) + f''(a)(x-a) + \frac{1}{2} f'''(a)(x-a)^2 +\cdots \\ \amp\hskip2.5in+ \frac{1}{k!}f^{(k+1)}(a)(x-a)^{k} \bigg) \end{align*}
    Now notice that if we set \(f'(x) = g(x)\) then this becomes
    \begin{align*} F'(x) &= g(x) - \bigg( g(a) + g'(a)(x-a) + \frac{1}{2} g''(a)(x-a)^2 + \cdots \\ \amp\hskip2.5in+ \frac{1}{k!}g^{(k)}(a)(x-a)^{k} \bigg) \end{align*}
    So \(F'(x)\) is then exactly the remainder formula but for an order \(k\) approximation to the function \(g(x) = f'(x)\text{.}\)
  • Hence the function \(F'(q)\) is the remainder when we approximate \(f'(q)\) with an order \(k\) Taylor polynomial. The remainder formula (which, by assumption, we have already proved for \(n=k\)) then tells us that there is a number \(c\) between \(a\) and \(q\) so that
    \begin{align*} F'(q) &= g(q) - \bigg( g(a) + g'(a)(q-a) + \frac{1}{2} g''(a)(q-a)^2 + \cdots \\ \amp\hskip2.5in + \frac{1}{k!}g^{(k)}(a)(q-a)^{k} \bigg)\\ &= \frac{1}{(k+1)!} g^{(k+1)}(c) (q-a)^{k+1} = \frac{1}{(k+1)!} f^{(k+2)}(c)(q-a)^{k+1} \end{align*}
    Notice that here we have assumed that \(f^{(k+2)}(x)\) exists.
  • Now substitute this back into our equation above
    \begin{align*} F(x) &= \frac{(x-a)^{k+2}}{(k+2)(q-a)^{k+1}} \cdot F'(q)\\ &= \frac{(x-a)^{k+2}}{(k+2)(q-a)^{k+1}} \cdot \frac{1}{(k+1)!} f^{(k+2)}(c)(q-a)^{k+1}\\ &= \frac{1}{(k+2)(k+1)!} \cdot f^{(k+2)}(c) \cdot \frac{(x-a)^{k+2}(q-a)^{k+1}}{(q-a)^{k+1}}\\ &= \frac{1}{(k+2)!} \cdot f^{(k+2)}(c) \cdot(x-a)^{k+2} \end{align*}
    as required, since \(F(x)=f(x)-T_{k+1}(x)\text{.}\)
So we now know that
  • if, for some \(k\text{,}\) the remainder formula (with \(n=k\)) is true for all \(k\) times differentiable functions,
  • then the remainder formula is true (with \(n=k+1\)) for all \(k+1\) times differentiable functions.
Repeatedly applying this for \(k=1,2,3,4,\cdots\) (and recalling that we have shown the remainder formula is true when \(n=0,1\)) gives equation 3.4.33 for all \(n=0,1,2,\cdots\text{.}\)
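The remainder formula just established can also be checked numerically. The sketch below is illustrative only (the helper `taylor_sin` and the choices \(f(x)=\sin x\text{,}\) \(a=0\) are ours, not from the text): since every derivative of \(\sin x\) is bounded by \(1\text{,}\) equation 3.4.33 guarantees \(|f(x)-T_n(x)| \le \frac{|x-a|^{n+1}}{(n+1)!}\text{,}\) and the code verifies this for several orders.

```python
from math import sin, factorial

def taylor_sin(x, n):
    """Order-n Maclaurin polynomial of sin, evaluated at x.

    Only odd powers appear, with coefficients (-1)^((k-1)/2) / k!.
    """
    total = 0.0
    for k in range(n + 1):
        if k % 2 == 1:
            total += (-1) ** ((k - 1) // 2) * x ** k / factorial(k)
    return total

x = 0.5
for n in range(1, 6):
    error = abs(sin(x) - taylor_sin(x, n))
    # |f^(n+1)(c)| <= 1 for f = sin, so the remainder formula gives:
    bound = abs(x) ** (n + 1) / factorial(n + 1)
    assert error <= bound
```

Running this for other values of \(x\) and \(a\) (after adjusting the polynomial accordingly) gives the same picture: the actual error always sits below the bound from equation 3.4.33.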

Subsection 3.4.11 Exercises

Exercises for § 3.4.1

Exercises — Stage 1.
1.
The graph below shows three curves. The black curve is \(y=f(x)\text{,}\) the red curve is \(y=g(x)=1+2\sin(1+x)\text{,}\) and the blue curve is \(y=h(x)=0.7\text{.}\) If you want to estimate \(f(0)\text{,}\) what might cause you to use \(g(0)\text{?}\) What might cause you to use \(h(0)\text{?}\)
Exercises — Stage 2.
In this and following sections, we will ask you to approximate the value of several constants, such as \(\log(0.93)\text{.}\) A valid question to consider is why we would ask for approximations of these constants that take lots of time, and are less accurate than what you get from a calculator.
One answer to this question is historical: people were approximating logarithms before they had calculators, and these are some of the ways they did that. Pretend you’re on a desert island without any of your usual devices and that you want to make a number of quick and dirty approximate evaluations.
Another reason to make these approximations is technical: how does the calculator get such a good approximation of \(\log(0.93)\text{?}\) The techniques you will learn later on in this chapter give very accurate formulas for approximating functions like \(\log x\) and \(\sin x\text{,}\) which are sometimes used in calculators.
A third reason to make simple approximations of expressions that a calculator could evaluate is to provide a reality check. If you have a ballpark guess for your answer, and your calculator gives you something wildly different, you know to double-check that you typed everything in correctly.
For now, questions like Question 3.4.11.2 through Question 3.4.11.4 are simply for you to practice the fundamental ideas we’re learning.
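The "technical" answer above can be made concrete. The snippet below is an illustrative aside, not one of the exercises; it assumes the standard Maclaurin polynomial for \(\log(1+t)\text{,}\) namely \(t - \frac{t^2}{2} + \frac{t^3}{3} - \cdots \pm \frac{t^n}{n}\text{,}\) which is one way a machine could approximate \(\log(0.93)\) (take \(t=-0.07\)).

```python
from math import log

def log1p_poly(t, n):
    """Order-n Maclaurin polynomial of log(1+t)."""
    return sum((-1) ** (k + 1) * t ** k / k for k in range(1, n + 1))

# Higher orders get closer to the calculator's log(0.93):
t = -0.07
for n in (1, 2, 5, 10):
    approx = log1p_poly(t, n)
    print(n, approx, abs(approx - log(0.93)))
```

Even the order-2 polynomial already agrees with \(\log(0.93)\) to about four decimal places, which is the sort of "quick and dirty" accuracy the desert-island scenario asks for.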
2.
Use a constant approximation to estimate the value of \(\log(x)\) when \(x=0.93\text{.}\) Sketch the curve \(y=f(x)\) and your constant approximation.
(Remember that in CLP-1 we use \(\log x\) to mean the natural logarithm of \(x\text{,}\) \(\log_e x\text{.}\))
3.
Use a constant approximation to estimate \(\arcsin(0.1)\text{.}\)
4.
Use a constant approximation to estimate \(\sqrt{3}\tan(1)\text{.}\)
Exercises — Stage 3.
5.
Use a constant approximation to estimate the value of \(10.1^3\text{.}\) Your estimation should be something you can calculate in your head.

Exercises for § 3.4.2

Exercises — Stage 1.
1.
Suppose \(f(x)\) is a function, and we calculated its linear approximation near \(x=5\) to be \(f(x) \approx 3x-9\text{.}\)
  1. What is \(f(5)\text{?}\)
  2. What is \(f'(5)\text{?}\)
  3. What is \(f(0)\text{?}\)
2.
The curve \(y=f(x)\) is shown below. Sketch the linear approximation of \(f(x)\) about \(x=2\text{.}\)
3.
What is the linear approximation of the function \(f(x)=2x+5\) about \(x=a\text{?}\)
Exercises — Stage 2.
4.
Use a linear approximation to estimate \(\log(x)\) when \(x=0.93\text{.}\) Sketch the curve \(y=f(x)\) and your linear approximation.
(Remember that in CLP-1 we use \(\log x\) to mean the natural logarithm of \(x\text{,}\) \(\log_e x\text{.}\))
5.
Use a linear approximation to estimate \(\sqrt{5}\text{.}\)
6.
Use a linear approximation to estimate \(\sqrt[5]{30}\text{.}\)
Exercises — Stage 3.
7.
Use a linear approximation to estimate \(10.1^3\text{,}\) then compare your estimation with the actual value.
8.
Imagine \(f(x)\) is some function, and you want to estimate \(f(b)\text{.}\) To do this, you choose a value \(a\) and take an approximation (linear or constant) of \(f(x)\) about \(a\text{.}\) Give an example of a function \(f(x)\text{,}\) and values \(a\) and \(b\text{,}\) where the constant approximation gives a more accurate estimation of \(f(b)\) than the linear approximation.
9.
The function
\begin{equation*} L(x)=\frac{1}{4}x+\frac{4\pi-\sqrt{27}}{12} \end{equation*}
is the linear approximation of \(f(x)=\arctan x\) about what point \(x=a\text{?}\)

Exercises for § 3.4.3

Exercises — Stage 1.
1.
The quadratic approximation of a function \(f(x)\) about \(x=3\) is
\begin{equation*} f(x) \approx -x^2+6x \end{equation*}
What are the values of \(f(3)\text{,}\) \(f'(3)\text{,}\) \(f''(3)\text{,}\) and \(f'''(3)\text{?}\)
2.
Give a quadratic approximation of \(f(x)=2x+5\) about \(x=a\text{.}\)
Exercises — Stage 2.
3.
Use a quadratic approximation to estimate \(\log(0.93)\text{.}\)
(Remember that in CLP-1 we use \(\log x\) to mean the natural logarithm of \(x\text{,}\) \(\log_e x\text{.}\))
4.
Use a quadratic approximation to estimate \(\cos\left(\dfrac{1}{15}\right)\text{.}\)
5.
Calculate the quadratic approximation of \(f(x)=e^{2x}\) about \(x=0\text{.}\)
6.
Use a quadratic approximation to estimate \(5^{\tfrac{4}{3}}\text{.}\)
7.
Evaluate the expressions below.
  1. \(\displaystyle \ds\sum_{n=5}^{30} 1\)
  2. \(\displaystyle \ds\sum_{n=1}^{3} \left[ 2(n+3)-n^2 \right]\)
  3. \(\displaystyle \ds\sum_{n=1}^{10} \left[\frac{1}{n}-\frac{1}{n+1}\right]\)
  4. \(\displaystyle \ds\sum_{n=1}^{4}\frac{5\cdot 2^n}{4^{n+1}} \)
8.
Write the following in sigma notation:
  1. \(\displaystyle 1+2+3+4+5\)
  2. \(\displaystyle 2+4+6+8\)
  3. \(\displaystyle 3+5+7+9+11\)
  4. \(\displaystyle 9+16+25+36+49\)
  5. \(\displaystyle 9+4+16+5+25+6+36+7+49+8\)
  6. \(\displaystyle 8+15+24+35+48\)
  7. \(\displaystyle 3-6+9-12+15-18\)
Exercises — Stage 3.
9.
Use a quadratic approximation of \(f(x)=2\arcsin x\) about \(x=0\) to approximate \(f(1)\text{.}\) What number are you approximating?
10.
Use a quadratic approximation of \(e^x\) to estimate \(e\) as a decimal.
11.
Group the expressions below into collections of equivalent expressions.
  1. \(\displaystyle \ds\sum_{n=1}^{10} 2n\)
  2. \(\displaystyle \ds\sum_{n=1}^{10} 2^n\)
  3. \(\displaystyle \ds\sum_{n=1}^{10} n^2\)
  4. \(\displaystyle 2\ds\sum_{n=1}^{10} n\)
  5. \(\displaystyle 2\ds\sum_{n=2}^{11} (n-1)\)
  6. \(\displaystyle \ds\sum_{n=5}^{14} (n-4)^2\)
  7. \(\displaystyle \dfrac{1}{4}\ds\sum_{n=1}^{10}\left( \frac{4^{n+1}}{2^n}\right)\)

Exercises for § 3.4.4

Exercises — Stage 1.
1.
The 3rd order Taylor polynomial for a function \(f(x)\) about \(x=1\) is
\begin{equation*} T_3(x)=x^3-5x^2+9x \end{equation*}
What is \(f''(1)\text{?}\)
2.
The \(n\)th order Taylor polynomial for \(f(x)\) about \(x=5\) is
\begin{equation*} T_n(x)=\sum_{k=0}^{n} \frac{2k+1}{3k-9}(x-5)^k \end{equation*}
What is \(f^{(10)}(5)\text{?}\)
Exercises — Stage 3.
3.
The 4th order Maclaurin polynomial for \(f(x)\) is
\begin{equation*} T_4(x)=x^4-x^3+x^2-x+1 \end{equation*}
What is the third order Maclaurin polynomial for \(f(x)\text{?}\)
4.
The 4th order Taylor polynomial for \(f(x)\) about \(x=1\) is
\begin{equation*} T_4(x)=x^4+x^3-9 \end{equation*}
What is the third order Taylor polynomial for \(f(x)\) about \(x=1\text{?}\)
5.
For any even number \(n\text{,}\) suppose the \(n\)th order Taylor polynomial for \(f(x)\) about \(x=5\) is
\begin{equation*} \sum_{k=0}^{n/2} \frac{2k+1}{3k-9}(x-5)^{2k} \end{equation*}
What is \(f^{(10)}(5)\text{?}\)
6.
The third order Taylor polynomial for \(f(x)=x^3\left[2\log x - \dfrac{11}{3}\right]\) about \(x=a\) is
\begin{equation*} T_3(x)=-\frac{2}{3}\sqrt{e^3}+3ex-6\sqrt{e}x^2+x^3 \end{equation*}
What is \(a\text{?}\)

Exercises for § 3.4.5

Exercises — Stage 1.
1.
Give the 16th order Maclaurin polynomial for \(f(x)=\sin x+ \cos x\text{.}\)
2.
Give the 100th order Taylor polynomial for \(s(t)=4.9t^2-t+10\) about \(t=5\text{.}\)
3.
Write the \(n\)th order Taylor polynomial for \(f(x)=2^x\) about \(x=1\) in sigma notation.
4.
Find the 6th order Taylor polynomial of \(f(x)=x^2\log x+2x^2+5\) about \(x=1\text{,}\) remembering that \(\log x\) is the natural logarithm of \(x\text{,}\) \(\log_ex\text{.}\)
5.
Give the \(n\)th order Maclaurin polynomial for \(\dfrac{1}{1-x}\) in sigma notation.
Exercises — Stage 3.
6.
Calculate the \(3\)rd order Taylor Polynomial for \(f(x)=x^x\) about \(x=1\text{.}\)
7.
Use a 5th order Maclaurin polynomial for \(6\arctan x\) to approximate \(\pi\text{.}\)
8.
Write the \(100\)th order Taylor polynomial for \(f(x)=x(\log x -1)\) about \(x=1\) in sigma notation.
9.
Write the \((2n)\)th order Taylor polynomial for \(f(x)=\sin x\) about \(x=\dfrac{\pi}{4}\) in sigma notation.
10.
Estimate the sum below
\begin{equation*} 1+\frac{1}{2}+\frac{1}{3!}+\frac{1}{4!}+\cdots +\frac{1}{157!} \end{equation*}
by interpreting it as a Maclaurin polynomial.
11.
Estimate the sum below
\begin{equation*} \sum_{k=0}^{100}\frac{(-1)^k}{(2k)!}\left(\frac{5\pi}{4}\right)^{2k} \end{equation*}
by interpreting it as a Maclaurin polynomial.

Exercises for § 3.4.6

Exercises — Stage 1.
1.
In the picture below, label the following:
\begin{equation*} f(x) \qquad f\left(x+\Delta x\right) \qquad \Delta x \qquad \Delta y \end{equation*}
2.
At this point in the book, every homework problem takes you about 5 minutes. Use the terms you learned in this section to answer the question: if you spend 15 minutes more, how many more homework problems will you finish?
Exercises — Stage 2.
3.
Let \(f(x)=\arctan x\text{.}\)
  1. Use a linear approximation to estimate \(f(5.1)-f(5)\text{.}\)
  2. Use a quadratic approximation to estimate \(f(5.1)-f(5)\text{.}\)
4.
When diving off a cliff from \(x\) metres above the water, your speed as you hit the water is given by
\begin{equation*} s(x)=\sqrt{19.6x}\;\frac{\mathrm{m}}{\mathrm{sec}} \end{equation*}
Your last dive was from a height of 4 metres.
  1. Use a linear approximation of \(\Delta y\) to estimate how much faster you will be falling when you hit the water if you jump from a height of 5 metres.
  2. A diver makes three jumps: the first is from \(x\) metres, the second from \(x+\Delta x\) metres, and the third from \(x+2\Delta x\) metres, for some fixed positive values of \(x\) and \(\Delta x\text{.}\) Which is bigger: the increase in terminal speed from the first to the second jump, or the increase in terminal speed from the second to the third jump?

Exercises for § 3.4.7

Exercises — Stage 1.
1.
Let \(f(x)=7x^2-3x+4\text{.}\) Suppose we measure \(x\) to be \(x_0 = 2\) but that the real value of \(x\) is \(x_0+\Delta x\text{.}\) Suppose further that the error in our measurement is \(\Delta x = 1\text{.}\) Let \(\Delta y\) be the change in \(f(x)\) corresponding to a change of \(\Delta x \) in \(x_0\text{.}\) That is, \(\Delta y = f\left(x_0+\Delta x\right)-f(x_0)\text{.}\)
True or false: \(\Delta y = f'(2)(1)=25\)
2.
Suppose the exact amount you are supposed to tip is $5.83, but you approximate and tip $6. What is the absolute error in your tip? What is the percent error in your tip?
3.
Suppose \(f(x)=3x^2-5\text{.}\) If you measure \(x\) to be \(10\text{,}\) but its actual value is \(11\text{,}\) estimate the resulting error in \(f(x)\) using the linear approximation, and then the quadratic approximation.
Exercises — Stage 2.
4.
A circular pen is being built on a farm. The pen must contain \(A_0\) square metres, with an error of no more than 2%. Estimate the largest percentage error allowable on the radius.
5.
A circle with radius 3 has a sector cut out of it. It’s a smallish sector, no more than a quarter of the circle. You want to find out the area of the sector.
  1. Suppose the angle of the sector is \(\theta\text{.}\) What is the area of the sector?
  2. Unfortunately, you don’t have a protractor, only a ruler. So, you measure the chord made by the sector (marked \(d\) in the diagram above). What is \(\theta\) in terms of \(d\text{?}\)
  3. Suppose you measured \(d=0.7\text{,}\) but actually \(d=0.68\text{.}\) Estimate the absolute error in your calculation of the area removed.
6.
A conical tank, standing on its pointy end, has height 2 metres and radius 0.5 metres. Estimate the change in volume of the water in the tank associated with a change in the height of the water from 50 cm to 45 cm.
Exercises — Stage 3.
7.
A sample begins with precisely 1 \(\mu\)g of a radioactive isotope, and after 3 years is measured to have 0.9 \(\mu\)g remaining. If this measurement is correct to within 0.05 \(\mu\)g, estimate the corresponding accuracy of the half-life calculated using it.

Exercises for § 3.4.8

Exercises — Stage 1.
1.
Suppose \(f(x)\) is a function that we approximated by \(F(x)\text{.}\) Further, suppose \(f(10)=-3\text{,}\) while our approximation was \(F(10)=5\text{.}\) Let \(R(x)=f(x)-F(x)\text{.}\)
  1. True or false: \(|R(10)| \leq 7\)
  2. True or false: \(|R(10)| \leq 8\)
  3. True or false: \(|R(10)| \leq 9\)
  4. True or false: \(|R(10)| \leq 100\)
2.
Let \(f(x)=e^x\text{,}\) and let \(T_3(x)\) be the third order Maclaurin polynomial for \(f(x)\text{,}\)
\begin{equation*} T_3(x)=1+x+\frac{1}{2}x^2+\frac{1}{3!}x^3 \end{equation*}
Use Equation 3.4.33 to give a reasonable bound on the error \(|f(2)-T_3(2)|\text{.}\) Then, find the error \(|f(2)-T_3(2)|\) using a calculator.
3.
Let \(f(x)= 5x^3-24x^2+ex-\pi^4\text{,}\) and let \(T_5(x)\) be the fifth order Taylor polynomial for \(f(x)\) about \(x=1\text{.}\) Give the best bound you can on the error \(|f(37)-T_5(37)|\text{.}\)
4.
You and your friend both want to approximate \(\sin(33)\text{.}\) Your friend uses the first order Maclaurin polynomial for \(f(x)=\sin x\text{,}\) while you use the zeroth order (constant) Maclaurin polynomial for \(f(x)=\sin x\text{.}\) Who has a better approximation, you or your friend?
Exercises — Stage 2.
5.
Suppose a function \(f(x)\) has sixth derivative
\begin{equation*} f^{(6)}(x)=\dfrac{6!(2x-5)}{x+3}. \end{equation*}
Let \(T_5(x)\) be the 5th order Taylor polynomial for \(f(x)\) about \(x=11\text{.}\)
Give a bound for the error \(|f(11.5)-T_5(11.5)|\text{.}\)
6.
Let \(f(x)= \tan x\text{,}\) and let \(T_2(x)\) be the second order Taylor polynomial for \(f(x)\) about \(x=0\text{.}\) Give a reasonable bound on the error \(|f(0.1)-T_2(0.1)|\) using Equation 3.4.33.
7.
Let \(f(x)=\log (1-x)\text{,}\) and let \(T_5(x)\) be the fifth order Maclaurin polynomial for \(f(x)\text{.}\) Use Equation 3.4.33 to give a bound on the error \(|f\left(-\frac{1}{4}\right)-T_5\left(-\frac{1}{4}\right)|\text{.}\)
(Remember \(\log x=\log_ex\text{,}\) the natural logarithm of \(x\text{.}\))
8.
Let \(f(x)=\sqrt[5]{x}\text{,}\) and let \(T_3(x)\) be the third order Taylor polynomial for \(f(x)\) about \(x=32\text{.}\) Give a bound on the error \(|f(30)-T_3(30)|\text{.}\)
9.
Let
\begin{equation*} f(x)= \sin\left(\dfrac{1}{x}\right), \end{equation*}
and let \(T_1(x)\) be the first order Taylor polynomial for \(f(x)\) about \(x=\dfrac{1}{\pi}\text{.}\) Give a bound on the error \(|f(0.01)-T_1(0.01)|\text{,}\) using Equation 3.4.33. You may leave your answer in terms of \(\pi\text{.}\)
Then, give a reasonable bound on the error \(|f(0.01)-T_1(0.01)|\text{.}\)
10.
Let \(f(x)=\arcsin x\text{,}\) and let \(T_2(x)\) be the second order Maclaurin polynomial for \(f(x)\text{.}\) Give a reasonable bound on the error \(\left|f\left(\frac{1}{2}\right)-T_2\left(\frac{1}{2}\right)\right|\) using Equation 3.4.33. What is the exact value of the error \(\left|f\left(\frac{1}{2}\right)-T_2\left(\frac{1}{2}\right)\right|\text{?}\)
Exercises — Stage 3.
11.
Let \(f(x)=\log(x)\text{,}\) and let \(T_n(x)\) be the \(n\)th order Taylor polynomial for \(f(x)\) about \(x=1\text{.}\) You use \(T_n(1.1)\) to estimate \(\log (1.1)\text{.}\) If your estimation needs to have an error of no more than \(10^{-4}\text{,}\) what is an acceptable value of \(n\) to use?
12.
Give an estimation of \(\sqrt[7]{2200}\) using a Taylor polynomial. Your estimation should have an error of less than 0.001.
13.
Use Equation 3.4.33 to show that
\begin{equation*} \frac{4241}{5040}\leq\sin(1) \leq\frac{4243}{5040} \end{equation*}
14.
In this question, we use the remainder of a Maclaurin polynomial to approximate \(e\text{.}\)
  1. Write out the 4th order Maclaurin polynomial \(T_4(x)\) of the function \(e^x\text{.}\)
  2. Compute \(T_4(1)\text{.}\)
  3. Use your answer from 3.4.11.14.b to conclude \(\dfrac{326}{120} \lt e \lt \dfrac{325}{119}\text{.}\)

Further problems for § 3.4

Exercises — Stage 1.
1. (✳).
Consider a function \(f(x)\) whose third order Maclaurin polynomial is \(4 + 3x^2 + \frac{1}{2}x^3\text{.}\) What is \(f'(0)\text{?}\) What is \(f''(0)\text{?}\)
2. (✳).
Consider a function \(h(x)\) whose third order Maclaurin polynomial is \(1+4x-\dfrac{1}{3}x^2 + \dfrac{2}{3}x^3\text{.}\) What is \(h^{(3)}(0)\text{?}\)
3. (✳).
The third order Taylor polynomial of \(h(x)\) about \(x=2\) is \(3 + \dfrac{1}{2}(x-2) + 2(x-2)^3\text{.}\)
What is \(h'(2)\text{?}\) What is \(h''(2)\text{?}\)
Exercises — Stage 2.
4. (✳).
The function \(f(x)\) has the property that \(f(3)=2,\ f'(3)=4\) and \(f''(3)=-10\text{.}\)
  1. Use the linear approximation to \(f(x)\) centred at \(x=3\) to approximate \(f(2.98)\text{.}\)
  2. Use the quadratic approximation to \(f(x)\) centred at \(x=3\) to approximate \(f(2.98)\text{.}\)
5. (✳).
Use the tangent line to the graph of \(y = x^{1/3}\) at \(x = 8\) to find an approximate value for \(10^{1/3}\text{.}\) Is the approximation too large or too small?
6. (✳).
Estimate \(\sqrt{2}\) using a linear approximation.
7. (✳).
Estimate \(\sqrt[3]{26}\) using a linear approximation.
8. (✳).
Estimate \((10.1)^5\) using a linear approximation.
9. (✳).
Estimate \(\sin\left(\dfrac{101\pi}{100}\right)\) using a linear approximation. (Leave your answer in terms of \(\pi\text{.}\))
10. (✳).
Use a linear approximation to estimate \(\arctan(1.1)\text{,}\) using \(\arctan 1 = \dfrac{\pi}{4}\text{.}\)
11. (✳).
Use a linear approximation to estimate \((2.001)^3\text{.}\) Write your answer in the form \(n/1000\) where \(n\) is an integer.
12. (✳).
Using a suitable linear approximation, estimate \((8.06)^{2/3}\text{.}\) Give your answer as a fraction in which both the numerator and denominator are integers.
13. (✳).
Find the third-order Taylor polynomial for \(f(x)=(1 - 3x)^{-1/3}\) around \(x = 0\text{.}\)
14. (✳).
Consider a function \(f(x)\) which has \(f^{(3)}(x)=\dfrac{x}{22-x^2}\text{.}\) Show that when we approximate \(f(2)\) using its second order Taylor polynomial at \(a=1\text{,}\) the absolute value of the error is less than \(\frac{1}{50}=0.02\text{.}\)
15. (✳).
Consider a function \(f(x)\) which has \(f^{(4)}(x)=\dfrac{\cos(x^2)}{3-x}\text{.}\) Show that when we approximate \(f(0.5)\) using its third order Maclaurin polynomial, the absolute value of the error is less than \(\frac{1}{500}=0.002\text{.}\)
16. (✳).
Consider a function \(f(x)\) which has \(f^{(3)}(x)=\dfrac{e^{-x}}{8+x^2}\text{.}\) Show that when we approximate \(f(1)\) using its second order Maclaurin polynomial, the absolute value of the error is less than \(1/40\text{.}\)
17. (✳).
  1. By using an appropriate linear approximation for \(f(x)=x^{1/3}\text{,}\) estimate \(5^{2/3}\text{.}\)
  2. Improve your answer in 3.4.11.17.a by making a quadratic approximation.
  3. Obtain an error estimate for your answer in 3.4.11.17.a (not just by comparing with your calculator’s answer for \(5^{2/3}\)).
Exercises — Stage 3.
18.
The 4th order Maclaurin polynomial for \(f(x)\) is
\begin{equation*} T_4(x)=5x^2-9 \end{equation*}
What is the third order Maclaurin polynomial for \(f(x)\text{?}\)
19. (✳).
The equation \(y^4+xy=x^2-1\) defines \(y\) implicitly as a function of \(x\) near the point \(x=2,\ y=1\text{.}\)
  1. Use the tangent line approximation at the given point to estimate the value of \(y\) when \(x=2.1\text{.}\)
  2. Use the quadratic approximation at the given point to estimate the value of \(y\) when \(x=2.1\text{.}\)
  3. Make a sketch showing how the curve relates to the tangent line at the given point.
20. (✳).
The equation \(x^4+y+xy^4=1\) defines \(y\) implicitly as a function of \(x\) near the point \(x=-1, y=1\text{.}\)
  1. Use the tangent line approximation at the given point to estimate the value of \(y\) when \(x=-0.9\text{.}\)
  2. Use the quadratic approximation at the given point to get another estimate of \(y\) when \(x=-0.9\text{.}\)
  3. Make a sketch showing how the curve relates to the tangent line at the given point.
21. (✳).
Given that \(\log 10\approx 2.30259\text{,}\) estimate \(\log 10.3\) using a suitable tangent line approximation. Give an upper and lower bound for the error in your approximation by using a suitable error estimate.
22. (✳).
Consider \(f(x)=e^{e^x}\text{.}\)
  1. Give the linear approximation for \(f\) near \(x=0\) (call this \(L(x)\)).
  2. Give the quadratic approximation for \(f\) near \(x=0\) (call this \(Q(x)\)).
  3. Prove that \(L(x) \lt Q(x) \lt f(x)\) for all \(x \gt 0\text{.}\)
  4. Find an interval of length at most \(0.01\) that is guaranteed to contain the number \(e^{0.1}\text{.}\)