Subsection 1.6.3 Back to the Main Text
We already know from our work above that polynomials are continuous, and that  rational functions are continuous at all points in their domains — i.e. where  their denominators are non-zero. As we did for  limits, we will see that continuity interacts “nicely” with arithmetic. This  will allow us to construct complicated continuous functions from simpler  continuous building blocks (like polynomials).
But first, a few examples…
Example 1.6.4. Simple continuous and discontinuous functions.
Consider the functions drawn below.
[Figure: the graphs of the three functions \(f(x)\text{,}\) \(g(x)\) and \(h(x)\) defined next.]
These are
\begin{align*}
f(x) &= \begin{cases} x&x \lt 1  \\
x+2 & x\geq 1 \end{cases}\\
g(x) &= \begin{cases} 1/x^2& x\neq0  \\
0 & x=0\end{cases}\\
h(x) &= \begin{cases}\frac{x^3-x^2}{x-1} & x\neq 1 \\
0 & x=1 \end{cases}
\end{align*}
Determine where they are continuous and discontinuous:
- When \(x \lt 1\text{,}\) \(f(x)\) is a straight line (and so a polynomial), and so it is continuous at every point \(x \lt 1\text{.}\) Similarly when \(x \gt 1\) the function is a straight line and so it is continuous at every point \(x \gt 1\text{.}\) The only point which might be a discontinuity is at \(x=1\text{.}\) The one-sided limits there are different: \(\lim_{x \to 1^-} f(x) = 1\) while \(\lim_{x \to 1^+} f(x) = 3\text{.}\) Hence the limit at \(x=1\) does not exist and so the function is discontinuous at \(x=1\text{.}\)
But note that \(f(x)\) is continuous from one side — which?
 
- The middle case is much like the previous one. When \(x \neq 0\text{,}\) \(g(x)\) is a rational function and so is continuous everywhere on its domain (which is all reals except \(x=0\)). Thus the only point where \(g(x)\) might be discontinuous is at \(x=0\text{.}\) Neither of the one-sided limits exists at \(x=0\text{,}\) so the limit does not exist there. Hence the function is discontinuous at \(x=0\text{.}\)
 
- We have seen the function \(h(x)\) before. By the same reasoning as above, we know it is continuous except at \(x=1\text{,}\) which we must check separately.
By definition of \(h(x)\text{,}\) \(h(1) = 0\text{.}\) We must compare this to the limit as \(x  \to 1\text{.}\) We did this before.
\begin{align*}
\frac{x^3-x^2}{x-1} &= \frac{x^2(x-1)}{x-1} = x^2 \qquad \text{provided } x\neq 1
\end{align*}
So \(\lim_{x  \to 1} \frac{x^3-x^2}{x-1} = \lim_{x  \to 1} x^2 = 1\neq h(1)\text{.}\) Hence \(h\)  is discontinuous at \(x=1\text{.}\)
 
 
 
This example illustrates different sorts of discontinuities:
- The function \(f(x)\) has a “jump discontinuity” because the function “jumps” from one finite value on the left to another value on the right; see the numerical sketch after this list.
 
- The second function, \(g(x)\text{,}\) has an “infinite discontinuity” since \(\lim_{x \to 0} g(x) = +\infty\text{.}\)
 
- The third function, \(h(x)\text{,}\) has a “removable discontinuity” because we could make the function continuous at that point by redefining the function at that point, i.e. setting \(h(1)=1\text{.}\) That is
\begin{align*}
\text{new function }h(x) &= \begin{cases}
\frac{x^3-x^2}{x-1} & x\neq 1\\
1 & x=1
\end{cases}
\end{align*}
 
 
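To see the jump concretely, here is a small numerical sketch in Python. It is our own addition, not part of the text (the step sizes are arbitrary choices); it simply evaluates \(f\) on either side of \(x=1\text{.}\)

```python
# Numerical look at the jump discontinuity of f at x = 1.
def f(x):
    return x if x < 1 else x + 2

for n in range(1, 6):
    h = 10.0 ** (-n)
    # f(1 - h) tends to 1 while f(1 + h) tends to 3: the one-sided
    # limits disagree, so the limit at x = 1 does not exist.
    print(f"h = {h:.0e}: f(1-h) = {f(1 - h):.5f}, f(1+h) = {f(1 + h):.5f}")
```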
Showing a function is continuous can be a pain, but just as the limit laws help  us compute complicated limits in terms of simpler limits, we can use them to  show that complicated functions are continuous by breaking them into simpler  pieces.
Theorem 1.6.5. Arithmetic of continuity.
Let \(a,c \in \mathbb{R}\) and let \(f(x)\) and \(g(x)\) be functions that are  continuous at \(a\text{.}\) Then the following functions are also continuous at \(x=a\text{:}\)
- \(f(x) + g(x)\) and \(f(x) - g(x)\text{,}\)
- \(c f(x)\) and \(f(x) g(x)\text{,}\) and
- \(\frac{f(x)}{g(x)}\) provided \(g(a) \neq 0\text{.}\)
 
 Above we stated that polynomials and rational functions are  continuous (being careful about domains of rational functions —  we must avoid the denominators being zero) without making it a formal  statement. This is easily fixed…
Lemma 1.6.6.
Let \(c \in \mathbb{R}\text{.}\) The functions
\begin{align*}
f(x) &= x & g(x) &= c
\end{align*}
are continuous everywhere on the real line.
 This isn’t quite the result we wanted (that’s a couple of lines below) but it  is a small result that we can combine with the arithmetic of limits to get the  result we want. Such small helpful results are called “lemmas” and they will  arise more as we go along.
Now since we can obtain any polynomial and any rational function by carefully  adding, subtracting, multiplying and dividing the functions \(f(x)=x\) and  \(g(x)=c\text{,}\) the above lemma combines with the “arithmetic of continuity”  theorem to give us the result we want:
Theorem 1.6.7. Continuity of polynomials and rational functions.
Every polynomial is continuous everywhere. Similarly every rational function is continuous except where its denominator is zero (i.e. it is continuous on all of its domain).
With some more work this result can be extended to wider families of functions:
Theorem 1.6.8.
The following functions are continuous everywhere in their domains:
- polynomials, rational functions
- roots and powers
- trig functions and their inverses
- exponential and the logarithm
 
 We haven’t encountered inverse trigonometric functions, nor exponential  functions or logarithms, but we will see them in the next chapter. For the  moment, just file the information away.
Using a combination of the above results you can show that many complicated  functions are continuous except at a few points (usually where a denominator is  equal to zero).
Example 1.6.9. Continuity of \(\frac{\sin(x)}{2+\cos(x)}\).
 Where is the function \(f(x) = \frac{\sin(x)}{2+\cos(x)}\) continuous?
 
We just break things down into pieces and then put them back together keeping  track of where things might go wrong.
- The function is a ratio of two pieces — so check if the numerator is continuous, the denominator is continuous, and if the denominator might be zero.
 
- The numerator is \(\sin(x)\) which is “continuous on its domain” according to one of the above theorems. Its domain is all real numbers, so it is continuous everywhere. No problems here.
 
- The denominator is the sum of \(2\) and \(\cos(x)\text{.}\) Since \(2\) is a constant it is continuous everywhere. Similarly (we just checked things for the previous point) we know that \(\cos(x)\) is continuous everywhere. Hence the denominator is continuous.
 
- So we just need to check if the denominator is zero. One of the facts that we should know is that
\begin{gather*}
-1 \leq \cos(x) \leq 1
\end{gather*}
and so by adding 2 we get
\begin{gather*}
1 \leq 2+\cos(x) \leq 3
\end{gather*}
Thus no matter the value of \(x\text{,}\) \(2+\cos(x) \geq 1\text{,}\) and so it cannot be zero.
 
- So the numerator is continuous, and the denominator is continuous and nowhere zero; hence the function is continuous everywhere. (A quick numerical sanity check of the denominator bound appears below.)
 
  
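That numerical sanity check is our own addition, not part of the text; numpy and the sampling grid are arbitrary choices. It samples the denominator and confirms it never approaches zero.

```python
import numpy as np

# Sample 2 + cos(x) densely and confirm the bound 1 <= 2 + cos(x) <= 3.
x = np.linspace(-20.0, 20.0, 100_001)
denom = 2 + np.cos(x)
print(denom.min(), denom.max())    # roughly 1.0 and 3.0
assert denom.min() >= 1 - 1e-12    # the denominator is never zero
```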
If the function were changed to \(\ds \frac{\sin(x)}{x^2-5x+6}\) much the same reasoning can be used. Being a little terse we could answer with:
- Numerator and denominator are continuous.
 
- Since \(x^2-5x+6=(x-2)(x-3)\) the denominator is zero when \(x=2,3\text{.}\)
 
- So the function is continuous everywhere except possibly at \(x=2,3\text{.}\) In order to verify that the function really is discontinuous at those points, it suffices to verify that the numerator is non-zero at \(x=2,3\text{.}\) Indeed we know that \(\sin(x)\) is zero only when \(x = n\pi\) (for any integer \(n\)). Hence \(\sin(2),\sin(3) \neq 0\text{.}\) Thus the numerator is non-zero, while the denominator is zero, and hence \(x=2,3\) really are points of discontinuity.
 
Note that this example raises a subtle point about checking continuity when numerator and denominator are simultaneously zero. There are quite a few possible outcomes in this case and we need more sophisticated tools to adequately analyse the behaviour of functions near such points. We will return to this question later in the text after we have developed Taylor expansions (see Section 3.4).
 
So we know what happens when we add, subtract, multiply and divide; what about when we compose functions? Well, limits and compositions work nicely when things are continuous.
Theorem 1.6.10. Compositions and continuity.
If \(f\) is continuous at \(b\) and \(\ds \lim_{x \to a} g(x) = b\) then  \(\ds \lim_{x\to a} f(g(x)) = f(b)\text{.}\) I.e.
\begin{align*}
\lim_{x \to a} f\left( g(x) \right) &= f\left( \lim_{x \to a} g(x) \right)
\end{align*}
Hence if \(g\) is continuous at \(a\) and \(f\) is continuous at \(g(a)\) then the  composite function \((f \circ g)(x) = f(g(x))\) is continuous at \(a\text{.}\)
 So when we compose two continuous functions we get a new continuous function.
We can put this to use.
Example 1.6.11. Continuity of composed functions.
 
Where are the following functions continuous?
\begin{align*}
f(x) &= \sin\left( x^2 +\cos(x) \right)\\
g(x) &= \sqrt{\sin(x)}
\end{align*}
  
Our first step should be to break the functions down into pieces and study  them. When we put them back together we should be careful of dividing by zero,  or falling outside the domain.
- The function \(f(x)\) is the composition of \(\sin(x)\) with \(x^2+\cos(x)\text{.}\)
- These pieces, \(\sin(x), x^2, \cos(x)\text{,}\) are continuous everywhere.
- So the sum \(x^2+\cos(x)\) is continuous everywhere.
- And hence the composition of \(\sin(x)\) and \(x^2+\cos(x)\) is continuous everywhere.
 
The second function is a little trickier.
- The function \(g(x)\) is the composition of \(\sqrt{x}\) with \(\sin(x)\text{.}\)
- \(\sqrt{x}\) is continuous on its domain \(x \geq 0\text{.}\)
- \(\sin(x)\) is continuous everywhere, but it is negative in many places.
- In order for \(g(x)\) to be defined and continuous we must restrict \(x\) so that \(\sin(x) \geq 0\text{.}\)
- Recall the graph of \(\sin(x)\text{:}\) \(\sin(x)\geq 0\) when \(x\in[0,\pi]\) or \(x\in [2\pi,3\pi]\) or \(x\in[-2\pi,-\pi]\) or…. To be more precise, \(\sin(x)\) is non-negative when \(x \in [2n\pi,(2n+1)\pi]\) for any integer \(n\text{.}\)
- Hence \(g(x)\) is continuous when \(x \in [2n\pi,(2n+1)\pi]\) for any integer \(n\text{;}\) the small sketch after this list shows the restriction numerically.
 
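Here is that small sketch. It is our own addition (the sample points are arbitrary); where \(\sin(x) \lt 0\) the square root is undefined, so such \(x\) lie outside the domain of \(g\text{.}\)

```python
import math

# g(x) = sqrt(sin(x)) is defined exactly where sin(x) >= 0.
for x in [0.5, 2.0, 3.5, 7.0, -4.0]:
    s = math.sin(x)
    # math.sqrt raises ValueError on negative input, so guard it:
    g = math.sqrt(s) if s >= 0 else None
    print(f"x = {x:5.1f}: sin(x) = {s:+.3f} -> g(x) = {g}")
```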
 Continuous functions are very nice (mathematically speaking). Functions  from the “real world” tend to be continuous (though not always). The key  aspect that  makes them nice is the fact that they don’t jump about.
The absence of such jumps leads to the following theorem which, while it can be  quite confusing on first glance, actually says something very natural —  obvious even. It says, roughly speaking, that, as you draw the graph \(y=f(x)\) starting at  \(x=a\) and ending at \(x=b\text{,}\) \(y\) changes continuously from \(y=f(a)\) to \(y=f(b)\text{,}\) with no  jumps, and consequently \(y\) must take every value between \(f(a)\) and \(f(b)\) at least once.  We’ll start by just giving the precise statement and then we’ll explain it in detail.
Theorem 1.6.12. Intermediate value theorem (IVT).
Let \(a \lt b\) and let \(f\) be a function that is continuous at all points \(a\leq x  \leq b\text{.}\) If \(Y\) is any number between \(f(a)\) and \(f(b)\) then there exists some  number \(c \in [a,b]\) so that \(f(c) = Y\text{.}\)
Like the \(\epsilon-\delta\) definition of limits, we should break this theorem down into pieces. Before we do that, keep the following pictures in mind.
Now the break-down:
- Let \(a \lt b\) and let \(f\) be a function that is continuous at all points \(a\leq x \leq b\text{.}\) — This is setting the scene. We have \(a,b\) with \(a \lt b\) (we can safely assume these to be real numbers). Our function must be continuous at all points between \(a\) and \(b\text{.}\)
- if \(Y\) is any number between \(f(a)\) and \(f(b)\) — Now we need another number \(Y\) and the only restriction on it is that it lies between \(f(a)\) and \(f(b)\text{.}\) That is, if \(f(a)\leq f(b)\) then \(f(a) \leq Y \leq f(b)\text{.}\) Or if \(f(a) \geq f(b)\) then \(f(a) \geq Y \geq f(b)\text{.}\) So notice that \(Y\) could be equal to \(f(a)\) or \(f(b)\) — if we wanted to avoid that possibility, then we would normally explicitly say \(Y \neq f(a), f(b)\) or we would write that \(Y\) is strictly between \(f(a)\) and \(f(b)\text{.}\)
- there exists some number \(c \in [a,b]\) so that \(f(c) = Y\) — so if we satisfy all of the above conditions, then there has to be some real number \(c\) lying between \(a\) and \(b\) so that when we evaluate \(f(c)\) it is \(Y\text{.}\)
 
So that breaks down the theorem statement by statement, but what does it actually  mean?
- Draw any continuous function you like between \(a\) and \(b\) — it must be continuous.
- The function takes the value \(f(a)\) at \(x=a\) and \(f(b)\) at \(x=b\) — see the left-hand figure above.
- Now we can pick any \(Y\) that lies between \(f(a)\) and \(f(b)\) — see the middle figure above. The IVT tells us that there must be some \(x\)-value that when plugged into the function gives us \(Y\text{.}\) That is, there is some \(c\) between \(a\) and \(b\) so that \(f(c) = Y\text{.}\) We can also interpret this graphically; the IVT tells us that the horizontal straight line \(y=Y\) must intersect the graph \(y=f(x)\) at some point \((c,Y)\) with \(a\le c\le b\text{.}\)
- Notice that the IVT does not tell us how many such \(c\)-values there are, just that there is at least one of them. See the right-hand figure above. For that particular choice of \(Y\) there are three different \(c\) values so that \(f(c_1) = f(c_2) = f(c_3) = Y\text{.}\)
 
This theorem says that if \(f(x)\) is a continuous function on all of the  interval \(a \leq x \leq b\) then as \(x\) moves from \(a\) to \(b\text{,}\) \(f(x)\) takes every value between \(f(a)\) and \(f(b)\) at least once. To put this slightly  differently, if \(f\) were to avoid a value between \(f(a)\) and \(f(b)\) then \(f\)  cannot be continuous on \([a,b]\text{.}\)
 
It is not hard to convince yourself that the continuity of \(f\) is crucial to  the IVT. Without it one can quickly construct examples of functions that  contradict the theorem. See the figure below for a few non-continuous examples:
In the left-hand example we see that a discontinuous function can “jump” over the \(Y\)-value we have chosen, so there is no \(x\)-value that makes \(f(x)=Y\text{.}\) The right-hand example demonstrates why we need to be careful with the ends of the interval. In particular, a function must be continuous over the whole interval \([a,b]\text{,}\) including the end-points of the interval. If we only required the function to be continuous on \((a,b)\) (so strictly between \(a\) and \(b\)) then the function could “jump” over the \(Y\)-value at \(a\) or \(b\text{.}\)
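To make the left-hand situation concrete, here is a minimal sketch (our own addition, not from the text) of a discontinuous function that changes sign on \([-1,1]\) and yet never takes the value zero.

```python
# Without continuity the IVT can fail: step changes sign on [-1, 1]
# but never takes the value 0; the jump at x = 0 skips right over it.
def step(x):
    return -1.0 if x < 0 else 1.0

print(step(-1.0), step(1.0))  # -1.0 and 1.0: opposite signs, yet there
# is no c with step(c) = 0, so the IVT conclusion fails here.
```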
If you are still confused then here is a “real-world” example.
Example 1.6.13. The IVT in the “real world”.
You are climbing the Grouse-grind with a friend — call him Bob. Bob was eager and started at 9am. Bob, while very eager, is also very clumsy; he sprained his ankle somewhere along the path, stopped moving at 9:21am, and is now just sitting and enjoying the view. You get there late and start climbing at 10am and, being quite fit, you get to the top at 11am. The IVT implies that at some time between 10am and 11am you meet up with Bob.
You can translate this situation into the form of the IVT as follows. Let \(t\) be time and let \(a = \) 10am and \(b=\) 11am. Let \(g(t)\) be your distance along the trail. Hence \(g(a) = 0\) and \(g(b) = 2.9\,\text{km}\text{.}\) Since you are a mortal, your position along the trail is a continuous function — no helicopters or teleportation or… We have no idea where Bob is sitting, except that his position is somewhere between \(g(a)\) and \(g(b)\text{;}\) call this point \(Y\text{.}\) The IVT guarantees that there is some time \(c\) between \(a\) and \(b\) (so between 10am and 11am) with \(g(c) = Y\) (and at that time your position will be the same as Bob’s).
Aside from finding Bob sitting by the side of the trail, one of the most  important applications of the IVT is determining where a function is zero. For  quadratics we know (or should know) that
\begin{align*}
ax^2+bx+c &= 0 & \text{ when } x &= \frac{-b \pm \sqrt{b^2-4ac}}{2a}
\end{align*}
While the Babylonians could (mostly, but not quite) do the above, the corresponding formula for solving a cubic is uglier and that for a quartic is uglier still. One of the most famous results in mathematics demonstrates that no such formula exists for quintics or higher degree polynomials.
 
So even for polynomials we cannot, in general, write down explicit  formulae for their zeros and have to make do with numerical approximations —  i.e. write down the root as a decimal expansion to whatever precision we desire.  For more complicated functions we have no choice — there is no reason that the  zeros should be expressible as nice neat little formulas. At the same time,  finding the zeros of a function:
\begin{align*}
f(x) &= 0
\end{align*}
or solving equations of the form 
\begin{align*}
g(x) &= h(x)
\end{align*}
can be a crucial step in many mathematical proofs and applications. (Of course, solving \(g(x) = h(x)\) is the same as finding the zeros of \(f(x) = g(x) - h(x)\text{.}\))
 
For this reason there is a considerable body of mathematics which focuses just on finding the zeros of functions. The IVT provides a very simple way to “locate” the zeros of a function. In particular, if we know a continuous function \(f\) is negative at a point \(x=a\) and positive at another point \(x=b\text{,}\) then there must (by the IVT) be a point \(x=c\) between \(a\) and \(b\) where \(f(c)=0\text{.}\)
Consider the leftmost of the above figures. It depicts a continuous function that is negative at \(x=a\) and positive at \(x=b\text{.}\) So choose \(Y=0\) and apply the IVT — there must be some \(c\) with \(a \leq c \leq b\) so that \(f(c) = Y = 0\text{.}\) While this doesn’t tell us \(c\) exactly, it does give us bounds on the possible positions of at least one zero — there must be at least one \(c\) obeying \(a \le c \le b\text{.}\)
See the middle figure. To get better bounds we could test a point half-way between \(a\) and \(b\text{.}\) So set \(a' = \frac{a+b}{2}\text{.}\) In this example we see that \(f(a')\) is negative. Applying the IVT again tells us there is some \(c\) between \(a'\) and \(b\) so that \(f(c) = 0\text{.}\) Again — we don’t have \(c\) exactly, but we have halved the range of values it could take.
Look at the rightmost figure and do it again — test the point half-way between  \(a'\) and \(b\text{.}\) In this example we see that \(f(b')\) is positive. Applying the IVT  tells us that there is some \(c\) between \(a'\) and \(b'\) so that \(f(c) = 0\text{.}\) This  new range is a quarter of the length of the original. If we keep doing this  process the range will halve each time until we know that the zero is inside  some tiny range of possible values. This process is called the bisection method.
Consider the following zero-finding example.
Example 1.6.14. Show that \(f(x)=x-1+\sin(\pi x/2)\) has a zero.
 Show that the function \(f(x) = x-1+\sin(\pi x/2)\) has a zero  in \(0 \leq x \leq 1\text{.}\)
 This question has been set up nicely to lead us towards using the IVT;  we are  already given a nice interval on which to look. In general we might have to  test a few points and experiment a bit with a calculator before we can  start narrowing down a range.
 
Let us start by testing the endpoints of the interval we are given:
\begin{align*}
f(0) &= 0 - 1 + \sin(0) = -1  \lt  0\\
f(1) &= 1-1+\sin(\pi/2) = 1  \gt  0
\end{align*}
So we know a point where \(f\) is positive and one where it is negative. So by  the IVT there is a point in between where it is zero.
  
BUT in order to apply the IVT we have to show that the function is continuous, and we cannot simply write
“it is continuous”.
We need to explain to the reader why it is continuous. That is — we have to prove it.
 
So to write up our answer we can put something like the following —  keeping in mind we need to tell the reader what we are doing so they can follow along easily.
- We will use the IVT to prove that there is a zero in \([0,1]\text{.}\)
 
- First we must show that the function is continuous.
  - Since \(x-1\) is a polynomial it is continuous everywhere.
  - The function \(\sin(\pi x/2)\) is a trigonometric function and is also continuous everywhere.
  - The sum of two continuous functions is also continuous, so \(f(x)\) is continuous everywhere.
 
  
- Let \(a=0, b=1\text{,}\) then
\begin{align*}
f(0) &= 0 - 1 + \sin(0) = -1  \lt  0\\
f(1) &= 1-1+\sin(\pi/2) = 1  \gt  0
\end{align*}
 
- The function is negative at \(x=0\) and positive at \(x=1\text{.}\) Since the function is continuous we know there is a point \(c \in [0,1]\) so that \(f(c) = 0\text{.}\)
 
Notice that though we have not used full sentences in our explanation here, we  are still using words. Your mathematics, unless it is very straight-forward  computation, should contain words as well as symbols.
  The zero of this function is actually located at about \(x=0.4053883559\text{.}\)
The bisection method is really just the idea that we can keep repeating the above  reasoning (with a calculator handy). Each iteration will tell us the location of the zero  more precisely. The following example illustrates this.
Example 1.6.15. Using the bisection method.
 
Use the bisection method to find a zero of
\begin{align*}
f(x) &= x-1+\sin(\pi x/2)
\end{align*}
that lies between \(0\) and \(1\text{.}\)
  
So we start with the two points we worked out above:
- \(a=0, b=1\) where \(f(0) \lt 0\) and \(f(1) \gt 0\text{.}\)
- Test the point in the middle \(x = \frac{0+1}{2} = 0.5\)
\begin{align*}
f(0.5) &= 0.2071067812 \gt 0
\end{align*}
- So our new interval will be \([0,0.5]\) since the function is negative at \(x=0\) and positive at \(x=0.5\text{.}\)
Repeat
- \(a=0, b=0.5\) where \(f(0) \lt 0\) and \(f(0.5) \gt 0\text{.}\)
 
- Test the point in the middle \(x = \frac{0+0.5}{2} = 0.25\)
\begin{align*}
f(0.25) &= -0.3673165675  \lt  0
\end{align*}
 
- So our new interval will be \([0.25,0.5]\) since the function is negative at \(x=0.25\) and positive at \(x=0.5\text{.}\)
 
Repeat
- \(a=0.25, b=0.5\) where \(f(0.25) \lt 0\) and \(f(0.5) \gt 0\text{.}\)
 
- Test the point in the middle \(x = \frac{0.25+0.5}{2} = 0.375\)
\begin{align*}
f(0.375) &= -0.0694297669  \lt  0
\end{align*}
 
- So our new interval will be \([0.375,0.5]\) since the function is negative at \(x=0.375\) and positive at \(x=0.5\text{.}\)
 
Below is an illustration of what we have observed so far together with a plot of the  actual function.
[Figure: the bisection iterations so far, together with a plot of \(f(x)\text{.}\)]
And one final iteration:
- \(a=0.375, b=0.5\) where \(f(0.375) \lt 0\) and \(f(0.5) \gt 0\text{.}\)
 
- Test the point in the middle \(x = \frac{0.375+0.5}{2} = 0.4375\)
\begin{align*}
f(0.4375) &= 0.0718932843 \gt 0
\end{align*}
 
- So our new interval will be \([0.375,0.4375]\) since the function is negative at \(x=0.375\) and positive at \(x=0.4375\text{.}\)
 
So without much work we know the location of a zero inside a range of length  \(0.0625 = 2^{-4}\text{.}\) Each iteration will halve the length of the range and we  keep going until we reach the precision we need, though it is much easier to  program a computer to do it.
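Here is a minimal sketch of the whole procedure in Python. The code is our own addition, not part of the text; the function name bisect and the tolerance parameter tol are our choices. It automates exactly the halving argument used above.

```python
import math

def bisect(f, a, b, tol=1e-10):
    """Bisection: assumes f is continuous on [a, b] with f(a) and f(b)
    of opposite signs, so the IVT guarantees a zero in between."""
    fa = f(a)
    assert fa * f(b) < 0, "need a sign change on [a, b]"
    while b - a > tol:
        mid = (a + b) / 2
        fmid = f(mid)
        if fa * fmid <= 0:   # sign change (or exact zero) in [a, mid]
            b = mid
        else:                # otherwise the zero is in [mid, b]
            a, fa = mid, fmid
    return (a + b) / 2

f = lambda x: x - 1 + math.sin(math.pi * x / 2)
print(bisect(f, 0.0, 1.0))  # about 0.4053883559, as quoted earlier
```

Each pass through the loop halves the bracket, just as each hand iteration above did.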