Section 6.5 (Optional) Properties of limits
Subsection 6.5.1 (Optional) Some properties of limits of sequences
When we work with sequences, it is not convenient to prove sequence convergence for each and every sequence individually. We can make use of some more general properties of limits of sequences to simplify our work. You will have already seen some “limit laws” when you studied calculus. We will prove some similar results in this section.
Theorem 6.5.1. Basic properties of limits of sequences.
Let \((x_n)\) and \((y_n)\) be sequences so that
\begin{equation*}
\lim_{n \to \infty} x_n = a \qquad \text{ and } \qquad
\lim_{n \to \infty} y_n = b
\end{equation*}
Additionally let \(c,d \in \mathbb{R}\text{.}\) Then
The limit of a sequence is unique.
Linearity of limits: \(\ds
\lim _{n\to \infty }(c \cdot x_{n} + d \cdot y_{n})=c \cdot a+ d \cdot b
\text{.}\)
Product of limits: \(\ds \lim_{n\to \infty} \left(x_n \cdot y_n\right) = a \cdot b\text{.}\)
Reciprocal of limit: \(\ds\lim_{n\to\infty} \frac{1}{y_n} = \frac{1}{b}\) as long as \(b\neq 0\text{.}\)
Ratio of limits: \(\ds \lim_{n\to\infty} \frac{x_n}{y_n} = \frac{a}{b}\) as long as \(b\neq 0\text{.}\)
Notice that for the sequences \((1/y_n)\) and \((x_n/y_n)\) to be defined for all \(n\) we need \(y_n \neq 0\text{,}\) but we have not stated that in the theorem. This is because the condition \(y_n \to b \neq 0\) implies that when \(n\) is large enough we know that \(y_n \neq 0\) — this is a consequence of Lemma 6.5.5 below. This is enough to tell us that when \(n\) is large everything is defined, and, typically, we don’t worry about what happens when \(n\) is small.
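Before working through the proofs, the limit laws can be sanity-checked numerically. The following sketch is an illustration only, not a proof; the sequences \(x_n = 1 + 1/n\) and \(y_n = 2 - 1/n^2\) are our own illustrative choices.

```python
# Numerically sanity-check the limit laws of Theorem 6.5.1 for the
# illustrative sequences x_n = 1 + 1/n -> a = 1 and y_n = 2 - 1/n^2 -> b = 2.
def x(n): return 1 + 1/n
def y(n): return 2 - 1/n**2

a, b, c, d = 1, 2, 5, -3
n = 10**6  # a "large" index

# linearity: c*x_n + d*y_n should be near c*a + d*b
assert abs(c*x(n) + d*y(n) - (c*a + d*b)) < 1e-4
# product: x_n * y_n should be near a*b
assert abs(x(n)*y(n) - a*b) < 1e-4
# ratio: x_n / y_n should be near a/b (valid since b != 0)
assert abs(x(n)/y(n) - a/b) < 1e-4
```

Of course, checking one large \(n\) for one pair of sequences proves nothing; it only illustrates what the theorem asserts.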
Subsubsection 6.5.1.1 Uniqueness of limits
To prove the first property — uniqueness of limits — we need to do some scratch work to build up our intuition. A very standard approach to proving uniqueness is to assume that we have two objects satisfying the property and then show that those two things must actually be the same.
So, let \((x_n)\) be a convergent sequence and suppose that
\begin{equation*}
\lim_{n\to\infty}x_n=K \qquad \text{and also} \qquad
\lim_{n\to\infty}x_n=L
\end{equation*}
ie, there are two limits. Of course, we don’t want these limits to actually be different, even though we’ve labelled them by different variables. We want to show that they are the same, that is \(K=L\text{.}\) In other words,
\begin{equation*}
\left(\text{the limit is unique}\right) \equiv
\left(
(\lim_{n\to\infty}x_n=K) \land ( \lim_{n\to\infty}x_n=L)
\implies (K=L)
\right).
\end{equation*}
Assume the hypothesis is true. So
\begin{equation*}
\lim_{n\to\infty}x_n=K \qquad \text{and also} \qquad
\lim_{n\to\infty}x_n=L
\end{equation*}
and then we try to show that \(K=L\text{.}\) Intuitively this makes sense. Since \(\lim_{n\to\infty} x_n= K\text{,}\) we know that we can make \(x_n\) arbitrarily close to \(K\) by making \(n\) large enough. Similarly, we can make \(x_n\) arbitrarily close to \(L\text{.}\) The only way this can happen is if \(K\) and \(L\) are also arbitrarily close to each other. And the only way that can happen is if they are actually the same.
This is an important point that we will have to prove. Namely, we are claiming that if two numbers are arbitrarily close to each other, then they must be equal. Rewriting this with quantifiers gives
\begin{equation*}
(\forall \epsilon > 0, |K-L|\lt \epsilon) \implies (K=L).
\end{equation*}
At first glance this might look a little hard to prove, but think about its contrapositive:
\begin{equation*}
(K \neq L) \implies (\exists \epsilon > 0 \st |K-L| \geq \epsilon).
\end{equation*}
So if two numbers are different, then we can find some positive number \(\epsilon\) so that the distance between those two numbers is bigger. That doesn’t sound so bad. It is a useful result, so we’ll make it into a lemma.
Lemma 6.5.2.
Let \(K,L \in \mathbb{R}\text{.}\) If for every \(\epsilon \gt 0\) we have that \(|K-L| \lt \epsilon\text{,}\) then we must have that \(K=L\text{.}\)
Proof.
We prove the contrapositive. Let \(K,L \in \mathbb{R}\) so that \(K \neq L\text{.}\) Then set \(\epsilon = \frac{|K-L|}{2}\text{.}\) Since \(K \neq L\) we know that \(\epsilon \gt 0\text{.}\) Then we have that \(|K - L| = 2\epsilon \gt \epsilon\) and so the result holds.
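The witness used in this contrapositive argument is easy to check concretely. A small sketch (the helper name `separating_eps` is our own):

```python
def separating_eps(K, L):
    # The witness from the proof of Lemma 6.5.2: eps = |K - L| / 2
    # separates any two distinct reals.
    assert K != L
    return abs(K - L) / 2

K, L = 0.1, 0.1000001   # close, but distinct
eps = separating_eps(K, L)
assert eps > 0
assert abs(K - L) >= eps  # indeed |K - L| = 2*eps > eps
```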
Okay, to recap, we have assumed that \(x_n \to K\) and \(x_n \to L\text{.}\) This means that
for all \(\epsilon_K \gt 0\text{,}\) there is some \(N_K \in \mathbb{N}\) so that for all \(n \in \mathbb{N}\) if \(n \gt N_K\) then \(|x_n-K| \lt \epsilon_K\text{,}\) and
for all \(\epsilon_L \gt 0\text{,}\) there is some \(N_L \in \mathbb{N}\) so that for all \(n \in \mathbb{N}\) if \(n \gt N_L\) then \(|x_n-L| \lt \epsilon_L\text{.}\)
Notice that we have carefully used different symbols for the
\(\epsilon\) and
\(N\) to describe the convergence of
\(x_n \to K\) and
\(x_n \to L\text{.}\) We do this so that we are not accidentally assuming anything extra
about how
\(x_n\) converges to
\(K\) or
\(L\text{.}\) Now, this tells us that when
\(n\) is big enough — ie
\(n \gt \max\{N_K,N_L \}\text{,}\) that
\begin{equation*}
|x_n -K | \lt \epsilon_K \qquad \text{and} \qquad
|x_n -L | \lt \epsilon_L.
\end{equation*}
But how do we use this, and the lemma above, to tell us about the size of \(|K-L|\text{?}\)
There is a really nice trick using the
Theorem 5.4.6 and a little algebra. First, we add zero in a sneaky way that allows us to rewrite
\(K-L\) in terms of
\((K-x_n)\) and
\((L-x_n)\text{:}\)
\begin{equation*}
|K-L| = |K-L+0| = |K-L + \underbrace{(x_n-x_n)}_{=0}| = |(K-x_n) + (x_n-L)|
\end{equation*}
Now apply the triangle inequality:
\begin{equation*}
|K-L| = |(K-x_n) + (x_n-L)| \leq |K-x_n| + |x_n-L| = |x_n-K| + |x_n-L|
\end{equation*}
This gives us a way to bound the distance between \(K\) and \(L\) in terms of the distances between \(x_n\) and \(K\) and between \(x_n\) and \(L\text{.}\) But our assumption about the convergence of \(x_n\) gives us exactly that information. That is
\begin{equation*}
|K-L| \leq |x_n-K| + |x_n-L| \lt \epsilon_K + \epsilon_L
\end{equation*}
Now, given any
\(\epsilon\text{,}\) we can
choose
\(\epsilon_K = \epsilon_L = \frac{\epsilon}{2}\text{.}\) Then
since \(x_n \to K\text{,}\) we know that there is some \(N_K\) so that when \(n \gt N_K\text{,}\) we have that \(|x_n-K| \lt \frac{\epsilon}{2}\text{.}\)
Similarly, since \(x_n \to L\text{,}\) we know that there is some \(N_L\) so that when \(n \gt N_L\text{,}\) we have that \(|x_n-L| \lt \frac{\epsilon}{2}\text{.}\)
Then our reasoning above tells us that
\(|K-L|\lt \epsilon\) providing
\(n \gt \max\{N_K, N_L\}\text{.}\) And finally we can use
Lemma 6.5.2 to complete the result.
Oof!
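The add-zero trick and the triangle-inequality bound at the heart of the argument can themselves be spot-checked numerically. A minimal sketch, testing random values (an illustration, not part of the proof):

```python
import random

# Check |K - L| = |(K - x) + (x - L)| (adding zero) and the triangle
# inequality |K - L| <= |x - K| + |x - L| for many random triples.
random.seed(0)
for _ in range(1000):
    K, L, x = (random.uniform(-10, 10) for _ in range(3))
    # adding and subtracting x changes nothing (up to float rounding)
    assert abs(abs(K - L) - abs((K - x) + (x - L))) < 1e-9
    # triangle inequality (small slack for float rounding)
    assert abs(K - L) <= abs(x - K) + abs(x - L) + 1e-12
```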
Proof of uniqueness of limits.
Let \((x_n)\) be a convergent sequence. We will prove that its limit is unique. To do so we prove that if \(x_n \to K\) and \(x_n \to L\) then we must have that \(K=L\text{.}\)
So assume that \(x_n \to K\) and \(x_n \to L\text{,}\) and let \(\epsilon \gt 0\text{.}\)
Since \(x_n \to K\text{,}\) there is some \(N_K \in \mathbb{N}\) so that for all \(n \gt N_K\) we have that \(|x_n - K| \lt \frac{\epsilon}{2}\text{.}\)
And, since \(x_n \to L\text{,}\) there is some \(N_L \in \mathbb{N}\) so that for all \(n \gt N_L\) we have that \(|x_n - L| \lt \frac{\epsilon}{2}\text{.}\)
So if we pick \(N = \max\{N_K, N_L\}\) then for all \(n \gt N\) the triangle inequality implies that
\begin{equation*}
|K-L| = |(K-x_n)+(x_n-L)| \leq |x_n-K| + |x_n-L| \lt \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon
\end{equation*}
Note that
\(K,L\) are constants so the inequality
\(|K-L| \lt \epsilon\) must hold independently of the value of
\(n\text{.}\) And since it holds for any
\(\epsilon \gt 0\) Lemma 6.5.2 implies that
\(K=L\) as required.
Subsubsection 6.5.1.2 Linearity of limits
No time to rest! Let’s get working on the linearity of limits. We prove this by breaking the result down into two simpler lemmas.
Lemma 6.5.3.
Let \(a,c \in \mathbb{R}\) and let \((x_n)\) be a sequence that converges to \(a\text{.}\) The sequence \((c \cdot x_n)\) converges to \(c \cdot a\text{.}\)
Lemma 6.5.4.
Let \(a,b \in \mathbb{R}\) and let \((x_n)\) and \((y_n)\) be sequences so that \(x_n \to a\) and \(y_n \to b\text{.}\) The sequence \((z_n) = (x_n+y_n)\) converges to \(a+b\text{.}\)
Once we prove both of these, the linearity of limits follows quite directly:
\begin{align*}
\lim _{n\to \infty }(c\cdot x_{n} + d\cdot y_{n}) \amp =
\lim _{n\to \infty }(c\cdot x_{n}) + \lim_{n\to \infty} (d\cdot y_{n})\\
\amp= c\cdot \lim _{n\to \infty }( x_{n}) + d\cdot\lim_{n\to \infty} (y_{n}).
\end{align*}
The first of these lemmas is a little easier than the second, so we’ll start there. And, as usual, we start with scratch work. Notice that when
\(c=0\) the result simplifies down to the statement that the constant sequence
\(x_n=0\) converges to
\(0\text{.}\) This is just
Example 6.4.3 and we can recycle that proof. So since we know how to prove the case
\(c=0\text{,}\) we can now work on
\(c \neq 0\text{.}\)
Notice that the statement is really a conditional. If \(x_n \to a\) then \(c \cdot x_n \to c\cdot a\text{.}\) We’ll assume that \(x_n \to a\) and then work towards showing that \(c \cdot x_n \to c\cdot a\text{.}\) To do this we have to prove that for all \(\epsilon \gt 0\text{,}\) there is some \(N \in \mathbb{N}\) so that for all \(n \in \mathbb{N}\) if \(n \gt N\) then \(|c x_n - c a| \lt \epsilon\text{.}\) Let’s manipulate this inequality a little:
\begin{equation*}
|c x_n - c a| = |c| |x_n-a|
\end{equation*}
and so it suffices for us to show that \(|x_n-a| \lt \frac{\epsilon}{|c|}\text{.}\)
Well, now we can put our assumption that \(x_n \to a\) to use. That assumption tells us that for any \(\epsilon_x \gt 0\text{,}\) there is \(N_x \in \mathbb{N}\) so that for all \(n \in \mathbb{N}\) when \(n \gt N_x\) then \(|x_n-a| \lt \epsilon_x\text{.}\) We are being careful to label those constants with the subscript \(x\) to help remind us that those constants describe the convergence of \(x_n \to a\text{.}\)
Since this works for any \(\epsilon_x\text{,}\) we are free to set \(\epsilon_x = \frac{\epsilon}{|c|}\text{.}\) Then we know there is \(N_x\) so that if \(n \gt N_x\) then \(|x_n-a| \lt \frac{\epsilon}{|c|}\) and thus \(|c||x_n-a| \lt \epsilon\text{,}\) just as we need. All that remains is to write it up as a neat proof.
Proof of Lemma 6.5.3.
Let \(\epsilon \gt 0\) and assume that \(x_n \to a\text{.}\) We split the proof into two cases, \(c = 0\) and \(c \neq 0\text{.}\)
When \(c=0\text{,}\) we have \(c \cdot x_n = 0\) and \(c \cdot a = 0\text{,}\) and hence we trivially have
\begin{equation*}
|c\cdot x_n - c \cdot a| = |0 - 0| = 0 \lt \epsilon
\end{equation*}
Thus \(c\cdot x_n \to c\cdot a\) when \(c = 0\text{.}\)
So now assume that \(c \neq 0\text{.}\) Since \(x_n \to a\text{,}\) we know that there exists \(N_x \in \mathbb{N}\) so that for all \(n \in \mathbb{N}\) when \(n \gt N_x\text{,}\) we have \(|x_n-a|\lt \frac{\epsilon}{|c|}\text{.}\) Let \(N= N_x\) and then provided \(n \gt N\text{,}\)
\begin{equation*}
|c\cdot x_n - c\cdot a| = |c| |x_n-a| \lt \epsilon.
\end{equation*}
And thus \(c\cdot x_n \to c\cdot a\) as required.
We can actually clean this proof up and write it as a single case. We had to separate out the case \(c=0\) so that we did not divide \(\epsilon\) by \(0\text{.}\) However, we should remember that we do have some flexibility. Here is an alternate, slightly cleaner proof.
Second proof of Lemma 6.5.3.
Let \(\epsilon \gt 0\) and assume that \(x_n \to a\text{.}\) We know that there exists \(N_x \in \mathbb{N}\) so that for all \(n \in \mathbb{N}\) when \(n \gt N_x\text{,}\) we have \(|x_n-a|\lt \frac{\epsilon}{|c|+1}\text{.}\) Let \(N= N_x\) and then provided \(n \gt N\text{,}\)
\begin{equation*}
|c\cdot x_n - c\cdot a| = |c| |x_n-a| \leq \frac{|c| \epsilon}{|c|+1} \lt \epsilon.
\end{equation*}
And thus \(c\cdot x_n \to c\cdot a\) as required.
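For a concrete sequence, the threshold \(N_x\) in this second proof can be computed explicitly. A sketch, using the illustrative sequence \(x_n = a + 1/n\) (the helper `threshold` is our own):

```python
import math

def threshold(c, eps):
    # For x_n = a + 1/n we have |x_n - a| = 1/n, so
    # 1/n < eps/(|c|+1) holds once n > (|c|+1)/eps.
    return math.ceil((abs(c) + 1) / eps)

a, c, eps = 2.0, -3.0, 1e-3
N = threshold(c, eps)
n = N + 1
x_n = a + 1/n
assert abs(c*x_n - c*a) < eps
# the denominator |c|+1 means c = 0 needs no special case:
assert abs(0*x_n - 0*a) < eps
```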
Let us now turn to
Lemma 6.5.4. Notice that, again, it is really a conditional: “if those sequences converge to
\(a\) and
\(b\text{,}\) then their sum converges to
\(a+b\text{.}\)” So our proof will start by assuming the hypothesis is true and then working our way to the conclusion. We start by assuming that
\(x_n \to a\) and
\(y_n \to b\) and, as is always the case, it is a good idea to write down the meaning of the things that we have assumed and also to write down the meaning of what we want to show.
Our assumptions that
\(x_n \to a\) and
\(y_n \to b\) mean that:
for any \(\epsilon_x \gt 0\) there is some \(N_x \in\mathbb{N}\) so that for all \(n \in \mathbb{N}\text{,}\) if \(n \gt N_x\) then \(|x_n-a| \lt \epsilon_x\text{,}\) and
for any \(\epsilon_y \gt 0\) there is some \(N_y \in\mathbb{N}\) so that for all \(n \in \mathbb{N}\text{,}\) if \(n \gt N_y\) then \(|y_n-b| \lt \epsilon_y\text{.}\)
And we wish to show that
for any \(\epsilon \gt 0\) there is some \(N \in \mathbb{N}\) so that for all \(n \in \mathbb{N}\) if \(n \gt N\) then
\begin{equation*}
|(x_n+y_n)-(a+b)| \lt \epsilon.
\end{equation*}
The triangle inequality,
Theorem 5.4.6, helps us here. It tells us how to bound the quantity
\(|(x_n+y_n)-(a+b)|\) by
\(|x_n-a|\) and
\(|y_n-b|\text{:}\)
\begin{equation*}
|(x_n+y_n)-(a+b)| = |(x_n-a)+(y_n-b)| \leq |x_n-a| +|y_n-b|
\end{equation*}
And then since we have assumed that \((x_n), (y_n)\) converge to \(a,b\text{,}\) we know that by making \(n\) very big, we can make both \(|x_n-a|\) and \(|y_n-b|\) very small. This then implies that we can make \(|(x_n+y_n)-(a+b)|\) very small. In particular, if we can make both \(|x_n-a|\) and \(|y_n-b|\) smaller than \(\frac{\epsilon}{2}\text{,}\) then the triangle inequality tells us that \(|(x_n+y_n)-(a+b)|\) is smaller than \(\epsilon\text{.}\) This is precisely what we need to prove the result.
Time to use our assumptions
\(x_n \to a\) and
\(y_n \to b\text{.}\) Since
the definition of convergence works for
any choice of
\(\epsilon\text{,}\) we can pick
\(\epsilon_x = \epsilon_y = \frac{\epsilon}{2}\text{.}\) Then
there is \(N_x\) so that when \(n \gt N_x\text{,}\) \(|x_n-a| \lt \epsilon_x = \frac{\epsilon}{2}\text{,}\) and
there is \(N_y\) so that when \(n \gt N_y\text{,}\) \(|y_n-b| \lt \epsilon_y = \frac{\epsilon}{2}\text{.}\)
This means that for any \(n \gt \max\{N_x,N_y\}\) we have \(|x_n-a|+|y_n-b| \lt \epsilon\text{,}\) which, in turn, guarantees that \(|(x_n+y_n)-(a+b)| \lt\epsilon\text{.}\) Now we just have to tidy it up and write it in a nice proof.
Proof of Lemma 6.5.4.
Assume that \(x_n \to a\) and \(y_n \to b\text{.}\) We will show that \(z_n =x_n+y_n \to a+b\text{.}\)
Let \(\epsilon \gt 0\text{.}\) Then since \(x_n \to a\text{,}\) we know that there exists \(N_x \in \mathbb{N}\) so that for all \(n \in \mathbb{N}\) if \(n \gt N_x\) then \(|x_n-a| \lt \frac{\epsilon}{2}\text{.}\) Similarly, since \(y_n \to b\text{,}\) we know that there exists \(N_y \in \mathbb{N}\) so that for all \(n \in \mathbb{N}\) if \(n \gt N_y\) then \(|y_n-b| \lt \frac{\epsilon}{2}\text{.}\)
Now pick \(N = \max\{N_x, N_y\}\text{.}\) Then for all \(n \in \mathbb{N}\) with \(n \gt N\text{,}\) we have
\begin{align*}
|z_n - (a+b)| \amp = |x_n-a + y_n -b| \\
\amp \leq |x_n-a| + |y_n-b|\\
\amp \lt \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon.
\end{align*}
and thus \(z_n \to (a+b)\) as required.
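The \(N = \max\{N_x, N_y\}\) bookkeeping in this proof can be sketched concretely. Here the sequences and the brute-force helper `find_N` are our own illustrative choices:

```python
def x(n): return 3 + 1/n   # x_n -> 3
def y(n): return 1 - 2/n   # y_n -> 1

def find_N(seq, limit, eps):
    # brute force: first n with |seq(n) - limit| < eps; the error terms
    # here shrink monotonically, so the bound persists for all larger n
    n = 1
    while abs(seq(n) - limit) >= eps:
        n += 1
    return n

eps = 1e-3
N_x = find_N(x, 3, eps/2)   # make |x_n - 3| < eps/2
N_y = find_N(y, 1, eps/2)   # make |y_n - 1| < eps/2
N = max(N_x, N_y)           # beyond N, BOTH bounds hold
n = N + 1
assert abs((x(n) + y(n)) - 4) < eps
```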
Now that we have proved these two lemmas we can complete our proof of the linearity of limits:
Proof of the linearity of limits.
Let \(a,b,c,d \in \mathbb{R}\) and let \((x_n)\) and \((y_n)\) be sequences so that
\begin{equation*}
\lim _{n\to \infty }x_{n}=a \qquad \text{and} \qquad \lim _{n\to \infty }y_{n}=b.
\end{equation*}
Then Lemma 6.5.3 tells us that
\begin{equation*}
\lim _{n\to \infty } c\cdot x_{n}=c\cdot a \qquad \text{and} \qquad \lim _{n\to \infty }d\cdot y_{n}=d\cdot b.
\end{equation*}
And then Lemma 6.5.4 gives
\begin{equation*}
\lim _{n\to \infty } \left(c\cdot x_{n} + d\cdot y_n\right) =
\lim _{n\to \infty } c\cdot x_{n} + \lim_{n\to\infty} d\cdot y_n =
c \cdot a + d \cdot b
\end{equation*}
as required.
Notice that by working in this order, we have been careful to first establish the convergence of the sequences
\((c x_n)\) and
\((d y_n)\text{,}\) via
Lemma 6.5.3, before establishing the convergence of their sum. This is necessary because
Lemma 6.5.4 only works for the sum of convergent sequences.
Subsubsection 6.5.1.3 Product of limits
Again, the statement is really an implication: “if \(x_n\to a\) and \(y_n \to b\) then \(x_n \cdot y_n \to a \cdot b\)”. So we assume that \(x_n \to a\) and \(y_n \to b\text{.}\) This means, roughly speaking, that when \(n\) is really big, we know that \(|x_n-a|\) and \(|y_n-b|\) are small. And from that we need to show that \(|x_n \cdot y_n - a\cdot b|\) is also small.
So we have to somehow express
\begin{equation*}
|x_n \cdot y_n - a\cdot b| \qquad \text{ in terms of } \qquad |x_n-a| \text{ and } |y_n-b|
\end{equation*}
and we can do it by carefully adding and subtracting terms.
\begin{align*}
(x_n \cdot y_n - a\cdot b) \amp =
(x_n \cdot y_n - a\cdot b) + \underbrace{(x_n \cdot b - x_n \cdot b)}_{=0}\\
\amp = x_n(y_n-b) + b (x_n-a)
\end{align*}
So then, a little application of the triangle inequality gives
\begin{align*}
|x_n \cdot y_n - a \cdot b| \amp = |x_n(y_n-b) + b(x_n-a) | \\
\amp \leq |x_n(y_n-b)| + |b(x_n-a)| \\
\amp = |x_n|\cdot |y_n-b| + |b|\cdot |x_n-a|
\end{align*}
Similar to the argument we used to prove
Lemma 6.5.4, we see that if we can keep
\(|x_n|\cdot |y_n-b| \lt \epsilon/2\) and
\(|b|\cdot |x_n-a| \lt \epsilon/2\text{,}\) then we are done. But, how can we do that? Well, we can recycle the ideas from the proof of
Lemma 6.5.3 to keep
\(|b|\cdot |x_n-a| \lt \epsilon/2\text{,}\) i.e.
\(|x_n-a| \lt \frac{\epsilon}{2|b|+1}\text{,}\) since
\(|b|\) is a constant. But that argument doesn’t work for the other term,
\(|x_n|\cdot |y_n-b|\) since
\(x_n\) need not be a constant.
However, we do know that when \(n\) is very large, \(x_n\) must be close to \(a\text{,}\) its limit. So, at least when \(a \neq 0\text{,}\) we should be able to bound \(\frac{|a|}{2} \leq |x_n| \leq \frac{3|a|}{2}\) for all sufficiently large \(n\text{.}\) This, in turn, would allow us to bound
\begin{equation*}
\frac{|a|}{2} | y_n -b | \leq |x_n| \cdot |y_n-b| \leq \frac{3|a|}{2} |y_n-b|
\end{equation*}
And now, we use our control
over
\(|y_n-b|\text{,}\) to make sure that
\(\frac{3|a|}{2}|y_n-b| \lt \epsilon/2\text{.}\)
Let us make this intermediate result bounding
\(|x_n|\text{,}\) into a lemma. It takes a little careful juggling of inequalities and the reverse triangle inequality,
Corollary 5.4.7, helps us. Then we can use the lemma to finish our proof.
Lemma 6.5.5.
Let \(a \in \mathbb{R}\) with \(a \neq 0\text{,}\) and let \((x_n)\) be a sequence that converges to \(a\text{.}\) Then there is some \(N \in \mathbb{N}\) so that for all \(n \in \mathbb{N}\text{,}\) when \(n \gt N\text{,}\) we have
\begin{equation*}
\frac{|a|}{2} \leq |x_n| \leq \frac{3|a|}{2}.
\end{equation*}
Proof.
Let \(a\) and \((x_n)\) be as given, and let \(\epsilon = \frac{|a|}{2}\text{.}\) Then since \(x_n \to a\text{,}\) we know that there is \(N \in \mathbb{N}\) so that for all integer \(n \gt N\text{,}\)
\begin{equation*}
|x_n - a| \lt \frac{|a|}{2}.
\end{equation*}
Now the reverse triangle inequality, Corollary 5.4.7, tells us that
\begin{equation*}
|x_n-a| \geq \left||x_n| - |a| \right|
\end{equation*}
and hence we know that
\begin{equation*}
\left||x_n| - |a| \right| \lt \frac{|a|}{2}
\end{equation*}
which is equivalent to
\begin{equation*}
-\frac{|a|}{2} \lt |x_n|-|a| \lt \frac{|a|}{2}
\end{equation*}
from which the result quickly follows by adding \(|a|\) to both sides.
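The eventual two-sided bound of this lemma can be spot-checked numerically (the lemma's bounds are only meaningful when the limit is nonzero). The sequence below is our own illustrative choice:

```python
# Check the bound |a|/2 <= |x_n| <= 3|a|/2 of Lemma 6.5.5 for the
# oscillating sequence x_n = a + (-1)^n / n with a = 2 (nonzero limit).
a = 2.0
def x(n): return a + (-1)**n / n

# With eps = |a|/2 = 1 we have |x_n - a| = 1/n < 1 for all n > 1,
# so the lemma's threshold N = 1 works here.
for n in range(2, 1000):
    assert abs(a)/2 <= abs(x(n)) <= 3*abs(a)/2
```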
Proof of the product of limits.
Let \((x_n)\) and \((y_n)\) be sequences so that \(x_n \to a\) and \(y_n \to b\text{.}\) Let \(\epsilon \gt 0\text{.}\) Then, since those sequences converge we know that
there is some \(N_x\) so that for all \(n \gt N_x\) we have \(|x_n-a| \lt \frac{\epsilon}{2|b|+1}\text{,}\) and
there is some \(N_y\) so that for all \(n \gt N_y\) we have \(|y_n-b| \lt \frac{\epsilon}{2(|a|+1)}\text{.}\)
Notice that we have chosen denominators \(2|b|+1\) and \(2(|a|+1)\) to avoid the possibility of dividing by zero when \(a\) or \(b\) is zero. For the same reason we bound \(|x_n|\) directly from the definition of convergence rather than via Lemma 6.5.5, whose bounds degenerate when \(a = 0\text{:}\) setting \(\epsilon = 1\) in the definition, there is some \(N_a\) so that for all \(n \gt N_a\) we have \(|x_n - a| \lt 1\text{,}\) and hence \(|x_n| \leq |x_n-a| + |a| \lt |a| + 1\text{.}\)
Now assume that \(n \gt \max\{N_x, N_y, N_a\}\text{,}\) then
\begin{align*}
|x_n y_n - a b| \amp = |x_n (y_n-b) + b (x_n-a)|\\
\amp \leq |x_n| |y_n-b| + |b| |x_n-a|\\
\amp \lt (|a|+1)\cdot \frac{\epsilon}{2(|a|+1)} + |b|\cdot \frac{\epsilon}{2|b|+1} \amp \text{by the bounds above}\\
\amp \lt \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon.
\end{align*}
and thus \(x_n \cdot y_n \to a\cdot b\) as required.
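As a quick numerical illustration (not a proof), the product law holds even when one limit is zero. The sequences below are our own choices:

```python
# Sanity-check the product of limits for x_n = 1/n -> a = 0 and
# y_n = 5 + 1/n -> b = 5; note that the limit a = 0 is allowed.
def x(n): return 1/n
def y(n): return 5 + 1/n

a, b = 0, 5
eps = 1e-4
n = 10**6
assert abs(x(n)*y(n) - a*b) < eps
```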
Subsubsection 6.5.1.4 Ratio of limits
Again,
Item d is a conditional statement, so to prove it, we assume the hypothesis,
\((y_n)\to b\) and
\(b\neq 0\text{,}\) and then show that
\(\left(\dfrac{1}{y_n}\right)\to \dfrac{1}{b}\text{.}\)
So the assumption tells us that \(b\neq 0\) and for all \(\epsilon_y \gt 0\text{,}\) there is some \(N_y \in \mathbb{N}\) so that for all \(n \in \mathbb{N}\) when \(n \gt N_y\) then \(|y_n-b| \lt \epsilon_y\text{.}\)
While to prove the conclusion we need to show that for all \(\epsilon \gt 0\) there is some \(N \in \mathbb{N}\) so that for all \(n \in \mathbb{N}\) when \(n \gt N\) then \(\left|\frac{1}{y_n}-\frac{1}{b}\right| \lt \epsilon\text{.}\)
Obviously we need to somehow relate this final inequality,
\(\left|\frac{1}{y_n}-\frac{1}{b}\right| \lt \epsilon\text{,}\) to the inequality we get from the convergence of
\(y_n\to b\text{,}\) namely
\(|y_n-b| \lt \epsilon_y\text{.}\) So, time to do some rewriting:
\begin{align*}
\left|\frac{1}{y_n}-\frac{1}{b}\right| \amp = \left| \frac{(b-y_n)}{b\cdot y_n} \right|\\
\amp = \frac{|b-y_n|}{|b\cdot y_n|}
= \frac{1}{|b|}\cdot \frac{1}{|y_n|} \cdot |y_n-b|
\end{align*}
And hence we need to choose \(N\) so that we can guarantee that
\begin{equation*}
\left|\frac{1}{y_n}-\frac{1}{b}\right| = \frac{1}{|b|}\cdot \frac{1}{|y_n|} \cdot |y_n-b| \lt \epsilon,
\end{equation*}
or equivalently:
\begin{equation*}
|y_n-b| \lt |b| \cdot |y_n| \cdot \epsilon.
\end{equation*}
Now since we have control
over the size of
\(|y_n-b|\text{,}\) we can make it really small. But, just as was the case when we proved the
product of limits (Item c), we first have to bound
\(|y_n|\text{.}\) Thankfully we did all that hard work already when we proved
Lemma 6.5.5. That lemma tells us that there is some
\(N_b\) so that when
\(n \gt N_b\) we know that
\begin{equation*}
\frac{|b|}{2} \leq |y_n| \leq \frac{3|b|}{2}
\end{equation*}
Thus when \(n \gt N_b\text{,}\) we know that \(\frac{|b|^2}{2}\cdot \epsilon \lt |b| \cdot |y_n| \cdot \epsilon\) So, if we can guarantee that
\begin{equation*}
|y_n-b| \lt \frac{|b|^2}{2}\cdot \epsilon
\end{equation*}
then we have
\begin{equation*}
|y_n-b| \lt \frac{|b|^2}{2}\cdot \epsilon \leq |b| \cdot |y_n| \cdot \epsilon
\end{equation*}
and so \(\left|\frac{1}{y_n}-\frac{1}{b}\right| \lt \epsilon\) as required. Therefore we set \(\epsilon_y = \frac{|b|^2}{2}\cdot \epsilon \text{.}\)
The proof is ready to go. We just have to tidy things up and be careful of our various \(N\)’s and \(\epsilon\)’s.
Proof of the reciprocal of a limit.
Let \(b\in \mathbb{R}\) with \(b \neq 0\) and let \((y_n)\) be a sequence that converges to \(b\text{.}\)
Now let
\(\epsilon \gt 0\text{.}\) Since
\(y_n \to b\text{,}\) Lemma 6.5.5 implies that there is
\(N_b \in \mathbb{N}\) so that for all integer
\(n \gt N_b\)
\begin{equation*}
\frac{|b|}{2} \leq |y_n|.
\end{equation*}
Additionally, since \(y_n \to b\text{,}\) we can find \(N_y \in \mathbb{N}\) so that for all integer \(n \gt N_y\text{,}\)
\begin{equation*}
|y_n-b|\lt \frac{|b|^2}{2} \cdot \epsilon.
\end{equation*}
Thus, if we pick \(N=\max\{N_b,N_y\}\text{,}\) then for all integer \(n \gt N\text{,}\) we have
\begin{align*}
\left| \frac{1}{y_n}-\frac{1}{b} \right| \amp
= \frac{|y_n-b|}{|b|\cdot |y_n|}\\
\amp \leq |y_n-b| \cdot \frac{2}{|b|^2}\\
\amp
\lt \frac{|b|^2}{2} \cdot \epsilon \cdot \frac{2}{|b|^2} = \epsilon
\end{align*}
And therefore the result follows.
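The \(\epsilon\)-bookkeeping in this proof, in particular the choice \(\epsilon_y = \frac{|b|^2}{2}\epsilon\), can be checked on a concrete sequence. The sequence \(y_n = b + 1/n\) with \(b = -2\) is our own illustrative choice:

```python
b = -2.0
def y(n): return b + 1/n   # y_n -> b = -2, and b != 0

eps = 1e-3
eps_y = (b*b/2) * eps      # the proof's choice: eps_y = |b|^2 * eps / 2
n = 10**4                  # large enough that |y_n - b| = 1/n < eps_y
assert abs(y(n) - b) < eps_y
assert abs(y(n)) >= abs(b)/2      # the Lemma 6.5.5 lower bound
assert abs(1/y(n) - 1/b) < eps    # the desired conclusion
```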
Now that we have proved both the product of limits property and the reciprocal of limits property, we get the ratio of limits property quite directly.
Proof of the ratio of limits.
Let \((x_n)\) and \((y_n)\) be sequences with \(x_n \to a\) and \(y_n \to b\text{,}\) where \(b \neq 0\text{.}\) By the reciprocal of limits we know that \(\frac{1}{y_n} \to \frac{1}{b}\text{,}\) and so the product of limits gives
\begin{equation*}
\lim_{n \to \infty} \frac{x_n}{y_n} =
\lim_{n \to \infty} \left( x_n \cdot \frac{1}{y_n} \right) =
a \cdot \frac{1}{b} = \frac{a}{b}
\end{equation*}
as required.
Subsection 6.5.2 (Optional) Some properties of limits of functions
The basic properties of limits of functions are very similar to those satisfied by the limits of sequences and should be familiar to the reader who has taken a Calculus course.
Theorem 6.5.6. Basic properties of limits of functions.
Let \(a, K, L\in \mathbb{R}\) and let \(f\) and \(g\) be real valued functions so that
\begin{equation*}
\lim_{x\to a}f(x) = K \qquad \text{and} \qquad
\lim_{x\to a}g(x) = L.
\end{equation*}
Additionally let \(c,d \in \mathbb{R}\text{.}\) Then
The limit of a function at a given point is unique.
Linearity of limits: \(\ds \lim_{x\to a} (c\cdot f(x) + d \cdot g(x)) = cK + dL.\)
Product of limits: \(\ds \lim_{x\to a} f(x) \cdot g(x) = K\cdot L
\text{.}\)
Reciprocal of limit: \(\ds \lim_{x\to a} \frac{1}{g(x)} = \frac{1}{L}\) as long as \(L \neq 0\text{.}\)
Ratio of limits: \(\ds \lim_{x\to a} \frac{f(x)}{g(x)} = \frac{K}{L}\) as long as \(L \neq 0\text{.}\)
Notice that the properties of limits of functions are very similar to the properties of limits of sequences. The proofs are actually very similar as well. The main difference is that instead of picking some threshold \(N \in \mathbb{N}\) we need to pick \(\delta \gt 0\text{.}\) Further, where we picked \(N\) to be at least as large as the other \(N\)’s used to ensure that all inequalities are satisfied (eg the proof of Item c in Theorem 6.5.1), we will need to pick \(\delta\) to be smaller than all the other \(\delta\)’s used. Because of these similarities we are going to give the proofs without scratch-work; we recommend the reader refer back to Subsection 6.5.1 for the ideas underlying the proofs.
Also notice that for the reciprocal \(1/g(x)\) and ratio \(f(x)/g(x)\) to be defined we require that \(g(x) \neq 0\text{,}\) but we have not stated this in the theorem. This is very similar to the situation for Theorem 6.5.1 above. The condition that \(L \neq 0\) tells us that when \(x\) is close enough to \(a\) we have \(g(x) \neq 0\) — this is a consequence of Lemma 6.5.9 below. Since we are typically only interested in what happens when \(x\) is close to \(a\text{,}\) the condition that \(L \neq 0\) ensures that \(1/g(x)\) and \(f(x)/g(x)\) are defined.
Subsubsection 6.5.2.1 Uniqueness of limits
Proof of the uniqueness of limits.
To show that the limit of a function is unique, we prove that if
\begin{equation*}
\lim_{x\to a} f(x) = K \qquad \text{and also} \qquad
\lim_{x\to a} f(x) = L
\end{equation*}
then \(K = L\text{.}\)
So now assume that \(\lim_{x\to a} f(x) =K\) and \(\lim_{x\to a}f(x)=L\text{,}\) and moreover let \(\epsilon \gt 0\text{.}\)
Since \(\lim_{x\to a}f(x)=K\text{,}\) we see that \(\exists \delta_K \gt 0\) so that when \(0 \lt |x-a| \lt \delta_K\) we have \(|f(x)-K| \lt \frac{\epsilon}{2}\text{.}\)
Similarly, since \(\lim_{x\to a}f(x)=L\text{,}\) we see that \(\exists \delta_L \gt 0\) so that when \(0 \lt |x-a| \lt \delta_L\) we have \(|f(x)-K| \lt \frac{\epsilon}{2}\text{.}\)
Thus, if we pick \(\delta = \min\{\delta_K, \delta_L \}\) then when \(0 \lt |x-a| \lt \delta\) we know that
\begin{equation*}
|K-L| = |(K-f(x))+(f(x)-L)| \leq |f(x)-K| + |f(x)-L|
\lt \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon.
\end{equation*}
This, by
Lemma 6.5.2, implies that
\(K=L\text{,}\) and therefore the limit of a function at a point is unique.
Subsubsection 6.5.2.2 Linearity of limits
We prove the linearity of limits via two simpler lemmas.
Lemma 6.5.7.
Let \(a, K, c\in \mathbb{R}\) and let \(f\) be a real valued function so that
\begin{equation*}
\lim_{x\to a}f(x) = K.
\end{equation*}
Then
\begin{equation*}
\lim_{x\to a} c\cdot f(x) = c\cdot K.
\end{equation*}
Proof.
Let \(a,c,K,f\) be as in the statement of the lemma. Now let \(\epsilon \gt 0\text{,}\) so that by the convergence of \(f\) we know that there is some \(\delta_K\) so that when \(0 \lt |x-a| \lt \delta_K\) we have that
\begin{equation*}
|f(x)-K| \lt \frac{\epsilon}{|c|+1}\text{.}
\end{equation*}
Notice that this choice avoids any problems that might arise in the case that \(c=0\text{.}\)
Now let \(\delta = \delta_K\text{,}\) so that when \(0 \lt |x-a| \lt \delta=\delta_K\) we know that
\begin{equation*}
|c\cdot f(x) - c\cdot K| = |c| \cdot |f(x)-K| \leq \frac{|c|}{|c|+1}\epsilon \lt \epsilon
\end{equation*}
and thus \(cf(x) \to cK\) as \(x \to a\) as required.
Lemma 6.5.8.
Let \(a, K, L\in \mathbb{R}\) and let \(f\) and \(g\) be real valued functions so that
\begin{equation*}
\lim_{x\to a}f(x) = K \qquad \text{and} \qquad
\lim_{x\to a}g(x) = L.
\end{equation*}
Then
\begin{equation*}
\lim_{x\to a} f(x)+g(x) = K+L.
\end{equation*}
Proof.
Let \(a,K,L,f,g\) be as in the statement of the lemma, and let \(\epsilon \gt 0\text{.}\) Then
since \(f(x)\to K\) we know that there is some \(\delta_K\) so that when \(0\lt|x-a|\lt \delta_K\) we have that \(|f(x)-K| \lt \frac{\epsilon}{2}\text{,}\)
and similarly, since \(g(x)\to L\) we know that there is some \(\delta_L\) so that when \(0\lt|x-a|\lt \delta_L\) we have that \(|g(x)-L| \lt \frac{\epsilon}{2}\text{.}\)
Pick \(\delta = \min\left\{\delta_K, \delta_L\right\}\text{,}\) so that for all \(x\) with \(0\lt|x-a|\lt \delta\) we know that
\begin{align*}
|(f(x)+g(x))-(K+L) | \amp = |(f(x)-K) + (g(x)-L) |\\
\amp \leq |f(x)-K| + |g(x)-L|\\
\amp \lt \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon.
\end{align*}
Thus \((f(x)+g(x))\to (K+L)\) as \(x\to a\) as required.
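The \(\delta = \min\{\delta_K, \delta_L\}\) bookkeeping mirrors the \(N = \max\) trick for sequences. A sketch for concrete functions (our own illustrative choices of \(f\text{,}\) \(g\text{,}\) and the \(\delta\)’s):

```python
def f(x): return 2*x       # f(x) -> K = 2 as x -> 1
def g(x): return x**2      # g(x) -> L = 1 as x -> 1

a, K, L = 1.0, 2.0, 1.0
eps = 1e-3
delta_K = eps/4   # |2x - 2| = 2|x-1| < eps/2 when |x - 1| < eps/4
delta_L = eps/6   # |x^2 - 1| = |x-1||x+1| < eps/2 when |x - 1| < eps/6 near 1
delta = min(delta_K, delta_L)   # the SMALLER delta makes both bounds hold

x = a + delta/2   # a point with 0 < |x - a| < delta
assert abs((f(x) + g(x)) - (K + L)) < eps
```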
Equipped with these two lemmas, the proof of the linearity of limits of functions is quite straightforward.
Proof of the linearity of limits.
Let \(f\) and \(g\) be functions so that
\begin{equation*}
\lim_{x\to a} f(x) =K \qquad \text{and} \qquad
\lim_{x\to a} g(x) = L.
\end{equation*}
Moreover, let
\(c,d \in \mathbb{R}\text{.}\) Then using
Lemma 6.5.7 we know that
\begin{equation*}
\lim_{x\to a} c\cdot f(x) = c\cdot K \qquad \text{and} \qquad
\lim_{x\to a} d\cdot g(x) = d\cdot L.
\end{equation*}
And then Lemma 6.5.8 tells us that
\begin{equation*}
\lim_{x \to a}\left( c\cdot f(x) + d\cdot g(x)\right)
= c\cdot K + d \cdot L
\end{equation*}
as desired.
Subsubsection 6.5.2.3 Product of limits
As was the case for sequences, our proof of the product of limits (and also the reciprocal of limits) relies on the “trick” of rewriting
\begin{align*}
| f(x)\cdot g(x)-K\cdot L |\amp =
|f(x)\cdot g(x)-f(x) \cdot L + f(x)\cdot L - K\cdot L|\\
\amp = |f(x)(g(x)-L) + L(f(x)-K) | \\
\amp \leq |f(x)|\cdot|g(x)-L| + |L|\cdot|f(x)-K|.
\end{align*}
So we again require some control over the size of the function close to
\(x=a\text{.}\) Consequently we need a lemma analogous to
Lemma 6.5.5 that gives us a rigorous bound on
\(f(x)\) when
\(x\) is close to
\(a\text{.}\)
Lemma 6.5.9.
Let \(a,K \in \mathbb{R}\) with \(K \neq 0\text{,}\) and let \(f(x)\) be a function that converges to \(K\) as \(x\) approaches \(a\text{.}\) Then, there is some \(\delta \gt 0\) so that when \(0 \lt |x-a| \lt \delta\text{,}\) we have
\begin{equation*}
\frac{|K|}{2} \leq |f(x)| \leq \frac{3|K|}{2}.
\end{equation*}
The proof is essentially identical to that of Lemma 6.5.5, with the threshold \(N\) replaced by a suitable \(\delta \gt 0\text{,}\) so we do not repeat it.
Now that we have this lemma we can proceed with the proof.
Proof of the product of limits.
Let \(a,K,L,f,g\) be as in the statement of the lemma, and let \(\epsilon \gt 0\text{.}\) Then we assemble the following three facts:
Since \(f(x)\to K\) as \(x \to a\text{,}\) there is some \(\delta_K \gt 0\) so that when \(0 \lt |x-a| \lt \delta_K\) we know that \(|f(x)-K| \lt \frac{\epsilon}{2|L|+1}\text{.}\)
Similarly, since \(g(x)\to L\) as \(x \to a\text{,}\) there is some \(\delta_L \gt 0\) so that when \(0 \lt |x-a| \lt \delta_L\) we know that \(|g(x)-L| \lt \frac{\epsilon}{3|K|+1}\text{.}\)
Finally, since
\(f(x)\to L\) as
\(x \to a\text{,}\) Lemma 6.5.9 tells us that there is some
\(\delta_f\) so that when
\(0\lt |x-a|\lt \delta_f\) we know that
\(|f(x)| \lt \frac{3|K|}{2}\text{.}\)
Notice that we have chosen denominators of \(2|L|+1\) and \(3|K|+1\) to avoid any problems that could arise if we had \(L=0\) or \(K=0\text{.}\)
Now let \(\delta = \min\{\delta_K, \delta_L, \delta_f \}\text{.}\) Then when \(0 \lt |x-a| \lt \delta\) we know that
\begin{equation*}
|f(x)-K| \lt \frac{\epsilon}{2|L|+1}
\qquad \text{and}\qquad
|g(x)-L| \lt \frac{\epsilon}{3|K|+1}
\qquad \text{and}\qquad
|f(x)| \leq \frac{3|K|}{2}.
\end{equation*}
Then:
\begin{align*}
| f(x)\cdot g(x)-K\cdot L |\amp = |f(x)(g(x)-L) + L(f(x)-K) | \\
\amp \leq |f(x)|\cdot|g(x)-L| + |L|\cdot|f(x)-K| \\
\amp \leq \frac{3|K|}{2} \cdot \frac{\epsilon}{3|K|+1}
+ |L| \cdot \frac{\epsilon}{2|L|+1}\\
\amp \lt \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon.
\end{align*}
Hence the result follows.
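We can trace through the proof's choice of \(\delta\) numerically. The functions below, and the specific values of \(\delta_K, \delta_L, \delta_f\) that make the three facts hold for them, are assumptions of this sketch (checked only for these particular functions), not something the proof supplies:

```python
# Numerical walk-through (illustration only) of the delta assembled in
# the product-of-limits proof, using the sample functions f(x) = x + 1
# (so K = 3) and g(x) = x**2 (so L = 4) as x -> a = 2.
def f(x):
    return x + 1

def g(x):
    return x * x

a, K, L = 2.0, 3.0, 4.0
eps = 1e-3

# For these particular f and g one can check directly that these deltas
# make the three facts in the proof hold near a = 2.
delta_K = eps / (2 * abs(L) + 1)        # gives |f(x) - K| < eps/(2|L|+1)
delta_L = eps / (5 * (3 * abs(K) + 1))  # gives |g(x) - L| < eps/(3|K|+1)
delta_f = 1.0                           # gives |f(x)| <= 3|K|/2
delta = min(delta_K, delta_L, delta_f)

# With that delta, |f(x)*g(x) - K*L| < eps at sampled points.
xs = [a + delta * t for t in (-0.9, -0.5, 0.1, 0.5, 0.9)]
ok = all(abs(f(x) * g(x) - K * L) < eps for x in xs)
```

The point of the computation is that the single \(\delta = \min\{\delta_K,\delta_L,\delta_f\}\) makes all three estimates hold simultaneously, which is exactly what the proof needs.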
Subsubsection 6.5.2.4 Ratio of limits
As was the case for limits of sequences, we prove the limit of ratios of functions by first proving the limit of the reciprocal of a function and then using the above result on the limit of products to complete the result.
Proof of the reciprocal of limits.
Let \(a, L\) and \(g\) be as stated, and let \(\epsilon \gt 0\) be arbitrary. Then
since \(\lim_{x\to a} g(x) = L\text{,}\) there is some \(\delta_L\) so that when \(0 \lt |x-a| \lt \delta_L\text{,}\) we know that
\begin{equation*}
|g(x)-L| \lt \epsilon \frac{|L|^2}{2},
\end{equation*}
and similarly, since
\(\lim_{x\to a} g(x) = L\text{,}\) Lemma 6.5.9 implies that there is some
\(\delta_g\) so that when
\(0 \lt |x-a| \lt \delta_g\text{,}\) we know that
\(\frac{|L|}{2} \lt |g(x)|\text{.}\)
Now pick \(\delta = \min\{\delta_L, \delta_g \}\text{.}\) Then whenever \(0 \lt |x-a| \lt \delta\) we get
\begin{align*}
\left| \frac{1}{g(x)}-\frac{1}{L} \right|
\amp
=\left| \frac{(L-g(x))}{L \cdot g(x)} \right|\\
\amp
= \frac{1}{|L|} \cdot \frac{1}{|g(x)|}\cdot |g(x)-L|\\
\amp\lt
\frac{2}{|L|^2} \cdot \epsilon \cdot \frac{|L|^2}{2} = \epsilon.
\end{align*}
Therefore the result follows.
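Again we can see the argument at work numerically. The function below and the value of \(\delta_L\) that achieves the required estimate for it are illustrative assumptions, verified only for this particular \(g\text{:}\)

```python
# Illustration (not a proof) of the reciprocal-of-limits argument, using
# the sample function g(x) = x**2 + 1, so L = 2 as x -> a = 1.
def g(x):
    return x * x + 1

a, L = 1.0, 2.0
eps = 1e-4

# For this particular g: |g(x) - L| = |x - 1| * |x + 1| < 3|x - 1| when
# |x - 1| < 1, so delta_L below forces |g(x) - L| < eps * |L|**2 / 2,
# while delta_g = 0.5 keeps |g(x)| > |L|/2 = 1 (checked for this g only).
delta_L = eps * L * L / 6
delta_g = 0.5
delta = min(delta_L, delta_g)

# With that delta, |1/g(x) - 1/L| < eps at sampled points.
xs = [a + delta * t for t in (-0.9, -0.3, 0.2, 0.8)]
ok = all(abs(1 / g(x) - 1 / L) < eps for x in xs)
```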
Putting this result together with the result for the product of limits gives us the ratio of limits.
Proof of the ratio of limits.
Let
\(f\) and
\(g\) be functions so that
\(\lim_{x\to a}f(x)=K\) and
\(\lim_{x\to a} g(x)=L \neq 0\text{.}\) Then from
Item d we see that
\begin{equation*}
\lim_{x\to a} \frac{1}{g(x)} = \frac{1}{L}
\end{equation*}
and so, by the product of limits,
\begin{align*}
\lim _{x\to a }\left(\frac {f(x)}{g(x)}\right)
\amp
=\lim_{x\to a}f(x)\cdot\lim _{x\to a }\left(\frac {1}{g(x)}\right)\\
\amp = K \cdot \frac{1}{L} = \frac{K}{L}
\end{align*}
Therefore the result follows.
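A final numerical sanity check (with functions of our own choosing, purely for illustration) shows the ratio settling onto \(K/L\text{:}\)

```python
# Numerical illustration (not a proof) of the ratio of limits, using the
# sample functions f(x) = x + 3 (so K = 5) and g(x) = x**2 - 1 (so L = 3)
# as x -> a = 2.
def f(x):
    return x + 3

def g(x):
    return x * x - 1

a, K, L = 2.0, 5.0, 3.0
target = K / L

# f(x)/g(x) should approach K/L = 5/3 as x -> a from either side.
ratios = [f(a + h) / g(a + h) for h in (0.1, -0.1, 1e-3, -1e-3, 1e-6)]
errors = [abs(r - target) for r in ratios]
```

As with the other checks in this section, finitely many sample points can only illustrate the theorem; the \(\epsilon\)-\(\delta\) proofs above are what establish it.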
To be more precise, we can find some \(N_0\) so that when \(n \gt N_0\) we know that \(|b_n| \gt
0\text{.}\)
We saw something like this back in
Remark 3.2.8 — we recommend that the reader quickly review that remark.
There are lots of potential choices here. For example, we can also pick \(\epsilon_K=\alpha\epsilon, \epsilon_L=\beta\epsilon\) with \(\alpha,\beta \gt 0\) and \(\alpha+\beta \leq 1\text{,}\) so that \(\epsilon_K+\epsilon_L \leq \epsilon\text{.}\)
Again, we are careful to use different \(\epsilon\) and different \(N\) for each convergence statement in order to avoid making accidental extra assumptions.
We used a very similar idea in our proof of uniqueness of limits.
That is, since \(y_n \to b\) we know that we can make \(|y_n-b|\) as small as we need, just by making \(n\) sufficiently large. In this way, our knowledge that \(y_n\) converges gives us some control over the size of that term, \(|y_n-b|\text{.}\)
When someone says that you “just” need to do something, you are right to be skeptical. “Just” can be a very dangerous word.
Another dangerous word. Sorry. Better to say something like “Similarly to our earlier proofs in this section”. The point of this footnote is to draw the reader’s attention to the fact that words like “obviously”, “clearly”, or “just”, are very subjective and should generally be avoided. But, we hope that it is clear to the reader that instructions like this are obviously to be disregarded from time to time. All things in moderation.
That is, since \(y_n \to b\text{,}\) we know that we can make \(|y_n-b|\) as small as we want by making \(n\) sufficiently large.
That is, we can find some \(c \gt 0\) so that when \(|x-a|\lt c\text{,}\) we know \(|g(x)| \gt 0\text{.}\)