When we work with sequences, it is not convenient to prove sequence convergence for each and every sequence individually. We can make use of some more general properties of limits of sequences to simplify our work. You will have already seen some “limit laws” when you studied calculus. We will prove some similar results in this section.
Theorem 6.5.1. Basic properties of limits of sequences.
Let \((x_n)\) and \((y_n)\) be sequences so that
\begin{equation*}
\lim_{n \to \infty} x_n = a \qquad \text{ and } \qquad
\lim_{n \to \infty} y_n = b
\end{equation*}
Additionally let \(c,d \in \mathbb{R}\text{.}\) Then
-
The limit of a sequence is unique.
-
Linearity of limits: \(\ds
\lim _{n\to \infty }(c \cdot x_{n} + d \cdot y_{n})=c \cdot a+ d \cdot b
\text{.}\)
-
Product of limits: \(\ds \lim_{n\to \infty} \left(x_n \cdot y_n\right) = a \cdot b\text{.}\)
-
Reciprocal of limit: \(\ds\lim_{n\to\infty} \frac{1}{y_n} = \frac{1}{b}\) as long as \(b\neq 0\text{.}\)
-
Ratio of limits: \(\ds \lim_{n\to\infty} \frac{x_n}{y_n} = \frac{a}{b}\) as long as \(b\neq 0\text{.}\)
Notice that for the sequences \((1/y_n)\) and \((x_n/y_n)\) to be defined for all \(n\) we need \(y_n \neq 0\text{,}\) but we have not stated that in the theorem. This is because the condition \(y_n \to b \neq 0\) implies that when \(n\) is large enough we know that \(y_n \neq 0\) — this is a consequence of Lemma 6.5.5 below. This is enough to tell us that when \(n\) is large everything is defined, and, typically, we don’t worry about what happens when \(n\) is small.
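Before proving these properties, it is worth seeing how they combine in practice. As an illustration (assuming the standard fact that \(1/n \to 0\text{,}\) which can be proved directly from the definition of convergence), the limit laws let us compute:

```latex
\begin{align*}
\lim_{n\to\infty} \frac{3n+1}{n+2}
  &= \lim_{n\to\infty} \frac{3 + \frac{1}{n}}{1 + \frac{2}{n}}
     && \text{divide numerator and denominator by } n\\
  &= \frac{\lim_{n\to\infty}\left(3 + \frac{1}{n}\right)}
          {\lim_{n\to\infty}\left(1 + \frac{2}{n}\right)}
     && \text{ratio of limits, since the denominator} \to 1 \neq 0\\
  &= \frac{3 + 1\cdot 0}{1 + 2\cdot 0} = 3
     && \text{linearity of limits and } \tfrac{1}{n}\to 0
\end{align*}
```

Note that the ratio law may only be invoked once we know that the numerator and denominator limits both exist and that the latter is nonzero.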
Subsubsection 6.5.1.1 Uniqueness of limits
To prove the first property — uniqueness of limits — we need to do some scratchwork to build up our intuition. A very standard approach to proving uniqueness is to assume that we have two objects satisfying the property and then show that those two things must actually be the same.
So, let \((x_n)\) be a convergent sequence and suppose that
\begin{equation*}
\lim_{n\to\infty}x_n=K \qquad \text{and also} \qquad
\lim_{n\to\infty}x_n=L
\end{equation*}
i.e. there are two limits. Of course, we don’t want these limits to actually be different, even though we’ve labelled them by different variables. We want to show that they are the same, that is \(K=L\text{.}\) In other words,
\begin{equation*}
\left(\text{the limit is unique}\right) \equiv
\left(
(\lim_{n\to\infty}x_n=K) \land ( \lim_{n\to\infty}x_n=L)
\implies (K=L)
\right).
\end{equation*}
Assume the hypothesis is true. So
\begin{equation*}
\lim_{n\to\infty}x_n=K \qquad \text{and also} \qquad
\lim_{n\to\infty}x_n=L
\end{equation*}
and then we try to show that \(K=L\text{.}\) Intuitively this makes sense. Since \(\lim_{n\to\infty} x_n= K\text{,}\) we know that we can make \(x_n\) arbitrarily close to \(K\) by making \(n\) large enough. Similarly, we can make \(x_n\) arbitrarily close to \(L\text{.}\) The only way this can happen is if \(K\) and \(L\) are also arbitrarily close to each other. And the only way that can happen is if they are actually the same.
This is an important point that we will have to prove. Namely, we are claiming that if two numbers are arbitrarily close to each other, then they must be equal. Rewriting this with quantifiers gives
\begin{equation*}
(\forall \epsilon > 0, |K-L|\lt \epsilon) \implies (K=L).
\end{equation*}
At first glance this might look a little hard to prove, but think about its contrapositive:
\begin{equation*}
(K \neq L) \implies (\exists \epsilon > 0 \st |K-L| \geq \epsilon).
\end{equation*}
So if two numbers are different, then we can find some positive number \(\epsilon\) so that the distance between those two numbers is at least \(\epsilon\text{.}\) That doesn’t sound so bad. It is a useful result, so we’ll make it into a lemma.
Lemma 6.5.2.
Let
\(K,L \in \mathbb{R}\text{.}\) If for every
\(\epsilon \gt 0\) we have that
\(|K-L| \lt \epsilon\text{,}\) then we must have that
\(K=L\text{.}\)
Proof.
We prove the contrapositive. Let
\(K,L \in \mathbb{R}\) so that
\(K \neq L\text{.}\) Then set
\(\epsilon = \frac{|K-L|}{2}\text{.}\) Since
\(K \neq L\) we know that
\(\epsilon \gt 0\text{.}\) Then we have that
\(|K - L| = 2\epsilon \gt \epsilon\) and so the result holds.
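Here is a small numerical sketch of the idea behind this proof; the particular values of \(K\) and \(L\) are arbitrary illustrative choices.

```python
def separating_epsilon(K, L):
    """Given K != L, return the epsilon used in the proof of
    Lemma 6.5.2: half the distance between K and L."""
    assert K != L
    return abs(K - L) / 2

# Arbitrary illustrative values.
K, L = 1.25, 1.3
eps = separating_epsilon(K, L)
assert eps > 0               # a genuine (positive) epsilon
assert abs(K - L) >= eps     # the distance between K and L exceeds it
print("epsilon =", eps)
```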
Okay, to recap, we have assumed that \(x_n \to K\) and \(x_n \to L\text{.}\) This means that
-
for all \(\epsilon_K \gt 0\text{,}\) there is some \(N_K \in \mathbb{N}\) so that for all \(n \in \mathbb{N}\) if \(n \gt N_K\) then \(|x_n-K| \lt \epsilon_K\text{,}\) and
-
for all \(\epsilon_L \gt 0\text{,}\) there is some \(N_L \in \mathbb{N}\) so that for all \(n \in \mathbb{N}\) if \(n \gt N_L\) then \(|x_n-L| \lt \epsilon_L\text{.}\)
Notice that we have carefully used different symbols for the \(\epsilon\) and \(N\) to describe the convergence of \(x_n \to K\) and \(x_n \to L\text{.}\) We do this so that we are not accidentally assuming anything extra about how \(x_n\) converges to \(K\) or \(L\text{.}\) Now, this tells us that when \(n\) is big enough — i.e. \(n \gt \max\{N_K,N_L\}\) — we have
\begin{equation*}
|x_n -K | \lt \epsilon_K \qquad \text{and} \qquad
|x_n -L | \lt \epsilon_L.
\end{equation*}
But how do we use this, and the lemma above, to tell us about the size of \(|K-L|\text{?}\)
There is a really nice trick using the triangle inequality, Theorem 5.4.6, and a little algebra. First, we add zero in a sneaky way that allows us to rewrite
\(K-L\) in terms of
\((K-x_n)\) and
\((L-x_n)\text{:}\)
\begin{equation*}
|K-L| = |K-L+0| = |K-L + \underbrace{(x_n-x_n)}_{=0}| = |(K-x_n) + (x_n-L)|
\end{equation*}
Now apply the triangle inequality:
\begin{equation*}
|K-L| = |(K-x_n) + (x_n-L)| \leq |K-x_n| + |x_n-L| = |x_n-K| + |x_n-L|
\end{equation*}
This gives us a way to bound the distance between \(K\) and \(L\) in terms of the distances between \(x_n\) and \(K\) and between \(x_n\) and \(L\text{.}\) But our assumption about the convergence of \(x_n\) gives us exactly that information. That is
\begin{equation*}
|K-L| \leq |x_n-K| + |x_n-L| \lt \epsilon_K + \epsilon_L
\end{equation*}
Now, given any \(\epsilon\text{,}\) we can choose \(\epsilon_K = \epsilon_L = \frac{\epsilon}{2}\text{.}\) Then
-
since \(x_n \to K\text{,}\) we know that there is some \(N_K\) so that when \(n \gt N_K\text{,}\) we have that \(|x_n-K| \lt \frac{\epsilon}{2}\text{.}\)
-
Similarly, since \(x_n \to L\text{,}\) we know that there is some \(N_L\) so that when \(n \gt N_L\text{,}\) we have that \(|x_n-L| \lt \frac{\epsilon}{2}\text{.}\)
Then our reasoning above tells us that
\(|K-L|\lt \epsilon\) providing
\(n \gt \max\{N_K, N_L\}\text{.}\) And finally we can use
Lemma 6.5.2 to complete the result.
Proof of uniqueness of limits.
Let
\((x_n)\) be a convergent sequence. We will prove that its limit is unique. To do so we prove that if
\(x_n \to K\) and
\(x_n \to L\) then we must have that
\(K=L\text{.}\)
So assume that \(x_n \to K\) and \(x_n \to L\text{,}\) and let \(\epsilon \gt 0\text{.}\)
-
Since \(x_n \to K\text{,}\) there is some \(N_K \in \mathbb{N}\) so that for all \(n \gt N_K\) we have that \(|x_n - K| \lt \frac{\epsilon}{2}\text{.}\)
-
And, since \(x_n \to L\text{,}\) there is some \(N_L \in \mathbb{N}\) so that for all \(n \gt N_L\) we have that \(|x_n - L| \lt \frac{\epsilon}{2}\text{.}\)
So if we pick \(N = \max\{N_K, N_L\}\) then for all \(n \gt N\) the triangle inequality implies that
\begin{equation*}
|K-L| = |(K-x_n)+(x_n-L)| \leq |x_n-K| + |x_n-L| \lt \epsilon
\end{equation*}
Note that
\(K,L\) are constants so the inequality
\(|K-L| \lt \epsilon\) must hold independently of the value of
\(n\text{.}\) And since it holds for any
\(\epsilon \gt 0\) Lemma 6.5.2 implies that
\(K=L\) as required.
Subsubsection 6.5.1.2 Linearity of limits
No time to rest! Let’s get working on the linearity of limits. We prove this by breaking the result down into two simpler lemmas.
Lemma 6.5.3.
Let
\(a,c \in \mathbb{R}\) and let
\((x_n)\) be a sequence that converges to
\(a\text{.}\) The sequence
\((c \cdot x_n)\) converges to
\(c \cdot a\text{.}\)
Lemma 6.5.4.
Let
\(a,b \in \mathbb{R}\) and let
\((x_n)\) and
\((y_n)\) be sequences so that
\(x_n \to a\) and
\(y_n \to b\text{.}\) The sequence
\((z_n) = (x_n+y_n)\) converges to
\(a+b\text{.}\)
Once we prove both of these, the linearity of limits follows quite directly:
\begin{align*}
\lim _{n\to \infty }(c\cdot x_{n} + d\cdot y_{n}) \amp =
\lim _{n\to \infty }(c\cdot x_{n}) + \lim_{n\to \infty} (d\cdot y_{n})\\
\amp= c\cdot \lim _{n\to \infty }( x_{n}) + d\cdot\lim_{n\to \infty} (y_{n}).
\end{align*}
The first of these lemmas is a little easier than the second, so we’ll start there. And, as usual, we start with scratchwork. Notice that when
\(c=0\) the result simplifies down to the statement that the constant sequence
\(x_n=0\) converges to
\(0\text{.}\) This is just
Example 6.4.3 and we can recycle that proof. So since we know how to prove the case
\(c=0\text{,}\) we can now work on
\(c \neq 0\text{.}\)
Notice that the statement is really a conditional. If \(x_n \to a\) then \(c \cdot x_n \to c\cdot a\text{.}\) We’ll assume that \(x_n \to a\) and then work towards showing that \(c \cdot x_n \to c\cdot a\text{.}\) To do this we have to prove that for all \(\epsilon \gt 0\text{,}\) there is some \(N \in \mathbb{N}\) so that for all \(n \in \mathbb{N}\) if \(n \gt N\) then \(|c x_n - c a| \lt \epsilon\text{.}\) Let’s manipulate this inequality a little:
\begin{equation*}
|c x_n - c a| = |c| |x_n-a|
\end{equation*}
and so it suffices for us to show that \(|x_n-a| \lt \frac{\epsilon}{|c|}\text{.}\)
Well, now we can put our assumption that
\(x_n \to a\) to use. That assumption tells us that for
any \(\epsilon_x \gt 0\text{,}\) there is
\(N_x \in \mathbb{N}\) so that for all
\(n \in \mathbb{N}\) when
\(n \gt N_x\) then
\(|x_n-a| \lt \epsilon_x\text{.}\) We are being careful to label those constants with the subscript
\(x\) to help remind us that those constants describe the convergence of
\(x_n \to a\text{.}\)
Since this works for
any \(\epsilon_x\text{,}\) we are free to set
\(\epsilon_x = \frac{\epsilon}{|c|}\text{.}\) Then we know there is
\(N_x\) so that if
\(n \gt N_x\) then
\(|x_n-a| \lt \frac{\epsilon}{|c|}\) and thus
\(|c||x_n-a| \lt \epsilon\text{,}\) just as we need. All that remains is to write it up as a neat proof.
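The choice \(\epsilon_x = \frac{\epsilon}{|c|}\) can be sanity-checked numerically. The sequence \(x_n = 1/n \to 0\) and the constant \(c = 5\) below are arbitrary illustrative choices, not part of the lemma:

```python
# Numerical sanity check of the scratchwork: to get |c*x_n - c*a| < eps
# it suffices to have |x_n - a| < eps/|c|.  We use x_n = 1/n -> a = 0
# and c = 5, both arbitrary illustrative choices.
c, a, eps = 5.0, 0.0, 0.01
eps_x = eps / abs(c)          # the epsilon we feed to x_n -> a
N = int(1 / eps_x) + 1        # for x_n = 1/n, n > N forces 1/n < eps_x

for n in range(N + 1, N + 1000):
    x_n = 1 / n
    assert abs(x_n - a) < eps_x          # convergence control on x_n
    assert abs(c * x_n - c * a) < eps    # the bound we actually wanted
print("bound holds from n =", N + 1, "onwards")
```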
Proof of Lemma 6.5.3.
Let
\(\epsilon \gt 0\) and assume that
\(x_n \to a\text{.}\) We split the proof into two cases,
\(c = 0\) and
\(c \neq 0\text{.}\)
When \(c=0\text{,}\) then we have that \(c \cdot x_n = 0\text{,}\) and hence we trivially have
\begin{equation*}
|c\cdot x_n - c \cdot a| = |0 - 0| \lt \epsilon
\end{equation*}
Thus \(0 x_n \to 0\text{.}\)
So now assume that \(c \neq 0\text{.}\) Since \(x_n \to a\text{,}\) we know that there exists \(N_x \in \mathbb{N}\) so that for all \(n \in \mathbb{N}\) when \(n \gt N_x\text{,}\) we have \(|x_n-a|\lt \frac{\epsilon}{|c|}\text{.}\) Let \(N= N_x\) and then provided \(n \gt N\text{,}\)
\begin{equation*}
|c\cdot x_n - c\cdot a| = |c| |x_n-a| \lt \epsilon.
\end{equation*}
And thus \(c\cdot x_n \to c\cdot a\) as required.
We can actually clean this proof up and write it as a single case. We had to separate out the case
\(c=0\) so that we did not divide
\(\epsilon\) by
\(0\text{.}\) However, we should remember that we do have some flexibility. Here is an alternate, slightly cleaner proof.
Second proof of Lemma 6.5.3.
Let \(\epsilon \gt 0\) and assume that \(x_n \to a\text{.}\) We know that there exists \(N_x \in \mathbb{N}\) so that for all \(n \in \mathbb{N}\) when \(n \gt N_x\text{,}\) we have \(|x_n-a|\lt \frac{\epsilon}{|c|+1}\text{.}\) Let \(N= N_x\) and then provided \(n \gt N\text{,}\)
\begin{equation*}
|c\cdot x_n - c\cdot a| = |c| |x_n-a| \lt \frac{|c| \epsilon}{|c|+1} \lt \epsilon.
\end{equation*}
And thus \(c\cdot x_n \to c\cdot a\) as required.
Let us now turn to
Lemma 6.5.4. Notice that, again, it is really a conditional: “if those sequences converge to
\(a\) and
\(b\text{,}\) then their sum converges to
\(a+b\text{.}\)” So our proof will start by assuming the hypothesis is true and then working our way to the conclusion. We start by assuming that
\(x_n \to a\) and
\(y_n \to b\) and, as is always the case, it is a good idea to write down the meaning of the things that we have assumed and also to write down the meaning of what we want to show.
Our assumptions that \(x_n \to a\) and \(y_n \to b\) mean:
-
for any \(\epsilon_x \gt 0\) there is some \(N_x \in\mathbb{N}\) so that for all \(n \in \mathbb{N}\text{,}\) if \(n \gt N_x\) then \(|x_n-a| \lt \epsilon_x\text{,}\) and
-
for any \(\epsilon_y \gt 0\) there is some \(N_y \in\mathbb{N}\) so that for all \(n \in \mathbb{N}\text{,}\) if \(n \gt N_y\) then \(|y_n-b| \lt \epsilon_y\text{.}\)
And we wish to show that
-
for any \(\epsilon \gt 0\) there is some \(N \in \mathbb{N}\) so that for all \(n \in \mathbb{N}\) if \(n \gt N\) then
\begin{equation*}
|(x_n+y_n)-(a+b)| \lt \epsilon.
\end{equation*}
The triangle inequality,
Theorem 5.4.6, helps us here. It tells us how to bound the quantity
\(|(x_n+y_n)-(a+b)|\) by
\(|x_n-a|\) and
\(|y_n-b|\text{:}\)
\begin{equation*}
|(x_n+y_n)-(a+b)| = |(x_n-a)+(y_n-b)| \leq |x_n-a| +|y_n-b|
\end{equation*}
And then since we have assumed that \((x_n), (y_n)\) converge to \(a,b\text{,}\) we know that by making \(n\) very big, we can make both \(|x_n-a|\) and \(|y_n-b|\) very small. This then implies that we can make \(|(x_n+y_n)-(a+b)|\) very small. In particular, if we can make both \(|x_n-a|\) and \(|y_n-b|\) smaller than \(\frac{\epsilon}{2}\text{,}\) then the triangle inequality tells us that \(|(x_n+y_n)-(a+b)|\) is smaller than \(\epsilon\text{.}\) This is precisely what we need to prove the result.
Time to use our assumptions \(x_n \to a\) and \(y_n \to b\text{.}\) Since the definition of convergence works for any choice of \(\epsilon\text{,}\) we can pick \(\epsilon_x = \epsilon_y = \frac{\epsilon}{2}\text{.}\) Then
-
there is \(N_x\) so that when \(n \gt N_x\text{,}\) \(|x_n-a| \lt \epsilon_x = \frac{\epsilon}{2}\text{,}\) and
-
there is \(N_y\) so that when \(n \gt N_y\text{,}\) \(|y_n-b| \lt \epsilon_y = \frac{\epsilon}{2}\text{.}\)
This means that for any \(n \gt \max\{N_x,N_y\}\) we have \(|x_n-a|+|y_n-b| \lt \epsilon\text{,}\) which, in turn, guarantees that \(|(x_n+y_n)-(a+b)| \lt\epsilon\text{.}\) Now we just have to tidy it up and write it up as a neat proof.
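This \(\epsilon/2\) argument can be sketched numerically; the sequences \(x_n = 1/n\) and \(y_n = 1/n^2\) (both converging to \(0\)) are arbitrary illustrative choices:

```python
# The eps/2 trick for sums, checked on the illustrative sequences
# x_n = 1/n -> 0 and y_n = 1/n**2 -> 0, so x_n + y_n -> 0.
eps = 0.01
half = eps / 2
N_x = int(1 / half) + 1            # 1/n < eps/2 once n > N_x
N_y = int((1 / half) ** 0.5) + 1   # 1/n**2 < eps/2 once n > N_y
N = max(N_x, N_y)

for n in range(N + 1, N + 1000):
    assert abs(1 / n - 0) < half              # |x_n - a| < eps/2
    assert abs(1 / n**2 - 0) < half           # |y_n - b| < eps/2
    assert abs((1 / n + 1 / n**2) - 0) < eps  # |(x_n+y_n)-(a+b)| < eps
print("sum sequence within", eps, "of its limit for all n >", N)
```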
Proof of Lemma 6.5.4.
Assume that
\(x_n \to a\) and
\(y_n \to b\text{.}\) We will show that
\(z_n =x_n+y_n \to a+b\text{.}\)
Let
\(\epsilon \gt 0\text{.}\) Then since
\(x_n \to a\text{,}\) we know that there exists
\(N_x \in \mathbb{N}\) so that for all
\(n \in \mathbb{N}\) if
\(n \gt N_x\) then
\(|x_n-a| \lt \frac{\epsilon}{2}\text{.}\) Similarly, since
\(y_n \to b\text{,}\) we know that there exists
\(N_y \in \mathbb{N}\) so that for all
\(n \in \mathbb{N}\) if
\(n \gt N_y\) then
\(|y_n-b| \lt \frac{\epsilon}{2}\text{.}\)
Now pick \(N = \max\{N_x, N_y\}\text{.}\) Then for all \(n \in \mathbb{N}\) with \(n \gt N\text{,}\) we have
\begin{align*}
|z_n - (a+b)| \amp = |x_n-a + y_n -b| \\
\amp \leq |x_n-a| + |y_n-b|\\
\amp \lt \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon.
\end{align*}
and thus \(z_n \to (a+b)\) as required.
Now that we have proved these two lemmas we can complete our proof of the linearity of limits:
Proof of the linearity of limits.
Let \(a,b,c,d \in \mathbb{R}\) and let \((x_n)\) and \((y_n)\) be sequences so that
\begin{equation*}
\lim _{n\to \infty }x_{n}=a \qquad \text{and} \qquad \lim _{n\to \infty }y_{n}=b.
\end{equation*}
Then Lemma 6.5.3 tells us that
\begin{equation*}
\lim _{n\to \infty } c\cdot x_{n}=c\cdot a \qquad \text{and} \qquad \lim _{n\to \infty }d\cdot y_{n}=d\cdot b.
\end{equation*}
Since the sequences \((c\cdot x_n)\) and \((d\cdot y_n)\) converge, Lemma 6.5.4 then gives
\begin{equation*}
\lim _{n\to \infty } \left(c\cdot x_{n} + d\cdot y_n\right) =
\lim _{n\to \infty } c\cdot x_{n} + \lim_{n\to\infty} d\cdot y_n =
c \cdot a + d \cdot b
\end{equation*}
as required.
Notice that by working in this order, we have been careful to first establish the convergence of the sequences
\((c x_n)\) and
\((d y_n)\text{,}\) via
Lemma 6.5.3, before establishing the convergence of their sum. This is necessary because
Lemma 6.5.4 only works for the sum of convergent sequences.
Subsubsection 6.5.1.3 Product of limits
Again, the statement is really an implication: “if
\(x_n\to a\) and
\(y_n \to b\) then
\(x_n \cdot y_n \to a \cdot b\)”. So we assume that
\(x_n \to a\) and
\(y_n \to b\text{.}\) This means, roughly speaking, that when
\(n\) is really big, we know that
\(|x_n-a|\) and
\(|y_n-b|\) are small. And from that we need to show that
\(|x_n \cdot y_n - a\cdot b|\) is also small.
So we have to somehow express
\begin{equation*}
|x_n \cdot y_n - a\cdot b| \qquad \text{ in terms of } \qquad |x_n-a| \text{ and } |y_n-b|
\end{equation*}
and we can do it by carefully adding and subtracting terms.
\begin{align*}
(x_n \cdot y_n - a\cdot b) \amp =
(x_n \cdot y_n - a\cdot b) + \underbrace{(x_n \cdot b - x_n \cdot b)}_{=0}\\
\amp = x_n(y_n-b) + b (x_n-a)
\end{align*}
So then, a little application of the triangle inequality gives
\begin{align*}
|x_n \cdot y_n - a \cdot b| \amp = |x_n(y_n-b) + b(x_n-a) | \\
\amp \leq |x_n(y_n-b)| + |b(x_n-a)| \\
\amp = |x_n|\cdot |y_n-b| + |b|\cdot |x_n-a|
\end{align*}
Similar to the argument we used to prove
Lemma 6.5.4, we see that if we can keep
\(|x_n|\cdot |y_n-b| \lt \epsilon/2\) and
\(|b|\cdot |x_n-a| \lt \epsilon/2\text{,}\) then we are done. But, how can we do that? Well, we can recycle the ideas from the proof of
Lemma 6.5.3 to keep
\(|b|\cdot |x_n-a| \lt \epsilon/2\text{,}\) i.e.
\(|x_n-a| \lt \frac{\epsilon}{2|b|+1}\text{,}\) since
\(|b|\) is a constant. But that argument doesn’t work for the other term,
\(|x_n|\cdot |y_n-b|\) since
\(x_n\) need not be a constant.
However, we do know that when \(n\) is very large, \(x_n\) must be close to \(a\text{,}\) its limit. So, at least when \(a \neq 0\text{,}\) we should be able to bound \(\frac{|a|}{2} \leq |x_n| \leq \frac{3|a|}{2}\) for all sufficiently large \(n\text{.}\) This, in turn, would allow us to bound
\begin{equation*}
\frac{|a|}{2} | y_n -b | \leq |x_n| \cdot |y_n-b| \leq \frac{3|a|}{2} |y_n-b|
\end{equation*}
And now, we use our control over \(|y_n-b|\text{,}\) to make sure that \(\frac{3|a|}{2}|y_n-b| \lt \epsilon/2\text{.}\)
Let us make this intermediate result, bounding
\(|x_n|\text{,}\) into a lemma. It takes a little careful juggling of inequalities, and the reverse triangle inequality,
Corollary 5.4.7, helps us. Then we can use the lemma to finish our proof.
Lemma 6.5.5.
Let \(a \in \mathbb{R}\) with \(a \neq 0\text{,}\) and let \((x_n)\) be a sequence that converges to \(a\text{.}\) Then there is some \(N \in \mathbb{N}\) so that for all \(n \in \mathbb{N}\text{,}\) when \(n \gt N\text{,}\) we have
\begin{equation*}
\frac{|a|}{2} \lt |x_n| \lt \frac{3|a|}{2}.
\end{equation*}
Proof.
Let \(a\) and \((x_n)\) be as given, and let \(\epsilon = \frac{|a|}{2}\text{,}\) which is positive since \(a \neq 0\text{.}\) Then since \(x_n \to a\text{,}\) we know that there is \(N \in \mathbb{N}\) so that for all integers \(n \gt N\text{,}\)
\begin{equation*}
|x_n - a| \lt \frac{|a|}{2}.
\end{equation*}
The reverse triangle inequality, Corollary 5.4.7, tells us that
\begin{equation*}
|x_n-a| \geq \left||x_n| - |a| \right|
\end{equation*}
and hence we know that
\begin{equation*}
\left||x_n| - |a| \right| \lt \frac{|a|}{2}
\end{equation*}
Unpacking the absolute value gives
\begin{equation*}
-\frac{|a|}{2} \lt |x_n|-|a| \lt \frac{|a|}{2}
\end{equation*}
from which the result quickly follows by adding \(|a|\) to both sides.
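To see Lemma 6.5.5 in action numerically, here is a sketch with the arbitrary illustrative choices \(a = 2\) and \(x_n = a + (-1)^n/n\text{:}\)

```python
# Numerical illustration of Lemma 6.5.5 with the arbitrary choices
# a = 2 and x_n = a + (-1)**n / n, a sequence converging to a.
a = 2.0
eps = abs(a) / 2                  # the epsilon used in the lemma's proof

def x(n):
    return a + (-1) ** n / n

# For this sequence |x_n - a| = 1/n, so any n > 1 gives |x_n - a| < eps.
N = 1
for n in range(N + 1, N + 1000):
    assert abs(x(n) - a) < eps
    assert abs(a) / 2 < abs(x(n)) < 3 * abs(a) / 2   # the sandwich bound
print("sandwich holds for all n >", N)
```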
Proof of the product of limits.
Let \((x_n)\) and \((y_n)\) be sequences so that \(x_n \to a\) and \(y_n \to b\text{.}\) Let \(\epsilon \gt 0\text{.}\) Then, since those sequences converge we know that
-
there is some \(N_x\) so that for all \(n \gt N_x\) we have \(|x_n-a| \lt \frac{\epsilon}{2|b|+1}\text{,}\) and
-
there is some \(N_y\) so that for all \(n \gt N_y\) we have \(|y_n-b| \lt \frac{\epsilon}{3|a|+2}\text{.}\)
Notice that we have chosen denominators \(2|b|+1\) and \(3|a|+2\) to avoid the possibility of dividing by zero when \(a\) or \(b\) is zero. We also know that
-
there is some \(N_a\) so that for all \(n \gt N_a\text{,}\) we have \(|x_n| \lt \frac{3|a|}{2}+1\text{.}\) When \(a \neq 0\) this follows from Lemma 6.5.5, while when \(a = 0\) it follows from the convergence \(x_n \to a\) with \(\epsilon_x = 1\text{.}\)
Now assume that \(n \gt \max\{N_x, N_y, N_a\}\text{,}\) then
\begin{align*}
|x_n y_n - a b| \amp = |x_n (y_n-b) + b (x_n-a)|\\
\amp \leq |x_n| |y_n-b| + |b| |x_n-a|\\
\amp \leq \left(\frac{3|a|}{2}+1\right) |y_n-b| + |b||x_n-a| \amp \text{by bound on }|x_n|\\
\amp \lt \left(\frac{3|a|}{2}+1\right) \cdot \frac{\epsilon}{3|a|+2} + |b|\cdot \frac{\epsilon}{2|b|+1} \amp \text{convergence of } x_n, y_n\\
\amp \lt \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon.
\end{align*}
and thus \(x_n \cdot y_n \to a\cdot b\) as required.
Subsubsection 6.5.1.4 Ratio of limits
Again, the reciprocal-of-limit property is a conditional statement, so to prove it, we assume the hypothesis, that \(y_n \to b\) and \(b\neq 0\text{,}\) and then show that \(\left(\dfrac{1}{y_n}\right)\to \dfrac{1}{b}\text{.}\)
-
So the assumption tells us that \(b\neq 0\) and for all \(\epsilon_y \gt 0\text{,}\) there is some \(N_y \in \mathbb{N}\) so that for all \(n \in \mathbb{N}\) when \(n \gt N_y\) then \(|y_n-b| \lt \epsilon_y\text{.}\)
-
While to prove the conclusion we need to show that for all \(\epsilon \gt 0\) there is some \(N \in \mathbb{N}\) so that for all \(n \in \mathbb{N}\) when \(n \gt N\) then \(\left|\frac{1}{y_n}-\frac{1}{b}\right| \lt \epsilon\text{.}\)
Obviously we need to somehow relate this final inequality, \(\left|\frac{1}{y_n}-\frac{1}{b}\right| \lt \epsilon\text{,}\) to the inequality we get from the convergence of \(y_n\to b\text{,}\) namely \(|y_n-b| \lt \epsilon_y\text{.}\) So, time to do some rewriting:
\begin{align*}
\left|\frac{1}{y_n}-\frac{1}{b}\right| \amp = \left| \frac{(b-y_n)}{b\cdot y_n} \right|\\
\amp = \frac{|b-y_n|}{|b\cdot y_n|}
= \frac{1}{|b|}\cdot \frac{1}{|y_n|} \cdot |y_n-b|
\end{align*}
And hence we need to choose \(N\) so that we can guarantee that
\begin{equation*}
\left|\frac{1}{y_n}-\frac{1}{b}\right| = \frac{1}{|b|}\cdot \frac{1}{|y_n|} \cdot |y_n-b| \lt \epsilon,
\end{equation*}
or equivalently:
\begin{equation*}
|y_n-b| \lt |b| \cdot |y_n| \cdot \epsilon.
\end{equation*}
Now since we have control over the size of
\(|y_n-b|\text{,}\) we can make it really small. But, just as was the case when we proved the
product of limits, we first have to bound
\(|y_n|\text{.}\) Thankfully we did all that hard work already when we proved
Lemma 6.5.5. That lemma tells us that there is some
\(N_b\) so that when
\(n \gt N_b\) we know that
\begin{equation*}
\frac{|b|}{2} \lt |y_n| \lt \frac{3|b|}{2}
\end{equation*}
Thus when \(n \gt N_b\text{,}\) we know that \(\frac{|b|^2}{2}\cdot \epsilon \lt |b| \cdot |y_n| \cdot \epsilon\text{.}\) So, if we can guarantee that
\begin{equation*}
|y_n-b| \lt \frac{|b|^2}{2}\cdot \epsilon
\end{equation*}
then we have
\begin{equation*}
|y_n-b| \lt \frac{|b|^2}{2}\cdot \epsilon \lt |b| \cdot |y_n| \cdot \epsilon
\end{equation*}
and so \(\left|\frac{1}{y_n}-\frac{1}{b}\right| \lt \epsilon\) as required. Therefore we set \(\epsilon_y = \frac{|b|^2}{2}\cdot \epsilon \text{.}\)
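The choice \(\epsilon_y = \frac{|b|^2}{2}\cdot\epsilon\) can also be checked numerically; the values \(b = 2\) and \(y_n = b + 1/n\) below are arbitrary illustrative choices:

```python
# Reciprocal-of-limit scratchwork, checked with the illustrative
# choices b = 2 and y_n = b + 1/n, a sequence converging to b.
b, eps = 2.0, 0.001
eps_y = (b ** 2 / 2) * eps        # the epsilon we feed to y_n -> b

# For y_n = b + 1/n we have |y_n - b| = 1/n, so n > N works once 1/N <= eps_y.
N = int(1 / eps_y) + 1
for n in range(N + 1, N + 1000):
    y_n = b + 1 / n
    assert abs(y_n - b) < eps_y              # convergence control
    assert abs(y_n) > abs(b) / 2             # bound from Lemma 6.5.5
    assert abs(1 / y_n - 1 / b) < eps        # the desired conclusion
print("reciprocals within", eps, "of 1/b for all n >", N)
```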
The proof is ready to go. We just have to tidy things up and be careful of our various
\(N\)’s and
\(\epsilon\)’s.
Proof of the reciprocal of a limit.
Let
\(b\in \mathbb{R}\) with
\(b \neq 0\) and let
\((y_n)\) be a sequence that converges to
\(b\text{.}\)
Now let
\(\epsilon \gt 0\text{.}\) Since
\(y_n \to b\text{,}\) Lemma 6.5.5 implies that there is
\(N_b \in \mathbb{N}\) so that for all integers \(n \gt N_b\)
\begin{equation*}
\frac{|b|}{2} \lt |y_n|.
\end{equation*}
Additionally, since \(y_n \to b\text{,}\) we can find \(N_y \in \mathbb{N}\) so that for all integers \(n \gt N_y\text{,}\)
\begin{equation*}
|y_n-b|\lt \frac{|b|^2}{2} \cdot \epsilon.
\end{equation*}
Thus, if we pick \(N=\max\{N_b,N_y\}\text{,}\) then for all integers \(n \gt N\text{,}\) we have
\begin{align*}
\left| \frac{1}{y_n}-\frac{1}{b} \right| \amp
= \frac{|y_n-b|}{|b|\cdot |y_n|}\\
\amp \lt |y_n-b| \cdot \frac{2}{|b|^2}\\
\amp
\lt \frac{|b|^2}{2} \cdot \epsilon \cdot \frac{2}{|b|^2} = \epsilon
\end{align*}
And therefore the result follows.
Now that we have proved both the product of limits property and the reciprocal of limits property, we get the ratio of limits property quite directly.
Proof of the ratio of limits.
Let \((x_n)\) and \((y_n)\) be sequences so that \(x_n \to a\) and \(y_n \to b\) with \(b \neq 0\text{.}\) By the reciprocal of limit property we know that \(\frac{1}{y_n} \to \frac{1}{b}\text{,}\) and so the product of limits gives
\begin{equation*}
\lim_{n \to \infty} \frac{x_n}{y_n} =
\lim_{n \to \infty} x_n \cdot \frac{1}{y_n} =
a \cdot \frac{1}{b} = \frac{a}{b}
\end{equation*}
as required.
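Finally, all four limit laws can be sanity-checked together numerically. The sequences \(x_n = 3 + 1/n\) and \(y_n = 2 - 1/n^2\) and the constants below are arbitrary illustrative choices:

```python
# The limit laws, checked numerically on x_n = 3 + 1/n -> 3 and
# y_n = 2 - 1/n**2 -> 2 (illustrative choices).
a, b = 3.0, 2.0
n = 10 ** 6                   # a "large" index
x_n = 3 + 1 / n
y_n = 2 - 1 / n ** 2

tol = 1e-5
assert abs((5 * x_n + 4 * y_n) - (5 * a + 4 * b)) < tol   # linearity
assert abs(x_n * y_n - a * b) < tol                       # product
assert abs(1 / y_n - 1 / b) < tol                         # reciprocal
assert abs(x_n / y_n - a / b) < tol                       # ratio
print("all four limit laws agree to within", tol)
```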