(c) 1998-2011 Benjamin Crowell, licensed under the Creative Commons Attribution-ShareAlike license. Photo credits are given at the end of the Adobe Acrobat version.

# Chapter 3. Limits and continuity

## 3.1 Continuity

Intuitively, a continuous function is one whose graph has no sudden jumps in it; the graph is all a single connected piece. Formally, a function f(x) is defined to be continuous if for any real x and any infinitesimal dx, f(x+dx)-f(x) is infinitesimal.

##### Example 1
Let the function f be defined by f(x)=0 for x≤ 0, and f(x)=1 for x>0. Then f(x) is discontinuous, since for dx>0, f(0+dx)-f(0)=1, which isn't infinitesimal.

a / Example 1. The black dot indicates that the endpoint of the lower ray is part of the ray, while the white one shows the contrary for the ray on the top.
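The definition of continuity can be acted out numerically. Below is a quick Python sketch (mine, not from the text): for the step function of Example 1, f(0+dx)−f(0) refuses to shrink as dx does, while for a continuous function like x² the difference shrinks with dx.

```python
# Test f(x + dx) - f(x) numerically at x = 0 for the step function of
# Example 1: the difference stays 1 no matter how small dx > 0 gets.
def f(x):
    return 0 if x <= 0 else 1

for dx in (1e-3, 1e-9, 1e-15):
    assert f(0 + dx) - f(0) == 1        # never shrinks: discontinuous

# Contrast with a continuous function: the difference shrinks with dx.
def g(x):
    return x * x

assert abs(g(0 + 1e-9) - g(0)) < 1e-17  # tiny, as continuity demands
```

Of course a finite dx is only a stand-in for an infinitesimal one, but the trend is the point: the step function's difference never gets small.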

If a function is discontinuous at a given point, then it is not differentiable at that point. On the other hand, the example y=|x| shows that a function can be continuous without being differentiable.

In most cases, there is no need to invoke the definition explicitly in order to check whether a function is continuous. Most of the functions we work with are defined by putting together simpler functions as building blocks. For example, let's say we're already convinced that the functions defined by g(x)=3x and h(x)=sin x are both continuous. Then if we encounter the function f(x)=sin(3x), we can tell that it's continuous because its definition corresponds to f(x)=h(g(x)). The functions g and h have been set up like a bucket brigade, so that g takes the input, calculates the output, and then hands it off to h for the final step of the calculation. This method of combining functions is called composition. The composition of two continuous functions is also continuous. Just watch out for division. The function f(x)=1/x is continuous everywhere except at x=0, so for example 1/sin(x) is continuous everywhere except at multiples of π, where the sine has zeroes.

### The intermediate value theorem

Another way of thinking about continuous functions is given by the intermediate value theorem. Intuitively, it says that if you are moving continuously along a road, and you get from point A to point B, then you must also visit every other point along the road; only by teleporting (by moving discontinuously) could you avoid doing so. More formally, the theorem states that if y is a continuous real-valued function on the real interval from a to b, and if y takes on values y1 and y2 at certain points within this interval, then for any y3 between y1 and y2, there is some real x in the interval for which y(x)=y3.

b / The intermediate value theorem states that if the function is continuous, it must pass through y3.

The intermediate value theorem seems so intuitively appealing that if we want to set out to prove it, we may feel as though we're being asked to prove a proposition such as, “a number greater than 10 exists.” If a friend wanted to bet you a six-pack that you couldn't prove this with complete mathematical rigor, you would have to get your friend to spell out very explicitly what she thought were the facts about integers that you were allowed to start with as initial assumptions. Are you allowed to assume that 1 exists? Will she grant you that if a number n exists, so does n+1? The intermediate value theorem is similar. It's stated as a theorem about certain types of functions, but its truth isn't so much a matter of the properties of functions as the properties of the underlying number system. For the reader with a interest in pure mathematics, I've discussed this in more detail on page 154 and given an abbreviated proof. (Most introductory calculus texts do not prove it at all.)

##### Example 2

◊ Show that there is a solution to the equation 10^x + x = 1000.

◊ We expect there to be a solution near x=3, where the function f(x) = 10^x + x takes the value f(3) = 1003, which is just a little too big. On the other hand, f(2) = 102 is much too small. Since f has values above and below 1000 on the interval from 2 to 3, and f is continuous, the intermediate value theorem proves that a solution exists between 2 and 3. If we wanted to find a better numerical approximation to the solution, we could do it using Newton's method, which is introduced in section 5.1.
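The intermediate value theorem also justifies a simple numerical method: repeatedly bisect the interval [2, 3], keeping the half on which f changes sign. This Python sketch uses bisection (my choice here; the text itself points to Newton's method in section 5.1):

```python
# Bisection on f(x) = 10**x + x - 1000: the intermediate value theorem
# guarantees a root in [2, 3] because f changes sign there, and halving
# the bracketing interval repeatedly traps the root.
def f(x):
    return 10**x + x - 1000

lo, hi = 2.0, 3.0
assert f(lo) < 0 < f(hi)        # sign change: the theorem applies
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid                # root is in the upper half
    else:
        hi = mid                # root is in the lower half

root = (lo + hi) / 2
assert abs(f(root)) < 1e-6      # essentially an exact solution
```

Bisection is slower than Newton's method but needs nothing beyond the sign change the theorem already gave us.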

##### Example 3
◊ Show that there is at least one solution to the equation cos x=x, and give bounds on its location.

◊ This is a transcendental equation, and no amount of fiddling with algebra and trig identities will ever give a closed-form solution, i.e., one that can be written down with a finite number of arithmetic operations to give an exact result. However, we can easily prove that at least one solution exists, by applying the intermediate value theorem to the function x-cos x. The cosine function is bounded between -1 and 1, so this function must be negative for x<-1 and positive for x>1. By the intermediate value theorem, there must be a solution in the interval -1 ≤ x ≤ 1. The graph, c, verifies this, and shows that there is only one solution.

c / The function x − cos x constructed in example 3.
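For this particular equation there is also a tidy numerical trick not used in the text: fixed-point iteration. Because the slope of cos is smaller than 1 in magnitude near the solution, repeatedly replacing x with cos x converges to the solution the intermediate value theorem guarantees.

```python
import math

# Fixed-point iteration for cos x = x: |d/dx cos x| < 1 near the
# solution, so repeated application of cos converges to it.
x = 1.0
for _ in range(100):
    x = math.cos(x)

assert -1 <= x <= 1                     # inside the interval from the theorem
assert abs(math.cos(x) - x) < 1e-12     # numerically a solution
```

The iteration settles near x ≈ 0.739, consistent with the single crossing visible in graph c.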

##### Example 4

◊ Prove that every odd-order polynomial P with real coefficients has at least one real root x, i.e., a point at which P(x)=0.

◊ Example 3 might have given the impression that there was nothing to be learned from the intermediate value theorem that couldn't be determined by graphing, but this example clearly can't be solved by graphing, because we're trying to prove a general result for all polynomials.

To see that the restriction to odd orders is necessary, consider the polynomial x²+1, which has no real roots because x²+1 ≥ 1 > 0 for any real number x.

To fix our minds on a concrete example for the odd case, consider the polynomial P(x) = x³ − x + 17. For large values of x, the linear and constant terms will be negligible compared to the x³ term, and since x³ is positive for large positive values of x and negative for large negative ones, it follows that P is sometimes positive and sometimes negative.

Making this argument more general and rigorous, suppose we had a polynomial of odd order n that always had the same sign for real x. Then by the transfer principle the same would hold for any hyperreal value of x. Now if x is infinite then the lower-order terms are infinitesimal compared to the xⁿ term, and the sign of the result is determined entirely by the xⁿ term; but since n is odd, xⁿ and (−x)ⁿ have opposite signs, and therefore P(x) and P(−x) have opposite signs. This is a contradiction, so we have disproved the assumption that P always had the same sign for real x. Since P is sometimes negative and sometimes positive, we conclude by the intermediate value theorem that it is zero somewhere.

##### Example 5
◊ Show that the equation x=sin 1/x has infinitely many solutions.

◊ This is another example that can't be solved by graphing; there is clearly no way to prove, just by looking at a graph like d, that it crosses the x axis infinitely many times. The graph does, however, help us to gain intuition for what's going on. As x gets smaller and smaller, 1/x blows up, and sin 1/x oscillates more and more rapidly. The function f(x) = x − sin 1/x is undefined at 0, but it's continuous everywhere else, so we can apply the intermediate value theorem to any interval that doesn't include 0.

We want to prove that for any positive u, there exist values of x with 0<x<u at which f(x) takes each sign. Suppose that this fails for some real u, i.e., that f keeps one sign throughout 0<x<u. Then by the transfer principle, f would keep that sign for all hyperreal x in the same range. But for an infinitesimal x the sign of f is determined entirely by the sine term, since the sine term is finite and the linear term infinitesimal, and sin 1/x clearly can't have a single sign for all values of x less than u. This is a contradiction, so the proposition holds for any u, and it follows from the intermediate value theorem that there are infinitely many solutions to the equation.

d / The function x-sin 1/x.
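A numeric illustration (mine, not from the text): counting sign changes of f(x) = x − sin 1/x on a fine grid near 0 turns up many of them, and each sign change brackets a solution by the intermediate value theorem.

```python
import math

# Count sign changes of f(x) = x - sin(1/x) on a fine grid to the right
# of 0; each sign change brackets a solution of x = sin(1/x).
def f(x):
    return x - math.sin(1 / x)

xs = [0.001 + k * 1e-6 for k in range(200000)]   # grid on [0.001, 0.201]
signs = [f(x) > 0 for x in xs]
changes = sum(1 for a, b in zip(signs, signs[1:]) if a != b)
assert changes > 10   # many bracketed solutions, and more appear nearer 0
```

Refining the grid and moving its left edge closer to 0 produces ever more sign changes, in line with the oscillation argument above.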

### The extreme value theorem

In chapter 1, we saw that locating maxima and minima of functions may in general be fairly difficult, because there are so many different ways in which a function can attain an extremum: e.g., at an endpoint, at a place where its derivative is zero, or at a nondifferentiable kink. The following theorem allows us to make a very general statement about all these possible cases, assuming only continuity.

The extreme value theorem states that if f is a continuous real-valued function on the real-number interval defined by a ≤ x ≤ b, then f has maximum and minimum values on that interval, which are attained at specific points in the interval.

Let's first see why the assumptions are necessary. If we weren't confined to a finite interval, then y=x would be a counterexample, because it's continuous and doesn't have any maximum or minimum value. If we didn't assume continuity, then we could have a function defined as y=x for x<1, and y=0 for x ≥ 1; this function never gets bigger than 1, but it never attains a value of 1 for any specific value of x.

The extreme value theorem is proved, in a somewhat more general form, on page 157.

##### Example 6

◊ Find the maximum value of the polynomial P(x) = x³ + x² + x + 1 for −5 ≤ x ≤ 5.

◊ Polynomials are continuous, so the extreme value theorem guarantees that such a maximum exists. Suppose we try to find it by looking for a place where the derivative is zero. The derivative is 3x² + 2x + 1, and setting it equal to zero gives a quadratic equation, but application of the quadratic formula shows that it has no real solutions. It appears that the function doesn't have a maximum anywhere (even outside the interval of interest) that looks like a smooth peak. Since it doesn't have kinks or discontinuities, there is only one other type of maximum it could have, which is a maximum at one of its endpoints. Plugging in the limits, we find P(−5) = −104 and P(5) = 156, so we conclude that the maximum value on this interval is 156.
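Example 6's reasoning can be checked mechanically in Python (a sketch, not part of the text): the discriminant of the derivative is negative, the endpoint values match, and a brute-force scan of the interval finds nothing bigger than P(5).

```python
# Check Example 6: the derivative 3x^2 + 2x + 1 has no real zeros
# (negative discriminant), so the maximum must sit at an endpoint.
def P(x):
    return x**3 + x**2 + x + 1

disc = 2**2 - 4 * 3 * 1      # discriminant of 3x^2 + 2x + 1
assert disc < 0              # no interior critical points

assert P(-5) == -104 and P(5) == 156

# Brute-force scan of [-5, 5]: nothing beats the right endpoint.
grid_max = max(P(-5 + k / 1000) for k in range(10001))
assert grid_max == P(5)
```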

## 3.2 Limits

Historically, the calculus of infinitesimals as created by Newton and Leibniz was reinterpreted in the nineteenth century by Cauchy, Bolzano, and Weierstrass in terms of limits. All mathematicians learned both languages, and switched back and forth between them effortlessly, like the lady I overheard in a Southern California supermarket telling her mother, “Let's get that one, con los nuts.” Those who had been trained in infinitesimals might hear a statement using the language of limits, but translate it mentally into infinitesimals; to them, every statement about limits was really a statement about infinitesimals. To their younger colleagues, trained using limits, every statement about infinitesimals was really to be understood as shorthand for a limiting process. When Robinson laid the rigorous foundations for the hyperreal number system in the 1960's, a common objection was that it was really nothing new, because every statement about infinitesimals was really just a different way of expressing a corresponding statement about limits; of course the same could have been said about Weierstrass's work of the preceding century! In reality, all practitioners of calculus had realized all along that different approaches worked better for different problems; problem 13 on page 82 is an example of a result that is much easier to prove with infinitesimals than with limits.

The Weierstrass definition of a limit is this:

##### Definition of the limit

We say that ℓ is the limit of the function f(x) as x approaches a, written

lim_(x→a) f(x) = ℓ ,

if the following is true: for any real number ε > 0, there exists another real number δ > 0 such that for all x ≠ a in the interval a−δ ≤ x ≤ a+δ, the value of f lies within the range from ℓ−ε to ℓ+ε.

Intuitively, the idea is that if I want you to make f(x) close to ℓ, I just have to tell you how close, and you can tell me that it will be that close as long as x is within a certain distance of a.

In terms of infinitesimals, we have:

##### Definition of the limit

We say that ℓ is the limit of the function f(x) as x approaches a, written

lim_(x→a) f(x) = ℓ ,

if the following is true: for any nonzero infinitesimal number dx, the value of f(a+dx) is finite, and the standard part of f(a+dx) equals ℓ.

The two definitions are equivalent.

Sometimes a limit can be evaluated simply by plugging in numbers:

##### Example 7

◊ Evaluate

◊ Plugging in x=0, we find that the limit is 1.

In some examples, plugging in fails if we try to do it directly, but can be made to work if we massage the expression into a different form:

##### Example 8
◊ Evaluate

lim_(x→0) (2/x + 7)/(1/x + 8686)

◊ Plugging in x=0 fails because division by zero is undefined.

Intuitively, however, we expect that the limit will be well defined, and will equal 2, because for very small values of x, the numerator is dominated by the 2/x term, and the denominator by the 1/x term, so the 7 and 8686 terms will matter less and less as x gets smaller and smaller.

To demonstrate this more rigorously, a trick that works is to multiply both the top and the bottom by x, giving

(2 + 7x)/(1 + 8686x) ,

which equals 2 when we plug in x=0, so we find that the limit is 2.

This example is a little subtle, because when x equals zero, the function is not defined, and moreover it would not be valid to multiply both the top and the bottom by x. In general, it's not valid algebra to multiply both the top and the bottom of a fraction by 0, because the result is 0/0, which is undefined. But we didn't actually multiply both the top and the bottom by zero, because we never let x equal zero. Both the Weierstrass definition and the definition in terms of infinitesimals only refer to the properties of the function in a region very close to the limiting point, not at the limiting point itself.
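A numeric sketch (not from the text) of this limit: evaluating the expression at smaller and smaller x shows it settling toward 2, even though x = 0 itself is off-limits.

```python
# Evaluate (2/x + 7)/(1/x + 8686) at shrinking x: the values approach 2.
def f(x):
    return (2 / x + 7) / (1 / x + 8686)

vals = [f(10.0**-n) for n in (2, 4, 6, 8)]
assert abs(vals[-1] - 2) < 1e-3                  # close to the limit 2
assert all(abs(v - 2) > abs(w - 2)               # and getting closer
           for v, w in zip(vals, vals[1:]))
```

Notice that the approach only becomes visible once 1/x dwarfs 8686, i.e., for x much smaller than about 10⁻⁴; the first sample is still far from 2.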

This is an example in which the function was not well defined at a certain point, and yet the limit of the function was well defined as we approached that point. In a case like this, where there is only one point missing from the domain of the function, it is natural to extend the definition of the function by filling in the “gap tooth.” Example 10 below shows that this kind of filling-in procedure is not always possible.

e / Example 9, the function 1/x².

##### Example 9
◊ Investigate the limiting behavior of 1/x² as x approaches 0 and as x approaches 1.

◊ At x=1, plugging in works, and we find that the limit is 1.

At x=0, plugging in doesn't work, because division by zero is undefined. Applying the definition in terms of infinitesimals to the limit as x approaches 0, we need to find out whether 1/(0+dx)² is finite for infinitesimal dx, and if so, whether it always has the same standard part. But clearly 1/(0+dx)² = dx⁻² is always infinite, and we conclude that this limit is undefined.

f / Example 10, the function tan⁻¹(1/x).

##### Example 10
◊ Investigate the limiting behavior of f(x) = tan⁻¹(1/x) as x approaches 0.

◊ Plugging in doesn't work, because division by zero is undefined.

In the definition of the limit in terms of infinitesimals, the first requirement is that f(0+dx) be finite for infinitesimal values of dx. The graph makes this look plausible, and indeed we can prove that it is true by the transfer principle. For any real x we have -π/2 ≤ f(x) ≤ π/2, and by the transfer principle this holds for the hyperreals as well, and therefore f(0+dx) is finite.

The second requirement is that the standard part of f(0+dx) have a uniquely defined value. The graph shows that we really have two cases to consider, one on the right side of the graph, and one on the left. Intuitively, we expect that the standard part of f(0+dx) will equal π/2 for positive dx, and −π/2 for negative, and thus the second part of the definition will not be satisfied. For a more formal proof, we can use the transfer principle. For real x with 0<x<1, for example, f is always greater than π/4, so we conclude based on the transfer principle that f(0+dx) > π/4 for positive infinitesimal dx. But on similar grounds we can be sure that f(0+dx) < −π/4 when dx is negative and infinitesimal. Thus the standard part of f(0+dx) can have different values for different infinitesimal values of dx, and we conclude that the limit is undefined.

In examples like this, we can define a kind of one-sided limit, notated like this:

lim_(x→0⁻) tan⁻¹(1/x) = −π/2        lim_(x→0⁺) tan⁻¹(1/x) = π/2

where the notations x→0⁻ and x→0⁺ are to be read “as x approaches zero from below,” and “as x approaches zero from above.”
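These one-sided limits are easy to watch numerically (a sketch, not from the text):

```python
import math

# Watch tan^-1(1/x) from the right and from the left of x = 0:
# the values head to +pi/2 and -pi/2 respectively.
right = [math.atan(1 / x) for x in (1e-3, 1e-6, 1e-9)]
left = [math.atan(1 / x) for x in (-1e-3, -1e-6, -1e-9)]
assert abs(right[-1] - math.pi / 2) < 1e-8
assert abs(left[-1] + math.pi / 2) < 1e-8
```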

## 3.3 L'Hôpital's rule

Consider the limit

lim_(x→0) (sin x)/x

Plugging in doesn't work, because we get 0/0. Division by zero is undefined, both in the real number system and in the hyperreals. A nonzero number divided by a small number gives a big number; a nonzero number divided by a very small number gives a very big number; and a nonzero number divided by an infinitesimal number gives an infinite number. On the other hand, dividing zero by zero means looking for a solution to the equation 0=0q, where q is the result of the division. But any q is a solution of this equation, so even speaking casually, it's not correct to say that 0/0 is infinite; it's not infinite, it's anything we like.

Since plugging in zero didn't work, let's try estimating the limit by plugging in a number for x that's small, but not zero. On a calculator,

It looks like the limit is 1. We can confirm our conjecture to higher precision using Yacas's ability to do high-precision arithmetic:

```   N(Sin(10^-20)/10^-20,50)
0.99999999999999999
9999999999999999999
99998333333333
```

It's looking pretty one-ish. This is the idea of the Weierstrass definition of a limit: it seems like we can get an answer as close to 1 as we like, if we're willing to make x as close to 0 as necessary. The graph helps to make this plausible.

g / The graph of sin x/x.

The general idea here is that for small values of x, the small-angle approximation sin x ≈ x obtains, and as x gets smaller and smaller, the approximation gets better and better, so sin x/x gets closer and closer to 1.

But we still haven't proved rigorously that the limit is exactly 1. Let's try using the definition of the limit in terms of infinitesimals: we need to show that for any nonzero infinitesimal dx, the quotient sin dx/dx is finite and has standard part equal to 1.

We can check our work using Inf:

```   : (sin d)/d
1+(-0.16667)d^2+...
```

(The ... is where I've snipped trailing terms from the output.)

This is a special case of the following rule for calculating limits involving 0/0:

##### L'Hôpital's rule (simplest form)
If u and v are functions with u(a) = 0 and v(a) = 0, the derivatives u'(a) and v'(a) are defined, and v'(a) ≠ 0, then

lim_(x→a) u/v = u'(a)/v'(a)

Proof: Since u(a)=0, and the derivative du/dx is defined at a, u(a+dx) = du is infinitesimal, and likewise for v. By the definition of the limit, the limit is the standard part of

u/v = du/dv = (du/dx)/(dv/dx) ,

where by assumption the numerator and denominator are both defined (and finite, because the derivative is defined in terms of the standard part). The standard part of a quotient like p/q equals the quotient of the standard parts, provided that both p and q are finite (which we've established), and q ≠ 0 (which is true by assumption). But the standard part of du/dx is by definition the derivative u'(a), and likewise for dv/dx, so this establishes the result.

We will generalize l'Hôpital's rule on p. 65.

By the way, the housetop accent on the “ô” in l'Hôpital means that in Old French it used to be spelled and pronounced “l'Hospital,” but the “s” later became silent, so they stopped writing it. So yes, it is the same word as “hospital.”

##### Example 11

◊ Evaluate

lim_(x→0) (e^x − 1)/x

◊ Taking the derivatives of the top and bottom, we find e^x/1, which equals 1 when evaluated at x=0.
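A quick numeric check of this kind of l'Hôpital computation (my sketch, not from the text): for the 0/0 form (e^x − 1)/x, the rule predicts the limit e^0/1 = 1, and direct evaluation at small x agrees.

```python
import math

# For the 0/0 form (e^x - 1)/x at x = 0, l'Hopital predicts e^0/1 = 1;
# direct evaluation at small x agrees from both sides.
def f(x):
    return (math.exp(x) - 1) / x

assert abs(f(1e-6) - 1) < 1e-5
assert abs(f(-1e-6) - 1) < 1e-5
```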

##### Example 12

◊ Evaluate

lim_(x→1) (x − 1)/(x² − 2x + 1)

◊ Plugging in x=1 fails, because both the top and the bottom are zero. Taking the derivatives of the top and bottom, we find 1/(2x−2), which blows up to infinity when x=1. To symbolize the fact that the limit is undefined, and undefined because it blows up to infinity, we write

lim_(x→1) (x − 1)/(x² − 2x + 1) = ∞

## 3.4 Another perspective on indeterminate forms

An expression like 0/0, called an indeterminate form, can be thought of in a different way in terms of infinitesimals. Suppose I tell you I have two infinitesimal numbers d and e in my pocket, and I ask you whether d/e is finite, infinite, or infinitesimal. You can't tell, because d and e might not be infinitesimals of the same order of magnitude. For instance, if e=37d, then d/e=1/37 is finite; but if e=d², then d/e is infinite; and if d=e², then d/e is infinitesimal. We can act this out with numbers that are small but not infinitesimal.
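Here is a small Python sketch (mine, not from the text) acting the three cases out with d = 0.001:

```python
import math

# Acting out the indeterminate form d/e with small but finite numbers:
# the size of d/e depends entirely on the relative orders of d and e.
d = 0.001
e1 = 37 * d           # same order of smallness: d/e1 is ordinary-sized
e2 = d**2             # e2 is much smaller:      d/e2 is huge
e3 = math.sqrt(d)     # d = e3**2:               d/e3 is tiny
assert abs(d / e1 - 1 / 37) < 1e-12
assert d / e2 > 100
assert d / e3 < 0.1
```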

On the other hand, suppose I tell you I have an infinitesimal number d and a finite number x, and I ask you to speculate about d/x. You know for sure that it's going to be infinitesimal. Likewise, you can be sure that x/d is infinite. These aren't indeterminate forms.

We can do something similar with infinite numbers. If H and K are both infinite, then H−K is indeterminate. It could be infinite, for example, if H was positive infinite and K=H/2. On the other hand, it could be finite if H=K+1. Again, we can act this out with numbers that are big but finite.
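Again in Python (a sketch, not from the text), with H = 10⁸ standing in for an infinite number:

```python
# Acting out the indeterminate form H - K with big but finite numbers:
# the size of the difference depends entirely on how K compares with H.
H = 10.0**8
assert H - H / 2 == H / 2     # K = H/2: the difference is still huge
assert H - (H - 1) == 1.0     # K = H - 1: the difference is finite
```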

##### Example 13

◊ If H is a positive infinite number, is √(H+1) − √(H−1) finite, infinite, infinitesimal, or indeterminate?

◊ Trying it with a big but finite number, we have

√1000001 − √999999 ≈ 0.001 ,

which is clearly a wannabe infinitesimal. We can verify the result using Inf:

```   : H=1/d
d^-1
: sqrt(H+1)-sqrt(H-1)
d^1/2+0.125d^5/2+...
```

For convenience, the first line of input defines an infinite number H in terms of the calculator's built-in infinitesimal d. The result has only positive powers of d, so it's clearly infinitesimal.

More rigorously, we can rewrite the expression as √H (√(1+1/H) − √(1−1/H)). Since the derivative of the square root function evaluated at x=1 is 1/2, we can approximate this as

√H [(1 + 1/(2H)) − (1 − 1/(2H))] = √H · (1/H) = 1/√H ,

which is infinitesimal.
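A numeric check (mine, not from the text): √(H+1) − √(H−1) tracks the predicted 1/√H closely once H is big, which is the finite-number shadow of the infinitesimal result.

```python
import math

# sqrt(H+1) - sqrt(H-1) versus the predicted 1/sqrt(H), for big H.
for H in (1e6, 1e10):
    diff = math.sqrt(H + 1) - math.sqrt(H - 1)
    approx = 1 / math.sqrt(H)
    assert abs(diff - approx) / approx < 1e-3   # close, and small
```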

## 3.5 Limits at infinity

The definition of the limit in terms of infinitesimals extends immediately to limiting processes where x gets bigger and bigger, rather than closer and closer to some finite value. For example, the function 3+1/x clearly gets closer and closer to 3 as x gets bigger and bigger. If a is an infinite number, then the definition says that evaluating this expression at a+dx, where dx is infinitesimal, gives a result whose standard part is 3. It doesn't matter that a happens to be infinite, the definition still works. We also note that in this example, it doesn't matter what infinite number a is; the limit equals 3 for any infinite a. We can write this fact as

lim_(x→∞) (3 + 1/x) = 3 ,

where the symbol ∞ is to be interpreted as “nyeah nyeah, I don't even care what infinite number you put in here, I claim it will work out to 3 no matter what.” The symbol ∞ is not to be interpreted as standing for any specific infinite number. That would be the type of fallacy that lay behind the bogus proof on page 30 that 1=1/2, which assumed that all infinities had to be the same size.

A somewhat different example is the arctangent function. The arctangent of 1000 equals approximately 1.5698, and inputting bigger and bigger numbers gives answers that appear to get closer and closer to π/2 ≈ 1.5707. But the arctangent of −1000 is approximately −1.5698, i.e., very close to −π/2. From these numerical observations, we conjecture that the standard part of tan⁻¹ a equals π/2 for positive infinite a, but −π/2 for negative infinite a. It would not be correct to write

lim_(x→∞) tan⁻¹ x = π/2 ,

because it does matter what infinite number we pick. Instead we write

lim_(x→−∞) tan⁻¹ x = −π/2        lim_(x→+∞) tan⁻¹ x = π/2
Some expressions don't have this kind of limit at all. For example, if you take the sines of big numbers like a thousand, a million, etc., on your calculator, the results are essentially random numbers lying between −1 and 1. They don't settle down to any particular value, because the sine function oscillates back and forth forever. To prove formally that lim_(x→∞) sin x is undefined, consider that the sine function, defined on the real numbers, has the property that you can always change its result by at least 0.1 if you add either 1.5 or −1.5 to its input. For example, sin(.8) ≈ 0.717, and sin(.8−1.5) ≈ −0.644. Applying the transfer principle to this statement, we find that the same is true on the hyperreals. Therefore there cannot be any value ℓ that differs infinitesimally from sin a for all positive infinite values of a.
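A numeric sketch (not from the text) of this non-limit: sampling sin at ever-bigger powers of ten gives values scattered across [−1, 1] instead of settling down.

```python
import math

# sin evaluated at ever-bigger inputs: the samples stay scattered in
# [-1, 1] rather than approaching any single value.
samples = [math.sin(10.0**n) for n in range(3, 9)]
assert all(-1 <= s <= 1 for s in samples)
assert max(samples) - min(samples) > 0.5   # no settling down
```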

Often we're interested in finding the limit as x approaches infinity of an expression that is written as an indeterminate form like H/K, where both H and K are infinite.

##### Example 14
◊ Evaluate the limit

◊ Intuitively, if x gets large enough the constant terms will be negligible, and the top and bottom will be dominated by the 2x and x terms, respectively, giving an answer that approaches 2.

One way to verify this is to divide both the top and the bottom by x, giving

If x is infinite, then the standard part of the top is 2, the standard part of the bottom is 1, and the standard part of the whole thing is therefore 2.

Another approach is to use l'Hôpital's rule. The derivative of the top is 2, and the derivative of the bottom is 1, so the limit is 2/1 = 2.

## 3.6 Generalizations of l'Hôpital's rule

Mathematical theorems are sometimes like cars. I own a Honda Fit that is about as bare-bones as you can get these days, but persuading a dealer to sell me that car was like pulling teeth. The salesman was absolutely certain that any sane customer would want to pay an extra $1,800 for such crucial amenities as floor mats and a chrome tailpipe. L'Hôpital's rule in its most general form is a much fancier piece of machinery than the stripped-down model described on p. 60. The price you pay for the deluxe model is that the proof becomes much more complicated than the one-liner that sufficed for the simple version.

### Multiple applications of the rule

In the following example, we have to use l'Hôpital's rule twice before we get an answer.
##### Example 15
◊ Evaluate

◊ Applying l'Hôpital's rule gives

which still produces 0/0 when we plug in x=π. Going again, we get

The reason that this always works is outlined on p. 150.

### The indeterminate form ∞/∞

Consider an example like this:

lim_(x→0) (1/x + 1)/(2/x + 1)

This is an indeterminate form like ∞/∞ rather than the 0/0 form for which we've already proved l'Hôpital's rule. As proved on p. 151, l'Hôpital's rule applies to examples like this as well.

##### Example 16

◊ Evaluate

lim_(x→0) (1/x + 1)/(2/x + 1)

◊ Both the numerator and the denominator go to infinity. Differentiation of the top and bottom gives (−x⁻²)/(−2x⁻²) = 1/2. We can see that the reason the rule worked was that (1) the constant terms were irrelevant because they become negligible as the 1/x terms blow up; and (2) differentiating the blowing-up 1/x terms makes them into the same x⁻² on top and bottom, which cancel.

Note that we could also have gotten this result without l'Hôpital's rule, simply by multiplying both the top and the bottom of the original expression by x in order to rewrite it as (x+1)/(x+2).

### Limits at infinity

It is straightforward to prove a variant of l'Hôpital's rule that allows us to do limits at infinity. The general proof is left as an exercise (problem 8, p. 67). The result is that l'Hôpital's rule is equally valid when the limit is at ±∞ rather than at some real number a.

##### Example 17

◊ Evaluate

lim_(x→∞) (2x + 7)/(x + 8686)

◊ We could use a change of variable to make this into example 8 on p. 59, which was solved using an ad hoc and multiple-step procedure. But having established the more general form of l'Hôpital's rule, we can do it in one step. Differentiation of the top and bottom produces 2/1 = 2.

## Homework Problems

1. (a) Prove, using the Weierstrass definition of the limit, that if lim_(x→a) f(x) and lim_(x→a) g(x) both exist, then lim_(x→a) [f(x) + g(x)] = lim_(x→a) f(x) + lim_(x→a) g(x), i.e., that the limit of a sum is the sum of the limits. (b) Prove the same thing using the definition of the limit in terms of infinitesimals. (solution in the pdf version of the book)

2. Sketch the graph of the function e^(−1/x), and evaluate the following four limits:

lim_(x→0⁻) e^(−1/x)    lim_(x→0⁺) e^(−1/x)    lim_(x→−∞) e^(−1/x)    lim_(x→+∞) e^(−1/x)

(solution in the pdf version of the book)

3. Verify the following limits.

[Granville, 1911] (solution in the pdf version of the book)

4. Evaluate

exactly, and check your result by numerical approximation. (solution in the pdf version of the book)

5. Amy is asked to evaluate

lim_(x→0) x/e^x

She applies l'Hôpital's rule, differentiating top and bottom to find 1/e^x, which equals 1 when she plugs in x=0. What is wrong with her reasoning? (solution in the pdf version of the book)

6. Evaluate

exactly, and check your result by numerical approximation. (solution in the pdf version of the book)

7. Evaluate

exactly, and check your result by numerical approximation. (solution in the pdf version of the book)

8. Prove a form of l'Hôpital's rule stating that

lim_(x→∞) f(x)/g(x)

is equal to the limit of f'/g' at infinity. Hint: change to some new variable u such that x→∞ corresponds to u→0. (solution in the pdf version of the book)

9. Prove that the linear function y=ax+b, where a and b are real, is continuous, first using the definition of continuity in terms of infinitesimals, and then using the definition in terms of the Weierstrass limit. (solution in the pdf version of the book)