
Contents

Section 5.1 - Newton's method

Section 5.2 - Implicit differentiation

Section 5.3 - Methods of integration

Section 5.1 - Newton's method

In the 1958 science fiction novel **Have Space Suit --- Will Travel**, by
Robert Heinlein, Kip is a high school student who wants to be an engineer,
and his father is trying to convince him to stretch himself more if he
wants to get anything out of his education:

“Why did Van Buren fail of re-election? How do you extract the cube root of eighty-seven?”

Van Buren had been a president; that was all I remembered. But I could answer the other one. “If you want a cube root, you look in a table in the back of the book.”

Dad sighed. “Kip, do you think that table was brought down from on high by an archangel?”

We no longer use tables to compute roots, but how does a pocket calculator
do it? A technique called Newton's method allows us to calculate
the inverse of any function efficiently, including cases that aren't
preprogrammed into a calculator. In the example from the novel,
we know how to calculate the function *y*=*x*^{3} fairly accurately and
quickly for any given value of *x*, but we want to turn the equation
around and find *x* when *y*=87. We start with a rough mental guess:
since 4^{3}=64 is a little too small, and 5^{3}=125 is much too big,
we guess *x*≈ 4.3. Testing our guess, we have
4.3^{3}=79.5. We want *y* to get bigger by 7.5, and we can use
calculus to find approximately how much bigger *x* needs to get
in order to accomplish that:

Δ*x* ≈ Δ*y*/(*dy*/*dx*) = Δ*y*/(3*x*^{2}) = 7.5/(3×4.3^{2}) ≈ 0.14

Increasing our value of *x* to 4.3+0.14=4.44, we find that
4.44^{3}=87.5 is a pretty good approximation to 87. If we need higher
precision, we can go through the process again with Δ *y*=-0.5,
giving

Δ*x* ≈ Δ*y*/(3*x*^{2}) = -0.5/(3×4.44^{2}) ≈ -0.0085 ,

so *x* ≈ 4.44-0.0085 = 4.4315.

This second iteration gives an excellent approximation.
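The whole procedure is easy to automate. Here is a minimal Python sketch of the iteration (the function names are our own, and real calculators use more refined variants of the same idea):

```python
def invert(f, dfdx, y_target, x, iterations=20):
    """Solve f(x) = y_target by Newton's method, starting from a rough guess x."""
    for _ in range(iterations):
        dy = y_target - f(x)   # how much bigger y needs to get
        x = x + dy / dfdx(x)   # the corresponding change in x
    return x

# Cube root of 87, starting from the mental guess 4.3:
x = invert(lambda t: t**3, lambda t: 3 * t**2, 87, 4.3)
print(x)  # ≈ 4.4310476
```

Each iteration roughly doubles the number of correct digits, so a handful of passes already reaches machine precision.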

The planet Mercury has *e*=0.206. Find the angle *x*
when Mercury has completed 1/4 of a period.

◊ We have

*y* = *x* - *e* sin *x* ,

and we want to find *x* when *y*=2π/4=1.57. As a first guess, we try *x*=π/2 (90 degrees), since the eccentricity of Mercury's
orbit is actually much smaller than the example shown in the figure, and
therefore the planet's speed doesn't vary all that much as it goes around the sun.
For this value of *x* we have *y*=1.36, which is too small by 0.21.

Δ*x* ≈ Δ*y*/(*dy*/*dx*) = 0.21

(The derivative *dy*/*dx* happens to be 1 at *x*=π/2.) This gives a new value of *x*, 1.57+.21=1.78.
Testing it, we have *y*=1.58, which is correct to within rounding errors after only one iteration.
(We were only supplied with a value of *e* accurate to three significant figures, so we can't get a result
with precision better than about that level.)
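The same Newton iteration applies here, assuming the relation *y* = *x* - *e* sin *x* used above and the target value *y* = 2π/4:

```python
import math

e = 0.206  # eccentricity of Mercury's orbit

y_target = math.pi / 2  # 1/4 of the full period 2*pi

x = math.pi / 2  # first guess
for _ in range(10):  # Newton iterations on y = x - e*sin(x)
    y = x - e * math.sin(x)
    dydx = 1 - e * math.cos(x)
    x += (y_target - y) / dydx

print(x)  # ≈ 1.7726 radians
```

With full floating-point precision the answer comes out near 1.7726, consistent with the hand calculation's 1.78 given the three-significant-figure input.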

Section 5.2 - Implicit differentiation

We can differentiate any function that is written as a formula,
and find a result in terms of a formula. However, sometimes the
original problem can't be written in any nice way as a formula.
For example, suppose we want to find *dy*/*dx* in a case
where the relationship between *x*
and *y* is given by the following equation:

There is no equivalent of the quadratic formula for seventh-order
polynomials, so we have no way to solve for one variable in terms
of the other in order to differentiate it. However, we can still
find *dy*/*dx* in terms of *x* and *y*. Suppose we let *x*
grow to *x*+*dx*. Then for example the *x*^{2} term will grow
to (*x*+*dx*)^{2}=*x*^{2}+2*x* *dx*+*dx*^{2}. The squared infinitesimal
is negligible, so the increase in *x*^{2} was really just
2*x* *dx*, and we've really just computed the derivative of
*x*^{2} with respect to *x* and multiplied it by *dx*. In
symbols,

d(*x*^{2}) = 2*x* *dx* .

That is, the change in *x*^{2} is 2*x* times the change in *x*.
Doing this to both sides of the original equation, we have

This still doesn't give us a formula for the derivative in
terms of *x* alone, but it's not entirely useless. For instance,
if we're given a numerical value of *x*, we can always use
Newton's method to find *y*, and then
evaluate the derivative.
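The seventh-order equation itself isn't reproduced here, so as a stand-in take the hypothetical relation *y*^{3}+*y* = *x*^{2}, whose implicit derivative works out to *dy*/*dx* = 2*x*/(3*y*^{2}+1). The numerical procedure just described then looks like this:

```python
def y_of_x(x):
    """Solve the hypothetical relation y**3 + y = x**2 for y by Newton's method."""
    y = 1.0
    for _ in range(50):
        y -= (y**3 + y - x**2) / (3 * y**2 + 1)
    return y

x = 2.0
y = y_of_x(x)                   # Newton's method gives y at this x ...
dydx = 2 * x / (3 * y**2 + 1)   # ... and the implicit derivative is evaluated there
print(y, dydx)
```

A finite-difference check, comparing (y(x+h)-y(x))/h against the implicit-derivative formula, confirms the two agree.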

Section 5.3 - Methods of integration

Sometimes an unfamiliar-looking integral can be made into a familiar one by substituting
a new variable for an old one. For example, we know how to integrate 1/*x* --- the answer
is ln *x* --- but what about

∫ *dx*/(2*x*+1) ?

Let *u*=2*x*+1. Differentiating both sides, we have *du*=2*dx*, or *dx*=*du*/2, so

∫ *dx*/(2*x*+1) = ∫ (*du*/2)/*u* = (1/2) ln *u* + *c* = (1/2) ln(2*x*+1) + *c* .

This technique is known as a change of variable or a substitution. (Because the letter *u* is often
employed, you may also see it called *u*-substitution.)

In the case of a definite integral, we have to remember to change the limits of integration to reflect the new variable.

◊ Evaluate .

◊ As before, let *u*=2*x*+1.
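The limits of this example aren't reproduced here, so as an illustration take the limits *x*=0 to 1; with *u*=2*x*+1 they become *u*=1 to 3, and the result is (1/2) ln 3. A quick numerical check:

```python
import math

# Midpoint-rule estimate of the integral of 1/(2x+1) from x = 0 to 1.
n = 100000
total = sum(1 / (2 * ((i + 0.5) / n) + 1) for i in range(n)) / n

print(total)            # ≈ 0.5493061
print(math.log(3) / 2)  # (1/2) ln u, evaluated from u = 1 to u = 3
```

The two agree to many decimal places, confirming that the substituted limits were handled correctly.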

Sometimes, as in the next example, a clever substitution is the secret to doing a seemingly impossible integral.

◊ The only hope for reducing this to a form we can do is to let
*u*=√*x*, i.e., *x*=*u*^{2}. Then *dx*=d(*u*^{2})=2*u* *du*, so

Example 65 really isn't so tricky, since there was only one logical choice for the substitution that had any hope of working. The following is a little more dastardly.

◊ Evaluate ∫ *dx*/(1+*x*^{2}).

◊ The substitution that works is *x*=tan *u*. First let's see what this does
to the expression 1+*x*^{2}. The familiar identity

sin^{2} *u* + cos^{2} *u* = 1 ,

when divided by cos^{2} *u*, gives tan^{2} *u* + 1 = 1/cos^{2} *u*,
so 1+*x*^{2} becomes 1/cos^{2} *u*.
But differentiating both sides of *x*=tan *u* gives

*dx* = *du*/cos^{2} *u* ,

so the integral becomes

∫ *dx*/(1+*x*^{2}) = ∫ (*du*/cos^{2} *u*) cos^{2} *u* = ∫ *du* = *u* + *c* = tan^{-1} *x* + *c* .

What mere mortal would ever have suspected that the substitution *x*=tan *u* was
the one that was needed in example 66? One possible answer is to
give up and do the integral on a computer:

Integrate(x) 1/(1+x^2)
ArcTan(x)

Another possible answer is that you can usually smell the possibility of
this type of substitution, involving a trig function, when the thing to be
integrated contains something reminiscent of the Pythagorean theorem, as
suggested by figure b. The 1+*x*^{2} looks like what
you'd get if you had a right triangle with legs 1 and *x*, and were using the
Pythagorean theorem to find its hypotenuse.
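A numerical spot-check of the tangent substitution's result: a midpoint-rule sum for ∫ *dx*/(1+*x*^{2}) from 0 to 1 should reproduce tan^{-1}(1) = π/4.

```python
import math

# Midpoint-rule estimate of the integral of 1/(1+x^2) from 0 to 1; the
# substitution x = tan u turns it into the integral of du, i.e. arctan x.
n = 100000
total = sum(1 / (1 + ((i + 0.5) / n) ** 2) for i in range(n)) / n

print(total)  # ≈ 0.7853982, which is arctan(1) = pi/4
```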

◊ Evaluate .

◊ The √(1-*x*^{2}) looks like what you'd get if you had a right
triangle with hypotenuse 1 and a leg of length *x*, and were using the
Pythagorean theorem to find the other leg, as in figure c.
This motivates us to try the substitution *x*=cos *u*, which gives
*dx*=-sin *u* *du* and √(1-*x*^{2}) = sin *u*. The result is

Figure d shows a technique called integration by parts. If the integral ∫ *x* *dy* is easier than the integral ∫ *y* *dx*, then we can calculate the easier one, and then by simple geometry determine the one we wanted. Identifying the large rectangle that surrounds both shaded areas, and the small white rectangle on the lower left, we have

∫ *y* *dx* = *x*_{2}*y*_{2} - *x*_{1}*y*_{1} - ∫ *x* *dy* .

In the case of an indefinite integral, we have a similar relationship derived from the product rule:

d(*uv*) = *u* *dv* + *v* *du* .

Integrating both sides, we have the following relation.

∫ *u* *dv* = *uv* - ∫ *v* *du*

Since a definite integral can always be done by evaluating an indefinite
integral at its upper and lower limits, one usually uses this form.
Integrals don't usually come prepackaged in a form that makes it
obvious that you should use integration by parts. What the equation for
integration by parts tells us is that if we can split up the integrand
into two factors, one of which (the *dv*) we know how to integrate, we have
the option of changing the integral into a new form in which that factor
becomes its integral, and the other factor becomes its derivative. If we
choose the right way of splitting up the integrand into parts, the result can be
a simplification.

◊ Evaluate ∫ *x* cos *x* *dx*.

◊ There are two obvious possibilities for splitting up the integrand into factors,

(*x*)(cos *x* *dx*)   or   (cos *x*)(*x* *dx*) .

The first one is the one that lets us make progress. If *u*=*x*, then
*du*=*dx*, and if *dv*=cos *x* *dx*, then integration
gives *v*=sin *x*.

∫ *x* cos *x* *dx* = *uv* - ∫ *v* *du* = *x* sin *x* - ∫ sin *x* *dx* = *x* sin *x* + cos *x* + *c* .

Of the two possibilities we considered for *u* and *dv*, the reason
this one helped was that differentiating *x* gave *dx*, which was
simpler, and integrating cos *x* *dx* gave sin *x*, which was no
more complicated than before. The second possibility would have made
things worse rather than better, because integrating *x* *dx* would
have given *x*^{2}/2, which would have been more complicated rather than
less.
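The antiderivative *x* sin *x* + cos *x* obtained by parts can be checked numerically against a direct midpoint-rule sum (the interval 0 to 2 is an arbitrary choice for the check):

```python
import math

# Compare a midpoint-rule estimate of the integral of x*cos(x) from 0 to 2
# with the antiderivative x*sin(x) + cos(x) found by integration by parts.
a, b, n = 0.0, 2.0, 100000
h = (b - a) / n
numeric = sum((a + (i + 0.5) * h) * math.cos(a + (i + 0.5) * h) for i in range(n)) * h

F = lambda t: t * math.sin(t) + math.cos(t)
print(numeric, F(b) - F(a))  # both ≈ 0.402448
```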

◊ Evaluate ∫ ln *x* *dx*.

◊ This one is a little tricky, because it isn't explicitly
written as a product, and yet we can attack it using integration by
parts. Let *u*=ln *x* and *dv*=*dx*.
Then *du*=*dx*/*x* and *v*=*x*, so

∫ ln *x* *dx* = *x* ln *x* - ∫ *x* (*dx*/*x*) = *x* ln *x* - *x* + *c* .

◊ Evaluate ∫ *x*^{2}*e*^{x} *dx*.

◊
Integration by parts lets us split the integrand into two factors, integrate one, differentiate the
other, and then do that integral. Integrating or differentiating *e*^{x} does nothing. Integrating
*x*^{2} increases the exponent, which makes the problem look harder, whereas differentiating *x*^{2}
knocks the exponent down a step, which makes it look easier. Let *u*=*x*^{2} and *dv*=*e*^{x} *dx*,
so that *du*=2*x* *dx* and *v*=*e*^{x}. We then have

∫ *x*^{2}*e*^{x} *dx* = *x*^{2}*e*^{x} - 2 ∫ *x* *e*^{x} *dx* .

Although we don't immediately know how to evaluate this new integral, we can subject it to the
same type of integration by parts, now with *u*=*x* and *dv*=*e*^{x} *dx*. After the second
integration by parts, we have:

∫ *x*^{2}*e*^{x} *dx* = *x*^{2}*e*^{x} - 2( *x* *e*^{x} - ∫ *e*^{x} *dx* ) = ( *x*^{2} - 2*x* + 2 ) *e*^{x} + *c* .

Given a function like

-1/(*x*-1) + 1/(*x*+1) ,

we can rewrite it over a common denominator like this:

-2/(*x*^{2}-1) .

But note that the original form is easily integrated to give

- ln(*x*-1) + ln(*x*+1) + *c* ,

while faced with the form -2/(*x*^{2}-1), we wouldn't have known how to
integrate it.

Note that the original function was of the form (-1)/…+(+1)/…
It's not a coincidence that the two constants on top, -1 and +1, are opposite in sign
but equal in absolute value. To see why, consider the behavior of
this function for large values of *x*. Looking at the form
-1/(*x*-1)+1/(*x*+1), we might naively guess that for a large value of *x*
such as 1000, it would come out to be somewhere on the order of thousandths.
But looking at the form -2/(*x*^{2}-1), we would expect it to be way down in the millionths.
This seeming paradox is resolved by noting that for large values of *x*,
the two terms in the form -1/(*x*-1)+1/(*x*+1) very nearly cancel. This cancellation
could only have happened if the constants on top were opposites like plus and minus one.
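Both the algebraic identity and the near-cancellation at large *x* are easy to confirm numerically:

```python
# Check that -1/(x-1) + 1/(x+1) equals -2/(x**2 - 1), and that for large x
# the near-cancellation leaves a value of order 1/x**2, not 1/x.
x = 1000.0
split_form = -1 / (x - 1) + 1 / (x + 1)
combined = -2 / (x**2 - 1)
print(split_form, combined)  # both ≈ -2.000002e-06
```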

The idea of the method of partial fractions is that if we want to do an integral of the form

∫ *dx*/*P*(*x*) ,

where *P*(*x*) is an nth order polynomial, we rewrite 1/*P* as

1/*P* = *A*_{1}/(*x*-*r*_{1}) + *A*_{2}/(*x*-*r*_{2}) + … + *A*_{n}/(*x*-*r*_{n}) ,

where *r*_{1} ... *r*_{n} are the roots of the polynomial, i.e., the
solutions of the equation *P*(*r*)=0. If the polynomial is second-order,
you can find the roots *r*_{1} and *r*_{2} using the quadratic formula; I'll assume for
the time being that they're real.
For higher-order polynomials, there is no surefire, easy
way of finding the roots by hand, and you'd be smart simply to use computer software
to do it. In Yacas, you can find the real roots of a polynomial like this:

FindRealRoots(x^4-5*x^3-25*x^2+65*x+84)
{3.,7.,-4.,-1.}

(I assume it uses Newton's method to find them.) The constants
*A*_{i} can then be determined by algebra, or by the following trick.

P(x):=x^4-5*x^3-25*x^2+65*x+84
N(1/P(3.000001))
-8928.5702094768

We know that for *x* very close to 3, the expression

*A*_{1}/(*x*-3) + *A*_{2}/(*x*-7) + *A*_{3}/(*x*+4) + *A*_{4}/(*x*+1)

will be dominated by the *A*_{1} term, so

*A*_{1} ≈ [1/*P*(3.000001)] × (0.000001) ≈ -8.93×10^{-3} .

By the same method we can find the other three constants:

dx:=.000001
N(1/P(7+dx),30)*dx
0.2840908276e-2
N(1/P(-4+dx),30)*dx
-0.4329006192e-2
N(1/P(-1+dx),30)*dx
0.1041666664e-1

(The N( ,30) construct is to tell Yacas to do a numerical calculation rather than an exact symbolic one, and to use 30 digits of precision, in order to avoid problems with rounding errors.) Thus,

1/*P* ≈ -8.93×10^{-3}/(*x*-3) + 2.84×10^{-3}/(*x*-7) - 4.33×10^{-3}/(*x*+4) + 1.04×10^{-2}/(*x*+1) .

The desired integral is

∫ *dx*/*P* ≈ -8.93×10^{-3} ln(*x*-3) + 2.84×10^{-3} ln(*x*-7) - 4.33×10^{-3} ln(*x*+4) + 1.04×10^{-2} ln(*x*+1) + *c* .

As in the simpler example I started off with, where *P* was second order
and we got *A*_{1}=-*A*_{2}, in this *n*=4 example we expect that *A*_{1}+*A*_{2}+*A*_{3}+*A*_{4}=0,
for otherwise the large-*x* behavior of the partial-fraction form would be 1/*x* rather than 1/*x*^{4}.
This is a useful way of checking the result: in units of 10^{-3}, -8.93+2.84-4.33+10.4=-0.02≈0.
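The small-epsilon trick done above in Yacas can be reproduced in a few lines of Python:

```python
# The small-epsilon trick for the partial-fraction coefficients of 1/P(x),
# where P(x) = x^4 - 5x^3 - 25x^2 + 65x + 84 has roots 3, 7, -4, -1.
def P(x):
    return x**4 - 5 * x**3 - 25 * x**2 + 65 * x + 84

dx = 1e-6
coeffs = [dx / P(r + dx) for r in (3, 7, -4, -1)]
print(coeffs)       # ≈ [-8.93e-03, 2.84e-03, -4.33e-03, 1.04e-02]
print(sum(coeffs))  # ≈ 0, the large-x consistency check
```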

There are two possible complications:

First, the same factor may occur more than once, as in
*x*^{3}-5*x*^{2}+7*x*-3=(*x*-1)(*x*-1)(*x*-3). In this example, we have to look for an answer of the form
*A*/(*x*-1)+*B*/(*x*-1)^{2}+*C*/(*x*-3), the solution being
-.25/(*x*-1)-.5/(*x*-1)^{2}+.25/(*x*-3).

Second, the roots may be complex.
This is no show-stopper
if you're using computer software that handles complex numbers gracefully. (You can choose a *c*
that makes the result real.)
In fact, as discussed in section 8.3, some beautiful things can happen
with complex roots. But as an alternative,
any polynomial with real coefficients can be factored into linear
and quadratic factors with real coefficients. For each quadratic factor *Q*(*x*), we then have a
partial fraction of the form (*A*+*Bx*)/*Q*(*x*), where *A* and *B* can be determined by algebra.
In Yacas, this can be done using the Apart function.

◊ Evaluate the integral

∫ *dx* / (*x*^{4} - 8*x*^{3} + 8*x*^{2} - 8*x* + 7)

using the method of partial fractions.

◊ First we use Yacas to look for real roots of the polynomial:

FindRealRoots(x^4-8*x^3+8*x^2-8*x+7)
{1.,7.}

Unfortunately this polynomial seems to have only two real roots; the rest
are complex.
We can divide out the factor (*x*-1)(*x*-7), but that still
leaves us with a second-order polynomial, which has no real roots.
One approach would be to factor the polynomial into the form
(*x*-1)(*x*-7)(*x*-*p*)(*x*-*q*), where *p* and *q* are complex,
as in section 8.3. Instead, let's use
Yacas to expand the integrand in terms of partial fractions:

Apart(1/(x^4-8*x^3+8*x^2-8*x+7))
((2*x)/25+3/50)/(x^2+1)+1/(300*(x-7))+(-1)/(12*(x-1))

We can now rewrite the integral like this:

∫ [ (2*x*/25 + 3/50)/(*x*^{2}+1) + 1/(300(*x*-7)) - 1/(12(*x*-1)) ] *dx* .

In fact, Yacas should be able to do the whole integral for us from scratch, but it's best to understand how these things work under the hood, and to avoid being completely dependent on one particular piece of software. As an illustration of this gem of wisdom, I found that when I tried to make Yacas evaluate the integral in one gulp, it choked because the calculation became too complicated! Because I understood the ideas behind the procedure, I was still able to get a result through a mixture of computer calculations and working it by hand. Someone who didn't have the knowledge of the technique might have tried the integral using the software, seen it fail, and concluded, incorrectly, that the integral was one that simply couldn't be done. A computer is no substitute for understanding.
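In the same spirit of not trusting software blindly, the Apart output above can be spot-checked numerically at a few sample points:

```python
# Spot-check Yacas's partial-fraction expansion of 1/(x^4-8x^3+8x^2-8x+7).
def original(x):
    return 1 / (x**4 - 8 * x**3 + 8 * x**2 - 8 * x + 7)

def expanded(x):
    return ((2 * x / 25 + 3 / 50) / (x**2 + 1)
            + 1 / (300 * (x - 7))
            - 1 / (12 * (x - 1)))

# Sample points chosen to avoid the poles at x = 1 and x = 7.
for x in (0.0, 2.5, 10.0, -3.0):
    print(x, original(x), expanded(x))
```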

On p. 92 I introduced the trick of carrying out the method of partial fractions
by evaluating 1/*P*(*x*) numerically at *x*=*r*_{i}+ε, near where 1/*P* blows up. Sometimes we would like
to have an exact result rather than a numerical approximation. We can accomplish this by using an infinitesimal
number *dx* rather than a small but finite ε. For simplicity, let's assume that all of the *n* roots *r*_{i}
are distinct, and that *P*'s highest-order term is *x*^{n}. We can then write *P* as the product
*P*(*x*)=(*x*-*r*_{1})(*x*-*r*_{2})…(*x*-*r*_{n}). For products like this, there is a notation Π (capital Greek letter “pi”)
that works like Σ does for sums:

*P*(*x*) = Π_{i=1}^{n} (*x* - *r*_{i}) .

It's not necessary that the roots be real, but for now we assume that they are.
We want to find the coefficients *A*_{i} such that

1/*P* = Σ_{i=1}^{n} *A*_{i}/(*x* - *r*_{i}) .

We then have

1/*P*(*r*_{i}+*dx*) = *A*_{i}/*dx* + … ,

where … represents finite terms that are negligible compared to the infinite ones. Multiplying on
both sides by *dx*, we have

*A*_{i} = *dx*/*P*(*r*_{i}+*dx*) + … = 1/*P*'(*r*_{i}) + … ,

where the … now stand for infinitesimals which must in fact cancel out, since both *A*_{i} and
1/*P*' are real numbers.

◊ The partial-fraction decomposition of the function

1/(*x*^{4} - 5*x*^{3} - 25*x*^{2} + 65*x* + 84)

was found numerically on p. 92. The coefficient
of the 1/(*x*-3) term was found numerically to be *A*_{1}≈ -8.930×10^{-3}.
Determine it exactly using the residue method.

◊
Differentiation gives *P*'(*x*)=4*x*^{3}-15*x*^{2}-50*x*+65. We then have
*A*_{1}=1/*P*'(3)=-1/112.
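The exact residue-method answer and the earlier small-epsilon trick can be compared directly:

```python
# Residue method: the coefficient of 1/(x-3) is 1/P'(3), where
# P(x) = x^4 - 5x^3 - 25x^2 + 65x + 84 and P'(x) = 4x^3 - 15x^2 - 50x + 65.
def P(x):
    return x**4 - 5 * x**3 - 25 * x**2 + 65 * x + 84

def dP(x):
    return 4 * x**3 - 15 * x**2 - 50 * x + 65

A1_exact = 1 / dP(3)             # = -1/112
A1_numeric = 1e-6 / P(3 + 1e-6)  # the earlier small-epsilon trick
print(A1_exact, A1_numeric)      # both ≈ -8.93e-3
```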

Integral calculus was invented in the age of powdered wigs and harpsichords, so the original emphasis was on expressing integrals in a form that would allow numbers to be plugged in for easy numerical evaluation by scribbling on scraps of parchment with a quill pen. This was an era when you might have to travel to a large city to get access to a table of logarithms.

In this computationally impoverished environment, one always
wanted to get answers in what's known as *closed form* and in terms of *elementary functions*.

A closed form expression means one written using a finite number of operations, as opposed to something
like the geometric series 1+*x*+*x*^{2}+*x*^{3}+…, which goes on forever.

Elementary functions are usually taken to be addition, subtraction, multiplication, division, logs, and exponentials, as well as other functions derivable from these. For example, a cube root is allowed, since *x*^{1/3} = *e*^{(1/3) ln *x*}, and so are trig functions and their inverses, since, as we will see in chapter 8, they can be expressed in terms of logs and exponentials.

In theory, “closed form” doesn't mean anything unless we state the elementary functions that are allowed. In practice, when people refer to closed form, they usually have in mind the particular set of elementary functions described above.

A traditional freshman calculus course spends such a vast amount of time teaching you how to do integrals in closed form that it may be easy to miss the fact that this is impossible for the vast majority of integrands that you might randomly write down. Here are some examples of impossible integrals:

∫ *e*^{-*x*^{2}} *dx* ,   ∫ (sin *x*)/*x* *dx* ,   ∫ *dx*/ln *x*

The first of these is a form that is extremely important in statistics (it describes the area under the standard “bell curve”), so you can see that impossible integrals aren't just obscure things that don't pop up in real life.

People who are proficient at doing integrals in closed form generally seem to work by a process of pattern matching. They recognize certain integrals as being of a form that can't be done, so they know not to try.

◊ Students! Stand at attention! You will now evaluate in closed form.

◊
No sir, I can't do that. By a change of variables of the form *u*=*x*+*c*, where *c* is a constant,
we could clearly put this into the form ∫ *e*^{-*u*^{2}} *du* (times a constant factor), which we know is impossible.

Sometimes an integral such as ∫ *e*^{-*x*^{2}} *dx* is important enough that we want to give it a name, tabulate it, and write computer subroutines that can evaluate it numerically. For example, statisticians define the “error function” erf(*x*) = (2/√π) ∫_{0}^{*x*} *e*^{-*t*^{2}} *dt*. Sometimes if you're not sure whether an integral can be done in closed form, you can put it into computer software, which will tell you that it reduces to one of these functions. You then know that it can't be done in closed form. For example, if you ask the popular web site integrals.com to do ∫ *e*^{-*x*^{2}} *dx*, it spits back (√π/2) erf(*x*). This tells you both that you shouldn't be wasting your time trying to do the integral in closed form and that if you need to evaluate it numerically, you can do that using the erf function.
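Python's standard library happens to include the error function, so the "impossible" integral is perfectly easy to evaluate numerically (the interval 0 to 1 is an arbitrary choice for the check):

```python
import math

# Numerically evaluate the integral of exp(-x^2) from 0 to 1 and compare
# with (sqrt(pi)/2) * erf(1), using the standard library's erf.
n = 100000
h = 1.0 / n
numeric = sum(math.exp(-(((i + 0.5) * h) ** 2)) for i in range(n)) * h

via_erf = math.sqrt(math.pi) / 2 * math.erf(1.0)
print(numeric, via_erf)  # both ≈ 0.7468241
```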

As shown in the following example, just because an indefinite integral can't be done, that doesn't mean that we can never do a related definite integral.

◊ Evaluate ∫_{0}^{π/2} *e*^{-tan^{2} *x*} *dx*/cos^{2} *x*.

◊ The obvious substitution to try is *u*=tan *x*, and this reduces the integrand to *e*^{-*u*^{2}} *du*, with the limits of integration becoming *u*=0 and *u*=∞.
This proves that the corresponding indefinite integral is impossible to express in closed form. However,
the definite integral *can* be expressed in closed form; it turns out to be √π/2. The trick
for proving this is given in example 99 on p. 134.

Sometimes computer software can't say anything about a particular integral at all. That doesn't mean that the integral can't be done. Computers are stupid, and they may try brute-force techniques that fail because the computer runs out of memory or CPU time. For example, the integral (problem 15, p. 127) can be done in closed form using the techniques of chapter 8, and it's not too hard for a proficient human to figure out how to attack it, but every computer program I've tried it on has failed silently.

Problems

**1**.
Graph the function *y*=*e*^{x}-7*x* and get an approximate idea of where any of its zeroes
are (i.e., for what values of *x* we have *y*(*x*)=0).
Use Newton's method to find the zeroes to three significant figures of precision.

**2**.
The relationship between *x* and *y* is given by *xy* = sin *y*+*x*^{2}*y*^{2}.

(a) Use Newton's method to find the nonzero solution for *y* when *x*=3. Answer: *y*=0.2231

(b) Find *dy*/*dx* in terms of *x* and *y*, and evaluate the derivative
at the point on the curve you found in part a. Answer: *dy*/*dx*=-0.0379

Based on an example by Craig B. Watkins.

**3**.
Suppose you want to evaluate

and you've found

in a table of integrals. Use a change of variable to find the answer to the original problem.

**4**.
Evaluate

**5**.
Evaluate

**6**.
Evaluate

**7**.
Evaluate

where *b* is a constant.

**8**.
Evaluate

**9**.
Evaluate

**10**.
Use integration by parts to evaluate the following integrals.

**11**.
Evaluate

Hint: Use integration by parts more than once.

**12**.
Evaluate

**13**.
Evaluate

**14**.
Evaluate

**15**.
Apply integration by parts *twice* to

examine what happens, and manipulate the result in order to solve the original integral. (An approach that doesn't rely on tricks is given in example 91 on p. 123.)

**16**.
Plan, *but do not actually carry out* the steps that would be required in order to generalize
the result of example 70 on p. 91
in order to evaluate

where *a* and *b* are constants.
Which is easier, the generalization from 2 to *a*, or the one from *e* to *b*?
Do we need to introduce any restrictions on *a* or *b*?
(solution in the pdf version of the book)

**17**.
The integral can't be done in closed form. Knowing this, use a change
of variable to write down a different integral that also can't be done in closed form.

**18**.
Consider the integral

where *p* is a constant. There is an obvious substitution. If this is to result in
an integral that can be evaluated in closed form by a series of integrations by parts,
what are the possible values of *p*? Don't actually complete the integral; just determine
what values of *p* will work.
(solution in the pdf version of the book)

(c) 1998-2013 Benjamin Crowell, licensed under the Creative Commons Attribution-ShareAlike license. Photo credits are given at the end of the Adobe Acrobat version.