
Contents
Section 13.1 - Rules of Randomness
Section 13.2 - Light As a Particle
Section 13.3 - Matter As a Wave
Section 13.4 - The Atom

Chapter 13. Quantum Physics

13.1 Rules of Randomness


a / In 1980, the continental U.S. got its first taste of active volcanism in recent memory with the eruption of Mount St. Helens.

Given for one instant an intelligence which could comprehend all the forces by which nature is animated and the respective positions of the things which compose it...nothing would be uncertain, and the future as the past would be laid out before its eyes. -- Pierre Simon de Laplace, 1776

The energy produced by the atom is a very poor kind of thing. Anyone who expects a source of power from the transformation of these atoms is talking moonshine. -- Ernest Rutherford, 1933

The Quantum Mechanics is very imposing. But an inner voice tells me that it is still not the final truth. The theory yields much, but it hardly brings us nearer to the secret of the Old One. In any case, I am convinced that He does not play dice. -- Albert Einstein

However radical Newton's clockwork universe seemed to his contemporaries, by the early twentieth century it had become a sort of smugly accepted dogma. Luckily for us, this deterministic picture of the universe breaks down at the atomic level. The clearest demonstration that the laws of physics contain elements of randomness is in the behavior of radioactive atoms. Pick two identical atoms of a radioactive isotope, say the naturally occurring uranium 238, and watch them carefully. They will decay at different times, even though there was no difference in their initial behavior.

We would be in big trouble if these atoms' behavior was as predictable as expected in the Newtonian world-view, because radioactivity is an important source of heat for our planet. In reality, each atom chooses a random moment at which to release its energy, resulting in a nice steady heating effect. The earth would be a much colder planet if only sunlight heated it and not radioactivity. Probably there would be no volcanoes, and the oceans would never have been liquid. The deep-sea geothermal vents in which life first evolved would never have existed. But there would be an even worse consequence if radioactivity was deterministic: after a few billion years of peace, all the uranium 238 atoms in our planet would presumably pick the same moment to decay. The huge amount of stored nuclear energy, instead of being spread out over eons, would all be released at one instant, blowing our whole planet to Kingdom Come.

The new version of physics, incorporating certain kinds of randomness, is called quantum physics (for reasons that will become clear later). It represented such a dramatic break with the previous, deterministic tradition that everything that came before is considered “classical,” even the theory of relativity. This chapter is a basic introduction to quantum physics.

Discussion Question

I said “Pick two identical atoms of a radioactive isotope.” Are two atoms really identical? If their electrons are orbiting the nucleus, can we distinguish each atom by the particular arrangement of its electrons at some instant in time?

13.1.1 Randomness isn't random.

Einstein's distaste for randomness, and his association of determinism with divinity, goes back to the Enlightenment conception of the universe as a gigantic piece of clockwork that only had to be set in motion initially by the Builder. Many of the founders of quantum mechanics were interested in possible links between physics and Eastern and Western religious and philosophical thought, but every educated person has a different concept of religion and philosophy. Bertrand Russell remarked, “Sir Arthur Eddington deduces religion from the fact that atoms do not obey the laws of mathematics. Sir James Jeans deduces it from the fact that they do.”

Russell's witticism, which implies incorrectly that mathematics cannot describe randomness, reminds us how important it is not to oversimplify this question of randomness. You should not simply surmise, “Well, it's all random, anything can happen.” For one thing, certain things simply cannot happen, either in classical physics or quantum physics. The conservation laws of mass, energy, momentum, and angular momentum are still valid, so for instance processes that create energy out of nothing are not just unlikely according to quantum physics, they are impossible.

A useful analogy can be made with the role of randomness in evolution. Darwin was not the first biologist to suggest that species changed over long periods of time. His two new fundamental ideas were that (1) the changes arose through random genetic variation, and (2) changes that enhanced the organism's ability to survive and reproduce would be preserved, while maladaptive changes would be eliminated by natural selection. Doubters of evolution often consider only the first point, about the randomness of natural variation, but not the second point, about the systematic action of natural selection. They make statements such as, “the development of a complex organism like Homo sapiens via random chance would be like a whirlwind blowing through a junkyard and spontaneously assembling a jumbo jet out of the scrap metal.” The flaw in this type of reasoning is that it ignores the deterministic constraints on the results of random processes. For an atom to violate conservation of energy is no more likely than the conquest of the world by chimpanzees next year.

Discussion Question

Economists often behave like wannabe physicists, probably because it seems prestigious to make numerical calculations instead of talking about human relationships and organizations like other social scientists. Their striving to make economics work like Newtonian physics extends to a parallel use of mechanical metaphors, as in the concept of a market's supply and demand acting like a self-adjusting machine, and the idealization of people as economic automatons who consistently strive to maximize their own wealth. What evidence is there for randomness rather than mechanical determinism in economics?


b / Normalization: the probability of picking land plus the probability of picking water adds up to 1.


c / Why are dice random?

13.1.2 Calculating randomness

You should also realize that even if something is random, we can still understand it, and we can still calculate probabilities numerically. In other words, physicists are good bookmakers. A good bookmaker can calculate the odds that a horse will win a race much more accurately than an inexperienced one, but nevertheless cannot predict what will happen in any particular race.

Statistical independence

As an illustration of a general technique for calculating odds, suppose you are playing a 25-cent slot machine. Each of the three wheels has one chance in ten of coming up with a cherry. If all three wheels come up cherries, you win $100. Even though the results of any particular trial are random, you can make certain quantitative predictions. First, you can calculate that your odds of winning on any given trial are \(1/10\times1/10\times1/10=1/1000=0.001\). Here, I am representing the probabilities as numbers from 0 to 1, which is clearer than statements like “The odds are 999 to 1,” and makes the calculations easier. A probability of 0 represents something impossible, and a probability of 1 represents something that will definitely happen.

Also, you can say that any given trial is equally likely to result in a win, and it doesn't matter whether you have won or lost in prior games. Mathematically, we say that each trial is statistically independent, or that separate games are uncorrelated. Most gamblers are mistakenly convinced that, to the contrary, games of chance are correlated. If they have been playing a slot machine all day, they are convinced that it is “getting ready to pay,” and they do not want anyone else playing the machine and “using up” the jackpot that they “have coming.” In other words, they are claiming that a series of trials at the slot machine is negatively correlated, that losing now makes you more likely to win later. Craps players claim that you should go to a table where the person rolling the dice is “hot,” because she is likely to keep on rolling good numbers. Craps players, then, believe that rolls of the dice are positively correlated, that winning now makes you more likely to win later.

My method of calculating the probability of winning on the slot machine was an example of the following important rule for calculations based on independent probabilities:

the law of independent probabilities

If the probability of one event happening is \(P_A\), and the probability of a second statistically independent event happening is \(P_B\), then the probability that they will both occur is the product of the probabilities, \(P_AP_B\). If there are more than two events involved, you simply keep on multiplying.

This can be taken as the definition of statistical independence.

Note that this only applies to independent probabilities. For instance, if you have a nickel and a dime in your pocket, and you randomly pull one out, there is a probability of 0.5 that it will be the nickel. If you then replace the coin and again pull one out randomly, there is again a probability of 0.5 of coming up with the nickel, because the probabilities are independent. Thus, there is a probability of 0.25 that you will get the nickel both times.

Suppose instead that you do not replace the first coin before pulling out the second one. Then you are bound to pull out the other coin the second time, and there is no way you could pull the nickel out twice. In this situation, the two trials are not independent, because the result of the first trial has an effect on the second trial. The law of independent probabilities does not apply, and the probability of getting the nickel twice is zero, not 0.25.
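A minimal Python sketch of these two situations, estimating both probabilities by simulation (the exact answers, from the reasoning above, are 0.25 and 0):

```python
import random

def two_draws(replace):
    """Draw twice from {nickel, dime}; return True if both draws are the nickel."""
    pocket = ["nickel", "dime"]
    first = random.choice(pocket)
    remaining = pocket if replace else [c for c in pocket if c != first]
    second = random.choice(remaining)
    return first == "nickel" and second == "nickel"

N = 100_000
print(sum(two_draws(replace=True) for _ in range(N)) / N)   # close to 0.5 * 0.5 = 0.25
print(sum(two_draws(replace=False) for _ in range(N)) / N)  # exactly 0: not independent
```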

Experiments have shown that in the case of radioactive decay, the probability that any nucleus will decay during a given time interval is unaffected by what is happening to the other nuclei, and is also unrelated to how long it has gone without decaying. The first observation makes sense, because nuclei are isolated from each other at the centers of their respective atoms, and therefore have no physical way of influencing each other. The second fact is also reasonable, since all atoms are identical. Suppose we wanted to believe that certain atoms were “extra tough,” as demonstrated by their history of going an unusually long time without decaying. Those atoms would have to be different in some physical way, but nobody has ever succeeded in detecting differences among atoms. There is no way for an atom to be changed by the experiences it has in its lifetime.

Addition of probabilities

The law of independent probabilities tells us to use multiplication to calculate the probability that both A and B will happen, assuming the probabilities are independent. What about the probability of an “or” rather than an “and”? If two events A and \(B\) are mutually exclusive, then the probability of one or the other occurring is the sum \(P_A+P_B\). For instance, a bowler might have a 30% chance of getting a strike (knocking down all ten pins) and a 20% chance of knocking down nine of them. The bowler's chance of knocking down either nine pins or ten pins is therefore 50%.

It does not make sense to add probabilities of things that are not mutually exclusive, i.e., that could both happen. Say I have a 90% chance of eating lunch on any given day, and a 90% chance of eating dinner. The probability that I will eat either lunch or dinner is not 180%.

Normalization

If I spin a globe and randomly pick a point on it, I have about a 70% chance of picking a point that's in an ocean and a 30% chance of picking a point on land. The probability of picking either water or land is \(70\%+30\%=100\%\). Water and land are mutually exclusive, and there are no other possibilities, so the probabilities had to add up to 100%. It works the same if there are more than two possibilities --- if you can classify all possible outcomes into a list of mutually exclusive results, then all the probabilities have to add up to 1, or 100%. This property of probabilities is known as normalization.

Averages

Another way of dealing with randomness is to take averages. The casino knows that in the long run, the number of times you win will approximately equal the number of times you play multiplied by the probability of winning. In the slot-machine game described on page 825, where the probability of winning is 0.001, if you spend a week playing, and pay $2500 to play 10,000 times, you are likely to win about 10 times \((10,000\times0.001=10)\), and collect $1000. On the average, the casino will make a profit of $1500 from you. This is an example of the following rule.

Rule for Calculating Averages

If you conduct \(N\) identical, statistically independent trials, and the probability of success in each trial is \(P\), then on the average, the total number of successful trials will be \(NP\). If \(N\) is large enough, the relative error in this estimate will become small.

The statement that the rule for calculating averages gets more and more accurate for larger and larger \(N\) (known popularly as the “law of averages”) often provides a correspondence principle that connects classical and quantum physics. For instance, the amount of power produced by a nuclear power plant is not random at any detectable level, because the number of atoms in the reactor is so large. In general, random behavior at the atomic level tends to average out when we consider large numbers of atoms, which is why physics seemed deterministic before physicists learned techniques for studying atoms individually.
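The law of averages is easy to watch in action. A minimal Python sketch, using the slot-machine probability from earlier; the ratio of actual wins to the predicted \(NP\) closes in on 1 as \(N\) grows:

```python
import random

P = 0.001  # probability of winning one slot-machine trial
for N in [1_000, 100_000, 10_000_000]:
    wins = sum(random.random() < P for _ in range(N))
    print(N, wins, wins / (N * P))  # last column approaches 1 as N grows
```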

We can achieve great precision with averages in quantum physics because we can use identical atoms to reproduce exactly the same situation many times. If we were betting on horses or dice, we would be much more limited in our precision. After a thousand races, the horse would be ready to retire. After a million rolls, the dice would be worn out.

self-check:

Which of the following things must be independent, which could be independent, and which definitely are not independent? (1) the probability of successfully making two free-throws in a row in basketball; (2) the probability that it will rain in London tomorrow and the probability that it will rain on the same day in a certain city in a distant galaxy; (3) your probability of dying today and of dying tomorrow.

(answer in the back of the PDF version of the book)
Discussion Questions

Newtonian physics is an essentially perfect approximation for describing the motion of a pair of dice. If Newtonian physics is deterministic, why do we consider the result of rolling dice to be random?

Why isn't it valid to define randomness by saying that randomness is when all the outcomes are equally likely?

The sequence of digits 121212121212121212 seems clearly nonrandom, and 41592653589793 seems random. The latter sequence, however, is the decimal form of pi, starting with the third digit. There is a story about the Indian mathematician Ramanujan, a self-taught prodigy, that a friend came to visit him in a cab, and remarked that the number of the cab, 1729, seemed relatively uninteresting. Ramanujan replied that on the contrary, it was very interesting because it was the smallest number that could be represented in two different ways as the sum of two cubes. The Argentine author Jorge Luis Borges wrote a short story called “The Library of Babel,” in which he imagined a library containing every book that could possibly be written using the letters of the alphabet. It would include a book containing only the repeated letter “a;” all the ancient Greek tragedies known today, all the lost Greek tragedies, and millions of Greek tragedies that were never actually written; your own life story, and various incorrect versions of your own life story; and countless anthologies containing a short story called “The Library of Babel.” Of course, if you picked a book from the shelves of the library, it would almost certainly look like a nonsensical sequence of letters and punctuation, but it's always possible that the seemingly meaningless book would be a science-fiction screenplay written in the language of a Neanderthal tribe, or the lyrics to a set of incomparably beautiful love songs written in a language that never existed. In view of these examples, what does it really mean to say that something is random?


d / Probability distribution for the result of rolling a single die.


e / Rolling two dice and adding them up.


f / A probability distribution for height of human adults (not real data).


g / Example 1.


h / The average of a probability distribution.


i / The full width at half maximum (FWHM) of a probability distribution.

13.1.3 Probability distributions

So far we've discussed random processes having only two possible outcomes: yes or no, win or lose, on or off. More generally, a random process could have a result that is a number. Some processes yield integers, as when you roll a die and get a result from one to six, but some are not restricted to whole numbers, for example the number of seconds that a uranium-238 atom will exist before undergoing radioactive decay.

Consider a throw of a die. If the die is “honest,” then we expect all six values to be equally likely. Since all six probabilities must add up to 1, the probability of any particular value coming up must be 1/6. We can summarize this in a graph, d. Areas under the curve can be interpreted as total probabilities. For instance, the area under the curve from 1 to 3 is \(1/6+1/6+1/6=1/2\), so the probability of getting a result from 1 to 3 is 1/2. The function shown on the graph is called the probability distribution.

Figure e shows the probabilities of various results obtained by rolling two dice and adding them together, as in the game of craps. The probabilities are not all the same. There is a small probability of getting a two, for example, because there is only one way to do it, by rolling a one and then another one. The probability of rolling a seven is high because there are six different ways to do it: 1+6, 2+5, etc.
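These probabilities can be recovered by brute-force counting, since all 36 ordered rolls are equally likely. A minimal Python sketch:

```python
from collections import Counter
from itertools import product

# enumerate all 36 equally likely ordered rolls of two dice
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
for total in range(2, 13):
    print(total, counts[total], counts[total] / 36)
# a 2 can happen only one way (1+1), probability 1/36;
# a 7 can happen six ways, probability 6/36 = 1/6.
```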

If the number of possible outcomes is large but finite, for example the number of hairs on a dog, the graph would start to look like a smooth curve rather than a ziggurat.

What about probability distributions for random numbers that are not integers? We can no longer make a graph with probability on the \(y\) axis, because the probability of getting a given exact number is typically zero. For instance, there is zero probability that a radioactive atom will last for exactly 3 seconds, since there are infinitely many possible results that are close to 3 but not exactly three: 2.999999999999999996876876587658465436, for example. It doesn't usually make sense, therefore, to talk about the probability of a single numerical result, but it does make sense to talk about the probability of a certain range of results. For instance, the probability that an atom will last more than 3 and less than 4 seconds is a perfectly reasonable thing to discuss. We can still summarize the probability information on a graph, and we can still interpret areas under the curve as probabilities.

But the \(y\) axis can no longer be a unitless probability scale. In radioactive decay, for example, we want the \(x\) axis to have units of time, and we want areas under the curve to be unitless probabilities. The area of a single square on the graph paper is then

\[\begin{gather*} \text{(unitless area of a square)} \\ = \text{(width of square with time units)}\\ \times \text{(height of square)} . \end{gather*}\]

If the units are to cancel out, then the height of the square must evidently be a quantity with units of inverse time. In other words, the \(y\) axis of the graph is to be interpreted as probability per unit time, not probability.

Figure f shows another example, a probability distribution for people's height. This kind of bell-shaped curve is quite common.

self-check:

Compare the number of people with heights in the range of 130-135 cm to the number in the range 135-140.

(answer in the back of the PDF version of the book)
Example 1: Looking for tall basketball players
\(\triangleright\) A certain country with a large population wants to find very tall people to be on its Olympic basketball team and strike a blow against western imperialism. Out of a pool of \(10^8\) people who are the right age and gender, how many are they likely to find who are over 225 cm (7 feet 4 inches) in height? Figure g gives a close-up of the “tail” of the distribution shown previously in figure f.

\(\triangleright\) The shaded area under the curve represents the probability that a given person is tall enough. Each rectangle represents a probability of \(0.2\times10^{-7}\ \text{cm}^{-1} \times 1\ \text{cm}=2\times10^{-8}\). There are about 35 rectangles covered by the shaded area, so the probability of having a height greater than 225 cm is \(7\times10^{-7}\) , or just under one in a million. Using the rule for calculating averages, the average, or expected number of people this tall is \((10^8)\times(7\times10^{-7})=70\).

Average and width of a probability distribution

If the next Martian you meet asks you, “How tall is an adult human?,” you will probably reply with a statement about the average human height, such as “Oh, about 5 feet 6 inches.” If you wanted to explain a little more, you could say, “But that's only an average. Most people are somewhere between 5 feet and 6 feet tall.” Without bothering to draw the relevant bell curve for your new extraterrestrial acquaintance, you've summarized the relevant information by giving an average and a typical range of variation.

The average of a probability distribution can be defined geometrically as the horizontal position at which it could be balanced if it was constructed out of cardboard. A convenient numerical measure of the amount of variation about the average, or amount of uncertainty, is the full width at half maximum, or FWHM, shown in figure i.

A great deal more could be said about this topic, and indeed an introductory statistics course could spend months on ways of defining the center and width of a distribution. Rather than force-feeding you mathematical detail or techniques for calculating these things, it is perhaps more relevant to point out simply that there are various ways of defining them, and to inoculate you against the misuse of certain definitions.

The average is not the only possible way to say what is a typical value for a quantity that can vary randomly; another possible definition is the median, defined as the value that is exceeded with 50% probability. When discussing incomes of people living in a certain town, the average could be very misleading, since it can be affected massively if a single resident of the town is Bill Gates. Nor is the FWHM the only possible way of stating the amount of random variation; another possible way of measuring it is the standard deviation (defined as the square root of the average squared deviation from the average value).
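A short Python sketch with made-up incomes shows how one extreme resident drags the average (and the standard deviation) far from anything typical, while the median stays put:

```python
import statistics

# 999 typical residents plus one Bill Gates (all numbers invented for illustration)
incomes = [30_000] * 999 + [100_000_000_000]

print(statistics.mean(incomes))    # about 1.0e8: wildly unrepresentative
print(statistics.median(incomes))  # 30000: still describes a typical resident
print(statistics.stdev(incomes))   # also dominated entirely by the outlier
```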

13.1.4 Exponential decay and half-life

Half-life

Most people know that radioactivity “lasts a certain amount of time,” but that simple statement leaves out a lot. As an example, consider the following medical procedure used to diagnose thyroid function. A very small quantity of the isotope \(^{131}\text{I}\), produced in a nuclear reactor, is fed to or injected into the patient. The body's biochemical systems treat this artificial, radioactive isotope exactly the same as \(^{127}\text{I}\), which is the only naturally occurring type. (Nutritionally, iodine is a necessary trace element. Iodine taken into the body is partly excreted, but the rest becomes concentrated in the thyroid gland. Iodized salt has had iodine added to it to prevent the nutritional deficiency known as goiters, in which the iodine-starved thyroid becomes swollen.) As the \(^{131}\text{I}\) undergoes beta decay, it emits electrons, neutrinos, and gamma rays. The gamma rays can be measured by a detector passed over the patient's body. As the radioactive iodine becomes concentrated in the thyroid, the amount of gamma radiation coming from the thyroid becomes greater, and that emitted by the rest of the body is reduced. The rate at which the iodine concentrates in the thyroid tells the doctor about the health of the thyroid.

If you ever undergo this procedure, someone will presumably explain a little about radioactivity to you, to allay your fears that you will turn into the Incredible Hulk, or that your next child will have an unusual number of limbs. Since iodine stays in your thyroid for a long time once it gets there, one thing you'll want to know is whether your thyroid is going to become radioactive forever. They may just tell you that the radioactivity “only lasts a certain amount of time,” but we can now carry out a quantitative derivation of how the radioactivity really will die out.

Let \(P_{surv}(t)\) be the probability that an iodine atom will survive without decaying for a period of at least \(t\). It has been experimentally measured that half of all \(^{131}\text{I}\) atoms decay in 8 hours, so we have

\[\begin{equation*} P_{surv}(8\ \text{hr}) = 0.5 . \end{equation*}\]

Now using the law of independent probabilities, the probability of surviving for 16 hours equals the probability of surviving for the first 8 hours multiplied by the probability of surviving for the second 8 hours,

\[\begin{align*} P_{surv}(16\ \text{hr}) &= 0.50\times0.50 \\ &= 0.25 . \end{align*}\]

Similarly we have

\[\begin{align*} P_{surv}(24\ \text{hr}) &= 0.50\times0.5\times0.5 \\ &= 0.125 . \end{align*}\]

Generalizing from this pattern, the probability of surviving for any time \(t\) that is a multiple of 8 hours is

\[\begin{equation*} P_{surv}(t) = 0.5^{t/8\ \text{hr}} . \end{equation*}\]

We now know how to find the probability of survival at intervals of 8 hours, but what about the points in time in between? What would be the probability of surviving for 4 hours? Well, using the law of independent probabilities again, we have

\[\begin{equation*} P_{surv}(8\ \text{hr}) = P_{surv}(4\ \text{hr}) \times P_{surv}(4\ \text{hr}) , \end{equation*}\]

which can be rearranged to give

\[\begin{align*} P_{surv}(4\ \text{hr}) &= \sqrt{P_{surv}(8\ \text{hr})} \\ &= \sqrt{0.5} \\ &= 0.707 . \end{align*}\]

This is exactly what we would have found simply by plugging in \(P_{surv}(t)=0.5^{t/8\ \text{hr}}\) and ignoring the restriction to multiples of 8 hours. Since 8 hours is the amount of time required for half of the atoms to decay, it is known as the half-life, written \(t_{1/2}\). The general rule is as follows:

Exponential Decay Equation

\[\begin{equation*} P_{surv}(t) = 0.5^{t/t_{1/2}} \end{equation*}\]

Using the rule for calculating averages, we can also find the number of atoms, \(N(t)\), remaining in a sample at time \(t\):

\[\begin{equation*} N(t) = N(0) \times 0.5^{t/t_{1/2}} \end{equation*}\]

Both of these equations have graphs that look like dying-out exponentials, as in the example below.
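Both equations are one-liners to compute. A minimal Python sketch, using the 8-hour half-life quoted above for \(^{131}\text{I}\):

```python
def p_surv(t, t_half):
    """Probability that a single atom survives for at least a time t."""
    return 0.5 ** (t / t_half)

def n_remaining(n0, t, t_half):
    """Expected number of atoms remaining at time t, out of n0 initially."""
    return n0 * p_surv(t, t_half)

t_half = 8.0  # hours, the value quoted above for iodine-131
for t in [0, 4, 8, 16, 24]:
    print(t, p_surv(t, t_half))  # 1, 0.707..., 0.5, 0.25, 0.125
```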

Example 2: Radioactive contamination at Chernobyl

\(\triangleright\) One of the most dangerous radioactive isotopes released by the Chernobyl disaster in 1986 was \(^{90}\text{Sr}\), whose half-life is 28 years. (a) How long will it be before the contamination is reduced to one tenth of its original level? (b) If a total of \(10^{27}\) atoms was released, about how long would it be before not a single atom was left?

\(\triangleright\) (a) We want to know the amount of time that a \(^{90}\text{Sr}\) nucleus has a probability of 0.1 of surviving. Starting with the exponential decay formula,

\[\begin{equation*} P_{surv} = 0.5^{t/t_{1/2}} , \end{equation*}\]

we want to solve for \(t\). Taking natural logarithms of both sides,

\[\begin{equation*} \ln P = \frac{t}{t_{1/2}}\ln 0.5 , \end{equation*}\]

so

\[\begin{equation*} t = \frac{t_{1/2}}{\ln 0.5}\ln P \end{equation*}\]

Plugging in \(P=0.1\) and \(t_{1/2}=28\) years, we get \(t=93\) years.

(b) This is just like the first part, but \(P=10^{-27}\) . The result is about 2500 years.
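The same solution can be checked numerically; a minimal Python sketch:

```python
import math

def time_to_reach(p_surv, t_half):
    """Time at which the survival probability has fallen to p_surv."""
    return t_half * math.log(p_surv) / math.log(0.5)

print(time_to_reach(0.1, 28))    # about 93 years
print(time_to_reach(1e-27, 28))  # about 2500 years
```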


j / Calibration of the \(^{14}\text{C}\) dating method using tree rings and artifacts whose ages were known from other methods. Redrawn from Emilio Segrè, Nuclei and Particles, 1965.

Example 3: \(^{14}\text{C}\) Dating
Almost all the carbon on Earth is \(^{12}\text{C}\), but not quite. The isotope \(^{14}\text{C}\), with a half-life of 5600 years, is produced by cosmic rays in the atmosphere. It decays naturally, but is replenished at such a rate that the fraction of \(^{14}\text{C}\) in the atmosphere remains constant, at \(1.3\times10^{-12}\). Living plants and animals take in both \(^{12}\text{C}\) and \(^{14}\text{C}\) from the atmosphere and incorporate both into their bodies. Once the living organism dies, it no longer takes in C atoms from the atmosphere, and the proportion of \(^{14}\text{C}\) gradually falls off as it undergoes radioactive decay. This effect can be used to find the age of dead organisms, or human artifacts made from plants or animals. Figure j on page 834 shows the exponential decay curve of \(^{14}\text{C}\) in various objects. Similar methods, using longer-lived isotopes, provided the first firm proof that the earth was billions of years old, not a few thousand as some had claimed on religious grounds.
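Inverting the exponential decay equation turns a measured \(^{14}\text{C}\) fraction into an age. A minimal Python sketch, using the numbers quoted above:

```python
import math

def c14_age(fraction_now, t_half=5600.0, fraction_alive=1.3e-12):
    """Age of a dead sample, from its measured 14C fraction."""
    return t_half * math.log(fraction_now / fraction_alive) / math.log(0.5)

# a sample whose 14C fraction has fallen to a quarter of the living value
print(c14_age(0.25 * 1.3e-12))  # two half-lives: 11200 years
```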

Rate of decay

If you want to find how many radioactive decays occur within a time interval lasting from time \(t\) to time \(t+\Delta t\), the most straightforward approach is to calculate it like this:

\[\begin{align*} (\text{number of}&\text{ decays between } t \text{ and } t+\Delta t) \\ &= N(t) - N(t+\Delta t) \end{align*}\]

Usually we're interested in the case where \(\Delta t\) is small compared to \(t_{1/2}\), and in this limiting case the calculation starts to look exactly like the limit that goes into the definition of the derivative \(dN/dt\). It is therefore more convenient to talk about the rate of decay \(-dN/dt\) rather than the number of decays in some finite time interval. Doing calculus on the function \(e^x\) is also easier than with \(0.5^x\), so we rewrite the function \(N(t)\) as

\[\begin{equation*} N = N(0) e^{-t/\tau} , \end{equation*}\]

where \(\tau=t_{1/2}/\ln 2\) is shown in example 6 on p. 837 to be the average time of survival. The rate of decay is then

\[\begin{equation*} -\frac{dN}{dt} = \frac{N(0)}{\tau} e^{-t/\tau} . \end{equation*}\]

Mathematically, differentiating an exponential just gives back another exponential. Physically, this is telling us that as \(N\) falls off exponentially, the rate of decay falls off at the same exponential rate, because a lower \(N\) means fewer atoms remain available to decay.

self-check:

Check that both sides of the equation for the rate of decay have units of \(\text{s}^{-1}\), i.e., decays per unit time.

(answer in the back of the PDF version of the book)
Example 4: The hot potato

\(\triangleright\) A nuclear physicist with a demented sense of humor tosses you a cigar box, yelling “hot potato.” The label on the box says “contains \(10^{20}\) atoms of \(^{17}\text{F}\), half-life of 66 s, produced today in our reactor at 1 p.m.” It takes you two seconds to read the label, after which you toss it behind some lead bricks and run away. The time is 1:40 p.m. Will you die?

\(\triangleright\) The time elapsed since the radioactive fluorine was produced in the reactor was 40 minutes, or 2400 s. The number of elapsed half-lives is therefore \(t/t_{1/2}= 36\). The initial number of atoms was \(N(0)=10^{20}\) . The number of decays per second is now about \(10^7\ \text{s}^{-1}\), so it produced about \(2\times10^7\) high-energy electrons while you held it in your hands. Although twenty million electrons sounds like a lot, it is not really enough to be dangerous.
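The arithmetic in this example can be verified with a few lines of Python:

```python
import math

n0, t_half = 1e20, 66.0      # atoms of 17F; half-life in seconds
tau = t_half / math.log(2)   # average survival time, about 95 s
t = 40 * 60                  # seconds elapsed since 1 p.m.

rate = (n0 / tau) * math.exp(-t / tau)  # -dN/dt, decays per second
print(rate)      # about 1e7 s^-1
print(2 * rate)  # about 2e7 decays during the two seconds of reading
```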

By the way, none of the equations we've derived so far was the actual probability distribution for the time at which a particular radioactive atom will decay. That probability distribution would be found by substituting \(N(0)=1\) into the equation for the rate of decay.

Discussion Questions

In the medical procedure involving \(^{131}\text{I}\), why is it the gamma rays that are detected, not the electrons or neutrinos that are also emitted?

For 1 s, Fred holds in his hands 1 kg of radioactive stuff with a half-life of 1000 years. Ginger holds 1 kg of a different substance, with a half-life of 1 min, for the same amount of time. Did they place themselves in equal danger, or not?

How would you interpret it if you calculated \(N(t)\), and found it was less than one?

Does the half-life depend on how much of the substance you have? Does the expected time until the sample decays completely depend on how much of the substance you have?

13.1.5 Applications of calculus

The area under the probability distribution is of course an integral. If we call the random number \(x\) and the probability distribution \(D(x)\), then the probability that \(x\) lies in a certain range is given by

\[\begin{equation*} \text{(probability of $a\le x \le b$)}=\int_a^b D(x) dx . \end{equation*}\]

What about averages? If \(x\) had a finite number of equally probable values, we would simply add them up and divide by how many we had. If they weren't equally likely, we'd make the weighted average \(x_1P_1+x_2P_2+\)... But we need to generalize this to a variable \(x\) that can take on any of a continuum of values. The continuous version of a sum is an integral, so the average is

\[\begin{equation*} \text{(average value of $x$)} = \int x D(x) dx , \end{equation*}\]

where the integral is over all possible values of \(x\).

Example 5: Probability distribution for radioactive decay

Here is a rigorous justification for the statement in subsection 13.1.4 that the probability distribution for radioactive decay is found by substituting \(N(0)=1\) into the equation for the rate of decay. We know that the probability distribution must be of the form

\[\begin{equation*} D(t) = k 0.5^{t/t_{1/2}} , \end{equation*}\]

where \(k\) is a constant that we need to determine. The atom is guaranteed to decay eventually, so normalization gives us

\[\begin{align*} \text{(probability of $0\le t \lt \infty$)} &= 1 \\ &= \int_0^\infty D(t) dt . \end{align*}\]

The integral is most easily evaluated by converting the function into an exponential with \(e\) as the base

\[\begin{align*} D(t) &= k \exp\left[\ln\left(0.5^{t/t_{1/2}}\right)\right] \\ &= k \exp\left[\frac{t}{t_{1/2}}\ln 0.5\right] \\ &= k \exp\left(-\frac{\ln 2}{t_{1/2}}t\right) , \end{align*}\]

which gives an integral of the familiar form \(\int e^{cx}dx=(1/c)e^{cx}\). We thus have

\[\begin{equation*} 1 = \left.-\frac{kt_{1/2}}{\ln 2}\exp\left(-\frac{\ln 2}{t_{1/2}}t\right)\right]_0^\infty , \end{equation*}\]

which gives the desired result:

\[\begin{equation*} k = \frac{\ln 2}{t_{1/2}} . \end{equation*}\]

Example 6: Average lifetime
You might think that the half-life would also be the average lifetime of an atom, since half the atoms' lives are shorter and half longer. But the half whose lives are longer include some that survive for many half-lives, and these rare long-lived atoms skew the average. We can calculate the average lifetime as follows:
\[\begin{equation*} (\text{average lifetime}) = \int_0^\infty t\: D(t)dt \end{equation*}\]
Using the convenient base-\(e\) form again, we have
\[\begin{equation*} (\text{average lifetime}) = \frac{\ln 2}{t_{1/2}} \int_0^\infty t \exp\left(-\frac{\ln 2}{t_{1/2}}t\right) dt . \end{equation*}\]
This integral is of a form that can either be attacked with integration by parts or by looking it up in a table. The result is \(\int x e^{cx}dx=\frac{x}{c}e^{cx}-\frac{1}{c^2}e^{cx}\), and the first term can be ignored for our purposes because it equals zero at both limits of integration. We end up with
\[\begin{align*} \text{(average lifetime)} &= \frac{\ln 2}{t_{1/2}}\left(\frac{t_{1/2}}{\ln 2}\right)^2 \\ &= \frac{t_{1/2}}{\ln 2} \\ &= 1.443 \: t_{1/2} , \end{align*}\]
which is, as expected, longer than one half-life.
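Both results, the normalization constant from example 5 and the average lifetime from example 6, can be checked by crude numerical integration. A minimal Python sketch, working in units where \(t_{1/2}=1\):

```python
import math

t_half = 1.0              # work in units of one half-life
k = math.log(2) / t_half  # normalization constant from example 5

def D(t):
    return k * 0.5 ** (t / t_half)

# rectangle-rule integration out to 40 half-lives, where D is negligible
dt = 0.001
ts = [i * dt for i in range(int(40 / dt))]
print(sum(D(t) * dt for t in ts))      # close to 1 (normalization)
print(sum(t * D(t) * dt for t in ts))  # close to 1/ln 2 = 1.443 half-lives
```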


k / In recent decades, a huge hole in the ozone layer has spread out from Antarctica. Left: November 1978. Right: November 1992

13.2 Light As a Particle

The only thing that interferes with my learning is my education. -- Albert Einstein

Radioactivity is random, but do the laws of physics exhibit randomness in other contexts besides radioactivity? Yes. Radioactive decay was just a good playpen to get us started with concepts of randomness, because all atoms of a given isotope are identical. By stocking the playpen with an unlimited supply of identical atom-toys, nature helped us to realize that their future behavior could be different regardless of their original identicality. We are now ready to leave the playpen, and see how randomness fits into the structure of physics at the most fundamental level.

The laws of physics describe light and matter, and the quantum revolution rewrote both descriptions. Radioactivity was a good example of matter's behaving in a way that was inconsistent with classical physics, but if we want to get under the hood and understand how nonclassical things happen, it will be easier to focus on light rather than matter. A radioactive atom such as uranium-235 is after all an extremely complex system, consisting of 92 protons, 143 neutrons, and 92 electrons. Light, however, can be a simple sine wave.

However successful the classical wave theory of light had been --- allowing the creation of radio and radar, for example --- it still failed to describe many important phenomena. An example that is currently of great interest is the way the ozone layer protects us from the dangerous short-wavelength ultraviolet part of the sun's spectrum. In the classical description, light is a wave. When a wave passes into and back out of a medium, its frequency is unchanged, and although its wavelength is altered while it is in the medium, it returns to its original value when the wave reemerges. Luckily for us, this is not at all what ultraviolet light does when it passes through the ozone layer, or the layer would offer no protection at all!


b / A wave is partially absorbed.


c / A stream of particles is partially absorbed.


d / Einstein and Seurat: twins separated at birth? La Grande Jatte by Georges Seurat (19th century).

13.2.1 Evidence for light as a particle

For a long time, physicists tried to explain away the problems with the classical theory of light as arising from an imperfect understanding of atoms and the interaction of light with individual atoms and molecules. The ozone paradox, for example, could have been attributed to the incorrect assumption that one could think of the ozone layer as a smooth, continuous substance, when in reality it was made of individual ozone molecules. It wasn't until 1905 that Albert Einstein threw down the gauntlet, proposing that the problem had nothing to do with the details of light's interaction with atoms and everything to do with the fundamental nature of light itself.


a / Digital camera images of dimmer and dimmer sources of light. The dots are records of individual photons.

In those days the data were sketchy, the ideas vague, and the experiments difficult to interpret; it took a genius like Einstein to cut through the thicket of confusion and find a simple solution. Today, however, we can get right to the heart of the matter with a piece of ordinary consumer electronics, the digital camera. Instead of film, a digital camera has a computer chip with its surface divided up into a grid of light-sensitive squares, called “pixels.” Compared to a grain of the silver compound used to make regular photographic film, a digital camera pixel is activated by an amount of light energy orders of magnitude smaller. We can learn something new about light by using a digital camera to detect smaller and smaller amounts of light, as shown in figure a. Figure a/1 is fake, but a/2 and a/3 are real digital-camera images made by Prof. Lyman Page of Princeton University as a classroom demonstration. Figure a/1 is what we would see if we used the digital camera to take a picture of a fairly dim source of light. In figures a/2 and a/3, the intensity of the light was drastically reduced by inserting semitransparent absorbers like the tinted plastic used in sunglasses. Going from a/1 to a/2 to a/3, more and more light energy is being thrown away by the absorbers.

The results are drastically different from what we would expect based on the wave theory of light. If light was a wave and nothing but a wave, b, then the absorbers would simply cut down the wave's amplitude across the whole wavefront. The digital camera's entire chip would be illuminated uniformly, and weakening the wave with an absorber would just mean that every pixel would take a long time to soak up enough energy to register a signal.

But figures a/2 and a/3 show that some pixels take strong hits while others pick up no energy at all. Instead of the wave picture, the image that is naturally evoked by the data is something more like a hail of bullets from a machine gun, c. Each “bullet” of light apparently carries only a tiny amount of energy, which is why detecting them individually requires a sensitive digital camera rather than an eye or a piece of film.

Although Einstein was interpreting different observations, this is the conclusion he reached in his 1905 paper: that the pure wave theory of light is an oversimplification, and that the energy of a beam of light comes in finite chunks rather than being spread smoothly throughout a region of space.

We now think of these chunks as particles of light, and call them “photons,” although Einstein avoided the word “particle,” and the word “photon” was invented later. Regardless of words, the trouble was that waves and particles seemed like inconsistent categories. The reaction to Einstein's paper could be kindly described as vigorously skeptical. Even twenty years later, Einstein wrote, “There are therefore now two theories of light, both indispensable, and --- as one must admit today despite twenty years of tremendous effort on the part of theoretical physicists --- without any logical connection.” In the remainder of this chapter we will learn how the seeming paradox was eventually resolved.

Discussion Questions

Suppose someone rebuts the digital camera data in figure a, claiming that the random pattern of dots occurs not because of anything fundamental about the nature of light but simply because the camera's pixels are not all exactly the same --- some are just more sensitive than others. How could we test this interpretation?

Discuss how the correspondence principle applies to the observations and concepts discussed in this section.


e / Apparatus for observing the photoelectric effect. A beam of light strikes a capacitor plate inside a vacuum tube, and electrons are ejected (black arrows).


f / The hamster in her hamster ball is like an electron emerging from the metal (tiled kitchen floor) into the surrounding vacuum (wood floor). The wood floor is higher than the tiled floor, so as she rolls up the step, the hamster will lose a certain amount of kinetic energy, analogous to \(E_s\). If her kinetic energy is too small, she won't even make it up the step.


g / A different way of studying the photoelectric effect.


h / The quantity \(E_s+e\Delta V\) indicates the energy of one photon. It is found to be proportional to the frequency of the light.

13.2.2 How much light is one photon?

The photoelectric effect

We have seen evidence that light energy comes in little chunks, so the next question to be asked is naturally how much energy is in one chunk. The most straightforward experimental avenue for addressing this question is a phenomenon known as the photoelectric effect. The photoelectric effect occurs when a photon strikes the surface of a solid object and knocks out an electron. It occurs continually all around you. It is happening right now at the surface of your skin and on the paper or computer screen from which you are reading these words. It does not ordinarily lead to any observable electrical effect, however, because on the average free electrons are wandering back in just as frequently as they are being ejected. (If an object did somehow lose a significant number of electrons, its growing net positive charge would begin attracting the electrons back more and more strongly.)

Figure e shows a practical method for detecting the photoelectric effect. Two very clean parallel metal plates (the electrodes of a capacitor) are sealed inside a vacuum tube, and only one plate is exposed to light. Because there is a good vacuum between the plates, any ejected electron that happens to be headed in the right direction will almost certainly reach the other capacitor plate without colliding with any air molecules.

The illuminated (bottom) plate is left with a net positive charge, and the unilluminated (top) plate acquires a negative charge from the electrons deposited on it. There is thus an electric field between the plates, and it is because of this field that the electrons' paths are curved, as shown in the diagram. However, since vacuum is a good insulator, any electrons that reach the top plate are prevented from responding to the electrical attraction by jumping back across the gap. Instead they are forced to make their way around the circuit, passing through an ammeter. The ammeter allows a measurement of the strength of the photoelectric effect.

An unexpected dependence on frequency

The photoelectric effect was discovered serendipitously by Heinrich Hertz in 1887, as he was experimenting with radio waves. He was not particularly interested in the phenomenon, but he did notice that the effect was produced strongly by ultraviolet light and more weakly by lower frequencies. Light whose frequency was lower than a certain critical value did not eject any electrons at all. (In fact this was all prior to Thomson's discovery of the electron, so Hertz would not have described the effect in terms of electrons --- we are discussing everything with the benefit of hindsight.) This dependence on frequency didn't make any sense in terms of the classical wave theory of light. A light wave consists of electric and magnetic fields. The stronger the fields, i.e., the greater the wave's amplitude, the greater the forces that would be exerted on electrons that found themselves bathed in the light. It should have been amplitude (brightness) that was relevant, not frequency. The dependence on frequency not only proves that the wave model of light needs modifying, but with the proper interpretation it allows us to determine how much energy is in one photon, and it also leads to a connection between the wave and particle models that we need in order to reconcile them.

To make any progress, we need to consider the physical process by which a photon would eject an electron from the metal electrode. A metal contains electrons that are free to move around. Ordinarily, in the interior of the metal, such an electron feels attractive forces from atoms in every direction around it. The forces cancel out. But if the electron happens to find itself at the surface of the metal, the attraction from the interior side is not balanced out by any attraction from outside. In popping out through the surface the electron therefore loses some amount of energy \(E_s\), which depends on the type of metal used.

Suppose a photon strikes an electron, annihilating itself and giving up all its energy to the electron. (We now know that this is what always happens in the photoelectric effect, although it had not yet been established in 1905 whether or not the photon was completely annihilated.) The electron will (1) lose kinetic energy through collisions with other electrons as it plows through the metal on its way to the surface; (2) lose an amount of kinetic energy equal to \(E_s\) as it emerges through the surface; and (3) lose more energy on its way across the gap between the plates, due to the electric field between the plates. Even if the electron happens to be right at the surface of the metal when it absorbs the photon, and even if the electric field between the plates has not yet built up very much, \(E_s\) is the bare minimum amount of energy that it must receive from the photon if it is to contribute to a measurable current. The reason for using very clean electrodes is to minimize \(E_s\) and make it have a definite value characteristic of the metal surface, not a mixture of values due to the various types of dirt and crud that are present in tiny amounts on all surfaces in everyday life.

We can now interpret the frequency dependence of the photoelectric effect in a simple way: apparently the amount of energy possessed by a photon is related to its frequency. A low-frequency red or infrared photon has an energy less than \(E_s\), so a beam of them will not produce any current. A high-frequency blue or violet photon, on the other hand, packs enough of a punch to allow an electron to make it to the other plate. At frequencies higher than the minimum, the photoelectric current continues to increase with the frequency of the light because of effects (1) and (3).

Numerical relationship between energy and frequency

Prompted by Einstein's photon paper, Robert Millikan (whom we first encountered in chapter 8) figured out how to use the photoelectric effect to probe precisely the link between frequency and photon energy. Rather than going into the historical details of Millikan's actual experiments (a lengthy experimental program that occupied a large part of his professional career) we will describe a simple version, shown in figure g, that is sometimes used in college laboratory courses. The idea is simply to illuminate one plate of the vacuum tube with light of a single wavelength and monitor the voltage difference between the two plates as they charge up. Since the resistance of a voltmeter is very high (much higher than the resistance of an ammeter), we can assume to a good approximation that electrons reaching the top plate are stuck there permanently, so the voltage will keep on increasing for as long as electrons are making it across the vacuum tube.

At the moment when the voltage difference has reached a value \(\Delta V\), the minimum energy required by an electron to make it out of the bottom plate and across the gap to the other plate is \(E_s+e\Delta V\). As \(\Delta V\) increases, we eventually reach a point at which \(E_s+e\Delta V\) equals the energy of one photon. No more electrons can cross the gap, and the reading on the voltmeter stops rising. The quantity \(E_s+e\Delta V\) now tells us the energy of one photon. If we determine this energy for a variety of wavelengths, h, we find the following simple relationship between the energy of a photon and the frequency of the light:

\[\begin{equation*} E = hf , \end{equation*}\]

where \(h\) is a constant with the value \(6.63\times10^{-34}\ \text{J}\cdot\text{s}\). Note how the equation brings the wave and particle models of light under the same roof: the left side is the energy of one particle of light, while the right side is the frequency of the same light, interpreted as a wave. The constant \(h\) is known as Planck's constant, for historical reasons explained in the footnote beginning on the preceding page.

self-check:

How would you extract \(h\) from the graph in figure h? What if you didn't even know \(E_s\) in advance, and could only graph \(e\Delta V\) versus \(f\)?

(answer in the back of the PDF version of the book)

Since the energy of a photon is \(hf\), a beam of light can only have energies of \(hf\), \(2hf\), \(3hf\), etc. Its energy is quantized --- there is no such thing as a fraction of a photon. Quantum physics gets its name from the fact that it quantizes quantities like energy, momentum, and angular momentum that had previously been thought to be smooth, continuous and infinitely divisible.

Example 7: Number of photons emitted by a lightbulb per second
\(\triangleright\) Roughly how many photons are emitted by a 100-W lightbulb in 1 second?

\(\triangleright\) People tend to remember wavelengths rather than frequencies for visible light. The bulb emits photons with a range of frequencies and wavelengths, but let's take 600 nm as a typical wavelength for purposes of estimation. The energy of a single photon is

\[\begin{align*} E_{photon} &= hf \\ &= hc/\lambda \end{align*}\]

A power of 100 W means 100 joules per second, so the number of photons is

\[\begin{align*} (100\ \text{J})/E_{photon} &= (100\ \text{J}) / (hc/\lambda ) \\ &\approx 3\times10^{20} \end{align*}\]

The hugeness of this number is consistent with the correspondence principle. The experiments that established the classical theory of optics weren't wrong. They were right, within their domain of applicability, in which the number of photons was so large as to be indistinguishable from a continuous beam.
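The estimate is easy to reproduce; a minimal Python sketch:

```python
h = 6.63e-34         # J·s, Planck's constant
c = 3.00e8           # m/s, speed of light
wavelength = 600e-9  # m, the typical visible wavelength assumed above

e_photon = h * c / wavelength  # about 3.3e-19 J per photon
print(100 / e_photon)          # about 3e20 photons per second from a 100-W bulb
```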

Example 8: Measuring the wave
When surfers are out on the water waiting for their chance to catch a wave, they're interested in both the height of the waves and when the waves are going to arrive. In other words, they observe both the amplitude and phase of the waves, and it doesn't matter to them that the water is granular at the molecular level. The correspondence principle requires that we be able to do the same thing for electromagnetic waves, since the classical theory of electricity and magnetism was all stated and verified experimentally in terms of the fields \(\mathbf{E}\) and \(\mathbf{B}\), which are the amplitudes of an electromagnetic wave. The phase is also necessary, since the induction effects predicted by Maxwell's equations would flip their signs depending on whether an oscillating field is on its way up or on its way back down.

This is a more demanding application of the correspondence principle than the one in example 7, since amplitudes and phases constitute more detailed information than the over-all intensity of a beam of light. Eyeball measurements can't detect this type of information, since the eye is much bigger than a wavelength, but for example an AM radio receiver can do it with radio waves, since the wavelength for a station at 1000 kHz is about 300 meters, which is much larger than the antenna. The correspondence principle demands that we be able to explain this in terms of the photon theory, and this requires not just that we have a large number of photons emitted by the transmitter per second, as in example 7, but that even by the time they spread out and reach the receiving antenna, there should be many photons overlapping each other within a space of one cubic wavelength. Problem 47 on p. 905 verifies that the number is in fact extremely large.

Example 9: Momentum of a photon

\(\triangleright\) According to the theory of relativity, the momentum of a beam of light is given by \(p=E/c\). Apply this to find the momentum of a single photon in terms of its frequency, and in terms of its wavelength.

\(\triangleright\) Combining the equations \(p=E/c\) and \(E=hf\), we find

\[\begin{align*} p &= E/c \\ &= \frac{h}{c}f . \end{align*}\]

To reexpress this in terms of wavelength, we use \(c=f\lambda \):

\[\begin{align*} p &= \frac{h}{c}\cdot\frac{c}{\lambda} \\ &= \frac{h}{\lambda} \end{align*}\]

The second form turns out to be simpler.
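For a sense of scale, a minimal Python sketch evaluating \(p=h/\lambda\) for the same 600 nm wavelength used in example 7:

```python
h = 6.63e-34         # J·s, Planck's constant
wavelength = 600e-9  # m

print(h / wavelength)  # about 1.1e-27 kg·m/s: a very small momentum per photon
```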

Discussion Questions

A. The photoelectric effect only ever ejects a very tiny percentage of the electrons available near the surface of an object. How well does this agree with the wave model of light, and how well with the particle model? Consider the two different distance scales involved: the wavelength of the light, and the size of an atom, which is on the order of \(10^{-10}\) or \(10^{-9}\) m.

B. What is the significance of the fact that Planck's constant is numerically very small? How would our everyday experience of light be different if it was not so small?

C. How would the experiments described above be affected if a single electron was likely to get hit by more than one photon?

D. Draw some representative trajectories of electrons for \(\Delta V=0\), \(\Delta V\) less than the maximum value, and \(\Delta V\) greater than the maximum value.

E. Explain based on the photon theory of light why ultraviolet light would be more likely than visible or infrared light to cause cancer by damaging DNA molecules. How does this relate to discussion question C?

F. Does \(E=hf\) imply that a photon changes its energy when it passes from one transparent material into another substance with a different index of refraction?

double-slit-bullets

j / Bullets pass through a double slit.

interference

k / A water wave passes through a double slit.

skier

l / A single photon can go through both slits.

carrot

m / Example 10.

13.2.3 Wave-particle duality

How can light be both a particle and a wave? We are now ready to resolve this seeming contradiction. Often in science when something seems paradoxical, it's because we (1) don't define our terms carefully, or (2) don't test our ideas against any specific real-world situation. Let's define particles and waves as follows:

  - Waves exhibit superposition, and specifically interference phenomena.
  - Particles can only exist in whole numbers, not fractions.

As a real-world check on our philosophizing, there is one particular experiment that works perfectly. We set up a double-slit interference experiment that we know will produce a diffraction pattern if light is an honest-to-goodness wave, but we detect the light with a detector that is capable of sensing individual photons, e.g., a digital camera. To make it possible to pick out individual dots due to individual photons, we must use filters to cut down the intensity of the light to a very low level, just as in the photos by Prof. Page on p. 839. The whole thing is sealed inside a light-tight box. The results are shown in figure i. (In fact, the similar figures on page 839 are simply cutouts from these figures.)

ccd-diffraction

i / Wave interference patterns photographed by Prof. Lyman Page with a digital camera. Laser light with a single well-defined wavelength passed through a series of absorbers to cut down its intensity, then through a set of slits to produce interference, and finally into a digital camera chip. (A triple slit was actually used, but for conceptual simplicity we discuss the results in the main text as if it was a double slit.) In panel 2 the intensity has been reduced relative to 1, and even more so for panel 3.

Neither the pure wave theory nor the pure particle theory can explain the results. If light was only a particle and not a wave, there would be no interference effect. The result of the experiment would be like firing a hail of bullets through a double slit, j. Only two spots directly behind the slits would be hit.

If, on the other hand, light was only a wave and not a particle, we would get the same kind of diffraction pattern that would happen with a water wave, k. There would be no discrete dots in the photo, only a diffraction pattern that shaded smoothly between light and dark.

Applying the definitions to this experiment, light must be both a particle and a wave. It is a wave because it exhibits interference effects. At the same time, the fact that the photographs contain discrete dots is a direct demonstration that light refuses to be split into units of less than a single photon. There can only be whole numbers of photons: four photons in figure i/3, for example.

A wrong interpretation: photons interfering with each other

One possible interpretation of wave-particle duality that occurred to physicists early in the game was that perhaps the interference effects came from photons interacting with each other. By analogy, a water wave consists of moving water molecules, and interference of water waves results ultimately from all the mutual pushes and pulls of the molecules. This interpretation was conclusively disproved by G.I. Taylor, a student at Cambridge. The demonstration by Prof. Page that we've just been discussing is essentially a modernized version of Taylor's work. Taylor reasoned that if interference effects came from photons interacting with each other, a bare minimum of two photons would have to be present at the same time to produce interference. By making the light source extremely dim, we can be virtually certain that there are never two photons in the box at the same time. In figure i/3, however, the intensity of the light has been cut down so much by the absorbers that if it was in the open, the average separation between photons would be on the order of a kilometer! At any given moment, the number of photons in the box is most likely to be zero. It is virtually certain that there were never two photons in the box at once.

The concept of a photon's path is undefined.

If a single photon can demonstrate double-slit interference, then which slit did it pass through? The unavoidable answer must be that it passes through both! This might not seem so strange if we think of the photon as a wave, but it is highly counterintuitive if we try to visualize it as a particle. The moral is that we should not think in terms of the path of a photon. Like the fully human and fully divine Jesus of Christian theology, a photon is supposed to be 100% wave and 100% particle. If a photon had a well defined path, then it would not demonstrate wave superposition and interference effects, contradicting its wave nature. (In subsection 13.3.4 we will discuss the Heisenberg uncertainty principle, which gives a numerical way of approaching this issue.)

Another wrong interpretation: the pilot wave hypothesis

A second possible explanation of wave-particle duality was taken seriously in the early history of quantum mechanics. What if the photon particle is like a surfer riding on top of its accompanying wave? As the wave travels along, the particle is pushed, or “piloted” by it. Imagining the particle and the wave as two separate entities allows us to avoid the seemingly paradoxical idea that a photon is both at once. The wave happily does its wave tricks, like superposition and interference, and the particle acts like a respectable particle, resolutely refusing to be in two different places at once. If the wave, for instance, undergoes destructive interference, becoming nearly zero in a particular region of space, then the particle simply is not guided into that region.

The problem with the pilot wave interpretation is that the only way it can be experimentally tested or verified is if someone manages to detach the particle from the wave, and show that there really are two entities involved, not just one. Part of the scientific method is that hypotheses are supposed to be experimentally testable. Since nobody has ever managed to separate the wavelike part of a photon from the particle part, the interpretation is not useful or meaningful in a scientific sense.

The probability interpretation

The correct interpretation of wave-particle duality is suggested by the random nature of the experiment we've been discussing: even though every photon wave/particle is prepared and released in the same way, the location at which it is eventually detected by the digital camera is different every time. The idea of the probability interpretation of wave-particle duality is that the location of the photon-particle is random, but the probability that it is in a certain location is higher where the photon-wave's amplitude is greater.

More specifically, the probability distribution of the particle must be proportional to the square of the wave's amplitude,

\[\begin{equation*} (\text{probability distribution}) \propto (\text{amplitude})^2 . \end{equation*}\]

This follows from the correspondence principle and from the fact that a wave's energy density is proportional to the square of its amplitude. If we run the double-slit experiment for a long enough time, the pattern of dots fills in and becomes very smooth as would have been expected in classical physics. To preserve the correspondence between classical and quantum physics, the amount of energy deposited in a given region of the picture over the long run must be proportional to the square of the wave's amplitude. The amount of energy deposited in a certain area depends on the number of photons picked up, which is proportional to the probability of finding any given photon there.
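The statistical character of this interpretation is easy to demonstrate numerically. The sketch below (my own illustration, not part of the experiment described above) draws random photon hit positions with probability proportional to the square of an idealized two-slit interference amplitude; with a handful of photons the counts look like scattered dots, while with many photons they fill in to a smooth fringe pattern, just as described. The slit spacing, wavelength, and screen distance are arbitrary assumed values.

```python
import numpy as np

# Toy double-slit photon counting: hit positions are sampled with
# probability proportional to amplitude^2.
x = np.linspace(-5e-3, 5e-3, 2000)   # positions on the detector, m
d, lam, L = 0.5e-3, 600e-9, 1.0      # assumed slit spacing, wavelength, distance
amplitude = np.cos(np.pi * d * x / (lam * L))   # idealized two-slit amplitude
prob = amplitude**2
prob /= prob.sum()                   # normalize into a probability distribution

rng = np.random.default_rng(0)
for n_photons in (50, 50000):
    hits = rng.choice(x, size=n_photons, p=prob)
    counts, _ = np.histogram(hits, bins=20)
    print(n_photons, "photons:", counts)
# 50 photons: ragged, dot-like counts; 50000: smooth interference fringes
```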

Example 10: A microwave oven
\(\triangleright\) The figure shows two-dimensional (top) and one-dimensional (bottom) representations of the standing wave inside a microwave oven. Gray represents zero field, and white and black signify the strongest fields, with white being a field that is in the opposite direction compared to black. Compare the probabilities of detecting a microwave photon at points A, B, and C.

\(\triangleright\) A and C are both extremes of the wave, so the probabilities of detecting a photon at A and C are equal. It doesn't matter that we have represented C as negative and A as positive, because it is the square of the amplitude that is relevant. The amplitude at B is about 1/2 as much as the others, so the probability of detecting a photon there is about 1/4 as much.

The probability interpretation was disturbing to physicists who had spent their previous careers working in the deterministic world of classical physics, and ironically the most strenuous objections against it were raised by Einstein, who had invented the photon concept in the first place. The probability interpretation has nevertheless passed every experimental test, and is now as well established as any part of physics.

An aspect of the probability interpretation that has made many people uneasy is that the process of detecting and recording the photon's position seems to have a magical ability to get rid of the wavelike side of the photon's personality and force it to decide for once and for all where it really wants to be. But detection or measurement is after all only a physical process like any other, governed by the same laws of physics. We will postpone a detailed discussion of this issue until p. 866, since a measuring device like a digital camera is made of matter, but we have so far only discussed how quantum mechanics relates to light.

Example 11: What is the proportionality constant?

\(\triangleright\) What is the proportionality constant that would make an actual equation out of \((\text{probability distribution})\propto(\text{amplitude})^2\)?

\(\triangleright\) The probability that the photon is in a certain small region of volume \(v\) should equal the fraction of the wave's energy that is within that volume. For a sinusoidal wave, which has a single, well-defined frequency \(f\), this gives

\[\begin{align*} P &= \frac{\text{energy in volume $v$}}{\text{energy of photon}} \\ &= \frac{\text{energy in volume $v$}}{hf} . \end{align*}\]

We assume \(v\) is small enough so that the electric and magnetic fields are nearly constant throughout it. We then have

\[\begin{equation*} P = \frac{\left(\frac{1}{8\pi k}|\mathbf{E}|^2 +\frac{c^2}{8\pi k}|\mathbf{B}|^2\right)v}{hf} . \end{equation*}\]

We can simplify this formidable looking expression by recognizing that in a plane wave, \(|\mathbf{E}|\) and \(|\mathbf{B}|\) are related by \(|\mathbf{E}|=c|\mathbf{B}|\). This implies (problem 40, p. 727) that the electric and magnetic fields each contribute half the total energy, so we can simplify the result to

\[\begin{align*} P &= 2\frac{\left(\frac{1}{8\pi k}|\mathbf{E}|^2\right)v}{hf} \\ &= \frac{v}{4\pi khf}|\mathbf{E}|^2 . \end{align*}\]

The probability is proportional to the square of the wave's amplitude, as advertised.3
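Plugging in numbers makes the result concrete. In the sketch below the field amplitude, frequency, and volume are arbitrary assumed values; the point is only that the formula yields a small, unitless probability for a small volume.

```python
import math

# Evaluating P = v|E|^2 / (4 pi k h f) from example 11, with assumed values.
h = 6.63e-34      # Planck's constant, J*s
k = 8.99e9        # Coulomb constant, N*m^2/C^2
f = 5.0e14        # frequency of roughly 600 nm light, Hz
E_amp = 1.0       # assumed electric field amplitude, V/m
v = (1e-6)**3     # a cubic micron, m^3
P = v * E_amp**2 / (4 * math.pi * k * h * f)
print(f"P = {P:.1e}")   # a tiny, unitless probability, as it should be
```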

Discussion Questions

Referring back to the example of the carrot in the microwave oven, show that it would be nonsensical to have probability be proportional to the field itself, rather than the square of the field.

Einstein did not try to reconcile the wave and particle theories of light, and did not say much about their apparent inconsistency. Einstein basically visualized a beam of light as a stream of bullets coming from a machine gun. In the photoelectric effect, a photon “bullet” would only hit one atom, just as a real bullet would only hit one person. Suppose someone reading his 1905 paper wanted to interpret it by saying that Einstein's so-called particles of light are simply short wave-trains that only occupy a small region of space. Comparing the wavelength of visible light (a few hundred nm) to the size of an atom (on the order of 0.1 nm), explain why this poses a difficulty for reconciling the particle and wave theories.

Can a white photon exist?

In double-slit diffraction of photons, would you get the same pattern of dots on the digital camera image if you covered one slit? Why should it matter whether you give the photon two choices or only one?

volume-under-surface

n / Probability is the volume under a surface defined by \(D(x,y)\).

13.2.4 Photons in three dimensions

Up until now I've been sneaky and avoided a full discussion of the three-dimensional aspects of the probability interpretation. The example of the carrot in the microwave oven, for example, reduced to a one-dimensional situation because we were considering three points along the same line and because we were only comparing ratios of probabilities. The purpose of bringing it up now is to head off any feeling that you've been cheated conceptually rather than to prepare you for mathematical problem solving in three dimensions, which would not be appropriate for the level of this course.

A typical example of a probability distribution in section 13.1 was the distribution of heights of human beings. The thing that varied randomly, height, \(h\), had units of meters, and the probability distribution was a graph of a function \(D(h)\). The units of the probability distribution had to be \(\text{m}^{-1}\) (inverse meters) so that areas under the curve, interpreted as probabilities, would be unitless: \((\text{area})=(\text{height})(\text{width})=\text{m}^{-1}\cdot\text{m}\).

Now suppose we have a two-dimensional problem, e.g., the probability distribution for the place on the surface of a digital camera chip where a photon will be detected. The point where it is detected would be described with two variables, \(x\) and \(y\), each having units of meters. The probability distribution will be a function of both variables, \(D(x,y)\). A probability is now visualized as the volume under the surface described by the function \(D(x,y)\), as shown in figure n. The units of \(D\) must be \(\text{m}^{-2}\) so that probabilities will be unitless: \((\text{probability})=(\text{depth})(\text{length})(\text{width}) =\text{m}^{-2}\cdot\text{m}\cdot\text{m}\). In terms of calculus, we have \(P = \int D\,dx\,dy\).

Generalizing finally to three dimensions, we find by analogy that the probability distribution will be a function of all three coordinates, \(D(x,y,z)\), and will have units of \(\text{m}^{-3}\). It is unfortunately impossible to visualize the graph unless you are a mutant with a natural feel for life in four dimensions. If the probability distribution is nearly constant within a certain volume of space \(v\), the probability that the photon is in that volume is simply \(vD\). If not, then we can use an integral, \(P = \int D\,dx\,dy\,dz\).
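Numerically, "volume under a surface" just means a double sum. The following sketch (my own illustration, with an arbitrary toy distribution) normalizes a two-dimensional \(D(x,y)\) and computes the probability of detection within a square region as \(\sum D\,\Delta x\,\Delta y\).

```python
import numpy as np

# Probability as the volume under D(x, y), approximated by a Riemann sum.
x = np.linspace(-3, 3, 300)      # meters (toy scale)
y = np.linspace(-3, 3, 300)
X, Y = np.meshgrid(x, y)
D = np.exp(-(X**2 + Y**2))       # arbitrary bump-shaped distribution
dA = (x[1] - x[0]) * (y[1] - y[0])
D /= D.sum() * dA                # normalize: total volume under D is 1, units m^-2

region = (np.abs(X) < 1) & (np.abs(Y) < 1)    # a 2 m x 2 m square
print("P(region) =", D[region].sum() * dA)    # unitless, about 0.71 here
```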

melting-witch

13.3 Matter As a Wave

[In] a few minutes I shall be all melted... I have been wicked in my day, but I never thought a little girl like you would ever be able to melt me and end my wicked deeds. Look out --- here I go! -- The Wicked Witch of the West

As the Wicked Witch learned the hard way, losing molecular cohesion can be unpleasant. That's why we should be very grateful that the concepts of quantum physics apply to matter as well as light. If matter obeyed the laws of classical physics, molecules wouldn't exist.

Consider, for example, the simplest atom, hydrogen. Why does one hydrogen atom form a chemical bond with another hydrogen atom? Roughly speaking, we'd expect a neighboring pair of hydrogen atoms, A and B, to exert no force on each other at all, attractive or repulsive: there are two repulsive interactions (proton A with proton B and electron A with electron B) and two attractive interactions (proton A with electron B and electron A with proton B). Thinking a little more precisely, we should even expect that once the two atoms got close enough, the interaction would be repulsive. For instance, if you squeezed them so close together that the two protons were almost on top of each other, there would be a tremendously strong repulsion between them due to the \(1/r^2\) nature of the electrical force. The repulsion between the electrons would not be as strong, because each electron ranges over a large area, and is not likely to be found right on top of the other electron. Thus hydrogen molecules should not exist according to classical physics.

Quantum physics to the rescue! As we'll see shortly, the whole problem is solved by applying the same quantum concepts to electrons that we have already used for photons.

electron-wave-phase

b / These two electron waves are not distinguishable by any measuring device.

13.3.1 Electrons as waves

We started our journey into quantum physics by studying the random behavior of matter in radioactive decay, and then asked how randomness could be linked to the basic laws of nature governing light. The probability interpretation of wave-particle duality was strange and hard to accept, but it provided such a link. It is now natural to ask whether the same explanation could be applied to matter. If the fundamental building block of light, the photon, is a particle as well as a wave, is it possible that the basic units of matter, such as electrons, are waves as well as particles?

A young French aristocrat studying physics, Louis de Broglie (pronounced “broylee”), made exactly this suggestion in his 1923 Ph.D. thesis. His idea had seemed so farfetched that there was serious doubt about whether to grant him the degree. Einstein was asked for his opinion, and with his strong support, de Broglie got his degree.

Only two years later, American physicists C.J. Davisson and L. Germer confirmed de Broglie's idea by accident. They had been studying the scattering of electrons from the surface of a sample of nickel, made of many small crystals. (One can often see such a crystalline pattern on a brass doorknob that has been polished by repeated handling.) An accidental explosion occurred, and when they put their apparatus back together they observed something entirely different: the scattered electrons were now creating an interference pattern! This dramatic proof of the wave nature of matter came about because the nickel sample had been melted by the explosion and then resolidified as a single crystal. The nickel atoms, now nicely arranged in the regular rows and columns of a crystalline lattice, were acting as the lines of a diffraction grating. The new crystal was analogous to the type of ordinary diffraction grating in which the lines are etched on the surface of a mirror (a reflection grating) rather than the kind in which the light passes through the transparent gaps between the lines (a transmission grating).

neutron-interference

a / A double-slit interference pattern made with neutrons. (A. Zeilinger, R. Gähler, C.G. Shull, W. Treimer, and W. Mampe, Reviews of Modern Physics, Vol. 60, 1988.)

Although we will concentrate on the wave-particle duality of electrons because it is important in chemistry and the physics of atoms, all the other “particles” of matter you've learned about show wave properties as well. Figure a, for instance, shows a wave interference pattern of neutrons.

It might seem as though all our work was already done for us, and there would be nothing new to understand about electrons: they have the same kind of funny wave-particle duality as photons. That's almost true, but not quite. There are some important ways in which electrons differ significantly from photons:

  1. Electrons have mass, and photons don't.
  2. Photons always move at the speed of light, but electrons can move at any speed less than \(c\).
  3. Photons don't have electric charge, but electrons do, so electric forces can act on them. The most important example is the atom, in which the electrons are held by the electric force of the nucleus.
  4. Electrons cannot be absorbed or emitted as photons are. Destroying an electron or creating one out of nothing would violate conservation of charge.

(In section 13.4 we will learn of one more fundamental way in which electrons differ from photons, for a total of five.)

Because electrons are different from photons, it is not immediately obvious which of the photon equations from section 13.2 can be applied to electrons as well. A particle property, the energy of one photon, is related to its wave properties via \(E=hf\) or, equivalently, \(E=hc/\lambda \). The momentum of a photon was given by \(p=hf/c\) or \(p=h/\lambda \). Ultimately it was a matter of experiment to determine which of these equations, if any, would work for electrons, but we can make a quick and dirty guess simply by noting that some of the equations involve \(c\), the speed of light, and some do not. Since \(c\) is irrelevant in the case of an electron, we might guess that the equations of general validity are those that do not have \(c\) in them:

\[\begin{align*} E &= hf \\ p &= h/\lambda \end{align*}\]

This is essentially the reasoning that de Broglie went through, and experiments have confirmed these two equations for all the fundamental building blocks of light and matter, not just for photons and electrons.

The second equation, which I soft-pedaled earlier, takes on greater importance for electrons. This is first of all because the momentum of matter is more likely to be significant than the momentum of light under ordinary conditions, and also because force is the transfer of momentum, and electrons are affected by electrical forces.

Example 12: The wavelength of an elephant

\(\triangleright\) What is the wavelength of a trotting elephant?

\(\triangleright\) One may doubt whether the equation should be applied to an elephant, which is not just a single particle but a rather large collection of them. Throwing caution to the wind, however, we estimate the elephant's mass at \(10^3\) kg and its trotting speed at 10 m/s. Its wavelength is therefore roughly

\[\begin{align*} \lambda &= \frac{h}{p} \\ &= \frac{h}{mv} \\ &= \frac{6.63\times10^{-34}\ \text{J}\!\cdot\!\text{s}}{(10^3\ \text{kg})(10\ \text{m}/\text{s})} \\ &\sim 10^{-37}\ \frac{\left(\text{kg}\!\cdot\!\text{m}^2/\text{s}^2\right)\!\cdot\!\text{s}}{\text{kg}\!\cdot\!\text{m}/\text{s}} \\ &= 10^{-37}\ \text{m} \end{align*}\]

The wavelength found in this example is so fantastically small that we can be sure we will never observe any measurable wave phenomena with elephants or any other human-scale objects. The result is numerically small because Planck's constant is so small, and as in some examples encountered previously, this smallness is in accord with the correspondence principle.
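The estimate is a one-line computation, shown here as a sketch using the same rough inputs as the example.

```python
h = 6.63e-34                         # Planck's constant, J*s
m, v = 1e3, 10.0                     # rough elephant mass (kg) and trotting speed (m/s)
print(f"lambda = {h / (m * v):.0e} m")   # on the order of 1e-37 m
```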

Although a smaller mass in the equation \(\lambda =h/mv\) does result in a longer wavelength, the wavelength is still quite short even for individual electrons under typical conditions, as shown in the following example.

Example 13: The typical wavelength of an electron

\(\triangleright\) Electrons in circuits and in atoms are typically moving through voltage differences on the order of 1 V, so that a typical energy is \((e)(1\ \text{V})\), which is on the order of \(10^{-19}\ \text{J}\). What is the wavelength of an electron with this amount of kinetic energy?

\(\triangleright\) This energy is nonrelativistic, since it is much less than \(mc^2\). Momentum and energy are therefore related by the nonrelativistic equation \(K=p^2/2m\). Solving for \(p\) and substituting into the equation for the wavelength, we find

\[\begin{align*} \lambda &= \frac{h}{\sqrt{2mK}} \\ &= 1.6\times10^{-9}\ \text{m} . \end{align*}\]

This is on the same order of magnitude as the size of an atom, which is no accident: as we will discuss in the next chapter in more detail, an electron in an atom can be interpreted as a standing wave. The smallness of the wavelength of a typical electron also helps to explain why the wave nature of electrons wasn't discovered until a hundred years after the wave nature of light. To scale the usual wave-optics devices such as diffraction gratings down to the size needed to work with electrons at ordinary energies, we need to make them so small that their parts are comparable in size to individual atoms. This is essentially what Davisson and Germer did with their nickel crystal.
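In code, the same estimate looks like this (a sketch using the round-number energy \(10^{-19}\ \text{J}\) from the example):

```python
import math

h = 6.63e-34       # Planck's constant, J*s
m_e = 9.11e-31     # electron mass, kg
K = 1.0e-19        # kinetic energy on the order of (e)(1 V), J
lam = h / math.sqrt(2 * m_e * K)
print(f"lambda = {lam:.1e} m")   # about 1.6e-9 m, comparable to an atom
```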

self-check:

These remarks about the inconvenient smallness of electron wavelengths apply only under the assumption that the electrons have typical energies. What kind of energy would an electron have to have in order to have a longer wavelength that might be more convenient to work with?

(answer in the back of the PDF version of the book)

What kind of wave is it?

If a sound wave is a vibration of matter, and a photon is a vibration of electric and magnetic fields, what kind of a wave is an electron made of? The disconcerting answer is that there is no experimental “observable,” i.e., directly measurable quantity, to correspond to the electron wave itself. In other words, there are devices like microphones that detect the oscillations of air pressure in a sound wave, and devices such as radio receivers that measure the oscillation of the electric and magnetic fields in a light wave, but nobody has ever found any way to measure the electron wave directly.

We can of course detect the energy (or momentum) possessed by an electron just as we could detect the energy of a photon using a digital camera. (In fact I'd imagine that an unmodified digital camera chip placed in a vacuum chamber would detect electrons just as handily as photons.) But this only allows us to determine where the wave carries high probability and where it carries low probability. Probability is proportional to the square of the wave's amplitude, but measuring its square is not the same as measuring the wave itself. In particular, we get the same result by squaring either a positive number or its negative, so there is no way to determine the positive or negative sign of an electron wave.

Most physicists tend toward the school of philosophy known as operationalism, which says that a concept is only meaningful if we can define some set of operations for observing, measuring, or testing it. According to a strict operationalist, then, the electron wave itself is a meaningless concept. Nevertheless, it turns out to be one of those concepts like love or humor that is impossible to measure and yet very useful to have around. We therefore give it a symbol, \(\Psi \) (the capital Greek letter psi), and a special name, the electron wavefunction (because it is a function of the coordinates \(x\), \(y\), and \(z\) that specify where you are in space). It would be impossible, for example, to calculate the shape of the electron wave in a hydrogen atom without having some symbol for the wave. But when the calculation produces a result that can be compared directly to experiment, the final algebraic result will turn out to involve only \(\Psi^2\), which is what is observable, not \(\Psi \) itself.

Since \(\Psi \), unlike \(E\) and \(B\), is not directly measurable, we are free to make the probability equations have a simple form: instead of having the probability density equal to some funny constant multiplied by \(\Psi^2\), we simply define \(\Psi \) so that the constant of proportionality is one:

\[\begin{equation*} (\text{probability distribution}) = \Psi ^2 . \end{equation*}\]

Since the probability distribution has units of \(\text{m}^{-3}\), the units of \(\Psi \) must be \(\text{m}^{-3/2}\).

Discussion Question

Frequency is oscillations per second, whereas wavelength is meters per oscillation. How could the equations \(E=hf\) and \(p=h/\lambda\) be made to look more alike by using quantities that were more closely analogous? (This more symmetric treatment makes it easier to incorporate relativity into quantum mechanics, since relativity says that space and time are not entirely separate.)

sine-wave

c / Part of an infinite sine wave.

sine-wave-pulse

d / A finite-length sine wave.

beats

e / A beat pattern created by superimposing two sine waves with slightly different wavelengths.

13.3.2 Dispersive waves

A colleague of mine who teaches chemistry loves to tell the story about an exceptionally bright student who, when told of the equation \(p=h/\lambda \), protested, “But when I derived it, it had a factor of 2!” The issue that's involved is a real one, albeit one that could be glossed over (and is, in most textbooks) without raising any alarms in the mind of the average student. The present optional section addresses this point; it is intended for the student who wishes to delve a little deeper.

Here's how the now-legendary student was presumably reasoning. We start with the equation \(v=f\lambda \), which is valid for any sine wave, whether it's quantum or classical. Let's assume we already know \(E=hf\), and are trying to derive the relationship between wavelength and momentum:

\[\begin{align*} \lambda &= \frac{v}{f} \\ &= \frac{vh}{E} \\ &= \frac{vh}{\frac{1}{2}mv^2} \\ &= \frac{2h}{mv} \\ &= \frac{2h}{p} . \end{align*}\]

The reasoning seems valid, but the result does contradict the accepted one, which is after all solidly based on experiment.

The mistaken assumption is that we can figure everything out in terms of pure sine waves. Mathematically, the only wave that has a perfectly well defined wavelength and frequency is a sine wave, and not just any sine wave but an infinitely long sine wave, c. The unphysical thing about such a wave is that it has no leading or trailing edge, so it can never be said to enter or leave any particular region of space. Our derivation made use of the velocity, \(v\), and if velocity is to be a meaningful concept, it must tell us how quickly stuff (mass, energy, momentum, ...) is transported from one region of space to another. Since an infinitely long sine wave doesn't remove any stuff from one region and take it to another, the “velocity of its stuff” is not a well defined concept.

Of course the individual wave peaks do travel through space, and one might think that it would make sense to associate their speed with the “speed of stuff,” but as we will see, the two velocities are in general unequal when a wave's velocity depends on wavelength. Such a wave is called a dispersive wave, because a wave pulse consisting of a superposition of waves of different wavelengths will separate (disperse) into its separate wavelengths as the waves move through space at different speeds. Nearly all the waves we have encountered have been nondispersive. For instance, sound waves and light waves (in a vacuum) have speeds independent of wavelength. A water wave is one good example of a dispersive wave. Long-wavelength water waves travel faster, so a ship at sea that encounters a storm typically sees the long-wavelength parts of the wave first. When dealing with dispersive waves, we need symbols and words to distinguish the two speeds. The speed at which wave peaks move is called the phase velocity, \(v_p\), and the speed at which “stuff” moves is called the group velocity, \(v_g\).

An infinite sine wave can only tell us about the phase velocity, not the group velocity, which is really what we would be talking about when we refer to the speed of an electron. If an infinite sine wave is the simplest possible wave, what's the next best thing? We might think the runner up in simplicity would be a wave train consisting of a chopped-off segment of a sine wave, d. However, this kind of wave has kinks in it at the end. A simple wave should be one that we can build by superposing a small number of infinite sine waves, but a kink can never be produced by superposing any number of infinitely long sine waves.

Actually the simplest wave that transports stuff from place to place is the pattern shown in figure e. Called a beat pattern, it is formed by superposing two sine waves whose wavelengths are similar but not quite the same. If you have ever heard the pulsating howling sound of musicians in the process of tuning their instruments to each other, you have heard a beat pattern. The beat pattern gets stronger and weaker as the two sine waves go in and out of phase with each other. The beat pattern has more “stuff” (energy, for example) in the areas where constructive interference occurs, and less in the regions of cancellation. As the whole pattern moves through space, stuff is transported from some regions and into other ones.

If the frequency of the two sine waves differs by 10%, for instance, then ten periods will occur between times when they are in phase. Another way of saying it is that the sinusoidal “envelope” (the dashed lines in figure e) has a frequency equal to the difference in frequency between the two waves. For instance, if the waves had frequencies of 100 Hz and 110 Hz, the frequency of the envelope would be 10 Hz.

To apply similar reasoning to the wavelength, we must define a quantity \(z=1/\lambda \) that relates to wavelength in the same way that frequency relates to period. In terms of this new variable, the \(z\) of the envelope equals the difference between the \(z\)'s of the two sine waves.

The group velocity is the speed at which the envelope moves through space. Let \(\Delta f\) and \(\Delta z\) be the differences between the frequencies and \(z\)'s of the two sine waves, which means that they equal the frequency and \(z\) of the envelope. The group velocity is \(v_g=f_{envelope}\lambda_{envelope}=\Delta f/\Delta z\). If \(\Delta f\) and \(\Delta z\) are sufficiently small, we can approximate this expression as a derivative,

\[\begin{equation*} v_g = \frac{df}{dz} . \end{equation*}\]

This expression is usually taken as the definition of the group velocity for wave patterns that consist of a superposition of sine waves having a narrow range of frequencies and wavelengths. In quantum mechanics, with \(f=E/h\) and \(z=p/h\), we have \(v_g=dE/dp\). In the case of a nonrelativistic electron the relationship between energy and momentum is \(E=p^2/2m\), so the group velocity is \(dE/dp=p/m=v\), exactly what it should be. It is only the phase velocity that differs by a factor of two from what we would have expected, but the phase velocity is not the physically important thing.
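The factor-of-2 bookkeeping can be checked numerically. The sketch below (my own, with an assumed electron speed) evaluates the phase velocity \(f/z=E/p\) and a numerical derivative for the group velocity \(dE/dp\), using the nonrelativistic relation \(E=p^2/2m\); they come out to \(v/2\) and \(v\) respectively.

```python
m_e = 9.11e-31                  # electron mass, kg
v = 1.0e6                       # assumed electron speed, m/s
p = m_e * v
E = p**2 / (2 * m_e)            # nonrelativistic kinetic energy

v_phase = E / p                 # f/z = (E/h)/(p/h) = E/p
dp = 1e-3 * p                   # small step for a numerical dE/dp
v_group = ((p + dp)**2 - p**2) / (2 * m_e) / dp
print(v_phase / v, v_group / v) # prints about 0.5 and 1.0
```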

particle-in-a-box

f / Three possible standing-wave patterns for a particle in a box.

sirius-spectrum

g / The spectrum of the light from the star Sirius.

h-molecule

h / Two hydrogen atoms bond to form an \(\text{H}_2\) molecule. In the molecule, the two electrons' wave patterns overlap, and are about twice as wide.

13.3.3 Bound states

Electrons are at their most interesting when they're in atoms, that is, when they are bound within a small region of space. We can understand a great deal about atoms and molecules based on simple arguments about such bound states, without going into any of the realistic details of the atom. The simplest model of a bound state is known as the particle in a box: like a ball on a pool table, the electron feels zero force while in the interior, but when it reaches an edge it encounters a wall that pushes back inward on it with a large force. In particle language, we would describe the electron as bouncing off of the wall, but this incorrectly assumes that the electron has a certain path through space. It is more correct to describe the electron as a wave that undergoes 100% reflection at the boundaries of the box.

Like a generation of physics students before me, I rolled my eyes when initially introduced to the unrealistic idea of putting a particle in a box. It seemed completely impractical, an artificial textbook invention. Today, however, it has become routine to study electrons in rectangular boxes in actual laboratory experiments. The “box” is actually just an empty cavity within a solid piece of silicon, amounting in volume to a few hundred atoms. The methods for creating these electron-in-a-box setups (known as “quantum dots”) were a by-product of the development of technologies for fabricating computer chips.

For simplicity let's imagine a one-dimensional electron in a box, i.e., we assume that the electron is only free to move along a line. The resulting standing wave patterns, of which the first three are shown in the figure, are just like some of the patterns we encountered with sound waves in musical instruments. The wave patterns must be zero at the ends of the box, because we are assuming the walls are impenetrable, and there should therefore be zero probability of finding the electron outside the box. Each wave pattern is labeled according to \(n\), the number of peaks and valleys it has. In quantum physics, these wave patterns are referred to as “states” of the particle-in-the-box system.

The following seemingly innocuous observations about the particle in the box lead us directly to the solutions to some of the most vexing failures of classical physics:

The particle's energy is quantized (can only have certain values). Each wavelength corresponds to a certain momentum, and a given momentum implies a definite kinetic energy, \(E=p^2/2m\). (This is the second type of energy quantization we have encountered. The type we studied previously had to do with restricting the number of particles to a whole number, while assuming some specific wavelength and energy for each particle. This type of quantization refers to the energies that a single particle can have. Both photons and matter particles demonstrate both types of quantization under the appropriate circumstances.)

The particle has a minimum kinetic energy. Long wavelengths correspond to low momenta and low energies. There can be no state with an energy lower than that of the \(n=1\) state, called the ground state.

The smaller the space in which the particle is confined, the higher its kinetic energy must be. Again, this is because long wavelengths give lower energies.
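The three observations above can be made quantitative with a short sketch. Since the \(n\)-th standing-wave pattern fits \(n\) half-wavelengths into a box of length \(L\), we have \(\lambda_n=2L/n\), so \(p_n=h/\lambda_n\) and \(E_n=p_n^2/2m=n^2h^2/8mL^2\). The 1 nm box size below is an assumed value, loosely inspired by the quantum dots mentioned earlier.

```python
h = 6.63e-34       # Planck's constant, J*s
m_e = 9.11e-31     # electron mass, kg
L = 1.0e-9         # assumed box length: roughly a quantum dot, m

for n in (1, 2, 3):
    E_n = n**2 * h**2 / (8 * m_e * L**2)   # lambda_n = 2L/n, E = p^2/2m
    print(n, f"{E_n:.1e} J")
# energies are quantized, the n=1 ground state is nonzero, and
# shrinking L raises every energy (E scales as 1/L^2)
```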

Example 14: Spectra of thin gases

A fact that was inexplicable by classical physics was that thin gases absorb and emit light only at certain wavelengths. This was observed both in earthbound laboratories and in the spectra of stars. The figure on the left shows the example of the spectrum of the star Sirius, in which there are “gap teeth” at certain wavelengths. Taking this spectrum as an example, we can give a straightforward explanation using quantum physics.

Energy is released in the dense interior of the star, but the outer layers of the star are thin, so the atoms are far apart and electrons are confined within individual atoms. Although their standing-wave patterns are not as simple as those of the particle in the box, their energies are quantized.

When a photon is on its way out through the outer layers, it can be absorbed by an electron in an atom, but only if the amount of energy it carries happens to be the right amount to kick the electron from one of the allowed energy levels to one of the higher levels. The photon energies that are missing from the spectrum are the ones that equal the difference in energy between two electron energy levels. (The most prominent of the absorption lines in Sirius's spectrum are absorption lines of the hydrogen atom.)

Example 15: The stability of atoms

In many Star Trek episodes the Enterprise, in orbit around a planet, suddenly lost engine power and began spiraling down toward the planet's surface. This was utter nonsense, of course, due to conservation of energy: the ship had no way of getting rid of energy, so it did not need the engines to replenish it.

Consider, however, the electron in an atom as it orbits the nucleus. The electron does have a way to release energy: it has an acceleration due to its continuously changing direction of motion, and according to classical physics, any accelerating charged particle emits electromagnetic waves. According to classical physics, atoms should collapse!

The solution lies in the observation that a bound state has a minimum energy. An electron in one of the higher-energy atomic states can and does emit photons and hop down step by step in energy. But once it is in the ground state, it cannot emit a photon because there is no lower-energy state for it to go to.

Example 16: Chemical bonds
I began this section with a classical argument that chemical bonds, as in an \(\text{H}_2\) molecule, should not exist. Quantum physics explains why this type of bonding does in fact occur. When the atoms are next to each other, the electrons are shared between them. The “box” is about twice as wide, and a larger box allows a smaller energy. Energy is required in order to separate the atoms. (A qualitatively different type of bonding is discussed on page 893. Example 23 on page 889 revisits the \(\text{H}_2\) bond in more detail.)
Discussion Questions

Neutrons attract each other via the strong nuclear force, so according to classical physics it should be possible to form nuclei out of clusters of two or more neutrons, with no protons at all. Experimental searches, however, have failed to turn up evidence of a stable two-neutron system (dineutron) or larger stable clusters. These systems are apparently not just unstable in the sense of being able to beta decay but unstable in the sense that they don't hold together at all. Explain based on quantum physics why a dineutron might spontaneously fly apart.

The following table shows the energy gap between the ground state and the first excited state for four nuclei, in units of picojoules. (The nuclei were chosen to be ones that have similar structures, e.g., they are all spherical in shape.)

nucleus                energy gap (picojoules)
\(^{4}\text{He}\)      3.234
\(^{16}\text{O}\)      0.968
\(^{40}\text{Ca}\)     0.536
\(^{208}\text{Pb}\)    0.418

Explain the trend in the data.

heisenberg

i / Werner Heisenberg (1901-1976). Heisenberg helped to develop the foundations of quantum mechanics, including the Heisenberg uncertainty principle. He was the scientific leader of the Nazi atomic-bomb program up until its cancellation in 1942, when the military decided that it was too ambitious a project to undertake in wartime, and too unlikely to produce results.

13.3.4 The uncertainty principle and measurement

Eliminating randomness through measurement?

A common reaction to quantum physics, among both early-twentieth-century physicists and modern students, is that we should be able to get rid of randomness through accurate measurement. If I say, for example, that it is meaningless to discuss the path of a photon or an electron, one might suggest that we simply measure the particle's position and velocity many times in a row. This series of snapshots would amount to a description of its path.

A practical objection to this plan is that the process of measurement will have an effect on the thing we are trying to measure. This may not be of much concern, for example, when a traffic cop measures your car's motion with a radar gun, because the energy and momentum of the radar pulses are insufficient to change the car's motion significantly. But on the subatomic scale it is a very real problem. Making a videotape through a microscope of an electron orbiting a nucleus is not just difficult, it is theoretically impossible. The video camera makes pictures of things using light that has bounced off them and come into the camera. If even a single photon of visible light was to bounce off of the electron we were trying to study, the electron's recoil would be enough to change its behavior significantly.

The Heisenberg uncertainty principle

This insight, that measurement changes the thing being measured, is the kind of idea that clove-cigarette-smoking intellectuals outside of the physical sciences like to claim they knew all along. If only, they say, the physicists had made more of a habit of reading literary journals, they could have saved a lot of work. The anthropologist Margaret Mead has recently been accused of inadvertently encouraging her teenaged Samoan informants to exaggerate the freedom of youthful sexual experimentation in their society. If this is considered a damning critique of her work, it is because she could have done better: other anthropologists claim to have been able to eliminate the observer-as-participant problem and collect untainted data.

The German physicist Werner Heisenberg, however, showed that in quantum physics, any measuring technique runs into a brick wall when we try to improve its accuracy beyond a certain point. Heisenberg showed that the limitation is a question of what there is to be known, even in principle, about the system itself, not of the ability or inability of a specific measuring device to ferret out information that is knowable but previously hidden.

Suppose, for example, that we have constructed an electron in a box (quantum dot) setup in our laboratory, and we are able to adjust the length \(L\) of the box as desired. All the standing wave patterns pretty much fill the box, so our knowledge of the electron's position is of limited accuracy. If we write \(\Delta x\) for the range of uncertainty in our knowledge of its position, then \(\Delta x\) is roughly the same as the length of the box:

\[\begin{equation*} \Delta x \approx L \end{equation*}\]

If we wish to know its position more accurately, we can certainly squeeze it into a smaller space by reducing \(L\), but this has an unintended side-effect. A standing wave is really a superposition of two traveling waves going in opposite directions. The equation \(p=h/\lambda \) really only gives the magnitude of the momentum vector, not its direction, so we should really interpret the wave as a 50/50 mixture of a right-going wave with momentum \(p=h/\lambda \) and a left-going one with momentum \(p=-h/\lambda \). The uncertainty in our knowledge of the electron's momentum is \(\Delta p=2h/\lambda\), covering the range between these two values. Even if we make sure the electron is in the ground state, whose wavelength \(\lambda =2L\) is the longest possible, we have an uncertainty in momentum of \(\Delta p=h/L\). In general, we find

\[\begin{equation*} \Delta p \gtrsim h/L , \end{equation*}\]

with equality for the ground state and inequality for the higher-energy states. Thus if we reduce \(L\) to improve our knowledge of the electron's position, we do so at the cost of knowing less about its momentum. This trade-off is neatly summarized by multiplying the two equations to give

\[\begin{equation*} \Delta p\Delta x \gtrsim h . \end{equation*}\]

Although we have derived this in the special case of a particle in a box, it is an example of a principle of more general validity:

The Heisenberg uncertainty principle

It is not possible, even in principle, to know the momentum and the position of a particle simultaneously and with perfect accuracy. The uncertainties in these two quantities are always such that \(\Delta p\Delta x \gtrsim h\).

(This approximation can be made into a strict inequality, \(\Delta p\Delta x>h/4\pi\), but only with more careful definitions, which we will not bother with.)

Note that although I encouraged you to think of this derivation in terms of a specific real-world system, the quantum dot, no reference was ever made to any specific laboratory equipment or procedures. The argument is simply that we cannot know the particle's position very accurately unless it has a very well defined position, it cannot have a very well defined position unless its wave-pattern covers only a very small amount of space, and its wave-pattern cannot be thus compressed without giving it a short wavelength and a correspondingly uncertain momentum. The uncertainty principle is therefore a restriction on how much there is to know about a particle, not just on what we can know about it with a certain technique.

Example 17: An estimate for electrons in atoms

\(\triangleright\) A typical energy for an electron in an atom is on the order of \((\text{1 volt})\cdot e\), which corresponds to a speed of about 1% of the speed of light. If a typical atom has a size on the order of 0.1 nm, how close are the electrons to the limit imposed by the uncertainty principle?

\(\triangleright\) If we assume the electron moves in all directions with equal probability, the uncertainty in its momentum is roughly twice its typical momentum. This is only an order-of-magnitude estimate, so we take \(\Delta p\) to be the same as a typical momentum:

\[\begin{align*} \Delta p \Delta x &= p_{typical} \Delta x \\ &= (m_{electron}) (0.01c) (0.1\times10^{-9}\ \text{m}) \\ &= 3\times 10^{-34}\ \text{J}\!\cdot\!\text{s} \end{align*}\]

This is on the same order of magnitude as Planck's constant, so evidently the electron is “right up against the wall.” (The fact that it is somewhat less than \(h\) is of no concern since this was only an estimate, and we have not stated the uncertainty principle in its most exact form.)
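For completeness, here is the example's arithmetic as a sketch:

```python
m_e = 9.11e-31     # electron mass, kg
c = 3.00e8         # speed of light, m/s
dx = 0.1e-9        # size of an atom, m
p_typical = m_e * 0.01 * c          # momentum at 1% of c
print(f"{p_typical * dx:.0e} J*s")  # about 3e-34, comparable to h
```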

self-check:

If we were to apply the uncertainty principle to human-scale objects, what would be the significance of the small numerical value of Planck's constant?

(answer in the back of the PDF version of the book)

Measurement and Schrödinger's cat

On p. 849 I briefly mentioned an issue concerning measurement that we are now ready to address carefully. If you hang around a laboratory where quantum-physics experiments are being done and secretly record the physicists' conversations, you'll hear them say many things that assume the probability interpretation of quantum mechanics. Usually they will speak as though the randomness of quantum mechanics enters the picture when something is measured. In the digital camera experiments of section 13.2, for example, they would casually describe the detection of a photon at one of the pixels as if the moment of detection was when the photon was forced to “make up its mind.” Although this mental cartoon usually works fairly well as a description of things they experience in the lab, it cannot ultimately be correct, because it attributes a special role to measurement, which is really just a physical process like all other physical processes.4

If we are to find an interpretation that avoids giving any special role to measurement processes, then we must think of the entire laboratory, including the measuring devices and the physicists themselves, as one big quantum-mechanical system made out of protons, neutrons, electrons, and photons. In other words, we should take quantum physics seriously as a description not just of microscopic objects like atoms but of human-scale (“macroscopic”) things like the apparatus, the furniture, and the people.

The most celebrated example is called the Schrödinger's cat experiment. Luckily for the cat, there probably was no actual experiment --- it was simply a “thought experiment” that the Austrian theorist Schrödinger discussed with his colleagues. Schrödinger wrote:

One can even construct quite burlesque cases. A cat is shut up in a steel container, together with the following diabolical apparatus (which one must keep out of the direct clutches of the cat): In a Geiger tube [radiation detector] there is a tiny mass of radioactive substance, so little that in the course of an hour perhaps one atom of it disintegrates, but also with equal probability not even one; if it does happen, the counter [detector] responds and ... activates a hammer that shatters a little flask of prussic acid [filling the chamber with poison gas]. If one has left this entire system to itself for an hour, then one will say to himself that the cat is still living, if in that time no atom has disintegrated. The first atomic disintegration would have poisoned it.

Now comes the strange part. Quantum mechanics describes the particles the cat is made of as having wave properties, including the property of superposition. Schrödinger describes the wavefunction of the box's contents at the end of the hour:

The wavefunction of the entire system would express this situation by having the living and the dead cat mixed ... in equal parts [50/50 proportions]. The uncertainty originally restricted to the atomic domain has been transformed into a macroscopic uncertainty...

At first Schrödinger's description seems like nonsense. When you opened the box, would you see two ghostlike cats, as in a doubly exposed photograph, one dead and one alive? Obviously not. You would have a single, fully material cat, which would either be dead or very, very upset. But Schrödinger has an equally strange and logical answer for that objection. In the same way that the quantum randomness of the radioactive atom spread to the cat and made its wavefunction a random mixture of life and death, the randomness spreads wider once you open the box, and your own wavefunction becomes a mixture of a person who has just killed a cat and a person who hasn't.5

Discussion Questions

Compare \(\Delta p\) and \(\Delta x\) for the two lowest energy levels of the one-dimensional particle in a box, and discuss how this relates to the uncertainty principle.

On a graph of \(\Delta p\) versus \(\Delta x\), sketch the regions that are allowed and forbidden by the Heisenberg uncertainty principle. Interpret the graph: Where does an atom lie on it? An elephant? Can either \(p\) or \(x\) be measured with perfect accuracy if we don't care about the other?

accelerating-electron

j / An electron in a gentle electric field gradually shortens its wavelength as it gains energy.

kinks

k / The wavefunction's tails go where classical physics says they shouldn't.

13.3.5 Electrons in electric fields

So far the only electron wave patterns we've considered have been simple sine waves, but whenever an electron finds itself in an electric field, it must have a more complicated wave pattern. Let's consider the example of an electron being accelerated by the electron gun at the back of a TV tube. Newton's laws are not useful, because they implicitly assume that the path taken by the particle is a meaningful concept. Conservation of energy is still valid in quantum physics, however. In terms of energy, the electron is moving from a region of low voltage into a region of higher voltage. Since its charge is negative, it loses electrical energy by moving to a higher voltage, so its kinetic energy increases. As its electrical energy goes down, its kinetic energy goes up by an equal amount, keeping the total energy constant. Increasing kinetic energy implies a growing momentum, and therefore a shortening wavelength, j.

The wavefunction as a whole does not have a single well-defined wavelength, but the wave changes so gradually that if you only look at a small part of it you can still pick out a wavelength and relate it to the momentum and energy. (The picture actually exaggerates by many orders of magnitude the rate at which the wavelength changes.)

But what if the electric field was stronger? The electric field in a TV is only \(\sim10^5\) N/C, but the electric field within an atom is more like \(10^{12}\) N/C. In figure l, the wavelength changes so rapidly that there is nothing that looks like a sine wave at all. We could get a rough idea of the wavelength in a given region by measuring the distance between two peaks, but that would only be a rough approximation. Suppose we want to know the wavelength at point \(P\). The trick is to construct a sine wave, like the one shown with the dashed line, which matches the curvature of the actual wavefunction as closely as possible near \(P\). The sine wave that matches as well as possible is called the “osculating” curve, from a Latin word meaning “to kiss.” The wavelength of the osculating curve is the wavelength that will relate correctly to conservation of energy.

osculating

l / A typical wavefunction of an electron in an atom (heavy curve) and the osculating sine wave (dashed curve) that matches its curvature at point P.

Tunneling

We implicitly assumed that the particle-in-a-box wavefunction would cut off abruptly at the sides of the box, k/1, but that would be unphysical. A kink has infinite curvature, and curvature is related to energy, so it can't be infinite. A physically realistic wavefunction must always “tail off” gradually, k/2. In classical physics, a particle can never enter a region in which its interaction energy \(U\) would be greater than the amount of energy it has available. But in quantum physics the wavefunction will always have a tail that reaches into the classically forbidden region. If it was not for this effect, called tunneling, the fusion reactions that power the sun would not occur due to the high electrical energy nuclei need in order to get close together! Tunneling is discussed in more detail in the following subsection.


m / Tunneling through a barrier.


n / The electrical, nuclear, and total interaction energies for an alpha particle escaping from a nucleus.


o / A particle encounters a step of height \(U\lt E\) in the interaction energy. Both sides are classically allowed. A reflected wave exists, but is not shown in the figure.


p / The marble has zero probability of being reflected from the edge of the table. (This example has \(U\lt0\), not \(U>0\) as in figures o and q).


q / Making the step more gradual reduces the probability of reflection.


13.3.6 The Schrödinger equation

In subsection 13.3.5 we were able to apply conservation of energy to an electron's wavefunction, but only by using the clumsy graphical technique of osculating sine waves as a measure of the wave's curvature. You have learned a more convenient measure of curvature in calculus: the second derivative. To relate the two approaches, we take the second derivative of a sine wave:

\[\begin{align*} \frac{d^2}{dx^2}\sin(2\pi x/\lambda) &= \frac{d}{dx}\left(\frac{2\pi}{\lambda}\cos\frac{2\pi x}{\lambda}\right) \\ &= -\left(\frac{2\pi}{\lambda}\right)^2 \sin\frac{2\pi x}{\lambda} \end{align*}\]

Taking the second derivative gives us back the same function, but with a minus sign and a constant out in front that is related to the wavelength. We can thus relate the second derivative to the osculating wavelength:

\[\begin{equation*} \frac{d^2\Psi}{dx^2} = -\left(\frac{2\pi}{\lambda}\right)^2\Psi \end{equation*}\]

This could be solved for \(\lambda \) in terms of \(\Psi \), but it will turn out below to be more convenient to leave it in this form.
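As a concrete check on this relation, the short Python sketch below (not part of the original text; the chirped waveform and all the numbers are invented for illustration) samples a wave whose wavelength gradually shortens, like figure j, estimates \(d^2\Psi/dx^2\) by finite differences, and recovers the local wavenumber \(2\pi/\lambda\):

```python
import numpy as np

# A wave whose local wavenumber k(x) = 10 + 2x grows slowly with x,
# qualitatively like the wavefunction of a gently accelerated electron.
x = np.linspace(0.0, 5.0, 20001)
psi = np.sin(10.0 * x + x**2)   # d(phase)/dx = 10 + 2x

# Finite-difference second derivative, then read off the local wavenumber
# from d^2(psi)/dx^2 = -(2*pi/lambda)**2 * psi.
dx = x[1] - x[0]
d2psi = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / dx**2
ok = np.abs(psi[1:-1]) > 0.5          # stay away from the nodes of psi
k_est = np.sqrt(-d2psi[ok] / psi[1:-1][ok])

k_true = 10.0 + 2.0 * x[1:-1][ok]
print(np.median(np.abs(k_est / k_true - 1.0)))  # small: the estimate tracks k(x)
```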

Applying this to conservation of energy, we have

\[\begin{align*} \begin{split} E &= K + U \\ &= \frac{p^2}{2m} + U \\ &= \frac{(h/\lambda)^2}{2m} + U \end{split} \end{align*}\]

Note that both the equation relating curvature to wavelength and the equation for the energy have \(\lambda^2\) in the denominator. We can simplify our algebra by multiplying both sides of the energy equation by \(\Psi \) to make it look more like the curvature equation:

\[\begin{align*} E \cdot \Psi &= \frac{(h/\lambda)^2}{2m}\Psi + U \cdot \Psi \\ &= \frac{1}{2m}\left(\frac{h}{2\pi}\right)^2\left(\frac{2\pi}{\lambda}\right)^2\Psi + U \cdot \Psi \\ &= -\frac{1}{2m}\left(\frac{h}{2\pi}\right)^2 \frac{d^2\Psi}{dx^2} + U \cdot \Psi \end{align*}\]

Further simplification is achieved by using the symbol \(\hbar\) (\(h\) with a slash through it, read “h-bar”) as an abbreviation for \(h/2\pi \). We then have the important result known as the Schrödinger equation:

\[\begin{equation*} E \cdot \Psi = -\frac{\hbar^2}{2m}\frac{d^2\Psi}{dx^2} + U \cdot \Psi \end{equation*}\]

(Actually this is a simplified version of the Schrödinger equation, applying only to standing waves in one dimension.) Physically it is a statement of conservation of energy. The total energy \(E\) must be constant, so the equation tells us that a change in interaction energy \(U\) must be accompanied by a change in the curvature of the wavefunction. This change in curvature relates to a change in wavelength, which corresponds to a change in momentum and kinetic energy.

self-check:

Considering the assumptions that were made in deriving the Schrödinger equation, would it be correct to apply it to a photon? To an electron moving at relativistic speeds?

(answer in the back of the PDF version of the book)

Usually we know right off the bat how \(U\) depends on \(x\), so the basic mathematical problem of quantum physics is to find a function \(\Psi(x)\) that satisfies the Schrödinger equation for a given interaction-energy function \(U(x)\). An equation, such as the Schrödinger equation, that specifies a relationship between a function and its derivatives is known as a differential equation.

The detailed study of the solution of the Schrödinger equation is beyond the scope of this book, but we can gain some important insights by considering the easiest version of the Schrödinger equation, in which the interaction energy \(U\) is constant. We can then rearrange the Schrödinger equation as follows:

\[\begin{equation*} \frac{d^2\Psi}{dx^2} = \frac{2m(U-E)}{\hbar^2} \Psi , \end{equation*}\]

which boils down to

\[\begin{equation*} \frac{d^2\Psi}{dx^2} = a\Psi , \end{equation*}\]

where, according to our assumptions, \(a\) is independent of \(x\). We need to find a function whose second derivative is the same as the original function except for a multiplicative constant. The only functions with this property are sine waves and exponentials:

\[\begin{align*} \frac{d^2}{dx^2}\left[\:q\sin(rx+s)\:\right] &= -qr^2\sin(rx+s) \\ \frac{d^2}{dx^2}\left[qe^{rx+s}\right] &= qr^2e^{rx+s} \end{align*}\]

The sine wave gives negative values of \(a\), \(a=-r^2\), and the exponential gives positive ones, \(a=r^2\). The former applies to the classically allowed region with \(U\lt E\).
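Both cases can be checked with a few lines of symbolic algebra. Here is a minimal sketch using Python's sympy library (my check, not the book's; the variable names are arbitrary):

```python
import sympy as sp

x, q, r, s = sp.symbols('x q r s', positive=True)

psi_sin = q * sp.sin(r * x + s)
psi_exp = q * sp.exp(r * x + s)

# Dividing the second derivative by the function itself gives the
# constant a in psi'' = a*psi.
print(sp.simplify(sp.diff(psi_sin, x, 2) / psi_sin))  # -r**2: allowed region, a < 0
print(sp.simplify(sp.diff(psi_exp, x, 2) / psi_exp))  # r**2: forbidden region, a > 0
```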

This leads us to a quantitative calculation of the tunneling effect discussed briefly in the preceding subsection. The wavefunction evidently tails off exponentially in the classically forbidden region. Suppose, as shown in figure m, a wave-particle traveling to the right encounters a barrier that it is classically forbidden to enter. Although the form of the Schrödinger equation we're using technically does not apply to traveling waves (because it makes no reference to time), it turns out that we can still use it to make a reasonable calculation of the probability that the particle will make it through the barrier. Inside the barrier, the relevant solution is the decaying exponential \(\Psi = qe^{-rx+s}\). If we let the barrier's width be \(w\), then the ratio of the wavefunction on the right side of the barrier to the wavefunction on the left is

\[\begin{equation*} \frac{qe^{-r(x+w)+s}}{qe^{-rx+s}} = e^{-rw} . \end{equation*}\]

Probabilities are proportional to the squares of wavefunctions, so the probability of making it through the barrier is

\[\begin{align*} P &= e^{-2rw} \\ &= \exp\left(-\frac{2w}{\hbar}\sqrt{2m(U-E)}\right) \end{align*}\]

self-check:

If we were to apply this equation to find the probability that a person can walk through a wall, what would the small value of Planck's constant imply?

(answer in the back of the PDF version of the book)
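To put rough numbers to this, here is a small Python sketch; the barrier heights and widths below are invented, order-of-magnitude figures for illustration, not values from the text.

```python
import math

hbar = 1.055e-34  # J*s

def tunneling_exponent(m, barrier_height, width):
    """Exponent in P = exp(-(2w/hbar) * sqrt(2m(U-E))) for a rectangular barrier."""
    return 2 * width / hbar * math.sqrt(2 * m * barrier_height)

# An electron tunneling through a 0.2-nm-wide, 1-eV barrier (typical atomic scales).
expo = tunneling_exponent(9.11e-31, 1.6e-19, 0.2e-9)
print(math.exp(-expo))          # ~0.13: tunneling is routine for electrons

# A 70-kg person and a 10-cm "barrier" of modest height (illustrative numbers):
print(tunneling_exponent(70.0, 100.0, 0.1))  # exponent ~1e35
```

For the electron the exponent is of order one, so tunneling happens readily; for the person the exponent is of order \(10^{35}\), so \(P=e^{-10^{35}}\) is indistinguishable from zero.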
Example 18: Tunneling in alpha decay
Naively, we would expect alpha decay to be a very fast process. The typical speeds of neutrons and protons inside a nucleus are extremely high (see problem 20). If we imagine an alpha particle coalescing out of neutrons and protons inside the nucleus, then at the typical speeds we're talking about, it takes a ridiculously small amount of time for them to reach the surface and try to escape. As it clatters back and forth inside the nucleus, we could imagine the alpha particle making a vast number of these “escape attempts” every second.

Consider figure n, however, which shows the interaction energy for an alpha particle escaping from a nucleus. The electrical energy is \(kq_1q_2/r\) when the alpha is outside the nucleus, while its variation inside the nucleus has the shape of a parabola, as a consequence of the shell theorem. The nuclear energy is constant when the alpha is inside the nucleus, because the forces from all the neighboring neutrons and protons cancel out; it rises sharply near the surface, and flattens out to zero over a distance of \(\sim 1\) fm, which is the maximum distance scale at which the strong force can operate. There is a classically forbidden region immediately outside the nucleus, so the alpha particle can only escape by quantum mechanical tunneling. (It's true, but somewhat counterintuitive, that a repulsive electrical force can make it more difficult for the alpha to get out.)

In reality, alpha-decay half-lives are often extremely long --- sometimes billions of years --- because the tunneling probability is so small. Although the shape of the barrier is not a rectangle, the equation for the tunneling probability on page 872 can still be used as a rough guide to our thinking. Essentially the tunneling probability is so small because \(U-E\) is fairly big, typically about 30 MeV at the peak of the barrier.

Example 19: The correspondence principle for \(E>U\)

The correspondence principle demands that in the classical limit \(h\rightarrow0\), we recover the correct result for a particle encountering a barrier \(U\), for both \(E\lt U\) and \(E>U\). The \(E\lt U\) case was analyzed in self-check H on p. 872. In the remainder of this example, we analyze \(E>U\), which turns out to be a little trickier.

The particle has enough energy to get over the barrier, and the classical result is that it continues forward at a different speed (a reduced speed if \(U>0\), or an increased one if \(U\lt0\)), then regains its original speed as it emerges from the other side. What happens quantum-mechanically in this case? We would like to get a “tunneling” probability of 1 in the classical limit. The expression derived on p. 872, however, doesn't apply here, because it was derived under the assumption that the wavefunction inside the barrier was an exponential; here the interior of the barrier is classically allowed, so the wavefunction inside it is a sine wave.

We can simplify things a little by letting the width \(w\) of the barrier go to infinity. Classically, after all, there is no possibility that the particle will turn around, no matter how wide the barrier. We then have the situation shown in figure o. The analysis is the same as for any other wave being partially reflected at the boundary between two regions where its velocity differs, and the result is the same as the one found on p. 367. The ratio of the amplitude of the reflected wave to that of the incident wave is \(R = (v_2-v_1)/(v_2+v_1)\). The probability of reflection is \(R^2\). (Counterintuitively, \(R^2\) is nonzero even if \(U\lt0\), i.e., \(v_2>v_1\).)
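For concreteness, here is a small numerical sketch (mine, not the book's). It expresses \(R\) in terms of the wavenumbers \(k_1\) and \(k_2\) on the two sides; since a matter wave's speed is proportional to its wavenumber, \(R^2\) computed this way is the same number as \((v_2-v_1)^2/(v_2+v_1)^2\):

```python
import numpy as np

hbar = 1.055e-34  # J*s
m = 9.11e-31      # electron mass, kg
eV = 1.60e-19     # J

def reflection_probability(E, U):
    """R^2 for a matter wave hitting an abrupt step of height U, with E > U.
    Since speed is proportional to wavenumber k, (k1-k2)/(k1+k2) squares to
    the same value as (v2-v1)/(v2+v1)."""
    k1 = np.sqrt(2.0 * m * E) / hbar
    k2 = np.sqrt(2.0 * m * (E - U)) / hbar
    return ((k1 - k2) / (k1 + k2)) ** 2

print(reflection_probability(2.0 * eV, 1.0 * eV))   # ~0.03, even though E > U
print(reflection_probability(2.0 * eV, -1.0 * eV))  # nonzero even for a step *down*
```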

This seems to violate the correspondence principle. There is no \(m\) or \(h\) anywhere in the result, so we seem to have the result that, even classically, the marble in figure p can be reflected!

The solution to this paradox is that the step in figure o was taken to be completely abrupt --- an idealized mathematical discontinuity. Suppose we make the transition a little more gradual, as in figure q. As shown in problem 17 on p. 380, this reduces the amplitude with which a wave is reflected. By smoothing out the step more and more, we continue to reduce the probability of reflection, until finally we arrive at a barrier shaped like a smooth ramp. More detailed calculations show that this results in zero reflection in the limit where the width of the ramp is large compared to the wavelength.

Three dimensions

For simplicity, we've been considering the Schrödinger equation in one dimension, so that \(\Psi\) is only a function of \(x\), and has units of \(\text{m}^{-1/2}\) rather than \(\text{m}^{-3/2}\). Since the Schrödinger equation is a statement of conservation of energy, and energy is a scalar, the generalization to three dimensions isn't particularly complicated. The total energy term \(E\cdot\Psi\) and the interaction energy term \(U\cdot\Psi\) involve nothing but scalars, and don't need to be changed at all. In the kinetic energy term, however, we're essentially basing our computation of the kinetic energy on the squared magnitude of the momentum, \(p_x^2\), and in three dimensions this would clearly have to be generalized to \(p_x^2+p_y^2+p_z^2\). The obvious way to achieve this is to replace the second derivative \(d^2\Psi/dx^2\) with the sum \(\partial^2\Psi/\partial x^2+ \partial^2\Psi/\partial y^2+ \partial^2\Psi/\partial z^2\). Here the partial derivative symbol \(\partial\), introduced on page 216, indicates that when differentiating with respect to a particular variable, the other variables are to be considered as constants. This operation on the function \(\Psi\) is notated \(\nabla^2\Psi\), and the derivative-like operator \(\nabla^2=\partial^2/\partial x^2+ \partial^2/\partial y^2+ \partial^2/\partial z^2\) is called the Laplacian. It occurs elsewhere in physics. For example, in classical electrostatics, the voltage in a region of vacuum must be a solution of the equation \(\nabla^2V=0\). Like the second derivative, the Laplacian is essentially a measure of curvature.

Example 20: Examples of the Laplacian in two dimensions

\(\triangleright\) Compute the Laplacians of the following functions in two dimensions, and interpret them: \(A=x^2+y^2\), \(B=-x^2-y^2\), \(C=x^2-y^2\).

\(\triangleright\) The first derivative of function \(A\) with respect to \(x\) is \(\partial A/\partial x=2x\). Since \(y\) is treated as a constant in the computation of the partial derivative \(\partial/\partial x\), the second term goes away. The second derivative of \(A\) with respect to \(x\) is \(\partial^2 A/\partial x^2=2\). Similarly we have \(\partial^2 A/\partial y^2=2\), so \(\nabla^2 A=4\).

All derivative operators, including \(\nabla^2\), have the linear property that multiplying the input function by a constant just multiplies the output function by the same constant. Since \(B=-A\), we have \(\nabla^2 B=-4\).

For function \(C\), the \(x\) term contributes a second derivative of 2, but the \(y\) term contributes \(-2\), so \(\nabla^2 C=0\).

The interpretation of the positive sign in \(\nabla^2 A=4\) is that \(A\)'s graph is shaped like a trophy cup, and the cup is concave up. The negative sign in the result for \(\nabla^2 B\) is because \(B\) is concave down. Function \(C\) is shaped like a saddle. Since its curvature along one axis is concave up, but the curvature along the other is down and equal in magnitude, the function is considered to have zero concavity overall.
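These three results are easy to confirm by machine; the following sympy sketch (my own check, not part of the original text) reproduces them:

```python
import sympy as sp

x, y = sp.symbols('x y')

def laplacian_2d(f):
    return sp.diff(f, x, 2) + sp.diff(f, y, 2)

A = x**2 + y**2    # trophy cup, concave up
B = -x**2 - y**2   # concave down
C = x**2 - y**2    # saddle

print(laplacian_2d(A), laplacian_2d(B), laplacian_2d(C))  # 4 -4 0
```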

Example 21: A classically allowed region with constant \(U\)

In a classically allowed region with constant \(U\), we expect the solutions to the Schrödinger equation to be sine waves. A sine wave in three dimensions has the form

\[\begin{equation*} \Psi = \sin\left( k_x x + k_y y + k_z z \right) . \end{equation*}\]

When we compute \(\partial^2\Psi/\partial x^2\), double differentiation of \(\sin\) gives \(-\sin\), and the chain rule brings out a factor of \(k_x^2\). Applying all three second derivative operators, we get

\[\begin{align*} \nabla^2\Psi &= \left(-k_x^2-k_y^2-k_z^2\right)\sin\left( k_x x + k_y y + k_z z \right) \\ &= -\left(k_x^2+k_y^2+k_z^2\right)\Psi . \end{align*}\]

The Schrödinger equation gives

\[\begin{align*} E\cdot\Psi &= -\frac{\hbar^2}{2m}\nabla^2\Psi + U\cdot\Psi \\ &= -\frac{\hbar^2}{2m}\cdot -\left(k_x^2+k_y^2+k_z^2\right)\Psi + U\cdot\Psi \\ E-U &= \frac{\hbar^2}{2m}\left(k_x^2+k_y^2+k_z^2\right) , \end{align*}\]

which can be satisfied since we're in a classically allowed region with \(E-U>0\), and the right-hand side is manifestly positive.


r / 1. Oscillations can go back and forth, but it's also possible for them to move along a path that bites its own tail, like a circle. Photons act like one, electrons like the other.
2. Back-and-forth oscillations can naturally be described by a segment taken from the real number line, and we visualize the corresponding type of wave as a sine wave. Oscillations around a closed path relate more naturally to the complex number system. The complex number system has rotation built into its structure, e.g., the sequence 1, \(i\), \(i^2\), \(i^3\), ... rotates around the unit circle in 90-degree increments.
3. The double slit experiment embodies the one and only mystery of quantum physics. Either type of wave can undergo double-slit interference.

Use of complex numbers

In a classically forbidden region, a particle's total energy, \(U+K\), is less than its \(U\), so its \(K\) must be negative. If we want to keep believing in the equation \(K=p^2/2m\), then apparently the momentum of the particle is the square root of a negative number. This is a symptom of the fact that the Schrödinger equation fails to describe all of nature unless the wavefunction and various other quantities are allowed to be complex numbers. In particular it is not possible to describe traveling waves correctly without using complex wavefunctions. Complex numbers were reviewed in subsection 10.5.5, p. 605.

This may seem like nonsense, since real numbers are the only ones that are, well, real! Quantum mechanics can always be related to the real world, however, because its structure is such that the results of measurements always come out to be real numbers. For example, we may describe an electron as having non-real momentum in classically forbidden regions, but its average momentum will always come out to be real (the imaginary parts average out to zero), and it can never transfer a non-real quantity of momentum to another particle.

A complete investigation of these issues is beyond the scope of this book, and this is why we have normally limited ourselves to standing waves, which can be described with real-valued wavefunctions. Figure r gives a visual depiction of the difference between real and complex wavefunctions. The following remarks may also be helpful.

Neither of the graphs in r/2 should be interpreted as a path traveled by something. This isn't anything mystical about quantum physics. It's just an ordinary fact about waves, which we first encountered in subsection 6.1.1, p. 340, where we saw the distinction between the motion of a wave and the motion of a wave pattern. In both examples in r/2, the wave pattern is moving in a straight line to the right.

The helical graph in r/2 shows a complex wavefunction whose value rotates around a circle in the complex plane with a frequency \(f\) related to its energy by \(E=hf\). As it does so, its squared magnitude \(|\Psi|^2\) stays the same, so the corresponding probability stays constant. Which direction does it rotate? This direction is purely a matter of convention, since the distinction between the symbols \(i\) and \(-i\) is arbitrary --- both are equally valid as square roots of \(-1\). We can, for example, arbitrarily say that electrons with positive energies have wavefunctions whose phases rotate counterclockwise, and as long as we follow that rule consistently within a given calculation, everything will work. Note that it is not possible to define anything like a right-hand rule here, because the complex plane shown in the right-hand side of r/2 doesn't represent two dimensions of physical space; unlike a screw going into a piece of wood, an electron doesn't have a direction of rotation that depends on its direction of travel.

Example 22: Superposition of complex wavefunctions

\(\triangleright\) The right side of figure r/3 is a cartoonish representation of double-slit interference; it depicts the situation at the center, where symmetry guarantees that the interference is constructive. Suppose that at some off-center point, the two wavefunctions being superposed are \(\Psi_1=b\) and \(\Psi_2=bi\), where \(b\) is a real number with units. Compare the probability of finding the electron at this position with what it would have been if the superposition had been purely constructive, \(b+b=2b\).

\(\triangleright\) The probability per unit volume is proportional to the square of the magnitude of the total wavefunction, so we have

\[\begin{equation*} \frac{P_{\text{off center}}}{P_{\text{center}}} = \frac{|b+bi|^2}{|b+b|^2} = \frac{b^2(1^2+1^2)}{b^2(2^2+0^2)} = \frac{1}{2} . \end{equation*}\]
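Python's built-in complex numbers carry out this arithmetic directly; the following sketch (mine, for illustration) confirms the factor of 1/2:

```python
b = 1.0  # any real amplitude; it cancels in the ratio

psi_off = b + b * 1j   # the two waves arrive 90 degrees out of phase
psi_center = b + b     # fully constructive superposition

# Probabilities go as the squared magnitude of the total wavefunction.
print(abs(psi_off) ** 2 / abs(psi_center) ** 2)  # 0.5
```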

Discussion Questions

The zero level of interaction energy \(U\) is arbitrary, e.g., it's equally valid to pick the zero of gravitational energy to be on the floor of your lab or at the ceiling. Suppose we're doing the double-slit experiment, r/3, with electrons. We define the zero-level of \(U\) so that the total energy \(E=U+K\) of each electron is positive, and we observe a certain interference pattern like the one in figure i on p. 846. What happens if we then redefine the zero-level of \(U\) so that the electrons have \(E\lt0\)?

The figure shows a series of snapshots in the motion of two pulses on a coil spring, one negative and one positive, as they move toward one another and superpose. The final image is very close to the moment at which the two pulses cancel completely. The following discussion is simpler if we consider infinite sine waves rather than pulses. How can the cancellation of two such mechanical waves be reconciled with conservation of energy? What about the case of colliding electromagnetic waves?

Quantum-mechanically, the issue isn't conservation of energy, it's conservation of probability, i.e., if there's initially a 100% probability that a particle exists somewhere, we don't want the probability to be more than or less than 100% at some later time. What happens when the colliding waves have real-valued wavefunctions \(\Psi\)? Complex ones? What happens with standing waves?

The figure shows a skateboarder tipping over into a swimming pool with zero initial kinetic energy. There is no friction, the corners are smooth enough to allow the skater to pass over them smoothly, and the vertical distances are small enough so that negligible time is required for the vertical parts of the motion. The pool is divided into a deep end and a shallow end. Their widths are equal. The deep end is four times as deep. (1) Classically, compare the skater's velocity in the left and right regions, and infer the probability of finding the skater in either of the two halves if an observer peeks at a random moment. (2) Quantum-mechanically, this could be a one-dimensional model of an electron shared between two atoms in a diatomic molecule. Compare the electron's kinetic energies, momenta, and wavelengths in the two sides. For simplicity, let's assume that there is no tunneling into the classically forbidden regions. What is the simplest standing-wave pattern that you can draw, and what are the probabilities of finding the electron in one side or the other? Does this obey the correspondence principle?



13.4 The Atom

You can learn a lot by taking a car engine apart, but you will have learned a lot more if you can put it all back together again and make it run. Half the job of reductionism is to break nature down into its smallest parts and understand the rules those parts obey. The second half is to show how those parts go together, and that is our goal in this chapter. We have seen how certain features of all atoms can be explained on a generic basis in terms of the properties of bound states, but this kind of argument clearly cannot tell us any details of the behavior of an atom or explain why one atom acts differently from another.

The biggest embarrassment for reductionists is that the job of putting things back together is usually much harder than the job of taking them apart. Seventy years after the fundamentals of atomic physics were solved, it is only beginning to be possible to calculate accurately the properties of atoms that have many electrons. Systems consisting of many atoms are even harder. Supercomputer manufacturers point to the folding of large protein molecules as a process whose calculation is just barely feasible with their fastest machines. The goal of this chapter is to give a gentle and visually oriented guide to some of the simpler results about atoms.


a / Eight wavelengths fit around this circle (\(\ell=8\)).

13.4.1 Classifying states

We'll focus our attention first on the simplest atom, hydrogen, with one proton and one electron. We know in advance a little of what we should expect for the structure of this atom. Since the electron is bound to the proton by electrical forces, it should display a set of discrete energy states, each corresponding to a certain standing wave pattern. We need to understand what states there are and what their properties are.

What properties should we use to classify the states? The most sensible approach is to use conserved quantities. Energy is one conserved quantity, and we already know to expect each state to have a specific energy. It turns out, however, that energy alone is not sufficient. Different standing wave patterns of the atom can have the same energy.

Momentum is also a conserved quantity, but it is not particularly appropriate for classifying the states of the electron in a hydrogen atom. The reason is that the force between the electron and the proton results in the continual exchange of momentum between them. (Why wasn't this a problem for energy as well? Kinetic energy and momentum are related by \(K=p^2/2m\), so the much more massive proton never has very much kinetic energy. We are making an approximation by assuming all the kinetic energy is in the electron, but it is quite a good approximation.)

Angular momentum does help with classification. There is no transfer of angular momentum between the proton and the electron, since the force between them is a center-to-center force, producing no torque.

Like energy, angular momentum is quantized in quantum physics. As an example, consider a quantum wave-particle confined to a circle, like a wave in a circular moat surrounding a castle. A sine wave in such a “quantum moat” cannot have any old wavelength, because an integer number of wavelengths must fit around the circumference, \(C\), of the moat. The larger this integer is, the shorter the wavelength, and a shorter wavelength relates to greater momentum and angular momentum. Since this integer is related to angular momentum, we use the symbol \(\ell\) for it:

\[\begin{equation*} \lambda = C / \ell \end{equation*}\]

The angular momentum is

\[\begin{equation*} L = rp . \end{equation*}\]

Here, \(r=C/2\pi \), and \(p=h/\lambda=h\ell/C\), so

\[\begin{align*} L &= \frac{C}{2\pi}\cdot\frac{h\ell}{C} \\ &= \frac{h}{2\pi}\ell \end{align*}\]

In the example of the quantum moat, angular momentum is quantized in units of \(h/2\pi \). This makes \(h/2\pi \) a pretty important number, so we define the abbreviation \(\hbar=h/2\pi \). This symbol is read “h-bar.”
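The quantum-moat algebra is short enough to mirror in code. In the sketch below (the 1 nm circumference is an arbitrary number chosen for illustration), the ratio \(L/\hbar\) comes out to be exactly the integer \(\ell\), whatever \(C\) is:

```python
import numpy as np

h = 6.63e-34            # Planck's constant, J*s
hbar = h / (2 * np.pi)

C = 1.0e-9              # circumference of a hypothetical 1-nm quantum moat
for ell in range(1, 4):
    lam = C / ell               # an integer number of wavelengths fits around C
    p = h / lam                 # de Broglie momentum
    L = (C / (2 * np.pi)) * p   # L = r*p with r = C/(2*pi)
    print(ell, L / hbar)        # 1.0, 2.0, 3.0: quantized in units of hbar
```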

In fact, this is a completely general fact in quantum physics, not just a fact about the quantum moat:

Quantization of angular momentum

The angular momentum of a particle due to its motion through space is quantized in units of \(\hbar\).

self-check:

What is the angular momentum of the wavefunction shown at the beginning of the section?

(answer in the back of the PDF version of the book)


b / Reconciling the uncertainty principle with the definition of angular momentum.

13.4.2 Three dimensions

Our discussion of quantum-mechanical angular momentum has so far been limited to rotation in a plane, for which we can simply use positive and negative signs to indicate clockwise and counterclockwise directions of rotation. A hydrogen atom, however, is unavoidably three-dimensional. The classical treatment of angular momentum in three dimensions was presented in section 4.3; in general, the angular momentum of a particle is defined as the vector cross product \(\mathbf{r}\times\mathbf{p}\).

There is a basic problem here: the angular momentum of the electron in a hydrogen atom depends on both its distance \(\mathbf{r}\) from the proton and its momentum \(\mathbf{p}\), so in order to know its angular momentum precisely it would seem we would need to know both its position and its momentum simultaneously with good accuracy. This, however, seems forbidden by the Heisenberg uncertainty principle.

Actually the uncertainty principle does place limits on what can be known about a particle's angular momentum vector, but it does not prevent us from knowing its magnitude as an exact integer multiple of \(\hbar\). The reason is that in three dimensions, there are really three separate uncertainty principles:

\[\begin{align*} \Delta p_x \Delta x &\gtrsim h \\ \Delta p_y \Delta y &\gtrsim h \\ \Delta p_z \Delta z &\gtrsim h \end{align*}\]

Now consider a particle, b/1, that is moving along the \(x\) axis at position \(x\) and with momentum \(p_x\). We may not be able to know both \(x\) and \(p_x\) with unlimited accuracy, but we can still know the particle's angular momentum about the origin exactly: it is zero, because the particle is moving directly away from the origin.

Suppose, on the other hand, a particle finds itself, b/2, at a position \(x\) along the \(x\) axis, and it is moving parallel to the \(y\) axis with momentum \(p_y\). It has angular momentum \(xp_y\) about the \(z\) axis, and again we can know its angular momentum with unlimited accuracy, because the uncertainty principle only relates \(x\) to \(p_x\) and \(y\) to \(p_y\). It does not relate \(x\) to \(p_y\).

As shown by these examples, the uncertainty principle does not restrict the accuracy of our knowledge of angular momenta as severely as might be imagined. However, it does prevent us from knowing all three components of an angular momentum vector simultaneously. The most general statement about this is the following theorem, which we present without proof:

The angular momentum vector in quantum physics

The most that can be known about an angular momentum vector is its magnitude and one of its three vector components. Both are quantized in units of \(\hbar\).


c / A cross-section of a hydrogen wavefunction.


d / The energy of a state in the hydrogen atom depends only on its \(n\) quantum number.

13.4.3 The hydrogen atom

Deriving the wavefunctions of the states of the hydrogen atom from first principles would be mathematically too complex for this book, but it's not hard to understand the logic behind such a wavefunction in visual terms. Consider the wavefunction from the beginning of the section, which is reproduced in figure c. Although the graph looks three-dimensional, it is really only a representation of the part of the wavefunction lying within a two-dimensional plane. The third (up-down) dimension of the plot represents the value of the wavefunction at a given point, not the third dimension of space. The plane chosen for the graph is the one perpendicular to the angular momentum vector.

Each ring of peaks and valleys has eight wavelengths going around in a circle, so this state has \(L=8\hbar\), i.e., we label it \(\ell=8\). The wavelength is shorter near the center, and this makes sense because when the electron is close to the nucleus it has a lower electrical energy, a higher kinetic energy, and a higher momentum.

Between each ring of peaks in this wavefunction is a nodal circle, i.e., a circle on which the wavefunction is zero. The full three-dimensional wavefunction has nodal spheres: a series of nested spherical surfaces on which it is zero. The number of radii at which nodes occur, including \(r=\infty\), is called \(n\), and \(n\) turns out to be closely related to energy. The ground state has \(n=1\) (a single node only at \(r=\infty\)), and higher-energy states have higher \(n\) values. There is a simple equation relating \(n\) to energy, which we will discuss in subsection 13.4.4.

The numbers \(n\) and \(\ell\), which identify the state, are called its quantum numbers. A state of a given \(n\) and \(\ell\) can be oriented in a variety of directions in space. We might try to indicate the orientation using the three quantum numbers \(\ell_x=L_x/\hbar\), \(\ell_y=L_y/\hbar\), and \(\ell_z=L_z/\hbar\). But we have already seen that it is impossible to know all three of these simultaneously. To give the most complete possible description of a state, we choose an arbitrary axis, say the \(z\) axis, and label the state according to \(n\), \(\ell\), and \(\ell_z\).

Angular momentum requires motion, and motion implies kinetic energy. Thus it is not possible to have a given amount of angular momentum without having a certain amount of kinetic energy as well. Since energy relates to the \(n\) quantum number, this means that for a given \(n\) value there will be a maximum possible \(\ell\). It turns out that this maximum value of \(\ell\) equals \(n-1\).

In general, we can list the possible combinations of quantum numbers as follows:


\(n\) can equal 1, 2, 3, ...
\(\ell\) can range from 0 to \(n-1\), in steps of 1
\(\ell_z\) can range from \(-\ell\) to \(\ell\), in steps of 1

Applying these rules, we have the following list of states:





\(n=1\), \(\ell=0\), \(\ell_z=0\): one state
\(n=2\), \(\ell=0\), \(\ell_z=0\): one state
\(n=2\), \(\ell=1\), \(\ell_z=-1\), 0, or 1: three states




self-check:

Continue the list for \(n=3\).

(answer in the back of the PDF version of the book)
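These counting rules are easy to turn into code; here is a minimal Python sketch (mine, not the book's) that enumerates the allowed \((n, \ell, \ell_z)\) combinations, reproducing the list above and the \(n=3\) continuation asked for in the self-check:

```python
def hydrogen_states(n_max):
    """Enumerate the allowed (n, l, lz) combinations."""
    return [(n, l, lz)
            for n in range(1, n_max + 1)
            for l in range(n)              # l runs from 0 to n-1
            for lz in range(-l, l + 1)]    # lz runs from -l to +l

for n in (1, 2, 3):
    states = [s for s in hydrogen_states(3) if s[0] == n]
    print(n, len(states))   # 1, 4, 9: there are n**2 states for each n
```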

Figure e on page 884 shows the lowest-energy states of the hydrogen atom. The left-hand column of graphs displays the wavefunctions in the \(x-y\) plane, and the right-hand column shows the probability distribution in a three-dimensional representation.


e / The three states of the hydrogen atom having the lowest energies.

Discussion Questions

The quantum number \(n\) is defined as the number of radii at which the wavefunction is zero, including \(r=\infty\). Relate this to the features of the figures on the facing page.

Based on the definition of \(n\), why can't there be any such thing as an \(n=0\) state?

Relate the features of the wavefunction plots in figure e to the corresponding features of the probability distribution pictures.

How can you tell from the wavefunction plots in figure e which ones have which angular momenta?

Criticize the following incorrect statement: “The \(\ell=8\) wavefunction in figure c has a shorter wavelength in the center because in the center the electron is in a higher energy level.”

Discuss the implications of the fact that the probability cloud of the \(n=2\), \(\ell=1\) state is split into two parts.


f / The energy levels of a particle in a box, contrasted with those of the hydrogen atom.

13.4.4 Energies of states in hydrogen

History

The experimental technique for measuring the energy levels of an atom accurately is spectroscopy: the study of the spectrum of light emitted (or absorbed) by the atom. Only photons with certain energies can be emitted or absorbed by a hydrogen atom, for example, since the amount of energy gained or lost by the atom must equal the difference in energy between the atom's initial and final states. Spectroscopy had become a highly developed art several decades before Einstein even proposed the photon, and the Swiss spectroscopist Johann Balmer determined in 1885 that there was a simple equation that gave all the wavelengths emitted by hydrogen. In modern terms, we think of the photon wavelengths merely as indirect evidence about the underlying energy levels of the atom, and we rework Balmer's result into an equation for these atomic energy levels:

\[\begin{equation*} E_n = -\frac{2.2\times10^{-18}\ \text{J}}{n^2} . \end{equation*}\]

This energy includes both the kinetic energy of the electron and the electrical energy. The zero-level of the electrical energy scale is chosen to be the energy of an electron and a proton that are infinitely far apart. With this choice, negative energies correspond to bound states and positive energies to unbound ones.

Where does the mysterious numerical factor of \(2.2\times10^{-18}\ \text{J}\) come from? In 1913 the Danish theorist Niels Bohr realized that it was exactly numerically equal to a certain combination of fundamental physical constants:

\[\begin{equation*} E_n = -\frac{mk^2e^4}{2\hbar^2}\cdot\frac{1}{n^2} , \end{equation*}\]

where \(m\) is the mass of the electron, and \(k\) is the Coulomb force constant for electric forces.
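Bohr's numerical claim is easy to check; the following sketch evaluates \(mk^2e^4/2\hbar^2\) with the constants rounded to three figures:

```python
m = 9.11e-31      # electron mass, kg
k = 8.99e9        # Coulomb constant, N*m^2/C^2
e = 1.60e-19      # elementary charge, C
hbar = 1.055e-34  # reduced Planck constant, J*s

E1 = -m * k**2 * e**4 / (2 * hbar**2)
print(E1)  # about -2.2e-18 J, the constant in the energy-level formula
```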

Bohr was able to cook up a derivation of this equation based on the incomplete version of quantum physics that had been developed by that time, but his derivation is today mainly of historical interest. It assumes that the electron follows a circular path, whereas the whole concept of a path for a particle is considered meaningless in our more complete modern version of quantum physics. Although Bohr was able to produce the right equation for the energy levels, his model also gave various wrong results, such as predicting that the atom would be flat, and that the ground state would have \(\ell=1\) rather than the correct \(\ell=0\).

Approximate treatment

Rather than leaping straight into a full mathematical treatment, we'll start by looking for some physical insight, which will lead to an approximate argument that correctly reproduces the form of the Bohr equation.

A typical standing-wave pattern for the electron consists of a central oscillating area surrounded by a region in which the wavefunction tails off. As discussed in subsection 13.3.6, the oscillating type of pattern is typically encountered in the classically allowed region, while the tailing off occurs in the classically forbidden region where the electron has insufficient kinetic energy to penetrate according to classical physics. We use the symbol \(r\) for the radius of the spherical boundary between the classically allowed and classically forbidden regions. Classically, \(r\) would be the distance from the proton at which the electron would have to stop, turn around, and head back in.

If \(r\) had the same value for every standing-wave pattern, then we'd essentially be solving the particle-in-a-box problem in three dimensions, with the box being a spherical cavity. Consider the energy levels of the particle in a box compared to those of the hydrogen atom, f. They're qualitatively different. The energy levels of the particle in a box get farther and farther apart as we go higher in energy, and this feature doesn't even depend on the details of whether the box is two-dimensional or three-dimensional, or its exact shape. The reason for the spreading is that the box is taken to be completely impenetrable, so its size, \(r\), is fixed. A wave pattern with \(n\) humps has a wavelength proportional to \(r/n\), and therefore a momentum proportional to \(n\), and an energy proportional to \(n^2\). In the hydrogen atom, however, the force keeping the electron bound isn't an infinite force encountered when it bounces off of a wall, it's the attractive electrical force from the nucleus. If we put more energy into the electron, it's like throwing a ball upward with a higher energy --- it will get farther out before coming back down. This means that in the hydrogen atom, we expect \(r\) to increase as we go to states of higher energy. This tends to keep the wavelengths of the high energy states from getting too short, reducing their kinetic energy. The closer and closer crowding of the energy levels in hydrogen also makes sense because we know that there is a certain energy that would be enough to make the electron escape completely, and therefore the sequence of bound states cannot extend above that energy.

When the electron is at the maximum classically allowed distance \(r\) from the proton, it has zero kinetic energy. Thus when the electron is at distance \(r\), its energy is purely electrical:

\[\begin{equation*} E = -\frac{ke^2}{r} \end{equation*}\]

Now comes the approximation. In reality, the electron's wavelength cannot be constant in the classically allowed region, but we pretend that it is. Since \(n\) is the number of nodes in the wavefunction, we can interpret it approximately as the number of wavelengths that fit across the diameter \(2r\). We are not even attempting a derivation that would produce all the correct numerical factors like 2 and \(\pi \) and so on, so we simply make the approximation

\[\begin{equation*} \lambda \sim \frac{r}{n} . \end{equation*}\]

Finally we assume that the typical kinetic energy of the electron is on the same order of magnitude as the absolute value of its total energy. (This is true to within a factor of two for a typical classical system like a planet in a circular orbit around the sun.) We then have

\[\begin{align*} \text{absolute value of total energy} &= \frac{ke^2}{r} \\ &\sim K \\ &= p^2/2m \\ &= (h/\lambda)^2/2m \\ &\sim h^2n^2/2mr^2 \end{align*}\]

We now solve the equation \(ke^2/r \sim h^2n^2 / 2mr^2\) for \(r\) and throw away numerical factors we can't hope to have gotten right, yielding

\[\begin{equation*} r \sim \frac{h^2n^2}{mke^2} . \end{equation*}\]

Plugging \(n=1\) into this equation gives \(r=2\) nm, which is indeed on the right order of magnitude. Finally we combine this result for \(r\) with the earlier relation \(E=-ke^2/r\) to find

\[\begin{equation*} E \sim -\frac{mk^2e^4}{h^2n^2} , \end{equation*}\]

which is correct except for the numerical factors we never aimed to find.

Exact treatment of the ground state

The general proof of the Bohr equation for all values of \(n\) is beyond the mathematical scope of this book, but it's fairly straightforward to verify it for a particular \(n\), especially given a lucky guess as to what functional form to try for the wavefunction. The form that works for the ground state is

\[\begin{equation*} \Psi = ue^{-r/a} , \end{equation*}\]

where \(r=\sqrt{x^2+y^2+z^2}\) is the electron's distance from the proton, and \(u\) provides for normalization. In the following, the result \(\partial r/\partial x=x/r\) comes in handy. Computing the partial derivatives that occur in the Laplacian, we obtain for the \(x\) term

\[\begin{align*} \frac{\partial\Psi}{\partial x} &= \frac{\partial \Psi}{\partial r} \frac{\partial r}{\partial x} \\ &= -\frac{x}{ar} \Psi \\ \frac{\partial^2\Psi}{\partial x^2} &= -\frac{1}{ar} \Psi -\frac{x}{a}\left(\frac{\partial}{\partial x}\frac{1}{r}\right)\Psi+ \left( \frac{x}{ar}\right)^2 \Psi \\ &= -\frac{1}{ar} \Psi +\frac{x^2}{ar^3}\Psi+ \left( \frac{x}{ar}\right)^2 \Psi , \end{align*}\]

so

\[\begin{equation*} \nabla^2\Psi = \left( -\frac{2}{ar} + \frac{1}{a^2} \right) \Psi . \end{equation*}\]

The Schrödinger equation gives

\[\begin{align*} E\cdot\Psi &= -\frac{\hbar^2}{2m}\nabla^2\Psi + U\cdot\Psi \\ &= \frac{\hbar^2}{2m}\left( \frac{2}{ar} - \frac{1}{a^2} \right)\Psi -\frac{ke^2}{r}\cdot\Psi \end{align*}\]

If we require this equation to hold for all \(r\), then we must have equality for both the terms of the form \((\text{constant})\times\Psi\) and for those of the form \((\text{constant}/r)\times\Psi\). That means

\[\begin{align*} E &= -\frac{\hbar^2}{2ma^2} \\ \text{and}\quad 0 &= \frac{\hbar^2}{mar} -\frac{ke^2}{r} . \end{align*}\]

These two equations can be solved for the unknowns \(a\) and \(E\), giving

\[\begin{align*} a &= \frac{\hbar^2}{mke^2} \\ \text{and}\quad E &= -\frac{mk^2e^4}{2\hbar^2} , \end{align*}\]

where the result for the energy agrees with the Bohr equation for \(n=1\). The calculation of the normalization constant \(u\) is relegated to homework problem 36.
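If you'd rather let a computer grind through the partial derivatives, the following sympy sketch (my own verification; the symbol names are arbitrary) retraces the whole calculation, substituting \(a=\hbar^2/mke^2\) and recovering the ground-state energy:

```python
import sympy as sp

x, y, z, a, u, m, k, e, hbar = sp.symbols('x y z a u m k e hbar', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
psi = u * sp.exp(-r / a)

# The Laplacian of the trial wavefunction.
lap = sum(sp.diff(psi, v, 2) for v in (x, y, z))

# Solve E*psi = -(hbar^2/2m)*lap + U*psi for E, with U = -k e^2 / r.
E = sp.simplify((-hbar**2 / (2 * m) * lap - k * e**2 / r * psi) / psi)

# Substituting a = hbar^2/(m k e^2) makes the r-dependence cancel.
E_ground = sp.simplify(E.subs(a, hbar**2 / (m * k * e**2)))
print(E_ground)  # -e**4*k**2*m/(2*hbar**2): the Bohr energy for n = 1
```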

self-check:

We've verified that the function \(\Psi = ue^{-r/a}\) is a solution to the Schrödinger equation, and yet it has a kink in it at \(r=0\). What's going on here? Didn't I argue before that kinks are unphysical?

(answer in the back of the PDF version of the book)
Example 23: Wave phases in the hydrogen molecule
In example 16 on page 863, I argued that the existence of the \(\text{H}_2\) molecule could essentially be explained by a particle-in-a-box argument: the molecule is a bigger box than an individual atom, so each electron's wavelength can be longer, its kinetic energy lower. Now that we're in possession of a mathematical expression for the wavefunction of the hydrogen atom in its ground state, we can make this argument a little more rigorous and detailed. Suppose that two hydrogen atoms are in a relatively cool sample of monoatomic hydrogen gas. Because the gas is cool, we can assume that the atoms are in their ground states. Now suppose that the two atoms approach one another. Making use again of the assumption that the gas is cool, it is reasonable to imagine that the atoms approach one another slowly. Now the atoms come a little closer, but still far enough apart that the region between them is classically forbidden. Each electron can tunnel through this classically forbidden region, but the tunneling probability is small. Each one is now found with, say, 99% probability in its original home, but with 1% probability in the other nucleus. Each electron is now in a state consisting of a superposition of the ground state of its own atom with the ground state of the other atom. There are two peaks in the superposed wavefunction, but one is a much bigger peak than the other.

An interesting question now arises. What are the relative phases of the two electrons? As discussed on page 857, the absolute phase of an electron's wavefunction is not really a meaningful concept. Suppose atom A contains electron Alice, and B electron Bob. Just before the collision, Alice may have wondered, “Is my phase positive right now, or is it negative? But of course I shouldn't ask myself such silly questions,” she adds sheepishly.


g / Example 23.

But relative phases are well defined. As the two atoms draw closer and closer together, the tunneling probability rises, and eventually gets so high that each electron is spending essentially 50% of its time in each atom. It's now reasonable to imagine that either one of two possibilities could obtain. Alice's wavefunction could either look like g/1, with the two peaks in phase with one another, or it could look like g/2, with opposite phases. Because relative phases of wavefunctions are well defined, states 1 and 2 are physically distinguishable. In particular, the kinetic energy of state 2 is much higher; roughly speaking, it is like the two-hump wave pattern of the particle in a box, as opposed to 1, which looks roughly like the one-hump pattern with a much longer wavelength. Not only that, but an electron in state 1 has a large probability of being found in the central region, where it has a large negative electrical energy due to its interaction with both protons. State 2, on the other hand, has a low probability of existing in that region. Thus state 1 represents the true ground-state wavefunction of the \(\text{H}_2\) molecule, and putting both Alice and Bob in that state results in a lower energy than their total energy when separated, so the molecule is bound, and will not fly apart spontaneously.

State g/3, on the other hand, is not physically distinguishable from g/2, nor is g/4 from g/1. Alice may say to Bob, “Isn't it wonderful that we're in state 1 or 4? I love being stable like this.” But she knows it's not meaningful to ask herself at a given moment which state she's in, 1 or 4.

Discussion Questions

States of hydrogen with \(n\) greater than about 10 are never observed in the sun. Why might this be?

Sketch graphs of \(r\) and \(E\) versus \(n\) for the hydrogen atom, and compare with analogous graphs for the one-dimensional particle in a box.


h / The top has angular momentum both because of the motion of its center of mass through space and due to its internal rotation. Electron spin is roughly analogous to the intrinsic spin of the top.

13.4.5 Electron spin

It's disconcerting to the novice ping-pong player to encounter for the first time a more skilled player who can put spin on the ball. Even though you can't see that the ball is spinning, you can tell something is going on by the way it interacts with other objects in its environment. In the same way, we can tell from the way electrons interact with other things that they have an intrinsic spin of their own. Experiments show that even when an electron is not moving through space, it still has angular momentum amounting to \(\hbar/2\).

This may seem paradoxical because the quantum moat, for instance, gave only angular momenta that were integer multiples of \(\hbar\), not half-units, and I claimed that angular momentum was always quantized in units of \(\hbar\), not just in the case of the quantum moat. That whole discussion, however, assumed that the angular momentum would come from the motion of a particle through space. The \(\hbar/2\) angular momentum of the electron is simply a property of the particle, like its charge or its mass. It has nothing to do with whether the electron is moving or not, and it does not come from any internal motion within the electron. Nobody has ever succeeded in finding any internal structure inside the electron, and even if there was internal structure, it would be mathematically impossible for it to result in a half-unit of angular momentum.

We simply have to accept this \(\hbar/2\) angular momentum, called the “spin” of the electron --- Mother Nature rubs our noses in it as an observed fact.

Protons and neutrons have the same \(\hbar/2\) spin, while photons have an intrinsic spin of \(\hbar\). In general, half-integer spins are typical of material particles. Integral values are found for the particles that carry forces: photons, which embody the electric and magnetic fields of force, as well as the more exotic messengers of the nuclear and gravitational forces.

As was the case with ordinary angular momentum, we can describe spin angular momentum in terms of its magnitude, and its component along a given axis. We write \(s\) and \(s_z\) for these quantities, expressed in units of \(\hbar\), so an electron has \(s=1/2\) and \(s_z=+1/2\) or \(-1/2\).

Taking electron spin into account, we need a total of four quantum numbers to label a state of an electron in the hydrogen atom: \(n\), \(\ell\), \(\ell_z\), and \(s_z\). (We omit \(s\) because it always has the same value.) The symbols \(\ell\) and \(\ell_z\) include only the angular momentum the electron has because it is moving through space, not its spin angular momentum. The availability of two possible spin states of the electron leads to a doubling of the numbers of states:






\(n=1\), \(\ell=0\), \(\ell_z=0\), \(s_z=+1/2\) or \(-1/2\): two states
\(n=2\), \(\ell=0\), \(\ell_z=0\), \(s_z=+1/2\) or \(-1/2\): two states
\(n=2\), \(\ell=1\), \(\ell_z=-1\), 0, or 1, \(s_z=+1/2\) or \(-1/2\): six states





A note about notation

There are unfortunately two inconsistent systems of notation for the quantum numbers we've been discussing. The notation I've been using is the one that is used in nuclear physics, but there is a different one that is used in atomic physics.



nuclear physics → atomic physics
\(n\) → same
\(\ell\) → same
\(\ell_x\) → no notation
\(\ell_y\) → no notation
\(\ell_z\) → \(m\)
\(s=1/2\) → no notation (sometimes \(\sigma\))
\(s_x\) → no notation
\(s_y\) → no notation
\(s_z\) → \(s\)


The nuclear physics notation is more logical (not giving special status to the \(z\) axis) and more memorable (\(\ell_z\) rather than the obscure \(m\)), which is why I use it consistently in this book, even though nearly all the applications we'll consider are atomic ones.

We are further encumbered with the following historically derived letter labels, which deserve to be eliminated in favor of the simpler numerical ones:





\(\ell=0\): s, \(\ell=1\): p, \(\ell=2\): d, \(\ell=3\): f











\(n=1\): K, \(n=2\): L, \(n=3\): M, \(n=4\): N, \(n=5\): O, \(n=6\): P, \(n=7\): Q







The spdf labels are used in both nuclear and atomic physics, while the KLMNOPQ letters are used only to refer to states of electrons.

And finally, there is a piece of notation that is good and useful, but which I simply haven't mentioned yet. The vector \(\mathbf{j}=\boldsymbol{\ell}+\mathbf{s}\) stands for the total angular momentum of a particle in units of \(\hbar\), including both orbital and spin parts. This quantum number turns out to be very useful in nuclear physics, because nuclear forces tend to exchange orbital and spin angular momentum, so a given energy level often contains a mixture of \(\ell\) and \(s\) values, while remaining fairly pure in terms of \(j\).


i / The beginning of the periodic table.


j / Hydrogen is highly reactive.

13.4.6 Atoms with more than one electron

What about other atoms besides hydrogen? It would seem that things would get much more complex with the addition of a second electron. A hydrogen atom only has one particle that moves around much, since the nucleus is so heavy and nearly immobile. Helium, with two, would be a mess. Instead of a wavefunction whose square tells us the probability of finding a single electron at any given location in space, a helium atom would need to have a wavefunction whose square would tell us the probability of finding two electrons at any given combination of points. Ouch! In addition, we would have the extra complication of the electrical interaction between the two electrons, rather than being able to imagine everything in terms of an electron moving in a static field of force created by the nucleus alone.

Despite all this, it turns out that we can get a surprisingly good description of many-electron atoms simply by assuming the electrons can occupy the same standing-wave patterns that exist in a hydrogen atom. The ground state of helium, for example, would have both electrons in states that are very similar to the \(n=1\) states of hydrogen. The second-lowest-energy state of helium would have one electron in an \(n=1\) state, and the other in an \(n=2\) state. The relatively complex spectra of elements heavier than hydrogen can be understood as arising from the great number of possible combinations of states for the electrons.

A surprising thing happens, however, with lithium, the three-electron atom. We would expect the ground state of this atom to be one in which all three electrons settle down into \(n=1\) states. What really happens is that two electrons go into \(n=1\) states, but the third stays up in an \(n=2\) state. This is a consequence of a new principle of physics:

The Pauli Exclusion Principle

Only one electron can ever occupy a given state.

There are two \(n=1\) states, one with \(s_z=+1/2\) and one with \(s_z=-1/2\), but there is no third \(n=1\) state for lithium's third electron to occupy, so it is forced to go into an \(n=2\) state.

It can be proved mathematically that the Pauli exclusion principle applies to any type of particle that has half-integer spin. Thus two neutrons can never occupy the same state, and likewise for two protons. Photons, however, are immune to the exclusion principle because their spin is an integer.

Deriving the periodic table

We can now account for the structure of the periodic table, which seemed so mysterious even to its inventor Mendeleev. The first row consists of atoms with electrons only in the \(n=1\) states:

H: 1 electron in an \(n=1\) state

He: 2 electrons in the two \(n=1\) states

The next row is built by filling the \(n=2\) energy levels:

Li: 2 electrons in \(n=1\) states, 1 electron in an \(n=2\) state

Be: 2 electrons in \(n=1\) states, 2 electrons in \(n=2\) states

...

O: 2 electrons in \(n=1\) states, 6 electrons in \(n=2\) states

F: 2 electrons in \(n=1\) states, 7 electrons in \(n=2\) states

Ne: 2 electrons in \(n=1\) states, 8 electrons in \(n=2\) states

In the third row we start in on the \(n=3\) levels:

Na: 2 electrons in \(n=1\) states, 8 electrons in \(n=2\) states, 1 electron in an \(n=3\) state

...

We can now see a logical link between the filling of the energy levels and the structure of the periodic table. Column 0, for example, consists of atoms with the right number of electrons to fill all the available states up to a certain value of \(n\). Column I contains atoms like lithium that have just one electron more than that.

This shows that the columns relate to the filling of energy levels, but why does that have anything to do with chemistry? Why, for example, are the elements in columns I and VII dangerously reactive? Consider, for example, the element sodium (Na), which is so reactive that it may burst into flames when exposed to air. The electron in the \(n=3\) state has an unusually high energy. If we let a sodium atom come in contact with an oxygen atom, energy can be released by transferring the \(n=3\) electron from the sodium to one of the vacant lower-energy \(n=2\) states in the oxygen. This energy is transformed into heat. Any atom in column I is highly reactive for the same reason: it can release energy by giving away the electron that has an unusually high energy.

Column VII is spectacularly reactive for the opposite reason: these atoms have a single vacancy in a low-energy state, so energy is released when these atoms steal an electron from another atom.

It might seem as though these arguments would only explain reactions of atoms that are in different rows of the periodic table, because only in these reactions can a transferred electron move from a higher-\(n\) state to a lower-\(n\) state. This is incorrect. An \(n=2\) electron in fluorine (F), for example, would have a different energy than an \(n=2\) electron in lithium (Li), due to the different number of protons and electrons with which it is interacting. Roughly speaking, the \(n=2\) electron in fluorine is more tightly bound (lower in energy) because of the larger number of protons attracting it. The effect of the increased number of attracting protons is only partly counteracted by the increase in the number of repelling electrons, because the forces exerted on an electron by the other electrons are in many different directions and cancel out partially.

Homework Problems

a / Problem 6.

b / Problem 15.

c / Problem 16.

d / Problem 25.

e / Problem 43.

1. If a radioactive substance has a half-life of one year, does this mean that it will be completely decayed after two years? Explain.

2. What is the probability of rolling a pair of dice and getting “snake eyes,” i.e., both dice come up with ones?

3. Problem 3 has been deleted.

4. Problem 4 has been deleted.

5. Refer to the probability distribution for people's heights in figure f on page 830.
(a) Show that the graph is properly normalized.
(b) Estimate the fraction of the population having heights between 140 and 150 cm. (answer check available at lightandmatter.com)

6. (a) A nuclear physicist is studying a nuclear reaction caused in an accelerator experiment, with a beam of ions from the accelerator striking a thin metal foil and causing nuclear reactions when a nucleus from one of the beam ions happens to hit one of the nuclei in the target. After the experiment has been running for a few hours, a few billion radioactive atoms have been produced, embedded in the target. She does not know what nuclei are being produced, but she suspects they are an isotope of some heavy element such as Pb, Bi, Fr, or U. Following one such experiment, she takes the target foil out of the accelerator, sticks it in front of a detector, measures the activity every 5 min, and makes a graph (figure a). The isotopes she thinks may have been produced are:

isotope               half-life (minutes)
\(^{211}\text{Pb}\)          36.1
\(^{214}\text{Pb}\)          26.8
\(^{214}\text{Bi}\)          19.7
\(^{223}\text{Fr}\)          21.8
\(^{239}\text{U}\)           23.5

Which one is it?
(b) Having decided that the original experimental conditions produced one specific isotope, she now tries using beams of ions traveling at several different speeds, which may cause different reactions. The following table gives the activity of the target 10, 20 and 30 minutes after the end of the experiment, for three different ion speeds.

                    activity (millions of decays/s) after…
                    10 min    20 min    30 min
first ion speed      1.933     0.832     0.382
second ion speed     1.200     0.545     0.248
third ion speed      7.211     1.296     0.248

Since such a large number of decays is being counted, assume that the data are only inaccurate due to rounding off when writing down the table. Which are consistent with the production of a single isotope, and which imply that more than one isotope was being created?

7. Devise a method for testing experimentally the hypothesis that a gambler's chance of winning at craps is independent of her previous record of wins and losses. If you don't invoke the mathematical definition of statistical independence, then you haven't proposed a test.

8. A blindfolded person fires a gun at a circular target of radius \(b\), and is allowed to continue firing until a shot actually hits it. Any part of the target is equally likely to get hit. We measure the random distance \(r\) from the center of the circle to where the bullet went in.
(a) Show that the probability distribution of \(r\) must be of the form \(D(r)=kr\), where \(k\) is some constant. (Of course we have \(D(r)=0\) for \(r>b\).)
(b) Determine \(k\) by requiring \(D\) to be properly normalized. (answer check available at lightandmatter.com)
(c) Find the average value of \(r\). (answer check available at lightandmatter.com)
(d) Interpreting your result from part c, how does it compare with \(b/2\)? Does this make sense? Explain.

9. We are given some atoms of a certain radioactive isotope, with half-life \(t_{1/2}\). We pick one atom at random, and observe it for one half-life, starting at time zero. If it decays during that one-half-life period, we record the time \(t\) at which the decay occurred. If it doesn't, we reset our clock to zero and keep trying until we get an atom that cooperates. The final result is a time \(0\le t\le t_{1/2}\), with a distribution that looks like the usual exponential decay curve, but with its tail chopped off.
(a) Find the distribution \(D(t)\), with the proper normalization. (answer check available at lightandmatter.com)
(b) Find the average value of \(t\). (answer check available at lightandmatter.com)
(c) Interpreting your result from part b, how does it compare with \(t_{1/2}/2\)? Does this make sense? Explain.

10. The speed, \(v\), of an atom in an ideal gas has a probability distribution of the form \(D(v) = bve^{-cv^2}\), where \(0\le v \lt \infty\), \(c\) relates to the temperature, and \(b\) is determined by normalization.
(a) Sketch the distribution.
(b) Find \(b\) in terms of \(c\). (answer check available at lightandmatter.com)
(c) Find the average speed in terms of \(c\), eliminating \(b\). (Don't try to do the indefinite integral, because it can't be done in closed form. The relevant definite integral can be found in tables or done with computer software.) (answer check available at lightandmatter.com)

11. All helium on earth is from the decay of naturally occurring heavy radioactive elements such as uranium. Each alpha particle that is emitted ends up claiming two electrons, which makes it a helium atom. If the original \(^{238}\text{U}\) atom is in solid rock (as opposed to the earth's molten regions), the He atoms are unable to diffuse out of the rock. This problem involves dating a rock using the known decay properties of uranium 238. Suppose a geologist finds a sample of hardened lava, melts it in a furnace, and finds that it contains 1230 mg of uranium and 2.3 mg of helium. \(^{238}\text{U}\) decays by alpha emission, with a half-life of \(4.5\times10^9\) years. The subsequent chain of alpha and electron (beta) decays involves much shorter half-lives, and terminates in the stable nucleus \(^{206}\text{Pb}\). Almost all natural uranium is \(^{238}\text{U}\), and the chemical composition of this rock indicates that there were no decay chains involved other than that of \(^{238}\text{U}\).
(a) How many alphas are emitted per decay chain? [Hint: Use conservation of mass.]
(b) How many electrons are emitted per decay chain? [Hint: Use conservation of charge.]
(c) How long has it been since the lava originally hardened? (answer check available at lightandmatter.com)

12. When light is reflected from a mirror, perhaps only 80% of the energy comes back. One could try to explain this in two different ways: (1) 80% of the photons are reflected, or (2) all the photons are reflected, but each loses 20% of its energy. Based on your everyday knowledge about mirrors, how can you tell which interpretation is correct? [Based on a problem from PSSC Physics.]

13. Suppose we want to build an electronic light sensor using an apparatus like the one described in the section on the photoelectric effect. How would its ability to detect different parts of the spectrum depend on the type of metal used in the capacitor plates?

14. The photoelectric effect can occur not just for metal cathodes but for any substance, including living tissue. Ionization of DNA molecules can cause cancer or birth defects. If the energy required to ionize DNA is on the same order of magnitude as the energy required to produce the photoelectric effect in a metal, which of the following types of electromagnetic waves might pose such a hazard? Explain.

60 Hz waves from power lines
100 MHz FM radio
microwaves from a microwave oven
visible light
ultraviolet light
x-rays


15. (a) Rank-order the photons according to their wavelengths, frequencies, and energies. If two are equal, say so. Explain all your answers.
(b) Photon 3 was emitted by a xenon atom going from its second-lowest-energy state to its lowest-energy state. Which of photons 1, 2, and 4 are capable of exciting a xenon atom from its lowest-energy state to its second-lowest-energy state? Explain.

16. Which figure could be an electron speeding up as it moves to the right? Explain.

17. The beam of a 100-W overhead projector covers an area of \(1\ \text{m}\times1\ \text{m}\) when it hits the screen 3 m away. Estimate the number of photons that are in flight at any given time. (Since this is only an estimate, we can ignore the fact that the beam is not parallel.) (answer check available at lightandmatter.com)

18. In the photoelectric effect, electrons are observed with virtually no time delay (\(\sim10\) ns), even when the light source is very weak. (A weak light source does however only produce a small number of ejected electrons.) The purpose of this problem is to show that the lack of a significant time delay contradicted the classical wave theory of light, so throughout this problem you should put yourself in the shoes of a classical physicist and pretend you don't know about photons at all. At that time, it was thought that the electron might have a radius on the order of \(10^{-15}\) m. (Recent experiments have shown that if the electron has any finite size at all, it is far smaller.)
(a) Estimate the power that would be soaked up by a single electron in a beam of light with an intensity of 1 \(\text{mW}/\text{m}^2\). (answer check available at lightandmatter.com)
(b) The energy, \(E_s\), required for the electron to escape through the surface of the cathode is on the order of \(10^{-19}\) J. Find how long it would take the electron to absorb this amount of energy, and explain why your result constitutes strong evidence that there is something wrong with the classical theory. (answer check available at lightandmatter.com)

19. In a television, suppose the electrons are accelerated from rest through a voltage difference of \(10^4\) V. What is their final wavelength? (answer check available at lightandmatter.com)

20. Use the Heisenberg uncertainty principle to estimate the minimum velocity of a proton or neutron in a \(^{208}\text{Pb}\) nucleus, which has a diameter of about 13 fm (1 fm = \(10^{-15}\) m). Assume that the speed is nonrelativistic, and then check at the end whether this assumption was warranted. (answer check available at lightandmatter.com)

21. Find the energy of a particle in a one-dimensional box of length \(L\), expressing your result in terms of \(L\), the particle's mass \(m\), the number of peaks and valleys \(n\) in the wavefunction, and fundamental constants. (answer check available at lightandmatter.com)

22. A free electron that contributes to the current in an ohmic material typically has a speed of \(10^5\) m/s (much greater than the drift velocity).
(a) Estimate its de Broglie wavelength, in nm. (answer check available at lightandmatter.com)
(b) If a computer memory chip contains \(10^8\) electric circuits in a 1 \(\text{cm}^2\) area, estimate the linear size, in nm, of one such circuit. (answer check available at lightandmatter.com)
(c) Based on your answers from parts a and b, does an electrical engineer designing such a chip need to worry about wave effects such as diffraction?
(d) Estimate the maximum number of electric circuits that can fit on a 1 \(\text{cm}^2\) computer chip before quantum-mechanical effects become important.

23. In classical mechanics, an interaction energy of the form \(U(x)=\frac{1}{2}kx^2\) gives a harmonic oscillator: the particle moves back and forth at a frequency \(\omega=\sqrt{k/m}\). This form for \(U(x)\) is often a good approximation for an individual atom in a solid, which can vibrate around its equilibrium position at \(x=0\). (For simplicity, we restrict our treatment to one dimension, and we treat the atom as a single particle rather than as a nucleus surrounded by electrons.) The atom, however, should be treated quantum-mechanically, not classically. It will have a wave function. We expect this wave function to have one or more peaks in the classically allowed region, and we expect it to tail off in the classically forbidden regions to the right and left. Since the shape of \(U(x)\) is a parabola, not a series of flat steps as in figure m on page 871, the wavy part in the middle will not be a sine wave, and the tails will not be exponentials.
(a) Show that there is a solution to the Schrödinger equation of the form

\[\begin{equation*} \Psi(x)=e^{-bx^2} , \end{equation*}\]

and relate \(b\) to \(k\), \(m\), and \(\hbar\). To do this, calculate the second derivative, plug the result into the Schrödinger equation, and then find what value of \(b\) would make the equation valid for all values of \(x\). This wavefunction turns out to be the ground state. Note that this wavefunction is not properly normalized --- don't worry about that. (answer check available at lightandmatter.com)
(b) Sketch a graph showing what this wavefunction looks like.
(c) Let's interpret \(b\). If you changed \(b\), how would the wavefunction look different? Demonstrate by sketching two graphs, one for a smaller value of \(b\), and one for a larger value.
(d) Making \(k\) greater means making the atom more tightly bound. Mathematically, what happens to the value of \(b\) in your result from part a if you make \(k\) greater? Does this make sense physically when you compare with part c?

24. (a) A distance scale is shown below the wavefunctions and probability densities illustrated in figure e on page 884. Compare this with the order-of-magnitude estimate derived in subsection 13.4.4 for the radius \(r\) at which the wavefunction begins tailing off. Was the estimate on the right order of magnitude?
(b) Although we normally say the moon orbits the earth, actually they both orbit around their common center of mass, which is below the earth's surface but not at its center. The same is true of the hydrogen atom. Does the center of mass lie inside the proton, or outside it?

25. The figure shows eight of the possible ways in which an electron in a hydrogen atom could drop from a higher energy state to a state of lower energy, releasing the difference in energy as a photon. Of these eight transitions, only D, E, and F produce photons with wavelengths in the visible spectrum.
(a) Which of the visible transitions would be closest to the violet end of the spectrum, and which would be closest to the red end? Explain.
(b) In what part of the electromagnetic spectrum would the photons from transitions A, B, and C lie? What about G and H? Explain.
(c) Is there an upper limit to the wavelengths that could be emitted by a hydrogen atom going from one bound state to another bound state? Is there a lower limit? Explain.

26. Find an equation for the wavelength of the photon emitted when the electron in a hydrogen atom makes a transition from energy level \(n_1\) to level \(n_2\). (answer check available at lightandmatter.com)

27. Estimate the angular momentum of a spinning basketball, in units of \(\hbar\). Explain how this result relates to the correspondence principle.

28. Assume that the kinetic energy of an electron in the \(n=1\) state of a hydrogen atom is on the same order of magnitude as the absolute value of its total energy, and estimate a typical speed at which it would be moving. (It cannot really have a single, definite speed, because its kinetic and interaction energy trade off at different distances from the proton, but this is just a rough estimate of a typical speed.) Based on this speed, were we justified in assuming that the electron could be described nonrelativistically?

29. Before the quantum theory, experimentalists noted that in many cases, they would find three lines in the spectrum of the same atom that satisfied the following mysterious rule: \(1/\lambda_1=1/\lambda_2+1/\lambda_3\). Explain why this would occur. Do not use reasoning that only works for hydrogen --- such combinations occur in the spectra of all elements. [Hint: Restate the equation in terms of the energies of photons.]

30. The wavefunction of the electron in the ground state of a hydrogen atom is

\[\begin{equation*} \Psi = \pi^{-1/2} a^{-3/2} e^{-r/a} , \end{equation*}\]

where \(r\) is the distance from the proton, and \(a=5.3\times10^{-11}\) m is a constant that sets the size of the wave.
(a) Calculate symbolically, without plugging in numbers, the probability that at any moment, the electron is inside the proton. Assume the proton is a sphere with a radius of \(b=0.5\) fm. [Hint: Does it matter if you plug in \(r=0\) or \(r=b\) in the equation for the wavefunction?] (answer check available at lightandmatter.com)
(b) Calculate the probability numerically. (answer check available at lightandmatter.com)
(c) Based on the equation for the wavefunction, is it valid to think of a hydrogen atom as having a finite size? Can \(a\) be interpreted as the size of the atom, beyond which there is nothing? Or is there any limit on how far the electron can be from the proton?

31. Use physical reasoning to explain how the equation for the energy levels of hydrogen,

\[\begin{equation*} E_n = -\frac{mk^2e^4}{2\hbar^2}\cdot\frac{1}{n^2} , \end{equation*}\]

should be generalized to the case of an atom with atomic number \(Z\) that has had all its electrons removed except for one.

32. A muon is a subatomic particle that acts exactly like an electron except that its mass is 207 times greater. Muons can be created by cosmic rays, and it can happen that one of an atom's electrons is displaced by a muon, forming a muonic atom. If this happens to a hydrogen atom, the resulting system consists simply of a proton plus a muon.
(a) Based on the results of section 13.4.4, how would the size of a muonic hydrogen atom in its ground state compare with the size of the normal atom?
(b) If you were searching for muonic atoms in the sun or in the earth's atmosphere by spectroscopy, in what part of the electromagnetic spectrum would you expect to find the absorption lines?

33. A photon collides with an electron and rebounds from the collision at 180 degrees, i.e., going back along the path on which it came. The rebounding photon has a different energy, and therefore a different frequency and wavelength. Show that, based on conservation of energy and momentum, the difference between the photon's initial and final wavelengths must be \(2h/mc\), where \(m\) is the mass of the electron. The experimental verification of this type of “pool-ball” behavior by Arthur Compton in 1923 was taken as definitive proof of the particle nature of light. Note that we're not making any nonrelativistic approximations. To keep the algebra simple, you should use natural units --- in fact, it's a good idea to use even-more-natural-than-natural units, in which we have not just \(c=1\) but also \(h=1\), and \(m=1\) for the mass of the electron. You'll also probably want to use the relativistic relationship \(E^2-p^2=m^2\), which becomes \(E^2-p^2=1\) for the energy and momentum of the electron in these units.

34. Generalize the result of problem 33 to the case where the photon bounces off at an angle other than 180° with respect to its initial direction of motion.

35. On page 871 we derived an expression for the probability that a particle would tunnel through a rectangular barrier, i.e., a region in which the interaction energy \(U(x)\) has a graph that looks like a rectangle. Generalize this to a barrier of any shape. [Hints: First try generalizing to two rectangular barriers in a row, and then use a series of rectangular barriers to approximate the actual curve of an arbitrary function \(U(x)\). Note that the width and height of the barrier in the original equation occur in such a way that all that matters is the area under the \(U\)-versus-\(x\) curve. Show that this is still true for a series of rectangular barriers, and generalize using an integral.] If you had done this calculation in the 1930's you could have become a famous physicist.

36. Show that the wavefunction given in problem 30 is properly normalized.

37. Show that a wavefunction of the form \(\Psi = e^{by} \sin ax\) is a possible solution of the Schrödinger equation in two dimensions, with a constant potential. Can we tell whether it would apply to a classically allowed region, or a classically forbidden one?

38. Find the energy levels of a particle in a three-dimensional rectangular box with sides of length \(a\), \(b\), and \(c\). (answer check available at lightandmatter.com)

39. Americium-241 is an artificial isotope used in smoke detectors. It undergoes alpha decay, with a half-life of 432 years. As discussed in example 18 on page 872, alpha decay can be understood as a tunneling process, and although the barrier is not rectangular in shape, the equation for the tunneling probability on page 872 can still be used as a rough guide to our thinking. For americium-241, the tunneling probability is about \(1\times10^{-29}\). Suppose that this nucleus were to decay by emitting a tritium (hydrogen-3) nucleus instead of an alpha particle (helium-4). Estimate the relevant tunneling probability, assuming that the total energy \(E\) remains the same. This higher probability is contrary to the empirical observation that this nucleus is not observed to decay by tritium emission with any significant probability, and in general tritium emission is almost unknown in nature; this is mainly because the tritium nucleus is far less stable than the helium-4 nucleus, and the difference in binding energy reduces the energy available for the decay.

40. As far as we know, the mass of the photon is zero. However, it's not possible to prove by experiments that anything is zero; all we can do is put an upper limit on the number. As of 2008, the best experimental upper limit on the mass of the photon is about \(1\times 10^{-52}\) kg. Suppose that the photon's mass really isn't zero, and that the value is at the top of the range that is consistent with the present experimental evidence. In this case, the \(c\) occurring in relativity would no longer be interpreted as the speed of light. As with material particles, the speed \(v\) of a photon would depend on its energy, and could never be as great as \(c\). Estimate the relative size \((c-v)/c\) of the discrepancy in speed, in the case of a photon with a frequency of 1 kHz, lying in the very low frequency radio range. (solution in the PDF version of the book)

41. Hydrogen is the only element whose energy levels can be expressed exactly in an equation. Calculate the ratio \(\lambda_E/\lambda_F\) of the wavelengths of the transitions labeled E and F in problem 25 on p. 900. Express your answer as an exact fraction, not a decimal approximation. In an experiment in which atomic wavelengths are being measured, this ratio provides a natural, stringent check on the precision of the results. (answer check available at lightandmatter.com)

42. Give a numerical comparison of the number of photons per second emitted by a hundred-watt FM radio transmitter and a hundred-watt lightbulb. (answer check available at lightandmatter.com)

43. On pp. 886-887 of subsection 13.4.4, we used simple algebra to derive an approximate expression for the energies of states in hydrogen, without having to explicitly solve the Schrödinger equation. As input to the calculation, we used the proportionality \(U \propto r^{-1}\), which is a characteristic of the electrical interaction. The result for the energy of the \(n\)th standing wave pattern was \(E_n \propto n^{-2}\).

There are other systems of physical interest in which we have \(U \propto r^k\) for values of \(k\) besides \(-1\). Problem 23 discusses the ground state of the harmonic oscillator, with \(k=2\) (and a positive constant of proportionality). In particle physics, systems called charmonium and bottomonium are made out of pairs of subatomic particles called quarks, which interact according to \(k=1\), i.e., a force that is independent of distance. (Here we have a positive constant of proportionality, and \(r>0\) by definition. The motion turns out not to be too relativistic, so the Schrödinger equation is a reasonable approximation.) The figure shows actual energy levels for these three systems, drawn with different energy scales so that they can all be shown side by side. The sequence of energies in hydrogen approaches a limit, which is the energy required to ionize the atom. In charmonium, only the first three levels are known.8

Generalize the method used for \(k=-1\) to any value of \(k\), and find the exponent \(j\) in the resulting proportionality \(E_n \propto n^j\). Compare the theoretical calculation with the behavior of the actual energies shown in the figure. Comment on the limit \(k\rightarrow\infty\). (answer check available at lightandmatter.com)

44. The electron, proton, and neutron were discovered, respectively, in 1897, 1919, and 1932. The neutron was late to the party, and some physicists felt that it was unnecessary to consider it as fundamental. Maybe it could be explained as simply a proton with an electron trapped inside it. The charges would cancel out, giving the composite particle the correct neutral charge, and the masses at least approximately made sense (a neutron is heavier than a proton).
(a) Given that the diameter of a proton is on the order of \(10^{-15}\ \text{m}\), use the Heisenberg uncertainty principle to estimate the trapped electron's minimum momentum. (answer check available at lightandmatter.com)
(b) Find the electron's minimum kinetic energy. (answer check available at lightandmatter.com)
(c) Show via \(E=mc^2\) that the proposed explanation fails, because the contribution to the neutron's mass from the electron's kinetic energy would be many orders of magnitude too large.

45. Suppose that an electron, in one dimension, is confined to a certain region of space so that its wavefunction is given by

\[\begin{equation*} \Psi = \begin{cases} 0 & \text{if } x\lt0 \\ A \sin(2\pi x/L) & \text{if } 0\le x\le L \\ 0 & \text{if } x>L \end{cases} \end{equation*}\]

Determine the constant \(A\) from normalization. (answer check available at lightandmatter.com)

46. In the following, \(x\) and \(y\) are variables, while \(u\) and \(v\) are constants. Compute (a) \(\partial(ux\ln (vy))/\partial x\), (b) \(\partial(ux\ln (vy))/\partial y\). (answer check available at lightandmatter.com)

47. (a) A radio transmitter radiates power \(P\) in all directions, so that the energy spreads out spherically. Find the energy density at a distance \(r\). (answer check available at lightandmatter.com)
(b) Let the wavelength be \(\lambda\). As described in example 8 on p. 844, find the number of photons in a volume \(\lambda^3\) at this distance \(r\). (answer check available at lightandmatter.com)
(c) For a 1000 kHz AM radio transmitting station, assuming reasonable values of \(P\) and \(r\), verify, as claimed in the example, that the result from part b is very large.

Exercises

Exercise A: Quantum Versus Classical Randomness

1. Imagine the classical version of the particle in a one-dimensional box. Suppose you insert the particle in the box and give it a known, predetermined energy, but a random initial position and a random direction of motion. You then pick a random later moment in time to see where it is. Sketch the resulting probability distribution by shading on top of a line segment. Does the probability distribution depend on energy?

2. Do similar sketches for the first few energy levels of the quantum mechanical particle in a box, and compare with 1.

3. Do the same thing as in 1, but for a classical hydrogen atom in two dimensions, which acts just like a miniature solar system. Assume you're always starting out with the same fixed values of energy and angular momentum, but a position and direction of motion that are otherwise random. Do this for \(L=0\), and compare with a real \(L=0\) probability distribution for the hydrogen atom.

4. Repeat 3 for a nonzero value of \(L\), say \(L=\hbar\).

5. Summarize: Are the classical probability distributions accurate? What qualitative features are possessed by the classical diagrams but not by the quantum mechanical ones, or vice-versa?

(c) 1998-2013 Benjamin Crowell, licensed under the Creative Commons Attribution-ShareAlike license. Photo credits are given at the end of the Adobe Acrobat version.

Footnotes
[1] This is under the assumption that all the uranium atoms were created at the same time. In reality, we have only a general idea of the processes that might have created the heavy elements in the nebula from which our solar system condensed. Some portion of them may have come from nuclear reactions in supernova explosions in that particular nebula, but some may have come from previous supernova explosions throughout our galaxy, or from exotic events like collisions of white dwarf stars.
[2] What I'm presenting in this chapter is a simplified explanation of how the photon could have been discovered. The actual history is more complex. Max Planck (1858-1947) began the photon saga with a theoretical investigation of the spectrum of light emitted by a hot, glowing object. He introduced quantization of the energy of light waves, in multiples of \(hf\), purely as a mathematical trick that happened to produce the right results. Planck did not believe that his procedure could have any physical significance. In his 1905 paper Einstein took Planck's quantization as a description of reality, and applied it to various theoretical and experimental puzzles, including the photoelectric effect. Millikan then subjected Einstein's ideas to a series of rigorous experimental tests. Although his results matched Einstein's predictions perfectly, Millikan was skeptical about photons, and his papers conspicuously omit any reference to them. Only in his autobiography did Millikan rewrite history and claim that he had given experimental proof for photons.
[3] But note that along the way, we had to make two crucial assumptions: that the wave was sinusoidal, and that it was a plane wave. These assumptions will not prevent us from describing examples such as double-slit diffraction, in which the wave is approximately sinusoidal within some sufficiently small region such as one pixel of a camera's imaging chip. Nevertheless, these issues turn out to be symptoms of deeper problems, beyond the scope of this book, involving the way in which relativity and quantum mechanics should be combined. As a taste of the ideas involved, consider what happens when a photon is reflected from a conducting surface, as in example 23 on p. 701, so that the electric field at the surface is zero, but the magnetic field isn't. The superposition is a standing wave, not a plane wave, so \(|\mathbf{E}|=c|\mathbf{B}|\) need not hold, and doesn't. A detector's probability of detecting a photon near the surface could be zero if the detector sensed electric fields, but nonzero if it sensed magnetism. It doesn't make sense to say that either of these is the probability that the photon “was really there.”
[4] This interpretation of quantum mechanics is called the Copenhagen interpretation, because it was originally developed by a school of physicists centered in Copenhagen and led by Niels Bohr.
[5] This interpretation, known as the many-worlds interpretation, was developed by Hugh Everett in 1957.
[6] See page 891 for a note about the two different systems of notations that are used for quantum numbers.
[7] After f, the series continues in alphabetical order. In nuclei that are spinning rapidly enough that they are almost breaking apart, individual protons and neutrons can be stirred up to \(\ell\) values as high as 7, which is j.
[8] See Barnes et al., “The XYZs of Charmonium at BES,” arxiv.org/abs/hep-ph/0608103. To avoid complication, the levels shown are only those in the group known for historical reasons as the \(\Psi\) and \(J/\Psi\).