The creation of quantum mechanics in the 1920s broke open the gates to understanding atoms, molecules, and the structure of materials. This new knowledge transformed our world. Within two decades, quantum mechanics led to the invention of the transistor to be followed by the invention of the laser, revolutionary advances in semiconductor electronics, integrated circuits, medical diagnostics, and optical communications. Quantum mechanics also transformed physics because it profoundly changed our understanding of how to ask questions of nature and how to interpret the answers. An intellectual change of this magnitude does not come easily. The founders of quantum mechanics struggled with its concepts and passionately debated them. We are the beneficiaries of that struggle and quantum mechanics has now been developed into an elegant and coherent discipline. Nevertheless, quantum mechanics always seems strange on first acquaintance and certain aspects of it continue to generate debate today. We hope that this unit provides insight into how quantum mechanics works and why people find it so strange at first. We will also sketch some of the recent developments that have enormously enhanced our powers for working in the quantum world. These advances make it possible to manipulate and study quantum systems with a clarity previously achieved only in hypothetical thought experiments. They are so dramatic that some physicists have described them as a second quantum revolution.
An early step in the second quantum revolution was the discovery of how to capture and manipulate a single ion in an electromagnetic trap, reduce its energy to the quantum limit, and even watch the ion by eye as it fluoresces. Figure 1 shows an array of fluorescing ions in a trap. Then methods were discovered for cooling atoms to microkelvin temperatures (a microkelvin is a millionth of a degree) and trapping them in magnetic fields or with light waves (Figure 2). These advances opened the way to stunning advances such as the observation of Bose-Einstein condensation of atoms, to be discussed in Unit 6, and the creation of a new discipline that straddles atomic and condensed matter physics.
The goal of this unit is to convey the spirit of life in the quantum world—that is, to give an idea of what quantum mechanics is and how it works—and to describe two events in the second quantum revolution: atom cooling and atomic clocks.
The nature of light was a profound mystery from the earliest stirrings of science until the 1860s and 1870s, when James Clerk Maxwell developed and published his electromagnetic theory. By joining the two seemingly disparate phenomena, electricity and magnetism, into the single concept of an electromagnetic field, Maxwell’s theory showed that waves in the field travel at the speed of light and are, in fact, light itself. Today, most physicists regard Maxwell’s theory as among the most important and beautiful theories in all of physics.
Maxwell’s theory is elegant because it can be expressed by a short set of equations. It is powerful because it leads to powerful predictions—for instance, the existence of radio waves and, for that matter, the entire electromagnetic spectrum from radio waves to x-rays. Furthermore, the theory explained how light can be created and absorbed, and provided a key to essentially every question in optics.
Given the beauty, elegance, and success of Maxwell’s theory of light, it is ironic that the quantum age, in which many of the most cherished concepts of physics had to be recast, was actually triggered by a problem involving light.
The spectrum of light from a blackbody—for instance the oven in Figure 3 or the filament of an electric light bulb—contains a broad spread of wavelengths. The spectrum varies rapidly with the temperature of the body. As a filament is heated, the faint red glow of warm metal grows brighter, and the spectrum broadens, its peak shifting to shorter wavelengths, from orange to yellow and then toward blue. The spectra of radiation from blackbodies at different temperatures have identical shapes and differ only in the scales of the axes.
Enter the quantum
In the final years of the 19th century, physicists attempted to understand the spectrum of blackbody radiation, but theory kept giving absurd results. German physicist Max Planck finally succeeded in calculating the spectrum in December 1900. However, he had to make what he could regard only as a preposterous hypothesis. According to Maxwell’s theory, radiation from a blackbody is emitted and absorbed by charged particles moving in the walls of the body, for instance by electrons in a metal. Planck modeled the electrons as charged particles held by fictitious springs. A particle moving under a spring force behaves like a harmonic oscillator. Planck found he could calculate the observed spectrum if he hypothesized that the energy of each harmonic oscillator could change only by discrete steps. If the frequency of the oscillator is ν (ν is the Greek letter “nu” and is often used to stand for frequency), then the energy had to be 0, hν, 2hν, 3hν, … nhν, …, where n could be any integer and h is a constant that soon became known as Planck’s constant. Planck named the step hν a quantum of energy. The blackbody spectrum Planck obtained by invoking his quantum hypothesis agreed beautifully with experiment. But the quantum hypothesis seemed so absurd to Planck that he hesitated to talk about it.
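Planck’s quantitative result is worth recording. The unit does not derive it, but the spectrum that follows from the quantum hypothesis is the standard Planck law, stated here for reference (it is not part of the original text):

\[ u(\nu, T) = \frac{8\pi h \nu^3}{c^3}\,\frac{1}{e^{h\nu/k_B T} - 1} \]

Here u(ν, T) is the radiation energy per unit volume per unit frequency and k_B is Boltzmann’s constant. The exponential factor, a direct consequence of the quantized energy steps hν, suppresses the high-frequency radiation that classical theory had absurdly predicted to grow without bound.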
The physical dimension—the unit—of Planck’s constant h is interesting. It is either [energy] / [frequency] or [angular momentum]. Both of these dimensions have important physical interpretations. The constant’s value in S.I. units, 6.6 × 10⁻³⁴ joule-seconds, suggests the enormous distance between the quantum world and everyday events.
Planck’s constant is ubiquitous in quantum physics. The combination h/2π appears so often that it has been given its own symbol, ℏ, read “h-bar.”
For five years, the quantum hypothesis had little impact. But in 1905, in what came to be called his miracle year, Swiss physicist Albert Einstein published a theory that proposed a quantum hypothesis from a totally different point of view. Einstein pointed out that, although Maxwell’s theory was wonderfully successful in explaining the known phenomena of light, these phenomena involved light waves interacting with large bodies. Nobody knew how light behaved on the microscopic scale—with individual electrons or atoms, for instance. Then, by a subtle analysis based on the analogy of certain properties of blackbody radiation with the behavior of a gas of particles, he concluded that electromagnetic energy itself must be quantized in units of hν. Thus, the light energy in a radiation field obeyed the same quantum law that Planck proposed for his fictitious mechanical oscillators; but Einstein’s quantum hypothesis did not involve hypothetical oscillators.
An experimental test of the quantum hypothesis
Whereas Planck’s theory led to no experimental predictions, Einstein’s theory did. When light hits a metal, electrons can be ejected, a phenomenon called the photoelectric effect. According to Einstein’s hypothesis, the energy absorbed by each electron had to come in bundles of light quanta. The minimum energy an electron could extract from the light beam is one quantum, hν. A certain amount of energy, W, is needed to remove electrons from a metal; otherwise they would simply flow out. So, Einstein predicted that the maximum kinetic energy of a photoelectron, E, had to be given by the equation E=hν – W.
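As a concrete illustration, a few lines of Python evaluate Einstein’s relation. The numbers are illustrative choices, not taken from the text: 400 nm light and a work function of 2.3 eV, roughly that of sodium.

```python
# Photoelectric effect: E = h*nu - W (Einstein, 1905)
h = 6.626e-34          # Planck's constant, J*s
c = 2.998e8            # speed of light, m/s
eV = 1.602e-19         # joules per electron volt

wavelength = 400e-9    # violet light, m (illustrative value)
W = 2.3 * eV           # work function, J (roughly sodium; assumed here)

nu = c / wavelength    # frequency of the light
E_photon = h * nu      # energy of one quantum (photon)
E_max = E_photon - W   # maximum photoelectron kinetic energy

print(f"photon energy: {E_photon / eV:.2f} eV")    # ~3.10 eV
print(f"max kinetic energy: {E_max / eV:.2f} eV")  # ~0.80 eV
```

Notice that the light’s intensity appears nowhere in the relation.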
The prediction is certainly counterintuitive, for Einstein predicted that E would depend only on the frequency of light, not on the light’s intensity. The American physicist Robert A. Millikan set out to prove experimentally that Einstein must be wrong. By a series of painstaking experiments, however, Millikan convinced himself that Einstein must be right.
The quantum of light energy is called a photon. A photon possesses energy hν, and it carries momentum hν/c, where c is the speed of light. Photons are particle-like because they carry discrete energy and momentum. They are relativistic because they always travel at the speed of light and consequently can possess momentum even though they are massless.
Although the quantum hypothesis solved the problem of blackbody radiation, Einstein’s concept of a light quantum—a particle-like bundle of energy—ran counter to common sense because it raised a profoundly troubling question: Does light consist of waves or particles? As we will show, answering this question required a revolution in physics. The issue was so profound that we should devote the next section to reviewing just what we mean by a wave and what we mean by a particle.
A particle is an object so small that its size is negligible; a wave is a periodic disturbance in a medium. These two concepts are so different that one can scarcely believe that they could be confused. In quantum physics, however, they turn out to be deeply intertwined and fundamentally inseparable.
The electron provides an ideal example of a particle because no attempt to measure its size has yielded a value different from zero. Clearly, an electron is small compared to an atom, while an atom is small compared to, for instance, a marble. In the night sky, the tiny points of starlight appear to come from luminous particles, and for many purposes we can treat stars as particles that interact gravitationally. It is evident that “small” is a relative term. Nevertheless, the concept of a particle is generally clear.
The essential properties of a particle are its mass, m; and, if it is moving with velocity v, its momentum, mv; and its kinetic energy, (1/2)mv². The energy of a particle remains localized, like the energy of a bullet, until it hits something. One could say, without exaggeration, that nothing could be simpler than a particle.
A wave is a periodic disturbance in a medium. Water waves are the most familiar example (we talk here about gentle waves, like ripples on a pond, not the breakers loved by surfers); but there are numerous other kinds, including sound waves (periodic oscillations of pressure in the air), light waves (periodic oscillations in the electromagnetic field), and the yet-to-be-detected gravitational waves (periodic oscillations in the gravitational field). The nature of the amplitude, or height of the wave, depends on the medium, for instance the pressure of air in a sound wave, the actual height in a water wave, or the electric field in a light wave. However, every wave is characterized by its wavelength λ (λ is the Greek letter “lambda”), the distance from one crest to the next; its frequency ν (ν is the Greek letter “nu”), the number of cycles or oscillations per second; and its velocity v, the distance a given crest moves in a second. This distance is the product of the number of oscillations the wave undergoes in a second and the wavelength.
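That last sentence is the fundamental relation for waves. Written out, with a quick numerical check (the example values are ours):

\[ v = \lambda \nu \]

For concert-pitch sound, ν = 440 Hz and v ≈ 343 m/s, so λ ≈ 0.78 m; for green light, λ ≈ 500 nm and v = c = 3 × 10⁸ m/s, so ν ≈ 6 × 10¹⁴ Hz.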
The energy in a wave spreads like the ripples traveling outward in Figure 7. A surprising property of waves is that they pass freely through each other: as they cross, their displacements simply add. The wave fronts retain their circular shape as if the other wave were not there. However, at the intersections of the circles marking the wave crests, the amplitudes add, producing a bright image. In between, the positive displacement of one wave is canceled by the negative displacement of the other. This phenomenon, called interference, is a fundamental property of waves. Interference constitutes a characteristic signature of wave phenomena.
If a system is constrained, for instance if the medium is a guitar string that is fixed at either end, the energy cannot simply propagate away. As a result, the pattern is fixed in space and it oscillates in time. Such a wave is called a standing wave.
Far from their source, in three dimensions, the wave fronts of a disturbance behave like equally spaced planes, and the waves are called plane waves. If plane waves pass through a slit, the emerging wave does not form a perfect beam but spreads, or diffracts, as in Figure 10. This may seem contrary to experience: light is composed of waves, yet light does not seem to spread but rather appears to travel in straight lines. The reason is that in everyday experience, light beams are formed by apertures that are many wavelengths wide. A 1 millimeter aperture, for instance, is about 2,000 wavelengths wide. In such a situation, diffraction is weak and spreading is negligible. However, if the slit is about a wavelength across, the emerging disturbance is not a sharp beam but a rapidly spreading wave, as in Figure 10. To see light diffract, one must use very narrow slits.
If a plane wave passes through two nearby slits, the emerging beams can overlap and interfere. The points of interference depend only on the geometry and are fixed in space. The constructive interference creates a region of brightness, while destructive interference produces darkness. As a result, the photograph of light from two slits reveals bright and dark fringes, called “interference fringes.” An example of two-slit interference is shown in Figure 11.
The paradox emerges
Diffraction, interference, and in fact all of the phenomena of light can be explained by the wave theory of light, Maxwell’s theory. Consequently, there can be no doubt that light consists of waves. However, in Section 2 we described Einstein’s conjecture that light consists of particle-like bundles of energy, and explained that the photoelectric effect provides experimental evidence that this is true. A single phenomenon that displays contrary properties creates a paradox.
Is it possible to reconcile these two descriptions? One might argue that the bundles of light energy are so small that their discreteness is unimportant. For instance, a one-watt light source, which is quite dim, emits over 10¹⁸ photons per second. The numbers of photons captured in visual images or in digital camera images are almost astronomically large. One photon more or less would never make a difference. However, we will show examples where wave-like behavior is displayed by single particles. We will return to the wave-particle paradox later.
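The arithmetic behind that estimate is short. Assuming a representative visible wavelength of 500 nm (our choice):

\[ N = \frac{P}{h\nu} = \frac{P\lambda}{hc} = \frac{(1\ \mathrm{W})(500\times10^{-9}\ \mathrm{m})}{(6.6\times10^{-34}\ \mathrm{J\,s})(3\times10^{8}\ \mathrm{m/s})} \approx 2.5\times10^{18}\ \text{photons per second} \]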
Early in the 20th century, it was known that everyday matter consists of atoms and that atoms contain positive and negative charges. Furthermore, each type of atom, that is, each element, has a unique spectrum—a pattern of wavelengths the atom radiates or absorbs if sufficiently heated. A particularly important spectrum, the spectrum of atomic hydrogen, is shown in Figure 12. The art of measuring the wavelengths, spectroscopy, had been highly developed, and scientists had generated enormous quantities of precise data on the wavelengths of light emitted or absorbed by atoms and molecules.
In spite of the elegance of spectroscopic measurement, it must have been uncomfortable for scientists to realize that they knew essentially nothing about the structure of atoms, much less why they radiate and absorb certain colors of light. Solving this puzzle ultimately led to the creation of quantum mechanics, but the task took about 20 years.
The nuclear atom
In 1911, Ernest Rutherford took a major step in unraveling the mystery of matter: he realized that most of the mass of an atom is located in a tiny volume—a nucleus—at the center of the atom. The positively charged nucleus is surrounded by the negatively charged electrons. Rutherford was forced reluctantly to accept a planetary model of the atom in which electrons, electrically attracted to the nucleus, fly around the nucleus like planets gravitationally attracted to a star. However, the planetary model gave rise to a dilemma. According to Maxwell’s theory of light, circling electrons radiate energy. The electrons would generate light at ever-higher frequencies as they spiraled inward to the nucleus. The spectrum would be broad, not sharp. More importantly, the atom would collapse as the electrons crashed into the nucleus. Rutherford’s discovery threatened to become a crisis for physics.
The Bohr model of hydrogen
Niels Bohr, a young scientist from Denmark, happened to be visiting Rutherford’s laboratory and became intrigued by the planetary atom dilemma. Shortly after returning home Bohr proposed a solution so radical that even he could barely believe it. However, the model gave such astonishingly accurate results that it could not be ignored. His 1913 paper on what became known as the “Bohr model” of the hydrogen atom opened the path to the creation of quantum mechanics.
Bohr proposed that—contrary to all the rules of classical physics—hydrogen atoms exist only in certain fixed energy states, called stationary states. Occasionally, an atom somehow jumps from one state to another by radiating the energy difference. If an atom jumps from state b with energy Eb to state a with lower energy Ea, it radiates light at the frequency ν given by hν = Eb − Ea. Today, we would say that the atom emits a photon when it makes a quantum jump. The reverse is possible: An atom in a lower energy state can absorb a photon with the correct energy and make a transition to the higher state. Each energy state would be characterized by an integer, now called a quantum number, with the lowest energy state described by n = 1.
Bohr’s ideas were so revolutionary that they threatened to upset all of physics. However, the theories of physics, which we now call “classical physics,” were well tested and could not simply be dismissed. So, to connect his wild proposition with reality, Bohr introduced an idea that he later named the Correspondence Principle. This principle holds that there should be a smooth transition between the quantum and classical worlds. More precisely, in the limit of large energy state quantum numbers, atomic systems should display classical-like behavior. For example, the jump from a state with quantum number n = 100 to the state n = 99 should give rise to radiation at the frequency of an electron circling a proton with approximately the energy of those states. With these ideas, and using only the measured values of a few fundamental constants, Bohr calculated the spectrum of hydrogen and obtained astonishing agreement with observations.
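Bohr’s calculation is not reproduced in this unit, but its celebrated result is standard and worth stating (a sketch, not quoted from the original):

\[ E_n = -\frac{m_e e^4}{8\epsilon_0^2 h^2}\,\frac{1}{n^2} = -\frac{13.6\ \mathrm{eV}}{n^2}, \qquad n = 1, 2, 3, \ldots \]

A jump from state b to state a then radiates at ν = (Eb − Ea)/h, which reproduces the observed hydrogen lines, including the spectrum shown in Figure 12.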
Bohr understood very well that his theory contained too many radical assumptions to be intellectually satisfying. Furthermore, it left numerous questions unanswered, such as why atoms make quantum jumps. The fundamental success of Bohr’s model of hydrogen was to signal the need to replace classical physics with a totally new theory. The theory should be able to describe behavior at the microscopic scale—atomic behavior—but it should also be in harmony with classical physics, which works well in the world around us.
Matter waves
By the end of the 1920s, Bohr’s vision of a new theory was fulfilled by the creation of quantum mechanics, which turned out to be strange and even disturbing.
A key idea in the development of quantum mechanics came from the French physicist Louis de Broglie. In his doctoral thesis in 1924, de Broglie suggested that if waves can behave like particles, as Einstein had shown, then one might expect that particles can behave like waves. He proposed that a particle with momentum p should be associated with a wave of wavelength λ = h/p, where, as usual, h stands for Planck’s constant. The question “Waves of what?” was left unanswered.
De Broglie’s hypothesis was not limited to simple particles such as electrons. Any system with momentum p, for instance an atom, should behave like a wave with its particular de Broglie wavelength. The proposal must have seemed absurd because in the entire history of science, nobody had ever seen anything like a de Broglie wave. The reason is simple: Planck’s constant is so small that the de Broglie wavelength of observable everyday objects is much too small to be noticeable. But for an electron in hydrogen, for instance, the de Broglie wavelength is about the size of the atom.
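The point about scale is easy to check numerically. A minimal sketch in Python (the masses and speeds are illustrative choices, not values from the text):

```python
# De Broglie wavelength: lambda = h / p = h / (m * v)
h = 6.626e-34   # Planck's constant, J*s

def de_broglie_wavelength(mass_kg, speed_m_per_s):
    """Return the de Broglie wavelength in meters."""
    return h / (mass_kg * speed_m_per_s)

# An electron at its typical speed in a hydrogen atom (~2.2e6 m/s):
print(de_broglie_wavelength(9.11e-31, 2.2e6))  # ~3.3e-10 m, about atomic size

# A thrown baseball (0.145 kg at 40 m/s):
print(de_broglie_wavelength(0.145, 40.0))      # ~1.1e-34 m, hopelessly unobservable
```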
Today, de Broglie waves are familiar in physics. For example, the diffraction of particles through a series of slits (see Figure 14) looks exactly like the interference pattern expected for a light wave through a series of slits. The signal, however, is that of a matter wave—the wave of a stream of sodium molecules. The calculated curve (solid line) is the interference pattern for a wave with the de Broglie wavelength of sodium molecules, which are diffracted by slits with the measured dimensions. The experimental points are the counts from an atom (or molecule) detector. The stream of particles behaves exactly like a wave.
The concept of a de Broglie wave raises troubling issues. For instance, for de Broglie waves one must ask: Waves of what? Part of the answer is provided in the two-slit interference data in Figure 15. The particles in this experiment are electrons. Because the detector is so sensitive, the position of every single electron can be recorded with high efficiency. Panel (a) displays only eight electrons, and they appear to be randomly scattered. The points in panels (b) and (c) also appear to be randomly scattered. Panel (d) displays 60,000 points, and these are far from randomly distributed. In fact, the image is a traditional two-slit interference pattern. This suggests that the probability that an electron arrives at a given position is proportional to the intensity of the interference pattern there. It turns out that this suggestion provides a useful interpretation of a quantum wavefunction: The probability of finding a particle at a given position is proportional to the intensity of its wavefunction there, that is, to the square of the wavefunction.
As we saw in the previous section, there is strong evidence that atoms can behave like waves. So, we shall take the wave nature of atoms as a fact and turn to the questions of how matter waves behave and what they mean.
Mathematically, waves are described by solutions to a differential equation called the “wave equation.” In 1925, the Austrian physicist Erwin Schrödinger reasoned that since particles can behave like waves, there must be a wave equation for particles. He traveled to a quiet mountain lodge to discover the equation; and after a few weeks of thinking and skiing, he succeeded. Schrödinger’s equation opened the door to the quantum world, not only answering the many paradoxes that had arisen, but also providing a method for calculating the structure of atoms, molecules, and solids, and for understanding the structure of all matter. Schrödinger’s creation, called wave mechanics, precipitated a genuine revolution in science. Almost simultaneously, a totally different formulation of quantum theory was created by Werner Heisenberg: matrix mechanics. The two theories looked different but turned out to be fundamentally equivalent. Often, they are simply referred to as “quantum mechanics.” Heisenberg was awarded the Nobel Prize in 1932 for the creation of quantum mechanics, and Schrödinger shared the 1933 prize with Paul Dirac.
In wave mechanics, our knowledge about a system is embodied in its wavefunction. A wavefunction is the solution to Schrödinger’s equation that fits the particular circumstances. For instance, one can speak of the wavefunction for a particle moving freely in space, or an electron bound to a proton in a hydrogen atom, or a mass moving under the spring force of a harmonic oscillator.
To get some insight into the quantum description of nature, let’s consider a mass M, moving in one dimension, bouncing back and forth between two rigid walls separated by distance L. We will refer to this idealized one-dimensional system as a particle in a box. The wavefunction must vanish outside the box because the particle can never be found there. Physical waves cannot jump abruptly, so the wavefunction must smoothly approach zero at either end of the box. Consequently, the box must contain an integral number of half-wavelengths of the particle’s de Broglie wave. Thus, the de Broglie wavelength λ must obey nλ/2 = L, where L is the length of the box and n = 1, 2, 3… . The integer n is called the quantum number of the state. Once we know the de Broglie wavelength, we also know the particle’s momentum and energy, as the short calculation below shows.
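The calculation takes only a line or two. From de Broglie’s relation p = h/λ and the condition nλ/2 = L (a standard reconstruction; the original unit’s math sidebar is not reproduced here):

\[ \lambda_n = \frac{2L}{n}, \qquad p_n = \frac{h}{\lambda_n} = \frac{nh}{2L}, \qquad E_n = \frac{p_n^2}{2M} = \frac{n^2 h^2}{8ML^2}, \qquad n = 1, 2, 3, \ldots \]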
The mere existence of matter waves suggests that in any confined system, the energy can have only certain discrete values; that is, the energy is quantized. The minimum energy is called the ground state energy. For the particle in the box, the ground state energy is h²/8ML². The energy of the higher-lying states increases as n². For a harmonic oscillator, it turns out that the energy levels are equally spaced, and the allowed energies increase linearly with n. For a hydrogen atom, the energy levels are found to get closer and closer together as n increases, varying as 1/n².
If this is your first encounter with quantum phenomena, you may be confused as to what the wavefunction means and what connection it could have with the behavior of a particle. Before discussing the interpretation, it will be helpful to look at the wavefunction for a system slightly more interesting than a particle in a box.
The harmonic oscillator
In free space, where there are no forces, the momentum and kinetic energy of a particle are constant. In most physically interesting situations, however, a particle experiences a force. A harmonic oscillator is a particle moving under the influence of a spring force as shown in Figure 18. The spring force is proportional to how far the spring is stretched or compressed away from its equilibrium position, and the particle’s potential energy is proportional to that distance squared. Because energy is conserved, the total energy, E = K + V, is constant. These relations are shown in the energy diagram in Figure 18.
The energy diagram in Figure 18 is helpful in understanding both classical and quantum behavior. Classically, the particle moves between the two extremes (-a, a) shown in the drawing. The extremes are called “turning points” because the direction of motion changes there. The particle comes to momentary rest at a turning point, the kinetic energy vanishes, and the potential energy is equal to the total energy. When the particle passes the origin, the potential energy vanishes, and the kinetic energy is equal to the total energy. Consequently, as the particle moves back and forth, its momentum oscillates between zero and its maximum value.
Solutions to Schrödinger’s equation for the harmonic oscillator show that the energy is quantized, as we expect for a confined system, and that the allowed states are given by En = (n + 1/2)hν, where ν is the frequency of the oscillator and n = 0, 1, 2… . The energy levels are separated by hν, as Planck had conjectured, but the system has a ground state energy of 1/2 hν, which Planck could not have known about. The harmonic oscillator energy levels are evenly spaced, as shown in Figure 19.
What does the wavefunction mean?
If we measure the position of the mass, for instance by taking a flash photograph of the oscillator with a meter stick in the background, we do not always get the same result. Even under ideal conditions, which include eliminating thermal fluctuations by working at zero temperature, the mass would still jitter due to its zero point energy. However, if we plot the results of successive measurements, we find that they start to look reasonably orderly. In particular, the fraction of the measurements for which the mass is in some interval, s, is proportional to the area of the strip of width s lying under the curve in Figure 20, shown in blue. This curve is called a probability distribution curve. Since the probability of finding the mass somewhere is unity, the height of the curve must be chosen so that the area under the curve is 1. With this convention, the probability of finding the mass in the interval s is equal to the area of the shaded strip. It turns out that the probability distribution is simply the wavefunction squared.
Here, we have a curious state of affairs. In classical physics, if one knows the state of a system, for instance the position and speed of a marble at rest, one can predict the result of future measurements as precisely as one wishes. In quantum mechanics, however, the harmonic oscillator cannot be truly at rest: The closest it can come is the ground state energy 1/2 hν. Furthermore, we cannot predict the precise result of measurements, only the probability that a measurement will give a result in a given range. Such a probabilistic theory was not easy to accept at first. In fact, Einstein never accepted it.
Aside from its probabilistic interpretation, Figure 20 portrays a situation that could hardly be less like what we expect from classical physics. A classical harmonic oscillator moves fastest near the origin and spends most of its time near the turning points, where it slows down and reverses direction. Figure 20 suggests the contrary: The most likely place to find the mass is at the origin, where it is moving fastest. However, there is an even more bizarre aspect to the quantum solution: The wavefunction extends beyond the turning points. This means that in a certain fraction of measurements, the mass will be found in a place where it could never go if it obeyed the classical laws. The penetration of the wavefunction into the classically forbidden region gives rise to a purely quantum phenomenon called tunneling. If the energy barrier is not too high, for instance if the energy barrier is a thin layer of insulator in a semiconductor device, then a particle can pass from one classically allowed region to another, tunneling through a region that is classically forbidden.
The quantum description of a harmonic oscillator starts to look a little more reasonable for higher-lying states. For instance, the wavefunction and probability distribution for the state n = 10 are shown in Figure 21.
Although the n = 10 state shown in Figure 21 may look weird, it shows some similarities to classical behavior. The mass is most likely to be observed near a turning point and least likely to be seen near the origin, as we expect. Furthermore, the fraction of time it spends outside of the turning points is much less than in the ground state. Aside from these clues, however, the quantum description appears to have no connection to the classical description of a mass oscillating in a real harmonic oscillator. We turn next to showing that such a connection actually exists.
The idea of the position of an object seems so obvious that the concept of position is generally taken for granted in classical physics. Knowing the position of a particle means knowing the values of its coordinates in some coordinate system. The precision of those values, in classical physics, is limited only by our skill in measuring. In quantum mechanics, the concept of position differs fundamentally from this classical meaning. A particle’s position is summarized by its wavefunction. To describe a particle at a given position in the language of quantum mechanics, we would need to find a wavefunction that is extremely high near that position and zero elsewhere. The wavefunction would resemble a very tall and very thin tower. None of the wavefunctions we have seen so far look remotely like that. Nevertheless, we can construct a wavefunction that approximates the classical description as precisely as we please.
Let’s take the particle in a box described in Section 4 as an example. The possible wavefunctions, each labeled by an integer quantum number, n, obey the superposition principle, and so we are free to add solutions with different values of n, adjusting the amplitudes as needed. The sum of the individual wavefunctions yields another legitimate wavefunction that could describe a particle in a box. If we’re clever, we can come up with a combination that resembles the classical solution. If, for example, we add a series of waves with n = 1, 3, 5, and 7 and the carefully chosen amplitudes shown in Figure 22, the result appears to be somewhat localized near the center of the box.
Localizing the particle has come at a cost, however, because each wave we add to the wavefunction corresponds to a different momentum. If the lowest possible momentum is p0, then the wavefunction we created has components of momentum at p0, 3p0, 5p0, and 7p0. If we measure the momentum, for instance, by suddenly opening the ends of the box and measuring the time for the particle to reach a detector, we would observe one of the four possible values. If we repeat the measurement many times and plot the results, we would find that the probability for a particular value is proportional to the square of the amplitude of its component in the wavefunction.
If we continue to add waves of ever-shortening wavelengths to our solution, the probability curve becomes narrower while the spread of momentum increases. Thus, as the wavefunction sharpens and our uncertainty about the particle’s position decreases, the spread of values observed in successive measurements, that is, the uncertainty in the particle’s momentum, increases.
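A short numerical sketch makes the trade-off concrete. The mode amplitudes below are an illustrative choice of ours, not the ones used in Figure 22:

```python
import numpy as np

# Particle in a box of length L: the eigenfunctions are sin(n*pi*x/L),
# with momentum components p_n = n*h/(2L). Superposing several of them
# localizes the particle near the center of the box.
L = 1.0
x = np.linspace(0.0, L, 1000)

modes = [1, 3, 5, 7]             # odd n, so every mode peaks at the center
amps  = [1.0, -0.6, 0.3, -0.1]   # alternating signs add constructively at x = L/2

psi = sum(a * np.sin(n * np.pi * x / L) for n, a in zip(modes, amps))
prob = psi**2
prob /= prob.sum() * (x[1] - x[0])   # normalize: total probability = 1

print("most probable position:", x[np.argmax(prob)])   # ~0.5, mid-box

# A momentum measurement returns n*h/(2L) with probability ~ amplitude^2:
weights = np.array(amps) ** 2
print("momentum probabilities:", weights / weights.sum())
```

Adding more, shorter-wavelength modes sharpens the central peak of prob while spreading the momentum probabilities over more values, which is exactly the reciprocal behavior described above.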
This state of affairs may seem unnatural because energy is not conserved: Often, the particle is observed to move slowly but sometimes it is moving very fast. However, there is no reason energy should be conserved because the system must be freshly prepared before each measurement. The preparation process requires that the particle has the given wavefunction before each measurement. All the information that we have about the state of a particle is in its wavefunction, and this information does not include a precise value for the energy.
The reciprocal relation between the spread in repeated measurements of position and momentum was first recognized by Werner Heisenberg. If we denote the scatter in the results of repeated measurements of a particle’s position by Δx (Δ, Greek letter “delta”), and the scatter in the results of repeated measurements of its momentum by Δp, then Heisenberg showed that ΔxΔp ≥ h/4π, a result famously known as the Heisenberg uncertainty principle. The uncertainty principle means that in quantum mechanics, we cannot simultaneously know both the position and the momentum of an object arbitrarily well.

Measurements of certain other quantities in quantum mechanics are also governed by uncertainty relations. An important relation for quantum measurements relates the uncertainty in measurements of the energy of a system, ΔE, to the time τ (τ, Greek letter “tau”) during which the measurement is made: τΔE ≥ h/4π.
Some illustrations of the uncertainty principle
Harmonic oscillator. The ground state energy of the harmonic oscillator, 1/2 hν, makes immediate sense from the uncertainty principle. If the ground state of the oscillator were more highly localized, that is, sharper than in Figure 20, the oscillator’s average potential energy would be lower. However, sharpening the wavefunction requires introducing shorter-wavelength components. These have higher momentum, and thus higher kinetic energy. The result would be an increase in the total energy. The ground state represents the optimum trade-off between decreasing the potential energy and increasing the kinetic energy.
Hydrogen atom. The size of a hydrogen atom also represents a trade-off between potential and kinetic energy, dictated by the uncertainty principle. If we think of the electron as smeared over a spherical volume, then the smaller the radius, the lower the potential energy due to the electron’s interaction with the positive nucleus. However, the smaller the radius, the higher the kinetic energy arising from the electron’s confinement. Balancing these trade-offs yields a good estimate of the actual size of the atom. The mean radius is about 0.05 nm.
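The estimate can be made quantitative with a standard back-of-the-envelope argument (sketched here; it is not in the original text). Confining the electron within a radius r gives it momentum of order ℏ/r, so the total energy is roughly

\[ E(r) \approx \frac{\hbar^2}{2 m_e r^2} - \frac{e^2}{4\pi\epsilon_0 r}. \]

Minimizing with respect to r gives r = 4πε₀ℏ²/(m_e e²) ≈ 0.053 nm, which is the Bohr radius.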
Natural linewidth. The most precise measurements in physics are frequency measurements, for instance the frequencies of radiation absorbed or radiated in transitions between atomic stationary states. Atomic clocks are based on such measurements. If we designate the energy difference between two states by E, then the frequency of the transition is given by Bohr’s relation: ν = E/h. An uncertainty in energy ΔE leads to an uncertainty in the transition frequency given by ΔE = hΔν. The time-energy uncertainty principle can be written ΔE ≥ h/(4πτ), where τ is the time during which the measurement is made. Combining these, we find that the uncertainty in frequency is Δν ≥ 1/(4πτ).
It is evident that the longer the time for a frequency measurement, the smaller the possible uncertainty. The time τ may be limited by experimental conditions, but even under ideal conditions τ would still be limited. The reason is that an atom in an excited state eventually radiates to a lower state by a process called spontaneous emission. This is the process that causes quantum jumps in the Bohr model. Spontaneous emission gives an intrinsic energy uncertainty, or width, to an energy level. This width is called the natural linewidth of the transition. As a result, the energies of all the states of a system, except for the ground state, are intrinsically uncertain. One might think that this uncertainty fundamentally precludes accurate frequency measurement in physics. However, as we shall see, this is not the case.
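For a sense of scale (a calculation of ours, using the 26-nanosecond excited-state lifetime quoted for rubidium later in this unit): an excited state that lives for τ = 26 × 10⁻⁹ s has

\[ \Delta\nu \ge \frac{1}{4\pi\tau} = \frac{1}{4\pi \times 26\times10^{-9}\ \mathrm{s}} \approx 3\times10^{6}\ \mathrm{Hz}, \]

a natural linewidth of a few megahertz, typical of optical transitions.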
Myths about the uncertainty principle
Heisenberg’s uncertainty principle is among the most widely misunderstood principles of quantum physics. Non-physicists sometimes argue that it reveals a fundamental shortcoming in science and poses a limitation to scientific knowledge. On the contrary, the uncertainty principle is seminal to quantum measurement theory, and quantum measurements have achieved the highest accuracy in all of science. It is important to appreciate that the uncertainty principle does not limit the precision with which a physical property, for instance a transition frequency, can be measured. What it does is predict the scatter in the results of repeated measurements. With repeated measurements, the ultimate precision is limited only by the skill and patience of the experimenter. Should there be any doubt about whether the uncertainty principle limits the power of precision measurement in physics, measurements made with the apparatus shown in Figure 24 should put it to rest. The experiment confirmed the accuracy of a basic quantum mechanical prediction to one part in 10¹², one of the most accurate tests of theory in all of science.
The uncertainty principle and the world about us
Because the quantum world is so far from our normal experience, the uncertainty principle may seem remote from our everyday lives. In one sense, the uncertainty principle really is remote. Consider, for instance, the implications of the uncertainty principle for a baseball. Conceivably, the baseball could fly off unpredictably due to its intrinsically uncertain momentum. The more precisely we can locate the baseball in space, the larger the intrinsic uncertainty in its momentum. So, let’s consider a pitcher who is so sensitive that he can tell if the baseball is out of position by, for instance, the thickness of a human hair, typically 0.1 mm or 10⁻⁴ m. According to the uncertainty principle, the baseball’s intrinsic speed due to quantum effects is about 10⁻²⁹ m/s. This is unbelievably slow. For instance, the time for the baseball to move quantum mechanically merely by the diameter of an atom would be roughly 20 times the age of the universe. Obviously, whatever might give a pitcher a bad day, it will not be the uncertainty principle.
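The baseball estimate is easy to reproduce in a few lines of Python (the 0.145 kg mass is an assumed regulation value):

```python
import math

h = 6.626e-34    # Planck's constant, J*s

m  = 0.145       # baseball mass, kg (assumed regulation value)
dx = 1e-4        # position uncertainty: one hair width, m

# Heisenberg: dx * dp >= h/(4*pi), so the minimum momentum spread is
dp = h / (4 * math.pi * dx)
dv = dp / m      # ~3.6e-30 m/s, the text's "about 1e-29 m/s" ballpark
print(f"quantum speed uncertainty: {dv:.1e} m/s")

# Time to drift one atomic diameter (~1e-10 m), in units of the
# age of the universe (~4.3e17 s): a few tens, as the text estimates.
t = 1e-10 / dv
print(f"drift time / age of universe: {t / 4.3e17:.0f}")
```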
Nevertheless, effects of the uncertainty principle are never far off. Our world is composed of atoms and molecules; and in the atomic world, quantum effects rule everything. For instance, the uncertainty principle prevents electrons from crashing into the nucleus of an atom. As an electron approaches a nucleus under the attractive Coulomb force, its potential energy falls. However, localizing the electron near the nucleus requires the sharpening of its wavefunction. This sharpening causes the electron’s momentum spread to get larger and its kinetic energy to increase. At some point, the electron’s total energy would start to increase. The quantum mechanical balance between the falling potential energy and rising kinetic energy fixes the size of the atom. If we magically turned off the uncertainty principle, atoms would vanish in a flash. From this point of view, you can see the effects of the uncertainty principle everywhere.
The discovery that laser light can cool atoms to less than a millionth of a degree above absolute zero opened a new world of quantum physics. Previously, the speeds of atoms due to their thermal energy were always so high that their de Broglie wavelengths were much smaller than the atoms themselves. This is the reason why gases often behave like classical particles rather than systems of quantum objects. At ultra-low temperatures, however, the de Broglie wavelength can actually exceed the distance between the atoms. In such a situation, the gas can abruptly undergo a quantum transformation to a state of matter called a Bose-Einstein condensate. The properties of this new state are described in Unit 6. In this section, we describe some of the techniques for cooling and trapping atoms that have opened up a new world of ultracold physics. The atom-cooling techniques enabled so much new science that the 1997 Nobel Prize was awarded to three of the pioneers: Steven Chu, Claude Cohen-Tannoudji, and William D. Phillips.
Doppler cooling
As we learned earlier, a photon carries energy and momentum. An atom that absorbs a photon recoils from the momentum kick, just as you experience recoil when you catch a ball. Laser cooling manages the momentum transfer so that it constantly opposes the atom’s motion, slowing the atom down. In absorbing a photon, the atom makes a transition from its ground state to a higher energy state. This requires that the photon have just the right energy. Fortunately, lasers can be tuned to precisely match the difference between energy levels in an atom. After absorbing a photon, an atom does not remain in the excited state but returns to the ground state by a process called spontaneous emission, emitting a photon in the process. At optical wavelengths, the process is quick, typically taking a few tens of nanoseconds. The atom recoils as it emits the photon, but this recoil, which is opposite to the direction of photon emission, can be in any direction. As the atom undergoes many cycles of absorbing photons from one direction followed by spontaneously emitting photons in random directions, the momentum absorbed from the laser beam accumulates while the momentum from spontaneous emission averages to zero.
This diagram of temperatures of interest in physics uses a scale of factors of 10 (a logarithmic scale). On this scale, the difference between the Sun’s surface temperature and room temperature is a small fraction of the range of temperatures opened by the invention of laser cooling. Temperatures are on the kelvin scale, on which absolute zero would describe particles in thermal equilibrium that are totally at rest. The lowest temperature measured so far by measuring the speeds of atoms is about 450 picokelvin (one picokelvin is 10⁻¹² K). This was obtained by evaporating atoms in a Bose-Einstein condensate.
The process of photon absorption followed by spontaneous emission can heat the atoms just as easily as cool them. Cooling is made possible by a simple trick: Tune the laser so that its wavelength is slightly too long for the atoms to absorb. In this case, atoms at rest cannot absorb the light. However, for an atom moving toward the laser, against the direction of the laser beam, the wavelength appears to be slightly shortened due to the Doppler effect. The wavelength shift can be enough to permit the atom to absorb the light. The recoil slows the atom’s motion. To slow motion in the opposite direction, away from the light source, one merely needs to employ a second laser beam, opposite to the first. These two beams slow atoms moving along a single axis. To slow atoms in three dimensions, six beams are needed (Figure 28). This is not as complicated as it may sound: All that is required is a single laser and mirrors.
Laser light is so intense that an atom can be excited again almost as soon as it returns to the ground state. The resulting acceleration is enormous, about 10,000 times the acceleration of gravity. An atom moving with a typical speed in a room-temperature gas, hundreds of meters per second, can be brought to rest in a few milliseconds. With six laser beams shining on them, the atoms experience a strong resistive force no matter which way they move, as if they were moving in a sticky fluid. Such a situation is known as optical molasses.
A popular atom for laser cooling is rubidium-87. Its mass is m = 1.45 × 10⁻²⁵ kg. The wavelength for excitation is λ = 780 nm. The momentum carried by the photon is p = hν/c = h/λ, and the change in the atom’s velocity from absorbing a photon is Δv = p/m = 5.9 × 10⁻³ m/s. The lifetime for spontaneous emission is 26 × 10⁻⁹ s, and the average time between absorbing photons is about tabs = 52 × 10⁻⁹ s. Consequently, the average acceleration is a = Δv/tabs = 1.1 × 10⁵ m/s², which is about 10,000 times the acceleration of gravity. At room temperature, the rubidium atom has a mean thermal speed of vth = 290 m/s. The time for the atom to come close to rest is vth/a = 2.6 × 10⁻³ s.
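These numbers are easy to verify; the following sketch simply reproduces the arithmetic of the paragraph above:

```python
h = 6.626e-34      # Planck's constant, J*s

m     = 1.45e-25   # mass of a rubidium-87 atom, kg
lam   = 780e-9     # excitation wavelength, m
t_abs = 52e-9      # average time between photon absorptions, s
v_th  = 290.0      # mean thermal speed at room temperature, m/s

p  = h / lam       # photon momentum
dv = p / m         # velocity change per absorbed photon
a  = dv / t_abs    # average deceleration

print(f"velocity kick per photon: {dv:.1e} m/s")   # ~5.9e-3 m/s
print(f"average acceleration: {a:.1e} m/s^2")      # ~1.1e5 m/s^2, ~10^4 g
print(f"stopping time: {v_th / a:.1e} s")          # ~2.6e-3 s
print(f"photons needed to stop: {v_th / dv:.0f}")  # ~50,000
```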
As one might expect, laser cooling cannot bring atoms to absolute zero. The limit of Doppler cooling is actually set by the uncertainty principle, which tells us that the finite lifetime of the excited state due to spontaneous emission causes an uncertainty in its energy. This blurring of the energy level causes a spread in the frequency of the optical transition called the natural linewidth. When an atom moves so slowly that its Doppler shift is less than the natural linewidth, cooling comes to a halt. The temperature at which this occurs is known as the Doppler cooling limit. The theoretical predictions for this temperature are in the low millikelvin regime. However, by great good luck, it turned out that the actual temperature limit was lower than the theoretical prediction for the Doppler cooling limit. Sub-Doppler cooling, which depends on the polarization of the laser light and the spin of the atoms, lowers the temperature of atoms down into the microkelvin regime.
Atom traps
Like all matter, ultracold atoms fall in a gravitational field. Even optical molasses falls, though slowly. To make atoms useful for experiments, a strategy is needed to support and confine them. Devices for confining and supporting isolated atoms are called “atom traps.” Ultracold atoms cannot be confined by material walls because the lowest temperature walls might just as well be red hot compared to the temperature of the atoms. Instead, the atoms are trapped by force fields. Magnetic fields are commonly used, but optical fields are also employed.
Magnetic traps depend on the intrinsic magnetism that many atoms have. If an atom has a magnetic moment, meaning that it acts as a tiny magnet, its energy is altered when it is put in a magnetic field. The change in energy was first discovered by examining the spectra of atoms in magnetic fields and is called the Zeeman effect after its discoverer, the Dutch physicist Pieter Zeeman.
Because of the Zeeman effect, the ground state of alkali metal atoms, the most common atoms for ultracold atom research, is split into two states by a magnetic field. The energy of one state increases with the field, and the energy of the other decreases. Systems tend toward the configuration with the lowest accessible energy. Consequently, atoms in one state are repelled by a magnetic field, and atoms in the other state are attracted. These energy shifts can be used to confine the atoms in space.
The MOT
The magneto-optical trap, or MOT, is the workhorse trap for cold atom research. In the MOT, a pair of coils with currents in opposite directions creates a magnetic field that vanishes at the center. The field points inward along the z-axis but outward along the x- and y-axes. Atoms in a vapor are cooled by laser beams in the same configuration as optical molasses, centered on the midpoint of the system. The arrangement by itself could not trap atoms because, if they were pushed inward along one axis, they would be pushed outward along another. However, by employing a trick with the laser polarization, it turns out that the atoms can be kept in a state that is pushed inward from every direction. Atoms that drift into the MOT are rapidly cooled and trapped, forming a small cloud near the center.
To measure the temperature of ultracold atoms, one turns off the trap, letting the small cloud of atoms drop. The speeds of the atoms can be found by taking photographs of the falling cloud and measuring how rapidly it expands as it falls. Knowing the distribution of speeds gives the temperature. It was in experiments like these that atoms were sometimes found to have temperatures below the Doppler cooling limit, not in the millikelvin regime but in the microkelvin regime. The reason turned out to be an intricate interplay of the polarization of the light with the Zeeman states of the atom, causing a situation known as the Sisyphus effect. The experimental discovery and the theoretical explanation of the Sisyphus effect were the basis of the Nobel Prize awarded to Chu, Cohen-Tannoudji, and Phillips in 1997.
Evaporative cooling
When the limit of laser cooling is reached, the old-fashioned process of evaporation can cool a gas further. In thermal equilibrium, atoms in a gas have a broad range of speeds. At any instant, some atoms have speeds much higher than the average, and some are much slower. Atoms that are energetic enough to fly out of the trap escape from the system, carrying away their kinetic energy. As the remaining atoms collide and readjust their speeds, the temperature drops slightly. If the trap is slowly adjusted so that it gets weaker and weaker, the process continues and the temperature falls. This process has been used to reach the lowest kinetic temperatures yet achieved, a few hundred picokelvin. Evaporative cooling cannot take place in a MOT because the constant interaction between the atoms and laser beams keeps the temperature roughly constant. To use this process to reach temperatures less than a billionth of a degree above absolute zero, the atoms are typically transferred into a trap that is made purely of magnetic fields.
Optical traps
Atoms in light beams experience forces even if they don’t actually absorb or radiate photons. The forces are attractive or repulsive depending on whether the laser frequency is below or above the transition frequency. These forces are much weaker than photon recoil forces, but if the atoms are cold enough, the forces can be large enough to confine them. For instance, if an intense light beam is turned on along the axis of a MOT that holds a cloud of cold atoms, the MOT can be turned off, leaving the atoms trapped in the light beam. Because the atoms are unperturbed by magnetic fields or photon recoil, the environment is close to ideal for many purposes. This kind of trap is called an optical dipole trap.
If the laser beam is reflected back on itself to create a standing wave of laser light, the standing-wave pattern creates a regular array of regions where the optical field is alternately strong and weak, known as an optical lattice. Atoms are trapped in the regions of strong field. If the atoms are tightly confined in a strong lattice and the lattice is gradually made weaker, the atoms start to tunnel from one site to another. At some point, the atoms move freely between the sites. The situation is similar to the phase transition in a material that abruptly turns from an insulator into a conductor. This is but one of many effects that are well known in materials and can now be studied using ultracold atoms, which can be controlled and manipulated with a precision totally different from anything possible in the past.
Why the excitement?
The reason that ultracold atoms have generated enormous scientific excitement is that they make it possible to study basic properties of matter with almost unbelievable clarity and control. These include phase transitions to exotic states of matter such as superfluidity and superconductivity that we will learn about in Unit 8, and theories of quantum information and communication that are covered in Unit 7. There are methods for controlling the interactions between ultracold atoms so that they can repel or attract each other, causing quantum changes of state at will. These techniques offer new inroads to quantum entanglement—a fundamental behavior that lies beyond this discussion—and new possibilities for quantum computation. They are also finding applications in metrology, including atomic clocks.
The words “Atomic Clock” occasionally appear on wall clocks, wristwatches, and desk clocks, though in fact none of these devices are really atomic. They are, however, periodically synchronized to signals broadcast by the nation’s timekeeper, the National Institute of Standards and Technology (NIST). The NIST signals are generated from a time scale controlled by the frequency of a transition between the energy states of an atom—a true atomic clock. In fact, the legal definition of the second is the time for 9,192,631,770 cycles of a particular transition in the atom ¹³³Cs.
Columbia University physicist Isidor Isaac Rabi first suggested the possibility that atoms could be used for timekeeping. Rabi’s work with molecular beams in 1937 opened the way to broad progress in physics, including the creation of the laser as well as nuclear magnetic resonance, which led to the magnetic resonance imaging (MRI) now used in hospitals. In 1944, the same year he received the Nobel Prize, he proposed employing a microwave transition in the cesium atom, and this system has been used ever since. The first atomic clocks achieved an accuracy of about 1 part in 10¹⁰. Over the years, their accuracy has been steadily improved. Cesium-based clocks now achieve an accuracy better than 1 part in 10¹⁵, 100,000 times better than their predecessors, which is generally believed to be close to their ultimate limit. Happily, as will be described, a new technology for clocks based on optical transitions has opened a new frontier for precision.
A clock is a device in which a motion or event occurs repeatedly and which has a mechanism for keeping count of the repetitions. The number of counts between two events is a measure of the interval between them, in units of the period of the repeating event. If a clock is started at a given time—that is, synchronized with the time system—and kept going, then the accumulated counts define the time. This statement actually encapsulates the concept of time in physics.
In a pendulum clock, the motion is a swinging pendulum, and the counting device is an escapement and gear mechanism that converts the number of swings into the position of the hands on the clock face. In an atomic clock, the repetitious event is the quantum mechanical analogue of a physical motion: the oscillation at the frequency of a transition between two atomic energy states. An oscillator is adjusted so that its frequency matches the transition frequency, effectively making the atom the master of the oscillator. The number of oscillation cycles—the analogue of the number of swings of a pendulum—is counted electronically.
The quality of a clock—essentially its ability to agree with an identical clock—depends on the intrinsic reproducibility of the periodic event and the skill of the clockmaker in counting the events. A cardinal principle in quantum mechanics is that all atoms of a given species are absolutely identical. Consequently, any transition frequency could form the basis for an atomic clock. The art lies in identifying the transition that can be measured with the greatest accuracy. For this, a high tick rate is desirable: It would be difficult to compare the rates of two clocks that ticked, for instance, only once a month. As the definition of the second reveals, cesium-based clocks tick almost 10 billion times per second.
Atomic clocks and the uncertainty principle
The precision with which an atomic transition can be measured is fundamentally governed by the uncertainty principle. As explained in Section 6, because of the time-energy uncertainty principle, there is an inherent uncertainty in the measurement of a frequency (which is essentially an energy) that depends on the length of the time interval during which the measurement is made. To reduce the uncertainty in the frequency measurement, the observation time should be as long as possible.
Four factors govern the quality of an atomic clock. They are:
1. The “tick rate,” meaning the frequency of the transition. The higher the frequency, the larger the number of counts in a given interval, and the higher the precision. Cesium clocks tick almost 10 billion times per second.
2. The precision with which the transition frequency can be determined. This is governed fundamentally by the time-frequency uncertainty principle for a single measurement: τΔf > 1. The smallest fractional uncertainty in a single measurement is therefore Δf/f ≈ 1/(fτ), so the time interval τ during which the frequency of each atom is observed should be as long as possible. This depends on the art of the experimenter. In the most accurate cesium clocks, the observation time is close to one second. (A numerical sketch follows this list.)
3. The rate at which the measurement can be repeated—that is, the number of atoms per second that are observed.
4. The ability to approach ideal measurement conditions by understanding and controlling the many sources of error that can affect a measurement. Those sources include noise in the measurement process, perturbations to the atomic system by magnetic fields and thermal (blackbody) radiation, energy level shifts due to interactions between the atoms, and distortions in the actual measurement process. The steady improvement in the precision of atomic clocks has come from progress in identifying and controlling these effects.
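As a rough numerical sketch of items 1 and 2, the snippet below evaluates the single-measurement fractional precision Δf/f ≈ 1/(fτ) for the cesium transition and for a representative optical transition; the specific numbers are illustrative.

```python
# Rough sketch of the uncertainty-limited precision in item 2: a single
# measurement of duration tau resolves a frequency to about df = 1/tau,
# so the fractional precision is df/f = 1/(f * tau).

def fractional_precision(f_hz: float, tau_s: float) -> float:
    """Single-measurement fractional frequency uncertainty, ~1/(f * tau)."""
    return 1.0 / (f_hz * tau_s)

print(fractional_precision(9.192631770e9, 1.0))  # cesium, ~1.1e-10
print(fractional_precision(1.0e15, 1.0))         # optical, ~1e-15
```

Repeating the measurement on many atoms (item 3) and averaging improves on the single-measurement figure, which is part of how cesium clocks reach parts in 10¹⁵.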
In an atomic clock, the observation time is the time during which the atoms interact with the microwave radiation as they make the transition. Before the advent of ultracold atoms and atom trapping, this time was limited by the speed of the atoms as they flew through the apparatus. However, the slow speed of ultracold atoms opened the way for new strategies, including the possibility of an atomic fountain. In an atomic fountain, a cloud of cold atoms is thrust upward by a pulse of light. The atoms fly upward in the vacuum chamber, and then fall downward under the influence of gravity. The observation time is essentially the time for the atoms to make a roundtrip. For a meter-high fountain, the time is about one second.
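The quoted round-trip time follows from simple ballistics; here is a small sketch, assuming only free fall under gravity.

```python
import math

# Round-trip time of an atomic-fountain toss: atoms launched to apex height h
# take sqrt(2h/g) to rise and the same to fall back, so the observation time
# is about 2 * sqrt(2h/g).

g = 9.81  # m/s^2

def fountain_round_trip(height_m: float) -> float:
    """Round-trip time, in seconds, for atoms tossed to a given apex height."""
    return 2.0 * math.sqrt(2.0 * height_m / g)

print(fountain_round_trip(1.0))  # ~0.9 s for a meter-high fountain
```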
The cesium fountain clock
The cesium clock operates on a transition between two energy states in the electronic ground state of the atom. As mentioned in Section 7, the ground state of an alkali metal atom is split into two separate energy levels in a magnetic field. Even in the absence of an external magnetic field, however, the ground state is split in two. This splitting arises from a magnetic interaction between the outermost electron in the atom and the atom’s nucleus, known as the hyperfine interaction. The upper hyperfine state can in principle radiate to the lower state by spontaneous emission, but the lifetime for this is so long—thousands of years—that for all practical purposes, both states are stable. The transition between these two hyperfine states is the basis of the cesium clock that defines the second.
The cesium fountain clock operates in a high vacuum so that atoms move freely without colliding. Cesium atoms from a vapor are trapped and cooled in a magneto-optical trap. The trap lasers both cool the atoms and “pump” them into one of the hyperfine states, state A. Then the wavelength of the upward-pointing trap laser beam is tuned to an optical transition in the atoms, giving the cloud a push by photon recoil. The push is just large enough to send the atoms up about one meter before they fall back down. The atoms ascend through a microwave cavity, a resonant chamber in which they pass through the microwave field of an oscillator. The field is carefully controlled to be just strong enough that the atoms make “half a transition,” which is to say that if one observed the states of the atoms as they emerged from the cavity, half would be in hyperfine state A and half in state B. The atoms then fly up and fall back. If the frequency is just right, the atoms complete the transition as they pass through the cavity a second time on the way down, so that they emerge in state B. The atoms then fall through a probe laser, which excites only those in state B. The fluorescence of the excited atoms is registered on a detector, and the detector signal is fed back to control the frequency of the microwave oscillator so that it stays continuously in tune with the atoms.
If we plot the signal on the detector against the frequency of the oscillator, we end up with what is known as a resonance curve. The pattern, called a Ramsey resonance curve, looks suspiciously like two-slit interference. In fact, it is an interference curve, but the sources interfere not in space but in time. There are two ways for an atom to go to state B from state A: by making the transition on the way up or on the way down. The final amplitude of the wavefunction has contributions from both paths, just as the wavefunction in two-slit interference has contributions from paths going through each of the slits. This method of observing the transition by passing the atom through a microwave field twice is called the “separated oscillatory field method” and its inventor, Norman F. Ramsey, received the Nobel Prize for it in 1989.
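In an idealized treatment that ignores the finite duration of the two cavity passages, the probability of emerging in state B oscillates with the oscillator’s detuning δ as cos²(πδT), where T is the free-flight time between passages. The sketch below, with illustrative numbers, shows the fringes that give the Ramsey curve its interference character.

```python
import math

# Idealized Ramsey fringes: two "half-transition" pulses separated by a
# free-flight time T give a transition probability cos^2(pi * delta * T),
# where delta is the oscillator's detuning from the atomic frequency in Hz.

def ramsey_probability(delta_hz: float, T_s: float) -> float:
    return math.cos(math.pi * delta_hz * T_s) ** 2

T = 0.5  # seconds between the two cavity passages (illustrative)
for delta in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(f"detuning {delta} Hz -> P(B) = {ramsey_probability(delta, T):.2f}")
# The fringes repeat every 1/T in detuning: interference in time, not space.
```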
Optical clocks
A useful figure of merit for an atomic clock is the ratio of its frequency to the uncertainty in its frequency, Δf. For a given value of Δf, the higher the frequency, the better the clock. With atom-cooling techniques, there are many possibilities for keeping atoms close to rest so that Δf is small. Consequently, clocks operating at optical frequencies, in the petahertz (10¹⁵ Hz) region, are potentially much more accurate than cesium-based clocks, which operate in the gigahertz (10⁹ Hz) region. However, two impediments have delayed the advent of optical clocks. Fortunately, these have been overcome, and optical clock technology is moving forward rapidly.
The first impediment was the need for an incredibly stable laser to measure the atomic signal. In order to obtain a signal from the atoms, the laser must continue oscillating smoothly on its own during the entire time the atoms are being observed. The requirement is formidable: a laser oscillating at a frequency of close to 10¹⁵ Hz that fluctuates less than 1 Hz. Through a series of patient developments over many years, this challenge has been met.
The second impediment to optical clocks was the problem of counting cycles of light. Although counting cycles of an oscillating electric field is routine at microwave frequencies using electronic circuitry, until recently there was no way to count cycles at optical frequencies. Fortunately, a technology has been invented. Known as the “frequency comb,” the invention was immediately recognized as revolutionary. The inventors, Theodor W. Hänsch and John L. Hall, were awarded the Nobel Prize in 2005 “for their contributions to the development of laser-based precision spectroscopy including the optical frequency-comb technique.”
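To illustrate why the comb solves the counting problem: each comb line sits at a frequency f₀ + n·f_rep, where the repetition rate f_rep and the offset f₀ are both microwave frequencies that ordinary electronics can count. The sketch below uses made-up but representative numbers.

```python
# Frequency-comb bookkeeping: comb line n sits at f_0 + n * f_rep. Both f_0
# (the carrier-envelope offset) and f_rep (the repetition rate) are microwave
# frequencies, countable with ordinary electronics. An unknown optical
# frequency is found from its beat note against the nearest comb line.
# All numbers here are illustrative, not from any particular comb.

f_rep = 1.0e9   # repetition rate, Hz
f_0 = 0.35e9    # carrier-envelope offset, Hz

def comb_line(n: int) -> float:
    """Optical frequency of the nth comb tooth, in Hz."""
    return f_0 + n * f_rep

n = 500_000        # tooth index reaching the optical region (~5e14 Hz)
beat = 12.0e6      # measured beat note against the laser under test, Hz
print(comb_line(n) + beat)  # inferred optical frequency
```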
Optical clocks are only in the laboratory stage but progress is rapid. One type of clock employs ions stored in electromagnetic traps, similar to the trap used in Figure 1; another employs neutral atoms confined in an optical lattice such as in Figure 2. Figure 34 shows a state-of-the-art ion-based clock at NIST. A pair of such clocks has recently demonstrated a relative accuracy greater than one part in 10¹⁷. Making these clocks into practical devices is an interesting engineering challenge.
In the new world of precise clocks, transmitting timing signals and comparing clocks in different locations presents a major challenge. Transmissions through the atmosphere or by satellite relay suffer badly from atmospheric fluctuations. The signals can be transmitted over optical fibers, but fibers can introduce timing jitter from vibrations and optical nonlinearities. These can be overcome over distances of tens of kilometers by using two-way monitoring techniques, but methods for extending the distances to thousands of kilometers have yet to be developed. However, there is an even more interesting impediment to comparing clocks at different locations. The gravitational redshift explained in Unit 3 changes the rates of clocks near Earth’s surface by about 1 part in 10¹⁶ for each meter of altitude. Clocks are approaching the regime of parts in 10¹⁸. To compare clocks in different locations, their relative altitudes would need to be known to centimeters, yet Earth’s surface is constantly moving by tens of centimeters due to tides, weather, and geological processes. This presents not merely a practical problem but also a conceptual one, for it forces us to realize that time and gravity are inextricably interlinked. Because of this, the view that time is essentially the ticks of a clock begins to seem inadequate.
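The 1-part-in-10¹⁶-per-meter figure is easy to check: near Earth’s surface, the fractional rate change between two clocks separated in altitude is gΔh/c². A quick sketch:

```python
# Gravitational redshift near Earth's surface: clocks separated in altitude
# by dh differ in rate by the fraction g * dh / c^2.

g = 9.81      # m/s^2
c = 2.998e8   # speed of light, m/s

def rate_shift(dh_m: float) -> float:
    """Fractional clock-rate difference for an altitude difference dh_m."""
    return g * dh_m / c**2

print(rate_shift(1.0))   # ~1.1e-16 per meter, as quoted above
print(rate_shift(0.01))  # ~1.1e-18: a centimeter matters at parts in 10^18
```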
Payoffs from basic research
When Isidor Isaac Rabi proposed the possibility of an atomic clock, he had a scientific goal in mind: to observe the effect of gravity on time—the gravitational redshift—predicted by Einstein’s theory of general relativity. The quest to confirm Einstein’s prediction motivated the field. Today, the gravitational redshift has not only been observed, but also measured to high precision. However, the biggest impacts of atomic clocks were totally unforeseen. The Global Positioning System (GPS) is one of these.
The GPS is a network of satellites positioned so that several of them are essentially always in view. A receiver calculates its location from information transmitted by the satellites about their time and position at each instant. The satellites carry one or more atomic clocks whose times are periodically updated by a master atomic clock in a ground station. The GPS is a miracle of engineering technology: sophisticated satellites, integrated electronics and advanced communications, information processing, geodesy, and orbital mechanics. But without atomic clocks, there would be no GPS. Furthermore, with the precision inherent in the GPS, the gravitational redshift is not merely detectable; overlooking it would cause catastrophic navigational errors.
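A back-of-the-envelope sketch, under simplifying assumptions (circular orbit, Earth’s rotation ignored), shows the size of the relativistic effects a GPS clock must be corrected for:

```python
# Simplified estimate of the relativistic rate offset of a GPS satellite
# clock relative to a ground clock (circular orbit, Earth's rotation ignored):
#   gravitational blueshift: +(GM/c^2) * (1/R_earth - 1/r_orbit)
#   velocity time dilation:  -(v^2)/(2 c^2), with v^2 = GM/r_orbit

GM = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
c = 2.998e8          # speed of light, m/s
R_earth = 6.371e6    # mean Earth radius, m
r_orbit = 2.656e7    # GPS orbit radius, m (~20,200 km altitude)

gravity = (GM / c**2) * (1.0 / R_earth - 1.0 / r_orbit)
velocity = -(GM / r_orbit) / (2.0 * c**2)

offset_per_day = (gravity + velocity) * 86_400
print(offset_per_day * 1e6)  # ~ +38 microseconds per day
```

Left uncorrected, an offset of tens of microseconds per day corresponds to kilometers of ranging error, since light travels about 300 meters per microsecond.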
Atomic clocks have applications in fundamental science as well. The technique of very long baseline radio interferometry (VLBI) permits Earth to be converted into a giant radio telescope. Signals from radio observatories on different continents can be brought together and compared to provide the angular resolution of an Earth-sized dish. To do this, however, the astronomical radio signals must first be recorded against the signal from an atomic clock. The records are then brought together and their information is correlated. VLBI can reveal details smaller than a millionth of a degree, the highest resolution achieved in all of astronomy.
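The quoted resolution is roughly the diffraction limit, wavelength divided by baseline, for an Earth-sized baseline; here is a quick sketch with representative numbers.

```python
import math

# Diffraction-limited angular resolution of an interferometer is roughly
# wavelength / baseline. For an Earth-sized baseline at centimeter
# wavelengths, this lands far below a millionth of a degree.

wavelength = 0.01     # observing wavelength, m (a centimeter-band example)
baseline = 1.2742e7   # Earth's diameter, m

theta_deg = math.degrees(wavelength / baseline)
print(theta_deg)  # ~4.5e-8 degrees
```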
Although Einstein’s theory of gravity is one of the most abstract subjects in science, the search to study it led to the invention of GPS and the creation of VLBI. This history illustrates, if illustration is needed, that the pursuit of basic knowledge is a worthy goal for scientists and a wise investment for society.
The paradox of how a wave can be a particle and a particle can be a wave was brought up in Section 4, but not resolved. The issue is far from trivial and was fiercely debated in the early days of quantum mechanics. Niels Bohr even designed a hypothetical experiment to clarify the question of whether you could detect which slit a photon passed through in a two-slit interference experiment. For light to interfere, it must slightly change its direction as it passes through a slit in order to merge with the second beam.
Consequently, passing through a slit must slightly alter a photon’s direction, which means that the slit has altered the photon’s momentum. The photon must give an opposite momentum to the slit. Bohr’s apparatus was designed to detect the recoil of the slit. If this were possible, an observer could decide which slit each photon passed through while still creating an interference pattern, revealing both the particle and wave nature of light simultaneously. However, Bohr proved that detecting the recoil would actually wipe out the interference pattern.
Thinking about waves passing through slits provides a different way to understand the situation. The waves might be light waves but they could just as well be matter waves. As the waves emerge from the slits, they diverge in a diffraction pattern. The wave intensity on the viewing screen might be registered on a camera, as in Figure 11, or measured by detections with particle counters, creating images similar to those in Figure 15. For the sake of discussion, we assume that the individual atoms or photons are detected with particle counters.
If the slits are close together, the diffraction patterns of particles coming through them overlap. In time the counts add to give a two-slit interference pattern, which is the signature of waves. What about the intermediate case? If the slits are far enough apart that the diffraction patterns only overlap a little bit, we should be able to place two detectors that only see particles passing through one or the other of the slits, and a detector in the center that sees two-slit interference. The conclusion is that if one knows from which of two slits the signal arises, one must ascribe the signal to the arrival of a particle. However, if there is no way to distinguish which of two possibilities gave rise to the signal, one must ascribe the signal to the arrival of waves.
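As a small numerical sketch of the pattern being described: cos² interference fringes, set by the slit separation d, appear under a single-slit diffraction envelope set by the slit width a. The parameter values are illustrative.

```python
import math

# Two-slit intensity in the far field: interference fringes from the slit
# separation d, modulated by the single-slit diffraction envelope from the
# slit width a. Illustrative parameters only.

wavelength = 500e-9  # m
a = 2e-6             # slit width, m
d = 10e-6            # slit separation, m

def intensity(theta_rad: float) -> float:
    """Relative intensity at angle theta from the forward direction."""
    beta = math.pi * a * math.sin(theta_rad) / wavelength
    envelope = 1.0 if beta == 0 else (math.sin(beta) / beta) ** 2
    fringes = math.cos(math.pi * d * math.sin(theta_rad) / wavelength) ** 2
    return envelope * fringes

for mrad in range(0, 101, 10):
    theta = mrad * 1e-3
    print(f"{theta:.3f} rad: {intensity(theta):.3f}")
```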
The answer to the question, “Is light composed of waves or particles?” is “Both.” If you search for light’s wave properties, you will find them. If you search for light’s particle properties, you will find them, too. However, you cannot see both properties at the same time. They are what Bohr called complementary properties. One needs both properties for a complete understanding of light, but they are fundamentally incompatible and cannot be observed at the same time. Thus, the wave-particle paradox is an apparent contradiction, not a real one.
We have discussed the wave-particle paradox for light, but the same reasoning applies to atoms and matter waves. Atoms are waves and they are particles, but not at the same time. You will find what you look for.
Momentum and Energy
The momentum has the value p = h/λ = nh/2L. The energy of the particle, E, is its kinetic energy, p²/2M, and it follows that the energy of the nth state is En = n²h²/(8ML²).
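As a worked example of this formula, the sketch below evaluates the first few energy levels for an electron confined to a one-nanometer box; the box length is illustrative.

```python
# Energy levels E_n = n^2 h^2 / (8 M L^2) for a particle in a box, evaluated
# here for an electron in a box of length 1 nanometer.

h = 6.626e-34    # Planck's constant, J*s
M = 9.109e-31    # electron mass, kg
L = 1.0e-9       # box length, m
eV = 1.602e-19   # joules per electron volt

def energy_eV(n: int) -> float:
    """Energy of the nth state, in electron volts."""
    return n**2 * h**2 / (8.0 * M * L**2) / eV

for n in (1, 2, 3):
    print(n, round(energy_eV(n), 2))  # ~0.38, 1.5, 3.38 eV
```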