
Physics for the 21st Century

The Quantum World: Interview with Featured Scientist Martin Zwierlein

Interviewer: What is the quantum mechanical concept of spin?

MARTIN: So, it turns out, every elementary particle in nature has a quantity that we call spin. The classical analog of spin is just how fast an object rotates about itself, and that analog works to a certain extent. For now, this is fine. Every elementary particle has a certain, very distinct amount of spin. It turns out electrons, protons, and neutrons all carry the same spin which, in physics units, takes on the value one-half. What are the units? It’s one-half h-bar, where h-bar is Planck’s constant divided by 2π, the constant introduced by Max Planck.

Interviewer: Why is spin important, and what is the difference between having spin one-half, one, or two?

MARTIN: It makes a whole lot of difference. It turns out, and this is a very deep result of quantum mechanics, the so-called spin-statistics theorem, that if you have half-integer spin, you don’t want to be together in the same quantum state. So, let’s take electrons. Electrons have half-integer spin. That means we cannot stack them into the same quantum state. Electrons always have to find a way around each other, always take on different energy states, and we see that most clearly and most dramatically in the structure of our atoms. We all know this from high school, from our physics and chemistry classes, but it is really a fundamental property of the electrons due to their spin. Otherwise, all atoms would look very boringly alike because all the electrons would always be in the same shell, and all atoms would pretty much do the same thing, but that is not at all the case. We have the shell structure, and that gives us all the beauty of nature around us, all these different elements, and that is just due to the electrons having half-integer spin.

Interviewer: What is the difference between bosons and fermions?

MARTIN: So, in nature, you only have two kinds of particles, bosons or fermions, and if you want to figure out whether you are a boson or a fermion, just count the number of elementary particles, protons, neutrons, and electrons, or equivalently count how much spin you have. If the outcome is, I have an even number of elementary particles, and therefore an integer spin, well then, I am a boson. Then I can be together with other bosons in the same state. If the outcome is, I have an odd number of elementary particles or, equivalently, a half-integer spin, then I am a fermion, and I cannot be in the same state with another fermion at the same time. That is the difference between bosons and fermions. It turns out fermions are the building blocks of nature. If you think about it, the massive elementary particles that matter is built from are fermions. We have the quarks; it turns out they have half-integer spin. The quarks form the protons and the neutrons, which have half-integer spin because they consist of three quarks. The electron carries half-integer spin. So, the elementary particles that have mass and build everything we see around us, ourselves, the sun, neutron stars, the universe, are fermions. So, fermions that glue together form everything we know. That is why they are the fundamental particles to study.
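
The counting rule Martin describes can be written out in a few lines of code. Here is a minimal sketch; the example isotopes are standard illustrations, not taken from the interview.

```python
# A minimal sketch of the counting rule described above: add up the protons,
# neutrons, and electrons (each a spin-1/2 fermion); an even total makes a
# composite boson, an odd total makes a composite fermion.
# The example isotopes below are just common illustrations.

def classify(protons: int, neutrons: int, electrons: int) -> str:
    """Return 'boson' or 'fermion' for a composite particle such as a neutral atom."""
    total_spin_half_particles = protons + neutrons + electrons
    return "boson" if total_spin_half_particles % 2 == 0 else "fermion"

examples = {
    "helium-4 (2p, 2n, 2e)":  (2, 2, 2),   # 6 constituents  -> boson
    "helium-3 (2p, 1n, 2e)":  (2, 1, 2),   # 5 constituents  -> fermion
    "lithium-6 (3p, 3n, 3e)": (3, 3, 3),   # 9 constituents  -> fermion
    "lithium-7 (3p, 4n, 3e)": (3, 4, 3),   # 10 constituents -> boson
}

for name, counts in examples.items():
    print(f"{name}: {classify(*counts)}")
```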

Interviewer: What is the many-body problem?

MARTIN: So, we are all familiar with the two-body problem; like in everyday life, we have to figure out how to find work for ourselves and our partner in the same city, that kind of two-body problem. In physics, usually you think about the earth circling around the sun, and we can solve this problem exactly. It is beautiful. It is no problem whatsoever. But now, what about the three-body problem? What about adding the moon to the story? Already, that cannot be solved exactly anymore. So, that is a big problem in physics, that once you add one more particle to your two-body problem, you are already at a loss. Of course, you can do wonderful approximations and figure it out very nicely, but it turns out the system is already so complicated that actually there is chaos in it, and terrible things can happen, already at the level of three particles. Now add four, five, six particles, and try to calculate this complicated problem exactly. It is impossible. No computer in the world can do it. And now take 10^23 particles, making up a solid, with 10^23 electrons per cubic centimeter swimming around in your metal, and try to solve that problem. It seems absolutely impossible. Many-body physics tries to tackle exactly that question. Can we still have a handle on these very complicated problems with all these many particles, even though we already don’t understand how to solve the three-body problem exactly? Well, there are wonderful schemes for attacking this problem, and also for incorporating statistics into many-body physics.

Interviewer: How does this apply when you start to look at systems that have strong interactions between fermions?

MARTIN: So, usually for weak interactions, as is the case for metals that become superconducting at very low temperatures, you can write down approximate wavefunctions that describe the situation extremely well. But once we crank up the interactions between fermions, once we make these interactions very strong, we are at a loss because now there are no approximations that work anymore. Approximations usually mean that you throw out some part of your problem which is maybe small, not so important, something you can perhaps neglect. If you have strong interactions, every fermion talks to every other fermion, and you cannot neglect that. You cannot just throw out a bunch of fermions and say, okay, just look at the interaction between this guy and the other guy, and don’t incorporate interactions with the others. If you have strong interactions, you have to take everything into account, and that is just terribly complicated. So, that is exactly what many-body physics thinks about the whole time: how can we handle strong interactions even though there is no approximation that works anymore?

Interviewer: You mentioned superconductors. What other problems might be approachable by looking closely at strongly interacting fermions?

MARTIN: Fermions with strong interactions turn up in all kinds of systems in nature. In condensed-matter physics, we have high-Tc superconductors that we don’t understand. We would like to get the critical temperature up higher to finally get a room-temperature superconductor. Colossal magnetoresistance materials would be fantastically useful for hard drives and our USB sticks, which already make use of the giant magnetoresistance effect in these kinds of interesting materials. We find strongly interacting fermions at the beginning of the universe, in the quark-gluon plasma that was created in the split second after the Big Bang and that is very hard to model. And inside the atomic nucleus: the atomic nucleus itself is a strongly interacting fermionic system where protons and neutrons pair up. Or inside a neutron star, where you have in the crust some kind of neutron superfluid and, in the center, a mixture of quarks that interact and pair up with each other. All these different systems belong to the same category of strongly interacting fermions. We don’t understand them well. We cannot model them well, but we hope to have a model system that explains them better: using our atoms as model matter in the lab to probe these strongly interacting fermions in a controlled environment, one where we can tune the interactions and the distance between particles at will and understand how the many-body system works.

Interviewer: What happens when you cool the atoms to ultra-cold temperatures?

MARTIN: So, at high temperatures, you can think of atoms simply as little billiard balls that are moving around at a high speed, on the order of a thousand meters per second, and they move around like crazy in some erratic kind of fashion; here I am drawing the velocities of those particles. Now, it turns out, as you lower the temperature, eventually you should not think of the particles as billiard balls anymore. Quantum mechanics takes over and particles become waves. At low temperatures, each of these particles is actually smeared out over a certain matter wave, over a certain range which we call lambda; that’s roughly the size of these matter waves. And it turns out the more you cool, the colder you are, the lower the velocity is and the larger this wavelength becomes, the larger these waves are. De Broglie told us what the relationship is: the wavelength is given by Planck’s constant h divided by the mass of the particle times the velocity. So, at very low temperatures, all these waves eventually start to overlap, once the wavelength is on the order of the interparticle distance. In this situation, it now becomes crucial whether you are a boson or a fermion. If you are a fermion, you really want to stay away from the other guys and form different wavefunctions. All the fermions have to distinguish themselves by their wave; they cannot all occupy the same wave. Bosons instead will all decide to join up into one big macroscopic matter wave at low temperatures, below a certain critical temperature, and that is the Bose-Einstein condensate.
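
A quick back-of-the-envelope sketch of the de Broglie relation he quotes, lambda = h / (m v). The choice of sodium-23 and the sample temperatures are illustrative assumptions, not numbers from the interview.

```python
# Back-of-the-envelope sketch of the de Broglie relation lambda = h / (m v).
# The atom (sodium-23) and the temperatures are illustrative choices.
import math

h   = 6.626e-34      # Planck's constant, J*s
k_B = 1.381e-23      # Boltzmann's constant, J/K
m   = 23 * 1.66e-27  # mass of a sodium-23 atom, kg

def de_broglie_wavelength(T):
    """Estimate lambda = h / (m v) using a typical thermal speed."""
    v = math.sqrt(3 * k_B * T / m)   # from (3/2) k_B T = (1/2) m v^2
    return h / (m * v)

for T in (300.0, 1e-3, 1e-6):        # room temperature, 1 mK, 1 uK
    lam = de_broglie_wavelength(T)
    print(f"T = {T:>8.1e} K  ->  lambda ~ {lam:.2e} m")
# At around a microkelvin the wavelength reaches a fraction of a micrometer,
# comparable to the spacing between atoms in these dilute gases.
```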

Interviewer: This is the story of bosons, but now what about fermions?

MARTIN: Well, fermions don’t want to be together in the same quantum state. So, we don’t have this large macroscopic matter wave right away. Fermions first have to pair up. Say here I have a spin-up fermion and here I have a spin-down fermion. These guys have to pair up to form a boson, and then these bosons can again form a Bose-Einstein condensate. So, here I draw a couple of fermion pairs that eventually form bosons. All these pairs then have a wave associated with them, this blob, this blob, and this blob, and these fermion pairs then form one macroscopic wavefunction. So, I can again draw this at zero temperature. All the fermion pairs are again in the same big macroscopic wavefunction, but now it is the fermion pairs. It is not the fermions themselves, but the fermion pairs. Once the fermions have found a partner to form a pair with, then these pairs can form the Bose-Einstein condensate, and then we can again draw one macroscopic wavefunction for the fermion pairs.

Interviewer: Once you have a condensate of a fermion gas, describe your rotation experiment.

MARTIN: So, in the lab, we deal with a puff of gas a million times thinner than air, and we need to rotate it. So, how do you make that rotate? Well, we can take yet another laser beam. In our lab, it’s a green laser beam. We then rotate it around this puff of gas at a certain speed, 100 hertz, about a hundred times per second, and that will excite the gas and make it start to rotate. So, as we rotate this puff of gas with our green laser beam, the edges of the cloud, which are not quite superfluid, maybe still normal, are kind of carried around and start to rotate, and they will tell the superfluid to also rotate.

Interviewer: How does the superfluid rotate?

MARTIN: Well, it has to have vortices inside. Now, these vortices cannot just suddenly pop up in the center of the superfluid. That would violate conservation of angular momentum in this superfluid. They have to come from the outside. So, at the rim of our puff of gas, we see some fluctuations that appear here and there. These are already the vortices, and after a while some of them make it inside, and then some more, and then some more, and eventually we end up with a huge vortex lattice piercing through the superfluid core, and that is the final state. That is what happens if you keep the rotation on, but it also persists in a metastable way: if you stop the rotation after a while, the vortices still remain there because, as a superfluid, it doesn’t even feel friction. So, at zero temperature, it would rotate forever. We are not at zero temperature, so eventually, if you stop the stirring, the normal cloud will spin down and drag the superfluid with it, meaning that the vortices will eventually disappear after the spin-down, but the vortex lattice lives for a very long time, even without any rotation.

Interviewer: How do you look at your cloud?

MARTIN: Well, we have this tiny puff of gas in here, a little cloud. It is maybe a hundred micrometers in size, roughly, and we want to know what is going on inside that cloud. So, what do we do? Well, we expand it. We open up our trap and let this puff of gas expand, and then it will be a large puff of gas, a couple of millimeters in size, after expansion. Then we shine in a resonant laser beam—all of this is laser light. We shine it onto our atoms, the atoms cast a shadow in that laser light, and we image the atoms onto a camera.

Interviewer: If you have created vortices in that cloud, how would you see them?

MARTIN: Well, the vortices are mini whirlpools, so they are little tubes that run through that cloud, empty tubes, hollow tubes, where the light simply passes through; and so, in our image, we will see lots of white patches exactly where the vortices are. So, where the vortices are, all the light makes it through. Once the atoms have scattered all these photons from the laser beam, they are hot, the superfluid has broken down, and the cloud is flying apart anyway. So, we have to repeat the shot, maybe varying some wait time or some other parameter of the experiment, with the exact same initial conditions, and then, forty seconds later, we see the second shot on the screen, and then we take a third shot, and a fourth shot. Taking a movie in that sense takes maybe fifty images, each of a newly prepared cloud. So, it is a little tedious.

Interviewer: What motivates you to continue these experiments?

MARTIN: So, what we hope to achieve with these atomic systems is to understand strongly interacting fermions much better than we know how to describe them today. This correspondence between theory and experiment, this constant exchange, will hopefully give us much better theoretical tools in five or ten years to study strongly interacting fermions than we have today, by constantly probing these techniques on atomic systems that we can control very well in the lab. So, it will be a back and forth between theory and experiment. The theorists come up with a model, the experimentalists realize that model, and we can match theory and experiment in a quantitative way. Do the data fit the theory? Hopefully it agrees with what the theorists have come up with. Otherwise, they have to go back to the drawing board.

It is lots of fun. You get to play, I sometimes say, with nature herself. You can tickle the atoms, play with them, get another laser beam, and see what happens. You can apply a gradient, split them apart, bounce them off each other, and see how nature reacts to this unusual situation in which you place her. And it is really unusual, because there is no other experiment you can do where you have things so well prepared: here are the spin ups, here are the spin downs, now you bang them into each other, and what happens? Usually, in physics, things are rather complicated, and I haven’t found any other system where you can be so sure about what you have in front of you and what the interactions are, and you can even switch off the interactions if you want to. So, it is immense control that we have, the fact that only a few people in the lab can make a difference in the understanding of what is going on, the fact that we can make quantitative experiments that really give the number, the hard number that we hand to the theorists and say, okay, please, you should get that. This is the outcome. That’s your theory. Get that.

Interviewer: What new directions are you looking forward to exploring in the future?

MARTIN: It turns out, one dream that we all hope to reach is to have perfect control over the internal and external degrees of freedom of the atoms, and that includes the spin degrees of freedom. We want to be able to control whether the system is a ferromagnet or an antiferromagnet. We want to be able to address atoms in an optical lattice, on single sites, and have perfect control over the degrees of freedom of these atoms. If we have that, if we have a system that is controllable, that is addressable, where we can address every single atom and manipulate it using microwave circuitry, then what we have actually built is a quantum computer, because now we can use these spins of atoms in optical lattices as qubits: one, zero, or one plus zero, a superposition. We can bring them into contact with each other, build logic gates for this quantum computer, shuffle them around, move them around, and read out the outcome of a quantum computation. That is another aspect of the story: on one hand, we want to understand many-body physics and we want to create model systems that help us understand current models; but, at the same time, while we are doing this, we will be coming closer and closer to a real quantum computer that allows us to do computations that a classical computer cannot do.
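
As a rough illustration of what "one, zero, or one plus zero" and "logic gates" mean for qubits, here is a tiny numerical sketch using plain vectors and matrices with NumPy. It is not a model of the atomic apparatus Martin describes, just the textbook linear algebra.

```python
# A tiny numerical sketch of qubit states and gates (not the atomic hardware).
import numpy as np

zero = np.array([1.0, 0.0])          # |0>
one  = np.array([0.0, 1.0])          # |1>

# Hadamard gate: turns |0> into the equal superposition (|0> + |1>) / sqrt(2)
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)

superposition = H @ zero
print("H|0> =", superposition)                       # [0.707..., 0.707...]
print("probabilities:", np.abs(superposition) ** 2)  # 50% chance of 0, 50% of 1

# A two-qubit CNOT gate acting on (H|0>) tensor |0> makes an entangled pair,
# the kind of logic-gate operation a lattice-based quantum computer needs.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
bell = CNOT @ np.kron(superposition, zero)
print("Bell state:", bell)                           # (|00> + |11>) / sqrt(2)
```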

Unit Glossary

alkali metals

The alkali metals are the chemical elements in the first column of the periodic table. They all have one valence electron. Alkali metals are commonly used atoms in atomic physics experiments for several reasons. Their structure is relatively simple and provides energy states that are convenient for laser cooling. Many of their transition frequencies match convenient laser sources. Also, the single valence electron’s magnetic moment allows the atoms to be easily trapped using magnetic fields, which is convenient for the evaporative cooling process necessary to reach ultracold temperatures.

atomic fountain

An atomic fountain is a cloud of cold atoms that is given a push upward with a laser pulse. The laser is tuned to the right energy to transfer its momentum to the atoms, which fly up until gravity takes over, reversing their motion so they fall back down. The path the atoms take is analogous to the path of water in a fountain.

blackbody

A blackbody is an object that absorbs all incident electromagnetic radiation and re-radiates it after reaching thermal equilibrium. The spectrum of light emitted by a blackbody is smooth and continuous, and depends on the blackbody’s temperature. The peak of the spectrum is higher and at a shorter wavelength as the temperature increases.
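
The way the peak moves with temperature is captured by Wien's displacement law, lambda_peak = b / T with b ≈ 2.898 × 10^-3 m·K. A small sketch follows; the example temperatures are purely illustrative.

```python
# Wien's displacement law: the blackbody spectrum peaks at lambda_peak = b / T.
WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength(T_kelvin):
    return WIEN_B / T_kelvin

for label, T in [("room temperature", 300.0),
                 ("incandescent filament", 3000.0),
                 ("surface of the Sun", 5800.0)]:
    print(f"{label:>22} ({T:>6.0f} K): peak near {peak_wavelength(T) * 1e9:.0f} nm")
# Hotter blackbodies peak at shorter wavelengths: ~9660 nm, ~966 nm, ~500 nm.
```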

Bohr Correspondence Principle

The Bohr Correspondence Principle states that the predictions of quantum mechanics must match the predictions of classical physics in the physical situations that classical physics is intended to describe, and does describe very accurately. Mathematically, this means that the equations of quantum mechanics must smoothly turn into the equations of classical mechanics as the de Broglie wavelength of particles becomes very small, and the energy state quantum number gets very large.

Bose-Einstein condensate

A Bose-Einstein condensate, or BEC, is a special phase of matter in which the quantum mechanical wavefunctions of a collection of particles line up and overlap in a manner that allows the particles to act as a single quantum object. The paired electrons in a superconductor form a BEC; superfluid helium is an example of a liquid BEC. BECs can also be created from dilute gases of ultracold atoms and molecules.

cosmic microwave background

The cosmic microwave background (CMB) radiation is electromagnetic radiation left over from when atoms first formed in the early universe, according to our standard model of cosmology. Prior to that time, photons and the fundamental building blocks of matter formed a hot, dense soup, constantly interacting with one another. As the universe expanded and cooled, protons and neutrons formed atomic nuclei, which then combined with electrons to form neutral atoms. At this point, the photons effectively stopped interacting with them. These photons, which have stretched as the universe expanded, form the CMB. First observed by Penzias and Wilson in 1965, the CMB remains the focus of increasingly precise observations intended to provide insight into the composition and evolution of the universe.

diffraction

Diffraction is the spreading of a wave after it encounters an obstacle, a sharp corner, or emerges from a slit. If the slit is small, the spreading is large and is accompanied by an interference pattern with a central peak surrounded by weaker side lobes. In this context, “small” means comparable to the wavelength of the diffracting wave. The fact that light diffracts when passed through a small slit is evidence of its wave nature.
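
For a single slit of width a, the first dark fringe sits at sin(theta) = lambda / a, which makes the statement that "small" means comparable to the wavelength quantitative. A short sketch with an illustrative laser wavelength:

```python
# Single-slit diffraction: the first minimum is at sin(theta) = lambda / a,
# so the spreading angle grows as the slit width a approaches the wavelength.
# The wavelength and slit widths below are illustrative choices.
import math

wavelength = 633e-9  # a red laser, m

for a in (1e-3, 100e-6, 10e-6, 1e-6):     # slit widths from 1 mm down to 1 um
    theta = math.degrees(math.asin(wavelength / a))
    print(f"slit {a * 1e6:>7.1f} um -> first minimum at {theta:6.3f} degrees")
# A millimeter-wide slit barely spreads the beam; a slit only a bit wider than
# the wavelength spreads it by tens of degrees.
```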

Doppler cooling

Doppler cooling is a technique that uses laser light to slow, and thus cool, moving atoms. An atom will absorb a photon that has an energy equal to the difference between two energy levels in the atom. When the atom absorbs a photon, it also absorbs the photon’s momentum and gets a push in the direction that the photon was traveling. If the photon and the atom were traveling in opposite directions, the atom slows down. However, when the atom is moving relative to the laser, the laser light is Doppler shifted in the atom’s reference frame. To cool moving atoms, the laser must be tuned slightly to the red to account for the Doppler shift of atoms moving toward the light source.
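
A rough sense of the numbers: each absorbed photon carries momentum h / lambda, so the velocity kick per photon is h / (m·lambda). The sketch below uses sodium and its 589 nm line as an illustration; those specifics are assumptions, not part of the glossary entry.

```python
# How much one absorbed photon slows an atom: photon momentum is p = h / lambda,
# so the recoil velocity is v_rec = h / (m * lambda).
h = 6.626e-34            # Planck's constant, J*s
m_Na = 23 * 1.66e-27     # mass of sodium-23, kg (illustrative atom)
wavelength = 589e-9      # sodium D-line wavelength, m

v_recoil = h / (m_Na * wavelength)
print(f"recoil velocity per photon: {v_recoil * 100:.1f} cm/s")   # ~3 cm/s

# An atom leaving an oven at ~600 m/s therefore needs on the order of
# 600 / v_recoil ~ 20,000 absorption events to be brought nearly to rest.
print(f"photons needed to stop a 600 m/s atom: {600 / v_recoil:,.0f}")
```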

Doppler shift (Doppler effect)

The Doppler shift is a shift in the wavelength of light or sound that depends on the relative motion of the source and the observer. A familiar example of a Doppler shift is the apparent change in pitch of an ambulance siren as it passes a stationary observer. When the ambulance is moving toward the observer, the observer hears a higher pitch because the wavelength of the sound waves is shortened. As the ambulance moves away from the observer, the wavelength is lengthened and the observer hears a lower pitch. Likewise, the wavelength of light emitted by an object moving toward an observer is shortened, and the observer will see a shift to blue. If the light-emitting object is moving away from the observer, the light will have a longer wavelength and the observer will see a shift to red. By observing this shift to red or blue, astronomers can determine the velocity of distant stars and galaxies relative to the Earth. Atoms moving relative to a laser also experience a Doppler shift, which must be taken into account in atomic physics experiments that make use of laser cooling and trapping.

evaporative cooling

Evaporative cooling is a process used in atomic physics experiments to cool atoms down to a few billionths of a degree above absolute zero. The way it works is similar to how a cup of hot coffee cools through evaporation. Atoms are pre-cooled, usually with some kind of laser cooling, and trapped in a manner that imparts no additional energy to the atoms. The warmest atoms are removed from the trap, and the remaining atoms reach a new, lower equilibrium temperature. This process is typically repeated many times, creating small clouds of very cold atoms.
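
A toy numerical sketch of this idea, not a model of any real apparatus: repeatedly discard the fastest atoms and let the rest define a new, lower temperature. The atom, starting temperature, cut fraction, and the assumption of instant rethermalization are all illustrative simplifications.

```python
# Toy sketch of evaporative cooling: cut away the fastest atoms, rethermalize,
# repeat. Parameters are arbitrary illustrative choices.
import numpy as np

k_B = 1.381e-23
rng = np.random.default_rng(0)

def temperature(speeds, mass):
    """Assign a temperature from the mean kinetic energy, (1/2) m <v^2> = (3/2) k_B T."""
    return mass * np.mean(speeds**2) / (3 * k_B)

mass = 23 * 1.66e-27          # sodium-like atom, kg
T = 100e-6                    # start at 100 microkelvin
speeds = np.linalg.norm(
    rng.normal(0.0, np.sqrt(k_B * T / mass), size=(200_000, 3)), axis=1)

for step in range(6):
    cutoff = np.quantile(speeds, 0.90)        # remove the fastest 10% of atoms
    speeds = speeds[speeds < cutoff]
    T = temperature(speeds, mass)             # the survivors define a lower T
    # re-draw a thermal cloud at the new temperature (crude stand-in for collisions)
    speeds = np.linalg.norm(
        rng.normal(0.0, np.sqrt(k_B * T / mass), size=(speeds.size, 3)), axis=1)
    print(f"cut {step + 1}: {speeds.size:>7} atoms left, T ~ {T * 1e6:.1f} uK")
```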

frequency comb

A frequency comb is a special type of laser that has a spectrum that looks like a comb. Most lasers emit light at one well-defined frequency, and have a resonance curve that looks like a peak at that frequency and zero everywhere else. A frequency comb has a resonance curve that looks like a series of evenly spaced peaks over a broad range. These lasers were important in the development of optical clocks because the peaks in their spectrum are at optical frequencies, but the spacing between peaks is at much lower microwave frequencies. Comb lasers therefore provide a link between optical frequencies that are very difficult to measure directly, and microwave frequencies that can be counted with well-established laboratory techniques.
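
The optical-to-microwave link can be written as f_n = f_ceo + n·f_rep, where the repetition rate f_rep and the offset frequency f_ceo are countable microwave frequencies and n is a large integer. A small sketch with purely illustrative parameter values:

```python
# The comb equation: every tooth sits at f_n = f_ceo + n * f_rep, where f_rep
# and f_ceo are microwave frequencies that can be counted electronically and
# n is a large integer. The parameter values below are illustrative only.
f_rep = 1.0e9      # repetition rate: 1 GHz
f_ceo = 0.25e9     # carrier-envelope offset: 250 MHz

optical_target = 4.560003e14   # an optical frequency near 456 THz (illustrative)

# Which comb tooth lies closest to the optical frequency?
n = round((optical_target - f_ceo) / f_rep)
f_tooth = f_ceo + n * f_rep
beat = optical_target - f_tooth     # measured as a low-frequency beat note

print(f"tooth number n        = {n}")
print(f"tooth frequency       = {f_tooth:.6e} Hz")
print(f"beat with the optical = {beat:.3e} Hz")
# Counting f_rep, f_ceo, and the beat note pins down the optical frequency.
```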

ground state

The ground state of a physical system is the lowest energy state it can occupy. For example, a hydrogen atom is in its ground state when its electron occupies the lowest available energy level.

harmonic oscillator

A harmonic oscillator is a physical system that, when displaced from equilibrium, experiences a restoring force proportional to the displacement. A harmonic oscillator that is displaced and then let go will oscillate sinusoidally. Examples from classical physics are a mass attached to a spring and a simple pendulum swinging through a small angle.

Heisenberg uncertainty principle

The Heisenberg uncertainty principle states that the values of certain pairs of observable quantities cannot both be known with arbitrary precision. The most well-known variant states that the uncertainty in a particle’s momentum multiplied by the uncertainty in its position must be greater than or equal to Planck’s constant divided by 4π. This means that the more precisely you measure a particle’s position, the greater the uncertainty in the particle’s momentum must be, and vice versa. Energy and time are connected by the uncertainty principle in the same way as position and momentum. The uncertainty principle is responsible for numerous physical phenomena, including the size of atoms, the natural linewidth of transitions in atoms, and the amount of time virtual particles can last.
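
A quick numerical illustration of the position-momentum relation, applied to an electron confined to a region about the size of an atom; the 1 angstrom confinement is an illustrative choice.

```python
# Position-momentum uncertainty: delta_x * delta_p >= h / (4 pi) = hbar / 2,
# applied to an electron confined to roughly the size of an atom.
import math

h = 6.626e-34
hbar = h / (2 * math.pi)
m_e = 9.109e-31          # electron mass, kg

delta_x = 1e-10          # confine the electron to ~1 angstrom (atomic size)
delta_p = hbar / (2 * delta_x)          # minimum momentum uncertainty
delta_v = delta_p / m_e                 # corresponding velocity spread

print(f"minimum delta_p = {delta_p:.2e} kg m/s")
print(f"minimum delta_v = {delta_v:.2e} m/s")   # several hundred km/s
# Squeezing the electron into a smaller box forces an even larger momentum
# (and kinetic energy) spread, which is why atoms cannot collapse to a point.
```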

hyperfine interaction

When the nucleus of an atom has a non-zero magnetic moment, the magnetic field of the nucleus interacts with electrons in the atom. This interaction is called the hyperfine interaction, and leads to finely spaced atomic energy levels called hyperfine structure.

interference

Interference is an effect that occurs when two or more waves overlap. In general, the individual waves do not affect one another, and the total wave amplitude at any point in space is simply the sum of the amplitudes of the individual waves at that point. In some places, the two waves may add together, and in other places they may cancel each other out, creating an interference pattern that may look quite different than either of the original waves. Quantum mechanical wavefunctions can interfere, creating interference patterns that can only be observed in their corresponding probability distributions.

magnetic moment

The magnetic moment (or magnetic dipole moment) of an object is a measure of the object’s tendency to align with a magnetic field. It is a vector quantity, with the positive direction defined by the way the object responds to a magnetic field: The object will tend to align itself so that its magnetic moment vector is parallel to the magnetic field lines. There are two sources for a magnetic moment: the motion of electric charge and spin angular momentum. For example, a loop of wire with a current running through it will have a magnetic moment proportional to the current and area of the loop, pointing in the direction of your right thumb if your fingers are curling in the direction of the current. Alternatively, an electron, which is a spin-1/2 fermion, has an intrinsic magnetic moment proportional to its spin.

magneto-optical trap

A magneto-optical trap, or MOT, uses a combination of laser beams and magnetic fields to confine atoms at temperatures between a few millikelvin and a few microkelvin. Atoms in a MOT are constantly interacting with the laser beams, which cool them to the laser-cooling limit, but no further than that.

matrix mechanics

Matrix mechanics is the version of quantum mechanics formulated in the 1920s by Werner Heisenberg and his close colleagues Max Born and Pascual Jordan. It makes extensive use of matrices and linear algebra, which was relatively new mathematics at the time. Matrix mechanics, which is mathematically equivalent to Schrödinger’s wave mechanics, greatly simplifies certain calculations.

microkelvin

The microkelvin is a unit of temperature equal to one-millionth (10^-6) of a kelvin. Laser-cooled atoms are typically at temperatures of a few microkelvin.

millikelvin

The millikelvin is a unit of temperature equal to one-thousandth (10^-3) of a kelvin. Helium-3 (³He) becomes a superfluid at a temperature of around one millikelvin.

natural linewidth

The natural linewidth of an atomic energy level is the intrinsic uncertainty in its energy due to the uncertainty principle.

optical dipole trap

An optical dipole trap is a type of atom trap that uses only laser light to trap atoms. The laser frequency is tuned to a frequency below an atomic resonance so the atoms do not absorb laser photons as they do in laser cooling or in a MOT. Instead, the electric field from the laser induces an electric dipole in the atoms that attracts them to regions of more intense laser light. Optical dipole traps are only strong enough to hold cold atoms, so atoms are typically cooled first and then transferred into the dipole trap.

optical lattice

An optical lattice is an optical dipole trap made from a standing wave laser beam, so there is a periodic array of regions with a strong and weak laser field. Atoms are attracted to regions of a strong field, so they are trapped in a lattice-like pattern.

optical molasses

Optical molasses is formed when laser beams for Doppler cooling are directed along each spatial axis so that atoms are laser cooled in every direction. Atoms can reach microkelvin temperatures in optical molasses. However, the molasses is not a trap, so the atoms can still, for example, fall under the influence of gravity.

phase

In physics, the term phase has two distinct meanings. The first is a property of waves. If we think of a wave as having peaks and valleys with a zero-crossing between them, the phase of the wave is defined as the distance between the first zero-crossing and the point in space defined as the origin. Two waves with the same frequency are “in phase” if they have the same phase and therefore line up everywhere. Waves with the same frequency but different phases are “out of phase.” The term phase also refers to states of matter. For example, water can exist in liquid, solid, and gas phases. In each phase, the water molecules interact differently, and the aggregate of many molecules has distinct physical properties. Condensed matter systems can have interesting and exotic phases, such as superfluid, superconducting, and quantum critical phases. Quantum fields such as the Higgs field can also exist in different phases.

photon

Photons can be thought of as particle-like carriers of electromagnetic energy, or as particles of light. In the Standard Model, the photon is the force-carrier of the electromagnetic force. Photons are massless bosons with integer spin, and travel through free space at the speed of light. Like material particles, photons possess energy and momentum.

Planck’s constant

Planck’s constant, denoted by the symbol h, has the value 6.626 x 10^-34 m^2 kg/s. It sets the characteristic scale of quantum mechanics. For example, energy is quantized in units of h multiplied by a particle’s characteristic frequency, and spin is quantized in units of h/2π. The quantity h/2π appears so frequently in quantum mechanics that it has its own symbol: ħ (h-bar).

plane wave

A plane wave is a wave of constant frequency and amplitude whose wavefronts are infinite parallel planes. Plane waves travel in the direction perpendicular to the wavefronts. Although they are a mathematical abstraction, many physical waves approximate plane waves far from their source.

polarization

The polarization of a wave is the direction in which it is oscillating. The simplest type of polarization is linear, transverse polarization. Linear means that the wave oscillation is confined along a single axis, and transverse means that the wave is oscillating in a direction perpendicular to its direction of travel. Laser light is most commonly a wave with linear, transverse polarization. If the laser beam travels along the x-axis, its electric field will oscillate either in the y-direction or in the z-direction. Gravitational waves also have transverse polarization, but have a more complicated oscillation pattern than laser light.

probability distribution

In quantum mechanics, the probability distribution is a mathematical function that gives the probability of finding a particle in any small region of space. The probability distribution for a quantum mechanical system is the squared magnitude of the wavefunction.

quantized

Any quantum system in which a physical property can take on only discrete values is said to be quantized. For instance, the energy of a confined particle is quantized. This is in contrast to a situation in which the energy can vary continuously, which is the case for a free particle.

quantum number

A quantum number is a number that characterizes a particular property of a quantum mechanical state. For example, each atomic energy level is assigned a set of integers that is uniquely related to the quantized energy of that level.

resonance curve

A resonance curve is a graph of the response of an atomic system to electromagnetic radiation as a function of the frequency of the radiation. The simplest example of a resonance curve is the single peak that appears as a laser’s frequency is scanned through the frequency corresponding to the difference between two energy levels in the atoms.

spontaneous emission

An atom in an excited state can decay down to a lower state by emitting a photon with an energy equal to the difference between the initial, higher energy level and the final, lower energy level. When this process takes place naturally, rather than being initiated by disturbing the atom somehow, it is called spontaneous emission.

standing wave

A standing wave is a wave that does not travel or propagate: The troughs and crests of the wave are always in the same place. A familiar example of a standing wave is the motion of a plucked guitar string.

stationary states

In quantum mechanics, a stationary state is a state of a system that will always yield the same result when observed in an experiment. The allowed energy states of a harmonic oscillator (Unit 5, section 5) are an example, as are the allowed energy levels of an atom. Stationary states correspond to quantum wavefunctions that describe standing waves.

superposition principle

Both quantum and classical waves obey the superposition principle, which states that when two waves overlap, the resulting wave is the sum of the two individual waves.

tunneling

Tunneling, or quantum tunneling, takes place when a particle travels through a region that would be forbidden according to the laws of classical physics. Tunneling occurs because quantum wavefunctions extend slightly past the boundaries that define where a particle is allowed to be. For example, in classical physics, an electron is allowed to move through a conductor but not through an insulator. However, if a thin layer of insulator is placed between two conductors, the electron can tunnel through from one conductor to the other because its wavefunction extends into the insulating layer.

wave mechanics

Wave mechanics is the version of quantum mechanics formulated primarily by Erwin Schrödinger in the 1920s. Following de Broglie’s hypothesis that particles can equally well be described as waves, Schrödinger set out to write down a wave equation for quantum systems and proceeded to solve it in many interesting examples. Wave mechanics is mathematically equivalent to Heisenberg’s matrix mechanics.

Zeeman effect

Each atomic energy level in which an atom has a non-zero spin splits into two or more separate levels when the atom is placed in an external magnetic field. The splitting grows with the strength of the external field. This effect is named the Zeeman effect after the experimentalist who first studied it in the laboratory, Pieter Zeeman. He received the 1902 Nobel Prize for this work, along with Hendrik Lorentz, the theorist who explained the effect.

zero point energy

The zero point energy is the minimum energy a system can have based on the Heisenberg uncertainty principle.

Content Developer: Daniel Kleppner

Portrait of Daniel Kleppner

Daniel Kleppner is the Lester Wolfe Professor of Physics, Emeritus at MIT. He is the founding director of the MIT-Harvard Center for Ultracold Atoms, funded by the National Science Foundation. He is the coauthor of two textbooks and the recipient of the Oersted Medal of the American Association of Physics Teachers and of the National Medal of Science.

Featured Scientist: David J. Wineland

Portrait of David J. Wineland

David J. Wineland received a bachelor’s degree from Berkeley in 1965 and his Ph.D. from Harvard in 1970. After a postdoctoral appointment at the University of Washington, he joined the National Bureau of Standards, now the National Institute of Standards and Technology (NIST), in Boulder, Colorado, where he is the leader of the Ion-Storage Group in the Time and Frequency Division. Dr. Wineland and Hans Dehmelt were the first to propose using lasers to cool trapped atomic ions, and in 1978 he and his coworkers conducted experiments that realized this effect. This early work led to a number of experiments in laser cooling and spectroscopy of trapped atomic ions, atomic clocks, quantum information processing, quantum-limited metrology, and quantum state control.

Interview Transcript:

Interviewer: Can you describe where you work and what you do?

DAVID: I’m a physicist at the National Institute of Standards and Technology in Boulder, Colorado. At NIST, our primary business is to try to make accurate frequency standards. One of the really nice things about working at NIST—for our group, Jim Bergquist and I and the other people working in the group—is that we have been blessed because our managers have been very encouraging to us. I think often in a university environment, the department wants individual stars, and I think NIST has been very supportive of our group environment. I can give an example of how well this has worked: Jim, myself, and two other guys put the group together in the late 70s, and basically we’ve been together this whole time. And as I say, NIST has been very supportive of this group environment, and I think it’s worked very well for all of us. For all of my career, I’ve been working on methods to try to improve atomic clocks. And basically that’s the name of the game for us—we’re just trying to make oscillators that oscillate back and forth at a very precise frequency, one that doesn’t change in time.

Interviewer: As a physicist, do you have a particular notion of time?

DAVID: Well, I think as a physicist, we tend to take maybe a low-brow approach—we don’t get too philosophical about what time is. But I think our notion of time generally is the same as everybody else’s. It’s a way to mark a series of events and durations of time, of course, and it’s just a way to mark the distance between these events. So, I would say certainly in my case, and I think for most physicists, our notion of time is very much that it is the same for everybody else: how to get to work and be there when everybody else is there. It’s really the same measure.

Interviewer: Why is there a need for better clocks?

DAVID: Well, I would say in the business of clocks, I don’t see it ending. It has always been true, for the last ten centuries at least, that when a better clock was built, a use was immediately found for it. And this particularly applies to navigation: whenever a better clock was devised, it improved our ability to navigate. We’re at the level now where, with the global positioning system, the GPS, we can routinely go to the store and buy a device that tells our position to roughly ten meters or so. And as we make better clocks in the lab, this technology eventually finds its way into the public domain. So, you might ask, well, aren’t we good enough now, navigating at this precision? And I think the answer is no. You can always find applications where we would like to have better precision. A good example that is starting to be developed now is locating any position on Earth to, say, the millimeter or centimeter level; this is very useful. For example, earthquake prediction uses this kind of data. And I think this is unending. If we could make clocks even that much better, after that application has solidified, someone will come up with an application that uses more precise clocks.

Interviewer: What are the principles behind an atomic clock and how are they different than a mechanical clock?

DAVID: Basically all clocks are made the same way. A common example, and one that everybody is familiar with, is a grandfather clock. There, typically, the length of the pendulum is adjusted so that it oscillates back and forth maybe once a second or once every two seconds, and from that, of course, we can derive the second. An atomic clock works on exactly the same principle. Rather than having a little pendulum, you can still think of an atom as having a characteristic oscillation; in fact, a vibration is a good analogy: the electrons in the atom are vibrating back and forth. They act very much like a pendulum clock. In the same way, we just count these vibrations of the atom, these oscillations, and when a certain number of oscillations have gone by, then we know a certain amount of time has elapsed. One dramatic difference in an atomic clock is that typically nowadays the vibrations we use occur not once a second or once every two seconds but more typically about a thousand trillion times a second. Nevertheless, we have the capability now to count each one of these oscillations, and when a thousand trillion have gone by, we know that a certain time has elapsed.
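
The bookkeeping he describes (count cycles of a known frequency; elapsed time is cycles divided by frequency) can be made concrete with a few round, illustrative numbers. These are not measurements from the interview.

```python
# A clock just counts cycles of a known oscillation frequency, and
# elapsed time = cycles counted / frequency. Frequencies are round,
# illustrative values.
examples = [
    ("grandfather clock pendulum", 0.5),        # one swing every two seconds
    ("quartz wristwatch crystal",  32_768.0),   # common watch-crystal frequency
    ("cesium microwave transition", 9.19e9),    # ~10 GHz, as mentioned above
    ("optical clock transition",    1.0e15),    # ~a thousand trillion per second
]

elapsed = 1.0  # one second
for name, freq in examples:
    cycles = freq * elapsed
    print(f"{name:>28}: {cycles:,.1f} cycles per second")

# Reading the clock backwards: if we have counted N cycles, the elapsed time is
N = 1.0e15
print(f"time elapsed after {N:.0e} optical cycles: {N / 1.0e15:.1f} s")
```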

Interviewer: What might affect the accuracy of different kinds of clocks?

DAVID: So, in a quartz clock—say the one sitting on your nightstand or in your wristwatch—the vibrating component is a small piece of crystal, often quartz or something closely allied to quartz. If we strike it, it will vibrate, and if we cut the crystal to a certain size and shape, then we can predict pretty well the frequency at which these oscillations occur. One of the things, though, that constrains the accuracy of crystal clocks is that we have to know the dimensions of this quartz crystal very well. And in fact, we’re limited in how well we can know the dimensions of the crystal, so quartz clocks all run at slightly different frequencies. But in the case of atoms, all atoms of a given species run at exactly the same frequency. There are no dimensions that come into play. We don’t have to engineer that. We don’t have to cut the atoms in a certain way as we did for the crystal. They all run at exactly the same frequency. So, our task is to be able to measure these frequencies. Part of the limitation that we have on atomic clocks is just perturbations and how well we can determine the frequency at which they’re oscillating. We’re always thinking of ways to try to get rid of, or suppress, the effects that affect the frequency of the clock. And one nice thing about our jobs here is that this process will never end. We’re always able to think of new ways to suppress these effects, and clocks continue to get better for that reason.

Interviewer: What steps have you taken to improve atomic clocks at NIST?

DAVID: So, one thing that’s happened in our development of atomic clocks is that the frequency of the vibrations that we use in the atoms or molecules has increased. For example, in the cesium atom, the characteristic oscillations occur at about 10 gigahertz, which means about ten billion cycles per second. Now, in the more modern clocks that we’re pursuing, the vibrations that are important to us are typically the vibrations of electrons in their orbits, and these occur at about a thousand trillion times a second; in other words, about a hundred thousand times faster than the oscillations that occur in the cesium atom. Fundamentally, it isn’t so important that it’s oscillating faster, but in practice it turns out that using higher frequency vibrations gives us an advantage. The simple way to think about that is that, in any given unit of time, the oscillations essentially divide that unit of time up into finer and finer steps. So, by going from cesium to optical clocks, we’ve divided up any unit of time, say the second, by a factor of a hundred thousand. We just divide these increments of time into finer and finer units. In fact, that’s one of the main reasons we get higher precision.

Interviewer: Describe the basic quantum mechanical principles behind the mercury ion clock.

DAVID: In our mercury clock—and the same idea would apply for any other optical clock—what we do is prepare the mercury ion in its ground state. Then we apply radiation at these roughly thousand trillion cycles per second. When the radiation is tuned to a particular value, it causes the mercury ion to go from a lower energy state to a higher energy state, and that occurs with maximum probability when this incoming radiation is tuned to the exact vibration frequency of this transition in the mercury ion. So, what we do in the lab is make an apparatus that allows us to tell when the atom has gone from its lower energy state to its higher energy state. Now, in principle, the way we could detect it is to wait for the atom to decay back to the lower energy state—what we call the ground state—and catch the photon coming out and measure it. It turns out that’s not a very efficient process for detection. So, what we do in the case of mercury is play a trick that involves the same lower energy state of the atom—the ground state. We have another laser, and when we shine that laser on the atom, if the atom is in the lower energy state, it will scatter light at a very high rate. So, let’s say the clock radiation is not tuned to the right frequency; then the atom remains in its ground state. It can’t make the transition to the excited level of the clock transition that we’re using. When that happens—when the radiation is detuned—then when we shine the second laser on the atom, we see scattering. In fact, the atom lights up. It’s a little dot that, if the radiation we’re using weren’t in the ultraviolet, we could actually see with our eye—a single atom. So, the idea is that we use the second laser as a kind of discriminator. It tells us when the atom is in the ground state. On the other hand, when the clock radiation is tuned exactly right—when the atom goes from this ground state to the upper state we’re using for the clock—then when we shine the second laser on the atom, it doesn’t scatter anymore. So, we can clearly see this effect. The atom is either bright, or it’s dark. And as I say, in the lab, we’re not able to see this with our eye; it’s actually bright enough to see, but it’s in the ultraviolet. We typically just use a simple video camera, very similar to one you could buy at the store but sensitive to ultraviolet light, and we see the atom blink on and off, and we know whether it has made the transition or not.

Interviewer: Just how precise are your single-ion clocks?

DAVID: Often what’s said about our advanced optical clocks is that if we made a clock—and we could keep it running long enough—it would neither lose nor gain a second in roughly the age of the universe. Well, it’s a tough task to keep a clock running that long, but nevertheless, that gives an idea of the performance.
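
As a back-of-the-envelope check of what that statement means as a fractional frequency uncertainty (using the commonly quoted ~13.8-billion-year age of the universe, an assumption not stated in the interview):

```python
# Translating "a second in roughly the age of the universe" into a fractional
# uncertainty. Order-of-magnitude illustration only.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
age_of_universe_s = 13.8e9 * SECONDS_PER_YEAR

fractional_error = 1.0 / age_of_universe_s
print(f"age of the universe ~ {age_of_universe_s:.2e} s")
print(f"1 s over that span  ~ {fractional_error:.1e} fractional error")
# ~ 2e-18: the clock's frequency would have to be stable and known to about
# two parts in a billion billion.
```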

Interviewer: Are there effects that limit the precision of these advanced clocks?

DAVID: Yes. Well, we often use the term frequency jitter. There are myriad effects that can cause the frequency to change. A common one that we have to worry about is that if the local magnetic field changes slightly, it will cause the vibrations to run at a slightly different rate. If the field fluctuates, then the frequency is going to change and fluctuate as well. A good example is the earth’s magnetic field, which in part depends on the local environment and temperature. So, for example, when the temperature changes, the earth’s field can change slightly, and we have to worry about those effects. Another thing in the lab, where it’s sort of a nightmare we have to worry about all the time, is that our electronic equipment, which derives power from the 60 hertz electricity that we get out of the wall, causes the frequency of the clock to move back and forth at 60 hertz. This you might classify as a form of jitter—something that’s happening fairly rapidly—that nevertheless causes a frequency error in our clocks. Our main job, then, is to try to control these environmental effects, what we call systematic effects, as well as we can. In the end, that’s what limits the accuracy or the precision of the clocks: just how well we can control these environmental effects—or, more to the point, our inability to control them as precisely as we’d like.

Interviewer: What other things might affect the frequency of the clock?

DAVID: In very simplistic terms, what we do is go through every known force of nature and try to understand how these forces might affect the clock. In the end, the dominant ones are electric and magnetic fields—electromagnetic fields—and they come from various sources. I mentioned, for example, the ambient magnetic fields that might be in the lab. But there are some more interesting ones. For example, gravity affects the rate at which clocks run. This effect comes from Einstein’s theory of general relativity, and one of its consequences is that clocks placed near a gravitational mass, say the Earth, will run at a slower rate than clocks removed from the source—say, clocks on a satellite. Nowadays the precision of the clocks is such that, when we compare clocks, if one clock in one lab is 30 centimeters higher than the clock in another lab, we can see the difference in the rates at which they run. And this is an extremely small effect that we haven’t had to worry about before.
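
The 30-centimeter effect follows from the gravitational redshift for a small height difference near Earth's surface, delta_f/f ≈ g·delta_h/c². Here is a back-of-the-envelope sketch, not the published analysis:

```python
# Gravitational redshift for a small height difference near Earth's surface:
#   delta_f / f ~ g * delta_h / c^2
g = 9.81        # m/s^2
c = 2.998e8     # m/s

def fractional_shift(delta_h_m):
    return g * delta_h_m / c**2

for dh in (0.30, 1.0, 1000.0):   # 30 cm, 1 m, 1 km
    print(f"height difference {dh:>7.2f} m -> delta_f/f ~ {fractional_shift(dh):.1e}")
# Raising a clock by 30 cm shifts its rate by a few parts in 10^17, which
# today's best optical clocks can resolve.
```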

Interviewer: Will increasing the accuracy of clocks allow the possibility of discovering new physics?

DAVID: Yes, well, our job as clock makers is to try to identify everything that might affect the frequency of the clock, and then we compensate or correct for that. But on the more fundamental side, what we’re always hoping for is that we might discover something that nobody has seen before. One of the things that has surfaced over the last decade or so is that clocks that derive their basic frequencies from different physical principles might see the ratio of the frequencies they run at start to evolve over time. So, in fact, as a result of making better clocks, we’re able to search for these fundamental effects. In our recent comparisons of two optical clocks in our labs here, we have been able to put some of the most stringent limits on any drift in the ratio of the frequencies these clocks run at. We haven’t seen anything yet, but if we saw something, first of all we’d have to expect that we just did something wrong—we didn’t account for, say, a magnetic field in the right way. But if we could somehow rule out all the effects that we know about, we might be able to say, well, there’s a new fundamental effect there. I think any physicist’s dream is to discover some new effect—and of course, we aren’t banking on this, but we certainly want to keep our eyes out. So, we continue to perform these checks as the performance of the clocks gets better. And before we would say anything very loudly, we would have to check for a very long time to make sure it wasn’t some simple environmental effect.

Interviewer: What parts of your work have you found most rewarding?

DAVID: I think you’ll probably get a similar answer from Jim when we compare notes about our careers here. Maybe there’s some paperwork we’d rather not do, but I would say we’re pretty lucky with our jobs, because we get to play with fancy toys and yet it has a purpose. It’s like a really fancy video game where you can do amazing things. The only difference, but an exciting difference, is that we don’t know the rules of the game exactly. So, part of the game is to find out the rules: how does nature work? I think that’s one of the things that continues to make it exciting for us.

So, the other part of this physics is that it does involve mathematics. And I think one of the things I always liked was how the things we work on can be described by mathematics, quantum mechanics, for example. It’s maybe more complicated than simple algebra, but on the scale of what the mathematician thinks about, it’s not really that complicated. And yet, I never cease to be satisfied by how we can use this relatively simple math to describe what seem to be fairly complicated situations, including how our atomic clock works.

Featured Scientist: Martin Zwierlein

Portrait of Martin Zwierlein

Martin Zwierlein joined the Department of Physics at MIT as an assistant professor in the fall of 2007. He studied physics at the University of Bonn and at the École Normale Supérieure in Paris, where he received his undergraduate degree and a Master’s degree in theoretical physics in 2002. His doctoral thesis in experimental atomic physics was completed in 2006 while working in the Wolfgang Ketterle group at MIT. His research focused on the observation of superfluidity in ultracold fermionic gases, a novel form of strongly interacting matter. From 2006 to 2007, he was a postdoctoral research associate at the University of Mainz in the Immanuel Bloch group. At MIT he studies ultracold quantum gases of atoms and molecules. Just a few billionths of a degree above absolute zero and a million times thinner than air, these gases provide ideal model systems for many-body physics in a clean and controllable environment.

Interview Transcript:

Interviewer: How does this apply when you start to look at systems that have strong interactions between fermions?

MARTIN: So, usually for weak interactions, as is the case for metals that become superconducting at very low temperatures, you can write down approximate wavefunctions that describe the situation extremely well. But once we crank up the interactions between fermions, once we make these interactions very strong, we are at a loss, because now there are no approximations that work anymore. Approximations usually mean that you throw out some part of your problem which may be small, not so important, so that maybe you can neglect it. If you have strong interactions, every fermion talks to every other one, and you cannot neglect that. You cannot just throw out a bunch of fermions and say, okay, just look at the interaction between this guy and that guy, and don't incorporate interactions with the others. If you have strong interactions, you have to take everything into account, and that is just terribly complicated. So, that is exactly what many-body physics thinks about the whole time: how can we handle strong interactions even though there is no approximation that works anymore?
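
To get a sense of why no computer can treat this exactly, one can simply count quantum states. The sketch below is a rough illustration only (the lattice sizes and the half-filling choice are assumptions, not something discussed in the interview): it counts the basis states needed to describe equal numbers of spin-up and spin-down fermions spread over a small number of lattice sites, the way exact numerical studies of interacting fermions are usually set up.

    from math import comb

    # Number of many-body basis states for a two-component Fermi gas on a
    # lattice: choose which sites hold the spin-up fermions, and independently
    # which sites hold the spin-down fermions.
    def hilbert_dim(sites: int, n_up: int, n_down: int) -> int:
        return comb(sites, n_up) * comb(sites, n_down)

    for sites in (8, 16, 24, 32):
        n_up = n_down = sites // 2          # half filling, as an example
        dim = hilbert_dim(sites, n_up, n_down)
        print(f"{sites:2d} sites: {dim:.2e} basis states")

Already 32 sites give on the order of 10^17 basis states; the growth is exponential, which is why 10^23 particles are hopeless to treat exactly and why clever approximations and model systems are needed instead.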

Interviewer: You mentioned superconductors. What other problems might be approachable by looking closely at strongly interacting fermions?

MARTIN: Fermions with strong interactions turn up in all kinds of systems in nature. In condensed-matter physics, we have high-Tc superconductors that we don't understand; we would like to get the critical temperature up higher to finally get a room-temperature superconductor. Colossal magnetoresistance materials would be fantastically useful for hard drives and our USB sticks; hard drives already make use of the giant magnetoresistance effect in these interesting materials. We find it at the beginning of the universe, in the quark-gluon plasma that was created in the split second after the Big Bang: strongly interacting fermions that are very hard to model. And inside the atomic nucleus. The atomic nucleus itself is a strongly interacting fermionic system where protons and neutrons pair up. Or inside a neutron star, where you have in the crust some kind of neutron superfluid and, in the center, a mixture of quarks that interact and pair up with each other. All these different systems belong to the same category of strongly interacting fermions. We don't understand them well. We cannot model them well, but we hope to have a model system that explains them better: using our atoms as model matter in the lab, to probe these strongly interacting fermions in a controlled environment, one where we can tune the interactions and the distance between particles at will and understand how the many-body system works.

Interviewer: What happens when you cool the atoms to ultra-cold temperatures?

MARTIN: So, at high temperatures, you can think of atoms simply as little billiard balls that are moving around at a high speed, on the order of a thousand meters per second, and they move around, as I show here with the velocity of those particles, like crazy, in some erratic kind of fashion. Now, it turns out, as you lower the temperature, eventually you should not think of the particles as billiard balls anymore. Quantum mechanics takes over and the particles become waves. At low temperatures, each of these particles is actually smeared out over a certain matter wave, over a certain range which we call lambda; that's roughly the size of these matter waves. And it turns out, the colder you are, the lower the velocity is, and the larger this wavelength, the larger these waves are. De Broglie told us what the relationship is: the wavelength is given by Planck's constant h divided by the mass of the particle times the velocity. So, at very low temperatures, all these waves eventually start to overlap, once the wavelength is on the order of the interparticle distance. In this situation, it now becomes crucial whether you are a boson or a fermion. If you are a fermion, you really want to stay away from the other guys and form a different wavefunction. All the fermions have to distinguish themselves by their wave; they cannot all occupy the same wave. Bosons, instead, will decide to all join up into one big macroscopic matter wave at low temperatures, below a certain critical temperature, and that is the Bose-Einstein condensate.
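
The relation quoted here is the de Broglie wavelength, lambda = h / (m v). A rough numerical sketch (the lithium-6 mass and the two speeds are illustrative assumptions, not values given in the interview) shows how the matter wave grows from far below an atomic diameter at room-temperature speeds to about a micrometer, comparable to the spacing between atoms in these dilute gases, once the atoms crawl along at only centimeters per second:

    # de Broglie wavelength lambda = h / (m * v) for a lithium-6 atom
    # at a "hot" and an "ultracold" thermal speed (illustrative values).
    h = 6.626e-34        # Planck's constant, J*s
    m_li6 = 1.0e-26      # mass of a lithium-6 atom, kg (approximate)

    for label, v in [("hot, ~room temperature", 1000.0),
                     ("ultracold, ~1 microkelvin", 0.07)]:
        lam = h / (m_li6 * v)
        print(f"{label}: v = {v} m/s -> lambda = {lam:.1e} m")

    # At densities around 10^12 atoms per cm^3 the interparticle spacing is
    # roughly n**(-1/3) ~ 1e-6 m, so the matter waves begin to overlap near
    # a microkelvin, which is where quantum statistics takes over.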

Interviewer: This is the story of bosons, but now what about fermions?

MARTIN: Well, fermions don't want to be together in the same quantum state. So, we don't have this large macroscopic matter wave. Fermions first have to pair up. Say here I have a spin-up fermion and here I have a spin-down fermion. These guys have to pair up to form a boson, and then these bosons can again form a Bose-Einstein condensate. So, here I draw a couple of fermion pairs, each of which eventually forms a boson. All these pairs then have a wave associated with them, this blob, this blob and this blob, and these fermion pairs then form one macroscopic wavefunction. So, I can again draw this at zero temperature: all the fermion pairs are again in the same big macroscopic wavefunction, but now it is the fermion pairs. It is not the fermions themselves, but the fermion pairs. Once the fermions have found a partner to form a pair with, then these pairs can form the Bose-Einstein condensate, and then we can again draw one macroscopic wavefunction for the fermion pairs.
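
Why a pair of two fermions counts as a boson follows from the same counting rule mentioned earlier. In standard angular-momentum notation (added here for clarity, not part of the spoken answer):

    \tfrac{1}{2} \otimes \tfrac{1}{2} = 0 \oplus 1

Two spin-1/2 fermions combine to a total spin of 0 (a singlet) or 1 (a triplet), both integers, so the bound pair obeys Bose statistics and can Bose-Einstein condense.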

Interviewer: Once you have a condensate of the fermion gas, describe your rotation experiment.

MARTIN: So, in the lab, we deal with a puff of gas, a million times thinner than air, and we need to make it rotate. So, how do you do that? Well, we can take yet another laser beam; in our lab, it's a green laser beam. We then rotate it around this puff of gas at a certain speed, about 100 hertz, a hundred times per second, and that will excite the gas and make it start to rotate. So, as we rotate this puff of gas with our green laser beam, the edges of the cloud, which are not quite superfluid, maybe still normal, get carried around and start to rotate, and they will tell the superfluid to also rotate.

Interviewer: How does the superfluid rotate?

MARTIN: Well, it has to have vortices inside. Now, these vortices cannot just suddenly pop up in the center of the superfluid. That would violate conservation of angular momentum in the superfluid. They have to come from the outside. So, at the rim of our puff of gas, we see some fluctuations that appear here and there. These are already the vortices, and sometimes, after a while, they make it inside. Some of them make it inside, and then some more, and then some more, and eventually we end up with a huge vortex lattice piercing through the superfluid core, and that is the final state. That is what happens if you keep the rotation on, but it also survives in a metastable way: if you stop the rotation after a while, the vortices still remain there because, as a superfluid, it doesn't even feel friction. So, at zero temperature, it would rotate forever. We are not at zero temperature, so eventually, if you stop the stirring, the normal cloud will spin down and it will also drag the superfluid with it, meaning that the vortices will eventually disappear after the spin-down, but the lattice lives for a very long time, even without any rotation.
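
The reason the rotation has to enter as discrete vortices is that circulation in a superfluid is quantized. In standard notation (added for clarity; for a fermionic superfluid the circulating objects are pairs of mass 2m, with m the atomic mass):

    \oint \mathbf{v} \cdot d\boldsymbol{\ell} = n \, \frac{h}{2m}, \qquad n = 0, 1, 2, \ldots

The superfluid cannot rotate rigidly; it takes up angular momentum one circulation quantum at a time, each quantum appearing as a single vortex line, and many such lines arrange themselves into the regular lattice described above.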

Interviewer: How do you look at your cloud?

MARTIN: Well, we have this tiny puff of gas in here, a little cloud. It is maybe a hundred micrometers in size, roughly, and we want to know what is going on inside that cloud. So, what do we do? Well, we expand it. We open up our trap and let this puff of gas expand, and then it will be a large puff of gas, a couple of millimeters in size, after expansion. And then we shine in a resonant laser beam; all of this is laser light. We shine it onto our atoms, the atoms cast a shadow in that laser light, and we image the atoms onto a camera.
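
The shadow picture is typically turned into an atom-density map with Beer-Lambert absorption imaging. The sketch below is a minimal, hypothetical version of that reconstruction (the function name, the clipping of dark pixels, and the 671 nm lithium wavelength are illustrative assumptions, not details taken from the interview): divide the image taken with atoms by a reference image taken without atoms, take the logarithm to get the optical depth, and divide by the resonant scattering cross-section.

    import numpy as np

    def column_density(img_atoms, img_probe, wavelength=671e-9):
        """Atoms per m^2 along the line of sight, from two absorption images.

        img_atoms : camera frame taken with the atom cloud in the beam
        img_probe : reference frame taken without atoms
        wavelength: imaging wavelength (671 nm for lithium, as an example)
        """
        sigma0 = 3 * wavelength**2 / (2 * np.pi)   # resonant cross-section
        # Clip to avoid dividing by or taking the log of zero-count pixels.
        od = -np.log(np.clip(img_atoms, 1, None) / np.clip(img_probe, 1, None))
        return od / sigma0

Vortex cores contain no atoms, so in the reconstructed column density they appear as holes, the white patches described below.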

Interviewer: If you have created vortices in that cloud, how would you see them?

MARTIN: Well, the vortices are mini whirlpools, so they are little tubes that run through the cloud, empty tubes, hollow tubes, where the light simply passes through; and so, in our image, we will see lots of white patches exactly where the vortices are. So, where the vortices are, all the light makes it through. Once the atoms have scattered all these photons from the laser beam, they are hot, the superfluid has broken down, and the cloud is flying apart anyway. So, we have to repeat the shot, maybe varying some wait time or some other parameter of the experiment, repeat the shot with the exact same initial conditions, and then, forty seconds later, we see the second shot on the screen, and then we take a third shot, and a fourth shot; and to take a movie in that sense takes maybe fifty images, each of a newly prepared cloud. So, it is a little tedious.

Interviewer: What motivates you to continue these experiments?

MARTIN: So, what we hope to achieve with these atomic systems is to understand strongly interacting fermions much better than we know how to describe them today. This correspondence between theory and experiment, this constant exchange, will hopefully give us much better theoretical tools in five or ten years for studying strongly interacting fermions than we have today, by constantly testing these techniques on atomic systems that we can control very well in the lab. So, it will be a back and forth between theory and experiment. The theorists come up with a model, the experimentalists realize that model, and we can match theory and experiment in a quantitative way. Does the data fit their theory? Hopefully it agrees with what they have come up with; otherwise, they have to go back to the drawing board.

It is lots of fun. You get to play, I sometimes say, with nature herself. You can tickle the atoms, play with them, add another laser beam, and see what happens. You can apply a gradient, split them apart, bounce them off each other, and see how nature reacts to this unusual situation in which you place her. And this is really unusual, because there is no other experiment you can do where you have such a well-prepared situation: here are the spin ups, here are the spin downs, now you bang them into each other; what happens? Usually, in physics, things are rather complicated, and I haven't found any other system where you can be so sure about what you have in front of you and what the interactions are, and where you can even switch off the interactions if you want to. So, it is immense control that we have: the fact that only a few people in the lab can make a difference in the understanding of what is going on, the fact that we can make quantitative experiments that really give the number, the hard number that we give to the theorists to say, okay, please, you should get that. This is the outcome. That's your theory. Get that.

Interviewer: What new directions are you looking forward to exploring in the future?

MARTIN: It turns out, one dream that we all hope to reach is to have perfect control of the internal and external degrees of freedom of the atoms, and that includes the spin degrees of freedom. We want to be able to control whether the system is a ferromagnet or an antiferromagnet. We want to be able to address atoms in an optical lattice, on single sites, and have perfect control over the degrees of freedom of these atoms. If we have that, if we have a system that is controllable, that is addressable, where we can address every single atom and manipulate it using microwave circuitry, then what we have actually built is a quantum computer, because now we can use these spins of atoms in optical lattices as qubits, one, zero, or one plus zero, and we can bring them into contact with each other, build logic gates for this quantum computer, shuffle them around, move them around, and read out the outcome of a quantum computation. This is another aspect of the story: on one hand, we want to understand many-body physics and we want to create model systems that help us understand current models; but, at the same time, while we are doing this, we will be coming closer and closer to a real quantum computer that allows us to do computations that a classical computer cannot do.
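
As a very rough illustration of what "one, zero, or one plus zero" means for a single atom used as a qubit (a generic textbook sketch, not a description of this particular apparatus), the two spin states can be written as basis vectors and gates as unitary matrices acting on them:

    import numpy as np

    # Two spin states of one atom serve as the qubit basis.
    ket0 = np.array([1, 0], dtype=complex)   # |0>, e.g. spin down
    ket1 = np.array([0, 1], dtype=complex)   # |1>, e.g. spin up

    # A Hadamard-like rotation: the kind of single-qubit gate a resonant
    # microwave pulse of the right duration can implement (up to phases).
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    superposition = H @ ket0                 # "one plus zero": (|0> + |1>)/sqrt(2)

    # A controlled-NOT between two neighboring atoms, the standard
    # entangling logic gate, written here as a 4x4 permutation matrix.
    CNOT = np.eye(4, dtype=complex)[:, [0, 1, 3, 2]]
    bell = CNOT @ np.kron(superposition, ket0)   # (|00> + |11>)/sqrt(2)
    print(superposition, bell)

Reading out the spins after a sequence of such gates is what reading out the outcome of a quantum computation amounts to in these lattice systems.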

Credits

Produced by the Harvard-Smithsonian Center for Astrophysics Science Media Group in association with the Harvard University Department of Physics. 2010.
  • ISBN: 1-57680-891-2