
Physics for the 21st Century

The Fundamental Interactions: Interview with Featured Scientist Ayana T. Arce

Interviewer: So what motivates your research?

AYANA: In our research, we are trying to search for and characterize the unknown. Basically, the point of high-energy physics research is to understand the fundamental interactions of matter and energy in their simplest form. We’re trying to look at the way matter and energy behaved, for example, a few moments after the Big Bang.

In the broad view, I’m looking at what happens when elementary particles collide—when they interact and transfer a lot of energy between each other. Mostly, I look at the interactions of quarks and gluons because that’s what a proton-proton collider gives us. And what I’m looking for specifically is the behavior of the heaviest quark known to us—the top quark. This particle is so heavy that we’ve begun to suspect that there might be something funny about it.

Interviewer: Could you tell me a little about using the Large Hadron Collider?

AYANA: While using the Large Hadron Collider, we are trying to produce new kinds of particles and interactions and study these interactions to put together a clearer picture of how the universe works at its most fundamental level. So, we’re looking at the collisions of protons. And within those collisions, we expect to produce a lot of familiar particles like W and Z bosons, but we also hope to produce some particles that we haven’t seen before like the Higgs boson—perhaps supersymmetric particles, or something entirely unexpected.

Every proton is the same as every other proton in physics. There’s no difference except for their environment—and the environment of the LHC collider at both collision points is a vacuum. There’s nothing there. So, there’s nothing fundamentally that should be different about the proton here—or the proton in the early universe. This is why we’re doing this—because all of these things are fundamentally the same.

So, all of us in particle physics are ultimately interested in what happens at the moment of a proton-proton collision. What happens when the constituents of the protons—the quarks or the gluons—interact? But in order to understand what happens there, what we have is a tremendous detector, which is basically trying to take a digital snapshot—not of the collision itself but of its aftermath. So, going from what was left in the detector—the electronic signals—to the particles that those signals represent is one step that takes a lot of software understanding.

The second step is to go from all of those particles—the aftermath of the collision—to actually identifying what that collision was. That also takes a lot of software: simulations of particle collisions help us interpret the ones that we’ll actually see.

Interviewer: What is the ATLAS experiment?

AYANA: We need acronyms for everything. And ATLAS stands for A Toroidal LHC Apparatus, which makes a much better acronym than it does a name.

The LHC is basically a facility that’s providing hadron collisions to several experiments. ATLAS is one of them. ATLAS and CMS are the two general-purpose experiments designed primarily to study proton-proton collisions.

ATLAS is a general-purpose experiment. And the reason that it’s built to measure everything about the aftermath of a collision is because some of the physics we’re trying to study—we don’t know what it looks like yet. We think there’s a Higgs boson, and we’ve designed the experiments to be able to look for it in the many ways that it might show up. But, we also hope that there’s more physics out there that we can look for and we have even less of an idea what that might look like. We know that we want to find heavier particles because we’re using a machine with higher energy than we’ve used before.

Interviewer: The LHC is the supplier of your raw material.

AYANA: Right. If we were astronomers, the LHC would be the universe that we’re trying to study and we’re all telescopes looking at what takes place there.

The LHC is really a discovery machine because it’s colliding protons at the highest energy that anyone has ever seen. These collisions can produce basically anything that’s allowed by the laws of the universe. Quantum mechanics tells us that we can create any allowed state in these high-energy collisions. That’s why we hope that the LHC will not only produce Higgs bosons, which we expect, but will also produce whatever particles are part of the higher theory that will encompass the Standard Model. We hope we’ll have the energy to produce them at the LHC.

Interviewer: Can you briefly describe the Standard Model?

AYANA: The Standard Model is elementary particle physics, as we know it today. It’s sort of a list of the kinds of matter that we know exist and a list of the ways that we know the matter interacts. So, the kinds of matter that we know about are quarks and leptons and neutrinos, along with the ways that these particles interact.

There are many reasons to look for theories beyond the Standard Model and the best reason by far is based on observation. We know that there are kinds of particles that aren’t included in the Standard Model. And so, this is an obvious reason why we have to expand this theory to encompass things like dark matter, which aren’t explained at all in the theoretical framework that we have.

In the case of dark matter, we haven’t produced it in the laboratory; and so, we haven’t been able to study it the way that we’ve studied other elementary particles. We know it’s there because it gravitates and it has a big impact on the evolution of the cosmos, but we haven’t been able to observe what it’s related to in terms of the other particles that it might be able to change into. So, one of the goals of the experiments at the LHC is possibly to produce the parents of dark matter—particles that can decay into it.

Eventually, I think we’ll start to call it something other than the Standard Model.

Interviewer: Perhaps “The Nearly Complete Model”?

AYANA: That’s right. It’s hard to say.

Interviewer: Is there a difference between “forces” and “interactions”?

AYANA: There’s not really much of a difference between a force and an interaction. Both describe the kinds of things that can happen to an elementary matter particle. So, if you consider an electron, it feels an electromagnetic force because it can interact with photons. That’s sort of the same story for all of the particles.

The equivalence of matter and energy is a key concept in particle physics… The equation E = mc², which sort of describes the equivalence between mass and energy, has a place in describing how gravitational interactions happen. But it’s also extremely important to us because, when we look at elementary particle interactions, we often go from a state that is characterized by energy to a state that’s characterized by mass. An energetic virtual particle like a photon can decay into massive children.

The equivalence of mass and energy is fundamental to the whole idea of the LHC. One of its goals is to create new massive particles—things we’ve never seen before. And the way to get there is to create collisions—create states that have higher and higher energy. And when the energy is high enough, these new heavy massive particles can just appear. The LHC, being the highest energy collider in the world, will be the place where it’s easiest to make these new heavy particles that haven’t been observed anywhere else.
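
As a rough worked example of this conversion from energy to mass (the numbers here are approximate and are not taken from the interview): the top quark has a mass of roughly 173 GeV/c², so producing a top quark together with its antiparticle requires at least

\[
E_{\min} = 2\,m_t c^2 \approx 2 \times 173\ \mathrm{GeV} \approx 350\ \mathrm{GeV}
\]

of energy concentrated in the colliding constituents, an amount the multi-TeV proton-proton collisions at the LHC can supply.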

Particle physicists don’t use all of the things that they learned in undergraduate mechanics every day. But, something that I think we do use almost every day is the conservation of energy and momentum. This is an important tool for us because it helps us to reconstruct not only the particles that we did see, but also the particles that we didn’t.
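
To illustrate how conservation of momentum lets physicists infer particles they did not see, here is a minimal Python sketch with hypothetical momentum values (none of these numbers come from the interview). In the plane transverse to the beams the momenta must sum to zero, so any imbalance points back at whatever escaped the detector.

import math

# Transverse momentum components (px, py) of the reconstructed visible
# particles in one hypothetical event, in GeV.
visible = [(40.0, 10.0), (-25.0, 30.0), (-5.0, -15.0)]

# Momentum conservation in the transverse plane: the components should
# sum to zero, so any imbalance is attributed to unseen particles
# (neutrinos, or perhaps something new).
sum_px = sum(px for px, _ in visible)
sum_py = sum(py for _, py in visible)
missing_px, missing_py = -sum_px, -sum_py
missing_pt = math.hypot(missing_px, missing_py)

print(f"missing transverse momentum: {missing_pt:.1f} GeV "
      f"in direction ({missing_px:.1f}, {missing_py:.1f})")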

Interviewer: Can you tell us about your current work please?

AYANA: One of the things that I am doing right now is working with the simulation of one of the detectors in the ATLAS experiment: the calorimeter. We have a very good microscopic simulation of the calorimeter but sometimes we want something that’s approximate but a little bit faster. So, when working with the simulation, what I’m doing is trying to generate a quick way of understanding basically how the detector will respond to particles of different types so that we can distinguish those kinds of particles and also how the detector will respond to particles of a certain energy so that we can understand the measurements that we’ve made of that energy.
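
A toy version of the parametrized ("fast") calorimeter response she describes, written in Python with invented resolution numbers rather than real ATLAS parameters: instead of simulating every particle interaction in the material, the measured energy is drawn from a smeared distribution around the true energy.

import random

def fast_calo_response(true_energy_gev, stochastic=0.10, constant=0.01):
    """Return a smeared energy measurement in GeV.

    The fractional resolution is modeled as a stochastic term that
    shrinks like 1/sqrt(E) plus a constant term, added in quadrature,
    which is a common parametrization for sampling calorimeters.
    (The parameter values here are illustrative, not ATLAS values.)
    """
    frac_res = ((stochastic / true_energy_gev ** 0.5) ** 2 + constant ** 2) ** 0.5
    return random.gauss(true_energy_gev, frac_res * true_energy_gev)

# Average response to a 50 GeV particle over many simulated "events".
samples = [fast_calo_response(50.0) for _ in range(10000)]
print(f"mean measured energy for 50 GeV input: {sum(samples) / len(samples):.2f} GeV")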

Interviewer: Is there uncertainty in your measurement? You’re using a huge machine that could be affecting these elementary particles.

AYANA: We don’t affect the proton-proton collision because this happens in a vacuum far from any of the instruments that we’ve built. But, the particles that come out through the detector have all kinds of interactions that we expect with the detector—these are the ones that leave energy, which we record and understand. But they also have interactions that we don’t measure. They can lose energy in pieces of cable or copper or insulation. These interactions do make our measurements less precise than they would be in an ideal detector. This is one of the things that the simulation has to understand very carefully—not only the detector/particle interactions that help us to understand what the particle is doing, but also the detector/particle interactions that just sort of mess up our measurements.

Interviewer: Is the upper energy level arbitrary because of budget? Or, are physicists hoping to find something specific at that level?

AYANA: The energy of the LHC is nearly ten times the energy of the next most energetic machine. And this leap in energy should open up a new world of physics at what we call the Terascale. There are a lot of reasons to think that the particles we’re interested in—specifically the Higgs boson, but also others—will be in an energy range that the LHC can access. But the other reason for the upper limit on the LHC energy is, of course, technology. The LHC uses the world’s best superconducting magnets in order to keep these protons in their path and to collide them. And so, the energy of the LHC is the best that we can do, but it’s also probably good enough for the physics that we’re interested in.

Interviewer: What is the software that you work with?

AYANA: The software that I work on is the software that the ATLAS experiment as a whole uses. A lot of the tools that we develop and use are shared by almost everybody because everybody has the same initial goal, which is to analyze the aftermath of the proton-proton collision to figure out what those particles were, how much energy they had, and where they were going. And so, everybody uses these common elements of the software. Now, every physicist at the end of his or her work writes a little bit of specialized software, which calculates exactly the quantities for the events that they’ve selected out of the billions given to us by the LHC—quantities that will either validate the Standard Model or disprove it, or that might tell them more precisely the mass or the interaction properties of the particle that they’re trying to study.

Interviewer: When you incorporate data into your software are you using inferred data?

AYANA: The ATLAS detector can’t tell us everything that we want to know about a collision. The individual elements of the ATLAS detector tell us things like a particle went by here or there was a certain amount of energy left by some particle here.

After our software has put together the reports from each of these detector elements into a picture of all of the particles that came out from the collision, then we use our knowledge of physics to take that story about all of those particles and try to understand what took place in their interaction—and what that interaction tells us about particle physics in general. But the detector and the software cannot tell us everything about the interaction. They can’t measure all of the particles and they can’t tell us with one hundred percent precision any of their properties. So, there’s an inherent uncertainty in the measurements that we make. It’s something that we have to take into account when we extrapolate from those measurements to a hypothesis.

The role of the software is really to put together the picture that the detector has taken for us. It coordinates the reports from all of the different detector elements, saying a particle was here—a particle left this much energy here, and puts that together into a picture of the aftermath of the collision—what particles the collision produced. Now once we’ve seen the aftermath of the collision, we start to be able to have an idea of what that collision actually was. But in order to quantify what we think might have happened, what we often do is simulate particle collisions and their interaction with the detector and compare a lot of simulated events to a lot of recorded events. And if this comparison matches up, then we think we understand not only what our detector was doing, but also what took place in that initial collision.

One of the first things that we do when we compare the data to the simulation is this: by comparing processes that we understand very well, we can see if the description that we’ve made in the simulation of the detector—how it performs, how it operates—is actually accurate. And very often, we are too optimistic. We think the detector is better than it is. So, we adjust the simulation in order to have a more realistic picture of the detector’s operation. Once all of those little tunings and tweakings are done and we have a detector simulation that describes events we understand very well in the same way as the data does, then we can start to look at events that we don’t understand in the data. We can look in new corners of phase space—new kinds of interactions. And since we now trust the detector simulation, by comparing that simulation to those interactions we can see if the kind of physics that we already know about is taking place or if it’s something new.
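
A toy Python sketch of that tuning step, with invented numbers: measure how the simulation differs from the data in a process that is already well understood, and carry the resulting correction over to the prediction in a region where something new is being sought.

# Derive a data/simulation correction from a well-understood control
# sample and apply it elsewhere. All yields below are invented.
data_yield_control = 10500.0   # events observed in a well-understood process
sim_yield_control = 11600.0    # events predicted by an over-optimistic simulation

scale_factor = data_yield_control / sim_yield_control
print(f"simulation correction factor: {scale_factor:.3f}")

# The same correction is applied to the simulated prediction in a
# search region before comparing it to data there.
sim_yield_search = 42.0
print(f"corrected background prediction: {scale_factor * sim_yield_search:.1f} events")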

Interviewer: What are some of the elements of the software that physicists use?

AYANA: The software that we use has a lot of different pieces. The fundamental piece, which is analogous to the fundamental collision at the LHC, is the physics Monte Carlo. So, like the LHC collisions, this is a program that’s built to generate the interactions of primary particles. The Monte Carlo does exactly for us what the LHC does, except in simulation; so we know exactly what’s happening. It generates the interaction of the parts of the proton that collide and then everything that comes out from that.

The next part of the software is the detector simulation, which represents the ATLAS detector. It describes how the different sensitive elements of ATLAS interact with the particles that are produced by the Monte Carlo. And the final stage of the simulation data chain is the reconstruction software. This should be absolutely identical to the software that we run on actual data coming from the LHC because it interprets the signals that were generated—in the case of the LHC, by the real ATLAS detector and in the case of our simulation, by the simulated ATLAS detector. But by applying the same reconstruction processes, we have an idea of what we will see when we do reconstruct LHC collision data.
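
A very schematic Python sketch of that three-stage chain (event generation, detector simulation, reconstruction). Every function here is a toy stand-in for software that is in reality enormously more detailed, and none of the particle types or numbers are meant to be realistic.

import random

def generate_event():
    """Physics Monte Carlo stand-in: produce 'truth' particles with known energies."""
    return [{"type": random.choice(["electron", "muon", "pion"]),
             "energy": random.uniform(10.0, 200.0)} for _ in range(4)]

def simulate_detector(truth_particles):
    """Detector-simulation stand-in: turn truth particles into noisy signals."""
    return [{"type": p["type"],
             "signal": random.gauss(p["energy"], 0.05 * p["energy"])}
            for p in truth_particles]

def reconstruct(signals):
    """Reconstruction stand-in: interpret signals as measured particles.
    The same reconstruction code would also run on real detector data."""
    return [{"type": s["type"], "energy": s["signal"]} for s in signals]

truth = generate_event()
reco = reconstruct(simulate_detector(truth))
for t, r in zip(truth, reco):
    print(f"{t['type']:9s} truth {t['energy']:6.1f} GeV -> reco {r['energy']:6.1f} GeV")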

Interviewer: How will you know if you have discovered a new particle, or interaction?

AYANA: If you’re looking for a new kind of particle or a new interaction, you can never really say you’ve found it based on a single event. You need an ensemble of events. You need many instances that share some properties in order to prove to yourself that you’ve actually seen this new process. It’s similar to trying to prove a correlation between an environmental condition and a disease. If you find one patient who has the disease and was living in that environment that’s not proof that they’re connected. You need to study an ensemble of people—some of which do have that condition and that environmental condition, and others who don’t—and study how often that disease is found in these patients before you could say, “I have found this correlation.”
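
A rough illustration in Python, with invented numbers, of why a single event proves nothing: what matters is how unlikely the observed count of candidate events would be if only the known background processes were at work.

import math

def poisson_p_value(observed, expected_background):
    """Probability of seeing at least `observed` events from background alone."""
    below = sum(math.exp(-expected_background) * expected_background ** k / math.factorial(k)
                for k in range(observed))
    return 1.0 - below

# One candidate event over an expected background of 0.5: not surprising.
print(poisson_p_value(1, 0.5))
# Twelve candidates over an expected background of 3: extremely unlikely by chance.
print(poisson_p_value(12, 3.0))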

In a simulation, you do know the origin of every event that you’re looking at. And so, you could say, “this one event was evidence for this new particle.” But that’s information that you don’t have available to you when you’re looking at the data. So, suppose you’re looking for the Higgs and the Higgs can decay, for example, into two electrons and two muons, right? If the Higgs were the only thing in the universe that could produce two electrons and two muons in a proton-proton collision, then finding an event with two electrons and two muons would almost tell you that you’ve found the Higgs. But you might be wrong, because you might have actually seen two pions and two muons and not two electrons and two muons. That uncertainty in your detector—in how accurately it represents what you’ve seen—is one reason that you can’t say, “this was a Higgs event.”

Interviewer: Are simulations restricted by computing power you can access?

AYANA: What we were doing yesterday was basically comparing two software components—two parts of the detector simulation. One of them we know is very accurate or can be very accurate when it’s properly tuned—but it’s extremely slow and therefore it’s also difficult to tune. The other one is extremely fast, but right now it’s not as accurate in representing how particles will interact with the detector. So, our goal is to make the fast simulation a tool that can perform very close to the level of the slower full simulation so that we can use it to generate and analyze more events more quickly.

Interviewer: How do you know that your software is running correctly?

AYANA: Well, some mistakes are obvious. If you put data in and don’t get any out, then you’ve got a bug. But some problems are more difficult to detect. One thing that we do is a very intensive process of validating the software: we establish a baseline based on some software that seemed to work fairly well in the past, and then every change that’s made is compared to the previous version. We’ll analyze the same events and see if we get the same answers. If we get different answers, we hope that they’re better.

Interviewer: How do you validate such complex data?

AYANA: Well, the Monte Carlo is one of the really important tools that we use in software validation because in the Monte Carlo we have what we call truth information. We know exactly which kinds of interactions were simulated and what particles were produced. So, when we use our detector reconstruction software to interpret the simulated event, we can compare what we know to have happened in the truth from the Monte Carlo to what we’ve reconstructed. And if there are major discrepancies, we know there’s something wrong with the reconstruction software. The other thing that we do is compare different versions of the software against one that we thought was pretty good.
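
A minimal Python sketch of that truth-based check, with hypothetical energies: compare a reconstructed quantity to the Monte Carlo truth and flag events where the two disagree badly, which would point to a problem in the reconstruction software.

# Compare reconstructed energies to Monte Carlo truth (hypothetical values).
truth_energies = [50.0, 120.0, 80.0, 200.0]
reco_energies = [51.2, 118.5, 83.9, 260.0]   # the last value looks suspicious

for truth, reco in zip(truth_energies, reco_energies):
    frac_diff = (reco - truth) / truth
    status = "OK" if abs(frac_diff) < 0.10 else "CHECK RECONSTRUCTION"
    print(f"truth {truth:6.1f} GeV  reco {reco:6.1f} GeV  "
          f"difference {frac_diff:+.1%}  {status}")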

If you see a rare event—something that you don’t understand, you’re not going to go back to your reconstruction software and try to tune things and adjust things to explain or to make that event disappear. If you believe in your reconstruction software, then you’re not going to change that at all. What you’ll do is go back to your Monte Carlo where you’ve plugged in all of the physics that you already know—you’ve plugged in the Standard Model—and you’ll take that Monte Carlo and analyze it with an identical set of software tools to reconstruct events. Then you’ll compare the Monte Carlo to the events that you’ve seen. If that weird event is still there and it’s never generated by your Monte Carlo, but you’ve seen it many times in data, then it’s a signal that there’s something new that you don’t understand.

The new model that explains the physics that we see at the LHC will have to be compatible with a host of astronomical observations. And if this model can accurately tell the story of the early universe and produce the patterns that we see on the sky, and the model is also very good at analyzing and explaining the events that we see at the LHC, then it’s going to be a really good candidate. But, only more measurements of new properties of these new particles or new interactions are going to let us know that this hypothetical model is actually the right one. I mean it’s physics. There are always more tests to be done.


Credits

Produced by the Harvard-Smithsonian Center for Astrophysics Science Media Group in association with the Harvard University Department of Physics. 2010.
  • ISBN: 1-57680-891-2