In Sync Online Textbook
“As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.”
– Albert Einstein
The interplay between the abstract world of mathematics and the real world is not as straightforward as it might seem at first. While it is true that mathematics can be used to make sense of and make testable predictions about certain real-life situations, such as solar eclipses, there are many types of natural phenomena, such as the turbulence of fluids in motion, for which current mathematical models are inadequate. Situations such as turbulence represent the frontier of how mathematics can be used to help us understand reality. Coming to an understanding of turbulence is challenging because of the complexity and dynamism of moving fluids. Turbulence involves the combined behavior of trillions of particles of fluid, each of which is subject to many types of forces and interactions. While mathematics can be used to describe the behavior of a single particle relatively comprehensively, the behavior of a group of associated particles is well understood only under certain, sometimes contrived, conditions.
How can we make progress in understanding large, complex, dynamic systems? It helps to start with certain special cases that lend themselves more readily than others to analysis and quantification with the currently available mathematical tools. An understanding of the behavior of a system in these special cases can then provide hints regarding the behavior of the system in more general situations. This is a common strategy in applied mathematics: First, find intriguing special cases that lend themselves readily to study and explanation, then explore how the results can be generalized. Spontaneous synchronization is one such special case of complicated dynamic phenomena. Understanding the mathematics of how, and under what circumstances, entities can come into synchronization with one another provides a starting point for exploring the vast world of nonlinear dynamics.
Our world is filled with all sorts of phenomena that amaze us with their regularity and baffle us with their complexity. For example, how is it that a school of fish can, seemingly simultaneously, all turn on a dime at a mere hint of a nearby predator? How is it that very large groups of Southeast Asian fireflies, and some other varieties as well, when left to their own devices, spontaneously synchronize their flashes? How do the individual cells that make up your heart contract in a coordinated rhythmic fashion to keep your blood flowing? Even a system as simple and seemingly unrelated as an inanimate pair of grandfather clocks can exhibit a kind of synchronous behavior. It is clear that synchronization is a phenomenon that can be found in many different contexts.
The art of mathematical modeling involves identifying a few simple and quantifiable assumptions about a given system (or systems) of study that actually give rise to a good approximation of the phenomenon of interest. Mathematically capturing the complex, dynamic phenomena of the real world is a gargantuan task and is an area in which there is much opportunity for the advancement of our understanding. The study of synchronization represents one of the outposts on the frontier of this vast, unexplored territory.
In this chapter, we will begin by looking at some examples of natural phenomena that exhibit fascinating coordinated and synchronous behavior. Then we will learn a bit about the available mathematical tools that are useful in our quest to understand these phenomena, namely differential equations and calculus, the mathematics of change. From there we will investigate how one particular mathematical model of a system of coupled oscillators can be used to help us understand complex coordinated behavior. We will then be prepared to take a more in-depth look at a couple of examples from the realms of biology and physics to see how the study of synchronization is an example of using mathematics to describe the real world.
2. Unit Overview
- The phenomenon of synchronous behavior occurs in many different situations, ranging from the intentional synchronization of a symphony to the inevitable synchrony of orbiting planets.
When we listen to an orchestra, we are often impressed by how well the musicians can play together, each individual contributing to a whole that is almost always something very different from the individual parts. From a group of musicians playing individual parts, a complex, coordinated piece emerges. The mechanism for this particular synchronization is not hard to understand: The conductor keeps time and cues the musicians to play “in sync” with each other.
Marching bands are another example of synchronous behavior. Their synchrony is somewhat more complicated than that of an orchestra in that the marching musicians move together in addition to playing music together. To play in sync with each other, they take their cues from a conductor, as the orchestra does.
To move in sync with one another, however, they must take their cues from each other. The marchers judge their position and velocity relative to their neighbors. This may seem like a lot to think about for the band members, and it is. Consequently, it would be tempting to conclude that synchronous, coordinated behavior requires a conscious mind, but humans are definitely not the only ones who exhibit synchronous behavior.
Flocking behavior in birds and schooling behavior in fish are two examples of synchrony in the animal world. Watch a flock of pigeons flying and you are likely to see them make remarkably sharp turns, all at the same time. The entire flock can change direction seemingly simultaneously and without running into each other. The same is true for a school of fish, darting, turning, splitting, and re-uniting to evade a predator. Both flocks of birds and schools of fish exhibit this sort of coordinated motion—what we have been calling synchronous behavior—without a leader or “conductor” whose actions tell the group what to do. Rather, each individual pays attention to its immediate neighbors and makes small adjustments in speed and spacing to maintain the cohesion of the group. This phenomenon of groups of individuals who each follow local relationship rules results in the whole group seemingly acting as one. It would then be reasonable to surmise that coordinated, synchronous behavior requires some higher level of brain function—at least at a level that enables an individual subconsciously to follow an innate set of rules about distance and speed.
Even this conjecture, however, falls apart when we consider another example from the animal world. Certain species of fireflies in Southeast Asia exhibit extraordinary synchronous behavior. By the thousands they are able to synchronize the rhythmic flashing of their abdomens so that they all flash at the same time. They seem to accomplish this naturally and spontaneously without any leader showing the way. They accomplish this synchronization despite the fact that each individual firefly’s brain can’t hold a candle to the processing power of a bird’s or fish’s brain.
Synchronous behavior obviously occurs among simpler animals, but what about at a sub-organism level? How about between cells? An individual cell has no brain, and yet our bodies are made up of trillions of individual cells, each of which functions—during states of health—in life-sustaining harmony with the others. A great example of this is heart pacemaker cells. Pacemaker cells are the key rhythm keepers that govern how and when the heart contracts. These cells display a great degree of spontaneous synchronous behavior; indeed, if they didn’t, none of us would be here to observe it! Each pacemaker cell has an innate cycle of building and releasing electrical charges that ultimately stimulate the cells of the heart to contract or relax. In isolation, pacemaker cells keep their own rhythm. When one pacemaker cell is placed in proximity to another pacemaker cell, however, something remarkable occurs. They maintain their separate rhythms for a brief period and then naturally fall into sync with one another, both building and releasing charges at the same time. This phenomenon has no leader guiding it and no processor, such as a brain, to make judgments about what the neighbors are doing.
At this point we could still argue that the phenomenon of synchronous behavior requires some sort of living thing. Although it doesn’t need a leader, or even brains, perhaps it results from some basic principle of biology.
Of course, by now we should not be surprised that this is not the case. All sorts of non-biological systems can spontaneously synchronize, creating order where we might expect to see chaos. We can see this in the heavens, in the tidal locking of our moon (a case of two cycles, both an orbit and a rotation) becoming synchronized so that we always see the same side of the moon when we look from Earth. Even something as simple and mundane as a system of two pendula, little more than weights attached to the ends of sticks, will exhibit spontaneous synchronization when both are connected to a movable platform.
Synchronization is at the heart of the study of how order emerges from disorder and the rules that guide this process. Mathematics is the perfect tool to use to study this, because it provides methods that are general enough to encompass the commonalities in the seemingly disparate phenomena that we have looked at so far. Using the tools of mathematics, we can start to clarify a complex situation by making simplifying assumptions, seeing how these simple cases behave, and then trying to generalize our findings to cases that are not so simple. This is a common theme in mathematics, but to use this method to understand synchrony, we will need some specific mathematical tools, namely those that can quantify and describe things that are continuously changing.
3. Get in Line
- The slope-intercept form of a linear equation is a common way to represent the mathematics of change.
One of the key features of algebraic mathematics is the use of symbols instead of numbers. In algebra, we learn how to generalize and explore the rules of arithmetic by using variables that can stand for any number. We become less concerned with answers to specific problems and more concerned with the relationships between the entities and values under investigation. The advantage of this is that our analyses can be applied to a wider variety of situations than would be possible if we restricted ourselves to using specific numbers that apply only to a particular situation.
An example of this concern with relationships is the familiar slope-intercept form of the equation of a line: y = mx + b. Typical high school algebra courses reveal how this relationship can be applied to any number of situations. We can apply the equation to the cost of painting a house, for example, by letting y represent the total cost; x, the number of gallons of paint purchased; m, the price per gallon of paint; and b, the fixed cost of supplies such as brushes and buckets. The total cost can then be found by substituting real-world values for the variables and performing the indicated operations.
In general, a linear equation expresses a relationship between the two variables, x and y. These variables represent two values that are related in some way. In other words, changing one leads to a change in the other. The constants of the linear equation, m and b, help show specifically how x and y are related. These constants are determined by the conditions of the situation that we wish to understand and model with the equation.
In the house-painting example mentioned above, we saw that b represents a fixed, up-front cost. Graphically this value determines the placement of the line on the coordinate plane. Specifically, it identifies the point at which the line intersects the y-axis.
Many times in mathematics we have to choose what it is we care most about. In other words, in a given situation we must decide which quantities to de-emphasize and which to give our full focus. In our present discussion, we are going to ignore b for the time being, because what we are really interested in is how changes in one variable affect the other. In our painting example, the up-front cost becomes increasingly less important as more paint is purchased, so we should probably pay more attention to the price of paint than to those fixed, up-front costs. Knowing the price per gallon will enable us to determine exactly how our total cost changes as we use more or less paint. In the general case then, m is more interesting to us right now because it lets us calculate how a change in x will affect the value of y.
The number m compares the change in y to the change in x for a given line. We call this ratio of changes the “slope.” If we know two points on the line, (x1, y1) and (x2, y2), we can find the slope by taking the difference in y values and dividing it by the difference in x values:

m = (y2 – y1) / (x2 – x1)

This slope ratio is commonly referred to as “rise over run.”
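The rise-over-run computation can be sketched in a few lines of Python. This is purely illustrative; the two points are hypothetical values chosen to lie on the line y = 3x + 2.

```python
def slope(p1, p2):
    """Ratio of the change in y to the change in x: "rise over run"."""
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 - x1)

# Two points on the (hypothetical) line y = 3x + 2; its slope is 3.
print(slope((1, 5), (4, 14)))  # → 3.0
```

Any two distinct points on the same line give the same answer, which is exactly what it means for the slope of a line to be constant.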
Slope is a useful concept because it describes how two quantities change in relation to one another. The slope of a linear equation is constant; it never changes. While many real-world situations can be modeled with a linear equation, most cannot. For one thing, most real-world situations can’t be modeled using just multiplication and addition. Equations involving powers of variables, such as the equation for the distance traveled by a falling object, don’t lend themselves to the simple notion of a constant slope implied in the linear equation model. Let’s look at how we can generalize the concept of slope to talk about such non-constant rates of change.
- To capture the notion of rates of change that can themselves change, we need the concept of a derivative.
In our painting example, we might arrange a deal with the paint store that the more paint we buy, the less we pay per gallon. This means that while the total cost increases as we buy more paint, the rate at which the total cost changes actually decreases. Our slope is no longer constant; it, like y, depends on which x (amount of paint) we choose to consider. To better understand how real-life situations change, we need a more comprehensive concept of slope.
Notice that if we attempt to find the slope between two points on a curve, we end up with a straight line that doesn’t correspond with the curve very well. Furthermore, notice how the slope between two points on a curve changes depending on which two points are selected.
Considering just a few examples also makes it clear that generally the further the two selected points are away from each other, the worse the correlation between the slope of the line and what is actually happening to the curve over the chosen interval.
If we could somehow have a notion of slope between two points on a curve that are not very far apart at all, we could practically eliminate the discrepancy between the line determined by those points and the path of the curve. Such a conceptual tool could help us understand mathematically all sorts of curves and the situations they represent. To do this, we can shrink our view as far as we wish and consider the slope between two points that are extremely close to one another on the curve.
On a curve, imagine a point whose horizontal position is x. Now imagine a second point on the curve that is some very small horizontal distance, Δx, from x. This point’s horizontal position is x + Δx. If the curve is the graph of a function f, the heights of the two points are f(x) and f(x + Δx), and the slope between them is represented by this expression:

[f(x + Δx) – f(x)] / Δx
This is the familiar “rise over run” expression indicating the rate of change between these two very slightly separated points. If we let their horizontal separation, Δx, approach zero, we will have an expression for the “instantaneous” rate of change for that section of the curve. Note that we cannot make the separation equal to zero, because division by zero is undefined. We can, however, talk about the slope as Δx “gets arbitrarily close” to zero. This quantity, called a derivative, is the generalized notion of slope that we need to deal with many complicated (i.e., “curvy”) real-world models.
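This limiting process is easy to see numerically. In the Python sketch below, the curve f(x) = x² and the point x = 3 are arbitrary choices for illustration; the difference quotient closes in on the instantaneous rate of change, 6, as Δx shrinks.

```python
def difference_quotient(f, x, dx):
    """Slope of the line through (x, f(x)) and (x + dx, f(x + dx))."""
    return (f(x + dx) - f(x)) / dx

f = lambda x: x ** 2  # a simple curve with a non-constant slope

# Shrinking dx: the quotient approaches the derivative's value, 6.
for dx in (1.0, 0.1, 0.001):
    print(difference_quotient(f, 3.0, dx))
```

We never set Δx to zero (that would divide by zero); we only watch what value the quotient approaches, which is precisely the idea behind the derivative.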
The derivative is a powerful mathematical tool because it allows us to describe in great detail not only how quantities change in relation to each other, but also how their changes change. We can now account for the vast amount of real-world phenomena that do not conform to the simple, linear notion of a constant slope.
We don’t have the space in this text to explore how to find derivatives of specific functions, but we’ll need to use some of them later as we attempt to mathematically model synchronization. The following table gives a few basic functions and their derivatives.

| Function | Derivative |
| --- | --- |
| ax + b | a |
| x^n | nx^(n-1) |
| sin(ax) | a cos(ax) |
| cos(ax) | –a sin(ax) |
| e^(ax) | ae^(ax) |
The derivative is one of the key ideas in differential calculus, which can be thought of as the mathematics of change. Calculus uses the concepts of infinite processes and infinitesimal steps to describe how changing quantities (e.g., those that grow, shrink, move, or proliferate) vary.
Ancient Egyptian thinkers, trying to compute the volumes of various solids, made the first strides toward this understanding. Greek mathematicians, such as Eudoxus and Archimedes, carried on this legacy by developing the “method of exhaustion,” which involved dealing with infinite processes. As the West descended into the so-called Dark Ages, Indian, Arab, and Persian mathematicians flourished, making great strides toward an understanding of derivatives. By the late 1600s, European mathematicians were building upon the techniques of past thinkers, using calculus-like methods to understand physical processes. It was at this point that the traditionally-held “fathers of calculus,” Isaac Newton and Gottfried Leibniz, simultaneously put centuries’ worth of pieces together, and added many significant contributions of their own, to form a coherent whole called “the calculus.”
Calculus provides us with the mathematical tools to deal with rates of change in a sensible manner. But as with any discipline, the tools are only as effective as the skill of the one who wields them. Using the tools of calculus to model real-world situations requires the ability to see a dynamic situation and recognize the relevant quantities and rates of change involved, and how they relate to each other. With a grasp of the elements and relationships in play, we are better prepared to express what is happening using equations that we can analyze to make predictions about the future and to find new understandings of our world.
4. Differential Equations
- A differential equation is an expression that relates quantities and their rates of change.
- The solution to a differential equation is not simply a number; it is a function.
With a solid mathematical tool, calculus, in hand, we can set out to try to understand the phenomena of the world mathematically. Let’s start with a simple example. Imagine an object in free-fall. At any given time during its fall, it will have some specific velocity, v. Furthermore, we intuitively know that the longer something falls, the faster it goes. This suggests that the velocity of the object should be expressed as a function of elapsed time, t.
To write the specific expression that will tell us the object’s velocity at any point in time, let’s first assume that the object begins from a state of rest. This gives us an “initial condition,” of v(0) = 0, or “the velocity at time zero equals zero.” The velocity of the object as it falls will then be due solely to the influence of gravity. If we multiply the time spent falling t by the acceleration due to gravity g, which is the experimentally observed rate at which the velocity of a freely falling object changes, we can determine the speed at which our object is falling at any point in time:
v(t) = gt
Notice here that what interests us is not a specific value for velocity or time, but rather the exact relationship between the two. In this example, we have a non-constant velocity. If we take the derivative of this, we should get an expression that tells us how fast velocity is changing. Doing this, we get:

dv/dt = g

(This is the derivative of a linear equation, like the first example in the table from the previous section. Note that dv/dt is shorthand for “the derivative of v with respect to t.”)
This is a very simple example of what is known as a differential equation. A differential equation is simply an equation that relates quantities with their rates of change. In this example, we see that the amount by which v changes, dv, in some small amount of time, dt, is equal to a constant, g.
To solve this equation, we are looking for a function whose derivative is the constant g. Notice that solving a differential equation does not give us a simple number, as we would expect were we to solve the equation 10 = 4x – 2 for the variable x. Rather, our solution to a differential equation is a function, v(t). This example is somewhat contrived because we already know that the answer will be v(t) = gt. After all, that’s what we started with. But if we didn’t already know, how could we figure it out?
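One answer is to attack the equation numerically. The Python sketch below uses Euler’s method, stepping forward in tiny increments of dv = g·dt; the step size and the two-second fall time are arbitrary choices for illustration. The running total recovers the exact solution v(t) = gt.

```python
g = 9.8       # acceleration due to gravity, in m/s^2
dt = 0.001    # a small time step (arbitrary choice)
v = 0.0       # initial condition: v(0) = 0

# Euler's method: repeatedly apply the rate equation dv/dt = g,
# accumulating 2000 steps of 0.001 s, i.e., 2 seconds of falling.
for _ in range(2000):
    v += g * dt

print(v)      # numerical estimate of v(2)
print(g * 2)  # the exact solution v(t) = g*t at t = 2
```

Because the rate of change here is constant, the numerical answer matches gt almost perfectly; the method becomes genuinely useful when the rate itself changes, as in the next example.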
There are a variety of methods that one can use to solve different types of differential equations. No one method can solve every differential equation, and there are many differential equations that can’t be solved at all. In the next example, we’ll get a sense of the methods and thinking that go into solving differential equations.
- Exponential growth is a classic example of a real-world situation that lends itself to a solvable differential equation.
Let’s look at another example, one that gives us an equation involving both a quantity and its derivative. Imagine a single bacterium surrounded by nutrients—perhaps it’s in a bottle of milk. Bacteria divide asexually by binary fission, their population basically doubling at set intervals. The more bacteria there are, the more that are “born.” This implies a rate of change, or growth, that is not steady, as was the case in the previous example of the velocity of a falling object. Furthermore, the rate of increase in the bacteria population depends on how many there are to begin with. If there are two bacteria initially, the first increase is by two, the second increase is by four, the third increase is by eight, etc.
Let’s designate P(t) as the number of bacteria at any given time, t. The rate of change in this population is then dP/dt, some small change in population over a small change in time. The rate, dP/dt, depends on how many bacteria there are, P. Therefore:

dP/dt = aP
The a is just a constant that is related to the specifics of the situation—what type of bacteria, how long it takes them to reproduce, etc. In this situation, we have a rate of change that is directly proportional to the quantity that is changing; in other words, we have an equation that relates a certain quantity to its derivative. This is a classic differential equation that describes exponential growth.
We could use a process known as integration to solve this by separating the variables, putting the parts having to do with P on one side of the equation and the parts having to do with t on the other side. Integration and differentiation are two of the most important concepts of calculus. Whereas differentiation seeks to explain rates of change, integration makes sense of the accumulation of an infinite number of tiny changes. Integration is in a very real sense the “opposite” of differentiation, but it can be very complicated for anything but the simplest of equations. A faster way, for our purposes, might be simply to try a few possible solutions and see if they work.
First let’s try P(t) = at. According to our table from the previous section, dP/dt would then be just a. Substituting these values into our differential equation, we would get:

a = a(at)

Since this is true only when at = 1, let’s try something else.
How about P(t) = sin(at)? dP/dt would then be a cos(at), and we would have:

a cos(at) = a sin(at)
Again, this is true only sometimes, in much the same way that a stopped clock is right twice a day. We need something that is always true regardless of what value of t we consider. Let’s try something else.
How about P(t) = e^(at)? dP/dt would then be ae^(at), which is just aP(t)! This gives us ae^(at) = ae^(at), which is always true, no matter what t is. So the solution to our differential equation is P(t) = e^(at).
In this example, we see again how the solution to a differential equation is a function, not a number. In our example here, this function describes how to find the population of bacteria at any point in time, even though the rate of increase is changing. It’s a nice, simple expression that encompasses the complexity of the situation under examination.
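We can check this solution numerically, just as we did for the falling object. In the sketch below, the growth constant a and the time horizon are arbitrary values for illustration; accumulating the tiny changes dP = aP·dt lands very close to e^(at).

```python
import math

a = 0.5      # growth constant (arbitrary, for illustration)
dt = 1e-4    # a small time step
P = 1.0      # initial population: P(0) = e^(a*0) = 1

# Euler's method: 30000 steps of 1e-4 integrate out to t = 3.
for _ in range(30000):
    P += a * P * dt  # the differential equation: dP = a*P*dt

print(P)                # numerical estimate of P(3)
print(math.exp(a * 3))  # the exact solution e^(at) at t = 3
```

Notice that here, unlike in the falling-object example, the increment itself grows at every step, which is exactly what “the rate of change is proportional to the quantity” means.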
5. Solving Differential Equations
- Many differential equations are not solvable, but they can, upon analysis, yield information about the system they represent.
In addition to integration and the “guess and check” method we just used, there are other ways of solving differential equations (sometimes nicknamed “diff EQs”), and they generally fall into two categories: exact and numerical methods. Exact methods yield exact solutions, as did the function in our example above. Numerical methods give approximations based on different algorithms. Often, however, we can discover interesting behavior regarding our situation without having to solve any equation. We can look at its qualitative behavior via what is called a phase portrait, a picture that shows a system’s “phase space.”
Phase space is handy because it provides a way to represent all the possible states of a system with one picture. It is a graph of the variables, such as position and velocity, that determine the state of a system. We will talk about phase space in more depth in Unit 13. For our purposes here, it suffices to say that examining graphical representations of systems of differential equations can yield a wealth of qualitative information about the system, such as whether or not it will display cyclical or synchronous behavior.
Now that we have an idea how to model certain real-life situations using equations that use both quantities and rates of change, we can tackle the issue of how synchronization arises in nature. We are going to look at one of the most basic and accessible types of synchronization, that of cyclical behavior.
- One of the first breakthroughs in the study of spontaneous synchronization was in the modeling of how two oscillators that are initially out of phase with each other can come into phase with one another.
- Two oscillators influencing one another can be modeled by a system of coupled differential equations.
- Certain species of fireflies exhibit this synchronization property in the wild.
How is it that two fireflies, each blinking to its own rhythm, can come into sync with each other, flashing at the same time? How do we even begin to represent this situation mathematically?
A single firefly, if left to its own devices, will flash with some regularity. To model this situation mathematically requires a function that has periodicity, which simply means that it returns to the same value at regular intervals. As we saw in our unit on the connections between music and mathematics, a good mathematical function that models periodicity is a sinusoid. A sine wave oscillates smoothly between one value and another. For the firefly, these two values would be the states “on” and “off.”
It would be reasonable to model the flashing of a single firefly by looking at the sine of theta, where theta represents where the firefly is in its flashing cycle. The firefly flashes when θ equals zero.
Another way to think about this is to imagine a runner on a circular track. Picture the runner traveling at a constant speed, corresponding to how quickly the firefly charges up its flash. The flash itself corresponds to the runner crossing the start/finish line. The angle theta then represents where the runner is on the track in relation to the start/finish line.
So, if theta represents where the firefly or the runner is in its cycle, the derivative of this will indicate how fast that position is changing.
dθ/dt = the rate at which θ changes.
This value is intuitively related to the frequency of oscillation—the more quickly θ changes, the more cycles the runner, or the firefly, will complete. Let’s call the frequency that the runner or firefly would have alone, without any influence from others, the natural frequency, denoted by ω.
Things get interesting when we introduce another oscillator and consider two fireflies, or two runners, that interact with one another. We can model each one as an oscillator, just as we did in the single case, but because they interact with each other, the expression is somewhat more complicated.
Because we now have two oscillators, we will have two phases (θ1 and θ2) and two natural frequencies (ω1 and ω2) to account for. If we assume that the natural frequencies are fixed, then we will need two equations for the two unknowns θ1 and θ2.
The first firefly has phase θ1 and frequency ω1. The second firefly has phase θ2 and frequency ω2. For both fireflies to flash in sync with one another, the two thetas must be equal to one another. Mathematically, θ1 – θ2 must equal zero.
The phase difference, θ1 – θ2, determines the extent of “correction” each firefly needs to make to synchronize with the other one. The necessary adjustment varies depending on how far apart the two fireflies are in their cycles. If the two fireflies are very far apart in their cycles, a large correction is needed. If they are only slightly out of sync, only a slight nudge is required. However, the situation is a bit more complex than this.
The adjustment each firefly makes can be either to slow down or to speed up its flashes. How does it determine which to do? Consider the case of perfect alternation, with one firefly flashing and then the other flashing at perfectly spaced intervals. Should the one slow down or speed up to match the other? It can speed up, basically doubling its frequency temporarily so that its next flash coincides with the other, or it can slow down, halving its frequency, skipping the next flash in the attempt to synchronize with the other.
If the flash of the first firefly occurs at a point in time that is less than half the firefly’s cycle time from the second firefly’s next flash, it makes sense to speed up. On the other hand, if it is more than half way through its cycle, it is better to slow down and wait for the other firefly to catch up. The difference in θ is what influences the firefly as to what to do. A function capable of modeling either a speed-up or a slow-down must be able to periodically take on positive or negative values, depending on the difference in θ. Once again this is ideally a sinusoid. So, our mathematical model of how a firefly adjusts its flashing cycle to achieve synchronization with another should look something like this:

dθ1/dt = sin(θ2 – θ1)

dθ2/dt = sin(θ1 – θ2)
The sine terms should be mediated by a constant that represents how strongly the two fireflies interact with each other. This constant can take into account things such as distance and ambient light levels that affect a firefly’s perception. Let’s designate this constant K1 for the first firefly and K2 for the second firefly. Incorporating these factors yields these modified expressions:

dθ1/dt = K1 sin(θ2 – θ1)

dθ2/dt = K2 sin(θ1 – θ2)
Finally, we shouldn’t forget the influence of each firefly’s natural rhythm, ω1 and ω2 respectively:

dθ1/dt = ω1 + K1 sin(θ2 – θ1)

dθ2/dt = ω2 + K2 sin(θ1 – θ2)
These two equations represent the changes that each firefly should make, based on what the other is doing, in order to achieve synchronization. Mathematically, these are the equations of coupled oscillators. In our study of sync, we need to analyze the behavior of these equations to find out the various conditions under which spontaneous synchronization can occur. This is a simple, standard model that can be applied to many different situations in which synchronization is observed.
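A short simulation makes the coupled-oscillator model concrete. In the Python sketch below, every parameter value (natural frequencies, coupling strengths, starting phases, step size) is invented for illustration. Stepping the two equations dθ/dt = ω + K sin(Δθ) forward shows the phase gap θ1 – θ2 settling to a small constant, i.e., the two oscillators phase-locking.

```python
import math

w1, w2 = 1.0, 1.2    # natural frequencies (hypothetical values)
K1, K2 = 0.5, 0.5    # coupling strengths (hypothetical values)
th1, th2 = 0.0, 2.0  # initial phases: far out of sync
dt = 0.001

# Euler's method on the coupled system, 20000 steps = 20 time units.
for _ in range(20000):
    d1 = w1 + K1 * math.sin(th2 - th1)  # dθ1/dt
    d2 = w2 + K2 * math.sin(th1 - th2)  # dθ2/dt
    th1 += d1 * dt
    th2 += d2 * dt

print(th1 - th2)  # the phase gap settles near a small constant offset
```

Because the natural frequencies differ, the pair locks at a constant small offset rather than at exactly zero; the closer ω1 and ω2 are, the smaller that residual gap becomes.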
Recall that synchronization is defined to be the condition in which both oscillators are in phase. Mathematically, this occurs when:
θ1 = θ2, or θ1 – θ2 = 0
We can let φ = θ1 – θ2 to introduce a single, convenient variable to represent the phase difference. The change in φ, representing how the phase difference changes, would then be:

dφ/dt = dθ1/dt – dθ2/dt
Substituting our equations for the derivatives of the flashing cycles of the two fireflies from above, we get:

dφ/dt = ω1 – ω2 + K1 sin(θ2 – θ1) – K2 sin(θ1 – θ2) = ω1 – ω2 – (K1 + K2) sin φ
What this equation tells us, via φ and dφ/dt, is that whether the fireflies synchronize with one another depends on the difference in their natural frequencies, ω1 – ω2, and how that difference compares to the strength of the signals they send and receive from each other, K1 + K2, also called the coupling strength. If the difference in frequency is smaller in magnitude than the coupling strength, there is a stable phase difference at which dφ/dt = 0, and the fireflies will spontaneously synchronize. If the difference is too great, they will go on flashing at their individual rates.
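This threshold behavior is easy to check numerically. The sketch below (Python; the parameter values, step size, and starting phase are arbitrary choices for illustration, not values from the text) integrates the phase-difference equation dφ/dt = ω1 – ω2 – (K1 + K2) sin φ with a simple Euler step:

```python
import math

def phase_difference(delta_omega, coupling, phi0=2.0, dt=0.01, steps=20000):
    """Euler-integrate d(phi)/dt = delta_omega - coupling * sin(phi),
    where delta_omega = omega1 - omega2 and coupling = K1 + K2."""
    phi = phi0
    for _ in range(steps):
        phi += dt * (delta_omega - coupling * math.sin(phi))
    return phi

# Frequency difference (0.5) smaller than coupling strength (2.0): the phase
# difference locks at a fixed point where sin(phi) = 0.5 / 2.0 = 0.25.
phi_locked = phase_difference(delta_omega=0.5, coupling=2.0)
print(abs(math.sin(phi_locked) - 0.25) < 1e-9)

# Frequency difference (3.0) larger than coupling strength (2.0): there is
# no fixed point, and phi grows without bound -- the fireflies drift apart.
phi_drift = phase_difference(delta_omega=3.0, coupling=2.0)
print(phi_drift > 2.0 + 2 * math.pi)
```

In the first case the phase difference settles where the frequency mismatch exactly balances the coupling term; in the second, dφ/dt never reaches zero, so the flashes slip past each other cycle after cycle.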
This is a relatively straightforward model of potentially synchronous behavior with two oscillators. Real-world systems, however, are often made up of many oscillators. In the next section, we will explore how to expand our model to deal with more-complicated systems such as these.
6. Many Oscillators and Biological Sync
- The Kuramoto model mathematically captures the behavior of systems of many coupled oscillators.
- Unlike other models of this type, the Kuramoto model is solvable.
We’ve now seen one possible way to model the rather complicated process of two individual fireflies coming into sync with each other. The mechanism by which this happens is based on each firefly being aware of the other’s cycle and making modifications in its own cycle to match it. Synchronization between these fireflies would not be possible were it not for this visual communication taking place.
It’s interesting to think of this from the firefly’s perspective. At some level, the firefly is aware of what its neighbor is doing and can, intentionally or not, adapt its own cycle to match. With only one neighbor, this may not seem like a big deal, but what about when there are two neighbors? How does our model change if there are more than just two oscillators? In reality, synchronous flashing has been observed in groups of many thousands of fireflies. If we want our model to be as accurate and useful as possible, we must find a way to generalize our model of coupled oscillators to account for synchronization within groups of many oscillators.
One such model was developed by Yoshiki Kuramoto at Kyoto University in the 1970s. In considering large groups of oscillators, it makes things significantly easier to assume that every oscillator affects each of the others equally. In the context of a group of biological oscillators, such as fireflies, one could reasonably expect that fireflies that are further away will actually have less influence than fireflies that are closer. This geographical/spatial factor is ignored in the Kuramoto model. This provides an example of how it is often necessary to make simplifying assumptions about a situation in order to create an understandable, workable model. Doing so provides a foothold from which we can then explore what happens as that model is modified.
What is remarkable about the Kuramoto model is that it is a potentially infinite set of nonlinear, coupled differential equations, and yet it can be solved exactly. The general model itself resembles our system of two equations from the previous section:

dθi/dt = ωi + (K/N) Σj sin(θj – θi), for i = 1, …, N
This form uses summation notation to compactly state a system of N differential equations, one for each oscillator. What it says is that the change in phase for a specific oscillator (the ith oscillator) depends on both its natural frequency, ωi, and the sum of the influences of the other oscillators. These influences are each related to the difference in phase between the ith oscillator and each other oscillator taken individually, which is why the sum is over j oscillators, even though the equation gives the behavior of the ith oscillator. Furthermore, the amount of influence that each other oscillator has on the ith one, K, is divided evenly by the total number of oscillators, N.
The Kuramoto model can be used to explain many different biological phenomena because of its simplicity and the fact that it can be solved. Systems of nonlinear, coupled differential equations can only rarely be solved exactly. Solutions to the Kuramoto model somewhat resemble our conclusions from the two-oscillator model, most notably the finding that spontaneous synchronization occurs depending on the relationships between differences in natural frequency and the strength of the interaction between oscillators.
In the realm of biology, there are many examples of situations in which the Kuramoto model is applicable. We’ve already seen how it applies to fireflies, and there are a couple of other fairly common yet fascinating examples from the biological world.
Crickets and frogs communicate with cyclic sound much as fireflies do with cyclic light. In some parts of the country, the night-time soundscape is full of the chirps of crickets and the chorus of frogs croaking. Sometimes these sounds can spontaneously synchronize within a species in a process that is similar to how fireflies synchronize their flashes.
- Heart pacemaker cells exhibit spontaneous synchronization in their firing of electrical impulses.
Biological synchronization is by no means limited to insects and amphibians, however. The cells that make up the human heart’s natural pacemaker, the rhythm keeper that controls the electrical signals that cause the heart to pump, display a propensity for spontaneous synchronization. Each cell can be thought of as an individual oscillator, in much the same way that a firefly can, but with a few key differences.
Recall that with the firefly, we modeled the cycle of its flashes as a smooth sinusoidally varying function. A heart cell’s electrical firing is better modeled as a pulse. The voltage across a cell builds slowly until it reaches some threshold; at that point the cell discharges most of its voltage rapidly.
Each cell has a form of communication with its neighbors via the voltages that discharge. When one cell fires, it kicks up the voltages of its neighbors so that if they are close to their firing threshold, they fire. This has a synchronizing effect on all the nearby cells that were approaching their firing threshold when the first one fired. Cells that were not close to firing get knocked further out of sync with the others.
At first glance, it might seem that this would lead to disorganized behavior among some cells and organized behavior among others. What actually happens is that as certain cells near their firing threshold, voltage begins to leak out in small amounts, to be absorbed by the neighboring cells. This leakage would have little effect if there were only one or two cells, but in a group of thousands, the leakage has a homogenizing effect on the average voltage across each cell. In time, this leads to synchronization of the entire system, not just particular groups of cells.
Cells that build up charge and then discharge precipitously are not modeled well by the Kuramoto model. Math that involves sharp changes often gets tricky. These issues were successfully tackled, however, by Charlie Peskin at New York University in 1975. He was able to show mathematically how synchronization is possible for the entire cardiac firing system.
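A minimal sketch of a pulse-coupled, integrate-and-fire system in this spirit follows (Python). To be clear, this is not Peskin's actual formulation: the charging curve, the threshold, the kick size epsilon, and the cell count are all invented for illustration. Each cell's voltage charges along a concave curve toward a threshold of 1; a firing cell resets to 0 and kicks every other cell's voltage up by epsilon, and cells kicked over the threshold fire in the same instant.

```python
import random

def simulate_pacemaker(n=5, epsilon=0.25, t_max=100.0, dt=0.0005, seed=3):
    """Pulse-coupled integrate-and-fire cells. Returns the spread
    (max - min) of the voltages at the end of the run; a spread of
    0.0 means every cell is firing in perfect unison."""
    rng = random.Random(seed)
    v = [rng.random() for _ in range(n)]
    for _ in range(int(t_max / dt)):
        # Concave charging toward the threshold: dv/dt = 2 - v.
        v = [vi + dt * (2.0 - vi) for vi in v]
        fired = {i for i, vi in enumerate(v) if vi >= 1.0}
        if not fired:
            continue
        # Kicks can push further cells over the threshold; those cells
        # fire in the same instant and are absorbed into the group.
        while True:
            newly = {i for i, vi in enumerate(v)
                     if i not in fired and vi + epsilon * len(fired) >= 1.0}
            if not newly:
                break
            fired |= newly
        v = [0.0 if i in fired else v[i] + epsilon * len(fired)
             for i in range(n)]
    return max(v) - min(v)

print(simulate_pacemaker())
```

Once cells fire together, they reset together and remain identical forever, so synchronization proceeds by absorption: if the whole population has merged into a single firing group by the end of the run, the printed spread is exactly 0.0.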
We have been talking mainly about cyclical synchronization up to this point, but there are other forms of spontaneous order that arise in nature, such as flocking and schooling. Believe it or not, even traffic congestion/flow often results in spontaneous order. The models for these phenomena are not as simple as the Kuramoto model, but the basic mechanism is the same. Spontaneous order emerges naturally in systems in which the individuals communicate with each other in some fashion and make small group-adaptive changes based on those signals. What’s fascinating is that these individuals need not be organisms, and the signals exchanged can be much simpler than a cricket’s chirp or the voltage spikes of the heart’s pacemaker cells. Let us now turn our attention to synchronization of inanimate objects.
7. Mechanical Sync
- Non-biological oscillators can spontaneously synchronize, provided they have a mechanism for exchanging signals (i.e., transferring kinetic energy).
At the beginning of this unit, we caught a glimpse of the variety of situations in which synchronization can occur. Up until now, we have focused primarily on sync as it occurs with living things that are able to send, receive, and interpret signals. We hinted, however, at the fact that spontaneous synchronization is not limited to living beings. It seems to be a fundamental phenomenon in nature, occurring not only in the realm of biology, but also in chemistry and physics.
In fact, the first documented observations of a system coming into spontaneous order were solidly in the realm of physics. In the 1660s, the Dutch physicist Christiaan Huygens, known primarily for his contributions to probability, astronomy, and optics, found himself sick in bed, as the legend goes, observing two pendulum clocks. He noticed that no matter what configuration each started in, they would eventually begin swinging in sync with each other. Technically, it was anti-phase sync: the two pendula always swung in opposite directions, approaching and receding from each other in mirror image.
Huygens examined the situation and found that the two clocks were both resting on a loose, wobbly floorboard. He also noted that if the two clocks were placed at opposite ends of the room, no such synchronization occurred. He surmised that the motions of the two pendula transmitted tiny forces to each other via the loose plank, subtly slowing down or speeding up the frequency of each until they swung in anti-phase synchrony.
We can observe a similar phenomenon using a couple of metronomes. Imagine that we have two metronomes, both set to oscillate at the same frequency.
If we place these two metronomes on a solid, fixed surface, out of phase with each other, they will continue to oscillate out of phase with each other for as long as we care to watch. If we place the same two metronomes on a board that is allowed to move in a particular way, however, the situation is quite different.
If the board connecting the metronomes sits atop two cans, so that it is free to move laterally, parallel to the motion of the metronome arms, it becomes a connection between the two metronomes that is capable of transmitting subtle shifts in momentum.
Imagine that the arm of metronome 1 is moving towards the left, while the arm of metronome 2 is moving towards the right. Let’s say that metronome 2 is closer to the right-most point in its cycle than metronome 1 is to the left-most point of its cycle.
When metronome 2 reaches its right-most point and reverses direction, the force that accelerates its arm back to the left produces an equal and opposite reaction, shifting the board ever so slightly to the right. This is a consequence of Newton's third law of motion, which states that for every action, there is an equal and opposite reaction.
The effect of the board moving to the right is to accelerate, ever so slightly, the arm of the left metronome towards its left-most point.
This is similar to the forces involved when you try to pull the tablecloth out from under a setting of tall glasses. Unless you are extremely gifted and/or lucky, you are likely to cause at least a few glasses to fall. When they fall, they will fall in the direction opposite the movement of the tablecloth.
This is how the board allows the two metronomes to influence each other. The net effect of the small changes transmitted from metronome 1 to metronome 2, and vice versa, will be that the metronomes eventually will come to oscillate in sync with each other.
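We can connect this back to the phase model from the firefly section. If we treat each metronome as a phase oscillator and let a single coupling constant stand in for the momentum the board transmits (a drastic simplification of the real mechanics, which can settle into either in-phase or anti-phase sync depending on the setup), then two metronomes with identical natural frequencies have a phase gap φ obeying dφ/dt = –(K1 + K2) sin φ, which decays to zero from almost any starting point. A quick Python check, with arbitrary parameter values:

```python
import math

def metronome_phase_gap(phi0, coupling=1.0, dt=0.01, steps=5000):
    """Euler-integrate d(phi)/dt = -coupling * sin(phi): the phase gap
    between two identical-frequency oscillators coupled via the board."""
    phi = phi0
    for _ in range(steps):
        phi += dt * (-coupling * math.sin(phi))
    return phi

# Started well out of phase, the two metronomes end up in sync (phi -> 0).
print(abs(metronome_phase_gap(2.5)) < 1e-6)
```

The only starting configuration that fails to converge in this idealized model is exact anti-phase (φ = π), an unstable balance point; any tiny disturbance sends the system toward sync.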
THE MILLENNIUM BRIDGE
- The worlds of biological and mechanical synchronization came together in the shaking of the Millennium Bridge in London at the turn of the 21st century.
This concept of oscillators, connected by some medium that can transmit signals between them, seems to be at the heart of the phenomenon of synchronization. We’ve seen how sync arises in a variety of contexts, both biological and mechanical. In our final example, we will see how sync occurred in a system comprised of both biological and mechanical elements.
The Millennium Bridge was constructed across the River Thames in London in the late 1990s to commemorate the beginning of a new millennium in the year 2000.
On the day the bridge was opened to the public, crowds of people assembled to walk across the newest landmark in the city. As the bridge filled with people, something remarkable, and somewhat frightening, began to take place. The bridge started swaying, with no observable cause. The winds were calm, and yet the bridge began to sway with more and more severity.
Video from that day shows that, as the bridge swayed, the pedestrians began to compensate by adopting a staggering, side-to-side gait. Moreover, groups of them began to stagger in sync with one another, completely unintentionally.
The synchronized staggering of the people, begun as a response to the initially slight swaying movements of the bridge, served to amplify the oscillations until the bridge swayed quite violently. In this case, the walking surface of the bridge served the same function as the plank in the metronome example that we just examined; it transmitted small changes in lateral momentum between people to the bridge structure, reinforcing the oscillations that had already begun. The more the bridge shook, the more people compensated in their walking motion, and as more people began to stagger in sync with each other, the bridge shook more violently, creating a sort of feedback loop.
After a few days, the bridge was closed due to safety concerns and construction crews reinforced it to prevent so much lateral flexibility. No one was injured in the event, and it might have been written off as just an odd coincidence were it not for mathematicians taking an interest in the phenomenon and seeing it as a startling example of the mathematics of synchronization.
The following is an interview with Roger Ridsdill Smith, Director, Ove Arup and Partners Ltd. and Project Director for the London Millennium Footbridge.
What was Arup’s role in the design and construction of the Millennium Bridge?
Arup have been the Engineer for the bridge, from its inception to completion of the modification works.
Arup won the international competition (over 200 entrants) in 1996 as the Engineer in a team with Foster and Partners (Architect) and Sir Anthony Caro (Artist).
Describe what happened to the bridge on 10 June 2000
It is estimated that between 80 000 and 100 000 people crossed the bridge during the first day. Analysis of video footage showed a maximum of 2000 people on the deck at any one time, resulting in a maximum density of between 1.3 and 1.5 people per square metre.
Unexpected excessive lateral vibrations of the bridge occurred. The movements took place mainly on the south span, at a frequency of around 0.8 Hz (the first south lateral mode), and on the central span, at frequencies of just under 0.5 Hz and 1.0 Hz (the first and second lateral modes respectively). More rarely, movement occurred on the north span at a frequency of just over 1.0 Hz (the first north lateral mode).
Excessive vibration did not occur continuously, but built up when a large number of pedestrians were on the affected spans of bridge and died down if the number of people on the bridge reduced, or if the people stopped walking. From visual estimation of the amplitude of the movements on the south and central span, the maximum lateral acceleration experienced on the bridge was between 200 and 250 milli-g. At this level of acceleration a significant number of pedestrians began to have difficulty in walking and held onto the balustrades for support.
No excessive vertical vibration was observed.
The number of pedestrians allowed onto the bridge was reduced on Sunday 11th June, and the movements occurred far more rarely. On the 12th June it was decided to close the bridge in order to fully investigate the cause of the movements.
What is Synchronous Lateral Excitation? Briefly, how did you model it mathematically?
The movement of the Millennium Bridge has been found to be due to the synchronisation of lateral footfall forces within a large crowd of pedestrians on the bridge. This arises because it is more comfortable for pedestrians to walk in synchronisation with the natural swaying of the bridge, even if the degree of swaying is initially very small. The pedestrians find this makes their interaction with the movement of the bridge more predictable and helps them maintain their lateral balance. This instinctive behaviour ensures that footfall forces are applied at the resonant frequency of the bridge, and with a phase such as to increase the motion of the bridge. As the amplitude of the motion increases, the lateral force imparted by individuals increases, as does the degree of correlation between individuals. It was subsequently determined, as described below, that for potentially susceptible spans there is a critical number of pedestrians that will cause the vibrations to increase to unacceptable levels.
How was the Millennium Bridge’s swaying different from the swaying that brought down the Tacoma Narrows Bridge in 1940?
The movements that occurred on the Tacoma Narrows Bridge were a resonant response to forces exerted by wind rather than pedestrians. The pedestrian induced forces that cause Synchronous Lateral Excitation are self-limiting because above a certain level of movement, pedestrians stop walking.
How did ARUP fix the issue with the Millennium Bridge?
Although a few previous reports of this phenomenon were found in the literature, none of them gave any reliable quantification of the lateral force due to the pedestrians, or any relationship between the force exerted and the movement of the deck surface.
Arup therefore carried out tests in 3 universities, as well as crowd walking tests on the bridge itself, in order to quantify the force exerted on the structure. Arup then designed a system of passive dampers which are mobilized by the lateral movements of the bridge. These dampers are arranged beneath the deck over the full length of the bridge, as well as at the piers and at the south abutment.
In order to demonstrate that the solution performed satisfactorily, Arup carried out a crowd test with 2000 pedestrians — the most extreme dynamic test ever carried out on a bridge. The bridge movements were less than a sixth of the allowable movements.
The bridge reopened in February 2002.