Teacher resources and professional development across the curriculum
A common belief toward the end of the 19th century was that mathematics could be used to obtain an exact description of the world around us. The pinnacle of this belief was the doctrine of determinism, which holds that if we can know the state of the universe at one moment, and write out all the equations that govern it, we can accurately predict its state at any other moment in the future.
Much of the impetus for this popular view came from the work of Sir Isaac Newton in formulating both laws of motion and the mathematical techniques of calculus that could be used to make accurate predictions based on those laws. According to the Newtonian view, if one had the proper equations and reasonably accurate knowledge of starting conditions, one could predict the future behavior of a system with extreme accuracy.
This deterministic Newtonian view was, and is still, a powerful paradigm. It fostered mathematical understanding of aspects of the world around us that were previously inaccessible. A key example of the power of this line of thinking was Newton's solution of the two-body problem.
The two-body problem is a simplified version of the problem of describing the motions of the planets. Numerous philosophers and scientists throughout the centuries had attempted to explain planetary motion. Newton was the first to model these motions mathematically and to make accurate predictions about how the planets move and why.
Newton used his newly formulated law of gravitation to model the forces that two massive bodies exert on each other. Plugging quantitative values for these forces into his equations of motion, Newton was able to predict how the two objects would move with respect to one another. He found a number of different possible orbits that depended on specific conditions such as the masses of the bodies, their separation distance, and their initial velocities. In short, he found that any system of two orbiting bodies exhibits one of two possible behaviors. The two bodies either settle into a periodic orbit, cycling between positions forever, or they affect each other only briefly and then separate along asymptotic paths, in much the same way that a meteor shoots past a planet. According to Newton, the specific starting values of the system determined which one of these behaviors would occur. Once the system was quantified and put in motion, its fate was known and there were no surprises.
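Newton's dichotomy can be illustrated numerically. The sketch below, in illustrative units with GM = 1 (an assumption made for simplicity, not taken from the text), classifies the fate of a small body moving about a much heavier one by the sign of its specific orbital energy: negative energy means a closed, periodic orbit; positive energy means a one-time flyby along an asymptotic path.

```python
# Sketch: classifying two-body orbits by total energy (illustrative units, G*M = 1).
# For a small body at separation r moving with speed v, the sign of the specific
# orbital energy E = v^2/2 - GM/r decides the fate Newton found:
#   E < 0  -> a closed, periodic orbit (an ellipse)
#   E > 0  -> a brief encounter along an asymptotic path (a hyperbola)

def orbit_type(r, v, gm=1.0):
    """Classify the orbit from separation r and speed v."""
    energy = 0.5 * v**2 - gm / r
    return "periodic" if energy < 0 else "asymptotic"

# At r = 1 the escape speed is sqrt(2*GM/r), about 1.414 in these units.
print(orbit_type(1.0, 1.0))   # slower than escape speed -> "periodic"
print(orbit_type(1.0, 2.0))   # faster than escape speed -> "asymptotic"
```

The dividing line, zero energy, corresponds to the escape speed; a body started exactly there follows the borderline parabolic path.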
The solution of the two-body problem was a triumph of both science and mathematics. It gave hope that if the heavens could be understood mathematically, so could other aspects of life. Perhaps there was a bright future in which much of the unpleasant uncertainty in people's lives could be eliminated. It was assumed that Newton's methods could be easily extended from a system of two massive bodies to one with three and eventually to systems with any number of bodies. Unfortunately, the "tricks" that Newton applied to generate an exact solution to the two-body problem are not applicable to the three-body problem. Many of the greatest mathematical minds of the 18th and 19th centuries, including Euler and Lagrange, attempted to find a general, exact solution. The problem of describing the interrelated motion of more than two bodies remained so elusive that the King of Sweden, in the late 19th century, established a prize for its solution. The king phrased his challenge in these terms:
"Given a system of arbitrarily many mass points which attract each other according to Newton's laws, try to find, under the assumption that no two points ever collide, a representation of the coordinates of each point as a series in a variable which is some known function of time and for all of whose values the series converges uniformly."
The great French mathematician and scientist, Henri Poincaré, tackled this challenge. His response, while not providing the general solution that the king sought, laid the groundwork for what would later be known as chaos theory.
He examined a very specific case of the three-body problem, a case in which two of the bodies orbited each other as Newton described, while a third mass-less speck orbited them. The advantage of this purely theoretical model was that the speck exerted no gravitational attraction on the other two bodies.
As he delved into the problem, Poincaré abandoned the goal of finding exact solutions of the type desired by the king and instead focused on studying the qualitative behavior of the system. He realized that an exact solution, as was available in the two-body case, was not possible for the case involving three bodies. Fortunately, he also realized that this did not preclude answering important qualitative questions such as, "Is the system stable or will the planets eventually fly off to infinity?" What he found was that the behavior of the mass-less speck was wildly unpredictable.
Poincaré was able to explore such qualitative features of the system by using the concept of phase space. Phase space is an abstract space of the stated variables of a system. In other words, if you take all the possible combinations of, say, position and velocity and arrange them as coordinates in an abstract space, then a path through this space represents how the system will evolve. The initial conditions of a system correspond to where it starts in phase space.
The actual phase space for the three-body problem is 18-dimensional. Each of the three bodies requires three dimensions to describe its position, x, y, and z, and three dimensions to describe its velocity, ẋ, ẏ, and ż. By looking only at the mass-less speck and confining its position and velocity to the orbital plane, Poincaré reduced the 18 dimensions to 4: x, y, ẋ, and ẏ. Constraining the total energy of the system eliminates one more variable dimension, leaving a three-dimensional phase space, which is readily visualized.
What sorts of information can we infer from such a picture? It helps to look first at a simpler example of a phase space to get an idea of how we can use it to analyze the qualitative behavior of a system.
A more accessible example of this qualitative method is the phase portrait, a specific path or set of paths through the phase space of a system such as:
ẋ = sin x
This equation describes an object whose velocity depends sinusoidally on its position. Although this system can be solved directly through integration, looking at the phase portrait will tell us more about the actual behavior of the system than would be obvious from an exact solution.
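As a rough numerical sketch (not part of the original discussion), we can follow the flow of this equation with simple Euler steps and watch where a particle ends up; the step size and starting point below are arbitrary illustrative choices.

```python
import math

# Sketch: following the flow of x' = sin x by simple Euler steps.
# Starting anywhere in (0, pi), the velocity sin x is positive, so the
# particle drifts right until it settles at the equilibrium x = pi.

def flow(x0, dt=0.01, steps=5000):
    x = x0
    for _ in range(steps):
        x += math.sin(x) * dt   # velocity at the current position
    return x

print(round(flow(0.5), 3))   # -> 3.142, the value of pi
```

Wherever the particle starts between 0 and π, it comes to rest at π, a fact the phase portrait makes visible at a glance.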
What this picture portrays is the velocity, ẋ, of the system at any given position, x. The arrows on the x-axis serve to remind us of the directional component of the velocity. Values of x that yield positive velocities will move the particle to the right; values of x associated with negative velocities will move it to the left.
The places where our path crosses the x-axis correspond to positions that yield no velocity (because sin x = 0 at integer multiples of π). These are known as equilibrium points, because if we started a particle at any of these points, it would not be influenced to move in any direction. There are two main types of equilibrium points, stable and unstable. A stable equilibrium point is one to which a particle would return if it were displaced by some small amount. Think of releasing a grape on the inside rim of a bowl. No matter where you release the grape, it will always end up at the center of the bottom of the bowl. This is a stable equilibrium point. If you were to turn the bowl over and place the grape very carefully in the exact center of its top, the grape would stay where you put it. If you placed it anywhere else on the outside of the bowl, however, it would roll off. The center of the overturned bowl's top is, therefore, an unstable equilibrium point.
We can determine what kind of equilibrium points we have in our phase portrait by looking at the velocities associated with particle movement around each point. The velocity to the left of point A is positive, driving the particle to the right. The velocity to the right of point A is negative, driving the particle to the left. This means that if a particle starts out anywhere relatively close to point A, it will eventually come to rest at point A. Point A is, therefore, a stable equilibrium point. Because it seems to attract particles, we can call it an attractor.
Point B, on the other hand, is a bit different. The velocities corresponding to positions to its left are negative, tending to drive the particle away from the point. The velocities corresponding to positions to its right are positive, also tending to drive the particle away from point B. This indicates that starting a particle anywhere near point B will result in that particle moving away from that position and toward one of the attractors. Point B is, therefore, an unstable equilibrium point. Because it tends to repel particles, we can call it a repeller.
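This left/right reasoning amounts to checking the slope of sin x at each zero crossing: a negative slope means nearby velocities push the particle back toward the point (an attractor), while a positive slope pushes it away (a repeller). Since the figure is not reproduced here, the sketch below assumes point A sits at x = π and point B at x = 0; those coordinates are illustrative, not taken from the text.

```python
import math

# Sketch: classifying the equilibria of x' = sin x by the slope of sin x
# at each zero crossing. Negative slope -> velocities on either side point
# inward (attractor); positive slope -> they point outward (repeller).

def classify(x_eq):
    slope = math.cos(x_eq)      # d/dx of sin x at the equilibrium
    return "attractor" if slope < 0 else "repeller"

print(classify(math.pi))        # assumed location of point A -> "attractor"
print(classify(0.0))            # assumed location of point B -> "repeller"
```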
Examining systems in this way, geometrically, has the advantage of enabling us to see certain aspects of their behavior very clearly, without having to plow through pages of equations. By identifying attractors and repellers, we can tell how a system will evolve qualitatively over time, depending upon where it starts.
Although in the preceding example we saw how certain points act as attractors or repellers, this does not always have to be the case. For example, if we look at the phase portrait of a simple harmonic oscillator, such as the mass and spring that we examined previously (but without friction this time), we see that there is a nice closed loop corresponding to each combination of starting position and starting velocity.
Wherever we start our system, we see that it will trace out a path in phase space, cycling around the loop determined by its starting conditions. More generally, a system can evolve toward following a particular path rather than toward a single resting point. This means that attractors do not have to be single points; instead, they can be paths, or trajectories, through phase space.
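A small numerical sketch (assuming a unit mass and unit spring constant, which the text does not specify) shows why these loops close: the energy, which labels each loop, is conserved when there is no friction. The semi-implicit Euler method is used below because, unlike ordinary Euler, it keeps the computed path on an essentially fixed loop.

```python
# Sketch: a frictionless mass-and-spring system, x'' = -x (unit mass and
# unit spring constant assumed). Integrating with semi-implicit (symplectic)
# Euler keeps the phase-space path a closed loop: the energy
# E = (v^2 + x^2)/2, which labels each loop, stays essentially constant.

def trace_energy(x, v, dt=0.001, steps=10000):
    for _ in range(steps):
        v -= x * dt             # update velocity from the spring force
        x += v * dt             # then position from the new velocity
    return 0.5 * (v**2 + x**2)  # energy at the end of the run

e0 = 0.5 * (0.0**2 + 1.0**2)    # start at x = 1, v = 0, so E = 0.5
print(abs(trace_energy(1.0, 0.0) - e0) < 1e-3)   # energy preserved -> True
```

A different starting point gives a different energy and hence a different loop, which is why the phase portrait shows a nested family of closed curves.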
This notion of examining phase space was part of the contribution that won Poincaré the prize for the three-body problem. He did not find an exact solution, but his techniques were so important that he won the contest anyway. He found that the three-body system exhibits a range of possible behaviors, including some that seem to be wildly unpredictable; that is, even if you know the state of the system at some point in time, there is no guarantee that you will be able to say what the state will be at some significantly later time. In the short term, things are predictable, but in the long term, there is no way to know for sure what will happen.
These initial insights from Poincaré dealt a blow to the Newtonian paradigm of deterministic predictability. Poincaré's dynamics were still deterministic in the sense that the state of a system at any one time depends on its state at a previous time. However, in opposition to Newton's viewpoint, they were far from delineating a predictable future. Poincaré's concept, which represented the first notion of what is now called mathematical chaos, made it clear that there is a limit to how far into the future we can see using mathematics. These ideas, though shocking, did not really take hold until the mid-20th century when the advent of the computer enabled mathematicians to practice mathematics in an entirely new way. With the help of computers, mathematicians soon found another remarkable aspect of chaotic behavior, the idea of sensitive dependence, to which we will now turn.