Teacher resources and professional development across the curriculum


Unit 7

Making Sense of Randomness

7.7 Other Types of Probability

LET'S MAKE A DEAL

Can you use what you know about the past to predict the future? What does past performance tell you about future returns? In roulette, the fact that the wheel has landed on a red space eight times in succession has no bearing on the next spin, even though we might be tempted to think that it does! Each spin of the wheel is an independent event. Many other situations in life do not exhibit such perfect independence, however. For instance, your chances of winning the lottery are greatly increased by purchasing a ticket, and your chances of being eaten by a shark are greatly reduced by staying on the beach. More realistically, what the weather will be doing in an hour depends to a large degree on what it is doing now. These examples and others like them come from the world of conditional probability.

A classic example of conditional probability is what is often referred to as the “Monty Hall Problem.” A game show contestant is faced with three doors, one of which conceals a new car; the other two conceal less desirable prizes, such as a donkey or a pile of sand. The contestant chooses a door, say door number 2. Suppose that the host then opens door number 1 to reveal a pile of sand. Now, with two closed doors remaining, the host offers the contestant a chance to switch his or her selection to door number 3. Should the contestant switch?

The probability that switching one's selection will result in winning the car depends on the probability that one's initial selection was incorrect. Note the key assumption here: the host knows where the car is and always opens a door concealing a klunker. The probability that your initial guess is correct is 1/3. The host's reveal tells you nothing new about your own door, so the probability that you were initially correct is still 1/3, which means that the probability that you were initially incorrect, and thus that switching your choice will prove fruitful, is 2/3. After the host reveals one of the klunkers, we are considering a conditional probability: the probability that the remaining door conceals the grand prize, given that one klunker has been revealed, is 2/3.

This is a much different result than if the host were to reveal one of the nonwinning doors before your first choice. In that scenario, your first choice would have a 1/2 probability of being correct, and if then given the option to switch, the probability that switching would be advantageous would be only 1/2. That the original situation gives the switching strategy a higher probability of success may seem counterintuitive, but one of the great strengths of probability theory is that it allows us to quantify the randomness we face and gives us a rational, logical way to make decisions, one that is helpful in situations in which our intuition is often wrong.
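The switching argument can also be checked empirically. The following sketch (not part of the course material) simulates many rounds of the game under the assumption stated above, that the host always opens a losing door the contestant did not pick:

```python
import random

def monty_hall_trial(switch):
    """Play one round; return True if the contestant wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    first_pick = random.choice(doors)
    # The host opens a door that is neither the pick nor the car.
    opened = random.choice([d for d in doors if d != first_pick and d != car])
    if switch:
        # Switch to the one remaining closed door.
        final_pick = next(d for d in doors if d != first_pick and d != opened)
    else:
        final_pick = first_pick
    return final_pick == car

trials = 100_000
wins_switch = sum(monty_hall_trial(switch=True) for _ in range(trials))
wins_stay = sum(monty_hall_trial(switch=False) for _ in range(trials))
print(f"switch wins: {wins_switch / trials:.3f}")  # close to 2/3
print(f"stay wins:   {wins_stay / trials:.3f}")    # close to 1/3
```

Running the simulation shows the switching strategy winning roughly twice as often as staying, in line with the 2/3 versus 1/3 analysis.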


OFF THE CHAIN

A concept from probability that is related to conditional probability, yet different in some important ways, is the Markov Chain. In a Markov Chain, the probability of the next event depends only on the current state: the outcome of one experiment affects the outcome of the next, but the history before the current state does not matter.

Let's say it is raining in our current location. There is a certain probability that in ten minutes it will still be raining, and a certain probability that in ten minutes it will be sunny. These two probabilities, the rain-rain transition probability and the rain-sun transition probability, depend on many factors. If we want to project what the weather will be like in an hour, we can model this as a succession of six 10-minute steps. Each state along the way affects the probabilities of transitioning to the next state. The rain-sun transition's probability will be different from the rain-rain transition's, and both will be different from the sun-sun transition's.

So, if it is raining right now, in order to use our model to figure out the likelihood that it will still be raining in an hour, we need to map out the various sequences of transitions and their probabilities. For example, let's say that the rain-rain transition has probability 2/3. This leaves the rain-sun transition with a probability of 1/3. Suppose the sun-sun transition has a probability of 4/5, which makes the probability of the sun-rain transition 1/5. We can organize these probabilities into a matrix to help us think through this exercise in weather forecasting:

From \ To    rain    sun
rain         2/3     1/3
sun          1/5     4/5

We can also construct a branching diagram to show the possible ways that this model can develop over six steps:

[Branching diagram: the possible rain/sun sequences over six 10-minute steps]

To find the probability that we end up with either rain or sun after six steps, we need an efficient way to account for every sequence of transitions that ends in that state. For instance, after two steps the possible ways for it to be sunny, assuming we begin with rain, are: rain–rain–sun or rain–sun–sun. Each of these sequences has two transitions, and each transition has an associated probability. We can multiply the probabilities of the transitions in a sequence to find the overall probability of events unfolding according to that specific sequence. Because both sequences end with the same result, we can add their probabilities to obtain the overall probability of ending with sun after two steps.
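The two-step calculation can be written out explicitly. Using the transition probabilities given above (a sketch of the arithmetic, with variable names of our own choosing):

```python
# Transition probabilities from the text:
# rain->rain 2/3, rain->sun 1/3, sun->rain 1/5, sun->sun 4/5.
p_rr, p_rs = 2/3, 1/3
p_sr, p_ss = 1/5, 4/5

# Starting from rain, two sequences end in sun after two steps:
# rain->rain->sun and rain->sun->sun.
# Multiply along each sequence, then add the sequences.
p_sun_after_two = p_rr * p_rs + p_rs * p_ss
print(p_sun_after_two)  # 2/9 + 4/15 = 22/45, about 0.489
```

So if it is raining now, the chance of sun twenty minutes from now comes out to 22/45, a little under one half.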

Multiplying and adding is manageable for two steps, but for greater numbers of steps this process becomes quite unwieldy, because each additional step multiplies the number of specific sequences to consider. Fortunately, we can find the probability of either rain or sun at any step by raising the matrix of transition probabilities to the power of the number of steps we wish to consider. While the details of why this works would take us beyond the level of this discussion, it suffices to say that multiplying two matrices together accounts for all of the various ways in which we can go from a particular initial state to a particular final state in two steps.

Therefore, to find the probability that it will be sunny after six steps (i.e., in one hour), we take our original probability matrix and raise it to the sixth power, which gives us (approximately):

From \ To    rain    sun
rain         0.38    0.62
sun          0.37    0.63

From this probability matrix, we can see that if it is currently raining, there is a 38% chance that it will be raining in an hour and a 62% chance that it will be sunny. This prediction, of course, is only as valid as the assumptions that went into our model. Often these assumptions are quite reasonable and powerful. Because of this, Markov Chains are at the heart of problems ranging from having a computer recognize human speech to identifying a region of the human genome responsible for a genetic disease.
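The matrix-power computation itself is short. This sketch uses NumPy to raise the transition matrix from the text to the sixth power:

```python
import numpy as np

# Transition matrix from the text: rows are the current state,
# columns the next state, in the order [rain, sun].
P = np.array([[2/3, 1/3],
              [1/5, 4/5]])

# Six 10-minute steps = one hour of weather.
P6 = np.linalg.matrix_power(P, 6)
print(P6.round(2))
# Starting from rain (row 0): about 0.38 rain, 0.62 sun in an hour.
```

Each entry of P6 sums, over all six-step paths, the products of the transition probabilities along each path, which is exactly the multiply-then-add bookkeeping described above.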

In this section we were introduced to two of the many ways in which probability is used in a modern context. We have also seen the important connection between probability and modeling. Our next section will bring us right to the forefront of both probability and mathematical modeling.


Next: 7.8 Modern Probability



© Annenberg Foundation 2013. All rights reserved.