
Marilyn vos Savant Conditional Probability Follow Up

In the Marilyn vos Savant problem I posted yesterday, I focused on the subtle shift from simple to conditional probability the writer of the question appeared to miss.  Two of my students took a different approach.

The majority of my students, typical of AP Statistics students’ tendencies very early in the course, tried to use a “wall of words” to explain away the discrepancy rather than providing quantitative evidence.  But two fully embraced the probabilities and developed the following probability tree to incorporate all of the given probabilities.  Each branch shows the probability of a short or long straw given the present state of the system.  Notice that it includes both of the apparently confounding 1/3 and 1/2 probabilities.


The uncontested probability of the first person drawing the short straw is 1/4.

The probability of the second person is then (3/4)(1/3) = 1/4, exactly as expected.  The probabilities of the 3rd and 4th people can be similarly computed to arrive at the same 1/4 final result.

My students argued essentially that the writer was correct in saying the probability of the second person having the short straw was 1/3 in the instant after it was revealed that the first person didn’t have the short straw, but that she had forgotten to incorporate the probability of arriving in that state.  When you use all of the information, the probability of each person receiving the short straw remains at 1/4, just as expected.
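My students’ tree argument can be checked directly by chaining the branch probabilities; the sketch below (function name and setup are my own, not from the class) multiplies each earlier person’s chance of drawing a long straw by the conditional chance of drawing the short one from what remains.

```python
# Verify the probability-tree argument with exact fractions.
from fractions import Fraction

def p_short(position, total=4):
    """P(person at `position` draws the short straw), positions 1-indexed.

    Multiply the probabilities that every earlier person draws a long
    straw, then the conditional probability of drawing short from the
    straws that remain.
    """
    p = Fraction(1)
    remaining = total
    for _ in range(position - 1):
        p *= Fraction(remaining - 1, remaining)  # an earlier person draws long
        remaining -= 1
    return p * Fraction(1, remaining)            # this person draws short

print([p_short(k) for k in range(1, 5)])  # each position works out to 1/4
```

The second position reproduces the students’ (3/4)(1/3) = 1/4 product exactly, and every later position collapses to 1/4 the same way.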

Marilyn vos Savant and Conditional Probability

The following question appeared in the “Ask Marilyn” column in the August 16, 2015 issue of Parade magazine.  The writer seems stuck between two probabilities.


(Click here for a cleaned-up online version if you don’t like the newspaper look.)

I just pitched this question to my statistics class (we start the year with a probability unit).  I thought some of you might like it for your classes, too.

I asked them to do two things.  1) Answer the writer’s question, AND 2) Use precise probability terminology to identify the source of the writer’s conundrum.  Can you answer both before reading further?


Very briefly, the writer is correct in both situations.  If each of the four people draws a random straw, there is absolutely a 1 in 4 chance of each drawing the short straw.  Think about shuffling the straws and “dealing” one to each person much like shuffling a deck of cards and dealing out all of the cards.  Any given straw or card is equally likely to land in any player’s hand.

Now let the first person look at his or her straw.  It is either short or not.  The writer is then correct in claiming that the probability of each of the others holding the short straw is now 0 (if the first person found the short straw) or 1/3 (if the first person did not).  And this is precisely the source of the writer’s conundrum.  She’s actually asking two different questions but thinks she’s asking only one.

The 1/4 result is from a pure, simple probability scenario.  There are four possible equally-likely locations for the short straw.

The 0 and 1/3 results happen only after the first (or any other) person looks at his or her straw.  At that point, the problem shifts from simple probability to conditional probability.  After observing a straw, the question shifts to determining the probability that one of the remaining people has the short straw GIVEN that you know the result of one person’s draw.
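The simple-versus-conditional distinction shows up cleanly in a quick simulation (variable names are my own): counting over all trials gives the simple 1/4, while restricting to trials where the first person drew long gives the conditional 1/3.

```python
# Contrast the simple and conditional questions by simulation.
import random

random.seed(1)
# Each trial records which of the four positions holds the short straw.
trials = [random.randrange(4) for _ in range(100_000)]

# Simple probability: how often does person 2 (index 1) hold it overall?
p_simple = sum(t == 1 for t in trials) / len(trials)

# Conditional probability: same question, GIVEN person 1 drew a long straw.
given = [t for t in trials if t != 0]
p_conditional = sum(t == 1 for t in given) / len(given)

print(round(p_simple, 2), round(p_conditional, 2))  # ≈ 0.25 and ≈ 0.33
```

Both of the writer’s numbers come out of the same data; they simply answer different questions.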

So, the writer was correct in all of her claims; she just didn’t realize she was asking two fundamentally different questions.  That’s a pretty excusable lapse, in my opinion.  Slips into conditional probability are often missed.

Perhaps the most famous of these misses is the solution to the Monty Hall scenario that vos Savant famously posited years ago.  What I particularly love about this is the number of very-well-educated mathematicians who missed the conditional, wrote flaming retorts to vos Savant brandishing their PhDs, and ultimately found themselves publicly supporting errant conclusions.  You can read the original question, errant responses, and vos Savant’s very clear explanation here.


Probability is subtle and catches all of us at some point.  Even so, the careful thinking required to dissect and answer subtle probability questions is arguably one of the best exercises of logical reasoning around.


As a completely different connection, I think this is very much like Heisenberg’s Uncertainty Principle.  Until the first straw is observed, the short straw really could (does?) exist in all hands simultaneously.  Observing the system (looking at one person’s straw) permanently changes the state of the system, forever bifurcating the system into one of two potential future states:  the short straw is found in the first hand, or it is not.

CORRECTION (3 hours after posting):

I knew I was likely to overstate or misname something in my final connection.  Thanks to Mike Lawler (@mikeandallie) for a quick correction via Twitter.  I should have called this quantum superposition and not the uncertainty principle.  Thanks so much, Mike.

Monty Hall Continued

In my recent post describing a Monty Hall activity in my AP Statistics class, I shared an amazingly crystal-clear explanation of how one of my new students conceived of the solution:

If your strategy is staying, what’s your chance of winning?  You’d have to miraculously pick the money on the first shot, which is a 1/3 chance.  But if your strategy is switching, you’d have to pick a goat on the first shot.  Then that’s a 2/3 chance of winning.  

Then I got a good follow-up question from @SteveWyborney on Twitter:

Returning to my student’s conclusion about the 3-door version of the problem, she said,

The fact that there are TWO goats actually can help you, which is counterintuitive on first glance. 

Extending her insight and expanding the problem to any number of doors, including Steve’s proposed 1,000,000 doors, the more goats one adds to the problem statement, the more likely it becomes to win the treasure with a switching doors strategy.  This is very counterintuitive, I think.

For Steve’s formulation, only 1 initial guess out of the 1,000,000 possible doors would have selected the treasure; the additional goats seem to diminish one’s hopes of ever finding the prize.  Each of the other 999,999 initial doors would have chosen a goat.  So if 999,998 goat-doors are then opened until all that remains is the original door and one other, the contestant wins by not switching doors if and only if the prize was initially randomly selected, giving P(win by staying) = 1/1,000,000.  The probability of winning with the switching strategy is the complement, 999,999/1,000,000.  
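The N-door generalization, P(win by switching) = (N − 1)/N, is easy to check by Monte Carlo.  A minimal sketch (my own helper name, not from the post): since the host opens every goat door but one, switching wins exactly when the first pick was a goat.

```python
# Estimate P(win by switching) in an N-door Monty Hall game.
import random

def switch_wins(n_doors, trials=100_000, rng=random.Random(0)):
    wins = 0
    for _ in range(trials):
        prize = rng.randrange(n_doors)
        first_pick = rng.randrange(n_doors)
        # The host opens all remaining goat doors except one, so the
        # switcher wins precisely when the first pick missed the prize.
        wins += first_pick != prize
    return wins / trials

print(switch_wins(3))     # ≈ 2/3
print(switch_wins(1000))  # ≈ 0.999
```

The more goats in the game, the more certain it is that the first pick was wrong, which is exactly why switching improves as N grows.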


My student’s solution statement reminds me, on one hand, how critically important it is for teachers to always listen to and celebrate their students’ clever new insights and questions, many possessing depth beyond what students realize.  

The solution reminds me of several variations on “Everything is obvious in retrospect.”  I once read an even better version but can’t track down the exact wording.  A crude paraphrasing is

The more profound a discovery or insight, the more obvious it appears after.

I’d love a lead from anyone with the original wording.


Adding to the mystique of this problem, I read in the Wikipedia description that even the great problem poser and solver Paul Erdős didn’t believe the solution until he saw a computer simulation result detailing the solution.  

Probability and Monty Hall

I’m teaching AP Statistics for the first time this year, and my first week just ended.  I’ve taught statistics as portions of other secondary math courses and as a semester-long community college class, but never under the “AP” moniker.  The first week was a blast.  

To connect even the very beginning of the course to previous knowledge of all of my students, I decided to start the year with a probability unit.  For an early class activity, I played the classic Monty Hall game with the classes.  Some readers will recall the rules, but here they are just in case you don’t know them.  

  1. A contestant faces three closed doors.  Behind one is a new car. There is a goat behind each of the other two. 
  2. The contestant chooses one of the doors and announces her choice.  
  3. The game show host then opens one of the other two doors to reveal a goat.
  4. Now the contestant has a choice to make.  Should she
    1. Always stay with the door she initially chose, or
    2. Always change to the remaining unopened door, or
    3. Flip a coin to choose which door because the problem essentially has become a 50-50 chance of pure luck.

Historically, many people (including many very highly educated, degree-flaunting PhDs) intuit the solution to be “pure luck”.  After all, don’t you have just two doors to choose from at the end?

In one class this week, I tried a few simulations before I posed the question about strategy.  In the other, I posed the question of strategy before any simulations.  In the end, very few students intuitively believed that staying was a good strategy, with the remainder more or less equally split between the “switch” and “pure luck” options.  I suspect the greater number of “switch” believers (and dearth of stays) may have been because of earlier exposure to the problem.  

I ran my class simulation this way:  

  • Students split into pairs (one class had a single group of 3).  
  • One student was the host and secretly recorded a door number.  
  • The class decided in advance to always follow the “switch strategy”.  [Ultimately, following either stay or switch is irrelevant, but having all groups follow the same strategy gives you comparable data in the end.]
  • The contestant then chose a door, the host announced an open door, and the contestant switched doors.
  • The host then declared a win or loss based on the door number secretly recorded in the second step.
  • Each group repeated this 10 times and reported their final number of wins to the entire class.
  • This accomplished a reasonably large number of trials from the entire class in a very short time via division of labor.  Because they chose the switch strategy, my two classes ultimately reported 58% and 68% winning percentages.  
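The pooling step above is the heart of the activity, and it can be sketched in a few lines.  This is my own rough rendering of the classroom procedure (the 12-group count and seed are assumptions for illustration): each pair plays 10 switch-strategy games and reports its wins, and pooling the reports approximates the 2/3 long-run rate even when individual groups stray badly.

```python
# Simulate the classroom procedure: many small groups, pooled results.
import random

rng = random.Random(2015)

def group_wins(games=10):
    """Wins for one pair playing the switch strategy `games` times."""
    wins = 0
    for _ in range(games):
        prize = rng.randrange(3)
        first_pick = rng.randrange(3)
        wins += first_pick != prize  # switching wins iff first pick was a goat
    return wins

reports = [group_wins() for _ in range(12)]  # e.g., 12 pairs in a class
print(reports, sum(reports) / (10 * len(reports)))
```

Individual entries in `reports` can be as wild as the 1-of-10 and 10-of-10 groups my classes produced; the pooled rate is what settles near 2/3.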

Curiously, the class with the 58% winning percentage had one group with just 1 win out of 10 and another winning only 4 of 10.  It also had a group that reported winning 10 of 10.  Strange, but even with those low, unexpected results, the long-run behavior pooled across all groups still produced a clear majority winning percentage for switching.

Here’s a verbatim explanation from one of my students written after class for why switching is the winning strategy.  It’s perhaps the cleanest reason I’ve ever heard.

The faster, logical explanation would be: if your strategy is staying, what’s your chance of winning?  You’d have to miraculously pick the money on the first shot, which is a 1/3 chance.  But if your strategy is switching, you’d have to pick a goat on the first shot.  Then that’s a 2/3 chance of winning.  In a sense, the fact that there are TWO goats actually can help you, which is counterintuitive on first glance. 

Engaging students hands-on in the experiment made for a phenomenal pair of classes and discussions. While many left still a bit disturbed that the answer wasn’t 50-50, this was a spectacular introduction to simulations, conditional probability, and cool conversations about the inevitability of streaks in chance events. 

For those who are interested, here’s another good YouTube demonstration & explanation.