Tag Archives: probability

Marilyn vos Savant and Conditional Probability

The following question appeared in the “Ask Marilyn” column in the August 16, 2015 issue of Parade magazine.  The writer seems stuck between two probabilities.

Marilyn_conditional

(Click here for a cleaned-up online version if you don’t like the newspaper look.)

I just pitched this question to my statistics class (we start the year with a probability unit).  I thought some of you might like it for your classes, too.

I asked them to do two things.  1) Answer the writer’s question, AND 2) Use precise probability terminology to identify the source of the writer’s conundrum.  Can you answer both before reading further?

SOLUTION ALERT:

Very briefly, the writer is correct in both situations.  If each of the four people draws a random straw, each person absolutely has a 1 in 4 chance of drawing the short straw.  Think about shuffling the straws and “dealing” one to each person, much like shuffling a deck of cards and dealing out all of the cards.  Any given straw or card is equally likely to land in any player’s hand.

Now let the first person look at his or her straw.  It is either short or not.  The writer is then correct in claiming that the probability of one of the others holding the short straw is now 0 (if the first person found the short straw) or 1/3 (if the first person did not).  And this is precisely the source of the writer’s conundrum.  She’s actually asking two different questions but thinks she’s asking only one.

The 1/4 result is from a pure, simple probability scenario.  There are four possible equally-likely locations for the short straw.

The 0 and 1/3 results happen only after the first (or any other) person looks at his or her straw.  At that point, the problem shifts from simple probability to conditional probability.  After observing a straw, the question shifts to determining the probability that one of the remaining people has the short straw GIVEN that you know the result of one person’s draw.
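
For anyone who wants to verify both numbers, a quick Python simulation of the straw draw (a sketch, nothing more) makes the distinction concrete:

    import random

    trials = 100_000
    first_short = 0        # draws where person 1 holds the short straw
    second_short = 0       # draws where person 2 holds it, given person 1 does not
    first_not_short = 0

    for _ in range(trials):
        straws = [1, 0, 0, 0]      # 1 marks the short straw
        random.shuffle(straws)     # "dealing" the straws like cards
        if straws[0] == 1:
            first_short += 1
        else:
            first_not_short += 1
            if straws[1] == 1:
                second_short += 1

    print(first_short / trials)             # simple probability: ~0.25
    print(second_short / first_not_short)   # conditional probability: ~0.333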

So, the writer was correct in all of her claims; she just didn’t realize she was asking two fundamentally different questions.  That’s a pretty excusable lapse, in my opinion.  Slips into conditional probability are often missed.

Perhaps the most famous of these misses is the solution to the Monty Hall scenario that vos Savant famously posited years ago.  What I particularly love about this is the number of very well-educated mathematicians who missed the conditional, wrote flaming retorts to vos Savant brandishing their PhDs, and ultimately found themselves publicly supporting errant conclusions.  You can read the original question, errant responses, and vos Savant’s very clear explanation here.

CONCLUSION:

Probability is subtle and catches all of us at some point.  Even so, the careful thinking required to dissect and answer subtle probability questions is arguably one of the best exercises of logical reasoning around.

RANDOM(?) CONNECTION:

As a completely different connection, I think this is very much like Heisenberg’s Uncertainty Principle.  Until the first straw is observed, the short straw really could (does?) exist in all hands simultaneously.  Observing the system (looking at one person’s straw) permanently changes the state of the system, forever bifurcating it into one of two potential future states:  the short straw is found in the first hand, or it is not.

CORRECTION (3 hours after posting):

I knew I was likely to overstate or misname something in my final connection.  Thanks to Mike Lawler (@mikeandallie) for a quick correction via Twitter.  I should have called this quantum superposition and not the uncertainty principle.  Thanks so much, Mike.

Innumeracy and Sharks

Here’s a brief snippet from a conversation about the recent spate of shark attacks in North Carolina as I heard yesterday morning (approx 6AM, 7/4/15) on CNN.

George Burgess (Director, Florida Program for Shark Research):  “One thing is going to happen and that is there are going to be more [shark] attacks year in and year out simply because the human population continues to rise and with it a concurrent interest in aquatic recreation.  So one of the few things I, as a scientist, can predict with some certainty is more attacks in the future because there’s more people.”

Alison Kosik (CNN anchor):  “That is scary and I just started surfing so I may dial that back a bit.”

This marks another great teaching moment spinning out of innumeracy in the media.  I plan to drop just those two paragraphs on my classes when school restarts this fall and open the discussion.  I wonder how many will question the implied but irrational probability assessment in Kosik’s reply.

TOO MUCH COVERAGE?

Burgess argued elsewhere that

Increased documentation of the incidents may also make people believe attacks are more prevalent.  (Source here.)

It’s certainly plausible that some people think shark attacks are more common than they really are.  But that raises the question of just how nervous a swimmer actually should be.

MEDIA MANIPULATION

CNN, like almost all mass media (though not nearly as bad as some), shamelessly hyper-focuses on catchy news banners, and what could be catchier than something like “Shark attacks spike just as tourists crowd beaches on busy July 4th weekend”?  Was Kosik reading a prepared script that distorts the underlying probability, or was she showing signs of innumeracy?  I hope it’s not both, but neither option is good.

IRRATIONAL PROBABILITY

So just how uncommon is a shark attack?  In a few minutes of Web research, I found that there were 25 shark attacks in North Carolina from 2005 to 2014.  There was at least one every year, with a maximum of 5 attacks in 2010 (source).  This year’s 7 attacks is certainly unusually high relative to the recent annual average of 2.5, but John Allen Paulos reminded us in Innumeracy that a small multiple of a very small probability (in this case, about 3 times) is still a very small probability.

In another place, Burgess noted

“It’s amazing, given the billions of hours humans spend in the water, how uncommon attacks are,” Burgess said, “but that doesn’t make you feel better if you’re one of them.”  (Source here.)

18.9% of NC visitors went to the beach (source).  In 2012, there were approximately 45.4 million visitors to NC (source).  To overestimate the number of beachgoers, let’s say 19% of 46 million visitors, or 8.7 million people, went to NC beaches.  Seriously underestimating the number of beachgoers who enter the ocean, assume only 1 in 8 beachgoers entered the ocean.  That’s still a very small 7 attacks out of roughly 1 million people in the ocean.  Because beachgoers almost always enter the ocean at some point (in my experience), the true rate likely is much closer to 2 or fewer attacks per million.
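
If you want to redo the arithmetic yourself, here is a quick Python sketch of the estimate, using the rounded figures above:

    visitors = 46_000_000     # 2012 NC visitors, deliberately rounded up
    beach_rate = 0.19         # fraction who went to the beach, rounded up
    ocean_rate = 1 / 8        # fraction of beachgoers entering the ocean, deliberately low
    attacks = 7               # NC shark attacks so far this year

    in_ocean = visitors * beach_rate * ocean_rate
    print(in_ocean)                           # ~1.09 million people in the ocean
    print(attacks / in_ocean * 1_000_000)     # ~6.4 attacks per million, an extreme overestimate

    # If nearly all 8.7 million beachgoers enter the water, the rate drops further:
    print(attacks / (visitors * beach_rate) * 1_000_000)   # ~0.8 attacks per million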

To put that in perspective, 110,406 people were injured in car accidents in 2012 in NC (source).  The probability of getting injured driving to the beach is many orders of magnitude larger than the likelihood of ever being attacked by a shark.

CONCLUSIONS AND READING SUGGESTIONS

Alison Kosik should keep up her surfing.

If you made it to a NC beach safely, enjoy the swim.  It’s safer than your trip there was or your trip home is going to be.  But even those trips are reasonably safe.

I certainly am not diminishing the anguish of accident victims (shark, auto, or otherwise); accidents happen.  But don’t make too much of any single one.  Be intelligent, be reasonable, and enjoy life.

In the end, I hope my students learn to question facts and probabilities.  I hope they always question “How reasonable is what I’m being told?”

Here’s a much more balanced article on shark attacks from NPR:
Don’t Blame the Sharks For ‘Perfect Storm’ of Attacks In North Carolina.

Book suggestions:
1)  Innumeracy, John Allen Paulos
2) Predictably Irrational, Dan Ariely

CAS and Normal Probability Distributions

My presentation this past Saturday at the 2015 T^3 International Conference in Dallas, TX was on the underappreciated applicability of CAS to statistics.  This post recaps some of what I shared there from my first year teaching AP Statistics.

MOVING PAST OUTDATED PEDAGOGY

It’s been decades since we’ve required students to use tables of values to compute trigonometric and radical values by hand.  It seems odd to me that we continue to do exactly that today in so many statistics classes, including AP Statistics.  While the College Board permits statistics-capable calculators, it still provides probability tables with every exam.  The message is clear:  it is still “acceptable” to teach statistics using outdated probability tables.

In this, my first year teaching AP Statistics, I decided it was time for my students and me to break completely from this lingering past.  My statistics classes this year have been 100% software-enabled.  Not one of my students has been required to use or even see any tables of probability values.

My classes also have been fortunate to have complete CAS availability on their laptops.  My school’s math department deliberately adopted the TI-Nspire platform in part because that software looks and operates exactly the same on tablet, computer, and handheld platforms.  We primarily use the computer-based version for learning because of its speed and the visualization afforded by the larger screen “real estate.”  We are shifting to school-owned handhelds in our last month before the AP Exam to gain practice on the platform required for the exam.

The remainder of this post shares ways my students and I have learned to apply the TI-Nspire CAS to some statistical questions around normal distributions.

FINDING NORMAL AREAS AND PROBABILITIES

Assume a manufacturer makes golf balls whose distances traveled under identical testing conditions are approximately normally distributed with a mean of 295 yards and a standard deviation of 3 yards.  What is the probability that one such randomly selected ball travels more than 300 yards?

Traditional statistics courses teach students to transform the 300 yards into a z-score to look up in a probability table.  That approach obviously works, but with appropriate technology, I believe there will be far less need to use or even compute z-scores, in much the same way that converting logarithms to base-10 or base-e in order to use logarithmic tables is anachronistic with modern scientific calculators.

TI calculators and other technologies allow computations of non-standard normal curves.  Notice the Nspire CAS calculation below the curve uses both bounds of the area of interest along with the mean and standard deviation of the distribution to accomplish the computation in a single step.

norm1

norm2

So the probability of a randomly selected ball from the population described above going more than 300 yards is 4.779%.
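
For readers using other software, the same one-step computation is available almost everywhere.  Here, for instance, is a sketch of the equivalent in Python with SciPy:

    from scipy.stats import norm

    # P(X > 300) for X ~ Normal(mean = 295, sd = 3); no z-score needed
    p = norm.sf(300, loc=295, scale=3)   # sf is the survival function, 1 - cdf
    print(p)                             # ~0.04779, the 4.779% found above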

GOING BACKWARDS

Now assume the manufacturing process can control the mean distance traveled.  What mean should it use so that no more than 1% of the golf balls travel more than 300 yards?

Depending on the available normal probability tables, the traditional approach to this problem is again to work with z-scores.  A modified CAS version of this is shown below.

norm4

Therefore, the manufacturer should produce a ball that travels a mean of 293.021 yards under the given conditions.

The approach is legitimate, and I shared it with my students.  Several of them ultimately chose a more efficient single line command:

norm6

But remember that the invNorm() and normCdf() commands on the Nspire are themselves functions, so their internal parameters are available to solve commands.  A pure CAS “forward solution,” still using only the normCdf() command to which my students were first introduced, makes use of this to determine the missing center.

norm5
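
Both routes work outside the Nspire as well.  Here is a Python/SciPy sketch of the single-line inverse command and of the “forward solve” (the helper name f below is just my label):

    from scipy.stats import norm
    from scipy.optimize import brentq

    # Inverse route, the invNorm() idea: mu = 300 - z_(0.99) * sigma
    mu_inverse = 300 - norm.ppf(0.99) * 3
    print(mu_inverse)                    # ~293.021 yards

    # Forward route: numerically solve P(X > 300) = 0.01 for the mean,
    # mirroring solve() applied to a normCdf() statement
    f = lambda mu: norm.sf(300, loc=mu, scale=3) - 0.01
    print(brentq(f, 280, 300))           # same ~293.021 yards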

DIFFERENTIATING INSTRUCTION

While calculus techniques definitely are NOT part of the AP Statistics curriculum, I do have several students jointly enrolled in various calculus classes.  Some of them astutely noted the similarity between the area-based arguments above and the area-under-a-curve techniques they were learning in their calculus classes.  Never being one to pass up a teaching moment, I pulled a few of these students aside to show them that the previous solutions also could have been derived via integration.

norm7

I can’t recall any instances of my students actually employing integrals to solve statistics problems this year, but just having the connection verified completely solidified the mathematics they were learning in my class.
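
For the record, the integral those students set up can be sketched numerically in a few lines:

    from math import exp, sqrt, pi
    from scipy.integrate import quad

    # Area under the Normal(295, 3) density to the right of 300
    pdf = lambda x: exp(-(x - 295)**2 / (2 * 3**2)) / (3 * sqrt(2 * pi))
    area, _ = quad(pdf, 300, 1000)   # 1000 is far enough out to capture the tail
    print(area)                      # ~0.04779 once again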

CONFIDENCE INTERVALS

The mean lead level of 35 crows in a random sample from a region was 4.90 ppm and the standard deviation was 1.12 ppm.  Construct a 95 percent confidence interval for the mean lead level of crows in the region.

Many students–mine included–have difficulty comprehending confidence intervals and resort to “black box” confidence interval tools available in most (all?) statistics-capable calculators, including the TI-Nspire.

As n is greater than 30, I can compute the requested z-interval by filling in just four entries in a pop-up window and pressing Enter.

norm8

Convenient, for sure, but this approach doesn’t help confused students understand that the confidence interval is nothing more than the bounds of the middle 95% of the normal pdf described in the problem, a fact crystallized by applying the same tools the students have been using for weeks by that point in the course.

norm9

Notice in the solve+normCdf() combination commands that the unknown this time was a bound and not the mean as was the case in the previous example.
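
To make the point one more way, here is a sketch of the same hand-built interval in Python, using the adjusted standard deviation of the sampling distribution of the mean:

    from math import sqrt
    from scipy.stats import norm

    xbar, s, n = 4.90, 1.12, 35
    z = norm.ppf(0.975)           # ~1.960 cuts off the middle 95%
    margin = z * s / sqrt(n)      # s / sqrt(n) is the sd of the sample mean
    print(xbar - margin, xbar + margin)   # ~ (4.529, 5.271) ppm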

EXTENDING THE RULE OF FOUR

I’ve used the “Rule of Four” in every math class I’ve taught for over two decades, explaining that every mathematical concept can be explained or expressed four different ways:  Numerically, Algebraically, Graphically (including graphs and geometric figures), and Verbally.  While not the contextual point of his quote, I often cite MIT’s Marvin Minsky here:

“You don’t understand anything until you learn it more than one way.”

Learning to translate between the four representations grants deeper understanding of concepts and often gives access to solutions in one form that may be difficult or impossible in other forms.

After my decades-long work with CAS, I now believe there is actually a 5th representation of mathematical ideas:  Tools.  Knowing how to translate a question into a form that your tool (in the case of CAS, the tool is computers) can manage or compute creates a different representation of the problem and requires deeper insights to manage the translation.

I knew some of my students this year had deeply embraced this “5th Way” when one showed me his alternative approach to the confidence interval question:

norm10

I found this solution particularly lovely for several reasons.

  • The student knew about lists and statistical commands and on a whim tried combining them in a novel way to produce the desired solution.
  • He found the confidence interval directly using a normal distribution command rather than the arguably more convenient black box confidence interval tool.  He also showed explicitly his understanding of the distribution of sample means by adjusting the given standard deviation for the sample size.
  • Finally, while using a CAS sometimes means getting answers in forms you didn’t expect, in this case I think the CAS command and list output actually provide a cleaner, interval-looking result than the black box confidence interval command, one much more intuitively connected to the actual meaning of a confidence interval.
  • While I haven’t tried it out, it seems to me that this approach also should work on non-CAS statistical calculators that can handle lists.

(a very minor disappointment, quickly overcome)

Returning to my multiple approaches, I tried my student’s newfound approach with a normCdf() command.

norm11

Alas, my Nspire returned the very command I had entered, indicating that it didn’t understand the question I had posed.  While a bit disappointed that this approach didn’t work, I was actually excited to have discovered a boundary in the current programming of the Nspire.  Perhaps someday this approach will also work, but my students and I have many other directions we can exploit to find what we need.

I’m convinced I made the right decision at the start of this school year to leave the probability tables behind in their appropriate historical dust and fully embrace the power of modern classroom technology to enhance my students’ statistical learning and understanding.  They know more, understand the foundations of statistics better, and as a group feel much more confident and flexible.  Whether their scores on next month’s AP exam will reflect their growth, I can’t say, but they’ve definitely learned more statistics this year than any previous statistics class I’ve taught.

COMPLETE FILES FROM MY 2015 T3 PRESENTATION

If you are interested, you can download here the PowerPoint file for my entire Nspired Statistics and CAS presentation from last week’s 2015 T3 International Conference in Dallas, TX.  While not the point of this post, the presentation started with a non-calculus derivation/explanation of linear regressions.  Using some great feedback from Jeff McCalla, here is an Nspire CAS document creating the linear regression computation updated from what I presented in Dallas.  I hope you found this post and these files helpful, or at least thought-provoking.

Probability, Polynomials, and Sicherman Dice

Three years ago, I encountered a question on the TI-Nspire Google group asking if there was a way to use CAS to solve probability problems.  The ideas I pitched in my initial response and follow-up a year later (after first using it with students in a statistics class) have been thoroughly re-confirmed in my first year teaching AP Statistics.  I’ll quickly re-share them below before extending the concept with ideas I picked up a couple weeks ago from Steve Phelps’ session on Probability, Polynomials, and CAS at the 64th annual OCTM conference earlier this month in Cleveland, OH.

BINOMIALS:  FROM POLYNOMIALS TO SAMPLE SPACES

Once you understand them, binomial probability distributions aren’t that difficult, but the initial conjoining of combinatorics and probability makes this a perennially difficult topic for many students.  The standard formula for the probability of exactly K successes in N attempts of a binomial situation, where p is the probability of a single success in a single attempt, is no less daunting:

\displaystyle \left( \begin{matrix} N \\ K \end{matrix} \right) p^K (1-p)^{N-K} = \frac{N!}{K! (N-K)!} p^K (1-p)^{N-K}

But that is almost exactly the same result one gets by raising binomials to whole number powers, so why not use a CAS to expand a polynomial and at least compute the \displaystyle \left( \begin{matrix} N \\ K \end{matrix} \right) portion of the probability?  One added advantage of using a CAS is that you could use full event names instead of abbreviations, making it even easier to identify the meaning of each event.

prob1

The TI-Nspire output above shows the entire sample space resulting from flipping a coin 6 times.  Each term is an event.  Within each term, the exponent of each variable notes the number of times that variable occurs and the coefficient is the number of times that combination occurs.  The overall exponent in the expand command is the number of trials.  For example, the middle term– 20\cdot heads^3 \cdot tails^3 –says that there are 20 ways you could get 3 heads and 3 tails when tossing a coin 6 times. The last term is just tails^6, and its implied coefficient is 1, meaning there is just one way to flip 6 tails in 6 tosses.

The expand command makes more sense than memorized algorithms and provides context to students until they gain a deeper understanding of what’s actually going on.
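
Any CAS works here.  For those without an Nspire, a sketch of the identical expansion in Python’s SymPy:

    from sympy import symbols, expand

    heads, tails = symbols('heads tails')

    # The full sample space for 6 coin flips: exponents count occurrences,
    # coefficients count the number of ways each combination occurs
    print(expand((heads + tails)**6))
    # ... + 20*heads**3*tails**3 + ... + tails**6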

FROM POLYNOMIALS TO PROBABILITY

Still using the expand command, if each variable is preceded by its probability, the CAS result combines the entire sample space AND the corresponding probability distribution function.  For example, when rolling a fair die four times, the distribution for 1s vs. not 1s (2, 3, 4, 5, or 6) is given by

prob2

The highlighted term says there is a 38.58% chance that there will be exactly one 1 and any three other numbers (2, 3, 4, 5, or 6) in four rolls of a fair 6-sided die.  The probabilities of the other four events in the sample space are also shown.  Within the TI-Nspire (CAS or non-CAS), one could use a command to give all of these probabilities simultaneously (below), but then one has to remember whether the non-contextualized probabilities are for increasing or decreasing values of which binomial outcome.

prob3

Particularly early on in their explorations of binomial probabilities, students I’ve taught have shown a very clear preference for the polynomial approach, even when allowed to choose any approach that makes sense to them.
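
Here is the same probability-weighted expansion sketched in SymPy, with exact fractions so the 38.58% term is easy to spot (the names one and other are just my labels for the two outcomes):

    from sympy import symbols, expand, Rational

    one, other = symbols('one other')   # "one" = roll a 1; "other" = roll 2-6

    dist = expand((Rational(1, 6)*one + Rational(5, 6)*other)**4)
    print(dist)
    # The one*other**3 term has coefficient 125/324, about 0.3858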

TAKING POLYNOMIALS FROM ONE DIE TO MANY

Given these earlier thoughts, I was naturally drawn to Steve Phelps’ “Probability, Polynomials, and CAS” session at the November 2014 OCTM annual meeting in Cleveland, OH.  Among the ideas he shared was using polynomials to create the distribution function for the sum of two fair 6-sided dice.  My immediate thought was to apply my earlier ideas.  As noted in my initial post, the expansion approach above is not limited to binomial situations.  My first reflexive CAS command in Steve’s session, before he shared anything, was this.

prob4

By writing the outcomes in words, the CAS interprets them as variables.  I got the entire sample space, but didn’t gain anything beyond a long polynomial.  The first output– five^2 –with its implied coefficient says there is 1 way to get 2 fives.  The second term– 2\cdot five \cdot four –says there are 2 ways to get 1 five and 1 four.  It’s nice that the technology gives me all the terms so quickly, but it doesn’t help me get a distribution function for the sum.  I got the distributions of the specific outcomes, but the way I defined the variables didn’t permit summing their actual numerical values.  Time to listen to the speaker.

He suggested using a common variable, X, for all faces with the value of each face expressed as an exponent.  That is, a standard 6-sided die would be represented by X^1+X^2+ X^3+X^4+X^5+X^6 where the six different exponents represent the numbers on the six faces of a typical 6-sided die.  Rolling two such dice simultaneously is handled as I did earlier with the binomial cases.

NOTE:  Exponents are handled in TWO different ways here.  1) Within a single polynomial, an exponent is an event value (the number on a face), and 2) Outside a polynomial, an exponent indicates the number of times that polynomial is applied, i.e., the number of dice rolled.  Coefficients have the same meaning as before.

Because the variables are now the same, when specific terms are multiplied, their exponents (face values) will be added–exactly what I wanted to happen.  That means the sum of the faces when you roll two dice is determined by the following.

prob5

Notice that the output is a single polynomial.  Therefore, the exponents are the values of individual cases.  For a couple examples, there are 3 ways to get a sum of 10 \left( 3 \cdot x^{10} \right) , 2 ways to get a sum of 3 \left( 2 \cdot x^3 \right) , etc.  The most commonly occurring outcome is the term with the largest coefficient.  For rolling two standard fair 6-sided dice, a sum of 7 is the most common outcome, occurring 6 times \left( 6 \cdot x^7 \right) .  That certainly simplifies the typical 6×6 tables used to compute the sums and probabilities resulting from rolling two dice.
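
The same multiplication is a one-liner in any CAS; a SymPy sketch:

    from sympy import symbols, expand

    x = symbols('x')
    die = sum(x**k for k in range(1, 7))   # x + x**2 + ... + x**6

    print(expand(die**2))
    # x**12 + 2*x**11 + 3*x**10 + ... + 6*x**7 + ... + 2*x**3 + x**2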

While not the point of Steve’s talk, I immediately saw that technology had just opened the door to problems that had been computationally inaccessible in the past.  For example, what is the most common sum when rolling 5 dice and what is the probability of that sum?  On my CAS, I entered this.

prob6

In the middle of the expanded polynomial are two terms with the largest coefficients, 780 \cdot x^{17} and 780 \cdot x^{18}, meaning sums of 17 and 18 are the most common, equally likely outcomes when rolling 5 dice.  As there are 6^5=7776 possible outcomes when rolling a die 5 times, the probability of each of these is \frac{780}{7776} \approx 0.1003, or about a 10.03% chance each for a sum of 17 or 18.  This can be verified by inserting the probabilities as coefficients on each term before CAS expanding.

prob7

With thought, this shouldn’t be surprising:  the expected value of a single roll of a 6-sided die is 3.5, and 5 \cdot 3.5 = 17.5, so the integers on either side of 17.5 (17 & 18) should be the most common sums.  Technology confirms intuition.
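
Extracting the most common sum and its probability directly from the coefficients is also easy to automate; a SymPy sketch:

    from sympy import symbols, expand, Poly

    x = symbols('x')
    die = sum(x**k for k in range(1, 7))
    p = Poly(expand(die**5), x)

    # Pull each sum's count straight from the coefficients (sums run 5 to 30)
    counts = {deg: p.coeff_monomial(x**deg) for deg in range(5, 31)}
    best = max(counts, key=counts.get)       # ties resolve to the lower sum
    print(best, counts[best], float(counts[best] / 6**5))   # 17 780 ~0.1003 (18 ties it)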

ROLLING DIFFERENT DICE SIMULTANEOUSLY

What is the distribution of sums when rolling a 4-sided and a 6-sided die together?  No problem.  Just multiply two different polynomials, one representative of each die.

prob8

The output shows that sums of 5, 6, and 7 would be the most common, each occurring four times with probability \frac{1}{6} and together accounting for half of all outcomes of rolling these two dice together.

A BEAUTIFUL EXTENSION–SICHERMAN DICE

My most unexpected gain from Steve’s talk happened when he asked if we could get the same distribution of sums as “normal” 6-sided dice, but from two different 6-sided dice.  The only restriction he gave was that all of the faces of the new dice had to have positive values.  This can be approached by realizing that the distribution of sums of the two normal dice can be found by multiplying two representative polynomials to get

x^{12}+2x^{11}+3x^{10}+4x^9+5x^8+6x^7+5x^6+4x^5+3x^4+2x^3+x^2.

Restating the question in the terms of this post, are there two other polynomials that could be multiplied to give the same product?  A CAS factor command gives

prob9
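
In SymPy, the same factorization is a sketch of one line:

    from sympy import symbols, factor

    x = symbols('x')
    two_dice = sum(x**k for k in range(1, 7))**2

    print(factor(two_dice))
    # x**2 * (x + 1)**2 * (x**2 - x + 1)**2 * (x**2 + x + 1)**2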

Any rearrangement of these eight (four distinct) sub-polynomials would create the same distribution as the sum of two dice, but what would the separate sub-products mean in terms of the dice?  As a first example, what if the first two expressions were used for one die (line 1 below) and the two squared trinomials comprised a second die (line 2)?

prob10

Line 1 actually describes a 4-sided die with one face of 4, two faces with 3s, and one face of 2.  Line 2 describes a 9-sided die (whatever that is) with one face of 8, two faces of 6, three faces of 4, two faces of 2, and one face with a 0 ( 1=1 \cdot x^0).  This means rolling a 4-sided and a 9-sided die as described would give exactly the same sum distribution.  Cool, but not what I wanted.  Now what?

Factorization gave four distinct sub-polynomials, each with multiplicity 2.  One die could contain 0, 1, or 2 of each of these, with the remaining factors on the other die.  That means there are 3^4=81 different possible dice combinations.  I could have continued with a trial-and-error approach, but I wanted to be more efficient and elegant.

What follows is the result of thinking about the problem for a while.  As with most solutions to interesting math problems, the ultimate result is typically much cleaner and more elegant than the thoughts that went into it.  Problem solving is a messy, but very rewarding, business.

SOLUTION

Here are my insights over time:

1) I realized that the x^2 term would raise the powers (face values) of the desired dice but would not change the coefficients (numbers of faces).  Because Steve asked for dice with all positive face values, each desired die had to have at least one factor of x to prevent non-positive face values.

2) My first attempt didn’t create 6-sided dice.  The sums of the coefficients of the sub-polynomials determine the number of sides; that sum can also be found by substituting x=1 into the sub-polynomial.  I wanted 6-sided dice, so each die’s final coefficients had to add to 6, and because the coefficient sum of a product is the product of its factors’ coefficient sums, the factor sums on each die had to multiply to 6.  The coefficients of (x+1) add to 2, those of \left( x^2+x+1 \right) add to 3, and those of \left( x^2-x+1 \right) add to 1.  The only way to get a polynomial coefficient sum of 6 (and thereby create 6-sided dice) is for each die to have one (x+1) factor and one \left( x^2+x+1 \right) factor.

3) That leaves the two \left( x^2-x+1 \right) factors.  They could split between the two dice, or both could be on one die, leaving none on the other.  We’ve already determined that each die had to have one each of the x, (x+1), and \left( x^2+x+1 \right) factors.  To also split the \left( x^2-x+1 \right) factors would recreate the original dice:  two normal 6-sided dice.  If I wanted different dice, I had to load both of these factors on one die.

That means there is ONLY ONE POSSIBLE alternative for two 6-sided dice that have the same sum distribution as two normal 6-sided dice.

prob11

One die would have single faces of 8, 6, 5, 4, 3, and 1.  The other die would have one 4, two 3s, two 2s, and one 1.  And this is exactly the result of the famous(?) Sicherman Dice.
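
It takes one line to confirm that these faces really do reproduce the standard sum distribution; a SymPy sketch:

    from sympy import symbols, expand

    x = symbols('x')
    standard = sum(x**k for k in range(1, 7))**2

    # Sicherman faces {1, 2, 2, 3, 3, 4} and {1, 3, 4, 5, 6, 8}
    sicherman = (x + 2*x**2 + 2*x**3 + x**4) * (x + x**3 + x**4 + x**5 + x**6 + x**8)

    print(expand(standard - sicherman))   # 0, so the sum distributions match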

If a 0 face value was allowed, shift one factor of x from one polynomial to the other.  This can be done two ways.

prob12

The first possibility has dice with faces {9, 7, 6, 5, 4, 2} and {3, 2, 2, 1, 1, 0}, and the second has faces {7, 5, 4, 3, 2, 0} and {5, 4, 4, 3, 3, 2}, giving the only other two non-negative solutions to the Sicherman Dice.

Both of these are nothing more than adding one to all faces of one die and subtracting one from all faces of the other.  While polynomials aren’t necessary to compute these, the shifts are equivalent to multiplying the polynomial of one die by x and the other by \frac{1}{x} as many times as desired.  That means there are an infinite number of 6-sided dice with the same sum distribution as normal 6-sided dice if you allow faces to have negative values.  One of these is

prob13

corresponding to a pair of Sicherman Dice with faces {6, 4, 3, 2, 1, -1} and {6, 5, 5, 4, 4, 3}.

CONCLUSION:

There are other very interesting properties of Sicherman Dice, but this is already a very long post.  In the end, there are tremendous connections between probability and polynomials that are accessible to students at the secondary level and beyond.  And CAS keeps the focus on student learning and away from the manipulations that aren’t even the point in these explorations.

Enjoy.

Birthdays, CAS, Probability, and Student Creativity

Many readers are familiar with the very counter-intuitive Birthday Problem:

It is always fun to be in a group when two people suddenly discover that they share a birthday.  Should we be surprised when this happens?  Asked a different way, how large a group of randomly selected people is required to have at least a 50% probability of having a birthday match within the group?

I posed this question to both of my sections of AP Statistics in the first week of school this year.  In a quick poll, one section had a birthday match–two students who had taken classes together for a few years without even realizing what they had in common.  Was I lucky, or was this a commonplace occurrence?

Intrigue over this question motivated our early study of probability.  The remainder of this post follows what I believe is the traditional approach to the problem, supplemented by the computational power of a computer algebra system (CAS)–the TI Nspire CX CAS–available on each of my students’ laptops.

Initial Attempt:

Their first try at a solution was direct.  The difficulty was the number of ways a common birthday could occur.  After establishing that we wanted any common birthday to count as a match and not just an a priori specific birthday, we tried to find the number of ways birthday matches could happen for different sized groups.  Starting small, they reasoned that

  • If there were 2 people in a room, there was only 1 possible birthday connection.
  • If there were 3 people (A, B, and C), there were 4 possible birthday connections–three pairs (A-B, A-C, and B-C) and one triple (A-B-C).
  • For four people (A, B, C, and D), they realized they had to look for pair, triple, and quad connections.  The latter two were easiest:  one quad (A-B-C-D) and four triples (A-B-C, A-B-D, A-C-D, and B-C-D).  For the pairs, we considered the problem as four points and looked for all the ways we could create segments.  That gave (A-B, A-C, A-D, B-C, B-D, and C-D).  These could also occur as double pairs in three ways (A-B & C-D, A-C & B-D, and A-D & B-C).  All together, this made 1+4+6+3=14 ways.

This required lots of support from me and was becoming VERY COMPLICATED VERY QUICKLY.  Two people had 1 connection, 3 people had 4 connections, and 4 people had 14 connections.  Tracking all of the possible connections as the group size expanded–and especially not losing track of any possibilities–was making this approach difficult.  This created a perfect opportunity to use complement probabilities.

The traditional solution:

While there were MANY ways to have a shared birthday, for a group of any size there is one and only one way to not have any shared birthdays:  all the birthdays must be different.  And computing the probability of a single possibility was a much simpler task.

We imagined an empty room with random people entering one at a time.  The first person entering could have any birthday without matching anyone, so P \left( \text{no match with 1 person} \right) = \frac{365}{365}  .  When the second person entered, there were 364 unchosen birthdays remaining, giving P \left( \text{no match with 2 people} \right) = \frac{365}{365} \cdot \frac{364}{365} , and P \left( \text{no match with 3 people} \right) = \frac{365}{365} \cdot \frac{364}{365} \cdot \frac{363}{365} .  And the complements to each of these are the probabilities we sought:

P \left( \text{birthday match with 1 person} \right) = 1- \frac{365}{365} = 0
P \left( \text{birthday match with 2 people} \right) = 1- \frac{365}{365} \cdot \frac{364}{365} \approx 0.002740
P \left( \text{birthday match with 3 people} \right) = 1- \frac{365}{365} \cdot \frac{364}{365} \cdot \frac{363}{365} \approx 0.008204 .

The probabilities were small, but with persistent data entry from a few classmates, they found that the 50% threshold was reached with 23 people.

Probability1

The hard work was finished, but some wanted to find an easier way to compute the solution.  A few students noticed that the numerator looked like the start of a factorial and revised the equation:

\begin{matrix} \displaystyle  P \left( \text{birthday match with n people} \right ) & = & 1- \frac{365}{365} \cdot \frac{364}{365} \dots \frac{(366-n)}{365} \\  \\  & = & 1- \frac{365 \cdot 364 \dots (366-n)}{365^n} \\  \\  & = & 1- \frac{365\cdot 364 \dots (366-n)\cdot (366-n-1)!}{365^n \cdot (366-n-1)!} \\  \\  & = & 1- \frac{365!}{365^n \cdot (365-n)!}  \end{matrix}

It was much simpler to plug in values to this simplified equation, confirming the earlier result.

Probability2

Not everyone saw the “complete the factorial” manipulation, but one noticed in the first solution the linear pattern in the numerators of the probability fractions.  While it was easy enough to write a formula for the fractions, he didn’t know an easy way to multiply all the fractions together.  He had experience with Sigma Notation for sums, so I introduced him to Pi Notation–it works exactly the same as Sigma Notation, except Pi multiplies the individual terms instead of adding them.  On the TI-Nspire, the Pi Notation command is available in the template menu or under the calculus menu.

Probability3
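
The product form also translates directly into a few lines of Python, a sketch for anyone who wants to confirm the threshold:

    from math import prod

    def p_match(n):
        # 1 minus the product (365/365)(364/365)...((366-n)/365)
        return 1 - prod((365 - k) / 365 for k in range(n))

    print(p_match(22))   # ~0.4757
    print(p_match(23))   # ~0.5073, the first group size past 50%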

Conclusion:

I really like two things about this problem:  the extremely counterintuitive result (just 23 people gives a 50% chance of a birthday match) and discovering the multiple ways you could determine the solution.  Between student pattern recognition and my support in formalizing computation suggestions, students learned that translating different recognized patterns into mathematics symbols, supported by technology, can provide different equally valid ways to solve a problem.

Now I can answer the question I posed about the likelihood of me finding a birthday match among my two statistics classes.  The two sections have 15 and 21 students, respectively.  The probability of having at least one match is the complement of not having any matches.  Using the Pi Notation version of the solution gives

Probability4

I wasn’t guaranteed a match, but the 58.4% probability gave me a decent chance of having a nice punch line to start the class.  It worked pretty well this time!

Extension:

My students are currently working on their first project, determining a way to simulate groups of people entering a room with randomly determined birthdays to see if the 23 person theoretical threshold bears out with experimental results.

Monty Hall Continued

In my recent post describing a Monty Hall activity in my AP Statistics class, I shared an amazingly crystal-clear explanation of how one of my new students conceived of the solution:

If your strategy is staying, what’s your chance of winning?  You’d have to miraculously pick the money on the first shot, which is a 1/3 chance.  But if your strategy is switching, you’d have to pick a goat on the first shot.  Then that’s a 2/3 chance of winning.  

Then I got a good follow-up question from @SteveWyborney on Twitter, asking what would happen if the game instead used 1,000,000 doors.

Returning to my student’s conclusion about the 3-door version of the problem, she said,

The fact that there are TWO goats actually can help you, which is counterintuitive on first glance. 

Extending her insight and expanding the problem to any number of doors, including Steve’s proposed 1,000,000 doors, the more goats one adds to the problem statement, the more likely it becomes to win the treasure with a switching doors strategy.  This is very counterintuitive, I think.

For Steve’s formulation, only 1 initial guess from the 1,000,000 possible doors would have selected the treasure; the additional goats seem to diminish one’s hopes of ever finding the prize.  Each of the other 999,999 initial doors would have chosen a goat.  So if 999,998 goat-doors are then opened until all that remains is the original door and one other, the contestant wins by not switching doors iff the prize door was the one initially selected, giving P(win by staying) = 1/1000000.  The probability of winning with the switching strategy is the complement, 999999/1000000.

IN RETROSPECT:

My student’s solution statement reminds me how critically important it is for teachers to always listen to and celebrate their students’ clever new insights and questions, many possessing depth beyond what the students realize.

The solution also reminds me of several variations on “Everything is obvious in retrospect.”  I once read an even better version but can’t track down the exact wording.  A crude paraphrasing is

The more profound a discovery or insight, the more obvious it appears after.

I’d love a lead from anyone with the original wording.

REALLY COOL FOOTNOTE:

Adding to the mystique of this problem, I read in the Wikipedia description that even the great problem poser and solver Paul Erdős didn’t believe the solution until he saw a computer simulation detailing the result.

Probability and Monty Hall

I’m teaching AP Statistics for the first time this year, and my first week just ended.  I’ve taught statistics as portions of other secondary math courses and as a semester-long community college class, but never under the “AP” moniker.  The first week was a blast.  

To connect even the very beginning of the course to my students’ previous knowledge, I decided to start the year with a probability unit.  For an early class activity, I played the classic Monty Hall game with the classes.  Some readers will recall the rules, but here they are just in case you don’t know them.

  1. A contestant faces three closed doors.  Behind one is a new car. There is a goat behind each of the other two. 
  2. The contestant chooses one of the doors and announces her choice.  
  3. The game show host then opens one of the other two doors to reveal a goat.
  4. Now the contestant has a choice to make.  Should she
    1. Always stay with the door she initially chose, or
    2. Always change to the remaining unopened door, or
    3. Flip a coin to choose which door because the problem essentially has become a 50-50 chance of pure luck.

Historically, many people (including many very highly educated, degree-flaunting PhDs) intuit the solution to be “pure luck”.  After all, don’t you have just two doors to choose from at the end?

In one class this week, I tried a few simulations before I posed the question about strategy.  In the other, I posed the question of strategy before any simulations.  In the end, very few students intuitively believed that staying was a good strategy, with the remainder more or less equally split between the “switch” and “pure luck” options.  I suspect the greater number of “switch” believers (and dearth of stays) may have been because of earlier exposure to the problem.  

I ran my class simulation this way:  

  • Students split into pairs (one class had a single group of 3).  
  • One student was the host and secretly recorded a door number.  
  • The class decided in advance to always follow the “shift strategy”.  [Ultimately, following either stay or switch is irrelevant, but having all groups follow the same strategy gives you the same data in the end.]
  • The contestant then chose a door, the host announced an open door, and the contestant switched doors.
  • The host then declared a win or loss based on his initial door choice in step two.
  • Each group repeated this 10 times and reported their final number of wins to the entire class.
  • This accomplished a reasonably large number of trials from the entire class in a very short time via division of labor.  Because they chose the shift strategy, my two classes ultimately reported 58% and 68% winning percentages.  

Curiously, the class with the 58% winning percentage had one group with just 1 win out of 10 and another winning only 4 of 10.  It also had a group that reported winning 10 of 10.  Strange, but even with those unexpected low percentages, the pooled long-run behavior from all groups still led to a clear majority winning percentage for switching.
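
A short Python sketch of the class experiment (with the host’s goat-door openings folded into the logic, since switching wins exactly when the first pick was a goat) reproduces these long-run percentages; setting doors to 1,000,000 previews the extension discussed above:

    import random

    def play(switch, doors=3, trials=100_000):
        wins = 0
        for _ in range(trials):
            prize = random.randrange(doors)
            choice = random.randrange(doors)
            # The host opens every remaining goat door, so switching
            # wins exactly when the first pick was a goat
            wins += (choice != prize) if switch else (choice == prize)
        return wins / trials

    print(play(switch=False))   # ~1/3, the stay strategy
    print(play(switch=True))    # ~2/3, the switch strategy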

Here’s a verbatim explanation from one of my students written after class for why switching is the winning strategy.  It’s perhaps the cleanest reason I’ve ever heard.

The faster, logical explanation would be: if your strategy is staying, what’s your chance of winning?  You’d have to miraculously pick the money on the first shot, which is a 1/3 chance.  But if your strategy is switching, you’d have to pick a goat on the first shot.  Then that’s a 2/3 chance of winning.  In a sense, the fact that there are TWO goats actually can help you, which is counterintuitive on first glance. 

Engaging students hands-on in the experiment made for a phenomenal pair of classes and discussions. While many left still a bit disturbed that the answer wasn’t 50-50, this was a spectacular introduction to simulations, conditional probability, and cool conversations about the inevitability of streaks in chance events. 

For those who are interested, here’s another good YouTube demonstration & explanation.