Tag Archives: problem-solving

Roots of Complex Numbers without DeMoivre

Finding roots of complex numbers can be … complex.

This post describes a way to compute roots of any number–real or complex–via systems of equations without any conversions to polar form or use of DeMoivre’s Theorem.  Following a “traditional approach,” one non-technology example is followed by a CAS simplification of the process.

Most sources describe the following procedure to compute the roots of complex numbers (obviously including the real number subset).

• Write the complex number whose root is sought in generic polar form.  If necessary, convert from Cartesian form.
• Invoke DeMoivre’s Theorem to get the polar form of all of the roots.
• If necessary, convert the numbers from polar form back to Cartesian.

As a very quick example,

Compute all square roots of -16.

Rephrased, this asks for all complex numbers, z, that satisfy  $z^2=-16$.  The Fundamental Theorem of Algebra guarantees two solutions to this quadratic equation.

The complex Cartesian number, $-16+0i$, converts to the polar form $16 \, cis( \pi )$, where $cis( \theta ) = \cos( \theta ) + i \sin( \theta )$.  Unlike Cartesian form, polar representations of numbers are not unique:  any number of full rotations from the initial representation lands on a coincident angle, and therefore gives an equivalent number if converted back to Cartesian.  For any integer n, this means

$-16 = 16cis( \pi ) = 16 cis \left( \pi + 2 \pi n \right)$

Invoking DeMoivre’s Theorem,

$\sqrt{-16} = (-16)^{1/2} = \left( 16 cis \left( \pi + 2 \pi n \right) \right) ^{1/2}$
$= 16^{1/2} * cis \left( \frac{1}{2} \left( \pi + 2 \pi n \right) \right)$
$= 4 * cis \left( \frac{ \pi }{2} + \pi * n \right)$

For $n= \{ 0, 1 \}$, this gives polar solutions, $4cis \left( \frac{ \pi }{2} \right)$ and $4cis \left( \frac{ 3 \pi }{2} \right)$ .  Each can be converted back to Cartesian form, giving the two square roots of -16:  $4i$ and $-4i$.  Squaring either gives -16, confirming the result.
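For anyone who wants to check the arithmetic, the same DeMoivre computation takes only a few lines of Python with the standard cmath module (a verification sketch, with variable names of my choosing):

```python
import cmath

z = -16 + 0j
r, theta = cmath.polar(z)     # modulus 16, argument pi

# DeMoivre: the square roots have modulus r^(1/2) and
# arguments (theta + 2*pi*n)/2 for n = 0, 1
roots = [r**0.5 * cmath.exp(1j * (theta + 2 * cmath.pi * n) / 2)
         for n in (0, 1)]
# numerically, roots are 4i and -4i
```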

I’ve always found the rotational symmetry of the complex roots of any number beautiful, particularly for higher order roots.  This symmetry is perfectly captured by DeMoivre’s Theorem, but there is arguably a simpler way to compute them.

NEW(?) NON-TECH APPROACH:

Because the solution to every complex number computation can be written in $a+bi$ form, new possibilities open.  The original example can be rephrased:

Determine the simultaneous real values of x and y for which $-16=(x+yi)^2$.

Start by expanding and simplifying the right side back into $a+bi$ form.  (I wrote about a potentially easier approach to simplifying powers of i in my last post.)

$-16+0i = \left( x+yi \right)^2 = x^2 +2xyi+y^2 i^2=(x^2-y^2)+(2xy)i$

Notice that the two ends of the previous line are two different expressions for the same complex number(s).  Therefore, equating the real and imaginary coefficients gives a system of equations:

$-16 = x^2-y^2$
$0 = 2xy$

Solving the system gives the square roots of -16.

From the latter equation, either $x=0$ or $y=0$.  Substituting $y=0$ into the first equation gives $-16=x^2$, which is impossible because x and y are both real numbers, as stated above.

Substituting $x=0$ into the first equation gives $-16=-y^2$, leading to $y= \pm 4$.  So, $x=0$ and $y=-4$ -OR- $x=0$ and $y=4$ are the only solutions–$x+yi=0-4i$ and $x+yi=0+4i$–the same solutions found earlier, but this time without using polar form or DeMoivre!  Notice, too, that the presence of TWO solutions emerged naturally.
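The case analysis above is simple enough to mirror in a short Python sketch (the function name is mine, purely for illustration):

```python
import math

def square_roots_of_negative_real(c):
    """Square roots of a negative real c via the system
    c = x^2 - y^2 and 0 = 2xy (so x = 0 or y = 0)."""
    assert c < 0
    # Case y = 0: c = x^2 is impossible for real x when c < 0.
    # Case x = 0: c = -y^2, so y = +/- sqrt(-c).
    y = math.sqrt(-c)
    return [complex(0, y), complex(0, -y)]

roots = square_roots_of_negative_real(-16)   # [4i, -4i]
```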

Higher order roots could lead to much more complicated systems of equations, but a CAS can solve that problem.

CAS APPROACH:

Determine all fourth roots of $1+2i$.

That’s equivalent to finding all simultaneous x and y values that satisfy $1+2i=(x+yi)^4$.  Expanding the right side is quickly accomplished on a CAS.  From my TI-Nspire CAS:

Notice that the output is simplified to $a+bi$ form which, in the context of this particular example, gives the system of equations,

$1 = x^4-6x^2y^2+y^4$
$2 = 4x^3y-4xy^3$

Using my CAS to solve the system,

First, note there are four solutions, as expected.  Rewriting the approximated numerical output gives the four complex fourth roots of $1+2i$:  $-1.176-0.334i$, $-0.334+1.176i$, $0.334-1.176i$, and $1.176+0.334i$.  Each can be quickly confirmed on the CAS:
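Without a CAS at hand, the same 2×2 system yields to a numeric attack.  Below is a sketch using Newton's method on the system from four starting guesses, one per quadrant; the starting points and iteration count are my own choices for illustration:

```python
def newton_fourth_root(x, y, iters=60):
    """Newton's method on the system  1 = x^4 - 6x^2y^2 + y^4  and
    2 = 4x^3y - 4xy^3,  the real and imaginary parts of (x+yi)^4 = 1+2i."""
    for _ in range(iters):
        f = x**4 - 6*x**2*y**2 + y**4 - 1     # real-part equation
        g = 4*x**3*y - 4*x*y**3 - 2           # imaginary-part equation
        fx = 4*x**3 - 12*x*y**2               # Jacobian entries
        fy = -12*x**2*y + 4*y**3
        gx = 12*x**2*y - 4*y**3
        gy = 4*x**3 - 12*x*y**2
        det = fx*gy - fy*gx
        x, y = x - (f*gy - g*fy)/det, y - (g*fx - f*gx)/det
    return x, y

# one starting guess per quadrant finds all four roots
starts = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
roots = [newton_fourth_root(x, y) for x, y in starts]
```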

CONCLUSION:

Given proper technology, finding the multiple roots of a complex number need not invoke polar representations or DeMoivre’s Theorem.  It really is as “simple” as expanding $(x+yi)^n$ where n is the given root, simplifying the expansion into $a+bi$ form, and solving the resulting 2×2 system of equations.

At the point when such problems would be introduced to students, their algebraic awareness should be such that using a CAS to do all the algebraic heavy lifting is entirely appropriate.

As one final glimpse at the beauty of complex roots, I entered the two equations from the last system into Desmos to take advantage of its very good implicit graphing capabilities.  You can see the four intersections corresponding to the four solutions of the system.  Solutions to systems of implicit equations are notoriously difficult to compute, so I wasn’t surprised when Desmos didn’t compute the coordinates of the points of intersection, even though the graph was pretty and surprisingly quick to generate.

Stats Exploration Yields Deeper Understanding

or “A lesson I wouldn’t have learned without technology”

Last November, some of my AP Statistics students were solving a problem involving a normal distribution with an unknown mean.  Leveraging the TI-Nspire CAS calculators we use for all computations, they crafted a logical command that should have worked.  Their unexpected result initially left us scratching our heads.  After some conversations with the great folks at TI, we realized that what at first seemed perfectly reasonable as a single answer in fact had two solutions.  And it took until the end of this week for another student to finally identify and resolve the mysterious results.  This ‘blog post recounts our journey from a questionable normal probability result to a rich approach to confidence intervals.

THE INITIAL PROBLEM

I had assigned an AP Statistics free response question about a manufacturing process that could be manipulated to control the mean distance its golf balls would travel.  We were told that the process created balls whose distances were normally distributed with a mean of 288 yards and a standard deviation of 2.8 yards.  The first part asked students to find the probability of balls traveling more than an allowable 291.2 yards.  This was straightforward:  find the area under a normal curve with a mean of 288 and a standard deviation of 2.8 from 291.2 to infinity.  The Nspire (CAS and non-CAS) syntax for this is:

[Post publishing note: See Dennis’ comment below for a small correction for the non-CAS Nspires.  I forgot that those machines don’t accept “infinity” as a bound.]
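For readers without an Nspire handy, the same area can be sketched with Python's standard-library NormalDist (my translation of the normCdf call, not Nspire syntax):

```python
from statistics import NormalDist

# P(distance > 291.2) for a normal distribution with
# mean 288 yards and standard deviation 2.8 yards
p_over = 1 - NormalDist(mu=288, sigma=2.8).cdf(291.2)
# p_over is approximately 0.1265, the 12.7% discussed below
```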

As 12.7% of the golf balls traveling too far is obviously an unacceptably high percentage, the next part asked for the mean distance needed so only 99% of the balls traveled allowable distances.  That’s when things got interesting.

A “LOGICAL” RESPONSE RESULTS IN A MYSTERY

Their initial thought was that even though they didn’t know the mean, they now knew the output of their normCdf command.  Since the balls couldn’t travel a negative distance and zero was many standard deviations from the unknown mean, the following equation, with x representing the unknown mean, should define the scenario nicely:  normCdf(0,291.2,x,2.8)=0.99.

Because this was an equation with a single unknown, we could now use our CAS calculators to solve for the missing parameter.

Something was wrong.  How could the mean distance possibly be just 6.5 yards?  The Nspires are great, reliable machines.  What happened?

I had encountered something like this before with unexpected answers when a solve command was applied to a Normal cdf with dual finite bounds.  While it didn’t seem logical to me why this should make a difference, I asked them to try an infinite lower bound and also to try computing the area on the other side of 291.2.  Both of these provided the expected solution.

The caution symbol on the last line should have been a warning, but I honestly didn’t see it at the time.  I was happy to see the expected solution, but quite frustrated that infinite bounds seemed to be required.  Beyond three standard deviations from the mean of any normal distribution, almost no area exists, so how could extending the lower bound from 0 to negative infinity make any difference in the solution when 0 was already $\frac{291.2}{2.8}=104$ standard deviations away from 291.2?  I couldn’t make sense of it.

My initial assumption was that something was wrong with the programming in the Nspire, so I emailed some colleagues I knew within CAS development at TI.

GRAPHS REVEAL A HIDDEN SOLUTION

They reminded me that statistical computations in the Nspire CAS were resolved through numeric algorithms–an understandable approach given the algebraic definition of the normal and other probability distribution functions.  The downside is that numeric solvers may not pick up on (or are incapable of finding) difficult-to-locate or multiple solutions.  Their suggestion was to employ a graph whenever we got stuck.  This, too, made sense because graphing a function forced the machine to evaluate multiple values of the unknown variable over a predefined domain.

It was also a good reminder for my students that a solution to any algebraic equation can be thought of as the first substitution solution step for a system of equations.  Going back to the initially troublesome input, I rewrote normCdf(0,291.2,x,2.8)=0.99 as the system

y=normCdf(0,291.2,x,2.8)
y=0.99

and “the point” of intersection of that system would be the solution we sought.  Notice my emphasis indicating my still lingering assumptions about the problem.  Graphing both equations shone a clear light on what was my persistent misunderstanding.

I was stunned to see two intersection solutions on the screen.  Asking the Nspire for the points of intersection revealed BOTH ANSWERS my students and I had found earlier.

If both solutions were correct, then there really were two different normal pdfs that could solve the finite bounded problem.  Graphing these two pdfs finally explained what was happening.

By equating the normCdf result to 0.99 with FINITE bounds, I never specified on which end the additional 0.01 existed–left or right.  This graph showed the 0.01 could have been at either end, one with a mean near the expected 284 yards and the other with a mean near the unexpected 6.5 yards.  The graph below shows both normal curves, with the 6.5 solution placing the additional 0.01 on the left and the 284 solution placing it on the right.
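Both means fall straight out of the extra-0.01-on-one-end insight.  A Python sketch (the variable names are mine):

```python
from statistics import NormalDist

def area(mu):
    """P(0 < X < 291.2) for X ~ Normal(mu, 2.8) -- the normCdf value."""
    d = NormalDist(mu, 2.8)
    return d.cdf(291.2) - d.cdf(0)

z99 = NormalDist().inv_cdf(0.99)      # about 2.326 standard deviations

mu_right = 291.2 - 2.8 * z99          # extra 0.01 in the right tail: ~284.7
mu_left = 0 + 2.8 * z99               # extra 0.01 in the left tail:  ~6.5
```

Either mean makes the finite-bounded area exactly 0.99, which is why the solver legitimately returned both.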

The CAS wasn’t wrong in the beginning.  I was.  And as has happened several times before, the machine didn’t rely on the same sometimes errant assumptions I did.  My students had made a very reasonable assumption that the area under the normal pdf for the golf balls should start at 0 (no negative distances) and inadvertently stumbled into a much richer problem.

A TEMPORARY FIX

The reason the infinity-bounded solutions didn’t produce the unexpected second solution is that it is impossible to place the unspecified extra 0.01 of area beyond an infinite lower or upper bound.

To avoid unexpected multiple solutions, I resolved to tell my students to use infinite bounds whenever solving for an unknown parameter.  It was a little dissatisfying to not be able to use my students’ “intuitive” lower bound of 0 for this problem, but at least they wouldn’t have to deal with unexpected, counterintuitive results.

Surprisingly, the permanent solution arrived weeks later when another student shared his fix for a similar problem when computing confidence interval bounds.

A PERMANENT FIX FROM AN UNEXPECTED SOURCE

I really don’t like the way almost all statistics textbooks provide complicated formulas for computing confidence intervals using standardized z- and t-distribution critical scores.  Ultimately, a 95% confidence interval is nothing more than the bounds of the middle 95% of a probability distribution whose mean and standard deviation are defined by a sample from the overall population.  Where the problem above solved for an unknown mean, computing a confidence interval on a CAS follows essentially the same reasoning to determine missing endpoints.

My theme in every math class I teach is to memorize as little as you can, and use what you know as widely as possible.  Applying this to AP Statistics, I never reveal the existence of confidence interval commands on calculators until we’re 1-2 weeks past their initial introduction.  This allows me to develop a solid understanding of confidence intervals using a variation on calculator commands they already know.

For example, assume you need a 95% confidence interval of the percentage of votes Bernie Sanders is likely to receive in Monday’s Iowa Caucus.  The CNN-ORC poll released January 21 showed Sanders leading Clinton 51% to 43% among 280 likely Democratic caucus-goers.  (Read the article for a glimpse at the much more complicated reality behind this statistic.)  In this sample, the proportion supporting Sanders is approximately normally distributed with a sample p=0.51 and a sample standard deviation of p of $\sqrt{ \frac{(0.51)(0.49)}{280} } \approx 0.0299$.  The 95% confidence interval is defined by the bounds containing the middle 95% of the data of this normal distribution.

Using the earlier lesson, one student suggested finding the bounds on his CAS by focusing on the tails.

giving a confidence interval of (0.45, 0.57) for Sanders for Monday’s caucus, according to the method of the CNN-ORC poll from mid-January.  Using a CAS keeps my students focused on what a confidence interval actually means without burying them in the underlying computations.
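The identical tail-focused reasoning works on other technology, too.  A sketch with Python's standard-library NormalDist (again, my variable names):

```python
from math import sqrt
from statistics import NormalDist

p_hat, n = 0.51, 280
se = sqrt(p_hat * (1 - p_hat) / n)    # sample standard deviation of p, ~0.0299

# Bounds that cut 2.5% off each tail leave the middle 95%.
dist = NormalDist(p_hat, se)
lower = dist.inv_cdf(0.025)           # ~0.45
upper = dist.inv_cdf(0.975)           # ~0.57
```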

That’s nice, but what if you needed a confidence interval for a sample mean?  Unfortunately, the t-distribution on the Nspire is completely standardized, so confidence intervals need to be built from critical t-values.  Like on a normal distribution, a 95% confidence interval is defined by the bounds containing the middle 95% of the data.  One student reasonably suggested the following for a 95% confidence interval with 23 degrees of freedom.  I really liked the explicit syntax definition of the confidence interval.

Alas, the CAS returned the input.  It couldn’t find the answer in that form.  Cognizant of the lessons learned above, I suggested reframing the query with an infinite bound.

That gave the proper endpoint, but I was again dissatisfied with the need to alter the input, even though I knew why.

That’s when another of my students spoke up to say that he got the solution to work with the initial commands by including a domain restriction.

Of course!  When more than one solution is possible, restrict the bounds to the solution range you want.  Then you can use the commands that make sense.

FIXING THE INITIAL APPROACH

That small fix finally gave me the solution to the earlier syntax issue with the golf ball problem.  There were two solutions to the initial problem, so if I bounded the output, they could use their intuitive approach and get the answer they needed.

If a mean of 288 yards and a standard deviation of 2.8 yards resulted in 12.7% of the area above 291.2, then it wouldn’t take much of a left shift in the mean to leave just 1% of the area above 291.2. Surely that unknown mean would be no lower than 3 standard deviations below the current 288, somewhere above 280 yards.  Adding that single restriction to my students’ original syntax solved their problem.
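The domain-restricted solve can be mimicked without a CAS by bisection over the restricted interval.  A sketch, with NormalDist standing in for normCdf:

```python
from statistics import NormalDist

def area(mu):
    """normCdf(0, 291.2, mu, 2.8): P(0 < X < 291.2) for X ~ Normal(mu, 2.8)."""
    d = NormalDist(mu, 2.8)
    return d.cdf(291.2) - d.cdf(0)

# Restricting mu to (280, 291.2) excludes the extraneous 6.5-yard solution.
lo, hi = 280.0, 291.2
for _ in range(60):
    mid = (lo + hi) / 2
    if area(mid) > 0.99:
        lo = mid    # too much area below 291.2, so the mean must be larger
    else:
        hi = mid
mu = (lo + hi) / 2  # about 284.7 yards
```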

Perfection!

CONCLUSION

By encouraging a deep understanding of both the underlying statistical content AND of their CAS tool, students are increasingly able to find creative solutions using flexible methods and expressions intuitive to them.  And shouldn’t intellectual strength, creativity, and flexibility be the goals of every learning experience?

Unanticipated Proof Before Algebra

I was talking with one of our 5th graders, S,  last week about the difference between showing a few examples of numerical computations and developing a way to know something was true no matter what numbers were chosen.  I hadn’t started our conversation thinking about introducing proof.  Once we turned in that direction, I anticipated scaffolding him in a completely different direction, but S went his own way and reinforced for me the importance of listening and giving students the encouragement and room to build their own reasoning.

SETUP:  S had been telling me that he “knew” the product of an even number with any other number would always be even, while the product of any two odds was always odd.  He demonstrated this by showing lots of particular products, but I asked him if he was sure that it was still true if I were to pick some numbers he hadn’t used yet.  He was.

Then I asked him how many numbers were possible to use.  He promptly replied “infinite,” at which point he finally started to see the difficulty with demonstrating that every product worked.  “We don’t have enough time” to do all that, he said.  Finally, I had maneuvered him to perhaps his first ever realization of the need for proof.

ANTICIPATION:  But S knew nothing of formal algebra.  From my experiences with younger students sans algebra, I thought I would eventually need to help him translate his numerical problem into a geometric one.  But this story is about S’s reasoning, not mine.

INSIGHT:  I asked S how he would handle any numbers I asked him to multiply to prove his claims, even if I gave him some ridiculously large ones.  “It’s really not as hard as that,” S told me.  He quickly scribbled

on his paper and covered up all but the one’s digit.  “You see,” he said, “all that matters is the units.  You can make the number as big as you want and I just need to look at the last digit.”  Without using this language, S was venturing into an even-odd proof via modular arithmetic.

With some more thought, he reasoned that he would focus on just the units digit through repeated multiples and see what happened.

FIFTH GRADE PROOF:  S’s math class is currently working through a multiplication unit in our 5th grade Bridges curriculum, so he was already in the mindset of multiples.  Since he said only the units digit mattered, he decided he could start with any even number and look at all of its multiples.  That is, he could keep adding the number to itself and see what happened.  As shown below, he first chose 32 and found the next four multiples, 64, 96, 128, and 160.  After that, S said the very next number in the list would end in a 2 and the loop would start all over again.

He stopped talking for several seconds, and then he smiled.  “I don’t have to look at every multiple of 32.  Any multiple will end up somewhere in my cycle and I’ve already shown that every number in this cycle is even.  Every multiple of 32 must be even!”  It was a pretty powerful moment.  Since he only needed to see the last digit, and any number ending in 2 would just add 2s to the units, this cycle now represented every number ending in 2 in the universe.  The last line above was S’s use of 1002 to show that the same cycling happened for another “2 number.”

DIFFERENT KINDS OF CYCLES:  So could he use this for all multiples of even numbers?  His next try was an “8 number.”

After five multiples of 18, he achieved the same cycling.  Even cooler, he noticed that the cycle for “8 numbers” was the “2 number” cycle backwards.

Also note that after S completed his 2s and 8s lists, he used only single digit seed numbers as the bigger starting numbers only complicated his examples.  He was on a roll now.

I asked him how the “4 number” cycle was related.  He noticed that the 4s used every other number in the “2 number” cycle.  It was like skip counting, he said.  Another lightbulb went off.

“And that’s because 4 is twice 2, so I just take every 2nd multiple in the first cycle!”  He quickly scratched out a “6 number” example.

This, too, cycled, but more importantly, because 6 is thrice 2, he said that was why this list used every 3rd number in the “2 number” cycle.  In that way, every even number multiple list was the same as the “2 number” list, you just skip-counted by different steps on your way through the list.

When I asked how he could get all the numbers in such a short list when he was counting by 3s, S said it wasn’t a problem at all.  Since it cycled, whenever you got to the end of a list, just go back to the beginning and keep counting.  We didn’t touch it last week, but he had opened the door to modular arithmetic.

I won’t show them here, but his “0 number” list always ended in 0s.  “This one isn’t very interesting,” he said.  I smiled.

ODDS:  It took a little more thought to start his odd number proof, because every other multiple was even.  After he recognized these as even numbers, S decided to list every other multiple as shown with his “1 number” and “3 number” lists.

As with the evens, the odd number lists could all be seen as skip-counted versions of each other.  Also, the 1s and 9s were written backwards from each other, and so were the 3s and 7s.  “5 number” lists were declared to be as boring as “0 numbers”.  Not only did the odds ultimately end up cycling essentially the same as the evens, but they had the same sort of underlying relationships.
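S's units-digit cycles are easy to reproduce in a few lines of Python, for anyone who wants to play with other seed numbers (the function is my own formulation of S's process):

```python
def units_cycle(seed, odd_multiples_only=False):
    """Units digits of successive multiples of seed, stopping when the
    pattern starts to repeat.  For odd seeds, S listed only the odd
    multiples, since every other multiple is even."""
    ks = range(1, 21, 2) if odd_multiples_only else range(1, 11)
    cycle = []
    for k in ks:
        digit = seed * k % 10
        if cycle and digit == cycle[0]:
            break
        cycle.append(digit)
    return cycle

# Evens: the "8 number" cycle runs the "2 number" cycle backwards,
# and the "4 number" cycle skip-counts through the "2 number" cycle.
# units_cycle(2) -> [2, 4, 6, 8, 0]
# units_cycle(8) -> [8, 6, 4, 2, 0]
```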

CONCLUSION:  At this point, S declared that since he had shown every possible case for evens and odds, then he had shown that any multiple of an even number was always even, and any odd multiple of an odd number was odd.  And he knew this because no matter how far down the list he went, eventually any multiple had to end up someplace in his cycles.  At that point I reminded S of his earlier claim that there was an infinite number of even and odd numbers.  When he realized that he had just shown a case-by-case reason for more numbers than he could ever demonstrate by hand, he sat back in his chair, exclaiming, “Whoa!  That’s cool!”

It’s not a formal mathematical proof, and when S learns some algebra, he’ll be able to accomplish his cases far more efficiently, but this was an unexpectedly nice and perfectly legitimate numerical proof of even and odd multiples for an elementary student.

PowerBall Redux

Donate to a charity instead.  Let me explain.
The majority of responses to my PowerBall description/warnings yesterday have been, “If you don’t play, you can’t win.”  Unfortunately, I know many, many people are buying many lottery tickets, way more than they should.

OK.  For almost everyone, there’s little harm in spending $2 on a ticket for the entertainment, but don’t expect to win, and don’t buy multiple tickets unless you can afford to do without every dollar you spend.  I worry about those who are “investing” tens or hundreds of dollars on any lottery.

Two of my school colleagues captured the idea of a lottery yesterday with their analogies.

Steve:  Suppose you go to the beach and grab a handful of sand and bring it back to your house.  And you do that every single day.  Then your odds of winning the PowerBall are still slightly worse than picking out one particular grain of sand from all the sand you accumulated over an entire year.

Or, more simply put from the perspective of a lottery official,

Patrick:  Here’s our idea.  You guys all throw your money in a big pile.  Then, after we take some of it, we’ll give the pile to just one of you.

WHY YOU SHOULDN’T BUY MULTIPLE TICKETS:  For perspective, a football field is 120 yards long, or 703.6 US dollars long using the logic of my last post.  Rounding up, that would buy you 352 PowerBall tickets.  That means investing $704 would buy you a single football field’s length of chances in 10.5 coast-to-coast traverses of the entire United States.  There’s going to be an incredibly large number of disappointed people tomorrow.
MORAL:  Even an incredibly large multiple of a less-than-microscopic chance is still a less-than-microscopic chance.
BETTER IDEA:  Assume you have the resources and are willing to part with tens or hundreds of dollars for no likelihood of tangible personal gain.  Using the $704 football field example, buy 2 tickets and donate the other $700 to charity.  You’ll do much more good.

PowerBall Math

Given the record size and mania surrounding the current PowerBall Lottery, I thought some of you might be interested in bringing that game into perspective.  This could be an interesting application with some teachers and students.

It certainly is entertaining for many to dream about what you would do if you happened to be lucky enough to win an astronomical lottery.  And lottery vendors are quick to note that your dreams can’t come true if you don’t play.  Nice advertising.  I’ll let the numbers speak to the veracity of the Lottery’s encouragement.

PowerBall is played by picking any 5 different numbers between 1 & 69, and then one PowerBall number between 1 & 26.  So there are $nCr(69,5)*26=292,201,338$ outcomes for this game.  Unfortunately, humans have a particularly difficult time understanding extremely large numbers, so I offer an analogy to bring it a little into perspective.

• The horizontal width of the United States is generally reported to be 2680 miles, and a U.S. dollar bill is 6.14 inches wide.  That means the U.S. is approximately 27,655,505 dollar bills wide.
• If I have 292,201,338 dollar bills (one for every possible PowerBall outcome), I could make a line of dollar bills placed end-to-end from the U.S. East Coast all the way to the West Coast, back to the East, back to the West, and so forth, passing back and forth between the two coasts just over 10.5 times.
• Now imagine that exactly one of those dollar bills was replaced with a replica dollar bill made from gold colored paper.

Your chances of winning the PowerBall lottery are the same as randomly selecting that single gold note from all of those dollar bills laid end-to-end and crossing the entire breadth of the United States 10.5 times.
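The numbers in the analogy check out with a few lines of Python (using the mileage and bill-width figures quoted above):

```python
from math import comb

outcomes = comb(69, 5) * 26               # possible PowerBall tickets

us_width_miles = 2680
bill_width_inches = 6.14
bills_across = us_width_miles * 5280 * 12 / bill_width_inches   # ~27.66 million

crossings = outcomes / bills_across       # ~10.57 coast-to-coast trips
```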

Dreaming is fun, but how likely is this particular dream to become real?

Play the lottery if doing so is entertaining to you, but like going to the movie theater, don’t expect to get any money back in return.

Mistakes are Good

Confession #1:  My answers on my last post were WRONG.

I briefly thought about taking that post down, but discarded that idea when I thought about the reality that almost all published mathematics is polished, cleaned, and optimized.  Many students struggle with mathematics under the misconception that their first attempts at any topic should be as polished as what they read in published sources.

While not precisely from the same perspective, Dan Teague recently wrote an excellent, short piece of advice to new teachers on NCTM’s ‘blog entitled Demonstrating Competence by Making Mistakes.  I argue Dan’s advice actually applies to all teachers, so in the spirit of showing how to stick with a problem and not just walking away saying “I was wrong”, I’m going to keep my original post up, add an advisory note at the start about the error, and show below how I corrected my error.

Confession #2:  My approach was a much longer and far less elegant solution than the identical approaches offered by a comment by “P” on my last post and the solution offered on FiveThirtyEight.  Rather than just accepting the alternative solution, as too many students are wont to do, I acknowledged the more efficient approach of others before proceeding to find a way to get the answer through my initial idea.

I’ll also admit that I didn’t immediately see the simple approach to the answer and rushed my post in the time I had available to get it up before the answer went live on FiveThirtyEight.

GENERAL STRATEGY and GOALS:

1-Use a PDF:  The original FiveThirtyEight post asked for the expected time before the siblings simultaneously finished their tasks.  I interpreted this as expected value, and I knew how to compute the expected value of a pdf of a random variable.  All I needed was the potential wait times, t, and their corresponding probabilities.  My approach was solid, but a few of my computations were off.

2-Use Self-Similarity:  I don’t see many people employing the self-similarity tactic I used in my initial solution.  Resolving my initial solution would allow me to continue using what I consider a pretty elegant strategy for handling cumbersome infinite sums.

A CORRECTED SOLUTION:

Stage 1:  My table for the distribution of initial choices was correct, as were my conclusions about the probability and expected time if they chose the same initial app.

My first mistake was in my calculation of the expected time if they did not choose the same initial app.  The 20 numbers in blue above represent that sample space.  Notice that there are 8 times where one sibling chose a 5-minute app, leaving 6 other times where one sibling chose a 4-minute app while the other chose something shorter.  Similarly, there are 4 cases where the longer choice was a 3-minute app, and 2 where it was a 2-minute app.  So the expected length of the longer app, when the siblings did not choose the same one, is

$E(Round1) = \frac{1}{20}*(8*5+6*4+4*3+2*2)=4$ minutes,

a notably longer time than I initially reported.

For the initial app choice, there is a $\frac{1}{5}$ chance they choose the same app for an average time of 3 minutes, and a $\frac{4}{5}$ chance they choose different apps for an average time of 4 minutes.
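These Stage 1 values can be checked by brute force over the 25 equally likely pairs of initial choices (a quick sketch, assuming the five apps last 1 through 5 minutes as in the original puzzle):

```python
from itertools import product

apps = range(1, 6)                              # app lengths in minutes
pairs = list(product(apps, repeat=2))           # 25 equally likely choices

same = [a for a, b in pairs if a == b]          # 5 matching pairs
diff = [max(a, b) for a, b in pairs if a != b]  # 20 mismatched pairs

p_same = len(same) / len(pairs)                 # 1/5
e_same = sum(same) / len(same)                  # 3-minute average
e_diff = sum(diff) / len(diff)                  # 4-minute average (longer app)
```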

Stage 2:  My biggest error was a rushed assumption that all of the entries I gave in the Round 2 table were equally likely.  That is clearly false as you can see from Table 1 above.  There are only two instances of a time difference of 4, while there are eight instances of a time difference of 1.  A correct solution using my approach needs to account for these varied probabilities.  Here is a revised version of Table 2 with these probabilities included.

Conveniently–as I had noted without full realization in my last post–the revised Table 2 still shows the distribution for the 2nd and all future potential rounds until the siblings finally align, including the probabilities.  This proved to be a critical feature of the problem.

Another oversight was not fully recognizing which events would contribute to increasing the time before parity.  The yellow highlighted cells in Table 2 are those for which the next app choice was longer than the current time difference, and any of these would increase the length of a trial.

I was initially correct in concluding there was a $\frac{1}{5}$ probability of the second app choice achieving a simultaneous finish and that this would not result in any additional total time.  I missed the fact that the six non-highlighted values also did not result in additional time and that there was a $\frac{1}{5}$ chance of this happening.

That leaves a $\frac{3}{5}$ chance of the trial time extending by selecting one of the highlighted events.  If that happens, the expected time the trial would continue is

$\displaystyle \frac{4*4+(4+3)*3+(4+3+2)*2+(4+3+2+1)*1}{4+(4+3)+(4+3+2)+(4+3+2+1)}=\frac{13}{6}$ minutes.
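That 13/6 can also be verified by direct enumeration over the weighted sample space (a sketch using the time-difference probabilities 8/20, 6/20, 4/20, and 2/20 read off Table 1):

```python
# P(d): time difference d between siblings after a mismatched round
p_d = {1: 8/20, 2: 6/20, 3: 4/20, 4: 2/20}

# The early finisher's next app u is uniform on 1..5.
# Time is added to the trial only when u > d, and the amount added is u - d.
p_add = sum(p * (5 - d) / 5 for d, p in p_d.items())       # 3/5
e_added = sum(p * (1/5) * (u - d)
              for d, p in p_d.items()
              for u in range(d + 1, 6))                    # 13/10
e_added_given_add = e_added / p_add                        # 13/6 minutes
```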

Iterating:  So now I recognized there were 3 potential outcomes at Stage 2–a $\frac{1}{5}$ chance of matching and ending, a $\frac{1}{5}$ chance of not matching but not adding time, and a $\frac{3}{5}$ chance of not matching and adding an average $\frac{13}{6}$ minutes.  Conveniently, the last two possibilities still combined to recreate perfectly the outcomes and probabilities of the original Stage 2, creating a self-similar, pseudo-fractal situation.  Here’s the revised flowchart for time.

Invoking the similarity, if there were T minutes remaining after arriving at Stage 2, then there was a $\frac{1}{5}$ chance of adding 0 minutes, a $\frac{1}{5}$ chance of remaining at T minutes, and a $\frac{3}{5}$ chance of adding $\frac{13}{6}$ minutes–that is being at $T+\frac{13}{6}$ minutes.  Equating all of this allows me to solve for T.

$T=\frac{1}{5}*0+\frac{1}{5}*T+\frac{3}{5}*\left( T+\frac{13}{6} \right) \longrightarrow T=6.5$ minutes
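Expanding and collecting the T terms makes the algebra explicit:

$\displaystyle T = \frac{1}{5} \cdot 0+\frac{1}{5}T+\frac{3}{5}T+\frac{3}{5} \cdot \frac{13}{6} \longrightarrow \frac{1}{5}T = \frac{13}{10} \longrightarrow T = \frac{13}{2} = 6.5$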

Time Solution:  As noted above, at the start, there was a $\frac{1}{5}$ chance of immediately matching with an average 3 minutes, and there was a $\frac{4}{5}$ chance of not matching while using an average 4 minutes.  I just showed that from this latter stage, one would expect to need to use an additional mean 6.5 minutes for the siblings to end simultaneously, for a mean total of 10.5 minutes.  That means the overall expected time spent is

Total Expected Time $=\frac{1}{5}*3 + \frac{4}{5}*10.5 = 9$ minutes.

Number of Rounds Solution:  My initial computation of the number of rounds was actually correct–despite the comment from “P” in my last post–but I think the explanation could have been clearer.  I’ll try again.

One round is obviously required for the first choice, and in the $\frac{4}{5}$ chance the siblings don’t match, let N be the average number of rounds remaining.  In Stage 2, there’s a $\frac{1}{5}$ chance the trial will end with the next choice, and a $\frac{4}{5}$ chance there will still be N rounds remaining.  This second situation is correct because both the no time added and time added possibilities combine to reset Table 2 with a combined probability of $\frac{4}{5}$.  As before, I invoke self-similarity to find N.

$N = \frac{1}{5}*1 + \frac{4}{5}*N \longrightarrow N=5$

Therefore, the expected number of rounds is $\frac{1}{5}*1 + \frac{4}{5}*5 = 4.2$ rounds.

It would be cool if someone could confirm this prediction by simulation.
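In that spirit, here is a quick Monte Carlo sketch in Python.  The rules as I have modeled them are an assumption on my part: each app lasts a whole number of minutes from 1 to 5, chosen uniformly, and whichever sibling is currently behind picks the next app.  The names and the round-counting convention are my own, chosen to match the counting above.

```python
import random

def one_trial(rng):
    """Run one sibling-app trial; return (total_minutes, rounds).

    Rounds are counted the way the post does: the initial simultaneous
    choice counts as round 1 if the siblings match immediately; otherwise
    the count is the number of single choices that follow.
    """
    a = rng.randint(1, 5)   # each sibling picks an app lasting 1-5 minutes
    b = rng.randint(1, 5)
    if a == b:
        return a, 1          # matched on the first round
    picks = 0
    while a != b:            # the sibling who finished first picks again
        picks += 1
        if a < b:
            a += rng.randint(1, 5)
        else:
            b += rng.randint(1, 5)
    return a, picks

def simulate(n=100_000, seed=1):
    """Estimate the mean total time and mean number of rounds."""
    rng = random.Random(seed)
    results = [one_trial(rng) for _ in range(n)]
    mean_time = sum(t for t, _ in results) / n
    mean_rounds = sum(r for _, r in results) / n
    return mean_time, mean_rounds
```

Under these assumptions, a run of `simulate()` should land close to the predicted 9 minutes and 4.2 rounds.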

CONCLUSION:

I corrected my work and found the exact solution proposed by others and simulated by Steve!  Even better, I have shown that my approach works:  while notably less elegant than some others, this expected value problem can be solved by invoking the definition of expected value.

Best of all, I learned from a mistake and didn’t give up on a problem.  Now that’s the real lesson I hope all of my students get.

Happy New Year, everyone!

Great Probability Problems

UPDATE:  Unfortunately, I found a couple of errors in my computations below after this post went live.  In my next post, Mistakes are Good, I fix those errors and reflect on the process of learning from them.

ORIGINAL POST:

A post last week to the AP Statistics Teacher Community by David Bock alerted me to the new weekly Puzzler at Nate Silver’s new Web site, http://fivethirtyeight.com/.  As David noted, with its focus on probability, this new feature offers some great possibilities for AP Statistics probability and simulation.

I describe below FiveThirtyEight’s first three Puzzlers along with a potential solution to the most recent one.  If you’re searching for great problems for your classes or challenges for some of your students, try these out!

THE FIRST THREE PUZZLERS:

The first Puzzler asked a variation on a great engineering question:

You work for a tech firm developing the newest smartphone that supposedly can survive falls from great heights. Your firm wants to advertise the maximum height from which the phone can be dropped without breaking.

You are given two of the smartphones and access to a 100-story tower from which you can drop either phone from whatever story you want. If it doesn’t break when it falls, you can retrieve it and use it for future drops. But if it breaks, you don’t get a replacement phone.

Using the two phones, what is the minimum number of drops you need to ensure that you can determine exactly the highest story from which a dropped phone does not break? (Assume you know that it breaks when dropped from the very top.) What if, instead, the tower were 1,000 stories high?
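This is a variant of the classic two-egg problem, and one standard way to reason about it is the recursion f(d, e) = f(d-1, e-1) + f(d-1, e) + 1, where f(d, e) is the number of floors distinguishable with d drops and e intact phones.  As a sketch of that idea (the function name and structure are mine, not from FiveThirtyEight's solution), a few lines of Python find the minimum number of drops:

```python
def min_drops(floors, phones=2):
    """Fewest drops guaranteeing we can identify the highest safe floor.

    cover[e] holds the number of floors distinguishable with e phones
    and the current number of drops, via the recursion
    f(d, e) = f(d-1, e-1) + f(d-1, e) + 1.
    """
    drops = 0
    cover = [0] * (phones + 1)
    while cover[phones] < floors:
        drops += 1
        # update high-to-low so cover[e-1] still holds the (d-1)-drop value
        for e in range(phones, 0, -1):
            cover[e] += cover[e - 1] + 1
    return drops
```

With two phones, f(d, 2) works out to the triangular number d(d+1)/2, so the answer is the smallest d with d(d+1)/2 at least the number of floors: 14 drops for 100 stories and 45 for 1,000.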

The second Puzzler investigated random geyser eruptions:

You arrive at the beautiful Three Geysers National Park. You read a placard explaining that the three eponymous geysers — creatively named A, B and C — erupt at intervals of precisely two hours, four hours and six hours, respectively. However, you just got there, so you have no idea how the three eruptions are staggered. Assuming they each started erupting at some independently random point in history, what are the probabilities that A, B and C, respectively, will be the first to erupt after your arrival?
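FiveThirtyEight posts the worked solution, but the setup is easy to check by simulation: arriving at a random moment means the remaining wait for each geyser is uniform on its period.  A quick Monte Carlo sketch under that assumption (the naming is my own, not from the official solution):

```python
import random

def geyser_first_probs(trials=200_000, seed=1):
    """Estimate P(A first), P(B first), P(C first).

    Arriving at a random moment, the remaining wait for each geyser is
    uniform on [0, period): 2 hours for A, 4 for B, 6 for C.
    """
    rng = random.Random(seed)
    wins = [0, 0, 0]
    for _ in range(trials):
        waits = (rng.uniform(0, 2), rng.uniform(0, 4), rng.uniform(0, 6))
        wins[waits.index(min(waits))] += 1
    return [w / trials for w in wins]
```

The estimates should land near 23/36, 8/36, and 5/36 (about 0.639, 0.222, and 0.139), the values a short integration over the three joint densities gives.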

Both are very cool problems with solutions on the FiveThirtyEight site.  The current Puzzler asks about siblings playing with new phone apps.

SOLVING THE CURRENT PUZZLER:

Before I started, I saw Nick Brown‘s interesting Tweet of his simulation.

If Nick’s correct, it looks like a mode of 5 minutes and an understandable right skew.  I approached the solution by first considering the distribution of initial random app choices.

There is a $\displaystyle \frac{5}{25}$ chance the siblings choose the same app and head to dinner after the first round.  The expected length of that round is $\frac{1}{5} \cdot \left( 1+2+3+4+5 \right) = 3$ minutes.

That means there is a $\displaystyle \frac{4}{5}$ chance different length apps are chosen with time differences between 1 and 4 minutes.  In the case of unequal apps, the average time spent before the shorter app finishes is $\frac{1}{25} \cdot \left( 8*1+6*2+4*3+2*4 \right) = 1.6$ minutes.

It doesn’t matter which sibling chose the shorter app.  That sibling chooses next with distribution as follows.

While the distributions are different, conveniently, there is still a time difference between 1 and 4 minutes when the total times aren’t equal.  That means the second table shows the distribution for the 2nd and all future potential rounds until the siblings finally align.  While this problem has the potential to extend for quite some time, this adds a nice pseudo-fractal self-similarity to the scenario.

As noted, there is a $\displaystyle \frac{4}{20}=\frac{1}{5}$ chance they complete their apps on any round after the first, and this would not add any additional time to the total as the sibling making the choice at this time would have initially chosen the shorter total app time(s).  Each round after the first will take an expected time of $\frac{1}{20} \cdot \left( 7*1+5*2+3*3+1*4 \right) = 1.5$ minutes.

The only remaining question is the expected number of rounds of app choices the siblings will take if they don’t align on their first choice.  This is where I invoked self-similarity.

In the initial choice there was a $\frac{4}{5}$ chance one sibling would take an average 1.6 minutes using a shorter app than the other.  From there, some unknown average N choices remain.  There is a $\frac{1}{5}$ chance the choosing sibling ends the experiment with no additional time, and a $\frac{4}{5}$ chance s/he takes an average 1.5 minutes to end up back at the Table 2 distribution, still needing an average N choices to finish the experiment (the pseudo-fractal self-similarity connection).  All of this is summarized in the flowchart below.

Recognizing the self-similarity allows me to solve for N.

$\displaystyle N = \frac{1}{5} \cdot 1 + \frac{4}{5} \cdot N \longrightarrow N=5$

Number of Rounds – Starting from the beginning, there is a $\frac{1}{5}$ chance of ending in 1 round and a $\frac{4}{5}$ chance of ending in an average 5 rounds, so the expected number of rounds of app choices before the siblings simultaneously end is

$\frac{1}{5} *1 + \frac{4}{5}*5=4.2$ rounds

Time until Eating – In the first choice, there is a $\frac{1}{5}$ chance of ending in 3 minutes.  If that doesn’t happen, there is a subsequent $\frac{1}{5}$ chance of ending with the second choice with no additional time.  If neither of those events happens, there will be 1.6 minutes on the first choice plus an average 5 more rounds, each taking an average 1.5 minutes, for a total average $1.6+5*1.5=9.1$ minutes.  So the total average time until both siblings finish simultaneously will be

$\frac{1}{5}*3+\frac{4}{5}*9.1 = 7.88$ minutes

CONCLUSION:

My 7.88 minute mean is reasonably to the right of Nick’s 5 minute mode shown above.  We’ll see tomorrow if I match the FiveThirtyEight solution.

Anyone else want to give it a go?  I’d love to hear other approaches.