Tag Archives: problem-solving

Quadratics + Tangent = ???

 

Here’s a very pretty problem I encountered on Twitter from Mike Lawler 1.5 months ago.

I’m late to the game replying to Mike’s post, but this problem is the most lovely combination of features of quadratic and trigonometric functions I’ve ever encountered in a single question, so I couldn’t resist.  This one is well worth the time for you to explore on your own before reading further.

My full thoughts and explorations follow.  I have landed on some nice insights and what I believe is an elegant solution (in Insight #5 below).  Leading up to that, I share the chronology of my investigations and thought processes.  As always, all feedback is welcome.

WARNING:  HINTS AND SOLUTIONS FOLLOW

Investigation #1:

My first thoughts were influenced by spoilers posted as quick replies to Mike’s post.  The coefficients of the underlying quadratic, A^2-9A+1=0, say that the solutions to the quadratic sum to 9 and multiply to 1.  The product of 1 turned out to be critical, but I didn’t see just how central it was until I had explored further.  I didn’t immediately recognize the 9 as a red herring.

Basic trig experience (and a response spoiler) suggested the angle values for the tangent embedded in the quadratic weren’t common angles, so I jumped to Desmos first.  I knew the graph of the overall given equation would be ugly, so I initially solved the equation by graphing the quadratic, computing arctangents, and adding.

[Image tan1: Desmos graphs of the quadratic, with the computed arctangents and their sum]

Insight #1:  A Curious Sum

The sum of the arctangent solutions was about 1.57…, a decimal form suspiciously suggesting a sum of \pi/2.  I wasn’t yet worried about all solutions in the required [0,2\pi ] interval, but for whatever strange angles were determined by this equation, their sum was strangely pretty and succinct.  If this worked for a seemingly random sum of 9 for the tangent solutions, perhaps it would work for others.
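If you want to check Insight #1 numerically yourself, the computation is quick to script.  Here's a minimal sketch in Python (my choice of tool here, not the Desmos workflow described above):

```python
import math

# Roots of A^2 - 9A + 1 = 0, where A = tan(x)
r1 = (9 + math.sqrt(77)) / 2
r2 = (9 - math.sqrt(77)) / 2

# Both roots are positive, so the first two angle solutions are in Quadrant I
print(math.atan(r1) + math.atan(r2))  # 1.5707963... suspiciously close to pi/2
```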

Unfortunately, Desmos is not a CAS, so I turned to GeoGebra for more power.

Investigation #2:  

In GeoGebra, I created a sketch to vary the linear coefficient of the quadratic and to dynamically calculate angle sums.  My procedure is noted at the end of this post.  You can play with my GeoGebra sketch here.

The x-coordinate of point G is the sum of the first two angle solutions of the tangent equation.

Likewise, the x-coordinate of point H is the sum of all four angle solutions required by the problem.

[Image tan2: the GeoGebra sketch, with slider b, intersection points C, D, E, and F, and sum points G and H]

Insight #2:  The Angles are Irrelevant

By dragging the slider for the linear coefficient, the parabola’s intercepts changed, but as predicted in Insight #1, the angle sums (x-coordinates of points G & H) remained invariant under all Real values of points A & B.  The angle sum of points C & D seemed to be \pi/2 (point G), confirming Insight #1, while the angle sum of all four solutions in [0,2\pi] remained 3\pi (point H), answering Mike’s question.

The invariance of the angle sums even while varying the underlying individual angles seemed compelling evidence that this problem was richer than the posed version.

Insight #3:  But the Angles are bounded

The parabola didn’t always have Real solutions.  In fact, Real x-intercepts (and thereby Real angle solutions) happened iff the discriminant was non-negative:  B^2-4AC=b^2-4*1*1 \ge 0, that is, iff \left| b \right| \ge 2.  For b \ge 2, both roots of (tan(x))^2-b*tan(x)+1=0 are positive, so the sum of the first two positive angle solutions is \pi /2, and the sum of the first four solutions is 3\pi.  (For b \le -2, both roots are negative, the first two solutions land in Quadrant II, and the corresponding sums become 3\pi /2 and 5\pi.)  These results extend to the equalities at the endpoints iff the double solutions there are counted twice in the sums.  I am not convinced these facts extend to the complex angles resulting when -2<b<2.

I knew the answer to the now extended problem, but I didn’t know why.  Even so, these solutions and the problem’s request for a SUM of angles provided the insights needed to understand WHY this worked; it was time to fully consider the product of the tangent values.

Insight #4:  Finally a proof

It was now clear that for \left| b \right| \ge 2 there were two Quadrant I angles whose tangents were equal to the x-intercepts of the quadratic.  If x_1 and x_2 are the quadratic zeros, then I needed to find the sum A+B where tan(A)=x_1 and tan(B)=x_2.

From the coefficients of the given quadratic, I knew x_1+x_2=tan(A)+tan(B)=9 and x_1*x_2=tan(A)*tan(B)=1.

Employing the tangent sum identity gave

\displaystyle tan(A+B) = \frac{tan(A)+tan(B)}{1-tan(A)tan(B)} = \frac{9}{1-1}

and this fraction is undefined, independent of the value of x_1+x_2=tan(A)+tan(B) as suggested by Insight #2.  Because tan(A+B) is first undefined at \pi/2, the first solutions are \displaystyle A+B=\frac{\pi}{2}.

Insight #5:  Cofunctions reveal essence

The tangent identity was a cute touch, but I wanted something deeper, not just an interpretation of an algebraic result.  (I know this is uncharacteristic for my typically algebraic tendencies.)  The final key was in the implications of tan(A)*tan(B)=1.

This product meant the tangent solutions were reciprocals, and the reciprocal of tangent is cotangent, giving

\displaystyle tan(A) = \frac{1}{tan(B)} = cot(B).

But cotangent is also the co-function–or complement function–of tangent which gave me

tan(A) = cot(B) = tan \left( \frac{\pi}{2} - B \right).

Because tangent is monotonic and one-to-one within each cycle, and both A and \frac{\pi}{2}-B lie in the same cycle (Quadrant I), the equivalence of the tangents implied the equivalence of the angles:  A = \frac{\pi}{2} - B, or A+B = \frac{\pi}{2}.  Using the Insights above, this means the sum of the solutions to the generalization of Mike’s given equation,

(tan(x))^2-b*tan(x)+1=0 for x in [0,2\pi ] and any b \ge 2,

is always 3\pi with the fundamental reason for this in the definition of trigonometric functions and their co-functions.  QED

Insight #6:  Generalizing the Domain

The posed problem can be generalized further by recognizing the period of tangent, \pi:  successive corresponding solutions of the embedded tangent equations always differ by exactly \pi, as shown in the GeoGebra construction above.

Insights 4 & 5 proved the sum of the angles at points C & D was \pi/2.  Employing the periodicity of tangent, the x-coordinate of E is x_C+\pi and the x-coordinate of F is x_D+\pi, so the sum of the angles at points E & F is \frac{\pi}{2} + 2 \pi.

Extending the problem domain to [0,3\pi ] would add \frac{\pi}{2} + 4\pi more to the solution, and a domain of [0,4\pi ] would add an additional \frac{\pi}{2} + 6\pi.  Pushing the domain to [0,k\pi ] would give total sum

\displaystyle \left( \frac{\pi}{2} \right) + \left( \frac{\pi}{2} +2\pi \right) + \left( \frac{\pi}{2} +4\pi \right) + \left( \frac{\pi}{2} +6\pi \right) + ... + \left( \frac{\pi}{2} +2(k-1)\pi \right)

Combining terms gives a general formula for the sum of solutions for a problem domain of [0,k\pi ]:

\displaystyle k * \frac{\pi}{2} + \left( 2+4+6+...+2(k-1) \right) * \pi = k * \frac{\pi}{2} + k(k-1) \pi = \frac{\pi}{2} * k * (2k-1)

For the first solutions in Quadrant I, [0,\pi] means k=1, and the sum is \displaystyle \frac{\pi}{2}*1*(2*1-1) = \frac{\pi}{2}.

For the solutions in the problem Mike originally posed, [0,2\pi] means k=2, and the sum is \displaystyle \frac{\pi}{2}*2*(2*2-1) = 3\pi.
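Both special cases, and the general formula, are easy to verify numerically.  Here's a short Python sketch (my own check, using the original b=9 equation) comparing a brute-force sum of solutions in [0,k\pi ] against \frac{\pi}{2}*k*(2k-1):

```python
import math

# Quadrant I solutions of tan(x)^2 - 9*tan(x) + 1 = 0
a = math.atan((9 + math.sqrt(77)) / 2)
b = math.atan((9 - math.sqrt(77)) / 2)

for k in range(1, 7):
    # every solution in [0, k*pi] is a Quadrant I solution shifted by a multiple of pi
    brute = sum(x + m * math.pi for x in (a, b) for m in range(k))
    formula = math.pi / 2 * k * (2 * k - 1)
    print(k, abs(brute - formula) < 1e-9)  # True for every k
```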

I think that’s enough for one problem.

APPENDIX

My GeoGebra procedure for Investigation #2:

  • Graph the quadratic with a slider for the linear coefficient, y=x^2-b*x+1.
  • Label the x-intercepts A & B.
  • The x-values of A & B are the outputs of tangent, so I reflected these over y=x onto the y-axis to construct A’ and B’.
  • Graph y=tan(x) and construct perpendiculars at A’ and B’ to determine the points of intersection with tangent–points C, D, E, and F in the image below.
  • The x-coordinates of C, D, E, and F are the angles required by the problem.
  • Since these can be treated as points or vectors in GeoGebra, I created point G by G=C+D.  The x-coordinate of G is the angle sum of C & D.
  • Likewise, the x-coordinate of point H=C+D+E+F is the required angle sum.



Roots of Complex Numbers without DeMoivre

Finding roots of complex numbers can be … complex.

This post describes a way to compute roots of any number–real or complex–via systems of equations without any conversions to polar form or use of DeMoivre’s Theorem.  Following a “traditional approach,” one non-technology example is followed by a CAS simplification of the process.

TRADITIONAL APPROACH:

Most sources describe the following procedure to compute the roots of complex numbers (obviously including the real number subset).

  • Write the complex number whose root is sought in generic polar form.  If necessary, convert from Cartesian form.
  • Invoke DeMoivre’s Theorem to get the polar form of all of the roots.
  • If necessary, convert the numbers from polar form back to Cartesian.

As a very quick example,

Compute all square roots of -16.

Rephrased, this asks for all complex numbers, z, that satisfy  z^2=-16.  The Fundamental Theorem of Algebra guarantees two solutions to this quadratic equation.

The complex Cartesian number, -16+0i, converts to polar form, 16cis( \pi ), where cis(\theta ) = cos( \theta ) +i*sin( \theta ).  Unlike Cartesian form, polar representations of numbers are not unique, so any full rotation from the initial representation would be coincident, and therefore equivalent if converted to Cartesian.  For any integer n, this means

-16 = 16cis( \pi ) = 16 cis \left( \pi + 2 \pi n \right)

Invoking DeMoivre’s Theorem,

\sqrt{-16} = (-16)^{1/2} = \left( 16 cis \left( \pi + 2 \pi n \right) \right) ^{1/2}
= 16^{1/2} * cis \left( \frac{1}{2} \left( \pi + 2 \pi n \right) \right)
= 4 * cis \left( \frac{ \pi }{2} + \pi * n \right)

For n= \{ 0, 1 \} , this gives polar solutions, 4cis \left( \frac{ \pi }{2} \right) and 4cis \left( \frac{ 3 \pi }{2} \right) .  Each can be converted back to Cartesian form, giving the two square roots of -16:   4i and -4i .  Squaring either gives -16, confirming the result.
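As a quick numeric check of the DeMoivre computation, here is a short Python sketch using the built-in cmath module (my addition, not part of the traditional approach):

```python
import cmath, math

# the two square roots of -16, from 4*cis(pi/2 + pi*n)
for n in (0, 1):
    z = 4 * cmath.exp(1j * (math.pi / 2 + math.pi * n))
    print(z, z**2)  # approximately 4i and -4i; both square back to -16
```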

I’ve always found the rotational symmetry of the complex roots of any number beautiful, particularly for higher order roots.  This symmetry is perfectly captured by DeMoivre’s Theorem, but there is arguably a simpler way to compute them.

NEW(?) NON-TECH APPROACH:

Because the solution to every complex number computation can be written in a+bi form, new possibilities open.  The original example can be rephrased:

Determine the simultaneous real values of x and y for which -16=(x+yi)^2.

Start by expanding and simplifying the right side back into a+bi form.  (I wrote about a potentially easier approach to simplifying powers of i in my last post.)

-16+0i = \left( x+yi \right)^2 = x^2 +2xyi+y^2 i^2=(x^2-y^2)+(2xy)i

Notice that the two ends of the previous line are two different expressions for the same complex number(s).  Therefore, equating the real and imaginary coefficients gives a system of equations:

-16 = x^2-y^2
0 = 2xy

Solving the system gives the square roots of -16.

From the latter equation, either x=0 or y=0.  Substituting y=0 into the first equation gives -16=x^2, an impossible equation because x & y are both real numbers, as stated above.

Substituting x=0 into the first equation gives -16=-y^2, leading to y= \pm 4.  So, x=0 and y=-4 -OR- x=0 and y=4 are the only solutions–x+yi=0-4i and x+yi=0+4i–the same solutions found earlier, but this time without using polar form or DeMoivre!  Notice, too, that the presence of TWO solutions emerged naturally.

Higher order roots could lead to much more complicated systems of equations, but a CAS can solve that problem.

CAS APPROACH:

Determine all fourth roots of 1+2i.

That’s equivalent to finding all simultaneous x and y values that satisfy 1+2i=(x+yi)^4.  Expanding the right side is quickly accomplished on a CAS.  From my TI-Nspire CAS:

[Screenshot demoivre1: TI-Nspire CAS expansion of (x+yi)^4]

Notice that the output is simplified to a+bi form that, in the context of this particular example, gives the system of equations,

1 = x^4-6x^2y^2+y^4
2 = 4x^3y-4xy^3

Using my CAS to solve the system,

[Screenshot demoivre2: CAS solution of the system, four (x, y) pairs]

First, note there are four solutions, as expected.  Rewriting the approximated numerical output gives the four complex fourth roots of 1+2i:  -1.176-0.334i, -0.334+1.176i, 0.334-1.176i, and 1.176+0.334i.  Each can be quickly confirmed on the CAS:

[Screenshot demoivre3: CAS confirmation that each root raised to the 4th power returns 1+2i]

CONCLUSION:

Given proper technology, finding the multiple roots of a complex number need not invoke polar representations or DeMoivre’s Theorem.  It really is as “simple” as expanding (x+yi)^n where n is the given root, simplifying the expansion into a+bi form, and solving the resulting 2×2 system of equations.
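For those without an Nspire handy, here is a sketch of the same system-of-equations method in Python using SymPy (an assumption of tooling on my part; the CAS screens above are the Nspire versions):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# (x + yi)^4 expanded into a + bi form gives the 2x2 system from above
real_eq = sp.Eq(x**4 - 6*x**2*y**2 + y**4, 1)
imag_eq = sp.Eq(4*x**3*y - 4*x*y**3, 2)

for xv, yv in sp.solve([real_eq, imag_eq], [x, y]):
    print(sp.N(xv + yv*sp.I, 4))  # the four fourth roots of 1 + 2i
```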

At the point when such problems would be introduced to students, their algebraic awareness should be such that using a CAS to do all the algebraic heavy lifting is entirely appropriate.

As one final glimpse at the beauty of complex roots, I entered the two equations from the last system into Desmos to take advantage of its very good implicit graphing capabilities.  You can see the four intersections corresponding to the four solutions of the system.  Solutions to systems of implicit equations are notoriously difficult to compute, so I wasn’t surprised when Desmos didn’t compute the coordinates of the points of intersection, even though the graph was pretty and surprisingly quick to generate.

[Image demoivre4: Desmos implicit graphs of the two equations, intersecting at the four roots]

Stats Exploration Yields Deeper Understanding

or “A lesson I wouldn’t have learned without technology”

Last November, some of my AP Statistics students were solving a problem involving a normal distribution with an unknown mean.  Leveraging the TI-Nspire CAS calculators we use for all computations, they crafted a logical command that should have worked.  Their unexpected result initially left us scratching our heads.  After some conversations with the great folks at TI, we realized that what at first seemed like a perfectly reasonable query for a single answer in fact had two solutions.  And it took until the end of this week for another student to finally identify and resolve the mysterious results.  This ‘blog post recounts our journey from a questionable normal probability result to a rich approach to confidence intervals.

THE INITIAL PROBLEM

I had assigned an AP Statistics free response question about a manufacturing process that could be manipulated to control the mean distance its golf balls would travel.  We were told that the process created balls whose distances were normally distributed with a mean of 288 yards and a standard deviation of 2.8 yards.  The first part asked students to find the probability of balls traveling more than the allowable 291.2 yards.  This was straightforward:  find the area under a normal curve with mean 288 and standard deviation 2.8 from 291.2 to infinity.  The Nspire (CAS and non-CAS) syntax for this is:

[Screenshot golf1: the normCdf command, returning ≈ 0.127]

[Post publishing note: See Dennis’ comment below for a small correction for the non-CAS Nspires.  I forgot that those machines don’t accept “infinity” as a bound.]

As 12.7% of the golf balls traveling too far is obviously an unacceptably high percentage, the next part asked for the mean distance needed so only 99% of the balls traveled allowable distances.  That’s when things got interesting.

A “LOGICAL” RESPONSE RESULTS IN A MYSTERY

Their initial thought was that even though they didn’t know the mean, they now knew the output of their normCdf command.  Since the balls couldn’t travel a negative distance and zero was many standard deviations from the unknown mean, the following equation, with x representing the unknown mean, should define the scenario nicely.

[Screenshot golf2: the normCdf equation with unknown mean x, set equal to 0.99]

Because this was an equation with a single unknown, we could now use our CAS calculators to solve for the missing parameter.

[Screenshot golf3: the solve command returning x ≈ 6.5]

Something was wrong.  How could the mean distance possibly be just 6.5 yards?  The Nspires are great, reliable machines.  What happened?

I had encountered something like this before with unexpected answers when a solve command was applied to a Normal cdf with dual finite bounds.  While it didn’t seem logical to me why this should make a difference, I asked them to try an infinite lower bound and also to try computing the area on the other side of 291.2.  Both of these provided the expected solution.

[Screenshot golf4: solve with an infinite lower bound, and with the complementary area, both returning x ≈ 284.7]

The caution symbol on the last line should have been a warning, but I honestly didn’t see it at the time.  I was happy to see the expected solution, but quite frustrated that infinite bounds seemed to be required.  Beyond three standard deviations from the mean of any normal distribution, almost no area exists, so how could extending the lower bound from 0 to negative infinity make any difference in the solution when 0 was already \frac{291.2}{2.8}=104 standard deviations away from 291.2?  I couldn’t make sense of it.

My initial assumption was that something was wrong with the programming in the Nspire, so I emailed some colleagues I knew within CAS development at TI.

GRAPHS REVEAL A HIDDEN SOLUTION

They reminded me that statistical computations in the Nspire CAS were resolved through numeric algorithms–an understandable approach given the algebraic definition of the normal and other probability distribution functions.  The downside to this is that numeric solvers may not pick up on (or are incapable of finding) difficult to locate or multiple solutions.  Their suggestion was to employ a graph whenever we got stuck.  This, too, made sense because graphing a function forced the machine to evaluate multiple values of the unknown variable over a predefined domain.

It was also a good reminder for my students that a solution to any algebraic equation can be thought of as the first substitution solution step for a system of equations.  Going back to the initially troublesome input, I rewrote normCdf(0,291.2,x,2.8)=0.99 as the system

y=normCdf(0,291.2,x,2.8)
y=0.99

and “the point” of intersection of that system would be the solution we sought.  Notice my emphasis indicating my still lingering assumptions about the problem.  Graphing both equations shone a clear light on what was my persistent misunderstanding.

[Image golf5: graphs of y = normCdf(0, 291.2, x, 2.8) and y = 0.99, crossing twice]

I was stunned to see two intersection solutions on the screen.  Asking the Nspire for the points of intersection revealed BOTH ANSWERS my students and I had found earlier.

[Screenshot golf6: both points of intersection, x ≈ 6.5 and x ≈ 284.7]

If both solutions were correct, then there really were two different normal pdfs that could solve the finite bounded problem.  Graphing these two pdfs finally explained what was happening.

By equating the normCdf result to 0.99 with FINITE bounds, I never specified on which end the additional 0.01 existed–left or right.  This graph showed the 0.01 could have been at either end, one solution with a mean near the expected 284 yards and the other with a mean near the unexpected 6.5 yards.  The graph below shows both normal curves, with the 6.5 solution carrying the additional 0.01 on the left and the 284 solution carrying it on the right.

[Image golf7: the two normal pdfs, means ≈ 6.5 and ≈ 284.7, each leaving 0.01 outside [0, 291.2]]

The CAS wasn’t wrong in the beginning.  I was.  And as has happened several times before, the machine didn’t rely on the same sometimes errant assumptions I did.  My students had made a very reasonable assumption that the area under the normal pdf for the golf balls should start at 0 (no negative distances) and inadvertently stumbled into a much richer problem.
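For anyone who wants to replicate the two-solution phenomenon outside the Nspire, here is a sketch using SciPy's numeric tools (my choice of environment; a root-finder's bracket plays the role the graph played for us):

```python
from scipy.stats import norm
from scipy.optimize import brentq

def f(mu):
    # area between 0 and 291.2 under Normal(mu, 2.8), minus the target 0.99
    return norm.cdf(291.2, mu, 2.8) - norm.cdf(0, mu, 2.8) - 0.99

print(brentq(f, 0, 100))    # ~6.51, the surprising solution
print(brentq(f, 200, 291))  # ~284.69, the expected solution
```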

A TEMPORARY FIX

The reason the infinity-bounded solutions didn’t give the unexpected second solution is that it is impossible for the unspecified extra 0.01 area to lie beyond an infinite lower or upper bound.

To avoid unexpected multiple solutions, I resolved to tell my students to use infinite bounds whenever solving for an unknown parameter.  It was a little dissatisfying to not be able to use my students’ “intuitive” lower bound of 0 for this problem, but at least they wouldn’t have to deal with unexpected, counterintuitive results.

Surprisingly, the permanent solution arrived weeks later when another student shared his fix for a similar problem when computing confidence interval bounds.

A PERMANENT FIX FROM AN UNEXPECTED SOURCE

I really don’t like the way almost all statistics textbooks provide complicated formulas for computing confidence intervals using standardized z- and t-distribution critical scores.  Ultimately a 95% confidence interval is nothing more than the bounds of the middle 95% of a probability distribution whose mean and standard deviation are defined by a sample from the overall population.  Where the problem above solved for an unknown mean, on a CAS, computing a confidence interval follows essentially the same reasoning to determine missing endpoints.

My theme in every math class I teach is to memorize as little as you can, and use what you know as widely as possible.  Applying this to AP Statistics, I never reveal the existence of confidence interval commands on calculators until we’re 1-2 weeks past their initial introduction.  This allows me to develop a solid understanding of confidence intervals using a variation on calculator commands they already know.

For example, assume you need a 95% confidence interval of the percentage of votes Bernie Sanders is likely to receive in Monday’s Iowa Caucus.  The CNN-ORC poll released January 21 showed Sanders leading Clinton 51% to 43% among 280 likely Democratic caucus-goers.  (Read the article for a glimpse at the much more complicated reality behind this statistic.)  In this sample, the proportion supporting Sanders is approximately normally distributed with sample proportion p=0.51 and standard deviation of p equal to \sqrt{ \frac{(.51)(.49)}{280} }=0.0299.  The 95% confidence interval is defined by the bounds containing the middle 95% of the data of this normal distribution.

Using the earlier lesson, one student suggested finding the bounds on his CAS by focusing on the tails.

[Screenshot golf8: tail equations solved for the interval bounds, ≈ 0.451 and ≈ 0.569]

giving a confidence interval of (0.45, 0.57) for Sanders for Monday’s caucus, according to the method of the CNN-ORC poll from mid-January.  Using a CAS keeps my students focused on what a confidence interval actually means without burying them in the underlying computations.
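The same tail-focused reasoning works in other tools, too.  Here is a sketch with SciPy (my assumption of library, using the inverse cdf rather than solving a cdf equation):

```python
from scipy.stats import norm

p = 0.51
se = (0.51 * 0.49 / 280) ** 0.5  # standard deviation of the sample proportion

# bounds leaving 0.025 in each tail, mirroring the normCdf approach above
print(norm.ppf(0.025, loc=p, scale=se))  # ~0.451
print(norm.ppf(0.975, loc=p, scale=se))  # ~0.569
```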

That’s nice, but what if you needed a confidence interval for a sample mean?  Unfortunately, the t-distribution on the Nspire is completely standardized, so confidence intervals need to be built from critical t-values.  Like on a normal distribution, a 95% confidence interval is defined by the bounds containing the middle 95% of the data.  One student reasonably suggested the following for a 95% confidence interval with 23 degrees of freedom.  I really liked the explicit syntax definition of the confidence interval.

[Screenshot golf9: the student's explicit t-interval command, returned unevaluated]

Alas, the CAS returned the input.  It couldn’t find the answer in that form.  Cognizant of the lessons learned above, I suggested reframing the query with an infinite bound.

[Screenshot golf10: the reframed query with an infinite bound, returning the endpoint]

That gave the proper endpoint, but I was again dissatisfied with the need to alter the input, even though I knew why.

That’s when another of my students spoke up to say that he got the solution to work with the initial commands by including a domain restriction.

[Screenshot golf11: the original command with a domain restriction, returning the solution]

Of course!  When more than one solution is possible, restrict the bounds to the solution range you want.  Then you can use the commands that make sense.
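The same domain-restriction idea translates directly to a numeric solver, where the restriction becomes the solver's bracket.  A sketch with SciPy (again my choice of tool, not the Nspire command):

```python
from scipy.stats import t
from scipy.optimize import brentq

# solve tCdf(-c, c) = 0.95 for df = 23, restricting the search to c > 0
c = brentq(lambda v: t.cdf(v, 23) - t.cdf(-v, 23) - 0.95, 0, 10)
print(c)  # ~2.0687, the 95% critical t-value
```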

FIXING THE INITIAL APPROACH

That small fix finally gave me the solution to the earlier syntax issue with the golf ball problem.  There were two solutions to the initial problem, so if I bounded the output, they could use their intuitive approach and get the answer they needed.

If a mean of 288 yards and a standard deviation of 2.8 yards resulted in 12.7% of the area above 291.2, then it wouldn’t take much of a left shift in the mean to leave just 1% of the area above 291.2. Surely that unknown mean would be no lower than 3 standard deviations below the current 288, somewhere above 280 yards.  Adding that single restriction to my students’ original syntax solved their problem.

[Screenshot golf13: the students' original syntax restricted to x > 280, returning x ≈ 284.7]

Perfection!

CONCLUSION

By encouraging a deep understanding of both the underlying statistical content AND of their CAS tool, students are increasingly able to find creative solutions using flexible methods and expressions intuitive to them.  And shouldn’t intellectual strength, creativity, and flexibility be the goals of every learning experience?

 

Unanticipated Proof Before Algebra

I was talking with one of our 5th graders, S,  last week about the difference between showing a few examples of numerical computations and developing a way to know something was true no matter what numbers were chosen.  I hadn’t started our conversation thinking about introducing proof.  Once we turned in that direction, I anticipated scaffolding him in a completely different direction, but S went his own way and reinforced for me the importance of listening and giving students the encouragement and room to build their own reasoning.

SETUP:  S had been telling me that he “knew” the product of an even number with any other number would always be even, while the product of any two odds was always odd.  He demonstrated this by showing lots of particular products, but I asked him if he was sure that it was still true if I were to pick some numbers he hadn’t used yet.  He was.

Then I asked him how many numbers were possible to use.  He promptly replied “infinite” at which point he finally started to see the difficulty with demonstrating that every product worked.  “We don’t have enough time” to do all that, he said.  Finally, I had maneuvered him to perhaps his first-ever realization of the need for proof.

ANTICIPATION:  But S knew nothing of formal algebra.  From my experiences with younger students sans algebra, I thought I would eventually need to help him translate his numerical problem into a geometric one.  But this story is about S’s reasoning, not mine.

INSIGHT:  I asked S how he would handle any numbers I asked him to multiply to prove his claims, even if I gave him some ridiculously large ones.  “It’s really not as hard as that,” S told me.  He quickly scribbled

[Image s1: S's large handwritten number with all but the units digit covered]

on his paper and covered up all but the one’s digit.  “You see,” he said, “all that matters is the units.  You can make the number as big as you want and I just need to look at the last digit.”  Without using this language, S was venturing into an even-odd proof via modular arithmetic.

With some more thought, he reasoned that he would focus on just the units digit through repeated multiples and see what happened.

FIFTH GRADE PROOF:  S’s math class is currently working through a multiplication unit in our 5th grade Bridges curriculum, so he was already in the mindset of multiples.  Since he said only the units digit mattered, he decided he could start with any even number and look at all of its multiples.  That is, he could keep adding the number to itself and see what happened.  As shown below, he first chose 32 and found the next four multiples, 64, 96, 128, and 160.  After that, S said the very next number in the list would end in a 2 and the loop would start all over again.

[Image s2: multiples of 32 (32, 64, 96, 128, 160) with units digits cycling 2, 4, 6, 8, 0, plus the 1002 example]

He stopped talking for several seconds, and then he smiled.  “I don’t have to look at every multiple of 32.  Any multiple will end up somewhere in my cycle and I’ve already shown that every number in this cycle is even.  Every multiple of 32 must be even!”  It was a pretty powerful moment.  Since he only needed to see the last digit, and any number ending in 2 would just add 2s to the units, this cycle now represented every number ending in 2 in the universe.  The last line above was S’s use of 1002 to show that the same cycling happened for another “2 number.”
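S's cycle argument is easy to mimic in a few lines of code.  Here is a sketch of his reasoning in Python (my reconstruction; we used nothing but pencil and paper that day):

```python
def units_cycle(n):
    """Units digits of n, 2n, 3n, ... until the pattern repeats."""
    cycle, k = [], n
    while k % 10 not in cycle:
        cycle.append(k % 10)
        k += n
    return cycle

print(units_cycle(32))    # [2, 4, 6, 8, 0]
print(units_cycle(1002))  # [2, 4, 6, 8, 0] -- same cycle, as S predicted
```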

DIFFERENT KINDS OF CYCLES:  So could he use this for all multiples of even numbers?  His next try was an “8 number.”

[Image s3: multiples of 18 with units digits cycling 8, 6, 4, 2, 0]

After five multiples of 18, he achieved the same cycling.  Even cooler, he noticed that the cycle for “8 numbers” was the “2 number” cycle backwards.

Also note that after S completed his 2s and 8s lists, he used only single digit seed numbers as the bigger starting numbers only complicated his examples.  He was on a roll now.

[Image s4: the "4 number" units cycle]

I asked him how the “4 number” cycle was related.  He noticed that the 4s used every other number in the “2 number” cycle.  It was like skip counting, he said.  Another lightbulb went off.

“And that’s because 4 is twice 2, so I just take every 2nd multiple in the first cycle!”  He quickly scratched out a “6 number” example.

[Image s5: the "6 number" units cycle]

This, too, cycled, but more importantly, because 6 is thrice 2, he said that was why this list used every 3rd number in the “2 number” cycle.  In that way, every even number multiple list was the same as the “2 number” list, you just skip-counted by different steps on your way through the list.

When I asked how he could get all the numbers in such a short list when he was counting by 3s, S said it wasn’t a problem at all.  Since it cycled, whenever you got to the end of a list, just go back to the beginning and keep counting.  We didn’t touch it last week, but he had opened the door to modular arithmetic.

I won’t show them here, but his “0 number” list always ended in 0s.  “This one isn’t very interesting,” he said.  I smiled.

ODDS:  It took a little more thought to start his odd number proof, because every other multiple was even.  After he recognized these as even numbers, S decided to list every other multiple as shown with his “1 number” and “3 number” lists.

[Image s7: the "1 number" and "3 number" units cycles]

As with the evens, the odd number lists could all be seen as skip-counted versions of each other.  Also, the 1s and 9s were written backwards from each other, and so were the 3s and 7s.  “5 number” lists were declared to be as boring as “0 numbers”.  Not only did the odds ultimately end up cycling essentially the same as the evens, but they had the same sort of underlying relationships.

CONCLUSION:  At this point, S declared that since he had shown every possible case for evens and odds, then he had shown that any multiple of an even number was always even, and any odd multiple of an odd number was odd.  And he knew this because no matter how far down the list he went, eventually any multiple had to end up someplace in his cycles.  At that point I reminded S of his earlier claim that there was an infinite number of even and odd numbers.  When he realized that he had just shown a case-by-case reason for more numbers than he could ever demonstrate by hand, he sat back in his chair, exclaiming, “Whoa!  That’s cool!”

It’s not a formal mathematical proof, and when S learns some algebra, he’ll be able to accomplish his cases far more efficiently, but this was an unexpectedly nice and perfectly legitimate numerical proof of even and odd multiples for an elementary student.

 

PowerBall Redux

Donate to a charity instead.  Let me explain.
The majority of responses to my PowerBall description/warnings yesterday have been, “If you don’t play, you can’t win.”  Unfortunately, I know many, many people are buying many lottery tickets, way more than they should.
 
OK.  For almost everyone, there’s little harm in spending $2 on a ticket for the entertainment, but don’t expect to win, and don’t buy multiple tickets unless you can afford to do without every dollar you spend. I worry about those who are “investing” tens or hundreds of dollars on any lottery.
Two of my school colleagues captured the idea of a lottery yesterday with their analogies,
Steve:  Suppose you go to the beach and grab a handful of sand and bring it back to your house.  And you do that every single day. Then your odds of winning the powerball are still slightly worse than picking out one particular grain of sand from all the sand you accumulated over an entire year.
Or more simply put from the perspective of a lottery official, 
Patrick:  Here’s our idea.  You guys all throw your money in a big pile.  Then, after we take some of it, we’ll give the pile to just one of you.
WHY YOU SHOULDN’T BUY MULTIPLE TICKETS:
For perspective, a football field is 120 yards long, or 703.6 US dollars long using the logic of my last post. Rounding up, that would buy you 352 PowerBall tickets. That means investing $704 would buy you a single football-field length of chances out of 10.5 coast-to-coast traverses of the entire United States.  There’s going to be an incredibly large number of disappointed people tomorrow.
MORAL:  Even an incredibly large multiple of a less-than-microscopic chance is still a less-than-microscopic chance.
BETTER IDEA: Assume you have the resources and are willing to part with tens or hundreds of dollars for no likelihood of tangible personal gain.  Using the $704 football example, buy 2 tickets and donate the other $700 to charity. You’ll do much more good.

PowerBall Math

Given the record size and mania surrounding the current PowerBall Lottery, I thought some of you might be interested in bringing that game into perspective.  This could be an interesting application with some teachers and students.

It certainly is entertaining for many to dream about what you would do if you happened to be lucky enough to win an astronomical lottery.  And lottery vendors are quick to note that your dreams can’t come true if you don’t play.  Nice advertising.  I’ll let the numbers speak to the veracity of the Lottery’s encouragement.

PowerBall is played by picking any 5 different numbers between 1 & 69, and then one PowerBall number between 1 & 26.  So there are nCr(69,5)*26=292,201,338 outcomes for this game.  Unfortunately, humans have a particularly difficult time understanding extremely large numbers, so I offer an analogy to bring it a little into perspective.
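The counting is quick to verify.  A sketch of the arithmetic in Python (my addition, just to make the numbers reproducible):

```python
from math import comb

outcomes = comb(69, 5) * 26             # 292,201,338 possible tickets
us_width_inches = 2680 * 5280 * 12      # miles converted to inches
bills_across = us_width_inches / 6.14   # ~27,655,505 dollar bills coast to coast
print(outcomes, outcomes / bills_across)  # ~10.57 crossings of the U.S.
```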

  • The horizontal width of the United States is generally reported to be 2680 miles, and a U.S. dollar bill is 6.14 inches wide.  That means the U.S. is approximately 27,655,505 dollar bills wide.
  • If I have 292,201,338 dollar bills (one for every possible PowerBall outcome), I could make a line of dollar bills placed end-to-end from the U.S. East Coast all the way to the West Coast, back to the East, back to the West, and so forth, passing back and forth between the two coasts just over 10.5 times.
  • Now imagine that exactly one of those dollar bills was replaced with a replica dollar bill made from gold colored paper.

 

Your chances of winning the PowerBall lottery are the same as randomly selecting that single gold note from all of those dollar bills laid end-to-end and crossing the entire breadth of the United States 10.5 times. 

Dreaming is fun, but how likely is this particular dream to become real?

Play the lottery if doing so is entertaining to you, but like going to the movie theater, don’t expect to get any money back in return.

Mistakes are Good

Confession #1:  My answers on my last post were WRONG.

I briefly thought about taking that post down, but discarded that idea when I thought about the reality that almost all published mathematics is polished, cleaned, and optimized.  Many students struggle with mathematics under the misconception that their first attempts at any topic should be as polished as what they read in published sources.

While not precisely from the same perspective, Dan Teague recently wrote an excellent, short piece of advice to new teachers on NCTM’s ‘blog entitled Demonstrating Competence by Making Mistakes.  I argue Dan’s advice actually applies to all teachers, so in the spirit of showing how to stick with a problem and not just walking away saying “I was wrong”, I’m going to keep my original post up, add an advisory note at the start about the error, and show below how I corrected my error.

Confession #2:  My approach was a much longer and far less elegant solution than the identical approaches offered by a comment by “P” on my last post and the solution offered on FiveThirtyEight.  Rather than just accepting the alternative solution, as too many students are wont to do, I acknowledged the more efficient approach of others before proceeding to find a way to get the answer through my initial idea.

I’ll also admit that I didn’t immediately see the simple approach to the answer and rushed my post in the time I had available to get it up before the answer went live on FiveThirtyEight.

GENERAL STRATEGY and GOALS:

1-Use a PDF:  The original FiveThirtyEight post asked for the expected time before the siblings simultaneously finished their tasks.  I interpreted this as expected value, and I knew how to compute the expected value of a pdf of a random variable.  All I needed was the potential wait times, t, and their corresponding probabilities.  My approach was solid, but a few of my computations were off.

2-Use Self-Similarity:  I don’t see many people employing the self-similarity tactic I used in my initial solution.  Resolving my initial solution would allow me to continue using what I consider a pretty elegant strategy for handling cumbersome infinite sums.

A CORRECTED SOLUTION:

Stage 1:  My table for the distribution of initial choices was correct, as were my conclusions about the probability and expected time if they chose the same initial app.

[Image App1: Table 1, the 5-by-5 sample space of initial app choices]

My first mistake was in my calculation of the expected time if they did not choose the same initial app.  The 20 numbers in blue above represent that sample space.  Notice that there are 8 times where one sibling chose a 5-minute app, leaving 6 other times where one sibling chose a 4-minute app while the other chose something shorter.  Similarly, there are 4 times where the longer choice was a 3-minute app, and 2 times where it was a 2-minute app.  So the expected length of time spent by the longer app if the same was not chosen for both is

E(Round1) = \frac{1}{20}*(8*5+6*4+4*3+2*2)=4 minutes,

a notably longer time than I initially reported.

For the initial app choice, there is a \frac{1}{5} chance they choose the same app for an average time of 3 minutes, and a \frac{4}{5} chance they choose different apps for an average time of 4 minutes.

Stage 2:  My biggest error was a rushed assumption that all of the entries I gave in the Round 2 table were equally likely.  That is clearly false as you can see from Table 1 above.  There are only two instances of a time difference of 4, while there are eight instances of a time difference of 1.  A correct solution using my approach needs to account for these varied probabilities.  Here is a revised version of Table 2 with these probabilities included.

[Image App4: revised Table 2 with probabilities, extension events highlighted in yellow]

Conveniently–as I had noted without full realization in my last post–the revised Table 2 still shows the distribution for the 2nd and all future potential rounds until the siblings finally align, including the probabilities.  This proved to be a critical feature of the problem.

Another oversight was not fully recognizing which events would contribute to increasing the time before parity.  The yellow highlighted cells in Table 2 are those for which the next app choice was longer than the current time difference, and any of these would increase the length of a trial.

I was initially correct in concluding there was a \frac{1}{5} probability of the second app choice achieving a simultaneous finish and that this would not result in any additional total time.  I missed the fact that the six non-highlighted values also did not result in additional time and that there was a \frac{1}{5} chance of this happening.

That leaves a \frac{3}{5} chance of the trial time extending by selecting one of the highlighted events.  If that happens, the expected time the trial would continue is

\displaystyle \frac{4*4+(4+3)*3+(4+3+2)*2+(4+3+2+1)*1}{4+(4+3)+(4+3+2)+(4+3+2+1)}=\frac{13}{6} minutes.

Iterating:  So now I recognized there were 3 potential outcomes at Stage 2–a \frac{1}{5} chance of matching and ending, a \frac{1}{5} chance of not matching but not adding time, and a \frac{3}{5} chance of not matching and adding an average \frac{13}{6} minutes.  Conveniently, the last two possibilities still combined to recreate perfectly the outcomes and probabilities of the original Stage 2, creating a self-similar, pseudo-fractal situation.  Here’s the revised flowchart for time.

[Image App5: revised flowchart for the expected time]

Invoking the similarity, if there were T minutes remaining after arriving at Stage 2, then there was a \frac{1}{5} chance of adding 0 minutes, a \frac{1}{5} chance of remaining at T minutes, and a \frac{3}{5} chance of adding \frac{13}{6} minutes–that is, being at T+\frac{13}{6} minutes.  Equating all of this allows me to solve for T.

T=\frac{1}{5}*0+\frac{1}{5}*T+\frac{3}{5}*\left( T+\frac{13}{6} \right) \longrightarrow T=6.5 minutes

Time Solution:  As noted above, at the start, there was a \frac{1}{5} chance of immediately matching with an average 3 minutes, and there was a \frac{4}{5} chance of not matching while using an average 4 minutes.  I just showed that from this latter stage, one would expect to need to use an additional mean 6.5 minutes for the siblings to end simultaneously, for a mean total of 10.5 minutes.  That means the overall expected time spent is

Total Expected Time =\frac{1}{5}*3 + \frac{4}{5}*10.5 = 9 minutes.

Number of Rounds Solution:  My initial computation of the number of rounds also deserved a second look–prompted by the comment from “P” in my last post–and I think the explanation can be made clearer.  I’ll try again.

[Image App6: flowchart for the expected number of rounds]

One round is obviously required for the first choice, and in the \frac{4}{5} chance the siblings don’t match, let N be the average number of rounds remaining.  In Stage 2, there’s a \frac{1}{5} chance the trial will end with the next choice, and a \frac{4}{5} chance that choice happens and N rounds still remain afterward.  This second situation is correct because both the no time added and time added possibilities combine to reset Table 2 with a combined probability of \frac{4}{5}.  As before, I invoke self-similarity to find N.

N = \frac{1}{5}*1 + \frac{4}{5}*(1+N) \longrightarrow N=5

Therefore, the expected number of rounds is \frac{1}{5}*1 + \frac{4}{5}*(1+5) = 5 rounds.

It would be cool if someone could confirm this prediction by simulation.
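Here is one way to run that simulation, a quick Python sketch under my reading of the original problem (each sibling repeatedly picks an app lasting 1-5 minutes, uniformly at random, until both finish simultaneously):

```python
import random

def trial():
    a, b = random.randint(1, 5), random.randint(1, 5)  # round 1: both choose
    rounds = 1
    while a != b:                      # whoever finishes first picks again
        rounds += 1
        if a < b:
            a += random.randint(1, 5)
        else:
            b += random.randint(1, 5)
    return a, rounds                   # total time and number of choices

N = 100_000
results = [trial() for _ in range(N)]
print(sum(t for t, _ in results) / N)  # ~9 minutes
print(sum(r for _, r in results) / N)  # ~5 rounds
```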

CONCLUSION:

I corrected my work and found the exact solution proposed by others and simulated by Steve!  Even better, I have shown that my approach works:  while notably less elegant, this expected value problem can be solved by invoking the definition of expected value.

Best of all, I learned from a mistake and didn’t give up on a problem.  Now that’s the real lesson I hope all of my students get.

Happy New Year, everyone!