Tag Archives: problem-solving

FiveThirtyEight Riddler Express Solution

I’d not peeked at FiveThirtyEight’s The Riddler in a while when I saw Neema Salimi’s post about the June 22, 2018 Riddler Express.  Neema argued his solution was accessible to Algebra 1 students–not always possible for FiveThirtyEight’s great logic puzzles–so I knew it was past time to return.

After my exploration, I’ve concluded this is DEFINITELY worth posing to any middle school learners (or others) in search of an interesting problem variation.

Following is my solution, three retrospective insights about the problem, a comparison to Neema’s solution, and a proposed alternative numeric approach I think many Algebra 1 students might actually attempt.

THE PROBLEM:

Here is a screenshot of the original problem posted on FiveThirtyEight (2nd item on this page).

If you’re familiar with rate problems from Algebra 1, this should strike you immediately as a “complexification” of D=R*T type problems.  (“Complexification” is what a former student from a couple years ago said happened to otherwise simple problems when I made them “more interesting.”)

MY SOLUTION:

My first thought simplified my later algebra and made for a much more dramatic ending!!  Since Michelle caught up with her boarding pass with 10 meters left on the walkway, I recognized those extra 10 meters as irrelevant, so I changed the initial problem to an equivalent question–from an initial 100 m to 90 m–having Michelle catch up with her boarding pass just as the belt was about to roll back under the floor!

Let W = the speed of the walkway in m/s.  Because Michelle’s boarding pass then traveled a distance of 90 m at W m/s, her boarding pass traveled for a total of \displaystyle \frac{90}{W} seconds.

If M = Michelle’s walking speed, then her total distance traveled is the initial 90 meters PLUS the distance she traveled in the additional 90 seconds after dropping her boarding pass.  Her speed during this time was (M-W)  m/s (subtracting W because she was moving against the walkway), so the additional distance she traveled was D = (M-W) \cdot 90, making her total distance D_{Michelle} = 90 + 90(M-W).

Then Michelle realized she had dropped her boarding pass and turned to run back at (2M+W) m/s (2M because she ran back at twice her walking speed, plus W because she was now moving with the walkway), and she had \displaystyle \frac{90}{W} - 90 seconds to catch the pass before it disappeared beneath the belt.  The subtraction is the time difference between losing the pass and realizing she had lost it.  Substituting into D = R*T gives

\displaystyle 90 + 90(M-W)=(2M+W)* \left( \frac{90}{W} - 90 \right)

A little expansion and algebra cleanup …

\displaystyle 90 + 90M - 90W = 180 \frac{M}{W} - 180M + 90 - 90W

\displaystyle 90M = 180 \frac{M}{W} - 180M

\displaystyle 270M = 180 \frac{M}{W}

And multiplying by \displaystyle \frac{W}{270M} solves the problem:

\displaystyle W = \frac{2}{3}

INSIGHTS:

Insight #1:  Solving a problem is always nice, but I was thinking all along that I pulled off my solution because I’m very comfortable with algebraic notation.  This is certainly NOT true of most Algebra 1 students.

Insight #2:  This made me wonder about the viability of a numeric solution to the problem–an approach many first-year algebra students attempt when frustrated.

Insight #3:  In the very last solution step, Michelle’s rate, M, completely dropped out of the problem.  That means the solution to this problem is independent of Michelle’s walking speed.

Wondering if other terms might be superfluous, too, I generalized my initial algebraic solution further with A = the initial distance before Michelle dropped her boarding pass and B = the additional time Michelle walked before realizing she had dropped the pass.

\displaystyle A + B(M-W)=(2M+W)* \left( \frac{A}{W} - B \right)

And solving for W gives \displaystyle W = \frac{2A}{3B}.
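
If you’d like a machine check of that generalization, here is a quick sketch in Python’s sympy rather than my usual Nspire; the symbol names match the post:

    import sympy as sp

    # A = initial distance, B = extra walking time, M = walking speed, W = walkway speed
    A, B, M, W = sp.symbols('A B M W', positive=True)

    # distance Michelle must cover = distance she can run in the remaining time
    equation = sp.Eq(A + B*(M - W), (2*M + W) * (A/W - B))

    print(sp.solve(equation, W))   # [2*A/(3*B)] -- M drops out entirely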

So, the solution does depend on the initial distance traveled and the time before Michelle turns around; the initial statement simply disguised this with A=B=90.  That all made sense after a few quick thought experiments.  With one more variation, you can show that the scale factor between her walking and jogging speeds matters, too.  But in all cases, Michelle’s walking speed itself is irrelevant!

COMPARING TO NEEMA:

My initial conclusion matched Neema’s solution, but I really liked my separate discovery that the answer was completely independent of Michelle’s walking speed.  In my opinion, those cool insights are not at all intuitive.

AN ALTERNATIVE NUMERIC APPROACH:

While this approach is just a series of special cases of the generic approach, I suspect many Algebra 1 students would get frustrated quickly by the multiple variables and attempt something more concrete instead.

Ignoring everything above, but using the same variables for simplicity, perhaps the easiest start is to assume the walkway moves at W=1 m/s and Michelle’s walking speed is M=2 m/s.  That means her outward speed against the walkway is (2-1)=1 m/s, so she drops the pass at 90 meters after 90 seconds.  The pass then rides the belt back at 1 m/s, needing 90 seconds to reach the start–exactly the additional time Michelle walks before realizing her loss–so with these numbers the pass disappears at the very moment she turns around, and W=1 must be too fast.

I could imagine many students I’ve taught working from this through some sort of intelligent numeric guess-and-check, adjusting the values of M and W until landing at \displaystyle W=\frac{2}{3}.  The fractional value of W would slow them down, but many would get it this way.
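
Here is a minimal sketch of that guess-and-check in Python, assuming Michelle’s walking speed is fixed at M=2 m/s; the step size 1/300 is an arbitrary choice that happens to land exactly on thirds:

    # Guess-and-check for the walkway speed W with M fixed at 2 m/s
    M = 2.0

    def gap(W):
        # positive if Michelle reaches the pass in time, negative if not
        time_left = 90/W - 90            # seconds until the pass disappears
        her_distance = 90 + 90*(M - W)   # how far she must run back
        return (2*M + W) * time_left - her_distance

    for i in range(1, 300):
        W = i / 300
        if abs(gap(W)) < 1e-9:
            print(W)                     # 0.666..., i.e., W = 2/3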

CONCLUSION:

I’m definitely pitching this question to my students in just a few weeks.  (Where did summer go?)  I’m deeply curious about how they’ll approach their solutions.  I”m convinced many will attempt–at least initially–playing with more comprehensible numbers.  Such approaches often give young learners the insights they need to handle algebra’s generalizations.

Quadratics + Tangent = ???


Here’s a very pretty problem I encountered on Twitter from Mike Lawler 1.5 months ago.

I’m late to the game replying to Mike’s post, but this problem is the most lovely combination of features of quadratic and trigonometric functions I’ve ever encountered in a single question, so I couldn’t resist.  This one is well worth the time for you to explore on your own before reading further.

My full thoughts and explorations follow.  I have landed on some nice insights and what I believe is an elegant solution (in Insight #5 below).  Leading up to that, I share the chronology of my investigations and thought processes.  As always, all feedback is welcome.

WARNING:  HINTS AND SOLUTIONS FOLLOW

Investigation  #1:

My first thoughts were influenced by spoilers posted as quick replies to Mike’s post.  The coefficients of the underlying quadratic, A^2-9A+1=0, say that the solutions to the quadratic sum to 9 and multiply to 1.  The product of 1 turned out to be critical, but I didn’t see just how central it was until I had explored further.  I didn’t immediately recognize the 9 as a red herring.

Basic trig experience (and a response spoiler) suggested the angle values for the tangent embedded in the quadratic weren’t common angles, so I jumped to Desmos first.  I knew the graph of the overall given equation would be ugly, so I initially solved the equation by graphing the quadratic, computing arctangents, and adding.

tan1

Insight #1:  A Curious Sum

The sum of the arctangent solutions was about 1.57…, a decimal form suspiciously suggesting a sum of \pi/2.  I wasn’t yet worried about all solutions in the required [0,2\pi ] interval, but for whatever strange angles were determined by this equation, their sum was strangely pretty and succinct.  If this worked for a seemingly random sum of 9 for the tangent solutions, perhaps it would work for others.
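
Away from Desmos, the suspicion takes only a two-line numeric check; a sketch in Python:

    import math

    # the two tangent values are the roots of A^2 - 9A + 1 = 0
    r1 = (9 + math.sqrt(81 - 4)) / 2
    r2 = (9 - math.sqrt(81 - 4)) / 2

    # sum of the two Quadrant I angle solutions
    print(math.atan(r1) + math.atan(r2))   # 1.5707963..., i.e., pi/2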

Unfortunately, Desmos is not a CAS, so I turned to GeoGebra for more power.

Investigation #2:  

In GeoGebra, I created a sketch to vary the linear coefficient of the quadratic and to dynamically calculate angle sums.  My procedure is noted at the end of this post.  You can play with my GeoGebra sketch here.

The x-coordinate of point G is the sum of the first two angle solutions of the tangent equation.

Likewise, the x-coordinate of point H is the sum of all four angle solutions required by the problem.

tan2

Insight #2:  The Angles are Irrelevant

By dragging the slider for the linear coefficient, the parabola’s intercepts changed, but as predicted in Insight #1, the angle sums (x-coordinates of points G & H) remained invariant under all Real values of points A & B.  The angle sum of points C & D seemed to be \pi/2 (point G), confirming Insight #1, while the angle sum of all four solutions in [0,2\pi] remained 3\pi (point H), answering Mike’s question.

The invariance of the angle sums even while varying the underlying individual angles seemed compelling evidence that this problem was richer than the posed version.

Insight #3:  But the Angles are bounded

The parabola didn’t always have Real solutions.  In fact, Real x-intercepts (and thereby Real angle solutions) happened iff the discriminant was non-negative:  B^2-4AC=b^2-4*1*1 \ge 0.  In other words, the sum of the first two positive angle solutions for (tan(x))^2-b*tan(x)+1=0 is \pi/2 iff \left| b \right| \ge 2, and the sum of the first four solutions is 3\pi under the same condition.  These results extend to the equalities at the endpoints iff the double solutions there are counted twice in the sums.  I am not convinced these facts extend to the complex angles resulting when -2<b<2.

I knew the answer to the now-extended problem, but I didn’t know why.  Even so, these solutions and the problem’s request for a SUM of angles provided the insights needed to understand WHY this worked; it was time to fully consider the product of the tangent solutions.

Insight #4:  Finally a proof

It was now clear that for \left| b \right| \ge 2 there were two Quadrant I angles whose tangents were equal to the x-intercepts of the quadratic.  If x_1 and x_2 are the quadratic zeros, then I needed to find the sum A+B where tan(A)=x_1 and tan(B)=x_2.

From the coefficients of the given quadratic, I knew x_1+x_2=tan(A)+tan(B)=9 and x_1*x_2=tan(A)*tan(B)=1.

Employing the tangent sum identity gave

\displaystyle tan(A+B) = \frac{tan(A)+tan(B)}{1-tan(A)tan(B)} = \frac{9}{1-1}

and this fraction is undefined, independent of the value of x_1+x_2=tan(A)+tan(B) as suggested by Insight #2.  Because tan(A+B) is first undefined at \pi/2, the first solutions are \displaystyle A+B=\frac{\pi}{2}.

Insight #5:  Cofunctions reveal essence

The tangent identity was a cute touch, but I wanted something deeper, not just an interpretation of an algebraic result.  (I know this is uncharacteristic for my typically algebraic tendencies.)  The final key was in the implications of tan(A)*tan(B)=1.

This product meant the tangent solutions were reciprocals, and the reciprocal of tangent is cotangent, giving

\displaystyle tan(A) = \frac{1}{tan(B)} = cot(B).

But cotangent is also the co-function–or complement function–of tangent, which gave me

tan(A) = cot(B) = tan \left( \frac{\pi}{2} - B \right).

Because tangent is monotonic over every cycle, the equivalence of the tangents implied the equivalence of their angles, so A = \frac{\pi}{2} - B, or A+B = \frac{\pi}{2}.  Using the Insights above, this means the sum of the solutions to the generalization of Mike’s given equation,

(tan(x))^2-b*tan(x)+1=0 for x in [0,2\pi ] and any \left| b \right| \ge 2,

is always 3\pi, with the fundamental reason rooted in the definitions of the trigonometric functions and their co-functions.  QED

Insight #6:  Generalizing the Domain

The posed problem can be generalized further by recognizing the period of tangent: \pi.  That means the distance between successive corresponding solutions of the embedded tangent equations is always \pi, as shown in the GeoGebra construction above.

Insights #4 & #5 proved the sum of the angles at points C & D was \pi/2.  Employing the periodicity of tangent, the x-coordinates of points E and F are those of C and D plus \pi, respectively, so the sum of the angles at points E & F is \frac{\pi}{2} + 2 \pi.

Extending the problem domain to [0,3\pi ] would add \frac{\pi}{2} + 4\pi more to the solution, and a domain of [0,4\pi ] would add an additional \frac{\pi}{2} + 6\pi.  Pushing the domain to [0,k\pi ] would give total sum

\displaystyle \left( \frac{\pi}{2} \right) + \left( \frac{\pi}{2} +2\pi \right) + \left( \frac{\pi}{2} +4\pi \right) + \left( \frac{\pi}{2} +6\pi \right) + ... + \left( \frac{\pi}{2} +2(k-1)\pi \right)

Combining terms gives a general formula for the sum of solutions for a problem domain of [0,k\pi ]

\displaystyle k * \frac{\pi}{2} + \left( 2+4+6+...+2(k-1) \right) * \pi =

\displaystyle = k * \frac{\pi}{2} + (k)(k-1) \pi =

\displaystyle = \frac{\pi}{2} * k * (2k-1)

For the first solutions in Quadrant I, [0,\pi] means k=1, and the sum is \displaystyle \frac{\pi}{2}*1*(2*1-1) = \frac{\pi}{2}.

For the solutions in the problem Mike originally posed, [0,2\pi] means k=2, and the sum is \displaystyle \frac{\pi}{2}*2*(2*2-1) = 3\pi.
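
For the skeptical, the whole formula is easy to confirm numerically, since every solution in [0,k\pi ] is a base Quadrant I angle shifted by a multiple of \pi.  A sketch in Python, using the original b=9:

    import math

    b = 9.0
    r1 = (b + math.sqrt(b*b - 4)) / 2      # the two tangent values; note r1*r2 = 1
    r2 = (b - math.sqrt(b*b - 4)) / 2
    base = [math.atan(r1), math.atan(r2)]  # Quadrant I angle solutions

    for k in range(1, 5):
        # every solution in [0, k*pi] is a base angle plus a multiple of pi
        total = sum(a + n*math.pi for a in base for n in range(k))
        print(k, total, math.pi/2 * k * (2*k - 1))   # columns 2 and 3 agree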

I think that’s enough for one problem.

APPENDIX

My GeoGebra procedure for Investigation #2:

  • Graph the quadratic with a slider for the linear coefficient, y=x^2-b*x+1.
  • Label the x-intercepts A & B.
  • The x-values of A & B are the outputs for tangent, so I reflected these over y=x to the y-axis to construct A’ and B’.
  • Graph y=tan(x) and construct perpendiculars at A’ and B’ to determine the points of intersection with tangent–points C, D, E, and F in the image below.
  • The x-coordinates of C, D, E, and F are the angles required by the problem.
  • Since these can be treated as points or vectors in GeoGebra, I created point G by G=C+D.  The x-coordinate of G is the angle sum of C & D.
  • Likewise, the x-coordinate of point H=C+D+E+F is the required angle sum.


Roots of Complex Numbers without DeMoivre

Finding roots of complex numbers can be … complex.

This post describes a way to compute roots of any number–real or complex–via systems of equations without any conversions to polar form or use of DeMoivre’s Theorem.  Following a “traditional approach,” one non-technology example is followed by a CAS simplification of the process.

TRADITIONAL APPROACH:

Most sources describe the following procedure to compute the roots of complex numbers (obviously including the real number subset).

  • Write the complex number whose root is sought in generic polar form.  If necessary, convert from Cartesian form.
  • Invoke DeMoivre’s Theorem to get the polar form of all of the roots.
  • If necessary, convert the numbers from polar form back to Cartesian.

As a very quick example,

Compute all square roots of -16.

Rephrased, this asks for all complex numbers, z, that satisfy  z^2=-16.  The Fundamental Theorem of Algebra guarantees two solutions to this quadratic equation.

The complex Cartesian number, -16+0i, converts to polar form, 16cis( \pi ), where cis(\theta ) = cos( \theta ) +i*sin( \theta ).  Unlike Cartesian form, polar representations of numbers are not unique, so any full rotation from the initial representation would be coincident, and therefore equivalent if converted to Cartesian.  For any integer n, this means

-16 = 16cis( \pi ) = 16 cis \left( \pi + 2 \pi n \right)

Invoking DeMoivre’s Theorem,

\sqrt{-16} = (-16)^{1/2} = \left( 16 cis \left( \pi + 2 \pi n \right) \right) ^{1/2}
= 16^{1/2} * cis \left( \frac{1}{2} \left( \pi + 2 \pi n \right) \right)
= 4 * cis \left( \frac{ \pi }{2} + \pi * n \right)

For n= \{ 0, 1 \} , this gives polar solutions, 4cis \left( \frac{ \pi }{2} \right) and 4cis \left( \frac{ 3 \pi }{2} \right) .  Each can be converted back to Cartesian form, giving the two square roots of -16:   4i and -4i .  Squaring either gives -16, confirming the result.

I’ve always found the rotational symmetry of the complex roots of any number beautiful, particularly for higher order roots.  This symmetry is perfectly captured by DeMoivre’s Theorem, but there is arguably a simpler way to compute them.

NEW(?) NON-TECH APPROACH:

Because the solution to every complex number computation can be written in a+bi form, new possibilities open.  The original example can be rephrased:

Determine the simultaneous real values of x and y for which -16=(x+yi)^2.

Start by expanding and simplifying the right side back into a+bi form.  (I wrote about a potentially easier approach to simplifying powers of i in my last post.)

-16+0i = \left( x+yi \right)^2 = x^2 +2xyi+y^2 i^2=(x^2-y^2)+(2xy)i

Notice that the two ends of the previous line are two different expressions for the same complex number(s).  Therefore, equating the real and imaginary coefficients gives a system of equations:

demoivre5

Solving the system gives the square roots of -16.

From the latter equation, either x=0 or y=0.  Substituting y=0 into the first equation gives -16=x^2, an impossible equation because x & y are both real numbers, as stated above.

Substituting x=0 into the first equation gives -16=-y^2, leading to y= \pm 4.  So, x=0 and y=-4 -OR- x=0 and y=4 are the only solutions–x+yi=0-4i and x+yi=0+4i–the same solutions found earlier, but this time without using polar form or DeMoivre!  Notice, too, that the presence of TWO solutions emerged naturally.
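
If you would rather hand even this small system to a machine, a two-line sketch in Python’s sympy (instead of the Nspire) dispatches it:

    import sympy as sp

    x, y = sp.symbols('x y', real=True)

    # real and imaginary parts of -16 + 0i = (x + yi)^2
    print(sp.solve([sp.Eq(x**2 - y**2, -16), sp.Eq(2*x*y, 0)], [x, y]))
    # two solutions: (0, -4) and (0, 4)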

Higher order roots could lead to much more complicated systems of equations, but a CAS can solve that problem.

CAS APPROACH:

Determine all fourth roots of 1+2i.

That’s equivalent to finding all simultaneous x and y values that satisfy 1+2i=(x+yi)^4.  Expanding the right side is quickly accomplished on a CAS.  From my TI-Nspire CAS:

demoivre1

Notice that the output is simplified to a+bi form that, in the context of this particular example, gives the system of equations,

demoivre6

Using my CAS to solve the system,

demoivre2

First, note there are four solutions, as expected.  Rewriting the approximated numerical output gives the four complex fourth roots of 1+2i:  -1.176-0.334i, -0.334+1.176i, 0.334-1.176i, and 1.176+0.334i.  Each can be quickly confirmed on the CAS:

demoivre3
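
The same confirmation works in any language with native complex arithmetic; a quick Python sketch using the rounded values above:

    # approximate fourth roots of 1+2i from the CAS output
    roots = [-1.176 - 0.334j, -0.334 + 1.176j, 0.334 - 1.176j, 1.176 + 0.334j]

    for z in roots:
        print(z, z**4)   # each fourth power lands close to (1+2j)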

CONCLUSION:

Given proper technology, finding the multiple roots of a complex number need not invoke polar representations or DeMoivre’s Theorem.  It really is as “simple” as expanding (x+yi)^n where n is the given root, simplifying the expansion into a+bi form, and solving the resulting 2×2 system of equations.

At the point when such problems would be introduced to students, their algebraic awareness should be such that using a CAS to do all the algebraic heavy lifting is entirely appropriate.

As one final glimpse at the beauty of complex roots, I entered the two equations from the last system into Desmos to take advantage of its very good implicit graphing capabilities.  You can see the four intersections corresponding to the four solutions of the system.  Solutions to systems of implicit equations are notoriously difficult to compute, so I wasn’t surprised when Desmos didn’t compute the coordinates of the points of intersection, even though the graph was pretty and surprisingly quick to generate.

demoivre4

Stats Exploration Yields Deeper Understanding

or “A lesson I wouldn’t have learned without technology”

Last November, some of my AP Statistics students were solving a problem involving a normal distribution with an unknown mean.  Leveraging the TI Nspire CAS calculators we use for all computations, they crafted a logical command that should have worked.  Their unexpected result initially left us scratching our heads.  After some conversations with the great folks at TI, we realized that what at first seemed like a perfectly reasonable query with a single answer in fact had two solutions.  And it took until the end of this week for another student to finally identify and resolve the mysterious results.  This ‘blog post recounts our journey from a questionable normal probability result to a rich approach to confidence intervals.

THE INITIAL PROBLEM

I had assigned an AP Statistics free response question about a manufacturing process that could be manipulated to control the mean distance its golf balls would travel.  We were told the process created balls whose distances were normally distributed with a mean of 288 yards and a standard deviation of 2.8 yards.  The first part asked students to find the probability of balls traveling more than an allowable 291.2 yards.  This was straightforward:  find the area under a normal curve with a mean of 288 and a standard deviation of 2.8 from 291.2 to infinity.  The Nspire (CAS and non-CAS) syntax for this is:

golf1

[Post publishing note: See Dennis’ comment below for a small correction for the non-CAS Nspires.  I forgot that those machines don’t accept “infinity” as a bound.]

As 12.7% of the golf balls traveling too far is obviously an unacceptably high percentage, the next part asked for the mean distance needed so that 99% of the balls traveled allowable distances.  That’s when things got interesting.

A “LOGICAL” RESPONSE RESULTS IN A MYSTERY

Their initial thought was that even though they didn’t know the mean, they now knew the output of their normCdf command.  Since the balls couldn’t travel a negative distance and zero was many standard deviations from the unknown mean, the following equation, with x representing the unknown mean, should define the scenario nicely.

golf2

Because this was an equation with a single unknown, we could now use our CAS calculators to solve for the missing parameter.

golf3

Something was wrong.  How could the mean distance possibly be just 6.5 yards?  The Nspires are great, reliable machines.  What happened?

I had encountered something like this before with unexpected answers when a solve command was applied to a Normal cdf with dual finite bounds.  While it didn’t seem logical to me why this should make a difference, I asked them to try an infinite lower bound and also to try computing the area on the other side of 291.2.  Both of these provided the expected solution.

golf4

The caution symbol on the last line should have been a warning, but I honestly didn’t see it at the time.  I was happy to see the expected solution, but quite frustrated that infinite bounds seemed to be required.  Beyond three standard deviations from the mean of any normal distribution, almost no area exists, so how could extending the lower bound from 0 to negative infinity make any difference in the solution when 0 was already \frac{291.2}{2.8}=104 standard deviations away from 291.2?  I couldn’t make sense of it.

My initial assumption was that something was wrong with the programming in the Nspire, so I emailed some colleagues I knew within CAS development at TI.

GRAPHS REVEAL A HIDDEN SOLUTION

They reminded me that statistical computations in the Nspire CAS were resolved through numeric algorithms–an understandable approach given the algebraic definition of the normal and other probability distribution functions.  The downside to this is that numeric solvers may not pick up on (or are incapable of finding) difficult to locate or multiple solutions.  Their suggestion was to employ a graph whenever we got stuck.  This, too, made sense because graphing a function forced the machine to evaluate multiple values of the unknown variable over a predefined domain.

It was also a good reminder for my students that a solution to any algebraic equation can be thought of as the first substitution solution step for a system of equations.  Going back to the initially troublesome input, I rewrote normCdf(0,291.2,x,2.8)=0.99 as the system

y=normCdf(0,291.2,x,2.8)
y=0.99

and “the point” of intersection of that system would be the solution we sought.  Notice my emphasis indicating my still lingering assumptions about the problem.  Graphing both equations shone a clear light on what was my persistent misunderstanding.

golf5

I was stunned to see two intersection solutions on the screen.  Asking the Nspire for the points of intersection revealed BOTH ANSWERS my students and I had found earlier.

golf6

If both solutions were correct, then there really were two different normal pdfs that could solve the finite bounded problem.  Graphing these two pdfs finally explained what was happening.

By equating the normCdf result to 0.99 with FINITE bounds, I never specified on which end the additional 0.01 existed–left or right.  This graph showed the 0.01 could have been at either end, one with a mean near the expected 284 yards and the other with a mean near the unexpected 6.5 yards.  The graph below shows both normal curves, with the 6.5 solution having the additional 0.01 on the left and the 284 solution having it on the right.

golf7

The CAS wasn’t wrong in the beginning.  I was.  And as has happened several times before, the machine didn’t rely on the same sometimes errant assumptions I did.  My students had made a very reasonable assumption that the area under the normal pdf for the golf balls should start at 0 (no negative distances) and inadvertently stumbled into a much richer problem.
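
For anyone without an Nspire who wants to replicate the two solutions, here is a sketch in Python using scipy; the bracketing intervals for the numeric solver come straight from the graph above:

    from scipy.stats import norm
    from scipy.optimize import brentq

    # f(mu) = P(0 < X < 291.2) - 0.99 for X ~ Normal(mu, 2.8)
    def f(mu):
        return norm.cdf(291.2, mu, 2.8) - norm.cdf(0, mu, 2.8) - 0.99

    # one root near 6.5 (extra 0.01 on the left), one near 284.7 (on the right)
    print(brentq(f, 1, 100), brentq(f, 200, 290))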

A TEMPORARY FIX

The reason the infinity-bounded solutions didn’t give the unexpected second solution is that it is impossible for the unspecified extra 0.01 area to lie beyond an infinite lower or upper bound.

To avoid unexpected multiple solutions, I resolved to tell my students to use infinite bounds whenever solving for an unknown parameter.  It was a little dissatisfying to not be able to use my students’ “intuitive” lower bound of 0 for this problem, but at least they wouldn’t have to deal with unexpected, counterintuitive results.

Surprisingly, the permanent solution arrived weeks later when another student shared his fix for a similar problem when computing confidence interval bounds.

A PERMANENT FIX FROM AN UNEXPECTED SOURCE

I really don’t like the way almost all statistics textbooks provide complicated formulas for computing confidence intervals using standardized z- and t-distribution critical scores.  Ultimately, a 95% confidence interval is nothing more than the bounds of the middle 95% of a probability distribution whose mean and standard deviation are defined by a sample from the overall population.  Where the problem above solved for an unknown mean, computing a confidence interval on a CAS follows essentially the same reasoning to determine missing endpoints.

My theme in every math class I teach is to memorize as little as you can, and use what you know as widely as possible.  Applying this to AP Statistics, I never reveal the existence of confidence interval commands on calculators until we’re 1-2 weeks past their initial introduction.  This allows me to develop a solid understanding of confidence intervals using a variation on calculator commands they already know.

For example, assume you need a 95% confidence interval of the percentage of votes Bernie Sanders is likely to receive in Monday’s Iowa Caucus.  The CNN-ORC poll released January 21 showed Sanders leading Clinton 51% to 43% among 280 likely Democratic caucus-goers.  (Read the article for a glimpse at the much more complicated reality behind this statistic.)  In this sample, the proportion supporting Sanders is approximately normally distributed with a sample proportion p=0.51 and a standard deviation of \sqrt{(0.51)(0.49)/280}=0.0299.  The 95% confidence interval is defined by the bounds containing the middle 95% of this normal distribution.

Using the earlier lesson, one student suggested finding the bounds on his CAS by focusing on the tails.

golf8

giving a confidence interval of (0.45, 0.57) for Sanders for Monday’s caucus, according to the method of the CNN-ORC poll from mid-January.  Using a CAS keeps my students focused on what a confidence interval actually means without burying them in the underlying computations.
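
The identical reasoning is available outside TI-land, too; a sketch in Python, where norm.ppf inverts the cdf just like the student’s solve command:

    from math import sqrt
    from scipy.stats import norm

    p, n = 0.51, 280
    se = sqrt(p * (1 - p) / n)   # approximately 0.0299

    # bounds enclosing the middle 95% of the sampling distribution
    print(norm.ppf(0.025, p, se), norm.ppf(0.975, p, se))   # about (0.45, 0.57)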

That’s nice, but what if you needed a confidence interval for a sample mean?  Unfortunately, the t-distribution on the Nspire is completely standardized, so confidence intervals need to be built from critical t-values.  Like on a normal distribution, a 95% confidence interval is defined by the bounds containing the middle 95% of the data.  One student reasonably suggested the following for a 95% confidence interval with 23 degrees of freedom.  I really liked the explicit syntax definition of the confidence interval.

golf9

Alas, the CAS returned the input.  It couldn’t find the answer in that form.  Cognizant of the lessons learned above, I suggested reframing the query with an infinite bound.

golf10

That gave the proper endpoint, but I was again dissatisfied with the need to alter the input, even though I knew why.

That’s when another of my students spoke up to say that he got the solution to work with the initial commands by including a domain restriction.

golf11

Of course!  When more than one solution is possible, restrict the bounds to the solution range you want.  Then you can use the commands that make sense.
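
Both routes–the one-sided rephrasing and the student’s domain restriction–translate directly to other tools.  A Python sketch with the same 23 degrees of freedom:

    from scipy.stats import t
    from scipy.optimize import brentq

    df = 23

    # direct inverse-cdf approach for the middle 95%
    print(t.ppf(0.975, df))   # approximately 2.0687

    # the student's approach: solve P(-k < T < k) = 0.95, restricted to k > 0
    print(brentq(lambda k: t.cdf(k, df) - t.cdf(-k, df) - 0.95, 0, 10))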

FIXING THE INITIAL APPROACH

That small fix finally gave me the solution to the earlier syntax issue with the golf ball problem.  There were two solutions to the initial problem, so if I bounded the output, they could use their intuitive approach and get the answer they needed.

If a mean of 288 yards and a standard deviation of 2.8 yards resulted in 12.7% of the area above 291.2, then it wouldn’t take much of a left shift in the mean to leave just 1% of the area above 291.2. Surely that unknown mean would be no lower than 3 standard deviations below the current 288, somewhere above 280 yards.  Adding that single restriction to my students’ original syntax solved their problem.

golf13

Perfection!

CONCLUSION

By encouraging a deep understanding of both the underlying statistical content AND of their CAS tool, students are increasingly able to find creative solutions using flexible methods and expressions intuitive to them.  And shouldn’t intellectual strength, creativity, and flexibility be the goals of every learning experience?


Unanticipated Proof Before Algebra

I was talking with one of our 5th graders, S,  last week about the difference between showing a few examples of numerical computations and developing a way to know something was true no matter what numbers were chosen.  I hadn’t started our conversation thinking about introducing proof.  Once we turned in that direction, I anticipated scaffolding him in a completely different direction, but S went his own way and reinforced for me the importance of listening and giving students the encouragement and room to build their own reasoning.

SETUP:  S had been telling me that he “knew” the product of an even number with any other number would always be even, while the product of any two odds was always odd.  He demonstrated this by showing lots of particular products, but I asked him if he was sure that it was still true if I were to pick some numbers he hadn’t used yet.  He was.

Then I asked him how many numbers were possible to use.  He promptly replied “infinite,” at which point he finally started to see the difficulty with demonstrating that every product worked.  “We don’t have enough time” to do all that, he said.  Finally, I had maneuvered him to perhaps his first-ever realization of the need for proof.

ANTICIPATION:  But S knew nothing of formal algebra.  From my experiences with younger students sans algebra, I thought I would eventually need to help him translate his numerical problem into a geometric one.  But this story is about S’s reasoning, not mine.

INSIGHT:  I asked S how he would handle any numbers I asked him to multiply to prove his claims, even if I gave him some ridiculously large ones.  “It’s really not as hard as that,” S told me.  He quickly scribbled

s1

on his paper and covered up all but the one’s digit.  “You see,” he said, “all that matters is the units.  You can make the number as big as you want and I just need to look at the last digit.”  Without using this language, S was venturing into an even-odd proof via modular arithmetic.

With some more thought, he reasoned that he would focus on just the units digit through repeated multiples and see what happened.

FIFTH GRADE PROOF:  S’s math class is currently working through a multiplication unit in our 5th grade Bridges curriculum, so he was already in the mindset of multiples.  Since he said only the units digit mattered, he decided he could start with any even number and look at all of its multiples.  That is, he could keep adding the number to itself and see what happened.  As shown below, he first chose 32 and found the next four multiples, 64, 96, 128, and 160.  After that, S said the very next number in the list would end in a 2 and the loop would start all over again.

s2

He stopped talking for several seconds, and then he smiled.  “I don’t have to look at every multiple of 32.  Any multiple will end up somewhere in my cycle and I’ve already shown that every number in this cycle is even.  Every multiple of 32 must be even!”  It was a pretty powerful moment.  Since he only needed to see the last digit, and any number ending in 2 would just add 2s to the units, this cycle now represented every number ending in 2 in the universe.  The last line above was S’s use of 1002 to show that the same cycling happened for another “2 number.”

DIFFERENT KINDS OF CYCLES:  So could he use this for all multiples of even numbers?  His next try was an “8 number.”

s3

After five multiples of 18, he achieved the same cycling.  Even cooler, he noticed that the cycle for “8 numbers” was the “2 number” cycle backwards.

Also note that after S completed his 2s and 8s lists, he used only single-digit seed numbers, as the bigger starting numbers only complicated his examples.  He was on a roll now.

s4

I asked him how the “4 number” cycle was related.  He noticed that the 4s used every other number in the “2 number” cycle.  It was like skip counting, he said.  Another lightbulb went off.

“And that’s because 4 is twice 2, so I just take every 2nd multiple in the first cycle!”  He quickly scratched out a “6 number” example.

s5

This, too, cycled, but more importantly, because 6 is thrice 2, he said that was why this list used every 3rd number in the “2 number” cycle.  In that way, every even number multiple list was the same as the “2 number” list, you just skip-counted by different steps on your way through the list.

When I asked how he could get all the numbers in such a short list when he was counting by 3s, S said it wasn’t a problem at all.  Since it cycled, whenever you got to the end of a list, just go back to the beginning and keep counting.  We didn’t touch it last week, but he had opened the door to modular arithmetic.
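
S’s cycles are easy to reproduce for any seed number; here is a small Python sketch of his units-digit idea (my code, obviously, not his):

    def units_cycle(seed):
        # follow the units digit of successive multiples until it repeats
        digits, d = [], seed % 10
        while d not in digits:
            digits.append(d)
            d = (d + seed) % 10
        return digits

    print(units_cycle(2))     # [2, 4, 6, 8, 0]
    print(units_cycle(8))     # [8, 6, 4, 2, 0] -- the 2-cycle backwards
    print(units_cycle(1002))  # [2, 4, 6, 8, 0] -- same cycle as 2, as S argued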

I won’t show them here, but his “0 number” list always ended in 0s.  “This one isn’t very interesting,” he said.  I smiled.

ODDS:  It took a little more thought to start his odd number proof, because every other multiple was even.  After he recognized these as even numbers, S decided to list every other multiple as shown with his “1 number” and “3 number” lists.

s7

As with the evens, the odd number lists could all be seen as skip-counted versions of each other.  Also, the 1s and 9s were written backwards from each other, and so were the 3s and 7s.  “5 number” lists were declared to be as boring as “0 numbers”.  Not only did the odds ultimately end up cycling essentially the same as the evens, but they had the same sort of underlying relationships.

CONCLUSION:  At this point, S declared that since he had shown every possible case for evens and odds, then he had shown that any multiple of an even number was always even, and any odd multiple of an odd number was odd.  And he knew this because no matter how far down the list he went, eventually any multiple had to end up someplace in his cycles.  At that point I reminded S of his earlier claim that there was an infinite number of even and odd numbers.  When he realized that he had just shown a case-by-case reason for more numbers than he could ever demonstrate by hand, he sat back in his chair, exclaiming, “Whoa!  That’s cool!”

It’s not a formal mathematical proof, and when S learns some algebra, he’ll be able to accomplish his cases far more efficiently, but this was an unexpectedly nice and perfectly legitimate numerical proof of even and odd multiples for an elementary student.


PowerBall Redux

Donate to a charity instead.  Let me explain.
The majority of responses to my PowerBall description/warnings yesterday have been, “If you don’t play, you can’t win.”  Unfortunately, I know many, many people are buying many lottery tickets, way more than they should.
 
OK.  For almost everyone, there’s little harm in spending $2 on a ticket for the entertainment, but don’t expect to win, and don’t buy multiple tickets unless you can afford to do without every dollar you spend. I worry about those who are “investing” tens or hundreds of dollars on any lottery.
Two of my school colleagues captured the idea of a lottery yesterday with their analogies,
Steve:  Suppose you go to the beach and grab a handful of sand and bring it back to your house.  And you do that every single day. Then your odds of winning the powerball are still slightly worse than picking out one particular grain of sand from all the sand you accumulated over an entire year.
Or more simply put from the perspective of a lottery official, 
Patrick:  Here’s our idea.  You guys all throw your money in a big pile.  Then, after we take some of it, we’ll give the pile to just one of you.
WHY YOU SHOULDN’T BUY MULTIPLE TICKETS:
For perspective, a football field is 120 yards long, or 703.6 US dollars long using the logic of my last post.  Rounding up, that would buy you 352 PowerBall tickets.  That means investing $704 would buy you a single football-field length of chances within 10.5 coast-to-coast traverses of the entire United States.  There’s going to be an incredibly large number of disappointed people tomorrow.
MORAL:  Even an incredibly large multiple of a less-than-microscopic chance is still a less-than-microscopic chance.
BETTER IDEA: Assume you have the resources and are willing to part with tens or hundreds of dollars for no likelihood of tangible personal gain.  Using the $704 football example, buy 2 tickets and donate the other $700 to charity. You’ll do much more good.

PowerBall Math

Given the record size and mania surrounding the current PowerBall Lottery, I thought some of you might be interested in bringing that game into perspective.  This could be an interesting application with some teachers and students.

It certainly is entertaining for many to dream about what you would do if you happened to be lucky enough to win an astronomical lottery.  And lottery vendors are quick to note that your dreams can’t come true if you don’t play.  Nice advertising.  I’ll let the numbers speak to the veracity of the Lottery’s encouragement.

PowerBall is played by picking any 5 different numbers between 1 & 69, and then one PowerBall number between 1 & 26.  So there are nCr(69,5)*26=292,201,338 outcomes for this game.  Unfortunately, humans have a particularly difficult time understanding extremely large numbers, so I offer an analogy to bring it a little into perspective.

  • The horizontal width of the United States is generally reported to be 2680 miles, and a U.S. dollar bill is 6.14 inches wide.  That means the U.S. is approximately 27,655,505 dollar bills wide.
  • If I have 292,201,338 dollar bills (one for every possible PowerBall outcome), I could make a line of dollar bills placed end-to-end from the U.S. East Coast all the way to the West Coast, back to the East, back to the West, and so forth, passing back and forth between the two coasts just over 10.5 times.
  • Now imagine that exactly one of those dollar bills was replaced with a replica dollar bill made from gold colored paper.
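
The count and the analogy arithmetic are quick to verify (a Python sketch):

    import math

    outcomes = math.comb(69, 5) * 26
    print(outcomes)   # 292,201,338

    # width of the U.S. measured in 6.14-inch dollar bills (2680 miles)
    bills_per_crossing = 2680 * 5280 * 12 / 6.14
    print(outcomes / bills_per_crossing)   # about 10.57 coast-to-coast trips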


Your chances of winning the PowerBall lottery are the same as randomly selecting that single gold note from all of those dollar bills laid end-to-end and crossing the entire breadth of the United States 10.5 times. 

Dreaming is fun, but how likely is this particular dream to become real?

Play the lottery if doing so is entertaining to you, but like going to the movie theater, don’t expect to get any money back in return.

Mistakes are Good

Confession #1:  My answers on my last post were WRONG.

I briefly thought about taking that post down, but discarded that idea when I thought about the reality that almost all published mathematics is polished, cleaned, and optimized.  Many students struggle with mathematics under the misconception that their first attempts at any topic should be as polished as what they read in published sources.

While not precisely from the same perspective, Dan Teague recently wrote an excellent, short piece of advice to new teachers on NCTM’s ‘blog entitled Demonstrating Competence by Making Mistakes.  I argue Dan’s advice actually applies to all teachers, so in the spirit of showing how to stick with a problem and not just walking away saying “I was wrong”, I’m going to keep my original post up, add an advisory note at the start about the error, and show below how I corrected my error.

Confession #2:  My approach was a much longer and far less elegant solution than the identical approaches offered by a comment by “P” on my last post and the solution offered on FiveThirtyEight.  Rather than just accepting the alternative solution, as too many students are wont to do, I acknowledged the more efficient approach of others before proceeding to find a way to get the answer through my initial idea.

I’ll also admit that I didn’t immediately see the simple approach to the answer and rushed my post in the time I had available to get it up before the answer went live on FiveThirtyEight.

GENERAL STRATEGY and GOALS:

1-Use a PDF:  The original FiveThirtyEight post asked for the expected time before the siblings simultaneously finished their tasks.  I interpreted this as expected value, and I knew how to compute the expected value of a pdf of a random variable.  All I needed was the potential wait times, t, and their corresponding probabilities.  My approach was solid, but a few of my computations were off.

2-Use Self-Similarity:  I don’t see many people employing the self-similarity tactic I used in my initial solution.  Resolving my initial solution would allow me to continue using what I consider a pretty elegant strategy for handling cumbersome infinite sums.

A CORRECTED SOLUTION:

Stage 1:  My table for the distribution of initial choices was correct, as were my conclusions about the probability and expected time if they chose the same initial app.

App1

My first mistake was in my calculation of the expected time if they did not choose the same initial app.  The 20 numbers in blue above represent that sample space.  Notice that there are 8 times where one sibling chose a 5-minute app, leaving 6 other times where one sibling chose a 4-minute app while the other chose something shorter.  Similarly, there are 4 choices of an at most 3-minute app, and 2 choices of an at most 2-minute app.  So the expected length of time spent by the longer app if the same was not chosen for both is

E(Round1) = \frac{1}{20}*(8*5+6*4+4*3+2*2)=4 minutes,

a notably longer time than I initially reported.

For the initial app choice, there is a \frac{1}{5} chance they choose the same app for an average time of 3 minutes, and a \frac{4}{5} chance they choose different apps for an average time of 4 minutes.

Stage 2:  My biggest error was a rushed assumption that all of the entries I gave in the Round 2 table were equally likely.  That is clearly false as you can see from Table 1 above.  There are only two instances of a time difference of 4, while there are eight instances of a time difference of 1.  A correct solution using my approach needs to account for these varied probabilities.  Here is a revised version of Table 2 with these probabilities included.

App4

Conveniently–as I had noted without full realization in my last post–the revised Table 2 still shows the distribution for the 2nd and all future potential rounds until the siblings finally align, including the probabilities.  This proved to be a critical feature of the problem.

Another oversight was not fully recognizing which events would contribute to increasing the time before parity.  The yellow highlighted cells in Table 2 are those for which the next app choice was longer than the current time difference, and any of these would increase the length of a trial.

I was initially correct in concluding there was a \frac{1}{5} probability of the second app choice achieving a simultaneous finish and that this would not result in any additional total time.  I missed the fact that the six non-highlighted values also did not result in additional time and that there was a \frac{1}{5} chance of this happening.

That leaves a \frac{3}{5} chance of the trial time extending by selecting one of the highlighted events.  If that happens, the expected time the trial would continue is

\displaystyle \frac{4*4+(4+3)*3+(4+3+2)*2+(4+3+2+1)*1}{4+(4+3)+(4+3+2)+(4+3+2+1)}=\frac{13}{6} minutes.

Iterating:  So now I recognized there were 3 potential outcomes at Stage 2–a \frac{1}{5} chance of matching and ending, a \frac{1}{5} chance of not matching but not adding time, and a \frac{3}{5} chance of not matching and adding an average \frac{13}{6} minutes.  Conveniently, the last two possibilities still combined to recreate perfectly the outcomes and probabilities of the original Stage 2, creating a self-similar, pseudo-fractal situation.  Here’s the revised flowchart for time.

App5

Invoking the similarity, if there were T minutes remaining after arriving at Stage 2, then there was a \frac{1}{5} chance of adding 0 minutes, a \frac{1}{5} chance of remaining at T minutes, and a \frac{3}{5} chance of adding \frac{13}{6} minutes–that is being at T+\frac{13}{6} minutes.  Equating all of this allows me to solve for T.

T=\frac{1}{5}*0+\frac{1}{5}*T+\frac{3}{5}*\left( T+\frac{13}{6} \right) \longrightarrow T=6.5 minutes

Time Solution:  As noted above, at the start, there was a \frac{1}{5} chance of immediately matching with an average 3 minutes, and there was a \frac{4}{5} chance of not matching while using an average 4 minutes.  I just showed that from this latter stage, one would expect to need to use an additional mean 6.5 minutes for the siblings to end simultaneously, for a mean total of 10.5 minutes.  That means the overall expected time spent is

Total Expected Time =\frac{1}{5}*3 + \frac{4}{5}*10.5 = 9 minutes.

Number of Rounds Solution:  My initial computation of the number of rounds was actually correct–despite the comment from “P” in my last post–but I think the explanation could have been clearer.  I’ll try again.

App6

One round is obviously required for the first choice, and in the \frac{4}{5} chance the siblings don’t match, let N be the average number of rounds remaining.  In Stage 2, there’s a \frac{1}{5} chance the trial will end with the next choice, and a \frac{4}{5} chance there will still be N rounds remaining.  This second situation is correct because both the no time added and time added possibilities combine to reset Table 2 with a combined probability of \frac{4}{5}.  As before, I invoke self-similarity to find N.

N = \frac{1}{5}*1 + \frac{4}{5}*(1+N) \longrightarrow N=5

Therefore, the expected number of rounds is \frac{1}{5}*1 + \frac{4}{5}*5 = 4.2 rounds.

It would be cool if someone could confirm this prediction by simulation.
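
Here is one way to run that simulation, sketched in Python; “rounds” counts the initial simultaneous choice as round 1 and each later single choice as one more round:

    import random

    def one_trial():
        a = random.randint(1, 5)    # sibling 1's cumulative finish time
        b = random.randint(1, 5)    # sibling 2's cumulative finish time
        rounds = 1                  # both initial choices count as round 1
        while a != b:
            if a < b:               # the earlier finisher starts a new task
                a += random.randint(1, 5)
            else:
                b += random.randint(1, 5)
            rounds += 1
        return a, rounds            # total elapsed minutes, rounds of choices

    trials = 100_000
    results = [one_trial() for _ in range(trials)]
    print(sum(t for t, r in results) / trials)   # mean minutes until dinner
    print(sum(r for t, r in results) / trials)   # mean number of rounds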

CONCLUSION:

I corrected my work and found the exact solution proposed by others and simulated by Steve!  Even better, I have shown my approach works:  while notably less elegant, one can solve this expected value problem by directly invoking the definition of expected value.

Best of all, I learned from a mistake and didn’t give up on a problem.  Now that’s the real lesson I hope all of my students get.

Happy New Year, everyone!

Great Probability Problems

UPDATE:  Unfortunately, there are a couple errors in my computations below that I found after this post went live.  In my next post, Mistakes are Good, I fix those errors and reflect on the process of learning from them.

ORIGINAL POST:

A post last week to the AP Statistics Teacher Community by David Bock alerted me to the new weekly Puzzler on Nate Silver’s Web site, http://fivethirtyeight.com/.  As David noted, with their focus on probability, this new feature offers some great possibilities for AP Statistics probability and simulation.

I describe below FiveThirtyEight’s first three Puzzlers along with a potential solution to the last one.  If you’re searching for some great problems for your classes or challenges for some of your students, try these out!

THE FIRST THREE PUZZLERS:

The first Puzzler asked a variation on a great engineering question:

You work for a tech firm developing the newest smartphone that supposedly can survive falls from great heights. Your firm wants to advertise the maximum height from which the phone can be dropped without breaking.

You are given two of the smartphones and access to a 100-story tower from which you can drop either phone from whatever story you want. If it doesn’t break when it falls, you can retrieve it and use it for future drops. But if it breaks, you don’t get a replacement phone.

Using the two phones, what is the minimum number of drops you need to ensure that you can determine exactly the highest story from which a dropped phone does not break? (Assume you know that it breaks when dropped from the very top.) What if, instead, the tower were 1,000 stories high?

The second Puzzler investigated random geyser eruptions:

You arrive at the beautiful Three Geysers National Park. You read a placard explaining that the three eponymous geysers — creatively named A, B and C — erupt at intervals of precisely two hours, four hours and six hours, respectively. However, you just got there, so you have no idea how the three eruptions are staggered. Assuming they each started erupting at some independently random point in history, what are the probabilities that A, B and C, respectively, will be the first to erupt after your arrival?

Both very cool problems with solutions on the FiveThirtyEight site.  The current Puzzler talked about siblings playing with new phone apps.

You’ve just finished unwrapping your holiday presents. You and your sister got brand-new smartphones, opening them at the same moment. You immediately both start doing important tasks on the Internet, and each task you do takes one to five minutes. (All tasks take exactly one, two, three, four or five minutes, with an equal probability of each). After each task, you have a brief moment of clarity. During these, you remember that you and your sister are supposed to join the rest of the family for dinner and that you promised each other you’d arrive together. You ask if your sister is ready to eat, but if she is still in the middle of a task, she asks for time to finish it. In that case, you now have time to kill, so you start a new task (again, it will take one, two, three, four or five minutes, exactly, with an equal probability of each). If she asks you if it’s time for dinner while you’re still busy, you ask for time to finish up and she starts a new task and so on. From the moment you first open your gifts, how long on average does it take for both of you to be between tasks at the same time so you can finally eat? (You can assume the “moments of clarity” are so brief as to take no measurable time at all.)

SOLVING THE CURRENT PUZZLER:

Before I started, I saw Nick Brown‘s interesting Tweet of his simulation.

cw4fvbgwsaai00c

If Nick’s correct, it looks like a mode of 5 minutes and an understandable right skew.  I approached the solution by first considering the distribution of initial random app choices.

App1

There is a \displaystyle \frac{5}{25} chance the siblings choose the same app and head to dinner after the first round.  The expected length of that round is \frac{1}{5} \cdot \left( 1+2+3+4+5 \right) = 3 minutes.

That means there is a \displaystyle \frac{4}{5} chance different length apps are chosen with time differences between 1 and 4 minutes.  In the case of unequal apps, the average time spent before the shorter app finishes is \frac{1}{25} \cdot \left( 8*1+6*2+4*3+2*4 \right) = 1.6 minutes.

It doesn’t matter which sibling chose the shorter app.  That sibling chooses next with distribution as follows.

App2

While the distributions are different, conveniently, there is still a time difference between 1 and 4 minutes when the total times aren’t equal.  That means the second table shows the distribution for the 2nd and all future potential rounds until the siblings finally align.  While this problem has the potential to extend for quite some time, this adds a nice pseudo-fractal self-similarity to the scenario.

As noted, there is a \displaystyle \frac{4}{20}=\frac{1}{5} chance they complete their apps on any round after the first, and this would not add any additional time to the total as the sibling making the choice at this time would have initially chosen the shorter total app time(s).  Each round after the first will take an expected time of \frac{1}{20} \cdot \left( 7*1+5*2+3*3+1*4 \right) = 1.5 minutes.

The only remaining question is the expected number of rounds of app choices the siblings will take if they don’t align on their first choice.  This is where I invoked self-similarity.

In the initial choice there was a \frac{4}{5} chance one sibling would take an average 1.6 minutes using a shorter app than the other.  From there, some unknown average N choices remain.  There is a \frac{1}{5} chance the choosing sibling ends the experiment with no additional time, and a \frac{4}{5} chance s/he takes an average 1.5 minutes to end up back at the Table 2 distribution, still needing an average N choices to finish the experiment (the pseudo-fractal self-similarity connection).  All of this is simulated in the flowchart below.

App3

Recognizing the self-similarity allows me to solve for N.

\displaystyle N = \frac{1}{5} \cdot 1 + \frac{4}{5} \cdot (1+N) \longrightarrow N=5

FINAL ANSWER:

Number of Rounds – Starting from the beginning, there is a \frac{1}{5} chance of ending in 1 round and a \frac{4}{5} chance of ending in an average 5 rounds, so the expected number of rounds of app choices before the siblings simultaneously end is

\frac{1}{5} *1 + \frac{4}{5}*5=4.2 rounds

Time until Eating – In the first choice, there is a \frac{1}{5} chance of ending in 3 minutes.  If that doesn’t happen, there is a subsequent \frac{1}{5} chance of ending with the second choice with no additional time.  If neither of those events happen, there will be 1.6 minutes on the first choice plus an average 5 more rounds, each taking an average 1.5 minutes, for a total average 1.6+5*1.5=9.1 minutes.  So the total average time until both siblings finish simultaneously will be

\frac{1}{5}*3+\frac{4}{5}*9.1 = 7.88 minutes

CONCLUSION:

My 7.88 minute mean is reasonably to the right of Nick’s 5 minute mode shown above.  We’ll see tomorrow if I match the FiveThirtyEight solution.

Anyone else want to give it a go?  I’d love to hear other approaches.

Best Algebra 2 Lab Ever

This post shares what I think is one of the best, most inclusive, data-oriented labs for a second-year algebra class.  This single experiment produced linear, quadratic, and exponential (and logarithmic) data in a lab my Algebra 2 students completed this past summer.  In that class, I assigned frequent labs where students gathered real data, determined models to fit that data, and analyzed the goodness of the models’ fit to the data.  I believe in the importance of doing so much more than just writing an equation and moving on.

For kicks, I’ll derive an approximation for the coefficient of gravity at the end.

THE LAB:

On the way to school one morning last summer, I grabbed one of my daughters’ “almost fully inflated” kickballs and attached a TI CBR2 to my laptop and gathered (distance, time) data from bouncing the ball under the Motion Sensor.  NOTE:  TI’s CBR2 can connect directly to their Nspire and TI84 families of graphing calculators.  I typically use computer-based Nspire CAS software, so I connected the CBR via my laptop’s USB port.  It’s crazy easy to use.

One student held the CBR2 about 1.5-2 meters above the ground while another held the ball steady about 20 cm below the CBR2 sensor.  When the second student released the ball, a third clicked a button on my laptop to gather the data:  time every 0.05 seconds and height from the ground.  The graphed data is shown below.  In case you don’t have access to a CBR or other data gathering devices, I’ve uploaded my students’ data in this Excel file.

[Figure Bounce1: scatterplot of the full (time, height) data from the bouncing ball]

Remember, this data was collected under far-from-ideal conditions.  I picked up a kickball my kids left outside on my way to class.  The sensor was handheld and likely wobbled some, and the ball was dropped on the well-worn carpet of our classroom floor.  It is also likely the ball did not remain perfectly under the sensor the entire time.  Even so, my students created a very pretty graph on their first try.

For further context, we did this lab in the middle of our quadratics unit that was preceded by a unit on linear functions and another on exponential and logarithmic functions.  So what can we learn from the bouncing ball data?

LINEAR 1:  

While it is very unlikely that any of the recorded data points were precisely at maximums, they are close enough to create a nice linear pattern.

As the height of a ball above the ground helps determine the height of its next bounce (height before –> energy on impact –> height after), the eight ordered pairs (max height #n, max height #(n+1) ) from my students’ data are shown below

[Figure bounce2: scatterplot of the (max height #n, max height #(n+1)) pairs]

This looks very linear.  Fitting a linear regression and analyzing the residuals gives the following.

[Figure bounce3: linear regression and residual plot for the rebound-height pairs]

The data seem close to the line; the residuals are relatively small, distributed about evenly above and below the line, and show no apparent pattern.  This confirms that the regression equation, y=0.673x+0.000233, is a good fit for the x = height before bounce and y = height after bounce data.

NOTE:  You could reasonably easily gather this data sans any technology.  Have teams of students release a ball from different measured heights while others carefully identify the rebound heights.

The coefficients also have meaning.  The 0.673 suggests that after each bounce, the ball rebounded to 67.3%, or 2/3, of its previous height–not bad for a ball plucked from a driveway that morning.  Also, the y-intercept, 0.000233, is essentially zero, suggesting that a ball released 0 meters from the ground would rebound to basically 0 meters above the ground.  That this isn’t exactly zero is a small measure of error in the experiment.
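If you want students to run the regression somewhere other than the Nspire, here is a minimal Python sketch using numpy.  The rebound heights below are hypothetical stand-ins; substitute the (height before, height after) pairs your class actually measures.

import numpy as np

# hypothetical (max height n, max height n+1) pairs in meters --
# replace with your class's measured rebound heights
before = np.array([0.97, 0.66, 0.45, 0.30, 0.20, 0.14, 0.09, 0.06])
after  = np.array([0.66, 0.45, 0.30, 0.20, 0.14, 0.09, 0.06, 0.04])

slope, intercept = np.polyfit(before, after, 1)   # degree-1 fit
residuals = after - (slope * before + intercept)

print(f"rebound model: y = {slope:.3f}x + {intercept:.6f}")
print("residuals:", np.round(residuals, 4))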

EXPONENTIAL:

Using the same idea, consider data of the form (x,y) = (bounce number, bounce height).  The graph of the nine points from my students’ data is:

[Figure bounce4: scatterplot of (bounce number, bounce height)]

This could be power or exponential data–something you should confirm for yourself–but an exponential regression and its residuals show

[Figure bounce5: exponential regression and residual plot for the bounce-height data]

While something of a pattern seems to exist in the residuals, the other criteria are met, making the exponential regression a reasonably good model: y = 0.972 \cdot (0.676)^x.  That means the height at bounce number 0, the initial release height visible in the downward movement at the far left of the initial scatterplot, is 0.972 meters, and the constant multiplier is about 0.676.  This second number represents the percentage of height maintained from each previous bounce, and is therefore the percentage rebound.  Also note that this is essentially the same value as the slope from the previous linear example, confirming that the ball we used basically maintained slightly more than 2/3 of its height from one bounce to the next.
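To demystify what the calculator’s exponential regression is doing, students can fit a line to (bounce number, ln(height)) and exponentiate back.  A sketch, with the same caveat that the heights here are idealized stand-ins for measured data:

import numpy as np

bounce = np.arange(9)                    # bounce numbers 0 through 8
height = 0.97 * 0.676 ** bounce          # idealized heights in meters

# straight-line fit in semi-log space: ln(y) = ln(a) + x * ln(b)
lnb, lna = np.polyfit(bounce, np.log(height), 1)
a, b = np.exp(lna), np.exp(lnb)
print(f"exponential model: y = {a:.3f} * ({b:.3f})^x")
# real data will scatter around the fit instead of matching exactly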

And you can get logarithms from these data if you use the equation to determine, for example, which bounces exceed 0.2 meters.

[Figure bounce12: determining which bounces exceed 0.2 meters]
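In case the algebra behind that solution isn’t visible in the image, isolating x is where the logarithm appears.  Using the rounded regression coefficients (so the boundary value is approximate), and remembering the inequality flips because \ln (0.676) is negative:

\displaystyle 0.972 \cdot (0.676)^x > 0.2 \longrightarrow x < \frac{\ln (0.2/0.972)}{\ln (0.676)} \approx 4.04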

So, bounces 1-4 satisfy the requirement for exceeding 0.20 meters, as confirmed by the data.

A second way to invoke logarithms is to swap the variables:  graphing x = height and y = bounce number produces data suited to a logarithmic model.

QUADRATIC:

Each individual bounce looks like an inverted parabola.  If you remember a little physics, the moment after the ball leaves the ground after each bounce, it is essentially in free fall, a situation modeled by quadratic motion if you ignore air resistance, a safe simplification given the very short duration of each bounce.
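In symbols, that free-fall model (quoted from physics, not derived here) says height is quadratic in time,

\displaystyle h(t) = -\frac{1}{2} g t^2 + v_0 t + h_0

so the leading coefficient of any parabola fit to (time, height) data should approximate -\frac{g}{2} \approx -4.9 m/sec^2, a useful benchmark for the regression coming up.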

I had eight complete bounces I could use, but chose the first to have as many data points as possible to model.  As it was impossible to know whether the lowest point on each end of any data set came from the ball moving up or down, I omitted the first and last point in each set.  Using (x,y) = (time, height of first bounce) data, my students got:

[Figure bounce6: (time, height) data for the first bounce]

What a pretty parabola.  Fitting a quadratic regression (or manually fitting one, if that’s more appropriate for your classes), I get:

[Figure bounce7: quadratic regression and residual plot for the first bounce]

Again, there’s maybe a slight pattern in the residuals, but all but two points lie well within 0.1% of the model, with about half above and half below.  The model, y=-4.84x^2+4.60x-4.24, could be interpreted in terms of the physics formula for an object in free fall, but I’ll postpone that for a moment.

LINEAR 2:

If your second year algebra class has explored common differences, your students could compute second common differences to confirm the quadratic nature of the data.  Other than the first two differences (far right column below), the second common differences of all data points are roughly 0.024.  Those first two outliers raise suspicion that my student’s hand may have wiggled while holding the CBR2 at the start of the data collection.

[Figure bounce8: table of first and second common differences for the first bounce]
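As a quick check, that 0.024 is predictable from the regression itself: for quadratic data sampled at equal steps \Delta t, the second common difference is 2a( \Delta t)^2, where a is the leading coefficient, and here \left| 2 \cdot (-4.84) \cdot (0.05)^2 \right| \approx 0.024, matching the table.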

Since the second common differences are roughly constant, the original data must be quadratic, and the first common differences linear.  As a variation, for each consecutive pair of (time, height) points I had my students graph (x,y) = (midpoint of the two times, slope between the two points):

[Figure bounce10: scatterplot of (midpoint time, slope) pairs]

If you’ve had the common difference discussion, the linearity of this graph is not surprising.  Despite those conversations, most of my students seemed completely surprised by this pattern emerging from the quadratic data.  I guess they didn’t really “get” what common differences–or the closely related slope–meant until this point.

[Figure bounce11: linear regression and residual plot for the slope data]

Other than the first three points, the model seems very strong.  The coefficients tell an even more interesting story.

GRAVITY:

The equation from the last linear regression is y=4.55-9.61x.  Since the data came from slopes, the y-intercept, 4.55, is measured in m/sec.  That makes it the velocity of the ball at the moment (t=0) the ball left the ground.  Nice.

The slope of this line is -9.61.  As this is a slope, its units are the y-units over the x-units, or (m/sec)/(sec).  That is, meters per second squared.  And those are the units for acceleration, including the acceleration due to gravity!  That means my students measured, hidden within their data, an approximation for the acceleration due to gravity by bouncing an outdoor ball on a well-worn carpet with a mildly wobbly hand holding a CBR2.  The accepted value at sea level on Earth is about -9.807 m/sec^2, so my students’ measurement error was about \frac{9.807-9.610}{9.807} \approx 2.0%.  And 2% error is not bad for a very unscientific setting!
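For anyone who wants to replicate the full chain from one bounce’s data to an estimate of g, here is a minimal Python sketch.  The (time, height) arc below is idealized stand-in data; substitute a single bounce’s worth of your CBR2 readings.

import numpy as np

# idealized (time, height) points for one bounce, 0.05 s apart --
# replace with one bounce's worth of your recorded data
t = np.arange(0.10, 0.90, 0.05)
h = -4.9 * t**2 + 4.6 * t          # stand-in free-fall arc (meters)

# (midpoint of the two times, slope between the two points)
mid = (t[:-1] + t[1:]) / 2
slopes = np.diff(h) / np.diff(t)

g_est, v0 = np.polyfit(mid, slopes, 1)   # velocity is linear in time
print(f"estimated acceleration: {g_est:.2f} m/sec^2")
print(f"estimated speed at t=0: {v0:.2f} m/sec")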

CONCLUSION:

Whenever I teach second year algebra classes, I find it extremely valuable to have students gather real data whenever possible and, with every new function, determine models to fit their data and analyze the goodness of the models’ fit to the data.  In addition to being good mathematics explorations, these activities do an excellent job exposing students to a few topics often underrepresented in many secondary math classes:  numerical representations and methods, experimentation, and introductory statistics.  Hopefully some of the ideas shared here will inspire you to help your students experience more.