# Category Archives: Applications

## From Coins to Magic

Here’s a great problem or trick for a class exploration … or magic for parties.

DO THIS YOURSELF.

Grab a small handful of coins (it doesn’t matter how many), randomly flip them onto a flat surface, and count the number of tails.

From the group, randomly pull off a number of coins equal to the number of tails you just counted, and place them in a separate pile.  Turn over every coin in this new pile.

Count the number of tails in each pile.

You got the same number both times!

Why?

Marilyn Vos Savant posed a similar problem:

Say that a hundred pennies are on a table. Ninety of them are heads. With your eyes closed, can you separate all the coins into two groups so that each group has the same number of tails?

Savant’s solution is to pull any random 10 coins from the 100 and make a second pile.  Turn all the coins in the new pile over, et voilà!  Both piles have an equal number of tails.

While Savant’s approach is much more prescriptive than mine, both solutions work.  Every time.  WHY?

THIS IS STRANGE:

You have no idea the state (heads or tails) of any of the coins you pull into the second pile.  It’s counterintuitive that the two piles could ever contain the same number of tails.

Also, flipping the coins in the new pile seems completely arbitrary, and yet after any random pull & flip, the two resulting piles always hold the same number of tails.

Enter the power (and for young people, the mystery) of algebra to generalize a problem, supporting an argument that holds for all possibilities simultaneously.

HOW IT WORKS:

The first clue to this is the misdirection in Savant’s question.  Told that there are 90 heads, you are asked to make the number of tails equivalent.  In both versions, the number of TAILS in the original pile is the number of coins pulled into the second pile.  This isn’t a coincidence; it’s the key to the solution.

In any pile of randomly flipped coins (they needn’t be pennies, or even all the same denomination), let N be the number of tails.  Create your second pile by pulling N random coins from the initial pile.  Because the coins are randomly selected, you don’t know how many tails are in the new pile, so let that unknown number of tails be X.  That means $0 \le X \le N$, leaving $N-X$ tails in the first pile.  And because the second pile contains exactly N coins, X of which are tails, it must hold $N-X$ heads.  (Make sure you understand that last bit!)  That means if you flip all the coins in the second pile, those heads will become tails, and you are guaranteed exactly $N-X$ tails in both piles.

Cool facts:

• You can’t say with certainty how many tails will be in both piles, but you know they will be the same.
• The total number of coins you start with is completely irrelevant.
• While the given two versions of the problem make piles with equal numbers of tails, this “trick” can balance heads or tails.  To balance heads instead, pull into the second pile a number of coins equal to the number of heads in the initial pile.  When you flip all the coins in the second pile, both piles will contain the same number of heads.
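
The pull-and-flip routine is easy to check empirically.  Here’s a quick simulation sketch (Python; `coin_trick` is my own hypothetical helper name) that runs the procedure many times and verifies the two piles always match:

```python
import random

def coin_trick(num_coins, balance="tails"):
    """Run one pull-and-flip trial; return the matched counts from each pile."""
    # Flip a random handful of coins: True = tails, False = heads.
    pile1 = [random.random() < 0.5 for _ in range(num_coins)]
    n = sum(pile1) if balance == "tails" else num_coins - sum(pile1)
    # Pull off n coins at random, never looking at their faces...
    random.shuffle(pile1)
    pile2 = [pile1.pop() for _ in range(n)]
    # ...then turn over every coin in the new pile.
    pile2 = [not c for c in pile2]
    if balance == "tails":
        return sum(pile1), sum(pile2)                   # tails in each pile
    return sum(not c for c in pile1), sum(not c for c in pile2)  # heads

# The two piles match every time, for any number of coins, balancing either face.
for _ in range(1000):
    a, b = coin_trick(random.randint(1, 60))
    assert a == b
    a, b = coin_trick(random.randint(1, 60), balance="heads")
    assert a == b
```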

A PARTY WONDER or SOLID PROBLEM FOR AN ALGEBRA CLASS:

If you work on your showmanship, you can baffle others with this.  For my middle school daughter, I counted off the “leave alone” pile and then flipped the second pile.  I also let her flip the initial set of coins and tell me each time whether she wanted me to get equal numbers of heads or tails.  I looked away as she shuffled the coins and pulled off the requisite number of coins without looking.

She’s figured out HOW I do it, but as she is just starting algebra, she doesn’t yet have the comfort with abstraction to fully generalize the solution.  She’ll get there.

I could see this becoming a fun data-gathering project for an algebra class.  It would be cool to see how someone approaches this with a group of students.

## PowerBall Redux

Donate to a charity instead.  Let me explain.
The majority of responses to my PowerBall description/warnings yesterday have been, “If you don’t play, you can’t win.”  Unfortunately, I know many, many people are buying many lottery tickets, way more than they should.

OK.  For almost everyone, there’s little harm in spending $2 on a ticket for the entertainment, but don’t expect to win, and don’t buy multiple tickets unless you can afford to do without every dollar you spend.  I worry about those who are “investing” tens or hundreds of dollars on any lottery.

Two of my school colleagues captured the idea of a lottery yesterday with their analogies.

Steve:  Suppose you go to the beach and grab a handful of sand and bring it back to your house.  And you do that every single day.  Then your odds of winning the PowerBall are still slightly worse than picking out one particular grain of sand from all the sand you accumulated over an entire year.

Or, more simply put from the perspective of a lottery official,

Patrick:  Here’s our idea.  You guys all throw your money in a big pile.  Then, after we take some of it, we’ll give the pile to just one of you.

WHY YOU SHOULDN’T BUY MULTIPLE TICKETS:

For perspective, a football field is 120 yards long, or 703.6 US dollars long using the logic of my last post.  Rounding up, that would buy you 352 PowerBall tickets.  That means investing $704 would buy you a single football-field length of chances in 10.5 coast-to-coast traverses of the entire United States.  There’s going to be an incredibly large number of disappointed people tomorrow.

MORAL:  Even an incredibly large multiple of a less-than-microscopic chance is still a less-than-microscopic chance.

BETTER IDEA:  Assume you have the resources and are willing to part with tens or hundreds of dollars for no likelihood of tangible personal gain.  Using the $704 football example, buy 2 tickets and donate the other $700 to charity.  You’ll do much more good.

## PowerBall Math

Given the record size and mania surrounding the current PowerBall Lottery, I thought some of you might be interested in bringing that game into perspective.  This could be an interesting application with some teachers and students.

It certainly is entertaining for many to dream about what you would do if you happened to be lucky enough to win an astronomical lottery.  And lottery vendors are quick to note that your dreams can’t come true if you don’t play.  Nice advertising.  I’ll let the numbers speak to the veracity of the Lottery’s encouragement.

PowerBall is played by picking any 5 different numbers between 1 & 69, and then one PowerBall number between 1 & 26.  So there are $\binom{69}{5} \cdot 26=292,201,338$ outcomes for this game.  Unfortunately, humans have a particularly difficult time understanding extremely large numbers, so I offer an analogy to bring it a little into perspective.

• The horizontal width of the United States is generally reported to be 2680 miles, and a U.S. dollar bill is 6.14 inches wide.  That means the U.S. is approximately 27,655,505 dollar bills wide.
• If I have 292,201,338 dollar bills (one for every possible PowerBall outcome), I could make a line of dollar bills placed end-to-end from the U.S. East Coast all the way to the West Coast, back to the East, back to the West, and so forth, passing back and forth between the two coasts just over 10.5 times.
• Now imagine that exactly one of those dollar bills was replaced with a replica dollar bill made from gold colored paper.

Your chances of winning the PowerBall lottery are the same as randomly selecting that single gold note from all of those dollar bills laid end-to-end and crossing the entire breadth of the United States 10.5 times.
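
The arithmetic behind the analogy is easy to verify.  A short sketch (Python; the variable names are mine):

```python
import math

# Possible tickets: choose 5 of 69 white balls, times 26 red PowerBall numbers.
outcomes = math.comb(69, 5) * 26
assert outcomes == 292_201_338

# U.S. width in dollar bills: 2680 miles converted to inches, 6.14 inches per bill.
bills_per_crossing = 2680 * 5280 * 12 / 6.14   # about 27,655,505 bills

# One bill per outcome, laid end-to-end, crosses the country just over 10.5 times.
crossings = outcomes / bills_per_crossing
assert 10.5 < crossings < 10.6
```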

Dreaming is fun, but how likely is this particular dream to become real?

Play the lottery if doing so is entertaining to you, but like going to the movie theater, don’t expect to get any money back in return.

## Mistakes are Good

Confession #1:  My answers on my last post were WRONG.

I briefly thought about taking that post down, but discarded that idea when I thought about the reality that almost all published mathematics is polished, cleaned, and optimized.  Many students struggle with mathematics under the misconception that their first attempts at any topic should be as polished as what they read in published sources.

While not precisely from the same perspective, Dan Teague recently wrote an excellent, short piece of advice to new teachers on NCTM’s ‘blog entitled Demonstrating Competence by Making Mistakes.  I argue Dan’s advice actually applies to all teachers, so in the spirit of showing how to stick with a problem and not just walking away saying “I was wrong”, I’m going to keep my original post up, add an advisory note at the start about the error, and show below how I corrected my error.

Confession #2:  My approach was a much longer and far less elegant solution than the identical approaches offered by a comment by “P” on my last post and the solution offered on FiveThirtyEight.  Rather than just accepting the alternative solution, as too many students are wont to do, I acknowledged the more efficient approach of others before proceeding to find a way to get the answer through my initial idea.

I’ll also admit that I didn’t immediately see the simple approach to the answer and rushed my post in the time I had available to get it up before the answer went live on FiveThirtyEight.

GENERAL STRATEGY and GOALS:

1-Use a PDF:  The original FiveThirtyEight post asked for the expected time before the siblings simultaneously finished their tasks.  I interpreted this as expected value, and I knew how to compute the expected value of a pdf of a random variable.  All I needed was the potential wait times, t, and their corresponding probabilities.  My approach was solid, but a few of my computations were off.

2-Use Self-Similarity:  I don’t see many people employing the self-similarity tactic I used in my initial solution.  Resolving my initial solution would allow me to continue using what I consider a pretty elegant strategy for handling cumbersome infinite sums.

A CORRECTED SOLUTION:

Stage 1:  My table for the distribution of initial choices was correct, as were my conclusions about the probability and expected time if they chose the same initial app.

My first mistake was in my calculation of the expected time if they did not choose the same initial app.  The 20 numbers in blue above represent that sample space.  Notice that there are 8 cases where the longer sibling chose a 5-minute app, 6 where the longer choice was a 4-minute app, 4 where it was a 3-minute app, and 2 where it was a 2-minute app.  So the expected length of time spent on the longer app, when the siblings did not choose the same one, is

$E(Round1) = \frac{1}{20}*(8*5+6*4+4*3+2*2)=4$ minutes,

a notably longer time than I initially reported.

For the initial app choice, there is a $\frac{1}{5}$ chance they choose the same app for an average time of 3 minutes, and a $\frac{4}{5}$ chance they choose different apps for an average time of 4 minutes.

Stage 2:  My biggest error was a rushed assumption that all of the entries I gave in the Round 2 table were equally likely.  That is clearly false as you can see from Table 1 above.  There are only two instances of a time difference of 4, while there are eight instances of a time difference of 1.  A correct solution using my approach needs to account for these varied probabilities.  Here is a revised version of Table 2 with these probabilities included.

Conveniently–as I had noted without full realization in my last post–the revised Table 2 still shows the distribution for the 2nd and all future potential rounds until the siblings finally align, including the probabilities.  This proved to be a critical feature of the problem.

Another oversight was not fully recognizing which events would contribute to increasing the time before parity.  The yellow highlighted cells in Table 2 are those for which the next app choice was longer than the current time difference, and any of these would increase the length of a trial.

I was initially correct in concluding there was a $\frac{1}{5}$ probability of the second app choice achieving a simultaneous finish and that this would not result in any additional total time.  I missed the fact that the six non-highlighted values also did not result in additional time and that there was a $\frac{1}{5}$ chance of this happening.

That leaves a $\frac{3}{5}$ chance of the trial time extending by selecting one of the highlighted events.  If that happens, the expected time the trial would continue is

$\displaystyle \frac{4*4+(4+3)*3+(4+3+2)*2+(4+3+2+1)*1}{4+(4+3)+(4+3+2)+(4+3+2+1)}=\frac{13}{6}$ minutes.

Iterating:  So now I recognized there were 3 potential outcomes at Stage 2–a $\frac{1}{5}$ chance of matching and ending, a $\frac{1}{5}$ chance of not matching but not adding time, and a $\frac{3}{5}$ chance of not matching and adding an average $\frac{13}{6}$ minutes.  Conveniently, the last two possibilities still combined to recreate perfectly the outcomes and probabilities of the original Stage 2, creating a self-similar, pseudo-fractal situation.  Here’s the revised flowchart for time.

Invoking the similarity, if there were T minutes remaining after arriving at Stage 2, then there was a $\frac{1}{5}$ chance of adding 0 minutes, a $\frac{1}{5}$ chance of remaining at T minutes, and a $\frac{3}{5}$ chance of adding $\frac{13}{6}$ minutes–that is being at $T+\frac{13}{6}$ minutes.  Equating all of this allows me to solve for T.

$T=\frac{1}{5}*0+\frac{1}{5}*T+\frac{3}{5}*\left( T+\frac{13}{6} \right) \longrightarrow T=6.5$ minutes

Time Solution:  As noted above, at the start, there was a $\frac{1}{5}$ chance of immediately matching with an average 3 minutes, and there was a $\frac{4}{5}$ chance of not matching while using an average 4 minutes.  I just showed that from this latter stage, one would expect to need to use an additional mean 6.5 minutes for the siblings to end simultaneously, for a mean total of 10.5 minutes.  That means the overall expected time spent is

Total Expected Time $=\frac{1}{5}*3 + \frac{4}{5}*10.5 = 9$ minutes.

Number of Rounds Solution:  My initial computation of the number of rounds was actually correct–despite the comment from “P” in my last post–but I think the explanation could have been clearer.  I’ll try again.

One round is obviously required for the first choice, and in the $\frac{4}{5}$ chance the siblings don’t match, let N be the average number of rounds remaining.  In Stage 2, there’s a $\frac{1}{5}$ chance the trial will end with the next choice, and a $\frac{4}{5}$ chance there will still be N rounds remaining.  This second situation is correct because both the no time added and time added possibilities combine to reset Table 2 with a combined probability of $\frac{4}{5}$.  As before, I invoke self-similarity to find N.

$N = \frac{1}{5}*1 + \frac{4}{5}*(1+N) \longrightarrow N=5$

Therefore, the expected number of rounds is $\frac{1}{5}*1 + \frac{4}{5}*5 = 4.2$ rounds.

It would be cool if someone could confirm this prediction by simulation.
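
Here is one way such a simulation might look (Python).  It assumes the setup described above: each sibling picks a random 1-to-5-minute app, and whoever finishes first immediately picks another, until they finish at the same moment.

```python
import random

def one_trial():
    """Return the total time until the siblings finish simultaneously."""
    a = random.randint(1, 5)   # sibling 1's running finish time
    b = random.randint(1, 5)   # sibling 2's running finish time
    while a != b:
        # Whoever is set to finish first picks another random app.
        if a < b:
            a += random.randint(1, 5)
        else:
            b += random.randint(1, 5)
    return a   # == b, the simultaneous finish time

trials = 200_000
mean_time = sum(one_trial() for _ in range(trials)) / trials
# The sample mean should land close to the derived 9 minutes.
assert 8.8 < mean_time < 9.2
```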

CONCLUSION:

I corrected my work and found the exact solution proposed by others and simulated by Steve!  Even better, I showed that my approach works: while notably less elegant, this expected value problem can be solved by invoking the definition of expected value.

Best of all, I learned from a mistake and didn’t give up on a problem.  Now that’s the real lesson I hope all of my students get.

Happy New Year, everyone!

## How One Data Point Destroyed a Study

Statistics are powerful tools.  Well implemented, they tease out underlying patterns from the noise of raw data and improve our understanding.  But those who use statistics must take care to avoid misstatements.   Unfortunately, statistics can also be used to deliberately distort relationships, declaring patterns where none exist.  In my AP Statistics classes, I hope my students learn to extract meaning from well-designed studies, and to spot instances of Benjamin Disraeli’s “three kinds of lies:  lies, damned lies, and statistics.”

This post explores part of a study published August 12, 2015, exposing what I believe to be examples of four critical ways statistics are misunderstood and misused:

• Not recognizing the distorting power of outliers on means, standard deviations, and, in the case of the study below, regressions,
• Distorting graph scales to create the impression of patterns different from what actually exists,
• Cherry-picking data to show only favorable results, and
• Misunderstanding the p-value in inferential studies.

THE STUDY:

I was searching online for examples of research I could use with my AP Statistics classes when I found on the page of a math teacher organization a link to an article entitled, “Cardiorespiratory fitness linked to thinner gray matter and better math skills in kids.”  Following the URL trail, I found a description of the referenced article in an August, 2015 summary article by Science Daily and the actual research posted on August 12, 2015 by the journal, PLOS ONE.

As a middle and high school teacher, I’ve read multiple studies connecting physical fitness to brain health.  I was sure I had hit paydirt with an article offering multiple, valuable lessons for my students!  I read the claims of the Science Daily research summary correlating the physical fitness of 9- and 10-year-old children to performance on a test of arithmetic.  It was careful not to declare cause-and-effect,  but did say

The team found differences in math skills and cortical brain structure between the higher-fit and lower-fit children. In particular, thinner gray matter corresponded to better math performance in the higher-fit kids. No significant fitness-associated differences in reading or spelling aptitude were detected. (source)

The researchers described plausible connections for the aerobic fitness of children and the thickness of cortical gray matter for each participating child.  The study went astray when they attempted to connect their findings to the academic performance of the participants.

Independent t-tests were employed to compare WRAT-3 scores in higher fit and lower fit children. Pearson correlations were also conducted to determine associations between cortical thickness and academic achievement. The alpha level for all tests was set at p < .05. (source)

All of the remaining images, quotes, and data in this post are pulled directly from the primary article on PLOS ONE.  The URLs are provided above, and bibliographic references are at the end.

To address questions raised by the study, I had to access the original data and recreate the researchers’ analyses.  Thankfully, PLOS ONE is an open-access journal, and I was able to download the research data.  In case you want to review the data yourself or use it with your classes, here is the original SPSS file which I converted into Excel and TI-Nspire CAS formats.

My suspicions were aroused when I saw the following two graphs–the only scatterplots offered in the research publication.

Scatterplot 1:  Attempt to connect Anterior Frontal Gray Matter thickness with WRAT-3 Arithmetic performance

The right side of the top scatterplot looked like an uncorrelated cloud of data with one data point on the far left seeming to pull the left side of the linear regression upwards, creating a more negative slope.  Because the study reported only two statistically significant correlations between the WRAT tests and cortical thickness in two areas of the brain, I was now concerned that the single extreme data point may have distorted results.

My initial scatterplot (below) confirmed the published graph, but fit to the entire window, the data now looked even less correlated.

In this scale, the farthest-left data point (WRAT Arithmetic score = 66, Anterior Frontal thickness = 3.9) looked much more like an outlier.  I confirmed that the point fell more than 1.5 IQRs below the lower quartile, as indicated visually in a boxplot of the WRAT-Arithmetic scores.

Also note from my rescaled scatterplot that the Anterior Frontal measure (y-coordinate) was higher than any of the next five ordered pairs to its right.  Its horizontal outlier location, coupled with its notably higher vertical component, suggested that the single point could have significant influence on any regression on the data.  There was sufficient evidence for me to investigate the study results excluding the (66, 3.9) data point.

The original linear regression on the 48 (WRAT Arithmetic, AF thickness) data was $AF=-0.007817(WRAT_A)+4.350$.  Excluding (66, 3.9), the new scatterplot above shows the revised linear regression on the remaining 47 points:  $AF=-0.007460(WRAT_A)+4.308$.  This and the original equation are close, but the revised slope is 4.6% smaller in magnitude relative to the published result. With the two published results reported significant at p=0.04, the influence of the outlier (66, 3.9) has a reasonable possibility of changing the study results.

Scatterplot 2:  Attempt to connect Superior Frontal Gray Matter thickness with WRAT-3 Arithmetic performance

The tightly compressed scale of the second published scatterplot made me deeply suspicious the (WRAT Arithmetic, Superior Frontal thickness) data was being vertically compressed to create the illusion of a linear relationship where one possibly did not exist.

Rescaling the graphing window (below) made the data appear notably less linear than the publication implied.  Also, the data point corresponding to the WRAT-Arithmetic score of 66 appeared to suffer from the same outlier influences as the first data set.  It was still an outlier, but now its vertical component was higher than the next eight data points to its right, with some of them notably lower.  Again, there was sufficient evidence to investigate results excluding the outlier data point.

The linear regression on the original 48 (WRAT Arithmetic, SF thickness) data points was $SF=-0.002767(WRAT_A)+4.113$ (above).  Excluding the outlier, the new scatterplot (below) had the revised linear regression $SF=-0.002391(WRAT_A)+4.069$.  This time, the revised slope was 13.6% smaller in magnitude than the original slope.  With the published significance also at p=0.04, omitting the outlier was almost certain to change the published results.

THE OUTLIER BROKE THE STUDY

The findings above strongly suggest the published study results are not as reliable as reported.  It is time to rerun the significance tests.

For the first data set, (WRAT Arithmetic, AF thickness), run an independent t-test on the regression slope with and without the outlier.

• INCLUDING OUTLIER:  For all 48 samples, the researchers reported a slope of -0.007817, $r=-0.292$, and $p=0.04$.  This was reported as a significant result.
• EXCLUDING OUTLIER:  For the remaining 47 samples, the slope is -0.007460, r=-0.252, and p=0.087.  The r confirms the visual impression that the data was less linear and, most importantly, the correlation is no longer significant at $\alpha <0.05$.

For the second data set–(WRAT Arithmetic, SF thickness):

• INCLUDING OUTLIER:  For all 48 samples, the researchers reported a slope of -0.002767, r=-0.291, and p=0.04.  This was reported as a significant result.
• EXCLUDING OUTLIER:  For the remaining 47 samples, the slope is -0.002391, r=-0.229, and p=0.121.  This revision is even less linear and, most importantly, the correlation is no longer significant for any standard significance level.

In brief, the researchers’ arguable decision to include the single, clear outlier data point was the source of any significant results at all.  Whatever correlation exists between gray matter thickness and WRAT-Arithmetic as measured by this study is tenuous, at best, and almost certainly not significant.
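
For readers who want to reproduce the revised tests, the standard two-sided t-test on a correlation coefficient uses $t = r\sqrt{n-2}/\sqrt{1-r^2}$ with $n-2$ degrees of freedom.  Exact p-values require a t-distribution CDF (e.g., `scipy.stats.t`), so this sketch (my assumption about the test behind the reported p-values) simply compares $|t|$ to approximate two-sided 5% critical values:

```python
import math

def t_stat(r, n):
    """t statistic for testing H0: rho = 0 from a sample correlation r, size n."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

# Approximate two-sided 5% critical values for the relevant degrees of freedom.
T_CRIT = {46: 2.013, 45: 2.014}

# All 48 points (AF thickness): reported significant at p = 0.04.
assert abs(t_stat(-0.292, 48)) > T_CRIT[46]

# Outlier excluded, 47 points, r = -0.252: no longer significant (p = 0.087).
assert abs(t_stat(-0.252, 47)) < T_CRIT[45]
```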

THE DANGERS OF CHERRY-PICKING RESULTS:

So, let’s set aside the entire questionable decision to keep an outlier in the data set to achieve significant findings.  There is still a subtle, potential problem with this study’s result that actually impacts many published studies.

The researchers understandably were seeking connections between the thickness of a brain’s gray matter and the academic performance of that brain as measured by various WRAT instruments.  They computed independent t-tests of linear regression slopes between thickness measures at nine different locations in the brain against three WRAT test measures for a total of 27 separate t-tests.  The next table shows the correlation coefficient and p-value from each test.

This approach is commonly used, with researchers reporting out only the tests found to be significant.  But in doing so, the researchers may have overlooked a fundamental property of the confidence intervals that underlie p-values.  A critical value of p=0.05 corresponds to a 95% confidence interval, and one interpretation of a 95% confidence interval is that, even when the null hypothesis is true, the most extreme 5% of outcomes will be deemed NOT to have resulted from the null hypothesis.

In other words, even when the null hypothesis is true, 5% of results would be deemed different enough to be statistically significant–a Type I Error.  Within this study, this defines a binomial probability situation with 27 trials, in which the probability of any one trial producing a significant result even though the null hypothesis is correct is p=0.05.

The binomial probability of finding exactly 2 significant results at p=0.05 over 27 trials is 0.243, and the probability of producing 2 or more significant results when the null hypothesis is true is 39.4%.
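
That binomial computation can be checked in a few lines (a Python sketch):

```python
from math import comb

n, p = 27, 0.05   # 27 separate t-tests, each with a 5% Type I error rate

def binom_pmf(k):
    """P(exactly k of the n tests appear 'significant' when H0 is true)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

p_exactly_2 = binom_pmf(2)
p_two_or_more = 1 - binom_pmf(0) - binom_pmf(1)

assert round(p_exactly_2, 3) == 0.243    # exactly 2 false positives
assert round(p_two_or_more, 3) == 0.394  # 2 or more false positives
```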

That means there is a 39.4% probability in any study testing 27 trials at a p<0.05 critical value that at least 2 of those trials would report a result that would INCORRECTLY be interpreted as contradicting the null hypothesis.  And if more conditions than 27 are tested, the probability of a Type I Error is even higher.

Whenever you have a large number of inference trials, there is an increasingly large probability that at least some of the “significant” trials are actually just random, undetected occurrences of the null hypothesis.

It just happens.

THE ELUSIVE MEANING OF A p-VALUE:

For more on the difficulty of understanding p-values, check out this nice recent article on FiveThirtyEight Science–Not Even Scientists Can Easily Explain P-Values.

CONCLUSION:

Personally, I’m a little disappointed that this study didn’t find significant results.  There are many recent studies showing the connection between physical activity and brain health, but this study didn’t achieve its goal of finding a biological source to explain the correlation.

It is the responsibility of researchers to know their studies and their resulting data sets.  Not finding significant results is not a problem.  But I do expect research to disclaim when its significant results hang entirely on a choice to retain an outlier in its data set.

REFERENCES:

Chaddock-Heyman L, Erickson KI, Kienzler C, King M, Pontifex MB, Raine LB, et al. (2015) The Role of Aerobic Fitness in Cortical Thickness and Mathematics Achievement in Preadolescent Children. PLoS ONE 10(8): e0134115. doi:10.1371/journal.pone.0134115

University of Illinois at Urbana-Champaign. “Cardiorespiratory fitness linked to thinner gray matter and better math skills in kids.” ScienceDaily. http://www.sciencedaily.com/releases/2015/08/150812151229.htm (accessed December 8, 2015).

## Chemistry, CAS, and Balancing Equations

Here’s a cool application of linear equations I first encountered about 20 years ago working with chemistry colleague Penney Sconzo at my former school in Atlanta, GA.  Many students struggle early in their first chemistry classes with balancing equations.  Thinking about these as generalized systems of linear equations gives a universal approach to balancing chemical equations, including ionic equations.

This idea makes a brilliant connection if you teach algebra 2 students concurrently enrolled in chemistry, or vice versa.

FROM CHEMISTRY TO ALGEBRA

Consider burning ethanol, the chemical combination of ethanol and oxygen that creates carbon dioxide and water:

$C_2H_6O+3O_2 \longrightarrow 2CO_2+3H_2O$     (1)

But what if you didn’t know that 1 molecule of ethanol combined with 3 molecules of oxygen gas to create 2 molecules of carbon dioxide and 3 molecules of water?  This specific set of coefficients (or any multiple of the set) exists for this reaction because of the Law of Conservation of Matter.  While elements may rearrange in a chemical reaction, they do not become something else.  So how do you determine the unknown coefficients of a generic chemical reaction?

Using the ethanol example, assume you started with

$wC_2H_6O+xO_2 \longrightarrow yCO_2+zH_2O$     (2)

for some unknown values of w, x, y, and z.  Conservation of Matter guarantees that the amount of carbon, hydrogen, and oxygen are the same before and after the reaction.  Tallying the amount of each element on each side of the equation gives three linear equations:

Carbon:  $2w=y$
Hydrogen:  $6w=2z$
Oxygen:  $w+2x=2y+z$

where the coefficients come from the subscripts within the compound notations.  As one example, the carbon subscript in ethanol ( $C_2H_6O$ ) is 2, indicating two carbon atoms in each ethanol molecule.  There must have been 2w carbon atoms in the w ethanol molecules.

This system of 3 equations in 4 variables won’t have a unique solution, but let’s see what my Nspire CAS says.  (NOTE:  On the TI-Nspire, you can solve for any one of the four variables.  Because the presence of more variables than equations makes the solution non-unique, some results may appear cleaner than others.  For me, w was more complicated than z, so I chose to use the z solution.)

All three equations have y in the numerator and denominators of 2.  The presence of the y indicates the expected non-unique solution.  But it also gives me the freedom to select any convenient value of y I want to use.  I’ll pick $y=2$ to simplify the fractions.  Plugging in gives me values for the other coefficients.

Substituting these into (2) above gives the original equation (1).
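
The same computation can be scripted without a CAS by treating the element tallies as a homogeneous linear system and extracting its null space.  Here is a sketch in pure Python with exact fractions (`balance` is my own hypothetical helper, and it assumes a one-dimensional solution space, as in these examples):

```python
from fractions import Fraction
from math import gcd
from functools import reduce

def balance(rows, nvars):
    """Solve the homogeneous system rows.x = 0 for the smallest
    whole-number coefficient vector (assumes a 1-D solution space)."""
    m = [[Fraction(v) for v in row] for row in rows]
    pivots, r = [], 0
    for c in range(nvars):
        # Find a row with a nonzero entry in column c to use as the pivot.
        pr = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pr is None:
            continue
        m[r], m[pr] = m[pr], m[r]
        m[r] = [v / m[r][c] for v in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    free = [c for c in range(nvars) if c not in pivots][0]
    # Set the free variable to 1 and read each pivot variable off its row.
    x = [Fraction(0)] * nvars
    x[free] = Fraction(1)
    for row, c in zip(m, pivots):
        x[c] = -row[free]
    # Scale to the smallest positive whole numbers.
    lcm = reduce(lambda a, b: a * b // gcd(a, b), (v.denominator for v in x))
    ints = [v.numerator * (lcm // v.denominator) for v in x]
    g = reduce(gcd, ints)
    return [v // g for v in ints]

# Ethanol combustion: w C2H6O + x O2 -> y CO2 + z H2O
ethanol = [[2, 0, -1, 0],    # carbon:   2w = y
           [6, 0, 0, -2],    # hydrogen: 6w = 2z
           [1, 2, -2, -1]]   # oxygen:   w + 2x = 2y + z
assert balance(ethanol, 4) == [1, 3, 2, 3]

# Octane synthesis: w CO + x H2 -> y C8H18 + z H2O
octane = [[1, 0, -8, 0],     # carbon
          [1, 0, 0, -1],     # oxygen
          [0, 2, -18, -2]]   # hydrogen
assert balance(octane, 4) == [8, 17, 1, 8]
```

Adding a charge-balance row extends the idea to ionic equations, though, as the ionic example later in this post shows, spectator ions can make the solution space more than one-dimensional.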

VARIABILITY EXISTS

Traditionally, chemists write these equations with the lowest possible natural number coefficients, but thinking of them as systems of linear equations makes another reality obvious.  If 1 molecule of ethanol combines with 3 molecules of oxygen gas to make 2 molecules of carbon dioxide and 3 molecules of water, surely 10 molecules of ethanol combine with 30 molecules of oxygen gas to make 20 molecules of carbon dioxide and 30 molecules of water (the result of substituting $y=20$ instead of the $y=2$ used above).

You could even let $y=1$ to get $z=\frac{3}{2}$, $w=\frac{1}{2}$, and $x=\frac{3}{2}$.  Shifting units, this could mean a half-mole of ethanol and 1.5 moles of oxygen make a mole of carbon dioxide and 1.5 moles of water.  The point is, the ratios are constant.  A good lesson.

ANOTHER QUICK EXAMPLE:

Now let’s try a harder one to balance:  Reacting carbon monoxide and hydrogen gas to create octane and water.

$wCO + xH_2 \longrightarrow y C_8 H_{18} + z H_2 O$

Setting up equations for each element gives

Carbon:  $w=8y$
Oxygen:  $w=z$
Hydrogen:  $2x=18y+2z$

I could simplify the hydrogen equation, but that’s not required.  Solving this system of equations gives

Nice.  No fractions this time.  Using $y=1$ gives $w=8$, $x=17$, and $z=8$, or

$8CO + 17H_2 \longrightarrow C_8 H_{18} + 8H_2 O$

Simple.

EXTENSIONS TO IONIC EQUATIONS:

Now let’s balance an ionic equation with unknown coefficients a, b, c, d, e, and f:

$a Ba^{2+} + b OH^- + c H^+ + d PO_4^{3-} \longrightarrow eH_2O + fBa_3(PO_4)_2$

In addition to writing equations for barium, oxygen, hydrogen, and phosphorus, Conservation of Charge allows me to write one more equation to reflect the balancing of charge in the reaction.

Barium:  $a = 3f$
Oxygen:  $b +4d = e+8f$
Hydrogen:  $b+c=2e$
Phosphorus:  $d=2f$
CHARGE (+/-):  $2a-b+c-3d=0$

Solving the system gives $a=3f$, $d=2f$, and $b=c=e=0$ (the CAS sets the remaining free parameter to zero).

Now that’s a curious result.  I’ll deal with the zeros in a moment.  Letting $d=2$ gives $f=1$ and $a=3$, indicating that 3 barium ions combine with 2 phosphate ions to create a single uncharged formula unit of barium phosphate precipitate.

The zeros here indicate the presence of “spectator ions”.  Basically, the hydroxide and hydrogen ions on the left combine in equal measure to form the liquid water molecules on the right.  Since they are in equal measure, one solution is

$3Ba^{2+}+6OH^- +6H^++2PO_4^{3-} \longrightarrow 6H_2O + Ba_3(PO_4)_2$
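The element and charge conservation checks translate directly to code.  A sketch (writing the hydrogen ion as $H^+$; the function name is my own):

```python
def conserved(a, b, c, d, e, f):
    """Element and charge conservation for
    a Ba^2+ + b OH^- + c H^+ + d PO4^3- -> e H2O + f Ba3(PO4)2.
    Every expression below must be zero for a balanced ionic equation."""
    checks = {
        "Ba": a - 3 * f,
        "O": b + 4 * d - (e + 8 * f),
        "H": b + c - 2 * e,
        "P": d - 2 * f,
        "charge": 2 * a - b + c - 3 * d,  # right side is electrically neutral
    }
    return all(v == 0 for v in checks.values())

print(conserved(3, 0, 0, 2, 0, 1))   # the solution with zeros -> True
print(conserved(3, 6, 6, 2, 6, 1))   # with the spectator ions -> True
```

Both coefficient sets pass, confirming that the hydroxide/hydrogen/water coefficients can take any common value without breaking conservation.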

CONCLUSION:

You still need to understand chemistry and algebra to interpret the results, but translating reactions into systems of linear equations (and especially using a CAS) makes it much easier to balance chemical equations and ionic chemical equations, particularly those with non-trivial coefficients not easily found by inspection.

The minor connection between science (chemistry) and math (algebra) is nice.

As many others have noted, CAS enables you to keep your mind on the problem while avoiding getting lost in the algebra.

## Measuring Calculator Speed

Two weeks ago, my summer school Algebra 2 students were exploring sequences and series.  A problem I thought would be a routine check on students’ ability to compute the sum of a finite arithmetic series morphed into an experimental measure of the computational speed of the TI-Nspire CX handheld calculator.  This experiment can be replicated on any calculator that can compute sums of arithmetic series.

PHILOSOPHY

Teaching this topic in prior years, I’ve found that students sometimes compute series sums by actually adding all of the individual sequence terms.  Some former students have solved problems by adding more than 50 terms, in sequence order, to find their sums.  That’s a valid, but computationally painful, approach.  I wanted my students to practice less brute-force series manipulations.  Despite my intentions, we ended up measuring brute force anyway!

Readers of this ‘blog hopefully know that I’m not at all a fan of memorizing formulas.  One of my class mantras is

“Memorize as little as possible.  Use what you know as broadly as possible.”

Formulas can be mis-remembered and typically apply only in very particular scenarios.  Learning WHY a procedure works allows you to apply or adapt it to any situation.

THE PROBLEM I POSED AND STUDENT RESPONSES

Not wanting students to add terms, I allowed use of their Nspire handheld calculators and asked a question that couldn’t feasibly be solved without technological assistance.

The first two terms of a sequence are $t_1=3$ and $t_2=6$.  Another term farther down the sequence is $t_k=25165824$.

A)  If the sequence is arithmetic, what is k?

B)  Compute $\sum_{n=1}^{k}t_n$ where $t_n$ is the arithmetic sequence defined above, and k is the number you computed in part A.

Part A was easy.  They quickly recognized the terms were multiples of 3, so $t_k=25165824=3\cdot k$, or $k=8388608$.

For Part B, I expected students to use the Gaussian approach to summing long arithmetic series that we had explored/discovered the day before.   For arithmetic series, rearrange the terms in pairs:  the first with last, the second with next-to-last, the third with next-to-next-to-last, etc..  Each such pair will have a constant sum, so the sum of any arithmetic series can be computed by multiplying that constant sum by the number of pairs.
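Gauss's pairing approach is a one-line computation.  A sketch (the function name is my own), using the numbers from the class problem:

```python
def gauss_sum(first, last, n_terms):
    """Sum of an arithmetic series via Gauss's pairing: each first-with-last
    style pair has the same sum, and there are n_terms / 2 such pairs."""
    return (first + last) * n_terms // 2

# The class problem: 3 + 6 + ... + 25165824, an arithmetic series of 8,388,608 terms.
total = gauss_sum(3, 25165824, 8388608)
print(total)  # 105553128849408
```

The integer division is safe here: for an integer arithmetic series with an odd number of terms, `first + last` is twice the middle term and hence even.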

Unfortunately, I think I led my students astray by phrasing part B in summation notation.  They were working in pairs and (unexpectedly for me) every partnership tried to answer part B by entering $\sum_{n=1}^{8388608}(3n)$ into their calculators.  All became frustrated when their calculators appeared to freeze.  That’s when the fun began.

Multiple groups began reporting identical calculator “freezes”; it took me a few moments to realize what was happening.  That’s when I reminded students what I say at the start of every course:  Their graphing calculator will become their best, most loyal, hardworking, non-judgmental mathematical friend, but they should have some concept of what they are asking it to do.  Whatever you ask, the calculator will diligently attempt to answer until it finds a solution or runs out of energy, no matter how long that takes.  In this case, the students had asked their calculators to compute the values of 8,388,608 terms and add them all up.  The machines hadn’t frozen; they were diligently computing and adding 8+ million terms, just as requested.  Nice calculator friends!

A few “Oh”s sounded around the room as they recognized the enormity of the task they had absentmindedly asked of their machines.  When I asked if there was another way to get the answer, most remembered what I had hoped they’d use in the first place.  Using a partner’s machine, they used Gauss’s approach to find $\sum_{n=1}^{8388608}(3n)=(3+25165824)\cdot (8388608/2)=105553128849408$ in an imperceptible fraction of a second.  Nice connections happened when, minutes later, the hard-working Nspires returned the same 15-digit result by the computationally painful approach.  My question phrasing hadn’t eliminated the term-by-term addition I’d hoped to avoid, but I did unintentionally create reinforcement of a concept.  Better yet, I got an idea for a data analysis lab.

LINEAR TIME

They had some fundamental understanding that their calculators were “fast”, but couldn’t quantify what “fast” meant.  The question I posed them the next day was to compute $\sum_{n=1}^k(3n)$ for various values of k, record the amount of time it took for the Nspire to return a solution, determine any pattern, and make predictions.
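The same experiment can be replicated on a computer.  A sketch of the brute-force timing in Python (the function name is mine; on a modern computer the times will be far smaller than on a handheld, but the linear growth should be just as visible):

```python
import time

def time_brute_force_sum(k):
    """Evaluate sum_{n=1}^{k} 3n term by term, as the Nspire did,
    and report how long the computation took."""
    start = time.perf_counter()
    total = sum(3 * n for n in range(1, k + 1))
    elapsed = time.perf_counter() - start
    return total, elapsed

for k in (10**5, 10**6, 10**7):
    total, secs = time_brute_force_sum(k)
    print(f"k = {k:>10,}: {secs:.4f} s")
```

Doubling k should roughly double the elapsed time, which is the pattern the students went looking for.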

Recognizing the machine’s speed, one group said “K needs to be a large number, otherwise the calculator would be done before you even started to time.”  Here’s their data.

They graphed the first 5 values on a second Nspire and used the results to estimate how long it would take their first machine to compute the even more monumental task of adding up the first 50 million terms of the series–a task they had set their “loyal mathematical friend” to computing while they calculated their estimate.

Some claimed to be initially surprised that the data was so linear.  With some additional thought, they realized that every time k increased by 1, the Nspire had to do 2 additional computations:  one multiplication and one addition–a perfectly linear pattern.  They used a regression to find a quick linear model and checked residuals to make sure nothing strange was lurking in the background.

The lack of pattern and maximum residual magnitude of about 0.30 seconds over times as long as 390 seconds completely dispelled any remaining doubts of underlying linearity.  Using the linear regression, they estimated their first Nspire would be working for 32 minutes 29 seconds.
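The regression step is easy to replicate.  Since the students' data table isn't reproduced here, the (k, seconds) values below are purely illustrative, chosen only to be consistent with the roughly 39-seconds-per-million-terms rate their result implies; the fitting code itself is a standard least-squares sketch:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for y = slope*x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Illustrative data -- NOT the students' actual measurements.
ks = [1e6, 2e6, 4e6, 6e6, 10e6]        # number of terms summed
secs = [39.2, 78.1, 156.3, 233.8, 390.1]  # seconds to compute

slope, intercept = linear_fit(ks, secs)
predicted = slope * 50e6 + intercept   # extrapolate to k = 50,000,000
print(f"{predicted / 60:.1f} minutes")  # about 32.5 minutes for this fake data
```

Extrapolating a linear fit far beyond the data is usually risky, but here the per-term cost argument gives good reason to trust the line.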

They looked at the calculator at 32 minutes, noted that it was still running, and unfortunately were briefly distracted.  When they looked back at 32 minutes, 48 seconds, the calculator had stopped.  It wasn’t worth it to them to re-run the experiment.  They were VERY IMPRESSED that even with the error, their estimate was off by just 19 seconds (arguably up to 29 seconds off if the machine had stopped running right after their 32-minute observation).