Measuring Calculator Speed

Two weeks ago, my summer school Algebra 2 students were exploring sequences and series.  A problem I thought would be a routine check on students’ ability to compute the sum of a finite arithmetic series morphed into an experimental measure of the computational speed of the TI-Nspire CX handheld calculator.  This experiment can be replicated on any calculator that can compute sums of arithmetic series.

PHILOSOPHY

Teaching this topic in prior years, I’ve found that students sometimes compute series sums by literally adding all of the individual sequence terms.  Some former students have added more than 50 terms, in sequence order, to find a sum.  That’s a valid, but computationally painful, approach.  I wanted my students to practice less brute-force series manipulations.  Despite my intentions, we ended up measuring brute force anyway!

Readers of this ‘blog hopefully know that I’m not at all a fan of memorizing formulas.  One of my class mantras is

“Memorize as little as possible.  Use what you know as broadly as possible.”

Formulas can be mis-remembered and typically apply only in very particular scenarios.  Learning WHY a procedure works allows you to apply or adapt it to any situation.

THE PROBLEM I POSED AND STUDENT RESPONSES

Not wanting students to add terms, I allowed use of their Nspire handheld calculators and asked a question that couldn’t feasibly be solved without technological assistance.

The first two terms of a sequence are t_1=3 and t_2=6.  Another term farther down the sequence is t_k=25165824.

A)  If the sequence is arithmetic, what is k?

B)  Compute \sum_{n=1}^{k}t_n where t_n is the arithmetic sequence defined above, and k is the number you computed in part A.

Part A was easy.  They quickly recognized the terms were multiples of 3, so t_k=25165824=3\cdot k, or k=8388608.

For Part B, I expected students to use the Gaussian approach to summing long arithmetic series that we had explored/discovered the day before.  For arithmetic series, rearrange the terms in pairs:  the first with the last, the second with the next-to-last, the third with the next-to-next-to-last, etc.  Each such pair will have a constant sum, so the sum of any arithmetic series can be computed by multiplying that constant sum by the number of pairs.

Unfortunately, I think I led my students astray by phrasing part B in summation notation.  They were working in pairs and (unexpectedly for me) every partnership tried to answer part B by entering \sum_{n=1}^{8388608}(3n) into their calculators.  All became frustrated when their calculators appeared to freeze.  That’s when the fun began.

Multiple groups began reporting identical calculator “freezes”; it took me a few moments to realize what was happening.  That’s when I reminded students what I say at the start of every course:  their graphing calculator will become their best, most loyal, hardworking, non-judgmental mathematical friend, but they should have some concept of what they are asking it to do.  Whatever you ask, the calculator will diligently attempt to answer until it finds a solution or runs out of energy, no matter how long it takes.  In this case, the students had asked their calculators to compute the values of 8,388,608 terms and add them all up.  The machines hadn’t frozen; they were diligently computing and adding 8+ million terms, just as requested.  Nice calculator friends!

A few “Oh”s sounded around the room as they recognized the enormity of the task they had absentmindedly asked of their machines.  When I asked if there was another way to get the answer, most remembered what I had hoped they’d use in the first place.  On a partner’s machine, they applied Gauss’s approach to find \sum_{n=1}^{8388608}(3n)=(3+25165824)\cdot (8388608/2)=105553128849408 in an imperceptible fraction of a second.  Nice connections happened when, minutes later, the hard-working Nspires returned the same 15-digit result by the computationally painful approach.  My question phrasing hadn’t eliminated the term-by-term addition I’d hoped to avoid, but I did unintentionally reinforce a concept.  Better yet, I got an idea for a data analysis lab.
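For anyone who wants to replicate the contrast off-calculator, here is a minimal Python sketch (mine, not anything we ran in class) pitting the brute-force sum against Gaussian pairing.

# Sum the arithmetic series 3 + 6 + ... + 3k two ways.
k = 8388608

# Brute force: compute and add every term, exactly what the Nspires were asked to do.
brute = sum(3 * n for n in range(1, k + 1))

# Gaussian pairing: (first term + last term) * (number of pairs).
gauss = (3 + 3 * k) * (k // 2)

print(brute, gauss)   # both print 105553128849408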

LINEAR TIME

My students had a fundamental sense that their calculators were “fast”, but couldn’t quantify what “fast” meant.  The question I posed them the next day was to compute \sum_{n=1}^k(3n) for various values of k, record the amount of time it took for the Nspire to return a solution, determine any pattern, and make predictions.

Recognizing the machine’s speed, one group said “K needs to be a large number, otherwise the calculator would be done before you even started to time.”  Here’s their data.

[Figure NspireTime1: the group’s recorded values of k and Nspire computation times]

They graphed the first 5 values on a second Nspire and used the results to estimate how long it would take their first machine to compute the even more monumental task of adding up the first 50 million terms of the series–a task they had set their “loyal mathematical friend” to computing while they calculated their estimate.

[Figure NspireTime2: scatterplot of the first five (k, time) data points]

Some claimed to be initially surprised that the data was so linear.  With some additional thought, they realized that every time k increased by 1, the Nspire had to do 2 additional computations:  one multiplication and one addition–a perfectly linear pattern.  They used a regression to find a quick linear model and checked residuals to make sure nothing strange was lurking in the background.

[Figure NspireTime4: linear regression on the timing data with residual plot]

The lack of pattern and maximum residual magnitude of about 0.30 seconds over times as long as 390 seconds completely dispelled any remaining doubts of underlying linearity.  Using the linear regression, they estimated their first Nspire would be working for 32 minutes 29 seconds.

[Figure NspireTime3: the linear model’s time prediction for k = 50,000,000]

They looked at the calculator at 32 minutes, noted that it was still running, and unfortunately were briefly distracted.  When they looked back at 32 minutes, 48 seconds, the calculator had stopped.  It wasn’t worth it to them to re-run the experiment.  They were VERY IMPRESSED that even with the observation gap, their estimate was off by just 19 seconds (arguably up to 29 seconds off if the machine had stopped running right after their 32-minute observation).
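The whole lab translates to any programmable environment.  Below is a hypothetical Python version of what the students did with a stopwatch: time the brute-force sum for several values of k, fit a line, and extrapolate.  A computer’s slope will differ from the Nspire’s, but the linearity should persist.

import time
import numpy as np

def timed_sum(k):
    """Return the seconds needed to brute-force the sum 3 + 6 + ... + 3k."""
    start = time.perf_counter()
    sum(3 * n for n in range(1, k + 1))
    return time.perf_counter() - start

ks = np.array([1_000_000, 2_000_000, 4_000_000, 8_000_000, 16_000_000])
times = np.array([timed_sum(k) for k in ks])

# Least-squares line: seconds ~ slope * k + intercept, then check residuals.
slope, intercept = np.polyfit(ks, times, 1)
print("residuals:", times - (slope * ks + intercept))

# Extrapolate to the monumental 50-million-term sum.
print("predicted seconds for k = 50,000,000:", slope * 50_000_000 + intercept)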

HOW FAST IS YOUR NSPIRE?

The units of the linear regression slope (0.000039) were seconds per unit increase in k.  Taking the reciprocal of the (unrounded) slope gave approximately 25,657 computed and summed terms per second.  As every increase in k required the calculator to multiply the next term number by 3 and add that new term value to the existing sum, each increment of k represented 2 Nspire calculations.  Doubling the last result meant their Nspire was performing about 51,314 calculations per second when summing an arithmetic series.

[Figure NspireTime5: converting the regression slope into a computation rate]
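The back-of-envelope arithmetic is easy to reproduce.  Note that the slope shown in the regression output is rounded; the class’s 25,657 evidently came from the unrounded slope, since the rounded 0.000039 gives a slightly different rate:

slope = 0.000039                 # seconds per unit increase in k (rounded)
terms_per_sec = 1 / slope        # ~25,641 with the rounded slope
print(round(terms_per_sec), round(2 * terms_per_sec))   # 25641 terms/s, 51282 calculations/s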

My students were impressed by the speed, the lurking linear function, and their ability to predict computation times within seconds for very long arithmetic series calculations.

Not a bad diversion from unexpected student work, I thought.

Infinite Ways to an Infinite Geometric Sum

One of my students, K, and I were reviewing Taylor Series last Friday when she asked for a reminder why an infinite geometric series summed to \displaystyle \frac{g}{1-r} for first term g and common ratio r when \left| r \right| < 1.  I was glad she was dissatisfied with blind use of a formula and dove into a familiar (to me) derivation.  In the end, she shook me free from my routine just as she made sure she didn’t fall into her own.

STANDARD INFINITE GEOMETRIC SUM DERIVATION

My standard explanation starts with a generic infinite geometric series.

S = g+g\cdot r+g\cdot r^2+g\cdot r^3+...  (1)

We can reason this series converges iff \left| r \right| <1 (see Footnote 1 for an explanation).  Assume this is true for (1).  Notice the terms on the right keep multiplying by r.

The annoying part of summing any infinite series is the ellipsis (…).  Any finite number of terms always has a finite sum, but that simply written, yet vague, ellipsis is logically difficult.  In the geometric series case, we might be able to handle the ellipsis by aligning terms with those of a similar series.  You can accomplish this by continuing the pattern on the right:  multiplying both sides by r.

r\cdot S = r\cdot \left( g+g\cdot r+g\cdot r^2+... \right)

r\cdot S = g\cdot r+g\cdot r^2+g\cdot r^3+...  (2)

This seems to make the right side of (2) identical to the right side of (1) except for the leading g term of (1), but the ellipsis requires some careful treatment. Footnote 2 explains how the ellipses of (1) and (2) are identical.  After that is established, subtracting (2) from (1), factoring, and rearranging some terms leads to the infinite geometric sum formula.

(1)-(2) = S-S\cdot r = S\cdot (1-r)=g

\displaystyle S=\frac{g}{1-r}
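A quick numeric sanity check of the result, a Python sketch with an arbitrary g and |r|<1:

g, r = 5.0, 0.8                              # any first term and ratio with |r| < 1
partial = sum(g * r**n for n in range(200))  # 200 terms is plenty at this ratio
print(partial, g / (1 - r))                  # both print ~25.0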

STUDENT PREFERENCES

I despise giving any formula to any of my classes without at least exploring its genesis.  I also allow my students to use any legitimate mathematics to solve problems so long as reasoning is justified.

In my experience, about half of my students opt for a formulaic approach to infinite geometric sums while an equal number prefer the quick “multiply-by-r-and-subtract” approach used to derive the summation formula.  For many, apparently, the dynamic manipulation is more meaningful than a static rule.  It’s very cool to watch student preferences at play.

K’s VARIATION

K understood the proof, and then asked a question I hadn’t thought to ask.  Why did we have to multiply by r?  Could multiplication by r^2 also determine the summation formula?

I had three nearly simultaneous thoughts followed quickly by a fourth.  First, why hadn’t I ever thought to ask that?  Second, geometric series for \left| r \right|<1 are absolutely convergent, so K’s suggestion should work.  Third, while the formula would initially look different, absolute convergence guaranteed that whatever the “r^2 formula” looked like, it had to be algebraically equivalent to the standard form.  While I considered those conscious questions, my math subconscious quickly saw the easy resolution to K’s question and the equivalence from Thought #3.

Multiplying (1) by r^2 gives

r^2 \cdot S = g\cdot r^2 + g\cdot r^3 + ... (3)

and the ellipses of (1) and (3) partner perfectly (Footnote 2), so K subtracted, factored, and simplified to get the inevitable result.

(1)-(3) = S-S\cdot r^2 = g+g\cdot r

S\cdot \left( 1-r^2 \right) = g\cdot (1+r)

\displaystyle S=\frac{g\cdot (1+r)}{1-r^2} = \frac{g\cdot (1+r)}{(1+r)(1-r)} = \frac{g}{1-r}

That was cool, but this success meant that there were surely many more options.

EXTENDING

Why stop at multiplying by r or r^2?  Why not multiply both sides of (1) by a generic r^N for any natural number N?   That would give

r^N \cdot S = g\cdot r^N + g\cdot r^{N+1} + ... (4)

where the ellipses of (1) and (4) are again identical by the method of Footnote 2.  Subtracting (4) from (1) gives

(1)-(4) = S-S\cdot r^N = g+g\cdot r + g\cdot r^2+...+ g\cdot r^{N-1}

S\cdot \left( 1-r^N \right) = g\cdot \left( 1+r+r^2+...+r^{N-1} \right)  (5)

There are two ways to proceed from (5).  You could recognize the right side as a finite geometric sum with first term 1 and ratio r.  Substituting that formula and dividing by \left( 1-r^N \right) would give the general result.

Alternatively, I could see students exploring \left( 1-r^N \right), and discovering by hand or by CAS that (1-r) is always a factor.  I got the following TI-Nspire CAS result in about 10-15 seconds, clearly suggesting that

1-r^N = (1-r)\left( 1+r+r^2+...+r^{N-1} \right).  (6)

[Figure geometric1: TI-Nspire CAS factoring 1-r^N for several values of N]

Mathematical induction or a careful polynomial expansion of (6) would prove the pattern suggested by the CAS.  From there, dividing both sides of (5) by \left( 1-r^N \right) gives the generic result.

\displaystyle S = \frac{g\cdot \left( 1+r+r^2+...+r^{N-1} \right)}{\left( 1-r^N \right)}

\displaystyle S = \frac{g\cdot \left( 1+r+r^2+...+r^{N-1} \right) }{(1-r) \cdot \left( 1+r+r^2+...+r^{N-1} \right)} = \frac{g}{1-r}
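For anyone without an Nspire handy, SymPy reproduces the factoring discovery behind (6); a Python sketch using polynomial quotient and remainder:

from sympy import symbols, quo, rem

r = symbols('r')
for N in range(2, 8):
    # Quotient is 1 + r + ... + r^(N-1) and remainder is 0,
    # confirming (1 - r) always divides (1 - r^N).
    print(N, quo(1 - r**N, 1 - r, r), rem(1 - r**N, 1 - r, r))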

In the end, K helped me see there wasn’t just my stock approach to an infinite geometric sum, but really an infinite number of parallel ways.  Nice.

FOOTNOTES

1) RESTRICTING r:  Obviously an infinite geometric series diverges for \left| r \right| >1 because that would make \left| g\cdot r^n \right| \rightarrow \infty as n\rightarrow \infty (for g \ne 0), and adding terms of ever-growing magnitude (positive or negative) to any sum ruins any chance of finding a sum.

For r=1, the sum converges iff g=0 (a rather boring series). If g \ne 0 , you get a sum of an infinite number of some nonzero quantity, and that is always infinite, no matter how small or large the nonzero quantity.

The last case, r=-1, is more subtle.  For g \ne 0, the terms of this series alternate between positive and negative g, making the partial sums of the series add to either g or 0, depending on whether you have summed an even or an odd number of terms.  Since the partial sums alternate, the overall sum is divergent.  Remember that series sums and limits are functions; without a single numeric output at a particular point, the function value at that point is considered to be non-existent.

2) NOT ALL INFINITIES ARE THE SAME:  There are two ways to show two groups are the same size.  The obvious way is to count the elements in each group and find out there is the same number of elements in each, but this works only if you have a finite group size.  Alternatively, you could a) match every element in group 1 with a unique element from group 2, and b) match every element in group 2 with a unique element from group 1.  It is important to do both steps here to show that there are no left-over, unpaired elements in either group.

So do the ellipses in (1) and (2) represent the same sets?  As the ellipses represent sets with an infinite number of elements, the first comparison technique is irrelevant.  For the second approach using pairing, we need to compare individual elements.  For every element in the ellipsis of (1), obviously there is a “partner” in (2), as the multiplication of (1) by r visually shifts all of the terms of the series right one position, creating the necessary matches.

Students often are troubled by the second matching as it appears the ellipsis in (2) contains an “extra term” from the right shift.  But, for every specific term you identify in (2), its identical twin exists in (1).  In the weirdness of infinity, that “extra term” appears to have been absorbed without changing the “size” of the infinity.

Since there is a 1:1 mapping of all elements in the ellipses of (1) and (2), you can conclude they are identical, and their difference is zero.

The Value of Counter-Intuition

Numberphile caused quite a stir when it posted a video explaining why

\displaystyle 1+2+3+4+...=- \frac{1}{12}

Doug Kuhlman recently posted a great follow-up Numberphile video explaining a broader perspective behind this sum.

It’s a great reminder that there are often different ways of thinking about problems, and sometimes we have to abandon tradition to discover deeper, more elegant connections.

For those deeply bothered by this summation result, the second video contains a lovely analogy to the “reality” of \sqrt{-1} .  From one perspective, it is absolutely not acceptable to take square roots of negative numbers.  But by finding a way to conceptualize what such a thing would mean, we gain a far richer understanding of the very real numbers that forbade \sqrt{-1}  in the first place, as well as opening the doors to stunning mathematics far beyond the limitations of real numbers.

On the face of it, \displaystyle 1+2+3+...=-\frac{1}{12} is obviously wrong, but only within the context of real numbers.  The strange thing in physics and the Zeta function and other places is that \displaystyle -\frac{1}{12} just happens to work … every time.  Dismissing this out of hand gives our students the wrong idea about mathematics, discovery, and learning.

There’s very clearly SOMETHING going on here.  It’s time to explore and learn something deeper.  And until then, we can revel in the awe of manipulations that logically shouldn’t work, but somehow they do.

May all of our students feel the awe of mathematical and scientific discovery.  And until the connections and understanding are firmly established, I hope we all can embrace the spirit, boldness, and fearlessness of Euler.

Base-x Numbers and Infinite Series

In my previous post, I explored what happened when you converted a polynomial from its variable form into a base-x numerical form.  That is, what are the computational implications when polynomial 3x^3-11x^2+2 is represented by the base-x number 3(-11)02_x, where the parentheses are used to hold the base-x digit, -11, for the second power of x?  

So far, I’ve explored only the Natural number equivalents of base-x numbers.  In this post, I explore what happens when you allow division to extend base-x numbers into their Rational number counterparts.

Level 5–Infinite Series: 

Numbers can have decimals, so what’s the equivalent for base-x numbers?  For starters, I considered trying to get a “decimal” form of \displaystyle \frac{1}{x+2}.  It was “obvious” to me that 12_x won’t divide into 1_x.  There are too few “places”, so some form of decimals is required.  Employing division as described in my previous post, somewhat like you would to determine the repeating decimal form of \frac{1}{12}, gives

[Figure Base6: base-x long division of 1_x by 12_x, producing 0.1(-2)4(-8)..._x]

Remember, the places are powers of x, so the decimal portion of \displaystyle \frac{1}{x+2} is 0.1(-2)4(-8)..._x, and it is equivalent to

\displaystyle 1x^{-1}-2x^{-2}+4x^{-3}-8x^{-4}+...=\frac{1}{x}-\frac{2}{x^2}+\frac{4}{x^3}-\frac{8}{x^4}+....

This can be seen as a geometric series with first term \displaystyle \frac{1}{x} and ratio \displaystyle r=\frac{-2}{x}.  Its infinite sum is therefore \displaystyle \frac{\frac{1}{x}}{1-\frac{-2}{x}}, which is equivalent to \displaystyle \frac{1}{x+2}, confirming the division computation.  Of course, as a geometric series, this is true only so long as \displaystyle |r|=\left | \frac{-2}{x} \right |<1, or 2<|x|.

I thought this was pretty cool, and it led to lots of other cool series.  For example, if x=8, you get \frac{1}{10}=\frac{1}{8}-\frac{2}{64}+\frac{4}{512}-....

Likewise, x=3 gives \frac{1}{5}=\frac{1}{3}-\frac{2}{9}+\frac{4}{27}-\frac{8}{81}+....
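These specializations are easy to check numerically; here is a minimal Python sketch (the function name is mine):

def base_x_decimal(x, n_terms):
    """Partial sum of 1/x - 2/x^2 + 4/x^3 - ..., the base-x 'decimal' for 1/(x+2)."""
    return sum((-2) ** k / x ** (k + 1) for k in range(n_terms))

print(base_x_decimal(8, 30), 1 / 10)   # both ~0.1
print(base_x_decimal(3, 30), 1 / 5)    # both ~0.2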

I found it quite interesting to have a “polynomial” defined with a rational expression.

Boundary Convergence:

As shown above, \displaystyle \frac{1}{x+2}=\frac{1}{x}-\frac{2}{x^2}+\frac{4}{x^3}-\frac{8}{x^4}+... only for |x|>2.  

At x=2, the series is obviously divergent, \displaystyle \frac{1}{4} \ne \frac{1}{2}-\frac{2}{4}+\frac{4}{8}-\frac{8}{16}+....

For x=-2, I got \displaystyle \frac{1}{0} = \frac{1}{-2}-\frac{2}{4}+\frac{4}{-8}-\frac{8}{16}+...=-\frac{1}{2}-\frac{1}{2}-\frac{1}{2}-\frac{1}{2}-... which is properly equivalent to -\infty as x \rightarrow -2 as defined by the convergence domain and the graphical behavior of \displaystyle y=\frac{1}{x+2} just to the left of x=-2.  Nice.

[Figure Base7: graph of y=\frac{1}{x+2} with the base-x series approximation]

I did find it curious, though, that \displaystyle \frac{1}{x}-\frac{2}{x^2}+\frac{4}{x^3}-\frac{8}{x^4}+... is a solid approximation for \displaystyle \frac{1}{x+2} to the left of its vertical asymptote, but not for its rotationally symmetric right side.  I also thought it philosophically strange (even though I understand mathematically why it must be) that this series could approximate function behavior near a vertical asymptote, but not near the graph’s stable and flat portion near x=0.  What a curious, asymmetrical approximator.  

Maclaurin Series:

Some quick calculus gives the Maclaurin series for \displaystyle \frac{1}{x+2} :  \displaystyle \frac{1}{2}-\frac{x}{4}+\frac{x^2}{8}-\frac{x^3}{16}+..., a geometric series with first term \frac{1}{2} and ratio \frac{-x}{2}.  Interestingly, the ratio emerging from the Maclaurin series is the reciprocal of the ratio from the “rational polynomial” resulting from the base-x division above.  

As a geometric series, the interval of convergence is  \displaystyle |r|=\left | \frac{-x}{2} \right |<1, or |x|<2.  Excluding endpoint results, the Maclaurin interval is the complete Real number complement to the base-x series.  For the endpoints, x=-2 produces a divergence to + \infty matching the right side of the vertical asymptote, just as x=-2 captured the divergence to -\infty on the left side in the base-x series.  Again, x=2 is divergent.

It’s lovely how these two series so completely complement each other to create clean approximations of \displaystyle \frac{1}{x+2} for all x \ne 2.
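A short numeric Python sketch (names mine) makes the complementary intervals visible: the base-x series converges for |x|>2, the Maclaurin series for |x|<2, and each fails where the other succeeds.

def base_x_series(x, n):        # converges for |x| > 2
    return sum((-2) ** k / x ** (k + 1) for k in range(n))

def maclaurin_series(x, n):     # converges for |x| < 2
    return sum((-x / 2) ** k / 2 for k in range(n))

print(base_x_series(4.0, 50), maclaurin_series(4.0, 50), 1 / (4 + 2))   # only the first is ~0.1667
print(base_x_series(1.0, 50), maclaurin_series(1.0, 50), 1 / (1 + 2))   # only the second is ~0.3333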

Other base-x “rational numbers”

Because any polynomial divided by another is absolutely equivalent to a base-x rational number and thereby a base-x decimal number, it will always be possible to create a “rational polynomial” using powers of \displaystyle \frac{1}{x} for non-zero denominators.  But, the decimal patterns of rational base-x numbers don’t apply in the same way as for Natural number bases.  Where \displaystyle \frac{1}{12} is guaranteed to have a repeating decimal pattern, the decimal form of \displaystyle \frac{1}{x+2}=\frac{1_x}{12_x}=0.1(-2)4(-8)..._x clearly will not repeat.  I’ve not explored the full potential of this, but it seems like another interesting field.  

CONCLUSIONS and QUESTIONS

Once number bases are understood, I’d argue that using base-x multiplication might be, and base-x division definitely is, a cleaner way to compute products and quotients, respectively, for polynomials.  

The base-x division algorithm clearly is accessible to Algebra II students, and even opens the doors to studying series approximations to functions long before calculus.

Is there a convenient way to use base-x numbers to represent horizontal translations as cleanly as polynomials?  How difficult would it be to work with a base-(x-h) number for a polynomial translated h units horizontally?

As a calculus extension, what would happen if you tried employing division of non-polynomials by replacing them with their Taylor series equivalents?  I’ve played a little with proving some trig identities using base-x polynomials from the Maclaurin series for sine and cosine.

What would happen if you tried to compute repeated fractions in base-x?  

It’s an open question from my perspective when decimal patterns might terminate or repeat when evaluating base-x rational numbers.  

I’d love to see someone out there give some of these questions a run!

Number Bases and Polynomials

About a month ago, I was working with our 5th grade math teacher to develop some extension activities for some students in an unleveled class.  The class was exploring place value, and I suggested that some might be ready to explore what happens when you allow the number base to be something other than 10.  A few students had some fun learning to use their basic four algorithms in other number bases, but I made an even deeper connection.

When writing something like 512 in expanded form (5\cdot 10^2+1\cdot 10^1+2\cdot 10^0), I realized that if the 10 was an x, I’d have a polynomial.  I’d recognized this before, but this time I wondered what would happen if I applied basic math algorithms to polynomials if I wrote them in a condensed numerical form, not their standard expanded form.  That is, could I do basic algebra on 5x^2+x+2 if I thought of it as 512_x–a base-x “number”?  (To avoid other confusion later, I read this as “five one two base-x“.)
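In code, a base-x number is just its digit (coefficient) list, and evaluating it at a particular base is Horner’s rule; a Python sketch:

def base_x_value(digits, x):
    """Evaluate a base-x digit list (highest place first) at a specific base x -- Horner's rule."""
    value = 0
    for d in digits:
        value = value * x + d
    return value

print(base_x_value([5, 1, 2], 10))   # 512: the familiar base-10 reading
print(base_x_value([5, 1, 2], 7))    # 5*49 + 1*7 + 2 = 254: the same digits read in base 7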

Following are some examples I played with to convince myself how my new notation would work.  I’m not convinced that this will ever lead to anything, but following my “what ifs” all the way to infinite series was a blast.  Read on!

Level 1–Basic Addition:

If I wanted to add (3x+5)+(2x^2+4x+1), I could think of it as 35_x+241_x and add the numbers “normally” to get 276_x or 2x^2+7x+6.  Notice that each power of x identifies a “place value” for its characteristic coefficient.

If I wanted to add 3x-7 to itself, I had to adapt my notation a touch.  The “units digit” is a negative number, but since the number base, x, is unknown (or variable), I ended up saying 3x-7=3(-7)_x.  The parentheses are used to contain multiple characters into a single place value.  Then, (3x-7)+(3x-7) becomes 3(-7)_x+3(-7)_x=6(-14)_x or 6x-14.  Notice the expanding parentheses containing the base-x units digit.

Level 2–Advanced Addition:

The last example also showed me that simple multiplication would work.  Adding 3x-7 to itself is equivalent to multiplying 2\cdot (3x-7).  In base-x, that is 2\cdot 3(-7)_x.  That’s easy!  Arguably, this might be even easier than doubling a number when the number base is known.  Without interactions between the coefficients of different place values, just double each digit to get 6(-14)_x=6x-14, as before.

What about (x^2+7)+(8x-9)?  That’s equivalent to 107_x+8(-9)_x.  While simple, I’ll solve this one by stacking.

[Figure Base1: stacked addition of 107_x and 8(-9)_x]

and this is x^2+8x-2.  As with base-10 numbers, the use of 0 is needed to hold place values exactly as I needed a 0 to hold the x^1 place for x^2+7. Again, this could easily be accomplished without the number base conversion, but how much more can we push these boundaries?
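Before moving on, here is a minimal Python sketch of base-x addition: digit by digit, with no carrying, since the base is unknown.

def add_base_x(a, b):
    """Add base-x numbers given as digit lists, highest place first; no carrying."""
    n = max(len(a), len(b))
    a = [0] * (n - len(a)) + list(a)   # left-pad so place values line up
    b = [0] * (n - len(b)) + list(b)
    return [da + db for da, db in zip(a, b)]

# (x^2 + 7) + (8x - 9): 107_x + 8(-9)_x = 18(-2)_x = x^2 + 8x - 2
print(add_base_x([1, 0, 7], [8, -9]))   # [1, 8, -2]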

Level 3–Multiplication & Powers:

Compute (8x-3)^2.  Stacking again and using a modification of the multiply-and-carry algorithm I learned in grade school, I got

[Figure Base2: stacked multiplication of 8(-3)_x by 8(-3)_x]

and this is equivalent to 64x^2-48x+9.

All other forms of polynomial multiplication work just fine, too.
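The stacked multiply-without-carry algorithm is exactly a convolution of the digit lists; a Python sketch:

def mul_base_x(a, b):
    """Multiply base-x digit lists (highest place first): a convolution, no carrying."""
    out = [0] * (len(a) + len(b) - 1)
    for i, da in enumerate(a):
        for j, db in enumerate(b):
            out[i + j] += da * db
    return out

# (8x - 3)^2 = 64x^2 - 48x + 9
print(mul_base_x([8, -3], [8, -3]))   # [64, -48, 9]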

From one perspective, all of this shifting to a variable number base could be seen as completely unnecessary.  We already have acceptably working algorithms for addition, subtraction, and multiplication.  But then, I really like how this approach completes the connection between numerical and polynomial arithmetic.  The rules of math don’t change just because you introduce variables.  For some, I’m convinced this might make a big difference in understanding.

I also like how easily this extends polynomial by polynomial multiplication far beyond the bland monomial and binomial products that proliferate in virtually all modern textbooks.  Also banished here is any need at all for banal FOIL techniques.

Level 4–Division:

What about x^2+x-6 divided by x+3? In base-x, that’s 11(-6)_x \div 13_x. Remembering that there is no place value carrying possible, I had to be a little careful when setting up my computation. Focusing only on the lead digits, 1 “goes into” 1 one time.  Multiplying the partial quotient by the divisor, writing the result below and subtracting gives

[Figure Base3: first step of the stacked division of 11(-6)_x by 13_x]

Then, 1 “goes into” -2 negative two times.  Multiplying and subtracting gives a remainder of 0.

[Figure Base4: the completed division, with remainder 0]

thereby confirming that x+3 is a factor of x^2+x-6, and the other factor is the quotient, x-2.

Perhaps this could be used as an alternative to other polynomial division algorithms.  It is somewhat similar to the synthetic division technique, but without that technique’s significant limitations:  it is not restricted to linear divisors with lead coefficients of one.

For (4x^3-5x^2+7) \div (2x^2-1), think 4(-5)07_x \div 20(-1)_x.  Stacking and dividing gives

[Figure Base5: stacked division of 4(-5)07_x by 20(-1)_x]

So \displaystyle \frac{4x^3-5x^2+7}{2x^2-1}=2x-2.5+\frac{2x+4.5}{2x^2-1}.
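Here is a Python sketch of the same division algorithm on digit lists; fractional “digits” like the -2.5 above fall out naturally:

def div_base_x(num, den):
    """Long-divide base-x digit lists (highest place first); returns (quotient, remainder)."""
    rem = list(num)
    q_len = len(num) - len(den) + 1
    quotient = []
    for i in range(q_len):
        d = rem[i] / den[0]            # next quotient digit, from the lead digits
        quotient.append(d)
        for j, dd in enumerate(den):   # subtract d * divisor, aligned at place i
            rem[i + j] -= d * dd
    return quotient, rem[q_len:]

print(div_base_x([1, 1, -6], [1, 3]))        # ([1.0, -2.0], [0.0]): quotient x - 2, remainder 0
print(div_base_x([4, -5, 0, 7], [2, 0, -1])) # ([2.0, -2.5], [2.0, 4.5]): 2x - 2.5, remainder 2x + 4.5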

CONCLUSION

From all I’ve been able to tell, converting polynomials to their base-x number equivalents enables you to perform all of the same arithmetic computations.  For division in particular, it seems this method might even be a bit easier.

In my next post, I push the exploration of these base-x numbers into infinite series.

Fun with Series

Two days ago, one of my students (P) wandered into my room after school to share a problem he had encountered at the 2013 Walton MathFest, but didn’t know how to crack.  We found one solution.  I’d love to hear if anyone discovers a different approach.  Here’s our answer.

PROBLEM:  What is the sum of \displaystyle \sum_{n=1}^{\infty} \left( \frac{n^2}{2^n} \right) = \frac{1^2}{2^1} + \frac{2^2}{2^2} + \frac{3^2}{2^3} + ... ?

Without the n^2, this would be a simple geometric series, but the quadratic and exponential terms can’t be combined in any way we knew, so the solution must require rewriting.  After some thought, we remembered that perfect squares can be found by adding odd integers.  I suggested rewriting the series as

\displaystyle \begin{matrix} \frac{1}{2} & +\frac{1}{4} & +\frac{1}{8} & +\frac{1}{16} & +\ldots \\ & +\frac{3}{4} & +\frac{3}{8} & +\frac{3}{16} & +\ldots \\ & & +\frac{5}{8} & +\frac{5}{16} & +\ldots \\ & & & +\frac{7}{16} & +\ldots \end{matrix}

where each column adds to one of the terms in the original series.  Each row was now a geometric series which we knew how to sum.  That meant we could rewrite the original series as

\displaystyle \sum_{n=1}^{\infty} \left( \frac{n^2}{2^n} \right) = 1 + \frac{3}{2} + \frac{5}{4} + \frac{7}{8} + \ldots

We had lost the quadratic term, but we still couldn’t sum the series with both a linear and an exponential term.  At this point, P asked if we could use the same approach to rewrite the series again.  Because the numerators were all odd numbers and each could be written as a sum of 1 and some number of 2s, we got

\displaystyle \begin{matrix} 1 & +\frac{1}{2} & +\frac{1}{4} & +\frac{1}{8} & +\ldots \\ & +\frac{2}{2} & +\frac{2}{4} & +\frac{2}{8} & +\ldots \\ & & +\frac{2}{4} & +\frac{2}{8} & +\ldots \\ & & & +\frac{2}{8} & +\ldots \end{matrix}

where each column now added to one of the terms in our secondary series.  Each row was again a geometric series, allowing us to rewrite the secondary series as

\displaystyle \sum_{n=1}^{\infty} \left( \frac{n^2}{2^n} \right) = 2 + \left( 2 + 1 + \frac{1}{2} + \frac{1}{4} + \ldots \right)

Ignoring the first term, this was finally a single geometric series, and we had found the sum.

\displaystyle \sum_{n=1}^{\infty} \left( \frac{n^2}{2^n} \right) = 2 + \frac{2}{1-\frac{1}{2}} = 2 + 4 = 6
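A quick numeric check (a Python sketch) agrees with the sum the double rewriting produced:

# Partial sums of n^2 / 2^n converge rapidly to the series total of 6.
print(sum(n**2 / 2**n for n in range(1, 60)))   # ~6.0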

Does anyone have another way?

That was fun.  Thanks, P.

Series Comfort

I ‘blogged a couple days ago about a way to use statistical regressions to develop Maclaurin Series in a way that precalculus or algebra II students could understand. In short, that approach worked because the graph of y=e^x is differentiable–or locally linear as we describe it in my class.  Following is a student solution to a limit problem that proves, I believe, that students can become quite comfortable with series approximations to functions, even while they are still very early in their calculus understanding.

This year, I’m teaching a course we call Honors Calculus.  It is my school’s prerequisite for AP Calculus BC and covers an accelerated precalculus before teaching differential calculus to a group of mostly juniors and some sophomores.  If we taught on trimesters, the precalculus portion would cover the first two trimesters.  My students explored the regression activity above 2-3 months ago and from that discovered the basic three Maclaurin series.

\displaystyle e^x=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\frac{x^4}{4!}+\ldots
\displaystyle cos(x)=1-\frac{x^2}{2!}+\frac{x^4}{4!}-\frac{x^6}{6!}+\ldots
\displaystyle sin(x)=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}+\ldots

Late last week, we used local linearity to establish L’Hopital’s Rule which I expected my students to invoke when I asked them on a quiz two days ago to evaluate \displaystyle\lim_{x\to 0}\frac{x\cdot sin(x)}{1-cos(x)}.  One student surprised me with his solution.

He didn’t recall L’Hopital’s, but he did remember his series exploration from January.

Had he not made a subtraction error in the denominator, he would have evaluated the limit this way.

\displaystyle\lim_{x\to 0}\frac{x\cdot sin(x)}{1-cos(x)}=\lim_{x\to 0}\frac{x\cdot (x-\frac{x^3}{3!}+\frac{x^5}{5!}-\ldots)}{1-(1-\frac{x^2}{2!}+\frac{x^4}{4!}-\ldots)}

\displaystyle=\lim_{x\to 0}\frac{x^2-\frac{x^4}{3!}+\frac{x^6}{5!}-\ldots}{\frac{x^2}{2!}-\frac{x^4}{4!}+\ldots}

And dividing the numerator and denominator by x^2 leads to the final step.

\displaystyle=\lim_{x\to 0}\frac{1-\frac{x^2}{3!}+\frac{x^4}{5!}-\ldots}{\frac{1}{2!}-\frac{x^2}{4!}+\ldots}

=\displaystyle\frac{1-0+0-\ldots}{\frac{1}{2}-0+0-\ldots}=2
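SymPy confirms both the limit and the student’s series route; a Python sketch:

from sympy import symbols, sin, cos, limit, series

x = symbols('x')
print(limit(x * sin(x) / (1 - cos(x)), x, 0))   # 2

# The series route: leading terms x^2 and x^2/2, whose ratio approaches 2.
print(series(x * sin(x), x, 0, 6))    # x**2 - x**4/6 + O(x**6)
print(series(1 - cos(x), x, 0, 6))    # x**2/2 - x**4/24 + O(x**6)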

Despite his sign error, he came very close to a great answer without using L’Hopital’s Rule at all, and showed an understanding of series utility long before most calculus students ever do.