
Exponentials Don’t Stretch

OK, this post’s title is only half true, but transforming exponentials can lead to counter-intuitive results.  This post shares a cool transformations activity using dynamic graphing software–a perfect set-up for a mind-bending algebra or precalculus student lesson in the coming year.  I use Desmos in this post, but this can be reproduced on any graphing software with sliders.

THE SCENARIO

You can vertically stretch any exponential function as much as you want, and the shape of the curve will never change!

But that doesn’t make any sense.  Doesn’t stretching a curve by definition change its curvature?

The answer is no.  Not when exponentials are vertically stretched.  It is an inevitable result of the property that multiplying powers of a common base adds the exponents:

b^a * b^c = b^{a+c}

I set up a Desmos page to explore this property dynamically (shown below).  The base of the exponential doesn’t matter; I pre-set the base of the parent function (line 1) to 2 (in line 2), but feel free to change it.

exp1

From its form, the line 3 orange graph is a vertical stretch of the parent function; you can vary the stretch factor with the line 4 slider.  Likewise, the line 5 black graph is a horizontal translation of the parent, and the translation is controlled by the line 6 slider.  That’s all you need!

Let’s say I wanted to quadruple the height of my function, so I move the a slider to 4.  Now play with the h slider in line 6 to see if you can achieve the same results with a horizontal translation.  By the time you change h to -2, the horizontal translation aligns perfectly with the vertical stretch.  That’s a pretty strange result if you think about it.

exp2

Of course it has to be true because y = 2^{x-(-2)} = 2^x*2^2 = 4*2^x.  Try any positive stretch you like, and you will always be able to find some horizontal translation that gives you the exact same result.

Likewise, you can horizontally slide any exponential function (growth or decay) as much as you like, and there is a single vertical stretch that will produce the same results.
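
If you want a quick numerical confirmation outside of Desmos, here is a minimal Python sketch (my addition, not part of the classroom activity) checking that a vertical stretch of y=2^x by any positive factor a coincides with a horizontal translation by h=-\log_2(a):

import numpy as np

a = 4.0                    # vertical stretch factor; any positive value works
h = -np.log2(a)            # predicted equivalent horizontal translation
x = np.linspace(-3, 3, 25)
stretched = a * 2**x       # vertical stretch of the parent y = 2^x
translated = 2**(x - h)    # horizontal translation of the parent
print(np.max(np.abs(stretched - translated)))   # essentially 0: the curves coincide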

The implications of this are pretty deep.  Because the result of any horizontal translation of any function is a graph congruent to the initial function, AND because every vertical stretch is equivalent to a horizontal translation, then vertically stretching any exponential function produces a graph congruent to the unstretched parent curve.  That is, any vertical stretch of any exponential will never change its curvature!  Graphs make it easier to see and explore this, but it takes algebra to (hopefully) understand this cool exponential property.

NOT AN EXTENSION

My students inevitably ask if the same is true for horizontal stretches and vertical slides of exponentials.  I encourage them to play with the algebra or create another graph to investigate.  Eventually, they discover that horizontal stretches do bend exponentials (actually changing base, i.e., the growth rate), making it impossible for any translation of the parent to be congruent with the result.

ABSOLUTELY AN EXTENSION

But if a property is true for a function, then the inverse of the property generally should be true for the inverse of the function.  In this case, that means the transformation property that did not work for exponentials does work for logarithms!  That is,

Any horizontal stretch of any logarithmic function is congruent to some vertical translation of the original function.  But for logarithms, vertical stretches do morph the curve into a different shape.  Here’s a Desmos page demonstrating the log property.

exp3

The sum property of logarithms proves the existence of this equally strange property:

log(A) + log(x) = log(A*x)
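
In transformation language, the same property shows why: for f(x)=\log_b(x), replacing x with \frac{x}{A} (a horizontal stretch by a factor of A>0) gives

f\left(\frac{x}{A}\right)=\log_b\left(\frac{x}{A}\right)=\log_b(x)-\log_b(A)=f(x)-\log_b(A),

nothing more than a vertical slide of the original graph.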

CONCLUSION

Hopefully the unexpected transformational congruences will spark some nice discussions, while the graphical/algebraic equivalences will reinforce the importance of understanding mathematics more than one way.

Enjoy the strange transformational world of exponential and log functions!

Powers of i

I was discussing integer powers of i in my summer Algebra 2 last month and started with the “standard” modulus-4 pattern I learned as a student and have always taught.  While not particularly insightful, my students and I considered another approach that might prove simpler for some.

TRADITIONAL APPROACH:

I began with the obvious i^0 and i^1 before invoking the definition of i to get i^2.  From these three you can see that every time the power of i increases by 1, you multiply the previous result by i and simplify, if possible, using the first three terms.  The result for i^3 is simple, extending the known results to

i_1

But i^4=i\cdot i^3=i\cdot (-i)=-i^2=-(-1)=1, cycling back to the value initially found with i^0.  Continuing this procedure creates a modulus-4 pattern:

i_2

They noticed that i raised to any multiple of 4 was 1, and other powers were i, -1, or –i, depending on how far removed they were from a multiple of 4.  For an algorithm to compute a simplified form of i raised to an integer power, divide the power by 4, and raise i to the remainder (0, 1, 2, or 3) from that division.
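
For anyone who wants the 4-cycle written as an algorithm, here is a minimal Python sketch of the divide-by-4 idea (my illustration, not something we wrote in class):

def i_power_mod4(n):
    # Simplify i**n using the modulus-4 cycle: i**0=1, i**1=i, i**2=-1, i**3=-i.
    cycle = {0: "1", 1: "i", 2: "-1", 3: "-i"}
    return cycle[n % 4]

print(i_power_mod4(148))   # '1'  because 148 = 4*37
print(i_power_mod4(567))   # '-i' because 567 = 4*141 + 3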

They got the pattern and were ready to move on when one student who had glimpsed this in a math competition at some point noted he could “do it”, but it seemed to him that memorizing the list of 4 base powers was a necessary requirement to invoking the pattern.

Then he recalled a comment I made on the first day of class:  I value memorizing as little mathematics as possible and using the mathematics we do know as widely as possible.  His challenge was clear:  Wasn’t asking students to use this 4-cycle approach just a memorization task in disguise?  If I believed my non-memorization claim, shouldn’t there be another way to achieve our results using nothing more than the definition of i?

A POTENTIAL IMPROVEMENT:

By definition, i = \sqrt{-1}, so it’s a very small logical stretch with inverse operations to claim i^2=-1.

Even Powers:  After trying some different examples, one student had an easy way to handle even powers.  For example, if n=148, she invoked an exponent rule “in reverse” to extract an i^2 term, which she turned into a -1.  Because -1 to any integer power is either 1 or -1, she used the behavior of negative numbers raised to odd and even powers to determine the sign of her answer.

i_3

Because any even power can always be written as the product of 2 and another number, this gave an easy way to handle half of all cases using nothing more than the definition of i and exponents of -1.

A third student pointed out another efficiency.  Because the final result depended only on whether the integer multiplied by 2 was even or odd, only the last two digits of n were relevant.  That pattern also exists in the 4-cycle approach, but it felt more natural here.

Odd Powers:  Even powers were so simple, they were initially frustrated that odd powers didn’t seem to be, too.  Then the student who’d issued the memorization challenge said that any odd power of i was just the product of i and an even power of i.  Invoking the efficiency in the last paragraph for n=567, he found

i_4
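
Here is the students’ even/odd idea in the same spirit, with a built-in cross-check against the 4-cycle table (again my illustration, in Python rather than anything we used in class):

def i_power_even_odd(n):
    # Simplify i**n using only i**2 = -1 and the sign behavior of powers of -1.
    if n % 2 == 0:                        # even: i**n = (i**2)**(n//2) = (-1)**(n//2)
        return "1" if (n // 2) % 2 == 0 else "-1"
    return "i" if ((n - 1) // 2) % 2 == 0 else "-i"   # odd: i**n = i*(-1)**((n-1)//2)

mod4 = {0: "1", 1: "i", 2: "-1", 3: "-i"}
assert all(i_power_even_odd(n) == mod4[n % 4] for n in range(1000))
print(i_power_even_odd(148), i_power_even_odd(567))   # '1' '-i'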

CONCLUSION:

In the end, powers of i had become nothing more complicated than exponent properties and powers of -1.  The students seemed to have greater comfort with finding powers of complex numbers, but I have begun to question why algebra courses have placed so much emphasis on powers of i.

From one perspective, a surprising property of complex numbers for many students is that any operation on complex numbers creates another complex number.  While they are told that the complex numbers form a closed set under these operations, seeing the results simplify so conveniently still surprises many.

Another cool aspect of complex number operations is the stretch-and-rotate graphical property of complex number multiplication.   This is the basis of DeMoivre’s Theorem and explains why there are exactly 4 results when you repeatedly multiply any complex number by i–equivalent to stretching by a factor of 1 and rotating \frac{\pi}{2}.  Multiplying by 1 doesn’t change the magnitude of a number, and after 4 rotations of \frac{\pi}{2}, you are back at the original number.

So, depending on the future goals or needs of your students, there is certainly a reason to explore the 4-cycle nature of repeated multiplication by i.  If the point is just to compute a result, perhaps the 4-cycle approach is unnecessarily “complex”, and the odd/even powers-of-(-1) approach is less computationally intense.  In the end, maybe it’s all about number sense.

My students discovered a more basic algorithm, but I’m more uncomfortable.  Just because we can ask our students a question doesn’t mean we should.  I can see connections from my longer studies, but do they see or care?  In this case, should they?

Best Algebra 2 Lab Ever

This post shares what I think is one of the best, inclusive, data-oriented labs for a second year algebra class.  This single experiment produces linear, quadratic, and exponential (and logarithmic) data from a lab my Algebra 2 students completed this past summer.  In that class, I assigned frequent labs where students gathered real data, determined models to fit that data, and analyzed goodness of the models’ fit to the data.   I believe in the importance of doing so much more than just writing an equation and moving on.

For kicks, I’ll derive an approximation for the coefficient of gravity at the end.

THE LAB:

On the way to school one morning last summer, I grabbed one of my daughters’ “almost fully inflated” kickballs and attached a TI CBR2 to my laptop and gathered (distance, time) data from bouncing the ball under the Motion Sensor.  NOTE:  TI’s CBR2 can connect directly to their Nspire and TI84 families of graphing calculators.  I typically use computer-based Nspire CAS software, so I connected the CBR via my laptop’s USB port.  It’s crazy easy to use.

One student held the CBR2 about 1.5-2 meters above the ground while another held the ball steady about 20 cm below the CBR2 sensor.  When the second student released the ball, a third clicked a button on my laptop to gather the data:  time every 0.05 seconds and height from the ground.  The graphed data is shown below.  In case you don’t have access to a CBR or other data gathering devices, I’ve uploaded my students’ data in this Excel file.

Bounce1

Remember, this data was collected under far-from-ideal conditions.  I picked up a kickball my kids left outside on my way to class.  The sensor was handheld and likely wobbled some, and the ball was dropped on the well-worn carpet of our classroom floor.  It is also likely the ball did not remain perfectly under the sensor the entire time.  Even so, my students created a very pretty graph on their first try.

For further context, we did this lab in the middle of our quadratics unit that was preceded by a unit on linear functions and another on exponential and logarithmic functions.  So what can we learn from the bouncing ball data?

LINEAR 1:  

While it is very unlikely that any of the recorded data points were precisely at maximums, they are close enough to create a nice linear pattern.

As the height of a ball above the ground helps determine the height of its next bounce (height before –> energy on impact –> height after), the eight ordered pairs (max height #n, max height #(n+1) ) from my students’ data are shown below

bounce2

This looks very linear.  Fitting a linear regression and analyzing the residuals gives the following.

bounce3

The data seems to be close to the line, and the residuals are relatively small, about evenly distributed above and below the line, and there is no apparent pattern to their distribution.  This confirms that the regression equation, y=0.673x+0.000233, is a good fit for the (x, y) = (height before bounce, height after bounce) data.

NOTE:  You could reasonably easily gather this data sans any technology.  Have teams of students release a ball from different measured heights while others carefully identify the rebound heights.

The coefficients also have meaning.  The 0.673 suggests that after each bounce, the ball rebounded to 67.3%, or 2/3, of its previous height–not bad for a ball plucked from a driveway that morning.  Also, the y-intercept, 0.000233, is essentially zero, suggesting that a ball released 0 meters from the ground would rebound to basically 0 meters above the ground.  That this isn’t exactly zero is a small measure of error in the experiment.
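
If you want to rehearse this analysis before collecting your own data, here is a rough Python sketch; the heights below are simulated to mimic a ball rebounding to about 2/3 of its previous height, not my students’ actual measurements:

import numpy as np

rng = np.random.default_rng(1)
maxima = [0.97]                                    # simulated successive maximum heights (meters)
for _ in range(8):
    maxima.append(0.67 * maxima[-1] + rng.normal(0, 0.005))
before, after = np.array(maxima[:-1]), np.array(maxima[1:])
slope, intercept = np.polyfit(before, after, 1)    # fit: height_after = slope*height_before + intercept
residuals = after - (slope * before + intercept)
print(round(slope, 3), round(intercept, 4))        # slope ~ rebound percentage (~0.67), intercept ~ 0
print(np.round(residuals, 4))                      # small, patternless residuals indicate a good fit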

EXPONENTIAL:

Using the same idea, consider data of the form (x,y) = (bounce number, bounce height).  The graph of the nine points from my students’ data is:

bounce4

This could be power or exponential data–something you should confirm for yourself–but an exponential regression and its residuals show

bounce5

While something of a pattern seems to exist, the other residual criteria are met, making the exponential regression a reasonably good model: y = 0.972 \cdot (0.676)^x.  That means bounce number 0, the initial release height from which the downward movement on the far left of the initial scatterplot can be seen, is 0.972 meters, and the constant multiplier is about 0.676.  This second number represents the percentage of height maintained from each previous bounce, and is therefore the percentage rebound.  Also note that this is essentially the same value as the slope from the previous linear example, confirming that the ball we used basically maintained slightly more than 2/3 of its height from one bounce to the next.
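
If your software lacks a built-in exponential regression, the same model falls out of a semi-log linear fit.  Here is a hedged Python sketch with hypothetical bounce heights (substitute your class’s measurements):

import numpy as np

heights = np.array([0.97, 0.66, 0.44, 0.30, 0.20, 0.14, 0.09, 0.06, 0.04])  # hypothetical, bounces 0-8
bounce = np.arange(len(heights))
slope, intercept = np.polyfit(bounce, np.log(heights), 1)   # fit ln(y) = ln(a) + x*ln(b)
a, b = np.exp(intercept), np.exp(slope)
print(round(a, 3), round(b, 3))   # a ~ release height, b ~ rebound percentage (about 2/3)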

And you can get logarithms from these data if you use the equation to determine, for example, which bounces exceed 0.2 meters.

bounce12

So, bounces 1-4 satisfy the requirement for exceeding 0.20 meters, as confirmed by the data.
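
For the record, here is the logarithm step behind that claim, using the regression values above (the inequality flips because \ln(0.676)<0):

0.972\cdot (0.676)^x > 0.2 \longrightarrow x\cdot \ln(0.676) > \ln\left(\frac{0.2}{0.972}\right) \longrightarrow x < \frac{\ln(0.2/0.972)}{\ln(0.676)} \approx 4.04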

A second way to invoke logarithms is to reverse the data.  Graphing x=height and y=bounce number will also produce the desired effect.

QUADRATIC:

Each individual bounce looks like an inverted parabola.  If you remember a little physics, the moment after the ball leaves the ground after each bounce, it is essentially in free-fall, a situation defined by quadratic movement if you ignore air resistance–something we can safely assume given the very short duration of each bounce.

I had eight complete bounces I could use, but chose the first to have as many data points as possible to model.  As it was impossible to know whether the lowest point on each end of any data set came from the ball moving up or down, I omitted the first and last point in each set.  Using (x,y) = (time, height of first bounce) data, my students got:

bounce6

What a pretty parabola.  Fitting a quadratic regression (or manually fitting one, if that’s more appropriate for your classes), I get:

bounce7

Again, there’s maybe a slight pattern, but all but two points are well within 0.1% of the model, with roughly half above and half below.  The model, y=-4.84x^2+4.60x-4.24, could be interpreted in terms of the physics formula for an object in free fall, but I’ll postpone that for a moment.

LINEAR 2:

If your second year algebra class has explored common differences, your students could explore second common differences to confirm the quadratic nature of the data.  Other than the first two differences (far right column below), the second common difference of all data points is roughly 0.024.  This raises suspicions that my student’s hand holding the CBR2 may have wiggled during the data collection.

bounce8
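
As an aside on where that value comes from: for any quadratic y=at^2+bt+c sampled at equal time steps \Delta t, the second common difference is the constant 2a(\Delta t)^2.  Using the regression coefficients above,

2a(\Delta t)^2 \approx 2(-4.84)(0.05)^2 \approx -0.024,

which matches the magnitude of the differences in the table.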

Since the second common differences are roughly constant, the original data must have been quadratic, and the first common differences linear. As a small variation for each consecutive pair of (time, height) points, I had my students graph (x,y) = (x midpoint, slope between two points):

bounce10
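
To reproduce that graph from raw (time, height) points, here is a minimal Python sketch; the data are simulated from the fitted parabola above purely for illustration:

import numpy as np

t = np.arange(0, 0.95, 0.05)        # simulated times for one bounce, every 0.05 s
y = -4.84 * t**2 + 4.60 * t         # heights from the quadratic model (illustration only)
mid = (t[:-1] + t[1:]) / 2          # midpoints of consecutive times
slopes = np.diff(y) / np.diff(t)    # slopes between consecutive points
m, b = np.polyfit(mid, slopes, 1)
print(round(m, 2), round(b, 2))     # m ~ -9.68 (twice the leading coefficient), b ~ 4.60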

If you get the common difference discussion, the linearity of this graph is not surprising.  Despite those conversations, most of my students seemed completely surprised by this pattern emerging from the quadratic data.  I guess they didn’t really “get” what common differences–or the closely related slope–meant until this point.

bounce11

Other than the first three points, the model seems very strong.  The coefficients tell an even more interesting story.

GRAVITY:

The equation from the last linear regression is y=4.55-9.61x.  Since the data came from slope, the y-intercept, 4.55, is measured in m/sec.  That makes it the velocity of the ball at the moment (t=0) the ball left the ground.  Nice.

The slope of this line is -9.61.  As this is a slope, its units are the y-units over the x-units, or (m/sec)/(sec).  That is, meters per squared second.  And those are the units for gravity!  That means my students measured, hidden within their data, an approximation for the coefficient of gravity by bouncing an outdoor ball on a well-worn carpet with a mildly wobbly hand holding a CBR2.  The gravitational constant at sea-level on Earth is about -9.807 m/sec^2.  That means my students’ measurement error was about \frac{9.807-9.610}{9.807}\approx 2.0\%.  And a 2% error is not bad for a very unscientific setting!

CONCLUSION:

Whenever I teach second year algebra classes, I find it extremely valuable to have students gather real data whenever possible and with every new function, determine models to fit their data, and analyze the goodness of the model’s fit to the data.  In addition to these activities just being good mathematics explorations, I believe they do an excellent job exposing students to a few topics often underrepresented in many secondary math classes:  numerical representations and methods, experimentation, and introduction to statistics.  Hopefully some of the ideas shared here will inspire you to help your students experience more.

Powers of 2

Yesterday, James Tanton posted a fun little problem on Twitter:

2powers

So, 2 is one more than 1=1^2, and 8 is one less than 9=3^2, and Dr. Tanton wants to know if there are any other powers of two that are within one unit of a perfect square.

While this problem may not have any “real-life relevance”, it demonstrates what I describe as the power and creativity of mathematics.  Among the infinite number of powers of two, how can someone know for certain if any others are or are not within one unit of a perfect square?  No one will ever be able to see every number in the list of powers of two, but variables and mathematics give you the tools to deal with all possibilities at once.

For this problem, let D and N be positive integers.  Translated into mathematical language, Dr. Tanton’s problem is equivalent to asking if there are values of D and N for which 2^D=N^2 \pm 1.  With a single equation in two unknowns, this is where observation and creativity come into play.  I suspect there may be more than one way to approach this, but my solution follows.  Don’t read any further if you want to solve this for yourself.

WARNING:  SOLUTION ALERT!

Because D and N are positive integers, the left side of 2^D=N^2 \pm 1 is always even.  That means N^2 \pm 1 is even, so N^2, and therefore N, must be odd.

Because N is odd, I know N=2k+1 for some whole number k.  Rewriting our equation gives 2^D=(2k+1)^2 \pm 1, and the right side equals either 4k^2+4k or 4k^2+4k+2.

Factoring the first expression gives 2^D=4k^2+4k=4k(k+1).   Notice that this contains the product of two consecutive integers, k and k+1, so one of those factors (even though I don’t know which one) must be an odd number.  The only odd number that is a factor of a power of two is 1, so either k=1 or k+1=1 \rightarrow k=0.  Now, k=1 \longrightarrow N=3 \longrightarrow 2^D=4\cdot 1\cdot 2=8 \longrightarrow D=3, one of the two solutions Dr. Tanton gave, while k=0 makes the product zero, which is not a power of two.  No other possibilities exist from this expression, no matter how far down the list of powers of two you want to go.

But what about the other expression?  Factoring again gives 2^D=4k^2+4k+2=2 \cdot \left( 2k^2+2k+1 \right) .  The expression in parentheses must be odd because its first two terms are both multiplied by 2 (making them even) and then 1 is added (making the overall sum odd).  Again, 1 is the only odd factor of a power of two, and that happens here only when 2k^2+2k+1=1 \longrightarrow k=0 \longrightarrow N=1 \longrightarrow 2^D=2 \longrightarrow D=1, the other solution Dr. Tanton gave.

Because no other algebraic solutions are possible, the two solutions Dr. Tanton gave in the problem statement are the only two times in the entire universe of perfect squares and powers of two where elements of those two lists are within a single unit of each other.
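
For the skeptical, a quick brute-force scan makes a nice complement to the algebra.  Here is a short Python check (my addition; it obviously tests only finitely many powers, while the algebra above handles them all):

from math import isqrt

for d in range(200):                      # check the first 200 powers of two
    p = 2**d
    for square in (p - 1, p + 1):
        if square >= 1 and isqrt(square)**2 == square:   # positive perfect square, matching the setup
            print(f"2^{d} = {p} is within 1 of {square} = {isqrt(square)}^2")
# Output: only 2^1 = 2 (next to 1 = 1^2) and 2^3 = 8 (next to 9 = 3^2) appear.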

Math is sweet.

Exponentials and Transformations

Here’s an old and (maybe) a new way to think about equations of exponential functions.  I suspect you’ve seen the first approach.  If you understand what exponential functions are, my second approach using transformations is much faster and involves no algebra!

Members of the exponential function family can be written in the form y=a\cdot b^x for real values of a and positive real values of b.  Because there are only two parameters, only two points are required to write an equation of any exponential.

EXAMPLE 1: Find an exponential function through the points (2,5) and (4,20).  

METHOD 1:  Plug the points into the generic exponential equation to get a 2×2 system of equations.  It isn’t necessary, but to simplify the next algebra step, I always write the equation with the larger exponent on top.

\left\{\begin{matrix} 20=a\cdot b^4 \\ 5=a\cdot b^2 \end{matrix}\right.

If the algebra isn’t the point of the lesson, this system could be solved with a CAS.  Users would need to remember that b>0 to limit the CAS solutions to just one possibility.

If you want to see algebra, you could use substitution, but I recommend division.  Students’ prior experience with systems typically involved only linear functions for which they added or subtracted the equations to eliminate variables.  For exponentials, the unknown parameters are multiplied, so division is a better operational choice.  Using the system above, I get \displaystyle \frac{20}{5}=\frac{a\cdot b^4}{a\cdot b^2}.  The fractions must be equivalent because their numerators are equal and their denominators are equal.

Simplifying gives 4=b^2\rightarrow b=+2 (because b>0 for exponential functions) and a=\frac{5}{4}.

This approach is nice because the a term will always cancel from the first division step, leaving a straightforward constant exponent to undo, a pretty easy step.
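
If the CAS step is helpful to see, here is what that system looks like in SymPy (a stand-in for whatever CAS your students use); declaring b positive is what removes the extraneous negative base:

import sympy as sp

a, b = sp.symbols('a b', positive=True)         # b > 0 for an exponential (a happens to be positive here too)
system = [sp.Eq(a * b**4, 20), sp.Eq(a * b**2, 5)]
print(sp.solve(system, [a, b], dict=True))      # [{a: 5/4, b: 2}]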

METHOD 2:  Think about what an exponential function is and does.  Then use transformations.

Remember that linear functions (y=m\cdot x+b) “start” with a y-value of b (when x=0) and add m to y every time you add 1 to x.  The only difference between linear and exponential functions is that exponentials multiply while linears add.  Therefore, exponential functions (y=a\cdot b^x) “start” with a y-value of a when x=0 and multiply by b every time 1 is added to x.

What makes the given points a bit annoying is that neither is a y-intercept.  No problem.  If you don’t like the way a problem is phrased, CHANGE IT!    (Just remember to change the answer back to the original conditions!)

If you slide the given points left 2 units, you get (0,5) and (2,20).  It would also be nice if the points were 1 x-unit apart, so halving the x-values gives (0,5) and (1,20).  Because the y-intercept is now 5, and the next point multiplies that by 4, an initial equation for the exponential is y = 5\cdot 4^x . To change this back to the original points, undo the transformations at the start of this paragraph:  stretch horizontally by 2 and then move right 2.  This gives y = 5\cdot 4^\frac{x-2}{2}.

This is algebraically equivalent to the y=\frac{5}{4}\cdot 2^x found earlier.  Obviously, my students prove this.

One student asked why we couldn’t make the (4,20) point the y-intercept.  Of course we can!  To move more quickly through the set up, starting at (4,20) and moving to (2,5) means my initial value is 20 and I multiply by \frac{1}{4} if the x-values move left 2 from an initial x-value of 4.  This gives y = 20\cdot\left( \frac{1}{4} \right) ^\frac{x-4}{-2}.  Of course, this 3rd equation is algebraically equivalent to the first two.

Here’s one more example to illustrate the speed of the transformations approach, even when the points aren’t convenient.

EXAMPLE 2: Find an exponential function through (-3,7) and (12,13).  

Starting at (-3,7) and moving to (12,13) means my initial value is 7, and I multiply by \frac{13}{7} if the x-values move right 15 from an initial x-value of -3.  This gives y = 7\cdot\left( \frac{13}{7} \right) ^\frac{x+3}{15}.

Equivalently, starting at (12,13) and moving to (-3,7) means my initial value is 13, and I multiply by \frac{7}{13} if the x-values move left 15 from an initial x-value of 12.  This gives y = 13\cdot\left( \frac{7}{13} \right) ^\frac{x-12}{-15}.
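
A quick numerical check (my addition) that the two Example 2 equations describe the same function and pass through the given points:

import numpy as np

f = lambda x: 7 * (13/7) ** ((x + 3) / 15)       # built from the point (-3, 7)
g = lambda x: 13 * (7/13) ** ((x - 12) / -15)    # built from the point (12, 13)
x = np.linspace(-10, 20, 31)
print(np.max(np.abs(f(x) - g(x))))               # essentially 0: the forms are equivalent
print(f(-3), f(12))                              # 7.0 and 13.0 (to rounding), as required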

If you get transformations, exponential equations require almost no algebraic work, no matter how “ugly” the coordinates.  I hope this helps give a different perspective on exponential function equations and helps enhance the importance of the critical math concept of equivalence.

Exponential Derivatives and Statistics

This post gives a different way I developed years ago to determine the form of the derivative of exponential functions, y=b^x.  At the end, I provide a copy of the document I use for this activity in my calculus classes just in case that’s helpful.  But before showing that, I walk you through my set-up and solution of the problem of finding exponential derivatives.

Background:

I use this lesson after my students have explored the definition of the derivative and have computed the algebraic derivatives of polynomial and power functions. They also have access to TI-nSpire CAS calculators.

The definition of the derivative is pretty simple for polynomials, but unfortunately, the definition of the derivative is not so simple to resolve for exponential functions.  I do not pretend to teach an analysis class, so I see my task as providing strong evidence–but not necessarily a watertight mathematical proof–for each derivative rule.  This post definitely is not a proof, but its results have been pretty compelling for my students over the years.

Sketching Derivatives of Exponentials:

At this point, my students also have experience sketching graphs of derivatives from given graphs of functions.  They know there are two basic graphical forms of exponential functions, and conclude that there must be two forms of their derivatives as suggested below.

When they sketch their first derivative of an exponential growth function, many begin to suspect that an exponential growth function might just be its own derivative.  Likewise, the derivative of an exponential decay function might be the opposite of the parent function.  The lack of scales on the graphs obviously keep these from being definitive conclusions, but the hypotheses are great first ideas.  We clearly need to firm things up quite a bit.

Numerically Computing Exponential Derivatives:

Starting with y=10^x, the students used their CASs to find numerical derivatives at 5 different x-values.  The x-values really don’t matter, and neither does the fact that there are five of them.  The calculators quickly compute the slopes at the selected x-values.

Each point on f(x)=10^x has a unique tangent line and therefore a unique derivative.  From their sketches above, my students are soundly convinced that all ordered pairs \left( x,f'(x) \right) form an exponential function.  They’re just not sure precisely which one. To get more specific, graph the points and compute an exponential regression.

So, the derivatives of f(x)=10^x are modeled by f'(x)\approx 2.3026\cdot 10^x.  Notice that the base of the derivative function is the same as its parent exponential, but the coefficient is different.  So the common student hypothesis is partially correct.

Now, repeat the process for several other exponential functions and be sure to include at least 1 or 2 exponential decay curves.  I’ll show images from two more below, but ultimately will include data from all exponential curves mentioned in my Scribd document at the end of the post.

The following shows that g(x)=5^x has derivative g'(x)\approx 1.6094\cdot 5^x.  Notice that the base again remains the same with a different coefficient.

OK, the derivative of h(x)=\left( \frac{1}{2} \right)^x causes a bit of a hiccup.  Why should I make this too easy?  <grin>

As all of its h'(x) values are negative, the semi-log regression at the core of an exponential regression is impossible.  But, I also teach my students regularly that If you don’t like the way a problem appears, CHANGE IT!  Reflecting these data over the x-axis creates a standard exponential decay which can be regressed.

From this, they can conclude that  h'(x)\approx -0.69315\cdot \left( \frac{1}{2} \right)^x.

So, every derivative of an exponential function appears to be another exponential function whose base is the same as its parent function with a unique coefficient.  Obviously, the value of the coefficient depends on the base of the corresponding parent function.  Therefore, each derivative’s coefficient is a function of the base of its parent function.  The next two shots show the values of all of the coefficients and a plot of the (base,coefficient) ordered pairs.

OK, if you recognize the patterns of your families of functions, that data pattern ought to look familiar–a logarithmic function.  Applying a logarithmic regression gives

For y=a+b\cdot ln(x), a\approx -0.0000067\approx 0 and b=1, giving coefficient(base) \approx ln(base).

Therefore, \frac{d}{dx} \left( b^x \right) = ln(b)\cdot b^x.
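
For anyone who wants to rerun the experiment without a handheld, here is a compact Python sketch of the whole process (numerical slopes, then a semi-log regression on the slope data); it mirrors the spirit of the activity, not the classroom handout itself:

import numpy as np

def derivative_coefficient(base, xs=np.linspace(-2, 2, 9), h=1e-5):
    # Estimate f'(x) for f(x) = base**x at several x-values, fit f'(x) = c*base**x, and return c.
    slopes = (base**(xs + h) - base**(xs - h)) / (2 * h)        # central-difference numerical derivatives
    sign = np.sign(slopes[0])                                   # reflect decay curves, as in the text
    intercept = np.polyfit(xs, np.log(np.abs(slopes)), 1)[1]    # semi-log regression intercept = ln|c|
    return sign * np.exp(intercept)

for b in (10, 5, 0.5):
    print(b, round(derivative_coefficient(b), 5), round(np.log(b), 5))   # coefficient ~ ln(base)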

Again, this is not a formal mathematical proof, but the problem-solving approach typically keeps my students engaged until the end, and asking my students to  discover the derivative rule for exponential functions typically results in very few future errors when computing exponential derivatives.

Feedback on the approach is welcome.

Classroom Handout:

Here’s a link to a Scribd document written for my students who use TI-nSpire CASs.  There are a few additional questions at the end.  Hopefully this post and the document make it easy enough for you to adapt this to the technology needs of your classroom.  Enjoy.

Statistics and Series

I was inspired by the article “Errors in Mathematics Aren’t Always Bad” (Sheldon Gordon, Mathematics Teacher, August 2011, Volume 105, Issue 1) to think about an innovative way to introduce series to my precalculus class without using any of the traditional calculus that’s typically required to derive them.  It’s not a proof, but it’s certainly compelling and introduces my students to an idea that many find challenging in a much less demanding environment.

Following is a paraphrase of an activity I took my students through in January.  They started by computing and graphing a few points on y=e^x near x=0.

The global shape is exponential, but this image convinced them to try a linear fit.

Simplifying a bit, this linear regression suggests that e^x\approx x+1 for values of x near x=0.  Despite the “strength” of the correlation coefficient, we teach our students always to look at the residuals from any attempted fit.  If you have ever relied solely on correlation coefficients to determine “the best fit” for a set of data, the “strength” of r \approx0.998402 and the following residual plot should convince you to be more careful.

The values are very small, but these residuals (res1=e^x-(x+1))  look pretty close to quadratic even though the correlation coefficient was nearly 1.  Fitting a quadratic to (xval,res1) gives another great fit.

The linear and constant coefficients are nearly zero making res1\approx\frac{1}{2}x^2.  Therefore, a quadratic approximation to the original exponential is e^x \approx\frac{1}{2}x^2+x+1.  But even with another great correlation coefficient, hopefully the last step has convinced you to investigate the new residuals, res2=e^x-(\frac{1}{2}x^2+x+1).

And that looks cubic.  Fitting a cubic to (xval,res2) gives yet another great fit.

This time, the quadratic, linear, and constant coefficients are all nearly zero making res2\approx.167x^3.  The simplest fraction close to this coefficient is \frac{1}{6} making cubic approximation e^x \approx\frac{1}{6}x^3+\frac{1}{2}x^2+x+1.  One more time, check the new residuals, res3=e^x-(\frac{1}{6}x^3+\frac{1}{2}x^2+x+1).

Given this progression and the “flatter” vertex, my students were ready to explore a quartic fit to the res3 data.

As before, only the highest degree term seems non-zero, giving res3\approx0.04175x^4.  Some of my students called this coefficient \frac{1}{25} and others went for \frac{1}{24}.  At this point, either approximation was acceptable, leading to e^x \approx\frac{1}{24}x^4+\frac{1}{6}x^3+\frac{1}{2}x^2+x+1.

My students clearly got the idea that this approach could be continued as far as desired, but since our TI-Nspire had used its highest polynomial regression (quartic) and the decimals were getting harder to approximate, we had enough.  As a final check, they computed a quartic regression on the original data, showing that the progression above could have been simplified to a single step.
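
Here is a hedged Python sketch of the same residual-chasing idea for anyone without an Nspire handy; the leading coefficient of each successive fit lands near the next reciprocal factorial:

import numpy as np

x = np.linspace(-0.3, 0.3, 13)    # a handful of points near x = 0
residual = np.exp(x)              # start with y = e^x itself
for degree in range(1, 5):
    coeffs = np.polyfit(x, residual, degree)
    print(degree, round(coeffs[0], 4))            # leading coefficient ~ 1/degree!
    residual = residual - np.polyval(coeffs, x)   # chase what the fit leaves behind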

If you try this with your classes, I recommend NOT starting with the quartic regression.  Students historically have difficulty understanding what series are and from where they come.  My anecdotal experiences from using this approach for the first time this year suggest that, as a group, my students are far more comfortable with series than ever before.

Ultimately, this activity established for my students the idea that polynomials can be great approximations for other functions at the same time we crudely developed the Maclaurin Series for e^x\approx\frac{1}{4!}x^4+\frac{1}{3!}x^3+\frac{1}{2!}x^2+x+1, a topic I’m revisiting soon as we explore derivatives.  We also learned that even very strong correlation coefficients can hide some pretty math.