
Exponentials Don’t Stretch

OK, this post’s title is only half true, but transforming exponentials can lead to counter-intuitive results.  This post shares a cool transformations activity using dynamic graphing software–a perfect set-up for a mind-bending lesson for algebra or precalculus students in the coming year.  I use Desmos in this post, but this can be reproduced on any graphing software with sliders.

THE SCENARIO

You can vertically stretch any exponential function as much as you want, and the shape of the curve will never change!

But that doesn’t make any sense.  Doesn’t stretching a curve by definition change its curvature?

The answer is no.  Not when exponentials are vertically stretched.  It is an inevitable result of the property that multiplying powers of a common base adds the exponents:

b^a * b^c = b^{a+c}

I set up a Desmos page to explore this property dynamically (shown below).  The base of the exponential doesn’t matter; I pre-set the base of the parent function (line 1) to 2 (in line 2), but feel free to change it.

[Desmos screenshot: the parent exponential with stretch and translation sliders]

From its form, the line 3 orange graph is a vertical stretch of the parent function; you can vary the stretch factor with the line 4 slider.  Likewise, the line 5 black graph is a horizontal translation of the parent, and the translation is controlled by the line 6 slider.  That’s all you need!
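If you would rather script the same set-up than build it in Desmos, here is a rough Python/matplotlib stand-in; the variable names mirror the Desmos sliders, and the particular values are just one illustration, not part of the original activity.

```python
import numpy as np
import matplotlib.pyplot as plt

b = 2      # base of the parent function (Desmos line 2)
a = 4      # vertical stretch factor (the line 4 slider)
h = -2     # horizontal translation (the line 6 slider)

x = np.linspace(-4, 4, 400)
plt.plot(x, b**x, label=r"parent: $b^x$")
plt.plot(x, a * b**x, label=r"vertical stretch: $a \cdot b^x$")
plt.plot(x, b**(x - h), "--", label=r"horizontal slide: $b^{x-h}$")
plt.legend()
plt.show()
```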

Let’s say I wanted to quadruple the height of my function, so I move the a slider to 4.  Now play with the h slider in line 6 to see if you can achieve the same results with a horizontal translation.  By the time you change h to -2, the horizontal translation aligns perfectly with the vertical stretch.  That’s a pretty strange result if you think about it.

[Desmos screenshot: the vertical stretch (a = 4) coinciding with the horizontal translation (h = -2)]

Of course it has to be true because y = 2^{x-(-2)} = 2^x*2^2 = 4*2^x.  Try any positive stretch you like, and you will always be able to find some horizontal translation that gives you the exact same result.

Likewise, you can horizontally slide any exponential function (growth or decay) as much as you like, and there is a single vertical stretch that will produce the same results.

The implications of this are pretty deep.  Because any horizontal translation of any function produces a graph congruent to the original, AND because every vertical stretch of an exponential is equivalent to some horizontal translation, vertically stretching any exponential function produces a graph congruent to the unstretched parent curve.  That is, no vertical stretch of an exponential ever changes its curvature!  Graphs make it easier to see and explore this, but it takes algebra to (hopefully) understand this cool exponential property.
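For a quick numerical sanity check of that claim outside of a grapher, note that the translation matching a vertical stretch by a is h = -log_b(a).  Here is a minimal Python sketch (the specific numbers are arbitrary):

```python
import numpy as np

b = 2.0
a = 7.3                       # any positive stretch factor
h = -np.log(a) / np.log(b)    # matching horizontal translation: h = -log_b(a)

x = np.linspace(-5, 5, 201)
print(np.allclose(a * b**x, b**(x - h)))   # True: the stretched and slid graphs coincide
```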

NOT AN EXTENSION

My students inevitably ask if the same is true for horizontal stretches and vertical slides of exponentials.  I encourage them to play with the algebra or create another graph to investigate.  Eventually, they discover that horizontal stretches do bend exponentials (actually changing the base, i.e., the growth rate), making it impossible for any translation of the parent to be congruent with the result.
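The algebra behind that discovery is the power-of-a-power property: a horizontal stretch or compression replaces x with kx for some k \ne 1, and

b^{kx} = (b^k)^x

so the result is an exponential with the new base b^k rather than a translated copy of the original.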

ABSOLUTELY AN EXTENSION

But if a property is true for a function, then the inverse of the property generally should be true for the inverse of the function.  In this case, that means the transformation property that did not work for exponentials does work for logarithms!  That is,

Any horizontal stretch of any logarithmic function is congruent to some vertical translation of the original function.  But for logarithms, vertical stretches do morph the curve into a different shape.  Here’s a Desmos page demonstrating the log property.

[Desmos screenshot: a horizontal stretch of a logarithmic graph coinciding with a vertical translation]

The sum property of logarithms proves the existence of this equally strange property:

log(A) + log(x) = log(A*x)
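Spelling out the step that connects this property to the transformation claim: stretching log(x) horizontally by a factor of k replaces x with x/k, and

log(x/k) = log(x) - log(k)

which is just the original graph translated vertically by the constant -log(k).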

CONCLUSION

Hopefully the unexpected transformational congruences will spark some nice discussions, while the graphical/algebraic equivalences will reinforce the importance of understanding mathematics more than one way.

Enjoy the strange transformational world of exponential and log functions!

Numerical Transformations, I

It’s been over a decade since I’ve taught a class where I’ve felt the freedom to really explore transformations with a strong matrix thread.  Whether due to curricular pressures, lack of time, or some other reason, I realized I had drifted away from some nice connections when I recently read Jonathan Dick and Maria Childrey’s Enhancing Understanding of Transformation Matrices in the April 2012 Mathematics Teacher (abstract and complete article here).

Their approach was okay, but I was struck by the absence of a beautiful idea I believe I learned at a UCSMP conference in the early 1990s.  Further, today’s Common Core State Standards for Mathematics explicitly call for students to “Work with 2×2 matrices as transformations of the plane, and interpret the absolute value of the determinant in terms of area” (see Standard N-VM.12 on page 61 of the CCSSM here).  I’m going to take a couple of posts to unpack this standard and describe the pretty connection I’ve unfortunately let slip out of my teaching.

What they almost said

At the end of the MT article, the authors performed a double transformation equivalent to reflecting the points (2,0), (3,-4), and (9,-7) over the line y=x via matrices:  \left[ \begin{array}{cc} 0&1 \\ 1&0 \end{array} \right] \cdot  \left[ \begin{array}{ccc} 2 & 3 & 9 \\ 0 & -4 & -7 \end{array} \right] = \left[ \begin{array}{ccc} 0 & -4 & -7 \\ 2 & 3 & 9 \end{array} \right] , giving image points (0,2), (-4,3), and (-7,9).  That this matrix multiplication swapped the coordinates of every point is compelling evidence that \left[ \begin{array}{cc} 0 & 1 \\ 1 & 0\end{array} \right] might be a y=x reflection matrix.

Going much deeper

Here’s how this works.  Assume a set of pre-image points, P, undergoes some transformation T to become image points, P’.  For this procedure, T can be almost any transformation except a translation–reflections, dilations, scale changes, rotations, etc.  Translations can be handled using augmentations of these transformation matrices, but that is another story.  If P is a set of n two-dimensional points, it can be written as a 2×n pre-image matrix, [P], with all of the x-coordinates in the top row and the corresponding y-coordinates in the second row.  Likewise, [P’] is a 2×n matrix of the image points, while [T] is a 2×2 matrix unique to the transformation.  In matrix form, this relationship is written [T] \cdot [P] = [P'].
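If you want to check this relationship numerically, here is a minimal NumPy sketch (my own, not from the MT article) using the y=x reflection and pre-image points from above:

```python
import numpy as np

T = np.array([[0, 1],
              [1, 0]])          # [T]: reflection over the line y = x
P = np.array([[2, 3, 9],
              [0, -4, -7]])     # [P]: pre-image points (2,0), (3,-4), (9,-7) as columns

P_image = T @ P                 # [T]·[P] = [P']
print(P_image)                  # columns are the image points (0,2), (-4,3), (-7,9)
```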

So what would \left[ \begin{array}{cc} 0 & -1 \\ 1 & 0\end{array} \right] do as a transformation matrix?  To see, transform (2,0), (3,-4), and (9,-7) using this new [T].

\left[ \begin{array}{cc} 0&-1 \\ 1&0 \end{array} \right] \cdot  \left[ \begin{array}{ccc} 2 & 3 & 9 \\ 0 & -4 & -7 \end{array} \right] = \left[ \begin{array}{ccc} 0 & 4 & 7 \\ 2 & 3 & 9 \end{array} \right]

The result might be more easily seen graphically with the points connected to form pre-image and image triangles.

After studying the graphic, hopefully you can see that \left[ \begin{array}{cc} 0 & -1 \\ 1 & 0\end{array} \right] rotated the pre-image points 90 degrees counterclockwise around the origin.

Generalizing

Now you know the effects of two different transformation matrices, but what if you wanted to perform a specific transformation and didn’t know the matrix to use?  If you’re new to transformations via matrices, you may be hoping for something much easier than the experimental approach used thus far.  If you can generalize for a moment, the result will be a stunningly simple way to determine the matrix for any transformation quickly and easily.

Assume you need to find a transformation matrix, [T]= \left[ \begin{array}{cc} a & c \\ b & d \end{array}\right] .  Pick (1,0) and (0,1) as your pre-image points.

\left[ \begin{array}{cc} a&c \\ b&d \end{array} \right] \cdot  \left[ \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right] = \left[ \begin{array}{cc} a & c \\ b & d \end{array} \right]

On the surface, this says the image of (1,0) is (a,b) and the image of (0,1) is (c,d), but there is so much more here!

Because the pre-image matrix for (1,0) and (0,1) is the 2×2 identity matrix, [T]= \left[ \begin{array}{cc} a & c \\ b & d \end{array}\right] will always be BOTH the transformation matrix AND (much more importantly) the image matrix.  This is a major find.  It means that if you know the images of (1,0) and (0,1) under some transformation T, then you automatically know the components of [T]!

For example, when reflecting over the x-axis, (1,0) is unchanged and (0,1) becomes (0,-1), making [T]= \left[ r_{x-axis} \right] = \left[ \begin{array}{cc} 1 & 0 \\ 0 & -1\end{array} \right] .  Remember, coordinates of points are always listed vertically.

Similarly, a scale change that doubles x-coordinates and triples y-coordinates transforms (1,0) to (2,0) and (0,1) to (0,3), making [T]= \left[ S_{2,3} \right] = \left[ \begin{array}{cc} 2 & 0 \\ 0 & 3\end{array} \right] .
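Here is a short Python sketch of that shortcut; the helper function name is mine, but the idea is exactly the one above: the images of (1,0) and (0,1) become the columns of [T].

```python
import numpy as np

def transformation_matrix(image_of_10, image_of_01):
    """The columns of [T] are the images of (1,0) and (0,1)."""
    return np.column_stack([image_of_10, image_of_01])

# reflection over the x-axis: (1,0) -> (1,0) and (0,1) -> (0,-1)
r_x_axis = transformation_matrix([1, 0], [0, -1])
print(r_x_axis)     # [[ 1  0]
                    #  [ 0 -1]]

# scale change doubling x and tripling y: (1,0) -> (2,0) and (0,1) -> (0,3)
S_2_3 = transformation_matrix([2, 0], [0, 3])
print(S_2_3)        # [[2 0]
                    #  [0 3]]
```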

In a generic counterclockwise rotation by \theta around the origin, (1,0) becomes (\cos(\theta ),\sin(\theta )) and (0,1) becomes (-\sin(\theta ),\cos(\theta )).

Therefore, [T]= \left[ R_\theta \right] = \left[ \begin{array}{cc} \cos(\theta ) & -\sin(\theta ) \\ \sin(\theta ) & \cos(\theta ) \end{array} \right] .  Substituting \theta = 90^\circ into this [T] confirms the \left[ R_{90^\circ} \right] = \left[ \begin{array}{cc} 0 & -1 \\ 1 & 0\end{array} \right] matrix from earlier.
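A quick numerical confirmation of the rotation case (again just a sketch):

```python
import numpy as np

theta = np.radians(90)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.round(R).astype(int))   # [[ 0 -1]
                                 #  [ 1  0]], matching the 90-degree matrix above
```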

As nice as this is, there is even more beautiful meaning hidden within transformation matrices.  I’ll tackle some of that in my next post.