This post discusses two higher-level applications of the logarithmic algebra students typically first see in Algebra 2.
An aspect of brain and learning research I incorporate in my classes is that concepts are committed more securely to long-term memory when the ideas are introduced, some time elapses, and the ideas are then re-encountered. The idea is that when you “learn” an idea, have a chance to “forget” it, and then have an opportunity to re-learn it or see it in a new context, you strengthen your long-term understanding. In this spirit, I introduce exponential and logarithmic algebra in Algebra 2 classes and then return to those ideas multiple times. Here are two extensions from later courses: one from Calculus and one from PreCalculus/Statistics.
LOGARITHM EXTENSION #1: CALCULUS
Scenario: Whether by hand or with a CAS for rapid data creation, students explore derivatives of variations of $y=\ln(kx)$ for any $k>0$.
When all return $\frac{1}{x}$, most initially can’t quite believe the value of $k$ is irrelevant. Those who recall transformations are further disturbed that the slope of $y=\ln(kx)$ is invariant under all levels of horizontal scaling. Surely when a curve is stretched, its slope changes, right?
The most common resolution I’ve seen invokes the Chain Rule to cancel $k$ algebraically: $\frac{d}{dx}\ln(kx)=\frac{1}{kx}\cdot k=\frac{1}{x}$.
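For readers who want to replicate the CAS exploration numerically, here is a minimal Python sketch (my addition, not from the original exploration) that approximates the slope of $y=\ln(kx)$ at a point for several values of $k$ using a central difference:

```python
import math

def numerical_derivative(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x = 2.0
for k in [0.5, 1, 3, 10]:
    slope = numerical_derivative(lambda t: math.log(k * t), x)
    # Every k produces a slope of approximately 1/x = 0.5
    print(f"k = {k:>4}: slope at x = {x} is {slope:.6f}")
```

Whatever $k$ you try, the printed slope matches $\frac{1}{x}$ to within the approximation error, which is exactly the surprise described above.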
This proves the derivative of $y=\ln(kx)$ is invariant for all $k>0$, but it doesn’t get at WHY. Many students remain dissatisfied. Enter log algebra.
As explained at the end of my previous post, every horizontal stretch of any log graph is equivalent to a vertical translation of the parent graph. That’s the core of what’s being claimed by the not-fully-appreciated log algebra property, $\log_b(x\cdot y)=\log_b(x)+\log_b(y)$.
Applied to this problem, because $\ln(kx)=\ln(k)+\ln(x)$, every instance of $y=\ln(kx)$ is a simple vertical translation of $y=\ln(x)$. Their derivatives are equal precisely because all derivatives with respect to x are invariant under vertical translations. Knowing the family of logarithmic functions has the special property that every horizontal scale change is equivalent to some vertical slide completely explains the paradox.
LOGARITHM EXTENSION #2: PRECALCULUS/STATISTICS
SCENARIO: Having only experienced linear regressions, students encounter curved Quadrant I data and need to find a model.
Balancing multiple perspectives, it is critical for students to see mathematics used both in precise algebraic scenarios and in “fuzzy” scenarios like fitting lines to data that are inevitably imprecise due to inherent measurement variability. In my Algebra 2 classes, we explore linear regressions and how they work alongside the precise algebra of finding equations of lines and more general polynomials that must pass through specific predetermined points.
I typically don’t move beyond linear regressions in Algebra 2, but return in PreCalculus and Statistics classes to the reality that we may understand how to fit lines to generally linear data, but we are limited if the data curves. For curved Quadrant I data (like above), it is difficult to know what curve might model the information. Exponential functions and power functions (and others) have this shape, but these are wildly different types of functions. How can you know which to use? Re-enter logarithms…
(The remainder of this post is an overly abbreviated explanation meant only to show a powerful use of log algebra. If there’s interest, I can explore the complete connection between exponential, linear, and power regressions in another post.)
If you suspect data is exponential, then an equation of the form $y=a\cdot b^x$ will model the data, while power data should be modeled by $y=a\cdot x^b$. The equations are similar, and both have exponents. From prior experiences with log algebra, some students recall that logarithmic functions have the unique algebraic property of being able to write expressions with exponents in an equivalent form without exponents.
Applying logs to the exponential equation and applying log algebra gives $\ln(y)=\ln(a\cdot b^x)=\ln(a)+x\cdot\ln(b)$.
The parallel application to power functions is $\ln(y)=\ln(a\cdot x^b)=\ln(a)+b\cdot\ln(x)$.
In both cases, the last equation is a variation of a linear equation: a transformed y-value equal to a constant plus the product of another constant and either x or a transformed x. That is, both are some form of $Y=B+MX$ (for exponential data, $Y=\ln(y)$ and $X=x$; for power data, $Y=\ln(y)$ and $X=\ln(x)$).
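The straightening trick above can be sketched in a few lines of Python. This is my illustrative addition, using a hand-rolled least-squares fit and synthetic data generated from known parameters so we can confirm the transformation recovers them:

```python
import math

def linear_fit(xs, ys):
    """Ordinary least-squares fit of Y = B + M*X; returns (B, M)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - m * mean_x
    return b, m

xs = [1, 2, 3, 4, 5]

# Synthetic exponential data from y = 2 * 1.5^x (parameters chosen for illustration).
# Straighten with ln(y) = ln(a) + x*ln(b): fit ln(y) against x.
exp_ys = [2 * 1.5 ** x for x in xs]
B, M = linear_fit(xs, [math.log(y) for y in exp_ys])
a_exp, b_exp = math.exp(B), math.exp(M)   # recovers a = 2, b = 1.5

# Synthetic power data from y = 3 * x^2.
# Straighten with ln(y) = ln(a) + b*ln(x): fit ln(y) against ln(x).
pow_ys = [3 * x ** 2 for x in xs]
B2, M2 = linear_fit([math.log(x) for x in xs], [math.log(y) for y in pow_ys])
a_pow, b_pow = math.exp(B2), M2           # recovers a = 3, b = 2

print(f"exponential: y = {a_exp:.3f} * {b_exp:.3f}^x")
print(f"power:       y = {a_pow:.3f} * x^{b_pow:.3f}")
```

With noise-free data the original parameters come back exactly; with real measured data, the same two fits let you compare which transformed data set looks more linear, and therefore which family better models the data.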
So, familiar logarithms allow you to change unfamiliar and significantly curved exponential or power data back into a familiar linear form. At their cores, exponential and power regressions are just simple transformations of linear regressions. In another post in which the previous image was explained, I leveraged this curve-straightening idea in a statistics class to have my students discover the formula for standard deviations of distributions of sample means.
Research shows that aiming for student mastery on initial exposure is counterproductive. We all learn best through repeated exposure to concepts with time gaps between experiences. Hopefully these two examples offer good ways to bring back log algebra.
From another perspective, exploring the implications of mathematics beyond just algebraic manipulations often grants key insights to scenarios that don’t seem related to when ideas were first encountered.