## Envelope Curves

My precalculus class recently returned to graphs of sinusoidal functions with an eye toward understanding them dynamically via envelope curves: functions that bound the extreme values of the curves.  What follows is a series of curves we’ve explored over the past few weeks.  Near the end is a really cool Desmos link showing an infinite progression of periodic envelopes to a single curve–totally worth the read all by itself.

GETTING STARTED

As a simple example, my students had earlier seen the graph of $f(x)=5+2sin(x)$ as $y=sin(x)$ vertically stretched by a factor of 2 and then translated upward 5 units.  On our return, I encouraged them to envision the function’s behavior dynamically instead of statically.  I wanted them to see the curve (and the types of phenomena it could represent) as representing dynamic motion rather than a rigid transformation of a static curve.  In that sense, the graph of f oscillates 2 units (the coefficient of sine in f’s equation) above and below the line $y=5$ (the addend in the equation for f).  The lines $y=5+2=7$ and $y=5-2=3$ define the “envelope curves” for $y=f(x)$.
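A quick numeric check (a Python sketch of my own, not part of the original lesson) confirms the curve never escapes its envelopes:

```python
import math

def f(x):
    # f(x) = 5 + 2*sin(x): midline y = 5, amplitude 2
    return 5 + 2 * math.sin(x)

# sample several full periods and confirm the curve stays between
# its floor (y = 3) and ceiling (y = 7)
samples = [f(k * 0.01) for k in range(6284)]
assert all(3 <= v <= 7 for v in samples)
```

Both bounds are essentially attained, since sin(x) actually reaches both +1 and -1.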

When you graph $y=f(x)$ and its two envelope curves, you can picture the sinusoid “bouncing” between its envelopes.  We called these ceiling and floor functions for f.  Ceilings happen whenever the sinusoid term reaches its maximum value (+1), and floors when the sinusoidal term is at its minimum (-1).

Those envelope functions would be just more busy work if it stopped there, though.  The great insights were that anything added to a sinusoid acts as its midline, AND anything multiplied by the sinusoid is its amplitude–the distance the curve moves above and below its midline.  The fun comes when you start to allow variable expressions for the midline and/or amplitude.

VARIABLE MIDLINES AND ENVELOPES

For a first example, consider $y= \frac{x}{2} + sin(x)$.  By the reasoning above, $y= \frac{x}{2}$ is the midline.  The amplitude, 1, is the coefficient of sine, so the envelope curves are $y= \frac{x}{2}+1$ (ceiling) and $y= \frac{x}{2}-1$ (floor).
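A short numeric sketch (again, just a verification aid) shows $y= \frac{x}{2} + sin(x)$ pinned between its envelopes and touching the ceiling wherever $sin(x)=1$:

```python
import math

def f(x):
    return x / 2 + math.sin(x)

def ceiling(x):
    return x / 2 + 1

def floor(x):
    return x / 2 - 1

xs = [k * 0.05 for k in range(-200, 201)]
assert all(floor(x) <= f(x) <= ceiling(x) for x in xs)

# the curve touches its ceiling exactly where sin(x) = 1, e.g. x = pi/2
assert abs(f(math.pi / 2) - ceiling(math.pi / 2)) < 1e-12
```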

That got their attention!  Notice how easy it is to visualize the sine curve oscillating between its envelope curves.

For a variable amplitude, consider $y=2+1.2^{-x}*sin(x)$.  The midline is $y=2$, with an “amplitude” of $1.2^{-x}$.  That made a ceiling of $y=2+1.2^{-x}$ and a floor of $y=2-1.2^{-x}$, basically exponential decay curves converging on an end behavior asymptote defined by the midline.

SINUSOIDAL MIDLINES AND ENVELOPES

Now for even more fun.  Convinced that both midlines and amplitudes could be variably defined, I asked what would happen if the midline were another sinusoid.  For $y=cos(x)+sin(x)$, we could think of $y=cos(x)$ as the midline, and with the coefficient of sine being 1, the envelopes are $y=cos(x)+1$ and $y=cos(x)-1$.

Since cosine is a sinusoid, you could get the same curve by considering $y=sin(x)$ as the midline with envelopes $y=sin(x)+1$ and $y=sin(x)-1$.  Only the envelope curves are different!

The curve $y=cos(x)+sin(x)$ raised two interesting questions:

1. Was the addition of two sinusoids always another sinusoid?
2. What transformations of sinusoidal curves could be defined by more than one pair of envelope curves?

For the first question, they theorized that if two sinusoids had the same period, their sum was another sinusoid of the same period, but with a different amplitude and a horizontal shift.  Mathematically, that means

$A*cos(\theta ) + B*sin(\theta ) = C*cos(\theta -D)$

where A & B are the original sinusoids’ amplitudes, C is the new sinusoid’s amplitude, and D is the horizontal shift.  Use the cosine difference identity to derive

$A^2 + B^2 = C^2$  and $\displaystyle tan(D) = \frac{B}{A}$.

For $y = cos(x) + sin(x)$, this means

$\displaystyle y = cos(x) + sin(x) = \sqrt{2}*cos \left( x-\frac{\pi}{4} \right)$,

and the new coefficient means $y= \pm \sqrt{2}$ is a third pair of envelopes for the curve.
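These relationships are easy to verify numerically.  The sketch below (Python, with `atan2` handling the signs of A and B) checks that $C=\sqrt{A^2+B^2}$ and $tan(D)= \frac{B}{A}$ really do reproduce $cos(x)+sin(x)$ pointwise:

```python
import math

A, B = 1.0, 1.0               # y = A*cos(x) + B*sin(x) = cos(x) + sin(x)
C = math.hypot(A, B)          # amplitude: sqrt(A^2 + B^2)
D = math.atan2(B, A)          # phase shift: tan(D) = B/A

assert abs(C - math.sqrt(2)) < 1e-12
assert abs(D - math.pi / 4) < 1e-12

# the two forms agree everywhere sampled
for k in range(200):
    x = -5 + k * 0.05
    lhs = A * math.cos(x) + B * math.sin(x)
    rhs = C * math.cos(x - D)
    assert abs(lhs - rhs) < 1e-9
```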

Very cool.  We explored several more sums and differences with identical periods.

WHAT HAPPENS WHEN THE PERIODS DIFFER?

Try a graph of $g(x)=cos(x)+cos(3x)$.

Using the earlier concept that any function added to a sinusoid could be considered the midline of the sinusoid, we can picture the graph of g as the graph of $y=cos(3x)$ oscillating around an oscillating midline, $y=cos(x)$:

If you can’t see the oscillations yet, note that the coefficient of the $cos(3x)$ term is 1, making the envelope curves $y=cos(x) \pm 1$.  The next graph clearly shows $y=cos(3x)$ bouncing off the ceiling and floor defined by its envelope curves.

Alternatively, the base sinusoid could have been $y=cos(x)$ with envelope curves $y=cos(3x) \pm 1$.

Similar to the last section when we added two sinusoids with the same period, the sum of two sinusoids with different periods (but the same amplitude) can be rewritten using an identity.

$cos(A) + cos(B) = 2*cos \left( \frac{A+B}{2} \right) * cos \left( \frac{A-B}{2} \right)$

This can be proved in the present form, but is lots easier to prove from an equivalent form:

$cos(x+y) + cos(x-y) = 2*cos(x) * cos(y)$.

For the current function, this means $y = cos(x) + cos(3x) = 2*cos(x)*cos(2x)$.
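A quick numeric check of this sum-to-product rewrite (a sketch, with A=3x and B=x in the identity) also exposes the new envelopes:

```python
import math

for k in range(400):
    x = -10 + k * 0.05
    lhs = math.cos(x) + math.cos(3 * x)
    rhs = 2 * math.cos(x) * math.cos(2 * x)
    assert abs(lhs - rhs) < 1e-9
    # the product form shows why |g(x)| <= 2|cos(x)|: the other factor is at most 1
    assert abs(lhs) <= 2 * abs(math.cos(x)) + 1e-9
```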

Now that the sum has been rewritten as a product, we can use the coefficient as the amplitude, defining two other pairs of envelope curves.  If $y=cos(2x)$ is the sinusoid, then $y= \pm 2cos(x)$ are envelopes of the original curve, and if $y=cos(x)$ is the sinusoid, then $y= \pm 2cos(2x)$ are envelopes.

In general, I think it’s easier to see the envelope effect with the larger-period function.  A particularly nice application of adding sinusoids with identical amplitudes and different periods is the beats musicians hear from the constructive and destructive sound wave interference between two instruments close to, but not quite, in tune.  The points where the envelopes cross the x-axis are the quiet points in the beats.

A STUDENT WANTED MORE

In class last Friday, my students were reviewing envelope curves in advance of our final exam when one made the next logical leap and asked what would happen if both the coefficients and periods were different.  When I mentioned that the exam wouldn’t go that far, she uttered a teacher’s dream proclamation:  She didn’t care.  She wanted to learn anyway.  Making up some coefficients on the spot, we decided to explore $f(x)=2sin(x)+5cos(2x)$.

Assuming for now that the cos(2x) term is the primary sinusoid, the envelope curves are $y=2sin(x) \pm 5$.

That was certainly cool, but at this point, we were no longer satisfied with just one answer.  If we assumed sin(x) was the primary sinusoid, the envelopes are $y=5cos(2x) \pm 2$.

Personally, I found the first set of envelopes more satisfying, but it was nice that we could so easily identify another.

With the different periods, even though the  coefficients are different, we decided to split the original function in a way that allowed us to use the $cos(A)+cos(B)$ identity introduced earlier.  Rewriting,

$f(x)=2sin(x)+5cos(2x) = 2cos \left( x - \frac{ \pi }{2} \right) + 2cos(2x) + 3cos(2x)$ .

After factoring out the common coefficient 2, the first two terms now fit the $cos(A) + cos(B)$ identity with $A = x - \frac{ \pi }{2}$ and $B=2x$, allowing the equation to be rewritten as

$f(x)= 2 \left( 2*cos \left( \frac{x - \frac{ \pi }{2} + 2x }{2} \right) * cos \left( \frac{x - \frac{ \pi }{2} - 2x }{2} \right) \right) + 3cos(2x)$

$\displaystyle = 4* cos \left( \frac{3}{2} x - \frac{ \pi }{4} \right) * cos \left( - \frac{1}{2} x - \frac{ \pi }{4} \right) + 3cos(2x)$.
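Before hunting for new envelopes in this form, it’s worth confirming the rewrite numerically; a short sketch:

```python
import math

def f(x):
    # original form
    return 2 * math.sin(x) + 5 * math.cos(2 * x)

def f_rewritten(x):
    # product form from the cos(A) + cos(B) identity, plus the leftover 3cos(2x)
    return (4 * math.cos(1.5 * x - math.pi / 4)
              * math.cos(-0.5 * x - math.pi / 4)
            + 3 * math.cos(2 * x))

for k in range(400):
    x = -10 + k * 0.05
    assert abs(f(x) - f_rewritten(x)) < 1e-9
```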

With the expression now containing three sinusoidal expressions, there are three more pairs of envelope curves!

Arguably, the simplest approach from this form takes $cos(2x)$ from the $3cos(2x)$ term as the sinusoid, giving $y=2sin(x)+2cos(2x) \pm 3$ (the pre-identity form three equations earlier in this post) as envelopes.

If the $cos \left( \frac{3}{2} x - \frac{ \pi}{4} \right)$ term were the sinusoid, the envelopes would be $y=3cos(2x) \pm 4cos \left( - \frac{1}{2} x - \frac{ \pi }{4} \right)$.  If you look closely, you will notice that this is a different type of envelope pair, with the ceiling and floor curves crossing and trading places at $x= \frac{\pi}{2}$ and every $2\pi$ units before and after.  The third form creates another curious type of crossing envelopes.

We didn’t go there in class, but recognizing that new envelopes can be found simply by rewriting sums creates an infinite number of additional envelopes.  Defining these different sums with a slider lets you see an infinite spectrum of envelopes.  The image below shows one.  Here is the Desmos Calculator page that lets you play with these envelopes directly.

CONCLUSION

In all, it was fun to explore with my students the many possibilities for bounding sinusoidal curves.  It was refreshing to have one student excited by just playing with the curves to see what else we could find for no other reason than to enjoy the beauty of these periodic curves.  As I reflected on the overall process, I was even more delighted to discover the infinite spectrum of envelopes modeled above on Desmos.  I hope you’ve found something cool here for yourself.

## From a Square to Ratios to a System of Equations

Here’s another ratio problem from @Five_Triangles, this time involving triangle areas bounded by a square.  Don’t read further until you’ve tried this for yourself.  It’s a fun problem that, at least from my experience, doesn’t end up where or how I thought it would.

INITIAL THOUGHTS

I see two big challenges here.
First, the missing location of point P is especially interesting, but it is also likely to be quite vexing for many students.  This led me to the first twist I found in the problem: the introduction of multiple variables and a coordinate system.  Without some problem-solving experience, I don’t see that as an intuitive step for most middle school students.  Please don’t interpret this as a knock on the problem; I’m simply agreeing with @Five_Triangles’ assessment that it is likely to be challenging for middle school students.

The second challenge emerged from the introduction of the coordinate system: an underlying 2×2 system of equations.  There are multiple ways to tackle a solution to a linear system, but this strikes me as yet another high hurdle for younger students.

Finally, I’m a bit surprised by my current brain block on multiple approaches for this problem.  I suspect I’m blinded here by my algebraic bias in problem solving; surely there are approaches that don’t require it.  I’d love to hear any other possibilities.

POINT P VARIES

Because I was given properties of point P and not its location, the easiest approach I could see was to position the square on the xy-plane with point B at the origin, $\overline{AB}$ along the y-axis, and $\overline{BC}$ along the x-axis.  That gave my point P coordinates (x,y) for some unknown values of x & y.  The helpful part of this orientation is that the x & y coordinates of P are automatically the altitudes of $\Delta ABP$ and $\Delta BCP$, respectively.  The altitudes of the other two triangles are determined through subtraction.

AREA RATIOS BECOME A LINEAR SYSTEM

From here, I used the given ratios to establish one equation in terms of x & y.

$\displaystyle \frac{\Delta ABP}{\Delta DAP} = \frac{\frac{1}{2}*12*x}{\frac{1}{2}*12*(12-y)} = \frac{3}{4}$

Of course, since all four triangles have the same base lengths, the given area ratios are arithmetically equivalent to corresponding height ratios.
I used that to write a second equation.

$\displaystyle \frac{\Delta BCP}{\Delta CDP} = \frac{y}{12-x} = \frac{1}{3}$

Simplifying terms and clearing denominators leads to $4x=36-3y$ and $3y=12-x$, respectively.

A VERY INTERESTING insight at this point is that there are an infinite number of locations within the square at which each individual ratio is true.  Specifically, the $\Delta ABP : \Delta DAP = 3:4$ ratio holds everywhere along the line $4x=36-3y$.  This problem constrains us to points within the square with vertices (0,0), (12,0), (12,12), and (0,12), but setting that aside, anywhere along the line $4x=36-3y$ satisfies the first constraint.  The same is true for the second line and constraint.  I think it would be very interesting for students to construct this on dynamic geometry software (e.g., GeoGebra or the TI-Nspire) and see the ratio remain constant everywhere along either line even though the triangle areas vary throughout.

Together, these lines form a 2×2 system of linear equations, with the point satisfying both ratios being the intersection of the two lines.  There are lots of ways to solve such a system; I wonder how a typical 6th grader would tackle it.  Assuming they have the algebraic expertise, I’d have them work it by hand and confirm with a CAS.  Solving gives $(x,y)= \left( 8, \frac{4}{3} \right)$, so the requested area is $\Delta ABP = \frac{1}{2}*12*x = 6*8 = 48$.

PROBLEM VARIATIONS

Just two extensions this time.  Other suggestions are welcome.

1. What’s the ratio of the area of $\Delta BCP : \Delta DAP$ at the point P that satisfies both ratios?  It’s not 1:4, as a student might errantly conclude from applying the transitive property to the given ratios.  Can you show that it’s actually 1:8?
2. If a random point is chosen within the square, is that point more likely to satisfy the area ratio of $\Delta ABP : \Delta DAP$ or the ratio of $\Delta BCP : \Delta CDP$?  The first ratio is satisfied by the line $4x=36-3y$, which intersects the square on the segment between (9,0) and (0,12).
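For readers who want to check the algebra, here is a quick exact solution of the system using Python’s `fractions` module (a verification sketch, certainly not a suggested middle school approach):

```python
from fractions import Fraction

# the system from the two area ratios:
#   4x = 36 - 3y   (triangle ABP : triangle DAP = 3 : 4)
#   3y = 12 - x    (triangle BCP : triangle CDP = 1 : 3)
# substituting the second into the first: 4x = 36 - (12 - x), so 3x = 24
x = Fraction(24, 3)
y = (12 - x) / 3

assert x == 8 and y == Fraction(4, 3)

# area of triangle ABP, as requested
area_ABP = Fraction(1, 2) * 12 * x
assert area_ABP == 48

# extension check: triangle BCP : triangle DAP = y : (12 - y) = 1 : 8
assert y / (12 - y) == Fraction(1, 8)
```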
At (0,12), both triangles are degenerate with area 0.  The second ratio’s line intersects the square between (12,0) and (0,4).  As the first segment is longer (how would a middle schooler prove that?), it is more likely that a randomly chosen point would satisfy the $\Delta ABP : \Delta DAP$ ratio.  This would be a challenging probability problem, methinks.

FURTHER EXTENSIONS?

What other possibilities do you see either for a solution to the original problem or an extension?

## Party Ratios

I find LOTS of great middle school problems from @Five_Triangles on Twitter.  Their post two days ago was no exception.  The problem requires a little stamina, but can be approached many ways–two excellent criteria for worthy student explorations.  That it has some solid extensions makes it even better.  Following are a few different solution approaches some colleagues and I created.

INITIAL THOUGHTS, VISUAL ORGANIZATION, & A SOLUTION

The most challenging part of this problem is data organization.  My first thought was a 2-circle Venn Diagram–one for gender and one for age.  In my experience, these types of Venn Diagrams are often more easily understood in 2×2 table form with extra spaces for totals.  Here’s what I set up initially.

The ratio of Women:Girls was 11:4, so the 24 girls meant each “unit” in this ratio accounted for 24/4=6 people.  That gave 11*6=66 women and 66+24=90 females.

At this point, my experience working with algebraic problems tempted me to overthink the situation.  I was tempted to let B represent the unknown number of boys and set up some equations to solve.  Knowing that most 6th graders would not think about variables, I held back that instinct in an attempt to discover what a less-experienced mind might try.  I present my initial algebra solution below.

The 5:3 Male:Female ratio told me that each “gender unit” represented 90/3=30 people.  That meant there were 5*30=150 males and 240 total people at the party.
Then, the 4:1 Adult:Children ratio showed how to age-divide every group of 5 partygoers.  With 240/5=48 such groups, there were 48 children and 4*48=192 adults.  Subtracting the already known 66 women gave the requested answer: 192-66=126 men.

While this Venn Diagram/table approach made sense to me, I was concerned that it was a moderately sophisticated and not quite intuitive problem-solving technique for younger middle school students.

WHAT WOULD A MIDDLE SCHOOLER THINK?

A middle school teaching colleague, Becky, offered a different solution I could see students creating.  Completely independently, she solved the problem in exactly the same order I did, using ratio tables to manage the scaling at each step instead of my “unit ratios”.  I liked her visual representation of the 4:1 Adults:Children ratio to find the number of adults, which gave the requested number of men.  I suspect many more students would implicitly or explicitly use chunking strategies like this visual representation to work the ratios.

WHY HAVE JUST ONE SOLUTION?

Math problems involving ratios can usually be opened up to allow multiple, or even an infinite number of, solutions.  This leads to some interesting problem extensions if you eliminate the “24 girls” restriction.  Here are a few examples and sample solutions.

What is the least number of partygoers?  Notice from the table above that all of the values have a common factor of 6.  Dividing the total partygoers by this reveals that 240/6=40 is the least number.  Any multiple of this number is also a legitimate solution.  Interestingly, the 11:4 Women:Girls ratio becomes explicitly obvious when you scale the table down to its least common value.

My former student and now colleague, Teddy, arrived at this value another way.  Paraphrasing, he noted that the 5:3 Male:Female ratio meant any valid total had to be a multiple of 5+3=8.  Likewise, the 4:1 Adult:Child ratio requires totals to be multiples of 4+1=5.
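The unit-ratio bookkeeping above reduces to a few lines of integer arithmetic; here’s a sketch that doubles as a check of every intermediate value:

```python
girls = 24
unit = girls // 4              # Women:Girls = 11:4, so one "unit" is 24/4 = 6 people
women = 11 * unit              # 66
females = women + girls        # 90
males = 5 * (females // 3)     # Male:Female = 5:3
total = males + females        # 240
children = total // 5          # Adult:Child = 4:1, so 1 of every 5 is a child
adults = total - children      # 192
men = adults - women           # the requested answer
boys = children - girls

assert (women, females, males, total) == (66, 90, 150, 240)
assert (children, adults, men, boys) == (48, 192, 126, 24)
```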
The LCM of 8 & 5 is 40, the same value found above.

What do all total partygoer numbers have in common?  As explained above, any multiple of 40 is a legitimate number of partygoers.

If the venue could support no more than 500 attendees, what is the maximum number of women attending?  12*40=480 is the greatest multiple of 40 below 500.  Because 480 is double the initial problem’s total, 66*2=132 is the maximum number of women.  Note that this can be rephrased to accommodate any other gender/age/total target.

Under the given conditions, will the numbers of boys and girls at the party ever be identical?  As with all ratio problems, larger values are always multiples of the least common solution.  That means the numbers of boys and girls will always be identical or always be different.  From above, you can deduce that the numbers of boys and girls under the given conditions are both multiples of 4–each is 4 at the minimum party size of 40–so they are always identical.

What variations can you and/or your students create?

RESOLVING THE INITIAL ALGEBRA

Now to the solution variation I was initially inclined to produce.  After determining 66 women from the given 24 girls, let B be the unknown number of boys.  That gives B+24 children.  It was given that adults are 4 times as numerous as children, making the number of adults 4(B+24)=4B+96.  Subtracting the known 66 women leaves 4B+30 men, and therefore (4B+30)+B=5B+30 males.  Compiling all of this gives the table below.

The 5:3 Male:Female ratio means

$\displaystyle \frac{5}{3} = \frac{5B+30}{90} \longrightarrow B=24$,

the same result as earlier.

ALGEBRA OVERKILL

Winding through all of that algebra ultimately isn’t that computationally difficult, but it certainly is more than typical 6th graders could handle.  The problem could be generalized even further, as Teddy shared with me.  If the entire table were written in variables with W=number of women, M=men, G=girls, and B=boys, the given ratios in the problem would lead to a reasonably straightforward 4×4 system of equations.
If you understand enough to write all of those equations, I’m certain you could solve them, so I’d feel confident allowing a CAS to do that for me.  My TI-Nspire gives this.  And that certainly isn’t work you’d expect from any 6th grader.

CONCLUSION

Given that the 11:4 Women:Girls ratio was the only “internal” ratio, it was apparent in retrospect that all solutions except the 4×4 system approach had to find the female values first.  There are still several ways to resolve the problem, but I found it interesting that while there was no “direct route”, every reasonable solution started with the same steps.

Thanks to colleagues Teddy S & Becky M for sharing their solution proposals.

## Many Roads Give Same Derivative

A recent post in the AP Calculus Community expressed some confusion about different ways to compute $\displaystyle \frac{dy}{dx}$ at (0,4) for the function $x=2ln(y-3)$.  I share below the two approaches suggested in the original post, proffer two more, and describe a slightly more in-depth activity I’ve used in my calculus classes for years.  I conclude with an alternative approach to derivatives of inverse functions.

### Two Approaches Initially Proposed

1 – Accept the function as posed and differentiate implicitly.

$\displaystyle \frac{d}{dx} \left( x = 2 ln(y-3) \right)$

$\displaystyle 1 = 2*\frac{1}{y-3} * \frac{dy}{dx}$

$\displaystyle \frac{dy}{dx} = \frac{y-3}{2}$

This gives $\displaystyle \frac{dy}{dx} = \frac{1}{2}$ at (x,y)=(0,4).

2 – Solve for y and differentiate explicitly.

$\displaystyle x = 2ln(y-3) \longrightarrow y = 3 + e^{x/2}$

$\displaystyle \frac{dy}{dx} = e^{x/2} * \frac{1}{2}$

Evaluating this at (x,y)=(0,4) gives $\displaystyle \frac{dy}{dx} = \frac{1}{2}$.

### Two Alternative Approaches

3 – Substitute early.

The question never asked for an algebraic expression of $\frac{dy}{dx}$, only the numerical value of this slope.
Because students tend to make more silly mistakes manipulating algebraic expressions than numeric ones, the additional algebra steps are unnecessary and potentially error-prone.  Admittedly, the manipulations are pretty straightforward here, but in more algebraically complicated cases, early substitution could significantly simplify the work.  Using approach #1 and substituting directly into its second line gives

$\displaystyle 1 = 2 * \frac{1}{y-3} * \frac{dy}{dx}$.

At (x,y)=(0,4), this is

$\displaystyle 1 = 2 * \frac{1}{4-3}*\frac{dy}{dx}$

$\displaystyle \frac{dy}{dx} = \frac{1}{2}$

The numeric manipulations on the right side are obviously easier than the earlier algebra.

4 – Solve for $\frac{dx}{dy}$ and reciprocate.

There’s nothing sacred about solving for $\frac{dy}{dx}$ directly.  Why not compute the derivative of the inverse and reciprocate at the end?  Differentiating first with respect to y eventually leads to the same solution.

$\displaystyle \frac{d}{dy} \left( x = 2 ln(y-3) \right)$

$\displaystyle \frac{dx}{dy} = 2 * \frac{1}{y-3}$

At (x,y)=(0,4), this is $\displaystyle \frac{dx}{dy} = \frac{2}{4-3} = 2$, so $\displaystyle \frac{dy}{dx} = \frac{1}{2}$.

### Equivalence = A fundamental mathematical concept

I sometimes wonder if teachers should place much more emphasis on equivalence.  We spend so much time in mathematics classes at all levels manipulating expressions, changing mathematical objects (shapes, expressions, equations, etc.) into different, but equivalent, objects.  Many times, these manipulations are completed under the guise of “simplification.”  (Here is a brilliant Dan Teague post cautioning against taking this idea too far.)

But it is critical for students to recognize that proper application of manipulations creates equivalent expressions, even when the resulting expressions don’t look the same.  The reason we manipulate mathematical objects is to discover features of an object in one form that may not be immediately obvious in another.
For the function $x = 2 ln(y-3)$, the slope at (0,4) must be the same no matter how that slope is calculated.  If you get a different-looking answer while using correct manipulations, the answers must be equivalent.

### Another Example

A similar question appeared on the AP Calculus email list-server almost a decade ago, right at the moment I was introducing implicit differentiation.  A teacher had tried to find $\displaystyle \frac{dy}{dx}$ for $\displaystyle x^2 = \frac{x+y}{x-y}$ three ways: using implicit differentiation on the quotient, manipulating to a product before using implicit differentiation, and finally solving for y in terms of x to use an explicit derivative.

1 – Implicit on a quotient

Take the derivative as given:

$\displaystyle \frac{d}{dx} \left( x^2 = \frac{x+y}{x-y} \right)$

$\displaystyle 2x = \frac{(x-y) \left( 1 + \frac{dy}{dx} \right) - (x+y) \left( 1 - \frac{dy}{dx} \right) }{(x-y)^2}$

$\displaystyle 2x * (x-y)^2 = (x-y) + (x-y)*\frac{dy}{dx} - (x+y) + (x+y)*\frac{dy}{dx}$

$\displaystyle 2x * (x-y)^2 = -2y + 2x * \frac{dy}{dx}$

$\displaystyle \frac{dy}{dx} = \frac{2x * (x-y)^2 + 2y}{2x}$

2 – Implicit on a product

Multiplying the original equation by its denominator gives

$x^2 * (x - y) = x + y$ .

Differentiating with respect to x gives

$\displaystyle 2x * (x - y) + x^2 * \left( 1 - \frac{dy}{dx} \right) = 1 + \frac{dy}{dx}$

$\displaystyle 2x * (x-y) + x^2 - 1 = x^2 * \frac{dy}{dx} + \frac{dy}{dx}$

$\displaystyle \frac{dy}{dx} = \frac{2x * (x-y) + x^2 - 1}{x^2 + 1}$

3 – Explicit

Solving the equation at the start of method 2 for y gives

$\displaystyle y = \frac{x^3 - x}{x^2 + 1}$.

Differentiating with respect to x gives

$\displaystyle \frac{dy}{dx} = \frac {\left( x^2+1 \right) \left( 3x^2 - 1\right) - \left( x^3 - x \right) (2x+0)}{\left( x^2 + 1 \right) ^2}$

Equivalence

Those 3 forms of the derivative look VERY DIFFERENT.  Assuming no errors in the algebra, they MUST be equivalent because they are nothing more than the same derivative of different forms of the same function, and a function’s rate of change doesn’t vary just because you alter the look of its algebraic representation.

Substituting the y-as-a-function-of-x equation from method 3 into the first two derivative forms converts all three into functions of x.  Lots of by-hand algebra or a quick check on a CAS establishes the suspected equivalence.  Here’s my TI-Nspire CAS check.
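Without a CAS, a numeric spot-check works just as well.  The sketch below evaluates all three forms along the curve (using the explicit y from method 3 to stay on the curve, and method 1’s derivative solved from $2x(x-y)^2 = -2y + 2x \frac{dy}{dx}$) and confirms they agree:

```python
def y_of_x(x):
    # explicit form of the curve, from method 3
    return (x**3 - x) / (x**2 + 1)

def dydx_1(x):
    # method 1, solved from 2x(x-y)^2 = -2y + 2x*dy/dx
    y = y_of_x(x)
    return (2 * x * (x - y)**2 + 2 * y) / (2 * x)

def dydx_2(x):
    # method 2 (product form)
    y = y_of_x(x)
    return (2 * x * (x - y) + x**2 - 1) / (x**2 + 1)

def dydx_3(x):
    # method 3 (explicit quotient rule)
    return ((x**2 + 1) * (3 * x**2 - 1) - (x**3 - x) * 2 * x) / (x**2 + 1)**2

for x in [0.5, 1.3, 2.0, -1.7, 3.9]:   # avoid x = 0, where method 1 divides by 2x
    d1, d2, d3 = dydx_1(x), dydx_2(x), dydx_3(x)
    assert abs(d1 - d2) < 1e-9 and abs(d2 - d3) < 1e-9
```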

Here’s the form of this investigation I gave my students.

### Final Example

I’m not a big fan of memorizing anything without a VERY GOOD reason.  My teachers telling me to do so never held much weight for me.  I memorized as little as possible and used that information as long as I could until a scenario arose to convince me to memorize more.  One thing I managed to avoid almost completely was the annoying derivative formulas for inverse trig functions.

For example, find the derivative of $y = arcsin(x)$ at $x = \frac{1}{2}$.

Since arc-trig functions annoy me, I always rewrite them.  Taking the sine of both sides and then differentiating with respect to x gives

$sin(y) = x$

$\displaystyle cos(y) * \frac{dy}{dx} = 1$

I could rewrite this equation to give $\frac{dy}{dx} = \frac{1}{cos(y)}$, a perfectly reasonable form of the derivative, albeit a less common expression in terms of y.  But I don’t even do that unnecessary algebra.  From the original function, $x=\frac{1}{2} \longrightarrow y=\frac{\pi}{6}$, and I substitute that immediately after the differentiation step to give a much cleaner numeric route to my answer.

$\displaystyle cos \left( \frac{\pi}{6} \right) * \frac{dy}{dx} = 1$

$\displaystyle \frac{\sqrt{3}}{2} * \frac{dy}{dx} = 1$

$\displaystyle \frac{dy}{dx} = \frac{2}{\sqrt{3}}$

And this is the same result as plugging $x = \frac{1}{2}$ into the memorized form of the derivative of arcsine.  If you like memorizing, go ahead, but my mind remains more nimble and less cluttered.
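A central-difference check (a quick numeric sketch) confirms both routes give the same slope:

```python
import math

x0, h = 0.5, 1e-6
numeric = (math.asin(x0 + h) - math.asin(x0 - h)) / (2 * h)  # central difference
exact = 2 / math.sqrt(3)          # from the implicit differentiation above

assert abs(numeric - exact) < 1e-8
# ...and it matches the memorized formula 1/sqrt(1 - x^2) as well
assert abs(exact - 1 / math.sqrt(1 - x0**2)) < 1e-12
```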

One final equivalent approach would have been differentiating $sin(y) = x$ with respect to y and reciprocating at the end.

### CONCLUSION

There are MANY ways to compute derivatives.  For any problem or scenario, use the one that makes sense or is computationally easiest for YOU.  If your resulting algebra is correct, you know you have a correct answer, even if it looks different.  Be strong!

## Straightening Standard Deviations

This post describes a bivariate data problem I introduced last month in my AP Statistics class, but it easily could have appeared in any Algebra 2 or PreCalculus course, particularly for classes adapting to the statistics strands of the CCSSM and new SAT standards.  While I used the lab to introduce the standard deviation of sample means, the approach also could be used if your bivariate statistics unit occurs later in your sequencing.

My class started a unit on sampling when we returned in January.  They needed to understand how larger sample sizes tended to shrink standard deviations, but I didn’t want to just give them the formula

$\displaystyle \sigma_{\overline{x}}=\frac{\sigma}{\sqrt{n}}$ .

I know many teachers introduce this relationship by selecting samples with perfect square sizes so the standard deviations of the sample means shrink by integer factors (quadruple the sample size = halve the standard deviation, multiply the sample size by 9 = divide the standard deviation by 3, etc.), but I didn’t want to exert that much control.  My students had explored data straightening techniques in the fall and were used to sampling and simulations, so I wanted to see how successfully they could leverage that background to “discover” the sample standard deviation relationship.

My AP Statistics students use TI Nspire CAS software on their laptops, so I wrote their lab using that technology.  The lab could easily be adapted to whatever statistics technology you use in your class.  You can download a pdf of my lab here.

LAB RESULTS AND REFLECTIONS

The activity drew samples from a normal distribution for which students were able to define their own means and standard deviations.  Students could choose any values, but those who chose integers tended to make the later connections more easily.

Their first step was to draw 2500 different random samples for each of the sizes n=1, 4, 10, 25, 50, 100.  For each sample size, students computed the mean and standard deviation of the resulting 2500 sample means.  In retrospect, I should have let students select all or most of their own sample sizes, but I’m still quite satisfied with the results.  If you do experiment with different sample sizes, definitely run the larger potential sizes on your technology to check computation times.

One student chose $\mu = 7$ and $\sigma = 13$.  Her sample means and standard deviations are

It was pretty obvious to her that no matter what the sample size, $\overline{x} \approx \mu$, but the standard deviations were shrinking as the sample sizes grew.  Determining that relationship was the heart of the activity.  Obviously, the sample size (SS) seemed to drive the sample standard deviation (SD), so my student graphed her (SS, SD) data to get the plot below.

We had explored bivariate data-straightening techniques at the end of the fall semester, so she tried semi-log and log-log transformations to check for the possibilities that these data might be represented by an exponential or power function, respectively.  Her semi-log transformation was still curved, but the log-log was very straight.  That transformation and its accompanying linear regression are below.

Her residuals were small, balanced, and roughly random, so she knew she had a reasonable fit.  From there, she used her CAS to transform (re-curve) the linear regression back to an equation for the original data.

It made sense that this resulting formula not only depended on the sample size, but also originally on the population standard deviation my student had earlier chosen to be $\sigma = 13$.  Within reasonable round-off deviations, the numerator appeared to be the population standard deviation and the exponent of the denominator was very close to $\frac{1}{2}$, indicating a square root.  That gave her the expected sample standard deviation formula, $\displaystyle \sigma_{\overline{x}} = \frac{\sigma}{\sqrt{n}}$.
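The whole simulation is easy to replicate outside the Nspire.  Here’s a sketch in Python (using her $\mu = 7$ and $\sigma = 13$) that draws 2500 sample means for several sample sizes and compares their standard deviation to $\frac{\sigma}{\sqrt{n}}$:

```python
import math
import random
import statistics

random.seed(1)
mu, sigma = 7, 13

for n in [1, 4, 25, 100]:
    # 2500 samples of size n, keeping only each sample's mean
    means = [statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
             for _ in range(2500)]
    predicted = sigma / math.sqrt(n)
    observed = statistics.stdev(means)
    # with 2500 samples, observed and predicted should agree within a few percent
    assert abs(observed - predicted) / predicted < 0.1
```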

I know this formula is provided on the AP Statistics Exam, but the simulation, curve straightening, linear regression, and statistical confirmation of the formula were a great review and exercise.  I hope you find it useful, too.

## Stats Exploration Yields Deeper Understanding

or “A lesson I wouldn’t have learned without technology”

Last November, some of my AP Statistics students were solving a problem involving a normal distribution with an unknown mean.  Leveraging the TI Nspire CAS calculators we use for all computations, they crafted a logical command that should have worked.  Their unexpected result initially left us scratching our heads.  After some conversations with the great folks at TI, we realized that what at first seemed like a perfectly reasonable setup for a single answer in fact had two solutions.  And it took until the end of this week for another student to finally identify and resolve the mysterious results.  This blog post recounts our journey from a questionable normal probability result to a rich approach to confidence intervals.

THE INITIAL PROBLEM

I had assigned an AP Statistics free response question about a manufacturing process that could be manipulated to control the mean distance its golf balls would travel.  We were told that the process created balls whose distances were normally distributed with a mean of 288 yards and a standard deviation of 2.8 yards.  The first part asked students to find the probability of balls traveling more than an allowable 291.2 yards.  This was straightforward:  find the area under a normal curve with mean 288 and standard deviation 2.8 from 291.2 to infinity.  The Nspire (CAS and non-CAS) syntax for this is:

[Post publishing note: See Dennis’ comment below for a small correction for the non-CAS Nspires.  I forgot that those machines don’t accept “infinity” as a bound.]
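The same computation can be sketched with Python’s standard-library NormalDist.  To be clear, this is not the Nspire syntax; only the numbers come from the problem:

```python
from statistics import NormalDist

balls = NormalDist(mu=288, sigma=2.8)   # stated process parameters
p_too_far = 1 - balls.cdf(291.2)        # area above 291.2 yards
# about 0.127, i.e. roughly 12.7% of balls travel too far
```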

Because 12.7% of the golf balls traveling too far is obviously an unacceptably high percentage, the next part asked for the mean distance needed so that 99% of the balls traveled allowable distances.  That’s when things got interesting.

A “LOGICAL” RESPONSE RESULTS IN A MYSTERY

Their initial thought was that even though they didn’t know the mean, they now knew the output of their normCdf command.  Since the balls couldn’t travel a negative distance and zero was many standard deviations from the unknown mean, the following equation, with x representing the unknown mean, should have defined the scenario nicely.

Because this was an equation with a single unknown, we could now use our CAS calculators to solve for the missing parameter.

Something was wrong.  How could the mean distance possibly be just 6.5 yards?  The Nspires are great, reliable machines.  What happened?

I had encountered unexpected answers like this before when a solve command was applied to a normal cdf with two finite bounds.  While I couldn’t see why this should make a difference, I asked them to try an infinite lower bound and also to try computing the area on the other side of 291.2.  Both approaches provided the expected solution.

The caution symbol on the last line should have been a warning, but I honestly didn’t see it at the time.  I was happy to see the expected solution, but quite frustrated that infinite bounds seemed to be required.  Beyond three standard deviations from the mean of any normal distribution, almost no area exists, so how could extending the lower bound from 0 to negative infinity make any difference in the solution when 0 was already $\frac{291.2}{2.8}=104$ standard deviations away from 291.2?  I couldn’t make sense of it.

My initial assumption was that something was wrong with the programming in the Nspire, so I emailed some colleagues I knew within CAS development at TI.

GRAPHS REVEAL A HIDDEN SOLUTION

They reminded me that statistical computations in the Nspire CAS were resolved through numeric algorithms–an understandable approach given the algebraic definition of the normal and other probability distribution functions.  The downside is that numeric solvers may not pick up on (or may be incapable of finding) difficult-to-locate or multiple solutions.  Their suggestion was to employ a graph whenever we got stuck.  This, too, made sense because graphing a function forces the machine to evaluate multiple values of the unknown variable over a predefined domain.

It was also a good reminder for my students that a solution to any algebraic equation can be thought of as the first substitution solution step for a system of equations.  Going back to the initially troublesome input, I rewrote normCdf(0,291.2,x,2.8)=0.99 as the system

y=normCdf(0,291.2,x,2.8)
y=0.99

and “the point” of intersection of that system would be the solution we sought.  Notice my emphasis indicating my still lingering assumptions about the problem.  Graphing both equations shone a clear light on what was my persistent misunderstanding.

I was stunned to see two intersection solutions on the screen.  Asking the Nspire for the points of intersection revealed BOTH ANSWERS my students and I had found earlier.

If both solutions were correct, then there really were two different normal pdfs that could solve the finite bounded problem.  Graphing these two pdfs finally explained what was happening.

By equating the normCdf result to 0.99 with FINITE bounds, I never specified on which end the additional 0.01 existed–left or right.  This graph showed the 0.01 could have been at either end, one with a mean near the expected 284 yards and the other with a mean near the unexpected 6.5 yards.  The graph below shows both normal curves, with the 6.5 solution placing the additional 0.01 on the left and the 284 solution placing it on the right.
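A numeric root search tells the same story without a graphing window.  This sketch (Python with simple bisection; the search intervals are my choices) recovers both means:

```python
from statistics import NormalDist

def area(mu):
    # area under N(mu, 2.8) between 0 and 291.2
    d = NormalDist(mu, 2.8)
    return d.cdf(291.2) - d.cdf(0)

def solve(lo, hi, target=0.99, tol=1e-9):
    # bisection; assumes area(mu) - target changes sign on [lo, hi]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if (area(lo) - target) * (area(mid) - target) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

low_root = solve(0, 100)      # near 6.5: the extra 0.01 sits on the left
high_root = solve(200, 291)   # near 284.7: the extra 0.01 sits on the right
```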

The CAS wasn’t wrong in the beginning.  I was.  And as has happened several times before, the machine didn’t rely on the same sometimes errant assumptions I did.  My students had made a very reasonable assumption that the area under the normal pdf for the golf balls should start only at 0 (no negative distances) and inadvertently stumbled into a much richer problem.

A TEMPORARY FIX

The infinity-bounded solutions didn’t produce the unexpected second solution because no area can lie beyond an infinite bound, so there was no place for the unspecified extra 0.01 to hide.

To avoid unexpected multiple solutions, I resolved to tell my students to use infinite bounds whenever solving for an unknown parameter.  It was a little dissatisfying to not be able to use my students’ “intuitive” lower bound of 0 for this problem, but at least they wouldn’t have to deal with unexpected, counterintuitive results.

Surprisingly, the permanent solution arrived weeks later when another student shared his fix for a similar problem when computing confidence interval bounds.

A PERMANENT FIX FROM AN UNEXPECTED SOURCE

I really don’t like the way almost all statistics textbooks provide complicated formulas for computing confidence intervals using standardized z- and t-distribution critical scores.  Ultimately, a 95% confidence interval is nothing more than the bounds of the middle 95% of a probability distribution whose mean and standard deviation are defined by a sample from the overall population.  Where the problem above solved for an unknown mean, computing a confidence interval on a CAS follows essentially the same reasoning to determine the missing endpoints.

My theme in every math class I teach is to memorize as little as you can, and use what you know as widely as possible.  Applying this to AP Statistics, I never reveal the existence of confidence interval commands on calculators until we’re 1-2 weeks past their initial introduction.  This allows me to develop a solid understanding of confidence intervals using a variation on calculator commands they already know.

For example, assume you need a 95% confidence interval of the percentage of votes Bernie Sanders is likely to receive in Monday’s Iowa Caucus.  The CNN-ORC poll released January 21 showed Sanders leading Clinton 51% to 43% among 280 likely Democratic caucus-goers.  (Read the article for a glimpse at the much more complicated reality behind this statistic.)  In this sample, the proportion supporting Sanders is approximately normally distributed with sample proportion p=0.51 and standard deviation of p of $\sqrt{\frac{(0.51)(0.49)}{280}} \approx 0.0299$.  The 95% confidence interval is defined by the bounds containing the middle 95% of the data of this normal distribution.

Using the earlier lesson, one student suggested finding the bounds on his CAS by focusing on the tails.

giving a confidence interval of (0.45, 0.57) for Sanders for Monday’s caucus, according to the method of the CNN-ORC poll from mid-January.  Using a CAS keeps my students focused on what a confidence interval actually means without burying them in the underlying computations.
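His focus-on-the-tails idea translates directly to inverse-cdf calls.  A sketch with the poll’s numbers, using Python’s standard library rather than the Nspire:

```python
from math import sqrt
from statistics import NormalDist

p_hat = 0.51
se = sqrt(p_hat * (1 - p_hat) / 280)    # about 0.0299
sanders = NormalDist(p_hat, se)

lower = sanders.inv_cdf(0.025)  # cuts off the bottom 2.5% tail
upper = sanders.inv_cdf(0.975)  # cuts off the top 2.5% tail
# (lower, upper) is roughly (0.45, 0.57)
```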

That’s nice, but what if you needed a confidence interval for a sample mean?  Unfortunately, the t-distribution on the Nspire is completely standardized, so confidence intervals need to be built from critical t-values.  Like on a normal distribution, a 95% confidence interval is defined by the bounds containing the middle 95% of the data.  One student reasonably suggested the following for a 95% confidence interval with 23 degrees of freedom.  I really liked the explicit syntax definition of the confidence interval.

Alas, the CAS returned the input.  It couldn’t find the answer in that form.  Cognizant of the lessons learned above, I suggested reframing the query with an infinite bound.

That gave the proper endpoint, but I was again dissatisfied with the need to alter the input, even though I knew why.

That’s when another of my students spoke up to say that he got the solution to work with the initial commands by including a domain restriction.

Of course!  When more than one solution is possible, restrict the bounds to the solution range you want.  Then you can use the commands that make sense.

FIXING THE INITIAL APPROACH

That small fix finally gave me the solution to the earlier syntax issue with the golf ball problem.  There were two solutions to the initial problem, so if I restricted the unknown mean to a reasonable range, my students could use their intuitive approach and get the answer they needed.

If a mean of 288 yards and a standard deviation of 2.8 yards resulted in 12.7% of the area above 291.2, then it wouldn’t take much of a left shift in the mean to leave just 1% of the area above 291.2. Surely that unknown mean would be no lower than 3 standard deviations below the current 288, somewhere above 280 yards.  Adding that single restriction to my students’ original syntax solved their problem.

Perfection!
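In Python terms, the domain restriction amounts to searching only means above 280, where the area function changes in one direction, so a single root remains.  A sketch (the 280 cutoff is the students’ restriction; the bisection bookkeeping is mine):

```python
from statistics import NormalDist

def area(mu):
    # area under N(mu, 2.8) between 0 and 291.2
    d = NormalDist(mu, 2.8)
    return d.cdf(291.2) - d.cdf(0)

# restrict the search to mu in [280, 291.2]; on this interval area()
# strictly decreases as mu slides right, so bisection finds the one
# intended mean
lo, hi = 280.0, 291.2
while hi - lo > 1e-9:
    mid = (lo + hi) / 2
    if area(mid) >= 0.99:
        lo = mid
    else:
        hi = mid
# lo (equal to hi within tolerance) is near 284.7 yards
```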

CONCLUSION

By encouraging a deep understanding of both the underlying statistical content AND of their CAS tool, students are increasingly able to find creative solutions using flexible methods and expressions intuitive to them.  And shouldn’t intellectual strength, creativity, and flexibility be the goals of every learning experience?

## Unanticipated Proof Before Algebra

I was talking with one of our 5th graders, S,  last week about the difference between showing a few examples of numerical computations and developing a way to know something was true no matter what numbers were chosen.  I hadn’t started our conversation thinking about introducing proof.  Once we turned in that direction, I anticipated scaffolding him in a completely different direction, but S went his own way and reinforced for me the importance of listening and giving students the encouragement and room to build their own reasoning.

SETUP:  S had been telling me that he “knew” the product of an even number with any other number would always be even, while the product of any two odds was always odd.  He demonstrated this by showing lots of particular products, but I asked him if he was sure that it was still true if I were to pick some numbers he hadn’t used yet.  He was.

Then I asked him how many numbers were possible to use.  He promptly replied “infinite,” at which point he finally started to see the difficulty with demonstrating that every product worked.  “We don’t have enough time” to do all that, he said.  Finally, I had maneuvered him to perhaps his first-ever realization of the need for proof.

ANTICIPATION:  But S knew nothing of formal algebra.  From my experiences with younger students sans algebra, I thought I would eventually need to help him translate his numerical problem into a geometric one.  But this story is about S’s reasoning, not mine.

INSIGHT:  I asked S how he would handle any numbers I asked him to multiply to prove his claims, even if I gave him some ridiculously large ones.  “It’s really not as hard as that,” S told me.  He quickly scribbled

on his paper and covered up all but the one’s digit.  “You see,” he said, “all that matters is the units.  You can make the number as big as you want and I just need to look at the last digit.”  Without using this language, S was venturing into an even-odd proof via modular arithmetic.

With some more thought, he reasoned that he would focus on just the units digit through repeated multiples and see what happened.

FIFTH GRADE PROOF:  S’s math class is currently working through a multiplication unit in our 5th grade Bridges curriculum, so he was already in the mindset of multiples.  Since he said only the units digit mattered, he decided he could start with any even number and look at all of its multiples.  That is, he could keep adding the number to itself and see what happened.  As shown below, he first chose 32 and found the next four multiples, 64, 96, 128, and 160.  After that, S said the very next number in the list would end in a 2 and the loop would start all over again.

He stopped talking for several seconds, and then he smiled.  “I don’t have to look at every multiple of 32.  Any multiple will end up somewhere in my cycle and I’ve already shown that every number in this cycle is even.  Every multiple of 32 must be even!”  It was a pretty powerful moment.  Since he only needed to see the last digit, and any number ending in 2 would just add 2s to the units, this cycle now represented every number ending in 2 in the universe.  The last line above was S’s use of 1002 to show that the same cycling happened for another “2 number.”
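S’s cycle is easy to check by machine.  A tiny Python sketch of his observation (the code is mine, not part of his work):

```python
# units digits of the first six multiples of 32: S's cycle, then a repeat
units = [(32 * k) % 10 for k in range(1, 7)]
# every entry is even, and another "2 number" (1002) walks the same cycle
same_cycle = [(1002 * k) % 10 for k in range(1, 7)] == units
```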

DIFFERENT KINDS OF CYCLES:  So could he use this for all multiples of even numbers?  His next try was an “8 number.”

After five multiples of 18, he achieved the same cycling.  Even cooler, he noticed that the cycle for “8 numbers” was the “2 number” cycle backwards.

Also note that after S completed his 2s and 8s lists, he used only single-digit seed numbers, as the bigger starting numbers only complicated his examples.  He was on a roll now.

I asked him how the “4 number” cycle was related.  He noticed that the 4s used every other number in the “2 number” cycle.  It was like skip counting, he said.  Another lightbulb went off.

“And that’s because 4 is twice 2, so I just take every 2nd multiple in the first cycle!”  He quickly scratched out a “6 number” example.

This, too, cycled, but more importantly, because 6 is thrice 2, he said that was why this list used every 3rd number in the “2 number” cycle.  In that way, every even number multiple list was the same as the “2 number” list, you just skip-counted by different steps on your way through the list.
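S’s skip-counting claim can also be verified directly.  In the sketch below (the index bookkeeping is mine, not his), the “4 number” cycle is every 2nd entry of the repeating “2 number” cycle, and the “6 number” cycle is every 3rd:

```python
def units_cycle(d, length=5):
    # units digits of d, 2d, 3d, ... (one full cycle for even d)
    return [(d * k) % 10 for k in range(1, length + 1)]

c2 = units_cycle(2)  # [2, 4, 6, 8, 0]
c4 = units_cycle(4)
c6 = units_cycle(6)

# every 2nd entry of the repeating 2-cycle reproduces the 4-cycle ...
every_2nd = [c2[(2 * (k + 1) - 1) % 5] for k in range(5)]
# ... and every 3rd entry reproduces the 6-cycle
every_3rd = [c2[(3 * (k + 1) - 1) % 5] for k in range(5)]
```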

When I asked how he could get all the numbers in such a short list when he was counting by 3s, S said it wasn’t a problem at all.  Since it cycled, whenever you got to the end of a list, just go back to the beginning and keep counting.  We didn’t touch it last week, but he had opened the door to modular arithmetic.

I won’t show them here, but his “0 number” list always ended in 0s.  “This one isn’t very interesting,” he said.  I smiled.

ODDS:  It took a little more thought to start his odd number proof, because every other multiple was even.  After he recognized these as even numbers, S decided to list every other multiple as shown with his “1 number” and “3 number” lists.

As with the evens, the odd number lists could all be seen as skip-counted versions of each other.  Also, the 1s and 9s were written backwards from each other, and so were the 3s and 7s.  “5 number” lists were declared to be as boring as “0 numbers”.  Not only did the odds ultimately end up cycling essentially the same as the evens, but they had the same sort of underlying relationships.
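The odd-number lists check out the same way.  In this sketch, “every other multiple” means the odd multiples d, 3d, 5d, … (again, the code is mine):

```python
def odd_units_cycle(d, length=5):
    # units digits of the odd multiples d, 3d, 5d, ...
    return [(d * (2 * k - 1)) % 10 for k in range(1, length + 1)]

c1, c3 = odd_units_cycle(1), odd_units_cycle(3)
c7, c9 = odd_units_cycle(7), odd_units_cycle(9)

# the 9s run the 1s cycle backwards, and the 7s run the 3s backwards
reversed_pairs = (c9 == c1[::-1]) and (c7 == c3[::-1])
```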

CONCLUSION:  At this point, S declared that since he had shown every possible case for evens and odds, then he had shown that any multiple of an even number was always even, and any odd multiple of an odd number was odd.  And he knew this because no matter how far down the list he went, eventually any multiple had to end up someplace in his cycles.  At that point I reminded S of his earlier claim that there was an infinite number of even and odd numbers.  When he realized that he had just shown a case-by-case reason for more numbers than he could ever demonstrate by hand, he sat back in his chair, exclaiming, “Whoa!  That’s cool!”

It’s not a formal mathematical proof, and when S learns some algebra, he’ll be able to accomplish his cases far more efficiently, but this was an unexpectedly nice and perfectly legitimate numerical proof of even and odd multiples for an elementary student.