# Tag Archives: trigonometry

## Envelope Curves

My precalculus class recently returned to graphs of sinusoidal functions with an eye toward understanding them dynamically via envelope curves: functions that bound the extreme values of the curves. What follows is a series of curves we've explored over the past few weeks.  Near the end is a really cool Desmos link showing an infinite progression of periodic envelopes to a single curve; it's totally worth the read all by itself.

GETTING STARTED

As a simple example, my students earlier had seen the graph of $f(x)=5+2sin(x)$ as $y=sin(x)$ vertically stretched by a magnitude of 2 and then translated upward 5 units.  In their return, I encouraged them to envision the function behavior dynamically instead of statically.  I wanted them to see the curve (and the types of phenomena it could represent) as representing dynamic motion rather than a rigid transformation of a static curve.  In that sense, the graph of f oscillated 2 units (the coefficient of sine in f's equation) above and below the line $y=5$ (the addend in the equation for f).  The curves $y=5+2=7$ and $y=5-2=3$ define the “Envelope Curves” for $y=f(x)$.

When you graph $y=f(x)$ and its two envelope curves, you can picture the sinusoid “bouncing” between its envelopes.  We called these the ceiling and floor functions for f.  Ceilings happen whenever the sinusoidal term reaches its maximum value (+1), and floors whenever it is at its minimum (-1).
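To see the bouncing numerically, here is a quick sketch (my own, not from the post) that samples f over one period and confirms its values stay between the floor $y=3$ and the ceiling $y=7$, approaching each:

```python
import math

# Sample f(x) = 5 + 2*sin(x) over one period and confirm the curve stays
# inside the band between its floor (y = 3) and ceiling (y = 7).
def f(x):
    return 5 + 2 * math.sin(x)

xs = [k * 0.001 for k in range(6284)]   # roughly one period, 0 to 2*pi
ys = [f(x) for x in xs]
lo, hi = min(ys), max(ys)               # approach 3 and 7
```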

Those envelope functions would be just more busy work if it stopped there, though.  The great insights were that anything added to a sinusoid acts as its midline, AND anything multiplied by the sinusoid is its amplitude: the distance the curve moves above and below its midline.  The fun comes when you start to allow variable expressions for the midline and/or amplitude.

VARIABLE MIDLINES AND ENVELOPES

For a first example, consider $y= \frac{x}{2} + sin(x)$.  By the reasoning above, $y= \frac{x}{2}$ is the midline.  The amplitude, 1, is the coefficient of sine, so the envelope curves are $y= \frac{x}{2}+1$ (ceiling) and $y= \frac{x}{2}-1$ (floor).

That got their attention!  Notice how easy it is to visualize the sine curve oscillating between its envelope curves.

For a variable amplitude, consider $y=2+1.2^{-x}*sin(x)$.  The midline is $y=2$, with an “amplitude” of $1.2^{-x}$.  That made a ceiling of $y=2+1.2^{-x}$ and a floor of $y=2-1.2^{-x}$, basically exponential decay curves converging on an end behavior asymptote defined by the midline.
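Here's a similar numeric sketch (again mine, not part of the original post) confirming that this curve never escapes its decaying envelopes:

```python
import math

# The curve y = 2 + 1.2**(-x) * sin(x) should stay between its decaying
# envelopes y = 2 - 1.2**(-x) (floor) and y = 2 + 1.2**(-x) (ceiling).
def curve(x):
    return 2 + 1.2 ** (-x) * math.sin(x)

def ceiling(x):
    return 2 + 1.2 ** (-x)

def floor(x):
    return 2 - 1.2 ** (-x)

inside = all(floor(x) <= curve(x) <= ceiling(x)
             for x in [k * 0.05 for k in range(-100, 400)])
```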

SINUSOIDAL MIDLINES AND ENVELOPES

Now for even more fun.  Convinced that both midlines and amplitudes could be variably defined, I asked what would happen if the midline were another sinusoid.  For $y=cos(x)+sin(x)$, we could think of $y=cos(x)$ as the midline, and with the coefficient of sine being 1, the envelopes are $y=cos(x)+1$ and $y=cos(x)-1$.

Since cosine is a sinusoid, you could get the same curve by considering $y=sin(x)$ as the midline with envelopes $y=sin(x)+1$ and $y=sin(x)-1$.  Only the envelope curves are different!

The curve $y=cos(x)+sin(x)$ raised two interesting questions:

1. Was the addition of two sinusoids always another sinusoid?
2. What transformations of sinusoidal curves could be defined by more than one pair of envelope curves?

For the first question, they theorized that if two sinusoids had the same period, their sum was another sinusoid of the same period, but with a different amplitude and a horizontal shift.  Mathematically, that means

$A*cos(\theta ) + B*sin(\theta ) = C*cos(\theta -D)$

where A & B are the original sinusoids’ amplitudes, C is the new sinusoid’s amplitude, and D is the horizontal shift.  Use the cosine difference identity to derive

$A^2 + B^2 = C^2$  and $\displaystyle tan(D) = \frac{B}{A}$.

For $y = cos(x) + sin(x)$, this means

$\displaystyle y = cos(x) + sin(x) = \sqrt{2}*cos \left( x-\frac{\pi}{4} \right)$,

and the new coefficient means $y= \pm \sqrt{2}$ is a third pair of envelopes for the curve.

Very cool.  We explored several more sums and differences with identical periods.
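The $C$ and $D$ formulas are easy to spot-check numerically; this sketch (mine) uses `atan2` for $D$ so the quadrant comes out right, a detail beyond the bare $tan(D)=\frac{B}{A}$ relation:

```python
import math

# Check A*cos(t) + B*sin(t) = C*cos(t - D) with C = sqrt(A^2 + B^2) and
# D = atan2(B, A), for the post's case A = B = 1.
A, B = 1.0, 1.0
C = math.hypot(A, B)    # sqrt(2)
D = math.atan2(B, A)    # pi/4
max_err = max(abs(A * math.cos(t) + B * math.sin(t) - C * math.cos(t - D))
              for t in [k * 0.1 for k in range(100)])
```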

WHAT HAPPENS WHEN THE PERIODS DIFFER?

Try a graph of $g(x)=cos(x)+cos(3x)$.

Using the earlier concept that any function added to a sinusoid could be considered the midline of the sinusoid, we can picture the graph of g as the graph of $y=cos(3x)$ oscillating around an oscillating midline, $y=cos(x)$:

If you can't see the oscillations yet, note that the coefficient of the $cos(3x)$ term is 1, making the envelope curves $y=cos(x) \pm 1$.  The next graph clearly shows $y=cos(3x)$ bouncing off its ceiling and floor as defined by its envelope curves.

Alternatively, the base sinusoid could have been $y=cos(x)$ with envelope curves $y=cos(3x) \pm 1$.

Similar to the last section when we added two sinusoids with the same period, the sum of two sinusoids with different periods (but the same amplitude) can be rewritten using an identity.

$cos(A) + cos(B) = 2*cos \left( \frac{A+B}{2} \right) * cos \left( \frac{A-B}{2} \right)$

This can be proved in the present form, but it is much easier to prove from an equivalent form:

$cos(x+y) + cos(x-y) = 2*cos(x) * cos(y)$.

For the current function, this means $y = cos(x) + cos(3x) = 2*cos(x)*cos(2x)$.
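A quick numeric check (my own) of this product form before using it to read off envelopes:

```python
import math

# Verify cos(x) + cos(3x) = 2*cos(x)*cos(2x) across a sample of inputs.
max_err = max(abs(math.cos(x) + math.cos(3 * x)
                  - 2 * math.cos(x) * math.cos(2 * x))
              for x in [k * 0.01 for k in range(1000)])
```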

Now that the sum has been rewritten as a product, we can use the coefficient as the amplitude, defining two other pairs of envelope curves.  If $y=cos(2x)$ is the sinusoid, then $y= \pm 2cos(x)$ are envelopes of the original curve, and if $y=cos(x)$ is the sinusoid, then $y= \pm 2cos(2x)$ are envelopes.

In general, I think it's easier to see the envelope effect with the larger-period function.  A particularly nice application of adding sinusoids with identical amplitudes and different periods is the beats musicians hear from the constructive and destructive sound-wave interference between two instruments close to, but not quite, in tune.  The points where the envelopes cross on the x-axis are the quiet points in the beats.

A STUDENT WANTED MORE

In class last Friday, my students were reviewing envelope curves in advance of our final exam when one made the next logical leap and asked what would happen if both the coefficients and periods were different.  When I mentioned that the exam wouldn’t go that far, she uttered a teacher’s dream proclamation:  She didn’t care.  She wanted to learn anyway.  Making up some coefficients on the spot, we decided to explore $f(x)=2sin(x)+5cos(2x)$.

Assuming for now that the cos(2x) term is the primary sinusoid, the envelope curves are $y=2sin(x) \pm 5$.

That was certainly cool, but at this point, we were no longer satisfied with just one answer.  If we assumed sin(x) was the primary sinusoid, the envelopes are $y=5cos(2x) \pm 2$.

Personally, I found the first set of envelopes more satisfying, but it was nice that we could so easily identify another.

The periods were different and the coefficients didn't match, so we decided to split the original function in a way that allowed us to use the $cos(A)+cos(B)$ identity introduced earlier.  Rewriting,

$f(x)=2sin(x)+5cos(2x) = 2cos \left( x - \frac{ \pi }{2} \right) + 2cos(2x) + 3cos(2x)$ .

After factoring out the common coefficient 2, the first two terms now fit the $cos(A) + cos(B)$ identity with $A = x - \frac{ \pi }{2}$ and $B=2x$, allowing the equation to be rewritten as

$f(x)= 2 \left( 2*cos \left( \frac{x - \frac{ \pi }{2} + 2x }{2} \right) * cos \left( \frac{x - \frac{ \pi }{2} - 2x }{2} \right) \right) + 3cos(2x)$

$\displaystyle = 4* cos \left( \frac{3}{2} x - \frac{ \pi }{4} \right) * cos \left( - \frac{1}{2} x - \frac{ \pi }{4} \right) + 3cos(2x)$.
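Since several rewrites are stacked here, a numeric comparison (my sketch) of the final form against the original f is reassuring:

```python
import math

# The rewritten form 4*cos(3x/2 - pi/4)*cos(-x/2 - pi/4) + 3*cos(2x)
# should agree with f(x) = 2*sin(x) + 5*cos(2x) everywhere.
def f(x):
    return 2 * math.sin(x) + 5 * math.cos(2 * x)

def rewritten(x):
    return (4 * math.cos(1.5 * x - math.pi / 4)
              * math.cos(-0.5 * x - math.pi / 4)
            + 3 * math.cos(2 * x))

max_err = max(abs(f(x) - rewritten(x)) for x in [k * 0.01 for k in range(1500)])
```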

With the expression now containing three sinusoidal expressions, there are three more pairs of envelope curves!

Arguably, the simplest approach from this form assumes $cos(2x)$ from the $3cos(2x)$ term as the sinusoid, giving $y=2sin(x)+2cos(2x) \pm 3$ (the pre-identity form three equations earlier in this post) as envelopes.

We didn’t go there, but recognizing that new envelopes can be found simply by rewriting sums creates an infinite number of additional envelopes.  Defining these different sums with a slider lets you see an infinite spectrum of envelopes.  The image below shows one.  Here is the Desmos Calculator page that lets you play with these envelopes directly.

If the $cos \left( \frac{3}{2} x - \frac{ \pi}{4} \right)$ term was the sinusoid, the envelopes would be $y=3cos(2x) \pm 4cos \left( - \frac{1}{2} x - \frac{ \pi }{4} \right)$.  If you look closely, you will notice that this is a different type of envelope pair, with the ceiling and floor curves crossing and trading places at $x= \frac{\pi}{2}$ and every $2\pi$ units before and after.  The third form creates another curious type of crossing envelopes.

CONCLUSION:

In all, it was fun to explore with my students the many possibilities for bounding sinusoidal curves.  It was refreshing to have one student excited by just playing with the curves to see what else we could find for no other reason than just to enjoy the beauty of these periodic curves.  As I reflected on the overall process, I was even more delighted to discover the infinite spectrum of envelopes modeled above on Desmos.

I hope you’ve found something cool here for yourself.

## Squares and Octagons, A compilation

My last post detailed my much-too-long trigonometric proof of why the octagon formed by connecting the midpoints and vertices of the edges of a square into an 8-pointed star is always 1/6 of the area of the original square.

My proof used trigonometry, and responses to the post on Twitter  and on my ‘blog showed many cool variations.  Dave Radcliffe thought it would be cool to have a compilation of all of the different approaches.  I offer that here in the order they were shared with me.

Method 1:  My use of trigonometry in a square.  See my original post.

Method 2:  Using medians in a rectangle from Tatiana Yudovina, a colleague at Hawken School.

Below, Area(a×b rectangle) = ab = 16 blue triangles, and
Area(octagon) = 4 blue triangles - 2 red deltas.

Now look at the two green, similar triangles.  They are similar with ratio 1/2, making

Area(red delta) = $\displaystyle \frac{b}{4} \cdot \frac{a}{6} = \frac{ab}{24}$, and

Area(blue triangle) = $\displaystyle \frac{1}{16} ab$

So, Area(octagon) = $\displaystyle 4 \cdot \frac{ab}{16}-2 \cdot \frac {ab}{24}=\frac{1}{6}ab$.

QED

Method 3:  Using differences in triangle areas in a square (but easily extended to rectangles) from @Five_Triangles (‘blog here).

Full solution here.

Method 4:  Very clever shorter solution using triangle area similarity in a square also from @Five_Triangles (‘blog here).

Full second solution here.

Method 5:  Great option using dilated kites from Dave Radcliffe posting as @daveinstpaul.

Full pdf and proof here.

Method 6:  Using the fact that triangle medians trisect each other, from Mike Lawler posting as @mikeandallie.

Tweet of solution here.

Method 7:  Use a coordinate proof on a specific square from Steve Ingrassia, a colleague at Hawken School.  Not a quick proof like some of the geometric solutions, but it’s definitely different than the others.

If students know the formula for finding the area of any polygon using its coordinates, then they can prove this result very simply with nothing more than simple algebra 1 techniques.   No trig is required.

The area of a polygon with vertices (in either clockwise or counterclockwise order, starting at any vertex) $(x_1, y_1)$, $(x_2, y_2)$, …, $(x_n, y_n)$ is

$\displaystyle Area = \left| \frac{(x_1y_2-x_2y_1)+(x_2y_3-x_3y_2)+...+(x_{n-1}y_n-x_ny_{n-1})+(x_ny_1-x_1y_n)}{2} \right|$

Use a 2×2 square situated with vertices at (0,0), (0,2), (2,2), and (2,0).  Construct segments connecting each vertex with the midpoints of the sides of the square, and find the equations of the associated lines.

• L1 (connecting (0,0) and (2,1)):  y = x/2
• L2 (connecting (0,0) and (1,2)):  y = 2x
• L3 (connecting (0,1) and (2,0)):  y = -x/2 + 1
• L4 (connecting (0,1) and (2,2)):  y = x/2 + 1
• L5 (connecting (0,2) and (1,0)):  y = -2x + 2
• L6 (connecting (0,2) and (2,1)):  y = -x/2 + 2
• L7 (connecting (1,2) and (2,0)):  y = -2x + 4
• L8 (connecting (2,2) and (1,0)):  y = 2x - 2

The 8 vertices of the octagon come at pairwise intersections of some of these lines, which can be found with simple substitution:

• Vertex 1 is at the intersection of L1 and L3:   (1, 1/2)
• Vertex 2 is at the intersection of L3 and L5:  (2/3, 2/3)
• Vertex 3 is at the intersection of L2 and L5:  (1/2, 1)
• Vertex 4 is at the intersection of L2 and L4:  (2/3, 4/3)
• Vertex 5 is at the intersection of L4 and L6:  (1, 3/2)
• Vertex 6 is at the intersection of L6 and L7:  (4/3, 4/3)
• Vertex 7 is at the intersection of L7 and L8:  (3/2, 1)
• Vertex 8 is at the intersection of L1 and L8:  (4/3, 2/3)

Using the coordinates of these 8 vertices in the polygon area formula gives

$\displaystyle \frac{ \left| 1/3 +1/3+0+(-1/3)+(-2/3)+(-2/3)+(-1/3)+0 \right|}{2} = \frac{2}{3}$

Since the area of the original square was 4, the area of the octagon is exactly 1/6th of the area of the square.
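The whole computation can be replayed with exact rational arithmetic; this sketch (mine, not part of Steve's write-up) runs the shoelace formula over the eight vertices:

```python
from fractions import Fraction as F

# The octagon's vertices, in order around the figure, as exact rationals.
verts = [(F(1), F(1, 2)), (F(2, 3), F(2, 3)), (F(1, 2), F(1)),
         (F(2, 3), F(4, 3)), (F(1), F(3, 2)), (F(4, 3), F(4, 3)),
         (F(3, 2), F(1)), (F(4, 3), F(2, 3))]

# Shoelace sum, including the wrap-around term back to the first vertex.
shoelace = sum(x1 * y2 - x2 * y1
               for (x1, y1), (x2, y2) in zip(verts, verts[1:] + verts[:1]))
area = abs(shoelace) / 2    # 2/3, exactly 1/6 of the square's area of 4
```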

## Old school integral

This isn’t going to be one of my typical posts, but I just cracked a challenging indefinite integral and wanted to share.

I made a mistake solving a calculus problem a few weeks ago and ended up at an integral that looked pretty simple.  I tried several approaches and found many dead ends before finally getting a breakthrough.  Rather than just giving a proof, I thought I’d share my thought process in hopes that some students just learning integration techniques might see some different ways to attack a problem and learn to persevere through difficult times.

In my opinion, most students taking a calculus class would never encounter this problem.  The work that follows is clear evidence why everyone doing math should have access to CAS (or tables of integrals when CAS aren’t available).

Here’s the problem:

Integrate $\int \left( x^2 \cdot \sqrt{1+x^2} \right) dx$.

For convenience, I'm going to ignore in this post the arbitrary constant that appears with indefinite integrals.

While there’s no single algebraic technique that will work for all integrals, sometimes there are clues to suggest productive approaches.  In this case, the square root of a binomial involving a constant and a squared variable term suggests a trig substitution.

From trig identities, I knew $tan^2 \theta + 1 = sec^2 \theta$, so my first attempt was to let $x=tan \theta$, which gives $dx=sec^2 \theta d\theta$.  Substituting these leads to

$\displaystyle \int tan^2 \theta \cdot sec^3 \theta \: d\theta$.

I tried using $(tan \theta)'=sec^2 \theta$ to claim two of the secants for the differential in a reversed chain rule, but that left a single secant in the expression, and I couldn't make the trig identities work because odd numbers of trig factors don't convert easily using Pythagorean identities.  Then I tried $(sec \theta)'=sec \theta \cdot tan \theta$, which left a single tangent after accounting for the potential differential, the same problem as before.  A straightforward trig identity wasn't going to do the trick.

Then I recognized that the derivative of the root's interior is $2x$.  It was not the exterior $x^2$, but perhaps integration by parts would work.  I tried $u=x \longrightarrow u'=dx$ and $v'=x\sqrt{1+x^2} dx \longrightarrow v=\frac{1}{2} \left( 1+x^2 \right)^{3/2} \cdot \frac{2}{3} = \frac{1}{3} \left( 1+x^2 \right)^{3/2}$.  Rewriting the original integral gave

$\displaystyle \int x^2 \sqrt{1+x^2} \: dx = \frac{x}{3} \left( 1+x^2 \right) ^{3/2} - \frac{1}{3} \int \left( 1+x^2 \right) ^{3/2} \: dx$.

The remaining integral still suggested a trig substitution, so I again tried $x =tan \theta$, turning the remaining integral into

$\displaystyle \frac{1}{3} \int sec^5 \theta \: d\theta$,

but the odd number of secants led me to the same dead end from trigonometric identities that stopped my original attempt.  I tried a few other variations on these themes, but nothing seemed to work.  That’s when I wondered if the integral even had a closed form solution.  Lots of simple looking integrals don’t work out nicely; perhaps this was one of them.  Plugging the integral into my Nspire CAS gave the following.

OK, now I was frustrated.  The solution wasn’t particularly pretty, but a closed form definitely existed.  The logarithm was curious, but I was heartened by the middle term I had seen with a different coefficient in my integration by parts approach.  I had other things to do, so I employed another good problem solving strategy:  I quit working on it for a while.  Sometimes you need to allow your sub-conscious to chew on an idea for a spell.  I made a note about the integral on my To Do list and walked away.

As often happens to me on more challenging problems, I woke this morning with a new idea.  I was still convinced that trig substitutions should work in some way, but my years of teaching AP Calculus and its curricular restrictions had blinded me to other possibilities.  Why not try a hyperbolic trig substitution? In many ways, hyperbolic trig is easier to manipulate than circular trig.  I knew

$\frac{d}{dt}cosh(t)=sinh(t)$ and $\frac{d}{dt}sinh(t)=cosh(t)$,

and the hyperbolic identity

$cosh^2t - sinh^2t=1 \longrightarrow cosh^2t=1+sinh^2t$.

(In case you haven’t worked with hyperbolic trig functions before, you can prove these for yourself using the definitions of hyperbolic sine and cosine:  $cosh(x)=\frac{1}{2}\left( e^x + e^{-x} \right)$ and $sinh(x)=\frac{1}{2}\left( e^x - e^{-x} \right)$.)

So, $x=sinh(A) \longrightarrow dx=cosh(A) dA$, and substitution gives

$\displaystyle \int sinh^2(A) \cdot cosh^2(A) \: dA$.

Jackpot!  I was down to an even number of (hyperbolic) trig functions, so Pythagorean identities should help me revise my latest expression into some workable form.

To accomplish this, I employed a few more hyperbolic trig identities:

1. $sinh(2A)=2sinh(A)cosh(A)$
2. $cosh(2A)=cosh^2(A)+sinh^2(A)$
3. $cosh^2(A) = \frac{1}{2}(cosh(2A)+1)$
4. $sinh^2(A) = \frac{1}{2}(cosh(2A)-1)$

(All of these can be proven using the definitions of sinh and cosh above.  I encourage you to do so if you haven’t worked much with hyperbolic trig before.  I’ve always liked the close parallels between the forms of circular and hyperbolic trig relationships and identities.)
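Before using them, it's easy to confirm all four identities numerically; a throwaway sketch (mine):

```python
import math

# Numerically confirm the four hyperbolic identities over a range of A values.
def close(u, v):
    return abs(u - v) < 1e-9

checks = all(
    close(math.sinh(2 * A), 2 * math.sinh(A) * math.cosh(A)) and
    close(math.cosh(2 * A), math.cosh(A) ** 2 + math.sinh(A) ** 2) and
    close(math.cosh(A) ** 2, (math.cosh(2 * A) + 1) / 2) and
    close(math.sinh(A) ** 2, (math.cosh(2 * A) - 1) / 2)
    for A in [k * 0.1 for k in range(-30, 31)]
)
```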

If you want to evaluate $\int x^2 \sqrt{x^2+1} dx$ yourself, do so before reading any further.

Using equations 3 & 4, expanding, and then equation 3 again turns the integral into something that can be integrated directly:

$\displaystyle \int sinh^2(A) \cdot cosh^2(A) \: dA = \int \frac{cosh(2A)-1}{2} \cdot \frac{cosh(2A)+1}{2} \: dA = \int \frac{cosh(4A)-1}{8} \: dA = \frac{sinh(4A)}{32} - \frac{A}{8}$

The integral was finally solved!  I then used equations 1 & 2 to rewrite the expression back into hyperbolic functions of A only:

$\displaystyle \frac{sinh(4A)}{32} - \frac{A}{8} = \frac{sinh(A) \, cosh(A) \left( cosh^2(A)+sinh^2(A) \right)}{8} - \frac{A}{8}$

The integral was solved using the substitution $x=sinh(A) \longrightarrow A=sinh^{-1}(x)$ and (using $cosh^2(A)-sinh^2(A)=1$) $cosh(A)=\sqrt{x^2+1}$.  Substituting back gave

$\displaystyle y = \frac{x\sqrt{x^2+1}}{8} + \frac{x^3\sqrt{x^2+1}}{4} - \frac{sinh^{-1}(x)}{8}$,

but that didn't match what my CAS had given.  I could have walked away, but I had to know if I had made an error someplace or had just found a different expression for the same quantity.  I knew the inverse sinh could be replaced with a logarithm by solving a quadratic in $e^A$:

$\displaystyle sinh^{-1}(x) = ln \left( x+\sqrt{x^2+1} \right)$.

Well, that explained the presence of the logarithm in the CAS solution, but I was still worried by the cubic in my second term and the fact that my first two terms were a sum whereas the CAS’s solution’s comparable terms were a difference.  But as a former student once said, “If you take care of the math, the math will take care of you.”  These expressions had to be the same, so I needed to complete one more identity–algebraic this time.  Factoring, rewriting, and re-expanding did the trick.
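As one more sanity check (my own, not in the original post), a central-difference test confirms that $F(x)=\frac{x(2x^2+1)\sqrt{x^2+1}}{8} - \frac{sinh^{-1}(x)}{8}$, one compact form of the solution, differentiates back to the integrand:

```python
import math

# F is one closed form of the antiderivative; its numeric derivative
# should match the integrand x^2 * sqrt(1 + x^2) at every sample point.
def F(x):
    return (x * (2 * x * x + 1) * math.sqrt(x * x + 1)) / 8 - math.asinh(x) / 8

def integrand(x):
    return x * x * math.sqrt(1 + x * x)

h = 1e-5
max_err = max(abs((F(x + h) - F(x - h)) / (2 * h) - integrand(x))
              for x in [k * 0.25 for k in range(-8, 9)])
```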

What a fun problem (for me) this turned out to be.  It’s absolutely not worth the effort to do this every time when a CAS or integral table can drop the solution so much more quickly, but it’s also deeply satisfying to me to know why the form of the solution is what it is.  It’s also nice to know that I found not one, but three different forms of the solution.

Morals:  Never give up.  Trust your instincts. Never give up. Try lots of variations on your instincts. And never give up!

## A Student’s Powerful Polar Exploration

I posted last summer on a surprising discovery of a polar function that appeared to be a horizontal translation of another polar function.  Translations happen all the time, but not really in polar coordinates.  The polar coordinate system just isn’t constructed in a way that makes translations appear in any clear way.

That’s why I was so surprised when I first saw a graph of $\displaystyle r=cos \left( \frac{\theta}{3} \right)$.

It looks just like a 0.5 left translation of $r=\frac{1}{2} +cos( \theta )$ .

But that’s not supposed to happen so cleanly in polar coordinates.  AND, the equation forms don’t suggest at all that a translation is happening.  So is it real or is it a graphical illusion?

I proved in my earlier post that the effect was real.  In my approach, I dealt with the different periods of the two equations and converted into parametric equations to establish the proof.  Because I was working in parametrics, I had to solve two different identities to establish the individual equalities of the parametric version of the Cartesian x- and y-coordinates.

As a challenge to my precalculus students this year, I pitched the problem to see what they could discover. What follows is a solution from about a month ago by one of my juniors, S.  I paraphrase her solution, but the basic gist is that S managed her proof while avoiding the differing periods and parametric equations I had employed, and she did so by leveraging the power of CAS.  The result was that S’s solution was briefer and far more elegant than mine, in my opinion.

S’s Proof:

Multiply both sides of $r = \frac{1}{2} + cos(\theta )$ by r and translate to Cartesian.

$r^2 = \frac{1}{2} r+r\cdot cos(\theta )$
$x^2 + y^2 = \frac{1}{2} \sqrt{x^2+y^2} +x$
$\left( 2\left( x^2 + y^2 -x \right) \right) ^2= \left( \sqrt{x^2+y^2} \right) ^2$

At this point, S employed some CAS power.

[Full disclosure: That final CAS step is actually mine, but it dovetails so nicely with S’s brilliant approach. I am always delightfully surprised when my students return using a tool (technological or mental) I have been promoting but hadn’t seen to apply in a particular situation.]

S had used her CAS to accomplish the translation in a more convenient coordinate system before moving the equation back into polar.

Clearly, $r \ne 0$, so

$4r^3 - 3r = cos(\theta )$ .

In an attachment (included below), S proved an identity she had never seen, $\displaystyle cos(\theta) = 4cos^3 \left( \frac{\theta }{3} \right) - 3cos \left( \frac{\theta }{3} \right)$ , which she now applied to her CAS result.

$\displaystyle 4r^3 - 3r = 4cos^3 \left( \frac{\theta }{3} \right) - 3cos \left( \frac{\theta }{3} \right)$

So, $\displaystyle r = cos \left( \frac{\theta }{3} \right)$

Therefore, $\displaystyle r = cos \left( \frac{\theta }{3} \right)$ is the image of $\displaystyle r = \frac{1}{2} + cos(\theta )$ after translating $\displaystyle \frac{1}{2}$ unit left.  QED

Simple. Beautiful.

Obviously, this could have been accomplished using lots of by-hand manipulations.  But, in my opinion, that would have been a horrible, potentially error-prone waste of time for a problem that wasn’t concerned at all about whether one knew some Algebra I arithmetic skills.  Great job, S!
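S's key step leans on the triple-angle identity she proved; a quick numeric spot-check (mine) of that identity with $r=cos \left( \frac{\theta}{3} \right)$:

```python
import math

# For r = cos(theta/3), the triple-angle identity gives 4r^3 - 3r = cos(theta).
max_err = max(abs(4 * math.cos(t / 3) ** 3 - 3 * math.cos(t / 3) - math.cos(t))
              for t in [k * 0.01 for k in range(943)])   # 0 to about 3*pi
```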

S’s proof of her identity, $\displaystyle cos(\theta) = 4cos^3 \left( \frac{\theta }{3} \right) - 3cos \left( \frac{\theta }{3} \right)$ :

## Transformations II and a Pythagorean Surprise

In my last post, I showed how to determine an unknown matrix for most transformations in the xy-plane and suggested that they held even more information.

Start with a pre-image set of points that can be connected to enclose one or more areas with either clockwise or counterclockwise orientation.  If a transformation T represented by matrix $[T]= \left[ \begin{array}{cc} A & C \\ B & D \end{array}\right]$ is applied to the pre-image points, then the determinant of $[T]$, $det[T]=AD-BC$, tells you two things about the image points.

1. The area enclosed by similarly connecting the image points is $\left| det[T] \right|$ times the area enclosed by the pre-image points, and
2. The orientation of the image points is identical to that of the pre-image if $det[T]>0$, but is reversed if $det[T]<0$.  If $det[T]=0$, then the image area is 0 by the first property, and any question about orientation is moot.

In other words, $det[T]$ is the area scaling factor from the pre-image to the image (addressing the second half of CCSSM Standard N-VM.12 on page 61 here), and the sign of $det[T]$ indicates whether the pre-image and image have the same or opposite orientation, a property beyond the stated scope of the CCSSM.

Example 1: Interpret $det[T]$ for the matrix representing a reflection over the x-axis, $[T]=\left[ r_{x-axis} \right] =\left[ \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right]$.

From here, $det[T]=-1$.  The magnitude of this is 1, indicating that the area of an image of an object reflected over the x-axis is 1 times the area of the pre-image, an obviously true fact because reflections preserve area.

Also, $det \left[ r_{x-axis} \right]<0$ indicating that the orientation of the reflection image is reversed from that of its pre-image.  This, too, must be true because reflections reverse orientation.

Example 2: Interpret $det[T]$ for the matrix representing a scale change that doubles x-coordinates and triples y-coordinates, $[T]=\left[ S_{2,3} \right] =\left[ \begin{array}{cc} 2 & 0 \\ 0 & 3 \end{array} \right]$.

For this matrix, $det[T]=+6$, indicating that the image’s area is 6 times that of its pre-image area, while both the image and pre-image have the same orientation.  Both of these facts seem reasonable if you imagine a rectangle as a pre-image.  Doubling the base and tripling the height create a new rectangle whose area is six times larger.  As no flipping is done, orientation should remain the same.
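Both claims can be checked mechanically; here is a small sketch (mine, with hypothetical helper names) that applies $S_{2,3}$ to a unit square and compares areas via the shoelace formula:

```python
# Apply the scale change S_{2,3} to a unit square and confirm the image
# area is |det[T]| = 6 times the pre-image area.
T = [[2, 0], [0, 3]]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def transform(M, pts):
    return [(M[0][0] * x + M[0][1] * y, M[1][0] * x + M[1][1] * y)
            for x, y in pts]

def shoelace_area(pts):
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
ratio = shoelace_area(transform(T, square)) / shoelace_area(square)
```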

Example 3 & a Pythagorean Surprise:  What should be true about  $det[T]$ for the transformation matrix representing a generic rotation of $\theta$ units around the origin,  $[T]=\left[ R_\theta \right] = \left[ \begin{array}{cc} cos( \theta ) & -sin( \theta ) \\ sin( \theta ) & cos( \theta ) \end{array} \right]$ ?

Rotations preserve area without reversing orientation, so $det\left[ R_\theta \right]$ should be +1.  Using this fact and computing the determinant gives

$det \left[ R_\theta \right] = cos^2(\theta ) + sin^2(\theta )=+1$ .

In a generic right triangle with hypotenuse C, leg A adjacent to acute angle $\theta$, and another leg B, this equation is equivalent to $\left( \frac{A}{C} \right) ^2 + \left( \frac{B}{C} \right) ^2 = 1$, or $A^2+B^2=C^2$, the Pythagorean Theorem.  There are literally hundreds of proofs of this theorem, and I suspect this proof has been given sometime before, but I think this is a lovely derivation of that mathematical hallmark.
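A numeric version of the same observation (my sketch): the determinant of $R_\theta$ comes out to 1 for every sampled angle, which is exactly the Pythagorean identity at work.

```python
import math

# det[R_theta] = cos(t)*cos(t) - (-sin(t))*sin(t) = cos^2(t) + sin^2(t),
# which should equal 1 for every angle t.
max_dev = max(abs(math.cos(t) ** 2 + math.sin(t) ** 2 - 1)
              for t in [k * 0.1 for k in range(63)])
```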

Conclusion:  While it seems that these two properties about the determinants of transformation matrices are indeed true for the examples shown, mathematicians hold out for a higher standard.   I’ll offer a proof of both properties in my next post.

## Numerical Transformations, I

It's been over a decade since I've taught a class where I've felt the freedom to really explore transformations with a strong matrix thread.  Whether due to curricular pressures, lack of time, or some other reason, I realized I had drifted away from some nice connections when I recently read Jonathan Dick and Maria Childrey's Enhancing Understanding of Transformation Matrices in the April 2012 Mathematics Teacher (abstract and complete article here).

Their approach was okay, but I was struck by the absence of a beautiful idea I believe I learned at a UCSMP conference in the early 1990s.  Further, today's Common Core State Standards for Mathematics explicitly call for students to “Work with 2×2 matrices as transformations of the plane, and interpret the absolute value of the determinant in terms of area” (see Standard N-VM.12 on page 61 of the CCSSM here).  I'm going to take a couple posts to unpack this standard and describe the pretty connection I've unfortunately let slip out of my teaching.

What they almost said

At the end of the MT article, the authors performed a double transformation equivalent to reflecting the points (2,0), (3,-4), and (9,-7) over the line $y=x$ via matrices using $\left[ \begin{array}{cc} 0&1 \\ 1&0 \end{array} \right] \cdot \left[ \begin{array}{ccc} 2 & 3 & 9 \\ 0 & -4 & -7 \end{array} \right]$ = $\left[ \begin{array}{ccc} 0 & -4 & -7 \\ 2 & 3 & 9 \end{array} \right]$ giving image points (0,2), (-4,3), and (-7,9).  That this matrix multiplication reversed all of the points’ coordinates is compelling evidence that $\left[ \begin{array}{cc} 0 & 1 \\ 1 & 0\end{array} \right]$ might be a $y=x$ reflection matrix.

Going much deeper

Here’s how this works.  Assume a set of pre-image points, P, undergoes some transformation T to become image points, P’.  For this procedure, T can be almost any transformation except a translation–reflections, dilations, scale changes, rotations, etc.  Translations can be handled using augmentations of these transformation matrices, but that is another story.  Assuming P is a set of n two-dimensional points, then it can be written as a 2×n pre-image matrix, [P], with all of the x-coordinates in the top row and the corresponding y-coordinates in the second row.  Likewise, [P’] is a 2×n matrix of the image points, while [T] is a 2×2 matrix unique to the transformation. In matrix form, this relationship is written $[T] \cdot [P] = [P']$.

So what would $\left[ \begin{array}{cc} 0 & -1 \\ 1 & 0\end{array} \right]$ do as a transformation matrix?  To see, transform (2,0), (3,-4), and (9,-7) using this new [T].

$\left[ \begin{array}{cc} 0&-1 \\ 1&0 \end{array} \right] \cdot \left[ \begin{array}{ccc} 2 & 3 & 9 \\ 0 & -4 & -7 \end{array} \right]$ = $\left[ \begin{array}{ccc} 0 & 4 & 7 \\ 2 & 3 & 9 \end{array} \right]$

The result might be more easily seen graphically with the points connected to form pre-image and image triangles.

After studying the graphic, hopefully you can see that $\left[ \begin{array}{cc} 0 & -1 \\ 1 & 0\end{array} \right]$ rotated the pre-image points 90 degrees around the origin.
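Replaying the multiplication in code (a sketch, mine) makes the rotation visible point by point:

```python
# Multiply T = [[0, -1], [1, 0]] against each pre-image point; every image
# point is the pre-image rotated 90 degrees counterclockwise about the origin.
T = [[0, -1], [1, 0]]
pre = [(2, 0), (3, -4), (9, -7)]
image = [(T[0][0] * x + T[0][1] * y, T[1][0] * x + T[1][1] * y)
         for x, y in pre]
# image == [(0, 2), (4, 3), (7, 9)], matching the matrix product above
```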

Generalizing

Now you know the effects of two different transformation matrices, but what if you wanted to perform a specific transformation and didn't know the matrix to use?  If you're new to transformations via matrices, you may be hoping for something much easier than the experimental approach used thus far.  If you can generalize for a moment, the result will be a stunningly simple way to determine the matrix for any transformation quickly and easily.

Assume you need to find a transformation matrix, $[T]= \left[ \begin{array}{cc} a & c \\ b & d \end{array}\right]$.  Pick (1,0) and (0,1) as your pre-image points.

$\left[ \begin{array}{cc} a&c \\ b&d \end{array} \right] \cdot \left[ \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right]$ = $\left[ \begin{array}{cc} a & c \\ b & d \end{array} \right]$

On the surface, this says the image of (1,0) is (a,b) and the image of (0,1) is (c,d), but there is so much more here!

Because the pre-image matrix for (1,0) and (0,1) is the 2×2 identity matrix, $[T]= \left[ \begin{array}{cc} a & c \\ b & d \end{array}\right]$ will always be BOTH the transformation matrix AND (much more importantly), the image matrix.  This is a major find.  It means that if you  know the images of (1,0) and (0,1) under some transformation T, then you automatically know the components of [T]!

For example, when reflecting over the x-axis, (1,0) is unchanged and (0,1) becomes (0,-1), making $[T]= \left[ r_{x-axis} \right] = \left[ \begin{array}{cc} 1 & 0 \\ 0 & -1\end{array} \right]$.  Remember, coordinates of points are always listed vertically.

Similarly, a scale change that doubles x-coordinates and triples the ys transforms (1,0) to (2,0) and (0,1) to (0,3), making $[T]= \left[ S_{2,3} \right] = \left[ \begin{array}{cc} 2 & 0 \\ 0 & 3\end{array} \right]$.
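The shortcut is almost trivial to encode; in this sketch (mine, with the hypothetical helper name `matrix_from_images`), the images of (1,0) and (0,1) become the columns of [T]:

```python
# Build [T] directly from the images of (1,0) and (0,1): each image point,
# written vertically, is a column of the transformation matrix.
def matrix_from_images(image_of_10, image_of_01):
    (a, b), (c, d) = image_of_10, image_of_01
    return [[a, c], [b, d]]

reflect_x = matrix_from_images((1, 0), (0, -1))   # x-axis reflection
scale_2_3 = matrix_from_images((2, 0), (0, 3))    # S_{2,3} scale change
```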

In a generic rotation of $\theta$ around the origin, (1,0) becomes $(cos(\theta ),sin(\theta ))$ and (0,1) becomes $(-sin(\theta ),cos(\theta ))$.

Therefore, $[T]= \left[ R_\theta \right] = \left[ \begin{array}{cc} cos(\theta ) & -sin(\theta ) \\ sin(\theta ) & cos(\theta ) \end{array} \right]$.  Substituting $\theta = 90^\circ$ into this [T] confirms the $\left[ R_{90^\circ} \right] = \left[ \begin{array}{cc} 0 & -1 \\ 1 & 0\end{array} \right]$ matrix from earlier.
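That substitution can also be checked by machine.  A quick numerical confirmation (my addition, not part of the post) that $\theta = 90^\circ$ in the general rotation matrix recovers the earlier matrix:

```python
import numpy as np

theta = np.pi / 2  # 90 degrees in radians
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])

# Up to floating-point rounding, this is [[0, -1], [1, 0]],
# the 90-degree rotation matrix found experimentally earlier.
print(np.round(rot).astype(int))
```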

As nice as this is, there is even more beautiful meaning hidden within transformation matrices.  I’ll tackle some of that in my next post.

## Trig Identities with a Purpose

Yesterday, I was thinking about some changes I could introduce to a unit on polar functions.  Realizing that almost all of the polar functions traditionally explored in precalculus courses have graphs that are complete over the interval $0\le\theta\le 2\pi$, I wondered if there were any interesting curves that took more than $2\pi$ units to graph.

My first attempt was $r=cos\left(\frac{\theta}{2}\right)$ which produced something like a merged double limaçon with loops over its $4\pi$ period.

Trying for more of the same, I graphed $r=cos\left(\frac{\theta}{3}\right)$ guessing (without really thinking about it) that I’d get more loops.  I didn’t get what I expected at all.

Wow!  That looks exactly like a standard limaçon with a loop translated 0.5 units to the left.

Further exploration confirms that $r=cos\left(\frac{\theta}{3}\right)$ completes its graph in $3\pi$ units while $r=\frac{1}{2}+cos\left(\theta\right)$ requires $2\pi$ units.
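The $3\pi$ claim can be spot-checked numerically: the point on $r=cos\left(\frac{\theta}{3}\right)$ at angle $\theta + 3\pi$ lands on the same Cartesian location as the point at $\theta$, so the curve retraces itself.  A short sketch of that check (my addition, using numpy):

```python
import numpy as np

theta = np.linspace(0, 3 * np.pi, 500)

def point(t):
    """Cartesian coordinates of r = cos(t/3) at angle t."""
    r = np.cos(t / 3)
    return r * np.cos(t), r * np.sin(t)

x0, y0 = point(theta)
x1, y1 = point(theta + 3 * np.pi)

# r flips sign after 3*pi, but so does the direction vector (cos t, sin t),
# so the Cartesian points coincide and the graph closes in 3*pi units.
print(np.max(np.abs(x0 - x1)), np.max(np.abs(y0 - y1)))  # both effectively zero
```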

As you know, in mathematics, it is never enough to claim things look the same; proof is required.  The acute challenge in this case is that two polar curves (based on angle rotations) appear to be separated by a horizontal translation (a rectangular displacement).  I’m not aware of any clean, general way to apply a rectangular transformation to a polar graph or a rotational transformation to a Cartesian graph.  But what I can do is rewrite the polar equations into a parametric form and translate from there.

For $0\le\theta\le 3\pi$ , $r=cos\left(\frac{\theta}{3}\right)$ becomes $\begin{array}{lcl} x_1 &= &cos\left(\frac{\theta}{3}\right)\cdot cos\left (\theta\right) \\ y_1 &= &cos\left(\frac{\theta}{3}\right)\cdot sin\left (\theta\right) \end{array}$ .  Sliding this $\frac{1}{2}$ unit to the right makes the parametric equations $\begin{array}{lcl} x_2 &= &\frac{1}{2}+cos\left(\frac{\theta}{3}\right)\cdot cos\left (\theta\right) \\ y_2 &= &cos\left(\frac{\theta}{3}\right)\cdot sin\left (\theta\right) \end{array}$ .

This should align with the standard limaçon, $r=\frac{1}{2}+cos\left(\theta\right)$ , whose parametric equations for $0\le\theta\le 2\pi$  are $\begin{array}{lcl} x_3 &= &\left(\frac{1}{2}+cos\left(\theta\right)\right)\cdot cos\left (\theta\right) \\ y_3 &= &\left(\frac{1}{2}+cos\left(\theta\right)\right)\cdot sin\left (\theta\right) \end{array}$ .

The only problem that remains for comparing $(x_2,y_2)$ and $(x_3,y_3)$ is that their domains are different, but a parameter shift can handle that.

If $0\le\beta\le 3\pi$ , then $(x_2,y_2)$ becomes $\begin{array}{lcl} x_4 &= &\frac{1}{2}+cos\left(\frac{\beta}{3}\right)\cdot cos\left (\beta\right) \\ y_4 &= &cos\left(\frac{\beta}{3}\right)\cdot sin\left (\beta\right) \end{array}$ and $(x_3,y_3)$ becomes $\begin{array}{lcl} x_5 &= &\left(\frac{1}{2}+cos\left(\frac{2\beta}{3}\right)\right)\cdot cos\left (\frac{2\beta}{3}\right) \\ y_5 &= &\left(\frac{1}{2}+cos\left(\frac{2\beta}{3}\right)\right)\cdot sin\left (\frac{2\beta}{3}\right) \end{array}$ .

Now that the translation has been applied and both functions operate over the same domain, the two functions must be identical iff $x_4 = x_5$ and $y_4 = y_5$ .  It’s time to prove those trig identities!
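Before the algebra, it is worth a numerical sanity check that the identities are even plausible.  This sketch (my addition, not from the post) samples both parametrizations over $0\le\beta\le 3\pi$ and compares them:

```python
import numpy as np

beta = np.linspace(0, 3 * np.pi, 1000)

# (x4, y4): the translated r = cos(theta/3) curve, parameter beta.
x4 = 0.5 + np.cos(beta / 3) * np.cos(beta)
y4 = np.cos(beta / 3) * np.sin(beta)

# (x5, y5): the limacon r = 1/2 + cos(theta) with theta = 2*beta/3.
x5 = (0.5 + np.cos(2 * beta / 3)) * np.cos(2 * beta / 3)
y5 = (0.5 + np.cos(2 * beta / 3)) * np.sin(2 * beta / 3)

# Maximum coordinate-wise differences; both effectively zero.
print(np.max(np.abs(x4 - x5)), np.max(np.abs(y4 - y5)))
```

Of course, agreement at sample points is evidence, not proof; the identities still need to be established algebraically.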

Before blindly manipulating the equations, I take some time to develop some strategy.  I notice that the $(x_5, y_5)$ equations contain only one type of angle–double angles of the form $2\cdot\frac{\beta}{3}$ –while the $(x_4, y_4)$ equations contain angles of two different types, $\beta$ and $\frac{\beta}{3}$ .  It is generally easier to work with a single type of angle, so my strategy is going to be to turn everything into trig functions of double angles of the form $2\cdot\frac{\beta}{3}$ .

$\displaystyle \begin{array}{lcl} x_4 &= &\frac{1}{2}+cos\left(\frac{\beta}{3}\right)\cdot cos\left (\beta\right) \\ &= &\frac{1}{2}+cos\left(\frac{\beta}{3}\right)\cdot cos\left (\frac{\beta}{3}+\frac{2\beta}{3} \right) \\ &= &\frac{1}{2}+cos\left(\frac{\beta}{3}\right)\cdot\left( cos\left(\frac{\beta}{3}\right) cos\left(\frac{2\beta}{3}\right)-sin\left(\frac{\beta}{3}\right) sin\left(\frac{2\beta}{3}\right)\right) \\ &= &\frac{1}{2}+\left[cos^2\left(\frac{\beta}{3}\right)\right] cos\left(\frac{2\beta}{3}\right)-\frac{1}{2}\cdot 2cos\left(\frac{\beta}{3}\right) sin\left(\frac{\beta}{3}\right) sin\left(\frac{2\beta}{3}\right) \\ &= &\frac{1}{2}+\left[\frac{1+cos\left(2\frac{\beta}{3}\right)}{2}\right] cos\left(\frac{2\beta}{3}\right)-\frac{1}{2}\cdot sin^2\left(\frac{2\beta}{3}\right) \\ &= &\frac{1}{2}+\frac{1}{2}cos\left(\frac{2\beta}{3}\right)+\frac{1}{2} cos^2\left(\frac{2\beta}{3}\right)-\frac{1}{2} \left( 1-cos^2\left(\frac{2\beta}{3}\right)\right) \\ &= & \frac{1}{2}cos\left(\frac{2\beta}{3}\right) + cos^2\left(\frac{2\beta}{3}\right) \\ &= & \left(\frac{1}{2}+cos\left(\frac{2\beta}{3}\right)\right)\cdot cos\left(\frac{2\beta}{3}\right) = x_5 \end{array}$

This proves the x expressions are equivalent.  Now for the ys:

$\displaystyle \begin{array}{lcl} y_4 &= & cos\left(\frac{\beta}{3}\right)\cdot sin\left(\beta\right) \\ &= & cos\left(\frac{\beta}{3}\right)\cdot sin\left(\frac{\beta}{3}+\frac{2\beta}{3} \right) \\ &= & cos\left(\frac{\beta}{3}\right)\cdot\left( sin\left(\frac{\beta}{3}\right) cos\left(\frac{2\beta}{3}\right)+cos\left(\frac{\beta}{3}\right) sin\left(\frac{2\beta}{3}\right)\right) \\ &= & \frac{1}{2}\cdot 2cos\left(\frac{\beta}{3}\right) sin\left(\frac{\beta}{3}\right) cos\left(\frac{2\beta}{3}\right)+\left[cos^2 \left(\frac{\beta}{3}\right)\right] sin\left(\frac{2\beta}{3}\right) \\ &= & \frac{1}{2}sin\left(2\frac{\beta}{3}\right) cos\left(\frac{2\beta}{3}\right)+\left[\frac{1+cos \left(2\frac{\beta}{3}\right)}{2}\right] sin\left(\frac{2\beta}{3}\right) \\ &= & \left(\frac{1}{2}+cos\left(\frac{2\beta}{3}\right)\right)\cdot sin\left (\frac{2\beta}{3}\right) = y_5 \end{array}$

Therefore the graph of $r=cos\left(\frac{\theta}{3}\right)$ is exactly the graph of $r=\frac{1}{2}+cos\left(\theta\right)$ slid $\frac{1}{2}$ unit left.  Nice.

If there are any students reading this, know that it took a few iterations to come up with the versions of the identities proved above.  Remember that published mathematics is almost always cleaner and more concise than the effort it took to create it.  One of the early steps I took used the substitution $\gamma =\frac{\beta}{3}$ to clean up the appearance of the algebra.  In the final proof, I decided that the 2 extra lines of proof to substitute in and then back out were not needed.  I also meandered down a couple of unnecessarily long paths that I was able to trim in the proof I presented above.

Despite these changes, my proof still feels cumbersome and inelegant to me.  From one perspective–Who cares?  I proved what I set out to prove.  On the other hand, I’d love to know if someone has a more elegant way to establish this connection.  There is always room to learn more.  Commentary welcome.

In the end, it’s nice to know these two polar curves are congruent: one is simply a translation of the other.  It pays to keep one’s eyes eternally open for unexpected connections!