
Transformations III

My last post interpreted the determinants of a few transformation matrices.  For any transformation matrix [T], det[T] is the area scaling factor from the pre-image to the image (addressing the second half of CCSSM Standard N-VM.12 on page 61 here), and the sign of det[T] indicates whether the pre-image and image have the same or opposite orientation.

These facts are not intuitively obvious, in my opinion, so it’s time for some proof accessible to middle and high school students.  Call the area result Claim #1 and the orientation result Claim #2.

Setting Up:  Take the unit square defined clockwise by vertices (0,0), (0,1), (1,1), and (1,0) under a generic transformation [T]= \left[ \begin{array}{cc} A & C \\ B & D \end{array}\right] where A, B, C, and D are real constants.  Because the unit square has area 1, the area of the image is also the area scaling factor from the pre-image to the image.

As before, the image of the unit square under T is determined by

\left[ \begin{array}{cc} A & C \\ B & D \end{array}\right] \cdot \left[ \begin{array}{cccc} 0 & 0 & 1 & 1 \\ 0 & 1 & 1 & 0 \end{array}\right] =  \left[ \begin{array}{cccc} 0 & C & A+C & A \\ 0 & D & B+D & B \end{array}\right] .

So the origin is its own image, (0,1) becomes (C,D), (1,1) becomes (A+C,B+D), and (1,0) becomes (A,B).  As [T] is a generic transformation matrix, nothing can be specifically known about the sign or magnitude of its components, but the figure below shows one possible image that maintains the original orientation.
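If you want to see this computation for a concrete case, here is a minimal numpy sketch; the particular values of A, B, C, and D are arbitrary picks for illustration only.

```python
# A minimal numeric sketch (numpy assumed; A, B, C, D are arbitrary picks).
import numpy as np

A, B, C, D = 2, 1, -1, 3
T = np.array([[A, C],
              [B, D]])
square = np.array([[0, 0, 1, 1],
                   [0, 1, 1, 0]])   # columns are the unit square's vertices
print(T @ square)
# [[ 0 -1  1  2]
#  [ 0  3  4  1]]  -> columns are (0,0), (C,D), (A+C,B+D), (A,B)
```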

When I was working on this problem the first time, I did not expect the image of the unit square to become a parallelogram under every possible [T] (remember that all of its components are assumed constant), but that can be verified by comparing coordinates.  To confirm the area scale change claim, I need to know the generic parallelogram’s area.  I’ll do this two ways.  The first is more elegant, but it invokes vectors–likely a precalculus topic.  The second should be accessible to middle school students.

Area (Method 1):  A parallelogram can be defined using two vectors.  In the image above, the “left side” from the origin to (C,D) is <C,D,0>–the 3rd dimensional component is needed to compute a cross product.  Likewise, the “bottom side” can be represented by vector <A,B,0>. The area of a parallelogram is the magnitude of the cross product of the two vectors defining the parallelogram (an explanation of this fact is here).  Because <A,B,0> \times <C,D,0> = <0,0,AD-BC> ,

| \text{Area of Parallelogram} | = |AD-BC|.

Cross products are not commutative, but reversing the order gives <C,D,0> \times <A,B,0> = -<0,0,AD-BC> , which has the same magnitude.  Either way, Claim #1 is true.
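A quick numeric confirmation of Method 1, using the same arbitrary constants as the sketch above; numpy's cross product returns the vector whose magnitude is the parallelogram's area.

```python
# Method 1 numerically: |<A,B,0> x <C,D,0>| should equal |AD - BC|.
import numpy as np

A, B, C, D = 2, 1, -1, 3
cross = np.cross([A, B, 0], [C, D, 0])
print(cross)                   # [0 0 7]
print(np.linalg.norm(cross))   # 7.0
print(abs(A * D - B * C))      # 7
```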

Area (Method 2):  Draw a rectangle around the parallelogram with two sides on the coordinate axes, one vertex at the origin, and another at (A+C,B+D).  As shown below, the area interior to the rectangle but exterior to the parallelogram can be decomposed into right triangular and rectangular regions.

\Delta I \cong \Delta IV with total area A\cdot B, and \Delta III \cong \Delta VI with total area C\cdot D.  Finally, rectangles II and V are congruent with total area 2B\cdot C .  Together, these lead to an indirect computation of the parallelogram’s area.

|Area|=\left| (A+C)(B+D)-AB-CD-2BC \right| = \left| AB+AD+BC+CD-AB-CD-2BC \right| = |AD-BC|

The absolute values are required because the magnitudes of the constants are unknown.  This is exactly the same result obtained above.  While I used a convenient case for the positioning of the image points in the graphics above, that positioning is irrelevant.  No matter what the sign or relative magnitudes of the constants in [T], the parallelogram area can always be computed indirectly by subtracting the areas of four triangles and two rectangles from a larger rectangle, giving the same result.
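For anyone who wants a symbolic double-check of that expansion, here is a short sympy sketch (sympy assumed available).

```python
# A symbolic double-check of Method 2 for arbitrary real A, B, C, D.
import sympy as sp

A, B, C, D = sp.symbols('A B C D', real=True)
area = sp.expand((A + C) * (B + D) - A * B - C * D - 2 * B * C)
print(area)                                 # A*D - B*C
print(sp.simplify(area - (A * D - B * C)))  # 0
```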

Whichever area approach works for you, Claim #1 is true.

Establishing Orientation:  The side from the origin to (A,B) in the parallelogram is a segment on the line y=\frac{B}{A} x (assuming A \ne 0 for now; the A=0 cases are handled below).  The position of (C,D) relative to this line can be used to determine the orientation of the image parallelogram.

  • Assuming the two cases for A>0 shown above, the image orientation remains clockwise iff vertex (C,D) is above y=\frac{B}{A} x.  Algebraically, this happens if D>\frac{B}{A}\cdot C; multiplying both sides by A>0 gives AD-BC>0 .

  • When A<0, the image orientation remains clockwise iff vertex (C,D) is below y=\frac{B}{A} x.  Algebraically, this happens if D<\frac{B}{A}\cdot C; multiplying both sides by A<0 reverses the inequality, again giving AD-BC>0 .
  • When A=0 and B<0, the image is clockwise when C>0, again making AD-BC>0 .  The same is true for A=0, B>0, and C<0.

In all cases, the pre-image and image have identical orientation when AD-BC=det[T]>0 and are oppositely oriented when det[T]<0.

Q.E.D.
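As a numerical coda, a shoelace-formula sketch ties both claims together: the signed area of the image quadrilateral divided by the signed area of the pre-image equals det[T], so its magnitude is the scaling factor and its sign flags orientation reversals.  The constants below are arbitrary examples, one with positive and one with negative determinant.

```python
# Shoelace signed area: image signed area / pre-image signed area = det[T].
import numpy as np

def signed_area(pts):
    x, y = pts
    return 0.5 * np.sum(x * np.roll(y, -1) - y * np.roll(x, -1))

square = np.array([[0, 0, 1, 1],
                   [0, 1, 1, 0]])           # clockwise; signed area is -1
for A, B, C, D in [(2, 1, -1, 3), (1, 2, 3, 1)]:
    T = np.array([[A, C], [B, D]])
    ratio = signed_area(T @ square) / signed_area(square)
    print(np.linalg.det(T), ratio)          # the two values agree
```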

Trig Identities with a Purpose

Yesterday, I was thinking about some changes I could introduce to a unit on polar functions.  Realizing that almost all of the polar functions traditionally explored in precalculus courses have graphs that are complete over the interval 0\le\theta\le 2\pi, I wondered if there were any interesting curves that took more than 2\pi units to graph.

My first attempt was r=cos\left(\frac{\theta}{2}\right) which produced something like a merged double limaçon with loops over its 4\pi period.

Trying for more of the same, I graphed r=cos\left(\frac{\theta}{3}\right) guessing (without really thinking about it) that I’d get more loops.  I didn’t get what I expected at all.

Wow!  That looks exactly like the image of a standard limaçon with a loop under a translation of 0.5 units to the left.

Further exploration confirms that r=cos\left(\frac{\theta}{3}\right) completes its graph in 3\pi units while r=\frac{1}{2}+cos\left(\theta\right) requires 2\pi units.
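To reproduce the exploration, here is a minimal plotting sketch (numpy and matplotlib assumed) that overlays r=cos\left(\frac{\theta}{3}\right) with the limaçon slid \frac{1}{2} unit left; the plotting choices are mine.

```python
# Overlay r = cos(theta/3) with the limacon slid 1/2 unit left.
import numpy as np
import matplotlib.pyplot as plt

theta3 = np.linspace(0, 3 * np.pi, 1000)   # r = cos(theta/3) closes in 3*pi
theta1 = np.linspace(0, 2 * np.pi, 1000)   # the limacon closes in 2*pi
r1 = np.cos(theta3 / 3)
r2 = 0.5 + np.cos(theta1)

# Convert each polar curve to rectangular coordinates to test the conjecture.
plt.plot(r1 * np.cos(theta3), r1 * np.sin(theta3), label='r = cos(theta/3)')
plt.plot(r2 * np.cos(theta1) - 0.5, r2 * np.sin(theta1), '--',
         label='r = 1/2 + cos(theta), slid 1/2 left')
plt.gca().set_aspect('equal')
plt.legend()
plt.show()
```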

As you know, in mathematics, it is never enough to claim things look the same; proof is required.  The acute challenge in this case is that two polar curves (based on angle rotations) appear to be separated by a horizontal translation (a rectangular displacement).  I’m not aware of any clean, general way to apply a rectangular transformation to a polar graph or a rotational transformation to a Cartesian graph.  But what I can do is rewrite the polar equations into a parametric form and translate from there.

For 0\le\theta\le 3\pi , r=cos\left(\frac{\theta}{3}\right) becomes \begin{array}{lcl} x_1 &= &cos\left(\frac{\theta}{3}\right)\cdot cos\left (\theta\right) \\ y_1 &= &cos\left(\frac{\theta}{3}\right)\cdot sin\left (\theta\right) \end{array} .  Sliding this \frac{1}{2} unit to the right makes the parametric equations \begin{array}{lcl} x_2 &= &\frac{1}{2}+cos\left(\frac{\theta}{3}\right)\cdot cos\left (\theta\right) \\ y_2 &= &cos\left(\frac{\theta}{3}\right)\cdot sin\left (\theta\right) \end{array} .

This should align with the standard limaçon, r=\frac{1}{2}+cos\left(\theta\right) , whose parametric equations for 0\le\theta\le 2\pi  are \begin{array}{lcl} x_3 &= &\left(\frac{1}{2}+cos\left(\theta\right)\right)\cdot cos\left (\theta\right) \\ y_3 &= &\left(\frac{1}{2}+cos\left(\theta\right)\right)\cdot sin\left (\theta\right) \end{array} .

The only problem that remains for comparing (x_2,y_2) and (x_3,y_3) is that their domains are different, but a parameter shift can handle that.

If 0\le\beta\le 3\pi , then (x_2,y_2) becomes \begin{array}{lcl} x_4 &= &\frac{1}{2}+cos\left(\frac{\beta}{3}\right)\cdot cos\left (\beta\right) \\ y_4 &= &cos\left(\frac{\beta}{3}\right)\cdot sin\left (\beta\right) \end{array} and (x_3,y_3) becomes \begin{array}{lcl} x_5 &= &\left(\frac{1}{2}+cos\left(\frac{2\beta}{3}\right)\right)\cdot cos\left (\frac{2\beta}{3}\right) \\ y_5 &= &\left(\frac{1}{2}+cos\left(\frac{2\beta}{3}\right)\right)\cdot sin\left (\frac{2\beta}{3}\right) \end{array} .

Now that the translation has been applied and both functions operate over the same domain, the two functions must be identical iff x_4 = x_5 and y_4 = y_5 .  It’s time to prove those trig identities!
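Before the algebra, a quick numeric sanity check (numpy assumed).  This only suggests the identities; it does not prove them.

```python
# Evaluate both parameterizations on a shared grid; measure the worst gap.
import numpy as np

beta = np.linspace(0, 3 * np.pi, 2000)
x4 = 0.5 + np.cos(beta / 3) * np.cos(beta)
y4 = np.cos(beta / 3) * np.sin(beta)
x5 = (0.5 + np.cos(2 * beta / 3)) * np.cos(2 * beta / 3)
y5 = (0.5 + np.cos(2 * beta / 3)) * np.sin(2 * beta / 3)
print(np.max(np.abs(x4 - x5)), np.max(np.abs(y4 - y5)))  # both ~ 1e-16
```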

Before blindly manipulating the equations, I take some time to develop some strategy.  I notice that the (x_5, y_5) equations contain only one type of angle–double angles of the form 2\cdot\frac{\beta}{3} –while the (x_4, y_4) equations contain angles of two different types, \beta and \frac{\beta}{3} .  It is generally easier to work with a single type of angle, so my strategy is going to be to turn everything into trig functions of double angles of the form 2\cdot\frac{\beta}{3} .

\displaystyle \begin{array}{lcl} x_4 &= &\frac{1}{2}+cos\left(\frac{\beta}{3}\right)\cdot cos\left (\beta\right) \\  &= &\frac{1}{2}+cos\left(\frac{\beta}{3}\right)\cdot cos\left (\frac{\beta}{3}+\frac{2\beta}{3} \right) \\  &= &\frac{1}{2}+cos\left(\frac{\beta}{3}\right)\cdot\left( cos\left(\frac{\beta}{3}\right) cos\left(\frac{2\beta}{3}\right)-sin\left(\frac{\beta}{3}\right) sin\left(\frac{2\beta}{3}\right)\right) \\  &= &\frac{1}{2}+\left[cos^2\left(\frac{\beta}{3}\right)\right] cos\left(\frac{2\beta}{3}\right)-\frac{1}{2}\cdot 2cos\left(\frac{\beta}{3}\right) sin\left(\frac{\beta}{3}\right) sin\left(\frac{2\beta}{3}\right) \\  &= &\frac{1}{2}+\left[\frac{1+cos\left(2\frac{\beta}{3}\right)}{2}\right] cos\left(\frac{2\beta}{3}\right)-\frac{1}{2}\cdot sin^2\left(\frac{2\beta}{3}\right) \\  &= &\frac{1}{2}+\frac{1}{2}cos\left(\frac{2\beta}{3}\right)+\frac{1}{2} cos^2\left(\frac{2\beta}{3}\right)-\frac{1}{2} \left( 1-cos^2\left(\frac{2\beta}{3}\right)\right) \\  &= & \frac{1}{2}cos\left(\frac{2\beta}{3}\right) + cos^2\left(\frac{2\beta}{3}\right) \\  &= & \left(\frac{1}{2}+cos\left(\frac{2\beta}{3}\right)\right)\cdot cos\left(\frac{2\beta}{3}\right) = x_5  \end{array}

This proves the x expressions are equivalent.  Now for the y's.

\displaystyle \begin{array}{lcl} y_4 &= & cos\left(\frac{\beta}{3}\right)\cdot sin\left(\beta\right) \\  &= & cos\left(\frac{\beta}{3}\right)\cdot sin\left(\frac{\beta}{3}+\frac{2\beta}{3} \right) \\  &= & cos\left(\frac{\beta}{3}\right)\cdot\left( sin\left(\frac{\beta}{3}\right) cos\left(\frac{2\beta}{3}\right)+cos\left(\frac{\beta}{3}\right) sin\left(\frac{2\beta}{3}\right)\right) \\  &= & \frac{1}{2}\cdot 2cos\left(\frac{\beta}{3}\right) sin\left(\frac{\beta}{3}\right) cos\left(\frac{2\beta}{3}\right)+\left[cos^2 \left(\frac{\beta}{3}\right)\right] sin\left(\frac{2\beta}{3}\right) \\  &= & \frac{1}{2}sin\left(2\frac{\beta}{3}\right) cos\left(\frac{2\beta}{3}\right)+\left[\frac{1+cos \left(2\frac{\beta}{3}\right)}{2}\right] sin\left(\frac{2\beta}{3}\right) \\  &= & \left(\frac{1}{2}+cos\left(\frac{2\beta}{3}\right)\right)\cdot sin\left (\frac{2\beta}{3}\right) = y_5  \end{array}
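A CAS happily confirms both identities.  Here is a minimal sympy sketch; I substitute \beta = 3t purely as a convenience so expand_trig can unpack the multiple angles.

```python
# A sympy confirmation of both identities, with beta = 3t.
import sympy as sp

t = sp.symbols('t', real=True)
x4 = sp.Rational(1, 2) + sp.cos(t) * sp.cos(3 * t)
x5 = (sp.Rational(1, 2) + sp.cos(2 * t)) * sp.cos(2 * t)
y4 = sp.cos(t) * sp.sin(3 * t)
y5 = (sp.Rational(1, 2) + sp.cos(2 * t)) * sp.sin(2 * t)

print(sp.simplify(sp.expand_trig(x4 - x5)))   # 0
print(sp.simplify(sp.expand_trig(y4 - y5)))   # 0
```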

Therefore the graph of r=cos\left(\frac{\theta}{3}\right) is exactly the graph of r=\frac{1}{2}+cos\left(\theta\right) slid \frac{1}{2} unit left.  Nice.

If there are any students reading this, know that it took a few iterations to come up with the versions of the identities proved above.  Remember that published mathematics is almost always cleaner and more concise than the effort it took to create it.  One of the early steps I took used the substitution \gamma =\frac{\beta}{3} to clean up the appearance of the algebra.  In the final proof, I decided that the two extra lines needed to substitute in and then back out were not worth it.  I also meandered down a couple of unnecessarily long paths that I was able to trim in the proof presented above.

Despite these changes, my proof still feels cumbersome and inelegant to me.  From one perspective: who cares?  I proved what I set out to prove.  On the other hand, I’d love to know if someone has a more elegant way to establish this connection.  There is always room to learn more.  Commentary welcome.

In the end, it’s nice to know these two polar curves are identical.  It pays to keep one’s eyes eternally open for unexpected connections!

An unexpected identity

Typical high school mathematics considers only two types of transformations:  translations and scale changes.  From a broader perspective, a transformation is an operation that changes its input into an output.  A function is also an operation that changes inputs.  Therefore, functions and transformations are the same thing viewed from different perspectives.  This post will explore an unexpected discovery Nurfatimah Merchant and I made when applying the squaring function (transformation) to trigonometric functions, an idea we didn’t fully realize until after the initial publication of PreCalculus Transformed.

When a function is transformed, some points are unchanged (invariant) while others aren’t.  But what makes a point invariant in a transformation?  From a function perspective, point a is invariant under transformation T if T(a)=a.  Using this, a squaring transformation is invariant for an input, a, when a^2=a\Rightarrow a\cdot (a-1)=0 \Rightarrow a=\{0,1\}.

Therefore, input values of  0 and 1 are invariant under squaring, and all other inputs are changed as follows.

  • Negative inputs become positive,
  • a^2<a for any 0<a<1, and
  • a^2>a for any a>1.

So what happens when the squaring transformation is applied to the graph of y=sin(x) (the input) to get the graph of y=(sin(x))^2 (the output)?  Notice that the output of sin(x) is the input to the squaring transformation, so we are transforming y values.  The invariant points in this case are all points where y=0 or y=1.  Because squaring transforms all negative inputs into positive outputs, the first image shows a dashed graph of y=sin(x) with the invariant points marked as black points and the negative inputs made positive with the absolute value function.

All non-invariant points on y=|sin(x)| have magnitude less than 1 and become smaller in magnitude when squared, as noted above.  Because the original x-intercepts of y=sin(x) are all locally linear, squaring creates local “bounce” x-intercepts on the output function, looking locally like the graphs of polynomial double roots.  The result is shown below.

While proof that the final output is precisely another sinusoid comes later, the visual image is very compelling.  This looks like a graph of y=cos(x) under a simple scale change (S_{0.5,-0.5}) and translation (T_{0,0.5}), in that order, giving equation \displaystyle\frac{y-0.5}{-0.5}=cos(\frac{x}{0.5}) or y=\frac{1}{2}-\frac{1}{2}cos(2x).  Therefore,

sin^2(x)=\frac{1}{2}-\frac{1}{2}cos(2x).

We later rewrote this equation to get

cos(2x)=1-2sin^2(x).
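Either form is a one-line check in a CAS.  A minimal sympy sketch (sympy assumed available):

```python
# A one-line CAS check of the identity obtained graphically above.
import sympy as sp

x = sp.symbols('x', real=True)
lhs = sp.sin(x) ** 2
rhs = sp.Rational(1, 2) - sp.Rational(1, 2) * sp.cos(2 * x)
print(sp.simplify(lhs - rhs))   # 0
```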

The initial equation was a nice enough exercise, but what we realized in the rewriting was that we had just “discovered” the half-angle identity for sine and a double-angle identity for cosine using a graphical variation on the squaring transformation!  No manipulation of angle sum identities was required!  (OK, they really are needed for an honest proof, but this is pretty compelling evidence.)

Apply the squaring transformation to the graph of y=cos(x) and you get the half-angle identity for cosine and another variation on the double-angle identity for cosine.

We thought this was a nice way to sneak up on trigonometric identities.  Enjoy.

Quadratic Explorations

I first encountered this problem about a decade ago when I attended one of the first USACAS conferences sponsored by MEECAS in the Chicago area.  I’ve gotten lots of mileage from it with my own students and in professional circles.

For a standard form quadratic, y=ax^2+bx+c, you probably know how the values of a and c change the graph of the parabola, but what does b do?  Be specific and prove your claim.
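If you’d like to experiment before committing to a claim, here is a minimal matplotlib sketch: fix a and c (the values below are arbitrary choices), sweep b, and mark each vertex to watch what b moves.  The code deliberately stops short of stating the answer.

```python
# Exploration sketch (numpy and matplotlib assumed; a and c are arbitrary).
import numpy as np
import matplotlib.pyplot as plt

a, c = 1, 2
x = np.linspace(-4, 4, 400)
for b in np.linspace(-4, 4, 9):
    plt.plot(x, a * x**2 + b * x + c, color='gray', lw=0.8)
    vx = -b / (2 * a)                      # vertex x-coordinate
    plt.plot(vx, a * vx**2 + b * vx + c, 'ko', ms=3)
plt.ylim(-3, 10)
plt.show()
```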