
Transformations III

My last post interpreted the determinants of a few transformation matrices.  For any matrix [T] of a transformation, |det[T]| is the area scaling factor from the pre-image to the image (addressing the second half of CCSSM Standard N-VM.12 on page 61 here), and the sign of det[T] indicates whether the pre-image and image have the same or opposite orientation.

These are not intuitively obvious, in my opinion, so it’s time for some proof accessible to middle and high school students.

Setting Up:  Take the unit square defined clockwise by vertices (0,0), (0,1), (1,1), and (1,0), and apply a generic transformation [T]= \left[ \begin{array}{cc} A & C \\ B & D \end{array}\right], where A, B, C, and D are real constants.  Because the unit square has area 1, the area of the image is also the area scaling factor from the pre-image to the image.

As before, the image of the unit square under T is determined by

\left[ \begin{array}{cc} A & C \\ B & D \end{array}\right] \cdot \left[ \begin{array}{cccc} 0 & 0 & 1 & 1 \\ 0 & 1 & 1 & 0 \end{array}\right] =  \left[ \begin{array}{cccc} 0 & C & A+C & A \\ 0 & D & B+D & B \end{array}\right] .

So the origin is its own image, (0,1) becomes (C,D), (1,1) becomes (A+C,B+D), and (1,0) becomes (A,B).  As [T] is a generic transformation matrix, nothing can be specifically known about the sign or magnitude of its components, but the figure below shows one possible case in which the image maintains the original orientation.

When I was working on this problem the first time, I did not expect the image of the unit square to be a parallelogram under every possible [T] (remember that all of its components are assumed constant), but that can be verified by comparing coordinates: the sides from (0,0) to (A,B) and from (C,D) to (A+C,B+D) are both translates of the vector <A,B>, so opposite sides are parallel and congruent.  To confirm the area scale change claim, I need to know the generic parallelogram’s area.  I’ll do this two ways.  The first is more elegant, but it invokes vectors, likely a precalculus topic.  The second should be accessible to middle school students.
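If you would like a quick numerical spot-check of that parallelogram claim, here is a short sketch in Python with numpy.  The particular values of A, B, C, and D are arbitrary choices of mine, not anything from the figures; any real constants behave the same way.

import numpy as np

# An arbitrary choice of [T]; its columns are (A,B) and (C,D).
A, B, C, D = 2.0, -1.0, 0.5, 3.0
T = np.array([[A, C],
              [B, D]])

# Unit square vertices listed clockwise, one vertex per column.
square = np.array([[0, 0, 1, 1],
                   [0, 1, 1, 0]])

image = T @ square            # columns: (0,0), (C,D), (A+C,B+D), (A,B)
P0, P1, P2, P3 = image.T      # unpack the four image vertices

# Opposite sides are identical vectors, so the image is a parallelogram.
print(np.allclose(P1 - P0, P2 - P3))   # True
print(np.allclose(P3 - P0, P2 - P1))   # True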

Area (Method 1):  A parallelogram can be defined using two vectors.  In the image above, the “left side” from the origin to (C,D) is <C,D,0>; the third component is needed to compute a cross product.  Likewise, the “bottom side” can be represented by the vector <A,B,0>.  The area of a parallelogram is the magnitude of the cross product of the two vectors defining the parallelogram (an explanation of this fact is here).  Because <A,B,0> \times <C,D,0> = <0,0,AD-BC> ,

| \text{Area of Parallelogram} | = |AD-BC|.

Cross products are not commutative, but reversing the order gives <C,D,0> \times <A,B,0> = -<0,0,AD-BC> , which has the same magnitude.  Either way, the parallelogram’s area is |AD-BC|.
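For anyone who wants to see Method 1 in action numerically, here is a minimal numpy sketch with arbitrary sample values of A, B, C, and D (my choices, not the post’s); the magnitude of the cross product matches |AD-BC|.

import numpy as np

A, B, C, D = 2.0, -1.0, 0.5, 3.0     # arbitrary sample entries of [T]

bottom = np.array([A, B, 0.0])       # vector along the "bottom" side
left   = np.array([C, D, 0.0])       # vector along the "left" side

area = np.linalg.norm(np.cross(bottom, left))
print(area, abs(A * D - B * C))      # both are 6.5 for these sample values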

Area (Method 2):  Draw a rectangle around the parallelogram with two sides on the coordinate axes, one vertex at the origin, and another at (A+C,B+D).  As shown below, the area interior to the rectangle but exterior to the parallelogram can be decomposed into right triangular and rectangular regions.

\Delta I \cong \Delta IV with total area A\cdot B, and \Delta III \cong \Delta VI with total area C\cdot D.  Finally, rectangles II and V are congruent with total area 2B\cdot C .  Together, these lead to an indirect computation of the parallelogram’s area.

|Area|=\left| (A+C)(B+D)-AB-CD-2BC \right| =|AD-BC|

The absolute values are required because the signs of the constants are unknown.  This is exactly the same result obtained above.  While I used a convenient case for the positioning of the image points in the graphics above, that positioning is irrelevant.  No matter what the signs or relative magnitudes of the constants in [T], the parallelogram’s area can always be computed indirectly by subtracting the areas of four triangles and two rectangles from a larger rectangle, giving the same result.
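The algebra behind that indirect computation can also be verified symbolically.  Here is a brief sympy sketch (my own, not from the post) treating A, B, C, and D as free real symbols.

import sympy as sp

A, B, C, D = sp.symbols('A B C D', real=True)

# Bounding rectangle minus the four triangles (total areas AB and CD)
# and the two rectangles (total area 2BC).
indirect = (A + C) * (B + D) - A * B - C * D - 2 * B * C

print(sp.expand(indirect))                        # A*D - B*C
print(sp.simplify(indirect - (A * D - B * C)))    # 0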

Whichever area approach works for you, Claim #1 is true.

Establishing Orientation:  The side of the parallelogram from the origin to (A,B) is a segment on the line y=\frac{B}{A} x (assuming A \ne 0; the A=0 case is handled below).  The position of (C,D) relative to this line determines the orientation of the image parallelogram.

  • Assuming the two cases for A>0 shown above, the image orientation remains clockwise iff vertex (C,D) is above y=\frac{B}{A} x.  Algebraically, this happens iff D>\frac{B}{A}\cdot C; multiplying both sides by the positive value A gives AD-BC>0 .

  • When A<0, the image orientation remains clockwise iff vertex (C,D) is below y=\frac{B}{A} x.  Algebraically, this happens iff D<\frac{B}{A}\cdot C; multiplying both sides by the negative value A reverses the inequality, again giving AD-BC>0 .
  • When A=0, the determinant reduces to AD-BC=-BC.  If B<0, the image is clockwise when C>0, again making AD-BC>0 .  The same is true for A=0, B>0, and C<0.

In all cases, the pre-image and image have the identical orientation when AD-BC=det[T]>0 and are oppositely oriented when det[T]<0.

Q.E.D.
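As a numerical postscript, the case analysis can be spot-checked with the shoelace (signed area) formula: a negative signed area means clockwise orientation, matching how the pre-image square was listed.  This is a sketch of mine in Python, and the helper name signed_area is an assumption, not anything defined in the post.

import numpy as np

def signed_area(pts):
    # Shoelace formula on a 2 x n array of vertices:
    # positive for counterclockwise order, negative for clockwise.
    x, y = pts
    return 0.5 * np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y)

# The clockwise unit square from the setup; its signed area is -1.
square = np.array([[0, 0, 1, 1],
                   [0, 1, 1, 0]], dtype=float)

rng = np.random.default_rng(0)
for _ in range(1000):
    T = rng.normal(size=(2, 2))      # entries of every sign and magnitude
    det = np.linalg.det(T)
    # The signed area scales by det[T]: orientation is preserved exactly
    # when det > 0 and reversed when det < 0, with magnitude |det|.
    assert np.isclose(signed_area(T @ square), det * signed_area(square))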

Transformations II and a Pythagorean Surprise

In my last post, I showed how to determine an unknown matrix for most transformations in the xy-plane and suggested that they held even more information.

Given a pre-image set of points that can be connected to enclose one or more areas with either clockwise or counterclockwise orientation, suppose a transformation T represented by matrix [T]= \left[ \begin{array}{cc} A & C \\ B & D \end{array}\right] is applied to the pre-image points.  Then the determinant of [T], det[T]=AD-BC, tells you two things about the image points.

  1. The area enclosed by similarly connecting the image points is \left| det[T] \right| times the area enclosed by the pre-image points, and
  2. The orientation of the image points is identical to that of the pre-image if det[T]>0, but is reversed if det[T]<0.  If det[T]=0, then the image area is 0 by the first property, and any question about orientation is moot.

In other words, |det[T]| is the area scaling factor from the pre-image to the image (addressing the second half of CCSSM Standard N-VM.12 on page 61 here), and the sign of det[T] indicates whether the pre-image and image have the same or opposite orientation, a property beyond the stated scope of the CCSSM.
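As a concrete way to test both statements on any figure, here is a small Python sketch; the helper names and the sample triangle are my own choices, not from the post.  The same helper can be applied to Examples 1 through 3 below.

import numpy as np

def signed_area(pts):
    # Shoelace formula on a 2 x n vertex array:
    # positive for counterclockwise order, negative for clockwise.
    x, y = pts
    return 0.5 * np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y)

def compare(T, pre_image):
    # Return the area scaling factor and whether orientation is preserved.
    pre = signed_area(pre_image)
    img = signed_area(T @ pre_image)
    return abs(img / pre), np.sign(img) == np.sign(pre)

# A counterclockwise triangle with area 3 as a sample pre-image.
triangle = np.array([[0, 3, 0],
                     [0, 0, 2]], dtype=float)

T = np.array([[1, 0],
              [0, -1]], dtype=float)   # reflection over the x-axis (Example 1 below)
print(compare(T, triangle))            # area factor 1.0, orientation preserved: False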

Example 1: Interpret det[T] for the matrix representing a reflection over the x-axis, [T]=\left[ r_{x-axis} \right] =\left[ \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right].

From here, det[T]=-1.  The magnitude of this is 1, indicating that the area of the image of an object reflected over the x-axis is 1 times the area of the pre-image, an obviously true fact because reflections preserve area.

Also, det \left[ r_{x-axis} \right]<0, indicating that the orientation of the reflection image is reversed from that of its pre-image.  This, too, must be true because reflections reverse orientation.

Example 2: Interpret det[T] for the matrix representing a scale change that doubles x-coordinates and triples y-coordinates, [T]=\left[ S_{2,3} \right] =\left[ \begin{array}{cc} 2 & 0 \\ 0 & 3 \end{array} \right].

For this matrix, det[T]=+6, indicating that the image’s area is 6 times that of its pre-image area, while both the image and pre-image have the same orientation.  Both of these facts seem reasonable if you imagine a rectangle as a pre-image.  Doubling the base and tripling the height create a new rectangle whose area is six times as large.  As no flipping is done, orientation should remain the same.

Example 3 & a Pythagorean Surprise:  What should be true about det[T] for the transformation matrix representing a generic rotation through an angle \theta about the origin,  [T]=\left[ R_\theta \right] = \left[ \begin{array}{cc} \cos( \theta ) & -\sin( \theta ) \\ \sin( \theta ) & \cos( \theta ) \end{array} \right] ?

Rotations preserve area without reversing orientation, so det\left[ R_\theta \right] should be +1.  Using this fact and computing the determinant gives

det \left[ R_\theta \right] = \cos^2(\theta ) + \sin^2(\theta )=+1 .

In a generic right triangle with hypotenuse C, leg A adjacent to acute angle \theta , and another leg B, we have \cos(\theta)=\frac{A}{C} and \sin(\theta)=\frac{B}{C}, so this equation is equivalent to \left( \frac{A}{C} \right) ^2 + \left( \frac{B}{C} \right) ^2 = 1 , or A^2+B^2=C^2, the Pythagorean Theorem.  There are literally hundreds of proofs of this theorem, and I suspect this derivation has been given before, but I think it is a lovely route to that mathematical hallmark.
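If you want a computer algebra system to confirm the determinant computation above, a short sympy check works; this sketch is mine, not part of the original post, and simplify applies the \cos^2(\theta)+\sin^2(\theta)=1 identity automatically.

import sympy as sp

theta = sp.symbols('theta', real=True)

# Rotation through an angle theta about the origin.
R = sp.Matrix([[sp.cos(theta), -sp.sin(theta)],
               [sp.sin(theta),  sp.cos(theta)]])

print(R.det())               # sin(theta)**2 + cos(theta)**2
print(sp.simplify(R.det()))  # 1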

Conclusion:  While it seems that these two properties about the determinants of transformation matrices are indeed true for the examples shown, mathematicians hold out for a higher standard.   I’ll offer a proof of both properties in my next post.