# Tag Archives: precalculus

## Squares and Octagons

Following is a really fun problem Tom Reardon showed my department last May as he led us through some TI-Nspire CAS training.  After introducing the problem, I offer a mea culpa, a proof, and an extension.

THE PROBLEM:

Take any square and construct midpoints on all four sides.
Connect the four midpoints and four vertices to create a continuous 8-pointed star as shown below.  The interior of the star is an octagon.  Construct this yourself using your choice of dynamic geometry software and vary the size of the square.

Compare the areas of the external square and the internal octagon.

You should find that the area of the original square is always 6 times the area of the octagon.

I thought that was pretty cool.  Then I started to play.

MINOR OBSERVATIONS:

Using my Nspire, I measured the sides of the octagon and found it to be equilateral.

As an extension of Tom’s original problem statement, I wondered if the constant square:octagon ratio occurred in any other quadrilaterals.  I found the external quadrilateral was also six times the area of the internal octagon for parallelograms, but not for any more general quadrilaterals.  Tapping my understanding of the quadrilateral hierarchy, that means the property also holds for rectangles and rhombi.
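Before diving into the proof, the claims above are easy to confirm numerically.  The sketch below (plain Python, my own labels; it assumes the star joins each vertex to the midpoints of the two non-adjacent sides, which is my reading of the construction) intersects the eight star lines, applies the shoelace formula, and confirms both the equilateral sides and the 6:1 area ratio:

```python
import math

def intersect(p1, p2, p3, p4):
    """Intersection of line p1p2 with line p3p4 (assumed non-parallel)."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    a, b = x1 * y2 - y1 * x2, x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

def shoelace(pts):
    """Area of a polygon from its ordered vertices."""
    return abs(sum(pts[i][0] * pts[(i + 1) % len(pts)][1]
                   - pts[(i + 1) % len(pts)][0] * pts[i][1]
                   for i in range(len(pts)))) / 2

# A 2x2 square (x = 1 in the proof's notation): vertices and side midpoints.
A, B, C, D = (0, 0), (2, 0), (2, 2), (0, 2)
mAB, mBC, mCD, mDA = (1, 0), (2, 1), (1, 2), (0, 1)

# Eight star lines: each vertex joined to the midpoints of its two
# non-adjacent sides.
lines = [(A, mBC), (A, mCD), (B, mCD), (B, mDA),
         (C, mDA), (C, mAB), (D, mAB), (D, mBC)]

# The octagon's vertices are intersections of adjacent star lines.
pairs = [(0, 3), (0, 5), (5, 2), (2, 7), (7, 4), (4, 1), (1, 6), (6, 3)]
octagon = [intersect(*lines[i], *lines[j]) for i, j in pairs]

sides = [math.dist(octagon[i], octagon[(i + 1) % 8]) for i in range(8)]
print(max(sides) - min(sides))   # ~0: the octagon is equilateral
print(4 / shoelace(octagon))     # 6.0: the square is 6 times the octagon
```

Varying the square's size only rescales everything, so one size suffices for a numeric check; the proof below handles the general case.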

MEA CULPA:

Math teachers always warn students to never, ever assume what they haven’t proven.  Unfortunately, my initial exploration of this problem was significantly hampered by just such an assumption.  I obviously know better (and was reminded afterwards that Tom actually had told us that the octagon was not equiangular–but like many students, I hadn’t listened).   After creating the original octagon, measuring its sides and finding them all equivalent, I errantly assumed the octagon was regular.  That isn’t true.

That false assumption created flaws in my proof and generalizations.  I discovered my error when none of my proof attempts worked out, and I eventually threw everything out and started over.  I knew better than to assume.  But I persevered, discovered my error through back-tracking, and eventually overcame.  That’s what I really hope my students learn.

THE REAL PROOF:

Goal:  Prove that the area of the original square is always 6 times the area of the internal octagon.

Assume the side length of a given square is $2x$, making its area $4x^2$.

The octagon’s area obviously is more complicated.  While it is not regular, the square’s symmetry guarantees that it can be decomposed into four congruent kites in two different ways.  Kite AFGH below is one such kite.

Therefore, the area of the octagon is 4 times the area of AFGH.  One way to express the area of any kite is $\frac{1}{2}D_1\cdot D_2$, where $D_1$ and $D_2$ are the kite’s diagonals. If I can determine the lengths of $\overline{AG}$ and $\overline {FH}$, then I will know the area of AFGH and thereby the ratio of the area of the square to the area of the octagon.

The diagonals of every kite are perpendicular, and the diagonal between a kite’s vertices connecting its non-congruent sides is bisected by the kite’s other diagonal.  In terms of AFGH, that means $\overline{AG}$ is the perpendicular bisector of $\overline{FH}$.

The square and octagon are concentric at point A, and point E is the midpoint of $\overline{BC}$, so $\Delta BAC$ is isosceles with vertex A, and $\overline{AE}$ is the perpendicular bisector of $\overline{BC}$.

That makes right triangles $\Delta BEF \sim \Delta BCD$.  Because $\displaystyle BE=\frac{1}{2} BC$, similarity gives $\displaystyle AF=FE=\frac{1}{2} DC=\frac{x}{2}$.  I know one side of the kite.

Let point I be the intersection of the diagonals of AFGH.  $\Delta BEA$ is right isosceles, so $\Delta AIF$ is, too, with $m\angle{IAF}=45$ degrees.  With $\displaystyle AF=\frac{x}{2}$, the Pythagorean Theorem gives $\displaystyle IF=\frac{x}{2\sqrt{2}}$.  Point I is the midpoint of $\overline{FH}$, so $\displaystyle FH=\frac{x}{\sqrt{2}}$.  One kite diagonal is accomplished.

Construct $\overline{JF} \parallel \overline{BC}$.  Assuming degree angle measures, if $m\angle{FBC}=m\angle{FCB}=\theta$, then $m\angle{GFJ}=\theta$ and $m\angle{AFG}=90-\theta$.  Knowing two angles of $\Delta AGF$ gives the third:  $m\angle{AGF}=45+\theta$.

I need the length of the kite’s other diagonal, $\overline{AG}$, and the Law of Sines gives

$\displaystyle \frac{AG}{sin(90-\theta )}=\frac{\frac{x}{2}}{sin(45+\theta )}$, or

$\displaystyle AG=\frac{x \cdot sin(90-\theta )}{2sin(45+\theta )}$.

Expanding using cofunction and angle sum identities gives

$\displaystyle AG=\frac{x \cdot sin(90-\theta )}{2sin(45+\theta )}=\frac{x \cdot cos(\theta )}{2 \cdot \left( sin(45)cos(\theta ) +cos(45)sin( \theta) \right)}=\frac{x \cdot cos(\theta )}{\sqrt{2} \cdot \left( cos(\theta ) +sin( \theta) \right)}$

From right $\Delta BCD$, I also know $\displaystyle sin(\theta )=\frac{1}{\sqrt{5}}$ and $\displaystyle cos(\theta)=\frac{2}{\sqrt{5}}$.  Therefore, $\displaystyle AG=\frac{x\sqrt{2}}{3}$, and the kite’s second diagonal is now known.

So, the octagon’s area is four times the kite’s area, or

$\displaystyle 4\left( \frac{1}{2} D_1 \cdot D_2 \right) = 2FH \cdot AG = 2 \cdot \frac{x}{\sqrt{2}} \cdot \frac{x\sqrt{2}}{3} = \frac{2}{3}x^2$

Therefore, the ratio of the area of the square to the area of its octagon is

$\displaystyle \frac{area_{square}}{area_{octagon}} = \frac{4x^2}{\frac{2}{3}x^2}=6$.

QED

EXTENSIONS:

This was so nice, I reasoned that it couldn’t be an isolated result.

I have extended this result and proved that it holds for analogous modulo-3 stars, like the 8-pointed star in the square, inscribed in any n-gon.  I’ll share that very soon in another post.

I proved the result above, but I wonder if it can be done without resorting to trigonometric identities.  Everything else is simple geometry.   I also wonder if there are other more elegant approaches.

Finally, I assume there are other constant ratios for other modulo stars inside larger n-gons, but I haven’t explored that idea.  Anyone?

## Base-x Numbers and Infinite Series

In my previous post, I explored what happened when you converted a polynomial from its variable form into a base-x numerical form.  That is, what are the computational implications when polynomial $3x^3-11x^2+2$ is represented by the base-x number $3(-11)02_x$, where the parentheses are used to hold the base-x digit, -11, for the second power of x?

So far, I’ve explored only the Natural number equivalents of base-x numbers.  In this post, I explore what happens when you allow division to extend base-x numbers into their Rational number counterparts.

Level 5–Infinite Series:

Numbers can have decimals, so what’s the equivalent for base-x numbers?  For starters, I considered trying to get a “decimal” form of $\displaystyle \frac{1}{x+2}$.  It was “obvious” to me that $12_x$ won’t divide evenly into $1_x$.  There are too few “places”, so some form of decimals is required.  Employing division as described in my previous post, much as you would to determine the repeating decimal form of $\frac{1}{12}$, gives

Remember, the places are powers of x, so the decimal portion of $\displaystyle \frac{1}{x+2}$ is $0.1(-2)4(-8)..._x$, and it is equivalent to

$\displaystyle 1x^{-1}-2x^{-2}+4x^{-3}-8x^{-4}+...=\frac{1}{x}-\frac{2}{x^2}+\frac{4}{x^3}-\frac{8}{x^4}+...$.

This can be seen as a geometric series with first term $\displaystyle \frac{1}{x}$ and ratio $\displaystyle r=\frac{-2}{x}$.  Its infinite sum is therefore $\displaystyle \frac{\frac{1}{x}}{1-\frac{-2}{x}}$ which is equivalent to $\displaystyle \frac{1}{x+2}$, confirming the division computation.  Of course, as a geometric series, this is true only so long as $\displaystyle |r|=\left | \frac{-2}{x} \right |<1$, or $2<|x|$.
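The division itself can be mechanized.  In this sketch (my own code and names, not from the original posts), a base-x number is stored as a list of digits, highest place first, and each pass of the long division produces one more “decimal” digit:

```python
from fractions import Fraction

def decimal_digits(num, den, places):
    """Base-x 'decimal' digits of num/den, where num and den are digit
    lists (highest power of x first) and deg(num) < deg(den)."""
    rem = [Fraction(c) for c in num]
    digits = []
    for _ in range(places):
        rem.append(0)                    # shift the remainder one place left
        while len(rem) < len(den):       # pad short remainders with 0s
            rem.insert(0, 0)
        d = rem[0] / Fraction(den[0])    # next digit, from the lead digits
        rem = [r - d * c for r, c in zip(rem, den)][1:]
        digits.append(d)
    return digits

print(decimal_digits([1], [1, 2], 4))    # 1/(x+2) -> [1, -2, 4, -8]
```

Because no carrying ever happens between base-x places, the digit pattern $1, -2, 4, -8, \ldots$ emerges exactly, independent of x.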

I thought this was pretty cool, and it led to lots of other cool series.  For example, if $x=8$, you get $\frac{1}{10}=\frac{1}{8}-\frac{2}{64}+\frac{4}{512}-...$.

Likewise, $x=3$ gives $\frac{1}{5}=\frac{1}{3}-\frac{2}{9}+\frac{4}{27}-\frac{8}{81}+...$.

I found it quite interesting to have a “polynomial” defined with a rational expression.

Boundary Convergence:

As shown above, $\displaystyle \frac{1}{x+2}=\frac{1}{x}-\frac{2}{x^2}+\frac{4}{x^3}-\frac{8}{x^4}+...$ only for $|x|>2$.

At $x=2$, the series is obviously divergent, $\displaystyle \frac{1}{4} \ne \frac{1}{2}-\frac{2}{4}+\frac{4}{8}-\frac{8}{16}+...$.

For $x=-2$, I got $\displaystyle \frac{1}{0} = \frac{1}{-2}-\frac{2}{4}+\frac{4}{-8}-\frac{8}{16}+...=-\frac{1}{2}-\frac{1}{2}-\frac{1}{2}-\frac{1}{2}-...$ which is properly equivalent to $-\infty$ as $x \rightarrow -2$ as defined by the convergence domain and the graphical behavior of $\displaystyle y=\frac{1}{x+2}$ just to the left of $x=-2$.  Nice.

I did find it curious, though, that $\displaystyle \frac{1}{x}-\frac{2}{x^2}+\frac{4}{x^3}-\frac{8}{x^4}+...$ is a solid approximation for $\displaystyle \frac{1}{x+2}$ to the left of its vertical asymptote, but not for its rotationally symmetric right side.  I also thought it philosophically strange (even though I understand mathematically why it must be) that this series could approximate function behavior near a vertical asymptote, but not near the graph’s stable and flat portion near $x=0$.  What a curious, asymmetrical approximator.

Maclaurin Series:

Some quick calculus gives the Maclaurin series for $\displaystyle \frac{1}{x+2}$ :  $\displaystyle \frac{1}{2}-\frac{x}{4}+\frac{x^2}{8}-\frac{x^3}{16}+...$, a geometric series with first term $\frac{1}{2}$ and ratio $\frac{-x}{2}$.  Interestingly, the ratio emerging from the Maclaurin series is the reciprocal of the ratio from the “rational polynomial” resulting from the base-x division above.

As a geometric series, the interval of convergence is  $\displaystyle |r|=\left | \frac{-x}{2} \right |<1$, or $|x|<2$.  Excluding endpoint results, the Maclaurin interval is the complete Real number complement of the base-x series’ interval.  At the endpoint $x=-2$, the Maclaurin series diverges to $+ \infty$, matching the graph just to the right of the vertical asymptote, just as the base-x series at $x=-2$ captured the divergence to $-\infty$ on the asymptote’s left side.  Again, $x=2$ is divergent.

It’s lovely how these two series so completely complement each other to create clean approximations of $\displaystyle \frac{1}{x+2}$ for all $x \ne \pm 2$.
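The complementary behavior is easy to see numerically.  This small sketch (helper names are mine) sums many terms of each series and compares each against $\frac{1}{x+2}$ on its own interval of convergence:

```python
def base_x_series(x, terms=400):
    """1/x - 2/x^2 + 4/x^3 - ... ; converges only for |x| > 2."""
    return sum((-2.0) ** (k - 1) / x ** k for k in range(1, terms + 1))

def maclaurin_series(x, terms=400):
    """1/2 - x/4 + x^2/8 - ... ; converges only for |x| < 2."""
    return sum((-x) ** k / 2.0 ** (k + 1) for k in range(terms))

for x in (5.0, -3.0):
    print(x, base_x_series(x), 1 / (x + 2))     # agree when |x| > 2
for x in (1.0, -1.5):
    print(x, maclaurin_series(x), 1 / (x + 2))  # agree when |x| < 2
```

Swapping the functions (the Maclaurin series at $x=5$, say) shows the divergence immediately.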

Other base-x “rational numbers”

Because any polynomial divided by another is absolutely equivalent to a base-x rational number and thereby a base-x decimal number, it will always be possible to create a “rational polynomial” using powers of $\displaystyle \frac{1}{x}$ for non-zero denominators.  But, the decimal patterns of rational base-x numbers don’t apply in the same way as for Natural number bases.  Where $\displaystyle \frac{1}{12}$ is guaranteed to have a repeating decimal pattern, the decimal form of $\displaystyle \frac{1}{x+2}=\frac{1_x}{12_x}=0.1(-2)4(-8)..._x$ clearly will not repeat.  I’ve not explored the full potential of this, but it seems like another interesting field.

CONCLUSIONS and QUESTIONS

Once number bases are understood, I’d argue that using base-x multiplication might be, and base-x division definitely is, a cleaner way to compute products and quotients, respectively, for polynomials.

The base-x division algorithm clearly is accessible to Algebra II students, and even opens the doors to studying series approximations to functions long before calculus.

Is there a convenient way to use base-x numbers to represent horizontal translations as cleanly as polynomials?  How difficult would it be to work with a base-$(x-h)$ number for a polynomial translated h units horizontally?

As a calculus extension, what would happen if you tried employing division of non-polynomials by replacing them with their Taylor series equivalents?  I’ve played a little with proving some trig identities using base-x polynomials from the Maclaurin series for sine and cosine.

What would happen if you tried to compute repeated fractions in base-x?

It’s an open question from my perspective when decimal patterns might terminate or repeat when evaluating base-x rational numbers.

I’d love to see someone out there give some of these questions a run!

## Number Bases and Polynomials

About a month ago, I was working with our 5th grade math teacher to develop some extension activities for some students in an unleveled class.  The class was exploring place value, and I suggested that some might be ready to explore what happens when you allow the number base to be something other than 10.  A few students had some fun learning to use their four basic arithmetic algorithms in other number bases, but I made an even deeper connection.

When writing something like 512 in expanded form ($5\cdot 10^2+1\cdot 10^1+2\cdot 10^0$), I realized that if the 10 was an x, I’d have a polynomial.  I’d recognized this before, but this time I wondered what would happen if I applied basic math algorithms to polynomials if I wrote them in a condensed numerical form, not their standard expanded form.  That is, could I do basic algebra on $5x^2+x+2$ if I thought of it as $512_x$–a base-x “number”?  (To avoid other confusion later, I read this as “five one two base-x“.)

Following are some examples I played with to convince myself how my new notation would work.  I’m not convinced that this will ever lead to anything, but following my “what ifs” all the way to infinite series was a blast.  Read on!

If I wanted to add $(3x+5)$ and $(2x^2+4x+1)$, I could think of it as $35_x+241_x$ and add the numbers “normally” to get $276_x$ or $2x^2+7x+6$.  Notice that each power of x identifies a “place value” for its characteristic coefficient.

If I wanted to add $3x-7$ to itself, I had to adapt my notation a touch.  The “units digit” is a negative number, but since the number base, x, is unknown (or variable), I ended up saying $3x-7=3(-7)_x$.  The parentheses are used to contain multiple characters into a single place value.  Then, $(3x-7)+(3x-7)$ becomes $3(-7)_x+3(-7)_x=6(-14)_x$ or $6x-14$.  Notice the expanding parentheses containing the base-x units digit.

The last example also showed me that simple multiplication would work.  Adding $3x-7$ to itself is equivalent to multiplying $2\cdot (3x-7)$.  In base-x, that is $2\cdot 3(-7)_x$.  That’s easy!  Arguably, this might be even easier than doubling a number when the number base is known.  Without interactions between the coefficients of different place values, just double each digit to get $6(-14)_x=6x-14$, as before.

What about $(x^2+7)+(8x-9)$?  That’s equivalent to $107_x+8(-9)_x$.  While simple, I’ll solve this one by stacking.

and this is $x^2+8x-2$.  As with base-10 numbers, the use of 0 is needed to hold place values exactly as I needed a 0 to hold the $x^1$ place for $x^2+7$. Again, this could easily be accomplished without the number base conversion, but how much more can we push these boundaries?
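The stacked addition can be mimicked in a few lines of code.  In this sketch (my own representation, not from the post), a base-x number is a list of digits, highest place first, and addition is place-by-place digit addition with no carrying:

```python
def add_base_x(p, q):
    """Add base-x numbers stored as digit lists, highest place first."""
    n = max(len(p), len(q))
    p = [0] * (n - len(p)) + list(p)   # pad with 0s to align place values
    q = [0] * (n - len(q)) + list(q)
    return [a + b for a, b in zip(p, q)]

print(add_base_x([3, 5], [2, 4, 1]))   # 35_x + 241_x -> [2, 7, 6]
print(add_base_x([1, 0, 7], [8, -9]))  # 107_x + 8(-9)_x -> [1, 8, -2]
```

The padding step plays exactly the role of the 0 holding the $x^1$ place in $x^2+7$.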

Level 3–Multiplication & Powers:

Compute $(8x-3)^2$.  Stacking again and using a modification of the multiply-and-carry algorithm I learned in grade school, I got

and this is equivalent to $64x^2-48x+9$.

All other forms of polynomial multiplication work just fine, too.
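In the digit-list representation, the grade-school multiply-and-carry algorithm becomes a convolution of the digit lists, with the carrying simply omitted since the base is unknown.  A sketch (my own code):

```python
def mul_base_x(p, q):
    """Multiply base-x numbers (digit lists, highest place first)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):      # each digit of p times each digit of q,
        for j, b in enumerate(q):  # accumulated into the combined place
            out[i + j] += a * b
    return out

print(mul_base_x([8, -3], [8, -3]))   # (8x-3)^2 -> [64, -48, 9]
print(mul_base_x([2], [3, -7]))       # 2 * 3(-7)_x -> [6, -14]
```

This works for products of polynomials of any degrees, not just the monomial and binomial cases.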

From one perspective, all of this shifting to a variable number base could be seen as completely unnecessary.  We already have acceptably working algorithms for addition, subtraction, and multiplication.  But then, I really like how this approach completes the connection between numerical and polynomial arithmetic.  The rules of math don’t change just because you introduce variables.  For some, I’m convinced this might make a big difference in understanding.

I also like how easily this extends polynomial by polynomial multiplication far beyond the bland monomial and binomial products that proliferate in virtually all modern textbooks.  Also banished here is any need at all for banal FOIL techniques.

Level 4–Division:

What about $x^2+x-6$ divided by $x+3$? In base-x, that’s $11(-6)_x \div 13_x$. Remembering that there is no place value carrying possible, I had to be a little careful when setting up my computation. Focusing only on the lead digits, 1 “goes into” 1 one time.  Multiplying the partial quotient by the divisor, writing the result below and subtracting gives

Then, 1 “goes into” -2 negative two times.  Multiplying and subtracting gives a remainder of 0.

thereby confirming that $x+3$ is a factor of $x^2+x-6$, and the other factor is the quotient, $x-2$.

Perhaps this could be used as an alternative to other polynomial division algorithms.  It is somewhat similar to the synthetic division technique, without its  significant limitations:  It is not limited to linear divisors with lead coefficients of one.

For $(4x^3-5x^2+7) \div (2x^2-1)$, think $4(-5)07_x \div 20(-1)_x$.  Stacking and dividing gives

So $\displaystyle \frac{4x^3-5x^2+7}{2x^2-1}=2x-2.5+\frac{2x+4.5}{2x^2-1}$.
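Both divisions above can be mechanized the same way.  This sketch (my own code) performs the long division on digit lists, using exact fractions so quotient digits like $-2.5$ stay exact, and returns the quotient and remainder:

```python
from fractions import Fraction

def div_base_x(num, den):
    """Long division of base-x numbers (digit lists, highest place
    first).  Returns (quotient, remainder)."""
    rem = [Fraction(c) for c in num]
    quot = []
    while len(rem) >= len(den):
        d = rem[0] / Fraction(den[0])            # divide the lead digits
        quot.append(d)
        padded = den + [0] * (len(rem) - len(den))
        rem = [r - d * c for r, c in zip(rem, padded)][1:]
    return quot, rem

print(div_base_x([1, 1, -6], [1, 3]))         # quotient [1, -2], remainder [0]
print(div_base_x([4, -5, 0, 7], [2, 0, -1]))  # [2, -5/2], remainder [2, 9/2]
```

Notice there is no restriction to linear divisors or unit lead coefficients, exactly as claimed above.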

CONCLUSION

From all I’ve been able to tell, converting polynomials to their base-x number equivalents enables you to perform all of the same arithmetic computations.  For division in particular, it seems this method might even be a bit easier.

In my next post, I push the exploration of these base-x numbers into infinite series.

## A Student’s Powerful Polar Exploration

I posted last summer on a surprising discovery of a polar function that appeared to be a horizontal translation of another polar function.  Translations happen all the time, but not really in polar coordinates.  The polar coordinate system just isn’t constructed in a way that makes translations appear in any clear way.

That’s why I was so surprised when I first saw a graph of $\displaystyle r=cos \left( \frac{\theta}{3} \right)$.

It looks just like a 0.5 left translation of $r=\frac{1}{2} +cos( \theta )$ .

But that’s not supposed to happen so cleanly in polar coordinates.  AND, the equation forms don’t suggest at all that a translation is happening.  So is it real or is it a graphical illusion?

I proved in my earlier post that the effect was real.  In my approach, I dealt with the different periods of the two equations and converted into parametric equations to establish the proof.  Because I was working in parametrics, I had to solve two different identities to establish the individual equalities of the parametric version of the Cartesian x- and y-coordinates.

As a challenge to my precalculus students this year, I pitched the problem to see what they could discover. What follows is a solution from about a month ago by one of my juniors, S.  I paraphrase her solution, but the basic gist is that S managed her proof while avoiding the differing periods and parametric equations I had employed, and she did so by leveraging the power of CAS.  The result was that S’s solution was briefer and far more elegant than mine, in my opinion.

S’s Proof:

Multiply both sides of $r = \frac{1}{2} + cos(\theta )$ by r and translate to Cartesian.

$r^2 = \frac{1}{2} r+r\cdot cos(\theta )$
$x^2 + y^2 = \frac{1}{2} \sqrt{x^2+y^2} +x$
$\left( 2\left( x^2 + y^2 -x \right) \right) ^2= \left( \sqrt{x^2+y^2} \right) ^2$

At this point, S employed some CAS power.

[Full disclosure: That final CAS step is actually mine, but it dovetails so nicely with S’s brilliant approach. I am always delightfully surprised when my students return using a tool (technological or mental) I have been promoting but hadn’t seen to apply in a particular situation.]

S had used her CAS to accomplish the translation in a more convenient coordinate system before moving the equation back into polar.

Clearly, $r \ne 0$, so

$4r^3 - 3r = cos(\theta )$ .

In an attachment (included below), S proved an identity she had never seen, $\displaystyle cos(\theta) = 4cos^3 \left( \frac{\theta }{3} \right) - 3cos \left( \frac{\theta }{3} \right)$ , which she now applied to her CAS result.

$\displaystyle 4r^3 - 3r = 4cos^3 \left( \frac{\theta }{3} \right) - 3cos \left( \frac{\theta }{3} \right)$

So, $\displaystyle r = cos \left( \frac{\theta }{3} \right)$

Therefore, $\displaystyle r = cos \left( \frac{\theta }{3} \right)$ is the image of $\displaystyle r = \frac{1}{2} + cos(\theta )$ after translating $\displaystyle \frac{1}{2}$ unit left.  QED

Simple. Beautiful.

Obviously, this could have been accomplished using lots of by-hand manipulations.  But, in my opinion, that would have been a horrible, potentially error-prone waste of time for a problem that wasn’t concerned at all with whether one knew some Algebra I arithmetic skills.  Great job, S!
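S’s conclusion can also be stress-tested numerically.  The sketch below (my own check, not part of S’s work) samples the limaçon $r=\frac{1}{2}+cos(\theta )$, slides each point $\frac{1}{2}$ unit left, and verifies that every translated point satisfies S’s derived equation $4r^3-3r=cos(\theta )$; it also spot-checks her identity:

```python
import math

for t in [0.0, 0.5, 1.0, 1.7, 2.5, 3.0, 4.0, 5.5]:
    # spot-check the identity cos(t) = 4cos^3(t/3) - 3cos(t/3)
    c = math.cos(t / 3)
    assert abs(4 * c ** 3 - 3 * c - math.cos(t)) < 1e-12

    # sample the limacon and translate the point 1/2 unit left
    r_lim = 0.5 + math.cos(t)
    X = r_lim * math.cos(t) - 0.5
    Y = r_lim * math.sin(t)
    r = math.hypot(X, Y)
    if r > 1e-9:                 # skip the pole, where theta is undefined
        # for the translated point, cos(theta) = X / r
        assert abs(4 * r ** 3 - 3 * r - X / r) < 1e-9

print("translated limacon satisfies 4r^3 - 3r = cos(theta)")
```

Substituting $x \rightarrow x+\frac{1}{2}$ into S’s Cartesian equation shows why this check is exact: the equation collapses to $4\rho ^4-3\rho ^2=x$, i.e., $4\rho ^3-3\rho =cos(\theta )$.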

S’s proof of her identity, $\displaystyle cos(\theta) = 4cos^3 \left( \frac{\theta }{3} \right) - 3cos \left( \frac{\theta }{3} \right)$ :

## Fun with Series

Two days ago, one of my students (P) wandered into my room after school to share a problem he had encountered at the 2013 Walton MathFest, but didn’t know how to crack.  We found one solution.  I’d love to hear if anyone discovers a different approach.  Here’s our answer.

PROBLEM:  What is the sum of $\displaystyle \sum_{n=1}^{\infty} \left( \frac{n^2}{2^n} \right) = \frac{1^2}{2^1} + \frac{2^2}{2^2} + \frac{3^2}{2^3} + ...$ ?

Without the $n^2$, this would be a simple geometric series, but the quadratic and exponential terms can’t be combined in any way we knew, so the solution must require rewriting.  After some thought, we remembered that perfect squares can be found by adding odd integers.  I suggested rewriting the series as

where each column adds to one of the terms in the original series.  Each row was now a geometric series which we knew how to sum.  That meant we could rewrite the original series as

We had lost the quadratic term, but we still couldn’t sum the series with both a linear and an exponential term.  At this point, P asked if we could use the same approach to rewrite the series again.  Because the numerators were all odd numbers and each could be written as a sum of 1 and some number of 2s, we got

where each column now added to one of the terms in our secondary series.  Each row was again a geometric series, allowing us to rewrite the secondary series as

Ignoring the first term, this was finally a single geometric series, and we had found the sum.
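The rearrangements can be verified numerically.  In this sketch (my notation for the row sums), row $j$ of the first rewriting is a geometric series summing to $(2j-1)\cdot 2^{1-j}$, and the second rewriting collapses to $2$ plus a single geometric series with first term $2$ and ratio $\frac{1}{2}$; all three forms agree:

```python
N = 60  # terms; the tails beyond this are negligibly small

original = sum(n * n / 2.0 ** n for n in range(1, N))
first_rewrite = sum((2 * j - 1) * 2.0 ** (1 - j) for j in range(1, N))
second_rewrite = 2 + sum(2.0 ** (2 - m) for m in range(1, N))

print(original, first_rewrite, second_rewrite)   # all three are ~6
```

So the final answer is $2+\frac{2}{1-\frac{1}{2}}=6$.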

Does anyone have another way?

That was fun.  Thanks, P.

## Quadratics, Statistics, Symmetry, and Transformations

A problem I assigned my precalculus class this past Thursday ended up with multiple solutions by the time we finished.  Huzzah for student creativity!

The question:

Find equations for all polynomial functions, $y=f(x)$, of degree $\le 2$ for which $f(0)=f(1)=2$ and $f(3)=0$.

After they had worked on this (along with several variations on the theme), four very different ways of thinking about this problem emerged.  All were valid and even led to a lesson I hadn’t planned–proving that, even though they looked different algebraically, all were equivalent.  I present their approaches (and a few extras) in the order they were offered in our post-solving debriefing.

The commonality among the approaches was their recognition that 3 non-collinear points uniquely define a vertical parabola, so they didn’t need to worry about polynomials of degree 0 or 1.  (They haven’t yet heard about rotated curves that led to my earlier post on rotated quadratics.)

Solution 1–Regression:  Because only 3 points were given, a quadratic regression would derive a perfectly fitting quadratic equation.  Using their TI-Nspire CASs, they started by entering the 3 ordered pairs in a Lists&Spreadsheets window.  Most then went to a Calculator window to compute a quadratic regression.  Below, I show the same approach using a Data&Statistics window instead so I could see simultaneously the curve fit and the given points.

The decimals were easy enough to interpret, so even though they were presented in decimal form, these students reported $y=-\frac{1}{3}x^2+\frac{1}{3}x+2$.

For a couple seconds after this was presented, I honestly felt a little cheated.  I was hoping they would tap the geometric or algebraic properties of quadratics to get their equations.  But then I remembered that I clearly hadn’t made that part of my instructions.  After my initial knee-jerk reaction, I realized this group of students had actually done exactly what I explicitly have been encouraging them to do: think freely and take advantage of every tool they have to find solutions.  Nothing in the problem statement suggested technology or regressions, so while I had intended a more geometric approach, I realized I actually owed these students some kudos for a very creative, insightful, and technology-based solution.  This and Solution 2 were the most frequently chosen approaches.

Solution 2–Systems:  Equations of quadratic functions are typically presented in standard, factored, or vertex form.  Since neither two zeros nor the vertex were explicitly given, the largest portion of the students used the standard form, $y=a\cdot x^2+b\cdot x+c$ to create a 3×3 system of equations.  Some solved this by hand, but most invoked a CAS solution.  Notice the elegance of the solve command they used, working from the generic polynomial equation that kept them from having to write all three equations, keeping their focus on the form of the equation they sought.

This created the same result as Solution 1, $y=-\frac{1}{3}x^2+\frac{1}{3}x+2$.
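For those without a CAS handy, the same system can be solved exactly in a few lines.  This sketch (my own code, not my students’) builds the three standard-form equations as an augmented matrix and runs Gauss-Jordan elimination over exact fractions:

```python
from fractions import Fraction

def fit_quadratic(points):
    """Exact quadratic y = a*x^2 + b*x + c through three points,
    via Gauss-Jordan elimination on the augmented matrix."""
    # each row is [x^2, x, 1 | y]
    m = [[Fraction(x) ** 2, Fraction(x), Fraction(1), Fraction(y)]
         for x, y in points]
    for i in range(3):
        piv = next(r for r in range(i, 3) if m[r][i] != 0)
        m[i], m[piv] = m[piv], m[i]          # swap a nonzero pivot up
        m[i] = [v / m[i][i] for v in m[i]]   # normalize the pivot row
        for r in range(3):
            if r != i:                       # eliminate column i elsewhere
                m[r] = [v - m[r][i] * w for v, w in zip(m[r], m[i])]
    return [row[3] for row in m]             # [a, b, c]

print(fit_quadratic([(0, 2), (1, 2), (3, 0)]))   # a=-1/3, b=1/3, c=2
```

The exact fractions sidestep the decimal-interpretation step the regression approach required.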

CAS Aside: No students offered these next two solutions, but I believe when using a CAS, it is important for users to remember that the machine typically does not care what output form you want.  The standard form is the only “algebraically simple” approach when setting up a solution by hand, but the availability of technology makes solving for any form equally accessible.

The next screen shows that the vertex and factored forms are just as easily derived as the standard form my students found in Solution 2.

I was surprised when the last line’s output wasn’t in vertex form, $y=-\frac{1}{3}\cdot \left ( x-\frac{1}{2} \right )^2+\frac{25}{12}$, but the coefficients in its expanded form clearly show the equivalence between this form and the standard forms derived in Solutions 1 and 2–a valuable connection.

Solution 3–Symmetry:  Two students said they noticed that $f(0)=f(1)=2$ guaranteed the vertex of the parabola occurred at $x=\frac{1}{2}$.  Because $f(3)=0$ defined one real root of the unknown quadratic, the parabola’s symmetry guaranteed another at $x=-2$, giving potential equation $y=a\cdot (x-3)(x+2)$.  They substituted the given (0,2) to solve for a, giving final equation $y=-\frac{1}{3}\cdot (x-3)(x+2)$ as confirmed by the CAS approach above.

Solution 4–Transformations:  One of the big lessons I repeat in every class I teach is this:

If you don’t like how a question is posed, change it!

Notice that two of the given points have the same y-coordinate.  If that y-coordinate had been 0 (instead of its given value, 2), a factored form would be simple.  Well, why not force them to be x-intercepts by translating all of the given points down 2 units?

The transformed data show x-intercepts at 0 and 1 with another ordered pair at $(3,-2)$.  From here, the factored form is easy:  $y=a\cdot (x-0)(x-1)$.  Substituting $(3,-2)$ gives $a=-\frac{1}{3}$ and the final equation is $y=-\frac{1}{3}\cdot (x-0)(x-1)$ .

Of course, this is an equation for the transformed points.  Sliding the result back up two units, $y=-\frac{1}{3}\cdot (x-0)(x-1)+2$, gives an equation for the given points.  Aside from its lead coefficient, this last equation looked very different from the other forms, but some quick expansion proved its equivalence.

Conclusion:  It would have been nice if someone had used the symmetry noted in Solution 3 to attempt a vertex-form answer via systems.  Given the vertex at $x=\frac{1}{2}$ with an unknown y-coordinate, a potential equation is $y=a\cdot \left ( x-\frac{1}{2} \right )^2+k$.  Substituting $(3,0)$ and either $(0,2)\text{ or }(1,2)$ creates a 2×2 system of linear equations, $\left\{\begin{matrix} 0=a\cdot \left ( 3-\frac{1}{2} \right )^2+k \\ 2=a\cdot \left ( 0-\frac{1}{2} \right )^2+k \end{matrix}\right.$.  From there, a by-hand or CAS solution would have been equally acceptable to me.

That the few alternative approaches I offered above weren’t used didn’t matter in the end.  My students were creative, followed their own instincts to find solutions that aligned with their thinking, and clearly appreciated the alternative ways their classmates used to find answers.  Creativity and individual expression reigned, while everyone broadened their understanding that there’s not just one way to do math.

It was a good day.

## Factors and number bases

For the second day of my precalculus classes last week, I had planned to introduce them to some of the CAS syntax of their new TI-Nspire calculators.  The following is my best attempt to recreate a conversation that happened when we explored the factor command.  Of course, this could have happened on any CAS platform you have available (TI-Nspire CAS, Wolfram Alpha, Geogebra (v4.2 beta release and forum), …).

I first asked them to factor $x^2-1$.  No surprises.  Then, to demonstrate the power of the machines, I asked them to factor $x^{23}-1$.  The virtually instantaneous results elicited some “wow”s around the room.  Knowing some sub-factoring would result, I suggested they factor $x^8-1$.  One student also factored $x^{13}-1$.  Announcing her result to the class, speculation immediately mounted that there might be a bigger pattern at play, especially for odd integer exponents.

It’s difficult to use any toolbox to its fullest extent if you don’t know everything inside it.  My only plan for this was to introduce a CAS command for problem solving later in the course, but my students had other plans.  Seeing the results above, they started to make broader pattern predictions.

Pattern 1:  A few quickly surmised that for odd integer values of n, $x^n-1$ factored to $(x-1)\cdot\left( x^{n-1}+x^{n-2}+...+x^2+x+1 \right)$.  It was a nice stab at the obvious cases, so I asked whether the pattern also held for even values of n.  Some reflexively said “no” based on the output, but others suggested that perhaps the non-$(x-1)$ factors of $x^8-1$ could multiply back together to continue the pattern the odd powers seem to follow.
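Pattern 1 is easy to stress-test before proving it.  The sketch below (plain Python, storing polynomials as coefficient lists) multiplies $(x-1)$ by $x^{n-1}+x^{n-2}+...+x+1$ and confirms the product is exactly $x^n-1$ for both odd and even n:

```python
def poly_mul(p, q):
    """Multiply polynomials stored as coefficient lists, highest power first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

for n in range(2, 25):
    cofactor = [1] * n                      # x^(n-1) + ... + x + 1
    product = poly_mul([1, -1], cofactor)   # multiply by (x - 1)
    assert product == [1] + [0] * (n - 1) + [-1]   # equals x^n - 1

print("pattern holds for n = 2 through 24, odd and even alike")
```

This confirms the factorization itself is universal; what differs for even n is only whether the long cofactor factors further.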

GREAT!  I had planned to introduce the Nspire expand command later; its need just developed organically.  Why give a list of things for uninspired memorization when you can instead put them in situations where they’ll ask to be taught that same specific content?  The students needed and asked for a tool they didn’t think they had yet.  The results elicited a few more “wow”s and “cool”s.

The pattern does seem to continue, but the CAS factors the non-$(x-1)$ polynomial factor further.  What they didn’t ask was why the polynomials from even n factor further while those from odd n don’t seem to do the same.  If the opportunity presents itself, I’ll circle back on that one another day.  They also made two other smaller connections.

Boolean discovery: Notice the result from the last line of the last image.  Rather than expanding the product of the three binomial factors, one student compared what she suspected were the factors, and the CAS responded with the Boolean “true”.  She had done this while the class was struggling to divine the name of the command that might make the CAS “un-factor”–before I gave them the expand command.  It showed them that a CAS actually might be able to help evaluate student hypotheses.

Continuation of Pattern 1: After seeing these results, I asked if the technology was required to factor $x^2-1$.  Most quickly replied “no”, thinking, I guess, that $x^2-1=(x-1)(x+1)$ was a special case they had memorized long ago.  I pressed on.  Does this fit the pattern we had just discovered?  After a few silent moments more, one boy tentatively said, “Well, sure.  If $n=2$, then $x^2-1=(x-1)(x^1+1)$, using just the last two terms of the longer polynomials created for larger values of n.”  More “cool”s.

Final pattern:  One other student noticed the cascading exponents in the factoring of $x^8-1$.  When asked why that was, he pointed out that $x^8-1$ could be written and factored as a difference of squares:  $x^8-1=\left( \left( x^4 \right)^2-1^2 \right)=(x^4-1)(x^4+1)$.  From there, the first term was also a difference of squares, and the pattern continued to the complete factoring of $x^8-1$ in line 3 of the first image.  He confirmed his hypothesis with another “cascading” factoring of differences of squares with $x^{16}-1$.
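That cascading difference-of-squares factoring can also be confirmed numerically.  The sketch below (the helper name cascade_eval is mine) evaluates the cascaded product for $x^8-1$ and $x^{16}-1$ at several sample points:

```python
# The "cascading" difference-of-squares factoring the student noticed:
# x^8 - 1 = (x^4 - 1)(x^4 + 1) = (x^2 - 1)(x^2 + 1)(x^4 + 1) = ...
# fully cascaded: (x - 1)(x + 1)(x^2 + 1)(x^4 + 1) ... (x^(2^(k-1)) + 1)

def cascade_eval(x, k):
    """Evaluate the cascaded factoring of x^(2^k) - 1 at the value x."""
    product = x - 1
    e = 1
    while e < 2 ** k:
        product *= x ** e + 1
        e *= 2
    return product

for k in (3, 4):                       # x^8 - 1 and x^16 - 1
    for x in (-2, -1, 0.5, 1, 2, 3):
        assert abs(cascade_eval(x, k) - (x ** (2 ** k) - 1)) < 1e-9
```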

Conclusions:  Obviously my students haven’t proved any of these factoring patterns yet, but I was particularly impressed with the way technology created an algebraic sandbox for my students.  They were free to think and explore without me holding their hands every step of the way.

My students had “discovered” a cool factoring pattern.  They had to THINK hard to describe what they saw and investigated to see if the pattern was universal or just some isolated occurrence.  And technology clearly facilitated and enabled some high-level learning that otherwise would have been nothing more than low-level and quickly forgotten memorization.

This happened last Thursday.  Covering other ideas, I mostly left the idea alone Friday, but yesterday I asked my students at the start of class to factor $x^{11}-1$ without technology.  They all nailed it.

For the future?  This was an accidental lesson, but it is one I’ll deliberately set up in the future for other classes.  It was far more effective than any factoring worksheet I’ve ever seen.  Here are some additional questions I plan to pose later if an opportunity arises.  I’d love any ideas readers may suggest.

1. If n is even, will the longer, non-$(x-1)$ polynomial always factor?  Why?
2. What happens for even values of n that aren’t also powers of 2?
3. How can you PROVE that any of these patterns actually hold universally?
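As a head start on question 2, here is a quick numeric check for $n=6$, the smallest even value that is not a power of 2.  The factorization used is the standard one over the integers, verified at sample points rather than proved:

```python
# For n = 6 (even, not a power of 2), x^6 - 1 factors further than the
# difference-of-squares cascade alone:
# x^6 - 1 = (x - 1)(x + 1)(x^2 + x + 1)(x^2 - x + 1)

def factored_form(x):
    """Evaluate the fully factored form of x^6 - 1 at the value x."""
    return (x - 1) * (x + 1) * (x**2 + x + 1) * (x**2 - x + 1)

for x in (-3, -1.5, -1, 0, 0.25, 1, 2, 5):
    assert abs(factored_form(x) - (x**6 - 1)) < 1e-9
```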

Extension:  One of the students stopped by my room after school Thursday to let me know that personal interest has led him to explore binary numbers in the past.  He was intrigued by the connection of these results to decimal representations.  Another great organic teaching moment!  After a fun conversation with him, I decided to share with that class some thoughts I had on using your fingers to multiply.  This is already long, so a description of that class will need to wait for another post.

## Transformations III

My last post interpreted the determinants of a few transformation matrices.  For any matrix of a transformation, $[T]$, $det[T]$ is the area scaling factor from the pre-image to the image (addressing the second half of CCSSM Standard N-VM 12 on page 61 here), and the sign of $det[T]$ indicates whether the pre-image and image have the same or opposite orientation.

These are not intuitively obvious, in my opinion, so it’s time for some proof accessible to middle and high school students.

Setting Up:  Take the unit square defined clockwise by vertices (0,0), (0,1), (1,1), and (1,0) under a generic transformation $[T]= \left[ \begin{array}{cc} A & C \\ B & D \end{array}\right]$ where A, B, C, and D are real constants.  Because the unit square has area 1, the area of the image is also the area scaling factor from the pre-image to the image.

As before, the image of the unit square under T is determined by

$\left[ \begin{array}{cc} A & C \\ B & D \end{array}\right] \cdot$ $\left[ \begin{array}{cccc} 0 & 0 & 1 & 1 \\ 0 & 1 & 1 & 0 \end{array}\right] =$ $\left[ \begin{array}{cccc} 0 & C & A+C & A \\ 0 & D & B+D & B \end{array}\right]$.

So the origin is its own image, (0,1) becomes (C,D), (1,1) becomes (A+C,B+D), and (1,0) becomes (A,B).  As $[T]$ is a generic transformation matrix, nothing can be specifically known about the sign or magnitude of its components, but the image below shows one possible case of the image that maintains the original orientation.
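For anyone who wants to experiment beyond one static picture, the matrix product above can be sketched in a few lines of Python.  The helper name transform and the sample constants are mine, not from the original post:

```python
# Apply a generic 2x2 transformation [T] = [[A, C], [B, D]] to the
# vertices of the unit square and confirm the images claimed above.

def transform(T, points):
    """Apply 2x2 matrix T (given as two rows) to a list of (x, y) points."""
    (a, c), (b, d) = T
    return [(a * x + c * y, b * x + d * y) for x, y in points]

A, B, C, D = 2, 1, -1, 3                  # arbitrary sample constants
square = [(0, 0), (0, 1), (1, 1), (1, 0)]
image = transform(((A, C), (B, D)), square)
assert image == [(0, 0), (C, D), (A + C, B + D), (A, B)]
```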

When I was working on this problem the first time, I did not expect the image of the unit square to become a parallelogram under every possible $[T]$ (remember that all of its components are assumed constant), but that can be verified by comparing coordinates.  To confirm the area scale change claim, I need to know the generic parallelogram’s area.  I’ll do this two ways.  The first is more elegant, but it invokes vectors–likely a precalculus topic.  The second should be accessible to middle school students.

Area (Method 1):  A parallelogram can be defined using two vectors.  In the image above, the “left side” from the origin to (C,D) is $\langle C,D,0 \rangle$–the 3rd dimensional component is needed to compute a cross product.  Likewise, the “bottom side” can be represented by vector $\langle A,B,0 \rangle$. The area of a parallelogram is the magnitude of the cross product of the two vectors defining the parallelogram (an explanation of this fact is here).  Because $\langle A,B,0 \rangle \times \langle C,D,0 \rangle = \langle 0,0,AD-BC \rangle$,

$| \text{Area of Parallelogram} | = |AD-BC|$.

Cross products are not commutative, but reversing the order gives $\langle C,D,0 \rangle \times \langle A,B,0 \rangle = \langle 0,0,-(AD-BC) \rangle$, which has the same magnitude.  Either way, Claim #1 is true.
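Method 1 translates directly to code.  This sketch computes the cross product by hand rather than relying on any library; the constants are arbitrary sample values:

```python
# Method 1 in code: the parallelogram's area is the magnitude of the
# cross product of <A,B,0> and <C,D,0>.

def cross(u, v):
    """Cross product of two 3D vectors given as tuples."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

A, B, C, D = 2, 1, -1, 3                  # arbitrary sample constants
assert cross((A, B, 0), (C, D, 0)) == (0, 0, A * D - B * C)
assert cross((C, D, 0), (A, B, 0)) == (0, 0, -(A * D - B * C))
# Either order gives area |AD - BC| = |2*3 - 1*(-1)| = 7
assert abs(A * D - B * C) == 7
```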

Area (Method 2):  Draw a rectangle around the parallelogram with two sides on the coordinate axes, one vertex at the origin, and another at (A+C,B+D).  As shown below, the area interior to the rectangle, but exterior to the parallelogram can be decomposed into right triangular and rectangular regions.

$\Delta I \cong \Delta IV$ with total area $A\cdot B$, and $\Delta III \cong \Delta VI$ with total area $C\cdot D$.  Finally, rectangles II and V are congruent with total area $2B\cdot C$ .  Together, these lead to an indirect computation of the parallelogram’s area.

$|Area|=\left| (A+C)(B+D)-AB-CD-2BC \right| =|AD-BC|$

The absolute values are required because the magnitudes of the constants are unknown.  This is exactly the same result obtained above.  While I used a convenient case for the positioning of the image points in the graphics above, that positioning is irrelevant.  No matter what the sign or relative magnitudes of the constants in $[T]$, the parallelogram area can always be computed indirectly by subtracting the areas of four triangles and two rectangles from a larger rectangle, giving the same result.
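The algebraic identity underlying Method 2 can be stress-tested across all sign combinations of the constants with random sampling.  A quick sketch:

```python
# Method 2 in code: check that the indirect computation
# |(A+C)(B+D) - AB - CD - 2BC| always equals |AD - BC|,
# for randomly chosen constants of every sign.

import random

random.seed(1)
for _ in range(1000):
    A, B, C, D = (random.uniform(-5, 5) for _ in range(4))
    indirect = abs((A + C) * (B + D) - A * B - C * D - 2 * B * C)
    assert abs(indirect - abs(A * D - B * C)) < 1e-9
```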

Whichever area approach works for you, Claim #1 is true.

Establishing Orientation:  The side from the origin to (A,B) in the parallelogram is a segment on the line $y=\frac{B}{A} x$.  The position of (C,D) relative to this line can be used to determine the orientation of the image parallelogram.

• Assuming the two cases for $A>0$ shown above, the image orientation remains clockwise iff vertex (C,D) is above $y=\frac{B}{A} x$.  Algebraically, this happens if $D>\frac{B}{A}\cdot C \Longrightarrow AD-BC>0$ .

• When $A<0$, the image orientation remains clockwise iff vertex (C,D) is below $y=\frac{B}{A} x$.  Algebraically, this happens if $D<\frac{B}{A}\cdot C \Longrightarrow AD-BC>0$ .
• When $A=0$ and $B<0$, the image is clockwise when $C>0$, again making $AD-BC>0$ .  The same is true for $A=0$, $B>0$, and $C<0$.

In all cases, the pre-image and image have the identical orientation when $AD-BC=det[T]>0$ and are oppositely oriented when $det[T]<0$.
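The orientation claim can also be checked computationally: the signed (shoelace) area of the image parallelogram equals $det[T]$ exactly, so its sign tells the orientation story.  The sketch below uses a counterclockwise pre-image ordering, the opposite of the clockwise ordering above, so that positive signed area means orientation was preserved:

```python
# Orientation via the shoelace formula: for a counterclockwise unit
# square pre-image, the signed area of the image equals det[T], so
# det[T] > 0 preserves orientation and det[T] < 0 reverses it.

import random

def shoelace(points):
    """Signed area: positive for counterclockwise vertex order."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return s / 2

random.seed(2)
for _ in range(500):
    A, B, C, D = (random.uniform(-5, 5) for _ in range(4))
    det = A * D - B * C
    # image of the counterclockwise pre-image (0,0),(1,0),(1,1),(0,1)
    image = [(0, 0), (A, B), (A + C, B + D), (C, D)]
    assert abs(shoelace(image) - det) < 1e-9   # signed area equals det
```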

Q.E.D.

## Trig Identities with a Purpose

Yesterday, I was thinking about some changes I could introduce to a unit on polar functions.  Realizing that almost all of the polar functions traditionally explored in precalculus courses have graphs that are complete over the interval $0\le\theta\le 2\pi$, I wondered if there were any interesting curves that took more than $2\pi$ units to graph.

My first attempt was $r=cos\left(\frac{\theta}{2}\right)$ which produced something like a merged double limaçon with loops over its $4\pi$ period.

Trying for more of the same, I graphed $r=cos\left(\frac{\theta}{3}\right)$ guessing (without really thinking about it) that I’d get more loops.  I didn’t get what I expected at all.

Wow!  That looks exactly like the image of a standard limaçon with a loop under a translation of 0.5 units to the left.

Further exploration confirms that $r=cos\left(\frac{\theta}{3}\right)$ completes its graph in $3\pi$ units while $r=\frac{1}{2}+cos\left(\theta\right)$ requires $2\pi$ units.

As you know, in mathematics, it is never enough to claim things look the same; proof is required.  The acute challenge in this case is that two polar curves (based on angle rotations) appear to be separated by a horizontal translation (a rectangular displacement).  I’m not aware of any clean, general way to apply a rectangular transformation to a polar graph or a rotational transformation to a Cartesian graph.  But what I can do is rewrite the polar equations into a parametric form and translate from there.

For $0\le\theta\le 3\pi$ , $r=cos\left(\frac{\theta}{3}\right)$ becomes $\begin{array}{lcl} x_1 &= &cos\left(\frac{\theta}{3}\right)\cdot cos\left (\theta\right) \\ y_1 &= &cos\left(\frac{\theta}{3}\right)\cdot sin\left (\theta\right) \end{array}$ .  Sliding this $\frac{1}{2}$ unit to the right makes the parametric equations $\begin{array}{lcl} x_2 &= &\frac{1}{2}+cos\left(\frac{\theta}{3}\right)\cdot cos\left (\theta\right) \\ y_2 &= &cos\left(\frac{\theta}{3}\right)\cdot sin\left (\theta\right) \end{array}$ .

This should align with the standard limaçon, $r=\frac{1}{2}+cos\left(\theta\right)$ , whose parametric equations for $0\le\theta\le 2\pi$  are $\begin{array}{lcl} x_3 &= &\left(\frac{1}{2}+cos\left(\theta\right)\right)\cdot cos\left (\theta\right) \\ y_3 &= &\left(\frac{1}{2}+cos\left(\theta\right)\right)\cdot sin\left (\theta\right) \end{array}$ .

The only problem that remains for comparing $(x_2,y_2)$ and $(x_3,y_3)$ is that their domains are different, but a parameter shift can handle that.

If $0\le\beta\le 3\pi$ , then $(x_2,y_2)$ becomes $\begin{array}{lcl} x_4 &= &\frac{1}{2}+cos\left(\frac{\beta}{3}\right)\cdot cos\left (\beta\right) \\ y_4 &= &cos\left(\frac{\beta}{3}\right)\cdot sin\left (\beta\right) \end{array}$ and $(x_3,y_3)$ becomes $\begin{array}{lcl} x_5 &= &\left(\frac{1}{2}+cos\left(\frac{2\beta}{3}\right)\right)\cdot cos\left (\frac{2\beta}{3}\right) \\ y_5 &= &\left(\frac{1}{2}+cos\left(\frac{2\beta}{3}\right)\right)\cdot sin\left (\frac{2\beta}{3}\right) \end{array}$ .

Now that the translation has been applied and both functions operate over the same domain, the two functions must be identical iff $x_4 = x_5$ and $y_4 = y_5$ .  It’s time to prove those trig identities!
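Before grinding through the algebra, it is reassuring to check the claimed identities numerically.  The sketch below samples the shared domain; the helper names xy4 and xy5 are mine:

```python
# Numeric sanity check that x4 = x5 and y4 = y5 for 0 <= beta <= 3*pi,
# before proving the identities by hand.

import math

def xy4(b):
    """(x4, y4): translated parametric form of r = cos(theta/3)."""
    return (0.5 + math.cos(b / 3) * math.cos(b),
            math.cos(b / 3) * math.sin(b))

def xy5(b):
    """(x5, y5): reparametrized limacon r = 1/2 + cos(theta)."""
    t = 2 * b / 3
    return ((0.5 + math.cos(t)) * math.cos(t),
            (0.5 + math.cos(t)) * math.sin(t))

for k in range(301):
    b = 3 * math.pi * k / 300
    x4, y4 = xy4(b)
    x5, y5 = xy5(b)
    assert abs(x4 - x5) < 1e-9 and abs(y4 - y5) < 1e-9
```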

Before blindly manipulating the equations, I take some time to develop some strategy.  I notice that the $(x_5, y_5)$ equations contain only one type of angle–double angles of the form $2\cdot\frac{\beta}{3}$ –while the $(x_4, y_4)$ equations contain angles of two different types, $\beta$ and $\frac{\beta}{3}$ .  It is generally easier to work with a single type of angle, so my strategy is going to be to turn everything into trig functions of double angles of the form $2\cdot\frac{\beta}{3}$ .

$\displaystyle \begin{array}{lcl} x_4 &= &\frac{1}{2}+cos\left(\frac{\beta}{3}\right)\cdot cos\left (\beta\right) \\ &= &\frac{1}{2}+cos\left(\frac{\beta}{3}\right)\cdot cos\left (\frac{\beta}{3}+\frac{2\beta}{3} \right) \\ &= &\frac{1}{2}+cos\left(\frac{\beta}{3}\right)\cdot\left( cos\left(\frac{\beta}{3}\right) cos\left(\frac{2\beta}{3}\right)-sin\left(\frac{\beta}{3}\right) sin\left(\frac{2\beta}{3}\right)\right) \\ &= &\frac{1}{2}+\left[cos^2\left(\frac{\beta}{3}\right)\right] cos\left(\frac{2\beta}{3}\right)-\frac{1}{2}\cdot 2cos\left(\frac{\beta}{3}\right) sin\left(\frac{\beta}{3}\right) sin\left(\frac{2\beta}{3}\right) \\ &= &\frac{1}{2}+\left[\frac{1+cos\left(2\frac{\beta}{3}\right)}{2}\right] cos\left(\frac{2\beta}{3}\right)-\frac{1}{2}\cdot sin^2\left(\frac{2\beta}{3}\right) \\ &= &\frac{1}{2}+\frac{1}{2}cos\left(\frac{2\beta}{3}\right)+\frac{1}{2} cos^2\left(\frac{2\beta}{3}\right)-\frac{1}{2} \left( 1-cos^2\left(\frac{2\beta}{3}\right)\right) \\ &= & \frac{1}{2}cos\left(\frac{2\beta}{3}\right) + cos^2\left(\frac{2\beta}{3}\right) \\ &= & \left(\frac{1}{2}+cos\left(\frac{2\beta}{3}\right)\right)\cdot cos\left(\frac{2\beta}{3}\right) = x_5 \end{array}$

This proves the x expressions are equivalent.  Now for the y’s.

$\displaystyle \begin{array}{lcl} y_4 &= & cos\left(\frac{\beta}{3}\right)\cdot sin\left(\beta\right) \\ &= & cos\left(\frac{\beta}{3}\right)\cdot sin\left(\frac{\beta}{3}+\frac{2\beta}{3} \right) \\ &= & cos\left(\frac{\beta}{3}\right)\cdot\left( sin\left(\frac{\beta}{3}\right) cos\left(\frac{2\beta}{3}\right)+cos\left(\frac{\beta}{3}\right) sin\left(\frac{2\beta}{3}\right)\right) \\ &= & \frac{1}{2}\cdot 2cos\left(\frac{\beta}{3}\right) sin\left(\frac{\beta}{3}\right) cos\left(\frac{2\beta}{3}\right)+\left[cos^2 \left(\frac{\beta}{3}\right)\right] sin\left(\frac{2\beta}{3}\right) \\ &= & \frac{1}{2}sin\left(2\frac{\beta}{3}\right) cos\left(\frac{2\beta}{3}\right)+\left[\frac{1+cos \left(2\frac{\beta}{3}\right)}{2}\right] sin\left(\frac{2\beta}{3}\right) \\ &= & \left(\frac{1}{2}+cos\left(\frac{2\beta}{3}\right)\right)\cdot sin\left (\frac{2\beta}{3}\right) = y_5 \end{array}$

Therefore the graph of $r=cos\left(\frac{\theta}{3}\right)$ is exactly the graph of $r=\frac{1}{2}+cos\left(\theta\right)$ slid $\frac{1}{2}$ unit left.  Nice.

If there are any students reading this, know that it took a few iterations to come up with the versions of the identities proved above.  Remember that published mathematics is almost always cleaner and more concise than the effort it took to create it.  One of the early steps I took used the substitution $\gamma =\frac{\beta}{3}$ to clean up the appearance of the algebra.  In the final proof, I decided that the 2 extra lines of proof to substitute in and then back out were not needed.  I also meandered down a couple unnecessarily long paths that I was able to trim in the proof I presented above.

Despite these changes, my proof still feels cumbersome and inelegant to me.  From one perspective–Who cares?  I proved what I set out to prove.  On the other hand, I’d love to know if someone has a more elegant way to establish this connection.  There is always room to learn more.  Commentary welcome.

In the end, it’s nice to know these two polar curves are identical.  It pays to keep one’s eyes eternally open for unexpected connections!

## Polar Graphing Surprise

Nurfatimah Merchant and I were playing around with polar graphs, trying to find something that would stretch students beyond simple circles and types of limacons while still being within the conceptual reach of those who had just been introduced to polar coordinates roughly two weeks earlier.

We remembered that Cartesian graphs of trigonometric functions are much more “interesting” with different center lines.  That is, the graph of $y=cos(x)+3$ is nothing more than a standard cosine graph oscillating around $y=3$.

Likewise, the graph of $y=cos(x)+0.5x$ is a standard cosine graph oscillating around $y=0.5x$.

We teach polar graphing the same way.  To graph $r=3+cos(2\theta )$, we encourage our students to “read” the function as a cosine curve of period $\pi$ oscillating around the polar function $r=3$.  Because of its period, this curve will complete a cycle in $0\le\theta\le\pi$.  The graph begins this interval at $\theta =0$ (the positive x-axis) with a cosine graph 1 unit “above” $r=3$, moving to 1 unit “below” the “center line” at $\theta =\frac{\pi}{2}$, and returning to 1 unit above the center line at $\theta =\pi$.  This process repeats for $\pi\le\theta\le 2\pi$.
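This way of “reading” the function is easy to test numerically.  The sketch below confirms the two claims behind it: the radius never strays more than 1 unit from the center circle $r=3$, and the ripple repeats every $\pi$:

```python
# Reading r = 3 + cos(2*theta) as a cosine ripple around the circle r = 3:
# check that r stays within 1 unit of 3 and that the ripple has period pi.

import math

for k in range(721):
    theta = math.pi * k / 360
    r = 3 + math.cos(2 * theta)
    assert 2 <= r <= 4                                   # within 1 of r = 3
    assert abs(r - (3 + math.cos(2 * (theta + math.pi)))) < 1e-9  # period pi
```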

Our students graph polar curves far more confidently since we began using this approach (and a couple extensions on it) than those we taught earlier in our careers.  It has become a matter of understanding what functions do and how they interact with each other and almost nothing to do with memorizing particular curve types.

So, now that our students are confidently able to graph polar curves like $r=3+cos(2\theta )$, we wondered how we could challenge them a bit more.  Remembering variable center lines like the Cartesian $y=cos(x)+0.5x$, we wondered what a polar curve with a variable center line would look like.  Not knowing where to start, I proposed $r=2+cos(\theta )+sin(\theta)$, thinking I could graph a period $2\pi$ sine curve around the limacon $r=2+cos(\theta )$.

There’s a lot going on here, but in its most simplified version, we thought we would get a curve on the center line at $\theta =0$, 1 unit above at $\theta =\frac{\pi}{2}$, on at $\theta =\pi$, 1 unit below at $\theta =\frac{3\pi}{2}$, and returning to its starting point at $\theta =2\pi$.  We had a very rough “by hand” sketch, and were quite surprised by the image we got when we turned to our grapher for confirmation.  The oscillation behavior we predicted was certainly there, but there was more!  What do you see in the graph of $r=2+cos(\theta )+sin(\theta)$ below?

This looked to us like some version of a cardioid.  Given the symmetry of the axis intercepts, we suspected it was rotated $\frac{\pi}{4}$ from the x-axis.  An initially x-axis symmetric polar curve rotated $\frac{\pi}{4}$ would contain the term $cos(\theta-\frac{\pi}{4})$ which expands using a trig identity.

$\begin{array}{ccc} cos(\theta-\frac{\pi}{4})&=&cos(\theta )cos(\frac{\pi}{4})+sin(\theta )sin(\frac{\pi}{4}) \\ &=&\frac{1}{\sqrt{2}}(cos(\theta )+sin(\theta )) \end{array}$

Eureka!  This identity let us rewrite the original polar equation.

$\begin{array}{ccc} r=2+cos(\theta )+sin(\theta )&=&2+\sqrt{2}\cdot\frac{1}{\sqrt{2}} (cos(\theta )+sin(\theta )) \\ &=&2+\sqrt{2}\cdot cos(\theta -\frac{\pi}{4}) \end{array}$

And this last form says our original polar function is equivalent to $r=2+\sqrt{2}\cdot cos(\theta -\frac{\pi}{4})$, or a $\frac{\pi}{4}$ rotated cosine curve of amplitude $\sqrt{2}$ and period $2\pi$ oscillating around center line $r=2$.
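The rewrite is easy to confirm numerically.  A quick sketch:

```python
# Numeric confirmation that 2 + cos(t) + sin(t) equals
# 2 + sqrt(2)*cos(t - pi/4) at every sampled angle.

import math

for k in range(360):
    t = math.radians(k)
    lhs = 2 + math.cos(t) + math.sin(t)
    rhs = 2 + math.sqrt(2) * math.cos(t - math.pi / 4)
    assert abs(lhs - rhs) < 1e-12
```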

This last image shows a cosine curve starting at $\theta=\frac{\pi}{4}$ beginning $\sqrt{2}$ above the center circle $r=2$, crossing the center circle $\frac{\pi}{2}$ later at $\theta=\frac{3\pi}{4}$, dropping to $\sqrt{2}$ below the center circle at $\theta=\frac{5\pi}{4}$, back to the center circle at $\theta=\frac{7\pi}{4}$ before finally returning to the starting point at $\theta=\frac{9\pi}{4}$.  Because the radius is always positive, this also convinced us that this curve is actually a rotated limacon without a loop and not the cardioid that drove our initial investigation.

So, we thought we were departing into some new territory and found ourselves looking back at earlier work from a different angle.  What a nice surprise!

One more added observation:  We got a little lucky in guessing the angle of rotation, but even if it wasn’t known, it is always possible to compute an angle of rotation (or translation in Cartesian) for a sum of two sinusoids with identical periods.  This particular topic is covered in some texts, including Precalculus Transformed.
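For the curious, that computation is short: $a\cdot cos(\theta )+b\cdot sin(\theta )$ always rewrites as $R\cdot cos(\theta -\phi )$ with $R=\sqrt{a^2+b^2}$ and $\phi$ the angle of the point (a,b).  A sketch (the helper name sinusoid_sum is mine, and this is not the treatment from Precalculus Transformed):

```python
# Rewrite a*cos(t) + b*sin(t) as R*cos(t - phi):
# R = sqrt(a^2 + b^2) and phi = atan2(b, a), since
# R*cos(t - phi) = (R*cos(phi))*cos(t) + (R*sin(phi))*sin(t).

import math

def sinusoid_sum(a, b):
    """Return (R, phi) with a*cos(t) + b*sin(t) = R*cos(t - phi)."""
    return math.hypot(a, b), math.atan2(b, a)

R, phi = sinusoid_sum(1, 1)              # the cos + sin case from the post
assert abs(R - math.sqrt(2)) < 1e-12
assert abs(phi - math.pi / 4) < 1e-12

# spot-check the identity at many angles for arbitrary a, b
a, b = 3, -2
R, phi = sinusoid_sum(a, b)
for k in range(360):
    t = math.radians(k)
    assert abs(a * math.cos(t) + b * math.sin(t) - R * math.cos(t - phi)) < 1e-9
```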