
Quadratics + Tangent = ???

 

Here’s a very pretty problem I encountered on Twitter from Mike Lawler 1.5 months ago.

I’m late to the game replying to Mike’s post, but this problem is the most lovely combination of features of quadratic and trigonometric functions I’ve ever encountered in a single question, so I couldn’t resist.  This one is well worth the time for you to explore on your own before reading further.

My full thoughts and explorations follow.  I have landed on some nice insights and what I believe is an elegant solution (in Insight #5 below).  Leading up to that, I share the chronology of my investigations and thought processes.  As always, all feedback is welcome.

WARNING:  HINTS AND SOLUTIONS FOLLOW

Investigation  #1:

My first thoughts were influenced by spoilers posted as quick replies to Mike’s post.  The coefficients of the underlying quadratic, A^2-9A+1=0, say that the solutions to the quadratic sum to 9 and multiply to 1.  The product of 1 turned out to be critical, but I didn’t see just how central it was until I had explored further.  I didn’t immediately recognize the 9 as a red herring.

Basic trig experience (and a response spoiler) suggested the angle values for the tangent embedded in the quadratic weren’t common angles, so I jumped to Desmos first.  I knew the graph of the overall given equation would be ugly, so I initially solved the equation by graphing the quadratic, computing arctangents, and adding.

tan1

Insight #1:  A Curious Sum

The sum of the arctangent solutions was about 1.57…, a decimal form suspiciously suggesting a sum of \pi/2.  I wasn’t yet worried about all solutions in the required [0,2\pi ] interval, but for whatever strange angles were determined by this equation, their sum was strangely pretty and succinct.  If this worked for a seemingly random sum of 9 for the tangent solutions, perhaps it would work for others.
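If you would rather check that decimal than trust your eyes, a few lines of Python (a rough sketch of the same computation Desmos did for me, not part of the original exploration) confirm it:

```python
import math

# Roots of A^2 - 9A + 1 = 0, i.e., the two possible values of tan(x)
t1 = (9 + math.sqrt(81 - 4)) / 2
t2 = (9 - math.sqrt(81 - 4)) / 2

# Both roots are positive, so both arctangents are Quadrant I angles
A, B = math.atan(t1), math.atan(t2)
print(A + B, math.pi / 2)   # both print 1.5707963..., i.e., pi/2
```

The two outputs agree to machine precision, so the suspicious 1.57… really is \pi/2.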

Unfortunately, Desmos is not a CAS, so I turned to GeoGebra for more power.

Investigation #2:  

In GeoGebra, I created a sketch to vary the linear coefficient of the quadratic and to dynamically calculate angle sums.  My procedure is noted at the end of this post.  You can play with my GeoGebra sketch here.

The x-coordinate of point G is the sum of the first two angle solutions from the tangent equation.

Likewise, the x-coordinate of point H is the sum of all four angle solutions required by the problem.

tan2

Insight #2:  The Angles are Irrelevant

By dragging the slider for the linear coefficient, the parabola’s intercepts changed, but as predicted in Insight #1, the angle sums (the x-coordinates of points G & H) remained invariant for every coefficient value I tried that produced Real intercepts at points A & B.  The angle sum of points C & D stayed at \pi/2 (point G), confirming Insight #1, while the angle sum of all four solutions in [0,2\pi] remained 3\pi (point H), answering Mike’s question.

The invariance of the angle sums even while varying the underlying individual angles seemed compelling evidence that this problem was richer than the posed version.

Insight #3:  But the Angles are bounded

The parabola didn’t always have Real solutions.  In fact, Real x-intercepts (and thereby Real angle solutions) happened iff the discriminant was non-negative:  B^2-4AC=b^2-4*1*1 \ge 0.  In other words, for y=(tan(x))^2-b*tan(x)+1=0 with b \ge 2, both tangent values are positive, the first two angle solutions land in Quadrant I, their sum is \pi/2, and the sum of the first four solutions is 3\pi.  These results extend to the equality at b=2 iff the double solutions there are counted twice in the sums.  (When b \le -2, both tangent values are negative, the first two solutions fall in Quadrant II, and the two sums become 3\pi/2 and 5\pi instead: still invariant, just different constants.)  I am not convinced these facts extend to the complex angles resulting when -2<b<2.
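If you don’t want to rebuild the GeoGebra sketch, here is a rough Python sweep (the angle_sums helper is my own naming, just for illustration) that replays the slider experiment for several values of b \ge 2:

```python
import math

def angle_sums(b):
    """For tan(x)^2 - b*tan(x) + 1 = 0 with b >= 2, return the sum of the first
    two (Quadrant I) solutions and the sum of all four solutions in [0, 2*pi]."""
    t1 = (b + math.sqrt(b * b - 4)) / 2
    t2 = (b - math.sqrt(b * b - 4)) / 2
    A, B = math.atan(t1), math.atan(t2)        # Quadrant I angles
    four = [A, B, A + math.pi, B + math.pi]    # tangent has period pi
    return A + B, sum(four)

for b in [2, 2.5, 3, 9, 50, 1000]:
    two, four = angle_sums(b)
    print(b, round(two / math.pi, 6), round(four / math.pi, 6))
```

Every row prints 0.5 and 3.0 (the sums as multiples of \pi), matching the invariance observed in the sketch.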

I knew the answer to the now extended problem, but I didn’t know why.  Even so, these solutions and the problem’s request for a SUM of angles provided the insights needed to understand WHY this worked; it was time to fully consider the product of the angles.

Insight #4:  Finally a proof

It was now clear that for b \ge 2 there were two Quadrant I angles whose tangents were equal to the x-intercepts of the quadratic.  If x_1 and x_2 are the quadratic’s zeros, then I needed to find the sum A+B where tan(A)=x_1 and tan(B)=x_2.

From the coefficients of the given quadratic, I knew x_1+x_2=tan(A)+tan(B)=9 and x_1*x_2=tan(A)*tan(B)=1.

Employing the tangent sum identity gave

\displaystyle tan(A+B) = \frac{tan(A)+tan(B)}{1-tan(A)tan(B)} = \frac{9}{1-1}

and this fraction is undefined, independent of the value of x_1+x_2=tan(A)+tan(B), as suggested by Insight #2.  Tangent is undefined only at \pi/2 plus integer multiples of \pi, and because A and B are both Quadrant I angles, 0 < A+B < \pi, so \displaystyle A+B=\frac{\pi}{2}.

Insight #5:  Cofunctions reveal essence

The tangent identity was a cute touch, but I wanted something deeper, not just an interpretation of an algebraic result.  (I know this is uncharacteristic for my typically algebraic tendencies.)  The final key was in the implications of tan(A)*tan(B)=1.

This product meant the tangent solutions were reciprocals, and the reciprocal of tangent is cotangent, giving

\displaystyle tan(A) = \frac{1}{tan(B)} = cot(B).

But cotangent is also the co-function–or complement function–of tangent, which gave me

tan(A) = cot(B) = tan \left( \frac{\pi}{2} - B \right).

Because tangent is one-to-one within each cycle, and both A and \frac{\pi}{2}-B lie in the same cycle, the equal tangents force equal angles:  A = \frac{\pi}{2} - B, or A+B = \frac{\pi}{2}.  Using the Insights above, this means the sum of the solutions to the generalization of Mike’s given equation,

(tan(x))^2-b*tan(x)+1=0 for x in [0,2\pi ] and any b \ge 2,

is always 3\pi, with the fundamental reason rooted in the definition of trigonometric functions and their co-functions.  QED
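The cofunction relationship at the heart of this argument is also easy to spot-check numerically; this tiny Python sketch (purely illustrative) confirms that any positive tangent value and its reciprocal produce complementary angles:

```python
import math

# If tan(A) = t and tan(B) = 1/t with t > 0, then A + B = pi/2.
for t in [0.1, 0.5, 1, 2, 8.887, 100]:   # 8.887 is roughly the larger root of A^2 - 9A + 1 = 0
    A, B = math.atan(t), math.atan(1 / t)
    print(t, A + B - math.pi / 2)        # ~0 every time
```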

Insight #6:  Generalizing the Domain

The posed problem can be generalized further by recognizing the period of tangent: \pi.  That means successive corresponding solutions to the tangent values in this problem are always exactly \pi apart, as shown in the GeoGebra construction above.

Insights #4 & #5 proved the sum of the angles at points C & D was \pi/2.  Employing the periodicity of tangent, the x-coordinates of points E and F are those of C and D increased by \pi, so the sum of the angles at points E & F is \frac{\pi}{2} + 2 \pi.

Extending the problem domain to [0,3\pi ] would add \frac{\pi}{2} + 4\pi more to the solution, and a domain of [0,4\pi ] would add an additional \frac{\pi}{2} + 6\pi.  Pushing the domain to [0,k\pi ] would give total sum

\displaystyle \left( \frac{\pi}{2} \right) + \left( \frac{\pi}{2} +2\pi \right) + \left( \frac{\pi}{2} +4\pi \right) + \left( \frac{\pi}{2} +6\pi \right) + ... + \left( \frac{\pi}{2} +2(k-1)\pi \right)

Combining terms gives a general formula for the sum of solutions for a problem domain of [0,k\pi ]

\displaystyle k * \frac{\pi}{2} + \left( 2+4+6+...+2(k-1) \right) * \pi =

\displaystyle = k * \frac{\pi}{2} + (k)(k-1) \pi =

\displaystyle = \frac{\pi}{2} * k * (2k-1)

For the first solutions in Quadrant I, [0,\pi] means k=1, and the sum is \displaystyle \frac{\pi}{2}*1*(2*1-1) = \frac{\pi}{2}.

For the solutions in the problem Mike originally posed, [0,2\pi] means k=2, and the sum is \displaystyle \frac{\pi}{2}*2*(2*2-1) = 3\pi.
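A brute-force check of the general formula is reassuring.  Here is a short, throwaway Python sketch that lists every solution of the original equation in [0,k\pi ] and compares the sum to \frac{\pi}{2} * k * (2k-1):

```python
import math

b = 9   # the linear coefficient from Mike's original equation
A = math.atan((b + math.sqrt(b * b - 4)) / 2)
B = math.atan((b - math.sqrt(b * b - 4)) / 2)

for k in range(1, 7):
    # every solution in [0, k*pi] is one of the two base angles plus a multiple of pi
    total = sum(A + n * math.pi for n in range(k)) + sum(B + n * math.pi for n in range(k))
    formula = math.pi / 2 * k * (2 * k - 1)
    print(k, abs(total - formula))   # differences on the order of 1e-15 or smaller
```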

I think that’s enough for one problem.

APPENDIX

My GeoGebra procedure for Investigation #2:

  • Graph the quadratic with a slider for the linear coefficient, y=x^2-b*x+1.
  • Label the x-intercepts A & B.
  • The x-values of A & B are the outputs of tangent, so I reflected these points over y=x onto the y-axis to construct A’ and B’.
  • Graph y=tan(x) and construct horizontal lines through A’ and B’ to determine their points of intersection with the tangent graph–points C, D, E, and F in the image below.
  • The x-coordinates of C, D, E, and F are the angles required by the problem.
  • Since these can be treated as points or vectors in GeoGebra, I created point G by G=C+D.  The x-coordinate of G is the angle sum of C & D.
  • Likewise, the x-coordinate of point H=C+D+E+F is the required angle sum.



Envelope Curves

My precalculus class recently returned to graphs of sinusoidal functions with an eye toward understanding them dynamically via envelope curves:  Functions that bound the extreme values of the curves. What follows are a series of curves we’ve explored over the past few weeks.  Near the end is a really cool Desmos link showing an infinite progression of periodic envelopes to a single curve–totally worth the read all by itself.

GETTING STARTED

As a simple example, my students earlier had seen the graph of f(x)=5+2sin(x) as y=sin(x) vertically stretched by a factor of 2 and then translated upward 5 units.  In this return, I encouraged them to envision the function’s behavior dynamically instead of statically, seeing the curve (and the types of phenomena it could represent) as dynamic motion rather than a rigid transformation of a static curve.  In that sense, the graph of f oscillated 2 units (the coefficient of sine in f‘s equation) above and below the line y=5 (the addend in the equation for f).  The curves y=5+2=7 and y=5-2=3 define the “Envelope Curves” for y=f(x).

When you graph y=f(x) and its two envelope curves, you can picture the sinusoid “bouncing” between its envelopes.  We called these ceiling and floor functions for f.  Ceilings happen whenever the sinusoid term reaches its maximum value (+1), and floors when the sinusoidal term is at its minimum (-1).

Envelope1
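Desmos is the natural playground for these pictures, but if you prefer a scriptable version, here is a small matplotlib sketch (the helper name plot_with_envelopes is mine) that draws a sinusoid between its ceiling and floor:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_with_envelopes(midline, amplitude, label):
    """Plot midline(x) + amplitude(x)*sin(x) together with its ceiling and floor."""
    x = np.linspace(0, 4 * np.pi, 1000)
    plt.plot(x, midline(x) + amplitude(x) * np.sin(x), label=label)
    plt.plot(x, midline(x) + amplitude(x), "--", label="ceiling")
    plt.plot(x, midline(x) - amplitude(x), "--", label="floor")
    plt.legend()
    plt.show()

# f(x) = 5 + 2sin(x): constant midline y = 5 and constant amplitude 2
plot_with_envelopes(lambda x: 5 + 0 * x, lambda x: 2 + 0 * x, "y = 5 + 2sin(x)")
```

The later graphs in this post come from the same idea with different midline and amplitude functions, for example lambda x: x/2 with amplitude 1, midline 2 with amplitude 1.2**(-x), or midline np.cos(x) with amplitude 1.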

Those envelope functions would be just more busy work if it stopped there, though.  The great insights were that anything added to a sinusoid acts as its midline, AND anything multiplied by the sinusoid acts as its amplitude–the distance the curve moves above and below its midline.  The fun comes when you start to allow variable expressions for the midline and/or the amplitude.

VARIABLE MIDLINES AND ENVELOPES

For a first example, consider y= \frac{x}{2} + sin(x).  By the reasoning above, y= \frac{x}{2} is the midline.  The amplitude, 1, is the coefficient of sine, so the envelope curves are y= \frac{x}{2}+1 (ceiling) and y= \frac{x}{2}-1 (floor).

Envelope2

That got their attention!  Notice how easy it is to visualize the sine curve oscillating between its envelope curves.

For a variable amplitude, consider y=2+1.2^{-x}*sin(x).  The midline is y=2, with an “amplitude” of 1.2^{-x}.  That made a ceiling of y=2+1.2^{-x} and a floor of y=2-1.2^{-x}, basically exponential decay curves converging on an end behavior asymptote defined by the midline.

Envelope3

SINUSOIDAL MIDLINES AND ENVELOPES

Now for even more fun.  Convinced that both midlines and amplitudes could be variably defined, I asked what would happen if the midline were another sinusoid.  For y=cos(x)+sin(x), we could think of y=cos(x) as the midline, and with the coefficient of sine being 1, the envelopes are y=cos(x)+1 and y=cos(x)-1.

Envelope5

Since cosine is a sinusoid, you could get the same curve by considering y=sin(x) as the midline with envelopes y=sin(x)+1 and y=sin(x)-1.  Only the envelope curves are different!

Envelope6

The curve y=cos(x)+sin(x) raised two interesting questions:

  1. Was the addition of two sinusoids always another sinusoid?
  2. What transformations of sinusoidal curves could be defined by more than one pair of envelope curves?

For the first question, they theorized that if two sinusoids had the same period, their sum was another sinusoid of the same period, but with a different amplitude and a horizontal shift.  Mathematically, that means

A*cos(\theta ) + B*sin(\theta ) = C*cos(\theta -D)

where A & B are the original sinusoids’ amplitudes, C is the new sinusoid’s amplitude, and D is the horizontal shift.  Use the cosine difference identity to derive

A^2 + B^2 = C^2  and \displaystyle tan(D) = \frac{B}{A}.

For y = cos(x) + sin(x), this means

\displaystyle y = cos(x) + sin(x) = \sqrt{2}*cos \left( x-\frac{\pi}{4} \right),

and the new coefficient means y= \pm \sqrt{2} is a third pair of envelopes for the curve.
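If you want a CAS confirmation of both the specific rewrite and the general amplitude/shift relationships, a rough SymPy sketch (spot-checking the general claim for a few amplitude pairs) does the job:

```python
import sympy as sp

x = sp.symbols('x')

# The specific rewrite used above: cos(x) + sin(x) = sqrt(2)*cos(x - pi/4)
diff = sp.cos(x) + sp.sin(x) - sp.sqrt(2) * sp.cos(x - sp.pi / 4)
print(sp.simplify(sp.expand_trig(diff)))       # 0

# The general claim C^2 = A^2 + B^2 and tan(D) = B/A, checked for sample amplitudes
for A, B in [(1, 1), (3, 4), (2, 7)]:
    C, D = sp.sqrt(A**2 + B**2), sp.atan(sp.Rational(B, A))
    diff2 = A * sp.cos(x) + B * sp.sin(x) - C * sp.cos(x - D)
    print(sp.simplify(sp.expand_trig(diff2)))  # 0 each time
```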

Envelope7

Very cool.  We explored several more sums and differences with identical periods.

WHAT HAPPENS WHEN THE PERIODS DIFFER?

Try a graph of g(x)=cos(x)+cos(3x).

Envelope8

Using the earlier concept that any function added to a sinusoid could be considered the midline of the sinusoid, we can picture the graph of g as the graph of y=cos(3x) oscillating around an oscillating midline, y=cos(x):

Envelope9

If you can’t see the oscillations yet, the coefficient of the cos(3x) term is 1, making the envelope curves y=cos(x) \pm 1.  The next graph clearly shows y=cos(3x) bouncing off its ceiling and floor as defined by its envelope curves.

Envelope10

Alternatively, the base sinusoid could have been y=cos(x) with envelope curves y=cos(3x) \pm 1.

Envelope11

Similar to the last section when we added two sinusoids with the same period, the sum of two sinusoids with different periods (but the same amplitude) can be rewritten using an identity.

cos(A) + cos(B) = 2*cos \left( \frac{A+B}{2} \right) * cos \left( \frac{A-B}{2} \right)

This can be proved in the present form, but is lots easier to prove from an equivalent form:

cos(x+y) + cos(x-y) = 2*cos(x) * cos(y).

For the current function, this means y = cos(x) + cos(3x) = 2*cos(x)*cos(2x).
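A quick SymPy check (again, just a sketch) confirms both the equivalent form and the specific case used here:

```python
import sympy as sp

x, y = sp.symbols('x y')

# The convenient equivalent form: cos(x+y) + cos(x-y) = 2*cos(x)*cos(y)
lhs = sp.cos(x + y) + sp.cos(x - y)
print(sp.simplify(sp.expand_trig(lhs) - 2 * sp.cos(x) * sp.cos(y)))      # 0

# The case used here: cos(x) + cos(3x) = 2*cos(x)*cos(2x)
diff = sp.cos(x) + sp.cos(3 * x) - 2 * sp.cos(x) * sp.cos(2 * x)
print(sp.simplify(sp.expand_trig(diff)))                                 # 0
```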

Now that the sum has been rewritten as a product, we can treat either factor as the amplitude of the other, defining two more pairs of envelope curves.  If y=cos(2x) is the sinusoid, then y= \pm 2cos(x) are envelopes of the original curve, and if y=cos(x) is the sinusoid, then y= \pm 2cos(2x) are envelopes.

Envelope12

Envelope13

In general, I think it’s easier to see the envelope effect with the larger-period function.  A particularly nice application of adding sinusoids with identical amplitudes and different periods is the beats musicians hear from the constructive and destructive sound-wave interference produced by two instruments close to, but not quite, in tune.  The points where the envelopes cross on the x-axis are the quiet points in the beats.

A STUDENT WANTED MORE

In class last Friday, my students were reviewing envelope curves in advance of our final exam when one made the next logical leap and asked what would happen if both the coefficients and periods were different.  When I mentioned that the exam wouldn’t go that far, she uttered a teacher’s dream proclamation:  She didn’t care.  She wanted to learn anyway.  Making up some coefficients on the spot, we decided to explore f(x)=2sin(x)+5cos(2x).

Assuming for now that the cos(2x) term is the primary sinusoid, the envelope curves are y=2sin(x) \pm 5.

Envelope14

That was certainly cool, but at this point, we were no longer satisfied with just one answer.  If we assumed sin(x) was the primary sinusoid, the envelopes are y=5cos(2x) \pm 2.

Envelope15

Personally, I found the first set of envelopes more satisfying, but it was nice that we could so easily identify another.

Even though the coefficients differed, we decided to split the original function in a way that allowed us to use the cos(A)+cos(B) identity introduced earlier, which requires equal coefficients.  Rewriting,

f(x)=2sin(x)+5cos(2x) = 2cos \left( x - \frac{ \pi }{2} \right) + 2cos(2x) + 3cos(2x) .

After factoring out the common coefficient 2, the first two terms now fit the cos(A) + cos(B) identity with A = x - \frac{ \pi }{2} and B=2x, allowing the equation to be rewritten as

 f(x)= 2 \left( 2*cos \left( \frac{x - \frac{ \pi }{2} + 2x }{2} \right) * cos \left( \frac{x - \frac{ \pi }{2} - 2x }{2} \right) \right) + 3cos(2x)

\displaystyle = 4*  cos \left( \frac{3}{2} x - \frac{ \pi }{4} \right) * cos \left( - \frac{1}{2} x - \frac{ \pi }{4} \right) + 3cos(2x).
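Because of the half-angle arguments, this rewrite is easiest to sanity-check numerically; a few lines of NumPy (illustrative only) show the two forms agree everywhere on a sample grid:

```python
import numpy as np

x = np.linspace(0, 4 * np.pi, 2001)
original = 2 * np.sin(x) + 5 * np.cos(2 * x)
rewritten = 4 * np.cos(1.5 * x - np.pi / 4) * np.cos(-0.5 * x - np.pi / 4) + 3 * np.cos(2 * x)
print(np.max(np.abs(original - rewritten)))   # ~1e-15: the two forms match
```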

With the expression now containing three sinusoidal expressions, there are three more pairs of envelope curves!

Arguably, the simplest approach from this form assumes cos(2x) from the 3cos(2x) term as the sinusoid, giving y=2sin(x)+2cos(2x) \pm 3 (the pre-identity form three equations earlier in this post) as envelopes.

Envelope16

We didn’t go there, but recognizing that new envelopes can be found simply by rewriting sums creates an infinite number of additional envelopes.  Defining these different sums with a slider lets you see an infinite spectrum of envelopes.  The image below shows one.  Here is the Desmos Calculator page that lets you play with these envelopes directly.

Envelope17

If the cos \left( \frac{3}{2} x - \frac{ \pi}{4} \right) term was the sinusoid, the envelopes would be y=3cos(2x) \pm 4cos \left( - \frac{1}{2} x - \frac{ \pi }{4} \right).  If you look closely, you will notice that this is a different type of envelope pair, with the ceiling and floor curves crossing and trading places at x= \frac{\pi}{2} and every 2\pi units before and after.  The third form creates another curious type of crossing envelopes.

Envelope18

CONCLUSION:

In all, it was fun to explore with my students the many possibilities for bounding sinusoidal curves.  It was refreshing to have one student excited by just playing with the curves to see what else we could find for no other reason than just to enjoy the beauty of these periodic curves.  As I reflected on the overall process, I was even more delighted to discover the infinite spectrum of envelopes modeled above on Desmos.

I hope you’ve found something cool here for yourself.

A Student’s Powerful Polar Exploration

I posted last summer on a surprising discovery of a polar function that appeared to be a horizontal translation of another polar function.  Translations happen all the time, but not really in polar coordinates.  The polar coordinate system just isn’t constructed in a way that makes translations appear in any clear way.

That’s why I was so surprised when I first saw a graph of \displaystyle r=cos \left( \frac{\theta}{3} \right).

Polar1

 

It looks just like a 0.5 left translation of r=\frac{1}{2} +cos( \theta ) .

Polar2

But that’s not supposed to happen so cleanly in polar coordinates.  AND, the equation forms don’t suggest at all that a translation is happening.  So is it real or is it a graphical illusion?

I proved in my earlier post that the effect was real.  In my approach, I dealt with the different periods of the two equations and converted into parametric equations to establish the proof.  Because I was working in parametrics, I had to solve two different identities to establish the individual equalities of the parametric version of the Cartesian x- and y-coordinates.

As a challenge to my precalculus students this year, I pitched the problem to see what they could discover. What follows is a solution from about a month ago by one of my juniors, S.  I paraphrase her solution, but the basic gist is that S managed her proof while avoiding the differing periods and parametric equations I had employed, and she did so by leveraging the power of CAS.  The result was that S’s solution was briefer and far more elegant than mine, in my opinion.

S’s Proof:

Multiply both sides of r = \frac{1}{2} + cos(\theta ) by r and translate to Cartesian.

r^2 = \frac{1}{2} r+r\cdot cos(\theta )
x^2 + y^2 = \frac{1}{2} \sqrt{x^2+y^2} +x
\left( 2\left( x^2 + y^2 -x \right) \right) ^2= \left( \sqrt{x^2+y^2} \right) ^2 = x^2+y^2

At this point, S employed some CAS power.

Polar3

[Full disclosure: That final CAS step is actually mine, but it dovetails so nicely with S’s brilliant approach. I am always delightfully surprised when my students return using a tool (technological or mental) I have been promoting but hadn’t seen to apply in a particular situation.]

S had used her CAS to accomplish the translation in a more convenient coordinate system before moving the equation back into polar.

Clearly, r \ne 0, so

4r^3 - 3r = cos(\theta ) .
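If you would like to reproduce that CAS step yourself, here is a rough SymPy version of an equivalent computation (not the CAS screen above, and the variable names are mine): shift the limaçon’s Cartesian equation half a unit left, convert back to polar, and the cubic appears.  The last line also previews the triple-angle identity S proves below.

```python
import sympy as sp

x, y, r, theta, u = sp.symbols('x y r theta u')

# Cartesian form of r = 1/2 + cos(theta):  (2(x^2 + y^2 - x))^2 = x^2 + y^2
limacon = (2 * (x**2 + y**2 - x))**2 - (x**2 + y**2)

# Translate 1/2 unit left (replace x with x + 1/2), then convert back to polar
shifted = limacon.subs(x, x + sp.Rational(1, 2))
polar = shifted.subs({x: r * sp.cos(theta), y: r * sp.sin(theta)})

# The translated equation is r*(4r^3 - 3r - cos(theta)) = 0
print(sp.simplify(polar - r * (4 * r**3 - 3 * r - sp.cos(theta))))         # 0

# The identity S proved, with u standing for theta/3: cos(3u) = 4cos^3(u) - 3cos(u)
print(sp.expand_trig(sp.cos(3 * u)) - (4 * sp.cos(u)**3 - 3 * sp.cos(u)))  # 0
```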

In an attachment (included below), S proved an identity she had never seen, \displaystyle cos(\theta) = 4cos^3 \left( \frac{\theta }{3} \right) - 3cos \left( \frac{\theta }{3} \right) , which she now applied to her CAS result.

\displaystyle 4r^3 - 3r = 4cos^3 \left( \frac{\theta }{3} \right) - 3cos \left( \frac{\theta }{3} \right)

So, \displaystyle r = cos \left( \frac{\theta }{3} \right)

Therefore, \displaystyle r = cos \left( \frac{\theta }{3} \right) is the image of \displaystyle r = \frac{1}{2} + cos(\theta ) after translating \displaystyle \frac{1}{2} unit left.  QED

Simple. Beautiful.

Obviously, this could have been accomplished using lots of by-hand manipulations.  But, in my opinion, that would have been a horrible, potentially error-prone waste of time for a problem that wasn’t concerned at all about whether one knew some Algebra I arithmetic skills.  Great job, S!

S’s proof of her identity, \displaystyle cos(\theta) = 4cos^3 \left( \frac{\theta }{3} \right) - 3cos \left( \frac{\theta }{3} \right) :

Polar4

Trig Identities with a Purpose

Yesterday, I was thinking about some changes I could introduce to a unit on polar functions.  Realizing that almost all of the polar functions traditionally explored in precalculus courses have graphs that are complete over the interval 0\le\theta\le 2\pi, I wondered if there were any interesting curves that took more than 2\pi units to graph.

My first attempt was r=cos\left(\frac{\theta}{2}\right) which produced something like a merged double limaçon with loops over its 4\pi period.

Trying for more of the same, I graphed r=cos\left(\frac{\theta}{3}\right) guessing (without really thinking about it) that I’d get more loops.  I didn’t get what I expected at all.

Wow!  That looks exactly like the image of a standard limaçon with a loop under a translation left of 0.5 units.

Further exploration confirms that r=cos\left(\frac{\theta}{3}\right) completes its graph in 3\pi units while r=\frac{1}{2}+cos\left(\theta\right) requires 2\pi units.
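If you want to see the apparent coincidence without Desmos or GeoGebra, a quick matplotlib overlay (a rough sketch that plots both polar curves parametrically and slides the limaçon half a unit left) makes the match hard to miss:

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 3 * np.pi, 1500)   # r = cos(t/3) needs 3*pi to complete
r1 = np.cos(t / 3)
plt.plot(r1 * np.cos(t), r1 * np.sin(t), label="r = cos(t/3)")

s = np.linspace(0, 2 * np.pi, 1000)   # the limacon completes in 2*pi
r2 = 0.5 + np.cos(s)
plt.plot(r2 * np.cos(s) - 0.5, r2 * np.sin(s), "--", label="r = 1/2 + cos(t), slid 1/2 left")

plt.gca().set_aspect("equal")
plt.legend()
plt.show()
```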

As you know, in mathematics, it is never enough to claim things look the same; proof is required.  The acute challenge in this case is that two polar curves (based on angle rotations) appear to be separated by a horizontal translation (a rectangular displacement).  I’m not aware of any clean, general way to apply a rectangular transformation to a polar graph or a rotational transformation to a Cartesian graph.  But what I can do is rewrite the polar equations into a parametric form and translate from there.

For 0\le\theta\le 3\pi , r=cos\left(\frac{\theta}{3}\right) becomes \begin{array}{lcl} x_1 &= &cos\left(\frac{\theta}{3}\right)\cdot cos\left (\theta\right) \\ y_1 &= &cos\left(\frac{\theta}{3}\right)\cdot sin\left (\theta\right) \end{array} .  Sliding this \frac{1}{2} a unit to the right makes the parametric equations \begin{array}{lcl} x_2 &= &\frac{1}{2}+cos\left(\frac{\theta}{3}\right)\cdot cos\left (\theta\right) \\ y_2 &= &cos\left(\frac{\theta}{3}\right)\cdot sin\left (\theta\right) \end{array} .

This should align with the standard limaçon, r=\frac{1}{2}+cos\left(\theta\right) , whose parametric equations for 0\le\theta\le 2\pi  are \begin{array}{lcl} x_3 &= &\left(\frac{1}{2}+cos\left(\theta\right)\right)\cdot cos\left (\theta\right) \\ y_3 &= &\left(\frac{1}{2}+cos\left(\theta\right)\right)\cdot sin\left (\theta\right) \end{array} .

The only problem that remains for comparing (x_2,y_2) and (x_3,y_3) is that their domains are different, but a parameter shift can handle that.

If 0\le\beta\le 3\pi , then (x_2,y_2) becomes \begin{array}{lcl} x_4 &= &\frac{1}{2}+cos\left(\frac{\beta}{3}\right)\cdot cos\left (\beta\right) \\ y_4 &= &cos\left(\frac{\beta}{3}\right)\cdot sin\left (\beta\right) \end{array} and (x_3,y_3) becomes \begin{array}{lcl} x_5 &= &\left(\frac{1}{2}+cos\left(\frac{2\beta}{3}\right)\right)\cdot cos\left (\frac{2\beta}{3}\right) \\ y_5 &= &\left(\frac{1}{2}+cos\left(\frac{2\beta}{3}\right)\right)\cdot sin\left (\frac{2\beta}{3}\right) \end{array} .

Now that the translation has been applied and both functions operate over the same domain, the two functions must be identical iff x_4 = x_5 and y_4 = y_5 .  It’s time to prove those trig identities!
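(These days I would also let a CAS confirm the targets before grinding out the algebra by hand.  This rough SymPy sketch, with u standing in for \frac{\beta}{3}, reports both differences as 0.)

```python
import sympy as sp

u = sp.symbols('u')   # u = beta/3, so beta = 3u and 2*beta/3 = 2u

x4 = sp.Rational(1, 2) + sp.cos(u) * sp.cos(3 * u)
y4 = sp.cos(u) * sp.sin(3 * u)
x5 = (sp.Rational(1, 2) + sp.cos(2 * u)) * sp.cos(2 * u)
y5 = (sp.Rational(1, 2) + sp.cos(2 * u)) * sp.sin(2 * u)

print(sp.simplify(sp.expand_trig(x4 - x5)))   # 0
print(sp.simplify(sp.expand_trig(y4 - y5)))   # 0
```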

Before blindly manipulating the equations, I take some time to develop some strategy.  I notice that the (x_5, y_5) equations contain only one type of angle–double angles of the form 2\cdot\frac{\beta}{3} –while the (x_4, y_4) equations contain angles of two different types, \beta and \frac{\beta}{3} .  It is generally easier to work with a single type of angle, so my strategy is going to be to turn everything into trig functions of double angles of the form 2\cdot\frac{\beta}{3} .

\displaystyle \begin{array}{lcl} x_4 &= &\frac{1}{2}+cos\left(\frac{\beta}{3}\right)\cdot cos\left (\beta\right) \\  &= &\frac{1}{2}+cos\left(\frac{\beta}{3}\right)\cdot cos\left (\frac{\beta}{3}+\frac{2\beta}{3} \right) \\  &= &\frac{1}{2}+cos\left(\frac{\beta}{3}\right)\cdot\left( cos\left(\frac{\beta}{3}\right) cos\left(\frac{2\beta}{3}\right)-sin\left(\frac{\beta}{3}\right) sin\left(\frac{2\beta}{3}\right)\right) \\  &= &\frac{1}{2}+\left[cos^2\left(\frac{\beta}{3}\right)\right] cos\left(\frac{2\beta}{3}\right)-\frac{1}{2}\cdot 2cos\left(\frac{\beta}{3}\right) sin\left(\frac{\beta}{3}\right) sin\left(\frac{2\beta}{3}\right) \\  &= &\frac{1}{2}+\left[\frac{1+cos\left(2\frac{\beta}{3}\right)}{2}\right] cos\left(\frac{2\beta}{3}\right)-\frac{1}{2}\cdot sin^2\left(\frac{2\beta}{3}\right) \\  &= &\frac{1}{2}+\frac{1}{2}cos\left(\frac{2\beta}{3}\right)+\frac{1}{2} cos^2\left(\frac{2\beta}{3}\right)-\frac{1}{2} \left( 1-cos^2\left(\frac{2\beta}{3}\right)\right) \\  &= & \frac{1}{2}cos\left(\frac{2\beta}{3}\right) + cos^2\left(\frac{2\beta}{3}\right) \\  &= & \left(\frac{1}{2}+cos\left(\frac{2\beta}{3}\right)\right)\cdot cos\left(\frac{2\beta}{3}\right) = x_5  \end{array}

That proves the x expressions are equivalent.  Now for the ys…

\displaystyle \begin{array}{lcl} y_4 &= & cos\left(\frac{\beta}{3}\right)\cdot sin\left(\beta\right) \\  &= & cos\left(\frac{\beta}{3}\right)\cdot sin\left(\frac{\beta}{3}+\frac{2\beta}{3} \right) \\  &= & cos\left(\frac{\beta}{3}\right)\cdot\left( sin\left(\frac{\beta}{3}\right) cos\left(\frac{2\beta}{3}\right)+cos\left(\frac{\beta}{3}\right) sin\left(\frac{2\beta}{3}\right)\right) \\  &= & \frac{1}{2}\cdot 2cos\left(\frac{\beta}{3}\right) sin\left(\frac{\beta}{3}\right) cos\left(\frac{2\beta}{3}\right)+\left[cos^2 \left(\frac{\beta}{3}\right)\right] sin\left(\frac{2\beta}{3}\right) \\  &= & \frac{1}{2}sin\left(2\frac{\beta}{3}\right) cos\left(\frac{2\beta}{3}\right)+\left[\frac{1+cos \left(2\frac{\beta}{3}\right)}{2}\right] sin\left(\frac{2\beta}{3}\right) \\  &= & \left(\frac{1}{2}+cos\left(\frac{2\beta}{3}\right)\right)\cdot sin\left (\frac{2\beta}{3}\right) = y_5  \end{array}

Therefore the graph of r=cos\left(\frac{\theta}{3}\right) is exactly the graph of r=\frac{1}{2}+cos\left(\theta\right) slid \frac{1}{2} unit left.  Nice.

If there are any students reading this, know that it took a few iterations to come up with the versions of the identities proved above.  Remember that published mathematics is almost always cleaner and more concise than the effort it took to create it.  One of the early steps I took used the substitution \gamma =\frac{\beta}{3} to clean up the appearance of the algebra.  In the final proof, I decided that the 2 extra lines of proof to substitute in and then back out were not needed.  I also meandered down a couple unnecessarily long paths that I was able to trim in the proof I presented above.

Despite these changes, my proof still feels cumbersome and inelegant to me.  From one perspective–Who cares?  I proved what I set out to prove.  On the other hand, I’d love to know if someone has a more elegant way to establish this connection.  There is always room to learn more.  Commentary welcome.

In the end, it’s nice to know these two polar curves are identical.  It pays to keep one’s eyes eternally open for unexpected connections!