# Category Archives: Technology

## Exponentials Don’t Stretch

OK, this post’s title is only half true, but transforming exponentials can lead to counter-intuitive results.  This post shares a cool transformations activity using dynamic graphing software–a perfect set-up for a mind-bending lesson for algebra or precalculus students in the coming year.  I use Desmos in this post, but this can be reproduced on any graphing software with sliders.

THE SCENARIO

You can vertically stretch any exponential function as much as you want, and the shape of the curve will never change!

But that doesn’t make any sense.  Doesn’t stretching a curve by definition change its curvature?

The answer is no.  Not when exponentials are vertically stretched.  It is an inevitable consequence of the product-of-powers property: multiplying powers of a common base adds the exponents.

$b^a * b^c = b^{a+c}$

I set up a Desmos page to explore this property dynamically (shown below).  The base of the exponential doesn’t matter; I pre-set the base of the parent function (line 1) to 2 (in line 2), but feel free to change it.

From its form, the line 3 orange graph is a vertical stretch of the parent function; you can vary the stretch factor with the line 4 slider.  Likewise, the line 5 black graph is a horizontal translation of the parent, and the translation is controlled by the line 6 slider.  That’s all you need!

Let’s say I wanted to quadruple the height of my function, so I move the a slider to 4.  Now play with the h slider in line 6 to see if you can achieve the same results with a horizontal translation.  By the time you change h to -2, the horizontal translation aligns perfectly with the vertical stretch.  That’s a pretty strange result if you think about it.

Of course it has to be true because $y = 2^{x-(-2)} = 2^x*2^2 = 4*2^x$.  Try any positive stretch you like, and you will always be able to find some horizontal translation that gives you the exact same result.

Likewise, you can horizontally slide any exponential function (growth or decay) as much as you like, and there is a single vertical stretch that will produce the same results.
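This equivalence is easy to spot-check numerically.  A minimal Python sketch, playing the role of the Desmos sliders above (the stretch factor a and the shift h correspond to the two sliders):

```python
import math

# Claim: vertically stretching b^x by a > 0 equals a horizontal translation.
# With b = 2 and stretch a = 4, the equivalent shift is h = -log_b(a) = -2,
# matching the h = -2 discovered with the slider.
b, a = 2, 4
h = -math.log(a, b)   # horizontal translation equivalent to the stretch

for x in [-3, -1, 0, 0.5, 2]:
    stretched  = a * b**x       # vertical stretch by a
    translated = b**(x - h)     # horizontal translation by h
    assert math.isclose(stretched, translated)

print(h)  # -2.0
```

The assert passing at every sampled x is the numeric analogue of the two Desmos graphs coinciding.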

The implications of this are pretty deep.  Because the result of any horizontal translation of any function is a graph congruent to the initial function, AND because every vertical stretch is equivalent to a horizontal translation, then vertically stretching any exponential function produces a graph congruent to the unstretched parent curve.  That is, any vertical stretch of any exponential will never change its curvature!  Graphs make it easier to see and explore this, but it takes algebra to (hopefully) understand this cool exponential property.

NOT AN EXTENSION

My students inevitably ask if the same is true for horizontal stretches and vertical slides of exponentials.  I encourage them to play with the algebra or create another graph to investigate.  Eventually, they discover that horizontal stretches do bend exponentials (they actually change the base, i.e., the growth rate), making it impossible for any translation of the parent to be congruent with the result.

ABSOLUTELY AN EXTENSION

But if a property is true for a function, then the inverse of the property generally should be true for the inverse of the function.  In this case, that means the transformation property that did not work for exponentials does work for logarithms!  That is,

Any horizontal stretch of any logarithmic function is congruent to some vertical translation of the original function.  But for logarithms, vertical stretches do morph the curve into a different shape.  Here’s a Desmos page demonstrating the log property.

The sum property of logarithms proves the existence of this equally strange property:

$log(A) + log(x) = log(A*x)$
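The sum property can be spot-checked the same way.  A minimal sketch using the common log (A stands in for the horizontal scale factor):

```python
import math

# The sum property in action: replacing x with A*x (a horizontal compression
# by factor A) just slides the log graph up by the constant log(A).
A = 100
for x in [0.5, 1, 7, 1234]:
    compressed = math.log10(A * x)              # horizontally scaled input
    slid       = math.log10(x) + math.log10(A)  # vertical slide by log(A) = 2
    assert math.isclose(compressed, slid)
```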

CONCLUSION

Hopefully the unexpected transformational congruences will spark some nice discussions, while the graphical/algebraic equivalences will reinforce the importance of understanding mathematics more than one way.

Enjoy the strange transformational world of exponential and log functions!

## Confidence Intervals via graphs and CAS

Confidence intervals (CIs) are a challenging topic for many students, a task made more challenging, in my opinion, because many (most?) statistics texts approach CIs via z-scores.  While repeatedly calculating CI endpoints from standard deviations explains the underlying mathematical structure, it relies on an (admittedly simple) algebraic technique that predates classroom technology currently available for students on the AP Statistics exam.

Many (most?) statistics packages now include automatic CI commands.  Unfortunately for students just learning what a CI means, automatic commands can become computational “black boxes.”  Both CAS and graphing techniques offer a strong middle ground–enough foundation to reinforce what CIs mean with enough automation to avoid unnecessary symbol manipulation time.

In most cases, this is accomplished by understanding a normal cumulative distribution function (cdf) as a function, not just as an electronic substitute for normal probability tables of values.  In this post, I share three approaches to determining CIs using a TI-Nspire CAS, each with two variations.

SAMPLE PROBLEM:

In 2010, the mean ACT mathematics score for all tests was 21.0 with standard deviation 5.3.  Determine a 90% confidence interval for the math ACT score of an individual chosen at random from all 2010 ACT test takers.

METHOD 1a — THE STANDARD APPROACH:

A 90% CI excludes the extreme 5% on each end of the normal distribution.  Using an inverse normal command gives the z-scores at the corresponding 5% and 95% locations on the normal cdf.

Of course, utilizing symmetry would have required only one command.  To find the actual boundary points of the CI, standardize the endpoints, x, and set the standardized expression equal to each of the two z-scores.

$\displaystyle \frac{x-21.0}{5.3} = \pm 1.64485$

Solving these rational equations for x gives $x=12.28$ and $x=29.72$, or $CI = \left[ 12.28,29.72 \right]$ .
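For readers without a handheld nearby, the same endpoints fall out of Python’s standard library; a minimal sketch, with `statistics.NormalDist` playing the role of the inverse normal command:

```python
from statistics import NormalDist

# Treat the inverse normal command as a function of the non-standard
# distribution directly, skipping the by-hand standardizing step.
act = NormalDist(mu=21.0, sigma=5.3)

# Feeding both tail probabilities as a list makes the output read like a CI.
ci = [round(act.inv_cdf(p), 2) for p in (0.05, 0.95)]
print(ci)  # [12.28, 29.72]
```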

Most statistics software lets users avoid this computation with optional parameters for the mean and standard deviation of non-standard normal curves.  One of my students last year used this in the next variation.

METHOD 1b — INTRODUCING LISTS:

After using lists as shortcuts on our TI-Nspires last year for evaluating functions at several points simultaneously, one of my students creatively applied them to the inverse normal command, entering the separate 0.05 and 0.95 cdf probabilities as a single list.  I particularly like how the output for this approach looks exactly like a CI.

METHOD 2a — CAS:

The endpoints of a CI are just endpoints of an interval on a normal cdf, so why not avoid the algebra and additional inverse normal command and determine the endpoints via CAS commands?  My students know the solve command from previous math classes, so after learning the normal cdf command, there are very few situations for them to even use the inverse.

This approach keeps my students connected to the normal cdf and solving for the bounds quickly gives the previous CI bounds.

METHOD 2b (Alas, not yet) — CAS and LISTS:

Currently, the numerical techniques the TI-Nspire family uses to solve equations with statistics commands don’t work well with lists in all situations.  Curiously, the Nspire currently can’t handle the solve+lists equivalent of the inverse normal+lists approach in METHOD 1b.

But, I’ve also learned that problems not easily solved in an Nspire CAS calculator window typically crack pretty easily when translated to their graphical equivalents.

METHOD 3a — GRAPHING:

This approach should work for any CAS or non-CAS graphing calculator or software with statistics commands.

Remember the “f” in cdf.  A cumulative distribution function is a function, and graphing calculators/software treat it as such.  Replacing the normCdf upper bound with an x for standard graphing syntax lets one graph the normal cdf (below).

Also remember that any algebraic equation can be solved graphically by independently graphing each side of the equation and treating the resulting pair of equations as a system of equations.  In this case, graphing $y=0.05$ and $y=0.95$ and finding the points of intersection gives the values of the CI.
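The same intersection idea can be mimicked numerically: treat the cdf as a function and hunt for where it crosses y = 0.05 and y = 0.95.  A bisection sketch in Python (the wide bracketing bounds are arbitrary choices of mine):

```python
from statistics import NormalDist

# Graphical solving in code: the intersection of y = cdf(x) and y = p is a
# root of cdf(x) - p, found here by repeated halving of a bracketing interval.
def solve_cdf(p, lo=-50.0, hi=50.0, tol=1e-10):
    f = lambda x: NormalDist(21.0, 5.3).cdf(x) - p
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:   # root lies in the left half
            hi = mid
        else:                     # root lies in the right half
            lo = mid
    return (lo + hi) / 2

print(round(solve_cdf(0.05), 2), round(solve_cdf(0.95), 2))  # 12.28 29.72
```

Because the cdf is increasing, any bracket containing the root works; this is exactly what the grapher’s intersection tool does behind the scenes.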

METHOD 3b — GRAPHING and LISTS:

SIDENOTE:  While lists didn’t work with the CAS in the case of METHOD 2b, the next screen shows the syntax to graph both ends of the CI using lists with a single endpoint equation.

The lists obviously weren’t necessary here, but the ability to use lists is a very convenient feature on the TI-Nspire that I’ve leveraged countless times to represent families of functions.  In my opinion, using them in METHOD 3b again leverages that same idea, that the endpoints you seek are different aspects of the same family–the CI.

CONCLUSION:

There are many ways for students in their first statistics courses to use what they already know to determine the endpoints of a confidence interval.  And keeping students’ attention focused on new ways to use old information solidifies both old and new content.  Eliminating unnecessary computations that aren’t the point of introductory statistics anyway is an added bonus.

Happy learning everyone…

## FREE TI-Nspire iPad App Workshop

On Saturday, 31 May 2014, Texas Instruments (@TICalculators) and @HawkenSchool are hosting a FREE TI-Nspire iPad Workshop at Hawken’s Gries Center in Cleveland’s University Circle.  The workshop is designed for educators who are interested in or are just beginning to use the TI-Nspire App for iPad® (either CAS or numeric). It will cover the basics of getting started and teaching with the App.  Tom Reardon will be leading the training!

Sign up for the workshop here.  A pdf flyer for the workshop is here:   iPad App Training.

## Dynamic Linear Programming

My department is exploring the pros and cons of different technologies for use in teaching our classes. Two teachers shared ways to use Desmos and GeoGebra in lessons on inequalities one day; we explored the same situation using the TI-Nspire in the following week’s meeting.  For this post, I’m assuming you are familiar with solving linear programming problems.  Some very nice technology-assisted exploration ideas are developed in the latter half of this post.

My goal is to show some cool ways we discovered to use technology to evaluate these types of problems and enhance student exploration.  Our insights follow the section considering two different approaches to graphing the feasible region.  For context, we used a dirt-biker linear programming problem from NCTM’s Illuminations Web Pages.

Assuming x = the number of Riders built and y = the number of Rovers built, inequalities for this problem are

We also learn on page 7 of the Illuminations activity that Apu makes a $15 profit on each Rider and a $30 profit on each Rover.  That means an Optimization Equation for the problem is $Profit=15x+30y$.

GRAPHING THE FEASIBLE REGION:

Graphing all of the inequalities simultaneously determines the feasible region for the problem.  This can be done easily with all three technologies, but the Nspire requires solving the inequalities for y first.  Therefore, the remainder of this post compares the Desmos and GeoGebra solutions.  Because the Desmos solutions are easily accessible as Web pages and not separate files, further images will be from Desmos until the point where GeoGebra operates differently.

Both Desmos and GeoGebra can graph these inequalities from natural input–entering math sentences as you would write them from the problem information, without solving for a specific variable.  As with many more complicated linear programming problems, graphing all the constraints at once sometimes makes a visually complicated feasible region graph.

So, we decided to reverse all of our inequalities, effectively shading the non-feasible region instead.  Any points that emerged unshaded were possible solutions to the Dirt Bike problem (image below, file here).  All three programs shift properly between solid and dashed lines to show included and excluded boundaries, respectively.

Traditional Approach – I (as well as almost all teachers, I suspect) have traditionally done some hand-waving at this point to convince (or tell) students that while any ordered pair in the unshaded region or on its boundary (all are dashed) is a potential solution, any optimal solution occurs on the boundary of the feasible region.  Hopefully teachers ask students to plug ordered pairs from the feasible region into the Optimization Equation to show that the profit does vary depending on what is built (duh), and we hope they eventually discover (or memorize) that the maximum or minimum profit occurs on the edges–usually at a corner for the rigged setups of most linear programming problems in textbooks.  Thinking about this led to several lovely technology enhancements.

INSIGHT 1:  Vary a point.

During our first department meeting, I was suddenly dissatisfied with how I’d always introduced this idea to my classes.  That unease and our play with the Desmos’ simplicity of adding sliders led me to try graphing a random ordered pair.  I typed (a,b) on an input line, and Desmos asked if I wanted sliders for both variables.  Sure, I thought (image below, file here).

— See my ASIDE note below for a philosophical point on the creation of (a,b).
— GeoGebra and the Nspire require one additional step to create/insert sliders, but GeoGebra’s naming conventions led to a smoother presentation–see below.

BIG ADVANTAGE:  While the Illuminations problem we were using had convenient vertices, we realized that students could now drag (a,b) anywhere on the graph (especially along the boundaries and to vertices of the feasible region) to determine coordinates.  Establishing exact coordinates of those points still required plugging into equations and possibly solving systems of equations (a possible entry for CAS!).  However discovered, critical coordinates were suddenly much easier to identify in any linear programming question.

HUGE ADVANTAGE:  Now that the point was variably defined, the Optimization Equation could be, too!  Rewriting and entering the Optimization Equation as an expression in terms of a and b, I took advantage of Desmos being a calculator, not just a grapher.  Notice the profit value on the left of the image.

With this, users can drag (a,b) and see not only the coordinates of the point, but also the value of the profit at the point’s current location!  Check out the live version here to see how easily Desmos updates this value as you drag the point.

From this dynamic setup, I believe students now can learn several powerful ideas through experimentation that traditionally would have been told/memorized.

STUDENT DISCOVERIES:

1. Drag (a,b) anywhere in the feasible region.  Not surprisingly, the profit’s value varies with (a,b)‘s location.
2. The profit appears to change at a constant rate along each edge.  Confirm this by dragging (a,b) steadily along any edge of the feasible region.
3. While there are many values the profit could assume in the feasible region, some quick experimentation suggests that the largest and smallest profit values occur at the vertices of the feasible region.
4. DEEPER:  While point 3 is true, many teachers and textbooks mistakenly proclaim that solutions occur only at vertices.  In fact, it is technically possible for a problem to have an infinite number of optimal solutions.  This realization is discussed further in the CONCLUSION.

ASIDE:  I was initially surprised that the variable point on the Desmos graph was directly draggable.  From a purist’s perspective, this troubled me because the location of the point depends on the values of the sliders.  By that logic, I shouldn’t be able to move the point and thereby change the values of its defining sliders.  Still, the simplicity of what I was able to do with the problem as a result quickly led me to forgive the two-way dependency between Desmos’ sliders and the objects they define.

GEOGEBRA’S VERSION:

In some ways, this result was even easier to create on GeoGebra.  After graphing the feasible region, I selected the Point tool and clicked once on the graph.  Voila!  The variable point was fully defined.  This avoids the purist issue I raised in the ASIDE above.  As a bonus, the point was also named.

Unlike Desmos, GeoGebra permits multi-character function names.  Defining $Profit(x,y)=15x+30y$ and entering $Profit(A)$ allowed me to see the profit value change as I dragged point A as I did in the Desmos solution. The $Profit(A)$ value was dynamically computed in GeoGebra as a number value in its Algebra screen.  A live version of this construction is on GeoGebraTube here.

At first, I wasn’t sure if the last command–entering a single point into a multivariable function–would work, but since A was a point with two coordinates, GeoGebra nicely handled the transition.  Dragging A around the feasible region updated the current profit value just as easily as Desmos did.

INSIGHT 2:  Slide a line.

OK, this last point is really an adaptation of a technique I learned from some of my mentors when I started teaching years ago, but how I will use it in the future is much cleaner and more expedient.  I thought line slides were a commonly known technique for solving linear programming problems, but conversations with some of my colleagues have convinced me that not everyone knows the approach.

Recall that each point in the feasible region has its own profit value.  Instead of sliding a point to determine a profit, why not pick a particular profit and determine all points with that profit?  As an example, if you wanted to see all points that had a profit of $100, the Optimization Equation becomes $Profit=100=15x+30y$.  A graph of this line (in solid purple below) passes through the feasible region.  All points on this line within the feasible region are the values where Apu could build dirt bikes and get a profit of $100.  (Of course, only integer ordered pairs are realistic.)
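In code, the line-slide idea is just solving the Optimization Equation for y at a fixed profit; a small sketch (the function name is my invention):

```python
import math

# From Profit = 15x + 30y, the fixed-profit line is y = (P - 15x)/30.
def profit_line(P, x):
    return (P - 15 * x) / 30

# Every point on the $100 line earns exactly $100, wherever you sample it.
for x in range(0, 7):
    y = profit_line(100, x)
    assert math.isclose(15 * x + 30 * y, 100)
```

Varying P in `profit_line` is the code equivalent of dragging the profit slider.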

You could replace the 100 in the equation with different values and repeat the investigation.  But if you’re thinking already about the dynamic power of the software, I hope you will have realized that you could define profit as a slider to scan through lots of different solutions with ease after you reset the slider’s bounds.  One instance is shown below; a live Desmos version is here.

GeoGebra and the Nspire set up the same way, except you must define the slider before you define the line.  Both allow you to define the slider as “profit” instead of just “p”.

CONCLUSIONS:

From here, hopefully it is easy to extend Student Discovery 3 from above.  By changing the P slider, you see a series of parallel lines (prove this!).  As the value of P grows, the line moves up in this Illuminations problem.  Through a little experimentation, it should be obvious that as P rises, the last time the profit line touches the feasible region will be at a vertex.  Experiment with the P slider here to convince yourself that the maximum profit for this problem is $165 at the point $(x,y)=(3,4)$.  Apu should make 3 Riders and 4 Rovers to maximize profit.  Similarly (and obviously), Apu’s minimum profit is $0 at $(x,y)=(0,0)$ by making no dirt bikes.
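The vertex claims are easy to verify by plugging into the Optimization Equation.  A tiny check (the full vertex list lives in the Illuminations constraints, so only the two corners named in this problem are tested):

```python
# Evaluate the Optimization Equation at the candidate corner points.
def profit(x, y):
    return 15 * x + 30 * y

assert profit(3, 4) == 165   # maximum: 3 Riders, 4 Rovers
assert profit(0, 0) == 0     # minimum: build nothing
```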

While not applicable in this particular problem, I hope you can see that if an edge of the feasible region for some linear programming problem was parallel to the line defined by the corresponding Optimization Equation, then all points along that edge potentially would be optimal solutions with the same Optimization Equation output.  This is the point I was trying to make in Student Discovery 4.

In the end, Desmos, GeoGebra, and the TI-Nspire all have the ability to create dynamic learning environments in which students can explore linear programming situations and their optimization solutions, albeit with slightly different syntax.  I believe any of these approaches can make learning linear programming much more experimental and meaningful.

## Two Squares, Two Triangles, and some Circles

Here’s another fun twist on another nice problem from the Five Triangles ‘blog.  A month ago, this was posted.

What I find cool about so many of the Five Triangles problems is that most permit multiple solutions.  I also like that several Five Triangles problems initially appear to not have enough information.  This one is no different until you consider the implications of the squares.

I’ve identified three distinct ways to approach this problem.  I’d love to hear if any of you see any others.  Here are my solutions in the order I saw them.  The third is the shortest, but all offer unique insights.

Method 1: Law of Cosines

This solution goes far beyond the intended middle school focus of the problem, but it is what I saw first.  Sometimes, knowing more gives you additional insights.

Because DEF is a line and EF is a diagonal of a square, I know $m\angle CEF=45^{\circ}$, and therefore $m\angle CED=135^{\circ}$.  $\Delta CEF$ is a 45-45-90 triangle with hypotenuse 6, so its leg CE has measure $\frac{6}{\sqrt{2}}=3\sqrt{2}$.  Knowing two sides and the included angle in $\Delta DEC$ means I could apply the Law of Cosines.

$DC^2 = 4^2 + (3\sqrt{2})^2 - 2\cdot 4 \cdot (3\sqrt{2}) \cdot \cos(135^{\circ})=58$

Because I’m looking for the area of ABCD,  and that is equivalent to $DC^2$, I don’t need to solve for the length of DC to know the area I seek is 58.
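A quick numeric double-check of the Law of Cosines computation (DE = 4, EC = 3√2, included angle 135°):

```python
import math

# DC^2 = DE^2 + EC^2 - 2*DE*EC*cos(135 degrees)
DE, EC = 4, 3 * math.sqrt(2)
DC_sq = DE**2 + EC**2 - 2 * DE * EC * math.cos(math.radians(135))
print(round(DC_sq, 10))  # 58.0
```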

Method 2: Use Technology

I doubt many would want to solve using this approach, but if you don’t see (or know) trigonometry, you could build a solution from scratch if you are fluent with dynamic geometry software (GeoGebra, TI-Nspire, GSP).  My comfort with this made finding the solution via construction pretty straight-forward.

1. Construct segment EF with fixed length 6.
2. Build square CEGF with diagonal EF.  (This can be done several ways.  I was in a transformations mood, so I rotated EF $90^{\circ}$ to get the other endpoints.)
3. Draw line EF and then a circle of radius 4 centered at point E.
4. Mark point D as the intersection of the circle and line EF outside CEGF.
5. Draw a segment through points D and C.  (The square of the length of CD is the answer, but I decided to go one more step.)
6. Construct square ABCD with sides congruent to CD.  (Again, there are several ways to do this.  I left my construction marks visible in my construction below.)
7. Compute the area of ABCD.

Here is my final GeoGebra construction.

Method 3: The Pythagorean Theorem

Sometimes, changing a problem can make it much easier to solve.

As soon as I saw the problem, I forwarded it to some colleagues at my school.  Tatiana wrote back with a quick solution.  In the original image, draw diagonal CG of square CEGF, and let H be the intersection of the two diagonals. Because the diagonals of a square perpendicularly bisect each other, this creates right $\Delta DHC$ with legs 7 and 3.  That means the square of the hypotenuse of $\Delta DHC$ (and therefore the area of the square) can be found via the Pythagorean Theorem.

$DC^2 = 7^2+3^2 = 58$

Method 4: Coordinate Geometry

OK, I said three solutions, and perhaps this approach is completely redundant given the Pythagorean Theorem in the last approach, but you could also find a solution using coordinate geometry.

Because the diagonals of a square are perpendicular, you could construct CEGF with its center at the origin.  I placed point C at (0,3) and point E at (3,0).  That means point D is at (7,0), making the solution to the problem the square of the length of the segment from (0,3) to (7,0).  Obviously, that can be done with the Pythagorean Theorem, but in the image below, I computed number i in the upper left corner of this GeoGebra window as the square of the length of that segment.

Fun.

## Finding area

I follow the Five Triangles ‘blog for cool math problems.  A recent one proved particularly nice.

At first I wasn’t sure this situation was invariant.  I didn’t see how fixing three triangle areas guaranteed a fixed quadrilateral area.  Not seeing an immediate general solution approach, I reasoned that if the problem had a single answer, it had to hold for every overall configuration.  If it worked in general, then it must also work for any particular case I chose, so I made the cevians perpendicular.  That made each of the given area triangles right.  I modeled that by constructing the overall triangle with the cevian intersection at the origin and the legs of the given area triangles along the coordinate axes.

There are many ways to do this, but I reasoned that if there was a single answer, then any one of them would work.  A right triangle with legs of length 8 and 5 would have area 20.  Constructing that triangle in GeoGebra fixed the lengths of the legs of the other two triangles and the hypotenuses of the area 8 & 15 triangles intersected at a Quadrant II point.  Here’s my construction.

I overlaid a polygon to create the quadrilateral and measured its area directly.  For fun, I also wrote algebraic equations for lines CB and DA, found the coordinates of point F by solving the 2×2 linear system, used that to derive the area of $\Delta BDF$, and determined the area of the quadrilateral from that.

While I realized that this approach was just a single case of the given problem, it absolutely convinced me that the solution was unique.  Once the area 20 triangle was defined (whether or not the triangle was right), a side and the area of each of the other two given triangles were known.  That meant the heights of those triangles were determined, and thereby the location of the quadrilateral’s fourth vertex.  So, I knew without a doubt that the unknown area was $27 cm^2$, but I didn’t know a general solution.
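The perpendicular-cevian case translates directly into coordinates.  The sketch below is my reconstruction of that case (mirrored into Quadrant I rather than Quadrant II, and with one consistent assignment of the 8 and 15 regions; the shoelace helper is mine):

```python
# Shoelace formula for the area of a simple polygon given in order.
def shoelace(pts):
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

# Cevian intersection F at the origin, legs of the given triangles on the axes.
A, B, F = (-8, 0), (0, -5), (0, 0)   # right triangle ABF has legs 8, 5: area 20
D, E    = (6, 0), (0, 2)             # chosen so BFD has area 15 and AFE has area 8
C       = (12, 5)                    # intersection of lines AE and BD

assert shoelace([A, B, F]) == 20
assert shoelace([B, F, D]) == 15
assert shoelace([A, F, E]) == 8
print(shoelace([E, C, D, F]))  # 27.0
```

Swapping which side triangle gets area 8 and which gets 15 changes the coordinates but, reassuringly, not the quadrilateral’s area.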

Chronology of the General Solution

While I worked more on the problem, I also pitched it to my Twitter network and asked a colleague at my school, Tatiana Yudovina, if she was interested in the problem.  Next is Tatiana’s initial solution, followed by my generic GeoGebra construction, and a much shorter solution Tatiana created.  My conclusion takes the problem to a more generic state and raises some potential extensions.

Tatiana’s First Solution:

Leveraging the fact that triangles with the same base have equivalent height and area ratios, she created a system of equations that solved to eventually determine the quadrilateral’s area.

My Generic GeoGebra Solution:

While Tatiana was working on her algebraic answer, I was creating a dynamic version in GeoGebra.  I built the area 20 triangle by first drawing a segment AB and measuring its length, a.  That meant the height of this triangle, h, was given by $\frac{1}{2} a \cdot h =20\longrightarrow h=\frac{40}{a}$. Then I constructed a perpendicular line to AB and used the “Segment with Fixed Length” tool, defining the length with the generic h above, to create segment AC.  This worked because GeoGebra defined the length of AB as a variable as shown below.

I used the “Compass” tool to create a circle with radius AC through the perpendicular line created earlier. Point D is the intersection of the circle and the normal line.  I then constructed a perpendicular to AD through D and placed a random point E on this new line.  Point E was the requisite height above AB to guarantee that $\Delta ABE$ always had area 20 which I confirmed by drawing the triangle and computing its area.

I hid AC, the circle, and both normals.  Segment AB was a completely independent object, and point E was free to move along the second “height” normal.  I measured AE and repeated the previous construction to create the area 15 triangle. Because BE was part of a cevian, I drew line BE to determine point J on the normal defining the final vertex of the area 15 triangle.

Again, I hid all of my constructions and repeated the process to create the final vertex, K, of the area 8 triangle off side BE of the area 20 triangle.  Extending segments AJ and BK defined point L, the final vertex of the quadrilateral.  Overlaying a quadrilateral in the figure let me compute its area.  Moving points A, B, and E around the screen and seeing the areas remain fixed is pretty compelling evidence that the quadrilateral’s area is always 27, and Tatiana’s proof showed why.  You can play with my final construction on GeoGebra Tube here.

Then Tatiana emailed me a much shorter proof.

Tatiana’s Short Solution:

Reversing the logic of her first solution, Tatiana reasoned that equivalent-altitude triangles had equal base and area ratios.

And the sum of X and Y gave the quadrilateral’s area.

Conclusion:

This problem was entertaining both in its solution and in the multiple ways we found it.  Creating the dynamic construction gave insights into the critical features of the problem.

Here are some potential extensions I developed for this problem.  I haven’t fully explored any of them yet, hoping some of my geometry students this year might take up the exploration challenge.  I’d love to hear if any of my readers have any further suggestions.

1. It might be interesting to create an even more dynamic construction with the areas of the three given triangles defined by sliders.
2. Can the quadrilateral’s area be expressed as a closed-form function of the areas of the three given triangles?
3. What happens on the boundaries of this problem?  That is, what happens if one of the side triangles were degenerate with area 0?  What would happen to the quadrilateral?  What would be the corresponding effect on the area formula from extension 2?
4. Extending 3 even further, if both given side triangles were degenerate with area 0, it seems that the area formula from extension 2 should collapse to the area of the final given non-zero triangle, but does it?

Thanks again, Five Triangles, for another great problem!

## A Student’s Powerful Polar Exploration

I posted last summer on a surprising discovery of a polar function that appeared to be a horizontal translation of another polar function.  Translations happen all the time, but not really in polar coordinates.  The polar coordinate system just isn’t constructed in a way that makes translations appear in any clear way.

That’s why I was so surprised when I first saw a graph of $\displaystyle r=cos \left( \frac{\theta}{3} \right)$.

It looks just like a 0.5 left translation of $r=\frac{1}{2} +cos( \theta )$ .

But that’s not supposed to happen so cleanly in polar coordinates.  AND, the equation forms don’t suggest at all that a translation is happening.  So is it real or is it a graphical illusion?

I proved in my earlier post that the effect was real.  In my approach, I dealt with the different periods of the two equations and converted into parametric equations to establish the proof.  Because I was working in parametrics, I had to solve two different identities to establish the individual equalities of the parametric version of the Cartesian x- and y-coordinates.
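The claimed congruence can also be spot-checked numerically: sample points of $r=cos(\theta /3)$, slide each 0.5 unit right, and test the shifted point against $r=\frac{1}{2}+cos(\theta)$.  A Python sketch (the sample angles are mine, chosen to stay on the curve’s outer arc):

```python
import math

# Take a point on r = cos(theta/3), shift it 1/2 right, convert back to polar,
# and check it satisfies r = 1/2 + cos(theta) at its new angle.
for t in [0, 0.5, 1.0, 1.57, 2.2, 3.0]:
    r = math.cos(t / 3)
    x, y = r * math.cos(t) + 0.5, r * math.sin(t)   # shifted Cartesian point
    r2, t2 = math.hypot(x, y), math.atan2(y, x)     # back to polar form
    assert math.isclose(r2, 0.5 + math.cos(t2), abs_tol=1e-9)
```

Running the loop without an assertion error is the numeric echo of the proof below: each shifted sample lands exactly on the limaçon.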

As a challenge to my precalculus students this year, I pitched the problem to see what they could discover. What follows is a solution from about a month ago by one of my juniors, S.  I paraphrase her solution, but the basic gist is that S managed her proof while avoiding the differing periods and parametric equations I had employed, and she did so by leveraging the power of CAS.  The result was that S’s solution was briefer and far more elegant than mine, in my opinion.

S’s Proof:

Multiply both sides of $r = \frac{1}{2} + cos(\theta )$ by r and translate to Cartesian.

$r^2 = \frac{1}{2} r+r\cdot cos(\theta )$
$x^2 + y^2 = \frac{1}{2} \sqrt{x^2+y^2} +x$
$x^2 + y^2 - x = \frac{1}{2} \sqrt{x^2+y^2}$
$\left( 2\left( x^2 + y^2 -x \right) \right) ^2= x^2+y^2$

At this point, S employed some CAS power.

[Full disclosure: That final CAS step is actually mine, but it dovetails so nicely with S’s brilliant approach. I am always delightfully surprised when my students return using a tool (technological or mental) I have been promoting but hadn’t seen to apply in a particular situation.]

S had used her CAS to accomplish the translation in a more convenient coordinate system before moving the equation back into polar.

Clearly, $r \ne 0$, so

$4r^3 - 3r = cos(\theta )$ .

In an attachment (included below), S proved an identity she had never seen, $\displaystyle cos(\theta) = 4cos^3 \left( \frac{\theta }{3} \right) - 3cos \left( \frac{\theta }{3} \right)$ , which she now applied to her CAS result.

$\displaystyle 4r^3 - 3r = 4cos^3 \left( \frac{\theta }{3} \right) - 3cos \left( \frac{\theta }{3} \right)$

So, $\displaystyle r = cos \left( \frac{\theta }{3} \right)$

Therefore, $\displaystyle r = cos \left( \frac{\theta }{3} \right)$ is the image of $\displaystyle r = \frac{1}{2} + cos(\theta )$ after translating $\displaystyle \frac{1}{2}$ unit left.  QED

Simple. Beautiful.

Obviously, this could have been accomplished using lots of by-hand manipulations.  But, in my opinion, that would have been a horrible, potentially error-prone waste of time for a problem that wasn’t concerned at all about whether one knew some Algebra I arithmetic skills.  Great job, S!

S’s proof of her identity, $\displaystyle cos(\theta) = 4cos^3 \left( \frac{\theta }{3} \right) - 3cos \left( \frac{\theta }{3} \right)$ :