What follows first is the algebraic solution I expected most students to find, and then an elegant transformational explanation one of my students produced.

**PROOF 1:**

Given circle A with diameter BC and point D on the circle. Prove triangle BCD is a right triangle.

After some initial explorations in GeoGebra, sliding point D around to discover that the measure of angle BDC was the same regardless of the location of D, most successful solutions recognized congruent radii AB, AC, and AD, creating isosceles triangles CAD and BAD. That gave congruent base angles x in triangle CAD and y in triangle BAD.

The interior angle sum of a triangle gave x + (x + y) + y = 180°, or x + y = 90°, confirming that BCD was a right triangle.
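For readers who want a quick numeric cross-check of the theorem itself (my addition, separate from both student proofs), a few lines of Python confirm the right angle at D wherever D sits on the circle:

```python
import math

# Numeric sanity check: center circle A at the origin with radius 1, so
# B = (-1, 0) and C = (1, 0) are the ends of a diameter. For any D on the
# circle, vectors DB and DC should be perpendicular (dot product ~ 0).
B, C = (-1.0, 0.0), (1.0, 0.0)
dots = []
for k in range(1, 12):                     # several arbitrary positions for D
    t = 0.5 * k
    D = (math.cos(t), math.sin(t))
    DB = (B[0] - D[0], B[1] - D[1])
    DC = (C[0] - D[0], C[1] - D[1])
    dots.append(DB[0] * DC[0] + DB[1] * DC[1])

assert max(abs(d) for d in dots) < 1e-12   # angle BDC is right for every D
```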

**PROOF 2:**

Then, one student surprised us. She marked the isosceles base angles as above before rotating the figure 180° about point A.

Because the diameter rotated onto itself, the image and pre-image combined to form a quadrilateral with all angles congruent. Because every equiangular quadrilateral is a rectangle, M had confirmed BCD was a right triangle.

**CONCLUSION:**

I don’t recall seeing M’s proof before, but I found it a delightfully elegant application of quadrilateral properties. In my opinion, her rotation is a beautiful proof without words solution.

Encourage freedom, flexibility of thought, and creativity, and be prepared to be surprised by your students’ discoveries!

Here’s a very pretty problem I encountered on Twitter from Mike Lawler 1.5 months ago.

I’m late to the game replying to Mike’s post, but this problem is the most lovely combination of features of quadratic and trigonometric functions I’ve ever encountered in a single question, so I couldn’t resist. This one is well worth the time for you to explore on your own before reading further.

My full thoughts and explorations follow. I have landed on some nice insights and what I believe is an elegant solution (in Insight #5 below). Leading up to that, I share the chronology of my investigations and thought processes. As always, all feedback is welcome.

**WARNING: HINTS AND SOLUTIONS FOLLOW**

**Investigation #1:**

My first thoughts were influenced by spoilers posted as quick replies to Mike’s post. The coefficients of the underlying quadratic in tan(x), tan²(x) - 9·tan(x) + 1 = 0, say that the tangent values solving the quadratic sum to 9 and multiply to 1. The product of 1 turned out to be critical, but I didn’t see just how central it was until I had explored further. I didn’t immediately recognize the 9 as a red herring.

Basic trig experience (and a response spoiler) suggested the angle values for the tangent embedded in the quadratic weren’t common angles, so I jumped to Desmos first. I knew the graph of the overall given equation would be ugly, so I initially solved the equation by graphing the quadratic, computing arctangents, and adding.

**Insight #1: A Curious Sum**

The sum of the arctangent solutions was about 1.57…, a decimal form suspiciously suggesting a sum of π/2. I wasn’t yet worried about all solutions in the required interval, but for whatever strange angles were determined by this equation, their sum was strangely pretty and succinct. If this worked for a seemingly random sum of 9 for the tangent solutions, perhaps it would work for others.
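That Desmos computation is easy to replicate in a few lines of Python (a sketch; I take the underlying quadratic in t = tan(x) to be monic, t² - 9t + 1 = 0, per the stated sum of 9 and product of 1):

```python
import math

# Solve t^2 - 9t + 1 = 0 for t = tan(x), then sum the Quadrant I arctangents.
disc = 81 - 4
t1 = (9 + math.sqrt(disc)) / 2
t2 = (9 - math.sqrt(disc)) / 2

assert abs(t1 * t2 - 1) < 1e-9            # the product of the roots is 1
angle_sum = math.atan(t1) + math.atan(t2)
print(angle_sum)                          # ~1.5707963..., suspiciously pi/2
```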

Unfortunately, Desmos is not a CAS, so I turned to GeoGebra for more power.

**Investigation #2:**

In GeoGebra, I created a sketch to vary the linear coefficient of the quadratic and to dynamically calculate angle sums. My procedure is noted at the end of this post. You can play with my GeoGebra sketch here.

The x-coordinate of point G is the sum of the first two angle solutions of the tangent equation.

Likewise, the x-coordinate of point H is the sum of all four angle solutions required by the problem.

**Insight #2: The Angles are Irrelevant**

By dragging the slider for the linear coefficient, the parabola’s intercepts changed, but as predicted in Insight #1, the angle **sums** (x-coordinates of points G & H) remained invariant for all Real values of points A & B. The angle sum of points C & D seemed to be π/2 (point G), confirming Insight #1, while the angle sum of all four solutions in [0, 2π] remained 3π (point H), answering Mike’s question.

*The invariance of the angle sums even while varying the underlying individual angles seemed compelling evidence that this problem was richer than the posed version.*

**Insight #3: But the Angles are bounded**

The parabola didn’t always have Real solutions. In fact, Real x-intercepts (and thereby Real angle solutions) happened iff the discriminant was non-negative: b² - 4 ≥ 0, or |b| ≥ 2. In other words, the sum of the first two positive angle solutions for tan²(x) - b·tan(x) + 1 = 0 is π/2 iff b ≥ 2, and the sum of the first four solutions is 3π under the same condition. These results extend to the equalities at the endpoints iff the double solutions there are counted twice in the sums. I am not convinced these facts extend to the complex angles resulting when b² - 4 < 0.

*I knew the answer to the now extended problem, but I didn’t know why.* Even so, these solutions and the problem’s request for a SUM of angles provided the insights needed to understand WHY this worked; it was time to fully consider the product of the tangent solutions.

**Insight #4: Finally a proof**

It was now clear that for b ≥ 2 there were two Quadrant I angles whose tangents were equal to the x-intercepts of the quadratic. If m and n are the quadratic zeros, then I needed to find the sum A+B where tan(A) = m and tan(B) = n.

From the coefficients of the given quadratic, I knew tan(A) + tan(B) = b and tan(A)·tan(B) = 1.

Employing the tangent sum identity gave

tan(A + B) = (tan(A) + tan(B)) / (1 - tan(A)·tan(B)) = b / (1 - 1),

and this fraction is undefined, independent of the value of b, as suggested by Insight #2. Because tan(A+B) is first undefined at A + B = π/2, the first solutions satisfy A + B = π/2.

**Insight #5: Cofunctions reveal essence**

The tangent identity was a cute touch, but I wanted something deeper, not just an interpretation of an algebraic result. (I know this is uncharacteristic for my typically algebraic tendencies.) The final key was in the implications of tan(A)·tan(B) = 1.

This product meant the tangent solutions were reciprocals, and the reciprocal of tangent is cotangent, giving

tan(A) = 1/tan(B) = cot(B).

But cotangent is also the co-function–or complement function–of tangent, which gave me

tan(A) = cot(B) = tan(π/2 - B).

Because tangent is monotonic over every cycle, the equivalence of the tangents implied the equivalence of their angles, so A = π/2 - B, or A + B = π/2. Using the Insights above, this means the sum of the solutions to the generalization of Mike’s given equation,

tan²(x) - b·tan(x) + 1 = 0 for x in [0, π] and any b ≥ 2,

is always π/2, with the fundamental reason for this in the definition of trigonometric functions and their co-functions. *QED*

**Insight #6: Generalizing the Domain**

The posed problem can be generalized further by recognizing the period of tangent: π. That means the distance between successive corresponding solutions to the internal tangents of this problem is always π, as shown in the GeoGebra construction above.

Insights 4 & 5 proved the sum of the angles at points C & D was π/2. Employing the periodicity of tangent, the x-coordinate of E is π more than that of C and the x-coordinate of F is π more than that of D, so the sum of the angles at points E & F is π/2 + 2π = 5π/2.

Extending the problem domain to [0, 3π] would add π/2 + 4π more to the solution, and a domain of [0, 4π] would add an additional π/2 + 6π. Pushing the domain to [0, kπ] would give total sum

(π/2) + (π/2 + 2π) + (π/2 + 4π) + … + (π/2 + 2(k-1)π).

Combining terms gives a general formula for the sum of solutions for a problem domain of [0, kπ]:

k·(π/2) + k(k-1)·π = k²π - kπ/2.

For the first solutions in Quadrant I, a domain of [0, π] means k=1, and the sum is π/2.

For the solutions in the problem Mike originally posed, the domain [0, 2π] means k=2, and the sum is 3π.
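A quick numeric check of that closed form (my sketch, using the original coefficient 9): for a domain of [0, kπ], the 2k solutions should total k²π - kπ/2.

```python
import math

t1 = (9 + math.sqrt(77)) / 2               # roots of t^2 - 9t + 1 = 0
t2 = (9 - math.sqrt(77)) / 2
a, b = math.atan(t1), math.atan(t2)        # the two Quadrant I solutions

for k in range(1, 7):
    # each root contributes k solutions, spaced pi apart, on [0, k*pi]
    sols = ([a + j * math.pi for j in range(k)]
            + [b + j * math.pi for j in range(k)])
    assert abs(sum(sols) - (k * k * math.pi - k * math.pi / 2)) < 1e-9
```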

I think that’s enough for one problem.

**APPENDIX**

My GeoGebra procedure for Investigation #2:

- Graph the quadratic with a slider for the linear coefficient, b.
- Label the x-intercepts A & B.
- The x-values of A & B are the outputs for tangent, so I reflected these over y=x to the y-axis to construct A’ and B’.
- Graph y=tan(x) and construct perpendiculars at A’ and B’ to determine the points of intersection with tangent–Points C, D, E, and F in the image below.
- The x-coordinates of C, D, E, and F are the angles required by the problem.
- Since these can be points or vectors in GeoGebra, I created point G by G=C+D. The x-coordinate of G is the angle sum of C & D.
- Likewise, the x-coordinate of point H=C+D+E+F is the required angle sum.


An aspect of brain and learning research I incorporate in my classes is that concepts are committed more securely to long-term memory when the ideas are introduced, some time elapses, and the ideas are then re-encountered. The idea is that when you “learn” an idea, have a chance to “forget” it, and then have an opportunity to re-learn it or see it in a new context, you strengthen your long-term understanding. In this spirit, I introduce exponential and logarithmic algebra in Algebra 2 classes and then return to those ideas multiple times. Here are two extensions from later courses–one from Calculus and one from PreCalculus/Statistics.

**LOGARITHM EXTENSION #1: CALCULUS**

Scenario: Whether by hand or with a CAS for rapid data creation, students explore derivatives of variations of y = ln(kx) for any k > 0.

When all return 1/x, most initially can’t quite believe the value of *k* is irrelevant. Those who recall transformations are further disturbed that the slope of y = ln(kx) is invariant under all levels of horizontal scaling. Surely when a curve is stretched, its slope changes, right?

The most common resolution I’ve seen invokes the Chain Rule to cancel *k* algebraically:

d/dx [ln(kx)] = (1/(kx))·k = 1/x.

This proves the derivative of y = ln(kx) is invariant for all k > 0, but it doesn’t get at WHY. Many students remain dissatisfied. Enter log algebra.

As explained at the end of my previous post, every horizontal stretch of any log graph is equivalent to a vertical translation of the parent graph. That’s the core of what’s being claimed by the not-fully-appreciated log algebra property,

log_b(k·x) = log_b(k) + log_b(x).

Applied to this problem, because ln(kx) = ln(k) + ln(x), every instance of y = ln(kx) is a simple vertical translation of y = ln(x). Their derivatives are equal precisely because all derivatives with respect to x are invariant under vertical translations. Knowing the family of logarithmic functions has the special property that every horizontal scale change is equivalent to some vertical slide completely explains the paradox.
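A finite-difference check (a sketch, not a proof) makes the invariance concrete:

```python
import math

def slope(f, x, h=1e-6):
    """Symmetric difference quotient approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x0 = 2.5
for k in (1, 2, 5, 100):
    # the slope of ln(kx) is ~1/x no matter the value of k
    m = slope(lambda x: math.log(k * x), x0)
    assert abs(m - 1 / x0) < 1e-6
```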

**LOGARITHM EXTENSION #2: PRECALCULUS/STATISTICS**

SCENARIO: Having only experienced linear regressions, students encounter curved Quadrant I data and need to find a model.

Balancing multiple perspectives, it is critical for students to see mathematics used both in precise algebraic scenarios and in “fuzzy” scenarios like fitting lines to data that are inevitably imprecise due to inherent variability in the measured data. In my Algebra 2 classes, we explore linear regressions and how they work alongside the precise algebra of finding equations of lines and more general polynomials that must pass through specific predetermined points.

I typically don’t move beyond linear regressions in Algebra 2, but return in PreCalculus and Statistics classes to the reality that we may understand how to fit lines to generally linear data, but we are limited if the data curves. For curved Quadrant I data (like above), it is difficult to know what curve might model the information. Exponential functions and power functions (and others) have this shape, but these are wildly different types of functions. How can you know which to use? Re-enter logarithms…

(The remainder of this post is an overly abbreviated explanation meant only to show a powerful use of log algebra. If there’s interest, I can explore the complete connection between exponential, linear, and power regressions in another post.)

If you suspect data is exponential, then an equation of the form y = a·b^x will model the data, while power data should be modeled by y = a·x^b. The equations are similar, and both have exponents. From prior experiences with log algebra, some students recall that logarithmic functions have the unique algebraic property of being able to write expressions with exponents in an equivalent form without exponents.

Applying logs to the exponential equation and applying log algebra gives

log(y) = log(a·b^x) = log(a) + log(b)·x.

The parallel application to power functions is

log(y) = log(a·x^b) = log(a) + b·log(x).

In both cases, the last equation is a variation of a linear equation–a transformed y-value equal to a constant, added to the product of another constant and either x or a transformed x. That is, both are some form of Y=B+MX.
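Here’s a small stdlib-only Python sketch of the straightening; the data sets and constants are mine, purely for illustration:

```python
import math

xs = [1, 2, 3, 4, 5, 6]
exp_ys = [3 * 1.5 ** x for x in xs]        # exponential data: y = 3 * 1.5^x
pow_ys = [3 * x ** 1.5 for x in xs]        # power data: y = 3 * x^1.5

# log(y) vs x is linear for exponential data (constant slope = log10(1.5))...
m1 = [math.log10(exp_ys[i + 1]) - math.log10(exp_ys[i]) for i in range(5)]
assert all(abs(m - math.log10(1.5)) < 1e-9 for m in m1)

# ...while log(y) vs log(x) is linear for power data (slope = the exponent b)
lx = [math.log10(x) for x in xs]
ly = [math.log10(y) for y in pow_ys]
m2 = [(ly[i + 1] - ly[i]) / (lx[i + 1] - lx[i]) for i in range(5)]
assert all(abs(m - 1.5) < 1e-9 for m in m2)
```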

So, familiar logarithms allow you to change unfamiliar and significantly curved exponential or power data back into a familiar linear form. At their cores, exponential and power regressions are just simple transformations of linear regressions. In another post in which the previous image was explained, I leveraged this curve-straightening idea in a statistics class to have my students discover the formula for standard deviations of distributions of sample means.

**CONCLUSION:**

Research shows that aiming for student mastery on initial exposure is counterproductive. We all learn best by repeated exposure to concepts with time gaps between experiences. Hopefully these two examples offer good ways to bring back log algebra.

From another perspective, exploring the implications of mathematics beyond just algebraic manipulations often grants key insights to scenarios that don’t seem related to when ideas were first encountered.

**THE SCENARIO**

*You can vertically stretch any exponential function as much as you want, and the shape of the curve will never change!*

But that doesn’t make any sense. Doesn’t stretching a curve by definition change its curvature?

The answer is no. Not when exponentials are vertically stretched. It is an inevitable result of the property that multiplying powers of a common base adds the exponents:

b^h · b^x = b^(x+h)

I set up a Desmos page to explore this property dynamically (shown below). The base of the exponential doesn’t matter; I pre-set the base of the parent function (line 1) to 2 (in line 2), but feel free to change it.

From its form, the line 3 orange graph is a vertical stretch of the parent function; you can vary the stretch factor with the line 4 slider. Likewise, the line 5 black graph is a horizontal translation of the parent, and the translation is controlled by the line 6 slider. That’s all you need!

Let’s say I wanted to quadruple the height of my function, so I move the *a* slider to 4. Now play with the *h* slider in line 6 to see if you can achieve the same results with a horizontal translation. By the time you change *h* to -2, the horizontal translation aligns perfectly with the vertical stretch. That’s a pretty strange result if you think about it.

Of course it has to be true because 4·2^x = 2²·2^x = 2^(x+2). Try any positive stretch you like, and you will always be able to find some horizontal translation that gives you the exact same result.
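The equivalence is easy to spot-check in code (a sketch; h = log₂(a) is the general slide that matches a stretch by a):

```python
import math

# every vertical stretch a * 2^x coincides with the horizontal slide
# 2^(x + h) where h = log2(a); a = 4 gives h = 2, matching the slider
# experiment above
for a in (0.5, 3.0, 4.0, 10.0):
    h = math.log2(a)
    for x in (-3.0, -1.0, 0.0, 0.5, 2.0, 7.0):
        assert math.isclose(a * 2 ** x, 2 ** (x + h))
```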

Likewise, you can horizontally slide any exponential function (growth or decay) as much as you like, and there is a single vertical stretch that will produce the same results.

The implications of this are pretty deep. Because the result of any horizontal translation of any function is a graph **congruent** to the initial function, AND because every vertical stretch is equivalent to a horizontal translation, then vertically stretching any exponential function produces a graph congruent to the unstretched parent curve. That is, for any a > 0, the graph of y = a·b^x is congruent to the graph of y = b^x.

**NOT AN EXTENSION**

My students inevitably ask if the same is true for horizontal stretches and vertical slides of exponentials. I encourage them to play with the algebra or create another graph to investigate. Eventually, they discover that horizontal stretches do bend exponentials (actually changing base, i.e., the growth rate), making it impossible for any translation of the parent to be congruent with the result.

**ABSOLUTELY AN EXTENSION**

But if a property is true for a function, then the inverse of the property generally should be true for the inverse of the function. In this case, that means the transformation property that did not work for exponentials does work for logarithms! That is,

*Any horizontal stretch of any logarithmic function is congruent to some vertical translation of the original function.* But for logarithms, vertical stretches do morph the curve into a different shape. Here’s a Desmos page demonstrating the log property.

The sum property of logarithms proves the existence of this equally strange property:

log_b(k·x) = log_b(k) + log_b(x), so scaling x by k slides the parent graph vertically by log_b(k).
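The log-side equivalence spot-checks just as easily (my sketch):

```python
import math

# the horizontal stretch y = log10(x/k) lands exactly on the vertical
# slide y = log10(x) - log10(k) of the parent graph
for k in (2.0, 5.0, 100.0):
    for x in (0.1, 1.0, 7.0, 250.0):
        assert math.isclose(math.log10(x / k),
                            math.log10(x) - math.log10(k), abs_tol=1e-12)
```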

**CONCLUSION**

Hopefully the unexpected transformational congruences will spark some nice discussions, while the graphical/algebraic equivalences will reinforce the importance of understanding mathematics in more than one way.

Enjoy the strange transformational world of exponential and log functions!

Finding equations for quadratic functions has long been a staple of secondary mathematics. Historically, students are given information about some points on the graph of the quadratic, and efficient students typically figure out which form of the equation to use. This post from my Curious Quadratics a la CAS presentation explores a significant mindset change that evolves once computer algebra enters the learning environment.

**HISTORIC BACKGROUND:**

Students spend lots of time (too much?) learning how to manipulate algebraic expressions between polynomial forms. Whether distributing, factoring, or completing the square, generations of students have spent weeks changing quadratic expressions between three common algebraic forms

Standard: y = ax² + bx + c

Factored: y = a(x - x₁)(x - x₂)

Vertex: y = a(x - h)² + k

many times without ever really knowing why. I finally grasped the reason for this deeply about 15 years ago in a presentation by Bernhard Kutzler of Austria. Poorly paraphrasing Bernhard’s more elegant point:

We change algebraic forms of functions because different forms reveal different properties of the function and because no single form reveals everything about a function.

While any of what follows could be eventually derived from any of the three quadratic forms, in general the Standard Form explicitly gives the y-intercept, the Factored Form states the x-intercepts, and the Vertex Form “reveals” the vertex (duh). When working without electronic technology, students can gain efficiency by choosing to work with a quadratic form that blends well with the given information. To demonstrate this, here’s an example of the differences between non-tech and CAS approaches.

**COMPARING APPROACHES:**

For an example, determine all intercepts and the vertex of the parabola that passes through three given points.

NON-TECH: Not knowing anything about the points, use Standard form, plug in all three points, and solve the resulting system.

Use any approach you want to solve this 3×3 system to get a = 2, b = 4, and c = -30.

That immediately gives the y-intercept at -30. Factoring to y = 2(x + 5)(x - 3) or using the Quadratic Formula reveals the x-intercepts at -5 and 3. Completing the square or leveraging symmetry between the known x-intercepts gives the vertex at (-1, -32). Some less-confident students find all of the hinted-at manipulations in this paragraph burdensome or even daunting.
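For readers who want to see the linear algebra behind the non-tech route, here’s a stdlib-only Python sketch using Cramer’s rule. The post’s three given points aren’t preserved in this copy, so the sample points below are my own, chosen to be consistent with the results just stated:

```python
# Fit y = a*x^2 + b*x + c through three points by solving the 3x3 linear
# system with Cramer's rule. Sample (illustrative) points on the curve:
pts = [(0, -30), (1, -24), (2, -14)]

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

M = [[x * x, x, 1] for x, _ in pts]        # coefficient matrix rows
ys = [y for _, y in pts]
D = det3(M)

coeffs = []
for j in range(3):                         # replace column j with the y's
    Mj = [row[:] for row in M]
    for i in range(3):
        Mj[i][j] = ys[i]
    coeffs.append(det3(Mj) / D)

a, b, c = coeffs
assert (a, b, c) == (2.0, 4.0, -30.0)      # matches the solved system above
```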

CAS APPROACH: By declaring the form you want/need, you can directly get any information you require. In the next three lines on my Nspire CAS, notice that the only difference in my commands is the form of the equation I want in the first part of the command. Also notice my use of lists to simplify substitution of the given points.

The last line’s output gave two solutions only because I didn’t specify which of x1 and x2 was the larger x-intercept, so my Nspire gave me both.

The -30 y-intercept appears in the first output, the vertex in the second, and the x-intercepts in the third. Any information is equally simple to obtain.

**CONCLUSION:**

In the end, it’s all about knowing what you want to find and how to ask questions of the tools you have available. Understanding the algebra behind the solutions is important, but endless repetition of these tasks is not helpful, even though it may be easy to test.

Instead, focus on using what you know, explore for patterns, and ask good questions. …And teach with a CAS!


For some integers A, B, and n, one term of the expansion of (Ax + By)^n is 27,869,184·x⁵y³. What are the values of A, B, and n?

In this post, I reflect for a moment on what I’ve learned from the problem and outline a solution approach before sharing a clever alternative solution one of my students discovered this year through her CAS-enabled investigation.

**WHAT I LEARNED BEFORE THIS YEAR**

Mostly, I’ve loved this problem for its “reversal” of traditional binomial expansion problems that typically give A, B, and n values and ask for either complete expansions or specific terms of the polynomial. Both of these traditional tasks are easily managed via today’s technology. In Natalie’s variation, neither the answer nor how you would proceed are immediately obvious.

The first great part of the problem is that it doesn’t seem to give enough information. Second, it requires solvers to understand deeply the process of polynomial expansion. Third, unlike traditional formulations, Natalie’s version doesn’t allow students to avoid deep thinking by using technology.

In the comments to my original post, Christopher Olah and a former student, Bryan Spellman, solved the problem via factoring and an Excel document, respectively. Given my algebraic tendencies, I hadn’t considered Bryan’s Excel “search” approach, but one could relatively easily program Excel to provide an exhaustive search. I now think of Bryan’s approach as a coding approach to a reasonably efficient search of the sample space of possible solutions. Most of my students’ solutions over the years essentially approach the problem the same way, but less efficiently, by using one-case-at-a-time expansions via CAS commands until they stumble upon good values for A, B, and n. Understandably, students taking this approach typically become the most frustrated.

Christopher’s approach paralleled my own. The x and y exponents from the expanded term show that n = 5+3 = 8. Expanding a generic (ax + by)⁸ then gives a bit more information. From my TI-Nspire CAS,

so there are 56 ways an x⁵y³ term appears in this expansion before combining like terms (explained here, if needed). Dividing the original coefficient by 56 gives 497,664, the value of a⁵b³.

The values of a and b are integers, so factoring 497,664 = 2¹¹·3⁵ shows both coefficients are built only from factors of 2 and 3, but which ones? In essence, this defines a system of equations. The 3 has an exponent of 5, so it can easily be attributed to a, but the 11 is not a multiple of either 5 or 3, so it must be a combination. Quick experimentation with the exponents leads to 11 = 5·(1) + 3·(2), so 2⁵ goes to a⁵ and 2⁶ goes to b³. This results in a = 2·3 = 6 and b = 2² = 4.
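The arithmetic above is easy to confirm (a quick check of the stated numbers):

```python
from math import comb

# In (6x + 4y)^8 the x^5 y^3 term has coefficient C(8,3) * 6^5 * 4^3.
ways = comb(8, 3)                          # placements of y^3 among 8 factors
core = 6 ** 5 * 4 ** 3                     # a^5 * b^3 with a = 6, b = 4
assert ways == 56
assert core == 497_664
assert ways * core == 27_869_184           # 56 * 497,664
```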

**WHAT A STUDENT TAUGHT ME THIS YEAR**

After my student, NB, arrived at a⁵b³ = 497,664, she focused on roots–not factors–for her solution. The exponents of a and b suggested using either a cubed or a fifth root.

The fifth root would extract only the value of a if b had only singleton factors–essentially isolating the a and b values–while the cubed root would extract a combination of a and b factors, leaving only excess a factors inside the radical. Her investigation was simplified by the exact answers from her Nspire CAS software.

From the fifth root output, the irrational term had exponent 1/5, not the expected 3/5, so b must have had at least one prime factor with non-singular multiplicity. But the cubed root played out perfectly. The exponent–2/3–matched expectation, giving a=6, and the coefficient, 24, was the product of a and b, making b=4. Clever.
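NB’s two radicals are easy to replicate numerically; here’s my sketch of what her CAS reported:

```python
# a^5 * b^3 = 497,664 = 2^11 * 3^5 with a = 6 and b = 4
n = 497_664
assert n == 2 ** 11 * 3 ** 5

# cube root: (a^5 b^3)^(1/3) = a*b*a^(2/3) = 24 * 6^(2/3); the coefficient
# 24 is a*b and the exponent 2/3 matches expectation
assert abs(n ** (1 / 3) - 24 * 6 ** (2 / 3)) < 1e-9

# fifth root: (a^5 b^3)^(1/5) = 12 * 2^(1/5); the exponent 1/5 (not the
# expected 3/5) signals that b's prime factors are not all singletons
assert abs(n ** (1 / 5) - 12 * 2 ** (1 / 5)) < 1e-9
```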

**EXTENSIONS & CONCLUSION**

Admittedly, NB’s solution would have been complicated if the parameter was composed of something other than singleton prime factors, but it did present a fresh, alternative approach to what was becoming a comfortable problem for me. I’m curious about exploring other arrangements of the parameters of to see how NB’s root-based reasoning could be extended and how it would compare to the factor solutions I used before. I wonder which would be “easier” … whatever “easier” means.

As a ‘blog topic for another day, I’ve learned much by sharing this particular problem with several teachers over the years. In particular, the initial “not enough information” feel of the problem statement actually indicates the presence of some variations that lead to multiple solutions. If you think about it, NB’s root variation of the solution suggests some direct paths to such possible formulations. As intriguing as the possibilities here are, I’ve never assigned such a variation of the problem to my students.

As I finish this post, I’m questioning why I haven’t yet taken advantage of these possibilities. That will change. Until then, perhaps you can find some interesting or alternative approaches to the underlying systems of equations in this problem. Can you create a variation that has multiple solutions? Under what conditions would such a variation exist? How many distinct solutions could a problem like this have?

From Desmos:

Some AP readers spoke up to declare that sin(x)^2 would always be read as (sin(x))². While I can’t speak to the veracity of that last claim, I found it a bit troubling and missing out on some very real difficulties users face when interpreting between paper- and computer-based versions of math expressions. Following is an edited version of my response to the AP Calculus discussion board.

**MY THOUGHTS:**

I believe there’s something at the core of all of this that isn’t being explicitly named: The differences between computer-based 1-dimensional input (left-to-right text-based commands) vs. paper-and-pencil 2-dimensional input (handwritten notation moves vertically–exponents, limits, sigma notation–and horizontally). Two-dimensional traditional math writing simply doesn’t convert directly to computer syntax. Computers are a brilliant tool for mathematics exploration and calculation, but they require a different type of input formatting. To overlook and not explicitly name this for our students leaves them in the unenviable position of trying to “creatively” translate between two types of writing with occasional interpretation differences.

Our students are unintentionally set up for this confusion when they first learn about the order of operations–typically in middle school in the US. They learn the sequencing: parentheses, then exponents, then multiplication & division, and finally addition and subtraction. Notice that functions aren’t mentioned here. This thread [on the AP Calculus discussion board] has helped me realize that all or almost all of the sources I routinely reference never explicitly redefine order of operations after the introduction of the function concept and notation. That means our students are left with the insidious and oft-misunderstood PEMDAS (or BIDMAS in the UK) as their sole guide for operation sequencing. When they encounter squaring or reciprocating or any other operations applied to function notation, they’re stuck trying to make sense of, and creating their own interpretations of, this new dissonance in their old notation. This is easily evidenced by the struggles many have when inputting computer expressions requiring lots of nested parentheses or when first trying to code in LaTeX.

While the sin(x)^2 notation is admittedly uncomfortable for traditional “by hand” notation, it is 100% logical from a computer’s perspective: evaluate the function, then square the result.
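The distinction is easy to make concrete in any 1-dimensional language; a tiny Python sketch:

```python
import math

# One-dimensional computer syntax makes the order explicit: evaluate the
# function first, then square the result.
x = 2.0
assert math.sin(x) ** 2 == (math.sin(x)) ** 2   # sin(x)^2 squares the output
assert math.sin(x) ** 2 != math.sin(x ** 2)     # and it is not sin(x^2)
```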

We also need to recognize that part of the fault for the confusion here lies in the by-hand notation. What we traditionalists understand by the notational convenience of sin^2(x) on paper is technically incorrect. We know what we MEAN, but the notation implies an incorrect order of computation. The computer notation of sin(x)^2 is actually closer to the truth.

I particularly like the way the TI-Nspire CAS handles this point. As is often the case with this software, it accepts computer **input** (next image), while its **output** is rendered in traditional 2-dimensional mathematical notation.

Further recent (?) development: Students have long struggled with the by-hand notation of sin^2(x) needing to be converted to (sin(x))^2 for computers. Personally, I’ve always liked both because the computer notation emphasizes the squaring of the function output while the by-hand version was a notational convenience. My students pointed out to me recently that Desmos now accepts the sin^2(x) notation while TI Calculators still do not.

Desmos:

The enhancement of WYSIWYG computer input formatting means that while some of the differences in 2-dimensional hand writing and computer inputs are narrowing, common classroom technologies no longer accept the same linear formatting — but then that was possibly always the case….

To rail against the fact that many software packages interpret sin(x)^2 as (sin(x))^2 or sin^2(x) misses the point that 1-dimensional computer input is not necessarily the same as 2-dimensional paper writing. We don’t complain when two human speakers misunderstand each other when they speak different languages or dialects. Instead, we should focus on what each is trying to say and learn how to communicate clearly and efficiently in both venues.

In short, “When in Rome, …”.

This post describes a way to compute roots of any number–real or complex–via systems of equations without any conversions to polar form or use of DeMoivre’s Theorem. Following a “traditional approach,” one non-technology example is followed by a CAS simplification of the process.

**TRADITIONAL APPROACH:**

Most sources describe the following procedure to compute the roots of complex numbers (obviously including the real number subset).

- Write the complex number whose root is sought in generic polar form. If necessary, convert from Cartesian form.
- Invoke DeMoivre’s Theorem to get the polar form of all of the roots.
- If necessary, convert the numbers from polar form back to Cartesian.

As a very quick example,

**Compute all square roots of -16.**

Rephrased, this asks for all complex numbers, *z*, that satisfy z² = -16. The Fundamental Theorem of Algebra guarantees two solutions to this quadratic equation.

The complex Cartesian number, -16 + 0i, converts to polar form, 16·cis(π), where cis(θ) = cos(θ) + i·sin(θ). Unlike Cartesian form, polar representations of numbers are not unique, so any full rotation from the initial representation would be coincident, and therefore equivalent if converted to Cartesian. For any integer *n*, this means

-16 = 16·cis(π + 2πn).

Invoking DeMoivre’s Theorem,

(-16)^(1/2) = 16^(1/2)·cis((π + 2πn)/2) = 4·cis(π/2 + πn).

For n = 0 and n = 1, this gives polar solutions 4·cis(π/2) and 4·cis(3π/2). Each can be converted back to Cartesian form, giving the two square roots of -16: 4i and -4i. Squaring either gives -16, confirming the result.

I’ve always found the rotational symmetry of the complex roots of any number beautiful, particularly for higher order roots. This symmetry is perfectly captured by DeMoivre’s Theorem, but there is arguably a simpler way to compute them.

**NEW(?) NON-TECH APPROACH:**

Because the solution to every complex number computation can be written in a + bi form, new possibilities open. The original example can be rephrased:

**Determine the simultaneous real values of x and y for which -16 = (x + yi)².**

Start by expanding and simplifying the right side back into a + bi form. (I wrote about a potentially easier approach to simplifying powers of i in my last post.)

-16 + 0i = (x + yi)² = x² + 2xyi + y²i² = (x² - y²) + (2xy)i

Notice that the two ends of the previous line are two different expressions for the same complex number(s). Therefore, equating the real and imaginary coefficients gives a system of equations:

x² - y² = -16 and 2xy = 0

Solving the system gives the square roots of -16.

From the latter equation, either x = 0 or y = 0. Substituting y = 0 into the first equation gives x² = -16, an impossible equation because x & y are both real numbers, as stated above.

Substituting x = 0 into the first equation gives -y² = -16, leading to y = ±4. So, x = 0 and y = 4, -OR- x = 0 and y = -4 are the only solutions–4i and -4i–the same solutions found earlier, but this time without using polar form or DeMoivre! Notice, too, that the presence of TWO solutions emerged naturally.
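A quick numeric check (mine, not part of the original post) that the system’s answers behave as required:

```python
import cmath

roots = [4j, -4j]                          # x = 0, y = ±4 from the system
for z in roots:
    assert z * z == -16                    # each root squares to -16

# the principal square root from cmath agrees with one of them
assert abs(cmath.sqrt(-16) - 4j) < 1e-12
```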

Higher order roots could lead to much more complicated systems of equations, but a CAS can solve that problem.

**CAS APPROACH:**

**Determine all fourth roots of a given complex number.**

That’s equivalent to finding all simultaneous x and y values for which the given number equals (x + yi)⁴. Expanding the right side is quickly accomplished on a CAS. From my TI-Nspire CAS:

Notice that the output is simplified to $x+yi$ form which, in the context of this particular example, gives a system of two equations: the real part of the expansion must equal the real part of the given number, and likewise for the imaginary parts.
Using my CAS to solve the system,

First, note there are four solutions, as expected. Rewriting the approximated numerical output gives the four complex fourth roots of the given number. Each can be quickly confirmed on the CAS:

**CONCLUSION:**

Given proper technology, finding the multiple roots of a complex number need not invoke polar representations or DeMoivre’s Theorem. It really is as “simple” as expanding $(x+yi)^n$, where *n* is the given root, simplifying the expansion into $x+yi$ form, and solving the resulting 2×2 system of equations.
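The expansion step is the only real work here, and even that is just the binomial theorem. As a sketch (the function and its coefficient-dictionary representation are my own, not the CAS output), the real and imaginary parts of $(x+yi)^n$ can be collected by tracking the power of *i* in each binomial term:

```python
from math import comb

def real_imag_coeffs(n):
    """Coefficients of the real and imaginary parts of (x + y*i)**n.
    Returns two dicts mapping (power of x, power of y) -> coefficient."""
    real, imag = {}, {}
    for k in range(n + 1):
        coeff = comb(n, k)                   # term: C(n,k) * x^(n-k) * (y*i)^k
        sign = 1 if k % 4 in (0, 1) else -1  # i^k cycles through 1, i, -1, -i
        target = real if k % 2 == 0 else imag
        target[(n - k, k)] = sign * coeff
    return real, imag
```

For n=2 this recovers the familiar $x^2-y^2$ and $2xy$; for n=4 it gives real part $x^4-6x^2y^2+y^4$ and imaginary part $4x^3y-4xy^3$, the two expressions that must match the real and imaginary parts of the target number.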

At the point when such problems would be introduced to students, their algebraic awareness should be such that using a CAS to do all the algebraic heavy lifting is entirely appropriate.

As one final glimpse at the beauty of complex roots, I entered the two equations from the last system into Desmos to take advantage of its very good implicit graphing capabilities. You can see the four intersections corresponding to the four solutions of the system. Solutions to systems of implicit equations are notoriously difficult to compute, so I wasn’t surprised when Desmos didn’t compute the coordinates of the points of intersection, even though the graph was pretty and surprisingly quick to generate.

]]>**TRADITIONAL APPROACH:**

I began with the obvious $i^0=1$ and $i^1=i$ before invoking the definition of *i* to get $i^2=-1$. From these three you can see that every time the power of *i* increases by 1, you multiply the previous result by *i* and simplify, when possible, using the first three terms. The result of $i^3$ is simple, taking the known results to $i^3=i^2\cdot i=-1\cdot i=-i$.

But $i^4=i^3\cdot i=-i\cdot i=-i^2=1$, cycling back to the value initially found with $i^0$. Continuing this procedure creates a modulus-4 pattern: $i^4=1$, $i^5=i$, $i^6=-1$, $i^7=-i$, $i^8=1$, and so on.

They noticed that *i* to any multiple of 4 was 1, and other powers were *i*, -1, or -*i*, depending on how far removed they were from a multiple of 4. For an algorithm to compute a simplified form of *i* to an integer power, divide the power by 4, and raise *i* to the remainder (0, 1, 2, or 3) from that division.
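The 4-cycle algorithm is a one-line lookup. A quick sketch (the function name is mine):

```python
def i_power(n):
    """Simplify i**n for any integer n using the modulus-4 pattern."""
    # Python's % always returns 0, 1, 2, or 3 here, even for negative n.
    return (1, 1j, -1, -1j)[n % 4]
```

For example, `i_power(148)` returns 1 because 148 is a multiple of 4.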

They got the pattern and were ready to move on when one student, who had glimpsed this in a math competition at some point, noted he could “do it,” but it seemed to him that memorizing the list of 4 base powers was a necessary prerequisite to invoking the pattern.

Then he recalled a comment I made on the first day of class: **I value memorizing as little mathematics as possible and using the mathematics we do know as widely as possible.** His challenge was clear: Wasn’t asking students to use this 4-cycle approach just a memorization task in disguise? If I believed in my non-memorization claim, shouldn’t there be another way to achieve our results using nothing more than the definition of *i*?

**A POTENTIAL IMPROVEMENT:**

By definition, $i=\sqrt{-1}$, so it’s a very small logical stretch with inverse operations to claim $i^2=-1$.

**Even Powers:** After trying some different examples, one student had an easy way to handle even powers. For example, if n=148, she invoked an exponent rule “in reverse” to extract an $i^2$ term, which she turned into a -1: $i^{148}=\left(i^2\right)^{74}=(-1)^{74}=1$. Because -1 to any integer power is either 1 or -1, she used the properties of negative numbers to odd and even powers to determine the sign of her answer.

Because any even power can always be written as the product of 2 and another number, this gave an easy way to handle half of all cases using nothing more than the definition of *i* and exponents of -1.

A third student pointed out another efficiency. Because the final result depended only on whether the integer multiplied by 2 was even or odd, only the last two digits of *n* were relevant. That pattern also exists in the 4-cycle approach, but it felt more natural here.

**Odd Powers:** Even powers were so simple, they were initially frustrated that odd powers didn’t seem to be, too. Then the student who’d issued the memorization challenge said that any odd power of *i* was just the product of *i* and an even power of *i*. Invoking the efficiency in the last paragraph for n=567, he found $i^{567}=i^{566}\cdot i=\left(i^2\right)^{283}\cdot i=(-1)^{283}\cdot i=-i$.
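The even/odd reasoning translates just as directly into code. This sketch (mine, not the students') uses nothing beyond $i^2=-1$ and the parity of powers of -1:

```python
def i_power_parity(n):
    """Simplify i**n (integer n >= 0) using only i*i = -1 and powers of -1."""
    if n % 2 == 0:
        return (-1) ** (n // 2)           # i^n = (i^2)^(n/2) = (-1)^(n/2)
    return (-1) ** ((n - 1) // 2) * 1j    # odd n: i^n = i^(n-1) * i
```

For instance, `i_power_parity(567)` reproduces the students' result of $-i$.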

**CONCLUSION:**

In the end, powers of *i* had become nothing more complicated than exponent properties and powers of -1. The students seemed to have greater comfort with finding powers of complex numbers, but I have begun to question why algebra courses have placed so much emphasis on powers of *i.*

From one perspective, a surprising property of complex numbers for many students is that any operation on complex numbers creates another complex number. While they are told that complex numbers are a closed set, to see complex numbers simplify so conveniently surprises many.

Another cool aspect of complex number operations is the stretch-and-rotate graphical property of complex number multiplication. This is the basis of DeMoivre’s Theorem and explains why there are exactly 4 results when you repeatedly multiply any complex number by *i*–equivalent to stretching by a factor of 1 and rotating $90°$. Multiplying by 1 doesn’t change the magnitude of a number, and after 4 rotations of $90°$, you are back at the original number.
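The stretch-and-rotate property is easy to see numerically. A quick sketch (the sample value 3+4i is my own choice): each multiplication by *i* rotates the point 90° counterclockwise, and four rotations return the original number exactly.

```python
# Multiplying by i = stretch by a factor of 1, rotate 90 degrees counterclockwise.
z = 3 + 4j
assert z * 1j == -4 + 3j    # one 90-degree rotation of the point (3, 4)

w = z
for _ in range(4):
    w *= 1j                 # four rotations of 90 degrees...
assert w == z               # ...return the original number
```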

So, depending on the future goals or needs of your students, there is certainly a reason to explore the 4-cycle nature of repeated multiplication by *i*. If the point is just to compute a result, perhaps the 4-cycle approach is unnecessarily “complex”, and the odd/even powers of -1 approach is less computationally intense. In the end, maybe it’s all about number sense.

My students discovered a more basic algorithm, but I’m left a bit uncomfortable. Just because we can ask our students a question doesn’t mean we should. I can see connections from my longer studies, but do they see or care? In this case, should they?

]]>The first line is fine by the standard rules of arithmetic, but as soon as you read the 2nd and 3rd lines, you know something is amiss. What could be the output of line 4?

The Telegraph post above claims there are two answers. Sadly, it suggests those are the only two solutions; in reality, there are infinitely many correct answers.

I first share the two most commonly proffered solutions suggested by the Telegraph as the only answers. I follow this with Knox’s clever use of an incremental number base. Finally, I offer a more generalized approach to support my claim of many more solutions.

**STANDARD SOLUTIONS**

**THE ANSWER IS 40**: After the first line, add the previous answer to the next sum.

Applying the same rule to line 4, consistent with the first three lines, “proves” the answer is 40:

$1+4=5$
$5+2+5=12$
$12+3+6=21$
$21+8+11=40$
While nothing requires it, this approach is recursive. I’ve not seen anyone say this, but the 40 approach requires the equations to appear **in the given order**. If you give the equations in a different order, the rule is no longer consistent. In particular, if you wanted a 5th line, what would it be? There’s nothing clear about how to extend this solution.
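The recursive rule is a running total, which makes its order-dependence explicit. A sketch (the function is mine):

```python
def recursive_sums(pairs):
    """The 'answer is 40' rule: each line's result is the previous
    result plus the new pair's sum. The order of the pairs matters."""
    total, results = 0, []
    for x, y in pairs:
        total += x + y       # accumulate the previous answer into each line
        results.append(total)
    return results
```

Feeding in the puzzle's pairs in order, `recursive_sums([(1, 4), (2, 5), (3, 6), (8, 11)])` yields `[5, 12, 21, 40]`; any reordering of the pairs changes the outputs.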

**THE ANSWER IS 96**: Alternatively, you can multiply the two numbers on the left and add that product to the first number. This procedure is consistent with the first three lines, so the solution to line 4 must be 96:

$1+1\cdot 4=5$
$2+2\cdot 5=12$
$3+3\cdot 6=21$
$8+8\cdot 11=96$
The nice thing about this approach is that the solution is explicit, not recursive. What’s obviously counter-intuitive is why you would first multiply the given numbers, and then why you would add the result to the first number, not the second. This approach is consistent with the given information, so it is valid.

Unlike the first solution, this multiplicative approach is not commutative. By this rule, 1+4 yields 5, as shown, but 4+1 would be $4+4\cdot 1=8$. Nothing in the problem statement required commutativity, so no worries.

Another good aspect of this algorithm is that the order of the equations is now irrelevant. It applies no matter what numbers are “added” on the left side of the equation. This is definitely more satisfying.
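Because the rule is explicit, it is a one-liner. A sketch (the function name is mine):

```python
def line_value(x, y):
    """The 'answer is 96' rule: multiply the two numbers, then add the first."""
    return x + x * y
```

This reproduces all four lines, and the non-commutativity is visible directly: `line_value(1, 4)` is 5 while `line_value(4, 1)` is 8.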

**CHANGE THE NUMBER BASE**

**THE ANSWER IS 201**: Knox noticed that if you changed the number base, you could find another legit pattern. The first line is standard arithmetic, but how could the next lines be consistent, too? You know 2+5 doesn’t give 12 in standard base-10 arithmetic, but if you use base-5, $2+5=7_{ten}=12_5$. Unfortunately, in base-5, line 1 would be $1+4=10_5$ and line 3 would be $3+6=14_5$, both inconsistent. Knox’s cleverest move was to vary the number base. The 3rd line is true in base-4; since the 1st line is true in any base larger than five, he found a consistent pattern by decreasing the base one line at a time, starting with base-6 in line 1:

$1+4=5$ (base-6)
$2+5=12$ (base-5)
$3+6=21$ (base-4)
Following this pattern, the next line would be base-3, giving 201 as the answer:

$8+11=19_{ten}=201_3$
The best part of Knox’s solution is that he maintains the addition integrity of the left side. The downside is that this approach works for only one more line. Any 5th line would give a base-2 (binary) answer, and since base-1 does not exist, the problem would end there.

Knox’s approach also allows you to use any numbers you want for the left-hand sums. But notice that answers depend on where you write the sum. For example, if (2+5) were in any other line, you would not get 12. In line 1, you’d get $2+5=7_{ten}=11_6$; in line 3, you’d get $2+5=7_{ten}=13_4$.
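Knox's pattern amounts to writing each true base-ten sum as a numeral in a base that shrinks by one per line. A sketch (the conversion helper is mine):

```python
def to_base(n, b):
    """Base-ten integer n >= 0 written as a numeral string in base b (2 <= b <= 10)."""
    digits = ""
    while n:
        digits = str(n % b) + digits   # peel off digits from the right
        n //= b
    return digits or "0"

# Line k uses base 7-k: line 1 -> base 6, ..., line 4 -> base 3.
lines = [(1, 4), (2, 5), (3, 6), (8, 11)]
answers = [to_base(x + y, 7 - k) for k, (x, y) in enumerate(lines, start=1)]
```

Running this reproduces the four right-hand sides: `["5", "12", "21", "201"]`.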

**CREATE YOUR OWN SOLUTION**

By now, you should see that any rule could work so long as you are consistent. Because standard arithmetic does not apply, solvers should feel free to invoke any functions or algorithms desired. One way to do this is to think of each line as the inputs (left side) and output (right side) of a three-variable function.

**THE ANSWER IS 96**: One possible function is $z=ax^2+by^2+c$ for some values of *a*, *b*, and *c*, chosen so the function passes through (1,4,5), (2,5,12), and (3,6,21). I used my TI-Nspire CAS to solve the resulting system, giving $a=\frac{1}{3}$, $b=\frac{2}{3}$, and $c=-6$. That means if *x* and *y* are the given left-side numbers and *z* is the right-side answer, the equation $z=\frac{1}{3}x^2+\frac{2}{3}y^2-6$ satisfies the first three lines, and the answer to line 4 is $\frac{1}{3}\cdot 8^2+\frac{2}{3}\cdot 11^2-6=96$.

**THE ANSWER IS $\frac{2574}{29}$**: If you can square the inputs, why not cube them? That means another possible function is $z=ax^3+by^3+c$. My CAS solution of the resulting system gives $a=-\frac{44}{261}$, $b=\frac{35}{261}$, and $c=-\frac{99}{29}$, leading to the fractional answer for line 4: $z=\frac{2574}{29}\approx 88.76$.

The first three given equations essentially define three ordered triples–(1,4,5), (2,5,12), and (3,6,21)–so almost any equation you conceive with three unknown coefficients can be used to create a 3×3 system of equations. The fractional solution for line 4 may not be as satisfying as any of the earlier approaches using only integers, but these last two examples make it clear that there should be an infinite number of solutions.
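The fitting procedure itself is mechanical: any candidate function with three unknown coefficients, evaluated at the triples (1,4,5), (2,5,12), and (3,6,21), yields a 3×3 linear system. Here is a sketch using exact rational arithmetic and Cramer's rule (all helper code is mine, and the particular basis functions are just one possible choice):

```python
from fractions import Fraction

def fit_and_predict(basis, points, query):
    """Fit z = a*f1(x,y) + b*f2(x,y) + c*f3(x,y) through three (x, y, z)
    points by Cramer's rule, then evaluate the fit at the query (x, y)."""
    M = [[Fraction(f(x, y)) for f in basis] for x, y, _ in points]
    zs = [Fraction(z) for _, _, z in points]

    def det(m):   # 3x3 determinant by cofactor expansion
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    D = det(M)
    coeffs = []
    for col in range(3):          # Cramer: replace one column with the z values
        Mc = [row[:] for row in M]
        for r in range(3):
            Mc[r][col] = zs[r]
        coeffs.append(det(Mc) / D)
    qx, qy = query
    return sum(c * f(qx, qy) for c, f in zip(coeffs, basis))

pts = [(1, 4, 5), (2, 5, 12), (3, 6, 21)]
quad = fit_and_predict([lambda x, y: x**2, lambda x, y: y**2, lambda x, y: 1],
                       pts, (8, 11))
```

With the quadratic basis $(x^2, y^2, 1)$ this predicts 96 for line 4; swapping in the cubic basis $(x^3, y^3, 1)$ predicts the fraction $\frac{2574}{29}$, illustrating how each new basis choice manufactures another valid answer.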

These last two solutions are especially nice because they are explicit and don’t depend on the order of the given information. You can choose any two numbers to “add”, and the algorithms will work.

Notice also that these last functions, like the multiplicative approach but unlike Knox’s, are non-commutative. No worries, the problem already broke free of standard rules in line 2.

**ONE THAT DIDN’T WORK**

The last two examples prove the existence of quadratic and cubic solutions, so why not a linear solution? In other words, is there a 3D plane of the form $z=ax+by+c$ containing the given points?

Unfortunately, the resulting 3×3 system didn’t solve. The determinant of the coefficient matrix is zero, suggesting an inconsistent or dependent system. Upon further inspection, subtracting line 1 from line 2 in the planar system gives $a+b=7$. Similarly, subtracting line 2 from line 3 gives $a+b=9$. Since both can’t be simultaneously true, the system is inconsistent and has no solution. Still, it was worth the effort.
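The failed planar fit is easy to verify numerically: the coefficient matrix of $z=ax+by+c$ built from the three given lines is singular. A quick sketch (the determinant helper is mine):

```python
rows = [(1, 4, 1), (2, 5, 1), (3, 6, 1)]   # (x, y, 1) for each of the three lines

def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Zero determinant: the coefficient matrix is singular, so with the
# inconsistent right-hand sides (a+b = 7 vs. a+b = 9) no plane fits.
assert det3(rows) == 0
```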

**CONCLUSION**

Since standard arithmetic didn’t apply after the first line and no other restrictions were in play, that opened the door to lots of creativity. The many different solutions to this problem all hinge on finding some function–any function–that satisfies the first three lines. Find one of these, and the last line is simple. That some attempts won’t work is no hindrance. Even when standard algorithms seem to apply, there is almost always the possibility of some creative twist when working with numerical sequences.

So, whenever you’re faced with a non-standard system, have fun, be creative, and develop something unexpected.
