From Desmos:

Some AP readers spoke up to declare how sin(x)^2 would always be read. While I can’t speak to the veracity of that claim, I found it a bit troubling: it misses some very real difficulties users face when translating between paper- and computer-based versions of math expressions. Following is an edited version of my response to the AP Calculus discussion board.

**MY THOUGHTS:**

I believe there’s something at the core of all of this that isn’t being explicitly named: The differences between computer-based 1-dimensional input (left-to-right text-based commands) vs. paper-and-pencil 2-dimensional input (handwritten notation moves vertically–exponents, limits, sigma notation–and horizontally). Two-dimensional traditional math writing simply doesn’t convert directly to computer syntax. Computers are a brilliant tool for mathematics exploration and calculation, but they require a different type of input formatting. To overlook and not explicitly name this for our students leaves them in the unenviable position of trying to “creatively” translate between two types of writing with occasional interpretation differences.

Our students are unintentionally set up for this confusion when they first learn about the order of operations–typically in middle school in the US. They learn the sequencing: parentheses, then exponents, then multiplication & division, and finally addition & subtraction. Notice that functions aren’t mentioned here. This thread [on the AP Calculus discussion board] has helped me realize that all or almost all of the sources I routinely reference never explicitly redefine the order of operations after the introduction of the function concept and notation. That means our students are left with the insidious and oft-misunderstood PEMDAS (or BIDMAS in the UK) as their sole guide for operation sequencing. When they encounter squaring or reciprocating or any other operations applied to function notation, they’re stuck trying to make sense of this new dissonance in their old notation, creating their own interpretations. This is easily evidenced by the struggles many have when inputting computer expressions requiring lots of nested parentheses or when first trying to code in LaTeX.

While the sin(x)^2 notation is admittedly uncomfortable for traditional “by hand” notation, it is 100% logical from a computer’s perspective: evaluate the function, then square the result.
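The distinction can be made concrete in any one-dimensional programming syntax. A minimal illustration (mine, not from the original discussion) using Python’s `math` module:

```python
# In linear computer syntax, sin(x)^2 can only mean "evaluate sin(x),
# then square the result" -- squaring the output, not the input.
import math

x = 2.0
evaluate_then_square = math.sin(x) ** 2   # (sin(x))^2
square_then_evaluate = math.sin(x ** 2)   # sin(x^2): a different expression

assert evaluate_then_square != square_then_evaluate
```

The two expressions differ for almost every input, which is exactly why the linear notation has to commit to one reading.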

We also need to recognize that part of the fault for this confusion lies in the by-hand notation. What we traditionalists understand by the notational convenience of sin^2(x) on paper is technically incorrect. We know what we MEAN, but the notation implies an incorrect order of computation. The computer notation of sin(x)^2 is actually closer to the truth.

I particularly like the way the TI-Nspire CAS handles this point. As is often the case with this software, it accepts computer **input** (next image), while its

Further recent (?) development: Students have long struggled with the by-hand notation of sin^2(x) needing to be converted to (sin(x))^2 for computers. Personally, I’ve always liked both because the computer notation emphasizes the squaring of the function output while the by-hand version was a notational convenience. My students pointed out to me recently that Desmos now accepts the sin^2(x) notation while TI Calculators still do not.

Desmos:

The rise of WYSIWYG computer input formatting means some of the differences between 2-dimensional handwriting and computer input are narrowing, but common classroom technologies still don’t accept the same linear formatting–though that was possibly always the case….

To rail against the fact that many software packages interpret sin(x)^2 as (sin(x))^2 or sin^2(x) misses the point that 1-dimensional computer input is not necessarily the same as 2-dimensional paper writing. We don’t complain when two human speakers misunderstand each other when they speak different languages or dialects. Instead, we should focus on what each is trying to say and learn how to communicate clearly and efficiently in both venues.

In short, “When in Rome, …”.


This post describes a way to compute roots of any number–real or complex–via systems of equations without any conversions to polar form or use of DeMoivre’s Theorem. Following a “traditional approach,” one non-technology example is followed by a CAS simplification of the process.

**TRADITIONAL APPROACH:**

Most sources describe the following procedure to compute the roots of complex numbers (obviously including the real number subset).

- Write the complex number whose root is sought in generic polar form. If necessary, convert from Cartesian form.
- Invoke DeMoivre’s Theorem to get the polar form of all of the roots.
- If necessary, convert the numbers from polar form back to Cartesian.

As a very quick example,

**Compute all square roots of -16.**

Rephrased, this asks for all complex numbers, *z*, that satisfy z^2 = -16. The Fundamental Theorem of Algebra guarantees two solutions to this quadratic equation.

The complex Cartesian number, -16 + 0i, converts to polar form, 16*cis(pi), where cis(t) = cos(t) + i*sin(t). Unlike Cartesian form, polar representations of numbers are not unique, so any full rotation from the initial representation would be coincident, and therefore equivalent if converted to Cartesian. For any integer *n*, this means -16 = 16*cis(pi + 2*pi*n).

Invoking DeMoivre’s Theorem, z = 16^(1/2) * cis((pi + 2*pi*n)/2) = 4*cis(pi/2 + pi*n).

For n = 0 and n = 1, this gives polar solutions 4*cis(pi/2) and 4*cis(3*pi/2). Each can be converted back to Cartesian form, giving the two square roots of -16: 4i and -4i. Squaring either gives -16, confirming the result.

I’ve always found the rotational symmetry of the complex roots of any number beautiful, particularly for higher order roots. This symmetry is perfectly captured by DeMoivre’s Theorem, but there is arguably a simpler way to compute them.

**NEW(?) NON-TECH APPROACH:**

Because the solution to every complex number computation can be written in a+bi form, new possibilities open. The original example can be rephrased:

**Determine the simultaneous real values of x and y for which (x + yi)^2 = -16.**

Start by expanding and simplifying the right side back into a+bi form: -16 + 0i = (x + yi)^2 = x^2 + 2xyi + (yi)^2 = (x^2 - y^2) + (2xy)i. (I wrote about a potentially easier approach to simplifying powers of i in my last post.)

Notice that the two ends of the previous line are two different expressions for the same complex number(s). Therefore, equating the real and imaginary coefficients gives a system of equations: x^2 - y^2 = -16 and 2xy = 0.

Solving the system gives the square roots of -16.

From the latter equation, either x = 0 or y = 0. Substituting y = 0 into the first equation gives x^2 = -16, an impossible equation because x & y are both real numbers, as stated above.

Substituting x = 0 into the first equation gives -y^2 = -16, leading to y = ±4. So, x = 0 and y = 4, -OR- x = 0 and y = -4 are the only solutions–z = 4i and z = -4i–the same solutions found earlier, but this time without using polar form or DeMoivre! Notice, too, that the presence of TWO solutions emerged naturally.

Higher order roots could lead to much more complicated systems of equations, but a CAS can solve that problem.
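The whole pipeline–expand, equate real and imaginary parts, solve the system–can be sketched with SymPy standing in for a handheld CAS (SymPy is my substitution; the post itself uses a TI-Nspire):

```python
from sympy import symbols, expand, re, im, solve, I

x, y = symbols('x y', real=True)

# Expand (x + yi)^2, then equate real and imaginary parts with -16 + 0i.
expanded = expand((x + I*y)**2)              # (x^2 - y^2) + (2xy)i
system = [re(expanded) + 16, im(expanded)]   # x^2 - y^2 = -16 and 2xy = 0
roots = solve(system, [x, y])
print(sorted(roots))                         # [(0, -4), (0, 4)]
```

The two solutions correspond to the square roots 4i and -4i.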

**CAS APPROACH:**

**Determine all fourth roots of .**

That’s equivalent to finding all simultaneous x and y values for which (x + yi)^4 equals the given number. Expanding the right side is quickly accomplished on a CAS. From my TI-Nspire CAS:

Notice that the output is simplified to a+bi form that, in the context of this particular example, gives the system of equations,

Using my CAS to solve the system,

First, note there are four solutions, as expected. Rewriting the approximated numerical output gives the four complex fourth roots of : , , , and . Each can be quickly confirmed on the CAS:

**CONCLUSION:**

Given proper technology, finding the multiple roots of a complex number need not invoke polar representations or DeMoivre’s Theorem. It really is as “simple” as expanding (x + yi)^n, where n is the given root, simplifying the expansion into a+bi form, and solving the resulting 2×2 system of equations.

At the point when such problems would be introduced to students, their algebraic awareness should be such that using a CAS to do all the algebraic heavy lifting is entirely appropriate.

As one final glimpse at the beauty of complex roots, I entered the two equations from the last system into Desmos to take advantage of its very good implicit graphing capabilities. You can see the four intersections corresponding to the four solutions of the system. Solutions to systems of implicit equations are notoriously difficult to compute, so I wasn’t surprised when Desmos didn’t compute the coordinates of the points of intersection, even though the graph was pretty and surprisingly quick to generate.


**TRADITIONAL APPROACH:**

I began with the obvious i^0 = 1 and i^1 = i before invoking the definition of *i* to get i^2 = -1. From these three you can see that every time the power of *i* increases by 1, you multiply the result by *i* and simplify the result if possible using these first 3 terms. The result of i^3 = i^2 * i = -i is simple, taking the known results to i^0 = 1, i^1 = i, i^2 = -1, and i^3 = -i.

But i^4 = i^3 * i = (-i) * i = -i^2 = 1, cycling back to the value initially found with i^0. Continuing this procedure creates a modulus-4 pattern: 1, i, -1, -i, repeating in that order as the power increases.

They noticed that *i* to any multiple of 4 was 1, and other powers were *i*, -1, or –*i*, depending on how far removed they were from a multiple of 4. For an algorithm to compute a simplified form of *i* to an integer power, divide the power by 4, and raise *i* to the remainder (0, 1, 2, or 3) from that division.
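The students’ remainder algorithm is short enough to state as code. A sketch (the function name is mine, not from the class) that simply indexes the 4-cycle by n mod 4:

```python
# The 4-cycle: i^0, i^1, i^2, i^3 = 1, i, -1, -i, then the pattern repeats.
def i_power_cycle(n: int) -> complex:
    """Simplify i**n by raising i to the remainder of n divided by 4."""
    return ((1 + 0j), 1j, (-1 + 0j), -1j)[n % 4]
```

For example, `i_power_cycle(567)` returns `-1j` because 567 leaves remainder 3 when divided by 4.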

They got the pattern and were ready to move on when one student who had glimpsed this in a math competition at some point noted he could “do it”, but it seemed to him that memorizing the list of 4 base powers was a necessary requirement to invoking the pattern.

He then recalled a comment I made on the first day of class: **I value memorizing as little mathematics as possible and using the mathematics we do know as widely as possible.** His challenge was clear: Wasn’t asking students to use this 4-cycle approach just a memorization task in disguise? If I believed in my non-memorization claim, shouldn’t there be another way to achieve our results using nothing more than the definition of *i*?

**A POTENTIAL IMPROVEMENT:**

By definition, i = sqrt(-1), so it’s a very small logical stretch with inverse operations to claim i^2 = -1.

**Even Powers:** After trying some different examples, one student had an easy way to handle even powers. For example, if n = 148, she invoked an exponent rule “in reverse” to extract an i^2 term, which she turned into a -1: i^148 = (i^2)^74 = (-1)^74 = 1. Because -1 to any integer power is either 1 or -1, she used the properties of negative numbers raised to odd and even powers to determine the sign of her answer.

Because any even power can always be written as the product of 2 and another number, this gave an easy way to handle half of all cases using nothing more than the definition of *i* and exponents of -1.

A third student pointed out another efficiency. Because the final result depended only on whether the integer multiplied by 2 was even or odd, only the last two digits of *n* were even relevant. That pattern also exists in the 4-cycle approach, but it felt more natural here.

**Odd Powers**: Even powers were so simple, they were initially frustrated that odd powers didn’t seem to be, too. Then the student who’d issued the memorization challenge said that any odd power of *i* was just the product of *i* and an even power of *i*. Invoking the efficiency in the last paragraph for n = 567, he found i^567 = i^566 * i = (i^2)^283 * i = (-1)^283 * i = -i.
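The parity shortcut reduces everything to powers of -1. A sketch (my function name), assuming nothing beyond i^2 = -1:

```python
# Even n: i^n = (i^2)^(n/2) = (-1)^(n/2).
# Odd n:  i^n = i^(n-1) * i, where i^(n-1) is an even power handled as above.
def i_power_parity(n: int) -> complex:
    if n % 2 == 0:
        return complex((-1) ** (n // 2))
    return (-1) ** ((n - 1) // 2) * 1j
```

`i_power_parity(567)` gives `-1j`, matching the student’s hand computation.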

**CONCLUSION:**

In the end, powers of *i* had become nothing more complicated than exponent properties and powers of -1. The students seemed to have greater comfort with finding powers of complex numbers, but I have begun to question why algebra courses have placed so much emphasis on powers of *i.*

From one perspective, a surprising property of complex numbers for many students is that any operation on complex numbers creates another complex number. While they are told that complex numbers are a closed set, to see complex numbers simplify so conveniently surprises many.

Another cool aspect of complex number operations is the stretch-and-rotate graphical property of complex number multiplication. This is the basis of DeMoivre’s Theorem and explains why there are exactly 4 results when you repeatedly multiply any complex number by *i*–equivalent to stretching by a factor of 1 and rotating pi/2. Multiplying by 1 doesn’t change the magnitude of a number, and after 4 rotations of pi/2, you are back at the original number.

So, depending on the future goals or needs of your students, there is certainly a reason to explore the 4-cycle nature of repeated multiplication by *i*. If the point is just to compute a result, perhaps the 4-cycle approach is unnecessarily “complex”, and the odd/even powers of -1 is less computationally intense. In the end, maybe it’s all about number sense.

My students discovered a more basic algorithm, but I’m more uncomfortable. Just because we can ask our students a question doesn’t mean we should. I can see connections from my longer studies, but do they see or care? In this case, should they?


The first line is fine by the standard rules of arithmetic, but as soon as you read the 2nd and 3rd lines, you know something is amiss. What could be the output of line 4?

The Telegraph post above claims there are two answers. Sadly, it suggests those are the only two solutions. The reality is that there is an infinite number of correct answers.

I first share the two most commonly proffered solutions suggested by the Telegraph as the only answers. I follow this with Knox’s clever use of an incremental number base. Finally, I offer a more generalized approach to support my claim of many more solutions.

**STANDARD SOLUTIONS**

**THE ANSWER IS 40**: After the first line, add the previous answer to the next sum.

Consistent with the first three lines, applying the same rule to line 4 “proves” the answer is 40:

While nothing requires it, this approach is recursive. I’ve not seen anyone say this, but the 40 approach requires the equations to appear **in the given order**. If you give the equations in a different order, the rule is no longer consistent. In particular, if you wanted a 5th line, what would it be? There’s nothing clear about how to extend this solution.

**THE ANSWER IS 96**: Alternatively, you can multiply the two numbers on the left and add that product to the first number. This procedure is consistent with the first three lines, so the solution to line 4 must be 96:

The nice thing about this approach is that the solution is explicit, not recursive. What’s obviously counter-intuitive is why you would first multiply the given numbers, and then why you would add the result to the first number, not the second. This approach is consistent with the given information, so it is valid.

Unlike the first solution, this multiplicative approach is not commutative. By this rule, 1+4 yields 5, as shown, but 4+1 would be 4 + (4 × 1) = 8. Nothing in the problem statement required commutativity, so no worries.

Another good aspect of this algorithm is that the order of the equations is now irrelevant. It applies no matter what numbers are “added” on the left side of the equation. This is definitely more satisfying.

**CHANGE THE NUMBER BASE**

**THE ANSWER IS 201**: Knox noticed that if you changed the number base, you could find another legit pattern. The first line is standard arithmetic, but how could the next lines be consistent, too? You know 2+5 doesn’t give 12 in standard base-10 arithmetic, but if you use base-5, 2+5 = 12 because seven is written “12” in base-5. Unfortunately, in base-5, line 1 would be 1+4 = 10 and line 3 would be 3+6 = 14, both inconsistent. Knox’s cleverest move was to vary the number base. The 3rd line is true in base-4 (nine is written “21” in base-4); since the 1st line is true in any base larger than five, he found a consistent pattern by applying base-6 to line 1 and decreasing the base by one on each successive line:

Following this pattern, the next line would be base-3, giving 201 as the answer (nineteen is written “201” in base-3):

The best part of Knox’s solution is that he maintains the addition integrity of the left side. The downside is that this approach works for only one more line. Any 5th line would give a base-2 (binary) answer, and since base-1 does not exist, the problem would end there.

Knox’s approach also allows you to use any numbers you want for the left-hand sums. But notice that answers depend on where you write the sum. For example, if (2+5) was in any other line, you would not get 12. In line 1 (base-6), you’d get 11; in line 3 (base-4), you’d get 13.

**CREATE YOUR OWN SOLUTION**

By now, you should see that any rule could work so long as you are consistent. Because standard arithmetic does not apply, solvers should feel free to invoke any functions or algorithms desired. One way to do this is to think of each line as the inputs (left side) and output (right side) of a three-variable function.

**THE ANSWER IS 96**: One possible function is z = ax^2 + by^2 + c for some values of a, b, and c that passes through (1,4,5), (2,5,12), and (3,6,21). I used my TI-Nspire CAS to solve the resulting system: a = 1/3, b = 2/3, c = -6. That means if x and y are the given left-side numbers and z is the right-side answer, the equation z = (x^2 + 2y^2)/3 - 6 satisfies the first three lines, and the answer to line 4 is 96.

**THE ANSWER IS 2574/29**: If you can square the inputs, why not cube them? That means another possible function is z = ax^3 + by^3 + c. My CAS solution of the resulting system (a = -44/261, b = 35/261, c = -99/29) leads to the fractional answer 2574/29 for line 4:

The first three given equations essentially define three ordered triples–(1,4,5), (2,5,12), and (3,6,21)–so almost any equation you conceive with three unknown coefficients can be used to create a 3×3 system of equations. The fractional solution for line 4 may not be as satisfying as any of the earlier approaches using only integers, but these last two examples make it clear that there should be an infinite number of solutions.
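The claim is easy to check with a CAS. Here is a SymPy sketch under the assumption that the fitted families have the form z = a·x^p + b·y^p + c (that specific form is my reading of the two examples above):

```python
from sympy import symbols, solve

a, b, c = symbols('a b c')
triples = [(1, 4, 5), (2, 5, 12), (3, 6, 21)]   # the three given "sums"

def fit_and_predict(p):
    """Fit z = a*x**p + b*y**p + c to the triples, then evaluate at (8, 11)."""
    sol = solve([a*x**p + b*y**p + c - z for x, y, z in triples], [a, b, c])
    return sol[a]*8**p + sol[b]*11**p + sol[c]

print(fit_and_predict(2))   # 96
print(fit_and_predict(3))   # 2574/29
```

Every new exponent p (or any other three-parameter family) yields another internally consistent answer, which is the point of the section.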

These last two solutions are especially nice because they are explicit and don’t depend on the order of the given information. You can choose any two numbers to “add”, and the algorithms will work.

Notice also that all of these functions, except for Knox’s, are non-commutative. No worries, the problem already broke free of standard rules in line 2.

**ONE THAT DIDN’T WORK**

The last two examples prove the existence of quadratic and cubic solutions, so why not a linear solution? In other words, is there a 3D plane of the form z = ax + by + c containing the given points?

Unfortunately, the resulting 3×3 system didn’t solve. The determinant of the coefficient matrix is zero, suggesting an inconsistent or dependent system. Upon further inspection, subtracting line 1 from line 2 in the planar system gives a + b = 7. Similarly, subtracting line 2 from line 3 gives a + b = 9. Since both can’t be simultaneously true, the system is inconsistent and has no solution. It was worth the effort.

**CONCLUSION**

Since standard arithmetic didn’t apply after the first line and no other restrictions were in play, that opened the door to lots of creativity. The many different solutions to this problem all hinge on finding some function–any function–that satisfies the first three lines. Find one of these, and the last line is simple. That some attempts won’t work is no hindrance. Even when standard algorithms seem to apply, there is almost always the possibility of some creative twist when working with numerical sequences.

So, whenever you’re faced with a non-standard system, have fun, be creative, and develop something unexpected.


DO THIS YOURSELF.

Grab a small handful of coins (it doesn’t matter how many), randomly flip them onto a flat surface, and count the number of tails.

Randomly pull from the group, into a separate pile, a number of coins equal to the number of tails you just counted. Turn over every coin in this new pile.

Count the number of tails in each pile.

*You got the same number both times!*

**Why?**

Marilyn Vos Savant posed a similar problem:

Say that a hundred pennies are on a table. Ninety of them are heads. With your eyes closed, can you separate all the coins into two groups so that each group has the same number of tails?

Savant’s solution is to pull *any random 10 coins* from the 100 and make a second pile. Turn all the coins in the new pile over, *et voila*! Both piles have an equal number of tails.

While Savant’s approach is much more prescriptive than mine, both solutions work. Every time. **WHY**?

**THIS IS STRANGE:**

You have no idea the state (heads or tails) of any of the coins you pull into the second pile. It’s counterintuitive that the two piles could ever contain the same number of tails.

Also, flipping the coins in the new pile seems completely arbitrary, and yet after any random pull & flip, the two resulting piles **always** hold the same number of tails.

Enter the power (and for young people, the mystery) of algebra to generalize a problem, supporting an argument that holds for all possibilities simultaneously.

**HOW IT WORKS:**

The first clue to this is the misdirection in Savant’s question. Told that there are 90 *heads*, you are asked to make the number of *tails *equivalent. In both versions, the number of TAILS in the original pile is the number of coins pulled into the second pile. This isn’t a coincidence; it’s the key to the solution.

In any pile of randomly flipped coins (they needn’t be all or even part pennies), let **N** be the number of tails. Create your second pile by pulling a random N coins from the original pile and flipping every coin in it. If k of the N pulled coins happened to be tails, then the other N - k pulled coins were heads, and the original pile is left with N - k tails. Flipping the new pile turns its N - k heads into exactly N - k tails–the same count as the pile left behind.

Cool facts:

- You can’t say with certainty how many tails will be in both piles, but you know they will be the same.
- The total number of coins you start with is completely irrelevant.
- While the given two versions of the problem make piles with equal numbers of heads, this “trick” can balance heads or tails. To balance heads instead, pull from the initial coins into a second pile the number of heads. When you flip all the coins in the second pile, both piles will now contain the same number of heads.
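A quick simulation (mine, not from the post) confirms the invariant for arbitrary handfuls of coins:

```python
import random

def trick_preserves_tails(total: int) -> bool:
    # True represents tails. Flip a random handful of coins.
    coins = [random.choice([True, False]) for _ in range(total)]
    n_tails = sum(coins)
    random.shuffle(coins)                       # "randomly pull" coins
    pulled, kept = coins[:n_tails], coins[n_tails:]
    flipped = [not side for side in pulled]     # turn over every pulled coin
    return sum(flipped) == sum(kept)            # equal tails in both piles

# The trick never fails, regardless of the starting count.
assert all(trick_preserves_tails(random.randint(1, 100)) for _ in range(1000))
```

No run of the simulation can fail, which is the algebraic argument above in executable form.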

**A PARTY WONDER or SOLID PROBLEM FOR AN ALGEBRA CLASS:**

If you work on your showmanship, you can baffle others with this. For my middle school daughter, I counted off the “leave alone” pile and then flipped the second pile. I also let her flip the initial set of coins and tell me each time whether she wanted me to get equal numbers of heads or tails. I looked away as she shuffled the coins and pulled off the requisite number of coins without looking.

She’s figured out HOW I do it, but as she is just starting algebra, she doesn’t have the abstractness yet to fully generalize the big solution. She’ll get there.

I could see this becoming a fun data-gathering project for an algebra class. It would be cool to see how someone approaches this with a group of students.


GETTING STARTED

As a simple example, my students earlier had seen the graph of f(x) = 2sin(x) + 5 as a sine curve vertically stretched by a magnitude of 2 and then translated upward 5 units. In their return, I encouraged them to envision the function behavior dynamically instead of statically. I wanted them to see the curve (and the types of phenomena it could represent) as dynamic motion rather than a rigid transformation of a static curve. In that sense, the graph of *f* oscillated 2 units (the coefficient of sine in *f*‘s equation) above and below the line y = 5 (the addend in the equation for *f*). The curves y = 7 and y = 3 define the “Envelope Curves” for *f*.

When you graph *f* and its two envelope curves, you can picture the sinusoid “bouncing” between its envelopes. We called these ceiling and floor functions for *f*. Ceilings happen whenever the sinusoid term reaches its maximum value (+1), and floors when the sinusoidal term is at its minimum (-1).

Those envelope functions would be just more busy work if it stopped there, though. The great insights were that *anything* added to a sinusoid acts as its midline, AND *anything* multiplied by the sinusoid is its amplitude–the distance the curve moves above and below its midline. The fun comes when you start to allow variable expressions for the midline and/or amplitude.

VARIABLE MIDLINES AND ENVELOPES

For a first example, consider . By the reasoning above, is the midline. The amplitude, 1, is the coefficient of sine, so the envelope curves are (ceiling) and (floor).

That got their attention! Notice how easy it is to visualize the sine curve oscillating between its envelope curves.

For a variable amplitude, consider . The midline is , with an “amplitude” of . That made a ceiling of and a floor of , basically exponential decay curves converging on an end behavior asymptote defined by the midline.

SINUSOIDAL MIDLINES AND ENVELOPES

Now for even more fun. Convinced that both midlines and amplitudes could be variably defined, I asked what would happen if the midline was another sinusoid. For f(x) = sin(x) + cos(x), we could think of y = cos(x) as the midline, and with the coefficient of sine being 1, the envelopes are y = cos(x) + 1 and y = cos(x) - 1.

Since sine is also a sinusoid, you could get the same curve by considering y = sin(x) as the midline with envelopes y = sin(x) + 1 and y = sin(x) - 1. Only the envelope curves are different!

The curve raised two interesting questions:

- Was the addition of two sinusoids always another sinusoid?
- What transformations of sinusoidal curves could be defined by more than one pair of envelope curves?

For the first question, they theorized that if two sinusoids had the same period, their sum was another sinusoid of the same period, but with a different amplitude and a horizontal shift. Mathematically, that means

A*cos(x) + B*sin(x) = C*cos(x - D)

where A & B are the original sinusoids’ amplitudes, C is the new sinusoid’s amplitude, and D is the horizontal shift. Use the cosine difference identity to derive

C = sqrt(A^2 + B^2) and D = arctan(B/A).

For f(x) = sin(x) + cos(x), this means

sin(x) + cos(x) = sqrt(2) * cos(x - pi/4),

and the new coefficient means y = ±sqrt(2) is a third pair of envelopes for the curve.
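If you have a CAS handy, the rewrite is quick to verify symbolically. A SymPy sketch, assuming the form A·cos(x) + B·sin(x) = C·cos(x − D) with A = B = 1:

```python
from sympy import symbols, sin, cos, sqrt, pi, simplify, expand_trig

x = symbols('x')
lhs = sin(x) + cos(x)              # A = B = 1
rhs = sqrt(2) * cos(x - pi/4)      # C = sqrt(A^2 + B^2), D = arctan(B/A)

# Expanding the shifted cosine and simplifying leaves no difference.
assert simplify(expand_trig(lhs - rhs)) == 0
```

The same check works for any same-period pair by substituting the appropriate A, B, C, and D.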

Very cool. We explored several more sums and differences with identical periods.

WHAT HAPPENS WHEN THE PERIODS DIFFER?

Try a graph of .

Using the earlier concept that any function added to a sinusoid could be considered the midline of the sinusoid, we can picture the graph of *g* as the graph of oscillating around an oscillating midline, :

If you can’t see the oscillations yet, the coefficient of the term is 1, making the envelope curves . The next graph clearly shows the curve bouncing off its ceiling and floor as defined by its envelope curves.

Alternatively, the base sinusoid could have been with envelope curves .

Similar to the last section where we added two sinusoids with the same period, the sum of two sinusoids with different periods (but the same amplitude) can be rewritten using an identity: sin(A) + sin(B) = 2 * sin((A+B)/2) * cos((A-B)/2).

This can be proved in the present form, but it is lots easier to prove from an equivalent form:

sin(u + v) + sin(u - v) = 2 * sin(u) * cos(v).

For the current function, this means .

Now that the sum has been rewritten as a product, we can use the coefficient as the amplitude, defining two other pairs of envelope curves. If is the sinusoid, then are envelopes of the original curve, and if is the sinusoid, then are envelopes.

In general, I think it’s easier to see the envelope effect with the larger-period function. A particularly nice application of adding sinusoids with identical amplitudes and different periods is the beats musicians hear from the constructive and destructive sound-wave interference of two instruments close to, but not quite, in tune. The points where the envelopes cross on the x-axis are the quiet points in the beats.

A STUDENT WANTED MORE

In class last Friday, my students were reviewing envelope curves in advance of our final exam when one made the next logical leap and asked what would happen if both the coefficients and periods were different. When I mentioned that the exam wouldn’t go that far, she uttered a teacher’s dream proclamation: She didn’t care. She wanted to learn anyway. Making up some coefficients on the spot, we decided to explore .

Assuming for now that the cos(2x) term is the primary sinusoid, the envelope curves are .

That was certainly cool, but at this point, we were no longer satisfied with just one answer. If we assumed sin(x) was the primary sinusoid, the envelopes are .

Personally, I found the first set of envelopes more satisfying, but it was nice that we could so easily identify another.

With the different periods, even though the coefficients are different, we decided to split the original function in a way that allowed us to use the identity introduced earlier. Rewriting,

.

After factoring out the common coefficient 2, the first two terms now fit the identity with and , allowing the equation to be rewritten as

.

With the expression now containing three sinusoidal expressions, there are three more pairs of envelope curves!

Arguably, the simplest approach from this form assumes the 3cos(2x) term is the sinusoid, giving (the pre-identity form three equations earlier in this post) as envelopes.

We didn’t go there, but recognizing that new envelopes can be found simply by rewriting sums creates an infinite number of additional envelopes. Defining these different sums with a slider lets you see an infinite spectrum of envelopes. The image below shows one. Here is the Desmos Calculator page that lets you play with these envelopes directly.

If the term was the sinusoid, the envelopes would be . If you look closely, you will notice that this is a different type of envelope pair with the ceiling and floor curves crossing and trading places at and every units before and after. The third form creates another curious type of crossing envelopes.

CONCLUSION:

In all, it was fun to explore with my students the many possibilities for bounding sinusoidal curves. It was refreshing to have one student excited by just playing with the curves to see what else we could find for no other reason than just to enjoy the beauty of these periodic curves. As I reflected on the overall process, I was even more delighted to discover the infinite spectrum of envelopes modeled above on Desmos.

I hope you’ve found something cool here for yourself.


Don’t read further until you’ve tried this for yourself. It’s a fun problem that, at least from my experience, doesn’t end up where or how I thought it would.

**INITIAL THOUGHTS**

I see two big challenges here.

First, the missing location of point P is especially interesting, but is also likely to be quite vexing for many students. This led me to the first twist I found in the problem: the introduction of multiple variables and a coordinate system. Without some problem-solving experience, I don’t see that as an intuitive step for most middle school students. Please don’t interpret this as a knock on this problem; I’m simply agreeing with @Five_Triangle’s assessment that this problem is likely to be challenging for middle school students.

The second challenge I found emerged from the introduction of the coordinate system: an underlying 2×2 system of equations. There are multiple ways to tackle a solution to a linear system, but this strikes me as yet another high hurdle for younger students.

Finally, I’m a bit surprised by my current brain block on multiple approaches for this problem. I suspect I’m blinded here by my algebraic bias in problem solving; surely there are approaches that don’t require this. I’d love to hear any other possibilities.

**POINT P VARIES**

Because I was given properties of point P and not its location, the easiest approach I could see was to position the square on the xy-plane with point B at the origin, one side along the y-axis, and an adjacent side along the x-axis. That gave my point P coordinates (x,y) for some unknown values of x & y.

The helpful part of this orientation is that the x & y coordinates of P are automatically the altitudes of the triangles with bases on the y-axis and x-axis, respectively. The altitudes of the other two triangles are determined through subtraction.

**AREA RATIOS BECOME A LINEAR SYSTEM**

From here, I used the given ratios to establish one equation in terms of x & y.

Of course, since all four triangles have the same base lengths, the given area ratios are arithmetically equivalent to corresponding height ratios. I used that to write a second equation.

Simplifying terms and clearing denominators leads to 4x = 36 - 3y and x + 3y = 12, respectively.

A VERY INTERESTING insight at this point is that there is an infinite number of locations within the square at which each ratio is true. Specifically, the ratio is true everywhere along the line 4x=36-3y. This problem constrains us to only the points within the square with vertices (0,0), (12,0), (12,12), and (0,12), but setting that aside, anywhere along the line 4x=36-3y would satisfy the first constraint. The same is true for the second line and constraint.

**I think it would be very interesting for students to construct this on dynamic geometry software (e.g., GeoGebra or the TI-Nspire) and see the ratio remain constant everywhere along either line even though the triangle areas vary throughout.**

Together, these lines form a 2×2 system of linear equations, and the single point satisfying both ratios is the intersection of the two lines. There are lots of ways to solve such a system; I wonder how a typical 6th grader would tackle it. Assuming they have the algebraic expertise, I’d have them work the system by hand and confirm with a CAS.
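For readers who want to confirm the intersection point, here’s a quick Python sketch (not part of the original post). The first equation, 4x=36-3y, is stated in this post; the second is assumed here to be x+3y=12, which matches the intercepts (12,0) and (0,4) given in the extensions below.

```python
from fractions import Fraction

# Substitute x = 12 - 3y (from the assumed second line) into 4x = 36 - 3y:
# 4(12 - 3y) = 36 - 3y  ->  48 - 12y = 36 - 3y  ->  9y = 12  ->  y = 4/3
y = Fraction(12, 9)
x = 12 - 3 * y

print(x, y)                           # point P
assert 4 * x == 36 - 3 * y            # P lies on the first line
assert x + 3 * y == 12                # and on the assumed second line
assert 0 <= x <= 12 and 0 <= y <= 12  # P is inside the 12x12 square
```

Under these assumptions, P lands at (8, 4/3), comfortably inside the square.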

The question asks for the area of .

**PROBLEM VARIATIONS**

Just two extensions this time. Other suggestions are welcome.

**What’s the ratio of the two triangles’ areas at the point P that satisfies both given ratios?** It’s not 1:4, as an errant student might conclude from misapplying the transitive property to the given ratios. Can you show that it’s actually 1:8?

**If a random point is chosen within the square, is that point more likely to satisfy the first area ratio or the second?**

The first ratio is satisfied by the line 4x=36-3y, which intersects the square on the segment between (9,0) and (0,12). At the latter point, both triangles are degenerate with area 0. The second ratio’s line intersects the square between (12,0) and (0,4). As the first segment is longer (how would a middle schooler prove that?), a randomly chosen point is more likely to satisfy the first ratio. This would be a challenging probability problem, methinks.
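A quick Python check of the two segment lengths, sketched from the endpoints given above (not part of the original post):

```python
from math import hypot

# Length of each line's segment inside the square (endpoints as given in the text)
first = hypot(9 - 0, 0 - 12)   # (9,0) to (0,12): sqrt(81 + 144) = 15
second = hypot(12 - 0, 0 - 4)  # (12,0) to (0,4): sqrt(144 + 16) = sqrt(160)

print(first, second)
assert first > second  # the first ratio's segment is longer
```

A middle schooler could skip the square roots entirely by comparing 9²+12²=225 with 12²+4²=160.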

**FURTHER EXTENSIONS?**

What other possibilities do you see either for a solution to the original problem or an extension?


The problem requires a little stamina, but it can be approached in many ways–two excellent criteria for worthy student explorations. That it has some solid extensions makes it even better. Following are a few different solution approaches some colleagues and I created.

**INITIAL THOUGHTS, VISUAL ORGANIZATION, & A SOLUTION**

The most challenging part of this problem is data organization. My first thoughts were for a 2-circle Venn Diagram–one for gender and one for age. And these types of Venn Diagrams are often more easily understood, in my experience, in 2×2 Table form with extra spaces for totals. Here’s what I set up initially.

The ratio of Women:Girls was 11:4, so the 24 girls meant each “unit” in this ratio accounted for 24/4=6 people. That gave 11*6=66 women and 66+24=90 females.

At this point, my experience working algebraic problems tempted me to overthink the situation: I wanted to let B represent the unknown number of boys and set up some equations to solve. Knowing that most 6th graders would not think in terms of variables, I held back that instinct in an attempt to discover what a less-experienced mind might try. My initial algebra solution appears below.

The 5:3 Male:Female ratio told me that each “gender unit” represented 90/3=30 people. That meant there were 5*30=150 males and 240 total people at the party.

Then, the 4:1 Adult:Children ratio showed how to age-divide every group of 5 partygoers. With 240/5=48 such groups, there were 48 children and 4*48=192 adults. Subtracting the already known 66 women gave the requested answer: 192-66=126 men.
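The unit-ratio steps above can be compressed into a short Python sketch (my own summary of the arithmetic, not anything from the original post’s images):

```python
girls = 24

# Women:Girls = 11:4, so each ratio "unit" is 24/4 = 6 people
unit = girls // 4
women = 11 * unit            # 66
females = women + girls      # 90

# Male:Female = 5:3, so each "gender unit" is 90/3 = 30 people
gender_unit = females // 3
males = 5 * gender_unit      # 150
total = males + females      # 240

# Adult:Child = 4:1 age-divides every group of 5 partygoers
groups = total // 5          # 48 groups
children = groups            # 48
adults = 4 * groups          # 192

men = adults - women
boys = children - girls
print(men, boys)  # 126 men, 24 boys
assert men + boys == males
```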

While this Venn Diagram/Table approach made sense to me, I was concerned that it was a moderately sophisticated and not quite intuitive problem-solving technique for younger middle school students.

**WHAT WOULD A MIDDLE SCHOOLER THINK?**

A middle school teaching colleague, Becky, offered a different solution I could see students creating.

Completely independently, she solved the problem in exactly the same order I did using ratio tables to manage the scaling at each step instead of my “unit ratios”. I liked her visual representation of the 4:1 Adults:Children ratio to find the number of adults, which gave the requested number of men. I suspect many more students would implicitly or explicitly use some chunking strategies like the visual representation to work the ratios.

**WHY HAVE JUST ONE SOLUTION?**

Math problems involving ratios can usually be opened up to allow multiple, or even an infinite number of solutions. This leads to some interesting problem extensions if you eliminate the “24 girls” restriction. Here are a few examples and sample solutions.

**What is the least number of partygoers?**

For this problem, notice from the table above that all of the values have a common factor of 6. Dividing the total partygoers by this reveals that 240/6=40 is the least number. Any multiple of this number is also a legitimate solution.

Interestingly, the 11:4 Women:Girls ratio becomes explicit when you scale the table down to its least common value.

My former student and now colleague, Teddy, arrived at this value another way. Paraphrasing, he noted that the 5:3 Male:Female ratio meant any valid total had to be a multiple of 5+3=8. Likewise, the 4:1 Adult:Child ratio requires totals to be multiples of 4+1=5. And the LCM of 8 & 5 is 40, the same value found in the preceding paragraph.
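Teddy’s divisibility argument translates directly into code; here’s a small sketch (not from the original post):

```python
from math import gcd

# Male:Female = 5:3 forces any total to be a multiple of 5 + 3 = 8;
# Adult:Child = 4:1 forces any total to be a multiple of 4 + 1 = 5.
lcm = 8 * 5 // gcd(8, 5)
print(lcm)  # 40, the least possible number of partygoers
assert 240 % lcm == 0  # the original 240-person party is a multiple of 40
```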

**What do all total partygoer numbers have in common?**

As explained above, any multiple of 40 is a legitimate number of partygoers.

**If the venue could support no more than 500 attendees, what is the maximum number of women attending?**

12*40=480 is the greatest multiple of 40 below 500. Because 480 is double the initial problem’s total, 66*2=132 is the maximum number of women.

Note that this can be rephrased to accommodate any other gender/age/total target.

**Under the given conditions, will the number of boys and girls at the party ever be identical?**

As with all ratio problems, larger solutions are always multiples of the least common solution. That means the numbers of boys and girls at the party will either always be identical or always be different. Scaling the table down to the minimal party of 40 gives 4 boys and 4 girls, so under the given conditions the two counts are always identical, with both being multiples of 4.

What variations can you and/or your students create?

**RESOLVING THE INITIAL ALGEBRA**

Now to the solution variation I was initially inclined to produce. After determining 66 women from the given 24 girls, let B be the unknown number of boys. That gives B+24 children. It was given that adults are 4 times as numerous as children, making the number of adults 4(B+24)=4B+96. Subtracting the known 66 women leaves 4B+30 men. Compiling all of this gives

The 5:3 Male:Female ratio means (5B+30)/90 = 5/3. Solving gives B=24 boys and therefore 4B+30=126 men, the same result as earlier.
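That last step can be confirmed without a CAS. Here’s a brute-force Python sketch built from the men = 4B+30 and females = 90 values derived above (my own check, not the post’s):

```python
# men = 4B + 30, so males = (4B + 30) + B = 5B + 30, while females = 66 + 24 = 90.
# The 5:3 Male:Female ratio then requires 3(5B + 30) = 5 * 90.
B = next(B for B in range(200) if 3 * (5 * B + 30) == 5 * 90)
men = 4 * B + 30
print(B, men)  # 24 boys, 126 men
```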

**ALGEBRA OVERKILL**

Winding through all of that algebra ultimately isn’t that computationally difficult, but it certainly is more than typical 6th graders could handle.

But the problem could be generalized even further, as Teddy shared with me. If the entire table were written in variables with W=number of women, M=men, G=girls, and B=boys, the given ratios in the problem would lead to a reasonably straightforward 4×4 system of equations. If you understand enough to write all of those equations, I’m certain you could solve them, so I’d feel confident allowing a CAS to do that for me. My TI-Nspire gives this.
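The Nspire screenshot didn’t survive this copy, so here’s a comparable exact solve sketched in Python, using my reading of the four conditions (Male:Female = 5:3, Adult:Child = 4:1, Women:Girls = 11:4, and 24 girls); treat it as an illustration, not Teddy’s actual equations:

```python
from fractions import Fraction

def solve(A, b):
    """Tiny exact Gauss-Jordan elimination over the rationals."""
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(c)] for row, c in zip(A, b)]
    for i in range(n):
        p = next(r for r in range(i, n) if M[r][i] != 0)  # find a pivot row
        M[i], M[p] = M[p], M[i]
        M[i] = [v / M[i][i] for v in M[i]]                # normalize pivot row
        for r in range(n):
            if r != i and M[r][i] != 0:
                f = M[r][i]
                M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    return [row[n] for row in M]

# Unknowns ordered (M, W, B, G); the four conditions as linear equations:
#   3(M+B) = 5(W+G)   Male:Female = 5:3
#   M + W  = 4(B+G)   Adult:Child = 4:1
#   4W     = 11G      Women:Girls = 11:4
#   G      = 24
A = [[3, -5, 3, -5],
     [1, 1, -4, -4],
     [0, 4, 0, -11],
     [0, 0, 0, 1]]
b = [0, 0, 0, 24]
men, women, boys, girls = solve(A, b)
print(men, women, boys, girls)  # 126 66 24 24
```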

And that certainly isn’t work you’d expect from any 6th grader.

**CONCLUSION**

Given that the 11:4 Women:Girls ratio was the only “internal” ratio, it was apparent in retrospect that all solutions except the 4×4 system approach had to find the female values first. There are still several ways to resolve the problem, but I found it interesting that while there was no “direct route”, every reasonable solution started with the same steps.

Thanks to colleagues Teddy S & Becky M for sharing their solution proposals.


**1 – Accept the function as posed and differentiate implicitly.**

Which gives at (x,y)=(0,4).

**2 – Solve for y and differentiate explicitly.**

Evaluating this at (x,y)=(0,4) gives .

**3 – Substitute early.**

The question never asked for an algebraic expression of dy/dx, only the numerical value of this slope. Because students tend to make more silly mistakes manipulating algebraic expressions than numeric ones, the additional algebra steps are unnecessary and potentially error-prone. Admittedly, the manipulations are pretty straightforward here; in more algebraically complicated cases, early substitution can significantly simplify the work. Using approach #1 and substituting directly into the second line gives

.

At (x,y)=(0,4), this is

The numeric manipulations on the right side are obviously easier than the earlier algebra.

**4 – Solve for dx/dy and reciprocate.**

There’s nothing sacred about solving for dy/dx directly. Why not compute the derivative of the inverse and reciprocate at the end? Differentiating first with respect to y eventually leads to the same solution.

At (x,y)=(0,4), this is

, so

.

I sometimes wonder if teachers should place much more emphasis on equivalence. We spend so much time manipulating expressions in mathematics classes at all levels, changing mathematical objects (shapes, expressions, equations, etc.) into different, but equivalent, objects. Many times, these manipulations are completed under the guise of “simplification.” (Here is a brilliant Dan Teague post cautioning against taking this idea too far.)

But it is critical for students to recognize that proper application of manipulations creates equivalent expressions, **even when the resulting expressions don’t look the same.** The reason we manipulate mathematical objects is to discover features of the object in one form that may not be immediately obvious in another.

For the function , the slope at (0,4) must be the same, no matter how that slope is calculated. If you get a different looking answer while using correct manipulations, the final answers must be equivalent.

A similar question appeared on the AP Calculus email listserv almost a decade ago, right at the moment I was introducing implicit differentiation. A teacher had tried to find dy/dx for

using implicit differentiation on the quotient, manipulating to a product before using implicit differentiation, and finally solving for y in terms of x to use an explicit derivative.

**1 – Implicit on a quotient**

Take the derivative as given:

**2 – Implicit on a product**

Multiplying the original equation by its denominator gives

.

Differentiating with respect to x gives

**3 – Explicit**

Solving the equation at the start of method 2 for y gives

.

Differentiating with respect to x gives

**Equivalence**

Those 3 forms of the derivative look VERY DIFFERENT. Assuming no errors in the algebra, they MUST be equivalent because they are nothing more than the same derivative of different forms of the same function, and a function’s rate of change doesn’t vary just because you alter the look of its algebraic representation.

Substituting the y-as-a-function-of-x equation from method 3 into the first two derivative forms converts all three into functions of x. Lots of by-hand algebra or a quick check on a CAS establishes the suspected equivalence. Here’s my TI-Nspire CAS check.
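Since the original function and the CAS screenshot didn’t survive this copy, here’s the same equivalence idea sketched in Python with a hypothetical stand-in equation, x·y + y = 5; the specific function is mine, not the post’s:

```python
# Hypothetical stand-in (the post's actual function was an image): x*y + y = 5,
# so explicitly y = 5/(x+1).  Implicit differentiation of x*y + y = 5 gives
# y + x*y' + y' = 0, i.e. y' = -y/(x+1); explicit differentiation gives
# y' = -5/(x+1)**2.  Spot-check that the two forms agree numerically.
for x in [0.0, 0.5, 2.0, -3.0, 10.0]:
    y = 5 / (x + 1)
    implicit = -y / (x + 1)
    explicit = -5 / (x + 1) ** 2
    assert abs(implicit - explicit) < 1e-12, (x, implicit, explicit)
print("both derivative forms agree at every test point")
```

The same numeric spot-checking works on any pair of derivative forms you suspect are equivalent, even when a CAS isn’t handy.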

Here’s the form of this investigation I gave my students.

I’m not a big fan of memorizing anything without a VERY GOOD reason. My teachers telling me to do so never held much weight with me. I memorized as little as possible and used that information as long as I could until a scenario arose to convince me to memorize more. One thing I managed to avoid almost completely was the set of annoying derivative formulas for inverse trig functions.

For example, find the derivative of at .

Since arc-trig functions annoy me, I always rewrite them. Taking the sine of both sides and then differentiating with respect to x gives

I could rewrite this equation to give , a perfectly reasonable form of the derivative, albeit as a less-common expression in terms of y. But I don’t even do that unnecessary algebra. From the original function, , and I substitute that immediately after the differentiation step to give a much cleaner numeric route to my answer.

And this is the same result as plugging into the memorized form of the derivative of arcsine. If you like memorizing, go ahead, but my mind remains more nimble and less cluttered.
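The worked example above is also lost to an image, so here’s the rewrite-instead-of-memorize technique sketched in Python with a stand-in, y = arcsin(x) at x = 1/2 (my choice of example, not necessarily the post’s):

```python
from math import asin, cos, sqrt

# Hypothetical stand-in for the elided example: y = arcsin(x) at x = 1/2.
x = 0.5
y = asin(x)  # y = pi/6

# Rewrite as sin(y) = x; differentiate: cos(y) * dy/dx = 1, so dy/dx = 1/cos(y).
no_memorizing = 1 / cos(y)

# The memorized arcsine formula, 1/sqrt(1 - x^2), gives the same number.
memorized = 1 / sqrt(1 - x**2)
assert abs(no_memorizing - memorized) < 1e-12
print(no_memorizing)  # 2/sqrt(3), approximately 1.1547
```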

One final equivalent approach would have been differentiating with respect to y and reciprocating at the end.

There are MANY ways to compute derivatives. For any problem or scenario, use the one that makes sense or is computationally easiest for YOU. If your resulting algebra is correct, you know you have a correct answer, even if it looks different. Be strong!


My class started a unit on sampling when we returned in January. They needed to understand how larger sample sizes tended to shrink standard deviations, but I didn’t want to just give them the formula

σ_x̄ = σ/√n.

I know many teachers introduce this relationship by selecting samples with perfect square sizes so students can watch the standard deviations of the sample means shrink by integer factors (quadruple the sample size = halve the standard deviation; multiply the sample size by 9 = the standard deviation divides by 3; etc.), but I didn’t want to exert that much control. My students had explored data-straightening techniques in the fall and were used to sampling and simulations, so I wanted to see how successfully they could leverage that background to “discover” the sample standard deviation relationship.

My AP Statistics students use TI Nspire CAS software on their laptops, so I wrote their lab using that technology. The lab could easily be adapted to whatever statistics technology you use in your class. You can download a pdf of my lab here.

**LAB RESULTS AND REFLECTIONS**

The activity drew samples from a normal distribution for which students were able to define their own means and standard deviations. Students could choose any values, but those who chose integers tended to make the later connections more easily.

Their first step was to draw 2500 different random samples of each of the sizes n=1, 4, 10, 25, 50, 100. From each resulting 2500-point collection of sample means, students computed the mean and standard deviation. In retrospect, I should have let students select all or most of their own sample sizes, but I’m still quite satisfied with the results. If you do experiment with different sample sizes, definitely run the larger potential sizes on your technology first to check computation times.
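Here’s a rough Python re-creation of the lab’s simulation and curve-straightening steps (a sketch only: the actual lab ran on TI-Nspire software, and the μ=100, σ=15 values here are my own stand-ins, not any student’s choices):

```python
import math
import random

random.seed(1)
MU, SIGMA, REPS = 100.0, 15.0, 2500  # stand-in population parameters

sizes, sds = [1, 4, 10, 25, 50, 100], []
for n in sizes:
    # 2500 sample means for samples of size n drawn from N(MU, SIGMA)
    means = [sum(random.gauss(MU, SIGMA) for _ in range(n)) / n for _ in range(REPS)]
    m = sum(means) / REPS
    sds.append(math.sqrt(sum((v - m) ** 2 for v in means) / (REPS - 1)))

# Log-log (curve-straightening) least-squares fit: log SD = b*log SS + a
xs = [math.log(n) for n in sizes]
ys = [math.log(s) for s in sds]
xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

# Re-curving: SD = exp(a) * SS**b, so exp(a) should recover SIGMA and b should be near -1/2
print(round(b, 3), round(math.exp(a), 2))
```

The fitted exponent lands very close to -0.5 and the coefficient very close to the population standard deviation, mirroring what the lab asks students to notice.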

One student chose her own population mean and standard deviation. Her resulting sample means and standard deviations are

It was pretty obvious to her that no matter what the sample size, the mean of her sample means stayed essentially at her chosen population mean, but the standard deviations were shrinking as the sample sizes grew. Determining that relationship was the heart of the activity. The sample size (SS) clearly seemed to drive the sample standard deviation (SD), so my student graphed her (SS, SD) data to get

We had explored bivariate data-straightening techniques at the end of the fall semester, so she tried semi-log and log-log transformations to check for the possibilities that these data might be represented by an exponential or power function, respectively. Her semi-log transformation was still curved, but the log-log was very straight. That transformation and its accompanying linear regression are below.

Her residuals were small, balanced, and roughly random, so she knew she had a reasonable fit. From there, she used her CAS to transform (re-curve) the linear regression back to an equation for the original data.

It made sense that this resulting formula depended not only on the sample size, but also on the population standard deviation my student had chosen earlier. Within reasonable round-off deviations, the numerator appeared to be the population standard deviation, and the exponent in the denominator was very close to 0.5, indicating a square root. That gave her the expected sample standard deviation formula, σ_x̄ = σ/√n.

I know this formula is provided on the AP Statistics Exam, but the simulation, curve straightening, linear regression, and statistical confirmation of the formula were a great review and exercise. I hope you find it useful, too.
