
Computers vs. People: Writing Math

Readers of this ‘blog know I actively use many forms of technology in my teaching and personal explorations.  Yesterday, a thread started on the AP-Calculus community discussion board with some expressing discomfort that most math software accepts sin(x)^2 as an acceptable equivalent to the “traditional” handwritten sin^2 (x).

From Desmos:

Some AP readers spoke up to declare that sin(x)^2 would always be read as sin(x^2).  While I can’t speak to the veracity of that claim, I found it a bit troubling because it misses some very real difficulties users face when translating between paper- and computer-based versions of math expressions.  Following is an edited version of my response to the AP Calculus discussion board.


I believe there’s something at the core of all of this that isn’t being explicitly named:  The differences between computer-based 1-dimensional input (left-to-right text-based commands) vs. paper-and-pencil 2-dimensional input (handwritten notation moves vertically–exponents, limits, sigma notation–and horizontally).  Two-dimensional traditional math writing simply doesn’t convert directly to computer syntax.  Computers are a brilliant tool for mathematics exploration and calculation, but they require a different type of input formatting.  To overlook and not explicitly name this for our students leaves them in the unenviable position of trying to “creatively” translate between two types of writing with occasional interpretation differences.

Our students are unintentionally set up for this confusion when they first learn about the order of operations–typically in middle school in the US.  They learn the sequencing:  parentheses, then exponents, then multiplication & division, and finally addition and subtraction.  Notice that functions aren’t mentioned here.  This thread [on the AP Calculus discussion board] has helped me realize that all or almost all of the sources I routinely reference never explicitly redefine the order of operations after the introduction of the function concept and notation.  That means our students are left with the insidious and oft-misunderstood PEMDAS (or BIDMAS in the UK) as their sole guide for operation sequencing.  When they encounter squaring or reciprocating or any other operations applied to function notation, they’re stuck trying to make sense of this new dissonance in their old notation and creating their own interpretations.  This is easily evidenced by the struggles many have when inputting computer expressions requiring lots of nested parentheses or when first trying to code in LaTeX.

While the sin(x)^2 notation is admittedly uncomfortable for traditional “by hand” notation, it is 100% logical from a computer’s perspective:  evaluate the function, then square the result.

We also need to recognize that part of the fault for the confusion here lies in the by-hand notation.  What we traditionalists understand by the notational convenience of sin^2(x) on paper is technically incorrect.  We know what we MEAN, but the notation implies an incorrect order of computation.  The computer notation of sin(x)^2 is actually closer to the truth.

I particularly like the way the TI-Nspire CAS handles this point.  As is often the case with this software, it accepts computer input (next image), while its output converts it to the more commonly understood written WYSIWYG formatting (2nd image below).



Further recent (?) development:  Students have long struggled with the by-hand notation of sin^2(x) needing to be converted to (sin(x))^2 for computers.  Personally, I’ve always liked both because the computer notation emphasizes the squaring of the function output while the by-hand version is a notational convenience.  My students pointed out to me recently that Desmos now accepts the sin^2(x) notation while TI Calculators still do not.

From Desmos:

The enhancement of WYSIWYG computer input formatting means that some of the differences between 2-dimensional handwriting and computer inputs are narrowing.  At the same time, common classroom technologies no longer accept the same linear formatting as each other, but then that was possibly always the case….

To rail against the fact that many software packages interpret sin(x)^2 as (sin(x))^2 or sin^2(x) misses the point that 1-dimensional computer input is not necessarily the same as 2-dimensional paper writing.  We don’t complain when two human speakers misunderstand each other when they speak different languages or dialects.  Instead, we should focus on what each is trying to say and learn how to communicate clearly and efficiently in both venues.

In short, “When in Rome, …”.

Roots of Complex Numbers without DeMoivre

Finding roots of complex numbers can be … complex.

This post describes a way to compute roots of any number–real or complex–via systems of equations without any conversions to polar form or use of DeMoivre’s Theorem.  Following a “traditional approach,” one non-technology example is followed by a CAS simplification of the process.


Most sources describe the following procedure to compute the roots of complex numbers (obviously including the real number subset).

  • Write the complex number whose root is sought in generic polar form.  If necessary, convert from Cartesian form.
  • Invoke DeMoivre’s Theorem to get the polar form of all of the roots.
  • If necessary, convert the numbers from polar form back to Cartesian.

As a very quick example,

Compute all square roots of -16.

Rephrased, this asks for all complex numbers, z, that satisfy  z^2=-16.  The Fundamental Theorem of Algebra guarantees two solutions to this quadratic equation.

The complex Cartesian number, -16+0i, converts to polar form, 16cis( \pi ), where cis(\theta ) = cos( \theta ) +i*sin( \theta ).  Unlike Cartesian form, polar representations of numbers are not unique, so any full rotation from the initial representation would be coincident, and therefore equivalent if converted to Cartesian.  For any integer n, this means

-16 = 16cis( \pi ) = 16 cis \left( \pi + 2 \pi n \right)

Invoking DeMoivre’s Theorem,

\sqrt{-16} = (-16)^{1/2} = \left( 16 cis \left( \pi + 2 \pi n \right) \right) ^{1/2}
= 16^{1/2} * cis \left( \frac{1}{2} \left( \pi + 2 \pi n \right) \right)
= 4 * cis \left( \frac{ \pi }{2} + \pi * n \right)

For n= \{ 0, 1 \} , this gives polar solutions, 4cis \left( \frac{ \pi }{2} \right) and 4cis \left( \frac{ 3 \pi }{2} \right) .  Each can be converted back to Cartesian form, giving the two square roots of -16:   4i and -4i .  Squaring either gives -16, confirming the result.
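The polar arithmetic above is easy to spot-check numerically.  Here is a quick sketch using Python’s built-in cmath module (my own illustration, not part of the original by-hand work):

```python
import cmath

# The two square roots of -16 from DeMoivre: 4·cis(π/2 + π·n) for n = 0, 1
roots = [4 * cmath.exp(1j * (cmath.pi / 2 + cmath.pi * n)) for n in range(2)]

# Squaring either root should recover -16 (up to floating-point error)
checks = [abs(z**2 - (-16)) < 1e-9 for z in roots]
```

Both checks pass, and the two roots agree with the 4i and -4i found above.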

I’ve always found the rotational symmetry of the complex roots of any number beautiful, particularly for higher order roots.  This symmetry is perfectly captured by DeMoivre’s Theorem, but there is arguably a simpler way to compute them.


Because the solution to every complex number computation can be written in a+bi form, new possibilities open.  The original example can be rephrased:

Determine the simultaneous real values of x and y for which -16=(x+yi)^2.

Start by expanding and simplifying the right side back into a+bi form.  (I wrote about a potentially easier approach to simplifying powers of i in my last post.)

-16+0i = \left( x+yi \right)^2 = x^2 +2xyi+y^2 i^2=(x^2-y^2)+(2xy)i

Notice that the two ends of the previous line are two different expressions for the same complex number(s).  Therefore, equating the real and imaginary coefficients gives a system of equations:


Solving the system gives the square roots of -16.

From the latter equation, either x=0 or y=0.  Substituting y=0 into the first equation gives -16=x^2, an impossible equation because x & y are both real numbers, as stated above.

Substituting x=0 into the first equation gives -16=-y^2, leading to y= \pm 4.  So, x=0 and y=-4 -OR- x=0 and y=4 are the only solutions–x+yi=0-4i and x+yi=0+4i–the same solutions found earlier, but this time without using polar form or DeMoivre!  Notice, too, that the presence of TWO solutions emerged naturally.
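The case analysis can be mirrored in a few lines of plain Python (a verification sketch only; the by-hand reasoning above is the point):

```python
# From 2xy = 0, either x = 0 or y = 0.
# y = 0 forces x^2 = -16, impossible for real x, so only x = 0 survives.
solutions = []
for y in (4.0, -4.0):                             # candidates from -y^2 = -16
    x = 0.0
    real_ok = abs((x**2 - y**2) - (-16)) < 1e-9   # real parts match
    imag_ok = abs(2 * x * y) < 1e-9               # imaginary parts match
    if real_ok and imag_ok:
        solutions.append(complex(x, y))
```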

Higher order roots could lead to much more complicated systems of equations, but a CAS can solve that problem.


Determine all fourth roots of 1+2i.

That’s equivalent to finding all simultaneous x and y values that satisfy 1+2i=(x+yi)^4.  Expanding the right side is quickly accomplished on a CAS.  From my TI-Nspire CAS:


Notice that the output is simplified to a+bi form that, in the context of this particular example, gives the system of equations,
1 = x^4-6x^2 y^2+y^4 and 2 = 4x^3 y-4x y^3


Using my CAS to solve the system,


First, note there are four solutions, as expected.  Rewriting the approximated numerical output gives the four complex fourth roots of 1+2i:  -1.176-0.334i, -0.334+1.176i, 0.334-1.176i, and 1.176+0.334i.  Each can be quickly confirmed on the CAS:
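Without a CAS at hand, the four roots can also be sanity-checked with Python’s built-in complex arithmetic (my own sketch, not the Nspire output):

```python
# Principal fourth root of 1+2i; the other roots are rotations by i (90°)
w = 1 + 2j
principal = w ** 0.25
roots = [principal * 1j**k for k in range(4)]

# Raising each candidate to the 4th power should return 1+2i
checks = [abs(z**4 - w) < 1e-9 for z in roots]
```

The principal root matches the 1.176+0.334i value above, and all four checks pass.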



Given proper technology, finding the multiple roots of a complex number need not invoke polar representations or DeMoivre’s Theorem.  It really is as “simple” as expanding (x+yi)^n where n is the given root, simplifying the expansion into a+bi form, and solving the resulting 2×2 system of equations.

At the point when such problems would be introduced to students, their algebraic awareness should be such that using a CAS to do all the algebraic heavy lifting is entirely appropriate.

As one final glimpse at the beauty of complex roots, I entered the two equations from the last system into Desmos to take advantage of its very good implicit graphing capabilities.  You can see the four intersections corresponding to the four solutions of the system.  Solutions to systems of implicit equations are notoriously difficult to compute, so I wasn’t surprised when Desmos didn’t compute the coordinates of the points of intersection, even though the graph was pretty and surprisingly quick to generate.


Probability, Polynomials, and Sicherman Dice

Three years ago, I encountered a question on the TI-Nspire Google group asking if there was a way to use CAS to solve probability problems.  The ideas I pitched in my initial response and follow-up a year later (after first using it with students in a statistics class) have been thoroughly re-confirmed in my first year teaching AP Statistics.  I’ll quickly re-share them below before extending the concept with ideas I picked up a couple weeks ago from Steve Phelps’ session on Probability, Polynomials, and CAS at the 64th annual OCTM conference earlier this month in Cleveland, OH.


Once you understand them, binomial probability distributions aren’t that difficult, but the initial conjoining of combinatorics and probability makes this a perennially difficult topic for many students.  The standard formula for the probability of K successes in N attempts of a binomial situation, where p is the probability of a single success in a single attempt, is no less daunting:

\displaystyle \left( \begin{matrix} N \\ K \end{matrix} \right) p^K (1-p)^{N-K} = \frac{N!}{K! (N-K)!} p^K (1-p)^{N-K}

But that is almost exactly the same result one gets by raising binomials to whole number powers, so why not use a CAS to expand a polynomial and at least compute the \displaystyle \left( \begin{matrix} N \\ K \end{matrix} \right) portion of the probability?  One added advantage of using a CAS is that you could use full event names instead of abbreviations, making it even easier to identify the meaning of each event.


The TI-Nspire output above shows the entire sample space resulting from flipping a coin 6 times.  Each term is an event.  Within each term, the exponent of each variable notes the number of times that variable occurs and the coefficient is the number of times that combination occurs.  The overall exponent in the expand command is the number of trials.  For example, the middle term– 20\cdot heads^3 \cdot tails^3 –says that there are 20 ways you could get 3 heads and 3 tails when tossing a coin 6 times. The last term is just tails^6, and its implied coefficient is 1, meaning there is just one way to flip 6 tails in 6 tosses.
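The same sample-space counts can be reproduced without a CAS by brute force.  A short Python sketch of the counting (my own illustration, not the Nspire’s expand command):

```python
from collections import Counter
from itertools import product
from math import comb

# All 2^6 = 64 sequences of 6 coin flips, grouped by their number of heads
by_heads = Counter(seq.count("H") for seq in product("HT", repeat=6))

ways_3_heads = by_heads[3]      # the coefficient of heads^3·tails^3
ways_6_tails = by_heads[0]      # the implied coefficient of tails^6
```

As expected, 20 of the 64 sequences have exactly 3 heads, and exactly one has 6 tails.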

The expand command makes more sense than memorized algorithms and provides context to students until they gain a deeper understanding of what’s actually going on.


Still using the expand command, if each variable is preceded by its probability, the CAS result combines the entire sample space AND the corresponding probability distribution function.  For example, when rolling a fair die four times, the distribution for 1s vs. not 1s (2, 3, 4, 5, or 6) is given by


The highlighted term says there is a 38.58% chance that there will be exactly one 1 and any three other numbers (2, 3, 4, 5, or 6) in four rolls of a fair 6-sided die.  The probabilities of the other four events in the sample space are also shown.  Within the TI-Nspire (CAS or non-CAS), one could use a command to give all of these probabilities simultaneously (below), but then one has to remember whether the non-contextualized probabilities are for increasing or decreasing values of which binomial outcome.
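The highlighted probability is the standard binomial formula in disguise.  A short Python check of the 38.58% value (my own sketch):

```python
from math import comb

p = 1 / 6          # chance of rolling a 1 on a fair 6-sided die
n = 4              # number of rolls

# P(exactly k ones in n rolls) = C(n,k) · p^k · (1-p)^(n-k)
dist = {k: comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)}
```

dist[1] evaluates to approximately 0.3858, matching the highlighted term, and the five probabilities sum to 1.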


Particularly early on in their explorations of binomial probabilities, students I’ve taught have shown a very clear preference for the polynomial approach, even when allowed to choose any approach that makes sense to them.


Given these earlier thoughts, I was naturally drawn to Steve Phelps’ “Probability, Polynomials, and CAS” session at the November 2014 OCTM annual meeting in Cleveland, OH.  Among the ideas he shared was using polynomials to create the distribution function for the sum of two fair 6-sided dice.  My immediate thought was to apply my earlier ideas.  As noted in my initial post, the expansion approach above is not limited to binomial situations.  My first reflexive CAS command in Steve’s session, before he shared anything, was this.


By writing the outcomes in words, the CAS interprets them as variables.  I got the entire sample space, but didn’t gain anything beyond a long polynomial.  The first output– five^2 –with its implied coefficient says there is 1 way to get 2 fives.  The second term– 2\cdot five \cdot four –says there are 2 ways to get 1 five and 1 four.  It’s nice that the technology gives me all the terms so quickly, but it doesn’t help me get a distribution function of the sum.  I got the distributions of the specific outcomes, but the way I defined the variables didn’t permit summing their actual numerical values.  Time to listen to the speaker.

He suggested using a common variable, X, for all faces with the value of each face expressed as an exponent.  That is, a standard 6-sided die would be represented by X^1+X^2+ X^3+X^4+X^5+X^6 where the six different exponents represent the numbers on the six faces of a typical 6-sided die.  Rolling two such dice simultaneously is handled as I did earlier with the binomial cases.

NOTE:  Exponents are handled in TWO different ways here.  1) Within a single polynomial, an exponent is an event value, and 2) Outside a polynomial, an exponent indicates the number of times that polynomial is applied within the specific event.  Coefficients have the same meaning as before.

Because the variables are now the same, when specific terms are multiplied, their exponents (face values) will be added–exactly what I wanted to happen.  That means the sum of the faces when you roll two dice is determined by the following.


Notice that the output is a single polynomial.  Therefore, the exponents are the values of individual cases.  For a couple examples, there are 3 ways to get a sum of 10 \left( 3 \cdot x^{10} \right) , 2 ways to get a sum of 3 \left( 2 \cdot x^3 \right) , etc.  The most commonly occurring outcome is the term with the largest coefficient.  For rolling two standard fair 6-sided dice, a sum of 7 is the most common outcome, occurring 6 times \left( 6 \cdot x^7 \right) .  That certainly simplifies the typical 6×6 tables used to compute the sums and probabilities resulting from rolling two dice.
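The exponents-add bookkeeping is just polynomial multiplication, which can be sketched directly in Python with dictionaries mapping exponent (face value) to coefficient (count).  This is my illustration of the idea, not Steve’s CAS work:

```python
from collections import Counter

die = Counter({face: 1 for face in range(1, 7)})    # x^1 + x^2 + ... + x^6

def poly_multiply(p, q):
    """Multiply polynomials stored as {exponent: coefficient}:
    exponents (face values) add, coefficients (counts) multiply."""
    result = Counter()
    for a, ca in p.items():
        for b, cb in q.items():
            result[a + b] += ca * cb
    return result

two_dice = poly_multiply(die, die)   # coefficient of x^s = ways to roll sum s
```

As in the expanded polynomial, two_dice[7] is 6, two_dice[10] is 3, and two_dice[3] is 2.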

While not the point of Steve’s talk, I immediately saw that technology had just opened the door to problems that had been computationally inaccessible in the past.  For example, what is the most common sum when rolling 5 dice and what is the probability of that sum?  On my CAS, I entered this.


In the middle of the expanded polynomial are two terms with the largest coefficients, 780 \cdot x^{17} and 780 \cdot x^{18}, meaning sums of 17 and 18 are the most common, equally likely outcomes when rolling 5 dice.  As there are 6^5=7776 possible outcomes when rolling a die 5 times, the probability of each of these is \frac{780}{7776} \approx 0.1003, or about a 10.03% chance each for a sum of 17 or 18.  This can be verified by inserting the probabilities as coefficients before each term before CAS expanding.
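The five-dice expansion can also be verified by brute force over all 7776 outcomes (a Python sketch, not the CAS output):

```python
from collections import Counter
from itertools import product

# Tally the sum of every one of the 6^5 = 7776 five-dice outcomes
sums = Counter(sum(roll) for roll in product(range(1, 7), repeat=5))

peak = max(sums.values())                        # the largest coefficient
modes = sorted(s for s, c in sums.items() if c == peak)
probability = peak / 6**5
```

The modes come out as 17 and 18, each occurring 780 times, for a probability of about 0.1003 each.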


With thought, this shouldn’t be surprising as the expected mean value of rolling a 6-sided die many times is 3.5, and 5 \cdot 3.5 = 17.5, so the integers on either side of 17.5 (17 & 18) should be the most common.  Technology confirms intuition.


What is the distribution of sums when rolling a 4-sided and a 6-sided die together?  No problem.  Just multiply two different polynomials, one representative of each die.


The output shows that sums of 5, 6, and 7 would be the most common, each occurring four times with probability \frac{1}{6} and together accounting for half of all outcomes of rolling these two dice together.
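Brute force confirms the mixed-dice claim as well (again my own check, not the CAS polynomial product):

```python
from collections import Counter
from itertools import product

# All 4 × 6 = 24 outcomes of rolling a 4-sided and a 6-sided die together
sums = Counter(a + b for a, b in product(range(1, 5), range(1, 7)))

# The most common sums and their counts
top = {s: c for s, c in sums.items() if c == max(sums.values())}
```

Sums of 5, 6, and 7 each occur 4 times out of 24, together accounting for exactly half of all outcomes.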


My most unexpected gain from Steve’s talk happened when he asked if we could get the same distribution of sums as “normal” 6-sided dice, but from two different 6-sided dice.  The only restriction he gave was that all of the faces of the new dice had to have positive values.  This can be approached by realizing that the distribution of sums of the two normal dice can be found by multiplying two representative polynomials to get


Restating the question in the terms of this post, are there two other polynomials that could be multiplied to give the same product?  That is, does this polynomial factor into other polynomials that could multiply to the same product?  A CAS factor command gives


Any rearrangement of these eight (four distinct) sub-polynomials would create the same distribution as the sum of two dice, but what would the separate sub-products mean in terms of the dice?  As a first example, what if the first two expressions were used for one die (line 1 below) and the two squared trinomials comprised a second die (line 2)?


Line 1 actually describes a 4-sided die with one face of 4, two faces with 3s, and one face of 2.  Line 2 describes a 9-sided die (whatever that is) with one face of 8, two faces of 6, three faces of 4, two faces of 2, and one face with a 0 ( 1=1 \cdot x^0).  This means rolling a 4-sided and a 9-sided die as described would give exactly the same sum distribution.  Cool, but not what I wanted.  Now what?
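That this odd 4-sided/9-sided pair reproduces the two-dice distribution is easy to confirm by listing the faces and tallying sums (a Python sketch):

```python
from collections import Counter
from itertools import product

die_4 = [2, 3, 3, 4]                    # line 1: one 4, two 3s, one 2
die_9 = [0, 2, 2, 4, 4, 4, 6, 6, 8]    # line 2: the strange 9-sided die
normal = list(range(1, 7))

odd_pair = Counter(a + b for a, b in product(die_4, die_9))
two_d6 = Counter(a + b for a, b in product(normal, normal))
```

The two Counters are identical, so the sum distributions match exactly.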

Factorization gave four distinct sub-polynomials, each with multiplicity 2.  One die could contain 0, 1, or 2 of each factor with the remaining factors on the other die.  That means there are 3^4=81 different possible dice combinations.  I could continue with a trial-and-error approach, but I wanted to be more efficient and elegant.

What follows is the result of thinking about the problem for a while.  Like most math solutions to interesting problems, ultimate solutions are typically much cleaner and more elegant than the thoughts that went into them.  Problem solving is a messy–but very rewarding–business.


Here are my insights over time:

1) I realized that the x^2 factor would raise the powers (face values) of the desired dice, but would not change the coefficients (number of faces).  Because Steve asked for dice with all positive face values, each desired die had to have at least one factor of x to prevent non-positive face values.

2) My first attempt didn’t create 6-sided dice.  The sum of the coefficients of a die’s polynomial determines its number of sides; that sum can also be found by substituting x=1 into the polynomial.  I want 6-sided dice, so each die’s coefficients must add to 6, which means the coefficient sums of the factors assigned to each die must multiply to 6.  The coefficients of (x+1) add to 2, those of \left( x^2+x+1 \right) add to 3, and those of \left( x^2-x+1 \right) add to 1.  The only way to get a polynomial coefficient sum of 6 (and thereby create 6-sided dice) is for each die to have one (x+1) factor and one \left( x^2+x+1 \right) factor.

3) That leaves the two \left( x^2-x+1 \right) factors.  They could split between the two dice, or both could be on one die, leaving none on the other.  We’ve already determined that each die had to have one each of the x, (x+1), and \left( x^2+x+1 \right) factors.  To also split the \left( x^2-x+1 \right) factors would result in the original dice:  two normal 6-sided dice.  If I want different dice, I have to load both of these factors on one die.

That means there is ONLY ONE POSSIBLE alternative for two 6-sided dice that have the same sum distribution as two normal 6-sided dice.


One die would have single faces of 8, 6, 5, 4, 3, and 1.  The other die would have one 4, two 3s, two 2s, and one 1.  And this is exactly the result of the famous(?) Sicherman Dice.
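The Sicherman claim, that faces {1, 2, 2, 3, 3, 4} and {1, 3, 4, 5, 6, 8} reproduce the normal two-dice sum distribution, checks out in a few lines (again my own sketch rather than the CAS):

```python
from collections import Counter
from itertools import product

sicherman_a = [1, 2, 2, 3, 3, 4]
sicherman_b = [1, 3, 4, 5, 6, 8]
normal = list(range(1, 7))

sicherman_sums = Counter(a + b for a, b in product(sicherman_a, sicherman_b))
normal_sums = Counter(a + b for a, b in product(normal, normal))
```

The two distributions are identical, including the six ways to roll a 7.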

If a 0 face value was allowed, shift one factor of x from one polynomial to the other.  This can be done two ways.


The first possibility has dice with faces {9, 7, 6, 5, 4, 2} and {3, 2, 2, 1, 1, 0}, and the second has faces {7, 5, 4, 3, 2, 0} and {5, 4, 4, 3, 3, 2}, giving the only other two non-negative solutions to the Sicherman Dice.

Both of these are nothing more than adding one to all faces of one die and subtracting one from all faces of the other.  While polynomials are not necessary to compute these, the shifts are equivalent to multiplying the polynomial of one die by x and the other by \frac{1}{x} as many times as desired.  That means there are an infinite number of 6-sided dice with the same sum distribution as normal 6-sided dice if you allow the sides to have negative faces.  One of these is


corresponding to a pair of Sicherman Dice with faces {6, 4, 3, 2, 1, -1} and {6, 5, 5, 4, 4, 3}.


There are other very interesting properties of Sicherman Dice, but this is already a very long post.  In the end, there are tremendous connections between probability and polynomials that are accessible to students at the secondary level and beyond.  And CAS keeps the focus on student learning and away from the manipulations that aren’t even the point in these explorations.


FREE TI-Nspire iPad App Workshop


On Saturday, 31 May 2014, Texas Instruments (@TICalculators) and @HawkenSchool are hosting a FREE TI-Nspire iPad Workshop at Hawken’s Gries Center in Cleveland’s University Circle.  The workshop is designed for educators who are interested in or are just beginning to use the TI-Nspire App for iPad® (either CAS or numeric).  It will cover the basics of getting started and teaching with the Apps.  Tom Reardon will be leading the training!

Sign up for the workshop here.  A pdf flyer for the workshop is here:   iPad App Training.

Dynamic Linear Programming

My department is exploring the pros and cons of different technologies for use in teaching our classes. Two teachers shared ways to use Desmos and GeoGebra in lessons using inequalities on one day; we explored the same situation using the TI-Nspire in the following week’s meeting.  For this post, I’m assuming you are familiar with solving linear programming problems.  Some very nice technology-assisted exploration ideas are developed in the latter half of this post.

My goal is to show some cool ways we discovered to use technology to evaluate these types of problems and enhance student exploration.  Our insights follow the section considering two different approaches to graphing the feasible region.  For context, we used a dirt-biker linear programming problem from NCTM’s Illuminations Web Pages.


Assuming x = the number of Riders built and y = the number of Rovers built,  inequalities for this problem are


We also learn on page 7 of the Illuminations activity that Apu makes a $15 profit on each Rider and $30 per Rover.  That means an Optimization Equation for the problem is Profit=15x+30y.


Graphing all of the inequalities simultaneously determines the feasible region for the problem.  This can be done easily with all three technologies, but the Nspire requires solving the inequalities for y first.  Therefore, the remainder of this post compares the Desmos and GeoGebra solutions.  Because the Desmos solutions are easily accessible as Web pages and not separate files, further images will be from Desmos until the point where GeoGebra operates differently.

Both Desmos and GeoGebra can graph these inequalities from natural inputs–inputting math sentences as you would write them from the problem information, without solving for a specific variable.  As with many more complicated linear programming problems, graphing all the constraints at once sometimes makes a visually complicated feasible region graph.


So, we decided to reverse all of our inequalities, effectively shading the non-feasible region instead.  Any points that emerged unshaded were possible solutions to the Dirt Bike problem (image below, file here).  All three software packages shift properly between solid and dashed lines to show respective included and excluded boundaries.


Traditional Approach – I (as well as almost all teachers, I suspect) have traditionally done some hand-waving at this point to convince (or tell) students that while any ordered pair in the unshaded region or on its boundary (all are dashed) is a potential solution, any optimal solution occurs on the boundary of the feasible region.  Hopefully teachers ask students to plug ordered pairs from the feasible region into the Optimization Equation to show that the profit does vary depending on what is built (duh), and we hope they eventually discover (or memorize) that the maximum or minimum profit occurs on the edges–usually at a corner for the rigged setups of most linear programming problems in textbooks.  Thinking about this led to several lovely technology enhancements.

INSIGHT 1:  Vary a point.

During our first department meeting, I was suddenly dissatisfied with how I’d always introduced this idea to my classes.  That unease and our play with the Desmos’ simplicity of adding sliders led me to try graphing a random ordered pair.  I typed (a,b) on an input line, and Desmos asked if I wanted sliders for both variables.  Sure, I thought (image below, file here).


— See my ASIDE note below for a philosophical point on the creation of (a,b).
— GeoGebra and the Nspire require one additional step to create/insert sliders, but GeoGebra’s naming conventions led to a smoother presentation–see below.

BIG ADVANTAGE:  While the Illuminations problem we were using had convenient vertices, we realized that students could now drag (a,b) anywhere on the graph (especially along the boundaries and to vertices of the feasible region) to determine coordinates.  Establishing exact coordinates of those points still required plugging into equations and possibly solving systems of equations (a possible entry for CAS!).  However discovered, critical coordinates were suddenly much easier to identify in any linear programming question.

HUGE ADVANTAGE:  Now that the point was variably defined, the Optimization Equation could be, too!  Rewriting and entering the Optimization Equation as an expression in terms of a and b, I took advantage of Desmos being a calculator, not just a grapher.  Notice the profit value on the left of the image.


With this, users can drag (a,b) and see not only the coordinates of the point, but also the value of the profit at the point’s current location!  Check out the live version here to see how easily Desmos updates this value as you drag the point.

From this dynamic setup, I believe students now can learn several powerful ideas through experimentation that traditionally would have been told/memorized.


  1. Drag (a,b) anywhere in the feasible region.  Not surprisingly, the profit’s value varies with (a,b)‘s location.
  2. The profit appears to be constant along the edges.  Confirm this by dragging (a,b) steadily along any edge of the feasible region.
  3. While there are many values the profit could assume in the feasible region, some quick experimentation suggests that the largest and smallest profit values occur at the vertices of the feasible region.
  4. DEEPER:  While point 3 is true, many teachers and textbooks mistakenly proclaim that solutions occur only at vertices.  In fact, it is technically possible for a problem to have an infinite number of optimal solutions.  This realization is discussed further in the CONCLUSION.

ASIDE:  I was initially surprised that the variable point on the Desmos graph was directly draggable.  From a purist’s perspective, this troubled me because the location of the point depends on the values of the sliders.  That being so, I shouldn’t be able to move the point and thereby change the values of its defining sliders.  Still, the simplicity of what I was able to do with the problem as a result of this quickly led me to forgive the two-way dependency relationships between Desmos’ sliders and the objects they define.


In some ways, this result was even easier to create on GeoGebra.  After graphing the feasible region, I selected the Point tool and clicked once on the graph.  Voila!  The variable point was fully defined.  This avoids the purist issue I raised in the ASIDE above.  As a bonus, the point was also named.

Unlike Desmos, GeoGebra permits multi-character function names.  Defining Profit(x,y)=15x+30y and entering Profit(A) allowed me to see the profit value change as I dragged point A, just as in the Desmos solution.  The Profit(A) value was dynamically computed in GeoGebra as a number value in its Algebra screen.  A live version of this construction is on GeoGebraTube here.


At first, I wasn’t sure if the last command–entering a single point into a two-variable function–would work, but since A was a point carrying two coordinates, GeoGebra nicely handled the transition.  Dragging A around the feasible region updated the current profit value just as easily as Desmos did.

INSIGHT 2:  Slide a line.

OK, this last point is really an adaptation of a technique I learned from some of my mentors when I started teaching years ago, but how I will use it in the future is much cleaner and more expedient.  I thought line slides were a commonly known technique for solving linear programming problems, but conversations with some of my colleagues have convinced me that not everyone knows the approach.

Recall that each point in the feasible region has its own profit value.  Instead of sliding a point to determine a profit, why not pick a particular profit and determine all points with that profit?  As an example, if you wanted to see all points that had a profit of $100, the Optimization Equation becomes Profit=100=15x+30y.  A graph of this line (in solid purple below) passes through the feasible region.  All points on this line within the feasible region are the values where Apu could build dirt bikes and get a profit of $100.  (Of course, only integer ordered pairs are realistic.)


You could replace the 100 in the equation with different values and repeat the investigation.  But if you’re thinking already about the dynamic power of the software, I hope you will have realized that you could define profit as a slider to scan through lots of different solutions with ease after you reset the slider’s bounds.  One instance is shown below; a live Desmos version is here.


GeoGebra and the Nspire set up the same way, except you must define the slider before you define the line.  Both allow you to define the slider as “profit” instead of just “p”.


From here, hopefully it is easy to extend Student Discovery 3 from above.  By changing the P slider, you see a series of parallel lines (prove this!).  As the value of P grows, the line moves up in this Illuminations problem.  Through a little experimentation, it should be obvious that as P rises, the last place the profit line touches the feasible region will be at a vertex.  Experiment with the P slider here to convince yourself that the maximum profit for this problem is $165 at the point (x,y)=(3,4).  Apu should make 3 Riders and 4 Rovers to maximize profit.  Similarly (and obviously), Apu’s minimum profit is $0 at (x,y)=(0,0) by making no dirt bikes.
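The slider experiment can be mimicked with a brute-force scan over integer builds.  The actual Illuminations constraints appear only in the images above, so the two constraints below are hypothetical stand-ins I chose so the optimum lands at the stated $165 at (3, 4); only the Profit = 15x + 30y objective comes from the problem.

```python
# HYPOTHETICAL constraints for illustration only -- the real Illuminations
# constraints are in the (omitted) images: x + y <= 7 and x + 3y <= 15.
def feasible(x, y):
    return x >= 0 and y >= 0 and x + y <= 7 and x + 3 * y <= 15

def profit(x, y):
    return 15 * x + 30 * y      # $15 per Rider, $30 per Rover (from the post)

# Only whole dirt bikes make sense, so scan integer ordered pairs
best = max((profit(x, y), x, y)
           for x in range(11) for y in range(11) if feasible(x, y))
```

Under these stand-in constraints the scan returns a maximum profit of 165 at (3, 4), echoing the vertex behavior the sliders reveal graphically.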

While not applicable in this particular problem, I hope you can see that if an edge of the feasible region for some linear programming problem were parallel to the line defined by the corresponding Optimization Equation, then every point along that edge would be an optimal solution with the same Optimization Equation output.  This is the point I was trying to make in Student Discovery 4.
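A quick numeric illustration of that parallel-edge case (the edge here is made up purely for the demonstration): with objective 15x+30y, whose level lines have slope -1/2, an edge lying on x+2y=8 has the same slope, so the objective is constant along the entire edge.

```python
# Points along the edge from (8, 0) to (0, 4), which lies on x + 2y = 8.
# This edge is parallel to the level lines of 15x + 30y (slope -1/2),
# so every point on it produces the same objective value.
def objective(x, y):
    return 15 * x + 30 * y

edge_points = [(8 - 2 * t, t) for t in (0, 1, 2, 3, 4)]
values = {objective(x, y) for (x, y) in edge_points}
print(values)   # {120} -- one value for the whole edge
```

In that situation the "last touch" of the sliding profit line is a whole edge, not a single vertex, so the problem has infinitely many optimal solutions.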

In the end, Desmos, GeoGebra, and the TI-Nspire all have the ability to create dynamic learning environments in which students can explore linear programming situations and their optimization solutions, albeit with slightly different syntax.  I believe any of these approaches can make learning linear programming much more experimental and meaningful.

New Nspire Apps PLUS Weekend Savings

TI finally converted its Nspire calculators to the iPad platform, and through this weekend only, in celebration of 25 years of Teachers Teaching with Technology, it is offering both of its Nspire apps at $25 off their usual $29.99, or $4.99 each.  This is a GREAT deal, especially considering everything the Nspire can do!  Clicking on either of the images below will take you to a description page for that app.


In my opinion, if you’re going to get one of these, I’d grab the CAS version.  It does EVERYTHING the non-CAS version does plus great CAS tools.  Why pay the same money for the non-CAS and get less?  You aren’t required to use the CAS tools, but I’d rather have a tool and not need it than the other way around.  If you read my ‘blog, though, you know I strongly advocate for CAS use for anyone exploring mathematics.

Now, on to my brief review of the new apps.

MY REVIEW:  From my experimentation over the last few days, this app appears to do EVERYTHING the corresponding handheld calculators can do.  I wouldn't be surprised if there are a few things the computer version can do that the app can't, but I haven't found any yet.  In a few places, I actually like the iPad app better than either the handheld or computer versions.  Here are a few examples.

  • When you start the app, your home page shows all of the documents available that have been created on the app.  It’s easy enough to navigate there on the handheld or computer, but it’s a nice touch (to me) to see all of my files easily arranged when I start up.


  • A BRILLIANT addition is the ability to export your working files to share with others.  Using the standard export button common to all iPad apps with export features, you get the ability to share your current doc via email or iTunes.
  • The calculator history items can now be accessed using a simple tap instead of just arrow key or mouse navigation.


  • Personally, I find it much easier to access the menus and settings with conveniently located app buttons.  I prefer having my tools available on a tap rather than buried in menus.  A nice touch, from my perspective.


  • Moving objects is easy.  I was easily able to graph y=x and the generic y=a\cdot x^2+b\cdot x+c with sliders for each parameter.  It’s easy to drag the slider values, and after a brief tap-and-hold, a pop-up gives you an option to animate, change settings, move, or delete your slider.


  • Also notice on the left side of the three previous screens that you have thumbnails of your currently open windows.  With a quick tap, you can switch between windows.
  • One of the best features of the Nspire has always been its ability to integrate multiple representations of mathematical ideas.  That continues here.  As I said, the app appears to be a fully-functional variation of the pre-existing handheld and computer versions.
  • The 3D-graphing option from a graphing page seems much easier to use on the iPad app.  Being able to use my finger to rotate a graph the way I want just seems much more intuitive than using my mouse.  As with the computer software, you can define your 3D surfaces and curves in Cartesian function form or parametrically.


  • A lovely touch on the iPad version is the ability to use finger pinch and spread maneuvers to zoom in and out on 2D and 3D graphs.  Dragging your finger over a 2D graph easily repositions it.  Combined, these options make it incredibly easy to obtain good graphing windows.

For now, I see two drawbacks, but I can easily deal with both given the other advantages.

  1. This concern has been resolved.  See my response here.  At the bottom of the 3rd screenshot above, you can see that the variable x is available in the math entry keyboard, but the variables y and t are not.  You can easily grab a y through the alpha keyboard.  It won't matter for most, I guess, but entering parametric equations on a graph page and solving systems of equations on a calculator page both require flipping between multiple screens to get the variable names and math symbols.  I understand the space-management issues, but making parametric equation entry and CAS use more difficult is a minor frustration.
  2. I may not have looked hard enough, but I couldn’t find an easy way to adjust the computation scales for 3D graphs.  I can change the graph scales, but I was not able to get my graph of z=sin \left( x^2 + y^2 \right) to look any smoother.

As I said, these are pretty minor flaws.

CONCLUSION:  It looks like strong, legitimate middle and high school math-specific apps are finally entering the iPad market, and I know of others in development.  TI's Nspire apps are spectacular (and even better if you can score one at the current deeply discounted price).

Polar Derivatives on TI-Nspire CAS

The following question about how to compute derivatives of polar functions was posted on the College Board’s AP Calculus Community bulletin board today.

AP Calculus Polar

From what I can tell, there is no direct way to get derivative values for polar functions.  There are two ways I imagined to get the polar derivative value, one graphical and the other CAS-powered.  The CAS approach is much more accurate, especially in locations where the value of the derivative changes quickly, but I don't think it's necessarily more intuitive unless you're comfortable using CAS commands.  For an example, I'll use r=2+3sin(\theta ) and assume you want the derivative at \theta = \frac{\pi }{6}.

METHOD 1:  Graphical

Remember that a derivative at a point is the slope of the tangent line to the curve at that point.  So, finding an equation of a tangent line to the polar curve at the point of interest should find the desired result.

Create a graphing window and enter your polar equation (menu –> 3:Graph Entry –> 4:Polar).  Then drop a tangent line on the polar curve (menu –> 8:Geometry –> 1:Points&Lines –> 7:Tangent).  You would then click on the polar curve once to select the curve and a second time to place the tangent line.  Then press ESC to exit the Tangent Line command.


To get the current coordinates of the point and the equation of the tangent line, use the Coordinates & Equations tool (menu –> 1:Actions –> 8:Coordinates and Equations).  Click on the point and the line to get the current location's information.  After each click, you'll need to click again to tell the Nspire where you want the information displayed.


To get the tangent line at \theta =\frac{\pi }{6}, you could drag the point, but the graph settings seem to produce only Cartesian coordinates.  Converting \theta =\frac{\pi }{6} on r=2+3sin(\theta ) to Cartesian gives

\left( x,y \right) = \left( r \cdot cos(\theta ), r \cdot sin(\theta ) \right)=\left( \frac{7\sqrt{3}}{4},\frac{7}{4} \right) .

So the x-coordinate is \frac{7\sqrt{3}}{4} \approx 3.031.  Drag the point to find the approximate slope, \frac{dy}{dx} \approx 8.37.  Because the slope of the tangent line changes rapidly at this location on this polar curve, this value of 8.37 will be shown in the next method to be a bit off.
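The polar-to-Cartesian conversion above is easy to verify numerically.  Here is a quick Python check using the example's r = 2 + 3sin(θ) at θ = π/6:

```python
import math

theta = math.pi / 6
r = 2 + 3 * math.sin(theta)              # r = 7/2 at this angle
x = r * math.cos(theta)                  # x = r·cos(θ) = 7*sqrt(3)/4 ≈ 3.031
y = r * math.sin(theta)                  # y = r·sin(θ) = 7/4
print(x, y)
assert abs(x - 7 * math.sqrt(3) / 4) < 1e-12
assert abs(y - 7 / 4) < 1e-12
```

This confirms the point you need to drag to is at x ≈ 3.031.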


I tried to double-click the x-coordinate to set it to exactly \frac{7\sqrt{3}}{4}, but unfortunately that property is also disabled in polar mode.


Using the Chain Rule, \displaystyle \frac{dy}{dx} = \frac{dy}{d\theta }\cdot \frac{d\theta }{dx} = \frac{\frac{dy}{d\theta }}{\frac{dx}{d\theta }}.  I can use this and the Nspire's ability to define user-created functions to create a \displaystyle \frac{dy}{dx} polar differentiator for any polar function r=a(\theta ).  On a Calculator page, I use the Define function (menu –> 1:Actions –> 1:Define) to make the polar differentiator.  All you need to do is enter the expression for a as shown in line 2 below.

This can be evaluated exactly or approximately at \theta=\frac{\pi }{6} to show \displaystyle \frac{dy}{dx} = 5\sqrt{3}\approx 8.660.
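If you want a quick sanity check of that CAS result outside the Nspire, the same Chain Rule ratio can be approximated numerically.  This Python sketch uses central differences (the step size h is an arbitrary small value) and reproduces dy/dx = 5√3 ≈ 8.660:

```python
import math

def r(theta):                            # the example polar function
    return 2 + 3 * math.sin(theta)

def x(theta):                            # x = r(θ)·cos(θ)
    return r(theta) * math.cos(theta)

def y(theta):                            # y = r(θ)·sin(θ)
    return r(theta) * math.sin(theta)

def polar_dydx(theta, h=1e-6):
    # dy/dx = (dy/dθ)/(dx/dθ), each approximated by a central difference
    return (y(theta + h) - y(theta - h)) / (x(theta + h) - x(theta - h))

slope = polar_dydx(math.pi / 6)
print(slope)                             # ≈ 8.660, i.e. 5*sqrt(3)
```

Notice how far the graphical estimate of 8.37 is from this value, which is why the CAS method is preferable where the slope changes quickly.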



As with all technologies, getting the answers you want often boils down to learning what questions to ask and how to phrase them.