# Tag Archives: TI Nspire

## Computers vs. People: Writing Math

Readers of this ‘blog know I actively use many forms of technology in my teaching and personal explorations.  Yesterday, a thread started on the AP Calculus community discussion board with some expressing discomfort that most math software accepts sin(x)^2 as equivalent to the “traditional” handwritten $\sin^2 (x)$.

From Desmos:

Some AP readers spoke up to declare that sin(x)^2 would always be read as $sin(x^2)$.  While I can’t speak to the veracity of that last claim, I found it a bit troubling, and it overlooks some very real difficulties users face when translating between paper- and computer-based versions of math expressions.  Following is an edited version of my response to the AP Calculus discussion board.

MY THOUGHTS:

I believe there’s something at the core of all of this that isn’t being explicitly named:  The differences between computer-based 1-dimensional input (left-to-right text-based commands) vs. paper-and-pencil 2-dimensional input (handwritten notation moves vertically–exponents, limits, sigma notation–and horizontally).  Two-dimensional traditional math writing simply doesn’t convert directly to computer syntax.  Computers are a brilliant tool for mathematics exploration and calculation, but they require a different type of input formatting.  To overlook and not explicitly name this for our students leaves them in the unenviable position of trying to “creatively” translate between two types of writing with occasional interpretation differences.

Our students are unintentionally set up for this confusion when they first learn about the order of operations–typically in middle school in the US.  They learn the sequencing:  parentheses, then exponents, then multiplication & division, and finally addition and subtraction.  Notice that functions aren’t mentioned here.  This thread [on the AP Calculus discussion board] has helped me realize that all or almost all of the sources I routinely reference never explicitly redefine the order of operations after the introduction of the function concept and notation.  That means our students are left with the insidious and oft-misunderstood PEMDAS (or BIDMAS in the UK) as their sole guide for operation sequencing.  When they encounter squaring or reciprocating or any other operations applied to function notation, they’re stuck trying to make sense of, and create their own interpretations for, this new dissonance in their old notation.  This is easily evidenced by the struggles many have when inputting computer expressions requiring lots of nested parentheses or when first trying to code in LaTeX.

While the sin(x)^2 notation is admittedly uncomfortable for traditional “by hand” notation, it is 100% logical from a computer’s perspective:  evaluate the function, then square the result.
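A quick illustration of that machine reading (a sketch using Python’s sympy library, which is not one of the tools named in the thread):

```python
from sympy import sin, symbols, sympify

x = symbols('x')

# The linear string "sin(x)**2" parses as: apply sin to x, THEN square the result.
expr = sympify("sin(x)**2")

print(expr == sin(x)**2)   # True: the square of the function's output
print(expr == sin(x**2))   # False: NOT the sine of x-squared
```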

We also need to recognize that part of the fault for the confusion here lies in the by-hand notation.  What we traditionalists understand by the notational convenience of sin^2(x) on paper is technically incorrect.  We know what we MEAN, but the notation implies an incorrect order of computation.  The computer notation of sin(x)^2 is actually closer to the truth.

I particularly like the way the TI-Nspire CAS handles this point.  As is often the case with this software, it accepts computer input (next image), while its output converts it to the more commonly understood written WYSIWYG formatting (2nd image below).

Further recent (?) development:  Students have long struggled with the by-hand notation of sin^2(x) needing to be converted to (sin(x))^2 for computers.  Personally, I’ve always liked both because the computer notation emphasizes the squaring of the function output while the by-hand version is a notational convenience.  My students pointed out to me recently that Desmos now accepts the sin^2(x) notation while TI Calculators still do not.

Desmos:

The enhancement of WYSIWYG computer input formatting means that some of the differences between 2-dimensional handwriting and computer inputs are narrowing.  At the same time, common classroom technologies no longer accept the same linear formatting as each other — but then that was possibly always the case….

To rail against the fact that many software packages interpret sin(x)^2 as (sin(x))^2 or sin^2(x) misses the point that 1-dimensional computer input is not necessarily the same as 2-dimensional paper writing.  We don’t complain when two human speakers misunderstand each other when they speak different languages or dialects.  Instead, we should focus on what each is trying to say and learn how to communicate clearly and efficiently in both venues.

In short, “When in Rome, …”.

## Roots of Complex Numbers without DeMoivre

Finding roots of complex numbers can be … complex.

This post describes a way to compute roots of any number–real or complex–via systems of equations without any conversions to polar form or use of DeMoivre’s Theorem.  A non-technology example using the “traditional approach” is followed by a CAS simplification of the process.

Most sources describe the following procedure to compute the roots of complex numbers (obviously including the real number subset).

• Write the complex number whose root is sought in generic polar form.  If necessary, convert from Cartesian form.
• Invoke DeMoivre’s Theorem to get the polar form of all of the roots.
• If necessary, convert the numbers from polar form back to Cartesian.

As a very quick example,

Compute all square roots of -16.

Rephrased, this asks for all complex numbers, z, that satisfy  $z^2=-16$.  The Fundamental Theorem of Algebra guarantees two solutions to this quadratic equation.

The complex Cartesian number, $-16+0i$, converts to polar form, $16 cis( \pi )$, where $cis( \theta ) = \cos( \theta ) + i \cdot \sin( \theta )$.  Unlike Cartesian form, polar representations of numbers are not unique, so any full rotation from the initial representation would be coincident, and therefore equivalent if converted to Cartesian.  For any integer n, this means

$-16 = 16cis( \pi ) = 16 cis \left( \pi + 2 \pi n \right)$

Invoking DeMoivre’s Theorem,

$\sqrt{-16} = (-16)^{1/2} = \left( 16 cis \left( \pi + 2 \pi n \right) \right) ^{1/2}$
$= 16^{1/2} * cis \left( \frac{1}{2} \left( \pi + 2 \pi n \right) \right)$
$= 4 * cis \left( \frac{ \pi }{2} + \pi * n \right)$

For $n= \{ 0, 1 \}$, this gives polar solutions, $4cis \left( \frac{ \pi }{2} \right)$ and $4cis \left( \frac{ 3 \pi }{2} \right)$ .  Each can be converted back to Cartesian form, giving the two square roots of -16:  $4i$ and $-4i$.  Squaring either gives -16, confirming the result.

I’ve always found the rotational symmetry of the complex roots of any number beautiful, particularly for higher order roots.  This symmetry is perfectly captured by DeMoivre’s Theorem, but there is arguably a simpler way to compute them.

NEW(?) NON-TECH APPROACH:

Because the solution to every complex number computation can be written in $a+bi$ form, new possibilities open.  The original example can be rephrased:

Determine the simultaneous real values of x and y for which $-16=(x+yi)^2$.

Start by expanding and simplifying the right side back into $a+bi$ form.  (I wrote about a potentially easier approach to simplifying powers of i in my last post.)

$-16+0i = \left( x+yi \right)^2 = x^2 +2xyi+y^2 i^2=(x^2-y^2)+(2xy)i$

Notice that the two ends of the previous line are two different expressions for the same complex number(s).  Therefore, equating the real and imaginary coefficients gives a system of equations:

$-16 = x^2-y^2$ and $0 = 2xy$

Solving the system gives the square roots of -16.

From the latter equation, either $x=0$ or $y=0$.  Substituting $y=0$ into the first equation gives $-16=x^2$, an impossible equation because x & y are both real numbers, as stated above.

Substituting $x=0$ into the first equation gives $-16=-y^2$, leading to $y= \pm 4$.  So, $x=0$ and $y=-4$ -OR- $x=0$ and $y=4$ are the only solutions–$x+yi=0-4i$ and $x+yi=0+4i$–the same solutions found earlier, but this time without using polar form or DeMoivre!  Notice, too, that the presence of TWO solutions emerged naturally.
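For readers who want to check this system with software, here is a minimal sketch in Python’s sympy (my tool choice for this post’s write-up; the CAS work shown later uses the TI-Nspire):

```python
from sympy import symbols, I, expand, re, im, solve, Eq

x, y = symbols('x y', real=True)

# Expand (x + yi)^2 and split it into real and imaginary parts
expr = expand((x + y*I)**2)      # x**2 + 2*I*x*y - y**2
system = [Eq(re(expr), -16),     # real parts:      x^2 - y^2 = -16
          Eq(im(expr), 0)]       # imaginary parts: 2xy = 0

sols = solve(system, [x, y])
print(sols)                      # the two roots: (0, -4) and (0, 4)
```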

Higher order roots could lead to much more complicated systems of equations, but a CAS can solve that problem.

CAS APPROACH:

Determine all fourth roots of $1+2i$.

That’s equivalent to finding all simultaneous x and y values that satisfy $1+2i=(x+yi)^4$.  Expanding the right side is quickly accomplished on a CAS.  From my TI-Nspire CAS:

Notice that the output is simplified to $a+bi$ form that, in the context of this particular example, gives the system of equations $1 = x^4-6x^2y^2+y^4$ and $2 = 4x^3y-4xy^3$.

Using my CAS to solve the system,

First, note there are four solutions, as expected.  Rewriting the approximated numerical output gives the four complex fourth roots of $1+2i$:  $-1.176-0.334i$, $-0.334+1.176i$, $0.334-1.176i$, and $1.176+0.334i$.  Each can be quickly confirmed on the CAS:
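The four approximate roots can also be confirmed outside a CAS; a minimal numpy sketch (again my tool choice, not the post’s):

```python
import numpy as np

# The fourth roots of 1 + 2i are the zeros of z^4 - (1 + 2i)
roots = np.roots([1, 0, 0, 0, -(1 + 2j)])

print(np.round(np.sort_complex(roots), 3))
print(np.allclose(roots**4, 1 + 2j))   # True: each root to the 4th power is 1 + 2i
```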

CONCLUSION:

Given proper technology, finding the multiple roots of a complex number need not invoke polar representations or DeMoivre’s Theorem.  It really is as “simple” as expanding $(x+yi)^n$ where n is the given root, simplifying the expansion into $a+bi$ form, and solving the resulting 2×2 system of equations.

At the point when such problems would be introduced to students, their algebraic awareness should be such that using a CAS to do all the algebraic heavy lifting is entirely appropriate.

As one final glimpse at the beauty of complex roots, I entered the two equations from the last system into Desmos to take advantage of its very good implicit graphing capabilities.  You can see the four intersections corresponding to the four solutions of the system.  Solutions to systems of implicit equations are notoriously difficult to compute, so I wasn’t surprised when Desmos didn’t compute the coordinates of the points of intersection, even though the graph was pretty and surprisingly quick to generate.

## Probability, Polynomials, and Sicherman Dice

Three years ago, I encountered a question on the TI-Nspire Google group asking if there was a way to use CAS to solve probability problems.  The ideas I pitched in my initial response and follow-up a year later (after first using it with students in a statistics class) have been thoroughly re-confirmed in my first year teaching AP Statistics.  I’ll quickly re-share them below before extending the concept with ideas I picked up a couple weeks ago from Steve Phelps’ session on Probability, Polynomials, and CAS at the 64th annual OCTM conference earlier this month in Cleveland, OH.

BINOMIALS:  FROM POLYNOMIALS TO SAMPLE SPACES

Once you understand them, binomial probability distributions aren’t that difficult, but the initial conjoining of combinatorics and probability makes this a perennially difficult topic for many students.  The standard formula for the probability of K successes in N attempts of a binomial situation, where p is the probability of a single success in a single attempt, is no less daunting:

$\displaystyle \left( \begin{matrix} N \\ K \end{matrix} \right) p^K (1-p)^{N-K} = \frac{N!}{K! (N-K)!} p^K (1-p)^{N-K}$

But that is almost exactly the same result one gets by raising binomials to whole number powers, so why not use a CAS to expand a polynomial and at least compute the $\displaystyle \left( \begin{matrix} N \\ K \end{matrix} \right)$ portion of the probability?  One added advantage of using a CAS is that you could use full event names instead of abbreviations, making it even easier to identify the meaning of each event.

The TI-Nspire output above shows the entire sample space resulting from flipping a coin 6 times.  Each term is an event.  Within each term, the exponent of each variable notes the number of times that variable occurs and the coefficient is the number of times that combination occurs.  The overall exponent in the expand command is the number of trials.  For example, the middle term– $20\cdot heads^3 \cdot tails^3$ –says that there are 20 ways you could get 3 heads and 3 tails when tossing a coin 6 times. The last term is just $tails^6$, and its implied coefficient is 1, meaning there is just one way to flip 6 tails in 6 tosses.
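The same expansion is easy to reproduce in other CAS environments; here is a sketch in Python’s sympy with variable names mirroring the Nspire input:

```python
from sympy import symbols, expand, Poly

heads, tails = symbols('heads tails')

# The entire sample space for flipping a coin 6 times
space = expand((heads + tails)**6)
print(space)

# Coefficient of heads^3 * tails^3: the 20 ways to get 3 heads and 3 tails
count = Poly(space, heads, tails).coeff_monomial(heads**3 * tails**3)
print(count)   # 20
```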

The expand command makes more sense than memorized algorithms and provides context to students until they gain a deeper understanding of what’s actually going on.

FROM POLYNOMIALS TO PROBABILITY

Still using the expand command, if each variable is preceded by its probability, the CAS result combines the entire sample space AND the corresponding probability distribution function.  For example, when rolling a fair die four times, the distribution for 1s vs. not 1s (2, 3, 4, 5, or 6) is given by

The highlighted term says there is a 38.58% chance that there will be exactly one 1 and any three other numbers (2, 3, 4, 5, or 6) in four rolls of a fair 6-sided die.  The probabilities of the other four events in the sample space are also shown.  Within the TI-Nspire (CAS or non-CAS), one could use a command to give all of these probabilities simultaneously (below), but then one has to remember whether the non-contextualized probabilities are for increasing or decreasing values of which binomial outcome.
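Here is the same computation sketched in sympy (the names one and notone are my stand-ins for the events in the screenshot):

```python
from sympy import symbols, Rational, expand, Poly

one, notone = symbols('one notone')

# Four rolls of a fair die, tracking 1s vs. everything else
dist = expand((Rational(1, 6)*one + Rational(5, 6)*notone)**4)

# P(exactly one 1 in four rolls) = C(4,1) * (1/6) * (5/6)^3 = 125/324
p = Poly(dist, one, notone).coeff_monomial(one * notone**3)
print(p, float(p))   # 125/324, approximately 0.3858
```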

Particularly early on in their explorations of binomial probabilities, students I’ve taught have shown a very clear preference for the polynomial approach, even when allowed to choose any approach that makes sense to them.

TAKING POLYNOMIALS FROM ONE DIE TO MANY

Given these earlier thoughts, I was naturally drawn to Steve Phelps’ “Probability, Polynomials, and CAS” session at the November 2014 OCTM annual meeting in Cleveland, OH.  Among the ideas he shared was using polynomials to create the distribution function for the sum of two fair 6-sided dice.  My immediate thought was to apply my earlier ideas.  As noted in my initial post, the expansion approach above is not limited to binomial situations.  My first reflexive CAS command in Steve’s session, before he shared anything, was this.

By writing the outcomes in words, the CAS interprets them as variables.  I got the entire sample space, but didn’t gain anything beyond a long polynomial.  The first output– $five^2$ –with its implied coefficient says there is 1 way to get 2 fives.  The second term– $2\cdot five \cdot four$ –says there are 2 ways to get 1 five and 1 four.  It’s nice that the technology gives me all the terms so quickly, but it doesn’t help me get a distribution function of the sum.  I got the distributions of the specific outcomes, but the way I defined the variables didn’t permit summing their actual numerical values.  Time to listen to the speaker.

He suggested using a common variable, X, for all faces with the value of each face expressed as an exponent.  That is, a standard 6-sided die would be represented by $X^1+X^2+ X^3+X^4+X^5+X^6$ where the six different exponents represent the numbers on the six faces of a typical 6-sided die.  Rolling two such dice simultaneously is handled as I did earlier with the binomial cases.

NOTE:  Exponents are handled in TWO different ways here.  1) Within a single polynomial, an exponent is an event value, and 2) Outside a polynomial, an exponent indicates the number of times that polynomial is applied within the specific event.  Coefficients have the same meaning as before.

Because the variables are now the same, when specific terms are multiplied, their exponents (face values) will be added–exactly what I wanted to happen.  That means the sum of the faces when you roll two dice is determined by the following.

Notice that the output is a single polynomial.  Therefore, the exponents are the values of individual cases.  For a couple examples, there are 3 ways to get a sum of 10 $\left( 3 \cdot x^{10} \right)$, 2 ways to get a sum of 3 $\left( 2 \cdot x^3 \right)$, etc.  The most commonly occurring outcome is the term with the largest coefficient.  For rolling two standard fair 6-sided dice, a sum of 7 is the most common outcome, occurring 6 times $\left( 6 \cdot x^7 \right)$.  That certainly simplifies the typical 6×6 tables used to compute the sums and probabilities resulting from rolling two dice.
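Because multiplying these polynomials just convolves their coefficient lists, the same distribution can be sketched numerically (numpy here, as a stand-in for the CAS expansion):

```python
import numpy as np

die = np.ones(6, dtype=int)        # coefficients of x^1 ... x^6, one per face

# Multiplying the two die polynomials = convolving their coefficients
two_dice = np.convolve(die, die)   # coefficients of x^2 ... x^12

# ways[s] = number of ways two dice sum to s
ways = dict(zip(range(2, 13), two_dice))
print(ways[7], ways[10], ways[3])  # 6 3 2, matching the expansion
```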

While not the point of Steve’s talk, I immediately saw that technology had just opened the door to problems that had been computationally inaccessible in the past.  For example, what is the most common sum when rolling 5 dice and what is the probability of that sum?  On my CAS, I entered this.

In the middle of the expanded polynomial are two terms with the largest coefficients, $780 \cdot x^{17}$ and $780 \cdot x^{18}$, meaning sums of 17 and 18 are the most common, equally likely outcomes when rolling 5 dice.  As there are $6^5=7776$ possible outcomes when rolling a die 5 times, the probability of each of these is $\frac{780}{7776} \approx 0.1003$, or about a 10.03% chance each for a sum of 17 or 18.  This can be verified by inserting the probabilities as coefficients before each term before CAS expanding.

With thought, this shouldn’t be surprising as the expected mean value of rolling a 6-sided die many times is 3.5, and $5 \cdot 3.5 = 17.5$, so the integers on either side of 17.5 (17 & 18) should be the most common.  Technology confirms intuition.
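The five-dice counts are a quick check with the same convolution idea (again a numpy sketch, not the CAS screen):

```python
import numpy as np
from functools import reduce

die = np.ones(6, dtype=int)                 # faces 1 through 6

# Convolve five copies: coefficients of x^5 ... x^30
five = reduce(np.convolve, [die] * 5)

sums = dict(zip(range(5, 31), five))
print(sums[17], sums[18], sums[17] / 6**5)  # 780 780 0.1003...
```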

ROLLING DIFFERENT DICE SIMULTANEOUSLY

What is the distribution of sums when rolling a 4-sided and a 6-sided die together?  No problem.  Just multiply two different polynomials, one representative of each die.

The output shows that sums of 5, 6, and 7 would be the most common, each occurring four times with probability $\frac{1}{6}$ and together accounting for half of all outcomes of rolling these two dice together.

A BEAUTIFUL EXTENSION–SICHERMAN DICE

My most unexpected gain from Steve’s talk happened when he asked if we could get the same distribution of sums as “normal” 6-sided dice, but from two different 6-sided dice.  The only restriction he gave was that all of the faces of the new dice had to have positive values.  This can be approached by realizing that the distribution of sums of the two normal dice can be found by multiplying two representative polynomials to get

$x^{12}+2x^{11}+3x^{10}+4x^9+5x^8+6x^7+5x^6+4x^5+3x^4+2x^3+x^2$.

Restating the question in the terms of this post, are there two other polynomials that could be multiplied to give the same product?  That is, does this polynomial factor into other polynomials that could multiply to the same product?  A CAS factor command gives $x^2 \cdot (x+1)^2 \cdot \left( x^2+x+1 \right)^2 \cdot \left( x^2-x+1 \right)^2$.

Any rearrangement of these eight (four distinct) sub-polynomials would create the same distribution as the sum of two dice, but what would the separate sub-products mean in terms of the dice?  As a first example, what if the first two expressions were used for one die (line 1 below) and the two squared trinomials comprised a second die (line 2)?

Line 1 actually describes a 4-sided die with one face of 4, two faces with 3s, and one face of 2.  Line 2 describes a 9-sided die (whatever that is) with one face of 8, two faces of 6, three faces of 4, two faces of 2, and one face with a 0 ( $1=1 \cdot x^0$).  This means rolling a 4-sided and a 9-sided die as described would give exactly the same sum distribution.  Cool, but not what I wanted.  Now what?

Factorization gave four distinct sub-polynomials, each with multiplicity 2.  One die could contain 0, 1, or 2 of each of these with the remaining factors on the other die.  That means there are $3^4=81$ different possible dice combinations.  I could continue with a trial-and-error approach, but I wanted to be more efficient and elegant.

What follows is the result of thinking about the problem for a while.  Like most math solutions to interesting problems, ultimate solutions are typically much cleaner and more elegant than the thoughts that went into them.  Problem solving is a messy–but very rewarding–business.

SOLUTION

Here are my insights over time:

1) I realized that the $x^2$ term would raise the power (face values) of the desired dice, but would not change the coefficients (number of faces).  Because Steve asked for dice with all positive face values, each desired die had to have at least one factor of x to prevent non-positive face values.

2) My first attempt didn’t create 6-sided dice.  The sums of the coefficients of the sub-polynomials determined the number of sides.  That sum could also be found by substituting $x=1$ into the sub-polynomial.  I want 6-sided dice, so each die’s final coefficients must add to 6, which means the coefficient sums of the factors assigned to that die must multiply to 6.  The coefficients of $(x+1)$ add to 2, those of $\left( x^2+x+1 \right)$ add to 3, and those of $\left( x^2-x+1 \right)$ add to 1.  The only way to get a polynomial coefficient sum of 6 (and thereby create 6-sided dice) is for each die to have one $(x+1)$ factor and one $\left( x^2+x+1 \right)$ factor.

3) That leaves the two $\left( x^2-x+1 \right)$ factors.  They could split between the two dice or both could be on one die, leaving none on the other.  We’ve already determined that each die had to have one each of the x, $(x+1)$, and $\left( x^2+x+1 \right)$ factors.  To also split the $\left( x^2-x+1 \right)$ factors would result in the original dice:  two normal 6-sided dice.  If I want different dice, I have to load both of these factors on one die.

That means there is ONLY ONE POSSIBLE alternative for two 6-sided dice that have the same sum distribution as two normal 6-sided dice.

One die would have single faces of 8, 6, 5, 4, 3, and 1.  The other die would have one 4, two 3s, two 2s, and one 1.  And this is exactly the result of the famous(?) Sicherman Dice.
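These faces are easy to verify by multiplying the corresponding polynomials; a sympy sketch:

```python
from sympy import symbols, expand

x = symbols('x')

standard = sum(x**f for f in [1, 2, 3, 4, 5, 6])
sicherman_a = sum(x**f for f in [1, 2, 2, 3, 3, 4])   # one 4, two 3s, two 2s, one 1
sicherman_b = sum(x**f for f in [1, 3, 4, 5, 6, 8])   # single faces 8, 6, 5, 4, 3, 1

# Identical sum distributions means the difference expands to zero
diff = expand(sicherman_a * sicherman_b - standard**2)
print(diff)   # 0
```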

If a 0 face value was allowed, shift one factor of x from one polynomial to the other.  This can be done two ways.

The first possibility has dice with faces {9, 7, 6, 5, 4, 2} and {3, 2, 2, 1, 1, 0}, and the second has faces {7, 5, 4, 3, 2, 0} and {5, 4, 4, 3, 3, 2}, giving the only other two non-negative solutions to the Sicherman Dice.

Both of these are nothing more than adding one to all faces of one die and subtracting one from all faces of the other.  While it isn’t necessary to use polynomials to compute these, the shifts are equivalent to multiplying the polynomial of one die by x and the other by $\frac{1}{x}$ as many times as desired.  That means there are an infinite number of 6-sided dice with the same sum distribution as normal 6-sided dice if you allow the sides to have negative face values.  One of these is

corresponding to a pair of Sicherman Dice with faces {6, 4, 3, 2, 1, -1} and {6, 5, 5, 4, 4, 3}.

CONCLUSION:

There are other very interesting properties of Sicherman Dice, but this is already a very long post.  In the end, there are tremendous connections between probability and polynomials that are accessible to students at the secondary level and beyond.  And CAS keeps the focus on student learning and away from the manipulations that aren’t even the point in these explorations.

Enjoy.

## FREE TI-Nspire iPad App Workshop

On Saturday, 31 May 2014, Texas Instruments (@TICalculators) and @HawkenSchool are hosting a FREE TI-Nspire iPad Workshop at Hawken’s Gries Center in Cleveland’s University Circle.  The workshop is designed for educators who are interested in or are just beginning to use the TI-Nspire App for iPad® (either CAS or numeric).  It will cover the basics of getting started and teaching with the Apps.  Tom Reardon will be leading the training!

Sign up for the workshop here.  A pdf flyer for the workshop is here:   iPad App Training.

## Dynamic Linear Programming

My department is exploring the pros and cons of different technologies for use in teaching our classes. Two teachers shared ways to use Desmos and GeoGebra in lessons using inequalities on one day; we explored the same situation using the TI-Nspire in the following week’s meeting.  For this post, I’m assuming you are familiar with solving linear programming problems.  Some very nice technology-assisted exploration ideas are developed in the latter half of this post.

My goal is to show some cool ways we discovered to use technology to evaluate these types of problems and enhance student exploration.  Our insights follow the section considering two different approaches to graphing the feasible region.  For context, we used a dirt-biker linear programming problem from NCTM’s Illuminations Web Pages.

Assuming x = the number of Riders built and y = the number of Rovers built, inequalities for this problem are

We also learn on page 7 of the Illuminations activity that Apu makes a $15 profit on each Rider and a $30 profit on each Rover.  That means an Optimization Equation for the problem is $Profit=15x+30y$.

GRAPHING THE FEASIBLE REGION:

Graphing all of the inequalities simultaneously determines the feasible region for the problem.  This can be done easily with all three technologies, but the Nspire requires solving the inequalities for y first.  Therefore, the remainder of this post compares the Desmos and GeoGebra solutions.  Because the Desmos solutions are easily accessible as Web pages and not separate files, further images will be from Desmos until the point where GeoGebra operates differently.

Both Desmos and GeoGebra can graph these inequalities from natural inputs–inputting math sentences as you would write them from the problem information, without solving for a specific variable.  As with many more complicated linear programming problems, graphing all the constraints at once sometimes makes a visually complicated feasible region graph.

So, we decided to reverse all of our inequalities, effectively  shading the non-feasible region instead.  Any points that emerged unshaded were possible solutions to the Dirt Bike problem (image below, file here).  All three softwares shift properly between solid and dashed lines to show respective included and excluded boundaries.

Traditional Approach – I (as well as almost all teachers, I suspect) have traditionally done some hand-waving at this point to convince (or tell) students that while any ordered pair in the unshaded region or on its boundary (all are dashed) is a potential solution, any optimal solution occurs on the boundary of the feasible region.  Hopefully teachers ask students to plug ordered pairs from the feasible region into the Optimization Equation to show that the profit does vary depending on what is built (duh), and we hope they eventually discover (or memorize) that the maximum or minimum profit occurs on the edges–usually at a corner for the rigged setups of most linear programming problems in textbooks.  Thinking about this led to several lovely technology enhancements.

INSIGHT 1:  Vary a point.

During our first department meeting, I was suddenly dissatisfied with how I’d always introduced this idea to my classes.  That unease and our play with the Desmos’ simplicity of adding sliders led me to try graphing a random ordered pair.  I typed (a,b) on an input line, and Desmos asked if I wanted sliders for both variables.  Sure, I thought (image below, file here).

— See my ASIDE note below for a philosophical point on the creation of (a,b).
— GeoGebra and the Nspire require one additional step to create/insert sliders, but GeoGebra’s naming conventions led to a smoother presentation–see below.

BIG ADVANTAGE:  While the Illuminations problem we were using had convenient vertices, we realized that students could now drag (a,b) anywhere on the graph (especially along the boundaries and to vertices of the feasible region) to determine coordinates.  Establishing exact coordinates of those points still required plugging into equations and possibly solving systems of equations (a possible entry for CAS!).  However discovered, critical coordinates were suddenly much easier to identify in any linear programming question.

HUGE ADVANTAGE:  Now that the point was variably defined, the Optimization Equation could be, too!  Rewriting and entering the Optimization Equation as an expression in terms of a and b, I took advantage of Desmos being a calculator, not just a grapher.  Notice the profit value on the left of the image.

With this, users can drag (a,b) and see not only the coordinates of the point, but also the value of the profit at the point’s current location!  Check out the live version here to see how easily Desmos updates this value as you drag the point.

From this dynamic setup, I believe students now can learn several powerful ideas through experimentation that traditionally would have been told/memorized.

STUDENT DISCOVERIES:

1. Drag (a,b) anywhere in the feasible region.  Not surprisingly, the profit’s value varies with (a,b)‘s location.
2. The profit appears to be constant along the edges.  Confirm this by dragging (a,b) steadily along any edge of the feasible region.
3. While there are many values the profit could assume in the feasible region, some quick experimentation suggests that the largest and smallest profit values occur at the vertices of the feasible region.
4. DEEPER:  While point 3 is true, many teachers and textbooks mistakenly proclaim that solutions occur only at vertices.  In fact, it is technically possible for a problem to have an infinite number of optimal solutions.  This realization is discussed further in the CONCLUSION.
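Discovery 3 can also be checked computationally.  In the sketch below, the vertex list is a hypothetical feasible region of my own invention (the actual Illuminations constraints live in the activity, not in this code); only the Optimization Equation comes from the problem:

```python
# Hypothetical feasible-region vertices -- illustrative stand-ins,
# not the intersections of the actual Illuminations constraints.
vertices = [(0, 0), (5, 0), (3, 4), (0, 5)]

def profit(x, y):
    """The Optimization Equation: Profit = 15x + 30y."""
    return 15 * x + 30 * y

# A linear function over a convex polygon attains its extremes at
# vertices, so comparing vertex profits locates the optimum.
best = max(vertices, key=lambda v: profit(*v))
print(best, profit(*best))   # (3, 4) 165
```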

ASIDE:  I was initially surprised that the variable point on the Desmos graph was directly draggable.  From a purist’s perspective, this troubled me because the location of the point depends on the values of the sliders, so I shouldn’t be able to move the point and thereby change the values of its defining sliders.  Still, the simplicity of what I was able to do with the problem as a result of this quickly led me to forgive the two-way dependency relationships between Desmos’ sliders and the objects they define.

GEOGEBRA’S VERSION:

In some ways, this result was even easier to create on GeoGebra.  After graphing the feasible region, I selected the Point tool and clicked once on the graph.  Voila!  The variable point was fully defined.  This avoids the purist issue I raised in the ASIDE above.  As a bonus, the point was also named.

Unlike Desmos, GeoGebra permits multi-character function names.  Defining $Profit(x,y)=15x+30y$ and entering $Profit(A)$ allowed me to see the profit value change as I dragged point A as I did in the Desmos solution. The $Profit(A)$ value was dynamically computed in GeoGebra as a number value in its Algebra screen.  A live version of this construction is on GeoGebraTube here.

At first, I wasn’t sure if the last command–entering a single term into a multivariable term–would work, but since A was a multivariable point, GeoGebra nicely handled the transition.  Dragging A around the feasible region updated the current profit value just as easily as Desmos did.

INSIGHT 2:  Slide a line.

OK, this last point is really an adaptation of a technique I learned from some of my mentors when I started teaching years ago, but how I will use it in the future is much cleaner and more expedient.  I thought line slides were a commonly known technique for solving linear programming problems, but conversations with some of my colleagues have convinced me that not everyone knows the approach.

Recall that each point in the feasible region has its own profit value.  Instead of sliding a point to determine a profit, why not pick a particular profit and determine all points with that profit?  As an example, if you wanted to see all points that had a profit of $100, the Optimization Equation becomes $Profit=100=15x+30y$.  A graph of this line (in solid purple below) passes through the feasible region.  All points on this line within the feasible region are the values where Apu could build dirt bikes and get a profit of $100.  (Of course, only integer ordered pairs are realistic.)

You could replace the 100 in the equation with different values and repeat the investigation.  But if you’re thinking already about the dynamic power of the software, I hope you will have realized that you could define profit as a slider to scan through lots of different solutions with ease after you reset the slider’s bounds.  One instance is shown below; a live Desmos version is here.

GeoGebra and the Nspire set up the same way, except you must define the slider before you define the line.  Both allow you to define the slider as “profit” instead of just “p”.

CONCLUSIONS:

From here, hopefully it is easy to extend Student Discovery 3 from above.  By changing the P slider, you see a series of parallel lines (prove this!).  As the value of P grows, the line moves up in this Illuminations problem.  Through a little experimentation, it should be obvious that as P rises, the last place the profit line touches the feasible region will be at a vertex.  Experiment with the P slider here to convince yourself that the maximum profit for this problem is $165 at the point $(x,y)=(3,4)$.  Apu should make 3 Riders and 4 Rovers to maximize profit.  Similarly (and obviously), Apu’s minimum profit is $0 at $(x,y)=(0,0)$ by making no dirt bikes.
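The vertex principle can be sketched in a few lines of Python.  Only $(0,0)$ and $(3,4)$ are confirmed by the problem; the other vertices below are hypothetical placeholders for the feasible region:

```python
def profit(x, y):
    return 15 * x + 30 * y

# Hypothetical vertex list; only (0, 0) and (3, 4) come from the problem text.
vertices = [(0, 0), (5, 0), (3, 4), (0, 5)]

# A linear objective over a convex feasible region is optimized at a vertex,
# so checking the vertices suffices.
best = max(vertices, key=lambda v: profit(*v))
print(best, profit(*best))  # (3, 4) 165
```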

While not applicable in this particular problem, I hope you can see that if an edge of the feasible region for some linear programming problem was parallel to the line defined by the corresponding Optimization Equation, then all points along that edge potentially would be optimal solutions with the same Optimization Equation output.  This is the point I was trying to make in Student Discovery 4.

In the end, Desmos, GeoGebra, and the TI-Nspire all have the ability to create dynamic learning environments in which students can explore linear programming situations and their optimization solutions, albeit with slightly different syntax.  I believe any of these approaches can make learning linear programming much more experimental and meaningful.

## New Nspire Apps PLUS Weekend Savings

TI finally converted its Nspire calculators to the iPad platform and, through this weekend only, in celebration of 25 years of Teachers Teaching with Technology, they’re offering both of their Nspire apps at $25 off their usual $29.99, or $4.99 each.  This is a GREAT deal, especially considering everything the Nspire can do!  Clicking on either of the images below will take you to a description page for that app.

In my opinion, if you’re going to get one of these, I’d grab the CAS version.  It does EVERYTHING the non-CAS version does plus great CAS tools.  Why pay the same money for the non-CAS and get less?  You aren’t required to use the CAS tools, but I’d rather have a tool and not need it than the other way around.  If you read my ‘blog, though, you know I strongly advocate for CAS use for anyone exploring mathematics.

Now, on to my brief review of the new apps.

MY REVIEW:  From my experimentations over the last few days, this app appears to do EVERYTHING the corresponding handheld calculators can do.  I wouldn’t be surprised if there are a few things the computer version can do that the app can’t, but I haven’t been able to find any yet.  In a few places, I actually like the iPad app better than either the handheld or computer versions.  Here are a few.

• When you start the app, your home page shows all of the documents available that have been created on the app.  It’s easy enough to navigate there on the handheld or computer, but it’s a nice touch (to me) to see all of my files easily arranged when I start up.

• A BRILLIANT addition is the ability to export your working files to share with others.  Using the standard export button common to all iPad apps with export features, you get the ability to share your current doc via email or iTunes.
• The calculator history items can now be accessed using a simple tap instead of just arrow key or mouse navigation.

• Personally, I find it much easier to access the menus and settings with conveniently located app buttons.  I prefer having my tools available on a tap rather than buried in menus.  A nice touch, from my perspective.

• Moving objects is easy.  I was easily able to graph $y=x$ and the generic $y=a\cdot x^2+b\cdot x+c$ with sliders for each parameter.  It’s easy to drag the slider values, and after a brief tap-and-hold, a pop-up gives you an option to animate, change settings, move, or delete your slider.

• Also notice on the left side of the three previous screens that you have thumbnails of your currently open windows.  With a quick tap, you can quickly change between windows.
• One of the best features of the Nspire has always been its ability to integrate multiple representations of mathematical ideas.  That continues here.  As I said, the app appears to be a fully-functional variation of the pre-existing handheld and computer versions.
• The 3D-graphing option from a graphing page seems much easier to use on the iPad app.  Being able to use my finger to rotate a graph the way I want just seems much more intuitive than using my mouse.  As with the computer software, you can define your 3D surfaces and curves in Cartesian function form or parametrically.

• A lovely touch on the iPad version is the ability to use finger pinch and spread maneuvers to zoom in and out on 2D and 3D graphs.  Dragging your finger over a 2D graph easily repositions it.  Combined, these options make it incredibly easy to obtain good graphing windows.

For now, I see two drawbacks, but I can easily deal with both given the other advantages.

1. This concern has been resolved.  See my response here.  At the bottom of the 3rd screenshot above, you can see that the variable x is available on the math entry keyboard, but variables y and t are not.  You can easily grab a y through the alpha keyboard.  It won’t matter for most, I guess, but entering parametric equations on a graph page and solving systems of equations on a calculator page both require flipping between multiple screens to get the variable names and math symbols.  I understand the space-management issues, but making parametric equation entry and CAS use more difficult is a minor frustration.
2. I may not have looked hard enough, but I couldn’t find an easy way to adjust the computation scales for 3D graphs.  I can change the graph scales, but I was not able to get my graph of $z=\sin \left( x^2 + y^2 \right)$ to look any smoother.

As I said, these are pretty minor flaws.

CONCLUSION:  It looks like strong, legitimate middle and high school math-specific apps are finally entering the iPad market, and I know of others in development.  TI’s Nspire apps are spectacular (and are even better if you can score one for the current deeply discounted price).

## Polar Derivatives on TI-Nspire CAS

The following question about how to compute derivatives of polar functions was posted on the College Board’s AP Calculus Community bulletin board today.

From what I can tell, there is no direct way to get derivative values for polar functions.  There are two ways I imagined to get the polar derivative value, one graphically and the other CAS-powered.  The CAS approach is much more accurate, especially in locations where the value of the derivative changes quickly, but I don’t think it’s necessarily more intuitive unless you’re comfortable using CAS commands.  For an example, I’ll use $r=2+3\sin(\theta )$ and assume you want the derivative at $\theta = \frac{\pi }{6}$.

METHOD 1:  Graphical

Remember that a derivative at a point is the slope of the tangent line to the curve at that point.  So, finding an equation of a tangent line to the polar curve at the point of interest should find the desired result.

Create a graphing window and enter your polar equation (menu –> 3:Graph Entry –> 4:Polar).  Then drop a tangent line on the polar curve (menu –> 8:Geometry –> 1:Points&Lines –> 7:Tangent).  You would then click on the polar curve once to select the curve and a second time to place the tangent line.  Then press ESC to exit the Tangent Line command.

To get the current coordinates of the point and the equation of the tangent line, use the Coordinates & Equation tool (menu –> 1:Actions –> 8:Coordinates and Equations).  Click on the point and the line to get the current location’s information.  After each click, you’ll need to click again to tell the nSpire where you want the information displayed.

To get the tangent line at $\theta =\frac{\pi }{6}$, you could drag the point, but the graph settings seem to produce only Cartesian coordinates.  Converting $\theta =\frac{\pi }{6}$ on $r=2+3\sin(\theta )$ to Cartesian gives

$\left( x,y \right) = \left( r \cdot \cos(\theta ), r \cdot \sin(\theta ) \right)=\left( \frac{7\sqrt{3}}{4},\frac{7}{4} \right)$ .

So the x-coordinate is $\frac{7\sqrt{3}}{4} \approx 3.031$.  Drag the point to find the approximate slope, $\frac{dy}{dx} \approx 8.37$.  Because the slope of the tangent line changes rapidly at this location on this polar curve, this value of 8.37 will be shown in the next method to be a bit off.

Unfortunately, I tried to double-click the x-coordinate to set it to exactly $\frac{7\sqrt{3}}{4}$, but that property is also disabled in polar mode.

METHOD 2:  CAS

Using the Chain Rule, $\displaystyle \frac{dy}{dx} = \frac{dy}{d\theta }\cdot \frac{d\theta }{dx} = \frac{\frac{dy}{d\theta }}{\frac{dx}{d\theta }}$.  I can use this and the nSpire’s ability to define user-created functions to create a $\displaystyle \frac{dy}{dx}$ polar differentiator for any polar function $r=a(\theta )$.  On a Calculator page, I use the Define function (menu –> 1:Actions –> 1:Define) to make the polar differentiator.  All you need to do is enter the expression for a as shown in line 2 below.

This can be evaluated exactly or approximately at $\theta=\frac{\pi }{6}$ to show $\displaystyle \frac{dy}{dx} = 5\sqrt{3} \approx 8.660$.
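The same chain-rule computation can be reproduced in Python with SymPy, confirming the exact slope:

```python
import sympy as sp

theta = sp.symbols('theta')
r = 2 + 3 * sp.sin(theta)   # the example polar curve
x = r * sp.cos(theta)       # Cartesian conversion
y = r * sp.sin(theta)

# dy/dx = (dy/dtheta) / (dx/dtheta) by the Chain Rule
dydx = sp.diff(y, theta) / sp.diff(x, theta)
slope = sp.simplify(dydx.subs(theta, sp.pi / 6))

print(slope, float(slope))  # 5*sqrt(3), approximately 8.660
```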

Conclusion:

As with all technologies, getting the answers you want often boils down to learning what questions to ask and how to phrase them.

## Calling for More CAS in Statistics

When you allow your students to solve problems in ways that make the most sense to them, interesting and unexpected results sometimes happen.  On a test in my senior, non-AP statistics course earlier this semester, we posed this.

A child is 40 inches tall, which places her in the top 10% of all children of similar age. The heights for children of this age form an approximately normal distribution with a mean of 38 inches. Based on this information, what is the standard deviation of the heights of all children of this age?

From the question, you likely deduced that we had been exploring normal distributions and the connection between areas under such curves and their related probabilities and percentiles.  Hoping to get students to think just a little bit, we decided to reverse a cookbook question (given or derive a z-score and compute probability) and asked instead for standard deviation.  My fellow teacher and I saw the question as a simple Algebra I-level manipulation, but our students found it a very challenging revision.  Only about 5% of the students actually solved it the way we thought.  The majority employed a valid (but not always justified) trial-and-error approach.  And then one of my students invoked what I thought to be a brilliant use of a CAS command that I should have imagined myself.  Unfortunately, it did not work out, even though it should.  I’m hoping future iterations of CAS software of all types will address this shortcoming.

What We Thought Would Happen

The problem information can be visually represented as shown below.

Given x-values, means, and standard deviations, our students had practice with many problems which gave the resulting area.  They had also been given areas under normal curves and worked backwards to z-scores which could be re-scaled and re-centered to corresponding points on any normal curve.  We hoped they would be able to apply what they knew about normal curves to make this a different, but relatively straightforward question.

Given the TI-nSpire software and calculators each of our students has, we’ve completely abandoned statistics tables.  Knowing that the given score was at the 90th percentile, the inverse normal TI-nSpire command quickly shows that this point corresponds to an approximate z-score of 1.28155.  Substituting this and the other givens into the x-value to z-score scaling relationship, $z=\frac{x-\overline{x}}{s}$ , leads to an equation with a single unknown which easily can be solved by hand or by CAS to find the unknown standard deviation, $s \approx 1.56061$ .  Just a scant handful of students actually employed this.
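The intended algebra is quick to verify in Python, with SciPy’s inverse normal (`norm.ppf`) standing in for the nSpire command:

```python
from scipy.stats import norm

# 40 inches sits at the 90th percentile of N(38, s) with s unknown.
z = norm.ppf(0.90)       # z-score of the 90th percentile, about 1.28155
s = (40 - 38) / z        # solve z = (x - mean)/s for the standard deviation
print(z, s)              # approximately 1.28155 1.56061
```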

What the Majority Did

Recognizing the problem as a twist on their previous work, most invoked a trial-and-error approach.  From their work, I could see that most essentially established bounds around the potential standard deviation and employed an interval bisection approach (not that any actually formally named their technique).

If you know the bounds, mean, and standard deviation of a portion of a normal distribution, you can find the percentage area using the nSpire’s normal Cdf command.  Knowing that the percentage area was 0.1, most tried a standard deviation of 1, and saw that not enough area (0.02275) was captured.  Then they tried 2, and got too much area (0.158655).  A reproduction of one student’s refinements leading to a standard deviation of $s \approx 1.56$ follows.
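The students’ refinement process amounts to interval bisection, which can be sketched in Python with SciPy’s survival function (`norm.sf`, the right-tail area) standing in for the normal Cdf computation:

```python
from scipy.stats import norm

# Find s so that P(X > 40) = 0.1 for X ~ N(38, s).
lo, hi = 1.0, 2.0   # s = 1 captures too little right-tail area, s = 2 too much
for _ in range(50):
    mid = (lo + hi) / 2
    if norm.sf(40, loc=38, scale=mid) < 0.1:  # tail area grows with s
        lo = mid    # s too small
    else:
        hi = mid    # s too large
print(lo)  # approximately 1.56061
```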

THE COOL PART:  Students who attempted this approach got to deal directly with the upper 10% of the area; they weren’t required to adapt this to the “left-side area” input requirement of the inverse normal command.  While this adjustment is certainly minor, being able to focus on the problem parameters–as defined–helped some of my students.

THE PROBLEM:  As a colleague at my school told me decades ago when I started teaching, “Remember that a solution to every math problem must meet two requirements.  Show that the solution(s) you find is (are) correct, and show that there are no other solutions.”

Given a fixed mean and x-value, it seems intuitive to me that there is a one-to-one correspondence between the standard deviation of the normal curve and the area to the right of that point.  This isn’t a class for which I’d expect rigorous proof of such an assertion, but I still hoped that some might address the generic possibility of multiple answers and attempt some explanation for why that can’t happen here.  None showed anything like that, and I’m pretty certain that not a single one of my students in this class considered this possibility.  They had found an answer that worked and walked away satisfied.  I tried talking with them afterwards, but I’m not sure how many fully understood the subtle logic and why it was mathematically important.

The Creative Solution

Exactly one student remembered that he had a CAS and that it could solve equations.  Tapping the normal Cdf command used by the majority of his peers, he set up and tried to solve an equation as shown below.

Sadly, this should have worked for my student, but it didn’t.  (He ultimately fell back on the trial-and-error approach.) The equation he wrote is the heart of the trial-and-error process, and there is a single solution.  I suspect the programmers at TI simply hadn’t thought about applying the CAS commands from one area of their software to the statistical functions in another part of their software.  Although I should have, I hadn’t thought about that either.

Following my student’s lead, I tried a couple other CAS approaches (solving systems, etc.) to no avail.  Then I shifted to a graphical approach.  Defining a function using the normal Cdf command, I was able to get a graph.

Graphing $y=0.1$ showed a single point of intersection which could then be computed numerically in the graph window to give the standard deviation from earlier.

What this says to me is that the CAS certainly has the ability to solve my student’s equation–it did so numerically in the graph screen–but for some reason this functionality is not currently available on the TI-nSpire CAS’s calculator screens.

Extensions

My statistics students just completed a unit on confidence intervals and inferential reasoning.  Rather than teaching them the embedded TI syntax for confidence intervals and hypothesis testing, I deliberately stayed with just the normal Cdf and inverse normal commands–nothing more.  A core belief of my teaching is

Memorize as little as possible and use it as broadly as possible.

By staying with just these two commands, I continued to reinforce what normal distributions are and do, concepts that some still find challenging.  What my student taught me was that perhaps I could limit these commands to just one, the normal Cdf.

For example, if you had a normal distribution with mean 38 and standard deviation 2, what x-value marks the 60th percentile?

Now that’s even more curious.  The solve command doesn’t work (even in an approximation mode), but now the numerical solve gives the solution confirmed by the inverse normal command.
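A rough numeric-solve analogue in Python, using only the cdf as in the text (SciPy’s `brentq` root-finder plays the role of the numerical solve, and the bracketing bounds are illustrative choices):

```python
from scipy.stats import norm
from scipy.optimize import brentq

# Find x with P(X <= x) = 0.6 for X ~ N(38, 2), using only the cdf.
# The bracket [30, 46] is an illustrative interval surrounding the root.
x60 = brentq(lambda x: norm.cdf(x, loc=38, scale=2) - 0.6, 30, 46)

print(x60)                              # approximately 38.5067
print(norm.ppf(0.6, loc=38, scale=2))   # same value via the inverse normal
```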

What if you wanted the bounds on the same normal distribution that define the middle 80% of the area?  As shown below, the solve command failed when I asked the CAS to compute directly for any of the equivalent distances of the bounds from the mean (line 1), the z-scores (line 2), or the location of the upper x-value (line 3).

But reversing the problem to define the 10% area of the right tail does give the desired result (line 2) (note that nsolve does work even though solve still does not) with the solution confirmed by the final two lines.
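The reversed right-tail computation can be sketched the same way:

```python
from scipy.stats import norm

# Middle 80% of N(38, 2): the right tail outside the band holds 10%,
# so the upper bound is the 90th percentile; symmetry gives the lower bound.
upper = norm.ppf(0.90, loc=38, scale=2)
lower = 2 * 38 - upper   # mirror about the mean
print(lower, upper)      # approximately 35.4369 40.5631
```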

Conclusion

Admittedly, there’s not lots of algebra involved in most statistics classes–LOADS of arithmetic and computation, but not much algebra.  I’m convinced, though, that more attention to some algebraic thinking could benefit students.  The different statistics packages out there do lots of amazing things, especially the TI-nSpire, but it would be very nice if these packages could align themselves better with CAS features to permit students to ask their questions more “naturally”.  After all, such support and scaffolding are key features that make CAS and all technology so attractive for those of us using them in the classroom.

## Binomial Probability and CAS

About a year ago, I posted an idea for using CAS with probability in a statistics course.  I’ve finally had an opportunity to use it with students in my senior one-semester statistics course over the last few weeks, so I thought I’d share some refinements.  To demonstrate the mathematics, I’ll use the following problem situation.

Assume in a given country that women represent 40% of the total work force.  A company in that country has 10 employees, only 2 of whom are women.
1) What is the probability that by pure chance a 10-employee company in that country might employ exactly 2 women?
2) What is the probability that by pure chance a 10-employee company in that country might employ 2 or fewer women?

Over a decade ago, I used binomial probability situations like this as an application of polynomial expansions, tapping Pascal’s Triangle and combinatorics to find the number of ways a group of exactly 2 women can appear in a total group size of 10.  Historically, I encouraged students to approach this problem by defining m=men and w=women and expand $(m+w)^{10}$ where the exponent was the number of employees, or more generally, the number of trials.  Because question 1 asks about the probability of exactly 2 women, I was interested in the specific term in the binomial expansion that contained $w^2$.  Whether you use Pascal’s Triangle or combinations, that term is $45w^2m^8$.  Substituting in given percentages of women and men in the workforce, $P(w)=0.4$ and $P(m)=0.6$, answers the first question.  I used a TI-nSpire to determine that there is a 12.1% chance of this.
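The arithmetic checks out in a couple of lines of Python (`math.comb` supplies the combination count from Pascal’s Triangle):

```python
from math import comb

# P(exactly 2 women among 10 employees), with P(woman) = 0.4:
# this is the 45 * w^2 * m^8 term with the probabilities substituted.
p = comb(10, 2) * 0.4**2 * 0.6**8
print(p)  # approximately 0.1209
```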

That was 10-20 years ago and I hadn’t taught a statistics course in a very long time.  I suspect most statistics classes using TI-nSpires (CAS or non-CAS) today use the binompdf command to get this probability.

The slight differences in the input parameters determine whether you get the probability of the single event or the probabilities for all of the events in the entire sample space.  The challenge for the latter is remembering that the order of the probabilities starts at 0 occurrences of the event whose probability is defined by the second parameter.  Counting over carefully from the correct end of the sequence gives the desired probability.

With my exploration of CAS in the classroom over the past decade, I saw this problem very differently when I posted last year.  The binompdf command works well, but you need to remember what the outputs mean.  The earlier algebra does this, but it is clearly more cumbersome.  Together, all of this screams (IMO) for a CAS.  A CAS could enable me to see the number of ways each event in the sample space could occur.  The TI-nSpire CAS‘s output using an expand command follows.

The cool part is that all 11 terms in this expansion are computed simultaneously.  It would be nice if the screen could display all of the terms at once, but a little scrolling leads to the highlighted term, which could then be evaluated using a substitute command.

The insight from my previous post was that when expanding binomials, the coefficients on the individual variables “receive” the same exponents as their variables in the expansion.  With that in mind, I repeated the expansion.

The resulting polynomial now shows all the possible combinations of men and women, but now each coefficient is the probability of its corresponding event.  In other words, in a single command this approach defines the entire probability distribution!  The highlighted portion above shows the answer to question 1 in a single step.
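The one-command distribution can be sketched with SymPy’s expand (variable names shortened to w and m for brevity):

```python
import sympy as sp

w, m = sp.symbols('w m')

# Expanding (0.4w + 0.6m)^10 produces the entire probability distribution:
# the coefficient of w^k * m^(10-k) is P(exactly k women).
poly = sp.expand((sp.Rational(4, 10) * w + sp.Rational(6, 10) * m) ** 10)

p2 = poly.coeff(w, 2).coeff(m, 8)  # coefficient of w^2 * m^8
print(float(p2))                   # approximately 0.1209

# Substituting 1 for both variables totals the coefficients, confirming
# that the distribution sums to 1.
print(poly.subs({w: 1, m: 1}))     # 1
```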

Last week one of my students reminded me that TI-nSpire CAS variables need not be restricted to a single character.  Some didn’t like the extra typing, but others really liked the fully descriptive output.

To answer question 2, TI-nSpire users could add up the individual binompdf outputs -OR- use a binomcdf command.

This gets the answer quickly, but suffers somewhat from the lack of descriptives noted earlier.  Some of my students this year preferred to copy the binomial expansion terms from the CAS expand command results above, delete the variable terms, and sum the results.  Then one suggested that a cool way around the somewhat cumbersome algebra would be to substitute 1s for both variables.
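The binomcdf value is easy to confirm by summing the first three terms of the distribution:

```python
from math import comb

# P(2 or fewer women among 10): the binomcdf computation as an explicit sum.
p_cum = sum(comb(10, k) * 0.4**k * 0.6**(10 - k) for k in range(3))
print(p_cum)  # approximately 0.1673
```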

CONCLUSION:  I’ve loved the way my students have developed a very organic understanding of binomial probabilities over this last unit.  They are using technology as a scaffold to support cumbersome, repetitive computations, and they have extended my initial presentation of optional ways to incorporate CAS in a few directions.  This is technology serving its appropriate role as a supporter of student learning.

OTHER CAS:  I focused on the TI-nSpire CAS for the examples above because that is the technology my students have.  Obviously any CAS system would do.  For a free, Web-based CAS system, I always investigate what Wolfram Alpha has to offer.  Surprisingly, it didn’t deal well with the expanded variable names in $(0.4women+0.6men)^{10}$.  Perhaps I could have used a syntax variation, but what to do wasn’t intuitive, so I simplified the variables here to get

Huge Pro:  The entire probability distribution with its descriptors is shown.
Very minor Con:  Variables aren’t as fully readable as with the fully expanded variables on the nSpire CAS.

## Exponential Derivatives and Statistics

This post gives a different way I developed years ago to determine the form of the derivative of exponential functions, $y=b^x$.  At the end, I provide a copy of the document I use for this activity in my calculus classes just in case that’s helpful.  But before showing that, I walk you through my set-up and solution of the problem of finding exponential derivatives.

Background:

I use this lesson after my students have explored the definition of the derivative and have computed the algebraic derivatives of polynomial and power functions. They also have access to TI-nSpire CAS calculators.

The definition of the derivative is pretty simple to apply to polynomials, but unfortunately it is not so simple to resolve for exponential functions.  I do not pretend to teach an analysis class, so I see my task as providing strong evidence–but not necessarily a watertight mathematical proof–for each derivative rule.  This post definitely is not a proof, but its results have been pretty compelling for my students over the years.

Sketching Derivatives of Exponentials:

At this point, my students also have experience sketching graphs of derivatives from given graphs of functions.  They know there are two basic graphical forms of exponential functions, and conclude that there must be two forms of their derivatives as suggested below.

When they sketch their first derivative of an exponential growth function, many begin to suspect that an exponential growth function might just be its own derivative.  Likewise, the derivative of an exponential decay function might be the opposite of the parent function.  The lack of scales on the graphs obviously keeps these from being definitive conclusions, but the hypotheses are great first ideas.  We clearly need to firm things up quite a bit.

Numerically Computing Exponential Derivatives:

Starting with $y=10^x$, the students used their CASs to find numerical derivatives at 5 different x-values.  The x-values really don’t matter, and neither does the fact that there are five of them.  The calculators quickly compute the slopes at the selected x-values.

Each point on $f(x)=10^x$ has a unique tangent line and therefore a unique derivative.  From their sketches above, my students are soundly convinced that all ordered pairs $\left( x,f'(x) \right)$ form an exponential function.  They’re just not sure precisely which one. To get more specific, graph the points and compute an exponential regression.

So, the derivatives of $f(x)=10^x$ are modeled by $f'(x)\approx 2.3026\cdot 10^x$.  Notice that the base of the derivative function is the same as its parent exponential, but the coefficient is different.  So the common student hypothesis is partially correct.

Now, repeat the process for several other exponential functions and be sure to include at least 1 or 2 exponential decay curves.  I’ll show images from two more below, but ultimately will include data from all exponential curves mentioned in my Scribd document at the end of the post.

The following shows that $g(x)=5^x$ has derivative $g'(x)\approx 1.6094\cdot 5^x$.  Notice that the base again remains the same with a different coefficient.

OK, the derivative of $h(x)=\left( \frac{1}{2} \right)^x$ causes a bit of a hiccup.  Why should I make this too easy?  <grin>

As all of its $h'(x)$ values are negative, the semi-log regression at the core of an exponential regression is impossible.  But I also teach my students regularly that if you don’t like the way a problem appears, CHANGE IT!  Reflecting these data over the x-axis creates a standard exponential decay which can be regressed.

From this, they can conclude that  $h'(x)\approx -0.69315\cdot \left( \frac{1}{2} \right)^x$.

So, every derivative of an exponential function appears to be another exponential function whose base is the same as its parent function with a unique coefficient.  Obviously, the value of the coefficient depends on the base of the corresponding parent function.  Therefore, each derivative’s coefficient is a function of the base of its parent function.  The next two shots show the values of all of the coefficients and a plot of the (base,coefficient) ordered pairs.
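The whole numerical experiment compresses into a few lines of Python: a central-difference slope estimate for $b^x$, divided by $b^x$, recovers each base’s coefficient, and the results land on $\ln(b)$ (step size and sample bases are illustrative choices):

```python
import math

def coefficient(b, x=0.0, h=1e-6):
    """Estimate c in d/dx(b^x) = c * b^x via a central difference."""
    slope = (b ** (x + h) - b ** (x - h)) / (2 * h)
    return slope / b ** x

# Each row prints base, estimated coefficient, ln(base); the last
# two columns agree for every base tried.
for b in [10, 5, 0.5]:
    print(b, round(coefficient(b), 5), round(math.log(b), 5))
```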

OK, if you recognize the patterns of your families of functions, that data pattern ought to look familiar–a logarithmic function.  Applying a logarithmic regression gives

For $y=a+b\cdot \ln(x)$, $a\approx -0.0000067\approx 0$ and $b\approx 1$, giving $coefficient(base) \approx \ln(base)$.

Therefore, $\frac{d}{dx} \left( b^x \right) = \ln(b)\cdot b^x$.

Again, this is not a formal mathematical proof, but the problem-solving approach typically keeps my students engaged until the end, and asking them to discover the derivative rule for exponential functions typically results in very few future errors when computing exponential derivatives.

Feedback on the approach is welcome.

Classroom Handout:

Here’s a link to a Scribd document written for my students who use TI-nSpire CASs.  There are a few additional questions at the end.  Hopefully this post and the document make it easy enough for you to adapt this to the technology needs of your classroom.  Enjoy.