## Midpoints, midpoints, everywhere!

I didn’t encounter the Quadrilateral Midpoint Theorem (QMT) until I had been teaching a few years.  Following is a minor variation on my approach to the QMT this year plus a fun way I leveraged the result to introduce similarity.

In case you haven’t heard of it, the surprisingly lovely QMT says that if you connect, in order, the midpoints of the four sides of any quadrilateral (even one that is concave or whose sides cross), the resulting figure will always be a parallelogram.

This is a cool and easy property to explore on any dynamic geometry software package (GeoGebra, TI-Nspire, Cabri, …).

SKETCH OF THE TRADITIONAL PROOF:  The proof is often established through triangle similarity:  the segment connecting the midpoints of two sides of a triangle is parallel to, and half the length of, the triangle’s third side.  Draw either diagonal of the quadrilateral to create two triangles.  In each triangle, the segment connecting the midpoints of the two sides not on the diagonal is parallel to the diagonal and half its length.  Those two midpoint segments are therefore parallel and congruent to each other, so the quadrilateral connecting all four midpoints must be a parallelogram.

NEW APPROACH THIS YEAR:  I hadn’t yet led my class into similarity, but having just introduced coordinate proofs, I tried an approach I’d never used before.  I assigned a coordinate proof of the QMT.  I knew the traditional approach existed, but I wanted them to practice their new technique.  From a lab in December, they already knew the result of the QMT, but they hadn’t proved it.

PART I:  Let quadrilateral ABCD be defined by the points A=(a,b), B=(c,d), C=(e,f), and D=(g,h).  There are several ways to prove that the midpoints of the sides of ABCD are the vertices of a parallelogram.  Provide one such coordinate proof.

All groups quickly established the midpoints of the four sides:  $AB_{mid}=\left( \frac{a+c}{2},\frac{b+d}{2} \right)$, $BC_{mid}=\left( \frac{c+e}{2},\frac{d+f}{2} \right)$, $CD_{mid}=\left( \frac{e+g}{2},\frac{f+h}{2} \right)$, and $DA_{mid}=\left( \frac{g+a}{2},\frac{h+b}{2} \right)$.  From there, my students took three approaches to the final proof, each relying on a different sufficiency condition for parallelograms.

The most common was to show that opposite sides were parallel.  $\displaystyle slope \left( AB_{mid} \text{ to } BC_{mid} \right) = \frac{\frac{b-f}{2}}{\frac{a-e}{2}}=\frac{b-f}{a-e}$ and $\displaystyle slope \left( CD_{mid} \text{ to } DA_{mid} \right) =\frac{b-f}{a-e}$, making those two midpoint segments parallel.  Likewise, $\displaystyle slope \left( BC_{mid} \text{ to } CD_{mid} \right) =$ $\displaystyle slope \left( DA_{mid} \text{ to } AB_{mid} \right) = \frac{d-h}{c-g}$, proving the other pair of opposite sides also parallel.  With both pairs of opposite sides parallel, the midpoint quadrilateral was necessarily a parallelogram.

I had two groups leverage the fact that the diagonals of a parallelogram bisect each other.  $\displaystyle midpoint \left( AB_{mid} \text{ to } CD_{mid} \right) = \left( \frac{a+c+e+g}{4},\frac{b+d+f+h}{4}\right) = midpoint \left( BC_{mid} \text{ to } DA_{mid} \right)$.  QED.

One student even proved that opposite sides were congruent.
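For anyone wanting a quick sanity check of the algebra before (or alongside) a CAS, the bisecting-diagonals argument is easy to verify numerically.  This is a sketch, not a proof: `midpoint` is a helper name I am introducing here, and exact rational arithmetic avoids floating-point noise.

```python
# Numeric spot-check of the QMT: for random integer quadrilaterals, the
# diagonals of the midpoint figure share a midpoint, so it is a parallelogram.
from fractions import Fraction
import random

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

random.seed(1)
for _ in range(1000):
    A, B, C, D = [(Fraction(random.randint(-50, 50)),
                   Fraction(random.randint(-50, 50))) for _ in range(4)]
    M_AB, M_BC, M_CD, M_DA = (midpoint(A, B), midpoint(B, C),
                              midpoint(C, D), midpoint(D, A))
    # Diagonals of the midpoint quadrilateral bisect each other:
    assert midpoint(M_AB, M_CD) == midpoint(M_BC, M_DA)
print("all midpoint quadrilaterals checked out")
```

A check like this convinces quickly, but only the coordinate proof above settles the question for every quadrilateral at once.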

While it was not readily available for my students this year, I can imagine allowing CAS for these manipulations if I use this activity in the future.

EXTENDING THE QMT TO SIMILARITY:  For the next stage, I asked my students to explain what happens when the QMT is applied to degenerate quadrilaterals.

PART II:  You could think of triangles as degenerate quadrilaterals in which two vertices coincide, giving one side of the quadrilateral length 0.  Apply this to the generic quadrilateral ABCD from above, where points A and D coincide to create triangle BCD.  Use this to explain how the segment connecting the midpoints of any two sides of a triangle is related to the third side of the triangle.

I encourage you to construct this using a dynamic geometry package, but here’s the result:

Notice the parallelogram still exists and forms two midpoint segments on the triangle (degenerate quadrilateral).  By parallelogram properties, each of these segments is parallel and congruent to the opposite side of the parallelogram, making them parallel to and half the length of the opposite side of the triangle.

CONCLUSION:  I think it is critical to teach in a way that draws connections between ideas and units.  This exercise made a lovely transition from quadrilaterals through coordinate proofs to the triangle midpoint theorem.

## Optimization in Four Colors

I suspect many (most?) geometry teachers know of the Four Color Theorem (FCT), which roughly states that any flat map containing only contiguous regions with finite perimeter, no matter how complex, can be colored with no more than four colors, with the only restriction being that regions sharing any boundary beyond a finite set of points must receive different colors.  While the FCT is not particularly useful to cartographers, it has historical significance as the first major mathematical theorem whose proof was established with extensive use of computers.

THE PROBLEM:  In secondary geometry classes, the FCT is typically just a footnote or factoid, but it is pretty easy to understand for students of all levels.  This year I decided to make it more interesting as an optimization problem.  If each color you use has a different “cost” per area unit, can you color a given map as “cheaply” as possible?

[I considered a maximum cost map, too, and convinced myself that the maximum cost map is just a flip of all the colors, assuming the change in cost is the same between all colors.  With that thought, saving money seemed the more "realistic" goal, so I went with minimum cost.]

MOTIVATIONS:  Perhaps the BEST part of this project was that I was not–and still am not–convinced that we have found THE optimal solution.  I was reasonably certain that I could determine a very good mapping cost, but the sheer number of possibilities would require significant computer run time and coding abilities (just like the original FCT proof!) to ferret out the best answer–resources not available to those solving the problem (the computing problem is an issue my school is actively addressing).  I loved having a problem in math where determined students might best their teacher–and some did!!  I also liked that this project significantly motivated my students to use spreadsheets to track their data–a different math resource than most were accustomed to using.

IMPLEMENTATION:  Experimenting, I decided to offer different versions of the coloring challenge to my 4th-5th grade math club and all of my 8th grade math classes (prealgebra, algebra, & geometry).

Project 1:  Our 8th grade humanities course had an Africa unit earlier in the year, so I returned there by asking all of the students to color this map of Africa.

We provided this spreadsheet of country names and areas along with these coloring costs:  Purple = $2.00/mi^2, Yellow = $2.50/mi^2, Red = $3.00/mi^2, and Blue = $3.50/mi^2.  After some discussions on the first day, the “border” rule was revised to note that countries separated only by large lakes (Democratic Republic of the Congo & Tanzania, plus Chad & Nigeria) could be considered “not touching” for this project.

Political incorrectness confession:  We noticed a day after we assigned the project that we had inadvertently left off the relatively new South Sudan. I decided to leave the two Sudans as a single country for this exercise (thus the inked in portion of the map).  Having compromised the previous day on the lake-bordered countries, my error accidentally made the largest and 3rd largest African countries border each other–a nice confounding problem, I thought, for forcing students to determine which would get a cheaper color.

Project 2:  I gave the relatively simpler map of the lower 48 US States to our 4th-5th grade math club with coloring restrictions Red=$1.00, Yellow=$1.25, Blue=$1.50, and Green=$1.75.

RESULTS:  For the submission, students (working alone or in pairs) had to submit their colored map, a spreadsheet showing their computations, and 1-3 paragraphs explaining their general coloring strategies, especially how they handled the inevitable situations where their strategies self-conflicted.  In general, we could have done a better job preparing students for the written portion, but the most commonly stated strategies were

1. (Low level) We colored the biggest countries the cheapest as far as we could, and then colored the next largest using the next cheapest color, etc.  If we ran into conflicts, we “worked it out”.

2. (Stronger) After trying the obvious strategy above, some groups noticed that the countries colored with the 2nd cheapest color surrounding a “cheapest color” country often had more total area than the “cheapest color” country itself.  By paying a little more for the largest country, they more than made up for the added expense by using the cheapest color on a collection of countries with greater total area.

3. A few members of my math club found specific structural strategies; for example, the 11-state ring of US states (MO-IL-IN-OH-WV-VA-NC-GA-AL-MS-AR) surrounding Kentucky & Tennessee made it possible to use just two alternating colors over a large area.

Using our color schemes, excellent scores for the US map were very close to, but just over, $1,000,000.  The best Africa map scores we found were just under $17,000,000.  As I noted earlier, I’m not at all convinced that we have found the optimum values, but part of the fun of these projects was that anyone with some calm logic and determination could break through.  My second-best coloring scheme came from a student who had been exposed to the least amount of math.  If you can beat these scores, please share.
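For small maps, the whole optimization can be brute-forced, which makes a nice follow-up for students with some coding exposure.  Everything below is hypothetical: the five regions, their areas, and their borders are a toy map of my own invention, not the Africa or US data, though the color costs echo the math club’s $1.00-$1.75 scheme.

```python
# Brute-force minimum-cost coloring of a tiny hypothetical "map" graph.
from itertools import product

regions = ["A", "B", "C", "D", "E"]                  # hypothetical regions
area    = {"A": 10, "B": 7, "C": 7, "D": 7, "E": 2}  # hypothetical areas
borders = [("A","B"), ("A","C"), ("A","D"), ("B","C"), ("C","D"), ("D","E")]
costs   = [1.00, 1.25, 1.50, 1.75]                   # cost per area unit

best_cost, best_coloring = float("inf"), None
for assign in product(range(4), repeat=len(regions)):
    coloring = dict(zip(regions, assign))
    if any(coloring[r] == coloring[s] for r, s in borders):
        continue                         # bordering regions must differ
    cost = sum(area[r] * costs[coloring[r]] for r in regions)
    if cost < best_cost:
        best_cost, best_coloring = cost, coloring
print(best_cost, best_coloring)
```

Even in this toy graph, the cheapest coloring gives the cheapest color not to the single largest region but to a pair of non-touching regions with greater combined area, echoing the students’ stronger strategy #2 above.  Of course, brute force collapses quickly: four colors on the 48 states is $4^{48}$ assignments, which is why nobody is certain of the true optima.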

VARIATIONS:  After playing with this for a while, I’m convinced that all optimal solutions depend on the gap you set between the color costs.  The more expensive the next color is, the more motivation you have to not change colors.  I haven’t tried it, but I think strategy #2 above could be exploited more often if paint color jumps are smaller on a large, complicated map.

I’m also convinced that the initial paint cost is irrelevant to the optimal coloring.  Raising or lowering the cheapest color’s cost (while keeping the gaps between colors fixed) changes every scheme’s total by the same amount (the base-cost change times the total map area), so it cannot change which coloring wins.

I didn’t play with different step values in paint cost, but I can see that potentially changing the game, especially if the cost jumps increase as you approach the 4th color.

Enjoy.

## Powers of 2

Yesterday, James Tanton posted a fun little problem on Twitter:

So, 2 is one more than $1=1^2$, and 8 is one less than $9=3^2$, and Dr. Tanton wants to know if there are any other powers of two that differ from a perfect square by exactly one unit.

While this problem may not have any “real-life relevance”, it demonstrates what I describe as the power and creativity of mathematics.  Among the infinitely many powers of two, how can someone know for certain whether any others are or are not within one unit of a perfect square?  No one will ever be able to see every number in the list of powers of two, but variables and mathematics give you the tools to deal with all possibilities at once.

For this problem, let D and N be positive integers.  Translated into mathematical language, Dr. Tanton’s problem is equivalent to asking if there are values of D and N for which $2^D=N^2 \pm 1$.  With a single equation in two unknowns, this is where observation and creativity come into play.  I suspect there may be more than one way to approach this, but my solution follows.  Don’t read any further if you want to solve this for yourself.

WARNING:  SOLUTION ALERT!

Because D is a positive integer, the left side of $2^D=N^2 \pm 1$ is always even.  That means $N^2$, and therefore N, must be odd.  Because N is odd, I know $N=2k+1$ for some whole number k.  Rewriting our equation gives $2^D=(2k+1)^2 \pm 1$, and the right side equals either $4k^2+4k$ or $4k^2+4k+2$.

Factoring the first expression gives $2^D=4k^2+4k=4k(k+1)$.  Notice that this contains the product of two consecutive integers, k and $k+1$, so one of those factors (even though I don’t know which one) must be odd.  The only odd number that is a factor of a power of two is 1, so either $k=1$ or $k+1=1 \rightarrow k=0$.  But $k=0$ makes the right side 0, which is not a power of two, so only $k=1$ survives, giving $N=3$ and $2^D=8 \rightarrow D=3$:  the solution $8=3^2-1$.  No other possibilities arise from this expression, no matter how far down the list of powers of two you go.
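A quick computational scan is reassuring alongside the algebra.  This sketch checks which of the first 64 powers of two differ from a perfect square by exactly one unit; the cutoff is arbitrary, and only the algebra handles the rest of the infinite list.

```python
# Scan the first 64 powers of two for values exactly one unit from a square.
from math import isqrt

def next_to_square(n):
    r = isqrt(n)                       # floor of the square root of n
    # n is one unit from a square iff it is distance 1 from r^2 or (r+1)^2
    return any(abs(n - s * s) == 1 for s in (r, r + 1))

hits = [2 ** d for d in range(64) if next_to_square(2 ** d)]
print(hits)  # -> [2, 8]
```

No matter how far the scan runs, only 2 and 8 appear, exactly as the algebra predicts.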
But what about the other expression?  Factoring again gives $2^D=4k^2+4k+2=2 \cdot \left( 2k^2+2k+1 \right)$.  The expression in parentheses must be odd because its first two terms are both multiplied by 2 (making them even) and then one is added (making the overall sum odd).  Again, 1 is the only odd factor of a power of two, and that happens here only when $k=0$, which gives $N=1$ and $2^D=2 \rightarrow D=1$:  the other solution, $2=1^2+1$.

Because no other algebraic solutions are possible, the two solutions Dr. Tanton gave in the problem statement are the only two times in the entire universe of perfect squares and powers of two where elements of those two lists are within a single unit of each other.

Math is sweet.

## Base-x Numbers and Infinite Series

In my previous post, I explored what happened when you converted a polynomial from its variable form into a base-x numerical form.  That is, what are the computational implications when polynomial $3x^3-11x^2+2$ is represented by the base-x number $3(-11)02_x$, where the parentheses are used to hold the base-x digit, -11, for the second power of x?  So far, I’ve explored only the Natural number equivalents of base-x numbers.  In this post, I explore what happens when you allow division to extend base-x numbers into their Rational number counterparts.

Level 5–Infinite Series:  Numbers can have decimals, so what’s the equivalent for base-x numbers?  For starters, I considered trying to get a “decimal” form of $\displaystyle \frac{1}{x+2}$.  It was “obvious” to me that $12_x$ won’t divide evenly into $1_x$.  There are too few “places”, so some form of decimal is required.
Employing division as described in my previous post, somewhat like you would to determine the rational number decimals of $\frac{1}{12}$, gives the decimal digits 1, -2, 4, -8, ….

Remember, the places are powers of x, so the decimal portion of $\displaystyle \frac{1}{x+2}$ is $0.1(-2)4(-8)..._x$, and it is equivalent to $\displaystyle 1x^{-1}-2x^{-2}+4x^{-3}-8x^{-4}+...=\frac{1}{x}-\frac{2}{x^2}+\frac{4}{x^3}-\frac{8}{x^4}+...$.  This can be seen as a geometric series with first term $\displaystyle \frac{1}{x}$ and ratio $\displaystyle r=\frac{-2}{x}$.  Its infinite sum is therefore $\displaystyle \frac{\frac{1}{x}}{1-\frac{-2}{x}}$, which is equivalent to $\displaystyle \frac{1}{x+2}$, confirming the division computation.  Of course, as a geometric series, this is true only so long as $\displaystyle |r|=\left | \frac{-2}{x} \right |<1$, or $2<|x|$.

I thought this was pretty cool, and it led to lots of other cool series.  For example, if $x=8$, you get $\frac{1}{10}=\frac{1}{8}-\frac{2}{64}+\frac{4}{512}-...$.  Likewise, $x=3$ gives $\frac{1}{5}=\frac{1}{3}-\frac{2}{9}+\frac{4}{27}-\frac{8}{81}+...$.  I found it quite interesting to have a “polynomial” defined with a rational expression.

Boundary Convergence:  As shown above, $\displaystyle \frac{1}{x+2}=\frac{1}{x}-\frac{2}{x^2}+\frac{4}{x^3}-\frac{8}{x^4}+...$ only for $|x|>2$.  At $x=2$, the series is obviously divergent:  $\displaystyle \frac{1}{4} \ne \frac{1}{2}-\frac{2}{4}+\frac{4}{8}-\frac{8}{16}+...$.  For $x=-2$, I got $\displaystyle \frac{1}{0} = \frac{1}{-2}-\frac{2}{4}+\frac{4}{-8}-\frac{8}{16}+...=-\frac{1}{2}-\frac{1}{2}-\frac{1}{2}-\frac{1}{2}-...$, which is properly equivalent to $-\infty$ as $x \rightarrow -2^-$, matching the convergence domain and the graphical behavior of $\displaystyle y=\frac{1}{x+2}$ just to the left of $x=-2$.  Nice.
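The convergence condition $2<|x|$ is easy to watch numerically.  This sketch, using exact rational arithmetic, sums the first several terms of the series at $x=3$, where the partial sums should close in on $\frac{1}{5}$; `partial_sum` is a helper name of my own.

```python
# Partial sums of the base-x "decimal" 1/x - 2/x^2 + 4/x^3 - ... at x = 3,
# which should approach 1/(x+2) = 1/5.
from fractions import Fraction

def partial_sum(x, terms):
    # sum of (-2)^(k-1) / x^k for k = 1..terms
    return sum(Fraction((-2) ** (k - 1), x ** k) for k in range(1, terms + 1))

for n in (5, 10, 20):
    s = partial_sum(3, n)
    print(n, float(s), float(abs(s - Fraction(1, 5))))
```

Because the ratio at $x=3$ is $-\frac{2}{3}$, the error shrinks by about a third with each extra term; trying $x=2$ instead shows the partial sums bouncing without settling, matching the boundary divergence above.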
I did find it curious, though, that $\displaystyle \frac{1}{x}-\frac{2}{x^2}+\frac{4}{x^3}-\frac{8}{x^4}+...$ is a solid approximation for $\displaystyle \frac{1}{x+2}$ to the left of its vertical asymptote, but not for its rotationally symmetric right side.  I also thought it philosophically strange (even though I understand mathematically why it must be) that this series could approximate function behavior near a vertical asymptote, but not near the graph’s stable and flat portion near $x=0$.  What a curious, asymmetrical approximator.

Maclaurin Series:  Some quick calculus gives the Maclaurin series for $\displaystyle \frac{1}{x+2}$:  $\displaystyle \frac{1}{2}-\frac{x}{4}+\frac{x^2}{8}-\frac{x^3}{16}+...$, a geometric series with first term $\frac{1}{2}$ and ratio $\frac{-x}{2}$.  Interestingly, the ratio emerging from the Maclaurin series is the reciprocal of the ratio from the “rational polynomial” resulting from the base-x division above.  As a geometric series, the interval of convergence is $\displaystyle |r|=\left | \frac{-x}{2} \right |<1$, or $|x|<2$.  Excluding endpoint results, the Maclaurin interval is the complete Real number complement to the base-x series.  At the endpoints, $x=-2$ produces in the Maclaurin series the right-side divergence to $+\infty$ at the vertical asymptote, just as $x=-2$ produced the left-side divergence to $-\infty$ in the base-x series, and $x=2$ is divergent in both.  It’s lovely how these two series so completely complement each other to create clean approximations of $\displaystyle \frac{1}{x+2}$ for all $x \ne 2$.

Other base-x “rational numbers”:  Because any polynomial divided by another is absolutely equivalent to a base-x rational number, and thereby a base-x decimal number, it will always be possible to create a “rational polynomial” using powers of $\displaystyle \frac{1}{x}$ for non-zero denominators.  But the decimal patterns of rational base-x numbers don’t behave the same way as those for Natural number bases.
Where $\displaystyle \frac{1}{12}$ is guaranteed to have a repeating decimal pattern, the decimal form of $\displaystyle \frac{1}{x+2}=\frac{1_x}{12_x}=0.1(-2)4(-8)..._x$ clearly will not repeat.  I’ve not explored the full potential of this, but it seems like another interesting field.

CONCLUSIONS and QUESTIONS:  Once number bases are understood, I’d argue that using base-x multiplication might be, and base-x division definitely is, a cleaner way to compute products and quotients, respectively, for polynomials.  The base-x division algorithm clearly is accessible to Algebra II students, and even opens the doors to studying series approximations to functions long before calculus.

Is there a convenient way to use base-x numbers to represent horizontal translations as cleanly as polynomials?  How difficult would it be to work with a base-$(x-h)$ number for a polynomial translated h units horizontally?

As a calculus extension, what would happen if you tried employing division of non-polynomials by replacing them with their Taylor series equivalents?  I’ve played a little with proving some trig identities using base-x polynomials from the Maclaurin series for sine and cosine.

What would happen if you tried to compute continued fractions in base-x?  It’s an open question from my perspective when decimal patterns might terminate or repeat when evaluating base-x rational numbers.

I’d love to see someone out there give some of these questions a run!

## Number Bases and Polynomials

About a month ago, I was working with our 5th grade math teacher to develop some extension activities for some students in an unleveled class.  The class was exploring place value, and I suggested that some might be ready to explore what happens when you allow the number base to be something other than 10.  A few students had some fun learning to use their basic four algorithms in other number bases, but I made an even deeper connection.
When writing something like 512 in expanded form ($5\cdot 10^2+1\cdot 10^1+2\cdot 10^0$), I realized that if the 10 were an x, I’d have a polynomial.  I’d recognized this before, but this time I wondered what would happen if I applied basic math algorithms to polynomials written in a condensed numerical form instead of their standard expanded form.  That is, could I do basic algebra on $5x^2+x+2$ if I thought of it as $512_x$–a base-x “number”?  (To avoid other confusion later, I read this as “five one two base-x“.)  Following are some examples I played with to convince myself how my new notation would work.  I’m not convinced that this will ever lead to anything, but following my “what ifs” all the way to infinite series was a blast.  Read on!

Level 1–Basic Addition:  If I wanted to add $(3x+5)$ and $(2x^2+4x+1)$, I could think of it as $35_x+241_x$ and add the numbers “normally” to get $276_x$, or $2x^2+7x+6$.  Notice that each power of x identifies a “place value” for its characteristic coefficient.

If I wanted to add $3x-7$ to itself, I had to adapt my notation a touch.  The “units digit” is a negative number, but since the number base, x, is unknown (or variable), I ended up saying $3x-7=3(-7)_x$.  The parentheses are used to contain multiple characters in a single place value.  Then $(3x-7)+(3x-7)$ becomes $3(-7)_x+3(-7)_x=6(-14)_x$, or $6x-14$.  Notice the expanded parentheses containing the base-x units digit.

Level 2–Advanced Addition:  The last example also showed me that simple multiplication would work.  Adding $3x-7$ to itself is equivalent to multiplying $2\cdot (3x-7)$.  In base-x, that is $2\cdot 3(-7)_x$.  That’s easy!  Arguably, this might be even easier than doubling a number when the number base is known.  Without interactions between the coefficients of different place values, just double each digit to get $6(-14)_x=6x-14$, as before.

What about $(x^2+7)+(8x-9)$?  That’s equivalent to $107_x+8(-9)_x$.  While simple, I’ll solve this one by stacking,
and this is $x^2+8x-2$.  As with base-10 numbers, a 0 is needed to hold place values, exactly as I needed a 0 to hold the $x^1$ place for $x^2+7$.  Again, this could easily be accomplished without the number base conversion, but how much further can we push these boundaries?

Level 3–Multiplication & Powers:  Compute $(8x-3)^2$.  Stacking again and using a modification of the multiply-and-carry algorithm I learned in grade school, I got a result equivalent to $64x^2-48x+9$.  All other forms of polynomial multiplication work just fine, too.

From one perspective, all of this shifting to a variable number base could be seen as completely unnecessary.  We already have acceptably working algorithms for addition, subtraction, and multiplication.  But I really like how this approach completes the connection between numerical and polynomial arithmetic.  The rules of math don’t change just because you introduce variables.  For some students, I’m convinced this might make a big difference in understanding.  I also like how easily this extends polynomial-by-polynomial multiplication far beyond the bland monomial and binomial products that proliferate in virtually all modern textbooks.  Also banished here is any need at all for banal FOIL techniques.

Level 4–Division:  What about $x^2+x-6$ divided by $x+3$?  In base-x, that’s $11(-6)_x \div 13_x$.  Remembering that no place value carrying is possible, I had to be a little careful when setting up my computation.  Focusing only on the lead digits, 1 “goes into” 1 one time.  Multiplying the partial quotient by the divisor, writing the result below, and subtracting leaves the next dividend.  Then 1 “goes into” -2 negative two times.  Multiplying and subtracting gives a remainder of 0, thereby confirming that $x+3$ is a factor of $x^2+x-6$, and the other factor is the quotient, $x-2$.  Perhaps this could be used as an alternative to other polynomial division algorithms.
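Because the base-x digits are nothing more than coefficients listed by place value, the Level 1-3 algorithms are easy to sketch in code.  Here each “number” is a list of digits, highest place first; the function names are mine, not standard.

```python
# Base-x "numbers" as digit (coefficient) lists, highest place first,
# e.g. 3x - 7  ->  [3, -7].  No carrying is possible because base x is unknown.

def add(p, q):
    # pad the shorter number with leading zero place values, then add digitwise
    n = max(len(p), len(q))
    p = [0] * (n - len(p)) + p
    q = [0] * (n - len(q)) + q
    return [a + b for a, b in zip(p, q)]

def multiply(p, q):
    # shift-and-add, exactly like grade-school multiplication without carries
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

print(add([3, 5], [2, 4, 1]))      # (3x+5) + (2x^2+4x+1)  ->  [2, 7, 6]
print(multiply([8, -3], [8, -3]))  # (8x-3)^2  ->  [64, -48, 9]
```

Both outputs match the worked examples above: $276_x$ and $64x^2-48x+9$.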
It is somewhat similar to the synthetic division technique, without its significant limitations:  it is not limited to linear divisors with lead coefficients of one.  For $(4x^3-5x^2+7) \div (2x^2-1)$, think $4(-5)07_x \div 20(-1)_x$.  Stacking and dividing gives $\displaystyle \frac{4x^3-5x^2+7}{2x^2-1}=2x-2.5+\frac{2x+4.5}{2x^2-1}$.

CONCLUSION:  From all I’ve been able to tell, converting polynomials to their base-x number equivalents enables you to perform all of the same arithmetic computations.  For division in particular, it seems this method might even be a bit easier.  In my next post, I push the exploration of these base-x numbers into infinite series.

## Dynamic Linear Programming

My department is exploring the pros and cons of different technologies for use in teaching our classes.  Two teachers shared ways to use Desmos and GeoGebra in lessons using inequalities one day; we explored the same situation using the TI-Nspire in the following week’s meeting.

For this post, I’m assuming you are familiar with solving linear programming problems.  Some very nice technology-assisted exploration ideas are developed in the latter half of this post.  My goal is to show some cool ways we discovered to use technology to evaluate these types of problems and enhance student exploration.  Our insights follow the section considering two different approaches to graphing the feasible region.

For context, we used a dirt-biker linear programming problem from NCTM’s Illuminations Web pages.  Assuming x = the number of Riders built and y = the number of Rovers built, the problem information defines a system of constraint inequalities.  We also learn on page 7 of the Illuminations activity that Apu makes a $15 profit on each Rider and $30 per Rover.  That means an Optimization Equation for the problem is $Profit=15x+30y$.

GRAPHING THE FEASIBLE REGION:  Graphing all of the inequalities simultaneously determines the feasible region for the problem.
This can be done easily with all three technologies, but the Nspire requires solving the inequalities for y first.  Because the Desmos solutions are easily accessible as Web pages rather than separate files, the remainder of this post compares the Desmos and GeoGebra solutions, with images from Desmos until the point where GeoGebra operates differently.

Both Desmos and GeoGebra can graph these inequalities from natural inputs–entering math sentences as you would write them from the problem information, without solving for a specific variable.

As with many more complicated linear programming problems, graphing all the constraints at once can make a visually complicated feasible region graph.  So, we decided to reverse all of our inequalities, effectively shading the non-feasible region instead.  Any points that emerged unshaded were possible solutions to the Dirt Bike problem (image below, file here).  All three packages shift properly between solid and dashed lines to show included and excluded boundaries, respectively.

Traditional Approach:  I (as well as almost all teachers, I suspect) have traditionally done some hand-waving at this point to convince (or tell) students that while any ordered pair in the unshaded region or on its boundary (all are dashed) is a potential solution, any optimal solution occurs on the boundary of the feasible region.  Hopefully teachers ask students to plug ordered pairs from the feasible region into the Optimization Equation to show that the profit does vary depending on what is built (duh), and we hope they eventually discover (or memorize) that the maximum or minimum profit occurs on the edges–usually at a corner for the rigged setups of most linear programming problems in textbooks.  Thinking about this led to several lovely technology enhancements.

INSIGHT 1:  Vary a point.  During our first department meeting, I was suddenly dissatisfied with how I’d always introduced this idea to my classes.
That unease, and our play with the simplicity of adding sliders in Desmos, led me to try graphing a random ordered pair.  I typed (a,b) on an input line, and Desmos asked if I wanted sliders for both variables.  Sure, I thought (image below, file here).  See my ASIDE note below for a philosophical point on the creation of (a,b).  GeoGebra and the Nspire require one additional step to create/insert sliders, but GeoGebra’s naming conventions led to a smoother presentation–see below.

BIG ADVANTAGE:  While the Illuminations problem we were using had convenient vertices, we realized that students could now drag (a,b) anywhere on the graph (especially along the boundaries and to vertices of the feasible region) to determine coordinates.  Establishing exact coordinates of those points still required plugging into equations and possibly solving systems of equations (a possible entry for CAS!).  However discovered, critical coordinates were suddenly much easier to identify in any linear programming question.

HUGE ADVANTAGE:  Now that the point was variably defined, the Optimization Equation could be, too!  Rewriting and entering the Optimization Equation as an expression in terms of a and b, I took advantage of Desmos being a calculator, not just a grapher.  Notice the profit value on the left of the image.  With this, users can drag (a,b) and see not only the coordinates of the point, but also the value of the profit at the point’s current location!  Check out the live version here to see how easily Desmos updates this value as you drag the point.

From this dynamic setup, I believe students now can learn several powerful ideas through experimentation that traditionally would have been told/memorized.

STUDENT DISCOVERIES:

1. Drag (a,b) anywhere in the feasible region.  Not surprisingly, the profit’s value varies with (a,b)‘s location.

2. The profit appears to change at a constant rate along each edge.  Confirm this by dragging (a,b) steadily along any edge of the feasible region.

3. While there are many values the profit could assume in the feasible region, some quick experimentation suggests that the largest and smallest profit values occur at the vertices of the feasible region.

4. DEEPER:  While point 3 is true, many teachers and textbooks mistakenly proclaim that solutions occur only at vertices.  In fact, it is technically possible for a problem to have an infinite number of optimal solutions.  This realization is discussed further in the CONCLUSIONS.

ASIDE:  I was initially surprised that the variable point on the Desmos graph was directly draggable.  From a purist’s perspective, this troubled me because the location of the point depends on the values of the sliders, so I shouldn’t be able to move the point and thereby change the values of its defining sliders.  Still, the simplicity of what I was able to do with the problem as a result quickly led me to forgive the two-way dependency between Desmos’ sliders and the objects they define.

GEOGEBRA’S VERSION:  In some ways, this result was even easier to create in GeoGebra.  After graphing the feasible region, I selected the Point tool and clicked once on the graph.  Voila!  The variable point was fully defined, which avoids the purist issue I raised in the ASIDE above.  As a bonus, the point was also named.  Unlike Desmos, GeoGebra permits multi-character function names.  Defining $Profit(x,y)=15x+30y$ and entering $Profit(A)$ allowed me to see the profit value change as I dragged point A, just as in the Desmos solution.  The $Profit(A)$ value was dynamically computed in GeoGebra as a number value in its Algebra screen.  A live version of this construction is on GeoGebraTube here.

At first, I wasn’t sure if the last command–entering a single point into a multivariable function–would work, but since A was a multivariable point, GeoGebra nicely handled the transition.  Dragging A around the feasible region updated the current profit value just as easily as Desmos did.

INSIGHT 2:  Slide a line.
OK, this last point is really an adaptation of a technique I learned from some of my mentors when I started teaching years ago, but how I will use it in the future is much cleaner and more expedient.  I thought line slides were a commonly known technique for solving linear programming problems, but conversations with some of my colleagues have convinced me that not everyone knows the approach.

Recall that each point in the feasible region has its own profit value.  Instead of sliding a point to determine a profit, why not pick a particular profit and determine all points with that profit?  As an example, if you wanted to see all points that had a profit of $100, the Optimization Equation becomes $Profit=100=15x+30y$.  A graph of this line (in solid purple below) passes through the feasible region.  All points on this line within the feasible region are the values where Apu could build dirt bikes and get a profit of $100.  (Of course, only integer ordered pairs are realistic.)

You could replace the 100 in the equation with different values and repeat the investigation.  But if you’re already thinking about the dynamic power of the software, I hope you will have realized that you could define the profit as a slider, reset the slider’s bounds, and then scan through lots of different solutions with ease.  One instance is shown below; a live Desmos version is here.  GeoGebra and the Nspire set up the same way, except you must define their slider before you define the line.  Both allow you to define the slider as “profit” instead of just “P”.

CONCLUSIONS:  From here, hopefully it is easy to extend Student Discovery 3 from above.  By changing the P slider, you see a series of parallel lines (prove this!).  As the value of P grows, the line moves up in this Illuminations problem.  Through a little experimentation, it should become obvious that as P rises, the last place the profit line touches the feasible region will be at a vertex.
Experiment with the P slider here to convince yourself that the maximum profit for this problem is $165 at the point $(x,y)=(3,4)$.  Apu should make 3 Riders and 4 Rovers to maximize profit.  Similarly (and obviously), Apu’s minimum profit is $0 at $(x,y)=(0,0)$, making no dirt bikes.
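The dragging experiment also translates directly to code:  evaluate the profit at every feasible integer point and look for the extremes.  The constraint set below is a hypothetical stand-in (the actual Illuminations inequalities are not reproduced in this post), chosen only so that the optimum lands at the $(3,4)$, $165 answer discussed above; the profit function is the problem’s real one.

```python
# Evaluate Profit = 15x + 30y over every feasible integer point and find
# the maximum.  The constraints are hypothetical, for illustration only.

def feasible(x, y):
    # hypothetical constraint set, NOT the Illuminations inequalities
    return x >= 0 and y >= 0 and x + y <= 7 and x <= 5 and y <= 4

points = [(x, y) for x in range(20) for y in range(20) if feasible(x, y)]
profits = {(x, y): 15 * x + 30 * y for x, y in points}
best = max(profits, key=profits.get)
print(best, profits[best])  # -> (3, 4) 165; the maximum lands at a corner
```

Sorting the profits, or coloring the points by profit value, gives students the same discovery the sliders do:  the extremes of a linear function over a convex region show up on the boundary, at a vertex unless an edge is parallel to the profit lines.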

While not applicable in this particular problem, I hope you can see that if an edge of the feasible region for some linear programming problem was parallel to the line defined by the corresponding Optimization Equation, then all points along that edge potentially would be optimal solutions with the same Optimization Equation output.  This is the point I was trying to make in Student Discovery 4.

In the end, Desmos, GeoGebra, and the TI-Nspire all have the ability to create dynamic learning environments in which students can explore linear programming situations and their optimization solutions, albeit with slightly different syntax.  I believe any of these approaches can make learning linear programming much more experimental and meaningful.

## Common numerators

As long as I’m leveraging Five Triangles posts, here is another recent one worth discussing.

Too often, I think students believe that the only way to compare fractions is to find common denominators.  In this problem, three of the four given denominators are large enough primes that the common denominator approach would require some painful by-hand computations.

But the pattern in the numerators screams for attention.  Why not find some common numerators and compare the fractions that way?  That approach cracks the problem pretty efficiently.
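The original Five Triangles fractions aren’t reproduced here, so this sketch shows the idea on hypothetical fractions of my own:  to compare $\frac{a}{b}$ and $\frac{c}{d}$, rescale both to share the numerator $lcm(a,c)$; the fraction with the larger resulting denominator is the smaller fraction.

```python
# Compare fractions via common NUMERATORS instead of common denominators.
from fractions import Fraction
from math import lcm

def common_numerator_denoms(a, b, c, d):
    m = lcm(a, c)
    denom1 = b * (m // a)   # a/b rewritten as m/denom1
    denom2 = d * (m // c)   # c/d rewritten as m/denom2
    return denom1, denom2

# e.g. 3/7 vs 6/13: rewrite 3/7 as 6/14; since 14 > 13, 3/7 < 6/13
d1, d2 = common_numerator_denoms(3, 7, 6, 13)
print(d1, d2)  # -> 14 13
assert (d1 > d2) == (Fraction(3, 7) < Fraction(6, 13))
```

When the numerators form an obvious pattern, as in the Five Triangles problem, the lcm step is nearly free, and the comparison drops out with almost no arithmetic.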

As a bonus, the common numerator approach also shows that the four given fractions are surprisingly close to each other in size.

Keep thinking …