Future of Coding

Kill Math

April 2011

What is math?

Bret Victor in Kill Math:

Kill Math is my umbrella project for techniques that enable people to model and solve meaningful problems of quantity using concrete representations and intuition-guided exploration. In the long term, I hope to develop a widely-usable, insight-generating alternative to symbolic math.

I’m not sure I subscribe to that definition of math: model and solve meaningful problems of quantity.

According to Wikipedia’s Definitions of Mathematics, that’s how Aristotle conceived of the subject.

I think Wolfram’s definition is a step in the right direction:

Mathematics is a broad-ranging field of study in which the properties and interactions of idealized objects are examined. - Wolfram MathWorld

Let’s shorten it to: “the study of idealized objects.” Thus we can see that mathematics can help solve problems of quantity as long as we can model those quantities with idealized objects (numbers).

I think mathematics has more to do with relationships than quantities. If you want to solve a particular problem – such as which is the most cost-effective dentist for you to see, taking into account the value of your time, the distance to the appointment, and the cost of the appointment – you first have to go from these concrete facts to an abstract model of the problem. As Chris Granger says (and I’m sure Bret would in most respects agree), modeling is the key skill here:

…Excel is unquestionably the king. Through Excel we can model any system that we can represent as numbers on a grid, which it turns out, is a lot of them. We have modeled everything from entire businesses to markets to family vacations. Millions of people are able to use spreadsheets to model aspects of their lives and it could be argued that, outside of the Internet, it’s the single most important tool available to us on a computer. It gains this power by providing a simple and intuitive set of tools for shaping just one material: a grid of numbers. If we want to work with more than that, however, we have to code.

Which is, of course, how I eventually solved my dentist problem!
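The dentist decision above can be sketched in a few lines of code. This is a minimal sketch, with made-up numbers for the hourly value of time and three hypothetical dentists – the point is only the shape of the model, not the data:

```python
# Sketch: which dentist is cheapest overall, counting travel time as money?
# All numbers below are hypothetical illustrations, not real data.

TIME_VALUE_PER_HOUR = 40.0  # assumed dollar value of an hour of your time

# (name, round-trip travel hours, appointment cost in dollars)
dentists = [
    ("Near & Pricey", 0.5, 220.0),
    ("Far & Cheap",   2.0, 150.0),
    ("Middle",        1.0, 180.0),
]

def total_cost(travel_hours, price):
    """Appointment price plus the dollar value of travel time."""
    return price + travel_hours * TIME_VALUE_PER_HOUR

best = min(dentists, key=lambda d: total_cost(d[1], d[2]))
for name, hours, price in dentists:
    print(f"{name}: ${total_cost(hours, price):.2f}")
print("Cheapest overall:", best[0])
```

The modeling step – deciding that travel time converts to dollars at some rate – is exactly the abstraction leap the surrounding text is talking about; the arithmetic afterward is trivial.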

What is science?

Recently, I’ve been wondering the same thing about science – I even have a book called What is this thing called Science?. In particular, I am wondering how science relates to the concept of measurement, particularly when you define a measurement as “a reduction in uncertainty.” By that definition, science is simply the study of measurement, which of course is why evidence is key to science (it reduces uncertainty about reality via measurement of reality), and why good accounting is key to business (in order to diagnose problems you must reduce uncertainty about causes).

This makes me think that many of my favorite things are science: Alexander Technique, Lean Startup, debugging code.

And much of what I dislike about science is, in fact, not science, but accidental complexity: learning how to use equipment, following instructions, getting approval.

Ex-Apple Designer Creates Teaching UI That “Kills Math” Using Data Viz


Although it can run on an iPad, Victor isn’t planning on releasing “anything resembling the design shown” as an app. “The prototype was intended to teach and inspire tool designers, so that’s the meta-audience.” It’s a variation on the old saw: give a man a fish, and he eats for a day, but teach him to fish, and he eats for a lifetime. If Victor is going to “kill math,” a single specific app won’t do it — but a set of inspiring examples just might, if they inspire others to think about how they can “kill math” themselves.

Wow, he makes his meta-strategy incredibly specific! Makes me wonder:

  1. Given that Bret exists to inspire tool creators and creations, would it be better to simply be a tool creator? Or could the world use another tool-inspirer?

Given that Chris Granger adds to the conversation, Bret alone is clearly not enough. And given the insane amount of impact a better tool, such as Excel, could have on the world, I say we could use another dozen tool-inspirers! Especially because the number of people who can build tools – programmers – is very high, and because there are ways to make money as a tool builder (i.e., starting a company), building is probably the more attractive path for most people. Given that I am obsessed with ideas and less excited about money, tool-inspiring may be the right path for me from a comparative-advantage perspective.

  2. In order to effectively inspire tool creators – that is, to follow Bret’s meta-strategy – should I first be a tool creator? Or can I merely study tool creators and tools and write about them? Or somewhere in between, releasing various prototypes for inspiration, but not fully-fledged tools such as WoofJS?

Mathematicians don’t think in symbols

“The dirty little secret is that the greatest mathematicians don’t actually think in symbols,” Victor explains. Einstein himself said that he “did his thing” with “sensations of a kinesthetic or muscular type.” Sure, e=mc² is an equation — a gloriously elegant and simple one. But the point is that the equation isn’t the math; it’s not the insight, the creativity, that actually happened inside Einstein’s head.

This point is SO MUCH more profound than meets the eye. From this perspective, virtually all math teachers don’t understand mathematics. They are merely the human inside the Chinese Room thought experiment, teaching the next batch of symbol manipulators.

In high school, I had this perspective: that intuition, often physical intuition, was paramount. I felt like the only one who felt this way. Some would reply, “well that’s just how you understand that particular subject best.” Now I feel confident in replying, “No, that’s how everyone understands anything.” Virtually all of my school activities could be reduced to formulaic symbol manipulation, from writing an essay based on a formula, to spitting out facts in biology, to performing mathematical computations.

This is why I insist that everything I learn “feel right,” almost in a physical sense. The ideas have to fit together. I have to see how I would have been able to think them up myself. That’s profound. I think we’ve all had the experience of thinking we could’ve invented something. For example, you may have thought that the first time you heard about Uber. I don’t feel like I understand a subject until I feel that way about it. “Oh, I see how that idea leads to this concept, which together combine with this third thing, and why together these ideas lead to…”

I owe this insistence, and the deep understanding it leads to, largely to my time at IMACS, learning LOGO and Scheme. LOGO helped me understand what understanding feels like as it relates to mathematics. That’s key. We’ve all had deep experiences of insight in our lives. Few of us have had such experiences in mathematics. I don’t think I had a single positive experience in mathematics before LOGO. Of course, I didn’t realize that LOGO was mathematics. (As Papert says, it’s important not to tell students who hate math that LOGO is math.) By the time I realized that LOGO was mathematics, I already had a slew of positive, empowering experiences with it. These new positive experiences with math combined with my prior “I hate math” and “I’m bad at math” thoughts to induce a state of cognitive dissonance. My first attempt to escape was probably, “well, I only like certain kinds of math,” which I then upgraded to “I only like math when I can learn it in a particular way.”

I will never forget when I discovered my “learning style.” The teacher had assigned us to apply the equations to a series of problems independently during class time. (What a crazy, silly use of time! Teaching students to manipulate symbols and do arithmetic. It’s literally worse than useless, because it teaches kids the wrong things about learning and mathematics.) I randomly stumbled upon the derivation of the quadratic formula in my math textbook. As I read down the page, I was filled with awe and wonder. Holy crap. So this is how they came up with it?! I jumped out of my seat and showed the teacher. If she had already known about it, she surely would’ve shown us in class.

She knew all about that page in the textbook. However, she was confused as to why I was so excited about it. I don’t remember what I said, but I do remember saying it while excitedly moving my hands, and her responding something to the effect of, “Well, I guess if it helps you learn the material better…” and me interrupting with, “Yes, yes it does,” and then asking, “Wait, are there more of these?”

It was only later, after spending time with the smartest kids in my school and at Penn, that I realized that all of us “smart” kids learn in the same intuitive way: by relating new things to old things we already understand well – often, but not limited to, physical metaphors.

Math isn’t natural

Writing and math are both symbol-based systems. But I speculate that written language is less artificial because its symbols map directly to words or phonemes, for which humans are hard-wired. Papert might disagree, and claim that a child raised in “Mathland”, an immersive interactive mathematical environment that “is to math what France is to French”, would become as fluent in symbolic math as in language. With regard to symbolic math, I might respond that a child raised in Antarctica would be quite tolerant of the cold, but maybe people shouldn’t need that sort of tolerance.

Oh shit! Throwing some shade on Papert. I think this is the first time I’ve heard Bret criticize Papert. For that matter, I can’t recall him criticizing Kay either. To be clear, when I say criticize, I don’t mean condemn. I mean respond to with qualifications. That is, what I’m doing now to Bret ;)

I strongly disagree with Bret’s statements above. Reading and writing are at least as difficult to learn as mathematics. The reason people learn reading and writing is that they are so damn useful. Contrast the way students learn to read with the way they learn basic arithmetic at the Sudbury Valley School: students often learn to read in the course of playing video games, while in order to learn “basic math,” students had to set up a structured arrangement with a teacher to work through the various chapters of a math textbook. The difference is that reading and writing can be picked up from context, and mathematics cannot – as opposed to writing being somehow “less artificial.”

Instead, we tend to rely on implicit physical metaphors, both for the mechanics of symbol manipulation (e.g., “moving” a term to the other side of the equation, “canceling out” two terms, etc.) and for the semantic interpretation of the symbols (e.g., exponential “blow-up”, or the “smallness” of a negligible term). To a certain extent, a person’s mathematical skill is tied to their ability to “feel” the symbols through these physical metaphors, and thereby make the abstract more concrete.

We learned our words and letters in the same way: by relating them to the sounds and words we already knew from using them in our lives! I would be curious to see what the best evidence says here, but given that I don’t know it, my intuition is that human brains aren’t “hard-wired for language” any more than human arms are hard-wired for jump shots. Humans are incredibly flexible machines. We can wire and re-wire ourselves, in terms of our pre-existing wiring, to do whatever we want to. The question is: what do we want to do? (As well as: what do we already know how to do?)

Why is the sky blue?

There are three categories for using math:

  1. For problems in school. As stated above, worse than useless.
  2. To solve problems in life. For example, making a cost-effective decision or engineering a spaceship to achieve LEO.
  3. For curiosity’s sake.

What’s crazy is that children, all children, are born with the urge to use math (and science) in category (3). Unfortunately, most parents don’t have the requisite training to foster such a mentality, and instead squash it with absolutist “because I said so” answers or a resigned “who cares?”

Tools for curiosity

I think Nicky Case’s new JoyJS demo points in the correct direction, allowing students to ask open-ended questions. However, when reviewing the tool, I did have trouble coming up with interesting questions to investigate with it. Contrast that with Excel, which I just used so naturally to solve a problem. (The key here is that Excel solved a problem; it didn’t satiate a curiosity.) I feel the same way about Mathematica: it doesn’t feel like it solves problems relevant to me, nor would it address questions that I’m curious about.

Recently I saw a business magazine that promised to “launch my career” over the photo of a rocket ship. It made me wonder if the word “launch” entered our vocabulary around the time of the space race, the 1960s. A simple Google Ngram query satiated my curiosity: yes and no. The word does trend upwards with the word “rocket,” but it also seems to have an upward trend of its own, even when “rocket” falls out of favor.

Sometimes questions can be answered just by knowing about the right tool or dataset. This example also lends credence to Wolfram’s insistence that a programming language must come with data pre-installed. I would wholeheartedly agree, to the extent that data should be available without parsing, networking, or storage concerns (beyond, possibly, the financial costs associated with each). In making a word-puzzle app for my dad, I spent hours and hours trying to get the right dataset of English words into the app. Much of that time was spent on essential complexity: finding a dataset with enough not-super-arcane words. But the right tool could’ve sped this process up 100-fold by allowing me to quickly compare and combine word datasets, all without leaving the tool. This is key: finding the right dataset is not simply googling and then importing, because datasets are normally so big that you can’t tell if one is the “right one” until after importing it. Thus, the right tool would allow you to compare data sources within the context of how they’d be used in your program, and side-by-side with other data sources.
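The comparison the right tool should support can be sketched directly. A minimal sketch, using two tiny made-up word lists as stand-ins for real dictionary files, showing the side-by-side summary (sizes, overlap, what each list uniquely contains) that you’d want before committing to a dataset:

```python
# Sketch: compare two candidate word datasets before importing either one.
# The word lists here are tiny hypothetical stand-ins for real dictionary files.

list_a = {"cat", "dog", "house", "zyzzyva", "aa"}    # hypothetical "big" list
list_b = {"cat", "dog", "house", "tree", "river"}    # hypothetical "common words" list

def compare(a, b):
    """Summarize how two word sets differ: sizes, overlap, unique samples."""
    return {
        "size_a": len(a),
        "size_b": len(b),
        "shared": len(a & b),
        "only_a": sorted(a - b),   # likely the arcane entries
        "only_b": sorted(b - a),   # words the big list is missing
    }

report = compare(list_a, list_b)
print(report)
```

With real datasets the hard part is everything around this: downloading, parsing, and loading each candidate just to run a comparison this simple – which is exactly the accidental complexity the paragraph above is complaining about.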

Visceral interpretation

I want to quote this entire section. All gold. Here’s the key: building an ergonomic tool means building a tool that “adapts complex situations so they can be seen, experienced, and reasoned about with our plain old [monkey] brain.”

What I feel like he misses – which is strange given that this whole essay is Papert-inspired – is that the way to make concepts visceral is to relate them to what has previously been “seen, experienced, and reasoned about.” That is, by analogy to powerful ideas already in the monkey brain. Yes, our monkey brains already have a notion of visual space, thus graphs are relatively easy to understand (although not totally intuitive). Some of this might be inborn, but that’s not required: humans could learn these things as babies (which would in fact be responsible for the illusion that they are inborn). One may be tempted to say: let’s make all our tools in terms of the base level of metaphors already in most people’s brains. I think that’s limiting, because as Seymour showed, even a few particularly powerful metaphors can enable a vast amount of learning (or assimilation, as he would say).

Logo Turtle in Calculus

He tells the story of being able to better understand integration in calculus because he could relate it to the movement of the LOGO turtle. I tell a very similar story. A derivative is simply the angle of the LOGO turtle as it walks along a curve. My experience with LOGO allowed me to seamlessly assimilate this idea from calculus, which to my peers made me look like some kind of inborn genius.
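That turtle intuition can be made concrete: walk the turtle along a curve in tiny steps, and at each step its heading angle encodes the derivative (the tangent of the heading is the slope). A minimal sketch, assuming the curve y = x² for illustration:

```python
import math

# Sketch: the derivative as a turtle's heading while walking along y = x**2.
# At each step the turtle turns to face the next point on the curve;
# the tangent of that heading angle is the (approximate) derivative.

def heading_at(f, x, step=1e-6):
    """Angle (in degrees) a turtle faces while walking f left-to-right at x."""
    dy = f(x + step) - f(x)
    return math.degrees(math.atan2(dy, step))

f = lambda x: x**2
angle = heading_at(f, 1.0)  # slope of x**2 at x=1 is 2, so heading ≈ atan(2)
print(f"heading: {angle:.2f} degrees")
print(f"slope recovered from heading: {math.tan(math.radians(angle)):.3f}")
```

Nothing here is beyond LOGO itself – which is the point: the symbolic idea “dy/dx” is just a number the turtle already embodies as it crawls.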

Memory Geniuses

Here’s a mystery for you. In Peak, Anders Ericsson relates the story of taking a randomly selected student and training his working memory beyond the seven-plus-or-minus-two items that all humans share. At first they weren’t able to make much progress, but after months of effort they raised his number to 30, 50, 80, 100, crushing all world records up until that point. From there, others have taken this memory game to ever-higher places.

How do these memory geniuses do it? They start out just like you and me. And through training, they change. How do they learn to use their brains?

The key to solving this riddle is realizing that it’s impossible to expand working memory. However it is possible to 1) compress information and 2) cleverly transfer it to long-term memory, which is exactly what these geniuses do.

To compress information, think of it like Huffman encoding, but inside a human’s head. To transfer information to long-term memory, they use variations on the memory palace technique.

The key here is that expanding human capability is all about the tools at the human’s disposal, inside their brain and out. A person who seems smarter simply has better software installed. The bright side is: we can copy software between brains.

Review: Don’t Kill Math

[Review published October 27, 2012 by Evan Miller]

The unifying goal of Victor’s work, as he puts it in his Inventing on Principle talk (for all practical purposes, Victor’s manifesto) is to bring people into closer communion with their creative ideas. I personally applaud this goal, and would be hard-pressed to find anyone who is against it (who can oppose human creativity?).

Huh, that’s a good point. This is strange because in that very talk Bret explains that one’s crusade needs to be controversial. And yet his goal of bringing people closer to their ideas is something I imagine all people would agree with…

Holy shit this article is both scathing and well-thought-out. It may be the best critique of BV I’ve seen thus far!

Let’s address Evan’s points.

Firstly, he characterizes Bret’s arguments as attacking “analytic methods.” This is slightly confusing because BV uses the term “symbolic” to reference what he’s unhappy about.

Secondly, Evan seems to think that Kill Math wants to replace all math with simulations. That’s misleading, because BV wants to replace symbolic math with “concrete modeling, simulation, and visualization,” as stated in Simulation as a Practical Tool. Evan doesn’t address BV’s scrubbing calculator at all, which, I think, would alleviate many of his concerns that Bret hates all analytical approaches. Bret simply wants to make them more concrete by leveraging our new magic paper.

Evan’s critique of “Up and Down the Ladder of Abstraction” rings true, yet I think he misses the point a bit. Bret is describing an approach to thinking about problems, not trying to truly solve one problem. Maybe a better, more real example would’ve helped.

There are a few things that Evan misses:

  1. The reason that symbolic/analytical methods work for him (and people like him, including me) is that we’ve so thoroughly internalized how moving the knobs on the various bits of an equation will tweak things. I only understand ratios physically after lots of suffering through not understanding them.
  2. And that’s just the basics. More complicated formulas terrify me. It’s incredibly hard to learn about a new formula from the symbols. How does Evan expect me to visualize sin(theta) = r/h? I can’t do the inverse sine in my head! Imagine how much our magic computer paper could augment this formula to aid my intuition! Bret isn’t saying we should do away with formulas, but that we’ve got to be able to do better now that we have magic paper!
  3. Evan talks about equations as being useful for the reader to understand how to model a problem – to know what is important and what is irrelevant. I think that’s akin to giving a person the punchline of a joke so they can reason out what the joke must’ve been. Instead, as Khalid suggests, the technical definition is often the last thing you want to see. Humans learn through invention. Thus we should come to formulas the same way their inventors did: with our intuitions first. Symbols last. Concrete, then abstract.
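The inverse-sine complaint in point 2 is easy to demonstrate: no one computes asin(r/h) in their head, but a few lines of knob-turning make the relationship explorable. A minimal sketch, with hypothetical r and h values, imitating (crudely) what a scrubbing-calculator-style tool would do live:

```python
import math

# Sketch: exploring sin(theta) = r / h numerically — the kind of
# knob-turning a "magic paper" tool would let you do by scrubbing.
# The r and h values are made up for illustration.

def theta_degrees(r, h):
    """Solve sin(theta) = r/h for theta in degrees; requires 0 <= r <= h."""
    return math.degrees(math.asin(r / h))

# Turn the knob: how does theta respond as r grows toward h?
for r in [1.0, 2.0, 3.0, 3.9]:
    print(f"r={r}, h=4.0 -> theta = {theta_degrees(r, 4.0):.1f} degrees")
```

Even this crude loop surfaces an intuition the bare formula hides: theta grows slowly at first, then shoots toward 90 degrees as r approaches h.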

Finally, Evan scores a nice shot on BV by pointing out that he hides the underlying formula in his Explorable Explanations article. Yet this criticism rings false, as he’s criticizing the same guy who made TenBrighterIdeas, which allows you to see the entire model as well as edit the actual source code.