Saturday, April 26, 2008

Abstraction Levels

The most emailed story at the New York Times is about researchers at Ohio State who claim that learning to manipulate abstract mathematical equations is better for students than story problems. Put that way, I actually agree. But here are the problems with the article and with the research.

1. Story problems aren't actually "concrete," and they're certainly not "real-world" examples. They're verbal, which is a quite different thing. The famous train problems, like most story problems, test your ability to translate a verbal description into a mathematical model (in this case, two linear equations: say one train leaves at 60 mph and a second follows an hour later at 80 mph; the model is d = 60t and d = 80(t - 1)). The recent push to add concrete explorations to mathematics has more to do with physical experiments and measurements, in more of a laboratory-style approach.

2. I love ripping on educational research as much as anyone, but to write that randomized, controlled experiments are "something relatively rare in education research" (or, as Matthew Yglesias writes, that "it's a bit bizarre how little effort we put into developing serious research-based pedagogical methods") just isn't true. I know of hundreds of studies in mathematics education, some of them experimental, many based on testing and controls. The trouble is that the body of research has yielded very few consistent results, in part because studies are often designed to prove a point or to justify a particular curricular change rather than to figure out what works. And "what works" is pretty murky itself.

3. This experiment is flawed, and I'll tell you why.

The problem with the real-world examples, Dr. Kaminski said, was that they obscured the underlying math, and students were not able to transfer their knowledge to new problems.

This is wrong. In fact, the second "real world" example teaches a fundamentally different kind of mathematical problem: modular arithmetic. The relationship is one of addition, but the sums "reset" at 3, so 1+1=2, but 2+2=4, i.e. 3+1, so 4=1, and likewise 2+3=5, or its modular equivalent, 2. The water example teaches a completely different mathematical relationship than the purely symbolic systems do, while the shapes and the game are essentially the same. So what obscures the underlying math is the explicit math. The second group is doing a kind of arithmetic, while the first group isn't. The fact that you can model the first set of relationships using this modular arithmetic is interesting, but since the game doesn't rely on that skill, of course it doesn't transfer. It could be argued that the experiment proves exactly the opposite of what's claimed: the group working with relationships between specific objects does better than the group working with arithmetic-based models.
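
To make the distinction concrete, here's a minimal sketch of the two training conditions as I understand them. The shape names and the exact combination table are my own placeholders, not the study's actual materials; the arithmetic is the mod-3 "reset" described above.

```python
# The "water" condition: explicit arithmetic on {1, 2, 3} where sums
# reset past 3, so 2 + 2 = 4 = 1 and 2 + 3 = 5 = 2.
def water_add(a, b):
    return (a + b - 1) % 3 + 1

# The "symbolic" condition: an arbitrary combination table for three
# shapes (placeholder names). It realizes the same structure, but
# nothing about it announces itself as arithmetic; it's just memorized
# pairings between specific objects.
SHAPE_TABLE = {
    ("circle", "circle"): "diamond",
    ("circle", "diamond"): "flag",
    ("circle", "flag"): "circle",
    ("diamond", "diamond"): "circle",
    ("diamond", "flag"): "diamond",
    ("flag", "flag"): "flag",
}

def shape_combine(a, b):
    return SHAPE_TABLE.get((a, b)) or SHAPE_TABLE[(b, a)]

# The examples from the paragraph above:
assert water_add(1, 1) == 2
assert water_add(2, 2) == 1  # 2 + 2 = 4, which resets to 1
assert water_add(2, 3) == 2  # 2 + 3 = 5, modular equivalent 2

# The two systems are isomorphic (circle = 1, diamond = 2, flag = 3),
# but only one of them is taught *as* arithmetic.
names = {"circle": 1, "diamond": 2, "flag": 3}
for a in names:
    for b in names:
        assert water_add(names[a], names[b]) == names[shape_combine(a, b)]
```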

Oh, yeah, and the experiment used college students, but the researchers think the results apply equally well to elementary students. Wha? Sociologists don't work that way. Pharmaceutical researchers don't either. Neither do serious educational researchers.

Here's how I'd test this theory. First, work with the target population -- let's say, eighth graders who know a little bit of algebra. Train one group in pure systems of equations and the other only in train modeling problems (or some other specific kind of story problem). Then give them both the same test, including abstract systems, train story problems, and a range of other modeling problems that use the same mathematical principles. Repeat with a bunch of different groups, using different teachers, maybe different income tiers, slightly different background knowledge. And then see what you see.
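
For what it's worth, here's a rough sketch of that design, just to pin down the crossed structure. All the labels are placeholders, not a real sample; the point is that every group, however it was trained, takes the identical mixed test.

```python
from itertools import product

# Two training conditions, one shared test, replicated across cohorts
# that vary in teacher, income tier, and background knowledge.
training_conditions = ["pure systems of equations", "train-problem modeling"]
shared_test = ["abstract systems", "train story problems", "other modeling problems"]
cohorts = ["8th grade, teacher A", "8th grade, teacher B", "8th grade, teacher C"]

for cohort, condition in product(cohorts, training_conditions):
    # Only the training differs; the test is the same across conditions.
    print(f"{cohort} | trained on: {condition} | tested on: {', '.join(shared_test)}")
```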

I think it's plausible that training students to do one or a handful of specific examples of algebraic modeling doesn't really teach them how to do modeling in general very well. Maybe a great deal of comfort and familiarity with systems of equations could actually improve their ability to do so. But to my admittedly skeptical mind, this research just stinks.
