2/7/09

Education Research: Silly Season

I have written before about education research and the near uselessness of much of it. Here is a fellow blogger who also has issues with it.
What is Scientifically Based Research?

A nagging little pamphlet from NIFL appeared in my teacher mailbox the other day. I’d been hopeful that the government’s fetish for experimental reading research design would go into remission with the new administration, but that seems not to be the case. Always curious about government propaganda, I read through “What is Scientifically Based Research?” instead of grading papers or running the copy machine to generate more papers to grade.

Page 1 says that “educators need ways to separate misinformation from genuine knowledge,” and we should be wise consumers of education research to help us “make decisions that guarantee quality instruction.” Looking for the punch line, I continued reading, drawn to riveting passages such as, “Teachers can further strengthen their instruction and protect their students’ valuable time in school by scientifically evaluating claims about teaching methods and recognizing quality research when they see it.” Translation: Good intentions are not enough. Teachers may be misled by educational hucksters. I’ve had those same suspicions myself, but the target population isn’t limited to the teaching profession.

The main point of this document is to give us the “federal perspective” on scientific research, which:

* Progresses by investigating testable problems;
* Yields predictions that could be disproven;
* Is subjected to peer review;
* Allows for criticism and replication by other scientists;
* Is bound by the logic of true experiments.

It reads like the introduction to a sixth-grade science textbook. Nothing on that list, however, is evident in our national school reform policy. But federal education reform is political, not educational. And since this is the age of double standards, I’ll let that go for now, and write it off as another example of how, when you write the rules, accountability is for everyone else.

What interests me at the moment is the federal perspective on curriculum and instruction. Principally, how much weight should be given to teacher observations in instructional decision-making? We often hear that innovation is a good thing, but it’s hard to imagine how new ideas are propagated in a standardized environment that myopically focuses on a single measure of success.

“What is Scientifically Based Research?” tells us that teachers should “look for evidence that an instructional technique has been proven effective by more than one study,” cautioning us to be aware that there are different stages of scientific investigation, and that we should “take care to use data generated at each stage in appropriate ways.” Then comes this attention grabber: “For example, some teachers rely on their own observations to make judgments about the success of educational strategies.”

Some teachers?!

At this point, we learn that “observations have limited value” and that scientific observations must be carefully structured to make determinations about cause and effect. Well, maybe so. But experimental evidence has limits, as well. We’re cautioned that, “In order to draw conclusions about outcomes and their causes, data must come from true experiments,” and “Only true experiments can provide evidence of whether an instructional practice works or not.”

So, teachers, don’t get any funny ideas about evaluating your own effectiveness.

Just to make sure we understand they don’t have every little detail quite worked out, we’re reminded that, “In many cases, science has not yet provided the answers teachers and others need to make fully informed decisions about adopting, or dropping, particular educational strategies.” No kidding.

So, what then? My teacher perspective is that all knowing is personal, classrooms are not sterile laboratories in which the variables can be tightly controlled, and doing experiments on children is still frowned upon in our society.

Coincidentally, the federal perspective on education research received some attention in Elaine Garan’s recent article about sustained silent reading in The Reading Teacher. Garan reminds us that the “medical model” is not well suited to education research because messy variables such as motivation, emotional difficulties, and other human qualities can contaminate the results. She argues that the lack of consensus among researchers converges with common sense, recommending that students have time to read freely each day, despite the National Reading Panel’s failure to find any evidence in support of the practice. If there is “no evidence” in support of a particular practice, that may have everything to do with the research methodology and nothing to do with what is true in the real world of classrooms that researchers have awkwardly tried to shoehorn into a narrow view of reading instruction.

I’ll have more to say about free and voluntary reading some other time. It’s working out remarkably well for my students this year. That’s my observation, anyway.
