Student Surveys Have LIMITED Value

The Center For American Progress, liberal bastion of public school bashing, released a report purporting to show that American schools are too easy. Over at Diane Ravitch's blog, Ed Fuller pretty much destroys the methodology and conclusions of the report, and then bemoans the fact that this poor piece of research is sadly all too typical.

It's worth mentioning that Bruce Baker's done plenty of work about CAP and how truly awful some of their output is. But there's one point I would like to add to Fuller's analysis that stands apart from CAP's history of bad research:

Why is education research lately so dependent on student surveys?

As all parents and teachers know, kids are... well, they're kids. They don't see the world the way adults do. They don't have finely tuned notions of the difference between objective fact and subjective opinion. They don't have fully developed vocabularies that allow them to express subtle distinctions.

One example: children often substitute the phrase "yell at" for "criticize." I can't tell you how many times I've had an elementary student say to me that such-and-such an adult "yelled at me" for doing something wrong. When I reply with, "Really? Because I've never heard such-and-such an adult ever raise her voice to a student," the student will immediately say, "Oh, no, they didn't really yell. They were just mad."

"Really? They got mad?"

"OK, they didn't get mad, but they told me I wasn't allowed to staple my hair to the bulletin board."

(OK, that never happened. Get my point?)

The CAP report relies solely on student survey responses to determine whether curricula have the appropriate difficulty. May I remind the researchers that the kids surveyed only know of one curriculum: the one they are learning. How can we possibly get any objective sense of the difficulty of school when the students themselves have such a limited base of experience to draw from?

The report also makes a big deal about student responses to whether or not they know what "their teacher talks about." Is there a big difference between "sometimes" and "often" in the mind of a 10-year-old? Isn't it likely one student's "sometimes" is another student's "often"? What are we really assessing here?

If we want to objectively judge the developmental appropriateness of curricula or standards, we should have experts in child development and learning make that judgment. Those experts would have a broad base of understanding in a variety of instructional techniques and pedagogical methods. They would have deep experience in what is and is not appropriate for children to learn at a particular age.

I believe we have a name for these experts: teachers.

CAP isn't the only outfit to rely heavily on student surveys. Michelle Rhee justified her ridiculous notion that test prep doesn't help on the basis of the Gates MET Project's student surveys. Of course, she got the conclusions completely wrong; but even if she hadn't, can we really trust student reports alone to judge whether test prep activities are taking place in a classroom?

Student surveys definitely have their place in research. But we wouldn't judge medical treatments solely on patient surveys; why are we willing to radically change education policy largely on the reports of children? I think they deserve better than that.

ADDING: As if on cue:
Among the elements of a good teacher evaluation system, some of the "most surprising" results can come from what students say about their teachers on surveys, said Microsoft founder Bill Gates, speaking at the Education Commission of the States' National Forum on Education Policy in Atlanta today.
Delivering the keynote speech on support for high-quality teachers, Gates said, "Asking the students the right question is very, very diagnostic." He cited surveys as among the three components that can go into a good teacher evaluation system, along with supervisor observations and test scores.
Hey, if it worked for the Xbox...

When did the entire world decide this guy knows anything about schools?