Tuesday, July 28, 2009


It started with a giggle that moved quickly around the room. I stopped typing the departmental e-mail that I knew my colleagues would delete before reading and asked the kids what was so funny.

The response: “This question asks us to choose the best meaning for the phrase ‘Susie suddenly stopped dead in her tracks,’ and here is one of the choices: ‘A) Susie was walking, then she died suddenly.’”

The class fell apart in scornful laughter. Two more hours of their time wasted taking these ridiculous departmental “benchmark” tests, contracted to and developed by the Princeton Review. Two of the tests focus on exposition and persuasion and consist of mind-numbing ersatz “informational texts”--brief essays, personal letters, cartoons, graphs. Please note that none of these is “informational” or “textured” enough to be effectively analyzed. The pieces are supposedly about “controversial” student-oriented topics like cell phone use, video games, obesity, or smoking, and all are constructed as patronizing life messages, which the students must read, analyze, and write about. If these topics are controversial, one would never know it, since the pieces are always skewed. The students, nonetheless, are asked to discuss the “counterclaim” made in each piece, but because the stuff is so biased, there is often no clear counterclaim--obesity, smoking, and video games are bad, and that’s that.

The test is often poorly proofread: occasionally two of the multiple-choice answers are equally defensible, and sometimes none of them applies at all. Invariably, when we examine this “test data” as a department, all the test proves is that kids struggle with vocabulary--but when the questions and answers are so poorly constructed, how accurate can that assessment be? The results are only as good as the instrument, and this instrument is flawed at best.

For the literature component of this test (and I use that term loosely), students in one instance answered questions about a story set in a village somewhere a tad more exotic than their own suburban Los Angeles neighborhoods, in which the narrator describes the respect paid to “Grandfather.” Naturally, some of the kids, unaware that the custom in small villages is to generically call the elders “grandfather,” assumed the narrator was talking about his own grandfather and answered about half the questions incorrectly.

One of the more startling moments came when another student brought her test up to me to ask what to do with a question whose answers were listed A), B), D), C). Should she write D for the third answer, which should have been marked C but was listed here as D? A real quandary for any kid who wants to get it right.

Now I am hearing that not only will my students be judged by their performance on these tests, but, according to Arne Duncan and the DATA battle cry he has stimulated, my credibility as a teacher will be judged by this data as well. I am all for accountability, but who’s going to be doing the judging, and what criteria will they use? I don’t teach to any test that requires that I water down my curriculum (let’s just say that when my 10th graders were reading MACBETH in the context of excerpts from R.D. Laing’s DIVIDED SELF, the kids were somewhat taken aback when the California Exit Exam asked them to write an essay defending the kind of animal they thought best for children!). In fact, teaching to any test is the most shortsighted approach to teaching, yet it is championed by the ruling bureaucrats. Unfortunately, school districts--well, my district, anyway--focus more on what the powers that be believe to be wide-reaching efficiency and less on educating anyone. Just look at how they fired new teachers recently because of budget cuts (I will save that for a later entry).

Reducing teacher accountability to such test data is exactly that: reducing teacher accountability, PERIOD.
