The results of the latest National Assessment of Educational Progress (NAEP), the “nation’s report card,” were released yesterday. The NAEP report indicates that there have been modest gains in reading achievement for 4th and 8th graders since 1992, when the NAEP was first administered. These gains generally hold across groups – Whites, Blacks, and Hispanics, and both males and females. But the data also reveal the persistence of the so-called “achievement gap” between White students and Black and Hispanic students. For example, although the gap in reading achievement between White and Black students has narrowed slightly since 1992, the average reading scores of Black 8th graders still lag 27 points behind those of their White classmates, down from a 30-point gap in 1992. The gap between White and Hispanic 8th graders is now 25 points, down from 26 points in 1992.
The primary goal of the No Child Left Behind (NCLB) legislation was to eliminate the achievement gap, particularly in reading, by focusing on children too often “left behind.” The report of the National Reading Panel, Reading First grants, and the establishment of the What Works Clearinghouse were all intended to help achieve this worthy goal. Following the release of the NAEP report yesterday, President Bush called the results “outstanding,” adding that the NAEP scores confirm that “No Child Left Behind is working” (New York Times, “Scores Show Mixed Results for Bush Education Law,” September 25, 2007).
My reading of the NAEP report is less optimistic. At the current rate of improvement since 1992, it will take another 135 years for the average performance of Black students to pull even with White students. Using the same logic, it will take 375 years for the average performance of Hispanic students to catch up to their White classmates. A cynic might conclude that NCLB is working to maintain existing educational inequities.
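For readers who want to check the arithmetic, here is a minimal sketch of the back-of-the-envelope extrapolation behind those figures. It assumes the gap keeps closing at the same average rate it did over the 15 years from 1992 to 2007 – a simplifying assumption, not a prediction the NAEP report itself makes.

```python
# Linear extrapolation of the NAEP 8th-grade reading gaps,
# assuming the average rate of closure since 1992 continues unchanged.
def years_to_close(gap_1992, gap_2007, years_elapsed=15):
    """Years remaining until the gap reaches zero at the observed average rate."""
    rate = (gap_1992 - gap_2007) / years_elapsed  # points closed per year
    return gap_2007 / rate

# White-Black gap: 30 points in 1992, 27 points in 2007
print(years_to_close(30, 27))  # 135.0 years
# White-Hispanic gap: 26 points in 1992, 25 points in 2007
print(years_to_close(26, 25))  # 375.0 years
```

Three points closed in fifteen years is 0.2 points per year, so the remaining 27 points take 135 years; one point in fifteen years leaves 375 years for the remaining 25.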
Wednesday, September 26, 2007
Thursday, September 20, 2007
What works: Not for Everyone
A recently released review of beginning reading programs from the What Works Clearinghouse found that few commercial reading programs could claim evidence that they are effective in raising student achievement (Kathleen Manzo, “Reading Curricula Don’t Make Cut for Federal Review,” Education Week, August 15, 2007). This isn’t particularly surprising to those who have studied commercial reading programs, but it raises a question that is seldom asked: What does it mean to claim that a reading intervention “works”?
To begin with, no reading intervention has been found to be effective with all children, all of the time. So a reading strategy that “works” does not work for everyone. From a statistical point of view, strategies work only for a mythical average student. The reliance on means for determining statistical significance obscures the fact that a strategy that was found to work did not work for everyone and may even have been detrimental for some.
The claim that a reading intervention works must be further qualified with the phrase, “compared to what?” Typically, reading research compares one intervention to one or two other interventions. In some cases, the intervention may actually be compared to nothing (i.e., the intervention is better than no intervention). In any case, a strategy that works only works better than the interventions to which it was compared and, even then, because of the reliance on the mythical average student, a strategy that didn’t work could still be effective for some children.
Finally, the assertion that a reading intervention works raises the question, “works at what?” Some researchers will be satisfied that an intervention was effective if it improved students’ performance sounding out nonsense words. Others will only be satisfied if the intervention improved students’ reading comprehension and, even then, reading researchers have different views on the meaning of reading comprehension.
So to say that a reading intervention works really means that the intervention was effective for some children compared to one or two other interventions on measures the reading researcher(s) – but likely not all reading researchers – believed were related to reading.
From this perspective, the ultimate arbiter of “what works?” is the teacher who determines the efficacy of various reading interventions with individual children in her/his classroom.
Thursday, September 6, 2007
Taking Responsibility (Don't do as I do)
Accountability is the linchpin of the No Child Left Behind (NCLB) legislation. This is as it should be. Teachers are professionals, and they must be accountable for student learning. There has, however, been considerable debate over the meaning of accountability in the context of NCLB, including what teachers should be accountable for and how they should be held accountable. As a keen observer of American politics, I think that teachers can learn a lot by observing how members of the Bush administration take responsibility for their actions.
When, for example, the Government Accountability Office (GAO) recently released a report that gave the Iraqi government failing grades for not meeting a series of political benchmarks, the White House complained that the GAO’s standards were “too high.” Following this example, I suggest that teachers whose students do poorly on state achievement tests use the same tactic: claim that the test makers’ standards were just too high.
Alberto Gonzales, Scooter Libby, and even the President have attempted to deflect criticism of failed policies and inept performance by occasionally asserting, “I don’t recall….” When teachers are chastised for their students’ failures, I suggest they consider a similar defense: “I don’t remember that student.”
Accountability in Washington often involves blaming failure on somebody else. The failure to aid the victims of Hurricane Katrina? The fault of state and local officials. Recommending Harriet Miers for the Supreme Court? It was John Roberts’ idea. When students fail, I suggest that teachers consider blaming parents, administrators, students, or even custodians (“my classroom was too dirty for learning to occur”).
But sometimes teachers need to be prepared for the ultimate gesture of accountability. Teachers must be ready to tell parents, administrators, and students that they take full responsibility for low test scores. There is no better way to show that they are doing a “heck of a job.”