What legitimate inferences can be made from the 1999 release of SAT-9 scores with respect to the impact of Proposition 227 on the performance of LEP students?

A more polished but shorter version of this text (PDF format) was prepared for the NABE Newsletter (August, 1999).

Kenji Hakuta

Professor of Education, Stanford University

(650) 725-7454

hakuta@leland.stanford.edu

July 23, 1999

Here are my preliminary observations:

The conclusion I reach from this pattern is as follows: the increases in SAT-9 scores from 1998 to 1999 for LEP students need to be considered in light of the overall gains found across the state for all students. LEP students' scores rose, as did non-LEP students'. LEP students in English-only programs rose, as did LEP students in bilingual programs. And native English speakers in low-performing schools made gains, as did LEP students in low-performing schools. These gains are probably the result of a combination of things: schools and districts have gotten used to the tests and taken them more seriously (a pattern typically found in the second year of a testing program, as is the case for the SAT-9 in California -- last year was the first administration); a variety of other initiatives, such as class-size reduction, are taking effect; in the case of the low-scoring schools, low scores statistically tend to rise (scores at the low end are very unstable and are more likely to go up than to go down -- something statisticians call "regression to the mean"); and a host of other uncontrolled factors.

The policy conclusion I reach is that no one should be delighted by the fact that the overall performance of LEP students and of poor, native English speakers is very low on these standardized tests. These data should be mined further to determine why increases and decreases happened, and we should learn from the instances where high achievement can be found. But I am delighted that policy makers and the public, because of these data, have become concerned about the achievement of LEP students, and I am hopeful that this will lead to a deep and profound inquiry into how we can do better for these students. I have long argued (as did the National Research Council) that focusing exclusively on whether one should teach only in English or in the native language is a major distraction that comes at the expense of coming to serious grips with how to improve schools. I hope that this experience with trying to interpret the most recent release of SAT-9 data will convince the public that we should stop pointing the finger at bilingual programs and get into a serious discussion of improving schools, whether English-only or bilingual.

A final note is in order about the incorrect claim by proponents of Proposition 227, who state on their website (http://www.onenation.org) that "the Oceanside test scores revealed ... average percentile increases ranged from 120% in mathematics to over 180% in reading." We have been unable to determine exactly how they came up with these numbers. If you have a bit of a taste for math, read on. Looking at the Oceanside data, even the most optimistic picture, taking the very highest percentile increases, shows 2nd-grade reading going from 12 to 23 (an 11 percentile point increase) and 2nd-grade math going from 18 to 32 (a 14 percentile point increase). Neither of these best-case increases comes anywhere near the claimed 120% to 180%. Their claim is probably based on taking the new score and expressing it relative to the 1998 base score (i.e., for the increase from 12 to 23, one might divide 23 by 12 and come up with about 190%). But this method is simply erroneous. If you start with a low base, any increase will end up as a much higher percent increase. A school starting at the 50th percentile (the national average) that goes up the same 11 percentile points to 61 would, using the same division, show only a 122% increase. By the same token, for a school at the 50th percentile to show the same 190% increase, it would have to raise its score to the 95th percentile! And, to carry it to the extreme, a school going from a percentile score of 1 to 2 (not a very respectable level of achievement) would show a 200% increase. To generalize: by this method, if you start low, you don't have to go up very much to show a high rate of increase. To drive the point home another way, look at the statewide statistics for LEP students and for All Students, taking just 2nd-grade reading: LEP students go from 19 to 23, and All Students go from 39 to 43. Both are 4 percentile point increases, and that is the appropriate way to report these statistics. Applying the erroneous method instead would show a 23/19 = 121% increase for LEP students and a 43/39 = 110% increase for All Students. Does this mean that LEP students increased more than All Students, and therefore that we should accept the claim of a resounding success for Proposition 227? Mais non, monsieur! They both increased by 4 percentile points. Any claim that Proposition 227 worked is bunk. Punto.
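For readers who want to check the arithmetic, the contrast between the two methods can be sketched in a few lines of Python. The numbers are the ones quoted above; the function names are my own labels, not anything from the test reports:

```python
def percentile_point_gain(before, after):
    """The appropriate comparison: the difference in national percentile ranks."""
    return after - before

def erroneous_percent_of_base(before, after):
    """The flawed method: the new score expressed as a percent of the old base."""
    return after / before * 100

# Oceanside 2nd-grade reading, best case: 12 -> 23
print(percentile_point_gain(12, 23))                   # 11 percentile points
print(round(erroneous_percent_of_base(12, 23)))        # ~192, the "about 190%"

# Statewide 2nd-grade reading: LEP 19 -> 23, All Students 39 -> 43.
# Identical 4-point gains, yet the flawed ratio makes the lower base look bigger.
print(percentile_point_gain(19, 23), percentile_point_gain(39, 43))
print(round(erroneous_percent_of_base(19, 23)))        # 121
print(round(erroneous_percent_of_base(39, 43)))        # 110
```

The low-base distortion is visible immediately: equal 4-point gains come out as 121% versus 110% only because one group started lower.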