Students in major urban centers around the country perform better in charter schools than they do in traditional public schools, according to a new study from the Center for Research on Education Outcomes at Stanford University.
The CREDO study found that students in urban charter schools gained “the equivalent of roughly 40 days of additional learning per year in math and 28 additional days of learning per year in reading,” a substantial gain over their peers in traditional public schools.
Along with the results in math and reading scores, the study seems to dispel the notion that urban charter schools systematically pull in better-off students, but the numbers do vary significantly by region. The same is true of enrollment for English language learners, with some regions enrolling far more than their public school peers, and some far fewer.
The finding comes from a massive trove of data collected by CREDO, covering more than 1.5 million charter school students and matching each charter student with a nearly identical "virtual twin" in a traditional public school.
This is the third such study conducted by CREDO. But the first two looked at charter schools nationwide, including rural and suburban schools. This one focused only on 41 urban centers.
The 2009 CREDO study found that charter schools actually performed slightly worse than traditional schools, while CREDO’s 2013 study found charters to be roughly equal.
But the CREDO study has left some researchers unimpressed; they argue that the differences shown in the data are too small to support the conclusion that charters are performing better.
Explaining the data
Disputes over the meaning of complicated statistics are a key part of the policy debate. Faced with effect sizes ranging from 0.05 to 0.3 standard deviations, how does a reader know which numbers are a big deal, and which might as well be zero?
Margaret Raymond, CREDO's director, says she is very conscious of the difficulty of explaining the data, and CREDO works over its reports obsessively to ensure maximum clarity and accessibility.
CREDO responds to that challenge in this case, in part, by “translating” standard deviations into “days of learning.” That is, rather than leave the reader guessing what the raw statistics mean, the report tells the reader that a given level of change in the study translates into extra days of schooling.
Using that metric, the CREDO report finds that urban charter students gain the equivalent of 40 extra days of math instruction and 28 extra days of reading instruction over their virtual twins in traditional public schools.
Those are impressive gains. If CREDO has done that translation well, then the reader has something to hold on to.
But Andrew Maul, a UC Santa Barbara professor who specializes in measuring educational outcomes, questions the “days of learning” formula. Or, more precisely, he wonders what it is. The method for that calculation is not laid out in the report, he says, and he therefore cannot evaluate it.
Large or small?
Kevin Welner, director of the National Education Policy Center at the University of Colorado at Boulder, says he's impressed with the body of data CREDO collects, but he's cautious about drawing any conclusions.
Welner says that the results are “statistically significant,” which mainly reflects the large data set, but “practical significance” is another matter. And on the latter measure, he finds the results unimpressive. Welner argues that the data actually show what similar studies have shown in the past, namely that there is very little difference in effectiveness between charters and traditional public schools.
Looking at the raw data before that translation to “days of learning,” Maul said he doubts the effect sizes are truly newsworthy. He said that only six of the 41 regions in the CREDO study produced an effect size larger than 0.1 standard deviations, and only two were larger than 0.2.
“The very largest of these effect sizes, in Newark and Boston, may rise to the level of ‘small,’ ” Maul said, “and everything below that is ‘smaller than small,’ so basically trivial.”
But Raymond defends the effect sizes. “I call anything above .15 pretty dramatic,” she said, and she describes Boston and Newark, which range between 0.2 and 0.25 standard deviations, as “kicking it out of the park.”
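For context not drawn from the CREDO report itself: effect sizes stated in standard deviations are often read as percentile shifts under a normal distribution, which is the framing behind conventional benchmarks like Cohen's (0.2 "small," 0.5 "medium," 0.8 "large"). A minimal sketch of that conversion, using the effect sizes debated above:

```python
from statistics import NormalDist

# Illustration only, not CREDO's method: interpret a standardized
# effect size (in standard deviations) as the percentile an average
# treated student would reach relative to the comparison group,
# assuming normally distributed scores.
def percentile_shift(effect_size_sd: float) -> float:
    """Percentile of an average treated student vs. the control group."""
    return NormalDist().cdf(effect_size_sd) * 100

for d in (0.05, 0.1, 0.2, 0.25):
    print(f"effect size {d:.2f} SD -> {percentile_shift(d):.1f}th percentile")
```

On this reading, a 0.05 SD effect moves an average student from the 50th to roughly the 52nd percentile, while the 0.2 to 0.25 SD effects Raymond cites for Boston and Newark move that student to roughly the 58th to 60th percentile, which is why the same numbers can strike one researcher as "smaller than small" and another as "pretty dramatic."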
Who funds whom?
When the data are too complicated to readily convey to the lay reader, as is the case here, there is a natural impulse to “follow the money.” Those who can’t be sure of the accuracy of a claim can at least weigh the motives of those who make it.
Raymond responded to the NEPC challenge, in part, by questioning the source. “The center in Colorado has a very specific objective in mind,” Raymond said. “They are paid, outright, by a set of organizations that have an interest in refuting anything that looks positive for charter schools.”
NEPC is heavily funded by the American Federation of Teachers, Raymond said, a claim Kevin Welner denies.
Welner also said that he has never made an issue of CREDO’s funding, even though it could be an easy target. The CREDO research was supported by the Walton Family Foundation, created by the Walton family of Wal-Mart fame and a key financial player in the education wars on the side of school choice and charters.
Opponents of the "corporate reform" agenda often flag Walton money, on the assumption that the foundation would not fund a neutral broker.
For her part, Raymond insists that Walton funding carries no strings for CREDO. The first study CREDO did, in 2009, was also funded by Walton, Raymond said, and the results were not friendly to charter school advocates.
“Our methods have not changed,” she said. “We let the data speak. We have been receiving funding from Walton for years, because they know they can never move us.”