Setting the Standard in Education

Tuesdays with Arne: Is Race to the Top Arbitrary?

In Federal on April 27, 2010 at 2:53 pm

It seems I still have some learning to do when it comes to consistency. I have had a busy week and finally have some time to commit to the blog. Fear not, though: I will mend my ways and adjust my scheduling. On to more knowledge!

There seem to be lots of people angry about education these days. Usually they are angry that others are running things differently than they think things should be run. Charter schools or no charter schools? Teach For America or education schools? Pump lots of money in or let them suffer? OK, that last one is not what people actually think, but sometimes it seems that way. One reliable way to get lots of people angry is to throw around some money. Everyone wants a say in how money is spent: give incentives, pay teachers more, buy more technology. Any use of government funds therefore sits at the top of the anger meter. Public funds are, in part, everyone's money, so everyone seems to think they should have some say over them.

Last week, The Washington Post‘s education blog, “The Answer Sheet,” ran an article titled “Race to Top Winners Chosen Arbitrarily.” It was based on a report from the Economic Policy Institute that calls the Race to the Top “a muddled path to the finish line.” I have a hard time reading things that are so categorical. As soon as a report comes out calling something fundamentally right or wrong, I become skeptical. Still, the report has some strong analysis despite its extreme conclusions, and it raises important criticisms that should be corrected in future rounds of Race to the Top if the program is to continue.

For those who don’t know, Race to the Top uses a 500-point rating system to determine the winners of hundreds of millions of dollars. The table at the bottom, which appears in the report and comes from http://edocket.access.gpo.gov/2009/pdf/E9-27426.pdf, shows what factors go into the process. As you can see, thirty different factors were used to determine grades. And that leads to the first criticism: Peterson and Rothstein (neither of whom, I’d like to point out, is an education policy analyst) say that the process is “needlessly complex.” They point out that there are many factors with varying weights. I’m not sure this is much of a problem, since education is complex in itself. Better to have a large number of specific criteria than a few vague ones.

However, this claim becomes more significant alongside the charge that the criteria were not scientifically chosen. I’m not sure how the authors came to that conclusion; they state that the factors themselves seem to be arbitrary, but clearly the people running the program are not just picking issues out of a hat. The report does make one good point: even the explanation that the factors reflect policy preferences does not quite hold water. Duncan’s “Blueprint,” which proposes allocations under the ESEA (Elementary and Secondary Education Act), includes ideas for competitions in areas that receive no points in RttT. The report treats this as an incongruity: either Duncan thinks these are important issues or he doesn’t. But if he is asking for money for them separately, does that mean money needs to be directed at those areas twice?

In addition to the claim that the factors are arbitrary, the report says the weights themselves are arbitrary:

Is there scientific support for the “State Success Factors” being 90.6% as important as the “Great Teachers and Leaders” factor? Should the “Great Teachers” maximum points be 140, or maybe 163, instead of 138?
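For what it's worth, the report's 90.6% figure is not mysterious: it is simply the ratio of the two categories' maximum point totals in the rubric shown in the table at the bottom of this post. A quick check:

```python
# Ratio behind the report's "90.6% as important" remark, using the
# maximum point totals from the Race to the Top scoring table.
state_success = 125    # max points for "State Success Factors"
great_teachers = 138   # max points for "Great Teachers and Leaders"

ratio = state_success / great_teachers
print(f"{ratio:.1%}")  # prints 90.6%
```

So the rhetorical question is really about why the point totals are 125 and 138 rather than some other pair of numbers.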

Apparently they subscribe to the logical fallacy I warned my 10th graders against: absence of evidence is not evidence of absence. In other words, just because these researchers don’t know the reasons for the varying weights does not mean there aren’t any. It seems absurd to claim that there was no scientific basis at all for choosing these factors and weights. They even point out that there was an open comment period for the public, during which some suggestions were accepted and others rejected (or, as they put it, “ignored”). The problem is that the reasoning was not given, not that there wasn’t reasoning.

The biggest problem with RttT’s system, in my eyes, is the one the report identifies: the enormous scales used and the resulting inconsistency in grading. In my Master’s in Education program, we were warned against grading scales that are too fine-grained. There are usually only about three to five gradations that a normal person can distinguish between; even seven starts to get hazy. What is the difference between scoring 42 or 43 points out of 50? That, I will agree, leads to arbitrariness. If the creators of the system want something to be worth 50 points, the difference should come from weighting rather than a broad scale. Perhaps it should be scored out of five possible points and then multiplied by 10.

On top of the scale, it is clear that the factors themselves are not specific enough. The report points out that in one instance, Florida received scores of 25, 35, 38, 40, and 40 points from the five judges on the same criterion. The scores are then averaged. I don’t know about you, but to me, for one judge to think that Florida should get 25 while two others give it 40 means that someone doesn’t know what they’re doing. There needs to be consistency. The report suggests an Olympic-style dropping of the lowest and highest scores to account for outliers. This does not satisfy me. If there are extreme outliers, that points to a problem in the criterion itself. The graders need to reach consensus, not just agree to disagree.
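To make the arithmetic concrete, here is a quick sketch (using the Florida scores quoted above from the EPI report) of the current straight average versus the Olympic-style trimmed mean the report proposes:

```python
# Florida's scores from five judges on the same criterion, per the EPI report.
scores = [25, 35, 38, 40, 40]

# Current Race to the Top approach: plain average of all five judges.
mean = sum(scores) / len(scores)

# Report's proposal: drop the lowest and highest score, then average the rest.
trimmed = sorted(scores)[1:-1]
trimmed_mean = sum(trimmed) / len(trimmed)

print(round(mean, 2), round(trimmed_mean, 2))  # prints 35.6 37.67
```

Notice that trimming moves Florida's score by about two points, but the outlier judge's 25 simply vanishes rather than being reconciled, which is exactly my objection.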

With all of these problems, and I will agree there are a fair number, Peterson and Rothstein recommend that the government move toward a pass/fail system rather than one so complex that it is sure to produce inconsistencies. But this seems like a step backwards. The whole point of Race to the Top is that there is a top that states are aiming for, not a bottom. A bare minimum that states need to achieve sets states’ aims at that minimum, rather than creating an education market of sorts in which the best state wins. With so much money riding on it, there certainly need to be improvements, but these particular problems do not call for abandoning the concept.

Metric Weighting for Race to the Top Competition

Possible points Weight
A. State Success Factors 125 25%
(A)(1) Articulating State’s education reform agenda and LEA’s participation in it 65 13
(i) Articulating comprehensive, coherent reform agenda 5 1
(ii) Securing LEA commitment 45 9
(iii) Translating LEA participation into statewide impact 15 3
(A)(2) Building strong statewide capacity to implement, scale up, and sustain proposed plans 30 6
(i) Ensuring the capacity to implement 20 4
(ii) Using broad stakeholder support 10 2
(A)(3) Demonstrating significant progress in raising achievement and closing gaps 30 6
(i) Making progress in each reform area 5 1
(ii) Improving student outcomes 25 5
B. Standards and Assessments 70 14
(B)(1) Developing and adopting common standards 40 8
(i) Participating in consortium developing high-quality standards 20 4
(ii) Adopting standards 20 4
(B)(2) Developing and implementing common, high-quality assessments 10 2
(B)(3) Supporting the transition to enhanced standards and high-quality assessments 20 4
C. Data Systems to Support Instruction 47 9
(C)(1) Fully implementing a statewide longitudinal data system 24 5
(C)(2) Accessing and using state data 5 1
(C)(3) Using data to improve instruction 18 4
D. Great Teachers and Leaders 138 28
(D)(1) Providing high-quality pathways for aspiring teachers and principals 21 4
(D)(2) Improving teacher and principal effectiveness based on performance 58 12
(i) Measuring student growth 5 1
(ii) Developing evaluation systems 15 3
(iii) Conducting annual evaluations 10 2
(iv) Using evaluations to inform key decisions 28 6
(D)(3) Ensuring equitable distribution of effective teachers and principals 25 5
(i) Ensuring equitable distribution in high-poverty or high-minority schools 15 3
(ii) Ensuring equitable distribution in hard-to-staff subjects and specialty areas 10 2
(D)(4) Improving the effectiveness of teacher and principal preparation programs 14 3
(D)(5) Providing effective support to teachers and principals 20 4
E. Turning Around the Lowest-Achieving Schools 50 10
(E)(1) Intervening in the lowest-achieving schools and LEAs 10 2
(E)(2) Turning around the lowest-achieving schools 40 8
(i) Identifying the persistently lowest-achieving schools 5 1
(ii) Turning around the persistently lowest-achieving schools 35 7
F. General 55 11
(F)(1) Making education funding a priority 10 2
(F)(2) Ensuring successful conditions for high-performing charter schools and other innovative schools 40 8
(F)(3) Demonstrating other significant reform conditions 5 1
Competitive Preference Priority 2: Emphasis on STEM (Science, Technology, Engineering, Mathematics) 15 3
Total 500 100%