Setting the Standard in Education

Assessing the Assessments

In Federal on July 4, 2010 at 5:37 pm

Happy 4th! I’m about to be patriotic and read the SMARTER and PARCC applications for Race to the Test. Both are enormous, so I’m sure I won’t catch every detail of their applications. PARCC hasn’t even put their entire application online because it’s so big. Before I begin, I’ve got a few opening thoughts.

Assessment Applications and Fireworks: What could be better?

First of all, I think the state involvement is not what you’d expect. SMARTER has 31 states while PARCC has 26. Of course, the usual states aren’t participating: Texas, Alaska, and Wyoming. A few other states aren’t in on it either, though: Nebraska, Minnesota, and Virginia (notice that Wisconsin is participating, even though Rep. Obey wanted to cut RttT funding). It’s notable that none of those states applied for Race to the Top. Minnesota and Virginia claim that their standards are higher than the Common Core standards on which the assessments will be based. While Minnesota may have some basis for its claim, ranking behind only seven states in the strength of its standards according to Education Next, Virginia ranks 40th among states in standards rigor. I’m having a hard time believing a C would bring down a D. Of course, the timeline could have something to do with this. States had to sign on to these consortia before the Common Core standards were even released, so they didn’t have much to go on. By most accounts, though, the standards are higher than those of most if not all states, so that’s not a worry for those participating.

Aside from the states not participating, some states are participating in both consortia. Their staffs must be working overtime. Those in both are Colorado, North Dakota, Oklahoma, Kentucky, Alabama, Georgia, South Carolina, Ohio, Pennsylvania, Delaware, New Jersey, and New Hampshire. It’s important to note that those states are considered “Advisory States” within SMARTER, rather than “Governing States.” Here’s a graphic to explain what that means. Most likely, they are only decision-makers in PARCC, but are working collaboratively with SMARTER on the R&D parts. This might actually prove handy for SMARTER: it gets more states doing work for it (31 vs. 26), but has fewer states arguing over what to do (19 vs. 26).

Other observations? Going back to state standards rigor, nine of the top ten participating states are in SMARTER (Massachusetts isn’t), while only three are in PARCC (New Hampshire and New Jersey are in both). The states not participating are ranked 8 (MN), 23 (WY), 27 (AK), 40 (VA), 45 (TX), and 49 (NE). It’s clear that local-control politics (read: conservatives), rather than worry about low standards, is keeping these states from participating. I suppose it’s just as well. That means Virginia, Texas, and Nebraska can’t drag down the quality.

CORRECTION: After reading Bill Tucker’s post, I realize I overlooked the organization of PARCC. The states that are advisory within SMARTER and are also in PARCC are only advisory in PARCC as well. In addition, Iowa and South Dakota are advisory within SMARTER and not in PARCC, while California, Mississippi, and Arkansas are advisory within PARCC but not in SMARTER. That means PARCC has only 11 governing states, 8 fewer than SMARTER. Tucker also points out that the openness of the structure allows states to join or become governing states fairly easily, so it is unlikely that the consortia will stay in this configuration.

Creating the Finish Line

In Federal, Uncategorized on June 29, 2010 at 2:02 pm

It seems as though I am not alone in my criticism of Cortines from my last post. The mayor’s not too fond of him, either: he says Cortines isn’t supportive enough of charter schools.

As I said on Thursday, one of the big recent developments is Race to the Test, a contest to create assessments that align with the new Common Core standards. Personally, if they can actually make legitimate assessments that aren’t just fill-in-the-bubble multiple-choice exams forcing students to regurgitate information for a few hours at a time, not only will I be impressed, but it will be a major step forward from what current state tests look like.

The odd thing about this particular race, though, is that instead of allowing a number of different groups to compete for the prize, only three organizations were tapped, and all three will be getting something. According to EdWeek, there were originally six consortia, but because there was so much overlap, they combined forces. Two of the three consortia, consisting of 26 and 31 states, are competing for $320 million of the total $350 million to create tests for all grades, while a smaller group of 12 states is aiming at making reliable high school exit exams. Even though it is states creating the assessments, the federal government’s suggestions may go a long way toward shaping what the tests look like. The SMARTER Balanced Assessment Consortium and the Partnership for the Assessment of Readiness for College and Careers started out with major differences in their proposals, but after receiving comments on them, their plans now look very similar to one another. Both plan to have performance assessments spread throughout the year to track development, along with a big exam at the end.

Perhaps smarter tests can be integrated into schools better.

One major difference is that, although both groups are using technology, SMARTER seems to have latched onto the computer-adaptive model, which can lead to greater accuracy and faster tests. For those unfamiliar, most graduate entrance exams (the GRE and GMAT, for example) at this point use computer-adaptive tests that change depending on how well the student is doing. They are essentially the exam equivalent of an optometrist trying to figure out your vision: instead of changing the strength of the prescription depending on how well you can see, questions get harder or easier depending on how well you perform on each one. That way, the test can zero in on your performance level. Why is this important? In most states, standardized tests take a significant amount of time. I can remember my own high school experience in Indiana, when the entire school would essentially shut down for a week so that sophomores and students who hadn’t previously passed could take the ISTEP. When I taught in Arizona, there were separate days for the reading, writing, math, and science sections of AIMS, plus an additional day for freshmen to take the TerraNova, and that doesn’t even count the extra days in the fall for students who failed the previous spring to retake the tests. If these tests used adaptive technology, they could be whittled down to a fraction of the length, resulting in more accuracy because students burn out less over the sheer length of the tests.
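
To see why adaptive tests can be so much shorter, here’s a minimal sketch of the staircase logic behind them, assuming a simple 0–100 difficulty scale. It’s purely illustrative: real computer-adaptive exams use item-response-theory models rather than this binary-search-style rule, and the `adaptive_test` and `student` functions are invented for the example, not anything from either consortium’s proposal.

```python
import random

def adaptive_test(answer_correctly, num_questions=10):
    """Toy staircase-style adaptive test (illustrative only).

    answer_correctly(difficulty) stands in for the student: it returns
    True if the student answers a question of that difficulty correctly.
    Difficulty and ability share a 0-100 scale.
    """
    estimate = 50.0  # start in the middle of the scale
    step = 25.0      # how far to move after each answer
    for _ in range(num_questions):
        # Serve a question matched to the current estimate, then adjust,
        # like an optometrist swapping lenses until the picture is sharp.
        if answer_correctly(estimate):
            estimate = min(100.0, estimate + step)  # got it right: harder
        else:
            estimate = max(0.0, estimate - step)    # missed it: easier
        step = max(1.0, step / 2)  # narrow the search each round
    return estimate

# A simulated student with true ability around 70 answers correctly
# whenever the question's difficulty is at or below their (noisy) ability.
def student(difficulty):
    return difficulty <= 70 + random.uniform(-5, 5)

print(adaptive_test(student))  # lands near 70 after only ten questions
```

Because each answer halves the step size, ten questions are enough in this sketch to pin the estimate to within a few points of the simulated student’s ability, which is the intuition behind shorter, more accurate adaptive exams.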

While these developments are encouraging, there has been some criticism of the process. EdWeek’s Catherine Gewertz says the timeline might be too quick: the DOE wants these tests in use by 2014–15, and a strong test requires considerable piloting and adjustment, which four years may not be enough to do reliably. Bill Tucker of The Quick and the Ed and EducationNext warns that some important steps, which may or may not happen, could determine the success of the process. His biggest concern is having open platforms and shared infrastructure. The two consortia seem to be working together so far, so it looks like a positive start. We probably won’t know how successful this initiative is for a couple of years, though.
