As I said on Thursday, one of the big recent developments is Race to the Test, a contest to create assessments that align with the new Common Core standards. Personally, if they can actually make legitimate assessments that aren’t just multiple-choice fill-in-the-bubble exams that force students to regurgitate information for a few hours at a time, not only will I be impressed, but it will be a major step forward from what the current state tests look like.
The odd thing about this particular race, though, is that instead of allowing a number of different groups to compete for the prize, only three organizations were tapped and all three will be getting something. According to EdWeek, there were originally six consortia, but because there was so much overlap, they combined forces. Two of the three consortia, consisting of 26 and 31 states, are competing for $320 million of the total $350 million to create tests for all grades, while a smaller group of 12 states is aiming to make reliable high school exit exams. Even though it is the states creating these assessments, it seems as though the federal government's suggestions may go a long way toward shaping what the tests look like. The SMARTER Balanced Assessment Consortium and the Partnership for the Assessment of Readiness for College and Careers started out with major differences in their proposals, but after receiving comments on them, their plans now look very similar to one another. Both are planning on having performance assessments spread throughout the year to track development, along with a big exam at the end.
One major difference is that while both groups are using technology, SMARTER seems to have latched onto the computer-adaptive model, which can lead to greater accuracy and shorter tests. For those unfamiliar, several graduate entrance exams (the GRE, the GMAT, etc.) at this point use computer-adaptive tests that change depending on how well the student is doing. They are essentially the exam equivalent of an optometrist trying to figure out your vision. Just as the optometrist changes the strength of the lenses depending on how well you can see, the questions get harder or easier depending on how you perform on each one. That way, the test can zero in on your actual level. Why is this important? In most states, standardized tests take a significant amount of time. I can remember my own high school experience in Indiana, when the entire school would essentially shut down for a week so that the sophomores, and students who hadn't previously passed, could take the ISTEP. As a teacher in Arizona, I saw separate days for the reading, writing, math, and science sections of AIMS, plus an additional day for freshmen to take the TerraNova, and that doesn't even include the extra days in the fall for students who failed the previous spring to retake the tests. If these tests used adaptive technology, they could be whittled down to a fraction of the length, and the results would likely be more accurate because students wouldn't burn out from the sheer length of the tests.
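To make the "zeroing in" idea concrete, here is a minimal sketch of the adaptive principle as a binary-search-style loop. This is an illustration only: real adaptive exams like the GRE use item-response-theory models rather than a simple halving of the range, and the simulated student, the 0-100 scale, and the fixed ten-question length here are all my own assumptions.

```python
# Sketch of the computer-adaptive idea: each question is pitched at the
# midpoint of the current uncertainty range, and the range narrows based
# on whether the student answers correctly.

def adaptive_estimate(answers_correctly, low=0.0, high=100.0, n_questions=10):
    """Estimate ability on a 0-100 scale using n_questions adaptive items."""
    for _ in range(n_questions):
        difficulty = (low + high) / 2          # ask at the current midpoint
        if answers_correctly(difficulty):
            low = difficulty                    # correct: ability is at least this
        else:
            high = difficulty                   # missed: ability is below this
    return (low + high) / 2

# Simulated student whose true level is 62: answers correctly whenever
# the question's difficulty is at or below that level.
student = lambda difficulty: difficulty <= 62

print(round(adaptive_estimate(student)))  # → 62
```

Ten questions shrink a 100-point range to under a tenth of a point, which is the intuition behind why an adaptive test can be so much shorter than a fixed-form one that asks every student the same spread of questions.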
While these developments are encouraging, there has been some criticism of the process. EdWeek's Catherine Gewertz says the timeline might be too quick: the DOE wants these tests in use by 2014-2015, but a strong test needs considerable piloting and adjustment, and four years may not be enough to do that reliably. Bill Tucker of The Quick and the Ed and EducationNext warns that there are some important steps that could determine the success of the process but may not happen; his biggest concern is having open platforms and shared infrastructure. The two consortia seem to be working together so far, which looks like a positive start. We probably won't know how successful this initiative is for a couple of years, though.