So my students are currently taking a publisher’s online pilot test. We are doing it for three reasons:
1. This gives the students practice for the SBAC online test. The format of these practice tests is meant to mimic that of the SBAC.
2. Students are excited to be able to give feedback to the company, which, the publisher claims, is why they are piloting these tests in classrooms. (Of course, if they really wanted feedback, perhaps they would have designed a way for the students to communicate with them about their experience interacting with the assessment.)
3. Our programs get money for each individual student's participation; and, let's face it, we need it.
As the kids are taking the tests, they are marking what’s working and what’s not. They are jotting down the slight glitches as they arise, and it is my hope that the company has an ear to hear such earnest feedback.
Some glitches seem minor, while others are more serious. One student found that when you're asked to drag-and-drop something, you really just have to click on the item and then click on the destination. It's minor, sure, but during tests, kids take things very literally, and rightfully so. All they are concerned about is displaying their knowledge, not inferring how to display it. For me, this becomes a teachable moment in writing specific procedural directions. There was another instance where one student couldn't click on the correct response. This kind of glitch is far more frustrating, but because it's a no-stakes pilot, the kids are willing to give their feedback without revenge on their minds.
Overall, the students seem really interested in the format of the test. They can click with far more accuracy than they used to bubble, the illustrations are vibrant and colorful, and watching them navigate on the laptops seems far closer to using real-world skills than watching them use an industrial-era #2 pencil. They are also being asked to type, highlight, drag, click, bold, read, scroll, etc. There is no doubt it is a more engaging test than before, and a more engaged test-taker, I believe, will eventually translate into more successful test results.
What’s got me concerned, however, is the quality of the questions themselves. After all, I’ve been looking over the students’ shoulders, and I’m wincing a bit. For in an attempt to ask more critical thinking questions, the test-makers are, in fact, simply asking tricky ones.
There's a fine line between a question that asks you to think deeply and one that is meant to mislead you. Test-makers have gotten this wrong before. Now, however, the potential for widespread carelessness is far greater, perhaps because exam writers feel they have been granted permission to create these tricky questions: the Common Core standards, after all, ask for questions that require more critical thinking. But tricky and critical are not one and the same.
In general, critical thinking questions TEND to be open-ended, Level 2 or 3 questions, ones that ask students to Create, Evaluate, Justify, etc. However, when test-makers need to develop one for a multiple-choice test, they find it's much harder to do. That's why multiple-choice questions TEND to be Level 1 in nature; hence the tendency to try to trick the test-taker into selecting the wrong answer. It's a poor attempt at rigor. And that's what I'm seeing.
Now, I want to say that I am not anti-standardized-test. I have no problem with yearly standardized tests if they are used formatively, as they were meant to be used. Do I like over-testing? Heck no. But having some kind of baseline of progress is not so terrible. I also like the Common Core Standards. Authentic assessments, real-world alignment, writing across the content areas. Consumption and Creation. Project Based Learning and the expectation of collaboration. I'm all for it.
I say this because I want it clear that I want the assessments, and those meant to help students prepare for them, to succeed. I don’t care about their political past. What I do care about, however, is best practice and what works for our students. The need for change was due to our system’s stagnancy. But the roll-out won’t be successful simply due to demand. We desperately need a successful upgrade in both education’s philosophy and quality.
Philosophically, I absolutely agree with using technology as a tool to assess. This drives its more frequent use in the classroom, and, call me Machiavellian, but in this case, the ends justify the means.
However, it’s the execution and testing quality that has me worried. That being said, I see this new era of test development as an opportunity to improve on something in our broken system. The question becomes whether we are living in an era of squandered opportunity.
We’ve all known for quite some time that the past standardized tests were inequitable, antiquated, and flawed. The goal was to make them better. Requiring questions that demand more critical thought is a good start, but, as it turns out, actually creating those questions and assessing critical thinking isn’t as easy as it sounds.
I can't get specific about what I'm seeing. I know, after all, that this is a pilot and a work in progress. However, I can't help but think that test-creators need to be on the students' side. Creating questions meant to trick is not, I daresay, an assessment of knowledge or thought process. It's an assessment of confidence, a quality not many students have until they are developmentally much older. After all, you need a certain amount of confidence, arguably verging on arrogance, to look at a trick question and say, "Wait a minute. Something's not right here," and trust that you know more than what's printed in black-and-white.
The test-makers must figure that a student who is reading critically will spot the "trick," but they have to understand that when students take a test, they throw distrust out the door and hope that the subtleties and complexities of their thinking are seen clearly through the format of what is still, to them, just another standardized test.
Except this time, the makers seem out to get them in the name of critical thinking.