Students would like to sit fewer multiple-choice exams, because they feel these don’t prepare them for the future. But that’s incorrect, says professor of psychometric and statistical techniques Rob Meijer.
Several people have written to UKrant to argue in favour of more open-question exams. They feel that multiple-choice exams only gauge whether students can recognise answers and don’t account for higher cognitive processes such as analytic capabilities and the application and assessment of knowledge.
None of the writers, however, presented any scientific arguments in favour of their viewpoint. What does science have to say?
Research has shown that well-written multiple-choice exams and open-question exams measure higher cognitive functions to the same degree. It has also been found that multiple-choice exams in the medical field can predict how people will function in practice with more accuracy than open questions can. It’s also interesting to note that research has shown a high correlation between scores on multiple-choice and open-question exams.
Generally speaking, it would be best to write ‘context-rich’ multiple-choice questions. Context-rich questions (questions that present a case, for example) don’t just gauge knowledge, but also analytical capabilities and the ability to apply knowledge.
Reliability plays a big part in assessing the quality of an exam. Reliability depends on the consistency with which questions are assessed; the absence of noise. Research has shown that multiple-choice exams are more reliable than open-question exams.
Every lecturer who’s ever had to grade open-question exams knows why. Even with a model answer to compare responses against, grading open questions requires reading each answer closely, and that consistency doesn’t hold up across different questions answered by different students.
The following isn’t a didactic argument, but efficiency plays a big role, too. Multiple-choice exams are more efficient than open-question exams. Writing multiple-choice questions takes more time than writing open ones, but grading them takes much less. Recently, approximately seven hundred students sat a test-theory exam; grading open questions for all of them would cost a lot of time and therefore money.
Finally, some considerations based on anecdotal evidence. Some people say that multiple-choice exams are easier than open-question exams. But there are easy multiple-choice exams and there are difficult ones. One of my colleagues recently remarked that students only passed an exam because they did better in the open-question portion than in the multiple-choice one.
That’s probably because lecturers tend to give students the benefit of the doubt and even give them partial credit for a question when they don’t really deserve it. Strict graders might think the opposite, however. My point is that graders can affect the results in an unwanted and inconsistent way.
Another argument that the critics of multiple-choice exams often bring up is that students should learn how to write well-formulated answers. If this is the goal of an exam, multiple-choice questions won’t do. However, very few courses actually aim to teach their students writing abilities, which means exams don’t need to test them.
Open-question exams are preferable over multiple-choice ones if we want to test the creative capabilities of students. Can they properly write a logical argument?
In conclusion, multiple-choice exams are probably just as suitable as open-question exams for measuring what you want to know. On top of that, they are more reliable and efficient. The scientific literature on testing and exams would argue that lecturers should write multiple-choice exams unless there’s no other way.
Finally, as a psychologist, I’m intrigued that people seem to think that a simple task like ticking the box for the right answer can only measure simple cognitive abilities. But that’s not true, of course.
Did you know that selecting the right answer on an hour-long intelligence test designed to predict academic success can also predict how you’ll do in your future job, your creative endeavours, and how successful you’ll be in your career? Isn’t that fascinating?
Rob Meijer is a professor of psychometric and statistical techniques at the Faculty of Behavioural and Social Sciences.