Towards a better exam
Mastery, not memorisation
Every exam season, the University Library is full of them: hundreds of students diligently cramming for their exams. Anxiously, they push as much information into their heads as they can, right before that all-important test. And four or five days later, it will all be forgotten again.
‘These people are studying more for the exam than for the knowledge’, says Barbro Melgert, who has taught immuno-pharmacology at the UG since 2010.
Her field, like many others in medical sciences, requires students to remember huge amounts of information to pass tests and get their degree. But Melgert doesn’t believe that this is the best way to prepare students for their careers after university – and she’s not alone in that opinion. ‘Pass grades are nice, but you’re doing this because you want to acquire a certain type of knowledge’, she says. ‘Because you want to do something with it later in life.’
Faulty testing
While most students realise they need that knowledge to pursue their chosen careers, during their studies, passing exams is always priority number one – and cramming is an effective way to pass. But is it still the best way? And was it ever?
These people are studying more for the exam than for the knowledge
No, says Hedderik van Rijn, who teaches neurosciences and is the founder of a new, adaptive learning system called SlimStampen (Dutch for ‘smart cramming’). ‘Immediately after the exam, you’ve already forgotten most of the information.’
The way students are currently being tested is faulty, he explains. ‘Most courses have one thing in common: their learning goals state that after taking the course, a student should have knowledge about the important ideas and concepts in that particular field.’ However, students aren’t asked to demonstrate the insight they have into their subject, just what they can commit to their short-term memory.
He feels testing would be better if it aimed to show how well students can discuss the important concepts.
Group tasks
Of course, the simple reproduction of facts is not the only way courses test their students’ knowledge and abilities. There are also essays, presentations, and various group assignments. A student’s scores across this blend of summative and formative assessment feed into a single grade, which represents how well they did in a course or in the programme as a whole.
The whole system needs to move towards acknowledging group assessment grades
We pretend that all these grades together reflect a student’s personal potential. The problem is, they really don’t, according to Jan-Willem Strijbos of GION, the UG’s department of Education and Research.
‘With the diploma, the university certifies what a student individually can do’, he says. But many courses include group tasks, both because they mean less grading work for lecturers and because they teach collaborative skills that students will need in the workplace. And this means that a student’s final grade is partly based on work that is not entirely their own.
Although this is not necessarily a bad thing if the end goal of testing is to find out what a student is capable of, Strijbos says, there should be greater recognition of how grades from group assessment affect the final score. ‘The whole system needs to move towards acknowledging that more explicitly.’
Learning outcomes
Changes are necessary, the specialists agree. However, in a massive institution like the UG, which teaches hundreds of courses to thousands of students, such changes are not easily made.
One of the problems is that faculties are responsible for the way they test their students’ knowledge. Course coordinators and individual lecturers are free to structure their teaching and assessment in whatever way they feel is best, but their methods need to be in line with the learning outcomes of the course and accepted by their faculty.
‘For individual courses, there’s always room to experiment’, says Robert van Ouwerkerk, head of the department of Strategy Education and Students, which advises faculties when they make changes to their assessment methods.
Courses usually have a mix of graded and ungraded tasks, including essays, participation, oral examinations, and group assignments. But even though faculties are always looking for new ways to find out what students have learned from a block of study, changes happen slowly, he has found.
Bottom-up approach
There are basically two approaches, explains Van Ouwerkerk. One is through the faculty deciding to transform an entire programme. Medical sciences, for instance, made a seismic shift in how their courses were structured in recent years. They carefully planned out the move, and after they implemented the new design, they tweaked the courses over time as they saw what worked and what didn’t. ‘That is more top down.’
You don’t need to be nervous that you won’t remember a term during the exam
The other is individual lecturers making small, incremental changes: bottom up. Here, a lecturer tries something new in how they teach or test their students, and if it works, other teachers adopt it as well.
That happened when Van Rijn started using SlimStampen, an online tool that helps students learn facts more effectively, in his classes. The programme tests students’ ability to recall factual knowledge over the course of a module and rewards that with marks. Then, at the exam, only their insight into the concepts of the course is tested. ‘You don’t need to be nervous that you won’t remember a term during the exam, as long as you can describe it and discuss the topic.’
SlimStampen found its way to other lecturers, like Melgert. She feels it has been a good addition, because it gets students studying the material earlier, and they also enjoy using it. ‘It’s an easy and sort of fun way to get some extra points.’
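For readers curious about the principle behind such adaptive tools, the short Python sketch below shows the general idea in its simplest form: facts a student answers correctly come back for rehearsal after longer gaps, while facts they miss come back sooner. The class name, intervals, and doubling rule are purely illustrative assumptions for this example; they are not SlimStampen’s actual model.

import heapq

class RehearsalScheduler:
    """Toy spaced-repetition scheduler: weak facts resurface sooner."""

    def __init__(self):
        self.queue = []      # (next_due_time_in_seconds, fact)
        self.interval = {}   # current rehearsal interval per fact

    def add_fact(self, fact, now=0.0):
        self.interval[fact] = 15.0  # start with a short interval (assumed)
        heapq.heappush(self.queue, (now + self.interval[fact], fact))

    def next_fact(self):
        # Return the fact whose rehearsal is due soonest.
        return heapq.heappop(self.queue)

    def record_answer(self, fact, correct, now):
        # Correct answers stretch the interval; errors shrink it,
        # so poorly remembered facts are rehearsed more often.
        if correct:
            self.interval[fact] *= 2.0
        else:
            self.interval[fact] = max(15.0, self.interval[fact] / 2.0)
        heapq.heappush(self.queue, (now + self.interval[fact], fact))

# Example with three made-up vocabulary items:
scheduler = RehearsalScheduler()
for term in ["cytokine", "antigen", "mast cell"]:
    scheduler.add_fact(term)

due, term = scheduler.next_fact()
scheduler.record_answer(term, correct=False, now=due)  # missed: it returns soon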
AI
Both approaches work well but slowly, and this may be an issue for faculties now that AI has made assessing students’ abilities all the more difficult, almost overnight.
New challenges include making sure that the work students hand in is their own, as well as deciding what information students still need to learn by heart if they will ultimately just be able to ask an AI for the answers. ‘The whole discussion is booming at this moment’, says Van Ouwerkerk.
His department is supporting faculties by giving guidelines on how they could go about this, but in the end, how they change their testing methods is up to them.
Though faculties are still debating this, Van Ouwerkerk believes that much greater emphasis will be placed on assessing students through oral examinations rather than take-home papers, and that students will need to explain how they produced the work they hand in for grading. ‘Just to see: did you in fact understand?’