A total of 115 law students turned out to have failed their exams after initially being given a passing mark. Lecturers blame the grading process in Brightspace: it’s too error-prone.
It’s a bunch of small things that, put together, are very annoying, says lecturer Laurent Jensma. He teaches a course to six hundred students, which means he spends a lot of time grading. He has to constantly stay alert, he says.
One of his colleagues wanted to award 3.5 points to a question, ‘but he accidentally hit the zero instead of the decimal point.’ As a result, the student got 305 points.
Whenever something like this happened in Nestor, the system would notify the lecturer that they were handing out extra points, but Brightspace doesn’t do this. Conversely, if a lecturer is a little too eager with their clicks, students get zero points.
There is no overview that allows lecturers to quickly and easily check whether they did it right, says Jensma. In Nestor, he could sort the questions by the number of points, but that’s no longer an option. That means he has to go over them all just to spot a mistake.
The ‘save’ and ‘publish’ buttons are also confusing, says tax law lecturer Mirjam de Weerd-de Jong. ‘I’m always worried that if I hit the “publish” button, the whole thing will become visible to everyone.’ But that’s not what happens. She feels the name for the button has been poorly chosen.
The lecturers also say that things easily go wrong with lists of marks. These lists have to be exported as an Excel file and then entered into Progress. But calculating the final mark in Excel from the individual questions is really difficult, says De Weerd-de Jong.
‘That’s probably what went wrong’, she says, alluding to the wrongly marked exam. ‘Someone used the wrong formula on the marks.’
‘Terrifying’, she says. ‘Things worked perfectly in Nestor. The system added the individually marked questions itself, and you could see how it had done that. It was really easy to transfer that to Progress. Everything was so much simpler.’
‘It’s like you need a black belt in Excel just to get it to calculate the right marks’, says university lecturer in IT law Gerard Ritsema van Eck, summarising the problem.
Inputting exams into Brightspace is also awful, says De Weerd-de Jong. The other day, she spent half an hour reading the instructions and getting all the settings right. ‘It’s the worst. You have to check pages and pages of boxes. It never ends.’
Changes will be made before the next exam period, says CIT team lead of assessment support Josien de Boer. That should solve at least a few of the teething problems. ‘But not all of them.’ Things simply can’t be done that quickly, she emphasises.
She doesn’t think it’s entirely fair to compare Brightspace to Nestor. ‘Nestor was developed over the course of seven years, Brightspace in four months.’
Marking exams in Nestor didn’t go smoothly right from the start, either, she says. ‘We ended up building our own marking tool. We’re currently in the process of copying that.’ It will have the same functionalities ‘and it will look the same’.
This workaround would also solve Jensma’s problem of Brightspace awarding zero points to skipped questions. The CIT has also reported this as a bug to Brightspace, says De Boer.
The new marking programme is currently being tested by lecturers who have an exam coming up. It should be available to all lecturers by January.