Education

Lecturers worry for students

In Chat we trust

Illustration by Iede van der Wal
The introduction of ChatGPT has been detrimental to the quality of students’ work, UG lecturers say. They worry about the consequences. ‘Students aren’t just using it for their homework, but also as some sort of second brain.’
April 8, 2025 at 13:14.
Last modified on April 8, 2025 at 13:14.

By Joana Abreu Morais
Alarm bells first started ringing for Lorena Rojas when she noticed her students were handing in assignments with random or non-existent sources. ‘I felt a little bit betrayed’, says the assistant professor of technology law. ‘The entire course is about researching, but students were just fulfilling assignments for the sake of it.’ 

ChatGPT had just been released and was becoming popular amongst students. AI lecturer Juan Diego Cardenas quickly saw the effects as well. ‘Their grammar suddenly improved. And they started using more fancy English words in their assignments.’

And while better writing usually isn’t something to complain about, in this case, it stood out. ‘Students were adding a lot of hyperbole, adjectives, and unnecessary text that basically doesn’t make sense in technical documentation.’

Before and after

Many lecturers have witnessed a clear before and after ChatGPT – released in November 2022 – in their students’ work. How do they handle this? And is it all bad news, or can generative AI be a good thing for education as well?

They just trust everything the chatbot says

Most lecturers UKrant spoke to don’t have a problem with students using ChatGPT as an assistant. Forbidding the tool altogether is not an option, they realise. However, one of the main issues they identified is that students depend heavily on it, without thinking about the consequences.

‘They just trust everything the chatbot says and don’t compare it with reliable sources’, says Cardenas. In the last two years, he has noticed a significant drop in the quality of students’ work. ‘Chatbots are amazing for brainstorming, as long as you critically analyse what they’re producing.’ 

Proper prompts

‘In general, the experience with students and ChatGPT has not been so great’, adds Matias Valdenegro, assistant professor of machine learning. He is sitting with Cardenas and colleague Ivo de Jong – also a lecturer in AI – in a study room on the third floor of the Bernoulliborg and they are eager to talk. 

ChatGPT is doing the students more harm than good, they feel. ‘I think the takeaway from some of our research is that these tools don’t always really work as well as you would like. So it’s good to study them, but not necessarily good to just use them and trust that they work’, says De Jong. ‘Students are not just using it for their homework and copy-pasting what ChatGPT says, but also as some sort of second brain’, adds Cardenas.

According to Rojas, one of the issues is that most students don’t know how to properly prompt ChatGPT, often resulting in poor answers. ‘If you tell ChatGPT to give you an example of something, the answer could well be hypothetical or made up.’

Misconceptions

Genetics lecturer Kai Yu Ma has had a more positive experience with AI tools, though. In his genetics course, for example, chatbots can be useful for automating certain tasks and giving students the same learning experience in the lab: ‘We can simulate a failed experiment and students can use the chatbot to try to solve the problem in a safe way’, he explains. They can also reflect on the experience with the chatbot.

It all starts with the professor knowing what the systems can do

He believes that the problem lies more with the lecturers themselves. ‘I think it all starts with the professor knowing what the systems can do. A lot of them have misconceptions about that. Many of my colleagues don’t really know what ChatGPT is. They think it’s the next Google.’ 

To him, it is crucial that lecturers learn about generative AI in order to have a more honest relationship with their students. ‘There is no escaping ChatGPT. So I always explain to students what my view is and how I allow them to use and benefit from it.’ Having an open discussion like this allows students to use the tool more responsibly and be more critical of it, he says.

Disclosure

The university’s guideline for AI chatbots also says that lecturers need to take the lead and decide for themselves what the best approach is for students using these tools. And that means everyone has their own tactic.

Rojas, for example, found it was best to have students disclose their use of AI. ‘It removes the stigma and makes students feel like it isn’t bad or taboo. If they see their professor using it responsibly, they are more likely to do the same.’ But the method is not infallible. ‘I can’t make it mandatory, so students choose if they want to disclose it or not.’ 

Miklós Kiss, associate professor of audiovisual arts, only recently asked his students whether they were using ChatGPT. They are still scared to discuss their use, he says. However, he believes that encouraging students to experiment more with the tool could lead to more analytical reflections: ‘I think we should talk more with our students, but also let them figure out for themselves in which ways ChatGPT potentially threatens their creativity and in which ways it helps them.’

Assessment methods

Other lecturers have made changes to their assessment methods to prevent students from using AI. Professional writing lecturer Jonathan Groubert, for example, has transformed most of his homework assignments into in-class exercises. ‘Because they’re doing the work in the classroom, I also have a basis for comparison. I can see whether or not they can actually write like that’, he explains.

ChatGPT will not teach you whether a text is good

His students shouldn’t be using ChatGPT for anything more than brainstorming, he says. ‘It’s a fundamental course. And they are not just here to learn to write, but also to judge whether a text is good or not. ChatGPT will not teach you that.’ 

‘We need to rely less on just writing things and more on knowing things’, echoes Cardenas. Adds Valdenegro: ‘Maybe we’ll start doing more oral exams.’

Damage

Because there is one thing they all agree on: the problem isn’t so much that students get undeserved high grades, but that they are selling themselves short and thereby hurting their future. ‘The reputational damage that you can do to your own work is unbelievable if these tools are not used responsibly’, says Rojas.

And so, they also say, the UG needs a mandatory course for students on how to use AI chatbots, to prevent misuse – the course that is currently available is optional. But lecturers should also be required to learn about how these tools work, stresses Kai Yu Ma. ‘They need to be more open with students about it, but they also need to be capable of talking about it.’
