It has become impossible to imagine academic life without ChatGPT. The UG has therefore drawn up ten basic rules to regulate the use of generative AI.
The rules were drawn up in response to the advance of so-called generative AI (genAI). This is artificial intelligence such as ChatGPT, DALL-E, and Gemini (formerly Google Bard), which does not merely modify existing input (like a spell checker in Word) but can ‘invent’ entirely new output.
At its core, the UG wants students to be trained to use AI tools ‘competently and responsibly’ and ‘in line with academic practices, attitudes and core principles’, the guidelines state.
Fraud
AI tools may only be used as ‘tools for general functionalities’, and students must always disclose their use. By this, the university means brainstorming, finding inspiration, and summarising general information, among other things. Lecturers may also impose additional requirements on the use of genAI within their own courses.
It becomes fraud if a student’s work is no longer recognisable as their own, meaning the student’s knowledge and grasp of the material can no longer be tested. It is also fraud if a student fails to mention that generative AI was used for (part of) an assignment and the output is copied verbatim and handed in as their own work.
AI-free zones
If a lecturer suspects that a student did not create the work themselves, an oral exam may be conducted to check this. For theses and final papers, a standard interim test will verify that the student has mastered the material and produced the work themselves.
Despite embracing genAI, the UG will also create so-called AI-free zones: areas where genAI websites are locally inaccessible and therefore cannot be used. Such zones are already in place in the rooms at the Aletta Jacobs hall.