Every day, the editorial staff at the UKrant wonders: What are we writing about, why are we writing about it, and how are we writing about it? ‘At UKrant’, an irregular column, we take a look behind the scenes.
UKrant regularly publishes articles on the use of AI tools like ChatGPT at the university. There’s a lot of confusion surrounding this: what is allowed and what isn’t? So it’s not strange to wonder what we do; does UKrant use AI?
For context: the discussion is more relevant than ever in the media. Reliable journalism is based on transparency and verifiability, two concepts that are somewhat at odds with AI. But AI is on the rise, and it can in fact be a useful tool. So anyone ignoring AI is sort of like Don Quixote fighting windmills.
Some examples:
Newspaper de Gelderlander uses AI to write weather forecasts that are as local as possible: it might be raining in Bronkhorst at 8 a.m. tomorrow while it’s sunny in Arnhem, 30 kilometres away. Omroep Brabant experimented with a digital presenter (an AI clone of their ‘real’ presenter – the experiment ended after six months for various reasons).
De Volkskrant, however, claims all its ‘content is made by human editors, reporters, copy editors, photographers, and illustrators’.
Back to the question at hand: does UKrant use AI? The answer to that is ‘yes, but’.
For one, we write all our articles ourselves. Sure, we occasionally use ChatGPT as a tool (a sort of Google 2.0) and I don’t think there’s anything wrong with asking ChatGPT to help come up with five hard-hitting questions for education minister Eppo Bruins. But that’s all it is: a tool. We write our own stories. Exclamation mark.
Translations are a slightly different matter. At least nine out of ten of the (news) articles that UKrant publishes are also translated (either from English into Dutch or vice versa, depending on where the original author is from). While we used to send every single article to our human translator, we’ve recently changed that to only the longer background stories, op-eds, or other articles that might be a little more difficult to translate.
The other articles, especially short news stories, are translated by AI tools (mostly DeepL and ChatGPT), which have turned out to be reliable and fast translators. And, I must admit, quite a bit cheaper. Nevertheless, there’s a ‘but’: sometimes, AI makes strange choices – almost like it’s a real person. That is why we employ the principle of ‘human-machine-human’. That means the input was created by a person (the article), the output by AI (the translation), and it won’t be published until the copy editor or ‘real’ translator has checked it.
A less visible AI tool we use is transcription. A journalist’s biggest source is other people, so we do a lot of interviews. Especially for longer interviews, automatically transcribing recordings is a godsend (transcribing by hand takes forever, while AI does it in fifteen minutes). But once again, since these tools are anything but infallible, we apply the human-machine-human principle.
A final AI tool UKrant occasionally uses, one that is, in fact, visible, is an illustration tool. We do practise some restraint here, since we’re journalists: we write about what’s going on, we establish the truth, and so our images must be truthful, as well.
We might edit a photo sometimes because it won’t fit on the website otherwise, by adding or removing a piece of background or something innocuous like that. But asking AI to create a drawing (or photo or video – AI can do it all) of the great pyramid of Giza surrounded by penguins on Antarctica might be very funny, but it’s got nothing to do with reality. So we don’t do that.
So what do we use AI for when it comes to images? Some time ago, UKrant published a series on UG students from countries plagued by poverty, corruption, famine, or human rights violations.
One of these articles centred on Vera, who was originally from Putin’s Russia. This was difficult to illustrate, especially since we needed to protect the interviewee. Our designer used an AI tool to create the picture above.
We were quite pleased with the result. And as far as I’m concerned, it’s well within the limits of what journalism can or should be allowed to do. Do you agree? Disagree? Head on over to the comments to let us know.
Rob Siebelink, editor-in-chief UKrant