AI taking over people’s work

A deal with the devil?

A police officer replaced by a chatbot, or a computer acting as a psychiatrist: the UG is doing a lot of research on how to use AI for jobs that can seemingly only be done by real people. But is that a good idea? ‘It’s better to make sure we’ve thought everything through beforehand.’
By Rob van der Wal

December 3, 2024 at 17:03.
Last modified on December 4, 2024 at 15:07.

Rob started out as a student editor at UKrant and has been back as a staff writer since May 2023. He writes news stories and background articles, with a preference for science, and covers internationalisation issues. Rob also works as a freelance science journalist. In his spare time, he is a drummer, radio producer, and vegetable gardener.

You’re crossing the road when a van comes speeding in from your right. In front of you, another car hits the brakes, but it’s too late. There is a loud crash as the two vehicles collide. You immediately call the police.

‘Are you in a safe place?’ you hear a woman’s voice say through the phone. ‘What is the date, location, and time of the accident?’ Her intonation is a little off, the voice a little too empathetic. You realise your witness statement is being taken by an AI system.

This could well be a real situation in a few years, says Laura Peters, associate professor of criminal law and criminology. She’s researching whether AI assistants could replace police officers in cases like a collision or shoplifting. The idea is that the AI system asks questions and immediately turns the answers into written text. That saves time and manpower, which comes in handy in times of staff shortages.

Other UG departments are also doing research on what AI can be used for: to help corporations find their perfect collaboration partner, diagnose psychoses, or even replace psychiatrists altogether. 

In other words, jobs that are normally done by real people. Can those jobs simply be handed over to a computer? More importantly, is it wise to do so? The use of AI comes with a host of ethical dilemmas, and the risks aren’t always clear. Wouldn’t it be like selling our soul to the devil? And how do researchers plan to anticipate those risks?

Correct advice

These types of questions lie at the basis of psychology professor Wim Veling’s research. He’s trying to find out if it’s possible to replace psychiatrists with AI. ‘We deliberately set our goals to be as extreme as possible, because it does give rise to ethical questions. There’s always resistance at first: should we really be doing this?’

We deliberately set our goals to be as extreme as possible

~ Wim Veling

No one knows, for example, if AI therapists will be able to dispense the correct advice to patients, he explains. ‘In cases with suicidal patients, you want the system to be secure, and you’d also want a real person to be in the loop somewhere.’ 

But AI systems might be an improvement over real people in other areas, he thinks. ‘People have a tendency to veer off course in extremely structured treatment.’ AI wouldn’t have that issue. ‘If you also connect physical responses such as pupil reactions and heart rate to the digital system, it could even be better at picking up signals than real people.’ 

Loss of autonomy

Thijs Broekhuizen, associate professor of marketing and innovation, is also aware of the potential consequences of his research. He studies the role AI plays in corporate innovation. ‘They don’t come up with all their ideas by themselves, they collaborate with knowledge institutes, research agencies, and other corporations. This type of open innovation requires a lot of data comparison’, he explains.

However, not enough corporations actually compare data. ‘They’re worried they might lose ownership of their data’, Broekhuizen says. ‘But smart data technology allows them to maintain sovereignty, because no one else is looking directly at their data.’

That sounds great, but there are risks. ‘The data monitoring could skew so negative that it starts affecting people. When you start analysing smartwatch data or keystrokes, you run the risk of feeling like you’re losing your autonomy. The worst-case scenario here is creating a control state like China.’

Calculated decision

Another risk is removing the human component entirely and letting the AI make all the decisions, says post-doctoral researcher Sanne Schuite-Koops, part of psychiatry professor Iris Sommer’s research group. ‘Science fiction books are rife with examples where AI makes a calculated decision that people would never make.’

We have to properly explain what we’re doing

~ Sanne Schuite-Koops

Schuite-Koops is working on a language model that can monitor people who are at risk of a psychotic relapse. ‘People experiencing psychosis speak notably slower and in shorter sentences’, she says. ‘A language model is better at systematically and objectively recognising that than a psychiatrist.’
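
The two markers she mentions, speaking rate and sentence length, are straightforward to compute from a timed transcript. As a minimal sketch, assuming nothing about the UG tool itself, the Python snippet below shows how those two features could be derived; the function name and example values are ours and purely illustrative.

    # Hypothetical sketch: the two speech markers mentioned above,
    # speaking rate and sentence length, computed from a timed transcript.
    # Not taken from the UG tool.

    def speech_features(transcript: str, duration_seconds: float) -> dict:
        """Return words per minute and mean sentence length."""
        # Crude sentence split on terminal punctuation; fine for a sketch.
        normalised = transcript.replace("?", ".").replace("!", ".")
        sentences = [s.strip() for s in normalised.split(".") if s.strip()]
        words = transcript.split()
        return {
            "words_per_minute": len(words) / (duration_seconds / 60),
            "mean_sentence_length": len(words) / max(len(sentences), 1),
        }

    # Example: 9 words spoken in 12 seconds, in three short sentences.
    print(speech_features("I went out. It was cold. I came home.", 12.0))
    # {'words_per_minute': 45.0, 'mean_sentence_length': 3.0}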

AI can also be used to carry out intermediate checks between psychiatrist visits, meaning a potential relapse could be prevented. If someone doesn’t display signs of a potential relapse for an extended period of time, they might even be able to skip check-ups. ‘That would mean psychiatrists could use their time more efficiently.’ 

She realises some patients might be wary of AI treatment. ‘We have to make sure to properly explain what we’re doing. Show them what happens to the data we’re saving. And that only their doctor is notified when the AI system signals that they’re at risk of a relapse.’  

Strict rules

A European AI law that went into effect in August 2024 will have to ensure that AI systems meet these kinds of security demands. The law divides the use of AI into various risk categories. Using a chatbot poses a low risk, for example, while AI systems that monitor people and give them a ‘social score’ come with an unacceptably high risk.

But, says Oskar Gstrein, AI ethics expert at the UG: ‘The law isn’t so much about the use of AI in research, it’s more about the commodification of AI.’ That means that during the research phase, the researchers have more control over things. 

There are also differences from field to field. In the medical sciences, which is where Veling’s and Schuite-Koops’ studies are situated, the use of AI is subject to strict rules. That’s because of the high risks that come with using AI on patients, says Gstrein. ‘It’s also normal in this field for systems to undergo lengthy testing.’

But when it comes to the use of AI in legal and fiscal activities, researchers have more freedom. That also means they have to take more responsibility, says Peters. ‘Academics have to help ensure timely laws and regulations concerning AI. I personally prefer contributing to good laws instead of complaining afterwards.’

Start small

To avoid the risks, it’s key to start small, she says. ‘Whatever you do with AI, it’s automatically revolutionary. So, ethically speaking, it’s best to start with common petty crime rather than a murder case.’

She doesn’t think anything as drastic as replacing judges with AI is likely to happen any time soon. ‘That would make the justice system too black and white; AI can’t make decisions based on emotions. A judge in Rotterdam can make entirely different decisions than one in Leeuwarden, and we still need that.’

It’s possible our research concludes that we shouldn’t do this

~ Laura Peters

Veling also expects the implementation of AI to be a gradual process. ‘We’ll first be looking at things that can be treated according to set protocols, like moderate anxiety and mild depression.’

There are several issues when it comes to the use of AI. One well-known problem is the implicit bias caused by the training data. And don’t forget the privacy issue. ‘Who owns the data generated by the conversations AI systems have with patients?’ Veling wonders. Schuite-Koops says: ‘I can’t make the audio files that I use for my research anonymous. So if it’s my second cousin, for instance, I will immediately recognise him.’

The researchers don’t want to get ahead of themselves by embracing AI. ‘The question isn’t whether we’ll be marketing our respective tools, but if it’s feasible at all. It’s entirely possible that our research will conclude that we shouldn’t’, says Peters.

Collaboration

Gstrein therefore has just one piece of advice: ‘Get professionals from as many different fields as possible to work together, from lawyers to behavioural experts and engineers. It’s the best way to rule out potential bias in the final product.’

Veling is taking this advice to heart. His first action is to look for psychologists and other mental health experts to set up a focus group. They’ll be working with a wide range of engineers, policymakers, and ethics experts. ‘Based on what we’ve got so far, we’re able to build a prototype. That might help us understand the ethical quandaries better.’

Legal expert Peters is also collaborating with behavioural scientists and psychologists. ‘We’d like to know in which circumstances people are willing to talk to a bot.’

Schuite-Koops’ psychosis tool is also being tested for compliance with relevant laws and regulations by an external legal expert, as well as an ethical expert. ‘They’re checking whether we’re using the right procedures to test this, and whether we’re treating our participants right. Our last ethical evaluation was really good.’

Back-up

In any case, real people will always serve as back-up, at least for the time being. ‘AI systems shouldn’t operate independently’, says Schuite-Koops. ‘They still make too many mistakes, for instance when they simply forget to include relevant information.’

Thanks to these checks and balances, the researchers agree that research on artificial intelligence is necessary. If only because, says Veling, we’ll have AI psychiatrists in a few years regardless. ‘And we’d better make sure that we’ve thought everything through beforehand so nothing goes wrong.’

Peters agrees that the implementation of AI seems unstoppable. ‘It’s better to be open to what’s possible before a corporation markets it. Closing our eyes to these developments is probably a lot more dangerous.’
