Science

Lucy Avraamidou combats AI bias

‘We could be heading for a tech dystopia’

Lucy Avraamidou Photo by Reyer Boxem, image editing by René Lapoutre
Artificial intelligence is supposed to make our lives better. But the algorithms tend to favour groups already in power, adding to discrimination and bias. Lucy Avraamidou is doing what she can to change that.
By Christien Boomsma

June 29, 2022 at 9:30 AM. Last modified on June 29, 2022 at 10:32 AM.

First, says Lucy Avraamidou, she felt ‘erased’ from her own social community. And what was more: ‘It was a machine that did it.’

Then, she says, she felt angry. ‘Because behind that machine are people. White upper-class men are usually behind these algorithms. So yes, that made me very mad.’

Avraamidou is a professor of science education. She has been focusing on topics such as diversity, the representation of women in science, and the issues of intersectionality: being part of different minorities at once. But recently, something new has been added to the mix. She’s researching the ways in which artificial intelligence enhances existing biases and how to tackle that problem. This spring, she was awarded a grant of 330,000 euros for her part in the international MAMMOth project on bias in AI.

It’s one of the big challenges we are facing today, she says. ‘In many ways, AI can make our lives easier. But it is also quite possible we are heading for a tech dystopia, and everything blows up.’

She’d like to make sure that doesn’t happen.

Minorities

Back to that moment she felt ‘erased’. It happened at the Groninger Forum’s opening exhibition on AI and its influence on everyday life. ‘We passed this camera that scanned me with facial recognition software. But the machine didn’t recognise me.’

First, it classified her as a man. Then the machine provided her with a ‘DNA analysis’, explaining that she had a high percentage of African DNA.

White upper-class men are usually behind these algorithms

Avraamidou, who is originally from Cyprus, had been home for the summer. She had come back tanned and had her hair pulled back, giving her a somewhat ‘African’ look. ‘My big nose probably didn’t help either’, she jokes. The problem, however, was that her blond, light-skinned friends passed the camera without any hiccups. But she, clearly, didn’t ‘fit’.

That is something minorities deal with every day. Facial recognition software often fails to recognise women of colour correctly, classifying them as men. It also makes more mistakes with black faces in general. So when police use it – and they do – people of colour face a higher chance of being pulled over or wrongly accused of a crime.

When you’re a woman of colour, you are doubly at risk. Add another trait that usually doesn’t work in your favour – say being disabled or part of the LGBTQI+ community – and the problem can triple or quadruple. Many minorities experience these things so often, they don’t even realise where they come from anymore, Avraamidou says. ‘Because that is just the way it is.’

‘If I need an expert in a specific field and I search on Google, I will most likely find a white man. He is the one who will receive invitations to collaborate or speak at a conference.’ Photo by Reyer Boxem

Unequal

It’s not that problems with racism and bias are new. But at the moment, AI is making existing problems worse. Algorithms, after all, need data to be able to do their job. And the data we feed them is taken from the real world. ‘AI reproduces the world as it is, not as it should be, which is more diverse and more equal. Because I would say the world is unequal right now. We’re witnessing inequalities at many different levels of life.’

The examples are everywhere these days. Take Amazon, which secretly used algorithms in its hiring procedures. As a result, women were discriminated against, and women of colour doubly so. ‘The input Amazon provided was based on ten years of hiring decisions at the company.’ And those hires were – of course – mostly white men, so the algorithm believed being white and male was the key to success.
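The mechanism is simple enough to reproduce. Below is a minimal sketch – synthetic data and made-up numbers, not Amazon’s actual system – of how a model trained on skewed historical hiring records ends up scoring two equally skilled candidates differently:

```python
# A toy illustration (not Amazon's system): a classifier trained on
# historical hiring data in which one group was favoured will learn
# to favour that group, even when skill is identical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

skill = rng.normal(size=n)                  # skill is identical across groups
group = rng.integers(0, 2, size=n)          # 0 = underrepresented, 1 = majority
# Past decisions favoured group 1, so the label correlates with group:
hired = (skill + 1.5 * group + rng.normal(scale=0.5, size=n)) > 1.0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two candidates with identical skill, differing only in group membership:
candidates = np.array([[1.0, 0.0], [1.0, 1.0]])
print(model.predict_proba(candidates)[:, 1])  # the majority candidate scores far higher
```

The model is never told to discriminate; it simply learns that group membership predicted past hiring decisions.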

AI reproduces the world as it is, not as it should be

In the Netherlands, there’s the tax scandal: the tax authorities used algorithms that secretly flagged citizens with a different ethnic background as potential fraudsters. Thousands of people were wrongly accused and are still dealing with the damaging consequences today.

Another example from the Netherlands that Avraamidou recently stumbled upon: during Covid times, a camera algorithm was used to detect social distancing in public areas. ‘But there is evidence that the common base model didn’t work well for all skin colours, ages and sizes. So it benefited thinner people over larger people, or people with certain clothing styles.’ In other words, the average middle- and upper-class Dutch person.

Critical approach

Many people still believe that computers and algorithms will give you a clear and unbiased truth, Avraamidou says. ‘But they clearly do not. We need to make people aware of that on every level, so they take a more critical approach towards the use of AI.’   

That is what she’ll be doing over the next three years. Her new project will focus on three cases where AI affirms existing inequality. First, there’s the finance sector, which often uses algorithms to decide whether or not you will get a mortgage. Asylum seekers and other migrants are at a disadvantage, because they don’t have the required financial ‘history’. Secondly, there’s the immigration sector. Thirdly, she’ll focus on academia.

Researchers have become increasingly critical towards AI

Until now, she says, we have focused on women scientists and tried to adjust the hiring procedures to tackle the problem, with limited success. But the problem goes much deeper.  ‘If I need an expert in a specific field and I search on Google, I will most likely find a white man. I will cite him instead of looking for people from underrepresented groups.’

That is where it becomes discriminatory, Avraamidou explains: that person’s citations will just continue to increase because he is the first result. ‘He is the one who will receive invitations to collaborate or speak at a conference, for example. It nurtures inequality, starting from the input. And the input is not right.’
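The feedback loop she describes is easy to simulate. A toy sketch – the starting counts and the 90 percent figure below are assumptions for illustration, not data from the project:

```python
# Hypothetical rich-get-richer loop: if search ranks by citation count
# and most people cite the first result, a small head start keeps growing.
import random

random.seed(42)
citations = {"researcher_a": 100, "researcher_b": 90}  # a 10-citation head start

for _ in range(1_000):
    ranked_first = max(citations, key=citations.get)
    ranked_second = min(citations, key=citations.get)
    # Assume 90 percent of searchers cite whoever appears first:
    chosen = ranked_first if random.random() < 0.9 else ranked_second
    citations[chosen] += 1

print(citations)  # the initial gap of 10 has grown into several hundred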

Bottom-up approach

An issue like this isn’t solved easily. People have to be aware of it and willing to change things – for example by changing the database you start out with. What if Amazon had used an algorithm that focused on what staff the company wanted, instead of what it had? 

Avraamidou has opted for a bottom-up, intersectional approach. Over the next three years, she will try to bring the message to future developers – by integrating that awareness into university courses – and to the government bodies, companies, and police forces that use AI without realising how biased it can be, but also to schools and young children, through exhibitions.

She has to, she says, because things can go very wrong if we don’t take action. At the same time, she is optimistic as well. ‘I see that resistance is growing and that researchers have become increasingly critical towards AI.’ 

Avraamidou knows she’s not alone. ‘We are planning big campaigns around Europe, targeting ten thousand organisations. So we’re talking about big numbers.’

And, more importantly perhaps: she believes most people want to do the right thing. ‘I start from the premise that there is an intention to tackle these issues. Because that benefits everyone.’
