How man and algorithm influence each other
Wrong questions generate wrong answers
We meet with our co-workers online, visit grandma over FaceTime, and when we’re bored, Netflix recommends something to watch based on our viewing history. We don’t realise it, but without algorithms, our lives in lockdown would have been very different.
What about dating apps that suggest potential matches, or exercise apps that tell us when to move in order to stay healthy? Algorithms are everywhere. But a lot of people are wary of them, for instance because it was the use of predictive algorithms that led to discrimination in the Dutch child allowance scandal.
But associate professor of artificial intelligence Davide Grossi says that algorithms aren’t all that mysterious. ‘It’s nothing more than a recipe, a set of instructions we write to solve a problem. When you add two numbers together, you’re also using an algorithm; you’re just using pen and paper.’
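Grossi’s pen-and-paper analogy can be made concrete: the addition we learn at school is itself a step-by-step recipe. A minimal sketch of that recipe, written out as code (the function name and string-based representation are just for illustration):

```python
def add_by_hand(a: str, b: str) -> str:
    """Add two non-negative integers digit by digit, as on paper."""
    a, b = a.zfill(len(b)), b.zfill(len(a))    # pad to equal length
    carry, digits = 0, []
    for x, y in zip(reversed(a), reversed(b)):  # rightmost column first
        total = int(x) + int(y) + carry
        digits.append(str(total % 10))          # write down one digit
        carry = total // 10                     # carry the rest
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_by_hand("478", "95"))  # → 573
```

Each loop iteration is one column of the paper calculation: add the digits, write one down, carry the rest. Whether a person or a machine follows the instructions, the recipe is the same.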
Diagnosing cancer
The algorithm recipe is extremely versatile, leading to great discoveries in scientific research. In collaboration with colleagues from medical sciences, professor of informatics Michael Biehl developed a self-learning algorithm that is much better at diagnosing adrenal gland cancer than traditional scans are.
When you add two numbers together, you’re also using an algorithm
‘There are two algorithms that play an important role in this’, he explains. ‘There’s a training algorithm that analyses sample data, enabling the diagnosing algorithm to work optimally. The trained system can detect the disease much more efficiently. A person would take much longer to analyse the data of thousands of patients.’
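The split Biehl describes can be sketched with a toy prototype-based classifier: a training algorithm that computes an average profile per class from sample data, and a diagnosing algorithm that labels a new case by its nearest profile. The actual model and the features used in the adrenal gland study are not given here; the data below is invented purely for illustration.

```python
def train(samples):
    """samples: list of (feature_vector, label). Returns one prototype per class."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v                      # accumulate feature sums per class
        counts[label] = counts.get(label, 0) + 1
    # prototype = average feature vector of each class
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def diagnose(prototypes, features):
    """Label a new case by its closest prototype (squared Euclidean distance)."""
    def dist(p):
        return sum((a - b) ** 2 for a, b in zip(p, features))
    return min(prototypes, key=lambda label: dist(prototypes[label]))

# Invented example data: two features per patient, two diagnostic classes.
data = [([1.0, 0.2], "benign"), ([1.2, 0.1], "benign"),
        ([3.1, 2.0], "malignant"), ([2.9, 2.2], "malignant")]
model = train(data)                      # the "training algorithm"
print(diagnose(model, [3.0, 1.9]))       # the "diagnosing algorithm" → malignant
```

The training step runs once over all the sample data; the diagnosing step is then cheap to apply to each new patient, which is where the efficiency Biehl mentions comes from.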
Algorithms can also help fight crime, says Oscar Gstrein, an assistant professor at Campus Fryslân specialising in governance and emerging technologies and a member of the Data Research Centre. Take predictive policing, for instance: ‘Algorithms search through big data to predict where repeat crimes, like car theft and break-ins, are most likely to happen. It’s easier to analyse large batches of information, which allows us to make better decisions.’
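At its very simplest, the kind of analysis Gstrein describes amounts to counting past incidents per area and ranking the areas. Real predictive-policing systems combine far richer data; the incident log below is made up, and the sketch only shows the basic counting idea:

```python
from collections import Counter

# Hypothetical incident log: (area, crime_type) pairs, invented for illustration.
incidents = [
    ("north", "car theft"), ("north", "break-in"), ("centre", "break-in"),
    ("north", "car theft"), ("south", "car theft"), ("centre", "break-in"),
]

def hotspots(log, top=2):
    """Rank areas by how many past incidents were recorded there."""
    return [area for area, _ in Counter(area for area, _ in log).most_common(top)]

print(hotspots(incidents))  # → ['north', 'centre']
```

Even this trivial version makes the later point in the article concrete: the prediction is only as good as the data that was collected and the criteria that were chosen, both of which are human decisions.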
More objective
Will algorithms also be making medical decisions in the future? Will people be convicted of criminal behaviour based on a formula? After the child allowance scandal, it’s a sensitive issue.
‘If I may play devil’s advocate: when a judge or a doctor makes a decision, you don’t always know how they reached that conclusion’, says Gstrein. ‘Some people say that algorithmic decision making is more objective. A computer just does what it does, and when this is based on a greater social discussion about legitimate goals and expectations it could lead to better decision making.’
The mistakes are made in the criteria
When it does go wrong, the three experts say, it’s usually because neither designers nor users understand how algorithms work. ‘People forget it’s a socio-technical system; an interaction between man and machine’, says Grossi. The more complex the subject, the more complex the algorithm. ‘This is especially true for subjects involving the government.’
In a case like the child allowance scandal, Biehl says the people responsible are hiding behind the technology. ‘When I hear politicians on the news say that we should curtail the use of algorithms, that just shows that they don’t understand it. It’s not an algorithm’s fault when things go wrong. The mistakes are made in the data that’s being collected and the criteria that have been input in the system. Those are human errors.’
Not a washing machine
‘The problem’, says Grossi, ‘is that people think algorithms are these amazing and efficient things that can help them save tonnes of money.’ But while automation can save money in the long term, provided it’s implemented correctly, it actually costs money in the short term. People tend to underestimate this, Gstrein has noticed. ‘I regularly share my research findings with governments. Often, people are too eager to get started. They don’t properly figure out their objectives, or who the automation will impact.’
‘Automation’, says Gstrein, ‘isn’t as simple as turning on a washing machine and taking out the clean laundry when it’s done. But that’s not something people like to hear at the start of a process.’
People are too eager to get started
Biehl says transparency is crucial if we want people to trust algorithms. ‘We need to know exactly what algorithms are meant to calculate; what are the criteria being used to make a decision?’ This is especially important in the medical field, he says. ‘Algorithms are being used to assist doctors. If a patient or a doctor doesn’t understand the algorithmic advice, no one will trust it.’
Climate change
Algorithms certainly aren’t perfect, but they can do fantastic stuff, emphasises Grossi. ‘As long as there are people involved who are familiar with the situation.’ He means people who have the courage to talk about the advantages and the limitations of the systems, people who know the interests that are at stake. People who challenge assumptions and prejudice, who are transparent about how decisions are made. ‘It’s not always easy, but when you have people like that, algorithms have so much to offer society.’
The three scientists agree that algorithms will play an important role in big issues such as climate change, human rights, and curing diseases. ‘The future is full of algorithms’, says Grossi. ‘But it is a good thing that society is keeping a critical eye on things.’
That also means it’s important that people’s digital literacy is improved, says Gstrein. Grossi agrees. ‘It promotes transparency, safety, and in the end, the opportunities that algorithms afford us.’