Just when you thought you had figured out your future, ChatGPT, which currently has more than one million users, appears to prove you wrong.
My social media newsfeed is drowning in posts extolling its brilliance, while my colleagues and students cannot stop talking about it. I admit that I did log in and play with it for a few days, and I was more than impressed with what it could do: it wrote two reference letters for me and came up with a brilliant syllabus and bibliography for my new course, all in less than a minute.
At times, however, it sounded, well, a little “too drunk and obvious”. When I asked why scientists don’t trust AI, the response I received was that my question was a play on words and that some scientists may trust it while others may not.
ChatGPT owes its wealth of knowledge to the fact that it was trained on many kinds of text from the internet, including academic articles. It writes in a human-like way, and it does so remarkably well on almost any topic under the sun. This is precisely why it raises a series of existential questions for higher education: whether students still need to learn to write, whether it makes any sense to evaluate any kind of writing in the future, and whether a great number of people working in higher education are about to lose their jobs.
There may be many reasons to be excited about AI, but there is definitely one not to be: it lacks emotional intelligence, empathy, morality, compassion, and integrity, all of which are (or should be) values central to any higher-education institution.
AI technologies have been widely criticized for content-related flaws connected to ethics, biases, discrimination, violence, and sexual abuse. But ChatGPT’s developers claim to have done better than others. How? By employing workers in Kenya to read and label text that describes, in graphic detail, situations such as murder, rape, child abuse, torture, self-harm, and hate speech.
Furthermore, the company claims to have built “safe and useful AI systems that limit bias and harmful content”. The irony is that these “safe and useful systems” were created in the most unethical and exploitative manner imaginable. As reported in an article in Time a few days ago, the 50,000 workers whom the AI industry claims to have helped out of poverty were paid less than US$2 per hour and suffered mental breakdowns while performing these disturbing tasks.
To pretend that AI will not change how we live our lives, or will not revolutionize higher education, is delusional. But the question that emerges right now is not how to utilize ChatGPT in higher education. The real question, rather, is this: how do we resist supporting neoliberal, multi-millionaire artificial intelligence companies that exploit workers in a world starving for empathy and emotional intelligence?
LUCY AVRAAMIDOU