ChatGPT is amazing and everything that’s wrong with the world

Just when you thought you’d figured out your future, ChatGPT – currently with more than one million users – appears to prove you wrong.

My social media newsfeed is drowning in news extolling its brilliance, while my colleagues and students cannot stop talking about it. I admit that I did log in and play with it for a few days. I was more than impressed with what it could do: it wrote two reference letters for me and came up with a brilliant syllabus and bibliography for my new course; all this in less than a minute. 

At times, however, it sounded, well, a little “too drunk and obvious”. When I asked why scientists don’t trust AI, the response I received was that my question was a play on words and that some scientists may trust it while others may not.

ChatGPT owes its wealth of knowledge to the fact that it was trained on many kinds of texts from the internet, including academic articles. It writes in a human-like way, and it does so really well about almost anything under the sun. This is precisely why it raises a series of existential questions for higher education: whether students still need to learn to write, whether it will make any sense to evaluate any kind of writing in the future, and whether a great number of people in higher education are about to lose their jobs.


There may be many reasons to be excited about AI, but there’s definitely one not to be: it lacks emotional intelligence, empathy, morality, compassion, and integrity, all of which are (or, if not, then they should be) central values to any higher-education institution.

AI technologies have been widely criticized for content-related flaws connected to ethics, biases, discrimination, violence, and sexual abuse. But ChatGPT’s developers claim to have done better than others. How? By employing workers in Kenya to read and label text that describes in graphic detail situations like murder, rape, child abuse, torture, self-harm, and hate speech.

Furthermore, the company claims to have built “safe and useful AI systems that limit bias and harmful content”. The oxymoron in this is that these “safe and useful systems” have been created in the most unethical and exploitative manner imaginable. As reported in an article in Time a few days ago, the 50,000 workers whom the AI industry claims to have helped out of poverty were paid less than two USD per hour and suffered mental breakdowns while performing their disturbing tasks.

To pretend that open AI will not change how we live our lives, or will not revolutionize higher education is delusional. But the question that emerges right now is not how to utilize ChatGPT for higher education. The real question, rather, is this: How do we resist supporting neoliberal, multi-millionaire artificial intelligence companies that exploit workers in a world that is starving for empathy and emotional intelligence?




  1. AI is not meant to replace humans or human intelligence, but rather to augment it. This column poses many interesting questions regarding the ethics of AI that have been under investigation for years now. However, saying that supporting initiatives like OpenAI removes emotional intelligence from the equation in higher education is a stretch, as that is simply not how it should be used. We should find solutions for the ethical issues that arise, but simply resisting or withdrawing support for these initiatives out of fear for the future is not one of them. AI will become a part of our future, whether we want it or not. So let’s start thinking in terms of solutions.

      • To elaborate on my statement, here is a link that explains what I mean by the way AI is meant to augment human intelligence: . Yes, AI will provide incredible value for us and our economy. Yes, big corporations will profit from this, as they’re the ones with enough funds to invest in its growth. No, we should not let that keep us from supporting initiatives like OpenAI.

        What I like about the OpenAI initiative is that – contrary to those “neoliberal, multi-millionaire” greedy capitalist firms the author is referring to – it’s mostly open and transparent to the public: they’ve been sharing the incredible progress they’ve been making in the field of AI for a long time now. They’re as open as reasonably possible for a company making AI language models. The biggest examples of this are ChatGPT and Dall-E; they didn’t have to open those to the public. Greed and money-making would mean keeping the technologies for themselves and capitalising on them. That is not OpenAI’s mission, however.

        Of course this does not stop us from asking the necessary ethical questions regarding AI, but I don’t see how resisting support of a company like this accomplishes anything. AI is here to stay, whether we like it or not, so let’s engage in the discussion about what AI can mean for us and how we can implement it in a sustainable way, instead of merely resisting its development in the name of anti-capitalism.

        • “Whether we like it or not” does not seem a convincing argument, and I would say it rather reflects the tendencies that this article criticises. I guess the workers in Kenya work for 2 USD under bad conditions “whether they like it or not” as well, in order to make a living. Development is multifaceted and value-driven, so there shouldn’t be only one possible direction.

          According to my interpretation, the article does not call for resisting the technological evolution of AI; rather, it draws attention to the purposes and methods we use to achieve progress in AI. AI companies should also be evaluated according to criteria of this kind. Being open, as you mentioned, is of course another one.

        • Hi Sander,
          While I agree with your final paragraph that AI is here to stay and we should learn to embrace it, especially in relation to education, I also agree with Lucy and other commenters that we should scrutinize every step taken by the companies developing it. Because while OpenAI indeed started as a seemingly innocent and transparent non-profit that put openness at the forefront, it was still founded by billionaires with dubious intentions (Musk, Thiel, and the like). Moreover, in 2019 it moved from non-profit status to a “capped-profit” company, coinciding with a 1 billion dollar investment from Microsoft. The “profit cap” is set at a whopping 100-fold return on investment, though, which means that only after yielding Microsoft a 1,300 billion dollar return – they just announced another round of investment of 10 billion, after a previous round of 2 billion – would the profit flow back into the non-profit section of the company. I have been quite skeptical of these mission statements since then, and so should others be.

          There is a very interesting article from about three years ago about OpenAI by Karen Hao in MIT Technology Review – which I won’t link to here because it needs approval – that reaches, among other things, the following conclusion: “There is a misalignment between what the company publicly espouses and how it operates behind closed doors. Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration. Many who work or worked for the company insisted on anonymity because they were not authorized to speak or feared retaliation. Their accounts suggest that OpenAI, for all its noble aspirations, is obsessed with maintaining secrecy, protecting its image, and retaining the loyalty of its employees.” It’s three years old, so I would be very curious what has happened in the meantime.

          As for the argument that they did not have to release it to the public and could have capitalized on it behind closed doors: yes, that is indeed in their rather abstract ‘mission’. However, the current ChatGPT version is very much a free research preview – a beta version to test its interaction with real humans. Our input is used to optimize their product, and that it is free to use only reminds me of the adage ‘when the product is free, you are the product’. Meanwhile, this week, they already announced their pro subscription service: for $42 (per month, that is!) you get faster responses, first access to new functions, and priority access to the platform when it is at max capacity. Yes, they need to generate revenue somehow – the platform is free of ads (for now…) and has no other source of income – but this only adds to the point from the opinion piece above that it will create more inequality between those who can afford its added value and those who can’t. Moreover, it is unclear when they will have collected enough data to shut down this free research version; that could be next month, next year, or in the next decade. I would not call this openness.

          This latter point is also, in my opinion, a good argument not to embed it too firmly into educational practices. ChatGPT could be a great tool for teaching, but mostly to demonstrate the limited framework it operates in, and I am curious about its implementation in courses. But as long as OpenAI is a commercial company, it should be regarded and criticized as such. OpenAI does not need our support; it needs our data.

    • What do you mean by augmenting human intelligence? That’s a strange claim if it comes at the cost of exploiting other humans. Augmenting anything for the few through the oppression of others will always run against the direction of our growth as a species – the situations we are living through are telling us this to our faces. Withdrawing support from things is, *in practice and concretely*, a solution. Technology does not necessarily equate to progress and development, and can actually serve exactly the opposite effect, as argued here.

