I don’t think you need a whole article to tell you that using artificial intelligence (AI) writing software to complete your assignments is plagiarism.
You’ve sat through the anti-plagiarism lectures. You’ve read the pamphlets and heard the horror stories about expulsions, academic probation, and visits to the dean’s office. You know that if you take something someone else has created and try to pass it off as your own, you’ve committed the most egregious of academic sins.
In fact, if there’s one thing every university student knows, it’s this: plagiarism is bad, and committing it leads to serious consequences.
So what’s the deal with chatbots like OpenAI’s Chat Generative Pre-trained Transformer (ChatGPT)? Why have they occupied the attention of universities across the country, which have debated whether these essay-writing tools should be (or even can be) banned?
ChatGPT was launched by the San Francisco-based company OpenAI in November 2022. It can write impressively genuine answers to even the strangest of requests in real time. It tells jokes, writes limericks, debugs code, and, of course, writes essays.
For me, the case seems relatively open-and-shut: using something like ChatGPT to complete an assignment is plagiarism.
People have contended that because it’s a computer and not a person writing ChatGPT essays, it’s fair game. To this, I say: it doesn’t matter that it was a computer. It wasn’t you.
However, not enough people are talking about how chatbot tools like ChatGPT learn to create such compelling answers. We know that when you write an essay with ChatGPT, those words aren’t your own. But whose words are they? What does it really mean for a computer to “write” something?
Companies like OpenAI are infamously tight-lipped about where they collect their training data. But most programs of this kind, a family of software called language models, rely on enormous collections of text scraped from the internet without its authors’ consent. Because of the relatively unrestrained way these training datasets are assembled, language models are inadvertently taught to reflect dominant viewpoints, often at the expense of marginalized people.
In simple terms, this means that whatever you ask ChatGPT to write for you is really an amalgamation of other people’s writing. All of it is taken from the open internet without their consent, so it often reflects whatever biases appear in the original sources. And the internet is not exactly a famously inclusive space.
This, of course, adds layers to the plagiarism argument. What ChatGPT creates is not yours to claim. But it also only comes into existence off the backs of other (uncredited) writers across the globe. What’s more, a TIME investigation released last week revealed that the charmingly authentic answers created by ChatGPT couldn’t exist without the work of underpaid labourers in Kenya employed by Sama, a data-training company.
According to TIME’s investigation, these workers were paid between $1.32 and $2 per hour, depending on their seniority at the company and overall performance. They were tasked with reading thousands of snippets of brutally disturbing text, with the goal of teaching ChatGPT to avoid potentially harmful content. By labelling written samples of murder, suicide, and bestiality, the Kenyan workers helped improve the palatability of the answers ChatGPT provides. This work, of course, is incredibly alarming.
This is why our discussion of AI writing software must go further than debating the possibility of plagiarism. We can no longer peacefully stick our heads in the sand, pretending that software like ChatGPT comes into being in some far-off, consequence-free bubble.
I’m not interested in playing into the moral panic surrounding artificial intelligence. No, I don’t think that ChatGPT is going to become sentient and start sucking the juice out of humans like the computers in The Matrix. However, I do think it is imperative that we start paying more attention to who’s behind the delightfully convenient software that is becoming ever more common.