Can generative AI and academic integrity co-exist?

The line on generative AI in academia is blurred. Should the use of AI be discouraged, or is compromise possible?

No matter how it’s done, passing off someone else’s intellectual property as your own is plagiarism. However, that rule has become less clear with the rise of generative artificial intelligence (AI).

On November 30, 2022, OpenAI released ChatGPT, a chatbot based on a large language model. Anyone can give ChatGPT a prompt that specifies the topic, main points, length, or degree of formality, and the AI will do the writing for them. If the response isn’t exactly what’s needed, they can request adjustments or fine-tune their original prompt until the desired content and quality are achieved. While this is much faster than researching and writing an essay yourself, the work produced isn’t your own.

Since the creation of ChatGPT, plagiarism has become a larger concern. Because generative AI is easily accessible and its use is often hard to detect, opinions differ on whether tools such as ChatGPT should be banned from academia or embraced as learning tools. So, the question is: can generative AI such as ChatGPT co-exist with academic integrity? Or is compromise the only option, given how widely used ChatGPT already is?

“I’m really excited about the potential and the ways in which we can harness this tool to do amazing things,” vice-provost (learning initiatives) and AI taskforce chair says

The University of Alberta’s Office of the Provost and vice-president (academic)’s Taskforce on AI and the Learning Environment is chaired by Vice-provost (learning initiatives) Karsten Mündel. The taskforce, made up of U of A experts, aims to “help us think through what the arrival of tools like ChatGPT and tools that work in the visual realm mean for our learning environment,” Mündel said.

The taskforce is making recommendations on dealing with generative AI in the U of A’s learning environment. It also provided specific guidance for courses in winter 2023. The recommendations are based on consultations with students and U of A stakeholders.

The topic of academic integrity and generative AI has come up, Mündel said, but the conversation sometimes focuses on students using AI to cheat.

“I don’t think students are setting out to [cheat]. This is a new tool. Wikipedia was a new tool. We need to think about how to incorporate these new tools into our learning environments,” Mündel said.

“But I would say [academic integrity is] actually not the biggest part of the conversation. It’s really about, how do we prepare our students for life after the U of A? How do we incorporate generative AI into all of those different realms of our lives afterwards?”

For students seeking clarity on the use of generative AI such as ChatGPT in class, Mündel said that asking questions is key.

“If it’s not clear from the syllabus, course outline, or an assignment sheet, have a conversation with the professor or teaching assistant. Instructors are incorporating [AI]. That also signals to students really clearly, ‘here’s how I want you to engage with generative AI in this course, in this assignment.’”

Academic integrity and generative AI can co-exist, according to Mündel. He added that through generative AI tools such as Grammarly and the suggestion features in Google Apps, the two are already co-existing. However, when it comes to a tool like ChatGPT, other factors should be considered.

“I think it really depends on how one is using it. Are you using that tool for idea generation? Are you using that tool to think through a bunch of data? Or are you using it to write your assignment?” Mündel said. 

He added that another aspect students should consider is the security of generative AI. Mündel said that everyone should be careful about the information they input into such tools.

“You’ve written this amazing essay and you’ve put it into generative AI to give you some feedback. How is that using your intellectual property, your work, in the development of that model?”

The Code of Student Behaviour states that “no student shall represent another’s substantial editorial or compositional assistance on an assignment as the student’s own work.” Mündel said that this still applies to generative AI such as ChatGPT, in the case that a student is “misrepresenting the assistance of ChatGPT.”

“If you read the code, it doesn’t explicitly say generative AI. We will make sure it does, so that there’s no question for anybody,” Mündel said. Ultimately, Mündel is excited about the opportunities that generative AI tools can bring.

“There is this sky-is-falling narrative out there about generative AI. I’m really excited about the potential and the ways in which we can harness this tool to do amazing things.”

“I think generative AI can be used in a way that sparks students’ curiosity,” philosophy PhD candidate says

Tugba Yoldas, assistant lecturer and PhD candidate in the philosophy department, will be teaching PHIL 385: Ethics and AI in winter 2024. According to Yoldas, it’s important to consider the ethics of using generative AI.

“Ethics is important because living with other people and other sentient beings requires our moral consideration. For us to live well and treat other living beings and non-living beings [well], that requires our moral consideration,” Yoldas said. “What is the right thing to do in this situation? What would be wrong?”

The course will cover ethical challenges posed by AI technologies such as generative AI. Yoldas said that there are two dangerous assumptions being made about the use of AI in academia: that students will cheat when given the chance, and that instructors want to control students.

“[Cheating] shouldn’t be the starting point of conversations about honestly using generative AI. The goal of instructors and higher education institutions is not, I think, to control students on what to do or what they don’t,” Yoldas said. 

However, according to Yoldas, using generative AI ethically in an academic setting requires transparency and collaboration from both the instructor and the student. As for unethical uses such as plagiarism, she thinks the definition of plagiarism itself should be reassessed.

“At this point at least, [AI] is not somebody, but it is an entity. It’s a technological entity and artifact. So we should look at reassessing our understanding of plagiarism,” Yoldas said. She added that there are additional challenges with generative AI such as ChatGPT, including a lack of regulations.

“We are asking students not to plagiarize, but we don’t have any regulations in place for these AI systems to ethically use the data that’s available to them. I do feel like there should be more initiatives about regulating generative AI systems’ use of all the data all over the internet,” Yoldas said.

She added that the issue of using generative AI ethically in academia is “an issue of lifelong learning,” related to “the student’s ability to ask and inquire about things that they are curious about.”

“I think generative AI can be used in a way that sparks students’ curiosity, that makes students think about which generative AI tools they can use in specific disciplines. And also how they can use it later on in the workplace, and in their life in general.”

“There’s lots of people claiming right now that we’re two or three or five years away from AI being human level intelligence, and that it will take over. That’s absolute garbage nonsense,” U of A assistant professor says

Matthew Guzdial is an assistant professor in the U of A’s department of computing science. He researches computational creativity, or the intersection between machine learning and creativity. 

Guzdial believes that AI can be integrated into academia. In certain instances, it makes sense to use machine learning tools like ChatGPT. However, he said it’s complicated and there are “pluses and minuses.”

He mentioned that in computing science, there are AI classes where students are expected to use AI for assignments. He also mentioned students using AI as an ideation tool to generate starting points for their assignments. However, he doesn’t recommend that students use AI tools directly.

“There’s issues around copyright. There’s [concerns that AI] will make up fake things. There’s issues around it just not being particularly good at certain things,” Guzdial said. “In those cases [where] people will attempt to use it rather than learning themselves, which is obviously not ideal.”

So far, Guzdial has seen a big impact from AI on academia, particularly when it comes to professors who are afraid of students plagiarizing. He also mentioned people are exploiting that fear by saying they have tools to detect AI-generated text and work. Guzdial said there are a lot of “false positives” from these detection tools, as even small edits can impact detection. 

However, Guzdial still believes that academic integrity and AI can co-exist. He said that this is an issue of good versus bad pedagogy, and of creating assignments that aren’t “cheatable or hackable.”

“If … a professor or instructor is saying, ‘write me a five-page essay on blank.’ I think that’s a terrible idea, in terms of how you actually teach someone something. That’s the kind of thing that you could use ChatGPT or any of these other large language models to produce content for.”

For Fact or Cap, a series of videos posted by the U of A’s official Instagram account, Guzdial quizzed students on their knowledge of AI.

Guzdial said that these videos serve to improve AI literacy and educate students and the general public. He added that it’s important for people to understand what the real risks are amidst the “apocalyptic nonsense that people are throwing around.”

Right now, there is a lot of fear surrounding the future of AI, which is creating lots of misconceptions. 

“There’s lots of people claiming right now that we’re two or three or five years away from AI [having] human level intelligence, and that it will take over. That’s absolute garbage nonsense. There’s no evidence that we’re going to get there. It’s just ridiculous,” Guzdial said. 

On the other hand, there are people who downplay the risks of AI, believing it’s not as capable as largely thought, Guzdial said.

Guzdial considers AI to have mostly positive potential for the world and academia, but thinks that “we won’t get there for free.”  

Guzdial is worried about AI’s impact on the workforce. He believes that AI will begin replacing jobs held by humans, but humans will still have to go in and check over the work done by AI. The result would be underpaid and undervalued work done by people. 

“I am worried about people misusing artificial intelligence or misusing the fear of artificial intelligence. If we leave it alone, it is certainly a big problem,” Guzdial said. 

“But I’m not worried for example, about artificial intelligence becoming sapient or sentient and destroying all humans.”  

Vice-president (academic) cautions students to “safeguard themselves” when considering using AI in assignments by setting clear expectations with professors

Pedro Almeida, vice-president (academic) (VPA) of the U of A Students’ Union (UASU), sits on the AI taskforce with Mündel, representing the student view.

Since he joined the taskforce, it has largely focused on creating a list of recommendations that centre on transparency and communication. Almeida said that professors will have different expectations on whether or not AI is allowed in their courses, but they need to disclose that information to students ahead of time. To Almeida, it’s vital that professors set expectations so students know exactly when it’s appropriate to use AI on assignments.

He added that it’s important to “[understand] the context of AI and [bring] it back to the UASU” so they can properly advise students on how to approach AI in their academics. Providing students with potential courses of action is a priority for the UASU, he said. 

Almeida believes that it’s “too early to tell” what the impacts of AI will be; those will come in the next few years. However, he has seen a narrative of caution directed at students regarding the use of AI.

“The main thing that I’ve seen as VPA is just this push towards students to be careful as they approach the use of AI and ensure they communicate well with instructors to avoid any issues of academic integrity. Especially as the new academic integrity policy comes into place.”

When it comes to students using AI in their assignments, Almeida said it’s course-specific. Some instructors are okay with it, while others are not, so it’s up to students to clarify what’s allowed and what isn’t. 

As AI develops and becomes more integrated into academia, policies that apply to all courses should be considered, Almeida said. But, because the U of A offers such a wide variety of courses, some decisions should be left up to professors. In the end, however, it’s “everyone’s responsibility” to ensure students are prepared to engage with AI ethically and responsibly, Almeida said. 

Almeida likened the use of AI in academia to when calculators were first introduced. 

“At first, there was a lot of fear around that. But as time went on, instructors were better able to adapt, students were better able to adapt. Still, in some courses, you may not always be allowed to use certain calculators or a calculator, but the expectation is in the communication.”

Currently, the taskforce is drafting an academic integrity policy, which Almeida has been providing input on based on student experiences. When it comes to academic integrity, Almeida said the primary objective is providing as much clarity as possible to students, since it isn’t possible to lay out the punishments for each offence. Doing so would remove the nuance from each situation, Almeida said.

Above all else, Almeida cautions students to “safeguard themselves” ahead of using AI in their assignments. If a student is considering using AI technology, he suggests they ask their professor over email, so they have the response in writing. That way, they are entirely protected in case they are accused of cheating. 

“On our end, we’ve done our best so that the academic integrity policy protects students better. We also want to make sure that students don’t have to go through unnecessary stress, if they can help it. The best way to do that is through communication in writing ahead of time.” 

For many, the direction of AI is unclear, especially in an academic context. There is no clear path to follow when it comes to AI integration, and there are many valid concerns. However, it’s hard to plan for something when it hasn’t fully happened yet.

“A lot of the answers will unveil themselves as both processes develop. As the policy is in place and we see what happens and we fine tune it over time, and as AI develops. I think that will unfold and it will make clear what the future looks like,” Almeida said.

For now, all students and professors can do is prepare and communicate with each other. Until we have a clear understanding of what the future of AI in academia looks like, there’s no point in fearing the worst.

Lily Polenchuk

Lily Polenchuk is the 2023-24 Managing Editor at The Gateway. She previously served as the 2023-24 and 2022-23 News Editor, and 2022-23 Staff Reporter. She is in her second year, studying English and political science. She enjoys skiing, walks in the river valley, and traveling.

Katie Teeling

Katie Teeling is the 2023-24 Editor-in-Chief at The Gateway. She previously served as the 2022-23 Opinion Editor. She’s in her fifth year, studying anthropology and history. She is obsessed with all things horror, Adam Driver, and Lord of the Rings. When she isn’t crying in Tory about human evolution, Katie can be found drinking iced capps and reading romance novels.
