As students tackle papers and other end-of-term assignments in the coming weeks, some will inevitably turn to AI tools such as ChatGPT for help.
David Medler, an associate teaching professor at UVic and associate chair of the psychology department, said three such cases had already landed on his desk in a single week, an uptick from previous years and a sign of what he calls “skyrocketing” AI integrity violations.
One anecdote in particular has stuck with Medler.
“My son goes to UVic, and he told me the story of him looking over at the person next to him, who was having an AI generate the paper that was due at the end of class,” he said.
Medler, who has done research in the fields of computational modelling and artificial intelligence for many years, said he expects both learners and teachers to be adapting to the new technology for the next two or three years “until we get a really good handle on what’s going on.”
Nearly all the questions asked at a recent roundtable event for instructors on academic integrity at the university were on the topic of AI.
Some instructors asked how they could better identify AI-assisted cheating, while others commented on the benefits AI has brought to their academic research.
The university has yet to create formal guidelines on the use of AI, though ChatGPT, a free generative-AI chatbot, has been publicly available since last November. Tools like ChatGPT generate conversational, human-like responses when users type in questions or tasks, producing answers that can be used for a variety of purposes, including writing university-level essays.
Use of artificial intelligence in assignments at UVic currently falls under "unauthorized use of an editor" in the academic integrity policy.
Individual departments have also set their own policies and put together ad-hoc solutions. The psychology department has prohibited the use of AI unless expressly allowed by an instructor for teaching purposes, Medler said.
Elizabeth Adjin-Tettey, associate vice-president of academic programs, told the Times Colonist that UVic will release institutional guidelines on AI before the start of the next semester in January.
“We do want it to be in place because our colleagues are asking about it as they prepare for teaching next term.”
For first-year history and environmental studies student Henry Brooks, the idea of being able to prompt an AI program to do his homework has been tempting.
“But honestly, I don’t know. For the amount that a class costs, I’m going to go through the mental work of it and actually write the essay,” he said.
Erin Kelly, an associate professor of English and director of UVic's academic and technical writing program, said she has yet to see a serious AI-assisted academic violation this year, due to how the courses are structured — assignments in the near-mandatory writing program, which sees about 3,500 students each year, build on one another. "It would almost be more work to have a student use an AI tool."
Kelly also attributed the low cheating rates for students in the program to small class sizes and diligent instructors, who pay special attention to the first class assignment — a summary of an academic article, which can be easily generated with a simple AI prompt — for anything “funky.”
“But that is labour. If you had to do that for every single one of your students [in a class of 100], you couldn’t do it,” she said. “We can — by a horsehair — because we’re at 30 or 33 [students per class], and even so, people are exhausted.”
Kelly is hoping not to see more than the usual 30 to 40 academic integrity violations by the end of the year, but notes there's still half a semester left. "All hell could break loose."
Julia Hengstler, a professor and educational technologist for VIU’s faculty of education, said it’s not always possible to catch the use of AI in coursework. “If you’re willing to pay to use AI tools, there’s a whole other class of abilities, citations, creation of content that probably the average professor would not be able to identify.”
Hengstler predicts AI will have as much impact on the world as the internet did, and argues society isn’t ready for those changes yet. “A lot of what we do here in academia and K-12 is geared to what the students are going to do when they get out in the real world with an employable job,” she said. “AI is going to revolutionize how so many aspects of work are done.”
While it would be an “abdication of responsibility” for educators not to prepare students to use AI responsibly, Hengstler said, that doesn’t mean there aren’t concerns with AI, which is largely being developed and owned by large technology companies.
Those concerns include AI outputs that reflect biases — since the output is based on existing writings — and intellectual-property issues surrounding how AI databases gather information.
“We need regulations around this. We need responsible AI, we need guardrails,” Hengstler said. “It’s not just all great wonderful stuff. It comes with a whole bunch of caveats and potential harms and those need to be balanced.”
And then there’s the fact that sometimes AI outputs are just completely wrong, Hengstler added. “AI is prone to answer your question whether it knows an answer to it or not.”