When ChatGPT debuted last fall, reactions to the new artificial intelligence (AI) tool ranged from wonder and curiosity to fear and panic, including among school officials already concerned about cheating and academic misconduct in our internet age.
Now, about two months later, a wave of professors and academic integrity experts are sharing more in-depth reactions to ChatGPT, the tool from developer OpenAI that can quickly spit out human-like prose, computer code and more, drawing on billions of samples from the web.
They're checking out the bot themselves, raising it with their colleagues and even bringing it into their classes. Some call it a teachable moment for students and faculty alike, and a reminder to regularly reassess new technologies and how student learning is evaluated.
Academic colleagues who “think a lot about how best to teach and help students learn in a digital space” aren't panicking about ChatGPT, because the latest technological advances are already on their radar, says Luke Stark, an associate professor in Western University's Faculty of Information and Media Studies.
Luke Stark, Professor of Media Studies at Western University, shows CBC’s Diana Sumanak how ChatGPT generates authentic texts from basic prompts.
“ChatGPT is just one of many technological curveballs that higher education has had to grapple with in recent years,” noted Stark, who studies the history, ethics and social implications of artificial intelligence, machine learning and similar technologies.
“I see this as an opportunity for all of us to be aware of the new things we can do with technology and how it will affect our students.”
When ChatGPT was opened to the public last fall, Stark brought it into his classrooms, a move he encouraged his peers to make, too.
“Do a little research yourself, and then address it in class. Show the students in front of you that you know these systems exist… know that someone might use them, and make it a learning moment about how this technology can shape speech, language and writing.”
“The main thing is to engage [and] be honest with your students, and remind them that they want to be here. They want to learn.”
Liane Gabora also raised ChatGPT in her classes last fall, and after diving into testing it with her students, the University of British Columbia (UBC) Okanagan psychology professor admitted her initial feelings were a mixture of surprise and anxiety.
After making adjustments, getting used to the bot and discovering some of its limitations, Gabora is now exploring the new opportunities it can present when used in assignments that encourage students' creativity and critical thinking.
“They are having fun [with the ChatGPT assignments]. They play with it. They explore it… they test the boundary conditions. They're trying all these jailbreak techniques to get around its default limitations.”

Gabora first told her students that the UBC administration is well aware of ChatGPT, and that new software can also detect AI-generated writing. However, she believes the way forward is to incorporate new tools like these.
“We can't go back in time, can we? It's here with us and it's here to stay.”
Institutions monitoring the situation
UBC is taking an “educational approach” to AI tools and services, according to Simon Bates, the school’s current vice-chancellor and associate vice-president for teaching and learning.
Advisory groups that include faculty and student representatives are exploring how to do more on the educational side of academic integrity: examining how course design can be used to support academic integrity, and how instructors can identify and communicate what is and is not acceptable in their various course contexts.
Western University students discuss whether using ChatGPT for homework counts as cheating and whether their classmates already know about and use AI tools.
The University of Toronto, which has more than 88,000 students on three campuses and is Canada’s largest university by enrollment, is taking a similar approach.
“We regularly monitor and evaluate the development of technology tools that may impact learning, teaching and assessment, and are paying particular attention to ChatGPT and other emerging technologies,” said Susan McCahan, the university's vice-provost of academic programs and innovations in undergraduate education.
The University of Toronto has formed groups to monitor generative AI technology and provide guidance to instructors on assessment, and ongoing discussion of these technologies is needed so that shared understandings and approaches can develop, she said.

Bob Mann, director of discipline at Dalhousie University in Halifax, hasn't yet had to deal with a case of ChatGPT being used on an assignment (he thinks we're still in the “curiosity and interest” stage), but he believes the school's integrity policies are clear.
He said: “We're not just collecting assignments, wherever they come from. We want them to come from you.”
“[It] applies regardless of the situation, whether you're asking your cousin or friend to do the work for you, borrowing material from the internet, or getting some AI or technology to do it for you.”
Mann credits his IT colleagues for flagging the potential of these new AI tools some time ago, and he is confident that the gut-check alerts that already pop up for professors and teaching assistants (a submission far beyond what a student has previously produced, for example) will remain just as relevant.
“Our process at least allows us to fire a shot across the bow of a student… to say, ‘Listen, you're on our radar. You've submitted something, and we've read it and said there's something off about it.’”
Polarized reactions
While some colleagues feel that the idea of exploring AI tools and their ethical use in higher education “sounds like… Star Trek,” it's a topic that academic integrity researcher Sarah Elaine Eaton has been engaged with for some time.
The associate professor in the University of Calgary's Werklund School of Education is currently studying AI tools; she first applied for a grant to do so in 2020.
“I get messages from people on social media [ranging from those] saying that it's ‘plagiarism and must be stopped’ to ‘the best creative disruption of our lives.’”
“Right now, I see that attitudes are a bit polarized, so I'm working on a middle-ground [approach] and trying to figure out how we can use it without going overboard.”

Eaton believes artificial intelligence will play a growing, game-changing role in society, but she does not believe it can replace the human touch. “Human imagination isn't going anywhere. Creativity isn't going anywhere,” she said.
An OpenAI spokesperson said in a statement to CBC News that the explosion of interest in and actual use of ChatGPT since November is “an important part of developing and deploying capable and secure AI systems.”
“We don't want ChatGPT to be used for misleading purposes in schools or anywhere else, and mitigating that is a priority. We look forward to working with educators on useful solutions and approaches.”