Except where otherwise noted, the contents of this guide were created by humans (not bots) and are licensed under a Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0).
Looking to support student understanding of Academic Integrity? Have your students complete the Academic Integrity Course, take the quiz, and earn a digital badge!
The self-registration course takes 45 minutes to complete and is supported by librarians at integrity@camosun.ca -- no extra work for faculty! Course content includes:
While artificial intelligence tools may be relatively new, the practices that we can use to promote academic integrity are the same.
In her 2021 book Plagiarism in Higher Education: Tackling Tough Topics in Academic Integrity, Dr. Sarah Eaton, a leading researcher in the field of academic integrity, introduces the idea of a "post-plagiarism world" characterized by six tenets, which she describes here and which are illustrated in the graphic below.
Are we already in a post-plagiarism world? Maybe not, but the suggestion is provocative. The power dynamics and tools traditionally used to discourage plagiarism and other forms of cheating (surveillance, invigilation, text-matching software) are either breaking down or being rendered useless in a technological arms race. What does that mean for us as educators? What does academic integrity look like in our disciplines in a world where students have powerful artificial intelligence tools at their fingertips?
Artificial intelligence presents new challenges for our collective understanding of academic integrity. Is ChatGPT a "source"? Can it be cited like other resources? Or is it a tool? If we use it in a similar way to how we use search engines, spell check, or text prediction in an email, is that plagiarism? The answers to these questions may not be straightforward.
AI detection tools may seem like a logical response to concerns about illicit use of AI in student assignments. However, emerging research suggests AI detectors are generally unreliable and may actually do more harm than good.
Current detectors are clearly unreliable and easily gamed, which means we should be very cautious about using them as a solution to the AI cheating problem.
- James Zou, machine learning scholar at Stanford University
Do AI-detection tools work? In short, not really. Evidence that such tools work reliably is scant (see, for example, Sadasivan et al., 2023). In April 2023, the University of British Columbia decided not to enable Turnitin's AI-detection feature; you can read about their reasons here. Other institutions have since chosen to remove this feature from their Turnitin licenses. At the same time, false accusations have the potential to cause students serious harm. Eaton (May 6, 2023) recommends four ethical principles for detecting AI-generated text in student work:
Before adopting any technological tool, it is worth first reviewing key considerations for adopting new educational technology. You can also request a consultation with an eLearning Instructional Designer.