Universities Need Systemic Overhaul to Manage AI Use

2 Oct 2024

Associate Professor Jason Lodge writes for the Financial Review.


As AI tools like ChatGPT become increasingly embedded in student work, universities are being urged to reform their assessment practices to ensure meaningful learning. The article notes that around 60% of university students have used generative AI (genAI) for academic work, posing new challenges for academic integrity and learning outcomes.

Challenges and Adaptations

Jason Lodge, an associate professor of education at the University of Queensland, argues that the rise of genAI requires a shift away from simply detecting cheating toward verifying whether students are learning effectively. Because these tools are readily accessible and can generate well-structured text instantly, traditional assessments are increasingly vulnerable to misuse.

Some universities are experimenting with oral exams, known as vivas, to gauge students' understanding. However, as Toby Walsh, chief scientist at the UNSW Artificial Intelligence Institute, notes, vivas are expensive and do not scale to large cohorts. Rather than banning AI, Walsh suggests requiring students to submit both the generated content and the prompts used to create it, ensuring transparency and maintaining academic accountability.

Risks with AI Detection Tools

AI detection software, such as Turnitin’s, is widely used by Australian universities but comes with limitations. Walsh emphasizes the need for caution, as AI detection tools can incorrectly flag students for misconduct. Turnitin’s Asia-Pacific vice president, James Thorley, explains that their system minimizes false positives by focusing on high-probability cases but stresses that these flags should prompt further discussion rather than serve as definitive evidence of cheating.

Redefining Ethical AI Use

Experts recognize that AI can be beneficial when used ethically; for example, it can help international students improve their English and assist students with disabilities. Associate Professor Jemma Skeat of Deakin University found through surveys that most students understand the limitations of AI-generated content, such as its tendency to fabricate information. Many students also acknowledge legitimate uses of AI, such as brainstorming or refining ideas, but stress the importance of knowing where to draw the line between assistance and academic misconduct.

In summary, universities are being pushed to rethink their approach to assessment, balancing AI's potential benefits against the need for academic integrity. Rather than focusing solely on detecting misuse, institutions must adapt so that students gain real knowledge while integrating AI responsibly.

Read the full article at the Financial Review
