Exploring the AI shift in UW classrooms
January 14, 2026

A student uses AI. (Photo credit: Karen Zhou)
With AI becoming an everyday part of learning, UW courses are evolving to balance academic integrity with the responsible use of the technology.
At the administrative level, David DeVidi, UW's associate vice-president, academic, explained that the university is intentionally avoiding one-size-fits-all rules because every course has different learning outcomes, assessment constraints, and disciplinary needs. “Highly specific language could become outdated before it is even approved, and existing policies already provide mechanisms to address unacceptable use of AI,” DeVidi said. In practice, unacceptable use includes submitting AI-generated work as one's own without permission, or failing to clearly disclose and cite AI contributions. Students who cross these boundaries may be investigated for academic misconduct under Policy 71.
Building on existing policy, UW is guiding instructors toward future-ready assessment design in several ways. The university's Information Systems & Technology team has published guidance for the Waterloo community, and suggested text for course outlines is available to clarify expectations around AI use in assignments, quizzes, tests, and exams. Instructors are also encouraged to familiarize themselves with GenAI tools, foster student discussion of their capabilities, and explore innovative ways to incorporate AI into teaching while remaining mindful of its constraints, such as potential inaccuracies, biases, or culturally insensitive outputs. Above all, transparency is emphasized: instructors are expected to require students to disclose any AI assistance in their work, and to model ethical use themselves. The university also points faculty to additional resources, including the Centre for Teaching Excellence guide, educational slides on GenAI, and similar resources from other Canadian universities, to support informed, responsible, and adaptable assessment design in the AI era. Taken together, these guidelines provide a flexible framework, leaving instructors the final say in how AI fits into their teaching.
Across campus, that freedom has led to some very discipline-specific creativity.
In the English department, Andrew McMurry teaches ENGL 108G: Horror, ENGL 492: Rhetoric of Fascism, ENGL 306F: Semiotics, and ENGL 306G: Critical Discourse Analysis. McMurry has watched GenAI challenge the very foundation of traditional writing pedagogy. Rather than waging a losing battle against ChatGPT use, he redesigned portions of his writing course to protect what he calls the “human” elements of writing, a shift that led to in-person writing checkpoints where students compose during class and instructors can identify authentic student voices. He has also pivoted away from take-home essays, where professors often encounter AI use, and added oral components with reflective tasks that require students to explain how they developed their arguments.
McMurry frames these changes not as punitive but as protective: “If AI can automate the intellectual work of writing — the analysis, the synthesis, the argument-building — we need to ask what parts of that process are uniquely human.” He is deeply skeptical of the idea that humanities education can simply “adapt” to AI, believing that embracing it as a routine classroom tool risks hollowing out the very skills English departments exist to teach. “The idea that humanities classrooms can somehow accommodate AI is a pipe dream,” he said. Instead, he argues that English must position itself as an AI-free space, one where students who value human-centred thinking, interpretation, and expression can still develop those capacities without technological shortcuts.
His concerns extend beyond assessment design to the long-term effects on learning itself. McMurry worries that as AI becomes normalized, students may lose the habit of mental struggle that underpins critical intelligence. Tasks like brainstorming, outlining, and researching can now be generated instantly by a prompt. “Why would a student second-guess what AI gives them?” he asked, suggesting that AI is simply better at this kind of work than they are. “But if they never do the hard mental slogging, they'll never develop the capacity to surpass it,” he concluded.
Several buildings over, Robin Duncan of the Department of Kinesiology and Health Sciences is encountering a very different set of AI-related challenges, ones rooted in metabolism, micronutrients, and scientific authority. Duncan teaches KIN 343: Micronutrient Metabolism, KIN 446/646: Physiological and Biochemical Aspects of Nutrition and Health, KIN 470: Seminar in Kinesiology, and KIN 605: Introduction to Genetics for the Biosciences. She sees potential for AI to support education when it is used carefully. “It can reinforce learning by drawing together lots of resources,” she said, noting that AI can sometimes help students make sense of complex material. The problem arises, she explained, when information is incomplete or uncertain: in those cases, AI tends to generate confident answers rather than acknowledge gaps in knowledge. Despite these concerns, Duncan does not dismiss AI outright. “It can be a valuable learning tool,” she said, “but right now it can be drastically misleading and completely wrong.” Her solution this year was to modify the coursework for KIN 343, which had previously been offered entirely online: while term tests remain open-book and online, the final exam is now written in person.
While instructors navigate these challenges in their courses, students are encountering the evolving landscape of AI firsthand. Jordan Bauman is an undergraduate psychology student in the Faculty of Science and a student senator at UW. Although Bauman hasn't taken a course that allows the use of AI, he is well aware of how it is used around him. “If course assessment is taking place in person, it's very hard to use AI in a dishonest way. But if it's taking place online or submitted as an assignment, then it's much easier to do that,” he said. Bauman added that these inconsistencies have fuelled ongoing debate among students about fairness and enforcement. In online assessments where most resources are permitted but AI is prohibited, he questioned the realism of those restrictions. “You kind of get this game-theory setup where it's like, if everyone else is using it, I'm at a clear disadvantage if I don't,” he said.
Bauman's perspective is also shaped by his experience moderating the Waterloo Forum's debate on AI in assignments; he founded the forum to encourage open and respectful academic debate on campus. Its pilot debate, “Should Universities Lift Restrictions on AI?”, brought together students, faculty, and experts to discuss the benefits and risks of AI in coursework. Through these discussions, participants highlighted that standalone assignments make it difficult to verify genuine learning. In contrast, interactive or dialogical approaches, such as oral review of submitted work, give instructors more reliable ways to assess understanding while still allowing responsible AI use. For Bauman, these conversations reinforced that well-designed assessments, rather than rigid rules alone, are central to integrating AI effectively into coursework.