
Is your professor really checking for AI?

June 11, 2025

It’s the night before an assignment is due. Your fingers hover over the keyboard. You’ve done the readings and attended the lectures, but the blank page glares back at you. Maybe, just maybe, you could open ChatGPT. A prompt here, a reworded answer there. You tweak the phrasing, shuffle the structure, and soon enough the essay takes form.

Sound familiar?

As AI tools like ChatGPT, Claude, and Gemini become more integrated into academic life, students across campus are navigating a new kind of grey zone. It is not cheating exactly, but not all students are writing entirely original work either. The lines are blurry, the boundaries are undefined, and the stakes feel high — never higher, perhaps, than at 11:59 p.m. on a weeknight.

Meanwhile, professors are facing the same tools from the other side of the desk. Do they crack down on AI-aided writing, revert to traditional assessments featuring in-person presentations, or adapt to a new era of learning altogether?

At Northeastern University, a student recently made headlines for calling out their professor—not for catching AI use, but for allegedly using ChatGPT to give feedback on their work. The student claimed the comments felt vague and generic, and even demanded an $8,000 tuition refund. Whether the claim holds or not, it’s worth noting that students aren’t just being watched for AI—they’re watching right back. So when it comes to our university, do professors actually want to file a Policy 71 allegation every time they sense the use of AI tools? Is that even feasible?

Some already have.

Others aren’t even trying.

Many don’t even care.

“I’ve used AI for nearly every written assignment this past term,” said a third-year environment student, who asked to remain anonymous. “It’s definitely helpful in brainstorming or rephrasing ideas, but why bother when it can pretty much get it over with in a minute. Honestly, it’s like Grammarly on steroids.”

Their approach isn’t unique. The use of generative AI for idea generation, outline-building, and sentence rewriting has quietly become the norm for many students. The key difference? Some treat it as word-processing software — others treat it like a ghostwriter.

In this shifting landscape, many students are left wondering: are professors actually checking? 

“Plagiarism was the big fear when I was in first year,” said Jason Li, a fourth-year biology student. “Now, it’s all about AI — but honestly, I don’t think most profs have a reliable tool to catch it.”

He’s not wrong. Tools like Turnitin’s AI detection, once hailed as a solution, are facing their own expiration date. Turnitin is sunsetting its AI detection feature by September 2025 after growing criticism over its accuracy, fairness, and transparency. Multiple institutions, including the University of Waterloo, found that the tool falsely flagged human-written work—especially from non-native English speakers. As a result, universities are being forced to rethink their reliance on external policing. That leaves a vacuum, and in it, a quiet tension brews: will students get bolder, or will professors get wiser?

So far, the answer seems to depend on the professor.

“I teach first-years, and the writing is often too clean,” said one UW professor from the faculty of arts, who asked not to be named. “Sometimes it reads more like a blog post or a product review. You can’t always prove it’s AI, but it doesn’t sound like a student.” When asked how many written assignments their course requires, they estimated five to seven in an average term.

When asked how they respond, the professor admitted, “I don’t act on suspicion alone. I might ask them to explain their argument in person, but that’s rare. We don’t have the bandwidth to police every sentence.”

Some instructors are adapting by shifting assignment formats entirely — oral presentations, handwritten exams, in-class essays. Others double down on authenticity, asking students to include personal anecdotes, classroom references, or drafts of their work. But while faculty navigate how to uphold academic integrity in the AI age, students are quietly adapting too.

“It’s almost like an unspoken rule,” said a second-year economics undergraduate. “Everyone knows someone who uses AI — maybe everyone is that someone. But you just make sure it doesn’t sound too AI. Keep it casual. Add some typos if you’re really paranoid.”

For many students, using AI isn’t about laziness — it’s about efficiency. Tight schedules, overlapping deadlines, and mounting expectations make AI feel less like a cheat and more like a lifeline. But there’s still fear: of getting caught, of being misunderstood, of crossing an invisible line.

Even students who write everything themselves sometimes run their work through ChatGPT — not to generate, but to polish. Fix the grammar, reword the clunky parts, restructure the awkward transitions. The result? Assignments that sound cleaner, but less like the student.

And professors can tell.

“A huge chunk of my job now is asking, ‘Does this sound like them?’” said a sociology professor. “You get to know your students’ voices — and suddenly, they’re writing like bloggers.”

She doesn’t always confront them. Instead, she’s begun assigning more open-ended prompts and real-time responses. “If you’re talking about a personal experience or applying a concept in your own words in a class discussion, AI can’t really help you there.”

So, are your professors really checking for AI?

Some are. Some aren’t. And some can’t.

What’s clear is that we’re in a tricky transition period. The rules — if they even exist yet — are still being figured out, not just by schools but by the students and professors in the thick of it. It’s hard on both sides: students struggle to see where the boundaries lie, and instructors feel like they’re constantly playing catch-up. Academic integrity isn’t just about copying someone’s work — it’s about where we draw the line between getting support and giving up ownership.

AI tools are now part of the learning process, whether we acknowledge them or not. They’re not flashy — they just slip into the background. So maybe the real issue isn’t whether professors are detecting them — maybe it’s whether we’re all ready to talk about how education is changing, together.
