Aeshaan Kumar opens his laptop at 11 p.m., stares at a CS135 problem set, and does what most of his classmates do: he asks ChatGPT. Not for the answer, he tells himself, but for a nudge in the right direction. By midnight, the assignment is done. By morning, he’s not sure how much of it was actually his.
Kumar is a first-year computer science student at UW, and like a lot of people in his year, he showed up with a clear goal: land a good co-op job related to AI or machine learning, maybe get into research, and eventually work somewhere worth working for. The pressure that comes with that starts pretty much immediately. Courses are hard, the competition is real, and tools like ChatGPT and Claude are always a tab away, ready to help at any hour.
“It started as just [helping with] debugging,” Kumar said. “But now I’ll ask it to explain a concept, then ask it to show me an example, and sometimes before I know it, the whole solution is just… there.”
He is not the only one. According to a 2024 survey by the Chronicle of Higher Education, over 60 per cent of university students say they use AI for academic work, with STEM students making up a large share of that. At UW, a school known for producing engineers who go on to work in AI, there is something a bit odd about the picture: students are using the technology they are being trained to build to pass the courses teaching them how to build it.
A Blurry Line
UW’s academic integrity policy prohibits submitting AI-generated work as one’s own, but the challenge of effective enforcement is enormous. AI-detection tools like Turnitin’s AI detector have been widely criticized for high false-positive rates and easy circumvention. A student can paraphrase an AI-generated solution by hand in under five minutes and render it virtually undetectable.
For Kumar, the line between legitimate use and dishonesty feels genuinely unclear. “If I use it to understand a concept and then write the code myself, that’s studying, right? But if I look at the code it writes and modify it a bit, is that plagiarism?” He shrugged. “I genuinely don’t know.”
This is what a lot of instructors are worried about. A 2023 paper in the Journal of Academic Ethics found that students who used AI for problem-solving scored lower on independent reasoning tests later on, even when they felt sure they had understood the material. The problem is not just that some students are copying answers. It is something quieter: skipping the part of learning that only happens when you sit with something hard.
The Co-op Pressure Cooker
For international students like Kumar, who came from India to study here, the pressure has an extra layer. His study permit depends on staying enrolled and in good standing. His family put a lot on the line for him to be here. And the co-op system, a big part of why students choose UW in the first place, is genuinely competitive: first-years are often up against people with years more experience.
“I’m trying to learn PyTorch on the side, build projects, keep my grades up, and get a co-op,” Kumar said. “AI just helps me keep up. Without it, I’d probably fall behind.” He paused. “But I also wonder sometimes if I’m actually learning anything.”
That last part is worth sitting with. These tools are genuinely useful, and most students are not going to stop using them. But AI does something a little tricky: it gives you the answer and the feeling that you understood it, even when you did not actually do the work of getting there. Researchers have started calling this out as its own kind of problem, separate from cheating.
What Comes Next
Some professors at UW have started changing how they assess students in response: oral components, in-class problem sets, questions that need real contextual judgment rather than something a model can pattern-match on. Others have taken a different approach, treating AI as something students need to learn to use well rather than something to block out.
Kumar says he wants to get better at knowing when to close the tab. “I think the goal is to actually be good at this stuff, not just get through it. I’m not sure I’ve figured that out yet.”
For a program that trains people to build AI, it is a pretty important question to get right.