ChatGPT: An end to education?


There have been many bold and triumphant perspectives on the potential impacts of ChatGPT — some more overblown than others. Concerns over whether it is going to “change education,” “become a threat to our jobs,” or “cause widespread cheating in universities” continue to flood my social media accounts and news feeds. As the prototype quickly gained popularity, more advancements followed suit — Microsoft’s new AI search engine, Bing, for example, was introduced just last month.

The release of ChatGPT has undoubtedly caused a wave of panic. RealClearEducation declared that “with ChatGPT, education may never be the same.” Teachers have expressed similar concerns. Daniel Herman, an English teacher of 12 years, calls it “The End of High School English.” One survey also found that 43 per cent of K-12 teachers “think ChatGPT will make their jobs more difficult.”

Schools and universities are scrambling to respond to this emergent technology. Within the first few weeks of its release, the University of Ottawa considered delaying take-home exams. UW and UofT have also formed committees “to develop guidance for instructors on assessment design.”

The implications of ChatGPT are present in all kinds of conversations between educators, students, and writers. A general question has emerged: will ChatGPT ruin education? There is considerable uncertainty about what ChatGPT or other large language model (LLM) software may look like in the future, but at least right now, it does not live up to the hype. Once we overcome the fear-mongering, generative AI may even do some good.

It’s understandable that people feel threatened by this technology. After all, it is a powerful tool that can answer questions, summarize text, and write code, all in a matter of seconds. While it may be able to generate well-structured, human-like responses, ChatGPT does not know everything. In fact, it has no idea what it is saying. The software doesn’t possess human understanding or thought; rather, it is trained to generate responses based on probabilities and patterns. The GPT model produces sentences one word at a time by “selecting the most likely token that should come next, based on its training.” In other words, it arrives at its answers “by making a series of guesses.” Although ChatGPT has access to an extensive amount of data, it lacks the ability to discern the validity of its statements, which can lead to blatantly incorrect responses. Reports show that it will sometimes even fail to solve simple math equations. Tae Kim, a writer for Barron’s, states that ChatGPT “doesn’t understand, comprehend, or know how to fact-check the large swaths of the internet it has scraped for data.” It “tends to regurgitate the misinformation,” he says. The model can only access data published before 2021, so asking it about any recent event is a waste of time.
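The “series of guesses” idea can be made concrete with a toy sketch. This is not how OpenAI’s models are actually implemented — a real LLM learns probabilities over tens of thousands of tokens from billions of examples — but the hypothetical lookup table and greedy loop below illustrate the basic mechanic the article describes: repeatedly picking the most likely next word, with no understanding of what the words mean.

```python
# Toy next-token table with made-up probabilities. A real LLM learns
# distributions like these from its training data rather than hard-coding them.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"ran": 0.7, "sat": 0.3},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(start: str, max_tokens: int = 5) -> str:
    """Build a sentence one word at a time by always taking the
    most likely next token -- a "series of guesses"."""
    words = [start]
    for _ in range(max_tokens):
        options = NEXT_TOKEN_PROBS.get(words[-1])
        if not options:
            break  # no learned continuation; stop generating
        # Greedily select the highest-probability next token.
        words.append(max(options, key=options.get))
    return " ".join(words)

print(generate("the"))  # prints "the cat sat down"
```

Note that nothing here checks whether the output is true; the procedure only chains likely-looking words, which is exactly why a fluent answer can still be factually wrong. (Real systems also sample from the distribution rather than always taking the top token, which is why ChatGPT’s answers vary between runs.)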

Despite its inherent limitations, ChatGPT will act “as confident in its wrong answers as its right ones.” Sam Altman, the CEO of OpenAI, who holds an honorary doctorate of engineering from UW, confessed that the bot “create[s] a misleading impression of greatness.” The company warns its users that “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.” But “fixing this issue is challenging,” they say.

ChatGPT can craft a seemingly convincing argument (accurate or not) with persuasive language, but its writing normally lacks depth and insight. Amit Katwala, a writer for Wired, says the technology is a good example of why “language is a poor substitute for thought and understanding.” Sure, ChatGPT can produce a grammatically structured paragraph, but it’s not a sentient being. It has no intention behind anything it produces — it lacks genuine expression, and it can be wrong, mix people up, write with bias, and produce stereotypes. “The first indicator that I was dealing with AI was that, despite the syntactic coherence of the essay, it made no sense,” says an assistant professor from Furman University.

An emerging consensus is that ChatGPT could trigger a rise in cheating, but you could say the same about any other online cheating tool. Chegg, for example, is an online subscription service disguised as a “homework help app” that can instantly connect students with tutors to answer any quiz or assignment question. In 2022, it had over eight million subscribers. Likewise, dozens of websites are available to write papers on any topic: Paperhelp, EssayPro, and GradeMiner are just some examples. The only difference is that ChatGPT doesn’t require an investment, making it more accessible. While that may be of concern, OpenAI and others are working to create software that detects AI-generated text.

Rest assured, AI experts have widely agreed “that [ChatGPT is] not anything close to sentient in the way a human is.” Until a generative AI can consistently create accurate, insightful responses, I am not convinced it will cause as much harm as the media predicts. Altman acknowledges that “it’s a mistake to be relying on it for anything important right now. It’s a preview of progress; we have lots of work to do on robustness and truthfulness.”

Until then, Ian Bogost, a writer for The Atlantic, has summarized its potential: “LLMs are surely not going to replace college or magazines or middle managers. But they do offer those and other domains a new instrument.”

Several teachers have already started looking for ways to apply the software in classrooms — using ChatGPT to create lesson plans, as additional help to deepen knowledge of a subject, or even to break down challenging paragraphs. It could even power a writing exercise: for example, having students generate a response with the software and then edit it or identify its mistakes.

ChatGPT was released as a prototype and is still very new. It might not (yet) be the sentient AI we were all warned about — but it still has the potential to become a good technological aid.