One thing none of us can seem to escape is the ever-looming presence of AI. AI will revolutionize every industry. AI will be the end of the human race. AI will turn all ensuing generations into lonely, brainless shells of humanity. Our fear of AI and of over-reliance on technology has been evident for decades, long before AI went mainstream. Frankenstein in 1818, The Terminator in 1984, WALL-E in 2008: these books and movies could almost make up their own genre.
Today, however, we face a new dilemma: the use of AI, specifically chatbots, to complete tasks for us.
It is important to acknowledge that AI is simply a tool, and like any tool, it can be used for harm or for good. With time, social or legal restrictions may be placed on its usage and abilities; however, it isn't going to disappear from society. Thus, as individuals, and especially as students, we can make its use safer by being mindful of its shortcomings, knowing the possible risks, and prioritizing human thought and reasoning before relying on AI to think for us.
Two of the most common tasks ChatGPT is used for are those requiring quantitative reasoning, such as math or physics, and those requiring literacy, such as reading, analysis, and writing. With these subjects, there is a fine line between where AI is beneficial and where it is harmful.
ChatGPT is most commonly used for writing purposes. A survey by Intelligent of 588 college students found that 69% use the tool for help with writing assignments, and 29% of those have had ChatGPT write entire essays. Beyond being unfair to other students and teachers, using GPT to write essays can be harmful to a student's literacy. A big impending deadline may seem like the more pressing problem in the moment, but turning to AI again and again as a quick fix is ultimately far more harmful than missing a few school deadlines.
We can break down the process of writing an essay, or any piece of writing, into a few main steps: brainstorming, research and analysis, writing, and proofreading. Students use AI to aid in any combination of these steps, but offloading any one of them carries its own harm.
Skipping the brainstorming process puts us at risk of missing the many benefits that come with it. Coming up with compelling and nuanced ideas usually requires a thorough base knowledge of the writing topic. If we get rid of the brainstorming process, we also rid ourselves of the need to learn about the subject, to explore unfamiliar ideas, and perhaps even to pick up bits of unrelated but intriguing information. The depth of learning that comes from this kind of information scavenging is lost. As such, skimping on brainstorming diminishes our need for intellectual curiosity, the same curiosity that led Alexander Fleming to discover penicillin and save millions of lives, and that led Hedy Lamarr to pave the way for the invention of WiFi. The potential innovations inspired by brainstorming and trial-and-error are at risk of being lost, as innovation is born of struggle, and there is little point in struggling to find an answer or idea when it is simply presented to you on a silver AI platter.
To further understand the dangers that come with using AI to write, a bit of knowledge on how LLMs (Large Language Models), such as ChatGPT, function is needed. LLMs work by predicting the most probable next word in a sequence, based on patterns learned from the vast amounts of text data they have been trained on. Thus, while it may seem otherwise, these models don't really have a mechanical understanding of how and why they generate an answer. When used for analysis or research, AI runs the risk of "hallucinating" and making up facts, increasing the risk of perpetuating biases and spreading false information.
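The next-word idea can be made concrete with a toy sketch. The snippet below builds a crude word-level predictor by counting which word follows which in a tiny made-up corpus and always picking the most frequent successor. Real LLMs use neural networks over subword tokens and billions of parameters, but the core objective is the same: predict the next token, with no built-in check on whether the resulting sentence is true.

```python
from collections import Counter, defaultdict

# Toy corpus (invented for illustration). A real model trains on
# trillions of words scraped from the web, books, and code.
corpus = (
    "the cat sat on the mat . "
    "the cat chased a mouse . "
    "the cat sat on a mat ."
).split()

# Build a table: word -> counts of words observed immediately after it.
successors = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    successors[current][following] += 1

def predict_next(word):
    """Return the most frequent next word, or None if the word is unseen."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))    # "cat" (follows "the" most often here)
print(predict_next("zebra"))  # None: the model can only echo its data
```

Note that the predictor has no notion of cats or mats, only of co-occurrence statistics; when the statistics are misleading, it still answers confidently, which is the essence of hallucination.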
Furthermore, the writing that comes from ChatGPT can be bland, as the model tends to generate word salads: paragraphs that seem fancy and articulate but are much less rich in meaning. Certain words and phrases, such as "ever-evolving landscape" and "delve," also appear constantly, making the writing boring and, at times, slightly infuriating. When we choose to write with GPT instead of on our own, we risk losing, or failing to develop, our individual writing voice, which is what compels readers and helps them feel a connection with the author. While the overall meaning may be the same no matter who or what writes the piece, the writer's voice is what gives the writing flavor, color, and personality. All of this vibrance is lost when AI writes or rephrases one's work. I fear that if we continue to neglect exercising our creative brains, all writing will trend toward a gray, dull mush within the next decade. Finally, a writing voice is not only important for writing professions; the ability to express ideas clearly and authentically is instrumental to any profession.
Although AI can be detrimental when used to replace human thought and creation, it can be a valuable tool when used correctly for subjects like math and physics. The end product of writing can never be predicted, as it can change course and evolve during the writing process. For more technical subjects, however, the answer already exists, and students only need to understand the process of reaching that predetermined end result.
In these mathematical subjects, using AI as an answering machine to spit out solutions defeats the purpose of learning the subject. However, when faced with a difficult question, a student can use clever prompting techniques to get the AI to generate hints that guide them to the solution. In this way, AI can supplement learning instead of detracting from it. It is akin to having a tutor or teacher beside you at all times. In a paper by Gregory Kestin, a physics teacher at Harvard, Kestin states that unguided use of ChatGPT lets students complete assignments without engaging in critical thinking. However, students who used his AI tutor, which gave small hints instead of full answers, doubled their learning gains compared to students who only participated in active lectures. Kestin believes this technology will not replace lecturers and human interaction, but will instead make that human interaction richer.
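One simple version of such a prompting technique is to wrap every question in instructions that forbid full solutions, so the model behaves like Kestin's hint-giving tutor rather than an answering machine. The sketch below is hypothetical wording, not Kestin's actual prompt; it only builds the message list in the role/content shape that most chat-completion APIs accept, and does not call any service.

```python
# Hypothetical "hint-only" system prompt (an assumption, not a quote
# from Kestin's tutor): the model may name a concept or one next step,
# but never the final answer.
HINT_TUTOR_SYSTEM = (
    "You are a physics tutor. Never give the final answer or a full "
    "solution. Respond with one small hint: name the relevant concept "
    "or the next single step, then ask the student to try it."
)

def build_hint_request(question: str) -> list[dict]:
    """Package a student question as a hint-only chat request."""
    return [
        {"role": "system", "content": HINT_TUTOR_SYSTEM},
        {"role": "user", "content": question},
    ]

messages = build_hint_request(
    "A 2 kg block slides down a frictionless 30-degree incline. "
    "What is its acceleration?"
)
# `messages` could now be passed to any chat-completion client.
```

The design choice matters: the constraint lives in the system message, so the student still does the solving, and the model's role is limited to nudging, which is exactly the guided use the study found effective.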
The dangers of AI arise when we start to use the machine to think and speak for us. As such, we must carefully consider where and when to use this tool, as a lifetime of stunted critical thinking is not worth a decent grade on an assignment. Are we really comfortable with AI becoming the voice of the next generation because we never learned to communicate for ourselves?