Not a single major news outlet missed the story about the strange new presence in everyday life. Millions of curious people rushed to the site, excited, intrigued, terrified of what this new technology would be. In November 2022, Artificial Intelligence (AI) company OpenAI released ChatGPT, the first ever AI chatbot to capture public attention at such a massive scale. While AI itself has existed for decades, this technology reached unprecedented levels of mainstream popularity after the release of ChatGPT. It became the fastest-growing consumer application in history, reaching 100 million monthly active users within two months, according to TIME magazine, growing faster than TikTok and Instagram.
Today, generative AI and Large Language Models (LLMs), machine learning models that use natural language processing to generate text by predicting the next word in a sequence, have shifted from being a novelty to a necessity. They are now commonly found in schools, companies, and homes, with around 4.79 billion users a month.
Rather than fearing the technology itself, people now fear being left behind: in many industries, using AI to boost productivity is now a given. AI has become unavoidable, and rather than living in fear and denial, the consensus within the scientific community is that this new technology must be approached with tact.
AI IN SCHOOLS
Entering her first year of college at UCI, English and Political Science major Grace Tseng (’24) was astonished by the sheer number of people using AI in their daily lives. While people already used AI when she was in high school, she saw significantly more people using it in college.
“In high school, I was very much against the use of AI entirely, but everyone uses it here,” she said. “People use AI very liberally in college. It’s usually smaller things like generating practice exams, but I know a lot of people who will use it to a greater extent as well. They’ll plug in the prompts of a writing assignment and say, ‘Give me X number of words on this prompt,’ and then turn it in. Even in class when the professor brings up a reading for discussion, a lot of the time, people will plug the question into ChatGPT and then reference what it says.”
Similarly, for Arvind Kalyan (’20), a first-year Masters student at UCLA studying computer science, AI use quickly became a problem in the ethics writing class he was a teaching assistant for. According to Kalyan, out of around 20 students in the class, five were caught using AI to complete a report.
“The syllabus doesn’t allow you to use generative AI because it’s considered cheating, but a lot of the reports had kind of the telltale signs of AI, like the very similar sentence structure and common words used,” he said.
Balaji Rama, AI researcher and developer at Rutgers University, says he’s worried about how often he’s seen others use AI to do their work for them.
“It’s really easy to turn to AI for everything,” he said. “It’s quick, it delivers, and the quality seems good. But in the long run, we will face diminishing returns.”
When facing impending deadlines, Kalyan said that many students feel a pull towards the quick solution that AI provides.
“Even with myself, when I look at a homework problem or when I think about answering any question that requires thought, one of the first thoughts I have is, ‘What if I just copy-paste it into ChatGPT?’” Kalyan said. “That’s an urge that I’ve had and that I know a lot of other people have had as well.”
According to Kalyan, the negative effects of AI reliance for menial tasks have already become very apparent in those around him.
“Some people will turn to ChatGPT for everything,” Kalyan said. “I’ve seen ChatGPT take over very fundamental parts of thinking, especially [as] the more advanced reasoning models have been coming out. [It’s] replacing thinking, and it takes a very [conscious] effort to get away from that.”
When Tseng was in high school, she had similar experiences.
“I have a heavy writing background,” Tseng said. “I was in Nexus and I tutored in Writer’s Workshop, so I got to see firsthand how relying too heavily on AI [impedes] your ability to communicate in an articulate way.”
According to Kalyan, one major way to steer students away from using AI for school assignments is for institutions to enforce policies that discourage its use.
“It has to come from above,” Kalyan said. “Teachers and mentors should really emphasize how it is important to be able to write and you have to be able to sacrifice that short-term, easy crutch of ChatGPT for actually thinking, actually writing, and actually developing your thoughts. So, I think it’s important that teachers specifically take strong stances against using it just for everything and again emphasize why they’re taking that strict stance.”
According to Rama, it is essential for students to understand the ramifications that can come from constant reliance on generative AI, especially because school is where lifelong habits are made.
“Lots of computer science students nowadays use ChatGPT, and are virtually unable to code without it,” he said. “AI seems harmless when students are just using it to answer questions for them, but if it becomes the norm to not think at all to complete tasks, we will end up with a workforce that is basically impaired. Rather than being produced by humans, work will be produced by AI under the mask of a human.”
Kalyan shares similar beliefs.
“Telling students to avoid using ChatGPT and strict enforcement can only get them so far,” Kalyan said. “The important thing is to be able to think for yourself without it and not use it as a crutch. Being acutely aware of how it works, what it can help you with, what it can do, as well as its limitations, and then being very firm in learning things for yourself, learning to be creative, to solve problems, to do that all without AI.”
Trevor Chen (12), an intern at the UCSD Supercomputer Center who works with machine learning, said the line of acceptable AI use should be drawn when AI starts taking over thought processes.
“For me, it’s about personal growth,” he said. “If I use it for APEL, what’s the point? But, if I’m using it as a physics tutor, then I’m becoming more intelligent. Some people would also argue that since they have ChatGPT or other AI sources there, once they go into the real world, they won’t even need to know how to use those kinds of functions. But there’s also 7 billion other people who know how to use ChatGPT as well, and you are now one of the masses that are not capable of individual thought. So at the end of the day, the people that are actually going to be smart or have an advantage are the ones that don’t fall into the chat, right?”
Kalyan said that preparing ahead of time and putting in a conscious effort has allowed him to keep himself from completely depending on AI for last-minute assignments.
“It’s always kind of like a conscious decision I have to make when I start my homework to not use AI unless I absolutely have to,” he said. “I go to office hours every week and I ask questions, but it also helps to put [hard questions] into ChatGPT because it can help explain some things. Anything I don’t understand after that, I can just ask the professor.”
CAREER PATH
From a young age, Tseng has loved literature. Thus, she knew from the start that she would pursue a major in English. However, by the time her high school graduation and course selections for college rolled around, Tseng found herself facing a world that was shifting at unimaginable speeds. Automated writing agents began producing large amounts of content in seconds for free. Traditional roles in journalism, content marketing, and technical writing were being automated or reduced. So, Tseng was forced to consider a different approach.
“[AI] was a substantial part of why I chose to double major,” Tseng said. “English will always be my first [major]. It was what I came into school with. It was what I knew I wanted to study, but I also knew that English is also something that AI is very directly impacting.”
Instead of solely pursuing a major in English, Tseng considered other less impacted majors, keeping an eye on the job market as well. She ultimately decided on a double major in English and Political Science.
“When I chose to double major, I knew that I wanted something that still preserved my love for writing and speaking,” Tseng said. “But also, if AI really [does] become as powerful to the extent that I fear it could become, then I would have something to fall back on that really requires a human sort of angle to it.”
For Amy Wang (’22), a Systems Software major at Stanford, AI had its largest impact during her job search.
“You’re put in an environment that’s entirely different, and obviously, we haven’t seen the full ramifications of what that means for young people, but it is just a vastly different job environment,” Wang said. “When I was a freshman [in college] or when I was a senior in high school, one thing that I wasn’t really expecting when I came to college was how impacted not just my job prospects, but my perception of job prospects were going to be by the developments.”
Wang’s previous plans of going into software engineering were quickly halted by the rise of coding agents.
“With the proliferation of coding agents and given the popularity of AI, it has become one of those conversations with myself where I’ve been like, ‘This job doesn’t necessarily have the same job security that it once did,’” she said.
Tseng and Wang haven’t been the only ones affected by this. In fact, according to McKinsey Global Institute projections, approximately 14% of workers globally, around 375 million people, are expected to be forced to change careers by the end of the decade due to the impact of AI.
“I ended up recruiting for finance instead of software engineering, and that was a major decision that I made, in part because of the pressure from the job market,” Wang said.
According to Rama, AI won’t be entirely replacing humans, at least in the near future.
“It’s the ability of humans to see something and instantly correlate [it] that AI right now can’t do,” he said. “AI is like a search space where it tries many different answers until it gets the right one. That means AI will never replace the ‘humanness’ of humans, but because it’s a search space, it can theoretically do anything else.”
Even so, Rama said the future is still uncertain.
“Even with all of these predictions, we never really know what will happen in the future,” Rama said. “What we call AI is simply the next wave [of technology], and there will probably be something past AI that we can’t see now, and then after that, and so forth.”
According to Rama, AI may just be another repeat of history.
“An interesting conjecture is that AI will result in no overall net change in society,” he said. “In the 1800s, you had people who took out the toilet for you and made your doors. The minimum job was different. Now the minimum wage job is fast food or adjacent. As AI replaces jobs that no one wants to do there will be a shift and the minimum bar of work will be raised. While we very well may automate factory work, those people who used to do factory work will get more education and the new low might be higher than the older low. If you look at the distributions, it will be the exact same distribution just shifted up. The software engineers of the future will be the factory workers of today. But the software engineers of today may be the architects of tomorrow, like a step up hierarchy.”
AI WEAKNESSES
AI technology itself has substantial flaws that have yet to be sorted out, causing problems in academia and information transfer.
One of the most pervasive and harmful problems is hallucination, in which LLMs generate false information and present it as fact.
“LLMs work using next-token prediction, where the model essentially tries to predict the next most likely word from learned patterns in the given training data,” Rama said. “This means there isn’t any semantic understanding; it’s just probability.”
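The next-token idea Rama describes can be sketched with a toy bigram model: no neural network and no understanding of meaning, only word-pair frequencies. This is an illustrative simplification, not how production LLMs work (they use deep neural networks over subword tokens), but it makes the “just probability” point concrete.

```python
from collections import Counter

# A tiny "training corpus." A real LLM trains on trillions of tokens;
# the principle is the same: learn how often tokens follow one another.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
bigrams = Counter(zip(corpus, corpus[1:]))

def predict_next(word):
    """Return the most frequent next word after `word`, or None if unseen."""
    candidates = {nxt: c for (prev, nxt), c in bigrams.items() if prev == word}
    if not candidates:
        return None
    # Pure frequency, no comprehension: the model "knows" only that
    # "cat" followed "the" more often than "mat" or "fish" did.
    return max(candidates, key=candidates.get)

print(predict_next("the"))  # "cat" follows "the" twice; "mat" and "fish" once each
```

With so little data, the model confidently continues any prompt it has seen, and that same mechanism is why a large model can produce fluent text that is simply wrong: it is picking likely words, not checking facts.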
Wang says that people too often blindly trust LLMs because they sound confident.
“We should be wary to not always take AI at face value,” Wang said. “If you don’t know anything about the topic, you’re just going to have to assume that everything the AI is saying is true because it sounds correct. Once you are really relying on a system like that and you’re not doing your own critical thinking and you don’t know what’s going on, you’re putting yourself in a very compromised position. AI is no longer a tool, it’s just a crutch.”
Similarly, Rama warns of the dangers of using AI as an information source, as it may be inaccurate.
“AI does not reason like true reasoning,” Rama said. “So if you’re good at something and you use AI, it’s useful, but if you’re blindly trusting AI, it’s more deadly because it’s not 100% right and you have no way of knowing when it is wrong.”
According to Tseng, college students most commonly use AI as an information search tool, a decision that Kalyan thinks is problematic in many ways.
“I think the most obvious danger is that Google switched their search to include the [Gemini AI Overview], and it’ll tell you just completely wrong things,” Kalyan said. “It’s a very visible place where the hallucinations are dangerous, and I think it’s pretty bad that Google pushes that as its first search result because you don’t have to click on anything to get it; it just shows up.”
According to Rama, the main reason we should be wary and educated about AI usage is that the technology is still in its developmental stage. Currently, there is little transparency in LLMs: the decision-making processes of any given model are opaque, and there is no way to “see” inside the LLM.
“The real problem that people are ignoring is the fact that AI development is vastly outpacing AI safety,” Rama said. “We still haven’t figured out how to make GPT-2 safe, and that was in 2019, and here we are making GPT-5. That’s why [AI developers Mira] Murati, [Ilya] Sutskever, and Dario [Amodei] all left OpenAI and made their own companies, because they saw the need for safe AI.”
For students, while everything may seem overwhelming, Kalyan says the best thing we can do is educate ourselves about the situation we are in.
“I think it’s very important to know how it works and what it can do,” he said. “AI is going to be a thing in our lives whether we like it or not. So the first step to accepting that there is this very powerful but potentially dangerous entity in our lives is knowing how it works. I think that’s much more important than avoiding its existence because it does exist and it will continue to exist for the foreseeable future.”