The Future of AI in the Classroom

by Mahdi Almosawi

Mahdi Almosawi is a computer science major from Hingham, MA. Mahdi says that “as a person with ADHD, education was a challenge for me at times, and I often used AI to create a learning process for me that worked. I believe that it can help others do the same.” Mahdi believes that there is strong resistance to using AI in school because of the limited exposure we have had to it so far, and he “hopes that this essay takes away some of the stigma” associated with it. Mahdi loves to learn new things and hopes to inspire others to embrace change and to see the potential of AI as a tool that can help create personalized and effective learning experiences.


The ideas that led to modern Artificial Intelligence language models were being discussed as early as the 1950s. As data analysts sought ways to compute and process immense amounts of data, progress was made gradually until, over the past eight years, four versions of the GPT language model were released, with ChatGPT, built on the third of these and released to the public in 2022, being the one in widespread use today. Ultimately, such models were meant to analyze and understand language so that one day they could be tasked with working in those languages (Cruz). ChatGPT, however, is not the only one of these models. The Faculty Center of the University of Central Florida states that “there are multiple ‘large language model’ software solutions similar to (and competing with) ChatGPT. Most of the large technology companies have their own. There are related AIs for drawing pictures, and many other domains.”

AI language models such as ChatGPT have become an increasingly relevant topic of discussion due to their recent advancements. Author Wael Alharbi notes the transformation from what was once a tool “designed to assist writing teachers in assessing their students’ assignments… to offering extensive support in identifying writing problems and offering suggestions for improving the writing quality” (1). The Faculty Center at UCF warns that ChatGPT is now able to “quickly create coherent, cohesive prose and paragraphs on a seemingly limitless set of topics.” I use the word warn deliberately: the fact that AI can easily produce paragraphs on anything it is asked is exciting and exemplifies the potential of programs like ChatGPT, yet despite that positive implication, it has become a point of conflict within many discourse communities. It is these exact capabilities that authors, educators, and writers worry about, fearing they will create issues for their work. The education system in particular has had trouble maneuvering around the sudden surge of AI usage, which has been met with negative feedback from teachers who believe it is being used to complete assignments without doing any actual work. However, I believe that it is within education that AI can offer its best support. The negatives AI brings to writing are a side effect of change: AI is so new that schools have not yet adapted to it, and when it is properly implemented, it will become a very helpful tool for teachers and students alike.

In order to understand how AI can be utilized, it is important to first understand how it works, which I will explain in the following paragraphs, using ChatGPT as a representative example of how AI language models generally work. ChatGPT “learns” from large samples of human-generated text, fed into an algorithm designed to recognize patterns and follow them. After processing enough sentences and paragraphs, it becomes capable of following those patterns to create similar sentences and paragraphs of its own. When prompted with a question, ChatGPT processes what is being asked of it, then returns what it is programmed to consider the best answer. Usually, this means data and information derived from the internet is turned into appropriate responses. However, it would be a mistake to imagine ChatGPT as a search engine with the ability to speak and respond to you. ChatGPT is capable of understanding questions, amending prompts with additional requirements, and using user-provided information as the basis for its answers. You can also ask ChatGPT to present its answers in various formats; for example, you can ask it to create a bulleted list, or to provide visuals or more in-depth explanations.
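
ChatGPT’s underlying neural network is vastly more complicated than this, but the core idea of “learning” patterns from text and then following them can be sketched with a toy example. The short Python program below is my own illustration, not how ChatGPT is actually implemented: it records which word tends to follow which in a small sample of text, then generates new sentences by chaining those observed patterns together.

    import random
    from collections import defaultdict

    # Toy illustration of pattern-following text generation. A real language
    # model is enormously more sophisticated; this only records, for each word
    # in a small sample, the words that have been seen to follow it.
    training_text = (
        "the cat sat on the mat . the dog sat on the rug . "
        "the cat chased the dog . the dog chased the cat ."
    )

    followers = defaultdict(list)
    words = training_text.split()
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)

    def generate(start_word, length=10):
        """Produce text by repeatedly picking a word that followed the current one."""
        word, output = start_word, [start_word]
        for _ in range(length):
            if word not in followers:
                break
            word = random.choice(followers[word])
            output.append(word)
        return " ".join(output)

    print(generate("the"))  # e.g. "the dog sat on the mat . the cat chased"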

However, alongside these features and capabilities there are also weak points and drawbacks. ChatGPT is susceptible to creating completely false statements and presenting them as if they were true. This is referred to as “hallucinating” (University of Central Florida) and results from the fact that “ChatGPT… remains to be limited and hinges on the data that it was fed with. And this will never be enough. For one, its current training data was cut off in 2021” (Cruz). In practice, if asked for anything that requires information from after 2021, ChatGPT will simply provide misinformation. One repercussion of this is ChatGPT inventing sources that sound plausible but are completely made up. I have personally experienced ChatGPT’s “hallucinations.” When I asked for a step-by-step guide to completing a puzzle quest in a video game, ChatGPT provided answers that would sound correct enough to anyone who does not know what the solution actually is (“What is the step-by-step guide”).

[Screenshot: ChatGPT-generated response to the prompt]

In this game, the objective is to complete a series of challenges that lead you to the ending cutscene, signifying that you have finished the quest. The first step ChatGPT lists is correct: you have to construct the “Pack-a-Punch” machine in order to progress any further. However, it fails in every specific part of the explanation. It mentions “parts,” which are indeed involved in these types of quests in this game, and it mentions the “Ritual Room,” which is a significant aspect of the specific level it is being asked about. However, the way these are explained is completely wrong, and the guide leaves out a significant number of other details and important steps. For the sake of comparison, a more accurate and complete first step would look like the following human-written version, which draws from information on the IGN website (Madrigal et al.):

Step 1: Build the Pack-A-Punch
To begin the Easter egg, use the beast statues in each district, starting at spawn: while in beast form, power on the gate to the first ritual room, acquire the first artifact, and then complete the first ritual. Once the gateworm is acquired, repeat this step for the other districts: Waterfront, Footlight, and Canals. Once all four rituals are completed and the four gateworms are acquired, open one of the rifts in beast form if you have not already. Enter the rift, place down all four gateworms, and then activate the final ritual. Once this ends, the Pack-a-Punch will be available in that same room.

This human-authored explanation highlights how lacking and incorrect ChatGPT’s guide was. After its first step, the generated guide continues to deviate, becoming more and more incoherent and incorrect as the response goes on. ChatGPT was able to follow its pattern recognition and provide an answer with the information it had, and that is all it had to do, even though the answer it gave was wrong. ChatGPT is not designed to be a concrete resource that always provides correct answers, and this could be especially harmful if students are not prepared to check the information ChatGPT gives them against other sources. For teachers to be confident in allowing AI in schools, it is important that “hallucinations,” as well as the reasons for them, are understood so students do not consume misinformation.

Furthermore, “Language educators and researchers may have reservations about the authenticity of students’ submitted writing,” due to the fact that AI can be used to complete assignments entirely on its own (Alharbi 2). AI can answer questions, write essays, and perform a multitude of tasks without any work required from the student, enabling academic dishonesty. The aforementioned UCF Faculty Center states that “the potential for abuse in academic integrity is clear, and our students could be using these tools already.” My professors have also voiced concerns about receiving work from students that they believe may have been written by AI. Concerns about AI’s misuse are real and need to be considered and addressed to build comfort and familiarity with AI in schools.

Does all of this mean that ChatGPT ultimately causes too many issues and should be avoided and prohibited by teachers? I believe the opposite is true, and so does the University of Central Florida, which now has a page dedicated to AI and how professors should respond to its sudden surge in usage. Using AI to “overcome writer’s block” and “treat[ing] it like a spellchecker” are just some of the ways they suggest it can be used in the classroom in a manner that benefits both teacher and student. Compared to tools already built for such purposes, such as applications that check the grammar of your paper, ChatGPT is closer to an actual person assisting you. If, for example, there were a grammatical error in your paper and ChatGPT suggested a correction, you could ask it why the sentence needs correcting, request an explanation of the grammatical rule with examples, and follow up with anything else you wanted to know. A simple spell checker could perhaps provide a general explanation or example to go with a correction, but it could not answer questions as clearly and humanly as ChatGPT can.

Furthermore, for teachers who fear students will use AI to do their work for them, students have always been able to do so to a certain extent. If they wanted, they could pay to have an essay written for them on any topic, or search for the answers to their school assignments online. AI does make this easier and more readily available; however, academic dishonesty has never been fully preventable, and enforcing more rules and regulations cannot change that. Another, more in-depth, idea that UCF provides for the usage of AI is an activity in which students improve their own writing by using it as a revision tool. They propose that students take their writing and learn how to use ChatGPT to revise it, fix grammatical errors, and perhaps even restructure certain parts of their paper (University of Central Florida). By “learning how to use ChatGPT,” they essentially mean learning how it accepts prompts, how prompts can be amended and refined, and how to communicate specifically and effectively what you want ChatGPT to do. The process may seem straightforward, but there are many tricks one can learn when using the AI bot, such as giving it instructions before the prompt so it formats its response in a certain manner, or telling it to assume a certain bias or position while answering; a rough sketch of this kind of prompt appears below. By learning these skills, students will be better prepared to integrate into a more “AI-rich future workplace” (University of Central Florida). Seeing how AI is predicted to revolutionize a multitude of jobs, from programmers and engineers to writers and authors, it is important to have the skills necessary to enter these fields.
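
To make that concrete, here is a rough sketch of the kind of prompt described above, with instructions placed ahead of the actual question, a requested output format, and a position the model is asked to take. This is only an illustration of how such a prompt might be assembled; in practice the text would simply be typed or pasted into ChatGPT, and the role, thesis, and question shown here are invented for the example.

    # A rough sketch of prompt assembly: instructions before the question,
    # a requested format, and a position the model is asked to take. The
    # assembled text would simply be pasted into ChatGPT; no API is assumed.

    instructions = (
        "You are a writing tutor. Respond as a skeptical reviewer, "  # assumed role/position
        "and format your answer as a bulleted list."                  # requested format
    )
    thesis = "Social media has only negative effects on teenagers."   # example thesis
    question = "What counter-arguments should I address in my essay?"

    prompt = f"{instructions}\n\nMy thesis: {thesis}\n\nQuestion: {question}"
    print(prompt)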

Before moving further into how AI can benefit educators and learners, I would like to provide an example of how I have used AI to my benefit as a Computer Science major studying at UMass Boston. In my class “Data Structures and Algorithms,” I have found several topics difficult to understand because of their complexity. Furthermore, my professor’s method of teaching is not always best suited to the way I learn, which is to be expected in a class of over fifty students at a time; it would be impossible to teach the way everyone would like. As a result, I took to ChatGPT and formed a method to teach myself advanced concepts:

  1. I begin by studying the presentations from class and the notes I take on them, trying to gain the best understanding I can of the topic.
  2. Then, I try to find videos that explain the topic, preferably ones with visuals. Whilst watching the videos, I take notes and work further toward understanding the topic at hand. In this area of computer science, you mainly learn the concepts from presentations and videos, but not usually the actual code that goes with them.
  3. After the video, any components that are still unclear to me are given to ChatGPT, where I ask, “Could you explain (x) to me?”
  4. If any parts of the explanation are unclear, I ask more about them, which can look like, “What is the purpose of (x) in this example?” or any other specific question. At times I will also ask it to add to its response, asking “Could you be more thorough?” or “Could you provide visuals to go with the explanation?”
  5. Then, I move on to working with the actual code. If I face issues, rather than searching the problem on Google and receiving a response that is often terse and unrelated to the context of my program, I ask ChatGPT, which explains the issue with the code and provides a suggested fix. It is at this step that I find ChatGPT the most useful. I can ask it about the most minute detail of the code, such as the big-picture significance of a specific variable or what a certain part of a function is supposed to accomplish, and I can ask it to present these answers in any format. As a visual learner, I ask it to show line by line what the code does; an example of the kind of annotated explanation I ask for follows this list.
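
For illustration, the short example below is my own recreation of the kind of line-by-line annotation I ask ChatGPT for, not an actual transcript; the code itself is a standard binary search, the sort of algorithm covered in a data structures course.

    # Hypothetical recreation of the kind of line-by-line explanation I ask for,
    # applied to a standard binary search over a sorted list.

    def binary_search(items, target):
        low, high = 0, len(items) - 1      # the search window starts as the whole list
        while low <= high:                 # keep going while the window is non-empty
            mid = (low + high) // 2        # index of the middle element of the window
            if items[mid] == target:
                return mid                 # found the target: return its position
            elif items[mid] < target:
                low = mid + 1              # the target must be in the upper half
            else:
                high = mid - 1             # the target must be in the lower half
        return -1                          # the window shrank to nothing: not present

    print(binary_search([1, 3, 5, 7, 9, 11], 7))  # prints 3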

I began employing this method during my second semester; in my first semester, I was forced to look through forums for the answers I needed. The majority of the time, those posts gave me little more than the answer itself and did not teach me much about the issue I was having. With this method, ChatGPT feels like a tutor that is readily available at all times, and I have learned much more this semester as a result.

Ultimately, I provide this personal example for two reasons. First, I believe that what I do with ChatGPT can be done by anyone studying anything, and I think students could find a lot of help from these tools in studying, in learning something difficult, or in accomplishing anything that would normally be handled by a Google search instead. In-depth, step-by-step explanations have not always been readily available for every topic, but now they are, and they can be written within whatever context they are needed. While programs exist to make tutoring more available, many students still do not have the privilege of receiving it, and I believe that AI could act as the next best thing going forward. Second, I would like educators to see a specific example of AI being employed in a beneficial manner, because it seems as though teachers are worried that students are doing nothing more than getting their essays written for them.

Another example of AI being utilized appropriately in schools is the current work being done to help ESL students with AI tools. Cruz mentions how ChatGPT “can understand hundreds of languages, including various dialects and mother tongues.” Alharbi states that across language classrooms, educators would prefer the use of “language-proofing” AI tools over simple translation (2). There is a desire for an alternative to tools like Google Translate, and AI’s multilingualism makes it the perfect candidate. Specific tools have taken advantage of AI’s extensive support for other languages, one example of which is AI KAKU (Gayed et al. 5). AI KAKU employs several functions that are meant to assist, but not do the work for, students learning English. For example, it can show the writer the English words they are typing translated into their native language, providing feedback that they are writing what they want to convey (Gayed et al. 5).
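
Gayed et al. do not publish AI KAKU’s code, so the following is only a rough sketch of the feedback idea they describe: echoing a learner’s English draft back in their native language. The translate_to_native function and its tiny glossary are hypothetical stand-ins for whatever translation service such a tool would actually rely on.

    # Rough sketch of the feedback idea described above, NOT AI KAKU's actual code.
    # translate_to_native() is a hypothetical stand-in for a real translation
    # service; here it only uses a tiny hard-coded English-to-Spanish glossary.

    def translate_to_native(english_sentence, glossary):
        words = english_sentence.lower().rstrip(".").split()
        return " ".join(glossary.get(w, f"[{w}?]") for w in words)

    glossary = {"the": "el", "dog": "perro", "runs": "corre", "fast": "rápido"}

    draft = "The dog runs fast."
    feedback = translate_to_native(draft, glossary)

    # The learner sees both versions side by side and can confirm that the
    # English sentence says what they intended in their own language.
    print("You wrote:        ", draft)
    print("In your language: ", feedback)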

The goal is to give ESL students greater confidence in writing in English, without requiring them to check with translators or incentivizing them to simply translate assignments altogether. John Maurice Gayed, a doctoral researcher at Waseda University, and his co-researchers are currently working on the project and performed a study assessing its results on ESL students using AI. Their study, as of 2022, had students use AI KAKU for various assignments and activities and report how it aided them in completing those assignments. Gayed et al. report that the study did not find a significant improvement in the quality of the completed work. However, they do note that “the reaction from the participants about using AI KAKU was largely positive, with 95% of the participants indicating affirmative responses on the 6-point Likert scale” (6). Earlier in the paper, Gayed et al. remind the reader that “writing in a second language (L2) involves considerable cognitive stress,” and anything that could simplify that process goes a long way toward making the lives of ESL learners easier (1). The research on AI and ESL students is still young, and few studies have collected data over a long enough period to say how such tools could impact ESL students over the course of an entire education. However, from what has been seen so far, it is safe to say that AI will continue to show positive results for ESL students, and that it has a place in schools in general.

To ensure that the negative effects of AI are not allowed to fester within school systems, it is important that schools adapt, use AI to their advantage, and reap the benefits it can provide students and teachers. Years ago, schools were hesitant to transition toward smart devices and computers, but now it is necessary for students to use them. Refusing to accept change will only lead to these tools continuing to be misused, and a great deal of potential will be lost, considering that AI is capable of reforming how students learn for the better.

Works Cited

Alharbi, Wael. “AI in the Foreign Language Classroom: A Pedagogical Overview of Automated Writing Assistance Tools.” Education Research International, vol. 2023, 2023, pp. 1-15.

Cruz, Jace Dela. “ChatGPT Timeline: Evolution and Rise of AI, Impact, Threat, and Opportunities.” Tech Times, 27 Feb. 2023.

Gayed, John Maurice, et al. “Exploring an AI-Based Writing Assistant’s Impact on English Language Learners.” Computers and Education: Artificial Intelligence, vol. 3, 2022.

Madrigal, Hector, et al. “Shadows of Evil – Call of Duty: Black Ops III Wiki Guide.” IGN, 31 Mar. 2016.

University of Central Florida Faculty Center. “Artificial Intelligence.” University of Central Florida, n.d.

“What is the step by step guide to complete the Easter Egg in Call of Duty: Black Ops III Zombies Map?” prompt. ChatGPT, April 2023, chat.openai.com/chat.