
eric bodwell

teacher / librarian / writer


The Other Problem With AI & High School Students: Thinking

Posted on June 25, 2025 (updated June 26, 2025) by Eric Bodwell

When ChatGPT was introduced a few years ago, it started a predictable cycle of hype and hysteria about the future of education as Large Language Models (LLMs) crept into our classrooms. This year, after the Trump administration issued an executive order on AI in schools, an opinion piece in the NY Times declared “A.I. Will Destroy Critical Thinking in K-12.” To be fair, the article was more subtle than its headline, but it is still an example of the latest moral panic to knock on the classroom door.

In my 25 years (so far) as a high school librarian, there has been handwringing about a series of ed tech issues: web searching, web 2.0, social media, ebooks, fake news, cell phones, remote learning, and now ChatGPT. This is not to say that these aren’t issues that need serious study, but techno panics, including the current one, are nothing new. From a practical perspective, I am not worried about whether AI will take my job…yet. I want to know if and how it can impede or enhance learning in my school.

I looked at a handful of recent studies on how the use of LLMs in an educational context affects critical and creative thinking. The citations for my sources are at the bottom of the post. I have summarized the research and some recommendations from the studies and linked to a few resources useful for high school librarians.

Disclaimer: No LLMs were used to write this article, so any dad jokes or style mistakes are all me.

What the Research Says about AI, Students, and “Metacognitive Laziness”

My sources below include single studies as well as systematic reviews and meta-analyses. Most of the studies focused on critical thinking and learning transfer, but a few examined the effects on creativity. Across the board, the research shows lower learning outcomes for students using ChatGPT. This isn’t an indictment of the educational value of ChatGPT or other LLM programs. When used effectively, they can provide immediate, tailored feedback, assess student work, and complement metacognition and problem-solving skills, and the research supports this idea. Like other technology tools, these programs also increased productivity and student engagement.

The alarm sounding in high schools has mostly been about AI-written papers. The fear and loathing, and in some cases starry-eyed wonder, about these tools mirror earlier debates about the internet, social media, and Wikipedia. Researchers also acknowledge that these programs can give answers that are biased, incorrect, outdated, sycophantic, or just made up. The programs can’t think or have a conversation with a user; they run on sophisticated math and impressive computer programming.

In the study “AI Tools in Society,” Gerlich cites earlier research on how search engine use can lead to shallow processing and a decline in retention, knowledge transfer, and independent problem-solving skills. Other studies of online information-seeking behavior from the last few decades have also found that users, especially children and teenagers, tend to use unsophisticated strategies, take uncritical stances, and put less effort into web-based research. These LLM studies have come to similar conclusions.

One study referred to the problem as “metacognitive laziness.” As we roll up our intellectual sleeves and figure out how to integrate these programs into the classroom, it is important to determine best practice. In other words, how do we best use these tools to teach our students? To evaluate the impact LLM programs have on learning, these researchers have measured students’ self-regulation, metacognition, and cognitive load.

Self-Regulated Learning & Cognitive Overload

For a student to integrate new knowledge or skills, they need to be able to self-regulate their learning, and the task needs to have the right amount of cognitive load for the learner.

Self-regulation of learning is a cyclical process in which the learner sets goals, chooses an appropriate learning strategy, monitors progress, engages in metacognition (aka thinking about your thinking), and reflects on their performance. To make it into long-term memory, knowledge has to pass through the brain’s working memory.

Working memory is a bit like a small workshop. To encode new material in long-term memory for later use, we have to take what we already know and integrate it with the new material. This can take a lot of mental carpentry to build something new. To complicate things, working memory can hold only a small amount of information, and only for short periods. The more mental effort and time we spend on the new learning, the better we integrate it. But if the new learning is too complex or too much at once, it can lead to cognitive overload, and learning doesn’t happen. If the process is too easy, or if the learner doesn’t put in the cognitive effort, they might get bored or disengaged or just won’t learn the material. The thing they build will be pretty shoddy and useless.

When the students in the studies outsourced this thinking process to the technology instead of doing the work themselves, the cognitive load was too low and the students didn’t do the active processing they needed to integrate the new learning into long-term memory. This was especially true when the focus of the task was on outcomes rather than process, and with younger study participants. In other words, without scaffolds or specific routines that encouraged or required critical thinking, the students tended to use cognitive shortcuts that sabotaged the learning process. This won’t be a shock to the average high school teacher. In my own work, I see students taking shortcuts throughout the research process as they skim across their inadequate sources like skiers on a summer lake.

The “Your Brain on ChatGPT” study from MIT researchers is currently making its way around online news sites and TikTok channels. Though the study has not yet gone through peer review or been published in an academic journal, its results echo other studies. The researchers measured students’ brain activity while they completed essay-writing tasks. The participants were divided into three groups:

  1. used an LLM program to assist them
  2. used a search engine
  3. used their own knowledge (aka the “brain only” group)

Each group wrote using its assigned method over three different sessions. The “brain only” group had the strongest performance in remembering and internalizing what they wrote. They also had the strongest sense of ownership of their work. The search engine group was second, and the LLM group was last.

A striking example of the performance deficit came in a task where participants were asked to quote their own essays. 83% of the LLM group had difficulty quoting their own work, with none providing totally accurate answers. By the third session, 6 out of 18 members of the group were still unable to provide accurate quotes. With only a few exceptions, the participants in the other two groups were able to quote their own essays accurately.

In a fourth session, the students were assigned to different groups and were told to come up with their own topics. They were also asked to use a different tool. The students who were originally in the “brain only” group and then used an LLM program ended up writing more finely tuned prompts. They still showed evidence of the best self-regulated learning and metacognition, and the group that originally used the LLM program still performed the worst.

Other studies of critical thinking involved writing tasks, math problems, or computer code; the creativity studies used LLM programs for creative brainstorming or problem solving. The results were like those of the MIT study. When the LLM program was used as a substitute for thinking, learning performance suffered. If students used the program to aid their thinking instead, a “complementary” use, performance was just as good as or better than without the program. When they asked for explanations of difficult topics, asked follow-up questions, and critically examined and worked with the results to push their own thinking, they put in more of the cognitive labor to learn the material and performed better in the experiments.

Despite the gloomy news about “metacognitive laziness” and the lack of learning, researchers do point educators to some possible solutions. A group in Amsterdam has developed a model called the Hybrid Human-AI Regulation framework, which includes an LLM program designed to help students develop their own self-regulated learning process as they use it, along with guidance and context provided by their teachers. Until ed tech software developers catch up, there are some principles and practices educators can keep in mind.

How Do We Teach With ChatGPT in the Classroom?

We can teach students to use LLM programs as a “cognitive partner” in ways that support self-regulation and metacognitive processing. At the high school level, this means designing assignments that teach the routines and bake guidance into the lessons themselves. In general, students will do better with these tools when they already have critical thinking and self-regulation skills in place, but these skills can be taught before or alongside LLM-enhanced projects.

Provide scaffolding with explicit prompts for self-regulated learning: goal setting, self-monitoring, and evaluation. Add reflection points or metacognitive scaffolds into assignment steps to create friction that emphasizes process, slows students down, and encourages active processing.

  • Include checklists or rubrics to promote self-evaluation.
  • Incorporate teaching models that explicitly showcase thinking skills and evidence of their use, such as Harvard’s Project Zero and its Making Thinking Visible strategies.
  • Add assignment components that ask students to evaluate their own performance and the strengths and weaknesses of the AI results.
  • Use prompts to model the thinking process for solving a problem or the steps in a process before students use an LLM. Then adapt these frameworks for use with an AI program so it assists rather than replaces their thinking process.

If you plan to use these tools on a regular basis, be very intentional about when they would be helpful. If you use a flipped classroom or blended learning model, consider the strategic places LLMs could be included: as pre-class, in-class, or post-class exercises.

One very specific recommendation from the research is that if you are going to teach students to use the program with guidance and context, it should be for a period of at least 4 weeks, but no longer than 8 weeks. If you use a shorter period of time, make sure to put scaffolds in place (e.g. instruction on writing effective prompts).

The best results came in courses or assignments with a heavy skills focus. These had well-defined objectives and procedural steps, as well as clear and structured requirements and scoring criteria. The LLM program could provide feedback, guidance, and support with problem solving in these assignments.

In language and writing classes, students could ask for feedback on grammar, targeted feedback on composition, or help understanding a word or concept in a text. When writing essays with an AI assist, have students critique the LLM’s output. They might look for logical gaps, incorrect facts, and weak arguments. Have them fact check or respond with better information.

In math class, for example, after learning about quadratic equations and working through several problems, students could turn to ChatGPT to compare their approach to solving one of the problems against alternative suggestions from the LLM program. Alternatively, they could ask the AI for real-world applications of the equation in their everyday life and then build a word problem out of it to create a pool of practice equations.

STEM class textbooks are filled with complex terms and concepts. LLM programs could help explain them and provide background, scenarios, and models to illuminate the topics they cover.

What is the school librarian’s role? We have long helped classroom teachers teach our students how to find, evaluate, and use information. Part of that legacy has meant helping them incorporate new technology into their classrooms, even if that message isn’t always heard or embraced by our school colleagues, or communicated by us.

In addition to providing instruction about the basics of AI and how to write effective LLM prompts for busy teachers and their classes, two final points from the research suggest a useful corner where we can contribute.

Students with background knowledge of a topic use LLM programs more effectively, but strong information and media literacy skills can help with gaps or just improve their performance overall. Frankly, the same is true of using web research, but that is a rant for another post.

Finally, problem-based learning projects where students have to wrestle with a complex real-world issue would be an excellent place to include an LLM program, according to researchers. A typical class project where students gather and report results, on the other hand, would not be as effective, but projects that emphasize process over product (e.g. I-search papers or design thinking) could work well.

Curating excellent examples of methods, models, and case studies of best practices for all kinds of assignments, including research projects, and using them to share and create teaching resources would be a great first step.

Sources

Ahmad, S. F., Han, H., Alam, M. M., Rehmat, Mohd. K., Irshad, M., Arraño-Muñoz, M., & Ariza-Montes, A. (2023). Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanities and Social Sciences Communications, 10(1), 1–14. https://doi.org/10.1057/s41599-023-01787-8

Fan, Y., Tang, L., Le, H., Shen, K., Tan, S., Zhao, Y., Shen, Y., Li, X., & Gašević, D. (2024). Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance. British Journal of Educational Technology, 56(2), 489–530. https://doi.org/10.1111/bjet.13544

Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. https://doi.org/10.3390/soc15010006

Habib, S., Vogel, T., Anli, X., & Thorne, E. (2024). How does generative artificial intelligence impact student creativity? Journal of Creativity, 34(1), 100072. https://doi.org/10.1016/j.yjoc.2023.100072

Hardman, P. (2025, January 24). The impact of Gen AI on human learning: A research summary. Dr Phil’s Newsletter. https://drphilippahardman.substack.com/p/the-impact-of-gen-ai-on-human-learning

Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025, June 10). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv. https://arxiv.org/abs/2506.08872

Lehmann, M., Cornelius, P. B., & Sting, F. J. (2024, August 29). AI meets the classroom: When do large language models harm learning? arXiv. https://arxiv.org/abs/2409.09047

Sardi, J., Darmansyah, Candra, O., Yuliana, D. F., Habibullah, Yanto, D. T. P., & Eliza, F. (2025). How generative AI influences students’ self-regulated learning and critical thinking skills? A systematic review. International Journal of Engineering Pedagogy (iJEP), 15(1), 94–108. https://doi.org/10.3991/ijep.v15i1.53379

Stadler, M., Bannert, M., & Sailer, M. (2024). Cognitive ease at a cost: LLMs reduce mental effort but compromise depth in student scientific inquiry. Computers in Human Behavior, 160, 108386. https://doi.org/10.1016/j.chb.2024.108386

Wang, J., & Fan, W. (2025). The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: Insights from a meta-analysis. Humanities and Social Sciences Communications, 12(1), 1–21. https://doi.org/10.1057/s41599-025-04787-y

Zhang, X., Zhang, P., Shen, Y., Liu, M., Wang, Q., Gašević, D., & Fan, Y. (2024). A systematic literature review of empirical research on applying generative artificial intelligence in education. Frontiers of Digital Education, 1(3), 223–245. https://doi.org/10.1007/s44366-024-0028-5

Zhao, G., Sheng, H., Wang, Y., Cai, X., & Long, T. (2025). Generative artificial intelligence amplifies the role of critical thinking skills and reduces reliance on prior knowledge while promoting in-depth learning. Education Sciences, 15(5), 554. https://doi.org/10.3390/educsci15050554

Resources to Read or Share With Colleagues

Articles & Podcast Episodes

Ditch That Textbook: Artificial Intelligence Posts

Educating AI Substack: Speedboats, Motorboats, and Tugboats: Navigating AI Integration in the English Classroom

Education Week: How AI Helps Our Students Deepen Their Writing (Yes, Really)

Education Week Special Reports: How AI Is Reshaping Teachers’ Jobs, Schools Are Using AI. But Are They on the Right Track?, The Transformative Potential of AI: 6 Big Questions for Schools, What Students Really Need to Learn About AI, Math & AI: Can They Work Together?

Edutopia: Authentic Writing in the Age of AI, How to Thoughtfully Use AI to Create Meaningful Lessons, Guiding Students to Use AI to Build Science Writing Skills, Enhancing World Language Instruction With AI Image Generators, Using ChatGPT to Support Student-Led Inquiry

eSchool News: 5 practical ways to integrate AI into high school science

Psychology Today: How AI Changes Student Thinking: The Hidden Cognitive Risks

Shifting Schools (Podcast): AI in CTE Classrooms: A New Era Is Here, The Future of AI in Trade Classrooms

Student Perspectives: NY Times Learning Network & 10 Minute Teacher Podcast (Instagram)

Your Undivided Attention (Podcast): Rethinking School in the Age of AI

Lesson Plans

The AI Education Project (aiEDU): Teach AI

Common Sense Media: AI Literacy Lessons for Grades 6–12

ISTE: AI Lessons

School Librarian Specific

Ctrl Alt Achieve: Beyond the Book – AI for Librarians

Email Newsletter to Stay Current: The AI School Librarian’s Newsletter

Tech Notes: What Would a Librarian Ask ChatGPT? Turns Out…A Lot.

