Right now, it feels as if we are at the peak of the tech hype cycle for large language models (e.g. ChatGPT). As educators, we are usually slow to adopt new technologies that promise to “revolutionize” the classroom. Early adopting educators have been evangelized about the World Wide Web, Web 2.0, ebooks, and other new(ish) technologies that promised to change the nature of teaching and learning.
Some of the best educators I have known take a breath and ask questions like, “How can I use this new technology to motivate my students or to aid their critical thinking?” They focus on good teaching first.
I’m reading The AI Con by Emily M. Bender and Alex Hanna, and I hope to write about it sometime soon, but for now, it has me a bit cynical about A.I. Scratch that. My cynicism about educational technology miracle “solutions” is nothing new, but the book has given it a focus for now.
LLMs are one of those technologies that sets off a warning system in a librarian’s brain. So, this is a new tool that will make research easier, frictionless? It will give you answers without much effort? Our students take the bait. But the research process and writing are a critical part of building new knowledge when taking apart and solving an information problem.
Thankfully, the conversation has started to shift to AI as a cognitive partner, an aid to student thinking, but I am still cautious. Our tendency as a species is to use heuristics, or cognitive shortcuts, to solve complex problems. This can lead to errors and biases in our thinking. Technologies like AI can make taking these shortcuts way too tempting.
Inquiry learning, problem-based learning, and other pedagogies that require complex thinking are places where LLMs could be a great add-on. But using LLMs in place of a search engine or other parts of the research process is still fraught with problems.
I am always looking for best practice examples that help light the way for how to use these tools effectively. Here are a few tools that have come across my feed that seem to be going in the right direction.
SIFT Toolbox
Mike Caulfield, co-author of the excellent book Verified, has been working on a custom LLM prompt for fact checking that gives useful information and minimizes hallucinations. He has several posts about it in his Substack newsletter The End of Argument, and an explanation on its own site. If you have a paid Claude AI account, you can try the full version; if not, he created a scaled-down version for ChatGPT o3. Log into your free ChatGPT account and click the direct link Caulfield provides in his newsletter.
Learn About
Google always has some interesting experiments on its Google Labs site. They currently have a collection of AI tools. The most interesting one to me is Learn About. Want to learn about a topic? It provides a structured way to explore it. Try one of the sample topics at first to get an idea of how it works. ZDnet has a good overview.
Consensus
Consensus is an academic search engine that searches peer-reviewed literature in response to a question and then gives you useful context. Try one of the example questions by clicking the Ask a Research Question button.
The results summarize the scientific consensus with a score on the “consensus meter.” Each result includes information about the study itself (e.g. whether it was a randomized controlled trial). There is also a line extracted from the study that sums up its findings, and you can click through for more information or a link to the study itself. Like all commercial tools, you can only do so much with a free account, but still, it’s very cool.
Note: It works best if they have a response in their system. If your question isn’t in their database, or if they don’t have enough studies on it, you will still get results, but not the consensus meter.