S5 E5 - AI: A Tool, Not a Teacher

Hi there and welcome to this episode of the Structured Literacy Podcast, brought to you from Padaway, Burnie, in gorgeous Tasmania. There is no doubt that the ever-growing capabilities of AI are changing how we live, learn and work. Every day I see a new GPT or tool claiming to make teachers' lives easier and improve student outcomes. In this episode, I'd like to share my current perspective on the role of AI in curriculum planning, some cautions for schools, and a couple of suggestions for ways that AI can help lighten our load.
No chatbot or AI platform will ever replace our knowledge of students or the impact that a skilled, capable teacher has on student outcomes. Teaching and learning is inherently about relationships and the connection that a teacher has to their content and students. Yes, there are evidence-based structures and instructional routines that help us do that, but it is we who evaluate their impact and respond appropriately. AI tools are only as good as the prompt that is entered into them. For an AI tool to produce a high-quality suggestion, the person writing the prompt has to know why they're doing what they're doing. They have to know the difference between poor, mediocre and high-quality output. The output from AI can only help us achieve high-quality student learning if the person using it is skilled in teaching and has sufficient expertise to craft high-quality instruction. There's a danger in throwing open the doors to AI tools and telling teachers to "go for it". That danger rests in a cognitive bias that has been identified in relation to internet use. In season four, episode six, I spoke about this bias, and we'll revisit it now.
Fisher, Goddu and Keil (2015) conducted research into the impact of the internet on people's estimations of their own knowledge. Across nine separate experiments, they concluded, "The results of these experiments suggest that searching the internet may cause a systematic failure to recognise the extent to which we rely on outsourced knowledge. Searching for explanations on the internet inflates self-assessed knowledge in unrelated domains." Further research suggests similar illusions occur when users search for fact-based information online. After using Google to retrieve answers to questions, people seem to believe that they came up with these answers on their own. They show increased cognitive self-esteem, a measure of confidence in one's own ability to think about and remember information, and predict higher performance on a subsequent trivia quiz to be taken without access to the internet. In other words, when we look something up online, we perceive that we're becoming smarter. The same type of research hasn't yet been done on AI, but there are certainly suggestions from scientists that this technology might not be such a benign tool. Messeri and Crockett (2024) wrote, "Recently proposed AI solutions can also exploit our cognitive limitations, making us vulnerable to illusions of understanding in which we believe we understand more about the world than we actually do."
At this point in time, the number of teachers who deeply understand cognitive load theory, information processing theory and the nuance of explicit teaching in a range of contexts is slowly growing, but I think it would be a mistake to assume that the majority of teachers share this knowledge. I would also ask you to exercise caution about assuming that the creators of AI tools and custom GPTs have the required knowledge of cognitive science and explicit teaching needed to make those tools do great things for you. I try out tools when I see them so that I can stay on top of the reality of the working lives of teachers. I have yet to find one that I've used and thought, "Well, yes, this person knew what they were doing." There are some tools that allow you to choose the pedagogy you want to be used in lessons. You can choose between constructivist approaches, discovery approaches, project-based learning, play-based learning and, yes, explicit teaching is in there as an option, as if all of these options were equally grounded in evidence (just in case you're wondering, they're not). Even when I select explicit teaching, set the grade and enter specific details about the content and the needs of students, the output falls short of good. Invariably, the planning provided reflects inquiry approaches, with big questions, collaborative sections of lessons in which year one students are expected to work in a small group and complete a detailed graphic organiser, and other unrealistic lesson elements.
I've been pretty scathing in my assessment of current AI tools, so you might be wondering whether I think they have a place in teacher planning at all. The answer is that I do, but (and this is a huge caveat) at this point in time, in February 2025, I think that the place of AI tools is as an assistant to our efforts, not a replacement for them. How do we know the difference? Well, if we are outsourcing our thinking about our instruction, we've crossed the line. Now, I'm well aware of the irony here, given that I write programs and produce resources, and you might well wonder whether using a program isn't also outsourcing thinking. In a way, that argument may be valid.
From my perspective, though, I don't want people outsourcing their thinking to us. I want people to have deep knowledge of students, research and pedagogy so that they know why they're doing what they're doing, and then use our tools to help them save time and reduce cognitive load. When that happens, we see the very best outcomes for students. So my big takeaway for leaders setting boundaries around teachers' use of AI in planning is this: we use it to help lighten our administrative load and contextualise learning so our students get the best outcomes, not to do the thinking for us. If we approach a session with an AI tool with the thought "I'm not sure how to teach x, y, z" in our minds, then we need to stop.
Here are some examples of how we might use these AI tools to support us rather than do our thinking for us.
Here's one example. Choose a text that aligns with your HASS or science units to use in partner reading in the classroom. This is a great thing to do. Then upload the text into ChatGPT or Claude (Claude is my favourite these days; the writing is much more natural) and give it the following prompt:
I am planning lessons for my year (we'll say) four class to practise writing complex sentences with the subordinating conjunction 'when'. Use the attached passage as the basis of this practice and provide me with 20 sets of independent clauses that can be used for sentence combining.
I say ask for 20 because they won't all be good and you'll probably have to discard about half of them. Once you have the pairs of independent clauses, ask it to provide 20 simple sentences that can be used for sentence expansion. In this way, you're able to connect your syntax instruction with the reading students are doing in partner reading and the other curriculum areas, creating a really nice connection across the elements of the literacy block. You've also just saved yourself about an hour and a half of prep time, but you haven't outsourced your thinking. You could have sat there and manually found those independent clauses; you've simply taken a shortcut.
AI tools can also be used to contextualise materials for students. A teacher in a school I was working with was introducing paired reading to her year five/six class, and she knew the students were not excited by the idea because they hadn't come up through the grades with it in place. So she used an AI tool to help her write passages that included a couple of the students each time. This really hooked them; they were excited to see who was going to be in the next passage. Now, she was specific about the details in the passage: what the events were, how long the passage would be and all of the other things. But the AI tool helped lighten her cognitive load and save her time. Again, she didn't outsource her thinking, just the time-consuming admin.
There is no doubt that the advances in AI technology over the last couple of years have impacted our working lives. As with all technological advances, there are positives and negatives. The key to navigating these changes is to ask ourselves, "Who is doing the driving? Are we making the tools work for us, or are we working for the tools?" Let's not take our eye off the need to build knowledge, capacity and experience within our teams so that we can truly harness the power of AI to help us help our students achieve great things, so that they can leave the world a better place. That's all from me for now. Until next week, happy teaching. Bye.
References
Fisher, M., Goddu, M. K., & Keil, F. C. (2015). Searching for explanations: How the Internet inflates estimates of internal knowledge. Journal of Experimental Psychology: General, 144(3), 674–687. https://doi.org/10.1037/xge0000070
Messeri, L., & Crockett, M. J. (2024). Artificial intelligence and illusions of understanding in scientific research. Nature, 627, 49–58. https://doi.org/10.1038/s41586-024-07146-0
Looking for teaching resources to help take your Year 3-6 students to the next level of spelling, vocabulary and writing? Choose Spelling Success in Action and find full guidance for your teachers without taking them out of the equation. Find your sample unit and further information here.