S4 Ep6 - Five Reasons that Traditional PL May Not Lead to Student Outcomes

Hi there, welcome to the Structured Literacy Podcast. I'm Jocelyn and I'm so very pleased to welcome you to this episode recorded here on the lands of the Palawa people of Tasmania.

My goodness, the year is flying by. I'm recording this in the last week of September and Christmas feels like it's just around the corner.

Over the years, I have supported many schools with consultancy and coaching to help them achieve their big picture goals. This human-centred work is my favourite thing to do, as I get to spend time with people, build relationships and provide the at-point-of-need support that I know makes the biggest difference to a school. Besides helping people with simple teaching resources to use in the classroom, one of the most impactful things that I do with teams is to help them build knowledge. Not just deliver information sessions about cognitive load theory and literacy frameworks, but help teachers make deep connections between the findings of the learning and reading sciences and the work that they're doing in their classrooms.

If you've been a listener for a while or participated in any of my professional learning workshops, you will have heard me speak about the three types of knowledge we develop over time: declarative, procedural and conditional knowledge.

When we attend training for a program and get good at following the steps of that program, we're developing procedural knowledge of teaching. We learn the steps through this training. Hopefully, we'll be able to talk about the elements of the program, and maybe touch on some surface-level understanding of why we're doing them.

In a recent podcast and blog post, Timothy Shanahan talked about the difference between students following the steps of a task just to get it done and students truly connecting with the purpose of that task. Without the connection to purpose, a student might do all the steps but not actually learn anything, because they take shortcuts, copy and generally don't develop strong knowledge and understanding. As I listened to that episode, it occurred to me that working with teachers is a bit the same. We can train people in the tools we want them to use, but how often are they really connecting with the purpose of that work?

The other thing that I've been thinking about over the past week is what it really means to know something, because these two things are closely related. If we're going to have the conditional knowledge to know what to do and when, and to really connect with the purpose of something, we first have to have declarative knowledge: we have to know things about it, its frameworks, its theory and its background. It's not enough to know just how to do stuff.

So to frame this discussion, I'd like to share some information with you from research. We'll start with a couple of areas of cognitive bias.

A cognitive bias occurs when something goes askew in our view of the world or of ourselves, in this case, in our view of how much knowledge and skill we have. So we'll start with the Dunning-Kruger effect. The Dunning-Kruger effect was first described in 1999 as a cognitive bias in which people with limited competence in a particular domain overestimate their abilities. This is sometimes misunderstood to be about intelligence. It's not. I'll read a little now from the original paper.

"We argue that when people are incompetent in strategies they adopt to achieve success and satisfaction, they suffer a dual burden. Not only do they reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the ability to realize it. Instead, they are left with the mistaken impression that they're doing just fine."

What this means is people don't have enough knowledge to understand that what they're doing isn't actually hitting the mark. Their perception is that they're doing okay.

One of the experiments conducted as part of this research was about grammar, and here's what the researchers found: participants scoring in the bottom quartile grossly overestimated their ability relative to their peers. These bottom-quartile participants scored at the 10th percentile on average, but estimated their grammar ability and performance on the test to be in the 67th and 61st percentiles respectively.

When it comes to teachers, this tendency to overestimate one's own knowledge of grammar was also discussed in AERO's report on writing and writing instruction. That report cited a study, and the author of the report wrote: "While a high degree of confidence in teaching grammar was reported, along with grammar being rated of high importance, responses revealed that the participants experienced many challenges with both knowledge and practice in the grammar domain."

So people overestimated their knowledge. They knew grammar was important, but they didn't actually have the knowledge they needed to make an impactful difference in the classroom.

We can see, then, that this pattern is common both in humans in general and in teachers specifically.

Very closely related to the misconception of how much knowledge you have on a topic is the illusion of explanatory depth. This term was coined by Yale researchers Leonid Rozenblit and Frank Keil in 2002. Their study showed that this bias is particularly relevant to causal reasoning, or explanations about why things happen, something that would seem very important for our work in classrooms.

In 2016, Fisher and Keil found that for passive expertise, that is, topics that are simply familiar, miscalibration is moderated by education. You might be a little off in your thinking, but as you learn more, your understanding improves and your self-assessment becomes more closely aligned with reality.

Those with more education are more accurate in their self-assessments, but when they consider topics related to their area of concentrated study (the researchers are American, so think college major), they also display an illusion of understanding. This curse of expertise is explained by a failure to recognise how much detailed information has been forgotten.

"While expertise can sometimes lead to accurate self-knowledge, it can also create illusions of competence." That line is from the researchers themselves. In other words, the participants in the study were asked about knowledge they had supposedly learned a long time ago, earlier in their degrees, and they assumed that because they had done it, they still knew it. I learned a 'thing', so I assume that I still know it.

In relation to students, I often talk about this as the fluency fallacy. Students look fluent in the moment, so we stop focusing on the skill, and then we're highly surprised later when they don't know what we think they know.

Guess what: grown-ups suffer from this same issue!

Fisher and Keil also conducted research about the impact of the internet on people's estimations of their own knowledge. Across nine separate experiments, they concluded that searching the internet may cause a systematic failure to recognise the extent to which we rely on outsourced knowledge, and that searching for explanations online inflates self-assessed knowledge in unrelated domains. Further, recent evidence suggests similar illusions occur when users search for fact-based information online. After using Google to retrieve answers to questions, people seem to believe they came up with those answers on their own. They show an increase in cognitive self-esteem, a measure of confidence in one's own ability to think about and remember information, and predict higher performance on a subsequent trivia quiz to be taken without access to the internet. In other words, when we look something up online, we perceive that we're becoming smarter.

Now, there hasn't been this same type of research done about AI yet, but there certainly are suggestions from scientists that this technology might not be such a benign tool. Messeri and Crockett said recently, "Proposed AI solutions can also exploit our cognitive limitations, making us vulnerable to illusions of understanding in which we believe we understand more about the world than we actually do." I'm going to share more about this, along with some cautions for education in using AI, in a future episode. I don't just mean cautions around students using AI; I mean teachers using AI to develop programs, resources and lessons.

The final reason that I'm going to share about why our teams might not know as much as we think they do is related to what happens when we watch someone performing a task.

Let's look at a paper by Kardas and O'Brien from 2018. They wrote, "Although people may have good intentions when trying to learn by watching others, we explored unforeseen consequences of doing so. When people repeatedly watch others perform before ever attempting the skill themselves, they may overestimate the degree to which they can perform the skill, which is what we call an illusion of skill acquisition."

So we've had the illusion of knowledge, and now we've got the illusion of skill acquisition. Kardas and O'Brien conducted six experiments with good sample sizes to explore this concept; it wasn't just three people in a room, they had significant numbers of participants. They consistently found that watching without doing resulted in observers assuming they could perform the skill much better than they actually could. This has significant implications for the way we work in teams in our schools and the place of observations in professional learning.

Don't panic, I'm not about to say that observations have no place, but maybe we need to think a little more deeply about how we use them.

The tasks people were watching in the Kardas and O'Brien study were fairly repetitive, such as throwing darts or doing the moonwalk.

But what about more complex undertakings? Are people still biased when observing someone complete a complex task?

Kayla Jordan and colleagues asked the question:

To what extent does watching a video of an expert performing a skill inflate people's confidence in their own ability to perform that skill?

They divided their participants into two groups. One group watched a video of pilots landing a plane and the other group did not. The participants who watched the video were shown a view from behind the pilots; they could see that the pilots were handling controls, but could not actually see what they were doing. The group who did not watch the video were asked to imagine the scenario in their heads. When the two groups were asked how confident they were that they could land the plane, the people who had watched the video, despite it having zero instructional benefit, reported feeling more confident than the people who had not.

Now, I think we can all agree that confidence does not equal competence. In fact, it seems that the opposite may be true.

So what has all of this got to do with teaching? Well, one of the things that I consistently see happening in schools is that professional learning is conducted and the knowledge is never returned to. Teachers observe each other with little follow-up or process to make the observation meaningful. Leaders rely on teachers' self-reports about how confident they feel using particular pedagogies and approaches to decide on future professional learning efforts. Teachers rely on their feelings about teaching to evaluate its impact.

And this is seen when I ask:

How did that go?

How successful was the lesson?

And they say good.

And then I'll say, tell me more. What made it successful?

Why do you think it was successful?

And they'll say, well, it felt good and the kids liked it.

Nothing about learning, nothing about teaching, nothing about relating their practice back to core theories and frameworks.

It seems to me that, while we recognise the need for explicit, evidence-informed practice for our students, our profession isn't supporting our teachers and leaders in the same way. This is leading to frustration, stress, anxiety and overwhelm.

I was speaking with an instructional leader this week who said that she's been trying to get her team to understand explicit teaching for four years. They've done all the usual things: outside PL, experts coming into the school, readings and sessions in staff meetings, observations, provision of resources and everything else that usually happens. But still, four years on, she feels that teacher knowledge isn't terrific and she isn't seeing what has been gained transfer into the classroom. It's the old thing of 'we said it, but did they learn it?' Now does that sound familiar to you? If so, don't despair; you aren't alone, and you do have options.

Now this episode is already long enough, so I'm going to bring you some suggestions for how we can address this lack of knowledge and transfer in the next episode of the podcast. Between now and then, though, have a think about how the cognitive biases and misconceptions that I've described today may be showing up for your staff and in your school. Then tune in next week for some practical suggestions for how to address these issues. You can find the link to all of the papers and articles mentioned in today's episode in the show notes.

I hope that you have a wonderful week ahead. If you're on school holidays right now, have a fantastic time and make sure you get some rest. Until I see you next time. Happy teaching or resting, bye.

 

References:

Australian Education Research Organisation (AERO). (2022). Writing instruction literature review. https://www.edresearch.edu.au/sites/default/files/2022-02/writing-instruction-literature-review.pdf

Fisher, M., Goddu, M. K., & Keil, F. C. (2015). Searching for explanations: How the Internet inflates estimates of internal knowledge. Journal of Experimental Psychology: General, 144(3), 674–687. https://doi.org/10.1037/xge0000070

Fisher, M., & Keil, F. C. (2016). The curse of expertise: When more knowledge leads to miscalibrated explanatory insight. Cognitive Science, 40(5), 1251–1269.

Jordan, K., Zajac, R., Bernstein, D., Joshi, C., & Garry, M. (2022). Trivially informative semantic context inflates people's confidence they can perform a highly complex skill. Royal Society Open Science, 9, 211977. https://doi.org/10.1098/rsos.211977

Kardas, M., & O’Brien, E. (2018). Easier Seen Than Done: Merely Watching Others Perform Can Foster an Illusion of Skill Acquisition. Psychological Science, 29(4), 521–536. https://www.jstor.org/stable/26957404 

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121–1134. https://doi.org/10.1037/0022-3514.77.6.1121

Messeri, L., & Crockett, M. J. (2024). Artificial intelligence and illusions of understanding in scientific research. Nature, 627, 49–58. https://doi.org/10.1038/s41586-024-07146-0

 

Looking for a way to give your teachers full guidance while helping them build their skills? Download a brochure and sample unit here
