S4 Ep14 - Guidance for Evaluating Impact
Hello, hello, it's Jocelyn here with the final episode of the Structured Literacy Podcast for 2024, recorded right here on the lands of the Palawa people of Tasmania. I'm so pleased that you've been able to find time to join me amongst what I know is a hugely busy end of year schedule. Oh my, what a year it's been.
So much has happened and I know that many schools have felt the effects of the whirlwind of 2024.
Even as you celebrate positive changes that your system might be making, you are being pulled in all sorts of directions. Saying, Boy, we are tired and maybe a little overwhelmed, does not make you uncommitted to the cause. It just means that you are human.
I'm finishing this year of podcasts off by inviting you to reflect on the year that has been and what progress you have made in moving towards your goals.
Have you smashed them?
Have you achieved some of them?
Have you forgotten about them and are now realising that you've spent a whole lot of time and energy on doing great things that aren't actually aligned to the goals you set out at the start of the year?
If any or all of these apply to you in different ways, well, that just makes you like the rest of us. Welcome.
In this episode, I'll be walking you through some different lenses you can look through as you evaluate the year of literacy that has been.
When we return next year, we'll be diving into setting goals for the year to come.
But for now, let's just sit with 2024 and reflect. Let's get started by considering the metrics that you can use when evaluating practice for the year that has just gone. When thinking about data, it's important to consider both achievement and growth because they might be the same or they might not be.
Achievement is about grades or marks you receive based on the quality of your work.
Growth, on the other hand, is about how much you've learned in a set time, in this case, a school year. This growth can also be called relative growth, comparing where you were to where you are now.
While not every student will attain high achievement in a school year, the only acceptable situation for any school is that every student in our care has grown and learned in real terms.
So when we consider our students, yes, we are going to look at A to E grading, NAPLAN results and other measures such as normed assessments. But, as an equally important measure of our impact, we need to consider growth. Which of your students have not made the minimum growth you'd expect for 12 months at school?
And don't just focus on the students who struggle with academics. Have a look at the students at the other end too. If you have students who operate at the higher end of achievement, but they have not demonstrated 12 months growth for 12 months at school, then you have students who are cruising.
Measuring Growth
It's all very well to talk about 12 months growth for 12 months at school, but we need to know how we are measuring growth and what we are measuring.
Oh, it would be brilliant to have an assessment or simple way to measure learning across every aspect of the curriculum, but we just don't. So many things are criterion-referenced; that is, we grade or assess against a set of criteria. Now, that's not bad. It's just what it is. For most things across the school, that means we grade exclusively against the Curriculum Achievement Standard.
However, when it comes to some aspects of literacy, we do have ways to measure growth that are really reliable. So, let's start with phonics.
The simplest way to measure growth in phonics is to track how many phoneme-grapheme correspondences a student has learned. You can use the monitoring assessment that comes with your school's phonics program, or simply write graphemes on a piece of paper and ask students to tell you the phoneme that goes with each one.
We also need to track students' ability to use these correspondences. To do this, give students a mix of real words and nonsense words and ask them to read them. This is the core element of the monitoring assessment for any systematic synthetic phonics program. Ours included.
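For readers of the transcript who keep their check-in results electronically, here is a minimal sketch in Python of how you might tally what has stuck. The student names, graphemes and data structure are invented for illustration; they aren't from any particular program's monitoring assessment.

```python
# A minimal sketch (not from any program): tally how many of the assessed
# phoneme-grapheme correspondences (PGCs) each student gave correctly.

graphemes_assessed = ["s", "a", "t", "p", "i", "n"]

# For each student, the graphemes they identified correctly in the check-in.
check_in_results = {
    "Student A": {"s", "a", "t", "p", "i", "n"},
    "Student B": {"s", "a", "t"},
}

for student, known in check_in_results.items():
    score = len(known & set(graphemes_assessed))
    print(f"{student}: {score}/{len(graphemes_assessed)} PGCs secure")
```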
So that's how to find out what students know, but what about measuring growth?
Different programs and systems will have different expectations for phonics learning at different times. I'm going to share how I do it, but I'm not saying that it's the definitive way to think about it. You'll take away from this discussion what works for you.
What I did when considering this issue was to take the 99 graphemes that we've included in our program Reading Success and spread them across the first three grades of school. I also considered the Australian curriculum expectations here. I reasoned that students could be expected to learn 10 to 12 graphemes in the first term of Foundation. Not that these will be the only ones taught, but there's a difference between how much we teach and how much learning sticks. So when I'm talking about how many graphemes we're learning a term, I'm talking about how many have stuck and the student knows them.
So I came up with 10 to 12 to allow students to settle into school and find their feet with the most common parts of the basic code. Of course, phonemic awareness is a part of this picture, but we're talking about phonics purely right now.
Then, after that, I reasoned that they would learn a minimum of 10 more graphemes in each of Terms 2 and 3, and maybe 5 or 6 new graphemes in Term 4. Let's go with 5. I've suggested five in Term 4 because there are Christmas concerts and swimming lessons, and there isn't much point teaching new phoneme-grapheme correspondences in the last couple of weeks of the school year. Students won't necessarily have time to consolidate them, and you'd just have to re-teach them at the start of the next year.
So in Foundation, an "at least" point for phonics learning is around 35-37 graphemes, which is the basic code and a few vowel digraphs. On that basis, you can look at your data, regardless of the program you use, and answer the question: which students have made optimal growth? You can do the same for Year 1: allow a couple of weeks for review at the start of the year, then dive into new content.
That gives you 8 graphemes in Term 1, 10 in each of Terms 2 and 3, and 5 in Term 4: an additional 33 graphemes in Year 1, or around 68-70 in total. That all adds up to the base level of knowledge students need to acquire the alphabetic principle, and it covers a really good whack of the most common graphemes in use.
You can take these expectations and ask which students have not made optimal growth and evaluate why.
Then we repeat the whole process for Year 2.
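If you want to turn these expectations into a quick data check, here is a rough sketch that encodes the minimum per-term numbers discussed above and flags students sitting below the cumulative line. The student data is invented, and taking the bottom of each range is my assumption.

```python
# A sketch of the "at least" expectations above, taking the bottom of each
# range: Foundation 10 + 10 + 10 + 5 = 35, Year 1 8 + 10 + 10 + 5 = 33.
# Student data is invented for illustration.

NEW_GRAPHEMES_PER_TERM = [
    ("Foundation", 1, 10), ("Foundation", 2, 10),
    ("Foundation", 3, 10), ("Foundation", 4, 5),
    ("Year 1", 1, 8), ("Year 1", 2, 10),
    ("Year 1", 3, 10), ("Year 1", 4, 5),
]

def expected_by(year: str, term: int) -> int:
    """Cumulative minimum graphemes known by the end of a given term."""
    total = 0
    for y, t, n in NEW_GRAPHEMES_PER_TERM:
        total += n
        if (y, t) == (year, term):
            return total
    raise ValueError(f"unknown year/term: {year}, {term}")

# End-of-term check: who has not made optimal growth?
known_counts = {"Student A": 22, "Student B": 14}
target = expected_by("Foundation", 2)  # 20 graphemes by end of Term 2
for student, count in known_counts.items():
    if count < target:
        print(f"{student} knows {count} graphemes; minimum expected is {target}")
```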
Now, this does not have to be complicated. I would also suggest doing this not just at the end of each year, but every term. There is little point getting to the end of the year and only then noticing that some students have not made growth; by that point, you have missed a massive opportunity to intervene and change things up so that those students are learning.
Better yet, giving students a weekly check-in based on the graphemes you taught the previous week will help you take action in real time. I'm not saying sit every child down for 15 minutes and assess them. I'm saying let's take what we learned last week and do a really quick check-in about who's got this and who doesn't.
Now, you could try to do this on the mat by saying, Write down this grapheme, whatever it happens to be. But kids are really good at copying their peers. So if you have a classroom assistant or teaching assistant, have them take the cards to each student and say, Tell me what these are, using the graphemes you taught last week. They keep a simple tally and away you go.
You will then know whether you need to reteach that content to everyone or just provide an additional dose to a couple of children. This real-time adjustment of instruction is critical. Really, waiting until 10 weeks have passed to notice that a student hasn't been learning is not going to get us where we want to go.
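As a sketch of that decision, the snippet below takes a weekly tally of who missed each grapheme and suggests either a whole-class reteach or a Tier 2 additional dose. The 80 per cent cut-off is purely an illustrative assumption, not a figure from the episode, and the names are invented.

```python
# A sketch of the weekly follow-up decision. The 80% cut-off is an
# illustrative assumption: if at least 80% of the class has a grapheme,
# give an additional dose to the few who missed it; otherwise reteach
# it to everyone in Tier 1.

def plan_follow_up(missed_by: dict, class_size: int, cut_off: float = 0.8):
    """missed_by maps each grapheme taught last week to the students who
    could not identify it in the quick check-in."""
    for grapheme, missed in missed_by.items():
        if (class_size - len(missed)) / class_size >= cut_off:
            print(f"'{grapheme}': Tier 2 additional dose for {', '.join(missed)}")
        else:
            print(f"'{grapheme}': reteach to the whole class in Tier 1")

plan_follow_up(
    {
        "ai": ["Student C"],
        "oa": ["Student C", "Student D", "Student E",
               "Student F", "Student G", "Student H"],
    },
    class_size=24,
)
```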
When we evaluate impact, we can then say: yes, look at all of these students who have achieved what we needed them to. If you do this, and you base instruction on what you see in your data rather than on what a program tells you to teach in a particular week of a particular term, you will see far stronger growth and achievement for students across the board.
Closely monitoring what is and isn't sticking for students means that you can do two things. Firstly, you can evaluate practice right now to determine what tweaks and changes need to be made to have more content stick in general Tier 1 instruction. Tier 1 has to be the hero. Secondly, you can intervene straight away when learning isn't sticking and provide Tier 2 additional doses of instruction for the students who need it.
You can use this measure of an average of 10 new graphemes per term in Upper Primary too. While we don't want phonics to be a major focus there, the reality is that there are a bunch of students who haven't had the chance for strong learning in this area in the early years, and they just don't have conscious knowledge of the code.
So whatever the age of the students, if phonics is being taught because of a lack of knowledge on the part of the student, we have to have a way to measure impact. We're not doing things just because.
You Need to Hear This...
Before we get into other areas of literacy in this discussion, I want to say a couple of things that may be difficult and challenging to hear.
If your school has had a systematic and explicit phonics approach for the last five or six years, and you're getting students in Years 3-6 who've been with you that whole time and still need to focus on learning code knowledge, it's time to take a serious look at what's been happening. As Mark Seidenberg said in a recent article,
One thing that will surely encourage a return to the past is if the "science of reading" yields poor results. Rhetoric and politics aside, the approach has to work.
Seidenberg raises a good point about what it's going to take to hold on to the gains that we've made in instruction in the past few years.
The other more important reason to keep an eye on your data and make sure that you take action when needed is that it is our moral, ethical and professional obligation to make sure that students are learning. If we're going to go to all the effort of implementing programs, implementing an assessment schedule, spending thousands of dollars on training and resources, we have to make sure that it works.
If students aren't learning in the first place, or if learning isn't sticking into the longer term, we need to acknowledge that and take action. It doesn't mean we've failed if we put our hand up and say, Hey, guess what? I think there's some changes we need to make here. And I'm not talking about just go get another program.
Please don't program-hop because you're not seeing the results you want. I'm talking about saying, Let's critically evaluate what we're doing and check in to see where we can improve, tweak and refine our practice. If you've discovered that your Tier 1 instruction isn't getting you the runs on the board that it really should be, you might like to have a listen to Season 2, Episode 8, Why Isn't My Tier 1 Instruction Working?
The second potentially challenging thing is that idea of relative growth, as in how far students have come in the last 12 months versus the desired achievement. We have to be really careful that we don't misuse this concept.
I still tutor students with reading difficulty and I talk with parents from various places around the country. One of the most frustrating things for a parent is to be patted on the head and told that all is well with their child when they know that it just isn't. When we as teachers and leaders meet with the parents of a student with a reading difficulty, we often want to reassure them and relative growth can be a way of doing that.
So the conversation might go like this: Yes, Sam is three years behind their peers, or in the high-risk category on a normed tool, but look at the growth they've made; things are going really well. Now, we feel good about sharing and celebrating this data because we can actually see what the student has learned, but the parent is probably thinking, What do you mean things are going really well? They are still three years behind their peers. And that makes the parent feel that we aren't acknowledging the issues or taking responsibility for our role in getting their child to an appropriate level of reading skill. It breaks trust, and they don't know whether we're being upfront when we speak with them.
So instead, the conversation could be,
The data is showing us that Sam has made great progress over the past year. We've seen 12 months growth for 12 months at school. Now that's more growth than we've seen for Sam in the past. How we've gotten there is by doing this and this and this. However, our job here isn't done. He's still scoring in the high risk category on this particular assessment. So here's our plan for ensuring that Sam's growth continues, and we're even going to try and accelerate it to get greater progress for him.
This is a much more honest and reassuring conversation for the parent, and it keeps us in motion to continually improve how we work with this student.
It's just something to think about. And of course, these conversations will be framed around the learning profile and needs of the individual student. But we do need to be very careful that we're not using growth as a way to deflect attention from the challenges of achievement that exist.
Measuring Growth Part 2
Let's turn attention now to two other areas that we can measure reasonably objectively in literacy.
The first of these is fluency. There are three components of fluency: accuracy, rate, and prosody (prosody being phrasing and expression). We can monitor accuracy and rate through fluency norms and various normed reading assessments such as DIBELS or Acadience, or even the Neale Analysis of Reading Ability that you may have in your school.
If students maintain their percentile in words correct per minute, we can say that they've made a year's growth for a year at school.
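As a sketch of that percentile check, the snippet below compares each student's words-correct-per-minute percentile across two years and flags any drop. The numbers are invented for illustration.

```python
# A sketch of the percentile-maintenance check for fluency. If a student's
# words-correct-per-minute (WCPM) percentile holds or rises against the
# norms, they've made a year's growth for a year at school. Data invented.

wcpm_percentiles = {
    # student: (percentile a year ago, percentile now)
    "Student A": (55, 58),
    "Student B": (40, 25),
}

for student, (before, now) in wcpm_percentiles.items():
    if now >= before:
        print(f"{student}: percentile maintained ({before} -> {now})")
    else:
        print(f"{student}: percentile dropped ({before} -> {now}); investigate")
```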
When it comes to spelling, things aren't so easy. There really isn't a reliable normed tool that measures growth across phonics, orthographic conventions, and morphology. However, there are tools such as MOTIf (you can find assessments on the MOTIf website for free). They have tests for both real words and nonwords: the DiSTi and the DiSTn. You can use these with everyone. However, I encourage you to have a conversation with your team about the time and cost versus the benefit of completing and marking these assessments.
Administering them doesn't take that long because it's whole-class, but marking them is another story, and this goes for any assessment. Consider what you are likely to learn that you don't already know, and whether spending the time to conduct and mark these assessments is worth it. We also need to remember that assessment really needs to inform our next steps for teaching.
So the phonics assessment that I described earlier in the episode absolutely informs your next steps in teaching, as does monitoring against the fluency norms, because you'll notice the things students are having difficulty with. So if you have a plan for how you're going to manage the time constraints of all this assessment, and how you're going to use it to inform teaching, great. If you don't have a plan and you're not clear about how it's going to help you, have a good think and talk about whether it's worth it.
There is one more thing that we can monitor and that's behaviour-related data.
Any school I've ever worked with who has successfully implemented explicit teaching has noted a reduction in negative behaviours in the classroom.
Students can be either on-task and learning, or off-task and engaging in negative behaviours. So we need to choose which of those we want. It's not unreasonable to suggest that one of the positive impacts we can expect from our explicit teaching efforts is a lowering of the number of negative behaviour-related incidents in classrooms.
And I have worked in some really tough schools and have seen this first-hand. I'm also hearing it from schools I work with that might also be considered pretty tough in terms of student need.
When we engineer the learning environment and instruction to help students be successful, they want to learn. They can learn, or they can muck around; we need to pick which one.
When we evaluate the impacts of our efforts, we often talk about how hard teachers have worked, or about how we've successfully implemented programs. So tickety-boo for us: we did the training, we implemented the program, fantastic result everyone, Merry Christmas. These things are part of the strategy to get to success.
They aren't the definition of success. The definition of success is that every student in our care has made appropriate growth throughout the year and is either achieving or moving towards achieving the age-appropriate levels of learning.
For our students who do not have disability or difficulty, that's a minimum 12 months of learning for 12 months at school. For our students with disability or significant difficulty, we're looking to ensure that they have consistently achieved the goals set out in their individual learning plans. When it comes to these students, we have to set ambitious yet sensitive goals based on the student's learning profile, and then monitor what percentage of students with individual plans actually achieved these goals.
As the school year draws to a close, I encourage you to take the time to ask the question, Who hasn't been served this year? Or Who hasn't been served as well as they could have been this year? Then use your data to make a list. Write down the names of the students and get real with the team. Have a frank and vulnerable conversation about the circumstances that resulted in this lack of progress.
And keep your language distanced. By that, I don't mean saying, When you, Jan, were teaching that lesson. I mean saying, During instruction, the student experienced... or, During instruction, the teacher needs to ensure... Having this unemotional language helps us not take it personally. Well, it makes it easier not to take it personally.
But please remember that data is not a personal judgment about individuals, it's feedback and it's a gift.
So once we've looked the data, and sometimes the ugly data, in the face, we have to make a plan and ask each other, What is it going to take to get every student learning at an appropriate rate towards appropriate achievement? That will inform part of the goals for the next school year.
I want to take the opportunity to thank you for making me a part of your world each week through this podcast. I hear from so many people that they find the episodes useful and that it reflects the reality of their school lives. My number one goal is to help you work through the challenges that you're navigating.
I haven't forgotten what it's like to be the one responsible, either for standing in front of the class and getting those data results, or for being the leader trying to support your team to navigate those choppy waters. It was only a few years ago that that was me, and I continue to work in schools and with schools as they do this work.
So if I can be of service, if I can help you create success for your students through all of the things we do, including this podcast, then I'm a very happy person because I truly believe that every student has the right to be taught with evidence informed practice and every teacher has the right to be supported to make that happen.
I'll be back next year with more episodes of the Structured Literacy Podcast. If you have requests for topics that you'd like covered, please don't hesitate to get in touch and ask. I'm here to help. Until we're together again in the start of the next school year, happy teaching everyone. Bye.
Show Notes:
S2 E8 - Why isn't my Tier 1 Instruction working?