Welcome back for The Results. Just a reminder – the C group were taught Chronologically and the T group were taught Thematically.
Speed of teaching
There was no significant difference. The T group were slightly behind, but I attribute this to having mostly afternoon lessons – we definitely got less done. It also FELT like I had a lot more cover lessons with them, so I was quite surprised when I worked back through my registers and found this wasn’t especially the case.
Student confidence about their retention
I gave them this questionnaire during half term. I asked these 6 questions and gave them a 1-4 scale.
On a scale of 1-4 (1 is low)…
- how much did you enjoy the Medicine Unit?
- how well did you feel you understood the Medicine Through Time unit?
- how confident did you feel about the exam in February?
- how easy was it to revise from your classnotes for the exam?
- if you heard there was going to be a Medicine exam at the end of the summer term, how confident would you feel about it?
- I’m not giving you another Medicine exam at the end of the summer term, I promise. Don’t worry.
(Q6 ran from 1 – ‘thank goodness’ – to 4 – ‘Oh no, I’d love another mock’.)
Eight students from each class completed the survey. Here are the averages for each class.
As you can see, they were roughly the same in terms of enjoyment, understanding and how easy they found it to revise from their exercise books. The biggest difference is how confident they felt going into their mocks after February half term – the thematic students were, on average, more confident. They also remain more confident now, were another mock imminent (and they’re a bit less relieved to hear there won’t be one, although that seems to be skewed by one joker), but the drop in their confidence is bigger.
The mock data
I measured their data by comparing mock grade against FFT20 target. The final column is their distance from target. I would prefer LoPs but it’s not how we analyse data in my current setting.
So – average difference from target grade is –
At first glance, it looks good. T have made more progress towards target than C. The quality of their responses was better – not particularly in terms of knowledge, but in the way they were able to analyse change over time. They had a slightly better understanding of the three different strands – it’s very common for students to confuse treatment and prevention, for example.
Let’s unpick it a bit though.
| Prior attainment | C group | T group |
| --- | --- | --- |
| High Prior Attainer + | -2.7 | -1.7 |
| High Prior Attainer | -3.3 | -1 |
| Mid Prior Attainer | -1.4 | -2.3 |
I only officially have one Low PA across both groups, in the T group, who came out at one grade below target. My gut feeling is that several of my MPAs are right on the very border, but that’s a blog for another day.
So, this data tells me that it worked best for H+ and H students. It doesn’t seem to have worked well for MPAs (but it did for the LPA!). There was no discernible difference when considering PP or SEN.
These classes moved into Y11 last September and sat their Y11 mocks at the start of December. I used the 2018 summer paper and markscheme, but I kept the grade boundaries the same as I’d used in Y10. This is because grade boundaries fluctuate and I think students need to measure their progress against their own prior attainment, not against a national curve. I peg at 50% for a grade 4 and move up/down in increments of 10%, with the exception of grade 9 which sits at 95%.
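Purely for illustration, that boundary scheme can be sketched in a few lines of Python. What happens below the grade 1 boundary is my assumption – the post doesn’t say – so I’ve returned 0 (ungraded) there:

```python
def grade(percent):
    """Turn a mock percentage into a 1-9 grade using the scheme above:
    grade 4 pegged at 50%, each grade a 10% step up or down, except
    grade 9, which sits at 95%. Below the grade 1 boundary (20%) we
    return 0 - an assumption, since the post doesn't cover it."""
    if percent >= 95:
        return 9
    # boundaries: grade 8 at 90%, 7 at 80%, ... down to grade 1 at 20%
    for g, boundary in zip(range(8, 0, -1), range(90, 10, -10)):
        if percent >= boundary:
            return g
    return 0

print(grade(50))  # 4
print(grade(94))  # 8 (only 95%+ reaches grade 9)
print(grade(95))  # 9
```

The only wrinkle is grade 9: a straight 10% ladder would put it at 100%, so it gets pinned at 95% as a special case.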
A couple of changes had taken place in the classes. Two moved away. One developed a significant issue with exams. One had missed most of y10 but made massive progress in her knowledge before sitting the mocks. I have weeded these anomalies out of my data.
I didn’t tell the students what was on the mock – not even a hint about the topics. I gave them a revision planner in September with a weekly topic and activity, and each class had a 5 question multiple choice quiz on the topic the following week. I gave out the knowledge organisers to both classes to use for revision.
Here’s the mock breakdown. I have added some rows showing progress from the Y10 mock. I scored this as 0 for negative progress, 1 for staying the same and 2 for an increased grade – so I’d be hoping for an average as close to 2 as possible.
| | C group | T group |
| --- | --- | --- |
| Overall progress to target | -1 | -1 |
| Overall progress from Y10 | 1.22 | 1.27 |
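For clarity, the 0/1/2 scoring works out like this (the grade pairs below are made up for illustration, not real student data):

```python
def progress_score(y10_grade, y11_grade):
    """Score grade movement between the Y10 and Y11 mocks, as in the
    post: 0 for a drop, 1 for staying the same, 2 for an increase."""
    if y11_grade > y10_grade:
        return 2
    if y11_grade == y10_grade:
        return 1
    return 0

# Hypothetical (Y10 grade, Y11 grade) pairs - invented for the example
pairs = [(4, 5), (6, 6), (5, 4), (3, 5)]
scores = [progress_score(y10, y11) for y10, y11 in pairs]
print(scores)                      # [2, 1, 0, 2]
print(sum(scores) / len(scores))   # 1.25 - an average of 2 would mean everyone improved
```

So a class average of 1.22 or 1.27 means most students held or improved their grade, with a few dips.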
My LPA retained the same grade from Y10.
I haven’t spent much time thinking about this data yet, mainly because I’m admiring how much progress my students have made from March last year, when they sat the original mock. I am gifted with a bunch of hard-working, motivated students. I’m going to miss them a lot.
Enough of that fluffy stuff, though. Broken down into prior attainment groups, the results this year broadly bear out what I found last year. You could perhaps say that my T group MPAs retained more knowledge into Y11, but it doesn’t seem to have done the HPAs much good: on average, they went down slightly.
Even though my gut feeling is that the T group have a stronger grasp on the chronology, the only thing I can say for certain is that I haven’t done a great job of proving my hypothesis about impact on student attainment. Sigh. But at least I had some fun thinking about teaching it.
Now I just have to sit tight and wait for the summer results.
I’ve taught my current year 10 group thematically this year, with the addition of thematically-organised knowledge organisers and weekly quizzes. I also broke it into three strands – ideas about cause, treatment and prevention. I’ve got just the one group this year, though, so it will be difficult to measure them against anything.
If you’ve got any thoughts on this, I’d love to hear from you.