One of my struggles has been coming up with an easy way to assess Interpretive Reading and Interpretive Listening. Through my master's and my work as an online teacher, I've taken courses to become a Quality Matters Reviewer for online and blended courses. One of the emphases of Quality Matters is that activities, lesson objectives, and course objectives are all aligned.
I thought I had aligned my objectives with the course and activities when I moved to Integrated Performance Assessments (IPAs), but after actually reviewing and working through the rubric, as well as working through my master's classes on instructional design, I discovered I wasn't quite explicit enough.
So I've been focusing on helping my students learn about the different components of the ACTFL Interpretive Rubric. I've leaned on Madame Shepard's blog, using her examples and Ohio's template for Interpretive Tasks.
But I’ve had some problems…
Problem 1: Feedback
I’ve had trouble making the Interpretive Tasks manageable to grade.
Those template assessments are long. They take time to make. Then you have all those long-answer sections. It just takes time to get the feedback to the students (and it’s frustrating to juggle this with managing feedback for the presentational mode!).
To make it more manageable, I started doing most of the assessments in Canvas (the LMS our school uses) and using the rubric to grade the "quiz" after the fact. Canvas auto-graded the items for me, and it's quicker to scroll through a screen of legible fonts than to decipher what a kid wrote, scribbled through three times in pen, crossed out entirely, and then rewrote illegibly (maybe in franglais).
How do you want to assess versus grade? We're trying to give students quicker, more detailed feedback. (At my face-to-face school we're currently discussing the merits of traditional grades versus standards-based grading. But that's for another post!)
- grade with the rubric component(s) you're focusing on. This takes time, but hopefully it'll give you an idea of where your students are.
- take the raw quiz grade. This risks a very poor score going into the gradebook. In Canvas, I cannot make answers worth different point values (best answer 4, mostly good answer 3, partially good answer 2, clearly poor answer 1), so the quiz is either you got it or you didn't. If you do this, I recommend making the IPA, their big performance, worth more points, so the practice quizzes don't hurt their grade too much along the way.
- take a participation grade. Did they really attempt to complete the task? If these are designed as stepping stones toward a better performance on the final IPA, then these are learning experiences. A completion grade can be given without guilt!
- take no grade at all. GASP! I know. I do this a lot. And kids kinda hate it at first, but the research on formative assessment backs me up. You can use these as self-evaluative tools: students reflect on their learning (constructivist style) and identify what they need to do to improve (or notice what they missed!).
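To see why the "make the IPA worth more points" advice above works, here's a minimal sketch with made-up point values (nothing here comes from Canvas itself, just simple points-based averaging):

```python
# Hypothetical illustration: how weighting the summative IPA more heavily
# keeps a rough auto-graded practice quiz from sinking a student's grade.
# All point values are invented for this example.

def weighted_grade(scores):
    """scores: list of (points_earned, points_possible) tuples.
    Returns the overall percentage, rounded to one decimal place."""
    earned = sum(e for e, _ in scores)
    possible = sum(p for _, p in scores)
    return round(100 * earned / possible, 1)

# A student bombs a binary-scored 10-point practice quiz (4/10)
# but performs well (85%) on the summative IPA.
ipa_heavy = [(4, 10), (85, 100)]   # IPA worth 10x the practice quiz
ipa_equal = [(4, 10), (8.5, 10)]   # IPA worth the same as the quiz

print(weighted_grade(ipa_heavy))   # → 80.9
print(weighted_grade(ipa_equal))   # → 62.5
```

With the heavier IPA weighting, the same quiz stumble drops the grade only a few points instead of pulling it down toward failing.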
Problem 2: More practice
Students need more practice, focus, and feedback on all the different parts of the rubric. For some, this is really their first explicit exposure to these ideas. My course, unit, and interpretive lesson objectives are all the same:
- identify key words, main ideas, supporting details, and organizational features in a given text/video/audio.
- infer the meaning of words based on context, the author’s perspective/purpose, and the cultural products/practices/perspectives in a given text/video/audio.
- make inferences in a given text/video/audio.
But I don't have to do all of these with each interpretive task. I've broken them down into one or two focal points per task and made quizzes in Canvas. I've been focusing on giving specific feedback on the different answers to multiple-choice questions.
Problem 3: Writing multiple choice questions that don’t…suck
How do you write a good multiple choice question? Test questions are hard to write. I am not a test writer. I have not taken a dedicated course on assessment or test creation. Someday maybe?
Instead, I've looked to places that are also teaching and assessing these skills: the lower grades. Standardized tests there target these same skills for reading comprehension.
I've tried this in all of my courses for one unit, and I have to say I'm pleased with the results. Students improved at identifying organizational features, inferring the author's perspective/purpose, identifying and inferring cultural products/practices/perspectives, and identifying main ideas.
The more explicit I was with my feedback on each question's correct and incorrect answers, the better their understanding was. They've been able to apply what they've learned to other tasks.
I’ll post some examples of the Interpretive mode for you soon!