From “Data Driven” to “Evidence Driven”
In my last post, “Common Assessments” vs “Common Understandings”, I was reminded of how powerful comments are on a blog, and why blogging is such a powerful tool, not only for sharing your learning, but for learning from others. To be honest, one thing that I feel guilty about is not responding to comments on my blog, although I love reading them. I kind of feel that I have already shared my thinking, and I hope that I can stand back and watch others discuss and learn from their conversations. I always read the comments and appreciate when people take the time to grapple with ideas and share their thinking. (Read Bill Ferriter’s post on commenting…it is great.)
To catch you up on the last post, here is what I shared:
Now there is a difference between wanting students to have the same test and wanting them to have the same understandings of material. If I ask students to show that they understand the same objective, does the way we assess truly have to be the same? What I think we mean is that we are looking for “common understandings”, not “common assessments”. The notion of a “common assessment” does not take the individual into account, whereas “common understandings” allows for different pathways to show learning.
As I am just starting to explore this concept, I would love to know your challenges to these thoughts. Differentiated instruction cannot come with standardized assessments, or am I way off here?
There were lots of great comments, and I loved this response that gives a concrete example from Jennifer Casa-Todd:
I think the key is the idea of common understandings that you mention. So here’s my take. There can be common assessments that still ask students to demonstrate their learning. For example, in an English exam, I can ask: What new understanding do you have about human nature based on your learning in this course? In your answer, draw from three specific course materials (texts, characters, discussions, etc.). In a History course, I can give students a passage from an article and ask them to identify three connections they see between the article and what they have learned in the course. If this same assessment was given in 6 different grade 9 classes, each of which has studied different things and had different class discussions, the responses would be very individual and very much a demonstration of their own learning of course materials. So if we need to have the “same assessment” because we are worried about what parents would say (not that I believe that should be a justification), the questions need to be open ended enough that each student brings their own unique learning and perspective to the response. And it has to be an application of that knowledge rather than a regurgitation of it. This would be far more interesting to mark, but definitely takes more time than marking a content-based, straightforward response, which may be why it isn’t as common a practice as it could be.
What I love is that there is a grappling with ideas, from what could be holding us back (perception of fairness from parents, time constraints) to solid ideas for moving forward. If you read through the comments, you will learn MUCH more than you will from the original post, which was simply batting some thoughts around.
Here was another comment from Ross Cooper that struck me:
Not too long ago I wrote a post on reasons for assessing project based learning with a (somewhat) traditional test. Here’s a quote from it to consider:
I have heard the cries of those who claim, “Students should be able to demonstrate their knowledge however they want!” I disagree. Throughout the school year a wide array of opportunities should exist, but at certain points students should be “forced” to communicate what they know in written/essay format, as this is a valuable skill in and of itself. Also, when assessing and grading in other formats — e.g., videos, posters, various apps, etc. — let’s make sure not to prioritize flash over substance.
Ross is a good friend of mine and I deeply respect what he is sharing (check out his co-authored book on project based learning). The one part of his comment that I struggle with, though, is that students should be “forced” to write an essay. You would obviously not (well, at least I would hope not) expect a student who did not speak English as their first language to write their understanding of a concept, unless you were willing to read it in their first language, right? As I was thinking about this, unless the skill you are evaluating is the ability to write an essay, why would you insist that students demonstrate their learning in any one particular way? Is your (the teacher’s) way the best way to evaluate any specific topic? I do agree with Ross’ belief that we often mark flash over substance, and that is something we need to change. What matters is the ability to share your understanding of a concept, not the ability to make a poster or video, unless that is the specific skill you are assessing.
I will give you an example of this from the professional learning opportunities I provide. Often, after I share some ideas, I ask participants to share a reflection through a 30-second video on Twitter. There are a few things that I am looking for here:
- Your reflection on your learning, which helps me understand what you have taken away.
- Your ability to create a video on Twitter (skill).
- Your ability to learn something that you may not have done before.
What we sometimes get caught up in is looking for numbers as an evaluation tool, but numbers are not always accurate. In one session, a participant suggested that we need to change the terminology from “Data Driven” to “Evidence Driven”. The former term is often connected with numbers, whereas “evidence driven” is much more open. Believe me, when I do that activity, there are no numbers provided, but there is a ton of evidence of learning.
One last comment I want to bring up, from Bill Ferriter. He says the following:
…a quick reaction as a guy who promotes common assessments as a part of the PLC process: Common assessments to me aren’t about the students at all. They become the starting point for conversations between teachers who are reflecting on their instructional practices — and unless the assessments are “common,” those conversations aren’t all that productive.
They are also about holding teachers accountable for teaching a basic set of shared skills/content to kids so that students in one class aren’t getting a drastically different learning experience than students in another class.
I’m all for allowing kids to demonstrate mastery in a thousand different ways and I think that’s something schools rarely do.
But I also believe in the power of a common assessment to drive conversations between groups of teachers on what they are doing well and where they could be doing better.
I think this is an important conversation, as assessment should often guide teaching, not the other way around. When you change the way you think about assessment, teaching changes, as you often see in schools that move from “grades” to standards-based reporting. I love Bill’s idea of “common assessments” being a part of the conversation on teaching and learning, but I will admit that I have seen common assessments implemented for the sake of “fairness”, not conversation, in the past.
As I go through these comments and my own thinking, I am not sure if I am any closer to my own answers, but I believe that schools should teach students not what to think, but how to think. These conversations where we share our learning are crucial for modelling to our students, and the ability to grapple with ideas openly in your own space is important. This is why I am a huge proponent of portfolios that show both the summaries of your learning and your process. If we work with our students to focus on both elements, our schools will continue to move in a powerful direction.