The Importance of Student Voice in Social-Emotional Assessment
While there is increasing recognition in the field of education that social-emotional development (SED) is an important component of student success and that young people’s voices matter, skepticism remains around using youth self-report as a valid measurement of SED. Those of us in the SED world are excited that education is focusing on social-emotional skills but understand the apprehension behind the criticism — how do we accurately measure these skills and, if we do, will the results be biased and used punitively against teachers, students and schools? At a time of widespread assessment fatigue, the resistance to additional social-emotional testing is understandable. Unfortunately, legitimate criticism of misusing SED self-report assessments for school accountability and ranking has become entangled with a critique of self-report itself. It is essential that we reject the misuse while expanding the use of self-report.
While I agree with researchers who warn of the dangers of ranking schools based on social-emotional metrics, I still believe these types of self-report assessments have great value for schools if they are used correctly. How can you learn about how youth feel and develop without asking them? It is entirely nonsensical to dismiss self-report as biased and soft when, in fact, the whole philosophy of youth development and student engagement is based on the participation of youth in their own growth, learning and self-assessment. First and foremost, schools should work hard to know every child from the beginning of the school year. A well-crafted self-report survey can give students a voice to share their strengths and challenges with teachers. Schools can use that student-provided information to tailor individualized learning and services to students’ needs at the beginning of the school year instead of waiting until a student is in crisis to step in.
What are the arguments against self-report in general and in SED specifically? First, there is the argument of bias: social conformity (pleasing teachers or peers, for example), self-promotion (self-aggrandizement), or central tendency (answering questions in the middle of a Likert scale and avoiding the extremes). Self-report is also sensitive to the conditions under which the assessment is given (alone, in a group, with adults who are liked or feared) or to whether the student is feeling upset at the time the assessment is taken. A common example we see in our research is a dip in self-reported scores at the end of a school semester, when students are feeling the most pressure from the school to perform well academically. An important starting point for this discussion is recognizing that no method is free of potential bias in the collection, reporting and interpretation of data, whether the method is observation of youth behavior, evaluation of student work, or examination of reports by parents, teachers or peers. All of these methods carry a certain amount of bias that needs to be managed through careful psychometric research.
Second, many believe trained behavioral observation of students is the stronger approach because it seems more objective. At The PEAR Institute, we use this type of trained observation in our Dimensions of Success tool to help educators understand what makes a science program in schools and afterschool programs successful. Despite the benefits, many of the things we are most interested in cannot be easily observed, and when they are observed, it is not always clear whether they reflect the construct we are trying to measure. We can count student behaviors, but that does not mean we understand the meaning behind the behaviors or the goals driving them. Thus, simply observing young people is not enough. Nor is observation necessarily the better method or the gold standard. It depends very much on the question: how many times a student disrupts a classroom is clearly the domain of observation, but how many times a student feels disappointment or frustration is not as clearly observable.
Third, many think teachers are more reliable observers of student behavior, emotions and meaning than the students themselves. Teachers are indeed often very good observers of their students and get to know them very well. But it is hard even for teachers to know what students are thinking or the underlying issues that they face. It is hard even for parents to be sure about their child’s inner life, aspirations and attitudes. This is particularly true because adolescence is developmentally defined by the separation of the child from the parent, along with the creation of an inner life that is often more easily shared with peers than with parents.
Finally, some suggest evaluating the work that students create in and out of school to define their mastery of certain learning goals in the social-emotional or non-academic realms. These artifacts of learning provide windows into students’ abilities, but again, they do not tell us what went through a child’s mind while creating them. There is, as yet, no measurement system, not even brain imaging, that can capture the sense-making of young people without including the student directly in the conversation.
After careful consideration of the available methods, it is clear that we should put youth voice at the center of educational research and push back against an academic purist attitude that argues against it. The PEAR Institute’s approach is that of many academic researchers and evaluators: the most thorough information is found by triangulating methods. This multi-method approach draws on different systematic perspectives: parents, teachers, evaluators and students all contribute views that help tell a larger story. Some will agree with us that student voice is essential but argue that a good discussion or in-depth interview is far better than a self-report, forced-choice questionnaire. Others will argue that no measure can replace a teacher getting to know a student or an anthropologist observing a classroom and interviewing students. But with 55 million students in U.S. schools, we can succeed only if we introduce measures that allow students to communicate their social-emotional strengths and shape their school experience from the start, leveraging those strengths to address challenges. By basing the decisions we make for our educational systems on quantifiable, self-reported student data, we can improve both personalized learning and student support in schools.
Note: My team and I have developed the Holistic Student Assessment (HSA), a student social-emotional self-report survey that is used widely throughout the U.S. and internationally. This article speaks not to the specifics of the HSA and its utility, but for all the contributors to the field who have developed such tools and for those who are using them or contemplating moving forward with them.