Measuring the Quality and Impact of Afterschool STEM Programs
by Kathleen Traphagen
STEM Next — along with many others seeking expanded opportunities for young people to experience high quality STEM learning — looks to afterschool programs as part of the solution.
Yet how can we understand the impact of afterschool experiences on young people’s STEM learning and engagement? And how can we provide aggregate evidence of the impact to policymakers when arguing for increased public resources and commitment to these programs?
With seed funding from the Noyce Foundation and others, the Partnerships in Education and Resilience Institute (PEAR) at Harvard Medical School and McLean Hospital has developed two widely used tools: a survey for students called the Common Instrument Suite (CIS) and a program quality observation tool called Dimensions of Success (DoS).
The Common Instrument (CI) is a 14-question survey for youth ages 10 and older that assesses interest and engagement in science among participants in STEM learning programs in school, community, and museum-based settings. PEAR has recently expanded the CI to integrate other important science learning-related dimensions that can aid in the development of more effective OST science programming. This expanded catalogue of tools is called the Common Instrument Suite (CIS), and programs work with PEAR to choose the outcome measures that best fit the program's current evaluation goals (see Table).
The first version of the Common Instrument (CI) was developed in 2009 by PEAR Director Dr. Gil Noam together with OST practitioners and educators from major organizations including Girls, Inc. and 4-H. The CI has been administered nearly 40,000 times to students enrolled in informal science programs in more than 25 states, and it has shown strong reliability (alphas > 0.85) — evidence that the tool consistently and accurately measures science-related attitudes among youth.
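The reliability figure above refers to Cronbach's alpha, a standard measure of how consistently a set of survey items taps the same underlying construct. As a rough illustration (the data below are entirely hypothetical, not CI responses), alpha can be computed from an array of item scores:

```python
# Illustrative computation of Cronbach's alpha, the reliability
# coefficient cited for the CI (> 0.85). The responses are made up.
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 5 youth answering 4 Likert-scale items (1-5)
responses = [
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 4, 4],
]
print(round(cronbach_alpha(responses), 2))  # → 0.95
```

Values near 1 indicate that respondents who score high on one item tend to score high on the others, which is why alphas above 0.85 are read as strong internal consistency.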
Dimensions of Success (DoS) is an observation tool that measures the quality of students' STEM learning experiences in a program setting. Originally designed for afterschool, it is also being piloted in some in-school settings. DoS defines 12 indicators of quality within 4 broad areas and asks the observer to rate the program on each. For a detailed introduction to DoS, including sample rubrics, data reports, and summaries of each dimension, please download PEAR's DoS Guide here: DoS Guide for Organizations (pdf).
DoS results are used in two important ways:
- for STEM program administrators and staff to better understand the strengths and weaknesses of their programming and design improvements; and
- for external evaluators or funders to track quality in programs over time and/or quality across a city or a state.
Programs can choose to be observed by either internal staff or external evaluators who are trained and certified to use the tool. Currently, The PEAR Institute has trained over 1,000 observers across the country, including teams from 16 statewide afterschool networks. Observed programs include STEM clubs in afterschool programs, community-center-based programs, afterschool and summer programming at museums and science centers, and a range of other STEM learning experiences for youth.
An aggregate national sample of more than 700 DoS observations showed stronger results for dimensions such as relationships and organization and weaker results for inquiry, content learning, and reflection, giving practitioners valuable information about where to direct professional development efforts. PEAR offers the DoS Program Planning tool for programs to use as they plan and prepare activities.
Connecting program quality scores with a range of student-level outcomes is the next important step for showing the impact of STEM learning experiences beyond school hours. Initial data show that high program quality is linked to higher scores on the Common Instrument, and future work will explore different ways of capturing these relationships.