Problems with PIRS and Potential Alternatives
Cameron Hecht; EPS 518 Weekly Blog Post; Due Date: 10/6/2014
This week’s readings focused on colleges’ accountability to both students and funders. The American Council on Education’s analysis of President Obama’s college rating plan addresses some of the plan’s potentially negative unintended consequences (Espinosa, Crandall & Tukibayeva, 2014). The Postsecondary Institution Ratings System (PIRS) has two goals: to allocate federal money to postsecondary schools based on their outcomes and to provide important information to students deciding where to enroll in college. The authors argue that the system, although intended to be distinct from a rankings system (e.g., that used by U.S. News and World Report), will nonetheless be treated by students as a de facto rankings system, and rankings have been shown to further stratify schools’ student bodies by social class and income. Specifically, the authors point to data demonstrating that students from high-income families are more likely than students from low-income families to view rankings as a “very important” factor when choosing a college (see Figure 1). Given this trend, if the PIRS is in fact used as a ranking system, high-income students may flock to the highest-rated schools at greater rates than low-income students, which could in turn widen the separation between highly and poorly rated schools, producing a stratified system that continues to reproduce itself. Another potential problem is that because the ratings are supposed both to inform students and to help determine federal funding, they place schools in a conflict of interest: institutions may be willing to impose costs on low-income students in order to attain higher ratings and greater funding.
Addressing the potential problems with the PIRS, the authors make some general suggestions about what might constitute a better approach. They state that, “for the students that the administration is rightly the most concerned about - namely low-income students - timely information, resource sharing, and hands-on guidance will be far superior to static information sources that rely on the individuals to seek them out” (Espinosa, Crandall & Tukibayeva, 2014). While it is hard to deny that these solutions would benefit low-income students more than the PIRS, it is not clear how the authors’ alternative would operate systematically or avoid problems of its own. If more hands-on decision-making support for low-income students were translated into policy, we would need to consider some fundamental questions. First, how could we ensure that all qualifying students actually receive this support? It is easy to make ratings information publicly available, but ensuring that individual students receive direct help would undoubtedly be more difficult. Furthermore, how much would such a program cost to implement, and would its outcomes justify the cost to taxpayers and the strain on the federal budget? Adapting the role of high school counselors already on staff could make the goal more affordable, but requiring schools to hire new counselors for low-income students (if necessary) could be costly. Finally, how would eligibility be determined, and could an income cutoff create a disincentive effect in which families are motivated to keep their earnings just below the threshold while their child applies to college so that the child can receive free hands-on guidance? While the PIRS clearly contains flaws and could produce negative unintended consequences, finding a superior alternative will not be an easy task.