Is the ACT a Valid Test? (Spoiler Alert: No.)

William Bryant, PhD
Published in Age of Awareness · Sep 2, 2017

ACT, Inc.’s National Curriculum Survey goes out every three or four years to elementary, middle school, high school, and college teachers, as well as to workforce professionals. It collects information about what respondents are teaching, how they teach it, what they care about, and so forth. It serves as the basis upon which ACT builds its tests.

Because the Survey provides a look into both what pre-college students are being taught and what they need to know to be prepared for college, it is a useful tool for examining the serious and persistent problem of college readiness: why the majority of high school graduates are underprepared for college-level academic work.

ACT itself reports that 72% of its test-takers fall short of at least one of its college-readiness benchmarks, which confirms the widespread underpreparedness reported by other sources. And indeed, ACT’s 2016 National Curriculum Survey reveals wide disjunctures between high school teaching and college expectations, which may have something to do with why students aren’t better prepared.

But ironically, by pointing out these disjunctures, ACT’s Survey raises questions about the validity of the ACT exam itself. The ACT is a test that straddles the space between high school and college, claiming to be both reflective of high school curricula and a measure of college readiness. But if ACT’s own Survey reveals that high school curricula do not align with college expectations, how can ACT validly claim that its exam measures both?

Tests are all about validity. Their value and utility depend upon them actually measuring what they purport to measure. If a test does not actually assess what it purports to, then it’s not a valid test, and any inferences made based on its results are faulty — inferences such as “this kid has been taught the skills needed for college success but hasn’t learned them very well.”

The two claims ACT, Inc. makes about the ACT test are at odds with each other, which calls into question the test’s validity. The claim that the test is “curriculum based” rests on Survey results, which ACT says serve as empirical evidence upon which it decides how to build the test. In this way, according to ACT, the test reflects what is being taught in high schools — an important claim, since testing kids on things they haven’t been taught doesn’t tell anyone much about their abilities.

ACT also, of course, claims that the test is a measure of college readiness. Through the Survey, it gathers an understanding of what college instructors expect from entering students. This understanding is reflected in ACT’s College and Career Readiness Standards, a set of “descriptions of the essential skills and knowledge students need to become ready for college and career.”

According to ACT, the Standards are validated by the Survey in a process that “ensures that our assessments always measure not only what is being taught in schools around the country, but also what demonstrably matters most for college and career readiness.”

But if what students learn in high school and what is expected of them in college don’t square up, as ACT’s own National Curriculum Survey and other research suggest, how can ACT measure both with a single test?

According to a 2015 white paper, “ACT first identifies what postsecondary faculty, including instructors of entry-level college and workforce training courses, expect of their entering students — that is, the knowledge and skills students need to be ready for entry-level postsecondary courses and jobs. ACT then compares these expectations to what is really happening in elementary, middle, and high school classrooms. ACT uses the results of these comparisons to determine the skills and knowledge that should be measured on ACT assessments and to guide its test blueprints.”

The company does not explain how this process of comparison works, but it implies that it identifies a subset of knowledge and skills that falls into both camps, then simply tests kids on that.

To feel confident in this process, we would need to be certain that the subset is sufficient in size and scope to support the dual claims. That is, we would need to know what lies outside the overlap slice, as well as what lies within. What is being taught in high school that does not appear on the test because it is not a college-ready expectation? Likewise, what college-ready expectations do not appear on the test because they are not being taught in high school?

Once we knew those things, then we could validate the ACT by answering this question: Is the overlap slice sufficient to support both the claim that the test measures what is being taught in high school and the claim that it measures college readiness?

In other words, is there enough of the high school curriculum on the test to justify calling it a valid measure of high school achievement? And are there, at the same time, enough college expectations on the test to justify calling it a valid measure of college readiness?

ACT doesn’t attempt to answer these questions. As far as the ACT is concerned, if you demonstrate proficiency on the test, then ipso facto you’ve both mastered your high school curriculum and are ready for college, because the claims ACT makes for the test require that the two constructs be identical.

What if you don’t do so well on the test? Is it because you haven’t learned well enough what you’ve been taught? Or because you haven’t been taught what you’re being tested on?

The ACT simply doesn’t allow for the second possibility.

In point of fact, if high schools were teaching certain essential college-ready skills (how to revise your work in response to feedback, for example), a conventional standardized test like the ACT would never be able to detect it, because it cannot give test-takers opportunities to do the kind of authentic, extended, or collaborative intellectual work that will be required of them in college.

As mentioned, plenty of research demonstrates that there is a significant difference between high school learning and college expectations, suggesting that any overlap might not be very robust. According to a six-year national study on college readiness from Stanford, “coursework between high school and college is not connected; students graduate from high school under one set of standards and, three months later, are required to meet a whole new set of standards in college.”

ACT’s own research confirms this. Two things jump out from the 2016 National Curriculum Survey results. First, in many cases the Survey does not ask high school teachers and college instructors the same questions, so there is little opportunity to determine where high school teaching does or does not line up with college expectations. The Survey doesn’t look like a very good tool for comparing high school teaching to college expectations in Writing, for example.

Second, where the Survey does provide opportunities for comparing high school with college, it finds that high school teaching does not align with college expectations. The Survey report points out, for example, that high school Writing teachers and college instructors are not emphasizing the same skills. Further, high school math teachers do not agree with college math instructors about what skills are important for success in STEM courses. Fewer than half of high school teachers believe that the Common Core math standards (which ACT stresses are in line with its own College Readiness Standards) match college instructors’ expectations for college readiness.

In other words, ACT’s own Survey shows that, to a significant extent, the knowledge and skills high school teachers are teaching are not the knowledge and skills college instructors are expecting of entering students.

Hence the college-readiness gap.

So if those two bodies of knowledge and skills aren’t the same, how can ACT support the claim that its test measures both what students actually learn and what ACT says they should learn for college readiness? The test doesn’t distinguish a “high-school-learning” part from a “college-requirements” part. As far as the test is concerned, it’s all the same.

In fact, ACT can’t really support both claims at the same time. But they make them anyway because they want to sell the test to two distinct markets.

They want to sell it to students who are trying to get into college, so they call it a college-readiness test. And they want to sell it to states and districts for accountability purposes. These entities want to know whether their students are learning what they’re being taught; thus ACT calls the test curriculum-based.

But, we might wonder, don’t standards take care of all this? Standards, after all, both reflect the skills needed for college readiness and guide high school curriculum, right? Therefore, if the test aligns with the standards, then it’s both curriculum-based and a college-readiness indicator, because those are the same thing.

Most states have adopted the Common Core State Standards. Those that haven’t have concocted their own state standards, which are pretty much in line with the CCSS. In addition, ACT has its own College and Career Readiness Standards, which, it says, line up with both the CCSS and any non-CCSS state standards you care to throw at them. (As ACT says, “If a state’s standards represent the knowledge and skills that prepare students for college and career, then ACT Aspire and the ACT measure that knowledge and those skills,” a statement that manages to be both a non sequitur and a tautology.)

Again, however, ACT’s own research shows that neither high school teachers nor college instructors are much convinced that the CCSS reflect college-level expectations anyway.

Asked by the Survey, “To what extent do you feel that the Common Core State Standards are aligned with college instructors’ expectations regarding college readiness?” the majority of both high school and college teachers responded “little” or “slightly,” rather than “a great deal” or “completely.”

In other words, according to its own data, ACT shouldn’t really get away with equating standards-based “curriculum achievement” with “college readiness.”

So what’s the cost of the ACT’s tricky claim-game? The cost is that we get farther away from understanding and addressing the college-readiness gap, so long as everyone believes that the ACT is really measuring what it says it does.

Wherever high school curricula lack significant overlap with the skills and knowledge ACT identifies as necessary for college readiness, the test measures not what students have learned but what they haven’t been taught. This, then, contrary to ACT’s claims, is not an indicator of student readiness or achievement, but a measure of the distance between high school teaching and college expectations (or at least those ACT identifies and can test for).

But this is not how the interpretation of test results falls out for either student or state customers. Rather, the inescapable inference for both is that the majority of students have been taught what they need to know but simply haven’t learned it well enough — student’s fault, or teacher’s fault, but not the test’s fault for leading everyone to a lousy inference.

The faulty inference that issues from the ACT doesn’t help matters where students’ future opportunities are at stake; prospective colleges have no way of knowing that a kid was tested on things she was never actually taught. And it doesn’t help where states are trying to figure out how to improve their education systems. Rather, it makes matters worse by misdirecting both states and students away from the problem of how better to connect high school learning to necessary college skills, and toward the problem of how to get kids to score better on the test.

We do indeed want an education system in which high school curricula are focused squarely on the skills and knowledge we confidently know are needed for success upon entry into college. Demonstrably, that’s not what we have now, so we don’t need a test that falsely suggests otherwise.

Originally published on LevelUp.

ABOUT THE AUTHOR

William Bryant, PhD, is founder and CEO of BetterRhetor, a company dedicated to closing the college-readiness gap and creating more opportunities for success after high school for ALL students — regardless of income or background.

You can contact him at wbryant@better-rhetor.com.

Support BetterRhetor’s mission by signing up for our email list and following us on Facebook, Twitter, and Instagram.

BetterRhetor’s FREE .pdf — A High Schooler’s Guide to the Culture of College Academics: 8 Key Concepts — is available here. Figuring out college academic culture, and adapting to it, can be challenging, in part because no one ever really explains it. Until now.
