A Plea for Data: The Value of Informed Testing Decisions

By: Erick Hyde

Editor’s note: In August 2020, NACAC released a report urging colleges and universities to examine their policies and practices concerning standardized tests and their potential impact on equity and access. This column is the second in a series of articles reflecting on the report’s recommendations and offering insight into the current state of standardized admission testing.

At this time last year, “I don’t know” felt like a valid reply to the question of “to test or not to test” when guiding students through the college admission process. Granted, “I don’t know” left everyone feeling unfulfilled, but how could we “know” when each element of testing brought more questions than answers? Relying on some combination of experience, instinct, patience, and each other, we methodically felt our way through last year’s admission cycle and landed somewhere between survival and triumph. A year later, the world is thankfully in a different place, but both the testing question and the “I don’t know” reply remain.

One of several recommendations in NACAC’s Ensuring All Students Have Access to Higher Education: The Role of Standardized Testing in the Time of COVID-19 and Beyond report stands out as a solution to the issue of “I don’t know.”

The report directs colleges and universities to: “Share outcomes data, disaggregated by key demographic variables (e.g., race, ethnicity, socioeconomic status, financial aid, and high school type). In addition to analyzing outcomes data for the purpose of developing admission policy, share data widely to promote broad understanding of the effects of test-optional policies. Doing so will enable stakeholders, including students, faculty, administrators, and policymakers, to maintain confidence in admission processes.”

However, at this point in the test-optional experiment, few institutions seem to have put this vital recommendation into practice.

I recognize that Fall 2021 admission has not concluded for many institutions and that good data takes time. Nonetheless, by now, I was hoping to have more transparent, usable data on test-optional admission to inform how I counsel the next class of students. Much of the data I have found, and it took some digging, has come from two sources: institutional news releases and student newspaper articles. As an aside, thank you to all those student reporters who published some excellent articles. While I appreciate both sources, they usually provide the same basic data points of “x percent of applicants did not submit test scores” or “y percent of applicants were admitted without testing.” This information is helpful for an overall understanding, but it does not help answer the testing question. Furthermore, if I, someone within the field with a vested interest in admission data, had to seek out the information, it is unlikely the public has been exposed to what limited information has been shared.

As a counselor of English-language learners who hail from around the globe and largely pursue STEM fields, I expect that most of my students will choose to test. For the majority of them, testing really is the best path: it was always part of their plan, they are well prepared to test, and they are comfortable with testing. Furthermore, the abbreviated history of test-optional admission is on their side; they have watched peers with test scores celebrate Fall 2021 admission. My concern is not with my students who choose to test, but instead with my students who choose not to test. What can I point to that will instill confidence in their choice? What about the student who is on the fence? I would like to present fair arguments on both sides, but that is difficult to do when the depth and breadth of available data are so severely skewed. Sure, I’ve got anecdotal evidence and enough experience to shine a light on the test-optional path. It will be bright enough to quell fears, but it will be a flashlight compared to the glowing Vegas Strip that is the testing path.

Speaking with friends on the admission side of the desk, I am reminded that 2020 was the first year in this new test-optional world. I have to temper my hunger for information with the reality that the data is being generated in real time and that both the collection and analysis of that data will take time. I can be patient, but with my patience comes concern. We are already reading posts on the NACAC Exchange about testing site and seat shortages. It seems my students are not alone in returning to testing, which raises the question of how long the test-optional experiment will truly last. The timeline seems fragile, and I hope we, as a field, don’t miss our opportunity to explore real change.

Moving forward from “I don’t know” requires time and data. To those colleges and universities already sharing data and those preparing data to be shared, thank you. To the remaining institutions, this is a plea to practice NACAC’s recommendation to “share outcomes data,” and to share it “widely.” Only then will students—and their counselors—be able to confidently answer the question “to test or not to test.”

Erick Hyde is a university connection counseling specialist for the University of Pennsylvania’s English Language Program and served on the NACAC Task Force on Standardized Admission Testing for International and US Students.