A new study by the Center for Analysis of Postsecondary Readiness (CAPR) points to early signs of impact around using multiple measures to place college students into developmental education or college-level courses.
When institutions used multiple placement measures rather than a single placement test such as the ACCUPLACER, student assignment to college-level courses increased by five percentage points in math and by more than 30 percentage points in English, the study found. CAPR researchers plan to follow this initial implementation and early-impacts report with a final report on a range of student outcomes for the study’s three cohorts.
“What we’re seeing is that the single placement test alone is generally not the best thing for students,” said Dr. Elisabeth A. Barnett, senior research scientist at the Community College Research Center and project lead for the CAPR study. “Bringing other information into the mix is important.”
Barnett added that student data such as high school GPA can be a “valuable piece of information” to have when considering student placement.
“Colleges are not uniformly collecting that at the time of admission, so just doing that and having that available as part of the initial placement discussion is probably a really important first step,” Barnett said.
The researchers worked closely with officials at seven community colleges in the State University of New York (SUNY) system for a year or more to develop and implement an alternative placement system for their students.
Researchers first developed a predictive algorithm for each institution by analyzing its historical data to predict how a student would do in college-level math and English courses. Each college’s algorithms – one for English and one for math – drew mainly on whatever student information the college had available, such as high school GPA, years since high school graduation, and past placement test scores, Barnett said.
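The report does not publish the colleges’ actual models or coefficients, but the approach described above can be sketched as a simple logistic-style prediction: weight the available student measures, estimate a probability of passing the college-level course, and place the student accordingly. The weights and the 0.5 cutoff below are purely illustrative assumptions, not values from the study; in practice each college’s coefficients were fit to its own historical data.

```python
import math

# Illustrative weights only -- the real coefficients were estimated
# separately for each college (and for math vs. English) from its
# historical student records.
WEIGHTS = {
    "intercept": -6.0,
    "hs_gpa": 1.8,          # high school GPA (0.0-4.0)
    "years_since_hs": -0.05, # years since high school graduation
    "test_score": 0.01,     # prior placement test score
}

def pass_probability(hs_gpa: float, years_since_hs: float, test_score: float) -> float:
    """Estimate the probability of passing the college-level course."""
    z = (WEIGHTS["intercept"]
         + WEIGHTS["hs_gpa"] * hs_gpa
         + WEIGHTS["years_since_hs"] * years_since_hs
         + WEIGHTS["test_score"] * test_score)
    return 1.0 / (1.0 + math.exp(-z))

def placement(hs_gpa: float, years_since_hs: float, test_score: float,
              cutoff: float = 0.5) -> str:
    """Place the student into the college-level course when the
    predicted chance of success clears the cutoff."""
    p = pass_probability(hs_gpa, years_since_hs, test_score)
    return "college-level" if p >= cutoff else "developmental"
```

For example, a recent graduate with a 3.8 GPA and a strong test score would clear the cutoff and be placed directly into the college-level course, while a weaker profile would be routed to developmental education; the point of the multiple-measures approach is that GPA and other history can move a student across that line even when a single test score alone would not.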
The next stage involved the actual implementation of the algorithm into the colleges’ current placement system.
CAPR’s study tracked three cohorts of students entering college in fall 2016, spring 2017 and fall 2017. The more than 13,000 students were randomly assigned to the “control” group or the “program” group placed using alternative measures; researchers will track their outcomes through the fall 2018 semester.
This early report presented results of the fall 2016 cohort of 4,729 students:
– In math, 14 percent of students placed higher than they would have as a result of multiple measures assessment, while 7 percent placed lower.
– In English, 41.5 percent of students placed higher, while 6.5 percent placed lower.
– Program group students were 3.1 percentage points more likely than control group students to both enroll in and complete (with a grade of C or higher) a college-level math course in the first term, and 12.5 percentage points more likely to do so in English.
– Implementation of the alternative placement system cost colleges roughly $110 more per student for testing and placement. Ongoing costs in the subsequent fall term were approximately $40 more per student.
CAPR’s early report findings on successful completion of college-level English and math courses matter because when “students start off in remediation, they are less likely to make good progress and graduate for a number of reasons,” Barnett said. Among them, students may feel discouraged by the initial message that they are not “college-ready,” or developmental courses might not be “as strong as we would want,” she added.
As a result, the researchers recommend that students who could be successful in college-level courses start there, underscoring the impact of comprehensive placement systems.
“On the one hand, you have to make sure that there is developmental education available for students who really need it,” Barnett continued. “On the other hand, you want to make sure that students are getting, first of all, every opportunity to place into college-level if they could be successful, and secondly, encouragement in general about their chances to move forward.”
Researchers also examined subgroup differences in placement outcomes under the alternative measures. In most categories, results were not statistically significant because the subgroup samples were too small, but slight benefits for some racial/ethnic and gender groups were evident.
“It looked like Black and Hispanic students were benefiting somewhat more than students in other racial/ethnic categories,” Barnett said. “And we did see a statistically significant benefit to women in math.”
The report’s implementation findings outlined placement system procedures and also feedback given by the colleges participating in the study.
“A lot of people were interested in considering other measures, but that doesn’t mean that everybody on campus had been fully on board with that idea to begin with,” Barnett said, noting that there was still leadership buy-in from the beginning.
“Sometimes, some groups [on campuses] were more on board than others, and some colleges did a better job of involving everybody in the discussions than others,” she added.
Outside of the participating colleges, researchers have seen other institutions using their own placement systems that factor in non-cognitive measures, a hierarchical sequence of variables, or a student’s own judgment about their placement decision, for example.
A final CAPR report slated for next year will examine the credit accumulation and other outcomes of the three cohorts of students over the course of the study period, as well as the return on investment for colleges using a multiple measures placement system.
Tiffany Pennamon can be reached at firstname.lastname@example.org. You can follow her on Twitter @tiffanypennamon.