Algorithmic Bias Continues to Negatively Impact Minoritized Students

As institutions of higher education turn to AI, machine learning, and data-driven algorithms to make their work more efficient, a new study published in AERA Open, the peer-reviewed journal of the American Educational Research Association (AERA), reminds administrators that algorithms can be racially biased.

Dr. Denisa Gándara, assistant professor of educational leadership and policy at the University of Texas at Austin and co-author of the study.

In their study, “Inside the Black Box,” researchers discovered that algorithms used to predict student success produced false negatives for 19% of Black students and 21% of Latinx students, incorrectly predicting that these students would fail out of college. Using data collected by the National Center for Education Statistics over the last decade on more than 15,200 students, the study examined bachelor’s degree attainment at four-year institutions eight years after high school graduation.
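A false negative here means the model predicted a student would not earn a degree when the student in fact did. As a minimal sketch of how such group-level error rates are typically computed (the arrays below are hypothetical illustrations, not the study’s data):

```python
import numpy as np

# Hypothetical outcomes: 1 = earned a bachelor's degree within 8 years, 0 = did not.
y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])
# Hypothetical model predictions of degree attainment.
y_pred = np.array([0, 1, 0, 1, 0, 0, 1, 0, 1, 0])
# Hypothetical racial/ethnic group label for each student.
group = np.array(["Black", "white", "Black", "Asian", "Latinx",
                  "Latinx", "white", "Asian", "Black", "white"])

def false_negative_rate(y_true, y_pred, mask):
    """Share of actual graduates in a group whom the model predicted would fail."""
    graduates = (y_true == 1) & mask
    if graduates.sum() == 0:
        return float("nan")
    return ((y_pred == 0) & graduates).sum() / graduates.sum()

for g in np.unique(group):
    fnr = false_negative_rate(y_true, y_pred, group == g)
    print(f"{g}: false negative rate = {fnr:.0%}")
```

Comparing these rates across groups is one standard way to surface the kind of disparity the study reports.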

“It is essential for institutional actors to understand how models perform for specific groups. Our study indicates that models perform better for students categorized as white and Asian,” said Dr. Denisa Gándara, an assistant professor of educational leadership and policy at the University of Texas at Austin and co-author of the study.

Dr. Hadis Anahideh, an assistant professor of industrial engineering at the University of Illinois Chicago and another co-author of the study, said she and her team expected to encounter bias in algorithms. But she was surprised, she said, to discover that attempts to mitigate that bias did not produce the strong, fair results they were hoping for.

“[Institutional leaders] should know that machine learning models on their own cannot be reliable. They need to be aware that algorithms can be biased and unfair because of bias in the historical data, which is all the algorithms can see and learn from,” said Anahideh.

Institutions use algorithms for many tasks, including predicting college success, making admissions decisions, allocating financial aid, selecting students for success programs, and recruitment.

“Even if you use bias-mitigation technology, which you should, you may not be able to reduce unfairness in all respects and to the full extent; mitigation technology won’t do magic,” said Anahideh. “You really need to be aware of what notion you are using to mitigate unfairness, and how much you can reduce it.”
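One widely used family of bias-mitigation techniques reweights training examples so that group membership and outcome look statistically independent in the training data. The sketch below illustrates that idea, reweighing in the style of Kamiran and Calders, on entirely hypothetical data; it is a generic example, not the method evaluated in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: student features, graduation outcome, group label.
X = rng.normal(size=(1000, 5))
y = rng.integers(0, 2, size=1000)        # 1 = graduated
g = rng.choice(["A", "B"], size=1000)    # sensitive attribute (hypothetical groups)

# Reweighing: give each (group, outcome) cell the weight
# P(group) * P(outcome) / P(group, outcome), so that group and outcome
# are independent in the weighted training data.
weights = np.ones(len(y))
for grp in np.unique(g):
    for out in (0, 1):
        cell = (g == grp) & (y == out)
        observed = cell.mean()
        if observed > 0:
            weights[cell] = ((g == grp).mean() * (y == out).mean()) / observed

model = LogisticRegression().fit(X, y, sample_weight=weights)
```

Because the weights encode only this one fairness notion, other criteria, such as equal false negative rates across groups, may remain unmet, which is exactly Anahideh’s point about knowing which notion of unfairness you are mitigating and by how much.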

That’s why Anahideh and Gándara agree that institutions should avoid the use of highly biased algorithms and should investigate and disrupt the sources of bias inherent in algorithms by including variables that are “more predictive of student success for minoritized students,” said Gándara.

A potential new variable to include could be a data-based measure of campus climate. As an example, one indicator of success for Black students is a high percentage of Black faculty. This environmental factor, said Anahideh, could make the experience of marginalized students more positive, contributing to their overall success at a postsecondary institution in a way most algorithms don’t account for.

Dr. Hadis Anahideh, assistant professor of industrial engineering at the University of Illinois Chicago and co-author of the study.

“We use a certain set of predictors and factors to predict student outcomes. The most important predictors common in the higher education literature don’t cover everything,” said Anahideh. “Campus climate, family support, distance from home, and other factors that can affect students’ behavior might be missed in the model. It becomes biased.”
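As a small illustration of how such an environmental variable enters a model (all names and numbers below are hypothetical), an institution-level indicator like the share of Black faculty can simply be joined onto the student-level feature table before training:

```python
import pandas as pd

# Hypothetical student-level features.
students = pd.DataFrame({
    "student_id": [1, 2, 3],
    "institution_id": [10, 10, 20],
    "hs_gpa": [3.1, 3.7, 2.9],
})

# Hypothetical institution-level climate indicator: share of Black faculty.
institutions = pd.DataFrame({
    "institution_id": [10, 20],
    "share_black_faculty": [0.12, 0.03],
})

# Join so the predictive model can learn from the environmental factor too.
features = students.merge(institutions, on="institution_id", how="left")
print(features)
```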

Another solution, Anahideh said, could be to include a human perspective in the mix: someone who reviews an algorithm’s outputs specifically for indicators of bias.

Gándara said that, while this study in particular focused on racial bias, "it is important to consider how models are biased against other historically underserved groups in higher education, like students with disabilities, women in STEM, and students from rural backgrounds."

Anahideh and Gándara agreed that the workload of administrators and even faculty is greatly eased by algorithms and AI, which can analyze millions of data points in milliseconds, saving time, labor, and resources. It’s a powerful tool and can help make decisions, said Anahideh, “if you have a model that’s fair and accurate enough.”

Liann Herder can be reached at [email protected]
