Comment: It’s not just Facebook; colleges using algorithms, too

And while the intent is to help students, ‘predictive analytics’ can use race and income to students’ disadvantage.

By Shea Swauger / Special to The Washington Post

Imagine being rejected from a university or advised out of your major because you’re Black, or a woman, or a first-generation college student. Imagine learning that these decisions were made by predictive analytics software that you can’t object to or opt out of. Just over a decade ago, this seemed unlikely. Now it seems difficult to stop.

That may sound futuristic, but St. George’s Hospital Medical School in London deployed this technology as early as the 1980s. Administrators trained a predictive model using historical admissions data to determine who was accepted to the medical program. It was supposed to eliminate the biases of admissions officers; unsurprisingly, it reproduced a pattern of discrimination. The demographics of incoming students skewed heavily toward white men, forcing the school to stop the practice.

Today, this is the reality faced by millions of students. This year, The Markup reported that more than 500 universities use a single company’s predictive analytics product, which assigns students an academic “risk score” based on variables that are supposedly associated with people’s ability to succeed in college, including, at many schools, race. Black and Latino students were consistently rated as higher risk than their white or Asian peers.

The software also has a feature that encourages academic advisers to steer students toward majors where their risk scores are lower. Predictive analytics may be more likely to flag Black women as a risk in STEM majors, not because they’re bad at STEM but because STEM programs have historically been racist and sexist, making their race and gender appear as risk factors to the algorithm. Even when variables such as race or gender are removed, predictive analytics can still infer demographic information from proxy data such as ZIP code, test scores and socioeconomic status, all factors directly affected by structural racism and sexism.

In theory, predictive analytics software might equip schools with information that could help them direct resources to the students who need the most help. In practice, however, it can easily prejudice an administrator’s or professor’s impression of their students’ interests and abilities.

It’s very difficult for people who are harmed by software to protest its use or prove that it discriminated against them. Tech companies often object to transparent audits of their code, calling them a violation of intellectual-property laws. Students usually aren’t aware that their schools are gathering data or using it to make decisions about them in the first place. When applying to college, prospective students may sign terms of service or general data-use agreements that give implicit permission to be tracked and evaluated, but they aren’t explicitly told how much impact these technologies can have on their lives.

So why do colleges and universities use predictive analytics? Higher ed has been on a decades-long defunding streak at the hands of states and the federal government, and at most institutions, students generate the majority of revenue. Recruiting students and keeping them in school is a financial necessity. While ed tech companies promise that their software can identify paying students who will stay enrolled until graduation, the evidence for their claims is mixed.

If this trend continues, tools like this could be used on students much earlier in their education. K-12 public school systems have very limited resources and are accountable to governments for things like test scores, which could make predictive analytics a tempting tool to try to “optimize” students’ performance. It could determine who’s selected for advanced classes or denied supplemental learning support, or what kind of accommodations are granted for a student with a disability. Technology like this often is implemented for one purpose but has a bad habit of expanding its scope.

Judging by last year’s wave of protests against universities’ use of facial recognition, students will continue to voice opposition to discriminatory technology. If faculty members become subject to these tools, they’ll probably protest, too. In response, tech companies will, as usual, step up their public relations efforts. That’s what they did when called upon to act more responsibly with artificial intelligence, creating “ethical” technology initiatives as cover while they fired the employees actually attempting to build more ethical technology.

Recent movements to ban facial recognition and remote test proctoring have shown how the public can effectively push back against the expanding technological panopticon. About 20 different jurisdictions, including cities such as Boston and Portland, Ore., and the state of Vermont, have banned certain uses of facial recognition, and there’s a good chance we’ll see more such regulations in the next few years. This burgeoning movement may help spark a broader public conversation that will shape emerging norms of privacy, informed consent, data rights and justice in technology.

Predictive analytics tends to encode historic harms into our planning, limiting our sense of possibility. The companies that sell it to educational institutions may have gee-whiz marketing campaigns all about the future. Ultimately, though, such technology will keep us bound to the past, stuck in our old patterns of oppression, unless we choose to imagine something better.

Shea Swauger is a librarian, researcher and doctoral student in education and critical studies at the University of Colorado-Denver.
