Comment: Social media may be losing its algorithm camouflage

Regulators in Europe are looking at ways to understand AI through its effects rather than its data.

By Parmy Olson / Bloomberg Opinion

There’s a perfectly good reason to break open the secrets of social media giants. Over the past decade, governments have watched helplessly as their democratic processes were disrupted by misinformation and hate speech on sites like Meta Platforms’ Facebook, Alphabet’s YouTube and Twitter. Now some governments are gearing up to deliver a comeuppance.

In the next two years, the European Union and the United Kingdom are preparing laws that will rein in the troublesome content that social media firms have allowed to go viral. There has been much skepticism about regulators’ ability to look under the hood of companies like Facebook; they lack, after all, the technical expertise, manpower and salaries that Big Tech boasts. And there’s another technical snag: The artificial-intelligence systems tech firms use are notoriously difficult to decipher.

But naysayers should keep an open mind. New techniques are emerging that will make probing those systems easier. AI’s “black box” problem isn’t as impenetrable as many think.

AI powers most of the action we see on Facebook or YouTube, in particular the recommendation systems that decide which posts go into your newsfeed or which videos you should watch next, all to keep you scrolling. Millions of pieces of data are used to train AI software, allowing it to make predictions loosely similar to a human’s. The hard part, for engineers, is understanding how the AI reaches a decision in the first place. Hence the black-box concept.
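To make that concrete, here is a minimal Python sketch of the kind of engagement-driven ranking such a system performs. Everything in it, from the field names to the scoring weights, is invented for illustration; it is not any platform’s real formula.

    posts = [
        {"id": "a", "watch_pred": 45.0, "share_pred": 0.02},
        {"id": "b", "watch_pred": 12.0, "share_pred": 0.20},
        {"id": "c", "watch_pred": 90.0, "share_pred": 0.01},
    ]

    def engagement_score(post):
        # Weighted blend of predicted engagement signals; the weights
        # are assumptions, not any platform's actual numbers.
        return post["watch_pred"] + 100 * post["share_pred"]

    # Rank the feed so posts predicted to hold attention come first.
    feed = sorted(posts, key=engagement_score, reverse=True)
    print([p["id"] for p in feed])  # ['c', 'a', 'b']

The real systems learn those weights from millions of interactions rather than hard-coding them, which is precisely why their behavior is so hard to explain after the fact.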

Consider two pictures, one of a fox and the other of a dog.

You can probably tell within a few milliseconds which animal is the fox and which is the dog. But can you explain how you know? Most people would find it hard to articulate what it is about the nose, ears or shape of the head that tells them which is which. But they know for sure which picture shows the fox.

A similar paradox affects machine-learning models: they will often give the right answer, but their designers often can’t explain how. That doesn’t make them completely inscrutable. A small but growing industry monitors how these systems work. Its most popular task is improving an AI model’s performance. Companies that use these auditors also want to make sure their AI isn’t making biased decisions when, for example, sifting through job applications or granting loans.

Here’s an example of how one of these start-ups works. A financial firm recently used Israeli start-up Aporia to check whether a campaign to attract students was working. Aporia, which employs software and human auditors, found that the company’s AI system was actually making errors, granting loans to some young people it shouldn’t have, or withholding loans from others unnecessarily. When Aporia looked closer, it found out why: Students made up less than 1 percent of the data the firm’s AI had been trained on.
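A rough Python sketch shows the shape of that kind of check, though not Aporia’s actual method; every count and threshold below is invented for illustration.

    # Hypothetical audit: flag any subgroup that is rare in the
    # training data and unusually error-prone in reviewed decisions.
    training_counts = {"student": 80, "non-student": 9920}  # students under 1%
    errors = {"student": 31, "non-student": 180}            # mistaken decisions
    decisions = {"student": 100, "non-student": 4900}       # decisions reviewed

    total_trained = sum(training_counts.values())
    overall_error = sum(errors.values()) / sum(decisions.values())

    for group in training_counts:
        train_share = training_counts[group] / total_trained
        error_rate = errors[group] / decisions[group]
        flagged = train_share < 0.05 and error_rate > 2 * overall_error
        print(f"{group}: {train_share:.1%} of training data, "
              f"{error_rate:.1%} error rate" + ("  <- flagged" if flagged else ""))

Run on these made-up numbers, the check flags the student group: it is well under 5 percent of the training data and its error rate is far above the overall rate, the same pattern the auditors found.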

In a lot of ways, the reputation of AI’s black box for impenetrability has been exaggerated, according to Aporia’s chief executive officer, Liran Hosan. With the right technology, you can even — potentially — unpick the ultra-complicated language models that underpin social media firms, in part because in computing, even language can be represented by numerical code. Finding out how an algorithm might be spreading hate speech, or failing to tackle it, is certainly harder than spotting mistakes in the numerical data that represent loans, but it’s possible. And European regulators are going to try.
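The point about language is easy to demonstrate with a toy Python encoding that turns a sentence into a vector of word counts. Real models use learned embeddings with thousands of dimensions, but the underlying principle, text becoming numbers, is the same.

    import string

    vocabulary = ["loan", "student", "hate", "video"]

    def to_vector(text):
        # Strip punctuation, lowercase, then count vocabulary terms.
        cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
        words = cleaned.split()
        return [words.count(term) for term in vocabulary]

    print(to_vector("Student loan? Student loan."))  # [2, 2, 0, 0]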

According to a spokesman for the European Commission, the upcoming Digital Services Act will require online platforms to undergo audits once a year to assess how “risky” their algorithms are to citizens. That may sometimes force firms to provide unprecedented access to information that many consider trade secrets: code, training data and process logs. (The commission said its auditors would be bound by confidentiality rules.)

But let’s suppose Europe’s watchdogs couldn’t delve into Facebook or YouTube code. Suppose they couldn’t probe the algorithms that decide what videos or posts to recommend. There would still be much they could do.

Manoel Ribeiro, a doctoral student at the Swiss Federal Institute of Technology in Lausanne, published a study in 2019 in which he and his co-authors tracked how certain visitors to YouTube were being radicalized by far-right content. He didn’t need to access any of YouTube’s code to do this. The researchers simply looked at comments on the site to see what channels users went to over time. It was like tracking digital footprints: painstaking work, but it ultimately revealed how a fraction of YouTube users were being lured into white-supremacist channels by way of influencers who acted like a gateway drug.
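A hedged Python sketch of that comment-trail idea, not Ribeiro’s actual pipeline, might look like the following; the users, years and channel categories are all invented.

    from collections import defaultdict

    comments = [
        # (user, year, channel_category)
        ("u1", 2016, "mainstream"), ("u1", 2017, "gateway"), ("u1", 2018, "fringe"),
        ("u2", 2016, "mainstream"), ("u2", 2018, "mainstream"),
        ("u3", 2017, "gateway"),    ("u3", 2018, "fringe"),
    ]

    # Reconstruct each commenter's path through channel categories over time.
    paths = defaultdict(list)
    for user, year, category in sorted(comments, key=lambda c: c[1]):
        paths[user].append(category)

    migrated = [u for u, path in paths.items()
                if "gateway" in path and path[-1] == "fringe"]
    print(f"{len(migrated)} of {len(paths)} commenters drifted to fringe channels")

None of this requires a single line of the platform’s own code, only the public traces users leave behind.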

Ribeiro’s study is part of a broader array of research that has tracked the psychological side effects of Facebook or YouTube without needing to understand their algorithms. While such studies offer relatively superficial perspectives on how social media platforms work, they can still help regulators impose broader obligations on the platforms. These can range from hiring compliance officers to ensure a company is following the rules to giving auditors accurate, random samples of the kinds of content people are being driven toward.

That is a radically different prospect from the secrecy that Big Tech has been able to operate under until now. And it will involve both new technology and new policies. For regulators, that could well be a winning combination.

Parmy Olson is a Bloomberg Opinion columnist covering technology.
