Comment: We should worry more about AI’s creators than AI itself

Their warnings of an ‘extinction threat’ are part marketing tool and part effort to avoid scrutiny.

By Parmy Olson / Bloomberg Opinion

For a hot minute last week it looked like we were already on the brink of killer AI.

Several news outlets reported that a military drone attacked its operator after deciding the human stood in the way of its objective. Except it turned out this was a simulation. And then it transpired the simulation itself didn’t happen. An Air Force colonel had mistakenly described a thought experiment as real at a conference.

Even so, a fib travels halfway around the world before the truth laces up its boots, and the story is bound to seep into our collective, unconscious worries about AI’s threat to the human race, an idea that has gained steam thanks to warnings from two “godfathers” of AI and two open letters about existential risk.

Fears deeply baked into our culture about runaway gods and machines are being triggered. But everyone needs to calm down and take a closer look at what’s really going on here.

First, let’s acknowledge the cohort of computer scientists who have long believed AI systems, like ChatGPT, need to be more carefully aligned with human values. They propose that if you design AI to follow principles like integrity and kindness, it is less likely to turn around and try to kill us all in the future. I have no issue with these scientists.

But in the last few months, the idea of an extinction threat has become such a fixture in public discourse that you could bring it up at dinner with your in-laws and have everyone nodding in agreement about the issue’s importance.

On the face of it, this is ludicrous. It is also great news for leading AI companies, for two reasons:

1. It creates the specter of an all-powerful AI system that will eventually become so inscrutable we can’t hope to understand it. That may sound scary, but it also makes the technology more attractive amid the current rush to buy and deploy AI. It might one day, maybe, wipe out the human race, but doesn’t that just illustrate how powerfully it could impact your business today?

This kind of paradoxical propaganda has worked in the past. The prestigious AI lab DeepMind, largely seen as OpenAI’s top competitor, started life as a research lab with the ambitious target of building AGI, or artificial general intelligence that could surpass human capabilities. Its founders Demis Hassabis and Shane Legg weren’t shy about the existential threat of this technology when they first went to big venture capital investors like Peter Thiel to seek funding more than a decade ago. In fact, they talked openly about the risks, and got the money they needed.

Spotlighting AI’s world-destroying capabilities in vague ways allows us to fill in the blanks with our imagination, ascribing infinite capabilities and power to future AI. It’s a masterful marketing ploy.

2. It draws attention away from other initiatives that could hurt the business of leading AI firms. Some examples: The European Union this month is voting on a law, called the AI Act, that would force OpenAI to disclose any copyrighted material used to develop ChatGPT. (OpenAI’s Sam Altman initially said his firm would “cease operating” in the European Union because of the law, then backtracked.) An advocacy group also recently urged the U.S. Federal Trade Commission to launch a probe into OpenAI, and push the company to satisfy the agency’s requirements for AI systems to be “transparent, explainable [and] fair.”

Transparency is at the heart of AI ethics, a field that large tech firms invested in more heavily between 2015 and 2020. Back then, Google, Twitter and Microsoft all had robust teams of researchers exploring how AI systems like those powering ChatGPT could inadvertently perpetuate biases against women and ethnic minorities, infringe on people’s privacy and damage the environment.

Yet the more their researchers dug up, the more their business models appeared to be part of the problem. A 2021 paper by Google AI researchers Timnit Gebru and Margaret Mitchell argued that the large language models being built by their employer could carry dangerous biases against minority groups, a problem made worse by the models’ opacity, and that they were vulnerable to misuse. Gebru and Mitchell were subsequently fired. Microsoft and Twitter also went on to dismantle their AI ethics teams.

That has served as a warning to other AI ethics researchers, according to Alexa Hagerty, an anthropologist and affiliate fellow with the University of Cambridge. “You’ve been hired to raise ethics concerns,” she says, characterizing the tech firms’ view, “but do not raise the ones we don’t like.”

The result is a crisis of funding and attention for the field of AI ethics, and confusion about where researchers should turn if they want to audit AI systems, a task made all the more difficult by leading tech firms becoming more secretive about how their AI models are built.

That’s a problem even for those who worry about catastrophe. How are people in the future expected to control AI if those systems aren’t transparent, and humans don’t have expertise in scrutinizing them?

The idea of untangling AI’s black box, often touted as near impossible, may not be so hard. A May 2023 article in the Proceedings of the National Academy of Sciences (PNAS), a peer-reviewed journal, suggested that solving AI’s so-called explainability problem is not as unrealistic as many experts had believed.

Technologists who warn about catastrophic AI risk, like OpenAI CEO Sam Altman, often do so in vague terms. Yet if such organizations truly believed there was even a tiny chance their technology could wipe out civilization, why build it in the first place? Doing so conflicts with the longtermist moral math of Silicon Valley’s AI builders, which says a tiny risk with infinite cost should be a major priority.

Looking more closely at AI systems now, rather than wringing our hands about a vague apocalypse of the future, is not only more sensible; it also puts humans in a stronger position to prevent a catastrophic event from happening in the first place. Yet tech companies would much prefer that we worry about that distant prospect than push for transparency around their algorithms.

When it comes to our future with AI, we must not let the distractions of science fiction pull us away from the greater scrutiny that’s necessary today.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of “We Are Anonymous.”
