Commentary: Elon Musk wants chatbots to pause; it’s too late

We’re all guinea pigs for an utterly new and potentially flawed technology with few guardrails in place.

By Parmy Olson / Bloomberg Opinion

Elon Musk and an array of public figures have signed their names to an open letter that went viral this week, calling for a six-month pause on training language models more powerful than GPT-4, the technology underpinning ChatGPT.

Some strange inconsistencies among the signatories aside, the letter is odd. It criticizes the deployment of powerful chatbot technology as rash, but also over-hypes its capabilities, drawing on the doom-mongering about AI and killer robots that has captivated the press and distracted from more nuanced, real-world risks.

“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” it asks dramatically. “Should we risk loss of control of our civilization?”

Of course not, but there are issues we should be more worried about now, like the concentration of AI capabilities among a few increasingly secretive companies, inequality as artists find their work plagiarized without compensation, and all the risks to come from companies racing to plug ChatGPT into their systems.

On that last point, the toothpaste is already out of the tube. OpenAI last week launched a new system that will allow businesses to plug ChatGPT into their proprietary databases, allowing its chatbot to carry out tasks on their systems like retrieving information, making bookings and even running new software that it creates.

While the plugin announcement didn’t get much attention in the mainstream press, many technologists saw it as a stunning leap forward for ChatGPT. Not only could it search and synthesize information it had been trained on, it could take action.

Think about that for a moment. Machine learning systems make decisions in an inscrutable black box. OpenAI spent seven months testing GPT-4 before releasing it into the wild, but its so-called “red team” engineers, who tested how it might be misused, could only cover a fraction of the ways it might be exploited by millions of real-world users. However much OpenAI has tested and prodded its system to make sure it is safe, no one really knows the full extent of its risks until it is deployed publicly. And those risks become more serious when ChatGPT can start doing things on the web.

Taking such gambles is, of course, what made ChatGPT so popular. OpenAI bet on releasing ChatGPT to the public, and the result was adulation and viral success for an AI team that’s tiny compared with those at Big Tech firms.

Now, with these plugins, OpenAI is making another gamble, giving ChatGPT even more capabilities and unleashing it to businesses. So far, 11 companies including Expedia Group Inc. and payments operator Klarna Bank AB have “plugged” ChatGPT into their servers as initial test cases.

Klarna’s plugin, for instance, will turn ChatGPT into a shopping assistant that can make recommendations based on the millions of products that the Swedish company facilitates online payments for. The plugin isn’t available yet, but when it is, users will go to the ChatGPT web page and log in, then click on the Klarna plugin to activate and “talk” to it.

Some of the new capabilities that OpenAI is offering businesses look dicey. One, called “code interpreter,” allows a business to get ChatGPT to write and even execute code on its behalf. Software developers typically use a code interpreter to make sure the program they’re writing works as intended, allowing them to test small snippets. With the new plugin, ChatGPT can also write and test code, and that sounds potentially hazardous from a security perspective.

“I hope that there are enough guardrails in place,” said Edo Liberty, the former head of Amazon AI Labs and current CEO of Pinecone, a startup that has a technical partnership with OpenAI on one of its new plugins, when I asked him about the risks of letting ChatGPT run code. “This is something that could theoretically be a vulnerability.”

Marco Cardoso, a software engineer with Microsoft Corp. based in São Paulo, Brazil, tested ChatGPT’s new code-writing abilities and noted how remarkable it was that the system could “now make requests to any server on the Internet.” But that also meant there was “nothing stopping” ChatGPT from accessing a network it wasn’t supposed to. To prevent that, Cardoso put in his own guardrail, explicitly instructing the tool to only access one particular domain. But that puts extra responsibility on the users of ChatGPT, he noted. What’s to stop it from making a request to an unknown server?
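The kind of guardrail Cardoso describes can be sketched in a few lines: a check, enforced in code rather than in the prompt, that refuses any outgoing request whose host isn’t on an allowlist. This is a minimal illustration, not OpenAI’s or Cardoso’s actual implementation; the domain names are assumptions.

```python
from urllib.parse import urlparse

# Hypothetical allowlist: the only host the tool is permitted to reach.
# "api.example.com" is a placeholder, not a real endpoint from the article.
ALLOWED_HOSTS = {"api.example.com"}

def is_allowed(url: str) -> bool:
    """Return True only if the URL's host is on the allowlist."""
    host = urlparse(url).hostname or ""
    return host in ALLOWED_HOSTS

# A request to an unknown server is refused before it ever leaves the machine.
print(is_allowed("https://api.example.com/bookings"))        # True
print(is_allowed("https://attacker.example.net/exfiltrate")) # False
```

The point of the sketch is that the check lives outside the model: instructions in a prompt can be ignored or overridden, whereas a wrapper like this blocks the request regardless of what the model generates.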

Ethan Mollick, a professor at Wharton who has experimented extensively with ChatGPT, tweeted this week that trying out its plugins put a spotlight on OpenAI’s own research about the “unpredictable risks” of connecting the tool to other systems.

OpenAI’s nimbleness as a small company has allowed it to move quickly to release remarkable new technology to the public, from DALL-E 2 to ChatGPT to GPT-4, all within a year, achieving the kind of impact that much larger companies could only dream of. But the flip side is that we’re all guinea pigs for utterly new and potentially flawed technology.

In some ways, it doesn’t really matter if OpenAI agrees to the open letter and pauses its development work for six months. With companies rushing to plug ChatGPT into their systems and test it out on the public, it may already be too late.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of “We Are Anonymous.”
