Commentary: Elon Musk wants chatbots to pause; it’s too late

We’re all guinea pigs for an utterly new and potentially flawed technology with few guardrails in place.

By Parmy Olson / Bloomberg Opinion

Elon Musk and an array of public figures have signed their names to an open letter that went viral this week, calling for a six-month pause on training language models more powerful than GPT-4, the technology underpinning ChatGPT.

Some strange inconsistencies with the signatories aside, the letter is odd. It criticizes the deployment of powerful chatbot technology as rash, but it also over-hypes that technology's capabilities, drawing on the doom-mongering about AI and killer robots that has captivated the press and distracted from more nuanced, real-world risks.

“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” it asks dramatically. “Should we risk loss of control of our civilization?”


Of course not, but there are issues we should be more worried about now, like the concentration of AI capabilities among a few increasingly secretive companies, inequality as artists find their work plagiarized without compensation, and all the risks to come from companies racing to plug ChatGPT into their systems.

On that last point, the toothpaste is already out of the tube. OpenAI last week launched a new system that will allow businesses to plug ChatGPT into their proprietary databases, allowing its chatbot to carry out tasks on their systems like retrieving information, making bookings and even running new software that it creates.

While the plugin announcement didn’t get much attention in the mainstream press, many technologists saw it as a stunning leap forward for ChatGPT. Not only could it search and synthesize information it had been trained on, it could also take action.

Think about that for a moment. Machine learning systems make decisions in an inscrutable black box. OpenAI spent seven months testing GPT-4 before releasing it into the wild, but its so-called “red team” engineers, who tested how it might be misused, could only cover a fraction of the ways it might be exploited by millions of real-world users. However much OpenAI has tested and prodded its system to make sure it is safe, no one really knows the full extent of its risks until it is deployed publicly. And those risks become more serious when ChatGPT can start doing things on the web.

Taking such gambles is, of course, what made ChatGPT so popular. OpenAI gambled by releasing ChatGPT to the public, and the result was adulation and viral success for an AI team that’s tiny compared with those at Big Tech firms.

Now, with these plugins, OpenAI is making another gamble, giving ChatGPT even more capabilities and unleashing it to businesses. So far, 11 companies including Expedia Group Inc. and payments operator Klarna Bank AB have “plugged” ChatGPT into their servers as initial test cases.

Klarna’s plugin, for instance, will turn ChatGPT into a shopping assistant that can make recommendations based on the millions of products that the Swedish company facilitates online payments for. The plugin isn’t available yet, but when it is, users will go to the ChatGPT web page and log in, then click on the Klarna plugin to activate and “talk” to it.

Some of the new capabilities that OpenAI is offering businesses look dicey. One, called “code interpreter,” allows a business to get ChatGPT to write and even execute code on its behalf. Software developers typically use a code interpreter to make sure the program they’re writing works as intended, allowing them to test small snippets. With the new plugin, ChatGPT can also write and test code, and that sounds potentially hazardous from a security perspective.
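The hazard is easiest to see in miniature. In Python, executing a snippet with the built-in `exec` gives that snippet the same privileges as the host process; a minimal sketch (the string here simply stands in for model-generated code, not any actual OpenAI output):

```python
# A harmless snippet standing in for code written by a language model.
generated_code = "result = sum(range(10))"

# Executing it in a fresh namespace: the snippet runs with the full
# privileges of the host process. A less benign snippet could just as
# easily open files or make network requests.
namespace = {}
exec(generated_code, namespace)
print(namespace["result"])  # -> 45
```

Nothing about the execution step distinguishes an arithmetic snippet from a malicious one, which is why letting a chatbot write and run its own code raises security questions.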

“I hope that there are enough guardrails in place,” said Edo Liberty, the former head of Amazon AI Labs and current CEO of Pinecone, a startup that has a technical partnership with OpenAI on one of its new plugins, when I asked him about the risks of letting ChatGPT run code. “This is something that could theoretically be a vulnerability.”

Marco Cardoso, a software engineer with Microsoft Corp. based in Sao Paulo, Brazil, tested ChatGPT’s new code-writing abilities and noted how remarkable it was that the system could “now make requests to any server on the Internet.” But that also meant there was “nothing stopping” ChatGPT from accessing a network it wasn’t supposed to. To prevent that, Cardoso put in his own guardrail, explicitly instructing the tool to only access one particular domain. But that puts extra responsibility on the users of ChatGPT, he noted. What’s to stop it from making a request to an unknown server?
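A guardrail like the one Cardoso describes can also be enforced in code rather than in the prompt, by checking every URL the tool proposes against an allow-list before any request is sent. A minimal sketch, with an illustrative allow-list and function name that are not part of any OpenAI API:

```python
from urllib.parse import urlparse

# The single domain the tool is permitted to access (illustrative).
ALLOWED_HOSTS = {"api.example.com"}

def is_request_allowed(url: str) -> bool:
    """Return True only if the URL's host is on the allow-list."""
    host = urlparse(url).hostname
    return host in ALLOWED_HOSTS

print(is_request_allowed("https://api.example.com/v1/data"))  # True
print(is_request_allowed("https://unknown-server.example.net/x"))  # False
```

An external check like this doesn't depend on the model obeying its instructions, which is the weakness of a prompt-only guardrail.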

Ethan Mollick, a professor at Wharton who has experimented extensively with ChatGPT, tweeted this week that trying out its plugins put a spotlight on OpenAI’s own research about the “unpredictable risks” of connecting the tool to other systems.

OpenAI’s nimbleness as a small company has allowed it to move quickly to release remarkable new technology to the public, from DALL-E 2 to ChatGPT to GPT-4, all within a year, achieving the kind of impact that much larger companies could only dream of. But the flip side is that we’re all guinea pigs for utterly new and potentially flawed technology.

In some ways, it doesn’t really matter if OpenAI agrees to the open letter and pauses its development work for six months. With companies rushing to plug ChatGPT into their systems and test it out on the public, it may already be too late.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of “We Are Anonymous.”
