By Parmy Olson / Bloomberg Opinion
Elon Musk and an array of public figures have signed their names to an open letter that went viral this week, calling for a six-month pause on training language models more powerful than GPT-4, the technology underpinning ChatGPT.
Some strange inconsistencies with the signatories aside, the letter is odd. It criticizes the deployment of powerful chatbot technology as rash, but it also over-hypes that technology's capabilities, drawing on the doom-mongering about AI and killer robots that has captivated the press and distracted from more nuanced, real-world risks.
“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” it asks dramatically. “Should we risk loss of control of our civilization?”
Of course not, but there are issues we should be more worried about now, like the concentration of AI capabilities among a few increasingly secretive companies, inequality as artists find their work plagiarized without compensation, and all the risks to come from companies racing to plug ChatGPT into their systems.
On that last point, the toothpaste is already out of the tube. OpenAI last week launched a new system that will allow businesses to plug ChatGPT into their proprietary databases, allowing its chatbot to carry out tasks on their systems like retrieving information, making bookings and even running new software that it creates.
While the plugin announcement didn’t get much attention in the mainstream press, many technologists saw it as a stunning leap forward for ChatGPT. Not only could it search and synthesize information it had been trained on, it could take action.
Think about that for a moment. Machine learning systems make decisions in an inscrutable black box. OpenAI spent seven months testing GPT-4 before releasing it into the wild, but its so-called “red team” engineers, who tested how it might be misused, could only cover a fraction of the ways it might be exploited by millions of real-world users. However much OpenAI has tested and prodded its system to make sure it is safe, no one really knows the full extent of its risks until it is deployed publicly. And those risks become more serious when ChatGPT can start doing things on the web.
Taking such gambles is, of course, what made ChatGPT so popular. OpenAI bet on releasing ChatGPT to the public, and the result was adulation and viral success for an AI team that’s tiny compared with those at Big Tech firms.
Now with these plugins OpenAI is making another gamble, giving ChatGPT even more capabilities and unleashing it to businesses. So far, 11 companies including Expedia Group Inc. and payments operator Klarna Bank AB have “plugged” ChatGPT into their servers as initial test cases.
Klarna’s plugin, for instance, will turn ChatGPT into a shopping assistant that can make recommendations based on the millions of products that the Swedish company facilitates online payments for. The plugin isn’t available yet, but when it is, users will go to the ChatGPT web page, log in, then click on the Klarna plugin to activate and “talk” to it.
Some of the new capabilities that OpenAI is offering businesses look dicey. One, called “code interpreter,” allows a business to get ChatGPT to write and even execute code on its behalf. Software developers typically use a code interpreter to make sure the program they’re writing works as intended, allowing them to test small snippets. With the new plugin, ChatGPT can also write and test code, and that sounds potentially hazardous from a security perspective.
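To see why security researchers find this hazardous, consider a minimal sketch (not OpenAI’s actual implementation) of what it means to run model-generated code. In Python, a generated string handed to `exec()` runs with the full privileges of the host process, and a string-scanning “guardrail” like the one below is easy to evade; real sandboxes isolate the process instead.

```python
# Hypothetical sketch: the hazard of writing AND running generated code.
# exec() gives any generated string the same access the application has.

generated_code = "result = sum(range(10))"   # stand-in for model output

namespace = {}
exec(generated_code, namespace)              # runs with full process access
print(namespace["result"])                   # -> 45

# A crude guardrail: reject snippets mentioning dangerous names.
# Real sandboxes isolate the process rather than scanning strings.
BLOCKED = ("open", "exec", "eval", "__import__", "subprocess")

def looks_safe(code: str) -> bool:
    """Naive denylist check -- illustrative only, trivially bypassed."""
    return not any(name in code for name in BLOCKED)

print(looks_safe(generated_code))                          # True
print(looks_safe("__import__('os').system('whoami')"))     # False
```

The gap between the naive check and a genuine sandbox is precisely the attack surface the article is describing.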
“I hope that there are enough guardrails in place,” said Edo Liberty, the former head of Amazon AI Labs and current CEO of Pinecone, a startup that has a technical partnership with OpenAI on one of its new plugins, when I asked him about the risks of letting ChatGPT run code. “This is something that could theoretically be a vulnerability.”
Marco Cardoso, a software engineer with Microsoft Corp. based in Sao Paulo, Brazil, tested ChatGPT’s new code-writing abilities and noted how remarkable it was that the system could “now make requests to any server on the Internet.” But that also meant there was “nothing stopping” ChatGPT from accessing a network it wasn’t supposed to. To prevent that, Cardoso put in his own guardrail, explicitly instructing the tool to only access one particular domain. But that puts extra responsibility on the users of ChatGPT, he noted. What’s to stop it from making a request to an unknown server?
Ethan Mollick, a professor at Wharton who has experimented extensively with ChatGPT, tweeted this week that trying out its plugins put a spotlight on OpenAI’s own research about the “unpredictable risks” of connecting the tool to other systems.
OpenAI’s nimbleness as a small company has allowed it to move quickly to release remarkable new technology to the public, from DALL-E 2 to ChatGPT to GPT-4, all within a year, achieving the kind of impact that much larger companies could only dream of. But the flip side is that we’re all guinea pigs for utterly new and potentially flawed technology.
In some ways, it doesn’t really matter if OpenAI agrees to the open letter and pauses its development work for six months. With companies rushing to plug ChatGPT into their systems and test it out on the public, it may already be too late.
Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of “We Are Anonymous.”