By Parmy Olson / Bloomberg Opinion
Elon Musk has painted himself as a humanitarian figure building a utopian future through a passel of companies. Don’t fall for it. The billionaire’s silence on the sudden reversal of U.S. government guidelines for building safer artificial intelligence shows where his priorities lie: political capital and his own business interests.
Among the cornucopia of executive orders that President Donald Trump signed this week was a repeal of Joseph Biden’s order on AI. Issued in October 2023, it called on major AI companies to share safety test results with the government. It was a simple list of requests: the order couldn’t legally force tech firms to do anything, but it was the strongest signal yet that the U.S. government was serious about the safety and oversight of AI systems.
Trump did say on the campaign trail that he would revoke the order, following grumbling from members of the Republican Party that it stifled innovation. But Musk, now serving as an adviser with a White House role and direct access to Trump, has remained conspicuously silent on an action he might once have forcefully opposed.
In March 2023, he signed an open letter calling for a six-month pause on advanced AI, warning it posed “profound risks to society and humanity.” A few months later he told BBC News that AI could cause “civilization destruction,” comparing it with nuclear weapons. Musk ended his longtime friendship with Google co-founder Larry Page over an argument about AI risk, according to Walter Isaacson’s biography, and he co-founded OpenAI over concerns that Google wasn’t paying enough attention to the technology’s existential threat to humanity.
“I think we need to regulate AI safety,” Musk said in 2023. “It is, I think, actually a bigger risk to society than cars or planes or medicine.”
If Musk truly believed that, he’d be advising the new president to maintain the system already in place, which wasn’t that onerous to begin with. So far, the largest AI labs have voluntarily cooperated with AI safety institutes in both the U.S., housed at the National Institute of Standards and Technology (NIST) in Maryland, and the United Kingdom. Biden’s order set not so much hard standards as guidance on reporting and transparency for tech firms.
That’s sorely needed at a time when, thanks to the opaque nature of the largest AI labs, we know more about the ingredients in a packet of Doritos than we do about a generative AI model that banks and law firms are plugging into their systems.
The stakes of AI development have only grown bigger, with OpenAI and partners including SoftBank Group Corp. and Oracle Corp. now planning a $500 billion infrastructure investment that would dramatically accelerate AI development, exactly the kind of rapid scaling that Musk once warned could be catastrophic. Yet on this, too, the former doomsayer remains quiet.
Such selective silence is hardly surprising from someone who launched Tesla Inc. to combat climate change but now aligns with anti-electric vehicle politicians, or who claims to champion free speech while kicking journalists off his platform and suing his critics.
Musk’s principles seem as erratic as his tweets, and right now being Trump’s new best friend appears to outweigh being humanity’s self-appointed sentinel.
Musk’s warnings on AI weren’t necessarily right; there are more pressing near-term concerns about the security and fairness of AI models and their impact on the job market. But his current hush speaks volumes about how a billionaire’s apocalyptic concerns can be set aside for political convenience.
Perhaps we should expect less agitating from Musk on AI standards, and more of his time and energy spent clearing away policies that could keep his companies, including Space Exploration Technologies Corp., Tesla and X.AI Corp., from getting ahead in the AI race. If the man who once called AI humanity’s greatest existential threat won’t speak up to defend basic safety measures, it’s worth asking what other principles of his might crumble in the face of power and access.
Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of “Supremacy: AI, ChatGPT and the Race That Will Change the World.”