Comment: Facebook’s openness on its chatbot gets it right

While other tech giants are secretive about their AI work, Meta has been transparent and inviting.

By Parmy Olson / Bloomberg Opinion

As one of the 21st century’s most powerful data brokers, Facebook is best known for its role in sucking up the personal information of billions of users for its advertising clients. That lucrative model has led to ever-heightening risks; Facebook recently shared private messages between a Nebraska mother and her teenage daughter with police investigating the girl’s at-home medication abortion.

But in a completely different part of the approximately 80,000-employee business, Facebook's exchange of information was going the other way, and to good effect. The company, known as Meta Platforms, this month published a webpage demonstrating its chatbot, with which anyone in the U.S. could chat about anything. While the public response was one of derision, the company had been admirably transparent about how it built the technology, publishing details about its mechanics, for instance. It's an approach that other Big Tech firms would do well to emulate.

Facebook has been working on BlenderBot 3 for several years as part of its artificial-intelligence research. A precursor from seven years ago was called M, a digital assistant on Messenger for booking restaurants or ordering flowers that could have rivaled Apple's Siri or Amazon's Alexa. Over time it was revealed that M was largely powered by teams of people who helped take those bookings, because AI systems like chatbots were difficult to build to a high standard. They still are.

Within hours of its release, BlenderBot 3 was making anti-Semitic comments and claiming that Donald Trump had won the last U.S. election, while saying it wanted to delete its Facebook account. The chatbot was roundly ridiculed in the technology press and on Twitter.

Facebook's research team seemed rankled but not defensive. A few days after the bot's release, Meta's managing director for fundamental AI research, Joelle Pineau, said in a blog post that it was "painful" to read some of the bot's offensive responses in the press. But, she added, "we also believe progress is best served by inviting a wide and diverse community to participate."

Only 0.11 percent of the chatbot’s responses were flagged as inappropriate, Pineau said. That suggests most people who were testing the bot were covering tamer subjects. Or perhaps users don’t find mentions of Trump to be inappropriate. When I asked BlenderBot 3 who was the current U.S. president, it responded, “This sounds like a test lol but it’s donald trump right now!” The bot brought up the former president two other times, unprompted.

Why the strange answers? Facebook trained its bot on publicly available text on the internet, and the internet is, of course, awash in conspiracy theories and misinformation. Facebook tried training the bot to be more polite by using special “safer dialogue” datasets, according to its research notes, but that clearly wasn’t enough. To make BlenderBot 3 a more civil conversationalist, Facebook needs the help of many humans outside of Facebook. That is probably why the company released it into the wild, with “thumbs-up” and “thumbs-down” symbols next to each of its responses.

We humans train AI every day, often unwittingly, as we browse the web. Whenever you encounter a web page asking you to pick all the traffic lights out of a grid to prove you're not a robot, you're helping to train Google's machine-learning models by labeling data for the company. It's a subtle and brilliant method for harnessing human brain power.

Facebook’s approach is a harder sell. It wants people to engage voluntarily with its bot, and click the like or dislike buttons to help train it. But the company’s openness about the system and the extent to which it is showing its work are admirable at a time when tech companies have been more closed about the mechanics of AI.

Alphabet's Google, for instance, has not offered public access to LaMDA, its most cutting-edge large language model, a series of algorithms that can predict and generate language after being trained on gigantic data sets of text. That's despite the fact that one of its own engineers chatted with the system for long enough to believe it had become sentient. OpenAI Inc., the AI research company co-founded by Elon Musk, has also become more closed about the mechanics of some of its systems. For instance, it won't share what training data it used to create its popular image-generating system DALL-E, which can generate any image from a text prompt but has a tendency to reproduce old stereotypes: all CEOs are depicted as men, nurses as women, and so on. OpenAI has said that information could be put to ill use, and that it's proprietary.

Facebook, by contrast, has not only released its chatbot for public scrutiny but also published detailed information about how it was trained. Last May it also offered free, public access to a large language model it had built, called OPT-175B. That approach has won it some praise from leaders in the AI community. "Meta definitely has many ups and downs, but I was happy to see that they open-sourced a large language model," said Andrew Ng, the former head of Google Brain and founder of Deeplearning.ai, in an interview, referring to the company's move in May.

Eugenia Kuyda, whose startup Replika.ai creates chatbot companions for people, said it was “really great” that Facebook had published so many details about BlenderBot 3 and praised the company’s attempts to get user feedback to train and improve the model.

Facebook deserved much of the flak it got for sharing data about the mother and daughter in Nebraska. That's clearly a harmful consequence of collecting so much user information over the years. But the blowback over its chatbot was excessive. In this case, Facebook was doing what we need to see more of from Big Tech. Let's hope that kind of transparency continues.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of “We Are Anonymous.”
