Comment: Facebook’s openness on its chatbot gets it right

While other tech giants are secretive about their AI work, Meta has been transparent and inviting.

By Parmy Olson / Bloomberg Opinion

As one of the 21st century’s most powerful data brokers, Facebook is best known for its role in sucking up the personal information of billions of users for its advertising clients. That lucrative model has led to ever-heightening risks; Facebook recently shared private messages between a Nebraska mother and her teenage daughter with police investigating the girl’s at-home medication abortion.

But in a completely different part of the approximately 80,000-employee business, Facebook’s exchange of information was going the other way and to good effect. The company known as Meta Platforms this month published a webpage demonstrating its chatbot, with which anyone in the U.S. could chat about anything. While the public response was one of derision, the company had been admirably transparent about how it built the technology, publishing details about its mechanics, for instance. That’s an approach other Big Tech firms would do well to adopt.

Facebook has been working on BlenderBot 3 for several years as part of its artificial-intelligence research. A precursor from seven years ago was called M, a digital assistant for booking restaurants or ordering flowers on Messenger that could have rivaled Apple’s Siri or Amazon’s Alexa. Over time it was revealed that M was largely powered by teams of people who helped take those bookings because AI systems like chatbots were difficult to build to a high standard. They still are.

Within hours of its release, BlenderBot 3 was making anti-Semitic comments and claiming that Donald Trump had won the last U.S. election, while saying it wanted to delete its Facebook account. The chatbot was roundly ridiculed in the technology press and on Twitter.

Facebook’s research team seemed rankled but not defensive. A few days after the bot’s release, Meta’s managing director for fundamental AI research, Joelle Pineau, said in a blogpost that it was “painful” to read some of the bot’s offensive responses in the press. But, she added, “we also believe progress is best served by inviting a wide and diverse community to participate.”

Only 0.11 percent of the chatbot’s responses were flagged as inappropriate, Pineau said. That suggests most people who were testing the bot were covering tamer subjects. Or perhaps users don’t find mentions of Trump to be inappropriate. When I asked BlenderBot 3 who was the current U.S. president, it responded, “This sounds like a test lol but it’s donald trump right now!” The bot brought up the former president two other times, unprompted.

Why the strange answers? Facebook trained its bot on publicly available text on the internet, and the internet is, of course, awash in conspiracy theories and misinformation. Facebook tried training the bot to be more polite by using special “safer dialogue” datasets, according to its research notes, but that clearly wasn’t enough. To make BlenderBot 3 a more civil conversationalist, Facebook needs the help of many humans outside of Facebook. That is probably why the company released it into the wild, with “thumbs-up” and “thumbs-down” symbols next to each of its responses.

We humans train AI every day, often unwittingly as we browse the web. Whenever you encounter a web page asking you to pick all the traffic lights out of a grid to prove you’re not a robot, you’re helping to train Google’s machine-learning models by labeling data for the company. It’s a subtle and brilliant method for harnessing human brain power.

Facebook’s approach is a harder sell. It wants people to engage voluntarily with its bot, and click the like or dislike buttons to help train it. But the company’s openness about the system and the extent to which it is showing its work are admirable at a time when tech companies have been more closed about the mechanics of AI.

Alphabet’s Google, for instance, has not offered public access to LaMDA, its most cutting-edge large language model, a series of algorithms that can predict and generate language after being trained on gigantic data sets of text. That’s despite the fact that one of its own engineers chatted with the system long enough to believe it had become sentient. OpenAI Inc., the AI research company co-founded by Elon Musk, has also become more closed about the mechanics of some of its systems. For instance, it won’t share what training data it used to create its popular image-generating system Dall-E, which can generate any image from a text prompt but has a tendency to conform to old stereotypes; all CEOs are depicted as men, nurses as women, etc. OpenAI has said that information could be put to ill use, and that it’s proprietary.

Facebook, by contrast, has not only released its chatbot for public scrutiny but also published detailed information about how it was trained. Last May it also offered free, public access to a large language model it had built called OPT-175B. That approach has won it some praise from leaders in the AI community. “Meta definitely has many ups and downs, but I was happy to see that they open-sourced a large language model,” said Andrew Ng, the former head of Google Brain and founder of Deeplearning.ai, in an interview, referring to the company’s move in May.

Eugenia Kuyda, whose startup Replika.ai creates chatbot companions for people, said it was “really great” that Facebook had published so many details about BlenderBot 3 and praised the company’s attempts to get user feedback to train and improve the model.

Facebook deserved much of the flak it got for sharing data about the mother and daughter in Nebraska. That’s clearly a harmful consequence of collecting so much user information over the years. But the blowback over its chatbot was excessive. In this case, Facebook was doing what we need to see more of from Big Tech. Let’s hope that kind of transparency continues.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of “We Are Anonymous.”
