Comment: If AI ‘writers’ were human, they would have been fired

A series of stories, written by AI, have embarrassed news sites and raised questions about their use.

By Paul Farhi / The Washington Post

In their short life as machine-generators of news stories, artificial intelligence programs have screwed up simple interest calculations, botched the chronology of Star Wars movies and produced sports stories that appeared to contain little actual knowledge of sports.

The latest embarrassing bit of robots-gone-wild “reporting”: An obituary of a former NBA player described in the headline as “useless at 42.”

The article — published by an obscure news site, Race Track, then shared widely by MSN — appears to be based on a legitimate news story from TMZ about the death of Brandon Hunter and then run through a tool known as a “spinner” that masks plagiarism by replacing certain words with synonyms.

But some synonyms don’t scan; hence, the bizarre description of Hunter as a former NBA “participant” who “performed” for the Boston Celtics and Orlando Magic and “achieved a career-high of 17 factors in a recreation.”
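The failure mode is easy to reproduce. A minimal sketch of such a spinner — the word pairs below are hypothetical illustrations, not taken from any actual tool — shows why blind, context-free substitution breaks meaning:

```python
# Hypothetical synonym table; a real spinner would be larger,
# but the mechanism is the same.
SYNONYMS = {
    "player": "participant",
    "played": "performed",
    "points": "factors",
    "game": "recreation",
}

def spin(text: str) -> str:
    # Swap each word for its "synonym" regardless of context,
    # which is exactly what produces the garbled sports prose.
    return " ".join(SYNONYMS.get(word, word) for word in text.split())

print(spin("the former NBA player scored 17 points in a game"))
# -> the former NBA participant scored 17 factors in a recreation
```

Because the tool never considers context, a perfectly good sentence about basketball comes out reading like it was written by someone who has never seen a game.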

The belly flop, ridiculed across social media, wasn’t just an embarrassment for MSN — whose actual human editors took the story down — and its parent Microsoft, a leading AI developer, but for automated journalism generally.

Newsrooms have utilized simple AI tools for several years, mostly to produce corporate earnings reports, transcribe recordings and check spelling. The potentially revolutionary advance is generative AI, which remixes vast amounts of data to create new stories, raising fears that publishers could someday replace their ever-diminishing news staffs with armies of bot reporters.

But generative AI programs still have bugs and limitations that would get a rookie reporter fired. They can’t discern fact from fiction, which means they can pass off nonsense just as easily as the real goods. They can’t call up experts and sources to gather new information, which limits their effectiveness on breaking news stories. They also have trouble understanding context and cultural nuance — that is, what’s appropriate in the body of a news article.

And so a travel article generated by AI and published by Microsoft in August recommended that tourists in Ottawa pay a visit to the Ottawa Food Bank. “Consider going into it on an empty stomach,” the article suggested, rather cruelly. Microsoft removed the piece after it was mocked on Twitter.

Microsoft hasn’t explained how the AI content slipped past its human gatekeepers, if any were involved. The company issued a statement about the Hunter obituary saying that “we continue to enhance our systems to identify and prevent inaccurate information from appearing on our channels.” The original publisher, Race Track, could not be reached for comment and appears to have been deleted.

While AI-written prose can be serviceable, it can also be painfully clunky. Readers of the Columbus Dispatch last month encountered an article about a high school football game described as a “close encounter of the athletic kind” and another reporting that a team “avoided the brakes and shifted into victory gear.” They were the product of Lede AI, a program deployed by Gannett, the nation’s largest newspaper chain, and suspended after the stories drew mockery.

“As with any new technological advance, some glitches can occur,” Jay Allred, chief executive of Lede AI, conceded in a statement to The Washington Post.

To be fair, the defects exposed in AI-written articles so far suggest the flaw is as much in the humans as in the robots. MSN and other red-faced publishers of AI-generated news articles all appear to have skipped a critical step in the journalistic assembly line: double-checking and editing copy before it’s published.

The root of the issue may be Microsoft’s decision in 2020 to lay off dozens of journalists who maintained MSN’s homepage and the news pages of its Edge browser, said Victor Tangermann, a senior editor at Futurism, which has closely covered AI’s march into journalism.

“Publishers are trying to cut costs and keep the content machine spinning,” he said. “But what we’re seeing over and over is that AI isn’t quite up to the job yet, so it’s backfiring embarrassingly for publications that try to use it … This has allowed a lot of bad material to slip through.”

He added, “It’s been hard to identify any cases of compelling or even acceptable journalism” produced by AI so far.

Human input also appears to have been lacking at Gizmodo, a tech site, after its io9 entertainment section published the flawed list of Star Wars movies and TV shows in early July. The site’s deputy editor, James Whitbrook, said on Twitter that his editorial team had no involvement with its publication and blasted it in an email to parent company G/O Media as “embarrassing, unpublishable, disrespectful of both the audience and the people who work here, and a blow to our authority and integrity.”

Some AI news developers worry that defective automated news stories will damage both AI and the news media before AI develops its full potential.

The viral AI stories “detract and distract from more thoughtful and useful applications of the technology that could help to sustain news organizations that are publishing unique, quality information for their readers,” said Matt MacVey, who is leading an AI news-development initiative at New York University.

“AI and automation are here to stay and will be integrated into all sorts of tech and software that we use daily,” he said. But “it just takes one unscrupulously published article going viral to raise a lot of scrutiny and concern.”

Among others, Google has been testing an AI-based system that can take in information and produce news stories; it has pitched its Genesis system to various publishers, including The Washington Post and the New York Times. Neither has implemented it.

In the meantime, publications should be more forthcoming about when a robot is the reporter, said Tangermann. He urges editors to clearly mark machine-generated copy so that readers know what they’re getting.

“In a few years, AI may have become astoundingly clever, and if its capabilities do surpass that of human journalists, it’ll be a whole different conversation,” he said.

But at the moment, he’s not impressed. So far, he said, generative AI has mostly generated “chaos.”

Paul Farhi has been a media reporter at The Washington Post since 2010. Prior to that, he was a financial reporter, a political reporter and a Style reporter. Daniel Wu contributed to this report.
