Comment: AI, social media usher in the end of photographic proof

It’s now easy to convincingly fake photos, and even easier to make them go viral on Twitter.

By Parmy Olson / Bloomberg Opinion

A fake photo of an explosion near the Pentagon went viral across Twitter on Monday, and stocks dipped. The incident confirmed what many have said for months: Misinformation is on course to be supercharged as new AI tools for concocting photos get easier to use.

Fixing this problem with technology will be an endless game of whack-a-mole. It’s certainly worth trying to track image provenance, as Adobe is doing with its Content Authenticity Initiative. But as the saying goes, a lie can travel around the world and back again while the truth is still lacing up its boots. In a world where more content than ever is being generated artificially, we’ll all need to become more skeptical about what we see online – especially in the run-up to a U.S. presidential election next year.

The Pentagon “photo” became particularly messy because of Twitter’s poor excuse for a verification system. Elon Musk revamped the site’s blue ticks so that they would no longer be monopolized by “elites” like press and celebrities, and so more people could become “verified” and have a louder voice for a flat fee. Unfortunately, his system has become a target for imitators, like the paid account BloombergFeed, which was one of several verified accounts that posted the Pentagon photo before getting suspended Monday morning.

BloombergFeed and a Twitter account called Walter Bloomberg, which also carried the report, are not affiliated with Bloomberg News, according to a spokesperson for Bloomberg News.

Although Twitter has made a perfect environment for fake AI photos to flourish, the problem ultimately goes beyond the platform. The Pentagon photo originated on Facebook, and we can expect more photos like it to circulate on other social networks too, such as WhatsApp, where fake information about last year’s elections in Brazil went viral through the app’s forwarding feature.

TikTok could also become more susceptible to fake videos soon enough. Early examples of videos made with AI tools still look glitchy, but they’re likely to become more realistic in the next year or two, with millions of dollars of venture-capital investment going into start-ups building deepfake technology (for legitimate purposes, of course).

For instance, New York start-up Runway has just released a tool that allows anyone to transform one video into another type of video using words and images as prompts, while San Francisco-based Gan.ai has raised money from VC luminaries like Sequoia Capital to sell “video personalization” software to brands.

While realistic fake videos might still be a year or two out, image generation is becoming easier than ever. Adobe has just updated Photoshop with generative AI tools that let users of the ubiquitous image editor manipulate photos in far more drastic ways. And several good image-generating tools are available as mobile apps, making them easier for people to access on the go. While tools like Adobe’s, Midjourney’s or OpenAI’s DALL-E 2 won’t create images of celebrities, politicians, violence and war, open-source alternatives like Stable Diffusion will.

When I asked Stable Diffusion’s co-founder last year about how the world should deal with a surge in fake photos, he said we’ll all have to adjust. “People will be aware of the fact that anyone can create that image on their phone, in one second,” Emad Mostaque said. “People will be like, ‘Oh it’s probably just created.’”

Remember the internet saying, “pics or it didn’t happen”? Soon enough, pics won’t be so useful as proof, and we’ll find ourselves questioning legitimate images, too. Twitter users got a taste of AI’s potential for accelerating misinformation in March, when a fake photo of Pope Francis in a puffer jacket went viral. As we predicted back then, the potential for fakery has taken a darker turn.

Generative AI and dodgy blue check marks are a perfect mix for misinformation to thrive on Twitter. And as Meta Platforms prepares to cut more jobs in the coming weeks, staff are concerned their content-moderation teams will be curbed too, according to a Tuesday report in The Washington Post, meaning there will be fewer people at Facebook to handle the problem.

This time last year, platforms like Twitter and Facebook had improved their abilities to stamp out misinformation. Things look different today. The tech companies have to do a better job of preventing fake news from spreading, but we will also need to approach what we see with greater doses of skepticism. At a time when seeing is no longer believing, we must arm ourselves with more discerning eyes, and a little more doubt.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of “We Are Anonymous.”
