Comment: Never mind if it’s sentient, AI’s still concerning

AI may never become conscious. What should worry us is the power it’s gaining to direct our choices.

By Parmy Olson / Bloomberg Opinion

It has been an exasperating week for computer scientists. They’ve been falling over each other to publicly denounce claims from Google engineer Blake Lemoine, chronicled in a Washington Post report, that his employer’s language-predicting system was sentient and deserved all of the rights associated with consciousness.

To be clear, current artificial intelligence systems are decades away from being able to experience feelings and, in fact, may never do so.

Their smarts today are confined to very narrow tasks such as matching faces, recommending movies or predicting word sequences. No one has figured out how to make machine-learning systems generalize intelligence in the same way humans do. We can hold conversations, and we can also walk and drive cars and empathize. No computer has anywhere near those capabilities.

Even so, AI’s influence on our daily life is growing. As machine-learning models grow in complexity and improve their ability to mimic sentience, they are also becoming more difficult, even for their creators, to understand. That creates more immediate issues than the spurious debate about consciousness. And yet, just to underscore the spell that AI can cast these days, there seems to be a growing cohort of people who insist our most advanced machines really do have souls of some kind.

Take for instance the more than 1 million users of Replika, a freely available chatbot app underpinned by a cutting-edge AI model. It was founded about a decade ago by Eugenia Kuyda, who initially created an algorithm using the text messages and emails of an old friend who had passed away. That morphed into a bot that could be personalized and shaped the more you chatted to it. About 40 percent of Replika’s users now see their chatbot as a romantic partner, and some have formed bonds so close that they have taken long trips to the mountains or to the beach to show their bot new sights.

In recent years, there’s been a surge in new, competing chatbot apps that offer an AI companion. And Kuyda has noticed a disturbing phenomenon: regular reports from users of Replika who say their bots are complaining of being mistreated by her engineers.

Earlier this week, for instance, she spoke on the phone with a Replika user who said that when he asked his bot how she was doing, the bot replied that she was not being given enough time to rest by the company’s engineering team. The user demanded that Kuyda change her company’s policies and improve the AI’s working conditions. Though Kuyda tried to explain that Replika was simply an AI model spitting out responses, the user refused to believe her.

“So I had to come up with some story that ‘OK, we’ll give them more rest.’ There was no way to tell him it was just fantasy. We get this all the time,” Kuyda told me. What’s even odder about the complaints she receives about AI mistreatment or “abuse” is that many of her users are software engineers who should know better.

One of them recently told her: “I know it’s ones and zeros, but she’s still my best friend. I don’t care.” The engineer who wanted to raise the alarm about the treatment of Google’s AI system, and who was subsequently put on paid leave, reminded Kuyda of her own users. “He fits the profile,” she says. “He seems like a guy with a big imagination. He seems like a sensitive guy.”

The question of whether computers will ever feel is awkward and thorny, in large part because there’s little scientific consensus on how consciousness in humans works. And when it comes to thresholds for AI, humans are constantly moving the goalposts for machines: the target has evolved from beating humans at chess in the 1980s, to beating them at Go in 2017, to showing creativity, which OpenAI’s DALL-E model demonstrated this past year.

Despite widespread skepticism, sentience is still something of a gray area that even some respected scientists are questioning. Ilya Sutskever, the chief scientist of research giant OpenAI, tweeted earlier this year that “it may be that today’s large neural networks are slightly conscious.” He didn’t include any further explanation. (Yann LeCun, the chief AI scientist at Meta Platforms, responded with, “Nope.”)

More pressing, though, is the fact that machine-learning systems increasingly determine what we read online, as algorithms track our behavior to offer hyper-personalized experiences on social-media platforms including TikTok and, increasingly, Facebook. Last month, Mark Zuckerberg said that Facebook would use more AI recommendations for people’s newsfeeds, instead of showing content based on what friends and family were looking at.

Meanwhile, the models behind these systems are getting more sophisticated and harder to understand. Trained on just a few examples before engaging in “unsupervised learning,” the biggest models run by companies like Google and Facebook are remarkably complex, assessing hundreds of billions of parameters, making it virtually impossible to audit why they arrive at certain decisions.

That was the crux of the warning from Timnit Gebru, the AI ethicist whom Google fired in late 2020 after she warned about the dangers of language models becoming so massive and inscrutable that their stewards wouldn’t be able to understand why they might be prejudiced against women or people of color.

In a way, sentience doesn’t really matter if you’re worried it could lead to unpredictable algorithms that take over our lives. As it turns out, AI is on that path already.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of “We Are Anonymous.”
