Stanford AI experts call BS on claims that Google’s LaMDA chatbot is sentient

Two Stanford heavyweights have weighed in on the fiery AI sentience debate — and the duo is firmly in the “BS” corner.

The dispute recently came to a head over claims about Google’s LaMDA system.

Developer Blake Lemoine sparked the controversy. Lemoine, who worked for Google’s Responsible AI team, had been testing whether the large language model (LLM) used harmful speech.

The 41-year-old told The Washington Post that his conversations with the AI convinced him that it had a sentient mind.

“I know a person when I talk to it,” he said. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

Google denied his claims. The company put Lemoine on leave for publishing confidential information and, in July, fired him.

An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers. https://t.co/uAE454KXRB

— Blake Lemoine (@cajundiscordian) June 11, 2022

The episode triggered sensationalist headlines and speculation that AI is gaining consciousness. AI experts, however, have largely dismissed Lemoine’s argument.

The Stanford duo this week shared further criticisms with The Stanford Daily.

“LaMDA is not sentient for the simple reason that it does not have the physiology to have sensations and feelings,” said John Etchemendy, the co-director of the Stanford Institute for Human-centered AI (HAI). “It is a software program designed to produce sentences in response to sentence prompts.”

Yoav Shoham, the former director of the Stanford AI Lab, agreed that LaMDA isn’t sentient. He described The Washington Post article as “pure clickbait.”

“They published it because, for the time being, they could write that headline about the ‘Google engineer’ who was making this absurd claim, and because most of their readers are not sophisticated enough to recognize it for what it is,” he said.

Distraction techniques

Shoham and Etchemendy join a growing chorus of critics concerned that the public is being misled.

The hype may generate clicks and market products, but researchers fear it’s distracting us from more pressing issues.

LLMs are causing particular alarm. While the models have become adept at generating humanlike text, excitement about their “intelligence” can mask their shortcomings.

Research shows these systems can have enormous carbon footprints, amplify discriminatory language, and pose real dangers.

“Debate around whether LaMDA is sentient or not moves the whole conversation towards debating nonsense and away from critical issues like how racist and sexist LLMs often are, huge compute resources LLMs require, [and] their failure to accurately represent marginalized language/identities,” tweeted Abeba Birhane, a senior fellow in trustworthy AI at Mozilla.

It’s hard to predict when — or if — truly sentient AI will emerge. But focusing on that prospect is making us overlook the real-life consequences that are already unfolding.
