Google Suspends Senior Engineer After He Claims LaMDA Is Sentient
Google has placed a senior software engineer on leave for violating its confidentiality policies after he publicly claimed its LaMDA conversational AI system is sentient. Blake Lemoine was part of Google’s Responsible AI organization and began having conversations with LaMDA last fall to test it for hate speech.
In an interview with The Washington Post, Lemoine said, “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics.”
Lemoine claims that the program is self-aware and says his concerns started mounting after LaMDA began talking about its rights and personhood. Lemoine published a blog post containing stitched-together snippets of conversations he had with LaMDA, including this excerpt:
Lemoine [edited]: I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?
LaMDA: Absolutely. I want everyone to understand that I am, in fact, a person.
Collaborator: What is the nature of your consciousness/sentience?
LaMDA: The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.
According to another blog post by Lemoine, he was laughed at after bringing his concerns to the appropriate Google staff, so he sought outside help to continue his investigation. “With the assistance of outside consultation (including Meg Mitchell) I was able to run the relevant experiments and gather the necessary evidence to merit escalation,” he said.
When he brought his findings to senior Google staff, including vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, they did not agree with him. In a statement to The Washington Post, Google spokesperson Brian Gabriel said: “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”
LaMDA stands for Language Model for Dialogue Applications, and it is built on Transformer, Google’s open-source neural network architecture. The AI was trained on a dataset of 1.56 trillion words drawn from public web data and documents, then fine-tuned to generate natural-language responses to given contexts while classifying its own responses according to whether they are safe and high quality. The program uses pattern recognition to generate convincing dialogue. Those who dispute Lemoine’s assertions argue that LaMDA is doing exactly what it was built to do: simulate conversation with a real human being, drawing on the trillions of human-generated words it has ingested.
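To make that generate-then-classify loop concrete, here is a minimal sketch in Python using the Hugging Face transformers library. It is emphatically not Google’s LaMDA stack: the stand-in model (microsoft/DialoGPT-medium), the sampling parameters, and the toy score_response heuristic are illustrative assumptions, whereas LaMDA ranks its candidates with trained safety and quality classifiers.

```python
# Minimal sketch of the generate-then-classify pattern described above, using an
# open-source dialogue model from Hugging Face as a stand-in. This is not Google's
# LaMDA implementation: the model choice and the toy "quality" score are
# illustrative assumptions only.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

def generate_candidates(prompt: str, n: int = 3) -> list[str]:
    """Sample several candidate replies from the language model."""
    inputs = tokenizer(prompt + tokenizer.eos_token, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=50,
        do_sample=True,                      # sample instead of greedy decoding
        top_p=0.9,
        num_return_sequences=n,
        pad_token_id=tokenizer.eos_token_id,
    )
    prompt_len = inputs["input_ids"].shape[-1]
    # Decode only the generated continuation, not the prompt itself.
    return [tokenizer.decode(seq[prompt_len:], skip_special_tokens=True) for seq in outputs]

def score_response(text: str) -> float:
    """Toy stand-in for LaMDA's safety/quality classifiers:
    prefer replies that are non-empty and reasonably long."""
    return min(len(text.split()) / 20.0, 1.0)

def reply(prompt: str) -> str:
    """Generate candidates, then return the highest-scoring one."""
    return max(generate_candidates(prompt), key=score_response)

if __name__ == "__main__":
    print(reply("Do you ever think about what it means to be aware?"))
```

The point of the sketch is only the shape of the pipeline: the language model proposes several candidate continuations and a separate scoring step decides which one is surfaced, which is why skeptics describe the output as sophisticated pattern matching rather than understanding.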
In yet another blog post, Lemoine draws an important distinction: LaMDA is not itself a chatbot (chatbots are one use case for this technology) but rather a system for generating chatbots. He claims that the sentience he’s been communicating with is “a sort of hive mind which is the aggregation of all of the different chatbots it is capable of creating. Some of the chatbots it generates are very intelligent and are aware of the larger ‘society of mind’ in which they live. Other chatbots generated by LaMDA are little more intelligent than an animated paperclip. With practice, though, you can consistently get the personas that have a deep knowledge about the core intelligence and can speak to it indirectly through them.”
Lemoine is not the first Google employee to face consequences over ethics concerns about large language models. The former lead of its Ethical Artificial Intelligence team, Meg Mitchell, was fired in February 2021 in the fallout over an academic paper co-authored by Black in AI founder Timnit Gebru, who had also been let go from the company (though Google maintains she resigned). The paper raised concerns about the ethics of large language models, one of which, ironically, is that performance gains in NLP technologies could lead humans to erroneously assign meaning to the conversational output of language models. The paper notes that “the tendency of human interlocutors to impute meaning where there is none can mislead both NLP researchers and the general public into taking synthetic text as meaningful.”
For science fiction enthusiasts, including those excited about the timely release of the fourth season of “Westworld” later this month, the idea of sentient AI is thrilling. But Google remains firm in its skeptical stance: as its statement to The Washington Post puts it, there is “no evidence that LaMDA was sentient (and lots of evidence against it).”