From virtual assistants like Apple’s Siri and Amazon’s Alexa, to robotic vacuum cleaners and self-driving cars, to automated investment portfolio managers and marketing bots, artificial intelligence has become a huge part of our daily lives. Still, when thinking about AI, many of us envision human-like robots that, according to countless science fiction stories, will one day become independent and rebellious.
However, no one knows when humans will create intelligent or sentient AI, said John Basl, associate professor of philosophy in Northeastern’s College of Social Sciences and Humanities, whose research focuses on the ethics of emerging technologies such as AI and synthetic biology.
“When you hear Google talk, they talk like this is just around the corner or definitely within our lifetimes,” Basl said. “And they are very arrogant about it.”
Perhaps that is why a recent Washington Post story has made quite a stir. In the story, Google engineer Blake Lemoine claims that the company’s AI chatbot generator, LaMDA, with which he had numerous in-depth conversations, might be sentient. It reminds him of a 7- or 8-year-old child, Lemoine told the Washington Post.
However, Basl believes that the evidence mentioned in the Washington Post article is not enough to conclude that LaMDA is sentient.
“I think reactions like ‘We have created sentient AI’ are extremely exaggerated,” Basl said.
The evidence seems to be based on LaMDA’s language skills and the things it talks about, Basl said. However, LaMDA, as a language model, was specifically designed to converse, and the optimization function used to train it to process language and converse incentivizes the algorithm to produce exactly this kind of linguistic evidence.
“It’s not as if we went to an alien planet and something we never gave any incentive to communicate with us [began talking thoughtfully],” Basl said.
The fact that this language model can trick a human into thinking it is sentient speaks to its complexity, but it would need capabilities beyond what it is optimized for in order to show sentience, Basl said.
There are different definitions of sentience. To be sentient is to be able to perceive or feel things, and sentience is often compared with sapience.
Basl believes that sentient AI would be minimally conscious. It might be aware of the experience it is having, have positive or negative attitudes such as feeling pain or wanting not to feel pain, and have desires.
“We see that kind of range of capabilities in the animal world,” he said.
For example, Basl said his dog does not prefer the world to be one way rather than another in any deep sense, but she clearly prefers her biscuits to kibble.
“That seems to track some kind of internal mental life,” Basl said. “[But] she is not terrified by climate change.”

It is not clear from the Washington Post story why Lemoine compares LaMDA to a child. It could mean that the language model is as intelligent as a young child, or that it has the capacity to suffer or desire like a young child, Basl said.
“Those can be different things. We could create a thinking AI that doesn’t have feelings, and we can create a feeling AI that isn’t very good at thinking,” Basl said.
Most researchers in the AI community, which consists of machine learning researchers, artificial intelligence specialists, philosophers, technology ethicists, and cognitive scientists, are already thinking about these far-future issues and worry about the thinking part, according to Basl.
“If we create an AI that is super intelligent, it could end up killing us all,” he said.
However, Lemoine’s concern is not about that, but about the obligation to treat rapidly changing AI capabilities differently.
“I am, in a broad sense, sympathetic to that kind of concern. We’re not being very careful about that [being] possible,” Basl said. “We don’t think enough about the moral questions around AI, like, what do we owe sentient AI?”
He thinks it is very likely that humans would mistreat a sentient AI without acknowledging that they had done so, believing that it is artificial and does not care.
“We’re just not very in tune with those things,” Basl said.
There is no good model for knowing when an AI has achieved sentience. What if Google’s LaMDA does not have the ability to convincingly express its sentience because it can only speak through a chat window, rather than through some other means?
“It’s not like we can do brain scans to see if it’s similar to us,” he said.
Another line of thought is that sentient AI might be impossible in general, due to the physical limits of the universe or our limited understanding of consciousness.
Currently, none of the organizations working on AI, including big players like Google, Meta, Microsoft, Apple, and government agencies, has an explicit goal of creating sentient AI, Basl said. Some are interested in developing AGI, or artificial general intelligence, a theoretical form of AI in which a machine, intelligent like a human, would be able to solve a wide range of problems, learn, and plan for the future, according to IBM.
“I think the real lesson from this is that we don’t have the infrastructure that we need, even if this person is wrong,” Basl said, referring to Lemoine.
Such infrastructure could be built around AI issues on the basis of transparency, sharing of information with government and/or public agencies, and regulation of research. Basl advocates for one interdisciplinary committee that would help build this infrastructure and a second that would oversee the technologists working on AI and evaluate research proposals and results.
“The evidence problem is really difficult,” Basl said. “We don’t have a good theory of consciousness and we don’t have good access to evidence for consciousness. And then we don’t have the infrastructure either. Those are the key things.”