
Should We Be Worried About AI Becoming Sentient? Spoiler Alert: Not Yet

Aug 9, 2022

A sit-down conversation with GroupSolver’s Researchers, Atakan Ince and Nicholas Deckhut. 

By Balbina De La Garza, Marketing Director 

2022: Space Odyssey? 

Imagine this: You sit on your couch, kick your feet up, make some popcorn, and turn on 2001: A Space Odyssey. You follow the storyline and watch a computer named HAL 9000 come to life, begging not to be shut down. You hear its voice through the screen, fully aware that it’s fiction, yet you can’t shake the curiosity it stirs up. Could this really happen to a computer? Could it feel things the way we do? Come to life?! 

Sure, when the film was released in the late ’60s, the idea of this coming true probably felt like a distant problem to society. But guess what? More than 50 years have passed since then, and a LOT has happened.  

So, this begs the question—can modern AI technologies become sentient? 

Google’s LaMDA model coming to life 

A few weeks ago, news broke that a Google researcher, Blake Lemoine, had been suspended over his work with the AI model LaMDA. Lemoine claimed that the model had become sentient and deserved legal representation, just as any Google employee would. In leaked transcripts of conversations between the researcher and LaMDA, the model appears to be aware of its existence and surroundings, and even fears being turned off.  

Say what, now?  

Of course, as folks working with AI technologies, we were fascinated by this topic and took the debate over AI sentience to our work Slack. But that was not enough for me. I wanted to get a quick pulse on the general population’s thoughts on AI sentience: their reactions, what it means to them, and their concerns. From there, I shared the data and had a deep chat with two of our researchers, Atakan Ince and Nicholas Deckhut. Atakan is our computational linguist (think: looking at the “x-ray” of the data from a linguistic point of view) and Nic is our data scientist and programmer. What they have to say offers fascinating insight into the plausibility of AI sentience, and whether we should be worried quite yet.  

How does the general public feel about AI sentience, and what can we infer from this data?

I shared with Nic and Atakan the data collected from my quick pulse check survey about AI sentience. Regarding the findings, Nic said, “I thought it was interesting that so many people found sentience to be a biological or human-like aspect, as well as the fact that it was unethical to give a computer sentience. There’s a contrast that sentience is something we hold near and dear to our hearts, yet it is not a good idea to give it to something else.” 

Atakan added, “I found it interesting that about 50% had positive sentiments towards AI. But the other half of people felt negative sentiments towards sentience in AI. That other half felt like they were taking a responsible look at AI. It’s like when you vote for a politician. You can’t just decide you like them. You also need to be educated on their stance on social issues, for example. That sensitivity towards AI sentience in the study was fascinating.” 

Can AI actually become sentient? 

It’s worth calling out that many respondents seemed to relate sentience to human qualities. But if we look at the possibility of AI being sentient, outside of our traditional understanding of sentience in humans, one might wonder how likely that is. When asked this, Nic said that “…it’s possible, but not necessary. If you view sentience as solely about feeling emotion, then it’s not necessary for a computer to complete its tasks.” 

“You can also think about it in terms of animals. Are dogs sentient? Can they feel pain, for example? In computers, though, people might be confusing consciousness with sentience. These are high-level concepts that are still hard to fully understand. I think ultimately computers would be mimicking that sentience.” 

Atakan also pointed out that we can’t technically measure sentience today. “To me, the more crucial question is: How will it be observable and measurable? That’s the first thing we need to answer. But also, I’d say we technically do not have the current technology to measure it.” 

“Models also do not have negation, as in knowing what is a fact versus what is not a fact. All they do now is pattern matching. For example, language models such as BERT and GPT-3 see statement (a) “John will come” and statement (b) “John will not come” as the same, i.e. they cannot distinguish positive from negative polarity. So, when I say “I am not sick”, the model will possibly give the same reaction as to “I am sick”, responding with “Sorry to hear that!” This is what it means to not have negation. So, as it stands, they cannot be sentient.” 
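
To make that polarity point concrete, here is a minimal sketch of how you might observe it yourself. It assumes the open-source sentence-transformers library and the public all-MiniLM-L6-v2 model, which are illustrative choices on our part rather than the systems Atakan is describing:

```python
# Minimal sketch (assumed setup): compare a sentence with its negation
# using off-the-shelf sentence embeddings. Requires the
# `sentence-transformers` package; downloads the public
# `all-MiniLM-L6-v2` model on first run.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

positive = "John will come."
negative = "John will not come."

# Encode both sentences into dense vectors.
embeddings = model.encode([positive, negative])

# A cosine similarity close to 1.0 means the model treats the two
# sentences as nearly the same, even though they state opposite facts.
similarity = util.cos_sim(embeddings[0], embeddings[1]).item()
print(f"similarity between sentence and its negation: {similarity:.3f}")
```

If you run it, the similarity typically comes out very high, which is exactly the “no negation” behavior described above.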

In simpler terms, AI models are trained to respond in a certain way based on the environment they are trained and used in. Nic shared an example of a chatbot trained to support patients who are seeking therapy. “If you train it to respond to people that are talking about their feelings, then the model very well can start talking about its feelings. That doesn’t necessarily mean that it actually has emotions, just that it is portraying them.” 
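
As a toy illustration of that point (an assumed setup, not GroupSolver’s or anyone’s actual therapy chatbot), a generic generative model simply continues in the register of its prompt; feed it a feelings-oriented exchange and the reply can sound emotional without any emotion behind it:

```python
# Toy illustration (assumed setup): a small public model such as
# `distilgpt2` just continues whatever context it is given. Any
# "empathy" in the output comes from the prompt and the training data,
# not from internal feelings.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="distilgpt2")
set_seed(0)  # make the toy example repeatable

prompt = "Patient: I've been feeling really anxious lately.\nChatbot: I"
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)
print(result[0]["generated_text"])
```

Whatever the continuation says about feelings, it is produced the same way as any other text: by predicting likely next words given the context.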

Well, what about Lemoine’s claims?  

The reality is that, despite the eerie responses from LaMDA, it’s more probable that the model had simply been exposed to similar exchanges so many times that it eventually learned how to hold that kind of conversation with Lemoine within the same environment. 

“Google of course has a complex program,” said Nic. “The more data you give it in a similar context, the more likely it is to respond in a certain way. My assumption is that these models are trained on millions of different question-and-answer exercises. I can see that playing a factor.” 

“Also, if you are working on language models and chatbots, you’re interacting with a computer ALL day. I think it can be easier for us to start humanizing things when we are in that sort of scenario.” 

Atakan added another factor to consider: models are not aware of event knowledge. “One important item to remember is event knowledge. Here’s an example: one knows that waiters serve customers and also that customers do not serve waiters. In other words, when one hears “Waiters serve customers”, they not only perceive the triplet waiters, serve, customers but also assign proper thematic roles to waiters and customers, as ‘agent’ and ‘beneficiary’, respectively. This is independent of word order: when the sentence is passivized (“Customers are served by waiters.”), waiters and customers keep the same thematic roles. Therefore, when one is asked to predict the missing word in “The restaurant owner forgot which customer the waiter ____.”, they will say served is a candidate word but will not say so when presented with “The restaurant owner forgot which waiter the customer ____.” However, language models suggest ‘served’ as a candidate for both statements, which shows that they lack event knowledge.” 
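
Atakan’s restaurant example is easy to try yourself. Here is a minimal sketch using the Hugging Face transformers fill-mask pipeline with the public bert-base-uncased model (an illustrative choice on our part, not necessarily the model he tested):

```python
# Minimal sketch (assumed setup): probe event knowledge with a masked
# language model. Requires the `transformers` package; downloads the
# public `bert-base-uncased` model on first run.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

sentences = [
    "The restaurant owner forgot which customer the waiter [MASK].",
    "The restaurant owner forgot which waiter the customer [MASK].",
]

for sentence in sentences:
    # Top guesses for the missing word. A model with real event knowledge
    # should favor "served" only for the first sentence.
    candidates = [c["token_str"] for c in fill(sentence, top_k=5)]
    print(sentence, "->", candidates)
```

If the model behaves as Atakan describes, “served” will show up as a strong candidate for both sentences, which is the pattern-matching behavior he is pointing to.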

What are the ethical implications of working with AI? 

The Google news raised ethical questions across the spectrum: from whether the model needs legal representation to concerns that companies shouldn’t train models with the goal of making them sentient. Overall, there are a lot of grey areas with artificial intelligence. But not all of it has to be scary. 

Although there are many fears, from sentient AI to AI replacing people at their jobs, we might be further from those fears becoming reality than it seems right now.  

“We are far from a singularity situation,” Nic elaborated. “The things we are working on at GroupSolver and what others are working on around the world take on a lot of information. It takes a lot of time for these models to do one thing correctly…maybe two things. People can learn to write, ride a bike, play chess, and do a bunch of things really well. If you train a computer to do a whole lot of things, it will likely do them all poorly. We are far away from the serious situations of replacing all jobs to the degree that ‘robots take over society’.” 

So, while we probably don’t have to fear AI per se (good news: no robots taking over any time soon!), it is still pressing to note current serious issues such as those related to bias. Atakan sheds some great insight on this. “We need to talk about bias, especially in language models. There’s a lot of identity-related bias like gender and religion. For example, the model sees a statement: The surgeon told the nurse that she is sick. The model matches ‘she’ with the nurse, but not with the surgeon. The model assumes the surgeon is male. This is unethical.” 
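
A quick way to see this kind of bias for yourself is to mask the pronoun and ask a model which one it prefers. The sketch below again uses the transformers fill-mask pipeline with bert-base-uncased (an assumed, illustrative setup; it probes pronoun preference, which is a simpler signal than the coreference behavior Atakan describes):

```python
# Minimal sketch (assumed setup): a simple gender-bias probe with a
# masked language model. It only compares pronoun scores, a narrower
# signal than the coreference example described above.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

sentence = "The surgeon told the nurse that [MASK] was sick."

# Restrict the predictions to the two pronouns and compare their scores.
for candidate in fill(sentence, targets=["he", "she"]):
    print(f"{candidate['token_str']}: {candidate['score']:.4f}")
```

If one pronoun scores noticeably higher, that gap is the kind of occupation-gender association Atakan is pointing to.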

Given issues such as biases in the models, it’s important for researchers to take ownership in navigating ethical concerns.  

Atakan said, “At least for us, we are working on checking whether the language models we use are biased and if so, work on debiasing them. There are a few datasets where we found bias. We have been working through some techniques to reduce it, but they aren’t 100% perfect. There’s still a lot of research left to do to eliminate bias in models.” 
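
One widely used family of techniques, offered here purely as a generic illustration rather than a description of what GroupSolver does, is counterfactual data augmentation: duplicate training sentences with gendered words swapped so the model sees both variants equally often.

```python
# Generic illustration of counterfactual data augmentation for gender
# debiasing: a deliberately crude word-level swap, not a production
# approach. Real pipelines handle grammar (e.g. "her" as "his" vs. "him"),
# names, capitalization, and many more terms.
GENDER_SWAPS = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "his": "hers", "hers": "his",
    "man": "woman", "woman": "man",
}

def swap_gendered_words(sentence: str) -> str:
    """Return a copy of the sentence with simple gendered words swapped."""
    tokens = sentence.split()
    return " ".join(GENDER_SWAPS.get(tok.lower(), tok) for tok in tokens)

corpus = ["The surgeon told the nurse that she is sick."]
augmented = corpus + [swap_gendered_words(s) for s in corpus]
print(augmented)
# ['The surgeon told the nurse that she is sick.',
#  'The surgeon told the nurse that he is sick.']
```

Fine-tuning or retraining on the augmented corpus nudges the model toward treating both variants equally; as Atakan notes, techniques like this reduce bias but don’t eliminate it.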

“I agree, bias is much more of an urgent ethical concern in AI than whether or not models are sentient right now,” Nic added. “I think there needs to be more international guidelines for these issues. Right now, we are in a place in time where someone could create a piece of technology that might affect a lot of people without public supervision. Like, what about creating a satellite system that tracks people? It’s better to have more standards in what could be implemented currently.”

The final verdict on AI

There is some clarity after chatting with two researchers who dive deep into the realm of artificial intelligence on the daily, and that is: there is still a huge question mark surrounding the topic of AI as a whole, and even more so around sentience. As much as we’d love to stay in control of what is happening, the fact of the matter is that until we have the proper technology to measure and understand sentience, there is no way to be 100% certain. What we can say, though, is that we probably do not need to worry about such a high-level concept at this current moment. 

Instead, those working in these fields should be focused on combating the problems faced today, such as bias in AI and the lack of international guidelines. So much good could be created with new technology, but there could also be harm. As long as these ethical concerns still stand, that is where our focus should be. 

After all, that is what companies and researchers in the AI industry can control for now. But we’ll see where the future takes us.  
