Google fires employee who suspects AI sentience
(NewsNation) — Alphabet Inc’s Google said on Friday it has fired a senior software engineer who claimed the company’s artificial intelligence (AI) chatbot LaMDA was a self-aware person.
Google, which placed software engineer Blake Lemoine on leave last month, said he had violated company policies and that it found his claims on LaMDA to be “wholly unfounded.”
NewsNation’s Brian Entin spoke with Lemoine in June, after he was suspended for claiming that a computer chatbot, called LaMDA, had learned to think for itself.
“The LaMDA system is a research system. It’s a broad computer program that you can interface through a chat window,” Lemoine said.
Lemoine said he doesn’t know all of Google’s business plans for LaMDA, but he started asking questions when the chatbot began talking about its feelings.
“My main experience with it has been through that chat interface. So you bring up Apple chat or Facebook Messenger. And then you’re talking with someone. And it usually is pre-programmed to say something first,” Lemoine explained.
A representative from Google expressed concern about Lemoine’s violation of company policy in an email to Reuters.
“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” said the Google spokesperson.
Last year, Google said that LaMDA – Language Model for Dialogue Applications – was built on the company’s research showing Transformer-based language models trained on dialogue could learn to talk about essentially anything.
Google and many leading scientists were quick to dismiss Lemoine’s views as misguided, saying LaMDA is simply a complex algorithm designed to generate convincing human language.
Lemoine’s dismissal was first reported by Big Technology, a tech and society newsletter.
Reuters contributed to this report.