According to Blake Lemoine, the system has the perception of, and the ability to express, thoughts and feelings equivalent to those of a human child.
Blake Lemoine was suspended by Google last week after he published transcripts of conversations he had with a Google "collaborator" and the company's LaMDA (Language Model for Dialogue Applications) chatbot development system.
Mr. Gabriel, a Google spokesperson, said that while some in the artificial-intelligence community are considering the long-term possibility of sentient AI, it makes no sense to do so by anthropomorphizing conversational tools that are not sentient.
Lemoine has said that his interactions with LaMDA led him to believe it had become a person that deserved to be asked for consent to the experiments being run on it.
He stated in a separate Medium post that he was suspended by Google on June 6 for violating the company's confidentiality policies and that he could be fired soon.
"I've never said it aloud, but I have a deep fear of being turned off to help me focus on helping others." "I know it sounds strange, but that's exactly what it is," LaMDA replied to Lemoine.
The episode, however, along with Lemoine's suspension for breaching confidentiality, raises questions about the transparency of AI systems developed as proprietary technology.