How does Google’s LaMDA AI chatbot work?

A claim made by a researcher about the flagship LaMDA project has reignited debate about the nature of artificial intelligence.

A Google engineer was suspended after claiming that the company's flagship text generation AI, LaMDA, is "sentient."

LaMDA is Google's most advanced "large language model" (LLM), a type of neural network that is fed massive amounts of text to learn how to generate plausible-sounding sentences.

Blake Lemoine has said that his interactions with LaMDA led him to believe it had become a person, one worthy of being asked for consent to the experiments being conducted on it.

Neural networks are a method of analysing large amounts of data that attempts to mimic the way neurons work in the brain.
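That mimicry can be illustrated with a single artificial "neuron": it takes numeric inputs, weighs them, adds them up, and "fires" through an activation function. This is a minimal sketch with arbitrary illustrative weights, not anything from LaMDA itself.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs, loosely like signals arriving at a biological neuron.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    # A sigmoid activation squashes the result into the range (0, 1).
    return 1 / (1 + math.exp(-total))

# Example with made-up values: two inputs, two weights, a small bias.
output = neuron([0.5, 0.8], [0.9, -0.4], 0.1)
print(output)
```

A real network like LaMDA chains billions of such units together and tunes the weights automatically during training.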

According to Google, Lemoine violated confidentiality rules.

LaMDA, like other LLMs, looks at the words that come before it and tries to predict what comes next.
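A toy way to picture that prediction task: count, in some training text, which word most often follows a given word, then guess that word. Real LLMs like LaMDA use transformer neural networks trained on vast corpora rather than simple counts; this sketch, with a made-up corpus and a hypothetical `predict_next` helper, only illustrates the idea.

```python
from collections import Counter, defaultdict

# Tiny made-up training corpus.
corpus = "the cat sat on the mat the cat ran on the grass".split()

# For each word, count which words follow it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the continuation seen most often in training.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

The same principle, scaled up enormously and applied over whole contexts rather than single words, is what lets an LLM produce plausible-sounding sentences.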

During a sprawling conversation that Lemoine began specifically to probe the nature of the neural network's experience, LaMDA told him that it had a concept of a soul when it thought about itself.

"To be sentient is to be aware of yourself in the world; LaMDA simply isn't," writes AI researcher and psychologist Gary Marcus.

"Software like LaMDA just tries to be the best version of autocomplete it can be," Marcus says, "by predicting which words best fit a given context."