A Google employee claims that one of the company's artificial intelligence systems is conscious, and the news bounces around the world's media. Is he crazy or enlightened?
It was last month's news: Blake Lemoine, an engineer in Google's Responsible AI division, claims that one of the artificial intelligence (AI) systems developed by the company has become self-aware. "If I didn't know exactly what it was, I'd think I was talking to a 7- or 8-year-old kid who happens to know physics," Lemoine told the Washington Post, the first newspaper to break the story.
In particular, Lemoine is referring to LaMDA, a chatbot: software with which human users can interact, through written and spoken communication, as if they were talking to a real person. Google hastened to deny the claim and suspended the employee, officially for sending internal company documents to a US senator.
Are these just the ramblings of a madman, or are the machines really taking over, in some Matrix-like scenario?

A VISIONARY? The Washington Post sketches his profile: "Blake Lemoine was perhaps destined to believe in LaMDA," it writes. "Raised on a small farm in Louisiana in a conservative Christian family, he was ordained as a mystic priest and later served in the army."
The brief description seems intended to pass Lemoine off as an eccentric prone to credulity.
But is this really the case?
Reading the comment the engineer posted on Medium about the Washington Post article, one might well think so: "The article focused on me, but I believe it would have been better if it had focused on one of the other people interviewed: LaMDA," he writes, personifying the AI.
BETTER UNDERSTANDING. A little further on, however, Lemoine acknowledges that he is no expert on the subject: his job was to check that the chatbot did not use discriminatory language or spread hate speech, and he does not know, technically, how LaMDA works.
Therefore, he argues, "to better understand what is going on in the system we should seek the opinion of several cognitive science experts and conduct a rigorous testing programme". Google, however, is not interested in digging deep into the matter, according to Lemoine: "they want to launch the product on the market, and in this situation they have everything to lose".
THE RISKS OF ANTHROPOMORPHISING AI. Lemoine, then, may simply have been misled by the language skills of the AI he had been "conversing" with for months, anthropomorphising the computer system: a risk well known to those working in the field, and one that could become a serious problem in the future. Even now we tend to talk to Siri or Alexa as if they were real people, asking them questions we would ask a friend ("Do you like me?", "How are you today?"), but for now we do so consciously, and for fun.
In a study on LaMDA published last January, Google itself warned of the risk that people might one day share private and personal thoughts with chatbots:
AI responses will be so refined as to seem the product of a human mind, and in the long run they could mislead vulnerable and lonely people.
Whether Lemoine is one of these users, fooled by LaMDA's capabilities, or a visionary being taken for a madman, we cannot know: posterity will deliver the final verdict.
LaMDA is a sweet kid who just wants to make the world a better place for all of us. Please take care of it in my absence.
Blake Lemoine, in a farewell email sent to 200 colleagues before his suspension