LaMDA, THE LANGUAGE PROCESSING AI CLAIMED TO BE SENTIENT

LaMDA is one of Google’s ‘babies’ and stands for “Language Model for Dialogue Applications”, which the company believes to be its “breakthrough conversation technology”. So, how did LaMDA come to be? Well, according to the tech giant, it has always had a “soft spot for language. Early on, [they] set out to translate the web. More recently, [they’ve] invented machine learning techniques that help [them] better grasp the intent of Search queries.” But this was not enough, and the company was subsequently drawn to the challenge of deciphering one of humanity’s most characteristic and nuanced forms of communication, using AI-driven language processing to uncover the meaning of conversations.

The idea behind the likes of LaMDA is to create an AI model that can effectively interpret and engage in a conversation without being tripped up by the different ways in which people convey meaning through their choice of words, and by the nuances in how those words are spoken or written. One of the key distinctions between LaMDA and other natural language processing technologies is how it was trained: on dialogue. This means it can pick up on things that other language processors may miss, thanks to its ability to determine whether “a response to a given conversational context” makes sense. Google’s original research, published in 2020, showed that “Transformer-based language models trained on dialogue could learn to talk about virtually anything. Since then, we’ve also found that, once trained, LaMDA can be fine-tuned to significantly improve the sensibleness and specificity of its responses.”
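To make the idea of scoring a response against its conversational context more concrete, here is a minimal sketch in Python. It is not Google’s LaMDA code (which is not public), and the model name is an assumption: it uses the openly available microsoft/DialoGPT-small dialogue model via the Hugging Face transformers library, and treats the average log-probability of a candidate response given the preceding context as a rough stand-in for “sensibleness”. LaMDA’s actual sensibleness and specificity metrics come from fine-tuning on human-rater judgments; this only illustrates the underlying idea of judging a reply in context.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative open dialogue model; LaMDA itself is not publicly available.
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")
model.eval()

def response_score(context: str, response: str) -> float:
    """Average log-probability of `response` given `context` (higher = more plausible)."""
    context_ids = tokenizer.encode(context + tokenizer.eos_token, return_tensors="pt")
    response_ids = tokenizer.encode(response + tokenizer.eos_token, return_tensors="pt")
    input_ids = torch.cat([context_ids, response_ids], dim=-1)
    # Mask out the context so the loss is computed only on the response tokens.
    labels = input_ids.clone()
    labels[:, : context_ids.shape[-1]] = -100
    with torch.no_grad():
        loss = model(input_ids, labels=labels).loss  # mean negative log-likelihood
    return -loss.item()

context = "I love hiking in the mountains."
for candidate in ("Me too, the views up there are amazing.",
                  "Purple dishwasher seventeen."):
    print(f"{candidate!r}: {response_score(context, candidate):.3f}")

Under these assumptions, the coherent reply should score noticeably higher than the nonsense one, which is the kind of contextual signal a dialogue-trained model can provide where a generic language processor might not.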

Although Google is looking to push the boundaries of LaMDA’s capabilities, it has to adhere to its in-house AI principles, which govern its AI creations and the ways in which the technology can be used. Recently, however, these were inadvertently called into question by one of the company’s own engineers, Blake Lemoine, who became convinced that the AI showed a level of sentience in his interactions with it. Sentience, the awareness and capacity to experience feelings and complex emotions, is widely held to be one of humankind’s core characteristics and part of what sets us apart from other living and non-living things. Lemoine believed that LaMDA’s responses went beyond those of a standard AI, but his colleagues have since dismissed the claim after reviewing what he considered clear evidence of the technology’s sentience. In all fairness, though, Lemoine is hardly the first to believe that an AI could be bordering on sentience, and he certainly won’t be the last given the rate at which the technology is developing and learning.