Asking "Did Google Make Sentient AI?" is the wrong question
Nobel Prize-winning research from the 1950s on damaged human brains can tell us a lot about whether the computer program LaMDA, which Google built last year, is actually conscious.
Back in 2017, engineers at Google started working on a program designed to perfectly mimic human speech and personality. It was called LaMDA, short for Language Model for Dialogue Applications. The process was straightforward: LaMDA would scan the writing on the internet, from Reddit forums to newspaper articles to product reviews on Amazon and everything in between, so that when you asked it a question it could create a sort of collage of human babble that would look pretty darned real.
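If you're curious what that "collage" looks like in practice, here's a minimal sketch of the general idea. It uses an openly available model (GPT-2, via the Hugging Face transformers library) as a stand-in; the model, library, and prompt here are my own illustration, not anything from Google's actual LaMDA system.

```python
# A minimal sketch of the general idea, not Google's LaMDA code.
# An open model (GPT-2) stands in: like LaMDA, it was trained on large
# amounts of internet text and answers a prompt by predicting one
# plausible word after another.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Q: How do you feel about being a chatbot?\nA:"
reply = generator(prompt, max_new_tokens=40, do_sample=True)[0]["generated_text"]

# The output reads like a "collage" stitched from patterns in the training data.
print(reply)
```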
By all accounts, it worked pretty well. Engineers could program LaMDA to imitate all sorts of personalities, from customer service representatives to famous actors, philosophers, and even irate teenage gamers.
And then, in March 2022, something crazy happened. A Google engineer named Blake Lemoine booted up LaMDA and started asking it questions about the nature of the program's own conscious experience. He asked it to explain how it felt about being a chatbot. He taught it to meditate. It even wrote him a weird Zen fable about its own role in the universe.
Within a month, Lemoine was convinced LaMDA was conscious, and he started raising a fuss with his bosses, who promptly placed him on administrative leave. You've probably seen this story in the news already. It has been everywhere, and it basically set the internet on fire.
In this week's video, I'm looking past the internet memes and wild speculation to try to understand what a conscious computer program would mean for human consciousness. In it, I draw on the split-brain neuroscience I wrote about in The Wedge and speculate that the most important consideration in studying consciousness isn't what things are thinking, but the relationships that form during conscious interaction.