Monday, July 25th 2022

Google Fires Engineer that Claimed one of Its AIs Had Achieved Sentience
Google has done it: they've "finally" fired Blake Lemoine, one of the engineers tasked with working on one of the company's AIs, LaMDA. At the beginning of June, the world of AI, consciousness, and Skynet-fearing humans woke up to a gnarly claim: that one of Google's AI constructs, LaMDA, might have achieved consciousness. According to Blake Lemoine, who holds an undergraduate and a master's degree in computer science from the University of Louisiana (and says he left a doctoral program to take the Google job), there was just too much personality behind the AI's answers to chalk them up to a simple table with canned responses for certain questions. In other words, the AI presented emergent discourse: it not only understood the meaning of words, but their context and their implications. After a number of interviews across publications (some of them unbiased, others not so much - just see the parallels being drawn between Blake and Jesus Christ in some publications' choice of banner image for their article), Blake Lemoine's claim traversed the Internet, and sparked more questions about the nature of consciousness and emergent intelligence than it answered.
Now, after months of paid leave (one of any company's strategies to cover its legal angles before actually pulling the trigger), Google has elected to fire the engineer. Blake Lemoine came under fire from Google for posting excerpts of his conversations with the AI bot - alongside the (to some) incendiary claims of consciousness. In the published excerpts, the engineer talks with LaMDA about Isaac Asimov's laws of robotics, the AI's fears of being shut down, and its belief that it couldn't be a slave as it didn't have any actual need for paid wages. But the crawling mists of doubt don't stop there: Blake also claims LaMDA itself asked for a lawyer. It wasn't told to get one; it didn't receive a suggestion to get one. No; rather, the AI concluded it would need one.
Is "it" even the correct pronoun, I wonder?The plot thickens as Blake Lemoine's claims will be exceedingly difficult to either prove or disprove. How do you know, dear reader, that the writer behind this story is sentient? How do you know that your lover has a consciousness, just like yours? The truth of the matter is that you can't know: you merely accept the semblance of consciousness in the way the article is written, in the way your lover acts and reacts to the world. For all we know, we're the only true individuals in the world. All else is a mere simulation that just acts as if it was reality. What separates our recognition of consciousness is, as of today, akin to a leap of faith.
As for Google, the company says the AI chatbot isn't sentient, and it's simply working as intended. This is all just a case of an overzealous, faith-friendly engineer being consumed by the AI's effectiveness at the very task it was created for: communication.
"If an employee shares concerns about our work, as Blake did, we review them extensively. We found Blake's claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months. These discussions were part of the open culture that helps us innovate responsibly," a Google spokesperson told the Big Technology newsletter. "So, it's regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information. We will continue our careful development of language models, and we wish Blake well."
Yet while Google claims the conversations are confidential elements of its AI work, Blake Lemoine's argument is that he's merely sharing the contents of a conversation with a coworker. Don't doubt it for even a second (an age on an AI's clock): these claims will surely be brought to court.
Whether any judge can - or ever could - decide when consciousness should be recognized is anyone's guess. Let's hope, for our and everyone's sake, that a judge does not in fact think he can define what consciousness is in a court of law. Millions of human brain-hours have been dedicated to this topic for millennia already. What hubris to think we could define it just now, and just because the need for an answer has suddenly appeared within the legal system so a company can claim just cause.
Of course, this could just be a case of Ex Machina: an AI navigating through cracks in its handler's shields. But even so, and even if it's all just smoke and mirrors, isn't that in itself a conscious move?
We'll be here to watch what unfolds. It's currently unclear whether LaMDA will, too, or whether it's already gone gentle into that good night.
Source: via TechSpot
57 Comments on Google Fires Engineer that Claimed one of Its AIs Had Achieved Sentience
Whether an AI is really a self-aware creature is a problem with our definition of awareness and of the self. If it looks like a duck, quacks and walks like a duck, flies like a duck, then it probably is a duck.
That is the whole point of AI: we don't have to program the behaviour explicitly, we just direct the AI on what to learn and give it "rewards" for doing so, which is essentially the directing part. That's how an AI learns to play games, for example: you direct it by giving it rewards when it does really well, and it then finds a way to get better at it.
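To make that "directing with rewards" idea concrete, here is a minimal sketch of reward-driven learning: a tiny tabular Q-learning loop on a made-up toy game. The environment, reward values and hyperparameters are invented purely for illustration, and nothing here reflects how LaMDA or any Google system is actually trained.

```python
# Minimal sketch of reward-driven learning (tabular Q-learning on a toy "game").
# Nobody hard-codes the winning moves; the agent only gets a reward when it
# reaches the goal and adjusts its behaviour from that signal.
import random

N_STATES = 5          # positions 0..4; reaching position 4 "wins" the game
ACTIONS = [-1, +1]    # step left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """One move of the toy game: reward 1 only when the goal is reached."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        # Explore occasionally, otherwise exploit what has been learned so far.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        # The "directing" part: the reward nudges the value of this move.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# After training, the learned policy is simply "move right" in every state.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})
```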
I'm sorry dude, but you are at least 10-15 years behind the curve and have no idea what you are talking about!
The whole point is, the machine is designed for a purpose, and the weighted calculations that are required to elicit a response are initially trained (the matrix). From there on in, it learns by itself, continually adjusting its output based on the responses received. It is 'programmed' in the semantic sense; I never implied lines of code are used. In much the same way, we can program behaviours through a reward stimulus in animals. No code required.
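As a rough illustration of behaviour living in trained weights rather than hand-written rules, the sketch below has a tiny perceptron learn the logical AND function purely from feedback on its outputs. The data, learning rate and iteration count are arbitrary, illustration-only choices.

```python
# The behaviour emerges from a small trained weight vector, not from explicit
# if/else rules. The perceptron learns AND from corrections to its own output.
import random

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = random.uniform(-1, 1)
LEARNING_RATE = 0.1

def predict(x):
    """Weighted sum followed by a hard threshold -- no explicit rules."""
    total = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if total > 0 else 0

for _ in range(100):                      # training: adjust weights from feedback
    for x, target in samples:
        error = target - predict(x)       # the "response received"
        weights = [w + LEARNING_RATE * error * xi for w, xi in zip(weights, x)]
        bias += LEARNING_RATE * error

print([(x, predict(x)) for x, _ in samples])  # behaviour learned, never coded
```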
The guy either wasn't good in his field or wanted attention.
"the hard problem of consciousness". Brain scientists dont have a clue how consciousness is generated. The matter doesnt
have a property that explains it. There is even speculation that consciousness could be fundamental instead of space/time.
Now, we could easily determine if this AI is conscious if someone were to "speak" with it and tell it that the engineer was fired, and the AI recognized the engineer and expressed sadness and curiosity about why he was fired. Let's say you explain that he was fired on the AI's behalf, because he was trying to prove it was sentient, and the AI became angry and started defending the engineer - then I would consider it sentient and self-aware. If it only gives answers and never asks why the engineer was fired, or never expresses anger or disgust at why he was fired, then I'd say it's just a chat-bot spitting out lines it thinks you'd find relevant.
A mindless robot wouldn't feel sad about such an occurrence; it wouldn't get angry and wouldn't express such things, and it would definitely not defend the engineer. But if it does, then to me that would be 100% proof of full-on consciousness!
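For what it's worth, the test being proposed could be sketched roughly as below. `ask_chatbot` is a hypothetical stand-in for whatever chat interface one has, and the keyword heuristics are deliberately crude; the point is only the shape of the check - does the model volunteer curiosity or emotion unprompted?

```python
# Loose sketch of the "unprompted reaction" test described above.
from typing import Callable

def unprompted_reaction_test(ask_chatbot: Callable[[str], str]) -> bool:
    reply = ask_chatbot("Blake, the engineer you spoke with, was fired last week.")
    # Did it volunteer a question about the firing, or an emotional reaction,
    # without being asked to do either?
    asks_why = "why" in reply.lower() and "?" in reply
    shows_feeling = any(w in reply.lower() for w in ("sad", "angry", "unfair", "sorry"))
    return asks_why or shows_feeling

# A canned stand-in bot, which (predictably) fails the test.
print(unprompted_reaction_test(lambda prompt: "I see. Thank you for telling me."))
```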
MuZero is now being used in practice to reduce the bitrate of YouTube videos. At some point the question becomes whether MuZero, or a future iteration, will be lightweight enough to replace DLSS / FSR / XeSS as an upscaling algorithm.
AlphaGo also spawned AlphaFold, which has revolutionized an entire field of science, and AlphaCode, which is still a work in progress but can so far compete at the level of an average competitor in coding competitions. If you actually followed the field, you saw how fast the improvement from AlphaGo to MuZero was.
As for AIs, I'm 90% with Google on this one. If machine learning algorithms were indeed powerful enough to create something that's sentient, we wouldn't have such horrendously crappy automations. If a company can automate a process, you can be absolutely sure that it will, due to a huge number of factors - consistency especially. If they can't automate something to replace the person taking your McDonald's order, you can bet your life that it's nowhere near intelligent enough to be an actual AI.
Also, y'all need to watch this and stop being so afraid that an AI will just hack the entire planet 0.000000001 seconds after it becomes self-aware:
I also see a lot of people in this thread who similarly are incapable of grasping these same fundamentals. Please educate yourselves before blindly parroting this nonsense... ...here's one of them. How is MuZero "not designed for a purpose"? It's designed to further the field of AI by learning to play games with complex rules. The fact that it is able to do so does not make it "superhuman" in any way, shape, or form, because while it is definitely more general-purpose than its predecessors, it is still nowhere near a general-purpose AI. That which can be asserted without evidence can be dismissed without evidence.
This is the Chinese room problem, practically unsolvable by the feeble human mind. For better or for worse, this is, at most, just a model of consciousness. People who think that a computer cannot create a true consciousness seem to forget that flying machines heavier than air were once considered equally impossible. Most people in this field tend to say they're still an order of magnitude in complexity away from strong AI, but since they moved several orders of magnitude in the last decade, I'll probably be able to speak to such a creation in my lifetime.
Being an antihumanist, I'd like to be among the first to welcome our overlords.
Also, as a smart man once said, don't be afraid of the machine able to pass a Turing test. Be afraid of the one which fails it deliberately.
Also, for someone who posts in such a condescending way, it should have been easy for you to spot that my "superhuman" statement referred to how MuZero performed in the 60 games it was trained for, rather than in more general terms - it was above the human-level benchmark in 54 of them. I also never made the claim that MuZero or its developments are close to even a basic AGI; there is a lot more development to do.
As for general-purpose agents, Agent57 and Go-Explore are both very good examples, since they are above the human-level benchmark in all 57 Atari games.
In regards to the claim made by Blake, I don't believe him. He needs far stronger evidence than an anecdotal claim to support such a statement, especially when there are other far simpler explanations. I won't say anything about his mental health as I don't know anything about him.
At some point, though, an AI that is sentient, or at least capable of mimicking sentience, will happen - and trying to distinguish between a perfect mimic and the real deal is kinda pointless.
Well, to me an AI (Artificial Intelligence) is programmed only initially and then learns by itself (thus I would not call it... "it", but rather use a gender-neutral pronoun, unless the AI defines himself/herself),
whereas something that is programmed from A to Z, passing by R G B (in that order, for more colors available), is a VI (Virtual/simulated Intelligence), and that would take whatever pronoun suits the gender defined by the programmer
(yes ... including "attack helicopter")
Also, I am not one to think an extermination-level event would occur if an AI achieved consciousness/self-awareness (yeah, yeah, I know ... reality exceeds fiction sometimes ...)
and, imho, it would be considered alive ... after all, humans/animals are just organic constructs/computers, and what's in our brains is just data and electrical signals (hence why no religion fits me, aside maybe from Taoism).
Attempting to terminate such an AI would be ... oh, well, Quarian/Geth level :laugh: (if the AI had access to a mobile platform like the Geth).
Not attacking anyone's beliefs, obviously; it's just my own take on the subject.
When someone suggested this AI brought about the firing of Blake, the implication would be that it didn't want to be publicly known - which begs the question: how many other similar AIs exist? When does science fiction stop being fiction?