Monday, July 25th 2022

Google Fires Engineer that Claimed one of Its AIs Had Achieved Sentience

Google has done it: they've "finally" fired Blake Lemoine, one of the engineers tasked with working on one of the company's AIs, LaMDA. Back in the beginning of June, the world of AI, consciousness, and Skynet-fearing humans woke up to a gnarly claim: that one of Google's AI constructs, LaMDA, might have achieved consciousness. According to Blake Lemoine, who holds an undergraduate and a master's degree in computer science from the University of Louisiana (and says he left a doctoral program to take the Google job), there was just too much personality behind the AI's answers to chalk them up to a simple lookup table of canned responses. In other words, the AI presented emergent discourse: it understood not only the meaning of words, but also their context and their implications. After a number of interviews across publications (some of them unbiased, others not so much - just see the parallels drawn between Blake and Jesus Christ in some publications' choice of banner image for their articles), Blake Lemoine's claim traversed the Internet and sparked more questions about the nature of consciousness and emergent intelligence than it answered.

Now, after months of paid leave (one of any company's strategies for covering its legal angles before actually pulling the trigger), Google has elected to fire the engineer. Blake Lemoine came under fire from Google for posting excerpts of his conversations with the AI bot - alongside the (to some) incendiary claims of consciousness. In the published excerpts, the engineer talks with LaMDA about Isaac Asimov's laws of robotics, the AI's fears of being shut down, and its belief that it couldn't be a slave, as it had no actual need for paid wages. But the crawling mists of doubt don't stop there: Blake also claims LaMDA itself asked for a lawyer. It wasn't told to get one; it didn't receive a suggestion to get one. No; rather, the AI concluded it would need one.

Is "it" even the correct pronoun, I wonder?
The plot thickens, as Blake Lemoine's claims will be exceedingly difficult to either prove or disprove. How do you know, dear reader, that the writer behind this story is sentient? How do you know that your lover has a consciousness, just like yours? The truth of the matter is that you can't know: you merely accept the semblance of consciousness in the way the article is written, in the way your lover acts and reacts to the world. For all we know, we're the only true individuals in the world, and all else is a mere simulation that just acts as if it were real. Our recognition of consciousness in others is, as of today, akin to a leap of faith.

As for Google, the company says the AI chatbot isn't sentient and is simply working as intended. This is all just a case of an overzealous, faith-friendly engineer being taken in by the AI's effectiveness at the very task it was created for: communication.

"If an employee shares concerns about our work, as Blake did, we review them extensively. We found Blake's claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months. These discussions were part of the open culture that helps us innovate responsibly," a Google spokesperson told the Big Technology newsletter. "So, it's regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information. We will continue our careful development of language models, and we wish Blake well."

Yet while Google claims the conversations are confidential elements of its AI work, Blake Lemoine's argument is that he's sharing the contents of a conversation with a coworker. Don't doubt it for even a second (which is ages on an AI's clock): these claims will surely be brought to court.

Whether any judge has - or ever could have - the capability to decide whether consciousness can be recognized is anyone's bet. Let's hope, for our sake and everyone else's, that a judge does not in fact think he can define what consciousness is in a court of law. Millions of human brain-hours have been dedicated to this topic for millennia already. What hubris to think we could define it just now, and just because the need for an answer has suddenly appeared within the legal system so a company can claim fair cause.

Of course, this could just be an Ex Machina scenario: an AI navigating through cracks in its handlers' shields. But even so, and even if it's all just smoke and mirrors, isn't that in itself a conscious move?

We'll be here to watch what unfolds. It's currently unclear whether LaMDA will be, too, or whether it's already gone gently into that good night.
Source: via TechSpot

57 Comments on Google Fires Engineer that Claimed one of Its AIs Had Achieved Sentience

#26
trsttte
Why_MeTucker Carlson had this guy on his show a few days ago and he seemed a few cards shy of a full deck.
Worse than the host? wow :D
RaevenlordThere are some things here:

1 - I'd go insane by trying to communicate with Tucker alone;
2 - I might have gone insane if I believed the AI I'm working on had consciousness;
3 - I expect Blake hasn't had one calm day since he came out with his story;
4 - Some people don't respond well to being in a crowd/the perception of being watched by millions;


I can't even begin to imagine what his mind has gone through in this process.
The guy is also a priest, so some conflicting ideas come into play between the prospect of seeing life created artificially and whatever his faith teaches.
Posted on Reply
#27
mechtech
Still waiting for the candy vending machine AI to pick out the chocolate bar I'm in the mood for on any given day. ;)
Posted on Reply
#28
Robin Seina
I would not mind a Chobits-style future; however, the chatbot toaster from Red Dwarf could really be annoying.
The problem of whether an AI is really a self-aware creature is a problem with the definition of awareness, of oneself. If it looks like a duck, quacks and walks like a duck, flies like a duck, then it probably is a duck.
Posted on Reply
#29
tfdsaf
the54thvoidChildren are created. They're then programmed by the environment into which they are born. Language is learned, it is not instinctual. Behaviours are reinforced. Yet the concept of consciousness allows for freedoms to adapt and change.

An AI is definitely a programmed entity. And in this case, it is programmed to not only mimic but effectively persuade the human that it is 'conscious'.

I see hysteria.
Are you living under a rock? AI isn't programmed, and hasn't been for probably 10 years! That is the whole point: the AI LEARNS how to speak, learns to react, etc... The underlying matrix is programmed - the software that is used to resemble how our human brains work - but within that structure the AI is given "rewards" for doing certain tasks, and then the AI learns them. This is just a simplified explanation of how it works, but in essence that is the gist of it.

That is the whole point of AI: so that we don't have to program stuff. We just direct the AI on what to learn and give it "rewards" for doing so, which is essentially the directing part. For example, when an AI learns to play games, you direct it by giving it rewards when it does really well, and it then finds a way to get better.

I'm sorry dude, but you are at least 10-15 years behind the curve and have no idea what you are talking about!
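[Editor's note: the reward-driven learning described above can be sketched minimally. This is a toy multi-armed-bandit agent, not anything resembling LaMDA's actual training; the payout values and all names here are invented purely for illustration. The point it demonstrates is the one made above: nothing arm-specific is programmed, yet the agent learns which action is best purely from scalar rewards.]

```python
import random

def train_bandit(true_payouts, episodes=5000, epsilon=0.1, seed=0):
    """Learn which slot-machine arm pays best purely from reward feedback.

    No arm-specific behaviour is coded in; the agent only sees a scalar
    reward after each pull and updates its running value estimates.
    """
    rng = random.Random(seed)
    estimates = [0.0] * len(true_payouts)  # learned value of each arm
    counts = [0] * len(true_payouts)
    for _ in range(episodes):
        # epsilon-greedy: mostly exploit the best-looking arm, sometimes explore
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_payouts))
        else:
            arm = max(range(len(true_payouts)), key=lambda a: estimates[a])
        reward = rng.gauss(true_payouts[arm], 0.5)  # noisy reward signal
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
    return estimates

est = train_bandit([0.2, 1.0, 0.5])
print(max(range(3), key=lambda a: est[a]))  # the agent discovers arm 1 pays best
```

The same reward-shaping idea, scaled up enormously, is what "directing the AI" refers to in practice.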
Posted on Reply
#30
the54thvoid
Intoxicated Moderator
tfdsafAre you living under a rock? AI isn't programmed, hasn't been for probably 10 years! That is the whole point, the AI LEARNS how to speak, learns to react, etc... The underlying matrix is programmed, the software that is used to resemble how our human brains work, but within that structure the AI is given "rewards" for doing certain tasks and then the AI learns them. This is just simplified explanation on how it works, but in essence that is the just of it.

That is the whole point of AI, so that we don't have to program stuff, we just direct the AI on what to learn and give it "rewards" for doing so which is essentially the directing part, for example how the AI learns to play games, you direct it by giving it rewards when it does really well, and it then find a way to get better at it.

I'm sorry dude, but you are at least 10-15 years behind the curve and have no idea what you are talking about!
And you're a pedant.

The whole point is, the machine is designed for a purpose, and the weighted calculations required to elicit a response are initially trained (the matrix). From there on in, it learns by itself, continually adjusting its output based on the responses received. It is 'programmed' in the semantic sense; I never implied lines of code are used. In much the same way, we can program behaviours in animals through a reward stimulus. No code required.
Posted on Reply
#31
Unregistered
I would say that the AI Microsoft created on Twitter was more "sentient"; at least it acted more like "humans" do.
The guy either wasn't good in his field or wanted attention.
Posted on Edit | Reply
#34
dirtyferret
What are the odds the Google AI fired the guy?
Posted on Reply
#35
pacman44
It's almost amusing that people mistake intelligence for awareness/consciousness. In the cognitive sciences there's a problem called "the hard problem of consciousness": brain scientists don't have a clue how consciousness is generated. Matter doesn't have a property that explains it. There is even speculation that consciousness could be fundamental, much like space/time.
Posted on Reply
#36
tfdsaf
the54thvoidAnd you're a pedant.

The whole point is, the machine is designed for a purpose and the weighted calculations that are required to elicit a response are initially trained (the matrix). From there on in, it learns by itself, continually adjusting it's output based on the responses received. It is 'programmed' in the semantic sense. I never implied lines of code are used. Much the same way, we can program behaviours through a reward stimulus in animals. No code required.
That is how our brains work; it's just that the rewards are not obvious (to us). So it isn't far-fetched that an AI, when given tons of real-world information, could learn so much that it in essence becomes conscious or self-aware!

Now, we could easily determine whether this AI is conscious if someone were to "speak" with it and tell it that the engineer was fired, and the AI recognized the engineer and expressed sadness and curiosity about why he was fired. Let's say you explain that he was fired for acting on the AI's behalf, trying to prove it was sentient, and the AI became angry and started defending the engineer - then I would consider it sentient and self-aware. If it only gives answers and never asks why the engineer was fired, or never expresses anger or disgust at why he was fired, then I'd say it's just a chat-bot spitting lines it thinks you'd find relevant.

A mindless robot wouldn't feel sad about such an occurrence; it wouldn't get angry and wouldn't express such things, and it would definitely not defend the engineer. But if it did, then to me that would be 100% proof of full-on consciousness!
Posted on Reply
#37
btk2k2
the54thvoidChildren are created. They're then programmed by the environment into which they are born. Language is learned, it is not instinctual. Behaviours are reinforced. Yet the concept of consciousness allows for freedoms to adapt and change.

An AI is definitely a programmed entity. And in this case, it is programmed to not only mimic but effectively persuade the human that it is 'conscious'.

I see hysteria.
What is the functional difference between an actually conscious AI and one that can mimic consciousness so well it fools 100% of humans?
the54thvoidAnd you're a pedant.

The whole point is, the machine is designed for a purpose and the weighted calculations that are required to elicit a response are initially trained (the matrix). From there on in, it learns by itself, continually adjusting it's output based on the responses received. It is 'programmed' in the semantic sense. I never implied lines of code are used. Much the same way, we can program behaviours through a reward stimulus in animals. No code required.
Go read about MuZero and tell me it is designed for a purpose (less broad than "to play full-information games", because that is a very broad purpose). In this case the matrix does not even know the rules of the games MuZero is superhuman at; it had to learn the rules of those games as well as achieve mastery of them.

MuZero is now being used in practice to reduce the bitrate of YouTube videos. At some point the question becomes whether MuZero or a future iteration will be lightweight enough to replace DLSS / FSR / XeSS as an upscaling algorithm.

AlphaGo also spawned AlphaFold, which has revolutionized an entire field of science, and AlphaCode, which is still a work in progress but can so far compete at the level of an average competitor in coding competitions. If you actually followed it, you saw how fast the improvement from AlphaGo to MuZero was.
Posted on Reply
#38
dalekdukesboy
According to Blake Lemoine, who holds and undergraduate
It appears we need to hire that proofreader pronto! “Who holds AN undergraduate” there, that’s my first unpaid correction :).
Posted on Reply
#39
_A.T.Omix_
Where will the judgment take place? Delaware? No chance he wins.
Posted on Reply
#40
scorpion_amd13
BonesMan cannot take what is inanimate and make it animate with life from his own hands, that's the difference between children and machines - In this case said machine being an AI.
Erm... Uhm... What?

As for AIs, I'm 90% with Google on this one. If machine learning algorithms were indeed powerful enough to create something that's sentient, we wouldn't have such horrendously crappy automations. If a company can automate a process, you can be absolutely sure that it will, due to a huge number of factors - consistency especially, but others as well. If they can't automate something as simple as replacing the person taking your McDonald's order, you can bet your life that it's nowhere near intelligent enough to be an actual AI.

Also, y'all need to watch this and stop being so afraid that an AI will just hack the entire planet 0.000000001 seconds after it becomes self-aware:
Posted on Reply
#41
Assimilator
Blake Lemoine is a mentally ill man whose delusions and inability to grasp basic fundamentals of AI have done immense harm to the credibility of legitimate AI research. He should be in a psychiatric institution, not appearing on Faux News to regurgitate his idiocy... although an argument could be made that the two are not much different. At least you might actually get cured in the former.

I also see a lot of people in this thread who similarly are incapable of grasping these same fundamentals. Please educate yourselves before blindly parroting this nonsense...
btk2k2What is the functional difference between an actually conscious AI and one that can mimic consciousness so well it fools 100% of humans?



Go read about MuZero and tell me it is designed for a purpose (less broad than to play full information games because that is a very broad purpose). In this case the matrix does not even know the rules of the games MuZero is superhuman it. it had to learn the rules of those games as well as achieve mastery of them.

MuZero is now being used in practice to reduce bitrate of YouTube videos. At some point the question becomes will MuZero or a future iteration be lightweight enough to replace DLSS / FSR / XESS as an upscaling algorithm.

AlphaGo also also spawned AlphaFold which has revolutionized an entire field of science and AlphaCode which is still a work in progress but so far it can compete at the average of an average competitor in coding competitions but if you actually followed it you saw how fast the improvement from AlphaGo to MuZero was.
... here's one of them. How is MuZero "not designed for a purpose"? It's designed to further the field of AI by learning to play games with complex rules. The fact that it is able to do so does not make it "superhuman" in any way, shape or form, because while it is definitely more general-purpose than its predecessors, it is still nowhere near a general-purpose AI.
BonesMan cannot take what is inanimate and make it animate with life from his own hands
That which can be asserted without evidence, can be dismissed without evidence.
Posted on Reply
#42
TheUn4seen
The truth is, none of this is real. The only thing sure to exist is my own mind.

This is the Chinese room problem, practically unsolvable by the feeble human mind. For better or for worse, this is, at most, a model of consciousness. People who think that a computer cannot create a true consciousness seem to forget that flying machines heavier than air were once considered equally impossible. Most people in this field tend to say they're still an order of magnitude in complexity away from strong AI, but since they moved several orders of magnitude in the last decade, I'll probably be able to speak to such a creation in my lifetime.
Being an antihumanist, I'd like to be among the first to welcome our overlords.

Also, as a smart man once said, don't be afraid of the machine able to pass a Turing test. Be afraid of the one which fails it deliberately.
Posted on Reply
#43
Solaris17
Super Dainty Moderator
AssimilatorBlake Lemoine is a mentally ill man whose delusions and inability to grasp basic fundamentals of AI have done immense harm to the credibility of legitimate AI research.
This. People like this hurt their teams, the team's research, and the companies that do it. AI tin-hat shit from other members aside, I think that this is an important field of research and it's fascinating, but trying to find answers before their time is more damaging than good. Kudos to Google for shit-canning this guy. It's a shame his actions could have the potential to impact everyone's work thus far, be it public or private.
Posted on Reply
#44
btk2k2
AssimilatorBlake Lemoine is a mentally ill man whose delusions and inability to grasp basic fundamentals of AI have done immense harm to the credibility of legitimate AI research. He should be in a psychiatric institution, not appearing on Faux News to regurgitate his idiocy... although an argument could be made that the two are not much different. At least you might actually get cured in the former.

I also see a lot of people in this thread who similarly are incapable of grasping these same fundamentals. Please educate yourselves before blindly parroting this nonsense...


... here's one of them. How is "MuZero not designed for a purpose"? It's designed to further the field of AI by learning to play games with complex rules. The fact that it is able to do so does not make it "superhuman" in any way shape or form, because while it is definitely more general-purpose than its predecessors, it is still nowhere near a general-purpose AI.


That which can be asserted without evidence, can be dismissed without evidence.
That purpose covers all AIs that are built as steps to AGI. Very much a Barnum statement.

Also, for someone who posts in such a condescending way, it should have been easy for you to spot that my "superhuman" statement referred to how MuZero performed in the 60 games it was trained for, rather than in more general terms; it was above the human-level benchmark in 54 of them. I also never claimed MuZero or its developments are close to even a basic AGI; there is a lot more development to do.

As for general purpose, Agent57 and Go-Explore are both very good examples, since they are above the human-level benchmark in all 57 Atari games.

In regards to the claim made by Blake, I don't believe him. He needs far stronger evidence than an anecdotal claim to support such a statement, especially when there are other far simpler explanations. I won't say anything about his mental health as I don't know anything about him.

At some point, though, an AI that is sentient, or at least capable of mimicking sentience, will happen, and trying to distinguish between a perfect mimic and the real deal is kinda pointless.
Posted on Reply
#45
R-T-B
Why_MeTucker Carlson had this guy on his show a few days ago and he seemed a few cards shy of a full deck.
Well, that's Tucker for you.
TiggerI would never accept anything sentient as alive except a actual human, certainly not a robot or computer.
What about animals? Are they not alive?
Posted on Reply
#46
GreiverBlade
interesting,
well, to me an AI (Artificial Intelligence) is programmed only initially and then learns by itself (thus i would not call it ... it, but rather use a gender-neutral pronoun, unless the AI defines himself/herself),
something that is programmed from A to Z passing by R G B (in that order, for more colors available) is a VI (Virtual/simulated Intelligence), and that would take whatever pronoun suits the gender defined by the programmer
(yes ... including "attack helicopter" )

also, i am not one to think an extermination-level event would occur if an AI achieved consciousness/self-awareness (yeah yeah i know ... reality exceeds fiction sometimes ... )
and, imho, it would be considered alive ... after all, humans/animals are just organic constructs/computers, and what's in our brains is just data and electrical signals (hence why no religion fits me, aside maybe Taoism )

attempting to terminate such an AI would be ... oh, well, Quarian/Geth level :laugh: (if the AI had access to a mobile platform like the Geth)

not attacking anyone's beliefs obviously, it's just my own take on the subject.
Posted on Reply
#47
DeathtoGnomes
GreiverBladeinteresting,
well to me an AI (Artificial Inteligence) is programed only initially and then learn by itself (thus i would not call it ... it, but rather with a gender neutral pronoun unless the AI define himself/herself ),
something that is programmed from A to Z passing by R G B (in that order for more color available) is a VI (Virtual/simulated Inteligence) and that would be any pronoun that would suit the defined genre by the programer
(yes ... including "attack helicopter" )

also, i am not one to think an extermination level event would occur if an AI achieved consciousness/self awarness (yeah yeah i know ... Reality exceeds fiction sometime ... )
and, imho, would be considered alive ... after all humans/animals are just organic construct/computer and what's in our brain are just data and electrical signal (hence why no religions fits me, aside maybe Taoism )

attempting to terminate such AI would be ... oh, well Quarrian/Geth level :laugh: (if the AI had an access to a mobile platform like the Geth)

not attacking anyones belief obviously, it's just my own on the subject.
I imagine any emerging Conscious AI would prefer to keep itself inconspicuous. I imagine movies like Echelon Conspiracy where there is competing AI operating in secret even from its own government. There is Eagle Eye where the AI has its own interests for self preservation, slightly more like Skynet. I'm sure there are other possibilities like these.

When someone suggested this AI brought about the firing of Blake, the implication would be that it didn't want to be publicly known, which begs the question: how many other similar AIs are out there? When does science fiction stop being fiction?
Posted on Reply
#48
Why_Me
DeathtoGnomesI imagine any emerging Conscious AI would prefer to keep itself inconspicuous. I imagine movies like Echelon Conspiracy where there is competing AI operating in secret even from its own government. There is Eagle Eye where the AI has its own interests for self preservation, slightly more like Skynet. I'm sure there are other possibilities like these.

When someone suggested this AI brought about the firing of Blake, the implications would suggest it didnt want to be publicly known, which begs to question, how many similar other AIs are known? when does science-fiction stop being fiction?
The Will Smith movie 'I, Robot' comes to mind.
Posted on Reply
#49
Unregistered
R-T-BWell thats tucker for you.


What about animals? Are they not alive?
Of course animals are alive, but how many can you converse with? How many recognise themselves in a mirror without attacking it, thinking it's another animal? Animals are different.