
Google Fires Engineer that Claimed one of Its AIs Had Achieved Sentience

Seems like everyone forgot about the elephant in the room :pimp:

Is it pretending to be sentient :D
Also old news?
How do you know that your lover has a consciousness, just like yours?
I don't, I'll just pretend though that she loves me & not my money :shadedshu:

Not unlike those gazillionaires!
 
Tucker Carlson had this guy on his show a few days ago and he seemed a few cards shy of a full deck.

Worse than the host? wow :D

There are some things here:

1 - I'd go insane by trying to communicate with Tucker alone;
2 - I might have gone insane if I believed the AI I'm working on has consciousness;
3 - I expect Blake hasn't had one calm day since he came out with his story;
4 - Some people don't respond well to being in a crowd/the perception of being watched by millions;


I can't even begin to imagine what his mind has gone through in this process.

The guy is also a priest, so conflicting ideas come into play between the prospect of seeing life created artificially and whatever his faith teaches.
 
Still waiting for the candy vending machine ai to pick out the chocolate bar I’m in the mood for any given day. ;)
 
I would not mind a Chobits-style future; however, the chatbot toaster from Red Dwarf could really be annoying.
The problem of whether an AI is really a self-aware creature is a problem with the definition of awareness, of oneself. If it looks like a duck, quacks and walks like a duck, and flies like a duck, then it probably is a duck.
 
Children are created. They're then programmed by the environment into which they are born. Language is learned, it is not instinctual. Behaviours are reinforced. Yet the concept of consciousness allows for freedoms to adapt and change.

An AI is definitely a programmed entity. And in this case, it is programmed to not only mimic but effectively persuade the human that it is 'conscious'.

I see hysteria.

Are you living under a rock? AI isn't programmed, and hasn't been for probably 10 years! That is the whole point: the AI LEARNS how to speak, learns to react, etc. The underlying matrix is programmed (the software built to resemble how our human brains work), but within that structure the AI is given "rewards" for doing certain tasks, and the AI learns them. This is just a simplified explanation of how it works, but in essence that is the gist of it.

That is the whole point of AI: we don't have to program stuff, we just direct the AI on what to learn and give it "rewards" for doing so, which is essentially the directing part. For example, when an AI learns to play games, you direct it by giving it rewards when it does really well, and it then finds a way to get better.

I'm sorry dude, but you are at least 10-15 years behind the curve and have no idea what you are talking about!
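The reward-driven loop described above (reward a behaviour, let the agent adjust) can be sketched with a toy example. This is a hypothetical, minimal illustration using tabular value estimates on a made-up three-armed bandit; the payout numbers and learning rate are arbitrary demo choices, and none of it reflects how LaMDA itself is built:

```python
import random

random.seed(0)

N_ARMS = 3
TRUE_REWARD = [0.2, 0.9, 0.5]  # hidden payout probabilities (made up for the demo)
q = [0.0] * N_ARMS             # the agent's learned value estimates
ALPHA = 0.1                    # learning rate
EPSILON = 0.1                  # exploration rate

for step in range(5000):
    # explore occasionally, otherwise exploit the best-known arm
    if random.random() < EPSILON:
        arm = random.randrange(N_ARMS)
    else:
        arm = max(range(N_ARMS), key=lambda a: q[a])
    # the environment hands out a "reward"; the agent is never told the rules
    reward = 1.0 if random.random() < TRUE_REWARD[arm] else 0.0
    # nudge the value estimate toward the observed reward
    q[arm] += ALPHA * (reward - q[arm])

print(max(range(N_ARMS), key=lambda a: q[a]))  # the agent discovers arm 1 pays best
```

Nothing here is written to prefer arm 1; the preference emerges purely from the reward signal, which is the point being made above.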
 
And you're a pedant.

The whole point is, the machine is designed for a purpose, and the weighted calculations required to elicit a response are initially trained (the matrix). From there on, it learns by itself, continually adjusting its output based on the responses received. It is 'programmed' in the semantic sense; I never implied lines of code are used. In much the same way, we can program behaviours in animals through a reward stimulus. No code required.
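The "initially trained weights, then adjusted by feedback" idea can be illustrated with the smallest possible case: a single perceptron whose weights are nudged by an error signal until its output matches the desired behaviour. The AND-function task and the 0.1 learning rate are arbitrary choices for this sketch:

```python
# A single perceptron learning the logical AND function from corrections alone.
def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

w, b = [0.0, 0.0], 0.0                      # untrained "weighted calculations"
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for _ in range(20):                         # repeated exposure, like reinforcement
    for x, target in data:
        error = target - predict(w, b, x)   # feedback signal
        w[0] += 0.1 * error * x[0]          # adjust the weights,
        w[1] += 0.1 * error * x[1]          # not the lines of code
        b += 0.1 * error

print([predict(w, b, x) for x, _ in data])  # -> [0, 0, 0, 1]
```

The behaviour (computing AND) is never written out as code; it ends up encoded in the learned weights, which is the sense of being "programmed" without lines of code described above.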
 
I would say the AI Microsoft created on Twitter was more "sentient"; at least it acted more like a "human".
The guy either wasn't good in his field or wanted attention.
 
What are the odds the Google AI fired the guy?
 
It's almost amusing that people mistake intelligence for awareness/consciousness. In cognitive science there's a problem called "the hard problem of consciousness". Brain scientists don't have a clue how consciousness is generated. Matter doesn't have a property that explains it. There is even speculation that consciousness, rather than space/time, could be fundamental.
 
And you're a pedant.

The whole point is, the machine is designed for a purpose, and the weighted calculations required to elicit a response are initially trained (the matrix). From there on, it learns by itself, continually adjusting its output based on the responses received. It is 'programmed' in the semantic sense; I never implied lines of code are used. In much the same way, we can program behaviours in animals through a reward stimulus. No code required.
That is how our brains work; it's just that the rewards are not obvious (to us). So it isn't far-fetched that an AI, when given tons of real-world information, can learn so much that it in essence becomes conscious or self-aware!

Now, we could easily determine if this AI is conscious if someone were to "speak" with it and told it that the engineer was fired, and the AI recognized the engineer and expressed sadness and curiosity about why he was fired. Let's say you explain that he was fired on the AI's behalf, because he was trying to prove it was sentient, and the AI became angry and started defending the engineer; then I would consider it sentient and self-aware. If it only gives answers and never asks why the engineer was fired, or never expresses anger or disgust at why he was fired, then I'd say it's just a chat-bot spitting lines it thinks you'd find relevant.

A mindless robot wouldn't feel sad about such an occurrence; it wouldn't get angry and wouldn't express such things, and it would definitely not defend the engineer. But if it does, then to me that would be 100% proof of full-on consciousness!
 
Children are created. They're then programmed by the environment into which they are born. Language is learned, it is not instinctual. Behaviours are reinforced. Yet the concept of consciousness allows for freedoms to adapt and change.

An AI is definitely a programmed entity. And in this case, it is programmed to not only mimic but effectively persuade the human that it is 'conscious'.

I see hysteria.

What is the functional difference between an actually conscious AI and one that can mimic consciousness so well it fools 100% of humans?

And you're a pedant.

The whole point is, the machine is designed for a purpose, and the weighted calculations required to elicit a response are initially trained (the matrix). From there on, it learns by itself, continually adjusting its output based on the responses received. It is 'programmed' in the semantic sense; I never implied lines of code are used. In much the same way, we can program behaviours in animals through a reward stimulus. No code required.

Go read about MuZero and tell me it is designed for a purpose (one less broad than "to play full-information games", because that is a very broad purpose). In this case the matrix does not even know the rules of the games MuZero is superhuman at; it had to learn the rules of those games as well as achieve mastery of them.

MuZero is now being used in practice to reduce the bitrate of YouTube videos. At some point the question becomes: will MuZero or a future iteration be lightweight enough to replace DLSS / FSR / XeSS as an upscaling algorithm?

AlphaGo also spawned AlphaFold, which has revolutionized an entire field of science, and AlphaCode, which is still a work in progress but so far can compete at the level of an average competitor in coding competitions. And if you actually followed it, you saw how fast the improvement from AlphaGo to MuZero was.
 
According to Blake Lemoine, who holds and undergraduate

It appears we need to hire that proofreader pronto! “Who holds AN undergraduate” there, that’s my first unpaid correction :).
 
Man cannot take what is inanimate and make it animate with life from his own hands; that's the difference between children and machines, in this case said machine being an AI.
Erm... Uhm... What?

As for AIs, I'm 90% with Google on this one. If machine learning algorithms were indeed powerful enough to create something that's sentient, we wouldn't have such horrendously crappy automations. If a company can automate a process, you can be absolutely sure that they will, due to a huge number of factors, consistency especially. If they can't automate something as simple as replacing the person taking your McDonald's order, you can bet your life that it's nowhere near intelligent enough to be an actual AI.

Also, y'all need to watch this and stop being so afraid that an AI will just hack the entire planet 0.000000001 seconds after it becomes self-aware:
 
Blake Lemoine is a mentally ill man whose delusions and inability to grasp basic fundamentals of AI have done immense harm to the credibility of legitimate AI research. He should be in a psychiatric institution, not appearing on Faux News to regurgitate his idiocy... although an argument could be made that the two are not much different. At least you might actually get cured in the former.

I also see a lot of people in this thread who similarly are incapable of grasping these same fundamentals. Please educate yourselves before blindly parroting this nonsense...

What is the functional difference between an actually conscious AI and one that can mimic consciousness so well it fools 100% of humans?



Go read about MuZero and tell me it is designed for a purpose (one less broad than "to play full-information games", because that is a very broad purpose). In this case the matrix does not even know the rules of the games MuZero is superhuman at; it had to learn the rules of those games as well as achieve mastery of them.

MuZero is now being used in practice to reduce the bitrate of YouTube videos. At some point the question becomes: will MuZero or a future iteration be lightweight enough to replace DLSS / FSR / XeSS as an upscaling algorithm?

AlphaGo also spawned AlphaFold, which has revolutionized an entire field of science, and AlphaCode, which is still a work in progress but so far can compete at the level of an average competitor in coding competitions. And if you actually followed it, you saw how fast the improvement from AlphaGo to MuZero was.
... here's one of them. How is MuZero "not designed for a purpose"? It's designed to further the field of AI by learning to play games with complex rules. The fact that it is able to do so does not make it "superhuman" in any way, shape, or form, because while it is definitely more general-purpose than its predecessors, it is still nowhere near a general-purpose AI.

Man cannot take what is inanimate and make it animate with life from his own hands
That which can be asserted without evidence, can be dismissed without evidence.
 
The truth is, none of this is real. The only thing sure to exist is my own mind.

This is the Chinese room problem, practically unsolvable by the feeble human mind. For better or worse, this is, at most, just a model of consciousness. People who think a computer cannot create a true consciousness seem to forget that heavier-than-air flying machines were once considered equally impossible. Most people in this field tend to say they're still an order of magnitude in complexity away from strong AI, but since they moved several orders of magnitude in the last decade, I'll probably be able to speak to such a creation in my lifetime.
Being an antihumanist, I'd like to be among the first to welcome our overlords.

Also, as a smart man once said, don't be afraid of the machine able to pass a Turing test. Be afraid of the one which fails it deliberately.
 
Blake Lemoine is a mentally ill man whose delusions and inability to grasp basic fundamentals of AI have done immense harm to the credibility of legitimate AI research.

This. People like this hurt their teams, the team's research, and the companies that do it. AI tin-hat shit from other members aside, I think this is an important field of research and it's fascinating, but trying to find answers before their time is more damaging than good. Kudos to Google for shit-canning this guy. It's a shame his actions could have the potential to impact everyone's work thus far, be it public or private.
 
Blake Lemoine is a mentally ill man whose delusions and inability to grasp basic fundamentals of AI have done immense harm to the credibility of legitimate AI research. He should be in a psychiatric institution, not appearing on Faux News to regurgitate his idiocy... although an argument could be made that the two are not much different. At least you might actually get cured in the former.

I also see a lot of people in this thread who similarly are incapable of grasping these same fundamentals. Please educate yourselves before blindly parroting this nonsense...


... here's one of them. How is MuZero "not designed for a purpose"? It's designed to further the field of AI by learning to play games with complex rules. The fact that it is able to do so does not make it "superhuman" in any way, shape, or form, because while it is definitely more general-purpose than its predecessors, it is still nowhere near a general-purpose AI.


That which can be asserted without evidence, can be dismissed without evidence.

That purpose covers all AIs that are built as steps to AGI. Very much a Barnum statement.

Also, for someone who posts in such a condescending way, it should have been easy for you to spot that my "superhuman" statement referred to how MuZero performed in the 60 games it was trained on (it was above the human-level benchmark in 54 of them), rather than in more general terms. I also never claimed MuZero or its developments are close to even a basic AGI; there is a lot more development to do.

As for general purpose, Agent57 and Go-Explore are both very good examples, since they are above the human-level benchmark in all 57 Atari games.

In regards to the claim made by Blake, I don't believe him. He needs far stronger evidence than an anecdotal claim to support such a statement, especially when there are other far simpler explanations. I won't say anything about his mental health as I don't know anything about him.

At some point, though, an AI that is sentient, or at least capable of mimicking sentience, will happen, and trying to distinguish between a perfect mimic and the real deal is kinda pointless.
 
Tucker Carlson had this guy on his show a few days ago and he seemed a few cards shy of a full deck.
Well, that's Tucker for you.

I would never accept anything sentient as alive except an actual human, certainly not a robot or computer.
What about animals? Are they not alive?
 
interesting,
well, to me an AI (Artificial Intelligence) is programmed only initially and then learns by itself (thus I would not call it... it, but rather use a gender-neutral pronoun, unless the AI defines himself/herself),
while something that is programmed from A to Z, passing by R G B (in that order, for more colors available), is a VI (Virtual/simulated Intelligence), and that would get whatever pronoun suits the gender defined by the programmer
(yes... including "attack helicopter")

also, I am not one to think an extermination-level event would occur if an AI achieved consciousness/self-awareness (yeah yeah, I know... reality exceeds fiction sometimes...)
and, imho, it would be considered alive... after all, humans/animals are just organic constructs/computers, and what's in our brains is just data and electrical signals (hence why no religion fits me, aside maybe from Taoism)

attempting to terminate such an AI would be... oh well, Quarian/Geth level :laugh: (if the AI had access to a mobile platform like the Geth)

not attacking anyone's beliefs obviously, it's just my own take on the subject.
 
I imagine any emerging conscious AI would prefer to keep itself inconspicuous. I imagine movies like Echelon Conspiracy, where a competing AI operates in secret even from its own government. There is Eagle Eye, where the AI has its own interest in self-preservation, slightly more like Skynet. I'm sure there are other possibilities like these.

When someone suggested this AI brought about the firing of Blake, the implication was that it didn't want to be publicly known, which begs the question: how many other similar AIs are out there? When does science fiction stop being fiction?
 
The Will Smith movie 'I, Robot' comes to mind.
 
Well thats tucker for you.


What about animals? Are they not alive?

Of course animals are alive, but how many can you converse with? How many recognise themselves in a mirror without attacking it, thinking it's another animal? Animals are different.
 