Monday, May 1st 2023

"Godfather of AI" Geoffrey Hinton Departs Google, Voices Concern Over Dangers of AI

Geoffrey Hinton, the British-Canadian cognitive psychologist, computer scientist, and winner of the 2018 Turing Award for his work on deep learning, has departed the Google Brain team after a decade-long tenure. His research on neural networks, dating back to the 1980s, helped shape the current landscape of deep learning and artificial intelligence with direct and indirect contributions over the years. 2012's AlexNet, designed and developed in collaboration with his students Alex Krizhevsky and Ilya Sutskever, formed the backbone of modern computer vision and the AI image recognition used today in generative AI. Hinton joined Google when the company won the bidding for the tiny startup he and his two students had founded in the months following AlexNet's reveal. Sutskever left Google in 2015 to become co-founder and Chief Scientist of OpenAI, the creators of ChatGPT and one of Google's most prominent competitors.

In an interview with the New York Times, Hinton said that he quit his position at Google so that he could speak freely about the risks of AI, and that a part of him regrets his life's work in the field. He said that during his time there, Google acted as a "proper steward" of AI development and was careful about releasing anything that might be harmful. His view of the industry shifted within the last year as Microsoft's Bing Chat took shots at Google's core business, web search, prompting Google to respond with Bard in a manner more reactionary than deliberate. The concern is that as these companies battle for AI supremacy, they won't take proper precautions against bad-faith actors using the technology to flood the internet with false photos, text, and even videos, until the average person can no longer tell what is real and what was manufactured by an AI prompt.
Hinton believes that the latest systems are starting to encroach on, or even eclipse, human capabilities, telling the BBC that, "Right now, what we're seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it's not as good, but it does already do simple reasoning. And given the rate of progress, we expect things to get better quite fast. So we need to worry about that. Right now, they're not more intelligent than us, as far as I can tell. But I think they soon may be." Hinton thinks that as the systems improve, newer generations of AI could become more dangerous. These new generations may learn more effectively from the larger data sets they are trained on, which could lead to an AI generating and running its own code or setting its own goals. "The idea that this stuff could actually get smarter than people—a few people believed that, but most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."

Dr. Hinton admits that, at 75 years old, it's time to retire, and makes it very clear that he in no way quit so that he could criticize Google or the team he worked with for over 10 years. He told the BBC that he has "some good things" to say about Google and its approach to AI, but that those comments would be more credible if he did not work for the company.
Source: New York Times

30 Comments on "Godfather of AI" Geoffrey Hinton Departs Google, Voices Concern Over Dangers of AI

#1
ymbaja
So 40 years ago he had no concerns? Just recently, huh…

What was that Jurassic Park quote about scientists?
Posted on Reply
#2
sepheronx
ymbaja: So 40 years ago he had no concerns? Just recently, huh…

What was that Jurassic Park quote about scientists?
"when you gotta go, you gotta go"
Posted on Reply
#3
Darmok N Jalad
Fouquin: "The idea that this stuff could actually get smarter than people—a few people believed that, but most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."
Well, we’re raising the bar on AI, while lowering the bar on our own education and development, so maybe that’s part of the reason they got the timeline wrong. And every big corp is chasing AI to make and/or save a buck. The culmination of laziness and greed.
Posted on Reply
#4
AusWolf
How long until there's a ban on AI in every civilised country?
Posted on Reply
#5
usiname
Remember, ChatGPT is Skynet
Posted on Reply
#6
erocker
If countries don't start setting better standards for communications, television, and the internet, this could be society-collapsing stuff.
Posted on Reply
#7
Prima.Vera
I think all the AI doomsayers are overreacting and exaggerating too much. AI is not dangerous in itself; it just uses the freely available public info online.
What is dangerous is tools such as deepfakes, voiceovers, etc., that can be used for malicious purposes, and those can be regulated.
For me, tools such as Bard, ChatGPT, and Bing Chat are tremendously helpful in my work, letting me complete tasks in one hour that usually took a day or more.
It all depends on how you use those tools.
Posted on Reply
#8
Fouquin
Prima.Vera: it just uses the freely available public info online.
Free public info includes how to write and implement malware, break encryption, social-engineer personal data, and more. What isn't available it can infer; what it can't infer it can solve. The point Dr. Hinton is making is that continued development at a breakneck pace for the sake of 'being the best' is going to lead to a point where the algorithms developed are better than our own cognition at certain tasks. Who will be faster: a machine writing and implementing malicious code or teams of developers combating it? Hope that you never have to find out.

What began as academia has become an arms race with no oversight.
Posted on Reply
#9
Prima.Vera
Fouquin: Who will be faster: a machine writing and implementing malicious code or teams of developers combating it?
This is a two-way thing. Use the AI to create better, stronger antivirus/antimalware software. See?
Modern problems require modern solutions. ;)
Posted on Reply
#11
Guwapo77
Prima.Vera: I think all the AI doomsayers are overreacting and exaggerating too much. AI is not dangerous in itself; it just uses the freely available public info online.
What is dangerous is tools such as deepfakes, voiceovers, etc., that can be used for malicious purposes, and those can be regulated.
For me, tools such as Bard, ChatGPT, and Bing Chat are tremendously helpful in my work, letting me complete tasks in one hour that usually took a day or more.
It all depends on how you use those tools.
You are basically saying the same thing the creator said regarding bad-faith actors. A gun used for hunting was a good tool at first, until some bad guys started using them to kill people. This tech will be used for nefarious reasons, and there is absolutely no way around it.
Posted on Reply
#12
bug
His concerns are legit, but even without AI, many people today can't tell a true story from a made-up one. I will admit I have also fallen for that. But at least I try to track down sources and find alternative, independent reports when something smells fishy.
Posted on Reply
#13
the54thvoid
Super Intoxicated Moderator
Guwapo77: You are basically saying the same thing the creator said regarding bad-faith actors. A gun used for hunting was a good tool at first, until some bad guys started using them to kill people. This tech will be used for nefarious reasons, and there is absolutely no way around it.
That's not sound reasoning. Guns were the portable evolution of cannons (themselves an evolution of Chinese fire lances). They were never created for the purpose of hunting; rather, they were invented to win battles. AI is not such a thing; it is simply an entity in its own right, with an exceptionally diverse field of application, most of which needs to be heavily controlled to avoid harmful consequences.

IMO, AI created in our world will be no different than the people and nations that create it. Greedy, ideological, and/or altruistic. There will be bad AI and good AI. Problem is, a good AI probably wouldn't have the ethics to beat a bad AI. Story of life.
Posted on Reply
#14
Bomby569
Let me retire with the money I made creating the monster, and by the way, beware of the monster I created.
Posted on Reply
#15
ZoneDymo
ymbaja: So 40 years ago he had no concerns? Just recently, huh…

What was that Jurassic Park quote about scientists?
They move in herds?
Posted on Reply
#16
_Flare
The first results have been here for years: false information fed to humans by humans. That can be, was, and is being used for good and bad human goals.

But if AI becomes more than a tool used by humans... if you then feel the cold breath of those machines on your neck, it will be way too late to stop them.
Posted on Reply
#17
Aquinus
Resident Wat-man
Prima.Vera: AI is not dangerous in itself; it just uses the freely available public info online.
With all due respect, the internet is being scraped regardless of the copyright laws or licenses that are being violated. There are some very real concerns in the software space: if you ask ChatGPT to write you some code, it was trained on publicly available, open-source software, which almost always carries some form of license requirement to give the original author recognition for their work. You don't get that when code is "generated" from training data that is under a license.

AI can be dangerous just as any piece of software can be, but the idea that they can pull whatever information they want to create a tool that can be sold really opens a Pandora's box of whose IP has been quite literally stolen to make this happen. This is China-level theft of IP, if I'm going to be completely honest.
the54thvoid: altruistic
You mean like SBF? :roll:
Posted on Reply
#18
Fouquin
Prima.Vera: This is a two-way thing. Use the AI to create better, stronger antivirus/antimalware software. See?
Modern problems require modern solutions. ;)
Do you not see the issue with having an AI create security that it is also breaking? If it is being told to program better security, it already knows the solutions to break it because it made it in the first place. There is no layer of obfuscation or abstraction, it's literally building the keys for its own lock.
Posted on Reply
#19
bug
the54thvoid: That's not sound reasoning. Guns were the portable evolution of cannons (themselves an evolution of Chinese fire lances). They were never created for the purpose of hunting; rather, they were invented to win battles. AI is not such a thing; it is simply an entity in its own right, with an exceptionally diverse field of application, most of which needs to be heavily controlled to avoid harmful consequences.
That's if you concentrate on "guns" only. Replace "guns" with "tools", realize AI is just another tool and you have the whole picture.
Posted on Reply
#20
the54thvoid
Super Intoxicated Moderator
bug: That's if you concentrate on "guns" only. Replace "guns" with "tools", realize AI is just another tool, and you have the whole picture.
I see the bigger picture. I was implying the 'guns' metaphor wasn't appropriate given that guns were created for a purpose intended to harm others. AI isn't the same. Its end result will be determined by those who guide it.
Posted on Reply
#21
Eternit
the54thvoid: I see the bigger picture. I was implying the 'guns' metaphor wasn't appropriate given that guns were created for a purpose intended to harm others. AI isn't the same. Its end result will be determined by those who guide it.
And what about nuclear fission? You can use it in a bomb or in a nuclear power plant, but in both cases you need strict laws to control it. If you let everyone have a nuclear reactor at home and throw the waste on a pile in the backyard, it will be very dangerous. The same goes for AI: it can be used in many ways, but we need strict laws to control the way we use it.

Also, some say people at first were skeptical towards steam engines, factories, and the industrial revolution, but now they accept it, as it created better-paid jobs and improved our lives. Well, that holds if you compare the 18th century with the 21st. But what about the 19th? Was the quality of life of a factory worker so great? Was it better than the quality of life of a worker in the 18th century? It wasn't, because uncontrolled capitalism wasn't so great and the stakeholders of the corporations didn't care about workers. The same will happen with the AI revolution. It will wipe out the well-paid middle class, and we will return to a situation where everything is owned by very few and the rest have nothing at all. And you might think people will rebel. Well, the rich will have robocops to pacify the rebellion. While police and soldiers had empathy for the civilians they were ordered to kill, robocops will not.
Posted on Reply
#22
Aquinus
Resident Wat-man
the54thvoid: I was implying the 'guns' metaphor wasn't appropriate given that guns were created for a purpose intended to harm others.
Well, that's not completely accurate. That is one of their uses, sure, but you do realize that people do hunt for food. Maybe not where you live, but it's incredibly common where I live. Well before firearms we had things like bows and arrows, and we used those against people and for hunting as well. It was a tool, and the Native Americans knew that better than just about anyone else. As @bug said, it's a tool that completely depends on how you use it. So no, I completely disagree. It is appropriate, and in this regard a gun and AI are quite literally the same. It's not the tool that's the problem, it's how it's utilized that is.
Posted on Reply
#23
the54thvoid
Super Intoxicated Moderator
Aquinus: Well, that's not completely accurate. That is one of their uses, sure, but you do realize that people do hunt for food. Maybe not where you live, but it's incredibly common where I live. Well before firearms we had things like bows and arrows, and we used those against people and for hunting. As @bug said, it's a tool that completely depends on how you use it. So no, I completely disagree. It is appropriate, and in this regard a gun and AI are quite literally the same. It's not the tool that's the problem, it's how it's utilized that is.
You're all missing my point. My reply was to this:
A gun used for hunting was a good tool at first, until some bad guys started using them to kill people
...which directly implies that 'bad actors' altered the purpose of guns, from hunting to killing. My point was that guns were not created to hunt; they were created to kill. By definition, their purpose was harmful.

You then need to see that I was pitching that against the concept of AI, which, unlike guns, was not designed to kill. Bad actors didn't alter the purpose of a gun, whereas AI's purpose can be altered (or guided).

The initial post implied guns were not created to be harmful to us, when that is the whole point of their existence. Unlike AI, which was pretty much thought of as a tool to help humanity, not hinder it.

Edit: if the item quoted had been a bow and arrow, yes, it'd have been fine. Bows were created to hunt. Guns weren't.
Posted on Reply
#24
droopyRO
I was watching this when I stumbled onto this article; kind of a nice coincidence:
Posted on Reply
#25
Aquinus
Resident Wat-man
the54thvoid: My point was that guns were not created to hunt; they were created to kill.
Pardon my ignorance, but what do you think hunting is?
the54thvoid: Edit: if the item quoted had been a bow and arrow, yes, it'd have been fine. Bows were created to hunt. Guns weren't.
Bows and arrows were designed to kill, both animals and humans. So I'm not sure where you're going with this. I'm seeing conflicting points in your argument. Again, both of them are potentially lethal tools with the same intent. A firearm is merely an evolution of more archaic tools. What you're describing is rhetorical nonsense.
Posted on Reply