Monday, May 1st 2023
"Godfather of AI" Geoffrey Hinton Departs Google, Voices Concern Over Dangers of AI
Geoffrey Hinton, the British-Canadian cognitive psychologist and computer scientist who shared the 2018 Turing Award for his work on deep learning, has departed the Google Brain team after a decade-long tenure. His research on AI and neural networks, dating back to the 1980s, helped shape the current landscape of deep learning, neural processing, and artificial intelligence algorithms through direct and indirect contributions over the years. AlexNet, designed and developed in 2012 in collaboration with his students Alex Krizhevsky and Ilya Sutskever, formed the backbone of the computer vision and image recognition techniques used in today's generative AI. Hinton joined Google when the company won the bidding for the tiny startup he and his two students formed in the months following the reveal of AlexNet. Sutskever left the group at Google in 2015 to become co-founder and Chief Scientist of OpenAI, the creator of ChatGPT and one of Google's most prominent competitors.
In an interview with The New York Times, Hinton said that he quit his position at Google so that he may speak freely about the risks of AI, and that a part of him regrets his life's work in the field. He said that during his time there, Google acted as a "proper steward" of AI development and was careful about releasing anything that might be harmful. His view of the industry shifted within the last year as Microsoft's Bing Chat took aim at Google's core business, search, leading Google to respond with Bard in a manner more reactive than deliberate. The concern is that as these companies battle for AI supremacy, they won't take proper precautions against bad-faith actors using the technologies to flood the internet with false photos, text, and even videos, and that the average person would no longer be able to tell what was real and what was manufactured by an AI prompt.
Hinton believes that the latest systems are starting to encroach on, or even eclipse, human capabilities, telling the BBC: "Right now, what we're seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it's not as good, but it does already do simple reasoning. And given the rate of progress, we expect things to get better quite fast. So we need to worry about that. Right now, they're not more intelligent than us, as far as I can tell. But I think they soon may be." Hinton thinks that as the systems improve, newer generations of AI could become more dangerous. These new generations may learn more effectively from the larger data sets they are trained on, which could lead to an AI generating and running its own code or setting its own goals. "The idea that this stuff could actually get smarter than people—a few people believed that, but most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away.
Obviously, I no longer think that."
Dr. Hinton admits that at 75 years old it's time to retire, and makes it very clear that he in no way quit so that he could criticize Google or the team he worked with for over 10 years. He tells the BBC that he has "some good things" to say about Google and its approach to AI, but that those comments would be more credible if he did not work for the company.
Source:
New York Times
30 Comments on "Godfather of AI" Geoffrey Hinton Departs Google, Voices Concern Over Dangers of AI
What was that Jurassic Park quote about scientists?
What is dangerous is tools such as deepfakes, voiceovers, etc., that can be used for malicious purposes, so those can be regulated.
For me, tools such as Bard, ChatGPT and Bing Chat are tremendously helpful in my work, helping me complete tasks in one hour that usually took a day or more to achieve.
It all depends how you use those tools.
What began as academia has become an arms race with no oversight.
Modern problems require modern solutions. ;)
nautil.us/deep-learning-is-hitting-a-wall-238440/
IMO, AI created in our world will be no different than the people and nations that create it. Greedy, ideological, and/or altruistic. There will be bad AI and good AI. Problem is, a good AI probably wouldn't have the ethics to beat a bad AI. Story of life.
But if AI becomes more than a tool used by humans ... if you then feel the cold breath of those machines on your neck, it will be way too late to stop them.
AI can be dangerous just as any piece of software can be, but the idea that they can pull whatever information they want to create a tool that can be sold really opens a Pandora's box of whose IP has been quite literally stolen to make this happen. This is China-level theft of IP if I'm going to be completely honest. You mean like SBF? :roll:
Also, some say people at first were skeptical towards steam engines, factories and the industrial revolution, but now they accept it as it created better-paid jobs and improved our lives. Well, if you compare the 18th century with the 21st. But what about the 19th? Was the quality of life of the factory worker so great? Was it better than the quality of life of a worker in the 18th century? It wasn't, because uncontrolled capitalism wasn't so great and the stakeholders of the corporations didn't care about workers. The same will happen with the AI revolution. It will wipe out the well-paid middle class and we will return to a situation where everything is owned by the very few and the rest have nothing at all. And you might think people will rebel. Well, the rich will have robocops to pacify the rebellion. While police and soldiers had empathy for the civilians they were ordered to kill, robocops will not.
You then need to see that I was pitching that against the concept of AI - which -- unlike guns -- was not designed to kill. Bad actors didn't alter the purpose of a gun. Whereas AI's purpose can be altered (or guided).
The initial post implied guns were not created to be harmful to us, when that is the whole point of their initial premise. Unlike AI, which was pretty much thought of as a tool to help humanity, not hinder it.
Edit: if the item quoted had been a bow and arrow - yes, it'd have been fine. Bows were created to hunt. Guns weren't.