Wednesday, March 20th 2024
NVIDIA CEO Jensen Huang: AGI Within Five Years, AI Hallucinations are Solvable
After delivering a vivid GTC keynote, NVIDIA CEO Jensen Huang held a Q&A session that raised several interesting points for debate. One of them addressed the pressing concerns surrounding AI hallucinations and the future of Artificial General Intelligence (AGI). With a confident tone, Huang reassured the tech community that AI hallucinations—the phenomenon where AI systems generate plausible yet unfounded answers—are a solvable problem. His proposed solution emphasizes feeding well-researched, accurate data into AI systems to mitigate these occurrences. "The AI shouldn't just answer; it should do research first to determine which of the answers are the best," noted Mr. Huang, adding that for every single question there should be a rule that makes the AI research the answer. This describes Retrieval-Augmented Generation (RAG), where LLMs fetch data from external sources, such as additional databases, for fact-checking.
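To make the RAG idea concrete, here is a minimal sketch of the retrieve-then-answer pattern. Everything in it is illustrative: the keyword-overlap retriever, the document store, and the prompt format are toy stand-ins, not any specific library's API, and a real system would use embedding search and an actual LLM call.

```python
# Toy RAG sketch: retrieve supporting documents first, then ground the
# model's answer in them instead of relying on memorized training data.
# The retriever below scores documents by shared keywords (illustrative only).

def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Return the top_k documents sharing the most words with the query."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, context: list[str]) -> str:
    """Build a prompt that instructs the model to answer only from sources."""
    sources = "\n".join(f"- {doc}" for doc in context)
    return f"Answer using only these sources:\n{sources}\n\nQuestion: {query}"

if __name__ == "__main__":
    documents = [
        "NVIDIA announced new GPUs at GTC 2024.",
        "RAG fetches external data so an LLM can check facts before answering.",
    ]
    query = "How does RAG help an LLM check facts?"
    context = retrieve(query, documents)
    print(build_prompt(query, context))
```

The key design point Huang alludes to is that the generation step is constrained by retrieved evidence, so an answer unsupported by the fetched sources can be flagged or refused rather than hallucinated.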
Another interesting comment from the CEO is that the pinnacle of AI evolution—Artificial General Intelligence—is just five years away. People working in AI are divided over the AGI timeline. While Mr. Huang predicted five years, some leading researchers, such as Meta's Yann LeCun, think we are far from the AGI threshold and will first be stuck with dog/cat-level AI systems. AGI has long been a topic of both fascination and apprehension, with debates often revolving around its potential to exceed human intelligence and the ethical implications of such a development. Critics worry about the unpredictability and uncontrollability of AGI once it reaches a certain level of autonomy, raising questions about aligning its objectives with human values and priorities. Timeline-wise, no one knows; everyone makes their own prediction, and time will tell who was right.
Source:
TechCrunch
21 Comments on NVIDIA CEO Jensen Huang: AGI Within Five Years, AI Hallucinations are Solvable
Like, I am not denying the advances that have been made, but what we have now can only be called AI in the loosest sense. Going from here to a full-blown AGI in just five years is absolutely implausible.
Yeah, they said the fact checker was reliable too, and it turned out to be preprogrammed bias—"pick a word or phrase" BS rather than the entire context—so even "mostly false" wouldn't apply to its obvious twisting of what was said, hehe.
AI will just be more of the same, just more long-winded.
And not all AI is LLMs.
You can be sure that AI will be used for military applications (as they are starting to be used in Ukraine for example).
Then hallucinations of AI will have a very different meaning.
That is actually reasonable. Training off the internet was always going to result in an AI that is as dumb as a cat meme.
But the way we do AI now is not new tech. It has been done this way for quite some time, for example for OCR or speech recognition. What has changed is the miniaturization of silicon, which has allowed much better performance and results.
And the way it is done, it is really hard to get a perfect 100% result. Since you can't train a model on every possible input, you can't predict every possible output. Even if you could train a model on every possible input, each successive input or piece of feedback would also erase a bit of the previous inputs.
This problem has been with us for a very long time, and I don't see how it will be addressed in the future, when the models will obviously become more and more complex. If we understood how the neuron layers really worked, we wouldn't need to train them: we would just build them directly. Now we train models on a finite set of inputs and expect that, when confronted with other inputs, they will give sensible results. But since they are black boxes in essence, you may get weird results. What Jensen Huang is advocating is postprocessing, but that may not be applicable to every implementation of AI. Not everything is an LLM.
Maybe AGI will help, if it really arrives in five years, but I feel that, like ours, Artificial General Intelligence will be flawed.
The last thing we need is the Xenon murdering us all while we don't have ATF (AGI Task Force)/Terran Protectorate naval assets to deal with the threat.