Wednesday, April 17th 2024

Intel Builds World's Largest Neuromorphic System to Enable More Sustainable AI

Today, Intel announced that it has built the world's largest neuromorphic system. Code-named Hala Point, this large-scale system, initially deployed at Sandia National Laboratories, uses Intel's Loihi 2 processor. It aims to support research into future brain-inspired artificial intelligence (AI) and to tackle challenges related to the efficiency and sustainability of today's AI. Hala Point advances Intel's first-generation large-scale research system, Pohoiki Springs, with architectural improvements that achieve over 10 times more neuron capacity and up to 12 times higher performance.

"The computing cost of today's AI models is rising at unsustainable rates. The industry needs fundamentally new approaches capable of scaling. For that reason, we developed Hala Point, which combines deep learning efficiency with novel brain-inspired learning and optimization capabilities. We hope that research with Hala Point will advance the efficiency and adaptability of large-scale AI technology." -Mike Davies, director of the Neuromorphic Computing Lab at Intel Labs

What It Does: Hala Point is the first large-scale neuromorphic system to demonstrate state-of-the-art computational efficiency on mainstream AI workloads. Characterization shows it can support up to 20 quadrillion operations per second, or 20 petaops, with an efficiency exceeding 15 trillion 8-bit operations per second per watt (TOPS/W) when executing conventional deep neural networks. This rivals, and in some cases exceeds, levels achieved by architectures built on graphics processing units (GPUs) and central processing units (CPUs). Hala Point's unique capabilities could enable future real-time continuous learning for AI applications such as scientific and engineering problem-solving, logistics, smart city infrastructure management, large language models (LLMs) and AI agents.
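
As a quick sanity check (our arithmetic from the figures above, not an Intel-published number), the quoted throughput and efficiency imply a power draw for this workload that sits comfortably under the system maximum given later in this article:

```python
# Back-of-envelope check of the quoted throughput and efficiency figures.
peak_throughput_ops_s = 20e15   # 20 petaops: 8-bit operations per second
efficiency_ops_s_per_w = 15e12  # 15 TOPS/W: operations per second per watt

implied_power_w = peak_throughput_ops_s / efficiency_ops_s_per_w
print(f"Implied power at peak DNN throughput: {implied_power_w:.0f} W")
# -> ~1,333 W, under the 2,600 W system maximum quoted in the
#    "About Hala Point" section below.
```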

How It Will Be Used: Researchers at Sandia National Laboratories plan to use Hala Point for advanced brain-scale computing research. The organization will focus on solving scientific computing problems in device physics, computer architecture, computer science and informatics.

"Working with Hala Point improves our Sandia team's capability to solve computational and scientific modeling problems. Conducting research with a system of this size will allow us to keep pace with AI's evolution in fields ranging from commercial to defense to basic science," said Craig Vineyard, Hala Point team lead at Sandia National Laboratories.

Currently, Hala Point is a research prototype that will advance the capabilities of future commercial systems. Intel anticipates that lessons learned from it will lead to practical advancements, such as the ability for LLMs to learn continuously from new data. Such advancements promise to significantly reduce the unsustainable training burden of widespread AI deployments.

Why It Matters: Recent trends in scaling deep learning models to trillions of parameters have exposed daunting sustainability challenges in AI and have highlighted the need for innovation at the lowest levels of hardware architecture. Neuromorphic computing is a fundamentally new approach that draws on neuroscience insights, integrating memory and computing with highly granular parallelism to minimize data movement. In results published at this month's International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Loihi 2 demonstrated orders-of-magnitude gains in efficiency, speed and adaptability for emerging small-scale edge workloads.
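
The data-movement point is the crux. Widely cited circuit-level estimates (Horowitz, ISSCC 2014, for a ~45 nm process) put an off-chip DRAM access at several orders of magnitude more energy than a simple arithmetic operation. These are ballpark illustrative figures, not Loihi 2 measurements:

```python
# Ballpark energy figures (Horowitz, ISSCC 2014, ~45 nm process);
# illustrative only, not Loihi 2 measurements.
energy_8bit_add_pj = 0.03      # ~0.03 pJ for an 8-bit integer add
energy_dram_access_pj = 640.0  # ~640 pJ for a 32-bit off-chip DRAM access

ratio = energy_dram_access_pj / energy_8bit_add_pj
print(f"One DRAM access costs ~{ratio:,.0f}x an 8-bit add")
# -> ~21,333x, which is why integrating memory with compute and moving
#    data only on spike events can save so much energy.
```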

Advancing on its predecessor, Pohoiki Springs, with numerous improvements, Hala Point now brings neuromorphic performance and efficiency gains to mainstream conventional deep learning models, notably those processing real-time workloads such as video, speech and wireless communications. For example, Ericsson Research is applying Loihi 2 to optimize telecom infrastructure efficiency, as highlighted at this year's Mobile World Congress.

About Hala Point: Loihi 2 neuromorphic processors, which form the basis for Hala Point, apply brain-inspired computing principles, such as asynchronous, event-based spiking neural networks (SNNs), integrated memory and computing, and sparse, continuously changing connections, to achieve orders-of-magnitude gains in energy efficiency and performance. Neurons communicate directly with one another rather than through memory, reducing overall power consumption.
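
To make the asynchronous, event-based spiking style concrete, below is a minimal leaky integrate-and-fire (LIF) simulation in Python. It is a toy sketch of the computational model, not Loihi 2's actual neuron circuit or programming interface; all sizes and constants are arbitrary:

```python
import numpy as np

# Toy leaky integrate-and-fire (LIF) layer illustrating event-based,
# sparse spiking computation. Not Loihi 2's actual neuron model or API.
rng = np.random.default_rng(0)
n_in, n_out = 64, 16
weights = rng.normal(0.0, 0.5, size=(n_out, n_in))  # synaptic weights

v = np.zeros(n_out)          # membrane potentials
decay, threshold = 0.9, 1.0  # leak factor and firing threshold

for t in range(100):
    in_spikes = rng.random(n_in) < 0.05  # sparse binary input events
    # Work scales with events: only columns with a spike contribute.
    v = decay * v + weights[:, in_spikes].sum(axis=1)
    out_spikes = v >= threshold          # fire on threshold crossing
    v[out_spikes] = 0.0                  # reset neurons that fired
    if out_spikes.any():
        print(f"t={t:3d}: neurons {np.flatnonzero(out_spikes)} spiked")
```

Because the weight matrix is only read for inputs that actually spiked, quiet inputs cost nothing, which is the property event-based hardware exploits.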

Hala Point packages 1,152 Loihi 2 processors, produced on the Intel 4 process node, in a six-rack-unit data center chassis the size of a microwave oven. The system supports up to 1.15 billion neurons and 128 billion synapses distributed over 140,544 neuromorphic processing cores, consuming a maximum of 2,600 watts of power. It also includes over 2,300 embedded x86 processors for ancillary computations.

Hala Point integrates processing, memory, and communication channels in a massively parallelized fabric, providing a total of 16 petabytes per second (PB/s) of memory bandwidth, 3.5 PB/s of inter-core communication bandwidth, and 5 terabytes per second (TB/s) of inter-chip communication bandwidth. The system can process over 380 trillion 8-bit synapses and over 240 trillion neuron operations per second.
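
Dividing those published system totals by the 1,152 chips gives a rough per-chip picture (our arithmetic; Intel quotes system-level figures only):

```python
# Per-chip breakdown derived from the published system totals
# (our arithmetic; Intel publishes system-level figures only).
chips = 1152
neurons, synapses, cores = 1.15e9, 128e9, 140_544
max_power_w, syn_ops_per_s = 2600, 380e12

print(f"Neurons per chip:  {neurons / chips:,.0f}")    # ~1.0 million
print(f"Synapses per chip: {synapses / chips:,.0f}")   # ~111 million
print(f"Cores per chip:    {cores / chips:.0f}")       # 122
print(f"Watts per chip:    {max_power_w / chips:.2f}") # ~2.26 W at peak
print(f"Synaptic ops per joule: {syn_ops_per_s / max_power_w:.2e}")
# -> ~1.46e+11 synaptic ops per joule at maximum power
```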

Applied to bio-inspired spiking neural network models, the system can execute its full capacity of 1.15 billion neurons 20 times faster than a human brain, and at up to 200 times faster at lower capacities. While Hala Point is not intended for neuroscience modeling, its neuron capacity is roughly equivalent to that of an owl brain or the cortex of a capuchin monkey.

Loihi-based systems can perform AI inference and solve optimization problems using 100 times less energy, at speeds as much as 50 times faster, than conventional CPU and GPU architectures. By exploiting up to 10:1 sparse connectivity and event-driven activity, early results on Hala Point show the system can achieve deep neural network efficiencies as high as 15 TOPS/W without requiring input data to be collected into batches, a common GPU optimization that significantly delays the processing of data arriving in real time, such as video from cameras. While still in research, future neuromorphic LLMs capable of continuous learning could result in gigawatt-hours of energy savings by eliminating the need for periodic retraining with ever-growing datasets.
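
To unpack the batching trade-off: a GPU-style pipeline typically waits to assemble a batch before launching one dense matrix multiply, which amortizes weight fetches but delays streaming data, whereas an event-driven design touches only the synapses attached to active inputs as each sample arrives. A conceptual Python sketch of the contrast (illustrative only; neither architecture is actually programmed this way):

```python
import numpy as np

# Conceptual contrast: batched-dense vs. event-driven-sparse processing.
rng = np.random.default_rng(1)
w = rng.normal(size=(256, 1024))  # weight matrix (outputs x inputs)

# GPU-style: wait for 32 samples to arrive, then one dense matmul.
batch = rng.normal(size=(32, 1024))
out_batched = batch @ w.T          # high throughput, added queuing latency

# Event-driven: process one sparse sample the moment it arrives,
# reading weights only for the ~10% of inputs that are active.
x_active = rng.choice(1024, size=102, replace=False)  # active input indices
out_event = w[:, x_active].sum(axis=1)  # touch only active synapses

print(out_batched.shape, out_event.shape)  # (32, 256) (256,)
```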

What's Next: The delivery of Hala Point to Sandia National Laboratories marks the first deployment in a new family of large-scale neuromorphic research systems that Intel plans to share with its research collaborators. Further development will enable neuromorphic computing applications to overcome the power and latency constraints that limit the real-world, real-time deployment of AI capabilities.

Together with an ecosystem of more than 200 Intel Neuromorphic Research Community (INRC) members, including leading academic groups, government labs, research institutions and companies worldwide, Intel is working to push the boundaries of brain-inspired AI and to progress this technology from research prototypes to industry-leading commercial products over the coming years.
Source: Intel

6 Comments on Intel Builds World's Largest Neuromorphic System to Enable More Sustainable AI

#1
P4-630
But... but... they want Nvidia...
#2
AleksandarK
News Editor
I thought Loihi had its fate sealed. Good to see it back, and the push for different uArch designs!
#3
Solaris17
Super Dainty Moderator
AleksandarK said: "I thought Loihi had its fate sealed. Good to see it back, and the push for different uArch designs!"
Agreed! It's pretty cool to see it back, and this approach is certainly the next step for AI: doing it in real time, that is.
#4
Minus Infinity
Good to see somebody addressing the elephant in the room. When people thought AI would destroy us, I'll bet they didn't think it would be because of energy consumption. I would ban all AI data centres from operating until they run on 100% off-grid renewable energy sources, and I couldn't care less how long that takes them.
#5
Random_User
Minus Infinity said: "Good to see somebody addressing the elephant in the room. When people thought AI would destroy us, I'll bet they didn't think it would be because of energy consumption. I would ban all AI data centres from operating until they run on 100% off-grid renewable energy sources, and I couldn't care less how long that takes them."
Exactly. Not to mention they consume all that energy just for their own corporate benefit and margins, with no real useful output for 99% of all people living. It's just being used to fake info and pump user personal data, or make AI pixel sausages and waifus. I doubt there's a need for such a huge amount of AI stuff for just real database managing or scientific stuff.
The AI stuff buyers/users only inflate prices, thus giving the arrogant TSMC even more hubris. Hence, the stuff that should have had much higher priority, e.g. international governmental/public contracts with AMD/Intel for their energy-efficient APUs, to replace the old power-hungry useless crap that schools, hospitals, universities, gov offices etc. have now, isn't being made in big amounts, because all allocation goes to scum/scam stuff like effin' AI. Not to mention the thousands if not millions of people wanting to build low-profile small ITX desktops or mobile PCs for daily use, but who can't, since availability is nonexistent. AMD is a huge bait maker. It could be a useful move for both AMD and Intel to provide those, for environmental benefits as much as for countries where electric power is heavily priced. But no. Instead, these companies are the culprit of an even bigger, never-ending exponential rise in electric power and internet bandwidth use for no apparent useful purpose.
#6
Vayra86
Random_User said: "Exactly. Not to mention they consume all that energy just for their own corporate benefit and margins […]"
The dystopia is coming, bud. Cyberpunk was just a prediction, and we're closing in rapidly. Just a few more climate-related disasters, some war, and we're hopeless enough to go there.

We're still doing more, producing more, using more than we ever did historically, year over year... and the amount of waste is immense. To me, AI is still nothing more than crypto mining capacity looking for a new problem. Something infinite. The economy that was created there is just keeping itself up. Is this human brilliance... or is it hopelessness? I can't decide. We already had predictive models and most of the other stuff that is now touted as new, but "oh, eh, yeah, oops, it does create some false results". Just because the result is less predictable, we think it's better o_O