Tuesday, September 24th 2024

Intel Launches Gaudi 3 AI Accelerator and P-Core Xeon 6 CPU

As AI continues to revolutionize industries, enterprises are increasingly in need of infrastructure that is both cost-effective and available for rapid development and deployment. To meet this demand head-on, Intel today launched Xeon 6 with Performance-cores (P-cores) and Gaudi 3 AI accelerators, bolstering the company's commitment to deliver powerful AI systems with optimal performance per watt and lower total cost of ownership (TCO).

"Demand for AI is leading to a massive transformation in the data center, and the industry is asking for choice in hardware, software and developer tools," said Justin Hotard, Intel executive vice president and general manager of the Data Center and Artificial Intelligence Group. "With our launch of Xeon 6 with P-cores and Gaudi 3 AI accelerators, Intel is enabling an open ecosystem that allows our customers to implement all of their workloads with greater performance, efficiency and security."

Introducing Intel Xeon 6 with P-cores and Gaudi 3 AI accelerators

Intel's latest advancements in AI infrastructure include two major updates to its data center portfolio:
  • Intel Xeon 6 with P-cores: Designed to handle compute-intensive workloads with exceptional efficiency, Xeon 6 delivers twice the performance of its predecessor. It features increased core count, double the memory bandwidth and AI acceleration capabilities embedded in every core. This processor is engineered to meet the performance demands of AI from edge to data center and cloud environments.
  • Intel Gaudi 3 AI Accelerator: Specifically optimized for large-scale generative AI, Gaudi 3 boasts 64 Tensor processor cores (TPCs) and eight matrix multiplication engines (MMEs) to accelerate deep neural network computations. It includes 128 gigabytes (GB) of HBM2e memory for training and inference, and twenty-four 200-gigabit Ethernet ports for scalable networking. Gaudi 3 also offers seamless compatibility with the PyTorch framework and advanced Hugging Face transformer and diffuser models. Intel recently announced a collaboration with IBM to deploy Intel Gaudi 3 AI accelerators as a service on IBM Cloud, with the aim of lowering the total cost of ownership of leveraging and scaling AI while enhancing performance.
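On the PyTorch side, Gaudi is typically exposed as an "hpu" device once Intel's Habana bridge module is imported. The sketch below is a hedged device-selection helper: the module name `habana_frameworks.torch.core` matches Intel's Gaudi software stack, but treat the import as an assumption for your particular environment.

```python
# Hedged device selection for PyTorch on Gaudi. On a machine with the
# Intel Gaudi software stack installed, importing the Habana bridge
# registers the "hpu" device with PyTorch; elsewhere we fall back to CPU.

def pick_device() -> str:
    try:
        import habana_frameworks.torch.core  # noqa: F401  (Gaudi PyTorch bridge)
        return "hpu"
    except ImportError:
        return "cpu"  # no Gaudi software stack present

device = pick_device()
print(device)
```

A model would then be moved with `model.to(pick_device())` before training or inference; on machines without Gaudi hardware the same script runs unchanged on CPU.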

Enhancing AI Systems with TCO Benefits
Deploying AI at scale involves considerations such as flexible deployment options, competitive price-performance ratios and accessible AI technologies. Intel's robust x86 infrastructure and extensive open ecosystem position it to support enterprises in building high-value AI systems with an optimal TCO and performance per watt. Notably, 73% of GPU-accelerated servers use Intel Xeon as the host CPU.

Intel partners with leading OEMs including Dell Technologies and Supermicro to develop co-engineered systems tailored to specific customer needs for effective AI deployments. Dell Technologies is currently co-engineering RAG-based solutions leveraging Gaudi 3 and Xeon 6.

Bridging the Gap from Prototypes to Production with Co-Engineering Efforts
Transitioning generative AI (Gen AI) solutions from prototypes to production-ready systems presents challenges in real-time monitoring, error handling, logging, security and scalability. Intel addresses these challenges through co-engineering efforts with OEMs and partners to deliver production-ready retrieval-augmented generation (RAG) solutions.

These solutions, built on the Open Platform for Enterprise AI (OPEA), integrate OPEA-based microservices into a scalable RAG system optimized for Xeon and Gaudi AI systems, and are designed to let customers easily integrate applications built with Kubernetes, Red Hat OpenShift AI and Red Hat Enterprise Linux AI.
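As an illustration of the RAG pattern those microservices implement, here is a toy sketch in plain Python. The word-overlap retriever, the sample corpus, and the prompt template are illustrative stand-ins only, not OPEA APIs; a real deployment would use dedicated embedding, retrieval, and LLM-serving services.

```python
# Minimal retrieval-augmented generation (RAG) sketch: retrieve relevant
# documents, then augment the user's question with them before generation.

def retrieve(query, corpus, k=2):
    """Rank documents by naive word overlap with the query (a stand-in
    for vector-similarity search against an embedding store)."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, context_docs):
    """Prepend the retrieved context so the generative model answers
    from the supplied documents rather than from memory alone."""
    context = "\n".join(f"- {doc}" for doc in context_docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Gaudi 3 has 128 GB of HBM2e memory.",
    "Xeon 6 P-core CPUs support 12 memory channels.",
    "The cafeteria opens at 8 a.m.",
]
question = "How much memory does Gaudi 3 have?"
prompt = build_prompt(question, retrieve(question, corpus))
print(prompt)
```

The assembled prompt (context plus question) is what would be handed to the LLM-serving microservice in a production pipeline.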

Expanding Access to Enterprise AI Applications
Intel's Tiber portfolio offers business solutions to tackle challenges such as access, cost, complexity, security, efficiency and scalability across AI, cloud and edge environments. The Intel Tiber Developer Cloud now provides preview systems of Intel Xeon 6 for tech evaluation and testing. Additionally, select customers will gain early access to Intel Gaudi 3 for validating AI model deployments, with Gaudi 3 clusters to begin rolling out next quarter for large-scale production deployments.

New service offerings include SeekrFlow, an end-to-end AI platform from Seekr for developing trusted AI applications. The latest updates feature Intel Gaudi software's newest release and Jupyter notebooks loaded with PyTorch 2.4 and Intel oneAPI and AI tools 2024.2, which include new AI acceleration capabilities and support for Xeon 6 processors.

9 Comments on Intel Launches Gaudi 3 AI Accelerator and P-Core Xeon 6 CPU

#1
ncrs
It would be great if this news contained some slides from the press deck with specifications.

Those Xeon 6 are using Redwood Cove cores, same as Meteor Lake but with the extended configuration of full AVX-512, AMX and AMX-FP16.
Their TDP is 400 to 500W, 72 to 128 P-Cores, 12 channels of DDR5-6400 or MRDIMM-8800.

Phoronix has benchmarks of the 128-core model and it's impressive.
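Those channel counts and transfer rates imply the following peak theoretical bandwidth, assuming the usual 64-bit (8-byte) data path per DDR5 channel:

```python
# Peak theoretical memory bandwidth = channels x transfer rate x 8 bytes
# (64-bit data path per channel). Sustained real-world bandwidth is lower.

def peak_bandwidth_gbs(channels, megatransfers_per_s):
    return channels * megatransfers_per_s * 8 / 1000  # MT/s * 8 B -> GB/s

print(peak_bandwidth_gbs(12, 6400))  # 12ch DDR5-6400  -> 614.4 GB/s
print(peak_bandwidth_gbs(12, 8800))  # 12ch MRDIMM-8800 -> 844.8 GB/s
```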
Posted on Reply
#2
xSneak
These Gaudi "ai accelerators" seem to be the most interesting thing Intel is producing, yet I can't find any reviews, and apparently a Gaudi 2 server starts at $90k. :confused:
Posted on Reply
#3
Dristun
xSneak: These Gaudi "ai accelerators" seem to be the most interesting thing Intel is producing, yet I can't find any reviews, and apparently a Gaudi 2 server starts at $90k. :confused:
Yeah, and if you subtract the other parts and some rough Supermicro margin from the total server cost and divide the remainder by 8 (the server ships with 8x Gaudi 2s), the card itself ends up costing roughly on par with the RTX 6000 Ada! It's a shame they're not widely available; at least on paper it feels like killer pricing.
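The back-of-the-envelope math reads roughly like this. Note that only the $90k server price comes from the comment above; the ~$20k allowance for chassis, CPUs, memory, and vendor margin is a hypothetical placeholder, not a quoted figure.

```python
# Rough per-accelerator cost estimate from a full server price.
# ASSUMPTION: the $20k deducted for non-accelerator parts and margin
# is a hypothetical placeholder, not a real quote.

def per_card_cost(server_price, other_parts_and_margin, n_cards=8):
    return (server_price - other_parts_and_margin) / n_cards

print(per_card_cost(90_000, 20_000))  # 8750.0 per card for an 8-card server
```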
Posted on Reply
#4
efikkan
ncrs: Those Xeon 6 are using Redwood Cove cores, same as Meteor Lake but with the extended configuration of full AVX-512, AMX and AMX-FP16.
Their TDP is 400 to 500W, 72 to 128 P-Cores, 12 channels of DDR5-6400 or MRDIMM-8800.

Phoronix has benchmarks of the 128-core model and it's impressive.
Quite interesting.
I'm really looking forward to the Xeon W variants, and to seeing how they scale with workstation and consumer workloads. Hopefully we won't have to wait a full year.
Posted on Reply
#5
ncrs
efikkan: Quite interesting.
I'm really looking forward to the Xeon W variants, and to seeing how they scale with workstation and consumer workloads. Hopefully we won't have to wait a full year.
Both lines of Xeon W were refreshed as the 2500- and 3500-series not even a month ago, so I don't expect a Granite Rapids version any time soon. Intel didn't even upgrade Xeon W to Emerald Rapids with that refresh - it was just a bump in core counts and frequencies at the cost of increased TDPs.
Xeon 6 is a staggered release consisting of two platforms with 8- and 12-channels, and two types of CPUs with P- or E-cores.
The 8ch E-core launched already; this is the 12ch P-core. Next year they'll launch 8ch P-core and 12ch E-core versions, so I'm not sure they have the manufacturing capacity to support a new line of Xeon W at the same time.
Posted on Reply
#6
efikkan
ncrs: Both lines of Xeon W were refreshed as the 2500- and 3500-series not even a month ago, so I don't expect a Granite Rapids version any time soon. Intel didn't even upgrade Xeon W to Emerald Rapids with that refresh - it was just a bump in core counts and frequencies at the cost of increased TDPs.
Xeon 6 is a staggered release consisting of two platforms with 8- and 12-channels, and two types of CPUs with P- or E-cores.
The 8ch E-core launched already; this is the 12ch P-core. Next year they'll launch 8ch P-core and 12ch E-core versions, so I'm not sure they have the manufacturing capacity to support a new line of Xeon W at the same time.
The Xeon W 2500/3500 series are just a Sapphire Rapids refresh with slightly improved yields (and the refresh arrived almost a year late, too).
They come from the same production line as Sapphire Rapids-SP, just different bins. So it would be logical for Granite Rapids-SP to do the same, provided its volume is high enough to yield chips in the respective bins to supply a workstation lineup.
There are several "leaks" referring to a "W890" chipset, so I would expect that to be scheduled for late Q3 or Q4 next year, although I would hope for summer. :)
Posted on Reply
#7
ncrs
efikkan: The Xeon W 2500/3500 series are just a Sapphire Rapids refresh with slightly improved yields (and the refresh arrived almost a year late, too).
They come from the same production line as Sapphire Rapids-SP, just different bins. So it would be logical for Granite Rapids-SP to do the same, provided its volume is high enough to yield chips in the respective bins to supply a workstation lineup.
There are several "leaks" referring to a "W890" chipset, so I would expect that to be scheduled for late Q3 or Q4 next year, although I would hope for summer. :)
Oh they definitely have a plan to ship the next Xeon W platform. I just wasn't sure about the timing you meant - Q3/Q4 2025 sounds reasonable. It all depends on how well the Intel 3 process is doing, as you wrote.
Posted on Reply
#8
Frank_100
ncrs: Both lines of Xeon W were refreshed as the 2500- and 3500-series not even a month ago, so I don't expect a Granite Rapids version any time soon. Intel didn't even upgrade Xeon W to Emerald Rapids with that refresh - it was just a bump in core counts and frequencies at the cost of increased TDPs.
Xeon 6 is a staggered release consisting of two platforms with 8- and 12-channels, and two types of CPUs with P- or E-cores.
The 8ch E-core launched already; this is the 12ch P-core. Next year they'll launch 8ch P-core and 12ch E-core versions, so I'm not sure they have the manufacturing capacity to support a new line of Xeon W at the same time.
The w5-3525 looks really tempting. I just wish I could get a smaller motherboard than EATX or CEB.
Posted on Reply
#9
efikkan
Frank_100: The w5-3525 looks really tempting. I just wish I could get a smaller motherboard than EATX or CEB.
The ASRock W790D8UD-1L1N2T/BCM is the smallest that I know of ("deep micro-ATX"), and as far as I can see it only supports 4 memory channels, so a w5-3525 would be a bit wasted (use a 25xx instead).
Posted on Reply