Tuesday, August 18th 2020

Raja Koduri Previews "PetaFLOPs Scale" 4-Tile Intel Xe HP GPU

Raja Koduri, Intel's chief architect and senior vice president of its discrete graphics division, today held a talk at Hot Chips 32, this year's online edition of the conference, showing off the latest architectural advancements in the semiconductor industry. Intel prepared two talks: one about Ice Lake-SP server CPUs and one about its efforts toward the upcoming graphics card launch. So what has Intel been working on all this time? Raja Koduri took over the talk, benchmarked the upcoming GPU, and showed how much raw power the GPUs possess, possibly counted in PetaFLOPs.

When Mr. Koduri got to talk, he pulled the 4-tile Xe HP GPU out of his pocket and showed for the first time what the chip looks like. And it is one big chip. Featuring four tiles, the GPU represents Intel's fastest and biggest variant of Xe HP. The benchmark Intel ran was designed to show off scaling on the Xe architecture and how increasing the number of tiles results in a near-linear increase in performance. Running on a single tile, the GPU managed a performance of 10588 GFLOPs, or around 10.588 TeraFLOPs. With two tiles, the performance scales almost perfectly at 21161 GFLOPs (21.161 TeraFLOPs) for a 1.999x improvement. At four tiles the GPU achieves 3.993x scaling and scores about 42277 GFLOPs, or roughly 42.277 TeraFLOPs, all measured in single-precision FP32.
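Taking the demo figures at face value, the scaling arithmetic is easy to verify. The short Python sketch below checks the two-tile speedup and what the quoted 3.993x four-tile scaling implies in absolute terms (the numbers are the ones quoted from the demo, not independently confirmed):

```python
# Quoted Xe HP tile-scaling demo figures, single-precision FP32, in GFLOPS.
single_tile = 10588
two_tile = 21161

# Two-tile speedup over one tile: almost perfect 2x.
speedup = two_tile / single_tile
print(f"2-tile scaling: {speedup:.3f}x")  # -> 1.999x

# What the quoted 3.993x four-tile scaling implies in absolute terms.
four_tile = single_tile * 3.993
print(f"4-tile estimate: {four_tile / 1000:.2f} TFLOPS")  # -> ~42.28 TFLOPS
```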
Intel Xe HP GPU Demo
Mr. Koduri mentioned that the 4-tile chip is capable of "PetaFLOPs performance", which means the GPU is going to be incredibly fast for tasks like machine learning and AI. Given that the GPU supports tensor operations, if we take its 2048 execution units (EUs), each capable of performing 128 operations per cycle, and assume about 2 FMA (Fused Multiply-Add) units per EU, that works out to 2048 × 128 × 2 = 524,288 operations per clock of AI throughput. This means the GPU needs to be clocked at at least 2 GHz to reach the PetaOPs target (524,288 ops × 2 GHz ≈ 1.05 Peta-operations per second), or offer more than 128 operations per cycle per EU.
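That back-of-the-envelope estimate can be sketched in a few lines of Python. Note that the EU count, ops-per-cycle, and FMA figures are the article's assumptions, not confirmed specifications:

```python
# Assumed figures from the article's estimate (not confirmed specs).
eus = 2048            # execution units across all four tiles
ops_per_cycle = 128   # low-precision ops per EU per cycle
fma_units = 2         # FMA (Fused Multiply-Add) units per EU

ops_per_clock = eus * ops_per_cycle * fma_units
print(ops_per_clock)  # -> 524288 operations per clock

# At a 2 GHz clock, that crosses the Peta-operations-per-second threshold.
clock_hz = 2e9
peta_ops = ops_per_clock * clock_hz / 1e15
print(f"{peta_ops:.2f} PetaOPs")  # -> 1.05 PetaOPs
```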
Source: Tom's Hardware

32 Comments on Raja Koduri Previews "PetaFLOPs Scale" 4-Tile Intel Xe HP GPU

#26
mtcn77
stimpy88: Him leaving AMD was the best thing that has happened to them since Lisa Su and the Zen architecture.
The guy literally aimed at destroying Mr. Eric Demers' career. I happen to be a fan of him.
I hope the industry sees a comeback until the score is settled...
Intel's main advantage is EMIB vs. TSV (through-silicon via) architecture. Intel's is clearly better, though AMD has taken great strides and knows the ins and outs of the technology very clearly. AMD can rain on Intel's parade anytime an opportunity presents itself.
www.techpowerup.com/245521/on-the-coming-chiplet-revolution-and-amds-mcm-promise
Posted on Reply
#27
PowerPC
stimpy88: Him leaving AMD was the best thing that has happened to them since Lisa Su and the Zen architecture.
I never said anything about what him leaving meant for AMD.

I only said his expression in this picture says it all about what he probably feels now about this move. He left way before it was clear that Intel was going under and AMD was rising over them. Kinda strange that you have to point out something from my post that I never argued.
Posted on Reply
#28
stimpy88
PowerPC: I never said anything about what him leaving meant for AMD.

I only said his expression in this picture says it all about what he probably feels now about this move. He left way before it was clear that Intel was going under and AMD was rising over them. Kinda strange that you have to point out something from my post that I never argued.
I answered your opinion of him, with my own opinion of him.
Posted on Reply
#29
dragontamer5788
Blueberries: Having linear scalability is WILD, and 10.5 TFLOPS on a single chipset is nothing to scoff at.

I'll rehash what I said when Xe was announced: if Intel doesn't provide a competitive product with their initial launch, they absolutely will with their third or fourth generation.
Their track record with Itanium and Xeon Phi would say otherwise.

I mean heck, Xe could arguably be the continuation of Larrabee / Xeon Phi, since it's simply Intel's next coprocessor. Granted, they're starting over from scratch on this one (or at least, from their Gen11 architecture), but this isn't the first time Intel has tried to enter the high-end coprocessor market.
Posted on Reply
#30
Mescalamba
Vayra86: What strikes me with Intel in all of their new developments is the lack of focus on scalability in terms of yields. Nowhere can we see a straight copy of the idea of chiplets that are as small as possible. They're still trying to make big, complicated stuff. Even these tiled GPUs are humongous. They're also differentiating everything all over the place with a myriad of product lines and tweaks... it's like they literally don't WANT to make an efficient, single product stack and derive new products from it - they just build a whole new one for every little segment. The wide variety of core configurations alone... wtf.

Looks like old ideas desperately trying to keep themselves relevant, despite ever-increasing foundry challenges. It's like they love to repeat 10 nm. Intel seems to be adamant that extreme specialization and tweaking is the way forward... but isn't that a dead end, ultimately, and probably pretty soon?
I think their decisions are dictated by their marketing department, not the development one.

It's basically looking like Kodak, which tried really hard to pi** against the wind, only to capitulate later; by then the train had already left the station, and Kodak left the building a bit later too.

Trying to force any market to do whatever you want is a really, really stupid idea. Much like mankind trying to do the same with nature. It never worked and never will. And it always comes back and bites the bottom of anyone who tries it.
Posted on Reply
#31
IopaNalop
What is the logic of the title? A PetaFLOP is 1000 TeraFLOPs, but the 21161 GFLOPS (21.161 TeraFLOPs) in the post is nothing close to 1000 TeraFLOPs!
Posted on Reply
#32
dragontamer5788
IopaNalop: What is the logic of the title? A PetaFLOP is 1000 TeraFLOPs, but the 21161 GFLOPS (21.161 TeraFLOPs) in the post is nothing close to 1000 TeraFLOPs!
There are 2048 execution units, and each can perform 8xFP32 operations per clock cycle.

However, 4-bit Neural Networks do exist. If the GPU provides 8x FMA instructions per FP32 unit, that's 128 4-bit Tensor ops per cycle. Leading to...
Mr. Koduri has mentioned that the 4-tile chip is capable of "PetaFLOPs performance", which means the GPU is going to be incredibly fast for tasks like machine learning and AI. Given that the GPU supports tensor operations, if we take its 2048 execution units (EUs), each capable of performing 128 operations per cycle, and the fact that there are about 2 FMA (Fused Multiply-Add) units per EU, that equals about 524,288 operations per clock of AI power. This means that the GPU needs to be clocked at at least 2 GHz to achieve the PetaOPs performance target, or have more than 128 operations per cycle per EU.
Roughly 1 PetaFLOP. Except it's not a "FLOP", it's an "IOP" (integer op), and only 4-bit at that. (Unless there's some 4-bit floating-point unit that I haven't heard of before...) It's a stretch for sure, but neural nets are popular enough that it might be realistic for one or two customers out there...
Posted on Reply
