Wednesday, May 12th 2021

Intel Xe HP "Arctic Sound" 1T and 2T Cards Pictured

Intel has been extensively teasing its Xe HP scalable compute architecture for some time now, and Igor's Lab has an exclusive look at GPU compute cards based on the Xe HP silicon. We know from older reports that Intel's Xe HP compute accelerator packages come in three essential variants—1 tile, 2 tiles, and 4 tiles. A "tile" here is an independent GPU accelerator die. Each of these tiles has 512 execution units, which works out to 4,096 programmable shaders. The single-tile card is a compact, half-height card suited to 1U and 2U chassis. According to Igor's Lab, it comes with 16 GB of HBM2E memory offering 716 GB/s of memory bandwidth, and the single tile has 384 of its 512 EUs enabled (3,072 shaders). The card also has a typical board power of just 150 W.

The Arctic Sound 2T card is an interesting contraption. A much larger dual-slot card, easily over 28 cm long and fitted with a workstation spacer, the 2T uses a 2-tile variant of the Xe HP package, but each of the two tiles has only 480 of its 512 EUs enabled. This works out to 7,680 shaders. The dual-chiplet MCM carries 32 GB of HBM2E memory (16 GB per tile) and has a typical board power of 300 W. A single 4+4 pin EPS connector, rated for up to 225 W, powers the card.
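The shader counts quoted above follow directly from the ratio the article itself implies—512 EUs equalling 4,096 shaders means 8 shaders per EU. A quick sanity check of the reported figures:

```python
# Xe HP: 512 EUs = 4,096 shaders, i.e. 8 FP32 ALUs ("shaders") per EU.
SHADERS_PER_EU = 8

def shaders(eus_enabled, tiles=1):
    """Total programmable shaders across all enabled tiles."""
    return eus_enabled * SHADERS_PER_EU * tiles

assert shaders(512) == 4096           # full tile
assert shaders(384) == 3072           # 1T card: 384 of 512 EUs enabled
assert shaders(480, tiles=2) == 7680  # 2T card: 480 EUs per tile, two tiles
```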
Source: Igor's Lab

33 Comments on Intel Xe HP "Arctic Sound" 1T and 2T Cards Pictured

#1
80251
The memory bandwidth for Intel's HBM2e memory isn't very impressive considering what AMD did with their Radeon VII.
Posted on Reply
#2
shadow3401
Very nice looking GPU. Half height, low profile, passively cooled. Great for small form factor computers. If this product reaches retail Intel will have a sale from me.
Posted on Reply
#3
Sihastru
The impressiveness level depends on the number of HBM stacks.
Posted on Reply
#4
_Flare
Retail will be HPG, if I'm not mistaken.
Posted on Reply
#5
Mussels
Freshwater Moderator
Why 4+4 pin EPS?

Why not use PCI-E 8-pin? It's more prevalent on PSUs.
Posted on Reply
#6
Emanulele
MusselsWhy 4+4 pin EPS?

Why not use PCI-E 8-pin? It's more prevalent on PSUs.
Server PSUs have more EPS than PCI power connectors.
Posted on Reply
#7
zlobby
Is it me or 'Xe' sounds like a gendervoid demiqueer foxkin pronoun?
Posted on Reply
#8
Mussels
Freshwater Moderator
zlobbyIs it me or 'Xe' sounds like a gendervoid demiqueer foxkin pronoun?
I'm not sure where you're planning to go with that comment, so let's file it under "probably should leave it at that" okay?
Posted on Reply
#9
napata
shadow3401Very nice looking GPU. Half height, low profile, passively cooled. Great for small form factor computers. If this product reaches retail Intel will have a sale from me.
There's no way it's passively cooled.
Posted on Reply
#10
Valantar
80251The memory bandwidth for Intel's HBM2e memory isn't very impressive considering what AMD did with their Radeon VII.
The bandwidth makes it most likely that this is two 8GB stacks per tile, at ~2.8Gbps/pin. That just tells us that they aren't using pushed-to-the-limit HBM2e, likely for thermal reasons (HBM is efficient, but dense).
shadow3401Very nice looking GPU. Half height, low profile, passively cooled. Great for small form factor computers. If this product reaches retail Intel will have a sale from me.
napataThere's no way it's passively cooled.
It is. In a server chassis with a bank of 15,000 rpm screamers pointing at every passive heatsink in there. Not quite what 'passive' means in consumer PCs ;)

For consumer applications ... well, the HHHL card is 150W. Most GPU makers struggle to cool 75W cards silently with dual-slot HHHL coolers. There have been higher rated ones (up to 125W IIRC) in that form factor, but that's really pushing things. But this isn't coming to the consumer market. Period.
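Valantar's back-of-the-envelope figure can be reproduced: assuming two 1024-bit HBM2E stacks per tile (his reading, not confirmed by Intel), the quoted 716 GB/s works out to roughly 2.8 Gbps per pin—well short of top-bin HBM2E speeds. A sketch of that arithmetic:

```python
# Assumption (from the comment): two HBM2E stacks per tile, 1024 bits each.
bandwidth_gbps = 716 * 8        # 716 GB/s -> gigabits per second
bus_width_bits = 2 * 1024       # two 1024-bit HBM2E stacks
per_pin_gbps = bandwidth_gbps / bus_width_bits
assert round(per_pin_gbps, 1) == 2.8  # ~2.8 Gbps per pin
```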
Posted on Reply
#11
freeagent
But!

Does it run Crysis?

I hope so because since no one can get their shit together I will buy one if I have to.
Posted on Reply
#12
1d10t
No benchmarks? Come on Intel, I know you're good with numbers: put up some 50% higher frame rates than the MI100 and 50% higher clocks than the A100.
Posted on Reply
#13
Chrispy_
zlobbyIs it me or 'Xe' sounds like a gendervoid demiqueer foxkin pronoun?
As a Zekwhod Demicanadian Tickle-Mimic I find your lack of gendervoid demiqueer foxkin pronoun awareness upsetting.

(progressquest.com/play/)
Posted on Reply
#14
zlobby
MusselsI'm not sure where you're planning to go with that comment, so let's file it under "probably should leave it at that" okay?
Yeah, no insult towards the transgender community was intended. For the sake of not invoking a huge s**tstorm better leave it like that.
Posted on Reply
#15
Caring1
MusselsWhy 4+4 pin EPS?

Why not use PCI-E 8-pin? It's more prevalent on PSUs.
I think that is a typo as the picture shows a single 8 pin connector.
Posted on Reply
#16
IceShroom
A normal EPS 8-pin is rated for 336 W, not 225 W. So this card's power consumption is at least 250 W.
Posted on Reply
#17
Valantar
IceShroomNormal EPS 8 PIN is 336W not 225W. So this cards power comsumption is atleast 250W+.
The post says a TBP of 300 W ;) But you're right about that rating.
Caring1I think that is a typo as the picture shows a single 8 pin connector.
Or rather that the EPS spec is 4+4 at its base, regardless of whether the connector is split or not? But more to Mussels' point, as was mentioned above, server GPUs/accelerators/AICs typically use EPS rather than PCIe power cables.
Posted on Reply
#18
shadow3401
shadow3401Very nice looking GPU. Half height, low profile, passively cooled. Great for small form factor computers. If this product reaches retail Intel will have a sale from me.
napataThere's no way it's passively cooled.
If you can point out where the blades of a fan are in those pics of the Intel GPU, I will edit my post.
Posted on Reply
#19
Patriot
shadow3401If you can point out where the the blades of a fan are on those pics of the Intel GPU I will edit my post.
The fact that it lacks a fan does not make it fit for SFF... The fan for it is simply located at the front of the server chassis.
Server accelerators in general do not have fans and rely on high-static-pressure forced airflow from the chassis.
Those accelerators are generally 150-450 W...
While it is "passive" in the sense that it doesn't have a dedicated fan, you aren't cooling that in an SFF build without a waterblock.
As a rule of thumb, anything over ~15 W needs a fan. The baby GPU uses 150 W, and the big one uses 300 W.
Posted on Reply
#20
Valantar
shadow3401If you can point out where the the blades of a fan are on those pics of the Intel GPU I will edit my post.
It's been pointed out several times in the thread already that this is a server accelerator reliant on extreme levels of forced airflow from fans in the server chassis. So yes, it's passive by itself, but if you put this into any regular PC case with normal airflow it would overheat at the first sign of a load. You directly connected this being passive to SFF and you wanting to buy one (presumably for SFF, and presumably not an SFF server), which is why people are pointing out to you that this would never, ever work, and that you seem to have fundamentally misunderstood the product.
Posted on Reply
#21
Caring1
ValantarOr rather that the EPS spec is 4+4 at its base, regardless of whether the connector is split or not? But more to Mussels' point, as was mentioned above, server GPUs/accelerators/AICs typically use EPS rather than PCIe power cables.
What?
They are wired differently.
Care to explain?
Posted on Reply
#22
Patriot
Caring1What?
They are wired differently.
Care to explain?
8-pin EPS is the same as your mobo's 8-pin CPU power. And yes, it's wired differently: EPS has four 12 V and four ground pins, the opposite of PCIe, which has three power and five ground pins.
Posted on Reply
#23
Valantar
Caring1What?
They are wired differently.
Care to explain?
What is wired differently? PCIe and EPS? Yes. That is precisely the point. Server PSUs typically output EPS wiring due to it being four 12V pairs rather than three + two extra grounds like 8-pin PCIe, allowing it to handle higher currents. I was just pointing out that "4+4 EPS" can still be a single 8-pin connector - the base spec is a 4-pin connector, then an option for another 4 pins was added, and that combined 8-pin connector is used (in non-splittable form) in servers for powering AICs.
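The wattage figures traded in this thread fall straight out of those pinouts: an 8-pin EPS carries four 12 V/ground pairs, an 8-pin PCIe only three 12 V pins. A rough sketch—the per-pin current figures are assumptions (high-current Mini-Fit terminals are commonly rated around 7 A, which yields IceShroom's 336 W; the article's 225 W figure implies a more conservative ~4.7 A per pin):

```python
V = 12.0  # nominal rail voltage

def connector_watts(power_pins, amps_per_pin):
    """Continuous power through a connector's 12 V pins."""
    return power_pins * amps_per_pin * V

# EPS 8-pin: four 12 V pins at 7 A (HCS terminals) -> the 336 W figure
assert connector_watts(4, 7.0) == 336.0
# EPS 8-pin at a conservative ~4.69 A per pin -> the article's ~225 W
assert round(connector_watts(4, 4.69)) == 225
# PCIe 8-pin: three 12 V pins, but spec-limited to 150 W regardless
```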
Posted on Reply
#24
outpt
Any idea about cost? *cough*
Posted on Reply
#25
Vayra86
Nice specs.

Still no performance metrics. We've known for many years that Intel can make overpriced, non-competitive GPU designs, which they then abandon shortly after. All I am seeing here is a continuation, and a 'making it scalable', of their eternal IGP. So yay, you have lots of EUs now and you need pricey memory to feed them. It is nowhere near the level of refinement of the competition. All I smell here is something akin to company XYZ coming out with their own version of an ARM SoC.

The fact that Raja straight up jumped to four tiles as a starting point speaks volumes. It smells like that strange junkyard CPU range Intel is trying, with four dies glued together. A slight hint of bullshit coupled with smoke from electrical fires and some of Raja's hair. Their 2T is already running at below-optimal clocks, and 4T will be even worse if they plan on burning less than 1 kW per unit.

Next!
Posted on Reply