Monday, October 25th 2021

AMD Readies MI250X Compute Accelerator with 110 CUs and 128 GB HBM2E

AMD is preparing an update to its compute accelerator lineup with the new MI250X. Based on the CDNA2 architecture and built on the existing 7 nm node, the MI250X will be accompanied by a more affordable variant, the MI250. According to leaks put out by ExecutableFix, the MI250X packs a whopping 110 compute units (7,040 stream processors), running at 1.70 GHz. The card features 128 GB of HBM2E memory and a package TDP of 500 W. As for speculative performance numbers, it is expected to offer double-precision (FP64) throughput of 47.9 TFLOP/s, the same figure at full precision (FP32), and 383 TFLOP/s at half precision (FP16 and BFLOAT16). AMD's MI200 "Aldebaran" family of compute accelerators is expected to square off against Intel's "Ponte Vecchio" Xe-HPC and NVIDIA's Hopper H100 accelerators in 2022.
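As a quick back-of-the-envelope check, the leaked throughput figures line up if one assumes the usual two FLOPs per stream processor per clock (one FMA) and that the stream-processor count covers both dies of the dual-die package (2 × 7,040 = 14,080); a minimal sketch:

# Sanity check of the leaked MI250X throughput numbers.
# Assumptions (not stated in the article): 2 FLOPs per SP per clock (FMA),
# and the SP count covering both dies of the package (2 x 7,040 = 14,080).
sps_per_die = 7040   # 110 CUs x 64 SPs
dies = 2
clock_ghz = 1.70

fp64_tflops = sps_per_die * dies * 2 * clock_ghz / 1000
print(f"FP64/FP32: {fp64_tflops:.1f} TFLOP/s")       # ~47.9, matches the leak
print(f"FP16/BF16: {fp64_tflops * 8:.1f} TFLOP/s")   # ~383, i.e. 8x the FP64 rate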
Sources: ExecutableFix (Twitter), VideoCardz

14 Comments on AMD Readies MI250X Compute Accelerator with 110 CUs and 128 GB HBM2E

#1
delshay
I know a lot of users disagree with HBM in normal GFX cards because of the cost, but yeah, I wish they'd dump GDDRx & move back to HBM.
Given the high price of today's GFX cards, they should come with HBM.
#2
Chomiq
Just call it a mining card and be done with it.
#3
Flyordie
delshay: I know a lot of users disagree with HBM in normal GFX cards because of the cost, but yeah, I wish they'd dump GDDRx & move back to HBM.
Given the high price of today's GFX cards, they should come with HBM.
I'm still rocking my HBM2-equipped graphics card, lol. 525 GB/s of bandwidth isn't bad for just 2 stacks of HBM2. Oh, and it's kept COOL: under 45 °C at all times.
#4
Daven
It's two chiplets on one package for 14,080 SPs. This is an EXTREMELY important part of the spec.
#5
delshay
Flyordie: I'm still rocking my HBM2-equipped graphics card, lol. 525 GB/s of bandwidth isn't bad for just 2 stacks of HBM2. Oh, and it's kept COOL: under 45 °C at all times.
You must have it overclocked, because it's 409.6 GB/s standard.


R9 Nano & Vega 56 Nano user/owner.
#6
Chrispy_
Chomiq: Just call it a mining card and be done with it.
Nah, mining cards are about ROI times, and this thing has too much HBM2 to be appealing to miners, because that much VRAM costs $$$$$$ that will take years to claw back by mining ETH. There aren't even years left to mine ETH, as mid-2022 is a realistic estimate of when ETH mining ends for good.

ETH needs about 6 GB of VRAM, so 122 GB of HBM2 would be wasted on this card. You're better off selling it to buy half a dozen RX 6800 cards and spending the leftover change on a holiday to Hawaii. Based on the generational cost increase and last year's Instinct accelerators, this card will likely cost $10-12K a pop.
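For the sake of argument, here's a rough payback sketch; the 500 W TDP comes from the article and the price from the $10-12K guess above, while the daily ETH revenue and electricity rate are purely hypothetical placeholders:

# Rough mining-payback sketch. Only the TDP (500 W, from the article) and the
# $10-12K price guess above are grounded; the revenue and power price are
# hypothetical placeholders for illustration.
card_cost_usd = 11000.0      # midpoint of the $10-12K guess
tdp_watts = 500.0            # from the article
electricity_usd_kwh = 0.10   # hypothetical
daily_revenue_usd = 10.0     # hypothetical gross ETH revenue per day

daily_power_cost = tdp_watts / 1000 * 24 * electricity_usd_kwh
payback_days = card_cost_usd / (daily_revenue_usd - daily_power_cost)
print(f"Payback: ~{payback_days:.0f} days (~{payback_days / 365:.1f} years)")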
#7
medi01
No mention of how it squares up against existing NVIDIA parts, so let me fill that gap:

NVIDIA A100
FP64: 9.7 TFLOPS
FP64 Tensor Core: 19.5 TFLOPS
FP32: 19.5 TFLOPS
delshay: I wish they'd dump GDDRx & move back to HBM.
Given that they can get away with slower VRAM and still be on par with the competition, that would not be wise.
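Putting the two sets of numbers side by side, the ratios of the leaked MI250X figures to those A100 numbers work out roughly like this:

# Ratios of the leaked MI250X figures to the A100 numbers quoted above
# (vector rates only; tensor-core and sparsity modes are a separate story).
mi250x_fp64 = 47.9
mi250x_fp32 = 47.9
a100_fp64 = 9.7
a100_fp64_tensor = 19.5
a100_fp32 = 19.5

print(f"FP64 vs A100 FP64:        {mi250x_fp64 / a100_fp64:.1f}x")         # ~4.9x
print(f"FP64 vs A100 FP64 Tensor: {mi250x_fp64 / a100_fp64_tensor:.1f}x")  # ~2.5x
print(f"FP32 vs A100 FP32:        {mi250x_fp32 / a100_fp32:.1f}x")         # ~2.5x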
#8
Flyordie
delshay: You must have it overclocked, because it's 409.6 GB/s standard.
Slightly. Vega 64s made with Samsung HBM2 are rated for 1,000 MHz; mine is just downclocked to 945 MHz.
medi01: Given that they can get away with slower VRAM and still be on par with the competition, that would not be wise.
I think he is just referring to the flagship cards: the 6900 XT and 6800 XT. HBM2 Aquabolt would suffice and would allow for lower power consumption and smaller PCBs (thereby also reducing e-waste in the long term).
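For reference, assuming the usual two-stack, 2,048-bit Vega-style HBM2 configuration, peak bandwidth is just bus width x double data rate x memory clock; the clocks below are the ones implied by the figures in this thread:

# Peak HBM2 bandwidth for a two-stack (2 x 1,024-bit) configuration.
# Clocks chosen to match the figures in this thread: 800 MHz gives the
# 409.6 GB/s "standard" number, 945 and 1,000 MHz are the clocks mentioned,
# and ~1,025 MHz is what the 525 GB/s figure would imply.
BUS_WIDTH_BITS = 2 * 1024

def hbm2_bandwidth_gbs(clock_mhz):
    # bits * 2 transfers per clock (DDR) * clock, converted to GB/s
    return BUS_WIDTH_BITS * 2 * clock_mhz * 1e6 / 8 / 1e9

for mhz in (800, 945, 1000, 1025):
    print(f"{mhz:4d} MHz -> {hbm2_bandwidth_gbs(mhz):.1f} GB/s")
# 800 -> 409.6, 945 -> 483.8, 1000 -> 512.0, 1025 -> 524.8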
#9
Chomiq
Flyordie: Slightly. Vega 64s made with Samsung HBM2 are rated for 1,000 MHz; mine is just downclocked to 945 MHz.

I think he is just referring to the flagship cards: the 6900 XT and 6800 XT. HBM2 Aquabolt would suffice and would allow for lower power consumption and smaller PCBs (thereby also reducing e-waste in the long term).
You're forgetting the part about HBM2 being more expensive than GDDR6.
#10
xkm1948
ROCm is still a shit show. I wonder when AMD will start seriously committing to software for their hardware. Without proper dev support, the hardware numbers are just numbers; they don't translate into productivity.
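As a small illustration of what "proper dev support" means day to day: on a ROCm build of PyTorch (assuming one is installed; details vary by version), the first thing you end up checking is whether the HIP backend actually sees the GPU:

# Minimal sanity check on a ROCm build of PyTorch (hypothetical setup):
# ROCm builds route the torch.cuda API through HIP, so torch.version.hip
# is set and torch.cuda.is_available() should report the AMD GPU.
import torch

print("HIP/ROCm build:", torch.version.hip is not None)
print("GPU visible:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))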
#11
Flyordie
Chomiq: You're forgetting the part about HBM2 being more expensive than GDDR6.
Not by much anymore, with the yield increases Samsung has gotten on HBM2 Aquabolt. We are looking at around $150-160 for two 4 GB HBM2 stacks, including the interposer. If we are gonna be paying premium prices for GPUs going forward, we should damn well be getting something for it: smaller cards, lower latency, higher efficiency.
#13
delshay
Flyordie: Not by much anymore, with the yield increases Samsung has gotten on HBM2 Aquabolt. We are looking at around $150-160 for two 4 GB HBM2 stacks, including the interposer. If we are gonna be paying premium prices for GPUs going forward, we should damn well be getting something for it: smaller cards, lower latency, higher efficiency.
Look at the launch price for four stacks on the Radeon VII, 699 USD (see the link below). Now look at what you're getting today and in future products.

AMD Radeon VII Specs | TechPowerUp GPU Database
#14
prtskg
It's 110 CUs per die, and there are two dies in the MI250s.