Tuesday, November 6th 2018

AMD Unveils World's First 7 nm GPUs - Radeon Instinct MI60, Instinct MI50

AMD today announced the AMD Radeon Instinct MI60 and MI50 accelerators, the world's first 7nm datacenter GPUs, designed to deliver the compute performance required for next-generation deep learning, HPC, cloud computing and rendering applications. Researchers, scientists and developers will use AMD Radeon Instinct accelerators to solve tough and interesting challenges, including large-scale simulations, climate change, computational biology, disease prevention and more.

"Legacy GPU architectures limit IT managers from effectively addressing the constantly evolving demands of processing and analyzing huge datasets for modern cloud datacenter workloads," said David Wang, senior vice president of engineering, Radeon Technologies Group at AMD. "Combining world-class performance and a flexible architecture with a robust software platform and the industry's leading-edge ROCm open software ecosystem, the new AMD Radeon Instinct accelerators provide the critical components needed to solve the most difficult cloud computing challenges today and into the future."
The AMD Radeon Instinct MI60 and MI50 accelerators feature flexible mixed-precision capabilities, powered by high-performance compute units that expand the types of workloads these accelerators can address, including a range of HPC and deep learning applications. The new AMD Radeon Instinct MI60 and MI50 accelerators were designed to efficiently process workloads such as rapidly training complex neural networks, delivering higher levels of floating-point performance, greater efficiencies and new features for datacenter and departmental deployments.

The AMD Radeon Instinct MI60 and MI50 accelerators provide ultra-fast floating-point performance and hyper-fast HBM2 (second-generation High-Bandwidth Memory) with up to 1 TB/s of memory bandwidth. They are also the first GPUs capable of supporting the next-generation PCIe 4.0 interconnect, which is up to 2X faster than other x86 CPU-to-GPU interconnect technologies, and feature AMD Infinity Fabric Link GPU interconnect technology that enables GPU-to-GPU communications up to 6X faster than PCIe Gen 3 interconnect speeds.

AMD also announced a new version of the ROCm open software platform for accelerated computing that supports the architectural features of the new accelerators, including optimized deep learning operations (DLOPS) and the AMD Infinity Fabric Link GPU interconnect technology. Designed for scale, ROCm allows customers to deploy high-performance, energy-efficient heterogeneous computing systems in an open environment.

"Google believes that open source is good for everyone," said Rajat Monga, engineering director, TensorFlow, Google. "We've seen how helpful it can be to open source machine learning technology, and we're glad to see AMD embracing it. With the ROCm open software platform, TensorFlow users will benefit from GPU acceleration and a more robust open source machine learning ecosystem."

Key features of the AMD Radeon Instinct MI60 and MI50 accelerators include:
  • Optimized Deep Learning Operations: Provides flexible mixed-precision FP16, FP32 and INT4/INT8 capabilities to meet growing demand for dynamic and ever-changing workloads, from training complex neural networks to running inference against those trained networks.
  • World's Fastest Double Precision PCIe 4.0 Accelerator: The AMD Radeon Instinct MI60 is the world's fastest double precision PCIe 4.0 capable accelerator, delivering up to 7.4 TFLOPS peak FP64 performance, allowing scientists and researchers to more efficiently process HPC applications across a range of industries including life sciences, energy, finance, automotive, aerospace, academics, government, defense and more. The AMD Radeon Instinct MI50 delivers up to 6.7 TFLOPS FP64 peak performance, while providing an efficient, cost-effective solution for a variety of deep learning workloads, as well as enabling high reuse in Virtual Desktop Infrastructure (VDI), Desktop-as-a-Service (DaaS) and cloud environments.
  • Up to 6X Faster Data Transfer: Two Infinity Fabric Links per GPU deliver up to 200 GB/s of peer-to-peer bandwidth - up to 6X faster than PCIe 3.0 alone - and enable the connection of up to 4 GPUs in a hive ring configuration (2 hives in 8-GPU servers).
  • Ultra-Fast HBM2 Memory: The AMD Radeon Instinct MI60 provides 32GB of HBM2 error-correcting code (ECC) memory, and the Radeon Instinct MI50 provides 16GB of HBM2 ECC memory. Both GPUs provide full-chip ECC and Reliability, Availability and Serviceability (RAS) technologies, which are critical for delivering more accurate compute results in large-scale HPC deployments.
  • Secure Virtualized Workload Support: AMD MxGPU Technology, the industry's only hardware-based GPU virtualization solution, which is based on the industry-standard SR-IOV (Single Root I/O Virtualization) technology, makes it difficult for hackers to attack at the hardware level, helping provide security for virtualized cloud deployments.
Updated ROCm Open Software Platform
AMD today also announced a new version of its ROCm open software platform designed to speed development of high-performance, energy-efficient heterogeneous computing systems. In addition to support for the new Radeon Instinct accelerators, ROCm software version 2.0 provides updated math libraries for the new DLOPS; support for 64-bit Linux operating systems including CentOS, RHEL and Ubuntu; optimizations of existing components; and support for the latest versions of the most popular deep learning frameworks, including TensorFlow 1.11, PyTorch (Caffe2) and others.

Availability
The AMD Radeon Instinct MI60 accelerator is expected to ship to datacenter customers by the end of 2018. The AMD Radeon Instinct MI50 accelerator is expected to begin shipping to datacenter customers by the end of Q1 2019. The ROCm 2.0 open software platform is expected to be available by the end of 2018.
Sources: Radeon Instinct MI60, Radeon Instinct MI50

45 Comments on AMD Unveils World's First 7 nm GPUs - Radeon Instinct MI60, Instinct MI50

#1
geon2k2
7.4 TFLOPS FP64 normally doubles for FP32, so 14.8 TFLOPS FP32.
This is pretty good, and pretty similar to the 16.3 produced by the Quadro RTX and slightly better than the 2080 Ti, which has 13.4 TFLOPS.

BTW, FP64 of the Quadro according to the TPU DB is 0.5 TFLOPS, so this thing will compete in 32-bit calculations but run circles around the green camp in 64-bit.
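The doubling rule of thumb above can be sanity-checked against the published specs: peak throughput is shaders × clock × FLOPs per cycle (2 for an FMA). A quick sketch, assuming the MI60's published 4096 stream processors and 1800 MHz peak clock:

```python
def peak_tflops(shaders: int, clock_mhz: float, flops_per_cycle: float = 2.0) -> float:
    """Peak throughput: shaders x clock x FLOPs retired per cycle (FMA counts as 2)."""
    return shaders * clock_mhz * 1e6 * flops_per_cycle / 1e12

fp32 = peak_tflops(4096, 1800)  # MI60: 4096 stream processors at 1800 MHz peak
fp64 = fp32 / 2                 # Vega 20 runs FP64 at a 1:2 rate of FP32
print(f"FP32 ~{fp32:.1f} TFLOPS, FP64 ~{fp64:.1f} TFLOPS")  # ~14.7 and ~7.4
```

which lands exactly on AMD's quoted 14.7 TFLOPS FP32 / 7.4 TFLOPS FP64 figures.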
Posted on Reply
#2
Recus
geon2k2: 7.4 TFlops, FP64, this normally doubles for FP32, so 14.8 Tflops for FP32.
This is pretty good, and pretty similar to 16.3 produced by Quadro RTX and slightly better than 2080Ti which has 13.4 TFlops.

BTW FP64 of the quadro according to TPU DB is 0.5TFlops, so this thing will compete in 32 bit calculations but run in circles around green camp in 64 bit.
Indeed, it will run in circles like a blind chicken without display outputs. Let's see how it performs against the V100 or T4.
Posted on Reply
#3
ZoneDymo
Recus: Indeed it will run in circles like blind chicken without display outputs. Let's see how it performs against V100 or T4.
I'm sorry, what?
Posted on Reply
#4
Steevo
Recus: Indeed it will run in circles like blind chicken without display outputs. Let's see how it performs against V100 or T4.
Nvidia V100 = 7.5 TFLOPS FP64
Nvidia T4 = 242 GFLOPS FP64

So this is equal to a V100, or about 31X faster in FP64 than a T4.
Posted on Reply
#5
DR4G00N
Dang, I wonder what kind of PPD these could get crunching Milkyway@Home with 7.5 TFLOPS FP64!
My three 7950's had around 3 TFLOPS put together. :laugh:
Posted on Reply
#6
Recus
ZoneDymo: Im sorry, what?
Datacenter vs content-creator GPU. It's a joke about a GPU without "eyes".
Posted on Reply
#7
Tomorrow
geon2k2: 7.4 TFlops, FP64, this normally doubles for FP32, so 14.8 Tflops for FP32.
This is pretty good, and pretty similar to 16.3 produced by Quadro RTX and slightly better than 2080Ti which has 13.4 TFlops.

BTW FP64 of the quadro according to TPU DB is 0.5TFlops, so this thing will compete in 32 bit calculations but run in circles around green camp in 64 bit.
If it were only so. AMD GCN has always had a pure FP32 throughput advantage over Nvidia but failed to convert it into a meaningful performance advantage in games. For example, the RX 580 has higher FP32 than the GTX 1060, and only after years of driver releases has it become as fast as a GTX 1060. Going by pure FP32 numbers, the RX 580 should compete with the GTX 1070.
Posted on Reply
#8
londiste
geon2k2: 7.4 TFlops, FP64, this normally doubles for FP32, so 14.8 Tflops for FP32.
This is pretty good, and pretty similar to 16.3 produced by Quadro RTX and slightly better than 2080Ti which has 13.4 TFlops.

BTW FP64 of the quadro according to TPU DB is 0.5TFlops, so this thing will compete in 32 bit calculations but run in circles around green camp in 64 bit.
AMD product/specs page is up: www.amd.com/en/products/professional-graphics/instinct-mi60
Other than FP64, it seems to be a straightforward die shrink of Vega 10. The main difference for the GPU itself is a 20% higher peak clock - 1800 MHz on the MI60 instead of 1500 MHz on the MI25. AMD's quoted performance difference is also 20%, which matches the specs exactly. Twice the memory on a twice-as-wide bus is the other difference.
FP64 at a 1:2 rate of FP32 is new; Vega 10 did not have that.

Btw, the 2080 Ti's 13.4 TFLOPS is at its specced boost clock of 1545 MHz... These usually boost higher than that. Vegas so far tend to boost below peak clock. We will have to wait and see how Vega 20 behaves.
The Quadro RTX 16.3 is the Quadro RTX 6000 number at 1770 MHz, which is probably a more realistic clock speed.
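Since the shader count is unchanged between the MI25 and MI60, the 20% figure above is simply the ratio of the peak clocks. A quick check under that assumption:

```python
mi25_clock_mhz, mi60_clock_mhz = 1500, 1800  # peak clocks, same 4096-shader layout
uplift = mi60_clock_mhz / mi25_clock_mhz - 1  # throughput scales linearly with clock
print(f"clock-for-clock uplift: {uplift:.0%}")  # 20%
```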
Posted on Reply
#9
Vya Domus
londiste: Btw, 2080Ti's 13.4 TFLOPs is at specced boost clock 1545 MHz... These usually boost more than that. Vegas so far tend to boost less than peak clock. We will have to wait and see how Vega20 behaves.
Quadro RTX 16.3 is Quadro RTX 6000 number at 1770 MHz which is probably a more realistic clock speed.
You are all comparing two products that operate in different markets and environments. Nvidia doesn't have a Turing-based Tesla equivalent, but if it did, its clocks would take a significant downgrade as well in order to reach an optimal point on the power consumption curve.
Posted on Reply
#10
Xzibit
Vya Domus: You are all comparing two products that operate within different markets and environments. Nvidia doesn't a have a Turing based Tesla equivalent, but if that would be the case it's clocks would suffer a significant downgrade as well in order to reach an optimum power consumption curve.
It should be compared to the Tesla V100 PCIe for now, then to whatever replaces it.
Posted on Reply
#11
Fluffmeister
geon2k2: 7.4 TFlops, FP64, this normally doubles for FP32, so 14.8 Tflops for FP32.
This is pretty good, and pretty similar to 16.3 produced by Quadro RTX and slightly better than 2080Ti which has 13.4 TFlops.

BTW FP64 of the quadro according to TPU DB is 0.5TFlops, so this thing will compete in 32 bit calculations but run in circles around green camp in 64 bit.
As mentioned already, the Tesla V100 offered all this quite some time ago and, from what I've been reading, within the same power envelope too... no 7nm tech required. So nothing too exciting really; in fact, rather disappointing.
Posted on Reply
#12
crazyeyesreaper
Not a Moderator
Hmm, same 300W board power between the 14nm MI25 and the 7nm MI60.

So same power draw with a large clock speed bump, going by the FP32 and FP16 numbers, works out to about a 17% uplift in theoretical performance.

Judging by how similar the Vega 64 / Frontier Edition is to the MI25, you can in theory apply that 17% to those GPUs, and that would be the perfect 100%-scaling best case for a 7nm Vega consumer GPU. In that best case, a 7nm Vega consumer card would likely land on par with a stock 1080 Ti / 2070, maybe a 2080 in AMD-centric games.

That performance is not really good enough, so I would expect AMD to skip 7nm Vega for Navi on the consumer front.
Posted on Reply
#13
Steevo
Fluffmeister: As mentioned already, Tesla V100 offered all this already quite some time ago, and from what I've been reading all within the same power envelope too.... no 7nm tech required, so nothing too exciting really, in fact rather disappointing.
March 2018 is a long time ago? They talked about it in December 2017, but it was only soft-launched in March 2018, from what I know.
Posted on Reply
#14
Fluffmeister
Steevo: March 2018 is a long time ago? They talked about it in December of 2017, but it was only soft launched in March 2018 from what I know.
And? I take it you take issue with this news. :laugh:
Posted on Reply
#15
M2B
geon2k2: 7.4 TFlops, FP64, this normally doubles for FP32, so 14.8 Tflops for FP32.
This is pretty good, and pretty similar to 16.3 produced by Quadro RTX and slightly better than 2080Ti which has 13.4 TFlops.

BTW FP64 of the quadro according to TPU DB is 0.5TFlops, so this thing will compete in 32 bit calculations but run in circles around green camp in 64 bit.
The RTX 2070, with 7.5 TFLOPS of FP32 performance and less memory bandwidth, manages to beat the RX Vega 64, which has 12.5 TFLOPS of FP32, by 10-15 percent. So yeah...
Posted on Reply
#16
Steevo
Fluffmeister: And? I take it you take issue with this news. :laugh:
Really, I hate the "First PCIe 4.0 7 TFLOPS bla bla bla" framing. If it weren't for PCIe 4.0 they couldn't say that, and PCIe 4.0 is currently unsupported in practice and isn't going to be supported before this card hits the market, so the spin proves they worded it very carefully. I would rather just see the numbers: is it going to take less than the 300W Nvidia uses to get the same 7 TFLOPS performance? Is it going to do some other fancy, faster math? Is it going to do something more, or the same at a lower cost?

AMD is trying really hard in the server market, but I don't think 2018 will be their year to take any crown, and neither will 2019. Maybe 2020, if they keep up with Zen 2 and Navi is impressive. But that will also require thousands of hours writing the tools to make their cards faster, or to make same-speed cards as fast and easy to use. If Lisa is in the know, she will already have people working on that; if not, we will know the reason they failed. AMD has a history of vaporware issues, building hardware for software that isn't ready, or software with a great implementation of either ease of use, speed, or functionality - and you can only choose one...

I think AMD is playing their cards right for the midsize shops where a few IT guys run the show and want to save thousands to put into software development for long-life peak performance. They will survive, and their prosumer, gaming and server businesses will work out in the end, though they will never be as big as Nvidia or Intel. It's the same reason the Ford F-150 sells so many shoddy trucks: it's the king, the classic standard, equal to the neighbors'. AMD is the full-featured but still slightly odd Holden, the loud and hot Corvette versus the supercars. It's second dog in the CPU and GPU business mostly due to mismanagement. I'm just glad they are here to keep us from paying the thousands more that Intel and Nvidia would charge if they could.
Posted on Reply
#17
jabbadap
Fluffmeister: As mentioned already, Tesla V100 offered all this already quite some time ago, and from what I've been reading all within the same power envelope too.... no 7nm tech required, so nothing too exciting really, in fact rather disappointing.
The Quadro GV100 has the same TFLOPS at a 250W TDP. Quite saddening that AMD needs to drive a 7nm ~331mm² chip at a 300W TDP to equal that. They really need a new arch.
Posted on Reply
#18
ShurikN
jabbadap: Quadro GV100 has the same TFLops and 250W TDP, quite saddening that amd needs to drive 7nm ~331mm² chip at 300W tdp to equal that. They really need new arch.
That would only make sense if the GV100 were the same size. But it isn't; it's 2.5 times larger.
Posted on Reply
#19
Zubasa
crazyeyesreaper: Judging how similar the Vega 64 / Frontier edition is to the MI25 you can in theory apply 17% to those GPUs and that would be the perfect 100% scaling best case scenario for a 7nm Vega consumer GPU. In that best case scenario a 7nm VEGA consumer card would likely result in performance on par with a stock 1080 Ti / 2070 maybe a 2080 in AMD centric games.

That performance is not really good enough. So I would expect AMD to skip Vega 7nm for NAVI on the consumer front.
Vega 64 is already around the same performance level as the 2070,
because the 2070 is around 1080 non-Ti level.
Posted on Reply
#20
londiste
Steevo: March 2018 is a long time ago? They talked about it in December of 2017, but it was only soft launched in March 2018 from what I know.
Tesla V100 came in summer 2017.
Posted on Reply
#21
sergionography
Steevo: Really I hate the "First PCIe 4.0 7Tflop bla bla bla" if it weren't for the PCIE 4.0 they couldn't say that, and PCIe 4.0 is currently unsupported in reality and isn't going to be supported before this card hits the market, so the spin on this proves they very carefully worded it and I would rather just see the numbers, is it going to take less than the 300W the Nvidia uses to get the same 7Tflop performance? Is it going to do some other fancy faster math? Is it going to do something more or the same at a lower cost.

AMD is trying really hard in the server market, but I don't think 2018 will be their year to take any crown, and neither will 2019. Maybe 2020 if they keep up with Zen2 and Navi is impressive. But that will also require thousands of hours to write the tools to make their supposed cards faster, or to make the same speed cards as fast and easy to use, which if Lisa is in the know she will already have people working on, but if not we will know the reason why they fail. Given AMD's vaporware issues, where they build hardware for software that isn't ready, or software that has great implementation of either ease of use, speed, or functionality, and you can only choose one.......
They don't need to win the crown; they just need to increase their market share and become relevant in that space. Even with Epyc being far superior to anything Intel has to offer, it's impractical and delusional to expect it to grab market share beyond what Intel already capitalizes on, since business investments in such platforms take a long time to plan. This Instinct card also isn't made to win crowns in the first place; it simply extends their Epyc portfolio, so whoever invests in Epyc can have an extensive choice of solutions if they contract AMD. To win crowns and compete on market share, AMD needs to keep this trend of a competitive portfolio going consistently.

But one thing that AMD does deserve credit for is that for the past couple of years they have been doing an excellent job executing their moves, and seem to be headed in the right direction.
Posted on Reply
#22
medi01
ZoneDymo: Im sorry, what?
team overpriced green fan got bh over your comment.
jabbadap: Quadro GV100 has the same TFLops and 250W TDP, quite saddening that amd needs to drive 7nm ~331mm² chip at 300W tdp to equal that
So saddening that AMD needs a 331mm² 7nm 300W chip to take on a 300W 815mm² 12nm chip.
Posted on Reply
#23
londiste
medi01: So saddening AMD needs 331mm² 7nm 300W chip to take on 300W 815mm² 12nm chip.
AMD's comparison's on the slides are with the PCIe V100 - a 250W TDP card.

Conveniently, their comparisons are marketing-material worthy. For example, ResNet-50 training: the V100's 357 vs the MI60's 334 images per second, where the MI60 has "comparable performance". I wonder what a GPU could do if it had spent some die space on dedicated hardware units for something like that. Let's call these hardware units, say, Tensor Cores? Nvidia's ResNet-50 training numbers for the V100 are in the same range on CUDA cores and 1000-ish on Tensor Cores :D
Posted on Reply
#24
medi01
londiste: Conveniently... [mental gymnastics on nVidia greatness]...
Yes, Huang is great, Amen to that.

But back to the point: it's a 337mm² 7nm chip vs an 815mm² 12nm chip, both with similar TDP; there is nothing to be sad about.
londiste: die space to add dedicated hardware units for something
Yeah. For something. But apparently that's not much die space: the GV100 has 1.4 times more CUDA cores with a 33% bigger die (and only a slightly improved process).

londiste: 1000-ish on Tensor Cores
Yeah, brought to you by "1060 is muh faster than 480". Actual tests tell a different story.

Posted on Reply
#25
Vya Domus
londiste: Conveniently, their comparisons are marketing material worthy.
Conveniently, tensor ops do not come by default when running software; they have to be specifically programmed in. It's also not as scalable as Nvidia makes it out to be. As far as out-of-the-box performance metrics go, it's a fair comparison.
Posted on Reply