Tuesday, June 4th 2019

AMD Announces the Radeon Pro Vega II and Pro Vega II Duo Graphics Cards

AMD today announced the Radeon Pro Vega II and Pro Vega II Duo graphics cards, making their debut with the new Apple Mac Pro workstation. Based on an enhanced 32 GB variant of the 7 nm "Vega 20" MCM, the Radeon Pro Vega II maxes out its GPU silicon, with 4,096 stream processors, a 1.70 GHz peak engine clock, 32 GB of 4096-bit HBM2 memory, and 1 TB/s of memory bandwidth. The card features both PCI-Express 3.0 x16 and Infinity Fabric interfaces. As its name suggests, the Pro Vega II is designed for professional workloads, and comes with certifications for nearly all professional content creation applications.
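Those headline figures are internally consistent, as a quick back-of-the-envelope check shows. The sketch below (in Swift, to match the platform) assumes the standard 2-ops-per-clock FMA convention for peak FLOPS and the roughly 2 Gbps per-pin HBM2 rate that the 1 TB/s figure implies; neither number is AMD-published.

```swift
// Back-of-the-envelope check of the headline specs.
// Assumption: FP32 peak counts 2 ops (one FMA) per stream processor
// per clock, the usual convention for quoting GPU TFLOPS.
let streamProcessors = 4_096.0
let peakClockGHz = 1.70
let peakTFLOPS = streamProcessors * 2 * peakClockGHz / 1_000
print("Peak FP32: \(peakTFLOPS) TFLOPS")  // ~13.9 TFLOPS

// Assumption: ~2.0 Gbps per pin, which is what 1 TB/s implies on a 4096-bit bus.
let busWidthBits = 4_096.0
let perPinGbps = 2.0
print("Bandwidth: \(busWidthBits * perPinGbps / 8) GB/s")  // 1,024 GB/s, i.e. ~1 TB/s
```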

The Radeon Pro Vega II Duo is the first dual-GPU graphics card from AMD in ages. Purpose-built for the Mac Pro (and available on the Apple workstation only), this card puts two fully unlocked "Vega 20" MCMs with 32 GB of HBM2 memory each on a single PCB. The card uses a bridge chip to connect the two GPUs to the system bus, but in addition has an 84.5 GB/s Infinity Fabric link running between the two GPUs, for rapid memory access, GPU and memory virtualization, and interoperability between the two GPUs, bypassing the host system bus. In addition to certifications for every conceivable content creation suite on the macOS platform, AMD dropped in heavy optimization for Apple's Metal 3D graphics API. For now, the two graphics cards are only available as options for the Apple Mac Pro. The single-GPU Pro Vega II may see standalone product availability later this year, but the Pro Vega II Duo will remain a Mac Pro exclusive.
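For software, the interesting question is how the Duo's two GPUs are exposed. Metal on macOS 10.15 and later reports linked GPUs through peer groups (the MTLDevice properties peerGroupID, peerIndex, and peerCount); the Swift sketch below shows how an application could enumerate them. Whether the Duo actually populates a peer group is driver behavior we are assuming here, so treat this as illustrative rather than Apple reference code.

```swift
import Metal

// Enumerate all GPUs and report Metal peer-group membership. On a Mac Pro
// with a Pro Vega II Duo, the two on-card GPUs would be expected to share a
// nonzero peerGroupID, enabling peer-to-peer transfers over the Infinity
// Fabric link instead of the host PCIe bus.
for device in MTLCopyAllDevices() {
    if device.peerGroupID != 0 {
        print("\(device.name): GPU \(device.peerIndex + 1) of \(device.peerCount) in peer group \(device.peerGroupID)")
    } else {
        print("\(device.name): standalone GPU")
    }
}
```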

41 Comments on AMD Announces the Radeon Pro Vega II and Pro Vega II Duo Graphics Cards

#1
ratirt
Dual GPU. Now that is a squeaker. I thought AMD would never go dual GPU. Infinity Fabric between chips, hmm, I wonder how that is going to pan out. It uses HBM2, though, so I wonder what the latency will be, since Ryzen had some latency issues, especially the 1st gen. Hope the GPU can avoid such delays.
I assume this is only for Mac and we are not talking about a PC alternative of the same GPU?
#2
TheLostSwede
News Editor
I'm curious as to what the connectors on the top are. I thought AMD had done away with CrossFire bridges...

#4
cucker tarlson
TheLostSwede: I'm curious as to what the connectors on the top are. I thought AMD had done away with CrossFire bridges...

Infinity Fabric, it says so. They used an IF bridge for quad MI60s already.
#5
Tsukiyomi91
This sounds like the successor to the beastly R9 295X2 and HD 7990. Kinda sad this is a card specific to the Apple "PC"....
#7
Xzibit
TheLostSwede: I'm curious as to what the connectors on the top are. I thought AMD had done away with CrossFire bridges...
Here you go.
The graphics card pulls its power entirely from a standard PCIe x16 slot, which is capable of 75 W, and Apple's new proprietary PCIe connector that can supply up to 475 W.

The graphics cards communicate with each other through AMD's Infinity Fabric Link connection for an aggregate bandwidth up to 84 GB/s per direction.
#8
TheLostSwede
News Editor
cucker tarlson: Infinity Fabric, it says so. They used an IF bridge for quad MI60s already.
Uhm, no, that's the blue outline between the chips...
But it might well be between cards as well.
#11
TheLostSwede
News Editor
Vycyous: Power connectors. I don't see any conventional 8- or 6-pin PCIe power connectors. Although, maybe they really are drawing all the power from the top left one in the photo, like it says.
I guess you didn't read the spec. See the front part of the rear PCIe connector; that one delivers 475 W of power, hence why it looks quite different.
#12
Vycyous
TheLostSwede: I guess you didn't read the spec. See the front part of the rear PCIe connector; that one delivers 475 W of power, hence why it looks quite different.
I saw that, but I almost found it hard to believe, considering that two 8-pin PCIe connectors plus PCIe slot power are limited (in spec) to 375 watts. However, it looks like a fairly substantial connector, so I tried editing my post as quickly as I could, which is why you see the part where I said "Although, maybe they really are drawing all the power from the top left one in the photo, like it says." After which, I just deleted the entire reply.
#13
Assimilator
TheLostSwede: I'm curious as to what the connectors on the top are. I thought AMD had done away with CrossFire bridges...

No idea of the specifications of this "extended PCIe" they've invented, but considering it's roughly the same physical size as regular PCIe yet can supply over 6x the power, it's possible this has no data pins. Thus in a multi-card configuration (if that's even possible), the cards would need to talk to each other over the PCIe 3.0 bus, which would be severely limiting (in terms of both latency and bandwidth) compared to the Infinity Fabric link between the on-card GPUs. In that case the only moderately feasible solution would be a direct card-to-card link a la SLI or CrossFire.

As for AMD doing away with CF, it seems to me that this card throws all of the industry norms out the window, so I wouldn't read too much into its design in regards to more consumer-oriented products.
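For scale, that gap is roughly five-fold; a quick sketch, assuming the standard ~985 MB/s-per-lane rate of PCIe 3.0 after 128b/130b encoding, against the 84.5 GB/s per-direction figure quoted for the Infinity Fabric link:

```swift
import Foundation

// PCIe 3.0 x16 vs. the on-card Infinity Fabric link, per direction.
let pcie3x16 = 16 * 0.985     // ~15.8 GB/s: 16 lanes at ~985 MB/s each
let infinityFabric = 84.5     // GB/s per direction, the figure quoted above
print(String(format: "IF link is ~%.1fx PCIe 3.0 x16", infinityFabric / pcie3x16))
// prints: IF link is ~5.4x PCIe 3.0 x16
```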
#14
TheLostSwede
News Editor
Vycyous: I saw that, but I almost found it hard to believe, considering that two 8-pin PCIe connectors plus PCIe slot power are limited (in spec) to 375 watts. However, it looks like a fairly substantial connector, so I tried editing my post as quickly as I could, which is why you see the part where I said "Although, maybe they really are drawing all the power from the top left one in the photo, like it says." After which, I just deleted the entire reply.
It does indeed look a bit too good to be true, but if you look closely at the renders, it seems the slot has a dozen contacts and that part of the card is wider than the short segment of the PCIe connector, so it seems like it should be possible. I like the design and would like to see it in PCs, but that's highly unlikely.

Note the weird little "plastic" blocks up front too, labelled with an exclamation mark and 1-2, 3-4 and 5-8. They look suspiciously like something to do with power as well.

#15
HwGeek
Vycyous: I saw that, but I almost found it hard to believe, considering that two 8-pin PCIe connectors plus PCIe slot power are limited (in spec) to 375 watts. However, it looks like a fairly substantial connector, so I tried editing my post as quickly as I could, which is why you see the part where I said "Although, maybe they really are drawing all the power from the top left one in the photo, like it says." After which, I just deleted the entire reply.
Looks like they have gone with the same solution that server PSUs use:

#16
TheDeeGee
I don't want to know the noise levels.
#17
I No
TheDeeGee: I don't want to know the noise levels.
That's just the icing on the cake compared to those 2 nuclear reactors on it..... Well, if you look at the bright side, you won't have to pay for heating. It's not like Macs are running cool these days anyway, so why not add more...
#18
ShurikN
ratirt: Dual GPU. Now that is a squeaker. I thought AMD would never go dual GPU.
For gaming they said no. But for compute-heavy workloads, dual GPU is great. They even said so when discussing the MCM approach to GPUs. There was an interview with David Wang a while ago on the subject.
#19
ratirt
ShurikN: For gaming they said no. But for compute-heavy workloads, dual GPU is great. They even said so when discussing the MCM approach to GPUs. There was an interview with David Wang a while ago on the subject.
It is Infinity Fabric. Maybe for gaming, sooner or later, it will be OK. I recall something different: AMD stated that improving gaming performance with a monolithic chip will get harder every year, and since we want to move forward with gaming advancement, we will need dual GPUs.
BTW, you can still game on that GPU no problem.
#20
Assimilator
TheLostSwede: I like the design and would like to see it in PCs, but that's highly unlikely.
Please no, we don't need a repeat of EISA slots that were as long as the motherboard is wide. The issue is with 12 V; what is needed is for the industry to migrate to higher multiples of it (24 V, 36 V) to bring down the high amperages, thick traces, and cables necessitated by such a low voltage. High amperage is also far more dangerous than high voltage.
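The arithmetic behind that point is just I = P / V; a quick sketch using the 475 W connector rating discussed in this thread (the 24 V and 36 V rails are the hypothetical alternatives suggested above, not anything announced):

```swift
import Foundation

// Current draw at a fixed power budget scales inversely with rail voltage.
let powerW = 475.0            // the connector rating discussed in this thread
for volts in [12.0, 24.0, 36.0] {
    print(String(format: "%2.0f V rail: %4.1f A", volts, powerW / volts))
}
// 12 V rail: 39.6 A
// 24 V rail: 19.8 A
// 36 V rail: 13.2 A
```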
#21
ShurikN
ratirt: It is Infinity Fabric. Maybe for gaming, sooner or later, it will be OK. I recall something different: AMD stated that improving gaming performance with a monolithic chip will get harder every year, and since we want to move forward with gaming advancement, we will need dual GPUs.
BTW, you can still game on that GPU no problem.
Yeah, of course you can, it's just that more often than not the game engine will recognize only one GPU and not both. That was the biggest issue they talked about. Until they can make multiple chips appear as one to the API and engines, they will not pursue it, as they can't hope that the devs will optimize their games for that setup. To quote him, devs see it as a burden.
They are definitely looking at it for gaming though, so these kinds of cards could give them some insight. Besides, we all know that when SLI/CrossFire works, it becomes an amazing thing.

www.pcgamesn.com/amd-navi-monolithic-gpu-design
“To some extent you’re talking about doing CrossFire on a single package,” says Wang. “The challenge is that unless we make it invisible to the ISVs [independent software vendors] you’re going to see the same sort of reluctance.”

Does that mean we might end up seeing diverging GPU architectures for the professional and consumer spaces to enable MCM on one side and not the other?

“Yeah, I can definitely see that,” says Wang, “because of one reason we just talked about, one workload is a lot more scalable, and has different sensitivity on multi-GPU or multi-die communication. Versus the other workload or applications that are much less scalable on that standpoint. So yes, I can definitely see the possibility that architectures will start diverging.”
#22
Darmok N Jalad
TheDeeGee: I don't want to know the noise levels.
I believe these custom cards are four slots tall and run the full length of the case, so they have massive heatsinks, and the large front system fans handle the airflow. They may not get that loud, depending on the thermal setup. The classic Mac Pro let chips run warmer before ramping up fan speeds.
#23
ratirt
ShurikN: Yeah, of course you can, it's just that more often than not the game engine will recognize only one GPU and not both. That was the biggest issue they talked about. Until they can make multiple chips appear as one to the API and engines, they will not pursue it, as they can't hope that the devs will optimize their games for that setup. To quote him, devs see it as a burden.
They are definitely looking at it for gaming though, so these kinds of cards could give them some insight. Besides, we all know that when SLI/CrossFire works, it becomes an amazing thing.

www.pcgamesn.com/amd-navi-monolithic-gpu-design
To be honest, they are all monolithic in the sense that they can work as a single unit. I remember that AMD was working on a connection type that would allow games to see the 2 chips as one unit (Infinity Fabric, maybe with an I/O die of some sort?). This might be the first approach to that solution. Or maybe I'm just thinking way too far into the future.
SLI and CrossFire, sure. But you know they don't scale to double the speed of a single card, yet you can still see an improvement.
#24
xkm1948
I bet these mine CryptoKitties like no tomorrow.
#25
HwGeek
200 MH/s+ on ETH for sure.