Sunday, August 26th 2018

AMD Announces Dual-Vega Radeon Pro V340 for High-Density Computing

AMD today at VMworld in Las Vegas announced its new, high-density computing, dual-GPU Radeon Pro V340 accelerator. This graphics card (or maybe accelerator) is based on the same Vega silicon that powers AMD's consumer graphics card lineup, and crams two GPUs into a single card with a dual-slot design. 32 GB of second-generation High Bandwidth Memory (HBM2) with Error Correcting Code (ECC) greases the wheels for the gargantuan amounts of data these accelerators are meant to crunch through, even as media processing requirements go through the roof.
"As the flagship of our new Radeon Pro V-series product line, the Radeon Pro V340 graphics card employs advanced security features and helps to cost effectively deliver and accelerate modern visualization workloads from the datacenter," said Ogi Brkic, general manager of Radeon Pro at AMD.

"The AMD Radeon Pro V340 graphics card will enable our customers to securely leverage desktop and application virtualization for the most graphically demanding applications," said Sheldon D'Paiva, director of Product Marketing at VMware. "With Radeon Pro for VMware, admins can easily set up a VDI environment, rapidly deploy virtual GPUs to existing virtual machines and enable hundreds of professionals with just a few mouse clicks."
"With increased density, faster frame buffer and enhanced security, the AMD Radeon Pro V340 graphics card delivers a powerful new choice for our customers to power their Citrix Workspace, even for the most demanding applications," said Calvin Hsu, VP of Product Marketing at Citrix.
Sources: AMD, via Tom's Hardware

54 Comments on AMD Announces Dual-Vega Radeon Pro V340 for High-Density Computing

#1
lexluthermiester
I'm looking at this design and thinking, "Where's the fan?"
#2
NdMk2o1o
You and me both. It could well be passive with lower clocks than standard, but you're still looking at a 400 W TDP best case, if not more, so that's not likely. Maybe that front shroud is ventilated and the fans sit behind it; still odd not to show any, though. I can't see AMD focusing on consumer GPUs much anymore, which is a shame, though it seems their focus is on HPC and professional cards.
#3
Blueberries
lexluthermiester: I'm looking at this design and thinking, "Where's the fan?"
THIS. I came here to post THIS.

AMD has issued dual-GPU powerhouses to compete with NVIDIA on several occasions and it doesn't surprise me that they're attacking Turing from the same angle... but there is no way in hell this is passively cooled. Somebody on the marketing team FUBAR'd the render image on this one.
#4
Fluffmeister
This thing is supposed to sit in a server rack packed full of cooling goodness.
#5
Nkd
Fluffmeister: This thing is supposed to sit in a server rack packed full of cooling goodness.
This! Everyone's wondering where the airflow is. I believe they're designed to sit in a rack and use its airflow. That should keep these at 80°C and below easily.
#6
T4C Fantasy
CPU & GPU DB Maintainer
Blueberries: THIS. I came here to post THIS.

AMD has issued dual-GPU powerhouses to compete with NVIDIA on several occasions and it doesn't surprise me that they're attacking Turing from the same angle... but there is no way in hell this is passively cooled. Somebody on the marketing team FUBAR'd the render image on this one.
It's 100% passively cooled.
#7
lexluthermiester
NdMk2o1o: I can't see AMD focusing on consumer GPUs much anymore, which is a shame, though it seems their focus is on HPC and professional cards.
Why not? For the money (now that crypto-currencies have tanked), they are great performers.
Fluffmeister: This thing is supposed to sit in a server rack packed full of cooling goodness.
That's an interesting point. In a 1U or 2U rack, the ventilation would come from the system itself. They can't just be focusing on that cooling scenario, though.
#8
windwhirl
And earlier this week I was just thinking of how dual GPU cards had dropped off the map.

Btw, is it just me, or has the "dual GPU on one card" thing become AMD's standard answer to any situation in which one card isn't/wouldn't be enough?
#9
TheGuruStud
windwhirl: And earlier this week I was just thinking of how dual GPU cards had dropped off the map.

Btw, is it just me, or has the "dual GPU on one card" thing become AMD's standard answer to any situation in which one card isn't/wouldn't be enough?
Density is always better and Vega Pros are untouchable for the money.
#10
GhostRyder
lexluthermiester: I'm looking at this design and thinking, "Where's the fan?"
It's meant for enterprise servers and will 100% rely on them, by the way it's designed. That kinda knocks out a bunch of options, but something like an R-series server from Dell will move more than enough air to cool it off.
#11
hat
Enthusiast
windwhirl: And earlier this week I was just thinking of how dual GPU cards had dropped off the map.

Btw, is it just me, or has the "dual GPU on one card" thing become AMD's standard answer to any situation in which one card isn't/wouldn't be enough?
Not really. This isn't for gaming, and there are many applications that can take advantage of many GPUs. It's kinda like having a dual-core processor as opposed to two separate single-core processors in a 2P board: with the dual-core option you can have four cores, but only two with the single-core chips. It means greater efficiency and expandability in this market.
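
To put rough numbers on that density point, a minimal sketch - the slots-per-chassis and chassis-per-rack counts here are illustrative assumptions, not published specs:

```cpp
// Hypothetical rack-density arithmetic: dual-GPU cards double the GPUs
// per PCIe slot, and the advantage compounds at chassis and rack scale.
#include <cstdio>

int main() {
    const int slots_per_chassis = 4;  // assumed 2U GPU server
    const int chassis_per_rack  = 16; // assumed rack layout

    const int single_gpu = 1 * slots_per_chassis * chassis_per_rack;
    const int dual_gpu   = 2 * slots_per_chassis * chassis_per_rack;

    printf("single-GPU cards: %3d GPUs per rack\n", single_gpu); // 64
    printf("dual-GPU cards:   %3d GPUs per rack\n", dual_gpu);   // 128
    return 0;
}
```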

The gaming market is a different story... they'll push out dual-GPU cards just to have the top product, but that hasn't happened for a while. In fact, the gaming industry is kinda moving away from multi-GPU.
#12
Fouquin
lexluthermiester: I'm looking at this design and thinking, "Where's the fan?"
NdMk2o1o: You and me both. It could well be passive with lower clocks than standard, but you're still looking at a 400 W TDP best case, if not more, so that's not likely. Maybe that front shroud is ventilated and the fans sit behind it; still odd not to show any, though. I can't see AMD focusing on consumer GPUs much anymore, which is a shame, though it seems their focus is on HPC and professional cards.
Blueberries: THIS. I came here to post THIS.

AMD has issued dual-GPU powerhouses to compete with NVIDIA on several occasions and it doesn't surprise me that they're attacking Turing from the same angle... but there is no way in hell this is passively cooled. Somebody on the marketing team FUBAR'd the render image on this one.
Passive designs like this have existed in the enterprise market for over a decade. They're meant for high-density deployment in GPU servers: rack-mount chassis that have forced, channeled airflow. [example chassis images omitted]

Airflow is constant and evenly distributed in these chassis designs, and the cards are built to fit that. This actually makes them more flexible, not less, as it allows enterprise OEMs to pack in more cards with no scaling penalty. Packing in cards back-to-front across the whole chassis doesn't strangle the cards in between, since they're all receiving the exact same cooling. For lower-density deployments (one or two cards), or lower-airflow chassis, some OEMs offer a small fan-rack that attaches to the back of each card. [fan-rack image omitted]
So the takeaway from this is that if the V340 ever sees the light of day in the general consumer space, it'll absolutely be sporting an active cooler. As it stands, it's not a card for the general consumer, and as such is not designed for one.
#13
RejZoR
lexluthermiester: I'm looking at this design and thinking, "Where's the fan?"
It's a stack of hurricane-speed Delta fans in front of the rack blowing through a stack of such cards.
#14
hat
Enthusiast
RejZoR: It's a stack of hurricane-speed Delta fans in front of the rack blowing through a stack of such cards.
There is also the "Vantec Tornado"... I got one as a Christmas or Birthday present back when I first really got into computers. I think my dad was trolling me... I said it was too loud and he said "well, you wanted more airflow..." :laugh:
#15
RejZoR
hat: There is also the "Vantec Tornado"... I got one as a Christmas or Birthday present back when I first really got into computers. I think my dad was trolling me... I said it was too loud and he said "well, you wanted more airflow..." :laugh:
Haha, dad was trolling you :D I had a Thermaltake Dragon Orb 3 on my AMD Athlon 1 GHz back in the day. 7000 RPM of screaming goodness. :D
#16
londiste
Yeah, those fans at the front/back of the rackmount case run at around 5000 RPM constant, if not more. And these are not quiet fans; they're thicker than usual and designed to push as much air as possible.
#17
Joss
Do you guys think this could be a prelude to a consumer/gaming dual Vega?
#18
londiste
Joss: Do you guys think this could be a prelude to a consumer/gaming dual Vega?
No.
#19
Prima.Vera

Better put two more GPUs on it just for the shits & giggles.
And then claw your way up from there. Maybe with 32x GPUs AMD can finally beat an SLI 2080 Ti.
#20
hat
Enthusiast
Well, it obviously shows they have the capacity to produce a dual-Vega card. Professional graphics cards like this (and Quadro) are usually the cream of the crop, and lesser models wind up in the consumer/gaming market. For example, the only full-fat Pascal chip exists on a Quadro card... even the Titan is cut down from the real deal. You also get special drivers, a few things professionals use that gamers don't, and special support, all for that huge price.

So, maybe. If they're making these, we might get the "defective" dual-Vega chips as gaming products, or maybe, after making Vega cards for so long, these are the best-binned Vega chips and they went into the top-tier product.

@Prima.Vera if SLI/xfire were better, I wouldn't mind seeing a card like that, even if it had midrange chips on it (GTX 1060 or RX 560). Or maybe one day they'll figure out how to just "make" them work. It shouldn't really be enormously difficult; GPUs are already massively parallel stream processors by nature. I don't quite understand this enormous hurdle they have to overcome just to make two cards work together (properly) when a single chip is already comprised of thousands of cores anyway...
#21
Vya Domus
hat: the only full-fat Pascal chip exists on a Quadro card... even the Titan is cut down from the real deal.
Actually, the Titan Xp is fully enabled; FP16 ratios and all that are drastically lowered, but the chip itself is all there. This is sort of a misconception: most chips that end up in either consumer or server products are likely not defective at all, they're just cut down on purpose.
#22
Xajel
Blueberries: THIS. I came here to post THIS.

AMD has issued dual-GPU powerhouses to compete with NVIDIA on several occasions and it doesn't surprise me that they're attacking Turing from the same angle... but there is no way in hell this is passively cooled. Somebody on the marketing team FUBAR'd the render image on this one.
This GPU is meant to be used in a server case where the airflow is controlled and maintained by the case itself. Even the CPU doesn't have a fan; everything that needs cooling is designed with its fins aligned in the same direction, so the airflow in the case will - by design - cool all those fins, whether you're putting in the GPU, installing the CPU heatsink, or just adding HDDs.

That's why these servers must stay closed all the time to force the airflow in the correct direction. If you open the case while the server is working, you will quickly face overheating: the server will first throttle, then eventually shut down. But most things are meant to be hot-plugged, even the fans; when one fails, the server application will give you a notice/alarm, and you go open the case and replace the fan quickly, before the server shuts down.

While genius, the main issue with this design is noise. Most servers use small fans that operate at very high RPM to get proper airflow, which raises the noise level beyond comfort, but they're already designed to be used in a dedicated server room.
#23
londiste
hat: if SLI/xfire were better, I wouldn't mind seeing a card like that, even if it had midrange chips on it (GTX 1060 or RX 560). Or maybe one day they'll figure out how to just "make" them work. It shouldn't really be enormously difficult; GPUs are already massively parallel stream processors by nature. I don't quite understand this enormous hurdle they have to overcome just to make two cards work together (properly) when a single chip is already comprised of thousands of cores anyway...
The problem is not the parallel computational part. The problem lies in other parts of the GPU: the rendering pipeline, and moving data from one place to another (whether physically or logically). Memory bandwidth is in the hundreds of GB/s (400-500 GB/s on high-end GPUs), and caches are obviously faster than that.

Connections between different GPUs are slow compared to that. AMD has never said how fast their IF is inside or outside of a Vega (they have said that the memory controller, GPU, media engine, etc. are connected using IF, but gave no real technical details). For comparison, Nvidia's new Turing has 2 NVLinks - 50 GB/s both ways. IF scaling for connecting a Vega to the outside of the chip is likely in the same range. PCIe is shared with other uses, most notably moving data from RAM or storage to VRAM, and does not have guaranteed latency, as well as being slower (PCIe 3.0 x16 has 16 GB/s of bandwidth). These interconnects (both IF and NVLink in this case) are scalable and can be widened, but that comes at a cost in both die space and power usage.

Due to the way the graphics pipeline works, there needs to be a lot of synchronization between different GPUs if you want to use multiple GPUs for gaming. There are both bandwidth and (related) latency issues, because real-time rendering has strict timing requirements.

Multiple GPUs work fine for applications where either the requirements are not as strict - massive computations and GPGPU - or there is inherent resource sharing the other way around - multiple users on the same GPU, like the desktop virtualization this V340 is intended for.
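
To put those figures in perspective, here's a rough back-of-the-envelope sketch (the bandwidth numbers are the ones quoted above; the 4K framebuffer size and 144 Hz frame budget are my own illustrative assumptions):

```cpp
// How long it takes to move one 4K RGBA8 framebuffer (~33 MB) at the
// bandwidths quoted above -- local VRAM vs. GPU-to-GPU interconnects.
#include <cstdio>

int main() {
    const double frame_bytes = 3840.0 * 2160.0 * 4.0; // ~33 MB
    const double vram_bw     = 484e9; // Vega 64 HBM2, local memory
    const double nvlink_bw   = 50e9;  // Turing, 2 NVLinks, one direction
    const double pcie_bw     = 16e9;  // PCIe 3.0 x16, one direction

    printf("local VRAM read: %.3f ms\n", 1e3 * frame_bytes / vram_bw);   // ~0.069
    printf("over NVLink:     %.3f ms\n", 1e3 * frame_bytes / nvlink_bw); // ~0.664
    printf("over PCIe 3.0:   %.3f ms\n", 1e3 * frame_bytes / pcie_bw);   // ~2.074
    // At 144 Hz a whole frame is ~6.9 ms, so ~2 ms just to ship one
    // buffer over PCIe eats a big slice of the budget before any sync.
    return 0;
}
```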

tl;dr
There are technical reasons why multi-GPU gaming is difficult. XF/SLI are being wound down, and that is not likely to change.
#24
Vya Domus
londiste: Due to the way the graphics pipeline works, there needs to be a lot of synchronization between different GPUs
Quite the contrary: synchronization is kept to a minimum in graphics processing. Data parallelism is king, and that does not require a lot of synchronization; as a matter of fact, the only type of synchronization you can do on a GPU is a barrier on all the threads within an SM/CU, which is fairly primitive. The only real synchronization that is critical is between the stages themselves, not within them, and that does not inherently represent a big problem for a potential multi-GPU implementation.
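
In CUDA terms, that primitive barrier is __syncthreads(), which only synchronizes the threads of one block resident on a single SM - a minimal generic sketch, nothing V340-specific:

```cuda
// Tree reduction inside one block: __syncthreads() is the barrier
// described above -- it spans one block on one SM, never the whole
// GPU, and certainly not a second GPU.
#include <cstdio>

__global__ void blockSum(const float* in, float* out) {
    __shared__ float partial[256];  // launch with 256 threads per block
    const int tid = threadIdx.x;
    partial[tid] = in[blockIdx.x * blockDim.x + tid];

    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        __syncthreads();  // wait for the previous step's writes
        if (tid < stride) partial[tid] += partial[tid + stride];
    }
    if (tid == 0) out[blockIdx.x] = partial[0];
}

int main() {
    const int N = 1024, B = 256;
    float *in, *out;
    cudaMallocManaged(&in, N * sizeof(float));
    cudaMallocManaged(&out, (N / B) * sizeof(float));
    for (int i = 0; i < N; ++i) in[i] = 1.0f;

    blockSum<<<N / B, B>>>(in, out);
    cudaDeviceSynchronize();
    printf("block 0 sum = %.0f (expect %d)\n", out[0], B);

    cudaFree(in);
    cudaFree(out);
    return 0;
}
```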
#25
hat
Enthusiast
Vya Domus: Actually, the Titan Xp is fully enabled; FP16 ratios and all that are drastically lowered, but the chip itself is all there. This is sort of a misconception: most chips that end up in either consumer or server products are likely not defective at all, they're just cut down on purpose.
Well, the Quadro P6000 can beat the Titan Xp by a decent margin, even in gaming. Though it would be ridiculous to buy such a card for gaming, it at least shows that something's going on there...