Wednesday, May 10th 2017

NVIDIA Announces Its Volta-based Tesla V100
Today at its GTC keynote, NVIDIA CEO Jensen Huang took the wraps off some of the features of the upcoming V100 accelerator, the Volta-based part for the professional market that will likely pave the way to the company's next-generation GeForce 2000 series graphics cards. If NVIDIA keeps its product carvings and naming scheme for the next-generation Volta architecture, we can expect to see this processor in the company's next-generation GTX 2080 Ti. Running through all the nitty-gritty details (like the new Tensor processing approach) in this piece would be impossible, but there are some things we already know from the presentation.
This chip is a beast of a processor: it packs 21 billion transistors (up from the 15.3 billion found on the P100); it's built on TSMC's 12 nm FF process (evolving from Pascal's 16 nm FF); and it measures a staggering 815 mm² (up from the P100's 610 mm²). This is such a considerable leap in die area that we can only speculate on how yields will fare for this monstrous chip, especially considering the novelty of the 12 nm process it's going to leverage. The most interesting detail from a gaming perspective, though, is the 5,120 CUDA cores powering the V100, out of a possible 5,376 in the full chip design, which NVIDIA will likely reserve for a Titan Xv. These are divided into 84 Volta Streaming Multiprocessors, each carrying 64 CUDA cores (84 x 64 = 5,376, from which NVIDIA is disabling 4 Streaming Multiprocessors, most likely for yields, which accounts for the announced 5,120). Even in this cut-down configuration, we're looking at a staggering 42% higher CUDA core count than the P100's. The new V100 will offer up to 15 FP32 TFLOPS, and will again leverage a 16 GB HBM2 implementation, delivering up to 900 GB/s of bandwidth (up from the P100's 721 GB/s). No details on clock speed or TDP as of yet, but we already have enough to enable a lengthy discussion... Wouldn't you agree?
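For those who want to check the numbers above, here is a quick back-of-the-envelope sketch. The implied boost clock is our own extrapolation (NVIDIA hasn't confirmed clocks), derived from the standard peak-FP32 formula of cores × 2 ops per cycle (FMA) × clock:

```python
# Sanity-check of the GV100/V100 core counts quoted in the article
sms_total = 84
cores_per_sm = 64
cores_total = sms_total * cores_per_sm        # 5376 in the full GV100 die
sms_enabled = sms_total - 4                   # 4 SMs disabled, likely for yields
cores_enabled = sms_enabled * cores_per_sm    # 5120 on the Tesla V100

p100_cores = 3584
uplift = cores_enabled / p100_cores - 1       # ~42.9% more cores than the P100

# Peak FP32 = cores x 2 ops (FMA) x clock; solve for the implied boost clock
fp32_tflops = 15.0
implied_clock_ghz = fp32_tflops * 1e12 / (2 * cores_enabled) / 1e9

print(cores_total, cores_enabled)             # 5376 5120
print(f"{uplift:.1%}")                        # 42.9%
print(f"{implied_clock_ghz:.2f} GHz")         # ~1.46 GHz
```

That implied ~1.46 GHz boost clock is only an estimate under the FMA assumption, but it lines up with the kind of clocks Pascal already reaches on 16 nm FF.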
103 Comments on NVIDIA Announces Its Volta-based Tesla V100
Any post mentioning AMD/nVidia ends up looking the same... fanboys screaming at each other, trolls crap-posting just to get a rise, and the few intelligent posts drowned out by the noise.
WHAT THE FUCK HAPPENED TO OBJECTIVITY???
/rant over
GV102 5120 Cuda out of 5376
GDDR6 384 bit 12 GB HYNIX
610 sq.mm.
AMD basically dropped all of its market share in the mobile market, on both the CPU and GPU side. For many people, I think the logical choice after buying an Intel-NV laptop is an Intel-NV desktop (not to mention that no one in their right mind would have bought an FX chip for gaming after Haswell).
AMD CPUs haven't had the best rep in the last 5 years or so. A lot of people (again, those who aren't familiar with computers but want to game on PC) probably look at that and assume the same applies to Radeon.
Crossfire 480s with waterblocks do 4K @ 60 Hz fine on all the old stuff, and I've been using them for a year for 565.
RX Vega's core config isn't much different from my R9 Fury X's. I estimate a 15% clock-for-clock gain over the R9 Fury X GPU at most.
Compare that to Volta GV100's 5,000+ CUDA cores with 32 MB of SM memory (Volta's high-bandwidth cache) and 16 GB of HBM2 at 900 GB/s.
RX Vega with 8 GB of HBM2 is dead, overkilled by Volta GV100.
336 texture units, OMG! 128 ROPs, a 1,455 MHz clock, 5,120 CUDA cores.
Maybe it's an advanced T-800 chip from Terminator.
EDIT:
Also, idiots are comparing an out-of-this-world-expensive Volta compute unit with a consumer gaming GPU. I don't think I'm allowed to express the words I'd like to express right now, right here...
ATi/AMD has been behind for the last 10 years. I agree that they were always pretty close; incredibly close, actually, with far, far less funding than NVIDIA. That's admirable, and they surely worked harder than NVIDIA, no doubt. But they were behind. Except for the GTX 4xx era, in every other scenario NVIDIA just managed to collect more points in terms of pros vs. cons. So if a certain ATi/AMD card was more power efficient than its NVIDIA counterpart, it was at the same time slower, ran hotter, and didn't have as good driver support. That's why I used the term "overall": we can't just take one pro as an example and make it matter more than all the others. If an X card was faster than a Y card, but the Y card was more efficient, cheaper, cooler, and had better drivers, the Y card would be the better card overall. And this repeats with all the other pros: as long as one card has more pros than the other, it's the better product.
You wanna talk about dies? OK, let's do it. Let's talk about Polaris vs. Pascal GP106. The 480 is slower, hotter, and far less power efficient; maybe it has better drivers, and it's cheaper (maybe, because initially, here in Europe, I can assure you it was far easier to find a 1060 at a decent price, while the 480 had insane prices and far less availability), and it had poor availability at launch and for the 2-3 months after. So which one is the better product? You can't make only price matter. If the 1060 costs more (how much more remains to be seen, but still, let's say it does, because we're not only talking about Europe here), what's the problem? It's the better product in almost everything. Oh, I forgot to mention that Polaris has slightly better performance in DX12 and Vulkan (maybe, because DOOM's shift in framerates isn't only related to Vulkan). The GTX 1060 is still the winner overall. (I ordered an XFX RX 480 GTR Black Edition a week ago; it should ship either today or tomorrow, so no, I'm not the fanboy you think I am.)
OK, so I shared how AMD did worse over the last 10 years (no blame, given the funds). They also failed miserably in marketing. Which they are correcting now? Really? "Poor Volta-ge," anyone? "It has a brain," "it has a soul"? Really, correcting? No. And gimping cards is still another one of those legends I'd really like to see with my own eyes, because it sounds so much like BS. But no, hey, BS only comes out of NVIDIA's mouth.
"You could see this loading at the start of many games back in the day. I remember seeing it at the beginning of UT2003/2004 as the Skaarj busted through it; there was a mod to change it to an ATi logo."
Basically ATi/AMD in a nutshell.
Clock speed is lower (expected).
800 mm² is monstrous, and so will be the price of that thing.
Some say it means that consumer products are around the corner. If so, that's bad for AMD, although it won't be as big a jump as some expect. And they use NVIDIA marketing slides to somehow get to a 12.5 TFLOPS figure (up from 8.6) with a mere 15% clock bump and the same number of shaders.
Genius.
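For the record, the TFLOPS arithmetic being mocked above doesn't close. A quick sketch, assuming the 4096 shaders of the Fiji-based Fury X carry over to RX Vega (which was the common expectation at the time):

```python
# Sanity-check of the "12.5 TFLOPS from a 15% clock bump" claim
shaders = 4096
fury_x_tflops = 8.6

# Peak FP32 = shaders x 2 ops (FMA) x clock; back out the Fury X clock
fury_x_clock_ghz = fury_x_tflops * 1e12 / (2 * shaders) / 1e9   # ~1.05 GHz

claimed_tflops = 12.5
clock_bump = 1.15
tflops_with_bump = fury_x_tflops * clock_bump                   # ~9.9, not 12.5

# Clock bump actually needed to hit 12.5 TFLOPS on the same shader count
needed_bump = claimed_tflops / fury_x_tflops - 1                # ~45%

print(f"{fury_x_clock_ghz:.2f} GHz")   # 1.05 GHz
print(f"{tflops_with_bump:.1f}")       # 9.9
print(f"{needed_bump:.0%}")            # 45%
```

So a 15% bump over the Fury X only gets you to about 9.9 TFLOPS; reaching 12.5 would need roughly a 45% clock increase (or more shaders), hence the sarcasm.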
You should apply for a job at wccftech, dude, you've got talent. A market that matters to... whom?
Everything but the gaming market share is laughable.
So are the 20-80k cars Tesla makes annually, compared to it.
And I'd encourage you to check your facts when it comes to which markets NVIDIA's chips are sold in... Last I heard, they were producing huge volumes of chips for supercomputers and processing clusters for commercial and government-funded operations. From what I've heard from friends in astronomy, they are also interested in moving to GPGPU, which will be done with NV chips. The market is most definitely there, and it is also set to grow.