Wednesday, May 10th 2017
NVIDIA Announces Its Volta-based Tesla V100
Today at its GTC keynote, NVIDIA CEO Jensen Huang took the wraps off some of the features of the upcoming V100, the Volta-based accelerator for the professional market that will likely pave the way to the company's next-generation 2000-series GeForce graphics cards. If NVIDIA carries on with its product carving and naming scheme for the next-generation Volta architecture, we can expect to see this processor in the company's next-generation GTX 2080 Ti. Running through all the nitty-gritty details (like the new Tensor processing approach) in this piece would be impossible, but there are some things we already know from the presentation.
This chip is a beast of a processor: it packs 21 billion transistors (up from the 15.3 billion found on the P100); it's built on TSMC's 12 nm FF process (evolving from Pascal's 16 nm FF); and it measures a staggering 815 mm² (up from the P100's 610 mm²). This is such a considerable leap in die area that we can only speculate on how yields will be for this monstrous chip, especially considering the novelty of the 12 nm process it's going to leverage. The most interesting detail from a gaming perspective, though, is the 5,120 CUDA cores powering the V100, out of a possible 5,376 in the full chip design (a configuration NVIDIA will likely reserve for a Titan Xv). These are divided into 84 Volta Streaming Multiprocessors, each carrying 64 CUDA cores (84 x 64 = 5,376, from which NVIDIA is cutting 4 SMs, most likely for yields, which accounts for the announced 5,120). Even in this cut-down configuration, we're looking at a staggering 43% higher raw CUDA core count than the P100's. The new V100 will offer up to 15 FP32 TFLOPS, and will still leverage a 16 GB HBM2 implementation delivering up to 900 GB/s of bandwidth (up from the P100's 721 GB/s). No details on clock speeds or TDP as of yet, but we already have enough to fuel a lengthy discussion... Wouldn't you agree?
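For anyone who wants to sanity-check those numbers, here is a quick back-of-the-envelope sketch in plain Python. The SM and core counts come straight from the announcement; the implied boost clock at the end is our own estimate derived from the quoted 15 TFLOPS (FP32 throughput ≈ cores × 2 FMA ops per clock × clock speed), not an official NVIDIA figure.

```python
# Back-of-the-envelope math for the figures quoted above.
# Inputs are taken from the announcement; the implied boost clock is
# our own estimate, not an NVIDIA-confirmed number.

SM_TOTAL = 84          # Volta SMs in the full GV100 design
CORES_PER_SM = 64      # FP32 CUDA cores per Volta SM
SM_ENABLED = 80        # 84 minus the 4 SMs disabled, presumably for yields

full_cores = SM_TOTAL * CORES_PER_SM       # 5,376 in the full chip
enabled_cores = SM_ENABLED * CORES_PER_SM  # 5,120 on the Tesla V100

P100_CORES = 3584
core_uplift = (enabled_cores / P100_CORES - 1) * 100   # ~42.9%

# FP32 throughput = cores * 2 ops per clock (FMA) * clock,
# so the quoted 15 TFLOPS implies a boost clock of roughly:
FP32_TFLOPS = 15
implied_clock_ghz = FP32_TFLOPS * 1e12 / (enabled_cores * 2) / 1e9

print(f"Full GV100 cores:    {full_cores}")
print(f"Tesla V100 cores:    {enabled_cores}")
print(f"Uplift over P100:    {core_uplift:.1f}%")
print(f"Implied boost clock: ~{implied_clock_ghz:.2f} GHz")
```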
103 Comments on NVIDIA Announces Its Volta-based Tesla V100
If you weren't bitten by any of those problems, then congrats.
Trainwreck of CCC? Why do you people keep bashing AMD for PAST things while conveniently ignoring great CURRENT ones? If you look at the Crimson Control Panel, it's a trillion light years ahead of the archaic NV Control Panel, which is the same as it was 10 years ago. And equally broken. It pisses me off every time I have to change anything in it, because it keeps resetting the god damn settings list to the top whenever you select ANYTHING. It's so infuriating, and yet it has been this way for ages. Go figure...
And there is no "higher CPU overhead". NVIDIA just invested tons of time and resources into multithreading its DX11 driver, which makes it look like AMD cards are more CPU intensive. But realistically, it's all drama bullshit. I've been gaming in DX11 with Radeons for years and performance has always been excellent. But people see a 3 fps difference in benchmarks and instantly lose their s**t.
:banghead::banghead::banghead::banghead::banghead:
I kid, but seriously.
Xorg is an old dinosaur that handicaps the whole desktop side of Linux. No driver can fix that.
Last time I tried, it was almost impossible to get my 670 to work properly on Ubuntu and Mint. It was even more hopeless when I tried my old laptop (yay, Optimus), but from people I know who use the cards for actual compute work, it's a different story.
But someone in the thread told me HPC was an irrelevant market :confused::confused::confused::confused:
Also, my trusty old 660 Ti (not a 670, but pretty damn close) worked flawlessly on Ubuntu for years. My work 610M continues to do so, alongside the IGP. Prime gave me a bit of a headache till I set it up right, but it's been smooth ever since. I can't imagine how you managed to screw it up; there's literally no distro that makes installing proprietary drivers easier than Ubuntu.
My 8500GT never gave me trouble though...
Regardless, NV is obviously pushing heavily into the HPC/compute market, like they have done since Tesla, and I think the results show it. From Kepler to Volta they have opened up many new markets for GPGPU, among them what I mentioned earlier in (radio) astronomy...
Edit: And if you're talking strictly Windows, yes, I've had no problem recommending AMD to friends over the years. But for me, it never made the cut, mostly because of abysmal Linux support.
Not to spawn another discussion, but stutter is one aspect where AMD still has a lot to improve in its driver. Linux might not have the same game selection as Windows, but as anyone into professional graphics would know, NVIDIA is the only vendor offering enterprise-quality drivers for Linux, and those drivers are even more stable than their Windows counterparts.
bug, FYI, I am referring to Windows support, that being the prevalent gaming platform by a vast margin.
Which you then quoted.
Still haven't heard back about this claim that all markets outside gaming are irrelevant. AMD is simply uncompetitive in this regard because of how fast NV is moving.
It's painful when a company calls its under-the-hood architecture and a consumer product by the same name. It makes the conversation quite hard to follow.
instinct.radeon.com/en-us/about/
That's the level of stupidity here about what a product is being called. The gaming one is RX Vega, Vega, Vega 10, the big Vega, you name it. The "professional" ones were always called either Radeon Pro/FirePro or, now, Instinct (MI25 in the Vega SKU's case). No one cares what you might want to call it or how the long version of the first paragraph applies to it. It's just simple, basic communication common sense that prevents any kind of confusion. Any Vega is a gaming card; any MI or Fire is workstation stuff. It's not rocket science to use it this way, you know. It's not like there's going to be another MI25 with, I don't know, a Navi core. It'll be called something different. So why the need to overcomplicate simple things?
That does not matter. Again, we are talking about datacenters, which is why Vega is relevant (since the only existing Vega-based product is in that market). It's the only product we actually have data on. If you assume we are talking about the RX version, you have not read the article.
If someone does talk about it, it is not relevant to the news, the product, or the competition.
And Instinct is most definitely not a workstation card.