Wednesday, May 10th 2017

NVIDIA Announces Its Volta-based Tesla V100

Today at its GTC keynote, NVIDIA CEO Jensen Huang took the wraps off some of the features of the company's upcoming V100 accelerator, the Volta-based accelerator for the professional market that will likely pave the way to the company's next-generation 2000-series GeForce graphics cards. If NVIDIA carries on with its current product segmentation and naming scheme for the next-generation Volta architecture, we can expect to see this processor in the company's next-generation GTX 2080 Ti. Running through all the nitty-gritty details (like the new Tensor processing approach) in this piece would be impossible, but there are some things we already know from the presentation.

This chip is a beast of a processor: it packs 21 billion transistors (up from the 15.3 billion found on the P100); it's built on TSMC's 12 nm FF process (evolving from Pascal's 16 nm FF); and it measures a staggering 815 mm² (up from the P100's 610 mm²). This is such a considerable leap in die area that we can only speculate on how yields will fare for this monstrous chip, especially considering the novelty of the 12 nm process it leverages. The most interesting details from a gaming perspective, though, are the 5,120 CUDA cores powering the V100, out of a possible 5,376 in the full chip design, a full-fat configuration NVIDIA will likely reserve for a Titan Xv. These are divided into 84 Volta Streaming Multiprocessor (SM) units, each carrying 64 CUDA cores (84 x 64 = 5,376); NVIDIA is disabling 4 SMs, most likely for yields, which accounts for the announced 5,120. Even in this cut-down configuration, we're looking at a staggering ~43% higher CUDA core count than the P100's 3,584. The new V100 will offer up to 15 TFLOPS of FP32 compute, and will still leverage a 16 GB HBM2 implementation delivering up to 900 GB/s of bandwidth (up from the P100's 721 GB/s). There are no details on clock speeds or TDP as of yet, but we already have enough to fuel a lengthy discussion... Wouldn't you agree?
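As a quick sanity check on those figures, here is a minimal back-of-the-envelope sketch in Python. The per-SM core count and the 15 TFLOPS figure are taken from the announcement; the 2 FLOPs-per-core-per-clock (one FMA) assumption is the usual convention, and the implied boost clock is our own rough estimate, not an NVIDIA number.

# Rough check of the announced Tesla V100 figures
sms_full, sms_enabled, cores_per_sm = 84, 80, 64

full_cores = sms_full * cores_per_sm        # 5,376 CUDA cores in the full GV100 die
enabled_cores = sms_enabled * cores_per_sm  # 5,120 CUDA cores enabled on the Tesla V100

p100_cores = 3584                           # Tesla P100, for comparison
print(f"Core-count increase over P100: {enabled_cores / p100_cores - 1:.0%}")  # ~43%

# 15 TFLOPS of FP32 at 2 FLOPs per core per clock (one FMA) implies roughly this boost clock:
implied_boost_ghz = 15e12 / (enabled_cores * 2) / 1e9
print(f"Implied boost clock: ~{implied_boost_ghz:.2f} GHz")                    # ~1.46 GHz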

103 Comments on NVIDIA Announces Its Volta-based Tesla V100

#76
bug
RejZoRPeople keep yodeling this "bad software support" and "bad drivers" and "awful OpenGL support/performance", and in a decade of owning ATi/AMD cards I hardly ever experienced any issues with stability or performance. This too is one of the reasons why AMD just can't get any traction. People spreading misinformation and just plain lies. For reasons unknown to any logic.
I guess you're not a Linux user then. Only last year did AMD come out with a more decent driver, but a year later even that one still doesn't have all the features of the older one. Minor stuff like HDMI audio, Vulkan or OpenCL. And need I remind you what a trainwreck CCC was at launch? How people begged ATI/AMD not to release that monstrosity and how they went ahead and did it anyway? Today the driver seems to be in a much better place, but it still has higher CPU overhead.
If you weren't bitten by any of those problems, then congrats.
#77
GorbazTheDragon
medi01Ironic.


It's a quickly growing market that is still dwarfed by the gaming GPU market.
Friends of mine who happen to work in astronomy?
#78
RejZoR
bugI guess you're not a Linux user then. Only last year did AMD come out with a more decent driver, but a year later even that one still doesn't have all the features of the older one. Minor stuff like HDMI audio, Vulkan or OpenCL. And need I remind you what a trainwreck CCC was at launch? How people begged ATI/AMD not to release that monstrosity and how they went ahead and did it anyway? Today the driver seems to be in a much better place, but it still has higher CPU overhead.
If you weren't bitten by any of those problems, then congrats.
And here we come with the Linux argument again. For those 50 people who game on Linux, is it really a problem? C'mon?

Trainwreck of CCC? Why do you people keep bashing AMD for PAST things but conveniently ignore the great CURRENT ones? If you look at the Crimson Control Panel, it's a trillion light years ahead of the archaic NV Control Panel, which is the same as it was 10 years ago. And equally as broken. It pisses me off every time I have to change anything in it, because it keeps resetting the god damn settings list to the top whenever you select ANYTHING. It's so infuriating, and yet it has been this way for ages. Go figure...

And there is no "higher CPU overhead". NVIDIA just invested tons of time and resources into multithreading with DX11 which makes it look like AMD cards are more CPU intensive. But realistically, it's all drama bullshit. Been gaming in DX11 with Radeons for years and performance was always excellent. But when people see 3fps difference in benchmarks and they instantly lose their s**t entirely.
#79
GorbazTheDragon
Linux drivers are shit on both sides, so what's the argument? :D

:banghead::banghead::banghead::banghead::banghead:
#80
bug
RejZoRAnd here we come with the Linux argument again. For those 50 people who game on Linux, is it really a problem? C'mon?

Trainwreck of CCC? Why do you people keep bashing AMD for PAST things but conveniently ignore the great CURRENT ones? If you look at the Crimson Control Panel, it's a trillion light years ahead of the archaic NV Control Panel, which is the same as it was 10 years ago. And equally as broken. It pisses me off every time I have to change anything in it, because it keeps resetting the god damn settings list to the top whenever you select ANYTHING. It's so infuriating, and yet it has been this way for ages. Go figure...

And there is no "higher CPU overhead". NVIDIA just invested tons of time and resources into multithreading with DX11, which makes it look like AMD cards are more CPU intensive. But realistically, it's all drama bullshit. I've been gaming in DX11 with Radeons for years and performance was always excellent. But when people see a 3 fps difference in benchmarks, they instantly lose their s**t entirely.
The world is not built around your personal usage pattern. For that, we apologize.
#81
xenocide
I don't post here for a few months and you all turn into god damn savages...

I kid, but seriously.
#82
jabbadap
GorbazTheDragonLinux drivers are shit on both sides, so what's the argument? :D

:banghead::banghead::banghead::banghead::banghead:
In the HPC space, where these cards are well suited, Linux has over 90% market share. And no, NVIDIA has a great Linux driver with full support for the needed features.

Xorg is an old dinosaur that handicaps the whole desktop side of Linux. No driver can fix that.
#83
GorbazTheDragon
What products are we talking about? GeForce or Tesla?

Last time I tried, it was almost impossible to get my 670 to work properly on Ubuntu and Mint. It was even more hopeless when I tried my old laptop (yay, Optimus), but from people I know who use the cards for actual compute stuff, it's a different story.

But someone in the thread told me HPC was an irrelevant market :confused::confused::confused::confused:
#84
bug
GorbazTheDragonWhat products are we talking about? GeForce or Tesla?

Last time I tried, it was almost impossible to get my 670 to work properly on Ubuntu and Mint. It was even more hopeless when I tried my old laptop (yay, Optimus), but from people I know who use the cards for actual compute stuff, it's a different story.

But someone in the thread told me HPC was an irrelevant market :confused::confused::confused::confused:
The article has "Tesla" in its name, so...
Also, my trusty old 660Ti (not a 670, but pretty damn close) worked flawlessly on Ubuntu for years. My work 610M continues to do so, alongside the IGP. Prime gave me a bit of a headache till I set it up right, but it's been smooth ever since. I can't imagine how you managed to screw it up; there's literally no distro that makes installing proprietary drivers easier than Ubuntu.
#85
alucasa
bugthere's literally no distro that makes installing proprietary drivers easier than Ubuntu.
True dat.
#86
GorbazTheDragon
Eh, probably me being unfamiliar with the platform, but neither my mother nor my father, who have both used Unix systems for their whole careers, got it to work at the time.

My 8500GT never gave me trouble though...

Regardless, NV is obviously pushing heavily into the HPC/compute market like they have done since Tesla, and I think the results show. From Kepler to Volta they have opened up many new markets for GPGPU, among them what I mentioned earlier in (radio) astronomy...
#87
RejZoR
NokironWhat? I'm talking business and enterprise (which is the market that matters). AMD did NOT have good software support, whether you like it or not.
Of course you were
bugThe world is not built around your personal usage pattern. For that, we apologize.
Now, all of a sudden my usage (as a gamer) doesn't matter because it's not a negative one towards AMD. Okay...
#88
bug
RejZoRNow, all of a sudden my usage (as a gamer) doesn't matter because it's not a negative one towards AMD. Okay...
Way to spin it. I gave you several areas where AMD fell short over the years, you dismissed them all because you didn't have a problem with them and now I'm cherry-picking? You're good.

Edit: And if you're talking strictly Windows, yes, I've had no problem recommending AMD to friends over the years. But for me, it never made the cut, mostly because of abysmal Linux support.
#89
efikkan
RaevenlordThis chip is a beast of a processor: it packs 21 billion transistors (up from the 15.3 billion found on the P100); it's built on TSMC's 12 nm FF process (evolving from Pascal's 16 nm FF); and it measures a staggering 815 mm² (up from the P100's 610 mm²). This is such a considerable leap in die area that we can only speculate on how yields will fare for this monstrous chip, especially considering the novelty of the 12 nm process it leverages.
That's less than a 3% increase in transistor density. The reason is that TSMC's "12 nm FinFET" is essentially the same node as TSMC's "16 nm FinFET", which in turn is really TSMC's "20 nm" node. "Third-generation 20 nm" would be a fairer description if we follow Intel's standard. If this were a real node shrink, we'd see close to a doubling in density. So remember: TSMC's "12 nm" is not a node shrink :)
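For reference, here is the same kind of quick check for that density claim, using only the die sizes and transistor counts quoted above (a rough Python sketch, nothing authoritative):

# Transistor density from the quoted figures
p100_density = 15.3e9 / 610   # transistors per mm² on the P100 (16 nm FF, 610 mm²)
v100_density = 21.0e9 / 815   # transistors per mm² on the V100 ("12 nm FF", 815 mm²)

print(f"P100: {p100_density / 1e6:.1f} Mtransistors/mm²")            # ~25.1
print(f"V100: {v100_density / 1e6:.1f} Mtransistors/mm²")            # ~25.8
print(f"Density increase: {v100_density / p100_density - 1:.1%}")    # ~2.7%, i.e. under 3%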
R4E3960FURYXRX Vega 8GB HBM2 is death and overkill by Volta GV 100.
GV100 will not arrive in any consumer product anytime soon, perhaps never.
RejZoRAnd there is no "higher CPU overhead". NVIDIA just invested tons of time and resources into multithreading with DX11, which makes it look like AMD cards are more CPU intensive. But realistically, it's all drama bullshit. I've been gaming in DX11 with Radeons for years and performance was always excellent. But when people see a 3 fps difference in benchmarks, they instantly lose their s**t entirely.
One customer being "satisfied" doesn't prove anything.
Not to spawn another discussion, but stutter is one aspect where AMD still have a lot to improve in their driver.
GorbazTheDragonLinux drivers are shit on both sides so whats the argument? :D
Linux might not have the same game selection as Windows, but as anyone into professional graphics would know, Nvidia is the only vendor offering enterprise-quality drivers for Linux, and the drivers are even more stable than their Windows counterparts.
#90
Fx
HoodIt's common knowledge that AMD/ATI/Radeon software and driver support sucks, for all their hardware, including CPUs and chipsets. I guess you never got the memo... oh that's right, you're the one who ignores everything negative about AMD and makes up stats that "prove" your point. Please don't stop, you are providing comic relief with your delusions. Continue obsessing over trivial BS, so we can keep laughing...
No sir, it is not common knowledge. That would be called a stigma which originated from a time when ATI was lacking in support. That was a long time ago.

bug, FYI, I am referring to Windows support, that being the prevalent gaming platform by a vast margin.
#91
RejZoR
efikkanThat's less than a 3% increase in transistor density. The reason is that TSMC's "12 nm FinFET" is essentially the same node as TSMC's "16 nm FinFET", which in turn is really TSMC's "20 nm" node. "Third-generation 20 nm" would be a fairer description if we follow Intel's standard. If this were a real node shrink, we'd see close to a doubling in density. So remember: TSMC's "12 nm" is not a node shrink :)


GV100 will not arrive in any consumer product anytime soon, perhaps never.


One customer being "satisfied" doesn't prove anything.
Not to spawn another discussion, but stutter is one aspect where AMD still have a lot to improve in their driver.


Linux might not have the same game selection as Windows, but as anyone into professional graphics would know, Nvidia is the only vendor offering enterprise-quality drivers for Linux, and the drivers are even more stable than their Windows counterparts.
I can tell you a lot about unexplained stutter with my GTX 980 as well. It's fine for a while, and then mouse motion just feels like absolute garbage for no logical reason. But hey, what does one user matter, right?
#92
Nokiron
RejZoROf course you were
We are in a thread about a datacenter GPU... Why else would I talk about programming models, APIs and developers?

Which you then quoted.
#93
RejZoR
NokironWe are in a thread about a datacenter GPU... Why else would I talk about programming models, APIs and developers?

Which you then quoted.
Then you may want to ask the same to everyone dragging Vega into this discussion. Vega is a consumer card designed for gaming.
#94
jabbadap
RejZoRThen you may want to ask the same to everyone dragging Vega into this discussion. Vega is a consumer card designed for gaming.
RX Vega is a consumer card; the Vega 10 on the Mi25 is the datacenter Vega.
#95
RejZoR
C'mon, who calls the compute one "Vega 10"? No one. We all call it Mi25.
#96
GorbazTheDragon
And you are expecting an AMD comeback purely off their Radeon gaming card sales?

Still haven't heard back about this claim that all markets outside gaming are irrelevant. AMD is simply uncompetitive in this regard because of how fast NV is moving.
#97
jabbadap
RejZoRC'mon, who calls the compute one "Vega 10"? No one. We all call it Mi25.
And yet it has the same Vega 10 GPU inside as the consumer RX Vega card. But granted, you are not the one who dragged _RX Vega_ into this discussion in the first place.

It's painful when a company calls its underlying architecture and a consumer product by the same name. It makes the conversation quite hard to follow.
#98
Nokiron
RejZoRThen you may want to ask the same to everyone dragging Vega into this discussion. Vega is a consumer card designed for gaming.
Vega is relevant, since AMD calls the MI25 "MI25 Vega with NCU".

instinct.radeon.com/en-us/about/
#99
RejZoR
Oh dear. That's like calling the GeForce Titan Xp "a graphics card, but not the ordinary one, it's the one for developers, but you know, it's not the cut-down Pascal, it's the full-fat one, but it's not only for developers, gamers can also use it for, you know, gaming, but we don't call it Pascal because that's its core codename."

That's the level of stupidity here about what a product is called. The gaming one is RX Vega, Vega, Vega 10, the big Vega, you name it. The "professional" ones were always called either RadeonPro/FirePro or, now, Instinct (Mi25 in Vega's SKU case). No one cares what you might want to call it, or how long a version of the first paragraph applies to it. It's just simple, basic communication common sense that prevents any kind of confusion. Any Vega is a gaming card, any Mi or Fire is workstation stuff. It's not rocket science to use it this way, you know. It's not like there's gonna be another Mi25 with, I don't know, a Navi core. It'll be called something different. So why the need to overcomplicate simple things?
#100
Nokiron
RejZoROh dear. That's like calling the GeForce Titan Xp "a graphics card, but not the ordinary one, it's the one for developers, but you know, it's not the cut-down Pascal, it's the full-fat one, but it's not only for developers, gamers can also use it for, you know, gaming, but we don't call it Pascal because that's its core codename."

That's the level of stupidity here about what a product is called. The gaming one is RX Vega, Vega, Vega 10, the big Vega, you name it. The "professional" ones were always called either RadeonPro/FirePro or, now, Instinct Mi25. No one cares what you might want to call it, or how long a version of the first paragraph applies to it. It's just simple, basic communication common sense that prevents any kind of confusion. Any Vega is a gaming card, any Mi or Fire is workstation stuff. It's not rocket science to use it this way, you know. It's not like there's gonna be another Mi25 with, I don't know, a Navi core. It'll be called something different. So why the need to overcomplicate simple things?
No, it is not. Titan Xp is a specific product with a very specific target audience.

That does not matter. Again, we are talking about datacenters, which is why Vega is relevant (since the only existing Vega-based product is in that market); it is the only product we actually have data on. If you assume that we are talking about the RX version, you have not read the article.
If someone does talk about it, it is not relevant to either the news, the product or the competition.

And Instinct is most definitely not a workstation card.