
NVIDIA Announces Its Volta-based Tesla V100

People keep yodeling this "bad software support" and "bad drivers" and "awful OpenGL support/performance", and in a decade of owning ATi/AMD cards I hardly ever experienced any issues with stability or performance. This too is one of the reasons why AMD just can't get any traction: people spreading misinformation and just plain lies, for reasons unknown to any logic.
I guess you're not a Linux user then. Only last year AMD came out with a more decent driver, but a year later even that one doesn't have all the features of the older driver. Minor stuff like HDMI audio, Vulkan or OpenCL. And need I remind you what a trainwreck CCC was at launch? How people begged ATI/AMD not to release that monstrosity, and how they went ahead and did it anyway? Today the driver seems to be in a much better place, but it still has higher CPU overhead.
If you weren't bitten by any of those problems, then congrats.
 
Ironic.


It's a quickly growing market that is still dwarfed by the gaming GPU market.
Friends of mine who happen to work in astronomy?
 
I guess you're not a Linux user then. Only last year AMD came out with a more decent driver, but a year later even that one doesn't have all the features of the older driver. Minor stuff like HDMI audio, Vulkan or OpenCL. And need I remind you what a trainwreck CCC was at launch? How people begged ATI/AMD not to release that monstrosity, and how they went ahead and did it anyway? Today the driver seems to be in a much better place, but it still has higher CPU overhead.
If you weren't bitten by any of those problems, then congrats.

And here we go with the Linux argument again. For those 50 people who game on Linux, is it really a problem? C'mon.

Trainwreck of CCC? Why do you people keep on bashing AMD for PAST things but conveniently ignore the great CURRENT ones? If you look at the Crimson Control Panel, it's a trillion light years ahead of the archaic NV Control Panel, which is the same as it was 10 years ago. And equally as broken. It pisses me off every time I have to change anything in it, because it keeps on resetting the god damn settings list to the top whenever you select ANYTHING. It's so infuriating, and yet it has been this way for ages. Go figure...

And there is no "higher CPU overhead". NVIDIA just invested tons of time and resources into multithreading with DX11, which makes it look like AMD cards are more CPU intensive. But realistically, it's all drama bullshit. I've been gaming in DX11 with Radeons for years and performance was always excellent. But when people see a 3 fps difference in benchmarks, they instantly lose their s**t entirely.
 
Linux drivers are shit on both sides, so what's the argument? :D

:banghead::banghead::banghead::banghead::banghead:
 
And here we go with the Linux argument again. For those 50 people who game on Linux, is it really a problem? C'mon.

Trainwreck of CCC? Why do you people keep on bashing AMD for PAST things but conveniently ignore the great CURRENT ones? If you look at the Crimson Control Panel, it's a trillion light years ahead of the archaic NV Control Panel, which is the same as it was 10 years ago. And equally as broken. It pisses me off every time I have to change anything in it, because it keeps on resetting the god damn settings list to the top whenever you select ANYTHING. It's so infuriating, and yet it has been this way for ages. Go figure...

And there is no "higher CPU overhead". NVIDIA just invested tons of time and resources into multithreading with DX11, which makes it look like AMD cards are more CPU intensive. But realistically, it's all drama bullshit. I've been gaming in DX11 with Radeons for years and performance was always excellent. But when people see a 3 fps difference in benchmarks, they instantly lose their s**t entirely.
The world is not built around your personal usage pattern. For that, we apologize.
 
I don't post here for a few months and you all turn into god damn savages...

I kid, but seriously.
 
Linux drivers are shit on both sides, so what's the argument? :D

:banghead::banghead::banghead::banghead::banghead:

In the HPC space, where these cards are well suited, Linux has over 90% market share, and no, NVIDIA has a great Linux driver with full support for the needed features.

Xorg is an old dinosaur that handicaps the whole desktop side of Linux. No driver can fix that.
 
What products are we talking about? GeForce or Tesla?

Last time I tried, it was almost impossible to get my 670 to work properly on Ubuntu and Mint. It was even more hopeless when I tried my old laptop (yay Optimus), but from people I know who use the cards for actual compute stuff, it's a different story.

But someone in the thread told me HPC was an irrelevant market :confused::confused::confused::confused:
 
What products are we talking about? GeForce or Tesla?

Last time I tried, it was almost impossible to get my 670 to work properly on Ubuntu and Mint. It was even more hopeless when I tried my old laptop (yay Optimus), but from people I know who use the cards for actual compute stuff, it's a different story.

But someone in the thread told me HPC was an irrelevant market :confused::confused::confused::confused:
The article has "Tesla" in its name, so...
Also, my trusty old 660Ti (not 670, but pretty damn close) worked flawlessly on Ubuntu for years. My work 610M continues to do so, alongside the IGP. Prime gave me a bit of a headache till I set it up right, but it's been smooth ever since. I can't imagine how you managed to screw it up; there's literally no distro that makes installing proprietary drivers easier than Ubuntu.
 
Ehh, probably me being unfamiliar with the platform, but neither my mother nor my father, who both used Unix systems for their whole careers, got it to work at the time.

My 8500GT never gave me trouble though...

Regardless, NV is obviously pushing heavily on the HPC/compute market, like they have done since Tesla, and I think the results show. From Kepler to Volta they have opened up many new markets for GPGPU, among them what I mentioned earlier in (radio) astronomy...
 
What? I'm talking business and enterprise (which is the market that matters). AMD did NOT have good software support, whether you like it or not.

Of course you were
The world is not built around your personal usage pattern. For that, we apologize.

Now, all of a sudden my usage (as a gamer) doesn't matter because it's not a negative one towards AMD. Okay...
 
Now, all of a sudden my usage (as a gamer) doesn't matter because it's not a negative one towards AMD. Okay...

Way to spin it. I gave you several areas where AMD fell short over the years, you dismissed them all because you didn't have a problem with them and now I'm cherry-picking? You're good.

Edit: And if you're talking strictly Windows, yes, I've had no problem recommending AMD to friends over the years. But for me, it never made the cut, mostly because of abysmal Linux support.
 
This chip is a beast of a processor: it packs 21 billion transistors (up from the 15.3 billion found on the P100); it's built on TSMC's 12 nm FF process (evolving from Pascal's 16 nm FF); and measures a staggering 815 mm² (from the P100's 610 mm²). This is such a considerable leap in die area that we can only speculate on how yields will be for this monstrous chip, especially considering the novelty of the 12 nm process that it's going to leverage.
That's less than a 3% increase in transistor density. The reason is that TSMC's "12 nm FinFET" is actually the same node as TSMC's "16 nm FinFET", which in turn is really TSMC's "20 nm" node. "Third-generation 20 nm" would be a fairer description, if we follow Intel's standard. If this were a real node shrink, we'd see close to a doubling in density. So remember this: TSMC's "12 nm" is not a node shrink :)
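For anyone who wants to sanity-check that figure, here's a rough back-of-the-envelope calculation (a sketch using only the transistor counts and die sizes quoted in the article; NVIDIA's exact GV100 count may be closer to 21.1 billion, which doesn't change the conclusion):

```python
# Density comparison from the numbers quoted in the article:
# GV100 (Tesla V100): ~21 billion transistors on an 815 mm^2 die
# GP100 (Tesla P100): 15.3 billion transistors on a 610 mm^2 die
gv100_density = 21.0e9 / 815   # transistors per mm^2
gp100_density = 15.3e9 / 610

print(f"GV100: {gv100_density / 1e6:.1f} M transistors/mm^2")  # ~25.8
print(f"GP100: {gp100_density / 1e6:.1f} M transistors/mm^2")  # ~25.1
print(f"Increase: {gv100_density / gp100_density - 1:.1%}")    # ~2.7%, i.e. under 3%
```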

RX Vega 8GB HBM2 is dead, overkilled by the Volta GV100.
GV100 will not arrive in any consumer product anytime soon, perhaps never.

And there is no "higher CPU overhead". NVIDIA just invested tons of time and resources into multithreading with DX11, which makes it look like AMD cards are more CPU intensive. But realistically, it's all drama bullshit. I've been gaming in DX11 with Radeons for years and performance was always excellent. But when people see a 3 fps difference in benchmarks, they instantly lose their s**t entirely.
One customer being "satisfied" doesn't prove anything.
Not to spawn another discussion, but stutter is one aspect where AMD still have a lot to improve in their driver.

Linux drivers are shit on both sides, so what's the argument? :D
Linux might not have the same game selection as Windows, but as anyone into professional graphics would know, Nvidia is the only vendor offering enterprise-quality drivers for Linux, and the drivers are even more stable than their Windows counterparts.
 
It's common knowledge that AMD/ATI/Radeon software and driver support sucks, for all their hardware, including CPUs and chipsets. I guess you never got the memo... oh that's right, you're the one who ignores everything negative about AMD and makes up stats that "prove" your point. Please don't stop, you are providing comic relief with your delusions. Continue obsessing over trivial BS, so we can keep laughing...

No sir, it is not common knowledge. That would be called a stigma which originated from a time when ATI was lacking in support. That was a long time ago.

bug, FYI, I am referring to Windows support, as that is the prevalent gaming platform by a vast margin.
 
That's less than a 3% increase in transistor density. The reason is that TSMC's "12 nm FinFET" is actually the same node as TSMC's "16 nm FinFET", which in turn is really TSMC's "20 nm" node. "Third-generation 20 nm" would be a fairer description, if we follow Intel's standard. If this were a real node shrink, we'd see close to a doubling in density. So remember this: TSMC's "12 nm" is not a node shrink :)


GV100 will not arrive in any consumer product anytime soon, perhaps never.


One customer being "satisfied" doesn't prove anything.
Not to spawn another discussion, but stutter is one aspect where AMD still have a lot to improve in their driver.


Linux might not have the same game selection as Windows, but as anyone into professional graphics would know, Nvidia is the only vendor offering enterprise-quality drivers for Linux, and the drivers are even more stable than their Windows counterparts.

I can tell you a lot about unexplained stutter with my GTX 980 as well. It's fine for a while and then mouse motion just feels like absolute garbage for no logical reason. But hey, what does one user mean, right?
 
We are in a thread about a datacenter GPU... Why else would I talk about programming models, APIs and developers?

Which you then quoted.

Then you may want to ask the same of everyone dragging Vega into this discussion. Vega is a consumer card designed for gaming.
 
C'mon, who calls the compute one "Vega 10"? No one. We all call it Mi25.
 
And you are expecting an AMD comeback purely off their Radeon gaming card sales?

Still haven't heard back about this claim that all markets outside gaming are irrelevant. AMD is simply uncompetitive in this regard because of how fast NV is moving.
 
C'mon, who calls the compute one "Vega 10"? No one. We all call it Mi25.

And yet it has the same Vega 10 GPU inside as the consumer card RX Vega. But granted, you are not the one who dragged _RX Vega_ into this discussion in the first place.

It's painful when a company calls their under-the-hood architecture and a consumer product by the same name. It makes it quite hard to follow the conversation.
 
Oh dear. That's like calling the GeForce Titan Xp "a graphics card, but not the ordinary one, it's the one for developers, but you know, it's not the cut-down Pascal, it's the full-fat one, but it's not only for developers, gamers can also use it for, you know, gaming, but we don't call it Pascal because that's its core codename."

That's the level of stupidity here about how a product is being called. The gaming one is RX Vega, Vega, Vega 10, the big Vega, you name it. The "professional" ones were always called either RadeonPro/FirePro or, now, Instinct (Mi25 in Vega's SKU case). No one cares what you might want to call it or how the long version of the first paragraph applies to it. It's just simple, basic communication common sense that prevents any kind of confusion. Any Vega is a gaming card, any Mi or Fire is workstation stuff. It's not rocket science to use it this way, you know. It's not like there's gonna be some other Mi25 with, I don't know, a Navi core. It'll be called something different. So why the need to overcomplicate simple things?
 