Monday, July 25th 2016

NVIDIA Accelerates Volta to May 2017?

Following the surprise TITAN X Pascal launch slated for 2nd August, it looks like NVIDIA's product development cycle is running on steroids, with reports emerging that the company is accelerating the debut of its next-generation "Volta" architecture to May 2017, on the sidelines of next year's GTC. The architecture was originally scheduled to make its debut in 2018.

Much like "Pascal," the "Volta" architecture could first debut with HPC products, before moving on to the consumer graphics segment. NVIDIA could also retain the 16 nm FinFET+ process at TSMC for Volta. Stacked on-package memory such as HBM2 could be more readily available by 2017, and could hit sizable volumes towards the end of the year, making it ripe for implementation in high-volume consumer products.
Source: WCCFTech

102 Comments on NVIDIA Accelerates Volta to May 2017?

#26
Vayra86
btarunrPascal supports feature-level 12_1 but Polaris only supports 12_0. Maybe NVIDIA driver is exposing features hardware doesn't support.
Haha yeah that is a good one :) Nvidia 'supports' that feature level just like Maxwell's GTX 970 'supports' 4GB VRAM :D I suppose that is also why they push Volta now, because Pascal's feature levels are so well done, just like Maxwell's feature levels 'were DX12 compliant'. Let's not fool each other?
Posted on Reply
#27
deu
FrickPunching AMD in the face I guess.
I don't get how so many people ignore the fact that AMD co-developed HBM2 (and gets a cut from it, and by most estimates gets first priority on supply), has better yields at the low end with the 480 and doesn't have to bleed to sell it, and wins in DX12 titles. Pair that with the fact that ALL games are now forced to be optimized for their GPUs (all future consoles are AMD), and for their CPUs as well. They've closed China and CERN as customers for their new CPU, and have a potential fab lead with Zen later this year (which, for those who have trouble foreseeing it, means it can compete in both price and stability), and they've just shown they can be profitable. Still people keep shitting on them. Their stock has gone up 180% in the last 4-5 months (yep, money in the bank). VEGA is coming in 2016 and VEGA 2 is coming in March 2017..... now I wonder why NVIDIA would be so "strange" as to move their release up to May 2017..... I wonder.... Could it be that they know what I know (and what most other people who don't just guess know), are simply reacting to VEGA 2, and see AMD as an actual competitor?

In short:

VEGA 1 will battle the 1070/1080 and VEGA 2 will battle the 1180/1080 Ti or whatever they decide to call it. VEGA 1 will most likely have GDDR5X memory, but VEGA 2 comes with HBM2 (as will NVIDIA's counter, if they can get enough chips; they already know whether or not this is the case).
Posted on Reply
#28
bug
RejZoRMy GTX 980 ain't. And it's D3D12_1, supposedly higher than 12_0 on Radeons. Also, that thing in the GTX 1000 series is highly questionable. I don't believe any of the shit NVIDIA says. They've also been promising a "driver update" that would unlock async capability on Maxwell 2, which turned out to be complete bullshit. What makes you think I'll believe the GTX 1080 has it? If I'd made a "best of the best" graphics card which actually has async now, I'd brag about it on all ends. And NVIDIA didn't even mention it anywhere; some people only started talking about it just now, like 2 months after the release of the Pascal series.
Either you didn't read what I linked or you didn't understand it: Pascal is doing async just fine, but it doesn't have enough shaders for async to make as much of a difference as it does for AMD's parts.
If Volta increases the number of shaders, then you'll reap the benefits of async. Till then, it's there because DX12 says it has to be.

PS: Your 980 precedes DX12 by almost a full year; I'm not surprised it doesn't handle everything DX12 flawlessly.
Posted on Reply
#29
rtwjunkie
PC Gaming Enthusiast
deuI don't get how so many people ignore the fact that AMD co-developed HBM2 (and gets a cut from it, and by most estimates gets first priority on supply), has better yields at the low end with the 480 and doesn't have to bleed to sell it, and wins in DX12 titles. Pair that with the fact that ALL games are now forced to be optimized for their GPUs (all future consoles are AMD), and for their CPUs as well. They've closed China and CERN as customers for their new CPU, and have a potential fab lead with Zen later this year (which, for those who have trouble foreseeing it, means it can compete in both price and stability), and they've just shown they can be profitable. Still people keep shitting on them. Their stock has gone up 180% in the last 4-5 months (yep, money in the bank). VEGA is coming in 2016 and VEGA 2 is coming in March 2017..... now I wonder why NVIDIA would be so "strange" as to move their release up to May 2017..... I wonder.... Could it be that they know what I know (and what most other people who don't just guess know), are simply reacting to VEGA 2, and see AMD as an actual competitor?

In short:

VEGA 1 will battle the 1070/1080 and VEGA 2 will battle the 1180/1080 Ti or whatever they decide to call it. VEGA 1 will most likely have GDDR5X memory, but VEGA 2 comes with HBM2 (as will NVIDIA's counter, if they can get enough chips; they already know whether or not this is the case).
Welp, it looks like AMD PR showed up! How can I tell? The zealous writing of a wall of "facts", some of them wrong.

Let's correct one of those: AMD only has priority on HBM2 from one of the fabs (Hynix, I believe).

:)
Posted on Reply
#30
PP Mguire
Vayra86Haha yeah that is a good one :) Nvidia 'supports' that feature level just like Maxwell's GTX 970 'supports' 4GB VRAM :D I suppose that is also why they push Volta now, because Pascal's feature levels are so well done, just like Maxwell's feature levels 'were DX12 compliant'. Let's not fool each other?
The irony here is that Nvidia does support those feature levels, just like the 970 actually does have 4GB of addressable RAM ;)

Sure, let's not fool each other. Volta hasn't been "pushed up"; they have supercomputer contracts that have been in the works, and AMD's little GPUs have nothing to do with it. The main page for Oak Ridge says end of 2017, which means the architecture is already done or being refined so it can be available to them in bulk by the end of next year. We get the trickle-down effect, which for those who don't whine about Nvidia is a win/win.

The funny thing about these threads going back and forth is it's quite obvious who understands the business end of it vs the ones who do nothing but post fanboy comments and speculate. "Oh Nvidia is doing this because they're scared of AMD" or "AMD pushed Vega up because the 1080 is a monster!". No, it doesn't work that way. These roadmaps that are "leaked" are planned years ahead with only minor modifications made as time goes on for whatever is happening in the industry. Don't think otherwise.
Posted on Reply
#31
Prima.Vera
This is turning into a pissing contest...
Posted on Reply
#32
ssdpro
P4-630True 4K with just a single card.
An increase in memory bandwidth, even an epic one, will have a negligible impact on actual performance. Memory bandwidth hints at potential, but horsepower in terms of shaders, texture units, render outputs and frequency will be the determining factors. With the process staying the same you will get a boost, but not the huge jump we saw from the 9xx cards to the 10xx cards.
Posted on Reply
#33
PP Mguire
Prima.VeraThis is turning into a pissing contest...
They always do.
Posted on Reply
#34
Vayra86
PP MguireThe irony here is that Nvidia does support those feature levels, just like the 970 actually does have 4GB of addressable RAM ;)

Sure, let's not fool each other. Volta hasn't been "pushed up"; they have supercomputer contracts that have been in the works, and AMD's little GPUs have nothing to do with it. The main page for Oak Ridge says end of 2017, which means the architecture is already done or being refined so it can be available to them in bulk by the end of next year. We get the trickle-down effect, which for those who don't whine about Nvidia is a win/win.

The funny thing about these threads going back and forth is it's quite obvious who understands the business end of it vs the ones who do nothing but post fanboy comments and speculate. "Oh Nvidia is doing this because they're scared of AMD" or "AMD pushed Vega up because the 1080 is a monster!". No, it doesn't work that way. These roadmaps that are "leaked" are planned years ahead with only minor modifications made as time goes on for whatever is happening in the industry. Don't think otherwise.
Oh, don't get me wrong, I understand the business end of this whole release scheme. I was merely responding to the way Nvidia handles its GPUs these days, where ever more often things are 'not entirely what it says on the box'.
Posted on Reply
#35
deu
rtwjunkieWelp, it looks like AMD PR showed up! How can I tell? The zealous writing of a wall of "facts", some of them wrong.

Let's correct one of those: AMD only has priority on HBM2 from one of the fabs (Hynix, I believe).

:)
I'm studying IT in Denmark, work at a bank, and own an NVIDIA card; (miss) You are basically just confirming that some people will say whatever gets their agenda through.

I didn't write that they could veto all HBM2 memory in the world; AMD invented HBM2 together with Hynix, and Samsung and GloFo are producing it (I looked on the net instead of just believing). Which means AMD gets money and has a deal with GloFo to make Zen, Polaris AND HBM2 (guess who has the best order with GloFo?). NVIDIA will have to rely on TSMC/Samsung to get their HBM2. I'm not saying it cannot happen, but they are going to fight the rest of the industry for it.
Posted on Reply
#36
bug
deuI'm studying IT in Denmark, work at a bank, and own an NVIDIA card; (miss) You are basically just confirming that some people will say whatever gets their agenda through.

I didn't write that they could veto all HBM2 memory in the world; AMD invented HBM2 together with Hynix, and Samsung and GloFo are producing it (I looked on the net instead of just believing). Which means AMD gets money and has a deal with GloFo to make Zen, Polaris AND HBM2 (guess who has the best order with GloFo?). NVIDIA will have to rely on TSMC/Samsung to get their HBM2. I'm not saying it cannot happen, but they are going to fight the rest of the industry for it.
Well, if you're studying IT, you may want to check who implemented XmlHttpRequest first and who put it to good use, if you think "inventing" is an argument.
Posted on Reply
#37
rtwjunkie
PC Gaming Enthusiast
deuI looked on the net instead of just believing
Hey smarta**, I looked it up too, just not this morning. So I did pretty well getting it right after reading a tiny little blurb about it several weeks ago. Next time, look your facts up before you roll out the PR machine.

And, for the record, I despise PR machine posts from both sides.
Posted on Reply
#38
bug
rtwjunkieHey smarta**, I looked it up too, just not this morning. So I did pretty well getting it right after reading a tiny little blurb about it several weeks ago. Next time, look your facts up before you roll out the PR machine.

And, for the record, I despise PR machine posts from both sides.
I'll see your despising and raise you a despise for practically any PR machine ;)
Posted on Reply
#39
Slizzo
RejZoRMy GTX 980 ain't. And it's D3D12_1, supposedly higher than 12_0 on Radeons. Also, that thing in the GTX 1000 series is highly questionable. I don't believe any of the shit NVIDIA says. They've also been promising a "driver update" that would unlock async capability on Maxwell 2, which turned out to be complete bullshit. What makes you think I'll believe the GTX 1080 has it? If I'd made a "best of the best" graphics card which actually has async now, I'd brag about it on all ends. And NVIDIA didn't even mention it anywhere; some people only started talking about it just now, like 2 months after the release of the Pascal series.
If you don't believe nVidia, then take it from the horse's mouth, the horse being Futuremark, who developed a benchmark that uses async compute and shows that both nVidia and AMD gain performance when async compute is enabled. Oh, and by the way, AMD gains roughly 2x more performance (in percentage points) than nVidia does when you compare their top products on shelves today (R9 Fury X and GTX 1080 respectively).

www.pcper.com/reviews/Graphics-Cards/Whats-Asynchronous-Compute-3DMark-Time-Spy-Controversy

Keep in mind that Futuremark works with the hardware OEMs to validate its code against their best practices. BOTH AMD and nVidia had to sign off on the work.
Posted on Reply
#40
RejZoR
That's like saying the GTX 980 has async compute then. It actually does, at queues so small they're of no practical use. What good is saying "yes, we have async" when it then does basically nothing? Like, lol?
Posted on Reply
#41
bug
RejZoRThat's like saying the GTX 980 has async compute then. It actually does, at queues so small they're of no practical use.
Dude, you don't seem to be stupid. What's so hard to understand? Both Maxwell and Pascal have async compute, as required by DX12. They just aren't packing enough shaders to actually make a difference.
Maxwell came without dynamic scheduling; Pascal has fixed that. Maxwell/Pascal won't profit from async compute because there are no idle shaders to put to work in an async manner.

That's about as far as I can dumb it down.
RejZoRWhat good is saying "yes, we have async" when it then does basically nothing? Like, lol?
Because that's all DX12 requires. DX12 is an API; it does not mandate any hardware implementation, nor does it guarantee that using the API will speed things up.
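To make the "no idle shaders" argument concrete, here is a toy back-of-the-envelope model (the function name and numbers are illustrative assumptions, not real hardware figures): async compute can only add throughput from whatever shaders the graphics workload leaves idle.

```python
def async_headroom(total_shaders, shaders_busy_with_graphics):
    """Fraction of extra throughput async compute could add in this toy
    model, where compute work may only fill shaders graphics leaves idle."""
    idle = max(total_shaders - shaders_busy_with_graphics, 0)
    return idle / total_shaders

# Plenty of idle shaders: async compute has real room to add throughput.
print(round(async_headroom(4096, 3000), 2))  # 0.27

# Shaders already saturated by graphics: async barely helps.
print(round(async_headroom(1920, 1900), 2))  # 0.01
```

The point of the sketch is only that the same API feature yields very different gains depending on how much of the shader array sits idle, which is the whole argument above.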
Posted on Reply
#42
HD64G
bugEither you didn't read what I linked or you didn't understand it: Pascal is doing async just fine, but it doesn't have enough shaders for async to make as much of a difference as it does for AMD's parts.
If Volta increases the number of shaders, then you'll reap the benefits of async. Till then, it's there because DX12 says it has to be.

PS: Your 980 precedes DX12 by almost a full year; I'm not surprised it doesn't handle everything DX12 flawlessly.
Then explain why Doom in Vulkan is so deadly for all nVidia GPUs, and only a bit less so for Pascal? A game benchmark on that is already here, and it shows everything you need to know. And Doom fully uses all the DX12 features (unlike Time Spy from 3DMark), and as an FPS it shouldn't give so much more performance to AMD GPUs. And then think of a big-scale RTS game with tens of thousands of units battling each other in Vulkan. What a massacre that would be for nVidia GPUs. Warhammer TW in DX12 is a sign of things to come as well.
Posted on Reply
#43
alucasa
PP MguireThey always do.
GPU talk is like talking politics. Neither side listens and they just bark at each other for no apparent reason.
Posted on Reply
#44
ViperXTR
Hmm, Doom

AMD OpenGL: 4.3
AMD Vulkan: 1.17.1 + AMD Shader intrinsics extensions +async compute

nVidia OpenGL: 4.5
nVidia Vulkan: 1.8.1 with no special extensions or features, just basic and older Vulkan driver

I wonder if id Software is going to do another update + drivers.
Posted on Reply
#45
the54thvoid
Super Intoxicated Moderator
HD64G....Warhammer TW DX12 is a sign of the things to come also.
Yup. Mid-tier current architecture running rings round last-gen top end.

And before anybody moans about "can't compare last gen to current gen": die shrinks allow more hardware at less power. The hardware gives the benefit, so the Fury X's 8,900 million transistors trounce the 1080's 7,200 million.

Posted on Reply
#46
efikkan
btarunrFollowing the surprise TITAN X Pascal launch slated for 2nd August...
Not to be rude or anything, but who was surprised by the new Titan? Everyone knew GP102 was coming; it taped out three months after GP104 and has been in QA for a while now. It was quite obvious that the new Titan was coming, right?
-The_Mask-Because in some aspects architecture wise Pascal's GP102 and GP104 is just a generation behind Polaris and Vega, even if the performance and performance/watt is better by nVidia. Volta will have HBM2 just like Vega and Volta will probably have better DX12 support like Polaris and Vega.
How is Pascal "behind" Polaris or Vega? It has better support for Direct3D 12, and performs better in pretty much anything but edge cases.
Posted on Reply
#47
HD64G
@the54thvoid: You proved exactly what you tried to disprove. The 1070 is matched by the Fury X in DX12, when in all DX11 games it is at least 20-30% better. So DX12 gives the architecture built for it 20-30% more versus one built for previous DX APIs. Thanks for the clarification.
Posted on Reply
#48
the54thvoid
Super Intoxicated Moderator
HD64GYou proved exactly what you tried to disprove. The 1070 is matched by the Fury X in DX12, when in all DX11 games it is at least 20-30% better. So DX12 gives the architecture built for it 20-30% more versus one built for previous DX APIs. Thanks for the clarification.
Oh dear, I made a terrible mistake, I see that.

The GTX 1080 and 1070 have only 7,200 million transistors compared to Fiji's 8,900 million. That's a nut-busting 24% increase. The Fury X has 4096 shaders to the GTX 1070's 1920 (a 113% increase in hardware). Same ROPs. Higher bandwidth.

So, you're happy that a card with the hardware prowess of the Fury X can only match the paltry hardware inside a GTX 1070? That's not impressive. Do you not see? That's worrying on AMD's side. How can the GTX 1070 even stand beside a Fury X in DX12?

If I were an AMD fanboy with a brain, I'd be worried that for all my posturing about DX12 and async, the mid range Pascal chip (GTX 1080) with the poorer async optimisations humps my beloved Fury X. I'd be worried that AMD's next step has to be a clock increase but to do that the hardware count has to suffer (relatively speaking). I'd be worried that Nvidia's top end Pascal consumer chip (GP102, Titan X) has 1000 more shaders than the GTX 1080 (which already beats everything at everything).

Your optimism is very misplaced. But that's okay, we need optimism in today's world.
Posted on Reply
#49
iO
So they haven't even released their ultra-expensive, huge-die P100 to lower-priority customers, and someone comes up with the idea that it would somehow be economically viable to release the next gen in less than a year?!

Also, gotta love how these threads always derail....
Posted on Reply
#50
bug
HD64GThen explain why Doom in Vulkan is so deadly for all nVidia GPUs, and only a bit less so for Pascal?
None of the reviews I've seen so far use the drivers required for Vulkan to work on Nvidia hardware. Some benchmarks were run when Vulkan support wasn't available for Nvidia at all.
HD64GA game benchmark on that is already here, and it shows everything you need to know. And Doom fully uses all the DX12 features (unlike Time Spy from 3DMark), and as an FPS it shouldn't give so much more performance to AMD GPUs.
Like you said above, Doom is using Vulkan, not DX12.
HD64GAnd then think of a big-scale RTS game with tens of thousands of units battling each other in Vulkan. What a massacre that would be for nVidia GPUs. Warhammer TW in DX12 is a sign of things to come as well.
Based on invalid premises, one can draw any conclusion.
Posted on Reply