# NVIDIA to Unveil "Pascal" at the 2016 Computex



## btarunr (Feb 26, 2016)

NVIDIA is reportedly planning to unveil its next-generation GeForce GTX "Pascal" GPUs at the 2016 Computex show in Taipei, scheduled for early June. This unveiling doesn't necessarily mean market availability. SweClockers reports that problems, particularly with NVIDIA supplier TSMC getting its 16 nm FinFET node up to speed, especially following the recent Taiwan earthquake, could delay market availability to late or even post-summer. It remains to be seen whether the "Pascal" architecture debuts as an almighty "GP100" chip, or as a smaller, performance-segment "GP104" that will be peddled as enthusiast-segment by virtue of being faster than the current big chip, the GM200. NVIDIA's next-generation GeForce nomenclature will also be particularly interesting to look out for, given that the current lineup is already at the GTX 900 series. 







----------



## ZoneDymo (Feb 26, 2016)

GTX Olympus


----------



## RejZoR (Feb 26, 2016)

When is Computex "airing"? Pascal interests me. A lot.


----------



## Frick (Feb 26, 2016)

When is Polaris due?


----------



## the54thvoid (Feb 26, 2016)

News elsewhere suggested a paper launch from Nvidia in early April, and the first releases may actually be mobility parts. There's a strong chance we won't see top-tier desktop parts till Q4.


----------



## FordGT90Concept (Feb 26, 2016)

Frick said:


> When is Polaris due?


Before September.


----------



## RejZoR (Feb 26, 2016)

Wasn't GTX 900 series launched in September/October timeframe? Could be around the same...


----------



## AsRock (Feb 26, 2016)

RejZoR said:


> When is Computex "airing"? Pascal interests me. A lot.



Think we're all getting eager, on the AMD or nVidia side, tbh - even more with the nm shrink and all.


----------



## medi01 (Feb 26, 2016)

For Polaris, AMD said "mid 2016", whatever that means.

PS
Heh, and it will be Samsung 14nm (AMD) vs TSMC 16nm (NV), and from what I've heard so far, at least for Apple's chips, TSMC was superior despite the "bigger" node.


----------



## rtwjunkie (Feb 26, 2016)

This seems to go right along with what I've been saying for 6 months: don't expect Pascal before the last few months of this year.  Which means, likely just as with Maxwell, the more affordable mainstream cards will not be released until Jan/Feb, just like the 960.

It's still a long time to wait for anyone that feels they need an upgrade right now.


----------



## AsRock (Feb 26, 2016)

That makes me even more happy; it means I get more value out of my 290X. Already had it over 2 years now - wow, doesn't time fly, sheesh.


----------



## EarthDog (Feb 26, 2016)

RejZoR said:


> Wasn't GTX 900 series launched in September/October timeframe? Could be around the same...


970 and 980 were 9/2014. 980ti was 6/2015...


----------



## medi01 (Feb 26, 2016)

EarthDog said:


> 970 and 980 were 9/2014. 980ti was 6/2015...


980Ti was a bump up from 980 to spoil Fury launch.

PS
Yeah, or a knock-down from the Titan, but the point is, it was an anti-Fury move.


----------



## rtwjunkie (Feb 26, 2016)

medi01 said:


> 980Ti was a bump up from 980 to spoil Fury launch.



By "bump up", do you mean the release date? That is correct.  If you mean a bumped-up version of the 980 card, that is incorrect.  The 980 Ti is a cut-down Titan X, not a bumped-up 980: the 980 Ti is a GM200 chip, the 980 is a GM204.


----------



## FordGT90Concept (Feb 26, 2016)

rtwjunkie said:


> It's still a long time to wait for anyone that feels they need an upgrade right now.


But the jump in performance is huge and comes at lower power draw.  IMHO, it's worth waiting.  I already plan on selling my R9 390 and getting a Polaris card for those very reasons.


----------



## nickbaldwin86 (Feb 26, 2016)

2018 yet..?.. I want Volta.


----------



## PP Mguire (Feb 26, 2016)

So we won't see Volta until late 2018, even though it was previously scheduled for 2017. Highly disappointing. Guess I'm getting water blocks for Quakecon after all.


----------



## RejZoR (Feb 26, 2016)

Volta is just a tiny bump from Pascal. Pascal however will be a huge bump to the Maxwell 2. My GTX 980 is fast and all, but I somehow miss AMD. I guess having several generations of AMD's finest Radeons left a mark on me. If only they weren't so god damn late with Fury cards, I'd probably be rocking one today...


----------



## PP Mguire (Feb 26, 2016)

RejZoR said:


> Volta is just a tiny bump from Pascal. Pascal however will be a huge bump to the Maxwell 2. My GTX 980 is fast and all, but I somehow miss AMD. I guess having several generations of AMD's finest Radeons left a mark on me. If only they weren't so god damn late with Fury cards, I'd probably be rocking one today...


Volta will be refined with less power draw, but the fact remains that I'll be on 4K-incapable GPUs for 2 years before I get an upgrade. This rustles my jimmies so bad, and I hope AMD comes out with something this year that kicks ass.


----------



## FordGT90Concept (Feb 26, 2016)

Polaris...does...kick...ass...it will probably debut before Pascal too.


----------



## rruff (Feb 26, 2016)

RejZoR said:


> Wasn't GTX 900 series launched in September/October timeframe? Could be around the same...



Maxwell was launched in Jan-Feb of that year with the 750. If Nvidia is focusing on mobility, the first desktop cards may be the 750 replacements.


----------



## PP Mguire (Feb 26, 2016)

FordGT90Concept said:


> Polaris...does...kick...ass...it will probably debut before Pascal too.


Yeah, in AMD's figures the power consumption does, but I'm talking actual performance. I really don't want to wait until 2017 for a GPU upgrade.


----------



## rtwjunkie (Feb 26, 2016)

rruff said:


> the first desktop cards may be the 750 replacements.



That role has been laid out for the 950SE.  Since they announced it so late in Maxwell's life as the 750 replacement, my guess is it will fill that slot long into the Pascal cycle.


----------



## TheGuruStud (Feb 26, 2016)

medi01 said:


> For Polaris, AMD said "mid 2016", whatever that means.
> 
> PS
> Heh, and it will be Samsung 14nm (AMD) vs TSMC 16nm (NV) and what I've heard so far, at least for Apple's chips, TSMC was superior, despite being "bigger".



It wasn't much of a difference and TSMC is notoriously crap on yields. A tiny arm chip is simple compared to a monolithic GPU. This is where TSMC always falters. Plus, Sammy has no doubt been refining their process.


----------



## HumanSmoke (Feb 26, 2016)

TheGuruStud said:


> It wasn't much of a difference and TSMC is notoriously crap on yields. A tiny arm chip is simple compared to a monolithic GPU. This is where TSMC always falters. Plus, Sammy has no doubt been refining their process.


Where do you come up with this stuff? Xilinx is already shipping its Zynq UltraScale+ MPSoCs made on TSMC's 16nm FF+ process. Considering it ships in up to a 1506-pin package (comparable to an upper-mainstream GPU or performance APU), I don't think it qualifies as "a tiny arm chip".


Spoiler: Package Dimensions

If Samsung are supposedly so far ahead - and I haven't seen any definitive proof that they are, either with yields or process (16nmFF vs 14nmLPE was definitely a TSMC win) - it makes you wonder why TSMC secured over two-thirds of Apple's A9 business and looks increasingly likely to be sole supplier of the A10.

If a past history of bad yields is an indication of things going forward, AMD should start ordering industrial quantities of Xanax for the Zen ramp, given GlobalFoundries' abysmal past record.


----------



## TheGuruStud (Feb 26, 2016)

HumanSmoke said:


> Where do you come up with this stuff? Xilinx is already shipping its Zynq UltraScale+ MPSoCs made on TSMC's 16nm FF+ process. Considering it ships in up to a 1506-pin package (comparable to an upper-mainstream GPU or performance APU), I don't think it qualifies as "a tiny arm chip".
> 
> 
> Spoiler
> ...



See the yields of AMD and nvidia every time a GPU launches on a new node that TSMC claimed was ready. 

And your proof is more arm?


----------



## HumanSmoke (Feb 26, 2016)

TheGuruStud said:


> See the yields of AMD and nvidia every time a GPU launches on a new node that TSMC claimed was ready.


Name a foundry that hasn't had ramp issues on a new process. You hold up Samsung as some process leader, yet what have they commercially produced on 14nm that wasn't "a tiny arm chip", as you put it?


TheGuruStud said:


> And your proof is more arm?


You are going to tell me that a BGA-1506 package is "a tiny arm chip" again?


----------



## rruff (Feb 26, 2016)

rtwjunkie said:


> That role has been laid out for the 950SE.  Since they announced it so late in Maxwell's life as the 750 replacement, my guess is it will fill that slot long into the Pascal cycle.



You may be right, but I think we won't wait long even if it's not the first to be introduced. Nvidia wants to keep competing in the laptop dGPU market (which they currently dominate by a huge margin), and AMD has said they will introduce Polaris for this market this year. This is where power consumption is super important, so it makes sense to use the latest architecture. That's why the 750s were the first to get Maxwell: it's the desktop version of the GTX 840-860M and 940-960M, which are all GM107 chips. It would be easy to do the same for Pascal, and it would make marketing sense if AMD uses Polaris in the low-end gaming market.


----------



## ArdWar (Feb 27, 2016)

HumanSmoke said:


> Name a foundry that hasn't had ramp issues on a new process. You hold up Samsung as some process leader yet what have they commercially produced on 14nm that wasn't "a tiny arm chip" as you put it?
> 
> You are going to tell me that a BGA-1506 package is "a tiny arm chip" again



Package size, and in this case pin count, doesn't necessarily correlate to die size and complexity. An ASIC with many integrated peripherals could have a very low pin count relative to its complexity, while a general-purpose chip might be pad-limited (it runs out of pin area before die area).

Nevertheless, the Zynq is a hybrid FPGA+SoC, and if I'm not mistaken its much bigger brother, the Virtex FPGA, has also started shipping. FPGAs are probably second only to GPUs in sheer number of transistors.


----------



## HumanSmoke (Feb 27, 2016)

ArdWar said:


> Package size, and in this case pin count, doesn't necessarily correlated to die size and complexity.


That should be a given.
I would also have thought people could think laterally and use the package dimensions to get an approximate size of the die - as shown at the beginning of some of Xilinx's promotional videos and product literature - which shows that the die is still comfortably larger than the ~100mm² ARM chips currently in production at Samsung. Bear in mind the Zynq SKU shown below is one of the smaller-die UltraScale+ chips.






ArdWar said:


> Nevertheless Zynq is hybrid FPGA+SoC, and if I not mistaken their much bigger brother Virtex FPGA also start shipping. FPGA's probably only second to GPU in sheer number of transistors.


True enough. The Virtex-7/-7XT is a pretty big FPGA on TSMC's 28nm (and TSMC's 65nm for the FPGA's interposer). The die is ~375-385mm² with 6.8 billion transistors - basically the size of a performance GPU or enthusiast CPU, but with a greater transistor density than either.


Spoiler


----------



## ManofGod (Feb 27, 2016)

Wood screws? Just a paper-launch reveal? Actual release date and possible performance numbers? $1000 initial cost? 24GB of RAM on a Titan version would be cool, but not really doable I suppose.


----------



## FordGT90Concept (Feb 27, 2016)

AMD is likely to have up to 32 GB on HBM2 which translates to 8 GB per stack.  NVIDIA will likely offer the same.
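That capacity figure is simple per-stack math. A back-of-the-envelope sketch (HBM2 spec-level numbers; the four-stack layout is an assumption based on Fiji's design, not a confirmed configuration):

```python
# Rough HBM2 capacity/bandwidth math; the 4-stack layout is assumed.
GB_PER_STACK = 8        # HBM2 tops out at 8 GB per stack (8-high, 8 Gb dies)
GBS_PER_STACK = 256     # 1024-bit interface at 2 Gb/s per pin = 256 GB/s
STACKS = 4              # Fiji-style interposer with four stacks (assumed)

capacity_gb = STACKS * GB_PER_STACK     # the "up to 32 GB" figure
bandwidth_gbs = STACKS * GBS_PER_STACK  # aggregate bandwidth in GB/s
print(capacity_gb, bandwidth_gbs)
```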


----------



## newtekie1 (Feb 27, 2016)

btarunr said:


> It remains to be seen if the "Pascal" architecture debuts as an all-mighty "GP100" chip, or a smaller, performance-segment "GP104" that will be peddled as enthusiast-segment over being faster than the current big-chip, the GM200.



This will come down entirely to how AMD's Polaris performs.  If we get a repeat of AMD's last few launches, then nVidia's mid-range GP104 will match or beat AMD's top end, and we won't see GP100 until AMD's next generation.  But I'm hoping AMD manages to pull a rabbit out of their hat with Polaris and we finally see the second-from-the-top GP104 at $300 like it should be.

The sad truth is, nVidia has likely banked a crap ton of money by selling mid-range GPUs for $500 that would normally have sold for no more than $200 in the past.  And when they finally have to release their high-end chip, they sell it for $650+.  AMD hasn't had this luxury, and we all know cash isn't something they have a lot of; this gives nVidia a very nice advantage.


----------



## Steevo (Feb 27, 2016)

The mid-range and low-power segments are where the real money is. Smaller chips mean more per wafer, and a larger product stack if a few out of the batch have flaws. I'm thinking the days of high-performance, large-die initial offerings are over. Perhaps they will put out enough to get reviews of the big ones if they are smart, and then roll out the rest as their primary movers - the $100-300 performance segment. Considering the recent Steam hardware survey shows that mid-range cards lead the pack, it's their best option.


----------



## FordGT90Concept (Feb 27, 2016)

Polaris, like Pascal, is likely to be twice as fast as cards available today using less power.  28nm to 14/16nm is a huge jump, as the slide in the OP shows.

Polaris is expected to have "up to 18 billion transistors" where Pascal has about 17 billion.


I still think the only reason Maxwell can best Fiji is that Maxwell's async compute is half software, half hardware, where AMD's is all hardware.  Transistors that went into making async compute work in GCN were instead spent increasing compute performance in Maxwell.  It's not clear whether Pascal has a complete hardware implementation of async compute.

As with all multitasking, there is an overhead penalty.  So long as you aren't using async compute (which not much software does, regrettably), Maxwell will come out ahead because everything is synchronous.


I think the billion transistor difference comes from two areas: 1) AMD is already familiar with HBM and interposers.  They knew the exact limitations they were facing walking into the Polaris design so they could push the limit with little risk.  2) 14nm versus 16nm so more transistors can be packed into the same space.

Knowing the experience Apple had with both processes, it seems rather likely that AMD's 14nm chips may run hotter than NVIDIA's 16nm chips.  This likely translates to lower clocks but, with more transistors, more work can be accomplished per clock.

I think it ends up being very competitive between the two.  If Samsung improved their process since Apple's contract (which they should have, right?), AMD could end up with a 5-15% advantage over NVIDIA.


----------



## rruff (Feb 27, 2016)

FordGT90Concept said:


> Polaris, like Pascal, is likely to be twice as fast as cards available today using less power.  28nm to 14/16nm is a huge jump, as the slide in the OP shows.



Isn't this like going from Sandy Bridge to Broadwell? Only if you'd had a few years to tweak Sandy to get the most out of it.

I think people expecting a 2x jump will be disappointed. A small increase in performance (~20%) with a bigger reduction in power consumption would be more like it. And don't expect any of it to be cheap.


----------



## qubit (Feb 27, 2016)

Hail Hydra!


----------



## FordGT90Concept (Feb 27, 2016)

rruff said:


> Isn't this like going from Sandy Bridge to Broadwell? Only if you'd had a few years to tweak Sandy to get the most out of it.
> 
> I think people expecting a 2x jump will be disappointed.


No, because Intel has been making processors smaller and smaller, relatively speaking:




Also bear in mind that Intel has been making the cores smaller and increasing the size of the GPU with each iteration.  Each generation is a tiny bit faster... and cheaper for Intel to produce.

In GPUs, the physical dimensions stay more or less the same (right now, limited by the interposer):


----------



## rruff (Feb 27, 2016)

Isn't that only relevant to the highest end chip? That is a niche market, and it won't be cheap. Granted Intel has had no competition lately, while Nvidia has at least a little. At the end of the day what 99% of us care about is FPS/$ and FPS/W, not absolute FPS for the biggest chip. Big gains in FPS/$ will only occur if there is fierce competition. We have only 2 players and one is hanging on by a thread. I don't see it happening, but it would be cool if it did.


----------



## FordGT90Concept (Feb 27, 2016)

The non-cut-down Pascal and Polaris chips will no doubt run for at least $600 USD, but that's normal.  AMD has a card that competes with NVIDIA at every price point except the Titan Z, but that's coming with the Fury X2.


AMD is not "hanging by a thread" in the graphics department.  They have 20% market share in the discrete card market and 100% of the console market.


----------



## HumanSmoke (Feb 27, 2016)

FordGT90Concept said:


> Polaris, like Pascal, is likely to be twice as fast as cards available today using less power.  28nm to 14/16nm is a huge jump, as the slide in the OP shows.





FordGT90Concept said:


> Polaris is expected to have "up to 18 billion transistors" where Pascal has about 17 billion. ....I think the billion transistor difference comes from two areas: 1) AMD is already familiar with HBM and interposers.  They knew the exact limitations they were facing walking into the Polaris design so they could push the limit with little risk.  2) 14nm versus 16nm so more transistors can be packed into the same space.


Both figures come from an extrapolation (a guesstimate) done by 3DCenter. The transistor-count extrapolation is based almost entirely on TSMC's 16nmFF product blurb:


> TSMC's 16FF+ (FinFET Plus) technology can provide above 65 percent higher speed, around 2 times the density, or 70 percent less power than its 28HPM technology.


All 3DC did was basically double Fiji's and GM200's counts, deduct the uncore that was represented twice, and in Pascal's case add a ballpark figure for the added FP64 units they knew would be included.
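That extrapolation can be sketched in a few lines. Only the Fiji (~8.9B) and GM200 (~8.0B) totals are real; the uncore and FP64 budgets below are my own illustrative placeholders, so the outputs are ballpark only:

```python
# 3DCenter-style transistor extrapolation; UNCORE and FP64_EXTRA are
# made-up placeholder values, only FIJI and GM200 are real counts.
FIJI = 8.9e9        # Fiji (Fury X) transistor count
GM200 = 8.0e9       # GM200 (980 Ti / Titan X) transistor count
UNCORE = 1.5e9      # hypothetical uncore (memory PHY, display, etc.)
FP64_EXTRA = 2.0e9  # hypothetical budget for Pascal's added FP64 units

def extrapolate(base, extra=0.0, uncore=UNCORE):
    # "~2x the density" per TSMC's 16FF+ blurb: double the shader array,
    # keep a single copy of the uncore, then add any known extras.
    return 2 * (base - uncore) + uncore + extra

polaris_est = extrapolate(FIJI)               # ~16.3e9
pascal_est = extrapolate(GM200, FP64_EXTRA)   # ~16.5e9
print(f"Polaris ~{polaris_est/1e9:.1f}B, Pascal ~{pascal_est/1e9:.1f}B")
```

With different placeholder values the results shift by billions either way, which is why the "up to 18 billion" and "about 17 billion" figures should be read as guesstimates rather than leaks.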


FordGT90Concept said:


> Knowing the experience Apple had with both processes, it seems rather likely that AMD's 14nm chips may run hotter than NVIDIA's 16nm chips.  This likely translates to lower clocks but, with more transistors, more work can be accomplished per clock.


Yes, the comparisons between Samsung- and TSMC-manufactured Apple A9s would tend to bear that out. It is the nature of GCN that the "always on" nature of its hardware will further add to the imbalance - although if Nvidia adapt their thread-scheduling technology (Hyper-Q) to work with the graphics pipeline, the power-requirement differences could still come down to the foundry of manufacture. I am still not convinced that all of AMD's GPUs will be built on 14nm. It really wouldn't surprise me if the flagship was a TSMC product. The added bonus for AMD would be that since TSMC could also provide the interposer, the supply chain can be somewhat more vertically integrated.


FordGT90Concept said:


> In GPUs, the physical dimesions stay more or less the same (right now, limited by interposer)


The limiting factor for GPU size is the reticle limit of the lithography tools at ~625mm², which is why GPUs sit at around 600mm², to allow a reasonable keep-out space between dies for cutting. It is also the reason why large chips like the 20+ billion transistor Virtex UltraScale XCVU440 and its smaller FPGA brethren are made in "slices" then mounted on a common interposer. The interposer itself can be both smaller than the package (as per Fiji) and not that limited in size to begin with. There are companies putting out larger interposers than the 1101mm² that UMC uses for Fiji. Bear in mind that the TSMC-manufactured XCVU440 package is 55mm x 55mm and sits squarely atop an interposer made by the same company.


----------



## newtekie1 (Feb 27, 2016)

FordGT90Concept said:


> AMD is not "hanging by a thread" in the graphics department. They have 20% market share in the discrete card market and 100% of the console market.



100% of the console market doesn't help when they had to bid so low just to get the contracts that they aren't making any money on the deal.


----------



## FordGT90Concept (Feb 28, 2016)

They wouldn't be making them if they couldn't turn a profit.  Yeah, it likely isn't much per unit, but all of the console developers are intimately familiar with AMD GPUs.  AMD doesn't need to pad developer pockets to make them use their stuff like NVIDIA does.

The console market could turn into a huge boon for AMD as more developers use async compute.  Xbox One has 16-32 compute queues while the Playstation 4 has 64.  Rise of the Tomb Raider may be the only game to date that uses them for volumetric lighting.  This is going to increase as more developers learn to use the ACEs.  As these titles are ported to Windows, NVIDIA cards may come up lacking (depends on whether or not they moved async compute to the hardware in Pascal).


Then again, the reason why NVIDIA has 80% of the market while AMD doesn't is because of shady backroom deals.  AMD getting the performance lead won't change that.


----------



## rruff (Feb 28, 2016)

FordGT90Concept said:


> Then again, the reason why NVIDIA has 80% of the market while AMD doesn't is because of shady backroom deals.



Really? Nothing to do with hardware?


----------



## FordGT90Concept (Feb 28, 2016)

AMD cards have been competitive with NVIDIA cards, dollar for dollar, for decades.

AMD's processors bested Intel processors from K6 to K8.  Their market share grew during that period but they didn't even come close to overtaking Intel.  It was later discovered Intel did shady dealings of their own (offering rebates to OEMs that refused to sell AMD processors) and AMD won a lawsuit that had Intel paying AMD.

It's all about brand recognition.  People recognize Intel and, to a lesser extent, NVIDIA.  Only tech junkies are aware of AMD.  NVIDIA, like Intel, is in a better position to broker big deals with OEMs.


----------



## newtekie1 (Feb 28, 2016)

FordGT90Concept said:


> AMD cards have been competitive with NVIDIA cards, dollar for dollar, for decades.



No, they haven't.  When AMD took the lead they overpriced their cards and failed to provide good value.  The 7950 was a good card, but they overpriced it at launch.  The 290 was too late to the game, and nVidia just cut prices to beat it.  And the 970 was the card to beat for almost a year before AMD answered it with the 390 - and the 390 still didn't best the 970's price-to-performance when it launched.  Then everyone was biting their lips waiting for the Fury Nano; that had to be the card to best the 970, right?  Nope.  Sure, it was faster than the 970, but they priced the damn thing at more than double the price of the 970, giving it one of the worst price/performance values, second only to the Titan X.  If you go back through the reviews on TPU, there are not a lot of times where AMD is leading in price/performance, but there are a lot of times nVidia is.

They have missed some pretty good opportunities.  The Nano could have been great if they hadn't overpriced it (the Fury X too).  The Nano at $450 at launch would have flown out the door.  The 390 is a decent contender now, but now is too late.  The 390 needed to be on the market 4 months sooner than it was, and cost $20 less than the 970, not $20 more.  You don't gain market share by simply matching what your competitor has had on the market for a few months; you have to offer the consumer something worth switching to.


----------



## FordGT90Concept (Feb 28, 2016)

And the Titan isn't overpriced now?  Cards selling for >$500 aren't exactly volume movers.

HD 7950 -> R9 280(X)

The R9 290X has been out since 2013; the GTX 970 didn't come for another year.  2014 and 2015 were crappy years for cards, not because of what AMD and NVIDIA did but because both were stuck on TSMC 28nm.  The only difference is that NVIDIA debuted a new architecture while AMD didn't do much of anything.

Fiji is an expensive chip.  They couldn't sell Nano on the cheap because the chip itself is not cheap.

The 390 is effectively a 290 with bumped clocks and 8 GiB of VRAM (which only a handful of applications at ridiculous resolutions can even reach).  The 390, all things considered, is about on par with the 290X, which is only about a 13% difference.  Not something to write home about.


----------



## newtekie1 (Feb 28, 2016)

FordGT90Concept said:


> And the Titan isn't overpriced now?  Cards selling for >$500 aren't exactly volume movers.



Titan is a niche product - in fact it is a stupid product - so I'm ignoring it.  But just because nVidia has one outrageously overpriced product doesn't mean the rest of their portfolio is overpriced too.  And you are exactly right, $500+ products aren't volume movers.  That is why I talked about the 970, the 390, and the 290.  They are in that sweet spot of price, beyond which you start to spend a heck of a lot more money for a little more performance.  That is why I said pricing the Nano at $450 instead of $650 is what AMD needed to do.  AMD basically made Fiji, their first new GPU in almost 2 years, completely irrelevant to the market.  Even the regular Fury was overpriced.  There is no way $550 was a good price point for it.  It would have turned heads at $400, but at $550 there was no reason to buy it over the cheaper 980 or the much, much cheaper 970.


----------



## HumanSmoke (Feb 28, 2016)

FordGT90Concept said:


> AMD cards have been competitive with NVIDIA cards, dollar for dollar, for decades.
> AMD's processors bested Intel processors from K6 to K8.  Their market share grew during that period but they didn't even come close to overtaking Intel.  It was later discovered Intel did shady dealings of their own (offering rebates to OEMs that refused to sell AMD processors) and AMD won a lawsuit that had Intel paying AMD.
> It's all about brand recognition.  People recognize Intel and, to a lesser extent, NVIDIA.  Only tech junkies are aware of AMD.  NVIDIA, like Intel, is in a better position to broker big deals with OEMs.


That is a very blinkered view of the industry IMO.
AMD did have a superior product in K7 and K8 and were competitive during that era - and for certain weren't helped by Intel's predatory practices (nor were Cyrix, C&T, Intergraph, Seeq, and a whole bunch of other companies). It is also a fact that AMD were incredibly slow to realize the potential of their own product. As early as 1998 there were doubts about the company's ability to fulfill contracts and supply the channel, and while the cross-licence agreement with Intel allowed AMD to outsource 20% of x86 production, Jerry Sanders refused point blank to do so. By the time shortages were acute, the company poured funds they could ill afford into developing Dresden's Fab 36 at breakneck speed and cost, rather than just outsourcing production to Chartered Semi (which they eventually did, way too late in the game), or UMC, or TSMC. AMD never took advantage of the third-party provision of the x86 agreement past 7% of production when sales were there for the taking. The hubris of Jerry Sanders and his influence on his lapdog Ruiz was as true in the early years of the decade as it was when AMD's own ex-president and COO, Atiq Raza, reiterated the same thing in 2013.

As for the whole Nvidia/AMD debate, that is less about hardware than the entire package. Nearly twenty years ago ATI was content to just sell good hardware, knowing that a good product sells itself - which was a truism back in the day, when the people buying hardware were engineers for OEMs rather than consumers. Nvidia saw what SGI was achieving with a whole ecosystem (basically the same model that served IBM so well until Intel started dominating the big-iron markets), allied with SGI - and then were gifted the pro graphics area in the lawsuit settlement between the two companies - and reasoned that there was no reason they couldn't strengthen their own position in a similar manner. Cue 2002-2003, and the company began design of the G80 and a defined strategy of pro software (CUDA) and gaming (TWIMTBP). The company are still reaping the rewards of a strategy defined 15 years ago.

Why do people still buy Nvidia products? Because they laid down the groundwork years ago, and many people were brought up with the hardware and software - especially via boring OEM boxes and TWIMTBP splash screens at the start of games. AMD could have made massive inroads into that market, but shortsightedness in cancelling ATI's own gaming program basically put the company back to square one in customer awareness, all because AMD couldn't see the benefit of a gaming development program or of actively sponsoring OpenCL. Fast forward to the last couple of years and the penny has finally dropped, but it is always tough to topple a market leader if that leader basically delivers - and I'm talking about delivering to the vast majority of customers, OEMs and the average user that just uses the hardware and software, not a minority of enthusiasts whose presence barely registers outside of specialized sites like this.

Feel free to blame everything concerning AMD's failings on outside influences and big bads in the tech industry. The company has fostered exactly that image. I'm sure the previous cadre of deadwood in the boardroom, collecting compensation packages for 10-14 years during AMD's slow decline, appreciate having a built-in excuse for not having to perform. It's just a real pity that it's the enthusiast that pays for the laissez-faire attitude of a BoD that were content not to have to justify their positions.


----------



## rruff (Feb 28, 2016)

FordGT90Concept said:


> AMD cards have been competitive with NVIDIA cards, dollar for dollar, for decades.



I'll choose AMD over Intel and Nvidia unless there is a sound reason to do otherwise. No AMD for me the last few years. They aren't competitive on the stuff I'm interested in, and I'm not buying high end. My computers are on all the time, and I use them to play movies and video. AMD's high power consumption more than erases their cost advantage in GPUs. In CPUs they are both slow and power hungry. 

If that changes I'll be more than happy to go with AMD.


----------



## FordGT90Concept (Feb 28, 2016)

AMD has always been horrible at marketing and branding.  AMD has also repeatedly made very bad decisions (like the one to acquire ATI in the first place, when their position in the CPU market had a very grim outlook).  A lot of it does fall on AMD itself and their desire not to change the status quo.  At least Zen brings some hope that the culture at AMD is changing... but that remains to be seen.


----------



## vega22 (Feb 28, 2016)

I think the earthquake that hit TSMC shows that God loves AMD and that Nvidia is the work of the devil :lol:


*runs away*


----------



## HumanSmoke (Feb 28, 2016)

vega22 said:


> i think the earthquake that hit tsmc shows that god loves amd and that nvidia is the work of the devil :lol:
> *runs away*


The earthquake supposedly affected Fab 14, which is a 20nm SoC plant making the Apple A9, other ARM ICs, and FPGAs.
Fab 14(B) - the 16nmFF+ extension fab where GPUs would be manufactured - was back up and running quickly.

Techpowerup covered the story but neglected to research what processes the fabs actually use for their production. Maybe "TSMC Damaged by Earthquake, Could Impact AMD and Nvidia GPU Production" would get more page views than "TSMC Damaged by Earthquake, Will Impact Apple A9 Production".

"God" might just be evening the score. Visit a natural disaster on Nvidia to offset all the man-made disasters that AMD have had visited upon them and have initiated themselves. If that is the case then AMD better pray for a higher level of divine intervention I suspect.


----------



## medi01 (Feb 28, 2016)

FordGT90Concept said:


> AMD is likely to have up to 32 GB on HBM2 which translates to 8 GB per stack.  NVIDIA will likely offer the same.


Dafuq would one need that much RAM for??? 8k gaming?



rruff said:


> AMD's high power consumption more than erases their cost advantage in GPUs.


Talking about the current gen: bar Fury (which has better perf/watt), it's a 40-80 W difference (+20% total system power consumption) while being ~10% faster. A fair trade, and more than competitive, to say the least.
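For anyone who wants to put numbers on the power argument, here's a quick sketch (the 60 W delta, 24/7 uptime, and $0.12/kWh price are all assumed figures, not anyone's measurements):

```python
# Back-of-envelope: yearly running cost of a GPU power-draw delta.
# All inputs are illustrative assumptions.
watts_delta = 60          # extra draw of the hungrier card, in watts
hours_per_year = 24 * 365
price_per_kwh = 0.12      # USD, assumed electricity price

extra_kwh = watts_delta * hours_per_year / 1000
extra_cost = extra_kwh * price_per_kwh
print(f"{extra_kwh:.0f} kWh -> ${extra_cost:.2f} per year")
```

So on an always-on machine the delta is real money over a card's lifetime; for a few hours of gaming a day it mostly isn't.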



HumanSmoke said:


> Why do people still buy Nvidia products? Because they laid down the groundwork years ago...


Such as effectively bribing developers to cripple performance on competitors' products, and pushing even more crap from time to time, from PhysX (which at least has its uses) to G-Sync (which is a stinky piece of shit).
Groundwork, my ass...
A GPU should be not too noisy and not too power hungry, and it should run games smoothly. I've just listed ALL the points average gamers care about.
I could add "oh, and it should last", but then NV definitely cannot boast that, can it? And I'm not talking about DX12.


----------



## the54thvoid (Feb 28, 2016)

medi01 said:


> A GPU should be not too noisy and not too power hungry, and it should run games smoothly. I've just listed ALL the points average gamers care about.
> I could add "oh, and it should last", but then NV definitely cannot boast that, can it? And I'm not talking about DX12.



By that definition, AMD's Hawaii architecture was an abortion.  Nvidia haven't messed up on those criteria since the GTX 480 bacon maker, which was when I went AMD with 5850s - nearly the best bang for buck (IMO) I've ever bought. It's a shame that in Crossfire they broke the third rule you listed: run games smoothly.
So Hawaii fixed rule 3 with PCI-e Crossfire but annihilated 1 & 2.
Post-GTX 480, Nvidia haven't really dropped any design balls: 580, 680, 780, 780 Ti, 980 Ti. What they have done is create an entirely weird (but not new) price structure with pseudo-professional Titan parts. The Titan Z, I will acquiesce, was hysterically awful. 
People can berate Nvidia cards if they want to live in a red-misted wonderland, but Nvidia make good gaming hardware. And you can't blame GameWorks: there are many AMD titles that play fastest on Nvidia hardware, and on brand-agnostic games Nvidia still performs better. Let's not delude ourselves.

As for being OT: if Pascal is found to be lacking in async (if the warp scheduling is still serialized and not parallelized), then yes, later this year we may well see a situation where AMD clearly wins some titles while those that don't rely on async go Nvidia's way.  I just hope both brands have cards good enough to avoid game-specific performance problems, even when one is far better. A situation where a top-tier card underperforms in a game because of an architecture deficit, switching title to title, would be disastrous.


----------



## Ithanul (Feb 28, 2016)

rtwjunkie said:


> This seems to go right along with what I've been saying for 6 months: don't expect Pascal before the last few months of this year.  Which means, likely just like with Maxwell, the more affordable mainstream cards will not be released until Jan/Feb, just like 960.
> 
> It's still a long time to wait for anyone that feels they need an upgrade right now.


Indeed, that's the reason I'm waiting it out again like I did with Maxwell, then waiting for Pascal to hit the used market.  I'm a cheap bugger; all my cards are second-hand.  
At least this means I'll get a good year or two out of these Tis before the next Ti shows up.



FordGT90Concept said:


> Polaris...does...kick...ass...it will probably debut before Pascal too.


I sure hope so; I would love to play around with an AMD card again.  But right now they suck majorly for folding compared to Maxwell cards.  I kind of miss folding on the 7970 I had, but that thing ran very hot 24/7 on an air cooler.  Hell, I'll give it this: it was a tank, considering it handled running at 70-75°C 24/7 with an OC on it.




medi01 said:


> Dafuq would one need that much RAM for??? 8k gaming?



No kidding - unless you're doing some crazy high-res renderings or something.  I couldn't care less about a buttload of RAM since 4K and 8K don't interest me.  Give me a good high core clock, overclocking headroom, and performance/watt, and I'll be happy as a lark folding the crap out of the cards.


----------



## FordGT90Concept (Feb 28, 2016)

With that kind of VRAM, they could load all of a game's textures into VRAM and leave them there, freeing up system RAM for whatever else they want.
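Rough, hypothetical numbers make the point - a sketch of how far 32 GB goes on compressed textures (the BC7 format, the 4K texture size, and the mip overhead are all illustrative assumptions):

```python
# How many 4K x 4K BC7-compressed textures fit in a given VRAM budget?
# BC7 stores 1 byte per texel; a full mip chain adds roughly one third on top.
def texture_bytes(width, height, bytes_per_texel=1, mips=True):
    base = width * height * bytes_per_texel
    return base * 4 // 3 if mips else base  # ~1.33x for the mip chain

vram_budget = 32 * 1024**3          # hypothetical 32 GB HBM2 card
per_texture = texture_bytes(4096, 4096)
print(vram_budget // per_texture)   # on the order of 1,500 large textures
```

Even granting the card some of that budget for the framebuffer and geometry, that's a whole game's texture set resident at once.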


----------



## rtwjunkie (Feb 28, 2016)

the54thvoid said:


> By that definition, AMD's Hawaii architecture was an abortion.  Nvidia haven't messed up on those criteria since the GTX 480 bacon maker, which was when I went AMD with 5850s - nearly the best bang for buck (IMO) I've ever bought. It's a shame that in Crossfire they broke the third rule you listed: run games smoothly.
> So Hawaii fixed rule 3 with PCI-e Crossfire but annihilated 1 & 2.
> Post-GTX 480, Nvidia haven't really dropped any design balls: 580, 680, 780, 780 Ti, 980 Ti. What they have done is create an entirely weird (but not new) price structure with pseudo-professional Titan parts. The Titan Z, I will acquiesce, was hysterically awful.
> People can berate Nvidia cards if they want to live in a red-misted wonderland, but Nvidia make good gaming hardware. And you can't blame GameWorks: there are many AMD titles that play fastest on Nvidia hardware, and on brand-agnostic games Nvidia still performs better. Let's not delude ourselves.
> ...



Calm, cool, thoughtful and insightful response. Thank you.


----------



## PP Mguire (Feb 28, 2016)

FordGT90Concept said:


> The non-cutdown Pascal and Polaris chips will no doubt run for at least $600 USD but that's normal.  AMD has a card that competes with NVIDIA at every price point except Titan-Z but that's coming with the Fury X2.
> 
> 
> AMD is not "hanging by a thread" in the graphics department.  They have 20% marketshare in the discrete card market and 100% of the console market.


The 295X2 made easy work of the Titan Z; the Z was a really stupid card. That's why I own the 295X2 and not the Titan Z - it's just a flat embarrassment.


----------



## HumanSmoke (Feb 28, 2016)

medi01 said:


> Such as effectively bribing developers to cripple perf on competitor's product and pushing even more crap time from time, from PhysX (which at least has its uses) to G-Sync (which is a stinky piese of shit).


PhysX and game dis/optimization have no bearing on the vast majority of users, or on the OEMs who sign big contracts. That should have been apparent when I stipulated as much:


HumanSmoke said:


> but it is always tough to topple a market leader if that leader basically delivers - *I'm talking about delivering to the vast majority of customers - OEMs and the average user that just uses the hardware and software* - not a minority of enthusiasts whose presence barely registers outside of specialized sites like this.


...and regardless of how you view G-Sync (and more than a few people don't share your view), OEMs seem quite happy to market it, customers seem quite happy to use it, and from a marketing point of view being first to market counts for a lot. Adaptive Sync/Freesync is cheaper, but like G-Sync it isn't a mainstream user priority.


medi01 said:


> Groundwork, my ass...


Yet it's a fact that Nvidia's game development program raised the company's profile (especially when personal computing in general, and PC gaming in particular, was going through its expansion phase), so apropos of your comment, I'm going to say you are talking out of your one-eyed orifice.
I'd also ask you: if you think the game dev software R&D has no merit, why would AMD spend resources copying the features? The ideas behind GeForce Experience, frame pacing, G-Sync, Shadowplay, Optimus and a host of other features of varying merit have all been appropriated by AMD. If they don't add value to the brand, do you think AMD are just pathological imitators? Most neutral observers would note that the features help sell the brand and the hardware, and AMD would be foolish not to exploit any opportunity to advance both.


----------



## the54thvoid (Feb 28, 2016)

PP Mguire said:


> The 295x2 made easy work of the Titan-Z. The Z was a really stupid card. It's why I own the 295x2 and not the Titan-Z because the Z is just a flat embarrassment.



Frankly, the best iteration of the whole Asus uber-card line was this one.  







Absolutely gorgeous piece of hardware.  If Nvidia had allowed the board partners to do the same with the Titan Z (and, of course, not priced it so high), that round would have been awesome.


----------



## PP Mguire (Feb 28, 2016)

the54thvoid said:


> Frankly the best iteration of the whole Asus uber cards was this one.
> 
> 
> 
> ...


Yeah, ultimately cost and super-low clock speeds made the Titan Z a turd. It's stupid of Nvidia not to let AIBs play with the Titan-class cards.


----------



## mcraygsx (Feb 28, 2016)

Is it safe to say we will see a 1080 Ti or a Titan version in Summer 2017 and not before? Mind the naming scheme.


----------



## DarthBaggins (Feb 28, 2016)

Really can't wait for big Pascal and the next line from AMD - I'm a fan of both manufacturers, as each has their strong points.  Also, AMD got me back into the PC gaming race.


----------



## medi01 (Feb 28, 2016)

the54thvoid said:


> By that definition, AMD's Hawaii architecture


I recall the 780 Ti used to be faster than the 290X? Ironic.



the54thvoid said:


> Nvidia make good gaming hardware


Yes, and that's good. But then the proprietary "only me" crap comes with it, which is bad for the market as a whole.



HumanSmoke said:


> OEMs seem quite happy to market it, customers seem quite happy to use it


So they were with Prescott (all of them - OEMs and customers). The point is moot.

I didn't quite get what exactly you meant when talking about "delivering"... Delivering what?



HumanSmoke said:


> Adaptive Sync/Freesync is cheaper, but like G-Sync it isn't a mainstream user priority.


It isn't cheaper, it comes free (with most scaler chips). G-Sync users pay $100 for something whose only point is to lock out NV's competitors. Oh, and it limits monitors to a single input port. Very convenient.



HumanSmoke said:


> Yet it's fact that Nvidia's game development program..


PhysX was BOUGHT and forcefully made exclusive. At best it's "NV bought a game development program".
Then you slap another nice sum on top to bribe devs to use it, and, yay, it's sooo good for customers.


----------



## PP Mguire (Feb 28, 2016)

mcraygsx said:


> Is it safe to say we will see 1080 Ti or Titan version in Summer 2017 and not before ? Mind the naming scheme.


My money is on this time next year, and I hope I'm wrong.



medi01 said:


> I recall 780 Ti used to be faster than 290x? Ironic.
> 
> 
> *Yes. And that's good. But then there comes proprietary "only me" crap with it, which is bad for the market as a whole.*
> ...


G-Sync brought the tech to the table when FreeSync was vapor at conventions. You have to be daft not to realize G-Sync pushed the tech into consumers' hands and made FreeSync come out much quicker.

Newer G-Sync monitors have more than one input.


----------



## HumanSmoke (Feb 28, 2016)

PP Mguire said:


> G-Sync brought the tech to the table when FreeSync was vapor at conventions. You have to be daft not to realize G-Sync pushed the tech into consumers' hands and made FreeSync come out much quicker.


QFT, although I suspect any reasoned argument is lost on medi01. He seems to have lost the plot of the thread he jumped into - which was about the various companies' positions in their respective markets and how they arrived at the present situation.


medi01 said:


> PhysX was BOUGHT and forcefully made exclusive. At best it is "NV bought game development program" Then you slap another nice sum to bribe devs to use it, and, yay, it's sooo good for customers.


So what? The philosophical debate over the ethics of PhysX doesn't alter the fact that Nvidia used its game development program to further its brand awareness. They are two separate arguments. Do me a favour - if you're quoting me, at least make your response relevant to what is being discussed.


----------



## rruff (Feb 29, 2016)

AI is driving development of GPUs: http://www.itworld.com/article/2898...fold-performance-jump-with-next-gpu-tech.html

"Nvidia said it will offer up to 32GB of RAM per GPU. This will allow for up to five times better performance in what Nvidia calls "deep learning applications" which are applications capable of gathering data and learning to recognize patterns or images. It's also a sign that this card will be for high performance computing, as the majority of video cards have just 2GB of memory."


http://blogs.nvidia.com/blog/2016/02/23/pratt-gtc-toyota/
"And GPUs, which are key to training a new generation of machines with superhuman capabilities, are at the center of this story (see “Accelerating AI with GPUs: A New Computing Model”)."

Someone is willing to pay a lot of money for this stuff...


----------



## HumanSmoke (Feb 29, 2016)

rruff said:


> AI is driving development of GPUs: http://www.itworld.com/article/2898...fold-performance-jump-with-next-gpu-tech.html


Ten times the performance is easily doable just by reinstating the 1:3 FP64 rate of GK110/210 (as opposed to Maxwell's 1:32). Pascal also has verified half-precision (FP16) support - and if it also has quarter-precision (FP8) support, that would more than do it. FP16 at least has some gaming application, unlike double precision.
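The arithmetic behind that, sketched out (the 6 TFLOPS FP32 baseline is an assumed example figure, not any real SKU):

```python
# Theoretical FP64 throughput from an FP32 figure and the FP64:FP32 rate.
def fp64_tflops(fp32_tflops, ratio):
    """ratio is the FP64 rate as a fraction of the FP32 rate."""
    return fp32_tflops * ratio

fp32 = 6.0                                # assumed FP32 TFLOPS for the example
maxwell_like = fp64_tflops(fp32, 1 / 32)  # 1:32 rate -> 0.1875 TFLOPS FP64
kepler_like = fp64_tflops(fp32, 1 / 3)    # 1:3 rate  -> 2.0 TFLOPS FP64
print(f"{kepler_like / maxwell_like:.1f}x")
```

Going from 1:32 back to 1:3 is a ~10.7x jump in double-precision throughput before any clock or core-count changes, which is why the "ten times" figure is plausible from the rate change alone.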


----------



## cyneater (Feb 29, 2016)

So when are they going to release Turbo Pascal... and Delphi... bom tish


----------



## medi01 (Feb 29, 2016)

HumanSmoke said:


> So what?


So the world would have been better had nVidia NOT bought it. 

There was NO NEED for G-Sync the way it was done; there was nothing special about variable refresh rate - that stuff was already there in notebooks (which is why it didn't take AMD long to counter). The only drive (and wasted money) was to come out with some "only me, only mine!!!" shit, nothing else. 

Had it been a common, open standard, it would have pushed the market forward a lot. But no, we have crippled "only this company" shit now. Thanks, great progress.

It's great to have more than one competitive player in the market. It sucks when they play dirty, the way nVidia does.

Strong-arm politics all over the place, on all fronts: XFX, hell, ANAND BLOODY TECH - punished, taught a lesson, and next time it's a cherry-picked overclocked Fermi against a stock AMD card. And that's only the VISIBLE part of it; who fucking knows what's going on underneath.


----------



## HumanSmoke (Feb 29, 2016)

medi01 said:


> So that world would have been better, had nVidia NOT bought it.
> 
> There was NO NEED in GSync the way it was done, there was nothing special about variable refresh rate, that stuff was already there in notebooks (that's why it didn't take AMD long to counter). The only drive (and wasted money) was to come out with some "only me, only mine!!!" shit, nothing else.
> 
> ...


----------



## Frick (Feb 29, 2016)

the54thvoid said:


> Frankly the best iteration of the whole Asus uber cards was this one.
> 
> 
> 
> Absolutely gorgeous piece of hardware.  If Nvidia had allowed the board partners to do the same to the Titan Z (and of course not be so expensive) that round would have been awesome.



I loved the Ares series (wasn't there another company that made similar things, btw? Very high-end, premium dual cards) - dual-GPU cards were always a hoot to me - but the latest one really was something else.


----------



## HumanSmoke (Feb 29, 2016)

Frick said:


> I loved the Ares series (wasn't there another company that made some similar things btw? very high end, premium dual cards) - dual GPU cards always were a hoot to me - but the latest one really was something else.


Non-reference dual cards? They've been a staple for a while - Asus's Ares and Mars cards are just the latest iterations. Both Gigabyte and Asus offered dual GeForce 6000 series boards, as did GeCube with its X1650 XT Dual and EVGA with a few models. As for premium dual waterblocked cards, a few AIBs slap a Swiftech or EK block on their products - even hybrid coolers go back a fair way


----------



## Frick (Feb 29, 2016)

HumanSmoke said:


> Non-reference dual cards? They've been a staple for a while. Asus's Ares and Mars cards are just the latest iterations. Both Gigabyte and Asus offered dual GF 6000 series boards, GeCube's X1650 XT Dual, and a few EVGA models. As for premium dual waterblocked cards, a few AIB's slap a Swiftech or EK block on their products - even hybrid coolers go back a fair way



Yeah, I know, but I meant a series of them. It might have been Mars I was thinking about... for some reason I was thinking of Sapphire.


----------



## medi01 (Feb 29, 2016)

HumanSmoke said:


> Whaaaaaaa....


Talking about arguments lost on opponents - ironic.

A company does NOT need to strong arm journalists and suppliers to build great products.
A company does NOT need to force proprietary APIs to build great products.

You referred to shitty practices as if they were something good (for customers) and worth following. 
No, they clearly aren't.


----------



## rtwjunkie (Feb 29, 2016)

medi01 said:


> It sucks when they play dirty, the way nVidia does.



That, plus all the similar comments.  What I find amusing is that you are so naive as to imagine AMD are somehow a model company.  

Frankly, your idealized and warped view of the business world does nothing but show you to be out of your element. You expect perfection and exaggerate the negatives, blowing normal business practices up into a nefarious scheme to spread "evil".

LMFAO


----------



## medi01 (Feb 29, 2016)

rtwjunkie said:


> I find amusing is that you are so very naive to imagine AMD are somehow a model company.


Strong-arm politics only work if you have a dominant market position. AMD, being a permanent underdog, cannot do such things even if it wanted to - which doesn't mean they wouldn't if they could.



rtwjunkie said:


> ...normal business practices...


Normal as in "everyone does it"? Or "the way it should be"? Or "I don't give a flying f**k"?
Make up your mind.

There are countries where the "normal" things nVidia did to XFX are illegal.


----------



## HumanSmoke (Feb 29, 2016)

Frick said:


> Yeah I know but I meant a series of them. It might have been Mars I was thinking about.. For some reason I was thinking about Sapphire.


The only Sapphire premium dual-GPU cards I can think of were the Toxic version of the HD 5970 - they and XFX released fairly pricey 4 GB versions - and the HD 4870 X2 Atomic.


medi01 said:


> Talking about arguments lost on opponents, ironic.


It's not irony. You are the only one involved in the argument you are making.


medi01 said:


> A company does NOT need to strong arm journalists and suppliers to build great products.
> A company does NOT need to force proprietary APIs to build great products.


That has absolutely nothing to do with the points being made by me and others. You are right, Nvidia and Intel don't have to do these things to build great products. It is also a *FACT* that both Intel and Nvidia are the respective market leaders based on strategies that DO leverage these practices, among other facets of their business. Whether they NEED to or not is immaterial to the point being made; it is simply historical fact that this is part of how they got where they are. You can argue all day about the rights and wrongs, but it has no bearing on the position they occupy. Squealing about injustice doesn't retroactively change the totals in the account books.


medi01 said:


> You referred to shitty practices as if they were something good (for customers) and worth following.


No I didn't. You are so caught up in your own narrative that you don't understand that some people can view the industry dispassionately, in historical context. Not everyone is like you, eager to froth at the bung at the drop of a hat and turn the industry into some personal crusade. Stating fact isn't condoning a practice. By your reasoning, any fact-based article or book about a distasteful event in human history (i.e. armed conflict) means the authors automatically condone the actions of the combatants.
Let's face it, from your posting history you just need any excuse, however tenuous, to jump onto the soapbox. Feel free to do so, but don't include quotes and arguments that have no direct bearing on what you are intent on sermonizing about.


rtwjunkie said:


> That plus all the similar comments.  What I find amusing is that you are so very naive to imagine AMD are somehow a model company.


Presumably this model company's dabbling in price fixing (which continued for over a year after AMD assumed control of ATI), posting fraudulent benchmarks for fictitious processors and deliberately out-of-date Intel benchmarks, being hit for blatant patent infringements, and a host of other dubious practices don't qualify.
Underdog = Get out of Jail Free.
Market Leader = Burn in Hell.


----------



## FordGT90Concept (Feb 29, 2016)

medi01 said:


> So that world would have been better, had nVidia NOT bought it.
> 
> There was NO NEED in GSync the way it was done, there was nothing special about variable refresh rate, that stuff was already there in notebooks (that's why it didn't take AMD long to counter). The only drive (and wasted money) was to come out with some "only me, only mine!!!" shit, nothing else.
> 
> ...


Had G-Sync not come out, we wouldn't have external adaptive sync today.  It likely wouldn't have appeared until DisplayPort 1.3 (coming with Pascal/Polaris) and HDMI 2.1 (no date known).  The 1.2a and 2.0a specifications exist because AMD, VESA, and the HDMI Forum couldn't wait 3-4 years to compete with G-Sync.

It's a lot like AMD pushing out Mantle before Direct3D 12 and Vulkan.



Edit: It should also be noted that Ashes of the Singularity now uses async compute, and NVIDIA cards take a fairly severe performance penalty (25% in the case of Fury X versus 980 Ti) because of it:
http://www.techpowerup.com/reviews/Performance_Analysis/Ashes_of_the_Singularity_Mixed_GPU/4.html

GCN never cut corners on async compute (now exposed through Direct3D 12)--these kinds of numbers should go back to the 7950 when async compute is used.  The only reason NVIDIA came out ahead in the last few years is that developers didn't use it.  One could speculate why that is.  For example, because NVIDIA is the segment leader, did developers avoid it because 80% of cards sold wouldn't perform well with it?  There could be more nefarious reasons, like NVIDIA recommending developers not use it (I wonder if any developers would step forward with proof of that).  Oxide went against the grain and did it anyway.  The merits of hardware support for something software doesn't use could be argued but, at the end of the day, the capability has been in GCN hardware for years and NVIDIA chose to forgo it in the name of better performance when it is not used.

For their part, Oxide did give NVIDIA ample time to fix it but a software solution is never going to best a hardware solution.


----------



## BiggieShady (Feb 29, 2016)

FordGT90Concept said:


> Edit: It should also be noted that Ashes of the Singularity now uses async compute, and NVIDIA cards take a fairly severe performance penalty (25% in the case of Fury X versus 980 Ti) because of it:
> http://www.techpowerup.com/reviews/Performance_Analysis/Ashes_of_the_Singularity_Mixed_GPU/4.html
> 
> GCN never cut corners on async compute (now exposed through Direct3D 12)--these kinds of numbers should go back to the 7950 when async compute is used. The only reason NVIDIA came out ahead in the last few years is that developers didn't use it. One could speculate why that is. For example, because NVIDIA is the segment leader, did developers avoid it because 80% of cards sold wouldn't perform well with it? There could be more nefarious reasons, like NVIDIA recommending developers not use it (I wonder if any developers would step forward with proof of that). Oxide went against the grain and did it anyway. The merits of hardware support for something software doesn't use could be argued but, at the end of the day, the capability has been in GCN hardware for years and NVIDIA chose to forgo it in the name of better performance when it is not used.
> ...



Here is a nice read that should clear things up: http://ext3h.makegames.de/DX12_Compute.html
In a nutshell, both architectures benefit from async compute: GCN profits most from many small, highly parallelized compute tasks, while Maxwell 2 profits most from batching async tasks just as if they were draw calls.
When it comes to async compute, the GCN architecture is more forgiving and more versatile; Maxwell needs more specialized optimization to extract peak performance (or even using DX12 only for graphics and CUDA for all compute).
I'm just hoping Nvidia makes the necessary async compute changes in Pascal, because of all the future lazy console ports.
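That batching-vs-parallel difference can be illustrated with a toy frame-time model (all timings are made-up numbers; real queue scheduling is far messier than this):

```python
# Toy model: async compute lets compute work hide inside idle time on the
# graphics queue instead of running after it.
def frame_time(graphics_ms, compute_ms, idle_ms, overlapped):
    if not overlapped:
        # Serialized: compute simply runs after graphics.
        return graphics_ms + compute_ms
    # Overlapped: compute soaks up the idle gaps first; leftovers serialize.
    return graphics_ms + max(0.0, compute_ms - idle_ms)

g, c, idle = 12.0, 4.0, 3.0   # assumed per-frame milliseconds
serial = frame_time(g, c, idle, overlapped=False)     # 16.0 ms
overlap = frame_time(g, c, idle, overlapped=True)     # 13.0 ms
print(f"{(serial / overlap - 1) * 100:.0f}% faster")
```

In this picture, GCN's many independent compute queues behave like a large effective `idle_ms` (lots of gaps get filled automatically), while Maxwell needs the work batched up front to realize a similar win.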


----------



## FordGT90Concept (Feb 29, 2016)

Anandtech said:
			
		

> Update 02/24: NVIDIA sent a note over this afternoon letting us know that *asynchronous shading is not enabled in their current drivers*, hence the performance we are seeing here. Unfortunately they are not providing an ETA for when this feature will be enabled.



And no - the Anandtech review shows NVIDIA only loses with async compute enabled (0 to -4%), while AMD ranged from -2% to +10%:





The divide gets even crazier at higher resolutions and quality settings:


----------



## nem (Mar 1, 2016)

lies and more lies.. :B


----------



## rtwjunkie (Mar 1, 2016)

nem said:


> lies and more lies.. :B



Please provide empirical evidence, testing, or references to show your post is anything but trolling.


----------



## HumanSmoke (Mar 2, 2016)

rtwjunkie said:


> Please provide empirical evidence, testing, or references to show your post is anything but trolling.


Nvidia confirmed six weeks ago at GTC that they wouldn't pursue Vulkan development for Fermi-based cards (page 55 of the PDF presentation). With many people upgrading, many Fermi cards underpowered for future games (and most having only 1 GB or 2 GB of vRAM), and the current profile of gaming shifting to upgrades (as the new JPR figures confirm, with enthusiast card sales doubling over the last year), they decided to concentrate on newer architectures. Realistically, only the GTX 580 maintains any degree of competitiveness with modern architectures... so while nem isn't wrong about the state of support, he still continues to troll threads with unrelated content. Hardly surprising when even the trolls at wccftech label him a troll of the highest order - not sure if that's an honour at wccf or the lowest form of life. The subject is too boring for me to devote fact-finding time to.


----------



## rtwjunkie (Mar 2, 2016)

HumanSmoke said:


> Nvidia confirmed that they wouldn't pursue Vulkan development for Fermi-based cards six weeks ago at GTC (page 55 of the PDF presentation). With many people upgrading and many Fermi cards being underpowered for future games (as well as most having a 1GB or 2GB vRAM capacity) and the current profile of gaming shifting to upgrades ( as the new JPR figures confirm with enthusiast card sales doubling over the last year), they decided to concentrate on newer architectures. Realistically, only the GTX 580 maintains any degree of competitiveness with modern architectures....so nem, while not trolling the veracity of support, still continues to troll threads with unrelated content. Hardly surprising when even trolls at wccftech label him a troll of the highest order - not sure if that's an honour at wccf or the lowest form of life. The subject is too boring for me to devote fact-finding time to.



Ok, thanks for an intelligent response. I'm so used to, and weary of, his trolling that I can't tell when he's not.


----------



## the54thvoid (Mar 2, 2016)

rtwjunkie said:


> Ok, thanks for an intelligent response. I'm so used to and weary of his trolling I can't tell when he's not.



Must be that weird allusion to free speech being misconstrued as a right in privately owned forums. It's quite poor of the admins to continually allow troll posts and posters to continue.
I'm all for reasoned, if somewhat biased, viewpoints from either side, but seriously, some members should be banned. TPU's tolerance of trolls is a sign of misguided liberalism; troll posts are damaging to a site's reputation.


----------



## medi01 (Mar 2, 2016)

HumanSmoke said:


> By your reasoning, any fact based article or book of a distasteful event in human history (i.e. armed conflict) means that the authors automatically condone the actions of the combatants.


Stating FACTS isn't. Voicing assessments - such as "the Soviets bombed the hell out of Berlin, which was great, since it made room for modern housing" - is. 



HumanSmoke said:


> if you think that the game dev software R&D has no merit


No, I never implied that. 
A cross-platform, PhysX-like API could have pushed the market forward. Each hardware company would need to invest in implementing it on its platform, and game developers could use it for CORE mechanics in games. 
Grab it, make it proprietary, and suddenly it can only be used for a bunch of meaningless visual effects.

There isn't much to add to that, though, you clearly think the latter is good for the market, I think it is bad, these are just two opinions, not facts. Let's leave it at that.


----------



## HumanSmoke (Mar 2, 2016)

medi01 said:


> There isn't much to add to that, though, you clearly think the latter is good for the market


What a load of bullshit.
Show me any post in this thread where I've voiced the opinion that proprietary standards are good for the market. 

You are one very ineffectual troll


----------



## medi01 (Mar 2, 2016)

HumanSmoke said:


> Show me any post in this thread where


Post #67 in this very thread.


----------



## HumanSmoke (Mar 2, 2016)

medi01 said:


> HumanSmoke said:
> 
> 
> > Show me any post in this thread where I've voiced the opinion that proprietary standards are good for the market.
> ...


You really don't have a clue, do you?    Nowhere in that post did I say anything about proprietary standards being good for the market. The post concerned Nvidia's strategy - a FACT, not an opinion...


HumanSmoke said:


> QFT, although I suspect any reasoned argument is lost on medi01. He seems to have lost the plot of the thread he jumped on - which was about the various companies position in their respective markets and how they arrived .
> So what? The philosophical debate over the ethics of PhysX doesn't alter the fact that Nvidia used its gaming development program to further its brand awareness. They are two mutually exclusive arguments. Do me a favour - if you're quoting me at least make your response relevant to what is being discussed.



You should spend some time trying to understand what is posted before answering. I'd suggest popping for a basic primer.


I'd actually make an attempt to report your trolling, but 1. as @the54thvoid noted, the threshold must be quite high, and 2. I'm not sure you aren't just lacking basic reading skills rather than trolling.


----------



## the54thvoid (Mar 2, 2016)

medi01 said:


> PhysX was BOUGHT and forcefully made exclusive. At best it is "NV bought game development program".
> Then you slap another nice sum to bribe devs to use it, and, yay, it's sooo good for customers.



Just for the hell of it, let's use 100% reason.

PhysX was exclusive before Nvidia bought it. You had to buy an Ageia PhysX card to run it (that made it a hardware-exclusive technology). Even then, Ageia had bought NovodeX, who had created the physics processing chip. They didn't do very well with their product. That was the issue: without Nvidia buying it, PhysX as devised by Ageia was going nowhere. Devs wouldn't code for it because few people bought the add-on card. Great idea, zero market traction. Nvidia and ATI were looking at developing physics processing as well. So Nvidia acquired Ageia instead, to use its tech and, in doing so, push it to a far larger audience with, as @HumanSmoke points out, a better gaming and marketing relationship with card owners and developers alike.



> At best it is "NV bought game development program



is logically false. NV bought Ageia, a company with a physical product and IP rights to sell. NV then used its own game development program to help push PhysX.

As for bribing devs: it's not about bribing. You assist them financially to build a feature that might help sell the game, and a dev won't use a feature unless it adds to the game. Arguably, PhysX doesn't bring much to the table anyway, although in today's climate, particle modelling using PhysX combined with async compute would be lovely.

All large companies will invest in smaller companies if it suits their business goals. Buying Ageia and letting all relevant Nvidia cards use its IP was a great way to give a much larger audience access to PhysX, albeit Nvidia owners only. In the world of business you do not buy a company and then share the fruits with your competitor; shareholders would not allow it. Nvidia and AMD/ATI are not charitable trusts, they are owned by shareholders who require dividends. In the same way that Nvidia holds PhysX, each manufacturer also has its own architecture-specific IP. They aren't going to help each other out.

Anyway, enough of reason. The biggest enemy of the PC race is the console developers and software publishing houses, not Nvidia. In fact, without Nvidia pushing and AMD reacting (and vice versa), the PC industry would be down the pan. So whining about how evil Nvidia is does not reflect an accurate understanding of how strongly Nvidia is propping up PC gaming. Imagine if AMD stopped focusing on discrete GPUs and only worked on consoles. Imagine what would happen to PC development then. Nvidia would have to fight harder to prove how much we need faster, stronger graphics.


----------



## BiggieShady (Mar 2, 2016)

the54thvoid said:


> Arguably, Physx doesn't bring too much to the table anyway although in todays climate


Let's not forget how far the PhysX SDK has advanced since the x87 fiasco in 2010. The latest version, PhysX SDK 3.x, has full multithreading and SIMD optimizations and is one of the fastest solutions currently available.
My point is that devs choose PhysX because it runs well across all CPU architectures. Yes, even AMD and ARM.
On the GPU side, PhysX has grown into the entire GameWorks program: everything is optimized for Nvidia's architecture, which is the worst-case scenario for AMD's architecture, and locked into prebuilt DLLs that come with the cheapest licence. If you want to optimize for AMD, you have to buy an expensive licence to get the source code. My take is that's a dick move when you already have 80% of the market, but also a necessary one when you consider the consoles are 100% AMD.


----------



## FordGT90Concept (Mar 2, 2016)

Devs only choose PhysX because NVIDIA sponsored the title/engine. PhysX is rarely, if ever, seen outside of sponsorship.

If you don't have an NVIDIA GPU, there is no hardware acceleration.  Because of this, PhysX is only used in games for cosmetic reasons, not practical reasons.  If they used PhysX for practical reasons, the game would break on all systems that lack an NVIDIA GPU.  PhysX is an impractical technology which goes to my previous point that it is only used where sponsorship is involved.

Most developers have written their own physics code inside their respective engines. Case in point: Frostbite. About the only major engine that still uses PhysX is Unreal Engine, and as per the above, most developers on Unreal Engine code on the assumption that there is no NVIDIA card.


Edit: Three games come to mind as relying heavily on physics: Star Citizen, BeamNG.drive, and Next Car Game: Wreckfest. The first two are on CryEngine. None of them use PhysX.
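The reasoning above (a game must not break on systems without an NVIDIA GPU, so accelerated physics can only be cosmetic) can be sketched abstractly. This is a purely illustrative Python sketch; none of the names below come from any real engine or from the PhysX API. The pattern is simply: gameplay-critical simulation always runs on the universal CPU path, and accelerated effects are additive extras that must not change game state when skipped.

```python
# Illustrative sketch (hypothetical names, not a real engine or the PhysX API):
# why GPU-only physics acceleration ends up cosmetic-only in shipped games.

def simulate_frame(world: dict, gpu_physx_available: bool) -> dict:
    """Advance one frame; returns which physics paths actually ran."""
    ran = {"gameplay": False, "cosmetic": False}

    # Gameplay physics (collisions, vehicle dynamics) must behave the same
    # on every machine, so it always takes the CPU path.
    world["time"] = world.get("time", 0.0) + 1.0 / 60.0
    ran["gameplay"] = True

    # Cosmetic effects (debris, cloth, smoke) run only when acceleration is
    # present; omitting them must leave gameplay state untouched.
    if gpu_physx_available:
        world["particles"] = world.get("particles", 0) + 1000
        ran["cosmetic"] = True

    return ran

# Two machines, with and without acceleration: the extra effects differ,
# but the gameplay-relevant state stays identical.
world_a, world_b = {"time": 0.0}, {"time": 0.0}
simulate_frame(world_a, gpu_physx_available=True)
simulate_frame(world_b, gpu_physx_available=False)
assert world_a["time"] == world_b["time"]
```

If the accelerated branch ever touched gameplay state, the game would diverge or break on non-NVIDIA systems, which is exactly why sponsored titles confine it to visuals.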


----------



## BiggieShady (Mar 2, 2016)

GPU PhysX and the rest of GameWorks are what they are: locked, sponsored, and heavily optimized for Nvidia's architecture... some of it in CUDA, some in DirectCompute; it's a mess, and all cosmetics. What I'm saying is that their CPU PhysX SDK is good and popular, and runs on all architectures. Every game built on Unreal Engine or Unity 3D uses it.


----------



## HumanSmoke (Mar 2, 2016)

the54thvoid said:


> NV bought Ageia - a company with a physical object to sell (IP rights).  NV used their own game development program to help push Physx.


As ATI and later AMD would have done had they actually bought Ageia, rather than spending two years trying to publicly lowball the company. It is no coincidence that Richard Huddy - then head of ATI/AMD's game dev program - was the one repeatedly talking about acquiring the company, rather than AMD's CEO, CTO, or CFO.


the54thvoid said:


> Nvidia and ATI were looking at developing processing physics as well.


Yes. ATI had hitched their wagon to Intel's star: HavokFX was to be the consummation of their physics marriage. Then AMD acquired ATI, which broke off the engagement; Intel swallowed up Havok and proceeded to play along with AMD's pipe dream of HavokFX, to the tune of zero games actually using it.


the54thvoid said:


> All large companies will invest in smaller companies if it suits their business goal.  So buying Ageia and allowing all relevant Nvidia cards to use it's IP was a great way to give access to Physx to a much larger audience, albeit Nvidia owners only.  In the world of business you do not buy a company and then share your fruits with your competitor.


[sarcasm] Are you sure about that? AMD acquired ATI - didn't AMD make ATI's software stack, such as Avivo/Avivo HD, free to Nvidia, Intel, S3, SiS, etc.? [/sarcasm]


FordGT90Concept said:


> Devs only choose PhysX because NVIDIA sponsored the title/engine.  PhysX is rarely/never seen outside of sponsorship.


Very much agree. Game developers are a lazy bunch of tightwads, if the end result (unpatched) is any indication. A vendor willing to make life easier for them with support (and this doesn't just apply to PhysX) will in all likelihood have dev studios signed up before the sales pitch is halfway through.


----------



## medi01 (Mar 11, 2016)

the54thvoid said:


> Just for the hell of it, let's use 100% reason.


Ok. Since the HeSaidSheSaidIDidn'tMeanThatHereIsAPersonalInsultToProveIt in this thread is already annoying enough, could you please confirm that I got you right:

1) PhysX was proprietary anyway, so nVidia did no harm in that regard. On the contrary, a much wider audience now had access to PhysX. Shareholders would not understand it if nVidia had a codepath for AMD GPUs.
2) What nVidia bought was basically the owner of a funny, useless card (next to no market penetration) that could do "physics computing". There was some know-how in it, but NV actually used its own game development program to push PhysX.
3) Paying devs to use your software, which runs well on your hardware but has a terrible impact on a competitor's hardware, is not bribing; it's "assisting them financially to make a feature of a game that might help sell it".
4) Consoles are the main enemy of the PC world.
5) If AMD quit the discrete desktop GPU market altogether, nVidia "would have to fight harder to prove how much we need faster, stronger graphics".


----------



## FordGT90Concept (Mar 11, 2016)

Where Ageia didn't have the resources to bribe developers to implement their code, NVIDIA does; therein lies the problem.

NVIDIA wasn't interested in Ageia's hardware. They wanted the API, which acted as middleware and executed on dedicated hardware; NVIDIA modified it to execute on x86/x64/CUDA. In response to NVIDIA snatching PhysX, Intel snatched Havok. If memory serves, Intel was going to work with AMD on HavokFX, but the whole thing fell apart.

Pretty sure the decision to buy Ageia came out of NVIDIA's GPGPU/CUDA work. NVIDIA had given the scientific community a reason to buy their cards, but not the game development community. Ageia was their door into locking developers (and consumers) into NVIDIA hardware. Needless to say, it worked.

Consoles always have been and always will be simplified, purpose-built computers. I wouldn't call them "enemies," because they represent an audience that makes games possible that wouldn't exist if there were only PC gaming (Mass Effect comes to mind, as do the size, scope, and scale of Witcher 3 and GTA5).

I don't buy argument #5 at all. NVIDIA would likely double the price of GPUs at each tier, and that's about it. The market always needs better performance (e.g. VR and 4K gaming, laser scanning and 3D modeling).


----------

