Friday, February 26th 2016
NVIDIA to Unveil "Pascal" at the 2016 Computex
NVIDIA is reportedly planning to unveil its next-generation GeForce GTX "Pascal" GPUs at the 2016 Computex show in Taipei, scheduled for early June. This unveiling doesn't necessarily mean market availability. SweClockers reports that problems, particularly with NVIDIA supplier TSMC getting its 16 nm FinFET node up to speed in the wake of the recent Taiwan earthquake, could delay market availability to late- or even post-Summer. It remains to be seen whether the "Pascal" architecture debuts as an all-mighty "GP100" chip, or as a smaller, performance-segment "GP104" that will be peddled as enthusiast-segment by virtue of being faster than the current big chip, the GM200. NVIDIA's next-generation GeForce nomenclature will also be particularly interesting to look out for, given that the current lineup is already at the GTX 900 series.
Source:
SweClockers
97 Comments on NVIDIA to Unveil "Pascal" at the 2016 Computex
Frankly, your idealized and warped view of the business world does nothing but show you to be out of your element. You expect perfection and exaggerate the negatives, with normal business practices being blown up into a nefarious scheme to spread "evil".
LMFAO
Make up your mind.
There are countries where the "normal" things nVidia did to XFX are illegal.
Let's face it, from your posting history you just need any excuse, however tenuous, to jump onto the soapbox. Feel free to do so, but don't include quotes and arguments that have no direct bearing on what you are intent on sermonizing about. Presumably this model company's dabbling in price fixing (which continued for over a year after AMD assumed control of ATI), posting fraudulent benchmarks for fictitious processors and deliberately out-of-date Intel benchmarks, being hit for blatant patent infringements, and a host of other dubious practices don't qualify.
Underdog = Get out of Jail Free.
Market Leader = Burn in Hell.
It's a lot like AMD pushing out Mantle before Direct3D 12 and Vulkan.
Edit: It should also be noted that Ashes of the Singularity now uses async compute and NVIDIA cards take a fairly severe performance penalty because of it (25% in the case of Fury X versus 980 Ti):
www.techpowerup.com/reviews/Performance_Analysis/Ashes_of_the_Singularity_Mixed_GPU/4.html
GCN never cut corners on async compute (now a headline Direct3D 12 feature)--these kinds of numbers go back to the 7950 when async compute is used. The only reason NVIDIA came out ahead in the last few years is that developers didn't use async compute. One could speculate about why that is. For example, because NVIDIA is the segment leader, did developers avoid using it because 80% of cards sold wouldn't perform well with it? There could be more nefarious reasons, like NVIDIA recommending developers not use it (I wonder if any developers would step forward with proof of this). Oxide went against the grain and did it anyway. The merits of having hardware support for something software doesn't use can be argued but, at the end of the day, GCN has had the hardware for years, the feature is now part of the Direct3D 12 specification, and NVIDIA decided to ignore it in the name of better performance when it is not used.
For their part, Oxide did give NVIDIA ample time to fix it but a software solution is never going to best a hardware solution.
In a nutshell, both architectures benefit from async compute: GCN profits most from many small, highly parallelized compute tasks, while Maxwell 2 profits most from batching async tasks as if they were draw calls.
When it comes to async compute, the GCN architecture is more forgiving and more versatile; Maxwell needs more specific optimizations to extract peak performance (or even using DX12 only for graphics and CUDA for all compute :laugh:).
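For anyone curious what "async compute" actually looks like from the API side, here is a minimal D3D12 sketch (not from anyone's post here; purely illustrative code with made-up variable names, and command list recording omitted). It only shows the dedicated compute queue and the fence that coordinates it with the graphics queue; whether the two queues genuinely overlap is up to the GPU's scheduler, which is exactly where GCN and Maxwell 2 differ.

```cpp
// Minimal sketch of D3D12 async compute: a dedicated COMPUTE queue next to the
// graphics (DIRECT) queue, synchronized with a fence. Windows only; link d3d12.lib.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cassert>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    // Default adapter; feature level 11_0 is the D3D12 minimum.
    assert(SUCCEEDED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                       IID_PPV_ARGS(&device))));

    // Graphics ("direct") queue: accepts draw, compute and copy work.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    assert(SUCCEEDED(device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue))));

    // Dedicated compute queue: dispatches submitted here *may* run concurrently
    // with the graphics queue; hardware that cannot schedule them in parallel
    // simply gets no overlap (and possibly extra synchronization overhead).
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    assert(SUCCEEDED(device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue))));

    // A fence is how the queues coordinate: compute signals a value when its
    // dispatches finish, and the graphics queue waits on it before consuming results.
    ComPtr<ID3D12Fence> fence;
    assert(SUCCEEDED(device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence))));

    computeQueue->Signal(fence.Get(), 1); // compute work done -> fence reaches 1
    gfxQueue->Wait(fence.Get(), 1);       // graphics stalls here until the fence hits 1

    return 0;
}
```

The API itself only exposes the queues; it is the hardware (GCN's ACEs versus Maxwell's batching) that decides how much of that submitted work actually runs concurrently.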
I'm just hoping Nvidia will make the necessary changes to async compute with Pascal, because of all the lazy console ports to come.
The divide even gets crazier with higher resolutions and quality:
I'm all for reasoned, if somewhat biased, viewpoints from either side but seriously, some members should be banned. TPU's tolerance of trolls is a sign of misguided liberalism. Troll posts are damaging to a site's reputation.
A cross-platform, PhysX-like API could push the market forward. Each hardware company would need to invest in implementing it on its platform, and game developers could use it for CORE mechanics in games.
Grab it, make it proprietary, and suddenly it can only be used for a bunch of meaningless visual effects.
There isn't much to add to that, though: you clearly think the latter is good for the market, I think it is bad. These are just two opinions, not facts. Let's leave it at that.
Show me any post in this thread where I've voiced the opinion that proprietary standards are good for the market.
You are one very ineffectual troll :roll:
I'd actually make an attempt to report your trolling, but 1. as @the54thvoid noted, the threshold must be quite high, and 2. I'm not sure you aren't just lacking basic reading skills rather than trolling.
PhysX was exclusive before Nvidia bought it. You had to buy an Ageia PhysX card to run it (that made it a hardware-exclusive technology). Even then, Ageia had bought NovodeX, whose physics engine became the basis of PhysX. They didn't do very well with their product. That was the issue: without Nvidia buying it, PhysX, as devised by Ageia, was going nowhere - devs wouldn't code for it because few people bought the add-on card. Great idea - zero market traction. Nvidia and ATI were looking at developing physics processing as well. So, Nvidia acquired Ageia instead to use its tech and, in doing so, push it to a far larger audience with, as @HumanSmoke points out, a better gaming and marketing relationship with card owners and developers alike. The claim that "NV bought Ageia - a company with a physical object to sell (IP rights)" and simply "used their own game development program to help push PhysX" is logically false.
As far as bribing devs goes - it's not about bribing devs. You assist them financially to make a feature of a game that might help sell it. A dev won't use a feature unless it adds to the game. Arguably, PhysX doesn't bring too much to the table anyway, although in today's climate, particle modelling using PhysX and async compute combined would be lovely.
All large companies will invest in smaller companies if it suits their business goals. So buying Ageia and allowing all relevant Nvidia cards to use its IP was a great way to give access to PhysX to a much larger audience, albeit Nvidia owners only. In the world of business you do not buy a company and then share your fruits with your competitor. Shareholders would not allow it. Nvidia and AMD/ATI are not charitable trusts - they are owned by shareholders who require payment of dividends. In a similar way to Nvidia holding PhysX, each manufacturer also has its own architecture-specific IP. They aren't going to help each other out.
Anyway, enough of reason. The biggest enemy of the PC race is the console developers and software publishing houses, not Nvidia. In fact, without Nvidia pushing and AMD reacting (and vice versa), the PC industry would be down the pan. So whining about how evil Nvidia is does not reflect an accurate understanding of how strongly Nvidia is propping up PC gaming. Imagine if AMD stopped focusing on discrete GPUs and only worked on consoles. Imagine what would happen to PC development then. Nvidia would have to fight harder to prove how much we need faster, stronger graphics.
My point is devs choose PhysX because it runs well across all CPU architectures. Yes, even AMD and ARM.
On the GPU side, PhysX has grown into the entire GameWorks program: everything is optimized for NVIDIA's architecture, which is the worst-case scenario for AMD's, and locked into prebuilt DLLs that come with the cheapest license; if you want to optimize for AMD, you have to buy an expensive license that includes the source code. My take is that's a dick move when you already have 80% of the market, but also a necessary one when you consider the consoles are 100% AMD.
If you don't have an NVIDIA GPU, there is no hardware acceleration. Because of this, PhysX is only used in games for cosmetic reasons, not practical reasons. If they used PhysX for practical reasons, the game would break on all systems that lack an NVIDIA GPU. PhysX is an impractical technology, which goes back to my previous point that it is only used where sponsorship is involved.
Most developers out there have written their own code for handling physics inside their respective engines. Case in point: Frostbite. About the only major engine that still uses PhysX is Unreal Engine. As per the above, most developers on Unreal Engine code on the assumption that there is no NVIDIA card.
Edit: Three games come to mind as relying on physics: Star Citizen, BeamNG.drive, and Next Car Game: Wreckfest. The first two are on CryEngine. None of them use PhysX.
1) PhysX was proprietary anyway, so nVidia did no harm in that regard. On the contrary, a much wider audience now has access to PhysX. Shareholders would not understand it if nVidia had a codepath for AMD GPUs.
2) What nVidia bought was basically the owner of a funny, useless (given next to no market penetration) card that could do "physics computing". Well, there was some know-how in it, but actually NV used its own game development program to push PhysX.
3) Paying devs to use your software that runs well on your hardware but has a terrible impact when running on a competitor's hardware is not bribing; it's "assisting them financially to make a feature of a game that might help sell it".
4) Consoles are the main enemies of the PC world.
5) If AMD quit the discrete desktop GPU market altogether, nVidia "would have to fight harder to prove how much we need faster, stronger graphics".
NVIDIA wasn't interested in Ageia hardware. They wanted the API, which acted as middleware and executed on dedicated hardware. NVIDIA modified the API to execute on x86/x64/CUDA. In response to NVIDIA snatching PhysX, Intel snatched Havok. If memory serves, Intel was going to work with AMD on HavokFX, but the whole thing kind of fell apart.
Pretty sure the decision to buy Ageia came from NVIDIA's GPGPU CUDA work. NVIDIA had a reason for the scientific community to buy their cards but they didn't have a reason for the game development community to buy their cards. Ageia was their door into locking developers (and consumers) into NVIDIA hardware. Needless to say, it worked.
Consoles always have been and always will be simplified, purpose-built computers. I wouldn't call them "enemies" because they represent an audience that makes games possible that wouldn't exist if there were only PC gaming (Mass Effect comes to mind, as does the size, scope, and scale of Witcher 3 and GTA5).
I don't buy that argument in #5 at all. NVIDIA would likely double the price of GPUs at each tier and that's about it. The market always needs better performance (e.g. VR and 4K gaming, laser scanning and 3D modeling).