Tuesday, August 1st 2017
NVIDIA Unlocks Certain Professional Features for TITAN Xp Through Driver Update
In a bid to preempt sales of the Radeon Pro Vega Frontier Edition and the Radeon Pro WX 9100, NVIDIA has expanded the feature-set of its consumer-segment TITAN Xp graphics card through a driver update, adding certain features previously reserved for its Quadro family of graphics cards. NVIDIA is rolling out its latest GeForce software update, which adds professional features for applications such as Maya, unlocking "3X more performance" in the software.
Priced at USD $1,199, the TITAN Xp packs a full-featured "GP102" graphics processor, with 3,840 CUDA cores, 240 TMUs, 96 ROPs, and 12 GB of GDDR5X memory across the chip's 384-bit wide memory interface. At its given memory clock of 11.4 GHz (GDDR5X-effective), the card has a memory bandwidth of 547.6 GB/s, which is higher than the 484 GB/s of the Radeon Pro Vega Frontier Edition.
DOWNLOAD: NVIDIA GeForce 385.12 for TITAN Xp
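As a rough sanity check of those bandwidth figures, peak memory bandwidth follows directly from the per-pin data rate and the bus width. The sketch below uses the article's numbers; the helper function and the ~1.89 Gbps figure derived for the Vega Frontier Edition's HBM2 are illustrative assumptions, not vendor specifications.

```python
# Back-of-the-envelope check of peak memory bandwidth.
# bandwidth (GB/s) = effective per-pin data rate (Gbps) * bus width (bits) / 8

def peak_bandwidth_gbs(effective_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s from per-pin data rate and bus width."""
    return effective_gbps * bus_width_bits / 8

# TITAN Xp: ~11.4 Gbps effective GDDR5X on a 384-bit bus
print(peak_bandwidth_gbs(11.4, 384))   # ~547.2 GB/s
# Radeon Pro Vega Frontier Edition: 2048-bit HBM2 at ~1.89 Gbps (assumed, derived from 484 GB/s)
print(peak_bandwidth_gbs(1.89, 2048))  # ~483.8 GB/s
```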
Source:
NVIDIA
92 Comments on NVIDIA Unlocks Certain Professional Features for TITAN Xp Through Driver Update
has anybody found out what they have actually enabled with these special sauce drivers?
According to this the initial payout has happened or started:
www.bursor.com/2017/04/payments-sent-to-nvidia-gtx-970-class-members/
Settlement info (I have no idea if this is the final document):
cdn.arstechnica.net/wp-content/uploads/2016/07/show_temp.pl-2.pdf
If you want to slog through legal documents for more specific info, feel free.
"NVIDIA is a dominant player in the GPU market and GW allow for better graphics, independent game developers use the said features because they will work for at least 75% of people out there without a discernible performance loss."
This is just a blatantly false statement. I can't even think of a game where GameWorks has improved the graphics. In fact, where GameWorks is implemented, performance often takes a dive for both Nvidia and AMD users. For example, the God Rays in Fallout 4, a simple effect by any standard, were awful, and that was entirely thanks to Nvidia. The only time GameWorks doesn't take a toll on performance is when it is only using PhysX lightly. In Borderlands 2, turning PhysX to medium or high does introduce lag/stuttering, especially on high.
No, if you wanted to introduce performant effects into video games you'd use one of AMD's technologies like TressFX, because everyone can optimize for it since it's open. Compare that to Nvidia, where games in the GameWorks program must bar AMD from seeing large portions of the code, preventing them from optimizing.
GameWorks is absolute shit no matter which side you are on.
RIP P6000
A few level heads in here, and also some mild sarcasm meant to point out reality. 54thvoid said it best, to paraphrase: these are companies in business to make money, off you. Fanatical loyalty is misplaced, because neither one has any for you. It's money that matters! :cool:
GameWorks is a licensing issue. It sucks on the consumer side for sure, but I don't think anything unfair is happening. AMD can go to Havok for source access, for example, but they won't get it unless they pay the access fee. It's the same with GameWorks. Even AMD themselves didn't open their tech for others to freely access until Nvidia came out with GameWorks.
I use an overclocking tool for a day and then flash the GPU, because I hate setting profiles in both Windows and Linux. Thus I never really experienced this.
2. Because SLI works? Recently they've stopped using SLI in most games; it just isn't doing anything.
I'd say avoid multi-GPU regardless of vendor :) Same issues on our computers with Nvidia and Intel graphics ;)
devtalk.nvidia.com/default/topic/1001191/how-fp32-and-fp16-units-are-implemented-in-gp100-gpus/
Multi-GPU works fine for Nvidia right now. I am using it. Either it works in a game or it doesn't; it isn't a game of checking whether the profile launches correctly, like CrossFire.
This is an Nvidia Tesla card made specifically for deep learning work. Look at the Tesla P4 (GP104-based) and Tesla P40 (GP102-based): for both products Nvidia did not mention FP16 performance; only the Tesla P100's FP16 performance was listed. For this kind of product there is no reason for Nvidia to artificially limit FP16 performance unless the performance was poor to begin with. And here is a source from AnandTech:
www.anandtech.com/show/10510/nvidia-announces-nvidia-titan-x-video-card-1200-available-august-2nd
Look at the spec table: for both the Titan X (2016) and the GTX 1080, FP16 (native) performance is listed as 1/64 of FP32 performance.

That prospect is interesting, but the physics engine is integrated at the game-engine level, not the operating system. Imagine the game suddenly not working when ported to Linux or PS4, since neither uses Microsoft DirectX. I don't think developers like the idea of changing the physics engine in their game based on what operating system it runs on, because right now the physics engine they pick will pretty much work on all platforms.
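To put that 1/64 native FP16 ratio from the spec table into absolute terms, here is a quick sketch. The FP32 figure below is an assumed ballpark for a GP102-class card, used only for illustration; it is not taken from the linked article.

```python
# Rough sketch: what a 1/64 native FP16 rate means in absolute throughput.

FP32_TFLOPS = 11.0     # assumed ballpark peak FP32 rate (TFLOPS) for a GP102-class card
FP16_RATIO = 1 / 64    # native FP16 rate relative to FP32, per the spec table

fp16_tflops = FP32_TFLOPS * FP16_RATIO
print(f"Peak native FP16: {fp16_tflops:.3f} TFLOPS")  # ~0.172 TFLOPS
```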