Thursday, July 23rd 2015

NVIDIA Bundles Metal Gear Solid V: The Phantom Pain with Select GeForce Products

NVIDIA, which recently withdrew its Batman: Arkham Knight GeForce bundle after the game's spectacular failure forced its publishers to pull it from the shelves, has announced a new game bundle. The company will be giving away "Metal Gear Solid V: The Phantom Pain" with the GeForce GTX 980 Ti, GTX 980, GTX 970, and GTX 960, and with notebooks featuring the GTX 980M and GTX 970M. There's no word on whether the game got the company's GameWorks varnish (GeForce-exclusive effects), but NVIDIA will give it a proper support package (a Game Ready driver with SLI profiles and GeForce Experience settings). As one of the comments on the press release suggests, it would be nice if the people who fell for the Batman: Arkham Knight bundle could be compensated with keys to this game.
Source: NVIDIA

31 Comments on NVIDIA Bundles Metal Gear Solid V: The Phantom Pain with Select GeForce Products

#1
the54thvoid
Super Intoxicated Moderator
Bet I don't get a copy with my 980ti....
#2
jboydgolfer
Decent $59 title... even just to give away if it's not your cup o' tea. Better than some of the offerings I've noticed from AMD in the recent past... ;)
#3
wiak
Ew, yet another game that will run like crap on anything that's not NVIDIA...
#4
peche
Thermaltake fanboy
I didn't receive my free copy of Splinter Cell when I purchased my current GTX 760... Amazon customer support was shit with that deal... so since that moment I only focus on the card... I won't expect anything else bundled... if I get something for free it will be a plus...

F*ck you, Amazon, and your crappy support...
#5
profoundWHALE
It actually runs well on my R9 290... until you decide to put it on Ultra, and suddenly it's the crappiest frame rate I've ever seen. I also tried with Crossfire and it's the same crappy frame rate, so clearly it's doing something that the GPU doesn't know how to do.
#6
FrustratedGarrett
Not another GameWorks title! You'd think that with DX12 and the 20x reduction in CPU time wasted on batch generation and validation, game developers would be tempted to port their games to DX12 or Vulkan and have all that physics simulation, AI, and post-processing of some of the visual effects done on CPUs, and possibly on integrated GPUs, now that DX12 makes it easy to share work between discrete and integrated GPUs.
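To make the submission-model point concrete, here is a rough sketch of the DX12/Vulkan-style idea of recording draw work on several cores at once; the CommandList type and RecordDraws function are hypothetical stand-ins for illustration, not the real D3D12 or Vulkan API.

#include <cstdio>
#include <thread>
#include <vector>

// Stand-in for a per-thread recorded batch of GPU commands (hypothetical type).
struct CommandList { int drawCount = 0; };

// Each worker encodes draw calls for its own slice of the scene; in the DX12/Vulkan
// model this recording step does not serialize on a single driver thread as in DX11.
CommandList RecordDraws(int firstObject, int lastObject) {
    CommandList cl;
    cl.drawCount = lastObject - firstObject;
    // ... encode the actual draw commands here ...
    return cl;
}

int main() {
    const int objectCount = 100000;
    const int workerCount = 4;
    std::vector<CommandList> lists(workerCount);
    std::vector<std::thread> workers;

    for (int w = 0; w < workerCount; ++w) {
        workers.emplace_back([&lists, w, objectCount, workerCount] {
            const int per = objectCount / workerCount;
            lists[w] = RecordDraws(w * per, (w + 1) * per);
        });
    }
    for (auto& t : workers) t.join();

    // A single queue submission would then replay all recorded lists
    // (the ExecuteCommandLists / vkQueueSubmit step, omitted here).
    int total = 0;
    for (const auto& cl : lists) total += cl.drawCount;
    std::printf("recorded %d draws across %d threads\n", total, workerCount);
    return 0;
}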

Here's what GameWorks is: offload everything to discrete GPUs and destroy the overall performance of a game that could run many times better on the same hardware. Performance is even worse on competitors' hardware, which is a bonus. This way, we force PC gamers to spend hundreds of dollars on our high-end graphics cards. Ain't that great?

GameWorks is basically a scam.
#7
Assimilator
FrustratedGarrett: Not another GameWorks title. You'd think that with DX12 and the 20x reduction in CPU time wasted on batch generation and validation, those game developers would be tempted to port their games to DX12 or Vulkan, and have all that physics simulation, AI and post processing of some of the visual effects run on CPUs, and possibly integrated IGPUs now that DX12 enables easy sharing of work between discrete and integrated GPUs.
Yeah, and we can run all these awesome DX12-enabled games on all the awesome DX12 operating systems that aren't released yet.

It's probably a good thing you aren't in charge of a technology company.
#8
Dethroy
Console sales will be much higher than PC sales for MGS V, so I doubt it's gonna be a GameWorks title, since the consoles are using AMD hardware.
#9
Cybrnook2002
Yes, would be nice for the Batman keys to be exchanged for this.... at least you can play it. (hopefully)
#10
FrustratedGarrett
Assimilator: Yeah, and we can run all these awesome DX12-enabled games on all the awesome DX12 operating systems that aren't released yet.

It's probably a good thing you aren't in charge of a technology company.
That's not an excuse. Windows 10 is around the corner, and a month or two of delay is better than releasing a broken game.
And if they're not willing to wait for Windows 10, couldn't they use Vulkan? Vulkan isn't bound to Windows 10, and it has been finalized for months now. How long would it take to port an existing code base to Vulkan? Two months, tops!
#11
64K
Cybrnook2002: Yes, would be nice for the Batman keys to be exchanged for this.... at least you can play it. (hopefully)
Arkham Knight is supposed to be fixed by Aug or Sept. Judging by the sorry state Warner Bros/Rocksteady released the PC port in, I'm not sure what is going to qualify as 'fixed' to them, but I hope it is.
#12
Primey_
FrustratedGarrett: Not another GameWorks title. You'd think that with DX12 and the 20x reduction in CPU time wasted on batch generation and validation, those game developers would be tempted to port their games to DX12 or Vulkan, and have all that physics simulation, AI and post processing of some of the visual effects run on CPUs, and possibly integrated IGPUs now that DX12 enables easy sharing of work between discrete and integrated GPUs.

Here's what GameWorks is: Offload everything to discrete GPUs and destroy the overall performance of a game that could run many times better on the same hardware. Performance is especially worse on competitor's hardware, which is a bonus. This way, we force PC gamers to spend hundreds of dollars to buy our high end graphics cards. Ain't that great?

GameWorks is basically a scam.
Nowhere does it say that MGS V will be a GameWorks title, so hush.
#13
Assimilator
FrustratedGarrett: That's not an excuse. Windows 10 is around the corner and a month or two delay is better than releasing a broken game.
How do you know it will be broken?
FrustratedGarrett: And if they're not willing to wait for Windows 10, they could use Vulkan? Vulkan isn't bound to Windows 10. Vulkan has been finalized for months now. How long would it take to port an existing code base to Vulkan? Two months tops!
Two extra months to write a rendering path for an API that almost nobody will be using? I'm sure you can see how the cost/benefit analysis doesn't add up.

There are exactly zero games with DX12 support that will launch with Windows 10. The best we can hope for is that holiday titles (i.e. end of year) will start to support it. Just like the first DX11 game only appeared a year after Microsoft announced the API, it will take time for developers to get to grips with how DX12 works, what they can do with it, and most importantly - what justifies upgrading from DX11.
#15
FrustratedGarrett
Assimilator: How do you know it will be broken?

Two extra months to write a rendering path for an API that almost nobody will be using? I'm sure you can see how the cost/benefit analysis doesn't add up.

There are exactly zero games with DX12 support that will launch with Windows 10. The best we can hope for is that holiday titles (i.e. end of year) will start to support it. Just like the first DX11 game only appeared a year after Microsoft announced the API, it will take time for developers to get to grips with how DX12 works, what they can do with it, and most importantly - what justifies upgrading from DX11.
A rendering path? I'm talking about rewriting existing code using the new subroutines, ADTs and data structures present in the new API and then compiling that code. That would take two months even if they took their time doing it.

What do you mean by nobody will be using DX12? All AMD GCN and modern Nvidia graphics cards support DX12 and Vulkan. You're misinformed!
#16
Lionheart
Oh fucking great! More Scamworks titles :banghead:
#17
Dethroy
FrustratedGarrett: What do you mean by nobody will be using DX12? All AMD GCN and modern Nvidia graphics cards support DX12 and Vulkan. You're misinformed!
He didn't say that nobody will be using DX12. He was talking about Vulkan.
#18
rtwjunkie
PC Gaming Enthusiast
FrustratedGarrett: A rendering path? I'm talking about rewriting existing code using the new subroutines, ADTs and data structures present in the new API and then compiling that code. This would take 2 months if they take their time doing it.

What do you mean by nobody will be using DX12? All AMD GCN and modern Nvidia graphics cards support DX12 and Vulkan. You're misinformed!
And all of that costs money. What's in it for the studio? Since it's so easy, I recommend you go ahead and get on it. ;)

But seriously, first we have to see the adoption rates of W10. I'll admit on here that I was wrong if it comes to that, but despite the free upgrade, I do not see W10 getting the super-high adoption rates MS would have us believe.
#19
newtekie1
Semi-Retired Folder
Dethroy: Console sales will be much higher than pc sales for MGS V. So I doubt it's gonna be a GameWorks title since the consoles are using AMD hardware.
Batman was a GameWorks title and it ran on the consoles just fine. Hell, it ran better than it did on PC. Not sure why people seem to think GameWorks titles can't run on AMD hardware; they run just fine on it.

Far Cry 4 and The Witcher 3 are both GameWorks titles, and they also run just fine on the consoles. They run damn well on AMD hardware too. The Witcher 3 has stupid hair effects that bog down all but the best tessellation GPUs, but that is the developers' fault more than NVIDIA's. The developers decided to apply the maximum amount of tessellation possible to the hair, as well as 16x MSAA to just the hair, when HairWorks was enabled. Far Cry 4 uses HairWorks and it doesn't bog down the GPU like The Witcher 3 does, because the devs didn't go crazy with it.
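To put rough numbers on why the tessellation setting matters so much, here is a back-of-envelope sketch; the strand and segment counts are purely made up for illustration and are not actual Witcher 3 or HairWorks figures.

#include <cstdio>

int main() {
    const long strands                = 30000;  // assumed hair strand count
    const long baseSegmentsPerStrand  = 8;      // assumed control segments per strand
    const long trianglesPerSubSegment = 2;      // each subdivided segment becomes a thin quad

    for (long tessFactor : {8, 16, 64}) {
        long triangles = strands * baseSegmentsPerStrand * tessFactor * trianglesPerSubSegment;
        std::printf("tessellation factor %2ld -> ~%ld hair triangles per frame\n",
                    tessFactor, triangles);
    }
    // Forcing 16x MSAA on those sub-pixel triangles then multiplies the shading
    // and bandwidth cost of the hair on top of the raw triangle count.
    return 0;
}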
#20
FrustratedGarrett
newtekie1: Batman was a GameWorks title and it ran on the consoles just fine. Hell, it ran better than it did on PC. Not sure why people seem to think GameWorks titles can't run on AMD hardware, they run just fine on AMD hardware.

FarCry4 and The Witcher 3 are both gameworks titles, and they also run just fine on the consoles. They run damn good on AMD hardware too. The Witcher 3 has stupid hair effects that bog down all but the best tessellation GPUs, but that is the developers fault more than nVidia's. The developers decided to apply the maximum amount of tessellation possible to hair as well as MSAAx16 to just the hair when hairworks was enabled. FarCry4 uses hairworks and it doesn't bog down the GPU like Witcher 3 does because the devs didn't go crazy with it.
Only the PC versions of these games use the GameWorks DLLs. And don't give me that tessellation bollocks: WCCFTECH noted in their original Witcher 3 review that the 285, despite having better tessellation throughput than the GTX 960, was taking more than twice the performance hit of the GTX 960 when enabling HairWorks.

This is not about going crazy with simulation effects, it's about implementing them right, which includes doing them on the right processor. Physics done on a discrete GPU is bad! We have more than capable IGPUs and CPUs (with the new APIs) on both AMD and Intel chips that can handle that level of simulation with ease.
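As a rough illustration of the CPU-side approach being argued for, here is a minimal threaded particle integrator; the particle count, thread count, and time step are assumptions for illustration, not any engine's actual workload.

#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

struct Particle { float px, py, pz, vx, vy, vz; };

// Semi-implicit Euler step for one slice of the particle array.
void integrate(std::vector<Particle>& ps, size_t begin, size_t end, float dt) {
    for (size_t i = begin; i < end; ++i) {
        ps[i].vy -= 9.81f * dt;          // gravity
        ps[i].px += ps[i].vx * dt;
        ps[i].py += ps[i].vy * dt;
        ps[i].pz += ps[i].vz * dt;
    }
}

int main() {
    std::vector<Particle> particles(200000);   // hypothetical debris/cloth point count
    const float dt = 1.0f / 60.0f;
    const unsigned threads = 4;                // a "mid-range quad-core"

    std::vector<std::thread> workers;
    const size_t chunk = particles.size() / threads;
    for (unsigned t = 0; t < threads; ++t) {
        size_t begin = t * chunk;
        size_t end = (t == threads - 1) ? particles.size() : begin + chunk;
        workers.emplace_back(integrate, std::ref(particles), begin, end, dt);
    }
    for (auto& w : workers) w.join();

    std::printf("updated %zu particles for one frame\n", particles.size());
    return 0;
}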
#21
rtwjunkie
PC Gaming Enthusiast
@FrustratedGarrett I have to agree with you on the PhysX. I never allow the GPU to do it; I always set it to the CPU. Why do I do that if I know I'm going to get fewer physics effects? One, because it doesn't bog down the GPU that way, and two, because I find the PhysX effects done on the GPU to be very disproportionate to real life compared to what you get on the CPU.
#22
newtekie1
Semi-Retired Folder
FrustratedGarrett: Only the PC versions of these games use the GameWorks dlls.
Hence my point.
FrustratedGarrett: And don't give me that tessellation bollocks: WCCFTECH noted in their original W3 review that the 285, despite having better tessellation throughput than the GTX960, was getting more than twice the performance hit of the GTX960 when enabling HairWorks.
Yep, did you note the issue with 16x MSAA being applied to the hair when HairWorks was enabled? The 285 had very poor MSAA performance. You're focusing on one of the issues; I'm talking about the whole picture.
FrustratedGarrett: This is not about going crazy with simulation effects, it's about implementing them right, which includes doing them on the right processor. Physics done on a discrete GPU is bad! We have more than potent IGPUs and CPUs (with the new APIs) on both AMD and Intel chips that can render that level of simulation with ease.
Physics is number crunching, something GPUs are in fact better at. HairWorks is nothing more than NVIDIA's answer to TressFX. And in the case of the rest of the physics in The Witcher, even though it is using PhysX, it is all done by the CPU. And if the effects enabled by hardware-accelerated PhysX were possible using the CPU only, I'd think we'd see some of the CPU-only physics engines doing them. But I have yet to see a CPU-only physics engine do realistic cloth and volumetric fog effects. Can you point me to some CPU physics engines that do these effects?
rtwjunkie: @FrustratedGarrett I have to agree with you on the PhysX. I never allow the GPU to do it. I always set it to CPU. Why do I do that if I know I'm going to get fewer physics effects? One, because it doesn't bog down the GPU that way, and two, because I find the PhysX effects done on GPU to be very disproportional to real life compared to what you get on the CPU.
I have yet to find a game where PhysX bogged down my GPU. But just like any other visual effect, turning it on will lower framerates. The actual PhysX calculations have little to do with that, though; it is actually rendering the extra particles and effects that causes the framerate drop. The PhysX calculations can be done on a GT 640...
#23
FrustratedGarrett
newtekie1: Yep, note the issues with MSAAx16 being applied to the hair when Hairworks was enabled? The 285 had very poor MSAA performance. You're focusing on one of the issues, I'm talking about the whole picture.
So the GTX 960 with its 128-bit memory bus had better anti-aliasing performance than the R9 285 with its 256-bit memory bus? I highly doubt it.
newtekie1: Physics is number crunching, something GPUs are in fact better at. Hairworks is nothing more than nVidia's answer to TressFX. And in the case of the rest of the physics in Witcher, even though it is using PhysX, it is all done by the CPU. And if the effects enabled by hardware accelerated PhysX were possible using the CPU only, I'd think we'd see some of the CPU only physics engines doing them. But I have yet to see a CPU only physics engine do realistic cloth and volumetric fog effects. Can you point me to some CPU physics engines that do these effects?
It is true that a mid-range GPU has a much higher FLOPS rating than a high-end CPU does. However, unless the process you're running on a modern GPU is massively threaded and highly vectorized (data parallel), you're worse off running it on a GPU.
Not every simulation task can be done faster on a modern GPU, especially in the case of discrete GPUs, where you need to move large amounts of data to GPU RAM very frequently and go through a context switch (execute kernel code and save/move some data in cache and system memory) before you can address GPU memory and registers. Context switches and I/O are really expensive. In a game workload, where the frequency at which you need to update the GPU is very high, you're gonna waste a lot of CPU and GPU cycles.
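A crude cost model of that trade-off looks something like the sketch below; every bandwidth, latency, and speed figure in it is an assumption for illustration, not a measurement.

// Crude, assumption-only cost model: offloading pays off only when the compute
// saved outweighs the bus transfer and dispatch latency paid on every update.
#include <cstdio>

int main() {
    // Assumed figures for illustration; real numbers vary by platform.
    const double busGBps    = 12.0;   // effective PCIe bandwidth
    const double dispatchMs = 0.05;   // per-launch driver/queue overhead
    const double sizeMB     = 64.0;   // simulation state touched each update
    const double gpuSpeedup = 10.0;   // assumed raw compute advantage of the GPU

    const double transferMs = sizeMB / 1024.0 / busGBps * 1000.0 * 2.0; // upload + readback

    for (double cpuMsPerMB : {0.02, 0.5, 5.0}) {   // arithmetic intensity of the workload
        double cpuMs     = cpuMsPerMB * sizeMB;
        double offloadMs = transferMs + dispatchMs + cpuMs / gpuSpeedup;
        std::printf("CPU %.2f ms vs offload %.2f ms -> %s\n",
                    cpuMs, offloadMs, offloadMs < cpuMs ? "offload wins" : "stay on CPU");
    }
    return 0;
}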

NVIDIA's GameWorks is simply not the best approach, by a long shot. That's why all we're getting in these games is crappy "physics" effects that are primitive at best.
newtekie1: I have yet to find a game where the PhysX bogged down my GPU. But it is just like any other visual effect, turning it on will lower framerates. Though the actual PhysX calculations have little to do with that, it is actually rendering the extra particles and effects that is causing the framerate drop. The PhysX calculations can be done on a GT640...
The issue is the way NVIDIA is pushing their way of doing "physics". We can do better physics than NVIDIA's PhysX on mid-range quad-core CPUs alone.
The delays incurred every time you need to update the GPU and GPU RAM with new control bits and fresh data are way too long for consistent frame delivery. PhysX and/or GameWorks are really meant to make more money for NVIDIA, not to improve physics in games or games in general.
#24
newtekie1
Semi-Retired Folder
FrustratedGarrett: So the GTX960 with its 128bit memory bus had better anti aliasing performance than the R9 285 with its 256bit memory bus? I highly doubt it.
Yes, because you can judge a GPU on one spec. The GTX960 GPU is better at MSAA.
FrustratedGarrett: It is true that a mid range GPU has a much higher flops rating than a high end CPU does. However, unless the process you're running on modern GPU is many-many threaded and highly vectorized (data parallel), you're worse off running it on a GPU.
Not every simulation task can be done faster on a modern GPU, especially in the case of discrete GPUs where you need to move GBs of data to GPU RAM very frequently and you need to go through a context switch (execute kernel code and save/move some data in cache and system memory) before you can address GPU memory and registers. Context switches and I/O are real expensive. In a game workload where the frequency at which you need to update the GPU is very high, you're gonna waste a lot of CPU and GPU cycles.

Nvidia's GameWorks is simply not the best approach by a long shot. That's why all we're getting in these games are these crappy "physics" effects that are primitive at best.
So after all that, still no CPU-based physics engine that can do what PhysX can on a GPU. Got it. You'd think that if the CPU was so much better, these companies that do nothing but program physics to run on CPUs would be doing these things by now. Why aren't they, if the CPU is so much better?
FrustratedGarrett: The issue is in the way Nvidia is pushing their way of doing better "physics". We can do better physics than Nvidia's PhysX on midrange quad core CPUs alone.
The delays incurred every time you need to update GPU and GPU-RAM with new control bits and GBs of data are way too long for consistent frame delivery. Physics and/or GameWorks are really meant to make more money for Nvidia and not improve physics in games or games in general.
You say that like PhysX is the only physics engine out there. Again, there are physics engines programmed entirely for the CPU, very popular ones. The simple fact is that they cannot do on a CPU what PhysX does.

Soft-body physics (cloth/hair) and volumetric fog/smoke were introduced years ago with PhysX. If the CPU is so much better at calculating these effects, why haven't the CPU-based physics engines been able to successfully implement them? I mean, hell, AMD finally had to step up and create an API for soft-body physics, and sure enough it runs on the GPU too.
#25
FrustratedGarrett
@newtekie1
Do you have any proof of the GTX 960 being better at MSAA?

With regard to your stance on CPU "physics", well... good luck letting the DX11 runtime manage 10 synchronous threads. It's simply not possible with the current, or rather previous-gen, APIs.

I'd say that AMD's HSA chips are quite a good option for heavy simulation and AI processing. The GPU in these chips can address the whole virtual memory space, and communication between the CPU and GPU is done through hardware, without any context switches. All you need to do is add pointers in your code where you want execution to switch to either the CPU or the GPU. AMD's HSA chips seem like the most sensible platform for these kinds of workloads.
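As a sketch of what that pointer-passing model buys you, here's a minimal illustration; GpuQueue and its dispatch call are hypothetical stand-ins, not the real HSA runtime API.

#include <cstdio>
#include <vector>

struct GpuQueue {
    // Stand-in for a user-mode dispatch queue. With an HSA-style shared virtual
    // address space the packet just carries the pointer; nothing is copied
    // across PCIe into a separate device buffer first.
    void dispatch(const char* kernelName, float* sharedData, size_t count) {
        std::printf("dispatch %s on %zu floats at %p (no staging copy)\n",
                    kernelName, count, static_cast<void*>(sharedData));
    }
};

int main() {
    std::vector<float> particles(1 << 20, 0.0f);  // CPU-visible simulation state
    GpuQueue queue;

    // CPU updates the data in place ...
    for (float& p : particles) p += 0.016f;

    // ... then hands the *same* allocation to the GPU. A discrete-GPU path would
    // instead allocate a device buffer and copy `particles` into it every frame.
    queue.dispatch("integrate_particles", particles.data(), particles.size());
    return 0;
}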