
NVIDIA Bundles Metal Gear Solid V: The Phantom Pain with Select GeForce Products

btarunr

Editor & Senior Moderator
NVIDIA, which recently withdrew its Batman: Arkham Knight GeForce bundle after the game's spectacular failure forced its publisher to pull it from the shelves, has announced a new game bundle. The company will be giving away "Metal Gear Solid V: The Phantom Pain" with the GeForce GTX 980 Ti, GTX 980, GTX 970, and GTX 960, and with notebooks featuring the GTX 980M and GTX 970M. There's no word on whether the game got the company's GameWorks varnish (GeForce-exclusive effects), but NVIDIA will give it a proper support package (a Game Ready driver with SLI profiles and GeForce Experience settings). As one of the comments on the press release suggests, it would be nice if the people who fell for the Batman: Arkham Knight bundle could be compensated with keys to this game.



View at TechPowerUp Main Site
 
Bet I don't get a copy with my 980ti....
 
Decent $59 title... even to just give away if it's not your cup o' tea. Better than some of the offerings I've noticed from AMD in the recent past. ;)
 
Ew, yet another game that will run like crap on anything that's not NVIDIA...
 
I didn't receive my free copy of Splinter Cell when I purchased my current GTX 760... Amazon customer support was awful with that deal... so since that moment I only focus on the card and won't expect anything else bundled... if I get something for free, it will be a plus...

F*ck you, Amazon, and your crappy support...
 
It actually runs well on my R9 290... until you decide to put it on Ultra, and suddenly it's the crappiest frame rate I've ever seen. I also tried with Crossfire and it's the same crappy frame rate, so clearly it's doing something that the GPU doesn't know how to do.
 
Not another GameWorks title! You'd think that with DX12 and the 20x reduction in CPU time wasted on batch generation and validation, game developers would be tempted to port their games to DX12 or Vulkan, and have all that physics simulation, AI, and post-processing of some of the visual effects done on CPUs, and possibly on integrated GPUs, now that DX12 enables easy sharing of work between discrete and integrated GPUs.

Here's what GameWorks is: offload everything to discrete GPUs and destroy the overall performance of a game that could run many times better on the same hardware. Performance is even worse on the competitor's hardware, which is a bonus. This way, we force PC gamers to spend hundreds of dollars on our high-end graphics cards. Ain't that great?

GameWorks is basically a scam.
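For reference, here's a rough, untested C++ sketch of the DX12 adapter enumeration the post above is alluding to: with explicit multi-adapter, the integrated GPU shows up alongside the discrete card and can be given its own device and queues. This is a generic illustration only, not anything from MGS V or NVIDIA's bundle.

```cpp
// Minimal sketch (untested): enumerate every DXGI adapter and create a D3D12
// device on each one -- the starting point for explicit multi-adapter work
// sharing. Error handling trimmed; link with d3d12.lib and dxgi.lib.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12Device>> CreateDeviceOnEachAdapter()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc{};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the software/WARP adapter

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
        {
            // Both the discrete card and the integrated GPU can show up here;
            // each device can own its own command queues and copy work.
            devices.push_back(device);
        }
    }
    return devices;
}
```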
 
Not another GameWorks title! You'd think that with DX12 and the 20x reduction in CPU time wasted on batch generation and validation, game developers would be tempted to port their games to DX12 or Vulkan, and have all that physics simulation, AI, and post-processing of some of the visual effects done on CPUs, and possibly on integrated GPUs, now that DX12 enables easy sharing of work between discrete and integrated GPUs.

Yeah, and we can run all these awesome DX12-enabled games on all the awesome DX12 operating systems that aren't released yet.

It's probably a good thing you aren't in charge of a technology company.
 
Console sales will be much higher than PC sales for MGS V, so I doubt it's gonna be a GameWorks title, since the consoles are using AMD hardware.
 
Yes, it would be nice for the Batman keys to be exchanged for this... at least you can play it (hopefully).
 
Yeah, and we can run all these awesome DX12-enabled games on all the awesome DX12 operating systems that aren't released yet.

It's probably a good thing you aren't in charge of a technology company.


That's not an excuse. Windows 10 is around the corner, and a delay of a month or two is better than releasing a broken game.
And if they're not willing to wait for Windows 10, they could use Vulkan. Vulkan isn't bound to Windows 10. Vulkan has been finalized for months now. How long would it take to port an existing code base to Vulkan? Two months, tops!
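For what it's worth, a Vulkan port does start from a plain C API that isn't tied to any particular Windows version. Below is a minimal, untested sketch of the very first step (instance creation), written against current Vulkan headers purely as an illustration; the application name is made up, and this says nothing about how long porting a full engine would actually take.

```cpp
// Rough sketch (untested): creating a Vulkan instance, the first step of any
// Vulkan port. The API is plain C and is not bound to Windows 10.
#include <vulkan/vulkan.h>
#include <cstdio>

int main()
{
    VkApplicationInfo app{};
    app.sType            = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.pApplicationName = "port-sketch";   // illustrative name only
    app.apiVersion       = VK_API_VERSION_1_0;

    VkInstanceCreateInfo info{};
    info.sType            = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    info.pApplicationInfo = &app;

    VkInstance instance = VK_NULL_HANDLE;
    if (vkCreateInstance(&info, nullptr, &instance) != VK_SUCCESS) {
        std::printf("No Vulkan-capable driver installed\n");
        return 1;
    }

    // ...enumerate physical devices, create a logical device, queues, etc.

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```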
 
Yes, it would be nice for the Batman keys to be exchanged for this... at least you can play it (hopefully).

Arkham Knight is supposed to be fixed by August or September. Judging by the sorry state Warner Bros/Rocksteady released the PC port in, I'm not sure what is going to qualify as 'fixed' to them, but I hope it is.
 
Not another GameWorks title! You'd think that with DX12 and the 20x reduction in CPU time wasted on batch generation and validation, game developers would be tempted to port their games to DX12 or Vulkan, and have all that physics simulation, AI, and post-processing of some of the visual effects done on CPUs, and possibly on integrated GPUs, now that DX12 enables easy sharing of work between discrete and integrated GPUs.

Here's what GameWorks is: offload everything to discrete GPUs and destroy the overall performance of a game that could run many times better on the same hardware. Performance is even worse on the competitor's hardware, which is a bonus. This way, we force PC gamers to spend hundreds of dollars on our high-end graphics cards. Ain't that great?

GameWorks is basically a scam.

Nowhere does it say that MGS V will be a GameWorks title, so hush.
 
That's not an excuse. Windows 10 is around the corner, and a delay of a month or two is better than releasing a broken game.

How do you know it will be broken?

And if they're not willing to wait for Windows 10, they could use Vulkan. Vulkan isn't bound to Windows 10. Vulkan has been finalized for months now. How long would it take to port an existing code base to Vulkan? Two months, tops!

Two extra months to write a rendering path for an API that almost nobody will be using? I'm sure you can see how the cost/benefit analysis doesn't add up.

There are exactly zero games with DX12 support that will launch with Windows 10. The best we can hope for is that holiday titles (i.e. end of year) will start to support it. Just like the first DX11 game only appeared a year after Microsoft announced the API, it will take time for developers to get to grips with how DX12 works, what they can do with it, and most importantly - what justifies upgrading from DX11.
 
How do you know it will be broken?



Two extra months to write a rendering path for an API that almost nobody will be using? I'm sure you can see how the cost/benefit analysis doesn't add up.

There are exactly zero games with DX12 support that will launch with Windows 10. The best we can hope for is that holiday titles (i.e. end of year) will start to support it. Just like the first DX11 game only appeared a year after Microsoft announced the API, it will take time for developers to get to grips with how DX12 works, what they can do with it, and most importantly - what justifies upgrading from DX11.

A rendering path? I'm talking about rewriting existing code using the new subroutines, ADTs, and data structures present in the new API, and then compiling that code. This would take two months if they take their time doing it.

What do you mean by nobody will be using DX12? All AMD GCN and modern Nvidia graphics cards support DX12 and Vulkan. You're misinformed!
 
Oh fucking great! More Scamworks titles :banghead:
 
What do you mean by nobody will be using DX12? All AMD GCN and modern Nvidia graphics cards support DX12 and Vulkan. You're misinformed!

He didn't say that nobody will be using DX12. He was talking about Vulkan.
 
A rendering path? I'm talking about rewriting existing code using the new subroutines, ADTs, and data structures present in the new API, and then compiling that code. This would take two months if they take their time doing it.

What do you mean by nobody will be using DX12? All AMD GCN and modern Nvidia graphics cards support DX12 and Vulkan. You're misinformed!

And all of that costs money. What's in it for the studio? Since it's so easy, I recommend you go ahead and get on it. ;)

But seriously, first we have to see the adoption rates of W10. I'll admit on here that I was wrong if it comes to it, but despite the free upgrade, I do not see W10 hitting the super-high adoption rates MS would have us believe.
 
Console sales will be much higher than PC sales for MGS V, so I doubt it's gonna be a GameWorks title, since the consoles are using AMD hardware.

Batman was a GameWorks title and it ran on the consoles just fine. Hell, it ran better than it did on PC. Not sure why people seem to think GameWorks titles can't run on AMD hardware; they run just fine on AMD hardware.

FarCry4 and The Witcher 3 are both GameWorks titles, and they also run just fine on the consoles. They run damn well on AMD hardware too. The Witcher 3 has stupid hair effects that bog down all but the best tessellation GPUs, but that is the developers' fault more than nVidia's. The developers decided to apply the maximum amount of tessellation possible to the hair, as well as MSAAx16 to just the hair, when HairWorks was enabled. FarCry4 uses HairWorks and it doesn't bog down the GPU like The Witcher 3 does, because the devs didn't go crazy with it.
 
Batman was a GameWorks title and it ran on the consoles just fine. Hell, it ran better than it did on PC. Not sure why people seem to think GameWorks titles can't run on AMD hardware; they run just fine on AMD hardware.

FarCry4 and The Witcher 3 are both GameWorks titles, and they also run just fine on the consoles. They run damn well on AMD hardware too. The Witcher 3 has stupid hair effects that bog down all but the best tessellation GPUs, but that is the developers' fault more than nVidia's. The developers decided to apply the maximum amount of tessellation possible to the hair, as well as MSAAx16 to just the hair, when HairWorks was enabled. FarCry4 uses HairWorks and it doesn't bog down the GPU like The Witcher 3 does, because the devs didn't go crazy with it.

Only the PC versions of these games use the GameWorks DLLs. And don't give me that tessellation bollocks: WCCFTECH noted in their original W3 review that the 285, despite having better tessellation throughput than the GTX960, was taking more than twice the performance hit of the GTX960 when enabling HairWorks.

This is not about going crazy with simulation effects, it's about implementing them right, which includes doing them on the right processor. Physics done on a discrete GPU is bad! We have more than potent IGPUs and CPUs (with the new APIs) on both AMD and Intel chips that can render that level of simulation with ease.
 
@FrustratedGarrett I have to agree with you on the PhysX. I never allow the GPU to do it; I always set it to the CPU. Why do I do that if I know I'm going to get fewer physics effects? One, because it doesn't bog down the GPU that way, and two, because I find the PhysX effects done on the GPU to be disproportionately unrealistic compared to what you get on the CPU.
 
Only the PC versions of these games use the GameWorks DLLs.

Hence my point.

And don't give me that tessellation bollocks: WCCFTECH noted in their original W3 review that the 285, despite having better tessellation throughput than the GTX960, was taking more than twice the performance hit of the GTX960 when enabling HairWorks.

Yep, did you note the issues with MSAAx16 being applied to the hair when HairWorks was enabled? The 285 had very poor MSAA performance. You're focusing on one of the issues; I'm talking about the whole picture.

This is not about going crazy with simulation effects, it's about implementing them right, which includes doing them on the right processor. Physics done on a discrete GPU is bad! We have more than potent IGPUs and CPUs (with the new APIs) on both AMD and Intel chips that can render that level of simulation with ease.

Physics is number crunching, something GPUs are in fact better at. HairWorks is nothing more than nVidia's answer to TressFX. And in the case of the rest of the physics in The Witcher, even though it is using PhysX, it is all done by the CPU. And if the effects enabled by hardware-accelerated PhysX were possible using the CPU only, I'd think we'd see some of the CPU-only physics engines doing them. But I have yet to see a CPU-only physics engine do realistic cloth and volumetric fog effects. Can you point me to some CPU physics engines that do these effects?

@FrustratedGarrett I have to agree with you on the PhysX. I never allow the GPU to do it; I always set it to the CPU. Why do I do that if I know I'm going to get fewer physics effects? One, because it doesn't bog down the GPU that way, and two, because I find the PhysX effects done on the GPU to be disproportionately unrealistic compared to what you get on the CPU.

I have yet to find a game where PhysX bogged down my GPU. But it is just like any other visual effect: turning it on will lower framerates. Though the actual PhysX calculations have little to do with that; it is actually rendering the extra particles and effects that causes the framerate drop. The PhysX calculations can be done on a GT640...
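As an aside on the "number crunching" point: the per-particle work in cloth/hair simulation really is just wide arithmetic over many independent elements. Below is a toy, untested mass-spring step; it is not PhysX or HairWorks code, only an illustration of why the per-particle update maps equally well to GPU lanes or to CPU SIMD/threads.

```cpp
// Toy sketch of one explicit-integration step for a mass-spring "cloth":
// accumulate spring forces, then integrate each particle (unit mass assumed).
#include <vector>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

struct Spring { int a, b; float restLength, stiffness; };

void StepCloth(std::vector<Vec3>& pos, std::vector<Vec3>& vel,
               const std::vector<Spring>& springs, float dt)
{
    std::vector<Vec3> force(pos.size(), Vec3{0.0f, 0.0f, -9.8f}); // gravity

    // Hooke's-law spring forces between connected particles.
    for (const Spring& s : springs) {
        Vec3 d = pos[s.b] - pos[s.a];
        float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
        Vec3 f = d * (s.stiffness * (len - s.restLength) / (len + 1e-6f));
        force[s.a] = force[s.a] + f;
        force[s.b] = force[s.b] - f;
    }

    // This loop is embarrassingly parallel: one thread (or GPU lane) per particle.
    for (std::size_t i = 0; i < pos.size(); ++i) {
        vel[i] = vel[i] + force[i] * dt;
        pos[i] = pos[i] + vel[i] * dt;
    }
}
```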
 
Yep, did you note the issues with MSAAx16 being applied to the hair when HairWorks was enabled? The 285 had very poor MSAA performance. You're focusing on one of the issues; I'm talking about the whole picture.

So the GTX960 with its 128-bit memory bus had better anti-aliasing performance than the R9 285 with its 256-bit memory bus? I highly doubt it.

Physics is number crunching, something GPUs are in fact better at. HairWorks is nothing more than nVidia's answer to TressFX. And in the case of the rest of the physics in The Witcher, even though it is using PhysX, it is all done by the CPU. And if the effects enabled by hardware-accelerated PhysX were possible using the CPU only, I'd think we'd see some of the CPU-only physics engines doing them. But I have yet to see a CPU-only physics engine do realistic cloth and volumetric fog effects. Can you point me to some CPU physics engines that do these effects?

It is true that a mid-range GPU has a much higher FLOPS rating than a high-end CPU does. However, unless the process you're running on a modern GPU is massively threaded and highly vectorized (data-parallel), you're worse off running it on a GPU.
Not every simulation task can be done faster on a modern GPU, especially in the case of discrete GPUs, where you need to move GBs of data to GPU RAM very frequently and go through a context switch (execute kernel code and save/move some data in cache and system memory) before you can address GPU memory and registers. Context switches and I/O are really expensive. In a game workload, where the frequency at which you need to update the GPU is very high, you're going to waste a lot of CPU and GPU cycles.

Nvidia's GameWorks is simply not the best approach, by a long shot. That's why all we're getting in these games are these crappy "physics" effects that are primitive at best.
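To put rough numbers on the per-frame transfer budget described above: assuming a PCIe 3.0 x16 link at about 15.75 GB/s theoretical peak and a 60 fps target (both assumptions, and optimistic ones, since real copy rates are lower), the budget works out to roughly 270 MB per frame, so moving "GBs of data" each frame simply doesn't fit:

```cpp
// Back-of-envelope transfer budget. Assumes PCIe 3.0 x16 at ~15.75 GB/s
// theoretical peak and a 60 fps frame budget; illustrative numbers only.
#include <cstdio>

int main()
{
    const double pcie3_x16_gb_per_s  = 15.75;                              // theoretical peak
    const double fps                 = 60.0;
    const double budget_mb_per_frame = pcie3_x16_gb_per_s * 1024.0 / fps;  // ~269 MB

    std::printf("Per-frame upload budget: ~%.0f MB\n", budget_mb_per_frame);
    std::printf("Moving 2 GB would take ~%.0f frames' worth of bus time\n",
                2.0 * 1024.0 / budget_mb_per_frame);                        // ~8 frames
    return 0;
}
```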

I have yet to find a game where PhysX bogged down my GPU. But it is just like any other visual effect: turning it on will lower framerates. Though the actual PhysX calculations have little to do with that; it is actually rendering the extra particles and effects that causes the framerate drop. The PhysX calculations can be done on a GT640...

The issue is the way Nvidia is pushing their way of doing "better" physics. We can do better physics than Nvidia's PhysX on midrange quad-core CPUs alone.
The delays incurred every time you need to update the GPU and GPU RAM with new control bits and GBs of data are way too long for consistent frame delivery. PhysX and/or GameWorks are really meant to make more money for Nvidia, not to improve physics in games or games in general.
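On the "midrange quad core" point, here is a toy, untested sketch of spreading a particle update across CPU hardware threads with std::thread. A real engine would use a job system and SIMD rather than spawning threads per step, but the shape of the work is the same.

```cpp
// Toy sketch: split a particle integration step across hardware threads.
// Each worker owns a disjoint slice of the array, so no locking is needed.
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct Particle { float px, py, pz, vx, vy, vz; };

void Integrate(std::vector<Particle>& particles, float dt)
{
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;

    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&, w] {
            for (std::size_t i = w; i < particles.size(); i += workers) {
                Particle& p = particles[i];
                p.vz -= 9.8f * dt;   // gravity
                p.px += p.vx * dt;
                p.py += p.vy * dt;
                p.pz += p.vz * dt;
            }
        });
    }
    for (auto& t : pool) t.join();
}
```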
 
So the GTX960 with its 128-bit memory bus had better anti-aliasing performance than the R9 285 with its 256-bit memory bus? I highly doubt it.

Yes, because you can judge a GPU on one spec. The GTX960 GPU is better at MSAA.

It is true that a mid-range GPU has a much higher FLOPS rating than a high-end CPU does. However, unless the process you're running on a modern GPU is massively threaded and highly vectorized (data-parallel), you're worse off running it on a GPU.
Not every simulation task can be done faster on a modern GPU, especially in the case of discrete GPUs, where you need to move GBs of data to GPU RAM very frequently and go through a context switch (execute kernel code and save/move some data in cache and system memory) before you can address GPU memory and registers. Context switches and I/O are really expensive. In a game workload, where the frequency at which you need to update the GPU is very high, you're going to waste a lot of CPU and GPU cycles.

Nvidia's GameWorks is simply not the best approach, by a long shot. That's why all we're getting in these games are these crappy "physics" effects that are primitive at best.

So after all that, no CPU-based physics engine that can do what PhysX can on a GPU. Got it. You'd think that if the CPU was so much better, these companies that do nothing but program physics to run on CPUs would be doing these things by now. Why aren't they, if the CPU is so much better?

The issue is the way Nvidia is pushing their way of doing "better" physics. We can do better physics than Nvidia's PhysX on midrange quad-core CPUs alone.
The delays incurred every time you need to update the GPU and GPU RAM with new control bits and GBs of data are way too long for consistent frame delivery. PhysX and/or GameWorks are really meant to make more money for Nvidia, not to improve physics in games or games in general.

You say that like PhysX is the only physics engine out there. Again, there are physics engines programmed entirely for the CPU, very popular ones. The simple fact is they cannot do what PhysX does on a CPU.

Soft body physics (cloth/hair) and volumetric fog/smoke were introduced years ago with PhysX. If the CPU is so much better at calculating these effects, why haven't the CPU-based physics engines been able to successfully implement them in their engines? I mean, hell, AMD finally had to step up and create an API for soft body physics, and sure enough it runs on the GPU too.
 