Thursday, July 23rd 2015
NVIDIA Bundles Metal Gear Solid V: The Phantom Pain with Select GeForce Products
NVIDIA, which recently withdrew its Batman: Arkham Knight GeForce bundle after the game's spectacular failure forced its publishers to pull it off the shelves, has announced a new game bundle. The company will be giving away "Metal Gear Solid V: The Phantom Pain" with GeForce GTX 980 Ti, GTX 980, GTX 970, and GTX 960 graphics cards, and with notebooks featuring the GTX 980M and GTX 970M. There's no word on whether the game got the company's GameWorks varnish (GeForce-exclusive effects), but NVIDIA will give it a proper support package (a Game Ready driver with SLI profiles and GeForce Experience settings). As one of the comments to the press release suggests, it would be nice if the people who fell for the Batman: Arkham Knight bundle could be compensated with keys to this game.
Source:
NVIDIA
31 Comments on NVIDIA Bundles Metal Gear Solid V: The Phantom Pain with Select GeForce Products
F*ck you, Amazon. Crappy support...
Here's what GameWorks is: offload everything to discrete GPUs and destroy the overall performance of a game that could run many times better on the same hardware. Performance is especially bad on competitors' hardware, which is a bonus. This way, we force PC gamers to spend hundreds of dollars on our high-end graphics cards. Ain't that great?
GameWorks is basically a scam.
It's probably a good thing you aren't in charge of a technology company.
And if they're not willing to wait for Windows 10, they could use Vulkan. Vulkan isn't bound to Windows 10, and it has been finalized for months now. How long would it take to port an existing code base to Vulkan? Two months, tops!
There are exactly zero games with DX12 support that will launch with Windows 10. The best we can hope for is that holiday titles (i.e. end of year) will start to support it. Just like the first DX11 game only appeared a year after Microsoft announced the API, it will take time for developers to get to grips with how DX12 works, what they can do with it, and most importantly - what justifies upgrading from DX11.
www.pcper.com/news/Graphics-Cards/NVIDIA-GameWorks-Enhancements-Coming-Upcoming-Metal-Gear-Solid-Game
What do you mean by nobody will be using DX12? All AMD GCN and modern Nvidia graphics cards support DX12 and Vulkan. You're misinformed!
But seriously, first we have to see the adoption rate of Windows 10. I'll admit I was wrong if it comes to that, but even with the free upgrade, I don't see W10 reaching the super-high adoption rates MS would have us believe.
Far Cry 4 and The Witcher 3 are both GameWorks titles, and they run just fine on the consoles. They run damn well on AMD hardware too. The Witcher 3 has stupid hair effects that bog down all but the best tessellation GPUs, but that's the developers' fault more than NVIDIA's: they applied the maximum amount of tessellation possible to the hair, plus 16x MSAA on just the hair, whenever HairWorks was enabled. Far Cry 4 also uses HairWorks, and it doesn't bog down the GPU like The Witcher 3 does, because the devs didn't go crazy with it.
This isn't about going crazy with simulation effects; it's about implementing them right, which includes running them on the right processor. Physics done on a discrete GPU is bad! With the new APIs, we have more-than-potent iGPUs and CPUs on both AMD and Intel chips that can handle that level of simulation with ease.
Not every simulation task runs faster on a modern GPU, especially a discrete GPU, where you have to move GBs of data to GPU RAM very frequently and go through a context switch (execute kernel code and save/move data in cache and system memory) before you can address GPU memory and registers. Context switches and I/O are really expensive. In a game workload, where the GPU has to be updated very frequently, you're going to waste a lot of CPU and GPU cycles.
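To put rough numbers on that argument, here's a back-of-envelope model of the per-frame overhead when simulation state has to be pushed to a discrete GPU every frame. All the constants (PCIe bandwidth, launch overhead, data sizes) are illustrative assumptions for the sake of the sketch, not measurements of any particular card:

```python
# Back-of-envelope model: per-frame time spent just moving data to a
# discrete GPU and launching kernels, before any actual simulation runs.
# All constants below are illustrative assumptions, not measurements.

PCIE3_X16_BANDWIDTH = 12e9      # ~12 GB/s effective over PCIe 3.0 x16
KERNEL_LAUNCH_OVERHEAD = 10e-6  # ~10 us per kernel launch / context switch

def gpu_transfer_cost(bytes_per_frame, launches_per_frame):
    """Seconds per frame spent on transfers and launches alone."""
    transfer = bytes_per_frame / PCIE3_X16_BANDWIDTH
    launches = launches_per_frame * KERNEL_LAUNCH_OVERHEAD
    return transfer + launches

# Hypothetical workload: 50 MB of physics state and 20 kernel launches
# per frame, against a 60 fps frame budget of ~16.7 ms.
overhead = gpu_transfer_cost(50e6, 20)
frame_budget = 1 / 60
print(f"{overhead * 1e3:.2f} ms of a {frame_budget * 1e3:.2f} ms frame budget")
```

Under these assumed numbers, roughly a quarter of the frame budget is gone before a single physics calculation happens, which is the commenter's point about update frequency dominating the cost.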
Nvidia's GameWorks is simply not the best approach, by a long shot. That's why all we're getting in these games are crappy "physics" effects that are primitive at best. The issue is the way Nvidia is pushing its approach to "physics". We could do better physics than Nvidia's PhysX on a midrange quad-core CPU alone.
The delays incurred every time you need to update the GPU and GPU RAM with new control bits and GBs of data are way too long for consistent frame delivery. PhysX and GameWorks are really meant to make more money for Nvidia, not to improve physics in games, or games in general.
Soft-body physics (cloth/hair) and volumetric fog/smoke were introduced years ago with PhysX. If the CPU is so much better at calculating these effects, why haven't the CPU-based physics engines been able to implement them successfully? Hell, AMD finally had to step up and create an API for soft-body physics, and sure enough, it runs on the GPU too.
Do you have proof of the GTX 960 being better at MSAA?
As for your stance on CPU "physics", well... good luck letting the DX11 runtime manage 10 synchronous threads. It's simply not possible with the current, or rather previous-gen, APIs.
I'd say AMD's HSA chips are quite a good option for heavy simulation and AI processing. The GPU in these chips can address the whole virtual memory space, and communication between CPU and GPU happens in hardware, without any context switches. All you need to do is add pointers in your code where you want execution to switch between the CPU and the GPU. AMD's HSA chips seem like the most sensible platform for these kinds of workloads.