Tuesday, July 29th 2008
NVIDIA GeForce Driver Version 177.79 Released
Along with the release of the new NVIDIA GeForce 9800 GTX+, 9800 GT, and 9500 GT GPUs comes a new beta driver. GeForce driver release 177.79 adds nothing but these three GPUs to the database of supported GeForce 8-series, 9-series, and 200-series GPUs. It may also bring performance improvements in various games and applications. Check the release notes for more information.
DOWNLOAD: Windows XP 32-bit|64-bit, Windows Vista 32-bit|64-bit
Source:
NVIDIA
32 Comments on NVIDIA GeForce Driver Version 177.79 Released
edit: ah it was in the news here, coming in a week.
forums.techpowerup.com/showthread.php?t=66626
www.nvidia.com/object/winxp_177.79.html
www.nvidia.com/object/winxp64_177.79.html
www.nvidia.com/object/winvista_x86_177.79.html
www.nvidia.co.uk/object/winvista_x64_177.79.html
Probably the same as the BETAs, but...
Why would you process PhysX on your GPU? We already have struggling GPUs.
A sub-$200 quad-core can handle the job pretty well.
That CUDA thing is actually a battle between NVIDIA's AGEIA and Intel's Havok.
The more your GPU spends processing one thing, the less it has left for processing something else.
This argument is the same as asking why we process AA or AF on the GPU. If the GPU didn't have to process AA/AF, it could process something else better. Everything is a trade-off.
Of course, in some cases there is absolutely no trade-off, like the GT200 GPUs that have a dedicated section of the GPU die specifically for PhysX. So when you run PhysX on those GPUs, it doesn't take any power away from graphics rendering.
There is also the fact that your main GPU doesn't have to do the PhysX processing at all. You can have a super high-end graphics card doing the graphical rendering and a $50 GPU doing the PhysX, and you will be perfectly fine, with no power being taken away from the graphics GPU.
Firstly, the GPU doesn't do PhysX better, just faster. But will games actually benefit from that?
Secondly, it is a battle between Intel and NVIDIA, because using the GPU to process what's left to the CPU in games (physics and artificial intelligence) means you don't even need a mid-range CPU to play games. That leads to less money for Intel, or more money for NVIDIA.
Thirdly, a CPU can't process pixels/geometry/textures via rasterization. We need a parallel processor, the GPU, to do that. That's why a CPU can't process AA.
Why would I get a quad when no game needs it? It's not like you get hardware physics with a CPU in AGEIA games. My dual-core already handles software physics with no slowdowns on any Crysis explosion.
And yes, I'm not expecting physics for free; it will cost me framerate or eye candy, so? The last level of Crysis is the only place my GPU has struggled, everything else flies. I'm going from 1280x1024 to 1680x1050 once my new LCD arrives, and that will probably be a bigger hit on performance than these physics :)
It's something new and I can get it for FREE. I don't need to go out and buy a PPU or a quad. And it's not like it's in every game, just these: www.nzone.com/object/nzone_physxgames_home.html
If going gets too tough I won't cry, I'll just disable it :p
The difference between PhysX and Havok is definitely a fight between Intel and NVIDIA, but you talked about CUDA. CUDA is a completely different technology, used for far more than PhysX. It is still kind of a fight between the two companies, but not really relevant to this discussion.
Yes, I know a CPU can't process AA, that wasn't my point. My point was that any time you add some eye candy to a game, it takes away processing power from something else. Everything is a trade off. Turn AA down from 16x to 8x and run PhysX with little to no frame rate loss. Or leave AA at 16x and don't use PhysX.
It does seem like my OCs are a little less stable, but then I was pushing this card pretty far... granted, it seems some 280s are getting further.
Nkd, what are you watching your clocks with? EVGA Precision 1.3.1 came out the other day, made by the same guy who created RivaTuner, so either works great. I've used RivaTuner 2.09 and EVGA Precision; for this card I prefer the EVGA tuner since it has only what I need in the first place. I don't need much beyond clocks for the GPU, shaders, and memory, plus fan speed and a few profiles, so it fits perfectly.
I am considering going back to the 177.70s; those seemed more stable while my card was OC'd. I did learn something funny, though: when I had my GPU OC'd too far beyond the shader link, it would kick down to 2D speeds. Playing Combat Arms at 1440x900, all settings maxed with 8x AA, got me 22-35 FPS. It was actually playable, a little hitchy here and there, but I was pretty amused by that! Thought I'd share!
:toast: