With Quadro cards, the biggest difference is usually the RAM.
Here's a simple comparison of the 8800GTX and its variants:
The Quadro FX 4600 is a renamed, underclocked 8800GTX that costs $1,100.
The Quadro FX 5600 is a renamed, overclocked 8800GTX with twice the memory: 1.5GB as opposed to the "low" 768MB. This one is $2,200.
The 8800GTX uses the exact same chip, though with slightly different core/shader/memory clocks, and while it's actually faster than its underclocked Quadro variant, it now costs under $200, if you can find one. The QFX 5600 is a bit faster than the 8800GTX, but the 8800 Ultra is faster than both, since it's even more overclocked. The 8800 Ultra is also sub-$200 today... if you can find one.
Now let's look at the performance difference between these highest-end Quadro cards and a two-generation-old GeForce:
Which one is faster for 3ds Max, AutoCAD or Maya?
Here's the kicker: the 8800!
In all tests and in every possible way, the 8800 is faster in real work. How? It's actually physically faster than its Quadro variants, and the added RAM on the 5600 model does nothing, since 768MB is already far more than any of these users or tests need to begin with. 1.5GB is nothing more than a marketing ploy for the clueless, or nice if you have a CPU from the future (are we there yet, Doc?) that can push the tens-of-millions-of-polygons scene you'd need just to fill 1.5GB of video RAM.
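If you want to sanity-check that claim, here's a rough back-of-the-envelope sketch in Python. The byte counts are my own assumptions (32 bytes per vertex for position/normal/UV, roughly one vertex per two triangles in a dense shared mesh, 32-bit indices), not anything NVIDIA publishes:

```python
# Back-of-the-envelope VRAM math for viewport geometry.
# Assumptions (mine): 32 bytes per vertex (float3 position,
# float3 normal, float2 UV), ~1 vertex per 2 triangles in a
# dense shared mesh, and 32-bit triangle indices.
BYTES_PER_VERTEX = 12 + 12 + 8   # position + normal + UV
BYTES_PER_INDEX = 4
VERTS_PER_TRI = 0.5              # vertices are shared between triangles

def triangles_to_fill(vram_gb: float) -> float:
    """How many triangles of raw geometry fit in a given VRAM budget."""
    per_tri = VERTS_PER_TRI * BYTES_PER_VERTEX + 3 * BYTES_PER_INDEX
    return vram_gb * 1024**3 / per_tri

for gb in (0.75, 1.5):  # 8800GTX (768MB) vs Quadro FX 5600 (1.5GB)
    print("%.2f GB holds roughly %.0fM triangles" % (gb, triangles_to_fill(gb) / 1e6))
```

Under those assumptions, even the "low" 768MB swallows the geometry of a nearly 30-million-triangle scene, and 1.5GB buys you roughly 58 million. Textures eat into the budget too, but a typical production scene is a couple of million polygons, nowhere near either limit.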
"But this surely can't be, just look at SpecCheatTests!"
(I realize I'm quoting an objection that hasn't actually been raised yet.)
It's true that SpecCheatTests won't reflect what I've just said, but those tests are synthetic and OpenGL-based. OpenGL is not used by any of the mentioned professional programs for ANY serious work today.
The 8800, aside from having its drivers crippled for windowed OpenGL work, is also treated badly by SpecCheatTests; those conditions are next to impossible to recreate in actual work with the newer professional programs, which have long since ditched the outdated and backward OpenGL path. OpenGL, besides being old, slow, and ugly, is additionally crippled on all non-workstation cards, as if it weren't slow and ugly enough to begin with.
To see what I'm talking about with OpenGL vs. D3D performance, just test Max/Maya/AutoCAD with a workstation card in both OpenGL and D3D modes. You'll notice the speed is 10x slower at best, and 100x slower on average, when using OGL. A novice might find it harder to spot, but light and shadow effects and their quality are sub-par or nonexistent under OGL as well.
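Here's a minimal timing sketch you can paste into Maya's script editor to measure it yourself. One assumption on my part: you flip Maya's rendering engine between OpenGL and Direct3D in the preferences (the Direct3D engine is Windows-only), restart, and run the same snippet on the same scene in each mode:

```python
# Minimal viewport timing sketch for Maya's script editor.
# Run once per rendering engine (OpenGL vs Direct3D, switched in
# the preferences -- that switch is my assumption here) on the
# same scene, then compare the fps numbers.
import time
import maya.cmds as cmds

def time_viewport(frames=100):
    """Step the timeline and force a full viewport redraw per frame."""
    start = cmds.playbackOptions(query=True, minTime=True)
    t0 = time.time()
    for i in range(frames):
        cmds.currentTime(start + i, edit=True)
        cmds.refresh(force=True)  # force an immediate redraw
    elapsed = time.time() - t0
    print("%d redraws in %.2fs -> %.1f fps" % (frames, elapsed, frames / elapsed))

time_viewport()
```

Same scene, same card, same snippet; the only variable is the API path, so whatever gap shows up is the driver and the renderer, not the benchmark.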
And this is all on "professional" cards whose OpenGL drivers aren't crippled. When the traditional driver crippling takes away another 100x of performance on the same-class GeForce chip, you end up 1,000-10,000x slower than D3D, while looking worse too.
Who uses OpenGL for anything anymore? Poor, obsolete CAD users, and people forced into OpenGL by the lack of choice on their OS (Mac, Linux...).