
NVIDIA Introduces NVIDIA Quadro CX - The Accelerator For Adobe Creative Suite 4

First of all, the only way to get it to accelerate would be to somehow illegally get your hands on the software that adds CUDA acceleration to Premiere. The only legal way to get and use that software is to buy this Quadro card (sadly). Without that software, a GTX 260 renamed to a Quadro will not accelerate anything. Once you do have the software, however, any CUDA-enabled GPU will accelerate, so renaming the GTX 260 is pointless at that point.
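To illustrate the point: I don't know exactly how Adobe's plugin probes the hardware, so take this as a minimal sketch assuming it simply gates on CUDA capability (which matches the behavior described above). This is the standard CUDA runtime query any application could use:

```cpp
// Minimal sketch: detect any CUDA-capable GPU via the CUDA runtime API.
// Assumption: the acceleration software keys off CUDA capability, not
// the marketing name of the card.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA-capable GPU found.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // A GeForce GTX 260 shows up here just as readily as a Quadro CX;
        // the name string is the only obvious difference.
        printf("Device %d: %s, %zu MB VRAM\n",
               i, prop.name, prop.totalGlobalMem / (1024 * 1024));
    }
    return 0;
}
```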

:roll:

I never knew installing the Quadro driver was so complicated.

In fact, since I have CS4, maybe I should post a nice screenie of the acceleration at work.
 
Sorry to tell you, but you're all wrong. Softmodding isn't possible the way it was before, but some geniuses out there have figured this out.

Now let me set you straight, because I only registered here to tell you how this works.

So! Earlier on, people unlocked shaders and other stuff by physically modifying the cards. Later it was possible to unlock things with a simple BIOS tweak. Now it isn't that easy!

And when it comes to Quadro cards, the biggest difference is usually the RAM, yes, but what you are really paying for is a better guarantee that the card won't crash on you, plus the huge driver package that comes with it for professional CAD work.

So when it comes to GeForce cards, there is an even simpler way to get one working as a Quadro: a newer Quadro softmod script, which actually works together with RivaTuner (http://www.guru3d.com/index.php?page=rivatuner)!

Earlier versions of this didn't work well at all, and it still doesn't raise a GeForce to the full power of a Quadro. But it does increase it!

Wish you good luck!

BTW, the script is a plugin for RivaTuner.
 
Quadro cards, the biggest difference is usually the RAM

Here's a simple comparison of the 8800 GTX and its variants:

The Quadro FX 4600 is a renamed and underclocked 8800 GTX that costs $1,100.

The Quadro FX 5600 is a renamed and overclocked 8800 GTX with 2x the memory: 1.5 GB as opposed to the "low" 768 MB. This one is $2,200.

The 8800 GTX uses exactly the same chip, albeit with slightly different core/shader/memory speeds, and while it's actually faster than its underclocked Quadro variant, it now costs below $200, if you can find one. While the QFX 5600 is a bit faster than the 8800 GTX, the 8800 Ultra is faster than both, since it's even more overclocked. The 8800 Ultra is also sub-$200 today... if found.

Now let's look at the performance difference between the highest-end Quadro cards and a two-generations-old GeForce:

Which one is faster for 3ds Max, AutoCAD or Maya?

Here's the kicker: the 8800!

In all tests and in every possible way, the 8800 is faster in real work. How? It's actually physically faster than its Quadro variants, and the added RAM on the 5600 model does nothing, since 768 MB is already far more than any of these users/tests touch to begin with. 1.5 GB is nothing more than a marketing ploy for the clueless, or nice if you have a CPU from the future (are we there yet, Doc?) that can handle the billion-polygon scene you would need to use up 1.5 GB of RAM.
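For some rough numbers behind that, here's a back-of-the-envelope sketch (plain C++; the per-vertex and per-triangle byte counts are my own assumptions for a typical viewport layout, not measured values):

```cpp
// Back-of-the-envelope: how many triangles of raw geometry fit in VRAM?
// Assumed viewport vertex: position + normal + one UV set as 32-bit
// floats, roughly one unique vertex per triangle, 32-bit indices.
#include <cstdio>

int main() {
    const double bytesPerVertex   = (3 + 3 + 2) * 4;          // 32 bytes
    const double bytesPerTriangle = 3 * 4 + bytesPerVertex;   // 44 bytes

    const double vram[]  = { 768e6, 1.5e9 };
    const char*  names[] = { "768 MB (8800 GTX)", "1.5 GB (FX 5600)" };
    for (int i = 0; i < 2; ++i) {
        printf("%s: roughly %.0f million triangles of raw geometry\n",
               names[i], vram[i] / bytesPerTriangle / 1e6);
    }
    return 0;
}
```

Even with those generous per-triangle costs, you'd need tens of millions of triangles on screen before the extra 768 MB even starts to matter, and no realistic scene today gets anywhere near that.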

but surely this can't be, just look at SpecCheatTests
(I realize I'm quoting myself on an as-yet-unstated sentence)

It's true that SpecCheatTests won't reflect what I've just said, but those tests are synthetic and OpenGL-based. OpenGL is not used by any of the mentioned professional programs for ANY serious work today.

The 8800, aside from having its drivers crippled for windowed OpenGL work, is also treated badly by SpecCheatTest; those conditions are next to impossible to recreate in actual work with many of the newer professional programs, which have long since ditched the outdated and backward OpenGL path. OpenGL, besides being old/slow/ugly, is additionally crippled on all non-workstation cards, as if it weren't slow and ugly enough to begin with.

To see what I'm talking about with OpenGL vs. D3D performance, just test Max/Maya/AutoCAD on a workstation card in both OpenGL and D3D modes. You'll surely notice that OGL is 10x slower at best, and 100x slower on average. It might be a bit harder for a novice to notice, but light and shadow effects/quality are sub-par or nonexistent under OGL as well.

And this is all on "professional" cards whose OpenGL drivers aren't crippled. When you factor in the traditional driver crippling that takes away another 100x of performance on the same-class GeForce chip, the slowdowns multiply out to 1,000-10,000x slower than D3D, while looking worse too.

Who uses OpenGL for anything anymore? Poor, obsolete CAD users, and people forced into OpenGL by the lack of choice on their OS (Mac, Linux...).
 
Yeah, too bad the people who run those tests don't sit and work with high-poly models through the roof, with huge textures, while working on physics and animation at the same time... because in computer games that stuff is pre-calculated. But in Maya and 3ds Max those are things the GPU needs to calculate for a high-quality render. And when you do a Mental Ray render with over a million faces, you can start swallowing RAM chips like they were H2O molecules... and you know you need a lot of them, or it (and you) will choke...

So, no need for loads of RAM? Well... don't come here and say 4 GB of RAM on the mobo and 1 GB on the GPU is enough. I know for sure, from working with programs like Maya and Max in a professional manner, that it ain't enough...

But except for that, I won't say you're wrong!
 
Mental Ray will probably be done on GPU power one day. That day isn't today, and by the looks of it, it isn't that close yet; then again, it isn't very far away either.

All scene requirements at rendering time are handled by the CPU and system RAM. You can have integrated graphics without a single MB of dedicated video RAM, or a Quadro FX 5600 with 1.5 GB; it won't make a 0.01% difference.

Scene previewing is another matter altogether, and all the x8xx cards that have come out over the last 2 years are capable of billion-polygon scenes with 100k textures. Modern CPUs aren't. They might be, if the viewports were optimized to use more than one core, but sadly that is not yet the case.

For over 2 years, CPUs have been the bottleneck of high-end 3D programs, not GPUs. Still, interestingly enough, GPUs keep getting ridiculously better while CPUs see only minute improvements.
 
Sounds like those douche coders need to learn a lesson from gaming. OpenGL rules.
 
Can't CUDA cards also accelerate? So why would we need this, unless we're ridiculous game designers?
 
Softmod of GeForce GTX 260/280 to Quadro CX

I think most users of Premiere CS4 wonder if it's possible to decrease H.264 rendering time with a softmodded GTX 260/280. Maybe not as much as with a Quadro CX, but GPU rendering will no doubt be faster than ordinary CPU rendering. Has anyone actually tested this?
 