# Why the FX line sucks/sucked so bad.



## AshenSugar (Feb 11, 2007)

OK, to start: I'm posting this to clear up some misconceptions/FUD I've seen people posting about the FX cards "emulating DX9", and why they sucked.
http://techreport.com/news_reply.x/4782/4/
It's got two useful links near the top of the comments.

from wikipedia : http://en.wikipedia.org/wiki/GeForce_FX


> Specifications
> NVIDIA's GeForce FX series is the fifth generation in the GeForce line. With GeForce 3, NVIDIA introduced programmable shader units into their 3D rendering capabilities, in line with the release of Microsoft's DirectX 8.0, and the GeForce 4 Ti was an optimized version of the GeForce 3. With real-time 3D graphics technology continually advancing, the release of DirectX 9.0 ushered in a further refinement of programmable pipeline technology with the arrival of Shader Model 2.0. The GeForce FX series brings to the table NVIDIA's first generation of Shader Model 2 hardware support.
> 
> The FX features DDR, DDR2 or GDDR-3 memory, a 130 nm fabrication process, and Shader Model 2.0/2.0A compliant vertex and pixel shaders. The FX series is fully compliant and compatible with DirectX 9.0b. The GeForce FX also included an improved VPE (Video Processing Engine), which was first deployed in the GeForce4 MX. Its main upgrade was per pixel video-deinterlacing — a feature first offered in ATI's Radeon, but seeing little use until the maturation of Microsoft's DirectX-VA and VMR (video mixing renderer) APIs. Among other features was an improved anisotropic filtering algorithm which was not angle-dependent (unlike its competitor, the Radeon 9700/9800 series) and offered better quality, but affected performance somewhat. Though NVIDIA reduced the filtering quality in the drivers for a while, the company eventually got the quality up again, and this feature remains one of the highest points of the GeForce FX family to date (However, this method of anisotropic filtering was dropped by NVIDIA with the GeForce 6 series for performance reasons).
> ...



haxxxx!!!!



> Questionable tactics
> 
> NVIDIA's GeForce FX era was one of great controversy for the company. The competition had soundly beaten them on the technological front and the only way to get the FX chips competitive with the Radeon R300 chips was to optimize the drivers to the extreme.
> 
> ...


more haxxxx!!!!!!!




> Competitive response
> 
> By early 2003, ATI had captured a considerable chunk of the high-end graphics market and their popular Radeon 9600 was dominating the mid-high performance segment as well. In the meantime, NVIDIA introduced the mid-range 5600 and low-end 5200 models to address the mainstream market. With conventional single-slot cooling and a more affordable price-tag, the 5600 had respectable performance but failed to measure up to its direct competitor, Radeon 9600. As a matter of fact, the mid-range GeForce FX parts did not even advance performance over the chips they were designed to replace, the GeForce 4 Ti and MX440. In DirectX 8 applications, the 5600 lost to or matched the Ti 4200. Likewise, the entry-level FX 5200 did not perform as well as the DirectX 7.0 generation GeForce 4 MX440, despite the FX 5200 possessing a far better 'checkbox' feature-set. FX 5200 was easily matched in value by ATI's older R200-based Radeon 9000-9250 series and outperformed by the even older Radeon 8500.
> 
> ...


The 256MB 9600 I had stomped the 5800 Ultra I had at EVERY SINGLE THING, and it was around 1/3 to 1/4 the price!!!!



> Windows Vista and GeForce FX PCI cards
> 
> Although ATI's competitive cards clearly surpassed the GeForce FX series among many gamers, NVIDIA may regain some market share with the release of Windows Vista, which requires DirectX 9 for its signature Windows Aero interface. Many users with systems with an integrated graphics processor (IGP) but without AGP or PCIe slots, that are otherwise powerful enough for Vista, may demand DirectX 9 PCI video cards for Vista upgrades, though the size of this niche market is unknown.
> 
> ...


pwned again!!!!

See the thumb for the specs of these cards; I'll also link the specs of ATI's R300 core:
http://en.wikipedia.org/wiki/Radeon_R300
The thumb shows the specs of the R300-range cards, cards that totally stomp the NVIDIA equivalents!!!


----------



## AshenSugar (Feb 11, 2007)

http://www.digit-life.com/articles2/gffx/gffx-ref-p1.html






Note the specs/layout. BAD BAD BAD!!!!


----------



## Jarman (Feb 11, 2007)

Didn't know if you mentioned it in your copy/paste there; didn't see it anyway. From what I remember, NVIDIA went down the route of thinking games were going to use heavy vertex shading, and so integrated far more vertex processors than pixel shader processors. ATI went the other way and integrated more pixel shader processors. Games became shader-heavy, and this helped the R300 cores no end.

The NVIDIA core was technically superior to the R300 in many ways: a better manufacturing process, 128-bit DX9 support, and partial-precision (64-bit??) support. From what I remember, again, it was the 128-bit support, instead of the lower 96-bit precision required by DX9, that caused a lot of wasted clock cycles.

But who really cares that Nvidia released some crap cards 80 years ago :S Hell, I had a 5900XT and it sucked; my older GF4 Ti 4600 beat it in several situations. But it doesn't really bother me that much in 2007.


----------



## AshenSugar (Feb 11, 2007)

No, NVIDIA's design wasn't better in any way. They used partial-precision shaders: 48-bit as much as possible, 64-bit when needed, and 128-bit only when forced, when there was no other choice. Read the quotes; it's all shown in there.

Read the article.



> Firstly, the chips were designed for use with a mixed precision fragment (pixel) programming methodology, using *48-bit integer ("FX12") precision and also (to a lesser extent) a 64-bit "FP16" for situations where high precision math was unnecessary to maintain image quality, and using the 128-bit "FP32" mode only when absolutely necessary*. The R300-based cards from ATI did not benefit from partial precision in any way because these chips were designed purely for Direct3D 9's required minimum of 96-bit FP24 for full precision pixel shaders. For a game title to use FP16, the programmer had to specify which pixel shader instructions used the lower precision by placing "hints" in the shader code. Because ATI didn't benefit from the lower precision and the R300 performed far better on shaders overall, and because it took significant effort to set up pixel shaders to work well with the lower precision calculations, the NV3x hardware was usually crippled to running full precision pixel shaders all the time.
> 
> Additionally, the NV30, NV31, and NV34 were also handicapped because they contained a mixture of DirectX 7 fixed-function T&L units, DirectX 8 integer pixel shaders, and DirectX 9 floating point pixel shaders. The R300 chips emulated these older functions on their pure Shader Model 2 hardware, allowing far more transistors to be devoted to SM2 performance within the same transistor budget. For NVIDIA, with their mixture of hardware, this resulted in non-optimal performance in pure SM2 programming, because only a portion of the chip could calculate this math.
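
The mixed-precision point above can be sketched in a few lines. This is not shader code, just a rough numerical illustration (using Python's stdlib half-precision `struct` format as a stand-in for FP16) of why accumulating shader math at FP16 drifts further from the true result than FP32:

```python
import struct

def to_fp16(x):
    # Round-trip through IEEE 754 half precision (10-bit mantissa).
    return struct.unpack('e', struct.pack('e', x))[0]

def to_fp32(x):
    # Round-trip through IEEE 754 single precision (23-bit mantissa).
    return struct.unpack('f', struct.pack('f', x))[0]

# Accumulate 1000 small contributions (think: many tiny lighting terms).
full = 0.0
half = 0.0
for _ in range(1000):
    full = to_fp32(full + to_fp32(0.001))
    half = to_fp16(half + to_fp16(0.001))

print(full)  # very close to 1.0
print(half)  # noticeably further off: FP16 rounding error accumulates
```

This is why the quote says FP16 was only used "where high precision math was unnecessary to maintain image quality": over long shader computations the rounding error becomes visible, which is also why FP24/FP32 were the full-precision modes.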



And it matters because they got away with it once, and it may happen again: the G80 currently doesn't have working Vista drivers (no DX10 support), yet that's one of its main selling points.

EDIT: Please don't reply to posts I make without at least reading the article/post; it's very disrespectful.

And hey, look everybody, an NVIDIA fanboi!!!!!


----------



## niko084 (Feb 11, 2007)

NVIDIA and ATI power will continue to change hands; one will one-up the other, and it will flip back and forth for a long time. Same with Intel and AMD... it just keeps going.


----------



## Zubasa (Feb 11, 2007)

niko084 said:


> NVIDIA and ATI power will continue to change hands; one will one-up the other, and it will flip back and forth for a long time. Same with Intel and AMD... it just keeps going.


Hopefully so; if either of these titans falls, it will be a catastrophe :shadedshu
I can't imagine how much a Core 2 Duo would cost if AMD weren't there.


----------



## xvi (Feb 11, 2007)

niko084 said:


> NVIDIA and ATI power will continue to change hands; one will one-up the other, and it will flip back and forth for a long time. Same with Intel and AMD... it just keeps going.



True. Nvidia is all like "Hurr! Our 8800GTX stomps the x1950XTX." and now ATI is going to be all like "Hurr! Our R600 (rumored to be aka: X2800) stomps the 8800GTX.", then Nvidia will come out with a 8900, and ATI with a (still rumored) x2900, etc...

Intel and AMD, like you said, are the same. Intel is all like "Hurr! My Pentium 3 is better." then AMD was like "Well, hurr.. My Duron is better." then Intel is all like "Hurr! My Pentium 4 is better." then AMD is like "Hurr! My Athlon XP is better.", then Intel is like "Hurr! My Core 2 Duo is better." and now AMD is going to be all like "Hurr! My K8L is better." And then Intel will stick their memory controllers on the processor and implement their own version of HyperTransport and be all like "Hurr! My whatever is better!"...

It always happens. All you have to do is choose a side and wait for that side to jump ahead before you build a new computer.


----------



## AshenSugar (Feb 11, 2007)

Actually, from what I hear, Intel plans a dual-northbridge solution to give the FSB more bandwidth (haxxxx).

And the X2900 is about ready for market; pics are showing up now, check the news section. The difference is AMD/ATI didn't rush out their card; NVIDIA did. And the 8900, from all evidence, is going to be an attempt to save face against the X2800/2900 cards: it's a G80 (same exact core as the 8800) with driver tweaks and higher clocks. Woo, big upgrade there. Maybe they should actually get some decent drivers out for the 8800, maybe some working Vista drivers that would actually make the 8800 into a DX10 card.


----------



## Ketxxx (Feb 11, 2007)

The driver "optimizations" part is inaccurate. While nVidia did indeed use blatantly obvious, aggressive hacks and were indeed cheating, ATi can't, under any circumstances, be accused of cheating, as their driver "optimizations" were categorically PROVEN not to drastically affect (if at all) image quality, and the scenes were still fully rendered.


----------



## AshenSugar (Feb 11, 2007)

ATI admitted they optimized their drivers, and NVIDIA fans did accuse those optimizations of being the same thing NVIDIA was doing (even though they weren't even close to the same things NV did, because they didn't affect quality).


----------



## Ketxxx (Feb 11, 2007)

That's the point: ATI's optimizations were genuine; nVidia's were not.


----------



## xvi (Feb 11, 2007)

Ketxxx said:


> The driver "optimizations" part is inaccurate. While nVidia did indeed use blatantly obvious, aggressive hacks and were indeed cheating, ATi can't, under any circumstances, be accused of cheating, as their driver "optimizations" were categorically PROVEN not to drastically affect (if at all) image quality.



*Hah!* Another person who still says nVidia (and not Nvidia). Thank you! And he's right. 



AshenSugar said:


> Actually, from what I hear, Intel plans a dual-northbridge solution to give the FSB more bandwidth (haxxxx).


I heard something about a "CSI" bus? Or Q-something? Basically a ripoff competitor of HyperTransport. And now they want to stick a memory controller on the processor?! I've heard of copying, but isn't this a bit... bold?


----------



## AshenSugar (Feb 11, 2007)

Intel doesn't want to move the memory controller onto the chip; it would be losing face, since they still claim that chipset-based is more versatile and blah blah blah. Hence using two or more northbridges, and possibly some kind of quad-data-rate RAM, to widen the FSB so it isn't saturated with data.


----------



## xvi (Feb 11, 2007)

AshenSugar said:


> Intel doesn't want to move the memory controller onto the chip; it would be losing face, since they still claim that chipset-based is more versatile and blah blah blah. Hence using two or more northbridges, and possibly some kind of quad-data-rate RAM, to widen the FSB so it isn't saturated with data.



http://www.theinquirer.net/default.aspx?article=37373

http://www.theinquirer.net/default.aspx?article=37392

And finally...
http://www.theinquirer.net/default.aspx?article=37432

I'll try searching for articles outside of the Inq, but until then...


----------



## Jarman (Feb 11, 2007)

AshenSugar, I did read the article, I just didn't take in every last word... any idea how big that thing is?

As for being an Nvidia fanboy, I'd like to think not. Although my main card is a 7900GT (voltmod next week), I also own an X800XT PE, and the chipset on the RDX200 motherboard in this machine is also ATI, although the southbridge sucks and I wish I hadn't taken my NF4 board out to put this in. But there we go; I can't be assed changing them over again. I'm also not a fan of the power-hungry cards ATI have released of late. It was one of the main reasons for me choosing the 7900GT when I did: the 45 W or so power consumption was exceptional for such a card.


----------



## AshenSugar (Feb 12, 2007)

Jarman, the only people I have ever seen say that the FX line was superior in any way to the R300 range are INSANE fanboys.

And yes, the old SB400/450 southbridges sucked ass for USB performance; the ULI southbridge was a lot better. The 600 kicks arse now, though.

As for rated watts/volts/amps, my experience is that ATI and NV calculate these differently, just as they count transistors differently. ATI tends to be literal with their calculations; as for NVIDIA, well, nobody's figured out how they calculate such things yet.

I worry more about amp draw, since watts isn't really a very good method for rating PC devices IMHO, mainly because there are no set standards for rating PSUs or cards.

They could rate their stuff at conservative numbers and blow past that with overclocking, or they could rate the cards at "worst case". It leaves too much room for screwing around, IMHO. Everybody needs to rate their cards/parts at the WORST POSSIBLE CASE; this way we can be sure what we are really getting.

And yes, the 7900 is slightly lower-power than the X1900 range, but then again, the X1900 range slaps the 7 series around like a red-headed stepchild when it comes to raw power (not to mention better drivers).

Look at the 8800; the thing's an aircraft carrier!!!


----------



## tkpenalty (Feb 12, 2007)

xvi said:


> True. Nvidia is all like "Hurr! Our 8800GTX stomps the x1950XTX." and now ATI is going to be all like "Hurr! Our R600 (rumored to be aka: X2800) stomps the 8800GTX.", then Nvidia will come out with a 8900, and ATI with a (still rumored) x2900, etc...
> 
> Intel and AMD, like you said, are the same. Intel is all like "Hurr! My Pentium 3 is better." then AMD was like "Well, hurr.. My Duron is better." then Intel is all like "Hurr! My Pentium 4 is better." then AMD is like "Hurr! My Athlon XP is better.", then Intel is like "Hurr! My Core 2 Duo is better." and now AMD is going to be all like "Hurr! My K8L is better." And then Intel will stick their memory controllers on the processor and implement their own version of HyperTransport and be all like "Hurr! My whatever is better!"...
> 
> It always happens. All you have to do is choose a side and wait for that side to jump ahead before you build a new computer.



Dude, G80 is just an ultra-beefed NV7x chip.

In truth, G80 is just a compilation of old technology; it emulates DX10 with difficulty.


----------



## anticlutch (Feb 12, 2007)

tkpenalty said:


> Dude, G80 is just an ultra-beefed NV7x chip.
> 
> In truth, G80 is just a compilation of old technology; it emulates DX10 with difficulty.



Case in point: Crysis. 
Even if you factor in that the benchmark was done with unreleased beta drivers, those are some pretty abysmal numbers...


----------



## tkpenalty (Feb 12, 2007)

lol yeah... G80 dies in that because it's not a shader-based GPU anyway. It has power at the cost of heat, like the R600 does, though R600 is just nearly as crazy as the "AMDTI FX12000".


----------



## AshenSugar (Feb 12, 2007)

The R600, though, I'm quite sure is shader-heavy, as its lesser variants will be. The G80 is a tweaked and slightly updated G70: they made it able to do HDR+AA and then added a crapload of pipes. Wooo, that's... well, IMHO that's lazy. And NV fanbois like to blurt out that the R420 (X800 range) was basically just a beefed-up R300/350 (it was, but they did A LOT of tweaking to the chip, not just adding more pipes).

The G80 is what's known as a refresh product: they updated and tweaked what they already had to make a very powerful card, but they didn't actually make the feature set much more robust. I wouldn't expect anything truly new from NVIDIA till the 9900 range or even the 10900 range.


----------



## Zubasa (Feb 12, 2007)

AshenSugar said:


> The R600, though, I'm quite sure is shader-heavy, as its lesser variants will be. The G80 is a tweaked and slightly updated G70: they made it able to do HDR+AA and then added a crapload of pipes. Wooo, that's... well, IMHO that's lazy. And NV fanbois like to blurt out that the R420 (X800 range) was basically just a beefed-up R300/350 (it was, but they did A LOT of tweaking to the chip, not just adding more pipes).
> 
> The G80 is what's known as a refresh product: they updated and tweaked what they already had to make a very powerful card, but they didn't actually make the feature set much more robust. I wouldn't expect anything truly new from NVIDIA till the 9900 range or even the 10900 range.


I wonder how nVidia will name the GF10900. 
ATi went on to use X = 10 (Roman numerals)


----------



## AshenSugar (Feb 12, 2007)

Knowing NVIDIA, they will go for numbers that look more impressive, so 10**0 and up. I would fall over if they called it the 88800.


----------



## anticlutch (Feb 12, 2007)

Or the GeForce 66666?


----------



## tkpenalty (Feb 12, 2007)

Yes! ATI should patent the X numeral in lettering! Time for ATI/AMD to be a dickhead for the right things (like they always do) >=D. Gosh, I love ATI/AMD; they had it all planned out. Nvidia is probably going to file a lawsuit for monopolising the GPU industry. Therefore:

There would be no such thing as the:

-Geforce 9600
-Geforce 9800
-Geforce 8500

Nvidia seriously needs to consider a new naming code, or else they are screwed >=D.
I will literally ROFHASWL (rolling on the floor having a seizure while laughing) if they make the 10800; that looks seriously stupid. I would say if they wanted to be the "best" they should name it GeForce "TO INFINITY AND BEYOND".

I don't get how the X800 was criticised as a revision of the 9800; that's FUD.


----------



## AshenSugar (Feb 12, 2007)

Well, it is in a way; the core is an updated 9800/R350, more advanced, with more pipes. The advantage of it being fairly closely related to the R300/350 cores was that driver development was easier and the cards could more easily share drivers.

ATI updated the PS2.0b to version f, added more pipes, a better memory controller; the list goes on and on. But it is an evolution of a theme/design, just as the 6/7/8 are all evolutions of the same core: a tweak here and there, higher clocks, more or fewer pipes/shaders. The X1K is a new design, modular by design. ATI/AMD could make a new version of the X1900 core that had 32 pipes and 128 shaders if they wanted (imagines that and drools), or 256 shaders, because they can add/remove shaders pretty easily. Just imagine, if AMD/ATI are smart, they make a PCIe 1x card that's not even a video card: using the X1K design, 4-8 pipes, a HUGE number of shader units (32, 48, 64...), with its own 128/256MB of RAM. Make the card very small, and because there's no video out they could have the card exhaust the heat out the back; design it to fit under/next to/above the X1900-series cards. I hope this happens; it would rock hard IMHO.


----------



## anticlutch (Feb 12, 2007)

@ AshenSugar: Why aren't you working for ATI?


----------



## AshenSugar (Feb 12, 2007)

Forgot to say, I always wanted to see a GeForce 6660, maybe a 106660, lol.


----------



## AshenSugar (Feb 12, 2007)

anticlutch said:


> @ AshenSugar: Why aren't you working for ATI?



I wish 

And no, I had an FX 5800 Ultra; it was a pile of very very very very very very noisy crap.

Also, I've seen people posting over and over that the FX line "emulated DX9", when you can't emulate DX9, because you can't software-emulate PS2.0. It has the hardware to run DX9, just not very well.

And come on, wouldn't a PCIe 1x card that boosted game quality and performance be kick-ass?


----------



## anticlutch (Feb 12, 2007)

Yeah, it would, but imagine the heat! Two X1950 Pros in CrossFire (I prefer these since they have the internal connectors) with a sound card and the PCI add-on cards would probably raise your temps like no other... and you'd need a heck of a power supply to provide enough power as well!

(I'd still buy it though <_< )


----------



## Zubasa (Feb 12, 2007)

tkpenalty said:


> Yes! ATI should patent the X numeral in lettering! Time for ATI/AMD to be a dickhead for the right things (like they always do) >=D. Gosh, I love ATI/AMD; they had it all planned out. Nvidia is probably going to file a lawsuit for monopolising the GPU industry. Therefore:
> 
> There would be no such thing as the:
> 
> ...


They actually can't patent a numbering system.
Remember how AMD pissed off Intel over the 80x86 CPUs?
That was the beginning of the "glory" of Pentium (the Pentium would otherwise have been the 80586).


----------



## Jarman (Feb 12, 2007)

OK, so rating in amps is better than rating in watts?

That is most definitely wrong; amps are dependent on voltage.

P = V x I for a DC circuit

Power (watts) = volts x amps

E.g., say you are on a 230 V EU power supply, and your PC draws 500 watts from the mains:

I = P / V = 500 / 230 = about 2.2 amps

And say your chip draws 100 watts at 1 V (just an example, remember):

I = P / V = 100 / 1 = 100 amps. So see why you can't use amps to compare things? Unless the voltages are the same, of course.

I (A) = P / V, so the higher the voltage on the chip, the lower the current draw appears to be. Therefore watts is a better way to go, or joules per second if you want to get really technical about it and put it in SI units.
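
The arithmetic above can be sketched in a couple of lines (same example numbers as the post; `amps` is just a helper name for illustration):

```python
def amps(power_watts, volts):
    # I = P / V for a DC circuit.
    return power_watts / volts

# Whole PC drawing 500 W from 230 V mains: only ~2.2 A.
print(round(amps(500, 230), 1))  # 2.2

# A 100 W chip at a 1 V core voltage: 100 A.
print(amps(100, 1.0))  # 100.0
```

Same order of power in both cases, yet the amp figures differ by a factor of almost fifty, which is the whole point: amps only compare like with like when the voltage is fixed.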

You also say the SB400/450 sucked; agreed, I'm using one now. The SB600 rocks though, also agreed, and you will only see it on my favourite board manufacturer (definitely a DFI fanboy). So if ATI release another not-so-good chipset, I expect you will have the same reaction you are currently having to NV graphics cards, yes?

As for the X1900 being faster: my 7900GT is stable at 620 GPU, 915 RAM, and that's without a voltmod, and it was £70 cheaper than an X1900XT when I bought it. I'd like to bet it would give an X1900 some competition, compared to the 450/650 defaults.

As for a slight difference in power, the X1900XT draws about 80-90% more than the 7900GT; that's more than "slightly lower power".

http://www.vr-zone.com/?i=3335&s=8 for 7900GT power consumption and x1900XTX

http://www.xbitlabs.com/articles/video/display/gpu-consumption2006_4.html for x1900XT

I am not saying the X1900 series of cards are bad, most certainly not, but the Nvidia range have their redeeming features, and the 7900GT has unbelievable overclocking potential, especially if you are prepared to do a bit of voltmodding.


----------



## AshenSugar (Feb 12, 2007)

No voltmodding needed on ATI cards; you can tweak the BIOS to get that.

And when I say it's better to look at amps, I'm not talking about your math; I'm talking about how PSUs and such are rated by the maker. A crappy 850-watt PSU is going to be worse than a high-quality 300-watt PSU; that's fact.

Going by the published watt/amp numbers from AMD/ATI and NVIDIA is, in my experience, a bad idea, because they are subject to the people writing them out. Say you give them a watt/amp range, 10-28 amps and 85-125 watts. Depending on the person writing the official specs, you could end up with the card being rated "best possible", with 10 amps and 85 watts draw, or worst possible, with 28/125. The best way is worst case, but then again it may look bad to list the worst-case specs, so marketing comes into play.

And if you watch around, deals pop up for cards all the time. I paid 278 for my X1900XT/XTX (flashed to the Toxic BIOS); the 512MB 7900GT was over 300 at the time. And at stock core volts I can get 685 core / 1650 RAM; well, that's as far as I have tried so far. Going by results I have seen around with my cooler and the same card, I should be able to get the core to the 700 range before I need to add vGPU. That's pretty damn good. 

And by official specs I should be using a 500+ watt PSU. I'm on a 400-watt Fortron SATA-series PSU with a 12cm fan and 16 + 18 amp 12 V rails. Guess what: I have 4 HDDs (2 high-draw Seagates, 2 Maxtors), 2 optical drives, a 3500+ (2.2 GHz) @ 3.03 GHz on stock air, 6 fans, and it's ROCK STABLE!!!!! 

That's what I mean about rating by amps: if I was using a blah-quality 850-watt PSU, I probably wouldn't even be able to POST.
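
For what it's worth, the naive arithmetic on those rails looks like this (a sketch using the numbers from the post; note that real PSUs usually cap the combined 12 V output below the sum of the per-rail limits, so treat the sum as an upper bound):

```python
# 400 W PSU with 16 A and 18 A 12 V rails, per the post above.
rail_amps = [16, 18]
volts = 12

# Naive ceiling on 12 V delivery: sum of the rail limits times the voltage.
# A quality unit that can actually sustain its rated rail amps matters more
# than a big number on the wattage sticker.
max_12v_watts = sum(rail_amps) * volts
print(max_12v_watts)  # 408
```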


----------



## cdawall (Feb 13, 2007)

May I mention this: I have a PNY FX 5700 VIVO and, umm, well, it sucks at everything except video rendering and image quality. It is one of the BEST cards for video quality for the money, but on the other hand, even with the card running 500/700 MHz, the thing still falls behind my AsusTek Ti 4200 @ 330/605 MHz, which is sad. 

The only thing the FX series was good at was video rendering, and it was quite good at that; NVidia just missed the gaming mark.

Though the FX 5900 did beat the 9700 Pro PROOF

And on paper the card kills the whole ATi series <--- on paper only, though.

Another article showing the ATi 9700 Pro being outdone


----------



## mullered07 (Feb 13, 2007)

tkpenalty said:


> Dude, G80 is just an ultra-beefed NV7x chip.
> 
> In truth, G80 is just a compilation of old technology; it emulates DX10 with difficulty.



It doesn't even emulate DX10 at this point in time; it's a high-end DX9 card, and at best the best DX9 card, because it sure as hell ain't DX10. But I'm thinking of getting an 8800GTS 320MB for my next high-end DX9 card, and then getting a 2nd-gen midrange DX10 card when drivers and games permit.   

Peace.


----------



## AshenSugar (Feb 13, 2007)

cdawall said:


> May I mention this: I have a PNY FX 5700 VIVO and, umm, well, it sucks at everything except video rendering and image quality. It is one of the BEST cards for video quality for the money, but on the other hand, even with the card running 500/700 MHz, the thing still falls behind my AsusTek Ti 4200 @ 330/605 MHz, which is sad.
> 
> The only thing the FX series was good at was video rendering, and it was quite good at that; NVidia just missed the gaming mark.
> 
> ...



In DX8 and OGL, the high-end FX cards could keep up with or outdo (mostly in OGL) the R300-range cards. But do I really need to show you what happens under Half-Life 2 with the FX line if you try to run in DX9 mode? It's unplayable; I tested it myself. Try this: 800x600, DX9 mode, low detail, on a 5950 Ultra, and the card will CHUG at 20 fps max, whereas the 9700 Pro would be doing 1280x1024 at high detail without notable performance drops. A 9600 non-Pro is about 3-4x as fast in DX9 rendering as any FX-line card. It's just a matter of the fact that NVIDIA screwed up. And with the newer drivers (which came out after the 9800 Pro), DX8 performance was improved, as was OGL performance, though the FX still did better in most OGL-only games.
The reason for that is EASY: Doom 3 and the like were WRITTEN FOR NVIDIA CARDS!!!!! id and NVIDIA have a long-standing relationship; id specifically tunes their games to work best with NV cards, even older ones. And Doom 3 is also only an OGL 1.5 game, not OGL 2.x, where Shader 2 would be used (and thus could hurt FX-line card performance). 

And yes, on paper the FX cards looked impressive, but in action they were crap. Sad when a GF2 GTS is faster than a card 3 generations newer (GF4 MX = GF2).

I mainly posted this to clear up the misconception that these chips somehow emulated DX9, which is IMPOSSIBLE, due to it being impossible to software-emulate Shader 2.


----------



## cdawall (Feb 13, 2007)

AshenSugar said:


> I mainly posted this to clear up the misconception that these chips somehow emulated DX9, which is IMPOSSIBLE, due to it being impossible to software-emulate Shader 2.



That's the only reason my chip can't :shadedshu it's only Shader 1.3.


----------



## AshenSugar (Feb 14, 2007)

upgrade


----------

