
GTX285 vs 5870 vs 5870+5850

wolf

So, as the title says, I've managed to run a few Crysis Warhead and 3DMark Vantage benchmark runs to compare these three setups.

The GTX285 was tested in a different system to the 58xx cards; however, the two systems are exceptionally close, so I feel the results are quite comparable.

System 1:

i7 920 @ 4GHz
X58-UD5
6GB @ 1600MHz
GTX285 @ 693 core - 1578 shaders - 1290 memory, 196.34 drivers

System 2:

i7 920 @ 4.2GHz
P6T Deluxe V2
6GB @ 1600MHz
5870 @ 950 core - 1250 memory, Catalyst 10.3 preview drivers
5870 + 5850 @ 950 core - 1250 memory, Catalyst 10.3 preview drivers

[Chart: GAMERDX9.png]

[Chart: GAMERDX10.png]


DirectX 10, Gamer preset, 3 loops - Map: Ambush, 1920 x 1080, 0x AA

The 5870 is on average 32% faster than the GTX285.

CFX is on average 85% faster than the GTX285.

CFX is on average 40% faster than a single 5870.
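
As a quick sanity check on how those three averages hang together: the 5870-over-GTX285 gain and the CFX-over-5870 gain should roughly multiply into the CFX-over-GTX285 figure. It only holds approximately, since each percentage is itself an average over separate runs; here is a minimal sketch using the numbers quoted above.

```python
# Averages quoted above for the DX10 Gamer runs.
hd5870_vs_gtx285 = 0.32   # 5870 is ~32% faster than the GTX285
cfx_vs_hd5870 = 0.40      # 5870+5850 CFX is ~40% faster than a single 5870

# Relative gains chain multiplicatively: (1 + a) * (1 + b) - 1.
predicted_cfx_vs_gtx285 = (1 + hd5870_vs_gtx285) * (1 + cfx_vs_hd5870) - 1
print(f"predicted CFX vs GTX285: {predicted_cfx_vs_gtx285:.0%}")  # ~85%, matching the quoted average
```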

[Chart: ENTHUSIAST1.png]

[Chart: ENTHUSIAST2.png]

[Chart: ENTHUSIAST3.png]


DirectX 10, Enthusiast preset, 3 loops - Map: Ambush, 1920 x 1080, 4x AA

The 5870 is on average 49% faster than the GTX285.

CFX is on average 249% faster than the GTX285.

CFX is on average 69% faster than a single 5870.

[Chart: roundup vantage high.png]


TEMPS

Both setups were tested in the same room, adjacent to each other, at the same time.

With the fan forced to 100% in EVGA Precision, the GTX285 rose to 83°C.

With a boosted fan curve, the hottest any 58xx card reached during testing was 63°C, and that was at 48% fan speed.

---------------------------------------------------

So, in short: yes, the 58xx cards are on a *slightly* faster system, but this still gives an excellent idea of how the cards perform across a range of settings, and of how the lead grows the harder the task gets.
 
Then I'll buy a 5850 + my current 5770... wonder how it would scale up :)
 
I think I will stick with my 285, it's good enough for now ;) But those are great numbers on the 5870.
 
GTX285 @ 693 core - 1578 shaders - 1290 memory, 196.34 drivers

With the fan forced to 100% in EVGA Precision, the GTX285 rose to 83°C.

I run my XFX GTX285 at 720/1620/1350 with no higher than 70% fan speed and never see anything past 70°C... methinks you need to clean that cooler on the 285.

Thanks for the benches though. :)
 
Need to redo the test with the 5870+5850 system at 4GHz. I know the results would be similar, but using two different CPU speeds gives people a flaw to focus on.
 
Need to redo the test with the 5870+5850 system at 4GHz. I know the results would be similar, but using two different CPU speeds gives people a flaw to focus on.

Yeah, that's what I was thinking. Although minimal, it *might* make a slight difference, just enough for some to potentially call the entire test unfair.

@{JNT}Raptor - I too thought the temps seemed high... it's my mate's PC and he rarely dusts it, so I'd assume that accounts for the weirdly high temps, given the boxes were side by side in the same room.
 
I've got a question: OK, the Nvidia GTX 285 only has 240 processor cores, the core clock is 648MHz, and the effective memory clock is 2484MHz.

The HD 5870, OTOH, has 1600 "stream processing units," the core clock is 850MHz, and the effective memory clock is 4800MHz.

So how is it, then, that the GTX 285 even comes that close in speed to an HD 5870?

I'm not bashing ATI; I just bought a 5850. I'm just wondering about this... with so many more "stream processors," and practically double the effective memory clock, how is it that it comes so close in speed?
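
One part of the answer the thread doesn't spell out: the effective memory clock alone is misleading, because the GTX 285 pairs its GDDR3 with a 512-bit bus while the HD 5870 pairs its GDDR5 with a 256-bit bus, so total bandwidth ends up roughly the same. A rough sketch using the commonly listed specs (the helper function is just for illustration):

```python
# Bandwidth (GB/s) = effective memory clock (MHz) x bus width (bytes) / 1000.
def bandwidth_gb_s(effective_mhz: float, bus_width_bits: int) -> float:
    return effective_mhz * (bus_width_bits / 8) / 1000

print(f"GTX 285 (2484 MHz effective, 512-bit GDDR3): {bandwidth_gb_s(2484, 512):.1f} GB/s")  # ~159.0 GB/s
print(f"HD 5870 (4800 MHz effective, 256-bit GDDR5): {bandwidth_gb_s(4800, 256):.1f} GB/s")  # ~153.6 GB/s
```

The shader-count half of the question is what the replies below get into.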
 
1 Nvidia GT200 shader = 2.3484 ATI RV770XT shaders, from what I've read online... I don't know how the math works out on the 5000 series.

I've heard someone say that it takes 3x as many ATI shaders as NV shaders at the same clocks to be equivalent, although I've probably got that all muddled up, because I don't know myself, I'm just repeating internet gossip.
 

It depends....

One NV shader is usually clocked at around 1400-1600MHz; one ATi shader is clocked at the GPU clock, 700-850MHz...

One NV shader roughly equals one ATi cluster in a worst-case scenario... the RV770 had 160 clusters, and each cluster has one master shader and four simpler "gimpy" shaders. The worst case is when only the master shader can do useful work. This is bad, since NV shaders are usually clocked about twice as high, so it's a massive loss for ATi here: clock-adjusted, you basically have ~80 shaders against NV's 240. This rarely happens, and usually only because of *extremely* bad driver support in a game.

With the higher-clocked (or overclocked) shaders in the 5 series this is different, since the shader speed is no longer half, it's closer to 850-1000MHz vs 1400-1600MHz, and there are twice as many clusters. So if clock speed were not taken into account, ATi would have 320 shaders (clusters) to NV's 240... but because NV's run at a higher clock, they can *sometimes* touch the ATi card's performance. Again, worst-case scenario.

However, in the best-case scenario the ATi card will just obliterate the NV one, since in the best case 1 ATi = 1 NV at the same clock speed, and roughly 2 ATi = 1 NV at actual clock speeds.

In the best case, a 4870/4890 has *about* ~400 comparative (clock-speed-adjusted) shaders to the GT200's 240. This happens VERY rarely, but it's part of the reason you see ATi cards with outrageous theoretical compute numbers, i.e. a roughly 1.2-1.4 TFLOP 4870/4890 is about the speed of a 650-700 GFLOP GTX275 in the real world. If ATi cards were to employ all of their shaders, including the gimpy ones, their combined compute power would be pretty amazing. The chance of this happening in a real-world scenario, however, is a bit like the planets lining up in a perfect row behind a blue moon.

So, in actuality, the 3-to-1 figure averages out pretty close to what happens, even though it's not nearly that cut and dried.

One other thing to consider: there are other performance factors (memory bandwidth, ROPs, texture units and so on) that determine the gaming experience, so shader power isn't everything, not by a long shot. That is why doubling the shaders on the 5 series and increasing the clocks didn't double the speed over the RV770.
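
To put rough numbers on that, here is a back-of-envelope sketch of theoretical shader throughput, counting a multiply-add as 2 FLOPs per shader per clock for both vendors (GT200 also has an extra MUL that games rarely exploit) and using the usual stock clocks; these figures are approximations, not taken from this thread.

```python
# Theoretical single-precision throughput: ALUs x shader clock x 2 FLOPs (MAD).
cards = {
    #           ALUs, shader clock (GHz)
    "GTX 275": (240, 1.404),   # NV scalar shaders run on a separate, higher shader clock
    "GTX 285": (240, 1.476),
    "HD 4870": (800, 0.750),   # ATI counts every ALU in its VLIW5 clusters (160 x 5)
    "HD 4890": (800, 0.850),
    "HD 5870": (1600, 0.850),  # 320 clusters x 5
}

for name, (alus, clock_ghz) in cards.items():
    gflops = alus * clock_ghz * 2
    print(f"{name}: {gflops:6.0f} GFLOPS theoretical")

# Output: GTX 275 ~674, GTX 285 ~708, HD 4870 ~1200, HD 4890 ~1360, HD 5870 ~2720.
# The paper numbers favour ATI heavily, yet games land much closer together, because
# the VLIW5 clusters only hit peak when all five ALUs per cluster are kept busy,
# which is the "best case" described above.
```

On that MAD-only basis the GTX275 lands right in the 650-700 GFLOP range mentioned in the post, while the ATi cards' much larger theoretical numbers only materialise when every ALU in every cluster is fed.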
 
+1, you can only CrossFire cards from within the same series, like 58xx with 58xx or 57xx with 57xx.

Oh sorry, yeah, you are right :)

Anyway, the 5770 will be a good giveaway to my cousin. I got it as a replacement for my 4850 lol :)
 
Need to redo the test with the 5870+5850 system at 4GHz. I know the results would be similar, but using two different CPU speeds gives people a flaw to focus on.

+1, or naysayers will go crazy over it. :laugh:
 
WOOOOOLFFFF....

Hey man, would you recommend upgrading to a 5870 from SLI 260s?
 