Thursday, July 10th 2008
R700 up to 80% Faster than GeForce GTX 280
Pre-release performance evaluations suggest that the Radeon HD 4870 X2 2GB GDDR5 model will on average be 50% faster than the GeForce GTX 280, and in some tests 80% faster. A second model, the HD 4850 X2 (2GB GDDR3 memory, 2x RV770 Pro), will also convincingly outperform the GeForce GTX 280. The R700 series will be brought to market late July through August.
Source: Hardspell
149 Comments on R700 up to 80% Faster than GeForce GTX 280
Kinda funny, as I would probably have more actual use for a quad as a 3D artist than most of TPU, yet I don't have one :laugh:
I don't, and I want to... :cry:
I'm sorry, but card for card NV owns. Yeah, the prices were out of whack, but that's being fixed. Just goes to show that being the first to have new stuff does indeed come at a price!
4850: €149 > $230
4870: €230 > $354
GTX 260: €286 > $440
GTX 280: €399 > $614
Good for ATI!
If it's 80% faster than one GTX 280, a dual version of the 280 is bound to be faster.
ATi pretty much has a game plan. They're not focusing as much on NV as NV is on them, for obvious reasons. If ATi continues on this path, they will have a single GPU that eats all comers. The 4870 is more than twice as powerful as a 3870. It sounds ridiculous to think that their next GPU will be 2x+ as powerful as a 4870, but the possibility is there. ATi also has rumored plans for a dual-core GPU, and if it takes something like that to take the 'single' GPU crown, then so be it. But the last thing ATi will do right now is stray from their game plan (architecture) when it's finally starting to bear sweet fruit.
BTW, some of you really need to understand GPU architecture a bit better. R700 is not more efficient than a design based on a single giant GPU would be. At least, not for pure 3D performance. Because 3D rendering is so parallel, the fastest GPU design is always a single GPU. Dual GPUs waste RAM and have to deal with inefficiencies caused by trying to split the tasks and communicate via a pathetically slow bridge chip. There is extra hardware, and hardware performing redundant tasks. And the drivers have to be specially set up for basically every game (this is conveniently ignored by most people for some reason).
The problem is that manufacturing technology cannot cope with mega-huge GPUs. That is why GT200 can't clock as high as G92. The bigger the chip gets and the more transistors it has, the hotter it runs and the harder it becomes to keep it stable at higher clock speeds. If you look back at how early GPUs barely needed fans, compared to today's ridiculous furnaces, you can see that manufacturing is way behind what competition has pushed GPUs to become.
R700 and 9800GX2 are designed to overcome manufacturing inadequacies in the only way possible. They also conveniently allow an entire lineup to be based mainly on a single GPU design. It's just important to realize that this is not the optimum way to go for performance.
Also, realize that there is potential for a refreshed GT200 to be vastly faster. If they shrink it down and tweak it, and this allows it to clock decently higher, a dual GT250 (or whatever) could be a lot faster than R700.
Besides, who can really claim a GF7 or X19xx as holding the crown for anything? Current cards run DX9 just fine and definitely look a hell of a lot better doing it than GF7. GF7's image quality was not all that great.
1) Wasn't the default fan speed set lower than it needed to be? From what I read, there was a simple fix for this, and just turning the speed up was supposed to help a lot with the heat situation without too much cost in noise. If that is the case, early benchmarks measuring heat don't really speak to the thermals of the chip itself.
2) It was my understanding from various posts that the issue with the 4870's idle power draw is that it's not downclocking properly for 2D mode, so it continues at full-strength readiness even when you're just reading emails. I believe this is just a driver issue, and ATI said it will be resolved with a new Catalyst release. So saying that the RV770 on average draws more power seems like a faulty argument. It draws less at max, and it currently draws more at idle, but once that's fixed via a driver, I'm not sure there's much reason to think it wouldn't draw less power at idle as well. I'm assuming your average was just (load + idle) / 2; if that's the case, as soon as that's fixed the numbers should change significantly.
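To illustrate why a driver fix to idle downclocking would move that kind of average, here's a minimal sketch of the (load + idle) / 2 arithmetic. All wattage figures below are made up for illustration only; they are not measured values for any card.

```python
def avg_draw(load_w, idle_w):
    """Two-point average of load and idle power draw, in watts."""
    return (load_w + idle_w) / 2

# Hypothetical numbers: the card fails to downclock in 2D, so idle draw stays high.
before_fix = avg_draw(load_w=160, idle_w=90)   # 125.0 W

# Same hypothetical load, but a driver fix restores 2D downclocking.
after_fix = avg_draw(load_w=160, idle_w=45)    # 102.5 W

print(before_fix, after_fix)
```

Even with load draw unchanged, halving the idle figure pulls the average down noticeably, which is the poster's point about the benchmark methodology.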
come on ati.....