Monday, May 26th 2008
Next-gen NVIDIA GeForce Specs Unveiled, Part 2
Although it's a bit risky to post information taken from sources unknown to me, not to mention that the site is German, it's worth a try. The guys over at Gamezoom (Google translated) reported yesterday that during NVIDIA's Editors Day, the same event where the F@H project for NVIDIA cards and the buyout of RayScale were announced, NVIDIA also unveiled the final specs for its soon-to-be-released GT200 cards. This information complements our previous Next-gen NVIDIA GeForce Specs Unveiled story:
Source:
Gamezoom
- GeForce GTX 280 will feature 602MHz/1296MHz/1107MHz core/shader/memory clocks, 240 stream processors, 512-bit memory interface, and GDDR3 memory as already mentioned.
- GeForce GTX 260 will come with 576MHz/999MHz/896MHz reference clock speeds, 192 stream processors, 448-bit memory interface, and GDDR3 memory.
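For context, if the listed memory clocks are the physical GDDR3 clocks (an assumption on our part; GDDR3 moves data twice per clock), a quick back-of-the-envelope sketch of the theoretical peak memory bandwidth these rumored specs would imply:

```python
# Rough bandwidth estimate from the rumored specs (not official figures).
# Assumes GDDR3 transfers data twice per clock (DDR).

def mem_bandwidth_gbs(mem_clock_mhz: float, bus_width_bits: int) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    transfers_per_sec = mem_clock_mhz * 1e6 * 2             # DDR: 2 transfers per clock
    return transfers_per_sec * (bus_width_bits / 8) / 1e9   # bits -> bytes -> GB/s

print(f"GTX 280: {mem_bandwidth_gbs(1107, 512):.1f} GB/s")  # ~141.7 GB/s
print(f"GTX 260: {mem_bandwidth_gbs(896, 448):.1f} GB/s")   # ~100.4 GB/s
```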
108 Comments on Next-gen NVIDIA GeForce Specs Unveiled, Part 2
I don't really understand why people must be fanboys of Nvidia or ATi. The point is to buy the card that suits you best and use it for a few years without upgrading. I don't like it when mud is thrown from both sides just to prove that "I have a card from the first and best GPU manufacturer in the world" (this can be NV or ATi). Who cares about that?
So, most probably, the price-vs-performance-minded people will just wait and see after the GTX 280 is released.
*shrug*
The reason is in the architecture. For a game to run well on ATi R6-generation GPUs, some heavy optimizations are required because the R6 architecture is so different from earlier generations, and then of course there are the obvious flaws in R600/RV670, like the absolutely horrible texture filtering capabilities and the innate inefficiency of the superscalar shaders. RV770 will partially fix the texture filtering shortcomings, but unfortunately the TMUs are only doubled - thus RV770's texturing muscle will still clearly trail even G92's.
When you make a new game, for instance, you have to optimize the code so that it takes advantage of the hardware the GPU has. This process takes time; you don't generally have time to cover all aspects and explore all the resources of two cards that work in different ways, so designers have to choose. Nvidia gives them all the help they need in understanding and using the nooks and crannies of its cards. The result: games run better on NVIDIA cards.
I'm trying to stay neutral here; the only reason I like ATi cards right now is that the competition is overcharging.
Long before TWIMTBP, many first-tier game developers said the way 3DMark did things WAS NOT how they were going to do things in the future. AFAIK this never changed, so it's the same with 3DMark 06 too. Saying that ATi's architecture is any better based on these kinds of benchmarks implies that those benchmarks are doing things right, which they aren't.
This is something that has always bugged me. People say that game developers are writing code "Nvidia's way", but they never stop and think that MAYBE Nvidia is designing its hardware the "game developers' way", while ATi may not be. TWIMTBP is a two-way relationship. Over time, ATi has become better (comparatively) in benchmarks and worse in games. Everybody blames TWIMTBP for this without taking into account each company's design decisions.

For a simple example, ATi's superscalar, VLIW and SIMD shader processors are A LOT better suited to the HOMOGENEOUS CODE involved in a static benchmark than to the ever-changing code involved in a game. Also, in the case of R600 and one of its biggest flaws, its TMUs, benchmarks are a lot more favorable than games, since you can "guess" which texture comes next and you don't have to care about the textures that have already been used. In games you don't know where the camera will head next, so you don't know which textures you can discard. In reality neither architecture can "guess" the next texture, but G80/G92, with its greater texturing power, can react better to texture changes, while in benchmarks R600/670 can mitigate this effect with streaming.
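To illustrate the VLIW point with a toy model (entirely my own sketch, not anything from ATi's actual scheduler): a 5-wide unit like R600's is only fully busy when the compiler can find five independent operations to pack into every issue slot, which uniform benchmark shaders make easy and varied game code does not:

```python
# Toy model of 5-wide VLIW lane utilization (illustrative only).
import random

VLIW_WIDTH = 5  # R600-style shader unit: 5 ALU lanes (assumed for illustration)

def utilization(ops_per_issue):
    """Average fraction of lanes filled, given how many independent
    operations the compiler could pack at each issue."""
    filled = [min(n, VLIW_WIDTH) for n in ops_per_issue]
    return sum(filled) / (VLIW_WIDTH * len(filled))

random.seed(0)
benchmark_like = [5] * 1000                              # homogeneous code: lanes always full
game_like = [random.randint(1, 5) for _ in range(1000)]  # varied code: 1-5 independent ops

print(f"benchmark-like: {utilization(benchmark_like):.0%} of lanes busy")  # 100%
print(f"game-like:      {utilization(game_like):.0%} of lanes busy")       # ~60%
```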
It has way more switching power, which means ownage (as long as they don't screw it up like the FX series). Clocks are irrelevant (overall).
All I want to know is: can it play Crysis on Very High at 1920x1200?
Only if the difference turns out to be more substantial would I consider buying an nVidia card, but NOT at that price!
It's too early to say anything, but most probably each card will have its market segment and ALL of them will have a similar price/performance ratio, as has almost always been the case. Also, the GTX cards won't lag far behind in performance-per-watt, always going by these specs, which I don't know whether to trust. But then again we are always talking about these specs, so, simple math:
157 W + 50% = 157 + 79 = 236 W. Isn't that funny?
nVidia's GTX 280 could be 10 times faster than ATI's 4870 X2, but I wouldn't buy it for this amount of money: NO WAY!!!!
Also, I've been looking around and other sources say $400 and $500 for the GTX 260 and 280 respectively, and Nvidia may still have an ace up its sleeve called the GTX 260 448 MB, which would pwn just like the GTS 320 did in the past, so who knows...
To me, there are only 2 important aspects for any card: power usage and price (in that order).
1- The HD3870 X2 is not twice as fast as the HD3870; we can guess the HD4870 X2 won't be either.
2- The HD3870 X2's price is more than twice the HD3870's; will this be different? That puts the HD4870 X2 well above $600. It will probably be cheaper, but it won't launch until August, and we don't know what prices will look like by then...
Although I think there is a major difference in GPU architecture that has been holding ATI back across the board, the few games where ATI has worked closely with developers show a fairly level playing field, or even better ATI performance at release.
Sadly, the only two games I can think of where I know for certain that ATI worked closely with the developers are Call of Juarez - where we see ATI cards tending to outperform nVidia's - and FEAR - where we see ATI cards continuing to keep pace with nVidia's.
I'm sure a certain amount of collaboration does tend to help nVidia overall, but yes, GPU architecture comes into play as well; and ATI's GPUs just haven't been suited to the more complex games we've seen over the last 2 years or so.
But, all in all, the new ATI R700 series is a brand-new design, not a re-hash of an older GPU like the R600 was of the R500. It might just be wishful thinking on my part, but I think we might be in for a surprise with the new ATI GPUs :ohwell:
Anyhow, someone correct me if I'm wrong, but I seem to remember hearing that nVidia's new G200 is another re-hash of G92/G80? :confused:
Or just go X2 and win that way; the X2 will no doubt be far cheaper than NV's GX2 alternative, so again, price- and performance-wise it's win-win for ATI atm.
Think about it: NV may have one killer card, but if you can almost buy two of ATI's own killer cards (it doesn't matter if each is less powerful than NV's offering) for around or just over the price of one of NV's cards, do the math - ATI and the customer would win every time.
The only way the two monsters won't compete is if the 4870 X2 is priced that low. I guess it'll make sense to ATi, but there's still no way the 280 or 260 is getting my money. I'd stick with the GT model if I had to get one, but I'm sure people will still flock to snatch them up. Hopefully it will be worth it to them.