Sunday, September 14th 2008

GeForce GTX 260 with 216 Stream Processors Pictured, Benchmarked

As reported earlier, NVIDIA is dressing up a new version of the GeForce GTX 260 GPU, a revision that carries 216 shader units (against 192 for the original GTX 260). Chinese website Bear Eyes has pictured the new GPU. Other than the increased shader count, which should provide a significant boost to shader compute power, other GPU parameters such as clock speeds remain the same. The core features 72 texturing units and 28 ROPs, and is technically called G200-103-A2 (the older core was G200-100-A2). The card reviewed by Bear Eyes was made by Inno3D and is called the GeForce GTX 260 Gold, which shows that the GTX 260 brand name is here to stay.
The card retains the 448-bit wide GDDR3 memory bus with 896 MB of memory, and features 1.0 ns memory chips made by Samsung. On to the benchmarks, where NVIDIA finally manages to comprehensively outperform the Radeon HD 4870 512M in its category. Benchmark graphs for 3DMark Vantage, Company of Heroes: Opposing Fronts (without and with AA at 1680x1050 px), and Crysis, in that order, are provided below. To read the full (Google Translated) review, visit this page.
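For the curious, here is a quick back-of-the-envelope sketch of what those figures imply. The 1,242 MHz shader clock and ~2,000 MT/s effective memory rate are assumed reference GTX 260 values (the 1.0 ns chips suggest a ~1,000 MHz memory clock), not numbers taken from the review:

```python
# Back-of-the-envelope GTX 260 numbers. Clocks are assumed reference
# values, not confirmed by the Bear Eyes review.

SHADER_CLOCK_MHZ  = 1242   # assumed reference shader clock
BUS_WIDTH_BITS    = 448    # from the article
MEM_EFFECTIVE_MTS = 2000   # 1.0 ns chips -> ~1000 MHz, doubled for DDR

old_sp, new_sp = 192, 216
print(f"Shader uplift: {(new_sp - old_sp) / old_sp:.1%}")   # 12.5%

# Theoretical peak bandwidth: bus width (bytes) x effective transfer rate
bandwidth_gbs = (BUS_WIDTH_BITS / 8) * MEM_EFFECTIVE_MTS * 1e6 / 1e9
print(f"Peak memory bandwidth: {bandwidth_gbs:.1f} GB/s")   # ~112 GB/s

# Peak shader throughput, counting 3 FLOPs/SP/clock (MAD + MUL),
# NVIDIA's usual counting for GT200-class parts
for name, sp in (("GTX 260 (192 SP)", old_sp), ("GTX 260 (216 SP)", new_sp)):
    gflops = sp * 3 * SHADER_CLOCK_MHZ * 1e6 / 1e9
    print(f"{name}: {gflops:.0f} GFLOPS")
```

With clock speeds unchanged, the entire gain comes from the extra 24 shaders, a flat 12.5% uplift in peak shader throughput.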
Source: Bear Eyes

83 Comments on GeForce GTX 260 with 216 Stream Processors Pictured, Benchmarked

#76
jaydeejohn
Let's not forget something important here. The 3xxx series from ATI had price reductions because it couldn't compete with nVidia and the G8/9 series. ATI took it in the shorts having to do that, and it also created a certain POV towards their product. Simple truth is, the tides have changed, and now it's nVidia having to drop prices to stay competitive. No one would be buying a G260, or a G260-with-216-SPs "let's try this again" model, for $450 now, would they? nVidia's trying to get some market share here, and the only way they can do it is this way. Nothing wrong with that; the competition coming from ATI currently is tough. If it's released at the same price point, I think it's a win for everyone, as anyone should appreciate more bang for the buck. If, however, there's a price increase associated with this card, it will not bode well for us, nor for nVidia, as once again we'd see nVidia rename a card and charge more for a minimal gain in performance, which has turned a lot of people off. I'm hoping nVidia gets it right this time, reduces the current G260 prices, and slips this new one in at the old price point. Then, like I said, everybody wins.
#77
newtekie1
Semi-Retired Folder
$ReaPeR$: I care only for the prices over here, tell me why should I care about prices around the world :banghead: isn't that logical enough? I'm not trying to convince anyone to buy the 4870 because I like it! My point was that in my situation the 4870 is the best choice. And for your information, until the 3850 I owned nVidia cards and I was very pleased with their performance. DarkMatter, I think that you misunderstood my point, and I liked your above mentioned point. Sorry if I pissed you off, it was never intentional.
I know this is off topic, but it shows how immature you are. If I, being an American, had made the same comment about how the rest of the world doesn't matter, I would have been flamed to death.
DarkMatter: Haha, a little bit off topic here, isn't it? ;)

About the "GTX260b" being a weakened GTX280. Of course, that's exactly what it is. That's exactly what GTX260 is.

But about it being a way to sell off GTX200 stock, I don't think it's only because of that. I have said this before: IMO, when Nvidia designs their chips, their goal is to make the chip so that the second card can be the same one with one cluster disabled. But for this you need good yields; if you don't have enough of them, you have to disable one more. Yields are the one thing you can improve a lot over time, so possibly right now disabling one cluster is enough to assure a high yield rate.
I don't know why everyone is trying to make some kind of big deal out of this; this is exactly what nVidia has been doing since the 6 series. They design their cores with this flexibility in them for this exact reason, so they can use the defective cores in lower cards. I'm amazed at the number of people that don't know nVidia does this (and ATi used to also, I don't know why they stopped) and the number of people that think the GTX260 uses a completely different core than the GTX280. Usually, nVidia's entire high-end tier is the same core; each graphics manufacturer only really puts out about 3 actual GPU cores each generation.
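To put rough numbers on the cluster-disabling argument, here is a toy salvage-binning model. The per-cluster defect probabilities are illustrative assumptions, not real yield data; the one fact taken from the chip is that GT200 has 10 shader clusters of 24 SPs each:

```python
# Toy salvage-binning model (illustrative only, not real yield data).
# Assume each of GT200's 10 shader clusters is defective independently
# with probability p; a die is binned by how many clusters must be cut.
from math import comb

def p_at_most(k_defects: int, n_clusters: int, p: float) -> float:
    """Probability a die has at most k defective clusters (binomial)."""
    return sum(comb(n_clusters, k) * p**k * (1 - p)**(n_clusters - k)
               for k in range(k_defects + 1))

N = 10
for p in (0.05, 0.10, 0.20):
    full    = p_at_most(0, N, p)  # all 10 clusters good -> GTX 280 (240 SP)
    one_off = p_at_most(1, N, p)  # <=1 bad -> usable as 216-SP GTX 260
    two_off = p_at_most(2, N, p)  # <=2 bad -> usable as 192-SP GTX 260
    print(f"p={p:.2f}: 240SP-grade {full:.0%}, "
          f"216SP-grade {one_off:.0%}, 192SP-grade {two_off:.0%}")
```

At any given defect rate, far more dice qualify for the 216-SP bin than for the full 240-SP part, and as the process matures and p drops, the one-cluster-disabled bin becomes viable, which is exactly DarkMatter's point above.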
#78
DarkMatter
newtekie1: I don't know why everyone is trying to make some kind of big deal out of this; this is exactly what nVidia has been doing since the 6 series. They design their cores with this flexibility in them for this exact reason, so they can use the defective cores in lower cards. I'm amazed at the number of people that don't know nVidia does this (and ATi used to also, I don't know why they stopped) and the number of people that think the GTX260 uses a completely different core than the GTX280. Usually, nVidia's entire high-end tier is the same core; each graphics manufacturer only really puts out about 3 actual GPU cores each generation.
Yeah that amazes me too.

I suppose Ati stopped doing it, first, because the architecture they decided to use didn't permit it really well (you lose 25% of the chip with each cluster on R600/RV670: the ring bus, the arrangement of TMUs), and at the same time because yields were high enough to not justify the move. That, along with the fact that they didn't push the fab process to its limits. Now, for RV770, the only argument of the above that is still valid is that they probably have good enough yields, as there's nothing preventing them from doing it again, so I dunno.

You have to take into account that they already have one SP for redundancy on each SIMD array, so I suppose that already works out well for them. Even if they have to throw away some chips, they probably save some money because they don't have to test all the chips to see which units work, just how far in MHz they can go. You lose a bit in manufacturing, you save a bit on QA, I guess.
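As a rough illustration of how much a single spare unit helps, here is a minimal sketch. The 16-active-plus-1-spare layout per SIMD array and the per-SP defect rate are assumptions for illustration (the spare SP is a rumor about RV770, not a confirmed figure):

```python
# Toy model of per-array redundancy. Assumption: 16 active SPs plus
# 1 spare per SIMD array, per-SP defects independent with probability p.
from math import comb

def array_yield(p: float, active: int = 16, spares: int = 1) -> float:
    """An array works if at most `spares` of its SPs are defective."""
    n = active + spares
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(spares + 1))

p = 0.01  # illustrative per-SP defect rate
no_spare  = (1 - p) ** 16   # without a spare, all 16 SPs must be perfect
one_spare = array_yield(p)  # with a spare, one defect is tolerated
print(f"array yield without spare: {no_spare:.1%}")   # ~85.1%
print(f"array yield with 1 spare:  {one_spare:.1%}")  # ~98.8%
```

Even a tiny per-SP defect rate kills a meaningful fraction of arrays without redundancy, while one spare per array recovers most of them, which would explain why cluster-level disabling is less necessary for ATi.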
#79
Rapid
Dear God!

I read through this forum a lot: reviews, tips/tricks, etc. And I definitely agree with the point made earlier that people get so fanboy'd up about things. From what I can see, many people stick their nose in and argue about things that they know nothing about, other than the fact that they have some blind loyalty to a brand.

What people fail to realise is that being devoted to one company blinds them to the fact that another company may release a product that is better for them, cheaper, etc.

I agree with having a healthy conversation/debate about the pros and cons of a product, but FFS, stop getting so immature about it all.
#80
erocker
*
erocker: Here is my issue. I read that someone used the term "Nv fans". Things like this should not be said, as it may be insulting to them, and/or it can flame into arguments due to the fact that "fanboy talk" is verbally aggressive. This detracts from the original purpose of why the thread was posted. Stay on topic please.
Nick89: nVidia's GTX280 is still better than ATi's cards
is still better than ATi's cards
better than ATi's cards

TY SO MUCH FOR THAT LAFF! Do you really still say you are not an Nvidia fanboy?:roll:

Has never heard of a 4870X2:roll:
Perhaps you should heed my warnings before posting.
#81
Nick89
btarunr: Post something worthwhile; how does calling someone a fanboy help this discussion? You need to see his perspective, not just his post. Yes, the GTX 280 is better. For as low as $420, that's a hell of a card versus a $549 4870 X2, and there are reasons to back that statement. Try to read through the thread or make a credible argument, not "omg lolololol, fanboy".
Sorry Btr, but he's always bashing ATI. I just find it funny he says he's not a fanboy.

Next time I will post something worthwhile. ;)

Also, it takes two GTX280s to beat one 4870X2, so it's $549 versus $840.
#82
Nick89
erocker: Perhaps you should heed my warnings before posting.
I've been at work all day, sorry Erocker. I'll edit the post if you want.
#83
Hayder_Master
The only point of the bumped-up GTX 260 is trying to beat the 4870, but the original 4870 still wins in some tests. Still, the 72 texturing units and 28 ROPs are interesting. NVIDIA seems to be doing some things without thinking; did they forget the GTX 280? I've seen the XFX GTX 260 XXX edition tests and it is very close to the GTX 280. A new GTX 260 OC edition would surely beat the GTX 280, and at that point the GTX 280 becomes useless.