Friday, October 15th 2010

NVIDIA to Counter Radeon HD 6970 "Cayman" with GeForce GTX 580
AMD is moving through its product development cycle at a breakneck pace; NVIDIA trailed it by months in the race for DirectX 11 and performance leadership. This November, AMD will release "Cayman," its newest high-end GPU, and expectations are that it will outperform NVIDIA's GF100. That is a serious cause for concern for the green team, which is back to its old tactic of talking up GPUs that haven't even taken shape in order to water down AMD's launch. Enter the GF110, a new high-end GPU NVIDIA has under design, on which the GeForce GTX 580 is based.
The new GPU is speculated to have 512 CUDA cores, 128 TMUs, and a 512-bit wide GDDR5 memory interface holding 2 GB of memory, with a TDP close to that of the GeForce GTX 480. In the more immediate future, there are prospects of a more realistic-sounding GF100b, which is basically GF100 with all 512 of its CUDA cores enabled, retaining the 384-bit GDDR5 memory interface and 64 TMUs, at a slightly higher TDP than the GTX 480's.
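For perspective, here is a rough sketch of what the rumored 512-bit interface would mean for memory bandwidth, assuming (purely for illustration) the same 3.696 GT/s effective GDDR5 data rate as the GeForce GTX 480; the rumor itself does not specify a memory clock:

```python
# Rough peak-bandwidth comparison for the rumored 512-bit bus. The 3.696 GT/s
# effective GDDR5 data rate is the GTX 480's; the rumor does not mention a
# memory clock, so this is an assumption for illustration only.

def bandwidth_gb_s(bus_width_bits: int, data_rate_gt_s: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bytes times per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gt_s

gtx_480 = bandwidth_gb_s(384, 3.696)      # ~177.4 GB/s, the GTX 480's actual figure
gf110_rumor = bandwidth_gb_s(512, 3.696)  # ~236.5 GB/s, a 33% increase

print(f"GTX 480: {gtx_480:.1f} GB/s vs. rumored GF110: {gf110_rumor:.1f} GB/s")
```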
Sources:
3DCenter.org, PCGH
195 Comments on NVIDIA to Counter Radeon HD 6970 "Cayman" with GeForce GTX 580
Look at the products NVIDIA is putting out. Their TDP is much higher than AMD's products', and NVIDIA is cutting ROPs from their GPUs, yet they have the same number of CUDA cores as (or more than) some last-gen DX10 cards (GTS 250, GT 240) while performing the same or worse. Luckily NVIDIA has a lock on PhysX, which itself still leaves much to be desired, or who knows where they would be sitting.
And actually I do hope that nVidia gives AMD a good kick in the butt! The fact that they want the 6800s to be GPUs that perform like 5800s is a testament to AMD's newfound complacency!:shadedshu
And yes, I do know it's because they're apparently moving their mid-range up from the x700s, but where is the sense in that?! Where's the space for the high end then... x900? Rubbish, there's not enough space there for mid-high, high, and ultra-high end!
It's an overpricing ploy and nothing more, IMHO. So yes, nVidia, make them look stupid by making their "best single-GPU card" dominance short-lived! Serves them (AMD) right for trying to be lax!:banghead:
*phew*... rant over... for now!:rockout:
So I followed the first link, www.3dcenter.org/news/2010-10-13, and which of the specs mentioned there are true, exactly?
1- Fully enabled GF100?
2- 512 SP, 128 TMU, 512 bit?
3- 576 SP, 96 TMU, 384 bit?
4- Two GF104s in a die? 768 SP, 128 TMU, 512 bit
The truth is they are completely clueless. And it's obvious they have no source of any kind, which becomes apparent from the fact that they are posting four different possible configs. If you had a source, or if Nvidia PR were behind you, you would have one spec posted, wrong or not, fake or not, hype or not. But four? Come on...
One of the scenarios points to the 768-shader GTX 580 delivering about 50% more performance than the GTX 480.
Excuse me for laughing out my cornflakes.
We're talking about the 40 nm process here. A GF100 variant (unlike the super-GF104) isn't very practical.
It took many months to release a crippled GTX 480 (480 cores, not 512), and they've yet to manage a 512-core variant. Yes, they probably will. But surely it requires a redesign, which would be a bit odd seeing as they are working on Kepler now.
Or will Kepler get a different design team, since the Fermi design team managed to miss a few things in the design-to-manufacture process (just saying what JHH said)?
Can I also state that I think it's absolute bollocks that AMD have kept 58xx series prices so high? If NV can make a chip like the nonsense we're all talking about, it'll cost a fortune. Just like the 69xx surely will :(
Physical products and real info are the manna of tech, not rumour and superstition.
That one is not only doable, it's a lot more doable than GF100, better than GF100 in every possible way, and it doesn't require Nvidia to go back to the drawing board at all. 50% more GPCs/shaders means less than 50% more silicon, because the PCIe interface, the video decode portion of the chip, and many other things are already there. In any case, if we take the most pessimistic number of a 50% increase in silicon, we end up with a 2.8-billion-transistor, 480 mm^2 chip. By the same token, 50% more power consumption means 225-275 W. (A quick sketch of this scaling arithmetic follows the comparison below.)
basically "GF110" vs GF100:
transistors: 2.8 vs 3.1 billion
die area: 480 mm^2 vs 530mm^2
shaders: 576 vs 480
tmu: 96 vs 64
384 vs 384 bit
275w vs 320w
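Here is a minimal sketch of that scaling arithmetic, taking GTX 460 (GF104) figures as the baseline and assuming, purely for illustration, that 15% of the chip does not replicate; the die-area and TDP baselines are commonly cited GTX 460 numbers, not from this thread:

```python
# Sketch of the "3/2 GF104" scaling argument. Assumes an illustrative 15% of the
# chip (PCIe interface, video decode, display logic) does not replicate when
# shader count scales by 1.5x; the GTX 460 baseline figures are commonly cited
# numbers, not official NVIDIA disclosures.

GF104_BASELINE = {"transistors (billions)": 1.95, "die area (mm^2)": 332, "TDP (W)": 160}
SCALE = 1.5            # 576 SP / 384 SP
FIXED_FRACTION = 0.15  # assumed share of the chip that does not replicate

def scale_up(value: float) -> float:
    """Scale only the replicated portion; fixed-function logic stays constant."""
    return value * (FIXED_FRACTION + (1 - FIXED_FRACTION) * SCALE)

for metric, baseline in GF104_BASELINE.items():
    print(f"{metric}: {baseline} -> {scale_up(baseline):.1f}")
# transistors: ~2.8 billion, die area: ~473 mm^2, TDP: ~228 W --
# in line with the 2.8 billion / 480 mm^2 / 225-275 W estimate above
```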
Ok, let's think logically: all these new shaders have to be connected to the L2 cache, and those connections take some space. Then you'll probably need a larger (and/or faster) L2, unless you want to leave all those shiny new cores starved for data. Then the back-end with 50% more TMUs will need to be rewired, and all of these changes will need to be tested over and over and over... I dunno, man, you make it sound a lot easier than it actually is.
I still think that if they play around with the existing cores and release fully enabled ones, Nvidia should pretty much be in the clear. Get rid of the 465 because it sucks donkey a$$, release a 466 based on the full GF104 core, get rid of the 470 because the 466 would eat it up, move the 480 down to a 475, and have a 512-core 485 at the top of the line. Play with prices and voltages a bit, and et voila, problem solved.
Yeah, maybe it's not as easy as I made it out to be, but it certainly isn't as difficult as a completely new chip. It's been 6+ months since GF104 was finished (not released), and 6 months is more than enough time to make that thing and then some. Besides, forget about release dates; it's the internal timeline we have to look at, and that is unknown. Release dates for GF104, 106, and 108 were not based on when the designs were finished, but on "when can I make enough of them for a proper release without eating into production of the chips that make me the most money," i.e. the higher-end ones. Bottom line: the Fermi derivatives were probably almost finished even before the GF100 cards were released. Enough time for anything.
EDIT: And no, you don't need more L2. Fermi has much more L2 than any GPU will ever need; the reason is GPGPU (GF100 is and always will be the GPGPU chip, just like G80 was always the GPGPU part, while G92 existed only as a gaming chip). Does GF104 show any decrease in performance due to having less L2 per SP? No, not even 1%, and that's with 50% more SPs added per SM. Adding another 16 SPs per SM, a 33% increase, is not going to change that either.
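Put as numbers (the 768 KB and 512 KB L2 sizes for GF100 and GF104 are documented; the third entry is the hypothetical chip discussed in this thread):

```python
# L2-per-shader comparison behind the "no more L2 needed" claim. The 768 KB
# (GF100) and 512 KB (GF104) L2 sizes are documented; Fermi's L2 scales with
# the 64-bit memory partitions (128 KB each), so a hypothetical 384-bit part
# would again carry 768 KB.

chips = {
    "GF100 (full, 512 SP)":        (768, 512),
    "GF104 (384 SP)":              (512, 384),
    "hypothetical GF110 (576 SP)": (768, 576),
}

for name, (l2_kb, sp) in chips.items():
    print(f"{name}: {l2_kb / sp:.2f} KB of L2 per SP")
# GF100: 1.50, GF104: 1.33, hypothetical: 1.33 -- the same per-shader L2
# that GF104 already gets away with in games without losing performance
```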
Also: they don't indeed, but it's actually the other way around from what you are suggesting, and in absolute favor of the "3/2 GF104" GF110:
- Doubling execution units almost never doubles transistor count or die area, especially die area, and you waste less area on "margins" (I know there's a term for that). E.g. (the ratios are computed in the sketch after this list):
ATI
Redwood = 627 million
Juniper = 1040 million
Cypress = 2150 million; more than twice, yes, but it doesn't count, because it has at least one massive difference: it supports 64-bit (double precision), while Juniper and below don't.
RV730 = 514 million (remember 320 SP)
RV740 = 826 million (640 SP)
RV770 = 956 million (800 SP)
Nvidia
GF108 = 585 million
GF106 = 1170 million
GF104 = 1950 million
- Power requirement increases are almost always lower than the increase in active transistors.
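And here are those ratios computed from the figures above (a rough sketch; the SP counts are the chips' known configurations, added here only for the arithmetic):

```python
# Transistor-count growth per doubling of shader units, from the figures above.
# The SP counts are the chips' known configurations, not quoted in the post.

pairs = [
    # (name, transistors_before_M, sp_before, transistors_after_M, sp_after)
    ("Redwood -> Juniper",  627, 400, 1040, 800),
    ("RV730 -> RV740",      514, 320,  826, 640),
    ("GF108 -> GF106",      585,  96, 1170, 192),
    ("GF106 -> GF104",     1170, 192, 1950, 384),
]

for name, t0, sp0, t1, sp1 in pairs:
    print(f"{name}: {sp1 / sp0:.1f}x shaders for {t1 / t0:.2f}x transistors")
# Three of the four doublings cost only ~1.6-1.7x the transistors
# (GF108 -> GF106 is the outlier at exactly 2.0x), which is the
# less-than-linear scaling the "3/2 GF104" estimate relies on.
```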
As far as I'm concerned until cards are on shelves it's all a bunch of hot air. NVIDIA has zero credibility left with me after the Fermi "launch".
3DCenter claims in that article that it has its own source. So who knows..
And as for the Fermi comment... I like my GTX 470.:)
I was expecting nothing special from them until they move on to 28 nm. I would have said the same for AMD, but it seems the 6xxx cards may prove to be quite nice; only time will tell. Do you think well-binned chips could possibly have everything enabled and not chow down a massive amount more power?