# GeForce GTX 580 Expected to be 20% Faster than GeForce GTX 480



## btarunr (Oct 21, 2010)

NVIDIA's next enthusiast-grade graphics processor, the GeForce GTX 580, based on the new GF110 silicon, is poised for at least a paper launch by the end of November or early December 2010. Sources in the video card industry told DigiTimes that the GTX 580 is expected to be 20% faster than the existing GeForce GTX 480. The new GPU is built on the existing 40 nm process; NVIDIA's 28 nm GPUs based on the Kepler architecture are expected to take shape only towards the end of 2011. Later this week, AMD is launching the Radeon HD 6800 series performance graphics cards, and will market-launch its next high-end GPU, codenamed "Cayman," in November.

*View at TechPowerUp Main Site*


----------



## FordGT90Concept (Oct 21, 2010)

20% hotter and more power draw too? XD


----------



## assaulter_99 (Oct 21, 2010)

FordGT90Concept said:


> BTW, the "to" doesn't sound right in the title.



GeForce GTX 580 Expected to be 20% Faster than GeForce GTX 480

Is that ok? 

Ok, so you owe me one BTA!


----------



## wahdangun (Oct 21, 2010)

maybe what they mean is that the "wooden" version will be launching in December


----------



## HXL492 (Oct 21, 2010)

This is interesting; I wonder how Cayman will fare.


----------



## 983264 (Oct 21, 2010)

Before NV says that, they must beat the 5970 in the dual-GPU segment...


----------



## motasim (Oct 21, 2010)

... and exactly how many nuclear power stations are required to power it ... 

... but look at the bright side; you won't need any gas cooker after you get this GPU ...


----------



## Atom_Anti (Oct 21, 2010)

Haha, the new GrillForce is coming! Chefs are going to like it.


----------



## Fourstaff (Oct 21, 2010)

Hmmm, if you consider a GTX480 with a 10% higher clock speed and all 512 cores...


----------



## entropy13 (Oct 21, 2010)

Fourstaff said:


> Hmmm, If you would consider GTX480 with 10% higher clock speed and all 512 cores.....



And it's an entirely new generation!


----------



## Red_Machine (Oct 21, 2010)

btarunr said:


> poised for at least a paper-launch by end of November, or early December, 2010.



What?  THAT soon?


----------



## 20mmrain (Oct 21, 2010)

So let me get this straight: Nvidia is going to launch an architecture early that wasn't supposed to launch until 2011, when they couldn't even get Fermi out on time... yeah, right!


----------



## HardSide (Oct 21, 2010)

Red_Machine said:


> What?  THAT soon?



It's just a paper launch; they need to counter the new cards ATI announced. I've been an Nvidia fanboy for a while now, but if they don't fix the problems from the 400 series, they're going to lose a lot of their fan base.


----------



## wahdangun (Oct 21, 2010)

Red_Machine said:


> What?  THAT soon?



yeah, just like Fermi; it was a paper dragon until six months later


----------



## DanTheMan (Oct 21, 2010)

And as the story goes ... After the 580 series, 1 month later AMD will answer back with a 15% increase, then NVidia will promise a 680 series (another 10% gain) then AMD .........

Same story .... over and over ..... It never ends folks!


----------



## derwin75 (Oct 21, 2010)

This is good but it's only words. Action speaks louder than words. All I can do is wait and see when it comes out.


----------



## Bjorn_Of_Iceland (Oct 21, 2010)

DanTheMan said:


> And as the story goes ... After the 580 series, 1 month later AMD will answer back with a 15% increase, then NVidia will promise a 680 series (another 10% gain) then AMD .........
> 
> Same story .... over and over ..... It never ends folks!



It's a half-empty / half-full conundrum.


----------



## Sasqui (Oct 21, 2010)

End of 2011?  meh.


----------



## Benetanegia (Oct 21, 2010)

GF104 = 384 SP = 2 x GPC

GF110 = 3 x GPC = 576 SP

576/480 = 1.2 = *+20%* 

Power consumption on Furmark

GTX460 = 155 W = 336 SP

GF110 = (155 W x 576 SP) / 336 SP = *265 W*

iGame GTX460 = 141 W
GF110(?) = (141 x 576) / 336 = *241 W*
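The back-of-the-envelope scaling above can be replayed as code. A minimal Python sketch, assuming (as the post does) that board power scales linearly with enabled shader count at equal clocks; the function name is mine:

```python
# Naive linear power scaling: assumes board power grows proportionally
# with enabled shader count at equal clocks (a rough worst-case model).
def scale_power(base_watts: float, base_shaders: int, target_shaders: int) -> float:
    return base_watts * target_shaders / base_shaders

# GTX 460 reference (336 SP, ~155 W in Furmark) projected to a 576 SP GF110
print(scale_power(155, 336, 576))  # ~265.7 W
# The iGame GTX 460 board (~141 W) projected the same way
print(scale_power(141, 336, 576))  # ~241.7 W
```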


----------



## SNICK (Oct 21, 2010)

Oh my god, only 20% faster than the GTX 480!
There was also speculation about a doubled GF100.

Now there is no way Nvidia can steal AMD's thunder.


----------



## SNICK (Oct 21, 2010)

benetanegia said:


> gf104 = 384 sp = 2 x gpc
> 
> gf110 = 3 x gpc = 576 sp
> 
> ...



FYI, TDP depends on several factors, not ONLY shader count.


----------



## CharlO (Oct 21, 2010)

Wait, 20% faster than a 480 is like a 5870 OC'd (well, something more), but it is definitely slower than the promised 6870... They're not even pulling even on paper now?


----------



## SNICK (Oct 21, 2010)

charlo said:


> wait, 20% faster than a 480 is like a 5870 oc'd (well something more) but it is definetly slower than a promised 6870... They are not even getting even on paper now?



I think you mean the 6970.


----------



## Benetanegia (Oct 21, 2010)

SNICK said:


> fyi . Tdp is dependent on several factors other than ONLY shaders count



FYI, Fermi cards are based *only* on 2 types of clusters, GPCs and ROP partitions; every card is just a combination of the two.

GPC = SPs + TMUs + rasterizers + setup engines + tessellators (PolyMorph) + L1 cache

ROP partition = ROPs + memory controllers + L2 cache

When one goes up, everything inside it goes up by the same amount, as long as the GPC design is the same. GF100 and GF104 GPCs are different.

You want me to speculate power consumption based on ROP partitions?

GF104 = 2 ROP partitions = 155 W
GF110 = 3 ROP partitions = (155 x3) / 2 = 232 W

Other than the increase in silicon, the only thing that affects power consumption is clock speed, which I speculated will be more or less the same, and it won't matter much anyway: as can be seen on the chart I provided, @ 820 MHz (quite an OC) the power is 176 W.


----------



## Ghost (Oct 21, 2010)

CharlO said:


> Wait, 20% faster than a 480 is like a 5870 oc'd (well something more) but it is definetly slower than a promised 6870... They are not even getting even on paper now?



20% faster than a 480 is like a 480 oc'd  

So it should be somewhat faster than 5970 http://tpucdn.com/reviews/MSI/N480GTX_GTX_480_Lightning/images/perfrel.gif


----------



## SNICK (Oct 21, 2010)

Benetanegia said:


> FYI, Fermi cards are based *only *on 2 type of clusters, GPC and ROP partitions, every card is just a conbination of the two. .
> 
> GPC = SPs + TMUs + rasterizers + setup engines + tesselators (Polymorph) + L1 cache
> 
> ...



Your whole calculation is based on a unitary method that is not supposed to be used on electrical components. The TDP of the FULL GF100 is around 204 watts more than the GF100's. Based on your math, can you explain why the TDP of the full GF100 is so much higher than expected?


----------



## dj-electric (Oct 21, 2010)

20% faster? thats it? 

...

fail?


----------



## DarkMatter75 (Oct 21, 2010)

Do I have to buy a dedicated AC unit from APC to use the new card??? 
Bye bye, Nvidia...


----------



## Benetanegia (Oct 21, 2010)

SNICK said:


> your whole mathemetics based on unitary method is not supposed to be used on electricals components.TDP of FULL gf100 is around 204 watts more than gf100.based on your mathematics can you explain why the TDP OF FULL GF100 IS QUITE MORE AS EXPECTED?



Did you really believe that was real? It did not come from Nvidia. How naive can you be? First of all, the revision code (A1, A2, A3) was not shown, so if that card was real, it was probably an old prototype using A1 or A2 silicon. And why did Nvidia move to A3? Ah yes, because A1 and A2 were broken. The other possibility is that they re-enabled a normal GTX480, and oh yes, there was a reason that SM was disabled...

If you want to know how enabling SPs/ROPs truly affects power consumption *on real cards*, look at the GTX470 vs the GTX465, because they both have the same clocks. And how much is that, then?

GTX465 = 199 W
GTX470 = 232 W

Let's experiment with that:

GTX465 = 352 SP ; GTX470 = 448 SP
GTX465 = 4 ROP Partition ; GTX470 = 5 ROP partition

Fictional GTX470 (by SP) = (199 W x 448) / 352 = 253 W    hmm
Fictional GTX470 (by ROP) = (199 W x 5) / 4 = 249 W   hmm

Wow, so the actual power consumption of the real GTX470 is lower than my math predicts. I knew that from the beginning; my assumptions above are for the worst-case scenario.

GF100 consumed a lot, everybody knows that. Everybody should know by now, too, that it was a problem with the fabric (the interconnection layer between the different units within a chip) and that the problem has already been fixed. It was mostly fixed for GF104, as can be seen by its power consumption, and is probably even better for future releases.
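The GTX465-to-GTX470 sanity check above can be replayed in a few lines of Python. This is only a sketch of the post's arithmetic under the same linear-scaling assumption; the point is that the real GTX470 figure (232 W) lands below both estimates, marking the rule as a worst-case bound:

```python
# Replaying the post's check: scale the GTX 465's 199 W up by shader count
# and by ROP-partition count, then compare with the real GTX 470 TDP.
def scale_power(base_watts: float, base_units: int, target_units: int) -> float:
    return base_watts * target_units / base_units

est_by_sp = scale_power(199, 352, 448)   # 352 -> 448 shaders
est_by_rop = scale_power(199, 4, 5)      # 4 -> 5 ROP partitions
real_gtx470 = 232.0
print(round(est_by_sp), round(est_by_rop))  # ~253 and ~249, both above 232
assert est_by_sp > real_gtx470 and est_by_rop > real_gtx470
```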


----------



## 1Kurgan1 (Oct 21, 2010)

Panic news to keep ATI's new cards from getting so much attention. Makes me sad; I'm not a real NV fan myself, but better competition at release would mean better price drops.


----------



## LAN_deRf_HA (Oct 21, 2010)

They must be upping the clock speed a lot. We already saw that going to 512 shaders alone adds only 5%, and I don't think memory bandwidth is going to add the other 15%. Managing that within the same power envelope as the 480 might actually have taken a lot of work... assuming their claims aren't total BS.


----------



## MxPhenom 216 (Oct 21, 2010)

CharlO said:


> Wait, 20% faster than a 480 is like a 5870 oc'd (well something more) but it is definetly slower than a promised 6870... They are not even getting even on paper now?



What are you on? Can I get some of that?? The HD5870 cannot even hold its ground against a non-overclocked GTX480; a 20% faster GTX480 will annihilate the HD5870 and HD6870. Remember, the 6870 isn't an HD5870, it's basically a 5850. AMD's naming is really messed up this generation.

And for the people obsessing over the potential heat and power consumption: just wait and see, it's not worth getting all worked up about it now. Maybe Nvidia has new strategies to overcome this. Better reference coolers, perhaps.


----------



## CDdude55 (Oct 21, 2010)

Woot, TPU's heavy bias comes out of the woodwork again! *fist pumps*

Hopefully they can up the performance by the time it actually comes out; it's still WAY too early to tell anything.


----------



## Benetanegia (Oct 21, 2010)

CDdude55 said:


> Woot, TPU's heavy bias comes out the wood work again! *fist pumps*



+1

It's really sad, tbh, and sadder every day. Every day that passes, it's more difficult to talk about tech.

If only the GTX460 had never been released, then maybe, maybe, the comments wouldn't be so baseless. But the GTX460 *was* released, and it's faster than the GTX465 while consuming 50 W less... Is it possible for Nvidia to repeat that achievement with GF110, or maybe even... take a seat, AMD fanboys... improve the efficiency a little bit further?

- No, because I'm so biased I cannot even see what's in front of my eyes.

- No, because AMD has apparently improved power efficiency further with NI, but there's no way on Earth or otherwise for Nvidia to catch up. Impossible! I mean, Nvidia never had better power efficiency than AMD/Ati. Never, never, never... hmm, well... okay, "only" before Evergreen/Fermi.


----------



## bear jesus (Oct 21, 2010)

CDdude55 said:


> Woot, TPU's heavy bias comes out the wood work again! *fist pumps*





Benetanegia said:


> +1
> 
> It's really sad tbh, sadder every day. Every day that passes is more difficult to talk about tech.
> 
> ...



You just need to laugh at the stupidity and remember that's all it is. If the AMD fanboys are so interested in power draw, why are they not complaining about the over-300 W TDP of the 5870 X2 cards? (No, not the 5970; I mean the ones with the clocks of the 5870, the things with two 8-pins and one 6-pin.)

I'm no fanboy for either side, as they both suck in their own special ways and are both awesome in their own ways as well, and although stupidity annoys me, I can ignore it in the fanboy comments for both AMD and Nvidia... how about you two join me in laughing at it?

Back on topic: I'm really excited to see what Nvidia will be bringing out. I just hope it's very close to the 6970's release date, as I'm sick of waiting for my GPU upgrade, and I'm starting to think I'd want something more powerful than 1 GB 460 SLI, so hopefully either AMD or Nvidia can give me something that fits the bill before the end of the year.


----------



## Yellow&Nerdy? (Oct 21, 2010)

I believe this is just Nvidia's PR trying to dampen AMD's launch. I sure hope Nvidia can put something out there to compete with Cayman, because it would definitely help bring prices down. But somehow it's hard to believe Nvidia has anything to counter Cayman and Antilles... Well, we'll see.


----------



## KainXS (Oct 21, 2010)

Damn, I'm truly disappointed to hear this.

But it should at least make it so both companies have cards with similar performance, somewhat unlike now.



Benetanegia said:


> +1
> 
> It's really sad tbh, sadder every day. Every day that passes is more difficult to talk about tech.
> 
> ...



wow that review shows that the HD5450 is the fastest card O.O
did you change that yourself


----------



## erocker (Oct 21, 2010)

CDdude55 said:


> Woot, TPU's heavy bias comes out the wood work again! *fist pumps*



Love it. Just because a majority of people (pretty much across the internet) feel one way, it's bias. Of course, you're just a "fanboy" for pointing that out. It's all tongue in cheek, but it's either bias or the unwillingness to accept the truth from one "side" or the other. Fact of the matter is, Nvidia has failed to impress many.


----------



## bear jesus (Oct 21, 2010)

KainXS said:


> wow that review shows that the HD5450 is the fastest card O.O
> did you change that yourself



That's performance per watt, so not really relative to what card is fastest.


----------



## wolf (Oct 21, 2010)

Again, a thread full of smack talk about a GPU we know next to nothing about, bar a few who actually have something to contribute. Oh well, can't say I'm not disappointed.

I have a feeling this will just be GF104 plus 50% of it again, making it 576 SPs, back to 48 ROPs, and still 384-bit GDDR5. They just need to improve the memory controller and make sure they hit a good power consumption target.


----------



## dlpatague (Oct 21, 2010)

KainXS said:


> wow that review shows that the HD5450 is the fastest card O.O
> did you change that yourself



Can you read? It says "Performance per Watt"! That has nothing to do with total performance. Geesh!


----------



## CDdude55 (Oct 21, 2010)

erocker said:


> Love it. Just because a majority of people (pretty much across the internet) feel one way, it's bias. Of course, you're just a "fanboy" for pointing that out. It's all tounge and cheek, but it's either bias or the unwillingness to accept the truth from one "side" or the other. Fact of the matter is, Nvidia has failed to impress many.



Offering constructive criticism is no issue at all; everyone should be able to point out mistakes and criticize them in hopes of future improvements. But look around: nothing but nonsensical, counterproductive comments yet again on a site that I frequently visit. The majority can say whatever BS they want, but I honestly expect more out of sites like this.


----------



## cadaveca (Oct 21, 2010)

erocker said:


> Nvidia has failed to impress many.



No doubt. Considering all the problems I've been complaining about on here, you'd think nV would be "my best friend" at this point. However, the Fermi launch put a big damper on their hype, and nothing since then has really done much to improve consumer confidence, as is evident in the shift in GPU market share.

They need a killer product. 20% over the GTX480 doesn't cut it... they need 33% or so... then they'd have a real chance.

Mind you, the current rumour about price drops is definitely going to work in their favor.


----------



## bear jesus (Oct 21, 2010)

cadaveca said:


> No doubt. Considering all the problems I've been complaining about on here, you'd think nV would be "my best friend" at this point



To be honest, I don't understand how you don't hate AMD at this point.  I'm pretty sure I would by now if I were in your position.


----------



## erocker (Oct 21, 2010)

CDdude55 said:


> Offering constructive criticism is no issue at all, everyone should be able to point out mistakes and criticize it in hopes of future improvements. Look around, nothing but nonsensical counterproductive comments yet again from a site that i frequently visit, the majority can say whatever BS they want, but i honestly expect more out of sites like this.



What BS? A lot of it is not false, though a lot of it is quite short-sighted or just blunt. I can say that I'm not particularly impressed with certain things on both sides of the coin.


----------



## Mindweaver (Oct 21, 2010)

20% sounds nice... I just wonder how much room is left for overclocking. I mean, a 20% overclock should be achievable on a 480, but how much more can you push a 580?.. I guess time will tell. A 580 that is 20% faster than a 480, plus being able to push it another 20%, would be kickass! I don't know... I hope so, for everyone's sake. We need both to do well to keep prices right, you know?


----------



## CDdude55 (Oct 21, 2010)

erocker said:


> What BS? A lot of it is not false,



I mean the BS posts that contribute nothing to the discussion yet never get regulated, and for some reason always fill up _certain_ threads...

No one's denying Fermi's issues. But as you even said, both sides have their issues.


----------



## cadaveca (Oct 21, 2010)

erocker said:


> I can say that I'm not particularly impressed with certain things on both sides of the coin.



Well that's just it, isn't it?

Both sides kinda have issues, and so really, I blame TSMC.

Bear Jesus, while I can afford to just get rid of stuff and start over again, to me, that'd make all the time I've spent trying to get things working a waste. As far as I am concerned, either AMD fixes the issues I have, or they fail. I can't truly say that unless I see this out to the end.

But because my usage (3 monitors) dictates I need a certain level of performance, I'm just plain out of options at this point. The 69xx series is my last hope, or maybe this GTX580 can pull up nV's socks and I'll switch over.

I am not "sticking it out" because I'm a fanboy... I need 60+ FPS at 5760x1080. A little bit of AA would be nice too. The first company that can do this gets my cash.

For anyone else... they don't need a GTX580. Seriously... a 480 is more than enough power for anyone with a single monitor.


----------



## N3M3515 (Oct 21, 2010)

cadaveca said:


> No doubt. Considering all the problems I've been complaining about on here, you'd think nV would be "my best friend" at this point, however, the Fermi launch put a big damper in thier hype, and nothing since then has really done much to improve consumer confidence, as evident in the shift in gpu marketshare.
> 
> They need a killer product. 20% over GTX480 doesn't cut it....they need 33% or so...then they'd have a real chance.
> 
> Mind you the current rumour about price drops is definately gonna work in thier favor.



At least someone thinks like me.
Bottom line: 20% over the GTX480 is not enough. Considering Cayman XT will be at the very least 30-40% faster than Cypress XT, that would make it neck and neck (equal performance) with the "GTX580", and as everyone knows, with AMD's chips being cheaper to produce, AMD can lower prices more and still make a profit.


----------



## KashunatoR (Oct 21, 2010)

I think the GTX 580 will be a winner if it is slightly cooler than the GTX 480 and keeps the same OC potential and performance scaling. I own a GTX 480 and I will happily upgrade to a GTX 580. Many of you speak without knowing the truth: my GTX 480 OC'd to 830/1100 doesn't get past 81 degrees Celsius and performs almost on par with the ATI 5970, and that without facing the retarded ATI drivers. You also get PhysX, CUDA, and the best minimum framerate, which is what matters most when you're playing games.


----------



## Benetanegia (Oct 21, 2010)

cadaveca said:


> Both sides kinda have issues, and so really, I blame TSMC.



There's truth there.



> For anyone else...they don't need GTX580. Seriously...a 480 is more than enough power, for anyone with a single monitor.



They don't need it, true to an extent. But the thing is, I'm 100% sure that by going the 1.5x GF104 route they can get a card with 20% more shaders, 50% more TMUs, and higher clocks, while the chip stays smaller than 500 mm^2 and consumes 50 W less. So why not go for it? And of course, as a consumer, it's way better.



N3M3515 said:


> due to amd chips being cheaper to produce, amd can lower prices more, and still make some profit.



That is a very common misconception, based on too many vague assumptions and only one fact.

The fact is that a smaller chip costs less than a big chip when everything else is equal.

But that's not the end of it. There are far too many things we don't know that have just as much effect on profitability.

- How much does AMD pay per wafer, and how much does Nvidia, considering Nvidia buys 2x the amount of wafers and is not going to flee to GloFo as soon as GloFo is ready?
- Chip price is only a fraction of what a card costs. How much does it cost Nvidia to make its cards, and how much AMD?
- AMD's ability to make smaller chips is based on much more R&D money put into that department than Nvidia's. How much?
- How much does it cost Nvidia and AMD to operate as companies?


----------



## the54thvoid (Oct 21, 2010)

Let me be the voice of reason and then we can all arrange cheap flights, meet in a bar and have good old drinks and talk about games and girls and guns? Huh, that sound fun?

Nvidia brought us exceptional graphics solutions.  Then they messed up big time with Fermi.  They have since addressed that with GF104, which is close to the 5870 in perf per watt (not just close to the 5850).  If they base GF110 on the GF104 arch (or lessons learnt), they probably will make this card work.  And good for them.

BUT... what they are doing is invoking the memory of the Fermi launch last year when it never appeared.  It's a big gamble.  If GF110 turns up at the party too late when all the hookers are taken, NVidia are in trouble.  If they're hyping it up now, to combat 6xxx series sales, the product better get here ASA f*cking P.

Likewise, of all the people here, only W1zz and those close to him probably know the real performance of the 6870 and MUCH MORE IMPORTANTLY, the 6970.  The 6870 is starters, 6970 is main course and Antilles is the Creme Brulee.

We don't know how good, how on-time, or how efficient the GTX 580 or HD 6970 will be.  Let's just wait and see; then we can all say "told you so!" or just be adults and buy the bloody things.

Now, let's book those flights and get drunk.


----------



## cadaveca (Oct 21, 2010)

Benetanegia said:


> There's truth there.They don't need it, true to an extent. But the thing is that I'm 100% sure that by going the 1.5x GF104 they can get a card with 20% more shaders, 50% more TMU and higher clocks, while the chip is smaller than 500 mm^2 and consumes 50w less. So why don't go for it? And of course as a consumer it's way better.


It's simple: mindshare doesn't work that way. There really aren't a lot of people out there willing to pay for a 10 FPS upgrade.

If they could sell the GTX480 for $339 or less... well, then we could talk about a few things.

The real issue is that because of the problems with this gen of GPUs, and a very weak economy, consumer confidence is much harder to earn than it has ever been, but OEM marketing is still plodding along like it has for the past 6 years or so... and hasn't adjusted to the market.

Performance isn't key any more. Pricing IS. The days of people spending double the cost for 1.4x gains are over. They cannot maintain current pricing with just a 20% boost in performance.


> That is a very common missconception that is based on too many vagueties and only one fact.
> 
> Fact is that a smaller chip costs less than a big chip when everything else is equal.
> 
> ...



All of this info is easy to get. As publicly traded companies, all of these figures are available to those who know where to look.

AMD's GPU division, as a whole, is far more profitable than nV is at this point. Their chips are smaller, with similar ASPs, but debt is what is holding AMD back. They've kinda posted a loss this last quarter, but, at the same time, they ARE paying all of the bills they need to.

nV, on the other hand, is bleeding funds. And this price drop, if it doesn't boost sales enough, is only going to make that wound all the bigger.

It's a very exciting time in the GPU market... I smell some acquisitions soon.


----------



## N3M3515 (Oct 21, 2010)

Benetanegia said:


> There's truth there.
> 
> 
> 
> ...



Neither of them makes cards... they only sell chips.
Even considering all of the other arguments, I still think AMD's chips are cheaper to produce.
And I'm in the habit of comparing based on what I know; the things we don't know could go either way, so there's no practical conclusion to draw from them.


----------



## Benetanegia (Oct 21, 2010)

cadaveca said:


> Performance isn't key any more. Pricing IS. The days of people spending double the cost for 1.4x gains are over. They cannot maintain current prciing with just a 20% boost in performance.



Yeah, and that's why GF110 = 1.5x GF104 makes more sense than continuing with GF100. It's faster, it consumes less and costs less to make.

I don't even think the sources specifically told DigiTimes 20% more performance; it was probably what I posted above, 20% more shaders. With the GF104 easily reaching 900+ MHz, and GF106 and GF108 hitting about the same wall, would you really be sure that said 1.5x GF104 chip couldn't be released with a 750-800 MHz core clock? Remember that the GTX460 achieves that with a very cheap PCB and power circuitry. A better PCB and power circuitry would probably enable the same or better OC on a bigger chip. Think about G92: the 9800 GT had a hard time surpassing the 750 MHz barrier, but the 9800 GTX+ was able to come close to 850 MHz.

With 20% more shading performance and a 15-20% bump in clocks, the card would easily surpass your requirement of 33% and would still have the same OC capabilities as GF100. 

Also, Nvidia sacrificing a bit of OC potential in order to ship better reference cards shouldn't be ruled out. Ati did that with RV670 and RV680 and it worked out relatively well. In that case we would be talking about maybe 850 MHz with the potential to OC to 925-950 MHz (9-12%), and reference performance around 40% better than the GTX480.

Just speculating.
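The combined-gain arithmetic in this post can be sketched in a couple of lines. A hedged Python sketch, assuming (optimistically, as the post does) that shader and clock gains multiply through to performance; the function name is mine:

```python
# Combined speedup from independent shader-count and clock gains,
# assuming performance scales multiplicatively with both.
def combined_speedup(shader_gain: float, clock_gain: float) -> float:
    return (1 + shader_gain) * (1 + clock_gain) - 1

low = combined_speedup(0.20, 0.15)
high = combined_speedup(0.20, 0.20)
print(f"+{low:.0%} to +{high:.0%} over GTX480")  # +38% to +44%, past the 33% bar
```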


----------



## newtekie1 (Oct 21, 2010)

Meh...paper launches are boring.

I'd rather see a dual GF104 card to compete with the HD5970 segment and drive down those insane prices.


----------



## Benetanegia (Oct 21, 2010)

N3M3515 said:


> Neither of them make cards....they only sell chips.
> Even considering all of the other arguments, i still think amd's chips are cheaper to produce.
> And i have the habit to compare from what i know, things that we don't know can go one side or the other, so, no practical conclusion.



No, they have others make the cards for them (Flextronics and Sapphire, I believe?), but reference cards are paid for by them AFAIK, then sold to partners so that they can put their stickers on.

Ok I'm not sure on that, but I have always assumed it was that way.

Besides Nvidia does sell cards at Best Buy now.


----------



## motasim (Oct 21, 2010)

The discussion going on in this thread is quite enlightening indeed. If the GTX 580 turns out to have a 20% performance improvement in games over the GTX 480, AND has a similar performance/watt ratio as the 5870/6970 then I'd certainly go for it!


----------



## wolf (Oct 21, 2010)

Benetanegia said:


> Yeah, and that's why GF110 = 1.5x GF104 makes more sense than continuing with GF100. It's faster, it consumes less and costs less to make.



Easily more sense; there is no reason to push GF100 further, just leave it be. A 1.5x GF104 should be cooler, use less power, and hit higher clocks, all while weighing in with a smaller transistor count, and thus die size, than GF100. Not bad for a 20% increase in SPs.



Benetanegia said:


> Think about G92. 9800 GT had a hard time surpassing the 750 Mhz barrier, but the 9800 GTX+ was able to come close to 850 Mhz.



also remember though that to become a 9800GT instead of GTX/+ the core itself did not bin as well, IMO that has to be taken into consideration, but your point about the board is nonetheless valid.


----------



## Benetanegia (Oct 21, 2010)

wolf said:


> also remember though that to become a 9800GT instead of GTX/+ the core itself did not bin as well, IMO that has to be taken into consideration, but your point about the board is nonetheless valid.



I'm talking about late cards; yields had probably hit their ceiling a long time before, and soon after that the 9800 was discontinued and only the GTS250 remained, meaning that nearly every G92 should have been able to meet the requirements.

But that is not the only example anyway: GF104 vs GF106, Juniper vs Cypress. The difference in attainable clocks between high-end and mainstream/performance parts hasn't been more than 25-50 MHz for a long time now, as long as temps and power are in check.


----------



## yogurt_21 (Oct 21, 2010)

cadaveca said:


> If they could sell GTX480 for $339 or less...well, then we can talk about a few things.



that's a very specific number, how much is in your rig fund? does it match?


----------



## N3M3515 (Oct 21, 2010)

All I want is an HD6870/GTX460 (815 MHz core) at $199.


----------



## entropy13 (Oct 21, 2010)




----------



## cadaveca (Oct 21, 2010)

yogurt_21 said:


> that's a very specific number, how much is in your rig fund? does it match?



My rig fund is far larger than that. 

Anyway, I always base my "ASP" for cards on quite a few things.


----------



## yogurt_21 (Oct 21, 2010)

entropy13 said:


> http://img831.imageshack.us/img831/7600/gtx5803d.png



so.... the 5xx series basically doesn't exist except for 1 card? or did they just add that in there since it's the only name leaked?


cadaveca said:


> My rig fund is far larger than that.
> 
> Anyway, I always base my "ASP" for cards on quite a few things.



Ah good, 'cause you're probably gonna need about double that for a dual Cayman.


----------



## qubit (Oct 21, 2010)

It sounds like yet another rebranding con to call it a 580; a 485 would have been much more appropriate.

However, if it's a decent card, I'll see if I can get one. I'm not going to judge it until W1zzard reviews it.

I have to admit my main reason for currently preferring nvidia over AMD is the driver control panel. It's better designed and has far more sophisticated 3D settings. Why AMD can't make basic improvements like this beats me.

PhysX & 3D Vision aren't that important to me any more. Actually, PhysX would be if it was implemented in more than a handful of games and demos and designed to make an actual difference to game play.


----------



## N3M3515 (Oct 21, 2010)

GTX580 news here


----------



## OneCool (Oct 22, 2010)

Hmmmm... the key words here are "paper launch" and "market launch".

If that's really how it happens... I mean, what's the point?


----------



## wolf (Oct 22, 2010)

qubit said:


> It sounds like yet another rebranding con to call it a 580  a 485 would have been much more appropriate.



News is it's a new GPU, hence 485 wouldn't really work. Maybe if it were still GF100, but the news reports it will be GF110, with either 512 SPs or 576. Sounds like a GF104 with 50% added to it again, which means fewer transistors than a GTX480 and 20% more SPs. I think it has a lot of potential. However, I'd expect a launch version to have either 528 or 576 SPs, not 512.


----------



## CDdude55 (Oct 22, 2010)

The GTX 580 should be coming out around the time the 6900's come out, so hopefully we see some awesome performance from both sides.


----------



## erocker (Oct 22, 2010)

CDdude55 said:


> The GTX 580 should be coming out around the time the 6900's come out, so hopefully we see some awesome performance from both sides.



I would say early Q1 2011 at the soonest. If they can get it out before Christmas that will be very impressive. Especially since Nvidia isn't usually so secretive about their new upcoming products.


----------



## wolf (Oct 22, 2010)

erocker said:


> I would say early Q1 2011 at the soonest. If they can get it out before Christmas that will be very impressive. Especially since Nvidia isn't usually so secretive about their new upcoming products.



If they can make the GPU pin-compatible with the GTX480 PCB, they might just get it out quicker than we'd expect.


----------



## cadaveca (Oct 22, 2010)

I thought the plan _was_ a pre-Christmas launch? Or was that the GTX 460?


----------



## qubit (Oct 22, 2010)

cadaveca said:


> I thought the plan _was_ a pre-Christmas launch? Or was that the GTX 460?



The GTX 460 has been out for a while. Not sure what you mean there.


----------



## TAViX (Oct 22, 2010)

Relax, it's no new generation. It's just based on the latest professional GPU with 512 shaders and a 512-bit memory bus; that's why it's 20% faster. A new generation card should have almost double the performance of the previous gen, just like the 5870 did over the 4870.


----------



## wahdangun (Oct 24, 2010)

CDdude55 said:


> Woot, TPU's heavy bias comes out of the woodwork again! *fist pumps*
> 
> Hopefully they can up the performance when it actually comes out; it's still WAY too early to tell anything.



Sorry, I'm just making fun of that dumb Nvidia CEO launching Fermi with a "wooden" version and claiming it was the real Fermi card.

And I think it will be Q1 2011 before this chip is ready, because let's face it: Nvidia can't even release the full Fermi. They need to shrink it to 28 nm to be feasible, unless they reduce the shader count and make it more efficient, just like the HD 6870.

BTW, I'm not a fanboy, but Nvidia crushed my hopes of a price war with their late and stupid moves. Thank God the HD 6870 has brought the price war back, just like the HD 48XX vs. GTX 2XX era.


----------



## CDdude55 (Oct 24, 2010)

wahdangun said:


> Sorry, I'm just making fun of that dumb Nvidia CEO launching Fermi with a "wooden" version and claiming it was the real Fermi card.
> 
> And I think it will be Q1 2011 before this chip is ready, because let's face it: Nvidia can't even release the full Fermi. They need to shrink it to 28 nm to be feasible, unless they reduce the shader count and make it more efficient, just like the HD 6870.
> 
> BTW, I'm not a fanboy, but Nvidia crushed my hopes of a price war with their late and stupid moves. Thank God the HD 6870 has brought the price war back, just like the HD 48XX vs. GTX 2XX era.



I agree. I doubt we'll see any new card released from Nvidia anytime soon. But then again, they could release a Fermi refresh by the end of the year to battle it out with the 6900s; they could also go 28 nm, but isn't that more reserved for Kepler?

They'll probably just cut prices for the 480 this holiday and wait till next year to really come back with something.


----------



## wahdangun (Oct 24, 2010)

CDdude55 said:


> I agree. I doubt we will see any new card release from Nvidia anytime soon, but then again, they could release a Fermi refresh by the end of the year to battle it out with the 6900's, they could also go 28nm, but isn't that more reserved for Kepler?
> 
> They'll probably just cut prices for the 480 this holiday and wait till next year to really come back with something.



No, I don't think so, because next year, when TSMC is ready with their 28 nm process, ATI will surely release their full-fledged NI card. I think all Nvidia can do right now is cut prices (and take a loss) or discontinue the card altogether and keep the GTX 460 around, just like they did with the GTX 2XX when it was clear Fermi wouldn't arrive for another 4 months!

Or maybe Nvidia has another sinister plan, like Tom's Hardware said:



> It also remains to be seen if Nvidia can maintain the long-term price war it recently declared. Every single GeForce GTX 470 is equipped with a monolithic GF100 GPU in the 530 square millimeter range. That’s close to twice the size of the Radeon HD 6870’s 255 mm2 die. How long can Nvidia keep up such a numbers-based fight? Not long, we’d guess, if there’s nothing else waiting in the wings. *But this sure would be a good time to introduce a card with a fully-equipped GF104 and 384 CUDA cores enabled* (Ed.: I can’t comment, but I know something that you don’t, Don).


----------



## Benetanegia (Oct 24, 2010)

wahdangun said:


> No, I don't think so, because next year, when TSMC is ready with their 28 nm process, ATI will surely release their full-fledged NI card. I think all Nvidia can do right now is cut prices (and take a loss) or discontinue the card altogether and keep the GTX 460 around, just like they did with the GTX 2XX when it was clear Fermi wouldn't arrive for another 4 months!
> 
> Or maybe Nvidia has another sinister plan, like Tom's Hardware said:



I think that the fully enabled and/or higher clocked GF104 will be released soon. 

Also, I still strongly believe that Nvidia will release GF110 this year, because that must have been the plan for a long time, ever since they got the first GF104 die in their hands. There's no news of any GF100 chip fabbed after the initial batches in Q1/early Q2. For example, this is the GF100 in W1zzard's review of the MSI GTX480 Lightning card from a few days ago:







As can be seen in the code 1015A3, the chip is A3 silicon fabbed in week 15 of 2010. Week 15 = April.
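In case anyone wants to decode these date codes themselves, here's a quick Python sketch. (It assumes the YYWW-plus-stepping layout described above, i.e. two year digits, two ISO week digits, then the silicon stepping; that reading is the forum convention, not an official NVIDIA spec.)

```python
# Decode an NVIDIA die date code like "1015A3":
# first two digits = year (20YY), next two = fab week, remainder = stepping.
def decode_die_code(code: str):
    year = 2000 + int(code[0:2])
    week = int(code[2:4])
    stepping = code[4:]
    return year, week, stepping

year, week, stepping = decode_die_code("1015A3")
print(year, week, stepping)  # 2010 15 A3
```

So "1015A3" reads as A3 silicon from week 15 of 2010, matching the photo above.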

GF100 die as seen in W1zzard's review of the GTX480 on release day:






Week 9.

GTX460 as seen in W1zzard's review on launch day:






Week 22

Only a 7-13 week difference, 2-3 months, between production chips; GF104 test samples should have been in Nvidia's hands some weeks earlier. It should have become obvious to Nvidia pretty quickly that GF104 was much better, and the fact that no GF100 seems to have been fabbed since then (week 15) suggests Nvidia didn't expect GF100 to last long on the market once they saw GF104. The fact that the GTX460 was so seriously crippled, in order to clear GF100 inventories fast, also suggests IMO that they had something coming they wanted to make room for. If they have no more GF100 (as can be deduced from the fact that even cards released now use leftovers from so long ago), they should have something to fill that segment. I don't think they wanted to go months without anything that can even remotely compete with Cayman, and GF100 could at least compete with it, or come close, performance-wise.

Just my 2 cents.


----------



## trt740 (Oct 24, 2010)

Atom_Anti said:


> Haha, the new GrillForce is coming! Chefs going to like it.
> http://img696.imageshack.us/img696/2701/grillforce.jpg



I'm not so sure; won't this be built on a smaller die like the GTX 460? If so, it should be a lot cooler and handle more functions per shader.


----------



## wahdangun (Oct 24, 2010)

Benetanegia said:


> I think that the fully enabled and/or higher clocked GF104 will be released soon.
> 
> Also I still strongly believe that Nvidia will release GF110 this year, because that must have been the plan since a long time ago, since they got the firts GF104 die on their hands. There's no news of any GF100 chip fabbed after the initial batches in Q1/early Q2. For example this is the GF100 on Wizzard's review of the MSI GTX480 Lightning card reviewed a few days ago:
> 
> ...




I dunno, because Nvidia already did that: when the GTX 2XX couldn't compete on price and performance, they simply EOL'd it, even though the Fermi cards weren't ready for months.

But I think Nvidia will use AMD's trick: after seeing how well the GTX460 overclocks, I'm fairly sure they will release a full-fledged GTX 460 with significantly higher clocks to minimize the HD 6970 damage.


----------



## motasim (Oct 25, 2010)

wahdangun said:


> ... but I think Nvidia will use AMD's trick: after seeing how well the GTX460 overclocks, I'm fairly sure they will release a full-fledged GTX 460 with significantly higher clocks to minimize the HD 6970 damage.



I think you mean 6870 and not 6970, since the performance of a fully enabled GTX 460 will only come close to that of a 6870, keeping in mind that the 6970 is intended to blow the GTX480 away.


----------



## Benetanegia (Oct 25, 2010)

motasim said:


> I think you mean 6870 and not 6970, since the performance of a fully enabled GTX 460 will only come close to that of a 6870, keeping in mind that the 6970 is intended to blow the GTX480 away.



Well, I think he doesn't mean the HD6870 at all, and you are completely wrong. Fully enabled and properly clocked, a GF104 most probably smokes the HD6870. A GTX460 @ 800 MHz is already as fast as an HD6870 and has about the same OC potential left in it, according to W1zzard's reviews. With 15% more of the chip enabled, you can be sure it would be almost 15% faster and close to a GTX480. Properly priced, it could make Cayman look like overkill and overpriced. Bear in mind that Nvidia is already selling GF104 at (and below) $200 in the form of the GTX460, meaning that a fully enabled one for $250 is possible. I mean, the production cost of a GTX460 and the supposed GTX475 would be similar; maybe it would cost $10-$25 more to make, so selling them for $250 ($50-$75) would be a relief. And they are already selling the GTX470 for that much anyway.


----------



## senninex (Oct 26, 2010)

*AND.. to be HOTTER CARD EVER WITH MORE THAN 20% OF WATTAGE NEEDED!*


----------



## CDdude55 (Oct 26, 2010)

senninex said:


> snip



Unnecessarily big font as well as a grammar fail all in one post...  :shadedshu


----------



## Roph (Oct 26, 2010)




----------



## Steevo (Oct 26, 2010)

So a paper launch, on a cocktail napkin, telling us when in the future the paper launch will be, at a special Nvidia event, to keep all the girls giggling about the new size of a plastic penis extender.


Wow, the drama.


----------



## CDdude55 (Oct 26, 2010)

Roph said:


> http://i.imgur.com/Ywqjl.jpg





They're still awesome cards (performance-wise)...


----------



## wahdangun (Oct 26, 2010)

motasim said:


> I think you mean 6870 and not 6970, since the performance of a fully enabled GTX 460 will only come close to that of a 6870, keeping in mind that the 6970 is intended to blow the GTX480 away.





Benetanegia said:


> Well I think he does not mean HD6870 at all. You are completely wrong. Fully enabled and properly clocked, a GF104 most probably smokes the HD6870. A GTX460 @ 800 Mhz is already as fast as HD6870 and has about the same OC potential left on it according to Wizzard reviews. With 15% more of it enabled you can be sure that it would be almost 15% faster and close to a GTX480.  Properly priced it could make Cayman look like overkill and overpriced. Bear in mind that Nvidia is already selling GF104 at (and below) $200 in the form of GTX460, meaning that a fully enabled one for $250 is posible. I mean production cost of a GTX460 and the supposed GTX475 would be similar, maybe it could cost $10-$25 more to make, so selling them for $250 ($50-$75) would be a relief. And they are already selling GTX470 for that much anyway.



Yes, I mean the HD 6970. And Bene, there's no such thing as overkill for us TPUers; we've only just maxed Crysis, so maybe we need that kind of power for Crysis 2.

And BTW, I think Nvidia makes almost no profit on the GTX 470. Just think about it: the GTX 480 still sells for $400, and they have almost the same PCB design and the same GPU.


----------



## Tatty_One (Oct 26, 2010)

senninex said:


> *AND.. to be HOTTER CARD EVER WITH MORE THAN 20% OF WATTAGE NEEDED! Although i have no clue what I am talking about as we only have speculation to play with at the moment*



My troll senses twitched as I visited TPU this morning


----------



## motasim (Oct 26, 2010)

Benetanegia said:


> Well I think he does not mean HD6870 at all. You are completely wrong. Fully enabled and properly clocked, a GF104 most probably smokes the HD6870. A GTX460 @ 800 Mhz is already as fast as HD6870 and has about the same OC potential left on it according to Wizzard reviews. With 15% more of it enabled you can be sure that it would be almost 15% faster and close to a GTX480.  Properly priced it could make Cayman look like overkill and overpriced. Bear in mind that Nvidia is already selling GF104 at (and below) $200 in the form of GTX460, meaning that a fully enabled one for $250 is posible. I mean production cost of a GTX460 and the supposed GTX475 would be similar, maybe it could cost $10-$25 more to make, so selling them for $250 ($50-$75) would be a relief. And they are already selling GTX470 for that much anyway.



Cool down, mate.  I just thought that wahdangun made a typo, and now that I know it wasn't a typo, it still doesn't make sense to me. It takes two GTX 460s in SLI (noting that they scale excellently) to beat the GTX 480. Now, we already know that the upcoming 6970 will blow the GTX 480 away, so logic says it'll take a dual-GF104-chip GPU to perform close to the 6970.

Now, I know what you'll say: that the GF104 in the GTX 460 is not "fully enabled". So let's examine that statement, shall we? Does anybody really expect a "fully enabled" GF104 to perform equally to two of the standard GF104s used in the GTX460?!  That would be stupid, right? The fact is that you've made a lot of illogical speculations about the upcoming "GTX 475" without substantiating them with any benchmarks or evidence, thus sounding like a true nVidia fanboy, Benetanegia. Let's be mature here and not argue over pure speculation; it's just a waste of time.

I'm sorry to be the one breaking the news to you, but the logical conclusion is that even the "fully enabled", "properly clocked", and "properly priced" GF104 you are hoping for can never touch Cayman. 

That said, I truly hope that nVidia comes up with a pleasant surprise soon in their GTX 580, or whatever it is called; otherwise it is definitely Cayman for me. I am going to build a new gaming rig in December/January and I'll buy the best GPU I can get for $425 at that time, regardless of whether it's nVidia or AMD/ATI. 

Edit: ... one more thought came to mind: Benetanegia called Cayman overkill, but I wonder if he said the same about the GTX 480 when it launched for $500+, as any objective person would have... I think not; the bias towards nVidia is clear in every word he writes ...


----------



## CDdude55 (Oct 26, 2010)

motasim said:


> Cool down mate  I just thought that wahdangun made a typo, and now that I know that it wasn't a typo; it still doesn't make sense to me. It takes two GTX 460 in SLI (noting that they scale excellently) to beat the GTX 480. Now we already know that the upcoming 6970 will blow the GTX 480 away; so logic says that it'll take a dual-GF104-chip GPU to perform close to the 6970.
> 
> Now, I know what you'll say, that the GF104 in GTX 460 is not "fully-enabled", so let's examine that statment shall we; does anybody really expect a "fully-enabled" GF104 to equally perform as two of the standard GF104 used in GTX460?!  That would be stupid, right? The fact is that you've made a lot of illogical speculations about the upcoming "GTX 475" without substantiating them with any benchmarks or evidence, thus sounding like a true nVidia fanboy Benetanegia. Let's be mature here and not argue over pure speculations, or it's just a waste of time.
> 
> ...



I don't think we truly know enough about Cayman to determine that. You talk as though it will inevitably be better, but that's still just strong speculation.


----------



## motasim (Oct 26, 2010)

CDdude55 said:


> I don't think we truly know enough about Cayman to determine that. You talk as though it will inevitably be better, but that's still just strong speculation.



I only have this, but again, CDdude55: logically, if it wasn't going to be better (and they have had a year to work on it now), then why the hell are they bothering with it, and who would buy it if it wasn't?!


----------



## Benetanegia (Oct 26, 2010)

motasim said:


> Cool down mate  I just thought that wahdangun made a typo, and now that I know that it wasn't a typo; it still doesn't make sense to me. It takes two GTX 460 in SLI (noting that they scale excellently) to beat the GTX 480. Now we already know that the upcoming 6970 will blow the GTX 480 away; so logic says that it'll take a dual-GF104-chip GPU to perform close to the 6970.
> 
> Now, I know what you'll say, that the GF104 in GTX 460 is not "fully-enabled", so let's examine that statment shall we; does anybody really expect a "fully-enabled" GF104 to equally perform as two of the standard GF104 used in GTX460?!  That would be stupid, right? The fact is that you've made a lot of illogical speculations about the upcoming "GTX 475" without substantiating them with any benchmarks or evidence, thus sounding like a true nVidia fanboy Benetanegia. Let's be mature here and not argue over pure speculations, or it's just a waste of time.
> 
> ...



Funny that you tell me I'm drawing conclusions with no info, when you're the only one doing that:



> Now we already know that the upcoming 6970 will blow the GTX 480 away



We know what? And even those numbers mean nothing in reality. Vantage? An HD5870 is as fast as a GTX480 in Vantage. So that Vantage benchmark could only mean that Cayman is some 30% faster than Cypress and hence only 15% faster than a GTX480. And even then we'd still be basing our assumptions on thin air, since those benchmarks are probably fake.

My (let's call it) "assumption" (although you'll see how it's not one) about fully enabled GF104 performance is based on hard facts, on the other hand. A GTX460 @ 820 MHz is as fast as a GTX470 and hence also an HD6870:






And with 15% more shaders/TMUs/tessellators/... enabled, it would be almost 15% faster, because Fermi scales linearly with shading performance. You don't believe me?

Let's see what the GFlops are for the Fermi lineup:

GT430 = 268.8 GFlops
GTS450 = 601.34 GFlops
GTX460 = 907.2 GFlops
GTX470 = 1088.64 GFlops
GTX480 = 1344.96 GFlops

Now let's normalize those numbers so that the GTX460 represents 85%, just like in the chart above, and see if there's a relation. What I'm doing: if 907.2 GFlops = 85%, then 268.8 GFlops = 85% × 268.8/907.2 = 25.18%. OK, let's do it for all the cards listed above:

GT430 = 25.18% ---------> 27% on the chart
GTS450 = 56.34% --------> 55% on the chart
GTX460 = 85% -----------> 85% on the chart obviously 
GTX470 = 102% ----------> 104% on the chart
GTX480 = 126% ----------> 128% on the chart

The conclusion is no other than this: *Fermi scales linearly with GFlops*. And what would be the GFlops for the hypothetical GTX475?

384 SPs × 800/850 MHz × 2 (shader clock) × 2 (FMADD) = 1228.8/1305.6 GFlops

And normalized:

GTX475 = *115/122%* +/- 2%
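For anyone who wants to check the arithmetic, here's a quick Python sketch reproducing the normalization above. The GFlops figures are the ones from this post; the "GTX475" specs (384 SPs at an 800-850 MHz core clock) are of course pure speculation, as is the card itself.

```python
# Normalize Fermi GFlops so the GTX460 sits at 85%, matching the
# relative-performance chart, then extrapolate the rumored GTX475.
gflops = {
    "GT430": 268.8,
    "GTS450": 601.34,
    "GTX460": 907.2,
    "GTX470": 1088.64,
    "GTX480": 1344.96,
}

baseline = gflops["GTX460"]
normalized = {card: 85.0 * g / baseline for card, g in gflops.items()}
for card, pct in sorted(normalized.items(), key=lambda kv: kv[1]):
    print(f"{card}: {pct:.2f}%")

# Speculative GTX475: 384 SPs x 2 (shader clock) x 2 (FMADD) at 800-850 MHz core
for core_mhz in (800, 850):
    g = 384 * (2 * core_mhz) * 2 / 1000.0  # GFlops
    print(f"GTX475 @ {core_mhz} MHz core: {g:.1f} GFlops -> {85.0 * g / baseline:.0f}%")
```

Running it reproduces the 25/56/85/102/126% column above and lands the speculative GTX475 at 115-122%.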

So now that we both DO know what the performance of the supposed GTX475 would be, let me explain what I meant.

If Nvidia releases that card within a hair of GTX480 performance for $250, anything selling above that price would look simply overkill/overpriced to almost anyone except enthusiasts, and Cayman XT will most probably sell for more than $400. Also bear in mind that such a card would cost Nvidia almost the same to make as the GTX460 1GB does, so selling it at $250 would be a relief rather than a curse.

Oh, and BTW, for most people's needs anything above a GTX460 or HD5850 is overkill. For most people the GTX480 was overkill selling at $500, and it's still overkill now selling for $400.


----------



## motasim (Oct 26, 2010)

Benetanegia said:


> Funny that you tell me that I'm making conclusions with no info, when it's you the only one doing that:
> 
> 
> 
> ...



Nice statistical work Benetanegia, and I have to admit that there is some logic in your point of view now that you've explained it. My take on this matter is as follows:

1) I don't think it's right to decide for other people what they need or don't need. How can you say that anything above the HD 5850/GTX 460 is overkill? That would mean the huge crowd who bought the highly successful 5870 are idiots, which is definitely not the case. It's a free world, so just let everyone select the GPU that suits them best, and spare us your personal judgments.

2) If nVidia is going to release this GTX 475, or whatever it is called, any time soon, and knowing that this GPU will perform as well as the GTX 480 yet sell at $250, then they would definitely inform the card manufacturers ahead of time to take measures to deplete their stock of GTX 480 & GTX 470 GPUs (through discounts, bundles, etc.), since after such a GPU hits the market at that price point, they'll never be able to move the GF100 GPUs, right??? If this were the case, then how come MSI has just released the N480GTX Lightning and Point of View has just announced the TGT GTX 480 Beast? It simply doesn't make sense. Yes, I do believe that nVidia is going to release a fully enabled GF104 GPU soon, according to many rumors, but its performance will only compete with the 6870 and not the GTX480, thus targeting the gamers' sweet spot, as they call it; that won't be enough for enthusiasts. In fact, now that I see that ZOTAC is going to release the GTX 460 X2 soon, I become more and more convinced that nVidia is not going to release the long-awaited GTX 495 (a dual-GF104-chip card), otherwise ZOTAC wouldn't have taken this initiative on their own.

3) Forget about possibly fake benchmarks of the 6970, and just listen to simple reasoning. AMD already has the 5870, a very successful GPU performing just behind the GTX 480, and it has been over a year since they released it, so they have had enough time to develop the 6970. Tell me: by what logic would AMD produce this new GPU if it wouldn't beat the GTX 480? Simply not possible. Enthusiasts and extreme gamers (who are targeted by the Cayman XT & Pro GPUs) are always looking for what is faster and better, and that's where the 6970 fits in. Even if it were only 10-15% better than the GTX 480, achieving that at a reasonable TDP with good power efficiency, it would definitely be a winner, given they don't price it stupidly.


----------



## Benetanegia (Oct 26, 2010)

motasim said:


> Nice statistical work Benetanegia, and I have to admit that there is some logic in your point of view now that you've explained it. My take on this matter is as follows:
> 
> 1) I don't think that it is right to decide for other people what they need or don't need. How can you say that anything above the HD 5850/GTX 460 is an overkill; that means that the huge crowd who bought the highly successful 5870 are idiots, which is definitely not the case. It's a free world so just let everyone select the GPU that suits him/her best, and spare us your personal judgments.
> 
> ...



1) I'm not deciding anything. I'm just stating what most people's perception is. How come the GTX460 is so successful? How come AMD released Barts specifically for that price point? Because it's what most people are willing to buy. For 90% of people, Barts has made Cypress pointless. And the GTX480 is also pointless, but in reality both cards were enthusiast cards and not meant to sell a lot. Either way, a strong $250 card just narrows the enthusiast market even further; that's what I am talking about, and I think that's in a sense what wahdangun was saying.



> ... but it think nvdia will use AMD trick, and after seeing how well the GTX460 oc i can certainly sure they will be releasing full fledged GTX 460 with significantly higher clock to minimize HD 6970 damage



Nowhere there is he saying that a fully enabled GF104 will be close to Cayman performance-wise, and neither did I say that anywhere. We are just saying that a strong performance part will most likely steal a lot of high-end sales, and the GTX475 could certainly do that.

2) GTX480 performance from this GTX475 is not a certainty, and neither is it a necessity. It just needs to offer enough performance to play nearly all games maxed out. A GTX460/HD68xx/HD5850 already does that, according to what maxed out means for the grand majority. Anything faster is going to be welcome as long as it's priced really well, but very few are going to rush to the store to buy $400+ cards, no matter what performance they offer, because it's simply not needed, much less if there's something for $250 that suits their needs just as well.

As to why partners are releasing those cards: because that's how they are going to get rid of GF100. I could ask you why they released the GTX465 just a month prior to GF104 if they knew GF104 was going to kick it in the butt. Because they had to sell them. 

As for Nvidia, they've done everything in their hands to sell those GF100s, including releasing the GTX460 as it is instead of releasing the full chip, in order not to steal GF100 sales. The explanation of bad yields is false. Yields work in two directions: 1) defects that render some parts unusable, and 2) defects that limit the maximum attainable clocks in certain areas. The two always go hand in hand; you never get a lot of one type but none of the other. The situation we are seeing is literally impossible: not having enough chips to release a 384 SP SKU from day one, while every single GTX460 can hit 850 MHz, is just not possible. If yields were bad, clocks wouldn't be good either, and even if that were possible, it still doesn't explain why the GTX460 wasn't released at 750-850 MHz to begin with. The only explanation is that GF100 had to sell. And why did GF100 have to sell? Because something better is coming to fill that gap.

3) Yes, 15-20% faster than a GTX480 is possible, and maybe more, but those who think that Cayman is going to be twice as fast as Barts are living in a pipe dream. I mean, I'll never say it's impossible, but it's very, very unlikely. For instance, Barts is not a real improvement over Cypress/Juniper. It offers close performance while being smaller, mostly because it lacks double precision (FP64) support, just like Juniper did. The other reason is that it's well known that Cypress had some scaling issues with more than 1440 SPs. An HD5850 running at the same clocks is just as fast as an HD5870, and there's no way to know if the same would happen with the HD5830 if it weren't for the crippled ROP count. Was 16 ROPs a necessity, or a trick to not expose the scaling problem? We'll never know. Barts hits a good spot and that's all. There's no way to know at this point if Cayman will scale any better than Cypress, and it needs to scale much, much better if they want to make a real difference.

Another issue with a 2x-Barts Cayman is size. Barts, while smaller than Cypress, is 255 mm^2, but it lacks FP64 support. Double that and you are at 500 mm^2; bring in some optimizations and you go lower, but then you have to bring FP64 support back in and the size goes up again. We are still on the 40 nm process, and AMD cannot do magic, as some people here think. If they could do magic they would have used it on Barts, and Barts really isn't anything special, except for the fact that it was designed for the best-selling market, with just enough SPs to perform well but not so many as to hit the efficiency "wall".

EDIT: Things about Barts that make me almost 100% sure that Cayman is not going to be massively faster than Cypress:

From Anandtech:



> Compared to Cypress, you’ll note that FP64 performance is not quoted, and this isn’t a mistake. Barts isn’t meant to be a high-end product (that would be the 6900 series) so FP64 has been shown the door in order to bring the size of the GPU down.





> However it’s worth noting that internally AMD was throwing around 2 designs for Barts: a 16 SIMD (1280 SP) 16 ROP design, and a 14 SIMD (1120 SP) 32 ROP design that they ultimately went with. The 14/32 design was faster, but only by 2%.



It hints at some shader inefficiency as well as a need for more ROP power. Not really good for Cayman, as it could mean that Cayman needs more than 32 ROPs in order to be much faster (like 2x) than Barts, which again makes it too big.



> Along with selectively reducing functional blocks from Cypress and removing FP64 support, AMD made one other major change to improve efficiency for Barts: they’re using Redwood’s memory controller. In the past we’ve talked about the inherent complexities of driving GDDR5 at high speeds, but until now we’ve never known just how complex it is. It turns out that Cypress’s memory controller is nearly twice as big as Redwood’s! By reducing their desired memory speeds from 4.8GHz to 4.2GHz, AMD was able to reduce the size of their memory controller by nearly 50%. Admittedly we don’t know just how much space this design choice saved AMD, but from our discussions with them it’s clearly significant.



This means, again, that Cayman is bigger than Barts relative to the performance it'd offer. Either a 512-bit bus is used in Cayman along with slow memory, making it big, or the improved Cypress memory controller is used again (or an even bigger one to support 6 Gbps memory), which makes it bigger too. Either way, bigger.


----------



## wahdangun (Oct 27, 2010)

I think the biggest market is sub-$200 cards, but having the performance king is necessary. Just look at the HD 3850: it was the cheap sub-$200 champion, but AMD was still losing market share. After the HD 4870 performed really well and the HD 4870X2 became the performance king, AMD's market share rose rapidly, and with the HD 5870 they surpassed Nvidia (because Nvidia didn't have any card to compete for almost 6 months).

And BTW, the majority of people are still on onboard graphics; just take a look at Intel's market share. So by your logic, we wouldn't even need discrete graphics cards, because the majority of people get by just fine with onboard graphics.


----------



## motasim (Oct 27, 2010)

wahdangun said:


> I think the biggest market is sub-$200 cards, but having the performance king is necessary. Just look at the HD 3850: it was the cheap sub-$200 champion, but AMD was still losing market share. After the HD 4870 performed really well and the HD 4870X2 became the performance king, AMD's market share rose rapidly, and with the HD 5870 they surpassed Nvidia (because Nvidia didn't have any card to compete for almost 6 months).
> 
> And BTW, the majority of people are still on onboard graphics; just take a look at Intel's market share. So by your logic, we wouldn't even need discrete graphics cards, because the majority of people get by just fine with onboard graphics.



Thanks wahdangun, and you are mostly right, but please note that the market we are referring to is not all PC owners/users, but rather the group of users willing to invest in a proper discrete GPU for gaming, folding, and/or professional use. I truly hope that nVidia takes some serious steps to properly compete with AMD/ATI, to keep the market from being monopolized, which is, in the end, not in the best interest of us gamers.


----------



## Benetanegia (Oct 27, 2010)

wahdangun said:


> I think the biggest market is sub-$200 cards, but having the performance king is necessary. Just look at the HD 3850: it was the cheap sub-$200 champion, but AMD was still losing market share. After the HD 4870 performed really well and the HD 4870X2 became the performance king, AMD's market share rose rapidly, and with the HD 5870 they surpassed Nvidia (because Nvidia didn't have any card to compete for almost 6 months).



Well, yeah, the halo effect does exist. Not sure it's so important anymore, but it still exists.



> And BTW, the majority of people are still on onboard graphics; just take a look at Intel's market share. So by your logic, we wouldn't even need discrete graphics cards, because the majority of people get by just fine with onboard graphics.



Like motasim said above, we are talking about graphics card sales, so obviously we are not talking about IGPs. Also, contrary to common belief, among gamers the ASP is $150-300, not the sub-$200 market. Just take a look at Steam's hardware survey and you'll see a myriad of performance $200-300 cards at the top, and very few midrange and low-end cards.


----------

