Thursday, October 21st 2010

GeForce GTX 580 Expected to be 20% Faster than GeForce GTX 480

NVIDIA's next enthusiast-grade graphics processor, the GeForce GTX 580, based on the new GF110 silicon, is poised for at least a paper launch by the end of November or early December 2010. Sources in the video card industry told DigiTimes that the GTX 580 is expected to be 20% faster than the existing GeForce GTX 480. The new GPU is built on the existing 40 nm process; NVIDIA's 28 nm GPUs based on the Kepler architecture are expected to take shape only towards the end of 2011. Later this week, AMD is launching the Radeon HD 6800 series performance graphics cards, and will market-launch its next high-end GPU, codenamed "Cayman", in November.
Source: DigiTimes

98 Comments on GeForce GTX 580 Expected to be 20% Faster than GeForce GTX 480

#51
the54thvoid
Intoxicated Moderator
Let me be the voice of reason, and then we can all arrange cheap flights, meet in a bar, have good old drinks and talk about games and girls and guns? Huh, does that sound fun?

Nvidia brought us exceptional gfx solutions. Then they messed up big time with Fermi. They have since addressed that with GF104, which is close to 5870 perf per watt (not close to 5850). If they base GF110 on the GF104 arch (or lessons learnt) they probably will make this card work. And good for them.

BUT... what they are doing is invoking the memory of the Fermi launch last year when it never appeared. It's a big gamble. If GF110 turns up at the party too late when all the hookers are taken, NVidia are in trouble. If they're hyping it up now, to combat 6xxx series sales, the product better get here ASA f*cking P.

Likewise, of all the people here, only W1zz and those close to him probably know the real performance of the 6870 and, MUCH MORE IMPORTANTLY, the 6970. The 6870 is the starter, the 6970 is the main course and Antilles is the crème brûlée.

We don't know how good, how on time or how efficient the GTX 580 or HD 6970 will be. Let's just wait and see, then we can all say "told you so!" or just be adults and buy the bloody things.

Now, let's book those flights and get drunk.
Posted on Reply
#52
cadaveca
My name is Dave
BenetanegiaThere's truth there. They don't need it, true to an extent. But the thing is that I'm 100% sure that by going the 1.5x GF104 route they can get a card with 20% more shaders, 50% more TMUs and higher clocks, while the chip is smaller than 500 mm^2 and consumes 50 W less. So why not go for it? And of course as a consumer it's way better.
It's simple. Mindshare doesn't work that way. There really aren't a lot of people out there willing to pay for a 10 FPS upgrade.

If they could sell GTX480 for $339 or less...well, then we can talk about a few things.

The real issue is that because of the problems with this gen of GPUs, and a very weak economy, consumer confidence is much harder to earn than it ever has been, but OEM marketing is still plodding along like it has been for the past 6 years or so... and hasn't adjusted with the market.

Performance isn't key any more. Pricing IS. The days of people spending double the cost for 1.4x gains are over. They cannot maintain current pricing with just a 20% boost in performance.
That is a very common misconception that is based on too many vagaries and only one fact.

Fact is that a smaller chip costs less than a big chip when everything else is equal.
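To illustrate that one fact, here's a rough back-of-envelope sketch (Python, with made-up wafer prices, die sizes and yields rather than anything TSMC actually charges) of how die size feeds into per-chip cost:

```python
# Rough per-die cost estimate. Every number here is illustrative, not real.
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # Classic approximation: wafer area / die area, minus edge losses.
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(die_area_mm2, wafer_cost_usd, yield_rate):
    # Wafer cost spread over the dies that actually work.
    return wafer_cost_usd / (dies_per_wafer(die_area_mm2) * yield_rate)

wafer_cost = 5000.0  # hypothetical 40 nm wafer price
print(cost_per_good_die(334, wafer_cost, 0.60))  # Cypress-ish die, guessed yield
print(cost_per_good_die(530, wafer_cost, 0.40))  # GF100-ish die, guessed yield
```

With equal wafer prices the bigger die costs a lot more per good chip, but the questions below are exactly the parts this toy math ignores.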

But that's not the end of it. There are far too many things we don't know that have just as much effect on profitability.

- How much does AMD pay per wafer and how much Nvidia, considering Nvidia buys 2x the amount of wafers and is not going to flee to GloFo as soon as GloFo is ready?
- Chip price is only a fraction of what a card costs. How much does it cost Nvidia to make the cards and how much AMD?
- AMD's ability to make smaller chips is based on much more $$ put into that R&D department than Nvidia puts in. How much?
- How much does it cost Nvidia and AMD to operate as a company?
All of this info is easy to get. As publicly traded companies, all of these figures are available to those who know where to look.

AMD's GPU division, as a whole, is far more profitable than nV is at this point. Their chips are smaller, with similar ASPs, but debt is what is holding AMD back at this point. They've kinda posted a loss this last quarter, but, at the same time, they ARE paying all of the bills they need to.

nV, on the other hand, is bleeding funds. And this price drop, if it doesn't boost sales enough, is only going to make that wound all the bigger.


It's a very exciting time in the GPU market... I smell some acquisitions soon.
Posted on Reply
#53
N3M3515
BenetanegiaThere's truth there.



They don't need it, true to an extent. But the thing is that I'm 100% sure that by going the 1.5x GF104 route they can get a card with 20% more shaders, 50% more TMUs and higher clocks, while the chip is smaller than 500 mm^2 and consumes 50 W less. So why not go for it? And of course as a consumer it's way better.



That is a very common misconception that is based on too many vagaries and only one fact.

Fact is that a smaller chip costs less than a big chip when everything else is equal.

But that's not the end of it. There are far too many things we don't know that have just as much effect on profitability.

- How much does AMD pay per wafer and how much Nvidia, considering Nvidia buys 2x the amount of wafers and is not going to flee to GloFo as soon as GloFo is ready?
- Chip price is only a fraction of what a card costs. How much does it cost Nvidia to make the cards and how much AMD?
- AMD's ability to make smaller chips is based on much more $$ put into that R&D department than Nvidia puts in. How much?
- How much does it cost Nvidia and AMD to operate as a company?
Neither of them makes cards... they only sell chips.
Even considering all of the other arguments, I still think AMD's chips are cheaper to produce.
And I have the habit of comparing based on what I know; things that we don't know can go either way, so there's no practical conclusion.
Posted on Reply
#54
Benetanegia
cadavecaPerformance isn't key any more. Pricing IS. The days of people spending double the cost for 1.4x gains are over. They cannot maintain current pricing with just a 20% boost in performance.
Yeah, and that's why GF110 = 1.5x GF104 makes more sense than continuing with GF100. It's faster, it consumes less and costs less to make.

I don't even think that the sources specifically told DigiTimes 20% more performance; rather, it was what I posted above, 20% more shaders. With the GF104 easily reaching 900+ MHz, and GF106 and GF108 hitting about the same wall, would you really be sure that the said 1.5x GF104 chip wouldn't be able to be released with a 750-800 MHz core clock? Remember that the GTX 460 achieves that with a very cheap PCB and power circuitry. A better PCB and power circuitry would probably enable the same or better OC on a bigger chip. Think about G92: the 9800 GT had a hard time surpassing the 750 MHz barrier, but the 9800 GTX+ was able to come close to 850 MHz.

With 20% more shading performance and a 15-20% bump on clocks, the card would easily surpass your requirement of 33% and would still have the same OC capabilities as GF100.
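Just to show the arithmetic behind that claim (a quick sketch; the 20% and 15-20% figures are the speculative ones from this thread, not measurements):

```python
# Naive scaling estimate: assumes performance scales linearly with shader
# throughput and with core clock, which real games never quite do.
shader_gain = 1.20                   # speculated: 20% more shading power
for clock_gain in (1.15, 1.20):      # speculated: 15-20% higher clocks
    total = shader_gain * clock_gain
    print(f"+{(clock_gain - 1) * 100:.0f}% clock -> +{(total - 1) * 100:.0f}% vs GTX 480")
# Prints roughly +38% and +44%, i.e. comfortably past the 33% mentioned above.
```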

Also, Nvidia sacrificing a bit of OC potential in order to achieve better reference cards shouldn't be ruled out. ATI did that with RV670 and RV680 and it worked out relatively well. In this case we would be talking about maybe 850 MHz with potential to OC to 925-950 MHz (9-12%), and reference performance would be around 40% better than the GTX 480.

Just speculating.
Posted on Reply
#55
newtekie1
Semi-Retired Folder
Meh...paper launches are boring.

I'd rather see a dual GF104 card to compete with the HD5970 segment and drive down those insane prices.
Posted on Reply
#56
Benetanegia
N3M3515Neither of them makes cards... they only sell chips.
Even considering all of the other arguments, I still think AMD's chips are cheaper to produce.
And I have the habit of comparing based on what I know; things that we don't know can go either way, so there's no practical conclusion.
No, they have others make the cards for them (Flextronics and Sapphire, I believe?), but reference cards are paid for by them AFAIK, then sold to partners so that they can put stickers on them.

OK, I'm not sure on that, but I have always assumed it was that way.

Besides, Nvidia does sell cards at Best Buy now. :laugh:
Posted on Reply
#57
motasim
The discussion going on in this thread is quite enlightening indeed. If the GTX 580 turns out to have a 20% performance improvement in games over the GTX 480, AND has a similar performance/watt ratio to the 5870/6970, then I'd certainly go for it!
Posted on Reply
#58
wolf
Performance Enthusiast
BenetanegiaYeah, and that's why GF110 = 1.5x GF104 makes more sense than continuing with GF100. It's faster, it consumes less and costs less to make.
Easily more sense; there is no reason to push GF100 more, just leave it be. A 1.5x GF104 should be cooler, use less power, and hit higher clocks, all while weighing in with a smaller transistor count and thus die size than GF100. Not bad for a 20% increase in SPs.
BenetanegiaThink about G92: the 9800 GT had a hard time surpassing the 750 MHz barrier, but the 9800 GTX+ was able to come close to 850 MHz.
Also remember, though, that to become a 9800 GT instead of a GTX/+ the core itself did not bin as well. IMO that has to be taken into consideration, but your point about the board is nonetheless valid.
Posted on Reply
#59
Benetanegia
wolfAlso remember, though, that to become a 9800 GT instead of a GTX/+ the core itself did not bin as well. IMO that has to be taken into consideration, but your point about the board is nonetheless valid.
I'm talking about late cards; yields had probably reached the ceiling a long time ago, and soon after that the 9800 was discontinued and only the GTS 250 prevailed, meaning that nearly every G92 should be able to meet the requirements.

But that is not the only example anyway: GF104 vs GF106, Juniper vs Cypress. The difference in attainable clocks between high-end and mainstream/performance hasn't been more than 25-50 MHz for a long time now, as long as temps and power are in check.
Posted on Reply
#60
yogurt_21
cadavecaIf they could sell GTX480 for $339 or less...well, then we can talk about a few things.
That's a very specific number. How much is in your rig fund? Does it match?
Posted on Reply
#61
N3M3515
All I want is an HD 6870/GTX 460 (815 MHz core) at 199 USD :D
Posted on Reply
#63
cadaveca
My name is Dave
yogurt_21That's a very specific number. How much is in your rig fund? Does it match?
My rig fund is far larger than that. :laugh:

Anyway, I always base my "ASP" for cards on quite a few things.

;)
Posted on Reply
#64
yogurt_21
entropy13img831.imageshack.us/img831/7600/gtx5803d.png
So... the 5xx series basically doesn't exist except for one card? Or did they just add that in there since it's the only name leaked?
cadavecaMy rig fund is far larger than that. :laugh:

Anyway, I always base my "ASP" for cards on quite a few things.

;)
Ah good, 'cause you're probably gonna need about double that for a dual Cayman.
Posted on Reply
#65
qubit
Overclocked quantum bit
It sounds like yet another rebranding con to call it a 580 :rolleyes: A 485 would have been much more appropriate.

However, if it's a decent card, I'll see if I can get one. I'm not going to judge it until W1zzard reviews it.

I have to admit my main reason for currently preferring Nvidia over AMD is the driver control panel. It's better designed and has far more sophisticated 3D settings. Why AMD can't make basic improvements like this beats me.

PhysX & 3D Vision aren't that important to me any more. Actually, PhysX would be if it were implemented in more than a handful of games and demos and designed to make an actual difference to gameplay.
Posted on Reply
#67
OneCool
Hmmmm... the keywords here are "paper launch" and "market launch".

If that's really how it happens... I mean, what's the point?
Posted on Reply
#68
wolf
Performance Enthusiast
qubitIt sounds like yet another rebranding con to call it a 580 :rolleyes: A 485 would have been much more appropriate.
News is it's a new GPU, hence 485 wouldn't really work; maybe if it were still GF100, but the news reports it will be GF110, with either 512 SPs or 576. Sounds like a GF104 with 50% added to it again, which means fewer transistors than a GTX 480 and 20% more SPs. I think it has a lot of potential. However, I'd expect a launch version to have either 528 or 576 SPs, not 512.
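For what it's worth, the arithmetic behind those counts (a quick sketch; the GF104 and GTX 480 shader counts are the known ones, the GF110 figure is pure guesswork from this thread):

```python
# Known configs: full GF104 has 384 SPs, the shipping GTX 480 enables 480 of
# GF100's 512. The "1.5x GF104" GF110 below is speculation, not a spec.
gf104_full_sp = 384
gtx480_sp = 480
gf110_guess_sp = int(gf104_full_sp * 1.5)       # -> 576
print(gf110_guess_sp)                            # 576
print(f"{gf110_guess_sp / gtx480_sp - 1:.0%}")   # 20% more SPs than a GTX 480
```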
Posted on Reply
#69
CDdude55
Crazy 4 TPU!!!
The GTX 580 should be coming out around the time the 6900s come out, so hopefully we see some awesome performance from both sides. :)
Posted on Reply
#70
erocker
*
CDdude55The GTX 580 should be coming out around the time the 6900s come out, so hopefully we see some awesome performance from both sides. :)
I would say early Q1 2011 at the soonest. If they can get it out before Christmas that will be very impressive. Especially since Nvidia isn't usually so secretive about their new upcoming products.
Posted on Reply
#71
wolf
Performance Enthusiast
erockerI would say early Q1 2011 at the soonest. If they can get it out before Christmas that will be very impressive. Especially since Nvidia isn't usually so secretive about their new upcoming products.
If they can make the GPU pin-compatible with the GTX 480 PCB, they might just get it out quicker than we'd expect.
Posted on Reply
#72
cadaveca
My name is Dave
I thought the plan was a pre-Christmas launch? Or was that the GTX 460?
Posted on Reply
#73
qubit
Overclocked quantum bit
cadavecaI thought the plan was a pre-Christmas launch? Or was that the GTX 460?
The GTX 460 has been out for a while. Not sure what you mean there. :confused:
Posted on Reply
#74
Unregistered
Relax, it's no new generation. It's just based on the latest professional GPU with 512 shaders and a 512-bit memory bus; that's why it's 20% faster. A new-generation card should have almost double the performance of the previous gen, just like the 5870 was for the 4870.
#75
Unregistered
CDdude55Woot, TPU's heavy bias comes out of the woodwork again! *fist pumps*

Hopefully they can up the performance when it actually comes out; it's still WAY too early to tell anything.
Sorry, I'm just making fun of that dumb Nvidia CEO launching Fermi with the "wooden" version and claiming it was the "real" Fermi card.

And I think it will be Q1 2011 before this chip is ready, because let's face it, Nvidia can't even release the full Fermi; they need to shrink it to 28 nm to be feasible, unless they reduce the shaders and make it more efficient just like the HD 6870.

BTW, I'm not a fanboy, but Nvidia has crushed my dream of a price war with their late and stupid moves. Thank God there is the HD 6870 to bring the price war back, just like in the HD 48xx vs GTX 2xx era.
Posted on Edit | Reply