Thursday, October 28th 2010
NVIDIA GeForce GTX 580 Reference Design Graphics Card Pictured
Here are the first pictures of what is touted to be NVIDIA's GeForce GTX 580 reference design graphics card, courtesy of sections of the Chinese media. Some interesting inferences can be drawn just from the looks of the card. To begin with, the cooler bears an uncanny resemblance to one of the earliest design iterations of the GeForce GTX 480 (pictured here and here). In its final iteration, NVIDIA gave the GTX 480 a more massive cooler, perhaps to keep up with its finalized clock speeds. If the design of the GTX 580 cooler is anything to go by, it means either that NVIDIA refined the GF100 architecture a great deal in the GF110 (on which the GTX 580 is based), increasing performance per Watt; or that, since the GTX 580 is still in development, its final version could look different. The GeForce GTX 580 is being designed as a counter to AMD's Radeon HD 6900 series single-GPU graphics cards based on the new Cayman graphics core, which is slated for release in late November. It is expected to be 20% faster than the GTX 480.
Source:
PCinLife
213 Comments on NVIDIA GeForce GTX 580 Reference Design Graphics Card Pictured
Generally, ATI comes out on top.
ATI just needs some dev support :laugh:
I chose not to buy a 5870 and went for two 5850s because I wanted super performance; a 5870 wasn't enough. I spent £400 on a graphics solution - don't think that makes it any less valid than buying one single card. Your logic is irrelevant. If I'd bought 2x 5770s then yeah, big difference. But I bought the 5850s in Nov '09, when they were the second-fastest single-chip solutions. I couldn't buy a GF100 because, well, someone forgot how to make them on time.
You seem to miss that I'm not slagging off GF100. I'm simply saying it should have performed better, and the GF110 is exactly that fix. It's also where NV want to go with Kepler and Maxwell: they have stated they want to produce 'x' times the performance with a marked improvement in efficiency.
And on power - going by W1zz's readings:
www.techpowerup.com/reviews/HIS/Radeon_HD_6870/27.html
One 5850 = 150 W (even doubled for two, that's 300 W).
One 480 = 320 W.
That's a comparison of maximum power draw.
Now I'm not arguing that point any more - take it up with W1zz.
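Just to spell the arithmetic out, here's a quick sketch using the peak figures cited above (the numbers are W1zz's published readings, nothing measured here):

```python
# Peak power figures as cited above from W1zzard's HD 6870 review (techpowerup.com).
HD5850_PEAK_W = 150   # one HD 5850 at maximum power
GTX480_PEAK_W = 320   # one GTX 480 at maximum power

crossfire_peak = 2 * HD5850_PEAK_W   # naive doubling for two HD 5850s = 300 W
print(f"2x HD 5850 (doubled): {crossfire_peak} W")
print(f"1x GTX 480:           {GTX480_PEAK_W} W")
print(f"Difference:           {GTX480_PEAK_W - crossfire_peak} W against the GTX 480")
```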
As for i7's consuming more power than phenom x6:
www.anandtech.com/show/3674/amds-sixcore-phenom-ii-x6-1090t-1055t-reviewed/10
Yeah, my 920 consumes 2 Watts more in this test, yet performs:
www.anandtech.com/show/3674/amds-sixcore-phenom-ii-x6-1090t-1055t-reviewed/9
3 wins for i7 920, 2 draws and one loss.
And we all know clock speeds mean jack between AMD and Intel.
I spent £1500 on my self-built rig, so I get to call myself an enthusiast. I'm not a water-cooler or into shiny neon. My money goes on quality for noise/performance optimisation, as I watch TV through my PC. For me, performance must come with acceptable noise levels. I had to use RivaTuner on my GTX 295 to lower idle fan speeds; I don't have to on my CrossFire setup. People have different priorities - accept it. Enthusiasts don't all care about pure power at the expense of everything else.
So, if you're going to quote me again, quote this:
GF100 isn't a failure - it should have performed better, and even JHH said as much in his interview. The GTX 580 is meant to fix that. That's what I've been posting all along.
Except I do care about heat, because I usually tend to pair up cards ;) Exactly what I thought when I heard about the power consumption of the GTX 480.
I think I would've understood your point better if ATI was left out of the original comment altogether.
Meh. Whatever. It's over and done. Point taken. Meh. I water cool. Total non-issue for me.
itbbs.pconline.com.cn/diy/12074471.html
So they should have revised the fabric when they received A1 and gone directly to a B1 revision, instead of doing metal respins that fixed little or nothing? That would have made them about 4 months late instead of 6, and with a much better product. But it's very easy to make that judgement now that we know where the error was. At the time, all they knew was that the fabric - which is in fact nothing but a metal layer - didn't work, so a metal respin was what made most sense: two months late, with hopes of fixing it. A metal respin takes less than two months, so it simply made sense to try to fix the mistake that way. The error was deeper than they thought, and the rest is history left for posterity. Should they have gone to B1 after A2 fixed nothing, ending up 8 months late but with a card that could potentially destroy Cypress, like the alleged GF110 apparently will? Again, maybe they should have, but that was probably too much pressure on partners, and it's again a decision that is too easy to make now that we have all the answers...
EDIT: TBH, if GF110 turns out to be just GF100 done right, and that seems most likely, I'm going to be disappointed, regardless of how it does against Cayman. Even though I think it's going to be very competitive with Cayman, I'm 99% sure they could have made a much better card if they had used the 48-SP SMs instead of the 32-SP SMs of GF100. In fact, IMO, if they wanted to release another 512 SP chip, they could have taken GF104 and added another 16 SP per SM. The architecture must be able to handle it: in GF104, 2 dispatchers feed 3 SIMDs with no performance penalty at all, so each dispatcher can surely feed 2 SIMDs, giving 2x2 = 4 SIMDs = 64 SP per SM. And since GF104 has 2x the TMU/SFU/load-store of GF100 per SM, they would end up with a card with almost the same specs as GF100 but with a sub-400 mm^2 die (instead of the rumoured 460-480 mm^2 of GF110). The only drawback would be tessellation, because with half the clusters it would have half the tessellation capability. Still, considering that even the GTX 460 (or GTS 450 if you stretch it) annihilates Cypress and Barts when it comes to tessellation, it would be a good compromise.
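Just to spell out that shader math, here's a rough back-of-the-envelope sketch (the 64-SP SM and the 8-cluster layout are purely the hypothetical described above, not a real part):

```python
# Back-of-the-envelope shader math for the hypothetical "widened GF104" above.
# Real parts: GF100 SM = 2 x 16-wide SIMDs (32 SP), 16 SMs; GF104 SM = 3 x 16-wide SIMDs (48 SP), 8 SMs.
# Hypothetical part: a GF104-style SM with a 4th SIMD (2 dispatchers x 2 SIMDs each = 64 SP).

def total_sp(sm_count, simds_per_sm, simd_width=16):
    """Total shader processors for a given SM layout."""
    return sm_count * simds_per_sm * simd_width

print(total_sp(16, 2))  # GF100:        512 SP
print(total_sp(8, 3))   # GF104:        384 SP
print(total_sp(8, 4))   # hypothetical: 512 SP from half the clusters
# Trade-off: 8 clusters instead of 16 means half the tessellation/geometry hardware.
```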
Supposed GTX 580 price ($599) and availability info coming from VR-Zone: vr-zone.com/articles/nvidia-geforce-gtx-580-priced-at-us-599-to-be-available-november-9th-/10222.html
That price is actually a bit too far out of reach for me, unfortunately (especially for a single-GPU card).
I hope it's not that much, and if it is, I hope the 6900s are cheaper.
Heck, if AMD does what they did with the 5870 and goes for roughly a ~$399 price point, NV may have screwed themselves; if they wait, they will know what they can price them at to sell as many as possible.
The HD 5870 is at 360 USD; a speculated 40% increase for the HD 6970 would take it to 504 USD. BUT that won't be the case, because people would prefer to buy 2x HD 6870 for 480 USD and get 20% more performance.
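Spelling that price/performance argument out (all prices and the 40%/20% figures are the speculation above, nothing confirmed):

```python
# Speculative prices/uplifts from the comment above, not confirmed figures.
hd5870_price    = 360
hd6970_price    = hd5870_price * 1.40   # speculated 40% uplift -> 504 USD
hd6870_cf_price = 2 * 240               # two HD 6870s -> 480 USD

# Performance normalised to one HD 6970 = 1.0; CrossFire claimed 20% faster.
usd_per_perf_6970    = hd6970_price / 1.0
usd_per_perf_6870_cf = hd6870_cf_price / 1.2

print(round(usd_per_perf_6970), round(usd_per_perf_6870_cf))  # 504 vs 400 USD per unit of performance
```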
Then I guess it will be at 450 USD.
'I don't care about price, power, heat or noise so long as it's the fastest!'
NVIDIA knows that its fanboys (not its regular users) only care about having the FASTEST card, not about any other category.
You have to remember, if you're buying a top-end card, heat, power and noise don't matter to a lot of people; if you're paying a top-tier price, you should know about those and have the precautions in place to deal with them. In the end it's quite literally about performance: if this card is a monster, you can bet people/enthusiasts with the cash will buy it no matter the heat, power, noise, etc., and I honestly don't see a problem with that.