
GF100 512 Core Graphics Card Tested Against GeForce GTX 480

What I don't know is what they'll do with all the GF100 512-shader chips that don't make the cut, since there won't be a SKU for them.

Quadro and Tesla cards, which are both based on 448 SPs, AFAIK. They are actually selling quite a few of them.

As for power consumption, the increase would be bigger than the actual performance gain over the 480-core model.

According to empirical data around the net on how Fermi behaves, it is quite the opposite. Power draw depends heavily on clock speed and very little on which units are enabled or disabled.

For example, the GTX 465 consumes almost as much as the GTX 470 (~20 W difference, about 10%) despite having 25% of the core disabled, while the GTX 470 consumes almost 100 W less (a 50% difference) than the GTX 480 even though it only has 7% of its shader cores and 20% of its ROPs disabled. That discrepancy comes from the clock difference, as anyone who has ever OC'd a GTX 470 knows. Conclusion: the power draw difference on the 512 SP part would be negligible.
 
That's a long way of saying it will consume more. :wtf: In any case, we will find out all the details if/when such a video card comes out and is reviewed properly.
 

No, that's a long way of saying it will not consume more, unless it's clocked higher. I was responding to the claim that the power consumption increase will be higher than the performance increase, which is not true.

And if it's based on a revised part, as has been suggested (the revision number was blurred), anything could happen. For example, the power draw of such a part could show the same perf/watt difference compared to the current GTX 480 as the GTX 460 shows against the GTX 465, which would make it consume less than the GTX 470. At this point anything is possible.
 
If the information about it so far is true, it will consume more (final clocks, etc.). We will see (that's if it does actually come out). No need to get upset about it. ;)
 

And that again is a bold statement that contradicts every bit of information we have about GF100. All the info says it will have exactly the same clocks as the 480 SP model. Like I said, there's little difference in power draw between the GTX 465 and 470 despite a 25% difference in enabled silicon: to be precise, a 10% power difference. The 512 SP version will only have 3-4% more silicon enabled, so do the math; the power difference would be 1-2%. And that's assuming everything else on the card is the same. A revised PWM, revised PCB, or revised cooler would each have a far bigger impact on power consumption than the chip itself.

So no, you just cannot affirm that it will consume more based on the fact that it has 4% more silicon enabled, because the slightest change to the card's design would make a much greater difference. And that's assuming these pics aren't of a new revision that includes all the optimizations made in GF104.
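As a sanity check, the scaling logic in that post can be written out in a few lines. All the numbers here come from the post itself (the 25% silicon / 10% power comparison of the GTX 465 vs. GTX 470), not from official specs, and the linear extrapolation is just the poster's reasoning made explicit:

```python
def extra_power_pct(silicon_delta_pct, observed_silicon_pct=25.0,
                    observed_power_pct=10.0):
    """Linearly extrapolate a power-draw delta from an enabled-silicon delta,
    using the GTX 465 vs. GTX 470 comparison from the post as the baseline."""
    return silicon_delta_pct * observed_power_pct / observed_silicon_pct

# 512 SP vs. 480 SP: roughly 4% more silicon enabled at equal clocks.
print(round(extra_power_pct(4.0), 1))  # 1.6 -> the "1-2%" figure in the post
```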
 
Why?

Why oh why?

Nvidia already has the fastest single-GPU card (and the hottest, loudest, etc.). Why would they use GF100 for this? For a 6-7% increase...

Unless it's a partner doing it and not NV?

Although if Southern Islands makes it out this year, perhaps this is NV's attempt to blunt ATI's 5xxx series refresh.

Two words: performance per watt. That's all that counts. No point having a 6-7% faster card if it's technologically backwards in respect of power draw.

Performance is performance; power draw is a different concern. Electricity is not expensive where I live. Even if "GPU A" drew 200 W more than "GPU B", it would only cost me maybe $5 more a month (half the cost of a nice sandwich) gaming with it 4 hours a day, and I don't think the difference is anywhere near 200 W. So it isn't a big deal, although lower power draw would have been nice. That assumes you don't have a poorly ventilated case or a low-wattage PSU to worry about. I don't have any GTX 4xx cards, and I have many ATI 5xxx series cards; it's not that I think Nvidia is doing a bad job. Competition is good for the consumer. We don't have to take sides like they're sports teams. ;)
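The cost claim above is easy to check with a back-of-envelope calculation. The 200 W delta and 4 hours/day come from the post; the electricity rate is an assumed illustrative value, not something the poster stated:

```python
def monthly_cost_usd(extra_watts, hours_per_day, usd_per_kwh=0.12, days=30):
    """Monthly cost of a power-draw difference, at an assumed utility rate."""
    kwh = extra_watts / 1000 * hours_per_day * days
    return kwh * usd_per_kwh

# 200 W extra for 4 h/day is 24 kWh/month: a few dollars at typical rates,
# consistent with the "maybe $5 more a month" estimate in the post.
print(round(monthly_cost_usd(200, 4), 2))  # 2.88 at $0.12/kWh
```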
 

Why are you arguing? I have no need to argue with you. But I can show you results.
source.
Power Consumption Idle
512c.....158w
480c.....141w

Power Consumption Load
512c.....644w
480c.....440w

Now, it would have been silly of me to get into an argument with you when no information was available. However, this is only a first peek; I will await more reviews for confirmation.
 
All the gamers here have lost sight of the fact that the full Fermi architecture plays a huge role in NV's strategic product outlook; it's just in HPC, not gaming. Perhaps going forward NV will be smart and separate the two from the get-go, even though that may be worse from a production standpoint.
 
Now that power draw is plain epic. :eek:

But I don't believe it came from only unlocking the rest of the chip.
 

Based on my calculations, this card breaks the PCI-E specification. If we assume they measured full-system power consumption, and that a normal GF100 GTX 480 draws about 320 W (I think that's right), then taking the 480 SP load figure (440 W) and subtracting the card (320 W) leaves 120 W for the rest of the system. Subtracting that from the 512 SP figure (644 W - 120 W) gives 524 W for the card alone!

HOLY CRAP! The card has two 8-pin power connectors totalling 300 W (don't go on at me about power supplies being able to supply more, because that's irrelevant) plus 75 W from the PCI-E slot, and that's still 149 W over! How can they justify such a huge leap in power consumption over the 480 SP model with such a small improvement in performance?
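That arithmetic can be rechecked in a few lines. The 320 W stock GTX 480 figure is the poster's own assumption, and the system-load numbers are the unverified figures from the linked review:

```python
SYSTEM_480SP = 440   # measured full-system load, 480-core card (from the post)
CARD_480SP   = 320   # assumed draw of a stock GTX 480 (poster's estimate)
SYSTEM_512SP = 644   # measured full-system load, 512-core card (from the post)

rest_of_system = SYSTEM_480SP - CARD_480SP      # 120 W for CPU, board, etc.
card_512sp = SYSTEM_512SP - rest_of_system      # estimated 512 SP card draw
budget = 75 + 2 * 150                           # PCI-E slot + two 8-pin plugs

print(card_512sp, card_512sp - budget)  # 524 W estimated, 149 W over budget
```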

Fail. Epic fail. Uber epic ULTIMATE FAIL! GF100 is FAIL!!!
 

Because some random Chinese website on the Internet is so reliable, right?

Also, the existence of 6+2 pin connectors means PSU manufacturers are already ignoring ATX specifications.
 
And that takes a lot for someone to say when I see a GTX 480, of all things, in his sig rig!!
Myeah... well, after all, this is a tech forum where we discuss anything tech-related intelligently. It's just that sometimes people get overzealously sentimental / rabidly hostile toward some brand or belief, which IMO is pretty pointless.
 
Hmm, weird damn results there. The GPU score for a single OC'd HD 5850 is 18,500 points, and the feature scores are a lot higher. Those cannot by any means be true readings in the screenshots. So why is there something like 9K/10K in the screenshots?
 

Er... because it's running in Extreme mode?

I'd like to see a 5850 get anywhere near 9000 points in Extreme.
 
Do you think the cooler will still have heatpipes coming out the side?

Or something like the Galaxy vapour chamber cooling?

I expect it to have the same cooling solution as the GTX 480, except with a more hair-trigger fan profile.
 

Dude, I don't know what Chinese class you took, and I haven't taken any form of Chinese lesson, but I can read that just fine. No idea what you're on about with the Chinese stuff.

Also, I was talking about the PCI-E specification, not ATX. If you had read my post right the first time, I wouldn't have to explain myself.
 

No wonder. Just look at the load voltage. :laugh:
 

The whole point is to make the technologies converge. HPC requires computational power; gaming (essentially 3D rendering) requires computational power. Converging them makes much more sense than trying to design two different products that do what can potentially be the same thing.
 
As for consumption: did anyone already forget that cards are available that use three 6-pin, or three 6+2-pin, connectors? That comes to 150+150+150+75 = 525 W, so there ya go, problem solved.
 

Yeah, but in the review the sample had two 8-pins, AKA two 6+2-pin connectors.
 
Um, what was it I was going to say? Oh right: "BWAHAHAHAHAHA to the fools who bought the GTX 480 on the assumption it would be the most powerful card of this generation. Sucks to be you. The 512 SP version is better."

OR

"BWAHAHAHAHAHA to the fools who didn't buy the GTX 480 because of the assumption it would be the most power-hungry card of this generation. Sucks to be you. The 512 SP version is hotter!"

:wtf:
 


Both of those statements are wrong, lol (yours and HillBeast's).
 

204 extra watts for 32 extra CUDA cores?! Something isn't right. I wonder if this website took a standard GTX 480 and got their hands on a 512-core BIOS.
 