NVIDIA GeForce 4XX Series Discussion

I don't see how some random guy from nVidia saying "our stuff is great" is important.
It's not like he will ever say "the GT300 is shit", and even if he did, he wouldn't be at the company for long.
I'm not doubting the GT300 will perform, but propaganda is propaganda.
 
"Propaganda" or not, it will be faster than the 5870, of that we can all be sure. Just look at the last few generations and significantly so.

This won't be too hard to achieve, either. If you look at the benchmarks on TPU, you'll see that for all its design advances, the 5870 doesn't top the GTX 285 by all that much and can't certainly reach the duals.

To their credit this time, ATI never claimed to make the fastest card around and nvidia will be looking to blow away the competition.
 
LOL :roll:
The point is you were posting some shit that doesn't really matter and saying it's the most important thing.
Now, if they said the GT300 is out the door tomorrow, that would be important. :slap:

Did you even bother to read my post?
All you are trying to say is GT300 > 5870; nobody is doubting that.
If it's not, nvidia should go fuck themselves for taking so long.
 
Fuad is expecting $399 for the GT300 and $649 for the dual-GPU one. Since he has been nvidia's voice for some time, I find this interesting.

http://www.fudzilla.com/content/view/15784/1/

The source says:

We still don't know the price, but we would not be surprised if it launches as a $399 part for a single and probably $649 for dual. Of course, this is more of an educated guess rather than a fact that we can confirm.

With that kind of statement, I don't really think it's more than a guess on his part. ;)
 
The point is you were posting some shit that doesn't really matter and saying it's the most important thing. [...]

I don't post "shit". Thanks. :slap:
 
GT300 will pwn ATI's 5800 series... but knowing Nvidia, the price of this card will be sky high LOL! :laugh:
 
nvidia fanboys, shut the hell up. We all know the GT300 will be faster than the HD5870; who really cares? It's all about price/performance for most people!
 
Since when was this some kind of name-calling contest? In all honesty, I think everyone on this page needs to calm down, set their "zomfg I have nvidia" and "zomfg I have ati" aside, and talk about specs. SPECS and FACTS are what this thread is about: discussing, intelligently, what we think the details behind GT300 will be. This isn't a performance competition thread; that will come a week or so after release day, when the people with money on this forum buy the cards. Until then, let's stick to NEWS, kthnx.
 
I know, and I feel the same way, but even mods and admins have been taking sides rather than staying objective, I'm afraid. It's not a good trend to see, TBH. I hope the forum population as a whole stops being so emotional. It's like TPU is having a period. :(
 
Quoting yourself suddenly makes your argument true?
Maybe you should get some of your facts right before you post a statement.
The GTX 285 in your post is also 55 nm; that doesn't mean it can clock at the same frequency as the 4890.
Also, the 4870 in your post is 512 MB while the GTX 280 and 285 are both 1024 MB, and the frame buffer makes a significant difference @ 2560x1600.

When a company creates a SKU, it has to consider the cost, heat output, and power consumption of the product.
A GT200b clocked at the 4890's frequency (850 MHz) would end up with poor heat output and power efficiency.
The higher transistor count and architecture yield better performance per clock,
but at the same time they also increase power consumption and heat output significantly.
As a result, the higher transistor count also limits how high the chip can clock.

Maybe you should compare the RV790 and the GT200b, which are both 55 nm:
http://tpucdn.com/reviews/Powercolor/HD_4890/images/perfrel_1920.gif

Yeah, sure, the only thing that matters at 2560x1600 is memory. Why don't you check your facts first? The 1 GB, 200 MHz faster HD4890 is not doing much better either: the HD4890 has 1.2x the clocks of the HD4870 and is about the same amount faster. On the other hand, the 896 MB GTX275 is 5% faster than the HD4890, so that's a big :nutkick: to your argument.

And chips have been ported to lower nodes since the beginning, and moving them down has never changed power consumption much (the 6600 GT or X1950 Pro, for example). You have to design your chip for that node from the start to obtain its benefits. Also, GT200 had a wider memory bus and GDDR3, both of which make the card consume more. RV770 vs. GT200 was like an army with M4s fighting an army with muskets. Nvidia has already admitted that staying on 65 nm was a big mistake. GT300 will have the same weapons this generation, so we'll see. The only truth is that you can't extrapolate what happened last generation to this new one; conditions are not the same at all.

EDIT: and regarding power consumption:

http://img.techpowerup.org/091002/perfwatt_1920.gif
http://img.techpowerup.org/091002/perfwatt_2560.gif

Too bad the 65 nm GTX280 is above the HD4890, HD4870, and even the HD4850. The GTX285 is simply in another league. And that's with a 512-bit PCB and GDDR3. We know nothing about GT300, but if they are making a dual-GPU card, it won't consume more than GT200 does.
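As an aside, the clock-versus-performance comparison that argument rests on is easy to sanity-check. A minimal sketch in Python, assuming the stock core clocks of 750 MHz (HD4870) and 850 MHz (HD4890); the performance ratio is a hypothetical placeholder to be read off the review charts, not a measurement:

```python
# Minimal sketch: if a card's performance gain tracks its clock gain,
# the frame buffer is unlikely to be the limiter at that setting.
# Clocks are stock values; perf_ratio is a hypothetical placeholder.

def clock_scaling_share(clock_ratio: float, perf_ratio: float) -> float:
    """Fraction of the clock increase that shows up as extra performance."""
    return (perf_ratio - 1.0) / (clock_ratio - 1.0)

hd4870_mhz = 750.0                      # stock core clock
hd4890_mhz = 850.0                      # stock core clock
clock_ratio = hd4890_mhz / hd4870_mhz   # ~1.13x

perf_ratio = 1.12                       # hypothetical observed gain
print(f"clock gain: {clock_ratio - 1:.0%}, "
      f"realized: {clock_scaling_share(clock_ratio, perf_ratio):.0%} of it")
```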
 
Since when was this some kind of name-calling contest? [...] Until then, let's stick to NEWS, kthnx.

Very well said, thank you. :)

Indeed, it's only a graphics card for playing games, fer chrissake, not a murder case or some other controversial topic...
 
Yeah, sure, the only thing that matters at 2560x1600 is memory. [...] On the other hand, the 896 MB GTX275 is 5% faster than the HD4890, so that's a big :nutkick: to your argument.

Most importantly, I was referring to the fact that ATI has a better performance/transistor ratio, so that's the :nutkick: for you.
Did I ever mention that the RV790 is more power efficient than the GT200b? Nope.

And guess what?
The amount of frame buffer matters a great deal, up to the point where it becomes overkill.
You chose the specific resolution (2560x1600) where a 512 MB frame buffer is completely insufficient and pitted it against a GT200 with double the frame buffer (1024 MB).

Clock speed doesn't really mean shit between different architectures; given equal conditions, the GTX 275 has a hard time closing the 200 MHz gap when both are OCed.
The GT200/b has a really hard time getting close to the 850 MHz the RV790 runs at stock, so your argument proved nothing.

All you did was try to mislead everyone here.

The 9800GTX 512MB and the GTS250 1024MB yield the same performance @ 1280x1024:
perfrel_1280.gif

But when you get to 2560x1600:
perfrel_2560.gif
 

First of all, I was not pitting them together deliberately, and second, I extended my argument with the inclusion of the HD4890; the result is the same, and you decided to forget about that little fact. How convenient: the HD4890 is not doing any better than if it had 512 MB, but let's just forget about that and say the point is flawed.

Third, the GTX285 doesn't need to run at 850 MHz to be 40% faster than the HD4890, so performance per transistor is about the same.

And finally, you are just trolling. My original claim was that we don't know the clocks of Fermi. If it's 600 MHz like GT200, then it's twice the GT200; but if it runs at, say, 750 MHz (still much less than 850 MHz), it will smoke the HD5870 big time.

In fact, I chose 2560x1600 precisely so that my claim would NOT be flawed. ROPs make a card faster at higher resolutions, and GT200 has twice as many. Where do all those extra transistors come from??? A lot come from those extra ROPs, so we have to find a setting where both chips are using all their power if we want to compare performance per transistor.

Anyway, I'm done with the topic, so don't bother replying.
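For what it's worth, that performance-per-transistor claim can be sanity-checked in a couple of lines. A back-of-the-envelope sketch, using the widely published transistor counts and treating the 40% lead as the post's own claim rather than a fresh benchmark:

```python
# Back-of-the-envelope performance-per-transistor comparison.
# Transistor counts are the widely published die figures; the 1.40x
# performance lead is the post's own claim, used as-is.

gt200b_transistors = 1_400_000_000   # GTX 285 (GT200b)
rv790_transistors = 959_000_000      # HD 4890 (RV790)

transistor_ratio = gt200b_transistors / rv790_transistors   # ~1.46x
claimed_perf_ratio = 1.40                                    # "40% faster"

perf_per_transistor = claimed_perf_ratio / transistor_ratio
print(f"GT200b delivers {perf_per_transistor:.2f}x the performance "
      f"per transistor of RV790")   # ~0.96x, i.e. roughly parity
```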
 
Pointing out that your argument was wrong is suddenly trolling?
 
Please tell me why you think you can compare clock speeds across architectures at all?
 
It is in the way you are doing it. You give no facts and you say I'm flawed, yet you want to compare cards at relaxed settings so the weaker card has the advantage. What's next? Comparing a quad-core CPU with a dual-core CPU in a single-threaded application?
 
Weaker cards have the advantage, huh?
You should read reviews of cards with equal, or at least close, frame buffer amounts.
The fact is you did pit a 1024 MB card against a 512 MB card at 2560x1600, and you say the GT200 is faster solely because it is?
And I am not talking about a 4650 1GB vs. a 4770 512MB here.

How do you explain the GTS250 1024MB being 33% faster than the 9800GTX 512MB @ 2560x1600?
 

Like I said: GTX275 vs. HD4890. The card with 15% less memory performs 5% faster. So much for them being limited by frame buffer. You should check which games Wizzard tests; more than half of them are old and don't use a lot of memory.

Anyway, as I have now said to you three times, including this one, my original claim was that it depends.

- Fewer transistors are better for performance per transistor? It depends. I showed the examples.
- Fewer transistors and higher clocks mean less heat and power consumption? It depends. Higher clocks increase heat and consumption. In fact, GT200 is better at performance per watt than RV770.

So those are not rigid laws; they depend a lot on how the chip is implemented. And without knowing shit about GT300's clocks, we don't know whether they will hold. Performance per transistor is going to be very different if it runs at 600 MHz than if it runs at 800 MHz, and the fact is we don't know which clocks they can achieve, because it's a completely new architecture on a new fab process.
 
The effect of the amount of frame buffer is evident in Crysis, where you can see the 4870 512MB drop off @ 2560x1600 and the GTS250 1024MB actually surpass the GTX260 896MB:
http://tpucdn.com/reviews/MSI/GTX_275_N275GTX_Lightning/images/crysis_1920_1200.gif
http://tpucdn.com/reviews/MSI/GTX_275_N275GTX_Lightning/images/crysis_2560_1600.gif
 
So those are not rigid laws; they depend a lot on how the chip is implemented. [...] we don't know which clocks they can achieve, because it's a completely new architecture on a new fab process.

Yes, it all depends, and I am eager to see what nVidia has to offer.
 

Yeah, I never said it doesn't matter in some games, especially Crysis. ;)

But on average, across all games, it doesn't matter all that much. And to be more specific, the HD4890 has 1024 MB and runs 20% faster clocks and achieved 20%... and I'm going to shut up now regarding frame buffer, because when I compared the HD4870 and HD4890 I got the clocks-to-framerate relation wrong, and my claims after that were based on those wrong numbers, so... :o

But my claim that those laws are not always true still holds. I think we can agree on that.
 
OK, the memory bus width decreased, but bandwidth increased. Normally, when you talk about specs, you have to take the practical ones: bandwidth is practical, bus width is not. And more importantly, bandwidth is much higher in GT300 than in RV870, so it doesn't matter whether RV870 is bottlenecked, which I think it's not.

GT200 -> 512 bit / 8 x 2500 = 160 GB/s
GT300 -> 384 bit / 8 x 4800 = 230 GB/s
RV770 -> 256 bit / 8 x 3600 = 115 GB/s
RV790 -> 256 bit / 8 x 3900 = 124 GB/s
RV870 -> 256 bit / 8 x 4800 = 153 GB/s

As you can see, even the GTX285 had higher memory bandwidth, but that doesn't mean the GTX was bottlenecked, not at all.
Also, the jump from RV770 to RV870 is 153/115 = 1.33, or 33% more bandwidth; from RV790 to RV870 it is 153/124 = 1.23, or 23%.
Meanwhile the jump for Nvidia is 230/160 = 1.44, or a 44% improvement. Nothing points to GT300 being memory bottlenecked, far from it.
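For anyone who wants to reproduce those figures, the formula is just the bus width in bits divided by 8 (to get bytes) times the effective memory data rate. A quick check in Python, using the data rates quoted above (the GT300 numbers are still rumors at this point):

```python
# Peak memory bandwidth = (bus width in bits / 8) x effective data rate.
# Data rates are the MT/s figures quoted in the post; the GT300 entry is
# rumored, not confirmed. The post rounds the results down.

def bandwidth_gb_s(bus_width_bits: int, data_rate_mts: int) -> float:
    """Peak bandwidth in decimal GB/s, matching the post's convention."""
    return bus_width_bits / 8 * data_rate_mts / 1000

cards = {
    "GT200 (GTX 285)": (512, 2500),
    "GT300 (rumored)": (384, 4800),
    "RV770 (HD 4870)": (256, 3600),
    "RV790 (HD 4890)": (256, 3900),
    "RV870 (HD 5870)": (256, 4800),
}
for name, (bus, rate) in cards.items():
    print(f"{name}: {bandwidth_gb_s(bus, rate):.1f} GB/s")
```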



Yeah, I knew about that decrease, but it's important to note that GT200 never reached anything close to that number, and I mean 24576, while GT300 can. Peak <insert spec> means very little unless you are talking about the exact same architecture. It's like RV770 having 1.2 TFLOPS while GT200 only had 622 GFLOPS, or 933 with dual issue (which almost never happens), yet GT200 has the ability to use them much better and is faster.



Not true: http://forums.techpowerup.com/showpost.php?p=1575651&postcount=188

We don't know the clocks. The limitations they encountered in clocking GT200 higher need not apply now. GT200 was 65 nm and RV770 was 55 nm; now both are 40 nm, so they can achieve similar clocks. They might not, but the possibility is higher than in the previous generation.

The truth is that at least since the DX10 cards, the performance/transistor ratio has been fairly constant across almost every chip. If you add G92 and RV670 to the equations I made in the link, it becomes even more apparent: G92, with 756 million transistors, more than competes with the low end of RV770, and the 667-million-transistor RV670 does the same with G92.



The X2 also has two schedulers, one per chip, so that could be the problem, as I said.

The only way a memory bottleneck can be proven for RV870 is to take the HD5870 and downclock the chip while leaving the memory as-is. If you downclock and performance is maintained, the card was bottlenecked; if it performs worse, it was not.

Anyway, specs don't tell the whole story. My latest assumptions are not based on specs (only); they are based on all the aspects of the chip covered in the white paper and in architecture previews like the one at Real World Technologies. GT300 has improved in almost every practical aspect, while RV870 is basically the same chip with twice the units and DX11 support. Just because the latter has not scaled well doesn't mean the former won't. For instance, Nvidia has scaled much better in the past: they went from 128 SPs to 240 SPs, 1.875x the amount, while AMD went from 320 SPs to 800 SPs, 2.5x. Neither reached that amount of improvement, but Nvidia got much closer. Looked at that way, it's no surprise that RV870 didn't scale all that well; RV770 didn't either, after all. And now Nvidia is doing a 2.15x increase, so the chances are good that they will do much better. The ratios are worked out below.
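A quick tally of those unit-scaling ratios; the SP counts are the published shader counts for each chip, with GT300's 512 taken from the Fermi white paper:

```python
# Shader-count scaling per generation. Counts are published SP numbers;
# GT300's 512 comes from the Fermi white paper.

transitions = {
    "G80   -> GT200 (NVIDIA)": (128, 240),
    "RV670 -> RV770 (AMD)":    (320, 800),
    "RV770 -> RV870 (AMD)":    (800, 1600),
    "GT200 -> GT300 (NVIDIA)": (240, 512),
}
for name, (old, new) in transitions.items():
    print(f"{name}: {new / old:.2f}x the units")
# 1.88x, 2.50x, 2.00x, 2.13x -- close to the ~2.15x quoted above.
```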

Regarding the memory bottleneck on the 5870: whenever I see two 4890s in CrossFire beating a 5870, I blame it on the 5870's reduced total memory bandwidth compared to two 4890s combined.

A 5870 theoretically should be at least equal to dual 4890s in specifications, except for memory bandwidth. It should NEVER be slower than dual 4890s if it were not bottlenecked by memory, simply because CrossFire scaling is not 100% efficient.

Once again, if I see 2x 4890 beating a 5870 by, say, 40% at a high resolution, I simply point my finger at the 5870's memory bandwidth. It is as logical as it gets. A rough sketch of the reasoning follows.
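In code form, with every number a hypothetical stand-in (the CrossFire scaling factor and both frame rates are illustrative assumptions, not measurements):

```python
# Sketch of the CrossFire-vs-HD5870 logic. All numbers are hypothetical
# stand-ins; only the reasoning comes from the post above.

def crossfire_fps(single_gpu_fps: float, scaling: float) -> float:
    """Approximate 2-way CrossFire frame rate for a given scaling factor."""
    return single_gpu_fps * (1.0 + scaling)

hd4890_fps = 30.0   # assumed single-card result at 2560x1600
cf_scaling = 0.80   # assumed 80% benefit from the second GPU
cf_fps = crossfire_fps(hd4890_fps, cf_scaling)   # 54 fps

# On paper the HD5870 doubles RV790's units, so a result well below the
# CrossFire pair at high resolution hints at a bandwidth ceiling.
hd5870_fps = 40.0   # assumed observed result
print(f"2x HD4890: {cf_fps:.0f} fps, HD5870: {hd5870_fps:.0f} fps "
      f"(CF leads by {cf_fps / hd5870_fps - 1.0:.0%})")
```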

EDIT: Oops.. wrong thread.. since the post is about a 5870.. should I move it over to the 5870 thread?
 
The pictured GT300 cards are FAKE!!!

http://www.semiaccurate.com/2009/10/01/nvidia-fakes-fermi-boards-gtc/

and after denying it to Charlie, Nvidia finally confirmed it was fake:

http://www.fudzilla.com/content/view/15798/1/

:slap:

http://forums.techpowerup.com/showthread.php?t=105052

NVIDIA 'Fermi', Tesla Board Pictured in Greater Detail, Non-Functional Dummy Unveiled

On close inspection of the PCB, it doesn't look like a working sample. Components that should have pins protruding and soldered on the other side don't have them, and the PCB seems to end abruptly. Perhaps it's only a dummy made for display at GTC, to give an indication of how the card will end up looking. In other words, it doesn't look like NVIDIA has a working prototype/sample of the card they intended to display the other day.
 