NVIDIA GeForce GF100 Architecture

i'm sorry .. i didn't mean to imply it is worth a ban [here], was just making a joke that most of the hardocp forum visitors would understand :)

Ok, just a misunderstanding, thank you! I didn't understand the [H] reference there; it's actually quite funny and I feel a lot better now. I've just learned something today myself. :toast:

nvidia's engineers say their implementation is awesome

And I'm sure it is - they make some excellent products and I've got several of them. :D Seriously though, we're both technical guys and I'd love to know what an engineer (who frankly knows way more about this stuff than I do) from nvidia, ati, intel, doesn't matter, has to say about the design principle I explained.
 
Yields, price and what you will get?

If Nv can only get 104 chips per wafer vs 160 for AMD, how much will these cards cost? I'm not even speculating on which chips will be ready to use, harvested, etc. The other issue I touched on was pre-release cards vs production cards. Something Charlie brought up (to be taken with a grain of salt, but worth mentioning nonetheless) is whether we will see pre-release cards that are fully unlocked at 512 shaders while production cards only offer 448 shaders.
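For anyone who wants to sanity-check those per-wafer numbers, here's a back-of-the-envelope sketch using the classic dies-per-wafer approximation. The wafer price, the 40% yield, and the die areas (~530 mm² for GF100, ~334 mm² for Cypress) are assumptions for illustration only:

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Classic approximation: gross wafer area over die area, minus an edge-loss term."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(wafer_cost: float, candidates: int, yield_rate: float) -> float:
    """Spread the wafer cost over only the dies that actually work."""
    return wafer_cost / (candidates * yield_rate)

# Assumed inputs: $5000 per 300 mm wafer and a made-up 40% yield for both
# chips -- real wafer prices and yields are not public.
for name, area_mm2 in [("GF100", 530.0), ("Cypress", 334.0)]:
    n = dies_per_wafer(300.0, area_mm2)
    cost = cost_per_good_die(5000.0, n, 0.40)
    print(f"{name}: ~{n} candidate dies per wafer, ~${cost:.0f} per good die")
```

With these made-up inputs the geometry lands right on ~104 candidates for a GF100-sized die; the big unknowns are the wafer price and the yield term, not the geometry.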
 
If Nv can only get 104 chips per wafer vs 160 for AMD, how much will these cards cost? I'm not even speculating on which chips will be ready to use, harvested, etc.

And with Nvidia selling 2x the number of cards for so many years and making chips that were twice as big, resulting in 4x the number of wafers ordered, how much better are the volume deals Nvidia obtains? We don't know how much these chips really cost Nvidia, so speculating on the retail price on a simple die-area basis is pointless.
 
And with Nvidia selling 2x the number of cards for so many years and making chips that were twice as big, resulting in 4x the number of wafers ordered, how much better are the volume deals Nvidia obtains? We don't know how much these chips really cost Nvidia, so speculating on the retail price on a simple die-area basis is pointless.

What they did last year is not a reflection of this year's arch. This is why I asked the question. A question is by no means speculation; it's looking for information. If the aforementioned information is true (which is why I started with the word "if"), what will the final product cost? This is the question I posted. Whether the numbers are correct or incorrect, we understand that a larger die will yield fewer chips per wafer, so there's no point worrying about that. The ultimate question is consumer cost.
 
Price, my internet friends; price-to-performance ratio is all I personally care about, but that little presentation was pretty impressive.
 
What they did last year is not a reflection of this year's arch. This is why I asked the question. A question is by no means speculation; it's looking for information. If the aforementioned information is true (which is why I started with the word "if"), what will the final product cost? This is the question I posted.

That's what I'm saying: we don't know. We don't know how many wafers Nvidia has ordered for the entire year. They might take a risk and try to make 100 million Fermi cards by the end of the year, or they could be very conservative and order 10 million, in which case the price will change vastly. Not only because of volume discounts, but also because in both cases they are going to aim for comparable profits (the expectations from investors) +/- 50%; they would just make less per card in the first case and that's it. IMO they are going to try something more along the lines of high sales volume, and hence the cards will retail for no more than $500 for the top card and $350 for the second one, but that's just my opinion.
 
That's not innovation at all. That's a trend, and it's been happening since, like, forever.
GF100 has 1 billion more transistors (almost 1/3 more) than Cypress, yet it is very unlikely to beat Cypress by 33% in benchmarks. That is innovation on AMD's part, or a lack of innovation on NVIDIA's.
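To put the perf-per-transistor argument in numbers, here's a minimal sketch. The transistor counts are the commonly cited figures (~3.0 billion for GF100, ~2.15 billion for Cypress); the 30% relative-performance number is purely a placeholder assumption:

```python
# Perf per transistor under assumed numbers; only the transistor counts
# are the commonly cited figures, the performance ratio is made up.
transistors_b = {"GF100": 3.0, "Cypress": 2.15}   # billions of transistors
relative_perf = {"GF100": 1.30, "Cypress": 1.00}  # assume GF100 is 30% faster

for gpu, count in transistors_b.items():
    ppt = relative_perf[gpu] / count
    print(f"{gpu}: {ppt:.3f} relative performance per billion transistors")

# Under these assumptions GF100 spends more transistors for proportionally
# less performance, i.e. a lower perf/transistor ratio -- the point made above.
```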
 
nvidia's engineers say their implementation is awesome

Wonder if the NVIDIA engineer who said that shortly before the release of the FX series is still there? :D
 
And with Nvidia selling 2x the number of cards for so many years and making chips that were twice as big, resulting in 4x the number of wafers ordered, how much better are the volume deals Nvidia obtains? We don't know how much these chips really cost Nvidia, so speculating on the retail price on a simple die-area basis is pointless.

I think this is an excellent point. Just because some wafers have imperfections doesn't mean those chips are bad either. Those are made into lesser video cards, which is exactly how Nvidia has been doing it already. Either way, it will be interesting to see what these are going to be priced at. On one side you have Nvidia, which is a bit late with their new GPU and will perhaps take a bit of a profit loss to sell them. On the other hand there are a lot of Nvidia die-hards (I don't like "fanboy") and uninformed folks who only know of Nvidia and will pay regardless. Let's see some competitive pricing; whichever company it is, I always love to see one stick it in the eye of the other, with the consumer coming out on top.
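As a hedged illustration of the harvesting point, here's a toy defect model: defects land randomly (Poisson) across a roughly GF100-sized die, the defect density is made up, and it optimistically assumes one or two defects always land in shader clusters that can simply be fused off:

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    """Probability of exactly k random defects on one die."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

die_area_cm2 = 5.3    # ~530 mm^2, roughly GF100-sized
defect_density = 0.4  # defects per cm^2 -- made-up number
lam = die_area_cm2 * defect_density  # expected defects per die

p_full = poisson_pmf(0, lam)                             # e.g. full 512-shader part
p_harvested = poisson_pmf(1, lam) + poisson_pmf(2, lam)  # e.g. 448-shader part
print(f"expected defects per die: {lam:.2f}")
print(f"P(full part):      {p_full:.1%}")
print(f"P(harvested part): {p_harvested:.1%}")
print(f"P(scrap):          {1 - p_full - p_harvested:.1%}")
```

Even with only ~12% of dies fully intact in this toy model, around half the wafer can still ship as cut-down parts, which is exactly why harvesting matters so much for big dies.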
 
I think this is an excellent point. Just because some wafers have imperfections doesn't mean those chips are bad either. Those are made into lesser video cards, which is exactly how Nvidia has been doing it already. Either way, it will be interesting to see what these are going to be priced at. On one side you have Nvidia, which is a bit late with their new GPU and will perhaps take a bit of a profit loss to sell them. On the other hand there are a lot of Nvidia die-hards (I don't like "fanboy") and uninformed folks who only know of Nvidia and will pay regardless. Let's see some competitive pricing; whichever company it is, I always love to see one stick it in the eye of the other, with the consumer coming out on top.
Yeah, exactly!

And for me this has been an awesome ride with ATI, even as Nvidia has stomped them for years, meaning I have enjoyed some great cards for the money...

The one and only REASON I stick with ATI is easily pointed out HERE:
http://www.tomshardware.com/reviews/burnout-paradise-performance,2289-2.html

I simply SEE ATI's colors BETTER than the USUALLY better Nvidia card.

I know it's a personal choice... I pick the better color scheme (for me) over performance.

And for years now Nvidia has simply outperformed ATI in select benchmarks, and I (and my wallet) truly HOPE this will continue...

EDIT: I guess this is just me saying

GO! Nvidia!!
 
GF100 has 1 billion more transistors (almost 1/3 more) than Cypress, yet it is very unlikely to beat Cypress by 33% in benchmarks. That is innovation on AMD's part, or a lack of innovation on NVIDIA's.

Unlikely? That's not what they are saying. I know about marketing and all, but I don't think they are lying about their internal numbers this time (a lot of people who have seen those benchmarks are saying the same); inflating them, yes, lying, no, as in saying 60-80% faster when it's actually 30-40% faster. Anyway, I'm still 95% convinced that the GF100 will be faster than 2x GTX285 and hence the HD5970.

EDIT: The reason I think they are not lying and are actually telling the truth is that lying about performance in this situation would just hurt them. They are very late, and people have already made up their minds or are otherwise waiting; performance numbers are not going to change any purchase decision right now. Whoever decided to wait will continue waiting, and whoever decided to buy an HD5xxx is waiting for one to be available. Lying would just hurt them, because it would anger the ones who are waiting once the card is released and they would not buy it, plus Nvidia would make their reputation even worse than it is. Nothing will help them now, and that's why they have not disclosed any hard numbers.

Anyway, innovation and performance-per-die have nothing to do with each other. Fermi has a lot of new features that potentially increase performance and make it a much better GPGPU; that is innovation. Cypress is just a mirrored RV770.
 
That's what I'm saying: we don't know. We don't know how many wafers Nvidia has ordered for the entire year. They might take a risk and try to make 100 million Fermi cards by the end of the year, or they could be very conservative and order 10 million, in which case the price will change vastly. Not only because of volume discounts, but also because in both cases they are going to aim for comparable profits (the expectations from investors) +/- 50%; they would just make less per card in the first case and that's it. IMO they are going to try something more along the lines of high sales volume, and hence the cards will retail for no more than $500 for the top card and $350 for the second one, but that's just my opinion.
And yet we come full circle, as you are the one speculating after accusing me of speculating when I asked a question :slap:. So in the end you really don't know, yet you've argued about not knowing. My question was really geared toward anyone out there who may have come across some pricing information, as the video card is supposed to be released in less than 2 months.
 
If you read my posts again, you'll see my tone was actually perfectly pleasant and polite. If you don't agree with my assertion about the design, then fine; it really doesn't matter. I don't have to go around proving anything to anyone and they don't have to prove anything to me.

I've already said at least twice that in the real world one can't necessarily achieve the efficiency of a power-of-2 design. In the end, it's just my opinion, and it shouldn't matter to you either if you don't agree with it.

Mister, the politeness of your tone does not suffice. Prove your assertions. If your institution taught you that, surely it would have been documented in some textbook or other form of literature. Show me that.

Currently your assertions are baseless, so do the needful.
 
Unlikely? That's not what they are saying. I know about marketing and all, but I don't think they are lying about their internal numbers this time (a lot of people who have seen those benchmarks are saying the same); inflating them, yes, lying, no, as in saying 60-80% faster when it's actually 30-40% faster. Anyway, I'm still 95% convinced that the GF100 will be faster than 2x GTX285 and hence the HD5970.
The official slides from a while back showed GF100 being barely faster than the HD5970.


EDIT: The reason I think they are not lying and are actually telling the truth is that lying about performance in this situation would just hurt them. They are very late, and people have already made up their minds or are otherwise waiting; performance numbers are not going to change any purchase decision right now. Whoever decided to wait will continue waiting, and whoever decided to buy an HD5xxx is waiting for one to be available. Lying would just hurt them, because it would anger the ones who are waiting once the card is released and they would not buy it, plus Nvidia would make their reputation even worse than it is. Nothing will help them now, and that's why they have not disclosed any hard numbers.
Sounds like what AMD was doing when they were getting spanked by Core 2 Duo/Quad. AMD kept crying "Phenom is coming," "Phenom is going to be great," and when it got here everyone except AMD was saying "Phenom is crap," "Phenom belongs in the bargain bin."


Anyway, innovation and performance-per-die have nothing to do with each other. Fermi has a lot of new features that potentially increase performance and make it a much better GPGPU; that is innovation. Cypress is just a mirrored RV770.
It is not innovative to use a smaller process with more transistors to get more performance. It is innovative to streamline the processing so that better results are produced faster with fewer transistors.

Fermi also has one feature that completely destroys its potential as a consumer graphics card: ECC memory controllers. The GPGPU card will hit the market as a Tesla product. It will be some time before we see the consumer version without the ECC memory controllers. Doesn't that sound a lot like where Intel Larrabee is at right now?
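On the ECC point specifically: on Fermi-class Tesla boards the check bits live in the ordinary GDDR5, so enabling ECC costs capacity (the commonly cited overhead is 8 check bits per 64 data bits, i.e. 12.5%) plus some bandwidth, which is why it would simply be left disabled on a GeForce part. A rough sketch with a hypothetical 1.5 GB card:

```python
# Capacity cost of ECC on a Fermi-class board. The 12.5% figure
# (8 check bits per 64 data bits) is the commonly cited overhead;
# the 1.5 GB card is a hypothetical example.
total_mem_gb = 1.5
ecc_overhead = 8 / 64  # fraction of memory reserved for check bits

usable_with_ecc = total_mem_gb * (1 - ecc_overhead)
print(f"ECC off: {total_mem_gb:.3f} GB usable")
print(f"ECC on:  {usable_with_ecc:.3f} GB usable "
      f"({ecc_overhead:.1%} reserved for check bits)")
```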

I think there is a lesson to be learned from this: it is easier to put the GP in GPU than extrapolate the GPU out of a GPGPU card.
 
Why was last gen the 200 series and this one the 100 series?
 
Because NVIDIA is screwed in the head. I can't name one thing they did right since 2008 (G80, 8 series GeForce).
 
Because NVIDIA is screwed in the head. I can't name one thing they did right since 2008 (G80, 8 series GeForce).

The 9600GT was a BIG right thing... (OMG, especially when they were like $39)
 
Why was last gen the 200 series and this one the 100 series?

GF100 is not a brand name. It's a GPU internal codename, which is irrelevant to the consumers anyway.
 
thx, it seemed silly to be going backwards...
 
They named it that cause it costs 100 billion jiggawatts.
 
And yet we come full circle, as you are the one speculating after accusing me of speculating when I asked a question :slap:. So in the end you really don't know, yet you've argued about not knowing. My question was really geared toward anyone out there who may have come across some pricing information, as the video card is supposed to be released in less than 2 months.

I didn't accuse you of speculating; I pointed out that speculating based on die size is always an error. The die probably represents less than 10% of the card's final price, which means they can play a lot with the final price depending on what they want to do with it. I mentioned volumes, etc., because there's this misconception that a chip that is 50% bigger will cost 50% more to produce and to sell, which is crap. There are far more considerations, and that's why I mentioned one.

I've just been trying to explain why your question was not really answerable, but I'm going to answer it based on my opinion (with no access to any info, except that I worked at a retailer in the past and know how things work) so that you can see how vague the response to such a question is when we don't know anything and are just assuming or estimating how many dies per wafer they get. Answer: anything from $250 to $750. That's it. Happy?
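To illustrate the die-is-a-small-share point with numbers (both the $500 price and the 10% die share are assumptions, not real BOM data): if the die is ~10% of the card's cost, even a 50% swing in die cost barely moves the total.

```python
# Sensitivity of total card cost to die cost, under assumed numbers:
# a $500 card where the die is 10% of the cost. Both are made up.
retail_price = 500.0
die_share = 0.10
die_cost = retail_price * die_share          # $50 in this sketch
everything_else = retail_price - die_cost    # board, memory, cooler, margin...

for swing in (-0.5, 0.0, +0.5):              # die cost -50%, unchanged, +50%
    new_total = everything_else + die_cost * (1 + swing)
    print(f"die cost {swing:+.0%} -> card cost ${new_total:.0f} "
          f"({new_total / retail_price - 1:+.1%} overall)")
```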
 
Sounds like what AMD was doing when they were getting spanked by Core 2 Duo/Quad. AMD kept crying "Phenom is coming," "Phenom is going to be great," and when it got here everyone except AMD was saying "Phenom is crap," "Phenom belongs in the bargain bin."

AMD was not, and has never been, in the situation Nvidia is in regarding fame or company image. AMD has always had a good image; Nvidia doesn't, and if they outright lie about the performance they would sell fewer cards than if they told the truth. Plus, GF100 not being much faster than Cypress is not going to happen. People just have to accept that performance-wise it will be much faster.

It is not innovative to use a smaller process with more transistors to get more performance. It is innovative to streamline the processing so that better results are produced faster with fewer transistors.

That remains to be seen. If the performance claims are anywhere near the truth, then Fermi will spank Cypress in that department too: 33% more transistors, and if it's 60% faster, then it's much, much better.

Fermi also has one feature that completely destroys its potential as a consumer graphics card: ECC memory controllers. The GPGPU card will hit the market as a Tesla product. It will be some time before we see the consumer version without the ECC memory controllers. Doesn't that sound a lot like where Intel Larrabee is at right now?

Nonsense. ECC is not going to be used in the GeForce GPUs and doesn't affect performance at all when not in use. And what the hell are you talking about, Tesla coming first? GF100 comes in March and Tesla will not come until late Q2.

I think there is a lesson to be learned from this: it is easier to put the GP in GPU than extrapolate the GPU out of a GPGPU card.

This shows you either didn't read Wizzard's article or you didn't understand it at all. And IMO it's definitely the former, since you are saying things based on info from around July or September. This new article shows that Fermi is indeed a GPU in the fullest sense, not just the GPGPU card it was claimed to be.
 
That's correct, you need two cards to get the 3 display setup working.
 