Wednesday, September 30th 2009
NVIDIA GT300 ''Fermi'' Detailed
NVIDIA's upcoming flagship graphics processor is going by a lot of codenames. While some call it the GF100, others GT300 (based on the present nomenclature), what is certain is that NVIDIA has given the architecture the internal name "Fermi", after the Italian physicist Enrico Fermi, creator of the first nuclear reactor. It doesn't come as a surprise, then, that according to some sources the board itself is codenamed "reactor".
Source: Bright Side of News
Based on information gathered so far about GT300/Fermi, here's what's packed into it:
- Transistor count of over 3 billion
- Built on the 40 nm TSMC process
- 512 shader processors (which NVIDIA may refer to as "CUDA cores")
- 32 cores per core cluster
- 384-bit GDDR5 memory interface
- 1 MB L1 cache memory, 768 KB L2 unified cache memory
- Up to 6 GB of total memory, 1.5 GB can be expected for the consumer graphics variant
- Half-speed IEEE 754 double-precision floating point
- Native support for execution of C (CUDA), C++, Fortran, support for DirectCompute 11, DirectX 11, OpenGL 3.1, and OpenCL
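For illustration only, here is a rough back-of-the-envelope sketch of what those numbers would imply. The shader clock and GDDR5 data rate are not part of the leak, so the values used below are placeholder assumptions, not specifications:

```python
# Back-of-the-envelope figures derived from the leaked specs above.
# NOTE: the shader clock and GDDR5 data rate are assumptions for
# illustration; they are not part of the leak.

CUDA_CORES = 512
CORES_PER_CLUSTER = 32
MEMORY_BUS_BITS = 384

ASSUMED_SHADER_CLOCK_GHZ = 1.5       # placeholder, GT200-class territory
ASSUMED_GDDR5_DATA_RATE_GBPS = 4.0   # placeholder, per-pin effective rate

clusters = CUDA_CORES // CORES_PER_CLUSTER               # 512 / 32 = 16 clusters
sp_gflops = CUDA_CORES * 2 * ASSUMED_SHADER_CLOCK_GHZ    # FMA counted as 2 flops/cycle
dp_gflops = sp_gflops / 2                                 # half-speed FP64, per the list
bandwidth_gb_s = MEMORY_BUS_BITS / 8 * ASSUMED_GDDR5_DATA_RATE_GBPS

print(f"Core clusters:      {clusters}")
print(f"Peak SP (assumed):  ~{sp_gflops:.0f} GFLOPS")
print(f"Peak DP (assumed):  ~{dp_gflops:.0f} GFLOPS")
print(f"Memory bandwidth:   ~{bandwidth_gb_s:.0f} GB/s (at assumed data rate)")
```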
205 Comments on NVIDIA GT300 ''Fermi'' Detailed
Regarding the last sentence, that's not really accurate. If you put a GTX 285 at the equivalent of the HD 4870's reference clocks, it would more than scale beyond HD 4890 clocks... It's just two different ways of doing things; NVIDIA has had the OC advantage in almost every chip in recent years, mainly because they aim at lower clocks to begin with. That being said, we have no clue which clocks GT300 will launch at; it could be anything between 600-800 MHz. Lower or higher is unlikely. If it's close to 600 MHz, then GT300 would be about 2x as fast as GT200; if it launched nearer to 800 MHz, it would be much faster than that. Point is, we don't know exactly how it will perform, but looking at the specs it becomes more and more evident that it will not be slow. A rough sketch of that scaling argument follows below.
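To make the scaling argument concrete, here is a deliberately naive sketch that assumes throughput is simply proportional to shader count times core clock, using the GTX 285 (240 shader processors at 648 MHz core) as the GT200 reference. It ignores bandwidth, ROPs and any architectural changes, so it is speculation made explicit rather than a prediction:

```python
# Naive scaling model: relative performance ~ shader_count * core_clock.
# Ignores memory bandwidth, ROPs and architectural changes entirely.

def relative_throughput(cores, core_clock_mhz, ref_cores=240, ref_clock_mhz=648):
    """Throughput relative to a GTX 285-class GT200 (240 SPs at 648 MHz core)."""
    return (cores * core_clock_mhz) / (ref_cores * ref_clock_mhz)

# The speculated GT300 launch clock range from the post above.
for clock in (600, 700, 800):
    print(f"GT300 at {clock} MHz: ~{relative_throughput(512, clock):.2f}x a GTX 285")
```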
EDIT: Before this becomes a discussion, I'm not fighting with you at all. I'm just stating the possibilities, answering your questions and trying to offer the different angles. I don't believe it 100% either, and I'm not saying it's going to be true. But what I do think is that written words claimed to come from a CEO >>>>>>>>> the speculation and thoughts of a member with no info to back his claims. So since all this is speculation, and all of us are speculating, I weigh both things and have no doubt as to which possible, speculated reality is the more probable one. Especially since most of the other info there regarding GTC is true. Even if Fudzilla is not the most believable source, the truth is that with GT300 they've been correct over the last two days and also overall. For instance, I think they were the first ones mentioning the real codename, Fermi.
What is clear IMO is that he had already made up his mind around an idea; he didn't know who Jensen Huang is nor what GTC is, so he thought he was making his claim stronger in his second reply, while he wasn't, and he has been unable to change his position in his posts since.
Point is, even if that info is not 100% accurate, the possibility that it might not happen that way is not enough to back his claims. Uncertainty is never proof of anything, and seriously, I'm starting to believe I've traveled to an alien world or something, because I'm seeing uncertainty used as proof everywhere: like in BM: Arkham, TWIMTBP as a whole, on Spanish TV... Is the world going crazy or what?
Why, you ask? Because if so, the winning one can behave much like Intel vs. AMD (price-wise) and, IMHO, that's a BIG no-no.
Apparently (on paper), nVidia will win this round (if and when "Fermi" is launched): the question is, by how much.
As long as they are both close to each other, then we can all benefit from their price wars.
It doesn't matter if NVIDIA puts out a $700 or $1,000 or $1b GPU, because those GPUs are not meant to be mainstream. Those GPUs are not meant to be good price/performance.
Even if the margin is small (1-5% faster), they can still sell it for 2x the price.
The performance gap is not a problem, I repeat; as long as AMD has something good on their side, we (consumers) are all good.
MIMD-based architectures are going to be the way of the future, not the old scalar architecture.
I was really, really surprised when I saw how small the card was though; I was just amazed, because NVIDIA usually makes the high-end cards very long, but this time they are moving in the right direction.
But it looks really nice; about time people get a bit of bling too for how much they have to pay :)
GTX 280 needs 8-pin + 6-pin power and this has more than double the transistors; that's why I'm not buying it, even if it is on a smaller manufacturing process.
Compared to the HD 5870, the GTX 285 is doing 78% and the HD 4870 is doing 50%; if we normalize 50% to 100% and take it as the base, then:
78/50 * 100 = 156%
That is at 2560x1600 the GTX285 is 56% faster than HD4870 in the average of all the games that Wizzard reviews.
But wait!! ATI had another 956 million transistor card using the same chip, the HD 4850; we apply the same math and that gives us that the GTX 285 is 95% faster, or almost twice as fast. 40% more transistors and 2x the performance, not too shabby, is it? The GTX 285's clock is 648 MHz, the HD 4870's is 750 MHz and the HD 4850's is 625 MHz.
Comparing the cards at 2560x1600 does make sense, because a lot of that extra 40% of transistors went to the extra 16 ROPs that help at that resolution.
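For clarity, the re-basing above is just dividing each card's chart percentage by the new baseline's. A minimal sketch using the 2560x1600 figures quoted in this post (the HD 4850's chart percentage isn't quoted here, so it's left out):

```python
# Re-base W1zzard's "% of HD 5870" chart to a different 100% baseline.
# Figures are the 2560x1600 averages quoted in the post above.

def rebase(card_pct, baseline_pct):
    """Return a card's performance with baseline_pct treated as 100%."""
    return card_pct / baseline_pct * 100

gtx285_vs_hd4870 = rebase(78, 50)   # 78 / 50 * 100 = 156
print(f"GTX 285 relative to HD 4870: {gtx285_vs_hd4870:.0f}%  (i.e. 56% faster)")
```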
What I mean with all this is, it depends.
I bet a GTX 380 scores "125%" on that chart in W1zzard's review in December :)
EDIT: as for the poll, I'll go for 130% flat :)
:-)
I predict RIP for www.clearspeed.com/
PS. Who prefers the "matte" look of the ATI, or the "glossy" look of the nV?
I wonder why no one has come up with a killer backplate design. I mean, I know it wouldn't have any practical function, but wouldn't it be nice to have something you can actually see when you stare at your case's window, instead of a PCB or an all-black backplate?... :confused:
Anyway, we are not comparing GT200/RV770, we are talking about Fermi. Fermi has 3 billion transistors and, just like GT200, it will use every one of them to be significantly faster than RV870. I'm not assuming it will be twice as fast as GT200, although I know it will be somewhere around that. Twice as fast would have been saying it would score 156% in the chart; I said 135-140%, which is quite different. Looking at the paper specs, we don't have any reason to think it will only be twice as fast; what we have are some reasons to think it could be 3x faster, yet we are saying it will be less than twice.
Two days into the conference, and I have yet to see any mention of such a release strategy on the ongoing GTC blog, nor was it mentioned in Jensen's keynote address summary . . . as well, no other tech site reporting on the conference has mentioned the GT300 release strategy.
Sorry to play devil's advocate here, but I'd think news such as that (along with the presentation of the GT300) would be big enough that other sites would've coughed it up too . . . not just Fud; and I'm 100% sure Fud is not the only "major" tech site with representatives on hand. If you've paid any attention to the tech industry over the last 10-20 years (which, I definitely get the feeling you have), you'd know that upcoming hardware release strategies (especially for the GPU markets) are like crack to the tech communities - right alongside the spec and pricing sheets.
Such a release strategy just does not make sense for nVidia, especially considering that ATI have already made it to market with their new series, allowing ATI to gain the upper hand in the pricing game . . . not to mention that everyone knows Hemlock is waiting in the wings, and ATI won't drop that bombshell until nVidia have stepped into the ring . . . this is the same market strategy both companies have used since the days of the X1000/7800 series. That being said, I'll believe it when I see it on shelves. So, seeing as how everything is all just speculation . . . then it's safe to assume your initial interpretations of Fud's article are merely speculation as well? I mean, you yourself claim you don't believe Fud 100% either, yet you've twisted your understanding of the article to serve your needs . . . and you have the audacity to claim that I'm warping the message?
Again, this is the tech market we're talking about - all "rumors," whether regurgitated from news sites or spewed from the manufacturer's mouth, are to be taken with a grain of salt. There's a lot of sandbagging in the industry, and a lot of smoke & mirrors, too. Things can and do change overnight. I love your pretentious level of assumption. Simply because I disagree with an unbacked statement you made . . . brilliant.
You still fail to realize that, again, you yourself have twisted the claim that Fud made . . . here, let me help break down simple English context for you, seeing as how you obviously don't get it . . . now, if English is not your primary language, that's cool - I apologize . . . if not . . . then that's just sad: Does that make it more understandable? Sounds like sandbagging to me . . . as you've come across quite unsure yourself. It's always amazed me how "holier-than-thou" people are willing to come across, but seem to be under the impression that their shit doesn't stink.
I've said my piece; I'm done with this discussion, debate, argument, disagreement or whatever you want to call it.
I'll let time prove who's right and who's wrong. :toast: