Wednesday, September 30th 2009

NVIDIA GT300 "Fermi" Detailed
NVIDIA's upcoming flagship graphics processor is going by a lot of codenames. While some call it the GF100, and others GT300 (based on the present nomenclature), what is certain is that NVIDIA has given the architecture the internal name "Fermi", after the Italian physicist Enrico Fermi, who built the first nuclear reactor. It doesn't come as a surprise, then, that according to some sources the board itself is codenamed "reactor".
Source: Bright Side of News
Based on information gathered so far about GT300/Fermi, here's what's packed into it:
- Transistor count of over 3 billion
- Built on the 40 nm TSMC process
- 512 shader processors (which NVIDIA may refer to as "CUDA cores")
- 32 cores per core cluster
- 384-bit GDDR5 memory interface
- 1 MB L1 cache memory, 768 KB L2 unified cache memory
- Up to 6 GB of total memory; 1.5 GB can be expected for the consumer graphics variant
- Half-speed IEEE 754 double-precision floating point
- Native support for execution of C (CUDA), C++, Fortran, support for DirectCompute 11, DirectX 11, OpenGL 3.1, and OpenCL
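A quick back-of-the-envelope check on how those figures fit together (this arithmetic is inferred from the list above, not stated explicitly by the source):

512 shader processors / 32 cores per cluster = 16 core clusters
384-bit memory interface = six 64-bit memory controllers
6 controllers x 256 MB = 1.5 GB (expected consumer card), 6 controllers x 1 GB = 6 GB (maximum listed)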
205 Comments on NVIDIA GT300 "Fermi" Detailed
*I say oddly enough, because the Batman game that has caused so much uproar recently is actually based on the Unreal Engine.
**Valve removed the ATi branding once ATi stopped working with them (and most other developers) to improve games before release. That worked wonderfully in the past, probably allowed nVidia to compete better, eliminated consumer confusion, and lowered prices for the consumer, so I can't see how it was really a bad thing.
However, this likely won't work with the upcoming generation of cards, as DX11 support will be required.
We gotta love paper launches and the wars that start over what somebody said... not actual proof and hard-launch benches.
We can sit here all day and chat about how card A will beat card B and get all enraged about it... or we can wait until it's actually released and base our views on solid facts.
I know which way I'd prefer... but that said, this card does look to be a beast in the making; I just have my doubts about the way nVidia will choose to price it, as they often put a hefty price on 5% more "power".
* I say that because it seems too good to be true TBH.
Anyway, this sure sounds like a true powerhouse and really stresses that Nvidia wants to shatter the idea of the graphics card as just a means of entertainment.
I think that a lot of businesses and scientific laboratories are ready for a massively parallel alternative to the CPU. And once they make the jump, gamers and, more importantly, regular users are bound to follow.
I mean come on, think about it for a second. Is there any better pick-up line than: "Hey baby, wanna come down to my crib and check out my quadruple-pumped super computer?"
Likely too hot for my taste.
Hopefully it will lower 5850 prices a bit though; I might pick one of those up... or even wait for Juniper XT.
If it turns out they do rock the folding world and see great gains, I'll probably start scheming ways to change out 6 GTX 260 216s for 6 of these and a much lighter wallet.
Cheers. :toast:
All in all, what they are claiming is that they have implemented an ISA for those programming languages, so they are effectively saying that for every core construct in C/C++ and Fortran there is an instruction in the GPU that can execute it. In a way they have completely bypassed the CPU, except for the first instruction that is going to be required to move execution to the GPU. Yes, Intel does have something to worry about. If the above is true, they will certainly own in folding. Not only would they be much faster, but there isn't going to be a need for a separate GPU client to begin with. Just a couple of lines to make the CPU client run on the GPU. :eek:
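For what it's worth, here's roughly what that "couple of lines" amounts to with today's CUDA C. This is only a generic sketch to show the idea of handing a loop to the GPU; the function and variable names are made up and have nothing to do with the actual folding client:

#include <cuda_runtime.h>

// Each GPU thread scales one element of the array (stand-in for real work).
__global__ void scale_points(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

void scale_on_gpu(float *host_data, float factor, int n)
{
    float *dev_data;
    size_t bytes = n * sizeof(float);

    cudaMalloc((void **)&dev_data, bytes);                           // allocate GPU memory
    cudaMemcpy(dev_data, host_data, bytes, cudaMemcpyHostToDevice);  // copy input to the card
    scale_points<<<(n + 255) / 256, 256>>>(dev_data, factor, n);     // launch one thread per element
    cudaMemcpy(host_data, dev_data, bytes, cudaMemcpyDeviceToHost);  // copy results back
    cudaFree(dev_data);
}

If Fermi really runs C++ natively, the claim is that even that little bit of boilerplate shrinks further, and porting a CPU client gets closer to a recompile than a rewrite.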
Now that I think about it, it might mean that GT300 could be the only processor inside a gaming console too, but it would run normal code very slowly. The truth is that the CPU is still very much needed to run normal code, because GPUs don't have branch prediction (although I wouldn't bet a penny on that at this point, just in case) and that is needed. Then again, C and Fortran have conditional expressions as core constructs, so the ability to run them should be there, although at a high performance penalty compared to a CPU. A coder could take advantage of the raw power of the GPU and perform massive speculative execution, though.
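To illustrate the branching penalty I mean: NVIDIA GPUs schedule threads in groups of 32 (warps), and when threads in the same warp take different sides of an if/else, the hardware runs both paths one after the other with part of the warp sitting idle. There is no branch prediction involved, just this serialization cost. A contrived CUDA sketch (names made up):

// Odd and even threads take different branches, so every 32-thread warp
// ends up executing both paths back to back instead of in parallel.
__global__ void divergent_kernel(float *out, const float *in, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n)
        return;

    if (i % 2 == 0)
        out[i] = in[i] * 2.0f;   // half the warp runs this first...
    else
        out[i] = in[i] + 1.0f;   // ...then the other half runs this
}

That is why branch-heavy "normal" code still belongs on a CPU, even if the GPU can technically execute it.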
Sorry for the jargon and overall digression. :o
I'd love to see Nvidia's solution from a nerd's point of view rather than anything else. If they really accomplished what they state here, it would render Larrabee useless and obsolete before it even comes out and create some serious competition in the HPC market.
www.nzone.com/object/nzone_twimtbp_gameslist.html
The main point still stands though, ATi had/has a similar program that did the exact same thing.
"Then timing is just as valid, because while Fermi currently exists on paper, it's not a product yet. Fermi is late. Clock speeds, configurations and price points have yet to be finalized. NVIDIA just recently got working chips back and it's going to be at least two months before I see the first samples. Widespread availability won't be until at least Q1 2010.
I asked two people at NVIDIA why Fermi is late; NVIDIA's VP of Product Marketing, Ujesh Desai and NVIDIA's VP of GPU Engineering, Jonah Alben. Ujesh responded: because designing GPUs this big is "fucking hard"."
Source: www.anandtech.com/video/showdoc.aspx?i=3651
Another informative article: www.techreport.com/articles.x/17670
My 1 GB video card can't run it; it isn't the processing power that's lacking, it's the vmem, plain and simple.
Linky
Apparently it's all in 3D this year. An interesting side effect is that the press can't get any decent shots of the slides they show. The question is whether or not it was intentional, to help keep people guessing.
www.realworldtech.com/page.cfm?ArticleID=RWT093009110932