Wednesday, September 30th 2009
NVIDIA GT300 ''Fermi'' Detailed
NVIDIA's upcoming flagship graphics processor goes by a lot of codenames. While some call it the GF100 and others GT300 (based on the present nomenclature), what is certain is that NVIDIA has given the architecture the internal name "Fermi", after the Italian physicist Enrico Fermi, creator of the world's first nuclear reactor. It comes as no surprise, then, that according to some sources the board itself is codenamed "reactor".
Source:
Bright Side of News
Based on information gathered so far about GT300/Fermi, here's what's packed into it:
- Transistor count of over 3 billion
- Built on the 40 nm TSMC process
- 512 shader processors (which NVIDIA may refer to as "CUDA cores")
- 32 cores per core cluster
- 384-bit GDDR5 memory interface
- 1 MB L1 cache memory, 768 KB L2 unified cache memory
- Up to 6 GB of total memory, 1.5 GB can be expected for the consumer graphics variant
- Half Speed IEEE 754 Double Precision floating point
- Native support for execution of C (CUDA), C++, Fortran, support for DirectCompute 11, DirectX 11, OpenGL 3.1, and OpenCL
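Not GT300-specific, but for anyone wondering what "native support for execution of C (CUDA)" looks like in practice, here is a minimal vector-add sketch in standard CUDA C. Everything in it (array size, 256-thread blocks) is an arbitrary illustration choice, not something tied to the rumoured hardware:

```cuda
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Each thread adds one element; the hardware scheduler spreads the
// blocks across however many "CUDA cores" the chip actually has.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;                  // 1M elements, arbitrary
    const size_t bytes = n * sizeof(float);

    // Host buffers
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers
    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // 256 threads per block is a common, arbitrary choice
    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);           // expect 3.000000

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

Because the launch is expressed in blocks rather than physical cores, the same code would run unchanged on a cut-down variant with fewer shader processors.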
205 Comments on NVIDIA GT300 ''Fermi'' Detailed
Two models lower will be the GTX 380 at $359 (faster than the HD 5870) and the GTX 360 at $299 (equal to the HD 5870), which will push the current HD 5870 down to $295 and the HD 5850 to $245.
The GTS model will be as fast as or a bit faster than the GTX 285, with DX11 support, and will be priced at $249, followed by a GT at $200.
And the dual-GPU version, which uses two GTX 380 GPUs and becomes the HD 5870 X2 killer, will likely be priced around $649.
:toast:
Based on baseless sources.
NVIDIA never goes the cost/performance route; they always push out the $600 monster and you either buy it or you don't.
After all, if you have the more powerful product, why sell it cheaper?
I am going to wait until summer for a new build. Prices will have settled, there will be a far larger assortment of cards, and we will see what games/programs are out and able to take advantage of hardware.
I will be scheduling my lunch break for 13:00 Pacific.
Also... very close to the AMD Fusion CPU:
en.wikipedia.org/wiki/AMD_Fusion
However, there will be cut-down variants, just like in previous generations. These are the SKUs I expect to be competitive with ATi's parts in both price and performance.
Judging by the original figures, I expect mainstream parts to look something like:
352 or 320 Shaders
320-Bit or 256-bit Memory Bus
1.2GB or 1GB GDDR5
forums.techpowerup.com/showpost.php?p=1573106&postcount=131
Although the info about memory in that chart directly conflicts with the one in the OP, I'm still very inclined to believe the rest. It hints at 4 models being made, and you are not too far off. I also encourage you to join that thread; we've discussed prices there too, with similar conclusions. :toast:
@newtekie
http://forums.techpowerup.com/showpost.php?p=1573733&postcount=143 - That's what I think about the possible versions based on the TechARP chart (the other link above).
For nerds:
Back when MIMD was announced, it was also said the design would be much more modular than GT200. That means the ability to disable units is much improved, which makes creating more models more feasible. 40 nm yields are not the best in the world, and having 4 models with a decreasing number of clusters can push effective yields very high.
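Rough back-of-the-envelope cluster math, using only the figures from the news post (512 shaders, 32 per cluster); which salvage counts would actually ship is pure speculation:

\[
\frac{512}{32} = 16\ \text{clusters};\qquad 15 \times 32 = 480,\quad 14 \times 32 = 448,\quad 11 \times 32 = 352,\quad 10 \times 32 = 320
\]

That is where the 352/320-shader guesses earlier in the thread come from: each disabled cluster costs 32 shaders but lets a partially defective die still be sold.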
Furthermore, we still don't know how fast the HD5890 will be, which should somehow address the memory bandwidth bottleneck of the HD5870.
www.guru3d.com/article/radeon-hd-5870-crossfirex-test-review/9
and that most games except Crysis are crappy console ports :rolleyes:
Dual-GPU cards still have a long life in the market; ATI has announced its X2 and we have seen the pictures, so it means NVIDIA will do the same.
They have always been very powerful and less expensive than two boards mounted in two physical PCI Express slots.
Still, until I see it I won't take it as "the beast"; we have to wait and see what it can do, not only in games but in other stuff too.
Another thing: all that C++, Fortran and so on, is this what DX11 should offer and what the ATI 5870 can do too, or is it exclusive to the GT300 chip?
I'm asking because it is a big thing: if programmers could easily use a 3-billion-transistor GPU, the CPU would become insignificant :) in some tasks. Intel should start to feel threatened; AMD too, but they are too small to be bothered by this, and they have a GPU too :).
PC graphics are better. Take for example Assassin's Creed: the PC version is much better than the console ones. The same goes for Batman: Arkham Asylum, Wolfenstein, Mass Effect, the Call of Duty series and many, many others.
The PC is the primary platform for gaming; with the PC you can make games, with consoles you can only play them.
I think the next upgrade for me will be the sammy 2233rz, then a 5850 after the price comes down :)
Either way though, beastly specs!!
DX11 (DirectCompute) and OpenCL are used a little differently, but they are no less useful, and with those AMD can do it too. Indeed, that's already happening. The GPU will never replace the CPU; there will always be a CPU in the PC, but it will go from being powerful enough to run applications fast to being just fast enough to feed the GPU that runs the applications fast. This means the end of big, overpriced CPUs. Read this:
wallstreetandtech.com/it-infrastructure/showArticle.jhtml?articleID=220200055&cid=nl_wallstreettech_daily
Instead of using a CPU farm with 8000 processors, they used only 48 servers with 2 Tesla GPUs each.
And that Tesla is the old Tesla using the GT200 GPU, so that's saying a lot, since GT300 does double precision about 10 times faster. While GT200 did 1 TFLOPS single precision and 100 GFLOPS double precision, GT300 should do ~2.5 TFLOPS single precision and 1.25 TFLOPS double precision. So yeah, if your application is parallel enough, you can now say NVIDIA opened up a can of whoop-ass on Intel this time.

All those games are ports. PC graphics are better because they use better textures, you run at higher resolution and you get proper AA and AF, but the games were coded for the consoles and then ported to PC.
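Going back to the Tesla numbers for a moment: the news post lists half-speed double precision, so taking the ~2.5 TFLOPS single-precision figure above at face value (neither number is confirmed), the arithmetic is simply:

\[
\text{DP} \approx \tfrac{1}{2} \times 2.5\ \text{TFLOPS} \approx 1.25\ \text{TFLOPS}, \qquad \frac{1.25\ \text{TFLOPS}}{100\ \text{GFLOPS}} \approx 12.5\times\ \text{GT200's DP throughput}
\]

i.e. roughly the order-of-magnitude jump claimed above.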
Because in games, especially those with very large rooms and environments, the FPS tends to drop because of the greater pixel workload.
So a graphics card that reaches 200 fps will drop to 100 fps and you will not notice any slowdown, even with explosions and fast movement, while a card that starts at 100 fps (and drops to 40 in some cases) will show a drastic slowdown.
This often happens in Crysis, but not in games like Modern Warfare, which has been optimized to run at a stable 60 fps.
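Frame times make that point clearer; this is just the conversion 1000/fps, nothing GPU-specific:

\[
200\ \text{fps} = 5\ \text{ms/frame},\qquad 100\ \text{fps} = 10\ \text{ms/frame},\qquad 40\ \text{fps} = 25\ \text{ms/frame}
\]

Halving 200 fps adds only 5 ms per frame, while dropping from 100 to 40 fps adds 15 ms, which is the kind of change you actually feel.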
Is it a rumour that Nvidia bought Ageia and left all Ageia card owners for dead?
Is it a rumour that Nvidia made sure that an ATI + Nvidia (for PhysX) configuration is no longer do-able?
Is it a rumour that AA does not work on ATI cards in Batman AA even though it is able to do so?
The only rumour is that NVIDIA made Ubisoft remove DX10.1 support from AC, because AC was running too awesomely with it, on ATI cards of course (NVIDIA does not support DX10.1).
This will be VERY interesting, if the price is right, I may skip the 5k and go GT300 :rockout: