Wednesday, September 30th 2009

NVIDIA GT300 "Fermi" Detailed

NVIDIA's upcoming flagship graphics processor is going by a lot of codenames. While some call it the GF100 and others GT300 (based on the present nomenclature), what is certain is that NVIDIA has given the architecture the internal name "Fermi", after the Italian physicist Enrico Fermi, inventor of the nuclear reactor. It comes as no surprise, then, that according to some sources the board itself is codenamed "reactor".

Based on information gathered so far about GT300/Fermi, here's what's packed into it:
  • Transistor count of over 3 billion
  • Built on the 40 nm TSMC process
  • 512 shader processors (which NVIDIA may refer to as "CUDA cores")
  • 32 cores per core cluster
  • 384-bit GDDR5 memory interface (see the rough bandwidth estimate after this list)
  • 1 MB L1 cache memory, 768 KB L2 unified cache memory
  • Up to 6 GB of total memory, 1.5 GB can be expected for the consumer graphics variant
  • Half-speed IEEE 754 double-precision floating point
  • Native support for execution of C (CUDA), C++, and Fortran; support for DirectCompute 11, DirectX 11, OpenGL 3.1, and OpenCL
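For a sense of scale: no memory clock has been announced, but assuming a hypothetical 4.0 Gbps effective GDDR5 data rate (the HD5870 ships at 4.8 Gbps), a 384-bit bus would work out to roughly:

$$384\ \text{pins} \times 4.0\ \tfrac{\text{Gbit}}{\text{s}\cdot\text{pin}} \div 8\ \tfrac{\text{bit}}{\text{byte}} = 192\ \text{GB/s}$$

For comparison, the GTX 285's 512-bit GDDR3 interface delivers about 159 GB/s.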
Update: Here's an image, taken from the ongoing public webcast of the GPU Technology Conference, of a graphics card based on the Fermi architecture.
Source: Bright Side of News

205 Comments on NVIDIA GT300 "Fermi" Detailed

#51
kid41212003
I'm expecting the most high-end single GPU (GTX390) to be priced at $449.

Two models lower will be the GTX380 at $359 (faster than the HD5870) and the GTX360 at $299 (on par with the HD5870), which will push the current HD5870 down to $295 and the HD5850 to $245.

The GTS model will be as fast as or (a bit) faster than a GTX285, with DX11 support, and will be priced at $249, followed by a GT at $200.

And the GPUx2 version, which uses two GTX380 GPUs and becomes the HD5870X2 killer, will likely be priced around $649.

:toast:



Based on baseless sources.
Posted on Reply
#52
Zubasa
EarlZ: Likely this would be priced similarly to the 58xx series; hopefully it's cheaper and performs better in all scenarios.
You can only hope.
nVidia never goes the price/performance route; they always push out the $600 monster and you either buy it or you don't.
After all, if you have the more powerful product, why sell it cheaper?
Posted on Reply
#53
Unregistered
I agree with the waiting game. Having bleeding edge may be cool for bragging rights, but sometimes you end up with stuff like a sapphire 580 pure MB, still waiting for RMA, and a nice collection of 3870's.

I am going to wait until summer for a new build. Prices will have settled, there will be a far larger assortment of cards, and we will see what games/programs are out and able to take advantage of hardware.

I will be scheduling my lunch break for 13:00 Pacific.
Posted on Reply
#54
devguy
Oh good. It is super exciting that I may soon have the opportunity to GPU hardware accelerate the thousands of Fortran programs I've been writing lately. I even heard that the new version of Photoshop will be written in Fortran!
Posted on Reply
#56
newtekie1
Semi-Retired Folder
Seems like a monster. I can almost guarantee the highest end offering will be priced through the roof.

However, there will be cut-down variants, just like in previous generations. Those are the SKUs I expect to be competitive in both price and performance with ATi's parts.

Judging by the original figures, I expect mainstream parts to look something like:

352 or 320 Shaders
320-Bit or 256-bit Memory Bus
1.2GB or 1GB GDDR5
Posted on Reply
#57
Benetanegia
kid41212003: I'm expecting the most high-end single GPU (GTX390) to be priced at $449.

Two models lower will be the GTX380 at $359 (faster than the HD5870) and the GTX360 at $299 (on par with the HD5870), which will push the current HD5870 down to $295 and the HD5850 to $245.

The GTS model will be as fast as or (a bit) faster than a GTX285, with DX11 support, and will be priced at $249, followed by a GT at $200.

And the GPUx2 version, which uses two GTX380 GPUs and becomes the HD5870X2 killer, will likely be priced around $649.

:toast:



Based on baseless sources.
Not so baseless IMO. :)

forums.techpowerup.com/showpost.php?p=1573106&postcount=131

Although the info about memory in that chart is in direct conflict with the one in the OP, I'm still very inclined to believe the rest. It hints at four models being made, and you are not too far off. I also encourage you to join that thread; we've discussed prices there too, with similar conclusions. :toast:

@newtekie

http://forums.techpowerup.com/showpost.php?p=1573733&postcount=143 - That's what I think about the possible versions based on the TechARP chart (the other link above).

For nerds:
Back when MIMD was announced, it was also said the design would be much more modular than GT200. That means the ability to disable units is much improved, which makes creating more models more feasible. 40 nm yields are not the best in the world, and having four models with a decreasing number of clusters can improve them considerably.
Posted on Reply
#58
ToTTenTranz
laszlo: i see a 5870 killer

now all depends on pricing
But not an HD5870X2 killer, which will be its competitor price-wise.


Furthermore, we still don't know how fast the HD5890 will be, which should somehow address the memory bandwidth bottleneck of the HD5870.
Posted on Reply
#59
Benetanegia
ToTTenTranz: But not an HD5870X2 killer, which will be its competitor price-wise.
We don't know. Rumors have said the X2 will launch at $550-600. The GTX380 will not launch at that price unless it's much, much faster and it has two lower versions that are faster than or competitive with the HD5870. Forget the GTX2xx launch already; those prices were based on the pricing strategy of the past. GTX3xx production costs will be much lower than GTX2xx cards' were at launch, and significantly cheaper than a dual card's. Not to mention that Nvidia will have a dual card too.
ToTTenTranz: Furthermore, we still don't know how fast the HD5890 will be, which should somehow address the memory bandwidth bottleneck of the HD5870.
How? What did I miss?
Posted on Reply
#60
wiak
:eek:
buggalugs: Only for idiots who think 10 extra fps is worth $300 more.


The 4870/4890 have done very well while there are faster Nvidia cards.
I know, and you also have to consider that two 5870s in CrossFire are bottlenecked by a Core i7 965 @ 3.7 GHz in some games:
www.guru3d.com/article/radeon-hd-5870-crossfirex-test-review/9

and that most games except Crysis are crappy console ports :rolleyes:
Posted on Reply
#61
Animalpak
The specifications of this new GPU are very promising. I look forward to the announcement of the upcoming dual-GPU card from NVIDIA.

Dual-GPU cards still have a long life in the market; if ATI has announced its X2 and we have seen the pictures, it means NVIDIA will do the same.

They have always been very powerful and less expensive than two boards mounted in two physical PCI-E slots.
Posted on Reply
#62
leonard_222003
Although I hate Nvidia for what it does to games, I have to say I'm impressed.
Still, until I see it I won't take it as "the beast"; we have to wait and see what it can do, not only in games but in other stuff too.
Another thing: all that C++, Fortran... is this what DX11 is supposed to enable, and something the ATI 5870 can do too, or is it exclusive to the GT300 chip?
I'm asking this because it is a big thing: if programmers could easily use a 3-billion-transistor GPU, then the CPU would be insignificant :) in some tasks. Intel should start to feel threatened; AMD too, but they are too small to be bothered by this and they have a GPU too :).
Posted on Reply
#63
VanguardGX
[I.R.A]_FBi: rassclaat
My words exactly lol!!! This thing is gonna be a number crunching beast!! Hope it can still play games:)
Posted on Reply
#64
Animalpak
wiak: and that most games except Crysis are crappy console ports
This is not true. Games are built from their first release for every platform; for the PC version you have more opportunities for further improvements in graphics and stability.

PC graphics are better. Take for example Assassin's Creed, which is much better on the PC than on consoles, Batman: Arkham Asylum, Wolfenstein, Mass Effect, the Call of Duty series and many, many others.

The PC is the primary platform for gaming; with the PC you can make games, with the consoles you can only play them.
Posted on Reply
#65
Binge
Overclocking Surrealism
buggalugs: Only for idiots who think 10 extra fps is worth $300 more.


The 4870/4890 have done very well while there are faster Nvidia cards.
Call me an idiot and chop off my genitals so I can't reproduce any more retards. I've got my wallet ready and waiting :laugh:
Posted on Reply
#66
mechtech
Seems more like an F@H card or a GPGPU crunching card. I guess it will push good fps too, but that's kinda useless anyway, since LCD monitors can only push 60 fps, with the exception of the sammy 2233rz and ViewSonic FuHzion.

I think the next upgrade for me will be the sammy 2233rz, then a 5850 after the price comes down :)

Either way though, beastly specs!!
Posted on Reply
#67
Benetanegia
leonard_222003: Although I hate Nvidia for what it does to games, I have to say I'm impressed.
Still, until I see it I won't take it as "the beast"; we have to wait and see what it can do, not only in games but in other stuff too.
Another thing: all that C++, Fortran... is this what DX11 is supposed to enable, and something the ATI 5870 can do too, or is it exclusive to the GT300 chip?
AFAIK that means you can just #include C for CUDA and work with it from C++ like you would with any other library, and the same goes for Fortran. That's very good for some programmers indeed, but it only works on Nvidia hardware.

DX11 and OpenCL are used a little differently, but they are no less useful, and there AMD does it too.
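For anyone curious what that looks like in practice, here's a minimal sketch of the CUDA C++ model. The kernel, names and sizes are invented for illustration, nothing here is Fermi-specific, and it only runs on Nvidia hardware:

```cpp
#include <cuda_runtime.h>
#include <vector>

// Invented example kernel: scale a vector on the GPU.
__global__ void scale(float* data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;
    std::vector<float> host(n, 1.0f);

    float* dev = 0;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, &host[0], n * sizeof(float), cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover the whole array.
    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);

    cudaMemcpy(&host[0], dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);
    return 0;
}
```

The DirectCompute and OpenCL paths look different (the kernels are compiled and bound through the API rather than mixed into your C++ source), but conceptually it's the same offload model, and that's the part AMD covers too.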
leonard_222003: I'm asking this because it is a big thing: if programmers could easily use a 3-billion-transistor GPU, then the CPU would be insignificant :) in some tasks. Intel should start to feel threatened; AMD too, but they are too small to be bothered by this and they have a GPU too :).
Indeed, that's already happening. The GPU will never replace the CPU; there will always be a CPU in the PC, but it will go from having to be powerful enough to run applications fast to only having to be fast enough to feed the GPU that runs the applications fast. This means the end of big, overpriced CPUs. Read this:

wallstreetandtech.com/it-infrastructure/showArticle.jhtml?articleID=220200055&cid=nl_wallstreettech_daily

Instead of using a CPU farm with 8,000 processors, they used only 48 servers with 2 Tesla GPUs each.

And that Tesla is the old Tesla based on the GT200 GPU, so that's saying a lot, since GT300 does double precision 10 times faster. While GT200 did 1 TFLOPS single precision and ~100 GFLOPS double precision, GT300 should do ~2.5 TFLOPS single precision and 1.25 TFLOPS double precision. So yeah, if your application is parallel enough, you can now say that Nvidia opened up a can of whoop-ass on Intel this time.
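None of the clocks are public, so treat this as a sanity check on those rumored numbers rather than a spec: assuming one fused multiply-add (2 FLOPs) per core per cycle, the ~2.5 TFLOPS figure implies a shader clock of roughly 2.4 GHz, and half-rate double precision then gives the 1.25 TFLOPS:

$$512\ \text{cores} \times 2\ \tfrac{\text{FLOP}}{\text{core}\cdot\text{cycle}} \times {\sim}2.4\ \text{GHz} \approx 2.5\ \text{TFLOPS (SP)}, \qquad 2.5/2 \approx 1.25\ \text{TFLOPS (DP)}$$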
Animalpak: This is not true. Games are built from their first release for every platform; for the PC version you have more opportunities for further improvements in graphics and stability.

PC graphics are better. Take for example Assassin's Creed, which is much better on the PC than on consoles, Batman: Arkham Asylum, Wolfenstein, Mass Effect, the Call of Duty series and many, many others.

The PC is the primary platform for gaming; with the PC you can make games, with the consoles you can only play them.
All those games are ports. PC graphics are better because they use better textures, you run at a higher resolution and you get proper AA and AF, but the game was coded for the consoles and then ported to the PC.
Posted on Reply
#68
WarEagleAU
Bird of Prey
No one is mentioning the L1 and L2 caches on this. It's basically a GPU and CPU merge, it seems. I have to say, being an ATI/AMD fanboy, I'm impressed with the rumored spec sheet (even if it's not concrete). I won't be buying it, but it seems like a hell of a dream card.
Posted on Reply
#69
Animalpak
I've always noticed that it is a benefit to have a graphics card capable of the highest possible FPS, not just what your monitor's resolution calls for.

Because in games, especially those with very large rooms and environments, the FPS tends to drop because of the greater pixel workload.

So a graphics card that reaches 200 fps will drop to 100 fps and you will not notice any slowdown, even with explosions and fast movements, while a card that starts below 100 fps (dropping to 40 in some cases) will show a drastic slowdown.

This often happens in Crysis, but not in games like Modern Warfare, which has been optimized to run at a stable 60 fps.
Posted on Reply
#70
Binge
Overclocking Surrealism
WarEagleAU: No one is mentioning the L1 and L2 caches on this. It's basically a GPU and CPU merge, it seems. I have to say, being an ATI/AMD fanboy, I'm impressed with the rumored spec sheet (even if it's not concrete). I won't be buying it, but it seems like a hell of a dream card.
That's because they remade the shader architecture as MIMD, which wouldn't do well fed straight from memory bandwidth alone; the MIMD design is optimized to work out of a pool of cache instead. Otherwise there would be some serious latency in shader processing.
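A hedged sketch of the practical difference (the kernels and names below are invented, and neither is Fermi-specific code): on a shared-memory-only design you stage reused data into per-block __shared__ storage by hand, whereas with a real L1/L2 behind the cores the overlapping reads can simply go to global memory and be picked up from cache.

```cpp
// Launch both kernels with 256-thread blocks.

// Explicitly managed shared memory: neighbouring elements are reused by
// adjacent threads, so the block stages a tile (plus halo) by hand.
__global__ void blur_shared(const float* in, float* out, int n)
{
    __shared__ float tile[256 + 2];
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int t = threadIdx.x + 1;
    if (i < n) tile[t] = in[i];
    if (threadIdx.x == 0 && i > 0)                  tile[0]     = in[i - 1];
    if (threadIdx.x == blockDim.x - 1 && i + 1 < n) tile[t + 1] = in[i + 1];
    __syncthreads();                                // wait for the whole tile
    if (i > 0 && i + 1 < n)
        out[i] = (tile[t - 1] + tile[t] + tile[t + 1]) / 3.0f;
}

// Cache-backed version: just read global memory; the overlapping neighbour
// reads are served out of L1/L2 instead of a hand-managed tile.
__global__ void blur_cached(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i > 0 && i + 1 < n)
        out[i] = (in[i - 1] + in[i] + in[i + 1]) / 3.0f;
}
```

On a cache-less part the second version would hammer DRAM for every repeated read; with an L1/L2 hierarchy it becomes a reasonable way to write the kernel, which is the latency point being made above.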
Posted on Reply
#71
trt740
this will be a monster
Posted on Reply
#72
ZoneDymo
gumpty: Seriously man, whether you like it or not, you making a purchase of anything is part of how you run your business. Excluding half your potential business partners because of rumours about bad behaviour is not the smartest move. Cut off your nose to spite your face?
Rumours?
Is it a rumour that Nvidia bought Ageia and left all Ageia card owners for dead?
Is it a rumour that Nvidia made sure that an ATI + Nvidia (for PhysX) configuration is no longer do-able?
Is it a rumour that AA does not work on ATI cards in Batman AA even though it is able to do so?

The only rumour is that Nvidia made Ubisoft remove DX10.1 support from AC because AC was running too awesome on it with ATI cards, of course (Nvidia does not support DX10.1).
Posted on Reply
#73
Binge
Overclocking Surrealism
ZoneDymo: Rumours?
Is it a rumour that Nvidia bought Ageia and left all Ageia card owners for dead?
Is it a rumour that Nvidia made sure that an ATI + Nvidia (for PhysX) configuration is no longer do-able?
Is it a rumour that AA does not work on ATI cards in Batman AA even though it is able to do so?

The only rumour is that Nvidia made Ubisoft remove DX10.1 support from AC because AC was running too awesome on it with ATI cards, of course (Nvidia does not support DX10.1).
Still not monopolizing the market, so your whining won't do anything. This is not the thread for GPU war conspiracy theories.
Posted on Reply
#74
happita
Binge: That's because they remade the shader architecture as MIMD, which wouldn't do well fed straight from memory bandwidth alone; the MIMD design is optimized to work out of a pool of cache instead. Otherwise there would be some serious latency in shader processing.
And I was wondering why I was seeing L1 and L2 caches on an upcoming video card. I thought I was going crazy hahaha.

This will be VERY interesting, if the price is right, I may skip the 5k and go GT300 :rockout:
Posted on Reply
#75
eidairaman1
The Exiled Airman
Since they call this card "reactor", I guess it will be as hot as a nuclear reactor under meltdown conditions, aka the GF 5800.
Posted on Reply