
NVIDIA GT300 "Fermi" Detailed

Erocker, I think his "bitching" response was directed at a portion of the community as a whole, not at you specifically.

Yeah that's true. Sorry Erocker if it seemed directed at you.

It's just that the subject is being brought up again every two posts and, really, it has already been explained by many members. I don't think it's crazy to believe in the innocence of thousands of developers (individuals) who, IMO, are being insulted by the people who presume guilt. TBH I get angry because of that.
 
WOW, 512 cores!!! Without a doubt this will be better than a 5870, but I'm thinking the power draw will be higher than the 5870's too.
 
All I can say is it's hurting their sales: with the TWIMTBP badge on it, the game isn't getting the expected sales from AMD users. With that badge they get paid a little here and there. Business practices like this make me glad I switched to ATI back in 2002.

You realize that ATi had a similar program to TWIMTBP back in 2002, right? Oddly enough*, I remember seeing it all over the place in games like Unreal Tournament and Source-based games**, both of which ran better on ATi hardware due to ATi's aid in development. Surprising that you would actually switch to them when they were in the middle of doing exactly what you are complaining about now...

*I say oddly enough, because the Batman game that has caused so much uproar recently is actually based on an Unreal Engine.
**Valve removed the ATi branding once ATi stopped working with them (and with most other developers) to improve games before release.

NV is just gonna recycle old G92/GT200 parts to fill that gap!!!

That worked wonderfully in the past: it probably allowed nVidia to compete better, eliminated consumer confusion, and lowered prices for the consumer, so I can't see how it was really a bad thing.

However, this likely won't work with the upcoming generation of cards, as DX11 support will be required.
 
Who says that in 2009 a GPU is a gaming-only device? It never has been anyway. Nvidia has the GeForce, Tesla and Quadro brands, and all of them are based on the same chip. As long as the graphics card is competitive, do you care about anything else? You shouldn't. And apart from that, the ability to run C++ code can help in all kinds of applications. Have you never encoded a video? Wouldn't you like to be able to do it 20x faster?

Pros, on the other hand, have 1000s and every reason to jump into something like this.

I'm all for lightning-fast encode times, but please explain how running C and Fortran would help that, since that part is a bit foggy for me. If nVidia can squeeze in extra "features" and keep prices and performance "competitive", all is peachy. We'll need to wait and see if that's the case, though.
 
Their margin means nothing to me, and everybody stop with the bus width: GDDR5 allows TWICE the bandwidth per pin compared to GDDR3, so a 384-bit bus is like a 768-bit one.
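Rough numbers to illustrate, assuming ballpark per-pin rates since final clocks aren't known (roughly 5870-class GDDR5 at ~4.8 Gbps/pin versus GTX 285-class GDDR3 at ~2.5 Gbps/pin):

384-bit GDDR5: 384 pins x 4.8 Gbps/pin / 8 bits per byte ≈ 230 GB/s
512-bit GDDR3: 512 pins x 2.5 Gbps/pin / 8 bits per byte ≈ 160 GB/s

So a narrower GDDR5 bus can still come out ahead of a much wider GDDR3 one; the exact gap depends entirely on the memory clocks nVidia ships.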


I just want to run games like GTA4 and newer ones at native res with eye candy to have an immersive experience, and unlike what a diehard blowhard NV-camp man said yesterday, you want that to get into the game; otherwise he can have an FX card and play on that.

I do that now with a GTX280....:wtf:

We gotta love paper launches and the wars starting over what somebody said... not actual proof and hard-launch benches.
 
Why does every topic have to be derailed in some way by fanboys or general bullshit that has nothing to do with the subject at hand? :shadedshu



We can sit here all day and chat about how card A will beat card B and get all enraged about it... or we can wait until it's actually released and base our views on solid facts.

I know which way I'd prefer... but that said, this card does look to be a beast in the making; I just have my doubts about the way nVidia will choose to price it, as they often put a hefty price on 5% more "power".
 
I'm all for lightning-fast encode times, but please explain how running C and Fortran would help that, since that part is a bit foggy for me. If nVidia can squeeze in extra "features" and keep prices and performance "competitive", all is peachy. We'll need to wait and see if that's the case, though.

If the chip can truly run C code natively*, it means that a programmer doesn't have to do anything special to code a program to run on the GT300. They can just do it as they would to run it on the CPU. The only difference is that instead of assuming they have 4 cores available, they have to make their code suitable for running on 512. Previously it was as if, in order to write a book, you had to learn French, because that was what the GPU could understand; with GT300 you can write in English as you always have. A lot of, if not most, applications and games are programmed in C/C++, and Fortran is widely used in science and industry.

* I say that because it seems too good to be true TBH.
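To make the "write in English instead of French" idea concrete, here's a rough sketch (my own hypothetical example in today's CUDA C, nothing confirmed about GT300): the loop body stays plain C, and the only real change is telling the hardware to spread it across thousands of threads instead of 4 cores.

// vector_add.cu - hypothetical illustration: plain C logic spread over GPU threads
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

__global__ void add_arrays(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // this thread's element
    if (i < n)
        c[i] = a[i] + b[i];                          // the loop body, unchanged C
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *ha = (float *)malloc(bytes), *hb = (float *)malloc(bytes), *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // 4096 blocks of 256 threads instead of a loop split across 4 CPU cores.
    add_arrays<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", hc[0]);   // expect 3.0
    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}

If nVidia's claims pan out, presumably even that small amount of GPU-specific boilerplate gets closer to plain C, but that remains to be seen.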
 
If that's all true, then the single-GPU GTX 380(?) will even beat the crap out of the 5870 X2 and still won't be tired after doing so... damn.
 
I don't think it will be able to run C natively. I think this can only be done on x86 and RISC.
Anyway, this sure sounds like a true powerhouse, and it really underlines that Nvidia wants to shatter the idea of the graphics card as just a means of entertainment.

I think that a lot of businesses and scientific laboratories are ready for a massively parallel alternative to the CPU. And once they adopt it, gamers and, more importantly, users are bound to follow.

I mean come on, think about it for a second. Is there any better pick-up line than: "Hey baby, wanna come down to my crib and check out my quadruple-pumped super computer?"
 
If I had it, she would :roll:
 
A monster... but boy, it won't be cheap with that transistor count plus a more expensive memory system.

Likely too hot for my taste.

Hopefully it will lower 5850 prices a bit though; I might pick one of those up... or even wait for Juniper XT.
 
I hate to say it but I sure hope they suck at folding. I hope they provide no real gain over the current NVIDIA offering in terms of daily points produced.

If it turns out they do rock the folding world and see great gains, I'll probably start scheming ways to change out 6 GTX 260 216s for 6 of these and a much lighter wallet.
 
Yeah that's true. Sorry Erocker if it seemed directed at you.

It's just that the subject is being brought up again every two posts and, really, it has already been explained by many members. I don't think it's crazy to believe in the innocence of thousands of developers (individuals) who, IMO, are being insulted by the people who presume guilt. TBH I get angry because of that.

Heh, and I keep mixing up this thread with the damn Batman thread. I should quit bitching as I'm pretty content with my current setup anyways.

Cheers. :toast:
 
Heh, and I keep mixing up this thread with the damn Batman thread. I should quit bitching as I'm pretty content with my current setup anyways.

Cheers. :toast:

It's ok, we all know you live in the batcave, which has a bitchin' setup ;)
 
I don't think it will be able to run C natively. I think this can only be done on x86 and RISC.

That's exactly what they are saying the chip does.

Fermi architecture natively supports C [CUDA], C++, DirectCompute, DirectX 11, Fortran, OpenCL, OpenGL 3.1 and OpenGL 3.2. Now, you've read that correctly - Fermi comes with support for native execution of C++. For the first time in history, a GPU can run C++ code with no major issues or performance penalties, and when you add Fortran or C to that, it is easy to see that GPGPU-wise, nVidia did a huge job.

Implementing an ISA inside the GPU took a lot of bravery, and with the GT200 project over and done with, the time was right to launch a chip that would be as flexible as developers wanted, yet affordable.

Implementing an ISA inside the GPU: that's what they say, and an ISA is something too specific to be misinterpreted, IMO. Although there's no such thing yet as an ISA for C/C++ or Fortran, because they are compiled to run on x86 or PPC processors, it is true that many constructs in C/C++ map directly onto the x86 instruction set, and over time x86 has absorbed the most successful of them, making the x86 instruction set grow; by now it can probably be said that, for the basics, C/C++ = x86. I suppose it's the same with Fortran, but I don't know Fortran myself, so I can't speak to that.

All in all, what they are claiming is that they have implemented an ISA for those programming languages, so they are effectively claiming that for every core construct in C/C++ and Fortran there is an instruction in the GPU that can execute it. In a way they have completely bypassed the CPU, except for the first instruction that is required to move execution over to the GPU. Yes, Intel does have something to worry about.
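For what it's worth, you can already poke at the GPU-ISA side of this today: nvcc compiles ordinary C-looking functions down to PTX, NVIDIA's own instruction set, and the CPU's only involvement is launching the result. A tiny sketch of my own (current CUDA, nothing Fermi-specific, just the kernel):

// saxpy.cu - an ordinary-looking C function that ends up as GPU instructions
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];   // becomes GPU multiply-add instructions, not x86 ones
}

Run "nvcc -ptx saxpy.cu" and look at the generated saxpy.ptx: the multiply-add above shows up as PTX instructions rather than anything x86. How much further Fermi takes that natively is exactly the open question.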

I hate to say it but I sure hope they suck at folding. I hope they provide no real gain over the current NVIDIA offering in terms of daily points produced.

If it turns out they do rock the folding world and see great gains, I'll probably start scheming ways to change out 6 GTX 260 216s for 6 of these and a much lighter wallet.

If the above is true, they will certainly own in folding. Not only would they be much faster, but there wouldn't even be a need for a GPU client to begin with, just a couple of lines to make the CPU client run on the GPU. :eek:

Now that I think about it, it might mean that GT300 could be the only processor inside a gaming console too, but it would run normal code very slowly. The truth is that the CPU is still very much needed to run normal code, because GPUs don't have branch prediction (although I wouldn't bet a penny on that at this point, just in case) and that is needed. Then again, C and Fortran have conditional expressions as core constructs, so the ability to run them should be there, although at a high performance penalty compared to a CPU. A coder may take advantage of the raw power of the GPU and perform massive speculative execution, though.
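To put the conditionals point in concrete terms, a hypothetical kernel-only snippet (ordinary CUDA as it exists now, my own example): the if/else runs fine on the GPU, but when threads within the same 32-wide warp take different paths, the hardware executes the paths one after another, which is where the penalty versus a branch-predicting CPU comes from.

__global__ void classify(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // Threads in one warp that disagree on this test get serialized:
    // the warp runs the 'if' side first, then the 'else' side.
    if (in[i] > 0.0f)
        out[i] = sqrtf(in[i]);    // taken by some threads of the warp
    else
        out[i] = 0.0f;            // taken by the rest
}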

Sorry for the jargon and overall divagation. :o
 
And I bet my basement would sound like a bunch of harpies getting gang banged by a roving group of banshees with 6 GT300s added to my setups.
 
I'm going with ATI/AMD again whether it's much better or not, because of my mobo, but of course I'd like to see it at least be competitive to drive prices down.
 
Sorry, UT 99 was a TWIMTBP title, along with UT2K3/4 and UT3, so I don't see your point; the only thing I really saw was the oft-delayed HL2 having an ATI badge on it.

You realize that ATi had a similar program to TWIMTBP back in 2002, right? Oddly enough*, I remember seeing it all over the place in games like Unreal Tournament and Source-based games**, both of which ran better on ATi hardware due to ATi's aid in development. Surprising that you would actually switch to them when they were in the middle of doing exactly what you are complaining about now...

*I say oddly enough, because the Batman game that has caused so much uproar recently is actually based on an Unreal Engine.
**Valve removed the ATi branding once ATi stopped working with them (and with most other developers) to improve games before release.



That worked wonderfully in the past: it probably allowed nVidia to compete better, eliminated consumer confusion, and lowered prices for the consumer, so I can't see how it was really a bad thing.

However, this likely won't work with the upcoming generation of cards, as DX11 support will be required.
 
I dunno, the biggest problem I see with coding C for a non-x86 architecture is the complexity of the code. The SIMD/MIMD architecture of GPUs is closer to RISC, and from what I know it's much harder to write code for RISC than it is for x86, but once you have working code, the benefits can be enormous.

I'd love to see Nvidia's solution from a nerd's POV rather than anything else. If they really accomplished what they state here, that would render Larrabee useless and obsolete before it even comes out and create some serious competition in the HPC market.
 
Sorry, UT 99 was a TWIMTBP title, along with UT2K3/4 and UT3, so I don't see your point; the only thing I really saw was the oft-delayed HL2 having an ATI badge on it.

Maybe you're right, but I could have sworn UT 2K3 had GITG branding on it; maybe not though, it's been so long.

The main point still stands though, ATi had/has a similar program that did the exact same thing.
 
There will be some waiting...

"Then timing is just as valid, because while Fermi currently exists on paper, it's not a product yet. Fermi is late. Clock speeds, configurations and price points have yet to be finalized. NVIDIA just recently got working chips back and it's going to be at least two months before I see the first samples. Widespread availability won't be until at least Q1 2010.
I asked two people at NVIDIA why Fermi is late; NVIDIA's VP of Product Marketing, Ujesh Desai and NVIDIA's VP of GPU Engineering, Jonah Alben. Ujesh responded: because designing GPUs this big is "fucking hard"."

Source:
http://www.anandtech.com/video/showdoc.aspx?i=3651

Another informative article:
http://www.techreport.com/articles.x/17670
 
I do that now with a GTX280....:wtf:

We gotta love paper launches and the wars starting over what somebody said... not actual proof and hard-launch benches.

100 draw distance and all high settings? You are delusional, mistaken, or full of shit.

My 1 GB video card can't run it; it isn't the processing power required, it is the video memory, plain and simple.
 
For anyone who wants to watch webcasts of NVIDIA's GPU Tech Conference.

Linky

Apparently it's all in 3D this year. An interesting side effect is that the press can't get any decent shots of the slides they show. The question is whether or not it was intentional, to help keep people guessing.
 