# NVIDIA GT300 ''Fermi'' Detailed



## btarunr (Sep 30, 2009)

NVIDIA's upcoming flagship graphics processor goes by a lot of codenames. While some call it the GF100, others GT300 (based on the present nomenclature), what is certain is that NVIDIA has given the architecture the internal name "Fermi", after the Italian physicist Enrico Fermi, who built the first nuclear reactor. It doesn't come as a surprise, then, that according to some sources the board itself is codenamed "reactor". 

Based on information gathered so far about GT300/Fermi, here's what's packed into it:

- Transistor count of over 3 billion
- Built on the 40 nm TSMC process
- 512 shader processors (which NVIDIA may refer to as "CUDA cores")
- 32 cores per core cluster
- 384-bit GDDR5 memory interface
- 1 MB L1 cache memory, 768 KB L2 unified cache memory
- Up to 6 GB of total memory, 1.5 GB can be expected for the consumer graphics variant
- Half-speed IEEE 754 double-precision floating point
- Native support for execution of C (CUDA), C++, and Fortran; support for DirectCompute 11, DirectX 11, OpenGL 3.1, and OpenCL
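For a rough sense of scale, the headline numbers above can be turned into back-of-the-envelope peak figures. The 384-bit bus, 512 cores, and half-speed double precision come from the list; the 4.0 Gbps GDDR5 data rate and 1.5 GHz shader clock below are illustrative guesses, not confirmed specs:

```python
# Back-of-the-envelope figures from the rumored specs above.
# ASSUMPTIONS: the 4.0 Gbps effective GDDR5 rate and 1.5 GHz shader
# clock are illustrative guesses, not confirmed numbers.

def mem_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s: bus width in bytes x per-pin rate."""
    return bus_width_bits / 8 * data_rate_gbps

def peak_gflops(cores, clock_ghz, flops_per_cycle=2):
    """Peak single-precision GFLOPS, assuming one fused multiply-add
    (2 flops) per core per cycle."""
    return cores * clock_ghz * flops_per_cycle

bw = mem_bandwidth_gbs(384, 4.0)   # 48 bytes/transfer * 4 GT/s = 192.0 GB/s
sp = peak_gflops(512, 1.5)         # 1536.0 GFLOPS single precision
dp = sp / 2                        # 768.0 GFLOPS, "half-speed" IEEE 754 DP
print(f"{bw:.0f} GB/s, {sp:.0f} SP GFLOPS, {dp:.0f} DP GFLOPS")
```

Under those assumed clocks the sketch lands in the same ballpark as the thread's own "~2x GT200 double precision" talk further down, but the real figures depend entirely on final shipping clocks.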


*Update:* Here's an image added from the ongoing public webcast of the GPU Technology Conference, of a graphics card based on the Fermi architecture.





*View at TechPowerUp Main Site*


----------



## laszlo (Sep 30, 2009)

i see a 5870 killer

now all depend on pricing


----------



## DaedalusHelios (Sep 30, 2009)

laszlo said:


> i see a 5870 killer
> 
> now all depend on pricing







Pricing will be a huge factor. Let's hope they wise up and go with a lower MSRP than the current trend.


----------



## pr0n Inspector (Sep 30, 2009)

Awaits for nerdgasm and nerdrage.


----------



## human_error (Sep 30, 2009)

a billion more transistors than 5870 - this thing is going to be huge! (size wise, although i'd wager performance will be very nice too).

Now we just need a decent price war between ati and nvidia on the dx11 cards and i'll be happy.


----------



## Howard (Sep 30, 2009)

ohhhh, crown~ 
ready to wear it!!!


----------



## $ReaPeR$ (Sep 30, 2009)

this looks extremely promising


----------



## FatForester (Sep 30, 2009)

This looks really interesting; hopefully the yields are good enough that they won't charge an arm and a leg for it. I have to wonder if "reactor" relates to its thermal output? 3 billion transistors on a 40 nm process has to get pretty warm.


----------



## happita (Sep 30, 2009)

btarunr said:


> - 32 cores per core cluster
> - Up to 6 GB of total memory, 1.5 GB can be expected for the consumer graphics variant



What the?! 

The price might be decently competitive; I don't see a 512-bit memory interface like on their previous cards, which jacked the price up.


----------



## [I.R.A]_FBi (Sep 30, 2009)

rassclaat


----------



## FatForester (Sep 30, 2009)

happita said:


> What the?!
> 
> The price might be decently competitive; I don't see a 512-bit memory interface like on their previous cards, which jacked the price up.



I'd say the 1.5 GB variant is for regular end-users like us, while the 6 GB is for workstations that could use the extra memory for CAD or other really heavy tasks.


----------



## buggalugs (Sep 30, 2009)

laszlo said:


> i see a 5870 killer
> 
> now all depend on pricing


 

 Only for idiots who think 10 extra fps is worth $300 more.


 The 4870/4890 have done very well while there are faster Nvidia cards.


----------



## qubit (Sep 30, 2009)

Shame that bus isn't a full 512 bits wide. They've increased the bandwidth with GDDR5, yet taken some back with a narrower bus. Also, 384 bits means the amount of RAM ends up at an odd size, like on the 8800 GTX, when it would really be best at a power of two.


----------



## DaedalusHelios (Sep 30, 2009)

buggalugs said:


> Only for idiots who think 10 extra fps is worth $300 more.
> 
> 
> The 4870/4890 have done very well while there are faster Nvidia cards.




 "Only idiots" is reserved for people insulting a possibly better product before seeing the official pricing.

I am sorry, but that's just what came to mind. 

Don't get pissed over a paper launch. The GPU isn't on shelves or priced yet, so it's still useless to everybody.



qubit said:


> Shame that bus isn't a full 512 bits wide. They've increased the bandwidth with GDDR5, yet taken some back with a narrower bus. Also, 384 bits means the amount of RAM ends up at an odd size, like on the 8800 GTX, when it would really be best at a power of two.



Well, since it's a different GPU architecture, we don't know how well it will scale with higher-bandwidth memory yet. No benches mean we're still in the dark.


----------



## gumpty (Sep 30, 2009)

buggalugs said:


> Only for idiots who think 10 extra fps is worth $300 more.
> 
> 
> The 4870/4890 have done very well while there are faster Nvidia cards.



QFT.

Not going to notice much difference going from 100FPS to 200FPS either. The pricing is the key. If it beats the 5870 and they can get the price close enough to ATI's offerings, then we all win with price warz. If it beats the 5870 but is too expensive ... it will mean nothing.


----------



## Atom_Anti (Sep 30, 2009)

gumpty said:


> QFT.
> 
> Not going to notice much difference going from 100FPS to 200FPS either. The pricing is the key. If it beats the 5870 and they can get the price close enough to ATI's offerings, then we all win with price warz. If it beats the 5870 but is too expensive ... it will mean nothing.



I think you are right!


----------



## csendesmark (Sep 30, 2009)

laszlo said:


> i see a 5870 killer
> 
> now all depend on pricing



A bigger transistor count means lower yield + fewer chips per wafer = a more expensive card
AMD can drop prices


----------



## laszlo (Sep 30, 2009)

buggalugs said:


> Only for idiots who think 10 extra fps is worth $300 more.
> 
> 
> The 4870/4890 have done very well while there are faster Nvidia cards.



as i estimate it'll be more than 10 fps, i say between 50-100 over the 5870

so the "idiots" who want a future-proof card (better than the 5870) will buy it if the price isn't much higher than the 5870; i expect a price around $500 or less; we're no longer in the dark ages when nvidia & ati, by common agreement, overpriced the high-end cards just to charge whatever they wanted


----------



## btarunr (Sep 30, 2009)

csendesmark said:


> A bigger transistor count means lower yield + fewer chips per wafer = a more expensive card
> AMD can drop prices



Uh no, the equation will be similar to that between RV790/RV770 and GT200. Despite everything, GTX 200 series cards have been affordable.


----------



## Benetanegia (Sep 30, 2009)

Half Speed IEEE 754 Double Precision floating point is just sick!!  (I had to say it here too.)



csendesmark said:


> A bigger transistor count means lower yield + fewer chips per wafer = a more expensive card
> AMD can drop prices



Yields don't matter in this case because Nvidia is paying per chip and not per wafer, and yields are good; it's been confirmed by Nvidia. All the news about bad yields was false, spread by AMD's competitive analysis team, whatever that is.

Anyway, although that formula is true in the technical sense, it doesn't take a very important thing into account: we don't know how much each company pays per wafer. Nvidia makes twice as many chips (they've been selling twice as much), so as happens with every other inter-company volume deal in the world:

more products = less $ per product
more wafers = less $ per wafer


----------



## ZoneDymo (Sep 30, 2009)

Yeah nice, don't care though; with their way of doing business, I'm going ATI.


----------



## laszlo (Sep 30, 2009)

ZoneDymo said:


> Yeah nice, dont care though, with there way of doing business, im going ATI.




you should care because this will force ati to drop prices and i bet you like lower prices no?


----------



## DaedalusHelios (Sep 30, 2009)

ZoneDymo said:


> Yeah nice, dont care though, with there way of doing business, im going ATI.



I have heard their PCB wafers are made of ground up babies. Only a sicko would buy Nvidia cards. 

I buy both so I guess the jury is out on me.


----------



## naram-sin (Sep 30, 2009)

DaedalusHelios said:


> "Only idiots" is reserved for people insulting a possibly better product before seeing the official pricing.
> 
> I am sorry, but thats just what came to mind.
> 
> ...



Sorry for the long quote. Regarding the paper launch, I think you're right. It seems to me, considering our paper-launch experience over the last couple of years, that this one is extremely thin, still in a kind of _alpha stage_, and I think it is yet to be heavily revised at least 2-3 times before we can do some actual scaling against ATI's solution and get the first realistically expected performance figures. Because all of the stated is kind of sci-fi to me. As I said, for now.

Btw, it could all be a piece of... hype to try and stop a couple of percent of _extreme enthusiasts_ from buying 58xx cards and motivate them to wait for GF100/GT300/Fermi/Reactor or whatever... (btw, doesn't _Reactor_ automatically make you think of high temperatures?)


----------



## gumpty (Sep 30, 2009)

ZoneDymo said:


> Yeah nice, dont care though, with there way of doing business, im going ATI.



Seriously man, whether you like it or not, you making a purchase of anything is part of how you run _your_ business. Excluding half your potential business partners because of rumours about bad behaviour is not the smartest move. Cut off your nose to spite your face?


----------



## DaedalusHelios (Sep 30, 2009)

naram-sin said:


> Sorry for the long quote. Regarding the paper launch, I think you're right. It seems to me, considering our paper-launch experience over the last couple of years, that this one is extremely thin, still in a kind of _alpha stage_, and I think it is yet to be heavily revised at least 2-3 times before we can do some actual scaling against ATI's solution and get the first realistically expected performance figures. Because all of the stated is kind of sci-fi to me. As I said, for now.
> 
> Btw, it could all be a piece of... hype to try and stop a couple of percent of _extreme enthusiasts_ from buying 58xx cards and motivate them to wait for GF100/GT300/Fermi/Reactor or whatever... (btw, doesn't *Reactor* automatically make you think of high temperatures?)



If they label it that way on the final product, customs will have a field day.


----------



## gumpty (Sep 30, 2009)

naram-sin said:


> Btw, it could all be a piece of... hype to try and halt a couple of percentages of _extreme enthusiasts_ in buying of 58xx and motivate them in waiting for GF100/GT300/Fermi/Reactor or whatever...



Totally. Unless they are made of money, the smart move is to wait for nvidia's offering to the graphics gods, see how the prices react, then make your choice. Gives you more time to save too.


----------



## naram-sin (Sep 30, 2009)

DaedalusHelios said:


> If they label it that way on the final product, customs will have a field day.



I see Jack Bauer on the cover of this card's retail box.


----------



## Fitseries3 (Sep 30, 2009)

looks promising. now if they could just release it on my birthday and send me a few as a gift i'd be 5x as excited.

it seems they skipped a dualcore gpu and went straight to 32cores haha


----------



## mdm-adph (Sep 30, 2009)

Is there anybody even doubting that Nvidia's next chip is going to be fast?


----------



## btarunr (Sep 30, 2009)

It seems Fermi will be demonstrated today at 1:00 PM (Pacific time) / 10:00 PM CEST at the GTC keynote.


----------



## gumpty (Sep 30, 2009)

mdm-adph said:


> Is there anybody even doubting that Nvidia's next chip is going to be fast?



I hope they are faster than the 58## series by the same order of magnitude that the GT200 were faster than the 48## series. And that the prices are similar too. It was happy days for the consumer.


----------



## springs113 (Sep 30, 2009)

DaedalusHelios said:


> If they label it that way on the final product, *customs* will have a field day.



that is so true...as for those worrying about ATI, i do believe that they have about a $50 price drop  for the 5870 and about $20-$30 drop for the 5850.  ATI definitely has NVIDIA where they want them when it comes to pricing as we all know that the GT300/Fermi will be more costly than a 5870 while probably only adding anywhere between 7-15% in performance.  ATI is better positioned to fight a price war...

just for fun
plus there are 2 rumors floating around... 1 - the rumored Hemlock being priced at $500 suggests it's 2 5850s and not 2 5870s... and 2 - not so much a rumor but more of a hypothesis... the 5890.


----------



## [I.R.A]_FBi (Sep 30, 2009)

btarunr said:


> It seems Fermi will be demonstrated today at 1:00 PM (Pacific time) / 10:00 PM CEST at the GTC keynote.



Link to stream plz


----------



## mdm-adph (Sep 30, 2009)

gumpty said:


> I hope they are faster than the 58## series by the same order of magnitude that the GT200 were faster than the 48## series. And that the prices are similar too. It was happy days for the consumer.



True -- while I'll admit I'm currently an ATI fan, I've been quite happy with the way things have been during the GTX 200 / HD 4800 days.  Everything seems pretty balanced, cost wise and performance wise between both brands.  

...as long as one stays away from those "TWIMTBP" games.


----------



## gumpty (Sep 30, 2009)

springs113 said:


> just for fun
> plus there's 2 rumors floating around...1-the rumored hemlock being priced at $500 suggest its 2 5850s and not 2 5870s...and 2-not so much a rumor but more of a hypothesis... the 5890.



I'm sure I read somewhere that they are lining up both a 5850X2 & 5870X2. If it is true hopefully the 5850X2 will be properly supported and not just a Sapphire plaything.


----------



## wiak (Sep 30, 2009)

3 billion transistors? Basically it's a lot bigger than Cypress, so it will cost more to make, and will be expensive :O
It's RV770 vs G200 all over again


----------



## gumpty (Sep 30, 2009)

mdm-adph said:


> ...as long as one stays away from those "TWIMTBP" games.



Right you are sir!


----------



## naram-sin (Sep 30, 2009)

gumpty said:


> Totally. Unless they are made of money, the smart move is to wait for nvidia's offering to the graphics gods, see how the prices react, then make your choice. Gives you more time to save too.



Well, I guess that would be the MO of reasonable people. Not _enthusiasts_.  But, instead of being reasonable, i went _enthusiastically_ with the HD4850 and had to wait for a decent aftermarket cooler to be able to play on it. I guess I'll wait this time for some non-reference variant of the HD58xx. Or maybe even Reactor, who knows?! But the HD4850 can still throw some punches. 

Btw, it seems to me that waiting, as you said, is going to prove useful for our budgets, because we are experiencing an accelerated shortening of the time between two generations of GPU architecture. This seems to go for CPUs as well. Of course, not counting the re-branding of old GPUs as new mainstream and lower-mainstream options.


----------



## Steevo (Sep 30, 2009)

If this card is much superior to the 2GB 5870 or 5890 for the money, I will try out the green camp, but they need to clean up their act.


----------



## wiak (Sep 30, 2009)

Steevo said:


> If this card is much superior to the 2GB 5870 or 5890 for the money I will try out the green camp, and they need to clean up their act.


but don't forget, you have to PAY 2 times more for it. Remember GT200 cards when they were released?
btw, ATI can make 2 times more chips per wafer compared to NVIDIA


----------



## Breathless (Sep 30, 2009)

they need to skip the 384-bit bus bologna and give us 512-bit on every high-end card from now on. We would be willing to pay the extra $25 or so that it would cost.... Gimme a break


----------



## Steevo (Sep 30, 2009)

Their margin means nothing to me, and everybody stop with the bus width: GDDR5 allows TWICE the bandwidth per pin compared to GDDR3, so a 384-bit bus is like a 768-bit one.
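The per-pin arithmetic behind that claim, as a quick sanity check (the 1.0 GHz memory clock below is just an illustrative figure): GDDR5 effectively moves four bits per pin per memory clock where GDDR3 moves two, so a 384-bit GDDR5 bus peaks at the same bandwidth as a 768-bit GDDR3 bus.

```python
def bandwidth_gbs(bus_bits, mem_clock_ghz, transfers_per_clock):
    """Peak bandwidth in GB/s = (bus width in bytes) x clock x transfers/clock."""
    return bus_bits / 8 * mem_clock_ghz * transfers_per_clock

# Same (illustrative) 1.0 GHz memory clock for both memory types:
gddr3_768bit = bandwidth_gbs(768, 1.0, 2)   # double data rate
gddr5_384bit = bandwidth_gbs(384, 1.0, 4)   # effectively quad-pumped
assert gddr3_768bit == gddr5_384bit == 192.0
```

In practice shipping GDDR5 also clocks higher than GDDR3 did, which only widens the gap further in GDDR5's favor.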


I just want to run games like GTA4 and newer ones at native res with the eye candy, for an immersive experience. And unlike what a diehard blowhard NV camp man said yesterday, you want that to get into the game; otherwise he can have an FX card and play on that.


----------



## laszlo (Sep 30, 2009)

Breathless said:


> they need to skip the 384-bit bus bologna crap and give us 512-bit on every high end card from now on. We would be willing to pay the extra $25 or so dollars that it would cost.... Gimme a break



did you fall out of bed or something... 384-bit with GDDR5 is more than enough


----------



## Benetanegia (Sep 30, 2009)

mdm-adph said:


> Is there anybody even doubting that Nvidia's next chip is going to be fast?



No, but every bit of info points to a greater difference than between GT200 and RV770, while production costs are lower than GT200's. That means heavy competition is imminent.


----------



## W1zzard (Sep 30, 2009)

Breathless said:


> they need to skip the 384-bit bus bologna crap and give us 512-bit on every high end card from now on. We would be willing to pay the extra $25 or so dollars that it would cost.... Gimme a break



you will pay 25$ for nothing? send it to funds@techpowerup.com


----------



## springs113 (Sep 30, 2009)

think i will wait myself... looking to sell my 2 4850s with the scythe musashi  coolers on both and or one of my 4850s and my 8800 gt... i need the ati integrated hd sound.  

i need to upgrade my q6600 to the i860 msi p55 or equivalent and my current top dog p2 955 with a 5870...so the waiting game for the best bang for my money is the only option really.


----------



## laszlo (Sep 30, 2009)

springs113 said:


> so the waiting game for the best bang for my money is the only option really.



smart 

after they release it i hope we all will be happy, nvidiots&atiidiots


----------



## EarlZ (Sep 30, 2009)

Likely this would be priced similar to the 58xx series, hopefully its cheaper and performs better in all scenarios.


----------



## cauby (Sep 30, 2009)

Finally something to talk about other than Batman...
This thing will definitely be faster than the 5800 (except for the 5800X2, if something like that really comes out), but I'm still waiting for the actual release to confirm it. After all, didn't an early "bench" of the 5870 say it would be up to 95% faster than the GTX 295?

Yeah,right...


----------



## springs113 (Sep 30, 2009)

you guys and your misquotes about the 5870 being 95% faster than the GTX 295... you guys gotta really be ignorant... relatively speaking, why would a company do that when they can milk us for our money while squeezing out a marginal percentage improvement biannually? Not trying to be disrespectful, but come on, use your head... we knew where the performance was gonna be, and for a single card to cost less than a dual card and perform just as well, sometimes better, sometimes worse... that is just awesome for everyone...

I remember when the GTX 285 was, what, $549... shit, the 5850 can beat it for just about half that price. Better yet, look at the 8800 Ultra's price point... these new cards thrash that damn card easily and cost way, way less. Another example: I remember when the radeon 9800xtx came out; I was looking to build a PC at that time and I remember seeing $500+ on newegg


----------



## kid41212003 (Sep 30, 2009)

I'm expecting the most high-end single GPU (GTX 390) to be priced at $449.

Two models lower will be the GTX 380 at $359 (faster than the HD5870) and the GTX 360 at $299 (= HD5870), which will push the current HD5870 to $295 and the HD5850 to $245.

The GTS model will be as fast as or (a bit) faster than the GTX 285, with DX11 support, and will be priced at $249, followed by a GT at $200. 

And the GPUx2 version, which uses 2x GTX 380 GPUs and becomes the HD5870X2 killer, will likely be priced around $649.

Based on baseless sources.


----------



## Zubasa (Sep 30, 2009)

EarlZ said:


> Likely this would be priced similar to the 58xx series, hopefully its cheaper and performs better in all scenarios.


You can only hope.
nVidia never goes the C/P route, they always push out the $600 monster and you either buy it or you don't.
After all, if you have the more powerful product, why sell it cheaper?


----------



## LaidLawJones (Sep 30, 2009)

I agree with the waiting game. Having bleeding edge may be cool for bragging rights, but sometimes you end up with stuff like a sapphire 580 pure MB, still waiting for RMA, and a nice collection of 3870's.

I am going to wait until summer for a new build. Prices will have settled,  there will be a far larger assortment of cards, and we will see what games/programs are out and able to take advantage of hardware.

I will be scheduling my lunch break for 13:00 Pacific.


----------



## devguy (Sep 30, 2009)

Oh good.  It is super exciting that I may soon have the opportunity to GPU hardware accelerate the thousands of Fortran programs I've been writing lately.  I even heard that the new version of Photoshop will be written in Fortran!


----------



## adrianx (Sep 30, 2009)

this looks like a CPU; it also has native support for C++ and Fortran... this sounds like a CPU

also... very close to the AMD Fusion CPU

http://en.wikipedia.org/wiki/AMD_Fusion


----------



## newtekie1 (Sep 30, 2009)

Seems like a monster.  I can almost guarantee the highest end offering will be priced through the roof.

However, there will be cut-down variants, just like in previous generations. These are the SKUs I expect to be competitive in both price and performance with ATi's parts.

Judging by the original figures, I expect mainstream parts to look something like:

- 352 or 320 shaders
- 320-bit or 256-bit memory bus
- 1.2 GB or 1 GB GDDR5


----------



## Benetanegia (Sep 30, 2009)

kid41212003 said:


> I'm expecting the most high-end single GPU to be price at (GTX390) $449
> 
> 2 models lower with be at (GTX 380) $359 (faster than HD5870), and (GTX360) $299 (= HD5870), which will push the current HD5870 to $295 and HD 5850 to $245.
> 
> ...



Not so baseless IMO. 

http://forums.techpowerup.com/showpost.php?p=1573106&postcount=131

Although the info about memory in that chart is in direct conflict with the one in the OP, I'm still very inclined to believe the rest. It hints at 4 models being made, and you are not too far off. I also encourage you to join that thread; we've discussed price there too, with similar conclusions. 

@newtekie

http://forums.techpowerup.com/showpost.php?p=1573733&postcount=143 - That's what I think about the possible versions based on the TechARP chart (the other link above).

For nerds:
Back when MIMD was announced, it was also said the design would be much more modular than GT200. That means the ability to disable units is much improved, which makes the creation of more models more feasible. 40 nm yields are not the best in the world, and having 4 models with decreasing numbers of clusters can improve effective yields greatly.
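A rough sketch of why per-cluster disabling helps, under a toy binomial model: if each of the 16 rumored 32-core clusters fails independently, allowing a couple of clusters to be fused off makes a much larger fraction of dies sellable. The 10% per-cluster defect probability is an arbitrary illustration, not a real yield figure.

```python
from math import comb

def sellable_fraction(clusters, max_disabled, p_bad):
    """P(at most max_disabled clusters defective), with clusters failing
    independently with probability p_bad (binomial model)."""
    return sum(comb(clusters, k) * p_bad ** k * (1 - p_bad) ** (clusters - k)
               for k in range(max_disabled + 1))

# 16 clusters x 32 cores = 512 cores; p_bad = 0.10 is an arbitrary example.
full_part    = sellable_fraction(16, 0, 0.10)   # every cluster must work
salvage_part = sellable_fraction(16, 2, 0.10)   # up to 2 clusters fused off

print(f"{full_part:.0%} of dies sellable as full parts")
print(f"{salvage_part:.0%} sellable if 2 clusters may be disabled")
```

With these toy numbers the salvage SKU roughly quadruples the fraction of usable dies, which is why a lineup of progressively cut-down models makes a big die economically viable.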


----------



## ToTTenTranz (Sep 30, 2009)

laszlo said:


> i see a 5870 killer
> 
> now all depend on pricing



But not a HD5870X2 killer, which will be its competitor, price-wise.


Furthermore, we still don't know how fast the HD5890 will be, which should somehow address the memory bandwidth bottleneck of the HD5870.


----------



## Benetanegia (Sep 30, 2009)

ToTTenTranz said:


> But not a HD5870X2 killer, which will be its competitor, price-wise.



We don't know. Rumors have said the X2 will launch at $550-600. The GTX 380 will not launch at that price unless it's much, much faster, and it will have 2 lower versions that are faster than or compete with the HD5870. Forget about the GTX 2xx launch already; those prices were based on the pricing strategy of the past. GTX 3xx production costs will be much lower than GTX 2xx cards' were at launch, and significantly lower than a dual card's. Not to mention that Nvidia will have a dual card too.



> Furthermore, we still don't know how fast the HD5890 will be, which should somehow *address the memory bandwidth bottleneck* of the HD5870.



How? What is what I missed?


----------



## wiak (Sep 30, 2009)

buggalugs said:


> Only for idiots who think 10 extra fps is worth $300 more.
> 
> 
> The 4870/4890 have done very well while there are faster Nvidia cards.


i know, and you also have to consider that two 5870s in crossfire are bottlenecked by a Core i7 965 @ 3.7ghz in some games
http://www.guru3d.com/article/radeon-hd-5870-crossfirex-test-review/9

and that most games except crysis are crappy console ports


----------



## Animalpak (Sep 30, 2009)

The specifications of this new GPU are very promising; I look forward to the announcement of the upcoming dual-GPU card from NVIDIA.

Dual-GPU cards still have a long life in the market; if ATI has announced its X2 and we have seen the pictures, it means nvidia will do the same. 

They have always been very powerful and less expensive than two boards mounted in two physical PCI-E slots.


----------



## leonard_222003 (Sep 30, 2009)

Although i hate Nvidia for what it does to games, i have to say i'm impressed.
Still, until i see it i won't take it as "the beast"; we have to wait and see what it can do, not only in games but other stuff too.
Another thing: all that C++, Fortran... is this what DX11 should be and what the ATI 5870 can do too, or is it exclusive to the GT300 chip?
I'm asking this because it is a big thing; if programmers could easily use a 3-billion-transistor GPU, then the CPU would be insignificant in some tasks. Intel should start to feel threatened; AMD too, but they are too small to be bothered by this and they have a GPU too.


----------



## VanguardGX (Sep 30, 2009)

[I.R.A]_FBi said:


> rassclaat



My words exactly lol!!! This thing is gonna be a number crunching beast!! Hope it can still play games


----------



## Animalpak (Sep 30, 2009)

wiak said:


> and that most games exept crysis are crappy console ports





This is not true; games are built from their first release for every platform, and for the PC version you have more opportunities for further improvements in graphics and stability.

PC graphics are better; take for example Assassin's Creed, which on the PC is much better than on consoles, Batman: Arkham Asylum, Wolfenstein, Mass Effect, the Call of Duty series, and many many others.

The PC is the primary platform for gaming; with the PC you can make games, with the consoles you can only play.


----------



## Binge (Sep 30, 2009)

buggalugs said:


> Only for idiots who think 10 extra fps is worth $300 more.
> 
> 
> The 4870/4890 have done very well while there are faster Nvidia cards.



Call me an idiot and chop off my genitals so as I can't reproduce any more retards.  I've got my wallet ready and waiting


----------



## mechtech (Sep 30, 2009)

Seems more like an F@H card or a GPGPU crunching card. I guess it will push good fps too, but that's kinda useless anyway, since LCD monitors can only push 60 fps, with the exception of the sammy 2233rz and viewsonic fuhzion.

I think the next upgrade for me will be the sammy 2233rz, then a 5850 after the price comes down 

Either way though, beastly specs!!


----------



## Benetanegia (Sep 30, 2009)

leonard_222003 said:


> Altough i hate Nvidia  for what it does to games i have to say i'm impressed.
> Still , until i see it i won't take it as "the beast" , we have to wait and see what it can do , not only games but other stuff too.
> Another thing , all that C++, fortran ... , is this what DX11 should be and what ATI 5870 can do too or is just exclusive to the GT300 chip.



AFAIK that means you can just #include C for CUDA and work with C++ like you would with any other library, and the same for Fortran. That's very good for some programmers indeed, but it only works on Nvidia hardware.

DX11 and OpenCL are used a little bit differently, but are not any less useful and on these AMD does it too.



> I'm asking this because it is a big thing , if the programers could easily use  a 3 billion trans. GPU the the CPU will be insignificant  in some tasks , Intel should start to feel threatened , AMD too but they are too small to be bothered by this and they have a GPU too  .



Indeed, that's already happening. The GPU will never replace the CPU; there will always be a CPU in the PC, but it will go from being powerful enough to run applications fast, to being fast enough to feed the GPU that runs the applications fast. This means the end for big overpriced CPUs. Read this:

http://wallstreetandtech.com/it-inf...ticleID=220200055&cid=nl_wallstreettech_daily

Instead of using a CPU farm with 8,000 processors, they used only 48 servers with 2 Tesla GPUs each.

And that Tesla is the old Tesla using the GT200 GPU, so that's saying a lot, since GT300 does double precision about 10 times faster. While GT200 did 1 TFlop single precision and 100 GFlops double precision, GT300 will do ~2.5 TFlops single precision and 1.25 TFlops double precision. So yeah, if your application is parallel enough, you can now say that Nvidia opened up a can of whoop-ass on Intel this time.
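Taking the rumored figures in this post at face value (none of them official), the implied double-precision jump works out like this:

```python
# All numbers are the rumored figures quoted in this thread, in GFLOPS.
gt200_sp, gt200_dp = 1000.0, 100.0   # GT200: ~1 TFLOP SP, ~100 GFLOPS DP
gt300_sp = 2500.0                    # rumored ~2.5 TFLOPS SP
gt300_dp = gt300_sp / 2              # "half speed" IEEE 754 double precision

print(gt300_dp)                      # 1250.0 GFLOPS DP
print(gt300_dp / gt200_dp)           # 12.5x GT200's double-precision rate
```

So the quoted numbers actually imply a bit more than the "10 times faster" headline, if they hold up.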



Animalpak said:


> This is not true; games are built from their first release for every platform, and for the PC version you have more opportunities for further improvements in graphics and stability.
> 
> PC graphics are better; take for example Assassin's Creed, which on the PC is much better than on consoles, Batman: Arkham Asylum, Wolfenstein, Mass Effect, the Call of Duty series, and many many others.
> 
> The PC is the primary platform for gaming; with the PC you can make games, with the consoles you can only play.



All those games are ports. PC graphics are better because they used better textures, you use higher resolution and you get proper AA and AF, but the game was coded for the consoles and then ported to PC.


----------



## WarEagleAU (Sep 30, 2009)

No one is mentioning the L1 and L2 caches on this. It's basically a GPU and CPU merged, it seems. I have to say, being an ATI/AMD fanboy, I'm impressed with a rumored spec sheet (if not concrete). I won't be buying it, but it seems like a hell of a dream card.


----------



## Animalpak (Sep 30, 2009)

I've always noticed that it is a benefit to have a graphics card capable of the highest possible FPS (whatever your monitor resolution).

Because in games, especially those with very large rooms and environments, the FPS tends to fall because of the greater pixel workload.  

So a graphics card that reaches 200 fps will drop to 100 fps and you will not notice any slowdown, even with explosions and fast movements. While with a card that starts lower, at 100 fps (dropping to 40 in some cases), you will notice a drastic slowdown.

This often happens in Crysis, but not in games like Modern Warfare, which has been optimized to run at a stable 60 fps.


----------



## Binge (Sep 30, 2009)

WarEagleAU said:


> No one is mentioning the L1 and L2 caches on this. Its basically a gpu and cpu merge it seems. I have to say, being an ATI/AMD fanboy, Im impressed with a rumored spec sheet (if not concrete). I Wont be buying it, but it seems like a hell of a dream card.



this is because they remade the shader architecture as MIMD, which would not do well sharing memory bandwidth with the rest of the chip. The MIMD design is optimized to use a pool of cache instead; otherwise there would be some serious latency in shader processing.


----------



## trt740 (Sep 30, 2009)

this will be a monster


----------



## ZoneDymo (Sep 30, 2009)

gumpty said:


> Seriously man, whether you like it or not, you making a purchase of anything is part of how you run _your_ business. Excluding half your potential business partners because of rumours about bad behaviour is not the smartest move. Cut off your nose to spite your face?




Rumours?
Is it a rumour that Nvidia bought Ageia and left all Ageia card owners for dead?
Is it a rumour that Nvidia made sure that an ATI + Nvidia (for PhysX) configuration is no longer doable?
Is it a rumour that AA does not work on ATI cards in Batman: AA even though they are capable of it?

The only rumour is that Nvidia made Ubisoft remove DX10.1 support from AC because AC was running too well on it, with, of course, ATI cards (Nvidia does not support DX10.1)


----------



## Binge (Sep 30, 2009)

ZoneDymo said:


> Rumours?
> Is it a rumour that Nvidia bought Ageia and left all Ageia card owners for dead?
> Is it a rumour that Nvidia made sure that an ATI + Nvidia (for PhysX) configuration is no longer do-able?
> Is it a rumour that AA does not work on ATI cards in Batman AA even though it is able to do so?
> ...



Still not monopolizing the market, so your whining won't do anything.  This is not the thread for GPU war conspiracy theories.


----------



## happita (Sep 30, 2009)

Binge said:


> This is because they remade the shader architecture as MIMD, which would not do well sharing the memory bandwidth with everything else. The MIMD design is optimized to use a pool of cache instead; otherwise there would be some serious latency in shader processing.



And I was wondering why I was seeing L1 and L2 caches on an upcoming video card. I thought I was going crazy hahaha.

This will be VERY interesting. If the price is right, I may skip the 5K series and go GT300


----------



## eidairaman1 (Sep 30, 2009)

Since they call this card "reactor", I guess it will be as hot as a nuclear reactor under meltdown conditions, aka the GeForce FX 5800.


----------



## AddSub (Sep 30, 2009)

W1zzard said:


> you will pay 25$ for nothing? send it to funds@techpowerup.com



Situation is that bad, eh? May I recommend injecting ads into every post on TPU as a signature, and banning people who use ad-blockers (like TechReport)?


----------



## happita (Sep 30, 2009)

Binge said:


> Still not monopolizing the market, so your whining won't do anything.  This is not the thread for GPU war conspiracy theories.



No monopolizing, correct, but they are involved in highly unethical business practices. And if this rumor from Zone is true, forcing a company to not let a game use a certain DX level just because Nvidia doesn't support it is pretty f'ed up. I could understand if it was an Nvidia-only thing, but it's freakin' DirectX!! The consumer gets f'ed in the end. That is not the way to persuade customers to buy your product over the competition. But I'm sure AMD/ATI does the same, and the light doesn't get shed on them because they're the "small guy". I WANT TRANSPARENCY ON BOTH SIDES DAMMIT!!!


----------



## eidairaman1 (Sep 30, 2009)

AddSub said:


> Situation is that bad, eh? May I recommend injecting ads into every post on TPU as a signature and banning people who use ad-blockers (like TechReport)



screw you dude


----------



## Benetanegia (Sep 30, 2009)

ZoneDymo said:


> Is it a rumour that Nvidia bought Ageia and left all Ageia card owners for dead?



There were not that many people; around 10,000 PhysX PPUs were sold. And sometimes a company has to do what's best for the most. Spending as much to support 10,000 cards as you do to support 100+ million GPUs makes no sense at all; it's spending twice for nothing in the big scheme of things. I'm not saying that was good, it's a pity for those who bought the card, but those who want PhysX acceleration can now have it for free by just going Nvidia in their next purchase, and all those who already had an Nvidia card got it for free.



> Is it a rumour that Nvidia made sure that an ATI + Nvidia (for PhysX) configuration is no longer do-able?



It wasn't doable in every OS, and did Ati want to share the QA costs? Would Ati deliver Nvidia their newer, unreleased cards, so that Nvidia could test compatibility and create the drivers before they launched? Or would Nvidia have had to wait, with problematic drivers, taking all the blame for badly functioning setups?



> Is it a rumour that AA does not work on ATI cards in Batman AA even though it is able to do so?



BS. And it has been discussed to death in other threads. Ati cards don't do that kind of AA, which was *added* exclusively for Nvidia, paid for by Nvidia and QA'd by Nvidia. It even has the Nvidia AA label written all over it.



> The only rumour is that Nvidia made Ubisoft remove DX10.1 support from AC because AC was running to awesome on it with of course ATI cards (Nvidia does not support DX10.1)



BS again. If Nvidia didn't want DX10.1 in that game, it would never have been released with DX10.1 to begin with. The HD3xxx series had been released months before the game launched; they already knew how those cards were going to perform. It just takes a trip to the local store to buy a damn card, FFS!


----------



## Atom_Anti (Sep 30, 2009)

eidairaman1 said:


> Since they call this card "reactor", I guess it will be as hot as a nuclear reactor under meltdown conditions, aka the GeForce FX 5800.



Yeah, yeah, like the Chernobyl reactor did in 1986. 
Anyway, there is still no release date, so Nvidia may not be able to make it. Remember what happened with 3dfx: they were the biggest and most famous 3D VGA maker, but they could not come up with the Rampage soon enough. Maybe Reactor=Rampage.


----------



## newtekie1 (Sep 30, 2009)

ZoneDymo said:


> Rumours?
> Is it a rumour that Nvidia bought Ageia and left all Ageia card owners for dead?
> Is it a rumour that Nvidia made sure that an ATI + Nvidia (for PhysX) configuration is no longer do-able?
> Is it a rumour that AA does not work on ATI cards in Batman AA even though it is able to do so?
> ...



Ageia was on the brink of bankruptcy when nVidia bought them.  If Ageia went under, where would that have left the Ageia card owners?  They should all consider themselves lucky that nVidia bought Ageia and continued support at least some time longer than what they would have gotten if Ageia was left to die.

Who cares if you can't use PhysX with an ATi card doing the graphics?  ATi pretty much assured this when they denied nVidia the chance to run PhysX natively on ATi hardware.  That's right, despite all your whining, you forget that initially nVidia wanted to make PhysX run natively on ATi hardware, no nVidia hardware required.  ATi was the one that shut the door on peaceful PhysX support, not nVidia.

And if you actually paid attention to the Batman issue, you would know a few things.  1.) Enabling AA on ATi cards not only doesn't actually work, it also breaks the game.  2.) nVidia paid for AA to be added to the game; the game engine does not natively support AA. If nVidia hadn't done so, AA would not have been included in the game at all, so ATi is no worse off.  There is no reason nVidia should let ATi use a feature that nVidia spent the money to develop and include in the game.  So it is a false rumor that nVidia paid to have a feature disabled for ATi cards.  The truth is that they paid to have the feature added for their cards.  There is a big difference between the two.


----------



## aetneerg (Sep 30, 2009)

I predict GTX380 card will cost between $549.99-$579.99.


----------



## legends84 (Sep 30, 2009)

oh well.. I'll stick with my current GPU for now.. this thing would burn my money


----------



## KainXS (Sep 30, 2009)

So if it's 384-bit GDDR5, then we are looking at a card with a max of 48 ROPs and 512 shaders.

At least Nvidia is being innovative; I think this card is going to be a monster when it comes out.
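The 48-ROP guess follows from how NVIDIA tied ROP partitions to memory controllers in GT200 (8 ROPs per 64-bit channel). A back-of-envelope sketch, assuming Fermi keeps that ratio (which was unconfirmed at the time):

```python
# ROP count implied by the bus width, assuming Fermi keeps GT200's
# ratio of 8 ROPs per 64-bit memory channel (an assumption -- the
# real partitioning was not public at the time of this thread).

BUS_WIDTH_BITS = 384
CHANNEL_WIDTH_BITS = 64   # width of one memory controller
ROPS_PER_CHANNEL = 8      # GT200-style ratio

channels = BUS_WIDTH_BITS // CHANNEL_WIDTH_BITS
rops = channels * ROPS_PER_CHANNEL
print(channels, rops)  # 6 48
```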


----------



## erocker (Sep 30, 2009)

newtekie1 said:


> Ageia was on the brink of bankruptcy when nVidia bought them.  If Ageia went under, where would that have left the Ageia card owners?  They should all consider themselves lucky that nVidia bought Ageia and continued support at least some time longer than what they would have gotten if Ageia was left to die.
> 
> Who cares if you can't use PhysX with an ATi card doing the graphics?  ATi pretty much assured this when they denied nVidia the chance to run PhysX natively on ATi hardware.  That's right, despite all your whining, you forget that initially nVidia wanted to make PhysX run natively on ATi hardware, no nVidia hardware required.  ATi was the one that shut the door on peaceful PhysX support, not nVidia.
> 
> And if you actually paid attention to the Batman issue, you would know a few things.  1.) Enabling AA on ATi cards not only doesn't actually work, it also breaks the game.  2.) nVidia paid for AA to be added to the game; the game engine does not natively support AA. If nVidia hadn't done so, AA would not have been included in the game at all, so ATi is no worse off.  There is no reason nVidia should let ATi use a feature that nVidia spent the money to develop and include in the game.  So it is a false rumor that nVidia paid to have a feature disabled for ATi cards.  The truth is that they paid to have the feature added for their cards.  There is a big difference between the two.



Good point with Ageia, but Ageia isn't the only company. Many of us are still burned from 3dFX. Now that sucked.

I can enable AA in CCC for the Batman demo. It works; performance sucks, but it works. There is no way a game developer should be taking kickbacks for exclusivity in games. It alienates many of their customers. There is also no way you know for sure that they couldn't have added AA for all graphics cards in Batman. Honestly, what? Are game developers going to start charging graphics card companies money to include the color blue in their games? Ridiculous.

Fact of the matter is though, I'm sure if we had a poll, both Nvidia and ATi owners would agree that features should be included for both cards. Continuing down this road will do nothing but make certain features that should already be in the game, exclusive to a particular brand more and more. It needs to stop, it's bad business and most importantly bad for the consumer, limiting our choices.


----------



## Benetanegia (Sep 30, 2009)

Benetanegia said:


> AFAIK that means that you can just #include C for CUDA and work with c++ like you would do with any other library and same for fortran. That's very good for some programers indeed, but only works on Nvidia hardware.



I said that, but after reading information in other places, that might be inaccurate. They're saying that it can run C/C++ and Fortran code directly. That means there's no need to deal with CUDA, OpenCL or DX11 compute shaders, because you could just program something in Visual Studio and the code would just work. I don't know if that's true, but it would be amazing.


----------



## Easy Rhino (Sep 30, 2009)

Given NVIDIA's past, this will most likely be $100 more than ATI's flagship. Obviously, to the sane person, its performance would have to justify the cost. This time around, though, NVIDIA is coming out after ATI, so they may indeed have to keep their prices competitive.


----------



## Benetanegia (Sep 30, 2009)

erocker said:


> Good point with Ageia, but Ageia isn't the only company. Many of us are still burned from 3dFX. Now that sucked.
> 
> I can enable AA in CCC for the Batman demo. It works, performance sucks but it works. There is no way a game developer should be taking kickbacks for exclusivity in games. It alienates many of their customers. There is also no way you know for sure that they couldn't of added AA for all graphics cards with Batman. Honestly, what? Are game developers going to start charging graphics card companies more money to include the color blue in their games? Ridiculous.
> 
> Fact of the matter is though, I'm sure if we had a poll, both Nvidia and ATi owners would agree that features should be included for both cards. Continuing down this road will do nothing but make certain features that should already be in the game, exclusive to a particular brand more and more. It needs to stop, it's bad business and most importantly bad for the consumer, limiting our choices.



Stop the bitching already. Fact is that 10+ games have been released using UE3 and *none* of them had in-game AA. If you wanted AA, you had to enable it in the control panel, whether you had Ati or Nvidia. With this game, Nvidia asked the developer to add AA and helped develop and QA the feature. AMD didn't even contact the developer, so should they get something they didn't pay for? Should the developers risk their reputation by releasing something that has not had any QA*? Or should Nvidia pay so that the developer does QA on AMD's cards? AMD is not helping developers on purpose, but they expect to get all the benefits; is that moral? Not in my book. The only reason GPUs sell is games, so helping the ones that are helping you is the natural thing to do.

Regarding your last paragraph, I would say yes, but only if both GPU developers were helping to include the features.

* The feature breaks the game, so instead of the crap about Nvidia disabling the feature, we would be hearing crap about Nvidia paying for the game to crash on AMD cards. This crap will never end.


----------



## eidairaman1 (Sep 30, 2009)

All I can say is it's hurting their sales: with the game having the TWIMTBP badge on it, they're not getting the expected sales from AMD users. With that badge they get paid a little here and there.  With business practices like this, it makes me glad I switched to ATI back in 2002


----------



## yogurt_21 (Sep 30, 2009)

happita said:


> And I was wondering why I was seeing L1 and L2 caches on an upcoming video card. I thought I was going crazy hahaha.
> 
> This will be VERY interesting, if the price is right, I may skip the 5k and go GT300



Yeah, I thought that was odd as well. I'm very curious to see the performance of these bad boys.  The way things are going, it will be a while before I have the cash for a new card anyway, so I might as well wait and see what both sides have to offer.



eidairaman1 said:


> all I can say is its hurting their sales due to the Game having the TWIMTBP badge on it not getting the expected sales from AMD users. With that badge they get paid a little here and there.



Funny, even when I had my 2900XT I barely noticed the TWIMTBP badge, either on the case or in the loading of the game. I've yet to find a game that will not work on either side's cards, and I never buy a game because it works better on one than another. I hope most of you don't either. I buy games that I like for storyline, graphics, gameplay, and replayability. Other reasons make no sense to me.


----------



## aj28 (Sep 30, 2009)

Prices will be competitive because they have to be, but nVidia is going to lose big due to the cost of producing this behemoth. And what is their plan for scaling? All of this talk about a high-end chip while they're completely mum about everything else. Are they going to re-use G92 again, or just elect not to compete?

Once Juniper comes out (still scheduled for Q4 2009), expect nVidia to shed a few more of their previously-exclusive AIB partners...


----------



## erocker (Sep 30, 2009)

Benetanegia said:


> Stop the bitching already. Fact is that 10+ games have been released using UE3 and *none* of them had in-game AA. If you wanted AA, you had to enable it in the control panel, whether you had Ati or Nvidia. With this game, Nvidia asked the developer to add AA and helped develop and QA the feature. AMD didn't even contact the developer, so should they get something they didn't pay for? Should the developers risk their reputation by releasing something that has not had any QA*? Or should Nvidia pay so that the developer does QA on AMD's cards? AMD is not helping developers on purpose, but they expect to get all the benefits; is that moral? Not in my book. The only reason GPUs sell is games, so helping the ones that are helping you is the natural thing to do.
> 
> Regarding your last paragraph, I would say yes, but only if both GPU developers were helping to include the features.
> 
> * The feature breaks the game, so instead of the crap about Nvidia disabling the feature, we would be hearing crap about Nvidia paying for the game to crash on AMD cards. This crap will never end.



I'm not bitching, you're bitching. (You are also walking on thin ice with that comment. I am free to express my views on this forum like anyone else.) I'm stating how I see it. Tell me, almighty one, do you work for Nvidia? Do you know if Nvidia is approaching the developers or is it the other way around? Is Nvidia paying the developers to add features to the games for their cards, or are they throwing developers cash to keep features exclusive to their cards? Unless you work in one of their back rooms, you have no idea.

*Ugh, I'm sounding like a conspiracy theorist. I hate conspiracy theorists. I'll shut up and just not play this game.


----------



## VanguardGX (Sep 30, 2009)

aj28 said:


> Prices will be competitive because they have to be, but nVidia is going to lose big due to the cost of producing this behemoth. And what is their plan for scaling? All of this talk about a high-end chip while they're completely mum about everything else. Are they going to re-use G92 again, or just elect not to compete?
> 
> Once Juniper comes out (still scheduled Q4 2009), expect nVidia to shed a few more of their previously-exclusive AIB partners...



NV is just gonna recycle old G92/GT200 Parts to fill that gap!!!


----------



## DaJMasta (Sep 30, 2009)

Native fortran support!!!!!!




YESSSSSS!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!


----------



## Benetanegia (Sep 30, 2009)

erocker said:


> I'm not bitching, you're bitching. I'm stating how I see it. Tell me, almighty one, do you work for Nvidia? Do you know if Nvidia is approaching the developers or is it the other way around? Is Nvidia paying the developers to add features to the games for their cards, or are they throwing developers cash to keep features exclusive to their cards? Unless you work in one of their back rooms, you have no idea.



Do you know anything about the presumption of innocence? When there's no proof of guilt, the natural reaction is to presume innocence. That's what an honest, non-biased person thinks. And that's what any legal system is based on.

You have as much proof of it happening as I do of it not happening. Have you any proof except that it runs better on one than the other? So you are right and I'm not, because? Enlighten me.

I know Nvidia is playing fair, because that's what the developers say. When dozens of developers say that Nvidia is approaching them to help them develop, optimize and test their games, my natural reaction is to believe them, because I don't presume that all people lie. I don't presume that all of them take money under the table. But most importantly, I know that at least one person out of the 100 that form a game development team, out of the 100 developers that have worked under TWIMTBP, would have said something already if something shady was happening.

I've been part of a small developer that did small Java games, and I know how much it costs to optimize and make even a simple game bug-free, so it doesn't surprise me a bit that a game that has been optimized on one card runs better than on the one it hasn't*. That's the first thing that makes people jump on TWIMTBP, and given that's what they think and what they base the claims of bad behavior on, it doesn't surprise me.

*It also happened that one game ran flawlessly on one cellphone and crashed on others, while both should have been able to run the same Java code.


----------



## Mistral (Sep 30, 2009)

Benetanegia said:


> ...They're saying that it can run C/C++ and Fortran code directly. That means there's no need to deal with CUDA, OpenCL or DX11 compute shaders, because you could just program something in Visual Studio and the code would just work. I don't know if that's true but it would be amazing.



As amazingly awesome as that is, save for some very rare cases I question the utility of this. Besides raising the transistor count (almost wrote trannies there...), what use is it to the gaming population? And I doubt even pros would have much reason to jump on it.

L1 and L2 caches sounds like fun though...


----------



## DaedalusHelios (Sep 30, 2009)

erocker said:


> I'm not bitching, you're bitching. (You are also walking on thin ice with that comment. I am free to express my views on this forum like anyone else.) I'm stating how I see it. Tell me, almighty one, do you work for Nvidia? Do you know if Nvidia is approaching the developers or is it the other way around? Is Nvidia paying the developers to add features to the games for their cards, or are they throwing developers cash to keep features exclusive to their cards? Unless you work in one of their back rooms, you have no idea.
> 
> *Ugh, I'm sounding like a conspiracy theorist. I hate conspiracy theorists. I'll shut up and just not play this game.



Erocker, I think his "bitching" response was directed at a portion of the community as a whole. I know you feel a little burned by the trouble you had with PhysX configurations. It really did take extra time to program selective AA into the game. I think we all agree on that, right? With that being said, we know that Nvidia helped fund the creation of the game from the start. So if the developer says, "Nvidia, that money helped so much in making our production budget that we will engineer special selective AA for your hardware line", is that morally wrong? Do you say Nvidia cannot pay for extra engineering to improve the experience of their own end users? If Nvidia said they would send ham sandwiches to everybody that bought Nvidia cards in the last year, would you say that it's not fair unless they send them to ATi users too?

What it comes down to is that it was a game without selective AA. Nvidia helps develop the game and gets a special feature for their hardware line when running the game. Where is the moral dilemma?


----------



## Benetanegia (Sep 30, 2009)

Mistral said:


> As amazingly awesome that is, save for some very rare cases I'm quite questioning the utility of this. Besides raising the transistor count (almost wrote trannies there...), what use is that to the gaming population? And I doubt even pros would have much reason to jump on it.
> 
> L1 and L2 caches sounds like fun though...



Who says that in 2009 a GPU is a gaming-only device? It never has been, anyway. Nvidia has the GeForce, Tesla and Quadro brands, and all of them are based on the same chip. As long as the graphics card is competitive, do you care about anything else? You shouldn't. And apart from this, the ability to run C++ code can help in all kinds of applications. Have you never encoded a video? Wouldn't you like to be able to do it 20x faster?

Pros, on the other hand, have 1000s of reasons to jump on something like this.
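A "20x faster" figure is only reachable when nearly all of the encoder is parallelizable; Amdahl's law gives the ceiling. A quick sketch (the 95%/50% parallel fractions are made-up illustration values, not measurements of any real encoder):

```python
# Amdahl's law: the ceiling on speedup when only a fraction p of the
# work can be spread across n cores. Shows why a "20x faster" encode
# needs the encoder to be almost entirely parallel.

def amdahl_speedup(p, n):
    """Overall speedup with parallel fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 512 cores, a 95%-parallel encoder tops out near 20x:
print(round(amdahl_speedup(0.95, 512), 1))  # 19.3
# A 50%-parallel one barely doubles:
print(round(amdahl_speedup(0.50, 512), 1))  # 2.0
```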


----------



## Tatty_One (Sep 30, 2009)

qubit said:


> Shame that bus isn't a full 512 bits wide. They've increased the bandwidth with GDDR5, yet taken some back with a narrower bus. Also, 384 bits has the consequence that the amount of RAM is that odd size like on the 8800 GTX, when it would really be best at a power of two.



GDDR5 effectively doubles the bandwidth in any case (unlike GDDR3); there is no card on the planet that will be remotely hampered by what is effectively 768-bit throughput. Really, with GDDR5 there is absolutely no need to go to the additional expense.
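The "effectively doubled" point is just bandwidth arithmetic: GDDR5 transfers four data words per memory clock versus GDDR3's two, so a 384-bit GDDR5 bus matches a 768-bit GDDR3 bus at the same clock. A minimal sketch (the 1000 MHz clock is a hypothetical value for comparison, not Fermi's actual memory clock):

```python
# Peak memory bandwidth = bus_width/8 * transfers_per_clock * clock.
# GDDR5 moves 4 data words per memory clock; GDDR3 moves 2.

def bandwidth_gbps(bus_bits, mem_clock_mhz, transfers_per_clock):
    """Peak bandwidth in GB/s (1 GB = 1e9 bytes)."""
    return bus_bits / 8 * transfers_per_clock * mem_clock_mhz * 1e6 / 1e9

# 384-bit GDDR5 vs a hypothetical 768-bit GDDR3 bus at the same 1000 MHz:
gddr5 = bandwidth_gbps(384, 1000, 4)
gddr3_wide = bandwidth_gbps(768, 1000, 2)
print(gddr5, gddr3_wide)  # 192.0 192.0
```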


----------



## Benetanegia (Sep 30, 2009)

DaedalusHelios said:


> Erocker, I think his "bitching" response was directed at the portion of the community as a whole.



Yeah that's true. Sorry Erocker if it seemed directed at you.

It's just that the subject is being brought up again every 2 posts, and really, it has already been explained by many members. I don't think it's crazy to believe in the innocence of 1000s of developers (individuals), who IMO are being insulted by the people that presume guilt. TBH I get angry because of that.


----------



## AlienIsGOD (Sep 30, 2009)

WOW, 512 cores!!! Without a doubt this will be better than a 5870, but I'm thinking the power draw will be larger than the 5870's too.


----------



## newtekie1 (Sep 30, 2009)

eidairaman1 said:


> all I can say is its hurting their sales due to the Game having the TWIMTBP badge on it not getting the expected sales from AMD users. With that badge they getpaid a little here and there.  With business practices like this it makes me glad I switched to ATI back in 2002



You realize that ATi had a similar program to TWIMTBP back in 2002, right?  Oddly enough*, I remember seeing it all over the place in games like Unreal Tournament and Source-based games**, both of which ran better on ATi hardware due to ATi's aid in development.  Surprising that you would actually switch to them when they were in the middle of doing exactly what you are complaining about now...

*I say oddly enough, because the Batman game that has caused so much uproar recently is actually based on an Unreal engine.
**Valve removed the ATi branding once ATi stopped working with them, and most other developers, to improve games before release.



VanguardGX said:


> NV is just gonna recycle old G92/GT200 Parts to fill that gap!!!



That worked wonderfully in the past: it probably allowed nVidia to compete better, eliminated consumer confusion, and lowered prices for the consumer, so I can't see how it was really a bad thing.

However, this likely won't work with the upcoming generation of cards, as DX11 support will be required.


----------



## Mistral (Sep 30, 2009)

Benetanegia said:


> Who says that in 2009 a GPU is a gaming only device? It has never been anyway. Nvidia has GeForce, Tesla and Cuadro brands and all of them are based on the same chip. As long as the graphics card is being competitive do you care about anything else? You shouldn't. And appart from this the ability to run C++ code can help in all kinds of applications. You have never encoded a video? Wouldn't you like to be able to do it 20x faster?
> 
> Pros on the other hand have 1000s and every reason to jump into something like this.



I'm all for lightning-fast encode times, but please explain how running C and Fortran would help that, since that part is a bit foggy to me. If nVidia can squeeze in the extra "features" and keep prices and performance "competitive", all is peachy. We'll need to wait and see if that's the case, though.


----------



## PP Mguire (Sep 30, 2009)

Steevo said:


> Their margin means nothing to me, and everybody stop with the bus width: GDDR5 allows TWICE the bandwidth per pin compared to GDDR3, so a 384-bit bus is like a 768-bit one.
> 
> 
> I just want to run games like GTA4 and newer ones at native res with eye candy for an immersive experience, and unlike what a diehard blowhard NV camp man said yesterday, you want that to get into the game; otherwise he can have an FX card and play on that.



I do that now with a GTX 280....

We gotta love paper launches and the wars starting over what somebody said... not actual proof and hard-launch benches.


----------



## aCid888* (Sep 30, 2009)

Why does every topic have to be derailed in some way by fanboys or general bullshit that has nothing to do with the subject at hand?  :shadedshu



We can sit here all day and chat about how card A will beat card B and get all enraged about it... or we can wait until it's actually released and base our views on solid facts.

I know which way I'd prefer... but that said, this card does look to be a beast in the making; I just have my doubts about the way nVidia will choose to price it, as they often put a hefty price on 5% more "power".


----------



## Benetanegia (Sep 30, 2009)

Mistral said:


> I'm all for lightning fast encode times, but please explain how running C and Fortran would help that, since that part is a bit foggy for me. If nVidia can squeeze in extra "features" and keep prices and performance "competitive", all is peachy. We'll need to wait and see if that's the case though.



If the chip can truly run C code natively*, it means that a programmer doesn't have to do anything special to code a program to run on the GT300. They can just do it as they would to run it on the CPU. The only difference is that instead of thinking they have 4 cores available, they have to make their code suitable for running on 512. Previously it was as if, in order to write a book, you had to learn French, because that was what the GPU could understand; with GT300 you can write in English as you always have. A lot, if not most, applications and games are programmed in C/C++, and Fortran is widely used in science and industry.

* I say that because it seems too good to be true, TBH.


----------



## soldier242 (Sep 30, 2009)

If that's all true, then the single-GPU GTX 380 (?) will beat the crap out of the 5870 X2 and still won't be tired after doing so... damn


----------



## HalfAHertz (Sep 30, 2009)

I don't think it will be able to run C natively; I think that can only be done on x86 and RISC. 
Anyway, this sure sounds like a true powerhouse, and it really stresses that Nvidia wants to shatter the idea of the graphics card as just a means of entertainment. 

I think that a lot of businesses and scientific laboratories are ready for a massively parallel alternative to the CPU. And once they adopt it, gamers and, more importantly, ordinary users are bound to follow. 

I mean, come on, think about it for a second. Is there any better pick-up line than: "Hey baby, wanna come down to my crib and check out my quadruple-pumped supercomputer?"


----------



## PP Mguire (Sep 30, 2009)

If I had it, she would.


----------



## Kaleid (Sep 30, 2009)

A monster... but boy, it won't be cheap with that transistor count plus the more expensive memory system.

Likely too hot for my taste.

Hopefully it will lower 5850 prices a bit though; I might pick one of those up... or even wait for Juniper XT.


----------



## El Fiendo (Sep 30, 2009)

I hate to say it but I sure hope they suck at folding. I hope they provide no real gain over the current NVIDIA offering in terms of daily points produced. 

If it turns out they do rock the folding world and see great gains, I'll probably start scheming ways to change out 6 GTX 260 216s for 6 of these and a much lighter wallet.


----------



## erocker (Sep 30, 2009)

Benetanegia said:


> Yeah that's true. Sorry Erocker if it seemed directed at you.
> 
> It's just that the subject is being brought again every 2 posts and really, it has already been explained by many members. I dont' think it's crazy to believe in the innocence of 1000s of developers (individuals), that IMO are being insulted by the people that presume guilty. TBH I get angry because of that.



Heh, and I keep mixing up this thread with the damn Batman thread. I should quit bitching as I'm pretty content with my current setup anyways.

Cheers.


----------



## happita (Sep 30, 2009)

erocker said:


> Heh, and I keep mixing up this thread with the damn Batman thread. I should quit bitching as I'm pretty content with my current setup anyways.
> 
> Cheers.



It's ok, we all know you live in the batcave, which has a bitchin' setup


----------



## Benetanegia (Sep 30, 2009)

HalfAHertz said:


> I don't think it will be able to run C natively. I think this can only be done on x86 and RISC.



That's exactly what they are saying the chip does.



> Fermi architecture natively supports C [CUDA], C++, DirectCompute, DirectX 11, Fortran, OpenCL, OpenGL 3.1 and OpenGL 3.2. Now, you've read that correctly - Fermi comes with support for native execution of C++. For the first time in history, a GPU can run C++ code with no major issues or performance penalties, and when you add Fortran or C to that, it is easy to see that GPGPU-wise, nVidia did a huge job.
> 
> To *implement an ISA inside the GPU* took a lot of bravery, and with the GT200 project over and done with, the time was right to launch a chip that would be as flexible as developers wanted, yet affordable.



Implementing an ISA inside the GPU. That's what they say. An ISA is something too specific to be misinterpreted, IMO. Although there's no such thing yet as an ISA for C/C++ or Fortran, because they are compiled to run on x86 or PPC processors, it is true that many constructs in C/C++ map directly onto the x86 instruction set, and over time x86 has absorbed the most successful of them, making the x86 instruction set grow; by now it can probably be said that, for the basics, C/C++ = x86. I suppose it's the same with Fortran, but I don't know Fortran myself, so I can't speak to that.

All in all, what they are claiming is that they have implemented an ISA for those programming languages, so they are effectively claiming that for every core construct in C/C++ and Fortran there is an instruction in the GPU that can execute it. In a way they have completely bypassed the CPU, except for the first instruction required to move execution over to the GPU. Yes, Intel does have something to worry about.
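To illustrate what "an ISA for a language's core constructs" means, here is a toy Python sketch that lowers a tiny C-like expression into an instruction stream; the claim being discussed amounts to the GPU having a native instruction for each such operation. Everything here (the `lower` function, the opcode names) is invented for illustration, not anything from NVIDIA's actual documentation:

```python
# Toy illustration of lowering a language's core constructs to an ISA:
# each operation in a C-like expression maps 1:1 onto a (made-up)
# machine instruction, which is what "native execution" would require.

def lower(expr):
    """Lower a nested expression like ('add', a, b) to an instruction list."""
    if not isinstance(expr, tuple):             # a literal operand
        return [("LOAD", expr)]
    op, lhs, rhs = expr
    # Emit code for both operands, then the instruction for the construct.
    return lower(lhs) + lower(rhs) + [(op.upper(), None)]

if __name__ == "__main__":
    # Roughly: (2 + 3) * 4 in C becomes this instruction stream.
    print(lower(("mul", ("add", 2, 3), 4)))
    # [('LOAD', 2), ('LOAD', 3), ('ADD', None), ('LOAD', 4), ('MUL', None)]
```

The point of contention in the thread is whether the GPU's real instruction set covers *all* core constructs this cleanly, or whether some (e.g. arbitrary control flow) still need CPU-style hardware to run well.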



El Fiendo said:


> I hate to say it but I sure hope they suck at folding. I hope they provide no real gain over the current NVIDIA offering in terms of daily points produced.
> 
> If it turns out they do rock the folding world and see great gains, I'll probably start scheming ways to change out 6 GTX 260 216s for 6 of these and a much lighter wallet.



If the above is true, they will certainly own in folding. Not only would they be much faster, but there wouldn't be a need for a separate GPU client to begin with; just a couple of lines to make the CPU client run on the GPU.

Now that I think about it, it might mean that GT300 could be the only processor inside a gaming console too, but it would run normal code very slowly. The truth is that the CPU is still very much needed to run normal code, because GPUs don't have branch prediction (although I wouldn't bet a penny on that at this point, just in case) and that is needed. Then again, C and Fortran have conditional expressions as core constructs, so the ability to run them should be there, although at a high performance penalty compared to a CPU. A coder may take advantage of the raw power of the GPU and perform massive speculative execution, though.
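The branch-penalty point above can be sketched in plain Python: SIMT GPUs of this era execute a warp of threads in lockstep, so a data-dependent `if`/`else` runs *both* paths and a per-thread mask decides whose results are kept, with no branch prediction involved. This is a minimal simulation of that predicated execution (the warp size, function name, and data are all made up for illustration):

```python
# Minimal simulation of SIMT predicated execution: every thread in a
# "warp" steps through BOTH sides of a branch, and a per-thread mask
# selects which result each thread keeps.

def warp_execute(values):
    mask = [v >= 0 for v in values]          # branch condition per thread
    then_result = [v * 2 for v in values]    # all threads run the 'then' path
    else_result = [-v for v in values]       # ...and all run the 'else' path
    # Masked merge: each thread keeps only the path its condition selected.
    return [t if m else e for m, t, e in zip(mask, then_result, else_result)]

if __name__ == "__main__":
    warp = [3, -1, 0, -7]                     # one 4-thread "warp"
    print(warp_execute(warp))                 # [6, 1, 0, 7]
```

Both arms are evaluated for every element, which is why divergent branches roughly double the work on real SIMT hardware even though the final result looks like an ordinary `if`/`else`.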

Sorry for the jargon and overall rambling.


----------



## El Fiendo (Sep 30, 2009)

And I bet my basement would sound like a bunch of harpies getting gang banged by a roving group of banshees with 6 GT300s added to my setups.


----------



## Millenia (Sep 30, 2009)

I'm going with ATI/AMD again whether it's much better or not due to my mobo, but of course I'd like to see it being at least competitive to drive the prices down.


----------



## eidairaman1 (Sep 30, 2009)

Sorry, UT 99 was a TWIMTBP title, along with UT2K3/4 and UT3, so I don't see your point. The only thing I've really seen was the oft-delayed HL2 carrying an ATI badge.



newtekie1 said:


> You realize that ATi had a similar program to TWIMTBP back in 2002, right?  Oddly enough*, I remember seeing it all over the place in games like Unreal Tournament, and Source based games**.  Both of wich ran better on ATi hardware due to ATi's aid in developement.  Surprising that you would actually switch to them, when they were in the middle of doing exactly what you are complaing about now...
> 
> *I say oddly enough, because the Batman game that has caused so much uproar recently is actually based on an Unreal Engine.
> **Valve removed the ATi branding once ATi stopped working with them, and most other developers, to improve games before release.
> ...


----------



## HalfAHertz (Sep 30, 2009)

I dunno, the biggest problem I see with coding C for a non-x86 architecture is the complexity of the code. The SIMD/MIMD architecture of GPUs is closer to RISC, and from what I know it's much harder to write code for RISC than for x86, but once you have working code, the benefits can be enormous.

I'd love to see Nvidia's solution from a nerd's point of view more than anything else. If they really accomplished what they state here, it would render Larrabee useless and obsolete before it even comes out, and create some serious competition in the HPC market.


----------



## El Fiendo (Sep 30, 2009)

eidairaman1 said:


> Sorry UT 99 was a TWIMTBP, along with UT2K3/4 and UT3, so I don't see your point, only thing I really seen was the oft delayed HL 2 having ATI badge on it.



UT 99 was not a TWIMTBP game.

http://www.nzone.com/object/nzone_twimtbp_gameslist.html


----------



## newtekie1 (Sep 30, 2009)

eidairaman1 said:


> Sorry UT 99 was a TWIMTBP, along with UT2K3/4 and UT3, so I don't see your point, only thing I really seen was the oft delayed HL 2 having ATI badge on it.



Maybe you're right, but I could have sworn UT2K3 had GITG branding on it. Maybe not though, it's been so long.

The main point still stands though, ATi had/has a similar program that did the exact same thing.


----------



## Kaleid (Sep 30, 2009)

There will be some waiting..

"Then timing is just as valid, because while Fermi currently exists on paper, it's not a product yet. Fermi is late. Clock speeds, configurations and price points have yet to be finalized. NVIDIA just recently got working chips back and it's going to be at least two months before I see the first samples. Widespread availability won't be until at least Q1 2010.
I asked two people at NVIDIA why Fermi is late; NVIDIA's VP of Product Marketing, Ujesh Desai and NVIDIA's VP of GPU Engineering, Jonah Alben. Ujesh responded: because designing GPUs this big is "fucking hard".

Source:
http://www.anandtech.com/video/showdoc.aspx?i=3651

Another informative article:
http://www.techreport.com/articles.x/17670


----------



## Steevo (Sep 30, 2009)

PP Mguire said:


> I do that now with a GTX280....
> 
> Wee gotta love paper launches and the wars starting over what somebody said...not actual proof and hard launch benches.



100 draw distance and all high settings? You are delusional, mistaken, or full of shit.

My 1 GB video card can't run it. It isn't the processing power required, it's the vmem, plain and simple.


----------



## El Fiendo (Sep 30, 2009)

For anyone who wants to watch webcasts of NVIDIA's GPU Tech Conference.

Linky

Apparently it's all in 3D this year. An interesting side effect is that the press can't get any decent shots of the slides they show. The question is whether or not that was intentional, to help keep people guessing.


----------



## Kaleid (Sep 30, 2009)

Another in-depth article:
http://www.realworldtech.com/page.cfm?ArticleID=RWT093009110932


----------



## hat (Sep 30, 2009)

L1 and L2 cache for graphics cards? I've never seen that before...


----------



## Fitseries3 (Sep 30, 2009)

http://www.nvidia.com/content/PDF/fermi_white_papers/NVIDIAFermiArchitectureWhitepaper.pdf

http://www.nvidia.com/object/gpu_technology_conference.html#livewebcast


----------



## Bjorn_Of_Iceland (Sep 30, 2009)

El Fiendo said:


> And I bet my basement would sound like a bunch of harpies getting gang banged by a roving group of banshees with 6 GT300s added to my setups.


You may also want to charge people for a sauna bath in your basement as well.


----------



## DaedalusHelios (Sep 30, 2009)

Steevo said:


> 100 draw distance and all high settings? You are delusional, mistaken, or full of shit.
> 
> My 1Gb video card can't run it, it isn't the processor power required, it is the vmem, plain and simple.



You are using a 4850 right? 

A lightly OC'ed gtx 280 beats it even if the 4850 has a 1ghz core OC.


----------



## Fitseries3 (Sep 30, 2009)




----------



## El Fiendo (Sep 30, 2009)

Well that's lackluster. I mean after the weird new design concept of the 5870, I half expected to see something with a built in rainbow gun that can actually fire rainbows or something as retaliation. This one looks like they said 'We love the G80 so much, we're doing it again!' Thankfully I don't buy my GFX cards based on looks.


----------



## Fitseries3 (Sep 30, 2009)

Watch the vid and you can get a better look.

It's actually a bit different than you would think.

I like it myself.

These cards will have some serious balls to them, from what I've heard in the video so far.

The card is similar to a GTX 2xx card but slightly smaller.


----------



## Valdez (Sep 30, 2009)




----------



## El Fiendo (Sep 30, 2009)

^^^

CUDA cores = new term for shader cores for anyone who didn't catch that right away.


----------



## newtekie1 (Sep 30, 2009)

El Fiendo said:


> ^^^
> 
> CUDA cores = shader cores for anyone who didn't catch that right away.



*Waits for people to bitch about nVidia renaming them...and equating it to them renaming video cards...


----------



## Fitseries3 (Sep 30, 2009)

If anyone argues that these GPUs won't be all balls, they will be teabagged once the numbers show up.


----------



## btarunr (Sep 30, 2009)

newtekie1 said:


> *Waits for people to bitch about nVidia renaming them...and equating it to them renaming video cards...



They can call them "little girls with crayons" if they want to. It becomes all the more humiliating when 512 of them beat 1600 "super dooper pooper troopers".


----------



## El Fiendo (Sep 30, 2009)

Bta, you're now in charge of naming all hardware tech. If you don't like the name that comes up in the news, replace it. I would love to read stories along the lines of what you posted.

Also, that HP netbook doing the HD streaming was pretty sweet (for people watching the webcast).


----------



## Fitseries3 (Sep 30, 2009)

haha.... ludwig fuchs

great name.

that ferrari is pretty nice


----------



## El Fiendo (Sep 30, 2009)

Wow. Ludwig brought out some awesome tech. See those lighting effects?


----------



## NeSeNVi (Sep 30, 2009)

No words about TDP?


----------



## Benetanegia (Sep 30, 2009)

Fitseries3 said:


> watch the vid and you can get a better look.
> 
> its actually a bit different than you would think.
> 
> ...



It has some serious balls definitely, and some brains too.

And I love the look, and the fact that it's shorter than GTX 2xx cards. Especially the latter.



El Fiendo said:


> ^^^
> 
> CUDA cores = new term for shader cores for anyone who didn't catch that right away.



They had started to call them just cores in recent months. Anyway, CUDA is (and has always been) the name of the architecture itself, like x86.

What I want to know is whether they have shown or will show performance numbers. I know they are not going to be real (like the HD5870 being 90% faster than the GTX295, lol), but if they say 200% faster, you know they have something.


----------



## SteelSix (Sep 30, 2009)

hat said:


> L1 and L2 cache for graphics cards? I've never seen that before...



Indeed. Can't wait to see what it can do. Damn I wish this thing was only 30 days out.


----------



## PVTCaboose1337 (Sep 30, 2009)

This sounds like it will destroy the ATI 5xxx series.  L2 cache for a graphics card = awesome.


----------



## PP Mguire (Oct 1, 2009)

Steevo said:


> 100 draw distance and all high settings? You are delusional, mistaken, or full of shit.
> 
> My 1Gb video card can't run it, it isn't the processor power required, it is the vmem, plain and simple.



GTX280 > 4850, 'nuff said. My single GTX280 overclocked ran circles around my CrossFire 4870 setup.

Btw, the 280 = 1 GB.

Also, there is a patch out there that will allow you to run a higher resolution without mass amounts of vmem, whether it lags or not.


----------



## Easy Rhino (Oct 1, 2009)

looks like i was right about nvidia's new line of cards. now i just have to wait for them to release a midrange series...


----------



## Benetanegia (Oct 1, 2009)

Easy Rhino said:


> looks like i was right about nvidia's new line of cards. now i just have to wait for them to release a midrange series...



I haven't been watching the webcast from the beginning, but according to Fudzilla, who are there posting live, Jensen has said that they will do a top-to-bottom release, including a dual-GPU card.

http://www.fudzilla.com/content/view/15758/1/

PS: I have been watching the last part, and the augmented reality on Tegra has really, really impressed me.


----------



## imperialreign (Oct 1, 2009)

This new DX11 war is looming to be rather interesting . . .

It defi _appears_ that nVidia might have an ace up their sleeve against the HD5000 series . . . but, all things considered, ATI have _yet_ to throw out any potential date for the release of the 70x2 - leading me to believe they're holding it in the reins until the 300 is out, then slap nVidia with their dual-GPU setup, further driving nVidia's price down . . .

I'm getting the feeling we're going to see a repeat of the HD4000/GT200 release shenanigans - either way, I guess we'll have to see how it goes.


----------



## Benetanegia (Oct 1, 2009)

imperialreign said:


> leading me to believe they're holding it in the reins until the 300 is out, then slap nVidia with their dual-GPU setup, further driving nVidia's price down . . .



Read my post above yours. Nvidia will release Fermi from top to bottom which includes the dual GPU card.


----------



## lism (Oct 1, 2009)

imperialreign said:


> This new DX11 war is looming to be rather interesting . . .
> 
> It defi _appears_ that nVidia might have an ace up their sleeve against the HD5000 series . . . but, all things considered, ATI have _yet_ to throw out any potential date for the release of the 70x2 - leading me to believe they're holding it in the reins until the 300 is out, then slap nVidia with their dual-GPU setup, further driving nVidia's price down . . .
> 
> I'm getting the feeling we're going to see a repeat of the HD4000/GT200 release shenanigans - either way, I guess we'll have to see how it goes.



The numbers look good, but it's still a paper launch, and if it takes a real effort to bake these wafers without errors, it's going to be a hell of a period with this new war between ATI and Nvidia.

I'd prefer ATI, but I have my own reasons for that. Also a shame that even an i7 can't keep up with a HD5870 in Crossfire. I think the ball is in Intel's or AMD's court to produce a much stronger crunching CPU.


----------



## imperialreign (Oct 1, 2009)

Benetanegia said:


> Read my post above yours. Nvidia will release Fermi from top to bottom which includes the dual GPU card.



All well and good . . . except I don't really consider Fud to be a fully reliable source . . . as well, it seems rather odd for nVidia (or ATI for that matter) to release their cards in that order.

Also, there's been no confirmation, nor even rumor, from nVidia regarding a dual-GPU setup . . . actually, most rumors have been a little cautious in that they don't really expect nVidia to have a dual-GPU offering for this series . . . again, though, it's all speculation - nVidia have really yet to offer up much detail straight from their mouths.

Besides, it'd be extremely shtoopid of nVidia to release a dual-GPU card _before_ releasing anything to stack up against the 5870, especially knowing that ATI still have their dual-GPU monstrosities waiting in the wings . . . it's the same tactic that ATI is currently using by not releasing the X2 ATM.


----------



## Easy Rhino (Oct 1, 2009)

if i see a dx11 nvidia mid range card by nov i will poop myself


----------



## Benetanegia (Oct 1, 2009)

imperialreign said:


> all and good . . . except I don't really consider Fud to be a fully reliable source . . . as well, it seems rather odd for nVidia (or ATI for that matter) to release their cards in that order.
> 
> Also, there's been no confirmation, nor even rumor, from nVidia regarding a dual-GPU setup . . . actually, most rumors have been a little cautious in that they don't really expect nVidia to have a dual-GPU offering for this series . . . again, though, it's all speculation - nVidia have really yet to offer up much detail straight from their mouth.
> 
> Besides, it'd be extremelly shtoopid of nVidia to release a dual-GPU card _before_ releasing anything to stack up against the 5870, especially knowing that ATI still have their dual-GPU monstrosities waiting in the wings . . . it's the same tactic that ATI is currently using, by not releasing the x2 ATM.



Erm, I'll wait until someone who has been watching the webcast can confirm that Jensen said that at GTC, but they (FUD) have supposedly posted that from the GTC... I wouldn't call that a rumor. This is not a "We've been told by sources close...", this is "Jensen Huang has confirmed..." - pretty different.


----------



## wolf (Oct 1, 2009)

I love all the ATi heavy posts in this thread 

You had your hype, guys; now it's our turn, let the green machine steamroll once more!

We should get some specifics here pretty soon.


----------



## Kursah (Oct 1, 2009)

imperialreign said:


> I'm getting the feeling we're going to see a repeat of the HD4000/GT200 release shenanigans - either way, I guess we'll have to see how it goes.



I kinda hope we do; give us some more competition and drive prices down drastically. Give us cheaper products that aren't much slower and can be OC'd to more than make up for it; give us decent stock cooling and easy adaptation for aftermarket cooling solutions. I really hope the DX11 Gen1 "wars" set up to be ultra-competitive. I kinda hope that NV doesn't pull too far ahead performance-wise, and I don't think ATI would let them, so to speak... but in the same instant, if NV has too good of an ace up their sleeve, ATI might have to get their 2nd gen rolled out sooner to compete... though ATI will have affordability on their side, I'm sure. Things have definitely gotten more interesting. Hard saying which way I'll go when the products are all out there and my 260 isn't cutting it anymore... which isn't yet, thankfully!


----------



## imperialreign (Oct 1, 2009)

Benetanegia said:


> Erm I'll wait until someone that has been seing the webcast can confirm that Jensen has said that in GTC, but they (FUD) have suposedly post that from the GTC... I wouldn't call that a rumor. This is not a "We've been told by sources close...", this is "Jensen Huang has confirmed..." pretty different.



Still, word from "an insider" is not the same as an "official" statement . . . unless the statement is "official," it's really just a rumor.

I'm not trying to say your statement is wrong, simply that we've seen those kinds of "news topics" posted at sites such as Fud, Inquirer, MaxPC, Tom's, Nordic, and countless other sites - some more reliable than others.  It usually all boils down to the premise of "believe it when we see it" kinda thing, as the tech industry changes so often that nothing is really finalized until it's in the hands of the consumer . . . just look at how often the preliminary specs of the HD5000 series (and HD4000 / GT200, for that matter) changed before they were finally released.




Kursah said:


> I kinda hope we do, give us some more competition and drive prices down drastsically. Give us cheaper products that aren't much slower and can be OC'd to more than make up for it, give us decent stock cooling and easy adaptation for aftermarket cooling solutions. I really hope the DX11 Gen1 "wars" set up to be ultra competetive, I kinda hope that NV doesn't pull too far ahead performance-wise and I don't think ATI would let them so to say...but in the same instant, if NV has too good of an ace up their sleeves in theory, ATI might have to get the 2nd gen rolled out sooner to compete...though ATI will have the affordability on their side I'm sure. Things have definately gotten more interesting, hard saying which way I'll go when the products are all out there and my 260 isn't cutting it anymore..which isn't yet thankfully!




Definitely looking forward to it too - we haven't seen competition this heavy or heated since the good ol' days!


----------



## Benetanegia (Oct 1, 2009)

imperialreign said:


> Still, word from "an insider" is not the same as an "official" statement . . . unless the statement is "official," it's really just a rumor.
> 
> I'm not trying to say your statement is wrong, simply that we've seen those kinds of "news topics" posted at sites such as Fud, Inquirer, MaxPC, Tom's, Nordic, and countless other sites - some more reliable than others.  It usually all boils down to the premise of "believe it when we see it" kinda thing, as the tech industry changes so often that nothing is really finalized until it's in the hands of the consumer . . . just look at how often the preliminary specs of the HD5000 series (and HD4000 / GT200, for that matter) changed before they were finally released.
> 
> ...



Well, you definitely don't get it, man.

http://www.nvidia.com/object/gpu_technology_conference.html

Jensen Huang, *CEO of Nvidia*, said that at a conference. If you can find any more "official" a source, let me know, man.

"an insider"????
"official"????  

Sorry but I have to laugh. I have to laugh sooo hard...


----------



## imperialreign (Oct 1, 2009)

Benetanegia said:


> Well definately you don't get it man.
> 
> http://www.nvidia.com/object/gpu_technology_conference.html
> 
> ...




I love how presumptuous some people get . . . seriously, I do.

Simply because a CEO of a company makes a "statement" does not make that statement official - nor will that be the "be-all, end-all" strategy that the company will follow . . . such has happened numerous times over the last 3 decades.

Besides - the link you posted to originally stated:



> *While he didn't talk about it during the keynote presentation*, this release strategy also includes a high end dual-GPU configuration that should ship around the same time as the high end single-GPU model.



So, then, if the CEO didn't mention it at GTC - where did that info about such a release strategy come from?  Or do you have some "inside scoop" you'd like to share, eh?


----------



## Benetanegia (Oct 1, 2009)

imperialreign said:


> I love how presumptuous some people get . . . seriously, I do.
> 
> Simply because a CEO of a company makes a "statement" does not make that statement official - nor will that be the "be-all, end-all" strategy that the company will follow . . . such has happened numerous times over the last 3 decades.
> 
> ...



You can twist that around as much as you want; what that sentence means is that he didn't say anything about the dual-GPU card in the keynote. He did talk about the top-to-bottom release.
Additionally, sometimes reporters get the chance to talk to them behind the scenes, so to speak, and he told them about the dual card then. Putting false words in a company CEO's mouth is not the best way of doing business when you are a news site and there's still one more day of conference left, so you want to keep him happy.


----------



## KainXS (Oct 1, 2009)

It will be different this time around; Nvidia did not refer to the GeForce sector, they only referred to the Teslas, i.e., cards for businesses.


----------



## imperialreign (Oct 1, 2009)

Benetanegia said:


> You can wrap that around as much as you want, what that sentence means is that he didn't say anything about the dual-GPU on the keynote. He did say about the top to bottom release.



Oh . . . so, then, although Fud stated that Jensen "didn't talk about [a top-to-bottom release strategy] during the keynote presentation," and it's not even mentioned in the keynote highlights at nVidia's blog site . . . you can affirm that the company's CEO _has_ stated the release strategy for the GT300?

So, then, you mean to say that the statements that Fud made were wrong, because you know, for sure, otherwise?  That's quite different from your original assessment that Fud is a reliable source . . . even though Fud didn't claim that Jensen even mentioned the GT300 release strategy.

I'm not wrapping anything around any-which-way . . . simply citing the text and context as it's written.



> Aditionally sometimes reporters get the chance to talk to them behind the scene as to say, and he told them about the dual card then. Putting false words in a company's CEO's mouth is not the best way of doing bussiness when you are a news site and there's still one more day of conference left.



Such actions have never stopped sites before - besides, both these tech sites and the companies thrive on the hype.  It's no different than browsing through the tabloids at the supermarket . . . sometimes they're right, sometimes they're wrong . . . other times they're so far out in left field that there's no way they'd be right.

Such is how it goes with just about every tech report site out there.  Simply because a reporter might have an opportunity to speak with a company representative "behind the scenes" does not make that an official statement . . . nor does it mean that said reporter hasn't added their own "embellishment."


----------



## Benetanegia (Oct 1, 2009)

imperialreign said:


> Oh . . . so, then, although Fud stated that Jensen "didn't talk about [*a dual card release at the same time as the rest as the lineup*] during the keynote presentation," and it's not even mentioned in the keynote highlights at nVidia's blog site . . . you can affirm that the company's CEO _has_ stated the release strategy for the GT300?
> 
> So, then, you mean to say that the statements that Fud made were wrong, because you know, for sure, of otherwise?  That's quite different from your original assesment that Fud is a reliable source . . . even though Fud didn't claim that Jensen even mentioned the GT300 release strategy.
> 
> ...



Fixed. He *did* talk about the top-to-bottom release. What I linked is the second of six consecutive posts made on FUD during the hours the keynote took place; the first one says:



> 3 billion transistors
> 
> 
> Nvidia's CEO, Jensen Huang just posted the first official picture of Fermi rendering a Bugatti car and the slide confirms that the new GPU has 3 billion transistors. The car looks realistic and it is probably rendered at real time, but at press time we've only seen a picture of it.
> ...



http://www.fudzilla.com/content/view/15757/1/

Since you read the keynote highlights, you did read about the car, right?

The second post is the one I posted earlier.

The third: http://www.fudzilla.com/content/view/15759/1/
The fourth: http://www.fudzilla.com/content/view/15760/1/
I was watching the GTC webcast by then, so I can confirm he did say that.
The fifth: http://www.fudzilla.com/content/view/15761/1/
I was watching that too.
The sixth is a photo of Huang holding the card: http://www.fudzilla.com/content/view/15762/1/
I don't know when that happened; I wasn't watching.

So does the fact that I can confirm 2 of them make a difference for you?


----------



## @RaXxaa@ (Oct 1, 2009)

I wonder how many $xxxx you've gotta pay for it; seems like it'll burn a $1000-sized hole in your pocket.


----------



## PP Mguire (Oct 1, 2009)

Who the hell cares how they will release or what they will release? The only thing that matters is hard benchmarks AFTER it's released. The rest is fuddy duddy.

From past releases it's safe to say they WILL have a dual-GPU card, because they have had one since the 7950GX2. Also, people argued over heat that there wouldn't be a dual-GPU GT200, but in fact there was. Arguing over what they will or will not do seems childish when it wouldn't matter either way. You will buy whatever card you want or can afford and call it a day, no matter whether they release all at once or release a dual-GPU card.


----------



## wolf (Oct 1, 2009)

Anyone in Australia know what Aussie time it will be when the specs are completely, officially revealed?


----------



## Nkd (Oct 1, 2009)

soldier242 said:


> if thats all tru, then the singlecore GTX 380? will even beat the crap out of the 5870 X2 and still won't be tired after doing so ... damn



Hmm, I would say that is a bit of a stretch. So what is the difference? I doubt the games are going to be using CUDA, but if they do, then yeah. The main purpose of this graphics card is general-purpose computing, so I will reserve judgement until the card is actually tested. I am sure it will beat my HD 5870 looking at the specs, but who knows, it might not do as well in games as it would in folding.


----------



## Hayder_Master (Oct 1, 2009)

Very nice, this should beat the 5870, but when does it release and how much will it cost?


----------



## Binge (Oct 1, 2009)

hayder.master said:


> very nice this is beat 5870 but when it release and how much cost



all in due time?


----------



## DaedalusHelios (Oct 1, 2009)

Yeah, I don't really trust Fudzilla enough to believe it 100%. I still think it's a bit up in the air what release strategy they will have. It's not that I don't believe you, Benetanegia; it's that I don't believe Fudzilla is truthful all the time.


----------



## Zubasa (Oct 1, 2009)

PP Mguire said:


> Who the hell cares how they will release or what they will release. The only thing that matters is hard benchmarks AFTER its released. The rest is fuddy duddy.
> 
> From past releases its safe to say they WILL have a dual GPU card because they have since the 7950GX2. Also, people argued over heat that there wouldnt be a dual gpu GT200 but in fact there was. Arguing over what they will or will not do seems childish when it wouldnt matter either way. You will buy what card you want or can afford and call it a day no matter if they release all at once or release a dual gpu card.


There never was, and never will be, a dual GT200 card. 
You need to get your facts right before you argue.
The GTX 295 is a dual GT200b card; the 55nm die shrink is what allowed it to happen.
If they had tried to get dual GT200s under a single cooler, you'd get another Asus Mars furnace.
I am sure it would have made you a cup of coffee.


----------



## Animalpak (Oct 1, 2009)

Today we finally can see the GT300 in pictures


http://www.hardwarecanucks.com/news/video/nvidia-gt300-pictured-nvidia-gtc/


----------



## Animalpak (Oct 1, 2009)

Uhmm, Tesla is for programming or what? Not sure, but I think it can look like this.


----------



## mR Yellow (Oct 1, 2009)

Looks sexy... even though I'm not a huge nVidia fan.


----------



## El Fiendo (Oct 1, 2009)

Wow. Fits you were right about the looks on this. Very nice.


----------



## amschip (Oct 1, 2009)

I still wonder whether those 3 billion transistors will really make a difference, taking into account all the added CPU-like functionality. GT200 was much bigger than RV770, and yet that difference didn't really scale well.
Just my two cents...


----------



## mR Yellow (Oct 1, 2009)

amschip said:


> I still wonder will that 3 billion transistors really make a difference taking into account all the added cpu kind functionality? gt200 was much bigger than rv770 and yet that difference didn't really scaled well.
> Just my two cents...



Time will tell.


----------



## Benetanegia (Oct 1, 2009)

amschip said:


> I still wonder will that 3 billion transistors really make a difference taking into account all the added cpu kind functionality? gt200 was much bigger than rv770 and yet that difference didn't really scaled well.
> Just my two cents...



It's not the transistor count you have to take into account, it's the 512 SPs, which is more than twice as many as GT200's 240. Paired with all the improvements in threading and load balancing, that means Fermi probably has more than twice the power of GT200. After reading the whitepaper, I don't think any of that added "CPU-kind" functionality will cripple performance; on the contrary, latencies have been dramatically decreased, interconnect bandwidth increased, and there are added schedulers and threads...

Regarding the last sentence, that's not really accurate. If you put a GTX285 at the same clocks as the HD4870's reference clocks, it would scale more than proportionally, even beyond HD4890 clocks... It's just two different ways of doing things; Nvidia has had the OC advantage in almost every chip in recent years, mainly because they aim at lower clocks to begin with. And that being said, we have no clue what clocks GT300 will launch at; it could be anything between 600 and 800 MHz, and lower or higher is unlikely. If it's close to 600 MHz, then GT300 would be about 2x as fast as GT200; if it launched near 800 MHz, it would be much faster than that. Point is, we don't know exactly how it will perform, but looking at the specs it becomes more and more evident it will not be slow.

EDIT: Before this becomes a discussion, I'm not fighting with you at all. I'm just stating the posibilities, answering your questions trying to offer the different angles.



DaedalusHelios said:


> Yeah I don't really trust fudzilla enough to believe it 100%. I still think its a bit up in the air what release strategy they will have. Its not that I don't believe you Benetanegia, its that I don't believe fudzilla is truthful all the time.



I don't believe it 100% either, and I'm not saying it's going to be true. But what I do think is that written words that are claimed to come from a CEO >>>>>>>>> speculation and thoughts of a member with no info to back his claims. So since all this is speculation, and all of us are talking from speculation, I put both things in a balance and I have no doubts as to which possible, speculated reality is the one with more probability. Especially since most of the other info there regarding GTC is true. Even if Fudzilla is not the most believable source, the truth is that with GT300 they've been correct over the last two days and also overall. For instance, I think they were the first ones mentioning the real codename, Fermi.

What is clear IMO is that he had already made up his mind around an idea; he didn't know who Jensen Huang is nor what GTC is, so he thought he was making his claim stronger in his second reply, while he wasn't, and he is unable to change his position in his subsequent posts.

Point is, even if that info is not 100% accurate, the possibility that it might not happen that way is not enough to back his claims. Uncertainty is never proof of anything, and seriously, I'm starting to believe I've traveled to an alien world or something, because I'm seeing uncertainty used as proof everywhere: like in BM: Arkham, TWIMTBP as a whole, on Spanish TV... Is the world going crazy or what?


----------



## Kaleid (Oct 1, 2009)

Animalpak said:


> http://www.tomshw.it/articles/20091001/scheda-video-gt300-2_c.jpg
> http://www.tomshw.it/articles/20091001/scheda-video-gt300-3_c.jpg
> http://www.tomshw.it/articles/20091001/scheda-video-gt300-4_c.jpg
> Uhmm tesla is for programming or what ? Not sure but i think can look like this



Really nice look. Too bad (just like with Ati) that the fan is so small.


----------



## amschip (Oct 1, 2009)

Benetanegia said:


> It's not the transistor count that you have to take into account; it's the 512 SPs, which is 2.15x more than in GT200. That, paired with all the improvements in threading and load balancing, means that Fermi probably has more than twice the power of GT200. After reading the whitepapers, I don't think any of that added "CPU-like" functionality will cripple performance; on the contrary, latencies have been dramatically decreased, interconnect bandwidth increased, and there are added schedulers and threads...
> 
> Regarding the last sentence, that's not really accurate. If you put a GTX285 at the same clocks as the HD4870 reference clocks, it would more than scale beyond HD4890 clocks... It's just two different ways of doing things; Nvidia has had the OC advantage in almost every chip in recent years, mainly because they aim at lower clocks to begin with. And that being said, we have no clue which clocks GT300 will launch at; it could be anything between 600-800 MHz. Lower or higher is unlikely. If it's close to 600 MHz, then GT300 would be 2x as fast as GT200; if it launched near 800 MHz, it would be much faster than that. Point is, we don't know exactly how it will perform, but looking at the specs it becomes more and more evident that it will not be slow.



I'm not saying it's true either, but everywhere I look it's "OMG it has 3 billion transistors, it must be fast", while RV770 proved otherwise already. As for my second sentence, it's referring to the first one really. Looking at real game performance, 1.4 billion against 956 million wasn't really translating into 50% more performance now, was it?


----------



## HTC (Oct 1, 2009)

I really don't care which one is faster as long as the faster one *isn't way faster*.

Why, you ask? Because if so, the winning one can behave much like Intel VS AMD (price wise) and, IMHO, that's a BIG no no.

Apparently (on paper), nVidia will win this round (if and when "Fermi" is launched): the question is, by how much.

*As long as they are both close to each other, then we can all benefit from their price wars.*


----------



## h3llb3nd4 (Oct 1, 2009)

wow.....


----------



## kid41212003 (Oct 1, 2009)

HTC said:


> I really don't care which one is faster as long as the faster one *isn't way faster*.
> 
> Why, you ask? Because if so, the winning one can behave much like Intel VS AMD (price wise) and, IMHO, that's a BIG no no.
> 
> ...



It doesn't matter how much faster it is, as LONG as AMD has something affordable with good performance. That is the right way to say it.

It doesn't matter if NVIDIA puts out $700 or $1000 or $1b GPUs, because those GPUs are not meant to be mainstream. Those GPUs are not meant to offer good price/performance.

Even if the margin is small (1-5% faster), they can still sell it for twice the price.
The performance gap is not a problem, I repeat; as long as AMD has something good on their side, then we're (consumers) all good.


----------



## HalfAHertz (Oct 1, 2009)

Benetanegia said:


> It's not the transistor count that you have to take into account; it's the 512 SPs, which is 2.15x more than in GT200. That, paired with all the improvements in threading and load balancing, means that Fermi probably has more than twice the power of GT200. After reading the whitepapers, I don't think any of that added "CPU-like" functionality will cripple performance; on the contrary, latencies have been dramatically decreased, interconnect bandwidth increased, and there are added schedulers and threads...
> 
> Regarding the last sentence, that's not really accurate. If you put a GTX285 at the same clocks as the HD4870 reference clocks, it would more than scale beyond HD4890 clocks... It's just two different ways of doing things; Nvidia has had the OC advantage in almost every chip in recent years, mainly because they aim at lower clocks to begin with. And that being said, we have no clue which clocks GT300 will launch at; it could be anything between 600-800 MHz. Lower or higher is unlikely. If it's close to 600 MHz, then GT300 would be 2x as fast as GT200; if it launched near 800 MHz, it would be much faster than that. Point is, we don't know exactly how it will perform, but looking at the specs it becomes more and more evident that it will not be slow.
> 
> ...



These are all valid points, but you have to remember one thing: usually, big and complex chips don't like high frequencies. We should really wait and see the final specs. I'm pretty sure it will be faster than the 5870; the question is still how much faster exactly.


----------



## KainXS (Oct 1, 2009)

The way I am looking at it, Nvidia did the smart thing: instead of relying on the old MADD architecture, they finally upgraded to something better. As for frequencies, nobody knows, but one thing is for sure: ATI doubled the specs of their 5870 over the old gen and got what, a 40% performance boost across the board? Nvidia did nearly the same thing with their GT200s and got almost the same result, because they kept the same old architecture. So I think Nvidia made the right move this time; I am expecting more from the GT300s than I did from the HD5870.

MIMD-based architectures are going to be the way of the future, not the old scalar architecture.


I was really, really surprised when I saw how small the card was, though; I was just amazed, because Nvidia usually makes their high-end cards very long, but this time they are moving in the right direction.


----------



## OnBoard (Oct 1, 2009)

Animalpak said:


>



Just 8-pin power?  And 2 pins more? No way will it run with just one power plug, or it's a miracle card.

But it looks really nice; about time people get a bit of bling too, for how much they have to pay.


----------



## newtekie1 (Oct 1, 2009)

You have to remember that 8-pin isn't just the addition of 2 extra pins (really, those are just ground pins anyway, so they could have been left off). The real change with the 8-pin introduction was the doubling of the power provided, according to the specifications. The addition of the two pins doesn't really do anything; I think it was just done to make it easy to tell the difference in supplied/required power.
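For reference, the spec numbers behind that: a PCIe slot delivers up to 75 W, a 6-pin plug another 75 W, and an 8-pin plug 150 W. Here's a quick sketch of the resulting board power limits (the helper function is just for illustration, not from any spec):

```python
# Spec power per source (PCI-SIG figures): slot = 75 W, 6-pin = 75 W, 8-pin = 150 W.
CONNECTOR_WATTS = {"slot": 75, "6-pin": 75, "8-pin": 150}

def board_power_limit(*aux_connectors):
    """Max spec power for a card: the slot plus its auxiliary connectors."""
    return CONNECTOR_WATTS["slot"] + sum(CONNECTOR_WATTS[c] for c in aux_connectors)

print(board_power_limit("6-pin"))           # 75 + 75       = 150 W
print(board_power_limit("8-pin"))           # 75 + 150      = 225 W
print(board_power_limit("6-pin", "8-pin"))  # 75 + 75 + 150 = 300 W (e.g. GTX 280)
```

So a lone 8-pin caps the card at 225 W by spec, which is why a single plug on a 3-billion-transistor chip raises eyebrows.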


----------



## OnBoard (Oct 1, 2009)

newtekie1 said:


> You have to remember that 8-pin isn't just the addition of 2 extra pins (really, those are just ground pins anyway, so they could have been left off). The real change with the 8-pin introduction was the doubling of the power provided, according to the specifications. The addition of the two pins doesn't really do anything; I think it was just done to make it easy to tell the difference in supplied/required power.



Yep (well, more grounds should allow more amps from 12V), but in that picture there are 8 pins + 2 pins.

The GTX 280 needs 8-pin + 6-pin, and this has more than double the transistors; that's why I'm not buying it, even if it is on a smaller manufacturing process.


----------



## Benetanegia (Oct 1, 2009)

amschip said:


> I'm not saying it's true either, but everywhere I look its: "OMG it has 3 Billion transistors, it must be fast"  while rv770 proved otherwise already. As for my second sentence, it's refering to the first one really. By looking at real game performance 1.4 billion against 956 million wasn't really translating into 50% more performance now was it .



It depends. This is the average of all game performance at 2560x1600 from W1zzard's HD5870 review.






Compared to the HD5870, the GTX285 is doing 78% and the HD4870 is doing 50%. If we normalize 50% to 100% and take it as the base, then:

78/50 * 100 = 156%

That is, at 2560x1600 the GTX285 is 56% faster than the HD4870 in the average of all the games that W1zzard reviews.

But wait!! ATI had another 956 million transistor card using the same chip, the HD4850. If we apply the same math, that gives us that the GTX285 is 95% faster, or almost twice as fast. 40% more transistors and 2x the performance; not too shabby, is it? The GTX285's clock is 648 MHz, the HD4870's is 750 MHz and the HD4850's is 625 MHz.

Comparing the cards at 2560x1600 does make sense, because a lot of those extra 40% of transistors went into the extra 16 ROPs that help at that resolution.

What I mean with all this is, *it depends*.
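The rebasing above is just a ratio; here's the same math as a tiny sketch (the HD4850's 40% chart value is inferred from the "95% faster" figure, not read off the chart):

```python
# Rebasing W1zzard's relative-performance chart (HD5870 = 100%) to a new baseline.
def rebase(score_pct, baseline_pct):
    """Express one card's chart score relative to another card taken as 100%."""
    return score_pct / baseline_pct * 100

gtx285, hd4870 = 78, 50               # chart values quoted above
hd4850 = 40                           # inferred from the "95% faster" claim
print(round(rebase(gtx285, hd4870)))  # 156 -> GTX285 is 56% faster than HD4870
print(round(rebase(gtx285, hd4850)))  # 195 -> GTX285 is 95% faster than HD4850
```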


----------



## jaredpace (Oct 1, 2009)

you know, benetanegia,

i bet a gtx380 scores "125%" on that chart in W1zzard's review in December


----------



## btarunr (Oct 1, 2009)

I'm predicting that figure to be 115~120% on that chart.


----------



## Benetanegia (Oct 1, 2009)

Since the lower numbers have been taken I'll say 135-140%. We have a poll going on here.


----------



## wolf (Oct 1, 2009)

Benetanegia said:


> It depends. This is the average of all games performance at 2560x1600 from Wizzard's HD5870 review.
> 
> http://img.techpowerup.org/091001/perfrel_2560.gif
> 
> ...



I'm really starting to enjoy your posts Benetanegia, I'm glad you found TPU, or that TPU found you 

EDIT: as for the poll ill go for 130% flat


----------



## jaredpace (Oct 1, 2009)

Probably right, btarunr; 20% faster, and 60% later, than a 5870 sounds about right.


----------



## yogurt_21 (Oct 1, 2009)

Benetanegia said:


> It depends. This is the average of all games performance at 2560x1600 from Wizzard's HD5870 review.
> 
> http://img.techpowerup.org/091001/perfrel_2560.gif
> 
> ...



The 285 was a revision; you need to redo your numbers using the 280 to start your theory. Which, btw, is flawed, as it assumes that since the 5870 is twice the speed of the 4870, the GT300 will be twice the speed of the GT200. It could be more than twice the speed; it could be less. We have zero numbers to go on atm, just paper specs.


----------



## btarunr (Oct 1, 2009)

OnBoard said:


> Just 8 pin power  And 2 pins more? No way will it run with just one power plug, or it's a miracle card.



One 6-pin connector on the 'top' (placeholder for an 8-pin), and one 8-pin at the 'rear'.


----------



## lemonadesoda (Oct 1, 2009)

> Half Speed IEEE 754 Double Precision floating point



This is extraordinary performance... assuming it means what it is suggesting. This thing will walk the floor on CUDA, math and PhysX. A new world order in computational accelerators has just opened.

I predict RIP for http://www.clearspeed.com/

PS. Who prefers the "matte" look of the ATI, or the "glossy" look of the nV?
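To put rough numbers on "half speed" DP: peak double-precision throughput would simply be half the single-precision peak. A back-of-envelope sketch, where the 1.5 GHz shader clock is purely a hypothetical placeholder (only the 512 cores and the 1:2 DP:SP ratio come from the leaked specs):

```python
# Back-of-envelope peak-throughput estimate for half-rate double precision.
cores = 512                   # shader processors, per the leaked Fermi specs
shader_clock_ghz = 1.5        # ASSUMPTION: no clock has been announced
flops_per_core_per_cycle = 2  # one fused multiply-add counts as 2 FLOPs

sp_gflops = cores * flops_per_core_per_cycle * shader_clock_ghz
dp_gflops = sp_gflops / 2     # "half speed" IEEE 754 double precision
print(sp_gflops, dp_gflops)   # 1536.0 768.0
```

Even at a hypothetical 1.5 GHz, ~768 DP GFLOPS would be an order of magnitude above dedicated accelerator boards of the time, which is why this one spec line matters so much for GPGPU.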


----------



## btarunr (Oct 1, 2009)

Here be more pics: http://www.techpowerup.com/105052/N...ference_Board_Pictured_in_Greater_Detail.html


----------



## Benetanegia (Oct 1, 2009)

lemonadesoda said:


> PS. Who prefers the "matte" look of the ATI, or the "glossy" look of the nV?





btarunr said:


> Here be more pics: http://www.techpowerup.com/105052/N...ference_Board_Pictured_in_Greater_Detail.html




I hate the look of the new ATI cards (never been a fan of the Batmobile anyway), so definitely this one ^^.


----------



## 15th Warlock (Oct 1, 2009)

lemonadesoda said:


> PS. Who prefers the "matte" look of the ATI, or the "glossy" look of the nV?



Doesn't really matter that much to me; all those pretty stickers and glossy finishes will be facing down all the time anyway... 

I wonder why no one has come up with a killer backplate design. I mean, I know it wouldn't have any practical function, but wouldn't it be nice to have something you can actually see when you stare at your case's window, instead of a PCB or an all-black backplate?...


----------



## Benetanegia (Oct 1, 2009)

yogurt_21 said:


> 285 was a revision, you need to redo your numbers using the 280 to start your theory. which btw is flawed as it's assuming since the 5870 is twice the speed of the 4870 that the gt300 will be twice the speed of the gt200. it could be more than twice the speed, it could be less. we have zero numbers to go on atm, just paper specs.



Why do I have to redo the numbers? The GTX285 and GTX280 use the exact same architecture; the only difference is clocks, so it's absolutely irrelevant which one I use, when my point is precisely that clocks matter. GT200 didn't offer less performance per billion transistors than RV770; even though RV770 runs 100 MHz faster, the GTX285 performs significantly better than the % increase in transistors would suggest. At similar clocks, the 1.4 billion card performs almost twice as fast as the 1 billion card. So that refutes the claim of "same performance, more transistors".

Anyway, we are not comparing GT200/RV770; we are talking about Fermi. Fermi has 3 billion transistors, and just like GT200 it will use every one of them to be significantly faster than RV870. I'm not assuming it will be twice as fast as GT200, although I know it will be somewhere around that. Twice as fast would have meant saying it would score 156% on that chart; I said 135-140%, which is quite different. Looking at the papers, it's not that we have reason to think it will be exactly twice as fast; if anything, we have some reasons to think it could be 3x faster, yet we are saying it will be less than twice.


----------



## imperialreign (Oct 2, 2009)

Benetanegia said:


> Fixed. He *did* say it, about the top-to-bottom release. What I linked is the second (out of 6) consecutive posts made on FUD during the hours the keynote took place; the first one says:



I don't care what _Fud_ said about the keynote address; again, I *do not* find them to be a reliable source. Just because _Fud_ claims a CEO said something *does not* make it true.

We're two days into the conference, and I have yet to see any mention of such a release strategy on the ongoing GTC blog, nor was it mentioned in Jensen's keynote address summary . . . as well, no other tech sites reporting on the conference have mentioned the GT300 release strategy.

Sorry to play devil's advocate here, but I'd think news such as that (along _with_ the presentation of the GT300) would be big enough that other sites would've coughed it up too . . . not just Fud; and I'm 100% sure Fud is not the only "major" tech site with representatives on hand. If you've paid any attention to the tech industry over the last 10-20 years (which, I definitely get the feeling you have), you'd know that upcoming hardware release strategies (especially for the GPU markets) are like crack to the tech communities - right alongside the spec and pricing sheets.

Such a release strategy just does not make sense for nVidia, especially considering that ATI have already made it to market with their new series, allowing ATI to gain the upper hand in the pricing game . . . not to mention that _everyone_ knows Hemlock is waiting in the wings, and ATI won't drop that bombshell until nVidia have stepped into the ring . . . this is the same market strategy both companies have used since the days of the X1000/7800 series. That being said, I'll believe it when I see it on shelves.




> I don't believe 100% either, and I'm not saying that's going to be true. But what I do think is that writen words that are claimed to come from a CEO >>>>>>>>> speculation and thoughts of a member with no info to back his claims. So since all this is speculation, and all of us are talking from speculation, I put both things in a balance and I have no doubts as to which posible, especulated, reality is the one with more probabilities. Specially since most of the other info there regarding GTC is true. Even if Fudzilla is not the most believable source, truth is that with GT300, they've been correct in the last two days and also overally. For instance I think they were the first ones mentioning the real codename Fermi.



So, seeing as how everything is all just speculation . . . then it's safe to assume your initial interpretations of Fud's article are merely speculation as well? I mean, you yourself claim you don't believe Fud 100% either, yet you've twisted your understanding of the article to serve your needs . . . and you have the audacity to claim that I'm warping the message? 

Again, this is the tech market we're talking about - all "rumors," whether regurgitated from news sites, or spewed from the manufacturer's mouth - are all to be taken with a grain of salt.  There's a lot of sandbagging in the industry, and a lot of smoke & mirrors, too.  Things can and do change overnight.




> What is clear IMO is that he had already made his mind around an idea, he didn't know who Jensen Huang is nor what GTC is, so he thought he was making his claim stronger in his second reply, while he wasn't, and he is unable to change his position after that on his next posts.



I love your pretentious level of assumption.  Simply because I disagree with an unbacked statement you made . . . brilliant.

You still fail to realize that _again_ you yourself have twisted the claim that Fud made . . . here, let me help break down simple English context for you, seeing as how you obviously don't get it . . . now, if English *is not* your primary language, that's cool - I apologize . . . if not . . . then that's just sad:



> Nvidia will implement a top-to-bottom release strategy from high end to entry level. *(the claim that is being disputed)* While he didn't talk about it during the keynote presentation *(meaning, Fud states that Jensen DID NOT mention the GT300 release strategy during his keynote address)*, this release strategy also includes a high end dual-GPU configuration that should ship around the same time *("around the same time" - meaning "not at the same time, but possibly within a month or two")* as the high end single-GPU model *(which contradicts the first sentence, in that the dual-GPU will not ship at the same time as the single-GPU . . . which means it won't be a "top-to-bottom" release, as the dual-GPU would be released first)*.




Does that make it more understandable?




> Point is, even if that info is not 100% accurate, the posibility that it could not happen that way is not enough to assure his claims. Uncertainty is never a proof of anything, and seriously I'm starting to believe I've traveled to an alien world or something, because I'm seing uncertainty used as proof everywhere: like in BM: Arkham, TWIMTBP as a whole, in the spaniard TV... It's the world becoming crazy or what?



Sounds like sandbagging to me . . . as you've come across as quite unsure yourself. It's always amazed me how "holier-than-thou" some people are willing to come across, while seeming to be under the impression that _their_ shit doesn't stink.


I've said my piece; I'm done with this discussion, debate, argument, disagreement or whatever you want to call it.

I'll let time prove who's right and who's wrong.


----------



## El Fiendo (Oct 2, 2009)

imperialreign said:


> Nvidia will implement a top-to-bottom release strategy from high end to entry level. (the claim that is being disputed) While he didn't talk about it during the keynote presentation (meaning, Fud states that Jensen DID NOT mention the GT300 release strategy during his keynote address), this release strategy also includes a high end dual-GPU configuration that should ship around the same time ("around the same time" - meaning "not at the same time, but possibly within a month or two") as the high end single-GPU model (which contradicts the first sentence, in that the dual-GPU will not ship at the same time as the single-GPU . . . which means it won't be a "top-to-bottom" release, as the dual-GPU would be released first).



Hold up a second. Mind your periods and commas. As that article is written it says:

Nvidia will implement a top-to-bottom release strategy from high end to entry level.  *<<END>>*   While he didn't talk about it during the keynote presentation, *this release strategy also includes a high end dual-GPU configuration that should ship around the same time*...  <-THIS is what is implied he didn't talk about due to the way the sentence is structured. If it was saying he didn't talk about the release strategy, it would read:

Nvidia will implement a top-to-bottom release strategy from high end to entry level, even though he didn't talk about it during the keynote presentation. 

Commas can change context very easily.

As for the "around the same time" - meaning "not at the same time, but possibly within a month or two" part: the quote you provide never said it would be released immediately. It only ever says 'around the same time', so there is no contradiction.


----------



## Benetanegia (Oct 2, 2009)

El Fiendo said:


> Hold up a second. Mind your periods and commas. As that article is written it says:
> 
> Nvidia will implement a top-to-bottom release strategy from high end to entry level.  *<<END>>*   While he didn't talk about it during the keynote presentation, *this release strategy also includes a high end dual-GPU configuration that should ship around the same time*...  <-THIS is what is implied he didn't talk about due to the way the sentence is structured. If it was saying he didn't talk about the release strategy, it would read:
> 
> ...



Thanks, thanks. I was going to write exactly the same, but since my first language is not English, he would still have had something to say about it. Thanks, thanks a lot again. Also important for the context is the line preceding what imperial posted:



> However, one thing that he has confirmed is that Fermi architecture is very scalable and that it will end up in many GPUs.
> 
> *so*
> 
> Nvidia will implement a top-to-bottom release strategy from high end to entry level.*<<END>>* While he didn't talk about it during the keynote presentation, *this release strategy also includes a high end dual-GPU configuration that should ship around the same time*...


----------



## PP Mguire (Oct 2, 2009)

Zubasa said:


> There never is and never will be a dual GT200 card.
> You need to get your facts right, before you argue.
> The GTX 295 is a dual GT200b card, the 55nm die shink allow this to happen.
> If they try to get dual GT200 on a single cooler, you get another Asus Mars furnace.
> I am sure it would have made you a cup of coffee.



Semantics. The point is everybody argued against a GTX295-type card, and there was one. 
Any video card that properly makes me a cup of coffee has my vote.


----------



## jaredpace (Oct 2, 2009)

The NV GF100 card looks 1000 times better than the HDbatmobileX2


----------



## amschip (Oct 2, 2009)

Benetanegia said:


> It depends. This is the average of all games performance at 2560x1600 from Wizzard's HD5870 review.
> 
> http://img.techpowerup.org/091001/perfrel_2560.gif
> 
> ...



Correct me if I'm wrong, but don't you think comparing 1GB and 512MB cards at such a high resolution is seriously flawed? If you compare the GTX295 and 4870X2, both with more or less 2GB of memory, the difference is not that big: only 12%. So almost 3 billion transistors is only 12% faster than 2 billion.


----------

