# AMD's HD5970 faster than Nvidia Fermi chip!



## Super XP (Jan 13, 2010)

> *Cypress yields not a problem*
> Written by David Stellmack
> Tuesday, 12 January 2010 18:01
> 
> ...


LINK:
http://www.fudzilla.com/content/view/17205/1/

Looks like NVIDIA is having a lot of yield and heat problems. NVIDIA will do anything to deny such claims, or their stock would drop like a rock, which is why they keep reinforcing that there's nothing wrong with Fermi when in FACT we all know they are having major issues. Hence the many release dates they keep postponing. I was hoping for an earlier release so NVIDIA could help drive prices of the HD 5870 down.


----------



## 3volvedcombat (Jan 13, 2010)

Yeah, it's 20% faster than an HD 5870. Wait until the dual-GPU 1200-1400 dollar solutions hit the market and are 40% faster than an HD 5970.

Screw it, even if I'm an Nvidia lover, I'M NOT PAYING 600 DOLLARS FOR THE 20% GAIN.


----------



## dir_d (Jan 13, 2010)

3volvedcombat said:


> Yeah, it's 20% faster than an HD 5870. Wait until the dual-GPU 1200-1400 dollar solutions hit the market and are 40% faster than an HD 5970.
> 
> Screw it, even if I'm an Nvidia lover, I'M NOT PAYING 600 DOLLARS FOR THE 20% GAIN.



It doesn't seem logical at all; it sucks if Nvidia has to sell them at a loss to compete.


----------



## crazyeyesreaper (Jan 13, 2010)

It's Fudzilla; they're the Steven Quincy Urkel of the tech world in terms of information.


----------



## dir_d (Jan 13, 2010)

crazyeyesreaper said:


> It's Fudzilla; they're the Steven Quincy Urkel of the tech world in terms of information.
> 
> [url]http://img86.imageshack.us/img86/3139/steveurkel29ww.jpg[/URL]



They have gained a lot of credibility in the past year; they have been over 80% correct in their information, and I really think people should take them a little more seriously. Plus, nothing he said is far-fetched, and it could most likely be true.


----------



## SummerDays (Jan 13, 2010)

One company is getting 20% yields, while the next company is getting 60-80% yields?  

And they're using the same fab?  Hmm.  Something doesn't make sense there.

In any case, what % do you think you will gain from hooking up a second ATI card in crossfire?  

A 20% gain for a single card is quite a bit.

Yes, I'm aware they go for a more complex solution


----------



## dir_d (Jan 13, 2010)

SummerDays said:


> One company is getting 20% yields, while the next company is getting 60-80% yields?
> 
> And they're using the same fab?  Hmm.  Something doesn't make sense there.



Nvidia's chips are almost 2x bigger than ATI's, but I suspect Nvidia's yields to be at least 30 to 40%.


----------



## Super XP (Jan 13, 2010)

SummerDays said:


> One company is getting 20% yields, while the next company is getting 60-80% yields?
> 
> And they're using the same fab?  Hmm.  Something doesn't make sense there.


Yes, something is wrong. NVIDIA's Fermi is a complicated chip, which is the problem. Don't get me wrong, I admire NVIDIA for releasing monster graphics cards in the past, and they seem to be following the same trend with the Fermi chip. But they really are having problems with yields and heat. This new card is supposed to run at least 1.5x hotter than NVIDIA's flagship card today.

We need competition, and I know Fermi will be the card to do that, but I believe NVIDIA is rushing it because ATI has already sold more than 2 million HD 5000 series cards. And they are going to sell a dump load more once the HD 5600s and the HD 5500s come online next week.


----------



## Goodman (Jan 13, 2010)

crazyeyesreaper said:


> It's Fudzilla; they're the Steven Quincy Urkel of the tech world in terms of information.
> 
> [url]http://img86.imageshack.us/img86/3139/steveurkel29ww.jpg[/URL]



Man!! I forgot about this show; what was it called again?
I just can't remember...?

EDIT: Found it on Google, it's Family Matters. Anyhow, thanks for making me remember that funny show!
BTW: I still would not believe anything from Fudz & the Inquirer ...


----------



## Super XP (Jan 13, 2010)

crazyeyesreaper said:


> It's Fudzilla; they're the Steven Quincy Urkel of the tech world in terms of information.


Some might say that, but little do people know that Fudzilla has industry connections coming out of his aris, just like Mike Magee from The Inquirer.


----------



## crazyeyesreaper (Jan 13, 2010)

The name of the show would be Family Matters; it had Will Smith and a bunch of other people not nearly as famous as Will Smith.


----------



## a_ump (Jan 13, 2010)

Would the transistor density of Fermi also affect yields? I heard Fermi's density is a lot greater than RV870's. And I too agree, almost everything Fudzilla has said over the past 6 months has been very, very accurate, and what they've stated makes perfect sense. I mean, it isn't anything most of us haven't thought of from time to time anyway.


----------



## HossHuge (Jan 13, 2010)

I live about 40 mins away from TSMC.  Everyone just wait, I'll go down there and ask them....


----------



## [I.R.A]_FBi (Jan 13, 2010)

HossHuge said:


> I live about 40 mins away from TSMC.  Everyone just wait, I'll go down there and ask them....



Pics or it didn't happen.


----------



## 3volvedcombat (Jan 13, 2010)

[I.R.A]_FBi said:


> Pics or it didn't happen.



Oh c'mon, lol, FBI is all into it. XD


----------



## Super XP (Jan 13, 2010)

HossHuge said:


> I live about 40 mins away from TSMC.  Everyone just wait, I'll go down there and ask them....


Make sure you take your Canon SLR digital camera, OK, and get the scoop on what's really going on.

I've heard a rumour that aliens were involved with Fermi, but for some crazy reason NVIDIA's CEO Jen-Hsun Huang didn't like the direction they were taking NVIDIA, hence all the delays.


----------



## air_ii (Jan 13, 2010)

a_ump said:


> Would the transistor density of Fermi also affect yields? I heard Fermi's density is a lot greater than RV870's. And I too agree, almost everything Fudzilla has said over the past 6 months has been very, very accurate, and what they've stated makes perfect sense. I mean, it isn't anything most of us haven't thought of from time to time anyway.



Cypress' density: 2.15 billion / 324 sq mm ~= 6.636 million / 1 sq mm
Fermi: 3.2 billion / (rumoured) ca. 500 sq mm (give or take 20) ~= 6.4 million / 1 sq mm

So no, Fermi's density is not greater (it's 6.66 if it's 480 sq mm, but I think it will be 500 at least).
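For anyone who wants to check the arithmetic, here is a quick sketch in Python. Note that Fermi's die area is a rumoured figure, so treat those numbers as estimates:

```python
# Transistor density = transistor count / die area, in millions per sq mm.
# Cypress figures are published; Fermi's die area (~480-500 sq mm) is a rumour.
def density_mtx_per_mm2(transistors_billions, area_mm2):
    """Return density in millions of transistors per square millimetre."""
    return transistors_billions * 1000 / area_mm2

cypress   = density_mtx_per_mm2(2.15, 324)  # ~6.64 M/sq mm
fermi_500 = density_mtx_per_mm2(3.2, 500)   # ~6.40 M/sq mm (rumoured area)
fermi_480 = density_mtx_per_mm2(3.2, 480)   # ~6.67 M/sq mm (rumoured area)

print(round(cypress, 2), round(fermi_500, 2), round(fermi_480, 2))
```

At 500 sq mm Fermi would be slightly less dense than Cypress; only at 480 sq mm would it edge ahead, which matches the figures quoted above.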


----------



## Fourstaff (Jan 13, 2010)

But a bigger area will mean a greater chance of a defect occurring. 324/500 = 0.648, which means Nvidia has roughly a 60% greater chance of having a defect on its chip than ATI. If ATI's yields are 60%, then Nvidia's chips will hit only a 24% success rate, which is in line with the rumors.
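As a side note, this die-size argument is usually made with a first-order Poisson yield model, Y = exp(-D*A), where D is the defect density and A the die area. Here is a sketch; the 500 sq mm Fermi area is a rumoured figure and the 60% baseline yield is an assumption from the discussion above:

```python
import math

def scaled_yield(known_yield, known_area_mm2, new_area_mm2):
    """Poisson yield model: Y = exp(-D * A).
    Infer the defect density D from a known yield at a known die area,
    then predict the yield for a die of a different area."""
    defect_density = -math.log(known_yield) / known_area_mm2
    return math.exp(-defect_density * new_area_mm2)

# If a 324 sq mm die yields 60%, a rumoured ~500 sq mm die would yield:
y = scaled_yield(0.60, 324, 500)
print(round(y, 3))  # roughly 0.45 under this simple model
```

This simple model predicts something closer to 45% than 24%; real yields also depend on how defects cluster and on how much of a partly defective die can be salvaged, so both figures are speculative.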


----------



## bobzilla2009 (Jan 13, 2010)

Indeed, there is a pretty much fixed chance of an error occurring during the process, so having a bigger chip is the problem. I think ATi saw that coming and made Cypress a very small chip. I have no doubt ATi could have made a 3200-shader card instead (the HD 5870 is a remarkably cool card when the fan is turned up to 40-60%, even when heavily overclocked) but chose not to in order to avoid the same problems as Nvidia. It's widely assumed Nvidia is having heat troubles anyway; something is holding them back, after all. If Fermi were God's gift to GPUs and absolutely perfect in design and function, it would already have come out.

But it's not like DX9 games can utilise the 1600 shaders anyway, so 3200 would just be a waste on the load of tripe (DX9 games, to be specific) that comes out of the gaming market at the moment.


----------



## Black Panther (Jan 13, 2010)

And it's not only Fudzilla; yesterday I found an article on Tom's Hardware about the heat problems of Fermi.


----------



## wolf (Jan 13, 2010)

From what I've seen and heard it smokes Hemlock by 9-12% at this stage, before even their release drivers.

I remain very hopeful.


----------



## Binge (Jan 13, 2010)

Ehhhhh this looks like an oddly conditioned article.  Doom and gloom.


----------



## yogurt_21 (Jan 13, 2010)

crazyeyesreaper said:


> its fudzilla there the Steven Quincy Urkel of the tech world in terms of information
> 
> [url]http://img86.imageshack.us/img86/3139/steveurkel29ww.jpg[/URL]



The joke fails: Steve Urkel was a dork, but he was also incredibly intelligent and was not known for lying.

Fud, on the other hand....


----------



## crazyeyesreaper (Jan 13, 2010)

I'd explain it, but I tried to before posting this and it wasn't possible. I lose.


----------



## Fourstaff (Jan 13, 2010)

wolf said:


> From what I've seen and heard it smokes Hemlock by 9-12% at this stage, before even their release drivers.
> 
> I remain very hopeful.



It's probably more than 10-12% more expensive, so you would be disappointed.


----------



## btarunr (Jan 13, 2010)

SummerDays said:


> One company is getting 20% yields, while the next company is getting 60-80% yields?
> 
> And they're using the same fab?  Hmm.  Something doesn't make sense there.



It does make sense. Fermi is likely too big a die to expect decent yields from.


----------



## mlee49 (Jan 13, 2010)

Hmmm, a 20% increase at 100% of the price... something doesn't add up.

Even if Fermi does have a hot head, buy a card with a lifetime warranty.

I'm doing my best to stay patient in all this 300 series debate. We can only wait until they are released for a final word.


----------



## Bo_Fox (Jan 13, 2010)

^^

Naw, not a 100% price increase! Are you nuts or what? That would make it $800, since the 5870 is selling for $400!

I'd expect NV to try $499 and nothing higher. There will be a 5890 to counter Fermi!

Great news: that means the 5870 will instantly be pushed $100 down to $299 or lower!


----------



## bobzilla2009 (Jan 13, 2010)

Indeed, another HD 5870 for £200 or so would be a nice treat in the summer for a "year's uni work well done". Although I will need to get a new mobo, lol, but that's hardly a problem, since I foresee myself changing to either AMD's or Intel's 6-core processors (probably AMD's, since they'll charge around £170-250 while Intel is going to charge around £700; hardly a competition there).


----------



## cowie (Jan 13, 2010)

Prices for GT100 are all a little speculative at the moment.


----------



## bobzilla2009 (Jan 13, 2010)

Indeed, but we can make pretty good educated guesses at the price now that we have had the AMD cards. Nvidia needs to make up the loss, and the length of the delay and the apparent troubles with production point towards the $500 bracket or above. Either that or Nvidia is going to have to take it on the chin for every card sold, something I can't see happening.


----------



## TVman (Jan 13, 2010)

All I care about is buying an HD 5850 for $200, like I bought my HD 4850.


----------



## afw (Jan 13, 2010)

TVman said:


> All I care about is buying an HD 5850 for $200, like I bought my HD 4850.



Same here ... I believe there are loads of people who want that to happen ... though I don't think it will ... it might drop to $230 or so ... anyway, it will be the best price/performance card out there ...

Hoping and praying for a March Fermi release ...


----------



## bobzilla2009 (Jan 13, 2010)

Lol, even if Fermi is amazing, ATi will probably still make money from people buying the HD 5850 as an OpenCL/DirectCompute physics processor in the future due to its price ^^ assuming that can happen, of course.


----------



## btarunr (Jan 13, 2010)

bobzilla2009 said:


> ATi will probably still make money from people buying the HD 5850 as an OpenCL/DirectCompute physics processor in the future due to its price ^^ assuming that can happen, of course.



Fermi supports OpenCL and DirectCompute 5.0.


----------



## bobzilla2009 (Jan 13, 2010)

Yes, but it'll cost more than an HD 5870 at least. Fermi does not have a midrange card yet as far as we know; we don't even know if they're bringing out a GTX 360! Either way, the HD 5850 will probably drop to around £150, making it a superb secondary card in such a situation.


----------



## btarunr (Jan 13, 2010)

bobzilla2009 said:


> Yes, but it'll cost more than an HD 5870 at least. Fermi does not have a midrange card yet as far as we know; we don't even know if they're bringing out a GTX 360! Either way, the HD 5850 will probably drop to around £150, making it a superb secondary card in such a situation.



The HD 5850 is a $300~$350 card. Competition will push the GTX 360 to the same price point, so the people who can get an HD 5850 now will soon be able to get a more powerful GTX 360.


----------



## bobzilla2009 (Jan 13, 2010)

It makes sense that the HD 5850 will drop to around $200-250 after the GTX 360 release, which is why I'm saying that if Fermi does do well, it will make the HD 5850 a cheap card to use as a secondary physics card. Not that technology is static; there is always a better card at any price point on the way. People who buy a GTX 360 will be able to buy a more powerful card afterwards, or indeed the GTX 360 might be straying into HD 5870 territory after its inevitable price cut. Since ATi has already skimmed the uber profits from the earlier release, they will have no problem outpricing the GTX 360 to death (whereas Nvidia has a large chip that's harder to make, which means a smaller profit margin to cut into).

Nvidia must take the high-end market with Fermi, because ATi should be able to outprice them everywhere else.


----------



## Bo_Fox (Jan 13, 2010)

bobzilla2009 said:


> It makes sense that the HD 5850 will drop to around $200-250 after the GTX 360 release, which is why I'm saying that if Fermi does do well, it will make the HD 5850 a cheap card to use as a secondary physics card. Not that technology is static; there is always a better card at any price point on the way. People who buy a GTX 360 will be able to buy a more powerful card afterwards, or indeed the GTX 360 might be straying into HD 5870 territory after its inevitable price cut. Since ATi has already skimmed the uber profits from the earlier release, they will have no problem outpricing the GTX 360 to death.
> 
> Nvidia must take the high-end market with Fermi, because ATi should be able to outprice them everywhere else.



5850 as a physics card???    

Could we do that?!?  In this life, and not sometime in 3000 AD?


----------



## bobzilla2009 (Jan 13, 2010)

I would imagine so, if/when OpenCL/DirectCompute become widespread (which they probably will, especially DC because of Microsoft). Companies have seen how Nvidia has shifted cards by allowing them to be PhysX adapters.

I would imagine 2011 would be about right, or early 2012 if we must wait for the console tards to get physics too. It all depends on DX11's success, so if someone is sitting there with XP, refusing to upgrade because "DX9 is good enough", give them a sucker punch and tell them to go console. PC gaming isn't about "good enough", dammit!

In the end, DX11 = the dawn of real widespread GPGPU, and not just a few scant apps.


----------



## dir_d (Jan 13, 2010)

btarunr said:


> The HD 5850 is a $300~$350 card. Competition will push the GTX 360 to the same price point, so the people who can get an HD 5850 now will soon be able to get a more powerful GTX 360.



I don't really know if there will even be a GTX 360 initially. Nvidia has stated that the GF100, which is faster than the 5870, will be out in March, and that the GF104, which is speculated to be faster than the 5970, will be out in Q2 2010. It doesn't really make sense to me what Nvidia is doing. It kinda sounds like GTX 380 = GF100 and GTX 395 = GF104, and if that's true I think the GF100 will come in at about $500, and I really don't think it will drop the 5870's price down a lot.


----------



## phanbuey (Jan 13, 2010)

They're going to have to sell it close to a loss until they can get a refresh out. This has happened in the past; this design just needs to be modified/simplified and shrunk, and it will be a successful card.

It's really too bad about the manufacturing issues. This has pushed Nvidia way back.


----------



## bobzilla2009 (Jan 13, 2010)

Indeed, Nvidia has to play the game very carefully now and get some mainstream cards out before they lose too much money.

Even if Nvidia beats ATI in performance at the high end, I bet the ATI guys are pretty happy at the moment. They are the guys in the lead now, they've got a 6-month head start on the new series, and they have had much more time to optimise their current cards. I think the only cards missing are the HD 53xx and 54xx cards now.

Some may say it's too early, but based on Nvidia's lateness, the lack of mainstream-segment cards (or even a sign of any), and the roughly 12 months left before ATI releases the next series and the generation ends, I will say:

First DX11 generation:
ATi 1 - Nvidia 0

Not that it isn't deserved after a few generations behind. I believe that will stand even if Fermi beats Hemlock.


----------



## TAViX (Jan 14, 2010)

Fourstaff said:


> But a bigger area will mean a greater chance of a defect occurring. 324/500 = 0.648, which means Nvidia has roughly a 60% greater chance of having a defect on its chip than ATI. If ATI's yields are 60%, then Nvidia's chips will hit only a 24% success rate, which is in line with the rumors.



EXACTLY!


----------



## Super XP (Jan 15, 2010)

Anybody looking forward to the upcoming ATI Radeon HD 7880?


----------



## Tatty_One (Jan 15, 2010)

I still find it odd that, if yields are as good as they claim, the cards in some areas are still not widely available, and many that have been available have been sold at bumped-up prices. In the UK especially, it's fairly typical for a shop to list, say, 10 different manufacturers' cards: 9 of them are out of stock or on pre-order, and the one that is in stock isn't selling because it's priced around 20% above MRRP, or if the price was competitive it sold out within an hour... Now, this is some 3 months after the initial release.


----------



## bobzilla2009 (Jan 15, 2010)

Yep, but the cards are trickling through slowly. In the end, demand is very high for the most part, and the UK is not really the hottest market to sell in. We always get ripped off by a 20% bump anyway; it'll be the same with Fermi too.

Still, I got my card 3 months ago for just under £300 (around £15 more than a direct $ conversion + VAT), so I'm not feeling hard done by. Also, VAT just returned to 17.5%, bumping up the MRRP by around £6-7 or so anyway (whether or not this has been realised is another matter).

P.S. Lambdatek are very good for selling at the MRRP, and their card is £291, up from £278 3 months ago, so I'm assuming VAT has definitely kicked back in.


----------



## Tatty_One (Jan 15, 2010)

bobzilla2009 said:


> Yep, but the cards are trickling through slowly. In the end, demand is very high for the most part, and the UK is not really the hottest market to sell in. We always get ripped off by a 20% bump anyway; it'll be the same with Fermi too.
> 
> Still, I got my card 3 months ago for just under £300 (around £15 more than a direct $ conversion + VAT), so I'm not feeling hard done by. Also, VAT just returned to 17.5%, bumping up the MRRP by around £6-7 or so anyway (whether or not this has been realised is another matter).
> 
> P.S. Lambdatek are very good for selling at the MRRP, and their card is £291, up from £278 3 months ago, so I'm assuming VAT has definitely kicked back in.




That's a very nice deal. However, "trickling through slowly" after 3 months to me isn't really good enough, especially if this article is true and there are no yield issues. Methinks AMD probably realised at some point they could get the drop on Fermi, so they pushed the release even quicker to gain as much advantage as possible... That makes sense and is understandable, but in the end it's the poor old consumer that often has to pay top dollar if they want to get their hands on one early... although I appreciate that wasn't the case for you.

Edit: just checked Lambdatek. They list about 9 HD 5850s, none in stock; they list about 10 HD 5870s and have one in stock, but that's one of the more expensive ones (£330, that's $535). I shall keep my eye out there though, because if they do get stock in at the prices advertised I will be on to one!


----------



## laszlo (Jan 15, 2010)

Nvidia didn't learn from past yield problems (8800 GTX)... and it's too late now to retreat, so no matter how many good chips they squeeze from a wafer, prestige and credibility are at stake now; we'll see.


----------



## bobzilla2009 (Jan 15, 2010)

Tatty_One said:


> That's a very nice deal. However, "trickling through slowly" after 3 months to me isn't really good enough, especially if this article is true and there are no yield issues. Methinks AMD probably realised at some point they could get the drop on Fermi, so they pushed the release even quicker to gain as much advantage as possible... That makes sense and is understandable, but in the end it's the poor old consumer that often has to pay top dollar if they want to get their hands on one early... although I appreciate that wasn't the case for you.
> 
> Edit: just checked Lambdatek. They list about 9 HD 5850s, none in stock; they list about 10 HD 5870s and have one in stock, but that's one of the more expensive ones (£330, that's $535). I shall keep my eye out there though, because if they do get stock in at the prices advertised I will be on to one!




Indeed, they do usually get the stock in, but since they are selling at probably the cheapest prices for most cards, it gets swept up very, very quickly. Lambdatek are generally a very good e-shop anyway, although their selection of some items (cases, for example) can be a bit limited.


----------



## H82LUZ73 (Jan 15, 2010)

Goodman said:


> Man!! I forgot about this show; what was it called again?
> I just can't remember...?
> 
> EDIT: Found it on Google, it's Family Matters. Anyhow, thanks for making me remember that funny show!
> BTW: I still would not believe anything from Fudz & the Inquirer ...



It was actually a spin-off from another show, Perfect Strangers, where the mom in Family Matters was the elevator operator.

Ha, found it.


----------



## Super XP (Jan 17, 2010)

AMD has already sold a record 2 million HD 5000 series cards, which is not bad considering the delays in the 40nm process. From what I've read, they are beginning to ramp up full force in less than 2 weeks' time. Nvidia has a lot of catching up to do, but that's good competition.


----------



## imperialreign (Jan 17, 2010)

Biggest drawback I see, ATM, with nVidia taking so long to get to market - they've given up a lot of "new series" sales to the red camp.  At the rate these delays are going, ATI's next series will be right behind nVidia's release . . .

Regarding the article at Fud (or any other site) . . . like all tech "rumors," I take them all with a full shaker of salt.  Sometimes they're right, sometimes not . . . we won't know for sure until the hardware is starting to circulate.

But . . . y'all know how much we love speculation here at TPU


----------



## Super XP (Jan 17, 2010)

Fermi is taking long because it's going to be more than just your average graphics card. If NVIDIA sticks to their original Fermi plan, they are going to have a monster, despite what internal sources say about it being only a mild 20% boost in performance vs. their best today. But AMD already has something coming, and it's called the HD 5900 series.


----------



## imperialreign (Jan 17, 2010)

Super XP said:


> Fermi is taking long because it's going to be more than just your average graphics card. If NVIDIA sticks to their original Fermi plan, they are going to have a monster, despite what internal sources say about it being only a mild 20% boost in performance vs. their best today. But AMD already has something coming, and it's called the HD 5900 series.



It wouldn't surprise me. TBH, I've been under the impression for a while that the delays are either due to an inability to control thermal output, or that the performance hasn't been up to their expectations. Supposedly, Fermi is a completely new architecture for nVidia, and I get the definite feeling that it's the R&D that is slowing things up.

Either way, it's all speculation at this point, as nVidia themselves haven't coughed up much info . . . only some occasional hype.

It's taking so long, though, that I'm sure ATI has something planned for whenever green gets to the shelf.


----------



## El_Mayo (Jan 17, 2010)

nVidia needs to hurry up and release whatever they're building, so ATi cards become cheaper.
Also, what exactly is a yield?


----------



## HalfAHertz (Jan 17, 2010)

They have been spreading themselves too thin with all that Tegra and Ion business, haven't they? Still, the way I see things: Nvidia is late with this gen, but they will have a new architecture, whereas ATi has reused a three-generation-old architecture. ATi will be in the same position with their next release; they need a new architecture, but at least they have a head start with the DX11 release and hopefully will not experience any delays.


----------



## yogurt_21 (Jan 18, 2010)

HalfAHertz said:


> They have been spreading themselves too thin with all that Tegra and Ion business, haven't they? Still, the way I see things: Nvidia is late with this gen, but they will have a new architecture, whereas ATi has reused a three-generation-old architecture. ATi will be in the same position with their next release; they need a new architecture, but at least they have a head start with the DX11 release and hopefully will not experience any delays.



I've seen this a lot over the past few days, but nothing has been posted to back it up. All the specs I've seen on Fermi seem to indicate a beefed-up GT200 with DX11 support. Any sources to back up the claim that Nvidia has left the unified shader architecture?

I find it very funny when people on this forum use "designed from the ground up" without knowing anything about what that means.

If it had been designed from the ground up, it would use a completely different shader design, not just a different number. In all truthfulness, neither team has had a new internal architecture redesigned from the ground up since the unified shader architecture was released. And I seriously doubt Nvidia's Fermi is either, based on all the reports.

So let's go out to the net and see.

http://www.brightsideofnews.com/new...ure-unveiled-512-cores2c-up-to-6gb-gddr5.aspx

unified shader architecture

http://www.nvidia.com/object/fermi_architecture.html

unified shader architecture

http://www.bit-tech.net/news/hardware/2009/09/30/huang-reveals-fermi-architecture/1

unified shader architecture

I'm sorry, am I missing something? Fermi, like most new series, has several feature bumps over the prior generation, but it doesn't seem anywhere near "designed from the ground up".
You're thinking of the 8800 series there.


----------



## pantherx12 (Jan 18, 2010)

El_Mayo said:


> nVidia needs to hurry up and release whatever they're building, so ATi cards become cheaper.
> Also, what exactly is a yield?





I shall teach you something that will help you forever.

Google has several built-in tools, one being the "define" tool.

Put

define:

into Google, followed by the word you don't know, and it lets you know : ]

Much faster than asking.


----------



## imperialreign (Jan 18, 2010)

yogurt_21 said:


> If it had been designed from the ground up, it would use a completely different shader design, not just a different number. In all truthfulness, neither team has had a new internal architecture redesigned from the ground up since the unified shader architecture was released.



Agreed.  Regarding the red camp's design, the last new architecture was the R600 . . . and, as you had mentioned, that was the start of the red camp's unified shader architecture.  All newer GPU cores from ATI since have been revisions of that design.


----------



## crazyeyesreaper (Jan 18, 2010)

True, but each revision seems to double its performance, at least when comparing the xx70 variants:

3870 - 4870 - 5870. So while it may be the same arch, it's definitely proven to be very scalable in terms of performance. But eventually that will reach its end, because nothing can scale that way indefinitely, so we will have to see what the 6k series brings.


----------



## Steevo (Jan 18, 2010)

The 6K series is a whole new design; multi-core, according to the rumour mills.


----------



## trt740 (Jan 18, 2010)

The boys at GeForce are building a monster, it appears.


----------



## phanbuey (Jan 18, 2010)

Well, they built a monster... on paper... but the people who actually make the chip can't seem to build it.


----------



## eidairaman1 (Jan 18, 2010)

Rest assured, TSMC will have trouble building the part just as much as they did with ATI's stuff; that's why I think the Radeon HD 6 series will be built by GloFo.


----------



## HalfAHertz (Jan 18, 2010)

yogurt_21 said:


> I've seen this a lot over the past few days, but nothing has been posted to back it up. All the specs I've seen on Fermi seem to indicate a beefed-up GT200 with DX11 support. Any sources to back up the claim that Nvidia has left the unified shader architecture?
> 
> I find it very funny when people on this forum use "designed from the ground up" without knowing anything about what that means.
> 
> ...




True, but you have to remember that the shaders are just ~50% of the GPU. There's A LOT more happening behind the scenes than just rasterization.


----------



## bobzilla2009 (Jan 18, 2010)

eidairaman1 said:


> Rest assured, TSMC will have trouble building the part just as much as they did with ATI's stuff; that's why I think the Radeon HD 6 series will be built by GloFo.



Well, GloFo was spun off from AMD last March, so that's a huge factor too. It should mean a lot more cards streaming into the market next time round, compared to how slowly the HD 5xxx cards have been coming out.


----------



## btarunr (Jan 18, 2010)

http://www.youtube.com/watch?v=dlgzfwIrmIc


----------



## eidairaman1 (Jan 18, 2010)

NV launch reminds me of this

http://www.youtube.com/watch?v=WOVjZqC1AE4


----------



## Benetanegia (Jan 18, 2010)

yogurt_21 said:


> I've seen this a lot over the past few days, but nothing has been posted to back it up. All the specs I've seen on Fermi seem to indicate a beefed-up GT200 with DX11 support. Any sources to back up the claim that Nvidia has left the unified shader architecture?
> 
> I find it very funny when people on this forum use "designed from the ground up" without knowing anything about what that means.
> 
> ...



Well, if you take a look at the TPU front page and follow the correct article, you'll learn how wrong you are. I too thought that when it came to graphics, it was going to be like GT200, just a 2x GT200 + DX11 and "software" (ISA-level, actually) tessellation. Boy, was I wrong! We were all so wrong, especially about the tessellation part. "It will not have a dedicated tessellator": how many times have we heard that? Now the only thing I can think about the origin of that sentence is that Nvidia said "It will not have *ONE* dedicated tessellator...", whispering so low no one could hear, "ahem, *it has 16*! Along with 4 triangle setup and raster engines too!"

Also, the texture units run at the shader clock; in fact, everything except the ROPs and memory runs at the shader clock, so yeah, it's really different. Everything except the ROPs has been moved into what they now call a GPC, which is a GPU core for all intents and purposes. Fermi is a multi-core GPU!!


----------



## wolf (Jan 18, 2010)

Even the 448-SP version will be a 5870 killer, and it looks like things will go back to how they have been time and time again:

To own THE BEST graphics subsystem means you will use Nvidia cards. That is Nvidia's bread and butter, imo, and they will have many buyers (e.g. the likes of TPU members) from that fact alone.


----------



## Benetanegia (Jan 18, 2010)

I don't think there's going to be a 448 SP version. I don't see how exactly they could disable only 64 SPs on that architecture. Unless they can disable individual SPs instead of requiring the old method of disabling clusters? Now that I think about it, maybe that's what their vastly improved scalability claims have always been about?


----------



## wolf (Jan 18, 2010)

Potentially so. I figure that since the SPs are in clusters of 32, they would be able to laser-cut up to two defective ones?


----------



## Benetanegia (Jan 18, 2010)

wolf said:


> Potentially so, I figure since the sp's are in clusters of 32, they would be able to laser cut up to two defective ones?



I don't know... that would leave 2 GPCs with 4 clusters and 2 with 3, or a 4-4-4-2 arrangement. Feasible, maybe, but...


----------



## Binge (Jan 18, 2010)

The GT200 could disable cores entirely via software.


----------



## eidairaman1 (Jan 18, 2010)

wolf said:


> Even the 448sp version will be a 5870 killer, and it looks like things will go back to how they have been time and time again;
> 
> To own THE BEST graphics subsystem, means you will use Nvidia cards. That is Nvidia's bread and butter imo, and they will have many buyers (eg the likes of TPU members) just from that fact alone.



you are obviously blinded by your own fanaticism.


----------



## Benetanegia (Jan 18, 2010)

Binge said:


> The GT200 could disable cores entirely via software.



I have never heard of that. It could disable SMs (8 SPs), but I have never heard of disabling individual SPs. The only SP software-disabling I have ever heard of was the one related to power states, which is quite a different thing, so if you have a link or something I would appreciate it.


----------



## Binge (Jan 18, 2010)

Benetanegia said:


> I have never heard of that. It could disable SMs (8 SPs), but I have never heard of disabling individual SPs. The only SP software disabling that have ever heard of, was the one regarding the power states too, quite a different thing, so if you have a link or something I would appreciate.



That's entirely my point.  Even if it's only disabled for power states, what difference does it make?  If they can disable the SP, then they can disable it.


----------



## eidairaman1 (Jan 18, 2010)

you 2 are probably talking about the exact same thing.


----------



## Benetanegia (Jan 18, 2010)

eidairaman1 said:


> you 2 are probably talking about the exact same thing.



No. If they can be software disabled, they can be software enabled and that's not going to happen.



Binge said:


> That's entirely my point.  Even if it is disabled for power states, and what difference does it make?  If they can disable the SP then they can disable it.



It's a different thing actually: when "disabled" for a low-power state they are not actually (100%) disabled, they are just on standby. So you're not implying that they can disable individual SPs, then?

My point is: can they disable SP clusters (SMs)? Yes, but that means 32 SPs have to be disabled at a time, and because the chip is arranged in 4 GPCs the most logical thing is to disable one in every GPC, so 128 SPs disabled. Otherwise you end up with a highly asymmetrical chip, and I don't think that would work very well performance-wise. But I'm just going by what's been standard until now, and that might be a mistake...


----------



## Binge (Jan 18, 2010)

According to Charlie the chip "doesn't work".  I don't feel like having a debate over something so petty...  The fact is, they say they have done exactly what you propose would be a bad thing.


----------



## air_ii (Jan 18, 2010)

I don't think they're talking about the same thing. Binge is talking about power saving features, while Benetanegia is talking about disabling the cores due to bad yields, which is a totally different story.

As to Charlie, as much as it's fun to read his articles, his hatred towards nVidia makes him 'exaggerate' a bit on some occasions.


----------



## Binge (Jan 18, 2010)

I'm more or less talking about how engineers can make the hardware do whatever they want it to do, and they've shown the ability to affect individual shaders with simple software.  My argument is that it doesn't seem impossible/unlikely/unreasonable for engineers to disable single cores.


----------



## Benetanegia (Jan 18, 2010)

Binge said:


> I'm more or less talking about how engineers can make the hardware do whatever they want it to do, and they've shown ability to affect individual shaders with simple software.  My argument is that it doesn't seem impossible/unlikely/unreasonable for engineers to disable single cores.



I never said it was impossible or unreasonable. In fact, I was leaning towards that possibility, because I considered that option optimal. It just happens that I had never heard of it; that's why I asked.

And maybe I explained it wrong, but the reason I say they can't disable them in software is that it would make it possible to unlock the cards, and it is very unlikely they would leave that option open. I think they don't laser-cut anymore, but their techniques are equally effective nowadays AFAIK.


----------



## W1zzard (Jan 18, 2010)

it's possible to make a register write-once by including a write-protect bit in that register's data that gets set once initialization is done

the challenge here however is that you need to put the code that sets the write-protect bit in a safe place, not the vga bios or the driver code, which can both be reverse engineered and modified


----------



## Benetanegia (Jan 18, 2010)

W1zzard said:


> not the vga bios or the driver code which both can be reverse engineered and modified



Yeah, maybe you'll have to forgive my lack of proficiency, but that's what I was trying to say with "can't do it in software". Software as in drivers/BIOS.

Also, maybe I'm being slow and this has already been cleared up, but I have not seen an answer to this:

Can individual shader processors be disabled? I have always thought (and had not seen any evidence to the contrary) that you had to disable the entire SM, because only the SM is connected to the rest of the chip. If you could answer this, I would appreciate it a lot.


----------



## wolf (Jan 18, 2010)

eidairaman1 said:


> you are obviously blinded by your own fanaticism.



Lately Nvidia have tended to make bigger, more powerful GPUs than ATI, and it's shaping up very much that way this time around.

Two GF100s vs two 5970s: I know where I'd put my money for a better gaming experience, and I know for a fact many people will put their money there too.

Remember this is coming from somebody who has a 5870 and is loving the experience, having come straight from the likes of GTX 260 SLI and a GTX 295; a single GPU that beats two together is an awesome experience.

Not to mention, what's wrong with a little fanaticism? You say it like it's a bad thing 

But I guess you're right, I'm quite obviously blinded by wanting GF100 to paste Cypress; lord knows I don't want it to be on par.


----------



## bobzilla2009 (Jan 18, 2010)

if it's on par, it'll be a massive disappointment, especially since it's 6 months late.


----------



## yogurt_21 (Jan 18, 2010)

Benetanegia said:


> Well, if you take a look at the TPU front page and follow the correct article, you'll learn how wrong you are. I too thought that when it came to graphics, it was going to be like GT200, just a 2x GT200 + DX11 and "software" (ISA level, actually) tesselation. Boy, I was wrong! We were all so wrong, especially with the tesselation part. "It will not have a dedicated tesselator". How many times we have heard that? Now the only thing I can think about the origin of that sentence is that Nvidia said "I will not have *ONE* dedicated tesselator..." and whispering so low no one could hear "ahem, *it has 16*! Along with 4 triangle setup and raster engines too!".
> 
> Also the texture units run at the shader clocks, everything except the ROPs and memory runs at shader clock in fact, so yeah, it's really different. Everything except ROPs has been moved to what now they call GPC, which is a GPU core for all intents and purposes. Fermi is a multi-core GPU!!



Hmmm, I'm afraid you failed: nowhere in your post did you mention a departure from the unified shader architecture.

More ROPs, more shaders, DX11, CUDA improvements including how each thread is handled, sure. But these are all improvements on what existed in GT200, not a whole new line built from the ground up. Take a look at the 7800 series, then take a look at the 8800 series: that was an actually different architecture.


----------



## Benetanegia (Jan 18, 2010)

yogurt_21 said:


> hmmm I'm afraid you failed. nowhere in your post did you mention a differentiation from the unified shader architecture.
> 
> more rop's, more shader, dx 11, cuda improvements including how each thread is handled, sure. but these are all improvements on what existed in gt200 not a whole new line built from the ground up. take a look at the 7800 series, then take a look at the 8800 series, that was an actually different architecture.



Have you seen the slides in W1zzard's article? It's very different; it has nothing to do with GT200. I don't know what you expect. Nothing is going to be as different as moving to unified shaders, but the two architectures have very little in common, and it's obvious it's been built from the ground up.

This (GT200 block diagram): http://tpucdn.com/reviews/Zotac/GeForce_GTX_280_Amp_Edition/images/architecture.jpg

is not the same as this:

[GF100 block diagram]

It's not even close to being the same, man. You are never going to see a change as big as the move to unified shaders again, but that doesn't mean chips won't be built from scratch. They will just end up similar, because GPU design has already come a long way. You can't reinvent the wheel, but you can start one from scratch and change what's inside and how it works; it's just that you will always end up with a round shape. You will end up with a wheel, no more, no less.


----------



## bobzilla2009 (Jan 18, 2010)

It seems Nvidia have some major tessellation horsepower going for them, as the Heaven benchmark has shown  Now let's hope that since both companies have pretty decent tessellation (sure, the sub-HD58xx series can't do it as effectively, but Nvidia doesn't even have a high or mid range out yet lol) devs will use it for more than just a few extra ripples in a puddle.


----------



## 3volvedcombat (Jan 18, 2010)

I sense the GF100 512 stream processor card could be overclocked vastly and end up as fast as an HD 5970.

My 260s are going up for sale once I'm solid on release dates.


----------



## KainXS (Jan 18, 2010)

Not the same. The G80, G92, and GT200 were all loosely based on each other, just as the R600, RV670, RV770, and RV870 are, but the GF100 is not very similar to the GT200 at all. It may be unified, but you can't say they are similar just because they're unified; so is the R600+ line of GPUs, and those are nowhere near similar to Nvidia's.


----------



## Black Panther (Jan 18, 2010)

To date, a single new GPU has never beaten the previous generation's top dual-GPU card, and it's highly unlikely to happen now...

I've also read it's very likely Fermi won't be beating the 200 series in SLI.

What I expect is a card which would be 15-30% faster than the 5870.

At least that has been the trend evident from previous generations.


----------



## 3volvedcombat (Jan 18, 2010)

Black Panther said:


> To date a single new gpu is highly unlikely and has never as yet beaten the previous generation top gpu x2....
> 
> I've also read it's very likely the Fermi won't be beating any of the 200 series in Sli.
> 
> ...



Well, since the specs have been posted for the GF100,
as in 512 stream processors,
at least 200 GB/s of bandwidth,
cache on the cores, and 40nm tech,

I couldn't see it not performing better than dual 285s. Why? The only reason it would perform worse than 285s is if its clocks were lowered to dirt-cheap speeds, and that's probably not the case. And it will be a single-card solution, which in turn means you don't have to cope with a second GTX 285 not being supported in games, or only getting a 50% increase in supported games.


----------



## Benetanegia (Jan 18, 2010)

KainXS said:


> not the same,  the G80, G92, and GT200 were all loosely based on eachother, just as the R600, R670, R770, and R870 are but the GF100 is not very similar to the GT200 at all, it may be unified but you can't really say since its unified they are the similar, so is the R600+ line of GPU's,* and they are nowhere similar to nvidia's.*



And still, Cypress is more similar to GT200 than to GF100:

[Cypress block diagram]
It has the front end (with the tessellator included there), all connected to the SIMD arrays, which have the texture units attached to them, then connected to the back ends. These SIMD arrays are only connected to local memory and can only write to L2.

GF100, on the other hand, has 4 half/incomplete front ends embedded inside the GPCs, commanded by the GigaThread engine, which sits above them and completes what is usually described as the front end. In any case the GPC looks like a mere description (or something related to hardware layout) and has nothing to do with the architectural arrangement, because at execution time no specific hardware seems to be tied only to the hardware inside its GPC. All the units can read/write global memory and hence should be able to work on anything any other unit anywhere in the GPU was working on.

VERY different.


----------



## yogurt_21 (Jan 18, 2010)

Benetanegia said:


> Have you seen the slides in Wizzard's article? It's very different, it has nothing to do with GT200. I don't know what do you expect, nothing is going to be as different as moving to unified shaders, but the two archtectures have very little in common and it's obvious it's been built from the ground up.
> 
> This: http://tpucdn.com/reviews/Zotac/GeForce_GTX_280_Amp_Edition/images/architecture.jpg
> 
> ...



Ironically, my expectations are not about Nvidia but about how people use their terms. "Ground up" signifies a new *FOUNDATION*, and we do not have that here; we have a bumper-to-bumper overhaul, or an interior gutting and redesign (like gutting an old home to change the entire floor plan while keeping the exterior walls and foundation intact).

I didn't mean to suggest that this GPU would be another mere revision card, just that "built from the ground up" is a bit of a stretch. I see this more as a GeForce 5800 series level enhancement, versus the GeForce 4, which was a redesign of the GeForce 3, which was a redesign of the GeForce 2. Now don't get me wrong, I don't see this being a flop like the 5800 series was, just that the build-up from both camps is similar: ATI with the 9800 series, a redesigned GPU, and Nvidia with the 5800 series, a new style of architecture.

To me, built from the ground up = the 8800 series: something that changes the entire platform of the industry.


----------



## Benetanegia (Jan 18, 2010)

Simply put, as I see it, you are overestimating the change in the 8800 then (and I guess you feel the same about the HD 2900 too).


----------



## yogurt_21 (Jan 18, 2010)

Benetanegia said:


> Simply put, as I see it, you are overestimating the change on the 8800 then (and I guess you feel the same with HD2900 too).



Actually I think you're underestimating it. Prior to the unified shader architecture, they had to design a GPU to handle each type of shader, pixel and vertex, using separate units, which often meant half the GPU sat idle in pixel- or vertex-intensive games. Not ideal.

With the unified shader architecture, suddenly each core could perform either task. This revolutionized the industry for both hardware and software manufacturers.

I see nothing in Fermi that will cause the same reaction.

http://www.techpowerup.com/reviews/NVIDIA/G80/

this is what i was trying to get you to take a look at versus

http://www.neoseeker.com/Articles/Hardware/Previews/G70preview/3.html
http://img.neoseeker.com/v_image.php?articleid=1807&image=17

see that image versus

http://www.techpowerup.com/reviews/NVIDIA/G80/images/schematic.jpg


Suddenly you see one set of shaders versus two oriented for different purposes. It did have a huge impact.

Dunno where you were during that time, or how much you were into prior GPU architectures, but dude, you missed something big.


----------



## Benetanegia (Jan 18, 2010)

yogurt_21 said:


> actually I think you're underestimating it, prior to the unified shader architecture they had to design a gpu to handle each different type of shaders pixel and vertex using separate units. which often meant half of the gpu was being idle in either pixel or vertex intensive games, not ideal.
> 
> with the unified shader architecture suddenly each core could be used to perform either action this revolutionized the industry for both hardware and software manufacturers.
> 
> ...



lol, I know what unified shaders meant. I've been following tech sites since forever, man. Apparently it's you who hasn't been around for very long, and hence the jump to unified shaders means more to you than it really is, historically speaking. The introduction of shaders was an even more important change, and maybe even more important was the introduction of hardware T&L. Unified shaders just mean you can improve load balancing by running both vertex and pixel shaders on the same processors. It really isn't that big of a change in reality: coders have not changed their way of designing, and they have not vastly increased the use of vertex or pixel processing in either direction. Current engines have about the same balance relative to each other; there is not one with astronomic vertex processing and one which only does pixel shading. In the end the unified architecture just meant a more efficient use of the hardware, but it didn't change anything.

What we have in Fermi is at least as innovative as that. We have an architecture optimized for both graphics and GPGPU; it will not make a big difference for either on its own, but it's the only architecture that permits doing both at the same time efficiently. It has also taken something that was supposed to be a fixed function running on dedicated hardware and made it moldable, with expanded functionality. I'm talking about their approach to tessellation.

All in all, either you overestimate unified shaders or you underestimate all the changes in Fermi.


----------



## Super XP (Jan 19, 2010)

imperialreign said:


> Agreed.  Regarding the red camp's design, the last new architecture was the R600 . . . and, as you had mentioned, that was the start of the red camp's unified shader architecture.  All newer GPU cores from ATI since have been revisions of that design.


Didn't AMD scrap the ring bus for the crossbar switch on the R600, or am I thinking of the previous-gen cards? I do know they got a nice performance boost after they scrapped it. It looks to me like once AMD took over, ATI cards gained big performance boosts at extremely reasonable prices.


----------



## eidairaman1 (Jan 19, 2010)

R520 and R600 had the ring-bus memory topology; R700 had the crossbar.


----------



## bobzilla2009 (Jan 19, 2010)

Indeed, ATI has become far more price/performance based in the last few years, it would seem. This generation is probably going to end up one of their most successful since AMD took over (relative to Nvidia sales; overall sales will be lower due to the recession etc.). Still, I'll be very interested to see what the HD 6 series has to offer if it is to be based on a new architecture, as the rumor mill would spew out (although it seems highly likely).


----------



## trt740 (Jan 19, 2010)

Release date is March, correct?


----------



## yogurt_21 (Jan 19, 2010)

Benetanegia said:


> lol I know what unified shaders meant.  I've been following tech sites since forever man. Aparently it's you who has not been around for too long and hence the jump to unified shaders means more than it really is, historically seaking. The introduction of shaders was an even more important change and maybe even more important was the introduction of hardware T&L. Unified shaders just means you can improve load balancing by running both vertex and pixel shaders on the same processor. It really isn't that big of a change in reality, coders have not changed their way of designing, they have not vastly increased the use of vertex processing or pixel processing in any direction. Current engines have about the same balance, compared to each other, there is not one with astronomic vertex processing and one wich only does pixel shading and in the end unified architecture just meant a more efficient use of the hardware, but didn't change anything.
> 
> What we have in Fermi is at least as innovative as that. We have an architecture optimiced for both graphics and GPGPU and will not make a big difference either for both, but it's the only architecture that permits doing both at the same time efficiently. It has also taken something that was suposed to be a fixed function running on dedicated hardware and has made it moldable, with expanded functionality. I'm talking about their aproach to tesselation.
> 
> All in all, either you overestimate the unified shaders or you underestimate all the changes in Fermi.



sigh, http://news.yahoo.com/comics/non-sequitur#id=/comics/uclickcomics/20100117/cx_nq_uc/nq20100117


----------



## Benetanegia (Jan 19, 2010)

yogurt_21 said:


> sigh, http://news.yahoo.com/comics/non-sequitur#id=/comics/uclickcomics/20100117/cx_nq_uc/nq20100117



Ok, wow, thanks man, lesson taken. You really gave me no option but to adopt the philosophy: I will not argue with you.

If you look at the new info given yesterday and still don't see that Fermi is completely different, I'll leave it there. Thanks man, seriously.


----------



## Saakki (Jan 19, 2010)

I'm pretty impressed with its horsepower. The price is, though, way too high here in Scandinavia... I bet 200 € over the original price...


----------



## Super XP (Jan 19, 2010)

bobzilla2009 said:


> Indeed, ATi has become far more price/performance based in the last few years it would seem. This generation is probably going to end up one of their most successful since amd took over (relative to nvidia sales, the sales will be lower overall due to recession ect.). Still, i'll be very interested to see what the hd6 series has to offer if it is to be based on a new architecture, as the rumor mill would spew out [although it seems highly likely].


AMD releasing a brand new built-from-the-ground-up design all depends on NVIDIA's Fermi and how well it performs. Though I do believe it's going to arrive with the R800 or R900.
If AMD can keep up its price/performance, they will continue the route they've been on ever since AMD took over ATI, and perhaps not release the new design in a rush.


----------



## Benetanegia (Jan 19, 2010)

Saakki said:


> Im pretty impressed with its horsepower. Price is tough way too high here in scandinavia..i bet 200 e over the original price..



200 euros over the original price as in $500 = 350 € --> +200 = 550 €

or

US price $500 --> EU price 500 € --> scandinavian price 700 €.


----------



## bobzilla2009 (Jan 20, 2010)

To be honest, I would say the price when it lands will be somewhere between the two values. The HD 5870 currently sells for around £320-350 (~400 €/$550) in the UK. I was kind of lucky when I bought mine for just under £300 four months ago.

I'm expecting a price tag around 600 € (~£500) for the higher Fermi card, and around £450 for the lower-end one. At those prices only real morons will end up buying them XD. They'll even make the HD 5870 look like absolutely amazing value at the current overblown UK prices ^^ (which should receive a nice price cut in March, I'd imagine; ATI have filled up at the enthusiast buffet).


----------



## eidairaman1 (Jan 20, 2010)

bobzilla2009 said:


> Indeed, ATi has become far more price/performance based in the last few years it would seem. This generation is probably going to end up one of their most successful since amd took over (relative to nvidia sales, the sales will be lower overall due to recession ect.). Still, i'll be very interested to see what the hd6 series has to offer if it is to be based on a new architecture, as the rumor mill would spew out [although it seems highly likely].



Hell, I'd like to see what the refreshes bring.


----------



## Saakki (Jan 20, 2010)

Benetanegia said:


> 200 euros over the original price as in $500 = 350 € --> +200 = 550 €
> 
> or
> 
> US price $500 --> EU price 500 € --> scandinavian price 700 €.



Yep... we are facing an about 700 € card  That's something.

Most expensive from the cheapest Finnish e-store: "XFX Radeon HD 5970 2GB Black Edition graphics card for the PCI-E bus. The king of the hill is here, offering graphics grunt to spare. 733.90 €"

Cheapest: "Asus Radeon EAH5970/G/2DIS/2GD5 2GB PCI-E graphics card for the PCI-Express bus. Chipset: ATI Radeon HD 5970. Memory: 2GB. 601.90 €"


----------



## Super XP (Jan 20, 2010)

eidairaman1 said:


> Hell id like to see what the Refreshes Bring


The same happened with the HD 4870 and its refresh. But shortly after, the better HD 5800s were released. I would just wait around for the HD 6800s. They should be out well before Q3 2010, I assume.


----------



## HalfAHertz (Jan 20, 2010)

Super XP said:


> The same happend with the HD 4870's and its refresh. But shortly after the better HD 5800's were released. I would just wait around for the HD 6800's. They *should be out well before Q3 2010* I assume.




Somehow I highly doubt that.


----------



## pantherx12 (Jan 20, 2010)

Super XP said:


> The same happend with the HD 4870's and its refresh. But shortly after the better HD 5800's were released. I would just wait around for the HD 6800's. They should be out well before Q3 2010 I assume.




No way at all.

18 months minimum between major GPU updates if you look back through the years.


----------



## afw (Jan 20, 2010)

pantherx12 said:


> No way at all.
> 
> 18 months minimum between major GPU updates if you look back through the years.



+1

I too think that AMD will release the 6000 series around Q2/Q3 2011 ... but we might get info and previews (benchmarks) by Q1 2011 ...

I hope Fermi lives up to expectations ...  ... so we can benefit from price drops on the 5800 series ....


----------



## Fishymachine (Jan 20, 2010)

afw said:


> +1
> 
> I too think that AMD will release the 6000 series around Q2/Q3 ,2011 ... but we might get info and previews (benchmarks) by Q1 2011 ...
> 
> I hope FERMI lives up to its expectations ...  ... so we can benefit from price drops on the 5800 series ....



I somehow get the impression that AMD will move completely to GlobalFoundries and thus release the HD 6k on a 28nm SOI process very soon (maybe even Nov-Dec '10).


----------



## Super XP (Jan 20, 2010)

If AMD follows this trend, they may very well release something new in Q2-Q3 2010, OR a refresh of the Cypress design, all depending on NVIDIA's plan of attack  But looking closely, it seems as though AMD released the HD 3870 to rid themselves of the underperforming HD 2900 series. The HD 3870 was the first introduction of the crossbar switch.

Radeon HD 2900 (R600) series launched on May 14, 2007.
Radeon HD 3870 (RV670) series launched in Mid-November 2007.
Radeon HD 3870 X2 (R680) was launched on January 28, 2008.
*(approx: 6 months in between R600 & RV670)*

Radeon HD 4870 (RV770) series launched on June 25, 2008.
Radeon HD 4870 X2 (R700) series launched on August 12, 2008.
Radeon HD 4890 (RV790) series launched on April 2, 2009.
*(Approx: 10 months in between RV770 & RV790 re-fresh)*

Radeon HD 5870 (Cypress) series launched on September 23, 2009.
Radeon HD 5900 (Hemlock) series launched on October 12, 2009. 
*(Approx: 15 months in between the RV770 & Cypress)*

The specifications for the architectures of the R800 and R900 families are "closed", with the specifications of the R1000 family being developed.


----------



## bobzilla2009 (Jan 20, 2010)

Which sounds about right tbh. AMD are probably at GloFo right now with the very first designs of the HD 6870, seeing how they turn out. It's unlikely they'll deal with TSMC now that they own GloFo.


----------



## HalfAHertz (Jan 20, 2010)

bobzilla2009 said:


> Which sounds about right tbh, AMD are probably at glofo right now with the very first designs of the hd6870 seeing how they turn out. It's unlikely they'll deal with tmsc now that they own glofo.



They own only a relatively small portion of the stock at GF... ATIC owns the majority:
http://www.reuters.com/article/idCNLDE60J1TQ20100120


----------



## bobzilla2009 (Jan 20, 2010)

Ahhh right, brand new news  Still, it does seem AMD are making their chips there.


----------



## Goodman (Jan 20, 2010)

HalfAHertz said:


> They own only a relatively small portion of the stocks at GF...ATIC owns the majority:
> http://www.reuters.com/article/idCNLDE60J1TQ20100120



I'll be happy with 34.2% of a few million; hell, I'll take 34.2% of just one million any day 

No matter how small the profit is, it's always a plus.

As for ATI's new card, the 6000 series: I'm pretty sure they'll have a prototype running by next September, or will they wait to switch to GloFo at the end of 2010?


----------



## bobzilla2009 (Jan 20, 2010)

They'll definitely get a prototype done by September, even if they do it at TSMC. They'll only change to GloFo when it's not going to slow them down. I think they of all people now know how much of an advantage a few months is in the GPU industry.


----------



## Super XP (Jan 20, 2010)

It's extremely likely that AMD already has a prototype of the HD 6000 series up and running behind closed doors. It's probably just not yielding what it's supposed to yield.


> AMD holds 34.2% of the fabs while investor Advanced Technology Investment Company (ATIC)  holds the remaining 65.8%.


----------



## Black Panther (Jan 21, 2010)

The word "chugs" makes one go 



> *NVIDIA GF100 (Fermi) Graphics Card Chugs Along at CES; Benchmark Video Included*
> 
> NVIDIA's next generation graphics card based on the Fermi architecture, whose consumer variant is internally referred to as GF100 got to business at the CES event being held in Las Vegas, USA, performing live demonstration of its capabilities. The demo PC is housing one such accelerator which resembles the card in past sightings. Therefore it is safe to assume this is what the reference NVIDIA design of GF100 would look like. The accelerator draws power from 6-pin and 8-pin power connectors. It has no noticeable back-plate, a black PCB, and a cooler shroud with typical NVIDIA styling. The demo rig was seen running Unigine Heaven in a loop showing off the card's advanced tessellation capabilities in DirectX 11 mode. The most recent report suggests that its market availability can be expected in March, later this year. No performance figures have been made public as yet.



Link to video.
But apparently it's been removed due to "terms of use violation"


----------



## nt300 (Jan 23, 2010)

*Fermi to be the hottest single chip ever*

So sources within NVIDIA have stated that a single chip should end up close to 300W, so assume that a dual-Fermi card could go 600W+ 



> *Fermi to be the hottest single chip ever*
> No surprises there
> 
> ATI's key argument against Fermi might be that this chip will be the hottest ever. We can now confirm that it will definitely end up hotter and with a higher TDP than that of ATI's RV870, *Radeon HD 5870's* Cypress XT chip, but at the same time it will end up faster.
> ...


http://www.fudzilla.com/content/view/17356/65/


----------



## Super XP (Feb 1, 2010)

Well hurry up NVIDIA and release that blasted Fermi monster, so I can pick up the HD 5870 I've been waiting for at a cheaper price


----------



## xrealm20 (Feb 1, 2010)

+1 

Competition only helps pricing!


----------



## Super XP (Feb 1, 2010)

Yup. 
What NVIDIA doesn't know is AMD's sweet little secret, hidden away in their cubby hole, just waiting for Fermi's release and performance numbers. I believe AMD calls them the HD 5980 (based on the HD 5850) and the HD 5990 (based on a faster HD 5970)


----------



## Kantastic (Feb 1, 2010)

Super XP said:


> Well hurry up NVIDIA and release that blasted Fermi Monster so I can pick up my long waiting HD 5870 for a cheaper price



That's the attitude!


----------



## Hayder_Master (Feb 1, 2010)

bla bla bla, please there is something called benchmarking


----------



## Aleksa (Feb 1, 2010)

*Another unofficial possibility*

AMD ATi Radeon HD 5890 

- Codename RV890 "Cypress XTX".
- 2.6 Billion transistors on TSMC 40nm process.
- 24 SIMD Cores.
- Each SIMD Core Has 16 5D (Vec4+1) Processing Units (IEEE754-2008, FP32+FP64 MADD/FMA).
- 384 5D (Vec4+1) Processing Units at 875MHz .
- 1920 ALUs In Total.
- FP32 MADD/FMA : 1920 Ops/Clock.
- FP64 MADD/FMA : 384 Ops/Clock.
- SP (FP32) MADD/FMA Rate : 3.36 Tflops.
- DP (FP64) MADD/FMA Rate : 672 Gflops.
- 96 Texture Address Units (TA).
- 96 Texture Filtering Units (TF).
- INT8 Bilinear Texel Rate : 84 Gtexels/s
- FP16 Bilinear Texel Rate : 42 Gtexels/s
- 48 Raster Operation Units (ROPs).
- ROP Rate : 42 Gpixels/s.
- 875MHz Core.
- 384 bit Memory Subsystem.
- 1200MHz (4800MHz Effective) Memory Clock.
- 230.4 GB/s Memory Bandwidth.
- 1536MB GDDR5 Memory.

The HD 5890 is 20% - 25% faster than the HD 5870.
Availability end of March 2010, price 399.00 USD.
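For what it's worth, the rumored numbers above are at least internally consistent. Here's a quick sanity check, a sketch using only the figures from the (unofficial) post, none of which are confirmed by AMD:

```python
# Sanity-check the internal consistency of the rumored HD 5890 spec list.
# Every input below comes from the unofficial post above, not from AMD.

core_mhz   = 875          # rumored core clock
alus       = 24 * 16 * 5  # 24 SIMDs x 16 units x 5 ALUs (Vec4+1) = 1920
fp64_lanes = 384          # rumored FP64 ops/clock
tmus       = 96           # texture address/filter units
rops       = 48
bus_bits   = 384          # rumored memory bus width
mem_mhz    = 1200         # GDDR5 base clock (4x data rate -> 4800 MHz effective)

clock_hz    = core_mhz * 1e6
fp32_tflops = alus * 2 * clock_hz / 1e12                   # MADD/FMA = 2 flops/ALU
fp64_gflops = fp64_lanes * 2 * clock_hz / 1e9
texel_rate  = tmus * clock_hz / 1e9                        # INT8 bilinear, Gtexels/s
pixel_rate  = rops * clock_hz / 1e9                        # Gpixels/s
bandwidth   = (bus_bits / 8) * (mem_mhz * 4) * 1e6 / 1e9   # GB/s

print(fp32_tflops)  # 3.36  TFLOPS  (matches the list)
print(fp64_gflops)  # 672.0 GFLOPS  (matches)
print(texel_rate)   # 84.0  Gtexels/s (matches; FP16 rate is half, 42)
print(pixel_rate)   # 42.0  Gpixels/s (matches)
print(bandwidth)    # 230.4 GB/s   (matches)
```

So whoever wrote the list did the arithmetic correctly; that says nothing about whether the underlying 24-SIMD / 384-bit configuration is real.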


----------



## KainXS (Feb 1, 2010)

Something like that could completely screw the Fermi launch, but where in the world did you get such info . . . . .

I don't believe it; it's gotta be fake.


----------



## eidairaman1 (Feb 1, 2010)

Give us a Link to this data or did you pull it out of your ass?


Aleksa said:


> AMD ATi Radeon HD 5890
> 
> - Codename RV890 "Cypress XTX".
> - 2.6 Billion transistors on TSMC 40nm process.
> ...


----------



## xrealm20 (Feb 1, 2010)

eidairaman1 said:


> Give us a Link to this data or did you pull it out of your ass?



I haven't seen any specs on the "upcoming" ATi launch, but if the specs are anywhere near what was listed above, it could very well be just as powerful as Fermi, if not more so.

It's always fun to sit back and watch as these two companies try to one-up each other.


----------



## btarunr (Feb 1, 2010)

eidairaman1 said:


> Give us a Link to this data or did you pull it out of your ass?



He pulled it out of his ass.


----------



## Aleksa (Feb 1, 2010)

*unofficial possibility*

It does clearly say "unofficial possibility" and not ass, and in the IT industry there is as much info floating around as there are grains of salt in the sea ...


----------



## SUPERREDDEVIL (Feb 1, 2010)

hayder.master said:


> bla bla bla, please there is something called benchmarking



Agreed.. wait for Nvidia's release, run some benchmarks, and draw conclusions.


----------



## sinar (Feb 1, 2010)

A rumor is 75% true + 25% added preservative


----------



## eidairaman1 (Feb 1, 2010)

Aleksa said:


> It does clearly say "unofficial possibility" and not ass , and in IT industry there is lot of info as there is the grains of the salt in the sea ...



Since you have no links to back up what you just said, you really did pull that data out of your asshole.


----------



## Aleksa (Feb 2, 2010)

*investigative journalism*

Obviously quite a few people don't know what investigative journalism is and what it includes. They show their oblivious, subliminal state of mind and the fetishes that turn them on: "ass".


----------



## eidairaman1 (Feb 2, 2010)

Sorry, around here you need links, documents, and pictures with your name on a piece of paper next to the evidence. Word of mouth is not enough. At the moment any data you are providing without proof is frivolous to this topic. If it were opinion or wishful thinking I would understand, but this stuff you're providing without any proof is not enough. Your posts remind me so much of the Fermi launch (well, where is the damn part if you're so certain of the specs!)


----------



## Aleksa (Feb 2, 2010)

*Your knickers in a twist*

Obviously some people are not patrons of the fine arts. Do tell, it does clearly say "Another unofficial possibility", so don't get your knickers in a twist.


----------



## erocker (Feb 2, 2010)

Aleksa said:


> Obviously some people are not patrons of the fine arts. Do tell, it does clearly say "Another unofficial possibility", so don't get your knickers in a twist.



A better term would be your own speculation.


----------



## Aleksa (Feb 2, 2010)

*ATI vs Nvidia*

One thing is guaranteed: expect more options in just a few weeks, with the 5830 and other 5xxx variants (including ones with non-reference cooling solutions), plus the infamous Fermi variants slated for March.


----------



## Indra EMC (Feb 2, 2010)

Nvidia keeps delaying Fermi further and further; by the time it releases, everybody will have bought an ATI card and nobody will buy it anymore... even if it's twice as fast as Cypress.


----------



## dir_d (Feb 2, 2010)

Nvidia tweeted March 10th for the Fermi GTX 480 and GTX 470


----------



## btarunr (Feb 2, 2010)

Aleksa said:


> It does clearly say "unofficial possibility" and not ass , and in IT industry there is lot of info as there is the grains of the salt in the sea ...



And without even a source of that "unofficial possibility", it's easy to find out where that came from.


----------



## eidairaman1 (Feb 2, 2010)

btarunr said:


> And without even a source of that "unofficial possibility", it's easy to find out where that came from.



now thats funny


----------



## jaredpace (Feb 2, 2010)

You know, regarding the 5890 specs....  

The 384-bit bus and 24 SIMDs seem like total BS; however, the one thing giving it even an ounce of credibility is the fact that Cypress die shots have not surfaced. So I won't shrug it off completely yet.

Thanks to the fella who calculated all the theoretical output! ;P


----------



## btarunr (Feb 2, 2010)

Not only that, adding 320 SPs and so many more components isn't going to step up transistor count by just 600 million!


----------



## KainXS (Feb 2, 2010)

I think that's where he got it; old, but meh.

They could do something like that, you never know; I still think it's BS though.

And there's stuff wrong on that chart too if you look . . . . .


----------



## Super XP (Feb 2, 2010)

jaredpace said:


> You know, regarding the 5890 specs....
> 
> The 384bit & 24 SIMDS seems like total BS, however the one thing giving it even an ounce of credibility is the fact that Cypress die shots have not surfaced.  So I wont shrug it off completely yet.
> 
> Thanks to the fella who calculated all the theoretical output! ;P


I can't see AMD coming out with a 384-bit bus; it goes against its current design principles. If that were the case, we'd be talking about a total redesign of a GPU that already performs great. But the chart states they are doing so.


----------



## eidairaman1 (Feb 2, 2010)

Until the boards are out, I take any data with a grain of salt, and those who have no proof of their data are full of shit.


----------



## Aleksa (Feb 3, 2010)

*Right dates for Fermi*

As reported earlier, Fermi was expected between the 2nd and 10th of March, as the sources indicated: http://forums.techpowerup.com/showthread.php?p=1744678#post1744678 . The naming was a bit off, though; the whole IT industry was surprised and caught off guard by the new Nvidia Fermi naming.
As for the word "unofficial", I will continue to use it, as it is appropriate.


----------



## Indra EMC (Feb 4, 2010)

I'm really waiting for the Fermi GPU to release, so I can buy an ATI 5870.

That's because AMD always cuts its prices when a newer Nvidia card hits the market.


----------



## Bjorn_Of_Iceland (Feb 4, 2010)

myeah.. ima still wait and grab it. faster or slower than 5970.


----------



## nt300 (Feb 6, 2010)

NVIDIA is naming Fermi something different because Nvidia got caught re-branding old hardware as new and charging people more money. That must be the 10th or 12th time they've tried ripping people off.

Internal sources within Nvidia say Fermi is going to be a hot and power-hungry green goblin. The heat output and power draw will not be worth the performance gain from these cards. The best bet IMO is to wait for the Fermi refreshes to get released. I wouldn't touch the first batches; I'm no guinea pig, thank you

*Nvidia's Fermi Cards Said to Run Very Hot*
http://www.tomshardware.co.uk/nvidia-fermi-gpu-graphics,news-32524.html


----------



## Indra EMC (Feb 6, 2010)

nt300 said:


> NVIDIA is naming Fermi something different because Nvidia got caught re-branding old hardware as new and charging people more money. That's a  for Nvidia. They must have been the 10th or 12th time they've tried ripping people off.
> 
> Internal sources within Nvidia state Fermi is going to be a HOT & Power Hungry green goblin. The heat output and the power draw will not be worth the performance they are going to gain from these cards. The best bet IMO is to wait for the Fermi re-freshers to get released. I wouldn't touch the 1st batches, I'm no guinea pig thank U
> 
> ...



Nvidia

The Way It's Meant To Be Played: change an old GPU's sticker and say "this is a new GPU"


----------



## nt300 (Feb 6, 2010)

Isn't the max supported power draw for a PCIe x16 2.0 card 300W? Nvidia has a lot of work to do to fix its problems. I plan on picking up an HD 5870 as soon as prices go way down. The sooner Nvidia releases their oven-roasted Fermi, the faster I can get my ATI card.
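For reference, the 300W figure comes from adding up the PCI-SIG connector limits. A quick back-of-the-envelope check, assuming the 6-pin + 8-pin configuration seen on the CES card (these are the spec'd maxima, not measured draw):

```python
# PCIe 2.0 power budget for a card with 6-pin + 8-pin auxiliary connectors,
# per the PCI-SIG limits: 75 W from the x16 slot, 75 W per 6-pin, 150 W per 8-pin.
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

max_board_power = SLOT_W + SIX_PIN_W + EIGHT_PIN_W
print(max_board_power)  # 300 -> a chip near 300 W leaves essentially no headroom
```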


----------



## wahdangun (Feb 6, 2010)

So still no details about Fermi; wtf is Nvidia doing? I can't wait to buy an HD 5770, and with no competition at all the prices are really skyrocketing.


----------



## nt300 (Feb 6, 2010)

wahdangun said:


> so still no detail about fermi, wtf nvdia doing, because i can't wait to buy at HD 5770, and with no competition at all the price is really skyrocketing


I have the XFX Radeon HD 5770 1GB GDDR5 and love it. I'm saving up for the HD 5870 1GB card once prices go down, and I want to put my 5770 into my wife's PC once I get the chance.

Nvidia has no choice but to delay Fermi, unless they plan on people cooking eggs on their PCs lol


----------



## eidairaman1 (Feb 6, 2010)

wahdangun said:


> so still no detail about fermi, wtf nvdia doing, because i can't wait to buy at HD 5770, and with no competition at all the price is really skyrocketing



Just buy one or a 5850


----------



## Black Panther (Feb 6, 2010)

Prices are _increasing_? That's bad :shadedshu
I think this is the first time nvidia has let such a long period pass before answering a new release. It's already been 3 months since the 5970 was released, albeit it's as rare as a turtle with wings...

Well as of this moment the 5970 is certainly faster than fermi, for the reason that fermi doesn't actually exist as yet...


----------



## nt300 (Feb 6, 2010)

Black Panther said:


> Prices are _increasing_? That's bad :shadedshu
> I think this is the first time that nvidia let such a long period of time before a new release. It's been already 3 months that the 5970 was released, albeit it's as rare as a turtle with wings...
> 
> Well as of this moment the 5970 is certainly faster than fermi, for the reason that fermi doesn't actually exist as yet...


Yes, and ATI has already recorded more than 2 million HD 5000 cards sold, with that number easily set to pass several million.

The more ATI sells before Nvidia releases Fermi, the cheaper ATI can sell its cards once Fermi is finally released. All I know is Nvidia is getting killed across the board.


----------



## xrealm20 (Feb 6, 2010)

Yep, nVidia has really dropped the ball on this launch. I'd bet they thought the AMD product wasn't going to be able to compete, and got lazy.


----------



## nt300 (Feb 7, 2010)

Just wait for the HD 6000 series to arrive in late Q4 2010; then you will see Nvidia...


----------



## wiak (Feb 7, 2010)

Yes, the Radeon HD 5970 is faster, and yes, AMD will release a new next generation in a few months that is faster than the current HD 5970:
http://www.xbitlabs.com/news/video/...ck_for_the_Second_Half_of_2010_AMD_s_CEO.html

You shouldn't think NVIDIA is the best in the world, far from it; last I heard they released a rebranded GeForce 240 as the GeForce 340. What's next, rebrand a GTX 295 as a GeForce 460 and HOPE people won't notice?


----------



## Wile E (Feb 7, 2010)

Black Panther said:


> Prices are _increasing_? That's bad :shadedshu
> I think this is the first time that nvidia let such a long period of time before a new release. It's been already 3 months that the 5970 was released, albeit it's as rare as a turtle with wings...
> 
> Well as of this moment the 5970 is certainly faster than fermi, for the reason that fermi doesn't actually exist as yet...


----------



## H82LUZ73 (Feb 7, 2010)

Win with SAPPHIRE and SemiAccurate!
SAPPHIRE has teamed up with SemiAccurate.com to give away some prizes to SSC members!  Enter for your chance to win one of three prizes: a SAPPHIRE HD 5450 graphics card, a SAPPHIRE PURE 785G mainboard, or a future SAPPHIRE HD 5K series graphics card (which we can’t tell you about yet, but we will let you know as soon as we can). <<< I wonder if Sapphire is saying YES, a 5890 is coming???

SemiAccurate.com focuses not just on the new shiny stuff but the how and why behind the technology we use. If you haven’t already, check them out.

Remember, you need to be an SSC member to enter the contest.  If you’re not a member already, join now.

To enter, just log in and accept the terms and conditions!

Contest closes Sunday, February 28, 2010 at 11:59 PM EST.


----------



## Kantastic (Feb 7, 2010)

H82LUZ73 said:


> Win with SAPPHIRE and SemiAccurate!
> SAPPHIRE has teamed up with SemiAccurate.com to give away some prizes to SSC members!  Enter for your chance to win one of three prizes: a SAPPHIRE HD 5450 graphics card, a SAPPHIRE PURE 785G mainboard, or a future SAPPHIRE HD 5K series graphics card (which we can’t tell you about yet, but we will let you know as soon as we can).<<< I wonder if Sapphire is say YES a 5890 is coming.???
> 
> SemiAccurate.com focuses not just on the new shiny stuff but the how and why behind the technology we use. If you haven’t already, check them out.
> ...



Judging from the other prizes I'll bet it's a crap card like a new 55xx series card.


----------



## H82LUZ73 (Feb 7, 2010)

Kantastic said:


> Judging from the other prizes I'll bet it's a crap card like a new 55xx series card.



True that. But why is AMD/ATI flooding the market with low-end cards? It makes no sense to have so many of them. To me, most of the 54xx/55xx/56xx cards should be used in mobile applications; laptops could use them more than a desktop.


----------



## Kantastic (Feb 7, 2010)

H82LUZ73 said:


> True that But why is AMD/ATI Flooding the market with low end cards?It makes no sense to have so many of them,To me most of the 54-55-56 cards should be used in mobile applications Laptops could use them more then a desktop.



Well, if they're discontinuing production of the previous-gen chips, then it only makes sense to fill the market with new ones; otherwise they lose huge profits and market share. Only a very small share of the market (people like us) buys high-end cards, probably around 5%.


----------



## TooFast (Feb 7, 2010)

This is getting dumb! Who cares about playing CoD4 at 200,000 fps? These companies are scamming people into buying things they don't need. I would only look at the 5970 if you have a 30-inch LCD!


----------



## HalfAHertz (Feb 8, 2010)

Nobody is bending your arm into buying one...


----------



## nt300 (Feb 8, 2010)

HalfAHertz said:


> Nobody is bending your arm into buying one...


I am


----------



## TooFast (Feb 8, 2010)

I have 1 lol and a 30 inch dell to go with it.


----------



## nt300 (Feb 8, 2010)

Nvidia's Fermi chip is reported to be very power-hungry and extremely hot, pushing or even exceeding the PCIe power limit of 300W.

There's speculation that Nvidia's Fermi will be out of date on launch day, because they've already delayed the launch many times, and that ATI will continue to hold the performance crown with the HD 5970 and the upcoming surprise HD 5980 & HD 5990.


----------



## Aleksa (Feb 18, 2010)

*Update on limited launch of Fermi*

Based on the technical specs and architecture of Fermi, as well as its thermal envelope, expect under 10,000 Fermi video cards at launch. Nvidia is concentrating its resources on Fermi II, an optimized 28nm GPU. The Nvidia 3xx parts such as the 310 are DirectX 10.1, going up against ATI's 4xxx video cards.

The Fermi story is more intriguing and complex, with unexpected twists, as you'd expect from a behemoth of a GPU: http://www.semiaccurate.com/2010/02/17/nvidias-fermigtx480-broken-and-unfixable/


----------



## KainXS (Feb 18, 2010)

If even a little bit of that SemiAccurate article is true (and it's SemiAccurate, *shrugs*), then Nvidia could be in some serious trouble, I will admit; still, we will have to wait and see.

Also, not all the GT3xx parts are DX10.1: the GT330 is actually just a low-power DX10 9800GT/G92. And if you go to Nvidia's US site, they have swapped the GT330's picture with the GT340's, but it is correct on the UK site.


----------

