
NVIDIA GeForce 4XX Series Discussion


Bo_Fox

New Member
Joined
May 29, 2009
Messages
480 (0.09/day)
Location
Barack Hussein Obama-Biden's Nation
System Name Flame Vortec Fatal1ty (rig1), UV Tourmaline Confexia (rig2)
Processor 2 x Core i7's 4+Gigahertzzies
Motherboard BL00DR4G3 and DFI UT-X58 T3eH8
Cooling Thermalright IFX-14 (better than TRUE) 2x push-push, Customized TT Big Typhoon
Memory 6GB OCZ DDR3-1600 CAS7-7-7-1T, 6GB for 2nd rig
Video Card(s) 8800GTX for "free" S3D (mtbs3d.com), 4870 1GB, HDTV Wonder (DRM-free)
Storage WD RE3 1TB, Caviar Black 1TB 7.2k, 500GB 7.2k, Raptor X 10k
Display(s) Sony GDM-FW900 24" CRT oc'ed to 2560x1600@68Hz, Dell 2405FPW 24" PVA (HDCP-free)
Case custom gutted-out painted black case, silver UV case, lots of aesthetics-souped stuff
Audio Device(s) Sonar X-Fi MB, Bernstein audio riser.. what??
Power Supply OCZ Fatal1ty 700W, Iceberg 680W, Fortron Booster X3 300W for GPU
Software 2 partitions WinXP-32 on 2 drives per rig, 2 of Vista64 on 2 drives per rig
Benchmark Scores 5.9 Vista Experience Index... yay!!! What??? :)
What I was saying is that AMD shifted their strategy towards multi-GPU as the way to create the high-end parts. In theory, their desire is to create small dies to serve the mainstream and performance price points and then put them together to create the high-end. When they presented this strategy, they said that soon they would be putting 4, 6, 8 small dies together in order to create different performance levels. This is in absolute contrast to Nvidia's strategy of creating the biggest modular design they can and then cutting it down to create the mainstream products.

My comment was about that divergence in focus. At Nvidia, when they start designing their chip, they have to aim for the dual-GPU card to be on the safe side, even if they are going to create a dual-GPU card themselves, because when the project starts, 3-4 years before it reaches stores, they don't know what AMD will do. What if AMD puts 3 or 4 dies on a card, for example?

About pricing, it's really hard to say. We can make a guesstimate about how much it will cost Nvidia to create the cards, but we don't know how much they will charge; it will depend on the market, demand, performance, etc. About production costs, once 40nm yields improve, they will be cheaper to produce than GT200 cards (smaller die, 384-bit vs 512-bit), so if needed, or if they simply want to, they can sell them at very similar prices to Ati cards* without sacrificing profits like they did with GT200.


*Reasons being:

- HD5xxx cards cost more than HD4xxx cards to produce: bigger die, the fastest GDDR5 memory.
- Nvidia will apparently use cheaper, slower GDDR5 memory, which will alleviate the price difference a bit.
- Nvidia will sell Fermi Tesla cards (technically the same thing) in the HPC market, and depending on how well they do there, they will be able to adapt their profit requirements in the GPU market and compete better. Profits-per-card are 10-20 times bigger in the HPC market, so in essence every Tesla card sold could alleviate the need to make a profit on 10-20 GeForce cards if really required.

One thing is sure: they will always be competitive, at least on cards that directly compete with AMD cards, and the faster ones will be forced to come down too or they will become worthless. This forces slower cards to adapt again, and the ball keeps rolling.

Nvidia having faster cards doesn't hurt competition as much as people think. The GTX260 did come down in price a lot (so did the 8800GT at the time) because it competed with the HD4870; only the GTX280/285 remained expensive. AND if the prerequisite for competition is that Nvidia doesn't have a faster card, then the undeniable truth is that the GTX280/285 and that performance level would have never existed in the first place.

The feeling of a lack of competition is just subjective and abstract. It's that people look at the GTX285 and want it and think there's no competition at that price point because it's expensive, that it doesn't make sense on a performance/price basis, and hence they think it would have been better if Nvidia didn't outperform AMD. Well, if the GTX285 never existed (or performed like a HD4870), that performance level would have never existed in the first place, and the HD4870/GTX260 (or a GTX280 with the same performance as the HD4870) would probably have cost much more than they did; they would both be priced as premium cards instead of "second in charge" cards. What I'm trying to say is that the prerequisite for competition is that Nvidia releases a card with similar performance to the HD5870; how many cards they have above that level is irrelevant. The ideal thing is not that Nvidia fails to outperform the HD5870 now; the ideal situation would have been if the HD5870 were already at the performance point where the GTX380 specs suggest it will land.

Sorry for the long response.

--HD 4870/4890 had "FAST" GDDR5 memory at the time those cards were launched. GDDR5 was a brand new tech, so it was not cheap. It was expensive enough for Nvidia to deliberately avoid it for the time being.

-- Even though Nvidia will be using slower GDDR5 memory than the 5870, the GT300 will be using 12 GDDR5 chips, which is more than the 8 GDDR5 modules found on the 5870. Even if Nvidia is using only 384 bits, which is slightly cheaper than the 512-bit bus on the GTX 285, 12 GDDR5 chips at 4.2GHz effective are hardly any cheaper than the 8 GDDR5 chips at 4.8GHz effective found on 5870 cards. Heck, those new GDDR5 chips on the GT300 would be rated at something like 4.6-4.8 GHz.
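To put those chip counts and clocks side by side, here's a quick back-of-the-envelope sketch of the standard GDDR5 bandwidth arithmetic (bus width times effective per-pin rate, divided by 8). The 4.2 and 4.8 GHz effective rates are just the rumored/quoted figures from above, and each GDDR5 chip contributes a 32-bit slice of the bus:

[CODE=python]
# Rough GDDR5 bandwidth math for the figures quoted above.
# bandwidth (GB/s) = bus width (bits) * effective rate (Gbps/pin) / 8

def gddr5(bus_bits, rate_gbps):
    chips = bus_bits // 32            # each GDDR5 chip drives a 32-bit slice
    bw = bus_bits * rate_gbps / 8     # GB/s
    return chips, bw

for name, bus, rate in [("GT300 (rumored)", 384, 4.2),
                        ("HD5870", 256, 4.8)]:
    chips, bw = gddr5(bus, rate)
    print(f"{name}: {chips} chips x 32-bit = {bus}-bit, {bw:.1f} GB/s")
# GT300 (rumored): 12 chips x 32-bit = 384-bit, 201.6 GB/s
# HD5870: 8 chips x 32-bit = 256-bit, 153.6 GB/s
[/CODE]

So the rumored GT300 setup actually buys more total bandwidth despite the slower chips; the open question is only which bill of materials ends up cheaper.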

-- The GTX 280/285 did not "remain expensive". Nvidia designed those chips with a $500+ price point in mind. The reality is that Nvidia ended up having to sell those cards for much less than $500 from day one.

Now, Nvidia is refraining from making any more of those GT200b (55nm) chips, even if it pisses off the 3rd-party retailers (EVGA, BFG, MSI, XFX, etc.). Nvidia knows that it's no longer worth selling those monster chips at a loss, especially now that ATI has released its next-generation GPU (selling at a loss would only backfire further if Nvidia tried harder). It's better to just let the prices rise a little bit in time for Fermi's release, so that the high-end Fermi chip will sell well at $550+.

Just my 2 cents.. to add to this discussion blah blah!
 
Joined
Oct 6, 2009
Messages
2,824 (0.52/day)
Location
Midwest USA
System Name My Gaming System
Processor Intel i7 4770k @ 4.4 Ghz
Motherboard Asus Maximus VI Impact (ITX)
Cooling Custom Full System Water cooling Loop
Memory G.Skill 1866 Mhz Sniper 8 Gb
Video Card(s) EVGA GTX 780 ti SC
Storage Samsung SSD EVO 120GB - Samsung SSD EVO 500GB
Display(s) ASUS W246H - 24" - widescreen TFT active matrix LCD
Case Bitfenix Prodigy
Power Supply Corsair AX760 Modular PSU
Software Windows 8.1 Home Premium
Well, here's my 2 cents: Nvidia's Fermi cards sound awesome, we just have to wait and see, and I hope ATI releases a HD5990 4GB card, that would be awesome.

That would be cool
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.49/day)
Location
Reaching your left retina.
--HD 4870/4890 had "FAST" GDDR5 memory at the time those cards were launched. GDDR5 was a brand new tech, so it was not cheap. It was expensive enough for Nvidia to deliberately avoid it for the time being.

-- Even though Nvidia will be using slower GDDR5 memory than the 5870, the GT300 will be using 12 GDDR5 chips, which is more than the 8 GDDR5 modules found on the 5870. Even if Nvidia is using only 384 bits, which is slightly cheaper than the 512-bit bus on the GTX 285, 12 GDDR5 chips at 4.2GHz effective are hardly any cheaper than the 8 GDDR5 chips at 4.8GHz effective found on 5870 cards. Heck, those new GDDR5 chips on the GT300 would be rated at something like 4.6-4.8 GHz.

-- The GTX 280/285 did not "remain expensive". Nvidia designed those chips with a $500+ price point in mind. The reality is that Nvidia ended up having to sell those cards for much less than $500 from day one.

Now, Nvidia is refraining from making any more of those GT200b (55nm) chips, even if it pisses off the 3rd-party retailers (EVGA, BFG, MSI, XFX, etc.). Nvidia knows that it's no longer worth selling those monster chips at a loss, especially now that ATI has released its next-generation GPU (selling at a loss would only backfire further if Nvidia tried harder). It's better to just let the prices rise a little bit in time for Fermi's release, so that the high-end Fermi chip will sell well at $550+.

Just my 2 cents.. to add to this discussion blah blah!

Good points, but they do not really apply to what I was saying. The reason is that I am assuming that Fermi is not a failure; I'm assuming it does some justice to its specs, hence the GTX380 will perform close to or better than the HD5970 and the GTX360 will compete against the HD5870. Otherwise, it will just be a flop in the GPU arena.

So by my assumption, in the case of the GTX380 it's 12 GDDR5 chips against 16 chips and a 384-bit PCB against a 512-bit board, so the GTX380 is at a clear advantage. The GTX360 is at a bit of a disadvantage though (10 chips vs 8 chips and 320-bit vs 256-bit, although 10 cheap chips can very well cost less than 8 expensive ones), but not so much as the GTX260 was in relation to the HD4870 (448-bit vs 256-bit and 896 vs 512 MB), and if it's significantly faster, that won't matter at all.

Again, profits can be sacrificed on GPUs, because profits will be made in other areas. The reason that the GTX380 and 360 are at a disadvantage is precisely that Fermi was designed for HPC too, but that's something Nvidia knew from the start; it's part of the strategy and hence it doesn't really matter. What does matter is profits derived from Fermi as a whole, profits made by the company, and thanks to the same things that make Fermi GPUs "expensive" to make, they have been able to enter the HPC market through the front door. Their contract with ORNL to make a supercomputer 10 times faster than RoadRunner has to involve at the very least 20,000 Tesla chips (20k * 1 Tflop = 20 PFlop), but most probably something around 50,000 (unless Fermi is super-efficient, more so than regular CPUs, not likely heh!). That means that with that contract they will be making the same profits as they would by selling more than half a million GPUs. How many high-end GPUs do they sell normally? Not much more than that, really. Global GPU shipments amount to 400 million yearly, 30% of which is Nvidia, only 25% of that being discrete, and who-knows-how-many-but-clearly-not-too-many are high-end parts, 5% for example? 5% of 25% of 30% of 400 million makes around 1.5 million high-end chips sold. Figure it out: with a single HPC project they are "covering" the losses of 25% of their GPU market, and they have many more HPC projects up their sleeves; they could literally give away 25% of their GPUs and still make the same profits as they did in the past... Don't worry about Nvidia, they will be OK financially.
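For what it's worth, here's that back-of-the-envelope math written out so anyone can poke at the assumptions (the 400 million shipments, the 30%/25%/5% splits, and the 1 Tflop-per-chip figure are all rough guesses from the paragraph above, not audited numbers):

[CODE=python]
# Back-of-the-envelope check of the market figures in the post above.
# Every input here is the poster's assumption, not verified market data.

global_shipments = 400e6   # yearly GPU shipments, all vendors
nvidia_share     = 0.30    # Nvidia's share of those shipments
discrete_share   = 0.25    # fraction of Nvidia's share that is discrete
highend_share    = 0.05    # the 5% guess for high-end parts

highend = global_shipments * nvidia_share * discrete_share * highend_share
print(f"High-end Nvidia chips/year: {highend:,.0f}")   # ~1,500,000

# ORNL machine: 20 PFlops at ~1 Tflop per Fermi Tesla chip (lower bound)
print(f"Tesla chips for 20 PFlops: {20e15 / 1e12:,.0f}")   # 20,000

# At 10-20x the per-card profit, the "most probable" 50,000 chips are
# worth the profit of this many GeForce cards:
for mult in (10, 20):
    print(f"x{mult}: {50_000 * mult:,} GeForce-card profit equivalents")
[/CODE]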
 
Joined
Apr 30, 2008
Messages
4,880 (0.82/day)
Location
Multidimensional
System Name Boomer Master Race
Processor AMD Ryzen 7 7800X3D 4.2Ghz - 5Ghz CPU
Motherboard MSI B650I Edge Wifi ITX Motherboard
Cooling CM 280mm AIO + 2x 120mm Slim fans
Memory Kingston Fury 32GB 6000Mhz
Video Card(s) ASUS RTX 4070 Super 12GB OC
Storage Samsung 980 Pro 2TB + WD 2TB 2.5in HDD
Display(s) Sony 4K Bravia X85J 43Inch TV 120Hz
Case CM NR200P Max TG ITX Case
Audio Device(s) Built In Realtek Digital Audio HD
Power Supply CoolerMaster V850 SFX Gold 850W PSU
Mouse Logitech G203 Lightsync
Keyboard Atrix RGB Slim Keyboard
VR HMD ( ◔ ʖ̯ ◔ )
Software Windows 10 Home 64bit
Benchmark Scores Don't do them anymore.
Hmmmm, probably the HD5890 and the GTX370 will compete with each other, if or when they're announced!
 
Joined
Nov 13, 2007
Messages
10,451 (1.71/day)
Location
Austin Texas
Processor 13700KF Undervolted @ 5.4, 4.8Ghz Ring 190W PL1
Motherboard MSI 690-I PRO
Cooling Thermalright Peerless Assassin 120 w/ Arctic P12 Fans
Memory 48 GB DDR5 7600 MHZ CL36
Video Card(s) RTX 4090 FE
Storage 2x 2TB WDC SN850, 1TB Samsung 960 prr
Display(s) Alienware 32" 4k 240hz OLED
Case SLIGER S620
Audio Device(s) Yes
Power Supply Corsair SF750
Mouse Xlite V2
Keyboard RoyalAxe
Software Windows 11
Benchmark Scores They're pretty good, nothing crazy.
What I've noticed is that they always come out with the main card... and then they come out with a 'mainstream' card (like the 8800GT, or the 8800GTS 320, or the 7800GT) like 6 months later, and it costs half the price and has 80% of the performance of the two flagship models.

The one exception was the GT200 series, which were little price rocks.

I'm hoping for an 8800GT of the GT3xx generation, which probably means waiting for a new revision/stepping of the Fermi silicon.
 
Joined
Oct 6, 2009
Messages
2,824 (0.52/day)
Location
Midwest USA
System Name My Gaming System
Processor Intel i7 4770k @ 4.4 Ghz
Motherboard Asus Maximus VI Impact (ITX)
Cooling Custom Full System Water cooling Loop
Memory G.Skill 1866 Mhz Sniper 8 Gb
Video Card(s) EVGA GTX 780 ti SC
Storage Samsung SSD EVO 120GB - Samsung SSD EVO 500GB
Display(s) ASUS W246H - 24" - widescreen TFT active matrix LCD
Case Bitfenix Prodigy
Power Supply Corsair AX760 Modular PSU
Software Windows 8.1 Home Premium
What I was saying is that AMD shifted their strategy towards multi-GPU as the way to create the high-end parts. In theory, their desire is to create small dies to serve the mainstream and performance price points and then put them together to create the high-end. When they presented this strategy, they said that soon they would be putting 4, 6, 8 small dies together in order to create different performance levels. This is in absolute contrast to Nvidia's strategy of creating the biggest modular design they can and then cutting it down to create the mainstream products.

My comment was about that divergence in focus. At Nvidia, when they start designing their chip, they have to aim for the dual-GPU card to be on the safe side, even if they are going to create a dual-GPU card themselves, because when the project starts, 3-4 years before it reaches stores, they don't know what AMD will do. What if AMD puts 3 or 4 dies on a card, for example?

About pricing, it's really hard to say. We can make a guesstimate about how much it will cost Nvidia to create the cards, but we don't know how much they will charge; it will depend on the market, demand, performance, etc. About production costs, once 40nm yields improve, they will be cheaper to produce than GT200 cards (smaller die, 384-bit vs 512-bit), so if needed, or if they simply want to, they can sell them at very similar prices to Ati cards* without sacrificing profits like they did with GT200.


*Reasons being:

- HD5xxx cards cost more than HD4xxx cards to produce: bigger die, the fastest GDDR5 memory.
- Nvidia will apparently use cheaper, slower GDDR5 memory, which will alleviate the price difference a bit.
- Nvidia will sell Fermi Tesla cards (technically the same thing) in the HPC market, and depending on how well they do there, they will be able to adapt their profit requirements in the GPU market and compete better. Profits-per-card are 10-20 times bigger in the HPC market, so in essence every Tesla card sold could alleviate the need to make a profit on 10-20 GeForce cards if really required.

One thing is sure: they will always be competitive, at least on cards that directly compete with AMD cards, and the faster ones will be forced to come down too or they will become worthless. This forces slower cards to adapt again, and the ball keeps rolling.

Nvidia having faster cards doesn't hurt competition as much as people think. The GTX260 did come down in price a lot (so did the 8800GT at the time) because it competed with the HD4870; only the GTX280/285 remained expensive. AND if the prerequisite for competition is that Nvidia doesn't have a faster card, then the undeniable truth is that the GTX280/285 and that performance level would have never existed in the first place.

The feeling of a lack of competition is just subjective and abstract. It's that people look at the GTX285 and want it and think there's no competition at that price point because it's expensive, that it doesn't make sense on a performance/price basis, and hence they think it would have been better if Nvidia didn't outperform AMD. Well, if the GTX285 never existed (or performed like a HD4870), that performance level would have never existed in the first place, and the HD4870/GTX260 (or a GTX280 with the same performance as the HD4870) would probably have cost much more than they did; they would both be priced as premium cards instead of "second in charge" cards. What I'm trying to say is that the prerequisite for competition is that Nvidia releases a card with similar performance to the HD5870; how many cards they have above that level is irrelevant. The ideal thing is not that Nvidia fails to outperform the HD5870 now; the ideal situation would have been if the HD5870 were already at the performance point where the GTX380 specs suggest it will land.

Sorry for the long response.

While I can't outright disagree with your logic, I can say this: no, I am not hoping for a flop from Nvidia. I am just stating that they have been acting quite weird about their new cards (with the press releases and so on....). But I have already stated what I think about that...... so I won't go on.... But I do also see your point with the fact that they could also be using that as a tactic.

Now the second part. No, you are right that AMD is planning on building smaller and smaller dies to make multiple (and more powerful and cheaper) GPUs. They also don't differentiate between multiple-GPU cards and single-GPU cards. So that is AMD's tactic.
While Nvidia is going the other route: making larger but more powerful single dies. That also leads to being more expensive. But in the same breath, I still don't think that the GTX380 will be able to pull off a win against the dual-GPU 5970. With the specs they have shown, while it would be possible in theory to be more powerful than the 5870, it would still have a long way to go to beat a 5970 by 15%.
If you look at the specs, the new GTX380 is not that far ahead of a 5870 on paper, let alone counting in the fact that every card that has ever come out is less powerful than its paper specs.
Now I am not trying to turn this into a contest of which card will be more powerful...... because there is really no way of telling until Fermi releases. But I am stating that, from what we know about Fermi...... on paper, yes, it should be more powerful than a single 5870. But it wouldn't be more powerful than a 5970. Which would leave room for a GX2 version of the card, which I have read in several articles........ Nvidia is planning to release.

I will also agree that if this new card does not flop, Nvidia will take back the crown. But just until ATI comes out with their next version (6800 series). Then ATI will pull ahead.

But gone are the days, I think, where one card maker has a real definite lead in performance. ATI has enough money to back them up from here to kingdom come with the profits from their 4800 series. And well, Nvidia... I don't think they will make the same mistake twice...... with building cards that no one can afford.

That brings me to my next thought. Nvidia, while they might have made the GTX 200 series with the $500 group in mind, also misread what people were willing to pay for performance. I do think that they made the mistake of thinking that people will pay anything to get those extra 3 frames. They took a gamble and they were wrong.

But Nvidia won't make that mistake again....... I think it will be a great battle, hopefully. I do look forward to seeing what a GTX395/385GX2 can do. :D

The feeling of a lack of competition is just subjective and abstract. It's that people look at the GTX285 and want it and think there's no competition at that price point because it's expensive, that it doesn't make sense on a performance/price basis, and hence they think it would have been better if Nvidia didn't outperform AMD. Well, if the GTX285 never existed (or performed like a HD4870), that performance level would have never existed in the first place, and the HD4870/GTX260 (or a GTX280 with the same performance as the HD4870) would probably have cost much more than they did; they would both be priced as premium cards instead of "second in charge" cards. What I'm trying to say is that the prerequisite for competition is that Nvidia releases a card with similar performance to the HD5870; how many cards they have above that level is irrelevant. The ideal thing is not that Nvidia fails to outperform the HD5870 now; the ideal situation would have been if the HD5870 were already at the performance point where the GTX380 specs suggest it will land.

I also wanted to add something to this point. I do think that price bracket would still exist. Let me explain...... Because if your theory was 100% true with this, then how can you explain the 4890? There is a card that was of the same caliber as the GTX 285 (especially when OC'd). Look how that was priced. From the day it came out it was priced reasonably. Then the prices kept dropping. So if we stayed with your theory, instead of the 4890 dropping in price it should have gone up and stayed up.
I just think that Nvidia depended on there being a market for $500 and $700 cards. So what did they do...... they made a card for that. Then they couldn't afford to take the loss when ATI started dropping their prices.

Again, I don't think that they will do that with Fermi. They can't afford to. So I would say with Fermi you will see performance closer to the 5800 series..... so that when ATI drops the price, Nvidia can too, or vice versa.

Sorry for my answer to be a book too :)
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.49/day)
Location
Reaching your left retina.
I agree with many parts; I only have to answer this:

I also wanted to add something to this point. I do think that price bracket would still exist. Let me explain...... Because if your theory was 100% true with this, then how can you explain the 4890?

It's easy to explain: the HD4890 is nowhere near as fast as the GTX285. There is enough distance between the two to make a difference, especially at higher settings. Also, Nvidia released the GTX275 to fight the HD4890, and it turned out to be faster. The GTX275 went down in price alongside the HD4890.



HD4890 = 50%
GTX285 = 58% - that's 16% faster, and at 2560x1600 the difference is even bigger, so there was a place for that card.
HD5850 = 62% - 7% faster than the GTX285.
HD5870 = 71% - 22% faster than the GTX285.

As you see, there is almost as much difference between the HD4890 and GTX285 as there is between the latter and the HD5870, and certainly much more than with the 5850. The HD5870 sells for much more than the GTX285 because it's faster and hence there is a place for it. People don't pay more for 3% faster cards, but they do for 15%. Now, the 5850 should have made the GTX285 go down in price, but since there's no new GT200 inventory, prices don't change. Stores usually change the price when they buy new inventory or if the items don't sell. No new inventory + demand for the cards + HD5xxx shortage = GTX285 keeping its price.
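For clarity, those "X% faster" figures are just ratios of the normalized overall-performance percentages from the chart; here's the arithmetic spelled out:

[CODE=python]
# Relative speed from normalized overall-performance scores
# (the percentage figures quoted above, HD4890 = 50%, etc.).

scores = {"HD4890": 50, "GTX285": 58, "HD5850": 62, "HD5870": 71}

def faster(a, b):
    """How much faster card a is than card b, in percent."""
    return (scores[a] / scores[b] - 1) * 100

print(f"GTX285 vs HD4890: +{faster('GTX285', 'HD4890'):.0f}%")  # +16%
print(f"HD5850 vs GTX285: +{faster('HD5850', 'GTX285'):.0f}%")  # +7%
print(f"HD5870 vs GTX285: +{faster('HD5870', 'GTX285'):.0f}%")  # +22%
[/CODE]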

And now, looking at those performance figures, I think it's even clearer that the GTX380 will outperform the HD5870 by a good margin. Come on, there's no possible way that multiplying the resources by 2.13x will only yield a 25% increase in performance. No way, absolutely. They would have thrown Fermi into the trash bin a long, long time ago if that was the case. They not only compete against AMD, they compete against themselves too, and 25% over the previous generation is just unacceptable. I strongly believe in the possibility of the GTX380 being faster than the HD5970, because performance in past Nvidia cards always went hand in hand with the shader power, and Fermi is not so different as to lose 85% of its power.

What has happened with the HD5870 not being 2x as fast means nothing, and I will show why. I've made a similar chart to the one for Nvidia with AMD cards, and while with Nvidia we see performance increasing linearly with Gflops, that doesn't happen with the Radeons. Radeons were designed to be effective on RAW performance per die area and power, not to be an effective scalar design, and hence they don't scale so well. Look and compare:



I think that those charts speak for themselves TBH. Just compare the GTX285 with the HD4890...

Note that the "small" percentual increase (lower charts) in pixel fillrate and BW in the Ati chart comes from the fact that the HD3xxx had very high specs in that department in the first place: the HD3850 outperforms the much faster 9800GT on pixel fillrate and has almost the same BW...
Pixel fillrate doesn't seem to help the HD5xxx cards anyway, as these have the biggest increase in the whole chart in that department and the lowest increase in performance.

Again, sorry for the long post.
 

Binge

Overclocking Surrealism
Joined
Sep 15, 2008
Messages
6,979 (1.20/day)
Location
PA, USA
System Name Molly
Processor i5 3570K
Motherboard Z77 ASRock
Cooling CooliT Eco
Memory 2x4GB Mushkin Redline Ridgebacks
Video Card(s) Gigabyte GTX 680
Case Coolermaster CM690 II Advanced
Power Supply Corsair HX-1000
I'd like to thank 20 and Bene personally, because these past couple pages have been really interesting to read. I was reading 20's post just above Bene's last, saying that ATI's smaller chips would end up saving them money, and while this is true, it is too long-term to show benefits in this generation. NV's GF100 is not that much larger than ATI's latest GPU, and in fact has a higher transistor density. This isn't really my point. ATI's GPUs need to be under a certain size to begin with. To be considered greatly cost-effective, the die size needs to be in a real sweet spot when comparing defect density to die size.

So far multi-GPU is easily a great way to get numbers, but I've yet to see linked GPUs give the same feel as a single GPU: strange driver issues, applications rejecting some multi-GPU settings, microstuttering, and a lack of multi-GPU support in applications outside of gaming. @ 20mm - NV is releasing a GPU about 1/8th larger in mm^2 than the RV800. Do you think that size is going to give ATI the ground it needs to pull ahead in profit, or is that extra 50% transistor count in 1/8th extra area actually pretty damn exciting?
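Taking those rough ratios at face value (about 1/8th more area carrying about 50% more transistors, neither one a confirmed die measurement), the implied density gap works out like this:

[CODE=python]
# Implied transistor-density ratio from the estimates quoted above:
# GF100 die ~1/8th larger than RV800 (Cypress), with ~50% more transistors.
# Both ratios are the poster's claims, not confirmed die measurements.

area_ratio       = 1 + 1/8   # GF100 area relative to RV800
transistor_ratio = 1.50      # GF100 transistor count relative to RV800

density_ratio = transistor_ratio / area_ratio
print(f"GF100 packs ~{(density_ratio - 1) * 100:.0f}% more "
      f"transistors per mm^2 than RV800")   # ~33% denser
[/CODE]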
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,233 (2.57/day)
NV is releasing a GPU about 1/8th larger in mm^2 than the RV800. Do you think that size is going to give ATI the ground it needs to pull ahead in profit, or is that extra 50% transistor count in 1/8th extra area actually pretty damn exciting?

Given that recent info direct from nV staff kinda hints that this density is what is holding back nV's new chips, I find it hard to be excited.

http://www.semiconductor.net/article/438968-Nvidia_s_Chen_Calls_for_Zero_Via_Defects.php

Now, don't get me wrong, it's fantastic they are able to make such designs; however, the real problem is not theirs, but TSMC's.

It's unexciting, as it hints at future problems, and may explain why there are rumours that 32nm is going to be skipped for some products, as well as explaining why R800 failed to meet targets. Third-party companies are worried about the long-term reliability of ATI's parts, due to yet another manufacturing issue.

Dick James, a technology analyst at Chipworks (Ottawa, Canada), said via defects have shown up on ICs manufactured by TSMC. Chipworks has inspected products from graphics vendor ATI, now part of Advanced Micro Devices (AMD, Sunnyvale, Calif.). "The problem appears to be that when they cut a via, a residue of photoresist gets on the edge of the via, which creates a ring-shaped discontinuity in the metal," James said. "The discontinuity could create electromigration issues. We've seen the same problem on the upper metal levels on the ATI chips we've studied. It creates a reliability failure mode."

I'm afraid that even if ATI's chips are pulled from TSMC and moved to GloFo, even the extra capacity nV will have available @ TSMC may not be enough for any future products.


Overall, nV is a programming powerhouse. They need to leverage that more than ever now, even though re-branding hints at this strength.
 
Joined
Oct 6, 2009
Messages
2,824 (0.52/day)
Location
Midwest USA
System Name My Gaming System
Processor Intel i7 4770k @ 4.4 Ghz
Motherboard Asus Maximus VI Impact (ITX)
Cooling Custom Full System Water cooling Loop
Memory G.Skill 1866 Mhz Sniper 8 Gb
Video Card(s) EVGA GTX 780 ti SC
Storage Samsung SSD EVO 120GB - Samsung SSD EVO 500GB
Display(s) ASUS W246H - 24" - widescreen TFT active matrix LCD
Case Bitfenix Prodigy
Power Supply Corsair AX760 Modular PSU
Software Windows 8.1 Home Premium
Again, sorry for the long post.

No problem about the long post...... but it might take me a second to go over all the info so I can respond. Because it is so long..... I don't want to leave any important talking points out :)

But just something to mull over until I do respond...... a quick point...... the 4890, you are correct, @ stock clocks might have been 15% slower than a GTX 285. But that doesn't take into account that the 4890 will overclock like crazy. So with that said, the 4890 could easily, in most cases (not all), reach stock GTX 285 speeds. With that mentioned, I understand you could turn around and then say, well, the GTX 285 could just overclock and still beat it. You are correct. But the performance per dollar that you are reaching is way out of whack for the GTX 285 comparatively.
Now if you look @ the GTX 275 vs. the 4890 (both of these comments are based on real-world performance, not charts or numbers, because I have owned most of the cards we are talking about):
The 4890 was very comparable to the GTX 275. But because they were so close, it is debatable which one was faster. I say this because, if you make a general observation of the articles you read, depending on who wrote the article they will tell you that either the GTX 275 was faster or the 4890 was faster. Now, going back, that doesn't take into account overclocking again. The 4890 could easily outperform the GTX 275's overclock. You also stated that the GTX 275 fell in price along with the 4890. I beg to differ with that. The 4890 fell in price long before the GTX 275 did. The 4890 started falling in price right around late May, and the GTX 275 didn't start falling until August. I think that only happened because Nvidia couldn't afford to be so high-priced any longer.
While the paper specs you provided are really informative and all well and good, they still don't take into account real-world performance.

:D Now with that said, I will look things over and give you a better answer today after I get home from college.
BTW I enjoy this debate, so keep it coming, bud!

So far multi-GPU is easily a great way to get numbers, but I've yet to see linked GPUs give the same feel as a single GPU: strange driver issues, applications rejecting some multi-GPU settings, microstuttering, and a lack of multi-GPU support in applications outside of gaming. @ 20mm - NV is releasing a GPU about 1/8th larger in mm^2 than the RV800. Do you think that size is going to give ATI the ground it needs to pull ahead in profit, or is that extra 50% transistor count in 1/8th extra area actually pretty damn exciting?

I think that it will give ATI some ground. While the 5970 is selling for a large amount of money right now, you have to take into account that it is Christmas time and they are the only manufacturer that has a DX11 card out right now. After the holidays you will see prices drop back down closer to where they should be. Also, after Christmas the 5600/5950's and the like will release, so that will cause prices to come down as well.
Now, compared to Nvidia's offering of Fermi, I can't really say for sure. But what I can say is that I believe ATI will pull out ahead on profit. While Nvidia's extra 50% transistor count is very impressive, I don't believe that it will be enough to beat the 5970/5950's of the world. So it will force Nvidia to bring out a GX2 version of Fermi, which will definitely beat the 5900's. But because their die will cost more, and the fact that every Nvidia card also contains another chip (PhysX, which is also expensive), I believe that it will be way overpriced. Whether the performance over the 5900's is large or not is irrelevant. The fact that a card like that could cost up to $200 more is relevant. Remember, most people (the masses) are aiming for 60FPS, so anything over that is usually a waste. Also, most of the masses don't play at 30-inch resolutions either. So while Nvidia will get some sales from Fermi's GX2 offerings, they won't get more than ATI. So yes, I think Nvidia is digging themselves a bigger hole. Not to mention that ATI also has the profits from the 4800 series in their corner, along with the backing of AMD. Nvidia doesn't have those things going for them. So if necessary, ATI could also respond to Fermi without losing that much. But it would be a lot harder for Nvidia to respond if it went down that road.

Hope that answers your question :)

Quote:
Originally Posted by Binge View Post
NV is releasing a GPU about 1/8th larger in mm^2 than the RV800. Do you think that size is going to give ATI the ground it needs to pull ahead in profit, or is that extra 50% transistor count in 1/8th extra area actually pretty damn exciting?
Given that recent info direct from nV staff kinda hints that this density is what is holding back nV's new chips, I find it hard to be excited.

http://www.semiconductor.net/article...ia_Defects.php

Now, don't get me wrong, it's fantastic they are able to make such designs; however, the real problem is not theirs, but TSMC's.

It's unexciting, as it hints at future problems, and may explain why there are rumours that 32nm is going to be skipped for some products, as well as explaining why R800 failed to meet targets. Third-party companies are worried about the long-term reliability of ATI's parts, due to yet another manufacturing issue.

Quote:
Dick James, a technology analyst at Chipworks (Ottawa, Canada), said via defects have shown up on ICs manufactured by TSMC. Chipworks has inspected products from graphics vendor ATI, now part of Advanced Micro Devices (AMD, Sunnyvale, Calif.). "The problem appears to be that when they cut a via, a residue of photoresist gets on the edge of the via, which creates a ring-shaped discontinuity in the metal," James said. "The discontinuity could create electromigration issues. We've seen the same problem on the upper metal levels on the ATI chips we've studied. It creates a reliability failure mode."
I'm afraid that even if ATI's chips are pulled from TSMC and moved to GloFo, even the extra capacity nV will have available @ TSMC may not be enough for any future products.


Overall, nV is a programming powerhouse. They need to leverage that more than ever now, even though re-branding hints at this strength.

What is said here is a great point. Wait till ATI brings their business to GlobalFoundries. It will really take off.

While Fermi cards are almost guaranteed to be great and very powerful, if they can't get them out...... what's the point of making them? Not only that...... the time frame is just too much, too late!
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,233 (2.57/day)
What is said here is a great point. Wait till ATI brings their business to GlobalFoundries. It will really take off.

Given that a CPU's transistor density generally isn't as high as a GPU's, it's hard to actually gauge how well GloFo will be able to compete in this space. ATI's mini-fab in Canada may give them the edge they need, and AMD's tooling is very advanced, thanks to APM, so they should be able to transfer tech over successfully, but I'm worried that marketing is taking such a strong stance on just how good GloFo's process technology really is... they could be truthful, but at the same time, they could be hiding their weaknesses, as they did with the Phenom 1 launch (a certain blond hyping 3GHz chips comes to mind). Such actions don't breed confidence.

While Fermi cards are almost guaranteed to be great and very powerful, if they can't get them out...... what's the point of making them? Not only that...... the time frame is just too much, too late!

I am also very concerned about how current marketing seems to ignore that the problem in getting a timely release has nothing to do with nVidia, and everything to do with TSMC. Both ATI and nVidia are facing the same fab issues, and the difference in designs has ATI ahead, but it's really no fault of either party that both are having issues. I understand them not wanting to pass the buck, but I'm one for complete honesty from all parties.

nVidia and ATi have very drastically different designs, and if they'd work together a bit more, both could excel and the industry as a whole would benefit, but it seems that ego is getting in the way.

I used to be one of those guys that bought every product from ATI at launch, but as I hinted at on my blog months ago, I'm waiting until more products are released before investing my hard-earned dollars. I had a bit of loyalty to ATI in the past, but really, I'm waiting for nV's GPUs before deciding which side my dollars go to when it comes to DX11. Hopefully this extra time that nV has before release means that their drivers will be that much better, but ATI has the same period of time to do the same.


All that said though, I've got the DisplayPort monitors to run Eyefinity, so I know where I'm headed, but I'm more than willing to let them collect dust if nV delivers everything this time. I just hope it really doesn't take until March for nV to tell us what's really going on.
 
Joined
Jul 2, 2008
Messages
3,638 (0.62/day)
Location
California
GeForce 7 released in June 2005
<- 16-month gap ->
GeForce 8 in November 2006
<- 18-month gap (GeForce 9 released in April) ->
GeForce 200 in June 2008
<- 18-month+ gap ->
Next gen Q1 2010

AMD released the HD4800 series in June 2008, and the HD5800 in September 2009; that's about 15 months.

If they follow this cycle, the next AMD gen will be out in December next year.

GT300 is a totally new GPU design, so I'm expecting the gen following it to be based on it, and the cycle will be a lot shorter.

I believe NVIDIA is on schedule as long as they release Fermi in Q1, and not Q2.
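Spelling out that extrapolation (launch months as stated above, plain month-granularity arithmetic):

[CODE=python]
# Extrapolating AMD's next-gen launch from the cadence quoted above.
# Launch months are as stated in the post (month granularity only).
from datetime import date

def months_between(a, b):
    return (b.year - a.year) * 12 + (b.month - a.month)

hd4800 = date(2008, 6, 1)    # HD4800 launch
hd5800 = date(2009, 9, 1)    # HD5800 launch
gap = months_between(hd4800, hd5800)
print(f"HD4800 -> HD5800: {gap} months")           # 15 months

# The same gap again puts the next generation around:
total = hd5800.month + gap
next_gen = date(hd5800.year + (total - 1) // 12, (total - 1) % 12 + 1, 1)
print(f"Next AMD gen: ~{next_gen:%B %Y}")          # ~December 2010
[/CODE]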
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,233 (2.57/day)
True, but JenHsun showing that card at the recent conference really had a lot of people thinking the cards would be out sooner, and this does not lend well to consumer confidence. Given that we know nV is having fab issues, it's really safe to say that they are late to release, even though for them, a bit late is the norm... Microsoft releases an API, developed on ATI hardware, ATi has cards ready for that API release, and nV follows up later. It's all perfectly normal, which really makes me question the current marketing tactics from both sides.

nV being late allows them to leverage that time for a stronger release, so there's nothing really negative about them being a bit late anyway. However, there's no denying that if they had a DX11 product out now, maybe more developers would have DX11 products.
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.49/day)
Location
Reaching your left retina.
The 4890 was very comparable to the GTX 275. But because they were so close, it is debatable which one was faster. I say this because, if you make a general observation of the articles you read, depending on who wrote the article they will tell you that either the GTX 275 was faster or the 4890 was faster. Now, going back, that doesn't take into account overclocking again. The 4890 could easily outperform the GTX 275's overclock. You also stated that the GTX 275 fell in price along with the 4890. I beg to differ with that. The 4890 fell in price long before the GTX 275 did. The 4890 started falling in price right around late May, and the GTX 275 didn't start falling until August. I think that only happened because Nvidia couldn't afford to be so high-priced any longer.

You are absolutely right about the performance when overclocked, but since we were talking about the market price of the cards, that doesn't really matter. Very few people overclock, so the effect of that on the market is meaningless. That also played in the GTX275's favor, because at default it is slightly faster, and people are willing to pay a little bit more for some more performance, as long as performance/price is the same. And in this case it was. Only in the last quarter have Ati cards come down in price a lot while Nvidia cards have not, but that's again because of shortage and demand.

While the paper specs you provided are really informative and all well and good, they still don't take into account real-world performance.

My charts absolutely take real performance into account; that's their purpose, in fact. They are based on Wizzard's reviews, overall performance (column on the right), and there's no other review that tests so many games and settings, so you will not find a more accurate and complete representation of real performance anywhere. What my charts are doing is taking the specs and comparing them to real performance, and they are normalized on the lower charts so that you can clearly see how real performance increased as paper specs increased. In the case of Nvidia it scaled linearly, but not in the case of Ati, which is why even if HD5xxx cards didn't scale according to their specs, Fermi will most probably do so, because every previous Nvidia card in that chart scaled to perfection, sometimes being bottlenecked by the rasterizing units, but most of the time being bottlenecked by the GFlops. Why would Fermi be different? It really isn't different, so Fermi cards will most probably perform somewhere within the range I represented. The only question we could have was whether Fermi could achieve good clocks, but it is achieving good clocks and hence they are delivering the raw power. Now, will that raw power be translated into real performance? Why not? There are many possibilities, but the most probable is that it will, just like all the other cards in the chart did.

I know it's hard to believe in the first place, I even scratched my head when I looked at the results for the first time, but it's probably just reality hitting hard on our expectations. Everything changed when I took another look at Wizzard's performance charts and realized that the HD5870 is only 25% faster than the GTX285 and the HD5970 is only 75% faster; heck, GTX285 SLI is just as fast, sometimes even faster. I know this will hurt, but HD5xxx failed to deliver the performance. Nvidia has always aimed at 2x the previous generation's performance (that's 2x GTX285) and as the chart shows they've been pretty accurate at doing so in the past, not always making it exactly 2x as fast, but always succeeding at scaling along with the resource increases. For example, with GT200 they were not able to double up the shader count, but performance scaled perfectly. That's what's good about Nvidia's strategy: their fight is in trying to put the resources on the silicon and being able to clock that silicon decently, and once they have achieved that, everything goes smoothly. Now they have more than doubled up the power and it's just a matter of continuing the typical trend of matching performance to specs. Like I said, Nvidia could have failed in delivering the raw power, failed to have the specs, could have been forced to lower the SP count to, say, 320, or forced to lower the clocks to 500 MHz, but once they have the specs they have the results.
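To make the argument concrete, here's the linear extrapolation spelled out, using this thread's own numbers (the 2.13x resource multiplier and the normalized scores come from the charts discussed above; the 1:1 scaling is my assumption, not a measurement):

[CODE=python]
# Linear-scaling estimate of Fermi (GTX380) performance, under the
# assumption that Nvidia performance tracks raw resources roughly 1:1.
# The 2.13x multiplier and the normalized scores are the thread's figures.

gtx285 = 58                   # normalized overall-performance score
resource_multiplier = 2.13    # Fermi resources vs GT200, per the thread

gtx380_est = gtx285 * resource_multiplier
hd5870     = 71               # from the chart quoted earlier
hd5970_est = gtx285 * 1.75    # "only 75% faster than the GTX285"

print(f"GTX380 (linear estimate): ~{gtx380_est:.0f}")   # ~124
print(f"HD5870: {hd5870}, HD5970: ~{hd5970_est:.0f}")   # 71, ~102
[/CODE]

Under that (admittedly optimistic) assumption, the extrapolated GTX380 lands above the HD5970, which is the whole point of the scaling argument above.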

AMD uses a different strategy: they put all their effort into being able to fit as much as possible into as little silicon as they can, and then they try to make that scale well. They are two different strategies that work similarly well in general, but that can have their own issues too.

EDIT: Forgot about one thing, sorry:

But because their die will cost more, and the fact that every Nvidia card also contains another chip (PhysX, which is also expensive), I believe that it will be way overpriced.

It doesn't have another chip for PhysX, not at all. PhysX is just software, running through CUDA.

I am also very concerned about how current marketing seems to ignore that the problem in getting a timely release has nothing to do with nVidia, and everything to do with TSMC. Both ATI and nVidia are facing the same fab issues, and the difference in designs has ATI ahead, but it's really no fault of either party that both are having issues. I understand them not wanting to pass on the buck, but I'm one for complete honesty from all parties.

I couldn't agree more on that. It's something that worries me a lot too. I think that people just like blaming someone and Nvidia is just the easiest choice. TSMC is too abstract for most people.
 
Joined
Nov 13, 2007
Messages
10,451 (1.71/day)
Location
Austin Texas
Processor 13700KF Undervolted @ 5.4, 4.8Ghz Ring 190W PL1
Motherboard MSI 690-I PRO
Cooling Thermalright Peerless Assassin 120 w/ Arctic P12 Fans
Memory 48 GB DDR5 7600 MHZ CL36
Video Card(s) RTX 4090 FE
Storage 2x 2TB WDC SN850, 1TB Samsung 960 prr
Display(s) Alienware 32" 4k 240hz OLED
Case SLIGER S620
Audio Device(s) Yes
Power Supply Corsair SF750
Mouse Xlite V2
Keyboard RoyalAxe
Software Windows 11
Benchmark Scores They're pretty good, nothing crazy.
At the end of the day, it doesn't matter whether a video card gives a 100% boost or an 80% boost over the flagship of the current gen. What matters is when it comes out, and how many cards ultimately sell.

The 5xxx is substantially faster than anything on the market; that's what matters. The design is simple, and the GPU is powerful.

I feel like the Fermi argument is the same as AMD's true quad-core argument, in the sense that both of these chips were massively late and people tried desperately to rationalize some massive benefit to waiting for the over-complicated design. It will be an impressive card, maybe. But there needs to be a balance between capability and design. If the design is too ambitious to make, it is the same, in the end, as having no design at all.

Fermi is late, and it will almost certainly be slower than the 5970... and by the time it finally comes out, ATI will have a full line of products. This is a far cry from two years ago, when ATI couldn't even compete with the 8800GT.

In the strategy sense, ATI has won. I can tell you with reasonable certainty that ATI's next design will be a 3200sp GPU based on a slightly tweaked but otherwise exactly the same SP design, with a massive amount of memory, whether it be GDDR5 or something else. It will probably not perform better than 2x 5870's, but it will come out MUCH sooner than NV's next design, and will be faster than Fermi.

Unless NV skips a generation, or Fermi is massively faster than 2x GTX285 in SLI, NV is in trouble in the long run. IMO
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,233 (2.57/day)
I couldn't agree more on that. It's something that worries me a lot too. I think that people just like blaming someone and Nvidia is just the easiest choice. TSMC is too abstract for most people.

I've been blaming TSMC for years, and hoping that ATI would move production elsewhere... for literal YEARS I have been posting that TSMC has been at fault for a lot of the issues that both nV and ATI have faced.

Unfortunately, there aren't a lot of other options for foundry business, hence me saying for the past 4-5 years that there's a lot of money to be made for a company that can build a fantastic foundry... and I think that this hard fact is what made Abu Dhabi buy in and form GloFo. Silicon is in so many devices, it's hard to ignore the potential market, if the problems foundries face can be overcome.

If I could get $4-$5 bil together, I know what I'd be doing with it...

Unless NV skips a generation, or Fermi is massively faster than 2x GTX285 in SLI, NV is in trouble in the long run. IMO

I don't see it this way, as nV has successfully leveraged their programming prowess, time and again, to remain 100% relevant to the industry. As JenHsun says, they are a software company that sells hardware, so hardware failures are a relatively minor setback.

ATI doesn't have the same programming strength, but seems to capitalize on developing hardware that overcomes foundry issues. If the two were to get together and leverage both of their strengths, they'd be unstoppable. Had Larrabee been a success, this might have even been possible, as there would be no monopoly in the GPU market, but of course, recent news shows that such actions are indefinitely delayed. Plus, JenHsun loves to fight adversity... he's been doing it since he was a kid, so this difficult time may be when his leadership wins out overall.
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.49/day)
Location
Reaching your left retina.
At the end of the day, it doesn't matter whether a video card gives a 100% boost or an 80% boost over the flagship of the current gen. What matters is when it comes out, and how many cards ultimately sell.

The 5xxx is substantially faster than anything on the market; that's what matters. The design is simple, and the GPU is powerful.

The performance does matter. If Fermi ends up even close to the HD5970, I can tell you that it's much, much cheaper to produce than the HD5970. And Fermi has a good chance of being faster, as I have explained.

I feel like the Fermi argument is the same as AMD's true quad-core argument, in the sense that both of these chips were massively late and people tried desperately to rationalize some massive benefit to waiting for the over-complicated design. It will be an impressive card, maybe. But there needs to be a balance between capability and design. If the design is too ambitious to make, it is the same, in the end, as having no design at all.

No, it's not an attempt to rationalize anything, and as cadaveca said, Fermi is late because of TSMC and not because of any design flaw. By now TSMC should have more than 80% yields on their more than one-year-old 40nm process. They are below 40%, and that hurts the bigger chip, yes, but that doesn't change the fact that TSMC should be able to make a <500 mm2 chip on that process with their eyes closed, and they aren't. It's like blaming a Formula One team for not being able to compete in a countryside race, when they had been invited to a Formula One Grand Prix. It shouldn't be there. Period.

Fermi is late, and it will almost certainly be slower than the 5970... and by the time it finally comes out, ATI will have a full line of products. This is a far cry from two years ago, when ATI couldn't even compete with the 8800GT.

Based on what, exactly, can you affirm that? Can you elaborate? Give anything close to the charts I provided upon which we could speculate? Or do you know something we don't?

In the strategy sense, ATI has won. I can tell you with reasonable certainty that ATI's next design will be a 3200sp GPU based on a slightly tweaked but otherwise exactly the same SP design, with a massive amount of memory, whether it be GDDR5 or something else. It will probably not perform better than 2x 5870's, but it will come out MUCH sooner than NV's next design, and will be faster than Fermi.

That depends a lot. AMD putting in 3200 SPs would mean nothing if it delivered the same increase as HD5xxx has delivered. That is, if it delivered a 50% increase again, that wouldn't change things too much. We still don't know how Fermi will perform, but as I said, if it lives up to its specs, and history is proof enough that it will, Fermi will already perform like that hypothetical 3200 SP card; heck, the HD5970 is a 3200 SP card (and the HD4870 X2 demonstrates that 800 SP cards in Crossfire can be faster than a native 1600 SP card). Nvidia could easily release a refresh with more SPs too.

Unless NV skips a generation, or Fermi is massively faster than 2x GTX285 in SLI, NV is in trouble in the long run. IMO

You are assuming that Nvidia will not release anything new in a year... They will not release a new architecture, Fermi is that, but they can easily release the equivalent of the GT200: the same architecture with twice the resources.
 
Joined
Nov 13, 2007
Messages
10,451 (1.71/day)
Location
Austin Texas
Processor 13700KF Undervolted @ 5.4, 4.8Ghz Ring 190W PL1
Motherboard MSI 690-I PRO
Cooling Thermalright Peerless Assassin 120 w/ Arctic P12 Fans
Memory 48 GB DDR5 7600 MHZ CL36
Video Card(s) RTX 4090 FE
Storage 2x 2TB WDC SN850, 1TB Samsung 960 prr
Display(s) Alienware 32" 4k 240hz OLED
Case SLIGER S620
Audio Device(s) Yes
Power Supply Corsair SF750
Mouse Xlite V2
Keyboard RoyalAxe
Software Windows 11
Benchmark Scores They're pretty good, nothing crazy.
I don't see it this way, as nV has successfully leveraged their programming prowess, time and again, to remain 100% relevant to the industry. As JenHsun says, they are a software company that sells hardware, so hardware failures are a relatively minor setback.

ATI doesn't have the same programming strength, but seems to capitalize on developing hardware that overcomes foundry issues. If the two were to get together and leverage both of their strengths, they'd be unstoppable. Had Larrabee been a success, this might have even been possible, as there would be no monopoly in the GPU market, but of course, recent news shows that such actions are indefinitely delayed. Plus, JenHsun loves to fight adversity... he's been doing it since he was a kid, so this difficult time may be when his leadership wins out overall.

I would buy that, but I can think of many times when successful hardware designs (G80, TNT2, the first GeForce generations) revolutionized the market and made NV what it is today. I can't think of any time when their software saved them.

They have a nice developer kit... but their software is 100% proprietary and 100% based on the performance of their hardware.

Jen Hsun also said that the ambitious and costly FX5200 almost killed the company, in a keynote that I listened to a while back.

CUDA is the only real example of revolutionary software, but again, hardware is what runs it... And no one will buy a $500 accelerator to encode porn when they can do it on a $60 card just fine.
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.49/day)
Location
Reaching your left retina.
Jen Hsun also said that the ambitious and costly FX5200 almost killed the company, in a keynote that I listened to a while back.

Hehe, it was the FX5900; the FX5200 was the low-end card. Probably the worst low-end card in history, because it was the worst card in the worst lineup ever made.

He also said that the biggest problem with the FX line was that it required a different programming model; it required Nvidia to put effort into promoting that model, and they failed badly at doing that, as he has admitted several times. In most respects the FX line was identical to R600: a highly parallel VLIW architecture, with more but weaker execution units, that requires much more load balancing than more regular designs. And they released that when programmable shaders were a novelty, so in a sense it should have been clear to them that it was too early for such a programming model. R600 still came too soon for that model...

Furthermore, the FX failed mostly because of the great success of the 9700 and 9800, which ironically went with a super-scalar (but scalar anyway) design instead of a complex VLIW. It's the story of R600 actually, but with the brands reversed, and happening at a time when programming on shaders was not as advanced as it is now.
 
Joined
Nov 13, 2007
Messages
10,451 (1.71/day)
Location
Austin Texas
Processor 13700KF Undervolted @ 5.4, 4.8Ghz Ring 190W PL1
Motherboard MSI 690-I PRO
Cooling Thermalright Peerless Assassin 120 w/ Arctic P12 Fans
Memory 48 GB DDR5 7600 MHZ CL36
Video Card(s) RTX 4090 FE
Storage 2x 2TB WDC SN850, 1TB Samsung 960 prr
Display(s) Alienware 32" 4k 240hz OLED
Case SLIGER S620
Audio Device(s) Yes
Power Supply Corsair SF750
Mouse Xlite V2
Keyboard RoyalAxe
Software Windows 11
Benchmark Scores They're pretty good, nothing crazy.
Hehe, it was the FX5900; the FX5200 was the low-end card. Probably the worst low-end card in history, because it was the worst card in the worst lineup ever made.

He also said that the biggest problem with the FX line is that it required a different programming model; it required Nvidia to put effort into promoting that model, and they failed badly at doing that, as he has admitted several times. In most respects the FX line was identical to R600: a highly parallel VLIW architecture with more but weaker execution units, which requires much more load balancing than more regular designs. And they released it when programmable shaders were a novelty, so in a sense it should have been clear to them that it was too early for such a programming model. R600 still came too soon for that model...

Furthermore, the FX failed mostly because of the great success of the 9700 and 9800, which ironically went with a super-scalar (but scalar anyway) design instead of a complex VLIW. It's the story of R600, actually, but with the brands reversed, and it happened at a time when shader programming was not as advanced as it is now.

Yes, that programming model was exactly what he talked about... and how it eventually led to the current design of Nvidia cards and ultimately to GPGPU - so it was a failure, but not really.
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,233 (2.57/day)
I would buy that, but I can think of many times when successful hardware designs (G80, TNT2, the first GeForce generations) revolutionized the market and made NV what it is today. I can't think of any time when their software saved them.

Well, there is a very large example of how programming has kept them more than relevant... TWIMTBP.

While I agree that this program is a bit difficult to accept due to being a closed API, as an end user it's only a benefit for those that buy nV's products. The real problem is for developers that must license the closed APIs, which, from my own perspective, went totally against the grain of what DX10 was about: open functions for all (with DX10's lack of "cap bits"). The introduction of CUDA and PhysX has greatly undermined the purposefulness of DX10, but at the same time, from a business perspective, forced licensing has garnered a lot of money for nV... think of all the money made from SLI licensing.

I'm kinda two-faced about nV, because I respect them as a business, but hate how it may have hurt the software market, with developers trying to make both ATI and nV hardware relevant, performance-wise.
 
Joined
Oct 6, 2009
Messages
2,824 (0.52/day)
Location
Midwest USA
System Name My Gaming System
Processor Intel i7 4770k @ 4.4 Ghz
Motherboard Asus Maximus VI Impact (ITX)
Cooling Custom Full System Water cooling Loop
Memory G.Skill 1866 Mhz Sniper 8 Gb
Video Card(s) EVGA GTX 780 ti SC
Storage Samsung SSD EVO 120GB - Samsung SSD EVO 500GB
Display(s) ASUS W246H - 24" - widescreen TFT active matrix LCD
Case Bitfenix Prodigy
Power Supply Corsair AX760 Modular PSU
Software Windows 8.1 Home Premium
At the end of the day, it doesn't matter that a video card gives a 100% boost, or an 80% boost, over the flagship of the current gen. It matters when it comes out, and how many cards ultimately sell.

The 5xxx is substantially faster than anything on the market; that's what matters. The design is simple, and the GPU is powerful.

I feel like the Fermi argument is the same as AMD's true quad-core argument, in the sense that both of these chips were massively late and people tried desperately to rationalize some massive benefit to waiting for the over-complicated design. It will be an impressive card, maybe. But there needs to be a balance between capability and design. If the design is too ambitious to build, it is the same, in the end, as having no design at all.

Fermi is late, and it will almost certainly be slower than the 5970... by the time it finally comes out, ATI will have a full line of products. That's a big change from two years ago, when ATI couldn't even compete with the 8800GT.

In the strategy sense, ATI has won - I can tell you with reasonable certainty that ATI's next design will be a 3200sp GPU based on a slightly tweaked but otherwise identical SP design with a massive amount of memory, whether it be GDDR5 or something else. It will probably not perform better than 2x 5870s, but it will come out MUCH sooner than NV's next design, and it will be faster than Fermi.

Unless NV skips a generation, or Fermi is massively faster than 2x GTX285s in SLI, NV is in trouble in the long run. IMO

I unfortunately won't be able to get into this wonderful debate tonight, because I have to go to my youngest daughter's Christmas concert. But......

All I have to say is hear, hear....... This quote worded perfectly what I was trying to say. It's not a battle as to who is better, Nvidia or ATI. It's more a battle of which company is "doing" better right now. IMO! Like I stated before... whether or not Fermi is more powerful than the 5870 isn't the question. It's whether it will sell that will make it fail or prosper. Take Intel and AMD, for instance. Does anyone here think that Intel sells more Extreme CPUs than AMD sells Phenom II X4 965s? Of course they don't, because of the price-to-performance ratio. When I say that Fermi could be a flop, I am not only considering performance, I am also considering the market. Sure, I also had some performance thoughts in there, but my statements were meant to be more all-around in general than anything.

But again I will strike and then leave, because I have to run :D HEHE. I can't wait to read the discussion tomorrow!!!
 
Joined
Oct 1, 2006
Messages
4,921 (0.75/day)
Location
Hong Kong
Processor Core i7-12700k
Motherboard Z690 Aero G D4
Cooling Custom loop water, 3x 420 Rad
Video Card(s) RX 7900 XTX Phantom Gaming
Storage Plextor M10P 2TB
Display(s) InnoCN 27M2V
Case Thermaltake Level 20 XT
Audio Device(s) Soundblaster AE-5 Plus
Power Supply FSP Aurum PT 1200W
Software Windows 11 Pro 64-bit
Damn, you guys have really in-depth conversations (sniff sniff!!! I smell fanboys lol). It is interesting, but seriously, let's just wait and see what Nvidia has in store! If they beat ATI's cards, including the HD5970, which I don't see happening unless they bring out a dual-GPU solution, I'm going with Nvidia's cards even though I'm an ATI fanboy. I mean, having 2 GTX380s in SLI just smells like pure AWESOMENESS! :D:D:D
You do notice that the pic of what we assume to be the GTX 380 has 8+6-pin connectors?

Assuming that pic is real:
If they are going to make a dual-GPU card out of that, they will most likely need at least a dual 8-pin card with underclocked/cut-down GPUs.
Fermi might have a higher transistor density, but that can't save them from the higher TDP that comes with higher power consumption.
It just sounds all too much like last gen, I am sorry to say.
Just that this time they are a bit too late on the move.
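For what it's worth, the connector math backs that up. A quick sketch (plain C; the wattages are the PCIe spec ceilings for the slot and PEG connectors, not measured TDPs): an 8+6-pin board tops out at 300 W, so two Fermi GPUs would have to squeeze under the 375 W an 8+8-pin board allows, which is why a dual card would need down-clocked or cut-down chips.

    #include <stdio.h>

    int main(void)
    {
        const int slot_w = 75;   /* PCIe x16 slot, watts */
        const int pin6_w = 75;   /* 6-pin PEG connector */
        const int pin8_w = 150;  /* 8-pin PEG connector */

        int single_gpu = slot_w + pin8_w + pin6_w;  /* rumored 8+6-pin board */
        int dual_gpu   = slot_w + 2 * pin8_w;       /* hypothetical 8+8-pin dual board */

        printf("8+6-pin ceiling: %d W\n", single_gpu);  /* 300 W */
        printf("8+8-pin ceiling: %d W\n", dual_gpu);    /* 375 W */
        return 0;
    }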
 

Binge

Overclocking Surrealism
Joined
Sep 15, 2008
Messages
6,979 (1.20/day)
Location
PA, USA
System Name Molly
Processor i5 3570K
Motherboard Z77 ASRock
Cooling CooliT Eco
Memory 2x4GB Mushkin Redline Ridgebacks
Video Card(s) Gigabyte GTX 680
Case Coolermaster CM690 II Advanced
Power Supply Corsair HX-1000
You do notice that the pic of what we assume to be the GTX 380 has 8+6-pin connectors?

Assuming that pic is real:
If they are going to make a dual-GPU card out of that, they will most likely need at least a dual 8-pin card with underclocked/cut-down GPUs.
Fermi might have a higher transistor density, but that can't save them from the higher TDP that comes with higher power consumption.
It just sounds all too much like last gen, I am sorry to say.
Just that this time they are a bit too late on the move.

Thanks for the original viewpoint. :rolleyes:

For anyone confused, he said that NV will have a larger chip that will most likely produce a good bit of heat and require a good deal of power to operate. He feels this step is much like the GT200, and the "move" to GT300 is too slow. Zubz also found it fitting to bring up the validity of recent pictures of the GT300. I truly doubt anyone's brought up the subject before him :laugh:
 
Joined
Nov 13, 2007
Messages
10,451 (1.71/day)
Location
Austin Texas
Processor 13700KF Undervolted @ 5.4, 4.8Ghz Ring 190W PL1
Motherboard MSI 690-I PRO
Cooling Thermalright Peerless Assassin 120 w/ Arctic P12 Fans
Memory 48 GB DDR5 7600 MHZ CL36
Video Card(s) RTX 4090 FE
Storage 2x 2TB WDC SN850, 1TB Samsung 960 prr
Display(s) Alienware 32" 4k 240hz OLED
Case SLIGER S620
Audio Device(s) Yes
Power Supply Corsair SF750
Mouse Xlite V2
Keyboard RoyalAxe
Software Windows 11
Benchmark Scores They're pretty good, nothing crazy.
Thanks for the original viewpoint. :rolleyes:

For anyone confused, he said that NV will have a larger chip that will most likely produce a good bit of heat and require a good deal of power to operate. He feels this step is much like the GT200, and the "move" to GT300 is too slow. Zubz also found it fitting to bring up the validity of recent pictures of the GT300. I truly doubt anyone's brought up the subject before him :laugh:

:laugh: brutal.
 
Status
Not open for further replies.