# AMD Confirms GDDR5 for ATI Radeon 4 Series Video Cards



## malware (May 21, 2008)

AMD today announced the first commercial implementation of Graphics Double Data Rate, version 5 (GDDR5) memory in its forthcoming next generation of ATI Radeon graphics card products. The high-speed, high-bandwidth GDDR5 technology is expected to become the new memory standard in the industry, and its performance and bandwidth are key enablers of The Ultimate Visual Experience, unlocking new GPU capabilities. AMD is working with a number of leading memory providers, including Samsung, Hynix and Qimonda, to bring GDDR5 to market.


Today's GPU performance is limited by the rate at which data can be moved on and off the graphics chip, which in turn is limited by the memory interface width and die size. The higher data rates supported by GDDR5 - up to 5x that of GDDR3 and 4x that of GDDR4 - enable more bandwidth over a narrower memory interface, which can translate into superior performance delivered from smaller, more cost-effective chips. AMD's senior engineers worked closely with industry standards body JEDEC in developing the new memory technology and defining the GDDR5 spec. 
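The arithmetic behind that claim is simple: peak memory bandwidth is the per-pin data rate times the bus width. A quick sketch of the tradeoff (the data rates below are illustrative round numbers, not AMD's actual spec):

```python
# Peak memory bandwidth in GB/s: per-pin rate (Gbit/s) times bus width (bits),
# divided by 8 bits per byte. Illustrative numbers only, not AMD's spec sheet.
def bandwidth_gbytes(rate_gbps_per_pin: float, bus_bits: int) -> float:
    return rate_gbps_per_pin * bus_bits / 8.0

# A narrower, faster GDDR5 interface can match a much wider GDDR3 one:
wide_gddr3   = bandwidth_gbytes(2.0, 512)  # 128.0 GB/s over a wide, costly bus
narrow_gddr5 = bandwidth_gbytes(4.0, 256)  # 128.0 GB/s over half the bus width
assert wide_gddr3 == narrow_gddr5
```

Halving the interface width while doubling the per-pin rate keeps bandwidth constant, which is why the same throughput can come from a smaller, cheaper die.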

"The days of monolithic mega-chips are gone. Being first to market with GDDR5 in our next-generation architecture, AMD is able to deliver incredible performance using more cost-effective GPUs," said Rick Bergman, Senior Vice President and General Manager, Graphics Product Group, AMD. "AMD believes that GDDR5 is the optimal way to drive performance gains while being mindful of power consumption. We're excited about the potential GDDR5 brings to the table for innovative game development and even more exciting game play."

The introduction of GDDR5-based GPU offerings continues AMD's tradition of technology leadership in graphics. AMD was the first to bring a unified shader architecture to market, the first to support Microsoft DirectX 10.1 gaming, the first to move to smaller process nodes like 55nm, the first with integrated HDMI with audio, and the first with double-precision floating point calculation support.

AMD expects that PC graphics will benefit from the increase in memory bandwidth for a variety of intensive applications. PC gamers will have the potential to play at high resolutions and image quality settings, with superb overall gaming performance. PC applications will have the potential to benefit from fast load times, with superior responsiveness and multi-tasking. 

"Qimonda has worked closely with AMD to ensure that GDDR5 is available in volume to best support AMD's next-generation graphics products," said Thomas Seifert, Chief Operating Officer of Qimonda AG. "Qimonda's ability to quickly ramp production is a further milestone in our successful GDDR5 roadmap and underlines our predominant position as innovator and leader in the graphics DRAM market." 

*GDDR5 for Stream Processing*
In addition to the potential for improved gaming and PC application performance, GDDR5 also holds a number of benefits for stream processing, where GPUs are applied to address complex, massively parallel calculations. Such calculations are prevalent in high-performance computing, financial and academic segments among others. AMD expects that the increased bandwidth of GDDR5 will greatly benefit certain classes of stream computations. 

New error detection mechanisms in GDDR5 can also help increase the accuracy of calculations by identifying errors and re-issuing commands to get valid data. This capability provides a level of reliability not available in other GDDR-based memory solutions today.
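The retry idea can be sketched in a few lines. This is only a toy model, not the actual GDDR5 EDC protocol (which runs in hardware at the link level with its own CRC scheme); the function names and the CRC-32 choice here are illustrative:

```python
import zlib

def read_with_edc(read_burst, max_retries=3):
    """Toy model of GDDR5-style error detection: the link returns
    (data, crc); the controller recomputes the CRC over the data and
    re-issues the read command on a mismatch."""
    for _ in range(max_retries + 1):
        data, crc = read_burst()
        if zlib.crc32(data) & 0xFFFFFFFF == crc:
            return data
    raise IOError("data still corrupt after retries")

# Simulated link that corrupts the first transfer, then succeeds:
attempts = []
def flaky_link():
    payload = b"\x01\x02\x03\x04"
    good_crc = zlib.crc32(payload) & 0xFFFFFFFF
    attempts.append(1)
    if len(attempts) == 1:
        return payload, good_crc ^ 0xFF  # bit error on the first try
    return payload, good_crc

data = read_with_edc(flaky_link)  # succeeds on the second transfer
```

The key point, as the press release notes, is detection plus re-issue: the bad transfer is simply repeated until valid data arrives, rather than silently corrupting a calculation.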

*View at TechPowerUp Main Site*


----------



## Cybrnook2002 (May 21, 2008)

Man, cool and all, but it's getting hard to keep up. Specs are getting dated sooo fast; what happened to the "6 months and you're good" rule?


----------



## CDdude55 (May 21, 2008)

Now I'm confused. Should I get an 8800 series card like the 8800 GT, or one of the 4 series cards? I have a 680i SLI SE motherboard, so I don't know if an ATI card will work well on it. Still can't wait for GDDR5.


----------



## suraswami (May 21, 2008)

Damn, I was thinking of doing CF with the current 38 series. Maybe just one 48 series card will do if the price is right. Damn, should I keep the Asus?


----------



## HTC (May 21, 2008)

CDdude55 said:


> Now I'm confused. Should I get an 8800 series card like the 8800 GT, or one of the 4 series cards? I have a 680i SLI SE motherboard, so I don't know if an ATI card will work well on it. Still can't wait for GDDR5.



As far as I know, any single card (except the 3870X2, I believe) will work with your motherboard. It's only when you want CrossFire or SLI that you have to worry.


----------



## Psychoholic (May 21, 2008)

The 3870X2 should also work.. the CrossFire bridge is onboard.


----------



## magibeg (May 21, 2008)

Well, it looks like at least one of the rumors about the 4 series cards is true. Hopefully GDDR5 doesn't make things too expensive, though.


----------



## HTC (May 21, 2008)

Psychoholic said:


> The 3870X2 should also work.. the CrossFire bridge is onboard.



I think the 3870X2 runs CrossFire internally, even though it's only one board. Therefore, it shouldn't work in SLI-only boards. I could be wrong, though!


----------



## ningen (May 21, 2008)

ATI's always better on paper - faster memory, clocks, DX 10.1 support... yet nvidia cards are still better.


----------



## magibeg (May 21, 2008)

HTC said:


> I think the 3870X2 runs CrossFire internally, even though it's only one board. Therefore, it shouldn't work in SLI-only boards. I could be wrong, though!



I'm pretty sure the 3870X2 doesn't need a CrossFire board in order to work.

edit - it's not listed under the requirements on the ATI website. The motherboard does no work to make the dual GPUs work together.


----------



## CDdude55 (May 21, 2008)

Cybrnook2002 said:


> Man, cool and all, but it's getting hard to keep up. Specs are getting dated sooo fast; what happened to the "6 months and you're good" rule?




Good point. That's the only problem I have with PC hardware. I could get an 8800 GTX at launch, and within two days a rumor about the 9 series is leaked. I just wish cards had longer life spans instead of dropping from high-end to second best in a matter of days or weeks. :shadedshu


----------



## eidairaman1 (May 21, 2008)

ningen said:


> ATI's always better on paper - faster memory, clocks, DX 10.1 support... yet nvidia cards are still better.



yet another Fanboy.


----------



## hv43082 (May 21, 2008)

eidairaman1 said:


> yet another Fanboy.



Fanboy or not, NVIDIA cards are still faster right now.  They have the fastest card and best bang for the buck card at the moment.


----------



## happita (May 21, 2008)

Let's hope these will clock better than the current 3000 series offerings.


----------



## ShinyG (May 21, 2008)

ningen said:


> ATI's always better on paper - faster memory, clocks, DX 10.1 support... yet nvidia cards are still better.



Careful dude, that post is a potential fanboi war starter 

Anyway, back on topic: I'm looking forward to both nVidia's 9900 and ATI's 4870; the battle should be a close one! Competition is good. That's something we almost forgot existed until the 38x0 vs 8800/9600 GT battle 
Who knows, by the end of this year, someone might be able to play Crysis maxed out!


----------



## candle_86 (May 21, 2008)

eidairaman1 said:


> yet another Fanboy.



How is that the case? The 2900XT and 3870 looked great on paper, but when they came out they didn't stand a chance.

The X1800XT didn't have a fight against the 7800GTX 512, and the X1900XTX tied it and was again beaten a few weeks later by the 7900GTX and then the 7950GX2. When the X1950XTX showed up, the 8800GTX arrived a month later. ATI hasn't been putting up a good showing for a while. Heck, look at the X1600 or HD2600 cards compared to their direct competition.


----------



## CDdude55 (May 21, 2008)

ningen said:


> ATI's always better on paper - faster memory, clocks, DX 10.1 support... yet nvidia cards are still better.



I personally don't care. The only reason I have an Nvidia card is because I have a 680i SLI motherboard. But if ATI comes out with something really good (maybe the 4 series), then I would have no problem upgrading to an ATI card.


----------



## HTC (May 21, 2008)

CDdude55 said:


> I personally don't care. The only reason I have an Nvidia card is because I have a 680i SLI motherboard. But if ATI comes out with something really good (maybe the 4 series), then I would have no problem upgrading to an ATI card.



The same is true if it were the other way around!

+1


----------



## Valdez (May 21, 2008)

CDdude55 said:


> Good point, That the only problem i have with PC hardware. I could get a 8800 GTX at launch and in the next two days a rumor about the 9 series is leaked. I just wish they had longer life spans instead of dropping from highend to second best in a matter of days or weeks.:shadedshu



NV launched the G80 (GTX, Ultra) in 2006. It's now mid-2008, and the fastest (single GPU) card is still the 8800 Ultra.


----------



## candle_86 (May 21, 2008)

The 9800GTX is faster than the Ultra in a lot of situations.


----------



## Valdez (May 21, 2008)

hv43082 said:


> Fanboy or not, NVIDIA cards are still faster right now.  They have the fastest card and best bang for the buck card at the moment.



yes, faster with bribed developers and publishers behind nvidia...


----------



## Valdez (May 21, 2008)

candle_86 said:


> The 9800GTX is faster than the Ultra in a lot of situations.



http://www.computerbase.de/artikel/..._9800_gtx_sli/21/#abschnitt_performancerating


----------



## erocker (May 21, 2008)

CDdude55 said:


> I personally don't care.  But if ATI comes out with something really good (maybe the 4 series), then I would have no problem upgrading to an ATI card.



Me neither!  I wouldn't care if some new company called Goobledygoobledy came out and released the fastest card, I'd buy that too!


----------



## CDdude55 (May 21, 2008)

Valdez said:


> NV launched the G80 (GTX, Ultra) in 2006. It's now mid-2008, and the fastest (single GPU) card is still the 8800 Ultra.



That's true, but it's not generally the fastest anymore. Also, the 8800 GTX and Ultra are still expensive on Newegg.com.


----------



## erocker (May 21, 2008)

Valdez said:


> yes, faster with bribed developers and publishers behind nvidia...



Not bribed, but what I don't get is why ATi isn't getting together with developers a little more.  Can Nvidia's "The way it's meant to be played" program really be legal?  It would be like Ford getting together with OPEC and formulating a gasoline that only works well with Ford cars.  Sure, it's smart of Nvidia to do it, but it hurts competition.  The end result is that it hurts the consumer.  Oh, and I'm glad that Ati is at least using GDDR5 and furthering their advancement.  Hopefully this will pay off well in the performance department.


----------



## candle_86 (May 21, 2008)

So Nvidia supplies their compilers to a company so their product works well on Nvidia hardware. AMD is more than welcome to do that, and they did in the past as ATI, but AMD itself has never done this. How many titles have you seen that say "runs best on AMD" besides the FarCry 64 patch? AMD needs to get their compilers out so games can be optimized for their cards too; if they can't do that, then who cares? The Way It's Meant to Be Played program lets companies get access to the compiler and receive sponsorship from Nvidia to help fund the game. AMD is always welcome to do it too; they just won't.


----------



## CDdude55 (May 21, 2008)

ATI is the only thing holding AMD together. They are lucky they bought them in time.


----------



## ningen (May 21, 2008)

Oi oi, I'm not a fanboy and don't want to instigate. I do remember the Radeon 9700, you know.

It's just facts, though - you look at the specs and wow, GDDR5, wide bus... but the GFs still win. Even if ATI gets one game, it loses in 10 others. There are more driver problems with ATI too, and pricing kinda shows what's what as well.


----------



## Valdez (May 21, 2008)

erocker said:


> Not bribed, but what I don't get is why ATi isn't getting together with developers a little more.  Can Nvidia's "The way it's meant to be played" program really be legal?  It would be like Ford getting together with OPEC and formulating a gasoline that only works well with Ford cars.  Sure, it's smart of Nvidia to do it, but it hurts competition.  The end result is that it hurts the consumer.



After the "bribing", ATI's presence isn't welcomed by the developers anymore.  ATI doesn't have money, so they're happy if the final version of the game is at least tested on an ATI card...


----------



## erocker (May 21, 2008)

candle_86 said:


> So Nvidia supplies their compilers to a company so their product works well on Nvidia hardware. AMD is more than welcome to do that, and they did in the past as ATI, but AMD itself has never done this. How many titles have you seen that say "runs best on AMD" besides the FarCry 64 patch? AMD needs to get their compilers out so games can be optimized for their cards too; if they can't do that, then who cares? The Way It's Meant to Be Played program lets companies get access to the compiler and receive sponsorship from Nvidia to help fund the game. AMD is always welcome to do it too; they just won't.



Exactly.  Plus, AMD needs to kick themselves in the arse and get their marketing department going.  As far as "runs best on" goes, that's all just marketing hoopla.  AMD should have plenty of money to "grease the palms" of game developers.


----------



## candle_86 (May 21, 2008)

CDdude55 said:


> ATI is the only thing holding AMD together. They are lucky they bought them in time.



I'd have to say the Sempron and low-end X2 do a lot better for them; they are still in many OEM and office PCs sold, and are faster than the Intel lot in that price range. ATI is getting beaten in every price segment right now.


----------



## candle_86 (May 21, 2008)

erocker said:


> Exactly.  Plus, AMD needs to kick themselves in the arse and get their marketing department going.  As far as "runs best on" goes, that's all just marketing hoopla.  AMD should have plenty of money to "grease the palms" of game developers.



Agreed. A lot of "runs best on Intel" games back in the day did the opposite, but it just means they had the compiler for it. If AMD did this I might take notice, but all I see them doing is screwing up.


----------



## Valdez (May 21, 2008)

candle_86 said:


> So Nvidia supplies their compilers to a company so their product works well on Nvidia hardware. AMD is more than welcome to do that, and they did in the past as ATI, but AMD itself has never done this. How many titles have you seen that say "runs best on AMD" besides the FarCry 64 patch? AMD needs to get their compilers out so games can be optimized for their cards too; if they can't do that, then who cares? The Way It's Meant to Be Played program lets companies get access to the compiler and receive sponsorship from Nvidia to help fund the game. AMD is always welcome to do it too; they just won't.



What you've written here is illogical... Do you have any source that AMD doesn't (want to) provide any help to the developers?


----------



## EastCoasthandle (May 21, 2008)

CDdude55 said:


> Now I'm confused. Should I get an 8800 series card like the 8800 GT, or one of the 4 series cards? I have a 680i SLI SE motherboard, so I don't know if an ATI card will work well on it. Still can't wait for GDDR5.



Confused?  Confused?  Either you wait and see if the 4870 is indeed faster than Nvidia's current offering, or you remain loyal no matter what!  There's nothing to be confused about at this point.


----------



## jbunch07 (May 21, 2008)

erocker said:


> Not bribed, what I don't get is why ATi isn't getting together with developers a little more?  Can Nvidia's "The way it's meant to be played" program really be leagal?  It would be like Ford getting together with OPEC and formulating a gasoline that only works well with Ford cars.  Sure, it's smart of Nvidia to do it, however it hurts competition.  The end result is that it hurts the consumer.  Oh, and I'm glad that Ati is at least using GDDR5, and furthering thier advancement.  Hopefully this will pay off well in the performance department.



Those were my thoughts... it doesn't seem right when Nvidia does that. :shadedshu
AMD/ATI doesn't do that AFAIK, but I'm sure it's just a marketing thing Nvidia has going...

Anyway, I'm glad to see that GDDR5 is confirmed!


----------



## Azazel (May 21, 2008)

For the love of god.. another ATI vs Nvidia thread...


----------



## Valdez (May 21, 2008)

Catalyst 8.5 is out, btw.


----------



## PVTCaboose1337 (May 21, 2008)

I am liking the GDDR5 route, and also I am liking the 28 people viewing this thread (must be important!)  Good thing ATI will make a comeback.


----------



## CDdude55 (May 21, 2008)

EastCoasthandle said:


> Confused?  Confused?  Either you wait and see if the 4870 is indeed faster than Nvidia's current offering, or you remain loyal no matter what!  There's nothing to be confused about at this point.



But if it is faster, it will be more expensive. Also, I don't know how good ATI's drivers are on nForce chipset boards.


----------



## Valdez (May 21, 2008)

CDdude55 said:


> But if it is faster, it will be more expensive. Also, I don't know how good ATI's drivers are on nForce chipset boards.



I'm using an HD3870 with an nForce 570 Ultra; never had a problem with that.


----------



## Millenia (May 21, 2008)

CDdude55 said:


> But if it is faster, it will be more expensive. Also, I don't know how good ATI's drivers are on nForce chipset boards.



But if it offers a major performance upgrade, it's a LOT cheaper and a better choice to use slightly more expensive RAM than to build monolithic cards with gazillions of transistors, if it yields the same end result.

At least power consumption should be lower that way


----------



## EastCoasthandle (May 21, 2008)

CDdude55 said:


> But if it is faster, it will be more expensive. Also, I don't know how good ATI's drivers are on nForce chipset boards.



You could have easily visited your motherboard's forum and received a good answer to that.  Valdez's statement should key you in on what the deal is, IMO.


----------



## [I.R.A]_FBi (May 21, 2008)

I'm guessing GDDR5 is cheaper than a 512-bit bus?


----------



## EastCoasthandle (May 21, 2008)

[I.R.A]_FBi said:


> I'm guessing GDDR5 is cheaper than a 512-bit bus?



Well, think of it like this:
You have a water cooling setup and want to decide on the size of the tubing.  You can go with 3/4" inner diameter tubing, but you run the risk of a slower flow rate due to the pump's barb being only 1/2" and its power output (more or less).  Or you can get tubing with a 7/16" inner diameter (slightly smaller than 1/2"), which should maximize your flow rate.  I believe this is what the following means:



> Today's GPU performance is limited by the rate at which data can be moved on and off the graphics chip, which in turn is limited by the memory interface width and die size. *The higher data rates supported by GDDR5* - up to 5x that of GDDR3 and 4x that of GDDR4 - *enable more bandwidth over a narrower memory interface*, *which can translate into superior performance delivered from smaller, more cost-effective chips*.



If I am wrong could someone clarify this?


----------



## Kirby123 (May 21, 2008)

ningen said:


> ATI's always better on paper - faster memory, clocks, DX 10.1 support... yet nvidia cards are still better.



That's all a matter of opinion from my point of view. My X2 matches up to the 9800 GX2 easily; my max score in 3DMark06 was higher than the 9800 GX2's, with the same parts, just different video cards.
The cards are built for gaming; there isn't a good enough game to really tell which is better. Right now I beat Nvidia's 9800 series by 8 fps with the same settings. It all comes down to the config of your PC.
(I am only running one card right now, too.)


----------



## CDdude55 (May 21, 2008)

Kirby123 said:


> That's all a matter of opinion from my point of view. My X2 matches up to the 9800 GX2 easily; my max score in 3DMark06 was higher than the 9800 GX2's, with the same parts, just different video cards.
> The cards are built for gaming; there isn't a good enough game to really tell which is better. Right now I beat Nvidia's 9800 series by 8 fps with the same settings. It all comes down to the config of your PC.
> (I am only running one card right now, too.)



That's weird considering the 9800 GX2 is more expensive.


----------



## WarEagleAU (May 21, 2008)

Looks like the 4 series will be mighty nice. I may need to sell my 3870 and buy a 4870. Of course, I'd love to CF two of them, even though I don't have a need for it.


----------



## Kirby123 (May 21, 2008)

Price doesn't matter; it's all about your setup. The 9800 GX2 and the X2 3 series are basically equal; I recommend the X2 over the 9800 GX2. Right now the best setup out is the ATI CrossFireX quad-GPU setup. My friend from CSS got a 3DMark06 score of 45k... one main reason is the Quad Extreme he has at 4.8 GHz with 8 GB of RAM. But his friend got 9800 GX2 SLI and his was only 40k. CrossFireX is the best thing out right now, from what I have seen at least.


----------



## imperialreign (May 21, 2008)

jbunch07 said:


> Those were my thoughts... it doesn't seem right when Nvidia does that. :shadedshu
> AMD/ATI doesn't do that AFAIK, but I'm sure it's just a marketing thing Nvidia has going...
> 
> Anyway, I'm glad to see that GDDR5 is confirmed!



ATI has worked with developers as well, and usually, in the very, very few games where they've worked closely with developers, we tend to see the tables turned - i.e. Call of Juarez.  Even games that are developed closely with ATI but still brandish the TWIMTBP logo run fairly equally between the two (i.e. FEAR, when it came out, and even up to today).

But those games are few and far between - nVidia likes to pander to the poor, struggling game developers who need the money.


ATI cards have proven capable of keeping up without "optimized" game code, which says a lot, IMO.   If we were to scrap all hardware-specific optimization coding, both companies would be on a level playing field.

The way I see it, though, I wouldn't be surprised if nVidia is the next to come under anti-trust law investigation.  They're still up to shady tactics, and odd coincidences pop up now and then - like Assassin's Creed and DX10.1 . . . funny how much better ATI 10.1 capable cards were running it as compared to nVidia's, and then DX10.1 is removed?! 

Makes you wonder why nVidia refused to support 10.1 to begin with, even though it's meant to run better than initial DX10 . . . and possibly that refinement is why ATI chose to be the only hardware manufacturer to support it . . .


----------



## CDdude55 (May 21, 2008)

ATI works a little with Valve. That's why their logo is in the pause menu in HL2:EPx.  And Nvidia's logo is in Crysis. But it's not like it actually matters.


----------



## ningen (May 22, 2008)

You don't play 3DMark, so who cares. The 9800GX2 vs 3870X2 tests (at least those I've seen) show that the cards are pretty much on par, but... ATI sometimes experiences heavy fps drops with AA. Maybe drivers, maybe nvidia optimizations... whatever. In the end, I'd rather play than indulge in conspiracy theories, just as I'd buy a card that works better for a variety of games instead of clinging to the few that my GPU shines in.

GDDR5 doesn't seem like something to drool over at all, either. I mean, isn't it like DDR3 RAM? Frequency increases and so do timings, and in the end the performance increase is nothing much, really.
Same with graphics cards: we're up to GDDR5 already, but the cards will still develop at the usual, moderate pace. The 4x series is supposed to be what, up to 50% faster than the 3x series? Within the usual cycle, I'd say.


----------



## Kirby123 (May 22, 2008)

Well, my SLI Ultras didn't run Crysis as well as my ATI cards have... that's what I find funny. The games that say "The Way It's Meant to Be Played"? Why do my ATI cards run them better?


----------



## Valdez (May 22, 2008)

imperialreign said:


> and possibly that refinement is why ATI chose to be the only hardware manufacturer to support it . . .



The S3 Chrome 430GT supports DX10.1 too.


----------



## [I.R.A]_FBi (May 22, 2008)

ningen said:


> You don't play 3DMark, so who cares. The 9800GX2 vs 3870X2 tests (at least those I've seen) show that the cards are pretty much on par, but... ATI sometimes experiences heavy fps drops with AA. Maybe drivers, maybe nvidia optimizations... whatever. In the end, I'd rather play than indulge in conspiracy theories, just as I'd buy a card that works better for a variety of games instead of clinging to the few that my GPU shines in.
> 
> GDDR5 doesn't seem like something to drool over at all, either. I mean, isn't it like DDR3 RAM? Frequency increases and so do timings, and in the end the performance increase is nothing much, really.
> Same with graphics cards: we're up to GDDR5 already, but the cards will still develop at the usual, moderate pace. The 4x series is supposed to be what, up to 50% faster than the 3x series? Within the usual cycle, I'd say.



I wouldn't say "nothing". It would be nothing without the clock speeds, but the clock speeds are there.


----------



## kylew (May 22, 2008)

From the devs' point of view it's "Nvidia, the way you're meant to be paid". I wouldn't be surprised if, without any special NV optimisations through "close working relationships" with devs, NV cards weren't nearly as fast as they are now. I also wouldn't be surprised if they got slapped with an antitrust something; all this about Assassin's Creed and DX10.1 has to have caught someone's interest by now.


----------



## ningen (May 22, 2008)

[I.R.A]_FBi said:


> I wouldn't say "nothing". It would be nothing without the clock speeds, but the clock speeds are there.


I said "nothing much", not "nothing".
Without the clocks increased, we'd get a performance drop with slower timings, obviously.


----------



## kylew (May 22, 2008)

Valdez said:


> The S3 Chrome 430GT supports DX10.1 too.



I think NV is so against DX10.1 because they can't implement it themselves. That'd kinda tie in with the rumors that NV complained to MS about the specs of DX10 before it was finalised, so they were changed. I think DX10.1 is what DX10 was originally meant to be. It just seems too much of a coincidence; NV appears to be very against DX10.1 for reasons unknown outside of rumors. All this crap about DX10.1 and what NV appears to have done has put me totally off giving NV any of my money; they can go away, to stay in the corner... ahem... anyway.


----------



## Kirby123 (May 22, 2008)

Cards are built for specific games. That's how they work.


----------



## wolf2009 (May 22, 2008)

Kirby123 said:


> Well, my SLI Ultras didn't run Crysis as well as my ATI cards have... that's what I find funny. The games that say "The Way It's Meant to Be Played"? Why do my ATI cards run them better?



Saying it with words is one thing, but proving it with an in-game benchmark is another. 

Saying ATI cards play Crysis better than NVIDIA is very subjective. You've got to back it up with screens or a benchmark; it's difficult to believe you when all the benchmarks prove otherwise.


----------



## EastCoasthandle (May 22, 2008)

kylew said:


> I think NV is so against DX10.1 because they can't implement it themselves. That'd kinda tie in with the rumors that NV complained to MS about the specs of DX10 before it was finalised, so they were changed. I think DX10.1 is what DX10 was originally meant to be. It just seems too much of a coincidence; NV appears to be very against DX10.1 for reasons unknown outside of rumors. All this crap about DX10.1 and what NV appears to have done has put me totally off giving NV any of my money; they can go away, to stay in the corner... ahem... anyway.



From my understanding this is what was said to be the case. Read here 


Also read this about DX11 being nix'd (take it with a grain of salt).  But if true, see a pattern?


----------



## Kirby123 (May 22, 2008)

I'd say it's the OCing I'm using on my X2, since both cores are 918/1053 stable on air, with my 3.6 GHz E8400 on air. Once I get my new heatsink I can go up to 3.8 or 4.1 GHz. I'd have to buy 3DMark06 to get real pics of the benchmark -.- I'm too poor to even get that; all my money went to computers and bills XD


----------



## flashstar (May 22, 2008)

Maybe it's just me, but I cannot get over 35 fps in UT3 at max settings at 1680x1050 with my 2900pro overclocked to 870/927. I've tried to figure out what's happening, because my 2900pro is clearly much faster than an 8800GTS 640 and slightly faster than an 8800GT at stock speeds. Even with my old 7800GTX I could get 30 fps at similar settings, and that card had 1/3 the raw power of my 2900pro. I too believe that Nvidia has had control of the market for too long and has been pulling some strings behind the scenes. I'll give these new drivers a shot and let everyone know the results.


----------



## ShadowFold (May 22, 2008)

Cybrnook2002 said:


> Man, cool and all, but it's getting hard to keep up. Specs are getting dated sooo fast; what happened to the "6 months and you're good" rule?



Nothing  You can still game with an 8800GTS G80/Ultra. They are just updating cores because they have the tech, so they're like "why not?"


----------



## imperialreign (May 22, 2008)

TBH, I've never had anything against nVidia for their TWIMTBP campaign - truthfully, I thought it was a brilliant marketing maneuver . . .

but now, years later, it hasn't led to an increase in competition . . . it's led to an increase in one-sidedness.

But the only thing, IMO, that TWIMTBP really accounts for - aside from the fact that a game is written to be optimized for green camp hardware - is the major performance lead nVidia cards have over ATI when a new title is released.  It leaves ATI having to make up that ground with CAT releases, and IMO, they reclaim that ground quite respectably after a few months.  Sure, the game will continue to run better on nVidia hardware, but months after a major release, ATI cards are at least back on par, or only slightly behind nVidia.



Somewhat back on topic:  I'm really glad to hear that the new HD4k series will be utilizing GDDR5, and hopefully it will work out greatly to their advantage performance-wise . . . but I also find it somewhat disturbing how ATI tries to stay at the top of technology support, while nVidia seems to ignore industry-wide technological advancements.  ATI was the first to support GDDR3, GDDR4, GDDR5, HDMI, PCIE 2.0, DX10.1, etc, etc . . .

Curious . . . did nVidia ever release a GDDR4 video card?


----------



## wolf2009 (May 22, 2008)

imperialreign said:


> TBH, I've never had anything against nVidia for their TWIMTBP campaign - truthfully, I thought it was a brilliant marketing maneuver . . .
> 
> but now, years later, it hasn't led to an increase in competition . . . it's led to an increase in one-sidedness.
> 
> ...



Ya, never. That's an interesting point you make. ATI has always taken the initiative with technology.

Another interesting point you make is that games on ATI hardware catch up with NVIDIA after a few months. Do you have a link to a benchmark? I'm interested in seeing this.


----------



## Kirby123 (May 22, 2008)

flashstar said:


> Maybe it's just me, but I cannot get over 35 fps on UT3 at max settings at 1680x1050 with my 2900pro overclocked to 870/927. I've tried to figure out what's happening because my 2900pro is clearly much faster than a 8800gts 640 and slightly faster than an 8800gt at stock speeds. Even with my old 7800gtx, I could get 30 fps at similar settings and that card had 1/3 the raw power of my 2900pro. I too believe that Nvidia has had control of the market for too long and has been pulling some strings behind the scene. I'll give these new drivers a shot and let everyone know the results.



I don't see why you're getting framerates so low with a 2900. I'm confused myself; you've got at least a 2.8GHz dual core, and your memory should be able to go faster than that.


----------



## Kirby123 (May 22, 2008)

I'm confused: your computer specs say you have a 2900 XT, but you say you have a Pro.


----------



## Kursah (May 22, 2008)

This kind of reminds me of the touted GDDR4 introduction on the X1950XTX, which was cool and a "braggable" feature for a few. I hope the GDDR5 introduction is much improved over the GDDR4 one...the hope of better OC-ability also arises in my mind...the GDDR4 on my XTX struggled to get beyond 1080, whereas the GDDR3 on my current 9600GT hit 1100 without issue and has gone higher.

Of course it's totally cool that AMD/ATI is still pushing memory innovation in the graphics arena; they definitely got something here, especially if the performance netted from GDDR5 proves to be a strong value and factor for the card series' performance. I will sit back with my 9600GT and watch for now. As always, I hope that AMD/ATI are successful, same with NV...if neither succeeds, we lose.


----------



## Dangle (May 22, 2008)

The prob with NVIDIA is they do crap like watering down graphics to give their cards higher framerates.  Don't believe me? PM me and I'll find a link for you.  ATI FTW!


----------



## Kursah (May 22, 2008)

Dangle said:


> The prob with NVIDIA is they do crap like watering down graphics to give their cards higher framerates.  Don't believe me? PM me and I'll find a link for you.  ATI FTW!



That's strategy for ya...honest or not, in a cut-throat, bottom-line, very fast-paced industry...I've seen the good/bad/ugly of both sides. Personally I care about what gets me the best performance for my budget and requirements...in this last purchase the 9600GT won, but quite a few times before that I had the ATIs.

For those of you that care about the politics in this industry, you can support the company that makes you feel good...I support what makes my games play smoothly, look pretty, and is easy on the wallet. To some that may seem wrong, but for me that's just how it is.


----------



## btarunr (May 22, 2008)

While pre-release benches suggest a single HD4870 will be about 20~25% faster than a 9800 GX2, others believe it's not going to be more than 25% faster than a 9800 GTX.

I'm not expecting much out of the HD4870.


----------



## XooM (May 22, 2008)

EastCoasthandle said:


> Well think of it as this:
> You have water cooling setup and want to decide on the size of the tubing.  You can go 3/4" inner diameter tubing but you run the risk of a slower flow rate do to the size of the pump's barb only being 1/2" and it's power output (more/less).  Or you can get a tube with an inner diameter 7/16" (which is slightly smaller then 1/2") which should maximize your flow rate.


This doesn't make sense. Water velocity through tubing is completely meaningless. Water volume through tubing is what really matters, and 3/4" has lower laminar resistance than 7/16". However, the difference in volumetric throughput between the two is minimal enough that 7/16" is preferred due to simplicity in tubing runs.
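XooM's claim about laminar resistance can be put in rough numbers. In the laminar (Hagen-Poiseuille) regime, flow resistance per unit length scales as 1/r^4, so inner diameter dominates. A back-of-the-envelope sketch, under the idealized assumption of fully laminar flow in smooth tubing (real loops have barbs, bends, and turbulence this ignores):

```python
# Back-of-the-envelope Hagen-Poiseuille comparison of tubing sizes.
# Laminar flow resistance per unit length scales as 1/r^4, so inner
# diameter dominates. Idealized: ignores barbs, bends, and turbulence.

def relative_resistance(inner_diameter_in: float) -> float:
    """Laminar resistance per unit length, up to a constant factor."""
    radius = inner_diameter_in / 2.0
    return 1.0 / radius ** 4

ratio = relative_resistance(7 / 16) / relative_resistance(3 / 4)
print(f'7/16" tubing has {ratio:.1f}x the laminar resistance of 3/4"')
# -> about 8.6x, so 3/4" really does have lower laminar resistance
```

Which is why the real argument for 7/16" is convenience and fit over barbs, not flow.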


----------



## EastCoasthandle (May 22, 2008)

XooM said:


> This doesn't make sense. Water velocity through tubing is completely meaningless. Water volume through tubing is what really matters, and 3/4" has lower laminar resistance than 7/16". However, the difference in volumetric throughput between the two is minimal enough that 7/16" is preferred due to simplicity in tubing runs.


This is completely and utterly ridiculous.  
#1 Work on how you reply to people.
#2 I suggest you check your folks' water hose in the yard. Just turn the faucet on a little bit and use the nozzle on the end to release the water. The water volume at the faucet won't be the same at the nozzle.
#3 If you want to continue this, PM me; no need to derail the thread!


----------



## erocker (May 22, 2008)

Lol, I am seriously laughing that I go to the end of this thread and find a discussion on tubing for water cooling!  This thread has indeed gone way off topic!  Let's try to at least keep it in the realm of video cards.


----------



## jbunch07 (May 22, 2008)

erocker said:


> Lol, I am seriously laughing that I go to the end of this thread and find a discussion on tubing for water cooling!  This thread has indeed gone way off topic!  Let's try to at least keep it in the realm of video cards.



Haha, I was just wondering the same thing...

Back on topic:
Let's hope the price of GDDR5 doesn't hurt ATI too much...
They need to be putting some money into marketing; ATI's marketing is close to nonexistent at the moment.


----------



## imperialreign (May 22, 2008)

wolf2009 said:


> ya never . thats an interesting point u make . ati has always taken the initiative with technology.
> 
> another interesting point u make is that games on ati hardware catch up with nvidia after few months , u have any link to a benchmark ? i'm interested in seeing this.




I'll try and dig up the review I read a while back, if I can find it again - it's kinda hard to find legit reviews like that seeing as how not many sites run back through testing when new driver releases are put out . . .

we can kind of see it, though, in our e-peen "post your gameX benchmark score here" threads

but, again, I'll try and dig up what I remember seeing, and I'll post it back up here in this thread once I find it . . .


----------



## FR@NK (May 22, 2008)

XooM said:


> Water volume through tubing is what really matters, and 3/4" has lower laminar resistance than 7/16". However, the difference in volumetric throughput between the two is minimal enough that 7/16" is preferred due to simplicity in tubing runs.



Maybe because the 3/4" tubing has more surface area on the inner wall of the tube, which causes more friction on the coolant.....lol. GDDR5 will be able to have a smaller 256-bit interface yet still have more bandwidth than the 512-bit 2900XT. I have no idea what this has to do with a garden hose, though.
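FR@NK's point checks out with simple arithmetic: peak memory bandwidth is bus width times effective data rate, so a narrower bus with much faster memory can come out ahead. A sketch (the clock figures below are illustrative ballpark numbers, not official specs):

```python
# Peak memory bandwidth = bus width (bytes) * effective transfer rate.
# Clock figures are ballpark illustrations, not official specs.

def bandwidth_gb_s(bus_bits: int, effective_gt_s: float) -> float:
    """Peak bandwidth in GB/s for a given bus width and data rate."""
    return bus_bits / 8 * effective_gt_s

gddr3_512 = bandwidth_gb_s(512, 1.656)  # 2900 XT-style GDDR3, ~828 MHz x2
gddr5_256 = bandwidth_gb_s(256, 3.6)    # GDDR5, ~900 MHz base x4

print(f"512-bit GDDR3: {gddr3_512:.0f} GB/s")  # -> 106 GB/s
print(f"256-bit GDDR5: {gddr5_256:.0f} GB/s")  # -> 115 GB/s
```

Half the wires, more bandwidth - which is the whole pitch for GDDR5 on a cheaper PCB.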


----------



## Rebo&Zooty (May 22, 2008)

candle_86 said:


> hows that the case, the 2900XT and 3870 looked great on paper, but they came out and didnt stand a chance.
> 
> The x1800XT didnt have a fight agasint the 7800GTX 512 and the 1900XTX tied it and was again beaten a few weeks later by the 7900GTX and then the 7950GX2. When the 1950XTX showed up the 8800GTX arrived 1 month later. ATI hasnt been putting up a good showing for awhile. Heck look at the x1600 or hd2600 cards compared to there direct compititon



Oh, don't make me link every farking review out there showing the 7950GX2 for the POS it was. You're such an nvidiot..........

First, the GX2 vs the X1950: the GX2 loses not just in performance but in support. The GX2 is trash; nVidia made it to keep top numbers in a few games till the 8800 came out, that's it, then they fully dumped its support. Sure, the drivers work, but quad-SLI? And even SLI performance of the GX2 vs true SLI was worse, which is sad since it's basically two cards talking directly.

As to the X1900, it STOMPED the 7900/7950, cards that ON PAPER should have been stronger; 24 pipes vs 16, for example, was what people were using to "prove" that the nVidia cards WOULD kill the X1900 range of cards.

I would make another massively long post, but you would just ignore it like all fanbois do, or resort to insults.


----------



## jaydeejohn (May 22, 2008)

EastCoasthandle said:


> Well think of it as this:
> You have water cooling setup and want to decide on the size of the tubing.  You can go 3/4" inner diameter tubing but you run the risk of a slower flow rate do to the size of the pump's barb only being 1/2" and it's power output (more/less).  Or you can get a tube with an inner diameter 7/16" (which is slightly smaller then 1/2") which should maximize your flow rate.  I believe this is what the following means:
> 
> 
> ...



Imagine 512 connections/wires coming from the bus to everywhere they need to go for the output. That's a lot of wires, and voltage control. With GDDR5, you have the ability to push the same or a little more info faster than a 512-bit bus without all those wires - in this case, just 256. Also, GDDR5 "reads" the length of each connection, allowing for the correct voltage through each wire/line. This is important: it's more stable, keeping frequencies within proper thresholds, and it also eliminates the cost of having to do it the more expensive way. Hope that helps.


----------



## EastCoasthandle (May 22, 2008)

jaydeejohn said:


> Imagine 512 connections/wires coming from the bus to everywhere it needs to go for the output. Thats alot of wires, and voltage control. With GDDR5, you have the ability to push the same or a lil more info faster than a 512 bus without all those wires, in this case, just 256. Also, GDDR5 "reads" the length of each connection, allowing for correct voltage thru the wire/line, this is important, so its more stable, keeping frequencies within proper thresholds, also elimanting costs of having to go the more exspensive way of doing it. Hope that helps



Thanks for the info


----------



## jaydeejohn (May 22, 2008)

YW. This should dramatically cut down the cost of the PCBs and still provide great performance.


----------



## jbunch07 (May 22, 2008)

jaydeejohn said:


> YW. This should dramatically cut down the costs of the pcbs, and still provide great performance



That is, if the cost of the GDDR5 doesn't cripple them...


----------



## EastCoasthandle (May 22, 2008)

jaydeejohn said:


> YW. This should dramatically cut down the costs of the pcbs, and still provide great performance



Agreed...I still wonder what kind of performance is to be had with a 512-bit bus. I hope we find out with the X2.


----------



## HTC (May 22, 2008)

EastCoasthandle said:


> Agreed...I still wonder what kind of performance is had with 512 bus.  I hope we find out with the X2



And, in theory, reduce the heat it creates too!


----------



## Rebo&Zooty (May 22, 2008)

How long till the nVidia fanboi says that ATI should have gone 512-bit and should have more pipes/ROPs?

Funny, since the X1900/1950XT/XTX cards had 16 pipes/ROPs vs the 7900 having 24, and the 7900 got pwned........

Meh, I'm sick of the "ATI sucks because *add bullshit FUD here*" or the "nVidia sucks because *add bullshit FUD here*".

They both have their flaws and their good points.

The one thing I almost always see out of ATI since the 8500 has been INNOVATION. It hasn't always worked out the way they intended; the 2900/3800 are the prime example. The main issue was that ATI designed the R600/670 cores for DX10, not DX9, and as such they followed what Microsoft wanted to do with DX10+: remove dedicated AA hardware and use shaders to do the AA and other work. Of course this led to a problem - DX9 support was an afterthought, and as such gave worse performance when you turned AA on.

ATI thought, like many other companies, that Vista would take off and be a huge hit, just like XP did when it came out, and with Vista being a big hit, DX10+ games would have been out en masse. But Vista fell on its face, and ATI still had this pure DX10 chip already in the pipe, so they ran with it KNOWING it would have its issues/quirks in DX9 games.

nVidia, on the other hand, effectively took the opposite approach with the G80/92 cores: they built a DX9 part with DX10 support as an afterthought. In this case it was a good move, because without Vista being a giant hit, game developers had no reason to make true, pure DX10 games.

nVidia didn't go DX10.1 because it would have taken some redesign work on the G92, and they wanted to keep their investment in it as low as possible to keep the profit margin as high as possible. It's why they lowered the bus width and complexity of the PCB, it's why they didn't add DX10.1 support, and it's why the 8800GT's reference cooler is the utter piece of shit it is (I have one; I can say for 100% certain the reference cooler is a hunk of shit!!!!).

Now I could go on and on and on about each company; point is, they have both screwed up.

Biggest screwups for each:

ATI: the 2900 (R600) not having a dedicated AA unit for DX9 and older games.

nVidia: the GeForce 5/FX line - horrible DX9 support that game developers ended up having to not use because it ran so badly, forcing any FX owner to run all his/her games in DX8 mode. Also the 5800 design was bad; high-end RAM with a small bus and an ungodly loud fan does not a good card make.


That's how I see it. At least ATI never put out a card touted as being capable of something that, in practice, it couldn't do even passably well......


----------



## Rebo&Zooty (May 22, 2008)

jbunch07 said:


> that is if the cost for the gddr5 doesnt cripple them...



Doubt it will have any real impact on the card makers' end; they buy HUGE quantities of chips, getting a price that's FAR lower than the premium we consumers pay for that same RAM.

I had an article before my last HDD meltdown that showed actual cost per memory chip for video cards: DDR vs DDR2 vs DDR3 vs DDR4.

DDR4 was more expensive, but that was mostly due not to it being new but to it being in short supply at the time. Still, the price you paid to get it on a card was extremely exaggerated; of course it's "new," so they charge extra for it.

The cost of 2 vs 3, again, wasn't that large, same with DDR vs DDR2. Again, we are talking about companies that buy hundreds of thousands if not millions of memory chips at a time from their suppliers. Those suppliers want to keep on the good side of their customers so they keep making a profit, so they give them far better prices than they would ever admit to an outside party.

Also, the more you buy, the lower the per-unit cost is, same as with most things. Go check supermediastore: if you buy 600 blanks, the price is quite a bit lower than buying 50 or 100 at a time.


----------



## jbunch07 (May 22, 2008)

Yea, this is true!
The vendors get RAM at a nice price because they buy such large orders!


----------



## Rebo&Zooty (May 22, 2008)

I would love to see somebody try that new Qimonda RAM that's higher density per chip. Would be interesting to see a video card that had 2GB of high-bandwidth RAM......or hell, use it for onboard video (ohhh, that could rock; 4 chips for 512-bit (or something like that) would make onboard a hell of a lot better....)


----------



## candle_86 (May 22, 2008)

Rebo&Zooty said:


> how long till the nvidia fanboi says that ati should have gone 512bit and should have more pipes/rops?
> 
> funny since the x1900/1950xt/xtx cards had 16 pipes/rops vs the 7900 having 24 and the 7900 got pwned........
> 
> ...




The AA on the shaders is a stupid, bad idea; MS doesn't even understand hardware, that's the problem. nVidia is not going to do DX10.1 because it requires shader-based AA, which is total junk and worthless. Sure, the AA might look better, but a 50% drop in FPS isn't worth it; I'll take dedicated hardware AA any day. What MS needs to do is discuss these ideas, not just sit around and think them up. Remember, if MS cuts nVidia out of DX totally, OpenGL will make a massive comeback. MS has no choice but to do what nVidia tells it to do for this reason alone. Several problems exist with shader AA; if you can't see that, I'm sorry.

As for innovation, I beg to differ - what has ATI actually done? Shader AA was the worst idea I've heard of. 5 groups of 64 shaders but only one unit can do complex shader math - another bad idea. That's why ATI cards perform like 64-shader cards most of the time, and if they are lucky, 128-shader cards.

GDDR5 is marketing hype; the latency alone kills it. New RAM types are never as good as the older ones on release. Look at the GDDR3 5700 Ultra vs the regular 5700 Ultra: same performance because of latency. Go ahead, give us 3000MHz RAM with a 200ms response time; it won't be any better than 2000MHz RAM with an 80ms response time. These are just random numbers, but it's the same reason people don't upgrade to DDR3. All hype from AMD and absolutely nothing to even care about. This time they might have a single-core solution that can tie the 8800 Ultra.


----------



## jbunch07 (May 22, 2008)

Rebo&Zooty said:


> i would love to see somebody try that new qmoda...whatever ram thats higher dencity per chip, would be intresting to see a videocard that had 2gb of high bandwith ram......or hell use it for onboard video(ohhh that could rock, 4 chips for 512bit(or something like that) would make onboard a hell of alot better....



hmmm 2gb of video ram would only be good for extremely high resolutions...


----------



## Dangle (May 22, 2008)

Why are there so many furious Nvidia fans in here?


----------



## candle_86 (May 22, 2008)

We're trying to save you from a stupid purchase.


----------



## jbunch07 (May 22, 2008)

candle_86 said:


> where tring to save you from a stupid purchase



Come on now, candle...what has ATI ever done to you? 

Seriously, there is no reason to hate ATI that much!


----------



## candle_86 (May 22, 2008)

Oh yes there is; plus y'all are like family and this is an intervention - I have to save y'all from yourselves. If you buy AMD products you will hate yourself for doing so; historically, nVidia has always been faster at the same price point.


----------



## jbunch07 (May 22, 2008)

candle_86 said:


> oh yes there is, plus yall are like family and this is an intervention, i have to save yall from yourselves. IF you buy AMD products you will hate yourself for doing so, historiclly Nvidia has always been faster at the same price point



Look at my system specs <----
Look at my face ----> 

I'm very happy with AMD/ATI.

My previous rig was nVidia, and I was happy with that as well. 
But hey, I'm not complaining...you have every right to say what you want.


----------



## FR@NK (May 22, 2008)

candle_86 said:


> oh yes there is, plus yall are like family and this is an intervention, i have to save yall from yourselves. IF you buy AMD products you will hate yourself for doing so, historiclly Nvidia has always been faster at the same price point



Most of us here are smart enough to know that the ATI cards we use are slower than nVidia's cards.


----------



## jbunch07 (May 22, 2008)

FR@NK said:


> Most of us here are smart enough to know that the ATI cards we use are slower then nvidia's cards.



True...besides, you don't see me going to an nVidia news thread and bashing them...

No one likes a buzzkill!


----------



## erocker (May 22, 2008)

candle_86 said:


> oh yes there is, plus yall are like family and this is an intervention, i have to save yall from yourselves. IF you buy AMD products you will hate yourself for doing so, historiclly Nvidia has always been faster at the same price point



No one here needs saving, keep these types of comments to yourself.  I believe you have already been warned on this subject before.


----------



## Rebo&Zooty (May 22, 2008)

candle_86 said:


> The AA on the shaders is a stupid bad idea, MS doesnt even understand hardware thats the problem. Nvidia is not going to do DX10.1 because it requires shader based AA which is total junk and worthless. Sure the AA might look better but a 50% drop in FPS isnt worth it, ill take dedicated hardware AA any day. What MS needs to do is discuss these ideas not just sit around and think them up. Remember if MS cuts Nvidia out of DX totally OpenGL will make a massive comeback. MS has not choice but to do what Nvidia tells it to do for this reason alone. Sevral problems exist with shader AA if you can't see that im sorry. As for Innovation i beg to differ, what has ATI actully done, shader AA was the worst idea ive heard of. 5 groups of 64 shaders but only one unit can do complex shader math another bad idea. Thats why ATI cards preform like 64 shader cards most of the time, and if they are lucky 128 shader cards. GDDR5 is marketing hype, the latancy alone kills it, new ram types are never as good as the older ones on release. Look at the GDDR3 5700Ultra vs the regular 5700Ultra. Same preformace because of latancy. Go ahead give us 3000mhz ram with a 200ms reponse time, it wont be any better than 2000mhz ram with an 80ms reponse time, these are just random numbers but its the same reason people dont upgrade to DDR3. All hype from AMD and appsolutly nothing to even care about. This time they might have a single core solution that can tie the 8800Ultra.



Humm, maybe you need to check the Assassin's Creed reviews; seems shader-based AA isn't a bad idea if done natively by the game. The 9800GTX and 3870X2 were toe to toe, less than 1 FPS difference between them. Of course, you're a fanboi; wouldn't expect you to know that.

As to MS doing what another company tells it: wrong. MS could block OpenGL support if they wanted, and guess what, nobody could stop them. Everybody has to do what MS says, because the only other choice is to fall back into a niche market like Matrox has done.

As to your 5700 example, that doesn't mean shit; the 5700 was a piece of crap. It was the best of the FX line, but that's not saying much......especially when a 9550SE can outperform it, LOL.

This is DX10.1, 3870X2 vs 9800GTX under SP1 (DX10.1 is enabled with SP1):
> But after we installed Vista SP1, an interesting thing happened. The performance of AMD's video card increased, while NVIDIA's performance did not.* In fact, with SP1 installed there was less than a single frame per second difference on average between these two video cards.*



Funny: shader-based AA vs dedicated AA, and the performance difference is around 1 FPS.

So your "shader-based AA is a stupid idea" line is a load of fanboi bullshit (as expected from you).

The idea's fine if you're talking about native DX10/10.1 games, but today's games are mostly DX9 games with some DX10 shaders added (Crysis, for example).

As this shows, there is ZERO reason that shader-based AA needs to be any slower in native code; it's just slower on older games. Hence, as I said, they should have had a hardware AA unit for DX9 and older games and used shader-based AA for DX10.x games; problem would have been solved.


----------



## jaydeejohn (May 22, 2008)

You won't be seeing those huge latencies with this memory. I don't like to argue; just give it time and see what happens. The 4870 is going to be a killer card, and rumors have it at 25% over the 9800GTX. nVidia's solution, the G280, should be faster, but it'll draw too much energy to do an X2, thus allowing the 4870 X2 to compete with it at the very top among single-slot solutions. Hopefully we will see that MCM on the 4870 X2.


----------



## Thermopylae_480 (May 22, 2008)

Rebo&Zooty said:


> humm, maby u need to check the assassins creede reviews, seems shader based aa isnt a bad idea if done nativly by the game, the 9800gtx and 3870x2 where toe to toe less then 1fps diffrance between them, corse ur a fanboi, wouldnt expect you to know that.
> 
> as to ms doing what another company tells it, wrong, ms could block opengl support if they wanted, and guess what, nobody could stop them, everybody has to do what ms says, because the only other choice is to fall back into a niche market like matrox has done.
> 
> ...



Don't respond to trolls, especially after a moderator has already attempted to end the situation.  Such behavior only worsens the situation, and can get you  in trouble.

(DO NOT RESPOND)


----------



## Rebo&Zooty (May 22, 2008)

candle_86 said:


> oh yes there is, plus yall are like family and this is an intervention, i have to save yall from yourselves. IF you buy AMD products you will hate yourself for doing so, historiclly Nvidia has always been faster at the same price point



I own AMD, and I'm using an X1900 till my 8800GT is back from the shop (stock cooler gave out), and you don't see me crying or upset about being an AMD user. I have set up Core 2 systems for people; they are nice, but price for price I still prefer to get as much out of an AMD rig as I can. My new/current board's got a few years left before I need to replace it, plenty of CPUs to come in that time; I would guess 3-4 will pass through the board before I upgrade it, unless I get a really kickass deal on a DFI 790FX board (the high-end one, not the lower one).


----------



## HTC (May 22, 2008)

Rebo&Zooty said:


> the ideas fine, if your talking about native dx10/10.1 games, but todays games are mostly dx9 games with some dx10 shaders added(crysis for example)
> 
> as this shows there is ZERO reasion that shader based aa need to be any slower, its only slower in native code, its just slower on older games, hence as i said, they should have had a hardware AA unit for dx9 and older games and used shader based AA for dx10.x games, problem would have been solved.



This should be easy enough to prove / disprove when more dx10.x games are released: might take a while for that to happen, though 

EDIT

Apologies, moderator: it was in post # 99 when i started to write this reply!


----------



## jbunch07 (May 22, 2008)

Let's get this thread back on track!

No more arguing about ATI vs nVidia!
At least not here.


----------



## Rebo&Zooty (May 22, 2008)

Thermopylae_480 said:


> Don't respond to trolls, especially after a moderator has already attempted to end the situation.  Such behavior only worsens the situation, and can get you  in trouble.



Sorry, was a cross-post.
I started when he posted that originally; I spent a lot of time on my slow net (damn Comcast is bugging out again!!!!) finding those damn links/images.

Sorry for the cross-posts; I would delete it, but all that effort would go to waste.


----------



## btarunr (May 22, 2008)

jaydeejohn said:


> Imagine 512 connections/wires coming from the bus to everywhere it needs to go for the output. Thats alot of wires, and voltage control. With GDDR5, you have the ability to push the same or a lil more info faster than a 512 bus without all those wires, in this case, just 256. Also, GDDR5 "reads" the length of each connection, allowing for correct voltage thru the wire/line, this is important, so its more stable, keeping frequencies within proper thresholds, also elimanting costs of having to go the more exspensive way of doing it. Hope that helps



Well said. We must stop laying emphasis on bus width as long as faster memory makes up for it. Let's stop the "my 512-bit pwns your 256-bit" and look up the charts and the final bandwidth of the memory bus. 

Ignorant people even begin with their own terminology - "256-bit GPU", "mine's a 512-bit GPU". I've not seen anything more retarded. I mean, come on, xxx-bit is just the width of the memory bus.


----------



## jbunch07 (May 22, 2008)

btarunr said:


> Well said. We must stop laying empasis on bus-width as long as faster memory makes up. Let's stop (my 512bit pwns your 256bit), look up the charts and the final bandwidth of the memory bus.
> 
> Ignorant people even begin with their own terminology, "256bit GPU", "Mine's a 512bit GPU" I've not seen anything more retarded, I mean come on, xxx-bit is just the width of the memory bus.



Thank you!

It's about time someone finally said it!

Comparing memory bus widths always made me laugh.

256-bit GDDR5 should do very nice!


----------



## jaydeejohn (May 22, 2008)

Actually, having throughput is only good if it delivers. It's like putting 1 gig of memory on an X1600: sure, it's there, but can the card really use it?


----------



## Rebo&Zooty (May 22, 2008)

HTC said:


> This should be easy enough to prove / disprove when more dx10.x games are released: might take a while for that to happen, though



Yeah, see, from what I've been told by a couple of people I know who work for AMD/ATI and Intel, ATI honestly expected Vista to take off and replace XP overnight. If that had happened, DX10 would have become the norm and the R600/670 design would have been GREAT; it would have looked far better than it does. BUT because Vista fell on its face (d'oh!! *Homer Simpson sound*), ATI's design was.....well, less than optimal.

I have sent ATI enough bitching emails in the past about bugs that I know how their support is; if you report it directly, they tend to try and fix it.

nVidia support: you get a form letter at best unless you know somebody on the inside; then they get the runaround and you get the runaround from them because, honestly, they can't get any clear answers on a lot of long-standing bugs.

A few examples:

Windows Server 2003 and XP x64 (same OS core) have a lovely bug with nVidia drivers: if you have ever installed another company's video drivers, you have a 99% chance that once you install the nVidia drivers the system will BSOD every time you try and use the card above a 2D desktop level. It's been a KNOWN issue since x64 came out (and from some reports it also affects 32-bit Server 2003), and nVidia has had YEARS to fix it but hasn't bothered; their fix is noted as "reinstall windows"..........If I had reported that bug to ATI I would have had a beta fix in a couple of days (I know because I reported a bug with some third-party apps that caused them to lock up, and got fast action).

nVidia for over a year had a bug in their YV12 video rendering; the ffdshow wiki explains it and documents how long it's been a problem. They fixed it in some beta drivers, but then broke it again in the full releases........

ATI widescreen scaling: in some older games the image is stretched because the game doesn't support widescreen resolutions. There's a fix for this if you know where to look in the drivers, but it's not automatic, so it causes a lot of people trouble.


I've got a large list of bitches about both companies.

ATI: AGP card support's been spotty with the X1K and up cards; no excuse here other than the fact that they just need more people to email them and complain about it (the squeaky wheel gets the oil, as Granny used to say).


----------



## btarunr (May 22, 2008)

jaydeejohn said:


> Actually, having thruput is only good if it delivers. Its like putting 1 gig of memory on a x1600. Sure its there, but can the card relly use it?



This is sort of an arms race between the USA and USSR. Even if a GPU doesn't need all the bandwidth, it's in place; an HD3650 will never need PCI-E 2.0 x16 bandwidth. But when it comes to the RV770 and its memory subsystem, the difference comes to the surface when the RV770Pro is compared to its own GDDR3 variant. The fact that there _is_ a difference shows the RV770 is able to make use of all that bandwidth and is efficient with it.


----------



## jaydeejohn (May 22, 2008)

Here's an interesting link about M$'s decision on DX10 implementation. If you haven't already read it, do so; it gives clarity: http://blogs.msdn.com/ptaylor/archive/2007/03/03/optimized-for-vista-does-not-mean-dx10.aspx In the article, it says why they dumped DX10 - the real DX10, which we now know as DX10.1.


----------



## jbunch07 (May 22, 2008)

btarunr said:


> This is sort of an arms race between USA and USSR. Even if a GPU doesn't need all the bandwidth, it's in place, a HD3650 will never need PCI-E 2.0 x16 bandwidth, but when it comes to RV770 and memory subsystem, the difference comes to surface when RV770Pro is compared to its own GDDR3 variant. The fact that there _is_ a difference shows the RV770 is able to make use of all that bandwidth and is efficient with it.



I thought the bandwidth needed had more to do with the game or what you're doing with the card...i.e., some games need more bandwidth than others...but I know what you mean.

Correct me if I'm wrong.


----------



## mandelore (May 22, 2008)

People like candle really should be kept out of these types of threads; I'm sure he just comes a-stompin' to troll as usual....

Wait for the card, then smack it if you feel necessary; else just STFU, let the facts roll from the horse's mouth so to speak, and wait for genuine reviews.


----------



## btarunr (May 22, 2008)

jbunch07 said:


> I thought the bandwidth needed had more to do with the game or what you're doing with the card, i.e. some games need more bandwidth than others... but I know what you mean.
> 
> Correct me if I'm wrong.



Yes: the higher the resolution (of the video/game), the larger the frames, so more data is transferred, and extra bandwidth helps there.
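btarunr's point, that higher resolution means bigger frames and therefore more data per second, can be put in rough numbers. A back-of-envelope sketch (the 4-bytes-per-pixel and overdraw figures are illustrative assumptions, not numbers from the thread; real GPUs also move textures, geometry, and intermediate buffers, so actual demand is far higher):

```python
# Rough estimate of raw color-buffer write traffic at a given resolution.
# Only shows the scaling trend, not a real card's total bandwidth usage.

def framebuffer_gbps(width, height, fps, bytes_per_pixel=4, overdraw=3):
    """Approximate GB/s of color-buffer writes, assuming each pixel is
    shaded ~3 times per frame on average (the overdraw factor)."""
    bytes_per_frame = width * height * bytes_per_pixel * overdraw
    return bytes_per_frame * fps / 1e9

for w, h in [(1280, 1024), (1600, 1200), (2560, 1600)]:
    print(f"{w}x{h} @ 60 fps: {framebuffer_gbps(w, h, 60):.2f} GB/s")
```

The traffic scales linearly with pixel count, which is why stepping up the resolution (or the AA level, which multiplies samples per pixel) is what exposes bandwidth limits.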


----------



## DarkMatter (May 22, 2008)

Rebo&Zooty said:


> as to the X1900, it STOMPED the 7900/7950, cards that ON PAPER should have been stronger; 24 pipes vs 16, for example, was what people were using to "prove" that the Nvidia cards WOULD kill the X1900 range of cards.





> funny, since the X1900/X1950 XT/XTX cards had 16 pipes/ROPs vs the 7900 having 24, and the 7900 got pwned........



I could agree with many of your points in this thread, but I can't take you seriously, just because of these:

a- BOTH had 16 ROPs and 8 vertex shaders.
b- It's true that NV had 24 TMUs while ATI had 16, though they were different; ATI's were more complex.
c- AND the X1900 had 48 pixel shaders vs 24 on the 7900.

Back then nothing suggested that TMUs could be the bottleneck; even today I have my reservations, but I generally accept TMUs as R600/670's weakness. The ATI cards (X1900) were a LOT BETTER on paper than the Nvidia cards, and that resulted in a performance win in practice. BUT it didn't stomp the 7900, as it was never more than 10% faster (apart from a couple of exceptions) and was usually within a 5% margin. If the X1900 STOMPED the 7900, I don't know how you'd describe G80/92 vs. R600/RV670...

Don't bring in the price argument, please, since the 7900 GTX was a lot cheaper than the X1900 XTX. It actually traded blows with the XT, both in price and performance. The only card that stood out in its price segment was the X1950 Pro, when G80 was already out, but it was still very expensive.

I don't have anything against your opinions, but try not to use false data to support your arguments. I really think it's just that your memory failed you, but be careful next time.

EDIT: Hmm, I just noticed two things in the Assassin's Creed graph you posted.

1- No anisotropic filtering is used on the ATI card.
2- It's the X2 that is being compared to the 9800 GTX; I first thought it was the HD3870.

All in all the X2 should be faster, since it's more expensive and no AF is applied, but it's not.


----------



## Rebo&Zooty (May 22, 2008)

MSRP is similar on both cards; Nvidia just recently price-dropped them AFAIK.

AF was disabled because it's bugged in that game; either a driver patch or a game patch would fix that, but the makers are patching out DX10.1 support for now, probably because Nvidia doesn't want anybody competing with them.

This wasn't to show price, card vs card; it was to show that DX10.1 shader-based AA has less impact than DX9 shader-based AA, since the R600/670 were made for DX10 shaders, not DX9 or DX9+DX10.

Different designs: the 8800 is really a DX9 card with Shader Model 4.0 tacked on, while the 2900/3800 are native Shader Model 4 cards with DX9 support tacked on via drivers. Very different concepts behind each. Since Vista tanked, the R600/670 haven't had any true DX10/10.1 games to show off their design, and as soon as one came out, it somehow ended up having DX10.1 patched out once ATI did well in it.


----------



## Valdez (May 22, 2008)

DarkMatter said:


> Back then nothing suggested that TMUs could be the bottleneck; even today I have my reservations, but I generally accept TMUs as R600/670's weakness. The ATI cards (X1900) were a LOT BETTER on paper than the Nvidia cards, and that resulted in a performance win in practice. BUT it didn't stomp the 7900, as it was never more than 10% faster (apart from a couple of exceptions) and was usually within a 5% margin. If the X1900 STOMPED the 7900, I don't know how you'd describe G80/92 vs. R600/RV670...



The RV770 has 32 TMUs instead of the 16 in the RV670, if the rumours are right.

Today the X1950 XTX is 36% faster than the 7900 GTX at 1280x1024 without AA, and 79% faster with 4x AA. (It also beats the 7950 GX2 by 4% without AA and by 35% with 4x AA.)

http://www.computerbase.de/artikel/...on_hd_3870_x2/24/#abschnitt_performancerating


----------



## DarkMatter (May 22, 2008)

Rebo&Zooty said:


> MSRP is similar on both cards; Nvidia just recently price-dropped them AFAIK.
> 
> AF was disabled because it's bugged in that game; either a driver patch or a game patch would fix that, but the makers are patching out DX10.1 support for now, probably because Nvidia doesn't want anybody competing with them.
> 
> ...



MSRP is not similar; the GTX has been $50 cheaper since day one. And that's the case on Newegg, where the GTX is around $50 cheaper: average GTX $300, average X2 $350-375; cheapest GTX $289, cheapest X2 $339. The average is not calculated but approximated, and I didn't take the two highest prices for each card into the average; if I had, the X2 would suffer a lot, so my averages are actually very favorable to the X2. Here in Spain, the GTX is well below 250 euro, while the X2 is well above 300.

Anyway, my point was that the graph didn't show shader AA to be superior; the X2 should be a lot faster in those circumstances, but it's not. It only shows that the performance hit under DX10.1 is not as pronounced as under DX10 when AA is done in shaders, never that it's faster than with dedicated hardware. Also, according to THAT GAME, DX10.1 AA is faster than DX10 AA on ATI cards, but I would take that with a grain of salt. The lighting in DX10.1 was way inferior to DX10 in some places, because something was missing. I saw it somewhere and had my doubts, until one of my friends confirmed it.


----------



## Rebo&Zooty (May 22, 2008)

DarkMatter said:


> I could agree with many of your points in this thread, but I can't take you seriously, just because of these:
> 
> a- BOTH had 16 ROPS and 8 vertex shaders.
> b- It's true that NV had 24 TMU while Ati had 16, though they were different. Ati ones were more complex.
> ...



http://www.techpowerup.com/reviews/PointOfView/Geforce7900GTX/

according to the TPU review, for the 7800/7900 it's:

- Number of pixel shader processors: 24
- Number of pixel pipes: 24
- Number of texturing units: 24

So you're wrong: the 7800/7900-based cards are 24 ROPs/pipes with 1 shader unit per pipe, whereas the X19*0 XT/XTX have 16 pipes with 3 shaders per pipe (48 total).

You try to discredit me and then use false facts......

2nd, the X1900 XT and XTX were the same card; I have yet to meet an X1900 XT that wouldn't clock to XTX speeds and beyond, and it was cake to flash them. In fact that's what my backup card is: a CHEAP X1900 XT flashed with the Toxic XTX BIOS.
http://www.trustedreviews.com/graph...phire-Liquid-Cooled-Radeon-X1900-XTX-TOXIC/p4

Check that out; it seems the GX2 is faster than the XTX, but only in a few cases. Overall they trade blows, yet the GX2 was a lot more expensive and had a VERY short life: it went EOL pretty fast and never did get quad-SLI updates.......
In the end it was a bad buy, whereas my X1900 XT/XTX card was a great buy. I got it for less than 1/3 the price of an 8800 GTS, and it's still able to play current games, not maxed out by any means, but still better than the 7900/50 do.


----------



## DarkMatter (May 22, 2008)

Valdez said:


> The RV770 has 32 TMUs instead of the 16 in the RV670, if the rumours are right.
> 
> Today the X1950 XTX is 36% faster than the 7900 GTX at 1280x1024 without AA, and 79% faster with 4x AA. (It also beats the 7950 GX2 by 4% without AA and by 35% with 4x AA) *in 3DMark 06*
> 
> http://www.computerbase.de/artikel/...on_hd_3870_x2/24/#abschnitt_performancerating



Corrected that for you. C'mon, we all know what happens between 3DMark 06 and Nvidia, and what happens in games. I don't want to hear the conspiracy theory again unless some actual proof is shown, please; it's an old, tired argument. Over time I have come to the conclusion that ATI builds their cards for benchmarking, while Nvidia builds theirs for games. [H] had a really nice article about benchmarks vs. games. The difference was brutal, and they weren't talking about 3DMark vs. games; it was benchmarks of a game vs. actual gameplay in the same game. They even demoed their own benchmarks and the result was the same.


----------



## DarkMatter (May 22, 2008)

Rebo&Zooty said:


> http://www.techpowerup.com/reviews/PointOfView/Geforce7900GTX/
> 
> according to the TPU review, for the 7800/7900 it's:
> ...



The 7900 GTX has 16 ROPs. Period.
Speaking of PIPES when the cards have different numbers of units at each stage is silly. What's the pipe count? The number of ROPs, the number of TMUs, or the number of pixel shaders? Silly.


----------



## Valdez (May 22, 2008)

DarkMatter said:


> Corrected that for you. C'mon, we all know what happens between 3DMark 06 and Nvidia, and what happens in games. I don't want to hear the conspiracy theory again unless some actual proof is shown, please; it's an old, tired argument. Over time I have come to the conclusion that ATI builds their cards for benchmarking, while Nvidia builds theirs for games. [H] had a really nice article about benchmarks vs. games. The difference was brutal, and they weren't talking about 3DMark vs. games; it was benchmarks of a game vs. actual gameplay in the same game. They even demoed their own benchmarks and the result was the same.



I don't know what you're talking about. The link shows a benchmark with lots of games, and the page I linked shows how the cards perform relative to each other in the average of all tests.


----------



## DarkMatter (May 22, 2008)

Valdez said:


> I don't know what you're talking about. The link shows a benchmark with lots of games, and the page I linked shows how the cards perform relative to each other in the average of all tests.



Yup, OK, sorry. 

I used Google to translate it to Spanish and it didn't do a good job. I understood it as 3DMark results, not to mention that the Next/Previous page links were nowhere to be found... OMG, I love you, Google Translator... 
Translation to English went better.

In the end you're right. The X1900 is A LOT faster in newer games, and I knew it would happen; heavier use of shaders helping the card with more pixel shaders is no surprise. If you knew me, you would know I have always said the X1900 was a lot faster than the 7900, but in no way did it STOMP it in games. NOW it does. Anyway, it's faster, but almost always at unplayable framerates. Don't get me wrong, it's a lot faster, period; it just took too long for this to happen, IMO. 
Also, IMO, ATI should make cards for today instead of always trying to be the best in the far future (that's 1 year in this industry), when better cards will be around and ultimately no one will care about the old one. That's my opinion anyway. I want ATI back, and I think that's what they have to do. Until then they are making me buy Nvidia, since it's the better value at the moment. The HD4000 and GTX 200 series are not going to change this from what I've heard; it's a shame.

EDIT: I forgot to answer this before even though I wanted to do so:



Valdez said:


> The RV770 has 32 TMUs instead of the 16 in the RV670, if the rumours are right.



It seems they are right. BUT they are doubling shader power too, so it doesn't look like texture power was as big of a problem if they have maintained the balance between the two. Same with Nvidia's next cards; they have maintained the balance between SPs and TMUs AFAIK. 
It's something that saddens me, since I really wanted to know where the bottleneck more commonly is: in the SPs or in the TMUs? It definitely isn't in the ROPs until you reach high resolutions and AA levels, and it sure as hell isn't memory bandwidth. That doesn't mean memory bandwidth couldn't be more important in the future; indeed, if GPU physics finally becomes widespread, and I think that's inevitable, we will need that bandwidth. But for graphics alone, bandwidth is the one thing with the most spare capacity nowadays. GDDR5 clocks or a 512-bit interface are NOT needed for the kind of power the next cards will have if it's only used for rendering. They are more e-peen than anything, IMO.
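The bandwidth figures being debated throughout the thread all come from one simple formula: effective transfer rate times bus width. A minimal sketch, with illustrative clock figures (assumptions for the example, not official specs of any card):

```python
# Theoretical peak memory bandwidth = transfers per second x bytes per transfer.
# GDDR5 moves four bits per pin per clock vs two for GDDR3, so at the same
# command clock it can deliver roughly double the effective data rate.

def peak_bandwidth_gbps(effective_gtps, bus_width_bits):
    """Peak bandwidth in GB/s from effective data rate (GT/s) and bus width."""
    return effective_gtps * (bus_width_bits / 8)

# Example: GDDR3 at 2.0 GT/s effective vs GDDR5 at 3.6 GT/s effective,
# both on a 256-bit bus (illustrative numbers only):
print(peak_bandwidth_gbps(2.0, 256))  # 64.0 GB/s
print(peak_bandwidth_gbps(3.6, 256))  # 115.2 GB/s
```

This is why the article in the head post says GDDR5 enables "more bandwidth over a narrower memory interface": raising the data rate lets a 256-bit bus match what older memory would need a much wider, more expensive bus to deliver.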


----------



## vexen (May 22, 2008)

Kirby123 said:


> Prices don't matter, it's all down to your setup. The 9800 GX2 and the X2 3 series are basically equal; I recommend the X2 over the 9800 GX2. Right now the best setup out there is the ATI CrossFireX quad-GPU one. My friend from CSS got a 3DMark 06 score of 45k.... one main reason is the Quad Extreme he has at 4.8 GHz with 8GB of RAM, but his friend got 9800 GX2 SLI and his was only 40k. CrossFireX is the best thing out right now, from what I have seen at least.


Considering the current world record is 32k, and 8GB of RAM doesn't increase a 3DMark score over 2GB, what are you talking about?


----------



## Valdez (May 22, 2008)

DarkMatter said:


> Yup, OK, sorry.
> 
> I used Google to translate it to Spanish and it didn't do a good job. I understood it as 3DMark results, not to mention that the Next/Previous page links were nowhere to be found... OMG, I love you, Google Translator...
> Translation to English went better.
> ...




I think the TMUs were a bottleneck in the RV670; the memory bandwidth is good for high resolutions with high AA. More shader power is necessary as always, especially if GPU physics come into the picture.
I'm not certain whether the ROPs are a bottleneck...


----------



## DarkMatter (May 22, 2008)

Valdez said:


> I think the TMUs were a bottleneck in the RV670; the memory bandwidth is good for high resolutions with high AA. More shader power is necessary as always, especially if GPU physics come into the picture.
> I'm not certain whether the ROPs are a bottleneck...



My point is that if the TMUs were the bottleneck, they would have done 2x shader power AND, I dunno, 3x texturing power, not 2x-2x. If textures were the bottleneck, that flaw has been carried over to the new series; since I don't believe that's the case, as I don't think they are that stupid, my only conclusion is that it wasn't a bottleneck. I base that conclusion on other factors too, like the efficiency with which they are able to use the VLIW + SIMD SP arrays, for example. I have argued since day one that R600 was limited by its shader power; it's only lately, after hearing most people complain about its 16 TMUs, that I had to "admit" or adopt the idea that the bottleneck is in the TMUs.


----------



## imperialreign (May 22, 2008)

imperialreign said:


> I'll try and dig up the review I read a while back, if I can find it again - it's kinda hard to find legit reviews like that seeing as how not many sites run back through testing when new driver releases are put out . . .
> 
> we can kind of see it, though, in our e-peen "post your gameX benchmark score here" threads
> 
> but, again, I'll try and dig up what I remember seeing, and I'll post it back up here in this thread once I find it . . .





As to this last post of mine - I can't find the review, so my argument has no verifiable standing.

Most ATI users here could probably vouch for how quickly we see performance gains in new-to-market games, but without strong evidence, I have to call my own argument flaky.





Anyhow, back to the discussion at hand - I'm fairly certain, by the way it looks, that the HD4000 series will be the true contenders we haven't seen from ATI since the 1900 series; sure, nVidia will be countering with new hardware a couple of months after the HD4000 release, but if it's another "panic countermeasure" like nVidia tends to do, we're not going to see any "epic" advancements to their hardware -

The way it looks right now, IMO, is that ATI will be able to reclaim the performance crown with this new series, and once nVidia releases their upcoming hardware, both companies are going to be toe-to-toe. Price will be the biggest determining factor as to who wins the fight, not performance.

Either way, though, we're in for a heated year between red and green, and I'm really looking forward to it again.


----------



## HTC (May 23, 2008)

Has anyone seen this yet?



> We finally have a screenshot that confirms most of the things that we said about RV770PRO, here. Our sources suggested that ATI is talking about a Radeon *4850 512MB* name for this card.
> 
> The card at its top frequency works at 900MHz, as we wrote yesterday, and the GDDR3 memory works at 1000MHz. ATI uses PowerPlay, a power-saving technique that can drop the GPU clock to 500MHz or 625MHz, depending on the power state.
> 
> ...



Source: Fudzilla


----------



## DarkMatter (May 23, 2008)

imperialreign said:


> As to this last post of mine - I can't find the review, so my argument has no verifiable standing.
> 
> Most ATI users here could probably vouch for how quickly we see performance gains in new-to-market games, but without strong evidence, I have to call my own argument flaky.
> 
> ...



Man, you are so misinformed. Nvidia will launch its counterpart *2 days* later than ATI's HD4850 and *5 days* before the HD4870, according to widespread news:

16th June - HD4850
18th June - GTX 260 and 280
And a week after the HD4850 comes the HD4870, with availability in the first half of July.

And there's not going to be any "panic countermeasure"; there never was one, IMO. The 8800 GT's low availability was a first for Nvidia, just as it was the first time in my recent memory that ATI delivered at launch instead of doing a paper launch. ATI does good hardware, but when it comes to delivering the actual cards...


----------



## imperialreign (May 23, 2008)

DarkMatter said:


> Man, you are so misinformed. Nvidia will launch its counterpart *2 days* later than ATI's HD4850 and *5 days* before the HD4870, according to widespread news:
> 
> 16th June - HD4850
> 18th June - GTX 260 and 280
> ...



sorry, I haven't heard any confirmed release dates yet - last I had heard was initial launch of HD4000 series in either June or July, and nVidia claiming late summer . . .

but, going by how things have rolled in the past, when ATI states a release date and starts releasing specs, they're usually on the money as far as when they intend to begin sales; nVidia tends to be questionable when all they state is a new hardware release in a "round about this time of year" kind of way.

Although, I can't say for certain what type of counter nVidia's new hardware will be - if it's just a quick fabrication change, then they've probably been capable of the change for quite a while and were just waiting on ATI's new series announcement (it wouldn't surprise me; Intel does the same thing with AMD - they've been sandbagging their processors for quite a while now). ATI, on the other hand, has been designing the R700 for quite some time, and rumors have been floating around about its design for a long while; I'm sure nVidia knew it was coming. So I guess you're right that nVidia's new series isn't a "panic countermeasure" like the 9800GX2 was, but I'd argue it's more of a "cock block" instead.



> Ati does good hardware, but when it comes to delivering the actual cards



huh? I've never heard of any issues with ATI card supplies not meeting customer demand at release . . . nVidia, on the other hand, has a hard time preparing for demand. Sure, ATI might not have had the kick over nVidia these last couple of years with new hardware, but many customers have gone with ATI's newest cards because nVidia can't keep up the supply at first.


----------



## eidairaman1 (May 23, 2008)

You notice how Nvidia had to release so many GS cards because the G92s couldn't get enough non-defective yields at the time for the GT and GTS parts; it took many steppings later to get to the 9800GTX and GX2, and since the GS didn't sell as well as the higher cards, Nvidia rebadged them.



DarkMatter said:


> Man, you are so misinformed. Nvidia will launch its counterpart *2 days* later than ATI's HD4850 and *5 days* before the HD4870, according to widespread news:
> 
> 16th June - HD4850
> 18th June - GTX 260 and 280
> ...


----------



## Rebo&Zooty (May 24, 2008)

imperialreign said:


> sorry, I haven't heard any confirmed release dates yet - last I had heard was initial launch of HD4000 series in either June or July, and nVidia claiming late summer . . .
> 
> but, going by how things have rolled in the past, when ATI states a release date and starts releasing specs, they're usually on the money as far as when they intend to begin sales; nVidia tends to be questionable when all they state is a new hardware release in a "round about this time of year" kind of way.
> 
> ...



haha, very true.

Now as to short supply: the X800 XT PE was the only card I saw in "short supply" from ATI, and around here the 6800 Ultra was in the same state. You could get the GT but not the Ultra, and you could get the X800 Pro or Pro VIVO (the Pro VIVOs at the time ALL flashed to XT PE specs; some needed slightly lower clocks, but most direct-flashed no problem).

And as I said, in my area (near Portland, Oregon) we had shortages of Nvidia cards from the GeForce 1 up until the G80 cards hit. The mid and lower versions like the GT/GTS were available, but if it was the top version, you could end up waiting weeks in line behind others who had already signed up to get one when they came in.

The x1800 and 1900 cards, on the other hand, had NO supply issues here when they hit, despite selling VERY VERY well (the x1800 GTO sold like hotcakes at a county fair), really.

And ATI has put off release dates before, but if they say "it WILL be out by this time", you could get it through channels within days of that. Nvidia is never clear on a date till they put the card out; it's a way to try and avoid paper launches.....

I know a lot of people, including myself, lost a lot of the respect they had for Tom's Hardware due to some extremely biased reviewers Tom hired once upon a time (Tom's personal reviews always came off to me as being very low on bias and at times very blunt about the flaws of the reviewed items), but one thing you've got to respect him for: if you look through his archives, he once stopped reviewing products that were "paper launched", and he had a front-page statement to Nvidia, ATI, Intel, AMD and other hardware makers flat out telling them he wouldn't review stuff that wouldn't be readily available on the open market at launch. And he stuck to that, despite it pissing off the companies that wanted their next great paper-launch item reviewed to get the hype up.

I do remember that some items he mentioned when interviewed about that were the P4 EE chips, which many times you COULDN'T get ahold of after launch because Intel kept them in such extremely short supply.

The FX chips too, because it took a month in some cases for the chips to actually become readily available.

And he slammed Nvidia and ATI for the same crap: putting out their ultra-high-end products without having a ready supply. The XT PE and Ultra series cards were the primary targets for this, since they were always in short supply after launch.


----------



## DarkMatter (May 24, 2008)

I don't know about over there, but here it was the same from the 9700 until the HD3870; that last one was in better supply. As I said, I can't speak about the US personally, but I've been reading tech sites since the beginning, and according to the sites the X800, X1800 and X1900 were in short supply AND later than when they were first said to launch (paper launches). The situation was so extreme that some sites (Tom's Hardware being the most popular) stopped talking about those cards for months until the actual card was on the streets. I know that's what happened and nothing will change my mind; I just have to go to the news archives of any tech site to refresh my memory.


----------



## Rebo&Zooty (May 24, 2008)

DarkMatter said:


> I don't know about over there, but here it was the same from the 9700 until the HD3870; that last one was in better supply. As I said, I can't speak about the US personally, but I've been reading tech sites since the beginning, and according to the sites the X800, X1800 and X1900 were in short supply AND later than when they were first said to launch (paper launches). The situation was so extreme that some sites (Tom's Hardware being the most popular) stopped talking about those cards for months until the actual card was on the streets. I know that's what happened and nothing will change my mind; I just have to go to the news archives of any tech site to refresh my memory.



The x1800 was late to market, yes, but it was not a paper launch; same for the 1900. At least here we had them in ready supply; even locally, at places like CompUSA, you could get them just after the official launch.

And check your facts about Tom's Hardware: he reamed Nvidia and ATI as well as Intel, and to a far lesser extent AMD (AMD had one FX chip that was in VERY short supply, as well as a low-watt chip line that was hard to get ahold of).

Example: the 6800 Ultra = paper launched, and it took MONTHS to actually become readily available......

The 2900 was also pushed back from its original launch date (drastically), but at least around here you could get them in stores after the official launch.

The x800 Pro/Pro VIVO were NOT paper launched; they were very easy to come by, as was the 6800 GT (same class of card), and the 9700/9800 were not hard to get. I knew plenty of people who got them when they hit; at the time I didn't have the $ to get one :/

Your claim that Nvidia never paper launches is countered by a simple Google search.

Try it; you will find that if you combine the proper terms, all the companies are pretty much equally guilty of paper launching stuff, to differing market impact. AMD's early low-watt chips went through paper launches for all intents and purposes; Intel, well, they paper launched every time AMD put out a new chip that directly threatened their few leading points with the P4; and Nvidia and ATI both have a LONG history of launching stuff with VERY low supply available OR pushing launches back for untold reasons.

The latest ones I have seen personally were the x800 XT PE/6800 Ultra and the 8800 GT. The 8800 GT was available, but in VERY short supply; it was launched in a panic to keep ATI's 3800 cards from taking market share, and the rushed launch also showed in the crappy, sub-par cooling the cards were given....... the 8800 GT lacks polish......

And as I said before, the XT PE was hard/impossible to get, but the Pro VIVO cards from the time could flash to XT PE specs; I only ever had 4 Pro VIVOs that didn't directly flash to XT PE, and those were all Sapphire cards (Sapphire, in my experience, tends to be the worst choice for modding or indeed component quality).

Meh, I can't find the Tom's Hardware thing; I think it's archived. It was years back, around the time the XT PE/6800 Ultra were out.


----------



## DarkMatter (May 24, 2008)

Rebo&Zooty said:


> The x1800 was late to market, yes, but it was not a paper launch; same for the 1900. At least here we had them in ready supply; even locally, at places like CompUSA, you could get them just after the official launch.
> 
> And check your facts about Tom's Hardware: he reamed Nvidia and ATI as well as Intel, and to a far lesser extent AMD (AMD had one FX chip that was in VERY short supply, as well as a low-watt chip line that was hard to get ahold of).
> 
> ...



Meh, I don't want to argue. I know the facts; I read 20+ tech sites every day and I know what I'm talking about.

Only two things:
1. The 6800 Ultra was out the same day, with tons of cards, but they were launched all around the world, so supply was short. ATI, on the other hand, loves to supply enough to the US and doesn't care much about the rest of the world, another thing that counts for a lot, IMO. You could find a 6800 Ultra the same day they launched it, at least here in Spain. I know because I worked in a computer store back then, and we had the card 3 days before launch but couldn't sell it till THE DAY. The same happened with the 7800.

EDIT: Now, if you are talking about the Ultra 512, then you're right. That was a paper launch; from my experience, indeed, I could say it was a fairy launch, as I never had the chance to see one in the store.
2. The X800 XT was a lot later than they first said, and the x1800 and x1900 too. Again, I know it first hand. A paper launch is not only not having the card in supply at launch; it is also announcing it and constantly delaying the launch. ATI has done this a lot of times. Indeed they are going to do it with the HD4870: they are going to "launch" it around 22 June, with availability in the first half of July. That's also a paper launch to me.


----------



## eidairaman1 (May 25, 2008)

Instead of buying a card at launch, I usually wait a few months before grabbing one; that's what I did with the 9700 and the X1950, because of stepping improvements.


----------

