Wednesday, May 21st 2008
AMD Confirms GDDR5 for ATI Radeon 4 Series Video Cards
AMD today announced the first commercial implementation of Graphics Double Data Rate, version 5 (GDDR5) memory in its forthcoming next generation of ATI Radeon graphics card products. The high-speed, high-bandwidth GDDR5 technology is expected to become the new memory standard in the industry, and its performance and bandwidth are a key enabler of The Ultimate Visual Experience, unlocking new GPU capabilities. AMD is working with a number of leading memory providers, including Samsung, Hynix and Qimonda, to bring GDDR5 to market.
Today's GPU performance is limited by the rate at which data can be moved on and off the graphics chip, which in turn is limited by the memory interface width and die size. The higher data rates supported by GDDR5 (up to 5x that of GDDR3 and 4x that of GDDR4) enable more bandwidth over a narrower memory interface, which can translate into superior performance delivered from smaller, more cost-effective chips. AMD's senior engineers worked closely with the industry standards body JEDEC in developing the new memory technology and defining the GDDR5 specification.
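As a rough illustration of that trade-off, here is a minimal sketch of the bandwidth arithmetic; the per-pin data rates and bus widths below are illustrative assumptions, not official AMD or JEDEC figures:

```python
# Peak memory bandwidth (GB/s) = data rate per pin (Gbit/s) x interface width (bits) / 8.
def peak_bandwidth_gb_s(data_rate_gbps_per_pin: float, bus_width_bits: int) -> float:
    """Theoretical peak bandwidth in GB/s for a given per-pin data rate and bus width."""
    return data_rate_gbps_per_pin * bus_width_bits / 8

# Hypothetical numbers for illustration only (not official product specs):
# slower GDDR3 on a wide 512-bit bus vs. faster GDDR5 on a narrower 256-bit bus.
gddr3_wide   = peak_bandwidth_gb_s(2.0, 512)   # ~2 Gbit/s per pin, 512-bit -> 128 GB/s
gddr5_narrow = peak_bandwidth_gb_s(4.0, 256)   # ~4 Gbit/s per pin, 256-bit -> 128 GB/s

print(f"GDDR3 @ 512-bit: {gddr3_wide:.0f} GB/s | GDDR5 @ 256-bit: {gddr5_narrow:.0f} GB/s")
```

The narrower GDDR5 interface reaches the same peak figure with half the pins, which is where the smaller, cheaper chips come from.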
"The days of monolithic mega-chips are gone. Being first to market with GDDR in our next-generation architecture, AMD is able to deliver incredible performance using more cost-effective GPUs," said Rick Bergman, Senior Vice President and General Manager, Graphics Product Group, AMD. "AMD believes that GDDR5 is the optimal way to drive performance gains while being mindful of power consumption. We're excited about the potential GDDR5 brings to the table for innovative game development and even more exciting game play."
The introduction of GDDR5-based GPU offerings continues AMD's tradition of technology leadership in graphics. Most recently, AMD has been first to bring a unified shader architecture to market, first to support Microsoft DirectX 10.1 gaming, first to move to smaller process nodes such as 55 nm, first with integrated HDMI with audio, and first with double-precision floating-point calculation support.
AMD expects that PC graphics will benefit from the increase in memory bandwidth across a variety of intensive applications. PC gamers will have the potential to play at high resolutions and image quality settings with superb overall gaming performance, and PC applications will have the potential to benefit from fast load times, superior responsiveness and better multi-tasking.
"Qimonda has worked closely with AMD to ensure that GDDR5 is available in volume to best support AMD's next-generation graphics products," said Thomas Seifert, Chief Operating Officer of Qimonda AG. "Qimonda's ability to quickly ramp production is a further milestone in our successful GDDR5 roadmap and underlines our predominant position as innovator and leader in the graphics DRAM market."
GDDR5 for Stream Processing
In addition to the potential for improved gaming and PC application performance, GDDR5 also holds a number of benefits for stream processing, where GPUs are applied to address complex, massively parallel calculations. Such calculations are prevalent in high-performance computing, financial and academic segments among others. AMD expects that the increased bandwidth of GDDR5 will greatly benefit certain classes of stream computations.
New error detection mechanisms in GDDR5 can also help increase the accuracy of calculations by identifying errors and re-issuing commands to obtain valid data. This level of reliability is not available with other GDDR-based memory solutions today.
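As a conceptual illustration of that detect-and-retry idea (a minimal sketch only, not the actual GDDR5 link protocol), here a CRC check on each simulated transfer triggers a re-issue whenever the data arrives corrupted:

```python
import random
import zlib

def transmit(payload: bytes, error_rate: float) -> tuple[bytes, int]:
    """Simulate one burst transfer: the CRC is computed on the original data,
    then the payload may get corrupted 'on the wire'."""
    crc = zlib.crc32(payload)
    if random.random() < error_rate:
        damaged = bytearray(payload)
        damaged[0] ^= 0x01            # flip one bit to model a transmission error
        return bytes(damaged), crc
    return payload, crc

def read_with_retry(payload: bytes, error_rate: float = 0.3, max_retries: int = 5) -> bytes:
    for _ in range(max_retries):
        data, crc = transmit(payload, error_rate)
        if zlib.crc32(data) == crc:   # CRC matches: the data is valid
            return data
        # CRC mismatch: re-issue the command, as a memory controller would
    raise IOError("transfer still failing after retries")

print(read_with_retry(b"stream kernel results"))
```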
Source: AMD
135 Comments on AMD Confirms GDDR5 for ATI Radeon 4 Series Video Cards
First, the GX2 vs. the X1950 X2: the GX2 loses not just in performance but in support. The GX2 is trash; NVIDIA made it to keep the top numbers in a few games until the 8800 came out, that's it, and then they fully dumped its support. Sure, the drivers work, but quad SLI? Even the SLI performance of a GX2 vs. true SLI was worse, which is sad since it's basically two cards talking directly.
As to the X1900, it STOMPED the 7900/7950, cards that ON PAPER should have been stronger; 24 pipes vs. 16, for example, was what people were using to "prove" that the NVIDIA cards WOULD kill the X1900 range.
I would make another massively long post, but you would just ignore it like all fanboys do, or resort to insults.
Funny, since the X1900/X1950 XT/XTX cards had 16 pipes/ROPs vs. the 7900's 24, and the 7900 still got pwned...
Meh, I'm sick of the "ATI sucks because *insert bullshit FUD here*" and the "NVIDIA sucks because *insert bullshit FUD here*" posts.
They both have their flaws and their good points.
The one thing I almost always see out of ATI since the 8500 has been INNOVATION. It hasn't always worked out the way they intended; the 2900/3800 are the prime example. The main issue was that ATI designed the R600/RV670 cores for DX10, not DX9, so they followed what Microsoft wanted to do with DX10+: remove dedicated AA hardware and use shaders to do the AA and other work. Of course this led to a problem; DX9 support was an afterthought and as such gave worse performance when you turned AA on.
ATI thought, like many other companies, that Vista would take off and be a huge hit, just like XP was when it came out, and with Vista being a big hit, DX10+ games would have come out en masse. But Vista fell on its face, and ATI still had this pure DX10 chip already in the pipe, so they ran with it KNOWING it would have its issues/quirks in DX9 games.
NVIDIA, on the other hand, effectively took the opposite approach with the G80/G92 cores: they built a DX9 part with DX10 support as an afterthought. In this case it was a good move, because with Vista not being a giant hit, game developers had no reason to make true, pure DX10 games.
NVIDIA didn't go DX10.1 because it would have taken some redesign work on the G92, and they wanted to keep their investment in it as low as possible to keep the profit margin as high as possible. It's why they lowered the bus width and the complexity of the PCB, it's why they didn't add DX10.1 support, and it's why the 8800 GT's reference cooler is the utter piece of shit it is (I have one, and I can say for 100% certain the reference cooler is a hunk of shit!!!!).
Now, I could go on and on about each company; the point is they have both screwed up.
Biggest screwups for each:
ATI: the 2900 (R600) not having a dedicated AA unit for DX9 and older games.
NVIDIA: the GeForce 5/FX line, with horrible DX9 support that game developers ended up having to avoid because it ran so badly, forcing any FX owner to run all his/her games in DX8 mode. The 5800's design was also bad; high-end RAM with a small bus and an ungodly loud fan does not a good card make.
That's how I see it. At least ATI never put out a card touted as being capable of something that, in practice, it couldn't do even passably well...
I had an article before my last HDD meltdown that showed the actual cost per memory chip for video cards, DDR vs. DDR2 vs. DDR3 vs. DDR4.
DDR4 was more expensive, but that was mostly due not to it being new but to it being in short supply at the time. Still, the price you paid to get it on a card was extremely exaggerated; of course, it's "new," so they charge extra for it.
The cost difference between 2 and 3, again, wasn't that large, same with DDR vs. DDR2. Again, we are talking about companies that buy hundreds of thousands if not millions of memory chips at a time from their suppliers; those suppliers want to keep on the good side of their customers so they keep making a profit, so they give them far better prices than they would ever admit to an outside party.
Also, the more you buy, the lower the per-unit cost, same as with most things. Go check SuperMediaStore; if you buy 600 blanks, the price is quite a bit lower than buying 50 or 100 at a time. ;)
That's how the vendor gets RAM at a nice price: because they buy such large orders!
Seriously, there is no reason to hate ATI that much!
Look at my face ----> :D
I'm very happy with AMD/ATI.
My previous rig was NVIDIA and I was happy with that as well.
But hey, I'm not complaining... you have every right to say what you want.
No one likes a buzzkill!
As to MS doing what another company tells it: wrong. MS could block OpenGL support if they wanted, and guess what, nobody could stop them. Everybody has to do what MS says, because the only other choice is to fall back into a niche market like Matrox has done.
As to your 5700 example, that doesn't mean shit; the 5700 was a piece of crap. It was the best of the FX line, but that's not saying much... especially when a 9550 SE can outperform it, LOL.
This is DX10.1: 3870 X2 vs. 9800 GTX under SP1 (DX10.1 is enabled with SP1).
Funny, shader-based AA vs. dedicated AA, and the performance difference is around 1 FPS.
So your "shader-based AA is a stupid idea" line is a load of fanboy bullshit (as expected from you).
The idea's fine if you're talking about native DX10/10.1 games, but today's games are mostly DX9 games with some DX10 shaders added (Crysis, for example).
As this shows, there is ZERO reason shader-based AA needs to be any slower; it's only slower in older, non-native (DX9) code. Hence, as I said, they should have had a hardware AA unit for DX9 and older games and used shader-based AA for DX10.x games; problem solved.
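To make the shader-based AA point concrete, here is a minimal sketch of the resolve arithmetic itself (a 4x MSAA box filter over made-up sample values; a real resolve shader does this per pixel on the GPU, whereas dedicated AA hardware performs the same averaging in fixed-function logic):

```python
# A resolve step averages each pixel's coverage samples into one final colour.
# Made-up 4x MSAA data for a single edge pixel, half covered by a white triangle:
samples_per_pixel = 4
pixel_samples = [(255, 255, 255), (255, 255, 255), (0, 0, 0), (0, 0, 0)]

# Box-filter resolve: average each colour channel over the four samples.
resolved = tuple(sum(channel) // samples_per_pixel for channel in zip(*pixel_samples))
print(resolved)   # (127, 127, 127) -> a grey, anti-aliased edge pixel
```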