Saturday, May 31st 2008

AMD Starts Shipping ATI Radeon HD 4850 Video Cards

The ATI Radeon HD 4850 cards are reportedly already shipping to OEM partners and retailers. According to TG Daily, everything is going as planned and AMD/ATI is aiming for a sizable launch of the new product generation, with Radeon HD 4850 512 MB boards leading the charge. Final prices for all Radeon HD 4000 series cards will be officially announced during the Computex 2008 trade show, which opens its doors on Monday. Higher-end Radeon HD 4870 cards with 512 MB of onboard GDDR5 memory are expected to ship in volume sometime this summer, with the flagship Radeon HD 4870 X2 to follow soon after.
Source: TG Daily

74 Comments on AMD Starts Shipping ATI Radeon HD 4850 Video Cards

#26
Apocolypse007
This card looks very promising. I expect nVidia's competitor card to be more power hungry and inefficient (still GDDR3 on nVidia's flagship... really?).

I'm hoping this will help AMD regain some of its popularity with its gamer fans.

The HD 3000 series was nice. My 3870 is serving me well; I upgraded to it from my X1800 XT and haven't regretted it. It would be nice if you could CrossFire this with the 3000 series, but my board only has one PCI-e slot :shadedshu .
Posted on Reply
#27
robodude666
entilzaI'd like to see the Crossfire Compatibility chart with this new 4800 series...

I hope my 3870 can remain as I just love it :)
Same here. I'd also hope that any Vista + (oh, say) atikmdag.sys issues get fixed with the HD 4800 driver release. Wouldn't mind getting an HD 48x0 instead of an 8800GT in late June and CrossFire it w/ my 3870.
Posted on Reply
#28
jonmcc33
My X1900XT is still going strong. I'll use it until there's a game worth playing all the time for me to upgrade.
Posted on Reply
#29
Wile E
Power User
Apocolypse007this card looks very promising. I expect nvidia's competitor card to be more power hungry and inefficient (still ddr3 on Nvidia's flagship... Really?).
The fact that nVidia is using GDDR3 doesn't matter, for two reasons. First, it's rumored their top-end cards will be using a 512-bit bus, greatly increasing bandwidth, and second, nothing is really able to use all the available bandwidth anyway.
Posted on Reply
#30
jonmcc33
Wile EThe fact that nVidia is using GDDR3 doesn't matter, for two reasons. First, it's rumored their top-end cards will be using a 512-bit bus, greatly increasing bandwidth, and second, nothing is really able to use all the available bandwidth anyway.
One reason they are still using it is because ATi helped develop GDDR5. When you're left in the dust on the development of something like that, what do you expect?

It has already been proven that the memory bus in particular isn't the greatest factor. ATi already tried a 512-bit memory bus and it didn't last long. nVIDIA had 320-bit and 384-bit buses, and notice that the G92 core with its 256-bit bus performs better in AA/AF benchmarks?

When you can have a memory clock of 1.125 GHz, that's massive bandwidth even with a 256-bit bus. Not to mention the additional benefits and performance improvements of the technology.
Posted on Reply
#31
Wile E
Power User
jonmcc33One reason they are still using it is because ATi helped develop GDDR5. When you're left in the dust on the development of something like that, what do you expect?

It has already been proven that the memory bus in particular isn't the greatest factor. ATi already tried a 512-bit memory bus and it didn't last long. nVIDIA had 320-bit and 384-bit buses, and notice that the G92 core with its 256-bit bus performs better in AA/AF benchmarks?

When you can have a memory clock of 1.125 GHz, that's massive bandwidth even with a 256-bit bus. Not to mention the additional benefits and performance improvements of the technology.
I know what the benefits of the bus are. My point is, the wider bus requires less memory clock speed to achieve the same bandwidth, making GDDR5 unnecessary for nVidia in terms of overall throughput.

And if you hadn't noticed, nothing above GDDR3 has shown significant gains on ATI cards, even when all else is equal. So the bandwidth of either GDDR5 on a 256-bit bus or GDDR3 on a 512-bit bus is essentially overkill.
Posted on Reply
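The bandwidth arithmetic behind this exchange is simple to check. Below is a minimal sketch; the clocks and bus widths are illustrative figures for cards of this era, assumed for the example rather than taken from either post:

```python
# Peak memory bandwidth = (bus width in bytes) x (effective transfer rate).
# GDDR3 is double-pumped; GDDR5 moves four bits per pin per clock, so a
# modest GDDR5 clock can approach a much wider GDDR3 bus.

def bandwidth_gbs(bus_bits: int, transfers_mt_s: float) -> float:
    """Peak bandwidth in GB/s from bus width (bits) and transfer rate (MT/s)."""
    return bus_bits / 8 * transfers_mt_s / 1000

# Assumed figures: 900 MHz GDDR5 (3600 MT/s) on a 256-bit bus vs.
# ~1107 MHz GDDR3 (2214 MT/s) on a 512-bit bus.
gddr5_256 = bandwidth_gbs(256, 3600)   # ~115 GB/s
gddr3_512 = bandwidth_gbs(512, 2214)   # ~142 GB/s

print(f"256-bit GDDR5: {gddr5_256:.1f} GB/s")
print(f"512-bit GDDR3: {gddr3_512:.1f} GB/s")
```

Which supports both posters at once: at these assumed clocks a 512-bit GDDR3 bus does out-run 256-bit GDDR5, while GDDR5 delivers most of that bandwidth on a bus half as wide.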
#32
Silverel
GDDR5 is cheaper, and can use a smaller bus for the same bandwidth. The architecture allows uneven-length trace lines (i.e. shorter and less expensive) and fewer PCB layers than a larger bus would require, and it will run cooler as well. It's not just about the clock speed, but the overall cost of the board that will make the big difference. Not to mention, all it has to do is meet the same speeds to provide the same bandwidth. If it's not going to make a difference performance-wise due to some architecture limitation within ATI cards, at least it will be cheaper... and price/performance will drop accordingly. Or at least their profits will rise.
Posted on Reply
#33
tkpenalty
Something tells me from Nvidia not using GDDR5/4, that they are holding back...
Posted on Reply
#34
Blacklash
Good news. If they rock I'll replace my two HD 3850s @ 760|2038 with them.
Posted on Reply
#35
DarkMatter
SilverelGDDR5 will/could be cheaper in 3-6 months or so, and can use a smaller bus for the same bandwidth. The architecture allows uneven-length trace lines (i.e. shorter and less expensive) and fewer PCB layers than a larger bus would require, and it will run cooler as well. It's not just about the clock speed, but the overall cost of the board that will make the big difference. Not to mention, all it has to do is meet the same speeds to provide the same bandwidth. If it's not going to make a difference performance-wise due to some architecture limitation within ATI cards, at least it will be cheaper... and price/performance will drop accordingly. Or at least their profits will rise.
Corrected that for you. Right now GDDR5 is way more expensive.
Posted on Reply
#36
DarkMatter
newtekie1Yes, I am neutral. Most fanboys seem to think that if you say anything bad about a company you are "talkin crap" or "bashing" their beloved company. Pointing out negative points about a company and its products doesn't mean you don't like them or favor another company. The fact of the matter at the time WAS that nVidia was the better buy; nVidia had ATi beat in both price and performance. Pointing out that fact doesn't mean I was saying nVidia was great and ATi was crap, it was just the facts at the time (and still is, but let's see what these new cards bring).
Been there. People just don't want to hear the truth when it hurts. That sentiment is extremely intensified in the case of fanbois.
Posted on Reply
#37
Esse
DarkMatterCorrected that for you. Right now GDDR5 is way more expensive.
Well, someone needs to tread in the deep end, otherwise no one will ever get in the pool :rolleyes:
Posted on Reply
#38
candle_86
imperialreignretro - funny how when we apply that in the tech market we're only talking a couple of years old :laugh:
That XP of his isn't a couple of years old, nor is his board. NF2 and Athlon XP are 2002 tech, remember. Heck, the 754 A64s are nearly 6 years old now. Processors are slowing down in advancement and performance, at least to me.
Posted on Reply
#39
yogurt_21
DarkMatterCorrected that for you. Right now GDDR5 is way more expensive.
Don't think you're paying attention, mate. GDDR5 is cheaper to implement because it uses a less complicated process, and right now the chips are only 10% more expensive than a lower-speed GDDR3 counterpart. The 512-bit bus on the GT200 has made the chip huge and expensive to produce. There's more to the price of memory than chip cost. Check out the GDDR5 thread for more info. GDDR5 is MUCH cheaper to implement, so no, it is not at all more expensive.
Posted on Reply
#40
jonmcc33
Wile EI know what the benefits of the bus are. My point is, the wider bus requires less memory clock speed to achieve the same bandwidth, making GDDR5 unnecessary for nVidia in terms of overall throughput.

And if you never noticed, nothing above GDDR3 has shown significant gains on ATI cards, even when all else is equal. So the bandwidth of both GDDR5 on a 256b bus, or GDDR3 on a 512b bus is essentially overkill.
No, it is not overkill when you have DX10.1, which requires 4xAA. When you play with AA/AF cranked up, bandwidth becomes very important and is the bottleneck.

GDDR5 isn't unnecessary, it's unattainable for nVIDIA because they aren't involved in its development. Come to think of it, didn't ATi help develop GDDR3 as well? The same memory technology that nVIDIA is using on its flagship cards?

en.wikipedia.org/wiki/GDDR3

Yes it is! ATi and JEDEC developed it! I guess some companies are still followers on the market, eh? ;)
Posted on Reply
#41
snuif09
I'm only getting DX10 cards when it becomes a standard. ATM there aren't enough DX10 games that are much better than DX9.
Posted on Reply
#42
Disparia
Cools. Sign me up for two of them! (As there's a lack of chainable DisplayPort monitors and video cards.)
Posted on Reply
#43
newtekie1
Semi-Retired Folder
DarkMatterBeen there. People just don't want to hear the truth when it hurts. That sentiment is extremely intensified in the case of fanbois.
Exactly, and if people would look farther back than just a week or two, they would see me complaining about nVidia's utterly shit driver support for the pre-8000 series cards when the 8800s came out.
Posted on Reply
#44
imperialreign
candle_86That XP of his isn't a couple of years old, nor is his board. NF2 and Athlon XP are 2002 tech, remember. Heck, the 754 A64s are nearly 6 years old now. Processors are slowing down in advancement and performance, at least to me.
That's because AMD is behind, and Intel is sandbagging. The CPU market has become stale because of it. Until AMD can re-enter the ring with the thousand-hand slap, we won't see anything remotely brilliant come out of Intel . . .


. . . come to think of it, we haven't really seen anything brilliant or noteworthy come out of Intel since they slapped two P4s together on the same die. IIRC, even the looming-sometime-this-decade Nehalem is still milking the Core 2 architecture.
jonmcc33No, it is not overkill when you have DX10.1, which requires 4xAA. When you play with AA/AF cranked up, bandwidth becomes very important and is the bottleneck.

GDDR5 isn't unnecessary, it's unattainable for nVIDIA because they aren't involved in its development. Come to think of it, didn't ATi help develop GDDR3 as well? The same memory technology that nVIDIA is using on its flagship cards?

en.wikipedia.org/wiki/GDDR3

Yes it is! ATi and JEDEC developed it! I guess some companies are still followers on the market, eh?
Sad, too; ATI jumps on new technology as soon as they can, and nVidia lags behind. They remind me of Intel in many respects: as long as their current tech works and dominates the market, why change it? If they get ahead of the competition, they sandbag their current tech and milk it for as long as they can.

ATI was the first to offer GDDR3, GDDR4, GDDR5, DVI, HDMI, PCI-E 2.0, DX10.1 (they also would've had the first DX10 release if the merger with AMD hadn't held up the HD 2000 series as long as it did), etc. - and people will purchase their products simply for the new tech, but it doesn't make up much for their lagging performance :(


But, y'know, IMO, if AMD designed GPU cores as massive as nVidia's, we'd have seen a different story the last few series.
Posted on Reply
#45
jonmcc33
candle_86That XP of his isn't a couple of years old, nor is his board. NF2 and Athlon XP are 2002 tech, remember. Heck, the 754 A64s are nearly 6 years old now. Processors are slowing down in advancement and performance, at least to me.
You are aware that the architecture improvement of the Penryn core over the Conroe actually improved performance at the same clock speed, right?

Currently quad-core processors are out, and soon we'll see 8-core processors, etc. It's still advancing, and heavily. The real issue is applications making use of SMP so they actually gain from these multicore processors.
Posted on Reply
#46
DarkMatter
yogurt_21Don't think you're paying attention, mate. GDDR5 is cheaper to implement because it uses a less complicated process, and right now the chips are only 10% more expensive than a lower-speed GDDR3 counterpart. The 512-bit bus on the GT200 has made the chip huge and expensive to produce. There's more to the price of memory than chip cost. Check out the GDDR5 thread for more info. GDDR5 is MUCH cheaper to implement, so no, it is not at all more expensive.
1- GDDR5 uses a smaller process. Smaller != simpler != cheaper. For comparison, the 55 nm process in the HD 3000 series was not a lot cheaper (if at all, at launch time) than 65 nm at the time. We can say the same about the first 45 nm Intels. Check your facts.

2- Right now slow GDDR5 is 10-20% more expensive than higher-speed GDDR3 (0.8 ns). Slower GDDR3, such as 1 ns memory, is way cheaper. The GTX 260, the only possible "direct" competitor to Ati, uses the cheaper one and has more or less the same bandwidth as the 770XT. The GTX 280 has significantly higher bandwidth, so we are not comparing apples to apples there. We could easily say that we are comparing extremely fast GDDR3 vs. slow GDDR5, and the actual result is Nvidia has the higher bandwidth, so Ati's solution being cheaper makes sense; it's what we should expect from the slower part. 512-bit + GDDR3 here gives more bandwidth; it's overkill, unnecessary IMO, but faster still.

3- I didn't talk about the implementation, but the price of the memory. If you read his post you would notice he first says GDDR5 is cheaper and then starts talking about the implementation like this: cheaper GDDR5 + cheaper controller, PCB, etc. = WIN WIN. That's what I understand from his statement. And that's not true NOW; it probably will be in 3-6 months. And that's what I stated. And it's at this very moment, having to justify my statements, that newtekie's post #20 makes even more sense to me...
EsseWell, someone needs to tread in the deep end, otherwise no one will ever get in the pool :rolleyes:
Indeed! Who said the opposite? I only made clear that GDDR5 is not cheaper NOW. I didn't say it was a bad decision or anything. I just say that price-wise it may not be better NOW, not by much at least. In reality we don't know if it's cheaper at all:

1- Memory is more expensive and in short supply NOW*. So much so that Ati halved the frame buffer to 512 MB in the 770XT and uses GDDR3 in the Pro. Never forget this, as it pictures the truth in a "check the reality" fashion.

2- We have no real proof that making a 256-bit GDDR5 controller is cheaper than a 512/448-bit GDDR3 controller RIGHT NOW* for the same performance. Remember DDR vs DDR2, DDR2 vs DDR3, GDDR3 vs GDDR4...

3- The PCB is a lot cheaper, but IMO it's the only part where there's an economic benefit RIGHT NOW*, and it doesn't account for all that much of the retail price.

*Sorry I used it so much, but I think I have to make clear that I am talking about today. Otherwise some people will think I'm saying something that I am not. I have no doubt it will get better with time. That's why I said 3-6 months.
Posted on Reply
#47
DarkMatter
jonmcc33GDDR5 isn't unnecessary, it's unattainable by nVIDIA because they aren't involved in the development of it. Come to think of it, didn't ATi help develop GDDR3 as well? The same memory technology that nVIDIA is using on it's flagship cards?

en.wikipedia.org/wiki/GDDR3

Yes it is! ATi and JEDEC developed it! I guess some companies are still followers on the market, eh? ;)
I've been searching for info about this for some time, as I don't have the Wiki on my list of most reliable sources, and it seems the info is right. I never pay as much attention to who develops what as I do to the specs and benchmarks of the thing. But I may do so in the future.

That's the saddest thing I have heard in a long time. It's extremely unfair and bad for free competition. :shadedshu
I had no clue Ati was involved in the development of GDDR. I thought it was JEDEC that does these things in conjunction with memory developers in any case (when talking about memory, of course), and not only ONE of the consumers. No wonder Nvidia is not using it! Ati probably has tons of patents it doesn't want to share for cheap, if they want to share them at all. :banghead:

Not to mention that this way Ati is kind of imposing the use of the memory the way they want it to be, which may not be the better way, who knows? In any case it surely benefits Ati. It's not like other companies can't develop their own standard, or that they can't make it better; it's surely more based on their relationship with JEDEC. It's like TWIMTBP in hardware development, except that there's no patent involvement in TWIMTBP, and in hardware there SURELY is. :shadedshu
Posted on Reply
#48
eidairaman1
The Exiled Airman
Technically, the PCB has a part to play: usually, the more area a single PCB uses, the higher its price goes, but if the boards were smaller they could get greater yields out of each panel, and thus the price would go down. Just too bad they can't reuse the wasted PCB platter to make a new PCB platter.
DarkMatter1- GDDR5 uses a smaller process. Smaller != simpler != cheaper. For comparison, the 55 nm process in the HD 3000 series was not a lot cheaper (if at all, at launch time) than 65 nm at the time. We can say the same about the first 45 nm Intels. Check your facts.

2- Right now slow GDDR5 is 10-20% more expensive than higher-speed GDDR3 (0.8 ns). Slower GDDR3, such as 1 ns memory, is way cheaper. The GTX 260, the only possible "direct" competitor to Ati, uses the cheaper one and has more or less the same bandwidth as the 770XT. The GTX 280 has significantly higher bandwidth, so we are not comparing apples to apples there. We could easily say that we are comparing extremely fast GDDR3 vs. slow GDDR5, and the actual result is Nvidia has the higher bandwidth, so Ati's solution being cheaper makes sense; it's what we should expect from the slower part. 512-bit + GDDR3 here gives more bandwidth; it's overkill, unnecessary IMO, but faster still.

3- I didn't talk about the implementation, but the price of the memory. If you read his post you would notice he first says GDDR5 is cheaper and then starts talking about the implementation like this: cheaper GDDR5 + cheaper controller, PCB, etc. = WIN WIN. That's what I understand from his statement. And that's not true NOW; it probably will be in 3-6 months. And that's what I stated. And it's at this very moment, having to justify my statements, that newtekie's post #20 makes even more sense to me...



Indeed! Who said the opposite? I only made clear that GDDR5 is not cheaper NOW. I didn't say it was a bad decision or anything. I just say that price-wise it may not be better NOW, not by much at least. In reality we don't know if it's cheaper at all:

1- Memory is more expensive and in short supply NOW*. So much so that Ati halved the frame buffer to 512 MB in the 770XT and uses GDDR3 in the Pro. Never forget this, as it pictures the truth in a "check the reality" fashion.

2- We have no real proof that making a 256-bit GDDR5 controller is cheaper than a 512/448-bit GDDR3 controller RIGHT NOW* for the same performance. Remember DDR vs DDR2, DDR2 vs DDR3, GDDR3 vs GDDR4...

3- The PCB is a lot cheaper, but IMO it's the only part where there's an economic benefit RIGHT NOW*, and it doesn't account for all that much of the retail price.

*Sorry I used it so much, but I think I have to make clear that I am talking about today. Otherwise some people will think I'm saying something that I am not. I have no doubt it will get better with time. That's why I said 3-6 months.
Posted on Reply
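The area-vs-yield point above can be put into numbers. A rough sketch, assuming a common 610 x 457 mm production panel and hypothetical board sizes (all dimensions here are illustrative assumptions, not actual card measurements):

```python
# More boards fit on a fixed-size panel when each board is smaller,
# so the per-board share of the panel cost drops.

def boards_per_panel(panel_w: int, panel_h: int, board_w: int, board_h: int) -> int:
    """How many boards fit on one panel in a simple grid layout (mm)."""
    return (panel_w // board_w) * (panel_h // board_h)

PANEL_W, PANEL_H = 610, 457          # a common PCB panel size, in mm
long_card  = boards_per_panel(PANEL_W, PANEL_H, 270, 100)   # 8 per panel
short_card = boards_per_panel(PANEL_W, PANEL_H, 190, 100)   # 12 per panel

panel_cost = 40.0                    # hypothetical panel cost in USD
print(f"long card:  {long_card} per panel, ${panel_cost / long_card:.2f} each")
print(f"short card: {short_card} per panel, ${panel_cost / short_card:.2f} each")
```

Under these assumed numbers the shorter board cuts the PCB cost per card by a third, which is the yield argument in a nutshell (real panelization also loses area to routing margins and tooling strips, so actual counts run lower).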
#49
Assimilator
imperialreignsad, too, ATI jumps on new technology as soon as they can, and nVidia lags behind - they remind me of Intel in many respects, as long as their current tech works and dominates the market, why change it? If they get ahead of the competition, they sandbag their current tech and milk it for as long as they can.
If there's no pressure to innovate, companies don't innovate. That's why AMD/ATI need to get off their asses and provide some decent competition.

And implementing technology for technology's sake is hardly the right way to make money. Did GDDR4 prevent the 2900 XT from being a POS? How is DirectX 10.1 useful if not ONE game in existence uses it? In this industry, performance is king - ATI can provide all the features they want, but if they don't provide the horsepower to go with them, no-one will buy their cards and hence no-one will use those features.
imperialreignbut, y'know, IMO, if AMD designed GPU cores as massive as nVidia has, we'd have seen a different story the last few series.
By that logic, all ATI has to do to regain the performance crown is release a GPU with 2 billion transistors.

As for the GDDR3 vs GDDR5 debate, it's been done to death already. The fact of the matter is that nVidia would be suicidal to go with GDDR5 because there's so little of it available, it's hideously expensive, and (almost certainly) because ATI has already purchased most of the stock that's available.
Posted on Reply
#50
DarkMatter
eidairaman1Technically, PCB has a part to play, usually the more Area that is used by a Single PCB, such as larger the Price does go up but if the boards were smaller they could get greater yields out of them, thus price would go down, Just too bad they cant reuse the Wasted PCB Platter to make a new PCB Platter.
What I was saying is that we don't know if the simpler PCB offsets the other differences in price. In fact, I think 1 GB of GDDR5 must be expensive enough compared to 1 GB of GDDR3 that this difference is bigger than the one between 256-bit and 448-bit boards, or close to it even if Ati decided to launch the cards with "only" 512 MB. The GTX cards are going to be more expensive at retail (~$100 more, GTX 260 vs. 770XT), but they will pack more memory, which will account for ~$30-50 in production costs. Could that difference in production costs account for a great part of the final retail price difference? Just thinking...
Posted on Reply
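The "does the cheaper PCB offset the dearer memory" question in the post above can be framed as a toy cost model. Every dollar figure below is made up purely to show the shape of the trade-off, not sourced from anywhere:

```python
# Toy bill-of-materials model: does a simpler 256-bit PCB offset a
# GDDR5 chip-price premium? All figures are hypothetical.

def board_cost(mem_chip_cost: float, chips: int, pcb_cost: float) -> float:
    """Memory + PCB share of a card's bill of materials (USD)."""
    return mem_chip_cost * chips + pcb_cost

# 512 MB as 8 x 64 MB chips in both cases; assume GDDR5 chips carry a
# ~15% premium, while the narrower bus allows a cheaper PCB.
gddr5_board = board_cost(mem_chip_cost=5.75, chips=8, pcb_cost=12.0)  # 256-bit
gddr3_board = board_cost(mem_chip_cost=5.00, chips=8, pcb_cost=20.0)  # 512-bit

print(f"GDDR5 / 256-bit: ${gddr5_board:.2f}")
print(f"GDDR3 / 512-bit: ${gddr3_board:.2f}")
```

With these particular assumptions the GDDR5 board comes out slightly ahead, but nudging the memory premium or the PCB saving a few dollars flips the result, which is exactly DarkMatter's point: without real component prices, neither side of the debate can be settled.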