Saturday, May 31st 2008

AMD Starts Shipping ATI Radeon HD 4850 Video Cards

The ATI Radeon HD 4850 cards are reportedly already shipping to OEM partners and retailers. According to TG Daily, everything is going as planned and AMD/ATI is aiming for a sizable launch of the new product generation, with Radeon HD 4850 512 MB boards leading the charge. Final prices for all Radeon HD 4 series cards will be officially announced at the Computex 2008 tradeshow, which opens its doors on Monday. Higher-end Radeon HD 4870 cards with 512 MB of onboard GDDR5 memory are expected to ship in volume sometime this summer, with the flagship Radeon HD 4870 X2 to follow soon after.
Source: TG Daily

74 Comments on AMD Starts Shipping ATI Radeon HD 4850 Video Cards

#51
eidairaman1
The Exiled Airman
DarkMatterWhat I was saying is that we don't know if the simpler PCB offsets the other differences in price. In fact, I think 1 GB of GDDR5 must be expensive enough compared to 1 GB of GDDR3 that this difference is bigger than the one between 256-bit and 448-bit boards, or close to it, if Ati decided to launch the cards with "only" 512 MB. GTX cards are going to be more expensive at retail (GTX 260 ~$100 more than the 770 XT), but they will pack more memory, which will account for ~$30-50 in production costs. Could that difference in production costs account for a great part of the final retail price difference? Just thinking...
IMO I'd say 512 MB is the sweet spot.
Posted on Reply
#52
DarkMatter
eidairaman1IMO I'd say 512 MB is the sweet spot.
Debatable, even though I agree with that sweet spot. But oftentimes sweet spot = mainstream, and we are talking about high-end cards, aren't we? The 1 GB 8800s do show some benefit at higher settings, and more so with high overclocks. A card twice as fast will easily need the extra memory.
Anyway, the difference in price is there, and that was my point. Maybe I'm remembering this badly wrong, but I recall Palit_Guy saying their profit was smaller on 1 GB cards than on 512 MB ones, which means the production cost difference was bigger than the difference in retail prices. I mean, the retail price difference (without the aforementioned profit loss) for the extra 448 MB in the GTX 260 would probably be around $50. A 448 MB GTX 260 could easily sell for $60-80 less, just as the GTS 320 did, and still be faster than the competition.

EDIT: Also, correct me if I'm wrong, but Wizzard said 1 ns 512 Mbit chips were about $4.50 from manufacturers in 1,000 (10,000?) unit quantities. $4.50 x 8 = $36. And 0.8 ns chips are a lot more expensive, AFAIK.
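For clarity, here's that chip math as a quick sketch (only the ~$4.50-per-chip figure comes from the thread; the chip count per board and the linear scaling to 1 GB are assumptions):

```python
# Hypothetical sketch of the per-board memory cost math above.
# Only the ~$4.50/chip bulk price is from the thread; the rest is assumed.

CHIP_DENSITY_MBIT = 512   # capacity of one GDDR chip, in megabits
CHIP_PRICE_USD = 4.50     # quoted bulk price for a 1 ns chip

def board_memory_cost(board_mb):
    """Return (chip count, total cost) to populate a board with board_mb MB."""
    chips = board_mb * 8 // CHIP_DENSITY_MBIT  # MB -> Mbit, then chips
    return chips, chips * CHIP_PRICE_USD

print(board_memory_cost(512))   # (8, 36.0)  -> the $36 figure above
print(board_memory_cost(1024))  # (16, 72.0) -> roughly double for 1 GB
```

So going from 512 MB to 1 GB roughly doubles the memory bill even before any GDDR5 premium, which is the point about margins above.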

EDIT2: On another note, I've just seen this on Fudzilla (yeah, whatever; I trust them about as much as many other sources, kinda):

www.fudzilla.com/index.php?option=com_content&task=view&id=7625&Itemid=1

I have mixed opinions about this. I do think AA is better done outside the shaders, but we'll have to wait and see, because FSAA done in 16 ROPs could be a limiting factor for such a card. I guess they have significantly improved them, but I don't know how it will work out in the end. I'm optimistic in that I never think they will outright fail, but I'm worried in the sense that the decision seems based more on developers' desires than on Ati's. And timing matters here: was the chip designed to do it from the start, or is this a last-minute decision?
Posted on Reply
#53
pentastar111
Personally I don't give a cr^p if the memory is GDDR3 or GDDR5 or GDDR10...I want a card that kicks butt...is stable and reliable...and doesn't require the selling of a kidney or my firstborn...;)
Posted on Reply
#54
jonmcc33
DarkMatterI've been searching for info about this for some time, as I don't count the Wiki among my most reliable sources, and it seems the info is right. I never pay as much attention to who develops what as I do to the specs and benchmarks of the thing, but I may in the future.

That's the saddest thing I have heard in a long time. It's extremely unfair and bad for free competition. :shadedshu
I had no clue Ati was involved in the development of GDDR. I thought it was JEDEC who did these things, in conjunction with memory makers (when talking about memory, of course), and not just ONE of the consumers. No wonder Nvidia is not using it! Ati probably has tons of patents it doesn't want to share for cheap, if they want to share them at all. :banghead:

Not to mention that this way Ati is kind of imposing the use of the memory the way they want it, which may not be the best way, who knows? In any case it surely benefits Ati. It's not that other companies can't develop their own standard, or can't make it better; it's surely based more on the relationship with JEDEC. It's like TWIMTBP in hardware development, except that there's no patent involvement in TWIMTBP, while in hardware there SURELY is. :shadedshu
Well, AMD developed x86-64 technology, did they not? Intel was left in the dust with that too, originally.

But it's good in the long run. These companies keep pushing themselves to advance technology and it's the consumer that has benefited.

The reason nVIDIA needs to use a more expensive 512-bit memory bus is that they don't have the capability to advance GDDR memory the way ATi can. They're still using GDDR3 because they don't know how to develop anything better.
Posted on Reply
#55
eidairaman1
The Exiled Airman
pentastar111Personally I don't give a cr^p if the memory is GDDR3 or GDDR5 or GDDR10...I want a card that kicks butt...is stable and reliable...and doesn't require the selling of a kidney or my firstborn...;)
Well, don't worry about it then; your cards will obviously last longer than the G80 line will.
Posted on Reply
#56
farlex85
I thought they were going to launch a 1 GB 4870 at the outset. :confused: I mean, I'm sure these cards will be great, nice successors to the 3xxx series, but it seems as though they're not even trying to compete w/ nvidia in the high end. I was hoping they would........:(
Posted on Reply
#57
DarkMatter
jonmcc33Well, AMD developed x86-64 technology, did they not? Intel was left in the dust with that too, originally.

But it's good in the long run. These companies keep pushing themselves to advance technology and it's the consumer that has benefited.

The reason nVIDIA needs to use a more expensive 512-bit memory bus is that they don't have the capability to advance GDDR memory the way ATi can. They're still using GDDR3 because they don't know how to develop anything better.
The difference is that JEDEC is supposed to be an "open" group that sets standards for the industry. If they make one company's development the standard, that's not fair. It's not a matter of "I can or I cannot"; it's a matter of "what's the price I have to pay?" I'm 99.9% sure any chipmaker can implement something like that. Maybe not as well, but they can...

EDIT2: I had my brother read this and he said I didn't make my point clear above, so here's further explanation. It's not that I dislike a company owning the rights to something they developed. It's that Ati has supposedly set industry standards for something they don't manufacture: memory chips. They are a customer, not the developer, and IMO shouldn't be the one setting standards, as those may not fit other customers' needs. Given that it's always Ati who sets the GDDR standards, and knowing how the world works (it's 99% politics), my view is that JEDEC approves Ati's designs because they are "used" to them. That's not good for the industry or the customer, IMHO.

EDIT: Just as an example of how they can: I studied Telecommunications Engineering. I left in the second year because it was much more about programming software than I liked or first thought it would be. But in the meantime, and without much study on the matter (I mean, I left in the second year), I taught myself how to build an SDRAM memory controller with the help of Altera programmable FPGAs, for a project in which I needed more memory than I could fit in the FPGA they gave us. Was it fast? No, man, but it worked, and it was just me. Come on...


Plus, AMD developed x86-64 because they have a cross-licensing agreement with Intel. Any improvement made by one party can be "copied" without patent infringement (more or less, that's it).

This is not the case with Ati - Nvidia, AFAIK. Plus, JEDEC dictates what the manufacturers are going to do. If JEDEC says GDDR5, it's GDDR5. Say Nvidia comes up with its own new memory design: it first needs to be approved by JEDEC, and if the new design is too similar to GDDR it won't get approved. The point is that GDDR5 is the standard, and if Nvidia wants to use it, they probably have to pay Ati, because Ati probably holds the patents on it. Any attempt to make something similar would run into patent infringement.

Following the x86 example: do you really think nobody in the world can make their own x86 processor? Nvidia, for example, to stick with the same players. Of course they can, but patents prevent them from doing it. You would be surprised how often patents keep companies from doing things in this industry. The computer industry is not really that much about innovation; it's more along the lines of "for double the performance, double the lanes or the clocks," etc. There are some things you can innovate, but often they are easy to think of and obvious, yet not worthwhile at the time, so you don't implement them or even care about them. Then a month later you think again and say, "Hey, next year this could be handy." You go to the patent office just to find out you are three days late and someone else has taken the lead... That's how these things work. The worst part, in addition to this, is that there are plenty of "companies" out there whose only job is finding those holes and patenting them, even though they will never develop them into reality and never intended to. It sucks.
Posted on Reply
#58
jonmcc33
farlex85I thought they were going to launch a 1 GB 4870 at the outset. :confused: I mean, I'm sure these cards will be great, nice successors to the 3xxx series, but it seems as though they're not even trying to compete w/ nvidia in the high end. I was hoping they would........:(
Not sure what you mean by this. Did you completely miss the 3870 X2 or something? The price factor makes it an even bigger steal.
Posted on Reply
#59
farlex85
jonmcc33Not sure what you mean by this. Did you completely miss the 3870 X2 or something? The price factor makes it an even bigger steal.
Nah, I mean I was under the impression the initial card was to be a 1 GB single-GPU 4870 that would at least hopefully compete w/ the offerings from nvidia (the 1 GB and 896 MB cards). It seems, though, that they are instead launching in the mid-range, giving nvidia free rein over the high end for some period of time. I'm not saying these cards won't be an excellent value; I'm saying I was hoping for competition on all levels at the get-go this round (for better pricing for us). Although nvidia doesn't seem to be launching in the middle ground where most cards are bought, so maybe it's just a marketing thing. I just wanna see some fighting......

That 3870 X2 came long after nvidia was at the top of the hill, it was only one offering, and it was a dual-GPU card. I was just hoping for more competition; one card (even though it is a great card) arriving too far down the road doesn't do it.
Posted on Reply
#60
jonmcc33
farlex85Nah, I mean I was under the impression the initial card was to be a 1 GB single-GPU 4870 that would at least hopefully compete w/ the offerings from nvidia (the 1 GB and 896 MB cards). It seems, though, that they are instead launching in the mid-range, giving nvidia free rein over the high end for some period of time. I'm not saying these cards won't be an excellent value; I'm saying I was hoping for competition on all levels at the get-go this round (for better pricing for us). Although nvidia doesn't seem to be launching in the middle ground where most cards are bought, so maybe it's just a marketing thing. I just wanna see some fighting......

That 3870 X2 came long after nvidia was at the top of the hill, it was only one offering, and it was a dual-GPU card. I was just hoping for more competition; one card (even though it is a great card) arriving too far down the road doesn't do it.
It's been back and forth for years (or did you miss nVIDIA's GeForce FX debacle?). They screwed up with the 2900 series, but the 3800 series has done quite well in performance. The 3870 X2 was their king of the hill, and not only did it take the performance crown temporarily, but it was also offered at a real budget price. I cannot believe I paid $500 for my X1900XT back in the day, and a 3870 X2 right now costs $300 and destroys my video card. :(
Posted on Reply
#61
imperialreign
AssimilatorIf there's no pressure to innovate, companies don't innovate. That's why AMD/ATI need to get off their asses and provide some decent competition.

And implementing technology for technology's sake is hardly the right way to make money. Did GDDR4 prevent the 2900 XT from being a POS? How is DirectX 10.1 useful if not ONE game in existence uses it? In this industry, performance is king - ATI can provide all the features they want, but if they don't provide the horsepower to go with them, no-one will buy their cards and hence no-one will use those features.
I completely agree. At the least it shows that a company is willing to keep up with the times. I've noticed, though, that before the merger with AMD, supporting new tech worked, because they were well on par with nVidia; but a lot of the newer tech support hasn't been of much importance since the merger . . .

anyhow, GDDR4 was never really shown to improve performance over GDDR3 the way GDDR3 did over GDDR2 - the only difference that makes sense to me, IMO, is that it reduces loading stutter, as the memory can move texture files in and out quicker. But this type of performance isn't captured, nor can it really be, in benchmarks.
By that logic, all ATI has to do to regain the performance crown is release a GPU with 2 billion transistors.
it's a thought . . . but ATI has never really been about the brute force method
Posted on Reply
#62
brian.ca
farlex85Nah, I mean I was under the impression the initial card was to be a 1 GB single-GPU 4870 that would at least hopefully compete w/ the offerings from nvidia (the 1 GB and 896 MB cards). It seems, though, that they are instead launching in the mid-range, giving nvidia free rein over the high end for some period of time. I'm not saying these cards won't be an excellent value; I'm saying I was hoping for competition on all levels at the get-go this round (for better pricing for us). Although nvidia doesn't seem to be launching in the middle ground where most cards are bought, so maybe it's just a marketing thing. I just wanna see some fighting......

That 3870 X2 came long after nvidia was at the top of the hill, it was only one offering, and it was a dual-GPU card. I was just hoping for more competition; one card (even though it is a great card) arriving too far down the road doesn't do it.
The initial cards aren't meant to compete with Nv's high end.. that's the role of the X2, which was slated for late August-ish, I think? There was a rumor about ATI wanting to see what the Nv cards were capable of before releasing their top-of-the-line card.

I think I heard of the 1 GB card as well, but I'd imagine that'd be a no-go until the supply of GDDR5 gets better (it would seem they had to cancel GDDR5 on the 4850s b/c of supply issues). Ultimately these cards very much seem to be meant for the mainstream, where good price/performance ratios should be able to win ATI some market share back. If 512 MB is the sweet spot for most buyers, I'd imagine ATI would want to put all their eggs in that basket from the outset and direct the current memory supply toward those cards so they can sell more.
How is DirectX 10.1 useful if not ONE game in existence uses it? In this industry, performance is king - ATI can provide all the features they want, but if they don't provide the horsepower to go with them, no-one will buy their cards and hence no-one will use those features.
More of a nitpick than anything, but Assassin's Creed actually did make use of 10.1, and from what I read it was supposed to make a pretty significant difference (I think I remember reading about a 20% performance gain when using AA).

Also "performance is king" is probably not correct either -- it's important but it doesn't rule all. I'm sure Nv sold a hell of a lot more 8800 GTs than Ultra's or GX2s. Not to mention the arguement can be misleading. You can have games where an ATI card will out perform a similiar NV card that otherwise crushes the same ATI card in game x. In cases like those which is really the stronger card? At times performance can rely as much on market share & cooperative development initatives as it does technical design & specs. Where ATI may have the latter two they are definately lacking in the former. It becomes a chicken & egg thing. To make their cards perform beter ATI needs games to support their cards features, to get the support they probably need stronger market share (it makes sense to put more support behind a card your consumer is just more likely to have), to get market share they need....
Posted on Reply
#63
Wile E
Power User
jonmcc33No, it is not overkill when you have DX10.1, which requires 4xAA. When you play with the AA/AF cranked up, bandwidth becomes very important and is the bottleneck.

GDDR5 isn't unnecessary; it's unattainable by nVIDIA because they aren't involved in its development. Come to think of it, didn't ATi help develop GDDR3 as well? The same memory technology that nVIDIA is using on its flagship cards?

en.wikipedia.org/wiki/GDDR3

Yes, it is! ATi and JEDEC developed it! I guess some companies are still followers in the market, eh? ;)
Ummm, even with 4xAA in DX10.1, GDDR3 on a 512-bit bus IS NOT going to be a bottleneck. Again, look at the 2900XT with its 512-bit bus: the GDDR3 and GDDR4 cards perform the same, because the bandwidth isn't anywhere near maxed out even on the GDDR3 cards.

And why do you care who developed it? All that matters is the cost and performance of the hardware, not who designed it.
Posted on Reply
#64
jcfougere
Ati needs to catch up financially... that is to say, they need to come out of debt.

I believe it is an extremely good move for Ati to stay in the shadow of Nvidia at the ultra-high end of the scale. Why should they manufacture an expensive mega-card that only markets to the 5% of users who actually buy the best of the best?

By sticking to affordable, almost-high-end cards, they are bringing VERY good cards at FAR more reasonable prices, and starting to win back the mainstream users who buy most of the cards sold by both companies.

After the 2900 XT, people feared Ati might be done as a company; now they are on the right track again with affordable, high-performance cards. Isn't that everyone's dream come true?
Posted on Reply
#66
jonmcc33
Wile EUmmm, even with 4xAA in DX10.1, GDDR3 on a 512-bit bus IS NOT going to be a bottleneck. Again, look at the 2900XT with its 512-bit bus: the GDDR3 and GDDR4 cards perform the same, because the bandwidth isn't anywhere near maxed out even on the GDDR3 cards.

And why do you care who developed it? All that matters is the cost and performance of the hardware, not who designed it.
When it comes to AA and AF, the bottleneck is memory bandwidth. Not sure what makes you think it isn't. The 2900XT had a 512-bit bus, and then what did ATi do? They abandoned it for a 256-bit bus on the 3800 series. But again, the reason they can is that they can advance memory speed and performance, whereas nVIDIA cannot.
Posted on Reply
#67
candle_86
imperialreignI completely agree. At the least it shows that a company is willing to keep up with the times. I've noticed, though, that before the merger with AMD, supporting new tech worked, because they were well on par with nVidia; but a lot of the newer tech support hasn't been of much importance since the merger . . .

anyhow, GDDR4 was never really shown to improve performance over GDDR3 the way GDDR3 did over GDDR2 - the only difference that makes sense to me, IMO, is that it reduces loading stutter, as the memory can move texture files in and out quicker. But this type of performance isn't captured, nor can it really be, in benchmarks.



it's a thought . . . but ATI has never really been about the brute force method
Really?

R300 with 8 pixel pipes to compete with the NV25's 4, and a 256-bit bus over a 128-bit one. Seems brute force to me. R4xx: 16pp SM2.0b at 520/560 MHz core vs the NV40's 16pp SM3 at 450 MHz. Brute force to me again.
R580: 16pp with 48 shaders vs the G71's 24pp with 24 shaders. Brute force once again from ATI. Heck, the R600: 320 shaders and a 512-bit bus vs the G80's 128 shaders and 384-bit. Brute force again.

The last non-brute-force attempt on ATI/AMD's part was the R520.
Posted on Reply
#68
HAL7000
Well, this all started with the mention of AMD shipping the 4850 video cards. I enjoyed the thread, but the bottom line is: will they be worth the wait and the price? I wanted to upgrade a year ago and put it off because I did not feel the hardware was worth the investment.

AMD or Nvidia, it's all choices we live with. One leads, the other follows, and vice versa; one works through the front door (Intel), the other through the back door (AMD). But as for video cards, I am for the best implementation of new technology at a fair price. Memory, GPUs, CPUs and performance change quickly.

I for one look forward to seeing the new 4800 series come to market. My personal gaming rig will own one of these cards. My Biostar TP35D3-A7 Deluxe 5.x and E6850 sit idle waiting for the 4870 X2 to be released. AMD, don't fail me now.
Posted on Reply
#69
Wile E
Power User
jonmcc33When it comes to AA and AF, the bottleneck is memory bandwidth. Not sure what makes you think it isn't. The 2900XT had a 512-bit bus, and then what did ATi do? They abandoned it for a 256-bit bus on the 3800 series. But again, the reason they can is that they can advance memory speed and performance, whereas nVIDIA cannot.
You are completely missing my point. My point is that the technology in use does not matter; only the end result matters. If nVidia can get it done with a wider bus but slower RAM, so be it. I am also saying that putting slower GDDR3 on a wider bus will be roughly equivalent in bandwidth to faster GDDR5 memory on a narrower bus. Either way, it will be more bandwidth than the cards will be able to use, so it doesn't matter. All that is going to matter to most people is price and performance.
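Here's a back-of-the-envelope sketch of that equivalence (the bus widths are the real specs; the memory data rates are illustrative assumptions, since final clocks weren't public at the time):

```python
# Peak memory bandwidth = (bus width in bytes) x (effective data rate).
# Bus widths are real; the data rates below are assumed for illustration.

def bandwidth_gb_s(bus_bits, data_rate_mt_s):
    """Peak bandwidth in GB/s from bus width (bits) and data rate (MT/s)."""
    return bus_bits / 8 * data_rate_mt_s / 1000

print(bandwidth_gb_s(512, 2200))  # wide bus, GDDR3-class rate: ~140.8 GB/s
print(bandwidth_gb_s(256, 3600))  # narrow bus, GDDR5-class rate: ~115.2 GB/s
```

Either way you land in the same ballpark, which is exactly the point: a narrower bus with faster memory can roughly match a wider bus with slower memory, and both likely exceed what the GPU can actually use.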
Posted on Reply
#70
eidairaman1
The Exiled Airman
Dude, what is the problem with you? :shadedshu
candle_86Really?

R300 with 8 pixel pipes to compete with the NV25's 4, and a 256-bit bus over a 128-bit one. Seems brute force to me. R4xx: 16pp SM2.0b at 520/560 MHz core vs the NV40's 16pp SM3 at 450 MHz. Brute force to me again.
R580: 16pp with 48 shaders vs the G71's 24pp with 24 shaders. Brute force once again from ATI. Heck, the R600: 320 shaders and a 512-bit bus vs the G80's 128 shaders and 384-bit. Brute force again.

The last non-brute-force attempt on ATI/AMD's part was the R520.
Posted on Reply
#71
[I.R.A]_FBi
lawl .. calm down guys .. get a beer or some juice and gin. candle is an NV d00d forever.
Posted on Reply
#72
eidairaman1
The Exiled Airman
He certainly seems to try to filibuster his point, or beat a dead horse. How much fun is it to beat a dead horse? :wtf:
Posted on Reply
#73
erocker
*
Yeah, I think his opinions are well ingrained into all of us by now. Let's move forward!:D
Posted on Reply
#74
eidairaman1
The Exiled Airman
erockerYeah, I think his opinions are well ingrained into all of us by now. Let's move forward!:D
:shadedshu:banghead::nutkick::twitch:


J/K

:slap: ;)
Posted on Reply