Monday, April 21st 2008

AMD/ATI RV770 - Radeon HD 4 Series Almost Ready

ATI is moving up the launch of its next-generation video processors, as reported by TG Daily. Despite its previous failures in the graphics business and a $358 million loss in the first quarter of this year, AMD's president and chief operating officer Dirk Meyer stated that the company will roll out a significant number of products in May, ahead of schedule, including the new RV770 and mobile (M88) graphics parts. RV770 will launch as the Radeon HD 4800 and will also make its way into FireStream stream processor and FireGL workstation cards. The chip will support both GDDR3 and GDDR5 memory in capacities from 256MB to 1024MB, but ATI itself will only be offering GDDR5 cards with a 256-bit memory controller. The Radeon HD 4850 is set to come to market with an 800+ MHz core (the final clock has not been specified yet and will not be set until final qualification is completed), while the 4870 will be the first mass-production GPU with a clock speed higher than 1GHz. The graphics processor itself will integrate more texture mapping units (TMUs) - 32 in RV770 against 16 in R6xx. Expect mass availability of RV770 cards soon after their announcement in May.
Source: TG Daily

72 Comments on AMD/ATI RV770 - Radeon HD 4 Series Almost Ready

#51
kylew
lemonadesoda: Core clock of over 1GHz means ZILCH without other statistics. It's like saying there's a new P4 Netburst Extra Extreme Edition P4EEE at 5GHz. It's a worthless power-consuming hog. We need architecture changes, fab shrinks and multiparallelism.

If they said a 500MHz clock but with a 512-bit memory bus, 1024 universal shader units, and 75W total consumption, then I'd be much more impressed.
We know it's meant to be based on the R600, so an RV670 GPU at 1GHz would be pretty impressive in itself, and with the extra shaders on top, it's looking to be a very nice lineup.
Posted on Reply
#52
kylew
suraswami: hopefully this series gives decent performance compared to NVidia.
Now you know that's not true; sure, they're not the fastest, but they're not that far behind NV, especially when it comes to price/performance ratio.
Posted on Reply
#53
flashstar
The 9800GTX really isn't that much faster than the 8800GTX. Nvidia must be quaking in its boots after seeing that its brand new GPU will be squashed so soon.

I personally doubt that AMD will make the same mistake twice. They're not going to release a product before it's entirely ready again, like the 2900XT.
Posted on Reply
#54
imperialreign
flashstar: The 9800GTX really isn't that much faster than the 8800GTX. Nvidia must be quaking in its boots after seeing that its brand new GPU will be squashed so soon.

I personally doubt that AMD will make the same mistake twice. They're not going to release a product before it's entirely ready again, like the 2900XT.
Even funnier, rumor has it that the HD5000 series is slated for early-to-mid '09 and will be ushering in the R700 dual-core GPU.


We all need to go grab some popcorn for the bitch-slapping fest that's about to kick back up between ATI and nVidia! :rockout:
Posted on Reply
#55
KainXS
next year will be an impressive year for GPUs; ATI will make Nvidia step up their game by the end of this year, and Nvidia will have to release a new GPU.

very nice
Posted on Reply
#56
springs113
KainXS: next year will be an impressive year for GPUs; ATI will make Nvidia step up their game by the end of this year, and Nvidia will have to release a new GPU.

very nice
nvidia will be releasing a new card that's supposedly built from the ground up... something after the release of these ATI cards...
so on the graphics front things will be really nice... another note... on the CPU side of things, AMD demoed a feature in their upcoming chipset that will allow higher clocks (in Windows) of their processors... and their 45nm parts have been demoed hitting 3.2GHz.
Posted on Reply
#57
brian.ca
lemonadesoda: 1./ What bias? LOL, I'm an ATI fanboi... and ONLY got ATI Radeons and FireGLs. LOL

2./ You can't really compare the unaudited Q1 vs Q1 figures for 2007 and 2008. In both situations there is a significant loss. In 2007 there were significant costs associated with the ATI acquisition. To say profit is improved and everything is OK is falling into the apples vs. oranges trap. There are so many significant adjustments to the window-dressed accounts. Only last week the AMD board had a showdown with investment analysts, and they were NOT convinced that the AMD board was delivering... (google for the transcript). AMD *must* do some +ve PR work at this time. If not, not only will the share price sink further... but there will be pressure to replace board members.

I'm NOT arguing about dates of which model gets released first, FireGL or consumer. That's not the issue. The point is the "release being brought forward"... "we are ready to release"... "revenues coming soon"... "promise"... is all *necessary* PR to keep the (investor) market happy.
/shrug.. you don't seem it, but one way or the other I said bias rather than rationale -- i.e., the rationale seemed missing (particularly in the originally quoted statement), so I chalked it up to bias; if you're not biased, feel free to attribute the faulty logic to something else.

Looking at an article attached to Google Finance's page for AMD real quick, there's this: "Sunnyvale, Calif.-based Advanced Micro Devices Inc. said that during the quarter it lost $358 million, or 59 cents per share, compared with a loss of $611 million, or $1.11 per share, in the same period a year earlier. The latest quarter's results include charges of 8 cents a share for the acquisition of graphics chip maker ATI Technologies." Looking at their website (2006 numbers are also noted as unaudited, for reference), the ATI acquisition charges for Q1 '07 were $113 million, this year $50 million... they narrowed their losses by $253m this year -- going by the numbers above, the difference in ATI acquisition charges only accounts for $63m of that $253m... so they've still narrowed their loss a fair bit even excluding the ATI acquisition charges.

But keep in mind, I'm not arguing that everything is peachy keen and right as rain... the knee-deep analogy should have pointed to that. But I don't buy that they're in so much worse a position than they've been in for the last year and a half or two that they'd have to put out a bunch of PR that doesn't hold much water to compensate. From everything I'd heard before this article, the 4000 series was slated for a June/July release and was well on course to that end. So, to me, that they might be able to push it out a month or so ahead of schedule doesn't sound implausible and probably shouldn't be written off as PR BS.

And I understand your main point, but don't forget the original statement I replied to was "FireGL August... retail 4870 some months later... ie. they are announcing now... to prop up the share price... but launch is still Q3 into retail channels." You seemed to be arguing this was mostly PR (necessary, in your opinion) but that release would still be later, and seemed to be basing that on the idea of the 4000s coming out after the FireGLs in August... that was the only reason it became any bit of an issue. Because that was not a reason to think the release would actually be later (or rather remain the originally slated release date), which would make all this empty PR.
Posted on Reply
#58
brian.ca
WarEagleAU: Still only a 256-bit ring bus though. I reckon the 512 on the HD 2900 XT didn't go like they thought, or was overkill? For a card of this caliber you would think they would up it. Also, one thing that bothers me: the quote about them failing miserably with their graphics section. I don't think that is true, given that the X1900 on up has sold extremely well and even the uber OCers are taking ATI cards and reaching them high scores :D
Don't quote me on this one, but from what I read and understand, using the larger bus was overkill and at the same time added a significant cost to producing the cards. The reason you saw a lot of those cards being used for OCing records was b/c that overkill equated to headroom when doing some of that extreme OCing mumbo jumbo.

But from what I understand, a larger bus isn't really needed when you have faster memory. The original article linked above seems to point to this, saying: "With a 256-bit memory controller, we're talking about 115 to 141 GB/s of bandwidth. This number equals the memory bandwidth record set by the 2900XT 1GB GDDR4 (512-bit interface with GDDR4 at 1.1 GHz DDR)."
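The quoted figures fall out of simple arithmetic: peak bandwidth is the bus width in bytes times the per-pin data rate. A minimal sketch, assuming GDDR5 per-pin rates of 3.6-4.4 Gbit/s (back-calculated from the article's 115-141 GB/s range, not stated in it):

```python
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bytes x per-pin data rate."""
    return (bus_width_bits // 8) * data_rate_gbps

# 256-bit controller with GDDR5 at assumed 3.6-4.4 Gbit/s per pin:
low = peak_bandwidth_gbs(256, 3.6)   # 115.2 GB/s
high = peak_bandwidth_gbs(256, 4.4)  # 140.8 GB/s

# 2900XT comparison: 512-bit bus, GDDR4 at 1.1 GHz DDR = 2.2 Gbit/s per pin
r600 = peak_bandwidth_gbs(512, 2.2)  # 140.8 GB/s -- the "record" it equals
```

Which is the point being made: a 256-bit bus with memory clocked twice as fast delivers the same bandwidth as R600's 512-bit bus.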
Little bummed about the 256-bit MEM BUS, but, IMO, GDDR5 partnered with a 1GHz GPU? I think she'll still be game. Although, 256-bit might be pushing it if a card gets packed with 1GB of GDDR5. I think that 1GB claim will prob be for the GDDR3 - on that note, why 3?! Why not use 4 instead? If the whole series was touting GDDR4 and GDDR5, they'd be ahead of the curve over nVidia on a technological level.
Going by the full article above it sounds like the GDDR3 variants are aimed more at OEMs and GDDR5s at the retail level. So I guess the 3s are to help OEMs keep costs down.
Posted on Reply
#59
imperialreign
brian.ca: Don't quote me on this one, but from what I read and understand, using the larger bus was overkill and at the same time added a significant cost to producing the cards. The reason you saw a lot of those cards being used for OCing records was b/c that overkill equated to headroom when doing some of that extreme OCing mumbo jumbo.

But from what I understand, a larger bus isn't really needed when you have faster memory. The original article linked above seems to point to this, saying: "With a 256-bit memory controller, we're talking about 115 to 141 GB/s of bandwidth. This number equals the memory bandwidth record set by the 2900XT 1GB GDDR4 (512-bit interface with GDDR4 at 1.1 GHz DDR)."



Going by the full article above it sounds like the GDDR3 variants are aimed more at OEMs and GDDR5s at the retail level. So I guess the 3s are to help OEMs keep costs down.
I hope so. Really, though, I'd love to see ATI touting only DDR4/5 with this series; it would give them a slight edge on nVidia as far as spec sheets go.

I think you've got it right about the MEM BUS as well, partly why I mentioned 256-bit isn't that big a deal if the GPU is clocked at 1GHz with DDR5. The bandwidth of the MEM itself will make up for it. But, as I also pointed out, if they're packing 1GB of high-bandwidth MEM, a 256-bit BUS could prove to be a limitation - we'll have to see; the upgrade to 32 TMUs might work out just nicely.


Either way, the next year and a half is stacking up to be quite competitive between red and green - which is what we all really want to see, rather than one camp leading the pack. We benefit more from close competition than from one leading and one trailing.
Posted on Reply
#60
sam0t
Wonder what Nvidia is going to field against these. The 9800 series seems to have gotten old before it even got started; then again, it got started about 1.5 years ago :D
Posted on Reply
#61
tkpenalty
gg nvidia? AMD immediately saw that the R600 series wasn't going to be stellar and decided only to improve it in ways that wouldn't really have implications for the R&D effort on the RV700, which they have been working on for ages.
lemonadesoda: Core clock of over 1GHz means ZILCH without other statistics. It's like saying there's a new P4 Netburst Extra Extreme Edition P4EEE at 5GHz. It's a worthless power-consuming hog. We need architecture changes, fab shrinks and multiparallelism.

If they said a 500MHz clock but with a 512-bit memory bus, 1024 universal shader units, and 75W total consumption, then I'd be much more impressed.
imperialreign: I hope so. Really, though, I'd love to see ATI touting only DDR4/5 with this series; it would give them a slight edge on nVidia as far as spec sheets go.

I think you've got it right about the MEM BUS as well, partly why I mentioned 256-bit isn't that big a deal if the GPU is clocked at 1GHz with DDR5. The bandwidth of the MEM itself will make up for it. But, as I also pointed out, if they're packing 1GB of high-bandwidth MEM, a 256-bit BUS could prove to be a limitation - we'll have to see; the upgrade to 32 TMUs might work out just nicely.


Either way, the next year and a half is stacking up to be quite competitive between red and green - which is what we all really want to see, rather than one camp leading the pack. We benefit more from close competition than from one leading and one trailing.
Um, FYI, a 512-bit/384-bit memory bus is really redundant at this stage, as the architecture and the GPUs can't use the 512-bit bus addressing to its fullest potential.
Most of you guys are thinking way too "zomg 256-bit memory bus suxxors". It's the raw calculating power of the GPU itself that's important, as well as its efficiency. The bit width of the memory bus isn't important if the GPU architecture is poor.

Okay, in this case not poor but weaker; say, for example, G92 vs RV670. GDDR4 evidently has way more memory bandwidth, yet the RV670 is slower than the G92! Now compare RV670 to R600. R600 has the 512-bit bus... any performance increase? Little to none. The GPU isn't fast enough / can't process enough to use the 512-bit width to its max potential, the same reason Nvidia took a step back as well.

Another thing: it costs more to make a card with a wider memory bus. Why, you may ask? Because it requires more memory chips. Each chip is 32 bits wide, therefore 32 bits x 8 chips = 256-bit, 32 bits x 12 chips = 384-bit, and finally 32 bits x 16 chips = 512-bit... may seem obvious to some, but that's why the G80/R600s were priced so damn high versus current 256-bit cards of equivalent spec. More memory chips, more components needed onboard, and finally a requirement for a longer PCB (usually) due to increased power consumption from the extra chips as well as the core (larger memory controller).
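That chip-count arithmetic can be written out directly; a small sketch using the 32-bit-per-chip figure from the post above:

```python
def chips_for_bus(bus_width_bits: int, bits_per_chip: int = 32) -> int:
    """Memory chips needed to populate a bus, one chip per 32-bit slice."""
    return bus_width_bits // bits_per_chip

for width in (256, 384, 512):
    print(f"{width}-bit bus -> {chips_for_bus(width)} chips")
# 256-bit -> 8 chips, 384-bit -> 12 chips, 512-bit -> 16 chips
```

So doubling the bus from 256-bit to 512-bit doubles the chip count (and the board routing), which is the cost argument being made.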
hat: ATi has been under so much pressure lately, I bet this thing will roxorz nvidia's soxorz
hey addsub, "ROPs are outdated, everyone uses shader units now"?
[sarcasm]Hey look!!!! It's awesome that ATI ripped out their ROPs... now I can't even game in 3D, AWESOME!!![/sarcasm]

ROPs are needed FYI.

I'm guessing the reasons for the numbers of components in a core are:
1. Core balancing: as with multi-GPU technologies, I've noticed the less-than-linear scaling in performance as you add more GPUs. This means GPU R&D HAVE to balance out the core; more doesn't equal better a lot of the time, and I think the same applies within a GPU. Within an architecture, you can probably only have a specific number of parts in the GPU before you start getting decreases in efficiency.
2. Another reason is that the numbers make it modular to manufacture.
3. Cost/performance feasibility.

It's easy for you guys to go "HEY LETS CHUCK IN 1024 SHADERS AND 32 TMUs AS WELL AS A 512-BIT BUS!!!111", but wouldn't they have done it if it were THAT bloody easy?



Anyway guys, please stop arguing "you're a fanboy/you're biased!" with each other...

OT: I'm wondering if Intel's Larrabee will even be decent; the fact that it's just a really powerful CPU that's not really designed to be dedicated to rendering somewhat worries me. However, since one of their Xeons does ray-tracing at like 60 fps or something, I might be wrong (then again, games NEVER use ray-tracing... nor do GPUs have the ability).
Posted on Reply
#62
Mussels
Freshwater Moderator
hehe the illogical fanboi thread!

Seriously, 'omg nvidia is quaking', 'nv will crush this' - stop. Really. The last real major breakthrough in the video market was the 8800GTX, and it hasn't changed much since then - slapping two cards onto one PCI-E slot is nice, but it's not new.

In the end, all it comes down to is who has the fastest card (to gather more fanbois/investors) and who has the most popular card - the price-to-performance ratio. The 9600GT and 3850 are filling that segment right now, but they don't match the 8800GTX or 3870X2 in performance, do they?

Who has the fastest doesn't really matter; it's who has the most cost-effective.
Posted on Reply
#63
Megasty
I love cheap performers as much as the next guy. If I had to buy a card I would go for the p/p champs rather than the uber cards that fall from grace in 2 months. I actually bought a 3850 for my sis's PC the other day so she could use her BR drive, and now she's sitting up there playing Assassin's Creed :confused:

However, I still love to play with the uber cards & will happily beat the crap out of a 4870x2 when it comes out :D
Posted on Reply
#64
Tatty_Two
Gone Fishing
ROPs, TMUs and SPs: ALL required :D
Posted on Reply
#65
[I.R.A]_FBi
Mussels: hehe the illogical fanboi thread!

Seriously, 'omg nvidia is quaking', 'nv will crush this' - stop. Really. The last real major breakthrough in the video market was the 8800GTX, and it hasn't changed much since then - slapping two cards onto one PCI-E slot is nice, but it's not new.

In the end, all it comes down to is who has the fastest card (to gather more fanbois/investors) and who has the most popular card - the price-to-performance ratio. The 9600GT and 3850 are filling that segment right now, but they don't match the 8800GTX or 3870X2 in performance, do they?

Who has the fastest doesn't really matter; it's who has the most cost-effective.
You're quite right; I don't think these cards will even stay on the shelves.
Posted on Reply
#66
btarunr
Editor & Senior Moderator
Scrizz: 4870 clocked higher than 1GHz! dang..
Yup, that's really needed if the shaders on the RV770 don't have clock generators of their own. Maybe part of the reason ATI's approach to shaders hasn't paid off well so far is that the shaders run at the core's clock.
Posted on Reply
#67
happita
To think I was going to upgrade the cooler and start overclocking. I might as well just get two 4870s and Xfire those bad boys up, hopefully when they come out in May!
Posted on Reply
#68
chibiwings
Looks promising; hope we get a good driver release... this time.
Posted on Reply
#69
Unregistered
I'll be getting a 4870X2... hopefully... and maybe a 4870 as well... :P depends on my funding
Posted on Reply
#70
Mussels
Freshwater Moderator
I'll wait for a higher-res screen before doing that... lol.

Just to add some sanity here: do you NEED that much performance for current/upcoming titles? I'm seeing a lot of games lately that work fine on max details on an 8800GT, so are you sure you want to go that far?
Posted on Reply
#71
Unregistered
Mussels: I'll wait for a higher-res screen before doing that... lol.

Just to add some sanity here: do you NEED that much performance for current/upcoming titles? I'm seeing a lot of games lately that work fine on max details on an 8800GT, so are you sure you want to go that far?
I am just being greedy really :) no reason :p
Posted on Reply
#72
jbunch07
azazel: I'll be getting a 4870X2... hopefully... and maybe a 4870 as well... :P depends on my funding
if ya do, u need to sell me your X2 ;)

I won't get a 4xxx card unless the performance is just amazing compared to the current 3xxx cards
Posted on Reply