Monday, April 21st 2008
AMD/ATI RV770 - Radeon HD 4 Series Almost Ready
AMD/ATI is moving up the launch of its next-generation video processors, as reported by TG Daily. Despite its previous failures in the graphics business and a $358 million loss in the first quarter of this year, AMD's president and chief operating officer Dirk Meyer stated that the company will roll out a significant number of products in May, ahead of schedule, including the new RV770 and mobile (M88) graphics parts. RV770 will launch as Radeon HD 4800 and will also make its way into FireStream stream processor and FireGL workstation cards. The chip will support both GDDR3 and GDDR5 memory in capacities from 256 MB to 1024 MB, but ATI itself will only be offering GDDR5 cards with a 256-bit memory controller. The Radeon HD 4850 is set to come to market with an 800+ MHz core (the final clock has not been specified yet and will not be set until final qualification is completed), while the 4870 will be the first mass-production GPU with a clock speed higher than 1 GHz. The graphics processor itself will integrate more texture mapping units (TMUs): 32 in RV770 against 16 in R6xx. Expect mass availability of RV770 cards soon after their announcement in May.
Source:
TG Daily
72 Comments on AMD/ATI RV770 - Radeon HD 4 Series Almost Ready
I personally doubt that AMD will make the same mistake twice. They're not going to release a product before it's fully ready again, like they did with the 2900 XT.
We all need to go grab some popcorn for the bitch-slapping fest that's about to kick back up between ATI and nVidia! :rockout:
very nice
So things on the graphics front will be really nice. On another note, on the CPU side AMD demoed a feature in their upcoming chipset that will allow higher clocks (in Windows) for their processors, and their 45 nm parts have been demoed hitting 3.2 GHz.
Looking at an article attached to Google Finance's page for AMD real quick, there's this: "Sunnyvale, Calif.-based Advanced Micro Devices Inc. said that during the quarter it lost $358 million, or 59 cents per share, compared with a loss of $611 million, or $1.11 per share, in the same period a year earlier. The latest quarter's results include charges of 8 cents a share for the acquisition of graphics chip maker ATI Technologies." Looking at their website (2006 numbers are also noted as unaudited, for reference), the ATI acquisition charges for Q1 '07 were $113 million; this year, $50 million. They narrowed their losses by $253 million this year, and going by the numbers above, the difference in ATI acquisition charges only accounts for $63 million of that $253 million. So they've still narrowed their loss a fair bit even excluding the ATI acquisition charges.
But keep in mind, I'm not arguing that everything is peachy keen and right as rain; the knee-deep analogy should have pointed to that. But I don't buy that they're in so much worse a position than they've been for the last year and a half or two that they'd have to put out a bunch of PR that may not hold much water to compensate. Everything I'd heard before this article had the 4000 series slated for a June/July release and well on course to that end. So, to me, the idea that they might be able to push it out a month or so ahead of schedule doesn't sound implausible and probably shouldn't be written off as PR BS.
And I understand your main point, but don't forget the original statement I replied to was: "FireGL August... retail 4870 some months later... ie. they are announcing now... to prop up the share price... but launch is still Q3 into retail channels." You seemed to be arguing this was mostly PR (necessary, in your opinion) but that release would still be later, and you seemed to be basing that on the idea of the 4000s coming out after the FireGLs, which were coming out in August. That was the only reason it became an issue at all, because that was not a reason to think the release would actually be later (or rather remain the originally slated release date), which would make all this empty PR.
But from what I understand, a larger bus isn't really needed when you have faster memory. The original article linked above seems to point to this, saying: "With a 256-bit memory controller, we're talking about 115 to 141 GB/s of bandwidth. This number equals the memory bandwidth record set by the 2900XT 1GB GDDR4 (512-bit interface with GDDR4 at 1.1 GHz DDR)." Going by the full article, it sounds like the GDDR3 variants are aimed more at OEMs and the GDDR5s at the retail level. So I guess the GDDR3s are there to help OEMs keep costs down.
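For what it's worth, the bandwidth figures quoted there are easy to sanity-check: peak bandwidth is just the bus width in bytes times the effective data rate per pin. Here's a rough sketch; the 3.6 and 4.4 GT/s GDDR5 per-pin rates are my assumptions, back-calculated from the 115 to 141 GB/s range the article quotes:

```python
def peak_bandwidth_gbps(bus_width_bits, data_rate_gtps):
    """Peak memory bandwidth in GB/s: bus width in bytes times transfers/s per pin."""
    return (bus_width_bits / 8) * data_rate_gtps

# 256-bit bus with GDDR5 at assumed 3.6 to 4.4 GT/s per pin
print(peak_bandwidth_gbps(256, 3.6))  # 115.2 GB/s
print(peak_bandwidth_gbps(256, 4.4))  # 140.8 GB/s

# 2900 XT: 512-bit bus, GDDR4 at 1.1 GHz DDR = 2.2 GT/s per pin
print(peak_bandwidth_gbps(512, 2.2))  # 140.8 GB/s
```

Which lines up with the article's point: a 256-bit bus with fast GDDR5 matches the 512-bit GDDR4 setup on the 2900 XT.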
I think you've got it right about the memory bus as well; that's partly why I mentioned 256-bit isn't that big a deal if the GPU is clocked at 1 GHz and paired with GDDR5. The bandwidth of the memory itself will make up for it. But, as I also pointed out, if they're packing 1 GB of high-bandwidth memory, a 256-bit bus could prove to be a limitation. We'll have to see; the upgrade to 32 TMUs might work out just nicely.
Either way, the next year and a half is stacking up to be quite competitive between red and green, which is what we all really want to see more than one camp leading the pack. We benefit more from close competition than from one side leading and the other trailing.
Most of you guys are thinking way too much along the lines of "zomg 256-bit memory bus suxxors". It's the raw calculating power of the GPU itself that's important, as well as its efficiency. The width of the memory bus isn't important if the GPU architecture is poor.
Okay, in this case not poor but weaker; say, for example, G92 vs. RV670. The RV670's GDDR4 evidently gives it way more memory bandwidth, yet the RV670 is slower than the G92! Now compare RV670 to R600. R600 has the 512-bit bus; any performance increase? Little to none. The GPU isn't fast enough / can't process enough data to use the 512-bit width to its full potential, which is the same reason NVIDIA took a step back as well.
Another thing: it costs more to make a card with a wider memory bus. Why, you may ask? Because it requires more memory chips. Each chip is 32 bits wide, so 32 bits x 8 chips = 256-bit, 32 bits x 12 chips = 384-bit, and finally 32 bits x 16 chips = 512-bit. May seem obvious to some, but that's why the G80/R600 cards were priced so damn high versus equivalent 256-bit cards: more memory chips, more components needed onboard, and finally a longer PCB (usually) due to increased power consumption from the extra chips as well as the core (larger memory controller). [sarcasm]Hey look!!!! It's awesome that ATI ripped out their ROPs... now I can't even game in 3D. AWESOME!!![/sarcasm]
ROPs are needed FYI.
I'm guessing the reasons a core has the number of components it does are:
1. Core balancing. As with multi-GPU technologies, where I've noticed diminishing returns as you add more GPUs, the R&D teams have to balance out the core; more doesn't equal better a lot of the time, and I think the same applies within a single GPU. Within an architecture, you can probably only have a specific number of parts in the GPU before efficiency starts to decrease.
2. Another reason is that the numbers make it modular to manufacture.
3. Cost/performance feasibility.
It's easy for you guys to go "HEY, LET'S CHUCK IN 1024 SHADERS AND 32 TMUs AS WELL AS A 512-BIT BUS!!!111", but wouldn't they have done it if it were THAT bloody easy?
Anyway guys, please stop arguing "you're a fanboy/you're biased!" with each other...
OT: I'm wondering if Intel's Larrabee will even be decent; the fact that it's basically a really powerful CPU that's not really designed to be dedicated to rendering somewhat worries me. However, since one of their Xeon setups does ray tracing at like 60 fps or something, I might be wrong (then again, games never use ray tracing, nor do current GPUs have the ability).
Seriously, 'omg nvidia is quaking', 'nv will crush this'... stop. Really. The last real major breakthrough in the video market was the 8800 GTX, and not much has changed since then; slapping two GPUs onto one PCI-E card is nice, but it's not new.
In the end, it all comes down to who has the fastest card (to gather more fanbois/investors) and who has the most popular card, i.e. the best price-to-performance ratio. The 9600 GT and 3850 are filling that segment right now, but they don't match the 8800 GTX or 3870 X2 in performance, do they?
Who has the fastest doesn't really matter; it's who has the most cost-effective.
However, I still love to play with the uber cards and will happily beat the crap out of a 4870 X2 when it comes out :D
Just to add some sanity here: do you NEED that much performance for current/upcoming titles? I'm seeing a lot of games lately that run fine at max details on an 8800 GT, so are you sure you want to go that far?
I won't get a 4xxx card unless the performance is just amazing compared to the current 3xxx cards.