
ATI Radeon HD 4800 Series Video Cards Specs Leaked

16 ROPs still, huh?

I think we will be back to AMD at the mid-range and NV at the top in no time. This chip isn't going to really blow past the current top stuff, even in CF. And NV certainly isn't just sending its engineers to posh parties and skipping out on R&D for a new GPU.
 
Nvidia has a 71% share in discrete graphics. That is a fact, although not one that has been linked to in this thread. NV would have more errors by sheer numbers alone. I got a bit annoyed earlier, but it's not a fact that can really be argued - yeah, Intel has more than Nvidia, but Nvidia has more than AMD - and I don't see people choosing Intel video because it crashes less.

I never argued the discrete market share; I argued that applying a 71% market share to Nvidia in that article was ridiculous, as others have shown - when Intel is in the equation, Nvidia doesn't have anywhere near that amount. And the article didn't even say it focused on graphics; it just reported the number of crashes due to drivers, and AMD/ATI, Intel, Nvidia, and VIA all make motherboard drivers as well as graphics drivers. This will also include any TV tuners or other add-on devices any of the above companies make that have separate drivers. You also have to remember that the specs of the systems were not shown, meaning any Joe Schmoe who wanted to upgrade his nForce motherboard with an AMD Athlon CPU to Vista is factored in there. You also have to factor in any user who downloaded an Nvidia Quadro or ATI FireGL driver for their GeForce or Radeon, or any users who tried to install the wrong motherboard drivers.

All in all, you have to take the article with a grain of salt. I mean, it came off of 158 pages of support tickets without any real system info. That's inconclusive in any book.
 
They have been doing it since right after the 9800... I love ATI, but come on, let's be realistic. You can't just say they're not going to; there's at least a feasible chance they will.

Who the hell are you kidding?
I'm still running CrossFire X1950 XTX.
I've never had driver problems.
I have never run across a game yet that I can't max out the graphics in and get 70+ FPS.
(Not counting Crysis... it is an unoptimized POS that should have been scrapped.)

I did work for Dell as a technician. Nvidia and Microsoft fought for over a year about who was supposed to foot the bill for rewriting the Nvidia Vista drivers. They absolutely sucked during that time.
Another point is that many prebuilt systems never report their crashes to MS; they are reported to their respective companies instead, which never get counted in these polls... and those systems are 80% Nvidia.
The Nvidia/Microsoft fiasco over the Vista drivers set Nvidia back quite a ways on Vista compatibility.

Another thing to consider is that ATITool is far more complex at tuning ATI cards and offers far more options for us ATI fanboys than any Nvidia overclocking tool, and that alone creates far more crashes than would normally happen were we content to just use the ATI control panel. And most ATI users do use ATITool for tweaking their graphics.
 
By any chance, did you bother to read post #180? You know: the post right before yours?
 
Of course not :rolleyes:

Before this thread completely goes to hell: those specs seem to have a few areas of focus that would cause some to think the cards are at least twice as fast as their 3800 counterparts. The ROP count kills that thought right off. The cards will still pwn, but if the architecture had allowed for 24 or 32 ROPs, they would have been out of this world. I do have to commend them for fixing the TMU mess, as that screwed us over more than anything else :rockout:
 
Both have. Repeatedly.

Aimed at no one in particular:
ANYTHING YOU SAY ABOUT NV OR ATI SUCKING CAN BE APPLIED EQUALLY TO THE OTHER ONE. PLEASE STOP THIS REPETITIVE FANBOI CRAP.

But at least ATI didn't put out an FX-class line of cards :P
 
nVidia pushes out 2+ beta drivers a month; usually they only have one official release a month. They're on par with ATI; the only difference is that ATI doesn't release beta drivers left and right like nVidia does - instead, they rely heavily on feedback crews and consumer feedback (us) for driver development. If there's an issue they're trying to resolve, we typically see either a hotfix or a beta release.

Now, if we start counting beta drivers as "official" driver releases - then yeah, I'll definitely admit that nVidia releases more drivers than ATI does.



And saying that ATI is lucky to see monthly driver releases anymore is absolutely ridiculous - and you know that, man - ATI has been following the same one-official-driver-per-month schedule since, what, 2004/2005? We all know roughly when the next driver is rolling out; there's no guessing or hoping involved. If there were any evidence that ATI would start cutting back to quarterly or bi-monthly driver releases, we would've seen or heard it already.

I understand there's a debate going on, but in the heat of a debate one's comments can start coming across as very fanboi-ish - not calling you a fanboi, newtekie1, but IMO that quote on the driver releases very much sounded that way.

Catalyst initial release: January 19, 2004 (Ver. 4.1, Pkg. ver. 7.97)

So yeah, Nvidia's support is so much better... yet most of their beta drivers don't support all the cards that use the same chips (G92), and they are NOT official drivers, they are BETA.

Personally, I have an 8800 GT, and really, I wish the 3870 had been out when I got my card, because that's what I would have, and I wouldn't have had to reinstall x64 Windows four times to get the system usable.
 
I didn't have problems with drivers until the 7.8s and higher. Once the hotfix came out I switched to those, and no problems.

Beyond that, please let's get back on topic; this thread is not about drivers but ATI's Radeon HD 4000 line.
 
On Wikipedia, it says that the GT200 has 1000 GFLOPS of processing power. The RV770 is estimated to have 1000 as well. It appears that the current situation between ATI and Nvidia will not change much, with the exception of ATI being out first this time in the contest between GT200 and RV770.

What will matter is how competitive ATI's pricing is. I'm betting that we will see a major price drop from ATI on the RV770 in Q3 2008 with the release of GT200 products.
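As a rough sanity check on that 1000 GFLOPS figure, here is a minimal back-of-the-envelope sketch in Python, assuming the speculative shader counts and clocks floated later in this thread (320 SPs at 775 MHz for the 3870, 480 SPs at 1050 MHz for the rumored RV770) and the usual peak-rate convention of 2 FLOPs (one MADD) per SP per clock; none of these inputs are confirmed:

# Peak single-precision throughput = SPs * FLOPs per SP per clock * clock (GHz)
def peak_gflops(sps, clock_mhz, flops_per_sp=2):
    return sps * flops_per_sp * clock_mhz / 1000.0

print(peak_gflops(320, 775))   # HD 3870 (RV670): ~496 GFLOPS
print(peak_gflops(480, 1050))  # rumored RV770:   ~1008 GFLOPS, roughly the quoted 1000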
 
Nope, ATI released the 'VE' series (Radeon 7000).

The VE and SE cards are kinda like the MX line: they are just sucky cards for people who want cheap.

Look how many stupid fools bought GF4 MX cards thinking "it's a GeForce 4, it's gotta be good."

The VE was bad for games, but it played DVDs very well, and OLD games were OK. Hell, the MPEG decoding on those really helped slower systems play DVDs for old-school HTPC/MPC builds :)


They put out the HD 2000 series instead :P

At least the HD 2000 cards are capable of doing what they advertise, even if they lose performance to AA and such.

The FX line CAN'T play DX9 games worth a damn. I know it's the one thing Nvidia did that truly ticked me off: they sold me a top-of-the-line "DX9" card that turned out to be utterly unable to play DX9 games... unless you like a 4 FPS slideshow...

Meh, I hope the 4800s turn out to be kickass :)
 
In the same way, I should be able to play any DX10 game at appropriate settings on any HD 2000 series card. I can't play Crysis on even the lowest settings on an HD 2400 Pro... 4 FPS slideshow... which is what I don't like.
 
The 2400 is NOT a gaming card though, just like the 8400 isn't a gaming card; they are made for business and work/video-playback systems. Wanting to play any game other than some 10-year-old stuff on a low-end "value" card is like wanting to use a Geo Metro to tow a 24-foot boat ;)
 
I will use your logic and equate the HD 2400 to the FX 5200 (which, in its line, couldn't play DX9 games).

No more FX / HD 2000 discussion. Barring the HD 2900 series, the HD 2000 line has been as much a hollow promise to consumers as GeForce FX was.
 
The 5200 could run NFSU and U2 fine at normal settings; it just couldn't pump up the graphics. Every card has its niche.

But TBH I think this topic needs to be locked, as it has gotten way off track.
 
ATI really doesn't need more ROPs because they do AA in the shaders (unlike nVidia). The 3870 already proves that, since it's more competitive at higher resolutions.
http://www.computerbase.de/artikel/..._x2/20/#abschnitt_performancerating_qualitaet
Check out 2560x1600... ATI is short on TMUs, not ROPs.
 
The choking point in the 3800 series was definitely the TMUs. The relatively poor performance of the 3870 came from having only 16 TMUs. The card would fly when you first start gaming and slow to a crawl when you enter an environment where there's a lot going on. The 3870 X2 nearly solved that problem, showing what a card with 32 total TMUs and ROPs can do, although they work across two GPUs. With the doubling of the TMUs, ATI has tackled the problem head-on. They still have half the TMUs of current Nvidia cards, which they've tried to offset with a ridiculous number of basic shaders, but that's an architecture issue, isn't it :toast:
 
And I think the architecture is only going to get better with the generations in that respect. Since AA is done in the shaders, it takes up X amount of shading power. Let's do a bold speculation based on the performance hit when AA is enabled and say it takes up 80 SPs on the HD 3870 at a given resolution (25% of its power; it's just an example). At the same resolution the HD4 series is going to need the same power, 80 SPs, but the difference is:

480 - 80 = 400
320 - 80 = 240

We have 50% more shaders, but that translates to 66% more free shaders, and this gap will grow as we add shaders. IMO dedicated hardware (more ROPs) is still better, but ATI is going to improve, that's for sure. 66% more shaders clocked 35% higher (1050 MHz / 775 MHz) translates to ~125% more performance:

P x 1.66 x 1.35 ≈ 2.25 P

Interestingly, we have more or less the same improvement in the texture mapping area: double the units, clocked a bit higher:

Texture fillrate -> TFR x 2 x 850/775 ≈ 2.19 TFR

I guess they are aiming at ~2.2x the performance of the HD3 series.

And I really hope we are correct and ATI's new generation comes with a 100% improvement or greater, since according to the leaked specs Nvidia's chip IS going to be twice as fast as G92, since it's double everything.

We need ATI back on the high-end market.
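To make that arithmetic easy to replay, here is a minimal Python sketch of the same back-of-the-envelope model; the inputs (320 vs 480 SPs, an assumed flat 80-SP AA cost, 775 vs 1050 MHz shader clocks, 16 vs 32 TMUs at 775 vs 850 MHz) all come from the leaked specs and the guess above, so treat the outputs as speculation, not measurements:

# Speculative scaling model: fixed AA cost in SPs, the rest scales with count and clock.
def shader_scaling(sps_old, sps_new, aa_cost_sps, clk_old_mhz, clk_new_mhz):
    free_ratio = (sps_new - aa_cost_sps) / (sps_old - aa_cost_sps)
    return free_ratio * (clk_new_mhz / clk_old_mhz)

def texture_scaling(tmus_old, tmus_new, clk_old_mhz, clk_new_mhz):
    # Peak texture fillrate scales with TMU count times core clock.
    return (tmus_new / tmus_old) * (clk_new_mhz / clk_old_mhz)

print(shader_scaling(320, 480, 80, 775, 1050))  # ~2.26x shader throughput with AA on
print(texture_scaling(16, 32, 775, 850))        # ~2.19x texture fillrate

The ~2.26x shader figure lines up with the ~2.25x estimate above; the small difference is just rounding of the clock ratio.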
 
... I guess they are aiming at ~2.2x the performance of the HD3 series ...

Then double that up for the 4870 X2 :eek: :nutkick:
 
... ATITool is far more complex at tuning ATI cards and offers far more options ... and that alone creates far more crashes ...

Have you ever even used RivaTuner? It is far more complicated than ATITool.
 
... lots of calcs leading to conclusion of 2.2x performance.
Given the same architecture, higher clocks, and more shaders, I think these are the performance implications:

1./ Broadly similar performance at standard resolutions (e.g. 1280x1024) with no AA/FSAA effects, since there are no architectural changes.
2./ A general improvement in line with the clock-for-clock increases: 10-20%.
3./ The increase to 32 TMUs will mean the cards won't CHOKE at higher resolutions; they will be able to handle 1920x1200 without hitting a wall (see the quick fillrate sketch below).
4./ Currently you can dial up 4x AA without much of a performance hit; with the extra shaders you will be able to do the same at 1920x1200.
5./ With the extra shaders, you will be able to dial up 8x or 16x at 1280x1024 without a significant hit.
6./ The GPU will run hotter and require more power.
7./ That is compensated by GDDR5 memory, which requires less power and runs a bit cooler.

Net net... get the GDDR5 model.

Will there be a "jump" in performance like we saw between the X19xx series and the HD 38xx? No.
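For point 3, here is a quick sketch of the peak fillrate numbers behind that claim, again assuming the leaked figures (16 TMUs at 775 MHz on the 3870 vs 32 TMUs at 850 MHz on the 4870, with ROPs staying at 16) and the standard peak-rate formulas; leaked specs, so these are estimates only:

# Peak texel fill = TMUs * core clock; peak pixel fill = ROPs * core clock.
def gtexel_rate(tmus, clock_mhz):
    return tmus * clock_mhz / 1000.0   # GTexels/s

def gpixel_rate(rops, clock_mhz):
    return rops * clock_mhz / 1000.0   # GPixels/s

print(gtexel_rate(16, 775))  # HD 3870:         ~12.4 GTexels/s
print(gtexel_rate(32, 850))  # rumored HD 4870: ~27.2 GTexels/s (~2.2x)
print(gpixel_rate(16, 850))  # ROPs unchanged:  ~13.6 GPixels/s, barely up from 12.4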
 
Will there be a "jump" in performance like we saw between the X19xx series and the HD 38xx? No.

That is where I have to disagree with you. If performance doesn't "jump", ATI will fail. Then AMD will be very vulnerable to a buyout from some other company, and who knows what will happen after that. ATI knows that it has to be at least on par with the GT200.
 
Beyond shaders, ROPs, and TMUs, there is also the basic transistor density to consider.
... I guess they are aiming at ~2.2x the performance of the HD3 series ...
 