# ATI Radeon HD 3800 Series Specs, Photos and Logos



## malware (Oct 20, 2007)

The title says it all. The ATI Radeon HD 3870, the one in the second picture, will feature 825 MHz core and 2400 MHz memory clocks, DirectX 10.1 support, and PCI-e 2.0 technology. The ATI Radeon HD 3850 in the third picture will be clocked at 700 MHz/1800 MHz core/memory, with DirectX 10.1 support and PCI-e 2.0. Both cards will feature a graphics processing unit (GPU) manufactured on a 55 nm process.








----------



## freaksavior (Oct 20, 2007)

Two things I want to know:

the price and bandwidth of the cards.


----------



## AphexDreamer (Oct 20, 2007)

Hmmm, should I be mad that I just recently bought a HD2900Pro?


----------



## GLD (Oct 20, 2007)

AphexDreamer said:


> Hmmm, should I be mad that I just recently bought a HD2900Pro?



I would say be happy. You bought an excellent piece of hardware at a good price, and with a 512-bit bus. I am jealous of the 2900Pro owners.


----------



## AphexDreamer (Oct 20, 2007)

GLD said:


> I would say be happy. You bought an excellent piece of hardware at a good price, and with a 512-bit bus. I am jealous of the 2900Pro owners.



Thanks man, that makes me feel a whole lot better.


----------



## happita (Oct 20, 2007)

The 3870 had better have decent stock cooling. I hope a case with good enough airflow will be able to cool that beast down, although I shouldn't be that worried since it's based on 55nm. OCing should be real fun on those things.

All we need now is A RELEASE DATE!!!!!!!!!


----------



## oli_ramsay (Oct 20, 2007)

Doesn't the 2900 series also have 320 stream processors?


----------



## mandelore (Oct 20, 2007)

oli_ramsay said:


> Doesn't the 2900 series also have 320 stream processors?



Yes it does, and it also has physics processing capabilities.

This is "basically" just a die shrink of the R600 with a few extras,

so I'd expect a very nice overclock on these.


----------



## mrw1986 (Oct 20, 2007)

So should I buy a 2900Pro 1gb or wait for this puppy? Keep in mind I wanna stay in the ~$300 price range


----------



## effmaster (Oct 20, 2007)

mrw1986 said:


> So should I buy a 2900Pro 1gb or wait for this puppy? Keep in mind I wanna stay in the ~$300 price range



When these come out, the prices on the other graphics cards should drop, unless of course they decide to discontinue some of them, which is always likely, especially for the higher-end 2900s.


----------



## Weer (Oct 20, 2007)

It's the same thing as the 2900XT! It's simply 55nm..


----------



## freaksavior (Oct 20, 2007)

Weer said:


> It's the same thing as the 2900XT! It's simply 55nm..



please elaborate


----------



## effmaster (Oct 20, 2007)

Weer said:


> It's the same thing as the 2900XT! It's simply 55nm..



Dont forget that it also has DX 10.1 as well as PCIe 2.0 on it


----------



## a111087 (Oct 20, 2007)

Game physics processing capabilities? Wow, is it like PhysX?


----------



## mandelore (Oct 20, 2007)

a111087 said:


> Game physics processing capabilities? Wow, is it like PhysX?



the card can be used as a dedicated physics unit, just like the R600 in multi crossfire setups


----------



## mk_ln (Oct 20, 2007)

hm...only *256-bit memory interface*....odd


----------



## erocker (Oct 20, 2007)

mk_ln said:


> hm...only *256-bit memory interface*....odd



It will be interesting to see if it actually holds the card back.


----------



## Tatty_One (Oct 20, 2007)

erocker said:


> It will be interesting to see if it actually holds the card back.



Damn unlikely to hold it back I would say TBH.


----------



## WarEagleAU (Oct 20, 2007)

Weird, but maybe this is like the x1800 x1900 debacle. 

That second pic looks like the old x800XTPEs and X850XTPEs with that huge honking cooler on it.


----------



## GLD (Oct 20, 2007)

I figure I will buy one of these with the dual slot cooler. My 2600XT that cost me $128 is great for the amount of money, imo. I think one of these cards will be my last upgrade for this 939 system. I had thought about the 2900Pro's alot, but thought my San Diego might not be 100% up to it. One of these should be the bomb for my rig. I can hardly wait.


----------



## Deleted member 24505 (Oct 20, 2007)

Where do these come in the pecking order? High/mid?? thx


----------



## freaksavior (Oct 20, 2007)

any idea how much they will cost?


----------



## Tatty_One (Oct 20, 2007)

It's a bit strange to me that they are on a lower fabrication process but one still requires a double-slot cooler.


----------



## Kursah (Oct 20, 2007)

Well, if the 3870 series is the XT series, wouldn't that mean it's got 2x GPUs on the board, whereas the 3850 would only have one?


----------



## v-zero (Oct 20, 2007)

mk_ln said:


> hm...only *256-bit memory interface*....odd



This is very clever economics. ATi/AMD had massive problems and costs in making the super-complex 12-layer PCB needed for that 512-bit bus, and having seen that the R600 is not limited by its bandwidth, they have done the most sensible thing. It should lower prices (though maybe the consumer won't see that) and increase energy efficiency, and may aid higher clock rates.

A nice move in my opinion, but the benchmarks will tell. I'm pretty sure the G80 will finally meet its match; it's been a year coming.
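For those asking about raw bandwidth: peak memory bandwidth is just bus width times effective data rate. A minimal sketch (Python), using the clocks quoted in this thread — the HD 3870 figure is still a rumour, so treat these numbers as assumptions, not confirmed specs:

```python
# Peak memory bandwidth = (bus width in bytes) x (effective transfer rate).
# Clock figures are the ones quoted in this thread, not confirmed specs.

def mem_bandwidth_gbs(bus_bits: int, effective_mhz: float) -> float:
    """Theoretical peak bandwidth in GB/s."""
    return (bus_bits / 8) * effective_mhz * 1e6 / 1e9

hd2900xt = mem_bandwidth_gbs(512, 1656)  # 512-bit, 828 MHz GDDR3 (1656 effective) -> ~106.0 GB/s
hd3870   = mem_bandwidth_gbs(256, 2400)  # 256-bit, rumoured 2400 MHz effective GDDR4 -> 76.8 GB/s

print(f"HD 2900 XT: {hd2900xt:.1f} GB/s, HD 3870 (rumoured): {hd3870:.1f} GB/s")
```

So even at 2400 MHz, halving the bus still means less raw bandwidth than the 2900 XT; the bet is that the R600 never used all of it anyway.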


----------



## Weer (Oct 20, 2007)

freaksavior said:


> please elaborate



For you, anything.

The R600 GPU and the RV670 GPU are identical. The only difference is the manufacturing process, going from 80nm to 55nm (allegedly).

This means that the HD 2900 series and the HD 3800 series will differ solely in clock speed, which is laughable.
And calling it the HD 3870.. makes me think they're trying to screw themselves into the ground intentionally..

And that pisses me off, because I thought that I would FINALLY get to see the Red side get the upper hand, as it had a year ago. But.. I guess there is no red side anymore is there? It's the AMD side now..


----------



## EastCoasthandle (Oct 20, 2007)

Weer said:


> It's the same thing as the 2900XT! It's simply 55nm..


That's incorrect; it will support:
-DX 10.1/SM 4.1
-UVD (unified video decoding)
-PCIe 2.0 compliant 

With more information to come as it becomes available to the public.


----------



## nflesher87 (Oct 20, 2007)

EastCoasthandle said:


> That's incorrect; it will support:
> -DX 10.1/SM 4.1
> -UVD (unified video decoding)
> 
> With more information to come as it becomes available to the public.



I agree, it's a bit soon to go preaching that it'll have no advantages over the 2900...


----------



## tkpenalty (Oct 20, 2007)

Very nice...


----------



## AsRock (Oct 20, 2007)

freaksavior said:


> please elaborate




A die shrink basically means it will run cooler than the 80nm part.



EastCoasthandle said:


> That's incorrect; it will support:
> -DX 10.1/SM 4.1
> -UVD (unified video decoding)
> -PCIe 2.0 compliant
> ...



WOW all that loool...




mrw1986 said:


> So should I buy a 2900Pro 1gb or wait for this puppy? Keep in mind I wanna stay in the ~$300 price range



And as for buying one, if I were you I'd just wait till you see people swap out for the newer one; you'll probably get a few who have had them both, too.....


----------



## Ser-J (Oct 20, 2007)

Lets just hope this card runs well with all games!


----------



## cdawall (Oct 21, 2007)

Weer said:


> For you, anything.
> 
> The R600 GPU and the RV670 GPU are identical. The only difference is the manufacturing process, going from 80nm to 55nm (allegedly).
> 
> ...



Are you kidding? A die shrink is laughable? Look at the G70 vs G71: the only difference is a die shrink, and yet the max OC achieved on a G70 7800GS _AFTER_ voltmods is 570, while my card on stock volts and the stock cooler hits 569... oh yeah, this will make a laughable difference. People like you who just make blanket statements are just STUPID! Here is some proof for you... LINK. My card's got a GPU-Z validation in my sig if you want proof of my card.


----------



## PVTCaboose1337 (Oct 21, 2007)

Wow those coolers are meh...  reminds of as said above, the X850s.


----------



## magibeg (Oct 21, 2007)

Man, I'm starting to hate the new naming thing they have going on already. Now they can go up to 3xxx and 4xxx without actually making any significant differences. This should be super confusing to customers.


----------



## corwin155 (Oct 21, 2007)

*ackkkk*

Heh, I just bought an HD2600XT GDDR4; figures it's only a DX10.0 card, even though I'd like to upgrade again to the new ATI line of DX10.1 cards because of the new PhysX being built into ATI cards (CrossFire, of course).

My rigs for Htpc / light gaming
Msi k9A Platinum AMD 580X CrossFire (ATI RD580) Chipset
amd 64 EE 5200+ mild oc 2.8 ghz AC 64 pro 92mm cpu fan
3 gigs adata ddr2 800 
2x 250 wd sata2 hds
1x ATI HD2600xt gddr4 oc 845 core 1170 mem dx10.0
1x sony cd burner 1x samsung dl-dvd 18x
logisys 575watt psu
WinV home premium 32bit


----------



## DEFEATEST (Oct 21, 2007)

Can someone tell me why this card, and others, say they have HDMI? I don't see it. Just DVI and S-Video?


----------



## corwin155 (Oct 21, 2007)

ATI made an HDMI adapter that plugs into one of the DVI ports:
http://www.newegg.com/Product/ShowI...=SAPPHIRE+Radeon+HD+2600XT+100210L+Video+Card
(photo link)


----------



## Redshift (Oct 21, 2007)

It will definitely be interesting to see how nVidia responds to this. Rumors have already hit the net (possibly even here on TechPowerUp) about their 9xxx series chipset... that supposedly produces 3 teraflops on one die.


----------



## effmaster (Oct 21, 2007)

Redshift said:


> It will definitely be interesting to see how nVidia responds to this. Rumors have already hit the net (possibly even here on TechPowerUp) about their 9xxx series chipset... that supposedly produces 3 teraflops on one die.



Not to mention free 4x anti aliasing.  That alone is sweet and all the AA that I would ever truly need


----------



## moto666 (Oct 21, 2007)

*3xxx*

Calling a card 3xxx without a major difference from the 2xxx is just dope!

PCIe 2.0
Die shrink
DX 10.1

These things are nice, but for me, and I think many of you, "3xxx" should mean "R700".
My opinion!


----------



## ChaoticBlankness (Oct 21, 2007)

v-zero said:


> This is very clever economics. ATi/AMD had massive problems and cost in making the super-complex 12-layer PCB to employ that 512-bit bus, and having seen that the R600 is not limited by it's bandwidth they have done the most sensible thing. It should lower prices (though maybe the consumer wont see that) and increase energy-efficiency, and may aid in higher clock rates.
> 
> A nice move in my opinion, but the benchmarks will tell. I'm pretty sure G80 will finally meet it's match, it's been a year coming.



The R680 is supposed to mark the return of the 512-bit bus, and a higher clock speed.


----------



## Tatty_One (Oct 21, 2007)

EastCoasthandle said:


> That's incorrect; it will support:
> -DX 10.1/SM 4.1
> -UVD (unified video decoding)
> -PCIe 2.0 compliant
> ...



Yep, but there's nothing revolutionary there, and I cannot understand why that would bother people too much. After all, it's not that long since the R600 was introduced, and although it was considerably late, it has shown it can compete with the best NVidia has to offer at a lower price (some hiccups notwithstanding, such as AA in certain circumstances). And anyway, it's no different with the G92 from NVidia: a few more stream processors, PCI-E 2.0, SM4.1 etc., along with a lower fabrication process. I suppose NVidia would argue that their upcoming 9800, or whatever it is called, will be revolutionary... but we will have to wait and see on that one!


----------



## quickie (Oct 21, 2007)

Why did they call it 3870 instead of 3800? Was the 70 meant to represent something important? They might as well have called it the 3870-100, like some sort of Boeing.

But whatever it's going to be called, it's good news nonetheless. The HD3850 interests me, since it will actually fit in my case. If this comes out in time for Christmas, I'd be happy.


----------



## v-zero (Oct 21, 2007)

ChaoticBlankness said:


> The R680 is supposed to mark the return of the 512-bit bus, and a higher clock speed.



Yep, once it is relevant to the market .


----------



## EastCoasthandle (Oct 21, 2007)

AsRock said:


> WOW all that loool...


Not relevant; Weer stated "It's the same thing as the 2900XT...", and I have shown him, as well as you, that that statement isn't correct.


----------



## [I.R.A]_FBi (Oct 21, 2007)

so how long are these thingies?


----------



## 15th Warlock (Oct 21, 2007)

AMD has finally lost it. Using a higher number for a brand new next-gen flagship video card has been the practice in the video card business forever, for example Radeon 7200-8500-9700-X800-X1800-X2900 (X meaning "10"), GeForce 2-3-4-5800-6800-7800-8800, Voodoo 1-2-3-4-5, and so on and so forth. But now AMD is misleading the consumer, making them think this GPU is technologically a generation above the previous model when it's not....

Frankly, I don't know where they want to go with this. What's next, AMD? PR numbers like you use for your processors? Radeon 4000+ anyone?? :shadedshu


----------



## Tatty_One (Oct 21, 2007)

Personally, I understand why some are getting a bit confused/bemused by the numbering of the card, but I do not see it as a completely new model. I see it as an evolution of the R600, just like the G92 really is of the G80 (there are no radical improvements), but marketed in a different way. NVidia are going to bring out a replacement GTS, but they will call it an 8800GTS, so maybe their strategy in this case is a bit more up-front?


----------



## Urbklr (Oct 21, 2007)

I think this is GAY.....what is wrong with AMD....why would they move to a new series....when they pretty much just gained some steam on the HD 2k series...


----------



## AsRock (Oct 21, 2007)

I think people are panicking too much lol. To me it seems they're doing the whole range again with nm shrinks and a few bits added; maybe the next range will show more of what they're up to.

Need to remember that the R600 has cost them so much, and they probably need to use it more to make enough money to actually do something new..

I just hope they make it.


----------



## Argoon (Oct 21, 2007)

mandelore said:


> the card can be used as a dedicated physics unit, just like the R600 in multi crossfire setups



Hi to all. mandelore, you forgot to mention that the game has to support HavokFX from Havok, and ONLY HavokFX, not the Havok 5 engine, for this to work, and so far I have no knowledge of a game that does.

And any PS 3.0 card can accelerate HavokFX.


----------



## ChaoticBlankness (Oct 21, 2007)

Urbklr911 said:


> I think this is GAY.....what is wrong with AMD....why would they move to a new series....when they pretty much just gained some steam on the HD 2k series...



AMD is hoping to beat the GF 8000 series with these higher-clocked cards, that's all.  Is it right?  No.  Are they doing it anyway?  Yes.

The trouble is, we can all be mad about how misleading it is, but at the end of the day, will that stop us from buying if the price/performance is good?  No.


----------



## Tatty_One (Oct 22, 2007)

There are going to be an awful lot of mid/mid-high cards on the market to choose from in a couple of months; it will be interesting to see how they all fit into a pricing strategy:

8800GTS 320 G80
8800GTS 640 G80
8800GT G92 256
8800GT G92 512
8800GTS 640 with extra pipes (112) G80

etc etc

2900pro
2950pro
2900xt
2950xt???
2952.5 proX 

I am lost already


----------



## JC316 (Oct 22, 2007)

AphexDreamer said:


> Hmmm, should I be mad that I just recently bought a HD2900Pro?



I'm not. The 2900 pro is awesome. The 2850 does look nice though, but I will still stick with my 2900.


----------



## Urbklr (Oct 22, 2007)

JC316 said:


> I'm not. The 2900 pro is awesome. The 2850 does look nice though, but I will still stick with my 2900.



Correction......3850
This numbering is STILL GAY


----------



## [I.R.A]_FBi (Oct 22, 2007)

how long is it?


----------



## GrapeApe (Oct 22, 2007)

15th Warlock said:


> AMD has finally lost it, using a higher number for a brand new next gen flagship video card has been a practice in the video card business since ever, for example Radeon 7200-8500-9700-X800-X1800-X2900 (X meaning "10"), GeForce 2-3-4-5800-6800-7800-8800, Voodoo 1-2-3-4-5, and so on and so forth, but now AMD is misleading the consumer making them think this GPU model is technologically a generation above the previous GPU model when it's not....



What was the generational difference between the R9800 and X800 other than some extremely slight tweaks? I guess at least it was a differently codenamed part. And what was the big generational difference between the GF6800 and GF7800 (aka the NV47/48)? Oh yeah, forgot, nV changed that last one to the G70 as if it was a big change.



> Frankly, I don't know where they want to get with this, what's next AMD? PR numbers like you use for your processors? Radeon 4000+ anyone?? :shadedshu



Well that seems to be it, just like the craptacular GMA X3000.
It's not much different than any other naming scheme. As long as there's some rhyme or reason to it, it'll work. That it's an HD2900 or 3870 doesn't really matter as much as whether or not it's worth the money. I don't care if they call it the AMD Corvette as long as it outperforms the AMD Chevette for the price.


----------



## Terantek (Oct 22, 2007)

The Inquirer has an article claiming a 2400 MHz memory clock - that's pretty insane! Also, I wonder if there is any performance to be gained by increasing the stream processor clock... I know the 2900 XT had about double the number of stream processors of the 8800 but around half the clock speed on said processors. Maybe they did something like this to justify a 3xxx model number.. I guess we'll see how the benchies turn out.
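The "double the processors at half the clock" point can be put in rough numbers. A back-of-envelope sketch (Python), counting one MADD (2 FLOPs) per ALU per clock — the SP counts and clocks below are the commonly cited figures, not anything official:

```python
# Rough programmable-shader throughput: one MADD (2 FLOPs) per ALU per clock.
# ALU counts and clocks are the commonly cited figures; treat them as assumptions.

def madd_gflops(alus: int, shader_clock_mhz: float) -> float:
    """Peak MADD throughput in GFLOPS."""
    return alus * 2 * shader_clock_mhz / 1000.0

r600 = madd_gflops(320, 742)   # HD 2900 XT: shaders run at the 742 MHz core clock
g80  = madd_gflops(128, 1350)  # 8800 GTX: separate 1350 MHz shader domain

print(f"HD 2900 XT ~{r600:.0f} GFLOPS vs 8800 GTX ~{g80:.0f} GFLOPS (MADD only)")
```

On paper the R600 already leads in raw MADD throughput despite the lower clock, so a higher shader clock only pays off if the rest of the pipeline can keep up.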


----------



## GrapeApe (Oct 22, 2007)

cdawall said:


> are you kidding? a die shrink is laughable...look at the G70 vs G71 the only difference is a die shrink



You do realize that's not the only difference, eh!?! :shadedshu

They also got rid of 25 million transistors yet were able to keep the same number of shaders/TUs/ROPs/etc.

Still not sure what they got rid of (dropped FX12 support?)

So not only do they get a process-reduction benefit, they also cut transistors, which helps with heat and power, which often helps speed limits. The RV670 likely benefits from a similar change, but it depends on what else they added or changed (TMUs/ROPs) in addition to what we already know (UVD/SM4.1), while taking some other things away.

Process reduction alone isn't beneficial, though, if it isn't an efficient reduction, because as you decrease trace size and increase density, you increase the potential for noise, which you overcome with more voltage, which usually leads to more heat/power.
But as 80nm and 65/55nm are completely different processes, it's not just an optical shrink; it's a complete move which gives them the chance to change the layout, hopefully to something with the potential to reach a little closer to those 1GHz numbers in all those early R600 rumours way back when.

Now, if they want to keep power consumption low then it would be best to have lower clocks (likely the single-slot solution), but the fact that they are planning a dual-slot model shows that they are going to push what they can, hard, which would increase heat/power while getting higher speeds/performance. This may be so that they can get the fab savings over the HD2900 and maybe replace the 512MB model, and at least the PRO, with a cheaper-to-make high-end part. They have a lot of potential if they have fewer issues than they reportedly had with the TSMC 80nmHS fab.
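The fab-savings point above can be sized with first-order scaling: die area tracks roughly the square of the feature-size ratio. A minimal sketch (Python) — an 80 nm to 55 nm move is a full re-layout rather than a pure optical shrink, so this is only an upper-bound estimate, not a prediction:

```python
# First-order die-area scaling: area goes roughly as (new feature size / old)^2.
# Real shrinks are re-layouts, so this is an upper bound on the silicon saving.

def area_scale(old_nm: float, new_nm: float) -> float:
    """Fraction of the original die area after an idealized shrink."""
    return (new_nm / old_nm) ** 2

print(f"80 nm -> 55 nm: ~{area_scale(80, 55):.0%} of the original area")
```

Roughly half the silicon per chip means more dies per wafer, which is where the cheaper-to-make high-end part would come from.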


----------



## Tatty_One (Oct 22, 2007)

Terantek said:


> Inquirer has an article claiming 2400 Mhz memory clock - thats pretty insane! Also I wonder if there is any performance to be gained by increasing stream processor clock... i know 2900 xt had about double the number of stream processors over 8800 but around half the clock speed on said processors. Maybe they did something like this to justify a 3xxx model number.. i guess we'll see how the benchies turn out.



A very good point. IMO one of the few weaknesses of the 2900XT is the fixed shader clock; not only do the current NVidia cards' shader clocks rise with the core clocks, but now, with the latest release of RivaTuner, you can independently raise the shader clock completely unlinked from the core. If ATi can integrate something like that into their architecture, then I think that potentially, with their cards' extra stream processors, they could really get some extra performance.


----------



## GrapeApe (Oct 22, 2007)

Except that the HD2900 isn't really shader-power hampered as much as texture- and ROP/hardware-AA limited. The HD2900 already competes well with the GF8800GTX/Ultra in shader power, and demonstrates this well when there's no need for AA or texture loads are low. Look at the GF8800 when it is forced to do shader-based AA like that called for in DX10: performance flips when the texture and ROP loads aren't stressed but the shaders are.

Having faster shaders would be nice, as would faster everything, but the question is whether you could have the current composition at much faster speeds. There is already a bunch of components working outside the core clock, but how easy is it to implement on those 320 SPUs/64 shader cores, and what's the benefit vs the power/heat cost? Personally I'd prefer the opposite of the G80 vis-a-vis the R600 series: faster TMUs/ROPs to make up for the lack of numbers and different composition.


----------



## Tatty_One (Oct 22, 2007)

GrapeApe said:


> Except that the HD2900 isn't really shader power hampered as much as texture and ROP/hdwr AA limited. The HD2900 already competes well with the GF8800GTX/Ultra in shader power, and demonstrates this well when there's no need for AA or texture loads are low. Look at the GF8800 when it is forced to do shader based AA like that called for in DX10, performance flips then when the texture and ROP loads aren't stressed, but the shaders are.
> 
> Having faster shaders would be nice, as would faster everything, but the question is whether you could have the current composition at much faster speeds. There are already a bunch of components working outside of core clock, but how easy is it to implement on those 320SPUs/64shader-cores, and also what's the benefit vs power/heat cost. Personally I'd prefer the opposite of the G80 vis-a-vis the R600 series, faster TMUs/ROPs to make up for the lack of numbers and different composition.



Yes, it does compete well, you're right; my point is it has twice the SPs and with a little work could be a fair bit quicker!


----------



## rhythmeister (Oct 22, 2007)

Urbklr911 said:


> I think this is GAY.....what is wrong with AMD....why would they move to a new series....when they pretty much just gained some steam on the HD 2k series...



I personally think that calling an inanimate object without gender a homo' is gay in itself! 

Long live ati


----------



## GrapeApe (Oct 22, 2007)

I understand that, but if the bottleneck is in the back-end and not the shaders then your benefit is still limited. It's still a benefit, but it would be like overclocking your QX9650 to 4GHz for UT3 while still being stuck with a ChromeS27: your computer may be able to better handle the game's core needs, but you still can't translate that benefit out to your display because of a bottleneck further down the path. Same problem with the R600: its biggest weaknesses are in the back-end, not its core shader power.

That's not to say it's without benefit; overclocked SPUs would help a bit with the shader-based AA, but it's still heavily TMU- and ROP-limited at any significant setting used by top-end cards.

I don't disagree that faster SPUs will improve some things, but my main point is that that's not its biggest weakness, and what is the cost of your OC? It's already a very power-hungry and pretty warm VPU without increasing the speed of the SPUs (the increases you seek don't come at zero cost there). I think that level of shader power is for next year's games, not really our current batch (although Crysis may prove otherwise if geometry is cranked as high as we hope).
So like I said, personally I'd prefer to see them focus on the back-end for any expenditure of power/heat or even transistors, since that's their current Achilles' heel.


----------



## [I.R.A]_FBi (Oct 22, 2007)

how long is it?


----------



## thomasxstewart (Oct 22, 2007)

*TOP $ Till Nvidia PCIe2.0*

Well, it's good if DX10.1 comes in stronger, especially with TWICE the bandwidth. Yet TEARS of Pain & How Can They Charge Sooo Much come to mind. Well, until Nvidia PCIe 2.0 pokes a new high score, if THEY can. It's HOTTT!!!

Signed, PHYSICIAN THOMAS STEWART VON DRASHEK M.D.


----------



## Ben Clarke (Oct 22, 2007)

Interesting. AMD are really stepping it up, especially since NVIDIA admitted defeat earlier.


----------



## Urbklr (Oct 22, 2007)

Okay...the person who thought to release a new series is gay! LONG LIVE ATi though....i wuv them!


----------



## cdawall (Oct 22, 2007)

GrapeApe said:


> You do realize that's not the only difference, eh!?! :shadedshu
> 
> They also got rid of 25million transistors yet were able to keep the same # of shaders/TUs/ROPs/etc.
> 
> ...



I did know that, but I didn't think breaking into the tech stuff would be as beneficial as the raw difference in clocks between the cards, which were really not too much different as far as series changes go.


----------



## effmaster (Oct 23, 2007)

cdawall said:


> I did know that, but I didn't think breaking into the tech stuff would be as beneficial as the raw difference in clocks between the cards, which were really not too much different as far as series changes go.



Raw clock speeds don't always mean a part is the best just because they're higher than before. AMD proved this to Intel, after all, and Intel responded with lower-GHz processors, namely the Core 2 Duo, which was an amazing proc and still is to this day.


----------



## General (Oct 23, 2007)

GrapeApe said:


> What was the generational difference between the R9800 and X800 other than some extremely slight tweaks? I gues at least it was a different codenamed part, and what was the big generational difference between the GF6800 and GF7800 (aka the NV47/48)? Oh yeah forgot nV changed that last one to the G70 as if it was a big change.
> 
> 
> 
> ...



They're apparently using these numbers to get rid of the 'XT, XTX, PRO, GT' suffixes that your average customer simply doesn't understand.  Much easier to look on a website, see a card that says 3870, and say

'wooo, that must be better than a 2950', or whatever the hell they end up calling these cards.

Power requirements on those R600s were just insane; for a lot of people (myself included) that was the only reason I went with an nVidia card.

However, this really does tempt me, I must say =]  Doing a brand spanking new system for Christmas (too bad I will miss out on the new CPUs and the 790 SLI chipset )

Martyn


----------



## 15th Warlock (Oct 23, 2007)

GrapeApe said:


> What was the generational difference between the R9800 and X800 other than some extremely slight tweaks? I gues at least it was a different codenamed part, and what was the big generational difference between the GF6800 and GF7800 (aka the NV47/48)? Oh yeah forgot nV changed that last one to the G70 as if it was a big change.



For starters, the X800 (R420) had _twice_ the pixel shaders (16ps vs 8ps) of the 9700 (the original R300), _50% more_ vertex shaders (6vs vs 4vs) than the 9700, about _45% more_ transistors than the 9700 (160 million vs 107 million), a _new fabrication process_ (.13 micron vs .15 micron), _supported SM2b_ while the 9700 supported 2a, supported the _PCIe_ platform while the 9800 was AGP-only, it _was the first Ati card that supported Crossfire_, and the clocks for both memory and GPU core were _about 50% higher_ than the 9700's clocks. Even though the R420 was an evolutionary step from the fantastic R300 core, it offered a performance leap anywhere _from 40% up to 120%_ depending on the game or benchmark and the resolution/effects used, not just some "extremely slight tweaks", as you can see.

Now, the GF7800 (G70) supported _50% more_ pixel shaders than the GF6800 (NV42, not 47/48) (24ps vs 16ps), _33% more_ vertex shaders than the GF6800 (8vs vs 6vs), about _40% more_ transistors than the GF6800 (302 million vs 222 million), _20% higher_ memory and GPU clocks than the GF6800, supported _transparency adaptive AA_, and supported _multiple GPUs_ on a single board (aka the 7950GX2); and even though by the numbers there didn't appear to be so much of a difference between the two cards, you could get a performance leap anywhere _from 30% to more than 100%_ depending on the benchmark or game and the resolution/effects used.

As you can see, both examples you quote clearly were more than worthy of having a new numerical denomination when compared to their previous-gen counterparts.


----------



## GrapeApe (Oct 24, 2007)

Well it was an illustration of a similar application of hyperbole, but for the heck of it let's continue on...



15th Warlock said:


> For starters, the X800 (R420) had _twice_ the pixel shaders...



Quantity doesn't show improvement in the architecture, nor the need for a name change, hence the resistance people seem to have to the new name. Increases in number within a 'generation' have happened before: the X1800 -> X1900 increased the shader count threefold, and the transistor count by 20%, yet didn't get its own X2K numbering, while the GF3 -> GF4 is only a 10% difference but got its own generation.



> a _new fabrication process_ (.13 micron vs .15 micron)



The fab process wasn't new, just new to the high end; 130nm and 130nm low-k were already used on the R9600P/XT. And for that same reason you could argue the RV670 deserves a name change: it's skipping a node and going from optical shrink to optical shrink, so that's like two process changes, and it will be the first part from any IHV to be built on the new fab. So it kinda proves my point more than disproves it, although I don't think the fab process matters so much as the results.



> _supported SM2b _and the 9700 supported 2a,



*Actually the FX cards were PS2.0a, not the R3xx, which was PS2.0 and PS2.0 extended; the R420 was PS2.0b.* There are more differences between PS2.0a and either PS2.0 or 2.0b than between 2.0 and 2.0b themselves, which differ only in slight changes to their upper limits.



> supported the _PCIe_ platform and the 9800 was AGP only,



So the PCX5900 should've been the PCX6800 based on that argument? Or does it matter whether it's native/non-native, where the GF6800 PCIe (NV45) becomes the GF7800, instead of the later NV47?



> it _was the first Ati card that supported Crossfire_,



Only after its refresh, when it became the R480, and actually after it was demoed on X700s, before you could even buy X850 master cards. So should the R480 have become the X1800 based on that argument?
*BTW, R9700s were doing multi-VPU rendering on E&S SimFusion rigs long before nV even had their new 'SLI', and even before Alienware demoed their ALX, so I'm not sure how relevant multi-VPU support is.*



> and the clocks for both memory and GPU core were _about 50% higher_ than the 9700's clocks.



But only about 20% more than the R9800XT core, and the core was slower than the R9600XT's. And if it were the speed boost alone, then the GF5900 -> 6800 jump shouldn't have gotten a generational name change, as it went down in speed.



> Even though the R420 was an evolutionary step from the fantastic R300 core, it offered a performance leap anywhere _from 40% up to 120%_ depending on the game or benchmark and the resolution/effects used, not just some "extremely slight tweaks" as you can see.



Performance increases don't need dramatic architecture changes; the R9800XT offered larger performance differences over the R9700, as did the X1900 over the X1800, depending on the game/settings. So what constitutes a significant enough change?



> Now, the GF7800 (G70) supported _50% more_ pixel shaders than the GF6800 (NV42, not 47/48) (24ps vs 16ps),



*The original GF6800 was the NV40, not the NV42, which was the 110nm plain 12-PS GF6800 model, and if you don't know what the NV47/48 was in reference to, perhaps you shouldn't bother replying, eh?*



> supported _multiple GPUs_ on a single board (aka 7950GX2)



Actually that was multiple GPUs on TWO boards (you could actually take them apart if you were so inclined) but a single PCIe socket; you probably should've referred to the ASUS Extreme N7800GT Dual. *Also, the GF6800 supported multiple VPUs on a single board as well*; guess you never heard of the Gigabyte 3D1 series (both GF6800 and 6600):
http://www.digit-life.com/articles2/video/nv45-4.html



> As you can see, both examples you quote clearly were more than worthy of having a new numerical denomination when compared to their previous gen counterparts



I think both my examples were pretty illustrative of why it's too early to complain about numbering schemes, since similar examples have occurred in the past, and especially when most of the people complaining really don't know enough about them to complain in the first place.

BTW, I'm just curious if those who have a problem with the HD3xxx numbering scheme have a similar problem with the GF8800GT and potential GF8800GTS-part2 numbering scheme causing conflicts with the current high-end?

Personally I only dislike the new numbering scheme if they got rid of the suffixes and replaced them with numbers to play down to the dumbest consumers in the marketplace. 
That to me focuses on people who don't care anyway and will still buy an HD4100 with 1GB of 64-bit DDR2 memory because the number and VRAM size are higher than the HD3999 with 512MB of 512-bit XDR/GDDR5 memory, which may outperform it 5:1 or whatever. Those same people are better served by a chart printed on the box by the IHV showing the performance positioning of each part than by a change to an existing numbering scheme.


----------



## Tatty_One (Oct 24, 2007)

Is Alec back????


----------



## 15th Warlock (Oct 25, 2007)

GrapeApe said:


> Quantity doesn't show improvement in the architecture, nor the need for a name change, and thus the resistance people seem to have to the new name. Numbers can increase within a 'generation': the X1800 -> X1900 increased the shader count 3-fold and the transistor count 20%, yet didn't get its own X2K numbering, while the GF3 -> GF4 is only a 10% difference but gets its own generation.




Yes, Ati did that previously with the X1800~X1900 series; both used very different architectures and yet had the same generational numeration. But in that case the consumer was not misled: you got a product that didn't improve performance dramatically over the previous flagship video card, so Ati decided to just go with the X1900 numeration. That was the old Ati, and I preferred it to what they do now.

In this case, you get almost the same GPU from an architectural standpoint (smaller fabrication process, plus DX10.1 support, which is worthless beyond being one more bullet point on the feature list), and yet most uninformed consumers will think this is a whole new card because of the next-gen denomination (HD3800 > HD2900), when in reality it will have about the same performance at a cheaper price point than the "previous gen" card.

This is akin to what nVidia did many years ago with the GeForce 4 MX, which was a GeForce 2 MX with higher clocks and a new name, even though the GeForce 4 Ti series was a lot faster than the MX series and had support for pixel and vertex shaders. Or the same as Ati did when they introduced the 9000 and 9200 series, which only supported DX 8.1 when compared to other, fully DX9 "genuine" R9x00 cards. Or the X600, X300, X700 cards, which used the X denomination but were just PCIe versions of the 9600/9700 series.





GrapeApe said:


> Fab process wasn't new, just new to the high end; 130nm and 130nm low-K were already used on the R9600P/XT. And by that same argument you could say the RV670 deserves a name change: it skips a node, going from optical shrink to optical shrink, so that's like two process changes, and it will be the first part from any IHV built on the new fab. So it kinda proves my point more than disproves it, although I don't think the fab process matters so much as the results.



The card that introduced the 9x00 series was the R300-based 9700, not the RV350/360. It has been common practice in the video card industry for many years for manufacturers to migrate to a smaller fab process for the mainstream GPU series of any given generation before using that smaller process for the next gen's flagship video cards, just as the HD3800 is a mainstream, smaller-fab-process version of the HD2900. Sorry, but this kinda disproves your point in any case...




GrapeApe said:


> So the PCX5900 should've been the PCX6800 by that argument? Or does native vs. non-native matter, where the GF6800 PCIe (NV45) becomes the GF7800 instead of the later NV47?



I was just using an example of another feature available on the X8x0 series that wasn't available on the R3x0 series (the two architectures you decided to quote), just to prove that all those features combined don't add up to just "some extremely slight tweaks" between both generations...




GrapeApe said:


> Only after its refresh, when it became the R480, and actually after it was demoed on X700s, before you could even buy X850 master cards. So should the R480 have become the X1800 by that argument?
> *BTW, R9700s were doing multi-VPU rendering on E&S SimFusion rigs long before nV even had their new 'SLi', and even before Alienware demoed their ALX, so I'm not sure how relevant multi-VPU support is.*



Another feature available to consumers on X8x0 cards first; add it to the list of features that don't add up to "some extremely slight tweaks". It doesn't matter if the US government used four 9800XT cards working in parallel for a flight simulator, or if Alienware showed some vaporware, if the consumer cannot access that technology with the product in their hands at any given moment.




GrapeApe said:


> But only about 20% more than the R9800XT core, and the core was slower than the R9600XT's. And if it was the speed boost alone, then the GF5900 -> 6800 jump shouldn't have gotten a generational name change, as it went down in speed.
> 
> Performance increases don't need dramatic architecture changes; the R9800XT offered larger performance differences over the R9700, as did the X1900 over the X1800, depending on the game/settings. So what constitutes a significant enough change?



Once again, Ati introduced the R9x00 series with the R300-based 9700 Pro. All other R9x00 models (except for the R9000 and the R9200) shared the same basic architecture with different features, clocks and fab process; that's precisely my point.



GrapeApe said:


> *The original GF6800 was the NV40, not the NV42, which was the 110nm plain 12-PS GF6800 model, and if you don't know what the NV47/48 was in reference to, perhaps you shouldn't bother replying, eh?*



So what? I made a mistake because the GF6800GS has an NV42 core; at least I didn't quote two cores that were never available for sale 

Nvidia's NV47 never existed

Nvidia has canned NV48

The truth of the matter is AMD can name these cards whatever they want, they could name them Radeon HD4000+ for all I care, but it will always be controversial when you raise the expectations of the consumer and they pay for something that won't exactly live up to what they expected; see what happened to the GeForce 4 MX and Radeon 9200 users. :shadedshu


----------



## GrapeApe (Oct 26, 2007)

15th Warlock said:


> In this case, you get almost the same GPU from an architectural standpoint (smaller fabrication process, plus DX10.1 support, which is worthless beyond being one more bullet point on the feature list), and yet most uninformed consumers will think this is a whole new card because of the next-gen denomination (HD3800 > HD2900), when in reality it will have about the same performance at a cheaper price point than the "previous gen" card.



How do you know they're being misled? That implies intent to deceive, and I'd like to see your proof of that, since you get so many other things wrong. Right now they're launching an *RV*670 into that new lineup, which may be the lower end of the top cards, like the X1K launch with its XL model or the X800 series with the PRO. Considering there are supposed to be no more SE-GT-PRO-XT-XTX suffixes, you don't know where it would've sat in that nomenclature-suffix combo; that it's the 3800 means the 3900 leaves room for a refresh before moving to the R7xx generation. Also, you don't even know the performance yet, although there's a lot of loose talk, just like the loose complaints.
So saying they're being misleading is pretty strong wording considering you don't even know all the aspects of it yet, which may or may not be as numerous and different as those you take exception to being called slight tweaks. The only people who would be misled are the same type of buyer as those who buy cards based on VRAM size or numbering, where the GF7300 > GF6800 / X1300 > X800.
Bitter about your 512MB X1300HM purchase, are you? 



> Or the same as Ati did when they introduced the 9000 and 9200 series, they only supported DX 8.1 when compared to other fully DX 9 "genuine" R9x00 cards.



Which had nothing to do with 9xxx meaning DX9; it just so happened that they worked out that way.



> Or the X600, X300, X700 cards, which used the X denomination but were just PCIe versions of the 9600/9700 series.



Once again you're confused. 
While the X600 was essentially the PCIe version of the R9600, neither the X300 nor the X700 was based on the R9700. The X300 was PS2.0-limited like the rest of the RV3xx series, but had far fewer shaders, TUs and ROPs than the R9700; and the X700 was a PS2.0b-based architecture with more vertex shaders than the R9700/9800. The codenames would help you figure that out, with the X700 being the RV410, the other two being RV3xx cards, and the R9700 being R300 series.



> The card that introduced the 9x00 series was the R300-based 9700, not the RV350/360,



A complete non sequitur to my statement about the X800 not being on a new process, but something you try to build your strawmen out of. Your focus on the R9700 goes against your use of the X850 and later models for your examples.



> it has been common practice in the video card industry for many years for manufacturers to migrate to a smaller fab process for the mainstream GPU series of any given generation before using that smaller process for the next gen's flagship video cards, just as the HD3800 is a mainstream, smaller-fab-process version of the HD2900. Sorry, but this kinda disproves your point in any case...



No, actually it disproves your point that the X800 being on 130nm mattered for the naming strategy, and it simply disproves your strawman that anyone ever said the HD3800 was the top flagship card. You're the one who said the process change was important in defining the X800 as a new number/generation, so you're contradicting your own statement and basically confirming mine: that the process change didn't matter. However, since you said that's one of the things that made the X800 different enough to require a new name, I simply said that by your argument the HD3800 must be doubly different. Don't blame me for your weak case for the X800.  



> I was just using an example of another feature available on the X8x0 series that wasn't available on the R3x0 series (the two architectures you decided to quote), just to prove that all those features combined don't add up to just "some extremely slight tweaks" between both generations...



Considering the RV3xx in the X600 and X300 did have it, and the R4xx didn't have it until the R423 refresh/model, long after the R420 was in place, it doesn't fit your argument. And considering that the change is an electrical change for signalling and not a processing architecture change, if you think it's significant, then all those minor HD3800 changes are equally 'significant'.



> Another feature available to consumers on X8x0 cards first; add it to the list of features that don't add up to "some extremely slight tweaks". It doesn't matter if the US government used four 9800XT cards working in parallel for a flight simulator, or if Alienware showed some vaporware, if the consumer cannot access that technology with the product in their hands at any given moment.



Do you even know how Crossfire works? :shadedshu 
Tell me what major change was made to the VPU (specifically the R420/423) that made Xfire 'more possible', compared to the addition of the external compositing chip and hardware at the END of the X8xx's life. 
And prior work with the previous VPUs does matter, especially when you're talking about a feature not related to the VPU itself but to how it is used with add-on hardware after the fact; once again relevant neither to the small tweaks nor to the naming of the X800. You also complain about me using the R9600 and R9800 in my examples, then call upon a feature that wasn't even used until the third refresh of the R4xx line, and only on select cards. 



> So what? I made a mistake because the GF6800GS has an NV42 core; at least I didn't quote two cores that were never available for sale



Other than those X300 and X700 based on some mythic R9700, you mean? 
BTW, the NV47 was released; you just know it as the GF7800, which was my point: like I said, if you don't know that, maybe you shouldn't be commenting on my reference to the GF7 series. You probably never knew the GF7900Ultra existed either; doesn't matter that you never bought it or only saw it as the GTX-512. 
And thanks for the InQ link, plus a random fourth-tier site doing a blurb about an InQ article; they make me smile like your NV42 muff. Can I use the InQ to debunk your InQ link?

Your link dated Dec 2004 says the NV47 doesn't exist and the NV48 is cancelled;
Nvidia's NV47 never existed

And your other link from Dec 2004 refers to another Fuad article (here's the original):
Nvidia has canned NV48

Then in Feb Fuad changes his tune again, saying the NV48 is back as a 512MB GF6800;
http://www.theinquirer.net/en/inquirer/news/2005/02/28/nv48-is-nv45-with-512mb-ram
So what do your links prove when they are contradicted by the author two months later?

And how about a year later when Fuad said, Oh no someone lied to us the NV47 DID exist?
http://www.theinquirer.net/en/inquirer/news/2006/03/08/geforce-7800-gtx-512-is-just-a-nv47-after-all
*"Now it turns out that even Microsoft's upcoming Vista calls the Geforce 7800 GTX 512 by the name NV47."*

Even nVidia's own drivers exposed the two models back in 2005, so to say they don't exist is funny, compared to your links, which might as well not have existed given their own contradiction/retraction by the author. 



> The truth of the matter is AMD can name these cards whatever they want, they could name it Radeon HD4000+ for all I care,



Obviously not, since you seem so bent out of shape by the new numbering scheme, so far as to accuse AMD of trying to mislead people. Whereas I think it's just a dumb move in a series of dumb marketing moves (like launching days AFTER Crysis, not before).



> but it will always be controversial when you raise the expectations of the consumer and they pay for something that won't exactly live up to what they expected; see what happened to the GeForce 4 MX and Radeon 9200 users. :shadedshu



Consumers' expectations aren't as important as actually lying to the customer (which all three companies have done). This numbering isn't like your examples; those would be the GF6200/7100 and X1050 or X2300. This is closer to the X800 Pro and X1800XL being available first, except instead of crippled better cards these look to be supercharged, previously mid-range-targeted cards. Considering both AMD's and nV's changes in strategy, how do you even know what will be mid-high end anymore if that high end may be two RV670s on a single board?
Whether ATi launches this as another model number or a suffix, it won't be any more of a problem than the HD2400/GF8400 presents to the morons who want to replace their GF7800GTX/X1800XT because the number is newer. That's their stupidity.
Would you be less uptight about the HD3800XL if you knew there was an HD3800XTX or HD3900XT launching at a later date, like the X1800XT?


----------



## Tatty_One (Oct 26, 2007)

Jesus, I will be better off reading "War and Peace" than trying to get a grip with this thread.


----------



## cdawall (Oct 28, 2007)

Great post GrapeApe, I have to agree with you. I had a friend who replaced his 6800 Ultra with a 7300GS and sat and argued that it was better than the 6800 

This new numbering strategy won't change anything, except maybe make it easier, i.e. biggest number = the best, smallest = the worst. Pretty simple vs. having to know XL-XT-GT-GTO-XTX-banana-cow? All the different suffixes just confused people anyway  It's a good move in AMD's favor; they'll sell more top-end cards to dumb people because they'll think bigger is better, vs. "WTF is the diff between GT and XTX except $100?"


----------



## [I.R.A]_FBi (Oct 28, 2007)

so whats the latest?


----------



## 22[Antihero]22 (Nov 2, 2007)

Who gives a shit about the naming? If you don't like figuring out the best-performing parts for yourself, then you're not actually a computer enthusiast. Just wait until the 15th, when we will know for sure. As for my speculation, I think it will beat the 8800GT, and maybe the new GTS, and still be under Nvidia for the crown, but we knew AMD wasn't competing for the top ever since the 2900 was released. If you want to see AMD on top, you'll probably have to wait for the HD 4000, or 5x10^21, or whatever they call the R700.


----------



## erocker (Nov 2, 2007)

From the speeds of the memory on the card, I doubt it will beat a GT. If that memory can OC past 1000 (2000), it might.


----------

