Saturday, October 20th 2007
ATI Radeon HD 3800 Series Specs, Photos and Logos
The title says it all. The ATI Radeon HD 3870, the one in the second picture, will feature 825MHz core and 2400MHz memory clocks, DirectX 10.1 support, and PCI-e 2.0 technology. The ATI Radeon HD 3850 in the third picture will be clocked at 700MHz/1800MHz core/memory, also with DirectX 10.1 support and PCI-e 2.0. Both cards will feature a 55nm-manufactured graphics processing unit (GPU).
Sources:
Tom's Hardware, VR-forums
80 Comments on ATI Radeon HD 3800 Series Specs, Photos and Logos
The trouble is, we can all be mad about how misleading it is, but at the end of the day will that stop us from buying if the price/performance is good? No.
8800GTS 320 G80
8800GTS 640 G80
8800GT G92 256
8800GT G92 512
8800GTS 640 with extra pipes (112) G80
etc etc
2900pro
2950pro
2900xt
2950xt???
2952.5 proX :D
I am lost already :cry:
This numbering is STILL GAY
It's not much different than any other naming scheme. As long as there's some rhyme or reason to it, it'll work. That it's an HD2900 or 3870 doesn't really matter as much as whether or not it's worth the money. I don't care if they call it the AMD Corvette as long as it outperforms the AMD Chevette for the price. ;)
They also got rid of 25 million transistors yet were able to keep the same # of shaders/TUs/ROPs/etc. :eek:
Still not sure what they got rid of (drop FX12 support? :confused: )
So not only do they get a process reduction benefit, they also cut transistors, which also helps with heat and power, which often helps clock-speed limits. The RV670 likely benefits from a similar change, but it depends on what else they added or changed (TMUs/ROPs) in addition to what we already know (UVD/SM4.1) while taking some other things away.
Process reduction alone isn't beneficial though if it isn't an efficient reduction, because as you decrease trace size and increase density, you increase the potential for noise, which you overcome with more voltage, which usually leads to more heat/power.
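To put that voltage/power trade-off in rough numbers, here's a minimal sketch of the usual first-order dynamic-power relation (P ≈ C·V²·f); the capacitance, voltages and clock below are made-up illustrative values, not measurements of any actual R600 or RV670 part.

```python
# Toy first-order model of dynamic switching power: P ~ C * V^2 * f.
# All numbers below are illustrative assumptions, not real GPU figures.
def dynamic_power(capacitance_f, voltage_v, freq_hz):
    return capacitance_f * voltage_v ** 2 * freq_hz

base = dynamic_power(1e-9, 1.20, 775e6)    # nominal voltage at a given clock
bumped = dynamic_power(1e-9, 1.30, 775e6)  # ~8% more voltage to fight signal noise
print(f"power increase: {bumped / base:.2f}x")  # ~1.17x at the same clock
```

In other words, even a modest voltage bump needed to keep signals clean on a denser process can eat a large chunk of whatever the shrink itself saved.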
But as 80nm and 65/55nm are completely different processes, this isn't just an optical shrink; it's a complete move, which gives them the chance to change the layout, hopefully to something with the potential to reach a little closer to those 1GHz numbers in all those early R600 rumours way back when.
Now if they want to keep power consumption low, then it would be best to have lower clocks (likely the single-slot solution), but the fact that they are going to have a dual-slot model shows that they are going to push what they can hard, which would increase heat/power while getting higher speeds/performance. This may be so that they can get the fab savings over the HD2900 and possibly replace the 512MB model, and at least the PRO, with a cheaper-to-make high-end part. They have a lot of potential if they have fewer issues than they reportedly had with the TSMC 80nmHS fab.
Having faster shaders would be nice, as would faster everything, but the question is whether you could have the current composition at much faster speeds. There are already a bunch of components working outside of the core clock, but how easy is it to implement on those 320 SPUs/64 shader cores, and also what's the benefit versus the power/heat cost? Personally I'd prefer the opposite of the G80 vis-a-vis the R600 series: faster TMUs/ROPs to make up for the lack of numbers and different composition.
:slap:
Long live ati :rockout:
That's not to say it's without benefit; overclocked SPUs would help a bit with the shader-based AA, but it's still heavily TMU and ROP limited at any significant setting used by top-end cards.
I don't disagree that faster SPUs will improve some things, but my main point is that that's not its biggest weakness, and what is the cost of your OC, as it's already a very power-hungry and pretty warm VPU without increasing the speed of the SPUs (these increases you seek don't come at zero cost there). I think that level of power is for next year's games, not really our current batch (although Crysis may prove otherwise if geometry is cranked as high as we hope).
So like I said, personally I'd prefer to see them focus on the back-end for any expenditure of power/heat or even transistors since that's their current Achilles' heel.
:toast: Well its good if DX10.1 comes in stronger, especially with TWICE Bandwidth. Yet TEARS of Pain & How Can They Charge Sooo Much, comes to Mind. Well Until Nvidia PCIe 2.0 pokes new high score, if THEY can. Its HOTTT!!!
Signed:PHYSICIAN THOMAS STEWART VON DRASHEK M.D.
'wooo, that must be better than a 2950' or whatever the hell they end up calling these cards.
Power requirements on those R600s were just insane; for a lot of people (myself included) that was the only reason I went with an Nvidia card.
However, this really does tempt me, I must say =] Doing a brand spanking new system for Christmas (too bad I will miss out on the new CPUs and 790SLI chipset :()
Martyn
Now, the GF7800 (G70) supported 50% more pixel shaders than the GF6800 (NV42, not 47/48 :confused:) (24ps vs 16ps), 33% more vertex shaders than the GF6800 (8vs vs 6vs), about 40% more transistors than the GF6800 (302 million vs 222 million), 20% higher memory and GPU clocks than the GF6800, supported transparency adaptive AA, supported multiple GPUs on a single board (aka 7950GX2), and even though by the numbers there didn't appear to be so much of a difference between both cards, you could get a performance leap anywhere from 30% to more than 100% depending on the benchmark or game and the resolution/effects used.
As you can see, both examples you quote, clearly were more than worthy of having a new numerical denomination when compared to their previous gen counterparts :rolleyes:
BTW, R9700s were doing multi-VPU rendering on E&S SimFusion rigs long before nV even had their new 'SLi', and even before Alienware demoed their ALX, so I'm not sure how relevant multi-VPU support is.

But that was only about 20% more than the R9800XT core clock, and the core was slower than the R9600XT. And if it was a speed boost alone, then the GF5900 -> 6800 jump shouldn't have gotten a generational name change, as it went down in speed. A performance increase doesn't need dramatic architecture changes; the R9800XT offered larger performance differences over the R9700, as did the X1900 over the X1800, depending on the game/settings, but what constitutes a significant enough change?

The original GF6800 was the NV40, not the NV42, which was the 110nm GF6800 plain 12PS model, and if you don't know what the NV47/48 was in reference to, perhaps you shouldn't bother replying, eh? :slap:

Actually that was multiple GPUs on TWO boards (you could actually take them apart if you were so inclined) but a single PCIe socket; you probably should've referred to the ASUS Extreme N7800GT Dual. Also, the GF6800 supported multiple VPUs on a single board as well; guess you never heard of the Gigabyte 3D1 series (both GF6800 and 6600);
www.digit-life.com/articles2/video/nv45-4.html

I think both my examples were pretty illustrative of why it's too early to complain about numbering schemes, since similar examples have occurred in the past, and especially when most of the people complaining really don't know enough about them to complain in the first place.
BTW, I'm just curious if those who have a problem with the HD3xxx numbering scheme have a similar problem with the GF8800GT and potential GF8800GTS-part2 numbering scheme causing conflicts with the current high-end?
Personally I only dislike the new numbering scheme if they got rid of the suffixes and replaced them with numbers to play down to the dumbest consumers in the marketplace.
That to me focuses on people who don't care anyway and will still buy an HD4100 with 1GB of 64-bit DDR2 memory because the number and VRAM size are higher than the HD3999 with 512MB of 512-bit XDR/GDDR5 memory, which may outperform it 5:1 or whatever. Those are the same people who are simply better served by a chart printed on the box by the IHV showing the performance positioning of the part, more than by changing an existing numbering scheme. :banghead:
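As a quick sanity check on that point, here's a minimal sketch comparing peak memory bandwidth for those two made-up cards; both configurations and the data rates are hypothetical, picked purely to show why bus width and memory type matter far more than the VRAM number on the box.

```python
# Peak memory bandwidth = effective data rate (MT/s) * bus width in bytes.
# Both cards and both data rates are hypothetical, taken from the example above.
def bandwidth_gb_s(effective_mt_s, bus_width_bits):
    return effective_mt_s * 1e6 * (bus_width_bits / 8) / 1e9

hd4100 = bandwidth_gb_s(800, 64)    # "HD4100": 1GB of 64-bit DDR2, assumed 800MT/s
hd3999 = bandwidth_gb_s(3200, 512)  # "HD3999": 512MB on a 512-bit bus, assumed 3200MT/s
print(f"{hd4100:.1f} GB/s vs {hd3999:.1f} GB/s")  # ~6.4 vs ~204.8 GB/s
```

The bigger VRAM figure tells the uninformed buyer nothing about which card can actually feed its GPU faster.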
In this case, you get almost the same GPU from an architectural standpoint (smaller fabrication process, DX10.1 support which is worthless besides being one more bullet point to add to the feature list), but yet most uninformed consumers will think this is a whole new card because of the next-gen denomination (HD3800>HD2900), when in reality it will have about the same performance but a cheaper price point than the "previous gen" card.
This is akin to what nVidia did many years ago with the GeForce 4 MX, which was a GeForce 2 MX with higher clocks and a new name, even though the GeForce 4 Ti series were a lot faster than the MX series and had support for pixel and vertex shaders. Or the same as Ati did when they introduced the 9000 and 9200 series, which only supported DX 8.1 compared to other fully DX 9 "genuine" R9x00 cards. Or the X600, X300, X700 cards, which used the X denomination but were just PCIe versions of the 9600/9700 series.

The card that introduced the 9X00 series was the R300-based 9700, not the RV350/360. It has been a common practice in the video card industry for many years for manufacturers to migrate to a smaller fab. process for the mainstream GPU series of any given generation before using that smaller process for the next-gen flagship video cards, just as the HD3800 is a mainstream, smaller fab. process version of the HD2900, so sorry, but this kinda disproves your point in any case...

I was just using an example of another feature available on the X8x0 series that wasn't available on the R3x0 series (the two architectures you decided to quote), just to prove that all those features combined don't add up to just "some extremely slight tweaks" between both generations... Another feature available to consumers on X8x0 cards first; add it to the feature list that doesn't add up to "some extremely slight tweaks". It doesn't matter if the US government used 4 9800XT cards working in parallel for a flight simulator, or Alienware shows some vaporware, if the consumer cannot have access to that technology with the product it has in its hands at any given moment.

Once again, Ati introduced the R9x00 series with the R300-based 9700pro. All other R9x00 models (except for the R9000 and the R9200) shared the same basic architecture with different features, clocks and fab. process, and that's precisely my point. So what, I made a mistake because the GF6800GS has an NV42 core; at least I didn't quote two cores that were never available for sale :slap:
Nvidia's NV47 never existed
Nvidia has canned NV48
The truth of the matter is AMD can name these cards whatever they want, they could name it Radeon HD4000+ for all I care, but it will always be controversial when you raise the expectations of the consumer and they pay for something that won't exactly live up to what they expected; see what happened to the GeForce 4MX and Radeon 9200 users. :shadedshu
So saying they're being misleading is pretty strong words considering you don't even know all the aspects of it yet, which may or may not be as numerous and different as those you take exception to being called slight tweaks. The only people who would be misled are the same type of buyer as those that buy cards based on VRAM size or numbering, where the GF7300>GF6800/X1300>X800.
Bitter about your 512MB X1300HM purchase, are you? :laugh: That had nothing to do with 9xxx and DX9; it just so happened that they worked out that way. Once again you're confused. :slap:
While the X600 was essentially the PCIe version of the R9600, neither the X300 nor the X700 were based on the R9700. The X300 was PS2.0 limited like the rest of the RV3xx series, but had far fewer shaders, TUs and ROPs than the R9700; and the X700 was a PS2.0B-based architecture with more vertex shaders than the R9700/9800. The codenames would help you figure that out, with the X700 being the RV410, the other two being RV3xx cards, and the R9700 being R300 series.

A complete non sequitur to my statement about the X800 not being a new process, but something you try to build your strawmen out of. Your focus on the R9700 goes against your use of the X850 and later models for your examples. No, actually it disproves your point about the X800 being on 130nm mattering for naming strategy, and simply disproves your strawman that anyone ever said the HD3800 was the top flagship card. You're the one who said the process change was important for defining the X800 as a new number/generation, so you're contradicting your own statement and basically confirming mine, that the process change didn't matter. However, since you said that's one of the things that defined the X800 as different enough to require a new name, I simply said the HD3800 must then be doubly different based on your argument. Don't blame me for your weak statement about the X800. :p

Considering the RV3xx in the X600 and X300 did have it and the R4xx didn't have it until the R423 refresh/model, long after the R420 was in place, it doesn't fit your argument; and considering that the change is an electrical change for signalling and not a processing architecture change, if you think it's significant, then all those minor HD3800 changes are equally 'significant'. Do you even know how Crossfire works? :shadedshu
Tell me what major change was made to the VPU (specifically the R420/423) that made Xfire 'more possible' compared to the addition of the external compositing chip and hardware at the END of the X8xx's life.
And prior work with the previous VPUs does matter, especially when you're talking about a feature not related to the VPU itself but to how it is used with add-on hardware after the fact; once again, not relevant to either the small tweaks or the naming of the X800. You also complain about me using the R9600 & 9800 in my examples and then call upon a feature that wasn't even used until the 3rd refresh of the R4xx line, and only on select cards. :wtf: Other than those X300 and X700 based on some mythic R9700, you mean? :p
BTW, the NV47 was released; you just know it as the GF7800, which was my point. Like I said, if you don't know that, maybe you shouldn't be commenting on my reference to the GF7 series. You probably never knew the GF7900Ultra existed either; it doesn't matter that you bought it or saw it as the GTX-512.
And thanks for the InQ and a random 4th-level site doing a blurb about an InQ article; they make me smile like your NV42 muff. Can I use the InQ to debunk your InQ link?
Your link dated Dec 2004 saying the NV47 doesn't exist and the NV48 is cancelled;
Nvidia's NV47 never existed
And your other link from Dec 2004 refers to another Fuad article (here's the original):
Nvidia has canned NV48
Then in February, Fuad changed his tune again, saying the NV48 was back as a 512MB GF6800:
www.theinquirer.net/en/inquirer/news/2005/02/28/nv48-is-nv45-with-512mb-ram
So what do your links prove when they are contradicted by the author two months later?
And how about a year later, when Fuad said, oh no, someone lied to us, the NV47 DID exist?
www.theinquirer.net/en/inquirer/news/2006/03/08/geforce-7800-gtx-512-is-just-a-nv47-after-all
"Now it turns out that even Microsoft's upcoming Vista calls the Geforce 7800 GTX 512 by the name NV47."
Even nVidia's own drivers exposed the two models back in 2005, so to say they don't exist is funny, compared to your links, which might as well have not existed given their own contradiction/retraction by the author.

Obviously not, since you seem so bent out of shape by the new numbering scheme, so far as to accuse AMD of trying to mislead people. Whereas I think it's just a dumb move in a series of dumb marketing moves (like launching days AFTER Crysis, not before). Consumers' expectations aren't as important as actually lying to the customer (which all 3 companies have done). This numbering isn't like your examples; that would be the GF6200/7100 and X1050 or X2300. This would be closer to the X800Pro and X1800XL launching first, except instead of being crippled better cards they look to be supercharged, previously mid-range-targeted cards. Considering both AMD's and nV's changes in strategies, how do you even know what will be mid-high end anymore, if potentially that high end will be two RV670s on a single board?
Whether ATi launches this as another model number or a suffix, it won't be any more of a problem than the HD2400/GF8400 presents to the morons who wish to replace their GF7800GTX/X1800XT because the number was newer. That's their stupidity.
Would you be less uptight about the HD3800XL if you knew there was an HD3800XTX or HD3900XT to launch at a later date, like the X1800XT?