# ATI Radeon HD 4800 Series Video Cards Specs Leaked



## malware (Apr 24, 2008)

Thanks to TG Daily we can now talk about the soon-to-be-released ATI HD 4800 series of graphics cards in more detail. One week ahead of its presumed release date, general specifications of the new cards have been revealed. All Radeon HD 4800 graphics cards will use the 55nm TSMC-produced RV770 GPU, which includes over 800 million transistors, 480 stream processors or shader units (96+384), 32 texture units, 16 ROPs, a 256-bit memory controller (512-bit for the Radeon 4870 X2) and native GDDR3/4/5 support, as reported before. At first, AMD's graphics division will launch three new cards - the Radeon HD 4850, 4870 and 4870 X2:
*ATI Radeon HD 4850* - 650MHz/850MHz/1140MHz core/shader/memory clock speeds, 20.8 GTexel/s (32 TMU x 0.65 GHz) fill-rate, available in 256MB/512MB of GDDR3 memory or 512MB of GDDR5 memory clocked at 1.73GHz 
*ATI Radeon HD 4870* - 850MHz/1050MHz/1940MHz core/shader/memory clock speeds, 27.2 GTexel/s (32 TMU x 0.85 GHz) fill-rate, available in 1GB GDDR5 version only
*ATI Radeon HD 4870 X2* - unknown core/shader clock speeds, available with 2048MB of GDDR5 memory clocked at 1730MHz
The 4850 256MB GDDR3 version will arrive as the successor to the 3850 256MB, with a price in the sub-$200 range. The 4850 512MB GDDR3 should retail for $229, while the 4850 512MB GDDR5 will set you back about $249-269. The 1GB GDDR5-powered 4870 will retail between $329-349. The flagship Radeon HD 4870 X2 will ship later this year for $499.
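The fill-rate figures in the post can be reproduced from the TMU counts and core clocks, and the memory bandwidth from the bus width and data rate. A minimal sketch; the ×2 effective-rate factor for GDDR5 is my assumption (treating the quoted 1940MHz as the doubled data clock), not something stated in the article:

```python
# Sanity-check the leaked HD 4800 numbers.
# Texel fill rate = TMUs x core clock; bandwidth = bus bytes x effective rate.

def texel_fill_gts(tmus: int, core_mhz: int) -> float:
    """Texel fill rate in GTexels/s."""
    return tmus * core_mhz / 1000.0

def bandwidth_gbs(bus_bits: int, mem_clock_mhz: int, pumps: int) -> float:
    """Memory bandwidth in GB/s; `pumps` = transfers per quoted clock (assumed)."""
    return (bus_bits / 8) * mem_clock_mhz * pumps / 1000.0

print(texel_fill_gts(32, 650))      # HD 4850: 20.8 GTexel/s, as quoted
print(texel_fill_gts(32, 850))      # HD 4870: 27.2 GTexel/s, as quoted
print(bandwidth_gbs(256, 1940, 2))  # HD 4870: ~124 GB/s on a 256-bit bus
```

Both quoted fill rates fall straight out of 32 TMUs times the core clock, which is some evidence the leak is at least internally consistent.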

*View at TechPowerUp Main Site*


----------



## choppy (Apr 24, 2008)

Sick, I hope this owns a 9600GT, which I was going to buy in a couple of weeks!


----------



## btarunr (Apr 24, 2008)

So the core is now split into geometry and shader domains with their own clock generators. Good. Close to 4 GHz memory on the HD4870? What for?


----------



## choppy (Apr 24, 2008)

btarunr said:


> So the core is now split into geometry and shader domains with their own clock generators. Good. Close to 4 GHz memory on the HD4870? What for?



what do you mean what for?! speed..performance...


----------



## HTC (Apr 24, 2008)

It seems that GDDR5 memory, power-usage-wise, is FAR better


----------



## btarunr (Apr 24, 2008)

choppy said:


> what do you mean what for?! speed..performance...



Yeah, and all that bandwidth is going to be put to use. 

It's more of a marketing feature than something that genuinely benefits performance. GDDR*5* ZOMG!


----------



## choppy (Apr 24, 2008)

btarunr said:


> Yeah, and all that bandwidth is going to be put to use.
> 
> It's more of a marketing feature than something that genuinely benefits performance. GDDR*5* ZOMG!



games are getting bigger and demanding much more from gfx cards, within the next year you will understand why!


----------



## btarunr (Apr 24, 2008)

choppy said:


> games are getting bigger and demanding much more from gfx cards, within the next year you will understand why!



How much has GDDR4 contributed to the performance leadership of current ATI GPUs over the competition's supposedly slower GDDR3? Not much. I agree that since the memory bus is narrow (256-bit for both RV670 and G92), faster memory standards help. But you need a GPU that requires a lot of memory bandwidth and can actually utilize it. If it doesn't, it remains more of a marketing feature. Watch how the HD38*7*0 X2 uses GDDR3 memory but performs on par with or better than 2x HD3870, which has the faster memory.


----------



## sinner33 (Apr 24, 2008)

Wonder how much faster these 4870's are compared to 3870's?


----------



## HTC (Apr 24, 2008)

btarunr said:


> How much has GDDR4 contributed to the performance leadership of current ATI GPUs over the competition's supposedly slower GDDR3? Not much. I agree that since the memory bus is narrow (256-bit for both RV670 and G92), faster memory standards help. But you need a GPU that requires a lot of memory bandwidth and can actually utilize it. If it doesn't, it remains more of a marketing feature. Watch how the HD38*7*0 X2 uses GDDR3 memory but performs on par with or better than 2x HD3870, which has the faster memory.



That could be because of this:



> The graphics processor itself will integrate more texture memory units (TMUs), which is the Achilles' heel of the R6xx generation: 32 TMUs in the RV770 will challenge the 56/64 units of Nvidia’s G92/G92b.



Not sure, though.


----------



## btarunr (Apr 24, 2008)

Interesting. Good to know ATI is addressing all issues that held back its previous generations. I'm very optimistic about the RV770 because of a much stronger shader domain of the GPU.


----------



## Exceededgoku (Apr 24, 2008)

Why not more TMUs??? Why do they always have to be conservative in the parts that matter? I'll probably still get one though lol.


----------



## magibeg (Apr 24, 2008)

Well, it could be cost-related that they don't add more TMUs, or maybe, just maybe, ATI cards are not exactly like Nvidia cards and are held back by different things :-O


----------



## mdm-adph (Apr 24, 2008)

Hooray for independent shader speeds.


----------



## newtekie1 (Apr 24, 2008)

malware said:


> *ATI Radeon HD 4870* - 850MHz/1050MHz/1940MHz core/shader/memory clock speeds, 27.2 GTexel/s (32 TMU x 0.85 GHz) fill-rate, available in 1GB GDDR5 version only



What happened to "the 4870 will be the first mass-production GPU with a clock speed higher than 1GHz"?


----------



## mdm-adph (Apr 24, 2008)

newtekie1 said:


> What happened to "the 4870 will be the first mass-production GPU with a clock speed higher than 1GHz"?



I think it got lost in the same dimension as 100% efficient SLI.


----------



## Odin Eidolon (Apr 24, 2008)

Nasty!


----------



## newtekie1 (Apr 24, 2008)

mdm-adph said:


> I think it got lost in the same dimension as 100% efficient SLI.



Can you show me an article actually claiming SLI will be 100% efficient?


----------



## mdm-adph (Apr 24, 2008)

newtekie1 said:


> Can you show me an article actually claiming SLI will be 100% efficient?



Nope, cause it's impossible.  OOOHH BURNSAUCE.


----------



## newtekie1 (Apr 24, 2008)

mdm-adph said:


> Nope, cause it's impossible.  OOOHH BURNSAUCE.



It is impossible because the claim was never made.  However, the 1GHz claim actually WAS made.  It is just more marketing BS put out by the graphics cards companies to trap the fanboys.


----------



## MrMilli (Apr 24, 2008)

RV770 vs RV670:
MEM: 123GB/s vs 72GB/s --> +70%
TEX: 27200 vs 12400 --> +120%
GFLOP: 1008 vs 497 --> +102%

My guess is that it will be faster than the 3870X2, since that's 2x RV670 at around 70% efficiency. I'd guess close to or matching the 9800GX2, but for sure much faster than the 8800GTX, as some suggest.
(based on these results: http://www.computerbase.de/artikel/..._x2/20/#abschnitt_performancerating_qualitaet )

Another thing to keep in mind is the transistor count and die-size advantage ATI will have, and already has. The RV670 is already smaller than G94 and almost half the size of G92.
This advantage will only grow with RV770 vs. GT200: that's around 800M vs. almost 1.1B transistors. I know GT200 will be faster, but at what cost?
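The deltas quoted above can be checked in a few lines. The GFLOPS figures follow if each stream processor does 2 FLOPs (one multiply-add) per shader clock; that reading of the numbers is my inference, not an official spec:

```python
# Rough check of the RV770 vs RV670 percentage gains quoted above.

def pct_gain(old: float, new: float) -> float:
    """Percentage increase from old to new."""
    return (new - old) / old * 100

print(round(pct_gain(72, 123)))       # MEM:  ~71% (quoted as +70%)
print(round(pct_gain(12400, 27200)))  # TEX: ~119% (quoted as +120%)

# GFLOPS = SPs x 2 FLOPs/clock x shader clock (GHz) -- assumed model
gflops_rv770 = 480 * 2 * 1.050   # 480 SPs at 1050 MHz -> 1008 GFLOPS
gflops_rv670 = 320 * 2 * 0.775   # 320 SPs at 775 MHz  ->  496 GFLOPS
print(round(pct_gain(gflops_rv670, gflops_rv770)))  # ~103% (quoted as +102%)
```

The 1008 GFLOPS figure lands exactly on the quoted number under that 2-FLOPs-per-SP assumption, which suggests that is how it was derived.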


----------



## mdm-adph (Apr 24, 2008)

newtekie1 said:


> It is impossible because the claim was never made.  However, the 1GHz claim actually WAS made.  It is just more marketing BS put out by the graphics cards companies to trap the fanboys.



Oh, I agree about the general nature of marketing BS, but how do you know that the 4870 X2 isn't going to be clocked at 1 GHz?


----------



## btarunr (Apr 24, 2008)

Maybe because the shader domain of the GPU has a 1.00+ GHz clock, speculators were misled into thinking the GPU itself was clocked at 1+ GHz. Maybe they didn't know the RV770 was split into geometry and shader domains with their own clocks.


----------



## HTC (Apr 24, 2008)

mdm-adph said:


> Oh, I agree about the general nature of marketing BS, but how do you know that the 4870 X2 isn't going to be clocked at 1 GHz?



Whether or not it will be clocked @ 1 GHz isn't what's important: to me, it would be EXTREMELY significant *IF* 1 single 4850 could match a 3870x2 in performance. Don't know if it can, though.


----------



## mdm-adph (Apr 24, 2008)

HTC said:


> Whether or not it will be clocked @ 1 GHz isn't what's important: to me, it would be EXTREMELY significant *IF* 1 single 4850 could match a 3870x2 in performance. Don't know if it can, though.



That'd be cool, but there's no way -- a 3870x2 can sure throw out some pixels.


----------



## magibeg (Apr 24, 2008)

So should I just sell my 3870 now and pick up one of these bad boys when they come out? Waiting will most likely only make the value of my card decrease.


----------



## HTC (Apr 24, 2008)

mdm-adph said:


> That'd be cool, but there's no way -- a 3870x2 can sure throw out some pixels.



Dunno, but it sure would put the graphics market in full throttle, so to speak!


----------



## CY:G (Apr 24, 2008)

Anyone know when these are going to be released? I'm thinking of selling my 3870 if they get released in the next few months.


----------



## johnnyfiive (Apr 24, 2008)

sinner33 said:


> Wonder how much faster these 4870's are compared to 3870's?



I'm gonna guess 15% at default clocks.


----------



## magibeg (Apr 24, 2008)

batmang said:


> I'm gonna guess 15% at default clocks.



That seems extremely conservative. 320 stream processors to 480 is a 50% increase alone. Then there's the 1050MHz shader speed versus 775MHz. The faster memory should help a little, plus the fact it's 1GB. I would say a clear 50%+ increase if I had to guess.


----------



## lemonadesoda (Apr 24, 2008)

Interesting.

Seems like the HD 4850 512MB GDDR5 is the winner here. Best price/performance/power ratio.

Unfortunately, the jury is still out on raw horsepower. How much faster will the 4850 be compared to the 3850? 50% more shaders, 15% faster RAM, 0% extra ROPs. Higher power consumption (unless switching to GDDR5).

I would have liked to see MORE HORSEPOWER, e.g. texture units and ROPs. I'm not convinced the extra 50% shaders will do much more than allow 8x AA rather than 4x AA, but still with all other settings the same. I hope I'm wrong.

Excluding the (optional) move to GDDR5, the new ATi cards seem more like a "3950". I don't think they deserve a "4" at the front. After all, performance-wise, it's like an X800XT over an X800Pro.


----------



## btarunr (Apr 24, 2008)

magibeg said:


> That seems extremely conservative. 320 stream processors to 480 is like a 66% increase in that alone.



Ehm..that's 50%


----------



## Pinchy (Apr 24, 2008)

All paper talk. Let's just wait for the benchies...

We all know what happened with the "awesome" specs of the 2900's.


----------



## magibeg (Apr 24, 2008)

btarunr said:


> Ehm..that's 50%



Yeah, sorry, I corrected that after. From a % basis it's 66% UP from 320, although in an absolute sense it's only 50% more shaders total.
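Spelled out with the actual counts, the two figures in this exchange are the same pair of numbers read in different directions:

```python
# 480 is 50% MORE than 320, while 320 is only ~67% OF 480 -- two
# different ratios built from the same two shader counts.
increase = (480 - 320) / 320 * 100   # percentage increase: 50.0
fraction = 320 / 480 * 100           # 320 as a share of 480: ~66.67
print(increase, round(fraction, 2))
```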


----------



## MrMilli (Apr 24, 2008)

batmang said:


> I'm gonna guess 15% at default clocks.



Did you read what I said here:
http://forums.techpowerup.com/showpost.php?p=764016&postcount=21

It will be close to double. But it seems none of you are counting on ANY architectural improvements. Why?


----------



## lemonadesoda (Apr 24, 2008)

> The Radeon 3800 series had a serious flaw, namely low texture fill-rate, which was addressed by ATI with an increased number of TMUs (Texture Memory Units) from 16 to 32. The specifications indicate that 16 TMUs can address 80 textures on the fly, which means that 32 units should be able to fetch 160 in the RV770: This should allow the new GPU to catch up with Nvidia’s G92 design. However, the G92 has 64 TMUs that were enabled gradually (some SKUs shipped with 56), resulting in a fill-rate performance that beat the original 8800GTX and Ultra models.
> 
> ATI’s RV770 will be rated at a fill rate of 20.8-27.2 GTexel/s (excluding the X2 version), which is on the lower end of the GeForce 9 series (9600 GT: 20.8; 9800 GTX: 43.2; 9800 GX2: 76.8).


Interesting. Perhaps the "texture fill bottleneck fix" will mean big improvements in SOME situations.


----------



## mdm-adph (Apr 24, 2008)

lemonadesoda said:


> Excluding the (optional) move to GDDR5, the new ATi cards seem more like a "3950". I don't think they deserve a "4" at the front. After all, performance-wise, it's like an X800XT over an X800Pro.



Ah, but it's a completely different core -- something actually deserving of a new number prefix for once.


----------



## Mussels (Apr 24, 2008)

I may well look into the 4870 X2 - I'm just after a card that's cooler/quieter/smaller than my GTX (which seriously takes 4 slots with the cooling I have on it) and uses less power at idle.

Then again, I don't NEED it... lol.


----------



## TUngsten (Apr 24, 2008)

Is there an ETA? 

Does W1Z have one under the microscope as we speak?


----------



## Solaris17 (Apr 24, 2008)

Go ATI, nice comeback


----------



## EastCoasthandle (Apr 24, 2008)

Anyone notice how the GT, GTS 512, GTX and Ultra have a higher

Texture Fill Rate
(# of TMUs) x GPU clock rate

&

Pixel Fill Rate
(# of ROPs) x GPU clock rate

yet that doesn't translate into the same lead in frame rates found in games? In some cases it's only a few frames.

*(table of texture/pixel fill rates for current cards)*

If you notice, the GT and GTS 512 models of these video cards have higher texture and pixel fill rates than the 3870, regardless of whether it's twice as high or not. Yes, other factors come into play; however, the 3870 doesn't lag behind by the same magnitude, which is why I believe it's not very efficient.

Therefore, it will be interesting to see how the 4870 stacks up.
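The two formulas above can be applied to a couple of cards of the era. The TMU/ROP counts and clocks below are the commonly reported figures (16 TMUs / 16 ROPs / 775 MHz for the HD 3870, 56 TMUs / 16 ROPs / 600 MHz for the 8800 GT); treat them as illustrative rather than authoritative:

```python
# Texture and pixel fill rates from unit counts and core clock.

def texture_fill(tmus: int, clock_mhz: int) -> float:
    """GTexels/s: (# of TMUs) x GPU clock rate."""
    return tmus * clock_mhz / 1000.0

def pixel_fill(rops: int, clock_mhz: int) -> float:
    """GPixels/s: (# of ROPs) x GPU clock rate."""
    return rops * clock_mhz / 1000.0

print(texture_fill(16, 775), pixel_fill(16, 775))  # HD 3870: 12.4 / 12.4
print(texture_fill(56, 600), pixel_fill(16, 600))  # 8800 GT: 33.6 /  9.6
```

The 8800 GT's roughly 2.7x texture fill-rate edge over the HD 3870 never shows up as a 2.7x frame-rate gap in games, which is exactly the point being made above.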


----------



## newtekie1 (Apr 24, 2008)

mdm-adph said:


> Oh, I agree about the general nature of marketing BS, but how do you know that the 4870 X2 isn't going to be clocked at 1 GHz?



That wasn't the claim.  The claim was that the HD4870 would be the worlds first production GPU clocked at 1GHz.


----------



## lemonadesoda (Apr 24, 2008)

@eastcoast, nice table. What a shame there aren't standardised 3DMark06 scores in the table, e.g. the same stock system like a Q6600 running the same benchmark on each card. That would be a nice comparison. Without real-world tests, the stats don't mean a lot. It's like comparing the number of screws on a sportscar: there is so much else that comes into the equation once the car gets on the track.

*EDIT*
Wait, I've just found this on google:

Benchmark HD 4870 on beta drivers vs. HD 3870, HD 3870 Crossfire on Cat 8.4 and 8800GT here: HD.3870.3Dmark06=12,590 vs. HD.4870.3Dmark06benchmark.leak.html=21,223 

ROFL WARNING


----------



## newtekie1 (Apr 24, 2008)

Pinchy said:


> All paper talk. Lets just wait for the benchies ...
> 
> We all know what happened with the "awesome" specs of the 2900's.



+1!!!


----------



## EastCoasthandle (Apr 24, 2008)

lemonadesoda said:


> @eastcoast, nice table. What a shame there arent standardised 3dmark06 scores in the table. That would be nice
> 
> *edit*
> I've just found this on google:
> ...


It's not my table, I googled for it. In any case, as you can see, the magnitude of the texture and pixel fill rates between competing cards leaves a pretty large gap.


----------



## Weer (Apr 24, 2008)

I'm sorry, wasn't I thinking about buying an X1950 just one year ago?
How many more numbers are they going to have to increase before people realize that they are repackaging the same old crap?


----------



## magibeg (Apr 24, 2008)

Weer said:


> I'm sorry, wasn't I thinking about buying an X1950 just one year ago?
> How many more numbers are they going to have to increase before people realize that they are repackaging the same old crap?



You could say the same thing about cars, or almost any product for that matter. Very few things are built from scratch; most are gradual innovations over time based on what works.


----------



## das müffin mann (Apr 24, 2008)

Weer said:


> I'm sorry, wasn't I thinking about buying an X1950 just one year ago?
> How many more numbers are they going to have to increase before people realize that they are repackaging the same old crap?



A 3870 is not the same as an X1950; while it's still a good card, they are different.
Nvidia kinda did that with the 9800 series, but then again, didn't ATI kinda do that between the 2900 and 3xxx series?


----------



## mandelore (Apr 24, 2008)

Wonder if the shader will be able to be independently clocked like on NV cards, now that it's unlinked from the GPU speed?

If so, that would be awesome!! And a base core clock of 850MHz ain't bad; if the 55nm GPU allows some nice OC headroom, I don't see why it can't be overclocked easily past 1GHz. But then again it's all ifs and assumptions here. Still, pretty exciting stuff.


----------



## mandelore (Apr 24, 2008)

Weer said:


> I'm sorry, wasn't I thinking about buying an X1950 just one year ago?
> How many more numbers are they going to have to increase before people realize that they are repackaging the same old crap?



LMFAO, you're saying this about ATI???  

Try applying that rant to Nvidia, then you will be onto something.


----------



## Mussels (Apr 24, 2008)

Weer said:


> I'm sorry, wasn't I thinking about buying an X1950 just one year ago?
> How many more numbers are they going to have to increase before people realize that they are repackaging the same old crap?



Old, boring argument. GeForce 2 and GeForce 4 MX were the same thing, GF 6 and 7 were very similar, 8 and 9 share GPU cores, and ATI have so many I won't bother listing them all (9500/9800, 1600/1650, 2xx0/3xx0).


----------



## mab1376 (Apr 24, 2008)

I'm torn between the 4870 and 9900GTX; I'm just gonna have to see benchmarks...

I'm gonna need something that can play all games at 1920x1200 smoothly with all settings turned up, since I'm getting a new Gateway 24" monitor.


----------



## substance90 (Apr 24, 2008)

Nice specs and price ranges, but ATi really should hire some folks who actually CAN program drivers... when that happens, their lower prices will beat the hell out of nVidia, or at least create some real competition.


----------



## [I.R.A]_FBi (Apr 24, 2008)

substance90 said:


> Nice specs and price ranges, but ATi really should hire some folks who actually CAN program drivers... when that happens, their lower prices will beat the hell out of nVidia, or at least create some real competition.



whachusayinwillis?


----------



## das müffin mann (Apr 24, 2008)

substance90 said:


> Nice specs and price ranges, but ATi really should hire some folks who actually CAN program drivers... when that happens, their lower prices will beat the hell out of nVidia, or at least create some real competition.



I don't think their drivers are the problem; also, Nvidia has traditionally been the one with driver problems.


----------



## Mussels (Apr 24, 2008)

das müffin mann said:


> I don't think their drivers are the problem; also, Nvidia has traditionally been the one with driver problems.



both have. repeatedly.

Aimed at no one in particular:
ANYTHING YOU SAY ABOUT NV OR ATI SUCKING CAN BE APPLIED EQUALLY TO THE OTHER ONE. PLEASE STOP THIS REPETITIVE FANBOI CRAP.


----------



## [I.R.A]_FBi (Apr 24, 2008)

Mussels said:


> both have. repeatedly.
> 
> Aimed at no one in particular:
> ANYTHING YOU SAY ABOUT NV OR ATI SUCKING CAN BE APPLIED EQUALLY TO THE OTHER ONE. PLEASE STOP THIS REPETITIVE FANBOI CRAP.



I can honestly say the quality of ATI drivers has fallen.


----------



## newtekie1 (Apr 24, 2008)

mandelore said:


> LMFAO, you're saying this about ATI???
> 
> Try applying that rant to Nvidia, then you will be onto something.



Actually, he is just saying it in general.  I don't see any mention of specific companies at all in his rant.  For all you know he is talking about graphics companies in general.  Of course an ATi fanboy would instantly take the defensive for ATi, and the fact that you seem to imply that ATi is innocent of this and nVidia isn't just adds more weight to your fanboyism.  The fact is both do it.



Mussels said:


> both have. repeatedly.
> 
> Aimed at no one in particular:
> ANYTHING YOU SAY ABOUT NV OR ATI SUCKING CAN BE APPLIED EQUALLY TO THE OTHER ONE. PLEASE STOP THIS REPETITIVE FANBOI CRAP.



Very very true.


----------



## tkpenalty (Apr 24, 2008)

Fill rate... meh, all marketing BS. The drivers are the reason why the GPUs don't perform as the specs say they should. Anyway, analysing the fill rate of the HD3850s, it seems pretty obvious why the GPUs had less performance...

However, they said "match"; we want to see the HD 4800 series MOW DOWN the G92s...

-tkpenalty sighs- Although it's rather early to judge, I would be surprised if these manage to trample the G92 series. AMD's R6xx GPUs have obviously had a deficit in TMUs/ROPs; I find it VERY hard to believe they overlooked that. Their GPUs have the shader power... but not the texturing power to compensate.


----------



## mab1376 (Apr 24, 2008)

[I.R.A]_FBi said:


> I can honestly say the quality of ATI drivers has fallen.



When I had my X800 I always used Omega drivers; they seemed better. 

For my NV card I was using 174.74 and it blue-screened occasionally, so I downgraded to 169.44 for more stability. I'm waiting for a new WHQL driver soon, even though WHQL just means a bunch of M$ people played with it for a while and said "yup, seems OK to me".


----------



## [I.R.A]_FBi (Apr 24, 2008)

Yeah, the latest WHQL is from December... lame.


----------



## v-zero (Apr 24, 2008)

Looks good; it should be 1.5-1.8x as fast, and should drive the 38xx series to rock-bottom prices too...


----------



## johnnyfiive (Apr 24, 2008)

magibeg said:


> That seems extremely conservative. 320 stream processors to 480 is a 50% increase alone. Then there's the 1050MHz shader speed versus 775MHz. The faster memory should help a little, plus the fact it's 1GB. I would say a clear 50%+ increase if I had to guess.



I don't want to set myself up for disappointment, lol. So I'm fibbing to myself. I'm really hoping it's 50% faster. That would be fantastic.


----------



## btarunr (Apr 24, 2008)

batmang said:


> I don't want to set myself up for disappointment, lol. So I'm fibbing to myself. I'm really hoping it's 50% faster. That would be fantastic.



Disappointment has become sort of a phenomenon of late. 8800 GTX -> 9800 GTX, talk about disappointment.


----------



## newtekie1 (Apr 24, 2008)

btarunr said:


> Disappointment has become sort of a phenomenon of late. 8800 GTX -> 9800 GTX, talk about disappointment.



I don't see why everyone was so disappointed with the 9800GTX.  It is exactly what nVidia said it was going to be.  It isn't like they hyped the specs up and then released something that wasn't even close.  I still don't understand why everyone heard "G92 based card with higher clocks" and then expected some beast that would blow away everything currently out.  You knew what it was from the day it was announced, you can't be disappointed when a card comes out exactly as it was presented.


----------



## Mussels (Apr 24, 2008)

newtekie1 said:


> I don't see why everyone was so disappointed with the 9800GTX.  It is exactly what nVidia said it was going to be.  It isn't like they hyped the specs up and then released something that wasn't even close.  I still don't understand why everyone heard "G92 based card with higher clocks" and then expected some beast that would blow away everything currently out.  You knew what it was from the day it was announced, you can't be disappointed when a card comes out exactly as it was presented.



It's faster than the 8800GTX, uses less power and runs cooler. It also has better decoding for HD media (that was missing from the GTX and Ultra, while the 8500/8600 always had it).


----------



## Dangle (Apr 24, 2008)

choppy said:


> Sick, I hope this owns a 9600GT


A 3870GX2 pwns one of those.


----------



## JrRacinFan (Apr 24, 2008)

I wonder what the HD4650 model is going to be like.


----------



## mdm-adph (Apr 24, 2008)

newtekie1 said:


> I don't see why everyone was so disappointed with the 9800GTX.  It is exactly what nVidia said it was going to be.



Well, it *wasn't* quiet.


----------



## newtekie1 (Apr 24, 2008)

Dangle said:


> A 3870GX2 pwns one of those.



True, for about double the price, but a single HD3870 does not. A 9600GT is faster than an HD3870.

Edit: Actually, it is closer to triple the price.

I'm hoping the HD4870 is better than the 9800GTX, or at least as good, which shouldn't be a problem judging from the specs. I would love to see some decent competition at that price point again, which I hope will drive prices down about $50.


----------



## kylew (Apr 24, 2008)

newtekie1 said:


> It is impossible because the claim was never made.



I love your logic! If it's not claimed, then it's not possible!  

I am disappointed that it seems the 4870 won't have a 1Ghz core speed though.


----------



## Megasty (Apr 24, 2008)

Well, at least these specs seem more believable than the junk we saw a few weeks ago. I'm still aiming for the X2; it's stacking up to be a sick monster. I wonder when we'll see a game that can use all that power.


----------



## newtekie1 (Apr 24, 2008)

kylew said:


> I love your logic! If it's not claimed, then it's not possible!
> 
> I am disappointed that it seems the 4870 won't have a 1Ghz core speed though.



No, you seem to not be able to follow the conversation.

I pointed out that there was a claim that the HD4870 would be the worlds first mass produced GPU clocked at 1GHz and asked what happened to that.

He then said it went away just like 100% efficient SLI.  Which implies that somewhere it was claimed that SLI would be 100% efficient.

I then asked him to show me where that was claimed, which he was unable to do because it was never claimed.  I never said it wasn't possible, I said it wasn't possible for him to produce an article with that claim in it because no article exists.


----------



## kylew (Apr 24, 2008)

batmang said:


> I'm gonna guess 15% at default clocks.



lol at you, plucking random figures from the air. 320 SPs versus 480 SPs, what's the increase there? 50%? Correct. Even the clock increase to 850 on the core is 10%, never mind the independent shader clock. Use common sense and you wouldn't have to pluck random figures out of the air.


----------



## kylew (Apr 24, 2008)

newtekie1 said:


> No, you seem to not be able to follow the conversation.
> 
> I pointed out that there was a claim that the HD4870 would be the worlds first mass produced GPU clocked at 1GHz and asked what happened to that.
> 
> ...



*Cough* I was joking - that's why I didn't quote your whole post.


----------



## lemonadesoda (Apr 24, 2008)

The renumbering of GPUs is causing market confusion (perhaps a few TPU experts can be excepted from the generalisation). Imagine if every year there was a different car model number - the amount of advertising, repositioning of brand and model, and confusion in the marketplace as to which model to buy, etc.

I think the same thing is true in the GPU market. There used to be some "stability" where the number increases between cards were in 50s or 100s. Now we have a 1000 change from one quarter to the next.

Too much, I say. I'm not confused, but I DO HAVE TO SPEND TOO MUCH TIME researching and staying on top of this product confusion.

Or are the GPU manufacturers doing this DELIBERATELY to get FREE advertising on the various websites and magazines? If they went from 3870 to 3875, perhaps they wouldn't get so much free advertising as when they go from 3870 to 4870, and the "rumours", the "no tech spec yet", the "tech spec tomorrow", etc. fill the web with all sorts of "brand awareness" copy+paste.

IMO this just goes to show that PR and ad agencies get paid on the WRONG METRIC. They get "paid" for how often information is linked between websites, and how far up the search engines the information appears.

This only ENCOURAGES smoke and mirrors and tech-site discussion to create page-count to hit the search engines.

AAAARRRGGGHHHH, cynic.

Benchmark HD 4870 on beta drivers vs. HD 3870, HD 3870 Crossfire on Cat 8.4 and 8800GT here: HD.3870.3Dmark06=12,590 vs. HD.4870.3Dmark06benchmark.leak.html=21,223 .  It's definitely MUCH faster than you were expecting 

ROFL WARNING


----------



## kylew (Apr 24, 2008)

newtekie1 said:


> +1!!!


 We actually know this is based on R600 tech though, so we know what the worst case will be, considering the 3800s panned out to be pretty good cards (the 8800 G92s actually ended up with a larger AA hit in the end).


----------



## kylew (Apr 24, 2008)

lemonadesoda said:


> The renumbering of GPUs is causing market confusion (perhaps a few TPU experts can be excepted from the generalisation). Imagine if every year there was a different car model number - the amount of advertising, repositioning of brand and model, and confusion in the marketplace as to which model to buy, etc.
> 
> I think the same thing is true in the GPU market. There used to be some "stability" where the number increases between cards were in 50s or 100s. Now we have a 1000 change from one quarter to the next.
> 
> ...



It's a new gen; that's how it works. It would be confusing, and pointless, to try and give products names in small increments of, say, 25. You're trying to base it on a big-number-equals-performance way of thinking, when we know a 3600 isn't faster than a 2900. It's a way of distinguishing different tech. Especially considering the 4xxx are new cores, tweaked from an older architecture, the new name is warranted. At least it's a much larger improvement than between the HD 2 and HD 3 series.


----------



## kylew (Apr 24, 2008)

Weer said:


> I'm sorry, wasn't I thinking about buying an X1950 just one year ago?
> How many more numbers are they going to have to increase before people realize that they are repackaging the same old crap?



Did you know that you have a 9800GTX? No? Well, you do now


----------



## EastCoasthandle (Apr 24, 2008)

*Not the 4870 but the 4870 X2...*

I don't think it's going to be the 4870 that will be most impressive, but the 4870 X2, if rumors are true that it will be 2 GPUs recognized as 1 GPU using shared memory.  If that turns out to be true, then:
-future video cards as we know them will change
-it will leave Nvidia without a competing card (unless the GT200 is a dual-GPU solution)
if done successfully via price, performance and innovation.  This is what I really want to know most.  The 4870 won't be bad IMO, and at its price it will be a competing card, but it's the 4870 X2 I want to see.  Only time will tell; can't wait.  But remember there is still the R800 we haven't seen/heard a peep about yet, and it was rumored to be in development as far back as last year (that I know of).


----------



## Mussels (Apr 24, 2008)

lemonadesoda said:


> The renumbering of GPUs is causing market confusion (perhaps a few TPU experts can be excepted from the generalisation). Imagine if every year there was a different car model number - the amount of advertising, repositioning of brand and model, and confusion in the marketplace as to which model to buy, etc.
> 
> I think the same thing is true in the GPU market. There used to be some "stability" where the number increases between cards were in 50s or 100s. Now we have a 1000 change from one quarter to the next.
> 
> ...



That 4870 score isn't that high - people are doing that in the '06 thread here on TPU. At a guess, that score is for the X2 version.


----------



## newtekie1 (Apr 24, 2008)

Mussels said:


> That 4870 score isn't that high - people are doing that in the '06 thread here on TPU. At a guess, that score is for the X2 version.



The 4870 score isn't even real, he is rick rollin' you all.


----------



## erocker (Apr 24, 2008)

I think I know what "rick-rollin" really means now!  Yuck!:shadedshu


----------



## jbunch07 (Apr 24, 2008)

ha nice try lemonadesoda! but my computer comes with "anti rick roll" protection! 
i hope these cards turn out good. even though i just bought my 3870...oh well


----------



## wiak (Apr 24, 2008)

substance90 said:


> Nice specs and price ranges, but ATi really should hire some folk that actually CAN program drivers... when this happens, their lower prices will beat the hell out of nVidia or at least create some real competition.


you should check your facts before you say ATi sucks on drivers, last time i checked NVIDIA crashed Vista many more times


----------



## Easy Rhino (Apr 24, 2008)

more graphics card options??? i just got a headache.


----------



## newtekie1 (Apr 24, 2008)

wiak said:


> you should check your facts before you say ATi sucks on drivers, last time i checked NVIDIA crashed Vista many more times



NVidia also has a much larger market share, which explains why it causes more Vista crashes. (I believe I actually stated in that news post that some fanboys would try to use those figures to say nVidia's drivers suck*.)  The fact is, if you have more users, you will have more crashes.  That doesn't mean anything in terms of drivers.  We would need the percentages of people that had problems before we can make that statement.

Just saying "nVidia caused more Vista crashes than ATi, so ATi must have better drivers" is false.

Edit: *-Yep I did:http://forums.techpowerup.com/showthread.php?t=56282


----------



## EastCoasthandle (Apr 24, 2008)

^^That's incorrect, the number of crashes on Vista was the result of the driver itself, not how many people used the video card.  The number of people has no bearing on the fact that the driver caused the problem.


----------



## Exceededgoku (Apr 24, 2008)

^^ no, you're incorrect, it's nearly impossible to find the real cause behind the matter. By your logic, 1 person could have crashed a million times and that would be the reason... somehow I don't think so.....


----------



## magibeg (Apr 24, 2008)

No, all of you are incorrect, this thread is about the HD48xx, not whose drivers cause more errors.


----------



## Mad-Matt (Apr 24, 2008)

still the fact remains... nvidia drivers suck, their chipsets suck, and if they continue on as they are, their next gpu will suck too


----------



## DaedalusHelios (Apr 24, 2008)

If you only have an ATi/AMD computer and say Nvidia drivers suck, you are an idiot. 

How would you know? That's right, you wouldn't.

I have two ATi computers and two Nvidia computers. I have had more display driver crashes with ATi drivers, but neither has EVER crashed my entire computer.


----------



## das müffin mann (Apr 24, 2008)

ok, getting away from the drivers thing, i cant wait to see some in-game benches. i wonder how they would stack up compared to nvidia's lineup in game (i dont care about crysis...lol)


----------



## eidairaman1 (Apr 24, 2008)

newtekie1 said:


> It is impossible because the claim was never made.  However, the 1GHz claim actually WAS made.  It is just more marketing BS put out by the graphics cards companies to trap the fanboys.



yet you were trapped by SLI Claiming over 80% performance gain.


----------



## eidairaman1 (Apr 24, 2008)

das müffin mann said:


> a 3870 is not the same as a 1950; while it's still a good card, they are different
> nvidia kinda did that with the 9800 series, but then again didnt ati kinda do that between the 2900-3xxx series?



Sorry the 1950 Line Could not Overclock that well, 3870 and 3850 overclock very well and can be modified.


----------



## kylew (Apr 24, 2008)

newtekie1 said:


> NVidia also has a much larger market share, which explains why it causes more Vista crashes.(I believe I actually stated in that news post that some fanboys would try to use those figures to try and say nVidia's driver suck*).  The fact is, that if you have more users, you will have more crashes.  That doesn't mean anything in terms of drivers.  We would need the percentages of people that had problems before we can make that statement.
> 
> Just saying "nVidia caused more Vista crashes then ATi, so ATi must have better drivers" is false.
> 
> Edit: *-Yep I did:http://forums.techpowerup.com/showthread.php?t=56282



Quick to call people a fanboy, I see; you'll get called a fanboy yourself for disregarding the general consensus that ATi drivers are superior to NV's. Even the people who truly know what they are talking about see ATi drivers as being superior. But hey, you know better - because you like NV, they are in every way superior.  

Based on the driver gains ATi managed to squeeze from the R600 cores, I find their drivers very impressive, and I'd say one of the reasons NV drivers are "inferior" is because they're lazy on driver releases.


----------



## das müffin mann (Apr 24, 2008)

eidairaman1 said:


> Sorry the 1950 Line Could not Overclock that well, 3870 and 3850 overclock very well and can be modified.



ok so what does that have to do with what i said?


----------



## spearman914 (Apr 24, 2008)

malware said:


> Thanks to TG Daily we can now talk about the very soon to be released ATI HD 4800 series of graphics cards with more details. One week ahead of its presumable release date, general specifications of the new cards have been revealed. All Radeon 4800 graphics will use the 55nm TSMC produced RV770 GPU, that include over 800 million transistors, 480 stream processors or shader units (96+384), 32 texture units, 16 ROPs, a 256-bit memory controller (512-bit for the Radeon 4870 X2) and native GDDR3/4/5 support as reported before. At first, AMD’s graphics division will launch three new cards - Radeon HD 4850, 4870 and 4870 X2:
> *ATI Radeon HD 4850* - 650MHz/850MHz/1140MHz core/shader/memory clock speeds, 20.8 GTexel/s (32 TMU x 0.65 GHz) fill-rate, available in 256MB/512MB of GDDR3 memory or 512MB of GDDR5 memory clocked at 1.73GHz
> *ATI Radeon HD 4870* - 850MHz/1050MHz/1940MHz core/shader/memory clock speeds, 27.2 GTexel/s (32 TMU x 0.85 GHz) fill-rate, available in 1GB GDDR5 version only
> *ATI Radeon HD 4870 X2* - unknown core/shader clock speeds, available with 2048MB of GDDR5 memory clocked at 1730MHz
> ...



HOLY CRAP at the 4870 X2's 2 GB of VRAM. I heard someone on tomshardware say the core clock will be 1 GHz. And some say at christmas new versions of the 4870 X2 will be out at a 3 GHz core clock!!!!!!!!!!! This sounds like the end of nvidia.
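For what it's worth, the fill-rate numbers in the quoted specs are just texture units multiplied by core clock; a quick back-of-the-envelope check (the card figures are the leaked specs quoted above, not confirmed numbers):

```python
# Texel fill rate = number of texture units (TMUs) x core clock in GHz.
# Figures are the leaked/rumored specs quoted above, not official ones.
cards = {
    "HD 4850": (32, 0.65),  # 32 TMUs at 650 MHz
    "HD 4870": (32, 0.85),  # 32 TMUs at 850 MHz
}

for name, (tmus, clock_ghz) in cards.items():
    fill_rate = tmus * clock_ghz  # GTexel/s
    print(f"{name}: {fill_rate:.1f} GTexel/s")
```

which reproduces the 20.8 and 27.2 GTexel/s figures from the article.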


----------



## Nitro-Max (Apr 24, 2008)

I'm thinking the core clock on the X2 could be at 1000MHz, if not more


----------



## eidairaman1 (Apr 24, 2008)

i was getting at the point that the 3800 line is a totally different animal than the 1950 line.


----------



## das müffin mann (Apr 24, 2008)

i know, i said that...


----------



## newtekie1 (Apr 24, 2008)

EastCoasthandle said:


> ^^That's incorrect, the number of crashes on Vista was the result of driver itself not how many people used the video card.  The number of people has no merit to the fact that the driver caused the problem.



Incorrect.  If there are more people using the driver, then there will be more crashes.  All the information provided gave us was the number of crashes.

Example: You take a survey.  25 people report crashes caused by nVidia drivers.  15 people report crashes caused by ATi drivers.  Who has better drivers? (I'll continue the example once you answer this question.)



eidairaman1 said:


> yet you were trapped by SLI Claiming over 80% performance gain.



No I wasn't.



kylew said:


> Quick to call people a fanboy I see, you'll get called a fanboy yourself for disregarding the general consensus that ATi drivers are superior to NV's. Even the people who truly know what they are talking about see ATi drivers as being superior. But hey, you know better, because you like NV, they are in every way superior.
> 
> Based on the driver gains ATi managed to squeeze from the R600 cores, I find their drivers very impressive, and I'd say one of the reasons NV drives are "inferior" is because they're lazy on driver releases.



You can call me a fanboy all you want for disregarding that, because the people that truly know, the people that use both, will tell you neither is better than the other.  Notice how I use both...I'm guessing you don't have an nVidia card in any of your computers, and probably never have.  As for lazy driver releases, last I checked nVidia has been pushing out 2+ drivers a month, ATi is lucky to see monthly releases anymore.


----------



## das müffin mann (Apr 24, 2008)

hey guys lets get back on topic...


----------



## eidairaman1 (Apr 24, 2008)

newtekie, quit trying to filibuster Nvidia's drivers - you're liable to be assassinated lol


----------



## imperialreign (Apr 24, 2008)

defi - I find it interesting that full specs for the 4870 X2 weren't leaked - which says to me that ATI is holding that back for the time being.  Out of the 4000 series lineup, I think the most impressive will be the 4870 X2.

I'm defi curious to find out whether it's being slapped with two RV770s, or if it'll be the first ATI card to sport a dual-core R700.  If so, that would be a key reason for withholding those clocks ATM.

Anyway we look at it, though, this series appears to be "bringin-it" back to nVidia.  I'm really glad to see hardware from the two competing neck and neck again; it's better for all of us in the long run.



Either way, though - instead of snaggin a 3870x2 within the next couple of months, I might just keep on saving for the release of the 4870x2 instead.


----------



## spearman914 (Apr 24, 2008)

Xazax said:


> http://www.extremetech.com/article2/0,1697,2286045,00.asp
> 
> http://www.extremetech.com/article2/0,1697,2283081,00.asp
> 
> ...


Read these? Things will get insane in..


----------



## EastCoasthandle (Apr 24, 2008)

newtekie1 said:


> Incorrect.  If there are more people using the driver, then there will be more crashes.  All the information provided gave us was the number of crashes.
> 
> Example: You take a survey.  25 people report crashes causes by nVidia drivers.  15 people reported crashes caused by ATi drivers.  Who has better drivers?(I'll continue the example once you answer this question.)
> 
> ...


That is not correct; you are replacing the fact that the issue happened with the number of people who used the driver.  The fact remains that the issues happened; therefore, the number of people using it is not relevant to that fact.  It only shows how many of the people who did use it experienced the same problem - it's not a basis for creating some sort of arbitrary percentage.





Exceededgoku said:


> ^^ no you're incorrect, it's nearly impossible to find the real cause behind the matter. By your saying 1 person could have crashed a million times and that would be the reason... somehow I don't think so.....


The reason was already addressed in a few articles.  Google it on your own time as this really is getting off topic to this thread.


----------



## eidairaman1 (Apr 24, 2008)

seems the fanboys always try to change the subject of the topics. First it was Nvidia's 790i and now it's this topic

KEEP THE FRAGGIN TOPIC ON TRACK!!!


----------



## spearman914 (Apr 24, 2008)

eidairaman1 said:


> seems the fanboys always try to change the subject of the topics, First it was Nvidia's 790i and now its this topic
> 
> KEEP THE FRAGGIN TOPIC ON TRACK!!!



You just talked about something off topic............

"seems the fanboys always try to change the subject of the topics, First it was Nvidia's 790i and now its this topic

KEEP THE FRAGGIN TOPIC ON TRACK!!!" <---- Thats off topic


NOTE: So did I post something off topic too due to this post?


----------



## kylew (Apr 24, 2008)

newtekie1 said:


> Incorrect.  If there are more people using the driver, then there will be more crashes.  All the information provided gave us was the number of crashes.
> 
> Example: You take a survey.  25 people report crashes causes by nVidia drivers.  15 people reported crashes caused by ATi drivers.  Who has better drivers?(I'll continue the example once you answer this question.)
> 
> ...



They actually do release drivers monthly, and I have had NV cards, and I've returned them, as they weren't what I wanted. I considered going 8800GTX SLi, but I saw NV support as being sub par, so I steered clear of them. The reason I find ATi drivers more impressive is how much performance they've gotten out of R600s, especially when it comes to AA, that "issue" seems to have pretty much gone, and they've managed all this without the hardware advantages NV had with their cards.


----------



## newtekie1 (Apr 24, 2008)

EastCoasthandle said:


> That is not correct, you are replacing the fact that the issue happened with the number of people who used the driver.  The fact remains that the issues happened therefore, the number of people using it is not relevant to that fact.  It only shows how many people who did use it experienced the same problem not the fact that people used it as to create some sort of arbitrary percentage.



No, it is incorrect to say it is a fact that nVidia's drivers are worse based on the survey that says they cause more Vista crashes.  You need the percentages of nVidia users that had crashes and the percentage of ATi users that had crashes before you can come to that conclusion.  The percentage of Vista users, is not the correct information.


----------



## spearman914 (Apr 24, 2008)

newtekie1 said:


> No, it is incorrect to say it is a fact that nVidia's drivers are worse based on the survey that says they cause more Vista crashes.  You need the percentages of nVidia users that had crashes and the percentage of ATi users that had crashes before you can come to that conclusion. The percentage of Vista users, is not the correct information.



This sounds like you're having a debate in court. One person says it's opinion. Someone says, incorrect, I disagree. Then another person says, I disagree with you. Then another says, you're totally wrong. Then another person comes in and says, listen to me, you're all wrong, and then a cop comes in and shoots them with a 0.1 sec short-lasting 1-hit-KO rocket launcher.


----------



## EastCoasthandle (Apr 24, 2008)

newtekie1 said:


> No, it is incorrect to say it is a fact that nVidia's drivers are worse based on the survey that says they cause more Vista crashes.  You need the percentages of nVidia users that had crashes and the percentage of ATi users that had crashes before you can come to that conclusion.  The percentage of Vista users, is not the correct information.


What I addressed in your post:


> ...larger market share, which explains why it causes more Vista crashes...


which is wrong.


----------



## imperialreign (Apr 24, 2008)

newtekie1 said:


> You can call me a fanboy all you want for disregarding that, because the people that truly know, the people that use both, will tell you neither is better than the other.  Notice how I use both...I'm guessing you don't have an nVidia card in any of your computers, and probably never have.  As for lazy driver releases, last I checked nVidia has been pushing out 2+ drivers a month, ATi is lucky to see monthly releases anymore.



nVidia pushes out 2+ beta drivers a month.  Usually they only have one official release a month.  They're on par with ATI; the only difference is that ATI doesn't release beta drivers left and right like nVidia does - instead, they rely heavily on feedback crews and consumer feedback (us) for driver development.  If there's an issue they're trying to resolve, we typically see either a hotfix or a beta release.

Now, if we start counting beta drivers as "official" driver releases - then yeah, I'll defi admit that nVidia releases _more_ drivers than ATI does.



And saying that ATI is lucky to see monthly driver releases anymore is absolutely ridiculous - and you know that, man - ATI has been following the same one-official-driver-release-per-month schedule since, what? 2004/2005?  We all know roughly when the next driver is rolling out; there's no guessing or hoping involved.  If ATI were going to start cutting back to quarterly or bi-monthly driver releases, we would've seen or heard evidence of that already.

I understand there's a debate going on, but in the heat of a debate one's comments can start coming across as very fanboyish - not calling you a fanboi, newtekie1, but IMO that quote on the driver releases very much sounded that way.


----------



## spearman914 (Apr 24, 2008)

EastCoasthandle said:


> which is wrong
> 
> 
> 
> ...



Which has 3 letters and 3 quotes in 1.


----------



## newtekie1 (Apr 24, 2008)

imperialreign said:


> nVidia pushes out 2+ beta drivers a month.  Usually they only have one alpha release a month.  They're on par with ATI; only difference is that ATI doesn't release beta drivers left and right like nVidia does - instead, they rely heavily on feedback crews, and consumer feedback (us) for driver development.  If there's an issue they're trying to resolve, we typically see either a hotfix or a beta release.
> 
> Now, if we start calling beta drivers as "official" driver releases - than yeah, I'll defi admit that nVidia releases _more_ drivers than ATI does.
> 
> ...



Yes, but it seems as of late that the driver releases are coming later and later in the month.  We are still seeing driver releases every month, though.

As for beta driver releases, I don't care if it is beta or not, as long as it works.  NVidia has come a long way in terms of keeping new drivers coming - a long way from the early days of the 8800 series, when they were screwing over their 7 series owners, who didn't see even a beta release for months.


----------



## EastCoasthandle (Apr 24, 2008)

spearman914 said:


> Which has 3 letters and 3 quotes in 1.



I am consistent  but I think I said more than 3 letters


----------



## newtekie1 (Apr 24, 2008)

EastCoasthandle said:


> What I addressed in your post:
> 
> which is wrong.



No it isn't.  It is a well known fact that nVidia has a larger Vista market share than ATi, and in general a larger market share.  The last report I saw shows nVidia having a 71% share of the discrete graphics card market (Q4 '07 numbers).


----------



## EastCoasthandle (Apr 24, 2008)

Newtekie1, you aren't following your own posts any more.  What you posted does not explain the crashes mentioned earlier, nor does it support your earlier claim that it did.


----------



## imperialreign (Apr 24, 2008)

newtekie1 said:


> Yes, but it seems as of late, that the driver releases are coming later and later in the month.  We are still seeing driver releases every month though.
> 
> As for beta driver releases, I don't care if it is beta or not, as long as it works.  NVidia has come a long way in terms of keeping new drivers coming, they have come a long way from the early days of the 8800 series where they were screwing over their 7 series owners who didn't see even a beta release for months.




exactly the point - but we can't say that nVidia supports their cards better because they release more beta drivers.  Beta drivers are only a means for a company to get feedback from their crews and consumers, in an attempt to release better-performing, more stable drivers - and overall a better product, from hardware to software.  Most betas aren't even supported by most manufacturers because they're not considered "official" releases.

ATI just does things differently, and their drivers are quite stable and friendly without the need for numerous beta releases.  Could their drivers be better if they did go the same route?  Absolutely.  I firmly believe that if ATI followed the same feedback method nVidia did, ATI drivers would perform much better than they do now, and we wouldn't run into the occasional hiccup like CAT 8.3 + Crossfire.


<edit>

not trying to drag this side of the debate out - just wanting to clarify so others don't get confused in the ongoing pandemonium


----------



## newtekie1 (Apr 24, 2008)

EastCoasthandle said:


> Newtekie1, you aren't following your own posts any more.  What you posted does not explain the crashes mentioned earlier nor does it follow up that you thought it did.



I'm not trying to explain the crashes.  What I am trying to do is make people realize that you can not draw the conclusion that nVidia has worse drivers based solely on the fact that more Vista users had crashes caused by nVidia drivers.



imperialreign said:


> exactly the point - but we can't say that nVidia is supporting better because they release more beta drivers.  beta drivers are only a means for a company to get feedback from their crews and consumers, in an attempt to release better performing, more stable drivers; and overall a better product from hardware to software - most betas aren't even supported by most manufacturers because it's not considered an "official" release.
> 
> ATI just does things differently, and their drivers are quite stable and friendly without the need for numerous beta releases.  Could their drivers be better if they did go the same route?  Absolutely.  I firmly believe that if ATI followed the same feedback method nVidia did, ATI drivers would perform much better than they do now, and we wouldn't run into the occasional hiccup like CAT 8.3 + Crossfire.
> 
> ...



Exactly, I don't believe ATi's drivers are worse.  Both have their problems, I just don't think nVidia's drivers are any worse than ATi's.


----------



## EastCoasthandle (Apr 24, 2008)

newtekie1 said:


> I'm not trying to explain the crashes.  What I am trying to do is make people realize that you can not draw the conclusion that nVidia has worse drivers based solely on the fact that more Vista users had crashes caused by nVidia drivers.


Actually, you did try to explain the crashes (which is the post I originally responded to), and you continued to do so in the last few posts.  Not only does it not make sense, it didn't relate to the fact that the issues were happening.


----------



## newtekie1 (Apr 24, 2008)

EastCoasthandle said:


> Actually you did as I already quoting you saying so.  Not only did it not make sense it didn't relate to the fact that the issues were happening.



That post literally makes no sense.  I'm done with your fanboy ass (yes, you are a huge ATi fanboy, everyone here knows it).  Your arguments are now making no sense at all, and you are just talking in circles.  Welcome to my ignore list.


----------



## imperialreign (Apr 24, 2008)

newtekie1 said:


> Exactly, I don't believe ATi's drivers are worse.  Both have their problems, I just don't think nVidia's drivers are any worse than ATi's.



s'all good   I was just trying to clarify that earlier statement, cause it coulda been taken either way.


----------



## EastCoasthandle (Apr 24, 2008)

newtekie1 said:


> That post literally makes no sense.  I'm done with your fanboy ass (yes, you are a huge ATi fanboy, everyone here knows it).  Your arguments are now making no sense at all, and you are just talking in circles.  Welcome to my ignore list.



Your post is getting off topic. You have already called a few users here fanboys because they posted their opinions, reasoning, etc. in this thread.  I think you've used that "card" enough in one thread.  Although I have posted nothing in this thread saying that I am a fan of either company, I did imply/say:


> ..you are replacing the fact that the issue happened with the number of people who used the driver. The fact remains that the issues happened therefore, the number of people using it is not relevant to that fact...



But it would be best that this debate conclude as name calling never enhances any opinion.


----------



## das müffin mann (Apr 24, 2008)

das müffin mann said:


> hey guys lets get back on topic...



ill say it again


----------



## magibeg (Apr 25, 2008)

To give things some focus, does anyone wanna take a guess as to when they think these bad boys might be coming out? (yes, no one knows, but it's better than fighting )


----------



## KainXS (Apr 25, 2008)

since I'm gonna do a whole new rig in november I might just go with crossfire 4870's if they are out by then


----------



## kylew (Apr 25, 2008)

magibeg said:


> To give things some focus, does anyone wanna take a guess as to when they think these bad boys might be coming out? (yes, no one knows, but it's better than fighting )



For no reason at all, I really hope it's within 2 weeks.  I'm not even desperate for a new card either . With 3DMark Vantage due out very soon, I reckon ATi will want their cards out as soon as possible to "showcase" their new stuff.


----------



## das müffin mann (Apr 25, 2008)

magibeg said:


> To give things some focus, does anyone wanna take a guess as to when they think these bad boys might be coming out? (yes, no one knows, but it's better than fighting )



ill say 2 months

just a random guess


----------



## Megasty (Apr 25, 2008)

Waiting for my next monster won't be too bad this time. Through endless amounts of tweaking, I finally got my tri-fire to handle anything I can throw at it. Even with these watered down specs, I still can't stop shaking my fist at the thought of how fast these things will be.


----------



## wolf2009 (Apr 25, 2008)

kylew said:


> For no reason at all, I really hope it's within 2 weeks.  I'm not even desperate for a new card either .



me too, just got 9600GT in hope of saving for a better card.


----------



## Exceededgoku (Apr 25, 2008)

I'm an ATI fanboy out and out, and I respect Nvidia's performance, but I won't get their cards because I don't like their drivers and I've always noticed a weird stuttering problem with all of their cards...


----------



## Valdez (Apr 25, 2008)

newtekie1 said:


> As for lazy driver releases, last I checked nVidia has been pushing out 2+ drivers a month, ATi is lucky to see monthly releases anymore.



i did this shot a few minutes ago.  Today is 2008 April 25.  The regular user goes to nvidia.com for the newest official driver; he doesn't search the net for betas and modded-inf drivers. (what nvidia drivers are there anyway? beta, modded-inf beta, official beta, whql, official whql   )

I had a 7900gt for almost 2 years before my hd3870. There was a 9 month period with no official whql Forceware. 9 months! There were a lot of betas though, and all of them ended with a BSOD  (i tried them).
I've been with ati for 2 months now, and have had no driver issues from Cat 8.1 to Cat 8.4.
I just like getting a new driver every month.


----------



## erocker (Apr 25, 2008)

This thread isn't about Nvidia drivers.  Stay on topic please.


----------



## zOaib (Apr 25, 2008)

Exceededgoku said:


> I'm an ATI fanboy out and out but I respect Nvidia's performance but I won't get their cards because I don't like their drivers and I've always noticed a weird stuttering problem with all of their cards...



i honestly TRIED to go nvidia: tried the 8800 gts 640, 8800 gtx, 8800 gts g92, 9800gx2

and out of all of them the only one that i had fun with was the 8800 gts g92. all the others except the 9800 gx2 wud play fine but have weird problems now and then ............. the 9800 gx2 was, WELL, if i put it in words nv fanbois will be all over my rear end, so ill leave it to your imagination. back to the hd 3870 x2 and happy .


----------



## lemonadesoda (Apr 25, 2008)

Let's try to get this thread back on track...


----------



## DaedalusHelios (Apr 25, 2008)

What the heck???? Nvidia?


9800GX2 Vista 32bit driver.


----------



## DaedalusHelios (Apr 25, 2008)

At first I thought it was over-active heuristics too...... but it still made its way onto my computer. I may have to delete it with my Linux boot disc.


----------



## DaedalusHelios (Apr 25, 2008)

ghost101 said:


> I have the heuristic analyzer off. I think that's the default setting. Did you turn it on?



Nope, I didn't change any settings.


----------



## Sapientwolf (Apr 25, 2008)

Looks impressive, especially since I'm planning a build in the coming months, so hopefully it'll be out by then. It also looks like ATI will make it worthwhile to buy an X2 over Crossfiring 4870s because of the prices, assuming the 4870 lands at ~$350 and the 4870 X2 at ~$500.

_And yeah, although off topic, the number of Vista crashes does not give an accurate measurement of which driver is more prone to failure.  Say a pool of 100 people submit a driver crash report over one week's time: 70 are Nvidia users and 30 are ATI users (a rough installed base).  You could conclude from that data that 70% of crashes were caused by Nvidia, even though both companies are suffering 1 crash per user per week, which invalidates the 30% vs. 70% argument.  A proper example: 170 crash reports are submitted over one week's time, 140 of them from the 70 Nvidia users and 30 from the 30 ATI users; then you can say Nvidia's crash rate is 2 per user per week while ATI's is still 1.  The key to this example is that the number of users needs to be given beforehand, and possibly unique crash counts as well._
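That normalization can be sketched in a few lines of Python; the user counts and crash totals below are the hypothetical figures from the example, not real survey data:

```python
# Raw crash counts alone can't rank drivers: with a lopsided installed
# base, the bigger vendor accumulates more reports even at equal quality.
users = {"nvidia": 70, "ati": 30}      # hypothetical installed base
crashes = {"nvidia": 140, "ati": 30}   # hypothetical crash reports, one week

total = sum(crashes.values())
for vendor in users:
    share = crashes[vendor] / total           # share of all reports
    rate = crashes[vendor] / users[vendor]    # crashes per user per week
    print(f"{vendor}: {share:.0%} of reports, {rate:.1f} crashes/user/week")
```

Here nvidia shows 2 crashes per user per week against ATI's 1 - a gap the raw share of reports alone wouldn't let you infer.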


----------



## TheGuruStud (Apr 25, 2008)

Valdez said:


> i did this shot a few minutes ago  Today is 2008. april 25.  The regular user goes to nvidia.com for the newest official, don't searches the net for betas and modded inf drivers. (what nvidia drivers are there anyway? beta, modded inf beta, official beta, whql, official whql   )
> 
> I had a 7900gt for almost 2 years before my hd3870. There was a 9 months period with no official whql forceware. 9 months! There was a lot of betas though, all of them ended with BSOD  (i tried them).
> Now i'm with ati for 2 months now, and had no driver issues from cata8.1 to cata8.4.
> I just like to get a new driver every months.



Someone here is a little freaking retarded. WHQL doesn't mean shit. I'd rather not have M$ slap a meaningless label on it. The 174.74s are WHQL for the 9 series, so just get the "beta" (which is identical) if you have an older card  and STFU.

Longer times between releases probably save your avg joe dipshit from problems. I can see them going to the website 3 times a month, installing new drivers without actually uninstalling (cleaning) the old ones (which is definitely partly Nvidia's and ATI's fault) and causing mayhem.
If you're smart enough to be updating drivers (properly), then you probably don't need to go to the official sites to get them (I know I never do).


----------



## eidairaman1 (Apr 25, 2008)

I learned that about the hotfix driver for AGP on ATI parts - don't install it on top of the regular driver, install it by itself.


TheGuruStud said:


> Someone here is a little freaking retarded. WHQL doesn't mean shit. I'd rather not have M$ slap a meaningless label on it. 174.74s are whql for 9 series, so just get the "beta" (which is identical) if you have an older card  and STFU.
> 
> Longer time between releasing probably saves your avg joe dipshit from problems. I can see them going to the website 3 times a month installing new drivers without actually uninstalling (cleaning) the old ones (which is definitely partly Nvidia's fault and ATI, etc) and causing mayhem.
> If you're smart enough to be updating drivers (properly), then you probably don't need to go to the official sites to get them (I know I never do).


----------



## department76 (Apr 25, 2008)

my initial reaction:  good for ATI!!!

i bought a 3870 brand new not long after release for $250, nice to see that the next top model (4870) will be over $300++

and yes, MSRP says a LOT about what the card will do with respect to previous releases.


----------



## Megasty (Apr 25, 2008)

Sapientwolf said:


> Looks impressive, especially since I'm planning a build in the coming months so hopefully it'll be out by then, it also looks like ATI will make it worthwhile to buy an X2 over Crossfiring 4870s because of the prices, assuming the 4870~$350 and the 4870X2~$500.



Especially if the 4870 X2 is faster than 4870 CrossFire, as held true for their predecessors


----------



## imperialreign (Apr 25, 2008)

Megasty said:


> Especially if the 4870 X2 is faster than 4870 CrossFire, as held true for their predecessors



Most definitely - I've been holding off for the GDDR4 3870 X2s to start dropping in price before I snag one along with a new PSU - but after these leaked reports of the HD4000 specs, I might keep saving so I can snag two 4870 X2s plus a PSU at the same time.

After running dual 1950 PROs, I've become a believer in CrossFire's ability to improve gameplay at higher resolutions, so I'm aiming at a dual-GPU card at minimum.

Two dual-GPU cards, preferably 



Based on these specs, though, it looks like the GPU market is going to become a stomping ground between the two companies again - and we haven't seen that in all its glory since ATI rolled out the X1900/50 series.


----------



## Dyno (Apr 25, 2008)

Okay, more than likely the 4870 X2s will get a 50MHz bump over the stock core, I'm thinking. Well, at least maybe for Sapphire and HIS; not sure about the rest. That would be a nice boost for the gigaflops, right? Is this a good educated guess? Anyone...


----------



## Sapientwolf (Apr 25, 2008)

I haven't built a Crossfire machine yet, but I only hear good things.  I was thinking about pairing up some 3870s on a more expensive X38 or X48 board, but with the introduction of the X2 series that seems to be working very well and the rumors of the HD 4xxx I may just have to wait and see.


----------



## brian.ca (Apr 25, 2008)

newtekie1 said:


> It is impossible because the claim was never made.  However, the 1GHz claim actually WAS made.  It is just more marketing BS put out by the graphics cards companies to trap the fanboys.



I suppose second-hand info equates to official marketing, and leaked specs equate to final word now?

Did AMD's marketing make any claims like that? Reading the original article I see, "while the 4870 will be the first mass-production GPU with a clock speed higher than 1 GHz. Prototype RV770 boards were clocked at about 1.05 GHz."  Right off the bat, the reference to prototypes should be a red flag for anyone looking to take that claim to heart, especially since the previous sentence said final clocks (albeit in reference to the 4850) weren't specified. It sounds like the guy at tgdaily might have heard the prototypes were clocked at 1.05 GHz, realized what benchmark that would set, and ran with it.

I could see your point if this was something that came directly from AMD, and if it did I'll go find a crow and some salt, but I was under the impression that AMD has overall been pretty damn tight-lipped about these new cards (this new article seems to echo that), and reading that article I'm not seeing any reason to think the info came direct from AMD. The only "from AMD" thing reported there seems to be that they'd be rolling out a significant number of products in May, and even then it goes on to say "sources now confirmed that the introductions will include desktop ... graphics parts," which points towards Dirk not specifying that. "We'll be releasing a product in May... I won't say what it is, but it will be the first mass-production GPU with a clock speed higher than 1 GHz" doesn't sound quite right, heh...

Likewise, I'm not sure I'd place much stock in leaked specs (actually it's kind of funny; if I remember correctly, when leaked specs of the 9000 series were posted I could have sworn you argued against their validity), especially from a source that seems to contradict itself within a week's time. Stuff like this should be an indication of where things are headed, but I wouldn't assume leaked specs are 100% final for any upcoming product from any company.


----------



## btarunr (Apr 25, 2008)

In that case the 'specs leakage' in the first post didn't come from AMD either.


----------



## tkpenalty (Apr 25, 2008)

It doesn't make sense for AMD to spend *more than one year* of R&D on the R700 series only to base it off the R600. And one more thing confuses me: what happened to the R700? Why is there already a "revision" RV770? The RV naming is used for non-flagship products. For example, R600 is flagship, RV670 _isn't_ flagship, but the R680 (2x RV670) is flagship. Note the naming.



btarunr said:


> In that case the 'specs leakage' in the first post didn't come from AMD either.


I'd say it's mostly speculation.

EDIT: It's all bullshit:

http://www.hardware-infos.com/news.php?news=2008
See this? This is "their source". It's evidently not from AMD. Moreover, see what Fudzilla says about the source: http://www.fudzilla.com/index.php?option=com_content&task=view&id=6994&Itemid=1 

Note, Fudzilla seem to be almost never wrong, and them actually saying that info is bull says a lot. Which means this news article is to be taken with a grain of salt...

512-bit memory bus, anyone?


----------



## Mussels (Apr 25, 2008)

Nvidia has a 71% market share, as posted earlier.

If there are 1,000 people, with 71% being NV users, that means 710 of them are on Nvidia and the rest are divided between SiS, Intel, S3, and ATI.
Let's say half of everyone crashed (a dead-even 50% of the entire group) - funnily enough, more of the crashes will still be caused by Nvidia, since there ARE MORE NVIDIA SYSTEMS TO CRASH.

Why can't you get that? Jesus...
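The proportional argument above can be sketched in a few lines of Python. The 71% share is the figure claimed in the post, and the identical 50% crash rate is an illustrative assumption, not a measurement:

```python
# Illustrative numbers only: the 71% share claimed in the post, and an
# assumed identical 50% crash rate for every vendor.
total_systems = 1000
share = {"nvidia": 0.71, "others": 0.29}
crash_rate = 0.50

# With equal per-system crash rates, crash counts simply track install base:
# nvidia accounts for roughly 355 of the ~500 crashes, others roughly 145.
crashes = {vendor: total_systems * s * crash_rate for vendor, s in share.items()}
```

The point of the sketch is only that a bigger install base produces more crash reports even when per-card reliability is identical.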


----------



## yogurt_21 (Apr 25, 2008)

newtekie1 said:


> What happened to "the 4870 will be the first mass-production GPU with a clock speed higher than 1GHz"?



Probably the same thing that happened to the 32-pixel-pipeline X1800: it was never going to happen. Remember that the article is from TG Daily, not AMD directly, so no, there was no official claim from AMD/ATI that the 4870 was supposed to have a 1GHz core - just TG Daily. So if it's the marketing BS spewer you're after, talk to TG Daily.


----------



## yogurt_21 (Apr 25, 2008)

Mussels said:


> nvidia has a 71% market share as posted earlier.
> 
> If there are 1,000 people, with 71% being Nv users - that means 710 of them are Nvidia, and the rest are divided between SiS, Intel, S3, and ATI.
> lets say half of them crashed (dead split, 50% of that entire group) - funnily enough, more of the crashes will be caused by nvidia since there ARE MORE NVIDIA SYSTEMS TO CRASH.
> ...



Nvidia has nowhere near that amount when Intel is factored into the equation; it's more like 75% Intel, 15% Nvidia, 6% ATI and 4% split between SiS, VIA, S3, etc.

AND if I remember correctly, Intel has fewer crashes than either Nvidia or ATI. The article basically praised Intel onboard drivers, as well as Intel chipset drivers (the article never stated that they polled users with add-on graphics only), meaning that Nvidia chipsets and AMD/ATI chipsets are in there as well.

Basically ALL of you read the article wrong, and somehow you're all thinking the article was on add-on graphics only - quite funny, as Intel DOESN'T SELL ADD-ON GRAPHICS and was included.

90% of computer users on Vista have onboard graphics, meaning it has as much to do with chipset drivers as it does VGA drivers. Seriously, get off that article, because all it is is Microshaft blaming everyone else for its own mistakes.


----------



## Valdez (Apr 25, 2008)

TheGuruStud said:


> Someone here is a little freaking retarded. WHQL doesn't mean shit. I'd rather not have M$ slap a meaningless label on it. 174.74s are whql for 9 series, so just get the "beta" (which is identical) if you have an older card  and STFU.
> 
> Longer time between releasing probably saves your avg joe dipshit from problems. I can see them going to the website 3 times a month installing new drivers without actually uninstalling (cleaning) the old ones (which is definitely partly Nvidia's fault and ATI, etc) and causing mayhem.
> If you're smart enough to be updating drivers (properly), then you probably don't need to go to the official sites to get them (I know I never do).




Lol man, I just wrote up my experiences with the 7900gt. I didn't lie, and I know how to install new drivers properly. Jesus, what a troll you are


----------



## tkpenalty (Apr 25, 2008)

tkpenalty said:


> It doesn't make sense if AMD spends *more than one year* of R&D on the R700 series to make it based off the R600 and one more thing which confuses me, what happened to the R700? Why is there already a "revision" RV770? The RV naming is used for non-flagship products. Example R600 is flagship, RV670 _isn't_ flagship, but the R680 (2xRV670) is flagship. Note the naming.
> 
> 
> I'd say its mostly speculation.
> ...



Did people ignore this?


----------



## mandelore (Apr 25, 2008)

Ahhh, 512-bit bus, we can certainly hope!!! That means I won't be downgrading my memory bus when I finally wave this sweet sweet 2900xt goodbyeeeee


----------



## Valdez (Apr 25, 2008)

mandelore said:


> ahhh, 512bit bus, we can certainly hope!!! and that means I wont be downgrading my memory bus when i finally wave this sweet sweet 2900xt goodbyeeeee



I don't think they will go 512-bit again (at least not in the near future). The manufacturing costs were much higher with it, and ATI has to make GPUs with low manufacturing cost (to maximize profit).

Anyway, 512-bit and GDDR5? That means a lot of bandwidth. What for? GDDR3 is cheaper; 0.8ns GDDR3 with a 512-bit memory interface would make some sense, but very fast RAM AND 512-bit makes no sense at all.

But it would really be a surprise if we got a cheap card with 1GB of GDDR5 and a 512-bit bus
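The bandwidth arithmetic behind this point is simple: bytes per transfer times transfers per second. The data rate below is an assumption for illustration - reading the article's "1.73 GHz" GDDR5 as roughly 3.46 GT/s effective is a guess, since GDDR5 clock conventions vary between sources:

```python
def mem_bandwidth_gbps(bus_width_bits: int, data_rate_gtps: float) -> float:
    """Peak memory bandwidth in GB/s: bytes per transfer times transfers per second."""
    return bus_width_bits / 8 * data_rate_gtps

# Assumption: the article's "1.73 GHz" GDDR5 read as ~3.46 GT/s effective.
narrow = mem_bandwidth_gbps(256, 3.46)  # ~110.7 GB/s on a 256-bit bus
wide = mem_bandwidth_gbps(512, 3.46)    # ~221.4 GB/s on 512-bit - the overkill case
```

Doubling the bus width doubles peak bandwidth, which is why fast GDDR5 plus a 512-bit bus looks like more bandwidth than any 2008 game could use.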


----------



## HTC (Apr 25, 2008)

Valdez said:


> I don't think they will go 512-bit again (at least not in the near future). The manufacturing costs were much higher with it, and ATI has to make GPUs with low manufacturing cost (to maximize profit).
> 
> Anyway, 512-bit and GDDR5? That means a lot of bandwidth. What for? *GDDR3 is cheaper; 0.8ns GDDR3 with a 512-bit memory interface would make some sense, but very fast RAM AND 512-bit makes no sense at all*.
> 
> But it would really be a surprise if we got a cheap card with 1GB of GDDR5 and a 512-bit bus



From what I've read, the reason they opted for GDDR5 is that it runs MUCH cooler than GDDR3 and can therefore be clocked higher.


----------



## DarkMatter (Apr 25, 2008)

I don't know why people are being so conservative with their expectations for these cards. They look almost twice as fast as current ATI cards, based on the specs shown here. You can believe them or not, that's another story, but looking at them and not expecting at least a 75% performance increase is really pessimistic. If the specs are true, these cards have the potential to be more than twice as fast as RV670. 32 TMUs clocked higher than on RV670 will help for sure, as will double the GFLOPS on the shaders. ROPs mean little nowadays, especially on ATI cards where AA is done in the shaders. Such high memory bandwidth is not really needed, but it will help increase performance a bit. All in all, I would expect a card with these specs to be more than twice as fast as the current generation of Radeons.



yogurt_21 said:


> nvidia has nowhere near that amount when intel is factored into the equation it's more like 75% intel, 15% nvidia, 6% ati and 4% split between sis, via, s3 etc.
> 
> AND if I remember crrectly intel has less crashes than either nvidia or ati. the article basically praised intel onboard drivers. as well as intel chipset drivers (as the article never stated that they polled users with addon graphics only) meaning that nvidia chipsets and amd/ati chipsets are in there as well.
> 
> ...



I have quoted you since yours is the last post on the Vista crashes subject, but this is directed at everybody talking about them.

If we HAVE to talk about Vista crashes and Nvidia drivers in a thread about new HD4000 cards, at least let's do it with real numbers. Here you have actual market share figures:

http://www.xbitlabs.com/news/video/..._Back_Market_Share_from_Intel_Nvidia_JPR.html

Here's a summary: Intel's average for the year is around 40%, ~30% for Nvidia and ~19% for ATI.

Now I have to say that I agree a bit with newtekie. Even though he is using inflated numbers, what he said makes sense. You have to take into account that most Intel IGP users are not doing anything stressful enough to get a graphics-related crash. The chances of Office, Mozilla or eMule causing graphics-related crashes are not very high, methinks. Anyone with more "ambitious" needs will use a discrete card, even if it's only for watching movies on the PC, or they will use ATI/Nvidia integrated graphics instead of an Intel IGP.
You can't use raw market share numbers in this situation, since the use people give to their machines is more relevant than the graphics adapter itself. Overclocking, driver changes, hardcore gaming, benchmarking/stressing the card: all of them are risk factors that could eventually lead to crashes, and none of them are going to happen on an Intel IGP.
I would apply the same to Vista. Among people using Vista there's a bigger chance of finding ATI/Nvidia discrete cards than on XP machines. The last graphics-card-related Vista news I heard (aside from the crashes story) was about Vista increasing the number of discrete graphics cards sold.
Those two facts take Intel out of the equation IMO. So that leaves us with ATI vs. Nvidia crash numbers. Here Nvidia has ~66% of the market share, but there were reports that Nvidia was selling way more DX10 high-end cards (before RV670 - and a year of selling "only" the Nvidia 8 series adds up to a lot of users), while ATI was selling more low-end and integrated graphics. If you look at the charts, you can see that Nvidia+ATI sold 52 million graphics adapters in Q4 2007, and in the next link we can see they sold 31 million discrete graphics cards in the same timeframe:

http://www.xbitlabs.com/news/video/...Rise_but_Prices_Down_Jon_Peddie_Research.html 

Almost half of the adapters are integrated, where ATI was selling more. Again, there's a low risk factor among people using integrated graphics. We can easily conclude that among the at-risk crowd the number of Nvidia cards is a lot bigger than pure market share would suggest.

In the end, what I mean is that Nvidia causing more crashes is purely statistics at work and has nothing to do with driver quality. Neither company offers better drivers than the other.
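A quick sanity check of the shipment figures quoted above (52 million combined Nvidia+ATI adapters vs. 31 million discrete cards in Q4 2007, both taken from the linked reports):

```python
# Figures quoted in the post: 52M Nvidia+ATI adapters in Q4 2007, 31M discrete.
total_adapters = 52_000_000
discrete = 31_000_000

integrated = total_adapters - discrete          # 21M integrated adapters
integrated_share = integrated / total_adapters  # ~0.40, a bit under half
```

So roughly 40% of the combined shipments were integrated: a bit under half, but in the neighbourhood of the "almost half" reading in the post.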


----------



## magibeg (Apr 25, 2008)

DarkMatter said:


> I don't know why people is being so conservative with their expectations about these cards. They look almost double as fast as current Ati cards, based on the specs shown here. You can believe them or not, that's another story, but looking at them and not expecting at least a 75% performance increase is really pesimistic. If the specs are true, these cards have the potential to be more than twice as fast as RV670. 32 TMUs clocked higher than on RV670 will help for sure, as well as double the GFlops on the shaders. ROPs mean nothing nowadays, specially on Ati cards where AA is done in the shaders. Such high memory bandwidth is not really needed, but it will help increase performance a bit. All in all, I would expect a card with these specs being more than twice as fast as current generation of Radeons.
> 
> 
> 
> ...



Beautifully said. Now let's put it to rest


----------



## newtekie1 (Apr 25, 2008)

brian.ca said:


> I suppose second hand info equates to official marketing and leaked specs equate to final words now?
> 
> Did AMD's marketing make any claims like that? Reading the original article I see, "while the 4870 will be the first mass-production GPU with a clock speed higher than 1 GHz. Prototype RV770 boards were clocked at about 1.05 GHz."  Right off the bat the reference to prototypes should be a bit of a redflag for anyone looking to take that claim to heart.  Especially when he referred to final clocks (albeit in reference to the 4850) not being specified in the previous sentence.  It sounds like buddy @ tgdaily might have heard the prototypes were clocked at 1.05 GHz, realized the benchmark that sets and ran with that.
> 
> ...




That was actually my exact point; thank you for being the only one capable of getting it. It can all be summed up in one simple sentence: leaked specs don't mean shit.

The specs here are from the exact same source as the claim that the 4870 was going to be the first mass-produced GPU to run at 1GHz (TG Daily), and they are probably just as full of BS.


----------



## MrMilli (Apr 25, 2008)

Maybe you guys should check out what i posted here:
http://forums.techpowerup.com/showpost.php?p=764016&postcount=21

About the crashes in Vista:
The numbers MS released only cover the crashes Vista had when it was released, so that's only Q1 2007, maybe Q2 too.

These are the numbers (Q1'07 market share vs. Q4'06 market share):

1. Intel: 38.7% - 37.4%
2. Nvidia: 28.5% - 28.5%
3. AMD: 21.9% - 23.0%
4. VIA Technologies: 6.4% - 6.7%
5. Silicon Integrated Systems (SiS): 4.3% - 4.5%
6. Others: <1% - <1%

So claiming there were 2x more nVidia cards in that period, and that's why there are 2x more crashes, is ridiculous. Even ATI is not too far off nVidia, and ATI's market share used to be bigger in the previous quarters; some of those people upgraded to Vista too. So, as I said, the statement that there are far more nVidia cards can't stand. I have first-hand experience with the initial Vista drivers for the 8800GTX. You really didn't have to go into a game to experience crashes; it could crash or blue-screen just from Aero or standby. So you shouldn't divide up IGPs and discrete GPUs.
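The Q1'07 shares above give a rough expected ratio of Nvidia to AMD crash reports if crashes scaled purely with install base; this is a back-of-envelope check of the post's argument, not a claim about the actual crash data:

```python
# Q1'07 market shares from the list above.
nvidia_share = 28.5
amd_share = 21.9

# If crashes scaled purely with install base, Nvidia would log only ~1.3x
# AMD's crash reports, well short of the ~2x actually reported.
expected_ratio = nvidia_share / amd_share
```

A 2x crash ratio on a 1.3x install-base ratio is the gap the post is pointing at.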


----------



## [I.R.A]_FBi (Apr 25, 2008)

STArT A NExT TOPiC FoR THiS DRiVER THiNG. JEeZ


----------



## laszlo (Apr 25, 2008)

This thread has nothing in common with the original post. I remember the early HD2900 benchmarks posted by an unknown site, which vanished after the fiasco.

Why don't we all wait till a reliable spec or a bench shows up?

It's a waste of time reading all the posts... fans from both sides arguing over nothing


----------



## GSG-9 (Apr 25, 2008)

They do look juicy. I'm not on a timeline for a video upgrade, but I like the specs, I guess.


----------



## flashstar (Apr 25, 2008)

Look, ATI isn't just going to release another inferior card. They know how fast the competition is, so I'm sure the R770 will clean up against the G92. The question, though, is whether the R770 will beat the G100. That's what really matters, because ATI isn't going to come out with a completely new card for another year; they'll have a revision in 6 months, but that's it. For ATI's sake I'm hoping the R770 is 80-90% faster than the R670. That would give them a good 50% lead on the G92 and hopefully pose a serious threat to the G100.


----------



## btarunr (Apr 25, 2008)

flashstar said:


> Look, ATI isn't just going to release another inferior card. They know how fast the competition is and so I'm sure that the R770 will clean up the G92.



And you think NVidia will let the RV770 compete with G92? Unlikely. The next-generation NVidia GPU cometh.


----------



## newtekie1 (Apr 25, 2008)

flashstar said:


> Look, ATI isn't just going to release another inferior card. They know how fast the competition is and so I'm sure that the R770 will clean up the G92. The question though is whether or not the R770 will beat the G100. That's what really matters because ATI isn't going to come out with a completely new card for another year. They will have a revision in 6 months, but that's it. For ATI's sake I'm hoping that the R770 is 80-90% faster than the R670. That will give them a good 50% lead on the G92 and will hopefully pose a serious threat to the G100.



+1  R770 will have to compete with G100, and I'm hoping it does a damn good job of it.


----------



## MrMilli (Apr 25, 2008)

Just was at a tradeshow of my dealer where Sapphire was present. The representative of Sapphire told me that RV770 based products will be showcased at Computex and released shortly after.


----------



## lemonadesoda (Apr 25, 2008)

flashstar said:


> Look, ATI isn't just going to release another inferior card. They know how fast the competition is and so I'm sure that the R770 will clean up the G92.


The logic is fallacious. That's like saying AMD know how fast the Intel chips are and so will not release a CPU that isn't faster than Intel's. Clearly rubbish. AMD will release whatever they CAN... but if they don't have the architecture or technology, there is NOTHING they can do about it.

I think the 4xxx series will be great. Certainly a lot better than the 3xxx series, esp. with DOUBLE the texture units.  That is a bottleneck solved.  So here's to the benchmarks  

HD.3870.3Dmark06=12,590 vs. HD.4870.3Dmark06benchmark.leak.html=21,223


----------



## GSG-9 (Apr 25, 2008)

flashstar said:


> Look, ATI isn't just going to release another inferior card.




They have been doing it since right after the 9800... I love ATI, but come on, let's be realistic. You can't just say they're not going to; there's at least a feasible chance they will.


----------



## Thermopylae_480 (Apr 25, 2008)

Please remember to remain on topic.

Thanks


----------



## Mussels (Apr 25, 2008)

yogurt_21 said:


> nvidia has nowhere near that amount when intel is factored into the equation it's more like 75% intel, 15% nvidia, 6% ati and 4% split between sis, via, s3 etc.
> 
> AND if I remember crrectly intel has less crashes than either nvidia or ati. the article basically praised intel onboard drivers. as well as intel chipset drivers (as the article never stated that they polled users with addon graphics only) meaning that nvidia chipsets and amd/ati chipsets are in there as well.
> 
> ...



Nvidia has a 71% share in discrete graphics. That is a fact, although not one that has been linked to in this thread. NV would have more errors by sheer numbers alone. I got a bit annoyed earlier, but it's not a fact that can really be argued - yeah, Intel has more than Nvidia overall, but Nvidia has more than AMD, and I don't see people choosing Intel video because it crashes less.


----------



## DaedalusHelios (Apr 25, 2008)

Intel graphics is not a choice. That's like saying McDonald's bags are the most popular fast-food bags. It's not because somebody says "let's go get some McDonald's... they have the best bags."

Intel graphics is just handed out, pretty much free, as part of the transaction when buying an Intel-based PC.


----------



## swaaye (Apr 25, 2008)

16 ROPs still, huh?

I think we will be back to AMD at the mid-range and NV at the top in no time. This chip isn't going to really blow past the current top stuff, even in CF. And NV certainly isn't just sending its engineers to posh parties and skipping out on R&D for a new GPU.


----------



## yogurt_21 (Apr 25, 2008)

Mussels said:


> Nvidia has a 71% share in discrete graphics. That is a fact, although not one that has been linked to in this thread. NV would have more errors by sheer numbers alone. I got a bit annoyed earlier, but its not a fact that can really be argued - yeah intel have more than Nvidia, but Nvidia have more than AMD - i dont see people choosing intel video because it crashes less.



I never argued the discrete market share. I argued that applying a 71% market share to Nvidia in that article was ridiculous; as others have shown, when Intel is in the equation, Nvidia doesn't have anywhere near that amount. And the article didn't even say it focused on graphics - it just reported the number of crashes due to drivers, and AMD/ATI, Intel, Nvidia, and VIA all ship drivers for motherboards as well as graphics. This will also include any TV tuners or other add-on devices any of the above companies make that have separate drivers. You also have to remember that the specs of the systems were not shown, meaning any Joe Schmoe who wanted to upgrade his nForce motherboard with an AMD Athlon CPU to Vista is factored in there, as is any user who downloaded an Nvidia Quadro or ATI FireGL driver for their GeForce or Radeon, or who tried to install the wrong motherboard drivers.

All in all, you have to take the article with a grain of salt. I mean, it came off 158 pages of support tickets without any real system info. That's inconclusive in any book.


----------



## Exceededgoku (Apr 25, 2008)

Discuss the driver issue here
http://forums.techpowerup.com/showthread.php?p=766039#post766039


----------



## grndzro (Apr 26, 2008)

GSG-9 said:


> They have been doing it since right after the 9800...I love ati but come on, lets be realistic. You cant just say there not going to, theres at least a feasible chance they will.



Who the hell are you kidding?
I'm still running CrossFire 1950XTXs.
I've never had driver problems.
I have never run across a game yet where I can't max out the graphics and get 70+ fps.
(Not counting Crysis... it's an unoptimized POS that should have been scrapped.)

I did work for Dell as a technician. Nvidia and Microsoft fought for over a year about who was supposed to foot the bill for rewriting the Nvidia Vista drivers, and the drivers absolutely sucked during that time.
Another point is that many prebuilt systems never report their crashes to MS; they are reported to their respective companies, which are never counted in the polls... and are 80% Nvidia.
The Nvidia/Microsoft fiasco over the Vista drivers put Nvidia quite a ways behind on Vista compatibility.

Another thing to consider is that ATITool is far more capable at tuning ATI cards and offers far more options for us ATI fanboys than any Nvidia overclocking tool, and that alone creates far more crashes than would normally happen were we content to just use the ATI control panel - and most ATI users do use ATITool for tweaking their graphics.


----------



## HTC (Apr 26, 2008)

grndzro said:


> Who the hell are you kidding?
> I'm still running Crossfire 1950XTX.
> I've never had driver problems
> I have never run across a game yet that I can't max out the graphics and get 70+ fps.
> ...



By any chance, did you bother to read post #180? You know: the post *right before yours*?


----------



## Megasty (Apr 26, 2008)

HTC said:


> By any chance, did you bother to read post #180? You know: the post *right before yours*?



Of course not 

Before this thread completely goes to hell: those specs have a few areas of focus that would lead some to think these cards are at least twice as fast as their 3800 counterparts. The ROP count kills that thought right off. The cards will still pwn, but if the architecture had allowed for 24 or 32 ROPs, they would have been out of this world. I do have to commend them for fixing the TMU mess, as that screwed us over more than anything else


----------



## BumbRush (Apr 26, 2008)

Mussels said:


> both have. repeatedly.
> 
> Aimed at no one in particular:
> ANYTHING YOU SAY ABOUT NV OR ATI SUCKING CAN BE APPLIED EQUALLY TO THE OTHER ONE. PLEASE STOP THIS REPETITIVE FANBOI CRAP.



but at least ATI didn't put out an FX-class line of cards


----------



## GSG-9 (Apr 26, 2008)

BumbRush said:


> but at least ati didnt put out an FX class line of cards



Nope, ATI released the 'VE' series (Radeon 7000)


----------



## BumbRush (Apr 26, 2008)

imperialreign said:


> nVidia pushes out 2+ beta drivers a month.  Usually they only have one alpha release a month.  They're on par with ATI; only difference is that ATI doesn't release beta drivers left and right like nVidia does - instead, they rely heavily on feedback crews, and consumer feedback (us) for driver development.  If there's an issue they're trying to resolve, we typically see either a hotfix or a beta release.
> 
> Now, if we start calling beta drivers as "official" driver releases - than yeah, I'll defi admit that nVidia releases _more_ drivers than ATI does.
> 
> ...



Initial release: January 19, 2004 - Ver. 4.1 (Pkg. ver. 7.97)

So yeah, Nvidia's support is so much better... yet most of their beta drivers don't support all the cards that use the same chips (G92), and they are NOT official drivers - they are BETA.

Personally, I have an 8800GT, and really, I wish the 3870 had been out when I got my card, because that's what I would have, and I wouldn't have had to reinstall x64 Windows 4 times to get the system usable.


----------



## eidairaman1 (Apr 26, 2008)

HTC said:


> By any chance, did you bother to read post #180? You know: the post *right before yours*?




I didn't have problems with drivers until the 7.8s and higher; once the hotfix came out I switched to those and had no problems.

Beyond that, please let's get back on topic; this thread is not about drivers but about ATI's Radeon 4000 line.


----------



## btarunr (Apr 26, 2008)

BumbRush said:


> but at least ati didnt put out an FX class line of cards



They put out HD2000 series instead


----------



## flashstar (Apr 26, 2008)

On Wikipedia, it says that the GT200 has 1000 GFLOPS of processing power. The R770 is estimated to have 1000 as well. It appears the current situation between ATI and Nvidia won't change much, with the exception of ATI being out first this time in the contest between the GT200 and R770.

What will matter is how competitive ATI's pricing is. I'm betting we will see a major price drop from ATI on the R770 in Q3 2008 with the release of GT200 products.
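A hedged back-of-envelope for where a ~1000 GFLOPS figure could come from: it assumes the leaked 480 stream processors, the rumored ~1.05 GHz prototype shader clock, and the common 2-FLOPs-per-ALU (one multiply-add per cycle) counting convention. All three inputs are rumors or assumptions, not confirmed specs:

```python
def gflops(alus: int, clock_ghz: float, flops_per_alu: int = 2) -> float:
    # 2 FLOPs per ALU per clock assumes one multiply-add issued each cycle.
    return alus * flops_per_alu * clock_ghz

# Rumored inputs: 480 stream processors at the ~1.05 GHz prototype clock.
estimate = gflops(480, 1.05)  # ~1008 GFLOPS, close to the cited ~1000 figure
```

With those assumptions the arithmetic lands right around the cited number, which is the only point of the sketch.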


----------



## BumbRush (Apr 26, 2008)

GSG-9 said:


> Nope, ATI released the 'VE' series (Radeon 7000)



The VE and SE cards are kind of like the MX line: they're just weak cards for people who want cheap.

Look how many fools bought GF4 MX cards thinking "it's GeForce 4, it's gotta be good".

The VE was bad for games, but it played DVDs very well, and OLD games were OK; hell, the MPEG decoding on them really helped slower systems play DVDs for old-school HTPC/MPC builds 




btarunr said:


> They put out HD2000 series instead



At least the HD2000 cards are capable of doing what they advertise, even if they lose performance to AA and such.

The FX line CAN'T play DX9 games worth a damn. It's the one thing Nvidia did that truly ticked me off: they sold me a top-of-the-line "DX9" card that turned out to be utterly unable to play DX9 games... unless you like a 4fps slideshow...

Meh, I hope the 4800s turn out to be kickass


----------



## btarunr (Apr 26, 2008)

By the same token, I should be able to play any DX10 game at appropriate settings on any HD2000 series card. I can't play Crysis on even the lowest setting on an HD2400 Pro... 4fps slideshow... which I don't like.


----------



## BumbRush (Apr 26, 2008)

The 2400 is NOT a gaming card though, just like the 8400 isn't a gaming card; they are made for business and work/video-playback systems. Wanting to play anything other than some 10-year-old stuff on a low-end "value" card is like wanting to use a Geo Metro to tow a 24-foot boat


----------



## btarunr (Apr 26, 2008)

BumbRush said:


> 2400 is NOT a gaming card tho, just like the 8400 isnt a gaming card, they are made for buisness and work/video playback systems, wanting to play any game other then some 10year old stuff on a low end "value" card is like wanting to use a geo metro to tow a 24foot boat



I will use your logic and equate the HD2400 to the FX 5200 (which, within its line, couldn't play DX9 games).  

No more FX / HD2000 discussion. Barring the HD2900 series, HD2000 was as much a hollow promise to consumers as GeForce FX was.


----------



## eidairaman1 (Apr 26, 2008)

The 5200 could run NFSU and U2 fine at normal settings, it just couldn't pump graphics. every card has its niche.

But TBH I think this topic needs to be locked as it has gotten way out of context.


----------



## MrMilli (Apr 26, 2008)

Megasty said:


> Of course not
> 
> Before this thread completely goes to hell, those specs seem to have a few areas of focus that would cause some to think that they are atleast twice as fast as their 3800 counterparts. The ROPs kills that thought right off. The cards will still pwn, but if the architecture would have allowed for 24 or 32 ROPs, they would have been out of this world. I do have to commend them for fixing the TMU mess as that screwed us over more than anything else



Ati really doesn't need more ROPs because they do AA in the shaders (unlike nVidia). The 3870 already proves that, since it's more competitive at higher resolutions.
http://www.computerbase.de/artikel/..._x2/20/#abschnitt_performancerating_qualitaet
Check out 2560x1600 ... ATI is short on TMUs, not ROPs.


----------



## Megasty (Apr 26, 2008)

MrMilli said:


> Ati really doesn't need more ROP's because they do AA in the shaders (unlike nVidia). The 3870 already proves that since it's more competitive at higher resolutions.
> http://www.computerbase.de/artikel/..._x2/20/#abschnitt_performancerating_qualitaet
> Check out 2560x1600 ... ATI is short on TMU's not ROP's.



The choking point in the 3800 series was definitely the TMUs. The relatively poor performance of the 3870 came from only having 16 TMUs. The card would fly when you first started gaming & slow to a crawl when you entered an environment with a lot going on. The 3870X2 nearly solved that problem while showing what a card with 32 total TMUs & ROPs can do, although they work over 2 GPUs. By doubling the TMUs, ATI has tackled the problem head-on. They still have half the TMUs of current Nvidia cards, which they've tried to offset with a ridiculous number of basic shaders, but that's an architecture issue isn't it


----------



## DarkMatter (Apr 26, 2008)

MrMilli said:


> Ati really doesn't need more ROP's because they do AA in the shaders (unlike nVidia). The 3870 already proves that since it's more competitive at higher resolutions.
> http://www.computerbase.de/artikel/..._x2/20/#abschnitt_performancerating_qualitaet
> Check out 2560x1600 ... ATI is short on TMU's not ROP's.



And I think the architecture is only going to get better with the generations in that respect. Since AA is done in the shaders, it takes up X shading power. Let's make a bold speculation based on the performance hit when AA is enabled and say it takes up 80 SPs on the HD3870 at a given resolution (25% of its power; it's just an example). At the same resolution the HD4 series is going to need the same power, 80 SPs, but the difference is:

480 - 80 = 400 
320 - 80 = 240

We have 50% more shaders, but that translates to 66% more *free* shaders, and this will go up as we add shaders. IMO dedicated hardware (more ROPs) is still better, but Ati is going to improve, that's for sure. 66% more shaders clocked 35% higher (1050MHz / 775MHz) translates to roughly 125% more performance: 

P x 1.66 x 1.35 ≈ 2.24 P

Interestingly, we have more or less the same improvement in the texture mapping area, double the units clocked a bit higher:

Texture fillrate -> TFR x 2 x 850/775 ≈ 2.19 TFR

I guess they are aiming at ~2.2x the performance of the HD3 series.

And I really hope we are correct and Ati's new generation comes with a 100% improvement or greater, since according to leaked specs Nvidia's chip IS going to be twice as fast as G92, since it doubles everything. 

We need Ati back on the high-end market.
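The back-of-envelope math in the post above can be put in a tiny script. To be clear, everything here is the post's own speculation: the 80-SP AA cost is an invented example, and the clocks come from the leaked specs, not from any measurement.

```python
# Back-of-envelope scaling estimate from the speculation above.
# All inputs are the post's assumptions, not measured figures.

def free_shaders(total_sps, aa_cost_sps=80):
    """Shaders left over once a fixed AA cost is paid in the shader array."""
    return total_sps - aa_cost_sps

hd3870_free = free_shaders(320)   # 240
hd4870_free = free_shaders(480)   # 400

shader_ratio = hd4870_free / hd3870_free        # ~1.67x more free shaders
clock_ratio = 1050 / 775                        # ~1.35x shader clock
est_shader_speedup = shader_ratio * clock_ratio # ~2.26x with unrounded ratios

# Texture fill-rate: double the TMUs at a slightly higher core clock
est_texture_speedup = 2 * 850 / 775             # ~2.19x

print(f"shader estimate:  {est_shader_speedup:.2f}x")
print(f"texture estimate: {est_texture_speedup:.2f}x")
```

Using the unrounded ratios gives about 2.26x on the shader side, in line with the ~2.2x the post lands on.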


----------



## mandelore (Apr 26, 2008)

DarkMatter said:


> And I think the architecture is only going to better with the generations in that respect. Since AA is done in the shaders it takes up X shading power. Let's do a bold speculation based on performance hit when AA is enabled and say it takes up 80 SP on the HD3870 at a said resolution (25% of power, please it's just to give an example) . At the same resolution the HD4 series are going to need the same power, 80 SPs, but the difference is:
> 
> 480 - 80 = 400
> 320 - 80 = 240
> ...



then double that up for the 4870x2


----------



## newtekie1 (Apr 26, 2008)

grndzro said:


> Another thing to consider is that ATI-Tool is far more complex at tuning ATI cards and offers far more options for us ATI fanboys than any Nvidia overclocking tool, and that alone creates far more crashes than would normally happen were we content to just use the ATI control panel. and most ATI users do use ATI-Tool for tweaking their graphics.



Have you ever even used Rivatuner?  It is far more complicated than ATItool.


----------



## lemonadesoda (Apr 26, 2008)

DarkMatter said:


> ... lots of calcs leading to conclusion of 2.2x performance.


Given the same architecture, higher clocks, and more shaders, I think these are the performance implications:

1./ Broadly similar performance at standard resolutions e.g. 1280x1024 with no AA/FSAA effects, since there are no architectural changes
2./ General improvement in line with clock-for-clock increases, 10-20%
3./ The increase to 32 TMUs will mean the cards won't CHOKE at higher resolutions. They will be able to handle 1920x1200 without hitting the wall
4./ Currently you can dial up 4x AA without any performance hit. With the extra shaders you can do the same at 1920x1200 now
5./ With the extra shaders, you will be able to dial up 8x or 16x at 1280x1024 without a significant hit.
6./ The GPU will run hotter and require more power
7./ Compensated by GDDR5 memory that will require less power and run a bit cooler

Net net... get the GDDR5 model.

Will there be a "jump" in performance like we saw between the x19xx series and hd38xx? No.


----------



## flashstar (Apr 26, 2008)

lemonadesoda said:


> Will there be a "jump" in performance like we saw between the x19xx series and hd38xx? No.



That is where I have to disagree with you. If performance doesn't "jump", ATI will fail. Then AMD will be very vulnerable to a buyout from some other company and then who knows what will happen. ATI knows that it has to be at least on par with the GT200.


----------



## eidairaman1 (Apr 26, 2008)

Beyond Shaders, ROPs, TMUs there is the Fact of the Basic Transistor Density.


DarkMatter said:


> And I think the architecture is only going to better with the generations in that respect. Since AA is done in the shaders it takes up X shading power. Let's do a bold speculation based on performance hit when AA is enabled and say it takes up 80 SP on the HD3870 at a said resolution (25% of power, please it's just to give an example) . At the same resolution the HD4 series are going to need the same power, 80 SPs, but the difference is:
> 
> 480 - 80 = 400
> 320 - 80 = 240
> ...


----------



## eidairaman1 (Apr 26, 2008)

flashstar said:


> That is where I have to disagree with you. If performance doesn't "jump", ATI will fail. Then AMD will be very vulnerable to a buyout from some other company and then who knows what will happen. ATI knows that it has to be at least on par with the GT200.



What put them behind schedule was the 2900 line. The 3800 came about due to the power draw of the 2900. Many drivers later, the 2900 is a good card if you have the power to run it. The Radeon 4 series is on schedule according to ATi.


----------



## btarunr (Apr 26, 2008)

eidairaman1 said:


> Beyond Shaders, ROPs, TMUs there is the Fact of the Basic Transistor Density.



Beyond all that...developer-level optimisations for games and 3D Apps. The basic architecture of a GPU has diversified very much after the advent of DX 10.


----------



## DarkMatter (Apr 26, 2008)

lemonadesoda said:


> Given the same architecture, higher clocks, and more shaders, I think these are the performance implications:
> 
> 1./ Broadly similar performance at standard resolutions e.g. 1280x1024 and with no AA FSAA effects since no architectural changes
> 2./ General improvement in line with clock-for-clock increases 10-20%
> ...



Want to place a bet? No, seriously, that almost made my day.

So according to you, where is the extra performance supposed to come from if GFLOPS, texture fill-rate and memory bandwidth don't increase at all??? Does performance come out of thin air?

You are not very versed in GPU architectures, are you?


----------



## MrMilli (Apr 26, 2008)

lemonadesoda said:


> Given the same architecture, higher clocks, and more shaders, I think these are the performance implications:
> 
> 1./ Broadly similar performance at standard resolutions e.g. 1280x1024 and with no AA FSAA effects since no architectural changes
> 2./ General improvement in line with clock-for-clock increases 10-20%
> ...



Oh boy, never have I seen a guy who knows so little about GPU architectures make such a long and bold (and completely wrong) statement.
1- If the CPU can deliver, fps will always increase. Since every aspect of RV770 is almost 2x that of RV670, theoretically it can do 2x the fps. What you say is only correct if the CPU is not fast enough, but that's not the point here. Secondly, where do they state that it will (or will not) have exactly the same architecture?
2- Indeed, clock increases!
3- First of all, the 3870 isn't hitting a wall at 1920x1200. Actually it's gaining a lot of ground at 2560x1600. TMUs don't have anything to do with the resolution, btw. The increase in TMUs will help a lot with shaders and texture lookups.
4- Enabling 4x AA has a performance hit at any resolution. Fact! Only when your CPU is already too slow to deliver enough fps will you not see a performance hit.
5- Nonsense.
6- Obviously. But anything beyond that is guessing. 55nm has matured a lot over the last year.
7- Compensated by GDDR5? Obviously you love guessing.

Performance increase over RV670? Almost double if not more.
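The CPU-bottleneck argument in points 1 and 4 can be sketched with a toy frame-time model: the frame rate is capped by whichever of the CPU or GPU takes longer per frame, so doubling GPU throughput only shows up when the GPU is the slower side. The millisecond costs below are made up purely for illustration.

```python
# Toy frame-time model of the CPU-bottleneck argument.
# Frame rate is limited by the slower of the two per-frame costs.

def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

# GPU-bound case: halving the GPU time doubles the frame rate
print(fps(cpu_ms=10, gpu_ms=40))  # 25.0
print(fps(cpu_ms=10, gpu_ms=20))  # 50.0

# CPU-bound case (e.g. 3DMark06 on a 3GHz quad): the same GPU
# improvement barely helps, because the CPU now sets the cap
print(fps(cpu_ms=30, gpu_ms=40))  # 25.0
print(fps(cpu_ms=30, gpu_ms=20))  # ~33, not 50
```

This is why the same card swap can look like a 2x gain in a GPU-bound game and a near-zero gain in a CPU-bound benchmark.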


----------



## GSG-9 (Apr 26, 2008)

lemonadesoda said:


> Will there be a "jump" in performance like we saw between the x19xx series and hd38xx? No.



Everything about the 4xxx series suggests it will be a dramatic performance increase. Anything else I have to say is generally covered in the two posts above me and does not need to be elaborated on.


----------



## lemonadesoda (Apr 26, 2008)

OK, shithot DarkMatter, if you are going to throw personal insults around, show how confident you are in your 2.2x performance. Put your money where your mouth is. This is a PUBLIC CHALLENGE.

Let's take a GDDR4 HD 3870 at stock. Say a Q6600 at 3.0GHz. Run 3DMark06. Record the result.

Now let's put a GDDR5 HD 4870 in there, at stock. Same Q6600 at 3.0GHz. Run 3DMark06 again. Record the result.

I bet you $100 the result is nowhere near 2.2x. In fact, I'll give you the odds not even at <2.0, but at <1.7. If it's less than 1.7, I win. If it's more than 1.7, you win. Take on the bet, boyo. If you don't, then *take back your personal insults and lick my boots*.

This bet is also offered to the Belgian sprout from Antwerp. Don't be a chicken.


----------



## MrMilli (Apr 26, 2008)

lemonadesoda said:


> OK, shithot DarkMatter, if you are going to throw personal insults around. Show how confident you are in your 2.2x performance. Put your money where you mouth is. This is a PUBLIC CHALLENGE.
> 
> Let's take a CPU with a GDDR4 HD 3870 at stock. Say Q6600 at 3.0Ghz. Run 3dmark06. Record the result.
> 
> ...



Well, obviously in this situation you will be right, since a Q6600 at 3GHz is nowhere near fast enough for a twofold increase (it will be a bottleneck). And you made your point: you, sir, are a moron. 3DMark06 also includes the performance of the CPU in the final score. So when you keep the same CPU, you can't expect 2x the performance in 3DMark, since only the GPU is faster.
How 'bout we make the same bet but take a real game? Let's say Crysis, since that's the hardest game around, and we'll use 1680x1050 4xAA/16xAF (very high detail level). How 'bout that? In the Ice level a 3870 gets around 6 fps. 6 x 2 = 12! OK?


----------



## Megasty (Apr 26, 2008)

MrMilli said:


> Well obviously in this situation you will be right since a Q6600 at 3Ghz is nowhere near fast enough for a two fold increase (it will be a bottleneck). And you made your point, you sir are a moron. 3DMark06 also includes the performance of the CPU in the final score. So when you keep the same CPU, you can't expect 2x the performance in 3DMark since only the GPU is faster.
> How 'bout we make the same bet but let's take a real game. Let's say Crysis since that's the hardest game around and we'll use 1680x1050 4xAA/16xAF (very high detail level). How 'bout that? In the Ice level a 3870 gets around 6 fps. 6 x 2 = 12! OK?



My 3870 gets about 6fps on that level at 1920x1200, no AA or AF, & my X2 gets about 15fps. If the 4870 gets 12 & the 4870X2 gets 30 then I'll eat my 3870  - I really want to eat my 3870


----------



## HTC (Apr 26, 2008)

Megasty said:


> My 3870 gets about 6fps on the level at 1920x1200 no AA or AF & my X2 gets about 15fps. If the 4870 gets 12 & 4870X2 gets 30 then I'll eat my 3870  - *I really want to eat my 3870*



What do you want to go with that?


----------



## Morgoth (Apr 26, 2008)

Megasty said:


> My 3870 gets about 6fps on the level at 1920x1200 no AA or AF & my X2 gets about 15fps. If the 4870 gets 12 & 4870X2 gets 30 then I'll eat my 3870  - I really want to eat my 3870



I quote you on that


----------



## Megasty (Apr 27, 2008)

Morgoth said:


> i Quote you on that



Nice, very nice. Now I know I'm gonna have to eat it


----------



## magibeg (Apr 27, 2008)

You know I'm thinking someone is definitely going to be eating their card in this situation. On a side note anyone want to buy up a 3870


----------



## eidairaman1 (Apr 27, 2008)

lemme guess, switching to nvidia, right?


----------



## Megasty (Apr 27, 2008)

magibeg said:


> You know I'm thinking someone is definitely going to be eating their card in this situation. On a side note anyone want to buy up a 3870



lol, I'm preparing the stew in my sig for the 3870 as we speak


----------



## Thermopylae_480 (Apr 27, 2008)

Please do not insult other members.  If you disagree with the opinion of another member, explain why in a polite and reasonable manner.  Insulting others only creates an unpleasant atmosphere in the forums.  Please do not create competitions for personal vendettas either.

Thanks


----------



## Thermopylae_480 (Apr 27, 2008)

This is a news story; treat it as such. We do not like flame wars and insult matches in any section, especially the news section. I have no problem closing this discussion if I have to revisit this thread again for negative reasons.


----------



## DarkMatter (Apr 27, 2008)

lemonadesoda said:


> OK, shithot DarkMatter, if you are going to throw personal insults around. Show how confident you are in your 2.2x performance. Put your money where you mouth is. This is a PUBLIC CHALLENGE.
> 
> Let's take a CPU with a GDDR4 HD 3870 at stock. Say Q6600 at 3.0Ghz. Run 3dmark06. Record the result.
> 
> ...



First of all, I didn't insult you anywhere. I said you are not versed in GPU architectures. We have the proof, it's called "post #200". I'm not very versed in genetics, I'm not very versed in solfège, hell, I could even say I'm not versed in computers compared to what's left to know. And you know what? I am not insulting myself, because that's the truth. You have a problem if you think you are versed in computer architectures after what you said. You have a bigger problem if you feel offended when someone says you are not. You have an even bigger one if you take such small criticism as an insult. I really hope you can resolve them.

Second, a bet involving money is stupid on the net, especially since I live in Europe. I'm definitely not going to give my account number to anyone I don't know. And as they have already told you, you need faster CPUs and newer games to see the difference. When the 6600GT, 7600GT and other midrange cards were launched they offered almost 80% of the performance of their high-end cousins; go look at whether they even manage 50% now.

And finally, I said that 2.2x is the peak power the HD4 series has compared to HD3 when looking at those specs. There are other things to take into account. In fact, I implied a 2x improvement, while you said 1.1x. Middle ground for that is 1.5x. If you really want to place a bet (involving our prestige and honour; I already said I won't exchange money on the net with you, and I don't want to be a thief robbing you of your money) let's do it on Crysis at 1920x1200 4x AA, or on other new games that have not been released yet, and on an overclocked quad at 3.6+ GHz. They have already told you why you are not going to see the improvement in 3DM06 on a 3GHz chip...

With the above conditions, if the performance increase is more than 50% I win; if it's less than that, you win. The loser will have to show as his avatar whatever the other wants.

EDIT: BTW, this bet is if you want to do it at launch day. If you want to wait 6-9 months (until new games, CPUs, chipsets, etc are launched) I increase that number to 2x the performance.


----------



## lemonadesoda (Apr 27, 2008)

And we have demonstrated proof of your lack of diplomacy. And perhaps I over-reacted: accept my apology.

Notwithstanding that, at no point did I say the gains would be limited to 1.1x. Point 2 refers to the gains associated with clock increases. Point 3 refers to 2x performance at texture-bound resolutions, like 1920x1200 and higher. Point 5 refers to an application-specific improvement associated with AA and FSAA.

Let's sit back with a beer and see how performance pans out. The challenge is 1.7x. If performance is >1.7x, I'll open a beer in your name and drink it with pleasure. And vice-versa. But the tool is 3DMark06. And it will be the same CPU. I'll only be looking at the combination of the "SM2.0" + "SM3.0" scores, excl. the CPU score. And no, it will NOT be a 1920x1200 test, but the regular demo test in 3DMark06. The 1920x1200 problem, which was very clearly identified as the #1 objective ATI was trying to solve with the 32 TMUs, is covered in my points 3 and 4.

If you misunderstood my original 7 points, that's OK. Perhaps it wasn't clear. But it's better to say "OK, now I understand what you mean" than to continue this "you don't know anything about xyz", or "you've got a problem...". It is offensive language. And whether you use it on TPU, or with your friends, or at work, there will be people offended, whether they tell you or not. It's not a good way to start a dialog, let alone cooperation. And that's what the TPU community is about.

Let's respect Thermo's request to keep flaming off the board. I'll say nothing more about it. Take it easy.
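The scoring rule proposed above can be written down as a small sketch: compare only the GPU-weighted 3DMark06 sub-scores (SM2.0 + SM3.0), leave out the CPU score, and apply the 1.7x threshold. The threshold and sub-score names follow the post; the actual score values below are invented for illustration.

```python
# Sketch of the proposed bet scoring (invented score values).

def gpu_speedup(sm2_old, sm3_old, sm2_new, sm3_new):
    """Ratio of combined SM2.0 + SM3.0 sub-scores, CPU score excluded."""
    return (sm2_new + sm3_new) / (sm2_old + sm3_old)

def bet_winner(speedup, threshold=1.7):
    return "DarkMatter" if speedup > threshold else "lemonadesoda"

# Hypothetical numbers just to exercise the rule:
s = gpu_speedup(sm2_old=4000, sm3_old=4000, sm2_new=7000, sm3_new=7400)
print(f"{s:.2f}x -> {bet_winner(s)}")  # 1.80x -> DarkMatter
```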


----------



## DarkMatter (Apr 28, 2008)

lemonadesoda said:


> And we have demonstrated proof in your lack of diplomacy. And perhaps I over-reacted: Accept my apology.
> 
> Not withstanding that, at no point did I say the gains would be limited to 1.1x. Point 2 refers to the gains associated with clock increases. Point 3 refers to 2x performance on texture bound resolutions, like 1920x1200 and higher. Point 5. refers to a application specific improvement associated with AA and FSAA.
> 
> ...



If there was really something offensive, then sorry. It must be something related to the language, something lost in translation, since I don't see any offensive language in my first reply. But I apologize if there was something offensive there. It would help me a lot if you told me what exactly was offensive and an insult, though. I was offensive in the second, but only because you directly insulted me first.

But if you are talking about me saying you don't know about GPUs, if that is what you are taking as an insult, then I take my apology back. That's not an insult or offensive, and I am definitely not going to say sorry for it, considering your reaction. It's just not offensive; I explained that in my previous post. There are lots of things I don't know, and I will never take it as an insult if someone tells me so. You are demonstrating you don't know about this, mate, and you are being arrogant by acting like a victim and taking offense at that. There's nothing to (mis)understand in your statements; they are just wrong. I'm trying to say this kindly: learn how a GPU works and then we'll discuss whether those improvements will yield any gains. Some of the points could be true if they had only improved the shaders and kept the rest as is, or if they had only improved the TMUs, but since they have improved both, plus enough bandwidth to feed everything well, your points are just wrong. 

Just to point out one of the things you learnt wrong: TMUs load and filter textures. They do their work on *pixels*. It doesn't matter if the next pixel is from the same frame or the next, it's just the next pixel. For them, doing 16x16 pixels at 20 frames is the same as doing 32x16 at 10 FPS: either way they are doing their work on 5120 pixels/second. Double the number of TMUs (or double the clock) and you can do either double the frames at the same resolution or double the resolution at the same frame rate. There's no such thing as a "texture-bound resolution". Exactly the same applies to shader processors. Double their power and you get exactly double the performance (for that stage of the graphics pipeline). If we have double the power at every stage, as is the case here, except pixel fill-rate (ROPs), you will get double the performance. 
Now if you know what ROPs do, you know that since Ati does AA with the shaders, the only job the ROPs have left is blending the different fragments together (sub-pixels, which are calculated in the SPs using the data fetched from textures), and that job is only related to the resolution and the number of fragments. RV670 and G92 have demonstrated that the bottleneck was not in the ROPs. Especially G92 has demonstrated this, because it does AA in the ROPs (a lot of work being done there), and even though its fill-rate is smaller than RV670's, G92 is a lot faster. Ati offloads AA work from the ROPs, meaning there's still more room. It's difficult to know if a bottleneck occurs in the ROPs in an architecture that relegates so many things to the shaders, but it's common sense that they wouldn't make all the other parts twice as fast just to let this one be a big bottleneck. They will have these things resolved before launch.

My first calculations are based on all that, and their logic follows the graphics pipeline. Your statements don't make any sense; they are not based on the reality of how a GPU works. I didn't want to be offensive when I said you didn't know about GPUs, and I still don't. We don't have to know about everything in this life, but if we don't know something, we don't know, that's all; we don't have to act as if we knew and then act like a victim when we're proven wrong. That's what I thought you were doing. If you are not doing that consciously, I apologize. And I'm going to apologize in advance in case this post is also offensive to you. I'm not trying to offend you, believe me, I just think you don't know enough about what I explained above, and that's all. 
Let's forget about this until we can compare the cards. 
But not in 3DM06; it's the worst application you can use to gauge the power of a card nowadays. Vantage maybe. And definitely not with a 3 GHz bottleneck...
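The TMU arithmetic above can be checked in a couple of lines: texture units see a stream of pixels, so resolution and frame rate are interchangeable at a fixed pixel throughput. The numbers are the post's own toy example.

```python
# Pixel-throughput view of TMU work, using the post's toy numbers:
# resolution x frame-rate is interchangeable at a fixed throughput.

def pixels_per_second(width, height, frames):
    return width * height * frames

a = pixels_per_second(16, 16, 20)  # 5120
b = pixels_per_second(32, 16, 10)  # 5120
assert a == b  # same TMU workload, split differently across frames

# Doubling TMU throughput buys either double the frames at the same
# resolution, or double the pixel count at the same frame rate.
budget = 2 * a
assert budget == pixels_per_second(16, 16, 40)
assert budget == pixels_per_second(32, 16, 20)
```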


----------



## Azazel (Apr 28, 2008)

wow, how did I miss this...

well, I'll be getting a 4870X2


----------



## eidairaman1 (Apr 28, 2008)

to simplify matters between the both of you: drop it, and kiss and make up.


----------



## magibeg (Apr 28, 2008)

On a side note, you said you needed a quad core clocked to 3.6GHz with benches and such in Crysis. I can deliver those benchmarks, just send me a 4870 when it becomes available


----------



## Megasty (Apr 28, 2008)

eidairaman1 said:


> to simplify the matters between both of you, drop it and Kiss and Makeup.



gah, you're trying to make it worse  

On a lighter note, ATI is definitely not pulling any punches with the 4870 & X2. 1 & 2 GB of GDDR5, uber high gpu clocks, independent shader clocks, double the TMUs, etc...what is this stuff coming to. I don't feel like munching on my 3870 today as I did the other day, but those specs really got me wondering why they would finally want to give out something that's impressing me so much on paper. Oh well, I need to stop b4 it sounds like I'm complaining


----------



## BumbRush (Apr 28, 2008)

lemonadesoda said:


> OK, shithot DarkMatter, if you are going to throw personal insults around. Show how confident you are in your 2.2x performance. Put your money where you mouth is. This is a PUBLIC CHALLENGE.
> 
> Let's take a CPU with a GDDR4 HD 3870 at stock. Say Q6600 at 3.0Ghz. Run 3dmark06. Record the result.
> 
> ...



3dmark = utterly useless for anything but comparing tweaks on the same system and as a stability test for overclocks. 3dmark is SYNTHETIC and is only touted by people on forums to show how big their epeen is. find a real test, like some REAL GAMES....


----------



## eidairaman1 (Apr 28, 2008)

Megasty said:


> gah, you're trying to make it worse
> 
> On a lighter note, ATI is definitely not pulling any punches with the 4870 & X2. 1 & 2 GB of GDRR5, uber high gpu clocks, independent shaders, double the TMUs, etc...what is this stuff coming to. I don't feel like munching on my 3870 today as I did the other day but those specs really got me wondering why would they finally want to give out something thats impressing me so much on paper. Oh well I need to stop b4 it sounds like I'm complaining



its a psychological tactic


----------



## BumbRush (Apr 28, 2008)

oh btw, if you wanna try and say that gflops and fill rate mean everything:

google "tomshardware a speedy tiler" and read it. its old, but it shows that a card's theoretical numbers mean dick when compared to its actual numbers. the kyro2 matched its numbers 100%; the cards from other makers fell far short, mostly due to memory bandwidth


----------



## Megasty (Apr 28, 2008)

BumbRush said:


> 3dmark=utterly useless for anything but comparing tweaks on the same system and as a stab test for overclocks, 3dmark is SYNTHETIC and is only tauted by people on forums to show how big their epeen is, find a real test like some REAL GAMES....



Besides, Vantage will be out tomorrow. Maybe it'll get rid of the CPU dependence so we can really find out if the 4870 & X2 will be 2x faster than the 3800s w/o blowing them up with Crysis


----------



## eidairaman1 (Apr 28, 2008)

i hope 4800 is good


----------



## Mussels (Apr 28, 2008)

I know this goes back to the off topic part of this discussion, but here is proof nvidia are more popular than ATI, thus explaining the more reported driver problems.

From 3dmarks page, about 2 minutes ago.

This is just my way of saying that no matter how people HERE feel, look around - nvidia is certainly more popular in the world. It doesn't matter who is better or who is faster - what matters here is that Nvidia is more popular, especially in the DX10 arena.


----------



## eidairaman1 (Apr 28, 2008)

just de facto, not the standard. just like Creative Labs: they are popular, which doesn't mean they are better.


----------



## Mussels (Apr 28, 2008)

eidairaman1 said:


> just defacto, they are not the standard, just like Creative labs, they are popular- doesnt mean they are better.



yes, exactly.

The argument that got out of hand was because people are saying Nv sucks due to more driver issues.

The response was that there are more Nvidias out there, therefore Nv is not as bad as it appears - this does not mean Nvidia doesn't have problems, it merely means they're exaggerated.

I'm worried about these next gen cards, ATI and nvidia, because of these problems. Nv have ditched older cards (i have a geforce 3 and GF4 that no modern drivers work with, they all get stuck in 8 bit color in XP, and the 7900GT/GX2 have severe driver issues) and ATI have fecked up the entire AGP series...

I hope these new cards coming out don't mean that yet another old series gets dropped, or that the current cards with poor support (AGP ones for ATI) get entirely ditched in the process.


----------



## eidairaman1 (Apr 28, 2008)

Mussels, look at the machine I am on: it's an AGP 1950 and I'm running hotfix Cat 8.1 drivers. I found out the 8.1 hotfixes are full-blown drivers, so you don't have to install them on top of the non-hotfix 8.1s. The hotfix is up to 8.4, but I'm moving to 8.5 after getting specs.


Mussels said:


> yes, exactly, Runs COD 4 good.
> 
> The argument that got out of hand was because people are saying Nv sucks due to more driver issues.
> 
> ...


----------



## DarkMatter (Apr 28, 2008)

Mussels said:


> yes, exactly.
> 
> The argument that got out of hand was because people are saying Nv sucks due to more driver issues.
> 
> ...



Wow. People really are obsessed with drivers here. If it works, don't fix it. You don't need new drivers for those cards. I have a GF4 in our 4th machine and I haven't updated its drivers since 2004. I don't have to. Newer drivers don't increase performance and they don't offer new features for this card. Usually newer drivers only improve performance in newer games, games that these cards can't play. There's no point in changing them.
Seriously, I don't know why people are so crazy about this, for example having to use 4-month-old drivers like 169.21 if you want to stay WHQL. And what? They work.


----------



## Mussels (Apr 28, 2008)

DarkMatter said:


> Wow. People really is obseseed with drivers here. If it works don't fix it. You don't need new drivers for that cards. I have a GF4 in our 4th machine and I haven't updated it's drivers since 2004. I don't have to. Newer drivers don't increase performance and they don't offer new features for this card. Usually newer drivers only improve performance for newer games, games that these cards can't play. There's no point in changing it.
> Seriously I don't know why people is so crazy about this, for example having to use 4 months old drivers like 169.21 if you want to stay WHQL. And what? They work.



for the older cards, i can't find any that actually work. For example, my Geforce 4 and below cannot run TV out and VGA at the same time - all dual display features do not work in XP SP2/SP3. They either run microsoft drivers, or are stuck in 8 bit color. i am running driver 30.82 to get even a single display working properly, and that needed modifying because it said no Nvidia card was detected.

To the AGP people: yes, but you really shouldn't need a hotfix. seriously, it's worrying that they can't sort that out. why not release it separately as an AGP driver, instead of all this fussing about?


----------



## DarkMatter (Apr 28, 2008)

BumbRush said:


> oh btw, if you wana try and say that gflops and fill rate mean all
> 
> google "tomshardware a speedy tiler" and read it, its old but it shows that a cards theoretical numbers mean dick when compared to its acctual numbers, the kyro2 matched its numbers 100%, the cards from other makers fell far short due to memory bandiwth mostly



True, and in that same sense G92 is a lot more efficient than RV670. But you are talking about different architectures, from back when pixel/vertex shaders were separate (this is important, because it's an extra layer of complexity). Within the same architecture the efficiency is the same, and higher theoretical numbers translate to better performance. There, gflops and fill-rate (both) do mean everything. Again, provided the chip is well balanced and all units get their improvement, as is the case. 
An example of this is the 9800 GTX/8800 GTS versus the 8800 GT, with the exception that G92 IS indeed BOTTLENECKED by pixel fill-rate at the highest resolutions and AA levels. Any improvement to the SPs/TMUs in that architecture requires an increase in ROPs. That may or may not be the case with the HD4000. Since AA is done in the SPs instead of in the ROPs, there's a lot of room compared to G92; common sense says it's not going to be a bottleneck - increasing everything except that and letting it become a bottleneck would be stupid and won't happen.


----------



## jbunch07 (Apr 28, 2008)

HTC said:


> Whether or not it will be clocked @ 1 GHz isn't what's important: to me, it would be EXTREMELY significant *IF* 1 single 4850 could match a 3870x2 in performance. Don't know if it can, though.



I thought the 4870 was supposed to match the 3870 X2... or at least come close.


----------



## DarkMatter (Apr 28, 2008)

Mussels said:


> For the older cards, I can't find any that actually work. For example, my GeForce 4 and below cannot run TV-out and VGA at the same time - all dual-display features are broken in XP SP2/SP3. They either run Microsoft drivers or are stuck in 8-bit color. I'm running driver 30.82 just to get a single display working properly, and even that needed modifying because it claimed no Nvidia card was detected.
> 
> To the AGP people: yes, but you really shouldn't need a hotfix. Seriously, it's worrying that they can't sort that out. Why not release it separately as an AGP driver, instead of all this fussing about?



Mine (a GeForce4 Ti 4800) can do everything. I don't know which drivers are installed, and I can't check either, because we are remodeling the room where the PC was installed, and frankly I'm not going to unpack everything and set it up just to see which drivers we're running. But they work: they worked back when we installed them and they work now. We play old games there and they play well. OK, my dad plays there, I usually don't, lol. My brother a few times. But I gave Severance: Blade of Darkness a try last month (to remember what it was like) and it ran really smooth and stable (incredible graphics, BTW - I was shocked by the lighting, shadows and shaders, which had nothing to envy Doom 3, except for their resolution, of course). Even Doom 3 and CS: Source (1024x768, 0xAA) run stable - not fast, but no crashes.


----------



## Mussels (Apr 28, 2008)

DarkMatter said:


> Mine (a GeForce4 Ti 4800) can do everything. I don't know which drivers are installed, and I can't check either, because we are remodeling the room where the PC was installed, and frankly I'm not going to unpack everything and set it up just to see which drivers we're running. But they work: they worked back when we installed them and they work now. We play old games there and they play well. OK, my dad plays there, I usually don't, lol. My brother a few times. But I gave Severance: Blade of Darkness a try last month (to remember what it was like) and it ran really smooth and stable (incredible graphics, BTW - I was shocked by the lighting, shadows and shaders, which had nothing to envy Doom 3, except for their resolution, of course). Even Doom 3 and CS: Source (1024x768, 0xAA) run stable - not fast, but no crashes.



Mine's a GeForce4 MX card, not a Ti. I'm not able to get all the features working anymore, which is annoying, as I gave it to someone to run dual screens with the secondary as a TV. Doesn't matter much, man - I'm just saying that I'm having issues in that regard with older cards and up-to-date XP.


----------



## DarkMatter (Apr 28, 2008)

Mussels said:


> Mine's a GeForce4 MX card, not a Ti. I'm not able to get all the features working anymore, which is annoying, as I gave it to someone to run dual screens with the secondary as a TV. Doesn't matter much, man - I'm just saying that I'm having issues in that regard with older cards and up-to-date XP.



Haha, sorry then, mate. I was just saying that some working drivers exist - they work for us. But maybe they don't work on MX cards. It's a shame, but I guess support for 5+ year old low-end cards is not among their priorities. It would be better if they supported them, but I can't blame them for not doing so; I know it's not on my wish list. If the card eventually stops working I will move to the 9600 Pro or buy a cheap one. We are using this one because it has passive cooling, and the Pro makes too much noise. I think it's broken - or is it that old cards were louder and I've grown accustomed to quiet ones? I do buy quiet components now, but still...


----------



## Mussels (Apr 28, 2008)

DarkMatter said:


> Haha, sorry then, mate. I was just saying that some working drivers exist - they work for us. But maybe they don't work on MX cards. It's a shame, but I guess support for 5+ year old low-end cards is not among their priorities. It would be better if they supported them, but I can't blame them for not doing so; I know it's not on my wish list. If the card eventually stops working I will move to the 9600 Pro or buy a cheap one. We are using this one because it has passive cooling, and the Pro makes too much noise. I think it's broken - or is it that old cards were louder and I've grown accustomed to quiet ones? I do buy quiet components now, but still...



Since SP3 is out, it'd be nice for them to release one final driver - one driver that works for all cards from, say, the GeForce FX series back to the TNT2. Then they can say rest in peace to that hardware, and we won't care.


----------



## DarkMatter (Apr 28, 2008)

Mussels said:


> Since SP3 is out, it'd be nice for them to release one final driver - one driver that works for all cards from, say, the GeForce FX series back to the TNT2. Then they can say rest in peace to that hardware, and we won't care.



But SP3 isn't really out for the mainstream yet, only OEM, and those machines won't have old cards. Maybe they'll release one when they finally push SP3 to the masses (tomorrow, isn't it?). J/k, it won't happen, but it's a beautiful dream.


----------



## Mussels (Apr 28, 2008)

DarkMatter said:


> But SP3 isn't really out for the mainstream yet, only OEM, and those machines won't have old cards. Maybe they'll release one when they finally push SP3 to the masses (tomorrow, isn't it?). J/k, it won't happen, but it's a beautiful dream.



I've got it, I'm running it. It's out for download on a few sites already, just not microsoft.com or Windows Update. And I don't just mean illegal sites, either.


----------



## BumbRush (Apr 28, 2008)

DarkMatter said:


> True, and in that same sense G92 is a lot more efficient than RV670. But you are talking about different architectures, from the era when pixel and vertex shaders were independent (which matters, because that adds an extra layer of complexity). Within the same architecture the efficiency is the same, and higher theoretical numbers translate into better performance. There, GFLOPS and fill-rate (both) do mean everything - again, provided the chip is well balanced and every unit gets its improvement, as is the case here.
> An example of this is the 9800 GTX/8800 GTS versus the 8800 GT, with the caveat that G92 really IS bottlenecked by pixel fill-rate at the highest resolutions and AA levels: any improvement to the SPs/TMUs in that architecture requires an increase in ROPs. That may or may not be the case with the HD 4000 series. Since AA is done on the SPs instead of in the ROPs, there's a lot of headroom compared to G92; common sense says it's not going to be a bottleneck. Increasing everything except that and letting it become a bottleneck would be stupid, and it won't happen.



But theoretical rates from ATI and Intel are ALWAYS higher than the actual fill rates - it's just how things are. PowerVR didn't lie: their fill rate was what the card could actually do, and despite having "worse hardware" it kicked my GF2 GTS all over the place. I would bet you that the numbers both companies publish are NOT actual performance but purely theoretical, because they need to keep up appearances and such.

Not saying this will ever change - remember, it's marketing. If they showed the actual numbers for one company and the theoretical numbers for the other, and the difference was drastic, then people would be more than willing to believe that the one with the higher numbers is better.

An example of "higher number means better":

I was at a Best Buy some time back, and a guy from Geek Squad tried to sell me a 2600 XT they had on sale, because "a 2600XT has got to be better than a 1900XTX". After I explained to him that 2600 XTs in CrossFire weren't even close to as powerful as my X1900 XTX (which was under RMA), he had to go check the net, because he had ASSumed that a higher number = better.

Then I went into Best Buy again, and somebody was asking the difference between an 8800 GT and a 9600 GT. A different Geek Squad rep was telling him the 9600 was better because it was newer and had a higher model number.

I had to walk over and explain that the 9600 is a mid-range card (hence the 6) and that the 8800 GT was the better card (due to a sale, they were the same price after in-store rebates and a free gift card). I got unbelieving looks; again we went to a net terminal and they checked, and the guy bought the 8800 GT and thanked me.
Oh, and to explain: BB here was offering a deal for three days - on some products you got a nice instant discount, plus, depending on the price, you got XX on a gift card to spend later. Basically it's a way to ensure you come back. In the end it was a good buy for him, since he was replacing a dying X1950 XTX and needed a good card. I told him to RMA the X1950 XTX and resell it OR just keep it as a backup - then told him he could give it to me (I would have RMA'd it happily!)

Well, I hope that explains what I meant. Long story, but posted numbers don't always tell the real story.


----------



## DarkMatter (Apr 28, 2008)

BumbRush said:


> But theoretical rates from ATI and Intel are ALWAYS higher than the actual fill rates - it's just how things are. PowerVR didn't lie: their fill rate was what the card could actually do, and despite having "worse hardware" it kicked my GF2 GTS all over the place. I would bet you that the numbers both companies publish are NOT actual performance but purely theoretical, because they need to keep up appearances and such.
> 
> Not saying this will ever change - remember, it's marketing. If they showed the actual numbers for one company and the theoretical numbers for the other, and the difference was drastic, then people would be more than willing to believe that the one with the higher numbers is better.
> 
> ...



Cards (any device, really) never reach their advertised maximum numbers. That's true, and as I explained, it's because every machine has its own efficiency factor, which is never 100%. But advertised fill-rates are not marketing inventions: they are the ROP/TMU/SP count multiplied by the clock speed. Unless the clocks or pipe counts are not the ones advertised, the theoretical fill-rate is always exactly the advertised one.
The fill-rate a card achieves in a given benchmark is another matter; the benchmark itself has its own efficiency factor associated with it. In the end it doesn't matter, because within the same architecture the efficiency will be the same or very close. If the HD 3870 takes a performance hit of X, you will see the same hit on the HD 3850, and the same on the HD 4000 cards, because it's the same architecture. If the HD 4000 is twice as fast on paper, it will be twice as fast in applications once driver and platform bottlenecks are fixed.
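For what it's worth, the advertised figures really are just unit count times clock - a minimal sketch using the numbers from the news post (the function name is mine, not anyone's API):

```python
# Theoretical texture fill-rate = TMU count x core clock in GHz.
# The values below are the ones quoted in the news post for the RV770 cards.
def texel_fillrate(tmus, core_ghz):
    """Return the theoretical fill-rate in GTexel/s."""
    return tmus * core_ghz

print(texel_fillrate(32, 0.65))  # HD 4850: 20.8 GTexel/s
print(texel_fillrate(32, 0.85))  # HD 4870: 27.2 GTexel/s
```

Real benchmarks then land below these ceilings by each architecture's efficiency factor, which is exactly the point being argued here.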


----------



## Morgoth (May 3, 2008)

Megasty said:


> Nice, very nice. Now I know I'm gonna have to eat it



eat it


----------



## eidairaman1 (May 3, 2008)

That's a leapfrog for ATI with the non-dual-GPU board; I wouldn't be surprised if ATI already has something to combat the 9900 line.


----------



## WhiteLotus (May 3, 2008)

eidairaman1 said:


> That's a leapfrog for ATI with the non-dual-GPU board; I wouldn't be surprised if ATI already has something to combat the 9900 line.



So these things are single GPUs? Wow, these things can fly - imagine two of them!

Cor, PLEASE be true - PLEASE!!!!


----------



## GSG-9 (May 3, 2008)

Morgoth said:


> eat it



Those are some very nice numbers there.


----------



## Haytch (May 3, 2008)

One thing is certain: until the card is officially released and benchmarked by the very few who actually know what's going on, everyone is just guessing. Nothing wrong with assumption, beyond it being the mother of all F* ups!

I still don't get why they bother with SP Crysis. Has the hype really grasped the masses?
I thought that whole "piracy is the cause of low sales" thing was a load of crap. MP was a total disaster, and it almost seems as though the game's only use is benchmarking. <--< See, even I assume!

It's only a matter of time before GPU technology reaches CPU efficiency and power, with multiple cores. The technology already exists; the knowledge is there! Just a matter of time and OUR money.


----------



## HAL7000 (May 3, 2008)

It sounds incredible. I am in the process of speccing out new builds for three systems: my wife's and daughter's (790GX when released) with whatever hybrid card they support... but for me, this *4870, and the X2 later on, will be in my personal build.*

As far as power conservation goes... to hell with it for me. I don't care about saving energy with my computer, for what little it amounts to. Give me more power at any cost. AMD needs to get back into the elite club in every aspect. The past year they have let me down and delayed my upgrades... b*st*rds. This new 4870 (X2) sounds really promising... they had better come through!!!


----------



## Haytch (May 4, 2008)

I agree, stuff the power consumption - give me a raw beast!


----------



## Morgoth (May 4, 2008)

Rwar, more power consumption = more heat = less overclock = nuclear meltdown when overclocked


----------



## imperialreign (May 4, 2008)

Morgoth said:


> Rwar, more power consumption = more heat = less overclock = nuclear meltdown when overclocked



Yep - Intel proved this (except for the less-OC bit) with the Prescott lineup.


----------



## HAL7000 (May 9, 2008)

Morgoth said:


> Rwar, more power consumption = more heat = less overclock = nuclear meltdown when overclocked



Understood - that is, of course, if you plan on overclocking, and it depends on how you cool your system. The energy I was referring to would not even come near what the Prescott consumed.
My point was simple: *I don't care about saving energy to power my system up.*
I just would like AMD to release something worth building around (for myself). I hope the 4870 X2, and whatever else they decide to release, is not just a play on words. It's been real close many times to joining the dark side, but I will hold off this one last time.


----------



## MrMilli (May 15, 2008)

http://www.tgdaily.com/html_tmp/content-view-37453-135.html

_quote:
In terms of performance, we heard some interesting claims. A 4870 should perform on par with or better than a dual-chip 3870 X2._

lemonadesoda? reading this?


----------



## lemonadesoda (May 15, 2008)

your link said:
			
		

> In terms of performance, we heard some interesting claims. A 4870 should perform on par with or better than a dual-chip 3870 X2. Our sources explained to us that using a PCIe Gen1 controller 3870 X2 was a mistake, since the board was hungry for data and didn't sync well with this interface


I'd be delighted if the 4870 really was as fast as a 2x 3870 in crossfire. (A 3870X2 is actually clocked as a 3850 and should really be called 3850X2)

But I don't think the reasons they give will result in such a performance gain:

#1. 480 vs. 320 shaders = 50% improvement in the BEST POSSIBLE situation, i.e. purely shader-limited.

#2. 32 vs. 16 TMUs = 100% improvement... now I actually think THIS is going to have the bigger impact.

#3. 16 ROPs vs. 16 ROPs = no change here, or to the architecture.

#4. A PCIe v1.0 controller? Well, check my benchies... my AGP card is as fast as a PCIe x16 card, given a similar processor and clock speed. No: the interface is irrelevant UNLESS the graphics assets are in main memory and not on the card.

#5. As I have always said, there will be increases associated with increased clocks, but points 1-4 refer to clock-for-clock gains.

Net net? A 50-100% improvement IN THE BEST situation (clock for clock), depending on where the limit was, i.e. shader-limited or resolution-limited.

On average? Less than 50%.

In practice, for the average person, FPS at, say, 1280x1024 will not improve by more than 20-30%. But you WILL BE ABLE to dial up much higher FSAA and AA without a performance penalty. (And PLEASE read that as "much of a performance penalty" - it's a relative comment, not supposed to mean exactly 100% the same performance.)
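The clock-for-clock reasoning above can be summarized numerically - an illustrative sketch only, using the unit counts from the article (the dictionary and function are hypothetical, not anyone's data):

```python
# Best-case, clock-for-clock gain per unit type, RV770 vs RV670,
# using the unit counts discussed in the thread.
unit_gain = {
    "shaders": 480 / 320 - 1.0,  # point #1: +50%
    "tmus":    32 / 16 - 1.0,    # point #2: +100%
    "rops":    16 / 16 - 1.0,    # point #3: +0%
}

def best_case_speedup(limiting_unit):
    """A workload purely limited by one unit gains exactly that unit's gain."""
    return unit_gain[limiting_unit]

for unit, gain in unit_gain.items():
    print(f"{unit}-limited workload: +{gain:.0%}")
```

A workload limited by something else entirely (CPU, memory, ROPs) sees none of these gains, which is where the 20-30% real-world estimate comes from.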


----------



## MrMilli (May 15, 2008)

lemonadesoda said:


> I'd be delighted if the 4870 really was as fast as a 2x 3870 in crossfire. (A 3870X2 is actually clocked as a 3850 and should really be called 3850X2)
> 
> But I dont think the reasons they give will result in such a performance gain:
> 
> ...



lemonadesoda, if you don't mind, I have to correct you.

First off, the 3870 X2's GPUs are clocked at 825 MHz, while the 3870 is clocked at 777 MHz, so I don't know why you compare it with a 3850 (670 MHz, BTW).
Check this out for reference: http://techreport.com/articles.x/14284/5
So a 3870 X2 is only 4% slower than 3870 CF. The reason it's slower is that only one CF bridge is connected on board, with one still free for CF-X; normal CF uses two bridges.
All that aside, 4% is nothing. So if they say it's as fast as or faster than a 3870 X2, that means around 70% faster than a 3870. That's what that sentence means. Nothing more, nothing less.

A couple of things I need to rectify (again):
You shouldn't compare the *number* of shaders but the *GFLOPS* they can compute:
RV670 = 497 GFLOPS
RV770 = 1008 GFLOPS
So shader power is increased by more than 100%.

Also, I don't care what a GPU can do at 1280x1024; that resolution is mostly CPU-bound.
If you want to compare GPUs, you need to go above 1600x1200. That's just the way it is.
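Those GFLOP figures are consistent with the usual shaders x 2 FLOPs (one multiply-add per ALU per cycle) x shader clock formula - a sketch, assuming the 777 MHz RV670 clock given above and the 1050 MHz RV770 shader clock from the article:

```python
def shader_gflops(alus, shader_clock_ghz):
    # Each ALU retires one multiply-add (2 FLOPs) per cycle.
    return alus * 2 * shader_clock_ghz

print(round(shader_gflops(320, 0.777)))  # RV670: ~497 GFLOPS
print(round(shader_gflops(480, 1.050)))  # RV770: 1008 GFLOPS
```

So the >100% jump in shader power comes from the shader count (+50%) combined with the much higher shader clock, not from a count increase alone.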


----------



## magibeg (May 16, 2008)

Children, no fighting until the cards are released, please. Then we can find out what they're actually capable of.


----------



## DarkMatter (May 16, 2008)

lemonadesoda said:


> I'd be delighted if the 4870 really was as fast as a 2x 3870 in crossfire. (A 3870X2 is actually clocked as a 3850 and should really be called 3850X2)
> 
> But I dont think the reasons they give will result in such a performance gain:
> 
> ...



Every time you post about this, you demonstrate your lack of knowledge on the matter. The X2 with HD 3850 clocks? My God. Whatever, I don't want to fight again; I will only try to explain why they might say PCIe 1 wasn't enough, and why it's so important.

One thing is the interface between the card and the system, and a completely different one is the interface inside the card, the PCIe bridge. The one they are referring to is the bridge chip between the two RV670 cores. They are using PCIe where they could have used HyperTransport or something else; they did it for driver compatibility, I'm 99.99% sure. I guess they are using it to communicate between the cores (obviously), but most importantly to get some kind of cache coherency between them. Why is this coherency important? Because that way one core can use the data calculated by the other. AFAIK normal CrossFire (and SLI) does little of this: each card renders the odd frames or lines or something (pixel quads, clusters, whatever - let's call them pixel arrays), while the other renders the even parts. If the array on core 1 takes 10 times longer to render than the one on core 2, you lose a lot of time waiting. You need some kind of communication between them to let core 2 continue core 1's work without making a mess. PCIe bandwidth, while more than enough for texture and general data transfers between main memory and the card, is pretty slow for that kind of work. For comparison, PCIe 1.1 has a maximum bandwidth of 4 GB/s, while typical CPU caches are around 30-50 GB/s. PCIe 2.0 increases that to 8 GB/s, which is still far away, but definitely better. Only ATI knows why they used PCIe 1 in the first place, knowing this, but they learned and moved ahead. Let's hope it turns out better this time around.


----------



## lemonadesoda (May 16, 2008)

@milli,

My bad - I read elsewhere that the 3870 X2 "is actually a 3850 X2 but marketed as a 3870 X2". Yep, I should be more careful about what info I pick up and pass on. Thanks for the correction.

RV670 = 497 GFLOPS
RV770 = 1008 GFLOPS
So shader power is increased by more than 100%.

If that is true, then GREAT! But hasn't the RV770 been advertised as having no architectural change? For a 50% increase in shaders to give a 100% increase in power, it *must* require quite a different architectural approach. If that's true, then the 4870 will be a winner, baby!

@darkmatter,

Every time, you prove your lack of diplomacy. Man, they must have been tough on you at school.


----------



## DarkMatter (May 16, 2008)

lemonadesoda said:


> @milli,
> 
> My bad, i read elsewhere that the 3870X2 "is actually 3850X2 but marketed as 3870X2".  Yep, I should be more careful about what info I pick up and pass on. Thanks for the correction.
> 
> ...



:shadedshu I explained why it has double the shader power in the very first post that ignited our discussion. Maybe I lack diplomacy, but at least I listen to (read) others and learn. I don't talk at length while knowing zero about the matter, or dispute others' opinions with arbitrary numbers pulled out of the mist.


----------



## lemonadesoda (May 16, 2008)

Refer to point #5. You are taking my comments out of context: I'm talking about performance increases clock-for-clock. Until the boards are out in the channels, we don't know the clocks, so we can only make assessments on KNOWN architectural changes, while the clock effects are guesswork until we know what they are. My comments have always been very clearly stated as changes at the same clocks... so please go back and "read" before getting so hot under the collar!

3870X2 = 3850X2 with overclock. Fact. Both use GDDR3. Put the 3870X2 at the same core clocks as 2x 3870 in crossfire (on GDDR4) and which will win?


----------



## btarunr (May 16, 2008)

lemonadesoda said:


> @milli,
> 
> My bad, i read elsewhere that the 3870X2 "is actually 3850X2 but marketed as 3870X2".  Yep, I should be more careful about what info I pick up and pass on. Thanks for the correction.
> 
> ...



The 'different' architecture comes in the form of the shaders having their own clock generator: the shaders are clocked well above 1 GHz while the geometry domain stays below 800 MHz.


----------



## DarkMatter (May 16, 2008)

lemonadesoda said:


> Refer to point #5. You are taking my comments out of context. I'm talking about performance increases clock/clock.  Until the boards are out in the channels, we dont know the clocks, so we can only make assessments on KNOWN architectural changes, while the clock effects are guesswork until we know what they are.  My comments have always been very clearly stated as changes on same clocks... so please go back and "read" before getting so hot under the collar!
> 
> 3870X2 = 3850X2 with overclock. Fact. Both use GDDR3. Put the 3870X2 at the same core clocks as 2x 3870 in crossfire (on GDDR4) and which will win?



The point is that you can't use clock-for-clock comparisons, because RV770 will run faster, and that is in fact one of the advances of the new chips. Minor changes in the internal units can affect how high they reach, and improvements in the fab process (within the same node) can yield higher stable clocks. It's like saying that the HD 3850 is as fast as the HD 3870 on the argument that, run at the same speeds, they would be equally fast. Wait - you did. Well, it's essentially true, but the HD 3850 can only dream of clocking as high as the HD 3870; it's pointless to compare them clock for clock and claim no difference in performance.



lemonadesoda said:


> Given the same architecture, higher clocks, and more shaders, I think these are the performance implications:
> 
> 1./ Broadly similar performance at standard resolutions e.g. 1280x1024 and with no AA FSAA effects since no architectural changes
> 2./ General improvement in line with clock-for-clock increases 10-20%
> ...



Tell me where you stated you were talking about clock for clock. You didn't, up until now; in fact, the post above makes me think you had taken clocks into account, since it's the only thing you say will improve performance in the HD 4000 series. Neither can we read anything about a clock-for-clock comparison in the following posts, until post #256. Even then, you overlook the fact that the shaders are running a lot faster, and you say 50% is THE BEST POSSIBLE improvement in that area. But that's not the worst part. The worst part is that after citing a 50% improvement in shaders and a 100% improvement in textures, you come to the conclusion that the performance gain will be LESS than 50% - in fact around 20-30%!

How can that be? Well, since GDDR5 would make memory bandwidth double that of the HD 3000 series, there's only raster power left. You could have argued the weight of ROPs in the final performance and said my thoughts about them were wrong, which could be true AT HIGHER RESOLUTIONS, but not at 1280x as you are saying. If shader and texture power is double that of RV670, there is no reason to say it won't be 2x faster, especially at lower resolutions where ROPs don't count as much. Everything you said is incongruous, and that's why I say you don't know about this. That, and the fact that from your posts it seems you act as if the functional units (ROPs, SPs, TMUs) and clocks were independent and had nothing to do with each other, or something of the like - as if the shaders only did AA, as if the extra TMUs only worked when bigger textures are loaded and sat idle otherwise, etc. An example of this is when MrMilli said RV770 has double the GFLOPS and you replied:



> If that is true, then GREAT! But hasn't the RV770 been advertised as having no architectural change? For a 50% increase in shaders to give a 100% increase in power, it *must* require quite a different architectural approach. If that's true, then the 4870 will be a winner, baby!



Maybe I'm reading this badly, but it seems as if you took that as magic - as if there *must* be something underlying there, something shady. It demonstrates your lack of knowledge, IMO.


----------



## lemonadesoda (May 16, 2008)

It's a basic analytic approach. To separate independent factors, to understand where the gains are coming from. An analogue is with CPUs.

Comparing P4 to Core2 you can just go, Chip A vs. Chip B. Oh look chip B is faster. Or you can break it down, and analyse the performance on things what you can set independently. Much better to compare A and B at-the-same-clock first, to see architectural gains, then observe the additional gain/loss through different clockspeeds.  Likewise (in the CPU world) with amount of cache, or number of cores.

With the RV770, you can break it down to:

1./ Increase in shaders ---> impact, and in what situations
2./ Increase in TMU ---> impact, and in what situations
3./ Increase in ROP ---> impact, and in what situations
4./ Change in memory type ---> impact
5./ Increase in clocks ---> impact (unknown at the start of the thread; there is strong speculation now about what they will be, but until the cards are in retail channels we really don't know what is "consumer stable" from the manufacturers).

With the RV770, what is ATI trying to address? The shader and texture "wall" at high resolutions, for greater FPS. For regular resolutions, the benefit is being able to dial up higher AA and FSAA. I still hold the view that at a regular 1280x1024 without (or with low) AA/FSAA, the performance gains will be relatively small. At high resolutions like 1920x1600, or at 8x and 16x FSAA/AA, that's where the gain will be.

It's quite clear from the benchmarks that the RV770 *will* be very fast HD.4870.3Dmark06benchmark.leak.html=21,223 .  

I'm very happy to listen to any argument except the lame "you demonstrate a lack of knowledge" or "you're not very versed, are you?". I find it insulting, and your continued use of it demonstrates a major lack of politeness and a bellicose attitude.

Refer to post #218.  Please do not try to kindle old flames. This was dealt with. Turn off your microphone. The jury's out until the cards are in.


----------



## Mussels (May 16, 2008)

Hey guys... we can stop. You're both offering advice/insight here, and conflicting. Why don't we just take bets until the first reviews come out, and see if it's 20-30% faster or 50%+?

Either way, you can buy me cards and I'll do an independent review for you.


----------



## btarunr (May 16, 2008)

lemonadesoda said:


> It's quite clear from the benchmarks that the RV770 *will* be very fast HD.4870.3Dmark06benchmark.leak.html=21,223 .


Hey you rick-rolled us with that link to benchmarks :shadedshu


----------



## DarkMatter (May 16, 2008)

lemonadesoda said:


> It's a basic analytic approach. To separate independent factors, to understand where the gains are coming from. An analogue is with CPUs.
> 
> Comparing P4 to Core2 you can just go, Chip A vs. Chip B. Oh look chip B is faster. Or you can break it down, and analyse the performance on things what you can set independently. Much better to compare A and B at-the-same-clock first, to see architectural gains, then observe the additional gain/loss through different clockspeeds.  Likewise (in the CPU world) with amount of cache, or number of cores.
> 
> ...



Man, that's good and all, but you forget that if you analyse it and find an increase (2x, in fact) in EVERY STAGE, then performance will increase in every (or almost every) situation!! Until you understand this, I feel I have to continue. I will give an example:

You have three guys making cheeseburgers: one does the meat (A), the second (B) does the cheese, and the third (C) takes those, plus the bread, and puts it all together.

Analytically:

- If instead of one A guy we put two, we won't get any benefit; we will only see an improvement in those situations where you need more than one guy, i.e. if you want to put two meat sticks per burger (sorry, I don't know their actual name).

- If we use two B guys the same thing happens, unless you want more cheese in the mix.

- Same with C, with the difference that there is evidence that C is, indeed, more than able to put together more burgers than A and B can provide.

A is the SPs, B is the TMUs and C is the ROPs; we could add a guy D who delivers the ingredients and carries away the finished burgers - the memory subsystem and platform, including chipset and CPU. So if we double A, B or C independently we won't get any benefit, but if we improve both A and B, and C can truly handle the new inflow of products (and again, there is evidence it can), we will be able to provide either more burgers, or the same number of burgers with more meat/cheese in each one. Comparatively, on the graphics card, we will be able to render either a more complex image (1920x1200, 4xAA, 16xAF) at the same speed, or more frames of lesser complexity. BOTH!

EDIT: And to continue the example: you say we won't see as much of an improvement at low resolutions and AA/AF levels, and that's right, but not for the reasons you give. It's not because of guys A, B or C; it's because D is not able to deliver the ingredients and carry away the large number of finished burgers the others are generating! They have told you so already: it's because of the CPU that you won't see such an improvement at those settings...
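The burger line boils down to the classic pipeline rule - throughput is the minimum over the stages - which a tiny sketch with hypothetical numbers makes concrete:

```python
# Burgers per minute each worker can sustain; the line as a whole
# moves at the pace of its slowest stage.
def line_throughput(stages):
    return min(stages.values())

before = {"A_meat": 10, "B_cheese": 10, "C_assembly": 25, "D_supply": 30}
after  = {"A_meat": 20, "B_cheese": 20, "C_assembly": 25, "D_supply": 30}

print(line_throughput(before))  # 10: limited by A and B
print(line_throughput(after))   # 20: doubling A and B doubles output,
                                # because C and D had headroom
```

Swap in SPs, TMUs, ROPs and the platform for A-D and you get the argument above: doubling the first two pays off only as long as the rest keeps up.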


----------



## lemonadesoda (May 16, 2008)

Oh man, you are the King! 






"Have it your way!"
"Can you taste the fire?"



Yes, it's all down to where the bottleneck is. I guess we have different positions on where the bottleneck is... AND... we are looking at different parts of the spectrum where the gains (or roadblocks) will be.


When is the GPU shader-constrained... RV770 fixes this.
When is the GPU texture-fill-constrained... RV770 fixes this.
When is the GPU memory-constrained... RV770 fixes this in the XT version with GDDR5, but not in the RV770 Pro. And although GDDR5 has higher clocks and lower power consumption (important, given that the GPU core will need more power), it also has higher latency. We need to see benchmarks for the net net.
When is the GPU vertex-, polygon-, z-plane- or ROP-constrained... RV770 does not fix this, except for the core "overclock", which is, in fact, pretty much the same as a regular overclocked 3870.
For each resolution, the impact of the above will be different.
For each FSAA/AA setting, the impact of the above will be different.

In some situations, it will be very low improvement, in others up to 100% improvement. But the 100% improvement will only exist if current performance is limited by THAT specific bottleneck.

It's going to be mixed results. 1920x1600 users will be the REAL winners. But if you are on 1280x1024, I still maintain it won't be worth the upgrade UNLESS you are trying to get to 16xAA 16xFSAA. At 0x, 2x or 4x at 1280x1024, I'm not convinced the performance improvement will be that great. Why? Because at that resolution and those FSAA/AA settings the GPU *is not* shader or TMU constrained. Anyway, I await with interest the first benchmarks that come out.
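As an aside, the texture fill-rate figures quoted in the original article are just TMU count times core clock, which is easy to check (illustrative two-liner):

```python
# Texel fill-rate = TMUs x core clock in GHz, as used in the article.
def texel_fillrate_gtexels(tmus, core_ghz):
    return tmus * core_ghz

print(texel_fillrate_gtexels(32, 0.65))  # 20.8 GTexel/s (HD 4850)
print(texel_fillrate_gtexels(32, 0.85))  # 27.2 GTexel/s (HD 4870)
```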


----------



## DarkMatter (May 16, 2008)

lemonadesoda said:


> Oh man, you are the King!
> 
> 
> 
> ...



We are heading somewhere in the end.  
But first of all, we are not discussing the impact these cards will have on today's PCs or games, i.e. whether someone upgrading now would see a big improvement, but the actual power of the card. We have already said why you won't see a big improvement now, but you will when new CPUs/chipsets launch, a jump you won't get with the HD3870 because it isn't as platform-bottlenecked as the HD4000 will be.
And second, vertex and poly data are handled in the SPs, not in the ROPs, and I would also assume that since AA is done in the shaders, z-depth, or at least z-culling, is done in the SPs on the Radeons as well. So they don't have to repeat the work, you know.  Anyway, since geometry complexity is not going up, judging by the recent trend, that won't be a problem. If there's going to be any improvement in geometry complexity, it will come basically from geometry shaders and tessellation. Shaders once again.

EDIT: Just to be clear, my point is that there's nothing to fix in the ROP arena. Reasons are given in previous posts, but basically:

1- Nvidia cards have less raster power: 16 ROPs @ 600MHz vs. 16 ROPs @ 800MHz. And even having to perform AA in them, the 9800GTX is still almost 50% faster.

2- You just don't increase everything, and I mean everything, only to let that one thing become the bottleneck...


----------



## lemonadesoda (May 16, 2008)

If "geometry" = "bump mapping" (in its broadest sense, and including the auto-tesselation concept first introduced by ATi as "TruForm"... yes, I owned a Radeon 8500) then yes, shaders can do this, and  = great for games.

If "geometry" = "more complex objects" then no, shaders wont help, and = not so great for CAD.

TBH, I don't know how to interpret the Stream Processor comment (SP) in the RV770 architecture. How has SP changed R600 to R700? I really dont know. With the comments about "no architectural change" with RV770, I assumed SP was the same. I could well be wrong on this one.


----------



## DarkMatter (May 16, 2008)

lemonadesoda said:


> If "geometry" = "bump mapping" (in its broadest sense, and including the auto-tesselation concept first introduced by ATi as "TruForm"... yes, I owned a Radeon 8500) then yes, shaders can do this, and  = great for games.
> 
> If "geometry" = "more complex objects" then no, shaders wont help, and = not so great for CAD.
> 
> TBH, I don't know how to interpret the Stream Processor comment (SP) in the RV770 architecture. How has SP changed R600 to R700? I really dont know. With the comments about "no architectural change" with RV770, I assumed SP was the same. I could well be wrong on this one.



Vertex (geometry) data has ALWAYS been processed in vertex shaders. Since R500 (Xenos), G80 and R600 with their unified shaders, this is done in the shader or stream processors, which pack vertex shaders, pixel shaders and geometry shaders into the same unit, so to speak.
More complex objects require more SPs, not more ROPs, in any way. You do need more ROPs for Z calculations, unless those are done in the SPs as I suggested. But AGAIN, vertex data is handled in SPs, not ROPs.

Also, tessellation is taking a simple model and making it more complex, in the sense of more vertices and polygons. It has nothing to do with bump mapping, except that it may use bump maps as some sort of control over what that NEW vertex data should be, instead of just doing the same thing TurboSmooth does in 3ds Max, for example.
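To make that concrete, here is a minimal, purely illustrative sketch (hypothetical code, not how a hardware tessellator actually works): one level of midpoint subdivision turns one triangle into four, creating new vertex data with no bump map involved at all.

```python
# One level of midpoint tessellation: split a triangle into 4 smaller ones.
def midpoint(a, b):
    # Component-wise average of two vertices.
    return tuple((x + y) / 2 for x, y in zip(a, b))

def tessellate(tri):
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    # Three corner triangles plus the central one.
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
print(len(tessellate(tri)))  # 4 triangles from 1
```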


----------



## MrMilli (May 16, 2008)

lemonadesoda said:


> If "geometry" = "bump mapping" (in its broadest sense, and including the auto-tesselation concept first introduced by ATi as "TruForm"... yes, I owned a Radeon 8500) then yes, shaders can do this, and  = great for games.
> 
> If "geometry" = "more complex objects" then no, shaders wont help, and = not so great for CAD.
> 
> TBH, I don't know how to interpret the Stream Processor comment (SP) in the RV770 architecture. How has SP changed R600 to R700? I really dont know. With the comments about "no architectural change" with RV770, I assumed SP was the same. I could well be wrong on this one.



lemonadesoda, I firmly believe you must be pulling our leg.
If not, then please (I'm asking nicely), stop. Just stop, because everything you say is wrong.
For the sake of all of us, and to spare your own embarrassment, please stop.

<strike>Bumb</strike> (lol) Bump mapping (MAPPING: the word says it already) has nothing to do with geometry.
You are still connecting geometry to a T&L unit, which doesn't exist anymore in modern GPUs. It's emulated on the shaders.

About the shaders on the RV770: they run at 1050MHz. That's why the GFLOPS figure increases so much.

That's the last time I'm going to correct you, and I'm not coming back to this thread. You ruined it.


----------



## btarunr (May 16, 2008)

MrMilli said:


> lemonadesoda, i firmly believe you must be pulling our leg.
> If not, then please (i'm asking nicely), stop. Just stop because everything you say is wrong.
> For the sake of all of us and for your own embarrassment, please stop.
> 
> ...



Ehm, that's bump-mapping. Bumbs are the heavy things we all carry, there's not much to map, really, except occasional goose-pimples, hair and a deep gorge in the middle.


----------



## lemonadesoda (May 16, 2008)

Traditional pipeline (diagram)

Unified Shader pipeline (diagram)

If you had a "screen render" that fitted into the existing pipeline "4 cycles", single pass for each cycle in the rendering stage... as shown in the diagram, then increasing the number of shaders doesnt change anything. The spare-capacity doesnt help. A low FSAA, AA, 1280x1024 can "fit in" the "4 cycle" path, single pass for each stage.

If you have a scene that is 1920x1200 with 16x, 16x, then a screen render will require more than one pass through each stage.

In instance A, clock speed will get you faster FPS. Shaders doesnt help much.

In instance B, increasing the shaders means more can be done in each pass, meaning fewer passes, ultimately getting to just one single pass through each stage.  Here, gains are from increased shaders in addition to increased clocks.

That's how I've always understood it. If there is a fallacy with the logic... let me know.
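The pass-count model described above can be sketched like this (arbitrary workload units, hypothetical numbers):

```python
import math

# Sketch of the pass model: each stage processes at most `shaders` work
# items per pass. A workload that already fits in one pass gains nothing
# from extra shaders; a heavier workload needs fewer passes.
def passes(work_per_stage, shaders):
    return math.ceil(work_per_stage / shaders)

print(passes(300, 320), passes(300, 800))    # 1 1: light scene, extra shaders sit idle
print(passes(2400, 320), passes(2400, 800))  # 8 3: heavy scene, more shaders = fewer passes
```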


----------



## lemonadesoda (May 16, 2008)

MrMilli said:


> Bump mapping has nothing to do with geometry. You are still connecting geometry to a T&L unit which doesn't exist anymore in modern GPU's. It's emulated on the shaders.



Please note the word "If" meaning that, under the situation you might be calling bump mapping geometry effects (which they are)... then all well and true. I did not SAY geometry=bump mapping.

As for the second statement I made, _If "geometry" = "more complex objects" then no, shaders wont help, and = not so great for CAD_, then YES, I *withdraw *that statement. It is wrong for Unified Shaders architecture DirectX10 Shader Model 4.0. It is only true for previous generation GPU.


----------



## DarkMatter (May 16, 2008)

lemonadesoda said:


> Traditional
> 
> 
> 
> ...



No, no, no... you've understood it wrong. In your image, where it says shader core, that's not 1 shader processor, it's the entire shader array. The next stage can be calculated on any available ALU within the core. To explain this simply I'll use G80 as an example, since its SPs are fully scalar. R600 is more complicated because it needs some pre-arrangement, but it works the same way in the sense that the next stage of the same fragment, or the next fragment within the same stage, can be calculated on the next available unit. That means you can either do A -> B -> C -> D, or calculate several pixels at stage A together and then continue. The latter is how they work nowadays.

Example: the G80 GTX has 128 SPs. Imagine you want to calculate vertex data; vertices are represented by x, y and z coordinates, and each one is a floating-point variable. Say vertex1 is V1(x1, y1, z1), vertex2 is V2(x2, y2, z2)... vertexn is Vn(xn, yn, zn). In the SP core (of 128), each dimension can be calculated on 1 ALU, which belongs to 1 SP. (There's controversy here, as Nvidia said each SP is capable of 2 per clock, but it seems it can't.)

It works like that:

clock cycle 1: sp1 runs x1 - sp2 runs y1 - sp3 z1 - sp4 x2 - sp5 y2 - ... - sp127 x43 - sp128 y43   <<<  as you can see, V43 is not finished yet, but it doesn't matter, because:

clock cycle 2: sp1 z43 - sp2 x44 - ...

And so on. Now imagine we have a core with 64 SPs running at 2x the speed. The throughput (GFLOPS) is exactly the same, and thus the code gets calculated just as fast. Same if we have 256 SPs running at half the speed. There won't be any spare SPs at any time, unless:

A: It can't fetch enough data from memory pool, the frame buffer, whatever the reason there is for this: other units are slow, not enough data sent by the CPU...

B: The Unit that has to continue the work i.e the ROPs can't keep up and have ordered to not continue with the work as the frame buffer is full of unprocessed data.

You can mix data types in the above example too, as long as they don't belong to the same cluster (I think). G80 and G92 have clusters of 16 SPs; the GTX and the G92 GTS have 8 clusters (8x16=128), the GT has 7. I don't think different data types are allowed within the same cluster, but I wouldn't bet a leg on it either...
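The scheduling walk-through above can be reduced to a back-of-the-envelope model (illustrative numbers only): wall time depends on SPs x clock, not on the SP count alone.

```python
import math

# Scalar components (x, y, z per vertex) are handed to the next free SP
# each cycle, so finishing time is roughly components / SPs, divided by
# the relative clock speed.
def cycles_needed(num_components, num_sps):
    return math.ceil(num_components / num_sps)

components = 3 * 1000  # 1000 vertices, three scalar components each

t_128 = cycles_needed(components, 128) / 1.0  # 128 SPs at base clock
t_64 = cycles_needed(components, 64) / 2.0    # 64 SPs at double clock
print(t_128, t_64)  # roughly the same wall time either way
```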


----------



## DarkMatter (May 16, 2008)

Since I was already off-topic and in an academic mood, and given all the confusion around tessellation, I decided to explain the difference between bump-mapping and tessellation. You can find it here:

http://forums.techpowerup.com/showthread.php?p=794688#post794688


----------



## HAL7000 (May 18, 2008)

And to think, after all is said and done... *we still need to wait and see*. Good conversation on everyone's part. A post of the good, the bad and the ugly... lol.

Let's hope Nvidia's releases spark as much debate.


----------

