# 3870 and 8800GT Architecture Confusion



## red268 (Mar 9, 2008)

I have read countless threads with people asking the same question - 8800GT or HD3870 .... it gets boring. Every time I read one of those threads the 8800GT seems to come out on top.
Also, using Tom's Hardware you get the same results, the 8800GT wins.

http://www23.tomshardware.com/graphics_2007.html?modelx=33&model1=1057&model2=1060&chart=318

Well, now, that's all well and good. But why?!

The HD3870 has way better specs from what I see.
The HD3870 has 825 MHz Core, 2400 MHz memory and GDDR4

Compared to the 8800GTs 660 MHz Core, 1900 MHz memory and GDDR3

So why, when it has better specs, does the HD3870 come off worse?!

Thanks in advance to anyone that can help.


----------



## Kursah (Mar 9, 2008)

Drivers and support can make or break a card, man. I recommend you keep researching, not just in threads but in professional comparison reviews too. Good info to be found there.

The 8800GT is faster, the 3870 is cheaper, both are great cards.


----------



## farlex85 (Mar 9, 2008)

I think it may be an architecture thing too. Nvidia's shaders currently operate quicker than ATI's, I believe. Also, games are often optimized for Nvidia (or vice versa), so their cards tend to perform a little better in games.


----------



## Fitseries3 (Mar 9, 2008)

ATI keeps optimizing drivers; that's something NVIDIA doesn't seem to spend much time doing. The 3870 will get better and hasn't met its potential yet. NVIDIA released the 8800GT at full throttle and the only way to get more performance out of it is to overclock. I'm sure someone will argue, but I am in no way biased toward either the red or the green. I've got cards from both sides and they are pretty close performance-wise.

A lot of benches by reviewers are run at stock speeds or mildly overclocked. It doesn't show the cards' true potential.


----------



## Tatty_One (Mar 9, 2008)

red268 said:


> I have read countless threads with people asking the same question - 8800GT or HD3870 .... it gets boring. Every time I read one of those threads the 8800GT seems to come out on top.
> Also, using Tom's Hardware you get the same results, the 8800GT wins.
> 
> http://www23.tomshardware.com/graphics_2007.html?modelx=33&model1=1057&model2=1060&chart=318
> ...




Lol, you cannot compare core/memory speeds on two cards with completely different architectures, meaning they process the data in different ways through different hardware setups. There are loads of reasons why the 8800GT is faster and, as you said in your original post, most of them are boring. Just take the reviewers' word for it: it is faster, period.


----------



## Tatty_One (Mar 9, 2008)

fitseries3 said:


> ATI keeps optimizing drivers; that's something NVIDIA doesn't seem to spend much time doing. The 3870 will get better and hasn't met its potential yet. NVIDIA released the 8800GT at full throttle and the only way to get more performance out of it is to overclock. I'm sure someone will argue, but I am in no way biased toward either the red or the green. I've got cards from both sides and they are pretty close performance-wise.
> 
> A lot of benches by reviewers are run at stock speeds or mildly overclocked. It doesn't show the cards' true potential.



Having said that, after 3 driver releases, performance improvements are minimal.


----------



## red268 (Mar 9, 2008)

Thanks very much to all who answered. Greatly appreciated.

I'll admit I've always been biased towards the red flag of ATI, but my current GPU is an 8400GS (came with the PC .... didn't have enough money, so I went with the lowest graphics card with the intention to upgrade later .... and now is later!!) and I'm happy with what it's managed. I play Crysis with it, DX10, 1024 x 768, most settings on low, some on medium.

Another thing - on some websites, such as Ebuyer, they state that the ATI card is DX 10.1 .... will the 8800GT be able to upgrade to that using drivers or other software?

Thanks again.


----------



## farlex85 (Mar 9, 2008)

I don't think so. The ability to do that is in the hardware - same reason a DX9 card can't do DX10. I'm not sure though.

DX 10.1 won't be much of an improvement over 10 anyway; probably just some things a little more optimized.


----------



## Wile E (Mar 9, 2008)

No, the 8800 doesn't support DX10.1. But that doesn't matter right now anyway. No games currently use DX10.1, and the games that will, won't need it to run.

As far as why the 8800 is faster, it's basically the same as Intel vs AMD. Completely different architectures.


----------



## red268 (Mar 9, 2008)

Ok, last question now .... I think ....

I'm looking to get something for about £150 .... both the 8800GT and HD3870 are in that price range - and I don't want to be spending any more money on my PC for a good 2 more years. With that in mind, would it be worth getting the ATI card simply because then I can use DX 10.1 games when they come out, or, not worry about it and get the faster of the 2 then upgrade in 2 years time if I want?

What are the chances of new DX 10.1 games coming out in the next 2 years? I'm hoping slim to none to be honest!


----------



## erocker (Mar 9, 2008)

Based on performance/price, and if you're not limited by your chipset, the 8800GT is the way to go.


----------



## farlex85 (Mar 9, 2008)

Like I said, 10.1 probably isn't going to be much of an improvement, and there aren't even that many dx10 games out now so.......

Go with whichever you want - the GT is faster, but if you wanna stick w/ ATI, the 3870 isn't that far behind. You may also want to consider a 9600GT, as they are priced well and may do better than the 8800GT in the future. Also, I don't know how the currency converts precisely, but if you can find an 8800GTS (G92) in that price range, that would be the way to go.


----------



## DaedalusHelios (Mar 9, 2008)

red268 said:


> Ok, last question now .... I think ....
> 
> I'm looking to get something for about £150 .... both the 8800GT and HD3870 are in that price range - and I don't want to be spending any more money on my PC for a good 2 more years. With that in mind, would it be worth getting the ATI card simply because then I can use DX 10.1 games when they come out, or, not worry about it and get the faster of the 2 then upgrade in 2 years time if I want?
> 
> What are the chances of new DX 10.1 games coming out in the next 2 years? I'm hoping slim to none to be honest!



Microsoft changed the DirectX 10.1 design after the DX10.1 cards came out..... and the change in the dev kits means that the neat features of the DX10.1 standard won't be utilized by these cards.

So don't buy it based on DX10.1 compatibility. Will you OC it? If so, are you buying a motherboard too? If you are buying a motherboard as well, I would suggest the HD3870 and 3850 in CrossFire, or two HD3850s with upgraded cooling in CrossFire for max performance. Your PSU should be up to the task.

If you are not upgrading your motherboard, a single 8800GT with a VF900 installed and overclocked would be perfect.


----------



## red268 (Mar 9, 2008)

Not upgrading my motherboard, but it does support SLI/CrossFire. What's a VF900? Some kind of cooling?

I'd love to get the 8800GTS (G92) but I'm afraid it's out of my budget : (

Overclocking is always an option. I have only started doing it VERY recently, using my new 8400GS .... I managed to up it enough to get 421 fps in Call Of Duty with all the highest settings. (Yes, I know it's an old game, but let's be honest, with this card, that's not bad going!!)

Well, it looks like I'm going to get the 8800GT as it is faster and it seems there's no point thinking about DX10.1, at least not for the moment anyway. Maybe I'll just overclock it to the same as the standard GTS? Whatever, that seems the way I'm going.

Thanks very much for the help from everyone. Really greatly appreciated.


----------



## Wile E (Mar 9, 2008)

Yeah, the VF900 is an aftermarket video card cooler. Works quite well. There are plenty of other great options tho.


----------



## Darren (Mar 10, 2008)

red268 said:


> I'd love to get the 8800GTS (G92) but I'm afraid it's out of my budget : (



The 8800 GTS is actually within your budget, not sure if it's the revised version, but overclockers are selling them cheap for a limited time only.

Leadtek GeForce 8800 GTS 320MB GDDR3 HDTV/Dual DVI (PCI-Express) - *£105.74 inc VAT*

http://overclockers.co.uk/showproduct.php?prodid=GX-111-LT


----------



## TechnicalFreak (Mar 10, 2008)

I think the best way to test is to do it yourself, since I don't believe anything I haven't seen myself.

Sure, most game companies have the "The Way It's Meant to Be Played" slogan and the nVidia logo at startup. But that's them.

I have always used ATI, though I did once have a GeForce Ti 4400 card. It was good, can't say it wasn't. But I just don't care anymore, because they try to compete with each other - just like Intel and AMD. I never get the point; no one at either company says "This is why we do it".

Somehow it feels like they have forgotten all of us, the end users, sometimes putting a price tag on their hardware that makes some of us think "Sure, if I don't invest in the repairs of my car, perhaps I can afford this hardware". But of course sometimes the price is there for a reason - maybe it's some new technology they added to the hardware, or you get something that makes things go faster.

But in the end you have to, _have to_, look at the companies themselves and compare from that point of view. Look behind all the hardware and find a reason why the prices are so different. I know many reasons why it is so. One of them is that one company has more employees than the other, so it simply can't put a lower price on its hardware.

If a company doesn't pull in as much cash as its competitor, perhaps that's reason enough why it can't afford to go as "deep" into the hardware and actually investigate how to make it better.

But I think they both make good cards, if you ask me. It's how You intend to use it that's the big question: will you overclock it or not? Modify it to make it better or not?
Hack the drivers to gain more FPS or not?

In the end it's the users who find themselves asking a lot of questions, asking themselves "why" and "why not". If I were to meet anyone from these companies, be it Intel, AMD, nVidia or ATI - I would have a million questions to ask them..


----------



## red268 (Mar 10, 2008)

This is slightly out of my price range ....
http://www.overclockers.co.uk/showproduct.php?prodid=GX-017-PV&groupid=701&catid=56&subcat=927
But .... it's the 8800GTS (G92) for £14.49 over my budget!!
(My budget started out at £120 .... but then I saw something just a little over, so I increased my budget, but then I saw something else just a little over .... etc etc. Big mistake!!)

Anyone reading this who is looking to buy a new card. If you've got a budget in mind, STICK TO IT!!!!


----------



## EastCoasthandle (Mar 10, 2008)

red268 said:


> This is slightly out of my price range ....
> http://www.overclockers.co.uk/showproduct.php?prodid=GX-017-PV&groupid=701&catid=56&subcat=927
> But .... it's the 8800GTS (G92) for £14.49 over my budget!!
> (My budget started out at £120 .... but then I saw something just a little over, so I increased my budget, but then I saw something else just a little over .... etc etc. Big mistake!!)
> ...



I agree, always stick to your budget. It's not there as a suggestion. Having said that, you should find a 3870 at a much cheaper price point. It will perform much better than your current card and, if you watch DVDs and other videos on your computer, you should notice improved IQ. Most ATI video cards do not come overclocked like the competition's. However, this doesn't prevent you from doing it yourself from CCC.


----------



## will (Mar 10, 2008)

Also on Overclockers is an 8800GT for £130. It wouldn't be much slower than a GTS, and as said, you could overclock it to be as fast as a stock GTS. IMO it is a much better deal.

http://www.overclockers.co.uk/showproduct.php?prodid=GX-071-OK&groupid=701&catid=56&subcat=1008


----------



## sttubs (Mar 10, 2008)

I'm surprised that no one has mentioned the visual quality between the two cards. Having seen both cards on very similar systems (the LCDs were the same model), I'd give it to the 3870 - it's a way more vibrant and sharper image compared to the 8800GT. May I suggest this model: http://www.newegg.com/Product/Product.aspx?Item=N82E16814161218. I bought the "Turbo" model - save your money & OC it yourself. Plays Crysis on XP with mostly high settings.


----------



## Tatty_One (Mar 10, 2008)

Darren said:


> The 8800 GTS is actually within your budget, not sure if it's the revised version, but overclockers are selling them cheap for a limited time only.
> 
> Leadtek GeForce 8800 GTS 320MB GDDR3 HDTV/Dual DVI (PCI-Express) - *£105.74 inc VAT*
> 
> http://overclockers.co.uk/showproduct.php?prodid=GX-111-LT



Naaa, that's the old one (G80).


----------



## JrRacinFan (Mar 10, 2008)

Tatty_One said:


> Naaa, that's the old one (G80).



But that is still a decent price and the card really is nothing to scoff at. Granted the new G92 GTS is a tad better.


----------



## Tatty_One (Mar 10, 2008)

JrRacinFan said:


> But that is still a decent price and the card really is nothing to scoff at. Granted the new G92 GTS is a tad better.



Agreed, a very nice price, but the new GTS is a bit more than a "tad" better   The HD3850 is pretty near the old 8800GTS 320MB on performance and can be found even cheaper, I believe.


----------



## JrRacinFan (Mar 10, 2008)

I know. You know me Tatty, I am one to underestimate.


----------



## red268 (Mar 10, 2008)

What's the difference between the G92 and the G92 'Low Power Consumption Edition' ?
Sounds silly, but, I was wondering how much less power it used and if it changed performance at all?

In case anyone is wondering, this is where I saw it:

http://www.overclockers.co.uk/showproduct.php?prodid=GX-017-PV&groupid=701&catid=56&subcat=927

Thanks again people.


----------



## Thermopylae_480 (Mar 10, 2008)

Please use titles that more accurately describe your problem, as stated in the forum guidelines.  I have changed yours.


----------



## red268 (Mar 10, 2008)

Thermopylae_480 said:


> Please use titles that more accurately describe your problem, as stated in the forum guidelines.  I have changed yours.



Thanks for changing it, but at the time I didn't know it was about architecture. Do now though, so thanks.


----------



## Darren (Mar 10, 2008)

red268, didn't you even see the deal that I found?

Leadtek GeForce 8800 GTS 320MB *£105.74 inc VAT*

http://overclockers.co.uk/showproduct.php?prodid=GX-111-LT


----------



## red268 (Mar 10, 2008)

Darren said:


> red268, didn't you even see the deal that I found?
> 
> Leadtek GeForce 8800 GTS 320MB *£105.74 inc VAT*
> 
> http://overclockers.co.uk/showproduct.php?prodid=GX-111-LT



Sorry, I did, but was distracted.
Thanks for finding that for me  Appreciate the effort.

However I'd like to go for something with 512MB or over as I'll be running Crysis as high as possible and Oblivion with massive texture replacers and visual mods.


----------



## Rurouni Strife (Mar 10, 2008)

To answer the first question of architecture differences: 
The 8800 series cards use a scalar architecture, while the radeon uses a super scalar architecture.  The way the 8800 is set up is that 1 unified shader = 1 unified shader.  Ok bad example, but the reason the radeon looks so much better on paper is that it is all marketing.  The newer Radeon cards have a maximum possible useable 320 shaders.  That is where developers and drivers come in, its more difficult to get games optmized for those cards.  When you add in Nvidia's TWIMTBP program, its no wonder why ATI loses lots of benchmarks.  In reality, ATI cards have a mininum 64 shaders when used in situations that dont take advantage of their perticular architecture, or their drivers aren't there.  That is why the 9600GT performs like a 3870 in most games.  If ATI had the support of developers that Nvidia does, it would be a far more level playing field.  IMO anyway.

I probably messed some of that up - hit up Beyond3D and look at their articles on the two architectures.
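A toy sketch of the shader-count arithmetic above (my illustration, not from any official doc; the "packing" idea is the usual description of RV670's 5-wide VLIW units versus G92's always-usable scalar ALUs, and real scheduling is far more complicated):

```python
# Toy illustration of "320 shaders vs 128 shaders": RV670 (HD3870) advertises
# 320 shader ALUs, but they are grouped into 64 five-wide VLIW units. G92
# cards expose plain scalar ALUs (112 on the 8800GT, 128 on the G92 GTS)
# that are individually usable every clock.

def effective_alus(units, width, packing):
    """ALUs doing useful work when the compiler fills a fraction
    `packing` (0..1) of the extra slots in each VLIW unit."""
    return units * (1 + (width - 1) * packing)

print(effective_alus(64, 5, 0.0))  # 64.0  -> worst case: dependent scalar code
print(effective_alus(64, 5, 1.0))  # 320.0 -> best case: perfectly packed code
```

So depending on how well the shader compiler and the game's code cooperate, the same chip behaves like anything from 64 to 320 shaders, which is the gap the post describes.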

And personally, I prefer underdog companies, so I buy ATI. They work extremely well too - no performance problems. However, until the B3 revision of the Phenoms comes out, I'd buy an Intel if I had the money. Sorry AMD...


----------



## Darren (Mar 10, 2008)

The 8800 GTS is usually priced between £150-180 in the UK, so it's definitely a bargain @ £105.74. I found a review and it shows that the 320 MB version makes only a tiny difference in comparison to its big brother with 640 MB. Oblivion can be played on midrange cards these days; as for Crysis, most cards perform badly on it, so why spend an extra £50 on a 640 MB version for an extra 5 frames per second in Crysis?

Review
http://www.hardwarezone.com/articles/view.php?id=2222&cid=3&pg=5

Also, have you considered the 9600 GT? I've seen them for as little as £110; it outperforms the Radeon 3870, is known for overclocking well, and tends to come with 512 MB GDDR3 by default.


----------



## red268 (Mar 10, 2008)

Thanks Rurouni Strife for the info on the architectures, I'll look in to it more 

Also, Darren, thanks for the effort, and I think you may be right. Is 5fps worth £50? That's £10 for 1fps ....

I just ran 3DMark06 .... with my 8400GS (Take a look at my system specs) .... came out with a whopping 1650 marks HAHAHAHA .... Can't wait to run it again with whatever card I end up buying!!


----------



## Lillebror (Mar 10, 2008)

http://en.wikipedia.org/wiki/Radeon_R600 - lots and lots of info about the R600 architecture 

It's a little weird - I can't really find any detailed info on the G92 or any of the 8800 chips :s


----------



## red268 (Mar 10, 2008)

Lillebror said:


> http://en.wikipedia.org/wiki/Radeon_R600 - lots and lots of info about the R600 architecture
> 
> It's a little weird - I can't really find any detailed info on the G92 or any of the 8800 chips :s



Thanks for the link


----------



## Darknova (Mar 10, 2008)

I can play Crysis at 1680x1050 at mostly high and it looks fantastic.

Enough said?


----------



## Lillebror (Mar 10, 2008)

Crysis ain't really that demanding.. it's just some really bad DX10 coding =D


----------



## Darknova (Mar 10, 2008)

Lillebror said:


> Crysis ain't really that demanding.. it's just some really bad DX10 coding =D



That's the point though. It doesn't matter which card - both can play Crysis and make it look fantastic, and I really don't think we'll see many games surpass Crysis' level of quality any time soon; they'll probably just get optimised for more cores, and optimised in general so they're easier on the GPU. If you're on a budget, get the cheapest; if you want every ounce of performance you can get, get the best performer.


----------



## Tatty_One (Mar 10, 2008)

Rurouni Strife said:


> To answer the first question of architecture differences:
> The 8800-series cards use a scalar architecture, while the Radeon uses a superscalar (VLIW) one. On the 8800, 1 advertised shader = 1 scalar ALU that is always usable. The reason the Radeon looks so much better on paper is largely marketing: the newer Radeon cards have a maximum of 320 usable shader ALUs, but they are grouped five to a unit, so developers and drivers have to keep all five slots busy - it's more difficult to get games optimized for those cards. When you add in Nvidia's TWIMTBP program, it's no wonder ATI loses lots of benchmarks. In the worst case, ATI cards effectively have a minimum of 64 shaders in situations that don't take advantage of their particular architecture, or where the drivers aren't there. That is why the 9600GT performs like a 3870 in most games. If ATI had the support from developers that Nvidia does, it would be a far more level playing field. IMO anyway.
> 
> I probably messed some of that up - hit up Beyond3D and look at their articles on the two architectures.
> ...



No, your ideas/thoughts are along the right lines. If I remember rightly it's actually NVidia's stream processors that are simple scalar units handling one instruction (process) at a time, whereas each of ATI's is a 5-wide unit that can work on up to 5 instructions at a time (I think it is) if the compiler can fill the slots. On top of that, if I remember rightly, ATI's SPs have a fixed clock of around 800MHz whereas NVidia's SP clock is separate, and depending on the model starts from around 1450MHz - my G92 GTS, for example, when overclocked can run those SPs at 2100MHz!


----------



## Tatty_One (Mar 10, 2008)

Lillebror said:


> Crysis ain't really that demanding.. it's just some really bad DX10 coding =D



And what about in DX9???


----------



## Darknova (Mar 10, 2008)

Tatty_One said:


> No, your ideas/thoughts are along the right lines. If I remember rightly it's actually NVidia's stream processors that are simple scalar units handling one instruction (process) at a time, whereas each of ATI's is a 5-wide unit that can work on up to 5 instructions at a time (I think it is) if the compiler can fill the slots. On top of that, if I remember rightly, ATI's SPs have a fixed clock of around 800MHz whereas NVidia's SP clock is separate, and depending on the model starts from around 1450MHz - my G92 GTS, for example, when overclocked can run those SPs at 2100MHz!



AFAIK the shaders are linked to the core clock speed on ATi.


----------



## Tatty_One (Mar 10, 2008)

Lillebror said:


> http://en.wikipedia.org/wiki/Radeon_R600 - lots and lots of info about the R600 architecture
> 
> It's a little weird - I can't really find any detailed info on the G92 or any of the 8800 chips :s



http://www.digit-life.com/articles3/video/g92-3-part1.html


----------



## asb2106 (Mar 10, 2008)

Tatty_One said:


> No, your ideas/thoughts are along the right lines. If I remember rightly it's actually NVidia's stream processors that are simple scalar units handling one instruction (process) at a time, whereas each of ATI's is a 5-wide unit that can work on up to 5 instructions at a time (I think it is) if the compiler can fill the slots. On top of that, if I remember rightly, ATI's SPs have a fixed clock of around 800MHz whereas NVidia's SP clock is separate, and depending on the model starts from around 1450MHz - my G92 GTS, for example, when overclocked can run those SPs at 2100MHz!



And that GTS of yours will destroy a 3870!!

With that said, a G92 core will be faster, no doubt, but for the price of a 3870 (like 180 bucks), it's really not a bad deal. I also think the decision has to be made based on the chipset you use. Figure you could get two 3870s for less than a GTX and get much higher performance! And have DX10.1 support (useless now, but will be useful in the future).


----------



## asb2106 (Mar 10, 2008)

Lillebror said:


> Crysis ain't really that demanding.. it's just some really bad DX10 coding =D



I'm gonna have to disagree here - I have played Crysis in DX9 also, and the game is still very demanding.


----------



## Tatty_One (Mar 10, 2008)

asb2106 said:


> And that GTS of yours will destroy a 3870!!
> 
> With that said, a G92 core will be faster, no doubt, but for the price of a 3870 (like 180 bucks), it's really not a bad deal. I also think the decision has to be made based on the chipset you use. Figure you could get two 3870s for less than a GTX and get much higher performance! And have DX10.1 support (useless now, but will be useful in the future).



Agreed, but you could also get two 8800GT's for less than that GTX.


----------



## asb2106 (Mar 10, 2008)

Tatty_One said:


> Agreed, but you could also get two 8800GT's for less than that GTX.



Then it's totally based on the chipset of choice!


----------



## Lillebror (Mar 10, 2008)

Tatty_One said:


> And what about in DX9???



DX9 performance is a lot faster than DX10 in Crysis.
Just try the DX10 look-alike hack for DX9 - a lot faster than the real DX10 and nearly as close in IQ.
It's like with the video cards - when they first come out, they aren't as fast as they could possibly be, but then we get driver updates to fix and optimize some stuff.
The driver devs learn new ways of optimizing all the time.
DX10 is new. We have to give the devs some time to learn how to use it to its fullest, and with every new DX10 game that comes out, we get closer to faster DX10 performance.


----------



## asb2106 (Mar 10, 2008)

Lillebror said:


> DX9 performance is a lot faster than DX10 in Crysis.
> Just try the DX10 look-alike hack for DX9 - a lot faster than the real DX10 and nearly as close in IQ.
> It's like with the video cards - when they first come out, they aren't as fast as they could possibly be, but then we get driver updates to fix and optimize some stuff.
> The driver devs learn new ways of optimizing all the time.
> DX10 is new. We have to give the devs some time to learn how to use it to its fullest, and with every new DX10 game that comes out, we get closer to faster DX10 performance.



"A lot faster" is quite the statement. I get an average of 4 frames better when playing mid-high settings at 1440x900. Goes from ~34 to ~38. To me that's not that big of a deal. And Crysis is not patched DX10, it's native.


----------



## Lillebror (Mar 10, 2008)

Even 5 fps is a lot faster  It's also faster when you get more detail at the same framerate. DX10/DX10.1 is all about better IQ and making game development easier.


----------



## wolf (Mar 10, 2008)

fitseries3 said:


> ATI keeps optimizing drivers; that's something NVIDIA doesn't seem to spend much time doing. The 3870 will get better and hasn't met its potential yet. NVIDIA released the 8800GT at full throttle and the only way to get more performance out of it is to overclock. I'm sure someone will argue, but I am in no way biased toward either the red or the green. I've got cards from both sides and they are pretty close performance-wise.
> 
> A lot of benches by reviewers are run at stock speeds or mildly overclocked. It doesn't show the cards' true potential.



Granted that ATi drivers DO keep getting better, the 38xx series is only a refinement of the 2900 series, which got better and better for the 6 months after release with new drivers. So even tho it's a newer product, I tend to think they've optimized that architecture about as much as it will go for now, and it would seem all of their driver resources are going into getting the X2 to scale better and better.

Oh, and also, my recommendation is definitely the 8800. As for architectural differences, in a very simple nutshell: ATi GPUs have over double the number of shader processors of Nvidia's products (ie, 8800 [128] vs 3870 [320]), but they are clocked much slower. Nvidia uses separate clock speeds within the core, called clock domains. So for example an ATi core with 320 SPs may run at 750MHz, with the SPs and render output sections of the core running in unison, whereas the Nvidia core will run at 600MHz (ROPs) but the SPs will run at 1.5GHz or thereabouts.

So very simply, Nvidia has fewer, but they run waaaay faster.
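As a back-of-the-envelope illustration of the clock-domain point (marketing-style peak numbers only: it counts a multiply-add as 2 flops per ALU per clock and ignores each side's extra MUL/transcendental hardware, so treat it as a sketch, not a benchmark):

```python
# Theoretical peak shader throughput: many slow ALUs vs fewer fast ones.

def peak_gflops(alus, clock_ghz, flops_per_clock=2):
    # A multiply-add (MAD) counts as 2 floating-point ops per ALU per clock.
    return alus * clock_ghz * flops_per_clock

print(round(peak_gflops(320, 0.775)))  # HD3870: 320 ALUs at the 775 MHz core clock -> 496
print(round(peak_gflops(112, 1.5)))    # 8800GT: 112 scalar ALUs in a ~1.5 GHz shader domain -> 336
```

On paper the 3870 still comes out ahead, which is exactly the thread's point: that peak assumes every VLIW slot stays filled, while the scalar design sits much closer to its paper number in real shader code.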


----------



## red268 (Mar 11, 2008)

Firstly, thanks for the informative posts, lots of good information  Thank you!!
I'll look in to each architecture in more detail as soon as I can 

So if they just haven't quite got the hang of the whole DX10 thing yet (To put it rather simply!!) does that mean patches will come out for Crysis to make things better? i.e. less demanding and/or smoother running?


----------



## asb2106 (Mar 11, 2008)

red268 said:


> Firstly, thanks for the informative posts, lots of good information  Thank you!!
> I'll look in to each architecture in more detail as soon as I can
> 
> So if they just haven't quite got the hang of the whole DX10 thing yet (To put it rather simply!!) does that mean patches will come out for Crysis to make things better? i.e. less demanding and/or smoother running?



Crysis has pretty much run its course. I do not see much more being put into it, at least not any big changes that will show a noticeable performance increase.

And Crysis runs fine in DX10, you just need monster hardware to really get the full advantage.


----------



## wolf (Mar 11, 2008)

Well, the great thing about current DX10 hardware is that it flies through DX9, but to have a product that can run, for example, Crysis @ 1920x1200+ with 60+ fps, we will need to see another generational leap, like the 8800s were over the 78xx/79xx and the 6800 was over the FX 59xx.... that's the product I really want.


----------



## DarkMatter (Mar 11, 2008)

asb2106 said:


> "A lot faster" is quite the statement. I get an average of 4 frames better when playing mid-high settings at 1440x900. Goes from ~34 to ~38. To me that's not that big of a deal. And Crysis is not patched DX10, it's native.



As Lillebror stated, 4-5 fps is a lot when you are at low fps. In your own example (from 34 to 38) you are getting a more-than-10% increase in performance. People would pay as much as $100 more for that kind of performance increase when purchasing hardware: 8800 Ultra vs. GTX, 3870 X2 vs. G92 8800 GTS vs. 8800 GT, etc.

But what is most important is that the difference remains at ~4 fps when you use higher in-game settings and lower fps, and there it means going from playable to unplayable. Even a difference of 3 fps is huge: 17 minimum fps is not nearly as playable as 20 fps, and 28 average is a lot better than 24. That was exactly the difference for me: on Windows Vista, very-high was impossible to run at comfortable framerates even when tweaking the Cvars, but it was possible on XP with the hack and the same Cvar tweaks.

As much as it hurts some individuals (and a corporation ), Crysis is DX9 and the DX10 is a port - the DX9 very-high hack demonstrates that.
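The percentages above are easy to check (plain arithmetic on the fps figures quoted in the thread, nothing more):

```python
# Relative fps gains and the corresponding frame times.

def pct_gain(before_fps, after_fps):
    return 100.0 * (after_fps - before_fps) / before_fps

def frame_time_ms(fps):
    return 1000.0 / fps

print(round(pct_gain(34, 38), 1))   # 11.8 -> the "more than 10%" increase
print(round(pct_gain(17, 20), 1))   # 17.6 -> why 17 vs 20 minimum fps matters
print(round(frame_time_ms(34), 1))  # 29.4 ms per frame
print(round(frame_time_ms(38), 1))  # 26.3 ms per frame
```

The same 4 fps that looks negligible at high framerates is a double-digit percentage gain down at 34 fps, which is the whole argument.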


----------



## Lillebror (Mar 11, 2008)

red268 said:


> So if they just haven't quite got the hang of the whole DX10 thing yet (To put it rather simply!!) does that mean patches will come out for Crysis to make things better? i.e. less demanding and/or smoother running?



Ain't possible  They would have to rewrite most of the game, and there is just no point in that 
Also, fps gains aren't worth the same everywhere! An extra 4-5 fps on top of 30 is a noticeable 10%+ improvement, while the same 4-5 fps on top of 100 is nothing. That's why a small increase like 4-5 fps actually is a lot at these framerates.


----------



## DarkMatter (Mar 11, 2008)

wolf said:


> Well, the great thing about current DX10 hardware is that it flies through DX9, but to have a product that can run, for example, Crysis @ 1920x1200+ with 60+ fps, we will need to see another generational leap, like the 8800s were over the 78xx/79xx and the 6800 was over the FX 59xx.... that's the product I really want.



To run Crysis at such a high resolution with 60+ fps you do need a lot more power than what we have today. The problem with Crysis is that it is very dependent on the resolution at which you play, and it was developed (optimized) to play at 12x10. That's easy to notice in benchmarks: you won't see as much of a drop (percentage-wise) in other games (COD4, UT3, Bioshock...) as you see in Crysis when going up in resolution. That is not a consequence of bad optimization; it's because CryEngine 2, unlike other engines, does everything per-pixel: per-pixel lighting, per-pixel shadowing, subsurface scattering, ambient occlusion, parallax occlusion mapping... Most other engines use per-vertex approaches to get the "same" visual effects, except per-pixel lighting and shadowing, which they DO use, although I'm not really sure all of them do per-pixel shadowing anyway.

It's really hard to explain the difference between those approaches here, but it's easy to see that where per-vertex cost is not linked to the resolution you use, because the vertex count stays the same, per-pixel cost is totally linked to it. It's worth mentioning that the ratio in raw power required to go from one resolution step to the next is about 1.6 (60% more power) in 4:3, and about 1.33 in 16:10, and at the same vertical resolution widescreens need 20% more power than 4:3. In a pixel-dependent scenario you could see that kind of drop in performance: for example, if you had 50 fps at 12x10, you would get around 25 fps at 1920x1200. In the "past", going up in resolution only required more ROPs; with per-pixel techniques you need more shader and texturing power too.
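The resolution-step ratios above follow from pixel counts (a sketch that assumes per-pixel cost scales directly with the number of pixels, which is the post's simplification, not a measured result):

```python
# Pixel-count ratios between common 2008-era resolutions.

def ratio(res_from, res_to):
    # Required-power ratio if rendering cost scales with pixel count.
    (w1, h1), (w2, h2) = res_from, res_to
    return (w2 * h2) / (w1 * h1)

print(round(ratio((1024, 768), (1280, 1024)), 2))   # 1.67 -> the ~1.6 step at 4:3/5:4
print(round(ratio((1680, 1050), (1920, 1200)), 2))  # 1.31 -> the ~1.33 step at 16:10
print(round(ratio((1600, 1200), (1920, 1200)), 2))  # 1.2  -> widescreen +20% at equal height
print(round(ratio((1280, 1024), (1920, 1200)), 2))  # 1.76 -> the 12x10 -> 1920x1200 jump
```

Under this simplification, 50 fps at 1280x1024 lands around 28 fps at 1920x1200; real drops can be steeper, as higher resolutions also lean harder on shader and texturing power.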

And that's exactly where the problem with Crysis comes in: they developed the game to run at ~12x10 (even Yerli has said that). When they started developing Crysis, the most common resolution was 10x7, with 12x10 used by enthusiasts and 16x12 a dream for most mortals. By the final stages of "pure development" (as opposed to the last year(s), which are more about content, tweaking and optimizing; note that Crysis launched a year after its first scheduled release date), 12x10 was the common one and higher resolutions were in enthusiast hands, which is almost still true today. But in the last year there has been a big jump in resolution, if not in the screens themselves, then in the number of people using those screens. That's something nobody at Crytek (or anywhere else) could have predicted 2-3 years ago, and later there was little to nothing they could do about it...

That's the problem Crysis is facing, and the same problem every newer game will face as they progressively use more and more per-pixel techniques.


----------



## Lillebror (Mar 11, 2008)

*sigh* wish they could keep those awesome graphics for tech demos, and start making more games with more gameplay


----------



## wolf (Mar 11, 2008)

i just hope the real next gen gfx come out in November like the rumors say; at least that's when nvidia's new beast is said to come out.


----------



## DarkMatter (Mar 11, 2008)

Lillebror said:


> *sigh* wish they could keep those awesome graphics to tech demo's, and start making more games with more gameplay



I consider Crysis the game with the best gameplay. It's all a matter of how you like games to be and what gameplay really means to you. IMO COD4, for example, is the most overrated game on Earth and its gameplay sucks. Overall it's a great game, but aside from some gimmick features it's the same as any other corridor shooter. I prefer open gameplay to scripted movie-like scenes, and COD4 is only that: scripted scenes over and over, from start to finish. That's more an interactive movie than a game in my book, and guess what? When I want exciting scripted movie-like scenes I go to the theater and watch a real action movie. The same thing happened with FarCry and Half-Life 2, except the latter had the manipulator (the gravity gun) and some sandbox gameplay, and that was one of the best additions ever made to a game. Overall I liked HL2 more than FarCry, but when it comes to gameplay, I liked FarCry a lot more.

Just to make clear what good gameplay means for me and, though maybe a minority, for many other people, here is my list of the games with the best gameplay, in no particular order:

- Crysis
- FarCry
- STALKER
- Oblivion and Morrowind (Morrowind being a lot better than Oblivion)
- ArmA
- Operation Flashpoint

Gameplay according to wikipedia:

http://en.wikipedia.org/wiki/Gameplay



> Gameplay includes all player experiences during the interaction with game systems, especially formal games. Proper use is coupled with reference to "what the player does". Arising alongside game development in the 1980s, gameplay was used solely within the context of video or computer games, though now its popularity has begun to see use in the description of other, more traditional, game forms. Generally, the term gameplay in video game terminology is used to describe the overall experience of playing the game excluding factors like graphics and sound. The term game mechanics refers to sets of rules in a game that are intended to produce an enjoyable gaming experience. Current academic discussions tend to favor terms like game mechanics[citation needed] specifically to avoid 'gameplay'.
> 
> Despite criticism, the term gameplay has gained acceptance in popular gaming nomenclature, being the only common phrase describing story quality, ease of play, and overall desirability of a game all in one word. Some gaming reviews give a specific score for gameplay, along with graphics, sound, and replay value. Many consider "gameplay" to be the most important indicator of the quality of a game.


----------



## Lillebror (Mar 11, 2008)

I just think they're trying too hard to make photo-realistic graphics. But maybe that's just me. I'm still playing old DOS games like KKND and Commander Keen


----------



## asb2106 (Mar 11, 2008)

wolf said:


> i just hope the real next gen gfx come out in November like the rumors say, at least thats said to be when nvidia's new beast comes out.



I had heard rumors that ATI's R700 would be out sooner than that!  I'm hoping this next line of cards is amazing.  The last year or two has just been updates to what we already have, and while that's good, it's still only updates.  We need a refresh of the architecture, shaders, everything....


----------



## DarkMatter (Mar 11, 2008)

Lillebror said:


> I just think they are trying to hard to make photo-realistic graphics. But maybe thats just me. Im still playing old DOS games like KKND and Commander Keen



For me one of the most important features in games is immersion. And one of the best routes to immersion is better graphics, as close to reality as possible. The other is freedom within the game and interactivity: you won't feel you are part of a world if you can't decide what you want to do. That's the main reason I don't like COD4. Freedom of choice and interactivity with the world = 0

And those two are really far from reality right now, but each time a game pushes them further, it can feel really close to reality. It's not objectively close, but subjectively it is; we've been saying "it's almost real!" since the DOOM days. And if immersion is one of the things you want in a game, graphics progression is a must, because once you have played a game enough, you assimilate its graphics level and it's not "real" anymore. The same goes for sound, physics, interactivity...


----------



## sttubs (Mar 11, 2008)

Reading through these posts and still no one is addressing image quality. The 8800GT is faster, big whoop. Its image quality is inferior to the 3870's. I've seen them both in action. Speed is not everything. True, image quality can be subjective, but if you did a "blind" side-by-side comparison, I'd bet more people would choose the 3870 over the 8800GT.


----------



## asb2106 (Mar 11, 2008)

DarkMatter said:


> For me one of the most important features in games is immersion. And one of the best ways of immersion is to make better graphics, as close to reality as possible. The other is the freedom within the game and interactivity, you won't feel you are part of a world if you can't decide what you want to do. That's the main reason I don't like COD4. Freedom of choice and interactivity with the world = 0
> 
> And those two are really far from reality right now, but each time they expand them in a game, you can feel as it is really close to reality. It's not objectively close, but subjectively, we've been saying "it's almost real!" since DOOM days. And if immersion is one of the things you want in a game, graphics progression is a must, because once you have played one game enough, you assimilate its graphics level and it's not "real" anymore. We can say the same for sound, physics, interactivity...



Those are the same reasons I didn't like COD4.  Crysis has shown that games can be more than just running down a designated path.  New games will let us make the gaming experience our own, not just running down the road shooting the bad guys.  In Crysis you can sneak past, snipe out the bad guys, or run up and mow them down with a machine gun.  I like having the control to do what I want in that situation.  Most other games let you do all those things, but only when they say it's time.

And to me, graphics are fun, gameplay is great, and being able to combine the two makes gaming a lot of fun!  A lot of reviews didn't like Crysis; they said it didn't live up to the hype.  I really don't agree. I didn't expect that much, and I was very impressed with it (except for inside the mountain; that bored the hell out of me).


Sorry, I know my post has strayed off topic really bad.........


----------



## Lillebror (Mar 11, 2008)

I think it's because nVidia makes a lot of driver tweaks, so you lose some IQ but get a lot faster frames.
If ATI did the same, I think both cards would be even in performance.


----------



## asb2106 (Mar 11, 2008)

sttubs said:


> Reading through these posts and still no one is addressing image quality. 8800gt is faster, big whoop. Its image quality is inferior to the 3870. I've seen them both in action. Speed is not everything. True that image quality can be subjective, but if you did a "blind" side by side comparison, I'd bet that more people would choose the 3870 over the 8800gt.



YES!  This was just said over in another thread!

The image quality is superior with ATI; I feel it has been for quite some time.  Nvidia cards might be able to shove out more frames, but with ATI I see smooth edges and a glossy finish.  It makes games playable even at low frame rates.


----------



## DarkMatter (Mar 11, 2008)

sttubs said:


> Reading through these posts and still no one is addressing image quality. 8800gt is faster, big whoop. Its image quality is inferior to the 3870. I've seen them both in action. Speed is not everything. True that image quality can be subjective, but if you did a "blind" side by side comparison, I'd bet that more people would choose the 3870 over the 8800gt.



Indeed that has been done:

http://www.maximumpc.com/article/videocard_image_quality_shootout?page=0,0

The answer: out of 15 image designers of some renown, 8 said the HD3870 looked better in games, 6 said the 8800 GT, and one had no preference. So that's more than 50% of people choosing Ati, but almost 50% who didn't, too... Also, if you read the article, the main argument most of them gave was that Ati's output was more shiny and colorful. That's something you can (and should; I do) tweak in the drivers or on the monitor itself, and it has nothing to do with IQ really. In the comparison they used the same settings on both LCDs for the sake of objectivity, but in real life it's common sense that you have to adjust your monitor for each card if you want the best IQ, even for two different 8800 GTs from the same vendor!! So had they adjusted the monitors differently, would the results have been different? If you want an answer from a 3D and image designer with no renown (I'm talking about me ), yes, they could be very different.

EDIT: Hmm! I forgot to comment on the control group. They used a control group with the same card in both rigs, and this is important because most of the designers still saw a difference!! That says a lot about this issue. Bottom line: they both look identical. Maybe not pixel-for-pixel identical, but on an IQ level, yes. There was another article (this one I can't find) where subjects were shown the same photo twice, one with slightly intensified (though false) colors and the other with normal color saturation. Most of the subjects chose the more colorful one. That says a lot too... Think about it.


----------



## Rurouni Strife (Mar 11, 2008)

Tatty, that's pretty much right, but you might have ATi and Nvidia reversed.  ATI can do 5 instructions at once if programmed for it.  If not, then it's 1, which brings it down to 64.  I just checked my math, so I think I'm right.  Nvidia can do 1, but it can do that 1 really fast, and say a GTX with 128 shaders does 128.  128 > 64.  Good point about the shader speeds in Nvidia cards too.  That's another reason the 9600 GT is a good performer: the bumped-up shader speeds.  Although I saw somewhere earlier in this topic that the 9600GT will eventually do better than an 8800GT... that won't happen.  The differences are still big; more shader-heavy games will love the 8800 card.

As for my recommendation for a card: if you game at 1440x900 or 1280x1024, hit up a 3850/70 or a 9600GT.  Anything higher will honestly be served by one 8800GT, though the 3870 would still be a pretty good option.


----------



## yogurt_21 (Mar 11, 2008)

Tatty_One said:


> No your idea/thoughts are along the right lines, if I remember rightly, each of ATI's Stream Processor are capable of only one instruction (process) at a time where as NVidia's are capable of 5 instructions at a time (I think it is), on top of that, if I remember rightly ATI's SP's have a fixed clock of 800mhz where as NVidia's SP clock is not fixed, and depending on the model start from around 1450mhz, my G92 GTS for example when overclocked can run those SP's at 2100mhz!



a little mixed up there tatty.
the 3870's shaders are capable of doing 5 operations at once, while nvidia's can only do one at a time. this isn't the whole story however.

ati has 64 shaders with 5 stream processors per shader: 4 simple, one complex.

nvidia has 128 shaders, each capable of 1 operation at a time, simple or complex.

so a quick look at the numbers will tell you why the 8800 wins: 128 complex operations max vs ati's 64 complex operations max. compound that with the fact that nvidia's shaders run at twice the clock speed of ati's, and there you go.

ati was technologically more advanced with their setup, but failed to realize that 64 complex shaders simply isn't enough, especially at a low clock speed.
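The arithmetic behind that comparison can be sketched like this. Treat these as the thread's round numbers, not measured figures: the retail 8800 GT actually ships 112 stream processors at roughly a 1.5 GHz shader clock, and the HD3870's shaders run at the 775-825 MHz core clock.

```python
# Back-of-the-envelope peak shader throughput using the thread's
# simplified numbers (illustrative assumptions, not datasheet values).

def peak_gops(units, ops_per_unit_per_clock, clock_ghz):
    """Peak scalar operations per second, in billions (GOps/s)."""
    return units * ops_per_unit_per_clock * clock_ghz

# HD3870-style: 64 VLIW units, up to 5 ops each, ~0.775 GHz
ati_best  = peak_gops(64, 5, 0.775)   # every slot filled
ati_worst = peak_gops(64, 1, 0.775)   # only the one guaranteed op per unit
# 8800-style: 128 scalar units, 1 op each, ~1.5 GHz shader clock
nv_always = peak_gops(128, 1, 1.5)

print(f"ATI best case : {ati_best:6.1f} GOps/s")
print(f"ATI worst case: {ati_worst:6.1f} GOps/s")
print(f"NV  (scalar)  : {nv_always:6.1f} GOps/s")
```

The scalar design sits between ATI's best and worst cases; which side wins in practice depends on how often the compiler can fill the extra VLIW slots.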


----------



## DarkMatter (Mar 11, 2008)

yogurt_21 said:


> a little mixed up there tatty.
> the 3870's shaders are capable of doing 5 operations at once, while nvidias are only capable of one at a time. this isn't the whole story however.
> 
> ati has 64 shaders with 5 stream processors per shader. 4 simple one complex.
> ...



LOL, I will never understand why people keep saying Ati was technologically more advanced with their setup. It's more complicated, it has more and different units inside, and it's an architecture that moves further from convention than what Nvidia did... All true, but that doesn't mean it's more advanced. An advancement is a new approach to a problem that is better than the old one, and that didn't happen this time.

EDIT: I'm going to explain why G80/G92 is more advanced than Ati's approach.

- It's the clear performance winner.
- It consumes a bit less under full load.
- It's miles ahead in performance-per-watt.
- It's ahead in performance-per-transistor.
- Even fabricated at 65nm, it manages to win on all of the above against the competition at 55nm.
- Even fabricated at 65nm, it overclocks better.
- Even at 65nm vs 55nm, it doesn't get hotter with the same cooler.
- Even at 65nm, it's on par or better in price/performance.
- Its cluster-based architecture lets parts with failing units become other chips, improving yields, and it does this a lot better than Ati's approach can.

Ati and Amd always try to innovate; I love that and I hope they keep doing it. Sometimes the result is a better product, but not this time. It's still a good product, but it's far from being the most advanced this time. As I said, more complex doesn't mean more advanced. G80 was very innovative too, maybe not as much as Ati's design, and it has proven to be the better approach: new technology + successful product = advancement. And by success I don't mean market success; I mean winning on all of the above.
I really, really, really hope that R700 is a lot better than G92 (surely it will be), but I hope just as much that the new Nvidia is a lot better if it comes later than R700. If the new Nvidia comes first, I really hope R700 is a lot faster than GT200 or whatever its name is.


----------



## Lillebror (Mar 11, 2008)

If you program for those ati cards, they would be a lot faster, because they can do 64*5 operations each clock. So that's 320 vs 128.


----------



## Tatty_One (Mar 11, 2008)

Lillebror said:


> If you program for those ati cards, they would be alot faster, cause they can do 64*5 operations each clock. So thats 320 vs 128



No it's not! Did you read post 69?  Complex is the key!


----------



## DarkMatter (Mar 11, 2008)

Lillebror said:


> If you program for those ati cards, they would be alot faster, cause they can do 64*5 operations each clock. So thats 320 vs 128



Well, if you specifically programmed for that card, it would be faster than it is. You would have to build the entire engine around the principle that each clock does 64 operation groups, each with one complex operation and 4 simple ones. But to do that you would probably have to write a new API first, because I doubt DirectX is capable of it. Even then it would be nearly impossible to build such an engine; making linear, resource-independent engines is difficult enough without worrying about something like that.

Anyway, specifically programming for each video card goes against the principle on which APIs like OpenGL and DirectX were created, not to mention HLSL and other shader languages. That principle is to free developers from low-level programming, so they don't have to care what hardware they program for and can instead rely on drivers to do the hard work of translating their code into code the hardware understands.

Indeed, Ati's drivers do this very well. If you take into account that Nvidia's shader processors run at double the clock of Ati's, you could think of the 8800 as having 256 complex shader processors, where Ati only has 64 complex ones plus 256 simpler ones. That makes Ati's drivers very efficient, despite what many people think and would say here. There's very little they can still do to improve performance on Ati cards, because they are already using most of their power.


----------



## Lillebror (Mar 11, 2008)

Sorry to disappoint you, Tatty_One, but that's how the card works  the problem with r6xx cards is that if you can't fill 5 operations in the shaders, then you aren't fully using them and you use more clocks for the same work. Nvidia's shaders can do 1 instruction at a time, but they run a lot faster and are easier to fill up, because they take 1 each. r6xx shaders run at about half the speed and need 5 instructions to be fully used

- http://www.elitebastards.com/cms/in...sk=view&id=388&Itemid=31&limit=1&limitstart=3 you can read more about it here 
- http://www.anandtech.com/video/showdoc.aspx?i=2988&p=4 and here


----------



## Tatty_One (Mar 11, 2008)

Lillebror said:


> Sorry to disappoint you, Tatty_One, but thats how the card works  the problem with r6xx cards is that if you can't fill 5 operations in the shaders, then you aint fully using em, and uses more clocks for the same work. Nvidia's shaders can do 1 instruction at a time, but they run alot faster, and are easier to fill up, cause they can take 1 each. r6xx are about half the speed, and needs 5 instructions to be fully used



Yes, of course, but you are missing the point: it's the complex shaders that really count, and the ATi card has only 64.  Ohhhh....and I am not disappointed....I have 3 NVidia cards


----------



## Judas (Mar 11, 2008)

Tatty_One said:


> Yes of course but you are missing the point, it's the complex shaders that really count and the ATi card has only 64.  Ohhhh....and I am not dissapointed....I have 3 NVidia cards



LOL!


----------



## DarkMatter (Mar 11, 2008)

Lillebror said:


> Sorry to disappoint you, Tatty_One, but thats how the card works  the problem with r6xx cards is that if you can't fill 5 operations in the shaders, then you aint fully using em, and uses more clocks for the same work. Nvidia's shaders can do 1 instruction at a time, but they run alot faster, and are easier to fill up, cause they can take 1 each. r6xx are about half the speed, and needs 5 instructions to be fully used
> 
> - http://www.elitebastards.com/cms/in...sk=view&id=388&Itemid=31&limit=1&limitstart=3 you can read more about it here



What both Tatty and I are trying to say is that it's impossible to fill all those shader processors better than Ati's drivers do right now. It's statistically impossible to fill all the shaders with random operations, and random operations are what engines use and need. Maybe you could squeeze out a 5% improvement, but even that's very unlikely. There's nothing you can do to make R6xx faster than Nvidia's offerings.

What happened is that Ati never thought Nvidia would pull off such a chip. G80 was enormous, and so was R600, but since R600's shaders are simpler and were manufactured on a smaller process, they had a smaller chance of failing in manufacturing. The whole IT industry was shocked at how big G80 was, because the bigger the chip, the lower the yields, and yields had to be very low indeed, but they managed to make it work. Ati wanted lots of shaders too, but they took a more conservative (manufacturing-wise) strategy and made them simpler and slower. The rest is history.


----------



## DaedalusHelios (Mar 12, 2008)

Tatty_One said:


> Yes of course but you are missing the point, it's the complex shaders that really count and the ATi card has only 64.  Ohhhh....and I am not dissapointed....I have 3 NVidia cards



See! In Tatty's mind he is never wrong. lol

The guy never concedes. Let's debate and not argue, Tatty_One. Please change your avatar. Nobody wants to see that. Is nudity tolerated on these forums? Admins?

Should I put a Victoria's Secret model's a$$ as my avatar? Is it allowed?

Now watch Tatty throw a fit.


----------



## asb2106 (Mar 12, 2008)

DaedalusHelios said:


> See! In Tatty's mind he is never wrong. lol
> 
> The guy never concedes. Lets debate and not argue Tatty_One. Please change your Avatar. Nobody wants to see that. Is nudity tolerated on these forums? Admins?
> 
> ...



that's pretty harsh. Even though people may disagree, you have to agree to disagree; it's all part of any forum. In a way Tatty is right. I'm not saying you are not, but who really cares, let's just drop it


----------



## DaedalusHelios (Mar 12, 2008)

asb2106 said:


> thats pretty harsh - Even though people may disagree, you have to agree to disagree.  Its all part of any forum.  In a way Tatty is right, Im not saying that you are not, but who really cares, lets just drop it




Hey, Tatty could be right! I wouldn't know, because I don't engineer GPUs. I was just saying there is a difference between an argument and a debate. A lot of the time he doesn't try to explain why he is right; he just insists that he is. That's an argument, even if he is right every time.

I am more into the exchange of info and ideas than brand loyalty.

I have always heard that games get sponsored by Nvidia and are engineered primarily to complement their hardware. Many sources say this, which convinces me to believe it. So that is why it isn't utilized. I believe it's not a driver problem; it's a lack of support from the game developers. Which is why I have two Nvidia machines and one ATi machine. Ati tries really hard but still doesn't come out on top. It's sad. But there is hope on the horizon for them.

I bought an Asus P5N-T Deluxe 780i, but I would never throw that kind of money at an Ati/AMD chipset given the absence of an ultra-high-end option from them.


----------



## Lillebror (Mar 12, 2008)

> What both Tatty and I are trying to say is that it's impossible to fill all those shader processors better than what they are doing right now in Ati's drivers. It's statistically impossible to fill all the shaders with random operations, and random operations are what engines use and need. Maybe you can squeeze out a 5% improvement but that's very unlikely. There's nothing you can do to make R6xx faster than Nvidia's offerings.



Actually, it is possible  You just have to fine-tune your application to it.
I know no one is going to do that, but that doesn't mean it's impossible.

The R600 has 64 unified shaders, but they are vec5 (4 vector + 1 scalar), so the R600 has 320 stream processors, or let's just call them multiply-accumulate units. These 64 units are divided into 4 blocks. The 320 stream processors are really 64 4D + 64 1D; do the math and 64*4+64 gives you the 320 figure.

Vec4 means you can process a shader with four-dimensional data, e.g. the X,Y,Z,W or R,G,B,A dimensions, plus one more scalar instruction. So if a game has shaders that need a vec4 instruction all the time plus one additional scalar instruction, then ATI has a great GPU and can score twice as high as Nvidia. In the best-case scenario ATI can process 320 streams while Nvidia can do only 160.

Nvidia has a big advantage in that its stream processors run at 1.35 GHz, and that stacks up really well against the 320 stream processors / multiply-accumulate units at 800 MHz.

Nvidia's G80 can process more simple shaders, as each of its stream processors can process a single instruction or dimension. Nvidia's G80 hardware can be seen as 128/4 = 32 vec4 shaders, but each of those 32 units can work in parallel, which is how you end up with 128 stream processors all the time. Nvidia's shaders are so flexible that Nvidia can take as many units as it wants and calculate as many vector dimensions as it likes. It can always do 128 stream-processor instructions per clock, while ATI does 64 in the worst case and up to 320 in the best case.
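The slot-filling tradeoff both sides are describing can be sketched with a toy model. The `ilp` parameter here (how many independent scalar ops are available to co-issue per unit per clock) is my own simplification of what the compiler/scheduler actually does:

```python
# Simplified model of R600-style VLIW5 slot filling: each of the 64
# units has 5 slots per clock but only fills as many as the compiler
# finds independent ops for. A scalar design never strands slots.

VLIW_UNITS, VLIW_SLOTS = 64, 5
SCALAR_UNITS = 128

def vliw_ops_per_clock(ilp):
    """Ops actually issued per clock when 'ilp' slots can be filled."""
    return VLIW_UNITS * min(ilp, VLIW_SLOTS)

for ilp in range(1, VLIW_SLOTS + 1):
    util = min(ilp, VLIW_SLOTS) / VLIW_SLOTS
    print(f"ilp={ilp}: VLIW issues {vliw_ops_per_clock(ilp):3d} ops/clock "
          f"({util:.0%} of peak) vs scalar's constant {SCALAR_UNITS}")
```

Ignoring clock speeds, the VLIW design only matches the 128 scalar units once 2 of its 5 slots are filled; factor in the roughly 2x shader-clock gap and it needs about 4 slots filled just to break even.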


----------



## asb2106 (Mar 12, 2008)

DaedalusHelios said:


> Hey Tatty could be right! I wouldn't know because I don't engineer GPUs. I was just saying there is a difference between an argument and a debate. Alot of times he doesn't try to explain why he is right. He just insists that he is. Thats an argument even if he is right everytime.
> 
> I am more into the exchange of info and ideas rather than brand loyalty.
> 
> ...



see, now I'm on the opposite end of that; I've owned strictly ATI for the last 2 years. I think image quality is superior with ATI.

And the main reason I go with an Intel chipset is that I love to OC, and the P35/X38 boards OC far better than their 700-series counterparts.  Not that those are bad, I'm sure you can get a good OC, but when it comes down to it, Intel works with its own chips a little better.

But for brute force and frames, Nvidia controls the market. Match up two cards, though, and the tables turn: Crossfire scales much better than SLI. Now the problem lies with the game developers to open up better support for dual-video-card solutions.  Crossfire gets no real credit because of this; ATI and Nvidia get blamed for their solutions working poorly in games, even though SLI/Crossfire is nothing new.

The beauty of the Nvidia/ATI battle is that we are the ones who win.  They push and push to develop faster and better hardware, and the competition keeps the pricing reasonable.  And I can thank Tatty for that (he straightened me out there),

UNRELATED TO TOPIC
I've only been here 3 months now, and when I got here I thought I knew about everything with computers; after being here 2 weeks I realized I was very unaware.  I had built many, many systems, and I had been running my business for a year at that point.  But now my knowledge has grown tenfold, and I have to thank Tatty and others for that. (That's why I came to his defense; sorry, he really does know what he is talking about.)


----------



## yogurt_21 (Mar 12, 2008)

DarkMatter said:


> LOL I will never understand why people keep saying Ati was technologically more advanced with their setup. It's more complicated, it has more and different units inside, it's an architecture that moves away more from the conventions than what Nvidia did... Everything true, but that doesn't mean it's more advanced. Advancement is a new aproach to resolve a problem that is better than the old one, something that doesn't happen this time.
> 
> EDIT: I'm going to explain why G80/G92 is more advanced than Ati's aproach.
> 
> ...



lol, more advanced doesn't always = more performance.
windows xp is more advanced than windows 3.1 despite the fact that it runs slower in every possible way.



> It's miles ahead in performance-per-watt.


what exactly is "miles"? proof please.



> Even when fabricated in 65nm it overclocks better.


g92 vs rv670, yes; g80 vs r600, no



> It's architecture based on clusters permits failing parts to become other chips, improving yields. It does this a lot better that what Ati's aproach is able to do.


ati invented this, so how in the world can nvidia be doing it better? where do you think the 2900 pro and gt came from? thin air? and on the g80, how exactly are they able to do this? the number of rops and shaders is tied to the memory bus width! they can't just go "well, it failed as an 8800 ultra, let's make it an 8800gts 640mb." every manufacturer can turn failed cores into something else before the core is on a board. ati can do this even after it is on a board; nvidia cannot, once the core is mounted they're stuck. however, on the g92, if a shader bank fails on the 512mb gts, sure, they can disable one, but the odds are much better that a rop will fail first, making the card useless to them.

and no, advancement doesn't mean better; electric cars are more advanced than the internal combustion engine, but they can't go as far or as fast. the fact is that ati managed to do something nvidia didn't: make each shader run multiple operations at the same time. so whether you think so or not, it's a fact that ati's architecture is more advanced. nvidia's simply works better.

here's what the dictionary has to say.
http://www.merriam-webster.com/dictionary/advanced

The point of us saying that ati's is more advanced is a reference to where the market is going. We're NOT claiming the Nvidia dev team is inferior to ati's team.


----------



## farlex85 (Mar 12, 2008)

Ah, well, this was a good, informative thread. Sometimes I wish I could hear some of these arguments hashed out by the likes of Robert De Niro or Al Pacino, talking about ROPs and shaders and such. Or maybe a young couple madly in love, but one likes Nvidia and Intel and the other likes AMD and ATI, so much so that they scream about it and the neighbors hear: "I can't believe you actually think ATI's stream processors are more advanced, Nvidia obviously has superior architecture." ........ Man, I need to go to bed.


----------



## Wile E (Mar 12, 2008)

DaedalusHelios said:


> See! In Tatty's mind he is never wrong. lol
> 
> The guy never concedes. Lets debate and not argue Tatty_One. Please change your Avatar. Nobody wants to see that. Is nudity tolerated on these forums? Admins?
> 
> ...


Wow. That was a bit out of line. You obviously don't know Tatty too well.


----------



## DaedalusHelios (Mar 12, 2008)

Wile E said:


> Wow. That was a bit out of line. You obviously don't know Tatty too well.



He got righteous with me in a thread, so I wanted to say something to take it down a notch. lol

I wanted to call it before it happened. I was already a little predisposed to feel edgy because of his avatar. I feel bad for the overweight people in this country; it's just like any other disability. I never was one for "fat jokes". I love to work out, and it keeps me muscular.... but what about the ones who cannot? We can laugh at them, or respect them for who they are. It doesn't mean we have to date them, or even hang out with them.

I know he has made friends here and you are one of them. Thats good.


----------



## wolf (Mar 12, 2008)

dude, most fat people are fat cos they eat too much, and if they wanna slim down they gotta STOP EATING. now i don't make fun of fat people either, but unless it's glandular or something actually physically medical (not mental; of course it's partly mental), then it's not a disability at all, they brought it on themselves.

not to mention the net is a place for freedom of speech, and if that comes in the way of an avatar pic then so be it. if you don't like it, don't look at it!

and Wile E is correct, it was out of line, and you mustn't know him very well. oh, and if you really think you've taken him down a notch, think again.


----------



## DaedalusHelios (Mar 12, 2008)

wolf said:


> dude most fat people are fat cos they eat too much, and if they wanna slim down they gotta STOP EATING. now i dont make fun of fat people either, but unless its glandular or something actually physically medical (not mental, of course its part mental), then its not a disability at all, they brought it onto themselves.
> 
> not to mention the net is a place for freedom of speech, and if that comes in the way of an avatar pic then so be it, if you dont like it, dont look at it!
> 
> and Wile E is correct, it was out of line, and you mustn't know him very well. oh and if you really think you've taken him down a notch, think again.



*As you said, don't look at the post below.*
The overweight people....Oh yes.... lets kill them in the streets. Shame on them. Just like the Australian Aborigine.
That sounds awfully Righteous! Congratulations!!! *tear*

Freedom of speech right?

Don't take it seriously when I am playing with you. I don't mean you harm. I show empathy for them and that is all. I don't blame it on them.


----------



## wolf (Mar 12, 2008)

there's a big difference between an avatar picture and what you just typed...

you're taking what i said way too literally and you're being silly about it.

but yes in a way you are right, freedom of speech does apply, unless you tread on a nerve and get yourself banned from the forums.

anyway, lets get back on topic, architectural differences.

if you want to start a war against Tatty, me, fat people or fat people pictures then go somewhere else. this is a tech forum.


----------



## DaedalusHelios (Mar 12, 2008)

wolf said:


> theres a big difference between an avatar picture and what you just typed...
> 
> your taking what i said way too literally and your being silly about it.
> 
> ...



It's just a tech forum. No reason to act all threatening about it. Internet is serious business. 

No need to get emotional either. I am sorry if I hurt your feelings. I didn't want to make Tatty mad either. I just wanted to persuade him to be less forceful, but you were telling me:





> oh and if you really think you've taken him down a notch, think again.



So maybe you are right. It probably wouldn't work. I was wrong to think it would I guess.


----------



## wolf (Mar 12, 2008)

DaedalusHelios said:


> Its just a tech forum. No reason to act all threatening with it. Internet is serious business.
> 
> No need to get emotional either.



dude you're hilarious, you think i was threatening 

and emotional  

you're totally overestimating your impact on people.

classic!


----------



## DaedalusHelios (Mar 12, 2008)

wolf said:


> dude your hilarious, you think i was threatening
> 
> and emotional
> 
> ...



Whoa.... you are totally right...... what was I thinking. 
Will God love me, after what I have done?

Dear Sweet Baby Jesus, 

               I have made a mistake on Tech Power Up forums that shall never be forgotten. What should I do Jesus? There is this wonderful man with a fat naked woman's butt in the air as his avatar. I apparently made a false accusation that he was arrogant when he was in fact the most knowledgeable man on computers on the internet and humble too. His many disciples attacked me in his defense and made attempts to condemn me until I wrote a letter to you to ask for forgiveness. So here it is, sweet baby Jesus. I am deeply sorry.


PS. Please forgive my grammar Jesus.


----------



## wolf (Mar 12, 2008)

man you are hilarious! i dont even know what to make of this anymore except, lol.


----------



## Wile E (Mar 12, 2008)

Reported the thread. This is getting out of hand. You guys need to relax.


----------



## DaedalusHelios (Mar 12, 2008)

Wile E said:


> Reported the thread. This is getting out of hand. You guys need to relax.



I was doing it as a "peace offering".
I want to "make up" with them. 

I'll quit. Point taken.

The admins are free to delete all of my posts in this thread if they feel offended.


----------



## PaulieG (Mar 12, 2008)

Thanks for bringing it down a notch guys. Let's please keep it that way.


----------



## Lillebror (Mar 12, 2008)

Woah, guys! I leave for like 12 hours, and everything got off topic


----------



## Firebeast (Mar 12, 2008)

LOL... here we go, correct place for them 

http://forums.techpowerup.com/forumdisplay.php?f=10




Lillebror said:


> Woah, guys! I leave for like 12 hours, and everything got off topic


----------



## Lillebror (Mar 12, 2008)

http://www.beyond3d.com/content/reviews/1/11 - found this about the G80 arch.
http://techreport.com/articles.x/12458/2 - and this is actually a really good read on how the R600 works


----------



## wolf (Mar 12, 2008)

it really was getting out of hand, thats why i lol'd and left.



wolf said:


> unless you tread on a nerve and get yourself banned from the forums.
> 
> anyway, lets get back on topic, architectural differences.



as you can see i did try and say that it was a little out of hand, and did try to get us back on topic, but i'll admit i also got carried away. sorry guys, general nonsense thread from now on!


----------



## Tatty_One (Mar 12, 2008)

DaedalusHelios said:


> See! In Tatty's mind he is never wrong. lol
> 
> The guy never concedes. Lets debate and not argue Tatty_One. Please change your Avatar. Nobody wants to see that. Is nudity tolerated on these forums? Admins?
> 
> ...



Tatty never throws a fit, and he also regularly concedes when he is wrong, but I don't see that there is anything to concede unless you are disagreeing with the fact that there are only 64 complex shaders?  If you recall, I was not actually the person who initially made that observation, I just re-enforced someone else's comments.

I am not sure what my avatar or "Victoria's Secret" has to do with the topic of this thread. If you are not happy with my avatar then you should PM a mod and bring it to their attention so they can decide if they can see a naked body; if they are not happy with it I am sure they will let me know. I have been using this one for around 6 weeks now, I would guess.  I can only see a bare ar*e, which is a little different from a naked body in my experience. Maybe in this case you would just like to turn a healthy debate into something personal; if that is the case then it speaks for itself really.

As far as your comment "let's debate and not argue" is concerned, I agree. In my experience, debate only turns into argument if things get either nasty or personal. I would ask you to look over this thread and the other one we disagreed in and then let me know who got nasty or personal........I think you will find, on no occasion did I get either with you!  So if we are at a point that has gone beyond healthy debate into something else, then at least one of us has caused that!

One last point: you have kind of indicated that you think my avatar is intended as a slur on, I think your term was, "fat people". You presume far too much, and in fact making that assumption is, to be quite honest... insulting. Lol, for all you know I could be a happy 300+ pound person. For the record, the picture is widely available on the net and was never intended as a slur.  If you are saying that it personally insults you, all you needed to do was send me a PM, rather than going off topic and posting assumptions, and ask me to consider removing it; if I had failed to understand your feelings, as I said, you could have contacted a Mod.

Edit:  I changed the avatar back to my old one. Another member PM'd me and asked if I would remove it; as I said to him, you only have to ask (hopefully fairly nicely). My intention is not to offend!


----------



## DarkMatter (Mar 12, 2008)

yogurt_21 said:


> lol more advanced doesn't always = more performance.
> windows xp is more advanced than windows 3.1 despite the fact that it runs slower in every possible way.
> 
> and no advancement doesn't mean better, electric cars are more advanced than the internal combustion engine, but they cannot go very far or as fast. the fact is that ati managed to do something that nvidia didn't, and that was to make each shader run multiple operations at the same time. so whether you think so or not, it's a fact that ati's architecture is more advanced. Nvidia's simply works better.
> ...



Your link explains it really well: ATI's is more advanced in their thinking (*2a*), but the fruit of that thinking, the R600 design, is not more advanced right now (*2d*). Hope you can now understand. Although I can admit that semantically I could be wrong; for instance, I'm taking evolution as a move in the right direction and that is not always true, but that is another story. (Metaphysics )
Win XP and electric motors have many features and advantages that Win 3.1 and combustion don't have. R600 does not. Electric motors are as old as combustion ones; it just happened that at some point combustion motors became more advanced than electric ones for use in cars. Electric motors have been better for trains, for example. Win XP back in the day of Win 3.1 wouldn't have been more advanced, it just wouldn't have worked at all.



> the fact is that ati managed to do something that nvidia didn't, and that was to make each shader run multiple operations at the same time.



LOL. Your naivety is lovely. Nvidia used a similar approach to R600 in the GeForce FX series. Nvidia's 6/7 series are maybe more similar to R600 than to G80 in the interior of their shader processors. With G80 they deliberately changed the trend they had been following until then, because they thought a fully scalar design was the way to go, not because they couldn't make it the other way. If you are one of those who think that "history puts everything in its place", you have to agree that they were right.



> what exactly is miles? proof please.



http://www.techpowerup.com/reviews/HIS/HD_3870_X2/21.html

Lower power consumption + better performance = much better perf-per-watt. Simple.



> g92 vs rv670, yes, g80 vs r600, no



AFAIK they OC a bit better. Even if they don't, R600 is 80nm and G80 is 90nm. The argument still holds on its own. Plus, look at the thread name.



> ati invented this, how in the world can nvidia be doing it better then? where do you think the 2900 pro and gt's came from? thin air? and on the g80 how exatly are they able to do this? the number of rop's and shaders is tied to the memory bit! they can't just go "well it failed as an 8800ultra, lets make it an 8800gts 640mb." Every manufacturer has the ability to turn their failed cores into something else prior to it being on a board. Ati can do this even after it is on a board. Nvidia cannot, once the core is mounted, they're stuck. However,on the g92 if a shader bank fails on the 512mb gts, sure they can disable one, but odds are much better that an rop will fail first making the card useless to them.



LOL  ATI invented what??? 
I don't know who invented it, IBM maybe, but for sure it wasn't ATI. And what I was saying is that Nvidia has a better implementation to take advantage of this. Learn more about the architectures:

http://techreport.com/articles.x/12458
http://techreport.com/articles.x/11211

First difference: Nvidia uses a crossbar link between its functional parts that is just a little bit more flexible than ATI's approach, but this is not really important. What is important is that R600 (and RV670 by extension) is arranged in 4 blocks of 80 SPs, while G80/G92 is arranged in 8 blocks of 16 SPs. If a defect kills one block, G80 loses 12.5% of its power and R600 loses 25%; if two blocks fail, with the unfortunate luck that the second failure lands on the other side of the chip, R600 loses 50% of its power, G80 only 25%. Even 4 failures are possible: G92 could still have 50% of its power left and release a 64 SP, 16 ROP chip... Hmm, sounds familiar. R600 would have nothing. 
You are right about the 2900 GT: it's R600 with one block disabled, leaving 240 SPs; the Pro is just an underclocked XT. The performance difference between the two is huge AFAIK. Is the difference as huge between the 8800 GTS, 8800 GT and 8800 GS? That's right, no.
In this regard G80/G92 is a lot more advanced.
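The salvage math above can be sketched in a few lines. This is my own toy model, not anything from the architecture docs, and it assumes a whole cluster gets disabled whenever it contains a defect:

```python
# Hypothetical sketch of the block-disabling argument: G80 groups its
# 128 SPs into 8 clusters of 16, while R600 groups its 320 SPs into
# 4 clusters of 80, so one dead cluster costs R600 a bigger fraction.

def remaining_after_failures(total_sps, clusters, failed_clusters):
    """SPs left after disabling whole clusters containing defects."""
    per_cluster = total_sps // clusters
    return total_sps - failed_clusters * per_cluster

# G80: one bad cluster loses 16/128 = 12.5% of shader power
assert remaining_after_failures(128, 8, 1) == 112
# R600: one bad cluster loses 80/320 = 25% (240 SPs is the 2900 GT)
assert remaining_after_failures(320, 4, 1) == 240
# Two defects in different clusters: R600 keeps only half its SPs
assert remaining_after_failures(320, 4, 2) == 160
```

Finer-grained clusters mean a defective die loses a smaller slice of its shader power, which is the whole salvage argument in one function.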

EDIT: Oh and BTW, shader processors and texture units are dependent on each other, and ROPs and the memory interface on each other. But those two groups are totally independent of each other, and that is why it's more flexible and more advanced. Advancement also comes from a manufacturing point of view: if Nvidia weren't more advanced than ATI in this respect, it would never be able to compete, because at 55 nm chips are a lot cheaper to produce.

This was a long reply. Cya.


----------



## DarkMatter (Mar 12, 2008)

Lillebror said:


> Actually, it is possible  You just have to fine-tune your application to it.
> I know, no one is gonna do that, but that doesn't mean it's impossible.
> 
> The R600 has 64 unified shaders but they are vec5 (4D + 1 scalar), so the R600 has 320 stream processors, or let's call them just multiply-accumulate units. These 64 units are divided into 4 blocks. 320 stream processors are actually 64 4D + 64 1D. If you do the math, 64*4+64 will give you the 320 number.
> ...



Come on, I know all that! You could have just skipped all that info, but I guess you couldn't figure out what I know and what I don't know. 
But I can't believe that, knowing what you know, you can't see that using all those units at the same time is impossible. It's not a matter of fine-tuning, because the structure has its limitations. For instance, AFAIK each 4D+1S cluster can only issue one instruction per clock, which makes it nearly impossible to use the scalar and the vec4 functions at the same time, because of the nature of the code that would be using them. Yeah, you can use vec4 for RGBA, but you can't use the scalar unit then, because there isn't any task that would be issued at the same time. In that situation, while the entire chip is doing color calculations, we could see R600 as a 256 SP chip. But that's not all: when doing geometry work it will operate on x, y, z most of the time, so for that stretch the chip functionally has 192 SPs. Furthermore, physics calculations require the kind of instructions only present in the scalar unit, and then only 64 SPs would be used. 
The way graphics engines work makes it impossible to use the other units, because when you are doing physics calculations you can't do other kinds of calculations in the same chunk of code. The work of trying to blend those functions falls on the drivers, but in essence they can't do it perfectly and will never reach a level near 100% utilisation. That's a trade-off inherently present in all VLIW architectures, and ATI knew it. It's not that R600 doesn't deliver; it's just that G80 has demonstrated itself to be a much more efficient design. And that was what ATI couldn't even imagine. When they launched the 7900 GTX, and when asked about R500 (Xenos), Nvidia said they thought unified shaders weren't the way to go yet. The media then assumed Nvidia's next chip wouldn't use them, and Nvidia let them think that way. But the truth is they did, and they had been developing one long before that; they just wanted to make it a surprise. And it was.
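The 256/192/64 figures in the argument above fall out of a simple lane count. This is just my own illustration of the post's reasoning (the workload labels and lane mapping are my assumptions, not an ATI spec):

```python
# Toy model of the 4D+1S utilisation argument: R600 has 64 shader
# units, each with a vec4 ALU plus one scalar ALU. If a workload only
# keeps some lanes of each unit busy per clock, the "320 SP" chip
# behaves like a smaller one.
UNITS = 64

# Lanes plausibly usable per unit per clock for each workload type.
LANES = {
    "ideal": 5,           # the marketing number: 64 * 5 = 320
    "color_rgba": 4,      # vec4 maps cleanly onto RGBA, scalar ALU idles
    "geometry_xyz": 3,    # x, y, z leave one vector lane idle
    "physics_scalar": 1,  # scalar-only code uses just 1 ALU
}

def effective_sps(workload):
    """Rough usable stream processors per clock for a workload type."""
    return UNITS * LANES[workload]

assert effective_sps("ideal") == 320
assert effective_sps("color_rgba") == 256      # the "256 SP chip" case
assert effective_sps("geometry_xyz") == 192
assert effective_sps("physics_scalar") == 64
```

A fully scalar design like G80 sidesteps this: every one of its units counts toward every workload, which is the efficiency gap the post describes.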


----------



## Lillebror (Mar 12, 2008)

I bow down and eat the dust! You learn something new everyday  I lost the battle! Here, take my money, and this funny looking goat, as a token of your graphics card mastery!


----------



## Tatty_One (Mar 12, 2008)

Lillebror said:


> I bow down and eat the dust! You learn something new everyday  I lost the battle! Here, take my money, and this funny looking goat, as a token of your graphics card mastery!



I would be the first to admit, when it comes to graphics card architecture, I am left trailing behind the knowledge of Yoghurt and Darkmatter.  I know the basics and kind of semi tech stuff well but reading threads with the in depth stuff just sends me to sleep after a while, I should really try harder


----------



## red268 (Mar 12, 2008)

Thanks people 
I am still reading all of this, and really appreciate the information .... steep learning curve eh?!

Really appreciate all the effort from everyone.

Thanks again


----------



## erocker (Mar 12, 2008)

And, thank you everyone for making this thread beautiful again... unlike the last page. :shadedshu


----------



## DarkMatter (Mar 13, 2008)

Guys, I'm scared of some of the last posts on this page.   




> A little off topic, but I assume with your name DarkMatter, you are at least somewhat well versed in physics and astronomy. Are there any forums anywhere you know of where things of this nature are discussed? Like perhaps quantum mechanics and string theory and the like. As with this forum, a lot of the technical stuff goes over my head but I would still enjoy the discourse (if it's civil and informative of course). Seems kind of unlikely there would be such a place, but I was wondering if you knew of any.



Sorry, but I can't help you there. But there must be some of them out there!! I must say I am somewhat scared of your assumption too, LOL. I have a Telecommunications Engineering university level of physics, which is not a lot really, and astronomy was my passion when I was 10. Then my interest moved to other areas. But I'm not well versed in any of them. I know something about this, a bit about that... Very few things compared to what one could know about them.

My name is because that particular term somehow changed how I see life and knowledge. You know, dark matter is an unknown form of matter that can't be "seen" by any electromagnetic radiation, so we have no way to see it, because all the devices we use to explore everything are based on those; for us it is completely invisible. But this form of matter accounts for the vast majority of the mass in the universe, and whether the universe will continue expanding forever or eventually collapse depends on the quantity of it, not on our "known" universe. As vast as the viewable universe is for us, it's only a very small fraction of the entire picture. 
In the same way, as much as we think we know, or think there is to be known, there's always a lot more left to know.

I got metaphysical and off-topic again, didn't I? Hmm, I shouldn't post at late-night hours...


----------



## wolf (Mar 15, 2008)

yeah sorry again about my input on that last page, its so easy to get carried away.


----------



## red268 (Mar 24, 2008)

Thanks to everyone. Just thought I'd let you know:

When it came down to it, I really did need to stick to my original budget. So I ended up buying a Sapphire 3870 XT for £116.85 - http://www.ebuyer.com/product/140852 - and a new case fan for £5.99 (Hyper 120mm) - http://www.ebuyer.com/product/105137 - 
With a better case fan than the stock one that came with my case (92mm), I'm hoping cooling will be a bit better - not that it was bad to start with - which will help if I do decide to overclock it at any point.

I will overclock the card if I feel it's not quick enough at any point, and I think on my original £120 budget I've got a good card with a fair amount of overclocking potential.
I'll post once more once I have got it and tried it out - with all the input from everyone, I thought I should share the results!! Should arrive on Wednesday.

Thanks all

Red


----------



## Monkeywoman (Apr 22, 2008)

does this help? Nvidia (i.e. the 8800) has a scalar MADD+MUL architecture, so it has 128 shader processors. ATI (i.e. the 3870) has a superscalar MADDx5 architecture, so basically it has 64 shader processors that can do 5 shader operations per processor in one clock cycle (5 ALUs in one shader processor), but the fifth shader ALU is used for... special things (not sure what exactly). As for the 8800 Ultra, it has 128 MADD processors (one ALU per shader processor) and 128 MUL processors (the MUL not really being used, though). So basically the superscalar architecture's inefficiency makes the superscalar cards slower than they theoretically could be. Also, the shader clock is locked to the core, so they can't clock the shaders ridiculously high to make up for the inefficiencies.

the R700 has a modified design, i have no idea what it is, hopefully it's able to compete with big green
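A quick back-of-envelope comparison shows why neither spec sheet "wins" on paper. This is my own sketch with approximate clocks (the ~1500 MHz shader clock for the 8800 GT and ~775 MHz for the HD 3870 are my assumed figures), so treat the numbers as ballpark:

```python
# Peak theoretical shader throughput: scalar-but-fast-clocked (G92)
# vs. wide-but-slower-clocked (RV670). MADD counts as 2 flops per
# clock, MADD+MUL as 3.

def peak_gflops(shader_units, shader_mhz, flops_per_unit_per_clock):
    """Theoretical peak in GFLOPS, ignoring all efficiency losses."""
    return shader_units * shader_mhz * flops_per_unit_per_clock / 1000.0

# 8800 GT: 112 scalar SPs at ~1500 MHz, MADD+MUL = 3 flops/clock
gt_8800 = peak_gflops(112, 1500, 3)
# HD 3870: 320 SPs at ~775 MHz, MADD = 2 flops/clock
hd_3870 = peak_gflops(320, 775, 2)

print(round(gt_8800), round(hd_3870))  # roughly 504 vs 496 on paper
```

On paper they are nearly tied, which is exactly why real-world utilisation of the vec5 units (and drivers) decides the benchmarks rather than the raw spec sheet.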


----------



## wolf (Apr 23, 2008)

from the Tech Report's "GeForce 9 series multi-GPU extravaganza"



> *The G92 GPU's sheer potency creates a problem for Nvidia*, though, when it becomes the building block for three- and four-way multi-GPU solutions. We saw iffy scaling with these configs in much of our testing, but I don't really blame Nvidia or its technology. The truth is that today's games, displays, and CPUs aren't yet ready to take advantage of the GPU power they're offering in these ultra-exclusive high-end configurations. For the most part, we tested with quality settings about as good as they get [4xAA + 16xAF]. *In nearly every case, dual G92s proved to be more than adequate at 2560x1600*. We didn't have this same problem when we tested CrossFire X. *AMD's work on performance optimizations deserves to be lauded*, but one of the reasons CrossFire X scales relatively well is that *the RV670 GPU is a slower building block*. *Two G92 GPUs consistently perform as well as three or four RV670s*, and they therefore run into a whole different set of scaling problems as the GPU count rises.



much as i had suspected: sure, CFX scales better, but it's because you're building with less.


----------



## DaedalusHelios (Apr 23, 2008)

wolf said:


> from the Tech Report's "GeForce 9 series multi-GPU extravaganza"
> 
> 
> 
> much as i had suspected: sure, CFX scales better, but it's because you're building with less.



Good point.


----------



## red268 (Apr 23, 2008)

Just to let everyone know:

I went for the 3870 simply because it was within my original budget, and that's something I really did need to stick to.
My brother and I got the same card so that when one of us is away, the other can use their card for CrossFire  So that's great.
I got the Sapphire HD3870 and am very pleased with it!

Crysis runs fine on high settings and looks amazing on highest (I've only tried highest for a short time, no gun fights or anything. Just standing on a quiet beach .... I guess if I got in to a fight it would get too slow to play reasonably.)

Call Of Duty 1 fetches a nice 1000fps (Looking at the floor in Brecourt) with all settings maxed out - 1000fps being the limit. (Yes, I know it's a pretty old game, but that's still bloody impressive!!)

The card cost £116.

I've tried overclocking it and it works a charm. But at the moment I have no need to overclock it as everything runs brilliantly!! 

It came with the core at 776 MHz and the memory at 1126 MHz.  ATI Catalyst Control Centre limits the overclocking to 885 core and 1387 memory. I'm sure there's a way to overclock more in another program etc. But as I said, I really don't need to at the moment, so won't bother.

Idle temperature is about 42°C and under full load it gets to about 64°C, so it's nice and cool! (My old 8400GS idled at about 55°C and under full load would climb to about 78°C!!)

Only slight gripe with the card is the noise. It's much louder than the rest of my PC put together. But I'm used to it now, and am not that bothered anymore.

Thanks once again to everyone for their help!! Greatly appreciated!!


----------



## das müffin mann (Apr 23, 2008)

glad you're happy with your purchase!


----------

