# Safe GPU temps



## cdawall (Oct 1, 2008)

A lot of talk goes around on these forums and many others about what a "safe" GPU temp is. Most people you ask will say "keep it under 70C" or "I don't like anything in my case running over 60C." This is bunk, BS, whatever you want to call it. 

Every GPU since Nvidia's Ti4x00 series and ATi's 9xx0 series has been able to handle 100C+ temps. Current cards are safe running 24/7 at 100C. G80 and up are rated to fail only above 120C. This means at 90C your card is doing just fine!

When you OC on a reference cooler, the GPU is not what will fail. You will lose the VRMs or capacitors way before the GPU, as the GPU will throttle itself down if temps get too high.

A video card at stock will not fail due to overheating without an external problem: things like bad airflow (read this for help with keeping your case cool), dust, improper installation that damages a heatsink, or even an unplugged heatsink fan. 

New high-end cards run hot. That's the way it is, and it won't change without a major revamp of core designs.


----------



## MRCL (Oct 1, 2008)

This might be right, but I for one find it more calming if the GPU temps are in the two-digit midfield of the temperature scale (I'm talking Celsius)... 51° simply looks better than 91°, even though it has virtually no effect on the card... imho


----------



## mlee49 (Oct 1, 2008)

cdawall said:


> This means at 90C your card is doing just fine!



Agreed.  Safe temp is below 90C.  

Thanks for helping stamp out these misconceptions about GPU temps!


----------



## cdawall (Oct 1, 2008)

I just got tired of the 8,000,000 threads asking "is my GPU OK at whatever temp?" If it isn't crashing in games and 3D, its temp is probably OK!


----------



## zithe (Oct 2, 2008)

People will only stop asking this once they get an answer; the old posts tend to get deleted. I just went and used Ctrl-F to search 'temp' on every page in this forum and the first few pages of the ATI and Nvidia sections. 

This was the only topic I hit.

Edit: Just tried looking up 'temp' under search. This was the first topic. I saw loads of others but they were saying stuff like "Why didn't they include the *temps* in the review?" and stuff like that.


----------



## cdawall (Oct 2, 2008)

That's because it's not normally threads, it's posts inside of threads that go "hey, is my GPU OK at XX temp?"


----------



## DrPepper (Oct 3, 2008)

I know GPUs can handle high temps, but I like to keep mine cool because when it's hot the whole room heats up lol. Plus I feel uncomfortable even though my old cards managed higher temps without me realising it.


----------



## Lillebror (Oct 3, 2008)

DrPepper said:


> I know GPUs can handle high temps, but *I like to keep mine cool because when it's hot the whole room heats up* lol. Plus I feel uncomfortable even though my old cards managed higher temps without me realising it.



That, my friend, is a placebo. The card is generating the same heat; you're just moving it into your room faster.


----------



## newtekie1 (Oct 3, 2008)

I disagree that 90C is safe for GPUs. It might not fry the card instantly, but it will definitely shorten the lifespan. Why would it be safe for a GPU to be at 90C but not a CPU? They are made out of the same materials in the same ways, so why can't a CPU take 90C?


----------



## DrPepper (Oct 3, 2008)

Lillebror said:


> That, my friend, is a placebo. The card is generating the same heat; you're just moving it into your room faster.



No. My stock cooler had a fan and did about 90s; I went passive and I'm only at 30s.


----------



## Jeno (Oct 3, 2008)

Mine never gets over 55 on a hot day, so I should probably ramp up the OCs some more before Far Cry 2... right?


----------



## KainXS (Oct 3, 2008)

newtekie1 said:


> I disagree that 90C is safe for GPUs. It might not fry the card instantly, but it will definitely shorten the lifespan. Why would it be safe for a GPU to be at 90C but not a CPU? They are made out of the same materials in the same ways, so why can't a CPU take 90C?



Yep, I totally agree with that when you're talking about the life expectancy of CPUs and GPUs. 

It's widely known that failure in a CPU/GPU/etc. is a DIRECT result of the number of thermal cycles and the change in temperature above ambient,

and there are formulas that prove it.
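The formulas KainXS is alluding to are usually variants of the Coffin-Manson relation, where cycles-to-failure falls off as a power of the temperature swing each cycle. A rough sketch in Python; the constant `c` and exponent `n` here are illustrative assumptions, not vendor data:

```python
# Coffin-Manson-style fatigue model: cycles to failure shrink as a
# power of the temperature swing each thermal cycle puts the part through.
# c and n are placeholder values for illustration, not measured figures.

def cycles_to_failure(delta_t, c=1e9, n=2.0):
    """Estimated thermal cycles to failure for a swing of delta_t degrees C."""
    return c * delta_t ** -n

# Relative lifetime: a card cycling 25C -> 90C vs one cycling 25C -> 60C
ratio = cycles_to_failure(90 - 25) / cycles_to_failure(60 - 25)
# ratio is about 0.29 under these assumed constants: the hotter card
# survives roughly a third the cycles
```

With n = 2 the bigger swing costs about 3x the cycle life; real exponents depend heavily on the package and solder.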


----------



## Deusxmachina (Oct 3, 2008)

Yeah, I feel sorry for people's wallets when they're told to go buy an aftermarket cooler for $40 because holy crap, 75 degrees is too hot.  

Heat does indeed kill electronics, so cooler is always better, but a lot of people take cooling to an extreme.  It's up there with decreasing the life of a CPU by overclocking it.  Yeah, maybe it reduces the life, but it will still likely last long after it's outdated anyway.


----------



## Formula350 (Oct 4, 2008)

Lillebror said:


> That, my friend, is a placebo. The card is generating the same heat; you're just moving it into your room faster.



Actually, it's not. I kept my bedroom warm for a few Minnesota winters SOLELY off my 2x Athlon MP rig. The rest of the house used the heater, while in my room I had the vent shut and my door was always shut too. If my door was open, I could feel the cooler (70F) air coming in and my nicer 80F air leaving, heh. They also play off each other: computer gets hotter, then room gets hotter; room hotter = higher ambient = higher computer temps. It won't keep going like that forever, since the ambient temp would have to really jump up, and my heatsinks were able to keep both chips around 48c under load. So we have 2 chips at 48c, a graphics card (which would've been my 9700 Pro) around 55c (to be fair to its power draw and shite stock HSF), RAM, around 4 HDDs, the PSU and the other small ICs on the motherboard. Turns into quite a baseboard heater that moonlights as a speedy computer 



Deusxmachina said:


> Yeah, I feel sorry for people's wallets when they're told to go buy an aftermarket cooler for $40 because holy crap, 75 degrees is too hot.
> 
> Heat does indeed kill electronics, so cooler is always better, but a lot of people take cooling to an extreme.  It's up there with decreasing the life of a CPU by overclocking it.  Yeah, maybe it reduces the life, but it will still likely last long after it's outdated anyway.



I know that with my better cooling on my X1950 Pro, its overclock potential skyrocketed. It could barely move with the stock POS; then I threw a HIS IceQ3 setup on it (mine's a Sapphire) and I have one of the highest old-style-PCB 1950 OCs that's only on air.

You're dead on with that last bit though!


----------



## Mussels (Oct 4, 2008)

I've always said look at stock load temps, and when OC'ing keep it under that.

VRMs and RAM are included in that logic; getting a small infrared thermometer can really help.


----------



## ascstinger (Oct 4, 2008)

KainXS said:


> Yep, I totally agree with that when you're talking about the life expectancy of CPUs and GPUs.
> 
> It's widely known that failure in a CPU/GPU/etc. is a DIRECT result of the number of thermal cycles and the change in temperature above ambient,
> 
> and there are formulas that prove it.




Back when I kept parts for more than a month, I had an X1900 XTX that did 95 on load; when I overclocked, the temps actually dropped because the fan kicked up a notch. Lasted me a year, and I sold it off.

Once GPUs start to break before the card is rendered useless by modern games, it might present more of an issue. Most non-gaming cards don't reach such temperatures, so this doesn't apply to them. I agree heat kills, but not at a rate that worries me. Maybe I just take extra good care of my gear or have been lucky, but it hasn't presented a problem for me yet.


----------



## Jacko28 (Oct 4, 2008)

However, a cooler GPU does mean a cooler room, which is always nice in the summer if you're stuck on your computer.


----------



## Widjaja (Oct 4, 2008)

I think the temperature threshold differs from card to card.
I had an 8800GT XXX which would crap out at 95degC (the cooler was faulty).
If I dropped the clocks to stock 8800GT speeds, it would carry on at 105degC without fault.
I now own an HD4850 which reaches 95degC regularly and has never caused a problem.


----------



## grunt_408 (Oct 4, 2008)

Widjaja said:


> I think the temperature threshold differs from card to card.
> I had an 8800GT XXX which would crap out at 95degC (the cooler was faulty).
> If I dropped the clocks to stock 8800GT speeds, it would carry on at 105degC without fault.
> I now own an HD4850 which reaches 95degC regularly and has never caused a problem.



Geez, I was worried about my 3870 idling @ 60, so I altered BIOS fan settings so I get idle around 45 now. I wasn't too sure how hot they should get.


----------



## Widjaja (Oct 4, 2008)

Craigleberry said:


> Geez, I was worried about my 3870 idling @ 60, so I altered BIOS fan settings so I get idle around 45 now. I wasn't too sure how hot they should get.



Yeah, at the end of the day, if the card craps out at stock clocks and stock fan speed, there's a problem with it.

No one should ever have to increase anything to make their GPU run properly.


----------



## MoonPig (Oct 4, 2008)

100C+ :O I've just run the CSS stress test with:

1440x900
All high
4xAA 4xAF

And my 3870 didn't go above 40C. That's with an overclock too! How do you manage 100C?


----------



## Widjaja (Oct 4, 2008)

If you are talking about the 8800GT temperatures I was getting, I did put in brackets that the cooler was faulty.
The other 8800GT XXXs I received both maxed out at 77degC.


----------



## Mussels (Oct 4, 2008)

Jacko28 said:


> However, a cooler GPU does mean a cooler room, which is always nice in the summer if you're stuck on your computer.



Um, yeah. So if you take the heat off the GPU, it goes into the air in your case, then into the air in your room... and that makes it colder? If anything, overheating hardware would concentrate the heat in the PC case, leaving the rest of the room colder.


----------



## Formula350 (Oct 4, 2008)

Mussels said:


> Um, yeah. So if you take the heat off the GPU, it goes into the air in your case, then into the air in your room... and that makes it colder? If anything, overheating hardware would concentrate the heat in the PC case, leaving the rest of the room colder.



If you don't have proper case airflow, yes. No offense, but anyone who builds their own computer and doesn't have a way to vent the warm air should be buying a pre-built and not overclocking! My case is passively vented; the top 5.25" bay is wide open and the heat just rolls on out. I also have my CPU fan ducted from the outside for cool air, and my video card has a fan that blows in cool air from the front, right across its HSF, and then it is ducted out as well. The only heat that gets put in the case (other than from HDDs and ICs) is from my CPU, but since it's getting ducted outside air, that doesn't bother me any


----------



## Mussels (Oct 4, 2008)

Formula350 said:


> If you don't have proper case airflow, yes. No offense, but anyone who builds their own computer and doesn't have a way to vent the warm air should be buying a pre-built and not overclocking! My case is passively vented; the top 5.25" bay is wide open and the heat just rolls on out. I also have my CPU fan ducted from the outside for cool air, and my video card has a fan that blows in cool air from the front, right across its HSF, and then it is ducted out as well. The only heat that gets put in the case (other than from HDDs and ICs) is from my CPU, but since it's getting ducted outside air, that doesn't bother me any



You missed the point. He thinks that a colder GPU means a colder room. That's entirely untrue, as giving it better cooling simply gets the heat into the air faster; it doesn't change the amount of heat produced.


----------



## Formula350 (Oct 4, 2008)

Mussels said:


> You missed the point. He thinks that a colder GPU means a colder room. That's entirely untrue, as giving it better cooling simply gets the heat into the air faster; it doesn't change the amount of heat produced.



So a cooler component will not result in a cooler room, in comparison to the same room with a hotter component? Because that just doesn't compute for me :\


----------



## Mussels (Oct 4, 2008)

Formula350 said:


> So a cooler component will not result in a cooler room, in comparison to the same room with a hotter component? Because that just doesn't compute for me :\



Let's put it another way.

A 95W CPU means 95 watts of heat. No matter how well you cool it, it's STILL producing 95W of heat. That 95W of heat can come from a CPU at 30C or 100C; it doesn't matter, it's the same AMOUNT of heat. All that changes is where it sits (the cooler the CPU is, the more of it is pushed out into the air).
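Mussels' point can be sanity-checked with a back-of-the-envelope number. Assuming a sealed, perfectly insulated 30 m³ room (a deliberate oversimplification; real walls leak heat), 95 W warms the air at a fixed rate no matter how cool the chip itself runs:

```python
# 95 W = 95 joules per second dumped into the room air, regardless of
# whether the chip sits at 30C or 100C. Room size and air properties
# below are assumptions for illustration.

power = 95.0        # watts
volume = 30.0       # m^3 of room air (assumed room size)
density = 1.2       # kg/m^3, air at roughly 20C
c_p = 1005.0        # J/(kg*K), specific heat of air

mass = volume * density                      # about 36 kg of air
rise_per_hour = power * 3600 / (mass * c_p)  # kelvin gained per hour
# roughly 9.5 K/h if no heat escaped -- note the chip's cooler never
# appears anywhere in this calculation
```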


----------



## newtekie1 (Oct 4, 2008)

Mussels said:


> Let's put it another way.
> 
> A 95W CPU means 95 watts of heat. No matter how well you cool it, it's STILL producing 95W of heat. That 95W of heat can come from a CPU at 30C or 100C; it doesn't matter, it's the same AMOUNT of heat. All that changes is where it sits (the cooler the CPU is, the more of it is pushed out into the air).



Correct. Actually, a CPU that is running hotter makes the room ever so slightly cooler, as more of the heat produced by the CPU is trapped in the CPU and heatsink and not exhausted into the room.


----------



## Formula350 (Oct 4, 2008)

Well, then I'm confused as to why my computer running at 40c resulted in a warmer room than when it runs at 30c. It's not a placebo when I had a thermometer to tell me, heh. I'd love to recreate the 'test', but I can't now for multiple reasons: not in that room, don't have that thermometer (moving next week).

Just don't know what to say :\ My computer hasn't changed in over a year, with the exception of adding a duct for the CPU, which dropped its temps a bunch.


----------



## grunt_408 (Oct 5, 2008)

Widjaja said:


> Yeah at the end of the day, if the card craps out at stock clocks and stock fan speed, there's a problem with it.
> 
> No one should ever have to increase anything to make thier GPU run properly.



There are a few things to consider though, I guess. Noise is one thing.
I was sniffing around the web trying to find out how hot my MSI video card should be getting and stumbled upon lots of people with issues about how loud the fan was. I had never really heard the fan on my card spin!!
Then I used RivaTuner to make a fan profile, and I seemed to get better performance with the fan @ 50-80 percent when gaming, so I decided to look at the BIOS and see when the thing was supposed to kick in and actually do something... lol. If I remember rightly the fan was set to start spinning @ 80deg. I changed it (I can't remember now to what), but I went from an idle of 60deg down to 45deg. My load temps while folding never go above 80deg. You are right, Widjaja, end users should never have to touch anything like that, but if most of them are like me they will never be happy with stock
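A fan profile like the one grunt_408 describes is just a temperature-to-duty-cycle curve with interpolation between breakpoints. A sketch of that idea; the breakpoints below are made up for illustration, not the card's actual BIOS values:

```python
# Generic temp -> fan-speed curve with linear interpolation between
# breakpoints, like the profiles RivaTuner-era tools let you define.
# CURVE values are illustrative assumptions, not any card's real table.
import bisect

CURVE = [(40, 30), (60, 50), (80, 80), (90, 100)]  # (deg C, fan duty %)

def fan_duty(temp_c):
    """Linearly interpolate fan duty (%) for a given GPU temperature."""
    temps = [t for t, _ in CURVE]
    if temp_c <= temps[0]:
        return CURVE[0][1]          # clamp below the curve
    if temp_c >= temps[-1]:
        return CURVE[-1][1]         # clamp above the curve
    i = bisect.bisect_right(temps, temp_c)
    (t0, d0), (t1, d1) = CURVE[i - 1], CURVE[i]
    return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
```

For example, `fan_duty(70)` lands halfway between the 60C and 80C breakpoints and returns 65% duty.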


----------



## Mussels (Oct 5, 2008)

Formula350 said:


> Well, then I'm confused as to why my computer running at 40c resulted in a warmer room than when it runs at 30c. It's not a placebo when I had a thermometer to tell me, heh. I'd love to recreate the 'test', but I can't now for multiple reasons: not in that room, don't have that thermometer (moving next week).
> 
> Just don't know what to say :\ My computer hasn't changed in over a year, with the exception of adding a duct for the CPU, which dropped its temps a bunch.



Did it occur to you that the reason the PC was cooler was BECAUSE the air temp was cooler? Colder ambient = colder CPU.


----------



## grunt_408 (Oct 5, 2008)

@Formula350 how long since you pulled it apart and gave it all a good clean?


----------



## Formula350 (Oct 5, 2008)

Mussels said:


> Did it occur to you that the reason the PC was cooler was BECAUSE the air temp was cooler? Colder ambient = colder CPU.



I'm not talking about my PC (directly), I'm talking about my room temp. Room temp was XXºF, and then I installed the ducting, while room temp was still XXºF. After booting Windows I noticed my CPU temp came down; then a couple hours later (still on the PC, mind you) I noticed my room had cooled off and thought my door was open a crack, but it wasn't. Alone, in my room, with the only variable changing being a duct added to the CPU and its temps dropping because of that. It was night time, outside temps hadn't dropped any (it would've had to have been a temp drop of epic proportions), and even had they dropped a few ºs it wouldn't have impacted my room. The AC wasn't off/on making a difference in how things were either. Don't know what to tell you other than the clear fact that my CPU temps being lower impacted my overall room temperature. 



Craigleberry said:


> @Formula350 how long since you pulled it apart and gave it all a good clean?



I don't see why I need to? My computer temps are much lower than the majority of people's I know. Especially considering it's a dual core on the junk OEM aluminum non-heatpiped HS, overclocked 700MHz. Idle, my CPU is 27c (yes, 27c), and under load it moves up to only around 38c. My load is what most people IDLE at! That is the benefit of this ducting. I used to be around 38c idle.


----------



## [I.R.A]_FBi (Oct 14, 2008)

clean it anyway


----------



## Formula350 (Oct 14, 2008)

[I.R.A]_FBi said:


> clean it anyway



I actually had, not too long before this thread started. Won't deny I pulled out some dust, but it obviously must not have been enough to impact temps :\


On a side note, I'm now in a different place. Everything is set up, and I had my door closed to keep sound in my room so as not to disturb my parents. I had not even thought about it, but when I opened my door to the rest of the house to go to the kitchen, I noticed the drastic temperature change! I should've checked the PC temps, but I didn't think of it :\


----------



## salimbest83 (Oct 15, 2008)

I think it depends on which chip... and room temp.
My Powercolor HD 4850 with stock HSF always stays above 95C on load.
Hope it will last me at least 2 years... (it's quite fast)


----------



## AsRock (Oct 17, 2008)

VRMs are the issue with the 2900XT for sure. I am using the Accelero Xtreme, so even 100% fan speed is still silent, although the VRMs are what make all the heat on my card.

Before, these speeds were an issue due to weather and the VRMs or the like failing. The monitor would turn off, and the Zalman fan controller would tell me the card was still not running in 3D mode.

Would be nice to cut their temps down more without going water.

EDIT: Same issue with the 7900 as well.

Here's a pic (temps are on the right side of the pic; I did it this way so the card's temps are all shown while in 3D and not in 2D).

http://forums.techpowerup.com/attachment.php?attachmentid=19371&stc=1&d=1224251707

EDIT: A PC can/will warm the air up, and when the air in a room is warmer I have noticed the computer's temps go up; it's only logical. Even more so if the room has closed doors or no open windows.


----------



## Eternal (Oct 17, 2008)

Thanks, cdawall. We needed this thread. Now I don't need to ask the same question when I put my rig together.


----------



## _jM (Nov 9, 2008)

I've never seen my GPU run hotter than 65C, and that is when it's Folding; in COD4 it maybe hits 45C.


----------



## cdawall (Nov 9, 2008)

Mine never goes over 70C, even with an OC on it.


----------



## johnnyfiive (Nov 11, 2008)

cdawall said:


> A lot of talk goes around on these forums and many others about what a "safe" GPU temp is. Most people you ask will say "keep it under 70C" or "I don't like anything in my case running over 60C." This is bunk, BS, whatever you want to call it.
> 
> Every GPU since Nvidia's Ti4x00 series and ATi's 9xx0 series has been able to handle 100C+ temps. Current cards are safe running 24/7 at 100C. G80 and up are rated to fail only above 120C. This means at 90C your card is doing just fine!
> 
> ...



I 100% agree. I recall my Creative Labs Riva TNT warming my entire room during the winter in Ohio. Video cards run hot; they will not run as cool as CPUs, not for a long time.


----------



## Sylvester (Dec 29, 2008)

Planned obsolescence is the thing which concerns me; after reading about the underfill issues with thermal cycling, I can see how thermal cycling can be used to limit the functional life of parts to some average length greater than the warranty. 

IMHO this may be a factor in the way parts are designed to run at 90°C. They are being constructed to be disposable, though I am not sure they were ever constructed to function indefinitely. It's always been a fact that the lower you keep the thermal peaks on your system, the longer parts will last. Statistically speaking, someone who believes that 90°C is a good temp for a card will have their card a shorter time than someone who aims for 60°, because thermal cycling will be more intense and bumps will fail faster. Of course that suits some users who like the excuse to upgrade, but really, who needs an excuse? I just don't want the damn thing to crap out when I least expect it.

The idea that they magically developed a whole new integrated thermal tech which allows thermal environments 40°C higher than a couple of years ago seems a bit hard to swallow IMHO, especially when you read about the nVidia fiasco. These guys screw up all the time.


----------



## Mussels (Dec 30, 2008)

Sylvester said:


> Planned obsolescence is the thing which concerns me; after reading about the underfill issues with thermal cycling, I can see how thermal cycling can be used to limit the functional life of parts to some average length greater than the warranty.
> 
> IMHO this may be a factor in the way parts are designed to run at 90°C. They are being constructed to be disposable, though I am not sure they were ever constructed to function indefinitely. It's always been a fact that the lower you keep the thermal peaks on your system, the longer parts will last. Statistically speaking, someone who believes that 90°C is a good temp for a card will have their card a shorter time than someone who aims for 60°, because thermal cycling will be more intense and bumps will fail faster. Of course that suits some users who like the excuse to upgrade, but really, who needs an excuse? I just don't want the damn thing to crap out when I least expect it.
> 
> The idea that they magically developed a whole new integrated thermal tech which allows thermal environments 40°C higher than a couple of years ago seems a bit hard to swallow IMHO, especially when you read about the nVidia fiasco. These guys screw up all the time.



Is it really that hard to believe? With die shrinks and new materials being used, I can easily believe that higher temps are not a problem. The Nvidia fiasco was the solder used and not the chip's fault: they designed a chip to run at 90C, and then some nugget decided to use solder that couldn't take the same amount of heat (probably changed at the last minute because of the RoHS stuff, without consulting the people who designed the thing).


----------



## Flyordie (Dec 30, 2008)

Nvidia's fiasco was caused by a packaging failure: they removed the "PI" layer. I could buy a G86, take it to the University of MO Rolla's Nuclear Engineering Depot in Mexico, MO, and use their electron microscope to test it out if needed... (although I really don't want to destroy a "somewhat" usable card).

I like to keep my card at the sweet spot of 65-70C. Anything over 73C and my card becomes unstable at 780MHz and shows artifacts.


----------



## DarkMatter (Dec 30, 2008)

newtekie1 said:


> Correct. Actually, a CPU that is running hotter makes the room ever so slightly cooler, as more of the heat produced by the CPU is trapped in the CPU and heatsink and not exhausted into the room.



Now that we are all being fussy and almost correcting each other: that isn't true either.

In the seconds while the chip is heating up, until it finds a stable temperature, that's true, but after that BOTH the cool and the hot chip transfer the same energy/heat to the ambient. Otherwise the chip would continue getting hotter forever. In that heat transfer, the chip/cooler just acts like a bucket under falling water: the bigger the bucket, the more water it will contain (temperature), but once it is filled, the same amount of water keeps flowing on through (heat that goes to the ambient).
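DarkMatter's bucket analogy corresponds to the standard steady-state model T_chip = T_ambient + P × R_theta: the cooler's thermal resistance sets how hot the chip sits, while the full power P flows into the room either way. A sketch with assumed, illustrative numbers (the R_theta values are not measured figures for any real cooler):

```python
# Steady-state heat flow: once the chip temperature stabilises, heat
# out equals heat in, so the room receives the chip's full power draw
# regardless of the cooler. R_theta values below are illustrative.

def steady_state_temp(power_w, t_ambient_c, r_theta):
    """Chip temperature once heat shed equals heat produced:
    T = T_ambient + P * R_theta (r_theta in degrees C per watt)."""
    return t_ambient_c + power_w * r_theta

stock = steady_state_temp(95, 25, r_theta=0.7)   # weak stock cooler
tower = steady_state_temp(95, 25, r_theta=0.2)   # big aftermarket cooler

# Both chips still dump the full 95 W into the room at equilibrium;
# the better-cooled one just runs cooler while doing it.
```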


----------



## Deleted member 24505 (Dec 30, 2008)

So does this mean a full-cover waterblock on the card is in fact better, as it keeps both the GPU and VRMs at lower, more uniform temp levels? I.e. GPU cool, VRMs cool, rather than GPU cool, VRMs cooking.


----------



## DonInKansas (Dec 30, 2008)

Just get an Accelero and your temp worries are over.


----------



## Deleted member 24505 (Dec 30, 2008)

I actually do have a full-cover block on my 4850. Max temp is about 40C at 750/1200.


----------



## Mussels (Dec 30, 2008)

In short, it's the extremes. As chips heat up they expand, and as they cool down they contract. It's only by tiny amounts, but that's why chips die eventually, and why hotter temps kill them faster.

This is why temp-controlled fans are good: if you tell it to stay at, say, 70C, the fan will slow down at idle/lighter load, making the temp extremes less extreme. (With a 100% fan you're talking say 40C idle and 70C load; with a temp-controlled fan it might be 60C idle and 75C load.)
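Plugging Mussels' example numbers into a generic squared-swing fatigue rule makes the trade-off concrete (the exponent of 2 is an assumption for illustration, not a measured figure for any GPU):

```python
# Comparing the two fan strategies above by their idle-to-load swing.
# A swing**2 wear model is a generic illustration, not vendor data.

swing_full_fan = 70 - 40   # 100% fan: 40C idle -> 70C load
swing_temp_ctl = 75 - 60   # temp-controlled: 60C idle -> 75C load

relative_wear = (swing_full_fan / swing_temp_ctl) ** 2
# under this model the always-100% fan's bigger swing fatigues the
# package about 4x faster per cycle, despite the lower peak temps
```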


----------



## Hayder_Master (Dec 30, 2008)

There is one other thing I am not sure about, which is whether ATi's cards have more headroom for heat, because ATi's cards always run at high temps. I have an ATi 4870, and it runs well overclocked to 800/1000, with full-load temps under 60C at 50% fan speed.


----------



## OzzmanFloyd120 (Dec 30, 2008)

newtekie1 said:


> I disagree that 90C is safe for GPUs. It might not fry the card instantly, but it will definitely shorten the lifespan. Why would it be safe for a GPU to be at 90C but not a CPU? They are made out of the same materials in the same ways, so why can't a CPU take 90C?



90C is plenty safe; that's the temp my card runs at under load with stock volts.
And I'll admit, when I got my GX2 I posted one of those "Is this safe?!" threads because my card was running at like 80C idle.


----------



## Deusxmachina (Dec 31, 2008)

Sylvester said:


> Planned obsolescence is the thing which concerns me, after reading about the underfill issues with thermal cycling I can see how thermal cycling can be used to limit the functional life of parts at some average length greater than warranty.



So the video card is only going to last nine years instead of 10?


----------



## Sylvester (Dec 31, 2008)

I don't know, deus, that is the question. It could be 9/10, but I get the feeling it's more like 7/15 with some of them. In any case, thermal cycling stress depends on usage patterns. For manufacturers, as long as it's more than the warranty it's OK, until they get a reputation for parts failure. With any bell curve you will have a proportion failing early, which is undesirable for users IMHO. 

But one factor that works against planned obsolescence is that there is no monopoly, so it's a roll of the dice. If a manufacturer's part fails, then the user has a choice about how to upgrade; there is no guarantee they will choose the same company again, and in fact they are less likely to do so if they experienced catastrophic failure. That is why, IMHO, the poor cooling solutions often used by manufacturers are a mistake rather than a deliberate policy. 

ATI didn't have a bad underfill issue, but cooling on their high-end cards has been rubbish for two generations because they took absolutely no notice of the need for fan control. That has to be stupidity/incompetence, given that they have just implemented it in drivers and the relative cost of that must have been negligible.

I don't think thermal tech is that much more advanced; a little, perhaps. My guess is that manufacturers have noticed the disparity between the useful life of a card (3 yrs?) and the hardware life of cards (15-odd) and have allowed more extreme thermals and tried to cut corners re: cooling where they think it won't be noticed, to save manufacturing costs. But that will cause a higher failure rate, so it's a trade-off between money saved and expense incurred for them.


----------



## someone_else (Dec 31, 2008)

Any idea what the accuracy of these diode temperatures is? For example, if it is ±20C, then an 80C reading could really be 100C or 60C. 

Why don't the manufacturers publish full specs? It would make life so much easier.


----------



## crtecha (Jan 5, 2009)

Thanks a lot, cdawall. Every so often I look around a bit at average temps, just 'cause each gfx card model and make is different. My card runs idle at about 45C, full load no higher than 60C. That's at a GPU core of 895MHz and a memory clock of 510MHz. Stock is 800/405.


----------



## VanguardGX (Jan 7, 2009)

My 3870 hits 79C on load! That's @ 880MHz core, 1280MHz VRAM btw, and it's OK! In fact, ATi said that the RV670 is OK at those temps. I must say I was a temp freak, never liked anything in my system getting over 45C. Until I got a Prescott, and that all changed


----------



## Widjaja (Jan 7, 2009)

It appears the temperatures of the high-end reference GPUs do tend to be higher than the previous high-end.
My card can literally heat up my bedroom.


----------



## VanguardGX (Jan 8, 2009)

Trust me, Widjaja's not kidding; it really does.


----------



## Formula350 (Jan 8, 2009)

Oddly enough, the cards today are getting hotter and hotter, yet they are seemingly less prone to that heat than previous chips. And while most people will say "yeah, more powerful chips should run hotter," I don't FULLY buy that. My Athlon MPs (converted XPs) ran at 43c idle with nice full-copper heatsinks at only 2.1GHz. While this A64 _DUAL CORE_ (I stress that since it was 43c for a single AXP), overclocked and overvolted to 2.8GHz, runs @ 28c idle with the STOCK aluminum heatsink!!! Now I'll deduct say 8c, because I have a fan duct that draws in fresh ambient room air compared to the hotter inside-case air the SMP rig got, but 36c < 43c  And for anyone curious, this isn't that nice stock heatpipe heatsink; it's the old-school stock kind that the single cores and 3200+ dual cores came with. Anyways, rambled a bit; the point is that we can run our video cards and CPUs quite a bit hotter than we used to, and unless you're aiming for overclocks you really don't need to worry too much


----------

