
RX 7900 XTX reference at possible vapor chamber design problem + very high hot spot (110c)

Status
Not open for further replies.

eidairaman1

The Exiled Airman
Joined
Jul 2, 2007
Messages
42,932 (6.71/day)
Location
Republic of Texas (True Patriot)
System Name PCGOD
Processor AMD FX 8350@ 5.0GHz
Motherboard Asus TUF 990FX Sabertooth R2 2901 Bios
Cooling Scythe Ashura, 2×BitFenix 230mm Spectre Pro LED (Blue,Green), 2x BitFenix 140mm Spectre Pro LED
Memory 16 GB Gskill Ripjaws X 2133 (2400 OC, 10-10-12-20-20, 1T, 1.65V)
Video Card(s) AMD Radeon 290 Sapphire Vapor-X
Storage Samsung 840 Pro 256GB, WD Velociraptor 1TB
Display(s) NEC Multisync LCD 1700V (Display Port Adapter)
Case AeroCool Xpredator Evil Blue Edition
Audio Device(s) Creative Labs Sound Blaster ZxR
Power Supply Seasonic 1250 XM2 Series (XP3)
Mouse Roccat Kone XTD
Keyboard Roccat Ryos MK Pro
Software Windows 7 Pro 64
Look, one person provided a helpful link. If you have a problem, contact the Support Department.


Thanks @LFaWolf for providing the important info needed to help users resolve this problem.

I vote the thread be locked
 
Last edited:

OneMoar

There is Always Moar
Joined
Apr 9, 2010
Messages
8,800 (1.63/day)
Location
Rochester area
System Name RPC MK2.5
Processor Ryzen 5800x
Motherboard Gigabyte Aorus Pro V2
Cooling Thermalright Phantom Spirit SE
Memory CL16 BL2K16G36C16U4RL 3600 1:1 micron e-die
Video Card(s) GIGABYTE RTX 3070 Ti GAMING OC
Storage Nextorage NE1N 2TB ADATA SX8200PRO NVME 512GB, Intel 545s 500GBSSD, ADATA SU800 SSD, 3TB Spinner
Display(s) LG Ultra Gear 32 1440p 165hz Dell 1440p 75hz
Case Phanteks P300 /w 300A front panel conversion
Audio Device(s) onboard
Power Supply SeaSonic Focus+ Platinum 750W
Mouse Kone burst Pro
Keyboard SteelSeries Apex 7
Software Windows 11 +startisallback
Agreed, lock this thread. Most of the people commenting are going in circles and not reading through it, and thus repeating the same crap.
 
Joined
Jan 11, 2013
Messages
1,237 (0.28/day)
Location
California, unfortunately.
System Name Sierra
Processor Core i5-11600K
Motherboard Asus Prime B560M-A AC
Cooling CM 212 Black RGB Edition
Memory 64GB (2x 32GB) DDR4-3600
Video Card(s) MSI GeForce RTX 3080 10GB
Storage 4TB Samsung 990 Pro with Heatsink NVMe SSD
Display(s) 2x Dell S2721QS 4K 60Hz
Case Asus Prime AP201
Power Supply Thermaltake GF1 850W
Software Windows 11 Pro
Joined
Nov 4, 2005
Messages
12,037 (1.72/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs, 24TB Enterprise drives
Display(s) 55" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
A lot of crybabies in this thread who may or may not have the hardware. Also, someone calling something a personal attack when it's clearly not.

On topic: it looks like the cooler migrates some of the fluid out of circulation when moved around. Strange that W1zz didn't see the high temps with horizontal mounting (based on the case), but maybe he is using a riser cable and a vertical position.

The silly guy in the video should have turned it horizontal with the fans up; gravity would have pushed the liquid onto the area where the dies are, and temps would be lower again, I bet.
 
Last edited:

Vlooi

New Member
Joined
Nov 21, 2022
Messages
23 (0.03/day)
I observed a very interesting thing that happened yesterday out of the blue, and re-tested it again this morning.

A few notes first:

1. Ambient temps about the same.
2. Settings in Adrenalin the same.
3. Same games (Cyberpunk, HZD, Witcher 3 Next Gen). Ray tracing on for Cyberpunk and Witcher.

My 7900 XTX maxed at a 100-degree hotspot after more than an hour of gameplay, core at 67, fan RPM at 2900. HDR on in Windows 11, as always.

At some stage I switched HDR off in Windows and immediately went back into the games to test for more than an hour again. My hotspot did not go above 86 degrees Celsius, core still at 67 degrees max, fans now averaging 1920 RPM. Same scenes, etc.

This morning I re-activated HDR, went back into the same games, same scenes, same settings in-game and in Adrenalin, and the hotspot maxed at 86 degrees, core still at 67. Same when I de-activated HDR.

Additional info:

1. GPU fans average 1920 RPM.
2. Core speed averages above 2700 MHz.
3. GPU utilization 100%.
4. GPU board power maxed at 373 W during gameplay.
5. VRAM at 2535.

Could this be related to the DisplayPort, maybe the cable, maybe a combination? It's just too coincidental. Why would my hotspot drop an average of 14 degrees Celsius because I flipped a switch that relates to the display and the cable? Did some component "unload" during the on-and-off switching of HDR? The card's temps are now rock stable.

Could there be some odd thing with the new DisplayPort 2.1?

Thought I would just throw some thoughts and findings around on the forum, because this to me is very interesting. Is there a problematic BIOS setting in some cards related to this, influencing the algorithm?

Regards
 
Joined
Sep 1, 2020
Messages
2,448 (1.54/day)
Location
Bulgaria
Vlooi said:
(HDR on/off hotspot observations, quoted in full above)
Still looks like a driver issue. Apparently the driver initially fails to detect the load correctly. Toggling that switch apparently reloads the driver, or part of it, and it finally "bites" and works properly.
 
Joined
Nov 11, 2016
Messages
3,507 (1.18/day)
System Name The de-ploughminator Mk-III
Processor 9800X3D
Motherboard Gigabyte X870E Aorus Master
Cooling DeepCool AK620
Memory 2x32GB G.SKill 6400MT Cas32
Video Card(s) Asus RTX4090 TUF
Storage 4TB Samsung 990 Pro
Display(s) 48" LG OLED C4
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razor Death Adder v3
Keyboard Razor Huntsman V3 Pro TKL
Software win11
Well it's winter in the northern hemisphere, AMD would need to solve the issue before summer, or just stop making MBA cards and let AIBs charge whatever they want.
 

Vlooi

New Member
Joined
Nov 21, 2022
Messages
23 (0.03/day)
Well it's winter in the northern hemisphere, AMD would need to solve the issue before summer, or just stop making MBA cards and let AIBs charge whatever they want.
For interest's sake, I live in a very hot part of the world: north-eastern South Africa, right next to the Kruger Park, with no air-conditioning in the house. So, with the temps I am getting now on the reference cooler, you guys in the North should be A-OK in summer. (Ambient in summer varies between 29 and 35 degrees Celsius in the house; outside it reaches up to 45 degrees Celsius with high humidity.) I am actually quite surprised at how well the cooler works under these circumstances.
 
Joined
Nov 30, 2018
Messages
42 (0.02/day)
Processor Ryzen 7 9800x3D
Motherboard ASRock B650e PG Riptide Wifi
Cooling Asus Strix LC II 360
Memory 64GB(2x32) 6000c30 DDR5 Buildzoid Timings
Video Card(s) RTX 4090
Storage 1x850 Evo 250GB SSD, 2x1TB HDD 1x4TB HDD, 1x Inland Premium 1TB SSD, 1x Inland Performance Plus 2TB
Display(s) 1x Acer XV275K P3 2x LG 27GN950
Case Lian-Li 216
Power Supply 1000w ASUS TUF
Mouse Logitech G502 Lightspeed
VR HMD Meta Quest 3
Vlooi said:
(HDR on/off hotspot observations, quoted in full above)

While there definitely appear to be hardware issues on at least some units, there are definitely driver issues as well; games will just randomly not perform as well out of the blue and sometimes require a restart. Beyond the multi-monitor idle power draw, folks like LTT have noticed performance differences depending simply on which monitor is used... here's hoping we quickly get some driver improvements this month.
 
Joined
Jun 14, 2020
Messages
3,647 (2.19/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
The RTX 4090 model also encountered this burning power input problem. I don't know if they fixed it.
There was no burning problem. Users didn't plug the cable in properly. It's user error, nothing to do with Nvidia.
 
Joined
Feb 23, 2019
Messages
6,113 (2.85/day)
Location
Poland
Processor Ryzen 7 5800X3D
Motherboard Gigabyte X570 Aorus Elite
Cooling Thermalright Phantom Spirit 120 SE
Memory 2x16 GB Crucial Ballistix 3600 CL16 Rev E @ 3600 CL14
Video Card(s) RTX3080 Ti FE
Storage SX8200 Pro 1 TB, Plextor M6Pro 256 GB, WD Blue 2TB
Display(s) LG 34GN850P-B
Case SilverStone Primera PM01 RGB
Audio Device(s) SoundBlaster G6 | Fidelio X2 | Sennheiser 6XX
Power Supply SeaSonic Focus Plus Gold 750W
Mouse Endgame Gear XM1R
Keyboard Wooting Two HE
I never had any issues with my reference 6900XT, but I'm definitely glad I went with a Red Devil for my 7900XTX.
Have you looked into RMA options from Powercolor? Because in Poland they don't do any RMA services for end users, they tell them to go back to the store.
 
Last edited:
Joined
May 17, 2021
Messages
3,122 (2.35/day)
Processor Ryzen 5 5700x
Motherboard B550 Elite
Cooling Thermalright Perless Assassin 120 SE
Memory 32GB Fury Beast DDR4 3200Mhz
Video Card(s) Gigabyte 3060 ti gaming oc pro
Storage Samsung 970 Evo 1TB, WD SN850x 1TB, plus some random HDDs
Display(s) LG 27gp850 1440p 165Hz 27''
Case Lian Li Lancool II performance
Power Supply MSI 750w
Mouse G502
Have you looked into RMA options from Powercolor? Because in Poland they don't do any RMA services for end users, they tell them to go back to the store.

Don't ever RMA stuff under warranty with the manufacturer in Europe; you will lose all your statutory protection. They will no longer have to answer you within 30 days, and they can do whatever they want; that protection only applies to retailers. Trust me, I fell for this once. That's for the AmeriNoRights people.
 

wolf

Better Than Native
Joined
May 7, 2007
Messages
8,267 (1.28/day)
System Name MightyX
Processor Ryzen 9800X3D
Motherboard Gigabyte X650I AX
Cooling Scythe Fuma 2
Memory 32GB DDR5 6000 CL30
Video Card(s) Asus TUF RTX3080 Deshrouded
Storage WD Black SN850X 2TB
Display(s) LG 42C2 4K OLED
Case Coolermaster NR200P
Audio Device(s) LG SN5Y / Focal Clear
Power Supply Corsair SF750 Platinum
Mouse Corsair Dark Core RBG Pro SE
Keyboard Glorious GMMK Compact w/pudding
VR HMD Meta Quest 3
Software case populated with Artic P12's
Benchmark Scores 4k120 OLED Gsync bliss
I vote keep the thread going, for the most part it's an interesting discussion that can evolve as the situation does.

I do believe in coming days more reputable testing will be done and shed more light on it, and technical discussion should always be welcome on the forum. See a shitpost? Report it.
 
Joined
Aug 15, 2015
Messages
33 (0.01/day)
Location
Norway
System Name ExcaliBuR
Processor Ryzen 7 7800X3D
Motherboard Asus ROG STRIX X670E-I Gaming
Cooling Lian Li GA II LCD 280MM
Memory G.Skill Trident Z5 Neo 32GB x 2 6000C30
Video Card(s) Sapphire Radeon 7900 XTX Pulse!
Storage 2x4TB Samsung 990 Pro
Display(s) Samsung G8 OLED 34"
Case Ssupd Meshliciuos
Audio Device(s) Edifier SPIRIT STAX S3
Power Supply Corsair SF750
Mouse SteelSeries Aerox 3 Wireless
Keyboard Steelseries Apex Pro TKL Wireless
VR HMD HP Reverb G2
Software W11
Just tried the HDR on/off method @Vlooi described; no luck.

Tried with ONLY HDMI connected; no luck.

What helped was tilting the case before the hotspot gets heated.
 
Joined
Jan 8, 2017
Messages
9,525 (3.26/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
My 3090RTX is colder when consuming 470W.

I think you might need one or two courses in physics; temperature != hot or cold. Your almost-500 W RTX 3090 is in no way, shape, or form "cold", buddy.

You got an MBA 7900XT? Cooler looks a bit different, one screw hole less around GPU.
Yeah, the shape of the vapor chamber looks different too, so it's probably not impacted (though, if it is, maybe not enough folks have bought one to notice?).

I said this in another thread, but if the vapor chamber really were the problem, then the entire GPU would run hotter; it wouldn't affect just the hotspot temperature. There wouldn't be a 30 °C+ gap between core and hotspot, which is the actual problem here.
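To put numbers on that gap argument, here's a minimal sketch using temperatures reported in this thread; the ~20 °C "typical" gap for a healthy cooler is an assumption for illustration, not a spec:

```python
# Core vs. hotspot readings reported in this thread (worst cases)
core_temp = 67        # °C, GPU core/edge sensor under load
hotspot_temp = 110    # °C, affected reference cards under load

gap = hotspot_temp - core_temp
typical_gap = 20      # °C, rough ballpark for a healthy cooler (assumption)

print(f"core-to-hotspot gap: {gap} °C")                 # core-to-hotspot gap: 43 °C
print("abnormal" if gap > typical_gap + 10 else "OK")   # abnormal
```

A uniformly degraded cooler would raise both readings together; it's the outsized delta that points at a localized contact or wicking problem.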
 
Joined
Nov 30, 2018
Messages
42 (0.02/day)
Processor Ryzen 7 9800x3D
Motherboard ASRock B650e PG Riptide Wifi
Cooling Asus Strix LC II 360
Memory 64GB(2x32) 6000c30 DDR5 Buildzoid Timings
Video Card(s) RTX 4090
Storage 1x850 Evo 250GB SSD, 2x1TB HDD 1x4TB HDD, 1x Inland Premium 1TB SSD, 1x Inland Performance Plus 2TB
Display(s) 1x Acer XV275K P3 2x LG 27GN950
Case Lian-Li 216
Power Supply 1000w ASUS TUF
Mouse Logitech G502 Lightspeed
VR HMD Meta Quest 3
Have you looked into RMA options from Powercolor? Because in Poland they don't do any RMA services for end users, they tell them to go back to the store.

Ok? I got the protection plan for my card at Microcenter, so it won't be in my system any longer than 2 years regardless.
 

Vlooi

New Member
Joined
Nov 21, 2022
Messages
23 (0.03/day)
I think you might need one or two courses in physics; temperature != hot or cold. Your almost-500 W RTX 3090 is in no way, shape, or form "cold", buddy.

I said this in another thread, but if the vapor chamber really were the problem, then the entire GPU would run hotter; it wouldn't affect just the hotspot temperature. There wouldn't be a 30 °C+ gap between core and hotspot, which is the actual problem here.
Well stated. One tends to lose perspective sometimes and not look at things more broadly. Sounds very logical.

Just tried the HDR on/off method @Vlooi described; no luck.

Tried with ONLY HDMI connected; no luck.

What helped was tilting the case before the hotspot gets heated.
Falck,

I have an aftermarket DP cable; I don't use the one that came with the monitor. Have you tried another cable? It may be a simple suggestion, but who knows? I do take note that tilting the case decreases your temps, though.
 
Joined
Sep 17, 2014
Messages
22,795 (6.06/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
Does anyone work for free for public benefit by his own choice?
Many do.

to be fair, that was a new product where the company behind the standard is also at fault; this is a vapor chamber, this is old tech, this should not have happened.
There isn't a difference here.

Nvidia used the old principle of dividing power across strands of copper/metal to direct power to the GPU, and it doesn't work properly. They deployed an adapter (simple stuff) to cover the transition from old to new. The standard isn't at fault; the adapter Nvidia built is.

AMD is using a new technology too: they're stacking chips beside the GPU. They've applied the old principle of a vapor chamber (simple stuff) over it, and it doesn't work properly.

In both cases it's the application of existing technologies and techniques, and the quality of implementation is shite in both camps. Both approaches scream cost reduction over common sense and QC; and if you compare the 12VHPWR spec to the PCIe spec, that also screams cost reduction over foolproofing and safety. Tolerances are lower, but we said "this is fine".

And now they both burn for it. Well played; I hope the reality check holds for longer than this gen.
 
Last edited:
Joined
Jan 14, 2019
Messages
12,965 (5.94/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE Plasma
I don't think any facts should be ignored. Companies react when there are irregularities not of their own will, but by force of circumstance, and there are probably laws they have to comply with.
They could just issue a statement saying that "it's within spec", as turbo clocks are always an "up to" value. Let's see if that happens.
 
Joined
Aug 15, 2015
Messages
33 (0.01/day)
Location
Norway
System Name ExcaliBuR
Processor Ryzen 7 7800X3D
Motherboard Asus ROG STRIX X670E-I Gaming
Cooling Lian Li GA II LCD 280MM
Memory G.Skill Trident Z5 Neo 32GB x 2 6000C30
Video Card(s) Sapphire Radeon 7900 XTX Pulse!
Storage 2x4TB Samsung 990 Pro
Display(s) Samsung G8 OLED 34"
Case Ssupd Meshliciuos
Audio Device(s) Edifier SPIRIT STAX S3
Power Supply Corsair SF750
Mouse SteelSeries Aerox 3 Wireless
Keyboard Steelseries Apex Pro TKL Wireless
VR HMD HP Reverb G2
Software W11
All I know is, if AMD or its suppliers don't take responsibility, I'm most likely going to return the card. I have a 60-day return window, so I have until Feb. 15 to make up my mind.
 
Joined
Jan 14, 2019
Messages
12,965 (5.94/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE Plasma
I don't see anything new. I have tried four 6800 XTs from different manufacturers and two 6900 XTs; all of them were trash. The hotspots were always reaching 100 °C on them, case side panel off or on, no change.

It's winter here, with 20 °C+ in the room; I don't know the exact number. So during summer, when 35 °C is nothing unusual here, I would be reaching 110 °C+ on all of them and they would throttle.

AMD GPUs are trash. My RTX 3090 is colder while consuming 470 W. Pathetic. This has been a continuous problem since the 5700 XT.
Are AMD GPUs trash because they give you actual hotspot temps instead of estimates (yes, up until Ampere, hotspot is just a software estimate on GeForce cards), or is it because they don't run your games?

Guess what, my reference 6750 XT runs with a 105-107 °C hotspot, but achieves 99.7% in the 3DMark Time Spy stress test every single time.

Guys, stop judging a graphics card by its temperature alone! It's not 2008 anymore.

All I know is, if AMD or its suppliers don't take responsibility, I'm most likely going to return the card. I have a 60-day return window, so I have until Feb. 15 to make up my mind.
Have you tried a 3DMark stress test yet?
 
Joined
May 12, 2017
Messages
2,207 (0.79/day)
I said this in another thread, but if the vapor chamber really were the problem, then the entire GPU would run hotter; it wouldn't affect just the hotspot temperature. There wouldn't be a 30 °C+ gap between core and hotspot, which is the actual problem here.

That's a very valid point. It could be a component related to the thermal read-out. Let me watch that video again.
 
Joined
Jan 8, 2017
Messages
9,525 (3.26/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
Just because something is consuming more power doesn't mean it is automatically warmer; you know there are more things to it, aka the die area, cooling solution, etc.

Except that in this case they are. ICs convert over 99% of power into heat, and the die sizes are comparable, which means that yes, they're just as warm. If something has the same power output over the same area, it has the same thermal density, irrespective of what a temperature sensor reports. And might I add, Nvidia and AMD are at complete liberty to choose what exactly their drivers report and how it is calculated, since there is no correct or incorrect way to calculate temperature in an IC; so 100 °C on an Nvidia card might, and most likely does, mean something completely different from what 100 °C means on an AMD card. Long story short, that number means absolutely nothing with regard to how cold a GPU really is. Of course, someone with a non-empty head would be well aware of that.

Stop being in denial about 8th-grade physics.
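For what it's worth, the same-power-same-area point reduces to a single division; a toy sketch with hypothetical numbers (the 450 W and 520 mm² figures are made up for illustration, not real die specs):

```python
def power_density(watts: float, area_mm2: float) -> float:
    """Average heat flux in W/mm^2 (an IC turns essentially all input power into heat)."""
    return watts / area_mm2

# Two hypothetical dies with the same power draw and comparable area
card_a = power_density(450, 520)
card_b = power_density(450, 520)

# Same density: the silicon is equally "hot" in the physical sense,
# whatever each vendor's driver chooses to report as a temperature.
assert card_a == card_b
print(f"{card_a:.2f} W/mm^2 for both")   # 0.87 W/mm^2 for both
```

The reported temperature depends on where sensors sit and how the driver aggregates them, but the heat flux through the silicon is fixed by power and area alone.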
 
Last edited:
Joined
Jan 14, 2019
Messages
12,965 (5.94/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE Plasma
Vlooi said:
(HDR on/off hotspot observations, quoted in full above)
Very interesting!

Could the hotspot be measured at the GPU's display engine when it runs hot, instead of at the parts responsible for 3D?

Vya Domus said:
(same power and die area means the same thermal density, whatever each vendor's sensor reports; quoted in full above)
Not to mention that how hot a chip is designed to run isn't related to its usefulness. Try running a Pentium 3 at the temperatures modern CPUs run at and see what happens!
 
Joined
Nov 6, 2019
Messages
38 (0.02/day)
AusWolf said:
(AMD cards report measured hotspot temps, not estimates; judge a card by its performance, not its temperature alone; quoted in full above)
I will tell you something shocking: the hotspot is not actually the hottest part of the chip, because it too is an estimate. Sensors are placed according to simulations, and the die is not fully covered by temperature sensors, so if, for example, someone manages to leave part of the die without paste or not touching the cooler, it can simply be damaged by high temps, depending on the sensor grid and cooling solution.

Secondly, the safe operating temperatures of GPUs are specified with hotspots in mind, and local temps up to 120 °C are safe depending on the process, architecture, area, PCB, material, and length of exposure. It was AMD's decision to overclock these cards out of the box, so much so that they risk damaging the die if exposure to such temps is prolonged; that's why they actually measure the hotspot. It's not because they like you, or because they're technically superior.
 