
RX 7900 XTX reference card: possible vapor chamber design problem + very high hot spot (110 °C)

Joined
Jan 14, 2019
Messages
12,965 (5.94/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE Plasma
I will tell you something shocking: the hotspot sensor is not actually at the hottest part of the chip, because its placement is also an estimate based on simulations. The die is not fully covered by temperature sensors, so if, for example, part of the die ends up without paste or not touching the cooler, it can simply be damaged by high temperatures, depending on the sensor grid and the cooling solution.
That's speculation. Fact is, having X number of sensors that cover some area on the GPU die is still better than nothing.

Secondly, the safe operating temperatures of GPUs are specified with hotspots in mind, and local temperatures up to 120 °C can be safe depending on the process, architecture, die area, PCB, materials and how long the exposure lasts. It was AMD's decision to OC them out of the box so far that they risk damaging the die if exposure to such temperatures is prolonged; that's why they actually measure the hotspot, not because they like you or are technically superior.
If it's safe (as you said yourself), then how is AMD risking damaging the card? What is prolonged use? An hour? Maybe two? Six hours? Do you think AMD didn't factor this in when they gave guidelines on what a safe temperature means?

We have hotspot temperature sensors so that the driver has a better understanding of how hot the GPU die actually is so it can give us more performance without a risk of damage. I never said AMD liked me or were superior in any way.

Edit: I'm not just talking about AMD vs Nvidia. I'm talking about any GPU that has hotspot temperature sensors on it.
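For illustration only, here's a rough sketch of why a real hotspot reading lets a driver run closer to the limit than an average-only reading would; every number below is made up, this isn't any vendor's actual algorithm:

```python
# Purely illustrative: made-up readings, not any vendor's real algorithm.
T_LIMIT = 110.0  # assumed junction limit in °C

sensors = [72.0, 78.0, 95.0, 104.0, 81.0]  # hypothetical per-region die temperatures

avg_temp = sum(sensors) / len(sensors)  # roughly what an "edge"/average sensor shows
hotspot = max(sensors)                  # roughly what a hotspot sensor shows

# With only an average, the driver has to assume a worst-case spread (say 35 °C)
# and back off early; with the real hotspot it only backs off when it has to.
ASSUMED_SPREAD = 35.0
headroom_avg_only = T_LIMIT - (avg_temp + ASSUMED_SPREAD)
headroom_hotspot = T_LIMIT - hotspot

print(f"avg {avg_temp:.1f} °C, hotspot {hotspot:.1f} °C")
print(f"headroom assuming worst case: {headroom_avg_only:.1f} °C")
print(f"headroom with real hotspot:   {headroom_hotspot:.1f} °C")
```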
 
Joined
Nov 6, 2019
Messages
38 (0.02/day)
Except that in this case they are. ICs convert over 99% of their power into heat and the die sizes are comparable, which means that yes, they're just as warm. If something has the same power output, the same area and the same heat, it will also have the same thermal density, irrespective of what a temperature sensor reports; and, might I add, Nvidia and AMD are at complete liberty to choose what exactly their driver reports and how it is calculated, since there is no single correct way to calculate a temperature for an IC. Long story short, that number means absolutely nothing with regard to how cold a GPU really is. Of course, anyone with a non-empty head would be well aware of that.
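For scale, a back-of-the-envelope power-density comparison (the power and die-size figures below are rough approximations, not measured values):

```python
# Rough, approximate figures for illustration only.
cards = {
    "RTX 3090 (GA102)":     {"power_w": 350, "die_mm2": 628},
    "RX 6900 XT (Navi 21)": {"power_w": 300, "die_mm2": 520},
}

for name, c in cards.items():
    density = c["power_w"] / c["die_mm2"]  # W/mm² dissipated across the die
    print(f"{name}: ~{density:.2f} W/mm²")
```

Both land in roughly the same ballpark, which is the point: similar power over a similar area means a similar heat flux, whatever the reported number says.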

Stop being in denial about 8th grade physics.

My RTX 3090 is 15 °C colder on the core when CONSUMING THE SAME AMOUNT OF POWER as a 6900 XT. And yes, I am talking about the same amount of power consumed by the GPU die, minus the memory.

And of course the numbers mean completely nothing; that's why we have standards, and why the readings are used to determine the voltage and frequency of the GPU. Yes, they are totally arbitrary, and AMD just decided to show 110 °C for fun instead of a lower number that would be more convenient for marketing purposes.

Secondly, die size is only one of the variables. Two dies of the same area in mm² can have different temperatures; it is also about thickness and the ability to dissipate the heat effectively.
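As a simplified, one-dimensional picture of why thickness matters (real dies are far from 1-D, so treat this only as a sketch):

```latex
\Delta T \;=\; \frac{\dot{Q}\, d}{k\, A}
```

For the same heat flow Q̇ and area A, a thicker conduction path d (or a lower thermal conductivity k) means a larger temperature rise between the transistors and the cooler.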

Also, they can totally do that, legally speaking, without being hit by lawsuits or by certification bodies.
 
Last edited:
Joined
Jan 14, 2019
Messages
12,965 (5.94/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE Plasma
My RTX 3090 is 15 °C colder on the core when CONSUMING THE SAME AMOUNT OF POWER as a 6900 XT. And yes, I am talking about the same amount of power consumed by the GPU die, minus the memory.

And of course the numbers mean completely nothing; that's why we have standards, and why the readings are used to determine the voltage and frequency of the GPU. Yes, they are totally arbitrary, and AMD just decided to show 110 °C for fun instead of a lower number that would be more convenient for marketing purposes.
"The numbers mean completely nothing" - you said it yourself, so what are you arguing about, then? If your AMD GPU was limited to 60 °C hotspot instead of 110, it would mean absolutely nothing.

And no, we do not have standards - AMD and Nvidia can set whatever temperature they consider safe, and the 110 °C limit isn't "just for fun" or "marketing purposes".
 
Last edited by a moderator:
Joined
Jan 8, 2017
Messages
9,525 (3.26/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
And of course the numbers mean completely nothing; that's why we have standards, and why the readings are used to determine the voltage and frequency of the GPU.

Is the reading used in the same way to determine voltage and frequency in an AMD GPU as it is in an Nvidia GPU? Surely even someone who knows as little as you do can realize that this is probably not the case. It varies from architecture to architecture, maybe even from GPU to GPU within the same architecture, so why would you ever believe those values are meant to represent the exact same thing? You are aware that chips have dozens, maybe hundreds of different sensors? Of which these vendors expose one, maybe two, and those aren't even raw values; they're probably some kind of aggregate measurement that neither of us knows how it is calculated. Or maybe you do; what the hell, enlighten us if you know something we don't.
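Purely as an illustration of how one exposed figure can be an aggregate of many internal sensors (the readings and the two aggregation choices are made up, not how either vendor actually computes anything):

```python
# Hypothetical per-block junction readings from many on-die sensors (°C).
readings = [68, 71, 74, 79, 83, 88, 92, 97, 101, 106]

edge_temp = sum(readings) / len(readings)  # one plausible "GPU temperature": an average
hotspot = max(readings)                    # one plausible "junction/hotspot": the maximum

print(f"reported GPU temp ~{edge_temp:.0f} °C, reported hotspot {hotspot} °C")
# Two vendors could aggregate the same raw data differently and both be internally consistent.
```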
 
Last edited by a moderator:
Joined
Nov 6, 2019
Messages
38 (0.02/day)
That's speculation. Fact is, having X number of sensors that cover some area on the GPU die is still better than nothing.


If it's safe (as you said yourself), then how is AMD risking damaging the card? What is prolonged use? An hour? Maybe two? Six hours? Do you think AMD didn't factor this in when they gave guidelines on what a safe temperature means?

We have hotspot temperature sensors so that the driver has a better understanding of how hot the GPU die actually is so it can give us more performance without a risk of damage. I never said AMD liked me or were superior in any way.

Edit: I'm not just talking about AMD vs Nvidia. I'm talking about any GPU that has hotspot temperature sensors on it.
Maybe try to read what I actually wrote.

I called AMD GPUs trash because every one of them I tried would thermal throttle during summer here.

And then you replied with "but muh AMD added the sensors, muh Nvidia only has a SW estimate".

And then I literally explained to you why AMD uses the sensors: because they OC'ed the GPUs out of the box to be competitive with Nvidia, and because of the significant die variability, the poor cooling solutions they and their partners provide, and the die making very poor contact with the cooler on many models. That issue was covered by plenty of tech YouTubers and journalists; there are even 40 °C differences in hotspot temperatures. Not to mention that, depending on the game, the thermal throttling causes issues such as stuttering, as my friend is experiencing with his 6950 XT, but it's all OK because everything is within spec, as they told him after the denied RMA. "It is supposed to work that way."
 
Joined
May 12, 2017
Messages
2,207 (0.79/day)
I think I have a solution to the problem.

If anyone has contact with the OP of the video, ask him to LM the core and the six chiplets. Voiding the warranty on one of his cards should not be an issue for him. I think the issue is an uneven surface somewhere, either on the cold plate or the die itself.
 
Last edited:
Joined
Aug 15, 2015
Messages
33 (0.01/day)
Location
Norway
System Name ExcaliBuR
Processor Ryzen 7 7800X3D
Motherboard Asus ROG STRIX X670E-I Gaming
Cooling Lian Li GA II LCD 280MM
Memory G.Skill Trident Z5 Neo 32GB x 2 6000C30
Video Card(s) Sapphire Radeon 7900 XTX Pulse!
Storage 2x4TB Samsung 990 Pro
Display(s) Samsung G8 OLED 34"
Case Ssupd Meshliciuos
Audio Device(s) Edifier SPIRIT STAX S3
Power Supply Corsair SF750
Mouse SteelSeries Aerox 3 Wireless
Keyboard Steelseries Apex Pro TKL Wireless
VR HMD HP Reverb G2
Software W11
I think you might need one or two courses in physics; temperature ≠ hot or cold. Your almost 500 W RTX 3090 is in no way, shape or form "cold", buddy.
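As a rough steady-state relation (idealized; R_th lumps the whole cooler and mount into one number):

```latex
T_{\text{die}} \;\approx\; T_{\text{ambient}} + P \cdot R_{\text{th}}
```

At the same power P, a bigger or better-mounted cooler (lower R_th) reports a lower die temperature, but the heat dumped into the case is still roughly the same P watts.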




I said this in another thread, but if the vapor chamber really were the problem, then the entire GPU would run hotter; it wouldn't affect just the hot spot temperature. Meaning there wouldn't be a 30 °C+ gap between those temperatures, which is the actual problem here.
If it's not a VC issue, why does tilting the card vertical help?
 
Joined
Jan 8, 2017
Messages
9,525 (3.26/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
"The numbers mean completely nothing" - you said it yourself

He's being ironic; he genuinely believes they all mean exactly the same thing. Mind-boggling stuff.

If it's not a VC issue, why does tilting the card vertical help?

No idea. Didn't der8auer say that once he put it horizontal and then back vertical, the temps did not go back down? That makes no sense to me; does it to you?
 
Joined
Aug 15, 2015
Messages
33 (0.01/day)
Location
Norway
System Name ExcaliBuR
Processor Ryzen 7 7800X3D
Motherboard Asus ROG STRIX X670E-I Gaming
Cooling Lian Li GA II LCD 280MM
Memory G.Skill Trident Z5 Neo 32GB x 2 6000C30
Video Card(s) Sapphire Radeon 7900 XTX Pulse!
Storage 2x4TB Samsung 990 Pro
Display(s) Samsung G8 OLED 34"
Case Ssupd Meshliciuos
Audio Device(s) Edifier SPIRIT STAX S3
Power Supply Corsair SF750
Mouse SteelSeries Aerox 3 Wireless
Keyboard Steelseries Apex Pro TKL Wireless
VR HMD HP Reverb G2
Software W11
He's being ironic; he genuinely believes they all mean exactly the same thing. Mind-boggling stuff.



No idea. Didn't der8auer say that once he put it horizontal and then back vertical, the temps did not go back down? That makes no sense to me; does it to you?
Well, he hypothesizes that the design of the chamber is wrong, so the vapour gets trapped and doesn't find its way back to the plate correctly when horizontal. And when the temperature is already so high, there isn't enough cooling to condense the gas back into a liquid so it can trickle back?
Because the air intake is different.
Is it that much different from the 6800 XT? I had that earlier in the same case with no issues, also MBA.
 
Joined
Jan 8, 2017
Messages
9,525 (3.26/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
Well, he hypothesizes that the design of the chamber is wrong, so the vapour gets trapped and doesn't find its way back to the plate correctly when horizontal. And when the temperature is already so high, there isn't enough cooling to condense the gas back into a liquid so it can trickle back?

It's not about the temperature but about the heat, which should be the same whether the card is horizontal or vertical. I've never heard of a vapor chamber that works better in a certain position; the vapor condenses in whichever part of the heatsink is cool enough for that to happen, no matter what orientation the cooler is in.

Maybe there are some kind of channels that prevent the liquid from flowing back properly once the card is mounted horizontally, but like I said, if that were the case, why isn't the entire GPU heating up?
 
Joined
Jan 14, 2019
Messages
12,965 (5.94/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE Plasma
Maybe try to read what I actually wrote.

I called AMD GPUs trash because every one of them I tried would thermal throttle during summer here.

And then you replied with "but muh AMD added the sensors, muh Nvidia only has a SW estimate".

And then I literally explained to you why AMD uses the sensors: because they OC'ed the GPUs out of the box to be competitive with Nvidia, and because of the significant die variability, the poor cooling solutions they and their partners provide, and the die making very poor contact with the cooler on many models. That issue was covered by plenty of tech YouTubers and journalists; there are even 40 °C differences in hotspot temperatures. Not to mention that, depending on the game, the thermal throttling causes issues such as stuttering, as my friend is experiencing with his 6950 XT, but it's all OK because everything is within spec, as they told him after the denied RMA. "It is supposed to work that way."
Right... define "thermal throttling" on an AMD GPU, please. Does your GPU clock drop below the advertised base clock? Have you tried running any stress test on it (such as 3DMark Time Spy, or something that gives you reliable results)? Are you 100% sure that your friend's stuttering is caused by the GPU's temperature and not something else? What you're doing here is looking at ONE attribute of the GPU and associating every single problem that your (or your friend's) system exhibits with that attribute. I can't tell you how short-sighted and narrow-minded that approach is.
 
Joined
Aug 15, 2015
Messages
33 (0.01/day)
Location
Norway
System Name ExcaliBuR
Processor Ryzen 7 7800X3D
Motherboard Asus ROG STRIX X670E-I Gaming
Cooling Lian Li GA II LCD 280MM
Memory G.Skill Trident Z5 Neo 32GB x 2 6000C30
Video Card(s) Sapphire Radeon 7900 XTX Pulse!
Storage 2x4TB Samsung 990 Pro
Display(s) Samsung G8 OLED 34"
Case Ssupd Meshliciuos
Audio Device(s) Edifier SPIRIT STAX S3
Power Supply Corsair SF750
Mouse SteelSeries Aerox 3 Wireless
Keyboard Steelseries Apex Pro TKL Wireless
VR HMD HP Reverb G2
Software W11
@Vya Domus, that's a valid question. I'm thinking the other temperature is an average across the whole substrate/all dies? Whilst junction/hotspot is the worst reading, and maybe not just one fixed spot? It depends on which transistors are working the hardest. E.g. when I run Black Mesa the average temp is 55 °C but the hotspot is at 110; in Cyberpunk the average is 65-70 °C while the hotspot is 110.
 
Joined
May 12, 2017
Messages
2,207 (0.79/day)
Well, he hypothesizes that the design of the chamber is wrong, so the vapour gets trapped and doesn't find its way back to the plate correctly when horizontal. And when the temperature is already so high, there isn't enough cooling to condense the gas back into a liquid so it can trickle back?

Is it that much different from the 6800 XT? I had that earlier in the same case with no issues, also MBA.

As someone has already pointed out, if it's a vapour chamber problem then the GPU temperature itself will also shoot up, but this is not happening in most cases. It looks to me like this is a flatness issue with the cold plate or the die itself. Who knows, maybe those chiplets are not sitting flush.

No amount of mounting pressure is going to fix that. Get the OP to void the warranty on one of his cards and LM it. I'm fairly confident you will see a change.
 
Joined
Jan 8, 2017
Messages
9,525 (3.26/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
I'm thinking the other temperature is an average across the whole substrate/all dies?

But then they should all heat up more, right?
 
Joined
Nov 6, 2019
Messages
38 (0.02/day)
He's being ironic; he genuinely believes they all mean exactly the same thing. Mind-boggling stuff.



No idea. Didn't der8auer say that once he put it horizontal and then back vertical, the temps did not go back down? That makes no sense to me; does it to you?
Every manufacturer provides you with technical documentation and specs, and any adjustments made to the reported data (offsets, etc.) have to be documented: not only is that legally required in the majority of regions, such as the EU, it's also a long-standing industry safety standard.

I never said that Nvidia and AMD use standardised methods of reporting; they use different architectures and different components, and their names for the respective parts of the GPU and system will vary. I said that the values they do report from the sensors on the cards must be backed by reality, within the measurement error of the respective components and die, part variability and specs.

They cannot make the numbers up out of their ass, because the stated specs are tested by government and certification authorities, and if the specs were false, the products would not get certified and could not be sold. Secondly, the data is used for validation and servicing purposes. And legally speaking, they cannot report false numbers either.

As I said, you have no clue. At this point I seriously doubt you have used a multimeter even once in your life. It shows arbitrary values too, right?
 
Last edited by a moderator:
Joined
Jan 8, 2017
Messages
9,525 (3.26/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
I said that the values they do report from the sensors on the cards must be backed by reality within the measurement error of the respective components and die, part variability and specs.

AMD goes to some AIB and tells them: look, we expose this sensor; if value X is reached, then Y happens, and so forth. The AIB, me, you or anyone else has no clue what exactly those values mean beyond that. They're just numbers with no way to tell exactly what they represent; only AMD knows that. Nvidia does the same, and there is no relationship whatsoever between what each of these vendors does.

This means that saying a card from Nvidia is colder because whatever sensor reports a lower number is, unquestionably and undeniably, dumb and utter nonsense. I tried to sugar-coat this as much as I could, but there is only so much that I can do.
 
Last edited by a moderator:
Joined
Jan 14, 2019
Messages
12,965 (5.94/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE Plasma
Every manufacturer provides you with technical documentation and specs, and any adjustments made to the reported data (offsets, etc.) have to be documented: not only is that legally required in the majority of regions, such as the EU, it's also a long-standing industry safety standard.

I never said that Nvidia and AMD use standardised methods of reporting; they use different architectures and different components, and their names for the respective parts of the GPU and system will vary. I said that the values they do report from the sensors on the cards must be backed by reality, within the measurement error of the respective components and die, part variability and specs.

They cannot make the numbers up out of their ass, because the stated specs are tested by government and certification authorities, and if the specs were false, the products would not get certified and could not be sold. Secondly, the data is used for validation and servicing purposes. And legally speaking, they cannot report false numbers either.
If that's the case, then what are we arguing about again?

One minute you're saying a 110 °C hotspot is bad, and the next you turn around and say that there are standards that AMD and Nvidia cannot deviate from. Please decide. It seems like you're arguing against yourself.
 
Joined
Nov 6, 2019
Messages
38 (0.02/day)
Right... define "thermal throttling" on an AMD GPU, please. Does your GPU clock drop below the advertised base clock? Have you tried running any stress test on it (such as 3DMark Time Spy, or something that gives you reliable results)? Are you 100% sure that your friend's stuttering is caused by the GPU's temperature and not something else? What you're doing here is looking at ONE attribute of the GPU and associating every single problem that your (or your friend's) system exhibits with that attribute. I can't tell you how short-sighted and narrow-minded that approach is.
His GPU literally thermal throttles after reaching 110 °C on the hotspot, and the frequency starts fluctuating aggressively, which causes stuttering in affected games, the ones that are able to utilize the GPU to the fullest and are not CPU-bound.

Easily reproducible. "Please, leave my AMD alone!"
 
Joined
Jan 8, 2017
Messages
9,525 (3.26/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
Right... define "thermal throttling" on an AMD GPU, please. Does your GPU clock drop below the advertised base clock? Have you tried running any stress test on it (such as 3DMark Time Spy, or something that gives you reliable results)? Are you 100% sure that your friend's stuttering is caused by the GPU's temperature and not something else? What you're doing here is looking at ONE attribute of the GPU and associating every single problem that your (or your friend's) system exhibits with that attribute. I can't tell you how short-sighted and narrow-minded that approach is.

To add to that, literally all modern GPUs throttle down for various reasons: voltage, temperature, power, etc.
 
Joined
Jan 14, 2019
Messages
12,965 (5.94/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE Plasma
His GPU literally thermal throttles after reaching 110 °C on the hotspot, and the frequency starts fluctuating aggressively, which causes stuttering in affected games, the ones that are able to utilize the GPU to the fullest and are not CPU-bound.

Easily reproducible. "Please, leave my AMD alone!"
You still haven't defined "thermal throttling".

Clock speed always fluctuates on an AMD GPU depending on a lot of different factors, including 3D load, the voltage-frequency curve, temperature, etc. That's not thermal throttling.
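As a toy model of that distinction (not AMD's or Nvidia's actual boost algorithm; all names and numbers below are invented):

```python
# Toy model: the driver runs the highest clock every limiter allows, so the clock
# moving around does not by itself mean thermal throttling.
def boost_clock(load_mhz, power_limit_mhz, hotspot_c, t_limit_c=110, base_mhz=1900):
    # The thermal limiter only bites near the limit; otherwise it allows full clocks.
    thermal_mhz = 3000 if hotspot_c < t_limit_c else base_mhz
    return min(load_mhz, power_limit_mhz, thermal_mhz)

print(boost_clock(load_mhz=2600, power_limit_mhz=2450, hotspot_c=95))   # 2450: power-limited
print(boost_clock(load_mhz=2300, power_limit_mhz=2450, hotspot_c=95))   # 2300: load/V-F-limited
print(boost_clock(load_mhz=2600, power_limit_mhz=2450, hotspot_c=110))  # 1900: thermally throttled
```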

The question still stands: have you tried running a 3DMark, or other stress test on it that gives you reliable data across several runs?
 
Joined
Mar 10, 2010
Messages
11,880 (2.19/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Gskill Trident Z 3900cas18 32Gb in four sticks./16Gb/16GB
Video Card(s) Asus tuf RX7900XT /Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores laptop Timespy 6506
I don't see anything new. I have tried four 6800 XTs from different manufacturers and two 6900 XTs, and all of them were trash. The hotspots were always reaching 100 °C on them, with my case side panel off or on, no change.

It's winter here and the room is at 20 °C+; I don't know the exact number. So during summer, when 35 °C is nothing unusual here, I would be reaching 110 °C+ on all of them and they would throttle.

AMD GPUs are trash. My RTX 3090 is colder when consuming 470 W. Pathetic. This has been a continuous problem since the 5700 XT.
Seems like you set a high bar with your first post.

And you're still arguing about how cool your 3090 is while also clearly being aware that the number you see is decided by Nvidia, not you. Funny.

Not really on topic though, so "you don't like AMD" is my main takeaway from your POINT, and....

GPUs run hot, shocker.

A chip designed to run up to its limit works as expected when it hits that limit, shocker.

A 3090 with a more expensive cooler fitted to it is cooled better than another card; well, you did pay for it, so again, shocker.

Now please do get back on topic or spam some other thread with shite.

I'm actually interested in what's going on with the 7900 XTX, not your old expensive piece of shit :p :D

Yeah I went there troll.
 
Joined
Jan 8, 2017
Messages
9,525 (3.26/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
The question still stands: have you tried running a 3DMark, or other stress test on it that gives you reliable data across several runs?

Obviously Nvidia GPUs do not throttle down, ever; they just run at a constant clock speed of a gazillion GHz, ice cold no less.

They haven't invented thermal throttling yet; only AMD has figured that novelty out.
 
Joined
Nov 6, 2019
Messages
38 (0.02/day)
Likewise.



AMD goes to some AIB and tells them: look, we expose this sensor; if value X is reached, then Y happens, and so forth. The AIB, me, you or anyone else has no clue what exactly those values mean beyond that. They're just numbers with no way to tell exactly what they represent; only AMD knows that. Nvidia does the same, and there is no relationship whatsoever between what each of these vendors does.

This means that saying a card from Nvidia is colder because whatever sensor reports a lower number is, unquestionably and undeniably, dumb and utter nonsense. I tried to sugar-coat this as much as I could, but there is only so much that I can do.
I will tell you something shocking.

The AIBs have pretty extensive knowledge of the chip, design and schematics because, be it AMD or Nvidia, they do share documentation with their partners. Without that documentation and knowledge, they (the AIBs) would not be able to make custom PCBs and their own designs. They also have the ability to modify the BIOS, even having access to the source code.

Secondly, Nvidia and AMD do not produce the dies physically. They draw up the designs, and the dies are produced by a third party: TSMC, Samsung, etc. So, extrapolating your little genius argument, unless TSMC and Samsung tell them, they don't know either and everything is just a little secret!
 