
Will I benefit from changing the thermal pad?

Joined
Jun 27, 2024
Messages
50 (0.19/day)
The replacement pads have exactly the same thickness as the stock ones; I got that information from MSI itself. The question, however, is whether the new pads would function more effectively or not.


You're correct, the die consumes most of the total power and so dissipates the most heat. It looks like the memory temps are not worrying, but I still want to increase the card's longevity.

But why am I worried?

Because since I purchased this card, I have very occasionally seen checkerboard-like squares on the desktop (never while gaming, though), so I thought my card might be more susceptible to VRAM failure in the near future.
Since MSI gave you the exact thickness for the pads and the card is already taken apart (assuming you haven't put it back together yet), you might as well just replace them. The worst-case scenario is that it makes no difference, but you should be able to get better-performing thermal pads than the originals.
 
Joined
Mar 7, 2023
Messages
1,025 (1.39/day)
Processor 14700KF/12100
Motherboard Gigabyte B760 Aorus Elite Ax DDR5
Cooling ARCTIC Liquid Freezer II 240 + P12 Max Fans
Memory 32GB Kingston Fury Beast @ 6000
Video Card(s) Asus Tuf 4090 24GB
Storage 4TB sn850x, 2TB sn850x, 2TB Netac Nv7000 + 2TB p5 plus, 4TB MX500 * 2 = 18TB. Plus dvd burner.
Display(s) Dell 23.5" 1440P IPS panel
Case Lian Li LANCOOL II MESH Performance Mid-Tower
Audio Device(s) Logitech Z623
Power Supply Gigabyte ud850gm pg5
Keyboard msi gk30
I wouldn't change pads unless they appear ripped or worn, personally.

But one of those packs of pads, with every thickness in large squares that you can cut to whatever size you need, is super convenient to have around.
 

FreedomEclipse

~Technological Technocrat~
Joined
Apr 20, 2007
Messages
24,582 (3.76/day)
Location
(currently) Hong Kong
System Name WorkInProgress
Processor AMD 7800X3D
Motherboard MSI X670E GAMING PLUS
Cooling Thermalright AM5 Contact Frame + Phantom Spirit 120SE
Memory 2x32GB G.Skill Trident Z5 NEO DDR5 6000 CL32
Video Card(s) Asus Dual Radeon™ RX 6700 XT OC Edition
Storage WD SN770 1TB (Boot)|1x WD SN850X 8TB (Gaming)| 2x2TB WD SN770| 2x2TB+2x4TB Crucial BX500
Display(s) LG GP850-B
Case Corsair 760T (White) {1xCorsair ML120 Pro|5xML140 Pro}
Audio Device(s) Yamaha RX-V573|Speakers: JBL Control One|Auna 300-CN|Wharfedale Diamond SW150
Power Supply Seasonic Focus GX-850 80+ GOLD
Mouse Logitech G502 X
Keyboard Duckyshine Dead LED(s) III
Software Windows 11 Home
Benchmark Scores ლ(ಠ益ಠ)ლ
If you want big results, consider a copper shim memory mod. It's going to cost more, for obvious reasons, though.


::edit::

Even did a follow-up vid


::EDIT II::

While we're at it, there are a tonne of videos of other people doing it.


 
Last edited:
Joined
Oct 2, 2020
Messages
1,097 (0.68/day)
System Name Laptop ASUS TUF F15 | Desktop 1 | Desktop 2
Processor Intel Core i7-11800H | Intel Core i5-14600K@135W | Intel Core i3-10100
Motherboard ASUS FX506HC | Gigabyte B660M DS3H DDR4 | MSI MAG B560M Bazooka
Cooling Laptop built-in cooling lol | Thermalright Assassin Spirit w/ BeQuiet Shadow Wings fan| Stock Copper
Memory 24 GB @ 3200 | 32 GB @ 3200 | 16 GB @ 3200
Video Card(s) Nvidia RTX 3050 Mobile 4GB | Nvidia GTX 1650 | Nvidia GTX 960 2 GB
Storage Adata XPG SX8200 Pro 512 GB | Samsung M2 SSD 256 GB & 1 TB 2.5" HDD @ 7200| SSD 250 GB & SSD 240 GB
Display(s) Laptop built-in 144 Hz FHD screen | Dell 27" WQHD @ 75 Hz & 49" TV FHD | Samsung 32" TV FHD
Case It's a laptop, it doesn't need case lmfao | Deepcool Mattrexx 55 MESH | Aerocool Cylon PRO
Audio Device(s) laptop built in audio | Logitech stereo speakers | Logitech 2.1 speakers
Power Supply ASUS 180W PSU | SeaSonic Focus GX-550 | MSI MAG A550BN
Mouse Logitech G604 | Corsair Harpoon wired mouse| Logitech G305
Keyboard Laptop built-in keyboard |Razer Blackwidow | Steelseries APEX 7 TKL
VR HMD Quest 2 sold out and don't need VR anymore lol
Software Windows 10 Enterprise 20H2 | Windows 11 24H2 LTSC | Windows 11 24H2 LTSC
Benchmark Scores good enough
The similarly priced and tiered GTX 1080 is now an entry-level card that can't play some games without DX11/DXVK tweaks, and even then, framerates and IQ aren't quite there. By 2030, maybe 2032, the same will happen to the 3080. And I'm pretty sure a 3080 can physically survive that long.
Well, there is a difference between the 1080 era and the 3080 era,
and to call the 1080 "entry level" is stupid. Unless you are a super-crazy 500 Hz monitor player or want it "ultra-only" with all the sh*t maxed out, the 1080 is a pretty OK card to play on. The 1080 equals a 2060; it's a pretty middling card these days, unless, read above. :rolleyes:
Maybe call a 1060 6 GB "entry level". Still, "entry level" really VARIES! For one person "entry level" is the cheapest card he could afford, for another "entry level" is the "minimum requirement" for some game, and I could also get a cheapo 4K@60 Hz monitor, get some RTX 3070 and call it "entry level", because otherwise I can't play at 4K res. :D
 
Joined
Feb 24, 2023
Messages
3,688 (4.94/day)
Location
Russian Wild West
System Name D.L.S.S. (Die Lekker Spoed Situasie)
Processor i5-12400F
Motherboard Gigabyte B760M DS3H
Cooling Laminar RM1
Memory 32 GB DDR4-3200
Video Card(s) RX 6700 XT (vandalised)
Storage Yes.
Display(s) MSi G2712
Case Matrexx 55 (slightly vandalised)
Audio Device(s) Yes.
Power Supply Thermaltake 1000 W
Mouse Don't disturb, cheese eating in progress...
Keyboard Makes some noise. Probably onto something.
VR HMD I live in real reality and don't need a virtual one.
Software Windows 11 / 10 / 8
Benchmark Scores My PC can run Crysis. Do I really need more than that?
and to call the 1080 "entry level" is stupid
Subjective? Sure. Dumb? I don't think so. 1080 is about the same level as 3050 which was low mid tier when it was released. Time is remorseless so now 3050 is to be considered entry level, i.e. only comfortable for gaming at settings BELOW high and straight up unplayable at higher resolutions such as UW1440p or 4K. Same applies to 1080 which not only doesn't reach 3060 in raw performance, it also lacks some DX12 features and can't enable DLSS (which is massive for GPUs of this segment).

4K@60 Hz monitor, get some RTX 3070 and call it "entry level"
Sure, 3070 is exactly an entry level 4K GPU. Or a reasonable 1440p GPU. Or a good 1080p GPU.
 
Joined
Jul 5, 2013
Messages
29,632 (6.94/day)
I have an MSI Suprim X RTX 3080 GPU. In order to prevent throttling in some demanding games, I wanted to lower the memory temperatures (if possible). The VRAM runs at 85 degrees Celsius.
But my question is, since this GPU is MSI's flagship (the Suprim is MSI's most expensive model), will I get a considerable benefit from changing the pads? Or are the stock pads already the best on the market? Is it worth my time and the cost?
I'm planning to buy the Thermal Grizzly Minus Pad:
View attachment 387790
Here's how the stock pads look:
View attachment 387789
The benefit would be a margin of error kind of thing. Stick with what you've got. The only real reason to replace thermal pads is if they're damaged, and yours look fine.
 
Joined
Jul 31, 2024
Messages
1,028 (4.59/day)
If you want big results, consider a copper shim memory mod. It's going to cost more, for obvious reasons, though.

No offense, but I think some YouTube repair channel claimed that some of these users ruined their hardware.

I don't know how I would determine the dimensions of those copper plates with 100% certainty.

Note: see it as a warning to be careful.
 
Joined
Jan 22, 2020
Messages
1,047 (0.56/day)
Location
Turkey
System Name MSI-MEG
Processor AMD Ryzen 9 3900X
Motherboard MSI MEG X570S ACE MAX
Cooling AMD Wraith Prism + Thermal Grizzly
Memory 32 GB
Video Card(s) MSI Suprim X RTX 3080
Storage 500 GB MSI Spatium nvme + 500 GB WD nvme + 2 TB Seagate HDD + 2 TB Seagate HDD
Display(s) 27" LG 144HZ 2K ULTRAGEAR
Case MSI MPG Velox Airflow 100P
Audio Device(s) Philips
Power Supply Seasonic 750W 80+ Gold
Mouse HP OMEN REACTOR
Keyboard Corsair K68
Software Windows10 LTSC 64 bit
No offense, but I think some YouTube repair channel claimed that some of these users ruined their hardware.

I don't know how I would determine the dimensions of those copper plates with 100% certainty.

Note: see it as a warning to be careful.
I'm not courageous enough to even think about that
 
Joined
Feb 20, 2019
Messages
8,923 (4.03/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
I haven't placed an order for pads yet. You're quite right that they all touch the same surface, so new pads will not magically cool down the VRAM.
For the moment I will continue to monitor the temps for a while; let's see how things unfold.

Micron GDDR6X is nominal at 95C and happy for short bursts, such as gaming, up to 110C. So 95C isn't hot, it's normal. If it's running cooler than that, it's just that the cooler is doing more than it needs to.

When you read around the web about "is 105C too hot for my GDDR6X" and you see answers saying "no, you should redo the pads", you have to remember that 95% of the information about GDDR6X on 3080 cards came from ETH miners who were overclocking their VRAM and running incredibly memory-intensive loads on it 24/7/365. They replaced the pads because they were literally running a 24/7 stress test of the VRAM while overclocking it at the same time.

If your GPU runs its VRAM at 100C when gaming, it's perfectly okay. Only worry if you see temps near or over 110C for prolonged periods, because that indicates poor contact and one or more of the GDDR6X modules overheating.
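If you'd rather track it over a whole session than eyeball an overlay, a rough sketch like the one below works - it just parses a HWiNFO-style CSV sensor log. The file name, the column header and the 105C/110C thresholds are assumptions based on the figures above, so adjust them to match your own export:

```python
# Rough sketch: scan a HWiNFO-style CSV sensor log and summarise VRAM junction temps.
# Assumptions: the log is comma-separated and has a column named something like
# "GPU Memory Junction Temperature [°C]" -- rename COLUMN to match your own export.
import csv

LOG_FILE = "hwinfo_log.csv"                       # hypothetical path to your exported log
COLUMN = "GPU Memory Junction Temperature [°C]"   # adjust to the exact header in your log
RATED_MAX = 105.0                                 # Micron's 24/7 junction rating discussed above
THROTTLE = 110.0                                  # TjMax throttle point discussed above

temps = []
with open(LOG_FILE, newline="", encoding="utf-8", errors="ignore") as f:
    for row in csv.DictReader(f):
        try:
            temps.append(float(row.get(COLUMN, "").strip()))
        except ValueError:
            continue  # skip blank cells and the footer rows HWiNFO appends

if temps:
    print(f"samples: {len(temps)}  avg: {sum(temps) / len(temps):.1f} C  peak: {max(temps):.1f} C")
    print(f"samples at/above {RATED_MAX:.0f} C: {sum(t >= RATED_MAX for t in temps)}")
    print(f"samples at/above {THROTTLE:.0f} C: {sum(t >= THROTTLE for t in temps)}")
else:
    print("No readings found - check LOG_FILE and COLUMN.")
```

The exact numbers matter less than the pattern: brief peaks around 95C are normal, while lots of samples parked near 110C is the sign of a contact problem.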
 
Joined
Jan 22, 2020
Messages
1,047 (0.56/day)
Location
Turkey
System Name MSI-MEG
Processor AMD Ryzen 9 3900X
Motherboard MSI MEG X570S ACE MAX
Cooling AMD Wraith Prism + Thermal Grizzly
Memory 32 GB
Video Card(s) MSI Suprim X RTX 3080
Storage 500 GB MSI Spatium nvme + 500 GB WD nvme + 2 TB Seagate HDD + 2 TB Seagate HDD
Display(s) 27" LG 144HZ 2K ULTRAGEAR
Case MSI MPG Velox Airflow 100P
Audio Device(s) Philips
Power Supply Seasonic 750W 80+ Gold
Mouse HP OMEN REACTOR
Keyboard Corsair K68
Software Windows10 LTSC 64 bit
Micron GDDR6X is nominal at 95C and happy for short bursts, such as gaming, up to 110C. So 95C isn't hot, it's normal. If it's running cooler than that, it's just that the cooler is doing more than it needs to.

When you read around the web about "is 105C too hot for my GDDR6X" and you see answers saying "no, you should redo the pads", you have to remember that 95% of the information about GDDR6X on 3080 cards came from ETH miners who were overclocking their VRAM and running incredibly memory-intensive loads on it 24/7/365. They replaced the pads because they were literally running a 24/7 stress test of the VRAM while overclocking it at the same time.

If your GPU runs its VRAM at 100C when gaming, it's perfectly okay. Only worry if you see temps near or over 110C for prolonged periods, because that indicates poor contact and one or more of the GDDR6X modules overheating.
Luckily, my VRAM runs at 85C at most. Under normal circumstances it runs at 75-80C.
 
Joined
Mar 2, 2011
Messages
111 (0.02/day)
But the pads have leaked tons of oil...
If they have leaked, change them, I strongly advise.

Silicone oil becomes sticky, dust sticks to it, and both are insulators. The dust bunches up around little components that are not cooled and acts as an insulator; then the dust attracts moisture, which is not a good friend of circuits. After that you might get minuscule discharges of electricity (because of the moisture in the dust) between components, which can become critical.

Silicone oil can also penetrate the epoxy of an integrated circuit. Additionally, any residue or dirt that the oil dissolves can increase its conductivity; without moisture, the effect goes the other way.

You can see below, on the left, that brown droplet: that is silicone oil that has changed colour after reacting chemically with the PCB's protective layers or the solder base.

On the right-hand side you can see how dust was collected by the silicone oil around components, while the rest of the PCB is clean.

EVGA Silicon.jpg


What you see in the picture is my 1080 Ti, which used to run with the leaky original EVGA thermal pads.
It used to have VRAM temps close to, and usually above, the hotspot temps. I changed the thermal paste on the GPU and replaced the thermal pads with thermal putty; now, if my GPU temp is 65C, the VRAM sits around 59C.

You need a silicone brush (very important), 99% alcohol, a caliper and the thermal pads.
First, measure your old thermal pads with the caliper to find the thickness, or find exact pad-thickness data for your card.

Only then, once you already know the thickness, can you order the new pads.
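Not that you need software for this, but as a sanity check while you measure: the little sketch below just rounds each measured (compressed) pad up to the nearest commonly sold thickness. The ~15% compression allowance and the size list are my assumptions, not manufacturer data, so trust the caliper and the vendor's spec sheet over this:

```python
# Quick sanity-check sketch for picking replacement pad thicknesses.
# Assumptions (not manufacturer data): the old pads were compressed by roughly 15%
# under the cooler, and replacements are sold in the thicknesses listed below.
COMMON_THICKNESSES_MM = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
COMPRESSION_ALLOWANCE = 0.15   # assumed squish of the old pad; adjust to taste

def suggest_pad(measured_mm: float) -> float:
    """Return the smallest standard thickness covering the estimated uncompressed gap."""
    estimated_gap = measured_mm * (1 + COMPRESSION_ALLOWANCE)
    for size in COMMON_THICKNESSES_MM:
        if size >= estimated_gap:
            return size
    return COMMON_THICKNESSES_MM[-1]

# Hypothetical caliper readings for each pad location, in mm:
measurements = {"VRAM (front)": 1.35, "VRAM (back)": 1.8, "VRM": 0.9}
for spot, measured in measurements.items():
    print(f"{spot}: measured {measured:.2f} mm -> try a {suggest_pad(measured):.1f} mm pad")
```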

Micron GDDR6X is nominal at 95C and happy for short bursts, such as gaming, up to 110C. So 95C isn't hot, it's normal. If it's running cooler than that, it's just that the cooler is doing more than it needs to.

When you read around the web about "is 105C too hot for my GDDR6X" and you see answers saying "no, you should redo the pads", you have to remember that 95% of the information about GDDR6X on 3080 cards came from ETH miners who were overclocking their VRAM and running incredibly memory-intensive loads on it 24/7/365. They replaced the pads because they were literally running a 24/7 stress test of the VRAM while overclocking it at the same time.

If your GPU runs its VRAM at 100C when gaming, it's perfectly okay. Only worry if you see temps near or over 110C for prolonged periods, because that indicates poor contact and one or more of the GDDR6X modules overheating.
Did Samsung or Hynix or Micron give you in writing that their GDDR6X can withstand certain temperatures for a certain number of hours or days?
I believe not. If you do have such data I'd be very interested, please.
IMHO 95C is critical simply because you don't know how long that thermal stress can go on before one single memory chip gives up, and then you have a brick, not a GPU.
The card manufacturers' safe margins are plain BS. RMA a dead card with burnt VRAM and see how they'll find a little scratch on the shroud and deny the RMA, Asus, Gigabyte and others. The channel linked below is full of that too.

The GPU throttles down at high temps and won't burn, but VRAM does. It's really not worth the risk. It's one of the major fault factors for the 2000 and 3000 series. The Northwest Repair channel on YouTube is full of such repairs.

Just think about why the memory controller throttles down your M.2 drive when it hits 70C. An M.2 drive has memory chips too, right? Some controllers throttle the M.2 even at lows like 50-60C; if I remember well, some of the P3 drives do.

If you want big results, consider a copper shim memory mod. It's going to cost more, for obvious reasons, though.


::edit::

Even did a follow-up vid


::EDIT II::

While we're at it, there are a tonne of videos of other people doing it.


With copper mods you have to be careful: you can't go with little square shims over the VRAM, and certainly not with thermal paste. We usually have a line of 2 or 3 memory chips, so you're better off using one long copper shim to cover, say, 3 memory chips, to keep it from shifting out of place while you reassemble the card, which would result in shorts. Thermal putty also helps prevent that. A long copper shim helps in another way too: not all memory chips run at the same temperature, and with longer shims you spread the heat evenly, or close to it.
You need thermal putty instead of thermal paste, and you make sure the putty overflows the memory chip and covers the little components near it, to prevent shorts as well.
You have to bear in mind how thick the applied putty layer is, both to prevent shorts and to come up with the right thickness for the shims.

A copper backplate mod is very good too, but then you need a lot of thermal putty to prevent shorts. If you want to go cheaper, make sure the contact patches are tall, at least 1.5 mm or 2 mm.
The industry standard says at least 0.3 mm of clearance, but I would go higher than that; copper is highly conductive.
With a copper plate mod you really need anti-sag support for the GPU; copper is really heavy compared with aluminium.

There are some plastic films that are quite thermally conductive, which some manufacturers apply to their backplates to prevent shorts. I have to find the exact material; I'm interested and I'm going to test it.
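To put rough numbers on the shim idea: all you are really doing is making the shim plus the putty layer on each face fill the same gap the stock pad filled, while keeping bare copper away from the surrounding components. A back-of-the-envelope sketch; the 0.3 mm clearance is the figure mentioned above, everything else is an example value, not a measurement from this card:

```python
# Back-of-the-envelope sketch for sizing a copper shim over the VRAM.
# Assumptions: the stock pad gap and the putty bondline per side are example
# numbers, not measurements from the OP's card -- measure with a caliper instead.
STOCK_GAP_MM = 1.5          # gap the original pad filled between chip and cooler
PUTTY_PER_SIDE_MM = 0.2     # assumed thin putty layer on each face of the shim
MIN_CLEARANCE_MM = 0.3      # minimum spacing to nearby components quoted above

shim_mm = STOCK_GAP_MM - 2 * PUTTY_PER_SIDE_MM
if shim_mm <= MIN_CLEARANCE_MM:
    print(f"A {STOCK_GAP_MM} mm gap leaves only {shim_mm:.2f} mm for copper - "
          "probably not worth a shim; stick with a pad or putty.")
else:
    print(f"Aim for roughly a {shim_mm:.2f} mm shim "
          f"({STOCK_GAP_MM} mm gap minus two {PUTTY_PER_SIDE_MM} mm putty layers), "
          f"and keep at least {MIN_CLEARANCE_MM} mm between bare copper and any component.")
```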

Thanks for the tip. I blow air through it once every two weeks, so there's hardly a dust particle on it.

Dude, I can play any game at any settings
No worries, your card is fine and has 10 GB, not 8 GB. It will be good for another 1-2 years in terms of performance.

I have the same screen as you but just a 1080 Ti. Even mine does well at 1440p if I cap the FPS; I can't expect my card to hit 100 FPS in new games.
The only game where it drops under 50 FPS is HD 2 at 1440p at native settings, which are just above Ultra.

I hope you have anti-sag support for your card. A common issue with the 3000 series is that the pads underneath the VRAM and GPU get ripped out of the PCB substrate. Even sagging can rip those pads, and that kind of damage is usually beyond repair. Another issue with the 2000, 3000 and 4000 series is that the PCBs are very thin.
The main cause: the lack of lead in the solder. Lead gives elasticity to solder joints; without it, the joints become brittle. A single fissure in one of the thousands of solder joints on a GPU can become a major issue, with chain reactions.
MSI was one of the first companies to jump on the eco ship, so they use a lead-free solder base. As I remember, Asus and Gigabyte followed. To replace the lead you need another metal with high elasticity and conductivity, silver, which is expensive; you need more than 5% silver in the solder base to get the needed elasticity, and resins also become brittle and are not strong enough.

PNY sat back and kept using leaded solder. I was looking at a 4070 made by PNY and the product page clearly said, with a warning sign, that it contains lead.

Now please, someone tell me how silver in the solder base is sustainable. It is expensive and already has extensive uses in the automotive and weapons industries, pharmacology, etc., and in computer hardware manufacturing (as early as 1990); if they now add it to solder as well, it is really unsustainable.

For example, the stocks of silver mining companies have gone from about $430 to $800 since last summer, which will affect the price of silver, though I can't tell by how much.

Electronics without lead will fill up landfills faster and become more problematic for the environment than the use and recycling of lead would be. The Northwest Repair guy, who is much more experienced with electronic circuits, said that and I totally agree.

Lead is toxic for people who heat it above 380C (when toxic fumes are released), not for users of hardware containing leaded solder.
 
Joined
Feb 20, 2019
Messages
8,923 (4.03/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
If they have leaked, change them, I strongly advise.

Silicone oil becomes sticky, dust sticks to it, and both are insulators. The dust bunches up around little components that are not cooled and acts as an insulator; then the dust attracts moisture, which is not a good friend of circuits. After that you might get minuscule discharges of electricity (because of the moisture in the dust) between components, which can become critical.

Silicone oil can also penetrate the epoxy of an integrated circuit. Additionally, any residue or dirt that the oil dissolves can increase its conductivity; without moisture, the effect goes the other way.
Dirt is dirt. If your card is filthy then yes - that's a different issue. I agree that excess oil is undesirable, but your advice is a little contradictory - new pads have MORE oil in them than old pads. If you're campaigning that excess oil is bad, then replacing the pads with fresh ones that have more oil to squeeze out is the exact opposite of your goal. Sure, brand-new pads tend to have more viscous grease in them, but that's only a temporary state, as the oil will bleed out as soon as the pads are brought up to high temperature under the compression of a cooler.

Did Samsung or Hynix or Micron give you in writing that their GDDR6X can withstand certain temperatures for a certain number of hours or days?
I believe not. If you do have such data I'd be very interested, please.
IMHO 95C is critical simply because you don't know how long that thermal stress can go on before one single memory chip gives up, and then you have a brick, not a GPU.
If you're asking about Samsung or Hynix GDDR6X then you're not familiar with GDDR6X at all, so throwing doubt about what others are saying is a very precarious position to be standing in, IMO.

GDDR6X is a Micron-exclusive technology. Safe temperatures for GDDR6X were covered extensively during the ETH mining craze by many channels, most notably Igor's Lab, but they all directly referenced the Micron spec sheets for GDDR6X (no longer on the Micron website since GDDR6X is EOL, so you'll either need to refer to those articles/videos or use archive.org). 95C/105C is the max 24/7 operational temperature, and the TjMax throttle temperature is 110C - i.e. the temperature at which the VRAM will throttle down to preserve itself.

Here is just one of Igor's multiple articles, and there are several more in-depth ones around the web

That 95C/105C ambiguity is only ambiguous if you don't do your homework: 95C is the case temperature, i.e. the temperature of the plastic exterior housing. It's an irrelevant number because no graphics card measures the case temperature - you'd only get it by dismantling the card, applying a thermocouple to the plastic case of the GDDR6X module, and running it exposed without a heatsink or thermal pads. It's 10C lower than the important 105C junction temperature because of the thermal gradient between the junction, where the silicon meets the package inside the chip, and the outer surface of the plastic case.

105C is the 24/7 operational junction temperature of Micron GDDR6X as per their datasheet, and the junction temperature is what all GPU readings report. It's also overly conservative based on real-world experimentation; most people who investigated it via thermocouples expect it's likely okay at 115-120C, but Micron played it safe to avoid the risk of warranty/RMA disputes.
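For anyone wondering where that roughly 10C gap comes from, it's just the usual thermal-resistance arithmetic: junction temperature is case temperature plus per-chip power times junction-to-case thermal resistance. A toy calculation below; the per-module power and the theta_jc value are illustrative assumptions, not figures from the Micron datasheet:

```python
# Toy illustration of case vs junction: T_junction = T_case + P * theta_jc.
# Both numbers below are assumptions for illustration, not Micron datasheet values.
POWER_PER_MODULE_W = 3.0     # assumed dissipation of one GDDR6X module under load
THETA_JC_C_PER_W = 3.3       # assumed junction-to-case thermal resistance

case_temp_c = 95.0           # the "case" rating discussed above
junction_temp_c = case_temp_c + POWER_PER_MODULE_W * THETA_JC_C_PER_W
print(f"Case at {case_temp_c:.0f} C with ~{POWER_PER_MODULE_W} W per module "
      f"and theta_jc ~{THETA_JC_C_PER_W} C/W -> junction around {junction_temp_c:.1f} C")
```

Which is why a 95C case rating and a 105C junction rating describe roughly the same operating point; the sensor your GPU reports is on the junction side.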
 
Joined
Jan 22, 2020
Messages
1,047 (0.56/day)
Location
Turkey
System Name MSI-MEG
Processor AMD Ryzen 9 3900X
Motherboard MSI MEG X570S ACE MAX
Cooling AMD Wraith Prism + Thermal Grizzly
Memory 32 GB
Video Card(s) MSI Suprim X RTX 3080
Storage 500 GB MSI Spatium nvme + 500 GB WD nvme + 2 TB Seagate HDD + 2 TB Seagate HDD
Display(s) 27" LG 144HZ 2K ULTRAGEAR
Case MSI MPG Velox Airflow 100P
Audio Device(s) Philips
Power Supply Seasonic 750W 80+ Gold
Mouse HP OMEN REACTOR
Keyboard Corsair K68
Software Windows10 LTSC 64 bit
The main cause: the lack of lead in the solder. Lead gives elasticity to solder joints; without it, the joints become brittle. [...] Electronics without lead will fill up landfills faster and become more problematic for the environment than the use and recycling of lead would be.
The lead regulation mandate was introduced in 2006, I recall. Since then, hardware failures have risen sharply. I don't know which metal could be used instead of lead, but I doubt the industry is keen to find a proper replacement, since lead-free solder is also one pillar of planned obsolescence.
 
Joined
Mar 2, 2011
Messages
111 (0.02/day)
Dirt is dirt. If your card is filthy then yes - that's a different issue. I agree that excess oil is undesirable, but your advice is a little contradictory - new pads have MORE oil in them than old pads. If you're campaigning that excess oil is bad, then replacing the pads for fresh ones that have more oil to squeeze out is the exact opposite of your goal. Sure, brand new pads tend to have more viscous grease in them but that's only a temporary state as the oil will get out as soon as the pads are brought up to high temperatures under the compression of a cooler.
Dirt is not a different issue at all; we all have it in our GPUs, and in the present situation of silicone oil leakage, dirt really becomes part of the issue: it sticks to the PCB in very undesirable places.
The point of dirt in this context is that it sticks to the silicone leakage; otherwise dust can easily be blown off with a blower, or even by the fans at max RPM. You can clearly see that the rest of the PCB has no accumulated dust, because no silicone oil was present on those parts of the PCB, and you are ignoring that.
Brand-new pads have more silicone oil, yes, but do they all leak equally? No. Some will leak early while others will leak in greater quantity. Use trial and error, or other people's experience, to choose the right thermal pads.
Thermal putty by Upsiren doesn't leak and is what the crypto guys use. Maybe it will be hard for him to work with putty if he hasn't done it before.

"bleed out as soon as the pads are brought up to high temperature under the compression of a cooler." I don't agree. But that is my experience, and it doesn't invalidate yours.

"If you're asking about Samsung or Hynix GDDR6X then you're not familiar with GDDR6X at all, so throwing doubt about what others are saying is a very precarious position to be standing in, IMO.
GDDR6X is a Micron-exclusive technology"

-
I was talking about Micron failure on 2000 and 3000 series not all have DDR 6X and NW repair guy he seen higher failure rate of Micron memory as I remember code starting with 08. If I find the video I will post it here.
My homework and maybe yours( with all respect) is useless compared to the NW repair guy experience, his actually sick on the countless mem chips he had to replace, that includes Hynix and Samsung as well. I'm not trying to hit Micron as being the worst. They just higher failure rate compared to other in that time bracket.



95C/105C is the max 24/7 operational temperature
:D For how long? 10 months, 12? Or does it only hold for the duration of the card's warranty...
I wouldn't trust it, and I really advise others to do the same. I will do the best I can to cool further where possible; the copper layers in the PCB help with the amount of heat traversing the PCB, so I can cool the chips from both sides.
Manufacturers do the bare minimum in most cases; it remains our job to cool things further if we want longevity and no RMA hassle, which these days is usually just a waste of time.

I hope you are familiar with textolite, which did the reverse of what I said above.

If you're asking about Samsung or Hynix GDDR6X then you're not familiar with GDDR6X at all, so throwing doubt about what others are saying is a very precarious position to be standing in, IMO.
My mistake, sorry, I forgot to put GDDR6X into context, but you realise that further down I continue with:
It's really not worth the risk. It's one of the major fault factors for the 2000 and 3000 series.
Which implies I was talking in general about the memory fitted to those cards, which is not all GDDR6X. You have to agree, you pulled it a bit out of context.

The lead regulation mandate was introduced in 2006, I recall. Since then, hardware failures have risen sharply. I don't know which metal could be used instead of lead, but I doubt the industry is keen to find a proper replacement, since lead-free solder is also one pillar of planned obsolescence.
They didn't jump onto that ship immediately; as far as I know, the 900 series was not affected by it.
I just said which metal they are already using, silver, and it is not sustainable; the amount needed is simply way too much. The 5% silver already present in some solder wire is not enough to provide the elasticity of the lead that was pulled out, which was around 40-60%. The NW guy uses only leaded solder for his repairs.

We talked about this on your other thread and I'll still say the same, 85C is totally fine with GDDR6X. Even ~100C is in specs (like with 3090 FE). :)

Also the stock pads look totally fine, I wouldn't change those.
The silicone leak is the problem here, not the present quality of his pads. It needs cleaning, at least with a silicone toothbrush and 99% alcohol; sometimes you need kerosene, as not all silicone oil can be removed completely with 99% alcohol. I couldn't remove it from parts of my backplate; it remained embedded in the paint. Another proof that silicone oil can penetrate.
 
Joined
Feb 20, 2019
Messages
8,923 (4.03/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
I was talking about Micron failures on the 2000 and 3000 series; not all of them have GDDR6X [...] Manufacturers do the bare minimum in most cases; it remains our job to cool things further if we want longevity [...]
That's a lot of words to try and explain how you're talking about issues that don't apply here and aren't relevant.

The only valid context here is the OP's 3080 with Micron GDDR6X; if you want to talk about other memory types, there are thousands of other threads for that.
 
Joined
Mar 2, 2011
Messages
111 (0.02/day)
That's a lot of words to try and explain how you're talking about issues that don't apply here and aren't relevant.

The only valid context here is the OP's 3080 with Micron GDDR6X; if you want to talk about other memory types, there are thousands of other threads for that.
Not really; you talked about GDDR6X temps, which involves the OP's 3080. And I said 95C is critical and that you shouldn't expose the VRAM to those temps for too long; in most cases the 110C throttle point is never reached, and VRAM fries below 110C under long exposure. That is why I consider 95C critical, and not to contradict you; I mention that 95C is critical in other posts too.

However, the OP can do as he pleases and consider whatever data he wants, but he should know the other side of the coin, not only the official one, before making that decision.
I hope he learned something from our discussion and is not confused or in doubt; but I believe you can help him further if that is the case.
 
Joined
Jan 22, 2020
Messages
1,047 (0.56/day)
Location
Turkey
System Name MSI-MEG
Processor AMD Ryzen 9 3900X
Motherboard MSI MEG X570S ACE MAX
Cooling AMD Wraith Prism + Thermal Grizzly
Memory 32 GB
Video Card(s) MSI Suprim X RTX 3080
Storage 500 GB MSI Spatium nvme + 500 GB WD nvme + 2 TB Seagate HDD + 2 TB Seagate HDD
Display(s) 27" LG 144HZ 2K ULTRAGEAR
Case MSI MPG Velox Airflow 100P
Audio Device(s) Philips
Power Supply Seasonic 750W 80+ Gold
Mouse HP OMEN REACTOR
Keyboard Corsair K68
Software Windows10 LTSC 64 bit
Joined
Mar 2, 2011
Messages
111 (0.02/day)
A little reality check: a 3080 Ti.
Look at the title, "Common problems with most GPUs". What does he do? He replaces a faulty memory chip. Why didn't it throttle down on heat? Heat, because this guy complains about Zotac's cooling in several videos, and about the Asus Strix 3090 as well.

For reference regarding GDDR6X reliability, look at how many 4090, 4080, 4070 Super and 3080 cards he repairs on his channel, replacing the chips, or just ask him; all these cards have GDDR6X. The decision about what to do after that is easy.

 
Joined
Oct 2, 2020
Messages
1,097 (0.68/day)
System Name Laptop ASUS TUF F15 | Desktop 1 | Desktop 2
Processor Intel Core i7-11800H | Intel Core i5-14600K@135W | Intel Core i3-10100
Motherboard ASUS FX506HC | Gigabyte B660M DS3H DDR4 | MSI MAG B560M Bazooka
Cooling Laptop built-in cooling lol | Thermalright Assassin Spirit w/ BeQuiet Shadow Wings fan| Stock Copper
Memory 24 GB @ 3200 | 32 GB @ 3200 | 16 GB @ 3200
Video Card(s) Nvidia RTX 3050 Mobile 4GB | Nvidia GTX 1650 | Nvidia GTX 960 2 GB
Storage Adata XPG SX8200 Pro 512 GB | Samsung M2 SSD 256 GB & 1 TB 2.5" HDD @ 7200| SSD 250 GB & SSD 240 GB
Display(s) Laptop built-in 144 Hz FHD screen | Dell 27" WQHD @ 75 Hz & 49" TV FHD | Samsung 32" TV FHD
Case It's a laptop, it doesn't need case lmfao | Deepcool Mattrexx 55 MESH | Aerocool Cylon PRO
Audio Device(s) laptop built in audio | Logitech stereo speakers | Logitech 2.1 speakers
Power Supply ASUS 180W PSU | SeaSonic Focus GX-550 | MSI MAG A550BN
Mouse Logitech G604 | Corsair Harpoon wired mouse| Logitech G305
Keyboard Laptop built-in keyboard |Razer Blackwidow | Steelseries APEX 7 TKL
VR HMD Quest 2 sold out and don't need VR anymore lol
Software Windows 10 Enterprise 20H2 | Windows 11 24H2 LTSC | Windows 11 24H2 LTSC
Benchmark Scores good enough
Subjective? Sure. Dumb? I don't think so. 1080 is about the same level as 3050 which was low mid tier when it was released. Time is remorseless so now 3050 is to be considered entry level, i.e. only comfortable for gaming at settings BELOW high and straight up unplayable at higher resolutions such as UW1440p or 4K. Same applies to 1080 which not only doesn't reach 3060 in raw performance, it also lacks some DX12 features and can't enable DLSS (which is massive for GPUs of this segment).


Sure, 3070 is exactly an entry level 4K GPU. Or a reasonable 1440p GPU. Or a good 1080p GPU.
no way.
1080=2060
3050=1660S/Ti=1070

2060 IS ALWAYS > 1660S/Ti ;):)
 
Joined
Feb 24, 2023
Messages
3,688 (4.94/day)
Location
Russian Wild West
System Name D.L.S.S. (Die Lekker Spoed Situasie)
Processor i5-12400F
Motherboard Gigabyte B760M DS3H
Cooling Laminar RM1
Memory 32 GB DDR4-3200
Video Card(s) RX 6700 XT (vandalised)
Storage Yes.
Display(s) MSi G2712
Case Matrexx 55 (slightly vandalised)
Audio Device(s) Yes.
Power Supply Thermaltake 1000 W
Mouse Don't disturb, cheese eating in progress...
Keyboard Makes some noise. Probably onto something.
VR HMD I live in real reality and don't need a virtual one.
Software Windows 11 / 10 / 8
Benchmark Scores My PC can run Crysis. Do I really need more than that?
3050=1660S/Ti=1070
DLSS and better DX12 support make this GPU superior to the 1070; I'd place it right between the 1070 Ti and the 1080. In ancient games you get a million FPS anyway. In modern games the 3050 feels better because it doesn't need to emulate anything; it runs everything natively, and it upscales better. Add some energy saving on top of that, and it looks even better.
 