
MSI M.2 Shield is Snake Oil Say Tests, Company Refutes Charges

Joined
Nov 4, 2005
Messages
11,984 (1.72/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400MHz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs, 24TB Enterprise drives
Display(s) 55" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
I'm not trying to be an asshole; I work with precision GPS for a living and understand the difference in quality. Sometimes trying to explain to an average tech why an oscilloscope is needed to diagnose a problem spills over here.
 

VSG

Editor, Reviews & News
Staff member
Joined
Jul 1, 2014
Messages
3,653 (0.96/day)
I'm not trying to be an asshole; I work with precision GPS for a living and understand the difference in quality. Sometimes trying to explain to an average tech why an oscilloscope is needed to diagnose a problem spills over here.

Oh you are not being an asshole at all, I really appreciate the discussion- we have no vested interest here except to learn more so I am all for it.
 

newtekie1

Semi-Retired Folder
Joined
Nov 22, 2005
Messages
28,473 (4.10/day)
Location
Indiana, USA
Processor Intel Core i7 10850K@5.2GHz
Motherboard AsRock Z470 Taichi
Cooling Corsair H115i Pro w/ Noctua NF-A14 Fans
Memory 32GB DDR4-3600
Video Card(s) RTX 2070 Super
Storage 500GB SX8200 Pro + 8TB with 1TB SSD Cache
Display(s) Acer Nitro VG280K 4K 28"
Case Fractal Design Define S
Audio Device(s) Onboard is good enough for me
Power Supply eVGA SuperNOVA 1000w G3
Software Windows 10 Pro x64
Here is my take on the whole thing. If the heat issues of M.2 drives could be fixed with a simple heat spreader, the M.2 manufacturers would have added heat spreaders to their drives a long time ago. I refuse to believe none of them thought about trying that to see if it helped. So, since they don't come with heatspreaders, I'd be willing to bet that the MSI heatspreader isn't really helping any.

It looks cool though...
 
Joined
Nov 4, 2005
Messages
11,984 (1.72/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400MHz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs, 24TB Enterprise drives
Display(s) 55" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
Here is my take on the whole thing. If the heat issues of M.2 drives could be fixed with a simple heat spreader, the M.2 manufacturers would have added heat spreaders to their drives a long time ago. I refuse to believe none of them thought about trying that to see if it helped. So, since they don't come with heatspreaders, I'd be willing to bet that the MSI heatspreader isn't really helping any.

It looks cool though...

Reviewers have listed 4 W as the power consumption of high-end drives, though it may be higher than that, especially as temperature increases. I'm sure most don't come with anything in order to meet size specifications, and much like older GDDR, it probably isn't the memory but the controller that produces most of the heat and has to be kept at a lower temp. Drive watercoolers may finally have a use.
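To put rough numbers on that, here's a back-of-envelope sketch of the steady-state temperature rise of a ~4 W controller; the thermal resistances and ambient temperature are assumed values for illustration, not measurements of any particular drive or spreader:

```python
# Rough back-of-envelope estimate (assumed numbers, not measured):
# steady-state temperature rise = power * thermal resistance to ambient.

AMBIENT_C = 35.0          # assumed air temperature inside the case
CONTROLLER_POWER_W = 4.0  # figure quoted by reviewers for high-end drives

# Assumed junction-to-ambient thermal resistances (°C/W); real values
# depend on the specific controller package, airflow, and pad quality.
r_bare_c_per_w = 20.0       # bare controller, little airflow
r_spreader_c_per_w = 12.0   # with a thin spreader + thermal pad

for label, r in [("bare", r_bare_c_per_w), ("with spreader", r_spreader_c_per_w)]:
    temp = AMBIENT_C + CONTROLLER_POWER_W * r
    print(f"{label:14s}: ~{temp:.0f} °C at steady state")
```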
 

VSG

Editor, Reviews & News
Staff member
Joined
Jul 1, 2014
Messages
3,653 (0.96/day)
Yup, the controllers on the M.2 SSDs are the heat issue. The flash modules themselves are fine. As someone else pointed out, Aquacomputer has a watercooled M.2 adapter if you have a custom loop already.
 
Joined
Oct 29, 2016
Messages
111 (0.04/day)
My point when I brought this up was that I do not trust thermal imaging as a means of quantifying heat, so I am taking the Toms Hardware numbers with a grain of salt. The exact VRM modules used on those cards are rated at well above 125 °C, including the ones that went bad. Gamers Nexus used multiple thermocouples with EMI shielding and showed that temps were well within rated specs with and without the new BIOS and thermal pad kit. To bring this back on topic, I argue the same applies to the M.2 heatsink test here: thermal imaging done by Kitguru is not something I agree with, so I am more inclined to take the Gamers Nexus tests as a point of reference.
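For what it's worth, here is a toy example of why I'm wary of thermal imaging on bare electronics: the camera infers temperature from radiance using an emissivity setting, and shiny metal (shields, heatsinks, exposed pads) has a far lower emissivity than the typical default, so the reported number can be way off. All values below are assumptions for illustration:

```python
# Toy model of a thermal camera misreading a low-emissivity surface.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

def apparent_temp_c(true_temp_c, true_emissivity, camera_emissivity,
                    ambient_c=25.0):
    """Temperature the camera reports for a grey-body surface that also
    reflects ambient radiation, when its emissivity setting is wrong."""
    t_true = true_temp_c + 273.15
    t_amb = ambient_c + 273.15
    # Radiance leaving the surface: emitted + reflected ambient.
    radiance = (true_emissivity * SIGMA * t_true**4
                + (1.0 - true_emissivity) * SIGMA * t_amb**4)
    # Camera solves: radiance = eps_cam*sigma*T^4 + (1-eps_cam)*sigma*T_amb^4
    t4 = (radiance - (1.0 - camera_emissivity) * SIGMA * t_amb**4) / (
        camera_emissivity * SIGMA)
    return t4**0.25 - 273.15

# A 90 °C part behind bare shiny metal (emissivity ~0.1), read with the
# camera left at a typical default setting of 0.95:
print(f"camera reads ~{apparent_temp_c(90.0, 0.10, 0.95):.0f} °C")
```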

That said, I did note some things that need to be re-done, and I have been having discussions with Steve about this since Monday, so that Gamers Nexus article will be updated shortly. Similarly, I can definitely understand the skepticism behind the EVGA VRM issue, and I wholeheartedly agree that it was not resolved to my satisfaction either. My contacts at EVGA mentioned it may have been bad modules, but without the actual data we are left to take their word for it.

There are two things I find suspect with the thermocouple and 125 °C ideas. The first is that thermocouples are typically used to probe larger objects, and coupling is always an issue with smaller objects. In Gamers Nexus's final tests, the thermocouple was coupled to the back of the PCB. This can only measure the steady-state temperature of the back side. To extrapolate that number to actual VRM temperatures, one would have to assume a steady-state thermal gradient and do a few multiplications. One way to obtain that gradient would be the use of a thermal imaging device.
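As a sketch of what that extrapolation looks like (every number here is an assumed placeholder; the real values come from the MOSFET datasheet and the board stack-up):

```python
# Going from a thermocouple reading on the back of the PCB to an
# estimated junction temperature. Assumes most of the dissipated heat
# flows down through the package and board to the back side.

T_BACKSIDE_C = 85.0        # thermocouple reading on the PCB back side
POWER_PER_FET_W = 3.0      # assumed dissipation in one power stage

R_JUNCTION_TO_PAD = 1.5    # °C/W, die to exposed pad (datasheet-style R_thJC)
R_PAD_TO_BACKSIDE = 6.0    # °C/W, solder joint + vias + PCB through-plane

# At steady state the gradient is just power times the series resistance.
t_junction = T_BACKSIDE_C + POWER_PER_FET_W * (R_JUNCTION_TO_PAD + R_PAD_TO_BACKSIDE)
print(f"estimated junction temperature: ~{t_junction:.0f} °C")
```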

Which brings us to the second issue: these VRM packages are like any other silicon chip - the core is much hotter than the plastic packaging, since the heat is generated in the silicon underneath. The manufacturer of the VRM chips usually assumes a given amount of cooling, and hence an expected thermal gradient range, which is why the VRM as a whole package is specified for Tcase temperatures when really the MOSFETs are the main sources of heat. It is up to the GPU maker to implement such a thermal gradient. Given this thermal gradient, it is not clear what the core temperatures of these VRM chips are. This is further complicated by the power MOSFETs in the VRMs typically being specified for both a steady-state thermal impedance and a transient thermal impedance (and in general the whole package features this distinction as well, due to the thermal capacitance of any finite-sized object). A small burst workload (<1 second) close enough to the maximum internal channel temperature, at already high duty, may trigger a transient over-temperature condition (in the silicon, not measurable as a steady-state increase of temperature outside the package) and cause eventual failure through aggregated damage, if not immediate failure.
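A minimal one-pole RC thermal model shows the effect: a sub-second burst can push the junction well past what a slow, steady-state external measurement suggests. The R and C values below are assumptions for illustration only:

```python
# One-pole RC thermal model of a short power burst on top of a steady load.
import math

R_TH = 2.0        # °C/W, junction to case (assumed)
C_TH = 0.05       # J/°C, effective thermal capacitance of the die (assumed)
TAU = R_TH * C_TH # thermal time constant, seconds

T_CASE_C = 100.0  # case temperature held by the rest of the board
P_STEADY_W = 5.0  # baseline dissipation
P_BURST_W = 20.0  # short overload burst
BURST_S = 0.2     # burst duration (< 1 second)

t_j_steady = T_CASE_C + P_STEADY_W * R_TH
# Step response of the RC network toward the new (burst) steady state:
t_j_peak = (T_CASE_C + P_BURST_W * R_TH
            - (P_BURST_W - P_STEADY_W) * R_TH * math.exp(-BURST_S / TAU))
print(f"steady junction: ~{t_j_steady:.0f} °C, "
      f"after a {BURST_S}s burst: ~{t_j_peak:.0f} °C")
```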

For the longevity and reliability of these VRM chips, I would think a good design would keep the MOSFETs several tens of degrees below the 125 °C maximum, to ensure actual MOSFET channel temperatures always stay below the maximum channel temperature specified. To do so, you would need to ensure the thermal gradient across the VRM chips is as low as possible, which means effective cooling coupling in the form of a heatsink (which lowers the thermal impedance from the VRMs to the surrounding air).
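Working backwards from such a margin, you can estimate how good the case-to-ambient coupling has to be; again, all inputs here are assumed examples rather than datasheet values for any specific part:

```python
# Pick a target junction temperature well under the 125 °C limit and
# solve for the maximum allowed case-to-ambient thermal resistance,
# i.e. how effective the heatsink coupling needs to be.

T_JUNCTION_TARGET_C = 95.0   # several tens of degrees under the 125 °C max
T_AMBIENT_C = 45.0           # air near the VRMs inside a warm case
POWER_W = 4.0                # assumed dissipation per power stage
R_JUNCTION_TO_CASE = 1.5     # °C/W, assumed datasheet-style value

r_case_to_ambient_max = ((T_JUNCTION_TARGET_C - T_AMBIENT_C) / POWER_W
                         - R_JUNCTION_TO_CASE)
print(f"heatsink + pad must achieve <= {r_case_to_ambient_max:.1f} °C/W "
      f"case-to-ambient")
```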
 
Last edited:

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.46/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
Kind of disappointed they didn't test the EMI aspects of it. EMI is the only reason why I could see using it. That said, is there even any documented evidence that EMI shielding on an SSD would either prolong the life of the SSD or decrease the error rate? I doubt it. Still...due diligence...
 
Joined
Nov 4, 2005
Messages
11,984 (1.72/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400MHz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs, 24TB Enterprise drives
Display(s) 55" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
There are two things I find suspect with the thermocouple and 125 °C ideas. The first is that thermocouples are typically used to probe larger objects, and coupling is always an issue with smaller objects. In Gamers Nexus's final tests, the thermocouple was coupled to the back of the PCB. This can only measure the steady-state temperature of the back side. To extrapolate that number to actual VRM temperatures, one would have to assume a steady-state thermal gradient and do a few multiplications. One way to obtain that gradient would be the use of a thermal imaging device.

Which brings us to the second issue: these VRM packages are like any other silicon chip - the core is much hotter than the plastic packaging, since the heat is generated in the silicon underneath. The manufacturer of the VRM chips usually assumes a given amount of cooling, and hence an expected thermal gradient range, which is why the VRM as a whole package is specified for Tcase temperatures when really the MOSFETs are the main sources of heat. It is up to the GPU maker to implement such a thermal gradient. Given this thermal gradient, it is not clear what the core temperatures of these VRM chips are. This is further complicated by the power MOSFETs in the VRMs typically being specified for both a steady-state thermal impedance and a transient thermal impedance (and in general the whole package features this distinction as well, due to the thermal capacitance of any finite-sized object). A small burst workload (<1 second) close enough to the maximum internal channel temperature, at already high duty, may trigger a transient over-temperature condition (in the silicon, not measurable as a steady-state increase of temperature outside the package) and cause eventual failure through aggregated damage, if not immediate failure.

For the longevity and reliability of these VRM chips, I would think a good design would keep the MOSFETs several tens of degrees below the 125 °C maximum, to ensure actual MOSFET channel temperatures always stay below the maximum channel temperature specified. To do so, you would need to ensure the thermal gradient across the VRM chips is as low as possible, which means effective cooling coupling in the form of a heatsink (which lowers the thermal impedance from the VRMs to the surrounding air).


They can calculate the expected TDP of any VRM with a few simple tests: gate cross-section, voltage and resistance on the load side, plus capacitive and inductive losses (these go way up as temperature increases). The more I look at the failures, the more it looks like the solder is getting too hot and melts enough to short out, but still has high enough resistance that the PSU doesn't cut off before it kills the GPU and everything close to the VRM. The voltage controller in PLL mode should be driving them all the same and is able to calculate the approximate wattage through the VRMs in milliseconds, so the VRMs being overdriven/overloaded shouldn't be an issue unless the solder points were high resistance.
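For a rough idea of how those losses split in a single buck-converter power stage (component values below are assumed examples, not any specific card's parts, and the sketch ignores duty-cycle weighting between high- and low-side FETs as well as gate-drive loss):

```python
# Approximate per-phase VRM losses: conduction loss from R_DS(on) plus
# switching loss that scales with voltage, current, and frequency.

V_IN = 12.0          # V, input to the VRM phase
I_PHASE = 25.0       # A, average current through the phase
R_DS_ON = 0.004      # ohm, MOSFET on-resistance (rises with temperature)
F_SW = 300e3         # Hz, switching frequency
T_SW = 10e-9         # s, combined voltage/current rise + fall overlap time

p_conduction = I_PHASE**2 * R_DS_ON               # I^2 * R while conducting
p_switching = 0.5 * V_IN * I_PHASE * T_SW * F_SW  # overlap loss per second
print(f"conduction ~{p_conduction:.2f} W, switching ~{p_switching:.2f} W per phase")
```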
 
Joined
Sep 5, 2007
Messages
512 (0.08/day)
System Name HAL_9017
Processor Intel i9 10850k
Motherboard Asus Prime z490-A
Cooling Corsair H115i
Memory GSkill 32GB DDR4-3000 Trident-Z RGB
Video Card(s) NVIDIA 1080GTX FE w/ EVGA Hybrid Water Cooler
Storage Samsung EVO 960 M.2 SSD 500Gb
Display(s) Asus XG279
Case In Win 805i
Audio Device(s) EVGA NuAudio
Power Supply CORSAIR RM750
Mouse Logitech Master 2s
Keyboard Keychron K4
LOTS of talk about M.2 cooling. I saw this thermal pad from Silverstone and am curious how you could pair a heatsink with it. I assume it would be better than the MSI solution, but where to begin?
 
Joined
Jul 20, 2013
Messages
236 (0.06/day)
System Name Coffee Lake S
Processor i9-9900K
Motherboard MSI MEG Z390 ACE
Cooling Corsair H115i Platinum RGB
Memory Corsair Dominator Platinum RGB 32GB (2x16GB) DDR4 3466 C16
Video Card(s) EVGA RTX 2080 Ti XC2 Ultra
Storage Samsung 970 Pro M.2 512GB - Samsung 860 EVO 1TB SSD - WD Black 2TB HDD
Display(s) Dell P2715Q 27" 3840x2160 IPS @ 60Hz
Case Fractal Design Define R6
Power Supply Seasonic 860 watt Platinum
Mouse SteelSeries Rival 600
Keyboard Corsair K70 RGB MK.2
Software Windows 10 Pro 64 bit
I have no idea what this title means. Anyone care to explain?


Seriously - you don't know what "snake oil" is?

It means it's a fugazzi.
 
Joined
Oct 29, 2016
Messages
111 (0.04/day)
They can calculate the expected TDP of any VRM with a few simple tests: gate cross-section, voltage and resistance on the load side, plus capacitive and inductive losses (these go way up as temperature increases). The more I look at the failures, the more it looks like the solder is getting too hot and melts enough to short out, but still has high enough resistance that the PSU doesn't cut off before it kills the GPU and everything close to the VRM. The voltage controller in PLL mode should be driving them all the same and is able to calculate the approximate wattage through the VRMs in milliseconds, so the VRMs being overdriven/overloaded shouldn't be an issue unless the solder points were high resistance.

Interesting hypothesis. I think the lack of direct feedback at the driver about the instantaneous thermal condition of the MOSFETs played some role. Given that the external temperatures of the VRMs were close enough to the maximum rated case temperatures, I wouldn't be surprised if the channel temperatures occasionally exceeded safe levels. Poor solder contact through melting (I'm not sure what solder they used, whether it is eutectic or some other formulation) would substantially reduce heat transfer to the PCB, which, in the absence of a proper heatsink, is the best source of cooling (usually spread through a couple of vias or a large ground plane on the PCB). Typically, without something like a PowerPAD on the top side, the solder side of the VRM package is the hottest side.
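As a sketch of those two escape paths (down through the solder/pad into the PCB versus up through the plastic top into the air), treated as parallel thermal resistances with assumed example values:

```python
# Two parallel heat paths from the die to ambient; degrading the solder
# path (e.g. a poor or partially melted joint) raises the die temperature.

POWER_W = 3.0
T_AMBIENT_C = 45.0

R_DOWN = 8.0   # °C/W: pad -> solder -> vias/ground plane -> board -> air
R_UP = 40.0    # °C/W: die -> molded top -> still air (no heatsink)

r_parallel = 1.0 / (1.0 / R_DOWN + 1.0 / R_UP)
t_die = T_AMBIENT_C + POWER_W * r_parallel
t_die_degraded = T_AMBIENT_C + POWER_W / (1.0 / 25.0 + 1.0 / R_UP)
print(f"die ~{t_die:.0f} °C; degrade the solder path to 25 °C/W "
      f"and it becomes ~{t_die_degraded:.0f} °C")
```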

Anyway, we do not have any means to figure this out completely. I do think there are issues with the tests conducted with thermal probes.
 
Last edited:
Joined
Dec 3, 2014
Messages
348 (0.10/day)
Location
Marabá - Pará - Brazil
System Name KarymidoN TitaN
Processor AMD Ryzen 7 5700X
Motherboard ASUS TUF X570
Cooling Custom Watercooling Loop
Memory 2x Kingston FURY RGB 16gb @ 3200mhz 18-20-20-39
Video Card(s) MSI GTX 1070 GAMING X 8GB
Storage Kingston NV2 1TB| 4TB HDD
Display(s) 4X 1080P LG Monitors
Case Aigo Darkflash DLX 4000 MESH
Power Supply Corsair TX 600
Mouse Logitech G300S
I have no idea what this title means. Anyone care to explain?

URBAN DICTIONARY said:
comes from the 19th-century American practice of selling cure-all elixirs in traveling medicine shows. Snake oil salesmen would falsely claim that the potions would cure any ailment. Nowadays it refers to fake products.
 
Joined
Nov 29, 2016
Messages
671 (0.23/day)
System Name Unimatrix
Processor Intel i9-9900K @ 5.0GHz
Motherboard ASRock x390 Taichi Ultimate
Cooling Custom Loop
Memory 32GB GSkill TridentZ RGB DDR4 @ 3400MHz 14-14-14-32
Video Card(s) EVGA 2080 with Heatkiller Water Block
Storage 2x Samsung 960 Pro 512GB M.2 SSD in RAID 0, 1x WD Blue 1TB M.2 SSD
Display(s) Alienware 34" Ultrawide 3440x1440
Case CoolerMaster P500M Mesh
Power Supply Seasonic Prime Titanium 850W
Keyboard Corsair K75
Benchmark Scores Really Really High
Sorry, the Gamers Nexus test is flawed. They actually said that it lowered the temperature of the SSD but the back of the SSD got hotter.

1 - Most SSDs don't have chips on the back, or not a lot of chips.
2 - The back of the SSD will be the same with or without the shield; the shield only covers the front surface, so how is the shield affecting the back at all?
 

Ruru

S.T.A.R.S.
Joined
Dec 16, 2012
Messages
12,797 (2.93/day)
Location
Jyväskylä, Finland
System Name 4K-gaming / media-PC
Processor AMD Ryzen 7 5800X / Intel Core i7-6700K
Motherboard Asus ROG Crosshair VII Hero / Asus Z170-A
Cooling Arctic Freezer 50 / Thermaltake Contac 21
Memory 32GB DDR4-3466 / 16GB DDR4-3000
Video Card(s) RTX 3080 10GB / RX 6700 XT
Storage 3.3TB of SSDs / several small SSDs
Display(s) Acer 27" 4K120 IPS + Lenovo 32" 4K60 IPS
Case Corsair 4000D AF White / DeepCool CC560 WH
Audio Device(s) Creative Omni BT speaker
Power Supply EVGA G2 750W / Fractal ION Gold 550W
Mouse Logitech MX518 / Logitech G400s
Keyboard Roccat Vulcan 121 AIMO / NOS C450 Mini Pro
VR HMD Oculus Rift CV1
Software Windows 11 Pro / Windows 11 Pro
Benchmark Scores They run Crysis
Sorry, the Gamers Nexus test is flawed. They actually said that it lowered the temperature of the SSD but the back of the SSD got hotter.

1 - Most SSDs don't have chips on the back, or not a lot of chips.
2 - The back of the SSD will be the same with or without the shield; the shield only covers the front surface, so how is the shield affecting the back at all?
My Intel 600p 256GB has chips on only one side. I have Alphacool's heatspreader on it, and depending on the situation, it runs 5-10°C cooler.
 