System Name | "Icy Resurrection" |
---|---|
Processor | 13th Gen Intel Core i9-13900KS Special Edition |
Motherboard | ASUS ROG Maximus Z790 Apex Encore |
Cooling | Noctua NH-D15S upgraded with 2x NF-F12 iPPC-3000 fans and Honeywell PTM7950 TIM |
Memory | 32 GB G.SKILL Trident Z5 RGB F5-6800J3445G16GX2-TZ5RK @ 7600 MT/s 36-44-44-52-96 1.4V |
Video Card(s) | ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition |
Storage | 500 GB WD Black SN750 SE NVMe SSD + 4 TB WD Red Plus WD40EFPX HDD |
Display(s) | 55-inch LG G3 OLED |
Case | Pichau Mancer CV500 White Edition |
Audio Device(s) | Apple USB-C + Sony MDR-V7 headphones |
Power Supply | EVGA 1300 G2 1.3kW 80+ Gold |
Mouse | Microsoft Classic Intellimouse |
Keyboard | IBM Model M type 1391405 (Spanish layout) |
Software | Windows 11 IoT Enterprise LTSC 24H2 |
Benchmark Scores | I pulled a Qiqi~ |
Hotspot temp in itself is meaningless.
It's the delta between the core temp and the hotspot that matters. If your core is at 90 °C and your hotspot is at 100 °C, then everything is just getting very toasty, but the cooler is making proper contact.
However, if your core is at 70 °C and your hotspot is at 100 °C, then you know for sure it's either a bad mount or a bad TIM application.
I agree. Usually a delta of up to 15, maybe 20 °C is acceptable IMO, depending on the heatsink type. The thing to keep in mind about core hotspot and memory junction temperature readings is that each one reports the single hottest sensor across the entire die or the entire memory array, so it's a worst-case figure.
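The rule of thumb above can be sketched as a quick check. The function name and the 20 °C threshold are illustrative assumptions from this thread, not vendor-specified limits:

```python
def check_hotspot_delta(core_temp_c: float, hotspot_temp_c: float,
                        max_ok_delta_c: float = 20.0) -> str:
    """Classify the core-to-hotspot delta.

    The absolute hotspot value matters less than the gap between the
    average core temperature and the hottest on-die sensor: a large gap
    suggests a bad mount or poor TIM application rather than a cooler
    that is simply overloaded. Threshold is an assumption, not a spec.
    """
    delta = hotspot_temp_c - core_temp_c
    if delta <= max_ok_delta_c:
        return f"delta {delta:.0f} C: contact looks fine"
    return f"delta {delta:.0f} C: check mount/TIM application"

# Everything hot, but the small gap means contact is good:
print(check_hotspot_delta(90, 100))  # delta 10 C: contact looks fine
# Moderate core temp with a big gap points at a mounting problem:
print(check_hotspot_delta(70, 100))  # delta 30 C: check mount/TIM application
```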