| System Name | "Icy Resurrection" |
|---|---|
| Processor | 13th Gen Intel Core i9-13900KS Special Edition |
| Motherboard | ASUS ROG MAXIMUS Z790 APEX ENCORE |
| Cooling | Noctua NH-D15S upgraded with 2x NF-F12 iPPC-3000 fans and Honeywell PTM7950 TIM |
| Memory | 32 GB G.SKILL Trident Z5 RGB F5-6800J3445G16GX2-TZ5RK @ 7600 MT/s 36-44-44-52-96 1.4V |
| Video Card(s) | ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition |
| Storage | 500 GB WD Black SN750 SE NVMe SSD + 4 TB WD Red Plus WD40EFPX HDD |
| Display(s) | 55-inch LG G3 OLED |
| Case | Pichau Mancer CV500 White Edition |
| Power Supply | EVGA 1300 G2 1.3kW 80+ Gold |
| Mouse | Microsoft Classic Intellimouse |
| Keyboard | Generic PS/2 |
| Software | Windows 11 IoT Enterprise LTSC 24H2 |
| Benchmark Scores | I pulled a Qiqi~ |
That test only looked at average FPS; the TechSpot article indicated sometimes substantial differences in 1% lows, but I believe the conclusions were the same. Both articles used the 12900K too, so maybe things would be different if tested with newer AMD DDR5 CPUs?
AMD's chips are no good for testing the impact of memory performance, IMO. They only take low-speed DDR5, and the X3D versions largely exist to mitigate that.
The test needs to be done on Raptor Lake with 1DPC boards that can stretch DDR5 clocks. Raptor's about the best chance we've got, even though its IMC lags well behind the memory technology itself.
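As an aside on the average-FPS vs. 1% low distinction above, here's a minimal sketch of how the two metrics are typically derived from a per-frame render-time capture. The sample values are made up for illustration, and the "average of the slowest 1% of frames" convention is just one of several ways review sites compute 1% lows (some report the 99th-percentile frame time instead):

```python
# Sample per-frame render times in milliseconds (illustrative values only).
frame_times_ms = [6.9, 7.1, 7.0, 6.8, 25.0, 7.2, 7.0, 6.9, 7.1, 30.0]

# Average FPS: total frames divided by total capture time.
avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)

# 1% low FPS (one common convention): average FPS over the slowest 1%
# of frames, using at least one frame for short captures.
worst_first = sorted(frame_times_ms, reverse=True)
slowest = worst_first[:max(1, len(worst_first) // 100)]
low_1pct_fps = 1000 * len(slowest) / sum(slowest)

print(f"average FPS: {avg_fps:.1f}, 1% low FPS: {low_1pct_fps:.1f}")
```

The point is that a run can post a healthy average while the 1% lows tank on a handful of long frames, which is exactly where memory speed tends to show up.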