System Name | Shirakami |
---|---|
Processor | 7800X3D / 2200 IF & -30 CO |
Motherboard | Gigabyte B650i AORUS Ultra |
Cooling | Corsair iCUE H100i ELITE CAPELLIX w/ ultra slim 2x120mm fans |
Memory | 2x24GB Hynix M-Die @ 6000 - 26-35-28-30 - 65k tREFI 1.55 VDD / 1.10 VDDQ / 1.10 SoC |
Video Card(s) | 6900 XT Reference / -120 mV @ 2.4 GHz |
Storage | 1 & 2 TB NVMe - 1 TB SATA SSD |
Display(s) | LG 34GN850 (3440x1440) 160 Hz overclock |
Case | Lian Li Q58 |
VR HMD | Reverb G2 V2 |
Software | Fedora Linux 40 (daily) |
Earlier this week, I made the jump from my 13700k to a 7800X3D. Oddly enough, I didn't do this because of degradation or the current Intel debacle. Rather, I switched because of power draw and thread scheduling. My system was completely stable for the year I owned it. My specifications were as follows:
13700k - ASUS Z690 ITX - DDR5 7200 2x24GB - 6900 XT
My new specifications are:
7800X3D - Gigabyte B650i AORUS ULTRA - DDR5 7200 2x24GB @ 6000 - 6900 XT
My Intel system was fairly well tuned (~10% more performance than stock, mostly from the memory and ring), and the 7800X3D is... as tuned as you can make it (~7% more performance).
To begin with, I loved my 13700k. I found it to be a wonderful CPU and a nice upgrade from a 10900k. Unfortunately, its power draw in modern games was immense - Cyberpunk 2077 averaged around 140 watts, with occasional spikes upwards of 165 (HWiNFO64 CPU package power). I tried mitigating this by disabling Hyperthreading, but the draw only dropped to about 125 with spikes up to 150. Disabling the efficiency cores instead (HT still enabled) lowered it to around 120 with spikes up to 140. Disabling both HT and the e-cores gave the most substantial reduction, all the way down to 115 watts with no real spikes to speak of. The performance difference between all three setups was less than 10%, which is one of the main factors that pushed me towards changing platforms. If it's pulling upwards of 140 watts in current games, I can't even imagine what the CPU will draw five years from now... and electricity isn't getting any cheaper.
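As an aside for anyone on Linux who wants to sanity-check package-power figures like these without HWiNFO64: the kernel exposes the same counters through the RAPL powercap interface. A minimal sketch, assuming the intel_rapl driver is loaded, that intel-rapl:0 is the package domain (its "name" file will confirm), and that you're running as root, since newer kernels restrict the energy counter:

```python
# Approximate CPU package power by sampling the RAPL energy counter.
import time

ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package domain

def read_uj() -> int:
    with open(ENERGY) as f:
        return int(f.read().strip())

before = read_uj()
time.sleep(1.0)
after = read_uj()
# energy_uj counts microjoules, so the delta over one second is watts.
# Counter wraparound is ignored in this quick sketch.
print(f"Package power: {(after - before) / 1e6:.1f} W")
```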
Disabling HT feels bad, even if it's not a huge driving factor for gaming performance. Disabling the e-cores feels even worse, considering those are tangible cores, unlike HT, which just lets a second thread share each core's execution resources every clock cycle. There's also the argument that disabling either of those features will hurt gaming performance in the future - leaving you with only one option: underclocking. You can set a flat 5 GHz on the p-cores and 3.9 GHz on the e-cores at a fairly reasonable voltage, and power consumption gets cut by roughly 25% in gaming scenarios... but you also completely lose out on Raptor Lake being a monstrosity when it comes to clock speed. It feels just as bad as the above concessions. There is also the issue of thread scheduling: I do not run Windows 11; I daily Linux, which didn't get proper support for Raptor Lake until kernel 6.6 in October 2023. I use the term "support" loosely, because in gaming scenarios the e-cores still get loaded with work. Windows 10 was and is my go-to whenever I need to dual boot; however, its thread scheduler isn't anywhere near as good as Windows 11's. It has a similar issue to Linux, wherein the efficiency cores get loaded with work, causing noticeable microstutters in games.
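To make the Linux side concrete, the workaround I kept reaching for amounts to pinning the game onto the P-cores so the scheduler can't spill its threads onto the e-cores. A minimal sketch - the CPU numbering is an assumption; on a 13700k with HT, the sixteen P-core threads usually enumerate as logical CPUs 0-15 and the e-cores as 16-23, but verify with lscpu first:

```python
# Pin an already-running process (e.g. a game) to the P-core threads.
import os
import sys

P_CORE_THREADS = set(range(16))  # assumed P-core logical CPUs; check lscpu

def pin_to_pcores(pid: int) -> None:
    # sched_setaffinity targets the given thread ID; threads spawned
    # afterwards inherit the mask, but threads that already exist keep
    # theirs, so this works best right after the game launches.
    os.sched_setaffinity(pid, P_CORE_THREADS)

if __name__ == "__main__":
    pin_to_pcores(int(sys.argv[1]))  # usage: python pin.py <game pid>
```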
Add to all of the issues above the flaky memory controller found on 13th and 14th generation - stability drifting in and out when aiming for speeds beyond 7200 MT/s - the IHS bending, which called for a third-party loading mechanism (which I purchased), and the current degradation saga with the SVID requesting upwards of 1.65 V...
Needless to say, after over a year of ownership, I was fairly sick of it all.
The power was a problem I needed to solve. I undervolted + overclocked the system and set a strict power limit of 175 W so that heavy AVX2 workloads would clock down instead of crashing at the undervolt. It only needed to be stable for games, not heavy workloads, so the voltage I aimed for was simply whatever could run Cyberpunk 2077 while CPU-bound. (A sketch of setting the same limit in software follows after this list.)
The scheduling was an issue I needed to solve, so I purchased Process Lasso for when I use W10 - which I highly recommend; it's a wonderful application. (A rough DIY sketch of the affinity part also follows below.)
The bending was an issue I needed to solve, so I purchased the Thermaltake loading mechanism.
The lack of memory stability at XMP speeds (7200) was an issue I needed to solve, which took weeks of tuning PLL voltages, VDD & VDDQ, IMC voltage, SA voltage, and so on.
The heat itself was an issue I needed to solve; I was highly uncomfortable with my CPU pulling 140 watts in games only to thermal throttle during load screens once the radiator, IHS, and liquid on my 240 mm AIO were all heat-saturated.
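On the power limit: I set mine in the BIOS, but for illustration, the same long-term (PL1) cap can be applied from Linux through the powercap interface from earlier. Purely a sketch - constraint_0 is normally the long-term limit (the adjacent constraint_0_name file will confirm), values are in microwatts, root is required, and it resets on reboot:

```python
# Apply a 175 W long-term (PL1) package power limit via powercap.
LIMIT = "/sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw"

with open(LIMIT, "w") as f:
    f.write(str(175 * 1_000_000))  # value is in microwatts
```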
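And for anyone who'd rather script the affinity part of what Process Lasso does for me, a rough DIY sketch using psutil (third-party, pip install psutil). Same caveat as the Linux sketch above about logical CPUs 0-15 being the P-core threads, and the executable name here is just my assumption for Cyberpunk:

```python
# Pin a running game to the P-core threads on Windows with psutil.
import psutil

TARGET = "Cyberpunk2077.exe"  # assumed executable name

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == TARGET:
        proc.cpu_affinity(list(range(16)))  # P-core threads only
        print(f"Pinned PID {proc.pid} to the P-cores")
```

Process Lasso obviously does far more than this (persistent rules, priorities, its ProBalance logic), which is why I'd still just buy it.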
The only saving graces of my build were the ASUS Z690 Gaming WiFi ITX motherboard and the single-threaded performance. I absolutely adored that board: it was overengineered, and the VRMs never rose past 65 °C in stress tests. And Raptor Lake has absolutely insane ST performance - my 7800X3D achieves 655 points in CPU-Z single thread, while the 13700k was upwards of 900.
Admittedly, as of writing this, I haven't been on AM5 long enough to have experienced SoC voltages murdering AM5 CPUs outright. I also wasn't around when memory training took upwards of 5 minutes, and I wasn't here when core parking was a problem either. However, in the here and now, I'm absolutely in love with the ease of use of my 7800X3D. It performs about the same as - and sometimes better than - my 13700k did in games... while drawing 40 watts. If I were recommending a system to a friend today, I couldn't in good conscience recommend LGA1700; the experience I had with it was quite negative, even if the CPU itself was blisteringly fast.
I hope that whatever Intel releases for their next generation is more refined. Competition is a very, very good thing - without it, we wouldn't even have an alternative like the 7800X3D to choose from. I love both teams. I even purchased a 1700X the day it came out and dealt with the fiasco of very buggy X370 BIOSes. I loved my 10900k as well; it was a wonderful CPU, but it was rendered largely obsolete when 12th gen released, considering the e-cores were almost on par with it in ST performance. The 3570k I had before that was rock-solid; I never had any issues with it whatsoever. But this entire LGA1700 platform has just been a mess for me personally; in retrospect, I wish I had purchased a 12900k in place of the 13700k. At least on Alder Lake, disabling the efficiency cores granted a boost to cache (ring) speed, so it wasn't a complete loss.
But as things stand, I'll be holding on to this chip for a while. I should note the 7800X3D did not exist when I purchased my 13700k.