
14900k - Tuned for efficiency - Gaming power draw

Joined
Sep 17, 2014
Messages
22,828 (6.06/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
More cores, at least, can be 100% utilized. Getting the 3D cache 100% utilized is, I think, a challenge AMD needs to face going forward, otherwise the manufacturing complexity and cost of it may not be worth it. This is why I was wondering whether the trend of new games benefiting from X3D cache is going up or down. If it's going down, then X3D CPUs become obsolete going into the future and lose significant value. AMD needs to ensure X3D retains value.
Isn't it more a case of whatever you can feed the CPU from software?

The vast majority of applications haven't shown maximum core utilization either, but we still get more cores.
Is either one really economically viable? What is viable? I think future CPUs will come with new architectures and changes that will again be both a response to, and a look towards, how applications can use them. But I don't see how cache is 'more dead' than another bunch of cores in that sense; it's probably less dead, because cache will always accelerate certain workloads, so if you don't have it, you won't have the acceleration.
 
Joined
Mar 21, 2016
Messages
2,508 (0.78/day)
"Can be utilised" is not the same as "will be utilised". Paying more for more cache isn't a bigger gamble than paying for more cores, except that the bigger cache can also be used by today's games:
View attachment 326629

It's really a combination of both: today's games can use more cache or more cores, and it varies by game. The bigger looming question is how games 2-5 years from now will look in terms of how heavily they utilize, or depend on, core ST/MT performance versus cache capacity. If it were me trying to project the future, I'd suspect that higher ST/MT performance with broader memory/IMC capability will age more gracefully in the long term, while the lower latency delivers more readily and immediately in more of today's game software on current high-end GPU hardware.

It's not even an enormous performance gap in percentage terms, either. If you want a lot more MT for cases where you can leverage it, forgoing a small performance difference you might not even notice at the resolutions you play at seems legitimate enough to me. Everyone's use case is different, though; coming from an i3-6100, I was often in CPU-bottleneck scenarios where the MT performance especially held things back and choked performance severely, so I made absolutely sure that wasn't the case this time with my CPU.

The 14700K can handle basically anything I throw at it and barely flinches, even when I throw multiple things at it at once that used to choke my old CPU. It's a much more fluid multi-tasking experience. I'm sure the 7800X3D or 7900X3D would've been pretty great as well, but I like having all the MT available in case I ever want or need it for something, rather than being shorthanded and wishing I had more MT headroom. It's a great all-around chip in the right hands, irrespective of some small % gaming difference at resolution targets I'd hate to game at in 2023 on a 1440p 10-bit display.
 
Joined
Jan 14, 2019
Messages
13,198 (6.03/day)
Location
Midlands, UK
Processor Various Intel and AMD CPUs
Motherboard Micro-ATX and mini-ITX
Cooling Yes
Memory Anything from 4 to 48 GB
Video Card(s) Various Nvidia and AMD GPUs
Storage A lot
Display(s) Monitors and TVs
Case The smaller the better
Audio Device(s) Speakers and headphones
Power Supply 300 to 750 W, bronze to gold
Mouse Wireless
Keyboard Wired
VR HMD Not yet
Software Linux gaming master race
The 14700K can handle basically anything I throw at it and barely flinches, even when I throw multiple things at it at once that used to choke my old CPU. It's a much more fluid multi-tasking experience. I'm sure the 7800X3D or 7900X3D would've been pretty great as well, but I like having all the MT available in case I ever want or need it for something, rather than being shorthanded and wishing I had more MT headroom. It's a great all-around chip in the right hands, irrespective of some small % gaming difference at resolution targets I'd hate to game at in 2023 on a 1440p 10-bit display.
Same with me and the 7800X3D. It loses a couple % on MT, but gains a couple % in gaming, which is my system's main and only focus, so I don't need anything else. Whether you opt for more cache, or more cores, is up to your personal preference, and you're not wrong with either, in my opinion. :)
 
Joined
Nov 16, 2023
Messages
1,575 (3.75/day)
Location
Nowhere
System Name I don't name my rig
Processor 14700K
Motherboard Asus TUF Z790
Cooling Air/water/DryIce
Memory DDR5 G.Skill Z5 RGB 6000mhz C36
Video Card(s) RTX 4070 Super
Storage 980 Pro
Display(s) Some LED 1080P TV
Case Open bench
Audio Device(s) Some Old Sherwood stereo and old cabinet speakers
Power Supply Corsair 1050w HX series
Mouse Razor Mamba Tournament Edition
Keyboard Logitech G910
VR HMD Quest 2
Software Windows
Benchmark Scores Max Freq 13700K 6.7ghz DryIce Max Freq 14700K 7.0ghz DryIce Max all time Freq FX-8300 7685mhz LN2
They were originally thinking about the thick heatspreader being a vapour chamber.

I'm not fond of the IHS design. Thicker, I'm OK with; it raises the thermal gradient a little, but not badly. The notches, which reduce surface area, are my concern, because that surface is the only area through which you can remove the heat.

Anyhow, on the Intel topic, I've just completed a 6 GHz run of 3DMark05, which is mostly CPU-intensive and only requires a dual core. I am easily cooling this chip on air at this frequency with 3 cores, probably around 150 W.

I have the parts for a water loop now. I had to toss the old pump, and the tubing was a little slimy. I should be able to test Cyberpunk at closer to 300 W loads. I'll use the all-copper, nickel-plated block to handle the thermals. The only thing I'm missing is fans; gotta get some new ones. I'll also use a thick 120.3 (triple 120 mm) radiator.

I figure I can move up to, but not exceed, 1200-1300 BTU/hr, which will easily cover the CPU. In the past that covered 6.1 GHz with 16 threads.
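For reference, the unit conversion behind that estimate (1 W is about 3.412 BTU/hr); a minimal sketch, taking the 1200-1300 BTU/hr figure above as the radiator's rough dissipation budget:

Code:
# Rough BTU/hr <-> watt conversion for sizing the loop (1 W ~= 3.412 BTU/hr).
BTU_PER_HR_PER_WATT = 3.412

def btu_per_hr_to_watts(btu_per_hr: float) -> float:
    return btu_per_hr / BTU_PER_HR_PER_WATT

# 1200-1300 BTU/hr works out to roughly 350-380 W of heat,
# comfortably above a ~300 W CPU load.
for budget in (1200, 1300):
    print(f"{budget} BTU/hr ~= {btu_per_hr_to_watts(budget):.0f} W")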
 
Joined
Jun 6, 2022
Messages
622 (0.66/day)
Why would I buy a CPU that's 1-2%, or any %, weaker for gaming?
The answer is simple: you don't have an RTX 4090! Only with this video card do you see these small differences. When that small difference disappears (because you don't have the most powerful video card of the moment), the 7800X3D turns into a performance/price disaster.
Interesting how AMD supporters gave up on the multitasking argument once it disappeared with the advent of E-cores.

On:
Another game, just bought.
GPU: 100%
CPU: 20%
CPU Power: <40W
A little earlier I had to archive ~50GB of data, and I played Cyberpunk the whole time; I just wasn't there to admire the progress of the compression. Ctrl+Shift+F2 to activate the full computing power (double that of the 7800X3D), and I didn't see any stuttering in the game.
That's the beauty of these processors, which are blamed only because the reviews ran them "untamed": the 14700K(F) can deliver SC/MT performance unmatched by a 7800X3D at the same power consumption, but I can also double the performance when I need it. The KF variant is even cheaper than the 7800X3D.
Thanks to these Intel options, AMD knocked $50 off the 7800X3D. I remind you that it launched at $450 and sold at that price for several months. Now it carries a $400 recommendation, and I still consider it too expensive for what it offers.

Maybe it was better to stay with the X3Ds in your garden.

Robo OSD.png
Robo power.jpg
 
Joined
Jul 30, 2019
Messages
3,374 (1.70/day)
System Name Still not a thread ripper but pretty good.
Processor Ryzen 9 7950x, Thermal Grizzly AM5 Offset Mounting Kit, Thermal Grizzly Extreme Paste
Motherboard ASRock B650 LiveMixer (BIOS/UEFI version P3.08, AGESA 1.2.0.2)
Cooling EK-Quantum Velocity, EK-Quantum Reflection PC-O11, D5 PWM, EK-CoolStream PE 360, XSPC TX360
Memory Micron DDR5-5600 ECC Unbuffered Memory (2 sticks, 64GB, MTC20C2085S1EC56BD1) + JONSBO NF-1
Video Card(s) XFX Radeon RX 5700 & EK-Quantum Vector Radeon RX 5700 +XT & Backplate
Storage Samsung 4TB 980 PRO, 2 x Optane 905p 1.5TB (striped), AMD Radeon RAMDisk
Display(s) 2 x 4K LG 27UL600-W (and HUANUO Dual Monitor Mount)
Case Lian Li PC-O11 Dynamic Black (original model)
Audio Device(s) Corsair Commander Pro for Fans, RGB, & Temp Sensors (x4)
Power Supply Corsair RM750x
Mouse Logitech M575
Keyboard Corsair Strafe RGB MK.2
Software Windows 10 Professional (64bit)
Benchmark Scores RIP Ryzen 9 5950x, ASRock X570 Taichi (v1.06), 128GB Micron DDR4-3200 ECC UDIMM (18ASF4G72AZ-3G2F1)
"Can be utilised" is not the same as "will be utilised". Paying more for more cache isn't a bigger gamble than paying for more cores, except that the bigger cache can also be used by today's games:
View attachment 326629

Isn't it more a case of whatever you can feed the CPU from software?

The vast majority of applications haven't shown maximum core utilization either, but we still get more cores.
Is either one really economically viable? What is viable? I think future CPUs will come with new architectures and changes that will again be both a response to, and a look towards, how applications can use them. But I don't see how cache is 'more dead' than another bunch of cores in that sense; it's probably less dead, because cache will always accelerate certain workloads, so if you don't have it, you won't have the acceleration.
Sorry, I wasn't very clear when I was talking about utilization. I meant it in a broad, industry-wide sense, not necessarily on a per-machine basis of someone seeing 100% usage in Task Manager. You can use cores for just about anything. However, if the X3D cache is (for the sake of argument) only 50% effective across all games and 10% effective across all applications, but on average costs 30% more to produce than a non-X3D chip, then there will conceivably be a problem down the road if your chip infrastructure moves to building only X3D-cache CPUs. The value and demand for those chips won't be there if effective utilization trends downward while costs stay higher than your competition's. At least for the moment, we can see that in the consumer space AMD hasn't banked on making all Ryzen chips X3D-based. Maybe they will, maybe they won't; only time will tell. Also, you don't see Intel trading off E-cores to provide gobs of cache, at least not yet.
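As a purely illustrative back-of-envelope, using only the for-the-sake-of-argument numbers above (50% of games benefit, 10% of other applications benefit, ~30% higher production cost) plus an assumed workload mix:

Code:
# Illustrative only: the rates below are the hypothetical figures from the post,
# and the workload mixes are assumptions; nothing here is a real measurement.
games_benefit_rate = 0.50   # share of games that benefit from the extra cache
apps_benefit_rate = 0.10    # share of other applications that benefit
cost_premium = 0.30         # extra production cost vs a non-X3D part

for gaming_share in (1.0, 0.5, 0.2):    # fraction of the workload that is gaming
    benefit = gaming_share * games_benefit_rate + (1 - gaming_share) * apps_benefit_rate
    print(f"gaming share {gaming_share:.0%}: benefit rate {benefit:.0%} vs +{cost_premium:.0%} cost")

# The further the benefit rate falls below the cost premium, the harder an
# all-X3D lineup is to justify economically.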

Also, sorry to the OP, I've gone off topic, so I will retire this sub-conversation.
 
Joined
Jan 14, 2019
Messages
13,198 (6.03/day)
Location
Midlands, UK
Processor Various Intel and AMD CPUs
Motherboard Micro-ATX and mini-ITX
Cooling Yes
Memory Anything from 4 to 48 GB
Video Card(s) Various Nvidia and AMD GPUs
Storage A lot
Display(s) Monitors and TVs
Case The smaller the better
Audio Device(s) Speakers and headphones
Power Supply 300 to 750 W, bronze to gold
Mouse Wireless
Keyboard Wired
VR HMD Not yet
Software Linux gaming master race
The answer is simple: you don't have an RTX 4090! Only with this video card do you see these small differences. When that small difference disappears (because you don't have the most powerful video card of the moment), the 7800X3D turns into a performance/price disaster.
Interesting how AMD supporters gave up on the multitasking argument once it disappeared with the advent of E-cores.
I don't have a 4090, so I have to buy a CPU that's weaker for gaming, even if it's more expensive, just because it has more cores which I'm not gonna use. What kind of moronic argument is this? :kookoo:

It amazes me that after all these posts, you still fail to read what I'm saying. Let's put it in simpler terms:
  1. I am not an AMD supporter. About 80% of my systems throughout my life have been Intel+Nvidia, and I love(d) them just as much as I love my AMD ones.
  2. I gave up on the multitasking argument after spending nearly a grand on a 5950X that didn't improve my gaming experience at all.
  3. I do not need or want mixed e/p core chips because they need Windows 11 to give their best, which I'm unwilling to upgrade to. I also do not need or want a dual-CCD AMD chip, because the extra cores are completely unnecessary for my use case, and the inter-CCD latency can even hurt performance in some odd cases.
  4. I do not recommend the 7800X3D as a mid-tier CPU option. It is not great at a price-to-performance level, and I never said it was. I bought mine only because I was curious (and I still am curious, considering that I'm still not using its full potential with a 7800 XT), which I do not recommend to others with more sensible budgets. I'm a hobby PC builder; I buy parts for fun, not because they make monetary sense. What I do recommend, however, is a 7700 non-X, a 7600 or an i5-13500. So basically, is a 7800X3D necessary for gaming? No. Is an 11700 with its 8 cores and 4.9 GHz max boost necessary for watching films on an HTPC? Absolutely not, but I do have one because of YOLO. Got any issues with that?
  5. You're happy with the 14700K. That's great! But why does that mean that others can't be happy with what they've got? What does my happiness take away from yours? Is this some sort of personal agenda against me or against AMD buyers in general? Why do you even feel like you have to prove something?

Sorry, I wasn't very clear when I was talking about utilization. I meant it in a broad, industry-wide sense, not necessarily on a per-machine basis of someone seeing 100% usage in Task Manager. You can use cores for just about anything. However, if the X3D cache is (for the sake of argument) only 50% effective across all games and 10% effective across all applications, but on average costs 30% more to produce than a non-X3D chip, then there will conceivably be a problem down the road if your chip infrastructure moves to building only X3D-cache CPUs. The value and demand for those chips won't be there if effective utilization trends downward while costs stay higher than your competition's. At least for the moment, we can see that in the consumer space AMD hasn't banked on making all Ryzen chips X3D-based. Maybe they will, maybe they won't; only time will tell. Also, you don't see Intel trading off E-cores to provide gobs of cache, at least not yet.

Also, sorry to the OP, I've gone off topic, so I will retire this sub-conversation.
I don't think we'll see a product lineup with X3D-only CPUs, ever. Some applications are not cache-sensitive, proven by AMD's investment in the smaller Zen 4c cores.
 
Joined
Mar 13, 2021
Messages
483 (0.35/day)
Processor AMD 7600x
Motherboard Asrock x670e Steel Legend
Cooling Silver Arrow Extreme IBe Rev B with 2x 120 Gentle Typhoons
Memory 4x16Gb Patriot Viper Non RGB @ 6000 30-36-36-36-40
Video Card(s) XFX 6950XT MERC 319
Storage 2x Crucial P5 Plus 1Tb NVME
Display(s) 3x Dell Ultrasharp U2414h
Case Coolermaster Stacker 832
Power Supply Thermaltake Toughpower PF3 850 watt
Mouse Logitech G502 (OG)
Keyboard Logitech G512
I don't think we'll see a product lineup with X3D-only CPUs, ever. Some applications are not cache-sensitive, proven by AMD's investment in the smaller Zen 4c cores.
Oh, they are there, and there is a need, but it's mostly enterprise systems that benefit from that much cache.
It's the EPYC chips ending in X, like the 9684X with over 1 GB of L3 cache. Things like databases, ML, etc. benefit from a larger cache, as the more data you can keep nearer the CPU, the better.

The reason for Zen 4c is things like AWS, where they can increase VM density per U of rack space without greatly impacting performance.

But as you mentioned, the inter-CCD latency currently kills any real benefit of it for things like games.
 

freeagent

Moderator
Staff member
Joined
Sep 16, 2018
Messages
9,188 (3.98/day)
Location
Winnipeg, Canada
Processor AMD R7 5800X3D
Motherboard Asus Crosshair VIII Dark Hero
Cooling Thermalright Frozen Edge 360, 3x TL-B12 V2, 2x TL-B12 V1
Memory 2x8 G.Skill Trident Z Royal 3200C14, 2x8GB G.Skill Trident Z Black and White 3200 C14
Video Card(s) Zotac 4070 Ti Trinity OC
Storage WD SN850 1TB, SN850X 2TB, SN770 1TB
Display(s) LG 50UP7100
Case Fractal Torrent Compact
Audio Device(s) JBL Bar 700
Power Supply Seasonic Vertex GX-1000, Monster HDP1800
Mouse Logitech G502 Hero
Keyboard Logitech G213
VR HMD Oculus 3
Software Yes
Benchmark Scores Yes
My next hat trick.
Try and run 13700K with passive cooling. XD
I can do 180w passive with my 5900X and one of my coolers using Linpack Xtreme :D
 
Joined
Jun 6, 2022
Messages
622 (0.66/day)
I don't have a 4090, so I have to buy a CPU that's weaker for gaming, even if it's more expensive, just because it has more cores which I'm not gonna use. What kind of moronic argument is this? :kookoo:
I just said that, in your case, the X3D does not bring you any benefit in games. You can find processors at the same price, or even cheaper, that achieve the same results in gaming with a 4080, but with superior performance in many other applications. In the case of the 14700K, at the same price, the difference is colossal.

According to the TPU review, the difference between the 4080 and 4090 is:
5% at 1080p
11% at 1440p
25% at 4K
The 25% difference at 4K is demonstrated in the video below (as if they heard me and offered a helping hand).
So, under these conditions, with 25% fewer frames per second, that humble advantage the 7800X3D obtains in games (1%) over cheaper processors definitely disappears. And if that small advantage disappears, I repeat, only the excessively high price of the 7800X3D remains; it is surpassed by the cheaper 7700X and effectively destroyed by the 13/14700K(F), processors sold at the same price or even lower.

Keep in mind that reviews are performed on an "empty" operating system, without any applications running in the background. This puts Intel at a disadvantage, since its E-cores are what take care of the background. So, in real life, it is possible that the 7800X3D would lose to the processors with E-cores even if a 4090 is used.
For other applications, we have enough data to state that the 14700K, at the same price, destroys the 7800X3D by ~20% in single-threaded and by almost 100% in CPU-intensive or heavily multithreaded applications.

Only energy efficiency remains in the discussion, but... can you get 11.5K in CPU-Z and 25K+ in Cinebench R23 without exceeding 80 W? Can you actually get these scores, even using LN2? The 14700K(F) succeeds. Without LN2, of course.

I notice that the illusion of that extra 1% (obtained only in games, in ideal conditions and with the most powerful video card) gives you plenty of license to push the ridiculous.
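The logic being applied here, that the delivered frame rate is capped by whichever of the CPU or the GPU is slower, can be sketched as follows; the numbers are placeholders for illustration, not benchmark results:

Code:
# Toy bottleneck model: delivered fps is limited by the slower of the two caps.
def delivered_fps(cpu_cap: float, gpu_cap: float) -> float:
    return min(cpu_cap, gpu_cap)

# Hypothetical 4K case: CPU A is 1% faster than CPU B, but a slower GPU caps both.
cpu_a, cpu_b = 202.0, 200.0   # CPU-limited fps (placeholder values)
gpu_cap = 120.0               # GPU-limited fps with the slower card (placeholder)

print(delivered_fps(cpu_a, gpu_cap))  # 120.0
print(delivered_fps(cpu_b, gpu_cap))  # 120.0 -> the 1% CPU advantage is invisible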

 
Low quality post by ElMoIsEviL
Joined
Feb 13, 2013
Messages
5 (0.00/day)
I just said that, in your case, the X3D does not bring you any benefit in games. You can find processors at the same price, or even cheaper, that achieve the same results in gaming with a 4080, but with superior performance in many other applications. In the case of the 14700K, at the same price, the difference is colossal.

According to the TPU review, the difference between the 4080 and 4090 is:
5% at 1080p
11% at 1440p
25% at 4K
The 25% difference at 4K is demonstrated in the video below (as if they heard me and offered a helping hand).
So, under these conditions, with 25% fewer frames per second, that humble advantage the 7800X3D obtains in games (1%) over cheaper processors definitely disappears. And if that small advantage disappears, I repeat, only the excessively high price of the 7800X3D remains; it is surpassed by the cheaper 7700X and effectively destroyed by the 13/14700K(F), processors sold at the same price or even lower.

Keep in mind that reviews are performed on an "empty" operating system, without any applications running in the background. This puts Intel at a disadvantage, since its E-cores are what take care of the background. So, in real life, it is possible that the 7800X3D would lose to the processors with E-cores even if a 4090 is used.
For other applications, we have enough data to state that the 14700K, at the same price, destroys the 7800X3D by ~20% in single-threaded and by almost 100% in CPU-intensive or heavily multithreaded applications.

Only energy efficiency remains in the discussion, but... can you get 11.5K in CPU-Z and 25K+ in Cinebench R23 without exceeding 80 W? Can you actually get these scores, even using LN2? The 14700K(F) succeeds. Without LN2, of course.

I notice that the illusion of that extra 1% (obtained only in games, in ideal conditions and with the most powerful video card) gives you plenty of license to push the ridiculous.


Multi-threading, the very thing the guy who opened this thread disabled, is what takes care of background tasks. The 7800X3D does not magically get slower because of background processes; it has no issues with those.

It's the better CPU. I know it's hard for you to admit this, but it is. Just a better design overall, positioned to tackle games quite well.

As for applications, just get a 7950X3D and turn the extra cores on or off depending on whether you're gaming or working in applications. Still better than anything Intel has.

Intel is playing second fiddle right now; that might change if they get their manufacturing working correctly, as well as their modular designs.
 

Ruru

S.T.A.R.S.
Joined
Dec 16, 2012
Messages
13,129 (2.98/day)
Location
Jyväskylä, Finland
System Name 4K-gaming / console
Processor AMD Ryzen 7 5800X / Intel Core i7-8600K
Motherboard ROG Crosshair VII Hero / ROG Strix Z370-F
Cooling Alphacool Eisbaer 360 / Alphacool Eisbaer 240
Memory 32GB DDR4-3466 / 16GB DDR4-3000
Video Card(s) Asus RTX 3080 TUF OC / Powercolor RX 6700 XT
Storage 3.5TB of SSDs / several small SSDs
Display(s) Acer 27" 4K120 IPS + Lenovo 32" 4K60 IPS
Case Corsair 4000D AF White / DeepCool CC560 WH
Audio Device(s) Sony WH-CN720N
Power Supply EVGA G2 750W / Fractal ION Gold 550W
Mouse Logitech MX518 / Logitech G400s
Keyboard Roccat Vulcan 121 AIMO / NOS C450 Mini Pro
VR HMD Oculus Rift CV1
Software Windows 11 Pro / Windows 11 Pro
Benchmark Scores They run Crysis
Just wondering why... the whole idea of this Emergency Edition CPU is to be the fastest in some scenarios, no matter its power consumption, and it can reach 6 GHz for the blink of an eye.

When you fine-tune it, why not just get a much cooler i7 that isn't factory-overclocked to the roof, and do the same to it?

Let's discuss the topic and not go off on tangents
Sorry, saw your post after I posted mine.
 
Low quality post by Gica
Joined
Jun 6, 2022
Messages
622 (0.66/day)
It's the better CPU. I know it's hard for you to admit this, but it is. Just a better design overall, positioned to tackle games quite well.
Good for what? Does it cook, clean and take the children to school?
If you can prove that it helps with a GT 1030, hats off!
For now, it excels only in catastrophic performance/price, because you pay more for less performance and a hypothetical 1% (wow!!!) in gaming, and only with an RTX 4090.

Let's be serious!

You cling, desperately, to extremes. We are not discussing AMD's or Intel's business here, but how much it helps me, the owner of a GTX 1050, RTX 2060, RTX 3060, RX 5700, RX 6700 XT... I think you get the idea: anything below a 4090 or 7900 XTX. Can you prove that there is any difference between the 7800X3D and much cheaper processors using the video cards that over 90% of gamers actually use?
The same blah blah blah I saw from the AMD camp at the launch of the 5800X3D at $450, I think in June 2022. We are in 2023 and we see the 5800X3D at the level of the 7600X, and only with an RTX 4090. With a weaker video card... it's not hard to imagine that there was no difference even in 2022 between the 5800X3D and the much cheaper 5700X.
Prove that it brings at least 20% extra performance in games and we'll have a topic of discussion. At that percentage, it would be worth the price.
 
Low quality post by AusWolf
Joined
Jan 14, 2019
Messages
13,198 (6.03/day)
Location
Midlands, UK
Processor Various Intel and AMD CPUs
Motherboard Micro-ATX and mini-ITX
Cooling Yes
Memory Anything from 4 to 48 GB
Video Card(s) Various Nvidia and AMD GPUs
Storage A lot
Display(s) Monitors and TVs
Case The smaller the better
Audio Device(s) Speakers and headphones
Power Supply 300 to 750 W, bronze to gold
Mouse Wireless
Keyboard Wired
VR HMD Not yet
Software Linux gaming master race
I just said that, in your case, the X3D does not bring you any benefit in games. You can find processors at the same price, or even cheaper, that achieve the same results in gaming with a 4080, but with superior performance in many other applications. In the case of the 14700K, at the same price, the difference is colossal.

According to the TPU review, the difference between the 4080 and 4090 is:
5% at 1080p
11% at 1440p
25% at 4K
The 25% difference at 4K is demonstrated in the video below (as if they heard me and offered a helping hand).
So, under these conditions, with 25% fewer frames per second, that humble advantage the 7800X3D obtains in games (1%) over cheaper processors definitely disappears. And if that small advantage disappears, I repeat, only the excessively high price of the 7800X3D remains; it is surpassed by the cheaper 7700X and effectively destroyed by the 13/14700K(F), processors sold at the same price or even lower.
You obviously didn't read my whole post:
I do not recommend the 7800X3D as a mid-tier CPU option. It is not great at a price-to-performance level, and I never said it was. I bought mine only because I was curious (and I still am curious, considering that I'm still not using its full potential with a 7800 XT), which I do not recommend to others with more sensible budgets. I'm a hobby PC builder; I buy parts for fun, not because they make monetary sense. What I do recommend, however, is a 7700 non-X, a 7600 or an i5-13500. So basically, is a 7800X3D necessary for gaming? No. Is an 11700 with its 8 cores and 4.9 GHz max boost necessary for watching films on an HTPC? Absolutely not, but I do have one because of YOLO. Got any issues with that?
---
Keep in mind that reviews are performed on an "empty" operating system, without any applications running in the background. This puts Intel at a disadvantage, since its E-cores are what take care of the background. So, in real life, it is possible that the 7800X3D would lose to the processors with E-cores even if a 4090 is used.
Games don't saturate 8 cores to 100%, not even with a 4090, so this is pure bollocks.

This is my last post on this. I bought what I bought because YOLO. If you don't like it, that's your problem, not mine. Now, let's get back on topic, shall we? :)
 
Low quality post by Bagerklestyne
Joined
Oct 30, 2022
Messages
243 (0.30/day)
Location
Australia
System Name Blytzen
Processor Ryzen 7 7800X3D
Motherboard ASRock B650E Taichi Lite
Cooling Deepcool LS520 (240mm)
Memory G.Skill Trident Z5 Neo RGB 64 GB (2 x 32 GB) DDR5-6000 CL30
Video Card(s) Powercolor 6800XT Red Dragon (16 gig)
Storage 2TB Crucial P5 Plus SSD, 80TB spinning rust in a NAS
Display(s) MSI MPG321URX QD-OLED (32", 4k, 240hz), Samsung 32" 4k
Case Coolermaster HAF 500
Audio Device(s) Logitech G733 and a Z5500 running in a 2.1 config (I yeeted the mid and 2 satellites)
Power Supply Corsair HX850
Mouse Logitech G502X lightspeed
Keyboard Logitech G915 TKL tactile
Benchmark Scores Squats and calf raises
We are in 2023 and we see the 5800X3D at the level of the 7600X, and only with an RTX 4090. With a weaker video card... it's not hard to imagine that there was no difference even in 2022 between the 5800X3D and the much cheaper 5700X.
Prove that it brings at least 20% extra performance in games and we'll have a topic of discussion. At that percentage, it would be worth the price.

You're right, in some games it's marginal, but in Far Cry and Borderlands 3 it was well over 20% compared to the 5800X (which I think we can agree is going to be ahead of the 5700X on framerates), and that was with a 3080.

But when comparing to the 7600X and 'only getting the same performance', temper that with the fact that the 5800X3D is a drop-in solution for a platform that's 7 years old, versus a new platform with higher clocks, substantial IPC gains and DDR5 to go with it.
 

the54thvoid

Super Intoxicated Moderator
Staff member
Joined
Dec 14, 2009
Messages
13,156 (2.39/day)
Location
Glasgow - home of formal profanity
Processor Ryzen 7800X3D
Motherboard MSI MAG Mortar B650 (wifi)
Cooling be quiet! Dark Rock Pro 4
Memory 32GB Kingston Fury
Video Card(s) Gainward RTX4070ti
Storage Seagate FireCuda 530 M.2 1TB / Samsung 960 Pro M.2 512GB
Display(s) LG 32" 165Hz 1440p GSYNC
Case Asus Prime AP201
Audio Device(s) On Board
Power Supply be quiet! Pure Power M12 850w Gold (ATX3.0)
Software W10
Merry Christmas! - Two reply bans given.


Despite the warning from another mod, some have continued with OT about AMD chips. This is about tuning the 14900k for efficiency. Stick to it.
 
Joined
Mar 21, 2016
Messages
2,508 (0.78/day)
I think the biggest thing for me with the 14700K was adjusting the max ratio of the P-cores and E-cores; depending on application usage, you can get a bit more or a bit less performance from either one to target ST or MT usage.

I haven't tinkered with HT, but enabling all HT, disabling all HT, or enabling it only on select P-cores could all react a bit differently with different software, so that could be worth testing if your BIOS allows for it. I'm not sure whether HT can be adjusted from software or not, but probably. I just left it on with my old CPU; it was a dual core, so there wasn't a big difference either way, and it just seemed more consistent on with that hardware. It's very different with dozens of cores, where the need for HT tends to be less dramatic, although certain software will leverage basically as many threads as you throw at it.
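HT itself is a firmware/BIOS switch, but if the goal is steering a particular program onto chosen cores from software, one approximation is per-process CPU affinity. A minimal sketch with the third-party psutil package; the PID and logical-CPU indices are placeholders, and logical-CPU numbering differs between systems:

Code:
# Not an HT toggle: this only restricts one process to the listed logical CPUs.
# Requires the third-party psutil package (pip install psutil).
import psutil

def pin_process(pid: int, logical_cpus: list[int]) -> None:
    """Restrict the given process to the listed logical CPU indices."""
    psutil.Process(pid).cpu_affinity(logical_cpus)

# Example: pin a process to four logical CPUs, skipping their HT siblings
# (assumes siblings are the odd-numbered indices, which varies by system).
pin_process(12345, [0, 2, 4, 6])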

For the OS, on Windows 11 I like to use these command-line tweaks; they relate to memory usage and memory compression.

Code:
# Run from an elevated PowerShell prompt.
# Increase the NTFS paged-pool memory limit (2 = the larger cache setting).
fsutil behavior set memoryusage 2

# (Optional) show the current memory-management agent settings first.
Get-MMAgent

# Enable memory compression and page combining.
Enable-MMAgent -mc
Enable-MMAgent -pc

# Raise the maximum number of prefetch files for Operation Recorder API scenarios.
Set-MMAgent -MaxOperationAPIFiles 8192

This is a registry option for DWM that you may have some luck with. It felt like keeping the averaging period higher than the threshold percent was ideal. I also dropped the averaging period down to around 252 and keep the threshold percent somewhere between 7 and 63, in steps of +/-7, so it synchronizes evenly for better consistency. I think it relates to the drawing of GDI desktop elements and mouse/keyboard interactions under DWM, but I'm not 100% sure; I haven't really seen it discussed.

Code:
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Dwm\ExtendedComposition]
"ExclusiveModeFramerateAveragingPeriodMs"
"ExclusiveModeFramerateThresholdPercent"

Overall, I don't think there is really a great deal you can tune on these chips relative to stock. You can get a little more efficiency or performance, or both, and slightly emphasize ST or MT, but the chips are already pushed pretty close to their limits. You can forgo performance to tune for efficiency to a greater extent, though you'd want to weigh that against your needs and expectations for the chip.

What I would recommend is testing how it reacts to pushing the P-cores or E-cores a bit more aggressively by dropping the ratio of one to boost the ratio of the other by 1x. If you can do that and get it stable on your system, you can weight it a little more or a little less toward either one for any given application pretty easily, or keep it balanced as stock.
 
Joined
May 24, 2023
Messages
957 (1.61/day)
I just tested the power draw of my ECO 14900K (at 4800/4000 MHz, HT off) with an RTX 4070 using Cyberpunk's built-in benchmark; I report the power draw while the camera passes through the bar.

First, low settings at 1080p resolution; the game probably cannot utilise the CPU any further: 94 W

low setting CPU util.png

Then ultra settings (no RT), 1440p resolution, this time GPU limited: 57 W

1440p ultra no RT CPU util.png


I normally see even lower CPU power draw in this game (typically just below 50W), because I have RT enabled.

So in this scenario (CPU at a lower frequency, HT off and GPU-limited), the power draw of this CPU is VERY MODEST.
 
Joined
May 24, 2023
Messages
957 (1.61/day)
To the previous post, which I cannot edit anymore, I could add that I was pretty disappointed when I saw the real full gaming-load power draw (94 W), because after seeing 50 W with my typical GPU settings, I did not expect such a "high" number.

On the other hand, in that 94 W you have 8 P-cores 100% loaded and 16 E-cores loaded to what seems to be roughly 55% on average... The total computing output should be approximately equivalent to a 13 P-core CPU. And you have an AMD 8-core with enlarged cache beating it in gaming. I already complained about Intel's lazy approach and that they could have easily built a 10 P-core CPU with enlarged cache just from already-developed elements and building blocks. What is interesting is that there seem to be no rumours about Intel developing a special gaming CPU in the future either. Is that not weird?
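One way to land on a number in that ballpark, assuming purely as a rule of thumb (not a measured figure) that one E-core delivers roughly a third of a P-core's throughput:

Code:
# Back-of-envelope "P-core equivalent" estimate; the 1/3 weighting is an
# assumption for illustration, not a measured ratio.
p_cores = 8
e_cores = 16
e_core_weight = 1 / 3   # assumed throughput of one E-core relative to one P-core

print(f"~{p_cores + e_cores * e_core_weight:.1f} P-core equivalents")  # ~13.3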
 
Joined
Jul 30, 2019
Messages
3,374 (1.70/day)
System Name Still not a thread ripper but pretty good.
Processor Ryzen 9 7950x, Thermal Grizzly AM5 Offset Mounting Kit, Thermal Grizzly Extreme Paste
Motherboard ASRock B650 LiveMixer (BIOS/UEFI version P3.08, AGESA 1.2.0.2)
Cooling EK-Quantum Velocity, EK-Quantum Reflection PC-O11, D5 PWM, EK-CoolStream PE 360, XSPC TX360
Memory Micron DDR5-5600 ECC Unbuffered Memory (2 sticks, 64GB, MTC20C2085S1EC56BD1) + JONSBO NF-1
Video Card(s) XFX Radeon RX 5700 & EK-Quantum Vector Radeon RX 5700 +XT & Backplate
Storage Samsung 4TB 980 PRO, 2 x Optane 905p 1.5TB (striped), AMD Radeon RAMDisk
Display(s) 2 x 4K LG 27UL600-W (and HUANUO Dual Monitor Mount)
Case Lian Li PC-O11 Dynamic Black (original model)
Audio Device(s) Corsair Commander Pro for Fans, RGB, & Temp Sensors (x4)
Power Supply Corsair RM750x
Mouse Logitech M575
Keyboard Corsair Strafe RGB MK.2
Software Windows 10 Professional (64bit)
Benchmark Scores RIP Ryzen 9 5950x, ASRock X570 Taichi (v1.06), 128GB Micron DDR4-3200 ECC UDIMM (18ASF4G72AZ-3G2F1)
To the previous post, which I cannot edit anymore, I could add that I was pretty disappointed when I saw the real full gaming-load power draw (94 W), because after seeing 50 W with my typical GPU settings, I did not expect such a "high" number.

On the other hand, in that 94 W you have 8 P-cores 100% loaded and 16 E-cores loaded to what seems to be roughly 55% on average... The total computing output should be approximately equivalent to a 13 P-core CPU. And you have an AMD 8-core with enlarged cache beating it in gaming. I already complained about Intel's lazy approach and that they could have easily built a 10 P-core CPU with enlarged cache just from already-developed elements and building blocks. What is interesting is that there seem to be no rumours about Intel developing a special gaming CPU in the future either. Is that not weird?
That's not weird at all. Rarely does throwing gobs of cache at a problem ever fix the problem.
 
Joined
May 24, 2023
Messages
957 (1.61/day)
That's not weird at all. Rarely does throwing gobs of cache at a problem ever fix the problem.
Well, AMD did exactly that and it worked for them; I'm not sure why an enlarged cache on an Intel CPU would be less beneficial... I am afraid we are off topic here.

Back to topic: in the above case, while being GPU limited, perhaps I could lower the CPU frequency even more without impacting performance?

When I see approximately 65% P-core and 25% E-core utilisation, I wonder why the game does not use the P-cores more.
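As a rough guide to what further underclocking might save: dynamic CPU power scales roughly with frequency times voltage squared, and voltage usually has to fall along with frequency, so the savings can be better than linear. A minimal sketch with placeholder voltages (not readings from this chip):

Code:
# Rough dynamic-power scaling estimate: P ~ C * V^2 * f (static/idle power ignored).
# The frequencies and voltages below are placeholders, not measured values.
def relative_power(freq_ghz: float, vcore: float,
                   ref_freq_ghz: float = 4.8, ref_vcore: float = 1.10) -> float:
    return (freq_ghz / ref_freq_ghz) * (vcore / ref_vcore) ** 2

# Dropping from an assumed 4.8 GHz @ 1.10 V to 4.2 GHz @ 1.00 V:
print(f"~{relative_power(4.2, 1.00):.2f}x of the reference dynamic power")  # ~0.72x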
 
Joined
Mar 13, 2021
Messages
483 (0.35/day)
Processor AMD 7600x
Motherboard Asrock x670e Steel Legend
Cooling Silver Arrow Extreme IBe Rev B with 2x 120 Gentle Typhoons
Memory 4x16Gb Patriot Viper Non RGB @ 6000 30-36-36-36-40
Video Card(s) XFX 6950XT MERC 319
Storage 2x Crucial P5 Plus 1Tb NVME
Display(s) 3x Dell Ultrasharp U2414h
Case Coolermaster Stacker 832
Power Supply Thermaltake Toughpower PF3 850 watt
Mouse Logitech G502 (OG)
Keyboard Logitech G512
Well, AMD did exactly that and it worked for them; I'm not sure why an enlarged cache on an Intel CPU would be less beneficial... I am afraid we are off topic here.

Back to topic: in the above case, while being GPU limited, perhaps I could lower the CPU frequency even more without impacting performance?

When I see approximately 65% P-core and 25% E-core utilisation, I wonder why the game does not use the P-cores more.
Regarding the cache, you really need to design the core/L3 area to be able to run the relevant vias up to a 3D V-Cache style solution. It's not something you could just "bolt on", per se. It may be something coming in the next generation.

I suspect that at 1440p ultra settings it's GPU-limited, so there are times when the CPU is waiting on "GPU busy" time to clear. If you look at both runs, there is still a fair load on the E-cores, but with some cycle time now free on the P-cores, the E-cores are in turn less loaded as well. You could possibly lower the CPU speed, but how much less power draw are you actually going to achieve?

People have to remember that technically Intel is still on a 7 nm node versus AMD's access to 5 nm via TSMC, so they are always on the back foot there.
 
Joined
Mar 7, 2023
Messages
943 (1.40/day)
System Name BarnacleMan
Processor 14700KF
Motherboard Gigabyte B760 Aorus Elite Ax DDR5
Cooling ARCTIC Liquid Freezer II 240 + P12 Max Fans
Memory 32GB Kingston Fury Beast
Video Card(s) Asus Tuf 4090 24GB
Storage 4TB sn850x, 2TB sn850x, 2TB Netac Nv7000 + 2TB p5 plus, 4TB MX500 * 2 = 18TB. Plus dvd burner.
Display(s) Dell 23.5" 1440P IPS panel
Case Lian Li LANCOOL II MESH Performance Mid-Tower
Audio Device(s) Logitech Z623
Power Supply Gigabyte ud850gm pg5
Regarding the cache, you really need to design the core/L3 area to be able to run the relevant vias up to a 3D V-Cache style solution. It's not something you could just "bolt on", per se. It may be something coming in the next generation.

I suspect that at 1440p ultra settings it's GPU-limited, so there are times when the CPU is waiting on "GPU busy" time to clear. If you look at both runs, there is still a fair load on the E-cores, but with some cycle time now free on the P-cores, the E-cores are in turn less loaded as well. You could possibly lower the CPU speed, but how much less power draw are you actually going to achieve?

People have to remember that technically Intel is still on a 7 nm node versus AMD's access to 5 nm via TSMC, so they are always on the back foot there.
Isn't Intel still on 10 nm technically, just branded as 'Intel 7'? Though I guess the whole measuring-in-nanometres thing is getting kind of muddy either way.

Anyway, considering how far behind the node is compared to AMD's, it's quite impressive that Intel is able to get the performance they do; I guess that gap is bridged through extra power draw. Well, at load, anyway. They're still really good at power draw during idle/low load.
 
Joined
May 24, 2023
Messages
957 (1.61/day)
Regarding the cache, you really need to design the core/L3 area to be able to run the relevant vias up to a 3D V-Cache style solution. It's not something you could just "bolt on", per se.
AMD designed a compute chiplet to be as universal as possible, with interconnections for an optional cache on top of it.

I wrote (or at least meant) that Intel could have designed a CPU with more cache already built in on the same piece of silicon, built from exactly the same stuff as their other CPUs, just resized/rearranged.
Isn't Intel still on 10 nm technically, just branded as 'Intel 7'?
...
Anyway, considering how far behind the node is compared to AMD's, it's quite impressive that Intel is able to get the performance they do; I guess that gap is bridged through extra power draw. ...
The old process Intel makes CPUs with is actually still quite usable to this day, but IMO it is really only well suited to frequencies up to about 5 GHz; there it runs pretty efficiently, the CPUs are very easy to cool, and the power draw stays at sane levels even for pretty powerful CPUs.

Intel is raping their own silicon with voltage and heat to push it further than it should be pushed.
 