
AMD Announces Ryzen 7000 Series "Zen 4" Desktop Processors

Joined
Jan 27, 2015
Messages
1,746 (0.48/day)
System Name Legion
Processor i7-12700KF
Motherboard Asus Z690-Plus TUF Gaming WiFi D5
Cooling Arctic Liquid Freezer 2 240mm AIO
Memory PNY MAKO DDR5-6000 C36-36-36-76
Video Card(s) PowerColor Hellhound 6700 XT 12GB
Storage WD SN770 512GB m.2, Samsung 980 Pro m.2 2TB
Display(s) Acer K272HUL 1440p / 34" MSI MAG341CQ 3440x1440
Case Montech Air X
Power Supply Corsair CX750M
Mouse Logitech MX Anywhere 2S
Keyboard Logitech MX Keys
Software Lots
That was on a G3258 7 years back, wanna try that again?
Overall system consumption didn't exceed 100W including the monitor.

But you are going to OC it, right? Let's just forget that, for example, my OC'd 10850K sits here at 31W, like it does 98% of the time.

[attachment: power consumption screenshot]



Most PCs sit at idle way more than 90% of the time. And if you are doing rendering, just look at what the pros who really do that for a living say - time is money, and hence speed is more important than power.

The whole power argument for anyone not doing rendering or encoding all the time is just idiotic on the face of it. And now you're going to invoke "global warming"?

So what's your house thermostat set at?
 
Joined
May 31, 2016
Messages
4,440 (1.42/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800Mhz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1v@2400Mhz
Storage M.2 Samsung 970 Evo Plus 500GB/ Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtek 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
At this point we should just lump all the climate change deniers in with their flat-earther brethren. Not saying you are a denier (since I don't know your position), but spending more power for very little to virtually no benefit these days makes zero sense!
Hehe. I think purchasing a CPU for loads of cash, limiting its power and cutting its performance in half, then bragging about how efficient it is makes no sense. This is plain stupidity.
Of course, it's also possible that they've changed this since - I wouldn't be surprised, given the shitstorm they faced for those ridiculous numbers.
Have you ever noticed the specs for the 10900K and 12900K on Intel's website? How they address the TDPs?
There has been a shift in Intel's naming scheme and in how they evaluate the CPUs' power usage.
No, that's not what I'm arguing at all. No wonder you disagree, since you haven't even understood the point.

Let me try once more, in the hopes you get it.

CPU A runs at 100W and scores 100 points at stock (let's say it's the 5950X).
CPU B shows up, running at 170W, and scores 150 points (let's say it's the 7950X).

CPU B looks less efficient than A, but when you actually test them both at the same wattage, CPU B can score 120 points at 100W. So it is more efficient. If you are after efficiency, you can run CPU B at the same wattage CPU A was at and still beat it in both performance and efficiency. Yet here we have people complaining about how inefficient Zen 4 is. The same thing was going on with Alder Lake.

If you still don't get what I'm saying, I give up.
It is not about you and what you argue, but about what I'm arguing with you. I get the premise of lowering voltage and wattage and tweaking CPUs to use a bit less power and dissipate less heat. You claim that CPUs are efficient because you can lower their power at the cost of some performance, and that CPU B would have been great if it had been released when CPU A was. Your argument compares two CPUs more than two years apart. You literally disregard the three principles for evaluating a CPU against the current market. Focus on the one you purchased and how it stacks up on price, performance and power use today, not two years ago. Anyway, let's move on; there will be plenty of threads to argue in.
 
Joined
Apr 12, 2013
Messages
7,563 (1.77/day)
Most PCs sit at idle way more than 90% of the time. And if you are doing rendering, just look at what the pros who really do that for a living say - time is money, and hence speed is more important than power.

The whole power argument for anyone not doing rendering or encoding all the time is just idiotic on the face of it. And now you're going to invoke "global warming"?

So what's your house thermostat set at?
Not always, & it's not a zero-sum game: you can lower the TDP a bit, do some undervolting & tighten the memory timings to get ~95% of the performance at anywhere between 5-25% less power. And no, unless you're doing rendering that takes days, 5-10% more time isn't worth much!

Of course, because it's wasteful.

What thermostat? I have sweltering heat here in the summers, with temps going past 35°C inside the home! No AC, only BLDC fans, & pretty much every electrical appliance with a 4-star (energy) rating or higher. Do you wanna see my avg energy bill/power consumption as well?
 
Joined
Apr 14, 2022
Messages
758 (0.77/day)
Location
London, UK
Processor AMD Ryzen 7 5800X3D
Motherboard ASUS B550M-Plus WiFi II
Cooling Noctua U12A chromax.black
Memory Corsair Vengeance 32GB 3600Mhz
Video Card(s) Palit RTX 4080 GameRock OC
Storage Samsung 970 Evo Plus 1TB + 980 Pro 2TB
Display(s) Acer Nitro XV271UM3B IPS 180Hz
Case Asus Prime AP201
Audio Device(s) Creative Gigaworks - Razer Blackshark V2 Pro
Power Supply Corsair SF750
Mouse Razer Viper
Keyboard Asus ROG Falchion
Software Windows 11 64bit
No, that's not what I'm arguing at all. No wonder you disagree, since you haven't even understood the point.

Let me try once more, in the hopes you get it.

CPU A runs at 100W and scores 100 points at stock (let's say it's the 5950X).
CPU B shows up, running at 170W, and scores 150 points (let's say it's the 7950X).

CPU B looks less efficient than A, but when you actually test them both at the same wattage, CPU B can score 120 points at 100W. So it is more efficient. If you are after efficiency, you can run CPU B at the same wattage CPU A was at and still beat it in both performance and efficiency. Yet here we have people complaining about how inefficient Zen 4 is. The same thing was going on with Alder Lake.

If you still don't get what I'm saying, I give up.

But you didn't buy CPU B for the 120 points. You buy it for the 150, or more with an OC.
Or for 148 after undervolting and power limiting in order to have less heat, if it's worth doing.

No one on earth is going to limit it to the theoretical 120 points, no matter how efficient the CPU is at that level.

And after all, every new gen is more efficient than the previous one by a small or big margin. That's not an advantage; it goes without saying.
 
Joined
Jan 27, 2015
Messages
1,746 (0.48/day)
System Name Legion
Processor i7-12700KF
Motherboard Asus Z690-Plus TUF Gaming WiFi D5
Cooling Arctic Liquid Freezer 2 240mm AIO
Memory PNY MAKO DDR5-6000 C36-36-36-76
Video Card(s) PowerColor Hellhound 6700 XT 12GB
Storage WD SN770 512GB m.2, Samsung 980 Pro m.2 2TB
Display(s) Acer K272HUL 1440p / 34" MSI MAG341CQ 3440x1440
Case Montech Air X
Power Supply Corsair CX750M
Mouse Logitech MX Anywhere 2S
Keyboard Logitech MX Keys
Software Lots
Not always, & it's not a zero-sum game: you can lower the TDP a bit, do some undervolting & tighten the memory timings to get ~95% of the performance at anywhere between 5-25% less power. And no, unless you're doing rendering that takes days, 5-10% more time isn't worth much!

Of course, because it's wasteful.

What thermostat? I have sweltering heat here in the summers, with temps going past 35°C inside the home! No AC, only BLDC fans, & pretty much every electrical appliance with a 4-star (energy) rating or higher. Do you wanna see my avg energy bill/power consumption as well?

I've seen more people who do this for a living try serious OC than not. That 95% perf for 25% less power is true, nothing new about it at all, and totally irrelevant.

The only time you benefit from those power savings is when you start to push all-core workloads. How often do you do that? Very rarely, for most people. And what's the actual benefit for that loss of responsiveness when you need it? Virtually nothing, because for most people, when they do push all-core work, it's for very short periods (like 5 seconds).

The flip side is you can get 110% for 50% more power.

If that 10% saves you 45 minutes a day because your income depends on rendering / video editing and so on, and you make $30/hr, that's like getting $21 more per day.

Compared to paying an extra .02c per day in power, it is flatly a no-brainer for the people who need more speed.
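A minimal sketch of that back-of-the-envelope math (the hourly rate and minutes saved are the figures above; the extra power draw and electricity price are placeholder assumptions, not anyone's measurements):

```python
# Value of time saved vs. cost of extra power, per day.
hourly_rate_usd = 30.0         # from the post above
minutes_saved_per_day = 45.0   # from the post above

extra_power_w = 100.0          # assumed extra draw while rendering
render_hours_per_day = 7.5     # 45 min is ~10% of the render time
price_per_kwh_usd = 0.15       # assumed electricity price

time_value = hourly_rate_usd * minutes_saved_per_day / 60.0
power_cost = extra_power_w / 1000.0 * render_hours_per_day * price_per_kwh_usd

print(f"time saved is worth ~${time_value:.2f}/day")   # ~$22 with these numbers
print(f"extra power costs   ~${power_cost:.2f}/day")   # ~$0.11 with these assumptions
```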

So yeah, my comment about where the performance enthusiasts are was kind of a joke and kind of not.

All of the CPUs released so far are AMD's X-series CPUs; they are OC-capable, up-tuned chips for enthusiasts. The more mundane non-X chips will come later. If you are not into OC / high-performance CPUs, why are you here?
 
Joined
Apr 12, 2013
Messages
7,563 (1.77/day)
Right, the point is ~ always aiming for max performance even at the cost of much more power is not something that should be encouraged. I've bought the most energy-efficient products I could afford on credit (i.e. with interest) & paid the early-adopter tax to get my energy consumption as low as I possibly could at home. And yes, my point about lower power consumption was mostly about 100% or near-100% load on the CPU, like you said wrt (rendering) time is money. We should all do our bit & every little bit counts!
 
Joined
Jan 27, 2015
Messages
1,746 (0.48/day)
System Name Legion
Processor i7-12700KF
Motherboard Asus Z690-Plus TUF Gaming WiFi D5
Cooling Arctic Liquid Freezer 2 240mm AIO
Memory PNY MAKO DDR5-6000 C36-36-36-76
Video Card(s) PowerColor Hellhound 6700 XT 12GB
Storage WD SN770 512GB m.2, Samsung 980 Pro m.2 2TB
Display(s) Acer K272HUL 1440p / 34" MSI MAG341CQ 3440x1440
Case Montech Air X
Power Supply Corsair CX750M
Mouse Logitech MX Anywhere 2S
Keyboard Logitech MX Keys
Software Lots
Right, the point is ~ always aiming for max performance even at the cost of much more power is not something that should be encouraged. I've bought the most energy-efficient products I could afford on credit (i.e. with interest) & paid the early-adopter tax to get my energy consumption as low as I possibly could at home. And yes, my point about lower power consumption was mostly about 100% or near-100% load on the CPU, like you said wrt (rendering) time is money. We should all do our bit & every little bit counts!

Ok so I disagree and don't really care about you preaching your political agendas. +1 to my ignore list.
 
Joined
May 31, 2016
Messages
4,440 (1.42/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800Mhz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1v@2400Mhz
Storage M.2 Samsung 970 Evo Plus 500GB/ Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtek 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
Ok so I disagree and don't really care about you preaching your political agendas. +1 to my ignore list.
You should tell that to miners. They care about power because they see the difference across all the devices they use, and at a grand scale (2,000 cards, for instance) they need to tweak power in order to make a profit. You don't, because you use only one card or CPU. If you consider 2,000 users like you, the wasted power is enormous, but individually they don't see it, or maybe it seems small and irrelevant. If they acted like the miners and optimized the power of those 2,000 cards, it would make a huge difference at a grand scale. Now imagine what the grand scale actually is with the computers we are using and the users who could make a change. Being ignorant like you helps to disregard this issue. People need to hit rock bottom to understand things or start to care. If you don't have hot water, or your kids start to starve, then you will think: why didn't we do anything about it when there was still time?
 
Joined
Jun 14, 2020
Messages
3,530 (2.14/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
But you didn't buy CPU B for the 120 points. You buy it for the 150, or more with an OC.
Or for 148 after undervolting and power limiting in order to have less heat, if it's worth doing.

No one on earth is going to limit it to the theoretical 120 points, no matter how efficient the CPU is at that level.

And after all, every new gen is more efficient than the previous one by a small or big margin. That's not an advantage; it goes without saying.
What do you even mean, you don't buy it for the 120 points? When I buy a CPU, I ALWAYS buy it for the performance it delivers at the wattage I want to run it at.

Prime example: non-K CPUs. Lots of people and lots of reviewers suggest buying them not for the stock performance but for the performance you get after removing the power limits. So seriously, what are you even talking about?
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
No, that's not what I'm arguing at all. No wonder you disagree, since you haven't even understood the point.

Let me try once more, in the hopes you get it.

CPU A runs at 100W and scores 100 points at stock (let's say it's the 5950X).
CPU B shows up, running at 170W, and scores 150 points (let's say it's the 7950X).

CPU B looks less efficient than A, but when you actually test them both at the same wattage, CPU B can score 120 points at 100W. So it is more efficient. If you are after efficiency, you can run CPU B at the same wattage CPU A was at and still beat it in both performance and efficiency. Yet here we have people complaining about how inefficient Zen 4 is. The same thing was going on with Alder Lake.

If you still don't get what I'm saying, I give up.
But this is the thing, and where this argument keeps going in circles: both these statements are true. There is no contradiction between the two. At stock, CPU B is, unequivocally, less efficient than CPU A. That doesn't mean that it doesn't have the potential to be more efficient - but that's not how it's configured from the factory. All silicon implementations of an architecture have a wide range of possible efficiencies at various tuning and performance levels. But that means that CPU B can be more efficient, given appropriate tuning. It just isn't (in this example) at stock.

Two things spring from this:

First: non-iso power comparisons don't necessarily give a good picture of the architectural or implemented efficiency of each design, as they are tuned differently. What they do is give a representative picture of actual, real-world product efficiency. What people buy and put into their PCs. Then again, iso power measurements don't really give a good picture of efficiency either, as you're still just measuring a single point along a complex curve for each, and there's nothing in that measurement telling you whether these tuning levels are "equal" (as if that's possible) along their respective curves. Comparing two chips at, say, 150W can also be extremely problematic if one chip is pushed to its limits at that point while the other can go much further. For an actual overview of architectural efficiency that is worth anything at all, you need to run a wide range of tests across a wide range of wattages - anything less is just as flawed as non-iso power testing.

The second thing springing from both statements being true: the major question here is what you're actually looking for - practical, useful information that's generally applicable, or specialized information that's only applicable in specialized settings. This is where we've been disagreeing for a long, long time, as I think the generally applicable information gained from looking at real-world stock behaviour is by far the most important data, while you care only about the highly specialized niche of people actually willing and able to tune their chips manually.

Of course, once we start looking past either pure ST or nT applications, as well as looking at power draws across a range of various workloads, things get very complicated very quickly, as there's a ton of variability in how each specific workload will interact with each specific CPU both architecturally and in terms of its physical implementation. I really, really wish there was someone doing comprehensive power monitoring across their whole range of CPU testing, but there isn't - and it's understandable, as that's a massive, massive amount of work. Anandtech seemed to be working towards that at one point, but never actually got there, and sadly the decline of that site has been ever more obvious in recent years.
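To make the stock-vs-iso-power point above concrete, here is a minimal points-per-watt sketch using the purely hypothetical numbers from the quoted example (not measured data); the takeaway is that "which chip is more efficient" flips depending on the operating point you pick:

```python
# Points-per-watt at three operating points from the hypothetical example.
samples = [
    ("CPU A, stock, 100 W", 100, 100),   # (label, watts, score)
    ("CPU B, stock, 170 W", 170, 150),
    ("CPU B, capped, 100 W", 100, 120),
]
for label, watts, score in samples:
    print(f"{label}: {score / watts:.2f} points/W")
# CPU A stock:  1.00 points/W
# CPU B stock:  0.88 points/W  -> looks less efficient at stock
# CPU B capped: 1.20 points/W  -> more efficient at matched wattage
# A single comparison point (stock or iso-power) hides the rest of each curve.
```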

I've seen more people who do this for a living try serious OC than not. That 95% perf for 25% less power is true, nothing new about it at all, and totally irrelevant.

The only time you benefit from those power savings is when you start to push all-core workloads. How often do you do that? Very rarely, for most people. And what's the actual benefit for that loss of responsiveness when you need it? Virtually nothing, because for most people, when they do push all-core work, it's for very short periods (like 5 seconds).

The flip side is you can get 110% for 50% more power.

If that 10% saves you 45 minutes a day because your income depends on rendering / video editing and so on, and you make $30/hr, that's like getting $21 more per day.

Compared to paying an extra .02c per day in power, it is flatly a no-brainer for the people who need more speed.

So yeah, my comment about where the performance enthusiasts are was kind of a joke and kind of not.

All of the CPUs released so far are AMD's X-series CPUs; they are OC-capable, up-tuned chips for enthusiasts. The more mundane non-X chips will come later. If you are not into OC / high-performance CPUs, why are you here?
All AMD CPUs can be OC'd, they don't follow Intel's limitations there. But I disagree with your overall conclusion here. Why? Because - outside of the downright silly and vastly oversimplified calculation you're basing your argument on - the vast majority of us don't actually do these types of work. Most of us are PC enthusiasts - hobbyists - or gamers, or some mix of the above. And, crucially, there are a lot of use cases where this type of logic either doesn't apply or just isn't valid.

As to your calculation:
- If you do that kind of work for a large company, on a salary, then you gain nothing from that speed-up save for possibly having to do more work. Also, are you just sitting on your ass doing nothing during that render? No, you're doing other work. So, increasing that speed might benefit your workflow - or it might get in the way of other necessary tasks, or it might just make your boss more money while you're just left with a bigger workload - on a fixed wage, that theoretical $21 of yours goes into your boss' pocket, not yours.
- If you're a freelancer, contractor, or running your own business, you might get the opportunity to make more money from such a speedup, but only if you are constantly in a state of having more than enough work. If you don't, then congrats, you've now got slightly more free time - which is of course also nice, but you could have had that already by just scheduling your renders for the end of the day.

In both of these cases, the applicability of your logic is extremely narrow. That doesn't make it wrong, it just makes it myopic.

And, of course this also fails to take into account a whole bunch of other factors that play into this:
- Scheduling renders for EOD/overnight means less heat dumped into your workspace while you're there, potentially increasing comfort (and saving on AC costs if applicable)
- Running renders slower but more efficiently overnight - when there's plenty of time for them to finish - can save you meaningful electricity costs in the long run
- Overclocking production gear is generally considered a huge no-no due to instability and the possibility of errors. Saving 10% of time on a render isn't much help if you have to do it all again because one frame got partially corrupted.

... oh, and if a 10% speedup saves you 45 minutes a day, then you're already running 7.5 hours of renders a day - meaning this isn't a workstation, but a dedicated render rig - which would be running 24/7 anyway. Once you're at that level, setting up a second render rig - or renting an off-site render farm - will be the next step, not an OC.
 
Joined
Jun 14, 2020
Messages
3,530 (2.14/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
But this is the thing, and where this argument keeps going in circles: both these statements are true. There is no contradiction between the two. At stock, CPU B is, unequivocally, less efficient than CPU A. That doesn't mean that it doesn't have the potential to be more efficient - but that's not how it's configured from the factory. All silicon implementations of an architecture have a wide range of possible efficiencies at various tuning and performance levels. But that means that CPU B can be more efficient, given appropriate tuning. It just isn't (in this example) at stock.

I completely agree. But that is the thing: taken at face value, the statement "Alder Lake is inefficient" seems to be referring to architectural efficiency, in which case the statement is wrong.

The guy I'm replying to is basically talking about architectural efficiency, but he does so while comparing stock settings, which is just undoubtedly flawed, because it leads to contradictions. For example, the 12900T is more efficient than the 12900K, therefore the Alder Lake architecture is more efficient than the Alder Lake architecture. This is what the guy is arguing for...
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I completely agree. But that is the thing: taken at face value, the statement "Alder Lake is inefficient" seems to be referring to architectural efficiency, in which case the statement is wrong.

The guy I'm replying to is basically talking about architectural efficiency, but he does so while comparing stock settings, which is just undoubtedly flawed, because it leads to contradictions. For example, the 12900T is more efficient than the 12900K, therefore the Alder Lake architecture is more efficient than the Alder Lake architecture. This is what the guy is arguing for...
Yeah, that's exactly the problem. We just don't have the data to say anything conclusively about architectural efficiency - nobody does that kind of comprehensive testing. What we do have are various snippets of data from various workloads, which - due to the high boost clocks and stock power limits of current chips - paint a very complex picture (at least compared to how things used to be in the Skylake-and-before era). Is ADL more or less efficient than Zen3? Yes, I guess? It's both. And neither. At the same time.

And I completely agree that some of @ratirt's arguments here have been ... well, off at best. Like the "why pay for a high end CPU and downclock for efficiency when you can just buy a more efficient lower end CPU" argument, which (at least for any nT workload) is just a completely false premise - there is no lower end, cheaper CPU that's more efficient. A 12900K, 12900 and 12900T (if it exists?) will cost you roughly the same money, and the non-K and T will both be vastly more efficient in nT workloads than any lower core count CPU with a matching power limit. The same goes on the AMD side, though they (mostly) don't even make separate low power SKUs, just implement Eco Mode settings in BIOS instead. For any nT task, a 65W 5950X will be vastly more efficient than, say, a 5600X. I see where the argument is coming from, as a lot of lower end SKUs are pushed to less of an extreme than higher end SKUs, but the only situation in which "buy lower end for more efficiency" applies is if you're not talking very heavy workloads to begin with, if you're talking mainly ST, or if you're only looking at ~i5 class CPUs to begin with.
 
Joined
Jun 14, 2020
Messages
3,530 (2.14/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
Yeah, that's exactly the problem. We just don't have the data to say anything conclusively about architectural efficiency - nobody does that kind of comprehensive testing. What we do have are various snippets of data from various workloads, which - due to the high boost clocks and stock power limits of current chips - paint a very complex picture (at least compared to how things used to be in the Skylake-and-before era). Is ADL more or less efficient than Zen3? Yes, I guess? It's both. And neither. At the same time.
Sure, but doesn't that apply to stock testing as well? I mean, mostly you'll see 5 workloads tested for efficiency. You can only draw some very vague generalizations. I can safely say that Rocket Lake, for example, is an absolute pig when it comes to efficiency, but it's way more complicated for Zen 3 and Alder Lake. I assume it's going to be the case for Zen 4 as well.
 
Joined
May 31, 2016
Messages
4,440 (1.42/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800Mhz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1v@2400Mhz
Storage M.2 Samsung 970 Evo Plus 500GB/ Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtek 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
"why pay for a high end CPU and downclock for efficiency when you can just buy a more efficient lower end CPU"
Is it though? If you do nothing but gaming, which is literally what @fevgatos always brings up with the efficiency of AL, buying a 12600K or even a non-K 65W chip is the best option. It is way cheaper, it is already very efficient, and you don't lose that much performance. Going with a 12900K and putting a power cap on it to make it run cooler and use less power while losing performance is, in my opinion (it is an opinion), foolish. To be fair, even if the load is light for both, the 12600K will still use less power than a 12900K due to the core count difference. If you need 12900K performance for whatever workload, you don't want to limit its power, because you need it. If you are an enthusiast and you just bought it to have it and you play games with it, that is fine. You can limit it to 35W and use it as you please, but claiming it is efficient in gaming? Well, the 12600K (or non-K) has better efficiency no matter how you look at it.
 
Joined
Jun 14, 2020
Messages
3,530 (2.14/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
Is it though? If you do nothing but gaming, which is literally what @fevgatos always brings up with the efficiency of AL
No, it's not. YOU were talking about gaming efficiency with another user, I don't remember the name, so I replied to that. ADL is more efficient in 99.9% of workloads bar a few heavy MT workloads; in those you need to power limit them or they consume wattage like a pig.

If you need 12900K performance for whatever workload, you don't want to limit its power, because you need it.
And you are still wrong. It doesn't even make sense...

Have you seen the M1 desktops? You know they are limited to like 40 watts? You realize these are meant for professionals? So what gives?
Have you heard of Xeons? You realize they are low-power multicore CPUs?


There are lots of uses for a high-core-count CPU at low wattage; I don't understand how you don't get this.
 
Joined
Aug 18, 2022
Messages
202 (0.24/day)
No, it's not. YOU were talking about gaming efficiency with another user, I don't remember the name, so I replied to that. ADL is more efficient in 99.9% of workloads bar a few heavy MT workloads; in those you need to power limit them or they consume wattage like a pig.


And you are still wrong. It doesn't even make sense...

Have you seen the M1 desktops? You know they are limited to like 40 watts? You realize these are meant for professionals? So what gives?
Have you heard of Xeons? You realize they are low-power multicore CPUs?


There are lots of uses for a high-core-count CPU at low wattage; I don't understand how you don't get this.

Just hate for Intel because it is not AMD is why.

For some reason, people using AMD get fixated on whatever power Intel CPUs use, when it has no impact on their PC or their life at all.

Just arguing for the sake of it, to me. These people will never buy Intel and will most likely end up with AM5 systems, but will still flood the forum with posts about how much power Intel CPUs use.
 
Joined
Jun 14, 2020
Messages
3,530 (2.14/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
Just hate for Intel because it is not AMD is why.

For some reason, people using AMD get fixated on whatever power Intel CPUs use, when it has no impact on their PC or their life at all.

Just arguing for the sake of it, to me. These people will never buy Intel and will most likely end up with AM5 systems, but will still flood the forum with posts about how much power Intel CPUs use.
And that's fine, you can criticize a product you don't have. But fixating on a problem (the stock power limit) when it's not really a problem, because you can change it in 3 seconds, faster than you can enable XMP or update your motherboard BIOS (for that sweet mobo upgradability we keep hearing AM4 has), is just nuts.
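A small aside to illustrate that the limit really is just a software knob rather than a hardware property: on a Linux box (this is an assumption; the post itself is talking about changing it in the BIOS) the long-term package power limit can be read and changed at runtime through the intel_rapl powercap interface, roughly like this:

```python
# Sketch: read and cap PL1 (the long-term package power limit) via Linux's
# intel_rapl powercap sysfs interface. Needs root; the path below is the usual
# one for CPU package 0, but it can differ between systems.
RAPL = "/sys/class/powercap/intel-rapl:0"

def read_uw(attr: str) -> int:
    with open(f"{RAPL}/{attr}") as f:
        return int(f.read())

def write_uw(attr: str, value_uw: int) -> None:
    with open(f"{RAPL}/{attr}", "w") as f:
        f.write(str(value_uw))

print("current PL1:", read_uw("constraint_0_power_limit_uw") / 1e6, "W")
write_uw("constraint_0_power_limit_uw", 125 * 10**6)  # cap PL1 at 125 W
```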
 
Joined
Jan 27, 2015
Messages
1,746 (0.48/day)
System Name Legion
Processor i7-12700KF
Motherboard Asus Z690-Plus TUF Gaming WiFi D5
Cooling Arctic Liquid Freezer 2 240mm AIO
Memory PNY MAKO DDR5-6000 C36-36-36-76
Video Card(s) PowerColor Hellhound 6700 XT 12GB
Storage WD SN770 512GB m.2, Samsung 980 Pro m.2 2TB
Display(s) Acer K272HUL 1440p / 34" MSI MAG341CQ 3440x1440
Case Montech Air X
Power Supply Corsair CX750M
Mouse Logitech MX Anywhere 2S
Keyboard Logitech MX Keys
Software Lots
Leak from ECSM_Official on bilibili. This person has posted pics of tests on both the 13900K and 7950X.

This is translated from Chinese, so keep that in mind.

[attachment: benchmark results screenshot]


Source:

 
Joined
May 31, 2016
Messages
4,440 (1.42/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800Mhz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1v@2400Mhz
Storage M.2 Samsung 970 Evo Plus 500GB/ Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtek 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
Just hate for Intel because it is not AMD is why.
I don't hate Intel; I disagree with our friend because his logic is flawed and circumstantial.
For some reason, people using AMD get fixated on whatever power Intel CPUs use, when it has no impact on their PC or their life at all.

Just arguing for the sake of it, to me. These people will never buy Intel and will most likely end up with AM5 systems, but will still flood the forum with posts about how much power Intel CPUs use.
That is exactly what you guys are doing. All the reviewers I watched said that AL is power hungry and hot. You, on the other hand, flood threads (whether AMD, Intel or even NV) with how efficient AL is in general, and then when asked you say gaming, at 45W. Well, that is a huge stretch for me, and I will never agree to call AL efficient just because of gaming (and not in all games btw, with lightly threaded tasks, tweaks in voltage and 45W power limits as proof).
No, it's not. YOU were talking about gaming efficiency with another user, I don't remember the name, so I replied to that. ADL is more efficient in 99.9% of workloads bar a few heavy MT workloads; in those you need to power limit them or they consume wattage like a pig.

And you are still wrong. It doesn't even make sense...

Have you seen the M1 desktops? You know they are limited to like 40 watts? You realize these are meant for professionals? So what gives?
Have you heard of Xeons? You realize they are low-power multicore CPUs?


There are lots of uses for a high-core-count CPU at low wattage; I don't understand how you don't get this.
So we are going to disagree; you know my stance here. Your examples are meaningless.
M1 desktops? Low-power multicore Xeons? That is your answer? Do you even know why these are bad examples?
You are unbelievable; your flawed logic is beyond belief.
Agree to disagree. I will never agree with your statements and way of perceiving things in this matter, sorry.
 
Joined
Aug 18, 2022
Messages
202 (0.24/day)
AL is power hungry and hot

When is AL power hungry and hot? When it is gaming? No. When you are sat watching a film on your PC? No. It is when you are running it balls out, which no one does all the time, so your point is irrelevant. I have not said it is efficient; I have in fact said I do not care as long as I can cool it, which is all that matters to me. It can use 500 watts and as long as I can cool it I still would not care. The only argument against AL is that it uses a lot of power, nothing else. It is the same argument that has been blabbered by AMD users since AL came out. Time to change the record, as we have heard it over and over on repeat; you can stop playing it now as WE GET IT.

Also, re tweaks in voltage: most Ryzen users tweak the voltage of their CPUs, so why is it a big deal for AL users to do it? Another broken record, blah blah blah.
 
Joined
May 31, 2016
Messages
4,440 (1.42/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800Mhz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1v@2400Mhz
Storage M.2 Samsung 970 Evo Plus 500GB/ Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtek 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
When is AL power hungry and hot? When it is gaming? No. When you are sat watching a film on your PC? No. It is when you are running it balls out, which no one does all the time, so your point is irrelevant. I have not said it is efficient; I have in fact said I do not care as long as I can cool it, which is all that matters to me. It can use 500 watts and as long as I can cool it I still would not care. The only argument against AL is that it uses a lot of power, nothing else. It is the same argument that has been blabbered by AMD users since AL came out. Time to change the record, as we have heard it over and over on repeat; you can stop playing it now as WE GET IT.
Read some reviews and opinions with a range of benchmarks, not just the certain scenarios that favor and feed your opinions. Saying AL is efficient is like saying e-cores are very good or help gaming performance, which in both cases is total malarkey at best.
 
Joined
Aug 18, 2022
Messages
202 (0.24/day)
Read some reviews and opinions with a range of benchmarks, not just the certain scenarios that favor and feed your opinions. Saying AL is efficient is like saying e-cores are very good or help gaming performance, which in both cases is total malarkey at best.

Already said I don't care if it is or not. You obviously do, as you are still going on about it. Let me say it for you again: I don't care. I have a 12700K in my PC and I don't care how much it uses as long as I can cool it, which I can with a dual 360mm loop, so I don't care if it uses 45W or 200W. Get it? You are preaching your noise to someone who is deaf to it.
 