# Intel Core i9-12900K



## W1zzard (Nov 4, 2021)

The Intel Core i9-12900K is Intel's flagship processor for the Alder Lake architecture. In our testing, we saw fantastic gaming performance from this new processor. Not only have lightly threaded tests improved; the 12900K can even beat AMD in highly threaded workloads.



----------



## Crackong (Nov 4, 2021)

Good article as usual.




Intel PR marketing leaks:

- 8XX CPU-Z score
- Best XXX CPU
- Total dominance

Reality:

- Win some, lose some
- Double the power consumption
- Double the heat
- Double the platform cost
- Windows 11


----------



## Chaitanya (Nov 4, 2021)

Those power consumption figures aren't confidence-inspiring (especially for the enterprise products still to come).
Edit: waiting to see a comparison between DDR4 and DDR5.


----------



## W1zzard (Nov 4, 2021)

Chaitanya said:


> waiting to see comparison between DDR4 and DDR5.


Next week. The Intel CPUs arrived yesterday. I've been rebenching everything else since the W11 AMD L3 cache fix came out. I'll rebench Zen 2 and 9th gen too and add them to the reviews.


----------



## Rhein7 (Nov 4, 2021)

Eww that power consumption


----------



## GerKNG (Nov 4, 2021)

Some games are even worse than on Skylake, with awful 0.1% lows in many of them. It's slower than a 5950X in multithreading while consuming almost twice the power...
And I almost pre-ordered one yesterday...


----------



## dicktracy (Nov 4, 2021)

Cheaper and faster than 5950x. Thank you Intel.


----------



## lexluthermiester (Nov 4, 2021)

So to sum up, Intel's delivered the goods, but this model runs hot and is pricey. Seems to have taken back the performance crown. Imagine if they added in another set of core packs....


----------



## luches (Nov 4, 2021)

That temp!!! So you can no longer air-cool Intel's flagship, even with a top-of-the-line air cooler!!! 100°C will turn your room into a furnace.
I consider my 5900X running at 76°C to be pretty high, and it heats up my room, but 100°C... HELL NO!!!! Not to mention the 300 W power draw. My undervolted 3080 Ti only draws 30 W more, at 330 W.
This feels like a very bad trade-off: sacrificing all the efficiency for the sake of performance.


----------



## oldwalltree (Nov 4, 2021)

Looks like my x299 will live on another generation....


----------



## THANATOS (Nov 4, 2021)

W1zzard said:


> Next week. The Intel CPUs arrived yesterday. I've been rebenching everything else since the W11 AMD L3 cache fix came out. I'll rebench Zen 2 and 9th gen too and add them to the reviews.


Thanks for the review, but honestly I wouldn't compare it to AMD for now, only to its predecessor.
For example, the Ryzen 9 5950X scores too low in Cinebench R23. I think Hardware Unboxed mentioned a problem where the L3 fix stops working when you swap AMD CPUs, or something like that.


----------



## W1zzard (Nov 4, 2021)

luches said:


> 100°C will turn your room into a furnace.
> I consider my 5900X running at 76°C to be pretty high, and it heats up my room, but 100°C... HELL NO!!!!


Yeah, my lab is quite warm right now. Just to add a bit here: what heats up your room is the watts, not the absolute temperature. If you slow down your fan speed, your CPU temperature will go up, yet the heat output of the CPU stays the same, and thus your room will be just as warm.
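A quick back-of-the-envelope sketch of this point: the heat a component dumps into the room is its power draw integrated over time, so die temperature (and fan speed) never enters the calculation. This is just an illustration; the function name and figures are my own.

```python
def room_heat_joules(watts: float, hours: float) -> float:
    """Energy dissipated into the room: power integrated over time.

    Die temperature appears nowhere: a CPU running hotter at the same
    wattage (e.g. with slower fans) heats the room exactly the same.
    """
    return watts * hours * 3600  # 1 W sustained for 1 s = 1 J

# A 300 W load for one hour, whether the die sits at 70°C or 100°C:
print(room_heat_joules(300, 1))  # 1080000 J, i.e. ~1.08 MJ into the room
```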


----------



## so11ex (Nov 4, 2021)

Heat and noise are now a problem for top-performance PCs.
The video card alone can draw 300 W+ for a 6900 XT and 400 W+ for a 3090.

Now add an extra 300 W from the 12900K CPU: too much.

I have a 5900X and a 6900 XT; both are undervolted/downclocked, because even in a big PC case with lots of 140 mm fans there is too much heat output!
It becomes too hot and noisy!

It's a shame Intel doesn't have P-core-only 8-core and 10-12-core units; that would be nice.


----------



## BSim500 (Nov 4, 2021)

So +11% faster (1080p), falling to +7% faster (1440p), in games on average, for +23% higher power consumption on a newer 10 nm process versus the two-generation-old i9-10900K on 14 nm, and 92-100°C temps even with a Noctua NH-U14S? That's... not very impressive...


----------



## Lord_Soth (Nov 4, 2021)

dicktracy said:


> Cheaper and faster than 5950x. Thank you Intel.



Yes, you only forgot: hot, power-hungry, a more expensive platform, losing half of the CPU tests, and worse efficiency than a year-old AMD processor.


----------



## W1zzard (Nov 4, 2021)

THANATOS said:


> For example Ryzen 9 5950X has too low score in Cinebench R23


Seems fine? I have 25813.

https://www.reddit.com/r/Amd/comments/kf2gqs
The higher scores are with manual PBO settings?


----------



## xulos (Nov 4, 2021)

Highly recommended for what? Heating your room? To sum this up: it costs 25% more than the 5900X, is ~5% faster, and draws 100% more watts while cooking at 100°C. Well played, Intel, you have the fastest CPU!


----------



## b4psm4m (Nov 4, 2021)

luches said:


> That temp!!! So you can no longer air-cool Intel's flagship, even with a top-of-the-line air cooler!!! 100°C will turn your room into a furnace.
> I consider my 5900X running at 76°C to be pretty high, and it heats up my room, but 100°C... HELL NO!!!! Not to mention the 300 W power draw. My undervolted 3080 Ti only draws 30 W more, at 330 W.
> This feels like a very bad trade-off: sacrificing all the efficiency for the sake of performance.


It's watts (joules per second) that heat rooms, not component temperature. If you had the tip of a pin at 1000°C in a room, it would make hardly any difference; but a 3 kW bar fire at 200°C will heat the entire room. But I get what you mean.

Anyway, it's good to see Intel strike back, but IMO AMD 5000 is still the platform of choice.
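The pin-versus-fire comparison can be made concrete: the pin holds a tiny, one-off amount of thermal energy (mass × specific heat × temperature rise), while the fire delivers joules continuously. A rough sketch with illustrative numbers of my own choosing:

```python
def stored_heat_joules(mass_kg: float, specific_heat_j_per_kg_k: float,
                       delta_t_k: float) -> float:
    # One-off thermal energy stored in an object heated above room temperature
    return mass_kg * specific_heat_j_per_kg_k * delta_t_k

# ~0.1 g steel pin at 1000°C in a 25°C room (specific heat of steel ~450 J/kg·K):
pin = stored_heat_joules(0.0001, 450, 975)  # ~44 J, delivered once
# 3 kW bar fire: 3000 J every second, for as long as it runs
fire_per_minute = 3000 * 60                 # 180000 J per minute
print(pin, fire_per_minute)
```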


----------



## lexluthermiester (Nov 4, 2021)

THANATOS said:


> Thanks for the review, but honestly I wouldn't compare it to AMD for now, only to its predecessor.
> For example, the Ryzen 9 5950X scores too low in Cinebench R23. I think Hardware Unboxed mentioned a problem where the L3 fix stops working when you swap AMD CPUs, or something like that.


Comparing AMD's top consumer model to Intel's now top consumer model is perfectly fair regardless of details.


----------



## KarymidoN (Nov 4, 2021)

In Linus's video he showed that the NH-D15 can't cool this beast (especially once summer comes).
DDR5 and Z690 also make for a more expensive platform.
The i5-12600K is the winner: amazing price/performance.


----------



## Camm (Nov 4, 2021)

Definitely interesting. If I were building right now, I'd certainly consider it, but the power usage and temps are just batshit. That's going to have flow-on effects for which cases and coolers you can run, etc. The new motherboard platform is quite nice as well.

So a swing and a nudge, I think.


----------



## luches (Nov 4, 2021)

> It's watts (joules per second) that heat rooms, not component temperature. If you had the tip of a pin at 1000°C in a room, it would make hardly any difference; but a 3 kW bar fire at 200°C will heat the entire room. But I get what you mean.
>
> Anyway, it's good to see Intel strike back, but IMO AMD 5000 is still the platform of choice.



Thanks for elaborating. Yes, I understand; I was speaking generally.
I went from an 8700K to a 5900X cooled by an Assassin III and immediately noticed how it heats up my room quite a bit more (and it's even undervolted). Now, with the 12900K drawing 300 W, the extra heat is going to be so unpleasant, at least for me. Plus, I always use air coolers, and you can't cool this lava pool in summer.
My undervolted 3080 Ti only draws 30 W more, at 330 W!


----------



## Zareek (Nov 4, 2021)

Thanks for another great review @W1zzard . 

The power consumption and temps are concerning but you can't argue with that performance. Too bad it requires Windows 11 to work properly. Maybe there will be a Windows 10 patch for it.


----------



## N3M3515 (Nov 4, 2021)

So... overclocking is useless on Intel's 12th gen?


----------



## Ravenas (Nov 4, 2021)

I agree that “Intel is back”.

But how is this any different from before: good single-threaded performance at the cost of horrible power consumption?

There isn’t a significant reason for someone like myself with a 5950X (even with platform upgrades such as DDR5) to swap platforms and regress in power consumption for slightly better single-core performance and essentially the same or worse multithreaded performance.

This is why the pricing is below the 5950X's MSRP.


----------



## THANATOS (Nov 4, 2021)

W1zzard said:


> Seems fine? I have 25813.
>
> https://www.reddit.com/r/Amd/comments/kf2gqs
> The higher scores are with manual PBO settings?


Here is the mention of a bug when you change the CPU:

https://twitter.com/i/web/status/1454778534929461249
Not sure if PBO was enabled; I will ask the guy on the other forum.
I will also have to check other reviews. If they have scores comparable to yours, maybe I was mistaken.


----------



## ToxicTears (Nov 4, 2021)

Thumbs up for the "10 nanometer production process"...
really?


----------



## oxrufiioxo (Nov 4, 2021)

Impressive. I'm guessing Intel will have the temps sorted with Raptor Lake. The power consumption/heat output is slightly disappointing, but this is still so much better than Rocket Lake.


----------



## Flanker (Nov 4, 2021)

W1zzard said:


> Next week. The Intel CPUs arrived yesterday. I've been rebenching everything else since the W11 AMD L3 cache fix came out. I'll rebench Zen 2 and 9th gen too and add them to the reviews.


That sounds like sleep deprivation lol


----------



## W1zzard (Nov 4, 2021)

Flanker said:


> That sounds like sleep deprivation lol


Yup, no sleep, but I'm happy that I made it in time with 3 reviews, and that there are now thousands of people reading each of them.


----------



## THANATOS (Nov 4, 2021)

Flanker said:


> That sounds like sleep deprivation lol


Hi Flanker. You mentioned on pctuning.cz that you managed 28,593 points with your R9 5950X in Cinebench R23. Can you clarify your setup and whether you had PBO enabled? W1zzard managed only 25,813 points in this review, and as I checked, many reviews have even lower scores than his.


----------



## neatfeatguy (Nov 4, 2021)

The temps this thing hits are incredible. My 5900X can flirt with 90°C, but that's because of the Noctua NH-U9S; I'd say it's the bare minimum you'd want to use with a 5900X (I had to go with it because its profile is low enough to allow the top fan in my case). I'll soon be switching to an AIO, because my new GPU is short enough to allow mounting a radiator now that I no longer have that 13"-long 980 Ti hogging all the space in my case.

The 12700K looks like a better value than the 12900K. Is there really any reason to get the 12900K over a 12700K? I'd much rather go for a 5950X if I needed the extra cores for multithreading, since the power draw is lower and the performance is pretty close.


----------



## SaLaDiN666 (Nov 4, 2021)

luches said:


> That temp!!! So you can no longer air-cool Intel's flagship, even with a top-of-the-line air cooler!!! 100°C will turn your room into a furnace.
> I consider my 5900X running at 76°C to be pretty high, and it heats up my room, but 100°C... HELL NO!!!! Not to mention the 300 W power draw. My undervolted 3080 Ti only draws 30 W more, at 330 W.
> This feels like a very bad trade-off: sacrificing all the efficiency for the sake of performance.


Physics PhD, may I presume?


----------



## Kissamies (Nov 4, 2021)

What a toaster, just like the last few high-end Intel chips. Though competition is always good.

I'll be interested to see how Zen 4 competes with this.


----------



## Sandbo (Nov 4, 2021)

GerKNG said:


> Some games are even worse than on Skylake, with awful 0.1% lows in many of them. It's slower than a 5950X in multithreading while consuming almost twice the power...
> And I almost pre-ordered one yesterday...



Well, I do hope it at least makes the 5950X cheaper.


----------



## neatfeatguy (Nov 4, 2021)

Sandbo said:


> Well I do hope it at least makes 5950X cheaper


My local Micro Center has the 12900K priced at $650 and the 5950X at $720. I'm not sure about online prices, but if you can hoof it to a Micro Center location, the prices aren't very far apart.


----------



## Raendor (Nov 4, 2021)

Running an 11700 on a B560i with a 125 W PL1 and a 3080 at 1440p, I can now safely avoid any FOMO after these benchmarks. My rig is primarily for gaming, and there's no tangible advantage at all.


----------



## claylomax (Nov 4, 2021)

Any problems with older Denuvo games, W1zzard?


----------



## Vya Domus (Nov 4, 2021)

It looks to me like they're fighting for the highest power consumption crown. 10 nm, excuse me, I meant to say 7 nm, with a new architecture and still horrendous power figures. How is this possible?

Edit: I just remembered that this is supposed to have 8 "efficiency cores". Holy crap, this is beyond laughable. What if this were an "actual" 16-core CPU? What would it use? 400 W?



lexluthermiester said:


> Comparing AMD's top consumer model to Intel's now top consumer model is perfectly fair regardless of details.



No, it's not: DDR4 vs. DDR5.

And it's not about it being unfair; having just one platform on DDR5 isn't enough to infer how good these CPUs actually are. Any CPU with faster memory will also perform better. Nothing new here.


----------



## ShurikN (Nov 4, 2021)

A CPU hitting 100C and drawing 300+ watts is not something I would consider recommendable.


----------



## Raven Rampkin (Nov 4, 2021)

What's with the placeholder on page 22...


----------



## Richards (Nov 4, 2021)

You should test with a 3090... the 12900K is looking good.


----------



## W1zzard (Nov 4, 2021)

Raven Rampkin said:


> What's with the placeholder on page 22...


Intel samples came in yesterday; there's only so much that's possible without a time machine. I'm working on OC for the 12600K right now, then the others. Then bench DDR4 at various speeds, bench various DDR5 speeds, and turn that into an article; bench Windows 10 and turn that into an article; bench older CPUs. Oh, and new games are releasing too, and I have 5 SSDs and 2 graphics cards in the queue.



Richards said:


> You should test with a 3090... the 12900K is looking good.


Nah, the 3080 is a good, realistic graphics card. Even if you sent me a 3090, I'd probably stick with the 3080.


----------



## DuxCro (Nov 4, 2021)

Very hot and very power-hungry. Better to get the R9 5900X if you need lots of cores, unless you're one of the gamers who needs every frame regardless of price.


----------



## Leiesoldat (Nov 4, 2021)

Yikes, those power consumption numbers are atrocious (we're supposed to be moving toward more efficiency, Intel, not backwards). I can't imagine what the heat density and cooling problems would be like in a mini-ITX system.


----------



## TKnockers (Nov 4, 2021)

I own an 11700K; I "upgraded" from a 10850K because I had a chance to sell the 10850K for a nice sum, and I bought the 11700K for half the money I got for it. A deal and a half. Anyway, upgrading from an 11700K to a 12700K or 12900K doesn't make much sense if you're using your PC for gaming. To see any difference you'd have to be gaming with an RTX 3070 or stronger GPU at a super-low resolution like 720p; even at full HD the difference is minimal.


----------



## chodaboy19 (Nov 4, 2021)

Good thing Intel released the Alder Lake lineup close to winter. All joking aside, is the 12900K viable in an ITX setup? Dare I even think air-cooled and ITX?


----------



## Deleted member 215115 (Nov 4, 2021)

oxrufiioxo said:


> but this is still so much better than Rocket Lake.


Is it though?


----------



## Metroid (Nov 4, 2021)

Overall: as much as 20% better in single-thread at the same watts as a 5900X, and the same as a 5900X in multithread while using twice the watts. So this CPU is mainly for single-thread; for multithread, stick with the Ryzen 5900X or 5950X.

This CPU is not a step in the right direction where watts are concerned. I miss the days when Intel prioritized performance per watt. I wonder if you could lower the voltage as much as possible and see whether multithreaded performance drops only a little; it might be worth it. You'd lose some single-thread performance, I guess; in the end the 12900K might match a 5900X in single-thread with lower multithreaded performance than the 5900X. Maybe it can't be helped.

I'll just skip this and jump straight to the next AMD CPU with DDR5; things will have matured a lot more by then. If the next AMD CPU can match the 12900K in single-thread without exorbitant watts in multithread, then it's a win.


----------



## Denver (Nov 4, 2021)

The difference would be even smaller in games with minor RAM tweaks on the AM4 platform; that would only be fair, since the 12900K is running some of the best DDR5 modules on the planet. Also, why not the 3090?


----------



## defaultluser (Nov 4, 2021)

Crackong said:


> Good article as usual.
> 
> 
> 
> ...




And the best part of this whole thing is: *if the platform cost is reduced by buying a DDR4 motherboard (and not waiting months for sold-out DDR5 kits), the 5% Alder Lake performance advantage swings back in AMD's favor!*

Zen 3+ is going to crush this thing into the ground!


----------



## Valantar (Nov 4, 2021)

Seems like a pretty decent effort from Intel! Definitely major performance improvements across the board. Interesting to see that most applications seem to handle the E-cores fine, but some stumble completely with them active. I wonder whether this is down to the scheduler or the application.

Power consumption is still worrying though, and the inability to cool the CPU properly at all with a U14s - which is not a small cooler! - is pretty shocking. This is a top-of-the-line CPU, sure, but it shouldn't _require_ an AIO still.

@W1zzard two questions:
- Why are your graphs consistently ranked with the best result at the bottom? This feels very counterintuitive and weird. If nothing else I would question how good a choice this is in terms of readability/accessibility (whether for those with sight impairments, dyslexia or others), as the convention of 'best on top' is pretty universally accepted and breaking conventions like that can make reading much more difficult.
- Is your motherboard actually respecting Intel's stock power limits including Tau? Power draw numbers for stock and unlimited are near identical, which would seem to indicate that either Tau is infinite or something else fishy is going on. Shouldn't the limited version be stepping down to 125W for a steady-state power draw?


----------



## oxrufiioxo (Nov 4, 2021)

Valantar said:


> Shouldn't the limited version be stepping down to 125W for a steady-state power draw?



I think Intel ditched that approach with Alder Lake, likely to win in some MC benchmarks.


----------



## W1zzard (Nov 4, 2021)

Valantar said:


> as the convention of 'best on top'


Not sure if 'convention', and it's how we've done things since forever. Happy to change it if there's sufficient demand



Valantar said:


> Is your motherboard actually respecting Intel's stock power limits including Tau?


Of course. The Intel stock limit for the 12900K is PL1=PL2=241 W. There is no stepping down and no steady-state 125 W.



			
W1zzard asking Intel said:

> The SKU table mentions "Processor Base Power" of "125 W"; how does that work when PL1=PL2=241 W by default?





			
Answer via e-mail from Intel said:

> Intel’s processor specifications and programming guidelines allow for setting PL1 within a range of values including the base power level and PL2 level.



My translation: "The default is PL1=PL2=241 W, but you can manually change the value to any other number if you want." Which is factually 100% correct, of course.

What's more important here is what they didn't say: "but the default really is 125 W", "Wizz, you got it all wrong", "why we abolished 125 W", "is there even a 125 W mode besides the spec table", "is 125 W BS".
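The behavior under discussion can be sketched with a toy model of the PL1/PL2/Tau mechanism: the package may draw up to PL2 while a running (exponentially weighted) average of power stays below PL1, and clamps to PL1 once it doesn't. This is a simplification of the real algorithm, and the Tau value here is illustrative; the point is only that with PL1=PL2=241 W the step-down branch can never trigger.

```python
def simulate_package_power(pl1, pl2, tau_s, seconds):
    """Toy PL1/PL2/Tau model under a sustained all-core load (simplified)."""
    avg, draw = 0.0, []
    for _ in range(seconds):
        # Burst at PL2 while the average stays under PL1, else clamp to PL1
        power = pl2 if avg < pl1 else pl1
        draw.append(power)
        avg += (1.0 / tau_s) * (power - avg)  # EWMA with time constant tau
    return draw

# Classic spec-sheet behavior: bursts at 241 W, settles to a 125 W steady state
print(simulate_package_power(125, 241, 56, 600)[-1])  # 125
# Alder Lake default per the review (PL1 = PL2 = 241 W): never steps down
print(simulate_package_power(241, 241, 56, 600)[-1])  # 241
```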


----------



## Pilgrim (Nov 4, 2021)

This looks depressing, tbh. I don't know how you could give the 12900K a "Recommended" award. Recommended for what, exactly? And over which processor? The only decent processor in this lineup is the 12600K.


----------



## Darmok N Jalad (Nov 4, 2021)

Vya Domus said:


> It looks to me like they're fighting for the highest power consumption crown. 10 nm, excuse me, I meant to say 7 nm, with a new architecture and still horrendous power figures. How is this possible?
>
> Edit: I just remembered that this is supposed to have 8 "efficiency cores". Holy crap, this is beyond laughable. What if this were an "actual" 16-core CPU? What would it use? 400 W?
> 
> ...


Yeah, the E-cores do nothing for idle savings, probably due to the complex power delivery needed to supply a CPU that is allowed to consume 241 W for as long as it can. This is the concern I kept bringing up: complexity adds cost. Not only do you pay more at purchase, but operating the system isn't any cheaper than anything else; it is probably even more expensive to run, since you're consuming more power and fighting more heat. What value do the E-cores bring beyond eking out some multithreaded wins? It sounds like they end up as a net negative, since they can accidentally receive threads meant for the P-cores. It's a dubious design that requires a well-behaved scheduler.

Honest question: how many gamers do any of these top CPUs really help? I was under the impression we got past "CPU-limited" gaming performance some time ago. What does this offer over something more mid-range in terms of real-world value? I don't game anymore, so I'm curious how much value this really brings beyond being considered the fastest.


----------



## Lord_Soth (Nov 4, 2021)

SaLaDiN666 said:


> Physics PhD, may I presume?



You don't need a physics PhD to understand that Intel has pushed this processor to the clocks it needs to rival AMD, whatever power consumption that required.
The thing nobody seems to have noticed in the review is that changing the PL1, PL2, or E-core settings has almost no impact on consumption, which remains very high, if it doesn't increase.
The 125 W in the specs is a total scam...


----------



## The red spirit (Nov 4, 2021)

If I hadn't known that this was supposed to be a whole new architecture, from the results alone I would have thought that Intel just raised clocks and power limits again. The performance improvement is really underwhelming, and it couldn't decisively beat Ryzen. Energy efficiency is in the toilet; the ancient i5-10400F wins there. And it's impossible to cool by any normal means: the embarrassing FX-9590 could be cooled with a puny 120 mm AIO or a Hyper 212, but this can't be cooled with a D15 or a 280 mm AIO. This is yet another garbage release from Intel. Very disappointing.


----------



## FedericoUY (Nov 4, 2021)

It shouldn't be necessary to turn off the E-cores to get the best performance out of the CPU. I hope that gets fixed soon...


----------



## cadaveca (Nov 4, 2021)

W1zzard said:


> Not sure if 'convention', and it's how we've done things since forever. Happy to change it if there's sufficient demand
> 
> 
> Of course. The Intel stock limit for the 12900K is PL1=PL2=241 W. There is no stepping down and no steady-state 125 W.
> ...


How much of this is up to the board maker...?


----------



## W1zzard (Nov 4, 2021)

cadaveca said:


> How much of this is up to the board maker...?


They may set any value, but the Intel default for the 12900K is PL1=PL2=241 W. Many boards actually set PL1=PL2=maximum=4095 W by default, and have for years; it's that whole ASUS MultiCore Enhancement debate again. I always test my CPUs at stock power limits and provide an additional data point with all power limits removed.


----------



## luches (Nov 4, 2021)

Looking at all the charts, what are the odds of Zen 3+ completely closing the gap and taking back the crown? They did say an average 15% uplift in games.


----------



## W1zzard (Nov 4, 2021)

Added the CPU-Z screenshot for the 5.0 GHz OC, the OC text, and OC results for power and temps. Now benching OC 5.0 performance; will have results in around 4 hours.


----------



## Flanker (Nov 4, 2021)

THANATOS said:


> Hi Flanker. You mentioned on pctuning.cz that you managed 28,593 points with your R9 5950X in Cinebench R23. Can you clarify your setup and whether you had PBO enabled? W1zzard managed only 25,813 points in this review, and as I checked, many reviews have even lower scores than his.


Unfortunately, that's another person with the same username.


----------



## Vya Domus (Nov 4, 2021)

Darmok N Jalad said:


> Yeah, the E-cores do nothing for idle savings, probably due to the complex power delivery needed to supply a CPU that is allowed to consume 241 W for as long as it can. This is the concern I kept bringing up: complexity adds cost. Not only do you pay more at purchase, but operating the system isn't any cheaper than anything else; it is probably even more expensive to run, since you're consuming more power and fighting more heat. What value do the E-cores bring beyond eking out some multithreaded wins? It sounds like they end up as a net negative, since they can accidentally receive threads meant for the P-cores. It's a dubious design that requires a well-behaved scheduler.



To me, power consumption isn't usually a problem, but when you have a CPU that outputs as much heat as a mid-range GPU, it starts to become kind of insane. Intel's E-cores would make sense in a laptop, but now we know they're completely worthless, because they still use a ton of power anyway.



Darmok N Jalad said:


> Honest question: how many gamers do any of these top CPUs really help? I was under the impression we got past "CPU-limited" gaming performance some time ago. What does this offer over something more mid-range in terms of real-world value? I don't game anymore, so I'm curious how much value this really brings beyond being considered the fastest.



You're right, games these days are basically never CPU-limited unless you specifically go looking for it. But there are many who still won't shut up about how getting 400 FPS instead of 350 in CS:GO or something makes a huge difference, so here we are.


----------



## rvalencia (Nov 4, 2021)

AMD may need to release XT Zen 3 variants, i.e. make the overclock official. 3D V-Cache Zen 3+ would be overkill.


----------



## Lightofhonor (Nov 4, 2021)

With an SFF PC it's pretty tough to run a 5950X, but it seems like it would be impossible to run the new 12900K without undervolting or power-limiting it.


----------



## Valantar (Nov 4, 2021)

W1zzard said:


> Not sure if 'convention', and it's how we've done things since forever. Happy to change it if there's sufficient demand


Yeah, I think I've noticed it before as well; I guess this is just the first time in a while I've been looking at this many charts at once. As for questioning whether "best on top" is a convention: universally used phrases such as "who is on top", "top of the line", "topping the charts", etc. should be plentiful evidence for that being the dominant convention. The only place I'm used to seeing "number one at the bottom" is in listicle-type writing, where the point is to make readers go through the entire list and not just look at number one and leave. At least that line of reasoning doesn't apply here.

Another thing: you're not really consistent about it. At least in the 12600K review's power consumption section you have the lower numbers (i.e. the better ones) on top, while in the energy efficiency part (on the same page) you have the higher (worse) numbers on top. The same is true for the temperature graph in this review: lowest/best on top. And IIRC I've seen similar inconsistency previously. I could understand a hard-line "the higher value is always on top, regardless of whether it's good or bad" rule, but that doesn't seem to be what's happening either.


W1zzard said:


> Of course. Intel stock limits for the 12900K is PL1=PL2=241 W. There is no stepping down and no steady state 125 W.


So they removed TDP and replaced it with two more informative specifications, just to immediately render one of them irrelevant. Great move!


W1zzard said:


> My translation is: "The default is PL1=PL2=241, but you can change the value, manually, to any other number if you want". Which is factually 100% correct of course.


That sounds to me like they're talking about motherboard manufacturers; "programming guidelines" are not something end users are privy to, to my knowledge. Which I guess just means that, as you say, it's MCE all over again.


W1zzard said:


> What is more important here is what they didnt say: "but the default really is 125W", "wizz you got it all it all wrong", "why we abolished 125 W" "is there even 125 W besides the specs table" "is 125 W bs"


So effectively the only thing that's changed is that PL2 is now listed in the spec table. I guess that's... "progress"?


----------



## robert3892 (Nov 4, 2021)

Using DDR5-6000 memory in your Alder Lake system versus DDR4-3600 skews the results, in my opinion. Why didn't you use higher-frequency DDR4?


----------



## ShurikN (Nov 4, 2021)

chodaboy19 said:


> Good thing intel released the Alder Lake line-up close to winter, all joking aside, is 12900K viable in an ITX setup? Can I dare to think air cooled and ITX???


Dave2D tried. He says it can't be done.


----------



## maxfly (Nov 4, 2021)

Well done! I honestly appreciate you answering all our questions, W1zzard.
Looking forward to seeing how the DDR4 boards fare. The added 12900K heat means nothing to me personally; my loop is configured to handle anything. But the combination of DDR5 and blown-up motherboard expense is frustrating, to say the least. It was expected that DDR5 pricing would be stupid, but the motherboard manufacturers need a good kick in the rear.
AMD will continue to be my go-to until both come down in price significantly.
Even if I were upgrading my own rig, I'm not into early-release memory. So by the time something acceptable is released at a reasonable price, it'll be time to evaluate AMD's next swing at the fences. Not a bad thing.


----------



## cadaveca (Nov 4, 2021)

robert3892 said:


> Using DDR5-6000 memory in your Alder Lake system versus DDR4-3600 skews the results, in my opinion. Why didn't you use higher-frequency DDR4?


Given how memory stability is, I'd say the equivalents are basically there. I mean, I get what you're saying, but then shouldn't we be seeing something like DDR5-6600-6800?



ShurikN said:


> Dave2D tried. Says it cant be done.


In that little blue box? LOL.


----------



## Pilgrim (Nov 4, 2021)

robert3892 said:


> Using DDR5-6000 memory in your Alder Lake system versus DDR4-3600 skews the results, in my opinion. Why didn't you use higher-frequency DDR4?


It won't matter much, since Zen 3's Infinity Fabric can't clock much higher than 1800 MHz (some lucky chips will go up to 2000 MHz). You need a 1:1 Infinity Fabric to memory clock ratio to get the best performance, and 3600 MT/s is already the sweet spot.
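The 1:1 sweet spot follows directly from the arithmetic: DDR transfers twice per memory clock, so DDR4-3600 means an 1800 MHz memory clock, which sits right at the typical Zen 3 FCLK ceiling. A small sketch (the function name is my own):

```python
def fclk_for_1to1_mhz(ddr_mt_per_s):
    # DDR = double data rate: two transfers per clock, so the actual memory
    # clock is half the MT/s figure. Zen 3 performs best with Infinity
    # Fabric (FCLK) matched 1:1 to that clock.
    return ddr_mt_per_s / 2

for kit in (3200, 3600, 4000):
    print(kit, "->", fclk_for_1to1_mhz(kit), "MHz FCLK")
# DDR4-3600 -> 1800 MHz, the common FCLK limit; DDR4-4000 would need
# 2000 MHz FCLK for 1:1, which only lucky chips manage.
```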


----------



## ShurikN (Nov 4, 2021)

cadaveca said:


> In that little blue box? LOL.


Not really sure what you're getting at with this comment.


----------



## Valantar (Nov 4, 2021)

Pilgrim said:


> It won't matter much, since Zen 3's Infinity Fabric can't clock much higher than 1800 MHz (some lucky chips will go up to 2000 MHz). You need a 1:1 Infinity Fabric to memory clock ratio to get the best performance, and 3600 MT/s is already the sweet spot.


3600 is at least a reasonable expectation that everyone can hit at 1:1, which is a good starting point for benchmarking. IMO the bar should be either that or whatever the spec of the chip is, which would leave Ryzen at 3200 and these at... 4800?

At least it's good to see that these can indeed handle faster memory than Intel specifies. That table, with DDR5-4800 only supported for single-rank sticks on 1DPC boards, and even 1DPC installed on 2DPC boards rated lower? That's pretty terrible.


ShurikN said:


> Dave2D tried. Says it cant be done.


Can probably be done if you're willing to manually configure your power limits to something more sensible, with some undervolting to try and regain some of that performance. Would be interesting to see where this ends up on the benchmarks.


----------



## b4psm4m (Nov 4, 2021)

@W1zzard: it would be interesting to compare multicore scores vs the 5950X with the 12900K's power limit set to what the 5950X draws. Can that be done?


----------



## ShurikN (Nov 4, 2021)

Valantar said:


> Can probably be done if you're willing to manually configure your power limits to something more sensible, with some undervolting to try and regain some of that performance. Would be interesting to see where this ends up on the benchmarks.


Yeah but at that point you might as well get a 5900X, and manually configure nothing. And it'll be cheaper.


----------



## Deleted member 215115 (Nov 4, 2021)

Pilgrim said:


> It won't matter much since Zen 3 Infinity Fabric can't clock much higher than 1800 MHz (some lucky chips will go up to 2000 MHz). They need 1:1 Infinity Fabric to DRAM frequency to get the best performance. 3600 MT/s is already the sweet spot


1900 is pretty much guaranteed with Zen 3, but anything higher than that is nearly impossible to get stable. Some chips can bench at 2100-2133, though.


----------



## Valantar (Nov 4, 2021)

ShurikN said:


> Yeah but at that point you might as well get a 5900X, and manually configure nothing. And it'll be cheaper.


Yep. It'll be really interesting to see performance comparisons between these with some sort of power limit on the 12900K.


----------



## cadaveca (Nov 4, 2021)

ShurikN said:


> Not really sure what you're getting at with this comment.


He cooled it on AIO water. So it can be done. It was also a cooler-mounting thing. The board's VRMs were in the way. Just gotta find the right cooler to fit, or the right board. It'll be hot, but so are laptops.


----------



## robert3892 (Nov 4, 2021)

cadaveca said:


> given how memory stability is, i'd say the equivalents are basically there. I mean, i get what you're saying, but then shouldn't we be seeing like DDR5-6600-6800?
> 
> 
> In that little blue box? LOL.


The memory should be as close in frequency as possible, otherwise the reviewer is giving the new-gen Intel CPUs an advantage. I would say to perform the test again with DDR5-4800 or 5200 memory and use higher-frequency DDR4 memory.


----------



## Valantar (Nov 4, 2021)

robert3892 said:


> The memory should be as close in frequency as possible otherwise the reviewer is giving the new Gen Intel CPUs an advantage. I would say to perform the test again with DDR5 4800 or 5200 memory and use a higher frequency DDR4 memory.


Higher-frequency DDR4 means running out of sync with IF on AMD, or in Gear 2 on Intel, which generally performs worse than lower speeds in 1:1/Gear 1 (outside of strictly bandwidth-bound workloads, of which there are essentially none in this test suite or any normal consumer workload). Also, remember that DDR5-6000 CL36 has much higher absolute latency than DDR4-3600 CL16: 16 cycles / 1800 MHz = 8.89 ns latency, versus 36 cycles / 3000 MHz = 12 ns latency. And to be clear, most consumer workloads are far more dependent on memory latency than memory bandwidth (with iGPU gaming being the main exception).
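The absolute-latency comparison above is easy to reproduce; a minimal sketch (function name is my own):

```python
# Absolute CAS latency in nanoseconds: CL cycles divided by the memory
# clock, which is half the DDR transfer rate.
def cas_latency_ns(ddr_rate_mts: int, cl: int) -> float:
    mclk_mhz = ddr_rate_mts / 2          # e.g. DDR4-3600 -> 1800 MHz
    return cl / mclk_mhz * 1000          # cycles / MHz -> nanoseconds

print(round(cas_latency_ns(3600, 16), 2))  # 8.89  (DDR4-3600 CL16, in ns)
print(round(cas_latency_ns(6000, 36), 2))  # 12.0  (DDR5-6000 CL36, in ns)
```

So despite the much higher transfer rate, the DDR5 kit's first-word latency is about a third higher.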


----------



## rrrrex (Nov 4, 2021)

What's the reason to make E-cores? Idle consumption isn't great with them, and overall performance is about the same. Maybe Windows should be improved a lot in that regard, something like having the OS and its services run on the E-cores and keeping the other cores for work applications.


----------



## Pilgrim (Nov 4, 2021)

rrrrex said:


> What's the reason to make E-cores? Idle consumption isn't great with them, and overall performance is about the same. Maybe Windows should be improved a lot in that regard, something like having the OS and its services run on the E-cores and keeping the other cores for work applications.


In the Anandtech review, they tested with just the E-cores enabled, and they top out at 50 W at full utilization. That's seriously impressive. So the problem is with the P-cores, I guess; they are just ridiculously inefficient, to the point of negating any gains from the E-cores.


----------



## Valantar (Nov 4, 2021)

rrrrex said:


> What's the reason to make E-cores? Idle consumption isn't great with them, and overall performance is about the same. Maybe Windows should be improved a lot in that regard, something like having the OS and its services run on the E-cores and keeping the other cores for work applications.


It allows them to have more than 8c16t without ballooning die size (each 4c E core cluster is slightly larger than a single P core; this die is as large as the 10900K at 208mm²), and it significantly increases MT performance in apps capable of making use of them. They're not blazing fast, but they aren't slow either, and there are 8 of them after all. They're not for idle power consumption reduction, at least not in desktops. Should do that job decently in laptops, though we'll see if they're able to implement them so that all P cores can go to sleep (at least disabling all P cores on these desktop chips is not possible) while keeping the E cores running.


Pilgrim said:


> On the Anandtech review, they have tested with just the E-Cores enabled and they top out at 50W full utilization. That's seriously impressive. So the problem is with the P-Cores I guess, they are just ridiculously inefficient to the point of negating any gains from the E-Cores.


All the more reason for them to add them - if you're going for a 250W power budget, better to spend 50W on great efficiency and 200W on crap efficiency than 250W on crap efficiency. It'll be _really_ interesting to see how mobile versions of these chips stack up against the M1 Pro and Max!


----------



## R0H1T (Nov 4, 2021)

Pilgrim said:


> they are just ridiculously inefficient to the point of negating any gains from the E-Cores.


The bigger issue seems to be the unlimited tau/turbo ~


----------



## Deleted member 215115 (Nov 4, 2021)

rrrrex said:


> What's the reason to make E-cores? Idle consumption isn't great with them, and overall performance is about the same. Maybe Windows should be improved a lot in that regard, something like having the OS and its services run on the E-cores and keeping the other cores for work applications.


Software has always been behind but maybe this transition to big.LITTLE will change that.



Valantar said:


> All the more reason for them to add them - if you're going for a 250W power budget, better to spend 50W on great efficiency and 200W on crap efficiency than 250W on crap efficiency. It'll be _really_ interesting to see how mobile versions of these chips stack up against the M1 Pro and Max!


There's no way they'll be a match for M1.


----------



## R0H1T (Nov 4, 2021)

rares495 said:


> There's no way they'll be a match for M1.


With the M1(x) you aren't just comparing the chip. It's the entire Mac platform, so a truly apples to apples comparison will be hard to come by.


----------



## Valantar (Nov 4, 2021)

rares495 said:


> There's no way they'll be a match for M1.


Not in ST, no, as the M1 essentially ties the best ST cores from both Intel and AMD. But for MT? It could be pretty close, if Intel is able to run 8 E cores at 3.9GHz at 50W. Remember, the M1 Pro/Max has essentially no power ratings or limits at all, and can range from 40 to ~100W under MT loads depending on the load and thermals.



R0H1T said:


> With the M1(x) you aren't just comparing the chip. It's the entire Mac platform. So a truly apples to apples comparison will be hard to come by.


That doesn't really matter if you're controlling the workload properly, i.e. compiling a known test suite yourself like AnandTech does, or running tests in multi-platform applications like Creative Suite. Both have pros and cons, but both are valid comparisons in their own way (the former is as close to a level playing field as you'll get; the latter is as close to real-world as you'll get). The issues start arising if you're running synthetic benchmarks that you have no control over (i.e. GeekBench), or are using different software that "kind of does the same things" like some bad reviewers tend to do.


----------



## Pilgrim (Nov 4, 2021)

Valantar said:


> It allows them to have more than 8c16t without ballooning die size (each 4c E core cluster is slightly larger than a single P core; this die is as large as the 10900K at 208mm²), and it significantly increases MT performance in apps capable of making use of them. They're not blazing fast, but they aren't slow either, and there are 8 of them after all. They're not for idle power consumption reduction, at least not in desktops. Should do that job decently in laptops, though we'll see if they're able to implement them so that all P cores can go to sleep (at least disabling all P cores on these desktop chips is not possible) while keeping the E cores running.
> 
> All the more reason for them to add them - if you're going for a 250W power budget, better to spend 50W on great efficiency and 200W on crap efficiency than 250W on crap efficiency. It'll be _really_ interesting to see how mobile versions of these chips stack up against the M1 Pro and Max!


I honestly think Intel should spend more time extracting more performance from those E-cores. They're actually faster than Skylake cores while basically sipping power. Very impressive.


----------



## Valantar (Nov 4, 2021)

Pilgrim said:


> I honestly think Intel should spend more time extracting more performance from those E-cores. They're actually faster than Skylake cores while basically sipping power. Very impressive.


Yep, I was just thinking whether we might see Intel going hard in that direction in the near future architecturally. Though it's not unlikely for those cores to have a _hard_ frequency limit that's much lower than the P cores, so their ST performance might suffer. Also makes me wonder what would happen if they gave the E core clusters a massive L2 cache like Apple's M1 P core clusters.


----------



## R0H1T (Nov 4, 2021)

Valantar said:


> but both are valid comparisons in their own way (the former is as close to a level playing field as you'll get; the latter


Not really, no - you're still bound by the OS & scheduler. Looking at some of the current results, Win11 still needs a bit of work handling a lot of these tasks properly. Apple probably has at least a decade of lead over MS in this, and a similar margin w.r.t. Intel. The hardware scheduler (Thread Director) on ADL is interesting, but it also raises the question of how it will work with, or maybe override, the built-in Windows scheduler for certain tasks.


----------



## Valantar (Nov 4, 2021)

R0H1T said:


> Not really, no - you're still bound by the OS & scheduler. Looking at some of the current results, Win11 still needs a bit of work handling a lot of these tasks properly. Apple probably has at least a decade of lead over MS in this, and a similar margin w.r.t. Intel. The hardware scheduler (Thread Director) on ADL is interesting, but it also raises the question of how it will work with or override the built-in Windows scheduler.


Wait, Apple has a decade's lead for their 1-year-old desktop architecture? Remember, macOS isn't iOS. Also: the OS is out of the control of literally everyone except Apple and MS - software vendors, users, Intel, AMD, doesn't matter. It is what it is, and it is accounted for in testing. If Apple's OS and scheduler are doing a better job than Windows, does that undermine the performance or efficiency of their cores? Of course not. The integration likely helps them, but it's not what is causing their 2-3x efficiency lead. And besides, the performance you get is the performance you get in the real world. Saying "but one has an OS/scheduler advantage" doesn't change that. Performance is ultimately performance.


----------



## Broken Processor (Nov 4, 2021)

Was hoping to see something to keep Zen3d prices lower on release but sadly this ain't it, still if my home heating ever breaks I know what to replace it with.


----------



## Tom Sunday (Nov 4, 2021)

Crackong said:


> Reality:
> Win some, Lose some
> Double the power consumption
> Double the heat
> ...


Money is my reality: I only care about Intel stock doing me a favor like AMD did last year - doubling my AMD money in less than 10 months' time. Now the big 401K money managers, their contributing clients, and those who still have REAL JOBS are conservatively looking for $85-plus per share come Intel's 4th-quarter report. And it looks like the Intel boys are on the right track! Yes, AMD had their moment in time, but Wall Street, as we all know, has no memory. "What have you done for me lately, AMD?" keeps coming up. Win some and lose some.


----------



## R0H1T (Nov 4, 2021)

Valantar said:


> Apple has a decade's lead for their 1-year old desktop architecture


Apple has experience with big.LITTLE going on a decade now. Yes, it isn't iOS, but you're telling me their experience with the Axx chips and ARM over the years won't help them here? Yes, technically MS also had Windows on ARM, but we know where that went.


Valantar said:


> If Apple's OS and scheduler are doing a better job than Windows, does that undermine the performance or efficiency of their cores?


No, of course not, but without the actual chips out there, how can MS optimize for them? You surely don't expect Win11 to be 100% perfect right out of the gate with something that's basically releasing after the OS was RTMed? Real-world user feedback & subsequent telemetry data will be needed to better tune for ADL ~ that's just reality. Would you say that testing AMD with those skewed L3 results was just as fair?


----------



## Oberon (Nov 4, 2021)

A year late and not a clean sweep. Kind of disappointing to me.


----------



## dgianstefani (Nov 4, 2021)

Lightofhonor said:


> With a SFF PC it's pretty tough to run a 5950X, but seems like it would be impossible without undervolting/limiting the new 12900K.


Rubbish.


----------



## Denver (Nov 4, 2021)

Pilgrim said:


> It won't matter much since Zen 3 infinity fabric can't clock much higher than 1800Mhz (Some lucky chips will go up to 2000Mhz). They need 1:1 infinity fabric to DRAM frequency to get the best performance. 3600MT/s is already the sweet spot







So why does the GPU test bench use 4000 MHz modules with the 5800X? Also, previous benchmarks show even higher FPS: 112 vs 96.


----------



## R-T-B (Nov 4, 2021)

ToxicTears said:


> Thumbs up:
> 
> 10 nanometer production process
> really?


Vs 14 nm before? Yes.


----------



## lola46 (Nov 4, 2021)

It's sad that Intel is still losing in some benchmarks, even after nearly two generations.


----------



## Dr. Dro (Nov 4, 2021)

What a shame, I was told for months that AMD would go bankrupt today and I had hoped to buy some of their worthless assets to flip as I fully intend to buy one of the 5 Zen 3D prototype units they found the coins to manufacture before sinking into eternal debt, but instead I get 3% less performance overall and 3% more in games, no bankruptcy in sight. Sad! 

On a serious note, I'm impressed. Alder Lake is an excellent platform and a feat of engineering, this shows. The power consumption may still be on the wild side and some compromises were made like the removal of AVX-512 support, but I understand what Intel wants to do here - they're increasingly going to focus on the performance and efficiency of the small cores going forward, which should eventually rival and supplant the high-performance ones entirely, while retaining the major advantages of that design. This is the Intel we want, an Intel with enormous engineering prowess, competitive prices and high availability.

Eager to see AMD's response, which thankfully will be a drop-in upgrade for me. That way my brother gets my 5950X, I flip the 3900XT I gave him when I upgraded, and I get a modestly priced upgrade for everyone by paying roughly half of a single CPU's price. Even though the GPU market is sad beyond belief right now, it's awesome to see that at least in CPU land, things are going well.

Cheers


----------



## Tom Sunday (Nov 4, 2021)

lexluthermiester said:


> So to sum up, Intel's delivered the goods, but this model runs hot and is pricey. Seems to have taken back the performance crown. Imagine if they added in another set of core packs....





Zareek said:


> The power consumption and temps are concerning but you can't argue with that performance.


I think that once the dancing here is over and the 'inconsequential' stuff like heat, pricing, platform upgrades, and power consumption is set aside, the Intel win will finally be real and settle in with the many spectators here. Now let's see Intel stock jumping (or not) in the weeks ahead. Because in the end it's all about the money. Nothing else matters, and reality, as we all know, bites.


----------



## WhoDecidedThat (Nov 4, 2021)

In Igor's Lab review (<- linked here) they measure CPU power consumption when gaming, and watts consumed per FPS.

Just putting it out there as an additional data point to consider.


----------



## ARF (Nov 4, 2021)

Dr. Dro said:


> What a shame, I was told for months that AMD would go bankrupt today and I had hoped to buy some of their worthless assets to flip as I fully intend to buy one of the 5 Zen 3D prototype units they found the coins to manufacture before sinking into eternal debt, but instead I get 3% less performance overall and 3% more in games, no bankruptcy in sight. Sad!
> 
> On a serious note, I'm impressed. Alder Lake is an excellent platform and a feat of engineering, this shows. The power consumption may still be on the wild side and some compromises were made like the removal of AVX-512 support, but I understand what Intel wants to do here - they're increasingly going to focus on the performance and efficiency of the small cores going forward, which should eventually rival and supplant the high-performance ones entirely, while retaining the major advantages of that design. This is the Intel we want, an Intel with enormous engineering prowess, competitive prices and high availability.
> 
> ...



Alder Lake is a mediocre product in the best case, and a meh product in the worst case.

Intel has been sabotaging its own sales figures with this arrogant and stupid policy of always offering heavily castrated products compared to the top available Ryzen (for example, the Ryzen 9 5950X with 16 cores and 32 threads).

The 12900K is only 8 cores and 16 threads, coupled with 8 small cores.
It should have been 16 full-fat big performance cores, with production outsourced to TSMC's proper 7nm process.

Intel is done.

The only thing that impresses is that the performance delta between the Ryzen 9 5950X and the 10900K and slower 11900K was so gigantic that now this miserable 12900K looks somewhat acceptable.

Well, it is not.


----------



## dyonoctis (Nov 4, 2021)

Pilgrim said:


> I honestly think Intel should spend more time extracting more performance from those E-Cores. They're actually faster than Skylake cores while basically sipping power. Very impresseive


Reminds me of what happened when they realized that the Pentium M had better potential than whatever they were doing with the Pentium 4.


----------



## Dr. Dro (Nov 4, 2021)

ARF said:


> Alder Lake is a mediocre product in the best case, and a meh product in the worst case.
> 
> Intel has been sabotaging its own sales figures but this arrogant and stupid policy to always offer heavily castrated products compared to the top available Ryzen (for example the Ryzen 9 5950X with 16 cores and 32 threads).
> 
> ...



That is one way of looking at things, but as a 5950X owner myself, I disagree. Like the original Ryzen (1800X), Alder Lake represents a new type of processor in its relative infancy. There are myriad improvements, and some are quite significant (like the hardware thread scheduler); well, as the saying goes, Rome wasn't built in a day.

For the reasons I mentioned before - as well as a preference for a more mature platform - I plan on grabbing the Zen 3D chip once it comes out. But that's a dead end for the aging AM4 platform, while Z690 will live on to Raptor Lake and serve as the foundation for Intel's future processors as well. In fact, Zen 3D itself is a stepping stone: the increased cache sizes are the first foray into AMD's own next-gen CPU technology, and those chips will be but a taste of what Zen 4 onwards will offer us.


----------



## W1zzard (Nov 4, 2021)

robert3892 said:


> Using DDR5 6000 memory in your Alder Lake system versus DDR4 3600 skews the results in my opinion. Why didn't you use higher frequency DDR4?


3600 low latency is a fair sweet spot for Zen 3, Rocket Lake and Comet Lake imo. 

What speed would you like me to use? Dual rank? Gear 2 on Rocket Lake? Possibly non 1:1 IF on Zen 3? WHEA errors on Zen 3?



Valantar said:


> or whatever the [memory] spec of the chip is


Anandtech does that IIRC, but I feel that for our enthusiast audience it's reasonable to go beyond the very conservative memory spec and use something that's fairly priced and easily attainable.


----------



## Tom Sunday (Nov 4, 2021)

During the first quarter of 2021, Intel spent $3.62 billion on R&D, while AMD spent $610 million. There is a big message in these numbers. Like I said many times… in the end it's all about the money. The golden rule: "He who has the gold makes the rules."


----------



## dyonoctis (Nov 4, 2021)

Vya Domus said:


> To me power consumption isn't usually a problem, but when you have a CPU that outputs as much heat as a mid range GPU it's starting to become kind of insane. Intel's E-cores would make sense in a laptop but now we know they're completely worthless because they still use a ton of power anyway.


Looking at the current state of things, it doesn't look like Intel can afford to go beyond 8 P-cores, at least not without severely lowering their frequency. Right now, their big.LITTLE looks a bit like a shortcut taken to get better MT performance without making a 400 W CPU that needs a dual-360 custom loop just for itself. Which makes me really curious how Alder Lake mobile will behave. It doesn't matter if they get the best score on Geekbench when the M1 Max's biggest strength is sustaining that performance even on battery.


----------



## RandallFlagg (Nov 4, 2021)

Tom Sunday said:


> During the first quarter of 2021, Intel spent $3.62 billion on R&D, while AMD spent $610 million. There is a big message here with these numbers. Like I said many times…in the end its all about the money. The golden rule: "He who has the gold makes the rule."



Another false comparison. Tell us, what did TSMC spend on R&D in that same period? Now realize: AMD does not manufacture chips; Intel and TSMC do.


----------



## ARF (Nov 4, 2021)

Pilgrim said:


> I honestly think Intel should spend more time extracting more performance from those E-Cores. They're actually faster than Skylake cores while basically sipping power. Very impresseive





dyonoctis said:


> Reminds me of what happened when they realized that the Pentium m had a better potential than whatever they did with the pentium 4



I see where you are going with this.
What you are actually proposing is that Intel should cut the P-cores altogether and glue together as many E-cores as possible - for example, 32 E-cores on a single die.

And see what happens within a 105-watt power budget.


----------



## Dr. Dro (Nov 4, 2021)

ARF said:


> I see where you are going to.
> Actually what you propose is that Intel should cut the P cores altogether and glue as many E cores as possible.
> For example 32 E cores on a single die.
> 
> And see what happens in a 105-watt power budget



That is actually the ultimate objective: to bring the big cores' high performance into the high-efficiency cores, while retaining the advantages of that design. It's ingenious and very hard to pull off, but if anyone can do it, Intel can. A strong hint that this is probably true in the long run is that RPL will have twice the number of little cores.

That way they improve performance, power efficiency and, more importantly, density all in one fell swoop.


----------



## RandallFlagg (Nov 4, 2021)

Meh..  12900K sold out, now selling for $1350 $1599 on Amazon.


----------



## Manoa (Nov 4, 2021)

blanarahul, I see what you are saying, but you have to understand this is only one load out of the many you might need at any time. Yes, today you may be playing games and doing some internet, but what if tomorrow you need virtualization? Or x264? Or you need to compile something? Having a processor that can do all things well is more important than having a processor that does only one thing well. Besides, when you do need to do those things on Intel, you'll pay it all back. And don't forget that you can use affinity on the games and other processes so that they don't wake the other die on Zen.


----------



## HenrySomeone (Nov 4, 2021)

ARF said:


> Alder Lake is a mediocre product in the best case, and a meh product in the worst case.
> 
> Intel has been sabotaging its own sales figures but this arrogant and stupid policy to always offer heavily castrated products compared to the top available Ryzen (for example the Ryzen 9 5950X with 16 cores and 32 threads).
> 
> ...


LMAO, the fanboyism, the delusions! Intel is done, hahaha! We'll see what drivel you'll be spouting two years from now when Meteor Lake hits, while Ryzen will have barely gotten to TSMC 5nm.


----------



## TheEndIsNear (Nov 4, 2021)

Wow. Everything except the power consumption. Yikes. I'll stick with my 10900K and 5600X. At 4K you're limited by the video card anyway, and the 10900K sucks enough power and puts off enough heat as it is.


----------



## Raendor (Nov 4, 2021)

ARF said:


> Alder Lake is a mediocre product in the best case, and a meh product in the worst case.
> 
> Intel has been sabotaging its own sales figures but this arrogant and stupid policy to always offer heavily castrated products compared to the top available Ryzen (for example the Ryzen 9 5950X with 16 cores and 32 threads).
> 
> ...


Intel is done? How? Even though they're not the undisputed kings, they still have competitive products and pricing for non-halo models. Unlike AMD, they never in recent years sank to something like Bulldozer's shit-tier level, which kept being pushed for years and years; it took not Zen, Zen+, or Zen 2, but Zen 3 to actually get to decent performance. AMD came back, and it's great to see after the Bulldozer disaster, but it's only upwards for Intel from here too, as the platform matures with faster DDR5 and improvements in the following Lakes.


----------



## DeathtoGnomes (Nov 4, 2021)

RandallFlagg said:


> Meh..  12900K sold out, now selling for $1350 $1599 on Amazon.


Opening the 'New Offers' page, one price is $1,097, which I think is a typo, while others range from $1,477 up to $1,999.

And let me say I'm still looking for that leaked "it's 50% faster than AMD" bit.

---

My take from this review: each camp is better at some things than the other, the DDR4 vs DDR5 argument notwithstanding.


----------



## RandallFlagg (Nov 4, 2021)

DeathtoGnomes said:


> opening the 'New Offers' page, one  price is $1097,  which I  think is a typo, while  others are  $1477 up to $1999
> 
> and let me say I'm looking for that leaked "its 50% faster over AMD" bit.



It actually changed while I was typing that one-liner.  Scalpers looking for the highest price the market will bear, I suspect.


----------



## Denver (Nov 4, 2021)

blanarahul said:


> According to Igor Lab's review (<- linked here) where they measure CPU power consumption when gaming -
> 
> 
> 
> ...


Not really.


----------



## HenrySomeone (Nov 4, 2021)

Ah yes, by far the most pro-AMD-biased outlet out there (among the larger ones, that is - not counting the full-on, out-in-the-open fanboys like Moore's Law is Dead and the like); they're certainly the ones to trust!


----------



## NuCore (Nov 4, 2021)

Eggs were once fried on Fermi (the GTX 480); now it'll be boiling water for coffee or tea on the 12900K. (No need to thank me for the idea - just run the test, those of you who have this CPU, of course.)


----------



## mb194dc (Nov 4, 2021)

The 12900K is basically overclocked out of the box. What I'm wondering is: if I get a 5950X, overclock it, and push its power envelope to ~300-400 W with appropriate cooling to match, what results am I going to get in benchmarks?


----------



## RandallFlagg (Nov 4, 2021)

Denver said:


> Not really.
> View attachment 223695



Config?


----------



## Pilgrim (Nov 4, 2021)

ARF said:


> I see where you are going to.
> Actually what you propose is that Intel should cut the P cores altogether and glue as many E cores as possible.
> For example 32 E cores on a single die.
> 
> And see what happens in a 105-watt power budget


LOL, I'd buy that. Extrapolating from the Anandtech review, a 32 E-core processor would use roughly 192 W at max utilization.
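That extrapolation is just naive linear scaling; a sketch, assuming the ~48 W-per-8-core-cluster figure implied by the 192 W estimate (Anandtech's measured number is "around 50 W", and this ignores uncore/DRAM overhead entirely):

```python
# Naive linear scaling of E-core cluster power (illustrative assumption:
# ~48 W at full load for one 8 E-core cluster, per the estimate above).
ECORE_CLUSTER_W = 48.0
CORES_PER_CLUSTER = 8

def est_power_w(n_cores: int) -> float:
    # Scale per-core power linearly; real chips would pay extra for
    # interconnect, memory controller, and clock/voltage differences.
    return ECORE_CLUSTER_W / CORES_PER_CLUSTER * n_cores

print(est_power_w(32))  # 192.0
```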


----------



## RandallFlagg (Nov 4, 2021)

PC World on power with the 12900K.

Are we done yet?


----------



## fevgatos (Nov 4, 2021)

BSim500 said:


> So +11% faster (1080p) falling to 7% faster (1440p) in games on average for +23% higher power consumption on a newer 10nm process vs 2-gen old i9-10900K on 14nm process and 92-100c temps even with a Noctua NH-U14S? That's... not very impressive...


Uhm, it OBVIOUSLY does not consume 23% more power OR run at 100°C in gaming. It actually consumes less than or equal power to AMD CPUs in gaming, and at pretty much the same temperatures.


----------



## Deleted member 215115 (Nov 4, 2021)

NuCore said:


> Once eggs were fried on Fermi (GTX 480), now it will be boiling water for coffee or tea on 12900K (don't thank for the idea, just do such a test - who has this CPU of course).


Will it boil the coolant in the custom loop that's needed to cool it? Probably. Only time will tell.


----------



## BSim500 (Nov 4, 2021)

fevgatos said:


> Uhm, it OBVIOUSLY does not consume 23% more power OR run at 100°C in gaming. It actually consumes less than or equal power to AMD CPUs in gaming, and at pretty much the same temperatures.


People buy i9s, and 24-thread CPUs in general, for more than just gaming (and TPU isn't just a pure gaming site). Many gamers do video editing and have other mixed workloads too. If I wanted just a pure gaming chip, given that barely 0.4-4.0% separates the i5-12600K from the i9-12900K (down to just 0.4% at 4K resolution), I'd buy the cheaper chip and spend more on the GPU. Or not upgrade at all and spend even more on the GPU. Put under heavy productivity load, though, the temps and power are what they are, and I don't see why they should be arbitrarily excluded simply because "it's not gaming", or why a GPU bottleneck should be used to measure CPU power usage in a CPU review. The flip side of that is declaring the RTX 3070 a "73 W card like the 1050 Ti" by pairing it with a really slow CPU that matches the 60 Hz VSync numbers, cherry-picking that as the "real power figures", and throwing out the window every other measurement that actually loads the component being tested at 100%...


----------



## Valantar (Nov 4, 2021)

R0H1T said:


> Apple has experience with  big.LITTLE for close to a decade, yes it isn't iOS but you're telling me that their experience with Axx chips or ARM over the years won't help them here? Yes technically MS also had Windows on ARM but we know where that went.


CPU development cycles for a new arch are in the ~5 year range. In other words, MS has known for at least 3+ years that Intel is developing a big+little-style chip. Test chips have been available for at least a year. If MS haven't managed to make the scheduler work decently with that in that time, it's their own fault.


R0H1T said:


> No of course not but without the actual chips out there how can MS optimize for it? You surely don't expect win11 to be 100% perfect right out the gate with something that's basically releasing after the OS was RTMed? Real world user feedback & subsequent telemetry data will be needed to better tune for ADL ~ that's just a reality. Would you say that testing AMD with those skewed L3 results was also just as fair?


Perfect? No. Pretty good? Yes. See above.

And ... the AMD L3 bug is a bug. A known, published bug. Are there any known bugs for ADL scheduling? Not that I've heard of. If there are, reviews should be updated. Until then, the safe assumption is that the scheduler is doing its job decently, as performance is good. These aren't complex questions.


Denver said:


> So why does the GPU test bench use 4000 MHz modules with the 5800X? Also, previous benchmarks show even higher FPS: 112 vs. 96.


Because the GPU test bench is trying to eliminate CPU bottlenecks, rather than present some sort of representative example of CPU performance? My 5800X gives me WHEA errors at anything above 3800, so ... yeah.


blanarahul said:


> According to Igor Lab's review (<- linked here) where they measure CPU power consumption when gaming -
> 
> 
> 
> ...


That looks pretty good - if that's representative, the E cores are clearly doing their job. I would guess that is highly dependent on the threading of the game and how the scheduler treats it though.


W1zzard said:


> Anandtech does that iirc, but I feel for our enthusiast audience that it's reasonable to go beyond the very conservative memory spec and use something that's fairly priced and easily attainable


Yep, as I was trying to say I see both as equally valid, just showing different things. It's doing anything else - such as pushing each chip as far as it'll go - that I have a problem with.


RandallFlagg said:


> Config?
> 
> View attachment 223698


Wait, are those light blue numbers idle numbers? How on earth are they managing 250 W idle power draw? Or are those ST load numbers? Why are there no legends on this graph? I can't even find them on their site, wtf? If the text below is supposed to indicate that the light blue numbers are indeed idle, there is something _very_ wrong with either their configurations or how they measure. Modern PC platforms idle in the ~50 W range, +/- about 20 W depending on the CPU, RAM, GPU and so on.



Pilgrim said:


> LOL I'd buy that. Extrapolating from the Anandtech review, a 32 E-Core processor will roughly use 192W @ Max utilization


Well, you'd need to factor in a fabric capable of handling those cores, so likely a bit more. Still, looking forward to seeing these in mobile applications.
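The extrapolation quoted above is simple per-core scaling; as a back-of-envelope sketch (the ~6 W/core figure is implied by the quoted 192 W estimate for 32 E-cores, and the fabric adder is a pure guess):

```python
# Back-of-envelope E-core power scaling, following the extrapolation quoted
# above (~192 W for 32 E-cores, i.e. ~6 W per core at full utilization).
# fabric_w is a purely hypothetical adder for the interconnect/uncore.
def estimate_package_w(e_cores, per_core_w=6.0, fabric_w=0.0):
    return e_cores * per_core_w + fabric_w

print(estimate_package_w(32))               # cores only: 192.0 W
print(estimate_package_w(32, fabric_w=30))  # with a guessed 30 W fabric budget: 222.0 W
```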


----------



## docnorth (Nov 4, 2021)

Vya Domus said:


> No, it's not, DDR4 vs DDR5.
> 
> And it's not about it being unfair, having just one platform on DDR5 isn't enough to infer how good these CPUs actually are. Any CPU with faster memory will also perform better, nothing new here.


ComputerBase has the answer, DDR5 6200 vs DDR4 3800, practically no difference.








Intel Alder Lake im Spiele-Benchmark-Test: DDR4 gegen DDR5, Resizable BAR und Fazit
(Gaming benchmarks: DDR4 vs. DDR5, Resizable BAR, and conclusion; memory OC up to DDR5-6200; DDR5-4800 overtakes DDR4-3200)
www.computerbase.de


----------



## fevgatos (Nov 4, 2021)

BSim500 said:


> People buy i9's and 24 thread CPU's in general for more than just gaming (and TPU isn't just a pure gaming site). Many gamers do video editing and have other mixed workloads too. If I wanted just a pure gaming chip, given that barely 0.4-4.0% separate the i5-12600K vs the i9-12900K (down to just 0.4% for 4k resolution), I'd buy the cheaper chip and spend more on the GPU. Or not upgrade at all and spend even more on the GPU. Put under heavy productivity load though, the temps and power are what they are, and I don't see why they should be arbitrarily excluded simply because "it's not gaming" or why a GPU bottleneck should be used to measure CPU power usage in a CPU review. The flip side of that is to declare the RTX 3070 a "73w card like the 1050Ti" by pairing it with a really slow CPU that matches the 60Hz VSync numbers, cherry pick that as the "real power figures" and throw all other measurements out of the window that actually load the component in question being tested 100%...


LOL. But YOU mentioned only the gaming performance and then tossed in the power consumption and temperatures from Blender. Now you are telling me CPUs aren't just for gaming. Then why did you use the gaming numbers?

CPUs aren't just for n-threaded workloads either. If my job consists of lightly threaded tasks (like Photoshop / Premiere and the like), the single-thread performance of the 12900K is king, without the power consumption and temperature baggage either. If your workload consists of threads that scale across n cores, then you should be looking at the Threadrippers, I guess.


----------



## HenrySomeone (Nov 4, 2021)

BSim500 said:


> So +11% faster (1080p) falling to 7% faster (1440p) in games on average for +23% higher power consumption on a newer 10nm process vs 2-gen old i9-10900K on 14nm process and 92-100c temps even with a Noctua NH-U14S? That's... not very impressive...


So you first state the (supposedly small) increase in gaming performance, then in the same sentence you quote power and temp figures from an all-core stress test? To use your own phrase - that's not very impressive argumentation...


----------



## ncrs (Nov 4, 2021)

Valantar said:


> CPU development cycles for a new arch are in the ~5 year range. In other words, MS has known for at least 3+ years that Intel is developing a big+little-style chip. Test chips have been available for at least a year. If MS haven't managed to make the scheduler work decently with that in that time, it's their own fault.



This isn't even Intel's first big.LITTLE CPU; Lakefield shipped in Q2'20 with 1P+4E.



Valantar said:


> And ... the AMD L3 bug is a bug. A known, published bug. Are there any known bugs for ADL scheduling? Not that I've heard of. If there are, reviews should be updated. Until then, the safe assumption is that the scheduler is doing its job decently, as performance is good. These aren't complex questions.



The hotfix for the AMD L3 bug isn't perfect either: there are latency regressions even with the update applied, especially for dual-chiplet models. Bandwidth is not back to Windows 10 levels either, but it is dramatically better than on the original Windows 11 release.
Edit: the graphs I attached broke.


----------



## HenrySomeone (Nov 4, 2021)

BSim500 said:


> People buy i9's and 24 thread CPU's in general for more than just gaming (and TPU isn't just a pure gaming site). Many gamers do video editing and have other mixed workloads too. If I wanted just a pure gaming chip, given that barely 0.4-4.0% separate the i5-12600K vs the i9-12900K (down to just 0.4% for 4k resolution), I'd buy the cheaper chip and spend more on the GPU. Or not upgrade at all and spend even more on the GPU. Put under heavy productivity load though, the temps and power are what they are, and I don't see why they should be arbitrarily excluded simply because "it's not gaming" or why a GPU bottleneck should be used to measure CPU power usage in a CPU review. The flip side of that is to declare the RTX 3070 a "73w card like the 1050Ti" by pairing it with a really slow CPU that matches the 60Hz VSync numbers, cherry pick that as the "real power figures" and throw all other measurements out of the window that actually load the component in question being tested 100%...





fevgatos said:


> LOL. But YOU mentioned only the gaming performance and then tossed the power consumption and temperatures from blender. Now you are telling me CPUs ain't just for gaming. Then why did you use the gaming numbers?
> 
> CPU's aren't just for n-multithreaded workloads either. If my job consists of lightly threaded tasks (like photoshop / premiere and the likes), that single thread performance of the 12900k is king. Without the power consumption and temperature baggage either. If your workloads consists of n threads scaling then you should be looking at the the threadrippers i guess.


Bingo! It's always the same with that lot - when trying to make Intel look bad and AMD good, any and every tactic is fair, the dirtier the better actually...


----------



## RandallFlagg (Nov 4, 2021)

docnorth said:


> ComputerBase has the answer, DDR5 6200 vs DDR4 3800, practically no difference.
> 
> 
> 
> ...



Yeah but DDR4-3200 is a bit too slow.  From different reviews it seems like if you are running DDR4-3600 or higher with decent latency (like CL16) then it's fine, zero or almost zero difference, but the tests with DDR4-3200 on AL are highly variable vs DDR5.


----------



## Exilarch (Nov 4, 2021)

Compared to this power consumption Bulldozer seems like a good CPU. It wasn't as fast as Intel's offerings at the time, but then again it wasn't trying to burn your house down either.


----------



## BSim500 (Nov 4, 2021)

HenrySomeone said:


> Bingo! It's always the same *with them lot* - when trying to make *Intel look bad and AMD good*, all and every tactic is fair, the dirtier the better actually...


Considering I own a 10th Gen Intel, I've no idea who this dumb anti-fanboyism fanboyism of yours is even aimed at. I just have zero interest in space heaters of either brand and 100c *with* an $80 Noctua NH-U14S is piss-poor thermals...


----------



## RandallFlagg (Nov 4, 2021)

BSim500 said:


> Considering I own a 10th Gen Intel, I've no idea who this dumb anti-fanboyism fanboyism of yours is even aimed at. I just have zero interest in space heaters of either brand and 100c *with* an $80 Noctua NH-U14S is piss-poor thermals...



Then set the power limit to 88 W on the AL and still walk all over your neighbor's 5900X.

Computerbase.de :


----------



## theeldest (Nov 4, 2021)

W1zzard said:


> Next week  Intel CPUs arrived yesterday. I've been rebenching everything else since the W11 AMD L3 cache fix came out. Will rebench Zen 2 and 9th gen too and add them to the reviews.



I understand how much trouble it is to rebench everything. Thanks for the extra effort, we really do appreciate your thoroughness.


----------



## BSim500 (Nov 4, 2021)

RandallFlagg said:


> Then set the power limit to 88W on the AL and still walk all over your neighbors 5900X.
> 
> Computerbase.de :


In glorious 720p? I miss the old Tom's Hardware Guide 640x480 CPU reviews...


----------



## Easo (Nov 4, 2021)

This is not going to change my plans to upgrade to Ryzen someday, but good job Intel.
P.S.
I will never understand complaints about high power draw and heat when you are spending cash on a top-end product. Does the electricity bill _really_ apply to someone who can shell out the cash for this?! I doubt it very much...


----------



## theeldest (Nov 4, 2021)

Easo said:


> This is not going to change my plans for upgrade to Ryzen someday, but good job Intel.
> P.S.
> I will never understand complaints about high power draw and heat when you are spending cash for top end product. Does eletricity bill _really _applies to someone who can shell out the cash for this?! I doubt it very much...



I'm more concerned with the heat output in one room. Though I'm in central Texas, so it's a bigger concern for me than for others. (My solution was a mini-split in my server room and switching to a mini PC at my desk where I remote into my gaming system. Stays cool, and I don't care how much heat it generates.)


----------



## B-Real (Nov 4, 2021)

"*Fighting for the Performance Crown*"

And yet it fails to beat its one-year-old rival with the same number of cores, at HALF the efficiency. The i7 model is also 5% behind the 5900X while having the same number of cores. The only model able to beat its rival is the i5, which has 4 extra cores compared to the 5600X.


----------



## Oasis (Nov 4, 2021)

- Did you use a U14s or a U12s for overclocking? @W1zzard


----------



## kane nas (Nov 4, 2021)

Very good work, but if you'll allow me, may I ask why you changed the Zen setup from the EVGA X570 DARK with 4000 MHz memory @ 2000 MHz IF, which you used in your last reviews, to an MSI X570 with 3600 MHz memory @ 1800 MHz IF?


----------



## Darmok N Jalad (Nov 4, 2021)

rares495 said:


> Software has always been behind but maybe this transition to big.LITTLE will change that.


I have my doubts. Because this is the first x86 product like this on Windows, and the hybrid approach is limited to only part of the 12 series, it means 99% of the hardware out there will still have a homogeneous CPU architecture, and for many years to come given how long our hardware can now last. It's going to be on Intel to make this work, then MS, and maybe developers will jump in. I could easily see developers just saying "use different hardware" if you have issues, at least for a while.


docnorth said:


> ComputerBase has the answer, DDR5 6200 vs DDR4 3800, practically no difference.
> 
> 
> 
> ...


Anandtech came to the conclusion that DDR5 does contribute to the performance increase. They have 2 pages of DDR4 vs DDR5 that show measurable gains. It's not across the board, but it's significant, especially in multithreaded tests. They even concede that AMD should see similar gains when they implement DDR5, though they are obviously further out.


----------



## ncrs (Nov 4, 2021)

Darmok N Jalad said:


> I have my doubts. Because this is the first x86 product like this on Windows, and the hybrid approach is limited to only part of the 12 series, it means 99% of the hardware out there will still be homogeneous CPU architecture, and for many years to come with the way our hardware can now last for so long. It’s going to be on Intel to make this work, then MS, and maybe developers will jump in. I could easily see developers just saying “use different hardware” if you have issues, at least for a while.



It's actually the second - Lakefield was released in Q2 2020 with 1P+4E. The difference here is that Intel Thread Director is present to help the Windows scheduler make sensible decisions. The AnandTech article explains in detail what is happening behind the scenes, especially on Windows 10, which lacks ITD support.

I don't think software vendors will ignore the potential issues, but the worst "solution" would probably be "disable E-cores in BIOS" rather than "use different hardware", because the P-cores are superior to previous Intel cores.


----------



## Vya Domus (Nov 4, 2021)

docnorth said:


> ComputerBase has the answer, DDR5 6200 vs DDR4 3800, practically no difference.
> 
> 
> 
> ...


Only half the story, still need to see how AMD will perform under DDR5.



rares495 said:


> Software has always been behind but maybe this transition to big.LITTLE will change that.


Doubt it. big.LITTLE will always produce terrible results in certain situations; it's a problem that's impossible to solve without negative side effects. It's just that on mobile those side effects aren't that noticeable, while on desktop PCs it has now become obvious that they are.
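As a toy illustration of why those side effects are hard to avoid, here is a purely hypothetical sketch of load-based thread placement (nothing like the real Thread Director; the 0.5 load cutoff and core counts are made up). Everything hinges on classifying threads correctly:

```python
# Toy hybrid-core scheduler: heavy threads go to P-cores, light threads to
# E-cores. Purely hypothetical illustration, not Intel's actual algorithm.
def assign(threads, p_cores=8, e_cores=8):
    """threads: list of (name, load) tuples, with load in [0.0, 1.0]."""
    heavy = sorted([t for t in threads if t[1] >= 0.5], key=lambda t: -t[1])
    light = [t for t in threads if t[1] < 0.5]
    placement = {}
    for i, (name, _) in enumerate(heavy):
        # Spill extra heavy threads onto E-cores once P-cores run out.
        placement[name] = f"P{i}" if i < p_cores else f"E{(i - p_cores) % e_cores}"
    for i, (name, _) in enumerate(light):
        placement[name] = f"E{i % e_cores}"
    return placement

jobs = [("game_render", 0.9), ("av_scan", 0.6), ("game_audio", 0.3), ("updater", 0.1)]
print(assign(jobs))  # heavy tasks land on P-cores, background tasks on E-cores
```

Misclassify a latency-sensitive thread as light and it lands on an E-core; that is exactly the stutter scenario people worry about on desktop.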


----------



## TheoneandonlyMrK (Nov 4, 2021)

theeldest said:


> I'm more concerned with the heat output in one room. Though I'm in central texas so it's a bigger concern for me than others. (my solution was a mini-split in my server room and switching to a minipc at my desk where I remote to my gaming system. Stays cools and I don't care how much heat it generates.)


I live in England and I would worry about the heat!

The 12900K is totes pointless IMHO; the 12700K is pretty good and the 12600K is good in its bracket, but all in all not worth most people upgrading to if you're on the last generation and a casual gamer.


----------



## Valantar (Nov 4, 2021)

TheoneandonlyMrK said:


> all in not worth most people upgrading to If your on the last generation and casual gaming.


There is essentially no scenario where upgrading makes sense if you're on the last generation, period. The gains are never that significant.


----------



## RandallFlagg (Nov 4, 2021)

Darmok N Jalad said:


> I have my doubts. Because this is the first x86 product like this on Windows, and the hybrid approach is limited to only part of the 12 series, it means 99% of the hardware out there will still be homogeneous CPU architecture, and for many years to come with the way our hardware can now last for so long. It’s going to be on Intel to make this work, then MS, and maybe developers will jump in. I could easily see developers just saying “use different hardware” if you have issues, at least for a while.
> 
> Anandtech came to the conclusion that DDR5 does contribute to the performance increase. They have 2 pages of DDR4 vs DDR5 that show measurable gains. It’s not across the board, but significant, especially mutlithread. They even concede that AMD should see similar gains when they implement DDR5, though they are obviously further out.



That would be because AnandTech uses JEDEC standard RAM.  This is garbage to any DIY builder.  The benchmarks are also being run on Windows 10 for inexplicable reasons.






I will say I agree that in the future DDR5 will be faster, but it doesn't compete well right now against enthusiast-grade DDR4 on Alder Lake (i.e. anything over 3600).

To trade blows with moderately fast DDR4-3800 (basically $200 for 32GB), you would need top-shelf DDR5-6000+ (which is like $800 and effectively unobtainable).

Case in point :


----------



## Lycanwolfen (Nov 4, 2021)

Actually, Windows 10 has always supported big and small cores, but Microsoft turns the controls off. To expose them, change the `Attributes` value to `2` under each of these keys:

- HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Power\PowerSettings\54533251-82be-4824-96c1-47b60b740d00\93b8b6dc-0698-4d1c-9ee4-0644e900c85d
- HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Power\PowerSettings\54533251-82be-4824-96c1-47b60b740d00\bae08b81-2d5e-4688-ad6a-13243356654b
- HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Power\PowerSettings\54533251-82be-4824-96c1-47b60b740d00\be337238-0d82-4146-a960-4f3749d470c7 (a golden oldie which allows much better boosting in Windows 10)

Once those are at 2, go to Power Options > advanced settings, and under Processor you can now access those features.
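For convenience, the same tweak can be applied from an elevated Command Prompt with `reg.exe` instead of editing each key by hand (GUIDs copied verbatim from the post above; I haven't independently verified them, and what each setting controls is as the poster claims):

```shell
rem Run from an elevated Command Prompt. Setting Attributes=2 unhides each
rem processor power setting in Power Options > advanced settings.
set BASE=HKLM\SYSTEM\CurrentControlSet\Control\Power\PowerSettings\54533251-82be-4824-96c1-47b60b740d00

reg add "%BASE%\93b8b6dc-0698-4d1c-9ee4-0644e900c85d" /v Attributes /t REG_DWORD /d 2 /f
reg add "%BASE%\bae08b81-2d5e-4688-ad6a-13243356654b" /v Attributes /t REG_DWORD /d 2 /f
reg add "%BASE%\be337238-0d82-4146-a960-4f3749d470c7" /v Attributes /t REG_DWORD /d 2 /f
```

Back up the registry (or note the original `Attributes` values) before changing them.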


----------



## HenrySomeone (Nov 4, 2021)

Valantar said:


> There is essentially no scenario where upgrading makes sense if you're on the last generation, period. The gains are never that significant.


Well, if you were a really, really hardcore AMD fanboy with an FX 8370 who wouldn't touch Intel with a 10-foot pole, then I guess moving to Zen 1 kinda, sorta made sense?


----------



## W1zzard (Nov 4, 2021)

Oasis said:


> - Did you use a U14s or a U12s for overclocking? @W1zzard


meh.. fail .. NH-U14S of course, review has been updated. Thanks!



kane nas said:


> Very good work but if you allow me I can find out the reason why you changed the Zen setup from EVGA X570 DARK with 4000mhz@2000 IF memories that you used in the last reviews you did on MSI X570 and 3600@1800 IF memories?


The EVGA Dark is in the graphics card test system, not the CPU review system. None of my "CPU review" Zen 3s can do 2000 MHz.


----------



## Valantar (Nov 4, 2021)

HenrySomeone said:


> Well, I guess if you were a really, really hardcore AMD fanboy with an FX 8370 who wouldn't touch Intel with a 10 foot pole, then I guess moving to Zen1 kinda, sorta made sense?


That's not a generation, that's a whole new species


----------



## docnorth (Nov 4, 2021)

Darmok N Jalad said:


> Anandtech came to the conclusion that DDR5 does contribute to the performance increase. They have 2 pages of DDR4 vs DDR5 that show measurable gains. It’s not across the board, but significant, especially mutlithread. They even concede that AMD should see similar gains when they implement DDR5, though they are obviously further out.


Anandtech's review uses W10 and today's JEDEC speeds, and indeed came to the conclusion that some of the so-called multithreaded CPU tests benefit from DDR5. OTOH @W1zzard found that the hybrid architecture does not (yet) work properly in at least 4 CPU tests, and 3 of them saw a huge performance increase just by disabling the E-cores. So DDR5 seems to be an advantage for ADL (one that will grow in a few months), and the architecture's immaturity a disadvantage (one that should shrink in a few months) when it comes to productivity work, but right now we don't know which has the bigger impact. Anyway, for MT-dependent work the 5950X probably remains the best choice, if someone doesn't want a server.


----------



## AnarchoPrimitiv (Nov 4, 2021)

So, if AMD's V-cache does achieve the 15% average gaming performance uplift they've claimed, Alder Lake is handily beaten? Is that the case? How does a company with 6.5x the R&D budget and 8x the annual revenue not destroy AMD?


----------



## ncrs (Nov 4, 2021)

AnarchoPrimitiv said:


> So, if AMD's V-cache does achieve 15% average gaming performance uplift like they've claimed, Alder lake is handily beaten?  Is that the case?  How does a company with 6.5x the R&D budget and 8x the annual revenue not destroy AMD?


Sometimes growing too big as a company is a detriment, and efficiency is lost.


----------



## docnorth (Nov 4, 2021)

Vya Domus said:


> Only half the story, still need to see how AMD will perform under DDR5.


Unfortunately we don't have an answer to this, so to test Alder Lake with both DDR4 and DDR5 is the next best thing to do when it comes to objectivity.


----------



## HenrySomeone (Nov 4, 2021)

AnarchoPrimitiv said:


> So, if AMD's V-cache does achieve 15% average gaming performance uplift like they've claimed, Alder lake is handily beaten?  Is that the case?  How does a company with 6.5x the R&D budget and 8x the annual revenue not destroy AMD?


Because AMD is using a fab process from a company with an even higher R&D budget... However, the price for that is never really getting the volume to make a dent in Intel's sales and profits, and Apple occupying more and more of TSMC's best nodes will only make things harder...


----------



## Darmok N Jalad (Nov 4, 2021)

HenrySomeone said:


> Because AMD is using a fab process from a company with an even higher R&D budget...however the price for that is not really getting the volume to make a dent in Intel's sales and profits and Apple occupying more and more of its best nodes will only make things harder...











What chip shortage? AMD books capacity years ahead to ease crunches
Chip designer Advanced Micro Devices has been able to skirt most of the problems linked with the global chip supply shortage by forecasting demand years in advance, a top executive said on Tuesday.
www.reuters.com


----------



## lexluthermiester (Nov 4, 2021)

Vya Domus said:


> No, it's not, DDR4 vs DDR5.
> 
> And it's not about it being unfair, having just one platform on DDR5 isn't enough to infer how good these CPUs actually are. Any CPU with faster memory will also perform better, nothing new here.


Sorry, but you're missing context, and it's very simple. The comparison here is between the top consumer CPU from AMD and the new top consumer CPU from Intel, each on the platform it runs on, along with many other previous models to compare against. The RAM type is a part of the testing and is statistically irrelevant in the scope of the testing done. It may not be perfectly apples to apples, but it's as close as W1zzard (or anyone else) can currently get given the specifications of the technology.

Just because there is an aspect of the testing done that does not meet with your satisfaction does NOT invalidate the testing as a whole. Your limited perspective is not the problem of reviewers/testers.


----------



## BorisDG (Nov 5, 2021)

oldwalltree said:


> Looks like my x299 will live on another generation....


Same. It's still a beast platform + CPUs. Imma replace it only with another HEDT.  I want something more than 15% faster, and with good thermal/power performance.


----------



## Zubasa (Nov 5, 2021)

lexluthermiester said:


> Sorry but you're missing context and it's very simple. The comparison here is the top consumer CPU from AMD and the new top consumer CPU from Intel on the given platform in which they run, along with many other previous models to compare against. The RAM type is a part of the testing and is statistically irrelevant in the scope of the testing done. It may not be perfectly apples to apples but it's as close as W1zzard(or anyone else) can get currently due to the specifications of the technology.
> 
> Just because there is an aspect of the testing done that does meet with your satisfaction does NOT invalidate the testing as a whole. Your limited perspective is not the problem of reviewers/testers.


There is a problem with the particular setup though: most reviewers used the G.Skill DDR5-6000 CL36 sticks that came with their review kits.
Those sticks are not listed anywhere and not even up for pre-order, so basically unobtainium as of the time of release.
FYI, those sticks are overclocked to the max; in fact, GN could not manage to run them at 6000 at all with their particular setup.


----------



## Vya Domus (Nov 5, 2021)

lexluthermiester said:


> Just because there is an aspect of the testing done that does meet with your satisfaction does NOT invalidate the testing as a whole.



It has nothing to do with my criteria for anything. Just because these are the only circumstances under which comparisons can be made at the moment, that does not mean they automatically become "fair"; that's stupid.

Other than that, compare them all you want, that's fine. I don't care.


----------



## lightning70 (Nov 5, 2021)

300-350 W power consumption is unacceptable. Again an inefficient piece of silicon: despite being on 10 nm, the disappointment is the power consumption.


----------



## lexluthermiester (Nov 5, 2021)

Vya Domus said:


> that does not mean they also automatically become "fair"


Your opinion.


Vya Domus said:


> Other than that, compare them all you want, that's fine. I don't care.


Clearly you do or you wouldn't be making a point of it.


Vya Domus said:


> that's stupid.


Now where's that mirror..



Zubasa said:


> There is a problem with the particular setup though, most reviewers used the G.skill DDR5-6000 CL36 sticks that came with their review kits.
> Those sticks are not listed anywhere and not even up for pre-order. So basically unobtainium as of the time of release.


And? You say that like you're a stranger to product launches. We both know you're not. So, seriously with that? You say that like you're implying that such spec'd RAM will never come to market and that only reviewers are going to have it. Much like Vya, your logic is deeply flawed.


----------



## Zubasa (Nov 5, 2021)

lexluthermiester said:


> And? You say that like you're a stranger to product launches. We both know you're not. So, seriously with that? You say that like you're implying that such spec'd RAM will never come to market and that only reviewers are going to have it. Much like Vya, your logic is deeply flawed.


There is no ETA for equivalent DDR5-6000 CL36 kits from other manufacturers either. Also, it is running at a speed that is not stable on at least some CPUs.
By your logic, AMD's numbers should be run with DDR4-4000 1:1, because future silicon might be better?
TBH I am not blaming any of the reviewers; the fact that Intel sends out overclocked products in their review kits that aren't tested stable is the issue IMO.


----------



## RandallFlagg (Nov 5, 2021)

Zubasa said:


> There is a problem with the particular setup though, most reviewers used the G.skill DDR5-6000 CL36 sticks that came with their review kits.
> Those sticks are not listed anywhere and not even up for pre-order. So basically unobtainium as of the time of release.
> FYI those sticks are overclocked to the max, in fact GN could not manage to run them at all at 6000 with their particular setup.



Most sites are not using DDR5-6000. Most are using DDR5-5200, which was supplied by various motherboard vendors, and quite a few are using DDR5-4800, some even 4400. They are also using anything from DDR4-3200 CL22 (like AnandTech) to DDR4-3800 on the older Zen / Intel platforms as well as on Alder Lake. Since these are enthusiast sites, I think it is the reader's responsibility to know what they are looking at. If the reader doesn't know what that type of stuff means, they should probably wait for the reviews of prebuilts from Dell, Acer, Asus, HP, Lenovo, etc. on CNET and PCWorld. That's not a slam; there are certainly better ways to spend time than fussing about RAM speed, and all of those companies have decent machines in their lineup.


----------



## Zubasa (Nov 5, 2021)

RandallFlagg said:


> Most sites are not using DDR5-6000.  Most are using DDR5-5200 which was supplied from various motherboard vendors, and quite a few are using DDR5-4800 and some even 4400.  They are also using anything from DDR4-3200 CL22 (like AnandTech) to DDR4-3800 on the older Zen / Intel platforms as well as on Alder Lake.  Since these are enthusiast sites I think it is the readers responsibility to know what they are looking at.  If the reader doesn't know what that type of stuff means, they should probably wait for the reviews of prebuilts from Dell Acer Asus HP Lenovo etc on CNET and PCWorld.  That's not a slam, there are certainly better ways to spend time than fussing about RAM speed and all of those companies have decent machines in their lineup.


There is a reason why Anandtech uses "filthy" JEDEC 3200 / 4800 RAM in their reviews: those are the only specs that are guaranteed to work.
Again, I am not blaming the reviewers themselves for using what is supplied or suggested in their review guide.


----------



## lexluthermiester (Nov 5, 2021)

Zubasa said:


> There are no ETA for Equivalent DDR5-6000 CL36 kits from other manufactures either.


And? Just because there's no ETA doesn't mean it's not coming soon. Remember: Pandemic economy and chip shortage. RAM makers might be holding back release dates until they know when they will have stock. Regardless of why, the fact that reviewers were sent a certain spec of RAM does *NOT* invalidate the testing done. There is nothing unfair about it either. Quit your whining about a non-issue.


----------



## Zubasa (Nov 5, 2021)

I am fully aware of the "pandemic economy and chip shortage", but the fact is the GPUs at least existed when they were launched.
The fact that scalpers got hold of them means those products exist. Meanwhile, not even scalpers can get their hands on these DDR5-6000 CL36 unicorns.
Sure, brush off anything that isn't in line with your views as "whining".


----------



## RandallFlagg (Nov 5, 2021)

Zubasa said:


> There is a reason why Anandtech uses "filthy" JEDEC 3200 / 4800 ram in their reviews, those are the only spec that are guaranteed to work.
> Again I am not blaming reviewers themselves on using what is supplied or suggested on their review guide.



I know why they use those settings; I didn't attack them for that, I'm just stating a fact. I will say that their results are more a test of what mid-level, consumer-grade OEM performance will be; in other words, they're going to show you the performance of something like a Dell Inspiron.

But who is their audience? I see in your system spec you use Team T-FORCE XCALIBUR RGB 4000 @ 3400 CL14. Do you think AnandTech's benchmarks using DDR4-3200 C22 are highly relevant to someone like you?
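One paper yardstick for comparing those kits: first-word latency in nanoseconds is the CAS latency divided by the real clock, which for DDR memory is half the MT/s transfer rate. A quick sketch:

```python
# First-word (CAS) latency in ns: CL cycles / real clock (MT/s divided by 2,
# since DDR transfers twice per clock), converted to nanoseconds.
def cas_latency_ns(mt_per_s, cl):
    return cl / (mt_per_s / 2) * 1000

print(round(cas_latency_ns(3200, 22), 2))  # JEDEC DDR4-3200 CL22 -> 13.75 ns
print(round(cas_latency_ns(3400, 14), 2))  # tuned DDR4 @ 3400 CL14 -> 8.24 ns
print(round(cas_latency_ns(6000, 36), 2))  # review-kit DDR5-6000 CL36 -> 12.0 ns
```

By that measure, JEDEC DDR4-3200 CL22 is noticeably slower to first word than a tuned 3400 CL14 kit, which is part of why JEDEC-speed benchmarks feel irrelevant to memory tuners.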


----------



## arni-gx (Nov 5, 2021)

i9-12900K = return of the Sith... Darth Sidious is coming back alive, with even more powerful weapons at his disposal... nice job, Intel...


----------



## robb (Nov 5, 2021)

luches said:


> That Temp !!! So you can no longer air-cool intel's flagship even with the top of the line air cooler !!! 100c will turn your room into a furnace .
> I consider my 5900 running at 76c to be pretty high and it heats up my room but 100c.. HELL NO !!!! Not to mention 300W power draw.  My undervolted 3080ti only draws 30W more @ 330W.
> This feels like a very bad trade off. Sacrifice all the efficiency for the sake of performance.


the temperature of a component alone has nothing to do with how much your room will heat up.
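To put rough numbers on that, here is a deliberately idealized sketch: a hypothetical 4 x 4 x 2.5 m sealed room with zero heat loss through the walls, so the result is an upper bound for illustration, not a prediction.

```python
# Rough upper bound on room heating from a component's power draw.
# Assumptions (hypothetical): ~40 m^3 of room air, no heat loss through
# walls or ventilation -- real rooms lose heat, so the actual rise is lower.
AIR_DENSITY = 1.2         # kg/m^3
AIR_SPECIFIC_HEAT = 1005  # J/(kg*K)

def room_temp_rise(power_w, hours, room_volume_m3=40.0):
    """Temperature rise of the room air from dissipating power_w for hours."""
    air_mass = room_volume_m3 * AIR_DENSITY          # kg of air in the room
    energy = power_w * hours * 3600                  # joules dumped into it
    return energy / (air_mass * AIR_SPECIFIC_HEAT)   # kelvin (= deg C)

# The watts are what matter: a 241 W CPU and a 241 W space heater look
# identical to the room, whether the die itself sits at 60 C or 100 C.
print(round(room_temp_rise(241, 1), 1))  # ~18.0 C after one hour, no losses
```

The point of the model is that die temperature never appears in it; only dissipated power (and time) does.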


----------



## lexluthermiester (Nov 5, 2021)

Zubasa said:


> I am fully aware of the "Pandemic economy and chip shortage", but the fact is even the GPUs exists when they were launched.
> The fact that the scalpers got hold of them means those products exists. Meanwhile not even scalpers can get their hands on these DDR5-6000 CL36 Unicorn.


You're not pulling that "moving the goal-posts" nonsense here. There are a number of reviews all over the net and RAM speed does not seem to be a very serious limitation. Your points and complaints are as invalid as they are illogical.


Zubasa said:


> Sure, brush off anything that is not inline with your views as "whining".


I'm not brushing it off, I'm calling it what it is, nonsense.


----------



## DanglingPointer (Nov 5, 2021)

For those who don't care about Window$ and have moved to the Light Side of the OS Force...

That's 146 Tests!
All results here... 
Review here...


----------



## Pilgrim (Nov 5, 2021)

BorisDG said:


> Same. It's still beast platform + CPUs. Imma replace it just with an another HEDT.  I want something faster than 15% and with good thermal/power performance.


Intel HEDT has always had incredible life expectancy. My trusty old X58 system is still chugging along as a file server with a 6-core Xeon.


----------



## Melvis (Nov 5, 2021)

Welcome back to the high end, Intel.....sorta, with a CPU that is basically OC'd and pushed to the max to get there! While consuming double the power and needing the best cooler you can get to keep the thing under 100°C. I'm impressed in one aspect but also not impressed: the single-core P-core performance is great, no denying that! But once you configure the system with a DDR5 mobo, RAM, and a beefy cooler, it just doesn't make any sense at this stage. In Aus I can spend literally less than half the cost to get my current 2018-spec PC to around the same level of performance, bit of a no-brainer really.

I'm seeing a lot of weird results around the net, from high temps to not-so-high temps, and Cinebench scores for the 5950X coming in both higher and lower than the 12900K's. What's going on there? Paul's Hardware showed low temps for the 12900K but higher Cinebench R23 scores for the 5950X than most.....Still some fine tuning to be done, I think.


----------



## Vya Domus (Nov 5, 2021)

lexluthermiester said:


> Your opinion.


Fine, my opinion. The correct one.


----------



## AusWolf (Nov 5, 2021)

_"In reality, the power limit is set to 241 W, which pretty much lets the processor suck as much power as it wants and negatively affects energy efficiency at the cost of higher performance."_

Isn't that up to the motherboard to decide? I mean, if I enable "Asus Optimiser" on mine, my 65 W 11700 turns into a 200 W CPU.


----------



## fevgatos (Nov 5, 2021)

AnarchoPrimitiv said:


> So, if AMD's V-cache does achieve 15% average gaming performance uplift like they've claimed, Alder lake is handily beaten?  Is that the case?  How does a company with 6.5x the R&D budget and 8x the annual revenue not destroy AMD?


Alder Lake is currently heavily hamstrung by RAM. Gear 2 runs the IMC at half the memory speed, so in order to match a 4000C16 kit, for example, you need something like a 7000+ DDR5 kit. When/if those come along, the gap between the 5950X and the 12900K will grow larger.

From my testing, in order to match a 3200C12 Gear 1 setup you need 4500C17+ in Gear 2. Obviously 4500C17 is way better than the current DDR5 offerings. Sure, some games and apps actually prefer bandwidth over latency, but generally speaking, the current DDR5 RAM we have is really, really, really slow.
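The kit comparisons above can be sanity-checked with the standard first-word (CAS-only) conversion. Note this captures only the CAS component of latency and ignores Gear 1 vs. Gear 2 IMC overhead entirely; the timings below are just the examples posters used in this thread.

```python
# First-word latency in nanoseconds from data rate (MT/s) and CAS latency.
# DDR clocks commands at half the data rate, so one CL tick lasts
# 2000 / data_rate nanoseconds.
def cas_ns(data_rate_mts, cl):
    return 2000 * cl / data_rate_mts

# Kits mentioned in this thread (illustrative, not a full latency model):
for label, rate, cl in [
    ("DDR4-4000 CL16", 4000, 16),
    ("DDR4-3200 CL12", 3200, 12),
    ("DDR5-4800 CL40", 4800, 40),
    ("DDR5-6000 CL36", 6000, 36),
]:
    print(f"{label}: {cas_ns(rate, cl):.2f} ns")
```

This shows why a tuned DDR4 kit looks so strong on paper against first-generation DDR5, while also showing that DDR5-6000 CL36 closes much of that gap.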


----------



## laszlo (Nov 5, 2021)

not bad but before buying one i need to know if the induction cookware is compatible...


----------



## lexluthermiester (Nov 5, 2021)

Vya Domus said:


> Fine, my opinion. *The correct one.*


The Arrogance Force is strong with this one...



Melvis said:


> Welcome back to the high end intel


Wait for AMD to respond in a few months. 2022 is going to be a fun year!


----------



## Arcdar (Nov 5, 2021)

W1zzard said:


> Next week  Intel CPUs arrived yesterday. I've been rebenching everything else since the W11 AMD L3 cache fix came out. Will rebench Zen 2 and 9th gen too and add them to the reviews.


Man, I really admire your focus and dedication to this (and am a bit sad that I'm not part of the crew doing all those retests - it sounds like fun. Yes, call me mad, but building 40+ different configs in a week or so is something I actually consider FUN  ).

But again a review where "great" just doesn't cut it. For all three of them actually (but I won't comment on all of them - no need to "mine posts"  ). Glad that there are still some very few gems of unbiased testing and reporting teams out there who also do a fantastic job showing the results in neat and orderly fashion - and know how to compose an A4-Text without boring the reader to death or butchering the syntax  ...

Thanks man and have a great and well deserved weekend (or at least one day for the family, as looking at all those retests I have the feeling your weekend will also still be quite short ^^ )



W1zzard said:


> Yeah my lab is quite warm right now  Just to add a bit here, what heats up your room is the Watts, not the absolute temp. If you slow down your fan speed your CPU temp will go up, yet the heat output of the CPU stays the same, and thus your room will be just as warm.





b4psm4m said:


> It's watts (Joules per second) that heat rooms, not component temperature. If you had the end of a pin at 1000C in a room, it will make hardly any difference; but if you have a 3kW bar fire at 200C, that will het the entire room. But I get what you mean.
> 
> Anyway, it's good to see intel strike back but imo, amd 5000 is still the platform of choice



Glad I didn't have to write this. Thanks for that  . Some people never get that "one spot is hot" doesn't translate to "the whole room is hot" - and some just didn't know but especially for those your explanation is "simple enough" and on point. Energy dissipation and aerodynamics are always topics I get a slight twitch in my eye reading some of the conclusions of people or their "so this is the reason why...." statements *sigh*


----------



## W1zzard (Nov 5, 2021)

AusWolf said:


> _"In reality, the power limit is set to 241 W, which pretty much lets the processor suck as much power as it wants and negatively affects energy efficiency at the cost of higher performance."_
> 
> Isn't that up to the motherboard to decide? I mean, if I enable "Asus Optimiser" on mine, my 65 W 11700 turns into a 200 W CPU.


Check my recent posts; I made a long post about this yesterday. Intel says "default is PL1=PL2=241W, motherboard vendors and users are free to set any value, including 125, 241, and everything else"
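A simplified sketch of what those limits mean in practice. This is a step model for illustration only; real firmware enforces PL1 via an exponentially weighted moving average, and both the limits and Tau are board-tunable.

```python
# Toy model of Intel turbo power limits: PL2 is allowed for roughly Tau
# seconds into a sustained load, after which power falls back to PL1.
def sustained_power(pl1_w, pl2_w, tau_s, elapsed_s):
    """Allowed package power at a given time into an all-core load."""
    if pl1_w == pl2_w:
        return pl1_w  # Alder Lake K-SKU default: PL1=PL2=241 W, no step-down
    return pl2_w if elapsed_s < tau_s else pl1_w

# Old-style 125/241 W with a 56 s Tau vs. the new PL1=PL2=241 W default:
for t in (10, 56, 300):
    print(t, sustained_power(125, 241, 56, t), sustained_power(241, 241, 56, t))
```

With PL1=PL2 the "turbo budget" never expires, which is exactly why sustained power (and heat) in long multithreaded runs looks so different from the on-paper 125 W figure.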



Arcdar said:


> Glad I didn't have to write this. Thanks for that  . Some people never get that "one spot is hot" doesn't translate to "the whole room is hot" - and some just didn't know but especially for those your explanation is "simple enough" and on point. Energy dissipation and aerodynamics are always topics I get a slight twitch in my eye reading some of the conclusions of people or their "so this is the reason why...." statements *sigh*


I've always felt like people who know the physics details have the onus on them to help their peers understand the world with simple understandable explanations. Maybe that's why I do what I do


----------



## ratirt (Nov 5, 2021)

I think it is still the same move Intel has been making: new arch, new gen, boost single-core performance a bit (though this one has a noticeable boost). The big-little approach I thought was there to lower power consumption, but oh boy, I was wrong. This big-little is simply there to tackle the core count, so that it appears as a 24-thread product, I guess.
What Intel has achieved is slowly preparing consumers for higher power consumption in CPUs across the board, just like NV did with the RTX 3000 series. I mean, the power consumption of the 12900K OC is literally double that of a 5950X in stress tests.
Performance-wise it is better than the previous gen, but in general it is not that great a leap. Then again, it is a different arch with the big-little approach, so maybe it needs some time to kick in, let's say.
As for price, it would be foolish to account only for the CPU with this gen of Intel (just like many others): this is not a simple upgrade, it's the entire platform we have to consider, and the platform price is not that great if you ask me.


----------



## lexluthermiester (Nov 5, 2021)

ratirt said:


> the big little which I thought are to lower power consumption but oh boy I was wrong.


Yeah they missed the mark on that one, currently. I think some refinements need to happen on the OS side of things to properly utilize the big/little dynamic. The potential for energy savings is there.


----------



## ratirt (Nov 5, 2021)

lexluthermiester said:


> Yeah they missed the mark on that one, currently. I think some refinements need to happen on the OS side of things to properly utilize the big/little dynamic. The potential for energy savings is there.


I think you are right, and I believe there's a good chance the power consumption will improve over time with updates. How much is yet to be seen, and I only hope it will not decrease performance.


----------



## Arcdar (Nov 5, 2021)

W1zzard said:


> Check my recent posts, I made a long post about this yesterday. Intel says "default is PL1=PL2=241W, motherboard vendors and users are free to set any value, including 125, 241, and everything else"
> 
> 
> I've always felt like people who know the physics details have the onus on them to help their peers understand the world with simple understandable explanations. Maybe that's why I do what I do


One of the reasons I appreciate your work and thought we'd work well together  (and why I was glad you two posted it already, else I'd have been compulsively doing the same  )


----------



## Valantar (Nov 5, 2021)

AusWolf said:


> _"In reality, the power limit is set to 241 W, which pretty much lets the processor suck as much power as it wants and negatively affects energy efficiency at the cost of higher performance."_
> 
> Isn't that up to the motherboard to decide? I mean, if I enable "Asus Optimiser" on mine, my 65 W 11700 turns into a 200 W CPU.


From W1zzard's explanations it's essentially a normalization of MCE for K-SKU chips, with ignoring the on-paper 125W spec not only being the norm but expected power programming for motherboards. At least now there should be some modicum of standardization, if nothing else.


fevgatos said:


> Alder lake is currently heavily hamstring by ram. Gear 2 runs the IMC in half the speed, so in order to match a 4000c16 kit for example you need something like a 7000+ ddr5 stick. When / if these come along the difference between the 5950x and the 12900k will grow larger.
> 
> From my testings, in order to match a 3200c12 gear 1 ram setup you need 4500c17+ in gear 2. Obviously 4500c17 is way better than the current DDR5 offerings. Sure some games and apps actually prefer bandwidth over latency, but generally speaking, the current DDR5 ram we have are really, really, really slow


3200c12? Who runs that? Has anyone even sold 3200c12 kits? The same goes for 4500c17. Sure, tuning to that level is possible with some RAM, but it's not something even remotely normal. And this is a CPU review, trying to speak to generalizable, expected, normal performance, not "we binned and tuned our RAM to within an inch of its life" performance. Tuning things to the extreme is _not_ what you want to do in a product review like this. There are two equally valid test methodologies for a review like this: use a fast, commonly available kit at XMP/DOCP settings, or stick to JEDEC settings. Anything else and you're leaving the realm of reproducible performance results.

Also, while DDR5 does come with a latency regression overall, you can't do a 1:1 comparison to DDR4 due to how differently the two types of RAM work - there are fundamental changes to how data is handled that will impact effective latencies differently across the generations. I'm not saying it's faster, but 1:1 comparisons are flawed. It's kind of obvious that a late-gen high-end DDR4 kit will be better than a first-gen DDR5 kit, even if that kit is "high end" for its generation. Still, several sites have tested the same CPU with both DDR4 and DDR5 and found relatively minor performance differences (though fast DDR4 is generally faster) - screenshots are in this thread. Calling it "heavily hamstrung" is an exaggeration.

*Edit: *Looking at AnandTech's memory and cache subsystem latency testing demonstrates how 1:1 DDR4-to-DDR5 latency spec comparisons are problematic. (Yes, they test at slow JEDEC specs, but that's irrelevant, as DDR4-3200C20 is still much lower latency than DDR5-4800C40: 12.5 ns vs. 16.7 ns CAS.) The measured latency difference between DDR4 and DDR5 on the same CPU is less than the 4.2-ish ns advantage indicated by CAS. Of course CAS latency is hardly the be-all, end-all of latency, and even at JEDEC specs memory training is left to the motherboard - but isn't it then safe to assume that the motherboards would do a better job at optimally training mature DDR4 than brand-new DDR5? Yet the latency numbers are nearly identical.

This of course doesn't mean that you can't get much lower latencies with currently available DDR4 kits vs. currently available DDR5 kits - there are very fast DDR4 kits out there, after all, and DDR5 is so far quite slow. But it does show that 1:1 latency spec comparisons aren't really valid across these two memory generations.



lexluthermiester said:


> Yeah they missed the mark on that one, currently. I think some refinements need to happen on the OS side of things to properly utilize the big/little dynamic. The potential for energy savings is there.


That sounds unrealistic to me. Minor improvements? Sure. But in an nT workload, you'll be loading all cores until you hit the power limit no matter what. Unless you want the OS to override the BIOS power limits dynamically, or to artificially limit power and performance by sequestering heavy nT loads to E cores only (or E cores + some arbitrary, low number of P cores), there isn't much that _can_ change there.

For more variably threaded workloads, there might be optimizations in the scheduler and thread handling, but ADL already seems to handle this reasonably well. Not to mention that any test of variable threaded workloads is inherently unrepresentative of other variably threaded workloads, so at the very least you'll need several (that each produce reliable results) so that you can make some claim to representativity.


----------



## Shatun_Bear (Nov 5, 2021)

Yikes at the power consumption and heat output. That's worse than I thought it would be.

Intel made steps forward with performance matching and slightly beating Ryzen 5000 a year later, but at some cost.



luches said:


> Looking at all the charts, what are the odds of Zen3+ completely closing the gap again and taking back the crown? They did say average 15% uplift in games.



Pretty high, as they can add at least +200 MHz to Ryzen 5000's boost clocks on top of a ~10% average uplift in gaming.

The big kicker here for Intel is that Ryzen is far, far more efficient at similar performance.


----------



## AusWolf (Nov 5, 2021)

W1zzard said:


> Check my recent posts, I made a long post about this yesterday. Intel says "default is PL1=PL2=241W, motherboard vendors and users are free to set any value, including 125, 241, and everything else"


It looks like really a lot has changed compared to Rocket Lake (not just in terms of architecture)!


----------



## Blueberries (Nov 5, 2021)

I don't think most people need more than 4 E-cores. The 12600/12700 appear to be much better in both value and efficiency.


----------



## SIGSEGV (Nov 5, 2021)

I strongly believe AMD will giggle reading this review result. No price cut then... pfftttt


----------



## qubit (Nov 5, 2021)

Finally an Intel processor worthy of upgrading my ancient 2700K to and it won't have any issues running W11.

This thing is really fast in games which will help ensure a solid 144Hz in many games. Hopefully I'll have upgraded by the end of 2022, depending on circumstances.

I don't like this P/E thing though. It's clearly a compromise to keep power consumption and heat levels down and perhaps increase manufacturing yields. 16 P cores with HT would have smoked this CPU.


----------



## robb (Nov 5, 2021)

qubit said:


> Finally an Intel processor worthy of upgrading my ancient 2700K to and it won't have any issues running W11.
> 
> This thing is really fast in games which will help ensure a solid 144Hz in many games. Hopefully I'll have upgraded by the end of 2022, depending on circumstances.
> 
> I don't like this P/E thing though. It's clearly a compromise to keep power consumption and heat levels down and perhaps increase manufacturing yields. 16 P cores with HT would have smoked this CPU.


Yeah, because no other CPU has been worth upgrading to over that old CPU. lol, get real. And a solid 144 is important to you, yet you held on this long to a CPU that can't even maintain 60 fps in some games.


----------



## Valantar (Nov 5, 2021)

qubit said:


> Finally an Intel processor worthy of upgrading my ancient 2700K to and it won't have any issues running W11.
> 
> This thing is really fast in games which will help ensure a solid 144Hz in many games. Hopefully I'll have upgraded by the end of 2022, depending on circumstances.
> 
> I don't like this P/E thing though. It's clearly a compromise to keep power consumption and heat levels down and perhaps increase manufacturing yields. 16 P cores with HT would have smoked this CPU.


Die size is a more pertinent reason than power consumption and yields - the large ADL die is 208mm², the same size as the 10900k, and as such is quite a large die for an MSDT CPU (RKL was a near unprecedented ~280mm²). The 4-core E clusters take up just barely more area than a single P core. So, they could have gone for a 10-core P-only CPU, or a _huge_ and very expensive die with more P cores, but then you'd be looking at much lower clocks across the cores due to thermals and power consumption. The E cores allow for far higher core density while also alleviating power draws somewhat (though the 55-60W P cores don't make that easy).

If I were you though, I'd hold off until we see how the Zen3 refresh with 3D cache plays out. If their promised 15% average (and up to 25% depending on the application) uplift plays out, that would make those chips notably faster than these. But of course we can't know until we see reviews.



robb said:


> Yeah because no other cpu has been worth upgrading to over that old cpu. lol get real. And a solid 144 is important to you yet you held on this long to a cpu that cant even maintain 60 fps in some games.


Dude, chill. We all have our reasons to hold off on upgrades, and the longer you keep your stuff instead of splurging on an upgrade the better it is for your wallet, your psyche, and the environment, so it's a win-win-win. I agree that singling out these CPUs is a bit odd - they're not _that_ much faster than Zen3, for example - but it's not like they've said anything about their reasoning, so we can't really know. A friendlier way of putting this would be asking a question, like "What makes you consider this, but not something earlier like Zen3?"


----------



## The red spirit (Nov 5, 2021)

ratirt said:


> What Intel has achieved is preparing consumers slowly to higher power consumption for CPUs across the board. Just like NV did with RTX 3000 series. I mean the power consumption for the 12900K OC is literally double than a 5950x in stress test etc.


If that's what Intel will do, then I will boycott this bullshit. A CPU shouldn't suck more than 100 watts. For a graphics card, my limit is 150 watts, ±10 watts.


----------



## robb (Nov 5, 2021)

Valantar said:


> Dude, chill. We all have our reasons to hold off on upgrades, and the longer you keep your stuff instead of splurging on an upgrade the better it is for your wallet, your psyche, and the environment, so it's a win-win-win. I agree that singling out these CPUs is a bit odd - they're not _that_ much faster than Zen3, for example - but it's not like they've said anything about their reasoning, so we can't really know. A friendlier way of putting this would be asking a question, like "What makes you consider this, but not something earlier like Zen3?"


You should really pay closer attention to context...


----------



## The red spirit (Nov 5, 2021)

Valantar said:


> From W1zzard's explanations it's essentially a normalization of MCE for K-SKU chips, with ignoring the on-paper 125W spec not only being the norm but expected power programming for motherboards. At least now there should be some modicum of standardization, if nothing else.


At least this time they have the balls to admit that it can use nearly 300 watts of power. Not sure about you, but I treat TDP or base power as the maximum expected power usage at base clocks without any turbo. But I tested my own i5, and in Prime95 Small FFTs it uses less than 65 watts (I think it was up to 50 watts) with turbo off, so I guess any power number that Intel or AMD publishes doesn't mean anything.



Shatun_Bear said:


> Yikes at the power consumption and heat output. That's worse than I thought it would be.
> 
> Intel made steps forward with performance matching and slightly beating Ryzen 5000 a year later, but at some cost.


In the FX-9590 era we called that desperate; in 2021 we call it excellent: "Editor's Choice" and "Highly Recommended".


----------



## ratirt (Nov 5, 2021)

The red spirit said:


> If that's what Intel will do, then I will boycott this bullshit. CPU shouldn't suck more than 100 watts. For graphics card, my limit is nothing more than 150 watts +- 10 watts.


Yeah, I get it. It sucks that that's the case, but I get it: Intel has been in a stranglehold by AMD for some time, and man, they are eager to finally claim some benchmarks back and brag about the performance gains despite the power consumption. Intel had been losing on all fronts, so if they could claim the performance crown and climb back up the charts by sacrificing power consumption, they would do it, and that is exactly what they did. Since power was high anyway with the previous gen, it was an easy pick for Intel. What bugs me is the big-little and the claims of lower power consumption. I think it will improve, but I expected more in that department, to be fair.


----------



## Valantar (Nov 5, 2021)

The red spirit said:


> At least, this time they have balls to also admit that they can use nearly 300 watts of power. Not sure about you, but I treat TDP or base power are maximum expected power usage at base clocks without any turbo clocks. But I tested my own i5 and in prime95 small FFTs it uses less than 65 watts (I think it was up to 50 watts) of power with turbo off, so I guess any power number that Intel or AMD releases doesn't mean anything.


That's inaccurate. TDP is (was) roughly equivalent to the power draw level at which maintaining base clocks was guaranteed. It could always draw less power at base clocks, just not more - that would be grounds for a replacement under warranty - and it was very often lower, which is why most CPUs historically have boosted noticeably above base clock even when restricted to TDP power draw levels. This is especially true for CPUs with lower core counts, lower peak boost clocks, or both.



robb said:


> You should really pay closer attention to context...


What context? You responded to a post making a statement. I commented on your post, specifically your tone. Am I missing something?


----------



## arni-gx (Nov 5, 2021)

32 Denuvo games are not compatible with Intel's new Alder Lake CPUs (www.dsogaming.com)




Intel has just launched its new Alder Lake desktop CPUs. However, and as we’ve already reported, these CPUs could have compatibility issues with a number of DRMs. And, as Intel confirmed, there are currently 32 Denuvo games that do not work with it.

This information comes from PCGamer. As Intel told them, it has yet to resolve an issue with Denuvo on Alder Lake for 32 games, which was causing issues playing these games on the platform.

Assassin’s Creed Valhalla is one of the games that appears to have stability issues. PCGamer could not run the game on their system, and Intel told them that they are working with Ubisoft on a fix. Despite that, it appears the game can run on other Intel Alder Lake systems; PCGamesHardware, for instance, has benchmarks for both Assassin’s Creed Valhalla and Watch Dogs Legion.

This could be why numerous publishers and developers have been removing Denuvo from their games lately.

Square Enix and Crytek have removed Denuvo from NieR Replicant Remaster & Crysis Remastered. Additionally, 2K Games has removed Denuvo from Mafia: Definitive Edition. Not only that, but Bandai Namco has removed this anti-tamper tech from Tekken 7 and Ace Combat 7: Skies Unknown.

=== wow, it looks like it will be good news for pc gamers.......


----------



## robb (Nov 5, 2021)

Valantar said:


> What context? You responded to a post making a statement. I commented on your post, specifically your tone. Am I missing something?


My comment to him had nothing to do with looking at Zen or not. My first point was that there have been plenty of CPUs worth upgrading to. My next point was that it seemed silly for him to point out that 144 fps was his goal, yet he skipped the numerous worthy upgrades he could have gone with over the years. And he is still talking about waiting until the end of 2022, when there will be other CPUs by then. Really, his whole comment was just nonsensical. And upgrading costs a lot less when you sell off your old stuff, but his stuff will soon be nearly irrelevant as he keeps waiting and waiting.


----------



## HenrySomeone (Nov 5, 2021)

Valantar said:


> If I were you though, I'd hold off until we see how the Zen3 refresh with 3D cache plays out. If their promised 15% average (and up to 25% depending on the application) uplift plays out, that would make those chips notably faster than these. But of course we can't know until we see reviews.


Fair enough advice, I suppose (if you don't need/want a new PC right now, especially in light of ridiculous graphics card prices, though I suspect in this particular case he'll hold on to his 2080 for the time being anyway). However, the problem is that we don't know which SKUs will get the 3D treatment; some indications say only octa-cores and up, some even say only the 12- and 16-core parts, and none mention the 6-core, which is just mind-boggling. Not only have they already pretty much abandoned the sub-$300 class, but with the 5600X remaining all they offer there, they'll lose it completely. Even if they drop its price to $200 (unlikely), there still won't be any competition for the 12600K, especially given that motherboard availability and pricing will only improve with time.


----------



## Valantar (Nov 5, 2021)

robb said:


> My comment to him had nothing to do with looking at Zen or not. My first point was that there had been plenty of cpus worth upgrading to. My next point was that it seemed silly for him to point that 144 fps was his goal yet again he skipped the numerous worthy upgrades he could have gone with over the years. And he is still talking about waiting until the end of 2022 when there will be other cpus by that time. Really his whole comment was just nonsensical. And upgrading costs a lot less when you sell off your other stuff but his stuff will soon be nearly irrelevant as keeps waiting and waiting.


Which is exactly why the reasonable approach would be to ask for the reasoning behind the statement rather than calling it out in the tone you used. How is that constructive or useful? All you're achieving is making them defensive or angry. Also, some of _your_ reasoning here is problematic: just because they can't hit 144fps with their current setup doesn't invalidate that as a desire for a future upgrade. Quite the opposite, I would say. As for there being plenty of CPUs worth upgrading to: sure, but as I said, we all have our reasons not to. So maybe ask, so that a constructive discussion is possible? And yes, I did recommend looking at future reviews myself, didn't I? That argument for upgrades costing less if you sell your parts is also highly variable depending on the used market where you live and a bunch of other factors. It's absolutely possible, but it's a poor general recommendation.



HenrySomeone said:


> Fair enough advice I suppose (if you don't need/want a new PC right now, especially in the light of ridiculous graphics cards prices, but I suspect in this particular case, he'll hold on to his 2080 for the time being anyway), however the problem is, that we don't know which skus will get the 3D treatment; some indications say only octa-cores and up, some even only 12&16 and none mention the 6-core and the latter is just mind boggling. Not only have they already pretty much abandoned the sub $300 class so far, but with 5600x remaining all they will offer here, they'll lose it completely. Even if they drop its price to $200 (unlikely), there still won't be any competition for the 12600k, especially given that motherboard availability and pricing will only improve with time.


That would be a valid response if the decision was between "buy now" or "wait for Zen 3 V-cache reviews". That isn't the scenario here, the scenario is "hopefully I can upgrade before the end of 2022". At that point, all of your questions above will be answered, and then some.


----------



## qubit (Nov 5, 2021)

robb said:


> Yeah because no other cpu has been worth upgrading to over that old cpu. lol get real. And a solid 144 is important to you yet you held on this long to a cpu that cant even maintain 60 fps in some games.


Mind your attitude. 

@Valantar Thanks for the technical explanation, it does sound about right. I knew it was for reasons along these lines and said so in my post.

And yes, it's funny how some people get all personal over a friggin' CPU.   

And yeah, it's been wondrous for my wallet. Contrary to that immature child above, my CPU does well over 60 fps in all the games I play, even the latest, but it can't reach the magic 144 fps, or even 120 fps in many cases, although the experience is still surprisingly smooth. This thing probably has something like an 80-100% performance increase over my aged CPU, so it will have no trouble at all hitting those highs. Can't wait!


----------



## ratirt (Nov 5, 2021)

HenrySomeone said:


> Fair enough advice I suppose (if you don't need/want a new PC right now, especially in the light of ridiculous graphics cards prices, but I suspect in this particular case, he'll hold on to his 2080 for the time being anyway), however the problem is, that we don't know which skus will get the 3D treatment; some indications say only octa-cores and up, some even only 12&16 and none mention the 6-core and the latter is just mind boggling. Not only have they already pretty much abandoned the sub $300 class so far, but with 5600x remaining all they will offer here, they'll lose it completely. Even if they drop its price to $200 (unlikely), there still won't be any competition for the 12600k, especially given that motherboard availability and pricing will only improve with time.


All of them. It will be a new line of CPUs with improved performance. I'm sure AMD will not keep selling both 5000-series CPUs with and without 3D V-Cache. The 3D V-Cache parts are supposed to be a refresh of the lineup, just like Zen+ was.


----------



## Melvis (Nov 5, 2021)

Did anyone notice how much actual IPC % gain there is over Zen 3 when clocked at the same speed? I'm not sure if I believe it, but it showed only 1% over Zen 3...


----------



## qubit (Nov 5, 2021)

HenrySomeone said:


> Fair enough advice I suppose (if you don't need/want a new PC right now, especially in the light of ridiculous graphics cards prices, but I suspect in this particular case, he'll hold on to his 2080 for the time being anyway), however the problem is, that we don't know which skus will get the 3D treatment; some indications say only octa-cores and up, some even only 12&16 and none mention the 6-core and the latter is just mind boggling. Not only have they already pretty much abandoned the sub $300 class so far, but with 5600x remaining all they will offer here, they'll lose it completely. Even if they drop its price to $200 (unlikely), there still won't be any competition for the 12600k, especially given that motherboard availability and pricing will only improve with time.


If you're referring to me and my trusty 2080, then absolutely I'm holding onto it. 

When I bought it in March 2020, it was supposed to be a "temporary" purchase to tide me over until I got the upcoming 3080.

I'd previously been stuck with my ancient 780 Ti after the failure of two GTX 1080 cards (got refunds), which made it painful to play current, demanding games on. Then the market turned to sh* with zero availability and sky-high prices, so, indeed, this "temporary" card has turned out to be rather permanent, lol, and there's no end in sight, either.


----------



## The red spirit (Nov 5, 2021)

Valantar said:


> That's inaccurate. TDP is (was) roughly equivalent to the power draw level at which maintaining base clocks was guaranteed. It could always draw less power at base clocks, just not more - that would be grounds for a replacement under warranty - and was very often lower which is why most CPUs historically have boosted noticeably above base clock even when restricted to TDP power draw levels. This is especially true for lower core count CPUs, those with lower peak boost clocks, or both.


But if it's significantly lower, then that's also misleading. The crazy thing is that in games my i5 10400F can boost to the max (4GHz all-core, 4.3GHz single) and still remain under 56 watts. Other chips like Celerons and Pentiums consumed nearly half the power their TDP stated, meanwhile locked i9s and i7s couldn't reach their upper boost states and just barely maintained base clock. Regarding AMD, they just straight up never worked at advertised TDP and did nothing to fix that or be more transparent about it. They don't even properly disclose what is out of spec and what is in spec, and they strongly encourage people to keep overclocking chips without even realizing it. If I'm not too cynical, then they do this to inflate their benchmark results. If I'm cynical, then I think they do this so that most users will degrade chips faster and then AMD will be dicks about RMA. Either way, neither chipmaker discloses power usage properly, and for that matter, cooling requirements either.


----------



## Valantar (Nov 5, 2021)

Melvis said:


> Did anyone notice how much actual IPC % gain there is over Zen 3 when clocked at the same clocked speed? Im not sure if I believe it but it showed only 1% over Zen 3 .........


It's not much, that's for sure. In Anandtech's SPEC2017 testing, the differences are as follows (they tested ADL with both DDR4 and DDR5):





Normalizing for clock speed (they are all likely to maintain peak boost in these ST workloads, so 5.2GHz vs. 4.9GHz):
12900K D5: 1.56538 pts/GHz INT, 2.72308 pts/GHz FP
12900K D4: 1.54038 pts/GHz INT, 2.62885 pts/GHz FP
5950X: 1.56122 pts/GHz INT, 2.48776 pts/GHz FP

Which, using the 5950X as a baseline:
ADL D5 IPC: +0.3% INT, +9.5% FP
ADL D4 IPC: -1.3% INT, +5.7% FP

Of course this is just in one set of workloads, but at least SPEC is an industry standard. The numbers will obviously be different in different workloads. But it's very close overall.
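For anyone who wants to check the math, a quick Python sketch (plugging in the pts/GHz figures quoted above) reproduces those deltas:

```python
# Per-GHz SPEC2017 scores from the normalization above (AnandTech data).
scores = {
    "12900K D5": {"int": 1.56538, "fp": 2.72308},
    "12900K D4": {"int": 1.54038, "fp": 2.62885},
    "5950X":     {"int": 1.56122, "fp": 2.48776},
}

base = scores["5950X"]  # Zen 3 as the baseline
for name in ("12900K D5", "12900K D4"):
    d_int = (scores[name]["int"] / base["int"] - 1) * 100
    d_fp = (scores[name]["fp"] / base["fp"] - 1) * 100
    print(f"{name}: {d_int:+.1f}% INT, {d_fp:+.1f}% FP")
# -> 12900K D5: +0.3% INT, +9.5% FP
# -> 12900K D4: -1.3% INT, +5.7% FP
```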


The red spirit said:


> But if it's significantly lower, then that's also misleading. The crazy thing is that in games my i5 10400F can boost to the max (4GHz all core, 4.3GHz single) and still remain under 56 watts. Other chips like Celerons and Pentiums consumed nearly half the power their TDP stated, meanwhile locked i9s or i7s were not able to reach their upper boost states and just barely maintained base clock. Regarding AMD, they just straight up never worked at advertised TDP and did nothing to fix that or be more transparent about it. They don't even properly disclose what is out of spec and what is in spec and they strongly encourage people to keep overclocking chips, without them even realizing it. If I'm not too cynical, then they do this to inflate their benchmark results. If I'm cynical, then I think that they do this so that most users will degrade chips faster and then AMD will be dicks about RMA. Either way, neither chip maker discloses power usage properly and for that matter, cooling requirements too.


Yeah, this is where the difference between what TDP actually means vs. what it is understood to mean comes in. After all, TDP is _actually_ a spec for OEMs and the like to say "this is the class of cooler you need for this chip to maintain stock performance", and it is divided into classes for simplicity rather than calculating an accurate TDP for each chip (as that would be a complete mess in terms of coolers). That's why you get those "54W" Celerons running at 30W, and so on. Something similar is true for AMD, though the problem here is that  (just like Intel) they've also used TDP numbers as public-facing marketing classes without really explaining what the numbers mean, or even working to make people understand that TDP does _not_ mean peak power consumption. That the relation between TDP and power draw (to the extent that there is one at all) differs between the two manufacturers just makes this into even more of a mess.

I was initially glad that Intel had ditched TDP for their two-tier base/boost power system, though it seems that is effectively worthless for K SKUs, with boost power being the only number to look at. It'll be interesting to see how this plays out across non-K SKUs though, and I would love to see AMD be more transparent as well.



qubit said:


> Mind your attitude.
> 
> @Valantar Thanks for the technical explanation, it does sound about right. I knew it was for reasons along these lines and said so in my post.
> 
> ...


Heh, I kept my Q9450 for nearly a decade (2008-2017!), and it served me well the entire time. If you buy a good CPU to begin with, it can last for ages. I only upgraded my 1600X to a 5800X because I could get it funded through work, otherwise that chip (which now lives a new and better, calmer life running my NAS) would have stayed in my main PC for at least a few generations more.

Still, as I said, I would keep my options open and take a close look at the upcoming Zen3 V-cache chips. I generally dislike Intel (mostly due to their long history of shitty business practices), but obviously make up your own mind from the factors that matter most to you - both major CPU manufacturers deliver excellent performance and great platforms these days.


----------



## Melvis (Nov 5, 2021)

Valantar said:


> It's not much, that's for sure. In Anandtech's SPEC2017 testing, the differences are as follows (they tested ADL with both DDR4 and DDR5):
> 
> 
> 
> ...



Seems to be in line, then, I guess, with the results they got at Guru3D


----------



## Valantar (Nov 5, 2021)

Melvis said:


> Seems to be inline then I guess to the results they got at Guru3D


Yep, seems similar. Using just a single application for IPC calculations is _very_ sketchy though. You need a broader selection (ideally stressing different parts of the core and cache/memory subsystems) to get any kind of representative number.


----------



## qubit (Nov 5, 2021)

Valantar said:


> Heh, I kept my Q9450 for nearly a decade (2008-2017!), and it served me well the entire time. *If you buy a good CPU to begin with, it can last for ages.* I only upgraded my 1600X to a 5800X because I could get it funded through work, otherwise that chip (which now lives a new and better, calmer life running my NAS) would have stayed in my main PC for at least a few generations more.
> 
> Still, as I said, I would keep my options open and take a close look at the upcoming Zen3 V-cache chips. I generally dislike Intel (mostly due to their long history of shitty business practices), but obviously make up your own mind from the factors that matter the most to you - both major CPU manufacturers deliver excellent performance and great platforms these days.


Indeed, especially the bold bit.

At the time, the 2500K was all the rage for being the sweet spot between price and performance, and it was indeed pretty good, but definitely not as fast as the 2700K. However, I remembered the comparative benchmarks plus the HT capability and figured that the top version had better long-term potential, and so it has proved to be. I think HT has turned out to be more important than it at first seemed.

Dodgy business practices aside (sadly true, and looking at you too, NVIDIA), I still tend to prefer Intel over AMD, but not by the margin I used to. If/when it comes to upgrade time (6 months absolute minimum) and AMD has something that looks better than Intel, then I'd go for that instead.

Back in 2005, I had the AMD 64-bit CPUs, single core then dual core (Manchester) and they were so fast for their time! I paired them with an Abit AN8 Ultra socket 939 mobo which was an amazing mobo for its day. I still have the hardware to this day.


----------



## The red spirit (Nov 5, 2021)

Valantar said:


> Yeah, this is where the difference between what TDP actually means vs. what it is understood to mean comes in. After all, TDP is _actually_ a spec for OEMs and the like to say "this is the class of cooler you need for this chip to maintain stock performance", and it is divided into classes for simplicity rather than calculating an accurate TDP for each chip (as that would be a complete mess in terms of coolers). That's why you get those "54W" Celerons running at 30W, and so on. Something similar is true for AMD, though the problem here is that  (just like Intel) they've also used TDP numbers as public-facing marketing classes without really explaining what the numbers mean, or even working to make people understand that TDP does _not_ mean peak power consumption. That the relation between TDP and power draw (to the extent that there is one at all) differs between the two manufacturers just makes this into even more of a mess.


I remember GN dug deep into what AMD's TDP means, and basically the conclusion was that it means nothing: it was way too complicated to understand and didn't represent anything meaningful. AMD in particular has an awful track record of measuring power and tried to invent their own marketing-friendly numbers (ADC), which meant absolutely nothing for buyers or OEMs. At least Intel is a bit better than AMD, but once this shit with boost started (board makers violating it left and right, people expecting boost to work like base, Intel making a gazillion names and stages for boost), Intel really sank to AMD's level of "measuring" power usage and heat output.




Valantar said:


> I was initially glad that Intel had ditched TDP for their two-tier base/boost power system, though it seems that is effectively worthless for K SKUs, with boost power being the only number to look at. It'll be interesting to see how this plays out across non-K SKUs though, and I would love to see AMD be more transparent as well.


I will reply like a grandpa here and say that Intel should just stop all their TDP and TDP-tier bullshit altogether. They should just disclose maximum achievable power usage at base and boost (also manufacturing variation), then chip-to-heatsink transfer efficiency, and for good measure, make this government regulated, because we are all getting screwed over this stuff. You know, it would be nice for once to have some numbers that aren't complete hoopla and to be able to plan heatsink buying decisions properly. Or at the very least give them heavy fines for lying to the public. If they could do it in the Pentium 3 era, they certainly can do it now. I'm not buying the BS that they can't because modern chips are too advanced.


----------



## Valantar (Nov 5, 2021)

The red spirit said:


> I remember GN got deep into what AMD's TDP means and basically the conclusion is that it means nothing as it was way too complicated to understand and was not representing anything of meaning. AMD in particular has an awful track record of measuring power and tried to invent their own marketing friendly numbers (ADC), which meant absolutely nothing for buyer or OEM. At least Intel is a bit better than AMD, but once this shit with boost started (board makers violating it right and left, people expecting boost to work like base, Intel making gazillion names and stages for boost), Intel really sunk to AMD level of "measuring" power usage and heat output.


AMD's TDP value is a reverse-engineering of Intel's way of doing this, to provide OEMs with comparable classes of coolers and avoid confusion. That is its only real-world use case. And power doesn't factor into the calculation at all, funnily enough. For consumers, the only value of it is telling us the other, non-published numbers behind it, such as various power limits, but the TDP number itself is useless and (despite its use in marketing, which is a really harebrained idea) has never been meant to be useful for consumers.


The red spirit said:


> I will replay like grandpa here and say that Intel should just stop all their TDP and TDP tier bullshit altogether. They should just disclose maximum achievable power usage at base and boost (also manufacturing variation), then chip to heatsink transfer efficiency and for good measure, make this government regulated, because we are all getting screwed over this stuff. You know, it would be nice for once having some numbers that aren't complete hoopla and be able to plan heatsink buying decisions properly. Or at the very least give them heavy fines for lying to public. If they could do it in Pentium 3 era, they certainly can do it now. I'm not buying BS that they can't, because modern chips are too advanced.


I would love for there to be a standardized way of measuring and describing this, though that would inevitably be difficult with different die sizes, IHS sizes (and thicknesses), different internal TIMs, and so on. Still, it would be nice to at least give it a try. You'd still need some sort of tiering system though, as the formula you'd need for something like this wouldn't produce a human-readable output, just some number. Is a higher or lower number better? By how much? Ultimately it wouldn't be any less complex than TDP - though you might have the benefit of removing the "W" and thus not having people think this number directly represents power draw, which would be good. Standardized tiers could lead to standardized cooler classes, which would simplify things quite a bit (no more dubiously-labeled "150W" or "250W" coolers, but, say, "tier 5" coolers for "tier 5" CPUs). The divisions would always be somewhat arbitrary, and no cooler design would fit perfectly within a category, but at least you'd have some guarantee that it upholds base performance unless you screw up the installation.
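Just to make the idea concrete, here's a toy sketch of such a tiering scheme - every threshold below is invented purely for illustration, not from any real standard:

```python
# Hypothetical CPU/cooler tiers: each tier is an upper bound on some
# standardized sustained-heat-load figure. All cutoffs here are made up.
TIER_LIMITS = [65, 95, 125, 180, 250, 350]

def required_tier(sustained_load: float) -> int:
    """Return the lowest tier (1-based) whose limit covers the given load."""
    for tier, limit in enumerate(TIER_LIMITS, start=1):
        if sustained_load <= limit:
            return tier
    raise ValueError("load exceeds the highest defined tier")

# A "tier 5" cooler would then be guaranteed to handle any "tier 5" CPU.
print(required_tier(241))  # -> 5
print(required_tier(141))  # -> 4
```

The point of the sketch is only that matching happens by tier label, not by a wattage number the buyer has to interpret.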


----------



## The red spirit (Nov 5, 2021)

Valantar said:


> I would love for there to be a standardized way of measuring and describing this, though that would inevitably be difficult with different die sizes, IHS sizes (and thicknesses), different internal TIMs, and so on. Still, it would be nice to at least give it a try.


In a way that I described that wouldn't be a problem, since I said power usage and heat transfer from chip to IHS efficiency (for non IHS chips, it's from source to outer surface of it). Thermal paste is completely dependent on OEM or DIY builder.



Valantar said:


> You'd still need some sort of tiering system though, as the formula you'd need for something like this wouldn't produce a human-readable output, but just some number. Is a higher or lower number better? By how much?


That's something that Intel or AMD shouldn't care about as it is solely OEM's business (or DIY builder's). Intel's or AMD's only responsibility is to provide useful and accurate data (which they don't do now).



Valantar said:


> Ultimately it wouldn't be any less complex than TDP - though you might have the benefit of removing the "W" and thus not having people think this number directly represents power draw, which would be good. Standardized tiers could lead to standardized cooler classes, which would simplify things quite a bit (no more of the dubiously-labeled "150W" or "250W" coolers, but, say, "tier 5" coolers for "tier 5" CPUs. The divisions would always be somewhat arbitrary, and no cooler design would fit perfectly within a category, but at least you'd have some guarantee that it uphold base performance unless you screw up the installation.


Sounds like an interesting idea, but I would rather have watts. Most of those dubious measurements exist mostly due to cooler makers trying to guess what AMD's or Intel's TDP is and they want to make sure that they won't specify too weak coolers for chips as it has many negative consequences for them.


----------



## Valantar (Nov 5, 2021)

The red spirit said:


> In a way that I described that wouldn't be a problem, since I said power usage and heat transfer from chip to IHS efficiency (for non IHS chips, it's from source to outer surface of it). Thermal paste is completely dependent on OEM or DIY builder.


Sorry, but you're wrong here. First, I said internal TIM, i.e. between the IHS and die, not cooler and IHS. Similarly, heat transfer from the chip to the IHS isn't a simple linear function, but is dependent on the thermal density of the die, the thickness of the diffusion barrier on top of the die, the internal TIM, and the materials and thickness of the IHS, so "measuring" this is _really_ complicated. Remember, heat isn't generated evenly across the die, and thus doesn't transfer evenly through the IHS. The most important job of the IHS is to spread heat outwards from hot spots, which is where the thickness and materials of the IHS come into play (a thicker IHS will have a larger cross section through which to spread heat outwards). And of course the thermal density (and overall power consumption/heat output of the die, though the two aren't directly related) of the die forms the basis for this. So, any kind of standardized way of measuring this would by its very nature privilege certain designs over others, depending on the specifics of the testing methodology. For example, do you calculate a single number for the entire IHS, despite this being a gross oversimplification of real-world temperatures? Do you calculate an "overall" and a "hotspot" number? If so, how do you balance the two? And variables like die size, IHS size, the ratio between the two, and several other factors will affect all of this.


The red spirit said:


> That's something that Intel or AMD shouldn't care about as it is solely OEM's business (or DIY builder's). Intel's or AMD's only responsibility is to provide useful and accurate data (which they don't do now).


Again: no. OEMs want to know what they're buying, and want to know the parameters in which the parts work and the necessary capabilities of ancillary components like coolers. The only party capable of reliably specifying this is the chipmaker - though ideally this would be done in a standardized way. The last thing an OEM in a low-margin commodity market wants to do is have to deal with implementing a dynamic system for specifying coolers across systems.


The red spirit said:


> Sounds like an interesting idea, but I would rather have watts. Most of those dubious measurements exist mostly due to cooler makers trying to guess what AMD's or Intel's TDP is and they want to make sure that they won't specify too weak coolers for chips as it has many negative consequences for them.


But then you're entirely missing the point. "Watts" isn't a viable measure for what you need to keep a CPU cool. Two different CPUs with identical 50W power draws can be incredibly easy or essentially impossible to cool - with the same cooler! - depending on factors like thermal density and the capability of the IHS to spread heat out from hotspots. That's why Zen3 CPUs often run hot - they don't consume that much power, but they have extreme thermal density coupled with an off-centre die placement, which leads to very different behaviour at the same wattage as other CPUs. Using "watts" to measure this is why we have a problem to begin with. This is similar to how GPUs are _much_ easier to cool than CPUs thanks to direct-die cooling and much more even thermal loads across the die. For CPUs, a single number for this can only ever be an abstraction calculated from many different metrics, and trying to align this directly with power draw is _really_ problematic.

Also, cooler makers don't need to guess anything. They are provided with the formulas for TDP calculation and know precisely what these mean. This is why the term TDP exists at all. The problem lies in the implementation, as well as tiering and the lack of adherence to TDP-like power draw levels - after all, there literally doesn't exist a >125W Intel MSDT TDP, so how do you design a cooler for a ~250W CPU on that platform? Plus, of course, cooler design and performance varies wildly depending on uncontrollable factors like ambient temperature and case airflow. The dubious ratings come from cooler makers either overselling their products, deviating from TDP standards, shooting higher than those allow for, or trying to account for something else in their numbers.


----------



## chrcoluk (Nov 5, 2021)

Wprime tests not good?

Shame I don't know how it compares to my 9900k or on Windows 10.

Also, I wonder if we'll see DDR4-based reviews, as we don't know how much of the performance gain comes from reduced I/O wait time.

I personally see the lack of overclocking headroom as a good thing: users getting more performance out of the box is a good thing, and it also reduces silicon-lottery issues.


----------



## The red spirit (Nov 5, 2021)

Valantar said:


> Sorry, but you're wrong here. First, I said internal TIM, i.e. between the IHS and die, not cooler and IHS. Similarly, heat transfer from the chip to the IHS isn't a simple linear function, but is dependent on the thermal density of the die, the thickness of the diffusion barrier on top of the die, the internal TIM, and the materials and thickness of the IHS, so "measuring" this is _really_ complicated.


That's why I mentioned heat transfer efficiency. 




Valantar said:


> Remember, heat isn't generated evenly across the die, and thus doesn't transfer evenly through the IHS. The most important job of the IHS is to spread heat outwards from hot spots, which is where the thickness and materials of the IHS come into play (a thicker IHS will have a larger cross section through which to spread heat outwards). And of course the thermal density (and overall power consumption/heat output of the die, though the two aren't directly related) of the die forms the basis for this. So, any kind of standardized way of measuring this would by its very nature privilege certain designs over others, depending on the specifics of the testing methodology. For example, do you calculate a single number for the entire IHS, despite this being a gross oversimplification of real-world temperatures? Do you calculate an "overall" and a "hotspot" number? If so, how do you balance the two? And variables like die size, IHS size, the ratio between the two, and several other factors will affect all of this.


Seems like the average temperature of the whole IHS would be the best way to do that, while also stating the hotspot temp.




Valantar said:


> Again: no. OEMs want to know what they're buying, and want to know the parameters in which the parts work and the necessary capabilities of ancillary components like coolers. The only party capable of reliably specifying this is the chipmaker - though ideally this would be done in a standardized way. The last thing an OEM in a low-margin commodity market wants to do is have to deal with implementing a dynamic system for specifying coolers across systems.


I would argue that they should do it anyway, as OEMs like Dell, HP or Acer have a really poor reputation for overheating machines. They can't keep making crap forever; at some point it will hurt their sales.




Valantar said:


> But then you're entirely missing the point. "Watts" isn't a viable measure for what you need to keep a CPU cool. Two different CPUs with identical 50W power draws can be incredibly easy or essentially impossible to cool - with the same cooler! - depending on factors like thermal density and the capability of the IHS to spread heat out from hotspots. That's why Zen3 CPUs often run hot - they don't consume that much power, but they have extreme thermal density coupled with an off-centre die placement, which leads to very different behaviour at the same wattage as other CPUs. Using "watts" to measure this is why we have a problem to begin with. This is similar to how GPUs are _much_ easier to cool than CPUs thanks to direct-die cooling and much more even thermal loads across the die. For CPUs, a single number for this can only ever be an abstraction calculated from many different metrics, and trying to align this directly with power draw is _really_ problematic.


Watts aren't a problem: if you measure watts and then chip-to-IHS heat-transfer efficiency, what is left unclear or misinforming? That covers odd chips like Ryzens.




Valantar said:


> Also, cooler makers don't need to guess anything. They are provided with the formulas for TDP calculation and know precisely what these mean. This is why the term TDP exists at all. The problem lies in the implementation, as well as tiering and the lack of adherence to TDP-like power draw levels - after all, there literally doesn't exist a >125W Intel MSDT TDP, so how do you design a cooler for a ~250W CPU on that platform? Plus, of course, cooler design and performance varies wildly depending on uncontrollable factors like ambient temperature and case airflow. The dubious ratings come from cooler makers either overselling their products, deviating from TDP standards, shooting higher than those allow for, or trying to account for something else in their numbers.


GN did a video about AMD's TDP and asked Cooler Master about it; they said that TDP doesn't mean much and that it's certainly not clear how capable the coolers they design should be.


----------



## RandallFlagg (Nov 5, 2021)

chrcoluk said:


> Wprime tests not good?
> 
> Shame I don't know how it compares to my 9900k or on Windows 10.
> 
> ...



Just keep in mind the DDR5 used here is 6000.  

Computerbase.de shows DDR4-3200 C12 and DDR4-3800 walking all over DDR5-4400.   This is not surprising, but I doubt early adopters are going to be running DDR5-4400.  

Tom's used DDR5-4800 C36 and DDR4-3200 on Alder Lake and the older platforms, which is a bit more realistic, but they didn't specify the DDR4 timings that I saw. The kit was DDR5-6000, but they set it to one of the more normal speeds.

So far my take is that 'good' DDR4 is faster than DDR5, at least the obtainable DDR5-5200. I don't think that's necessarily true of DDR5-6000+ at full speed, but you can't actually buy that stuff.


----------



## Valantar (Nov 5, 2021)

The red spirit said:


> That's why I mentioned heat transfer efficiency.


But what is "heat transfer efficiency" if not that compound measurement of many different factors that I mentioned? And how do you define it in a way that accounts for variables like hotspot placement? In short: you can't. So you have to compromise in some way.


The red spirit said:


> Seems like average temperature of whole IHS would be the best way to do that, while also stating hotspot temp.


But then you have three numbers: power draw (W), tIHSavg and tIHSpeak. How do you balance the three when designing a cooler? And how do you measure the three at all? With a reference cooler? Without a cooler?


The red spirit said:


> I would argue that they should do it anyway, as OEMs like Dell, HP or Acer have really poor reputation for many overheating machines. They cannot be making crap forever at some point it will hurt their sales.


You can argue that all you want, the most important priority for them is simplifying their production lines and system configurations to increase profit margins. You're not going to convince them to invest millions in complex thermal testing regimes. The only effective way of doing this is enforcing this on a component manufacturer level, ideally through either industry body or government standardization.


The red spirit said:


> Watts aren't a problem, if you measure watts and then heat transfer from chip to IHS efficiency, what is then left unclear or misinforming? That covers odd chips like Ryzens.


Again: "heat transfer efficiency" is an incredibly complex thing, and cannot be reduced to a simple number or measurement without opening the door for a lot of variance. And it doesn't cover odd chip placements unless that number has _huge_ margins built in, in which case it then becomes misleading in the first place.


The red spirit said:


> GN did a video about AMD's TDP and asked Cooler Master about this, they said that TDP doesn't mean much and that it's certainly not clear how capable coolers they should design.


That's because all CPUs boost past TDP and people expect that performance to last forever. This is down to the idiotic mixed usage of TDP (cooler spec _and_ marketing number), as well as the lack of any TDP-like measure for peak/boost power draws, despite all chips far exceeding TDP. If CM (or anyone else) built a cooler following the TDP spec for 105W, it would be fully capable of running a stock 5950X, but the CPU wouldn't be able to maintain its 141W boost spec over any significant amount of time, instead throttling back to whatever it can maintain within thermal limits at the thermal load the cooler can dissipate - i.e. base clock or a bit more.  The issue here is that nobody would call that acceptable - you'd have a CPU running at near 90 degrees under all-core loads and losing significant performance. Most would call that thermal throttling, despite this being inaccurate (it would need to go below base clock for that to be correct), but either way it's easy to see that the cooler is insufficient despite fulfilling the spec. _That_ is why TDP is currently useless for manufacturers, not that the measurement doesn't work but that it doesn't actually cover the desired use cases and behaviours of customers.
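As a side note on the 105W/141W pair: on AM4, AMD has stated that the sustained boost budget (PPT) is 1.35 × TDP, which a one-liner reproduces (the 141W figure above is this ~142W limit, give or take rounding):

```python
def am4_ppt(tdp_watts: float) -> float:
    """AM4 package power tracking limit: PPT = 1.35 * TDP (AMD's stated relation)."""
    return tdp_watts * 1.35

print(am4_ppt(105))  # -> 141.75, the ~142W boost budget of a 105W-TDP 5950X
print(am4_ppt(65))   # -> 87.75, the ~88W budget of 65W-TDP parts
```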


----------



## chrcoluk (Nov 5, 2021)

RandallFlagg said:


> Just keep in mind the DDR5 used here is 6000.
> 
> Computerbase.de shows DDR4-3200 C12 and DDR4-3800 walking all over DDR5-4400.   This is not surprising, but I doubt early adopters are going to be running DDR5-4400.
> 
> ...



Yeah, I do usually consider latency more important than bandwidth; I remember my old 3000 CL12 config outperforming 3200 CL14. Some workloads benefit a lot from bandwidth though, so as you said, it will depend.
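The rule of thumb behind that comparison is first-word latency - CAS cycles divided by the memory clock (half the transfer rate). A small sketch with the kits mentioned in this thread:

```python
def first_word_latency_ns(transfer_mts: int, cas: int) -> float:
    """First-word latency in ns: CAS cycles / clock, where clock = MT/s / 2."""
    return cas * 2000 / transfer_mts

for mts, cl in [(3000, 12), (3200, 14), (3200, 12), (4400, 36), (4800, 40)]:
    print(f"DDR-{mts} CL{cl}: {first_word_latency_ns(mts, cl):.2f} ns")
# DDR-3000 CL12 lands at 8.00 ns vs 8.75 ns for DDR-3200 CL14, matching the
# observation above; JEDEC DDR5-4800 CL40 comes out around 16.67 ns.
```

Note that this simple figure ignores the architectural differences (more channels per DIMM, different burst behaviour) discussed elsewhere in the thread, which is why real-world DDR5 results are better than the raw number suggests.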


----------



## Valantar (Nov 5, 2021)

RandallFlagg said:


> Just keep in mind the DDR5 used here is 6000.
> 
> Computerbase.de shows DDR4-3200 C12 and DDR4-3800 walking all over DDR5-4400.   This is not surprising, but I doubt early adopters are going to be running DDR5-4400.
> 
> ...


3200c12 is pretty unrealistic though - yes, you can OC there, but is there even a single kit on the market with those timings? It's clear that available DDR5 is slower than available DDR4, simply because available DDR4 is highly mature and DDR5 is not. What becomes clear from more balanced testing, like AnandTech's comprehensive testing at JEDEC speeds (3200c20 and 4800c40), is that the latency disadvantage expected from DDR5 is much smaller in practice than the numbers would seem to indicate (likely down to more channels and other differences in how data is transferred), and that at those settings - both of which are bad, but of which the DDR5 settings _ought to_ be worse - DDR5 mostly outperforms DDR4 by a slim margin.

That still means that fast DDR4 will be faster until we get fast(er) DDR5 on the market, but it also means that we won't need DDR5-8000c36 to match the performance of DDR4-4000c18.
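For what it's worth, converting CAS latency into nanoseconds bears this out (first-word latency only, ignoring tRCD and friends):

```python
# DDR transfers twice per clock, so one clock cycle lasts 2000 / (MT/s) ns
# and first-word CAS latency is CL cycles times that.

def cas_ns(mt_per_s: int, cl: int) -> float:
    """First-word CAS latency in nanoseconds (tRCD etc. ignored)."""
    return cl * 2000 / mt_per_s

print(cas_ns(8000, 36))  # 9.0 ns for DDR5-8000c36
print(cas_ns(4000, 18))  # 9.0 ns for DDR4-4000c18
```

Both configs land at exactly 9 ns, so DDR5-8000c36 would already match DDR4-4000c18 on true latency while doubling the transfer rate.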


----------



## RandallFlagg (Nov 5, 2021)

Valantar said:


> 3200c12 is pretty unrealistic though - yes, you can OC there, but is there even a single kit on the market with those timings? It's clear that available DDR5 is slower than available DDR4, simply because available DDR4 is highly mature and DDR5 is not. What becomes clear from more balanced testing, like AnandTech's comprehensive testing at JEDEC speeds (3200c20 and 4800c40), is that the latency disadvantage expected from DDR5 is much smaller in practice than the numbers would seem to indicate (likely down to more channels and other differences in how data is transferred), and that at those settings - both of which are bad, but of which the DDR5 settings _ought to_ be worse - DDR5 mostly outperforms DDR4 by a slim margin.
> 
> That still means that fast DDR4 will be faster until we get fast(er) DDR5 on the market, but it also means that we won't need DDR5-8000c36 to match the performance of DDR4-4000c18.



Latency is just one factor. Specifically, the CL is how long (in clock cycles, not time) it takes for the first word of a read to be available on the output pins of the memory. After that, the first number (3200, 4400, 4800, etc.) is how fast data transmits.

I think in general for 'normal' applications high MT/s (like 5200) is better, for games lower latency is better.   There are plenty of exceptions, especially when you get into the 'scientific' side of 'applications', but for normal user apps I think high MT/s helps.  

So just to note, here at TPU they used DDR5-6000 C36 Gear 2 (1:2 ratio).   This is some freaky fast DDR5 for now, probably more reflective of what will be available in 1H 2022.   The DDR4 used on older platforms is quite good too though, DDR4-3600 C16-20-20-34 1T Gear 1 and 1:1 IF for AMD is no slouch.  I think these are putting the older platforms pretty close to their best footing, that 90% of folks can get to run properly.
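As a back-of-envelope comparison of the two test kits (first-word latency only; DDR5's split sub-channels are glossed over and a plain dual-channel, 2×64-bit bus is assumed for both generations):

```python
def cas_ns(mt_per_s: int, cl: int) -> float:
    """First-word CAS latency in ns: CL cycles at half the transfer rate."""
    return cl * 2000 / mt_per_s

def peak_gbs(mt_per_s: int, channels: int = 2, bus_bits: int = 64) -> float:
    """Peak transfer rate in GB/s for a dual-channel, 64-bit-per-channel bus."""
    return mt_per_s * channels * bus_bits / 8 / 1000

# The two configs used in the review
print(round(cas_ns(6000, 36), 1), peak_gbs(6000))  # DDR5-6000 C36: 12.0 ns, 96.0 GB/s
print(round(cas_ns(3600, 16), 1), peak_gbs(3600))  # DDR4-3600 C16:  8.9 ns, 57.6 GB/s
```

So the DDR4 kit still wins on true latency while the DDR5 kit brings ~67% more peak bandwidth - which is the trade-off behind the "depends on the workload" point.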


----------



## defaultluser (Nov 5, 2021)

FedericoUY said:


> THere shouldn't be required to turn off E cores to get the best performance out of the cpu. I hope that gets fixed soon...




It's not only that - it's also the only way to get the long list of broken games working again!


----------



## Valantar (Nov 5, 2021)

RandallFlagg said:


> Latency is just one factor. Specifically, the CL is how long (in clock cycles, not time) it takes for the first word of a read to be available on the output pins of the memory. After that, the first number (3200, 4400, 4800, etc.) is how fast data transmits.
> 
> I think in general for 'normal' applications high MT/s (like 5200) is better, for games lower latency is better.   There are plenty of exceptions, especially when you get into the 'scientific' side of 'applications', but for normal user apps I think high MT/s helps.
> 
> So just to note, here at TPU they used DDR5-6000 C36 Gear 2 (1:2 ratio).   This is some freaky fast DDR5 for now, probably more reflective of what will be available in 1H 2022.   The DDR4 used on older platforms is quite good too though, DDR4-3600 C16-20-20-34 1T Gear 1 and 1:1 IF for AMD is no slouch.  I think these are putting the older platforms pretty close to their best footing, that 90% of folks can get to run properly.


Uhm ... what, exactly, in my post gave you the impression that you needed to (rather poorly, IMO) explain the difference between RAM transfer rates and timings to me? And even if this was necessary (which it really wasn't), how does this change anything I said?

Your assumption is also wrong: Most consumer applications are more memory latency sensitive than bandwidth sensitive, generally, though there are obviously exceptions. That's why something like 3200c12 can perform as well as much higher clocked memory with worse latencies. Games are _more_ latency sensitive than most applications, but there are very few realistic consumer applications where memory bandwidth is more important than latency. (iGPU gaming is the one key use case where bandwidth is king outside of server applications, which generally _love_ bandwidth - hence why this is the focus for DDR5, which is largely designed to align with server and datacenter owners' desires.)

And while DDR5-6000 C36 might be fast for now (it's 6 clock cycles faster than the JEDEC 6000A spec, though "freaky fast" is hardly suitable IMO), it is _slow_ compared to the expected speeds of DDR5 in the coming years. That's why I was talking about mature vs. immature tech. DDR5 JEDEC specifications _currently_ go to DDR5-6400, with standards for 8400 in the works. For reference, the absolute highest DDR4 JEDEC specification is 3200. That means we haven't even seen the tip of the iceberg yet of DDR5 speed. So, again, even DDR5-6000c36 is a poor comparison to something like DDR4-3600c16, as one is below even the highest current JEDEC spec (let alone future ones), while the other is faster than the highest JEDEC spec several years into its life cycle.

The comment you responded to was mainly pointing out that the comparison you were talking about from Computerbase.de is _deeply_ flawed, as it compares one highly tuned DDR4 kit to a near-base-spec DDR5 kit. The DDR4 equivalent of DDR5-4400 would be something like DDR4-2133 or 2400. Also, the Computerbase DDR5-4400 timings are JEDEC 4400A timings, at c32. That is a theoretical minimum latency of 14.55 ns compared to 7.37 ns for DDR4-3800c14. You see how that comparison is _extremely_ skewed? Expecting _anything_ but the DDR4 kits winning in those scenarios would be crazy. So, as I said, mature, low-latency, high-speed DDR4 will _obviously_ be faster, especially in (mostly) latency-sensitive consumer workloads. What more nuanced reviews show, such as AnandTech's more equal comparison (both at JEDEC speed), is that the expected latency disadvantage of DDR5 is much smaller than has been speculated.


----------



## The red spirit (Nov 5, 2021)

Valantar said:


> But then you have three numbers: power draw (W), tIHSavg and tIHSpeak. How do you balance the three when designing a cooler? And how do you measure the three at all? With a reference cooler? Without a cooler?


Cooler makers should only specify what they can dissipate. You, as a consumer, would buy a chip and calculate what cooler you need from TDP (fixed) and efficiency. That's all. You as a consumer are free to accommodate the peak or not.



Valantar said:


> You can argue that all you want, the most important priority for them is simplifying their production lines and system configurations to increase profit margins. You're not going to convince them to invest millions in complex thermal testing regimes. The only effective way of doing this is enforcing this on a component manufacturer level, ideally through either industry body or government standardization.


It's not that expensive to determine what coolers they would need, and the savings on metals would quickly outweigh modest R&D costs.



Valantar said:


> Again: "heat transfer efficiency" is an incredibly complex thing, and cannot be reduced to a simple number or measurement without opening the door for a lot of variance. And it doesn't cover odd chip placements unless that number has _huge_ margins built in, in which case it then becomes misleading in the first place.


I don't see that happening tbh. 



Valantar said:


> That's because all CPUs boost past TDP and people expect that performance to last forever. This is down to the idiotic mixed usage of TDP (cooler spec _and_ marketing number), as well as the lack of any TDP-like measure for peak/boost power draws, despite all chips far exceeding TDP. If CM (or anyone else) built a cooler following the TDP spec for 105W, it would be fully capable of running a stock 5950X, but the CPU wouldn't be able to maintain its 141W boost spec over any significant amount of time, instead throttling back to whatever it can maintain within thermal limits at the thermal load the cooler can dissipate - i.e. base clock or a bit more.  The issue here is that nobody would call that acceptable - you'd have a CPU running at near 90 degrees under all-core loads and losing significant performance. Most would call that thermal throttling, despite this being inaccurate (it would need to go below base clock for that to be correct), but either way it's easy to see that the cooler is insufficient despite fulfilling the spec. _That_ is why TDP is currently useless for manufacturers, not that the measurement doesn't work but that it doesn't actually cover the desired use cases and behaviours of customers.


Might as well educate buyers that boost is not guaranteed, but Intel has been doing it for at least a decade and it ended up this way. Perhaps new Alder Lake measurements just make sense.


----------



## HD64G (Nov 5, 2021)

@W1zzard , did you enable SAM on the AM4's board UEFI when testing? I am almost sure Intel doesn't support it as much.


----------



## medi01 (Nov 5, 2021)

This will, no doubt, finish AMD not.

I'm not even convinced AMD needs to change pricing on any of its products.


----------



## The red spirit (Nov 5, 2021)

medi01 said:


> This will, no doubt, finish AMD not.
> 
> I'm not even convinced AMD needs to change pricing on any of its products.


That's kinda obvious, but techtube says otherwise and many people listen to them.


----------



## Ravenas (Nov 5, 2021)

Anyone know where Intel is fabbing these?


----------



## Deleted member 215115 (Nov 5, 2021)

robb said:


> Yeah because no other cpu has been worth upgrading to over that old cpu. lol get real. And a solid 144 is important to you yet you held on this long to a cpu that cant even maintain 60 fps in some games.


This is why you never bother with these low-spec 4K60 peasants. Their statements are so dumb that they could actually work as bait. As soon as you see their 4K monitor and their 10 y/o CPU paired with a 2080 you just know they live in fantasy land. They are so deluded that they lose basic understanding of how tech works. They actually believe that their CPU is still good enough and nothing you do or say will change their mind because keeping that CPU for so long is the only meaningful achievement in their life so they have to defend it. It's like arguing with women. Just don't do it. Total waste of time, especially since staff and other members will always defend them for some reason.

On topic: The 12900K is great & efficient, hail Intel, AMD sucks, yadda yadda yadda. Gonna go buy an i9 right now and keep it for 15 years.


----------



## Steevo (Nov 5, 2021)

Affected Processors: Transient Execution Attacks & Related Security Issues
Review the impact of transient execution attacks and select security issues on currently supported Intel products.
www.intel.com

Alder Lake is still vulnerable to attack; what's the performance going to be when they fix it?


----------



## Xuper (Nov 5, 2021)

From a business view, and looking at die size, Intel's new arch costs more than Zen 3, therefore Intel had to choose a path with less profit than Zen 3.


----------



## AusWolf (Nov 5, 2021)

The red spirit said:


> At least, this time they have balls to also admit that they can use nearly 300 watts of power. Not sure about you, but I treat TDP or base power are maximum expected power usage at base clocks without any turbo clocks. But I tested my own i5 and in prime95 small FFTs it uses less than 65 watts (I think it was up to 50 watts) of power with turbo off, so I guess any power number that Intel or AMD releases doesn't mean anything.


So far, TDP on Intel has meant PL1, that is, the long-term power limit enforced by the motherboard by default. My 11700 can do 2.8 GHz (300 MHz above base clock) in Cinebench all-core while maintaining the factory 65 W limit. I'm not sure about Alder Lake, though.

As for AMD, TDP is nothing more than a recommendation for cooler manufacturers (bull****). It has nothing to do with power.



The red spirit said:


> In FX 9590 era, we called that desperate, in 2021 we call that excellent. "Editor's Choice" and "Highly Recommended".


"Highly recommended"... to slap a huge liquid cooler on it.
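For anyone curious how the PL1/PL2/tau mechanism plays out over time, here's a toy moving-average model (an illustration of the idea only, not Intel's actual algorithm; the 65 W / 224 W / 28 s figures are assumed, roughly 11700-class numbers):

```python
# Simplified PL1/PL2/tau model: the CPU may draw up to PL2 while an
# exponentially weighted average of recent power stays under PL1; once the
# average reaches PL1, it must clamp back down to PL1.

PL1, PL2, TAU = 65.0, 224.0, 28.0  # watts, watts, seconds (assumed figures)

def step_avg(avg: float, power: float, dt: float = 1.0) -> float:
    """One EWMA update of the running average power."""
    a = dt / TAU
    return (1 - a) * avg + a * power

avg, seconds = 0.0, 0
while avg < PL1:          # all-core load lands; CPU boosts flat-out at PL2
    avg = step_avg(avg, PL2)
    seconds += 1
print(seconds)  # 10 -> roughly ten seconds of full boost before the clamp
```

That burst-then-clamp behaviour is what the factory 65 W limit looks like in practice - and it's exactly what Alder Lake's default power limits no longer enforce.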


----------



## W1zzard (Nov 5, 2021)

HD64G said:


> @W1zzard , did you enable SAM on the AM4's board UEFI when testing? I am almost sure Intel doesn't support it as much.


Enabled on all platforms, it's supported just fine everywhere


----------



## Totally (Nov 5, 2021)

W1zzard said:


> Enabled on all platforms, it's supported just fine everywhere



By any chance is Alder Lake ever going to be benched using DDR4?


----------



## birdie (Nov 5, 2021)

Steevo said:


> Alder lake is still vulnerable to attack, what’s the performance going to be when they fix it.



*All* CPUs featuring out of order speculative execution are vulnerable to Spectre class attacks. No matter if they are Intel, AMD, ARM or MIPS.

Each review of new Intel CPUs has seen at least one person blaming Intel for not fixing HW vulnerabilities. It's a sort of tradition nowadays.

A nice overview of affected CPU architectures and their status is on Wikipedia.


----------



## The red spirit (Nov 5, 2021)

AusWolf said:


> "Highly recommended"... to slap a huge liquid cooler on it.


I really wonder why it was given that award. It's more or less the same as recommending the FX 9590, except this time Intel has the performance edge at least; but unlike the FX 9590, the i9 is uncoolable. If a 280mm AIO and a D15 fail to cool it, then what can? Is the minimum spec for it now a 360mm AIO or a custom loop? Good one, Intel. I'll wait until their flagship chips need an LN2 pot as the minimum cooler.


----------



## mechtech (Nov 5, 2021)

"the new Socket AM5. An LGA package with 1,718 pins, AM5"

AM5 will have more pins therefore it must be faster


----------



## robb (Nov 5, 2021)

qubit said:


> Mind your attitude.
> 
> @Valantar Thanks for the technical explanation, it does sound about right. I knew it was for reasons along these lines and said so in my post.
> 
> ...


can maintain well over 60 fps in the latest games with a 2700k? thanks for proving just how delusional I thought you were with that asinine claim. you are so full of it that it is laughable. even my oced 4700k, which is quite a bit faster, could not maintain 60 fps in some games even 3 years ago and most certainly had plenty of drops well below 60 fps. knock yourself out with the last word as no point in fooling with someone like you.


----------



## ncrs (Nov 5, 2021)

birdie said:


> A nice overview of affected CPU architectures and their status is on Wikipedia.


Sadly that's incomplete, missing 7 CVEs from Intel guidance and a few recent microarchitectures.

Edit: Looking closer at the Intel site it looks like Alder Lake is indeed vulnerable to CVE-2020-24511 and CVE-2020-8698 that Rocket Lake wasn't. Supposedly fixed in microcode and hardware respectively, so most likely release BIOSes are safe.


----------



## Why_Me (Nov 5, 2021)

HenrySomeone said:


> Fair enough advice I suppose (if you don't need/want a new PC right now, especially in the light of ridiculous graphics cards prices, but I suspect in this particular case, he'll hold on to his 2080 for the time being anyway), however the problem is, that we don't know which skus will get the 3D treatment; some indications say only octa-cores and up, some even only 12&16 and none mention the 6-core and the latter is just mind boggling. Not only have they already pretty much abandoned the sub $300 class so far, but with 5600x remaining all they will offer here, they'll lose it completely. Even if they drop its price to $200 (unlikely), there still won't be any competition for the 12600k, especially given that motherboard availability and pricing will only improve with time.


The 5600X looks to be dead in the water if AMD doesn't lower its price.









Best buy incoming: sub-US$200 Intel Core i5-12400F beats the Ryzen 5 5600X
For now, the i5-12600K is the value king, but, if you can wait for the i5-12400F releasing in two months, you could get roughly the same performance for sub-US$200.
www.notebookcheck.net


----------



## GreiverBlade (Nov 5, 2021)

well, reviews out, although wrong contender ...
nonetheless after "reviewing the reviews" ( errrr ...   ) a 5600/5600x will be more profitable for me if i want an upgrade (not that the 3600 is a slouch, for my usage ) the 300$ 12600K is well ... ~50 more than the cheapest 5600X i can find ...

add that to the fact that i can keep my mobo ... and avoid win 11 (for now) and avoid the skittish big.INTEL eerrrr i mean LITTLE scheduler, and all the cons i see in the reviews kinda outweigh the pros...
kinda ironic, "improved efficiency"/"not as efficient as Zen 3" (not the only one, but the most striking for me )

wait and see is the right thing to do at the moment (and keep some upgrade path i guess ... without going full throttle on a new platform )




Why_Me said:


> The 5600X looks to be dead in the water if AMD doesn't lower its price.
> 
> 
> 
> ...


well that's quite true ... ahhhh whatever ... at least i could be a buyer of a dead in the water CPU at  249 for a 5600X found some in listings, also even cheaper from second hand ('round 149/199), i guess future early adopters are selling, not gonna complain (will still be cheaper than a new mobo+cpu  )

mmhhh, Intel innovated i reckon although not really appealing to make the switch again... later maybe, who knows ...
and no, the gap between the 2 is not abyssal, they retook the crown but given all, if looking at the whole picture (regardless of next amd product) the advantage is not that clear (pros/cons in account)
12600k is over a 5600X but consumption is higher too, price wise same same (not factoring new mobo/ram ofc, well my previous 6600K was priced around that, as it was 289 at the time i got it )

always take everything in account, before making a choice/opinion.


----------



## xorbe (Nov 5, 2021)

That 4K gaming summary, everything 95-100%.  I'll be using my 9900KS for a long time.  Because it doesn't matter how fast my Excel is or isn't, it's fast enough.


----------



## arni-gx (Nov 6, 2021)

I still don't understand why there is no test of full system power draw when playing PC games??




But I've just realised that the i9 Comet Lake runs much cooler than the i9 Rocket Lake (8C/16T) and the i9 Alder Lake (16C/24T), even though Comet Lake is still 14 nm with 10C/20T... nice...

Also, it's an interesting fact that the AMD R9 5950X runs much cooler than the R9 5900X and R7 5800X, or any Alder Lake CPU...


----------



## qubit (Nov 6, 2021)

robb said:


> can maintain well over 60 fps in the latest games with a 2700k? thanks for proving just how delusional I thought you were with that asinine claim. you are so full of it that it is laughable. even my oced 4700k, which is quite a bit faster, could not maintain 60 fps in some games even 3 years ago and most certainly had plenty of drops well below 60 fps. knock yourself out with the last word as no point in fooling with someone like you.


Yes, easily 70-100+ fps depending on game and exact scene being rendered and level of detail. Who the hell are you to tell me otherwise? You act like you know what you're talking about when you actually don't have a clue.

It's pointless trying to reason with you due to your bad attitude and personal attacks on me. You're breaking the forum rules, reported.

EDIT: And my CPU can do this without an overclock, too.


----------



## TheoneandonlyMrK (Nov 6, 2021)

qubit said:


> Yes, easily 70-100+ fps depending on game and exact scene being rendered and level of detail. Who the hell are you to tell me otherwise? You act like you know what you're talking about when you actually don't have a clue.
> 
> It's pointless trying to reason with you due to your bad attitude and personal attacks on me. You're breaking the forum rules, reported.
> 
> EDIT: And my CPU can do this without an overclock, too.


I jest; I was surprised you're still rocking a 2600K. The subsystem upgrade alone will be epic.


----------



## robb (Nov 6, 2021)

qubit said:


> Yes, easily 70-100+ fps depending on game and exact scene being rendered and level of detail. Who the hell are you to tell me otherwise? You act like you know what you're talking about when you actually don't have a clue.
> 
> It's pointless trying to reason with you due to your bad attitude and personal attacks on me. You're breaking the forum rules, reported.
> 
> EDIT: And my CPU can do this without an overclock, too.


You are lying and you know it. There are numerous modern games where a 2700k cannot maintain 60 fps.


----------



## qubit (Nov 6, 2021)

robb said:


> You are lying and you know it. There are numerous modern games where a 2700k cannot maintain 60 fps.


I didn't say every game did I? You really need to stop making assumptions about what I'm saying and stop with the personal attacks mkay? Don't go accusing people of being liars. It's offensive and makes you look ever more clueless.

If you'd simply spoken to me in a reasonable manner, I'd have elaborated on the exact circumstances where the performance holds up. As it is, I really can't be arsed. Reported again.


----------



## robb (Nov 6, 2021)

qubit said:


> I didn't say every game did I? You really need to stop making assumptions about what I'm saying and stop with the personal attacks mkay? Don't go accusing people of being liars. It's offensive and makes you look ever more clueless.
> 
> If you'd simply spoken to me in a reasonable manner, I'd have elaborated on the exact circumstances where the performance holds up. As it is, I really can't be arsed. Reported again.


So, moving the goalposts, I see. And then you call me clueless? You said you maintain well over 60 fps even in newer games, and all I told you was that you can't maintain over 60 fps in all newer games. Then your ridiculous reply was "Yes, easily 70-100+ fps depending on game and exact scene being rendered and level of detail". Nowhere did you ever admit you can't hold 60 fps in any game, so stop with the BS now trying to cover your rear end. But yes, keep reporting me like a little child that knows he has been caught lying and does not want to be exposed.


----------



## 95Viper (Nov 6, 2021)

Alrighty, Stop the arguing and bickering.
Discuss the topic, and, not each other.
Follow the Guidelines; and, if, you have not seen or read them... they are here> Forum Guidelines
Read and understand them before posting.
Here are a few quotes to get you started:



> *Posting in a thread*
> Be polite and Constructive, if you have nothing nice to say then don't say anything at all.
> This includes trolling, continuous use of bad language (ie. cussing), flaming, baiting, retaliatory comments, system feature abuse, and insulting others.
> Do not get involved in any off-topic banter or arguments. Please report them and avoid instead.
> ...





> *Reporting and complaining*
> All posts and private messages have a "report post" button on the bottom of the post, click it when you feel something is inappropriate. Do not use your report as a "wild card invitation" to go back and add to the drama and therefore become part of the problem.





> *Behavior that is inappropriate/should be reported*
> Insulting other forum members (calling someone names makes you look stupid anyways).
> Racist, hateful, toxic, and otherwise demeaning comments will not be tolerated; whether meant as a joke or not.
> "Trolling"


----------



## AusWolf (Nov 6, 2021)

mechtech said:


> "the new Socket AM5. An LGA package with 1,718 pins, AM5"
> 
> AM5 will have more pins therefore it must be faster


With that logic, LGA-1156 is exactly 1 faster than 1155.  



The red spirit said:


> I really wonder why it was given that award. It's more or less the same as recommending the FX 9590, except this time Intel has the performance edge at least; but unlike the FX 9590, the i9 is uncoolable. If a 280mm AIO and a D15 fail to cool it, then what can? Is the minimum spec for it now a 360mm AIO or a custom loop? Good one, Intel. I'll wait until their flagship chips need an LN2 pot as the minimum cooler.


It used to be clear with PL1 giving you a reasonable amount of performance and heat, but nothing serious. I don't know why Intel decided to disregard the TDP value with PL1 on Alder Lake. Oh wait, I know: to reclaim the performance crown. But at what cost?


----------



## birdie (Nov 6, 2021)

ncrs said:


> Sadly that's incomplete, missing 7 CVEs from Intel guidance and a few recent microarchitectures.
> 
> Edit: Looking closer at the Intel site it looks like Alder Lake is indeed vulnerable to CVE-2020-24511 and CVE-2020-8698 that Rocket Lake wasn't. Supposedly fixed in microcode and hardware respectively, so most likely release BIOSes are safe.



The article includes only _transient execution vulnerabilities_. Both AMD and Intel have a lot more than that but those are different altogether.


----------



## The red spirit (Nov 6, 2021)

AusWolf said:


> It used to be clear with PL1 giving you a reasonable amount of performance and heat, but nothing serious. I don't know why Intel decided to disregard the TDP value with PL1 on Alder Lake. Oh wait, I know: to reclaim the performance crown. But at what cost?


They could have re-released reheated Comet Lake once again; all they needed to do was raise PLs and clocks. Anyway, this whole situation changes nothing, it's obvious that Intel is still in a pinch and that AMD is the leader. But then again, the lower-end Alder Lake chips seem more reasonable and are a legit improvement, albeit a small one. So that's okay, the flagship gets media pizzazz, the low end offers reasonable value. As a stopgap until the next release it works, but once AMD comes out with something next, it will be RIP Intel on all fronts. 

Regarding the current line-up, did you notice that the i7 has the same 8 P-cores as the i9, but with fewer E-cores? That more or less means that in gaming and anything demanding they will be virtually the same thing, but the i7 is possible to tame heat- and power-wise. In particular, the i7 12700KF seems good: it's over 200 dollars cheaper, but has only minimal downsides in a gaming rig. It seems like far better value than the i9. The 12600KF seems to be cheaper than discounted 10600Ks. Still, I think the most meaningful chip will be the i5 12400(F). It has to be priced right (160-170 USD, 150-160 EUR) and be reasonably fast (minimum 5600X performance in games and MT tasks). That could be a great release. But the most mysterious release is going to be the i3. We have no idea how it will be configured. Maybe 4P/4E, maybe 4P/2E, or maybe Intel will go bonkers and give us a 6P-only part. I think a 4P/2E config would be the most reasonable. I wonder what will happen to the Pentium and Celeron lines. They were unacceptable for gaming, but if Intel made a Pentium with a 2P/2E config, the chip might actually be quite decent at gaming too. That would be really nice, a massive improvement to parts that nobody cares about. Celeron could be a 6E or 4E config, which is not great, but I guess enough for people who buy them.


----------



## AusWolf (Nov 6, 2021)

The red spirit said:


> They could have re-released reheated Comet Lake once again; all they needed to do was raise PLs and clocks. Anyway, this whole situation changes nothing, it's obvious that Intel is still in a pinch and that AMD is the leader. But then again, the lower-end Alder Lake chips seem more reasonable and are a legit improvement, albeit a small one. So that's okay, the flagship gets media pizzazz, the low end offers reasonable value. As a stopgap until the next release it works, but once AMD comes out with something next, it will be RIP Intel on all fronts.


RIP Intel? Are you mad? Alder Lake tears AMD to shreds in terms of pure performance. The downside is an enormous power consumption, but some people don't care about that. For them, the 12900K is the best CPU on the market, period. I don't understand these "Intel is dead" or "AMD is dead" kind of comments. AMD was under the weather for nearly a decade, but now they're alive and well. A lot more is needed to kill a company than a few generations of hot and hungry CPUs. Not to mention the i7 or i5 series which have always been the best selling range. Speaking of which...



The red spirit said:


> Regarding the current line-up, did you notice that the i7 has the same 8 P-cores as the i9, but with fewer E-cores? That more or less means that in gaming and anything demanding they will be virtually the same thing, but the i7 is possible to tame heat- and power-wise. In particular, the i7 12700KF seems good: it's over 200 dollars cheaper, but has only minimal downsides in a gaming rig. It seems like far better value than the i9. The 12600KF seems to be cheaper than discounted 10600Ks. Still, I think the most meaningful chip will be the i5 12400(F). It has to be priced right (160-170 USD, 150-160 EUR) and be reasonably fast (minimum 5600X performance in games and MT tasks). That could be a great release.


And:


The red spirit said:


> If that's what Intel will do, then I will boycott this bullshit. CPU shouldn't suck more than 100 watts. For graphics card, my limit is nothing more than 150 watts +- 10 watts.


If for you, a CPU isn't meant to eat more than 100 W, then the Core i9 lineup was never meant for you, as it isn't meant for most people. It's always been for those who don't care about power consumption, or heat, or value, just want the best performance available at whatever cost. One has to admit, the 12900K delivers that.



The red spirit said:


> But the most mysterious release is going to be the i3. We have no idea how it will be configured. Maybe 4P/4E, maybe 4P/2E, or maybe Intel will go bonkers and give us a 6P-only part. I think a 4P/2E config would be the most reasonable. I wonder what will happen to the Pentium and Celeron lines. They were unacceptable for gaming, but if Intel made a Pentium with a 2P/2E config, the chip might actually be quite decent at gaming too. That would be really nice, a massive improvement to parts that nobody cares about. Celeron could be a 6E or 4E config, which is not great, but I guess enough for people who buy them.


Considering that E cores come in clusters of 4, I doubt we'll ever see configs of 2E. I can imagine a 2P/4E or 4P/4E situation (even a 4P/0E one), but not until Comet Lake i3s are off the shelves.


----------



## Valantar (Nov 6, 2021)

AusWolf said:


> RIP Intel? Are you mad? Alder Lake tears AMD to shreds in terms of pure performance.


I wholeheartedly agree with most of what you're saying in this post, but that sentence needs moderation. "Tears to shreds" is quite the exaggeration. It wins out in most tests in AnandTech's review, but not all, and the margins are for the most part not revolutionary. There are some solid wins in there, but also some solid losses. In TPU's CPU testing suite the 5950X is still 3% ahead - but the 12900K has a solid lead in games. All in all it's a good competitor for absolute performance, taking the lead pretty securely (which it ought to, launching a year later), and it's clearly the fastest for gaming (but also in a realm of performance where it really doesn't matter for actually perceptible in-game differences).


----------



## The red spirit (Nov 6, 2021)

AusWolf said:


> RIP Intel? Are you mad? Alder Lake tears AMD to shreds in terms of pure performance. The downside is an enormous power consumption, but some people don't care about that. For them, the 12900K is the best CPU on the market, period. I don't understand these "Intel is dead" or "AMD is dead" kind of comments. AMD was under the weather for nearly a decade, but now they're alive and well. A lot more is needed to kill a company than a few generations of hot and hungry CPUs. Not to mention the i7 or i5 series which have always been the best selling range. Speaking of which...


My remark "RIP Intel" is only about current line-up vs Zen 4, not about whole company going under. 




AusWolf said:


> If for you, a CPU isn't meant to eat more than 100 W, then the Core i9 lineup was never meant for you, as it isn't meant for most people. It's always been for those who don't care about power consumption, or heat, or value, just want the best performance available at whatever cost. One has to admit, the 12900K delivers that.


I actually doubt that. I think that i9 10900K might be faster if it had unlocked PLs and higher clocks.

I know that an i9 isn't meant for me; I would never buy one. The problem is that flagships used to be 95 watts (Phenom X4 9950) or at most 125 watts (Sandy Bridge-E). I could see the appeal of those chips back then, but now I'm not sure I would want something like an i9 even if I had more money than Bezos. Power be damned, but how are you supposed to cool it? So far no cooler has successfully cooled it, and some of those were 280 mm AIOs. Is 360 mm or 420 mm enough? Is a closed loop enough? Or do we need phase change to cool it down? I don't know, but to me it would feel like a major pain in the ass to tame, and if that's true, then what's the point? It doesn't deliver any luxury beyond a 10% performance increase over the 5950X. If I was rich AF, I wouldn't bother with consumer stuff and would build a HEDT machine. Chips like the 5950X and 12900K are essentially pointless: those looking for power go with HEDT, and consumers just want something sane that works and is priced reasonably. The current "fuck all" budget chip is the TR 3970X (the 3990X is a bit weak in single core). Things like i9 or Ryzen 9 on a mainstream platform are just products made for poor people to feel rich (they aren't exactly poor, but I feel snarky). Those platforms are always gimped in terms of PCIe lanes and other features, and that's why TR4 is the ultimate workhorse and "fuck all" budget platform. And if that's too slow, you slap phase change on a TR, OC it as far as it goes and enjoy. Far better than an octa-core with some eco fluff. 

But if I was rich and just wanted something that isn't obscenely excessive, I would just get the i5-12600K, since it is the most reasonable high-end chip on the LGA 1700 platform. 



AusWolf said:


> Considering that E cores come in clusters of 4, I doubt we'll ever see configs of 2E. I can imagine a 2P/4E or 4P/4E situation (even a 4P/0E one), but not until Comet Lake i3s are off the shelves.


That would be a major oversight. What's the point of a 2P/4E chip? An Atom W? It would be horrible in games and hell to schedule properly, because P cores are real cores and E cores are just padding. I personally don't see the i9-12900K as a real 16-core chip; it's just an 8-core chip with some Atoms to deal with background tasks while running games.


----------



## mechtech (Nov 6, 2021)

Valantar said:


> I mostly wholeheartedly agree with what you're saying in this post, but that sentence needs moderation. "Tears to shreds" is quite the exaggeration. It wins out in most tests in Anandtech's review, but not all, and the margins aren't for the most part revolutionary. There are some solid wins in there, but also some solid losses. In TPU's CPU testing suite the 5950X is 3% ahead still - but the 12900K has a solid lead in games. All in all it's a good competitor for absolute performance, taking the lead pretty securely (which it ought to, launching a year later) and it's clearly the fastest for gaming (but also in a realm of performance where it really doesn't matter for actually perceptible in-game differences).


Yeah, it's typical for people to say, but really, like you mention, it's way off base.

+/- 0-5% after margins of error - I would call it basically tied
+10% - I would say it edged out the competition by 10% (basically like if you have $1 and the other guy has $1.10 - not much of a lead, but it is noticeable)
+20% - I would say it consistently beats the competition by about 20%
+30% - soundly beats the competition in everything by about 30%
+50% - significantly outperforms the competition by 50%
etc. etc.
+100% - torn to shreds - it doubled the competition's performance in everything
+1000% - utterly destroyed (like caveman race vs. Star Trek level race)

That's the problem with words: they are not numbers, and people can imagine whatever they want. That's why it's a good idea to put the % in after the words - then there's no imagination or beating around the bush.
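For fun, the scale above could be sketched as a tiny helper. The thresholds and labels come straight from the list; the function name and exact cutoff handling are made up for illustration:

```python
def describe_margin(pct: float) -> str:
    """Map a performance margin in percent to a plain-English descriptor,
    using the (informal) thresholds from the scale above."""
    if pct < 5:
        return "basically tied"       # within margin of error
    if pct < 20:
        return "edges out"            # like $1 vs $1.10
    if pct < 30:
        return "consistently beats"
    if pct < 50:
        return "soundly beats"
    if pct < 100:
        return "significantly outperforms"
    if pct < 1000:
        return "tears to shreds"      # doubled the performance
    return "utterly destroys"         # caveman vs. Star Trek

# By this scale, the 5950X's ~3% lead in TPU's MT suite is "basically tied".
print(describe_margin(3))
```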


----------



## fevgatos (Nov 6, 2021)

The red spirit said:


> My remark "RIP Intel" is only about current line-up vs Zen 4, not about whole company going under.
> 
> 
> 
> ...


Unless you are rendering all day because you work at Disney, I don't see where these arguments come from. Alder Lake is EXTREMELY efficient in 99% of productivity workloads and gaming; rendering is the only task it is not efficient at. In everything else a 12900K SMOKES the 5950X in both performance and efficiency. People need to actually read the reviews and stop spreading misinformation. 

Check these results from igorslab, if it's efficiency you want, 12900k is your man. Unless you are just rendering of course.


----------



## AusWolf (Nov 6, 2021)

Valantar said:


> I mostly wholeheartedly agree with what you're saying in this post, but that sentence needs moderation. "Tears to shreds" is quite the exaggeration. It wins out in most tests in Anandtech's review, but not all, and the margins aren't for the most part revolutionary. There are some solid wins in there, but also some solid losses. In TPU's CPU testing suite the 5950X is 3% ahead still - but the 12900K has a solid lead in games. All in all it's a good competitor for absolute performance, taking the lead pretty securely (which it ought to, launching a year later) and it's clearly the fastest for gaming (but also in a realm of performance where it really doesn't matter for actually perceptible in-game differences).


Fair enough. Correction: the 12900K is the best CPU in its price range, if you don't mind the heat and power consumption. I don't think it's meant to compete with the 5950X, but with the 5900X.



The red spirit said:


> Things like i9 or Ryzen 9 on mainstream platform are just products made for poor people to feel rich (they aren't exactly poor, but I feel snarky).


Exactly my point.


----------



## efikkan (Nov 6, 2021)

It seems like most of you are missing the most important advance of this CPU: higher performance per core means anything from basic desktop applications to productivity tools will get a nice performance boost. In workloads like office applications, web browsing etc., Alder Lake represents a new tier of performance over Rocket Lake and Zen 3, which will result in more responsiveness and a better user experience. Even in popular productivity tools for photo and video editing, it should be obvious how faster cores are more useful than more cores. Far too many of you are fixated on useless benchmarks like Cinebench.

Another key takeaway should be that for most real workloads, having more than 6-8 "big cores" isn't really beneficial. Even among enthusiasts such as this forum, most of you should be looking at the i5-12600K, even the "semi-pros" who do a little content creation, editing or programming. The big concerns should be availability and potential Windows issues for early adopters, but performance-wise the i5-12600K is a more attractive all-round desktop CPU than the Ryzen 9 5900X.



Ravenas said:


> How is this any different than before with good single threaded performance at the cost of horrible power consumption?





Valantar said:


> Power consumption is still worrying though, and the inability to cool the CPU properly at all with a U14s - which is not a small cooler! - is pretty shocking. This is a top-of-the-line CPU, sure, but it shouldn't _require_ an AIO still.


To both of you:
The important question is what the real power consumption is, and under which circumstances.
Does it matter if the CPU gets a little hot under an unrealistic workload? (And putting a CPU under such a load 24/7 is going to wear it out quickly anyway.)
I'm pretty sure the Noctua NH-U14S is more than sufficient for 99.9% of customers buying this CPU, and an AIO usually isn't going to improve much over an NH-U14S in a case; it's much better to upgrade the case fans and calibrate the fan curves. Do that, and you'll get a system that's very quiet under most real workloads, yet can still handle the extreme peaks.
AIO coolers are usually not a good choice anyway: far too much noise for little gain. If you need cooling for extreme overclocking, go custom loop.



rrrrex said:


> What a reason to make E-Cores? Idle consumption isn't great with it, оverall perfomance is about the same. Maybe Windows should be improved a lot in that way, something like OS and it's services work on E-cores and keep other cores for work applications.


Marketing.
Most computers are sold by big companies like Dell, HP, Lenovo etc., and most of their customers (even businesses) compare products based on "specs". Now that CPUs aren't gaining clock speed anymore, they need to increase core count, even if it means adding "pointless" tiny cores. What the customer sees is 5 GHz and 24 cores at 65 W (for the OEM CPUs), even though they can't operate anywhere close to that sustained. This is why the upcoming mainstream architectures from Intel will continue to boost the tiny-core count.



RandallFlagg said:


> That would be because AnandTech uses JEDEC standard RAM.  This is garbage to any DIY builder.


JEDEC speeds are the speeds most of you should use, this is the speed the memory controller is designed to operate at, and running it overclocked over time will result in data loss and eventually stability issues.

Overclocking memory is still overclocking, and should never be the default recommendation for DIY builders. It should be reserved for those wanting to take the risk.

Using overclocked memory in benchmarking is also misleading, as each overclock will be different, and people usually have to decrease their overclock after a while.


----------



## AusWolf (Nov 6, 2021)

The red spirit said:


> That would be a major oversight. What's the point of 2P/4E chip? An Atom W? It would be horrible in games and hell to schedule properly, because P cores are real cores and E ones are just padding. I personally don't see i9 12900K as a real 16 core chips, it's just 8 core chips with some Atoms to deal with background tasks, while running games.


Personally, I'm expecting i3 to be 4P/4E, though I also think speculation is totally pointless. I also think that we won't see them as long as Comet Lake i3 is on the market.



efikkan said:


> It seems like most of you is missing the most important advances of this CPU; higher performance per core means anything from basic desktop applications to productive tools will get a nice performance boost. In workloads like office applications, web browsing etc., Alder Lake represent a new tier of performance over Rocket Lake and Zen 3, which will result in more responsiveness and a better user experience. Even in popular productive tools for photo and video editing, it should be obvious how faster cores is more useful than more cores. Far too many of you are fixated about useless benchmarks like Cinebench.


As a previous owner of several Zen 2 and 3 CPUs ranging from the 3100 all the way to the 5950X, and the current owner of an 11700, I can confirm that we desperately need a boost in office applications. 8 cores near 5 GHz just don't cut it anymore for Excel.



efikkan said:


> The important question is under what's the real power consumption under which circumstances.
> Does it matter if the CPU gets a little hot under an unrealistic workload? (And putting a CPU under such a load 24-7 is going to wear it out quickly anyways)
> I'm pretty sure Noctua NH-U14S is more than sufficient for 99.9% of customers buying this CPU, and if anything an AiO is usually not going to improve that much over NH-U14S in a case, it's much better to upgrade the case fans and calibrate the fan curves. Do that, and you'll get a system that's very quiet under most real workloads, yet still can handle the extreme peaks.
> AiO coolers are usually not a good choice anyways, far too much noise for little gains. If you need cooling for extreme overclocking, go custom loop.


It's pointless to buy a 12900K for situations where a U14S can cool it. You either need all the cores and speed you can get, or you buy something cheaper.


----------



## efikkan (Nov 6, 2021)

AusWolf said:


> As a previous owner of several Zen 2 and 3 CPUs ranging from the 3100 all the way to the 5950X, and the current owner of an 11700, I can confirm that we desperately need a boost in office applications. 8 cores near 5 GHz just don't cut it anymore for Excel.


Excel is super bloated; when I use it for basic spreadsheets at work on an i7-10700K, it lags like crazy…
But the core count is not the issue; faster cores will help. Of course, with extreme bloat, there are limits to how much help there can be.



AusWolf said:


> It's pointless to buy a 12900K for situations where a U14S can cool it. You either need all the cores and speed you can get, or you buy something cheaper.


That's nonsense.
NH-U14S can cool it just fine except for the extreme (mostly unrealistic) sustained loads on all cores.
Even for the most demanding users, the bulk of their time is spent on mixed workloads. Whether you're editing videos, photos etc., having all those short operations be faster greatly improves productivity, and that's where CPUs like the i5-12600K through i9-12900K come in.
If you are planning to run huge rendering jobs etc., you should probably buy a dedicated "server" for that.


----------



## RandallFlagg (Nov 6, 2021)

efikkan said:


> JEDEC speeds are the speeds most of you should use, this is the speed the memory controller is designed to operate at, and running it overclocked over time will result in data loss and eventually stability issues.
> 
> Overclocking memory is still overclocking, and should never be the default recommendation for DIY builders. It should be reserved for those wanting to take the risk.
> 
> Using overclocked memory in benchmarking is also misleading, as each overclock will be different, and people usually have to decrease their overclock after a while.



No enthusiast uses that, and these are enthusiast sites.  I never said it was an invalid test, just that it is an oddball scenario nobody fits into.

Even for those buying OEM, I mean really, who goes out and buys a $600 Maximus Hero and runs it at 2933 or 3200 CL22???   It's a laughable configuration.

Even OEMs will stick you with an A520 on AMD or H510/H610 on Intel when they pair it with DDR4-2666 or 2933 etc.  

AnandTech is like a car enthusiast site that buys Mustang GTs and de-tunes them to see how many MPG they can get, then tries to come to some conclusion about how powerful or efficient the engine is.  And their benchmark DB is so borked now, it's completely useless.  TPU is worlds better, far more relevant, using very common enthusiast grade memory and parts on a very consistent set of benchmarks - and when they can't keep it consistent or something big changes, they retest everything.  

And you know something, according to Alexa stats it pays off.  TPU ranks about 5,600 while AT ranks like 13,500.   

It's what anyone who has had a communications class knows, you speak to your audience, meaning to what they are concerned with and is relevant to them, as opposed to what is relevant and concerning to you the speaker.


----------



## The red spirit (Nov 6, 2021)

AusWolf said:


> Personally, I'm expecting i3 to be 4P/4E, though I also think speculation is totally pointless. I also think that we won't see them as long as Comet Lake i3 is on the market.


That's optimistic for sure, but may be true. The only reason I suggested 4P/2E is that it should be rather cheap to make and would be a reasonable step up from the i3-10100(F). 4P/4E might just be too expensive for the same budget bracket, but if Intel pulls it off, that's going to be excellent: the i3 would gain tons of performance, maybe nearly 2x the previous one in MT tasks. Sounds very good, but I think that may be a bit too good to be true for an i3. If it was that good, it might seriously cut into i5 sales, since it could be seen as a viable replacement for the 10400(F) or 11400(F). That's unprecedented for an i3. 

Anyway, it would be interesting to see new integrated graphics benches. Alder Lake chips might be competitive against Ryzen APUs. I have looked at this video:

[embedded video]

Seems rather weak. I wonder if the i3 will have an overclockable iGPU and fast RAM support, and if Intel will stop using the "XMP is overclocking" argument to justify gimping their H chipsets. 

An i3 with an overclockable iGPU, fast DDR5 and an ungimped H610 board would be an interesting budget setup.  




AusWolf said:


> As a previous owner of several Zen 2 and 3 CPUs ranging from the 3100 all the way to the 5950X, and the current owner of an 11700, I can confirm that we desperately need a boost in office applications. 8 cores near 5 GHz just don't cut it anymore for Excel.


I'm curious what kind of Excel work you do that lags. I have to do various Excel work for university, and I'm sure a Pentium III would be more than fine for it, as long as nothing visual is going on.


----------



## AusWolf (Nov 6, 2021)

efikkan said:


> Excel is super bloated, when I use it for basic spreadsheets at work on a i7-10700K it lags like crazy…
> But the core count is not the issue, faster cores will help. But of course, with extreme bloat, there are limits to how much help there can be.





The red spirit said:


> I'm curious what kind of Excel work you do that it lags? I have to do various Excel work for university and I'm sure that Pentium III would be more than fine for it, as long as nothing visual is going on.


I meant that as sarcasm. Excel runs fine on my Atom x5 based Compute Stick.



efikkan said:


> That's nonsense.
> NH-U14S can cool it just fine except for the extreme (mostly unrealistic) sustained loads on all cores.
> Even for most demanding users, the bulk of their time is spent with mixed workloads, whether you're editing videos, photos etc., having all those short operations be faster greatly improves productivity, and that's where CPUs like i5-12600K - i9-12900K comes in.
> If you are planning to run huge rendering jobs etc., you should probably buy a dedicated "server" for that.


It's not nonsense. You need a high performance CPU for high performance applications. You don't spend 600 bucks on a CPU that sits idle 99% of the time, unless you do it for e-peen enlargement.



The red spirit said:


> That's optimistic for sure, but may be true. The only reason why I suggested 4P/2E is that it should be rather cheap to make and would be a reasonable step up from i3 10100(F). But 4P/4E might just be too expensive for same budget bracket. But if Intel pulls it off, then that's going to be excellent. i3 then would gain tons of performance, it might be nearly 2 times faster than previous one in MT tasks. Sounds very good, but I think that may be bit too good to be true for i3. If it was so good, then it may seriously cut into i5 sales, since it could be seen as a viable replacement for 10400(F) or 11400(F). That's unprecedented for i3.


It might also be based on the rumoured 6P/0E chip. I guess we'll know when it comes out.  I'm just speculating that 2E variants won't exist due to what I said above.


----------



## fevgatos (Nov 6, 2021)

AusWolf said:


> It's not nonsense. You need a high performance CPU for high performance applications. You don't spend 600 bucks on a CPU that sits idle 99% of the time, unless you do it for e-peen enlargement.


The 12900K outperforms everything in AutoCAD (by a big margin), yet it only consumes 87 watts doing so. The same applies to a bunch of other productivity applications. Are you saying one shouldn't buy a 12900K for those applications because they don't consume a truckload of power?


----------



## AusWolf (Nov 6, 2021)

fevgatos said:


> The 12900k outperforms everything in Autocad (by a big margin) yet it only consumes 87 watts doing so. The same applies to a bunch of other productivity applications. Are you saying that one shouldn't buy a 12900k for those applications because they don't consume a truckload of power?


No. I'm saying that one buys a 12900K if its performance is needed, regardless of power consumed - and not for working on Excel spreadsheets.


----------



## The red spirit (Nov 6, 2021)

AusWolf said:


> I meant that as sarcasm. Excel runs fine on my Atom x5 based Compute Stick.


Jokes or not, I have actually seen a Celeron (Pentium 4 era, 3 GHz model) lag horrendously in Excel. Once I needed some graphs, it straight up froze for like half a minute, and once it unfroze, it rendered them at less than 1 fps. It was some IT-class hell. But more realistically, I have heard that some companies keep databases in Excel: thousands of entries, all labeled, with tons of formulas to calculate many things. That kind of stuff might actually bring an i7 to its knees.



AusWolf said:


> It might also be based on the rumoured 6P/0E chip. I guess we'll know when it comes out.  I'm just speculating that 2E variants won't exist due to what I said above.


At that point it would be way too close to the i5 and might as well be called the i5-12400 LE.


----------



## fevgatos (Nov 6, 2021)

AusWolf said:


> No. I'm saying that one buys a 12900K if its performance is needed, regardless of power consumed - and not for working on Excel spreadsheets.


Well I agree, it's just that only a handful of applications actually max it out. Most productivity apps (photoshop / premiere / solidworks / autocad etc.) do not.


----------



## Valantar (Nov 6, 2021)

The red spirit said:


> I actually doubt that. I think that i9 10900K might be faster if it had unlocked PLs and higher clocks.


With a 30%-ish IPC deficit? Yeah, sorry, I don't think so. I mean, unless you are just outright ignoring reality and saying "if the 10900K had infinite clock speed it would be the fastest CPU ever", there are hard limits to how high those CPUs can clock regardless of power under any type of semi-conventional cooling, and I would think a stock 12900K beats a well overclocked golden sample 10900K in all but the most poorly scheduled MT apps, and every single ST app out there.


The red spirit said:


> That would be a major oversight. What's the point of 2P/4E chip? An Atom W? It would be horrible in games and hell to schedule properly, because P cores are real cores and E ones are just padding. I personally don't see i9 12900K as a real 16 core chips, it's just 8 core chips with some Atoms to deal with background tasks, while running games.


As was said before, the E clusters are 4 or 0 cores, nothing in between. They have a single stop on the ring bus and no internal power gating. And IMO, 2P4E would be _fantastic_ for any low power application, whether that's a general purpose desktop or a laptop. Remember, those E cores can't hang with the P cores, but they're not your grandpa's Atom cores. Anandtech confirmed Intel's claims of them matching Skylake at the same clocks (they're _slightly_ slower at 3.9GHz than the 4.2GHz i7-6700K).


The red spirit said:


> Chips like 5950X and 12900K are essentially pointless, as those looking for power, go with HEDT and consumers just want something sane and what works and what is priced reasonably. The current "fuck all" budget chip is TR 3970X (3990X is bit weak in single core). Things like i9 or Ryzen 9 on mainstream platform are just products made for poor people to feel rich (they aren't exactly poor, but I feel snarky).


That's nonsense. I would make the opposite claim: that chips like the 5950X and 12900K have effectively killed HEDT. I mean, how much attention does Intel give to their HEDT lineup these days? They're still on the X299 chipset (four generations old!), and on 14nm Skylake still. Servers are on Ice Lake cores, but there's no indication of that coming to HEDT any time soon. Threadripper was also a bit of a flash in the pan after MSDT Ryzen went to 16 cores - there are just not enough workstation applications performed in high enough volumes that a) scale to >16+16 threads or b) need tons of memory bandwidth to make TR a viable option - especially when MSDT has a massive clock speed _and_ IPC advantage. A 5950X will outperform a 3970X or 3990X in the vast majority of applications (as will a 5800X, tbh), and while there are absolutely applications that can make great use of TR, they are few and highly specialized.


efikkan said:


> JEDEC speeds are the speeds most of you should use, this is the speed the memory controller is designed to operate at, and running it overclocked over time will result in data loss and eventually stability issues.
> 
> Overclocking memory is still overclocking, and should never be the default recommendation for DIY builders. It should be reserved for those wanting to take the risk.
> 
> Using overclocked memory in benchmarking is also misleading, as each overclock will be different, and people usually have to decrease their overclock after a while.


I disagree here. JEDEC specs are designed for servers and for low cost, and are woefully low performance. XMP and DOCP are manufacturer-supported "OC" modes that, while technically OCing, are as out of the box as you can expect for DIY. I would very much agree that hand-tuned memory has no place in a CPU benchmark, but XMP? Perfectly fine.


efikkan said:


> To both of you;
> The important question is under what's the real power consumption under which circumstances.
> Does it matter if the CPU gets a little hot under an unrealistic workload? (And putting a CPU under such a load 24-7 is going to wear it out quickly anyways)
> I'm pretty sure Noctua NH-U14S is more than sufficient for 99.9% of customers buying this CPU, and if anything an AiO is usually not going to improve that much over NH-U14S in a case, it's much better to upgrade the case fans and calibrate the fan curves. Do that, and you'll get a system that's very quiet under most real workloads, yet still can handle the extreme peaks.
> AiO coolers are usually not a good choice anyways, far too much noise for little gains. If you need cooling for extreme overclocking, go custom loop.


The TPU thermal test is in Blender. Is that an unrealistic workload? No. Of course, most people running Blender aren't constantly rendering - but they might have the PC doing so overnight, for example. Would you want it bumping up against tJmax the entire night? I wouldn't.


----------



## efikkan (Nov 6, 2021)

RandallFlagg said:


> No enthusiast uses that, and these are enthusiast sites.  I never said it was an invalid test, just that it is an oddball scenario nobody fits into.


Says who?
It may seem that way because most reviews are oriented towards overclocking, and teenagers tend to scream the loudest in the forums.

Most DIY builders (and even gamers) are 30+, have families, jobs etc. They want computers that work reliably, don't mess up their files, and often want capable machines to do some work, some gaming etc.

A baseline benchmark should always be stock, otherwise there is no way to make a fair comparison. How far should you otherwise push the OC? If one reviewer gets a good sample and another gets a bad one, it may end up changing the conclusion. It should be stock vs. stock or OC vs. OC, not a light OC vs. stock.

If *you* want to OC, then by all means OC and enjoy!
But when people push OC on "normal" PC buyers just looking for a good deal, it annoys me. Just look at all the first-time builders: what is their number one problem? It's memory. If they had just gotten memory at the JEDEC speeds for their CPU, they would have saved a lot of money, gotten a stable PC and sacrificed only a negligible performance difference. In most real-world scenarios it's less than 5%, and for value buyers it's much smarter to save that money and buy a higher-tier GPU or CPU. Running OC memory with tight timings is relevant for those doing benchmarking as a hobby, but it has little real-world value, especially considering stability issues, file corruption and loss of warranty (CPU) are the trade-offs for a minor performance gain.


----------



## Valantar (Nov 6, 2021)

efikkan said:


> Says who?
> Well it may seem that way due to most reviews being oriented towards overclocking and many teenagers tend to scream the loudest in the forums.
> 
> Most DIY builders (and even gamers) are 30+, have families, jobs etc. They want computers that works reliably, don't mess up their files etc., and often want capable machines to do some work, some gaming etc.
> ...


You're not wrong, but this is precisely why XMP/DOCP exists. Boot the PC, get into the BIOS (which you need to do anyway to see that everything registers properly), enable the profile, save and exit. Done. Unless you've been dumb and bought a kit with some stupidly high XMP/DOCP profile (idk, 4400c16 or something), it should work on literally every CPU out there, as none of them have IMCs so bad that they only support JEDEC speeds. XMP is an excellent middle ground.


----------



## RandallFlagg (Nov 6, 2021)

efikkan said:


> Says who?
> Well it may seem that way due to most reviews being oriented towards overclocking and many teenagers tend to scream the loudest in the forums.
> 
> Most DIY builders (and even gamers) are 30+, have families, jobs etc. They want computers that works reliably, don't mess up their files etc., and often want capable machines to do some work, some gaming etc.
> ...



Most people who post here put their system specs up.  They don't use JEDEC RAM settings.  I'm sure there's one somewhere, I just have never seen it.

I'm not saying that there isn't a place for it, have said many times that if you want to see what a system built well inside tolerances works like, you check out CNET and PCWorld and you buy a Dell Inspiron or some such.  I've also said that isn't a bad way to go, many here disagree, but at least when you buy the system you get to see how it all works together (if you research it) vs piecemealing it together like DIY typically do.

But AT isn't speaking to that crowd.  I don't know who they are speaking to because even OEMs don't do things like high end $600 motherboards on slow RAM, Alienware for example has all switched to XMP mode RAM as has HP Omen.  It looks to me like AT wanted to talk to the HEDT crowd, but got distracted by normal desktop hardware?

And just for the record, I get asked once or twice a year from someone what to get.  I almost never recommend DIY or any kind of prebuilt enthusiast rig.   If they are not gamers I usually find a good deal on a reasonable OEM system and recommend that, and I also usually recommend a refurb corporate laptop with a decent SSD if price is an issue - because those things are built like tanks.  Again though, that is not what these sites are about.   You're talking about people who just want it to work and don't know or need to know squat, they should be looking at CNET and PCWorld not picking components out - again that is not a slam, it just is common sense to me.


----------



## Ravenas (Nov 6, 2021)

efikkan said:


> To both of you;
> The important question is what the real power consumption is, and under which circumstances.
> Does it matter if the CPU gets a little hot under an unrealistic workload? (And putting a CPU under such a load 24-7 is going to wear it out quickly anyways)
> I'm pretty sure Noctua NH-U14S is more than sufficient for 99.9% of customers buying this CPU, and if anything an AiO is usually not going to improve that much over NH-U14S in a case, it's much better to upgrade the case fans and calibrate the fan curves. Do that, and you'll get a system that's very quiet under most real workloads, yet still can handle the extreme peaks.
> AiO coolers are usually not a good choice anyways, far too much noise for little gains. If you need cooling for extreme overclocking, go custom loop.



You took a sentence out of my post and lost the context of the whole post. The power consumption is worse than my 5950X's. The multithreaded capability is worse than my 5950X's. I would see only slight gains in single-threaded performance.

The context of my post, which you lost: it's not worth switching platforms.


----------



## The red spirit (Nov 6, 2021)

Valantar said:


> With a 30%-ish IPC deficit? Yeah, sorry, I don't think so. I mean, unless you are just outright ignoring reality and saying "if the 10900K had infinite clock speed it would be the fastest CPU ever", there are hard limits to how high those CPUs can clock regardless of power under any type of semi-conventional cooling, and I would think a stock 12900K beats a well overclocked golden sample 10900K in all but the most poorly scheduled MT apps, and every single ST app out there.


Sure mate, but is that all? A new chip just beating a tweaked two-year-old chip? And don't forget that the 10900K is actually coolable and less dense, so it's easier to extract higher clocks from it. 5.4 GHz is doable on a 10900K, and at that point it nearly closes the gap.




Valantar said:


> As was said before, the E clusters are 4 or 0 cores, nothing in between. They have a single stop on the ring bus and no internal power gating. And IMO, 2P4E would be _fantastic_ for any low power application, whether that's a general purpose desktop or a laptop. Remember, those E cores can't hang with the P cores, but they're not your grandpa's Atom cores. Anandtech confirmed Intel's claims of them matching Skylake at the same clocks (they're _slightly_ slower at 3.9GHz than the 4.2GHz i7-6700K).


2P/4E still sounds rather lethargic. I'll believe that config is worthy when I see it. That also doesn't change anything about a 2E config being impossible right now; that's an oversight for sure.




Valantar said:


> That's nonsense. I would make the opposite claim: that chips like the 5950X and 12900K have effectively killed HEDT. I mean, how much attention does Intel give to their HEDT lineup these days? They're still on the X299 chipset (four generations old!), and on 14nm Skylake still.


Not much, but you can't shit on them for staying with Skylake. It's not like Rocket Lake was any good, and Comet Lake is still Skylake, so that stuff isn't actually that old. And you get many benefits of the HEDT platform.



Valantar said:


> Threadripper was also a bit of a flash in the pan after MSDT Ryzen went to 16 cores - there are just not enough workstation applications performed in high enough volumes that a) scale to >16+16 threads or b) need tons of memory bandwidth to make TR a viable option - especially when MSDT has a massive clock speed _and_ IPC advantage. A 5950X will outperform a 3970X or 3990X in the vast majority of applications (as will a 5800X, tbh), and while there are absolutely applications that can make great use of TR, they are few and highly specialized.


Well, 3970X would be my pick if money is no object. 32 cores are way cooler than 16. 3970X isn't really as antiquated as you say. And to be honest, if money is no object, I would rather find Quad FX platform with 2 dual core Athlon 64 FX chips, server motherboard, 16GB DDR2. But those things are really rare and certainly not so quick today. Quad FX machine might not even beat Athlon X4 860K. The crazy thing is that old oddware like Athlon 64 FX-74 goes for 250-300 USD still. Used AMD Socket F dual socket boards are super rare and when they are listed, they cost a lot. Same deal with coolers, they don't exist and I don't think that mainstream ones fit. At least ECC DDR2 is quite common. It's 2021 and I still want to know how good AMD K8 was against K10 or BD. 

Back to topic, I also mentioned phase change cooler with 3970X. It would be fun to tweak 32C/64T monster. It's somewhat lower clocked Ryzen, so it does have some potential, if you can tame that heat output. 5950X isn't as tweakable, it's already nearly maxed out.

Edit:
There's also the WRX80 platform, which is better than the 3970X's. The 3975WX looks pretty cool too, but at stock it's slower than the 3970X.


----------



## Valantar (Nov 6, 2021)

The red spirit said:


> Sure mate, but is that all? A new chip just beating a tweaked two-year-old chip? And don't forget that the 10900K is actually coolable and less dense, so it's easier to extract higher clocks from it. 5.4 GHz is doable on a 10900K, and at that point it nearly closes the gap.


I never said the 12900K was particularly impressive, I just said that your claim of a 10900K being faster was wrong.


The red spirit said:


> 2P/4E still sounds rather lethargic. I will believe that this config is worthy when I see it. That also doesn't change anything about 2E config being impossible now, that's an oversight for sure.


2E doesn't really make sense - what would be the value of a scant two E cores, if their major point is low-power MT performance and smooth handling of background tasks? Two cores isn't enough for either. And the E cores are small enough that 4 is already a small enough die area to fit on any die they want.


The red spirit said:


> Not much, but you can't shit on them for staying with Skylake. It's not like Rocket Lake was any good and Comet Lake is still Skylake, so actually that stuff is not that old. And you get many benefits of HEDT platform.


Servers have been on Ice Lake for half a year now, with large-scale customers (Google, Facebook and their ilk) having had access to it for likely a year before that. There's nothing stopping Intel from releasing those chips for an X699 platform. But clearly they're not interested, hence the lack of updates since late 2019 for that lineup (which was even at that point just a warmed-over refresh of the 9th gen products from the previous year).


The red spirit said:


> Well, 3970X would be my pick if money is no object. 32 cores are way cooler than 16.


Well, then you either have highly specialized workloads or just don't care about real-world performance. Most people don't buy CPUs based on what core count is "cooler", but through balancing what they can afford and what performs well.


The red spirit said:


> 3970X isn't really as antiquated as you say.


I never said it was antiquated, I said it has lower IPC and boosts lower than Zen3 MSDT chips, leaving its only advantage at workloads that either need tons of bandwidth or more than 16+16 threads, which are _very_ rare (essentially nonexistent) for most end users.


The red spirit said:


> And to be honest, if money is no object, I would rather find Quad FX platform with 2 dual core Athlon 64 FX chips, server motherboard, 16GB DDR2. But those things are really rare and certainly not so quick today. Quad FX machine might not even beat Athlon X4 860K. The crazy thing is that old oddware like Athlon 64 FX-74 goes for 250-300 USD still. Used AMD Socket F dual socket boards are super rare and when they are listed, they cost a lot. Same deal with coolers, they don't exist and I don't think that mainstream ones fit. At least ECC DDR2 is quite common. It's 2021 and I still want to know how good AMD K8 was against K10 or BD.


... so you aren't actually looking for a high performance PC at all then? I mean, that's perfectly fine. Everyone has different interests, and I also think older PCs can be really cool (I just have neither the space, time, nor money to collect and use them). But using that perspective to comment on a new CPU launch? That isn't going to produce good results.


The red spirit said:


> Back to topic, I also mentioned phase change cooler with 3970X. It would be fun to tweak 32C/64T monster. It's somewhat lower clocked Ryzen, so it does have some potential, if you can tame that heat output. 5950X isn't as tweakable, it's already nearly maxed out.


Again, now you're shifting the frame of reference to something completely different from what a general CPU review is about. And again, if that's your interest, that's perfectly valid, but it isn't valid as general commentary on CPUs, as it completely shifts the basis for comparison away from the concerns of the vast majority of people reading this review or discussing the results.


----------



## The red spirit (Nov 6, 2021)

Valantar said:


> 2E doesn't really make sense - what would be the value of a scant two E cores, if their major point is low-power MT performance and smooth handling of background tasks? Two cores isn't enough for either. And the E cores are small enough that 4 is already a small enough die area to fit on any die they want.


How are 2 cores not enough for background tasks? Even 1 core is totally fine. I'm not talking about having AutoCAD or BOINC open in the background, just dealing with Windows overhead, and for that even a Pentium 4 is enough. As long as the main cores aren't getting distracted from gaming, the E cores have a purpose. As an MT workload boost, those E cores shouldn't be expected to do much of value, when you realize that die space is spent on them instead of on P cores.




Valantar said:


> Servers have been on Ice Lake for half a year now, with large-scale customers (Google, Facebook and their ilk) having access to it for likely a year before that. There's nothing stopping Intel from releasing those chips for an X699 platform. But clearly they're not interested, hence the lack of updates since late 2019 for that lineup (which was even at that point just a warmed-over refresh of the 9th gen products from the previous year.


_Intel’s messaging with its new Ice Lake Xeon Scalable (ICX or ICL-SP) steers away from simple single core or multicore performance, and instead is that the unique feature set, such as AVX-512, DLBoost, cryptography acceleration, and security, along with appropriate software optimizations or paired with specialist Intel family products, such as Optane DC Persistent Memory, Agilex FPGAs/SmartNICs, or 800-series Ethernet, offer better performance and better metrics for those actually buying the systems. This angle, Intel believes, puts it in a better position than its competitors that only offer a limited subset of these features, or lack the infrastructure to unite these products under a single easy-to-use brand._
I'm not really sure that matters to any non-enterprise consumer even a tiny bit. All these features sound like they matter in a closed, temperature- and dust-controlled server room, and offer nothing to a consumer with an excessive budget.




Valantar said:


> Well, then you either have highly specialized workloads or just don't care about real-world performance. Most people don't buy CPUs based on what core count is "cooler", but through balancing what they can afford and what performs well.


I clearly said that this is what would be interesting to people who have very excessive budgets. 3970X is more interesting as a toy than 5950X. 




Valantar said:


> I never said it was antiquated, I said it has lower IPC and boosts lower than Zen3 MSDT chips, leaving its only advantage at workloads that either need tons of bandwidth or more than 16+16 threads, which are _very_ rare (essentially nonexistent) for most end users.


And that's where the luxury of HEDT lies: it offers good performance for everything and excellent performance in exactly what you said, those rare cases when you are memory-bandwidth constrained or need huge core counts.




Valantar said:


> ... so you aren't actually looking for a high performance PC at all then? I mean, that's perfectly fine. Everyone has different interests, and I also think older PCs can be really cool (I just neither have the space, time or money to collect and use them). But using that perspective to comment on a new CPU launch? That isn't going to produce good results.


I'm not seriously looking for one and wouldn't have any use for it. If being into computers were just a hobby, then performance would matter very little to me; I would rather look into unique computers or something random like Phenom IIs. Performance matters the most when it's not plentiful and when you can't upgrade frequently. If not for some rather modest gaming needs (well, wants, to be exact), I would be fine with a Celeron. Even in gaming, what I have now (i5 10400F) is simply excessive; I could be perfectly served by an i3 10100F. And the latest or even modern games make up maybe 30% of my library. I often play something old like UT2004, Victoria 2, or Far Cry, and those games don't need a modern CPU at all; in fact, a modern OS and hardware may even cause compatibility issues.

I used to have an era-correct Athlon 64, socket 754 rig for a while, but frequent part failures made it too expensive and too annoying to keep running. Besides it, I have tried various computers already, and at one point had three desktops working and ready to use in a single room. It was nice for a while, until I realized that I only have one ass and one head and can only meaningfully use one of them. Those weren't expensive machines either, but I still learned my lesson. Beyond that, the maintenance effort increases, and at some point one or two of them will mostly sit abandoned doing nothing. Sure, you can use them for BOINC or mining, but their utility is still very limited. I certainly was more impressionable and was into acquiring things that looked interesting, but the sad reality is that beyond the initial interest, you still end up with only one daily-use machine. I also tested this when I had no responsibilities and 24 hours all to myself for literal months; there's really not much benefit in doing that long term. If you work or study, then you really can't properly utilize more than two machines (a main desktop and a backup machine, or a daily machine and a cool project machine, or a daily desktop and a laptop).

Despite all that, I would like to test out a Quad FX machine. By that I mean that using it for three months would be nice, and later it might collect dust. The i5 10400F machine serves all my needs just fine, while offering some nice extras (two extra cores that I probably don't really need, but which are nice for BOINC, and really low power usage), and getting a Quad FX machine would only mean a ton of functional overlap. Perhaps all this made me mostly interested in the longest-lasting configs, which don't need to be upgraded or replaced for many years, and that means I will keep using my current machine for a long time, until it really stops doing what I need and want (well, the latter to a limited extent, of course).

If you look at what many people own and what their interests are, most would say they want a reliable, no-bullshit, long-lasting system. I think those are important criteria, and I judge many part releases by their long-term value. The i9 is weak on my scale. Sure, it's fast now, but that power consumption and heat output are really unpleasant. It will be fast for a while, but it will be the fastest for only a few months, and that's the main reason to get one. Over time you will feel its negative aspects far more than the initial positive ones, and therefore I think it's a poor chip. It is also obscenely expensive to maintain: you need a just-released, overpriced board to own one, and a likely unreliable cooling solution, aka water cooling. On top of that, it's a transitional release between DDR4 and DDR5, meaning it doesn't take full advantage of the new technology. It's also the first attempt at P and E cores, and I don't think it has a great layout of those.

All in all, it's an unpleasant chip to own, with lots of potential to look a lot worse in the future (once the P and E core layout is figured out better and DDR5 is leveraged better, or so I expect); it's not priced right, and it's expensive to buy and maintain. I don't think it will last as well as the i5 2500K or the 3770K/4770K. Those chips lasted for nearly a decade and started to feel antiquated only relatively recently, while this i9 12900K already feels of somewhat limited potential. Therefore, I don't think it's really interesting or good. For long-term ownership with low TCO and minimal negative factors, this i9 fails hard, and performance only matters so much in that equation. I think the i5 12400 or i7 12700 will fare a lot better than the K parts and be far more pleasant to use long term. This way of evaluating CPUs (and, for that matter, all hardware) is certainly not common here at TPU, but I think it's valuable, and therefore I won't judge chips by their performance alone. Performance matters in long-term usage, but only so much, and many other things matter just as much.




Valantar said:


> Again, now you're shifting the frame of reference to something completely different from what a general CPU review is about. And again, if that's your interest, that's perfectly valid, but it isn't valid as general commentary on CPUs, as it completely shifts the basis for comparison away from the concerns of the vast majority of people reading this review or discussing the results.


Maybe, but you have to admit that the 3970X's overclocked performance would be great; a 5950X would never beat it in multithreaded tasks. My point is that if you are looking for a luxury CPU, then buy an actual luxury CPU, not just some hyped-up mainstream stuff. I'm not shifting the frame of reference, and some slight benefit of the 5950X in single-threaded workloads won't make it the overall better chip while it gets completely obliterated in multithreaded loads. That the 5950X might be more useful to a user is a good argument to make, but does it feel like a luxury, truly "fuck all" budget CPU? I don't think so, and I don't think people looking for a high-end workhorse CPU would actually care about the 5950X either, since the Threadripper platform was made exactly for that, and it has that exclusive feel, just like Xeons.

You know, this is similar to the situation with certain cars. The Corvette is a well-known performance car: it's fast, somewhat affordable, and it looks good. Some people don't know that Vettes are actually faster, and may even feel nicer to drive, than some Ferraris or Lambos, so the typical Ferrari or Lambo buyer doesn't even consider a Vette, despite it most likely being the objectively better car while also being a lot cheaper. I think it's a similar situation with Threadripper versus the 5950X or 12900K. Threadripper feels more exclusive and has features that make it a distinctly well-performing HEDT chip, which a mainstream one doesn't have. Despite a mainstream chip like the 5950X being more useful and better performing overall, it's just not as alluring as Threadripper. That's how I think about this.

But full disclosure: if I'm being 100% honest, I would most likely just leave my computer alone and enjoy it for what it is, rather than what it could be. I would only upgrade to a 2TB SSD, as nothing AAA, except one title, could currently fit onto it, and I'm already using NTFS compression.


----------



## AusWolf (Nov 7, 2021)

Valantar said:


> You're not wrong, but this is precisely why XMP/DOCP exists. Boot the PC, get into the BIOS (which you need to do anyway to see that everything registers properly), enable the profile, save and exit. Done. Unless you've been dumb and bought a kit with some stupidly high XMP/DOCP profile (idk, 4400c16 or something), it should work on literally every CPU out there, as none of them have IMCs so bad that they only support JEDEC speeds. XMP is an excellent middle ground.


My golden rule is to buy the RAM kit with the highest officially supported speed of the platform. Currently, I'm on B560 with an 11700. The highest officially supported RAM speed on this is 3200 MHz, so I'm using a 3200 MHz kit. If I were reviewing CPUs, I would do it in exactly the same way.



The red spirit said:


> How are 2 cores not enough for background tasks? Even 1 core is totally fine. I'm not talking about having AutoCAD or BOINC open in the background, just dealing with Windows overhead, and for that even a Pentium 4 is enough. As long as the main cores aren't getting distracted from gaming, the E cores have a purpose. As an MT workload boost, those E cores shouldn't be expected to do much of value, when you realize that die space is spent on them instead of on P cores.


A lot depends on what kind of cores we're talking about. I'm not (yet) familiar with Alder Lake's E cores, so I can't agree or disagree. All I know is, Windows update doesn't even max out one thread on my 11700, while it pegs all 4 cores to 100% usage on the Atom x5 in my Compute Stick. I literally can't use it for anything while it's updating.



The red spirit said:


> And that's where the luxury of HEDT lies, they offer good performance for everything and excellent performance at what you said here, those rare cases, when you are memory bandwidth constrained or need huge core counts.


I disagree. I think the "luxury of HEDT" mainly lies in the extra PCI-e lanes, storage and networking capabilities. If you only need raw CPU power, a mainstream platform with a 5950X is a lot cheaper and perfectly adequate.



The red spirit said:


> Jokes or not, but I have actually seen a Celeron (Pentium 4 era, 3 GHz model) lag horrendously in Excel. Once, when I needed some graphs, it straight up froze for like half a minute, and once it unfroze, it rendered them at less than 1 fps. It was some IT-class hell. But more realistically, I have heard that some companies keep databases in Excel: thousands of entries, all labeled, with tons of formulas to calculate many things. That kind of stuff might actually bring an i7 to its knees.


That was 15 years ago. Nowadays, even a Celeron can run basically everything you need in an office - unless you're using some horribly slow implementation of emulated Windows on Citrix, like we do at my job. But then, it's the software's fault and no amount of raw horsepower can fix it.


----------



## The red spirit (Nov 7, 2021)

AusWolf said:


> My golden rule is to buy the RAM kit with the highest officially supported speed of the platform. Currently, I'm on B560 with an 11700. The highest officially supported RAM speed on this is 3200 MHz, so I'm using a 3200 MHz kit. If I were reviewing CPUs, I would do it in exactly the same way.


You are actually running out of spec. What's supported is JEDEC 3200: RAM at 3200 MT/s with timings of CL20/CL22/CL24; technically, CL16 is out of spec. When a CPU manufacturer lists RAM speed compatibility, it's at JEDEC spec, meaning JEDEC timings too. I'm not sure whether that actually impacts stability or anything, but the LGA1200 platform is expected to have a CAS latency of 12.5-15 ns. If you had a B460 board, the XMP profile might not even work, as XMP is overclocking and there's no overclocking at all on locked Comet Lake chipsets, to an absurd degree.
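To put numbers on that 12.5-15 ns window, here's a quick back-of-the-envelope sketch (my own conversion, not anything from Intel's or JEDEC's documents): true CAS latency in nanoseconds is the CL cycle count times the clock period, and since DDR transfers twice per clock, the period in ns is 2000 divided by the transfer rate in MT/s.

```python
def cas_latency_ns(cl: int, transfer_rate_mts: int) -> float:
    """True CAS latency in ns: CL cycles times the clock period.

    DDR transfers twice per clock, so the clock period in ns
    is 2000 / transfer rate (MT/s).
    """
    return cl * 2000 / transfer_rate_mts

# The JEDEC DDR4-3200 speed bins land inside the 12.5-15 ns window:
for cl in (20, 22, 24):
    print(f"DDR4-3200 CL{cl}: {cas_latency_ns(cl, 3200):.2f} ns")
# DDR4-3200 CL20: 12.50 ns
# DDR4-3200 CL22: 13.75 ns
# DDR4-3200 CL24: 15.00 ns

# A typical CL16 XMP kit sits below that window, i.e. out of spec:
print(f"DDR4-3200 CL16: {cas_latency_ns(16, 3200):.2f} ns")  # 10.00 ns
```

So a 3200 CL16 kit is genuinely faster than anything JEDEC defines at that speed, which is exactly why it needs XMP.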




AusWolf said:


> A lot depends on what kind of cores we're talking about. I'm not (yet) familiar with Alder Lake's E cores, so I can't agree or disagree. All I know is, Windows update doesn't even max out one thread on my 11700, while it pegs all 4 cores to 100% usage on the Atom x5 in my Compute Stick. I literally can't use it for anything while it's updating.


We all know that E cores have Skylake IPC and sort of Skylake clocks, so they are close to your 11700's performance. 2E is plenty for background gunk.




AusWolf said:


> I disagree. I think the "luxury of HEDT" mainly lies in the extra PCI-e lanes, storage and networking capabilities. If you only need raw CPU power, a mainstream platform with a 5950X is a lot cheaper and perfectly adequate.


Well that too and also memory bandwidth as well as insane amount of supported memory. You can put 256 GB of RAM in TRX40 boards.




AusWolf said:


> That was 15 years ago. Nowadays, even a Celeron can run basically everything you need in an office - unless you're using some horribly slow implementation of emulated Windows on Citrix, like we do at my job. But then, it's the software's fault and no amount of raw horsepower can fix it.


I disagree. Many low-power chips today fail to match Athlon 64 performance, which was better than the Pentium 4's and, consequently, the Celeron's too. So basically anything Atom or Celeron N (Silver) can actually lag in Excel, though that particular lag mostly had to do with broken GPU hardware acceleration on that particular computer.


----------



## AusWolf (Nov 7, 2021)

The red spirit said:


> You are actually running out of spec. What's supported is JEDEC 3200: RAM at 3200 MT/s with timings of CL20/CL22/CL24; technically, CL16 is out of spec. When a CPU manufacturer lists RAM speed compatibility, it's at JEDEC spec, meaning JEDEC timings too. I'm not sure whether that actually impacts stability or anything, but the LGA1200 platform is expected to have a CAS latency of 12.5-15 ns. If you had a B460 board, the XMP profile might not even work, as XMP is overclocking and there's no overclocking at all on locked Comet Lake chipsets, to an absurd degree.


Is there even a JEDEC spec for 3200 MHz?

Edit: Where does it say here that memory support up to 3200 MHz means JEDEC standard 3200 MHz? As far as I know, neither Intel, nor AMD specify latency or voltage in their XMP/DOCP recommendations.

Edit 2: Also, show me a RAM kit that runs at 3200 MHz by JEDEC default, without XMP or DOCP. 



The red spirit said:


> I disagree. Many low power chips of today fail to match AMD Athlon 64 performance, which was better than Pentium 4's and consequently, Celeron's too. So basically anything Atom or Celeron N (Silver) can actually lag with Excel, but that lag mostly had to do with broken GPU hardware acceleration of that particular computer.





UserBenchmark: AMD Athlon 64 3000+ vs Intel Atom x5-Z8330

You can't really get any lower power than this (the fact that I've owned both of these CPUs makes me feel nostalgic).

The Intel Compute Stick (Cherry Trail) Review (www.anandtech.com)


----------



## Valantar (Nov 7, 2021)

AusWolf said:


> Is there even a JEDEC spec for 3200 MHz?
> 
> Edit: Where does it say here that memory support up to 3200 MHz means JEDEC standard 3200 MHz? As far as I know, neither Intel, nor AMD specify latency or voltage in their XMP/DOCP recommendations.
> 
> ...


Yep, JEDEC announced three 3200 specs a while after the initial DDR4 launch. The fastest JEDEC 3200 spec is 20-20-20. That (or one of the slower standards) is what is used for 3200-equipped laptops.

There are essentially no consumer-facing JEDEC-spec 3200 kits available, though, simply because this doesn't really matter to consumers, who buy whatever (and enthusiasts want faster stuff and wouldn't touch JEDEC with a ten-foot pole). This also means these DIMMs aren't generally sold at retail, but they can be found through other channels. All ECC DDR4-3200 also runs at JEDEC speeds, as do most if not all DDR4-3200 SODIMMs.



The red spirit said:


> How are 2 cores not enough for background tasks? Even 1 core is totally fine. I'm not talking about having AutoCAD or BOINC open in the background, just dealing with Windows overhead, and for that even a Pentium 4 is enough. As long as the main cores aren't getting distracted from gaming, the E cores have a purpose. As an MT workload boost, those E cores shouldn't be expected to do much of value, when you realize that die space is spent on them instead of on P cores.


Sorry, but no. Try to consider how a PC operates in the real world. Say you have a game that needs 4 fast threads, only one of which consumes a full core, but each of which can hold back performance if the core needs to switch between it and another task. You then have 4 fast cores and 1 background core, and Windows Update, Defender (or other AV software), or some other software update process (Adobe CS, or an automated Steam/EGS/Origin download) kicks in. That E core is now fully occupied. What happens to other, minor system tasks? One of three scenarios: the scheduler kicks the update/download task to a P core, costing you performance; the scheduler keeps all "minor" tasks on the E core, choking it and potentially causing issues through delayed system processes; or the scheduler starts putting tiny system processes on a P core, potentially causing stutters. In every case, this harms performance. So, 1 E core is insufficient. Period. Two is the bare minimum, and even with a relatively low number of background processes it's not unlikely for the same scenario to play out with two.

Also, the E cores are overall quite fast. So, in that same scenario, a 2P+8E setup is likely to perform _better_ than a 4P+1E (or 2E) setup, as the likelihood of the game needing more than 2 "faster than a full E core" threads is very low, and you are left with more cores to handle the slightly slower game threads plus background tasks.


The red spirit said:


> _Intel’s messaging with its new Ice Lake Xeon Scalable (ICX or ICL-SP) steers away from simple single core or multicore performance, and instead is that the unique feature set, such as AVX-512, DLBoost, cryptography acceleration, and security, along with appropriate software optimizations or paired with specialist Intel family products, such as Optane DC Persistent Memory, Agilex FPGAs/SmartNICs, or 800-series Ethernet, offer better performance and better metrics for those actually buying the systems. This angle, Intel believes, puts it in a better position than its competitors that only offer a limited subset of these features, or lack the infrastructure to unite these products under a single easy-to-use brand._
> I'm not really sure if that matters to any non enterprise consumer even a tiny bit. All these features sound like they matter in closed temperature and dust controlled server room and offer nothing for consumer with excessive budget.


... so you agree that HEDT is becoming quite useless then? That paragraph essentially says as much. Servers and high-end workstations (HEDT's core market!) are moving to specialized workflows with great benefits from specialized acceleration. MSDT packs enough cores and performance to handle pretty much anything else. The classic HEDT market is left as a tiny niche, having lost its "if you need more than 4 cores" selling point, with PCIe 4.0 eroding even its I/O advantage. There are still uses for it, but they are rapidly shrinking.


The red spirit said:


> I clearly said that this is what would be interesting to people who have very excessive budgets. 3970X is more interesting as a toy than 5950X.


No you didn't. What you said was:


The red spirit said:


> Chips like 5950X and 12900K are essentially pointless, as those looking for power, go with HEDT and consumers just want something sane and what works and what is priced reasonably. The current "fuck all" budget chip is TR 3970X (3990X is bit weak in single core). Things like i9 or Ryzen 9 on mainstream platform are just products made for poor people to feel rich (they aren't exactly poor, but I feel snarky). Those platforms are always gimped in terms of PCIe lanes and other features and that's why TR4 platform is ultimate workhorse and "fuck all" budget buyers platform. And if that's too slow, then you slap phase change on TR, OC as far as it goes and enjoy it. Far better, than octa core with some eco fluff.


Your argumentation here is squarely centered around the previous _practical_ benefits of HEDT platforms - multi-core performance, RAM and I/O. Nothing in this indicates that you were speaking of people buying these as "toys" - quite the opposite. "Ultimate workhorse" is hardly equivalent to "expensive toy", even if the same object can indeed qualify for both.

You're not wrong that there has historically been a subset of the HEDT market that has bought them because they have money to burn and want the performance because they can get it, but that's a small portion of the overall HEDT market, and one that frankly is well served by a $750 16-core AM4 CPU too. Either way, this market isn't big enough for AMD or Intel to spend millions developing products for it - their focus is on high end workstations for professional applications.


The red spirit said:


> And that's where the luxury of HEDT lies, they offer good performance for everything and excellent performance at what you said here, those rare cases, when you are memory bandwidth constrained or need huge core counts.


That's not really true. While 3rd-gen TR does deliver decent ST performance, it's still miles behind MSDT Ryzen. I mean, look at Anandtech's benchmarks, which cover everything from gaming to tons of different workstation tasks as well as industry standard benchmarks like SPEC. The only scenarios where the 3970X wins out are either highly memory bound or among the few tasks that scale well beyond 16 cores and 32 threads. Sure, these tasks exist, but they are quite rare, and not typically found among non-workstation users (or datacenters).

Of course, that the 3970X is significantly behind the 5950X in ST and low threaded tasks doesn't mean that it's _terrible_ for these things. It's generally faster in ST tasks than a 6700K, for example, though not by much. But I sincerely doubt the people you're talking about - the ones with so much money they really don't care about spending it - would find that acceptable. I would expect them to buy (at least) two PCs instead.


The red spirit said:


> I'm not seriously looking for one and wouldn't have any use for it. If being into computers were just a hobby, then performance would matter very little to me. I would rather look into unique computers or something random like Phenom IIs. Performance matters most when it isn't plentiful and when you can't upgrade frequently. If not for some rather modest gaming needs (well, wants, to be exact), I would be fine with a Celeron. But even for gaming, what I have now (an i5 10400F) is simply excessive. I could be perfectly served by an i3 10100F. And modern games make up maybe 30% of my library. I often play something old like UT2004, Victoria 2 or Far Cry, and those games don't need a modern CPU at all; in fact, a modern OS and hardware may even cause compatibility issues. I used to have an era-correct Athlon 64 socket 754 rig for a while, but frequent part failures made it too expensive and too annoying to keep running. Besides it, I have tried various computers already, and I used to have 3 desktops working and ready to use in a single room. It was nice for a while, until I realized that I only have one ass and one head and can only meaningfully use one of them. Those weren't expensive machines either, but still, I learned my lesson. Beyond that, the maintenance effort also increases, and at some point one or two of them will mostly sit abandoned doing nothing. Sure, you can use them for BOINC or mining, but their utility is still very limited. I certainly was more impressionable and was into acquiring things that looked interesting, but the sad reality is that beyond the initial interest, you still end up with only one daily-use machine. I also tested this when I had no responsibilities and 24 hours all to myself for literal months. There's really not much benefit in doing that long term. If you work or study, then you really can't properly utilize more than 2 machines (main desktop and backup machine, or daily machine and cool project machine, or daily desktop and laptop).
Despite all that, I would like to test out a Quad FX machine. By that I mean that using it for 3 months would be nice, and later it might collect dust. The i5 10400F machine serves all my needs just fine, while offering some nice extras (an extra two cores that I probably don't really need, but which are nice for BOINC, and really low power usage), and getting a Quad FX machine would only mean a ton of functional overlap. Perhaps all this made me mostly interested in the longest-lasting configs that don't need to be upgraded or replaced for many years, and that means I will keep using my current machine for a long time, until it really stops doing what I need and want (well, to a limited extent, of course).
> 
> If you look at what many people own and what their interests are, most people would say that they want a reliable, no-bullshit, long-lasting system. I think those are important criteria, and I judge many part releases by their long-term value. The i9 is weak on my scale. Sure, it's fast now, but that power consumption and heat output are really unpleasant. It will be fast for a while, but it will be the fastest for only a few months, and that's the main reason to get one. Over time you will feel its negative aspects far more than the initial positive ones, therefore I think it's a poor chip. It's also obscenely expensive to maintain: you need a just-released, overpriced board to own one, and a likely unreliable cooling solution, aka water cooling. And on top of that, it's a transitional release between DDR4 and DDR5, meaning that it doesn't take full advantage of the new technology. It's also the first attempt at P and E cores, and I don't think it has a great layout of those. All in all, it's an unpleasant chip to own, with lots of potential to look a lot worse in the future (due to figuring out the P and E core layout better and leveraging DDR5 better, or so I expect), and it is not priced right and is expensive to buy and maintain. I don't think it will last as well as the i7 2600K or 3770K/4770K. Those chips lasted for nearly a decade and started to feel antiquated only relatively recently; this i9 12900K already feels of somewhat limited potential. Therefore, I don't think it's really interesting or good. In long-term ownership with low TCO and minimal negative factors, this i9 fails hard. Performance only matters so much in that equation. I think the i5 12400 or i7 12700 would fare a lot better than the K parts and will be far more pleasant to use in the long term.
This CPU (and, for that matter, all hardware) evaluation mentality is certainly not common here at TPU, but I think it is valuable, and therefore I won't judge chips by their performance only. Performance matters in long-term usage, but only so much, and many other things matter just as much.
> 
> Maybe, but you have to admit that the 3970X's overclocked performance would be great. A 5950X would never beat it in multithreaded tasks. My point is that if you are looking for a luxury CPU, then buy an actual luxury CPU, not just some hyped-up mainstream stuff. I'm not shifting the frame of reference, and some slight benefit of the 5950X in single-threaded workloads won't make it the overall better chip while it gets completely obliterated in multithreaded loads. The 5950X might be more useful to the user, that's a good argument to make, but does it feel like a luxury, truly "fuck all" budget CPU? I don't think so, and I don't think people looking for a high-end workhorse CPU would actually care about the 5950X either, since the Threadripper platform was made with that in mind and has that exclusive feel, just like Xeons. You know, this is similar to the situation with certain cars. The Corvette is a well-known performance car. It's fast, somewhat affordable, and it looks good. Some people don't know that Vettes are actually faster and may even feel nicer to drive than some Ferraris or Lambos, so the typical Ferrari or Lambo buyer doesn't even consider getting a Vette, despite it most likely being the objectively better car while also being a lot cheaper. I think it's a similar situation here with Threadripper and the 5950X or 12900K. Threadripper feels more exclusive and has some features that make it a distinctly well-performing HEDT chip, which a mainstream one doesn't have. Despite a mainstream chip like the 5950X being more useful and better performing overall, it's just not as alluring as Threadripper. This is how I think about it. But full disclosure: if I'm being 100% honest, then most likely I would just leave my computer alone and enjoy it for what it is, rather than what it could be. I would only upgrade to a 2TB SSD, as no AAA title except one could currently fit onto it, and I'm already using NTFS compression.


There were signs of this above, but man, that's a huge wall of goal post shifting. No, you weren't presenting arguments as if they only applied to you and your specific wants and interests, nor were you making specific value arguments. You were arguing about the general performance of the 12900K, for general audiences - that's what this thread is about, and for anything else you actually do need to specify the limitations of your arguments. It's a given that flagship-tier hardware is poor value - that's common knowledge for anyone with half a brain and any experience watching any market whatsoever. Once you pass the midrange, you start paying a premium for premium parts. That's how premium markets work. But this doesn't invalidate the 12900K - it just means that, like other products in this segment it doesn't make sense economically. That's par for the course. It's expected. And the same has been true for every high-end CPU ever.

Also, you're making a lot of baseless speculations here. Why would future P/E core scheduling improvements not apply to these chips? Why would future DDR5 improvements not apply here? If anything, RAM OC results show that the IMC has plenty left in the tank, so it'll perform better with faster DDR5 - the RAM seems to be the main limitation there. It's quite likely that the Thread ... Director? is sub-optimal and will be improved in future generations, but you're assuming that this is causing massive performance bottlenecks, and that software/firmware can't alleviate these. First off, I've yet to see any major bottlenecks outside of specific applications that either seem to not run on the E cores or get scheduled only to them (and many MT applications seem to scale well across all cores of both types), and if anything there are indications that software and OS issues are the cause of this, and not hardware.

You were also making arguments around absolute performance, such as an OC'd 10900K being faster, which ... well, show me some proof? If not, you're just making stuff up. Testing and reviews strongly contradict that idea. For example, in AT's SPEC2017 testing (which scales well with more cores, as some workstation tasks can), the 12900K with DDR4 outperforms the 10900K by 31%. To beat that with an OC you'd need to be running your 10900K at (depending on how high their unit boosted) somewhere between 5.8 and 6.8GHz to catch up, let alone be faster. And that isn't possible outside of exotic cooling, and certainly isn't useful for practical tasks. And at that point, why wouldn't you just get a 12900K and OC that? You seem to be looking very hard for some way to make an unequal comparison in order to validate your opinions here. That's a bad habit, and one I'd recommend trying to break.
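
For the curious, here's the back-of-the-envelope math behind that clock estimate as a quick Python sketch. The 31% figure is AT's; the boost-clock range and the linear-scaling assumption are mine, and linear scaling is optimistic, since memory-bound workloads scale sub-linearly with frequency:

```python
# Clock a 10900K would need to match a CPU that is 31% faster, assuming
# performance scales linearly with frequency (an optimistic assumption:
# memory-bound workloads scale sub-linearly).

def required_clock(observed_clock_ghz: float, perf_gap: float) -> float:
    """Clock needed to close a relative performance gap under linear scaling."""
    return observed_clock_ghz * (1.0 + perf_gap)

gap = 0.31  # 12900K (DDR4) vs 10900K in AT's SPEC2017 testing

# Effective all-core boost varies by workload and sample;
# 4.4-5.2 GHz brackets the plausible range (my assumption).
for boost in (4.4, 5.2):
    print(f"{boost:.1f} GHz boost -> needs {required_clock(boost, gap):.1f} GHz")
```

That lands at roughly 5.8 to 6.8 GHz depending on where the unit boosted, which is where the range above comes from.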

The same goes for things like saying an OC'd 3970X will outperform a 5950X in MT tasks. From your writing it seems that the 5950X is for some reason not OC'd (which is ... uh, yeah, see above). But regardless of that, you're right that the 3970X would be faster, but again - to what end, and for what (material, practical, time, money) cost? The amount of real-world workloads that scale well above 16 cores and 32 threads are quite few (heck, there are few that scale well past 8c16t). So unless what you're building is a PC meant solely for running MT workloads with near-perfect scaling (which generally means rendering, some forms of simulation, ML (though why wouldn't you use an accelerator for that?), video encoding, etc.), this doesn't make sense, as most of the time using the PC would be spent at lower threaded loads, where the "slower" CPU would be noticeably faster. If you're building a video editing rig, ST performance for responsiveness in the timeline is generally more important than MT performance for exporting video, unless your workflow is _very_ specialized. The same goes for nearly everything else that can make use of the performance as well. And nobody puts an overclocked CPU in a mission-critical render box, as that inevitably means stability issues, errors, and other problems. That's where TR-X, EPYC, Xeon-W and their likes come in - and there's a conscious tradeoff there for stability instead of absolute peak performance (as at that point you can likely just buy two PCs instead).

So, while your arguments might apply for the tiny group of users who still insist on buying HEDT as toys (instead of just buying $700+ MSDT CPUs and $1000+ motherboards, which are both plentiful today), they don't really apply to anyone outside of this group.


----------



## The red spirit (Nov 7, 2021)

AusWolf said:


> Is there even a JEDEC spec for 3200 MHz?


Yes



AusWolf said:


> Edit: Where does it say here that memory support up to 3200 MHz means JEDEC standard 3200 MHz? As far as I know, neither Intel, nor AMD specify latency or voltage in their XMP/DOCP recommendations.
> 
> Edit 2: Also, show me a RAM kit that runs at 3200 MHz by JEDEC default, without XMP or DOCP.











ELITE PLUS U-DIMM DDR4 desktop memory modules (www.teamgroupinc.com)
They don't specify it, but Intel was particularly vocal that JEDEC is the standard, that they expect their spec to be respected, and that XMP is overclocking. XMP actually voids the Intel CPU warranty. I don't think they post specs that instantly void the warranty of their products, so it has to be the JEDEC spec that they expect.




AusWolf said:


> UserBenchmark: AMD Athlon 64 3000+ vs Intel Atom x5-Z8330
> 
> 
> 
> ...


It's not multicore performance that sucks, but rather the single core performance of those super low power devices. Just like this Atom, many of them have lower single core performance than an Athlon 64 if tested in Cinebench.

BTW I still have some Athlon 64s. I have an Athlon 64 3000+, Athlon 64 3200+ 2.2GHz and Athlon 64 3400+ 2.4GHz. All are socket 754. I also have a Sempron 2800+ (s754), Sempron 3000+ (S1G1) and Turion X2 TL-60 (S1G1). If I'm not forgetting anything, that's all the K8 stuff I have. I have an Athlon X4 845, Athlon X4 870K and Athlon X4 760K too. Just to induce some nostalgia for you: in 2017, I built the ultimate 2004 Athlon system with these specs:
DFI K8T800Pro-Alf
Athlon 64 3400+ 2.4GHz (OC to 2.5GHz)
Scythe Andy Samurai Master
2x1GB DDR400 Transcend JetRAM CL3
2X WD Raptor 10k rpm 72GB drives in RAID 0
120GB WD IDE drive
80GB Samsung Spinpoint IDE drive
Asus ATi Radeon X800 Platinum Edition AGP 8X version (Asus AX X800, yep that model with anime waifu and blue LEDs)
Creative Sound Blaster Audigy 2ZS
Sony 3.5" floppy drive
TSST corp DVD drive
TP-Link PCI Wi-Fi card
Old D-Link DFM-562IS 56k modem for lolz
Creative SBS 560 5.1 speakers (era correct) 
Windows XP Pro SP3 32 bit
Fractal Design Define R4 case (certainly not era correct)
Chieftec A-90 modular 550W power supply (didn't want to refurbish an old PSU) 

Considered or partially made projects:
2X SATA SSDs in RAID 0 (didn't work out due to unknown compatibility problems)
ATi Silencer + voltmod --> +150 MHz overclock 
Athlon 64 3700+ upgrade (materialized as 3400+ upgrade from 3200+)
DFI LANParty UT nF3-250Gb + cranking Athlon 64 to 2.8-3 GHz (never found that board for sale anywhere)
Athlon 64 3200+ overclock to 2.5 GHz (at 224 MHz bus speed, the RAID 0 Windows XP install got permanently corrupted, because VIA's K8T800Pro chipset doesn't support independent clock locks; that's why I wanted the LANParty board)


----------



## AusWolf (Nov 7, 2021)

Valantar said:


> Yep, JEDEC announced three 3200 specs a while after the initial DDR4 launch. The fastest JEDEC 3200 spec is 20-20-20. That (or one of the slower standards) is what is used for 3200-equipped laptops.
> 
> *There are essentially no consumer-facing JEDEC-spec 3200 kits available* though - simply because this doesn't really matter to consumers, and they buy whatever (and enthusiasts want faster stuff and wouldn't touch JEDEC with a ten-foot pole). This also means *these DIMMs aren't generally sold at retail*, but they can be found through other channels. All ECC DDR4-3200 is also at JEDEC speeds, as are most if not all DDR4-3200 SODIMMs.


This is what I mean. Therefore, to achieve Intel/AMD's recommended maximum RAM speed of 3200 MHz, you need XMP/DOCP. You don't have a choice. Or you could go with your DIMM's standard speeds of 2400-2666 MHz, which is also advised against.



The red spirit said:


> *They don't specify it*, but Intel was particularly vocal that JEDEC is the standard, that they expect their spec to be respected, and that XMP is overclocking. XMP actually voids the Intel CPU warranty. I don't think they post specs that instantly void the warranty of their products, so it has to be the JEDEC spec that they expect.


If they don't specify it, they can't expect it. Simple as. Also, good luck to anyone involved in an RMA process trying to prove that I ever activated XMP. 



The red spirit said:


> It's not multicore performance that sucks, but rather the single core performance of those super low power devices. Just like this Atom, many of them have lower single core performance than an Athlon 64 if tested in Cinebench.


Single core performance is getting less and less relevant even in office applications.


----------



## The red spirit (Nov 7, 2021)

Valantar said:


> Sorry, but no. Try to consider how a PC operates in the real world. Say you have a game that needs 4 fast threads, only one of which consumes a full core, but each of which can hold back performance if the core needs to switch between it and another task. You then have 4 fast cores and 1 background core, and Windows Update, Defender (or other AV software) or any other software update process (Adobe CS, some Steam or EGS or Origin or whatever other automated download) kicks in. That E core is now fully occupied. What happens to other, minor system tasks? One of three scenarios: The scheduler kicks the update/download task to a P core, costing you performance; the scheduler keeps all "minor" tasks on the E core, choking it and potentially causing issues through system processes being delayed; the scheduler starts putting tiny system processes on the P core, potentially causing stutters. Either way, this harms performance. So, 1 E core is insufficient. Period. Two is the bare minimum, but even with a relatively low number of background processes it's not unlikely for the same scenario to play out with two.


Why not just clamp down on background junk then? It seems cheaper and easier to do that than to try to buy your way out of it in the form of a CPU. I personally would be more than fine with 2 E cores.
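
For what it's worth, you can already do a crude version of this today. A hedged sketch (Linux-only; the logical-CPU numbering assumed for a 12900K below is a guess, so check `lscpu` before trusting it, and the helper names are mine):

```python
import os

# Sketch: pin a background process to the E cores so it cannot steal
# P-core time from a game. Assumes 12900K-style enumeration where
# logical CPUs 0-15 are the 8 hyperthreaded P cores and 16-23 are the
# 8 E cores; real enumeration varies by kernel and firmware.

P_CORES, E_CORES = 8, 8

def e_core_cpus(p_cores: int = P_CORES, e_cores: int = E_CORES) -> set[int]:
    """Logical CPU ids of the E cores under the assumed enumeration."""
    first_e = p_cores * 2            # P cores expose two threads each
    return set(range(first_e, first_e + e_cores))

def pin_to_e_cores(pid: int) -> None:
    # Linux-only; needs permission to change the target's affinity.
    os.sched_setaffinity(pid, e_core_cpus())

print(sorted(e_core_cpus()))  # [16, 17, 18, 19, 20, 21, 22, 23]
```

Windows does the same thing via Task Manager's "Set affinity", no code needed.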




Valantar said:


> Also, the E cores are overall quite fast. So, in that same scenario, a 2P+8E setup is likely to perform _better_ than a 4P+1E (or 2E) setup, as the likelihood of the game needing more than 2 "faster than a full E core" threads is very low, and you are left with more cores to handle the slightly slower game threads + background tasks.


I still want a 4P/2E chip. You won't change my mind that it's the best budget setup.




Valantar said:


> .... so you agree that HEDT is becoming quite useless then? That paragraph essentially says as much. Servers and high end workstations (HEDT's core market!) are moving to specialized workflows with great benefits from specialized acceleration. MSDT packs enough cores and performance to handle pretty much anything else. The classic HEDT market is left as a tiny niche, having lost its "if you need more than 4 cores" selling point, and with PCIe 4.0 eroding its IO advantage, even. There are still uses for it, but they are rapidly shrinking.


No, I just showed you why I don't care about server archs and why they have no place in the HEDT market yet, and on top of that, I clearly said that the TR 3970X is my go-to choice for an HEDT chip right now, not anything Intel.




Valantar said:


> No you didn't. What you said was


And that has literally the same meaning. You need big MT performance for big tasks. Only consumers care excessively about single-threaded stuff. A prosumer may be better served by a 3970X rather than a 5950X. More lanes, more RAM, HEDT benefits, etc.



Valantar said:


> Your argumentation here is squarely centered around the previous _practical_ benefits of HEDT platforms - multi-core performance, RAM and I/O. Nothing in this indicates that you were speaking of people buying these as "toys" - quite the opposite. "Ultimate workhorse" is hardly equivalent to "expensive toy", even if the same object can indeed qualify for both.


Ultimate workhorse can be an expensive toy. Some people use Threadrippers for work, meanwhile others buy them purely for fun. Nothing opposite about that.



Valantar said:


> You're not wrong that there has historically been a subset of the HEDT market that has bought them because they have money to burn and want the performance because they can get it, but that's a small portion of the overall HEDT market, and one that frankly is well served by a $750 16-core AM4 CPU too.


Really? The crowd of Xeon bros with Sandy, Ivy and Haswell-E chips wasn't that small, and the whole reason to get those platforms was mostly to not buy a 2600K, 3770K or 4770K. Typical K chips are cool, but Xeons were next level. Nothing changes with Threadripper.



Valantar said:


> Either way, this market isn't big enough for AMD or Intel to spend millions developing products for it - their focus is on high end workstations for professional applications.


Their only job is to put more cores on mainstream stuff. They develop an architecture and then scale it to different users. The same Zen works for an Athlon buyer and an Epyc buyer. There aren't millions of dollars of expenditure specifically for HEDT anywhere. And unlike Athlon or Ryzen buyers, Threadripper buyers can and will pay a high profit margin, making HEDT chip development far more attractive to AMD than Athlon or Ryzen development. Those people also don't need a stock cooler or much tech support, which makes them even cheaper for AMD to make.




Valantar said:


> That's not really true. While 3rd-gen TR does deliver decent ST performance, it's still miles behind MSDT Ryzen. I mean, look at Anandtech's benchmarks, which cover everything from gaming to tons of different workstation tasks as well as industry standard benchmarks like SPEC. The only scenarios where the 3970X wins out are either highly memory bound or among the few tasks that scale well beyond 16 cores and 32 threads. Sure, these tasks exist, but they are quite rare, and not typically found among non-workstation users (or datacenters).
> 
> Of course, that the 3970X is significantly behind the 5950X in ST and low threaded tasks doesn't mean that it's _terrible_ for these things. It's generally faster in ST tasks than a 6700K, for example, though not by much. But I sincerely doubt the people you're talking about - the ones with so much money they really don't care about spending it - would find that acceptable. I would expect them to buy (at least) two PCs instead.


Maybe two PCs is a decent idea then, but anyway, those multithreaded tasks aren't so rare in benchmarks. I personally would like to play around with a 3970X far more in BOINC and WCG. The 3970X's single core performance is decent.




Valantar said:


> There were signs of this above, but man, that's a huge wall of goal post shifting. No, you weren't presenting arguments as if they only applied to you and your specific wants and interests, nor were you making specific value arguments. You were arguing about the general performance of the 12900K, for general audiences - that's what this thread is about, and for anything else you actually do need to specify the limitations of your arguments. It's a given that flagship-tier hardware is poor value - that's common knowledge for anyone with half a brain and any experience watching any market whatsoever. Once you pass the midrange, you start paying a premium for premium parts. That's how premium markets work. But this doesn't invalidate the 12900K - it just means that, like other products in this segment it doesn't make sense economically. That's par for the course. It's expected. And the same has been true for every high-end CPU ever.


But the fact that it's impossible to cool adequately doesn't mean anything, right? And the fact that it doesn't beat the 5950X decisively is also fine, right? Premium or not, I wouldn't want a computer that fries my legs just to beat the 5950X by a small percentage.




Valantar said:


> Why would future DDR5 improvements not apply here? If anything, RAM OC results show that the IMC has plenty left in the tank, so it'll perform better with faster DDR5 - the RAM seems to be the main limitation there. It's quite likely that the Thread ... Director? is sub-optimal and will be improved in future generations, but you're assuming that this is causing massive performance bottlenecks, and that software/firmware can't alleviate these.


Well, you literally said here that it's in hardware, so software surely can't fix it, and you clearly say it may be fixed after a few gens. Cool, I'll care about those gens then; no need to care about the experimental 12900K.



Valantar said:


> You were also making arguments around absolute performance, such as an OC'd 10900K being faster, which ... well, show me some proof? If not, you're just making stuff up.


I extrapolate.



Valantar said:


> Testing and reviews strongly contradict that idea. For example, in AT's SPEC2017 testing (which scales well with more cores, as some workstation tasks can), the 12900K with DDR4 outperforms the 10900K by 31%. To beat that with an OC you'd need to be running your 10900K at (depending on how high their unit boosted) somewhere between 5.8 and 6.8GHz to catch up, let alone be faster. And that isn't possible outside of exotic cooling, and certainly isn't useful for practical tasks. And at that point, why wouldn't you just get a 12900K and OC that? You seem to be looking very hard for some way to make an unequal comparison in order to validate your opinions here. That's a bad habit, and one I'd recommend trying to break.


You posted a link with 11900K benchmarks, not 10900K, making all your points here invalid. The 11900K is inferior to the 10900K due to 2 cores chopped off for tiny IPC gains. 2C/4T can make a difference. The extra cores close roughly 20% of the gap with the 12900K, and then you only need about 10% more performance, which you can get from simply raising the PLs to 12900K levels; you might not even need to overclock the 10900K to match the 12900K.
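
To spell that arithmetic out (the 20% and 10% are my rough estimates, not measurements), note that successive speedups compose multiplicatively rather than add:

```python
# Combining my estimated gains: 20% from the two extra cores, 10% from
# raised power limits. Speedups multiply, so the result is
# 1.20 * 1.10 = 1.32, i.e. +32%, slightly more than adding 20% + 10%.

def combined_speedup(*gains: float) -> float:
    """Combine relative speedups (0.20 == +20%) multiplicatively."""
    total = 1.0
    for g in gains:
        total *= 1.0 + g
    return total - 1.0

print(combined_speedup(0.20, 0.10))  # ~0.32, just above the 31% gap
```

So under these (again, estimated) numbers the combined gain lands just past AT's 31% figure.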



Valantar said:


> The same goes for things like saying an OC'd 3970X will outperform a 5950X in MT tasks. From your writing it seems that the 5950X is for some reason not OC'd (which is ... uh, yeah, see above). But regardless of that, you're right that the 3970X would be faster, but again - to what end, and for what (material, practical, time, money) cost? The amount of real-world workloads that scale well above 16 cores and 32 threads are quite few (heck, there are few that scale well past 8c16t). So unless what you're building is a PC meant solely for running MT workloads with near-perfect scaling (which generally means rendering, some forms of simulation, ML (though why wouldn't you use an accelerator for that?), video encoding, etc.), this doesn't make sense, as most of the time using the PC would be spent at lower threaded loads, where the "slower" CPU would be noticeably faster. If you're building a video editing rig, ST performance for responsiveness in the timeline is generally more important than MT performance for exporting video, unless your workflow is _very_ specialized. The same goes for nearly everything else that can make use of the performance as well. And nobody puts an overclocked CPU in a mission-critical render box, as that inevitably means stability issues, errors, and other problems. That's where TR-X, EPYC, Xeon-W and their likes come in - and there's a conscious tradeoff there for stability instead of absolute peak performance (as at that point you can likely just buy two PCs instead).


You're still applying value and stability arguments to literally the maximum e-peen computer imaginable. If you have tons of cash, you can pay others to set it up for you, particularly a well-insulated phase change cooling setup.



Valantar said:


> So, while your arguments might apply for the tiny group of users who still insist on buying HEDT as toys (instead of just buying $700+ MSDT CPUs and $1000+ motherboards, which are both plentiful today), they don't really apply to anyone outside of this group.


Maybe, but my point was about the maximum computer money can buy. Value be damned. A 5950X or 12900K is not enough. Gotta OC that HEDT chip for maximum performance.


----------



## The red spirit (Nov 7, 2021)

AusWolf said:


> If they don't specify it, they can't expect it. Simple as. Also, good luck to anyone involved in an RMA process trying to prove that I ever activated XMP.


With an H410/H470/B460 motherboard, you couldn't even activate it in the first place. And today, if Intel cared, they could just put an e-fuse on the chip and clamp down hard on warranty trolls with overclocked and XMPed chips. Anyway, it's pretty obvious that Intel only ensures correct operation at CAS latencies of 12.5-15 ns; anything less is experimental.
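
That 12.5-15 ns window is just the three JEDEC DDR4-3200 grades (CL20/22/24, per the spec mentioned earlier in the thread) converted to absolute latency:

```python
# Absolute CAS latency is CL cycles divided by the actual clock, which
# is half the MT/s rating (DDR transfers twice per clock). The three
# JEDEC DDR4-3200 grades land exactly in the 12.5-15 ns window.

def cas_latency_ns(mt_per_s: int, cl: int) -> float:
    """Absolute CAS latency in nanoseconds."""
    clock_mhz = mt_per_s / 2          # DDR: two transfers per clock
    return cl / clock_mhz * 1000      # cycles / MHz -> ns

for cl in (20, 22, 24):              # the three JEDEC DDR4-3200 bins
    print(f"DDR4-3200 CL{cl}: {cas_latency_ns(3200, cl):.2f} ns")
```

The same formula shows why an XMP DDR4-3200 CL16 kit (10 ns) sits below what JEDEC ever specified.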



AusWolf said:


> Single core performance is getting less and less relevant even in office applications.


I'm not really sure about that; office software tends to be really slow to adopt new technologies, including multithreading. I wouldn't be surprised if Excel is still mostly single-threaded. Performance just doesn't matter much for those applications.


----------



## efikkan (Nov 7, 2021)

Valantar said:


> Servers have been on Ice Lake for half a year now, with large-scale customers (Google, Facebook and their ilk) having access to it for likely a year before that. There's nothing stopping Intel from releasing those chips for an X699 platform. But clearly they're not interested, hence the lack of updates since late 2019 for that lineup (which was even at that point just a warmed-over refresh of the 9th gen products from the previous year.


There have been references in early official documentation/drivers/etc. to an "Ice Lake-X", though it never materialized. That's most likely because the Ice Lake-SP/X core was unable to reach decent clock speeds (as seen with the Xeon W-3300 family), in fact lower than its predecessor Cascade Lake-SP/X, making it fairly uninteresting for the workstation/HEDT market. Ice Lake has worked well for servers, though.

X699 will be based on Sapphire Rapids, which is in the same Golden Cove family as Alder Lake. Hopefully it will boost >4.5 GHz reliably.



Valantar said:


> There are essentially no consumer-facing JEDEC-spec 3200 kits available though - simply because this doesn't really matter to consumers, and they buy whatever (and enthusiasts want faster stuff and wouldn't touch JEDEC with a ten-foot pole). This also means these DIMMs aren't generally sold at retail, but they can be found through other channels. All ECC DDR4-3200 is also at JEDEC speeds, as are most if not all DDR4-3200 SODIMMs.


Really?
Kingston, Corsair, Crucial and most of the rest have 3200 kits. These are big sellers and usually at great prices.
My current home development machine (5900X, Asus ProArt B550-Creator, Crucial 32 GB CT2K16G4DFD832A) runs 3200 MHz at CL22 flawlessly. CL20 would have been better, but that's what I could find in stock at the time. Running overclocked memory in a work computer would be beyond stupid, though; I've seen how much file corruption and how many compilation failures it causes over time. An overclock isn't 100% stable just because it passes a few hours of stress tests.



Valantar said:


> <snip>
> Also, the E cores are overall quite fast. So, in that same scenario, a 2P+8E setup is likely to perform _better_ than a 4P+1E (or 2E) setup, as the likelihood of the game needing more than 2 "faster than a full E core" threads is very low, and you are left with more cores to handle the slightly slower game threads + background tasks.


There is one flaw in your reasoning:
While the E cores are theoretically capable of handling a lot of lighter loads, games are super sensitive to timing issues. So even though most games only have 1-2 demanding threads and multiple light threads, the light threads may still be timing-sensitive. Depending on which thread it is, delays may cause audio glitches, networking issues, I/O lag, etc. Any user application should probably run "only" on P cores to ensure responsiveness and reliable performance. Remember that the E cores share L2, which means the worst-case latency can be quite substantial.
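If you want to experiment with that yourself, process affinity is one blunt instrument. A minimal Linux-only sketch - the assumption that the first 16 logical CPUs on a 12900K are the P cores' SMT siblings (E cores enumerated after) is mine; verify against your own topology (e.g. lscpu) before relying on it:

```python
import os

def pin_to_p_cores(pid: int = 0, p_core_threads: int = 16) -> set:
    """Restrict a process to the (assumed) P-core logical CPUs via
    os.sched_setaffinity (Linux only; pid 0 = calling process).
    On a 12900K the 8 P cores expose 16 SMT threads, commonly
    enumerated first -- an assumption, not a guarantee."""
    n = min(p_core_threads, os.cpu_count() or 1)
    os.sched_setaffinity(pid, set(range(n)))
    return os.sched_getaffinity(pid)
```

Windows users would reach for Task Manager's affinity dialog or `start /affinity` instead; the idea is the same.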


----------



## chrcoluk (Nov 7, 2021)

Valantar said:


> With a 30%-ish IPC deficit? Yeah, sorry, I don't think so. I mean, unless you are just outright ignoring reality and saying "if the 10900K had infinite clock speed it would be the fastest CPU ever", there are hard limits to how high those CPUs can clock regardless of power under any type of semi-conventional cooling, and I would think a stock 12900K beats a well overclocked golden sample 10900K in all but the most poorly scheduled MT apps, and every single ST app out there.
> 
> As was said before, the E clusters are 4 or 0 cores, nothing in between. They have a single stop on the ring bus and no internal power gating. And IMO, 2P4E would be _fantastic_ for any low power application, whether that's a general purpose desktop or a laptop. Remember, those E cores can't hang with the P cores, but they're not your grandpa's Atom cores. Anandtech confirmed Intel's claims of them matching Skylake at the same clocks (they're _slightly_ slower at 3.9GHz than the 4.2GHz i7-6700K).
> 
> ...


XMP is still out of spec, bear that in mind: the DIMMs are factory tested, but not the rest of the system that goes with them.

I had to downclock my 3200 CL14 kit to 3000 MHz because the IMC on my 8600K couldn't handle 3200 MHz.

On my Ryzen system my 3000 CL16 kit worked fine under Windows for years, but after I installed Proxmox (Linux), the RAM stopped working properly and the system was unstable. Sure enough, Google's stress test (stressapptest) yielded errors until I downclocked it to 2800 MHz, which is still out of spec for the CPU.

Always stress test RAM with OS-based testing (not MemTest86, which is only good at finding hardware defects), and don't assume XMP is stable.
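The write-then-verify idea behind those OS-based tests can be sketched in a few lines - a toy illustration only, nowhere near as aggressive as memtester or stressapptest:

```python
import hashlib
import os

def memory_pattern_check(size_mb: int = 64, passes: int = 2) -> int:
    """Toy OS-level RAM check: fill a buffer with pseudo-random
    data, then re-read it and verify each 1 MiB slice. Real tools
    hammer memory with varied patterns and far larger footprints;
    this only shows the basic write-then-verify principle."""
    errors = 0
    for _ in range(passes):
        chunk = os.urandom(1 << 20)                # 1 MiB random pattern
        expected = hashlib.sha256(chunk).digest()
        buf = bytearray(chunk * size_mb)           # size_mb MiB buffer
        for i in range(size_mb):                   # re-read and verify
            got = hashlib.sha256(buf[i << 20:(i + 1) << 20]).digest()
            if got != expected:
                errors += 1
    return errors
```

On a stable system this should return 0; on marginal XMP settings, tools built on this principle start reporting mismatches under load.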


----------



## Valantar (Nov 7, 2021)

efikkan said:


> There has been references in early official documentation/drivers/etc. referring to a "Ice Lake X", though it never materialized. It's most likely due to the fact that the Ice Lake-SP/X core was unable to reach decent clock speeds (as seen with the Xeon W-3300 family), in fact lower than the predecessor Cascade Lake-SP/X, making it fairly uninteresting for the workstation/HEDT market. Ice Lake has worked well for servers though.
> 
> X699 will be based on Sapphire Rapids, which is in the same Golden Cove family as Alder Lake. Hopefully it will boost >4.5 GHz reliably.


Just goes to show that high-end MSDT is taking over where HEDT used to have its niche. The space between "server/datacenter chip" and "16c24t high-clocking new-arch MSDT chip" is pretty tiny, both in relevant applications and in customers. There are a few, but only a fraction of the old HEDT market from back when MSDT capped out at 4-6 cores.


efikkan said:


> Really?
> Kingston, Corsair, Crucial and most of the rest have 3200 kits. These are big sellers and usually at great prices.


At JEDEC speeds? That's weird. I've literally never seen a consumer-facing kit at those speeds. Taking a quick look at Corsair DDR4-3200 kits (any capacity, any series) on a Swedish price comparison site doesn't give a single result that isn't c16 in the first page of results (48 different kits) when using the default sorting (popularity). Of course there will also be c14 kits for the higher end stuff. Looking at one of the biggest Swedish PC retailers (inet.se), all DDR4-3200, and sorting by popularity (i.e. sales), the first result that isn't c16 is the 9th one, at c14. Out of 157 listed results, the only ones at JEDEC speeds were SODIMMs, with the rest being c16 (by far the most), c14 (quite a few), and a single odd G.Skill TridentZ at c15. Of course this is just one retailer, but it confirms my previous experiences at least.


efikkan said:


> My current home development machine (5900X, Asus ProArt B550-Creator, Crucial 32 GB CT2K16G4DFD832A), runs 3200 MHz at CL22 flawlessly. CL20 would be better, but that's what I could find in stock at the time. But running overclocked memory for a work computer would be beyond stupid, I've seen how much file corruption and compilation fails it causes over time. An overclock isn't 100% stable just because it passes a few hours of stress tests.


XMP is generally 100% stable though, unless you buy something truly stupidly fast. Of course you should always do thorough testing on anything mission critical, and running JEDEC for that is perfectly fine - but then you have to work to actually find those DIMMs in the first place.


efikkan said:


> There is one flaw in your reasoning;
> While the E cores are theoretically capable to a lot of lighter loads, games are super sensitive to timing issues. So even though most games only have 1-2 demanding threads and multiple light threads, the light threads may still be timing sensitive. Depending on which thread it is, delays may cause audio glitches, networking issues, IO lag etc. Any user application should probably "only" run on P cores to ensure responsiveness and reliable performance. Remember that the E cores share L2, which means the worst case latency can be quite substantial.


You have a point here, though it depends on whether the less intensive threads in the game are latency-sensitive or not. They don't necessarily have to be - though audio processing definitely is, and tends to be one such thing. Core-to-core latencies for E-to-E transfers aren't that bad though, at a ~15 ns penalty compared to P-to-P, P-to-E or E-to-P. Memory latency also isn't that bad, at just an extra 8 ns or so. The biggest regression is the L2 latency (kind of strangely?), which is nearly doubled. I guess we'll see how this plays out when mobile ADL comes around - from current testing it could go either way. There's definitely the potential for a latency bottleneck there though; you're right about that.



AusWolf said:


> This is what I mean. Therefore, to achieve Intel/AMD's recommended maximum RAM speed of 3200 MHz, you need XMP/DOCP. You don't have a choice. Or you could go with your DIMM's standard speeds of 2400-2666 MHz, which is also advised against.


You could always track down one of the somewhat rare JEDEC kits - they are out there. OEM PCs also use them pretty much exclusively (high end stuff like Alienware might splurge on XMP). System integrators using off-the-shelf parts generally use XMP kits though, as they don't get the volume savings of buying thousands of JEDEC kits.



The red spirit said:


> Why not just clamp down on background junk then? Seems cheaper and easier to do that than try to buy it in form of CPU. I personally would be more than fine with 2E cores.


...because I want to _use_ my PC rather than spend my time managing background processes? One of the main advantages of a modern multi-core system is that it can handle these things without sacrificing too much performance. I don't run a test bench for benchmarking, I run a general-purpose PC that gets used for everything from writing my dissertation to gaming to photo and video editing to all kinds of other stuff. Keeping the background services for the various applications used for all of this running makes for a much smoother user experience.


The red spirit said:


> I still want 4P/2E chip. Won't change my mind that it's not the best budget setup.


We'll see if they make one. I have my doubts.


The red spirit said:


> No, I just showed you why I don't care about server archs and why they have no place in HEDT market yet and on top of that, I clearly say that TR 3970X is my go to choice for HEDT chip right now, not anything Intel.
> 
> And that has literally the same meaning. You need big MT performance for big tasks. Only consumers cares excessively about single threaded stuff. Prosumer may be better served by 3970X, rather than 5950X. More lanes, more RAM, HEDT benefits and etc.


But there aren't that many relevant tasks that scale well past 8 cores, let alone 16. That's what we've seen in Threadripper reviews since they launched: if you have the workload for it they're great, but those workloads are quite limited, and outside of those you're left with rather mediocre performance, often beat by MSDT parts.


The red spirit said:


> Ultimate workhorse can be an expensive toy. Some people use Threadrippers for work, meanwhile others buy them purely for fun. Nothing opposite about that.


Did you even read what I wrote, the sentences you just quoted? I literally said that they _can_ be the same, but that they aren't _necessarily_ so. _You_ presented them as if they were _necessarily_ the same, which is not true. I never said anything like those being opposite.


The red spirit said:


> Really? All those Xeon bros with sandy, ivy, Haswell E chips are not that small and the whole reason to get those platforms, was mostly to not buy 2600K, 3770K or 4770K. Typical K chips are cool, but Xeons were next level. Nothing changes with threadripper


Except it does: back then, those chips were the only way to get >4 cores and >8 threads, which gave _meaningful_ performance gains in common real-world tasks. There's also a reason so many of those people still use the same hardware: it keeps up decently with current MSDT platforms, even if it is slower. The issue is that the main argument - that the increase in core count is useful - is essentially gone unless you have a very select set of workloads.


The red spirit said:


> Their only job is just to put more cores on mainstream stuff. They develop architecture and then scale it to different users. Same Zen works for Athlon buyer and for Epyc buyer. There isn't millions of dollars expenditures specifically for HEDT anywhere. And unlike Athlon or Ryzen buyers, Threadripper buyers can and are willing to pay high profit margin, making HEDT chip development far more attractive to AMD than Athlon or Ryzen development. Those people also don't need stock cooler or much tech help, which makes it even more cheaper for AMD to make them.


Wait, do you think developing a new CPU package, new chipset, new platform, BIOS configuration, and everything else, is free? A quick google search tells me a senior hardware or software engineer at AMD earns on average ~$130 000. That means tasking eight engineers with this for a year is a million-dollar cost in salaries alone, before accounting for all the other costs involved in R&D. And even a "hobby" niche platform like TR isn't developed in a year by eight engineers.
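For what it's worth, the arithmetic there is easy to sanity-check (the figures are the rough estimates from this post, not AMD's actual numbers):

```python
# Rough figures from the estimate above -- not actual AMD data.
avg_salary = 130_000          # ~ average senior engineer salary (USD/year)
engineers = 8
salary_cost_per_year = avg_salary * engineers
print(salary_cost_per_year)   # 1040000 -> already a seven-figure cost
```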


The red spirit said:


> Maybe two PC is a decent idea then, but anyway, those multithreaded tasks aren't so rare in benchmarks. I personally would like to play around with 3970X far more in BOINC and WCG. 3970X's single core performance is decent.


Well, that places you in the "I want expensive tools to use as toys" group. It's not the smallest group out there, but it doesn't form a viable market for something with multi-million dollar R&D costs.


The red spirit said:


> But the fact that it's impossible to cool adequately doesn't mean anything right? And the fact, that it doesn't beat 5950X decisively is also fine, right? Premium or not, but I wouldn't want a computer that fires my legs just to beat 5950X by a small percentage.


Wait, and an overclocked 3970X is easy to cool?  I mean, you can at least try to be consistent with your arguments.


The red spirit said:


> Well, you literally said here that it's in hardware, so sure software can't do that and you clearly say here that it may be fixed after few gens. Cool, I will care about those gens then, no need to care about experimental 12900K.


What? I didn't say that. I said there is zero indication of there being significant issues with the hardware part of the scheduler. Do you have any data to suggest otherwise?


The red spirit said:


> You posted a link with 11900K benchmarks, not 10900K, making all your points here invalid. 11900K is inferior to 10900K due to 2 cores chopped off for tiny IPC gains. 2C/4T can make a difference. They more or less result in 20% of closing gap with 12900K and then you only need 10% of performance gains, which you can get from simply raising PLs to 12900K levels, you might not even need to overclock 10900K to match 12900K.


The 10900K is listed in the overall result comparison at the bottom of the page, which is where I got my numbers from. The 11900K is indeed slower in the INT test (faster in FP), but I used the 10900K results for what I wrote here. The differences between the 10900K and 11900K are overall minor. Please at least look properly at the links before responding.


The red spirit said:


> You seem to still apply value argument and stability argument to literally the maximum e-peen computer imaginable. If you have tons of cash, you can make others just set it up for you, particularly well insulated phase change cooling.


Wait, weren't you the one arguing that the 12900K makes no sense? You're wildly inconsistent here - on the one hand you're arguing that some people don't care about value and want extreme toys (which would imply things like exotic cooling and high OCs, no?), on the other you're arguing for the practical value of high thread-count workstation CPUs, and on the third (yes, you seem to be sprouting new hands at will) you're arguing for some weird combination of the two, as if there were a significant market of people running massively overclocked high-end workstation parts _both_ for prestige _and_ for serious work. The way you're twisting and turning to make your logic work is rather confusing, and speaks to a weak basis for the argument.


The red spirit said:


> Maybe, but my point was about maximum computer that money can buy. Value be damned. 5950X or 12900K is not enough. Gotta OC that HEDT chip for maximum performance.


But that depends _entirely_ on your use case. And there are many, many scenarios in which a highly overclocked (say, with a chiller) 12900K will outperform an equally OC'd 3970X. And at those use cases and budget levels, the people in question are likely to have access to both, or to pick according to which workloads/benchmarks interest them.


And, to be clear, I'm not even arguing that the 12900K is especially good! I'm just forced into defending it by your overblown dismissal of it and the weird and inconsistent arguments used to achieve it. As I've said earlier in the thread, I think the 12900K is a decent competitor, hitting where it ought to for a chip launching a year after its competition. It's impressive in some aspects (ST performance in certain tasks, E-core performance, MT in tasks that can make use of the E cores), but downright bad in others (overall power consumption, efficiency of the new P-core arch, etc.). It's very much a mixed bag, and given that it's a hyper-expensive i9 chip, it's also only for the relatively wealthy and especially interested. The i5-12600K makes far more sense in most ways, and is excellent value compared to most options on the market today - but you can find Ryzen 5 5600Xs sold at sufficiently lower prices for those to be equally appealing depending on your region. The issue here is that you're presenting things in a _far_ too black-and-white manner, which is what has led us into this weird discussion where we're suddenly talking about HEDT CPUs and Athlon 64s in successive paragraphs. So maybe, just maybe, try to inject a bit of nuance into your opinions and/or how they're presented? Because your current black-and-white arguments just miss the mark.


chrcoluk said:


> XMP is out of spec still, bear that in mind, the dimms are factory tested, but not the rest of the system to go with it.
> 
> I had to downclock my 3200 CL14 kit to 3000 MHz because the IMC on my 8600K couldn't handle 3200 MHz.
> 
> ...


I know, I never said that XMP wasn't OC after all. What generation of Ryzen was that, btw? With XMP currently, as long as you're using reasonably specced DIMMs on a platform with decent memory support, it's a >99% chance of working. I couldn't get my old 3200c16 kit working reliably above 2933 on my previous Ryzen 5 1600X build, but that was solely down to it having a crappy first-gen DDR4 IMC, which 1st (and to some degree 2nd) gen Ryzen was famous for. On every generation since you've been able to run 3200-3600 XMP kits reliably on the vast majority of CPUs. But I agree that I should have added "as long as you're not running a platform with a known poor IMC" to the statement you quoted. With Intel at least since Skylake and with AMD since the 3000-series, XMP at 3600 and below is nearly guaranteed stable. Obviously not 100% - there are always outliers - but as close as makes no difference. And, of course, if memory stability is _that_ important to you, you really should be running ECC DIMMs in the first place.


----------



## ncrs (Nov 7, 2021)

birdie said:


> The article includes only _transient execution vulnerabilities_. Both AMD and Intel have a lot more than that but those are different altogether.


Yes, and the linked Intel article also contains transient execution vulnerabilities, unless you don't consider "*Speculative *Code Store Bypass" a transient execution vulnerability? The wiki article is incomplete and outdated, as I wrote.


----------



## The red spirit (Nov 7, 2021)

Valantar said:


> ...because I want to _use_ my PC rather than spend my time managing background processes? One of the main advantages of a modern multi-core system is that it can handle these things without sacrificing too much performance. I don't run a test bench for benchmarking, I run a general-purpose PC that gets used for everything from writing my dissertation to gaming to photo and video editing to all kinds of other stuff. Keeping the background services for the various applications used for all of this running makes for a much smoother user experience.


Imagine the crazy thing: I actually use my PC too, I just don't leave Cinebench running while I play games.




Valantar said:


> But there aren't that many relevant tasks that scale well past 8 cores, let alone 16. That's what we've seen in Threadripper reviews since they launched: if you have the workload for it they're great, but those workloads are quite limited, and outside of those you're left with rather mediocre performance, often beat by MSDT parts.


BOINC, HandBrake, Oracle VM, 7 zip...




Valantar said:


> Except it does: back then, those chips were the only way to get >4 cores and >8 threads, which had _meaningful_ performance gains in common real-world tasks.


Um, FX 8350 existed (8C/8T), so did i7 920 (6C/12T) and Phenom X6 1055T (6C/6T).



Valantar said:


> Wait, do you think developing a new CPU package, new chipset, new platform, BIOS configuration, and everything else, is free? A quick google search tells me a senior hardware or software engineer at AMD earns on average ~$130 000. That means tasking eight engineers with this for a year is a million-dollar cost in salaries alone, before accounting for all the other costs involved in R&D. And even a "hobby" niche platform like TR isn't developed in a year by eight engineers.


There are problems with your statements. First of all, it's Award that develops the BIOS with help from AMD, the CPU package is mostly made by TSMC (previously GlobalFoundries) with minimal help from AMD, and many other things are also not exclusively AMD's business. Making a TR platform when they already have the Zen architecture likely doesn't take an army of senior engineers. Same goes for everything else. You said it takes millions of dollars, and that's true, but you seem to imply it takes hundreds of millions, which is most likely not true.




Valantar said:


> Well, that places you in the "I want expensive tools to use as toys" group. It's not the smallest group out there, but it doesn't form a viable market for something with multi-million dollar R&D costs.


I literally told you that beyond making new arch, you just scale it for different product lines and SKUs. It's not that hard to make TR, when they make Ryzen and EPYC already.



Valantar said:


> Wait, and an overclocked 3970X is easy to cool?  I mean, you can at least try to be consistent with your arguments.


Still easier to cool at stock speeds with an air cooler than a 12900K. I'm very consistent; you are the one sloshing from one argument to another. I said it's doable with a phase-change cooler.



Valantar said:


> What? I didn't say that. I said there is zero indication of there being significant issues with the hardware part of the scheduler. Do you have any data to suggest otherwise?


_It's quite likely that the *Thread ... Director?* is sub-optimal and will be improved in future generations._ Sure as hell you did. What else is "Thread Director" supposed to mean? The OS scheduler? Or the CPU's own thread management logic?



Valantar said:


> The 10900K is listed in the overall result comparison at the bottom of the page, which is where I got my numbers from. The 11900K is indeed slower in the INT test (faster in FP), but I used the 10900K results for what I wrote here. The differences between the 10900K and 11900K are overall minor. Please at least look properly at the links before responding.


I literally had to use the search function in my web browser. Couldn't you have picked a less convoluted link? I just needed a riddle today. Anyway, that's just a single benchmark, and it's super synthetic - basically like PassMark, and PassMark scores rarely translate to real-world performance, or even to performance in other synthetic tasks. This is the link that you should have used:








The Intel 12th Gen Core i9-12900K Review: Hybrid Performance Brings Hybrid Complexity (www.anandtech.com)
				




The new i9 is this much faster than the old 10900K:
In Agisoft: 41%
In 3DPM -AVX: 0%
In 3DPM +AVX: 503%
In yCruncher 250m Pi: 30%
In yCruncher 2.5b Pi: 66%
In Corona: 39%
In Crysis: 1%
In Cinebench R23 MT: 69% (where did you get a 30% performance difference here?)

And I'm too lazy to calculate the rest. So you were full of shit and couldn't argue well - I literally had to make the argument against myself to realize that I'm full of shit too. Fail. The 10900K is more like 60% behind the i9-12900K, and no reasonable overclock will close a gap like that.
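Worth noting the asymmetry in these comparisons: "X% faster" and "X% behind" are not the same number. A quick sketch with made-up scores (not figures from any review):

```python
def pct_faster(new: float, old: float) -> float:
    """How much faster `new` is than `old`, in percent."""
    return (new / old - 1.0) * 100.0

def pct_behind(old: float, new: float) -> float:
    """How far `old` trails `new`, in percent (note the asymmetry)."""
    return (1.0 - old / new) * 100.0

# Hypothetical scores: new chip 169 points, old chip 100 points.
print(round(pct_faster(169, 100), 1))   # 69.0 -> "69% faster"
print(round(pct_behind(100, 169), 1))   # 40.8 -> only "41% behind"
```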




Valantar said:


> Wait, weren't you the one arguing that the 12900K makes no sense? You're wildly inconsistent here - on the one hand you're arguing that some people don't care about value and want extreme toys (which would imply things like exotic cooling and high OCs, no?), and on the other you're arguing for the practical value of high thread-count workstation CPUs, and on the third (yes, you seem to be sprouting new hands at will) you're arguing for some weird combination of the two, as if there is a significant market of people running high-end massively overclocked workstation parts _both_ for prestige _and_ serious work. The way you're twisting and turning to make your logic work is rather confusing, and speaks to a weak basis for the argument.


Not at all; it's just you who can't follow simple reasoning. If you buy a 12900K, you get strong single-core perf but weaker multicore perf. If you buy a 3970X, you get weaker single-core perf and strong multicore perf. If you want the best of both, you overclock the 3970X to make it balanced. Simple. Except I found out that the 12900K's advantage is much bigger than I thought, the 3970X is more antiquated than I thought, and it's Zen, meaning it doesn't overclock that well.




Valantar said:


> And, to be clear, I'm not even arguing that the 12900K is especially good! I'm just forced into defending it due to your overblown dismissal of it and the weird and inconsistent arguments used to achieve this. As I've said earlier in the thread, I think the 12900K is a decent competitor, hitting where it ought to launching a year after its competition. It's impressive in some aspects (ST performance in certain tasks, E-core performance, MT in tasks that can make use of the E cores), but downright bad in others (overall power consumption, efficiency of the new P core arch, etc.). It's very much a mixed bag, and given that it's a hyper-expensive i9 chip it's also only for the relatively wealthy and especially interested.


Intel could have just launched it on an HEDT platform (those guys don't care as much about power usage and heat output), and that would have surely meant cheaper LGA1700 motherboards. It would have been more interesting as a 16P/32E part.




Valantar said:


> The i5-12600K makes far more sense in most ways, and is excellent value compared to most options on the market today - but you can find Ryzen 5 5600Xes sold at sufficiently lower prices for those to be equally appealing depending on your region.


Nothing Alder Lake is sold where I live. The i5-12600K only makes sense for a wealthy buyer; the i5-12400 will deliver most of the performance at a much lower cost. The Ryzen 5 5600X has been a complete and utter failure as a value chip since day one, but Lisa said it's the "best value" chip on the market while ignoring the 10400 and 11400, and people bought it in droves.


----------



## cadaveca (Nov 8, 2021)

AusWolf said:


> Is there even a JEDEC spec for 3200 MHz?
> 
> Edit: Where does it say here that memory support up to 3200 MHz means JEDEC standard 3200 MHz? As far as I know, neither Intel, nor AMD specify latency or voltage in their XMP/DOCP recommendations.
> 
> ...


JEDEC for 3200 MHz sticks is 22-22-22.

Have them in this laptop I'm typing from.


----------



## Crackong (Nov 8, 2021)

Wow

This thread is slowly turning into a final year essay


----------



## R0H1T (Nov 8, 2021)

Valantar said:


> CPU development cycles for a new arch are in the ~5 year range. In other words, MS has known for at least 3+ years that Intel is developing a big+little-style chip. Test chips have been available for at least a year. If MS haven't managed to make the scheduler work decently with that in that time, *it's their own fault*.


You aren't making any sense here. OK, so MS has known about these chips for at least 3+ years, right? Well, Apple, having made both the chips & the OS, had at least 5+ years - besides their experience with the Axx chips & iOS - to make the M1 on desktops a real winner! Heck, there were rumors as far back as 2016-17 that these chips were coming, not to mention they're optimizing for essentially a single closed platform.

Do you have any idea about the literally *gazillion different combinations of hardware & software *(applications) that Win11 has to work on? You think Intel, MS or both combined can replicate this in a lab? I've been essentially beta testing Windows (releases) for 10+ years now & your posts just show how easy you make it sound ~ *except it's not *

No it's not, so stop making things up. You seem to be on a crusade to make it look like this would be child's play if MS (or Intel) had done it properly! You can have all the money in the world and it won't mean a thing; *it takes time* ~ that's the bottom line, there's no magic pixie dust you can sprinkle to make everything work the way it should.


----------



## Lord_Soth (Nov 8, 2021)

cadaveca said:


> 3200 MHz sticks JEDEC is 22-22-22
> 
> Have them in this laptop I'm typing from.


My HyperX Fury kit has a 3200 JEDEC profile with CL 19-21-21 timings.

Edit: from the data sheet, HX432C18FB2K2/16's 3200 JEDEC profile is 18-21-21.


----------



## Valantar (Nov 8, 2021)

Lord_Soth said:


> My HyperX Fury kit has a 3200 JEDEC profile with CL 19-21-21 timings.
> 
> Edit: from the data sheet, HX432C18FB2K2/16's 3200 JEDEC profile is 18-21-21.


Hm, that's weird. I also see the datasheet lists it as "JEDEC/PnP", which _might _allude to it not being an actual JEDEC spec, but interpreting what exactly that means is going to be guesswork either way. Also odd to see that that profile matches the first XMP profile, at the same voltage - I guess some subtimings might be different, but that seems oddly redundant. Even more odd is the second XMP profile at 2933 - I don't think I've ever seen an XMP profile lower than a JEDEC profile.


R0H1T said:


> You aren't making any sense here, ok so MS has known about these chips for at least 3+ years right? So Apple, having made the chips & OS, also has at least 5+ years, besides their experience with Axx chips & iOS, to make the M1 on desktops a real winner! Heck there were rumors as far back as 2016-17 that these chips were coming, not to mention they are optimizing for essentially a single closed platform.
> 
> Do you have any idea about the literally *gazillion different combinations of hardware & software *(applications) that Win11 has to work on? You think Intel, MS or both combined can replicate this in a lab? I've been essentially beta testing Windows (releases) for 10+ years now & your posts just show how easy you make it sound ~ *except it's not *
> 
> No it's not, stop making things up; you seem to be on a crusade to somehow make it look like this is child's play if MS (or Intel) had done it properly! You can have all the money in the world, it won't mean a thing, *it takes time* ~ that's the bottom line, there's no magic pixie dust you can sprinkle to make everything work the way it should


I never said it doesn't take time. I specifically argued for why MS has had time. And as someone else pointed out, they've also had Lakefield as a test bed for this, extending that time quite a bit back. Also, the amount of hardware combinations isn't especially relevant - we're talking about scheduler optimizations here. The main change is that the scheduler goes from being aware of "real" and SMT threads, i.e. high and low _performance_ threads + preferred/faster cores (but all being the same in principle), to a further differentiation between high and low _power_ cores. Anandtech covers this in their review, in their W10 vs. W11 testing.

Also, whether or not Apple has had _more_ time (which they undoubtedly have) doesn't fundamentally affect whether MS has had _sufficient_ time to make this work. And the test results show that for the most part they have. There are indeed applications where the E cores are left un(der)utilized or some other oddity, but for the most part it works as advertised. That alone speaks to the success of the scheduler changes in question. A further note here: MS already claims that this will work fine on W10 as well, without scheduler optimizations, with the major difference being run-to-run variance as the scheduler might at times shuffle threads around in a less optimal manner, as it only treats the E cores as low performance rather than low power.

But again: I never said this doesn't take time. It obviously does. If you read what I said that should really be abundantly clear.


The red spirit said:


> Imagine the crazy thing, I actually use my PC too, I just don't leave Cinebench running while I play games.


Ah, yes, the old caricatured straw man argument. Running out of actual responses? Did I say I did? Or did I say that I don't think a modern PC should necessitate you spending time managing your background processes to any significant degree?


The red spirit said:


> BOINC, HandBrake, Oracle VM, 7 zip...


BOINC is an opportunistic "compute sharing" app. Yes, there are people who set up PCs purely for that, but for >99.9999% of users it's not a main workload - and it's a _prioritized_ workload for even fewer, as the core concept is for it to use leftover, unused resources.

Handbrake is for the most part something run rarely and for relatively short periods of time. Most people don't transcode their entire media library weekly.

How many people run several VMs with heavy CPU loads at the same time on a consumer platform? Again, these people are better served with server/workstation hardware, and ST performance is likely to be of little importance to them. Or if it is, they'll run those on a separate PC.

How many long-term compression/decompression operations do you run? Yes, some people have to regularly compress and decompress massive filesets, but even then it's an intermittent and most likely "offline" workload, i.e. something either run in the background while doing other things or run overnight etc. For the most part, compression/decompression is something done intermittently and in relatively short bursts, where the performance difference between a 32c64t CPU and a 16c24t (or 16c32t) at higher clocks will be of negligible importance compared to the other tasks the PC is used for.


The red spirit said:


> Um, FX 8350 existed (8C/8T), so did i7 920 (6C/12T) and Phenom X6 1055T (6C/6T).


Yep. And the FX-series at best performed on par with 4c4t i5s, and was beat soundly by 4c8t i7s (often at much lower power draws). The i7-920 was 4c8t. There were 8c and 6c Nehalem Xeons, but no Core chips above 4c8t - and those were even arguably a proto-HEDT platform due to their 130W TDPs and triple-channel memory. And the Phenom X6 was also quite slow compared to contemporary Core chips, rendering its core advantage rather irrelevant - but some people did indeed prioritize it for running more MT tasks, as AMD delivered better value in those cases.


The red spirit said:


> We have problems with your statements. First of all, it's AWARD that develops BIOS with help from AMD, CPU package is mostly made by TSMC or previously Global Foundries with minimal help from AMD, many things are also not exclusively AMD's business. Just to make TR platform while they already have Zen architecture likely doesn't take an army of senior engineers. Same goes for anything else. You said it takes millions of dollars, that's true, but you seem to imply that it takes hundreds of millions, which is most likely not true.


Wait, "we" have problems? Who is "we"? And secondly, where on earth are you getting "hundreds of millions" from? I have said no such thing, so please get that idea out of your head. It's purely your own invention.

I'm saying that spending millions of R&D dollars on low-volume products, even in high-margin segments, is often a poor financial choice, and the size of the applicable market is _extremely_ important in whether or not this happens. And as the real-world advantages of HEDT platforms have shrunk dramatically since the launch of Zen1 (8c16t) and then Zen2 (16c32t) for MSDT, making these products is bound to drop ever lower on the list of priorities. The launch of the TR-W series underscores this, as these are essentially identical to Epyc chips, cutting the need for new packaging and developing new trace layouts for a 4000+ pin socket, while also addressing the remaining profitable part of the HEDT market: high end professional workstation users who can make use of tons of threads and/or memory bandwidth.

Also, are you actually implying that AMD using external companies to develop parts of their solutions makes it noticeably cheaper? Because that's nonsense. It makes it easier and lets them have a lower number of specialized staff (which might not have tasks at all times). This is a cost savings, but not one that actually goes into the equation for R&D for a product like this - it still takes the same number of people the same time - plus, of course, these external companies need consultants and supervisors from AMD to ensure that they're sticking to specifications and following the guidelines correctly.


The red spirit said:


> I literally told you that beyond making new arch, you just scale it for different product lines and SKUs. It's not that hard to make TR, when they make Ryzen and EPYC already.


And I told you that this is wrong, for reasons detailed above. The chiplet approach makes this a lot cheaper and simpler, but it does not make developing a whole new platform based on these chiplets cheap or easy.


The red spirit said:


> Still easier to cool at stock speeds with air cooler than 12900K.


Lol, no. Both are manageable, but pushing it. Also, though this is sadly not followed up in the article, Anandtech's review indicates that thermal readings for the 12900K are erroneous:


> Don’t trust thermal software just yet, it says 100C but it’s not





The red spirit said:


> I'm very consistent, you are sloshing around from one argument to another.


As demonstrated above, you are clearly not. You keep shifting between several points of reference and ignoring the differences between them.


The red spirit said:


> I said it's doable with phase change cooler.


And a 12900K can't be OC'd to hell and back with a phase change cooler? Again, for some reason you're insisting on unequal comparisons to try and make your points. You see how that undermines them, right?


The red spirit said:


> _It's quite likely that the *Thread ... Director?* is sub-optimal and will be improved in future generations_ Sure as hell you did. What else "thread director" is supposed to mean? OS scheduler? or CPU's own thread management logic?


I can't believe I have to spell this out, but here goes: "sub-optimal" means _not perfect_. "Not perfect" is not synonymous with "has significant issues"; it means that _there is room for improvement_. So, what I am saying is: it's quite likely that the Thread Director (or whatever it's called) _can be improved_, yet we have no evidence of it _failing significantly_. The outliers we can see in reviews are more likely to be fixable in software, OS or microcode updates than to require hardware changes.


The red spirit said:


> I literally used search function in web browser.


So, for a page with 99% of its content in pictured graphs, you do a word search. 10/10 for effort.


The red spirit said:


> Couldn't you pick any less straight forward link? I just needed a riddle today.


Seriously, if that's too hard for you to parse - I was specifically referencing final scores, not part scores - I can't help you there.


The red spirit said:


> Anyway, that's just a single benchmark and it's super synthetic.


I'm sorry, but you're making it obvious that you have zero idea what you're saying here. SPEC is not a single benchmark, it's a collection of dozens of benchmarks. And it's not "super synthetic", it is purely based on real-world applications. You can read all about the details for every single sub-test here. There is some overlap between the applications used as the basis for SPEC subtests and AT's benchmark suite as well, for example POVray - though of course the specific workloads are different.


The red spirit said:


> Basically like Passmark and Passmark's scores rarely translate to real world performance or performance even in other synthetic tasks.


Again: no. Not even close. See above. We could always discuss the validity of SPEC as a benchmark, as it clearly doesn't cover every conceivable (or even common) use case of a PC, but that's not what you're doing here.


The red spirit said:


> This is the link that you should have used:
> 
> 
> 
> ...


Did I mention a specific Cinebench score? Where? Are you confusing me with someone else?

Also, the tests you posted above average out to a 91% advantage for the 12900K, so ... if anything, that further undermines your postulation that an overclocked 10900K would be faster? I really don't see what you're getting at. Even in the cases where they are tied, you could OC both CPUs roughly equally, and the differences would be negligible.


The red spirit said:


> And I'm too lazy to calculate the rest. So you were full of shit and could argue well,


Nice to see you're so invested in having a constructive and civil discussion.


The red spirit said:


> I literally had to provide an argument to myself to realize that I'm full of shit too. Fail. 10900K is more like 60% behind i9 12900K, no reasonable overclock will close gap like that.


.... so, what I was saying all along was actually correct? Oh, no, of course, we were somehow both wrong, it's just that your new conclusion somehow aligns with what I've been saying all along. I mean, come on, man. Being wrong is fine. It happens to all of us, all the time. Chill out.


The red spirit said:


> Not at all, it's just you who can't follow simple reasoning. If you buy 12900K, you have strong single core perf, but weaker multicore perf.


No, you're the one failing to grasp that this weaker MT perf _only_ applies in a relatively limited number of use cases. You tried to address this above, yet you listed four workloads. There are clearly more, but very few of them are likely to be significant time expenditures for even an enthusiast user. Hence my point of the ST perf being more important, and the relative importance of the lower MT thus being lower, and the 12900K providing better overall performance even if it does lose out in applications that scale well above 16 cores.


The red spirit said:


> If you buy 3970X you get weaker single core perf and strong multicore perf. You want the best of both, you overclock 3970X to make it balanced. Simple. Except that I found out that 12900K's advantage is much bigger than I thought and 3970X is actually more antiquated than I thought and it's Zen, meaning it doesn't overclock that well.


So ... you agree with what I've been saying, then? Because that's exactly what I've been arguing. (Also, the 3970X is Zen2, not Zen, so it OCs better than TR 1000 and 2000, but it still doesn't OC well, due to the massive power requirements of OCing such a massive chip and the difficulty of powering and cooling such a dense arch.)


The red spirit said:


> Intel could have just launched it on HEDT platform, those guys don't care about power usage, heat output as much and that will surely mean cheaper LGA 1700 motherboards. Would have been more interesting as 16P/32E part.


But they didn't. Which tells us what? I'd say several things:
- The HEDT market is much smaller than MSDT, and less important
- This is reinforced by increasing MSDT core counts, which have taken away one of the main advantages of HEDT over the past decade
- Launching this on HEDT would leave Intel without a real competitor to the 5950X, making them look bad
- HEDT today, to the degree that it exists, only makes sense in either _very_ heavily threaded applications or memory-bound applications

I don't doubt that Intel has some kind of plans for a new generation of HEDT at some point, but it clearly isn't a priority for them.


The red spirit said:


> Nothing Alder Lake is sold where I live. i5 12600K only makes sense for wealthy buyer. i5 12400 will deliver most performance at much lower cost, Ryzen 5600X complete and utter failure as value chip since day one, but Lisa said that it's "best value" chip on the market, while ignoring 10400 and 11400 and people bought it in droves.


This makes it seem like you're arguing not on the basis of the actual merits of the products, but rather based on some specific regional distribution deficiencies. Which is of course a valid point in and of itself, but not a valid argument as to the general viability of these chips, nor how they compare in performance or efficiency to other alternatives. You might have less trouble being understood if you make the premises for your positions clearer? I also completely agree that the 5600X is expensive for what it is and that AMD _needs_ to start prioritizing lower end products (preferably yesterday) but at least it has come down in price somewhat in many regions - but the 11400 and 11600 series that Intel launched as a response are indeed better value. Quite the reversal from a few years ago! I'm also looking forward to seeing lower end Zen3 and ADL chips, as we finally have a truly competitive CPU market.


----------



## Lord_Soth (Nov 8, 2021)

Valantar said:


> Hm, that's weird. I also see the datasheet lists it as "JEDEC/PnP", which _might _allude to it not being an actual JEDEC spec, but interpreting what exactly that means is going to be guesswork either way. Also odd to see that that profile matches the first XMP profile, at the same voltage - I guess some subtimings might be different, but that seems oddly redundant. Even more odd is the second XMP profile at 2933 - I don't think I've ever seen an XMP profile lower than a JEDEC profile.


I searched a lot for a 3200 JEDEC kit for the Ryzen 2400G. I found one and bought it in 2018, but never installed it (I bought a new home and all the PC parts are still in a box).
I found this on the internet; it seems to be the single-channel version of the same memory.


----------



## THU31 (Nov 8, 2021)

This is progress, you cannot take that away from them. But man, they are years behind AMD when it comes to efficiency.

And for me the days of chasing the highest performance are long gone. All I care about is efficiency. I have no interest in running a CPU at 200+ watts and a GPU at 300+ watts. No way.

Maybe the 10 nm process can mature a bit before the 13th gen and they can do what AMD does, which is to have top-quality silicon in flagship models.
The 5800X, 5900X and 5950X basically have the same power consumption, with the 5950X being ~70% faster, and much cooler because it has two dies instead of one. This is what I want to see from Intel, amazing performance at 125 watts.


----------



## Lord_Soth (Nov 8, 2021)

THU31 said:


> This is progress, you cannot take that away from them. But man, they are years behind AMD when it comes to efficiency.
> 
> And for me the days of chasing the highest performance are long gone. All I care about is efficiency. I have no interest in running a CPU at 200+ watts and a GPU at 300+ watts. No way.


Same here; the time for triple-A gaming is gone. Now I look for efficiency and integration, and for that AMD is still on top for me.
The real star of Alder Lake seems to be the Gracemont core.
I would really like to see a 4/8 E-core SoC with cheap socketed motherboards, like AMD did for Jaguar on the AM1 platform.


----------



## Valantar (Nov 8, 2021)

Lord_Soth said:


> I would really like to see a 4/8 E-core SoC with cheap socketed motherboards, like AMD did for Jaguar on the AM1 platform.


That would be really interesting. It could get away with a very small (and thus efficient) ring bus as well - just the IMC, PCIe, and two core clusters. Knowing Intel though, this would be a soldered-only product, and likely a Xeon of some sort. But if they made a DIY/consumer version of this, it would be _really_ cool.


----------



## Lord_Soth (Nov 8, 2021)

Valantar said:


> That would be really interesting. It could get away with a very small (and thus efficient) ring bus as well - just the IMC, PCIe, and two core clusters. Knowing Intel though, this would be a soldered-only product, and likely a Xeon of some sort. But if they made a DIY/consumer version of this, it would be _really_ cool.


I'm sure it would be a BGA platform, and I'm not a big fan of those. I rushed to buy my last laptop with a 4th-gen Core chip (now with a DirectX 11-only iGPU) before they went all-BGA.
I'm still a big fan of all the Atom-type CPUs, even the first potato generation, but I never bought any of them; I opted for AMD for the better GPU.


----------



## AusWolf (Nov 8, 2021)

THU31 said:


> This is progress, you cannot take that away from them. But man, they are years behind AMD when it comes to efficiency.
> 
> And for me the days of chasing the highest performance are long gone. All I care about is efficiency. I have no interest in running a CPU at 200+ watts and a GPU at 300+ watts. No way.
> 
> ...


As much as I agree with you, there are a few key points about efficiency that need to be cleared up (not necessarily for you, but for anyone else on this forum).

AMD's TDP figures have nothing to do with power consumption. While Zen 3 is still ahead of Intel in terms of efficiency, their 65 W CPUs have a power limit of 88 W, and their 105 W ones a limit of 142 W. It's easy to be ahead when your TDP numbers mean nothing (for the consumer at least). With the ASUS Core Optimiser enabled in the BIOS, my (sold) 5950X consumed somewhere around 180-190 W in all-core Cinebench and ran at slightly above 80 °C with a 240 mm AIO. It was crazy fast, but still... Just saying.
My Core i7-11700 at stock scores the same as an R5 3600. One could say that's a terrible result, considering that we're putting an 8-core chip against an older 6-core. My argument is that the i7 achieves this score while being limited to 65 W at 2.8 GHz, while the 3600 maxes out its 88 W limit at around 4 GHz. The point is, Rocket Lake's efficiency at 14 nm can exceed that of Zen 2 at 7 nm with the proper settings.

Intel's larger chips (compared to AMD's) are easier to cool when configured to consume the same power as Zen 2/3, due to the larger area to dissipate the heat (and better contact with the IHS, maybe?).
Intel does have well-performing chips at 125 W. They achieve this by limiting long-term power consumption (PL1) to that value. At least this was the case before Alder Lake. Sure, performance at this level doesn't match Zen 3, but it's plenty for gaming and everyday use.
Intel has claimed back the gaming crown by ignoring their own TDP number, and configuring the 12900K to sit way up high on the efficiency curve with a ridiculously high PL1 - at a place where no chip should sit, in my opinion. I'd be curious to see how it performs when it's actually being limited to 125 W.

With all that said, I think this is an exciting time in CPU history, with both Intel and AMD pumping out some very interesting architectures - and very compelling ones from two very different points of view.


----------



## fevgatos (Nov 8, 2021)

THU31 said:


> This is progress, you cannot take that away from them. But man, they are years behind AMD when it comes to efficiency.
> 
> And for me the days of chasing the highest performance are long gone. All I care about is efficiency. I have no interest in running a CPU at 200+ watts and a GPU at 300+ watts. No way.
> 
> ...


I think you don't understand what efficiency is. The 12900K is pushed to its absolute limits, made to consume 240 W, way outside its efficiency curve. That doesn't make the CPU inefficient, it just makes the stock configuration inefficient. Likewise, if you push the 5950X to 240 W, it will be inefficient as well. Thankfully the 12900K is an unlocked chip, meaning you can tinker with it.

I've said it 50 times; I'll repeat it once more. Alder Lake CPUs are extremely efficient in 99.9% of productivity or entertainment workloads. They are mainly inefficient in rendering, because of that huge power limit under full load. Thankfully it takes around 5 seconds to lower that power limit to whatever you feel comfortable with. At around 150 W you should only be losing 10% performance at most in those full-load, 100%-pegged Cinebench scenarios.

The upside is that if you want that last 10 or 15%, you CAN push it to 240 or 300 W and get that last drop of performance, which you cannot do with Zen 3, since it hits a wall pretty early. Being able to do that is a positive thing, not a negative.


----------



## THU31 (Nov 8, 2021)

Then why are reviewers showing performance at max power consumption? I tried to find results at 125 W, but I was unsuccessful.

Just look at the review on this site. The 12900K consumes over 100 W more than any AMD CPU.

I cannot estimate performance at a lower TDP, because how could I? Intel wants to show max performance, and that means nothing to me.

And I do not care that you can push AMD CPUs out of spec. The reviews show stock settings at ~125 W real power consumption.


----------



## chrcoluk (Nov 8, 2021)

Valantar said:


> I know, I never said that XMP wasn't OC after all. What generation of Ryzen was that, btw? With XMP currently, as long as you're using reasonably specced DIMMs on a platform with decent memory support, it's a >99% chance of working. I couldn't get my old 3200c16 kit working reliably above 2933 on my previous Ryzen 5 1600X build, but that was solely down to it having a crappy first-gen DDR4 IMC, which 1st (and to some degree 2nd) gen Ryzen was famous for. On every generation since you've been able to run 3200-3600 XMP kits reliably on the vast majority of CPUs. But I agree that I should have added "as long as you're not running a platform with a known poor IMC" to the statement you quoted. With Intel at least since Skylake and with AMD since the 3000-series, XMP at 3600 and below is nearly guaranteed stable. Obviously not 100% - there are always outliers - but as close as makes no difference. And, of course, if memory stability is _that_ important to you, you really should be running ECC DIMMs in the first place.


It's a 2600X on a B450 board.  I had been looking into moving to a newer-gen Ryzen, but the used market is horrible right now, probably due to the problems in the brand-new market.  The BIOS after the one I am using added memory compatibility fixes for Zen+, but since Proxmox is now stable, I decided not to rock the boat.

Also, it is a 4-DIMM setup, and when it was stable on Windows it was with only 2 DIMMs (I should have mentioned that), so take that into account.  The official spec sheets for Zen+ and the original Zen show a huge drop in supported memory speed with 4 DIMMs; if I remember right, the original Zen only officially supports 1866 MHz with 4 DIMMs?

My current 9900K handles the same RAM just fine that my 8600K couldn't manage at XMP speeds; I suspect i5s might have lower-binned IMCs than i7s and i9s.


----------



## AusWolf (Nov 8, 2021)

THU31 said:


> Then why are reviewers showing performance at max power consumption? I tried to find results at 125 W, but I was unsuccessful.


Because 1. that's the new Intel default - or at least that's what the tested motherboards default to, and 2. reviews tend to focus on the best case scenario, that is: one that's not restricted by cooling or power.



THU31 said:


> Just look at the review on this site. The 12900K consumes over 100 W more than any AMD CPU.


That takes a few clicks in the BIOS to change.



THU31 said:


> I cannot estimate performance at a lower TDP, because how could I? Intel wants to show max performance, and that means nothing to me.


That's true, and I sympathise with you on this. Still, by looking at review data you can expect pretty good performance at humanly acceptable power consumption levels, too. By lowering the power target by 50% on an average chip (regardless of manufacturer), you don't get anywhere near a 50% decrease in performance.

By comparison, if I set the power limit on my 2070 to 125 W instead of the default 175 W (71%), I get about 5-7% lower performance. That is nowhere near noticeable without an FPS counter on screen.

Edit: If your main goal is gaming, you most likely won't notice any change by lowering the power target on a 12900K, as games don't push it anywhere near its power limit anyway.
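As a toy illustration of why that works: treat performance as proportional to power raised to a small exponent (the exponent here is an assumption picked to roughly match the 2070 numbers above, not measured data):

```python
# Toy diminishing-returns model of performance vs. power limit.
# Pure illustration: the exponent alpha is an assumed fit, not a
# measurement of any specific chip.
def relative_perf(power_frac: float, alpha: float = 0.15) -> float:
    """Relative performance when running at power_frac of the default limit."""
    return power_frac ** alpha

# Cutting power to 71% of default, as with the 2070 example above:
print(f"{relative_perf(0.71):.1%}")  # roughly 95% of full performance
# Cutting power in half still leaves roughly 90%:
print(f"{relative_perf(0.50):.1%}")
```

The exact curve differs per chip, but the shape is the point: the last watts buy the least performance, which is also why stock 241 W configurations look so bad in efficiency charts.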



THU31 said:


> And I do not care that you can push AMD CPUs out of spec. The reviews show stock settings at ~125 W real power consumption.


Did you read what I wrote above? AMD CPUs do not have a 125 W real power consumption. Their 105 W TDP CPUs have a default power target of 142 W. With PBO and the various "core optimiser" features on certain motherboards, the 5950X easily draws 180-190 W. TDP doesn't mean power with AMD.
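For reference, the default limits line up with a simple rule of thumb, PPT ≈ 1.35 × TDP. That factor is just an observation from the two data points above, not an official AMD formula:

```python
# Rule-of-thumb estimate of AMD's default socket power limit (PPT)
# from the advertised TDP. The 1.35x factor is inferred from the
# 65 W -> 88 W and 105 W -> 142 W pairs quoted above; treat it as
# an approximation, not a spec.
def approx_ppt(tdp_watts: float, factor: float = 1.35) -> int:
    """Estimate the default package power tracking (PPT) limit in watts."""
    return round(tdp_watts * factor)

for tdp in (65, 105):
    print(f"{tdp} W TDP -> ~{approx_ppt(tdp)} W PPT")
```

Which is exactly why comparing Intel's PL2 against AMD's printed TDP, rather than against PPT, makes the gap look bigger than it is.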


----------



## The red spirit (Nov 8, 2021)

Valantar said:


> Or did I say that I don't think a modern PC should necessitate you spending time managing your background processes to any significant degree?


Doesn't matter to me anyway. I clearly said that if you just leave stuff open in the background, that's not a problem; problems arise if you try to do something really stupid, like running Photoshop and playing a game at the same time. Common sense says you shouldn't do something like that, and that's why people don't. 2 E cores are enough for background gunk, unless you are trying to achieve something that you clearly shouldn't.



Valantar said:


> BOINC is an opportunistic "compute sharing" app. Yes, there are people who set up PCs purely for that, but for >99.9999% of users it's not a main workload - and it's a _prioritized_ workload for even fewer, as the core concept is for it to use leftover, unused resources.
> 
> Handbrake is for the most part something run rarely and for relatively short periods of time. Most people don't transcode their entire media library weekly.
> 
> ...


These are workloads that scale well and that I have run for considerable amounts of time. They are far more useful benchmarks than what Anandtech actually tests.

I like BOINC and contribute from time to time. From CPU loads alone I have accumulated over 1 million points, most of which were achieved with very modest chips like an FX 6300, an Athlon X4 845 or even a Turion X2 TL-60. It took me months to achieve that, and my purchase of an i5 10400F so far only makes up 10% of the total effort, despite it being the fastest chip I have. For me that's a very meaningful workload.

Next is Handbrake. You argue that it's only run in short bursts, but I personally found it useful for converting whole seasons of shows, and if you want that done with high quality and a good compression ratio, it can take days. Want to do this for several shows? It can take weeks. Obviously it would be a good trade-off to just use the GPU, but then you can't achieve quality or compression as good, and even then it may take half a day to transcode a whole show. So if someone does this with any frequency, they should think about getting a chip with stronger MT performance, or just buy a fast graphics card or consider a Quadro (now the RTX A series).

Next are VMs. For me, VMs are cool for testing out operating systems, but beyond that, some BOINC projects require running BOINC in VMs, and even projects that aren't exclusively VM-only sometimes give more work units to Linux than to Windows. Then you need RAM and cores, and you can expect some cores to stay permanently pegged at 100% utilization. A CPU with more cores (not threads) lets you still use your computer instead of leaving it working as a server; better yet, you have enough cores to run BOINC natively and in a VM at the same time.

And then we have 7-Zip. I will be brief: if you download files from the internet, you will most likely need it very often, and often for big files. Some games from Steam are compressed and have to be decompressed. You may also use NTFS compression on an SSD.

All in all, depending on the user, MT tasks and their performance can be very important, and to those users they're certainly not a rare need. And if they need to do this professionally, then a faster chip is most likely a financial no-brainer. I personally found that the most demanding tasks are well multithreaded and even then take ages to complete. Just as I thought in 2014 that multithreaded performance was very important, maybe even at the cost of single-threaded performance, so I think today, and today it's even more obvious.




Valantar said:


> Yep. And the FX-series at best performed on par with 4c4t i5s, and was beat soundly by 4c8t i7s (often at much lower power draws). The i7-920 was 4c8t. There were 8c and 6c Nehalem Xeons, but no Core chips above 4c8t - and those were even arguably a proto-HEDT platform due to their 130W TDPs and triple-channel memory. And the Phenom X6 was also quite slow compared to contemporary Core chips, rendering its core advantage rather irrelevant - but some people did indeed prioritize it for running more MT tasks, as AMD delivered better value in those cases.


FX chips were very close to i7s in multithreaded performance, and since they were a lot cheaper (literally 2 or maybe even 3 times cheaper), they were no-brainer chips for anyone seriously interested in those workloads. They were also easy to overclock: as long as you had the cooling, 5 GHz was nothing to them. At nearly 100 USD for an FX 8320, the i7 was a complete no-go.

lol, I made a mistake about the i7 920, but yeah, there were 6C/12T chips available. Maybe the i7 960 was the lowest-end hexa-core. Still, those were somewhat affordable if you needed something like that.

Phenom II X6 chips were great high-core-count chips, and lower-end models like the 1055T were really affordable. If you overclocked one of those, you could have had an exceptional-value rendering rig for cheap. They sure cost a lot less than 4C/8T i7 parts and were seriously competitive against them. Obviously, the later-released FX chips were even better value.

Anyway, my point was that things like high core count chips existed back then and were quite affordable.




Valantar said:


> I'm saying that spending millions of R&D dollars on low-volume products, even in high-margin segments, is often a poor financial choice, and the size of the applicable market is _extremely_ important in whether or not this happens. And as the real-world advantages of HEDT platforms have shrunk dramatically since the launch of Zen1 (8c16t) and then Zen2 (16c32t) for MSDT, making these products is bound to drop ever lower on the list of priorities. The launch of the TR-W series underscores this, as these are essentially identical to Epyc chips, cutting the need for new packaging and developing new trace layouts for a 4000+ pin socket, while also addressing the remaining profitable part of the HEDT market: high end professional workstation users who can make use of tons of threads and/or memory bandwidth.


And that's still better than making the 5950X or 5900X. Consumer platforms are made to be cheap and to cover only a small range of power requirements, so if AMD makes a 5950X, says it's compatible with the AM4 socket and that any board supports it, and then some guy runs it on the cheapest A520 board, it will most likely throttle badly. If they want to avoid lawsuits, they had better limit their CPU range or make motherboard makers produce only more expensive boards, but that's exactly what they can't do, since AM4 is supposed to be a cheap, affordable and flexible platform. Wattage-based marketing from the FX era is seemingly not done anymore, even though it would make perfect sense. Intel really suffers from that, with shit-tier i9 K chips making H510 VRMs burn. I'm surprised they still don't have lawsuits to deal with, considering this is a blatant case of advertising something that can't happen. Anyway, those are the reasons not to make HEDT chips compatible with mainstream sockets. Intel managed that in the Sandy, Ivy and Haswell era, and it was great for consumers. All this bullshit with pushing HEDT chips onto the consumer platform does nothing good for anyone except Intel and AMD.



Valantar said:


> And a 12900K can't be OC'd to hell and back with a phase change cooler? Again, for some reason you're insisting on unequal comparisons to try and make your points. You see how that undermines them, right?


Barely. It's already running obscenely hot and has clocks cranked to the moon, so there's very little potential. I wouldn't overclock it, as it has two types of cores with different voltages, many frequency and voltage stages, plus many power settings in the BIOS. The 12900K is hardly tweakable unless you spend an obscene amount of time on it and then spend weeks, if not months, stability testing it in various loads. That's stupid and makes no sense. Might as well just leave it as it is. The 3970X is not much better than the i9, but at least it has the same type of cores, and the potential to benefit from raised power limits (whatever they are called on the AMD side). The i9-12900K has them set better, therefore less potential for gains.



Valantar said:


> I'm sorry, but you're making it obvious that you have zero idea what you're saying here. SPEC is not a single benchmark, it's a collection of dozens of benchmarks. And it's not "super synthetic", it is purely based on real-world applications. You can read all about the details for every single sub-test here. There is some overlap between the applications used as the basis for SPEC subtests and AT's benchmark suite as well, for example POVray - though of course the specific workloads are different.


Strong disagree, most tasks are super niche and quite synthetic. I wouldn't consider it a realistic test suite. I value practical testing with the most common, widely used software. Anything else may still be practical, but due to its niche nature, it can't honestly be said to be so.



Valantar said:


> .... so, what I was saying all along was actually correct?


To some extent yes




Valantar said:


> No, you're the one failing to grasp that this weaker MT perf _only_ applies in a relatively limited number of use cases. You tried to address this above, yet you listed four workloads. There are clearly more, but very few of them are likely to be significant time expenditures for even an enthusiast user. Hence my point of the ST perf being more important, and the relative importance of the lower MT thus being lower, and the 12900K providing better overall performance even if it does lose out in applications that scale well above 16 cores.


That's going to depend on the person.



Valantar said:


> So ... you agree with what I've been saying, then? Because that's exactly what I've been arguing. (Also, the 3970X is Zen2, not Zen, so it OC's better than TR 1000 and 2000, but it still doesn't OC well due to the massive power requirements of OCing such a massive core and the difficulty of powering and cooling such a dense arch).


Maybe. By Zen I mean Zen as an architecture family, not Zen 1. At this point, I'm not sure if Zen 2 is really that dense. New Intel chips might be denser.



Valantar said:


> But they didn't. Which tells us what? I'd say several things:
> - The HEDT market is much smaller than MSDT, and less important
> - This is reinforced by increasing MSDT core counts, which have taken away one of the main advantages of HEDT over the past decade
> - Launching this on HEDT would leave Intel without a real competitor to the 5950X, making them look bad
> ...


Some points you make can be solved with marketing, like making people see Intel's HEDT platform as a 5950X + Threadripper competitor. And the main reason why HEDT is losing ground is Intel pushing HEDT parts into the mainstream segment (where they arguably don't belong). It's not that HEDT is not important, it's just how business is done by Intel.




Valantar said:


> This makes it seem like you're arguing not on the basis of the actual merits of the products, but rather based on some specific regional distribution deficiencies. Which is of course a valid point in and of itself, but not a valid argument as to the general viability of these chips, nor how they compare in performance or efficiency to other alternatives. You might have less trouble being understood if you make the premises for your positions clearer? I also completely agree that the 5600X is expensive for what it is and that AMD _needs_ to start prioritizing lower end products (preferably yesterday) but at least it has come down in price somewhat in many regions - but the 11400 and 11600 series that Intel launched as a response are indeed better value. Quite the reversal from a few years ago! I'm also looking forward to seeing lower end Zen3 and ADL chips, as we finally have a truly competitive CPU market.


Speaking about regional deals: pretty much since the start of the C19 lockdown in my region (Lithuania) there has been a great shortage of Athlons, quad-core Ryzens, Ryzen APUs in general, and Celerons. The Lithuanian market is now seemingly flooded with i5 10400Fs and i3 10100Fs. Anything Ryzen has a Ryzen tax, seemingly making Intel more competitive here, but in terms of sales, Ryzen is winning, despite having inflated prices and only having like 2-3 different SKUs available per store. Idiots still think that it's better value than Intel. Ironically, the 5950X is seemingly a mainstream chip, as it sells the best. Yet at the same time brand new Pentium 4 chips are sold. Pentium 4s outsell i3 10100Fs and i5 11400Fs. That happened in one store, but it's still incredibly fucked up. In another store, the most sold chip is the 2600X, while the second is the 5950X. That second store doesn't have Pentium 4s, but they have refurbished Core 2 Duos. They don't sell well at all there. In Lithuania most computers sold are local prebuilts or laptops, but DIY builders are going bonkers for some reason.


----------



## fevgatos (Nov 8, 2021)

THU31 said:


> Then why are reviewers showing performance at max power consumption? I tried to find results at 125 W, but I was unsuccessful.
> 
> Just look at the review on this site. The 12900K consumes over 100 W more than any AMD CPU.
> 
> ...


Igor'sLAB has tested with a 125 W PL as well. That results in an insanely efficient 12900K, although I personally wouldn't limit it that low. Around 150-170 W should be the sweet spot for those heavily demanding workloads.









Core i9-12900KF, Core i7-12700K and Core i5-12600 in a workstation test with amazing results and an old weakness | Part 2 | igor'sLAB

> So today I'll get serious and show you where Alder Lake S can really score aside from colorful gaming pixels. Gaming what? Completely overrated if you look at at least some of today's results.

www.igorslab.de


----------



## THU31 (Nov 8, 2021)

fevgatos said:


> Igor'sLAB has tested with a 125 W PL as well. That results in an insanely efficient 12900K, although I personally wouldn't limit it that low. Around 150-170 W should be the sweet spot for those heavily demanding workloads.
> 
> 
> 
> ...



So this time AMD CPUs are pushed way out of spec. They are faster than the 12900K, but their power consumption is also very high.

I dislike this very much. I feel it should be mandatory to test all CPUs in two ways, one observing power limits, the other ignoring them.


----------



## RandallFlagg (Nov 9, 2021)

THU31 said:


> So this time AMD CPUs are pushed way out of spec. They are faster than the 12900K, but their power consumption is also very high.
> 
> I dislike this very much. I feel it should be mandatory to test all CPUs in two ways, one observing power limits, the other ignoring them.



K chips are enthusiast chips, meant to be configured.

You can run them flat out for performance, which is what they're really for, or if efficiency is your thing you can do that too.

12900K matching an M1 Max for efficiency :





12900K power limited to 125W and 241W :





5950X PBO2 (power unlocked), drawing 238W peak, score 28621:
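The efficiency comparison these screenshots are making boils down to points per watt. A trivial helper for reproducing it (only the 28621-points-at-238 W figure is from this post; other configurations would need their own measured score/power pairs):

```python
# Benchmark points per watt of package power, for comparing
# power-limited runs. Only the 5950X PBO2 figure is from this thread;
# plug in your own measurements for the 12900K at 125 W or 241 W.
def efficiency(score: float, watts: float) -> float:
    """Points per watt."""
    return score / watts

print(round(efficiency(28621, 238), 1))  # ~120.3 points/W for the 5950X run
```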


----------



## HenrySomeone (Nov 9, 2021)

The 12900K is looking more and more like a monster performer at 125 W! This means you can easily pair it with pretty much the cheapest board available right now and you'll still get 90-95% of its max performance!


----------



## lexluthermiester (Nov 9, 2021)

HenrySomeone said:


> and you'll still get 90-95% of its max performance!


Maybe even more depending on silicon lottery.


----------



## THU31 (Nov 9, 2021)

RandallFlagg said:


> K chips are meant to be configured, enthusiast chips.
> 
> You can run it flat out for performance, which is what they're really for, or if efficiency is your thing you can do that too.
> 
> 12900K matching an M1 Max for efficiency :


These results are pretty crazy.

This is why I stand by my comment about unifying reviews. These factory overclocks are distorting the actual results, in my view. In the past, reviews showed one default setting, and then you could overclock that any way you wanted.
But now you get these official boost modes; some reviews use them, some do not; it is a mess. Same with reviewing 65 W CPUs on high-end boards and ignoring power limits. Those CPUs will get nowhere near as good performance on entry-level boards, which they are meant to be used with.

For now I will wait to see AMD's response. They have to lower the prices of their CPUs, but they will also introduce the 3D cache models early next year I think. We are looking at some exciting competition.
When I upgrade, I will definitely want high multi-threaded performance because I want to use CPU encoding when streaming. As good as NVENC is, CPU encoding offers better quality with those demanding presets.


----------



## W1zzard (Nov 9, 2021)

THU31 said:


> But now you get these official boost modes; some reviews use them, some do not; it is a mess


Intel is 100% crystal clear on what the default is. If some reviewers choose to underclock or overclock the CPU then it's their own fault? Changing the power limits is just like OC, only playing with a different dial.


----------



## Valantar (Nov 9, 2021)

The red spirit said:


> Doesn't matter to me anyway. I clearly said that if you just leave stuff open in the background, that's not a problem; problems arise if you try to do something really stupid like running Photoshop and playing a game at the same time. Common sense says that you shouldn't be doing something like that, and that's why people don't do that. 2 E cores are enough for background gunk, unless you are trying to achieve something that you clearly shouldn't.


And a) I never brought that up as a plausible scenario, so please put your straw man away, and b) I explained how even with a completely average setup and workload, including a relatively normal amount of common background applications, even 2 E cores can be low enough to cause intermittent issues. Which, given their very small die area requirements, makes four a good baseline. Removing two more gives you room for slightly more than half of another P core. So, a 2P4E die will be _much_ smaller than a 4P2E die in terms of area spent on CPU cores. A bit simplified (the 4 E cores look _slightly_ larger than a P core), but let's say 1 E core w/cache is X area; 1 P core w/cache is 4X area. That makes the 2P4E layout 12X, while the 4P2E layout is 18X - 50% larger.
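The area arithmetic above can be written out as a quick check; the 1:4 E-to-P area ratio is the rough approximation made in this post, not a measured die figure:

```python
# Rough die-area comparison of hybrid core layouts. Assumes the
# approximation from the post above: one E core (with cache) = 1 area
# unit, one P core (with cache) = 4 area units.
E_AREA = 1
P_AREA = 4

def layout_area(p_cores: int, e_cores: int) -> int:
    """Total core-area units for a given P/E core mix."""
    return p_cores * P_AREA + e_cores * E_AREA

area_2p4e = layout_area(2, 4)  # 2*4 + 4*1 = 12 units
area_4p2e = layout_area(4, 2)  # 4*4 + 2*1 = 18 units
print(area_4p2e / area_2p4e)   # 1.5, i.e. the 4P2E layout is 50% larger
```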

As was brought up above there are questions regarding the latency characteristics of a layout like this, but latency benchmarks indicate that things might not be as bad as some might fear.


The red spirit said:


> These are workloads that scale well, and that I have done for a considerable amount of time.


Yes, I never said there weren't. I just said they are relatively few and relatively rare, especially in an end-user usage scenario.


The red spirit said:


> They are far more useful benchmarks than what Anandtech actually tests.


That's your opinion, and as seen below, an opinion that seems rather uninformed.


The red spirit said:


> I like BOINC and contribute from time to time. From CPU loads alone I have accumulated over 1 million points, most of which were achieved with very modest chips like the FX 6300, Athlon X4 845 or even a Turion X2 TL-60. It took me months to achieve that, and my purchase of the i5 10400F so far only makes up 10% of all that effort, despite it being the fastest chip I have. For me that's a very meaningful workload.


Cool for you, I guess? As I said: niche workload, with niche hardware, for niche users. No mainstream or mass-market applicability.


The red spirit said:


> Next is Handbrake. You argue that you only need it for short bursts of time, but I personally found that it's useful for converting whole seasons of shows, and if you want that done with high quality and a good compression ratio, it can take days. Want to do this for several shows? It can take weeks. Obviously it would be a good trade-off to just use the GPU, but then you can't achieve quality or compression as good, and even then it may take half a day to transcode a whole show. So if someone does this stuff with a certain frequency, they should think about getting a chip with stronger MT performance, or just buy a fast graphics card, or consider a Quadro (now RTX A series).


*clears throat* Apparently I have to repeat myself:


Valantar said:


> Most people don't transcode their entire media library weekly.


Which is essentially what you're positing here. And, as you bring up yourself, if this is a relevant workload for you, buy an Intel CPU with QuickSync, an AMD APU or GPU with VCN, or an Nvidia GPU with NVENC. You'll get many times the performance for less power draw, and even a lower cost than one of these CPUs (in a less insane GPU market, that is).

And again: niche workload for niche users. Having this as an occasional workload is common; having this as a common workload (in large quantities) is not.


The red spirit said:


> The next load is VMs. For me VMs are cool for testing out operating systems, but besides that, some BOINC projects require you to run BOINC in VMs, and even projects that aren't exclusively VM-only sometimes give more work to Linux rather than Windows. And then you need RAM and cores, and you can expect to keep some cores permanently pegged at 100% utilization. A CPU with more cores (not threads) allows you to also use your computer, instead of leaving it working as a server.


Wait, you have 100% CPU utilization in your VMs from trying out OSes? That sounds wrong. You seem to be contradicting yourself somewhat here. And again: if your workload is "I run many VMs with heavy multi-core workloads", you're well and truly into high end workstation tasks. That is indeed a good spot for HEDT (or even higher end) hardware, but ... this isn't common. Not even close.


The red spirit said:


> Better yet, you have enough cores to run BOINC and to run BOINC in VM.


A niche within a niche! Even better!


The red spirit said:


> And then we have 7zip. I will be brief: if you download files from the internet, you will most likely need it very often, and often for big files. Some games from Steam are compressed and have to be decompressed. You may also use NTFS compression on an SSD.


I have never, ever, ever heard of anyone needing a HEDT CPU for _decompressing their Steam downloads_. I mean, for this to be relevant you would need to spend far more time downloading your games than actually playing them. Any run-of-the-mill CPU can handle this just fine. Steam decompresses on the fly, and your internet bandwidth is always going to bottleneck you more than your CPU's decompression rate (unless you're Linus Tech Tips and use a local 10G cache for all your Steam downloads). The same goes for whatever other large-scale compressed downloads even an enthusiast user is likely to do as well.


The red spirit said:


> All in all, depending on user, MT tasks and their performance can be very important and to them, certainly not a rare need.


Yes, I have said the whole time this depends on the use case. But you're completely missing the point here: actually seeing a benefit from a massively MT CPU requires you to spend _a lot_ of time on these tasks _every day_, especially when accounting for the high core count CPU being _slower_ for all other tasks. Let's say you use your PC for both work and fun, and your work includes running an MT workload that scales perfectly with added cores and threads. Let's say this workload takes 2h/day on a 3970X. Let's say that workload is a Cinema4D render, which the TR performs well in overall. Going from the relative Cinebench R20 scores, the same job would take 54% more time on the 12900K, or slightly over 3h. That's a significant difference, and the choice of the HEDT CPU would likely be warranted overall, as it would eat into either work hours or possibly free time.

But then let's consider the scenario at hand:
- how many people use a single PC for both rendering workloads this frequent _and_ on their free time?
- how many people render things this large this frequently at all?
- how many people with these needs wouldn't just get a second PC, including the redundancy and stability this would bring? (Or hire time on a render farm?)
- how many people would care if their end-of-the-day render took an extra hour, when they would likely be doing something else (eating dinner or whatever)?
- how many people with this specialized a use-case wouldn't just set the render to run when they go to bed, which renders any <8h-ish render time acceptable?

This is of course not the only use case, and there are other similar ones (compiling etc. - where similar questions would typically be relevant), but ultimately, running these workloads frequently enough and with sufficiently large workloads for this to be a significant time savings, and to make up for the worse performance in general usage? That's quite a stretch. You're looking at a _very_ small group of users.

(You can also see from the linked comparison above that the 12900K significantly outperforms the 3970X in several of your proposed workloads, such as Handbrake video transcoding.)
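The render-time arithmetic in that scenario can be written out explicitly; the 1.54x slowdown factor is this post's reading of relative Cinebench R20 MT scores, and the 2 h baseline is purely hypothetical:

```python
# Scale a job's runtime by a relative-performance factor: a chip that
# is 54% slower in a perfectly scaling MT workload takes 1.54x as long.
def scaled_time(base_hours: float, slowdown_factor: float) -> float:
    """Runtime on the slower chip, given the faster chip's runtime."""
    return base_hours * slowdown_factor

tr_3970x_hours = 2.0                      # hypothetical daily render on a 3970X
i9_12900k_hours = scaled_time(tr_3970x_hours, 1.54)
print(f"{i9_12900k_hours:.2f} h")         # 3.08 h: roughly an extra hour per day
```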


The red spirit said:


> And if they have a need to do it professionally, then a faster chip is most likely a financial no-brainer.


Yes, and in that case they have a workstation workload, and are relatively likely to buy a workstation to do so. That's expensive, but at that point you need the reliability and likely want a service agreement. And at that point HEDT is likely the budget/DIY option, with pre-made workstations (TR-X or Xeon) being the main choice. This of course depends on whether you're a freelancer or working for a larger company etc, but for the vast majority of freelancers anything above a 5950X would be silly levels of overkill.


The red spirit said:


> I personally found that the most demanding tasks are well multithreaded and even then take ages to complete. Just like I thought in 2014 that multithreaded performance was very important, maybe even at the cost of single-threaded performance, so I think today, but today that's even more obvious.


But you're treating all MT performance as if it scales perfectly. It does not. There are many, many real-world applications that fail to scale meaningfully above a relatively low core count, while those that scale massively are overall quite few.


The red spirit said:


> FX chips were very close to i7s in multithreaded performance, and since they were a lot cheaper, literally 2 or maybe even 3 times cheaper, they were no-brainer chips for anyone seriously interested in those workloads.


... again:


Valantar said:


> but some people did indeed prioritize it for running more MT tasks, as AMD delivered better value in those cases.


I already said that. You're arguing as if I'm making black-and-white distinctions here, thereby overlooking _huge_ portions of what I'm saying. Please take the time to actually read what I'm saying before responding.


The red spirit said:


> They were also easy to overclock. As long as you had cooling, 5 GHz was nothing to them; at the nearly 100 USD prices of FX 8320 chips, the i7 was a complete no-go.


But even at those clock speeds they underperformed. That's an FX-8350 at 4.8GHz roughly matching an i7-3770K (stock!) in a workload that scales very well with cores and threads (video encoding), at nearly 3x the power consumption. Is that really a good value proposition?


The red spirit said:


> lol I made a mistake about i7 920, but yeah there were 6C/12T chips available. Maybe i7 960 was the lowest end hexa core. Still, those were somewhat affordable if you needed something like that.


Lowest end hex core was the i7-970, there were also 980, 980X and 990X. And these were the precursor to Intel's HEDT lineup, which launched a year later.


The red spirit said:


> Phenom II X6 chips were great high core count chips, and lower end models like the 1055T were really affordable. If you overclocked one of those, you could have had an exceptional-value rendering rig for cheap. They sure did cost a lot less than i7 4C/8T parts and were seriously competitive against them. Obviously, the later released FX chips were even better value.


Sure, they were good for those very MT-heavy tasks. They were also quite terrible for everything else. Again: niche parts for niche use cases.


The red spirit said:


> Anyway, my point was that things like high core count chips existed back then and were quite affordable.


And quite bad for the real-world use cases of most users. I still fail to see the overall relevance here, and how this somehow affects whether a TR 3970X is a better choice overall than a 12900K or 5950X for a large segment of users. Intel's HEDT customer base mainly came from the absence of high-performing many-core alternatives. There were budget many-core alternatives that beat their low core count MSDT parts, but again their HEDT parts drastically outperformed these again - at a higher cost, of course. Horses for courses, and all that.


The red spirit said:


> And that's still better than making the 5950X or 5900X. Consumer platforms are made to be cheap and support only a small range of power requirements. If they make the 5950X and say it's compatible with the AM4 socket and that any board supports it, and then some guy runs it on the cheapest A520 board, most likely it will throttle badly.


That's nonsense. Any AM4 board needs to be able to run any AM4 chip (of a compatible generation) at stock speeds, unless the motherboard maker has _really_ messed up their design (in which case they risk being sanctioned by AMD for not being compliant with the platform spec). A low end board might not allow you to sustain the 144W boost indefinitely, but the spec only guarantees 3.4GHz, which any board should be able to deliver (and if it doesn't, that is grounds for a warranty repair). If you're not able to understand what the spec sheet is telling you and get the wrong impression, that is on you, not AMD. You could always blame the motherboard manufacturer for making a weak VRM, but then that also reflects on you for being dumb enough to pair a $750 CPU with a likely $100-ish motherboard for what must then be a relatively heavy MT workload.


The red spirit said:


> If they want to avoid lawsuits, they'd better keep their CPU range limited or make motherboard makers produce only more expensive boards, but that's what they can't really do, since AM4 is supposed to be a cheap, affordable and flexible platform.


Wait, lawsuits? What lawsuits? Given that this platform has been out for a year (and much longer than that if you count 16-core Zen2), those ought to have shown up by now if this was an actual problem. Looks to me like you're making up scenarios that don't exist in the real world.


The red spirit said:


> Watt marketing from the FX era is seemingly not done anymore, even if it would make perfect sense.


Because CPUs today have high boost clocks to get more performance out of the chip at stock. A high delta between base and boost clock means a high power delta as well. And TDP (or its equivalents), to the degree that it relates to power draw at all (it doesn't really; that's not how TDP is defined, though it tends to equal the separate rating for guaranteed maximum power draw at sustained base clock), relates to base clock and not boost, which makes this more complicated overall. Having two separate ratings is a much better idea: one for base, one for boost. Intel is onto something here, though I really don't like how they're making "PL1=PL2=XW" the default for K-series SKUs. If you were to mandate a single W rating for CPUs today you'd be forcing one of two things: either leaving performance on the table due to lower boost clocks, or forcing motherboard prices up, as you'd force every motherboard to be able to maintain the full boost clock of even the highest end chip on the platform. Both of these are bad ideas.
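A toy model of the two-rating scheme described above (a burst limit plus a sustained limit) might look like this. The 125 W and 241 W figures are the ones discussed in this thread; the 56 s averaging window is a typical Tau value I'm assuming, and the moving-average enforcement is a heavy simplification of Intel's actual scheme:

```python
# Toy model of dual power limits: the CPU may draw up to PL2 for short
# bursts, but its running-average power must settle at or below PL1.
# Numbers are illustrative, not from any datasheet.
PL1, PL2, TAU = 125.0, 241.0, 56.0   # watts, watts, seconds

def allowed_power(avg_power: float) -> float:
    """Burst to PL2 while the running average is under PL1,
    otherwise clamp to PL1 (simplified from Intel's EWMA scheme)."""
    return PL2 if avg_power < PL1 else PL1

def simulate(seconds: int, demand: float) -> list:
    """Step a sustained `demand`-watt workload through the limiter,
    one second at a time, returning the per-second power draw."""
    avg, history = 0.0, []
    for _ in range(seconds):
        draw = min(demand, allowed_power(avg))
        # exponentially weighted moving average with time constant TAU
        avg += (draw - avg) / TAU
        history.append(draw)
    return history

trace = simulate(300, demand=300.0)
print(trace[0], trace[-1])   # 241.0 125.0: bursts at PL2, settles to PL1
```

The point of the model is simply that a single W rating cannot describe this behavior: the chip legitimately runs at two very different power levels depending on how long the load has been applied.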


The red spirit said:


> Intel really suffers from that, with shit-tier i9 K chips making H510 VRMs burn. I'm surprised they still don't have lawsuits to deal with, considering this is a blatant case of advertising something that can't happen. Anyway, those are the reasons not to make HEDT chips compatible with mainstream sockets.


Yes, and have I ever argued for that? No. A 141W 5950X is not an HEDT CPU, nor is a 125W Intel CPU. Their 240W boost on these chips is quite insane, and I think configuring them this way out of the box is rather desperate, but there's also an argument for the sheer idiocy of pairing a K-SKU chip with a H510(ish) board. If you think you're gaming the system by buying a dirt-cheap motherboard for your high-end CPU and then pelting that CPU with sustained high-power MT workloads, you're only fooling yourself, as you're buying equipment fundamentally unsuited for the task at hand.

I still think the power throttling we've seen on B560 boards (and below) is unacceptable, but that's on Intel for not mandating strong enough VRMs and power profiles, not on the CPUs themselves - CPUs are flexible and configurable in their power and boost behaviour.


The red spirit said:


> Intel in the Sandy, Ivy and Haswell era managed to do that. That was great for consumers. All this bullshit with pushing HEDT chips to the consumer platform does nothing good for anyone, except Intel and AMD.


Except that that era was extremely hostile to consumers, limiting them to too-low core counts and forcing them into buying overpriced and unnecessary motherboards for the "privilege" of having more than four cores. I entirely agree that most consumers don't need 12 or 16 cores, but ... so what? It doesn't harm anyone that these chips are available on mainstream platforms. Quite the opposite.


The red spirit said:


> Barely. It's already running obscenely hot and has clocks cranked to the moon, so there's very little potential. I wouldn't overclock it, as it has two types of cores with different voltages, many frequency and voltage stages, plus many power settings in the BIOS. The 12900K is hardly tweakable unless you spend an obscene amount of time on it and then spend weeks, if not months, stability testing it in various loads. That's stupid and makes no sense. Might as well just leave it as it is. The 3970X is not much better than the i9, but at least it has the same type of cores, and the potential to benefit from raised power limits (whatever they are called on the AMD side). The i9-12900K has them set better, therefore less potential for gains.


The E cores can't be OC'd at all, so ... you don't seem to have even read about the CPU you're discussing? And yes, this runs hot and consumes tons of power, but so does a TR 3970X. There isn't anything significant left in the tank for either of these.


The red spirit said:


> Strong disagree, most tasks are super niche and quite synthetic. I wouldn't consider it a realistic test suite. I value practical testing with the most common, widely used software. Anything else may still be practical, but due to its niche nature, it can't honestly be said to be so.


So ... video encoding, code compilation, 3D rendering, 3D rendering with RT, image manipulation are _more _niche workloads than "running several VMs at 100% CPU"? You're joking, right? Yes, SPEC CPU is also mainly geared towards scientific computation and workstation tasks, but it still represents an overall good mix of ST and MT workloads and is a decent gauge for a platform's mixed use performance - especially as it's open, controllable, and can even be compiled by the person running the workload to avoid hidden biases from the developer's side (unlike similar but closed workloads like GeekBench). Is it perfect? Of course not. What it is is possibly the best, and certainly the most controllable pre-packaged benchmark suite available, and the most widely comparable across different operating systems, architectures and so on. It has clear weaknesses - it's a poor indicator of gaming performance, for example, as there are few highly latency-sensitive workloads in it. But it is neither "super niche" nor synthetic in any way. A benchmark based on real-world applications and real-world workloads literally cannot be synthetic, as the definition of a synthetic benchmark is that it is neither of those things.


The red spirit said:


> To some extent yes


Thank you. Took a while, but we got there.


The red spirit said:


> That's going to depend on the person.


And I've never said that it doesn't. I've argued for what is broadly, generally applicable vs. what is limited and niche - and my issue with your arguments is that you are presenting niche points as if they have broad, general applicability.


The red spirit said:


> Maybe. By Zen I mean Zen as an architecture family, not Zen 1. At this point, I'm not sure if Zen 2 is really that dense. New Intel chips might be denser.


Comparable, at least. But Zen cores are (at least judging by die shots) much smaller than ADL P cores, which makes for increased thermal density.


The red spirit said:


> Some points you make can be solved with marketing, like making people see Intel's HEDT platform as a 5950X + Threadripper competitor. And the main reason why HEDT is losing ground is Intel pushing HEDT parts into the mainstream segment (where they arguably don't belong). It's not that HEDT is not important, it's just how business is done by Intel.


But ... marketing doesn't _solve_ that. It would be an attempt at alleviating that. But if HEDT customers have been moving to MSDT platforms because those platforms fulfill their needs, no amount of marketing is going to convince them to move to a more expensive platform that doesn't deliver tangible benefits to their workflow. And the main reason why HEDT is losing ground is _not_ what you're saying, but rather that AMD's move to first 8 then 16 cores completely undercut the USP of Intel's HEDT lineup, and suddenly we have MSDT parts now that do 90% of what HEDT used to, and a lot of it better (due to higher clocks and newer architectures), and the remaining 10% (lots of PCIe, lots of memory bandwidth) are very niche needs. Arguing for some artificial segregation into MSDT and HEDT along some arbitrary core count (what would you want? 6? 8? 10?) is essentially not tenable today, as modern workloads can scale decently to 8-10 cores, especially when accounting for multitasking, while not going overboard on cores keeps prices "moderate" including platform costs. I still think we'll find far better value in a couple of years once things have settled down a bit, but 16-core MSDT flagships are clearly here to stay. If anything, the current AMD and Intel ranges demonstrate that these products work very well in terms of actual performance in actual workloads for the people who want/need them as well as in terms of what the MSDT platforms can handle (even on relatively affordable motherboards - any $200 AM4 motherboard can run a 5950X at 100% all day every day).


The red spirit said:


> Speaking about regional deals: pretty much since the start of the C19 lockdown in my region (Lithuania) there has been a great shortage of Athlons, quad-core Ryzens, Ryzen APUs in general, and Celerons. The Lithuanian market is now seemingly flooded with i5 10400Fs and i3 10100Fs. Anything Ryzen has a Ryzen tax, seemingly making Intel more competitive here, but in terms of sales, Ryzen is winning, despite having inflated prices and only having like 2-3 different SKUs available per store. Idiots still think that it's better value than Intel. Ironically, the 5950X is seemingly a mainstream chip, as it sells the best. Yet at the same time brand new Pentium 4 chips are sold. Pentium 4s outsell i3 10100Fs and i5 11400Fs. That happened in one store, but it's still incredibly fucked up. In another store, the most sold chip is the 2600X, while the second is the 5950X. That second store doesn't have Pentium 4s, but they have refurbished Core 2 Duos. They don't sell well at all there. In Lithuania most computers sold are local prebuilts or laptops, but DIY builders are going bonkers for some reason.


Low end chips have generally been in short supply globally for years - Intel has been prioritizing their higher priced parts since their shortage started back in ... 2018? And AMD is doing the same under the current shortage. Intel has been very smart at changing this slightly to target the $150-200 market with their 400 i5s, which will hopefully push AMD to competing better in those ranges - if the rumored Zen3 price cuts come true those 5000G chips could become excellent value propositions pretty soon.

That sounds like a pretty ... interesting market though. At least it demonstrates the power of image and public perception. The turnaround in these things in the past few years has been downright mind-boggling, from people viewing AMD at best as the value option to now being (for many) the de-facto choice due to a perception of great performance _and_ low pricing, which ... well, isn't true any more  Public perception is never accurate, but this turnaround just shows how slow it can be to turn, how much momentum and inertia matters in these things, and how corporations know to cash out when they get the opportunity.


----------



## RandallFlagg (Nov 9, 2021)

W1zzard said:


> Intel is 100% crystal clear on what the default is. If some reviewers choose to underclock or overclock the CPU then it's their own fault? Changing the power limits is just like OC, only playing with a different dial



It may be the default within the chip - which doesn't last beyond power on, but their datasheet says PL1 / PL2 / PL3 / PL4 should be set based on the capabilities of the platform (VRMs) and cooling solution.

Processor Thermal Management - 001 - ID:655258 | 12th Generation Intel® Core™ Processors Datasheet, Volume 1 of 2 (edc.intel.com)

----------



## W1zzard (Nov 9, 2021)

RandallFlagg said:


> It may be the default within the chip - which doesn't last beyond power on, but their data sheet says PL1 / PL2 / PL3 / PL4 should be set based on capabilities of the platform (VRMs) and cooling solution.
> 
> 
> 
> ...


Yeah, that same document only talks about 125 W and never mentions the new defaults:

Processor Line Thermal and Power - 009 - ID:655258 | 12th Generation Intel® Core™ Processors


Intel marketing in their presentation was 100% clear PL1=PL2=241 W, I posted their presentation slide a few days ago

I suspect what happened is that someone in marketing really wanted to win Cinebench R23 (which heats up the CPU first and usually runs at PL1 without turbo on Intel), so they pushed for that change last minute
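The PL1/PL2 mechanics being argued about here can be sketched numerically. Intel enforces PL1 against a moving average of package power over a time window (Tau, commonly 56 s on desktop boards); the simplified exponentially weighted model below is an illustration of that idea, not Intel's exact algorithm, and the wattages are just the figures from this thread:

```python
import math

def time_to_pl1_throttle(pl1_w, pl2_w, tau_s, dt=0.1, limit_s=600.0):
    """Simulate a simplified moving-average power budget.

    The CPU draws PL2 until the exponentially weighted average of
    package power reaches PL1. Returns seconds of full turbo before
    throttling, or None if it never throttles within `limit_s`.
    This is an illustrative approximation, not Intel's exact scheme.
    """
    avg = 0.0                       # start from an idle average
    t = 0.0
    decay = math.exp(-dt / tau_s)   # per-step EWMA decay factor
    while t < limit_s:
        avg = avg * decay + pl2_w * (1.0 - decay)
        if avg >= pl1_w:
            return t
        t += dt
    return None

# Old-style 125 W / 241 W with a 56 s window: turbo time is finite.
print(time_to_pl1_throttle(125, 241, 56))   # tens of seconds
# The 241 W / 241 W configuration: the average never exceeds PL1.
print(time_to_pl1_throttle(241, 241, 56))   # None
```

Under these assumed numbers the 125/241 average crosses PL1 after roughly 40 seconds, which is why a benchmark that warms the chip up first ends up measuring PL1 behaviour, while PL1=PL2=241 W sustains full turbo indefinitely.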


----------



## Valantar (Nov 9, 2021)

W1zzard said:


> Yeah that same document only talks about 125 W and never mentions the new defaults
> 
> 
> Processor Line Thermal and Power - 009 - ID:655258 | 12th  Generation Intel®  Core™ Processors
> ...


Ugh, marketing people should never be allowed near a spec sheet.


----------



## The red spirit (Nov 9, 2021)

Valantar said:


> Wait, you have 100% CPU utilization in your VMs from trying out OSes? That sounds wrong. You seem to be contradicting yourself somewhat here. And again: if your workload is "I run many VMs with heavy multi-core workloads", you're well and truly into high end workstation tasks. That is indeed a good spot for HEDT (or even higher end) hardware, but ... this isn't common. Not even close.
> 
> A niche within a niche! Even better!


Well, I used to. At some point I ran 3 machines all day: 2 of the 3 with native Windows BOINC loads, and one with a Linux VM running BOINC loads in both Linux and Windows. I don't do that anymore, but when you start out in crunching, you quickly realize how a generally decent everyday CPU suddenly becomes relatively inadequate. And soon you start to want Opterons or Xeons, and then you realize what rabbit hole you've ended up in.



Valantar said:


> I have never, ever, ever heard of anyone needing a HEDT CPU for _decompressing their Steam downloads_. I mean, for this to be relevant you would need to spend far more time downloading your games than actually playing them. Any run-of-the-mill CPU can handle this just fine. Steam decompresses on the fly, and your internet bandwidth is always going to bottleneck you more than your CPU's decompression rate (unless you're Linus Tech Tips and use a local 10G local cache for all your Steam downloads). The same goes for whatever other large-scale compressed downloads even an enthusiast user is likely to do as well.


That's just one type of decompressing.




Valantar said:


> But even at those clock speeds they underperformed. That's an FX-8350 at 4.8GHz roughly matching an i7-3770K (stock!) in a workload that scales very well with cores and threads (video encoding), at nearly 3x the power consumption. Is that really a good value proposition?


Or did it? The overclocked FX 83xx was slower in the first pass, but faster in the second. Don't forget that FX octa-core chips cost nearly a third of what the i7 did. And that was close to an ideal situation for the i7, as that workload clearly benefited from HT; some workloads take a negative performance hit from HT. And FX has real cores, so the performance of an overclocked FX should have been far more predictable than with the i7. But power consumption... yeah, that was rough. Still, even at stock speeds the FX was close to the i7, and in the second-pass benchmark it beat the i7. Also, FX chips were massively overvolted from the factory: 0.3 V undervolts were achievable on nearly any chip. For an already power-hungry chip, AMD did themselves no favours by setting the voltage so unreasonably high.




Valantar said:


> Sure, they were good for those very MT-heavy tasks. They were also quite terrible for everything else. Again: niche parts for niche use cases.


The Phenom II X6 was decent. It had the single-core performance of first-gen FX chips, which roughly translates to somewhere between a Core 2 Quad and a first-gen Core i series. It was closer to the i7 in that regard than the 3970X is to the 5950X today. And the Phenom II X6 1055T sold for nearly half the price of an i7, so the value proposition was great.




Valantar said:


> That's nonsense. Any AM4 board needs to be able to run any AM4 chip (of a compatible generation) at stock speeds, unless the motherboard maker has _really_ messed up their design (in which case they risk being sanctioned by AMD for not being compliant with the platform spec). A low end board might not allow you to sustain the 144W boost indefinitely, but the spec only guarantees 3.4GHz, which any board should be able to deliver (and if it doesn't, that is grounds for a warranty repair). If you're not able to understand what the spec sheet is telling you and get the wrong impression, that is on you, not AMD. You could always blame the motherboard manufacturer for making a weak VRM, but then that also reflects on you for being dumb enough to pair a $750 CPU with a likely $100-ish motherboard for what must then be a relatively heavy MT workload.



Seems very sketchy; the boards clearly overheated, but I'm not sure if it was just boost that got cut or even below base speed. In the 3900X video the CPU clearly throttled below base clock, and that's a fail by any definition. On the Intel side it's even worse:


The Gigabyte B560 D2V board throttled the i9 way below base speed, and some boards were just so-so. On the Intel Z490 side:


The ASRock Phantom Gaming 4 was only technically not throttling. I guess it's a pass, but in a hotter climate it would be a fail. And that's not really a cheap board; there are many H410 boards with even worse VRMs, which are a complete gamble as to whether they would work with an i9 or not. I wouldn't have much confidence that they would.

All in all, there are plenty of shit claims from motherboard manufacturers. They mostly don't get flak for that because the media only uses mid-tier or high-end boards, but if the media cared about low-end stuff, board makers would face lawsuits. I guess that's an improvement from the AM3+ era, when certain MSI boards melted or caught fire and many low-end boards throttled straight to 800 MHz.




Valantar said:


> Yes, and have I ever argued for that? No. A 141W 5950X is not an HEDT CPU, nor is a 125W Intel CPU. Their 240W boost on these chips is quite insane, and I think configuring them this way out of the box is rather desperate, but there's also an argument for the sheer idiocy of pairing a K-SKU chip with a H510(ish) board. If you think you're gaming the system by buying a dirt-cheap motherboard for your high-end CPU and then pelting that CPU with sustained high-power MT workloads, you're only fooling yourself, as you're buying equipment fundamentally unsuited for the task at hand.


I don't see anything bad about putting a K-series i9 on an H510 board. After all, manufacturers claim they are compatible. If you are fine with fewer features, a lower-end chipset, etc., you may as well not pay for a fancier board. Also, some people upgrade old systems that had an i3 to an i7 later. Today that would be a throttlefest (with an i9). I don't see anything unreasonable about upgrading the CPU later, and I don't think those people deserve to have their VRMs burning.



Valantar said:


> I still think the power throttling we've seen on B560 boards (and below) is unacceptable, but that's on Intel for not mandating strong enough VRMs and power profiles, not on the CPUs themselves - CPUs are flexible and configurable in their power and boost behaviour.


It was literally the same on B460; it's just that HWUB didn't test as many boards and didn't have a video with a sensational title. In fact it may have been even worse in terms of board quality.



Valantar said:


> Except that that era was extremely hostile to consumers, limiting them to too-low core counts and forcing them into buying overpriced and unnecessary motherboards for the "privilege" of having more than four cores. I entirely agree that most consumers don't need 12 or 16 cores, but ... so what? It doesn't harm anyone that these chips are available on mainstream platforms. Quite the opposite.


On the other hand, you could have bought a non-K i5 or i7 and seen it last for nearly a decade with excellent performance. It was unprecedented stagnation, but it wasn't entirely good or bad. Even Core 2 Quad and Phenom II X4 users saw their chips last a lot longer than expected. Game makers made games runnable on that hardware too. Now the core race has restarted, and I don't think we will see chips with a usable lifespan as long as Sandy, Ivy or Haswell. You may say that's good. Maybe for servers and HEDT it is, but for the average consumer it means more unnecessary upgrading.




Valantar said:


> But ... marketing doesn't _solve_ that. It would be an attempt at alleviating that. But if HEDT customers have been moving to MSDT platforms because those platforms fulfill their needs, no amount of marketing is going to convince them to move to a more expensive platform that doesn't deliver tangible benefits to their workflow. And the main reason why HEDT is losing ground is _not_ what you're saying, but rather that AMD's move to first 8 then 16 cores completely undercut the USP of Intel's HEDT lineup, and suddenly we have MSDT parts now that do 90% of what HEDT used to, and a lot of it better (due to higher clocks and newer architectures), and the remaining 10% (lots of PCIe, lots of memory bandwidth) are very niche needs. Arguing for some artificial segregation into MSDT and HEDT along some arbitrary core count (what would you want? 6? 8? 10?) is essentially not tenable today, as modern workloads can scale decently to 8-10 cores, especially when accounting for multitasking, while not going overboard on cores keeps prices "moderate" including platform costs. I still think we'll find far better value in a couple of years once things have settled down a bit, but 16-core MSDT flagships are clearly here to stay. If anything, the current AMD and Intel ranges demonstrate that these products work very well in terms of actual performance in actual workloads for the people who want/need them as well as in terms of what the MSDT platforms can handle (even on relatively affordable motherboards - any $200 AM4 motherboard can run a 5950X at 100% all day every day).


Well, I made a point about VRMs, more RAM channels, more PCIe lanes, etc. HEDT boards were clearly made for professional use, and those who migrated to mainstream are essentially not getting the full experience, minus performance. Is that really good? Or is it just some people pinching pennies and buying on performance alone?



Valantar said:


> Low end chips have generally been in short supply globally for years - Intel has been prioritizing their higher priced parts since their shortage started back in ... 2018? And AMD is doing the same under the current shortage.


Not at all. There used to be Ryzen 3100s, Ryzen 3200G-3400Gs, various Athlons. On the Intel side, Celerons and Pentiums were always available without issues; now they have become unobtainium, well, except the Pentium 4. Budget CPUs have been nearly wiped out as a concept, along with budget GPUs. They don't really exist anymore, but they did in 2018.



Valantar said:


> Intel has been very smart at changing this slightly to target the $150-200 market with their 400 i5s, which will hopefully push AMD to competing better in those ranges - if the rumored Zen3 price cuts come true those 5000G chips could become excellent value propositions pretty soon.


Maybe, but AMD has fanboys; never underestimate fanboys and their appetite for being ripped off.



Valantar said:


> That sounds like a pretty ... interesting market though. At least it demonstrates the power of image and public perception.


Or does it? I find my country's society mind-boggling at times. I was reading comments in a phone store about various phones and found out that the S20 FE is a "budget" phone and that the A52 is basically a poverty phone. Those people were talking about Z Flips and Folds as if they were somewhat expensive but normal, meanwhile the average wage in this country is less than half what the latest Z Flip costs. And yet this exact same country loves to bitch and whine about how everything is bad and how everyone is poor or close to poverty. I really don't understand Lithuanians. It makes me think that buying a 5950X may be far more common than I would like to admit, and that those two stores may be a reasonable reflection of society.



Valantar said:


> The turnaround in these things in the past few years has been downright mind-boggling, from people viewing AMD at best as the value option to now being (for many) the de-facto choice due to a perception of great performance _and_ low pricing, which ... well, isn't true any more  Public perception is never accurate, but this turnaround just shows how slow it can be to turn, how much momentum and inertia matters in these things, and how corporations know to cash out when they get the opportunity.


I generally dislike public perception, but you have to admit that whatever AMD did with marketing was genius. People drank AMD's kool-aid about those barely functional, buggy excuses for CPUs and hyped Zen 1 to the moon, despite it being really similar to how FX launched: focus on more cores, poor single-threaded performance, worse power consumption than Intel. I genuinely thought Ryzen would be FX v2, but no. Somehow buying several kits of RAM to find a compatible one, having the computer turn on once in 10 tries, and overall being inferior to Intel suddenly became acceptable, and not only that, but desirable. Later gens were better, but it was the first gen that built most of Ryzen's reputation. And people quickly became militant against the idea that Intel was still better; some of them would burn you at the stake for saying such heresy. And now people are surprised that Intel is good again, as if Intel hadn't been the clear market leader for half a century.


----------



## THU31 (Nov 9, 2021)

By the way, can you disable the E-cores in the BIOS? Like if you wanted to push for maximum gaming performance with 8C/16T.


----------



## R0H1T (Nov 9, 2021)

The red spirit said:


> I generally dislike public perception, but you have to admit that whatever AMD did with marketing was genius. People drank AMD's koolaid about those barely functional buggy excuses of CPUs and hyped Zen 1 to the moon. Despite it being really similar to how FX launched. Focus on more cores, poor single threaded performance, worse power consumption than Intel. I genuinely thought that Ryzen will be FX v2, but nah. Somehow buying several kits of RAM to find one compatible, having computer turning on once in 10 tries and overall being inferior than Intel suddenly became acceptable and not only that, but desirable. Later gens were better, but it was the first gen that built most of Ryzen's reputation. And people quickly became militant of idea that Intel is still better, some of them would burn you at stake for saying such heresy. *And now people are surprised that Intel is good again, as if Intel was a clear market leader and hasn't been for half century.*


It's mainly those who follow the narrative that many YTers peddle these days or how clickbait sites like WTFtech publish each passing week! Those who've followed the tech landscape relatively closely over the last two decades know AMD was only really a big big deal post 2k, that was for about half a decade till Conroe entered the arena. Then AMD was absent for nearly another decade till Zen materialized, in between there were also *really informed people who thought Intel would lower prices even if AMD went bankrupt because* ~ ARM, well suffice to say that never happened & Intel would never lower prices especially if it was the only x86 CPU maker left. They only started lowering their prices in almost 1.5 decades because they had to, same goes for AMD now!

Basically the public perception isn't made of lies or fantasies but what you could term as *desires*


----------



## RandallFlagg (Nov 9, 2021)

W1zzard said:


> Yeah that same document only talks about 125 W and never mentions the new defaults
> 
> 
> Processor Line Thermal and Power - 009 - ID:655258 | 12th  Generation Intel®  Core™ Processors
> ...



Honestly, this is no different than AMD telling reviewers to use DDR4-3733 / 3800, when what actually ships mostly gets mated with DDR4-3200 C16 at XMP, or worse if it's in an OEM rig.

It's why I always tell people to look at multiple reviews and pay attention to the test configuration. That actually takes work on the reader's part, though.

I personally like to see reviews at three levels -

1. A base "inexpensive" config: "cheap" DDR4-3200 XMP (not the expensive C14 stuff) with recommended base power settings (technically 125/241 for a 12900K) and maybe just XMP enabled.
2. A midrange setting, which I think TPU does and which probably reflects what most enthusiasts will actually do, i.e. DDR4-3600 XMP and maybe some limited power limit tinkering. I actually think PL1/PL2 of 241/241 is too high for this, and 125/241 is too low. Your Noctua is probably good for 180/241. Asus AI Tweaker is a good example here; it will tune your power levels to your ability to cool the chip under the workload you are actually putting it under.
3. A balls-to-the-wall setup (which nobody does outside of a few YouTubers and sometimes computerbase.de, and which means you need the memory, motherboard and settings that put that CPU in its best light). These guys run open-loop coolers and high but stable overclocks.



R0H1T said:


> It's mainly those who follow the narrative that many YTers peddle these days or how clickbait sites like WTFtech publish each passing week! Those who've followed the tech landscape relatively closely over the last two decades know AMD was only really a big big deal post 2k, that was for about half a decade till Conroe entered the arena. Then AMD was absent for nearly another decade till Zen materialized, in between there were also *really informed people who thought Intel would lower prices even if AMD went bankrupt because* ~ ARM, well suffice to say that never happened & Intel would never lower prices especially if it was the only x86 CPU maker left. They only started lowering their prices in almost 1.5 decades because they had to, same goes for AMD now!
> 
> Basically the public perception isn't made of lies or fantasies but what you could term as *desires*



You said : "..*because* ~ ARM"

Don't think you thought that through.

ARM affected Intel far more negatively than AMD ever did.

The desktop is almost dead because of ARM. Mobile totally dominates the chip production market; only 9% of TSMC's production is AMD (and that includes all of AMD: console, desktop, laptop, GPU, server), while the other 91% of their production is fundamentally ARM. And then there are the other fabs, which are all pretty much 100% ARM as far as core type goes.

Really the things AMD and Intel have right now are the desktop \ workstation, laptop productivity, and data center markets.  As a percentage of the overall compute device market, these have shrunk while at the same time growing in absolute terms largely in order to service the ever growing mobile segment.  

I am no ARM advocate at all, but realistically unless something changes ARM will probably destroy the conventional desktop / laptop market.  That will likely happen when thin client computing becomes a viable thing and high speed network bandwidth becomes ubiquitous.  Already seeing it to some degree with GPU being put into the cloud.   I give it about 10 more years.


----------



## R0H1T (Nov 9, 2021)

RandallFlagg said:


> Don't think you thought that through.
> 
> ARM affected Intel far more negatively than AMD ever did.


I did, apparently the ones predicting Intel will lower their desktop/notebook (chips) prices eventually due to ARM becoming popular didn't! Till MS is fully on board with ARM you can kiss this pipedream goodbye, also at the time Intel was pushing double digit billion dollars to enter the mobile market. So I'm not sure if you have the context of that debate.


----------



## MKRonin (Nov 9, 2021)

Why are the Database TPC-C scores so low with E-cores enabled? Perhaps the benchmark hasn't been updated to support ADL yet?


----------



## RandallFlagg (Nov 9, 2021)

R0H1T said:


> I did, apparently the ones predicting Intel will lower their desktop/notebook (chips) prices eventually due to ARM becoming popular didn't! Till MS is fully on board with ARM you can kiss this pipedream goodbye, also at the time Intel was pushing double digit billion dollars to enter the mobile market. So I'm not sure if you have the context of that debate.



I was speaking of things that have already happened, and simply are. There's no pipedream there. I have several times more invested in devices containing ARM than in x86. Most people do and don't even realize it, obviously.


----------



## R0H1T (Nov 9, 2021)

Of course but I'm strictly speaking about desktop/notebook chips Intel sells. Since the ones in servers, or mobiles, are of no relevance to the point I made.


RandallFlagg said:


> I have a couple multiples more invested in devices containing ARM than x86.


Also how much of that cost is the ARM processor vs say the display on it? You're making this unnecessarily complex, the mobile industry isn't just (about) ARM ~ in fact the most expensive components that go in it are usually the displays!


----------



## RandallFlagg (Nov 9, 2021)

R0H1T said:


> Of course but I'm strictly speaking about desktop/notebook chips Intel sells. Since the ones in servers, or mobiles, are of no relevance to the point I made.



You were talking about ARM like it didn't impact or compete with Intel.  It did, it does, and Intel lost most of the mobile and embedded segments to ARM.  

And I think you still don't understand what I'm pointing out.  Let me put it another way -

PC/laptop sales broke a record last year. _That record had stood for 10 years._

So if there had been no ARM, do you think we would have had to wait 10 years and experience a pandemic that sent everyone scrambling for a home PC before the market could break a 10-year-old laptop/desktop sales record?

It's not like the population has been declining or big new populations haven't obtained the means to buy a laptop/desktop - they just aren't buying.  

If as you stated you are talking about PC/Laptop sales only as an enclosed ecosystem, then you are talking about an ecosystem that is slowly dying.   ARM and mobile are slowly crushing it.  That is competition.


----------



## R0H1T (Nov 9, 2021)

Ok, without going into all the specifics here's what the argument was ~

Intel was pouring billions to get into the mobile market, AMD was on the verge of bankruptcy & *Zen was 2-3 years away*.

Now if AMD goes bankrupt at this point in time what happens? The most likely scenario ~

1) Intel monopolizes x86 (traditional PC) market & probably increases pricing. They may or may not have done massive price hikes but we'd certainly not be seeing *8c/16t under $500* by 2017 like from AMD.

2) Intel would likely keep subsidizing their mobile SoCs with the increased profits & margins after AMD went bankrupt; heck, they could essentially pay Apple to keep them on x86 for as long as Apple thought was necessary.

3) Desktops would never tide over to ARM unless MS or Google were subsidizing that push, now with Intel probably paying them both ARM on desktops (viz M1) would likely be a pipedream.


RandallFlagg said:


> It's not like the population has been declining or big new populations haven't obtained the means to buy a laptop/desktop - they just aren't buying.


People today aren't necessarily buying mobiles or tablets as PC replacements, as these markets rarely overlap, but they are supplanting their old PCs in a way. So when we talk about the traditional PC (x86) market, do you really think ARM would be much of a threat, or that Intel would lower their chip prices in this segment? Are you forgetting that without AMD, Intel would be in a much stronger position today?

And that's why we need competition ~ AMD going poof & Intel lowering prices, due to ARM, was & will always be a pipedream!


----------



## W1zzard (Nov 9, 2021)

THU31 said:


> By the way, can you disable the E-cores in the BIOS? Like if you wanted to push for maximum gaming performance with 8C/16T.


Yes, it's an option in the BIOS, you can disable any number of E-Cores, including all of them. You can also disable any number of P-Cores, but at least one P-core has to be active



MKRonin said:


> Why are the Database TPC-C scores so low with E-cores enabled? Perhaps the benchmark hasn't been updated to support ADL yet?


As mentioned in the conclusion, Thread Director/Windows 11 puts MySQL onto the E-Cores and not the P-Cores.
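A hedged workaround sketch: if the scheduler keeps placing a heavy process on the E-cores, you can pin it to the P-cores with a CPU affinity mask. The helper below just builds the bitmask; the "logical CPUs 0-15 = P-cores, 16-23 = E-cores" layout is the typical 12900K enumeration but should be verified on your own system (e.g. in Task Manager), and `mysqld.exe` is only an example target:

```python
def affinity_mask(cpus):
    """Build a CPU affinity bitmask: bit i set = logical CPU i allowed."""
    mask = 0
    for c in cpus:
        mask |= 1 << c
    return mask

# Assumed 12900K layout: 8 hyperthreaded P-cores as logical CPUs 0-15,
# 8 E-cores as logical CPUs 16-23 - verify on your own machine.
P_CORES = range(0, 16)

print(hex(affinity_mask(P_CORES)))  # 0xffff
# Windows:  start /affinity FFFF mysqld.exe
# Linux:    os.sched_setaffinity(pid, P_CORES)
```

This only forces placement for one process; disabling the E-cores in the BIOS, as discussed above, is the blunter system-wide version of the same idea.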


----------



## THU31 (Nov 9, 2021)

W1zzard said:


> Yes, it's an option in the BIOS, you can disable any number of E-Cores, including all of them. You can also disable any number of P-Cores, but at least one P-core has to be active


That is cool! Does that also help with the DRM issues in certain games? I know there is some compatibility option, but I am curious if simply disabling the E-cores does the job.


----------



## W1zzard (Nov 9, 2021)

THU31 said:


> That is cool! Does that also help with the DRM issues in certain games? I know there is some compatibility option, but I am curious if simply disabling the E-cores does the job.


Yeah turning off E cores solves the Denuvo issues


----------



## efikkan (Nov 9, 2021)

Valantar said:


> Just goes to show that high end MSDT is taking over where HEDT used to have its niche. The space between "server/datacenter chip" and "16c24t high clocking new arch MSDT chip" is pretty tiny, both in relevant applications and customers…


Back in the days HEDT used to be mostly about more cores, but as you were saying, these days mainstream has plenty of cores for even _most_ "power users". The issue with mainstream today is IO, something which has become much more important after the arrival of M.2 SSDs. Having the flexibility of running 2-4 SSDs, a GPU, a 10G network card and/or some special cards (for those doing video) quickly exhausts the possibilities of these platforms. These are the typical concerns I hear from other "power users" such as myself, where it feels like there is a hole between mainstream and HEDT.
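The lane squeeze described above is easy to put numbers on. The counts below are typical illustrative figures (Alder Lake exposes 16 + 4 direct CPU lanes; the device list mirrors the build sketched in the post), not exact values for any particular board:

```python
# Rough PCIe lane budgeting for the "power user" build described above.
cpu_lanes = 16 + 4           # Alder Lake: x16 GPU slot + x4 for one M.2

# Common lane widths for each device (illustrative assumptions).
devices = {
    "GPU": 16,
    "NVMe SSD #1": 4,
    "NVMe SSD #2": 4,
    "NVMe SSD #3": 4,
    "10G NIC": 4,
    "capture card": 4,
}

needed = sum(devices.values())
print(f"lanes wanted: {needed}, direct CPU lanes: {cpu_lanes}")
```

Everything beyond the direct CPU lanes hangs off the chipset and shares its uplink, so even when every device physically fits, concurrent heavy IO can bottleneck on that shared link - which is exactly the gap between mainstream and HEDT being described.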



Valantar said:


> XMP is generally 100% stable though, unless you buy something truly stupidly fast. Of course you should always do thorough testing on anything mission critical, and running JEDEC for that is perfectly fine - but then you have to work to actually find those DIMMs in the first place.


I think you are missing the point.
XMP sets a profile which the memory is capable of, but not necessarily the memory controller. Most of the stability issues people are having are actually related to their CPU sample, and to the fact that the memory controller (like any other silicon) degrades over time, depending on use etc.
So no, choosing an XMP profile supported by the motherboard and the memory module doesn't guarantee stability; it may not even complete POST. And over time it will become more unstable. Some motherboards may gradually turn down the clock speed or revert to a lower profile after system crashes or failed POSTs, and the end user may not even know it's happening.


----------



## The red spirit (Nov 9, 2021)

R0H1T said:


> It's mainly those who follow the narrative that many YTers peddle these days or how clickbait sites like WTFtech publish each passing week! Those who've followed the tech landscape relatively closely over the last two decades know AMD was only really a big big deal post 2k, that was for about half a decade till Conroe entered the arena. Then AMD was absent for nearly another decade till Zen materialized, in between there were also *really informed people who thought Intel would lower prices even if AMD went bankrupt because* ~ ARM, well suffice to say that never happened & Intel would never lower prices especially if it was the only x86 CPU maker left. They only started lowering their prices in almost 1.5 decades because they had to, same goes for AMD now!
> 
> Basically the public perception isn't made of lies or fantasies but what you could term as *desires*


Actually, AMD has been a big deal since forever. AMD used to make parts similar to Intel's, just later; they clocked them higher, sold them cheaper, and competed that way. Usually they were strong competitors. Around the K5 era, Intel had enough and patent-trolled AMD away, so AMD had to create their own IP and develop their own architecture, as well as boards. And while K5 was mostly a loss for AMD, they started making strides with K6. It flopped, but once they improved it with the K6-II and started to push 3DNow! instructions, they became strong in the market. When they moved to the K7 architecture, AMD launched the Athlon chips, the fastest CPUs around, and soon won the 1 GHz race. At that time AMD was doing exceptionally well and crushing Intel. They later made Athlon XP chips, which were much cheaper, vastly improved and gained another GHz, but weren't faster than Pentium 4s. Still, with Intel taking loads of flak for the lackluster launch of the Pentium 4, AMD was perceived very well, and then came the K8 architecture. It was vastly improved, with slightly higher clocks, great thermals and power consumption; K8 cores now supported 64-bit software and more than 4 GB of RAM, while still remaining affordable. Intel only managed to raise clock speeds and meddle with RAMBUS and FSB speeds to keep the Pentium 4 on life support, meanwhile AMD was beating them on literally every front: power use, thermals, performance, value. Only in cache-heavy tasks was AMD a bit behind, but in everything else it was a slaughter of the Pentium 4. Yet AMD was not selling many of those Athlons, due to Intel's OEM exclusivity agreements and very heavy advertising of the Pentium 4 hyping up its clock speed. Then AMD came out with the Athlon 64 FX, a fully unlocked, 3 GHz server-derived chip made for consumers. It was very fast and outcompeted the Pentium 4 "Emergency Edition", but at a $1,000 price and with the requirement of a server board, it failed to gain traction.
Still, AMD soon came out with first consumer grade dual core chips, the Athlon 64 X2s. Intel had nothing to compete against AMD and had to glue together two Pentium 4s together. One Pentium 4 was dying from heat (https://arstechnica.com/civis/viewtopic.php?t=752419) and had awful thermals and horrendous power usage, why not put two of these into one chip? What a swell idea it was, until reviews came out and proved that Pentium D was slow, ran awfully hot and literally warped motherboards from heat. Intel only regained traction with later Core 2 Duo release. So all in all, AMD was always a competent rival with worst beating of Intel actually being around 1999/2000 and then going strong until 2005. Still, it's not like AMD made crap until Ryzen either. There were underwhelming releases from them, but generally AMD was not much behind Intel. In K10 era, AMD was big on value and managed to sell quad core chips for as little as 100 dollars, while this previously was only possible for over 300 dollars and you could overclock them too. Soon they came out with 6 core chips and managed to sell them at 200 dollars. Not so cheap, but way better than over 500 dollars for Intel one. And with FX chips AMD managed to sell a six core chips for a price of i3. And 8 core chips for a price on higher end i3. That really doesn't sound awful to me. AMD also tried to create some hype with beating world records of overclocking with Phenom II chips and with FX chips, which are still unbeaten today. On more sane side, they made APUs, which could run AAA games at respectable resolutions and nearly 60 fps, which was unprecedented for single chip or integrated graphics in general. They managed to change perception of iGPU being display adapter to a budget gaming powerhouse. AMD was also making very competent graphics cards at the time with Radeon HD 6000 series being strong competitor to nVidia. And nVidia only had Thermi (aka Fermi) cards out, which ran obscenely hot. 
Even worse nVidia launched second generation of Thermi, called Thermi 2.0 with very modest improvements. Flagship Radeon HD 6950 card consumed literally twice less power than Thermi based GTX 580. So AMD only didn't have a prestige due to their best CPUs not quite measuring up to Intel chips, but they weren't much behind and were unbeatable at budget. Despite all that and despite good times, AMD historically managed to sell chips at low prices (except flagships), it is only with Ryzen that AMD went full wanker with pricing and I'm glad that they finally got a kick in balls to lower prices a bit. To put things in perspective, they sold 6 core chips in 2012 for 120 dollars and 8 core chips for 160 dollars. At launch 5600X sold at 300 dollars, while being physically smaller die chip. It's really unbelievable how overly inflated prices of AMD chips became and mostly due to hype, which was partially meritless.


----------



## Caring1 (Nov 10, 2021)

The red spirit said:


> Actually AMD became big deal also since forever. Since AMD used to make similar parts to Intel, but later.
> 
> ...


My brain just had a fit looking at that wall of text.
I can't even.


----------



## The red spirit (Nov 10, 2021)

Caring1 said:


> My brain just had a fit looking at that wall of text.
> I can't even.


Well, I'm disappointed then. A short summary of that would be that AMD has always been cool too, even in the pre-Ryzen era and pre-2000.


----------



## AusWolf (Nov 10, 2021)

The red spirit said:


> Well, I'm disappointed then. A short summary of that would be that AMD also has always been cool, even in pre Ryzen era and pre year 2000.


An even better summary (in my point of view) is that AMD has always been good at offering competitive performance to Intel, at usually lower prices and with lower thermals. They only messed up the FX series, which took them a good 5-6 years to get out of, while Intel was riding a high horse with their 4-core chips. Now that Ryzen is good, AMD receives so much unnecessary hype that they can afford to ride the same high horse that Intel did. Now that Alder Lake is good too, AMD hopefully gets the message and lowers prices to competitive levels.

In my opinion, a lot of AMD's hype comes from the "underdog effect". Though I agree that new Ryzen is good, it's good in a very different way than Intel is good. This is the kind of competition and diversification that I like to see.


----------



## lexluthermiester (Nov 10, 2021)

THU31 said:


> By the way, can you disable the E-cores in the BIOS?


You should be able to. And if you're running Windows 10 and have problems with glitches or bugs, it might help. The P-cores are the shining part of Alder Lake anyway, so losing the E-cores won't hurt your overall performance much. However, if you're on Windows 11, leave them on.



THU31 said:


> That is cool! Does that also help with the DRM issues in certain games?


That is the general consensus. However, Denuvo is the overwhelming offender here and most devs are making patches to remove it.


----------



## Valantar (Nov 10, 2021)

lexluthermiester said:


> You should be able to. And if you're running Windows 10 and have problems with glitches or bugs, it might help. The P-cores are the shining part of Alder Lake anyway, so losing the E-cores won't hurt your overall performance much. However, if you're on Windows 11, leave them on.
> 
> 
> That is the general consensus. However, Denuvo is the overwhelming offender here and most devs are making patches to remove it.


There seems to be a (potentially very cool, or very janky, depending on perspective) BIOS-level fix for this. Don't know if it's on current BIOSes or upcoming, but they've announced a toggle ("Legacy game compatibility mode" or something) that when enabled lets you press Scroll Lock to park and effectively disable the E cores on the fly, avoiding Denuvo issues and the like. I'm a bit shocked that this doesn't hard crash the system, but I guess it automatically migrates all threads off of the cores before shutting them down?


AusWolf said:


> An even better summary (in my point of view) is that AMD has always been good at offering competitive performance to Intel, at usually lower prices and with lower thermals. They only messed up the FX series, which took them a good 5-6 years to get out of, while Intel was riding a high horse with their 4-core chips. Now that Ryzen is good, AMD receives so much unnecessary hype that they can afford to ride the same high horse that Intel did. Now that Alder Lake is good too, AMD hopefully gets the message and lowers prices to competitive levels.
> 
> In my opinion, a lot of AMD's hype comes from the "underdog effect". Though I agree that new Ryzen is good, it's good in a very different way than Intel is good. This is the kind of competition and diversification that I like to see.


The underdog effect has _a lot_ to do with AMD's current success, that's for sure. And yes, AMD has historically been everywhere from competitive to clearly ahead to clearly behind, but they have always been a much smaller company than Intel, with all the competitive drawbacks that entails - including (but not limited to) various illegal anticompetitive behaviour on Intel's part. That "Dell is the best friend money can buy" quote from internal Intel communications that came to light in one of the antitrust cases summarizes things pretty eloquently IMO.

There's one major factor that hasn't been mentioned though: the size of the total PC market and how it affects things. Intel's late-2000s performance advantage with Core, combined with the tail end of their various dubiously legal exclusivity deals, arrived right at the point where a) laptops became properly viable for everyday use, and b) the internet had become properly commonplace. So Intel came into a rapidly expanding market (lower prices per unit, but dramatically higher volume) while also rapidly becoming the only viable server alternative. They had some damn good luck with their timing there, and IMO that contributed quite a bit to their dominance in the 2010s - not just that Bulldozer and its ilk were bad. Now, it was pretty bad, but in a less drastically unequal competitive situation things could have evened out in some way. Instead AMD nearly went bankrupt, though thankfully they stayed alive thanks to their GPU division still churning out great products. As such, I guess we can thank ATI, and AMD's decision to buy them, for the competitive and interesting CPU market we have today. And I don't think we would have gotten the underdog effect without the GPUs either - their ability to deliver good-value GPUs likely contributed significantly to that.



The red spirit said:


> Actually AMD became big deal also since forever. Since AMD used to make similar parts to Intel, but later.
> 
> ...


Paragraphs, man. Paragraphs!


efikkan said:


> I think you are missing the point.
> XMP is about setting a profile which the memory is capable of, but not necessarily the memory controller. Most of the stability issues people are having are actually related to their CPU sample, and the fact that the memory controller (like any other silicon) degrades over time, depending on use etc.
> So no, choosing an XMP profile supported by the motherboard and the memory module doesn't guarantee stability; it may not even complete POST. And over time it will become more unstable. Some motherboards may gradually turn down the clock speed or revert to a lower profile after system crashes or failed POSTs, and the end user may not even know it's happening.


You're right about that, but my point was based on current reality, which is that both major vendors have had IMCs for at least two generations that can reliably handle XMP settings at 3600 or higher. I don't believe I've ever heard of a recent-ish Intel or Zen2/3 CPU that can't handle memory at those speeds. Zen and Zen+ on the other hand ... nah. My 1600X couldn't run my 3200c16 kit at XMP, and saw intermittent crashes at anything above 2933. But AMD IMCs have improved massively since then. So while you're right on paper, reality is that any current CPU is >99% likely to handle XMP just fine (given that, as I've said all the time, you stick to somewhat reasonable speeds).


chrcoluk said:


> It's a 2600X on a b450 board.  I had been looking into going to a newer gen Ryzen but the used market is horrible right now probably due to the problems in the brand new market.  The bios after the one I am using added memory compatibility fixes for zen+ but since proxmox is now stable, I decided to not rock the boat.
> 
> Also it is a 4-DIMM setup, and when it was stable on Windows it was only 2 DIMMs (should have mentioned), so take that into account.  The official spec sheets for Zen+ and the original Zen show a huge supported memory speed drop for 4 DIMMs; if I remember right, original Zen only officially supports 1866 MHz for 4 DIMMs?
> 
> My current 9900K handles the same RAM at XMP speeds fine that my 8600K couldn't manage. I suspect i5s might have lower-binned IMCs vs i7 and i9.


Yeah, that makes sense. Zen and Zen+ IMCs were pretty bad, and 2dpc definitely didn't help. If those DIMMs had been dual rank as well, you might not have been able to boot at all. Thankfully AMD stepped up their IMC game drastically with Zen2 and beyond, and hopefully they're able to keep that up for DDR5.


The red spirit said:


> Well, used to. At some points I ran 3 machines all day, 2 out of 3 with native Windows BOINC loads and one with a Linux VM and BOINC loads in both Linux and Windows. I don't do that anymore, but when you start out in crunching, you quickly realize how a generally decent everyday CPU suddenly becomes relatively inadequate. And soon you start to want Opterons or Xeons, and then you realize what rabbit hole you've ended up in.


I don't doubt it, but that's definitely niche territory 


The red spirit said:


> That's just one type of decompressing.


Yes, obviously, but my point was that compression/decompression workloads tend to be intermittent and relatively short-lived (unless you're working on something that requires on-the-fly decompression of some huge dataset, in which case you're looking at a much more complex workload to begin with). They might be tens or even hundreds of GB, but that doesn't take all that much time with even a mid-range CPU.


The red spirit said:


> Or did it? Overclocked FX 83xx was slower at first pass, but faster at second pass.


I said "roughly even", didn't I? Slower in one, faster in the other, that's roughly even to me.


The red spirit said:


> Don't forget that FX octa-core chips cost nearly 3 times less than an i7. And that was close to an ideal situation for the i7, as that workload clearly benefited from HT; some workloads take a negative performance hit from HT. And FX has real cores, therefore the performance of an overclocked FX should have been far more predictable than with an i7. But power consumption... yeah, that was rough. But still, even at stock speeds FX was close to the i7, and in the second-pass benchmark it beat the i7. Also, FX chips were massively overvolted from the factory; 0.3 V undervolts were achievable on nearly any chip. Despite it being the more power-hungry chip, AMD did themselves no favours by setting the voltage so unreasonably high.


Yeah, they were good if you had a low budget and workloads that were specifically MT-heavy and integer-only (with that shared FP unit per two cores making it effectively half the core count in FP loads). So, yes, they were great value for very specific workloads, but they were quite poor overall - and while this gave them a good niche to live in, the lack of flexibility is part of why Intel's perceived advantage grew so huge (in addition to the inefficiency and thus lack of scaling, of course).


The red spirit said:


> Phenom II X6 was decent. It had the single-core performance of first-gen FX chips, which roughly translates to somewhere between a Core 2 Quad and a first-gen Core i series. It was closer to the i7 in that regard than the 3970X is to the 5950X today. And the Phenom II X6 1055T sold for nearly half the price of an i7, so the value proposition was great.
> 
> Seems very sketchy; boards clearly overheated, but I'm not sure if it's just boost that got cut, or even below base speed. In the 3900X video the CPU clearly throttled below base clock; that's a fail by any definition. On the Intel side it's even worse:
> 
> ...


Hm, thanks for linking those. Seems the board partners have been even worse than I thought - and, of course, Intel deserves some flak for not ensuring that they meet the requirements of the platform. This is simply not acceptable IMO.


The red spirit said:


> I don't see anything bad about putting an i9 K on an H510 board. After all, manufacturers claim they are compatible. If you are fine with fewer features, a lower-end chipset etc., you may as well not pay for a fancier board. Also, some people upgrade old systems which had an i3 to an i7 later. Today that would be a throttlefest (with an i9). I don't see anything unreasonable about upgrading the CPU later, and I don't think those people deserve to have their VRMs burning.


In theory you're right, but if you're buying a high end CPU for a heavy, sustained workload, you really should also buy something with a VRM to match. If not, you're making a poor choice, whether that is due to ignorance or poor judgement. That could still be a H510 board of course, it's just that nobody makes a H510 board with those kinds of VRMs. That doesn't take away the responsibility of Intel and board manufacturers to ensure they can meet spec - i.e. every board should be able to handle PL1 power draws indefinitely for the most power hungry part on the platform, and should also handle PL2 reasonably well as long as conditions aren't terrible (i.e. there is some ventilation cooling the VRMs and ambient temperatures are below 30°C or thereabouts). The issue, and why I said you shouldn't buy that combo, is that all the H510 boards are bargain-basement boards that generally can't be expected to do this, and are almost guaranteed to leave performance on the table - which undermines the "I only need CPU perf, not I/O" argument for going that route in the first place.


The red spirit said:


> On the other hand, you could have bought a non-K i5 or i7 and seen it last for nearly a decade with excellent performance. It was unprecedented stagnation, but it wasn't entirely good or bad. Even Core 2 Quad or Phenom II X4 users saw their chips last a lot longer than expected. Game makers made games runnable on that hardware too; now the core race has restarted, and I don't think we will see chips with usable lifespans as long as Sandy, Ivy or Haswell. You may say that's good. Maybe for servers and HEDT it is, but for the average consumer that means more unnecessary upgrading.


*raises hand* I used my C2Q 9450 from 2008 till 2017 
You're entirely right that these chips lasted a _long_ time. However, a huge part of that is that MSDT platforms went quite rapidly from 1 to 4 cores, and then stopped for a decade, meaning software wasn't made to truly make use of them. HEDT platforms had more cores, but were clocked lower, meaning you had to OC them to match the ST perf of MSDT parts - and still topped out at 6 or 8 cores. Now that we've had 8+ core MSDT for four years we're seeing a lot more software make use of it, which is great as it actually makes things faster, rather than the <10% perf/year ST improvements we got for most of the 2010s.


The red spirit said:


> Well, I made a point about VRMs, more RAM channels, more PCIe lanes etc. HEDT boards were clearly made for professional use, and those who migrated to mainstream are essentially not getting the full experience, minus performance. Is that really good? Or is it just some people pinching pennies and buying by performance only?


There's definitely a point there, but the shrinking of the HEDT market shows us that the most important thing for most of these customers has been CPU performance, with I/O being secondary. And of course PCIe 4.0 and current I/O-rich chipsets alleviate that somewhat, as you can now get a lot of fast storage and accelerators even on a high-end MSDT platform. X570 gives you 20 native lanes of PCIe 4.0 plus a bunch off the chipset, and there are _very_ few people who need more than a single GPU/accelerator + 3-4 SSDs. They exist, but there aren't many of them - and at that point you're likely considering going for server hardware anyhow.


The red spirit said:


> Not at all, there used to be Ryzen 3100s, Ryzen 3200G-3400Gs, various Athlons. On Intel side Celerons and Pentiums were always available without issues, now they became unobtanium, well except Pentium 4 . Budget CPUs are nearly wiped out as a concept, along with GPUs. They don't really exist anymore, but they did in 2018.


That's quite different from what I'm used to seeing across these and other forums + in my home markets. Here, Celerons and Pentiums have been nearly nonexistent since ... I want to say 2017, but it might have been 2018, when Intel's supply crunch started. Ryzen 3000G chips have come and gone in waves, being out of stock for months at a time, particularly the 3000G and 3200G, and the Ryzen 3100, 3300 and 3500 have been nearly unobtainable for their entire life cycle. These chips seem to have been distributed almost entirely to OEMs from what I've seen, but clearly there are regional differences.


The red spirit said:


> Maybe, but AMD has fanboys, never underestimate fanboys and their appetite for being ripped off.


Oh, absolutely. But that just underscores what a massive turnaround this has been. They've always had a small core of fans, but that suddenly turning into a dominant group is downright shocking.


The red spirit said:


> Or is it? I find my country's society mind boggling at times. I was reading comments in phone store about various phones and found out that S20 FE is "budget" phone and that A52 is basically poverty phone. Those people were talking about Z Flips and Folds as if they were somewhat expensive but normal, meanwhile average wage in this country is more than 2 times lower than what latest Z flip costs. And yet this exactly same country loves to bitch and whine how everything is bad, how everyone is poor or close to poverty. I really don't understand Lithuanians. It makes me think that buying 5950X may be far more common than I would like to admit and that those two stores may be a reasonable reflection of society.


Hm, that sounds ... yes, mind-boggling. At least from that description it sounds like a society with a significant fixation on wealth and prestige - not that that's uncommon though. Still, in Norway and Sweden too we've gone from a flagship phone at 4,000-5,000 NOK being considered expensive a decade ago to 10,000-15,000 NOK flagships selling like hotcakes these days. Of course people's usage patterns and dependency on their phones have changed massively, so it makes some sense for them to be valued higher and for people to be willing to spend more on them, but that's still a staggering amount of money IMO. PC parts haven't had quite the same rise, possibly due to being more of an established market + more of a niche. There have always been enthusiasts buying crazy expensive stuff, but there's a reason why the GTX 1060 is still the most used GPU globally by a country mile - cheaper stuff that performs well is still a much easier sell overall.


The red spirit said:


> I generally dislike public perception, but you have to admit that whatever AMD did with marketing was genius. People drank AMD's koolaid about those barely functional buggy excuses of CPUs and hyped Zen 1 to the moon. Despite it being really similar to how FX launched. Focus on more cores, poor single threaded performance, worse power consumption than Intel. I genuinely thought that Ryzen will be FX v2, but nah. Somehow buying several kits of RAM to find one compatible, having computer turning on once in 10 tries and overall being inferior than Intel suddenly became acceptable and not only that, but desirable. Later gens were better, but it was the first gen that built most of Ryzen's reputation. And people quickly became militant of idea that Intel is still better, some of them would burn you at stake for saying such heresy. And now people are surprised that Intel is good again, as if Intel was a clear market leader and hasn't been for half century.


I think the promise of Intel HEDT-like performance (largely including the ST performance, though of course you could OC those HEDT chips) was much of the reason here, as well as the 5+ year lack of real competition making an opening for a compelling story of an up-and-coming company. If nothing else, Zen was _exciting_, as it was new, different, and competitive. You're mistaken about the power consumption though: Zen was very efficient from the start. That was perhaps the biggest shock - that AMD went from (discontinued) 250+W 8c CPUs that barely kept up with an i5 to ~100W  8c CPUs that duked it out with more power hungry Intel HEDT parts, and even competed on efficiency with Intel MSDT (though their ST perf was significantly behind).

I also think the surprise at "Intel being good again" is mainly due to Intel first not delivering a meaningful performance improvement for a solid 4 years (incremental clock boosts don't do much), and failing to get their new nodes working well. Rocket Lake underscored that with an IPC regression in some workloads - though that was predictable from the disadvantages of backporting a 10nm design to 14nm. Still, this is the first time in more than half a decade that Intel delivers _true_ improvement in the desktop space. Their mobile parts have been far better (Ice Lake was pretty good, Tiger Lake was rather impressive), but they've had clear scaling issues going above 50-ish watts and the very short boost times of laptops. They've finally demonstrated that they haven't lost what made them great previously - it just took a _long_ time.


----------



## Space Lynx (Nov 10, 2021)

I really appreciate W1zzard's last sentence of the review, "Just to provide a little bit of extra info on why I gave "Recommended" to the 12900K, but "Editor's Choice" to the 12700K and 12600K. I feel like the super-high power limit on the 12900K is just pushing things too high, probably for the sake of winning Cinebench, with limited gains in real-world usage, and too much of a toll on power and heat."

This type of clarity was missing in past reviews on this site and it is much appreciated. More transparency on the awards is a good thing. I agree with this last sentence 100% as well. The 12700K seems to be the sweet spot if you want the best of the best without compromising your humanity.


----------



## The red spirit (Nov 10, 2021)

AusWolf said:


> An even better summary (in my point of view) is that AMD has always been good at offering competitive performance to Intel, at usually lower prices and with lower thermals. They only messed up the FX series, which took them a good 5-6 years to get out of, while Intel was riding a high horse with their 4-core chips. Now that Ryzen is good, AMD receives so much unnecessary hype that they can afford to ride the same high horse that Intel did. Now that Alder Lake is good too, AMD hopefully gets the message and lowers prices to competitive levels.
> 
> In my opinion, a lot of AMD's hype comes from the "underdog effect". Though I agree that new Ryzen is good, it's good in a very different way than Intel is good. This is the kind of competition and diversification that I like to see.


I frankly still don't like Ryzen. I think the K10 era was the best, because AMD had tons of SKUs. Want a single core? Buy a Sempron. Want a triple core? Phenom II X3 it is. Want power-saving chips? Buy the e CPUs. Want a 100 EUR quad core? Athlon II X4 is there. Want an excellent gaming chip? Get a Phenom II X4. Want a crunching monster? Get the Phenom II X6 1100T. Want the same but cheaper? Get the 1055T. Don't even have an AM3 board? AM2+ is fine. Got an FX chip with an AM3+ board? You can still use all Athlon IIs, Phenom IIs and Semprons and maybe K10 Opties too. I loved that diversity and it was fun to find less common models. Too bad I was too young to get into computers then, but if that weren't the case, AMD was the ultimate value provider. Now they only have a few SKUs for each person and AMD sure as hell doesn't offer super cheap, but downclocked 16-core chips. If you are cheap, your only options are Ryzen 3s, and even then they are overpriced. Athlons now suck. And every single chip is focused on performance, meaning it's clocked as high as it can go. There is no real e chip availability. And now the equivalent of those old Athlon IIs would be octa-core chips with maybe cut-down L3 cache (Athlon IIs had no L3 cache, that was a Phenom II exclusive feature). There's no way now that AMD would sell an 8C/16T chip at 100 EUR/USD, or one with low heat output. There just aren't deals like that anymore; value AMD has been dead since FX got discontinued. Ryzen is the antithesis of value. First gen Ryzen was kinda cool, but nothing beyond that.


----------



## Valantar (Nov 10, 2021)

The red spirit said:


> I frankly still don't like Ryzen. I think the K10 era was the best, because AMD had tons of SKUs. Want a single core? Buy a Sempron. Want a triple core? Phenom II X3 it is. Want power-saving chips? Buy the e CPUs. Want a 100 EUR quad core? Athlon II X4 is there. Want an excellent gaming chip? Get a Phenom II X4. Want a crunching monster? Get the Phenom II X6 1100T. Want the same but cheaper? Get the 1055T. Don't even have an AM3 board? AM2+ is fine. Got an FX chip with an AM3+ board? You can still use all Athlon IIs, Phenom IIs and Semprons and maybe K10 Opties too. I loved that diversity and it was fun to find less common models. Too bad I was too young to get into computers then, but if that weren't the case, AMD was the ultimate value provider. Now they only have a few SKUs for each person and AMD sure as hell doesn't offer super cheap, but downclocked 16-core chips. If you are cheap, your only options are Ryzen 3s, and even then they are overpriced. Athlons now suck. And every single chip is focused on performance, meaning it's clocked as high as it can go. There is no real e chip availability. And now the equivalent of those old Athlon IIs would be octa-core chips with maybe cut-down L3 cache (Athlon IIs had no L3 cache, that was a Phenom II exclusive feature). There's no way now that AMD would sell an 8C/16T chip at 100 EUR/USD, or one with low heat output. There just aren't deals like that anymore; value AMD has been dead since FX got discontinued. Ryzen is the antithesis of value. First gen Ryzen was kinda cool, but nothing beyond that.


You're disregarding some pretty major factors here though: those old chips were made on relatively cheap production nodes. Per-wafer costs for recent nodes are easily 2x what they were just 1-2 generations back, and many times the cost of older nodes (yes, the older nodes were likely more expensive when new than the numbers cited in that article, but nowhere near current cutting-edge nodes). And while these nodes are also much denser, transistor counts are also much higher for current CPUs. The new CCDs are just 80-something mm² compared to over 300 mm² for those Phenom II X6es, but then those were made on the relatively cheap 45 nm node, and of course the Ryzens also have a relatively large IOD as well, on a slightly cheaper node. Substrates and interconnects are also much more expensive due to the high speed I/O in these chips. The older stuff was cutting-edge for its time, obviously, but it was still much, much cheaper to produce. Double, triple and quadruple patterning, more lithographic masks, more time in the machines, and the introduction of EUV are all driving costs up. A $100 downclocked 16-core Ryzen wouldn't be even remotely feasible today.
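The die-size part of this can be made concrete with the standard first-order dies-per-wafer approximation. A minimal sketch, using the rough die areas from the post; the per-wafer prices are purely hypothetical round numbers for illustration, not actual foundry quotes:

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """First-order approximation: wafer area divided by die area,
    minus an edge-loss term for partial dies along the circumference."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

ccd = dies_per_wafer(300, 80)    # ~80 mm² Zen 3 CCD on a 300 mm wafer
k10 = dies_per_wafer(300, 300)   # ~300 mm² Phenom II X6 die

print(f"~80 mm² dies per wafer: {ccd}, ~300 mm² dies per wafer: {k10}")
# Hypothetical wafer prices (illustration only): modern node vs legacy node
print(f"cost/die: ${17000 / ccd:.0f} vs ${3000 / k10:.0f}")
```

Even though the small CCD yields roughly four times as many dies, the far higher wafer price eats most of that advantage, and a 16-core part still needs two CCDs plus an IOD plus packaging, which is the point about a $100 16-core being infeasible.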

You're also misrepresenting things a bit IMO. The Zen Athlons have been great value and great bang for the buck as long as they have existed. E and GE parts are available, but mainly for OEMs, as they generally sell in _very_ low numbers on the DIY market, making it not worth the distribution cost - this is the same for Intel's T SKUs, though Intel of course has the advantage of ~80% market share and thus ~5x more product availability. You're also missing that literally every single Ryzen chip has lower power modes built in, which can be toggled through the BIOS - at least 65W for the 105W SKUs and 45/35W for the 65W SKUs, though I've seen screenshots of 45W modes for 105W SKUs as well. You can buy a 5950X and make it a "5950XE" with a single BIOS toggle. The current Zen APUs are also extremely power efficient, and can run _very_ low power if you toggle that mode or hand tune them. You're right that they are currently focusing exclusively on what I would call upper midrange and upwards, which is a crying shame, as lower core count Zen3 CPUs or APUs would be a fantastic value proposition if priced well (there have been some _glowing_ reviews of the 5300G despite it only being available through OEMs). But AMD is sadly following the logic of capitalism: in a constrained market, you focus on what is the most profitable. Which means you bin and sell parts as the highest possible SKU, rather than dividing them between a full range of products. This will likely continue until we are no longer supply constrained, which will be a while. After all, with current laws for publicly traded companies, they could (and would) be sued by shareholders if they did anything but that, as the legislation essentially mandates a focus on maximizing profits above all else, on pain of significant monetary losses from lawsuits. 
And while I have no doubt AMD's executives wholeheartedly buy into this and think it's a good thing to maximize profits overall, they are also conditioned into this through rules, regulations, systems and societies that all adhere to this logic.
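As a side note on why those built-in 65 W modes cost relatively little performance: once voltage has to rise along with clocks, dynamic power grows roughly with the cube of frequency, so a large power cut is only a small frequency cut. A toy sketch of that cube-law assumption (a simplification; real V/f curves vary per chip):

```python
def freq_ratio_at_power_cap(p_cap_w: float, p_max_w: float) -> float:
    """Toy model: P ~ f * V^2 with V roughly proportional to f near the
    top of the V/f curve, so P ~ f^3 and f scales as the cube root of P."""
    return (p_cap_w / p_max_w) ** (1 / 3)

r = freq_ratio_at_power_cap(65, 105)
print(f"65 W cap on a 105 W part keeps ~{r:.0%} of all-core frequency")
# i.e. a ~38% power cut costs only ~15% frequency in this model
```

This is why a "5950XE"-style BIOS toggle is such an attractive trade for sustained workloads.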


----------



## The red spirit (Nov 10, 2021)

Valantar said:


> I don't doubt it, but that's definitely niche territory


And the fun thing is that Bulldozer-era Opterons are really cheap, and if there were cheaper boards, it might have been good at crunching. 16 FX cores in a cheap Opteron? Sounds pretty cool, until you realize how woefully slow each one of them is and how they get trashed by Xeons costing the same and having much stronger board availability. I didn't buy anything, because I really had no space to keep something like that running for a long time, but it was very tempting.



Valantar said:


> Yes, obviously, but my point was that compression/decompression workloads tend to be intermittent and relatively short-lived (unless you're working on something that requires on-the-fly decompression of some huge dataset, in which case you're looking at a much more complex workload to begin with). They might be tens or even hundreds of GB, but that doesn't take all that much time with even a mid-range CPU.


Some compressed files don't have to be big to be hard to decompress; it depends on how complicated the archive is. I still remember suffering for over 4 hours with an Athlon X4 845 and a 5-6 GB archive.



Valantar said:


> Yeah, they were good if you had a low budget and workloads that were specifically MT-heavy and integer-only (with that shared FP unit per two cores making it effectively half the core count in FP loads). So, yes, they were great value for very specific workloads, but they were quite poor overall - and while this gave them a good niche to live in, the lack of flexibility is part of why Intel's perceived advantage grew so huge (in addition to the inefficiency and thus lack of scaling, of course).


I bought the FX 6300 because it was awesome. I just couldn't accept that my money was only worth some poopy 2C/4T i3. It felt insulting, so I got the FX 6300 instead. And it aged quite well; the i3 was a stuttering mess a year or two later.



Valantar said:


> Hm, thanks for linking those. Seems the board partners have been even worse than I thought - and, of course, Intel deserves some flak for not ensuring that they meet the requirements of the platform. This is simply not acceptable IMO.


It's the same with AMD platforms and, to be honest, all brands produce overheating boards. It's just that Gigabyte and ASRock do it more often, but I don't doubt that Biostar would fare quite poorly too. MSI and Asus seem to be generally more robust, and MSI stopped making boards that catch on fire. I remember seeing Dawid's video about the NZXT board, and NZXT proved to be a brand to avoid. Still, the general suggestion is to shop by model, not by brand.

And beyond overheating, you can have a really shit experience for entirely different reasons:








Weird CPU temperature reporting (solved) - www.overclock.net
Now that board is completely dead.




Valantar said:


> In theory you're right, but if you're buying a high end CPU for a heavy, sustained workload, you really should also buy something with a VRM to match. If not, you're making a poor choice, whether that is due to ignorance or poor judgement. That could still be a H510 board of course, it's just that nobody makes a H510 board with those kinds of VRMs. That doesn't take away the responsibility of Intel and board manufacturers to ensure they can meet spec - i.e. every board should be able to handle PL1 power draws indefinitely for the most power hungry part on the platform, and should also handle PL2 reasonably well as long as conditions aren't terrible (i.e. there is some ventilation cooling the VRMs and ambient temperatures are below 30°C or thereabouts).  The issue, and why I said you shouldn't buy that combo, is that all the H510 boards are bargain-basement boards that generally can't be expected to do this, and will almost guaranteed leave performance on the table - which undermines the "I only need CPU perf, not I/O" argument for going that route in the first place.


But what if you really are fine with H510 features? Or A520 features? I'm aware that this is not a common consideration, but if manufacturers claim that an i9 or a 5950X works with it and then it doesn't, that's just unacceptable.




Valantar said:


> *raises hand* I used my C2Q 9450 from 2008 till 2017 You're entirely right that these chips lasted a _long_ time. However, a huge part of that is that MSDT platforms went quite rapidly from 1 to 4 cores, and then stopped for a decade, meaning software wasn't made to truly make use of them. HEDT platforms had more cores, but were clocked lower, meaning you had to OC them to match the ST perf of MSDT parts - and still topped out at 6 or 8 cores. Now that we've had 8+ core MSDT for four years we're seeing a lot more software make use of it, which is great as it actually makes things faster, rather than the <10% perf/year ST improvements we got for most of the 2010s.


And Core 2 Quads lasted, but Core 2 Duos didn't. Just two years after the Core 2 Duo launch, games like Red Faction: Guerrilla were absolutely hammering Core 2 Duos and you really had to have that Quad to play certain games. Far Cry 2 was another surprisingly CPU-heavy title.




Valantar said:


> That's quite different from what I'm used to seeing across these and other forums + in my home markets. Here, Celerons and Pentiums have been nearly nonexistent since ... I want to say 2017, but it might have been 2018, when Intel's supply crunch started. Ryzen 3000G chips have come and gone in waves, being out of stock for months at a time, particularly the 3000G and 3200G, and the Ryzen 3100, 3300 and 3500 have been nearly unobtainable for their entire life cycle. These chips seem to have been distributed almost entirely to OEMs from what I've seen, but clearly there are regional differences.


In my country, only some graphics cards were missing for a short time, while pricing became insane. Over 1k EUR for a Vega 56 or 800 EUR for an RX 580; well, shit happened. It makes the C19 situation look meek.



Valantar said:


> Hm, that sounds ... yes, mind-boggling. At least from that description it sounds like a society with a significant fixation on wealth and prestige - not that that's uncommon though.


And yet historically Lithuanian society has been meek, introverted, extremely independent, hating fellow Lithuanians more than anyone else (unless they are Russians or Poles), poor, industrious and thrifty. If you imagine a Lithuanian living in middle of nowhere or in small village and spending most of the time farming, that's exactly what they were. Hating Lithuanians more than anyone else is a modern bit, previously Lithuanians were very conservative, traditional and judgmental. 




Valantar said:


> Still, in Norway and Sweden too we've gone from a flagship phone at 4,000-5,000 NOK being considered expensive a decade ago to 10,000-15,000 NOK flagships selling like hotcakes these days. Of course people's usage patterns and dependency on their phones have changed massively, so it makes some sense for them to be valued higher and for people to be more willing to spend more on them, but that's still a staggering amount of money IMO. PC parts haven't had quite the same rise, possibly due to being more of an established market + more of a niche. There have always been enthusiasts buying crazy expensive stuff, but there's a reason why the GTX 1060 is still the most used GPU globally by a country mile - cheaper stuff that performs well is still a much easier sell overall.


My parents went from flip phones straight to flagship phones. My dad has an S6 and my mom an S9. Somehow they didn't consider going cheaper, despite talking a lot about how valuable money is.




Valantar said:


> I think the promise of Intel HEDT-like performance (largely including the ST performance, though of course you could OC those HEDT chips) was much of the reason here, as well as the 5+ year lack of real competition making an opening for a compelling story of an up-and-coming company. If nothing else, Zen was _exciting_, as it was new, different, and competitive. You're mistaken about the power consumption though: Zen was very efficient from the start. That was perhaps the biggest shock - that AMD went from (discontinued) 250+W 8c CPUs that barely kept up with an i5 to ~100W  8c CPUs that duked it out with more power hungry Intel HEDT parts, and even competed on efficiency with Intel MSDT (though their ST perf was significantly behind).


Zen 1 was quite cool, but very problematic at launch.




Valantar said:


> I also think the surprise at "Intel being good again" is mainly due to Intel first not delivering a meaningful performance improvement for a solid 4 years (incremental clock boosts don't do much), and failing to get their new nodes working well. Rocket Lake underscored that with an IPC regression in some workloads - though that was predictable from the disadvantages of backporting a 10nm design to 14nm. Still, this is the first time in more than half a decade that Intel delivers _true_ improvement in the desktop space. Their mobile parts have been far better (Ice Lake was pretty good, Tiger Lake was rather impressive), but they've had clear scaling issues going above 50-ish watts and the very short boost times of laptops. They've finally demonstrated that they haven't lost what made them great previously - it just took a _long_ time.


We will see if they haven't lost it with the next release. Alder Lake is quite okay, just not like Ryzen was and not like Sandy Bridge was. The key is continuous strong execution.


----------



## W1zzard (Nov 10, 2021)

lynx29 said:


> This type of clarity was missing in past reviews on this site and it is much appreciated. More transparency on the awards is a good thing


There has been so much drama about my awards, so now I'm trying to be a bit more explicit where I suspect people will be like "wtf why is this getting an award? wizz must have been bought by <amd/nvidia/intel/all three>". Breaking down every single award recommendation seems overkill though, and would essentially replace the conclusion text.


----------



## Space Lynx (Nov 10, 2021)

W1zzard said:


> There has been so much drama about my awards, so now I'm trying to be a bit more explicit where I suspect people will be like "wtf why is this getting an award? wizz must have been bought by <amd/nvidia/intel/all three>" Breaking down every single award recommendation seems overkill though and would essentially replace the conclusion text



I had no idea! I actually wasn't even thinking of your reviews specifically, just reviews in general on this site. Usually they are on target; other times I scratch my head and wonder how something is highly recommended yet has more negatives than pros in the review... just stuff like that, not necessarily anyone specifically. It all works out at the end of the day though. I think the reviews are just fine; I like the way and format they are done here.


----------



## Richards (Nov 10, 2021)

Valantar said:


> There seems to be a (potentially very cool, or very janky, depending on perspective) BIOS-level fix for this. Don't know if it's on current BIOSes or upcoming, but they've announced a toggle ("Legacy game compatibility mode" or something) that when enabled lets you press Scroll Lock to park and effectively disable the E cores on the fly, avoiding Denuvo issues and the like. I'm a bit shocked that this doesn't hard crash the system, but I guess it automatically migrates all threads off of the cores before shutting them down?
> 
> The underdog effect has _a lot_ to do with AMD's current success, that's for sure. And yes, AMD has historically been everywhere from competitive to clearly ahead to clearly behind, but they have always been a much smaller company than Intel, with all the competitive drawbacks of that - including (but not limited to) various illegal anticompetitive behaviour on Intel's part. That "Dell is the best friend money can buy" quote from internal Intel communications that came to light in one of the antitrust cases summarizes things pretty eloquently IMO. There's one major factor that hasn't been mentioned though: the size of the total PC market and how this affects things. Intel's late-2000s performance advantage with Core, combined with the tail end of their various dubiously legal exclusivity deals, combined at the point where a) laptops became properly viable for everyday use, and b) the internet had become a properly commonplace thing. So Intel came into a rapidly expanding market (lower prices per unit, but volume increased dramatically) while also rapidly becoming the only viable server alternative. They had some damn good luck with their timing there, and IMO that contributed quite a bit to their dominance in the 2010s, not just that Bulldozer and its ilk were bad. Now, it was pretty bad, but in a less drastically unequal competitive situation that could have evened out in some way. Instead AMD nearly went bankrupt, but thankfully stayed alive thanks to their GPU division churning out great products still. As such, I guess we can thank ATI, and the AMD decision to buy them, for having the competitive and interesting CPU market we have today. And I don't think we would have gotten the underdog effect without the GPUs either - their ability to deliver good value GPUs is likely to have contributed significantly to that.
> 
> ...


AMD did well catching up to and beating Intel with Zen 3... but Raptor Lake and Meteor Lake will dominate in single-core performance


----------



## AusWolf (Nov 10, 2021)

The red spirit said:


> I frankly still don't like Ryzen. I think the K10 era was the best, because AMD had tons of SKUs. Want a single core? Buy a Sempron. Want a triple core? Phenom II X3 it is. Want power-saving chips? Buy the e CPUs. Want a 100 EUR quad core? Athlon II X4 is there. Want an excellent gaming chip? Get a Phenom II X4. Want a crunching monster? Get the Phenom II X6 1100T. Want the same but cheaper? Get the 1055T. Don't even have an AM3 board? AM2+ is fine. Got an FX chip with an AM3+ board? You can still use all Athlon IIs, Phenom IIs and Semprons and maybe K10 Opties too. I loved that diversity and it was fun to find less common models. Too bad I was too young to get into computers then, but if that weren't the case, AMD was the ultimate value provider. Now they only have a few SKUs for each person and AMD sure as hell doesn't offer super cheap, but downclocked 16-core chips. If you are cheap, your only options are Ryzen 3s, and even then they are overpriced. Athlons now suck. And every single chip is focused on performance, meaning it's clocked as high as it can go. There is no real e chip availability. And now the equivalent of those old Athlon IIs would be octa-core chips with maybe cut-down L3 cache (Athlon IIs had no L3 cache, that was a Phenom II exclusive feature). There's no way now that AMD would sell an 8C/16T chip at 100 EUR/USD, or one with low heat output. There just aren't deals like that anymore; value AMD has been dead since FX got discontinued. Ryzen is the antithesis of value. First gen Ryzen was kinda cool, but nothing beyond that.


I think you angered quite a few forum members with the first sentence.  As for me, I can see where you're coming from. I wouldn't say I don't like Ryzen. The proper phrase would be _Ryzen doesn't fit my use case with their current lineup,_ even though they're excellent CPUs. My issues are:

The deal breaker: They run hot. I know, everyone says otherwise, but They. Run. Hot. Period. Of course you can slap a triple-fan AIO or a Noctua NH-D15 on anything and call it a day, but that doesn't mean easy cooling. Not for me at least. In my terms, easy cooling means building your system in a small form factor case, using a small or medium-sized down-draft cooler like my Shadow Rock LP, and staying within safe temperatures without modifying your power limits or volt-modding. Except for my R3 3100 (which I love), no other Ryzen chip can do that as of now. Something's wrong with the heat dissipation of the 7 nm chiplets, or the IHS design, or something else (I don't know and I don't care until it's fixed in a future generation).
And a few minor ones:
Chipset software, Ryzen Master and a Ryzen-specific power plan needed for them to run properly. I mean, a software-controlled CPU? What the F? Thankfully, this doesn't apply to Zen 3.
High idle power consumption. I just can't not mention this. Efficiency under load is great, but how much of its active time does one's PC spend in idle? Mine around 80-90%.
Bonkers clock readings. OK, I know about "effective clocks" in HWiNFO, but come on. Do I really need HWiNFO just to get a simple clock reading?

The AMD CPU that I loved the most is the Athlon 64 3000+. It destroyed 3 GHz Pentium 4s while only running at 2 GHz (some only 1.8, but mine was 2 GHz on Socket 754) and running a lot cooler. Though I have to admit that ever since the introduction of Core, Intel has been a smooth, stable and hassle-free experience on all fronts. I've owned, or at least tried, nearly every generation, and didn't really dislike anything about any of them. I even love my 11700 and I don't understand why it's so fashionable to hate 10th/11th gen. I would also love to build an SFF rig around Alder Lake, but I don't have any spare money to play with right now.


----------



## The red spirit (Nov 10, 2021)

Valantar said:


> You're disregarding some pretty major factors here though: those old chips were made on relatively cheap production nodes. Per-wafer costs for recent nodes are easily 2x what they were just a 1-2 generations back, and many times the cost of older nodes (yes, the older nodes were likely more expensive when new than the numbers cited in that article, but nowhere near current cutting-edge nodes). And while these nodes are also much denser, transistor counts are also much higher for current CPUs. The new CCDs are just 80-something mm² compared to over 300 for those 45nm Phenom II X6es, but then those were made on the relatively cheap 45nm node, and of course the Ryzens also have a relatively large IOD as well, on a slightly cheaper node. Substrates and interconnects are also much more expensive due to the high speed I/O in these chips. The older stuff was cutting-edge for its time, obviously, but it was still much, much cheaper to produce. Double, triple and quadruple patterning, more lithographic masks, more time in the machines, and the introduction of EUV are all driving costs up. A $100 downclocked 16-core Ryzen wouldn't be even remotely feasible today.


Phenom II X6 at the low end was 200 USD/EUR; it was those Athlon II X4 chips that were dirt cheap at 100 USD/EUR, with 4 K10 cores but with chopped L3 cache and a locked multiplier. But even if new chips are more sophisticated and some costs (that you mentioned) are now higher, isn't there anything that could be done to make them cheaper? Like backporting 5000 series Ryzen chips to an older node, to make them cheaper. Somehow Intel still managed to make the i5 10400F cheap, at least relatively cheap.




Valantar said:


> You're also misrepresenting things a bit IMO. The Zen Athlons have been great value and great bang for the buck as long as they have existed.


Or are they? They are just barely faster than K10 Athlon IIs and only as fast as FX Athlons. Also, overclocking: the Athlon X4 760K could clock to 5 GHz on a modest board and cooling, and at that point it may actually beat the 200GE in MT stuff.



Valantar said:


> E and GE parts are available, but mainly for OEMs, as they generally sell in _very_ low numbers on the DIY market, making it not worth the distribution cost - this is the same for Intel's T SKUs, though Intel of course has the advantage of ~80% market share and thus ~5x more product availability.


At least you had the option to have those. I haven't seen Intel T chips available after Haswell.



Valantar said:


> You're also missing that literally every single Ryzen chip has lower power modes built in, which can be toggled through the BIOS - at least 65W for the 105W SKUs and 45/35W for the 65W SKUs, though I've seen screenshots of 45W modes for 105W SKUs as well. You can buy a 5950X and make it a "5950XE" with a single BIOS toggle.


On the FM2+ platform the low power mode didn't quite work; I'm still recovering my trust in AMD after their complete disregard for low-power features and their TDP lies.



Valantar said:


> The current Zen APUs are also extremely power efficient, and can run _very_ low power if you toggle that mode or hand tune them. You're right that they are currently focusing exclusively on what I would call upper midrange and upwards, which is a crying shame, as lower core count Zen3 CPUs or APUs would be a fantastic value proposition if priced well (there have been some _glowing_ reviews of the 5300G despite it only being available through OEMs). But AMD is sadly following the logic of capitalism: in a constrained market, you focus on what is the most profitable. Which means you bin and sell parts as the highest possible SKU, rather than dividing them between a full range of products. This will likely continue until we are no longer supply constrained, which will be a while. After all, with current laws for publicly traded companies, they could (and would) be sued by shareholders if they did anything but that, as the legislation essentially mandates a focus on maximizing profits above all else, on pain of significant monetary losses from lawsuits. And while I have no doubt AMD's executives wholeheartedly buy into this and think it's a good thing to maximize profits overall, they are also conditioned into this through rules, regulations, systems and societies that all adhere to this logic.


And cynical me just thinks that AMD and nV want to make the PC DIY market premium and to destroy the cheaper-part market, as those are super-low-margin parts. And let's be honest, the DIY PC market is in a bit of a pinch: phones/tablets are getting cheaper and more powerful (and they successfully abandoned most of the low-end market), Apple has their own chips, Sony/MS are selling out their consoles at an incredible rate and people perceive them as value gaming machines (instead of PCs), and importantly, people are now conditioned to pay a lot more for tech than they were.

I mentioned how Ryzen is perceived, and the same goes for flagship phones: unlike in the past, ain't nobody gonna laugh at you for attempting to sell a 500 USD phone (like Ballmer did when the iPhone 2G launched). The media did a lot of work to normalize typically luxurious hardware, and that's obvious even here. Many people have BFGPUs, as Jensen said, despite most likely being middle class. Companies are no longer pushing for lower cost, but for more performance at any price. The era of the Radeon HD 4870, a 200 USD flagship killer, is dead, and so is the era of the HD 6950 (a 300 USD flagship killer). All these shortages also showed how impatient people are and that many are willing to pay the price of scarcity.

And despite Intel launching the i5 10400F, it didn't have stellar sales until the media stopped shitting on it for being cheaper. The i3 10100F, I think, never sold particularly well. Meanwhile, poor value parts like the 10600K sold well, and obviously other Intel parts along with Ryzens. Motherboard makers are selling cheaper boards, but if you don't pay the gaming tax for proper VRMs, you will only get herpes from them. Now water cooling became normal too, despite still mostly failing to provide anything more than air coolers do. People are paying for expensive shit and companies have no reason to stop feeding them.

As sad as it is to say, I think there's just too little demand for lower end or cheaper parts, despite many strides made by LowSpecGamer, TechYesCity, RandomGamingInHD and others. And despite there seemingly being new technological challenges, the elephant in the room isn't that, but the public perception of budget hardware. That's kinda the same reason why super low end cars disappeared altogether (I'm talking about the Toyota Aygo, VW Polo, Citroen C1, Ford Puma/Ka). Buying a new car switched from being normal to a rich man's endeavor. And beyond normal cars, cheap RWD and FWD coupes also died.

All those reasons are why I think we won't get a Ryzen 7 5800GE for 200 EUR, or a modern equivalent of it. Despite the Alder Lake launch and a small price cut from AMD, I think that's all we'll get, while most chips continue to get more and more expensive. I think that Zen 4 or Zen 3+ (whatever AMD launches next) will generally cost more and deliver more performance, but the price/performance ratio will continue to sink.



AusWolf said:


> I think you angered quite a few forum members with the first sentence.







AusWolf said:


> As for me, I can see where you're coming from. I wouldn't say I don't like Ryzen. The proper phrase would be _Ryzen doesn't fit my use case with their current lineup,_ even though they're excellent CPUs.


I'm not sure if I would even like them if AMD made them dirt cheap. There was a ton of braindead AMD marketing, braindead AMD fanboys, and braindead media hyping it up to the moon. Many events led to me feeling like I stepped into poop every single time I see Ryzen. To this day, I still think the Ryzen name is stupid, infantile and egoistic. FX meant nothing and Phenom was awesome; Athlon was between FX and Phenom. I probably care too much about such things, tbh.




AusWolf said:


> [*]Chipset software, Zen Master and Ryzen specific power plan needed for them to run properly. I mean, software controlled CPU? What the F? Thankfully, this doesn't apply to Zen 3.


Not the first time AMD did this. The Athlon 64 needed the Cool'n'Quiet software for it to work with XP. K10 era stuff needed ATi Catalyst (lol yes, that Catalyst) for drivers and some adjustments. In the late K10 era and early FX era, AMD was pushing AMD OverDrive. It was a bit poop and thankfully optional. But yeah, Ryzen is the worst offender with its various software. Even worse if you have an AMD card and an AMD CPU in the same machine. That's just asking for various bugs and glitches after every single driver update.



AusWolf said:


> High idle power consumption. I just can't not mention this. Efficiency under load is great, but how much of its active time does one's PC spend in idle? Mine around 80-90%.


That depends on the motherboard too. On the Intel side, many boards don't have functional C-states beyond C3 and will randomly lock up or black screen if they're left enabled. Also, computers consume quite a lot of power at idle even when the chip itself is very power efficient at low load. Surprisingly, fans can consume quite a bit of power, and so does the chipset. A wrong choice of power supply can also mean significantly higher power use at idle.



AusWolf said:


> Bonkers clock readings. OK, I know about "effective clocks" in HWinfo, but come on. Do I really need HWinfo just to get a simple clock reading?


That's the same on Intel, since it's very hard to avoid the observer effect when monitoring a chip that can rapidly enter various C-states.




AusWolf said:


> I even love my 11700 and I don't understand why it's so fashionable to hate 10/11th gen.


The media doesn't care about locked chips, and Intel got a lot of flak for the various limitations of Comet Lake non-Z platforms, with 2666 MHz being the highest supported RAM speed on locked Core i parts and an even lower 2400 MHz on Celeron/Pentium. Oh, and the hype for Ryzen finally matching Intel (somehow many people perceived that as destroying Intel). Rocket Lake had a terrible flagship, and its locked parts were more expensive than Comet Lake ones despite tiny performance gains. And on top of that, we have that infamous HWUB B560 video, which certainly did no favors in recovering the public perception of Intel.




AusWolf said:


> I would also love to build a SFF rig around Alder Lake, but I don't have any spare money to play with right now.


That 11700 is still cool.


----------



## Valantar (Nov 10, 2021)

Richards said:


> AMD did good catching up and beating Intel with Zen 3... but Raptor Lake and Meteor Lake will dominate in single core performance


We'll see - they're promising major ST gains just from the V-cache on the updated Zen3 in a few months, and Zen4 will no doubt focus further on ST gains (as increasing core counts for MSDT markets has zero purpose at this point). Given that Intel just barely caught up with Zen3's IPC (a bit ahead or a bit behind depending on whether you're using DDR4 or DDR5 and the specific combination of workloads), it'll be very interesting to see how these things shake out going forward. Intel still manages to clock consistently higher, but at the cost of huge power draws, and their cores are surprisingly large compared to even Zen3 - so both have clear advantages and disadvantages. The next few generations will be really interesting.


AusWolf said:


> I think you angered quite a few forum members with the first sentence.  As for me, I can see where you're coming from. I wouldn't say I don't like Ryzen. The proper phrase would be _Ryzen doesn't fit my use case with their current lineup,_ even though they're excellent CPUs. My issues are:
> 
> The deal breaker: They run hot. I know, everyone says otherwise, but They. Run. Hot. Period. Of course you can slap a triple-fan AIO, or a Noctua NH-D15 on anything and call it a day, but that doesn't mean easy cooling. Not for me at least. In my terms, easy cooling means building your system in a small form factor case, using a small or medium-sized down-draft cooler like my Shadow Rock LP and stay within safe temperatures without modifying your power limits or volt-modding. Except for my R3 3100 (which I love), no other Ryzen chip can do that as of now. Something's wrong with the heat dissipation of the 7 nm chiplets, or the IHS design, or something else (I don't know and I don't care until it's fixed in a future generation).
> And a few minor ones:
> ...


I definitely see where you're coming from, and what you're saying here makes a lot of sense. I have a few comments though:

The current impression of Intel chips running hot is a) down to their choice of clocking their top SKUs crazy high and disregarding power consumption to look good in benchmarks, and b) because they generally only (or at least mostly) seed review samples for high end chips. You'll probably find 10x or more 11900K reviews compared to 11400 reviews. Non-K high end SKUs or T SKUs are essentially never provided for review, despite selling massively in OEM systems (and this might be precisely to avoid unfavorable comparisons to poorly/cheaply configured OEM systems). Public perceptions are always unreliable and flaky, but Intel has done themselves zero favors in this regard.
Zen power ratings are a bit weird - Ryzen Master consistently reports 15-ish watts lower power consumption (cores + SoC) compared to package power readings in HWiNFO and the like. I frankly don't know what to trust. At least in TPU's reviews, total system idle power is so close as to not make a difference.
As you say, the need for a specific power plan is rather weird, and IMO speaks to some kind of poor cooperation with MS. Thankfully that's long gone, and I doubt we'll have to deal with anything like it going forward. But having software control the CPU? Isn't that what modern OSes do? Sure, you can argue for a difference between the OS and third party software and drivers, but the main difference here is that it's visible to you as a user, not whether the software is there or not. It should auto-install through Windows Update though, that's a minimum requirement IMO. But I also think good software-OS-hardware integration is key for getting the most out of our hardware going forward.
For the clock readings, I think that's an unavoidable consequence of an increasing number of separate clock-gated parts of the CPU, an increasing number of clock planes, different speeds across cores, different speeds according to the load, etc. When the clock speed is controlled per-core in tiny increments at a staggering pace by bespoke hardware, there's no way human-readable software will be able to keep up. I agree that it's pretty weird to see an "effective clock" for a core at 132 MHz when the core clock is reporting 3.9 GHz, but ... I don't see how it ultimately matters. At least in my case, there's only real variance between the two at idle; under load they equalize (within ~25 MHz). The fact that effective clocks can differ between main and SMT threads per core tells me there's something in play here that's well beyond my understanding. In other words, it's not that the data reported isn't accurate, but rather that computers are getting so complex my understanding is falling behind. And to be honest, that's a very good thing.
None of that takes away from the fact that lower clocked 10th and 11th gen Intel CPUs are great (and even quite efficient), and your point about Zen2 and Zen3 being difficult to cool is also quite relevant. I don't mind, but then I'm not using one of these chips with an LP air cooler. That my 5800X settles in the mid-to-high 60s at 120W running OCCT with a 280mm rad, the P14s spinning at 1600rpm, and my pump at 3400? Even accounting for the Aquanaut being a mediocre-at-best CPU block, that's pretty high overall considering the thermal load. This might be an advantage Intel gets from their larger core designs, even on the new 7 process. Though I also think us PC builders will need to get used to higher core temperatures in the future, as thermal densities are only increasing. I hope we can avoid laptop-like "everything spends any time under load bouncing off tJmax" scenarios though.
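The effective-clock divergence described above boils down to time-weighted averaging. A toy sketch (the 3.9 GHz boost clock and ~3% awake fraction are illustrative assumptions, not measurements from any real chip):

```python
# Toy model of instantaneous vs. "effective" clock readings. A core
# runs at its boost clock while awake but is clock-gated (0 Hz) in a
# C-state for most of each sampling window, so a time-weighted
# average - roughly what HWiNFO's "effective clock" shows - can sit
# far below the sampled core clock.

def effective_clock_ghz(active_ghz: float, awake_fraction: float) -> float:
    """Time-weighted average clock over a window, counting gated time as 0 Hz."""
    return active_ghz * awake_fraction

# An idle desktop core might be awake only ~3% of the time:
idle_effective = effective_clock_ghz(3.9, 0.03)
# Under sustained load the two readings converge:
loaded_effective = effective_clock_ghz(3.9, 1.0)

print(f"idle:   sampled 3.90 GHz, effective {idle_effective:.3f} GHz")
print(f"loaded: sampled 3.90 GHz, effective {loaded_effective:.2f} GHz")
```

Under this model, a 132 MHz effective reading against a 3.9 GHz sampled clock just means the core was awake about 3% of the window, which is exactly what a well-idling system should look like.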


The red spirit said:


> Phenom II X6 at low end was 200 USD/EUR, it were those Athlon II X4 chips that were dirt cheap at 100USD/EUR with 4 K10 cores, but with chopped L3 cache and locked multiplier. But even if new chips are more sophisticated and some costs (that you mentioned) are now higher, isn't there anything that could be done to make them cheaper? Like backporting 5000 series Ryzen chips to older node, to make them cheaper. Somehow Intel still managed to make i5 10400F cheap, at least relatively cheap.


10400F is 14nm though, and Intel has a lot more scale than AMD to cut costs, at ~4x the market share. Backporting _might_ work, but the Zen3 core is designed for the density and physical properties of the 7nm node, so a backport will perform worse and likely run quite hot (just look at Rocket Lake and its latency regressions). Sadly the only thing to really do is wait for the supply crunch to alleviate and for economies of scale to overtake the margin-hiking AMD is currently riding on.


The red spirit said:


> Or are they? They are just barely faster than K10 Athlon IIs and are as fast as FX Athlons. Also overclocking. Athlon X4 760K could clock to 5Ghz on modest board and cooling and at that point it may actually beat 200GE in MT stuff.


At, what, 7-8x the power and on a platform that barely supports USB3.0, let alone any modern I/O? Yeah, sure, they might tag along with a bottom-end chip like that still, but even for free the value proposition there would be pretty terrible given the effort needed for OCing (especially for a beginner), the more expensive cooler, the noise, and the lack of upgrade path (buy a 200GE and you can move to a used 5950X in time!). Bargain-basement parts always compete with used parts in value, but they have inherent advantages due to being new.


The red spirit said:


> At least you had an option to have those. I haven't seen Intel T chips available after Haswell.


They're found in tons of AIOs and SFF OEM PCs, but they are really rare at retail. SFF aficionados tend to hunt them down from time to time, but you often need grey-market sources like eBay.


The red spirit said:


> On FM2+ platform low power mode didn't quite work, I'm still recovering trust of AMD after their complete disregard of low power features and their TDP lies.


I haven't heard of any issues with these on AM4 - but then all they do is adjust PPT, TDC and EDC limits for the chip, just like you can do manually, and which is a core method of PBO overclocking or underclocking/volting. I'm currently running my 5800X at a 120W PPT, and it never goes above that, ever (and it still boosts higher than stock).


The red spirit said:


> And cynical me just thinks, that AMD and nV just want to make PC DIY market premium and to destroy cheaper part market as those are super low margin parts. And let's be honest, DIY PC market is a bit in pinch as phone/tablets are getting cheaper and more powerful (and they successfully abandoned most of low end market), Apple has their own chips, Sony/MS are selling out their consoles at incredible rate and people perceive them as value gaming machines (instead of PCs) and importantly, people are now conditioned to pay a lot more for tech than they were.


You're onto something, but I don't think it's down to anyone wanting to destroy anything, it's more down to the fact that entry-level hardware is getting sufficiently powerful to take the place of what used to be low-end gaming hardware. Smartphones and APU laptops suddenly providing decent gaming experiences has all but killed the low-end GPU market. But the real kicker is that this has coincided with the current supply crunch which has completely erased lower midrange and midrange parts as well, pushing prices to astronomical levels. No doubt a lot of execs would love for this to stick around, as it's making them filthy rich. I just hope people don't get used to the idea of $1000 being normal for a GPU. If that happens, PC gaming will transform radically, and not for the better.


The red spirit said:


> I mentioned how Ryzen is perceived, as well as flagship phones, unlike in past, ain't nobody gonna laugh at you for attempting to sell a 500 USD phone (like Ballmer did when iPhone 2G launched). Media did a lot of work to normalize typically luxurious hardware and well that's obvious even here. Many people have BFGPUs as Jensen said, despite most likely them being middle class people. They are no longer pushing for lower cost, but for more performance at any price. The era of Radeon HD 4870, a 200 USD flagship killer is dead and so is the era of HD 6950 (300 USD flagship killer). All these shortages also showed how impatient people are and that many are also willing to pay a price of scarcity. And despite Intel launching i5 10400F, it didn't have stellar sales until media stopped to shit on it for being cheaper. i3 10100F, I think, never sold particularly well. Meanwhile, poor value parts like 10600K sold well and obviously, other Intel parts along with Ryzens. Motherboard makers are selling cheaper boards, but if you don't pay gaming tax for proper VRMs, you will only get herpes from them. Now water cooling became normal too, despite still mostly failing to provide anything more than air coolers. People are paying for expensive shit and companies have no reason to stop feeding them. As sad it is to say that, but I think that there's just too little demand for lower end or cheaper parts, despite many strides made by LowSpecGamer, TechYesCity, RandomGamingInHD and others. And despite there seemingly being new technological challenges, the elephant in the room isn't that, but public perception of budget hardware. That's kinda the same reason why super low end cars disappeared altogether (I'm talking about Toyota Aygo, VW Polo, Citroen C1, Ford Puma/Ka). And buying a new car switched from being normal to rich man's endeavor. And beyond normal cars, cheap RWD and FWD coupes also died.


You're mostly right, but it's not down to a lack of demand IMO - remember, the 1060 is still the most used GPU out there by a massive margin. It's mainly down to a lot of those people having working, okay hardware already, and holding off until something new comes along at an acceptable price. All the while, enthusiasts are going crazy and paying 2-3-4-5x as much for a GPU as most of them would have been willing to do five years ago. There is absolutely a general move in business towards luxury products with higher margins, especially as markets get more commoditized (just look at TridentZ Royal RAM) - it's a way for them to get massively larger profits for essentially the same product. But in most markets this opens the door for new budget brands - this is hard in the PC space with no more X86 licences though. Still, it might happen - or AMD and Intel might realize that they are leaving a massive market unaddressed and ripe for smartphone/console makers, and start catering to the millions if not billions who can't afford a $500 GPU yet still want to game.


The red spirit said:


> All those reasons are why I think we won't get Ryzen 7 5800GE for 200 EUR or modern equivalent of it. Despite Alder Lake launch and small price cut from AMD, I think that it's all we will get, while most chips will continue to get more and more expensive. I think that Zen 4 or Zen 3+ (whatever AMD will launch next) will generally cost more and deliver more performance, but price/performance ratio will continue to sink.


There's another factor here: if AMD has a piece of silicon that qualifies for the speed and power of a $400 5800X, they will sell it as that rather than a potential $200 5800E, simply because it's the same silicon, so the same production cost, and thus the increased price is just more profit. As long as they have a limited supply of silicon and high yields (i.e. a low rate of chips that fail to meet 5800X levels of power and speed), they will stick to the more profitable chip. As I said above, if they don't do this they will literally be sued by their shareholders, and likely for hundreds of millions of dollars if not billions. Current US business laws essentially force them into doing so, and the only way out of prioritizing high margin products is if you can argue that you sell more volume with higher prices - an argument that doesn't work when you're supply constrained and selling out constantly.
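The binning logic here is simple arithmetic. A sketch using the $400/$200 prices from above; the per-chip cost is a made-up assumption purely for illustration:

```python
# Back-of-envelope margin comparison for one die that bins well enough
# to qualify as a 5800X. Prices are from the discussion; the per-chip
# cost is an assumed placeholder, not a real figure.

unit_cost = 80.0      # assumed silicon + packaging + test cost per chip
price_5800x = 400.0   # sold as a 5800X
price_5800e = 200.0   # sold as a hypothetical low-power "5800E"

margin_x = price_5800x - unit_cost
margin_e = price_5800e - unit_cost

# While supply-constrained, every die sold as the cheap SKU forfeits
# the margin difference outright - there is no extra volume to win.
forfeited = margin_x - margin_e
print(f"profit given up per down-binned chip: ${forfeited:.0f}")
```

The same silicon, the same cost, but selling it as the cheaper SKU gives up the entire price difference per chip, which is why the cheap SKU only makes sense once supply outstrips demand for the expensive one.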


The red spirit said:


> I'm not sure if I would even like them if AMD made them dirt cheap. There was a ton of braindead AMD marketing, braindead AMD fanboys, braindead media hyping it up to the moon. Many events lead to my feeling like stepped into poop, every single time I see Ryzen. To this day, I still think that Ryzen name is stupid, infantile and egoistic. FX meant nothing and Phenom was awesome. Athlon was between FX and Phenom. I probably care too much about such things tbh.


Hehe, I can understand that. There's lots of stupid marketing all around. I tend to filter most of it out, as best I can at least. Ryzen marketing has nothing on previous AMD GPU marketing though ... *shudder*


----------



## The red spirit (Nov 11, 2021)

Valantar said:


> 10400F is 14nm though, and Intel has a lot more scale than AMD to cut costs, at ~4x the market share. Backporting _might_ work, but the Zen3 core is designed for the density and physical properties of the 7nm node, so a backport will perform worse and likely run quite hot (just look at rocket lake and its latency regressions). Sadly the only thing to really do is wait for the supply crunch to alleviate and for economies of scale to overtake the margin-hiking AMD is currently riding on.


Intel likely could backport Celeron-to-i3 SKUs to 14 nm. It could be cheaper to make bigger chips on the cheaper node. But I guess it's not going to be worth it for them financially. Who knows, maybe it actually is. Maybe there's an odd demand for low end chips.




Valantar said:


> At, what, 7-8x the power and on a platform that barely supports USB3.0, let alone any modern I/O? Yeah, sure, they might tag along with a bottom-end chip like that still, but even for free the value proposition there would be pretty terrible given the effort needed for OCing (especially for a beginner), the more expensive cooler, the noise, and the lack of upgrade path (buy a 200GE and you can move to a used 5950X in time!). Bargain-basement parts always compete with used parts in value, but they have inherent advantages due to being new.


Athlon II X4 chips go for nothing on fleabay and they were very efficient. Now you shouldn't buy them, as their performance without L3 cache and their lack of modern instructions just murders them. But FM2+ stuff is a whole other beast. There is proper USB 3 support unless you buy bottom-of-the-barrel boards, and 760Ks are quite inexpensive. The 760K is a very cool FX-based chip and can achieve a significant overclock with the stock AMD cooler. Richland cores were exceptional overclockers as they ran very cool. You may achieve a 600 MHz OC with the stock cooler, if not more. It's not worth getting now, but the people who bought it new still have little reason to upgrade to a new Athlon. Significant upgrades for them start at the i3/Ryzen 3 level.



Valantar said:


> They're found in tons of AIOs and SFF OEM PCs, but they are really rare at retail. SFF afficionados tend to hunt them down from time to time, but you often need grey-market sources like Ebay.


Sucks for them, but I want to mention that the S series is also gone.



Valantar said:


> I haven't heard of any issues with these on AM4 - but then all they do is adjust PPT, TDC and EDC limits for the chip, just like you can do manually, and which is a core method of PBO overclocking or underclocking/volting. I'm currently running my 5800X at a 120W PPT, and it never goes above that, ever (and it still boosts higher than stock).


When I tried to use a lower cTDP, my board made the chip downclock to its lowest P-state (800 MHz) and caused stuttering and poor performance, while still boosting to over 4 GHz. It was complete trash. I tested that on two boards, same behaviour. Also, I must mention that this was only available on rare FM2(+) chips, and my particular chip was the Athlon X4 845, which is a rare, unicorn chip on an even rarer architecture, Carrizo. Most FM platform chips likely didn't have anything like a cTDP-down mode. I used an A4 6300 and an A6 7400K, and those had nothing like that. Carrizo chips were harvested laptop dies and in general behaved differently from most other Athlons. Fun fact: it beats the 870K, despite having a lower model number. It beats it in both performance and power usage.



Valantar said:


> You're onto something, but I don't think it's down to anyone wanting to destroy anything, it's more down to the fact that entry-level hardware is getting sufficiently powerful to take the place of what used to be low-end gaming hardware. Smartphones and APU laptops suddenly providing decent gaming experiences has all but killed the low-end GPU market.


I would strongly disagree with that. APUs are only usable at really low resolutions like 720p or 900p; they just can't handle much more. Only top tier APUs can handle 1080p with bearable FPS, but the whole idea of an APU is to have it cheap, not to pay a premium for a better model. And that's today; they will not last much longer.

I know I will get a lot of flak for saying that APUs are not great, but I personally couldn't imagine living with one. Going to resolutions below 1080p (and, to be honest, to 1080p on a non-1080p monitor) is rough. And it doesn't end there: you are effectively forced to play at low/medium settings, you share RAM with the CPU, and all you get is a 40-50 fps average in return. Some titles are unplayable even at 720p low, and it sucks. Might as well buy a 100 EUR Haswell machine with an i5 or i7, slap in some random fleabay card, and be better off than with an APU. And APUs, as I said, haven't been available where I live at all, minus the ridiculous 5700G. RX 580s are still 180-220 EUR here. APUs are only great in the sense that maybe you expected just a display adapter and they can run a game or two somewhat well, but buying one intentionally for gaming is, imo, taking things too far.



Valantar said:


> But the real kicker is that this has coincided with the current supply crunch which has completely erased lower midrange and midrange parts as well, pushing prices to astronomical levels. No doubt a lot of execs would love for this to stick around, as it's making them filthy rich. I just hope people don't get used to the idea of $1000 being normal for a GPU. If that happens, PC gaming will transform radically, and not for the better.


And from all I gather, it seems that people are fine with that. The disturbing thing is that the typical low end techtubers have sort of disappeared. BudgetBuildsOfficial only recently came back. LowSpecGamer is on some odd hiatus. Oz Talks HW has very low video output. TechYesCity has obviously been covering more higher end stuff. Good thing that Green Ham Gaming posted a video, but it doesn't seem like he will actually return. Steve from RGinHD is likely still doing what he always does, but I don't watch him anymore. Kryzzp seems to be rising. So despite some modest successes, it seems that low end PC media is in malaise and has been for a whole year. And some techtubers like Greg Salazar have completely quit budget builds, and Dawid Does Tech Stuff is very similar.




Valantar said:


> You're mostly right, but it's not down to a lack of demand IMO - remember, the 1060 is still the most used GPU out there by a massive margin. It's mainly down to a lot of those people having working, okay hardware already, and holding off until something new comes along at an acceptable price. All the while, enthusiasts are going crazy and paying 2-3-4-5x as much for a GPU as most of them would have been willing to do five years ago. There is absolutely a general move in business towards luxury products with higher margins, especially as markets get more commoditized (just look at TridentZ Royal RAM) - it's a way for them to get massively larger profits for essentially the same product. But in most markets this opens the door for new budget brands - this is hard in the PC space with no more X86 licenses though. Still, it might happen - or AMD and Intel might realize that they are leaving a massive market unaddressed and ripe for smartphone/console makers, and start catering to the millions if not billions who can't afford a $500 GPU yet still want to game.


To be honest, there are VIA and Zhaoxin with x86 licenses. Most likely they can't produce anything truly good, but they exist and may one day rise like a phoenix. Considering how fast China is growing, and its "made in China" programme to manufacture things domestically, Zhaoxin has a slim chance to rise. They sort of had plans to take on Ryzen in one old presentation; how that turned out, I don't know. It would be cool if they made something decent.

Wikipedia says that latest Zhaoxin models are Ryzen level, but information is really scarce.



Valantar said:


> There's another factor here: if AMD has a piece of silicon that qualifies for the speed and power of a $400 5800X, they will sell it as that rather than a potential $200 5800E, simply because it's the same silicon, so the same production cost, and thus the increased price is just more profit. As long as they have a limited supply of silicon and high yields (i.e. a low rate of chips that fail to meet 5800X levels of power and speed), they will stick to the more profitable chip. As I said above, if they don't do this they will literally be sued by their shareholders, and likely for hundreds of millions of dollars if not billions. Current US business laws essentially force them into doing so, and the only way out of prioritizing high margin products is if you can argue that you sell more volume with higher prices - an argument that doesn't work when you're supply constrained and selling out constantly.


That honestly sounds like some really dumb legislation, but I guess the US has had enough of IPO trolls already.



Valantar said:


> Hehe, I can understand that. There's lots of stupid marketing all around. I tend to filter most of it out, as best I can at least. Ryzen marketing has nothing on previous AMD GPU marketing though ... *shudder*


Oh, that's another poop I try to avoid stepping into. Vega was awful, and anything after it was also not very well marketed. There were some stupid releases of cards that no one asked for (Radeon VII, RX 590). The last well marketed card was the RX 480; Polaris in general was well marketed, and to be honest, Polaris was one of the best GPU launches AMD has had since the Radeon HD series. And if we talk about crappy AMD marketing, did you know that AMD tried to sell SSDs and RAM? They called them Radeon, for some reason. It couldn't get any more confusing. Imagine that era: buying a Radeon card, installing Crimson drivers, installing Catalyst motherboard drivers, setting up a Radeon SSD and RAM, dealing with the 990FX chipset. That was the worst mismatch of legacy and new marketing names and product lines. In 2013-2014, who even knew what Catalyst was (excluding enthusiasts)? Oh, and OverDrive, a leftover from Phenom II, which was ported to work with pre-Vishera FX and was on life support with Vishera FX chips. Sometimes I really wonder if AMD wouldn't be better off without marketing at all.


----------



## lexluthermiester (Nov 11, 2021)

W1zzard said:


> There has been so much drama about my awards, so now I'm trying to be a bit more explicit where I suspect people will be like "wtf why is this getting an award?"


For the record, and I hope this doesn't seem like ass-kissing: you didn't miss a beat. I think you anticipated the complaints (and whining) that people would have about the testing methodology and presented your results accordingly.


W1zzard said:


> Breaking down every single award recommendation seems overkill though and would essentially replace the conclusion text


I think you really don't need to. While a short general explanation might be useful, anyone with a decade's worth of experience or more is going to understand intuitively. The awards were clearly based on a logical, consumer-perspective ethic and were spot on.


----------



## AusWolf (Nov 11, 2021)

Valantar said:


> The current impression of Intel chips running hot is a) down to their choice of clocking their top SKUs crazy high and disregarding power consumption to look good in benchmarks, and b) because they generally only (or at least mostly) seed review samples for high end chips. You'll probably find 10x or more 11900K reviews compared to 11400 reviews. Non-K high end SKUs or T SKUs are essentially never provided for review, despite selling massively in OEM systems (and this might be precisely to avoid unfavorable comparisons to poorly/cheaply configured OEM systems). Public perceptions are always unreliable and flaky, but Intel has done themselves zero favors in this regard.


I see what you mean. Intel wants to look good with their CPUs' performance, but ends up looking bad with their (unlocked) power consumption. I think they could have saved the reputation of Comet Lake and Rocket Lake by enforcing stricter default values on motherboards. Since the performance crown went to AMD in those generations anyway, Intel could have clawed back some ground on efficiency and unlocking potential. By that I mean, you can look at 10th/11th-gen chips the way the media did: as power hogs that can be dialed down to sip less power and be slower - or you can look at them the way I do: as 65/125 W locked chips with some hidden potential that you can unlock, provided you have a good motherboard and a proper cooling setup. They're extremely versatile in this sense, and that's why I love them.



Valantar said:


> As you say, the need for a specific power plan is rather weird, and IMO speaks to some kind of poor cooperation with MS. Thankfully that's long gone, and I doubt we'll have to deal with anything like it going forward. But having software control the CPU? Isn't that what modern OSes do? Sure, you can argue for a difference between the OS and third party software and drivers, but the main difference here is that it's visible to you as a user, not whether the software is there or not. It should auto-install through Windows Update though, that's a minimum requirement IMO. But I also think good software-OS-hardware integration is key for getting the most out of our hardware going forward.


It isn't visible to the user anyway, unless you install Ryzen Master, which never really worked for me for some reason. Whenever I tried to install it separately, it said it's already installed, but I couldn't find it anywhere. If I'd ever managed to run it, I would have considered using it, but in this state, it's a piece of rubbish. I prefer controlling CPU-related things in the BIOS anyway - that way I can make sure that everything works the same way even after an OS reinstall, or if I ever get drunk enough to try a magical new Linux distro.



Valantar said:


> For the clock readings, I think that's an unavoidable consequence of an increasing number of separate clock-gated parts of the CPU, an increasing number of clock planes, different speeds across cores, different speeds according to the load, etc. When the clock speed is controlled per-core in tiny increments at a staggering pace by bespoke hardware, there's no way human-readable software will be able to keep up. I agree that it's pretty weird to see an "effective clock" for a core at 132MHz when the core clock is reporting 3.9GHz, but ... I don't see how it ultimately matters. At least in my case, there is only any real variance between the two at ilde; under load they equalize (within ~25MHz). The fact that effective clocks can differ between main and SMT threads per core tells me there's something in play here that's well beyond my understanding. In other words, it's not that the data reported isn't accurate, but rather that computers are getting so complex my understanding is falling behind. And to be honest, that's a very good thing.


I know, but Intel somehow still manages to report relatively normal idle clocks. On Ryzen, it's also a matter of power plan, I think. When I switched to "maximum energy savings", my Ryzen CPUs did what I would call a proper idle - that is, sub-20 W power consumption and clocks in or near the 1 GHz range. The problem was that whenever I fired up something that needed power, the speed ramp-up was so gradual that a few seconds had to pass before the CPU got up to full speed. It was enough to screw with your Cinebench score or game loading times. I've never seen anything like it before. In "balanced" and "maximum performance" mode, the switch between speed states is instantaneous (as it should be), but idle clocks are reportedly all over the place, with power consumption in the 25-30 W range. Considering my PC's idle time, neither is ideal, imo. I just couldn't find the perfect balance, no matter how hard I tried.
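For what it's worth, a crude way to eyeball that ramp-up delay is to time an identical burst of work twice in a row after the CPU has been idle; under a slow-ramping power plan the first burst takes measurably longer. This is a rough sketch, not a proper benchmark - the loop size and sleep duration are arbitrary:

```python
import time

def burst(n=2_000_000):
    # Fixed amount of integer work; wall time depends on the current CPU clock.
    s = 0
    for i in range(n):
        s += i * i
    return s

def timed_burst(n=2_000_000):
    t0 = time.perf_counter()
    burst(n)
    return time.perf_counter() - t0

time.sleep(2)            # let the CPU drop into its idle P-states
first = timed_burst()    # clocks may still be ramping up during this burst
second = timed_burst()   # clocks should be at full boost by now
print(f"first burst: {first*1000:.1f} ms, second burst: {second*1000:.1f} ms")
```

On a plan with aggressive ramp-up the two numbers should be nearly identical; a big gap on the first burst is the sluggishness described above.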



Valantar said:


> None of that takes away from the fact that lower clocked 10th and 11th gen Intel CPUs are great (and even quite efficient), and your point about Zen2 and Zen3 being difficult to cool is also quite relevant. I don't mind, but then I'm not using one of these chips with an LP air cooler. That my 5800X settles in the mid-to-high 60s at 120W running OCCT with a 280mm rad, the P14s spinning at 1600rpm, and my pump at 3400? Even accounting for the Aquanaut being a mediocre-at-best CPU block, that's pretty high overall considering the thermal load. This might be an advantage Intel gets from their larger core designs, even on the new 7 process. Though I also think us PC builders will need to get used to higher core temperatures in the future, as thermal densities are only increasing. I hope we can avoid laptop-like "everything spends any time under load bouncing off tJmax" scenarios though.


I agree, and I hope so too.  I love small form factor builds, and I don't want to give up building them just because IHS design can't keep up with chip manufacturing technologies. 

High 60s temp on a 5800X tuned to 120 W sounds quite good, but considering that you need a 280 mm rad to achieve that... ugh.  My 11700 set to 125 W runs at 75 °C max in Cinebench with a Shadow Rock LP. In games, it's in the high 60s as well.


----------



## Valantar (Nov 11, 2021)

The red spirit said:


> Intel likely could backport Celeron-i3 SKUs to 14nm. Could be cheaper to make bigger chips on cheaper node. But I guess, it's not going to be worth it for them financially. Who knows, maybe it actually is. Maybe there's an odd demand for low end chips.


Demand for low-end chips is pretty stable and high, but cutting production pushes a lot of those purchases (especially business/OEM ones) into a higher tier rather than not making a purchase at all - which means chipmakers make more money. IMO, they'd rather keep making low-end chips on older nodes and architectures, as that would be cheaper and easier than any backporting, and might perform just as well. I wouldn't be surprised if we saw different architectures across the low end and high end within a couple of generations. (Of course, AMD is already doing that in mobile, with updated Zen2 and Zen3 combined across their 5000U series.) Intel has also historically had the scale to produce 2-3 dice across their MSDT range though, so we might just see them prioritize some really small dice for lower-end parts.


The red spirit said:


> Sucks for them, but I want to mention that S series are also gone.


S was 65W though, so it has been subsumed into the non-lettered series as TDPs for lower end chips have dropped.


The red spirit said:


> When I tried to use lower cTDP, my board made the chip downclock to the lowest P state (800 MHz) and caused stuttering and poor performance, while still boosting to over 4GHz. It was complete trash. I tested that on two boards, same behaviour. Also I must mention that this was available only on rare FM2(+) chips, and my particular chip was the Athlon X4 845, which is a rare, unicorn chip on an even rarer architecture, Carrizo. Most FM platform chips likely didn't have anything like a cTDP-down mode. I used an A4 6300 and an A6 7400K and those had nothing like that. Carrizo chips were harvested laptop dies and in general behaved differently from most other Athlons. Fun fact: it beats the 870K, despite having a lower model number. It beats it in performance and power usage.


Hm, that's weird. 45W mode was perfectly fine on my old A8-7600 that used to run my HTPC. Never had any problems with it, but ultimately I went back to 65W mode as that PC ran relatively cool anyhow and the CPU was slow enough that I wanted all the performance I could get.


The red spirit said:


> I would strongly disagree with that. APUs are only usable with really old and low resolutions like 720p or 900p, they just can't handle much more. Only top tier APUs can handle 1080p with bearable FPS, but the whole idea of APU is to have it cheap, not to pay premium for better model. And that's today, they will not last much longer.


Well, that depends on your expectations. I'm perfectly fine with playing games at a mix of 900p and 1080p at mid-low settings on my 4650G HTPC. It's not as good as playing them on my main PC, obviously, but that depends a lot on the game as well. Rocket League at 900p ~90fps on the TV plays great with FreeSync, and is more enjoyable than on my main PC, even if that runs a much higher frame rate, detail level and resolution.


The red spirit said:


> I know I will get a lot of flak for saying that APUs are not great, but I personally couldn't imagine living with one. And going to resolutions below 1080p (and to be honest, to 1080p on a non-1080p monitor) is rough. And it doesn't end there: you are effectively forced to play at low/medium settings + you share RAM with the CPU, and all you get is 40-50 fps average in return. Some titles are unplayable even at 720p low and it sucks. Might as well buy a 100 EUR Haswell machine with an i5 or i7, slap in some random fleabay card and be better off than with an APU. And APUs, as I said, haven't been available where I live at all, minus the ridiculous 5700G. RX 580s are still 180-220 EUR where I live. APUs are only great in the sense that maybe you expected them to be a display adapter and they can run a game or two somewhat well, but buying one intentionally for gaming imo is taking things too far.


Again: depends on your expectations. And not all regions have well functioning used components markets. That also depends what you're looking for, of course. And you keep making value comparisons between new parts and used ones, which ... well, used parts will _always_ win. That's a given. You don't buy a brand-new APU setup if value is your _only_ priority. You don't buy _anything_ brand new if value is your only priority.


The red spirit said:


> And from all I gather, it seems that people are fine with that. The disturbing thing is that typical low-end techtubers have sort of disappeared. BudgetBuildsOfficial only recently came back. Low Spec Gamer is on some odd hiatus. Oz Talks HW has very low video output. TechYesCity has obviously been talking more about higher-end stuff. Good thing that Green Ham Gaming posted a video, but it doesn't seem like he will actually return. Steve from RGinHD is likely still going like always, but I don't watch him anymore. Kryzzp seems to be rising. So despite some modest successes, it seems that low-end PC media is in malaise and has been for a whole year. And some techtubers like Greg Salazar have completely quit budget builds, and Dawid does tech is very similar.


It's not much of a surprise given the price hikes and how the used markets across the world look right now - there isn't much use in making videos promoting buying cheap used gear to game on when that gear is either not cheap or not available any longer.


The red spirit said:


> To be honest, there's VIA and Zhaoxin with x86 licenses. Most likely they can't produce anything truly good, but they exist and may one day rise like a phoenix. Considering how fast China is growing and that they had some made in China programme to make things in China, Zhaoxin has a slim chance to rise. They sort of had plans to take on Ryzen in one old presentation, how that turned out I don't know. Would be cool if they made something decent.
> 
> Wikipedia says that latest Zhaoxin models are Ryzen level, but information is really scarce.


Those Zhaoxin CPUs are ... well, they're a lot better than what VIA has had before, but they're not great, with low clock speeds and ST performance behind an AMD A10-9700. They could absolutely get some massive cash infusion and suddenly turn into a viable third X86 vendor, but that's highly unlikely - especially now that Intel is buying large parts of VIA's X86 CPU design team.


The red spirit said:


> That honestly sounds like some really retarded legislation, but I guess US had enough of IPO trolls already.


This is not new. IIRC it was made into law in the early 90s, and was a dream of neoliberal economists since the Nixon era. And this does absolutely nothing to combat IPO trolls, it just ensures a downward spiral of ever more predatory and destructive business practices further enriching the wealthy - but then that's the whole point.


The red spirit said:


> Oh, that's another poop I try to avoid stepping into. Vega was awful and anything beyond it was also not very well marketed. There were some stupid releases of cards that no one asked for (Radeon VII, RX 590). The last well marketed card was RX 480. Polaris in general was well marketed and to be honest Polaris was one of the best GPU launches that AMD had after Radeon HD series. And if we talk about crappy AMD marketing, did you know that AMD tried to sell SSDs and RAM? They called them Radeon for some reason. It couldn't get any more confusing. Imagine that era, buying Radeon card, installing Crimson drivers, installing catalyst motherboard drivers, setting up Radeon SSD and RAM, dealing with 990FX chipset. That was the worst mismatch of legacy and new marketing phrases and then product lines. In 2013-2014, who even knew what Catalyst was (excluding enthusiasts)? Oh and Overdrive, a leftover from Phenom II, which was ported to work with pre Vishera FX and was on life support with Vishera FX chips. Sometimes I really wonder, if AMD wouldn't be better off without marketing at all.


I think the RAM and SSD stuff was just an (ill-fated) attempt at getting some traction for the Radeon brand, which was overall seen as decently good at the time. They would have needed to put a _lot_ more effort into it for it to be even remotely successful though, as it mostly just came off as "hey, we put some stickers on these things, now they're a bit more expensive", which ... yeah. Vega was pretty good in principle; the problem - as with the 12900K! - was that they pushed clocks _way_ too high. A lower-clocked Vega 64 or 56 could come pretty close to the efficiency of contemporary Nvidia GPUs, though at lower performance tiers of course. The main issue was the 64 CU architectural limit of GCN, which put AMD into a corner until RDNA was ready, with literally the only way of increasing performance being pushing clocks crazy high. What made this issue much, much worse was the harebrained marketing, the "poor Volta" stuff and all that. If they had sold Vega as an upper-midrange alternative? It would have kicked butt, but it would have left them without a high-end product. Which, of course, they soon realized they couldn't make anyhow, so they could have saved themselves the embarrassment to begin with.

But that's exactly what we're seeing now, and what @AusWolf and I were talking about above: if Intel had only limited their chips to lower power levels at stock, they would have come off looking better, at least IMO. "Not as fast, but good, and pretty efficient" is a better look than "can keep up (and even win at times), but at 2x the power and impossible to cool". Vega and the recent generations of Intel CPUs are impressively analogous in that way.


AusWolf said:


> I see what you mean. Intel wants to look good with their CPU's performance, but end up looking bad with their (unlocked) power consumption. I think they could have saved the reputation of Comet Lake and Rocket Lake by enforcing stricter default values on motherboards. Since the performance crown went to AMD in those generations anyway, Intel could have clawed back on the efficiency and unlocking potential. By that I mean, you can look at 10/11th gen chips the way the media did: as power hogs that can be downgraded to sip less power and be slower - or you can look at them the way I do: as 65/125 W locked chips with some hidden potential that you can unlock, provided you have a good motherboard and a proper cooling setup. They're extremely versatile in this sense, and that's why I love them.


Yeah, that's a good approach, and I really wish Intel went that route. I guess they don't know how to be a good underdog. (Which, to be fair, took AMD the best part of a decade to learn as well.) Recent Intel chips at lower power, including laptops, are pretty great. It's when you let them run free, as Intel does with their "hey, we have a power spec, but you know, just ignore it if you want to" policies, that things go down the crapper.


AusWolf said:


> It isn't visible to the user anyway, unless you install Ryzen Master, which never really worked for me for some reason. Whenever I tried to install it separately, it said it's already installed, but I couldn't find it anywhere. If I'd ever managed to run it, I would have considered using it, but in this state, it's a piece of rubbish. I prefer controlling CPU-related things in the BIOS anyway - that way I can make sure that everything works the same way even after an OS reinstall, or if I ever get drunk enough to try a magical new Linux distro.


That sounds weird. Ryzen Master doesn't work on my 4650G (expected, as it isn't supported), but I've had no problems other than that. I also prefer BIOS control, but I'm learning more and more to enjoy letting the CPU control itself. The new Ryzen tweaking methods - PBO tuning, Curve Optimizer, etc. - are a really great compromise: the CPU controls itself with a granularity and adaptability that a human could never achieve, yet you can still tweak and optimize things past stock.


AusWolf said:


> I know, but Intel somehow still manages to report relatively normal idle clocks. On Ryzen, it's also a matter of power plan, I think. When I switched to "maximum energy savings", my Ryzen CPUs did what I would call a proper idle - that is, sub-20 W power consumption and clocks in or near the 1 GHz range. The problem was that whenever I fired up something that needed power, the speed ramp-up was so gradual that a few seconds had to pass before the CPU got up to full speed. It was enough to screw with your Cinebench score or game loading times. I've never seen anything like this before. In "balanced" and "maximum performance" mode, the switch between speed states is instantaneous (as it should be), but idle clocks are reportedly all over the place with power consumption in the 25-30 W range. Considering my PCs idle time, neither is ideal, imo. I just couldn't find the perfect balance, no matter how much I tried.


Intel's boost systems are much, much simpler than AMD's though. Slower, less dynamic, less opportunistic, and with less granularity. So, coming from my relatively limited understanding of these things, that seems logical to me. I've never really paid much attention to my idle power beyond noticing that it reads much higher in HWiNFO and similar tools than in AMD's official tools, which ... well, it's weird and sub-optimal, but ultimately just tells me that if I want an idle power measurement I need to get my power meter out.


AusWolf said:


> I agree, and I hope so too.  I love small form factor builds, and I don't want to give up building them just because IHS design can't keep up with chip manufacturing technologies.


Same. Given how easy to cool AMD's APUs are I still have hope, though it'll sure be interesting to see how more exotic packaging like 3D cache affects these things.


AusWolf said:


> High 60s temp on a 5800X tuned to 120 W sounds quite good, but considering that you need a 280 mm rad to achieve that... ugh.  My 11700 set to 125 W runs at 75 °C max in Cinebench with a Shadow Rock LP. In games, it's in the high 60s as well.


It bears repeating that the Aquanaut is a mediocre CPU block at best though. It has a DDC pump, so flow is decent, but it has a reverse-flow layout (i.e. it sucks water through the microfins rather than pushing it down and through them), which generally performs worse, and it doesn't have the best flow characteristics either. I could probably drop another 10 degrees with a top-of-the-line water block, but then I'd need somewhere to mount my pump. So for me it's fine. Gaming temperatures are definitely better, in the 40s or 50s depending on how much heat the GPU is dumping into the loop, 60s at most under heavy loads. I would really love to see someone test a bunch of different air coolers on MCM Ryzen with those offset mounting kits you can buy, just to see how much the off-center die placement affects these things.


----------



## The red spirit (Nov 11, 2021)

Valantar said:


> Well, that depends on your expectations. I'm perfectly fine with playing games at a mix of 900p and 1080p at mid-low settings on my 4650G HTPC. It's not as good as playing them on my main PC, obviously, but that depends a lot on the game as well. Rocket League at 900p ~90fps on the TV plays great with FreeSync, and is more enjoyable than on my main PC, even if that runs a much higher frame rate, detail level and resolution.


I meant games more like AAA titles - something like GTA 5 or RDR2. It would perform well in GTA 5 (it's slower than the GTX 650 Ti, which I'm familiar with), but in RDR2 it wouldn't. Maybe at 720p low-medium. And it's much easier to speak well of APUs if you don't use one as your only system - they look quite good - but if you used one as your daily system, you would soon change your opinion about them.



Valantar said:


> Again: depends on your expectations. And not all regions have well functioning used components markets. That also depends what you're looking for, of course. And you keep making value comparisons between new parts and used ones, which ... well, used parts will _always_ win. That's a given. You don't buy a brand-new APU setup if value is your _only_ priority. You don't buy _anything_ brand new if value is your only priority.


I disagree: if you buy cheap and long-lasting hardware, it makes sense to do that. Also, never forget the reliability of computers; they tend to become increasingly unreliable after their first decade. There are many risks with buying used hardware. But if that means avoiding something like a Ryzen APU, eh, it may be a reasonable choice.



Valantar said:


> It's not much of a surprise given the price hikes and how the used markets across the world look right now - there isn't much use in making videos promoting buying cheap used gear to game on when that gear is either not cheap or not available any longer.


I expected a lot more out of them. They could have made a killing during these times if they had started survival guides and taught how to be thrifty and hunt for deals in unusual places. And there were some deals. The RX 550 was mostly unaffected by C19 for a very long time, and it wasn't a completely unbearable survival card. There was also the GT 1030. They could have just posted more content about low-spec-friendly games, and that would have been a lot better than a hiatus. And in the ultra-low-end market nothing has changed, so they could have made some videos about that too. They are really not that badly affected by all this compared to more middle-class channels, which lost their lifeblood. Low-end gamers are far more resilient.




Valantar said:


> Those Zhaoxin CPUs are ... well, they're a lot better than what VIA has had before, but they're not great, with low clock speeds and ST performance behind an AMD A10-9700. They could absolutely get some massive cash infusion and suddenly turn into a viable third X86 vendor, but that's highly unlikely - especially now that Intel is buying large parts of VIA's X86 CPU design team.


Oh well, I kinda wanted the third player in this industry. Zhaoxin could have been a good replacement for Cyrix.




Valantar said:


> This is not new. IIRC it was made into law in the early 90s, and was a dream of neoliberal economists since the Nixon era. And this does absolutely nothing to combat IPO trolls, it just ensures a downward spiral of ever more predatory and destructive business practices further enriching the wealthy - but then that's the whole point.


But doesn't it hurt companies, their long-term stability and long-term profitability?




Valantar said:


> I think the RAM and SSD stuff was just an (ill-fated) attempt at getting some traction for the Radeon brand, which was overall seen as decently good at the time. They would have needed to put a _lot_ more effort into it for it to be even remotely succesful though, as it mostly just came off as "hey, we put some stickers on these things, now they're a bit more expensive", which ... yeah. Vega was pretty good in principle, the problem - as with the 12900K! - was that they pushed clocks _way_ too high. A lower clocked Vega 64 or 56 could come pretty close to the efficiency of contemporary Nvidia GPUs, though at lower performance tiers of course. The main issue was the 64CU architectural limit of GCN, which put AMD into a corner until RDNA was ready, with literally the only way of increasing performance being pushing clocks crazy high. What made this issue much, much worse was the harebrained marketing, the "poor Volta" stuff and all that. If they had sold Vega as an upper midrange alternative? It would have kicked butt, but it would have left them without a high end product. Which, of course, they soon realized that they couldn't make anyhow, so they could have saved themselves the embarassment to begin with.


Vega was fine, but the marketing was awful. And it wasn't just consumer Vega that got burned by awful PR, but also the Vega Frontier Edition and Pro SSG. Ever since AMD got rid of the FirePro branding, their enterprise cards have been a mess. Their marketing is awful or nonexistent.




Valantar said:


> But that's exactly what we're seeing now, and what me and @AusWolf were talking about above: if Intel had only limited their chips to lower power levels at stock, they would have come off looking better, at least IMO. "Not as fast, but good, and pretty efficient" is a better look than "can keep up (and even win at times), but at 2x the power and impossible to cool". Vega and the recent generations of Intel CPUs are impressively analogous in that way.


Maybe. The main problem with Vega was that it was meant to compete with an earlier Nvidia generation, and it did that well, but after tons of delays it arrived as a patched-up mess and had to compete with much better cards than it was designed for. To make matters worse, the 1080 Ti was also unusually fast.


----------



## Valantar (Nov 12, 2021)

The red spirit said:


> I meant games more like AAA titles. Something like GTA 5, RDR2. It would perform well in GTA 5 (it's slower than GTX 650 Ti, which I'm familiar with), but in RDR2 it wouldn't. Maybe at 720p low-medium. And it's much easier to speak of APUs, if you don't use it as your only system, they look quite good, but if you used it as daily system, you will soon change your opinion about them.


Again: depends on your expectations. If you are expecting to run an APU and play the newest AAA games at anything but rock-bottom settings, then yes, you are going to be disappointed. But a moderately priced APU like the 5600G still outpaces a "cheap" dGPU like the GT 1030 across a wide range of games, both in 1080p and 720p. It can absolutely still provide a good gaming experience - but clearly not a high resolution, high FPS recent AAA gaming experience. But nothing in that price range can.


The red spirit said:


> I disagree, if you buy cheap and long lasting hardware, it makes sense to do that. Also never forget reliability of computers, they tend to become increasingly unreliable after their first decade. There are many risks with buying used hw. But if that means avoiding something like Ryzen APU, eh it may be a reasonable choice.


Of course there are. But now you are bringing a third variable into play - reliability. Between price, performance and reliability, you typically get to pick two. There's always a balance. And, of course, an APU gives you a really fast CPU and a great basis for future upgrades. Obviously it depends what you're looking for, but they really aren't bad.


The red spirit said:


> I expected a lot more out of them. They could have made a killing during these times, if they started survival guides and taught how to be thrifty and hunting for deals in unusual places. And there were some deals. RX 550 for a very long time was mostly unaffected by C19 and it wasn't completely unbearable survival card. There also was GT 1030. They could have just posted more content about low spec friendly games and that would have been a lot better than hiatus. And at ultra low end market nothing has changed, so they could have made some videos about that too. They are really not that badly affected by all this compared to more middle class channels, which lost their lifeblood. Low end gamers are far more resilient.


Depends on their skill sets though. Also, how much useful advice is there to give when the used market is being scoured by scalpers and desperate GPU buyers anyhow? How many videos can you make saying "be patient, be on the lookout"? 'Cause that's about all they can do.


The red spirit said:


> Oh well, I kinda wanted the third player in this industry. Zhaoxin could have been a good replacement for Cyrix.


It would have been cool, but the cash infusion required to be truly competitive is likely too large for anyone to take a chance on.


The red spirit said:


> But doesn't it hurt companies, their long term stability and long term profitability?


Not if you buy into the harebrained idea of infinite economic growth (and the closely related idea of trickle-down economics), which underpins neoliberal economic thinking. Reality strongly contradicts this, but that sadly doesn't have an effect on people when there are short-term economic gains to be had.


The red spirit said:


> Maybe, the main problem with Vega was that it was meant to compete with earlier nV gen cards and they did that well, but tons of delays and it arrived as patched up abortion and had to compete with much better cards than it was meant for. To make matters worse, 1080 Ti was also unusually fast.


Yep, they ended up competing against a particularly good Nvidia generation, but I don't think the delays hurt them that much - it wasn't _that_ late. And the 1080 launched a full year before the Vega 64, after all. The main issue was that they insisted on competing in the high end with a GPU that simply didn't have the necessary hardware resources. The Fury X with 64 CUs @ 1.05GHz competed against the 2816 CUDA core @ 1.1GHz 980 Ti. Then, in the next generation, the still-64 CU Vega 64 was supposed to compete with the slightly lower core count (2560) but _much_ higher clocked @ 1.7GHz 1080. The Vega arch just didn't scale that well with frequency, which put them in the dumb position of burning tons of power for at best similar performance, though typically a tad worse. If they had backed off the clocks by 200MHz or so (and thus left some OC potential for those who wanted it too), they could have had a _killer_ alternative in between the 1070 and 1080 instead. Sure, it would have had to sell cheaper, but it would have looked so much better. But I think just the idea of "only" being able to compete with the 1080 (and not the Ti) was too much of a loss of face for the RTG to stomach stepping down even further. Which just goes to show that a short-term "good" plan can be pretty bad in the long term.
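To put rough numbers on that 64 CU corner, here's a back-of-the-envelope theoretical FP32 throughput comparison (shaders × 2 FLOPs per clock for FMA × clock). The shader counts are the well-known specs; the clocks are approximate boost figures, and real-game performance obviously doesn't track raw TFLOPS:

```python
# Theoretical FP32 throughput in TFLOPS: shaders * 2 FLOPs/clock (FMA) * clock.
# GCN CUs carry 64 shaders each, hence 64 * 64 = 4096 for Fury X / Vega 64.
def tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000

gpus = {
    "Fury X (64 CU)":  tflops(64 * 64, 1.05),  # ~8.6
    "980 Ti":          tflops(2816, 1.10),     # ~6.2
    "Vega 64 (64 CU)": tflops(64 * 64, 1.55),  # ~12.7 at ~1.55GHz boost
    "GTX 1080":        tflops(2560, 1.73),     # ~8.9
}
for name, tf in gpus.items():
    print(f"{name}: {tf:.1f} TFLOPS")
```

The point the numbers make: with CU count frozen at 64, the only lever AMD had left between Fury X and Vega 64 was clock speed, and GCN paid for those extra ~500MHz in power.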


Oh, for anyone curious about E-core gaming performance, der8auer did a short test. He has the E-cores OC'd to 4.1GHz (so apparently early reports of E-cores not being overclockable were wrong), but overall, performance is surprisingly decent. Still much slower than the P-cores, obviously, but definitely workable. No doubt they're being hurt by their higher core-to-core and core-to-L3/DRAM latencies, but for what they are and the power draws seen, this really isn't bad.


----------



## The red spirit (Nov 12, 2021)

Valantar said:


> Again: depends on your expectations. If you are expecting to run an APU and play the newest AAA games at anything but rock-bottom settings, then yes, you are going to be disappointed. But a moderately priced APU like the 5600G still outpaces a "cheap" dGPU like the GT 1030 across a wide range of games, both in 1080p and 720p. It can absolutely still provide a good gaming experience - but clearly not a high resolution, high FPS recent AAA gaming experience. But nothing in that price range can.


My problem is that people call APUs a value purchase, but I never really saw much value in them. How much value could there possibly be in a rock-bottom GPU, which is just barely acceptable today and will be useless in a year or two? For me that is not value, but the epitome of buying cheap shit, then buying it again more often, and just wasting cash. For you it works, so I guess you get reasonable value, but much of the media claims an entirely different thing. A value combo was the i3 10100F + RX 580. At 300 EUR it can run anything at 1080p 60 fps and usually high settings. I personally find that the RX 580 can run anything at 1440p, but I'm willing to trade settings for that. Anyway, this is a value purchase, since for a modest budget you get a combo that plays games well today and will do so for 4 years; you can even attempt to scrape by after those 4 years by lowering the resolution and get even more value out of that combo. Meanwhile, there is the 5600G, which costs about the same and is already nearly at the scraping-by phase. You may be able to use it by cutting resolution, settings and fps expectations for 3 years, but after that, it just doesn't have any potential. So it's very obvious that the i3 and RX 580 combo is much better value. Now you can't buy it new for that kind of money, but even before 'rona, the media overhyped APUs a lot. Apparently, Linus once tried to offer some wisdom for budget buyers:

But their community wasn't having any of it, and after dealing with the consequences, he made one toned-down video later; now he has more or less started to say that cheap shit may be good (cheap shit = cheap cards and APUs). Low Spec Gamer advised buying the 200GE, but his advice is more for those fucked-up markets, and it is actually adequate: the 200GE sucks, but hey, if you can live with it for a while, it's so cheap to replace with something also cheap later, and in fucked-up countries it's a better deal than paying a tithe to some scalper for an overpriced Core 2 Duo. Or better yet, just use what you have with low-spec hacks. If you pay nothing and get something out of it, your value is technically infinite.

But if you are in a country with a half-decent economy and second-hand market, those APUs don't really make much sense to get, unless for an HTPC, strictly 720p gaming or retro gaming.

BTW, I'm not buying that the GT 1030 is now dethroned. Vega 11 was historically slower than the GT 1030. That review doesn't state which GT 1030 they used, so it could have been that e-waste DDR4 version.




Valantar said:


> Of course there are. But now you are bringing a third variable into play - reliability. Between price, performance and reliability, you typically get to pick two. There's always a balance. And, of course, an APU gives you a really fast CPU and a great basis for future upgrades. Obviously it depends what you're looking for, but they really aren't bad.


I'm not sure an APU gives you a good CPU. Didn't they have gimped PCIe lanes to make the iGPU work? If that's true, then they kind of fail as good CPUs for a later graphics card upgrade, especially when there are gimped cards like the RX 6600. And then add into the mix that you may have a PCIe 3 board, and you will be looking at some massive bottlenecking.




Valantar said:


> Depends on their skill sets though. Also, how much useful advice is there to give when the used market is being scoured by scalpers and desperate GPU buyers anyhow? How many videos can you make saying "be patient, be on the lookout"? 'Cause that's about all they can do.


I think there was potential and competition was really low. The big thing would have been low-spec-friendly games and finding ways to enjoy older, cheaper and weaker hardware. Looking for deals on various markets would be secondary. It's really not that tough; that's more or less what Green Ham Gaming used to be big on.




Valantar said:


> It would have been cool, but the cash infusion required to be truly competitive is likely too large for anyone to take a chance on.


What about commie government? Don't they fund various projects?




Valantar said:


> Not if you buy into the harebrained idea of infinite economic growth (and the closely related idea of trickle-down economics), which underpins neoliberal economic thinking. Reality strongly contradicts this, but that sadly doesn't have an effect on people when there are short-term economic gains to be had.


But infinite growth is kinda real, just complicated and volatile. If you look at real GDP, that seems to be true; it's just that people have no idea what nominal and real GDP are, or worse yet, use something really stupid to evaluate the economy, something like stock markets. Also, past some growth-rich periods, after a while it ends, and infinite growth is just close to 1% with some fluctuations. And beyond real GDP, I think buying power, median wage and some life-quality indexes + HDI should be taken into account. I haven't looked at actual data, but HDI seems to have been slowly rising in first-world countries, and so has life quality, but wages grow slowly, and the actual purchasing power of people + real GDP seem to be either growing at a minuscule rate or slowly eroding, and they are certainly volatile too. But I guess those people (I have no idea who they are, so I'm just guessing they are some retarded liberals) who claim things about infinite growth don't really talk about that, do they?

On that note, I looked at AMD's stock. It seems to perform really well, but if you look at their profitability graphs, it seems AMD made an absolute killing in 2020. They made times more profit than in 2019 and 2018 combined. Somehow I don't really think they are overpricing their stuff just because of tough times; I think they are ripping us off while they can, and Ryzen prices should be at least 20% lower.



Valantar said:


> Yep, they ended up competing against a particularly good Nvidia generation, but I don't think the delays hurt them that much - it wasn't _that_ late. And the 1080 launched a full year before the Vega 64, after all. The main issue was that they insisted on competing in the high end with a GPU that simply didn't have the necessary hardware resources. The Fury X with 64 CUs  @ 1.05GHz competed against the 2816 CUDA core @ 1.1GHz 980 Ti. Then, in the next generation, the still-64 CU Vega 64 was supposed to compete with the slightly lower core count (2560), but _much_ higher clocked @1.7GHz 1080. The Vega arch just didn't scale that well for frequency, which put them in the dumb position of burning tons of power for at best similar performance, though typically a tad worse. If they had backed off the clocks by 200MHz or so (and thus left some OC potential for those who wanted it too), they could have had a _killer_ alternative in between the 1070 and 1080 instead. Sure, it would have had to sell cheaper, but it would have looked so much better. But I think just the idea of "only" being able to compete with the 1080 (and not the Ti) was too much of a loss of face for the RTG to stomach stepping down even further. Which just goes to show that a short-term "good" plan can be pretty bad in the long term.


The more time goes on, the more I think RTG is just retarded at doing business. Since the HD 7000 and Rx 2xx series, they just keep overhyping incompetent products and hoping they sell. Old-school ATi seemed to do business much better and certainly didn't try to pull "poor Volta" kind of crap. It seems RTG inherited some of ATi's problems, like still-poor reliability of drivers and software and sometimes outright zero quality control (RX 5000 series), plus AMD's bad qualities, that is, overhyping various crap even when it doesn't deliver and generally awful PR. They are a rich tech company, but their actions are no better than a moody teen's who is trying to make his homework seem better than it is.


----------



## AusWolf (Nov 12, 2021)

The red spirit said:


> My problem is that people call APUs a value purchase, but I never really saw much value in them. How much value could there possibly be in rock bottom GPU, which is just barely acceptable today and will be useless in a year or two? For me that is not a value, but an epitome of buying cheap shit and keep buying it more often and then just wasting cash.


APUs aren't primarily meant for AAA gaming. With a modern APU, you essentially get a CPU and a GPU that has all the video decoding capabilities you need in a HTPC, usually for the price of a CPU alone. How is that not awesome value? 

AMD's APUs are great if you include some casual/old-school gaming too. The only things that kept me from going that route with my HTPC are the fact that I had a spare R3 3100 lying around, and that the 5300G isn't commercially available. But oh boy, I'd love to have one!

On Intel's side, basically everything is an APU except for the F SKUs, which are a terrible value, imo. For £10 more, you get an iGPU that is great for diagnostics, or just having something to hook your monitor up to while you're waiting for your new GPU, or for extra display outputs - that's what I use mine for.


----------



## The red spirit (Nov 12, 2021)

AusWolf said:


> APUs aren't primarily meant for AAA gaming. With a modern APU, you essentially get a CPU and a GPU that has all the video decoding capabilities you need in a HTPC, usually for the price of a CPU alone. How is that not awesome value?


Then why don't they all have just Vega 3 instead of 7 and 11?



AusWolf said:


> AMD's APUs are great if you include some casual/old-school gaming too. The only things that kept me from going that route with my HTPC are the fact that I had a spare R3 3100 lying around, and that the 5300G isn't commercially available. But oh boy, I'd love to have one!


Or are they actually? By great, I guess you mean solid 60 fps, at least 1080p and at least high settings. I can say that the GTX 650 Ti could actually run most old games well, but not all of them. And then there is Crysis, which the GTX 650 Ti only ran properly at 1024x768 medium. By solid I mean not dipping below 40 fps. Cards like the GT 730 GDDR5 can't do that; the GT 730 struggles in Far Cry. The GT 1030 might be the lowest-end card that is actually great at old-school games, and that's Vega 11 territory.

The GTX 650 Ti also managed old-school games with nVidia control panel enhancements, but only in undemanding games did it reach a solid 60 fps with some supersampling. The RX 580 can do some supersampling (SSAA) in Colin McRae 2005 (up to 4x at 1280x1024), but it can't provide perfect performance in Unreal Tournament 2004, though it runs it well with adaptive 8x MSAA.

I will lower the bar somewhat and say that old games running at 1080p, solid 60 fps and at least high settings (minus odd stuff like PhysX, which still destroys performance) is a good experience, and a card or iGPU that can do that is great. But I really doubt that Vega 7 or 11 can pull that off. Solid 60 fps to me is 0.1% lows of no less than 45 fps and 1% lows of no less than 55 fps. Games like F.E.A.R. are old, but still demanding on lower-end hardware. Vega 7 most likely can't run it well at high settings at 1080p. Far Cry 2 most likely wouldn't run well on Vega 11. UT3 may not even run well at medium settings. Crysis wouldn't be playable at anything above low at 1024x768, and Mafia II likely wouldn't run well at anything above 1080p low.
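For what it's worth, that "1% low / 0.1% low" criterion can be made concrete. A minimal sketch of one common method (take the n-th worst frame time for the given fraction of frames; actual benchmark tools differ in the details, some average the worst frames instead):

```python
def percentile_low_fps(frametimes_ms, pct):
    """FPS at the worst `pct` fraction of frames, e.g. pct=0.01 for the 1% low.
    One common method: take the n-th slowest frame time as the cutoff."""
    worst = sorted(frametimes_ms, reverse=True)  # slowest frames first
    n = max(1, int(len(worst) * pct))            # how many frames count as "lows"
    return 1000.0 / worst[n - 1]                 # frame time in ms -> fps

# 99 smooth ~60 fps frames (16.7 ms each) plus one 33.3 ms hitch:
frames = [16.7] * 99 + [33.3]
print(round(percentile_low_fps(frames, 0.01)))   # -> 30
```

So a run that averages 60 fps can still fail the "1% lows no less than 55 fps" bar on the strength of a single hitch per hundred frames, which is why the metric is stricter than average fps.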

All in all, iGPUs most likely aren't great for older games. They are not great, and their potential is limited. Instead of saying they are great, old or undemanding games are just all they can do without lagging badly. Regarding old games, there might be some odd bugs and glitches on modern hardware, which also contributes to them not being great. Another thing is that something like Vega 7 might actually be beaten by a GT 1010. You might say it's pointless to compare a free iGPU with a dedicated card, but back in the socket FM2 days, APUs had no problem wiping the floor with GT x10 GDDR5 cards. A cheap A8 APU had several times the performance of a GT 710 GDDR5. Even an A6 or A4 APU may have been faster than a GT 710 GDDR5. But even then they weren't really worth it.

The budget banger was the cheapest Athlon X4 and a Radeon HD 7750. That meant the Athlon X4 740 (MSRP of 71 USD) and then 109 USD for the card. That's 180 USD for both, and for a bit less you could get the 6800K, which was weaker. If you add 20-30 USD to the budget, you get the 7770, which had about 100 more stream cores, an easy 20% performance improvement. Or for 10 USD more, you could get the Athlon X4 750K and overclock it by 600 MHz on the stock cooler, which beats the 760K. Unlike the APU, the Radeon HD 7750 can still run some games at 1080p low-medium and 60 fps; the APU was already struggling with that in 2014. And that's a good scenario. All dual-core APUs were e-waste since the day they were made, as their CPU portion was simply too slow to play games well and their GPU portion was usually half that of an A10 or A8. I personally own an A4 6300 and an A6 7400K and have benched them extensively. Neither is great or really acceptable in the games of their era, and they are really slow in day-to-day stuff; they aged like milk. And there's no argument to make about adding a dedicated card later either.

There was some value in the cheapest quad-core APUs as cheap daily drivers for work that could run old games, but if you wanted to play games in any capacity, the Radeon HD 7750 was a complete no-brainer. Along with the 7750, there was also the GTX 650.

All in all, I stand by my argument that APU value was always questionable for gamers, even cheap gamers. And there wasn't much point in pinching pennies that badly, since even a modest library of games would have made those savings on hardware look silly and insignificant, yet they would have made your gaming experience noticeably worse, if not outright garbage. That is, unless you pirate all your games; then maybe penny-pinching makes sense. But then again, the 7750 and 7770 were selling with codes for several legit AAA games. So despite saving a few dollars on hardware at the cost of gaming experience, APUs still made little sense if you wanted to stay legal.




AusWolf said:


> On Intel's side, basically everything is an APU except for the F SKUs, which are a terrible value, imo. For £10 more, you get an iGPU that is great for diagnostics, or just having something to hook your monitor up to while you're waiting for your new GPU, or for extra display outputs - that's what I use mine for.


Good for you; chips with integrated graphics are scalped in Lithuania too. They can easily cost 40% more than the F variants and therefore barely make sense, but during these times, it seems that having a display adapter is getting expensive. I guess that beats buying a brand new GT 710.


----------



## Valantar (Nov 12, 2021)

The red spirit said:


> My problem is that people call APUs a value purchase, but I never really saw much value in them. How much value could there possibly be in a rock-bottom GPU, which is just barely acceptable today and will be useless in a year or two? For me that is not value, but the epitome of buying cheap shit, then buying it again more often, and just wasting cash. For you it works, so I guess you get reasonable value, but much of the media claims an entirely different thing. A value combo was the i3 10100F + RX 580. At 300 EUR it can run anything at 1080p 60 fps and usually high settings. I personally find that the RX 580 can run anything at 1440p, but I'm willing to trade settings for that. Anyway, this is a value purchase, since for a modest budget you get a combo that plays games well today and will do so for 4 years; you can even attempt to scrape by after those 4 years by lowering the resolution and get even more value out of that combo. Meanwhile, there is the 5600G, which costs about the same and is already nearly at the scraping-by phase. You may be able to use it by cutting resolution, settings and fps expectations for 3 years, but after that, it just doesn't have any potential. So it's very obvious that the i3 and RX 580 combo is much better value. Now you can't buy it new for that kind of money, but even before 'rona, the media overhyped APUs a lot. Apparently, Linus once tried to offer some wisdom for budget buyers:


You're making some weird combinations here, a 2020 CPU with a 2017 GPU? Those have AFAIK never been sold new at the same time. The $200 GPU pairing for a 10100 would be an RX 5500 or GTX 1650, neither of which is particularly impressive. You're right that they don't specify the type of 1030, and I've seen some weird stuff from Tom's throughout the years, but I still expect them to be sufficiently competent to use the G5 version of the 1030. But even if it is, that would at best mean that the G5 1030 performs about on par with this. Good luck getting a 6c12t CPU + a 1030 G5 for $260.

So, again, either you're arguing for buying used (those 580s you keep bringing up), which is of course a valid argument but _entirely_ different value proposition still, or you have to admit that currently there are no alternatives even remotely close in price and performance. Of course that's partly down to the current market situation, but that affects everything equally.


The red spirit said:


> But their community wasn't having any of it, and after dealing with the consequences, he made one toned-down video later; now he has more or less started to say that cheap shit may be good (cheap shit = cheap cards and APUs). Low Spec Gamer advised buying the 200GE, but his advice is more for those fucked-up markets, and it is actually adequate: the 200GE sucks, but hey, if you can live with it for a while, it's so cheap to replace with something also cheap later, and in fucked-up countries it's a better deal than paying a tithe to some scalper for an overpriced Core 2 Duo. Or better yet, just use what you have with low-spec hacks. If you pay nothing and get something out of it, your value is technically infinite.


But the 200GE was never meant as anything but bargain-basement. It literally launched at $60. It was never a _good_ CPU. It paired well with bargain-basement budgets, allowing you to build a PC for next to nothing yet have a warranty. That's about what you can expect for $60.


The red spirit said:


> But if you are in a country with a half-decent economy and second-hand market, those APUs don't really make much sense to get, unless for an HTPC, strictly 720p gaming or retro gaming.


Unless you want a warranty and/or an upgrade path.


The red spirit said:


> BTW I'm not buying that GT 1030 is now dethroned. Vega 11 historically was slower than GT 1030. That review doesn't state which GT 1030 they used, so it could have been that e-waste DDR4 version.


Vega 11 was paired with old, slow Zen cores, and was clocked much lower than 4000 and 5000-series APUs. This is not an equal comparison, as the newer generations are considerably faster.


The red spirit said:


> I'm not sure an APU gives you a good CPU. Didn't they have gimped PCIe lanes to make the iGPU work? If that's true, then they kind of fail as good CPUs for a later graphics card upgrade, especially when there are gimped cards like the RX 6600. And then add into the mix that you may have a PCIe 3 board, and you will be looking at some massive bottlenecking.


You really should stop commenting on things you're not up to date on. APUs after the 3000-series have 16 PCIe 3.0 lanes for the GPU, and both 4000 and 5000-series (Zen2 and Zen3, but both with less L3 cache than desktop CPUs) perform very well. The 5600G is slower than a 5600X, but not by much at all. So for those $250 you're getting a kick-ass CPU with an entry-level GPU included. Also, gimped? A 1-2% performance loss is not noticeable.


The red spirit said:


> I think that there was potential and competition was really low. The big thing would have been low spec friendly games and finding ways to enjoy older, cheaper and weaker hardware. Looking for deals on various market would be secondary. It's really not that tough, that's more or less what Green Ham Gaming used to be big on.


They would have been drowned out by the YouTube algorithm for making repetitive content. IMO this was likely completely out of their hands. Not to mention just how soul-crushing that content would likely be to make, week after week of the same stuff. Good on them for finding better things to do.


The red spirit said:


> What about commie government? Don't they fund various projects?


I doubt they have any interest in spending billions on financing a CPU architecture where they have to pay licensing fees to a US company, so no. Besides, they have their own efforts.


The red spirit said:


> But infinite growth is kinda real, just complicated and volatile. If you look at real GDP, that seems to be true; it's just that people have no idea what nominal and real GDP are, or worse yet, use something really stupid to evaluate the economy, something like stock markets.


Oh dear god no. Besides the blindingly obvious point that infinite economic growth in a world of finite physical resources is literally, logically and physically impossible, the idea that we're seeing "infinite" growth in current economies is an out-and-out lie. Firstly, our global economic system is predicated upon rich countries exploiting the poor through labor and material extraction. The "growth" of the past four-five centuries is largely extracting resources in poor areas and processing and selling them in rich ones. Which, in case it isn't obvious, means the poor areas get _poorer_ - just that nobody wants to count this, so it generally isn't talked about in these contexts. The problem is that eventually, some poor areas manage to rid themselves of their oppressors/colonial regimes/corporate overlords and gain some autonomy, which moves the exploitation elsewhere. Recently we've seen this in how Chinese workers are refusing to work under the horrible conditions they have endured for the past couple of decades, which forces a lot of manufacturing to move to new countries where the workforce is as of yet not sufficiently worn down and pissed off. But what happens when this cycle repeats again? At some point you run out of new poor countries to move manufacturing to, just as you run out of poor areas to strip of their resources.

Also, GDP is deeply tied into both colonialist and imperialist trade practices as well as financial "industries" that "create" value largely by shuffling numbers around until they look bigger. And, of course, it is possibly the most actively manipulated metric in the world, given how much of a country's future is dependent upon it - unless it increases by ~7% a year you're looking at massive job losses and a rapid collapse of the economy, which forces the never-ending invention of new ways of making the numbers look bigger. GDP is a deeply problematic measure in many ways.

This is of course all looking past the boom-bust-cycle that's built into our current economic systems.


The red spirit said:


> Also, past some growth-rich periods, after a while it ends, and infinite growth is just close to 1% with some fluctuations. And beyond real GDP, I think buying power, median wage and some life-quality indexes + HDI should be taken into account. I haven't looked at actual data, but HDI seems to have been slowly rising in first-world countries, and so has life quality, but wages grow slowly, and the actual purchasing power of people + real GDP seem to be either growing at a minuscule rate or slowly eroding, and they are certainly volatile too. But I guess those people (I have no idea who they are, so I'm just guessing they are some retarded liberals) who claim things about infinite growth don't really talk about that, do they?


Nope, because talking about those things undermines the ideology they are selling to voters, with ideas like "low taxes are good for you (never mind that the infrastructure you depend upon is falling apart)" and "cutting taxes for the rich creates jobs (despite literally all evidence contradicting this statement)". The wage stagnations we've seen in most of the Global North in the past 4-5 decades, plus the ever-expanding wealth gap, is a product of a conscious and willed process. And sadly people keep voting for these idiots.


The red spirit said:


> On that note, I looked at AMD's stock. It seems to perform really well, but if you look at their profitability graphs, it seems AMD made an absolute killing in 2020. They made times more profit than in 2019 and 2018 combined. Somehow I don't really think they are overpricing their stuff just because of tough times; I think they are ripping us off while they can, and Ryzen prices should be at least 20% lower.


Yeah, as I've said, they're actively focusing on "high ASP" (as in: expensive) components during the combined supply crunch and demand spike. Which is exactly why they're making a killing. There's absolutely a truth to them being supply limited, but they are also making conscious choices to only put out the most expensive and profitable products from what supply they have (as are Intel, as are Nvidia, etc., etc.).


The red spirit said:


> The more time goes on, the more I think RTG is just retarded at doing business. Since the HD 7000 and Rx 2xx series, they just keep overhyping incompetent products and hoping they sell. Old-school ATi seemed to do business much better and certainly didn't try to pull "poor Volta" kind of crap. It seems RTG inherited some of ATi's problems, like still-poor reliability of drivers and software and sometimes outright zero quality control (RX 5000 series), plus AMD's bad qualities, that is, overhyping various crap even when it doesn't deliver and generally awful PR. They are a rich tech company, but their actions are no better than a moody teen's who is trying to make his homework seem better than it is.


RTG is shut down though, they were integrated after Raja left AFAIK. And they do seem to have turned a corner since RDNA - yes, the 5700 XT was buggy and problematic, but they're making great architectures with a ton of potential, and their marketing has gone from "overpromise and underdeliver" to "pretty accurate for PR". I just hope they keep developing in this direction, but also that they don't suddenly start expecting these price levels to stay normal going forward. If that's the case, I'm just going full console gamer after this PC gets too slow.


----------



## The red spirit (Nov 12, 2021)

Valantar said:


> You're making some weird combinations here, a 2020 CPU with a 2017 GPU? Those have AFAIK never been sold new at the same time. The $200 GPU pairing for a 10100 would be an RX 5500 or GTX 1650, neither of which are particularly impressive.


The RX 580 is still selling new almost everywhere and is superior to the 1650 due to its 8 GB of VRAM and raw power advantage. The 1650 Super is closer to an RX 580 equivalent, but still lacks VRAM. The RX 5500 8GB is similar to the RX 580 in performance and VRAM.



Valantar said:


> You're right that they don't specify the type of 1030, and I've seen some weird stuff from Tom's throughout the years, but I still expect them to be sufficiently competent to use the G5 version of the 1030. But even if it is, that would at best mean that the G5 1030 performs about on par with this.


"Just buy it" -Tom's hardware, 2018.




Valantar said:


> Good luck getting a 6c12t CPU+a 1030G5 for $260.


That's not really impossible. The i5 10400F in Lithuania is 143.99 EUR and the cheapest GT 1030 G5 sells at 91.99 EUR, so the total is 235.98 EUR. After currency exchange to USD (https://www.xe.com/currencyconverter/convert/?Amount=235.98&From=EUR&To=USD), that's 270.05 USD. That's so close to your specified budget. If you downgrade the CPU to a Ryzen 1600 AF, the CPU cost is just 131.99 EUR, and with the GT 1030 the USD total is just 256.31. So, um, I fit into the budget and have two CPU options; the i5 is a bit faster than the 1600 AF. I'm pretty sure that in the USA there are more stores and potentially more deals to be found; you might be able to fit a 3600 into that budget. And of course, there are eBay and AliExpress with all those sweet sweet decommissioned Xeons selling for a pittance. I found an E5-2698 V3 for 149.88 USD (+ 19.39 USD shipping to LTU). That's cheaper than the i5 and the 1600 AF, but you are getting 16 Haswell cores with HT, unfortunately with a base clock of just 2.3 GHz and a boost clock of 3.6 GHz. Still, that's quite an amazing chip that fits into an APU budget.
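A quick sanity check of that arithmetic, using the ~1.1444 USD/EUR exchange rate implied by the totals above (a November 2021 figure, not today's rate):

```python
# Verify the EUR -> USD build totals quoted above.
RATE = 1.1444  # approx. USD per EUR implied by the post (Nov 2021)

def build_total_usd(parts_eur):
    """Sum part prices in EUR and convert to USD."""
    return round(sum(parts_eur) * RATE, 2)

i5_build = build_total_usd([143.99, 91.99])  # i5 10400F + GT 1030 G5
r5_build = build_total_usd([131.99, 91.99])  # Ryzen 1600 AF + GT 1030 G5
print(i5_build, r5_build)  # both land at or under the ~$260-270 mark discussed
```

The cent-level differences from the quoted 270.05/256.31 come from rounding the exchange rate.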




Valantar said:


> So, again, either you're arguing for buying used (those 580s you keep bringing up), which is of course a valid argument but _entirely_ different value proposition still, or you have to admit that currently there are no alternatives even remotely close in price and performance. Of course that's partly down to the current market situation, but that affects everything equally.


The i5 + 1030 G5 is close, but depending on how many lanes the 5600G/5700G has, maybe that's a better deal. Or maybe it's worth going YOLO with an Ali Xeon and hoping to overclock it a bit. If I found a more sensible 8C/16T Haswell Xeon, I could hunt for deals at Caseking.de. They had some sensibly priced RX 550s, 1050 Tis and 1650s. An Ali Xeon plus a 1050 Ti seems quite doable; the only problem is finding a motherboard. But for a 1050 Ti, it might be worth gambling.




Valantar said:


> But the 200GE was never meant as anything but bargain-basement. It literally launched at $60. It was never a _good_ CPU. It paired well with bargain-basement budgets, allowing you to build a PC for next to nothing yet have a warranty. That's about what you can expect for $60.


To be honest, I bought an Athlon X4 870K for 40 EUR. It's probably not as good as the Athlon 200GE, but so damn close. I once saw a new Ryzen 1200 going for 60 EUR, and it was new, not refurbished, with a cooler included. BTW, a new Pentium 4 630 goes for 10.04 EUR right now, what an awesome deal. No cooler or old-school sticker though.




Valantar said:


> Unless you want a warranty and/or an upgrade path.


Let's be honest here: if you bought an Athlon 200GE when it was new, survived with it this long and most likely have an A320 board, then you are capped at Ryzen 3000 (unless your board has extended support), and then is it really worth investing in a now dead-end platform, artificially capped to older chips? It might be a better idea to just buy a new platform entirely.




Valantar said:


> Vega 11 was paired with old, slow Zen cores, and was clocked much lower than 4000 and 5000-series APUs. This is not an equal comparison, as the newer generations are considerably faster.


You can overclock the old Vega 11, and for Vega 11, those Zen cores are fast enough.




Valantar said:


> You really should stop commenting on things you're not up to date on. APUs after the 3000-series have 16 PCIe 3.0 lanes for the GPU, and both 4000 and 5000-series (Zen2 and Zen3, but both with less L3 cache than desktop CPUs) perform very well. The 5600G is slower than a 5600X, but not by much at all. So for those $250 you're getting a kick-ass CPU with an entry-level GPU included. Also, gimped? A 1-2% performance loss is not noticeable.



> **PCIe lanes and Ryzen APU** (linustechtips.com): "I'm working on a parts list for a high speed PCIe storage solution, and I can't seem to wrap my head around PCIe lanes with onboard graphics. PCIe lanes are the link between a CPU and PCIe device(GPU), right? If the GPU is already on the CPU, does that take any PCIe lanes? Also, are PCIe lanes CP..."

Only x8 PCIe for the GPU, and older PCIe Gen 3 if you have a lower-end or older board. Now tell me, how much performance is lost by using a less-than-ideal PCIe generation and then, on top of that, cutting the lanes in half?
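To put rough numbers on that question, theoretical PCIe link bandwidth is just transfer rate × encoding efficiency × lane count. A quick sketch (the helper function is mine, not from any library):

```python
def pcie_bandwidth_gbs(gen: int, lanes: int) -> float:
    """Theoretical one-direction PCIe bandwidth in GB/s."""
    # (transfer rate in GT/s, line-code efficiency) per PCIe generation
    specs = {1: (2.5, 8 / 10), 2: (5.0, 8 / 10),
             3: (8.0, 128 / 130), 4: (16.0, 128 / 130)}
    rate, eff = specs[gen]
    # GT/s * efficiency = usable Gbit/s per lane; divide by 8 for GB/s
    return rate * eff * lanes / 8

print(round(pcie_bandwidth_gbs(3, 16), 2))  # full x16 Gen3 link -> ~15.75 GB/s
print(round(pcie_bandwidth_gbs(3, 8), 2))   # the APU's x8 Gen3 link -> ~7.88 GB/s
```

So an x8 Gen3 link gives you exactly half the raw bandwidth of x16 Gen3; whether a given game actually saturates either is a separate question that benchmarks have to answer.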



Valantar said:


> They would have been drowned out by the YouTube algorithm for making repetitive content. IMO this was likely completely out of their hand. Not to mention just how soul-crushing that content would likely be to make, week after week of the same stuff. Good on them for finding better things to do.


Or did they actually find anything better? Budget Builds Official seems happy to be back. I suspect they had COVID-related difficulties, or simply couldn't figure out how to make the content work.



Valantar said:


> I doubt they have any interest in spending billions on financing a CPU architecture where they have to pay licensing fees to a US company, so no. Besides, they have their own efforts.


I'm sticking with Zhaoxin; they have a good chance of success. There are many CPU-related IPO scams in China, and Zhaoxin is not one of them.


















If I had even half a working brain, I would just pour that money into Zhaoxin, who have been making x86 chips for years and aren't terribly behind AMD and Intel. They have some experience, albeit in a limited market, but that's still better than just giving it to scammers. And you are suggesting that a whole new architecture with no software, hardware, or ecosystem is a better deal? It clearly isn't. Better to invest in Zhaoxin and then claim it is a fully Chinese architecture. If we are looking at a different architecture, then ARM is a fine alternative if x86 becomes unviable; Huawei can make some top-tier chips. There's also RISC-V if the worst comes.




Valantar said:


> Oh dear god no. Besides the blindingly obvious point that infinite economic growth in a world of finite physical resources is literally, logically and physically impossible, the idea that we're seeing "infinite" growth in current economies is an out-and out lie. Firstly, our global economic system is predicated upon rich countries exploiting the poor through labor and material extraction. The "growth" of the past four-five centuries is largely extracting resources in poor areas and processing and selling them in rich ones. Which, in case it isn't obvious, means the poor areas get _poorer_ - just that nobody wants to count this, so it generally isn't talked about in these contexts. The problem is that eventually, some poor areas manage to rid themselves of their oppressors/colonial regimes/corporate overlords and gain some autonomy, which moves the exploitation elsewhere. Recently we've seen this in how Chinese workers are refusing to work under the horrible conditions they have endured for the past couple of decades, which forces a lot of manufacturing to move to new countries where the workforce is as of yet not sufficiently worn down and pissed off. But what happens when this cycle repeats again? At some point you run out of new poor countries to move manufacturing to, just as you run out of poor area to strip of their resources.


You don't mention the service sector at all, which may use few if any physical resources, and it's a huge sector in advanced economies. As long as we have money and an appetite for services, they can keep serving us, and our wants very obviously exceed our money. It may not be infinite exactly, but its potential is currently unknown.



Valantar said:


> Also, GDP is deeply tied into both colonialist and imperialist trade practices as well as financial "industries" that "create" value largely by shuffling numbers around until they look bigger. And, of course, it is possibly the most actively manipulated metric in the world, given how much of a country's future is dependent upon it - unless it increases by ~7% a year you're looking at massive job losses and a rapid collapse of the economy, which forces the never-ending invention of new ways of making the numbers look bigger. GDP is a deeply problematic measure in many ways.


What could an alternative be? HDI-adjusted GDP?



Valantar said:


> This is of course all looking past the boom-bust-cycle that's built into our current economic systems.


Or is it really? Couldn't we just have a perfect stagnation?




Valantar said:


> Nope, because talking about those things undermines the ideology they are selling to voters, with ideas like "low taxes are good for you (never mind that the infrastructure you depend upon is falling apart)" and "cutting taxes for the rich creates jobs (despite literally all evidence contradicting this statement)". The wage stagnations we've seen in most of the Global North in the past 4-5 decades, plus the ever-expanding wealth gap, is a product of a conscious and willed process. And sadly people keep voting for these idiots.


It seems you think that socialism or some similar left-wing ideology would be the way forward.



Valantar said:


> Yeah, as I've said, they're actively focusing on "high ASP" (as in: expensive) components during the combined supply crunch and demand spike. Which is exactly why they're making a killing. There's absolutely a truth to them being supply limited, but they are also making conscious choices to only put out the most expensive and profitable products from what supply they have (as are Intel, as are Nvidia, etc., etc.).


I disagree with you about Intel. Intel, unlike AMD, produces far more chips, and they seem to have Pentiums and Celerons available; if you look for those, you can find them at reasonable prices. And Nvidia is selling GT 710s and GT 730s new. Not great, but at least better than literally nothing from AMD.




Valantar said:


> RTG is shut down though, they were integrated after Raja left AFAIK. And they do seem to have turned a corner since RDNA - yes, the 5700 XT was buggy and problematic, but they're making great architectures with a ton of potential, and their marketing has gone from "overpromise and underdeliver" to "pretty accurate for PR". I just hope they keep developing in this direction, but also that they don't suddenly start expecting these price levels to stay normal going forward. If that's the case, I'm just going full console gamer after this PC gets too slow.


The 5700 XT had stock voltage issues, and many cards were affected, maybe as much as 10% of them, if not more. I'm honestly not sure about RTG (I'm gonna call them that; it beats calling them AMD's graphics card division). Beyond the 5000 series, I haven't seen anything with even a tiny bit of appeal. I'm completely priced out of the market currently (it didn't take much: once prices climbed past 300 EUR, I was done, 200 EUR is my ideal graphics card budget) and I'm getting more and more cynical. The 5000 series was already weak on budget cards; they only made the 5500 XT, which was underwhelming and never gained any traction. I have to admit that I quite liked the 5500 XT for some reason, but my RX 580 serves me well.


----------



## AusWolf (Nov 14, 2021)

The red spirit said:


> Then why not all of them have just Vega 3 instead of 7 and 11?


As long as the price difference compared to regular CPUs is negligible, I'm okay with having a bigger iGPU in them. 



The red spirit said:


> Or are they actually? By great, I guess you mean solid 60 fps, 1080p at least and high settings at least. I can say that GTX 650 Ti could actually run most of old games well, but some it couldn't. And well there is Crysis, which GTX 650 Ti only properly ran at 1024x768 medium. By solid I mean not dipping bellow 40 fps. Cards like GT 730 GDDR5 can't do that. GT 730 struggles in Far Cry. GT 1030 might be the lowest end card that is actually great at old school games. And that's Vega 11 territory. GTX 650 Ti also managed to play old school games with nVidia control panel enhancements, but only in non demanding games it managed to achieve solid 60 fps with some supersampling. RX 580 can do some supersampling (or SSAA) in Colin McRae 2005 (up to 4x at 1280x1024), but it can't provide perfect performance in Unreal Tournament 2004), but it could run it well with adaptive 8x MSAA. I will lower bar somewhat and say that old games running at 1080p, solid 60 fps and at least high settings (minus odd stuff like PhysX, which still destroys performance) is a good experience and that card or iGPU that can do it is great. But I really doubt if Vega 7 or 11 can pull that off. Solid 60 fps to me is 0.1% lows of no less than 45 fps and 1% lows of no less than 55 fps. Games like F.E.A.R. are old, but still demanding on lower end hardware. I can say that Vega 7 most likely can't run it well with high settings at 1080p. Far Cry 2 most likely wouldn't run well on Vega 11. UT3, may not even run well at medium settings. And Crysis wouldn't be playable at anything above low and 1024x768. Mafia II also likely wouldn't run well at anything above 1080p low.
> 
> All in all, iGPUs most likely aren't great for older games. They are not great and their potential is limited. Instead of saying that they are great, old or not demanding games are just all that they can do without lagging badly. Regarding old games, there might be some odd bugs and glitches on modern hardware, that also contributes to them not being great. Another thing is that something like Vega 7, might actually be beaten by GT 1010. You might say that it's pointless to compare free iGPU with dedicated card, but back in socket FM2 days, APUs had no problems wiping floor with GT x10 GDDR5 cards. Cheap A8 APU had performance times better than GT 710 GDDR5. Even A6 or A4 APU may have been faster than GT 710 GDDR5. But even then they weren't really worth it. The budget banger was cheapest Athlon X4 and Radeon HD 7750. Which meant Athlon X4 740 (MSRP of 71 USD) and then 109 USD for card. That's 180 USD for both and for a bit less, you could get 6800K, which was weaker. If you add 20-30 USD to budget, you get 7770, which had like 100 stream cores more and that's an easy 20% performance improvement. Or 10 USD more, you could get Athlon A4 750K and overclock it by 600Mhz with stock cooler, which beats 760K. Unlike APU, Radeon 7750 can still run some games at 1080p low-medium and 60 fps. APU was already struggling with that in 2014. And that's a good scenario. All dual core APUs were e-waste since the day they were made as their CPU portion was just simply too slow to play games well and their GPU portion was usually half of A10 or A8. I personally own A4 6300 and A6 7400K and have benched them extensively. Neither is great or really acceptable in their era games and they are really slow in day to day stuff, they aged like milk. And there's no argument to make about adding dedicated card later either. 
There was some value in cheapest quad core APUs as they made cheap daily drivers for work and could run old games, but if you wanted to play games in any capacity, Radeon HD 7750 was a complete no brainer. Along with 7750, there was also GTX 650.


Let the TPU review of the 5600G speak for itself. Its iGPU matches or even beats the GT 1030 in almost every game, while it currently costs exactly the same as the 5600X in the UK. You might argue that the 5600X has a bigger L3 cache, but considering that a GT 1030 costs between £80-100, you can't argue against the 5600G offering great value. You just simply can't.

Let's not bring the GT 710 and 730 into the picture, either. The GT 1030 is an entirely different class of GPU. I know, I've owned all of these.

Same with the Radeon HD 7000 series, which doesn't even have driver support anymore.



The red spirit said:


> Good for you, chips with integrated graphics in Lithuania are scalped too. They can easily cost 40% more than F variants and therefore barely make sense, but during these times, it seems that having a display adapter is getting expensive. *I guess that beats buying brand new GT 710.*


Exactly.



The red spirit said:


> All in all, I stay with my argument that APU value was always questionable for gamers, even cheap gamers. And there wasn't much point in pinching pennies that badly, since even a modest library of games would have made those savings on HW make look silly and insignificant and yet they would have made your gaming experience noticeably worse if not outright garbage. That is, unless you pirate all your games, then maybe penny pinching makes sense then. But then again, 7750 or 7770 were selling with codes for several legit AAA games. So despite saving few dollars on hw at cost of gaming experience, APUs still made little sense if you wanted to stay legal.


Just read what I said please:


AusWolf said:


> APUs aren't primarily meant for AAA gaming. With a modern APU, you essentially get a CPU and a GPU that has all the video decoding capabilities you need in a HTPC, usually for the price of a CPU alone. How is that not awesome value?


The ability to play games at modest settings on an AMD APU is an extra feature.


----------



## The red spirit (Nov 14, 2021)

AusWolf said:


> As long as the price difference compared to regular CPUs is negligible, I'm okay with having a bigger iGPU in them.


But that is clearly not the case past Vega 7/8. The Ryzen 5 2400G was overpriced.



AusWolf said:


> Let the TPU review of the 5600G speak for itself. Its iGPU matches, or even beats the GT 1030 in almost every game while it currently cost exactly the same as the 5600X in the UK.


Well, that's actually cool and surprising. I don't wanna be a dick again, but that's another review where the GT 1030 model is not specified. I've looked on YouTube and it seems the GT 1030 is matched by the new APUs (Vega 7/8), but ffs, why don't reviewers state which model they have? The DDR4 and GDDR5 versions are not even close. Here's a video of both:









To be honest, they are practically identical: the GT 1030 has an edge in GTA, while the 5600G has an edge in AC and Hitman. It seems that TPU tested the GDDR5 version.
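The gap between the two GT 1030 variants follows directly from peak memory bandwidth (per-pin data rate × bus width). Using the commonly cited specs, 6 Gbps GDDR5 versus roughly 2.1 Gbps DDR4, both on a 64-bit bus, a back-of-the-envelope check (the helper function is mine):

```python
def mem_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate (Gbit/s) x bus width / 8."""
    return data_rate_gbps * bus_width_bits / 8

# Commonly cited GT 1030 specs: GDDR5 at 6 Gbps effective, the DDR4
# variant at ~2.1 Gbps effective, both on a 64-bit bus.
print(mem_bandwidth_gbs(6.0, 64))  # GDDR5 model -> 48.0 GB/s
print(mem_bandwidth_gbs(2.1, 64))  # DDR4 model  -> ~16.8 GB/s
```

Roughly a third of the bandwidth for the DDR4 card, which is why the two "GT 1030s" are nowhere near the same product and why reviewers really should say which one they tested.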




AusWolf said:


> You might argue that the 5600X has a bigger L3 cache, but considering that a GT 1030 costs between £80-100, you can't argue against the 5600G offering great value. You just simply can't.


I guess I can't. But if you say a display adapter attached to a good chip is all you need, then the 11400 exists. If you are on an extreme budget and want to play games, then an Ali Xeon plus whatever you can find on eBay for 150 EUR, maybe a GTX 960. Caseking had the RX 550 2GB for 110-130 EUR, and I saw some RX 550s for 86 EUR new locally. And if you only need a display adapter with a merely reasonable chip, there's the i3-10100. So I'm not so sure the 5600G is such a great deal. It may be for some people, but there are other options at a similar or even lower budget. Let's not forget that an iGPU also steals some RAM to use as VRAM, so 16GB is the bare minimum, and an iGPU prefers fast RAM. That's something to consider if your RAM market is poor. 




AusWolf said:


> Let's not bring the GT 710 and 730 into the picture, either. The GT 1030 is an entirely different class of GPU. I know, I've owned all of these.


As display adapters they all work.



AusWolf said:


> Same with the Radeon HD 7000 series that don't even have driver support anymore.


So what? They have drivers; they're simply not updated anymore. That really isn't a big deal. But performance might be; here is the 7750:









It's certainly not as much of an e-waste card as one would have thought, and they aren't expensive at this point either.




AusWolf said:


> Just read what I said please:
> 
> The ability to play games at modest settings on an AMD APU is an extra feature.


If you need a chip just for videos, then it's an awful deal; 300+ EUR is way too much for that. Just get a Celeron and be done with it. AMD A4 chips used to cost 40 EUR, and they can still run some games: Far Cry's intro ran at 1080p Ultra at 54 fps, Dirt 3 at 900p medium-high at 50 fps, Duke Nukem Forever at 1080p high (no AA or post-processing) at 34 fps, and UT 2004 at Ultra 1080p at 103 fps. The A4 6300 beat the ATi X800 XT Platinum Edition, so if you look at older titles, even a modest APU can most likely run them. The A4 6300 was good up to 2008 games, and it could run WRC 4 at 800x600 at an unstable 40 fps, so it was survivable until 2014. The experience wasn't great, and neither would it be on a Ryzen APU today. That's why bolting on a big iGPU is mostly pointless; if you want something actually respectable, something that can actually run games and keep doing so for a while, then getting a graphics card is a no-brainer. I don't see much point in Vega 11 existing. Maybe it's reasonable for modest heterogeneous computing or video transcoding tasks, but as a display adapter it simply costs way too much. Well, it's better than buying a Ryzen and a GT 710, which is a meme card at this point and to some extent fails even as a proper display adapter, but is that all the 5600G/5700G can beat in value? 

Even committing the sin of buying one part now and another later:









Is pointless with the 5600G/5700G; just buy a 10400/10500/11400/11500 and save some cash for that GPU. I just don't see the point of these two Ryzen APUs; they underdeliver and cost way too much. COVID changes nothing about their horrendous value proposition. At least where I live, the 10500 costs 208 EUR and the 5600G costs 313 EUR, and there's the 10100 for 125 EUR. And that 313 EUR is an exceptional deal for the Ryzen; usually it costs 320 EUR. You can even find the 11500 for 200 EUR. So where's the value in those Ryzens?


----------



## AusWolf (Nov 14, 2021)

The red spirit said:


> I guess, I can't. But if you state, that just getting a display adapter with good chip is all you need, then 11400 exists. If you are at extreme budget and want to paly games, then Ali Xeon + whatever you can find on eBay for 150 EUR. Maybe GTX 960. Caseking had RX 550 2GB for 110-130 EUR. I saw some RX 550s for 86 EUR locally new. And if you only need a display adapter with just reasonable chip, then there's is i3 10100. So, I'm not so sure if 5600G is such a great deal. It may be for some people, but there are some other options with similar budget or even lower budget. Let's not forget that iGPU also steals some RAM to use as VRM and 16GB is bare minimum and iGPU prefers fast RAM. That's something to consider, if your RAM market is poor.


Yes, the 11400 is a great HTPC chip as well. I agree that AMD's offerings are a bit overpriced for such use cases, but for what you're getting for the money, they are great products. 6-core Zen 3 with a GT 1030 level iGPU for less than £300 is good. What you use it for is a different story. The 5300G would be a sweet product - it's a shame it's unavailable through retail.

Ebay/AliExpress are totally not representative of current (new) market conditions. Everyone knows that the value and quality of anything used varies greatly among individual items, so arguments from cherry-picked deals on used stuff are irrelevant and not helpful. In context: I bought my RTX 2070 on eBay, but I wouldn't recommend everyone do the same.



The red spirit said:


> As display adapters they all work.


The 3090 works as a display adapter as well. Why is it so expensive, then?

Seriously, comparing the GT 710 and 730 to the 1030 is pointless. Different generation, different video decoder, different performance class. Anno 2021, I can't recommend a Kepler GPU for home theatre purposes: it lacks a 4K 60 Hz output, and it lacks support for decoding increasingly widespread codecs like H.265. Such videos stutter on a GT 710 even at 1080p.



The red spirit said:


> So what? They have drivers, they are simply not updated anymore. That really isn't a big deal. But performance might be, here is 7750:
> 
> 
> 
> ...


Same again: outdated HDMI/DP standards, lack of support for modern video codecs. Maybe OK for oldschool gaming, but useless as a modern HTPC card.

Also, see my comment on used stuff above.



The red spirit said:


> If you need to have a chip just for videos, then it's an awful deal. 300+ EUR is way too much for that. Just get Celeron and be done with it. AMD A4 chips used to cost 40 EUR in the past. It can still run some games. Far Cry at intro ran at 1080p Ultra and 54 fps, Dirt 3 ran at 900p medium-high 50 fps, Duke Nukem Forever ran at 1080p high (no AA and post processing) at 34 fps, UT 2004 at Ultra and 1080p ran at 103 fps. A4 6300 beat ATi X800 XT Platinum Edition, so if you look at older titles, even a modest APU most likely can run them. A4 6300 was good up to 2008 games, but it could run WRC 4 at 800x600 at unstable 40 fps, so it was survivable with until 2014. The experience isn't great and neither it would be on Ryzen APU today. That's why sticking big iGPU is mostly pointless and if you want something actually respectable, something that can actually run games and do so for a while, then getting a graphics card is a no brainer. I don't see much point in Vega 11 existing. Maybe it's reasonable for modest heterogenous computing or video transcoding tasks, but as display adapter it simply costs way too much. Well, it's better than buying Ryzen and GT 710, which is a meme card at this point and to some extent fails to be a proper display adapter, but is that all that 5600G/5700G can beat in value?
> 
> Even commiting a sin that is buying some part earlier and some later:
> 
> ...


So you're saying that an old A4 6300 is good because it can run UT 2004, but the new Ryzen G-series is rubbish because... why exactly? You're not making much sense. 

I agree that a modern Celeron is perfect for watching movies. I still think AMD's APUs offer good value if you want to include some casual low-spec gaming as well.


----------



## The red spirit (Nov 14, 2021)

AusWolf said:


> Yes, the 11400 is a great HTPC chip as well. I agree that AMD's offerings are a bit overpriced for such use cases, but for what you're getting for the money, they are great products. 6-core Zen 3 with a GT 1030 level iGPU for less than £300 is good. What you use it for is a different story. The 5300G would be a sweet product - it's a shame it's unavailable through retail.


The 5600G would be cool at 200 EUR, not at 300 EUR.




AusWolf said:


> Ebay/aliexpress are totally not descriptive of current (new) market conditions. Everyone knows that the value and quality of used anything varies greatly among individual items, so arguments for cherry-picked deals on used stuff are irrelevant and not helpful. In context: I bought my RTX 2070 on ebay, but I wouldn't recommend everyone to do the same.


But you would totally buy a cheap Xeon. There are 12-core or better Xeons for a bit over 100 EUR; that's a deal that's hard to pass up. They're Haswell too, so not completely ancient. Boards are a problem to get, though, and definitely the sketchy part of building a Xeon rig. 




AusWolf said:


> The 3090 works as a display adapter as well. Why is it so expensive, then?


Maybe it was never meant to be one?




AusWolf said:


> Seriously, comparing the GT 710 and 730 to the 1030 is pointless. Different generation, different video decoder, different performance class. Anno 2021, I can't recommend a Kepler GPU for home theatre purposes. It lacks a 4k 60 Hz output, and it lacks support for decoding increasingly widespread codecs like H.265. Such videos stutter on a GT 710 even at 1080p.


It doesn't matter whether you can recommend them or not; they are still sold new, and as display adapters. They are supposed to do the same job. That they suck at it is a different matter. 



AusWolf said:


> Same again: outdated HDMI/DP standards, lack of support for modern video codecs. Maybe OK for oldschool gaming, but useless as a modern HTPC card.


I found the 7750's port specs: HDMI is 1.4a and supports 1440p, while DisplayPort supports 4K. The Radeon HD 7750 supports UVD 4.0 and VCE 1.0, which lets you decode H.264 without problems, but H.265, VP9, and AV1 are not decodable. Only H.265 is a real problem, though. Even Polaris doesn't decode AV1 or VP9, which are used on YouTube, yet I have been using an RX 580 for a while and everything works quite okay. I can't play 4K60 videos, but 4K videos work fine. So despite that being a very concrete spec, it doesn't seem entirely accurate in practice either. If the worst comes, there's a browser extension called h264ify, which forces YouTube to use the H.264 codec. That means less video quality, the same bandwidth requirements, and severe resolution limitations, topping out at 1080p or 720p. I remember using a GTX 650 Ti with Windows 7 last year; it could do 1440p well, but 4K was sketchy and 4K60 was just a nope. With Linux (Xubuntu, maybe?), the RX 580 could run 8K video, though 8K60 was not playable at all. So you would need to get a 7750 first to see what it can actually do. The GTX 650 Ti doesn't support H.265 or VP9 either, but worked well with YouTube. 




AusWolf said:


> So you're saying that an old A4 6300 is good because it can run UT 2004, but the new Ryzen G-series is rubbish because... why exactly? You're not making much sense.


You said that "running games is a bonus"; I'm just putting into perspective what that means. Anything can run games as long as you are willing to accept sacrifices, limit game recency, and turn down graphics. It's not really a unique selling point that a computer can run games. Ryzen APUs just happen to be faster at it, but the experience with them is often closer to suffering. That's what you get with an A4 6300 too, just cheaper. Sure, the A4 6300 is much worse, but if neither can properly run modern games, does it really matter which of them is faster? 

My bare minimum is 900p low with a 45 fps average and 1% lows not dipping below 20 fps. 720p just looks awful and is certainly not acceptable anymore; I would rather use 1280x1024 than 720p. Anyway, in the most demanding games the 5600G fails to meet my minimum spec. The card that does meet it is the RX 560 4GB:









I own it myself and have upgraded from it, because it no longer ran games at 1440p (and yes, it used to do that just fine up until 2017 or 2018). I could still use it if I needed to, but it didn't feel great. Going to the 5600G would mean some very real sacrifices, going from a so-so experience to an unpleasant one. I would rather get an i3-10100F and a used RX 580 for 200 EUR.  



AusWolf said:


> I agree that a modern Celeron is perfect for watching movies. I still think AMD's APUs offer good value if you want to include some casual low-spec gaming as well.


I'm not sure what casual low-spec gaming means. Even Intel GMA is good for Minesweeper; to me, casual gaming still means something 3D, usually with hardware requirements similar to non-casual games. Firewatch may work on the 5600G, but it's only 2-3 hours long. Plenty of cheap or free indie games barely run, or don't run at all, on a GTX 650 Ti. Something like Genshin Impact might run at 900p medium on the 5600G, but it's hardly a casual game; you need it to run well to properly land attacks, so my minimum performance spec barely suffices for a game like it.


----------



## AusWolf (Nov 14, 2021)

The red spirit said:


> 5600G is 200 EUR cool, not 300 EUR.


Link. If it's 200 EUR in Lithuania, all the better. 



The red spirit said:


> But you would totally buy a cheap Xeon. There are 12 core or better Xeons for a bit over 100 EUR. That's just a deal that's hard to pass up. They are Haswell too, so not completely ancient. Boards are a problem to get and definitely a sketchy part of getting a Xeon rig.


No, I would not buy a cheap Xeon. Server chips are usually clocked significantly lower than consumer ones, and there's no way I could use 12 weak cores.



The red spirit said:


> It doesn't matter that you can or cannot recommend them, they are still sold new and as display adapters. They are supposed to do the same stuff. That they suck is a different matter.


Are you saying that I should recommend them _for HTPC purposes_ because they are still sold new _as display adapters_? Are you aware that we're talking about two entirely different things?



The red spirit said:


> I found 7750 port specs, they say that HDMI is 1.4a and supports 1440p. Display port supports 4K. Radeon HD 7750 supports UVD 4.0 and VCE 1.0 that let's you decode H.264 without problems, but H.265 or VP9 or AV1 are not decodable. Only H.265 is a problem. Polaris doesn't decode AV-1 or VP-9, which are used on Youtube. I have been using RX 580 for a while and everything works quite okay. I can't play 4K60 videos. 4K videos work fine. So despite that being a very concrete spec, it doesn't seem to be entirely correct either. If the worst comes, then there's a browser extension called h264ify, which forces YT to use h.264 codec. That means less video quality, same bandwidth requirements and severe resolution limitations. Topping out at 1080p or 720p only. I remember using GTX 650 Ti with Windows 7 last year and it could do 1440p well, but 4K is sketchy and 4K60 is just nope. With linux (Xubuntu maybe?), RX 580 could run 8K video. 8K60 was just not runable at all. So, you would need to get 7750 first to see what it can actually do. GTX 650 Ti doesn't support h.265 or VP-9, but worked well with Youtube.


So according to you, buying a used GPU on ebay and being forced to employ tricks for smooth video playback is better than buying an APU with a half-decent iGPU that can play all your videos natively for the price of a CPU alone?



The red spirit said:


> You said that "running game is a bonus", I'm just putting it into perspective what it means. Anything can run games as long as you are willing to accept sacrifices, limit game recency and limit graphics. I's not really an unique selling point that computer can run games. Ryzen APUs just happen to be faster at that, but often experience with them is closer to suffering. That's what you get with A4 6300 too, but cheaper. Sure A4 6300 is much worse, but if they both can't properly run modern games, then does it really matter which of them is faster?


How can you even compare an APU from 2013 to the 5600G? The 6300 has a weak CPU, an even weaker iGPU and again, no support for the latest codecs and display standards. Even if you found one on ebay, buying it for a modern HTPC would be a total waste.

It's better to spend 200-300 EUR on something useful than 100 EUR on rubbish.



The red spirit said:


> My bare minimum settings are 900p low with 45 fps average with 1% lows not being into less than 20 fps. 720p just looks awful and certainly not acceptable anymore. I would rather use 1280x1024 than 720p. Anyway, in most demanding games 5600G fails to achieve my minimum spec. The minimum spec meeting card is RX 560 4GB:
> 
> 
> 
> ...


That is your minimum spec. I used to play The Witcher 3 at 1080p medium-low on a GT 1030 without issues. If the 5600G's iGPU is anywhere near that level (according to reviews it is), it's fine. Not great, but fine. I would certainly not try to do the same on an A4 6300.



The red spirit said:


> Not sure what casual low spec gaming means. Even Intel GMA is good for Minesweeper, for me casual gaming still means something 3D and usually similar HW requirements to non casual games. Firewatch may work on 5600G, but it's only 2-3 hours long. Plenty of cheap or free indie games barely run or don't run on GTX 650 Ti. Something like Genshin Impact might run at 900p medium on 5600G, but it's hardly casual game. You need it to run well to properly land attacks. Therefore my minimum performance spec barely works for game like this.


Are you under the assumption that the 5600G can't run 3D games and applications? Maybe you should read some reviews and watch some youtube videos on the topic before we continue this conversation.


----------



## The red spirit (Nov 14, 2021)

AusWolf said:


> Link. If it's 200 EUR in Lithuania, all the better.


I was saying that it would be good as a 200 EUR chip, but not as a 300 EUR one. 



AusWolf said:


> No, I would not buy a cheap Xeon. Server chips are usually clocked significantly lower than consumer ones, and there's no way I could use 12 weak cores.


Hmmmm....









Either Ali Xeon or i3 10100F.



AusWolf said:


> Are you saying that I should recommend them _for HTPC purposes_ because they are still sold new _as display adapters_? Are you aware that we're talking about two entirely different things?


Yes I am. The GT 710 still works as a 1080p display adapter.




AusWolf said:


> So according to you, buying a used GPU on ebay and being forced to employ tricks for smooth video playback is better than buying an APU with a half-decent iGPU that can play all your videos natively for the price of a CPU alone?


Yes, but you miss the point. I managed to find a CPU and a GPU for the price of the APU alone. You can buy an i5 10400F and have 140 EUR left over; that budget is enough for an R9 370 or a GTX 960 from eBay. In that quote I was talking about dirt-cheap cards; you are asking about a different thing here. Anyway, I think it's worth getting the i5 and either of those eBay cards. Better yet, you can look for regional deals: I managed to find a GTX 1050 Ti for the same 140 EUR budget. You employ tricks (well, no trick if you reread that previous post) if you buy a used card for next to nothing, but if you spend a reasonable budget, you don't have to. And you still miss a point: the 7750 is only about as good as the iGPU, while the 1050 Ti can actually play games at acceptable settings and framerates, unlike the iGPU.




AusWolf said:


> How can you even compare an APU from 2013 to the 5600G? The 6300 has a weak CPU, an even weaker iGPU and again, no support for the latest codecs and display standards. Even if you found one on ebay, buying it for a modern HTPC would be a total waste.
> 
> It's better to spend 2-300 EUR on something useful than 100 EUR on rubbish.


You miss the point again. This reply was to "being able to run modern games". In that one aspect, the A4 and the 5600G aren't too different.




AusWolf said:


> That is your minimum spec. I used to play The Witcher 3 at 1080p medium-low on a GT 1030 without issues. If the 5600G's iGPU is anywhere near that level (according to reviews it is), it's fine. Not great, but fine. I would certainly not try to do the same on an A4 6300.


My minimum spec is 900p low, which is significantly less demanding than your 1080p GT 1030 experience in The Witcher.




AusWolf said:


> Are you under the assumption that the 5600G can't run 3D games and applications? Maybe you should read some reviews and watch some youtube videos on the topic before we continue this conversation.


All of them at my defined minimum spec? Well, it certainly fails to do that. 1600x900 at low settings with a 45 fps average wasn't much to ask in 2009. It certainly isn't much to ask in 2021.


----------



## AusWolf (Nov 14, 2021)

The red spirit said:


> I was saying that it would be good for a 200 EUR chip, but not for a 300 EUR one.


I only agree with this because Intel's 6-core CPUs are selling at that price point. It's not only APUs that are overpriced on AMD's side at the moment, but everything.



The red spirit said:


> Hmmmm....
> 
> 
> 
> ...


My money is on the i3 in this matter. I was talking about video playback, you were talking about games. Both of these tasks make better use of fewer but faster cores.



The red spirit said:


> Yes I am. The GT 710 still works as a 1080p display adapter.


You're talking about a _display adapter_. I'm talking about a _graphics card for a HTPC_. I believe I've already explained how these are two totally different things:


AusWolf said:


> Anno 2021, I can't recommend a *Kepler* GPU for home theatre purposes. It *lacks a 4k 60 Hz output, and it lacks support for decoding increasingly widespread codecs like H.265*. Such videos stutter on a GT 710 even at 1080p.


Being able to send out a 1080p signal on Windows desktop is not enough. Let's leave it at that.



The red spirit said:


> Yes, but you miss the point. I managed to find a CPU and a GPU for the price of the APU alone. You can buy an i5 10400F and have 140 EUR left over; that budget is enough for an R9 370 or a GTX 960 from eBay. In that quote I was talking about dirt-cheap cards; you are asking about a different thing here. Anyway, I think it's worth getting the i5 and either of those eBay cards. Better yet, you can look for regional deals: I managed to find a GTX 1050 Ti for the same 140 EUR budget. You employ tricks (well, no trick if you reread that previous post) if you buy a used card for next to nothing, but if you spend a reasonable budget, you don't have to. And you still miss a point: the 7750 is only about as good as the iGPU, while the 1050 Ti can actually play games at acceptable settings and framerates, unlike the iGPU.


Again, you're talking about ebay. I'm talking about new. You can't say that AMD/Intel/nvidia's pricing of _current gen_ products is bad just because you found something old that you like on ebay.

With this, I will stop responding to comments about ebay, as they are totally irrelevant.



The red spirit said:


> This reply was to "being able to run modern games". In that one aspect, the A4 and the 5600G aren't too different.


Yes they are.



The red spirit said:


> My minimum spec is 900p low, which is significantly less demanding than your 1080p GT 1030 experience in The Witcher.


Then you should have no problem playing it with a GT 1030 or a 5600G.



The red spirit said:


> All of them at my defined minimum spec? Well, it certainly fails to do that. 1600x900 at low settings with a 45 fps average wasn't much to ask in 2009. It certainly isn't much to ask in 2021.


That depends on the game. Also, just because a chip doesn't meet _your_ desired specs in X game, it doesn't mean that it's a failed product altogether.


----------



## bobbybluz (Nov 14, 2021)

A close friend lives near a Micro Center and wants me to fix a Toyota Prius whose electrical system he fux0red next week. I'm having him pick me up a 12700K as partial payment for the labor. As soon as Newegg gets the ASRock Z690 Steel Legend WiFi 6E ($269) back in stock, I'm getting one of those to put the 12700K in. Combined with the 64GB of G.Skill TridentZ 3600 CL16 B-die I already have, along with the rest of the parts needed to build another new rig, I'll be set for a while at a very low cost, since all I have to buy is the mobo, and it uses DDR4. The 12700K has the same number of P-cores as the 12900K but 4 fewer E-cores, meaning it draws less power with close to the same performance. I'll be using an Arctic Liquid Freezer II 360 on it, so cooling won't be an issue either.


----------



## The red spirit (Nov 14, 2021)

AusWolf said:


> Again, you're talking about ebay. I'm talking about new. You can't say that AMD/Intel/nvidia's pricing of _current gen_ products is bad just because you found something old that you like on ebay.
> 
> With this, I will stop responding to comments about ebay, as they are totally irrelevant.


Don't like eBay and get polio from used deals? Well, there is one last resort deal where I live:


			https://www.varle.lt/procesoriai/procesorius-intel-core-i3-10100f-10th-gen-intel--15628436.html
		



			https://www.varle.lt/vaizdo-plokstes/lenovo-thinkstation-nvidia-t600-4gb-mini-dp4-graphics--16631984.html
		


The Quadro T600 and i3 10100F: the last resort for modest 1080p gaming. The Quadro T600 is a cut-down GTX 1650, but to compensate, it has faster VRAM. It has 4GB of VRAM, which is pretty much the minimum for gaming today, and it can play any video. So it's similar to an APU in that this setup is very power efficient. You pay with CPU power to fit into the budget, but hey, that's nearly a 1650. It's actually a really cool card during these times, and it is selling at pretty much its MSRP.

And very importantly, that combo is still 3 EUR cheaper than Ryzen APU:


			https://www.varle.lt/procesoriai/procesorius-amd-am4-ryzen-5-5600g-tray-39ghz-max--16955958.html
		


And if you complete a whole build, it's actually very cheap overall, and unlike with the APU, your RAM isn't hijacked by the iGPU, so the same 16GB is left to the CPU.




AusWolf said:


> Then you should have no problem playing it with a GT 1030 or a 5600G.


No, not really. Any card or APU should be able to do that in modern AAA games. The 5600G falls apart in Cyberpunk, for example, and some games aren't playable at those settings. The T600 has a much better chance.




AusWolf said:


> That depends on the game. Also, just because a chip doesn't meet _your_ desired specs in X game, it doesn't mean that it's a failed product altogether.


My initial point was that APUs never really made much sense for gaming. You are always better off with something dedicated, even if lower end.


----------



## AusWolf (Nov 15, 2021)

bobbybluz said:


> A close friend lives near a Micro Center and wants me to fix a Toyota Prius whose electrical system he fux0red next week. I'm having him pick me up a 12700K as partial payment for the labor. As soon as Newegg gets the ASRock Z690 Steel Legend WiFi 6E ($269) back in stock, I'm getting one of those to put the 12700K in. Combined with the 64GB of G.Skill TridentZ 3600 CL16 B-die I already have, along with the rest of the parts needed to build another new rig, I'll be set for a while at a very low cost, since all I have to buy is the mobo, and it uses DDR4. The 12700K has the same number of P-cores as the 12900K but 4 fewer E-cores, meaning it draws less power with close to the same performance. I'll be using an Arctic Liquid Freezer II 360 on it, so cooling won't be an issue either.


Congrats!  Post your experiences, please! 



The red spirit said:


> Don't like eBay and get polio from used deals? Well, there is one last resort deal where I live:
> 
> 
> https://www.varle.lt/procesoriai/procesorius-intel-core-i3-10100f-10th-gen-intel--15628436.html
> ...


Personally, I have nothing against ebay. The reason I wouldn't recommend anyone (especially new PC builders) to buy their stuff there is this:
1. Deals are temporary. What you posted may be there today, but may disappear by tomorrow. As such, it does not represent the actual value of the product _in general_. It only represents the value of _that particular piece at that particular moment_. You cannot say that the Quadro T600 is a good deal _in general_ just because you found _one_ on a local second-hand selling site.
2. No warranty.
3. You don't know what you get until you get it. Maybe it's good as new. Maybe it's f***ed. Maybe it's dirty as hell and you spend a whole day cleaning it. Maybe the person you're recommending it to doesn't know how to take a graphics card apart for cleaning. Then again, it's not a good deal _for that person_.
4. You're buying from a person instead of a registered company. Sure, you have buyer's protection on ebay, but you never have 100% protection against scammers.

All in all, if you have a friend who knows his/her stuff about computers, sure, recommend him/her a good deal on ebay. But even then, that's _one_ good deal, and not a picture of the current PC market. Here on TPU, we all have varying levels of understanding about PCs. Some of us have been building them for decades. Some of us are just getting into it now. Recommending ebay deals _to people in general_ is a bad idea.

Like I said, I bought my RTX 2070 on ebay. Would I recommend _you_ to do the same? Maybe, as you know a thing or two about PCs. Would I recommend it _in general_? Hell no.

Prices of new hardware and the second-hand market are two entirely different things.



The red spirit said:


> No, not really. Any card or APU should be able to do that in modern AAA games. The 5600G falls apart in Cyberpunk, for example, and some games aren't playable at those settings. The T600 has a much better chance.


Why should a £280 APU play any modern game?



The red spirit said:


> My initial point was that APUs never really made much sense for gaming. You are always better off with something dedicated, even if lower end.


You really like it when I repeat myself, don't you? 


AusWolf said:


> APUs aren't primarily meant for AAA gaming. With a modern APU, you essentially get a CPU and a GPU that has all the video decoding capabilities you need in a HTPC, usually for the price of a CPU alone. How is that not awesome value?





AusWolf said:


> The ability to play games at modest settings on an AMD APU is an extra feature.


----------



## kDude (Nov 15, 2021)

Do the E-cores take care of Steam/Discord and Chrome in the background if you're playing a game? Getting that stress off the main CPU cores would be kinda nice.

For example, could having the E-cores specifically handle background stuff like Chrome/Discord/Steam prevent minor fps inconsistencies like frametime spikes and microstutters while you're playing games at the same time?


----------



## Valantar (Nov 15, 2021)

kDude said:


> Do the E-cores take care of Steam/Discord and Chrome in the background if you're playing a game? Getting that stress off the main CPU cores would be kinda nice.
> 
> For example, could having the E-cores specifically handle background stuff like Chrome/Discord/Steam prevent minor fps inconsistencies like frametime spikes and microstutters while you're playing games at the same time?


That's essentially what they're for, yes. They also help quite a bit in heavily multi-threaded tasks like rendering and transcoding, but for gaming, they seem well suited to leaving the performance cores free for high-performance work only.
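On Windows 11 this steering happens automatically via Intel's Thread Director and the OS scheduler, but you can also pin a background process to specific cores by hand. Here is a minimal Linux sketch using Python's `os.sched_setaffinity`; the E-core numbering in the comment is an assumption (on a 12900K the eight E-cores typically enumerate after the sixteen P-core threads), so check `lscpu` on your own machine:

```python
import os

def pin_to_cpus(pid: int, cpus: set) -> set:
    """Restrict a process to the given logical CPUs and return its new affinity.

    Hypothetical E-core mapping for an i9-12900K under Linux: logical CPUs
    16-23 (after the 16 P-core hyperthreads). Verify with lscpu first --
    the enumeration order is not guaranteed.
    """
    os.sched_setaffinity(pid, cpus)   # Linux-only API; pid 0 = current process
    return os.sched_getaffinity(pid)

# Demo on the current process: pin to logical CPU 0, then restore.
original = os.sched_getaffinity(0)
after = pin_to_cpus(0, {0})
os.sched_setaffinity(0, original)     # undo so the rest of the process is unaffected
```

In practice you would pass the PID of Steam/Discord/Chrome and a set like `{16, ..., 23}`; on Windows the equivalent is `SetProcessAffinityMask` or Task Manager's affinity dialog.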


----------



## The red spirit (Nov 15, 2021)

AusWolf said:


> Personally, I have nothing against ebay. The reason I wouldn't recommend anyone (especially new PC builders) to buy their stuff there is this:
> 1. Deals are temporary. What you posted may be there today, but may disappear by tomorrow. As such, it does not represent the actual value of the product _in general_. It only represents the value of _that particular piece at that particular moment_. You cannot say that the Quadro T600 is a good deal _in general_ just because you found _one_ on a local second-hand selling site.
> 2. No warranty.
> 3. You don't know what you get until you get it. Maybe it's good as new. Maybe it's f***ed. Maybe it's dirty as hell and you spend a whole day cleaning it. Maybe the person you're recommending it to doesn't know how to take a graphics card apart for cleaning. Then again, it's not a good deal _for that person_.
> 4. You're buying from a person instead of a registered company. Sure, you have buyer's protection on ebay, but you never have 100% protection against scammers.


Bruh, the website I linked is one of the biggest Lithuanian retailers of new hardware. They sell new stuff with warranties, and Quadros are certainly not in short supply; their availability in this particular store may be limited, as Lithuania has low demand for professional cards like that. Adequate criticism of eBay, but not of Varlė.lt.



AusWolf said:


> All in all, if you have a friend who knows his/her stuff about computers, sure, recommend him/her a good deal on ebay. But even then, that's _one_ good deal, and not a picture of the current PC market. Here on TPU, we all have varying levels of understanding about PCs. Some of us have been building them for decades. Some of us are just getting into it now. Recommending ebay deals _to people in general_ is a bad idea.
> 
> Like I said, I bought my RTX 2070 on ebay. Would I recommend _you_ to do the same? Maybe, as you know a thing or two about PCs. Would I recommend it _in general_? Hell no.
> 
> Prices of new hardware and the second-hand market are two entirely different things.


That's why you look for local deals: way cheaper than eBay, and you can likely test the item in person. Sure, you may get hardware with a shorter lifespan, but it's better to enjoy gaming properly for a while than to suffer with a GT 1030 or Vega 7.



AusWolf said:


> Why should a £280 APU play any modern game?


If it costs a whole 100 more than market alternatives, then I would expect it to do something worthy of such a high price. They probably don't waste die space for nothing. And as I said, if that were the true intention of APUs, they would all have Vega 3. 

And this is what AMD says on their website (https://www.amd.com/en/processors/ryzen-with-graphics) about 5000 series APUs:
"Get PCs powered by the world’s most advanced processors for high frame rates per second and an immersive experience. Good game, indeed."
"For gamers, creators, and all-around PC users who want enthusiast-class performance; without the need for a discrete graphics card" (This is way worse than I expected, they are selling it like they just slapped together 6900XT with 5950X. What a fail.)
"The World’s Fastest Graphics in a Desktop Processor"
"AMD Ryzen™ 5000 G-Series Desktop Processors deliver the fastest graphics performance available in a desktop processor with AMD Radeon™ Graphics built right in. Enjoy smooth, 1080P gaming right out of the box, no additional graphics card required" (lmao "smooth" 1080p gaming on APU,  that's some nice Koolaid they got there)

So AMD's website and the media clearly say that these are for gaming (modest, but still gaming). I don't know, man, but it seems like they are meant for gaming, not for movies and other work. 




AusWolf said:


> You really like it when I repeat myself, don't you?


Don't you too?

"My initial point was that APUs never really made much sense for gaming." + media hypes them way too much.

You have your own opinion about what an APU is to you, but that's not the narrative that AMD or pretty much anyone in the media pushes. Your point that "APUs aren't primarily meant for AAA gaming" is simply moot, as nobody apart from reviewers has actually said that. And you try to make a point about the faster graphics portion as if it actually matters. For media you don't need all that extra; Vega 3 is all you need. For media there's no point in anything more than an Athlon 3000G. AMD claims that it can play back 4K HDR content, which should be plenty for an HTPC. If you really argue that the 5600G is a media chip, then I'm sorry you paid too much for one.

AMD is actually underselling those Athlons: the VCN spec says that Picasso can decode even 8K H.264 or H.265 video, which is really impressive for a 60 EUR MSRP product. It also decodes VP9 up to 8K. Sadly there's no AV1 decoding, but it's really not that popular a codec, and YouTube mostly uses H.264 (for videos up to 1080p30, or 480p if it's 50-60 fps) or VP9 (for videos above 1080p30 or 720p60). For AV1 decoding you would need a Navi 21 or Navi 22 GPU, both of which cost way too much just for media playback, so the Athlon 3000G is a reasonable media powerhouse on a budget.

Something like the 5600G makes sense if you encode videos too, but most people don't do that, and it's still going to be a lot slower than a dedicated card. Maybe it was useful in the DVR era, but who does that anymore, and even worse, who sells TV tuners for computers anymore, or IR remotes? So the 5600G just doesn't have a compelling reason to be picked over the 3000G if it is used for media. 
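That YouTube codec split can be sketched as a toy selector. The thresholds below are taken straight from the description above; real YouTube behavior varies by video, device, and era, so `pick_codec` is an illustration, not a spec:

```python
def pick_codec(height: int, fps: int) -> str:
    """Toy model of the codec split described above:
    H.264 for videos up to 1080p30 (or up to 480p at 50-60 fps),
    VP9 for anything above 1080p30 or 720p60."""
    if height <= 480:
        return "h264"                 # low-res stays H.264 even at 60 fps
    if height <= 1080 and fps <= 30:
        return "h264"
    return "vp9"                      # above 1080p30 / 720p60

print(pick_codec(1080, 30))  # h264
print(pick_codec(720, 60))   # vp9
print(pick_codec(2160, 24))  # vp9
```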

And on that note, if you just like chips for their value (as you have said about the 3100 or 3300X), well, the Athlon may still surprise you with all that value for a tiny cost. The 5600G just doesn't have this unique selling point.


----------



## AusWolf (Nov 16, 2021)

The red spirit said:


> Bruh, that website that I linked is from one of the biggest Lithuanian new hardware retailers.


That's cool, but how was I supposed to know that?



The red spirit said:


> That's why you look for local deals: way cheaper than eBay, and you can likely test the item in person. Sure, you may get hardware with a shorter lifespan, but it's better to enjoy gaming properly for a while than to suffer with a GT 1030 or Vega 7.


Like I said: IF you know your way around computers. Not everybody (on this forum) does.



The red spirit said:


> If it costs a whole 100 more than market alternatives, then I would expect it to do something worthy of such a high price. They probably don't waste die space for nothing. And as I said, if that were the true intention of APUs, they would all have Vega 3.


It costs 100 more because it has a way faster iGPU.

Cheap ebay graphics cards are a better alternative _for your specific use case_. I get it. It doesn't mean that everybody is able to, or should learn to navigate ebay for old stuff that doesn't necessarily have the display standards and/or media decoding capabilities needed. For people who want a warranty and 100% certainty that they get what they pay for, ebay will never be a viable option.

Also, Quadros aren't meant for gaming, but you probably know that.



The red spirit said:


> And this is what AMD says on their website (https://www.amd.com/en/processors/ryzen-with-graphics) about 5000 series APUs:
> "Get PCs powered by the world’s most advanced processors for high frame rates per second and an immersive experience. Good game, indeed."
> "For gamers, creators, and all-around PC users who want enthusiast-class performance; without the need for a discrete graphics card" (This is way worse than I expected, they are selling it like they just slapped together 6900XT with 5950X. What a fail.)
> "The World’s Fastest Graphics in a Desktop Processor"
> ...


Are we bringing marketing BS into the picture? Then why don't you have a 10900K in your system? According to Intel, that's what you need to play games.



The red spirit said:


> You have your own opinion about what an APU is to you, but that's not the narrative that AMD or pretty much anyone in the media pushes. Your point that "APUs aren't primarily meant for AAA gaming" is simply moot, as nobody apart from reviewers has actually said that.


Who do you want to believe? AMD/Intel/nvidia's marketing department, or reviewers?











The red spirit said:


> And you try to make a point about the faster graphics portion as if it actually matters. For media you don't need all that extra; Vega 3 is all you need. For media there's no point in anything more than an Athlon 3000G. AMD claims that it can play back 4K HDR content, which should be plenty for an HTPC. If you really argue that the 5600G is a media chip, then I'm sorry you paid too much for one. AMD is actually underselling those Athlons: the VCN spec says that Picasso can decode even 8K H.264 or H.265 video, which is really impressive for a 60 EUR MSRP product. It also decodes VP9 up to 8K. Sadly there's no AV1 decoding, but it's really not that popular a codec, and YouTube mostly uses H.264 (for videos up to 1080p30, or 480p if it's 50-60 fps) or VP9 (for videos above 1080p30 or 720p60). For AV1 decoding you would need a Navi 21 or Navi 22 GPU, both of which cost way too much just for media playback, so the Athlon 3000G is a reasonable media powerhouse on a budget. Something like the 5600G makes sense if you encode videos too, but most people don't do that, and it's still going to be a lot slower than a dedicated card. Maybe it was useful in the DVR era, but who does that anymore, and even worse, who sells TV tuners for computers anymore, or IR remotes? So the 5600G just doesn't have a compelling reason to be picked over the 3000G if it is used for media.
> 
> And on that note, if you just like chips for their value (as you have said about the 3100 or 3300X), well, the Athlon may still surprise you with all that value for a tiny cost. The 5600G just doesn't have this unique selling point.


What would actually surprise me about the 3000G is if it worked in my A520 motherboard. Besides, it's currently on sale on Amazon UK for £90. It is a terrible value for that price.

The 5000G series are great if you need Zen 3's processing power with good multimedia and some light gaming capabilities without the need to use an external GPU. That target group is clearly not you, but that doesn't take away from the product's value. Let's leave it at that.


----------



## Valantar (Nov 16, 2021)

The red spirit said:


> Bruh, that website that I linked is from one of the biggest Lithuanian new hardware retailers. They sell new stuff with warranties and Quadros are certainly not in short supply, just their availability in this particular store may be as Lithuania has low demand for professional cards like that. Adequate criticism for eBay, but not for Varlė.lt
> 
> 
> That's why you look for local deals, way cheaper than eBay and you can likely test it in person. Sure, you may get hardware with lower lifespan, but better enjoy gaming for a while properly, rather than suffer with GT 1030 or Vega 7.
> ...


I think you're doing some rather selective reading here, but I also don't quite agree with @AusWolf's take on this. Current desktop APUs are pretty decent for gaming, and are better suited for it than any previous AMD APU generation was for contemporary games. No, it's not an AAA powerhouse. Nobody is saying that - not even AMD's marketing. They say "smooth 1080p gaming", but crucially not "smooth 1080p AAA gaming". 1080p in esports titles at decent framerates is entirely possible for current APUs. The same is true for any slightly older or less demanding title.

Here's a comparison for you: Anandtech's review of the 2014 7850k vs. Techspot's review of the 2021 5300G and TPU's review of the 5600G. The 7850k ranges from decent to acceptable at 1280x1024, and is clearly unplayable in most games even at 1680x1050. 1080p is out of the question, with most games seeing single-digit framerates. Even for its time, gaming on this chip was _dodgy_. I know - I had an A8-7600 paired with fast DDR3-2133 in my HTPC, and tried to game on it. Even with the iGPU overclocked by a couple hundred MHz, it was barely acceptable in very light fare like platformers. My current 4650G? Rocket League might be from 2015 (though I think it's seen some graphical upgrades since then), and plays beautifully at ~90fps 900p or ~60fps 1080p - and that's not even at minimum settings. I clearly prefer the faster framerate though.

Now, the 5300G and 5600G can't quite manage 30fps at 1080p low in heavy games like AC Valhalla, but they are still _miles_ ahead of older APUs for contemporary AAA titles. And they deliver perfectly playable framerates at lower resolutions. They even mostly match the RX 550, a discrete GPU that you suggested buying. 60fps in DOTA2 at 1080p "Best Looking", >100fps in CS:GO, and a console-like 30fps in a huge selection of AAA titles? That's not bad. You asked about GTA V at some point: >60fps at 900p low. Looks pretty playable to me!

I think we also need to remember that not everyone has the expectations and demands of us PC enthusiasts, and that we to a large degree get conditioned into wanting a bunch of stuff that isn't strictly necessary for a good gaming experience. Is 30fps a worse experience than 60fps? Sure, and in games dependent on reaction times it can be really bad. But it can also be perfectly fine in a lot of games still. Unless you're accustomed to fancy gear and high performance, low performance isn't necessarily something that gets in the way of enjoyment - it depends on the game, the context, your habits, and a lot of other factors.

So, are these APUs gaming powerhouses? Of course not. Are they capable of gaming in general? Unlike previous generations of APUs, yes - though clearly with compromises. They keep up very well with similarly priced dGPUs still, and crucially deliver great CPU performance and 16 PCIe lanes for a future dGPU addition if you're interested in that. And they are _crazy_ efficient on top of that. If this is all you can afford for your main PC, it's a good deal, and it leaves you with great upgrade potential - far better than an i3-10100, for example. The CPU in a 5600G is just a tad slower than a 5600X, which is an excellent all-round CPU, and great for dGPU gaming.

It's still obvious that if you're going for a bargain-basement, gaming-only build, you're better off buying used parts. That is always true - but it also comes with severe caveats, from needing to watch out for scams (fake GPUs are pretty common these days, not to mention far simpler scams than that), to dodgy component quality and longevity, to the lack of warranties, to the instability of used markets, to component availability, and so on. It's still the best way to get a cheap gaming computer, but it's definitely not the easiest, and it _is _the easiest way to screw up.


I have to admit I've forgotten how this discussion got into APUs in the first place though - isn't this about the 12900K? 

Oh, and in a previous post you started getting into comparisons with the Ryzen 5 1600AF. Look at where the 5600G ends up in CPU performance overall, and then compare that with the 1600AF. The 1600AF performs somewhere between a Ryzen 5 2600 and 2600X, so ~80% of a 3600XT (which is ~0.3% faster than a 3600X). In the first link here, you see that the 3600X lands at ~88% of the 5600G. So, for $10-20 less you can get a CPU that's 0.88 × 0.8 ≈ 70.4% of the performance of the 5600G, alongside a dGPU that isn't any faster than its iGPU. And that is somehow a better value proposition for you? The 5600G is even nearly 10% faster with a dGPU at 1080p than the 3600X, which is again nearly 10% faster than the 2600X (= 1600AF). CPU performance matters.
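The chained percentages in that comparison can be sanity-checked in a couple of lines. The 0.88 and 0.80 ratios are the post's own estimates from the linked reviews, not fresh measurements:

```python
# 3600X ≈ 88% of a 5600G; 1600AF ≈ 80% of a 3600XT ≈ 3600X (per the post)
ratio_3600x_vs_5600g = 0.88
ratio_1600af_vs_3600x = 0.80

# Relative ratios chain by multiplication.
ratio_1600af_vs_5600g = ratio_3600x_vs_5600g * ratio_1600af_vs_3600x
print(f"1600AF ≈ {ratio_1600af_vs_5600g:.1%} of a 5600G")  # 1600AF ≈ 70.4% of a 5600G
```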


The red spirit said:


> Let's be honest here: if you bought an Athlon 200GE when it was new and survived with it this long, you most likely have an A320 board, and then you are capped at 3000-series Ryzens (unless your board has extended support). Is it really worth investing in a now-dead-end platform that is artificially capped to older chips? It might be a better idea to just buy a new platform entirely.


Still a pretty hefty upgrade path though. Is the 3950X a weak CPU? And even a 3600X is a hefty upgrade, and can run fine on any A320 board. The upgrade path is absolutely there for these bargain-basement chips.


The red spirit said:


> You can overclock old Vega 11 and for Vega 11, those Zen cores are fast enough.


Sorry, but they aren't. And the OC potential was quite limited. And as you can see in the Techspot review linked before, even the 5300G soundly beats the 3400G, despite a 5CU disadvantage.


The red spirit said:


> Only x8 PCIe, and older PCIe gen 3 if you have a lower-end or older board. Now tell me, how much performance is lost by using a less-than-ideal PCIe gen and then, on top of that, cutting the lanes in half?


You say you think this is how things are, I say "no, they changed that with the 4000-series and above", and you come back with a source talking about the 3000-series? Come on, man! Turn on your brain, please. Current APUs have 16 PCIe lanes for GPUs.


The red spirit said:


> Or did they actually find anything better? Budget Builds Official seems happy to be back. I'm pretty sure they may have had C19-related difficulties, or didn't think of how to make content.


Yes, sure, one person on YouTube "seeming happy to be back" is conclusive proof that nobody else (including them, really) found anything else useful to do in the meantime. I mean ... you understand that these people _perform_, right?


The red spirit said:


> I'm staying with Zhaoxin, they have big chances of success. There are many IPO scams regarding CPUs in China and Zhaoxin is not a scam.


"Not a scam" and "big chance of success" are quite unrelated statements. One does not follow from the other. Zhaoxin might very well be a success - in the Chinese office and government PC markets, for example, as well as servers and other places - but they're never going to become a proper third option for gaming PCs in global markets.


The red spirit said:


> You don't seem to mention the service sector at all, which may not use many, if any, physical resources, and that's a huge sector in advanced countries. As long as we have money and appetite for that, well, they can serve us, and we very obviously have less money than our wants. It may not be infinite exactly, but its potential is currently unknown.


The service sector relies on goods produced somewhere to deliver its services. It is just as imbricated in the flows of global capital as everything else. If anything, the move to a larger proportion of service sector jobs is precisely an indication of the export of difficult, unpleasant and harmful industrial production jobs to poorer areas, as people start refusing to take on shit jobs for shit pay, and companies neither want to make the jobs better/safer nor pay more. While there is indeed something to be said for the benefits of moving from "buying stuff" to "paying people to do things" on many levels (from environmental to economic), that doesn't solve anything by itself.


The red spirit said:


> What an alternative could be? HDI adjusted GDP?


I doubt it. I'm not saying I have a viable alternative, I'm just saying GDP is a deeply flawed measure.


The red spirit said:


> It seems that you think that socialism or other similar right ideology would be a way forward.


"Right ideology"? What does that mean? Either you've fundamentally misunderstood the classic political spectrum, you subscribe to that edgelord idea of "Stalinism was totalitarian, so all socialism is right-wing" (which ignores the fact that Stalinism was socialist in name only, and that the Soviet Union rapidly devolved into an oligarchic, extremely predatory capitalism), or you're just using words in some weirdly convoluted way. Either way, I'm not saying I have any conclusive answers, I'm just saying that it's quite widely documented that capitalism, especially in its current guise, is deeply harmful and fundamentally unsustainable. And not "unsustainable" as in "not environmentally friendly" (though that is also very true), but "unsustainable" as in "it cannot be upheld over time, it is bound to crash". It's literally built into the workings of the system. Economic crashes are an intrinsic part of our current global economic systems, and avoiding them is literally impossible, as that would require literally infinite resources.


The red spirit said:


> I disagree with you about Intel. Intel unlike AMD produces far more chips and they seem to have Pentiums and Celerons available. If you look for those, you can find them at reasonable prices. And nVidia is selling GT 710 and GT 730s new, not great, but at least better than literally nothing from AMD.


Pentium and Celeron availability has been _very_ spotty for the past three or so years. Just do a quick search, you'll find tons and tons of forum posts decrying the persistent lack of stock for the best value chips in those ranges. It has gotten better as Intel has gotten out of their 2018-to-2020-ish supply crunch and have gotten 10nm/7 working at scale, but it's still not solved.


The red spirit said:


> 5700 XT had stock voltage issues and many cards were affected by that. It may be as much as 10% if not more of those cards. I'm honestly not sure about RTG (I'm gonna call them that, beats calling them AMD's graphics card division). Beyond the 5000 series, I haven't seen anything with a tiny bit of appeal. I'm completely priced out of the market currently (it didn't take much; once prices climbed past 300 EUR, I was done. 200 EUR is my ideal graphics card budget) and getting more and more cynical. The 5000 series was already weak with budget cards; they only made the 5500 XT, which was an underwhelming card and didn't gain any traction. I have to admit that I quite liked the 5500 XT for some reason, but my RX 580 serves me well.


Again you're mixing tons of factors from all over, from personal tastes to pricing to market conditions to product specs. Current GPU prices are crazy everywhere, so AMD (or Nvidia) can't really be blamed for that, even if their current MSRPs are also stupid - the 6600 XT should have been <$300 MSRP, for example. But that is what it is - nothing anyone can do but wait. You're right that the 5000-series wasn't great for budget cards, but even then AMD was supply constrained from TSMC, and it made sense for them to prioritize higher profile products like the 5700/XT (which were good, but had severe bugs that undermined their quality), and the 5600 XT (which was IMO the best GPU of that generation from either company). I'm definitely hoping we'll see a return to normalcy in the coming years with good value, good performance  $200-ish GPUs again, but for now, there's nothing to do and nobody to blame for any of this - the problems are far too large in scope for that, and the only possible fixes are in international politics.


----------



## The red spirit (Nov 16, 2021)

Valantar said:


> I think you're doing some rather selective reading here, but I also don't quite agree with @AusWolf's take on this. Current desktop APUs are pretty decent for gaming, and are better suited for it than any previous AMD APU generation was for contemporary games. No, it's not an AAA powerhouse. Nobody is saying that - not even AMD's marketing. They say "smooth 1080p gaming", but crucially not "smooth 1080p AAA gaming". 1080p in esports titles at decent framerates is entirely possible for current APUs. The same is true for any slightly older or less demanding title.


Well, AMD states on the Athlon 300G page that it runs esports games at 720p well. Ryzen 5000Gs are expected to run anything, and that includes AAA games too. I guess what they call "enthusiast level performance" includes AAA games.




Valantar said:


> Here's a comparison for you: Anandtech's review of the 2014 7850k vs. Techspot's review of the 2021 5300G and TPU's review of the 5600G. The 7850k ranges from decent to acceptable at 1280x1024, and is clearly unplayable in most games even at 1680x1050. 1080p is out of the question, with most games seeing single-digit framerates. Even for its time, gaming on this chip was _dodgy_.


I remember it ran Battlefield at 1080p, so I thought they were capable, but still below an HD 7750 obviously.




Valantar said:


> I know - I had an A8-7600 paired with fast DDR3-2133 in my HTPC, and tried to game on it. Even with the iGPU overclocked by a couple hundred MHz, it was barely acceptable in very light fare like platformers. My current 4650G? Rocket League might be from 2015 (though I think it's seen some graphical upgrades since then), and plays beautifully at ~90fps 900p or ~60fps 1080p - and that's not even at minimum settings. I clearly prefer the faster framerate though.


Your A8 is not an A10; A8s had lower CPU clock speeds, maybe less cache too, and certainly far fewer GPU cores.




Valantar said:


> Now, the 5300G and 5600G can't quite manage 30fps at 1080p low in heavy games like AC Valhalla, but they are still _miles_ ahead of older APUs for contemporary AAA titles. And they deliver perfectly playable framerates at lower resolutions. They even mostly match the RX 550, a discrete GPU that you suggested buying. 60fps in DOTA2 at 1080p "Best Looking", >100fps in CS:GO, and a console-like 30fps in a  huge selection of AAA titles? That's not bad. You asked about GTA V at some point: >60fps at 900p low. Looks pretty playable to me!


Still, they are marketed as smooth 1080p gaming solutions on their own, and they have yet to match the RX 550 properly.



Valantar said:


> I think we also need to remember that not everyone has the expectations and demands of us PC enthusiasts, and that we to a large degree get conditioned into wanting a bunch of stuff that isn't strictly necessary for a good gaming experience. Is 30fps a worse experience than 60fps? Sure, and in games dependent on reaction times it can be really bad. But it can also be perfectly fine in a lot of games still. Unless you're accustomed to fancy gear and high performance, low performance isn't necessarily something that gets in the way of enjoyment - it depends on the game, the context, your habits, and a lot of other factors.


I strongly disagree. Even before I was an enthusiast (whatever that means), I tried to make games run smoothly, and by that I mean targeting 45-60 fps. 30 fps is piss-poor and clearly laggy or unresponsive to me. Not just in fast paced games, but in anything at all. It's not acceptable even for Age of Mythology. My standards were like that back on an Athlon 64 3200+/FX 5200 machine, even if meeting them meant much worse visual quality. I only make an exception for Far Cry, as it had a surprisingly consistent framerate and didn't feel unresponsive when aiming, but that's just one exception. And to be honest, I never had "enthusiast level" gear. A downclocked RX 580 (which is slower than an RX 480) is the best I ever had; you can take my word that I wouldn't want to go back to a 650 Ti 1GB for daily gaming. I remember it being somewhat of a potato in Battlefield and almost insufferable in GTA 5.

I'm certainly not getting too conditioned to fps; it was just plainly obvious before that fps matters. I never bought into the nonsense that 30 fps is the minimum playable framerate. For me that would be 40. Perhaps your perspective on that is different, since you own a 6900 XT and perhaps a higher refresh rate monitor, or at least one with FreeSync. I don't, and I've never even seen one in person. You can tell me about "stuff that isn't strictly necessary" when I was getting piss-poor frames in GTA 5 on anything that wasn't 1080p low, or when I had to run Far Cry 5 on an RX 560 at less than 1600x900 to get 40-50 fps, and even then it wasn't too stable. I know full well that an RX 580 is what I need. There's no merit in low-end junk; it just sucks up your money and doesn't deliver. That's exactly what a 320 EUR 5600G is.



Valantar said:


> So, are these APUs gaming powerhouses? Of course not. Are they capable of gaming in general? Unlike previous generations of APUs, yes - though clearly with compromises. They keep up very well with similarly priced dGPUs still, and crucially deliver great CPU performance and 16 PCIe lanes for a future dGPU addition if you're interested in that. And they are _crazy_ efficient on top of that. If this is all you can afford for your main PC, it's a good deal, and it leaves you with great upgrade potential - far better than an i3-10100, for example. The CPU in a 5600G is just a tad slower than a 5600X, which is an excellent all-round CPU, and great for dGPU gaming.


That doesn't change anything about them being grossly overpriced and over-advertised. Obviously I will stick to my defined minimum spec, and the 5600G doesn't deliver. It doesn't matter to me how close it gets if it doesn't meet the spec I define as playable. Besides, it's already this poor today, so it has no longevity in it.




Valantar said:


> It's still obvious that if you're going for a bargain-basement, gaming-only build, you're better off buying used parts. That is always true - but it also comes with severe caveats, from needing to watch out for scams (fake GPUs are pretty common these days, not to mention far simpler scams than that), to dodgy component quality and longevity, to the lack of warranties, to the instability of used markets, to component availability, and so on. It's still the best way to get a cheap gaming computer, but it's definitely not the easiest, and it _is _the easiest way to screw up.


At that budget, it's a no-brainer to avoid garbage like the 5600G. An i3 with a GTX 960 is the way to go. Or a new Quadro T600.



Valantar said:


> I have to admit I've forgotten how this discussion got into APUs in the first place though - isn't this about the 12900K?


By now we're already on something like the 5th page of going off topic. Not sure if anyone cares anymore.

BTW what i9? Alder Lake is trash /s




Valantar said:


> Still a pretty hefty upgrade path though. Is the 3950X a weak CPU? And even a 3600X is a hefty upgrade, and can run fine on any A320 board. The upgrade path is absolutely there for these bargain-basement chips.


Wait, weren't you just recently claiming that Threadripper makes no sense, because Zen 2 and IPC matter more? Oh my, how the tables have turned. The 3950X is pretty much a Threadripper on AM4.




Valantar said:


> Sorry, but they aren't. And the OC potential was quite limited. And as you can see in the Techspot review linked before, even the 5300G soundly beats the 3400G, despite a 5CU disadvantage.


I'm really not convinced; unless it hits some overclock wall, that shouldn't be the case. Or maybe it's just DDR5.




Valantar said:


> "Not a scam" and "big chance of success" are quite unrelated statements. One does not follow from the other. Zhaoxin might very well be a success - in the Chinese office and government PC markets, for example, as well as servers and other places - but they're never going to become a proper third option for gaming PCs in global markets.


Still better than pouring millions into yet another chip IPO scam.





Valantar said:


> The service sector relies on goods produced somewhere to deliver their services. They are just as imbricated in the flows of global capital as everything else. If anything, the move to a larger proportion of service sector jobs is precisely an indication of the export of difficult, unpleasant and harmful industrial production jobs to poorer areas, as people start refusing to take on shit jobs for shit pay, and companies neither want to make the jobs better/safer or pay more. While there is indeed something to be said for the benefits for moving from "buying stuff" to "paying people to do things" on many levels (from environmental to economic), that doesn't solve anything by itself.


I still disagree. In the service sector you also have people like barbers, who barely use any goods (unless you go to a hair salon, but that's an entirely different thing). There is the finance sector, which again barely uses resources. What about government? What about education, which recently proved that it can be done with just the internet available?




Valantar said:


> "Right ideology"? What does that mean? Either you've fundamentally misunderstood the classic political spectrum, you ascribe to that edgelord idea of "Stalinism was totalitarian, so all socialism is right-wing" (which ignores the fact that Stalinism was socialist in name only, and that the soviet union rapidly devolved into an oligarchic extreme predatory capitalism), or you're just using words in some weirdly convoluted way. Either way, I'm not saying I have any conclusive answers, I'm just saying that it's quite widely documented that capitalism, especially in its current guise, is deeply harmful and fundamentally unsustainable. And not "unsustainable" as in "not environmentally friendly" (though that is also very true), "unsustainable" as in "it cannot be upheld over time, it is bound to crash". It's literally built into the workings of the system. Economic crashes are an intrinsic part of our current global economic systems, and avoiding them is literally impossible, as that would require literally infinite resources.


Oh shit, I meant left ideologies. Sorry for the snafu, I'm not yet too familiar with the formal terms of political ideologies. By left I mean those that have strong welfare, reject laissez-faire capitalism, and often reject private property.




Valantar said:


> Pentium and Celeron availability has been _very_ spotty for the past three or so years. Just do a quick search, you'll find tons and tons of forum posts decrying the persistent lack of stock for the best value chips in those ranges. It has gotten better as Intel has gotten out of their 2018-to-2020-ish supply crunch and have gotten 10nm/7 working at scale, but it's still not solved.


Oh well, in my region those were available, but seemingly there was nearly no demand for them. The Athlon 3000G was sold out nearly instantly and I don't see it anywhere to buy anymore.



Valantar said:


> Again you're mixing tons of factors from all over, from personal tastes to pricing to market conditions to product specs. Current GPU prices are crazy everywhere, so AMD (or Nvidia) can't really be blamed for that, even if their current MSRPs are also stupid - the 6600 XT should have been <$300 MSRP, for example. But that is what it is - nothing anyone can do but wait. You're right that the 5000-series wasn't great for budget cards, but even then AMD was supply constrained from TSMC, and it made sense for them to prioritize higher profile products like the 5700/XT (which were good, but had severe bugs that undermined their quality), and the 5600 XT (which was IMO the best GPU of that generation from either company). I'm definitely hoping we'll see a return to normalcy in the coming years with good value, good performance  $200-ish GPUs again, but for now, there's nothing to do and nobody to blame for any of this - the problems are far too large in scope for that, and the only possible fixes are in international politics.


But the 5700 XT was a mid-range product, not a high-profile one. It took on the 2060 Super, not really the 2070, and certainly not the 2080.

I can bet that we won't see normalcy for at least 5 years. Normalcy is dead, and so are 200 EUR/USD GPUs. We would be doing well if after 5 years we could start going back to that, but the current situation is still a mess and we still have a rampant pandemic killing people every day, with no real means to tame it. Supply chains might get even more borked if some suppliers go bankrupt. Intel or AMD won't build fabs instantly either. And the world economy is still in turmoil that is at the mercy of how we handle the pandemic, not really in the hands of people otherwise doing strong business. Countries can still start a lockdown rather easily, which puts them in inescapable debt. Debt not only means that you are taking others' money, but also that you pay interest. The poorer you are, the worse the interest is for you, and the more of a slippery slope you end up on. We are still deep in shit and just getting deeper; we are not even close to coming out of it.


----------



## Deleted member 215115 (Nov 16, 2021)

What's the point of these back and forth walls of text besides making the thread unreadable for everyone else?


----------



## Valantar (Nov 16, 2021)

The red spirit said:


> Well, AMD states on the Athlon 300G page that it runs esports games at 720p well. Ryzen 5000Gs are expected to run anything, and that includes AAA games too. I guess what they call "enthusiast level performance" includes AAA games.


Do you mean 3000G? Or 300GE?

Either way, this is what AMD's landing page for "Athlon processors with Vega Graphics" tells us:


> With AMD Radeon™ Graphics built right in, you’ll enjoy every pixel as you edit family photos, stream your favorite shows in up to 4K HDR, and *play the most popular esports games in high-definition 720p*. Fueled by AMD advanced 7nm processor core technology, AMD Athlon™ 3000 Series is ready to harness the power of graphics card upgrades for smooth HD+ 1080p gaming – so gamers looking for the flexibility for adding future upgrades like discrete graphics cards will enjoy an easy upgrade path.


(my emphasis)
That latter claim is rather weird as there are only 12nm and 14nm chips in the 3000G series (so far - might be foreshadowing some future launch, or one that has been cancelled due to 7nm shortages I guess?), but they're pretty clear about the scope of performance for these: 720p esports, dGPU upgrades for anything else. (And their use of "HD+ 1080p" there also indicates they're not selling this as the basis of a high end gaming platform.)


The red spirit said:


> I remember it ran Battlefield at 1080p, so I thought they were capable, but still below an HD 7750 obviously.


"Battlefield". Which one? You said you had a 6000-series APU, which came out in 2013. According to that list, there were eight main series Battlefield games out at that point. I can find some BF4 videos of that APU, but none detailing the resolution or with a framerate counter, and judging either from an overcompressed youtube video is a fool's errand.


The red spirit said:


> Your A8 is not an A10; A8s had lower CPU clock speeds, maybe less cache too, and certainly far fewer GPU cores.


... so you didn't look at the sources I linked then? The A8-7600 is in that AnandTech review as well. It's slower than the A10-7850K, yes, but not by a huge margin at all. They're both squarely in the same performance class overall.


The red spirit said:


> Still, they are marketed as smooth 1080p gaming solutions by themselves and they are yet to match RX 550 properly.


Where? Can you show me an example of it being marketed as a smooth 1080p gaming solution that, as you've been harping on, somehow implies that this applies for AAA gaming?


The red spirit said:


> I strongly disagree. Even before I was enthusiast (whatever that means), I tried to make games to run smoothly and by that I mean targeting at 45-60 fps. 30 fps is a piss and is clearly laggy or unresponsive to me. Not even for fast paced games, but for anything at all. It's not acceptable even for Age of Mythology. My standards were like that with Athlon 64 3200+/FX 5200 machine, even if it meant much worse visual quality to achieve that. I only make an exception to Far Cry as it had surprisingly consistent framerate and didn't feel unresponsive when aiming, but that's just one exception. And to be honest, I never had "enthusiast level" gear. Downclocked RX 580 (which is slower than RX 480) was the best I ever had, you can take my word, that I wouldn't want to go back to 650 Ti 1GB for daily gaming. I remember it being somewhat a potato at Battlefield and almost insufferable in GTA 5.


The fact that you were aware of framerates at all places you squarely in the enthusiast group. Seriously, most gamers don't know what framerate is or what it indicates. They can feel the difference between something being smooth and not, and might start looking into it if it bothers them too much, but most still have no idea.


The red spirit said:


> I'm certainly not getting too conditioned to fps, it was just plainly obvious before, that fps matters. I never bought into nonsense that 30 fps is minimum playable framerate. For me that would be 40.


40? On a 60Hz panel? That's a juddery (or teary) mess. A steady 30fps feels _far_ smoother than some in-between range.
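The judder point is easy to see with a bit of arithmetic. A minimal sketch (my own, not from the thread), assuming vsync and instant rendering, counting how many refresh cycles each rendered frame stays on screen:

```python
from math import gcd

def hold_pattern(fps, refresh_hz, frames=8):
    """How many refresh cycles each rendered frame is held on a vsynced
    panel. Integer tick math (LCM of the two rates) avoids float drift."""
    ticks_per_s = fps * refresh_hz // gcd(fps, refresh_hz)  # LCM
    frame_ticks = ticks_per_s // fps          # frame period in ticks
    refresh_ticks = ticks_per_s // refresh_hz # refresh period in ticks
    holds, next_flip = [], 0
    for i in range(1, frames + 1):
        ready = i * frame_ticks               # when frame i finishes rendering
        n = 0
        while next_flip < ready:              # wait for the next refresh
            next_flip += refresh_ticks
            n += 1
        holds.append(n)
    return holds

print(hold_pattern(40, 60, 6))  # [2, 1, 2, 1, 2, 1] -> 16.7/33.3 ms judder
print(hold_pattern(30, 60, 6))  # [2, 2, 2, 2, 2, 2] -> even 33.3 ms pacing
```

40 fps alternates between one- and two-cycle holds, so displayed frame times swing between ~16.7 ms and ~33.3 ms, while a locked 30 fps lands on every second refresh, which is why it feels smoother.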


The red spirit said:


> Perhaps your perspective on that is different, since you own 6900 XT and perhaps a higher refresh rate monitor or at least one with Freesync. I don't and never even saw one in person.


Well, I do have a Freesync monitor, that is true. It's my secondary 75Hz 1080p monitor that I only really use for office work (rotating it into landscape and setting it as the main monitor in Windows is too much of a hassle for some occasional 75fps gaming). My main monitor is a decade old Dell U2711, at 60Hz 1440p. So, no, sorry, not applicable. Or, I guess you could count the 2160p120 TV, but I only rarely game on that, and I've so far not bothered lugging my main PC into the living room to test that out. I'm planning to, as it will no doubt be great, but it's not something I'm used to, no.


The red spirit said:


> You can tell me about "stuff that isn't strictly necessary", when I was getting piss frames in GTA 5, on anything that wasn't 1080p low or when I had to run Far Cry 5 on RX 560 at less than 1600x900 to get 40-50 fps and even then it wasn't too stable. I know full well, that RX 580 is what I need. There's no merit in low end junk, it just sucks your money and doesn't deliver. That's exactly what 320 EUR 5600G is.


But that's the thing: you knew enough to identify what was bothering you. Again, that places you in the enthusiast class. Beyond that, you clearly have strong preferences for higher resolutions as well - remember, both the PS4 and XO render at somewhere between 900p and 720p in the vast majority of games, and that's what the vast majority of gamers are used to. Most games on those consoles are 30fps as well.

Also, as you point out, unsteady framerates exacerbate poor gameplay smoothness. You'd likely have been better off at a locked 30fps than that unstable 40-50.


The red spirit said:


> That doesn't change anything about them being grossly overpriced and overadvertised. Obviously, I will stick to my defined minimum spec, 5600G doesn't deliver. Doesn't matter to me if it's close or not, if it doesn't even meet a spec that I define as playable. Besides that, it's already this poor today, so it doesn't have any longevity in it.


Why overpriced? You get a near-5600X CPU with a moderately capable GPU built in for a lower price than the 5600X. Intel has launched some very competitive offerings since, but their iGPUs are still trash, so they lose out there. And as I've shown, you're not getting an equally fast CPU + equally fast GPU for the same price that way.


The red spirit said:


> At that budget, it's no brainer to avoid garbage like 5600G. i3 with GTX 960 is a way to go. Or new Quadro T600 it is.


If you can find a used 960 for a decent price? And one that isn't run into the ground, being 5+ years old? Also, while TPU doesn't allow for a direct comparison, the 960 isn't _that_ much faster either. The 1060 in the 5600G review is 242% of the 5600G's performance at 1080p; put another way, the 5600G delivers 41% of the 1060's performance. In TPU's database the 960 is 58% of a 1060 6GB. That makes the 960 clearly faster, but it's not a staggering difference. Definitely enough to make games playable on the 960 that aren't on the APU, sure, but for that you have to step down significantly in CPU performance and ease of upgradeability, as that i3 is going to start being a bottleneck long before the 5600G's CPU is. Everything has tradeoffs.
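Chaining those relative-performance figures together shows where the 960 lands (the percentages are the ones quoted above from TPU; the chaining itself is just back-of-envelope math):

```python
# Figures quoted from the TPU review/database above, not re-measured here.
gtx1060_vs_5600g = 2.42   # GTX 1060 = 242% of the 5600G iGPU at 1080p
gtx960_vs_1060 = 0.58     # GTX 960 = 58% of a GTX 1060 6GB

gtx960_vs_5600g = gtx1060_vs_5600g * gtx960_vs_1060
print(f"GTX 960 ~ {gtx960_vs_5600g:.0%} of the 5600G iGPU")  # ~ 140%
```

So the 960 comes out at roughly 1.4x the APU: clearly faster, but nowhere near the 1060's 2.4x.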


The red spirit said:


> Wait, weren't you just recently claiming that Threadripper makes no sense, because Zen 2 and IPC matters more. Oh my, how tables have turned. 3950X is pretty much a Threadripper on AM4.


... sigh. Seriously? Yes, I did make that argument. And I also made the argument that there is a significant upgrade path from a low-end Athlon on an A320 board even if it's limited to "only" 3000-series CPUs. You see how those two statements _in no way whatsoever_ contradict each other or even conflict with each other, right? One is in the context of "someone wants to maximize performance for a high end PC, which parts are smart to choose", while the other is in the context of "can we speak of a viable upgrade path for an Athlon 300GE on an A320 motherboard". The scenarios are _wildly_ different. If you can't see that, we literally can't have a conversation.


The red spirit said:


> I'm really not convinced, unless it has some overclock wall, that shouldn't be a case. Or maybe it's just DDR5.


DDR5? 5000-series APUs use DDR4. Also, "overclock wall"? Yes, 3000-series APUs have the same limits to overclocking as all 14/12nm Zen/Zen+ CPUs have - they don't go much above 4GHz (and the iGPUs might reach 1700MHz if you're lucky). Meanwhile the 4000 and 5000 series chips have significantly higher IPC (~+15% and ~+35% respectively) and clock their iGPUs _much_ higher even at stock (my 4650G runs at 1900MHz, and I got a bit of a dud that only OCs to 2100; 2400 is relatively common). They also have much better memory controllers, which of course help, but ... well, that's part of what makes them better. Yes. They are better. That is the core of the argument here.


The red spirit said:


> Still better than pouring millions to yet another chip IPO scam.


Has anyone here suggested doing so?


The red spirit said:


> I still disagree. In service sector you also have people like barbers, that barely use any goods (unless you go to hair salon, but that's entirely different thing).


Uh ... razors? Shaving cream? Lotions and all that stuff? The equipment they use? The furnishings in the barbershop? Literally everything they need to do their jobs is dependent on the flows of global capital, and the large-scale exploitation of natural resources and labor in poorer parts of the world.


The red spirit said:


> There is finance sector, again barely uses resources.


But it also produces literally _zero_ of any worth. They shuffle numbers around to make them look bigger, and organize gambling circles for the ultra-rich. Oh, and they spend _massive_ amounts on computers and technology, creating a lot of e-waste.


The red spirit said:


> What about government? What about education, which recently proved that it can be done with just internet available?


"Can be done" is a stretch. That it can be done at a significantly reduced quality by massively overworked staff in a crisis situation is ... well, not proof of anything. Also, what about the stuff you need to teach and learn remotely? Are computers or the internet outside of the flows of global capital? Obviously not. Government is obviously not either. We live in a neoliberal world. The flows of global capital run through _everything_. Unless you live off of subsistence farming and make your own tools, there is no way for any person to avoid this in our current world.


The red spirit said:


> Oh shit, I meant left ideologies. Sorry for snafu, I'm not yet too familiar with formal terms of political parties. By left I mean those that have strong welfare, reject laissez faire capitalism and often private property.


No problem, we all mix up words from time to time 


The red spirit said:


> Oh well, in my region those were available, but seemingly there was nearly no demand for them. Athlon 3000G was sold out nearly instantly and I don't see it anywhere to buy anymore.


Hm, that's odd. Probably down to some weird dynamic of distribution. I don't think the Pentium Gold series has been reliably in stock at all since it launched in the markets I've paid attention to.


The red spirit said:


> But the 5700 XT was a mid-range product, not a high-profile one. It took on the 2060 Super, not really the 2070, and certainly not the 2080.


Oh, absolutely. It's just that it was the highest end they had at the time, and thus their focus, as they desperately needed to rebuild the image of their graphics division after five years of not really competing.


The red spirit said:


> I can bet that we won't see normalcy for at least 5 years. Normalcy is dead and so are 200 EUR/USD GPUs. We would be doing well, if after 5 years we could start going back to that, but current situation is still a mess and we still have rampant pandemic that murders everyday, with no real supplies to tame it. Supply chains might get even more borked, if some will go bankrupt. Intel or AMD won't build fabs instantly either. And world economy is still in some turmoil, that is at mercy of how we handle pandemic, not really in hands of people doing a strong business otherwise. Countries still can just start a lockdown rather easily, which puts them in inescapable debt. Debt not only means that you are taking others money, but also that you pay interest. The poorer you are, the worse interest is for you and the more on slippery slope you end up. We are still deep in shit and just getting deeper, we are not even close to coming out of it.


Yeah, I don't think you're necessarily wrong here. At least it's a good thing that we're seeing pushes for more localized chip production, as the centralization of the industry that we see today has made it - as we're currently experiencing - extremely precarious. Another built-in function of neoliberal thinking: if overhead is seen as bad and detrimental to profits, you start building things _exactly_ to your projected future needs, and if those projections are wrong, you're suddenly in a situation where it takes several years of scrambling to correct for the simple fact that predicting the future accurately is impossible. The chip industry didn't just put all their eggs into ever fewer baskets, they also made sure those baskets were _just_ big enough, so that when crisis struck and we suddenly needed more eggs in more places there was no way to make this happen. The short-sighted, profit-oriented thinking of global neoliberal capital is, when you look at it in certain ways, impressively dumb. Though it's easy enough to think that this is a feature rather than a bug, as those in power are never the ones hurt by these events.



rares495 said:


> What's the point of these back and forth walls of text besides making the thread unreadable for everyone else?


Hey, you're not wrong, but there doesn't seem to have been any interest in actually discussing the 12900K since this kicked off, so ... meh.


----------



## Ahhzz (Nov 16, 2021)

The red spirit said:


> .....






Valantar said:


> .....
> Hey, you're not wrong, but there doesn't seem to have been any interest in actually discussing the 12900K since this kicked off, so ... meh.




Then consider this notice of interest. Start a new thread, or take it to PMs, but get back to the original topic. Blatant disregard of the forum's rules is not the best plan of action. Thanks!


----------



## lexluthermiester (Nov 16, 2021)

kDude said:


> Do the e-cores take care of steam/discord and chrome in the background if you're playing a game?


That's the general idea. And as of build 22000.282, Microsoft has gotten Windows 11 to a state where the P-core/E-core dynamic works the way it should.


kDude said:


> For example, could having E-cores specifically take care of background stuff like Chrome/Discord/Steam, etc. potentially prevent minor FPS inconsistencies like frametime spikes/microstutters while you're playing games at the same time?


Again, yeah. That's the idea and it's working well so far.

If you're looking at Alder Lake and want to know if it's worth it, I think you can safely jump in at this point.

Advice from me: if you already have DDR4 and don't want to spend a ton of money on DDR5, go with a DDR4 motherboard. The performance differences between DDR4 and DDR5 in most use cases are margin-of-error differences, and that will remain the case until DDR5 starts edging ahead, which historically happens about 18 months to two years after a new RAM standard comes to market. If you want more details on this dynamic, see the article W1zzard did comparing several kits of DDR4 to a kit of DDR5-6000:








**DDR4 vs. DDR5 on Intel Core i9-12900K Alder Lake Review** (www.techpowerup.com)

The Intel Alder Lake platform has support for both DDR5 and DDR4 memory. We ran 38 application benchmarks and 10 games at multiple DDR4 configurations to learn what performance to expect when using DDR4 vs. DDR5 on 12th Gen, and whether there's a point at which DDR4 performance can beat the much...


----------



## kDude (Nov 16, 2021)

Valantar said:


> That's essentially what they're for, yes. They also help quite a bit in heavily MT tasks like rendering and transcoding, but for gaming they seem well suited for letting the high performance cores be for high performance tasks only.


Tempted to save up for it then.


lexluthermiester said:


> That's the general idea. And as of build 22000.282, Microsoft has gotten Windows 11 to a state where the P-core/E-core dynamic works the way it should.
> 
> Again, yeah. That's the idea and it's working well so far.
> 
> ...


I usually upgrade once every 5 years, so I go big or go home and don't end up doing small upgrades in between, so if I upgrade I'll go prepared for DDR5.

Damn, I thought GPU prices were the only thing (mainly) affected by the price hikes, but motherboards are unusually pricey too. Think I'll wait another half a year or so; maybe some better mid-range motherboards will pop up.


----------



## lexluthermiester (Nov 16, 2021)

kDude said:


> I usually upgrade once every 5 years, so I go big or go home and don't end up doing small upgrades in between, so if I upgrade I'll go prepared for DDR5.


To be fair, if you only upgrade every half decade, DDR4 would still be a valid choice, as a quality set with tight timings would stand the test of time. Just food for thought.



kDude said:


> Think I'll wait another half a year or so; maybe some better mid-range motherboards will pop up.


Might be wise to. AMD has the upcoming new hotness, and waiting to see the full scope of this latest generation of CPUs could change your perspective.


----------



## Valantar (Nov 16, 2021)

kDude said:


> I usually upgrade once every 5 years, so I go big or go home and don't end up doing small upgrades in between, so if I upgrade I'll go prepared for DDR5.
> 
> Damn, I thought GPU prices were the only thing (mainly) affected by the price hikes, but motherboards are unusually pricey too. Think I'll wait another half a year or so; maybe some better mid-range motherboards will pop up.


Motherboards have gotten a lot more expensive in recent years due to the wealth of high-speed I/O in them - PCIe 4.0 (and now 5.0), USB 3.2 Gen 2x2, fast DDR4/DDR5, and so on. Especially Z690, with PCIe 4.0 in the chipset, is likely to be a bump up from Z590 - we saw the same with X570 on the AMD side. That the cheapest Z690 boards just after launch are ~$200 is pretty much par for the course these days.

As for upgrading, remember that DDR5 only really comes into its own in terms of performance at high speeds, so if you're planning to upgrade in the near future, I would factor in a RAM upgrade a year or two down the line. There simply isn't sufficiently fast DDR5 on the market yet for it to be really valuable. ADL still performs best with DDR5, but the DDR5 needs to be fast.


----------



## John Naylor (Nov 30, 2021)

I keep joking that I am going to build my wife a footrest to put under her desk with a 4x140 mm radiator in it. This would be a good CPU for that. But over the last two-plus decades, performance upgrades have usually come before efficiency... mainly because that's what sells. Much like politics, and the "well, it's faster in things I don't actually ever do" arguments, it seems negatives like power consumption only become important when brand loyalty comes into play. It only matters when one brand is slower and the other is faster, and like politics, both teams have accused the other of this failing and then switched arguments when the situation flips. It's never really factored into my decision making, as everything is custom water cooled to < 10C delta T and I pay a teeny bit more for HVAC in summer and a teeny bit less in winter.


kDude said:


> I usually upgrade once every 5 years, so I go big or go home and don't end up doing small upgrades in between, so if I upgrade I'll go prepared for DDR5.
> 
> Damn, I thought GPU prices were the only thing (mainly) affected by the price hikes, but motherboards are unusually pricey too. Think I'll wait another half a year or so; maybe some better mid-range motherboards will pop up.



That seems to be the case these days within our circle of users... Here in the US, a significant portion of the build price has been the import tariffs, which have hurt American businesses far more than China. PCs 100% built in China are exempt from the tariffs, leaving stateside system builders and BYO enthusiasts at a distinct disadvantage... production yields and pandemic-related shipping costs also significantly affect build prices.

We are also seeing a wider stratification between players as for example "Nvidia controlled 83% of the discrete graphics card market in the third quarter, with AMD holding the rest."   The concern here is that the wider that difference, the more clout a company gains in securing its supply chain and the lower unit price it pays.  In response to an analysts question on how it was able to do that, Huang responded "we have secured guaranteed supply, very large amounts of it, quite a spectacular amount of it from the world's leading foundry in substrate and packaging and testing certain companies, the integral part of our supply chain."

The interesting thing, to me anyway, is the fact that top end buyers are not significantly influencing sales.  Looking at month to month sales, the three biggest changes (leaving out mobile and integrated GFX) in market share were ....

GeForce GTX 1660 +0.35%
GeForce GTX 1660 SUPER +0.29%
GeForce RTX 3060 Ti + 0.12%

A bit down in sales we have ..

GeForce RTX 3080 +0.05%
GeForce RTX 3080 Ti +0.05%
Radeon RX 570 +0.03%

While it's always true that the lower-priced cards will dominate sales... month-to-month sales usually see an increase the further into the release cycle we get. From September to October the 3080's market share was up 0.15%; from October to November it gained just a third of that.

What cards have been the most taken out of service and upgraded ?

GeForce GTX 1050 Ti -0.22%
GeForce RTX 2060 -0.41%
GeForce GTX 1060 -0.56%

In short, what I am seeing here is that users of mid-range components are happy enough with component costs to open their wallets... the high end, not so much. Given the current market and economy, I don't see many folks jumping on a $600 CPU and $400 mobo, as GPU prices for "matching" graphics cards will result in a PC well above what folks are used to paying. The power consumption of the CPU and GPU also screams for custom water cooling, adding even more expense.

The 12700K and 12600K at more moderate prices should be able to win over a substantial number of builders, but personally I wouldn't be willing to invest in a 12900K + 3080 Ti custom-water-cooled build on a new motherboard platform. Having been doing builds since 1990, I've had more than a couple of instances of buyer's remorse with first iterations of new motherboards. As for how this release might impact market share, AMD hit a 14-year high this past August at just under 40%, but has dropped 2% since. Because of the secondary market/economy/component-cost issues above, and more so the generally slower adoption rate of new motherboard platforms, I don't think we'll see component sales at the levels we've seen in the past. So I don't see a big shift in CPU market share coming with the first set of DDR5-capable componentry, but AMD's next gen needs to make a splash before the second generation of DDR5-capable motherboards arrives, as mindshare going into that second gen will be spiked by those who have put off upgrades for the last couple of years.


----------



## meb (May 2, 2022)

As a designer, and definitely not a computer hardware/software specialist (not even close), I found the information here sobering at some level. I know so many folks, me included, who want the most powerful processors without consideration for heat and power consumption. It seems that selecting a processor that is a little better than good enough, rather than a high-end product like the 12900K, is really the best path for professionals using processor-reliant programs like AutoCAD/SketchUp. For me, as was pointed out above, durability and consistency are most important.

I suppose that folks in VR and photo-realism may accept higher heat and power consumption as a trade-off for absolute production speed; I simply need every click to be responsive and reliable... for at least 5 years. The i7-7700K in my current rig does run hot, and its performance dropped off after ~2.5 years.


----------



## fevgatos (Jun 18, 2022)

@W1zzard 

There is something very wrong with your power-limited numbers. At 75 W I get 18,500 points in CBR23, while your review has it at 11K... tried on 2 different mobos. Also at 125 W I get 23,800-24,500. You must have done something wrong.


----------



## Dr. Dro (Jun 18, 2022)

fevgatos said:


> @W1zzard
> 
> There is something very wrong with your power-limited numbers. At 75 W I get 18,500 points in CBR23, while your review has it at 11K... tried on 2 different mobos. Also at 125 W I get 23,800-24,500. You must have done something wrong.



Not really, no. You must keep in mind that the review is older and it was done on early microcode for a new microarchitecture, and that OS level optimizations for the processor probably weren't very well developed yet. The processor should be faster now, especially if you tweak it like you obviously did.


----------



## fevgatos (Jun 18, 2022)

Dr. Dro said:


> Not really, no. You must keep in mind that the review is older and it was done on early microcode for a new microarchitecture, and that OS level optimizations for the processor probably weren't very well developed yet. The processor should be faster now, especially if you tweak it like you obviously did.


Nope, no tweaking, everything was reset to default with just a power limit. 18500 score @ 75w on  2 different motherboards.


----------



## Dr. Dro (Jun 18, 2022)

fevgatos said:


> Nope, no tweaking, everything was reset to default with just a power limit. 18500 score @ 75w on  2 different motherboards.



The earlier microcode and OS build would still apply, however. Launch-day reviews just show tech with all of its newness; you should take a look at 12900KS reviews instead. Those processors are newer and have all been reviewed on more updated platforms. To get a ballpark for where your regular 12900K should be, knock off like 5-10% of the performance the KS shows due to its aggressive clock speeds.


----------



## fevgatos (Jun 18, 2022)

Dr. Dro said:


> The earlier microcode and OS build would still apply, however. Launch-day reviews just show tech with all of its newness; you should take a look at 12900KS reviews instead. Those processors are newer and have all been reviewed on more updated platforms. To get a ballpark for where your regular 12900K should be, knock off like 5-10% of the performance the KS shows due to its aggressive clock speeds.


Well, TechPowerUp did not test the KS with any power limits, so it's really hard to figure out what's up, but there is something definitely wrong with the original 12900K review. It makes the CPU look way more inefficient than it actually is; I mean, the actual numbers from my testing are 70% higher. That's a HUGE margin
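For what it's worth, the 70% figure falls straight out of the two CBR23 scores at the same 75 W cap; a quick back-of-the-envelope sketch (the scores are the ones quoted in this thread, treated as illustrative, not re-measured):

```python
# Points-per-watt at the same 75 W limit, using the scores quoted above.
review_score = 11_000  # TPU launch review, 12900K CBR23 @ 75 W (as quoted)
my_score = 18_500      # my own runs, defaults + 75 W power limit

watts = 75
print(review_score / watts)  # ~146.7 points/W
print(my_score / watts)      # ~246.7 points/W

gap = (my_score - review_score) / review_score
print(f"{gap:.0%}")  # 68%, i.e. roughly the "70%" margin above
```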


----------



## qubit (Jun 18, 2022)

fevgatos said:


> Well, TechPowerUp did not test the KS with any power limits, so it's really hard to figure out what's up, but there is something definitely wrong with the original 12900K review. It makes the CPU look way more inefficient than it actually is; I mean, the actual numbers from my testing are 70% higher. That's a HUGE margin


W1z is an established reviewer with a great reputation for accuracy, so he's just not gonna screw up so badly. So, sure, 70% is huge, but what makes you so sure that your numbers are accurate, even accounting for early platform differences like @Dr. Dro explained?


----------



## fevgatos (Jun 18, 2022)

qubit said:


> W1z is an established reviewer with a great reputation for accuracy, so he's just not gonna screw up so badly. So, sure, 70% is huge, but what makes you so sure that your numbers are accurate, even accounting for early platform differences like @Dr. Dro explained?


Multiple reasons. First off, resetting to default settings ensures there is nothing wrong with my settings. Second, and most importantly, it is common sense: W1z's numbers show a 6-core Zen 3 tying an 8+8-core CPU in efficiency in CBR23, with both at 65 W. That's just ridiculous, and of course it is not actually the case.


----------



## qubit (Jun 18, 2022)

fevgatos said:


> Multiple reasons. First off, resetting to default settings ensures there is nothing wrong with my settings. Second, and most importantly, it is common sense: W1z's numbers show a 6-core Zen 3 tying an 8+8-core CPU in efficiency in CBR23, with both at 65 W. That's just ridiculous, and of course it is not actually the case.


What do other launch day reviews from _reputable_ sites show? I'll bet they agree with TPU, not you.

You're still ignoring those manufacturer optimisations too which will have a measurable effect on the benchmarks.

The bottom line is that you're ignoring confounding factors in your testing and conclusion. All these things have to be taken into account when coming to a valid conclusion. At most, you should point out to W1z that your results differ and that you'd like to discuss why, not conclude that he's wrong, end of story.


----------



## fevgatos (Jun 18, 2022)

qubit said:


> What do other launch day reviews from _reputable_ sites show? I'll bet they agree with TPU, not you.


Well, good thing we didn't bet, because you'd lose. The only other known reviewer that tested with various power limits is igor'sLAB. Unfortunately they didn't test at 65 W, but they did at 125 W, and it clearly shows that, for example, in every MT workload the 12900K @ 125 W easily beats the 12600K and basically ties the 5900X and the 12700K. On the other hand, in TPU's review the 12900K @ 125 W gets handily beaten by both the 5900X and the 12700K, while it only ties the 12600K. There is a huge margin here.


These are the links; check the Blender numbers from TPU and igor'sLAB and compare.









**Core i9-12900KF, Core i7-12700K and Core i5-12600 in a workstation test with amazing results and an old weakness | Part 2 | Page 6 | igor'sLAB** (www.igorslab.de)

So today I'll get serious and show you where Alder Lake S can really score aside from colorful gaming pixels. Gaming what? Completely overrated if you look at at least some of today's results.
				












**Intel Core i9-12900K Alder Lake Tested at Power Limits between 50 W and 241 W** (www.techpowerup.com)

We test Intel's Core i9-12900K at various TDP levels all the way down to 50 W to determine how much efficiency is really in the new Alder Lake core, and how these power limits affect performance. Competing with the efficiency of AMD's Zen 3 Ryzen lineup is just two settings changes away.


----------



## qubit (Jun 18, 2022)

fevgatos said:


> Well, good thing we didn't bet, because you'd lose.


That's misrepresenting my argument. I challenged you to check properly before accusing W1z of a bad review, I didn't outright say that you were wrong, hence no bet to lose.

There may be a reasonable explanation for the differences that neither of us have thought of though. It's up to him now to explain the differences between his review and Igor's. Be prepared that he may not reply though.


----------



## lexluthermiester (Jun 18, 2022)

qubit said:


> That's misrepresenting my argument. I challenged you to check properly before accusing W1z of a bad review, I didn't outright say that you were wrong, hence no bet to lose.


I've been watching this exchange; fevgatos seems to be baiting you. They're being a bit subtle about it, but it's definitely some baiting. If they want to be ignorant and fail to see what is staring them in the face, let them, but don't let them rope you into their sad nonsense.


----------



## fevgatos (Jun 18, 2022)

lexluthermiester said:


> I've been watching this exchange; fevgatos seems to be baiting you. They're being a bit subtle about it, but it's definitely some baiting. If they want to be ignorant and fail to see what is staring them in the face, let them, but don't let them rope you into their sad nonsense.


Posting some actual numbers that seem to contradict the review is baiting nowadays? Okay


----------



## qubit (Jun 18, 2022)

lexluthermiester said:


> I've been watching this exchange; fevgatos seems to be baiting you. They're being a bit subtle about it, but it's definitely some baiting. If they want to be ignorant and fail to see what is staring them in the face, let them, but don't let them rope you into their sad nonsense.


I think you're right, Lex.

Anyway, I've set him straight on it now and he's not come back to me even though he's seen my post, so I call that a result lol.


----------



## lexluthermiester (Jun 18, 2022)

qubit said:


> so I call that a result lol.


Right? Very telling...


----------



## fevgatos (Jun 18, 2022)

qubit said:


> I think you're right, Lex.
> 
> Anyway, I've set him straight on it now and he's not come back to me even though he's seen my post, so I call that a result lol.


What's there to come back to? You said we should wait for W1z to explain; what more do you want me to reply to that? LOL


----------



## qubit (Jun 18, 2022)

Oh dear.


----------



## lexluthermiester (Jun 18, 2022)

fevgatos said:


> You said we should wait for W1z to explain


He's not going to bother as he is not in the wrong. The problem is your understanding of facts and the context thereof. It's as simple as that. We're done here.


----------



## R0H1T (Jun 18, 2022)

fev's been baiting almost everyone here! Let's look at the blender workloads ~








See anything different, aside from the different chips themselves?

Ironically, the igor'sLAB review also shows that if the workload lasts longer, the 5950X will be better!


----------



## fevgatos (Jun 18, 2022)

R0H1T said:


> fev's been baiting almost everyone here! Let's look at the blender workloads ~
> 
> 
> 
> ...


Different chips cause one 12900K to outperform the 12700K while the other one barely beats the 12600K? LOL



lexluthermiester said:


> He's not going to bother as he is not in the wrong. The problem is your understanding of facts and the context thereof. It's as simple as that. We're done here.


What facts are there to understand? My tests show completely different results. Igor's tests show completely different results. What facts are there that I don't understand? Can you name any?


----------



## R0H1T (Jun 18, 2022)

Let's try again, this time slowly ~


> Blender is one of the few professional-grade rendering programs out there that is both free and open source. That fact alone helped build a strong community around the software, making it a highly popular benchmark program due to its ease of use as well. *For our testing, we're using the Blender "BMW 27" benchmark scene with Blender v2.92*.











**Intel Core i9-12900K Alder Lake Tested at Power Limits between 50 W and 241 W** (www.techpowerup.com)

We test Intel's Core i9-12900K at various TDP levels all the way down to 50 W to determine how much efficiency is really in the new Alder Lake core, and how these power limits affect performance. Competing with the efficiency of AMD's Zen 3 Ryzen lineup is just two settings changes away.
				




This is the one Igor's lab used ~


----------



## fevgatos (Jun 18, 2022)

R0H1T said:


> Let's try again, this time slowly ~
> 
> 
> 
> ...


Dude, do you realize that TPU's results show the 12600K BEATING or tying the 12900K at the same wattage in MT workloads? You do realize that cannot happen, RIGHT? I really have no idea why you keep insisting when you are obviously wrong.


----------



## R0H1T (Jun 18, 2022)

Same wattage, where? Just for the Blender workload, the 12600K is beating the i9 with PL1/PL2 limited to *100 W*; you do realize that chip isn't going to run full throttle at those limits, right? I'm assuming the 12600K isn't power limited. The issue is likely *because of big-little (cores) & maybe even Thread Director*!

Oh look, the guys at AT are also *making Intel look bad*


----------



## fevgatos (Jun 18, 2022)

R0H1T said:


> Same wattage, where? Just for the Blender workload, the 12600K is beating the i9 with PL1/PL2 limited to *100 W*; you do realize that chip isn't going to run full throttle at those limits, right? I'm assuming the 12600K isn't power limited.


Since he measured power consumption in Cinebench, it is easier if you just compare Cinebench. The 12900K at 125 W draws more power than the 12600K and performs basically identically, which just CANNOT be the case. I know because, besides everything else, I tested it on 2 different motherboards; the 12900K gets over 23,500 at 125 W.


----------



## R0H1T (Jun 18, 2022)

You don't know much about PCs, then. Depending on how the OS scheduler assigns threads to the cores, a task on a 20c/40t big-little chip can take 2x as long as on a regular 10c/20t big-core chip. This is also one of the reasons why the initial 'Dozer chips were super bad, performing worse than they actually did after the Windows updates!


----------



## fevgatos (Jun 18, 2022)

R0H1T said:


> You don't know much about PCs, then. Depending on how the OS scheduler assigns threads to the cores, a task on a 20c/40t big-little chip can take 2x as long as on a regular 10c/20t big-core chip. This is also one of the reasons why the initial 'Dozer chips were super bad, performing worse than they actually did after the Windows updates!


No, you don't know much about PCs, then; the bigger chip won't consume more power while performing worse than or equal to the smaller one in MT workloads. Besides igor'sLAB, we are 3 people who tested the 12900K on 4 different motherboards, all with the same results: 18,500 to 19,200 at 75 W in CBR23 and 23,500 to 24,500 at 125 W. And I'm talking about DEFAULT settings, no undervolting or anything of the like.

Also, there is this review, again from TPU









**Intel Core i9-12900K E-Cores Only Performance Review** (www.techpowerup.com)

With Alder Lake, Intel is betting big on hybrid CPU core configurations. The Core i9-12900K has eight P(erformance) cores and eight E(fficient) cores. We were curious and tested the processor running the E-Cores only to see how well they perform against architectures like Zen 2, Zen 3, Skylake...
				




Here it shows the E-cores on their own outperforming the 8+8 configuration at the same wattage! It also shows the 8 P-cores with HT off outperforming the 8+8 configuration at the same wattage. That obviously cannot be the case. It's blatantly false.


----------



## Dr. Dro (Jun 18, 2022)

fevgatos said:


> No, you don't know much about PCs, then; the bigger chip won't consume more power while performing worse than or equal to the smaller one in MT workloads. Besides igor'sLAB, we are 3 people who tested the 12900K on 4 different motherboards, all with the same results: 18,500 to 19,200 at 75 W in CBR23 and 23,500 to 24,500 at 125 W. And I'm talking about DEFAULT settings, no undervolting or anything of the like.
> 
> Also, there is this review, again from TPU
> 
> ...



It should not be surprising that a processor with fewer cores and threads would behave better under a strict power budget. This is why I've also never believed in traditional mobile processors with 8 or more cores; I was actually even a bit distrustful of the 5600H when I bought my laptop. It turns out 6 Zen 3 cores are about _fine_ for a 45 W processor without sacrificing frequency; at a lower design power such as, say, 25 W, it would be better for most laptop workloads to have no more than four cores. I hate throttling that much!

In the review you linked, the E-cores show the precise reason they exist: high power efficiency. With the P-cores all but disabled and drawing no power, they'll show exactly the aforementioned behavior; they're going to run at their optimal frequencies, throttle-free, and produce the best possible benchmark scores. In case you didn't figure it out by now, this is what the i9-12900KS does to achieve its high performance: they throw power limits right out the window and say screw efficiency, letting it chug as much juice as it can to put out performance indiscriminately. That's also why you found no power-limited benchmarks for the 12900KS; they would make that SKU pointless, as it's binned for clocks and clocks alone.


----------



## fevgatos (Jun 18, 2022)

Dr. Dro said:


> It should not be surprising that a processor with fewer cores and threads would behave better under a strict power budget. This is why I've also never believed in traditional mobile processors with 8 or more cores; I was actually even a bit distrustful of the 5600H when I bought my laptop. It turns out 6 Zen 3 cores are about _fine_ for a 45 W processor without sacrificing frequency; at a lower design power such as, say, 25 W, it would be better for most laptop workloads to have no more than four cores. I hate throttling that much!
> 
> In the review you linked, the E-cores show the precise reason they exist: high power efficiency. With the P-cores all but disabled and drawing no power, they'll show exactly the aforementioned behavior; they're going to run at their optimal frequencies, throttle-free, and produce the best possible benchmark scores. In case you didn't figure it out by now, this is what the i9-12900KS does to achieve its high performance: they throw power limits right out the window and say screw efficiency, letting it chug as much juice as it can to put out performance indiscriminately. That's also why you found no power-limited benchmarks for the 12900KS; they would make that SKU pointless, as it's binned for clocks and clocks alone.


Actually, that is definitely not the case: more cores perform better at the same wattage in MT workloads.

E-cores are NOT actually more efficient than P-cores. I don't know why people say that; it is not the case. Testing shows they are less efficient than P-cores. Even Intel admits as much in their own slides.


----------



## Dr. Dro (Jun 18, 2022)

fevgatos said:


> Actually, that is definitely not the case: more cores perform better at the same wattage in MT workloads.
> 
> E-cores are NOT actually more efficient than P-cores. I don't know why people say that; it is not the case. Testing shows they are less efficient than P-cores. Even Intel admits as much in their own slides.



No, not always. More execution units at lower frequencies do not always beat fewer execution units at higher frequencies; that depends on the nature of the workload and also on the instructions-per-clock rate of the microarchitecture. Over time, this can grow into a disparity that becomes quite extreme, as I'm going to show you:

For example, my 18-core, 36-thread Xeon E5-4669 v3 running at 2.4 GHz (and this is on DDR4-2133 CAS 9 memory, about as tweaked as you can get!) scores around 12,100 points in Cinebench R23:





You will find this score trivial to match with an affordable consumer-grade 6-core processor today:






The reason is obvious, too; just compare my meager 550 points, which are so bad they're off the scale even against low-budget modern CPUs:






At the end of the day, I don't believe W1zz's testing is incorrect; it simply reflects the experience early adopters had at the time, kinks and all. It doesn't make your processor less valuable or less enjoyable to use, nor his testing methodology flawed. So why be bothered by it? Just enjoy your i9, man. Be thankful that you could afford one; a lot of people want one and can't have one
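The cores-times-clock-times-IPC point above can be sketched numerically. The IPC factors here are invented for illustration (the Haswell-era Xeon normalized to 1.0, a modern core guessed at roughly 1.6x), not measured values:

```python
# Toy multithreaded-throughput model: score ~ cores * clock * per-core IPC factor.
# The IPC factors are illustrative guesses, not measured data.
def mt_score(cores: int, ghz: float, ipc_factor: float) -> float:
    return cores * ghz * ipc_factor

old_xeon = mt_score(cores=18, ghz=2.4, ipc_factor=1.0)   # E5-4669 v3, baseline IPC
new_6core = mt_score(cores=6, ghz=4.4, ipc_factor=1.6)   # hypothetical modern 6-core

print(new_6core / old_xeon)  # ~0.98: six modern cores roughly match 18 old ones
```

The exact numbers don't matter; the point is that core count alone tells you very little once clocks and IPC diverge this much.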


----------



## lexluthermiester (Jun 18, 2022)

fevgatos said:


> E-cores are NOT actually more efficient than P-cores.


See, THAT is an example of baiting. You make a totally preposterous and incorrect statement just to get a reaction out of people. Knock it off.


----------



## fevgatos (Jun 18, 2022)

lexluthermiester said:


> See, THAT is an example of baiting. You make a totally preposterous and incorrect statement just to get a reaction out of people. Knock it off.


What do you mean? E-cores ARE way less efficient than the P-cores across most of their performance-per-watt curve. They only get more efficient when you drop them below 3-4 watts per core, which is not something you do on a desktop CPU.

Do you think Intel themselves are baiting? According to their very own graphs, the P-cores ARE indeed more efficient than the E-cores. Which turns out to be true, and you would already know that if you had the CPU.

Here is the graph from Intel






And here are some graphs from a review that actually tested this exact thing: the efficiency of P vs. E cores. Pay attention to the graph: the E-cores are only more efficient when they are running below about 2 GHz while the P-cores are at around 1 GHz. In most other scenarios they lose in terms of efficiency.







So you are basically saying that Intel is baiting, and that they are making a totally preposterous and incorrect statement about their own CPU. Do you mind if I throw the whole sentence back at you? It seems like you are the one baiting.
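A crossover like the one in those graphs is easy to model. This toy sketch uses completely invented coefficients (not Intel's data) just to show why "which core type is more efficient" depends on the per-core power budget:

```python
# Toy performance-vs-power curves with invented coefficients, only to
# illustrate an efficiency crossover; these are NOT measured Intel numbers.
def perf_p(core_watts: float) -> float:
    return 5.0 * core_watts ** 0.7   # P-core: higher ceiling at high power

def perf_e(core_watts: float) -> float:
    return 8.0 * core_watts ** 0.4   # E-core: ahead only at low per-core power

for w in (1, 2, 4, 8):
    winner = "E" if perf_e(w) > perf_p(w) else "P"
    print(f"{w} W/core -> {winner}-core ahead")
```

With these made-up curves the E-core wins below roughly 5 W per core and the P-core wins above it, which is the general shape being argued about here.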


----------



## 95Viper (Jun 19, 2022)

Stop the arguing.
You made your points... now move on!


----------

