Thursday, August 4th 2022

Potential Ryzen 7000-series CPU Specs and Pricing Leak, Ryzen 9 7950X Expected to hit 5.7 GHz

It's pretty clear that we're getting very close to the launch of AMD's AM5 platform and the Ryzen 7000-series CPUs, with spec details and even pricing brackets turning up online. Wccftech has posted what the publication believes will be the lineup, expected to launch in just over a month's time, if the rumours are to be believed. The base model is said to be the Ryzen 5 7600X, which the site claims will have a base clock of 4.7 GHz and a boost clock of 5.3 GHz. There's no change in core or thread count compared to the current Ryzen 5 5600X, but the L2 cache appears to have doubled, for a total of 38 MB of cache. This is followed by the Ryzen 7 7700X, which starts out a tad slower with a base clock of 4.5 GHz, but has a slightly higher boost clock of 5.4 GHz. Likewise, the core and thread count remains unchanged, while the L2 cache gets the same bump, for a total of 40 MB of cache. Both these models are said to have a 105 W TDP.

The Ryzen 9 7900X is said to have a 4.7 GHz base clock and a 5.6 GHz boost clock, a 200 MHz jump over the Ryzen 7 7700X. This CPU has a total of 76 MB of cache. Finally, the Ryzen 9 7950X is said to have the same 4.5 GHz base clock as the Ryzen 7 7700X, but the highest boost clock of all the expected models at 5.7 GHz, while carrying a total of 80 MB of cache. These two SKUs are both said to have a 170 W TDP. Price-wise, from top to bottom, we might be looking at somewhere around US$700, US$600, US$300 and US$200, so it seems AMD has adjusted its pricing downwards by around $100 at the low end, with the Ryzen 7 part sitting in the same price bracket as the Ryzen 7 5700X. The Ryzen 9 7900X seems to have had its price adjusted upwards slightly, while the Ryzen 9 7950X is expected to be priced lower than its predecessor. Take all of this with a healthy helping of scepticism for now, as things can still change before launch.
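If the leak is accurate, the lineup would shape up as follows (cache totals counting L2 + L3, prices approximate, and Ryzen 9 core counts assumed to match their 5000-series predecessors):

Ryzen 5 7600X - 6c/12t, 4.7 GHz base / 5.3 GHz boost, 38 MB cache, 105 W TDP, ~US$200
Ryzen 7 7700X - 8c/16t, 4.5 GHz base / 5.4 GHz boost, 40 MB cache, 105 W TDP, ~US$300
Ryzen 9 7900X - 12c/24t, 4.7 GHz base / 5.6 GHz boost, 76 MB cache, 170 W TDP, ~US$600
Ryzen 9 7950X - 16c/32t, 4.5 GHz base / 5.7 GHz boost, 80 MB cache, 170 W TDP, ~US$700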
Source: Wccftech

277 Comments on Potential Ryzen 7000-series CPU Specs and Pricing Leak, Ryzen 9 7950X Expected to hit 5.7 GHz

#151
Valantar
fevgatosDo you understand what a best case scenario is and what it's used for? If Zen 3 loses in the best case scenario, then no further testing needs to be done. For example, CBR23 is a best case scenario for Golden Cove, so if they lose in CBR23 they will lose in everything else.
... and you still don't get the fact that you simply can't know that a workload is a "best case scenario" for any given architecture or implementation of that architecture until you've done extensive testing across a wide variety of workloads. CB23 is absolutely not a "best case scenario" for GC - it's a benchmark it does well in. That's it. There are other benchmarks where it has a much more significant lead - and benchmarks where it falls significantly behind. Again: you're desperate for simplification; you seem dead set on wanting a single test that can somehow give a representative overview. As I apparently have to repeat myself until this sticks: this does not exist, and never will.

As for efficiency, I have linked to quite a few tests in which Zen3 is already either faster, more efficient, or both, when compared to ADL. I mean, just look at TPU's 12900K review: at stock, the 12900K loses to or ties with the 5950X in the following tests: Corona, Keyshot, V-Ray, UE4 game dev, Google Tesseract OCR, VMWare Workstation, 7-zip, AES & SHA3 encryption, H.264 & H.265 encoding. Now, we don't have power measurements for each of these tests, sadly. But we do know the stock power limits, as well as the per-core peak power of both CPUs. So, unless the 12900K has some kind of debilitating bottleneck that causes it to essentially sit idle, it is using slightly less power (in very light, low threaded workloads), as much power (in workloads that strike that balance of a few threads, but not very heavy ones), or more power (in anything instruction dense or anything lightweight above ~3 active cores) than the 5950X. Some of these - rendering, compression, encryption and encoding, at least - are relatively instruction dense nT workloads, where the 12900K will be using more power than the 144W-limited 5950X. Yet it still loses. So, that kind of disproves your "It's more efficient at everything", no?

Would a low-clocked 16c ADL chip have better efficiency than the 12900K in these tests? That depends on the test, how well it utilizes E cores, and what clocks that chip could sustain at your proposed power levels - including crucial details about the specific silicon implementation that render speculation on this utterly pointless. Still, it is highly unlikely that this would represent a massive, earth-shattering efficiency improvement.
fevgatosRegarding SR, you are missing the point. It doesn't matter at all what it will be competing against; the argument I made was that 16 GC cores would wipe the 5950X off the face of the Earth in terms of efficiency, the same way 8 GC cores wipe the 5800X. So when SR will be released and what it will be facing when it does is completely irrelevant to the point I'm making.
And you entirely missed the point that the GC cores in SR aren't the same as the GC cores in ADL, and due to the different implementations their performance will vary quite a bit. And, once again: you have absolutely no basis for claiming that a theoretical 16 P core CPU will be more efficient than the 5950X. None.

Heck, look at TPU's 12900K testing at various power limits. Sure, it shines in low threaded workloads even with a 50W limit, demonstrating fantastic efficiency in those tasks, and great ST performance. But in anything multithreaded? In rendering tasks, at 125W it barely beats the 5800X, despite having 24 threads to the 5800X's 16, including 8 E cores to help pull those loads. The same goes for chemistry and physics simulation, AI upscaling, game dev, 7-zip decompression, and all three video encoding tests. It even loses to the 5800X in encryption workloads. Sure, the 5800X pulls a bit more power (138W vs. 125W max), but ... yeah. That amazing scaling you're talking about doesn't exist. ADL scales extremely well in light, low threaded tasks, and otherwise scales fine in everything else. In MT/nT tests where it didn't already win by a ton, it loses a lot of performance as you reduce its power limits.
#152
springs113
efikkanZen 2 and 3 turned out well eventually, but had a bumpy ride with BIOS/firmware issues for several months (I believe it was 4+ months for Zen 3).
After maturity, they've been great though. My system, which was built nearly a year ago, has had zero crashes (if I recall correctly), and I run my computers for many months without rebooting.


With the current level of inflation we (as consumers) should be happy if we see prices anywhere close to this. And if we do, and AMD can supply enough chips, then they should move a huge volume of products.


Achieving something like this would require very good engineering on top of an unusually well performing node.
Do you remember the Zen 2 rumors? At some point the >5 GHz hype was extreme, yet it turned out to be nonsense from a YouTube channel. So we'll see whether the details of this article are true or not.


IPC is just the average instructions per clock. There are many changes to CPUs which can improve IPC, yet it varies from workload to workload (sometimes even from application to application) whether these improvements translate into increased performance. Typically, increases in execution units, SIMD, etc. have little impact on games but a massive impact on video or software rendering, while improvements to the prefetcher, cache, etc. have more impact on games - yet both of these affect IPC.

I believe Zen 4 will also increase L2 cache, so a matchup here will be quite interesting.

But as for the 5800X3D being an "insane gaming chip", that's more than a little exaggerated. There are some games where the gains are very large, but for most of them the gains are marginal at realistic resolutions. We don't know whether this kind of boost from increased L3 will continue with future games, but we do know that this kind of behavior in software is caused by instruction cache misses, and any good programmer could tell you that misses in the instruction cache are primarily due to software bloat. So my point is that designing a CPU with loads of L3 is a double-edged sword: it will "regain" some performance lost due to bad code, but it may also "encourage" bad software design.

I'm more interested in what AMD may use this stacking technology for in the future. If it's just to add more L3 cache, then it's almost a gimmick in the consumer space. But if this someday leads to a modular CPU design where you can have e.g. 8 cores but choose between a "base" version for gaming or one with extra SIMD for multimedia etc., all seamlessly integrated through multi-layer chiplets, then I'm for it.
Are you for real? Didn't AMD show clock speeds themselves? I also don't recall Zen 3 at launch ever having an issue, but maybe I was too busy enjoying my launch purchases of all the un-obtainium back then, between the consoles, CPUs and GPUs. The 5800X3D is a beast of a gaming chip; compare it to its predecessor (Zen 2) and its running mate (5800X).
#153
fevgatos
ValantarSo, that kind of disproves your "It's more efficient at everything", no?
No, it doesn't, because my claim is that, core for core, GC is more efficient than Zen 3. You can't disprove that claim by comparing 16 Zen 3 cores with 8+8.
ValantarHeck, look at TPU's 12900K testing at various power limits. Sure, it shines in low threaded workloads even with a 50W limit, demonstrating fantastic efficiency in those tasks, and great ST performance. But in anything multithreaded? In rendering tasks, it barely beats the 5800X, despite having 24 threads to the 5800X's 16, including 8 E cores to help pull those loads. The same goes for chemistry and physics simulation, AI upscaling, game dev, 7-zip decompression, and all three video encoding tests. It even loses to the 5800X in encryption workloads. Sure, the 5800X pulls a bit more power (138W vs. 125W max), but ... yeah. That amazing scaling you're talking about doesn't exist. ADL scales extremely well in light, low threaded tasks, and otherwise scales fine in everything else. In MT/nT tests where it didn't already win by a ton, it loses a lot of performance as you reduce its power limits.
Yeah, that review is obviously flawed. I don't know what he did wrong, but he did something. It's obvious from the results themselves; check the CBR23 numbers. The 12600K ties the 12900K in CBR23 at the same power consumption.

And I know it's wrong because I have the freaking CPU. At stock with a 125 W power limit it scores 24k+ in CBR23. Actually, you can even compare it with TechSpot's 12700 review: at 65 W it scores over 16k, while TPU has the 12900K at 18k at 125 W. With fewer cores, mind you. Obviously flawed numbers.
springs113Are you for real? Didn't AMD show clock speeds themselves? I also don't recall Zen 3 at launch ever having an issue, but maybe I was too busy enjoying my launch purchases of all the un-obtainium back then, between the consoles, CPUs and GPUs. The 5800X3D is a beast of a gaming chip; compare it to its predecessor (Zen 2) and its running mate (5800X).
Actually, Zen 3 had lots of problems; some of them are fixed and some of them won't ever be. X570 specifically had some problems with SSD reads, USB disconnects, fTPM stuttering...
#154
trparky
OK seriously, do you get a paycheck from Pat Gelsinger?
#155
Valantar
fevgatosNo, it doesn't, because my claim is that, core for core, GC is more efficient than Zen 3. You can't disprove that claim by comparing 16 Zen 3 cores with 8+8.
Sorry, but you're being wildly inconsistent here. Now you're saying your claim is that the GC core is more efficient than the Zen3 core. Which we have conclusive evidence showing that it is not, through Anandtech's per-core power testing. Despite the 12900K being pushed stupidly high, and responding poorly to instruction dense workloads, it is still less efficient in lighter workloads such as most SPEC workloads, consuming 6-7W more than the peak power draw of any single Zen3 core, while barely outperforming it.

As I have written about at length above, there is a strong argument to be made for Alder Lake, the implemented CPUs, being more efficient at lighter (less instruction dense), low threaded workloads than Zen3 CPUs, but - repeating myself a lot here - this is not due to an advantage in core efficiency, but due to lower uncore power draw. AMD's through-package Infinity Fabric gives their CPUs a higher base power draw regardless of the number of cores under load (though not at idle) than Intel's monolithic CPUs, meaning that despite having a less efficient core, they win in chip efficiency comparisons in these workloads because the chip is more than just cores.
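To make the uncore point concrete, here's a minimal back-of-the-envelope sketch in Python. The wattages are made-up illustrative assumptions, not measurements; the point is only the shape of the curves: a chip with slightly thirstier cores but a frugal uncore wins on whole-package efficiency at low active-core counts, and the ordering flips as more cores load up.

def chip_efficiency(perf_per_core, core_power, uncore_power, active_cores):
    # Whole-package performance per watt: the uncore is paid once,
    # no matter how many cores are loaded.
    perf = perf_per_core * active_cores
    power = core_power * active_cores + uncore_power
    return perf / power

# Chip A: efficient cores, power-hungry off-package fabric (Zen3-like shape).
# Chip B: thirstier cores, frugal monolithic uncore (ADL-like shape).
for n in (1, 2, 4, 8, 16):
    a = chip_efficiency(100, 6.0, 18.0, n)
    b = chip_efficiency(105, 9.0, 5.0, n)
    print(f"{n:2d} cores: A {a:.2f} pts/W vs B {b:.2f} pts/W")

With these (again, invented) numbers, B leads at one through four active cores and A takes over by eight - which is exactly why "which chip is more efficient" has no single answer.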

I don't need any of TPU's data to disprove your statement that the GC core is more efficient than Zen3, because Anandtech's testing shows conclusively that it is the other way around, and that Zen3 scales extremely well at lower clocks (~<6.5W/core @3.775GHz for the 5950X; average ~2.6W (SPECint) to ~1.9W (SPECfp) @ 2.45GHz or higher for the EPYC 7763). Can you show me even a single GC core implementation that can demonstrate similarly low per-core power draws? Even in the same ballpark?
fevgatosYeah that review is obviously flawed. I dont know what he did wrong, but he did something. Its obvious from the results themselves, check the cbr23 numbers. The 12600k ties the 12900k in cbr23 at samr power consumption

And i know its wrong cause i have the freaking cpu. At stock with 125w power limit it scores 24k+ in cbr23. Actually you can even compare it with techspots 12700 review, at 65w it scores over 16k while tpu has the 12900k at 18k / 125w. With less cores mind you. Obviously flawed numbers
Far too many variables in play here - differences in motherboard, BIOS revision, subsequent Intel microcode updates, and more. Until someone can deliver data of comparable quality that shows the review to be erroneous, I'll trust the review, thanks. You're very welcome to try and do so, but that'll require more than stating "my chip does X".
fevgatosActually zen 3 had lots of problems, some of them are fixed and some of them wont ever. The x570 specifically had some problems with ssd reads, usb disconnects, ftpm stuttering...
"Lots of problems" is quite a stretch. fTPM stuttering is relatively rare, and fixed; USB disconnects were B550-only and were fixed long ago, and AFAIK that SSD read speed thing only applied to chipset-connected SSDs (i.e. not CPU-connected ones, as are the majority) and was also fixed.

It's kind of funny, really. Whenever someone brings some nuance to your simplistic arguments and conclusions, you always try to shift the goal posts to suit your liking. The 12900K is more efficient at 125W than the 5950X! No, it's the GC core that's more efficient! No, we can't do comparisons with existing benchmarks - but we can run our own tests(?). No, we can't trust per-core power draw numbers from seasoned reviewers, because look at this benchmark result I got! It's almost as if, oh, I don't know, you have a vested interest in a certain party coming out as conclusively better in this comparison?

Seriously though: I understand that you spent a lot of money on your CPU. And it's a great CPU! It's not even a terrible power hog if tuned sensibly, or in lighter workloads. But ... you need to leave that desperate defensiveness behind. It is perfectly okay that the thing you have bought is not conclusively and unequivocally the best. If that's the standard you live by, either you'll go through life deluding yourself, or you'll be consistently sad, angry and disappointed - because the world doesn't work that way. ADL is great. Zen3 is great. ADL is slightly faster; Zen3 is slightly more efficient in heavy or highly threaded loads - generally. There are significant caveats and exceptions to both of those overall trends. Neither is a bad choice. Period. And it's okay for there to be multiple good choices out there - in fact, I'd say it's great! Your desperate need for your chosen brand to be the best is ... well, both leading you to make really bad conclusions in how you're looking at test and performance data, and probably not making you feel very good either. I would really recommend you take a step back and reconsider how you're looking at these things.
#156
fevgatos
ValantarSorry, but you're being wildly inconsistent here. Now you're saying your claim is that the GC core is more efficient than the Zen3 core. Which we have conclusive evidence showing that it is not, through Anandtech's per-core power testing. Despite the 12900K being pushed stupidly high, and responding poorly to instruction dense workloads, it is still less efficient in lighter workloads such as most SPEC workloads, consuming 6-7W more than the peak power draw of any single Zen3 core, while barely outperforming it.

As I have written about at length above, there is a strong argument to be made for Alder Lake, the implemented CPUs, being more efficient at lighter (less instruction dense), low threaded workloads than Zen3 CPUs, but - repeating myself a lot here - this is not due to an advantage in core efficiency, but due to lower uncore power draw. AMD's through-package Infinity Fabric gives their CPUs a higher base power draw regardless of the number of cores under load (though not at idle) than Intel's monolithic CPUs, meaning that despite having a less efficient core, they win in chip efficiency comparisons in these workloads because the chip is more than just cores.

I don't need any of TPU's data to disprove your statement that the GC core is more efficient than Zen3, because Anandtech's testing shows conclusively that it is the other way around, and that Zen3 scales extremely well at lower clocks (~<6.5W/core @3.775GHz for the 5950X; average ~2.6W (SPECint) to ~1.9W (SPECfp) @ 2.45GHz or higher for the EPYC 7763). Can you show me even a single GC core implementation that can demonstrate similarly low per-core power draws? Even in the same ballpark?

Far too many variables in play here - differences in motherboard, BIOS revision, subsequent Intel microcode updates, and more. Until someone can deliver data of comparable quality that shows the review to be erroneous, I'll trust the review, thanks. You're very welcome to try and do so, but that'll require more than stating "my chip does X".

"Lots of problems" is quite a stretch. fTPM stuttering is relatively rare, and fixed; USB disconnects were B550-only and were fixed long ago, and AFAIK that SSD read speed thing only applied to chipset-connected SSDs (i.e. not CPU-connected ones, as are the majority) and was also fixed.

It's kind of funny, really. Whenever someone brings some nuance to your simplistic arguments and conclusions, you always try to shift the goal posts to suit your liking. The 12900K is more efficient at 125W than the 5950X! No, it's the GC core that's more efficient! No, we can't do comparisons with existing benchmarks - but we can run our own tests(?). No, we can't trust per-core power draw numbers from seasoned reviewers, because look at this benchmark result I got! It's almost as if, oh, I don't know, you have a vested interest in a certain party coming out as conclusively better in this comparison?

Seriously though: I understand that you spent a lot of money on your CPU. And it's a great CPU! It's not even a terrible power hog if tuned sensibly, or in lighter workloads. But ... you need to leave that desperate defensiveness behind. It is perfectly okay that the thing you have bought is not conclusively and unequivocally the best. If that's the standard you live by, either you'll go through life deluding yourself, or you'll be consistently sad, angry and disappointed - because the world doesn't work that way. ADL is great. Zen3 is great. ADL is slightly faster; Zen3 is slightly more efficient in heavy or highly threaded loads - generally. There are significant caveats and exceptions to both of those overall trends. Neither is a bad choice. Period. And it's okay for there to be multiple good choices out there - in fact, I'd say it's great! Your desperate need for your chosen brand to be the best is ... well, both leading you to make really bad conclusions in how you're looking at test and performance data, and probably not making you feel very good either. I would really recommend you take a step back and reconsider how you're looking at these things.
The TPU review is absolutely wrong and you don't need any other data; their own data proves it. The 12600K cannot be more efficient than the 12900K. Worse bin, fewer P cores and half the E cores. Also, TechSpot's review tested a 12700, and at 65 W it scores more than the 12900K at 100 W. It's painfully obvious that the TPU review is wrong. I mean, even the 5600X is more efficient at the same wattage, LOL.

Personally, I tested three 12900Ks on four different mobos and they all came back with the same results: 23,500 to 24,500 at 125 W. Nowhere near TPU's numbers.

I never changed my argument; I've said repeatedly that the E cores are inefficient at most wattages you would run a desktop CPU at, and that Golden Cove cores are vastly more efficient than Zen 3 cores at the same wattage. That's my argument and it has never changed. I don't care if ADL is the best; if it wasn't I would have bought something else. Anyways, there is a thread for people posting their numbers at the same wattage; I'll be back in 3 days and I'll post some numbers. If Zen 3 even gets close to 8 GC cores in efficiency I'll throw my computer out the window.
#157
Vario
I've been thinking about doing a 7700X AM5 upgrade from my i5-8600K.
#158
HenrySomeone
VarioI've been thinking about doing a 7700X AM5 upgrade from my i5-8600K.
The 13700(K) will likely be considerably more potent. Honestly, as it looks right now, only the 7950X will have some merit, unless you're willing to play the waiting game of what might eventually get released on the AM5 platform. But if you want your performance now...
#159
Why_Me
ValantarYeah the US "there might be sales tax, but we won't tell you until the second before you're paying" thing is incredibly shady and misleading.
There are 50 US states and each state has its own sales tax, not to mention some states, such as the one I live in, have no sales tax.
#160
chrcoluk
The clocks are very impressive, but I hope it's not at the cost of power efficiency.
#161
StrikerRocket
Not going to upgrade anytime soon. This is all moving too fast: as soon as one gets used to a new system, a new architecture comes around, a new platform, etc.
This is becoming too much, I think. I'll stick with my 5900X and 3070 Ti for the time being.
#162
Valantar
fevgatosThe TPU review is absolutely wrong and you don't need any other data; their own data proves it. The 12600K cannot be more efficient than the 12900K. Worse bin, fewer P cores and half the E cores. Also, TechSpot's review tested a 12700, and at 65 W it scores more than the 12900K at 100 W. It's painfully obvious that the TPU review is wrong. I mean, even the 5600X is more efficient at the same wattage, LOL.
Many possible explanations for this - for example, it could be indicative of the low power limit interfering with the boost algorithms, causing the CPU to be stuck in boost/throttle loops, which always kill performance. If this was the case, it would be quite reasonable for Intel to have fixed this afterwards, which would explain your different results.

Oh, btw, have you heard of this new invention called a link? I have linked to literally every single source I've referred to in this discussion. It would be nice if you could do others the same courtesy as is being done to you. It's not my job to corroborate your statements. Post your sources.
fevgatosI never changed my argument; I've said repeatedly that the E cores are inefficient at most wattages you would run a desktop CPU at, and that Golden Cove cores are vastly more efficient than Zen 3 cores at the same wattage. That's my argument and it has never changed.
But this is the thing: as you keep reformulating this core argument, you keep changing what you are arguing, because this core argument does not make logical sense. How? Simple: the only interpretation of "Golden Cove cores are more efficient than Zen3 cores at the same wattage" that makes logical sense is if you're looking at per-core power, not package power. Yet the only power numbers you care about - consistently, regardless of what other data is provided - are package power. Package power includes other significant power draws than the cores, and crucially these draws differ between chips and architectures, introducing a variable that ruins your data - you literally can't get per-core power from package power, as there's other stuff mixed in there.

There are two possible logically congruent variants of your argument:
- That the GC core is more efficient than the Zen3 core, on a core-for-core, core-power-only basis, at the same wattage
- That ADL as implemented, as a full chip, including cores and uncore, is more efficient than Zen3 at the same wattage

The first claim has been quite conclusively disproven by AnandTech's per-core power testing. The GC core in instruction dense workloads can scale to insane power levels, and even in lighter workloads needs notably more power than the highest power a Zen3 core ever reaches in order to eke out a small win.

The second point is crucially more complex, as the answer differs wildly across power levels as the effects of uncore power vs. core power scale, and of course carries with it the problem of an uneven playing field, where every ADL chip is operating at a significant downclock from its stock configuration, which privileges it over the more frugal at stock Zen3 CPUs. And, as has been discussed at massive length above: there is no conclusive, simple answer to this. ADL does indeed have an advantage at relatively light, low threaded workloads. It does not if the workload is instruction dense, or if the number of fully loaded cores exceeds ~4. Though again, due to how different workloads execute differently on different architectures, even these are oversimplified generalizations. The real answer: it's damn complicated, and they each have their strengths and weaknesses.
fevgatosI don't care if ADL is the best; if it wasn't I would have bought something else. Anyways, there is a thread for people posting their numbers at the same wattage; I'll be back in 3 days and I'll post some numbers. If Zen 3 even gets close to 8 GC cores in efficiency I'll throw my computer out the window.
It's been a while since I've seen someone contradict themselves so explicitly within the span of three sentences. Well done, I guess? "I don't care!/If I'm wrong I'll throw my PC out the window!" aren't ... quite in line with each other, now, are they?
Why_MeThere are 50 US states and each state has its own sales tax, not to mention some states, such as the one I live in, have no sales tax.
I'm well aware of that, but I don't see how that makes it logical for stores to not bake sales tax into their listed prices. A store is generally only located in one state, right? I can't imagine there are many stores straddling a state border. So they should be quite capable of pricing things with what people will actually be paying, as is done literally everywhere else. And for online retailers it would be as simple as having a pop-up asking which state you're in either when you first enter the site, or when you first add something to your cart.
#163
Why_Me
ValantarI'm well aware of that, but I don't see how that makes it logical for stores to not bake sales tax into their listed prices. A store is generally only located in one state, right? I can't imagine there are many stores straddling a state border. So they should be quite capable of pricing things with what people will actually be paying, as is done literally everywhere else. And for online retailers it would be as simple as having a pop-up asking which state you're in either when you first enter the site, or when you first add something to your cart.
A store located in New York, for example, where sales tax is high, can only charge a customer the sales tax of where said customer lives if the sale is done online. So no matter what store I order from, and no matter where said store is located, I pay no sales tax, because the state I live in has none.
#164
Valantar
StrikerRocketNot going to upgrade anytime soon. This is all moving too fast: as soon as one gets used to a new system, a new architecture comes around, a new platform, etc.
This is becoming too much, I think. I'll stick with my 5900X and 3070 Ti for the time being.
Upgrading every generation makes no sense anyway - it just makes progress feel slower by chopping it up into tiny bits, while costing tons of money. That's a great PC you've got, and it'll be great for many years still, so no reason to upgrade for a while still.
chrcolukThe clocks are very impressive, but I hope it's not at the cost of power efficiency.
Given the increase in base clock it seems efficiency is maintained at least to some degree, though they're definitely pushing these hard. The chips should all do base clock continuously at TDP, which looks decent (from 3.4 GHz @ 105 W to 4.5 GHz @ 170 W), but bumping TDP from 105 W to 170 W and PPT from 144 W to 230 W is still quite a lot. PPT/TDC/EDC tuning will likely be even more useful for Zen4 than it is for Zen3 currently, and no doubt there'll be notable gains from setting lower power limits, simply because the chips are scaling much higher in power than before.
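(For reference, AM4's package power limit follows PPT ≈ 1.35 × TDP: 1.35 × 105 W ≈ 142 W and 1.35 × 170 W ≈ 230 W. The leaked AM5 figures line up with AMD simply keeping that ratio, though that carrying over is an assumption at this point.)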
Why_MeA store located in New York, for example, where sales tax is high, can only charge a customer the sales tax of where said customer lives if the sale is done online. So no matter what store I order from, and no matter where said store is located, I pay no sales tax, because the state I live in has none.
Yes, exactly. Like I said: this is easily solved.
ValantarAnd for online retailers it would be as simple as having a pop-up asking which state you're in either when you first enter the site, or when you first add something to your cart.
Through this, they could easily adjust the listed price to match with the reality of what the customer will be paying. This really isn't complicated at all.
#165
Kelutrel
I'm really excited for Zen4, but I have a 5900X that I built in March 2021, and rebuilding my whole system now, after a year and a half, may not be justified by the assumed performance increase of a Zen4 platform... I would have liked a 5900X3D, but nope :(
#166
InVasMani
I hope they introduce some low-power E variants; perhaps they'll do that alongside 3D stacked cache models!? You'll already be paying a bit more for stacked cache; it may as well be binned for friendlier power at the same time.
#167
fevgatos
ValantarMany possible explanations for this - for example, it could be indicative of the low power limit interfering with the boost algorithms, causing the CPU to be stuck in boost/throttle loops, which always kill performance. If this was the case, it would be quite reasonable for Intel to have fixed this afterwards, which would explain your different results.

Oh, btw, have you heard of this new invention called a link? I have linked to literally every single source I've referred to in this discussion. It would be nice if you could do others the same courtesy as is being done to you. It's not my job to corroborate your statements. Post your sources.
You are right; here is the link to TechSpot's 12700 review. With a 65 W power limit it outscores TPU's 12900K at 100 W. That's simply preposterous.

www.techspot.com/review/2391-intel-core-i7-12700

Also, here is a 12900K at 125 W from Igor's Lab.


www.igorslab.de/en/intel-macht-ernst-core-i9-12900kf-core-i7-12700k-und-core-i5-12600-im-workstation-einsatz-und-eine-niederlage-fuer-amd-2/6/
ValantarBut this is the thing: as you keep reformulating this core argument, you keep changing what you are arguing, because this core argument does not make logical sense. How? Simple: the only interpretation of "Golden Cove cores are more efficient than Zen3 cores at the same wattage" that makes logical sense is if you're looking at per-core power, not package power. Yet the only power numbers you care about - consistently, regardless of what other data is provided - are package power. Package power includes other significant power draws than the cores, and crucially these draws differ between chips and architectures, introducing a variable that ruins your data - you literally can't get per-core power from package power, as there's other stuff mixed in there.

There are two possible logically congruent variants of your argument:
- That the GC core is more efficient than the Zen3 core, on a core-for-core, core-power-only basis, at the same wattage
- That ADL as implemented, as a full chip, including cores and uncore, is more efficient than Zen3 at the same wattage

The first claim has been quite conclusively disproven by AnandTech's per-core power testing. The GC core in instruction dense workloads can scale to insane power levels, and even in lighter workloads needs notably more power than the highest power a Zen3 core ever reaches in order to eke out a small win.

The second point is crucially more complex, as the answer differs wildly across power levels as the effects of uncore power vs. core power scale, and of course carries with it the problem of an uneven playing field, where every ADL chip is operating at a significant downclock from its stock configuration, which privileges it over the more frugal at stock Zen3 CPUs. And, as has been discussed at massive length above: there is no conclusive, simple answer to this. ADL does indeed have an advantage at relatively light, low threaded workloads. It does not if the workload is instruction dense, or if the number of fully loaded cores exceeds ~4. Though again, due to how different workloads execute differently on different architectures, even these are oversimplified generalizations. The real answer: it's damn complicated, and they each have their strengths and weaknesses.
I'm talking about package power. Anandtech hasn't disproven anything; even if they are just checking core instead of package power, they haven't done so normalized, have they?
ValantarIt's been a while since I've seen someone contradict themselves so explicitly within the span of three sentences. Well done, I guess? "I don't care!/If I'm wrong I'll throw my PC out the window!" aren't ... quite in line with each other, now, are they?
I'm just trying to tell you I'm pretty confident it is the case. And I'm confident because I tested, repeatedly. I've seen a tuned-to-the-max 5800X score 16k in CBR23 at 150 W, while 8 GC cores need... 65 W to match that. Yes, CBR is a good scenario for Alder Lake, but the difference is ridiculously big.
#168
gffermari
Why does the 12400F use the same amount of power as the 5600X?
Both are 6/12, both consume about the same watts, and both score similar numbers. It appears that the 5600X is slightly more efficient than the 12400, though there's practically no difference.

So does the 12700/12900 have so much better binning that it is twice as efficient as the Ryzens?

It appears to me that the GC cores have similar efficiency to Zen 3, but they are just clocked way higher in order to be faster in apps/benchmarks.

www.techpowerup.com/review/intel-core-i5-12400f/20.html
#169
fevgatos
gffermariWhy does the 12400F use the same amount of power as the 5600X?
Both are 6/12, both consume about the same watts, and both score similar numbers. It appears that the 5600X is slightly more efficient than the 12400, though there's practically no difference.

So does the 12700/12900 have so much better binning that it is twice as efficient as the Ryzens?

It appears to me that the GC cores have similar efficiency to Zen 3, but they are just clocked way higher in order to be faster in apps/benchmarks.

www.techpowerup.com/review/intel-core-i5-12400f/20.html
The 12400 is a different die than the rest of the lineup, and yes, it is pretty much the worst-binned Alder Lake. The 12900KS is the best bin and should be the most efficient of them all, but I haven't tested it. According to Igor's Lab, though, it requires 124 mV less than the 12900K for the same clocks, so yeah, that one will knock efficiency out of the park; we are talking about numbers that Zen 5 might not even be able to match.

Also, the review from TPU measures power from the wall, which is not really indicative. When you are testing such low-wattage parts, a 5 or 10 W discrepancy from the motherboard makes a huge difference. TPU uses the Maximus Hero for the 12400; just the RGB and the actual screen on that motherboard throw the numbers off. You can check Igor's Lab's review, which tests only CPU power; the 12400 is way more efficient than the 5600X.

It's up to 65% (that's HUGE) more efficient in lightly threaded workloads and around 20-25% more efficient in multithreaded workloads.

www.igorslab.de/en/intel-core-i5-12400-in-the-workstation-test-how-does-real-work-without-taped-e-cores-part-2/9/


Intel's stock settings push the 12900K way, way past its efficiency point. They are trying to make it compete with the 5950X in MT performance, which it has no business doing, imo. In all fairness, AMD's stock settings - as shown by the Zen 4 leaks - will also be out of the park; the only reason they didn't push the wattage with Zen 3 is they didn't need to. Intel wasn't competing in MT performance with Comet Lake, so AMD decided to play the efficiency card. Now that Intel is pushing them, AMD is also raising the stock wattage.
#170
Valantar
fevgatosI'm talking about package power.
Then please, for the love of all that's good in this world, stop going on about "core efficiency". Package power is only indirectly indicative of core efficiency, and to extract core efficiency from package power you must be able to reliably remove uncore power from package power. Without doing so, there is no way whatsoever of knowing how much power the cores are consuming.
fevgatosAnandtech hasn't disproven anything; even if they are just checking core instead of package power, they haven't done so normalized, have they?
Normalized for what? Your arbitrary power limits? They're running the chips as configured by Intel, allowing it to boost as high as it wants and the workload demands. And they demonstrated a wide range of power behaviours at these stock settings - in instruction dense POV-Ray, they saw a 71W increase over idle, which they estimate to be a 55-60W increase in core power. On the other hand, in the less instruction dense SPEC workloads they estimated core power at 25-30W. At (at least roughly) the same clocks. At which point it delivered marginally better performance than the 5950X, the cores in which peak at 20.6W in POV-Ray and similar to ADL likely consume a bit less across the SPEC suite.

That demonstrates that, as configured from the factory, at very similar clock speeds, Zen3 is more efficient than ADL, as ADL beats it by ~5-16% while consuming notably more than 5-16% more power. Lowering the power limit will not change ADL's efficiency in this test, because the CPU is nowhere near hitting any reasonable power limit - even a 50W limit would likely deliver roughly the same performance in SPEC, and it will boost just as opportunistically within this limit unless also frequency limited.
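To put rough numbers on that, using the figures above: ~25-30 W of estimated core power against a Zen3 core that tops out at ~20.6 W even in POV-Ray works out to roughly 21-46% more power for a ~5-16% performance lead - worse performance per watt at the core level whichever ends of those ranges you pick.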
fevgatosI'm just trying to tell you I'm pretty confident it is the case.
You're so confident that you're heavily emotionally invested in the outcome, yes, I see that. Doesn't change what I said above.
fevgatosAnd I'm confident because I tested, repeatedly. I've seen a tuned-to-the-max 5800X score 16k in CBR23 at 150 W, while 8 GC cores need... 65 W to match that. Yes, CBR is a good scenario for Alder Lake, but the difference is ridiculously big.
But, even assuming that the numbers you're constantly citing out of nowhere are accurate, to repeat myself for the n'th time: this comparison is deeply, deeply flawed. Heck, this is far more problematic than the purportedly "fundamentally flawed" testing you've been complaining about. Why? 'Cause you're comparing one clocked-to-the-rafters, pushed to extremes tuning of one chip, with a heavily power limited, and thus also clock limited, tuning of another. How does a 5800X perform at 65W? How do each of them perform across a range of sensible wattages? How do they perform outside of the one application that you love to cite because it can be run in one click and performs favorably on your platform of choice?
fevgatosThe 12400 is a different die than the rest of the lineup, and yes, it is pretty much the worst-binned Alder Lake. The 12900KS is the best bin and should be the most efficient of them all, but I haven't tested it. According to Igor's Lab, though, it requires 124 mV less than the 12900K for the same clocks, so yeah, that one will knock efficiency out of the park; we are talking about numbers that Zen 5 might not even be able to match.

Also, the review from TPU measures power from the wall, which is not really indicative. When you are testing such low-wattage parts, a 5 or 10 W discrepancy from the motherboard makes a huge difference. TPU uses the Maximus Hero for the 12400; just the RGB and the actual screen on that motherboard throw the numbers off. You can check Igor's Lab's review, which tests only CPU power; the 12400 is way more efficient than the 5600X.

It's up to 65% (that's HUGE) more efficient in lightly threaded workloads and around 20-25% more efficient in multithreaded workloads.

www.igorslab.de/en/intel-core-i5-12400-in-the-workstation-test-how-does-real-work-without-taped-e-cores-part-2/9/


Intel's stock settings push the 12900K way, way past its efficiency point. They are trying to make it compete with the 5950X in MT performance, which it has no business doing, imo. In all fairness, AMD's stock settings - as shown by the Zen 4 leaks - will also be out of the park; the only reason they didn't push the wattage with Zen 3 is they didn't need to. Intel wasn't competing in MT performance with Comet Lake, so AMD decided to play the efficiency card. Now that Intel is pushing them, AMD is also raising the stock wattage.
There is something rather iffy going on with those Igor's Lab benchmarks - at least in terms of AMD power consumption. He reports his 5950X consuming 217W, which is ... well, crazy. That's the power consumption of a 5950X tuned to the extreme, with zero regard for efficiency, and it is certainly not reflective of anything resembling stock behaviour. If Igor didn't do that manually, then he should throw his AM4 testing motherboard out that window you're talking about and pick one that isn't garbage. A stock 5950X doesn't exceed 144W whatsoever - though if measuring at the EPS12V cable you'd also need to include VRM conversion losses in that sum - but that would be roughly equal across all platforms.

Edit: looking at Igor's test setup, the motherboard is configured with "PBO: Auto". In other words, it's running a motherboard-dependent auto OC. That is not a stock-v-stock comparison. And that is some pretty bad test methodology. This essentially ruins any and all efficiency comparisons based on these numbers, as the motherboard is clearly overriding all power limits and pushing the chips far beyond stock power and voltage.


It's also kind of telling that you're very set on being as generous as possible with Intel, repeating that the 12400 is "the worst possible bin" of ADL, etc, yet when coming to AMD, you consistently compare against the 5800X - by far the most power hungry bin of Zen3, by a massive margin. Remember, it has the same power limits as the 5900X and 5950X, with 50% and 100% more cores respectively, while clocking only a bit higher at base. Again: look at Anandtech's per-core power draw testing. The 5800X consumes notably more power per core in an 8-core load than both of those CPUs, while also clocking lower. So, for Intel, you're ever so generous, while for AMD you're consistently comparing against the worst bins. One might almost say that this is, oh, I don't know, a bit biased?

You're also wrong about your 12400/12900K binning statements - they're not the same die, so they're not comparable bins at all. They're different silicon implementations of the same architecture, and each represents a bin of its implementation. It's entirely possible that the 12400 is a low grade bin of its silicon, but unless you've got detailed clock and power scaling data for several examples of both chips, you can't make comparisons like that.

There's also the complexities of boost algorithms and thermal/power protection systems to take into account, which can throw off simple "more power=faster" assumptions. For example, my 5800X (from testing I did way back when) runs faster in Cinebench when limited to 110W PPT than if let loose at 142W PPT. And significantly so - about 1000 points. Why? I can't say entirely for sure as I have neither the tools, skills nor time to pin-point this, but if I were to guess I'd say it's down to the higher power limit leading to higher boost power, meaning higher thermals, more leakage, and subsequent lower clocks to make up for this and protect the chip. Zen3 has a quite aggressive chip protection system that constantly monitors power, current, voltage, clock frequency, and more, and adjusts it all on the fly, meaning that tuning is complex and non-linear, and highly dependent on cooling.
#171
fevgatos
ValantarNormalized for what? Your arbitrary power limits? They're running the chips as configured by Intel, allowing it to boost as high as it wants and the workload demands.
Normalized for either consumption or performance. Great for them that they ran as configured by Intel but that's not my argument at all
ValantarBut, even assuming that the numbers you're constantly citing out of nowhere are accurate, to repeat myself for the n'th time: this comparison is deeply, deeply flawed. Heck, this is far more problematic than the purportedly "fundamentally flawed" testing you've been complaining about. Why? 'Cause you're comparing one clocked-to-the-rafters, pushed to extremes tuning of one chip, with a heavily power limited, and thus also clock limited, tuning of another. How does a 5800X perform at 65W? How do each of them perform across a range of sensible wattages? How do they perform outside of the one application that you love to cite because it can be run in one click and performs favorably on your platform of choice?
You think a comparison normalized for performance is deeply flawed? I mean, come on, you cannot possibly believe that. I don't believe you believe that. I said it before: normalized for consumption, 8 GC cores are around 20-25% more efficient; normalized for performance, the difference is over 100%. So yeah, the 5800X at 65 W can get up to 13-14k.

Again, performance normalized, the difference will still be huge. You can put the 5800X at 50 W for all I care; 8 GC cores will probably match the performance at 30 W. I mean, 2 days left until I'm back and I can test it ;)

Outside of that one application, Zen 3 is even more comedically bad. I've tested gaming performance (granted, only one game): 8 GC cores at 25 W (yes, power limited to 25) match a 5800X hitting 90+ watts in Far Cry 6. They both scored around 110 fps, if I remember correctly, at 720p ultra + RT.
ValantarIt's also kind of telling that you're very set on being as generous as possible with Intel, repeating that the 12400 is "the worst possible bin" of ADL, etc, yet when coming to AMD, you consistently compare against the 5800X - by far the most power hungry bin of Zen3, by a massive margin. Remember, it has the same power limits as the 5900X and 5950X, with 50% and 100% more cores respectively, while clocking only a bit higher at base. Again: look at Anandtech's per-core power draw testing. The 5800X consumes notably more power per core in an 8-core load than both of those CPUs, while also clocking lower. So, for Intel, you're ever so generous, while for AMD you're consistently comparing against the worst bins. One might almost say that this is, oh, I don't know, a bit biased?
I've no idea what you are talking about. I'm comparing core and power normalized, so it doesn't matter which Zen SKU the comparisons are done with. The 5950X with one CCD will perform pretty similarly to the 5800X at the same wattages, no? So your criticism is completely unwarranted.

And yes, I've tested a 12900K with only 6 GC cores active at 65 W; it scored way more than the 12400 does, so it's pretty apparent the 12400 is a horrible bin. I think I got a 14k score, but again, I don't remember off the top of my head.
ValantarThere is something rather iffy going on with those Igor's Lab benchmarks - at least in terms of AMD power consumption. He reports his 5950X consuming 217W, which is ... well, crazy. That's the power consumption of a 5950X tuned to the extreme, with zero regard for efficiency, and it is certainly not reflective of anything resembling stock behaviour. If Igor didn't do that manually, then he should throw his AM4 testing motherboard out that window you're talking about and pick one that isn't garbage. A stock 5950X doesn't exceed 144W whatsoever - though if measuring at the EPS12V cable you'd also need to include VRM conversion losses in that sum - but that would be roughly equal across all platforms.

Edit: looking at Igor's test setup, the motherboard is configured with "PBO: Auto". In other words, it's running a motherboard-dependent auto OC. That is not a stock-v-stock comparison. And that is some pretty bad test methodology. This essentially ruins any and all efficiency comparisons based on these numbers, as the motherboard is clearly overriding all power limits and pushing the chips far beyond stock power and voltage.
But I'm not using Igor's Lab for efficiency comparisons. I'm using them to show you that a 12900K at 125 W matches/outperforms a 5900X even in heavy MT workloads. Which is the exact opposite of what TPU said, where a 12900K at 125 W is matched by the 12600K and loses to a 65 W 12700. If you still can't admit that the TPU results are absolutely, completely, hilariously flawed, I don't know what else to tell you, man...
#172
trparky
fevgatosIf you still can't admit that the TPU results are absolutely, completely, hilariously flawed, I don't know what else to tell you, man...
And if you think that, I say we get the resident TechPowerUp benchmark king in this thread to give his side of the story. Hey @W1zzard, let's see you give some input on this.
#173
fevgatos
trparkyAnd if you think that, I say we get the resident TechPowerUp benchmark king in this thread to give his side of the story. Hey @W1zzard, let's see you give some input on this.
Go ahead, I hope he replies. I guarantee you 100% the benchmarks are flawed. Could be a BIOS thing or something else, but it's most definitely, without a shadow of a doubt, flawed. I'm not the only one saying it; there is a thread on Tom's Hardware also making fun of those benchmarks, and even in the discussion of that very benchmark there were people doubting the results. That's because they just don't make any sense; the 12600K can't be more efficient than the 12900K at the same wattage, it's hilariously obvious. The flaw is so monumental: imagine if you clocked the 5600X to 125 W and it suddenly matched the 5950X. Well, that's what you are looking at with those numbers...

I've tested three 12900Ks on four motherboards at 125 W; all scored pretty much the same in CBR23, between 23,500 and 24,500. TPU scored 18k, lol.
#174
Valantar
fevgatosNormalized for either consumption or performance. Great for them that they ran as configured by Intel but that's not my argument at all
I mean, I should just start linking you to previous responses at this point, as everything you bring up is asked and answered four pages ago. "Normalizing" for either of those mainly serves to hide the uneven starting point introduced by said normalization, as each "normalized" operating point represents a different change from the stock behaviour of each chip. In the case of power limits being lowered, this inherently privileges the chip being allowed the biggest reduction from stock, due to how DVFS curves work.
fevgatosYou think a comparison normalized for performance is deeply flawed?
Yes, I really do. Outside of purely academic endeavors, who ever tests PC components normalized for performance? I mean, doing so isn't even possible, given how different architectures perform differently in different tasks. If you adjust a 12900K so it perfectly matches an 11900K in Cinebench, then it will still be faster in some workloads, and possibly slower in others. Normalizing for performance for complex components like this is literally impossible. Unless, that is, you tune to normalize for performance in every single workload, and then just record the power data? That sounds incredibly convoluted and time-consuming though.
fevgatosI mean, come on, you cannot possibly believe that. I don't believe you believe that.
Well, too bad. I have explained the issues with this to you at length multiple times now. If you're unable to grasp that these problems are significant, that's your problem, not mine.
fevgatosI said it before: normalized for consumption, 8 GC cores are around 20-25% more efficient; normalized for performance, the difference is over 100%. So yeah, the 5800X at 65 W can get up to 13-14k.
And, once again: at what points? "Normalized for consumption" - at what wattage? The only such comparison that would make sense would be a range, as any single test is nothing more than an unrepresentative snapshot. And any single workload, even across a range, is still only representative of itself. For such a comparison to have any hope whatsoever of being representative, you'd need to test a range of wattages in a range of workloads, and then graph out those differences. Anything less than that is worthless. Comparing the 12900K at 65W vs. the 5800X at 65W in CB23 tells us only that exact thing: how each performs at that specific power level in that specific workload. You cannot reliably extrapolate anything from that data - it's just not sufficient for that.
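If anyone actually wanted to settle this, the shape of that test is straightforward: sweep both chips across a grid of power limits and workloads, then graph the lot. A minimal sketch of the harness in Python - set_package_power_limit() and run_benchmark() are hypothetical placeholders standing in for platform tooling (e.g. RAPL limits on Intel, PPT limits on AMD) and an actual workload runner, not real library calls:

POWER_LIMITS_W = [35, 50, 65, 88, 125, 142, 190, 241]
WORKLOADS = ["cinebench_nt", "7zip_decompress", "x265_encode", "code_compile"]

def set_package_power_limit(watts):
    # Hypothetical placeholder: would invoke platform-specific tooling
    # (RAPL/MSRs on Intel, PPT limits on AMD). No-op in this sketch.
    pass

def run_benchmark(name):
    # Hypothetical placeholder: would launch the workload and return its score.
    return 0.0

efficiency = {}
for watts in POWER_LIMITS_W:
    set_package_power_limit(watts)
    for workload in WORKLOADS:
        # One cell of the grid: score per watt at this operating point.
        efficiency[(watts, workload)] = run_benchmark(workload) / watts

Only a grid like that - not a single snapshot - would support the kind of general efficiency claims being thrown around here.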

As for your "normalizing for performance": once again, you're just trying to use neutral and quasi-scientific wording to hide the fact that you really want to use a benchmark that's relatively friendly to ADL as the be-all, end-all representation of which of these CPUs is better, rather than actually wanting to gain knowledge about this.
fevgatosAgain, performance normalized, the difference will still be huge. You can put the 5800X at 50 W for all I care; 8 GC cores will probably match the performance at 30 W. I mean, 2 days left until I'm back and I can test it ;)
I'm starting to sound like a broken record here, but: ADL has an advantage at lower power limits in less instruction dense workloads due to its lower uncore power draw.
fevgatosOutside of that one application, Zen 3 is even more comedically bad. I've tested gaming performance (granted, only one game): 8 GC cores at 25 W (yes, power limited to 25) match a 5800X hitting 90+ watts in Far Cry 6. They both scored around 110 fps, if I remember correctly, at 720p ultra + RT.
And once again, pulling numbers out of nowhere as if this is even remotely believable. Also, 720p? Wtf? And how oddly, unexpectedly convenient that the one game you're testing in is once again a game that's uncharacteristically performant on ADL generally. Hmmmmmm. Almost as if there might be a pattern here?
fevgatosI've no idea what you are talking about. I'm comparing core and power normalized, so it doesn't matter which Zen SKU the comparisons are done with. The 5950X with one CCD will perform pretty similarly to the 5800X at the same wattages, no? So your criticism is completely unwarranted.
... no. Did you even look at the AT testing? The 5950X, running 8 cores active, on the same CCX (they control for that in testing), in the same workload, at the same power limit as the 5800X, clocks higher while consuming less power per core.

It would be really, really helpful if you at least tried to understand what is being said to you. The boost behaviours, binning and DVFS characteristics of these chips are not the same. This is what I was saying about your "arguments" about binning on the 12400: you're infinitely generous with giving Intel the benefit of the doubt, but you consistently pick worst case scenarios for AMD and show zero such generosity in that direction.
fevgatosAnd yes, I've tested a 12900K with only 6 GC cores active at 65 W; it scored way more than the 12400 does, so it's pretty apparent the 12400 is a horrible bin. I think I got a 14k score, but again, I don't remember off the top of my head.
And yet more unsourced numbers pulled out of thin air. This is starting to get tiring, you know.
fevgatosBut I'm not using Igor's Lab for efficiency comparisons.
Uhhhhh... what? This is what you said, in literally your previous post:
fevgatosYou can check Igor's Lab's review, which tests only CPU power; the 12400 is way more efficient than the 5600X.
Could you at least stop flat out lying? That would be nice, thanks.
fevgatosI'm using them to show you that a 12900K at 125 W matches/outperforms a 5900X even in heavy MT workloads. Which is the exact opposite of what TPU said, where a 12900K at 125 W is matched by the 12600K and loses to a 65 W 12700. If you still can't admit that the TPU results are absolutely, completely, hilariously flawed, I don't know what else to tell you, man...
I don't know that TPU's testing is flawed - but I have explicitly said that this might indeed be the case. Given the number of possible reasons for this, and my complete lack of access to data surrounding their testing other than what's published, I really can't know. It's absolutely possible that there's something wrong there.

However, you seem to fail to recognize that the Igor's Lab testing seems to be similarly flawed, only in the other direction. As I explained above, it's entirely possible to harm performance on AMD CPUs by giving them too much power, which drives up thermals, drives down clocks, increases leakage, and results in lower overall performance. Given that Igor's testing is with an auto OC applied and the power levels recorded are astronomical, this is very likely the case. So, if I agree to not look at TPU's results, will you agree to not look at Igor's Lab's results? 'Cause for this discussion, both seem to be equally invalid. (And no, you can't take the Igor's Lab Intel results and compare them to Zen3 results from elsewhere, as this introduces massive error potential into the data, as there's no chance of controlling for variables across the tests.)



Oh, and a bit of a side note here: you are constantly switching back and forth between talking about "running the 12900K at X watts" and "8 GC cores at X watts". Are your tests all willy-nilly like this, or are you consistently running with or without E-cores enabled? That represents a pretty significant difference, after all.
#175
ratirt
trparkyAnd if you think that, I say we get the resident TechPowerUp benchmark king in this thread to give his side of the story. Hey @W1zzard, let's see you give some input on this.
I'd really like to hear from @W1zzard about this whole "TPU results are absolutely hilariously flawed" thing.