Thursday, August 4th 2022
![AMD](https://tpucdn.com/images/news/amd-v1721205152158.png)
Potential Ryzen 7000-series CPU Specs and Pricing Leak, Ryzen 9 7950X Expected to hit 5.7 GHz
It's pretty clear that we're getting very close to the launch of AMD's AM5 platform and the Ryzen 7000-series CPUs, with spec details and even pricing brackets turning up online. Wccftech has posted what the publication believes will be the lineup launching in just over a month's time, if rumours are to be believed. The base model is said to be the Ryzen 5 7600X, which the site claims will have a base clock of 4.7 GHz and a boost clock of 5.3 GHz. There's no change in core or thread count compared to the current Ryzen 5 5600X, but the L2 cache appears to have doubled, for a total of 38 MB of cache. This is followed by the Ryzen 7 7700X, which starts out a tad slower with a base clock of 4.5 GHz, but it has a slightly higher boost clock of 5.4 GHz. Here too the core and thread count remains unchanged, while the L2 cache gets a similar bump, for a total of 40 MB of cache. Both of these models are said to have a 105 W TDP.
The Ryzen 9 7900X is said to have a 4.7 GHz base clock and a 5.6 GHz boost clock, a 200 MHz jump over the Ryzen 7 7700X. This CPU has a total of 76 MB of cache. Finally, the Ryzen 9 7950X is said to have the same 4.5 GHz base clock as the Ryzen 7 7700X, but the highest boost clock of all the expected models at 5.7 GHz, along with a total of 80 MB of cache. These two SKUs are both said to have a 170 W TDP. Price-wise, from top to bottom, we might be looking at somewhere around US$700, US$600, US$300 and US$200, so it seems AMD has adjusted its pricing downwards by around $100 on the low end, with the Ryzen 7 part fitting the same price bracket as the Ryzen 7 5700X. The Ryzen 9 7900X seems to have had its price adjusted upwards slightly, while the Ryzen 9 7950X is expected to be priced lower than its predecessor. Take all of this with the right helping of scepticism for now, as things can still change before launch.
Source:
Wccftech
277 Comments on Potential Ryzen 7000-series CPU Specs and Pricing Leak, Ryzen 9 7950X Expected to hit 5.7 GHz
As for efficiency, I have linked to quite a few tests in which Zen3 is already either faster, more efficient, or both, when compared to ADL. I mean, just look at TPU's 12900K review. At stock, the 12900K loses to or ties with the 5950X in the following tests: Corona, Keyshot, V-Ray, UE4 game dev, Google Tesseract OCR, VMWare Workstation, 7-zip, AES & SHA3 encryption, H.264 & H.265 encoding. Now, we don't have power measurements for each of these tests, sadly. But we do know the stock power limits, as well as the per-core peak power of both CPUs. So, unless the 12900K has some kind of debilitating bottleneck that causes it to essentially sit idle, it is using slightly less power (in very light, low-threaded workloads), as much power (in workloads that strike that balance of a few threads, but not very heavy ones), or more power (in anything instruction dense, or anything lightweight above ~3 active cores) than the 5950X. Some of these - rendering, compression, encryption and encoding, at least - are relatively instruction-dense nT workloads, where the 12900K will be using more power than the 144 W-limited 5950X. Yet it still loses. So, that kind of disproves your "it's more efficient at everything", no?
Would a low-clocked 16c ADL chip have better efficiency than the 12900K in these tests? That depends on the test, how well it utilizes E cores, and what clocks that chip could sustain at your proposed power levels - including crucial details about the specific silicon implementation that render speculation on this utterly pointless. Still, it is highly unlikely that this would represent a massive, earth-shattering efficiency improvement. And you entirely missed the point that the GC cores in SR aren't the same as the GC cores in ADL, and due to the different implementations their performance will vary quite a bit. And, once again: you have absolutely no basis for claiming that a theoretical 16 P core CPU will be more efficient than the 5950X. None.
Heck, look at TPU's 12900K testing at various power limits. Sure, it shines in low-threaded workloads even with a 50 W limit, demonstrating fantastic efficiency in those tasks, and great ST performance. But in anything multi-threaded? In rendering tasks, at 125 W it barely beats the 5800X, despite having 50% more threads and 8 E cores to help pull those loads. The same goes for chemistry and physics simulation, AI upscaling, game dev, 7-zip decompression, and all three video encoding tests. It even loses to the 5800X in encryption workloads. Sure, the 5800X pulls a bit more power (138 W vs. 125 W max), but ... yeah. That amazing scaling you're talking about doesn't exist. ADL scales extremely well in light, low-threaded tasks, and otherwise scales fine in everything else. In MT/nT tests where it didn't already win by a ton, it loses a lot of performance as you reduce its power limits.
And I know it's wrong because I have the freaking CPU. At stock with a 125 W power limit it scores 24k+ in CB R23. Actually, you can even compare it with TechSpot's 12700 review: at 65 W it scores over 16k, while TPU has the 12900K at 18k at 125 W. With fewer cores, mind you. Obviously flawed numbers. Actually, Zen3 had lots of problems, some of them are fixed and some of them won't ever be. X570 specifically had some problems with SSD reads, USB disconnects, fTPM stuttering...
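For what it's worth, the points-per-watt gap those numbers imply is easy to compute; this quick sketch just divides the Cinebench R23 scores claimed in this thread (unverified forum claims, not measured data) by their power limits:

```python
# Points-per-watt from the CB R23 figures cited in this thread.
# These inputs are forum claims, not verified measurements.
claims = {
    "12900K @ 125 W (my runs)": (24000, 125),
    "12700 @ 65 W (TechSpot)": (16000, 65),
    "12900K @ 125 W (TPU)": (18000, 125),
}

for name, (score, watts) in claims.items():
    print(f"{name}: {score / watts:.0f} pts/W")
```

If the 16k-at-65 W figure holds up, that's roughly 246 pts/W against TPU's 144 pts/W for the 12900K, which is why the two sets of numbers look irreconcilable.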
As I have written about at length above, there is a strong argument to be made for Alder Lake, the implemented CPUs, being more efficient at lighter (less instruction dense), low threaded workloads than Zen3 CPUs, but - repeating myself a lot here - this is not due to an advantage in core efficiency, but due to lower uncore power draw. AMD's through-package Infinity Fabric gives their CPUs a higher base power draw regardless of the number of cores under load (though not at idle) than Intel's monolithic CPUs, meaning that despite having a less efficient core, they win in chip efficiency comparisons in these workloads because the chip is more than just cores.
I don't need any of TPU's data to disprove your statement that the GC core is more efficient than Zen3, because Anandtech's testing shows conclusively that it is the other way around, and that Zen3 scales extremely well at lower clocks (~<6.5W/core @3.775GHz for the 5950X; average ~2.6W (SPECint) to ~1.9W (SPECfp) @ 2.45GHz or higher for the EPYC 7763). Can you show me even a single GC core implementation that can demonstrate similarly low per-core power draws? Even in the same ballpark? Far too many variables in play here - differences in motherboard, BIOS revision, subsequent Intel microcode updates, and more. Until someone can deliver data of comparable quality that shows the review to be erroneous, I'll trust the review, thanks. You're very welcome to try and do so, but that'll require more than stating "my chip does X". "Lots of problems" is quite a stretch. fTPM stuttering is relatively rare, and fixed; USB disconnects were B550-only and were fixed long ago, and AFAIK that SSD read speed thing only applied to chipset-connected SSDs (i.e. not CPU-connected ones, as are the majority) and was also fixed.
It's kind of funny, really. Whenever someone brings some nuance to your simplistic arguments and conclusions, you always try to shift the goal posts to suit your liking. The 12900K is more efficient at 125W than the 5950X! No, it's the GC core that's more efficient! No, we can't do comparisons with existing benchmarks - but we can run our own tests(?). No, we can't trust per-core power draw numbers from seasoned reviewers, because look at this benchmark result I got! It's almost as if, oh, I don't know, you have a vested interest in a certain party coming out as conclusively better in this comparison? Seriously though: I understand that you spent a lot of money on your CPU. And it's a great CPU! It's not even a terrible power hog if tuned sensibly, or in lighter workloads. But ... you need to leave that desperate defensiveness behind. It is perfectly okay that the thing you have bought is not conclusively and unequivocally the best. If that's the standard you live by, either you'll go through life deluding yourself, or you'll be consistently sad, angry and disappointed - because the world doesn't work that way. ADL is great. Zen3 is great. ADL is slightly faster; Zen3 is slightly more efficient in heavy or highly threaded loads - generally. There are significant caveats and exceptions to both of those overall trends. Neither is a bad choice. Period. And it's okay for there to be multiple good choices out there - in fact, I'd say it's great! Your desperate need for your chosen brand to be the best is ... well, both leading you to make really bad conclusions in how you're looking at test and performance data, and probably not making you feel very good either. I would really recommend you take a step back and reconsider how you're looking at these things.
Personally, I tested three 12900Ks on four different mobos and they all came back with the same results, 23,500 to 24,500 at 125 W. Nowhere near TPU's numbers.
I never changed my argument. I've said repeatedly that the E cores are inefficient at most wattages you would run a desktop CPU at, and that Golden Cove cores are vastly more efficient than Zen3 cores at the same wattage. That's my argument and it has never changed. I don't care if ADL is the best; if it wasn't, I would have bought something else. Anyway, there is a thread for people posting their numbers at the same wattage. I'll be back in 3 days and I'll post some numbers. If Zen3 even gets close to 8 GC cores in efficiency, I'll throw my computer out the window.
This is becoming too much I think. I'll stick to my 5900X and 3070 Ti for the time being.
Oh, btw, have you heard of this new invention called a link? I have linked to literally every single source I've referred to in this discussion. It would be nice if you could do others the same courtesy as is being done to you. It's not my job to corroborate your statements. Post your sources. But this is the thing: as you keep reformulating this core argument, you keep changing what you are arguing, because this core argument does not make logical sense. How? Simple: the only interpretation of "Golden Cove cores are more efficient than Zen3 cores at the same wattage" that makes logical sense is if you're looking at per-core power, not package power. Yet the only power numbers you care about - consistently, regardless of what other data is provided - are package power. Package power includes other significant power draws than the cores, and crucially these draws differ between chips and architectures, introducing a variable that ruins your data - you literally can't get per-core power from package power, as there's other stuff mixed in there.
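To make that concrete, here's a toy model (all wattage figures below are made up purely for illustration) of why package power can't be used to compare core efficiency across architectures with different uncore draw:

```python
# Toy model: two chips with IDENTICAL cores but different uncore power.
# Package-power comparisons make chip B look less efficient even though
# its cores are exactly as efficient as chip A's. Illustrative numbers only.
def package_power(cores_loaded, watts_per_core, uncore_watts):
    return cores_loaded * watts_per_core + uncore_watts

chip_a = package_power(8, 6.0, 5.0)   # monolithic-style: low uncore
chip_b = package_power(8, 6.0, 18.0)  # chiplet-style: fabric overhead

print(chip_a, chip_b)  # 53.0 vs 66.0 at the package level
```

Same cores, yet a ~25% package-level gap from uncore alone - and the gap shrinks as the number of loaded cores grows, which is exactly why the comparison flips between light and heavy workloads.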
There are two possible logically congruent variants of your argument:
- That the GC core is more efficient than the Zen3 core, on a core-for-core, core-power-only basis, at the same wattage
- That ADL as implemented, as a full chip, including cores and uncore, is more efficient than Zen3 at the same wattage
The first claim has been quite conclusively disproven by AnandTech's per-core power testing. The GC core in instruction dense workloads can scale to insane power levels, and even in lighter workloads needs notably more power than the highest power a Zen3 core ever reaches in order to eke out a small win.
The second point is crucially more complex, as the answer differs wildly across power levels as the effects of uncore power vs. core power scale, and of course carries with it the problem of an uneven playing field, where every ADL chip is operating at a significant downclock from its stock configuration, which privileges it over the more frugal-at-stock Zen3 CPUs. And, as has been discussed at massive length above: there is no conclusive, simple answer to this. ADL does indeed have an advantage at relatively light, low-threaded workloads. It does not if the workload is instruction dense, or if the number of fully loaded cores exceeds ~4. Though again, due to how different workloads execute differently on different architectures, even these are oversimplified generalizations. The real answer: it's damn complicated, and they each have their strengths and weaknesses.

It's been a while since I've seen someone contradict themselves so explicitly within the span of three sentences. Well done, I guess? "I don't care!/If I'm wrong I'll throw my PC out the window!" aren't ... quite in line with each other, now, are they?

I'm well aware of that, but I don't see how that makes it logical for stores to not bake sales tax into their listed prices. A store is generally only located in one state, right? I can't imagine there are many stores straddling a state border. So they should be quite capable of pricing things with what people will actually be paying, as is done literally everywhere else. And for online retailers it would be as simple as having a pop-up asking which state you're in either when you first enter the site, or when you first add something to your cart.
www.techspot.com/review/2391-intel-core-i7-12700
Also, here is a 12900K at 125 W from Igor's Lab.
www.igorslab.de/en/intel-macht-ernst-core-i9-12900kf-core-i7-12700k-und-core-i5-12600-im-workstation-einsatz-und-eine-niederlage-fuer-amd-2/6/
I'm talking about package power. AnandTech hasn't disproven anything; even if they are just checking core power instead of package power, they haven't done so normalized, have they? I'm just trying to tell you I'm pretty confident it is the case. And I'm confident because I tested, repeatedly. I've seen a tuned-to-the-max 5800X score 16k in CB R23 at 150 W, while 8 GC cores need... 65 W to match that. Yes, CB is a good scenario for Alder Lake, but the difference is ridiculously big.
Both are 6/12, both consume about the same watts, and both score similar numbers. It appears the 5600X is slightly more efficient than the 12400, though there's practically no difference.
So does the 12700/12900 have such better binning that it's twice as efficient as the Ryzens?
It appears to me that the GC cores have similar efficiency to Zen3, they're just clocked way higher in order to be faster in apps/benchmarks.
www.techpowerup.com/review/intel-core-i5-12400f/20.html
Also, the TPU review measures power from the wall, which is not really indicative. When you're testing such low-wattage parts, a 5 or 10 W discrepancy from the motherboard makes a huge difference. TPU uses the Maximus Hero for the 12400; just the RGB and the actual screen on that motherboard throw the numbers off. You can check the Igor's Lab review, which tests CPU power only: there the 12400 is way more efficient than the 5600X.
It's up to 65% (that's HUGE) more efficient in lightly threaded workloads and around 20-25% more efficient in multithreaded workloads.
www.igorslab.de/en/intel-core-i5-12400-in-the-workstation-test-how-does-real-work-without-taped-e-cores-part-2/9/
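The motherboard-overhead point above is easy to quantify. This sketch (hypothetical figures, not taken from either review) shows how a fixed platform draw skews at-the-wall efficiency for a low-wattage CPU:

```python
# How fixed board overhead (VRM losses, RGB, onboard screen, etc.) distorts
# at-the-wall efficiency numbers for low-power CPUs. Hypothetical figures.
def wall_efficiency(score, cpu_watts, board_overhead_watts):
    return score / (cpu_watts + board_overhead_watts)

score, cpu_watts = 12000, 65          # hypothetical benchmark result at 65 W
lean = wall_efficiency(score, cpu_watts, 5)     # modest board
flashy = wall_efficiency(score, cpu_watts, 20)  # feature-laden board

print(f"{lean:.0f} vs {flashy:.0f} pts/W")
```

Same CPU, but the flashier board makes it look roughly 18% less efficient - an error margin that would be negligible for a 240 W part but is decisive at 65 W.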
Intel's stock settings push the 12900K way, way past its efficiency sweet spot. They are trying to make it compete with the 5950X in MT performance, which it has no business doing, IMO. In all fairness, AMD's stock settings - as shown by the Zen4 leaks - will also be out of the park. The only reason they didn't push the wattage with Zen3 is that they didn't need to: Intel wasn't competing in MT performance with Comet Lake, so AMD decided to play the efficiency card. Now that Intel is pushing them, AMD is also raising the stock wattage.
That demonstrates that, as configured from the factory, at very similar clock speeds, Zen3 is more efficient than ADL, as ADL beats it by ~5-16% while consuming notably more than 5-16% more power. Lowering the power limit will not change ADL's efficiency in this test, because the CPU is nowhere near hitting any reasonable power limit - even a 50 W limit would likely deliver roughly the same performance in SPEC, and it will boost just as opportunistically within this limit unless also frequency limited. You're so confident that you're heavily emotionally invested in the outcome, yes, I see that. Doesn't change what I said above. But, even assuming that the numbers you're constantly citing out of nowhere are accurate, to repeat myself for the n'th time: this comparison is deeply, deeply flawed. Heck, this is far more problematic than the purportedly "fundamentally flawed" testing you've been complaining about. Why? 'Cause you're comparing one clocked-to-the-rafters, pushed-to-extremes tuning of one chip with a heavily power-limited, and thus also clock-limited, tuning of another. How does a 5800X perform at 65 W? How do each of them perform across a range of sensible wattages? How do they perform outside of the one application that you love to cite because it can be run in one click and performs favorably on your platform of choice? There is something rather iffy going on with those Igor's Lab benchmarks - at least in terms of AMD power consumption. He reports his 5950X consuming 217 W, which is ... well, crazy. That's the power consumption of a 5950X tuned to the extreme, with zero regard for efficiency, and it is certainly not reflective of anything resembling stock behaviour. If Igor didn't do that manually, then he should throw his AM4 testing motherboard out that window you're talking about and pick one that isn't garbage.
A stock 5950X doesn't exceed 144 W at all - though if measuring at the EPS12V cable you'd also need to include VRM conversion losses in that sum, but those would be roughly equal across all platforms.
Edit: looking at Igor's test setup, the motherboard is configured with "PBO: Auto". In other words, it's running a motherboard-dependent auto OC. That is not a stock-v-stock comparison. And that is some pretty bad test methodology. This essentially ruins any and all efficiency comparisons based on these numbers, as the motherboard is clearly overriding all power limits and pushing the chips far beyond stock power and voltage.
It's also kind of telling that you're very set on being as generous as possible with Intel, repeating that the 12400 is "the worst possible bin" of ADL, etc, yet when coming to AMD, you consistently compare against the 5800X - by far the most power hungry bin of Zen3, by a massive margin. Remember, it has the same power limits as the 5900X and 5950X, with 50% and 100% more cores respectively, while clocking only a bit higher at base. Again: look at Anandtech's per-core power draw testing. The 5800X consumes notably more power per core in an 8-core load than both of those CPUs, while also clocking lower. So, for Intel, you're ever so generous, while for AMD you're consistently comparing against the worst bins. One might almost say that this is, oh, I don't know, a bit biased?
You're also wrong about your 12400/12900K binning statements - they're not the same die, so they're not comparable bins at all. They're different silicon implementations of the same architecture, and each represents a bin of its implementation. It's entirely possible that the 12400 is a low grade bin of its silicon, but unless you've got detailed clock and power scaling data for several examples of both chips, you can't make comparisons like that.
There's also the complexities of boost algorithms and thermal/power protection systems to take into account, which can throw off simple "more power=faster" assumptions. For example, my 5800X (from testing I did way back when) runs faster in Cinebench when limited to 110W PPT than if let loose at 142W PPT. And significantly so - about 1000 points. Why? I can't say entirely for sure as I have neither the tools, skills nor time to pin-point this, but if I were to guess I'd say it's down to the higher power limit leading to higher boost power, meaning higher thermals, more leakage, and subsequent lower clocks to make up for this and protect the chip. Zen3 has a quite aggressive chip protection system that constantly monitors power, current, voltage, clock frequency, and more, and adjusts it all on the fly, meaning that tuning is complex and non-linear, and highly dependent on cooling.
Again, performance-normalized, the difference will still be huge. You can put the 5800X at 50 W for all I care; 8 GC cores will probably match that performance at 30 W. I mean, 2 days left; then I'm back and I can test it ;)
Outside of that one application, Zen3 is even more comically bad. I've tested gaming performance (granted, only one game): 8 GC cores at 25 W (yes, power limited to 25) match a 5800X hitting 90+ watts in Far Cry 6. They both scored around 110 fps, if I remember correctly, at 720p ultra + RT. I've no idea what you're talking about. I'm comparing core- and power-normalized, so it doesn't matter which Zen SKU the comparisons are done with. The 5950X with one CCD will perform pretty similarly to the 5800X at the same wattages, no? So your criticism is completely unwarranted.
And yes, I've tested a 12900K with only 6 GC cores active at 65 W; it scored way more than the 12400 does, so it's pretty apparent the 12400 is a horrible bin. I think I got a 14k score, but again, I don't remember off the top of my head. But I'm not using Igor's Lab for efficiency comparisons. I'm using them to show you that a 12900K at 125 W matches or outperforms a 5900X even in heavy MT workloads. Which is the exact opposite of what TPU said, where a 12900K at 125 W is matched by the 12600K and loses to a 65 W 12700. If you still can't admit that TPU's results are absolutely, completely, hilariously flawed, I don't know what else to tell you, man...
I've tested three 12900Ks on four motherboards at 125 W; all scored pretty much the same in CB R23, between 23,500 and 24,500. TPU scored 18k, lol.
As for your "normalizing for performance": once again, you're just trying to use neutral, quasi-scientific wording to hide the fact that you really want to use a benchmark that's relatively friendly to ADL as the be-all, end-all representation of which of these CPUs is better, rather than actually wanting to gain actual knowledge about this. I'm starting to sound like a broken record here, but: ADL has an advantage at lower power limits in less instruction-dense workloads due to its lower uncore power draw. And once again, pulling numbers out of nowhere as if this is even remotely believable. Also, 720p? Wtf? And how oddly, unexpectedly convenient that the one game you're testing in is once again a game that's uncharacteristically performant on ADL generally. Hmmmmmm. Almost as if there might be a pattern here? ... no. Did you even look at the AT testing? The 5950X, running 8 cores active, on the same CCX (they control for that in testing), in the same workload, at the same power limit as the 5800X, clocks higher while consuming less power per core.
It would be really, really helpful if you at least tried to understand what is being said to you. The boost behaviours, binning and DVFS characteristics of these chips are not the same. This is what I was saying about your "arguments" about binning on the 12400: you're infinitely generous with giving Intel the benefit of the doubt, but you consistently pick worst-case scenarios for AMD and show zero such generosity in that direction. And yet more unsourced numbers pulled out of thin air. This is starting to get tiring, you know. Uhhhhh... what? This is what you said, in literally your previous post: Could you at least stop flat out lying? That would be nice, thanks. I don't know that TPU's testing is flawed - but I have explicitly said that this might indeed be the case. Given the number of possible reasons for this, and my complete lack of access to data surrounding their testing other than what's published, I really can't know. It's absolutely possible that there's something wrong there.
However, you seem to fail to recognize that the Igor's Lab testing seems to be similarly flawed, only in the other direction. As I explained above, it's entirely possible to harm performance on AMD CPUs by giving them too much power, which drives up thermals, drives down clocks, increases leakage, and results in lower overall performance. Given that Igor's testing is with an auto OC applied and the power levels recorded are astronomical, this is very likely the case. So, if I agree to not look at TPU's results, will you agree to not look at Igor's Lab's results? 'Cause for this discussion, both seem to be equally invalid. (And no, you can't take the Igor's Lab Intel results and compare them to Zen3 results from elsewhere, as this introduces massive error potential into the data, as there's no chance of controlling for variables across the tests.)
Oh, and a bit of a side note here: you are constantly switching back and forth between talking about "running the 12900K at X watts" and "8 GC cores at X watts". Are your tests all willy-nilly like this, or are you consistently running with or without E-cores enabled? That represents a pretty significant difference, after all.