Friday, January 26th 2024
More AMD Ryzen 9000 "Zen 5" Desktop Processor Details Emerge
AMD is looking to debut its Ryzen 9000 series "Granite Ridge" desktop processors based on the "Zen 5" microarchitecture sometime around May-June 2024, according to High Yield YT, a reliable source for AMD leaks. These processors will be built in the existing Socket AM5 package and will be compatible with all existing AMD 600 series chipset motherboards. It remains to be seen whether AMD debuts a new line of motherboard chipsets. Almost all Socket AM5 motherboards come with the USB BIOS Flashback feature, which means motherboards from even the earliest production batches still in the retail channel should be able to easily support the new processors.
AMD is giving its next-gen desktop processors the Ryzen 9000 series processor model numbering, as it used the Ryzen 8000 series for its recently announced Socket AM5 desktop APUs based on the "Hawk Point" monolithic silicon. "Granite Ridge" will be a chiplet-based processor, much like the Ryzen 7000 series "Raphael." In fact, it will even retain the same 6 nm client I/O die (cIOD) as "Raphael," with some possible revisions made to increase its native DDR5 memory frequency (up from the current DDR5-5200) and improve its memory overclocking capabilities. It's being reported that DDR5-6400 could be the new "sweetspot" memory speed for these processors, up from the current DDR5-6000.

The "Granite Ridge" processor will feature one or two "Eldora" CPU complex dies (CCDs). Each CCD contains eight "Zen 5" CPU cores (aka "Nirvana" cores), each with 1 MB of L2 cache, and a yet-undisclosed amount of on-die L3 cache. The "Zen 5" CCD will be built on the TSMC N4 (4 nm EUV) foundry node, the same node on which the company builds its "Hawk Point" monolithic silicon.
The "Zen 5" CPU core is expected by AMD to achieve a 10-15 percent IPC uplift over "Zen 4," which should put its gaming performance roughly comparable to those of Ryzen 7000X3D series processors, but without the 3D Vertical Cache, yielding higher headroom for clock speeds and overclocking. High Yield YT believes that a May-June launch of Ryzen 9000 "Granite Ridge" could give AMD free reign over the DIY gaming desktop market until Intel comes around to launch its next-generation Core "Arrow Lake-S" desktop processor in the Socket LGA1851 package, some time in September-October 2024, setting the stage for Ryzen 9000X3D processors by CES (January 2025).
It was recently reported that "Zen 5" processors are already in mass production, although this could refer to the "Eldora" CCD, which makes its way not just to the "Granite Ridge" desktop processors, but also to EPYC "Turin" server processors.
Sources:
High Yield YT (Twitter), HotHardware
85 Comments on More AMD Ryzen 9000 "Zen 5" Desktop Processor Details Emerge
Plenty of people there were getting their DDR5 to 7200 or 7800 but with terrible timings and seeing basically no improvement just like I said.
The only people who saw nice improvements had to push their timings HARD and had tremendous difficulty with stability no matter how much they were pumping volts into the RAM.
To me it's quite telling that your original comment mentioned a rather high DDR5 speed (7800) with what are VERY tight timings (C36-38) for AMD at those speeds as 'evidence' for a DDR5 bandwidth limitation.
I suggest you look at benches with much more relaxed timings (like CL48-50, plus the crappy secondary and tertiary timings those clocks typically require) at those high clocks. Whatever performance advantages you're seeing will disappear, which shouldn't happen if there was a bandwidth limitation.
Timings affect latency, not bandwidth, after all.
This is true with Intel too, BTW. You can crank the DDR5 clocks as high as you like for more bandwidth, and your AIDA or y-cruncher benches will look pretty sweet, but if you don't get the latency down too, it won't much matter for nearly all real-world apps.
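For anyone wanting to sanity-check the latency side of this argument, the standard first-word latency calculation is simple enough to do by hand; the kit configurations below are illustrative examples, not the exact kits discussed in this thread:

```python
# First-word CAS latency in nanoseconds: CL is counted in memory-clock cycles,
# and the memory clock is half the DDR data rate, so
#   latency_ns = CL * 2000 / data_rate (MT/s)
# The example kits are illustrative only.

kits = [
    ("DDR5-6000 CL30", 6000, 30),
    ("DDR5-7800 CL36", 7800, 36),
    ("DDR5-7800 CL48", 7800, 48),
]

for name, data_rate, cl in kits:
    print(f"{name}: ~{cl * 2000 / data_rate:.1f} ns first-word latency")
```

With loose timings, the higher data rate ends up with worse absolute latency than a slower kit with tight timings, which is the point being made about relaxed CL48-50 kits above.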
In order to properly compare for the sake of testing bandwidth, you need latency to be the same or roughly equal, which requires pushing timings and volts to overcome the previously mentioned penalties at high frequencies (2:1 mode).
What you're trying to pass off as proof of the opposite of what I'm saying actually proves my point. Latency and IF clocks are worse when attempting 2:1 mode, so why would pushing volts and timings at those higher frequencies provide any benefit, unless dual-CCD chips and higher bandwidth DO have benefits?
*Also, there are absolutely timings that impact bandwidth (secondary and tertiary ones like SCL, for example).
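To separate the two quantities being argued about here: peak theoretical bandwidth is set purely by data rate and bus width, while sustained bandwidth (and latency) is where the secondary/tertiary timings come in. A minimal sketch of the peak figure, with dual-channel assumed:

```python
# Peak theoretical DRAM bandwidth: each 64-bit DDR5 channel moves 8 bytes per
# transfer, so GB/s = data rate (MT/s) * 8 * channels / 1000.
# Sustained bandwidth is lower and does depend on secondary/tertiary timings
# (e.g. SCL), as noted above; this only shows the ceiling.

def peak_bandwidth_gbs(data_rate_mts: int, channels: int = 2) -> float:
    return data_rate_mts * 8 * channels / 1000

for rate in (5200, 6000, 6400, 7800):
    print(f"DDR5-{rate}, dual channel: {peak_bandwidth_gbs(rate):.1f} GB/s peak")
```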
I'll personally skip every CPU upgrade until 2027, I suppose, but I like where AMD CPUs are going. The only thing missing is doubling the RAM channels so you could make more use of iGPUs.
I propose an alternative to raising the features (and cost) of the mainstream platforms: lower the entry point for the "HEDT" platforms instead. I've long argued that the upper-tier 8+ core CPUs (i9-14900K, Ryzen 7700X-7950X) should be moved (or overlap) to the Xeon-W/Threadripper platforms respectively, as both these platforms lack CPUs with high core speeds (like the good old Sandy Bridge-E, Haswell-E, Skylake-X and early Threadrippers did). Getting both high core speeds and the I/O and memory bandwidth would be an ideal offering for many "workstation" buyers like developers and even content creators, who don't really need 32 "slow" cores. I'd buy into such a platform in a heartbeat. :)
Like I said, it defeats the purpose of the APU, which is basically a cost-saving measure for users who only need "base level" graphics performance. This saves in terms of the cost of dedicated memory, PCB, extra cooler, etc. (Not to mention system integrators love it, as it saves time for them.) If, for instance, they added 4 GB of GDDR5/6 to the CPU die, it would certainly help, but what about those who need just a little more? Before you know it, you have three versions of every CPU model. This also adds more complexity, more things that can and will eventually fail, and potentially leads to the user having to replace it sooner.
Imagine a typical desktop PC buyer (not the most hardcore enthusiast): when they buy a decent computer, they can expect ~5-8 good years out of it, more if they're lucky and the PC is not under heavy load all the time, well cooled and not OCed, perhaps even more with some upgrades. During this lifecycle we should expect a storage upgrade, and perhaps a memory upgrade. But we should also expect a GPU upgrade, as new codec support is crucial (even just for YouTube), and GPU acceleration is hugely beneficial for most "heavy" workloads, even non-gaming ones, like photo editing, video editing, CAD, 3D modeling, etc. And I'm still just assuming a "casual" user here. So the user should factor in a "mid-cycle" GPU upgrade anyway, even if it's a new "low-end" GPU, to get the most out of the computer. The wonderful thing about dedicated graphics cards, but often overlooked, is modularity; if your needs change, it breaks, etc., you can just replace it. It also makes troubleshooting easier. The more crap that's integrated into the CPU and motherboard, the more things can break.
The APU alone is 250 USD. This money buys you a fully built used PC of similar gaming performance. For what you'd spend on a motherboard, RAM etc., you'll get even more performance. So anyone who really cares about their budget doesn't buy APUs, unless there's some mining hysteria going on and dGPUs cost like a couple of Boeings. FYI: the RX 580 outperforms any existing and planned APU and it's <60 USD on most marketplaces. It also makes buying 32 GB of RAM unnecessary, paying off about half of its price right away.
Target audience also consists of people who absolutely don't care about the price. They want the most performance possible to fit into their mini purse. With everything so simplified, it's not so feasible for such people.
If you're talking about people who build themselves or get theirs custom built, then there isn't any significant savings to APUs.
Comparing a new PC (of any segment) to the used market isn't completely fair. New hardware usually gets 3+ years of warranty, where used stuff does not. Additionally, GPUs especially have been "misused" for mining these days, and I'd advise staying clear of that. Depending on how old the hardware you're talking about is, there might be challenges with drivers too, for the current or next-gen Windows. And lastly, any included storage should be discarded and replaced with new SSDs and/or HDDs. I've had both great successes and failures with used hardware, but too often the pricing is far too high for the risk you're taking.
But if you're going to talk about great value on the used market, and want a reliable and expandable machine at a low price, look at used proper workstations from Dell, HP and Lenovo that are 3 years old. Those generally have overkill and high rated PSUs, have capable (but noisy) cooling, and space for plenty of PCIe cards, disks and SSDs. (Unlike the "base" office PCs, which have a minimal PSU and often little cooling at all.) Throw in a new GPU and you're good to go for many years. But be aware that such systems are quirky to work with, and may not be compatible with normal fans etc. (Just installing an SSD in a Dell Precision is quite an adventure.)
My concern isn't primarily DoA, but stability over time. And what people regard as stable is highly subjective. May I also ask what is your standard for a GPU passing validation? My standard is that a GPU (or any hardware) should handle your use case for at least 3 months without a single crash due to hardware issues. (It doesn't mean I stress test for 3 months though…) Safety wise, sure if you do a proper DBAN it should be safe to use.
And I could agree if you find something less than 2 years old, but that's rather rare. Getting SMART errors on devices >3 years old is fairly common, and makes them useless. Especially SSDs seem to wear out at 2-3 years of daily use; I've seen a lot of them fail. Anything older than Skylake (2015) can be hit or miss with compatibility for later OSes, but generally speaking, anything workstation/server grade has very long driver support. Linux will generally work fine.
Part of my reasoning is that something very old might work right now, but will not have support for as long as a brand-new product, so this should be factored into the value proposition of any used hardware, along with the lack of warranty. (e.g. Zen 5 will offer more support going forward)
I've many times considered buying "old" computer parts, and if you know what to look for you can find quite good deals. Local sources are usually the best, but you can even find gold on eBay, even though shipping and VAT would challenge the value proposition for some of us. Especially workstation parts can be great deals, better than the usual i7s. Just ~1.5 years ago I was looking at getting 2-3 sets of workstation boards, CPUs and RAM at a great deal, I believe it was Cascade Lake or something similar, so very recent and feature-rich. I didn't buy it because I was too busy. I also almost pulled the trigger on a box of 3x Intel X550 (10G NIC) NIB for ~$80 a piece, which would be amazing for my "home lab", but I'm just too busy with my day job.
What's perhaps more interesting is what kind of hardware would offer decent value compared to a brand-new Zen 4 or Raptor Lake. For all-round gaming and desktop use, you will get pretty far with anything from the Skylake family with boost >4 GHz; even if you pair it with an RTX 4070 or RX 7800 XT, you'll not be losing too many FPS in most titles, and if the old parts free up money for a higher-tier GPU it might be worth it. And I don't expect this to change a lot when Zen 5 arrives; for most realistic use cases, there isn't a tremendous gain beyond "Skylake" performance in gaming (except for edge cases). But the bigger question remains: risk and time. For clarity, I was only talking about workstation computers, not baseline office computers or home computers from retail stores; both of those generally have underpowered, low-quality PSUs and cooling.
Take for instance the Dell Optiplex, especially the slightly smaller ones; horrible designs. Even if you put a graphics card in there, there is no PSU headroom. There is (usually) no case cooling, only the stock CPU cooler recycling case air. Even when specced with the highest 65W i7, those are utterly useless for development work, or any kind of sustained load. I've seen them throttle like crazy or shut down just from a ~5 min compile job, and those were fairly new at the time. Please be serious. Let's be civil and have a constructive and interesting discussion. ;)
If you read carefully, I'm talking about the big PC manufacturers who build systems by the hundreds of thousands. For them, avoiding an add-in card (or two), cabling, and cooling means fewer work hours and fewer potential points of failure. Every single mistake that has to be manually corrected costs them "a lot". It's completely different for us enthusiasts building computers, or even ordering a custom-built machine; these are not mass produced on an assembly line.
A week before New Year 2024, one of the board members asked me what to do with their virus situation, but I'm not a virus expert... Ended up finding out all these GPUs are still working fine. They are statistically as reliable as HDDs. Only the most terrible SSDs fail after a couple of years.

Data removal on SSDs is as monkey as it gets: just format it 10 times and call it a day. No one will recover this data. HDDs... well, it will take a long while, but it's also doable. There's still a very insignificant chance of catching a spiked HDD; rarely do we see hackers and other suspicious people selling their hardware instead of destroying it.

And it will also be a miss in terms of $ per FPS (and especially any other performance metric if you ain't a gamer). The most feasible 2nd-hand offerings are way newer. As of today, it's low and mid-tier systems from 2019 to 2021 (Ryzen 3600/5500/5600, i3-10105F, i3-12100F, i5-10400F). Driver support isn't a problem for such PCs. GPU-wise, it's the GTX 1600 series (4 years newer than the GTX 900 series, which are still supported), the RTX 3000 series (effectively almost next-gen) and the RX 6000 series (same story but AMD).

It's valid up until it crashes for hardware reasons, no matter the use case, if clocks/voltages/wattages/fan curves are out-of-the-box and no overheating is going on. If it can't run one specific task and it's not a software issue, this GPU is effectively broken. Even if I or another end user never uses such software, this GPU is already marked as a failing one and it'll await replacement when possible. A broken cooling system doesn't count; it's the easiest thing to fix, especially if the heatsink and wiring are intact. Of course, in case this failure doesn't bother the user it can be marked "OK", but y'know...

These are usually overpriced on my 2nd-hand market, so I didn't bother checking.

Fair enough, I guess. Won't argue with that because I frankly have no idea about workstation PCs.

Stop insulting me. I read it carefully. I added the line of "we didn't rip our customers off enough last decade" precisely for that reason. I know how easy this task is and what corners they cut. This is an efficient business model, but it's still extensively greedy (if we account for the fact that these no-dGPU PCs ain't much cheaper than yes-dGPU ones).
I was actually updating my GPU support overview just yesterday (as I was figuring out which combination of GPUs, systems and OS versions to have for test PCs for software validation), and like you said in your previous post, both Turing and RDNA have great support for newer platforms. Nvidia actually still has mainline driver support for Maxwell (10 years), and this is also the earliest generation that works decently with DX 12 and Vulkan. Turing, which is still being made, is going to be a "long term support" family, as it is still used in the Quadro Txx series, as well as the relatively recent GTX 1630 (2022), and can even work as a replacement for users who still want to use old Win 7. (Win 8 or Win 7 32-bit is Pascal and older only.)
Despite ever-increasing demands, Pascal and newer hold up very well in non-RT workloads, and I know GTX 1080 owners who have gotten a lot of mileage out of them. At least compared to what I experienced in the late 90s and early 2000s, pretty much anything up to Kepler felt "obsolete" within 3-4 years, sometimes even sooner.
But in terms of value, AM5 would be much greater if there were a good selection of solid motherboards at a decent price. This, along with memory costs, is probably the reason why Zen 3 is still a top seller, and, I would argue, a great value deal if you can find a good CPU/motherboard combo at a low price. This would probably be a more compelling choice than a used system. It will be interesting to see if Zen 3 becomes a "problem" for Zen 5 due to its excellent value. There are no insults, nor anything remotely condescending, in my posts, so please don't attribute things I don't say or mean. And I don't mind people disagreeing or a heated discussion either, just as long as it's civil. I'm very much interested in hearing other people's opinions when they have something to contribute. :)
I'm just explaining why the business works the way it does. The big PC manufacturers have a very high influence on Intel and AMD. They want to cut costs, which is why they push integrated graphics, and they also want frequent refreshes and gimmicks to sell new hardware. That's why we see, e.g., Intel adding ridiculous numbers of E-cores to their designs, and completely gimmicky "clock speeds" on their low-TDP models. For non-enthusiasts, "numbers" sell: "cores", "GHz", etc.
But both you and I see through this.
But it's probably the POST time you're complaining about (which is technically pre-boot), which happens when you overclock your memory and the BIOS struggles to get your memory controller to behave (retraining memory). Like 67Elco alluded to, removing the memory overclock resolves the issue. This is a symptom that you are operating outside the specs of the controller. Run at the speed profile for which it is designed, and the problem goes away. Intel's current memory controllers are certainly more tolerant, and AMD may improve their controllers somewhat, but this is generally becoming a larger and larger "problem". The solution is to run at the designed spec.