Monday, May 23rd 2022

AMD Unveils 5 nm Ryzen 7000 "Zen 4" Desktop Processors & AM5 DDR5 Platform

AMD today unveiled its next-generation Ryzen 7000 desktop processors, based on the Socket AM5 desktop platform. The new Ryzen 7000 series processors introduce the new "Zen 4" microarchitecture, with the company claiming a 15% single-threaded uplift over "Zen 3" (a 16-core/32-thread "Zen 4" processor prototype compared to a Ryzen 9 5950X). Other key architectural details AMD shared include a doubling of the per-core L2 cache to 1 MB, up from 512 KB on all previous versions of "Zen." The Ryzen 7000 desktop CPUs will boost to frequencies above 5.5 GHz. Based on the way AMD has worded its claims, the "+15%" number seems to include IPC gains, gains from higher clocks, and whatever the DDR4-to-DDR5 transition contributes. With "Zen 4," AMD is introducing a new instruction set for AI compute acceleration. The transition to the LGA1718 Socket AM5 allows AMD to use next-generation I/O, including DDR5 memory and PCI-Express Gen 5, both for the graphics card and for the M.2 NVMe slot attached to the CPU socket.

Much like the Ryzen 3000 "Matisse" and Ryzen 5000 "Vermeer," the Ryzen 7000 "Raphael" desktop processor is a multi-chip module with up to two "Zen 4" CCDs (CPU core dies) and one I/O controller die. The CCDs are built on the 5 nm silicon fabrication process, while the I/O die is built on the 6 nm process, a significant upgrade from previous-generation I/O dies built on 12 nm. The leap to 5 nm for the CCD enables AMD to cram up to 16 "Zen 4" cores per socket, all of which are "performance" cores. The "Zen 4" CPU core is larger, on account of the additional number-crunching machinery needed for the IPC increase and new instruction sets, as well as the larger per-core L2 cache. The client I/O die (cIOD) packs a pleasant surprise: an iGPU based on the RDNA2 graphics architecture! Now most Ryzen 7000 processors will pack integrated graphics, just like Intel's Core desktop processors.
The Socket AM5 platform is capable of up to 24 PCI-Express 5.0 lanes from the processor. 16 of these are meant for the PCI-Express graphics slots (PEG), while another four go toward an M.2 NVMe slot attached to the CPU. If you recall, Intel "Alder Lake" processors have 16 Gen 5 lanes toward PEG, but their CPU-attached NVMe slot runs at Gen 4. The processor features dual-channel DDR5 (four sub-channel) memory, identical to "Alder Lake," but with no DDR4 memory support. Unlike Intel, Socket AM5 retains cooler compatibility with AM4, so the cooler sitting on your Ryzen CPU right now will work just fine.

The platform also puts out up to 14 USB 20 Gbps ports, including Type-C. With onboard graphics now making it to most processor models, motherboards will feature up to four DisplayPort 2.0 or HDMI 2.1 ports. The company will also standardize the Wi-Fi 6E + Bluetooth WLAN solutions it co-developed with MediaTek, weaning motherboard designers off Intel-made WLAN solutions.

At its launch in Fall 2022, AMD's AM5 platform will come with three motherboard chipset options: the AMD X670 Extreme (X670E), the AMD X670, and the AMD B650. The X670 Extreme was probably made by re-purposing the new-generation 6 nm cIOD to work as a motherboard chipset, which means its 24 PCIe Gen 5 lanes go toward building an "all Gen 5" motherboard platform. The X670 (non-Extreme) is very likely a rebadged X570, which means you get up to 20 Gen 4 PCIe lanes from the chipset, while retaining PCIe Gen 5 PEG and CPU-attached NVMe connectivity. The B650 chipset is designed to offer Gen 4 PCIe PEG, Gen 5 CPU-attached NVMe, and likely Gen 3 connectivity from the chipset.
AMD is betting big on next-generation M.2 NVMe SSDs with PCI-Express Gen 5, and is gunning to be the first desktop platform with PCIe Gen 5-based M.2 slots. The company is said to be working with Phison to optimize the first round of Gen 5 SSDs for the platform.
All major motherboard vendors are ready with Socket AM5 motherboards. AMD showcased a handful, including the ASUS ROG Crosshair X670E Extreme, the ASRock X670E Taichi, MSI MEG X670E ACE, GIGABYTE X670E AORUS Xtreme, and the BIOSTAR X670E Valkyrie.

AMD is working to introduce several platform-level innovations, as it did with Smart Access Memory on the Radeon RX 6000 series, which builds on top of the PCI-SIG's PCIe Resizable BAR technology. The new AMD Smart Access Storage technology builds on Microsoft DirectStorage by adding AMD platform-awareness and optimization for AMD CPU and GPU architectures. DirectStorage enables direct transfers between a storage device and GPU memory, without the data having to route through the CPU cores. In terms of power delivery, "Zen 4" uses the same SVI3 voltage control interface that we saw introduced on the Ryzen Mobile 6000 Series. For desktops, this means the ability to address a higher number of VRM phases and to process voltage changes much faster than with SVI2 on AM4.
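For the curious, this is roughly what such a transfer looks like at the API level. Below is a minimal sketch using the public DirectStorage (dstorage.h) interface that Smart Access Storage builds on; the D3D12 device, destination buffer, fence, and file name are all placeholder assumptions, and AMD's platform-specific optimizations sit beneath this API, so they are not visible here:

```cpp
// Sketch of the DirectStorage flow that Smart Access Storage builds on,
// using the public dstorage.h API. Device/buffer/fence setup and error
// handling are omitted; the file name and sizes are placeholders.
#include <dstorage.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void LoadAssetDirectToGpu(ID3D12Device* device, ID3D12Resource* gpuBuffer,
                          ID3D12Fence* fence, UINT64 fenceValue, UINT32 sizeBytes)
{
    ComPtr<IDStorageFactory> factory;
    DStorageGetFactory(IID_PPV_ARGS(&factory));

    ComPtr<IDStorageFile> file;
    factory->OpenFile(L"asset.bin", IID_PPV_ARGS(&file));  // placeholder asset

    // A queue of file-sourced requests targeting this D3D12 device.
    DSTORAGE_QUEUE_DESC queueDesc{};
    queueDesc.SourceType = DSTORAGE_REQUEST_SOURCE_FILE;
    queueDesc.Capacity   = DSTORAGE_MAX_QUEUE_CAPACITY;
    queueDesc.Priority   = DSTORAGE_PRIORITY_NORMAL;
    queueDesc.Device     = device;
    ComPtr<IDStorageQueue> queue;
    factory->CreateQueue(&queueDesc, IID_PPV_ARGS(&queue));

    // The request names an NVMe-backed file as the source and a GPU buffer
    // as the destination; the payload never detours through the CPU cores.
    DSTORAGE_REQUEST request{};
    request.Options.SourceType      = DSTORAGE_REQUEST_SOURCE_FILE;
    request.Options.DestinationType = DSTORAGE_REQUEST_DESTINATION_BUFFER;
    request.Source.File.Source          = file.Get();
    request.Source.File.Offset          = 0;
    request.Source.File.Size            = sizeBytes;
    request.Destination.Buffer.Resource = gpuBuffer;
    request.Destination.Buffer.Offset   = 0;
    request.Destination.Buffer.Size     = sizeBytes;
    queue->EnqueueRequest(&request);

    queue->EnqueueSignal(fence, fenceValue);  // fence signals when the data lands
    queue->Submit();
}
```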
Taking a closer look at AMD's footnotes ("RPL-001"), we find that the "15% IPC gain" figure is measured using Cinebench, and compares a Ryzen 9 5950X (not the 5800X3D) on a Socket AM4 platform with DDR4-3600 CL16 memory to the new "Zen 4" platform running DDR5-6000 CL30 memory. If we go by the measurements from our Alder Lake DDR5 Performance Scaling article, this memory difference alone accounts for roughly five of those 15 percentage points.
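Treating the contributions as multiplicative (our assumption, for illustration), the split implied by that scaling data works out as roughly:

```latex
1.15 \;\approx\; \underbrace{1.05}_{\text{DDR4-3600} \,\rightarrow\, \text{DDR5-6000}} \times \underbrace{1.095}_{\text{architecture + clocks}}
```

In other words, about 9.5% would be left over for IPC and frequency gains combined.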

The footnotes also reference an "RPL-003" claim that isn't used anywhere in our pre-briefing slide deck, but is shown in the video presentation. There, we see a live demo comparing a "Ryzen 7000 Series" processor against Intel's Core i9-12900K "Alder Lake." AMD isn't disclosing the exact processor model, only that it's a 16-core part; if we follow the "Zen 3" naming, that would probably be the Ryzen 9 7950X flagship. The comparison runs the Blender rendering software, which loads all CPU cores. Here the Ryzen 7000 chip finishes the task in 204 seconds, versus 297 seconds for the i9-12900K, a huge 31% reduction in render time. Very impressive. It's worth mentioning that the memory configurations are slightly mismatched: Intel runs DDR5-6000 CL30, whereas the Ryzen is tested with DDR5-6400 CL32, so lower latency for Intel and higher clocks for the Ryzen. While ideally we'd like to see identical memory used, the differences due to the memory configuration should be very small.
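For reference, there are two ways of expressing that Blender result, and the distinction comes up in the comments below:

```latex
\frac{297 - 204}{297} \approx 0.31 \;\;\text{(the Ryzen chip needs 31\% less time)}, \qquad
\frac{297}{204} \approx 1.46 \;\;\text{(about 46\% more work per unit time)}
```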
AMD is targeting a Fall 2022 launch for the Ryzen 7000 "Zen 4" desktop processor family, which would put it sometime in September or October. The company is likely to detail the "Zen 4" microarchitecture and the Ryzen 7000 SKU list in the coming weeks.

Update 21:00 UTC: AMD has clarified that the 170 W PPT numbers seen are absolute maximum limits, not "typical" values like the 105 W rating on "Zen 3," which was often exceeded during heavy usage.

Update May 26th: AMD further clarified that the 170 W number is "TDP," not "PPT," which means that when the usual 1.35× factor is applied, actual power usage can go up to 230 W.
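For reference, applying AMD's usual socket power relation for recent Ryzen platforms:

```latex
\text{PPT} \;=\; 1.35 \times \text{TDP} \;=\; 1.35 \times 170\,\text{W} \;=\; 229.5\,\text{W} \;\approx\; 230\,\text{W}
```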

You can watch the whole presentation again on YouTube.

211 Comments on AMD Unveils 5 nm Ryzen 7000 "Zen 4" Desktop Processors & AM5 DDR5 Platform

#176
Blaeza
I shall be sticking with AM4 after an upgrade to a 5700X and a better GPU for many a year. You scallywags and your newfangled £500 DDR5 can test it out for me first; I'll be getting on board at AM4 and DDR4 prices.
#177
R0H1T
Valantar: but you're arguing that this power usage is sufficient to make it likely that these are Zen4c cores and not Zen4
No I'm arguing that if these are the big cores, zen4 (with more cache) or zen4D whatever they'd call them, then pairing them with an IGP makes no sense. Because these will probably replace the 16c/32t 5950x at the top end with a 24c/48t(?) chip given Genoa is already 96 cores? So I expect the flagship Ryzen MSDT to not have an IGP ~ that's it, I'm totally guesstimating in this regard & so are you btw o_O
Which gets wasted with an IGP, try again :rolleyes:

I'm guesstimating the bigger cache variants would likely ditch the IGP with massive (L3?) caches near the cores or on the IoD, maybe even an L4 cache.
While you were saying there'd be thinner (lighter?) cores with even less cache & higher density?

Now admittedly there could be 3 variants, with x3d versions also thrown in, but that'd be even more bizarre as far as I'm concerned!
#178
Valantar
R0H1T: No I'm arguing that if these are the big cores, zen4 (with more cache) or zen4D whatever they'd call them, then pairing them with an IGP makes no sense. Because these will probably replace the 16c/32t 5950x at the top end with a 24c/48t(?) chip given Genoa is already 96 cores? So I expect the flagship Ryzen MSDT to not have an IGP ~ that's it, I'm totally guesstimating in this regard & so are you btw o_O
Yes, we're all speculating, but you brought Zen4c into this as a counterargument to MSDT getting an iGPU, which ... sorry, I just don't see the connection. The iGPU is a low core count offering that they're adding because the massive node improvement for the IOD lets them implement it relatively cheaply, and it adds a hugely requested feature that will also make these chips much more palatable to the very lucrative, high volume OEM market. OEMs want the option for dGPU-less builds, and this will open up a whole new market for AMD: enterprise PCs/workstations that don't come with a dGPU in their base configuration. And of course consumers have also been saying how nice it would be for a barebones iGPU in their CPUs for troubleshooting or basic system configs ever since Ryzen came out. They're just responding to that. And I would be shocked if they spun out a new, smaller IOD without the iGPU for their high end MSDT chips, as that would be extremely expensive for a very limited market.
#179
MarsM4N
ratirt: Well I disagree with you, and I can say that extrapolating this (like you said) from a variety of applications which behave differently, of which there is such a vast number, is impossible as well.
For example, one CPU is better than another in one application, and the other CPU is better than the first in a different application. If IPC is a metric describing instructions per clock, which is a constant, the outcome should be the same for every app, but it is not. So performance does not always equal IPC.
For instance:
5800X and 5800X3D in games. Normally these are the same processors, but they behave differently in gaming and differently in office apps. So out of curiosity, am I talking here about IPC or performance? Somehow, you say that IPC has to be measured across a variety of benchmarks to be valid. I thought that was general performance of a CPU across the most used applications.
Could even be that they bring out "3D" versions for gamers & "non-3D" versions for prosumers. :confused: It would make a lot of sense.
Bomby569: I've read they will all include iGPUs; that seems like a waste of die area and money for the consumer. I would prefer two versions, with and without, like Intel is doing.
An iGPU doesn't increase the chip price that much. On the plus side, you always have a "backup GPU" on hand, and the resale value will be better (e.g. the chip can be used for HTPCs).

Would <3 if they one day include an "Automatic Toggle Mode," so that the APU runs desktop/video applications & the (completely shut down) GPU turns on only for gaming.
Now that would be a real killer feature. :rockout:
#180
Bomby569
MarsM4N: Would :love: if they one day include an "Automatic Toggle Mode," so that the APU runs desktop/video applications & the (completely shut down) GPU turns on only for gaming.
Now that would be a real killer feature. :rockout:
That is a source of problems on laptops. And I don't know if you'd save anything; GPUs are very frugal these days, and if you have a 0 dB mode it isn't much of a difference. Some cents in electricity.
#181
MarsM4N
Valantar: AFAIK W11 has this feature already - it has a toggle for setting a "high performance" and "[something, can't remember, don't think it's "low performance"]" GPU, which should allocate the render workload to the appropriate GPU depending on the task at hand. It might not correctly recognize and categorize all applications, but you can override that manually.
Ohh, really? Great! Now that's finally a feature worth upgrading to W11 for. :D
#182
Wirko
Valantar: But that's the thing: IPC in current complex CPU architectures is not a constant. It is a constant in very simple, in-order designs. In any out-of-order design with complex instruction queuing, branch prediction, instruction packing and more, the meaning of "IPC" shifts from "count the execution ports" to "across a representative selection of diverse workloads, how many instructions can the core process per clock." The literal, constant meaning of "instructions per clock" is irrelevant in any reasonably modern core design as a) they can process tons at once, but b) the question shifts from base hardware execution capabilities to queuing and instruction handling, keeping the core fed.

That is also why caches and even RAM have significant effects on anything meaningfully described as "IPC" in a modern system, as cache misses, RAM speed, and all other factors relevant to keeping the core fed play into the end result. That is why you need a representative selection of benchmarks to measure IPC: because there is no such thing as constant IPC in modern CPUs, nor is there any such thing as an application that linearly loads every execution port in a way that demonstrates IPC in an absolute sense.
That's a nice explanation. May I add that IPC is not constant even in very simple microprocessors. The Zilog Z80 and the Motorola 6800 do not have a constant execution time for all instructions. In the 80386, IPC also becomes unpredictable: 32-bit integer multiplication takes 9-38 clock cycles, depending on the actual data being multiplied, and many simpler instructions take two cycles.
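To make that point concrete, here is a toy C++ sketch (our illustration, not from the thread): both runs execute the exact same dependent-load loop, yet the effective IPC differs enormously depending on whether the loads follow a prefetcher-friendly sequential pattern or miss the cache on nearly every step:

```cpp
// Toy demonstration that "IPC" depends on keeping the core fed, not just on
// execution resources. Both runs execute the same instruction stream; only
// the memory-access pattern of the permutation differs.
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <utility>
#include <vector>

int main() {
    constexpr std::size_t N = std::size_t{1} << 24;  // ~16.7M entries (~128 MB), far beyond any L3
    std::vector<std::size_t> next(N);

    auto chase = [&](const char* label) {
        const auto t0 = std::chrono::steady_clock::now();
        std::size_t i = 0;
        for (std::size_t step = 0; step < N; ++step) i = next[i];  // serially dependent loads
        const auto t1 = std::chrono::steady_clock::now();
        const auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
        std::printf("%-10s %6lld ms (checksum %zu)\n", label, static_cast<long long>(ms), i);
    };

    // Run 1: sequential ring (0 -> 1 -> ... -> N-1 -> 0). The hardware
    // prefetcher hides memory latency, so the loop retires at a high rate.
    std::iota(next.begin(), next.end(), std::size_t{1});
    next[N - 1] = 0;
    chase("sequential");

    // Run 2: random single cycle built with Sattolo's algorithm. Every load
    // is now a likely cache/TLB miss, and the measured IPC of the very same
    // loop collapses.
    std::iota(next.begin(), next.end(), std::size_t{0});
    std::mt19937_64 rng(42);
    for (std::size_t i = N - 1; i > 0; --i) {
        std::uniform_int_distribution<std::size_t> pick(0, i - 1);
        std::swap(next[i], next[pick(rng)]);
    }
    chase("random");
    return 0;
}
```

On a typical desktop the second run is an order of magnitude slower despite retiring the same instructions, which is exactly why "IPC" only means something when averaged over representative workloads.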
#183
ModEl4
Although I really didn't want to get involved, here are my 2 cents:

The common use of "faster" is to denote that A has a higher speed than B, while the common use of "quicker" is to denote that A completes something in a shorter time than B.
If AMD had used "quicker" it would be fine.
Since they used "faster," they involve speed, which means they involve a rate (speed: the rate at which someone or something moves or operates, or is able to move or operate) [or, more strictly, in classical physics: speed is the ratio of the distance traveled by an object to the time required to travel that distance]. So you see, fast/faster essentially refers to a fraction, with time being just one of the two numbers, and it's always the denominator; being the denominator, it gives 45%, not 31%, hence the logic gap in AMD's wording.
Edit:
I just saw the last update regarding 170 W being the absolute max limit (probably 125 W typical vs 105 W); that's great news and a clear advantage vs Intel!
I'm more curious about the performance/W comparison between the 13400/7600(X?) 65 W parts, which will probably be much closer.
#185
MarsM4N
ModEl4: Although I really didn't want to get involved, here are my 2 cents:

The common use of "faster" is to denote that A has a higher speed than B, while the common use of "quicker" is to denote that A completes something in a shorter time than B.
If AMD had used "quicker" it would be fine.
Since they used "faster," they involve speed, which means they involve a rate (speed: the rate at which someone or something moves or operates, or is able to move or operate) [or, more strictly, in classical physics: speed is the ratio of the distance traveled by an object to the time required to travel that distance]. So you see, fast/faster essentially refers to a fraction, with time being just one of the two numbers, and it's always the denominator; being the denominator, it gives 45%, not 31%, hence the logic gap in AMD's wording.
Faster vs. quicker: terms drag racers are very familiar with. :cool:

The winner is not whoever is faster, but whoever gets from point A to point B the quickest.

ModEl4: I just saw the last update regarding 170 W being the absolute max limit (probably 125 W typical vs 105 W); that's great news and a clear advantage vs Intel!
I'm more curious about the performance/W comparison between the 13400/7600(X?) 65 W parts, which will probably be much closer.
Good point. :) Intel might still hold the productivity crown with its brute-force power boost, but for the average Joe, gaming performance & performance/W are more important.

Especially now with the rising energy costs.
#186
Mussels
Freshwater Moderator
TheLostSwede: Hopefully not, as if the stability is as bad as it was for both X370 and X570, AMD is going to get a lot of unhappy customers.
99% of people had no issues, with the exception of the funky PCI-E/USB 3-related reset bugs that took time to diagnose (but they also took some uncommon setups to trigger, like PCI-E risers and high-power-draw USB 3.x devices in the same system).
#187
TheLostSwede
News Editor
Mussels: 99% of people had no issues, with the exception of the funky PCI-E/USB 3-related reset bugs that took time to diagnose (but they also took some uncommon setups to trigger, like PCI-E risers and high-power-draw USB 3.x devices in the same system).
That's simply not true. There were a lot of UEFI/AGESA issues early on, on both platforms; some took longer to solve than others. Much of it was memory-related, but X570 had the boost issues, and a lot of people had USB 2.0 problems as well.

As I said, it mostly got resolved after a few months, but some things took quite a while for AMD to figure out.
#188
ratirt
Valantar: But that's the thing: IPC in current complex CPU architectures is not a constant. It is a constant in very simple, in-order designs. In any out-of-order design with complex instruction queuing, branch prediction, instruction packing and more, the meaning of "IPC" shifts from "count the execution ports" to "across a representative selection of diverse workloads, how many instructions can the core process per clock." The literal, constant meaning of "instructions per clock" is irrelevant in any reasonably modern core design as a) they can process tons at once, but b) the question shifts from base hardware execution capabilities to queuing and instruction handling, keeping the core fed.

That is also why caches and even RAM have significant effects on anything meaningfully described as "IPC" in a modern system, as cache misses, RAM speed, and all other factors relevant to keeping the core fed play into the end result. That is why you need a representative selection of benchmarks to measure IPC: because there is no such thing as constant IPC in modern CPUs, nor is there any such thing as an application that linearly loads every execution port in a way that demonstrates IPC in an absolute sense.
Of course it is not a constant. This is supposedly the outcome. The instructions per clock are not a constant either. How can you measure something that changes depending on the environment or the use case? Imagine the speed of light or the electric charge not being a constant. That is why all measurements are wrong no matter how you measure it, since you can't measure it correctly either way. So all are wrong, but at the same time they are some sort of indication. You can't say this is wrong and this is correct. IPC is some sort of enigma that people cling to like dark matter. What we were discussing earlier, and what you have been trying to explain, is not IPC but general performance across the board: a variety of benchmarks perceived as common, or a standard to showcase the workload and performance of a processor.
#190
ModEl4
PCWorld had a nice interview with Robert Hallock and Frank Azor; interesting questions from PCWorld and sensible answers from the AMD team. Good stuff.

#191
DeathtoGnomes
Valantar: That's an excellent illustration of exactly what we're discussing here: that specific contexts engender specific meanings of words, often in order to highlight specific differences. What this discussion misses, is that such specific meanings do not invalidate the more general meanings of the same words, especially not outside of those contexts - and that in this context, there is no directly applicable sub-meaning that differentiates "faster" from other terms. Which feeds back into your example: the quicker car is still faster in a general sense, after all - it reaches the finish line first; it finishes the task first. The specific definition you're referring to is meant to highlight that if what you mean by "faster" is "reaches the highest top speed", that might not be the same as "finishes the race first". This again illustrates a similar issue to what we're discussing here: that a general measure of a rate - such as mph / km/h - might not give an accurate representation of overall performance in a given workload - such as racing down a quarter mile stretch of road.
I got a headache reading this... :pimp:
So, by this logic, Intel being quicker than AMD in the past should not have lost the race in Blender or any other application that Intel lost. :p:D
Valantar: The only reason why this isn't done in such situations is that the percentage difference would be minuscule and thus meaningless in terms of effectively communicating the difference.
I believe that's how to use that; it's called the margin of error.
Valantar: you cannot argue that such contextual meanings are universal and overrule all other possible meanings. That isn't how language works.
I agree. Words really have no universal meaning; they have accepted meanings. Webster saw to that: the first dictionaries had a significant amount of slang definitions, later changed.

Using this logic, fanboi definitions say:
AMD good
Intel bad
Intel quicker
AMD faster

Simple! :twitch:
#192
Assimilator
@Valantar please, please, please stop wasting your time on feeding the trolls. For your own sanity, I beg you.
Valantar: It's not the same as ADL - ADL has 5.0 x16 PEG and 5.0 for the chipset (IIRC), but no 5.0 M.2. Not that 4 lanes less matters much, but ADL prioritizing 5.0 for GPUs rather than storage never made sense in the first place - it's doubtful any GPU in the next half decade will be meaningfully limited by PCIe 4.0 x16.
ADL is 16x 5.0 lanes for GPU + 4x 4.0 lanes dedicated to M.2 + an effective additional 4x 4.0 lanes that are dedicated to the chipset via the proprietary DMI link. So it's effectively 24 lanes of PCIe from the CPU, which matches Zen 4. Yes, I agree that in terms of *bandwidth* Zen 4 is far ahead, but lane count is more important than bandwidth IMO.
Valantar: ... is that any more likely than them buying a shitty $20 AM5 tower cooler? There are plenty of great AM4 coolers out there after all. Retaining compatibility reduces waste in a meaningful and impactful way. You don't fix people being stupid by forcing obsolescence onto fully functional parts.
I'm not arguing that allowing people to reuse existing coolers is a bad thing; I'm merely noting that there will inevitably be those who try to use coolers rated for 65 W on 170 W parts and blame AMD as a result. Intel's approach has its own downsides, although I imagine the cooler manufacturers like Intel a bit more.

I'm also a little sceptical of the claimed compatibility; surely the dimensions (particularly Z-height) of the new socket and chip are different enough to make a meaningful difference?
Valantar: X670E is literally marketed as "PCIe 5.0 everywhere", providing 24 more lanes of 5.0 (and, presumably, another 4 of which go to the CPU interconnect, leaving a total of 40). X670 most likely retains the 5.0 chipset uplink even if it runs its PCIe at 4.0 speeds. The main limitation to this is still the cost of physically implementing this amount of high speed IO on the motherboard, as that takes a lot of layers and possibly higher quality PCB materials.
I'm aware that HSIO is expensive, especially PCIe 5.0, which is why I was hoping the CPU and chipsets would be putting out more lanes. My main concern is that the lowest-end chipset will, as usual, get the lowest PCIe version and number of lanes, and manufacturers will thus not bother with USB4 or USB-C in SKUs using said chipset. Given that I've already seen a few boards and not even the highest-end of them has more than 2 Type-C ports on the rear panel, I'll withhold judgement until actual reviews drop.
Valantar: Several announced motherboards mention it explicitly, so no need to worry on that front. The only unknown is whether it's integrated into the CPU/chipset or not. Support is there.
Thanks, although I'd much prefer for it to be platform-native as opposed to relying on third-party controllers. Experience has shown that those are generally, to put it bluntly, shit (I'm looking at you, VIA). To be fair, ASMedia has been pretty good.
Valantar: On this I'd have to disagree with you. DS has a lot of potential - current software just can't make use of our blazing fast storage, and DS goes a long way towards fixing that issue. It just needs a) to be fully implemented, with GPU decompression support, and b) to be adopted by developers. The latter is pretty much a given for big name titles given that it's an Xbox platform feature though.
Sure it has potential, but I don't believe that it's been a game-changer (pardon the pun) for anything more than a handful of console titles. If it was so great I'd expect its adoption to be much higher in console land, which would push much higher adoption for PCs to allow ports, but I'm just not seeing it.
#193
DeathtoGnomes
Valantar: That is why you need a broad range of tests: because no single test can provide a generalizable representation of the per-clock performance of an architecture.
Let's make this broader, yet it still applies:
"That is why you need a broad range of tests: because no single test can provide an adequate representation of performance."
This is why reviews of, let's say, video cards use multiple games to compare performance. However, there are not a lot of different IPC tests to use (as I understand this conversation... :shadedshu:).
#194
CSG
Valantar: Performing a calculation on data in order to transform its unit is ... transforming the data. It is now different data, in a different format.
Why would one perform a transformation when computing a relative performance? Let us define performance as p=w/t, where w stands for work and t for time, and suppose that computers 1 and 2 perform the same task in times t1 and t2, respectively. Then, the ratio of their performances is p1/p2 = (w/t1) / (w/t2) = t2/t1.
#195
the54thvoid
Super Intoxicated Moderator
No more arguments or epic posts about semantics please. It's not fair to derail threads with such long and arduously off-topic posts.
#196
TheoneandonlyMrK
So this didn't get locked; yeah, 15% still, and upwards of that since it's an initial rumour. All good.
#197
ModEl4
TheoneandonlyMrK: So this didn't get locked; yeah, 15% still, and upwards of that since it's an initial rumour. All good.
From the PCWorld interview (although it's not 100% clear), it seems that the single-thread performance communicated includes IPC, and depending on which models you compare across the stack vs Zen 3, it's 15% and higher (ST performance).
Another speculation that comes to mind regarding the Blender score, based on the answer about multithreading performance: the uplift seems mainly clock-driven, with a possible SMT improvement on top.
I don't know what average all-core frequency a 5950X would hit in a similar Blender test, but the difference between the 7950X and 5950X is seemingly 1.45× times 1.05× ≈ 1.52×.
If the 5950X's SMT implementation has a 10% lower uplift than the 7950X's, and the 7950X was hitting 5.2 GHz on all cores with the AIO liquid cooler they used, then to produce the +52% uplift over the 5950X, the 5950X would have to be running at 5.2 GHz / 1.52 / 0.9 ≈ 3.8 GHz.
3.8 GHz seems a little low even for Blender; TPU members that have a 5950X would know more.
Edit: I forgot the IPC difference, so with 5% we would get around 4 GHz for the 5950X instead of 3.8 GHz, with 10% around 4.2 GHz, and so on. So depending on the IPC difference we may not have SMT improvements at all (not likely, because that would mean around 11% IPC improvement and a 5950X hitting 3.8 GHz all-core in a similar Blender test).
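Written out, the poster's back-of-envelope runs as follows (every input is their guess rather than an AMD figure):

```latex
f_{5950X} \;\approx\; \frac{5.2\,\text{GHz}}{1.52 \times 0.9} \;\approx\; 3.8\,\text{GHz};\qquad
3.8 \times 1.05 \approx 4.0\,\text{GHz},\quad 3.8 \times 1.10 \approx 4.2\,\text{GHz}
```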
#198
Mussels
Freshwater Moderator
TheLostSwede: That's simply not true. There were a lot of UEFI/AGESA issues early on, on both platforms; some took longer to solve than others. Much of it was memory-related, but X570 had the boost issues, and a lot of people had USB 2.0 problems as well.

As I said, it mostly got resolved after a few months, but some things took quite a while for AMD to figure out.
I've had so many Ryzen systems here for myself, and then all the sales builds - the only issue that ever turned out to be actual AGESA/AMD and not shitty manufacturers was the early RAM incompatibility with Zen 1 and 300-series chipsets hating odd-numbered latencies (which was mostly blown out of proportion by people ignoring that more ranks of RAM = slower max clock speeds. When coming from an Intel board that at the time said 2133! no more! stay!, they'd move to an AMD board that said up to 4000 or whatever, and assume that 4000 MUST. WORK. NOW.)

Of the four original boards I had, I've still got three working perfectly fine. The only ones with unsolvable issues were the budget MSI 300 and 450 boards.
The X370 setup had lingering memory issues I could never resolve, until I moved that RAM over to an Intel system... and the issues moved with it. Faulty Corsair RAM that got unstable above 45 °C, so the issue went away every winter and came back every summer to drive me mad.
#199
TheLostSwede
News Editor
Mussels: I've had so many Ryzen systems here for myself, and then all the sales builds - the only issue that ever turned out to be actual AGESA/AMD and not shitty manufacturers was the early RAM incompatibility with Zen 1 and 300-series chipsets hating odd-numbered latencies (which was mostly blown out of proportion by people ignoring that more ranks of RAM = slower max clock speeds. When coming from an Intel board that at the time said 2133! no more! stay!, they'd move to an AMD board that said up to 4000 or whatever, and assume that 4000 MUST. WORK. NOW.)

Of the four original boards I had, I've still got three working perfectly fine. The only ones with unsolvable issues were the budget MSI 300 and 450 boards.
The X370 setup had lingering memory issues I could never resolve, until I moved that RAM over to an Intel system... and the issues moved with it. Faulty Corsair RAM that got unstable above 45 °C, so the issue went away every winter and came back every summer to drive me mad.
I could never get my Asus Prime X370 board and Ryzen 7 1700 to work with my Corsair LPX 3200 memory properly. It got up to 2933 at best, as 3000 was never properly stable.
That RAM worked perfectly fine at its rated speed in my previous Intel system.
The first couple of months there were a lot of other weird little issues too; it's in a thread here somewhere...

X570: lots of weird issues again early on, and some that took much longer to solve; again, plenty of posts about it here in the forums. The biggest blunder was of course the boost speeds that were promised but took them 3-4 months to deliver after launch.

I never said they didn't solve the issues; my point was simply that I'm sick and tired of being a beta tester for these companies. Spend an extra six months working on these platforms and make them stable before launch, instead of rushing them out so you can launch before your one single competitor. Obviously this doesn't just apply to AMD and Intel but also to a lot of other companies that have more competition; even so, the same applies: stop launching beta products, or worse.
#200
Mussels
Freshwater Moderator
TheLostSwede: I could never get my Asus Prime X370 board and Ryzen 7 1700 to work with my Corsair LPX 3200 memory properly. It got up to 2933 at best, as 3000 was never properly stable.
That RAM worked perfectly fine at its rated speed in my previous Intel system.
The first couple of months there were a lot of other weird little issues too; it's in a thread here somewhere...

X570: lots of weird issues again early on, and some that took much longer to solve; again, plenty of posts about it here in the forums. The biggest blunder was of course the boost speeds that were promised but took them 3-4 months to deliver after launch.

I never said they didn't solve the issues; my point was simply that I'm sick and tired of being a beta tester for these companies. Spend an extra six months working on these platforms and make them stable before launch, instead of rushing them out so you can launch before your one single competitor. Obviously this doesn't just apply to AMD and Intel but also to a lot of other companies that have more competition; even so, the same applies: stop launching beta products, or worse.
LPX was the literal worst for Ryzen. The LITERAL worst.

I wrote a whole-ass paragraph here and gave up; how about just a single image showing that hey - you were overclocking past officially supported speeds?



Intel dodged the issue by locking their cheap boards to 2133 MHz; AMD just gave people headroom to overclock, and then realised trusting consumers was a terrible idea.