# AMD Unveils 5 nm Ryzen 7000 "Zen 4" Desktop Processors & AM5 DDR5 Platform



## btarunr (May 23, 2022)

AMD today unveiled its next-generation Ryzen 7000 desktop processors, debuting on the Socket AM5 desktop platform. The Ryzen 7000 series introduces the new "Zen 4" microarchitecture, with the company claiming a 15% single-threaded uplift over "Zen 3" (a 16-core/32-thread "Zen 4" prototype compared to a Ryzen 9 5950X). Other key architecture specs put out by AMD include a doubling of per-core L2 cache to 1 MB, up from 512 KB on all older versions of "Zen." The Ryzen 7000 desktop CPUs will boost to frequencies above 5.5 GHz. Based on the way AMD has worded its claims, the "+15%" figure appears to include IPC gains, gains from higher clocks, and what the DDR4-to-DDR5 transition achieves. With "Zen 4," AMD is also introducing a new instruction set for AI compute acceleration. The transition to the LGA1718 Socket AM5 lets AMD use next-generation I/O, including DDR5 memory and PCI-Express Gen 5, both for the graphics card and for the M.2 NVMe slot attached to the CPU socket.

Much like the Ryzen 3000 "Matisse" and Ryzen 5000 "Vermeer," the Ryzen 7000 "Raphael" desktop processor is a multi-chip module with up to two "Zen 4" CCDs (CPU core dies) and one I/O controller die (cIOD). The CCDs are built on the 5 nm silicon fabrication process, while the I/O die is built on 6 nm, a significant upgrade from previous-generation I/O dies built on 12 nm. The leap to 5 nm for the CCD enables AMD to cram up to 16 "Zen 4" cores per socket, all of them "performance" cores. The "Zen 4" CPU core is larger, on account of the additional number-crunching machinery needed for the IPC increase and new instruction sets, as well as the larger per-core L2 cache. The cIOD packs a pleasant surprise: an iGPU based on the RDNA2 graphics architecture. Most Ryzen 7000 processors will now pack integrated graphics, just like Intel Core desktop processors.







The Socket AM5 platform is capable of up to 24 PCI-Express 5.0 lanes from the processor. Sixteen of these are meant for the PCI-Express graphics slots (PEG), while four go toward an M.2 NVMe slot attached to the CPU. For comparison, Intel "Alder Lake" processors have 16 Gen 5 lanes toward PEG, but their CPU-attached NVMe slot runs at Gen 4. The processor features dual-channel DDR5 memory (four sub-channels), identical to "Alder Lake," but with no DDR4 memory support. Unlike Intel's platform, Socket AM5 retains cooler compatibility with AM4, so the cooler sitting on your Ryzen CPU right now will work perfectly fine.





The platform also puts out up to 14 USB 20 Gbps ports, including Type-C. With onboard graphics now making it to most processor models, motherboards will feature up to four DisplayPort 2.0 or HDMI 2.1 ports. The company will also standardize the Wi-Fi 6E + Bluetooth WLAN solutions it co-developed with MediaTek, weaning motherboard designers away from Intel-made WLAN solutions.

At its launch in Fall 2022, AMD's AM5 platform will come with three motherboard chipset options: the AMD X670 Extreme (X670E), the AMD X670, and the AMD B650. The X670 Extreme was probably created by re-purposing the new-generation 6 nm cIOD die to work as a motherboard chipset, which means its 24 PCIe Gen 5 lanes go toward building an "all Gen 5" motherboard platform. The X670 (non-Extreme) is very likely a rebadged X570, which means you get up to 20 Gen 4 PCIe lanes from the chipset, while retaining PCIe Gen 5 PEG and CPU-attached NVMe connectivity. The B650 chipset is designed to offer Gen 4 PCIe PEG, Gen 5 CPU-attached NVMe, and likely Gen 3 connectivity from the chipset.



AMD is betting big on next-generation M.2 NVMe SSDs with PCI-Express Gen 5, and is gunning to be the first desktop platform with PCIe Gen 5-based M.2 slots. The company is said to be working with Phison to optimize the first round of Gen 5 SSDs for the platform.



All major motherboard vendors are ready with Socket AM5 motherboards. AMD showcased a handful, including the ASUS ROG Crosshair X670E Extreme, ASRock X670E Taichi, MSI MEG X670E ACE, GIGABYTE X670E AORUS Xtreme, and BIOSTAR X670E Valkyrie.

AMD is working to introduce several platform-level innovations, like it did with Smart Access Memory on the Radeon RX 6000 series, which builds on top of the PCI-SIG's Resizable BAR technology. The new AMD Smart Access Storage technology builds on Microsoft DirectStorage by adding AMD platform-awareness and optimization for AMD CPU and GPU architectures. DirectStorage enables direct transfers between a storage device and GPU memory, without the data having to route through the CPU cores. In terms of power delivery, "Zen 4" uses the same SVI3 voltage-control interface introduced with the Ryzen 6000 Mobile series. For desktops, this means the ability to address a higher number of VRM phases and to process voltage changes much faster than with SVI2 on AM4.



Taking a closer look at the AMD footnotes ("RPL-001"), we find that the "+15%" single-threaded gain is measured using Cinebench, and compares a Ryzen 9 5950X processor (not the 5800X3D) on a Socket AM4 platform with DDR4-3600 CL16 memory to the new "Zen 4" platform running DDR5-6000 CL30 memory. If we go by the measurements from our Alder Lake DDR5 Performance Scaling article, this memory difference alone would account for roughly five of the fifteen percentage points.
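
Assuming the gains compound multiplicatively, the memory contribution can be backed out of the headline number in a couple of lines (a rough sketch using the figures above; the variable names are ours):

```python
total_gain = 0.15   # AMD's claimed single-threaded uplift
memory_gain = 0.05  # rough DDR4-3600 -> DDR5-6000 contribution, per our scaling article

# Dividing out the memory factor leaves the core + clock contribution.
residual_gain = (1 + total_gain) / (1 + memory_gain) - 1
print(f"gain net of memory: {residual_gain:.1%}")  # ~9.5%
```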





The footnotes also reference an "RPL-003" claim that isn't used anywhere in our pre-briefing slide deck, but is shown in the video presentation. There we see a live demo comparing a "Ryzen 7000 Series" processor against Intel's Core i9-12900K "Alder Lake." It's worth mentioning that AMD isn't disclosing the exact processor model, only that it's a 16-core part; if we follow the Zen 3 naming, that would probably be the Ryzen 9 7950X flagship. The comparison runs the Blender rendering software, which loads all CPU cores. Here the Ryzen 7000 chip finishes the task in 204 seconds, compared to 297 seconds for the i9-12900K, 31% less time, which is very impressive. It's also worth mentioning that the memory configurations are slightly mismatched: Intel runs DDR5-6000 CL30, whereas the Ryzen is tested with DDR5-6400 CL32, a lower CAS number for Intel, higher clocks for Ryzen. While ideally we'd like to see identical memory used, the differences due to the memory configuration should be very small.



AMD is targeting a Fall 2022 launch for the Ryzen 7000 "Zen 4" desktop processor family, which would put it sometime in September or October. The company is likely to detail the "Zen 4" microarchitecture and the Ryzen 7000 SKU list in the coming weeks.

*Update 21:00 UTC*: AMD has clarified that the 170 W PPT figure seen is an absolute maximum limit, not a "typical" value like the 105 W on "Zen 3," which was often exceeded during heavy usage.

*Update May 26th*: AMD further clarified that the 170 W number is "TDP", not "PPT", which means that when the usual 1.35x factor is applied, actual power usage can go up to 230 W.
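
As a quick sanity check on that arithmetic, here's a minimal sketch (the 1.35x TDP-to-PPT factor is the one from the update above; the function name is ours):

```python
# Rough sketch of the TDP -> PPT relation: PPT = 1.35 x TDP on AMD desktop sockets.

def ppt_from_tdp(tdp_watts: float, factor: float = 1.35) -> float:
    """Maximum socket power derived from the rated TDP."""
    return tdp_watts * factor

print(ppt_from_tdp(105))  # "105 W" AM4 parts: ~142 W PPT
print(ppt_from_tdp(170))  # "170 W" AM5 parts: ~230 W PPT
```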

You can watch the whole presentation again on YouTube:









*View at TechPowerUp Main Site*


----------



## dgianstefani (May 23, 2022)

15% isn't enough, that brings ST up to Alder Lake, while this will compete with raptor lake.


----------



## TheLostSwede (May 23, 2022)

dgianstefani said:


> 15% isn't enough, that brings ST up to Alder Lake, while this will compete with raptor lake.


At least 15%. As in, 15% is the minimum performance improvement.


----------



## DeathtoGnomes (May 23, 2022)

btarunr said:


> The Ryzen 7000 desktop CPUs will boost to frequencies well over 5 GHz.


She showed 5.45Ghz. IDK if thats under perfect conditions or not. I'm thinking maybe they can get more out of the daily gamer.


----------



## W1zzard (May 23, 2022)

DeathtoGnomes said:


> She showed 5.45Ghz. IDK if thats under perfect conditions or not. I'm thinking maybe they can get more out of the daily gamer.


Just rechecked the video. Highest was 5520 MHz, news post has been updated


----------



## oxrufiioxo (May 23, 2022)

TheLostSwede said:


> At least 15%. As in, 15% is the minimum performance improvement.



I also feel it won't be enough especially if the rumored 10-15% ST performance improvements for Raptor Lake are true... In some applications Zen 3 is 30% behind in ST performance vs Alderlake.


----------



## DeathtoGnomes (May 23, 2022)

btarunr said:


> The platform also puts out up to 14 USB 20 Gbps ports, including type-C.


This is a good sign, now your PC can look more like a squid/octopus sitting on your desktop.


----------



## TheLostSwede (May 23, 2022)

oxrufiioxo said:


> I also feel it won't be enough especially if the rumored 10-15% ST performance improvements for Raptor Lake are true... In some applications Zen 3 is 30% behind in ST performance vs Alderlake.


Still early days and if the rumours about AMD are wrong, then maybe the rumours about Intel are wrong too?
We're just going to have to wait and see.


----------



## DeathtoGnomes (May 23, 2022)

W1zzard said:


> Just rechecked the video. Highest was 5520 MHz, news post has been updated


I must've missed that bit. Still questions, under what conditions was that met? Will it reach that on air cooling, etc.



TheLostSwede said:


> Still early days and if the rumours about AMD are wrong, then maybe the rumours about Intel are wrong too?
> We're just going to have to wait and see.


Wonder if there will be as many agesa updates as previously zens


----------



## oxrufiioxo (May 23, 2022)

DeathtoGnomes said:


> I must've missed that bit. Still questions, under what conditions was that met? Will it reach that on air cooling, etc.
> 
> 
> Wonder if there will be as many agesa updates as previously zens



They showed it in a not overly demanding game, with no framerate comparison vs. the competition. Kind of a pointless demonstration.


----------



## W1zzard (May 23, 2022)

DeathtoGnomes said:


> I must've missed that bit. Still questions, under what conditions was that met? Will it reach that on air cooling, etc.



While there are no footnotes for this specific test, the other footnotes for Zen 4 (both pages are in the news post) consistently mention a DDR5-6000 CL30 system with a 6950 XT and an Asetek 280 mm AIO


----------



## TheLostSwede (May 23, 2022)

DeathtoGnomes said:


> Wonder if there will be as many agesa updates as previously zens


Hopefully not, because if the stability is as bad as it was for both X370 and X570, AMD is going to get a lot of unhappy customers.


----------



## W1zzard (May 23, 2022)

oxrufiioxo said:


> I also feel it won't be enough especially if the rumored 10-15% ST performance improvements for Raptor Lake are true... In some applications Zen 3 is 30% behind in ST performance vs Alderlake.



15% from IPC, plus what looks like 10% from higher clocks, so 100 x 1.15 x 1.10 = 126.5
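
Spelling out that compounding (both percentages are estimates from this thread, not confirmed specs):

```python
# Independent gains compound multiplicatively, not additively.
ipc_gain = 0.15    # estimated IPC uplift
clock_gain = 0.10  # estimated clock-speed uplift

total_gain = (1 + ipc_gain) * (1 + clock_gain) - 1
print(f"combined uplift: {total_gain:.1%}")  # 26.5%, a bit more than 15 + 10
```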


----------



## GoldenX (May 23, 2022)

Hmm, no mention of AVX-512, L3 size, or low-end chipset tuning options...
Performance seems promising, hopefully the mid and low end offers are reasonable.


----------



## DeathtoGnomes (May 23, 2022)

W1zzard said:


> While there's no footnotes for this specific test, the other footnotes (both pages are in the news post) for Zen 4 consistently talk about a DDR5-6000 CL30 system with 6950 XT and Asetek 280mm AIO


Could not read the footnotes on the stream (even at full screen); good thing shots were posted here.

GD-150: "bursty single-threaded load". WTH is bursty, LOL. This footnote is what made me question _their_ conditions.


----------



## oxrufiioxo (May 23, 2022)

W1zzard said:


> 15% from IPC, plus what looks like 10% from higher clocks, so 100 x 1.15 x 1.10 = 126.5



Maybe but the wording in their slide saying ST performance and not IPC leads me to believe they are factoring in the clock speed increases already.

Also, the MT performance increase was pretty underwhelming; the 5950X is already about 20% faster in Blender vs. the 12900K, so 11% beyond that is pretty meh.

Hopefully I'm wrong or they are underselling it.


----------



## W1zzard (May 23, 2022)

DeathtoGnomes said:


> bursty


Bursty means high load with pauses in-between, on various core counts. i.e. not a render that loads all cores to 100% all the time


----------



## Rhein7 (May 23, 2022)

For me the iGPU is the big deal. At least you don't need another GPU for display in case of the usual failed GPU BIOS update


----------



## W1zzard (May 23, 2022)

DeathtoGnomes said:


> GD-150


This footnote is for a Ryzen 6000 slide btw



http://imgur.com/LPJo6lS




Rhein7 said:


> For me the igpu is the big deal. At least you don't need another gpu for display incase of the usual failed gpu bios update


For me too. Finally, people will stop complaining about my reviews being terrible because I mention "no IGP"


----------



## oxrufiioxo (May 23, 2022)

Rhein7 said:


> For me the igpu is the big deal. At least you don't need another gpu for display incase of the usual failed gpu bios update



The igpu and the motherboards/connectivity are definitely the highlight.


----------



## W1zzard (May 23, 2022)

oxrufiioxo said:


> Maybe but the wording in their slide saying ST performance and not IPC leads me to believe they are factoring in the clock speed increases already.


You are correct, it actually seems to be 1T, not IPC. News post fixed


----------



## DeathtoGnomes (May 23, 2022)

W1zzard said:


> This footnote is for a Ryzen 6000 slide btw
> 
> 
> 
> ...


Ha, that just looked like a blurry stray line in the stream.


----------



## ARF (May 23, 2022)

oxrufiioxo said:


> Maybe but the wording in their slide saying ST performance and not IPC leads me to believe they are factoring in the clock speed increases already.
> 
> Also the MT performance increase was pretty underwhelming the 5950X is already about 20% faster in Blender vs the 12900k so 11% beyond that is pretty meh.
> 
> Hopefully I'm wrong or they are underselling it.



Yeah, if the Ryzen 9 5950X is already 20% faster than the Core i9-12900K, while the newest Ryzen 7000 CPU is only 30% faster than the Core i9-12900K, then Raptor Lake will have the door wide open to beat it and reign supreme.

Which means AMD is in trouble, and this winter we may see, for the first time since 2017, Intel overtake AMD in DIY sales a la Germany's Mindfactory...


----------



## oxrufiioxo (May 23, 2022)

ARF said:


> Yeah, if the Ryzen 9 5950X is already 20% faster than Core i9-12900K, while the newest Ryzen 7000 CPU is only 30% faster than Core i9-12900K, then Raptor Lake will have the door very wide open to beat it and to reign supreme.
> 
> Which means that AMD is in a trouble and in the winter we may see for the first time since 2017, that Intel overtakes AMD in the DIY sales ala German MindFactory...



I'll reserve judgment till @W1zzard gets them in for testing, but I can't help being slightly underwhelmed by them at best barely catching Intel in ST performance a year late, and likely being 5-10% behind.


----------



## Hossein Almet (May 23, 2022)

I'm considering building a gaming rig with the 12400F and Gen 3 NVMe storage, just waiting for a flash sale on the graphics card


----------



## tabascosauz (May 23, 2022)

Listened through the livestream in DCS......some thoughts while running for my life:

The AM4 cooler spacing is a nice touch.
170W TDP is a lot.......currently 180W package power across even 2 chiplets is kinda hot in all-core with a midrange air cooler (although custom water only exceeds 80C after 210W or so). I'm not sure if Ian has left Anand yet (Gavin seems to have taken over), but AT seem to be conflating PPT with TDP. I hope 170W is PPT, not TDP
The "extreme overclocking" tag for X670E is pretty funny when all of the advertised X670E boards are 4-DIMMers, lol. Not sure if X670 now being the middle child means a relative upgrade for B450/B550 users, or just another price-hike justification a la Z590/Z690
Paste is going to be a bitch to clean off that IHS - falls off the edge only to get caught in between the clusters of MLCCs
All in all, feels pretty familiar from AM4. I'm not concerned with the hardware, more with how AMD plans to improve their overall poopy approach to firmware/software. Hopefully AM5 is a new start for AGESA and they start taking a bit more initiative with ryzen master - the way they take 1 step forward and 2 steps back, that laptop and desktop marketshare ain't going anywhere


----------



## rrrrex (May 23, 2022)

PCI-E 5.0 x16 is what you need for low/mid end gpu with 4 GB of VRAM, but amd doesn't want to make it real.


----------



## TheLostSwede (May 23, 2022)

tabascosauz said:


> All in all, feels pretty familiar from AM4. I'm not concerned with the hardware, more with how AMD plans to improve their overall poopy approach to firmware/software. Hopefully AM5 is a new start for AGESA and they start taking a bit more initiative with ryzen master - the way they take 1 step forward and 2 steps back, that laptop and desktop marketshare ain't going anywhere


I guess they won't start from scratch, but they obviously don't need any kind of backwards compatibility, since you can't plonk an AM4 CPU into an AM5 board.
As someone that jumped on X370 and X570 early on, the AGESA really is AMD's weakness, as in the first case it took about nine months to get an almost fully working platform as promised and in the second case it took six months. 
That said, it seems like Intel has been getting a bit sloppy with their UEFI builds at launch too, but that doesn't mean AMD should be allowed to get away with it.
Consumers shouldn't be beta testers every time a new product is launched.


----------



## ARF (May 23, 2022)

rrrrex said:


> PCI-E 5.0 x16 is what you need for low/mid end gpu with 4 GB of VRAM, but amd doesn't want to make it real.



The scary thing here is if AMD decided to make the Radeon RX 7400 with PCIe 5 *x2* connectivity. Then we are all screwed all along


----------



## Ferrum Master (May 23, 2022)

TheLostSwede said:


> Consumers shouldn't be beta testers every time a new product is launched.



It has become an ill trend everywhere, software and HW... I almost have a feeling they do it on purpose to cut prices.


----------



## ARF (May 23, 2022)

Ferrum Master said:


> It has become an ill trend everywhere, software and HW... I almost have a feeling they do it on purpose to cut prices.



But they don't cut the prices, they keep them at the same levels, while significantly decreasing the overall quality of the SW/HW...


----------



## Ferrum Master (May 23, 2022)

ARF said:


> But they don't cut the prices, they keep them at the same levels, while significantly decrease the overall quality of the SW/HW...



Cut prices, i.e. labor costs, for themselves to increase margin... if you didn't get the idea.


----------



## zlobby (May 23, 2022)

oxrufiioxo said:


> I also feel it won't be enough especially if the rumored 10-15% ST performance improvements for Raptor Lake are true...


Do you buy CPUs for their spec sheet or to get a job done? If it's the latter, then there are other CPUs and solutions that are faster than Raptor Lake.
And that's a big 'if'. Intel really doesn't show much performance uplift gen-over-gen.


----------



## Richards (May 23, 2022)

W1zzard said:


> This footnote is for a Ryzen 6000 slide btw
> 
> 
> 
> ...


You should switch to the 5800X3D for GPU tests. It's 15% faster than your current 5800X


----------



## zlobby (May 23, 2022)

Ferrum Master said:


> Cut prices ie labor cost for themselves to increase margin... If you didn't get the idea.


People should really get it into their thick skulls: companies optimize their expenses ONLY to increase profit margins.


----------



## tabascosauz (May 23, 2022)

TheLostSwede said:


> I guess they won't start from scratch, but they obviously don't need any kind of backwards compatibility, since you can't plonk an AM4 CPU into an AM5 board.
> As someone that jumped on X370 and X570 early on, the AGESA really is AMD's weakness, as in the first case it took about nine months to get an almost fully working platform as promised and in the second case it took six months.
> That said, it seems like Intel has been getting a bit sloppy with their UEFI builds at launch too, but that doesn't mean AMD should be allowed to get away with it.
> Consumers shouldn't be beta testers every time a new product is launched.



The hardware makes me want to stay, because everything is familiar and we get better/more affordable/more balanced ITX boards and the Impact (I fear what 2x PCH means for the Impact)... but man, the firmware just pulls me in the other direction. From 1004 up until 1203c we had a nice thing going, but AMD has really been evaporating our hopes with the AGESA since then (1207 finally seems okay again)

Perhaps it's a TSMC thing, but every one of their SKUs seems to improve SP significantly over time. Too eager on 3700X (mid-2019) and got garbage, too eager on 4650G and got bad SP, too eager on 5900X and got a bad clocking sample, too eager on 5700G and got bad IF. Maybe I'll wait it out.


----------



## Richards (May 23, 2022)

Raptor Lake will eat it up on single-core performance... Meteor Lake will increase the gap to 45% in single-core performance before they respond with Zen 5


----------



## W1zzard (May 23, 2022)

Richards said:


> You should switch to the 5800x3d for gpu tests. Its 15% faster than your current  5800x




AMD Ryzen 7 5800X3D Review - The Magic of 3D V-Cache (www.techpowerup.com):
The AMD Ryzen 7 5800X3D is the company's new flagship gaming processor. It introduces 3D V-Cache, a dedicated piece of silicon with additional L3 capacity. In our review, we're testing how much the larger cache can help intensive gaming workloads and applications and compare it to the Intel Core...

Kinda wrong thread, but I will upgrade the GPU test system this year; whether to a 12900K, 5800X3D, 7800X or 7900X, I don't know yet. "It's easy, just switch" takes about two weeks of full-time testing, running benchmarks all day, the same scenes over and over again.


----------



## TheLostSwede (May 23, 2022)

tabascosauz said:


> The hardware makes me want to stay, because everything is familiar and we get better/more affordable/more balanced ITX boards and the Impact (I fear what 2 x PCH means for the Impact)......but man, the firmware just pulls me in the other direction. From 1004 up until 1203c we had a nice thing going, but AMD really been evaporating all our hopes with the AGESA since then (1207 seems finally okay again)
> 
> Perhaps it's a TSMC thing, but every one of their SKUs seems to improve SP significantly over time. Too eager on 3700X (mid-2019) and got garbage, too eager on 4650G and got bad SP, too eager on 5900X and got a bad clocking sample, too eager on 5700G and got bad IF. Maybe I'll wait it out.


AMD has a different solution for mini-ITX boards. You'll see soon enough.

I'm not jumping on anything new this year, besides I only recently upgraded to a 5800X as the prices dropped below MSRP locally, which is really quite rare.



W1zzard said:


> AMD Ryzen 7 5800X3D Review - The Magic of 3D V-Cache
> 
> 
> The AMD Ryzen 7 5800X3D is the company's new flagship gaming processor. It introduces 3D V-Cache, a dedicated piece of silicon with additional L3 capacity. In our review, we're testing how much the larger cache can help intensive gaming workloads and applications and compare it to the Intel Core...
> ...


You mean to say that you have a life outside of running benchmarks all day?


----------



## Jermelescu (May 23, 2022)

Hope they'll have Thunderbolt on the mobile versions. I'd love a slim AMD APU laptop that could use an external GPU for more serious gaming.


----------



## gasolina (May 23, 2022)

A 15% single-core boost at 5.4 GHz vs. the old 4.9 GHz was expected to be around 17%, not 19-20%+. The clock difference alone is about 10%, so it's only a 5-7% improvement clock-for-clock. I would skip this; it's just like what happened from Zen to Zen+, and this is Zen 4+ on 6 nm. The same will happen with Raptor Lake, roughly a 3-5% clock-for-clock improvement, but I'm a bit more curious about Intel's power consumption for the 13th gen (I love the number 13).


----------



## Ferrum Master (May 23, 2022)

TheLostSwede said:


> AMD has a different solution for mini-ITX boards. You'll see soon enough.
> 
> I'm not jumping on anything new this year, besides I only recently upgraded to a 5800X as the prices dropped below MSRP locally, which is really quite rare.
> 
> ...



Hey, you should update your CPU-Z banner...

First gens were always raw... DDR5, AM5. Well, there are no real PCIe Gen 4 or 5 devices you'd miss, really, besides storage and unobtainium GPUs. A year or two is the norm IMHO, just because of the open beta test.


----------



## ARF (May 23, 2022)

Ferrum Master said:


> Cut prices ie labor cost for themselves to increase margin... If you didn't get the idea.



Prices are for the end user who buys in the retail shop. Cutting costs is something else, irrelevant here. Choose your wording properly.


----------



## TheLostSwede (May 23, 2022)

Jermelescu said:


> Hope they'll have Thunderbolt on the mobile versions. I'd love an slim AMD APU laptop that could use an external gpu for more serious gaming.


There's USB4 support, which means Thunderbolt 3 is supported.



Ferrum Master said:


> Hey, you should update your CPU-Z banner...


Better?


Ferrum Master said:


> First gens always were raw... DDR5. AM5, well there are no real PCIe gen 4 or 5 devices to skip really, besides storage and unobtainium GPU's. Year or two is a norm IMHO, just because of the open Beta test.


Yeah, I really hope, for AMD's sake, that they don't do that again, as it really doesn't help them in terms of winning over customers.


----------



## Ferrum Master (May 23, 2022)

ARF said:


> Prices is for the end user who buys in the retail shop. Cut costs is something else irrelevant. Choose your wording properly..



Dude, I can barely decipher what your idea is, only guess. The free market always cares about margin: cutting costs means lowering the BOM/labor costs of the device itself. How much the customer pays depends on various things past the maker (distribution, logistics, taxes, etc.), so seeing it from your end-user perspective isn't what's meant here, and it isn't irrelevant. We're talking about the makers themselves: deciding to ship a raw product is purely about the maker and their margin. That cut is an unwelcome and common trend, hence my remark.

I have to admit that hardware and software complexity has really risen; you can't compare today's products with those from 10 years ago. It really takes longer to get things running... AGESA development wasn't smooth, for sure.


----------



## Verpal (May 23, 2022)

DDR5 only.... well I hope the DDR4 to DDR5 transition would become smoother in next few months, since platform cost can be quite steep even if you are upgrading from AM4.


----------



## Wirko (May 23, 2022)

GoldenX said:


> Performance seems promising, hopefully the mid and low end offers are reasonable.


I think the mid and low end will stay on AM4 + DDR4 for quite some time - I mean a year or more. There's a reason AMD launched several AM4 processors in April 2022.


----------



## GoldenX (May 23, 2022)

Wirko said:


> I think the mid and low end will stay on AM4 + DDR4 for quite some time - I mean a year or more. There's a reason AMD launched several AM4 processors in April 2022.


While I would agree, only the 5600 is a valid option; the rest are overshadowed by Gen 10 low-end offerings, not to mention Gen 12 ones.


----------



## ARF (May 23, 2022)

Ferrum Master said:


> Dude, I barely decipher what your idea is only guess.



My idea is that the maker can cut the costs (BOM) but increase the end user prices. This is exactly what happens in reality.


----------



## AlwaysHope (May 23, 2022)

I was thinking of going to ADL now, with more of its bugs ironed out by BIOS updates so far, same with Win 11... however, I'm keen to know when AM5 will actually launch at retail globally.
Watching the DDR5 market indicates prices are coming down in my part of the world, ever so slowly, but by the time AM5 is actually available I presume the kits will be cheaper still.
Also, happy my AM4 cooler mounting kits will work on AM5!


----------



## TheLostSwede (May 23, 2022)

AlwaysHope said:


> I was thinking of going to ADL now with more of it's bugs ironed out with bios updates so far, also same with win 11.... however... I'm keen to know when AM5 will actually be launched for retail globally?
> Watching the DDR5 market indicates prices are coming down in my part of the world, ever so slowly, but by the time AM5 is really actually available I presume the kits will be cheaper still.
> Also, happy my AM4 cooler mounting kits will work on AM5!


August/September maybe?


----------



## jesdals (May 23, 2022)

Looks good, but I will still wait for gen-2 AM5 motherboards because of the DDR5 implementation. Hopefully we will see Intel's counter showing the true performance with DDR5. Nice to see progress, though.


----------



## Durvelle27 (May 23, 2022)

I’m making the switch


----------



## Verpal (May 23, 2022)

ARF said:


> The scary thing here is if AMD decided to make the Radeon RX 7400 with PCIe 5 *x2* connectivity. Then we are all screwed all along


I wanted to laugh at this statement, but I also laughed at the idea of AMD launching something with PCIe x4 before, so......


----------



## Pumper (May 23, 2022)

Disappointing. Only 15% up when the CPU is boosting 400-500MHz higher is really not impressive.

And what is up with that comparison to the 12900K? In the 5950X vs. Zen 4 test they use 3600CL16 vs. 6000CL30, but against Intel they use 6000CL30 for Intel and 6400CL32 for Zen 4, for whatever reason. Why not use the same RAM, especially when you already have the Zen 4 + 6000CL30 system at hand?

MT looks impressive, but then again they are using some in-house benchmark for that instead of Cinebench, which they could have used, so it's impossible to tell if it will perform that well in the real world (the 5950X is only ~5% faster than the 12900K in rendering, according to TPU tests).

The lack of gaming numbers also suggests that AMD is not confident in its performance advantage over the current Intel CPUs, not to mention its own 5800X3D. No wonder they chose not to release a 5900X3D and 5950X3D, as these would just end up beating Zen 4.

Was waiting for Zen4 to see if I should upgrade from my 3900X to 5950X, now that these are going for 500€. Looks like there is no point in going for Zen4, especially when it requires a new mobo, new ram and new cooling on top of the CPU.


----------



## zlobby (May 23, 2022)

We now need the mobile parts!


----------



## ARF (May 23, 2022)

Pumper said:


> Disappointing. Only 15% up when the CPU is boosting 400-500MHz higher is really not impressive.
> 
> And what is up with that comparison to 12900K? In 5950x vs Zen4 they are using 3600CL16 vs 6000CL30, but vs Intel they use the same 6000CL30 for Intel and 6400CL32 for Zen4, for whatever reason - why not use the same RAM, especially when you already have the Zen4 +6000CL30 RAM system at hand?
> 
> ...



Well, I guess the potential negative reviews will push AMD to decrease the prices of the platform.
Because otherwise, AMD will not sell well. The prices are too high as is.


----------



## Denver (May 23, 2022)

oxrufiioxo said:


> Maybe but the wording in their slide saying ST performance and not IPC leads me to believe they are factoring in the clock speed increases already.
> 
> Also the MT performance increase was pretty underwhelming the 5950X is already about 20% faster in Blender vs the 12900k so 11% beyond that is pretty meh.
> 
> Hopefully I'm wrong or they are underselling it.


Nope.


----------



## Daven (May 23, 2022)

The biggest missing spec is the L3 cache configuration, specifically 3D V-cache. I wonder if AMD is still deciding on what to do.

Edit: oh, and also AMD is saying max power is 170W, but most sites think this means base power. To me it seems like 170W is the power at max boost, which would compare to the 240W of Alder Lake.

Edit: upon further looking at MSI AM5 motherboard marketing material, it compares the 65-105W TDP of AM4 to the 65-170W TDP of AM5. Nothing released today insinuates a 170W base TDP, so I'm guessing this is future-proofing for higher core counts.

Edit: even more confusing, we have a 5.5 GHz clock in a gaming demo and a 15% ST gain in Cinebench. Impossible to tell if clocks were the same in both cases.

Edit: I'm gonna go out on a limb here and say clocks were similar between the 5950X and the Zen 4 processor. The 15% is probably mostly IPC. Since IPC is usually the mean or median clock-for-clock performance increase between two parts over like 30 applications, I don't think AMD has this number yet. They are still tuning for more apps, and final specs may not be set in stone yet.


----------



## Denver (May 23, 2022)

I realized now that the difference is 45%, not 31%. *The 12900K is 45% slower.*

PS: The AMD marketing team's terrible.


----------



## Valantar (May 23, 2022)

Denver said:


> I realized now that the difference is 45%, not 31%. *The 12900K is 45% slower.*
> 
> PS: The AMD marketing team's terrible.


Those two numbers are literally the same thing. 204s is 31% faster than 297s; 297s is 45% slower than 204s. They chose the more conservative wording, which uses the existing product as the baseline for comparison. That's the only sensible, good-faith comparison to make - especially as a "slower than" wording in marketing is _guaranteed_ to be flipped into a "faster than" wording by readers who don't consider how this changes the percentage. And that would be a shitshow for AMD.




While I haven't watched the presentation, I have to say this sounds a tad underwhelming. If that 15% is IPC, that's pretty good even when accounting for DDR5. If it's ST performance? That's underwhelming, especially when you take into account a 10% clock speed increase. Also a bit disappointed to not see any major packaging changes here - the CCDs are stacked closer, but other than that it looks like we're still getting through-package IF with its high power draw. And that's a damn shame. Here's hoping 7000 APUs will be the true focus of this generation, with MCM APUs finally entering the ring.


----------



## Daven (May 23, 2022)

Valantar said:


> Those two numbers are literally the same thing. 204s is 31% faster than 297s; 297s is 45% slower than 204s. They chose the more conservative wording, which uses the existing product as the baseline for comparison. That's the only sensible, good-faith comparison to make - especially as a "slower than" wording in marketing is _guaranteed_ to be flipped into a "faster than" wording by readers who don't consider how this changes the percentage. And that would be a shitshow for AMD.
> 
> 
> 
> ...


The 5.5 Ghz number was in a game demo. The 15%+ number was in a list with a 5 ghz+ number with footnotes that it was a cinebench test. It’s crazy confusing.


----------



## Chrispy_ (May 23, 2022)

So this is the official reveal; when is the official launch (and review embargo date)?

As cool as Zen4 is likely to be, Zen2 and Alder Lake were both troubled launches; Zen3 was smoother because it was so similar to Zen2 and fully platform-compatible. I'd be inclined to let some other guinea pigs pay through the nose to beta-test the new platform for me for at least a couple of months.


----------



## 529th (May 23, 2022)

I got the same impression as most here.  This will be an interesting launch to hold back on, purchase-wise, and watch what things really boil down to, AGESA changes included.  I have a feeling the 3D will be close to AM4 performance in gaming.


----------



## thegnome (May 23, 2022)

Err, they must be sandbagging or something for later... Zen 4 has a better node and higher clocks; that alone would mostly make up ground to Alder Lake... Zen 3 was 19% higher IPC compared to Zen 2, and that was with a few design changes and slightly higher clocks; that's without a new node, memory, cache, etc.


----------



## Arkz (May 23, 2022)

So B550 will be PCI-E 4.0 on the 16 lane slot, but then 5.0 on the M.2 slots. Hmm, well that's probably still fine. I don't think any GPU comes close to saturating all 16 lanes in 4.0. Hell a 3080 on a 3.0 slot runs fine.


----------



## ARF (May 23, 2022)

Chrispy_ said:


> So this is the official reveal, when is the official launch (and review embargo date?)



Sometime in September or October this year.

I wonder why they hurried so much to reveal it today... does it have anything to do with getting the partners ready for the software ecosystem update?



Arkz said:


> So B550 will be PCI-E 4.0 on the 16 lane slot, but then 5.0 on the M.2 slots. Hmm, well that's probably still fine. I don't think any GPU comes close to saturating all 16 lanes in 4.0. Hell a 3080 on a 3.0 slot runs fine.



You are probably talking about the B650, and by the way, are there any PCIe 5-ready M.2 SSDs?
Even if there are, they would be prohibitively expensive. Just get a good old PCIe 3 NVMe M.2 SSD.


----------



## Arkz (May 23, 2022)

ARF said:


> Sometime September or October this year.
> 
> I wonder why they hurried so much to reveal it today... is it anything to do to get the partners ready for the software ecosystem update?
> 
> ...


Yeah, B650, ha ha. And yeah, probably not, although there will be eventually. But until all new games start supporting DirectStorage I'm not that fussed about NVMe drives anyway. Generally the performance in games isn't even that different between a decent NVMe and a SATA SSD.


----------



## Valantar (May 23, 2022)

thegnome said:


> Err they must be sandbagging or something for later... Zen 4 has a better node and higher clocks that would alone make up ground to Alderlake mostly... Zen 3 was 19% higher IPC compared to Zen 2, and that was with a few design changes and slightly higher clocks, thats without a new node, memory, cache, etc.


Zen3 was a ground-up redesign, not "a few design changes".


----------



## R0H1T (May 23, 2022)

Come on, it's obvious they have something up their sleeve; these are either Zen *4c* cores or they have X3D designs lined up for later!


birdie said:


> *It's still quite underwhelming* considering a 100% increase in L2 cache, a lot more memory bandwidth and 10% higher clocks. And Zen 3 was launched almost two years ago.


I guess you've benchmarked them, right?


----------



## ratirt (May 23, 2022)

I'm really confused by some comments here. A 15% ST gain is quite a bit in my opinion. Then you have the IPC gain, which some people misinterpret. AL has the same IPC as the 5000 series AMD CPUs, according to Guru3D. 




So a 15% increase is not nothing, I would think. You also have the frequency boost. I'm puzzled how AMD measures the IPC, to be honest. 
Consider this: the 5800X3D has the same IPC as the 5800X and 12900K according to Guru3D, and yet it is way faster in games due to 3D V-Cache, but lags behind the 5800X in other apps, like MT apps, due to lower frequency. So IPC is one thing, ST performance is another, and general performance is a totally different thing. I haven't watched the presentation yet, but I'm really going to refrain from speculation and guessing what it will be like. Especially if this is supposed to be something totally different from the 5000 series CPUs. 
What I'm trying to say is, IPC and frequency etc. can be misleading on their own. You have to look at the bigger picture here.


----------



## R0H1T (May 23, 2022)

ratirt said:


> The 5800x3d has the same IPC as 5800x and 12900k according to guru3d


That's for *CB15*; Intel & AMD generally use a suite of benchmarks to average IPC gains across a variety of workloads. ADL is definitely faster than Zen3, but not that much (purely) on IPC.


----------



## noel_fs (May 23, 2022)

oxrufiioxo said:


> I also feel it won't be enough especially if the rumored 10-15% ST performance improvements for Raptor Lake are true... In some applications Zen 3 is 30% behind in ST performance vs Alderlake.


What applications is it 30% behind in?

They need to work on their marketing, Jesus Christ.


Anyway, seems good enough. Maybe I would have expected a little better, but I'm positive that benchmarks will only make it look better.


Some people saying AMD is behind Intel are quite delusional imo. Intel is the one a year behind; Alder Lake barely outperformed Zen3 while being considerably less efficient. There is no way Intel will come out with a >15% IPC improvement when they struggled to get Alder Lake out. 


I believe it's actually around a 30% performance improvement when taking clocks into account, and that's really good.

PCIe 5 also makes the platform more future-proof for early adopters.


----------



## phanbuey (May 23, 2022)

noel_fs said:


> what applications 30% behind?



Just some:
Adobe Premiere, Handbrake, Microsoft Office (Word, Excel etc.), older single-threaded titles (Age of Empires 4 etc.).

Still though, I would be surprised if Zen 4 isn't faster than Raptor Lake.  3D V-Cache + 15% IPC + massive clock boosts + 16 full-fat cores vs big.LITTLE should be able to beat Raptor Lake, but we'll see.

If intel boosts cache size, ring speeds, pumps clocks and adds more cores they might be able to keep up in some things, but I just don't see Raptor Lake beating a 7950 X3D in anything except a few ST outlier apps.

The mid range will be a price-performance fight tho, which is exciting.


----------



## ratirt (May 23, 2022)

R0H1T said:


> For *CB15*, Intel & AMD generally use a suite of benchmarks to avg IPC gains across a variety of workloads. ADL is definitely faster than zen3, but not that much (purely) on IPC.


Well, you say purely on IPC, but on Guru3D it looks like these two are equal. Is there a different way to measure IPC than what Guru3D did? If so, which is the right one, since there can't be two ways with each giving a different result? Maybe a different application gives a different impression of the CPU's speed. If that is the case (I believe it is), maybe we should wait for the general performance metric instead of believing something that cannot be clearly quantified with numbers.


----------



## R0H1T (May 23, 2022)

IPC numbers can differ depending on the application; Intel & AMD generally use 10+ varying workloads to show average IPC gains.


ratirt said:


> If so, which is is the right one since they can't be two ways and each one gives a different result.


Both strictly speaking, since we're not solely depending on just one benchmark.



ratirt said:


> Maybe different application gives different impression of the CPU's speed.


Yes & even (different) RAM configurations will give different "IPC" numbers.


ratirt said:


> maybe we should wait for the general performance metric instead


I'll wait for that fully loaded zen4 (4d?) die!


----------



## DeathtoGnomes (May 23, 2022)

ARF said:


> Well, I guess the potential negative reviews will push AMD to decrease the prices of the platform.
> Because otherwise, AMD will not sell well. The prices are too high as is.


Price creep can't be helped.


----------



## Valantar (May 23, 2022)

TheinsanegamerN said:


> My favorite part about NFTs is if someone drops an image in your wallet, like Cheese Pizza, you cant delete it, you can only hide it or send it somewhere. Super secure!
> 
> I thought that asrock's repeated failure of low end motherboard designs and them going after reviewers would do it for you.





ratirt said:


> I'm really confused with some comments here. 15% ST gain is quite a bit in my opinion. Then you have the IPC gain which some people misinterpret. AL has the same IPC as 5000 Series AMD CPUs. According to GURU 3d.
> View attachment 248539
> 
> So 15% increase is not nothing I would think. Also you have the frequency boost. I'm puzzled how AMD measures the IPC to be honest.
> ...


That ... is a very poor way of measuring IPC. Fixed frequency is ... fine, though not ideal (as IPC is highly dependent on caches, interconnects and RAM, locking clocks can present an unrealistic image of actual real-world IPC as you're changing the relative speeds of those separate clock domains), but a single application is not an IPC benchmark. If you're going to talk about real-world IPC (and not architectural-level execution ports etc.), you need a broad range of applications to give any kind of representative overview. A single application just isn't enough.


----------



## HisDivineOrder (May 23, 2022)

I was expecting more. That being said, having been there for the early days on AM4, I'll let others beta test AMD's new platform. Hopefully, they'll have ironed it out by Zen5.


----------



## dont whant to set it"' (May 23, 2022)

It says "pre-production model" in the video, in the company CEO's own words.


----------



## Valantar (May 23, 2022)

R0H1T said:


> Come on, it's obvious they have something up their sleeve; these are either Zen *4c* cores or they have X3D designs lined up for later!


Isn't Zen 4c supposed to be a lower area, lower clocked core for higher density implementations? That definitely makes zero sense for an MSDT platform topping out at the same core count as its predecessor. It's definitely not unlikely that there will be X3D SKUs later on, but who knows when/if those will arrive? At least there's no mention of them for now.


----------



## HD64G (May 23, 2022)

Nice thing for AMD and us customers that they seem to be sandbagging Zen4 performance. Intel won't know what it will be competing with and will be cautious in pricing. Many people will hold off on DDR5 platforms for months and prices will drop. If AMD isn't sandbagging, then they won't have crazy pricing, and Intel won't be able to increase their pricing after launching their 13th gen CPUs. If Zen4 is faster than it seems, Intel will cut pricing as they did vs Zen3 when they lost the performance crown, and AMD will follow suit. Competition helps us more than the duopoly they have maintained in the CPU market for decades now.


----------



## Steevo (May 23, 2022)

I really am doubting any double-digit actual IPC increase at this point in the x86-64 architecture **unless** we all agree that they should sell us CPUs with KNOWN speculation and branch-prediction vulnerabilities, since then they don't need to use hardware to check threads.

I would be OK with 5.5 GHz and a few vulnerabilities for another 10% IPC.


----------



## Crackong (May 23, 2022)

Valantar said:


> 204s is 31% faster than 297s; 297s is 45% slower than 204s.


No
It is the other way around

If you treat the whole job as 1:
Ryzen 7000 needs 204s to finish it, making it 1/204 = 0.00490 jobs per second
12900K needs 297s to finish it: 1/297 = 0.00337 jobs per second
Take the zeros off:
(490-337)/490 = 31% slower
(490-337)/337 = 45% faster

So
the 12900K is 31% slower than the 16-core Ryzen 7000
Ryzen 7000 is 45% faster than the 12900K
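A few lines of Python make the rate-based arithmetic easy to check (a quick sketch; the 204 s and 297 s figures are the ones quoted above, everything else is plain arithmetic):

```python
# Convert "lower is better" render times into "higher is better" rates,
# i.e. jobs per second, then compare with each CPU as the baseline.
t_zen4 = 204.0    # seconds per render, 16-core Ryzen 7000 prototype
t_12900k = 297.0  # seconds per render, Core i9-12900K

r_zen4 = 1.0 / t_zen4      # ~0.00490 jobs/s
r_12900k = 1.0 / t_12900k  # ~0.00337 jobs/s

# Rate-based percentages: which baseline you divide by decides the number.
zen4_faster = (r_zen4 - r_12900k) / r_12900k * 100  # 12900K as baseline
intel_slower = (r_zen4 - r_12900k) / r_zen4 * 100   # Ryzen 7000 as baseline

print(f"Ryzen 7000 is {zen4_faster:.1f}% faster (rate, 12900K baseline)")
print(f"12900K is {intel_slower:.1f}% slower (rate, Ryzen 7000 baseline)")
# prints ~45.6% and ~31.3%
```

Same two numbers, 45% and 31%; the only question is which product you treat as the baseline.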


----------



## bobsled (May 23, 2022)

Crackong said:


> No
> It is the other way around
> 
> If you treat the whole job as 1
> ...


----------



## Crackong (May 23, 2022)

bobsled said:


>


"Who is faster?" is a "higher is better" scale.
It might sound a little confusing when the data is presented as time, because time is "lower is better".
Therefore we need to convert them to "higher is better".
For example:
David needs 10s to run a 30m distance
Paul needs 20s

To find out "who is faster" we need to divide and find the speed.
Therefore it is 30/10 vs 30/20
So 3m/s vs 1.5m/s

So David runs at 3m/s and he is 100% faster than Paul's 1.5m/s 

Same situation applies to the test today.


----------



## TheLostSwede (May 23, 2022)

Not sure if this is 100% accurate, there are definitely some discrepancies compared to the information that has come out today, but it seems pretty close.

Site Launch Exclusive: All the Juicy Details on AMD's Quirky Chipset Solutions for AM5!
Eastbridge? Westbridge?
(angstronomics.substack.com)


----------



## Deleted member 24505 (May 23, 2022)

ARF said:


> Yeah, if the Ryzen 9 5950X is already 20% faster than Core i9-12900K, while the newest Ryzen 7000 CPU is only 30% faster than Core i9-12900K, then Raptor Lake will have the door very wide open to beat it and to reign supreme.
> 
> Which means that AMD is in a trouble and in the winter we may see for the first time since 2017, that Intel overtakes AMD in the DIY sales ala German MindFactory...


Was considering jumping to AM5, but I might wait and see what Raptor is like, as it will fit in my Z690 board.


----------



## oxrufiioxo (May 23, 2022)

Denver said:


> Nope.
> View attachment 248535




I should've been more specific: the 5950X can be as much as 20% faster in Blender depending on what workload is chosen, especially the longer ones.

Unfortunately, unless you're comparing identical workloads you can't really compare at all. At the same time, both Intel and AMD choose what makes them look best, and them not talking about increases over the previous flagship is odd. Hopefully the comparison was made on a workload that the 5950X finishes in about the same time as the 12900k.


----------



## FeelinFroggy (May 23, 2022)

Does it use 3D V-Cache?  I assume that it does, but I did not see it mentioned anywhere, and since the 5800X3D is a locked chip I did not know if it could support a 5.5 GHz clock speed.


----------



## oxrufiioxo (May 23, 2022)

Tigger said:


> Was considering jumping to AM5 but might wait and see what raptor is like as it will fit in my 690 board.




Guessing they will be showing off Raptor Lake soon. I think it will be similar enough that buying an AM5 motherboard won't be worth it, kinda like the reverse of already owning a decent AM4 motherboard and switching to Z690. The X670E boards (the really interesting ones I/O-wise) will probably be stupidly expensive as well. 



FeelinFroggy said:


> Does it use the 3d V-Cache?  I assume that it does, but I did not see it mentioned anywhere and since the 5800x3d is a locked chip I did not know if it could support 5.5ghz clock speed.



I think if it did, they would have mentioned it. I do find it odd that there was only mention of L2 cache increasing. Wouldn't be shocked if they just make a CPU focused on gaming that uses it again. The gains in application performance seem to be nonexistent or lower on the 5800X3D vs the 5800X as it is.


----------



## Valantar (May 23, 2022)

Crackong said:


> No
> It is the other way around
> 
> If you treat the whole job as 1
> ...


That makes no sense. We already have a unit: seconds (per job), not jobs per second. There was a single job run. 297/204=1.45, i.e. 297s is 1.45x 204s, or 297s is 45% slower than 204s. 204/297=0.68686868=~0.69, i.e. 204s is 0.69x 297s, or 204s is 31% faster than 297s. No amount of juggling numbers and changing units will change those basic relations.



Crackong said:


> "Who is faster?" is a "Higher is better" scale.
> It might sounds a little confused for the question when the data is presented with time and time is "Lower is better"
> Therefore we need to convert them to"Higher is better"
> For example
> ...


This ... is some really creative mathematical nonsense. "Who is faster" is _only_ a "higher is better" scale if you _translate it_ from lower time=better to something inversely proportional to that. The base assumption of asking "who is faster" for a predefined task is "who performs that task the quickest", not "who performs the most iterations of that task in a given amount of time". That is an entirely different question. Unless it is defined beforehand that what you're looking for is _rate_ (repetitions/time) and not _speed_ (time/repetition), the base assumption when anyone asks what is fastest is that they're asking about speed.


----------



## R0H1T (May 23, 2022)

Valantar said:


> Isn't Zen 4c supposed to be a lower area, lower clocked core for higher density implementations?


Why would they clock the (lower cache) variants lower? You think that makes sense? There are a ton of rumors around this, but the bigger cache variants are almost certainly "more dense" & *IMO* lower clocked as well.


----------



## Valantar (May 23, 2022)

R0H1T said:


> Why would they clock lower the (lower cache) variants? You think that makes sense. There's a ton of rumors around this but bigger cache variants are almost certainly "more dense" & *IMO *lower clocked as well.


You clock them lower because that allows a higher number of cores within a given power limit - which is also why you want a smaller core to begin with. The architectural differences mean these clock drops aren't linear compared to clock scaling for the regular core, but they will most definitely be there, unless 4c manages to trade off its simplicity with astoundingly good clock scaling.

Which of course also means that Zen4c makes no sense in a low core count implementation, as the sacrifices made to afford higher core densities in HPC/server applications don't make any sense in that scenario, even if you would be able to clock them higher than in a dense server die.


----------



## Crackong (May 23, 2022)

Valantar said:


> That makes no sense. We already have a unit: seconds (per job), not jobs per second. There was a single job run. 297/204=1.45, i.e. 297s is 1.45x 204s, or 297s is 45% slower than 204s. 204/297=0.68686868=~0.69, i.e. 204s is 0.69x 297s, or 204s is 31% faster than 297s. No amount of juggling numbers and changing units will change those basic relations.


However, your explanation doesn't make sense mathematically.
Because 0.69 only means 0.69 or 69%, it doesn't mean 31%.

Your equation doesn't work both ways, since you need to decide when to put the 1 in front of / after the answer (i.e. 1-0.69 or 1.45-1).

My equation is consistent. 
I suggest you check #87 to see how to calculate "who is faster", which is a "bigger is better" scale.
The thing is,
you need to calculate the "speed" of the process.
Dividing both times doesn't give you the speed.
"Distance / time" gives you the speed.

It is simple maths.


----------



## oxrufiioxo (May 23, 2022)

I did find her wording on AM4 during the presentation interesting. 

AMD confirms its AM4 platform "will continue for many years to come" (videocardz.com)
The AM4 socket is not going anywhere yet. AMD's next-gen AM5 platform might have just been unveiled at Computex, but AMD is not saying goodbye to AM4 just yet.  The AM4 socket has been a great success for AMD and consumers, especially those on the first gen 300-series motherboards, which have just...


----------



## Valantar (May 23, 2022)

Crackong said:


> However your explanation doesn't make sence mathematically.
> Because 0.69 only means 0.69 or 69% , it doesn't mean 31%


Uh ... percentages are relative. Relative to something else. If you're talking time to finish a job, and your baseline is 100%, then a result of 69% is indeed 31% faster. If, on the other hand, you redefine the baseline to be your new result, then that old 100% result becomes 145% of that, making it 45% slower.

What you're doing here is attempting to redefine the base variable from "time elapsed" to "jobs performed". This is an explicit reversal of what was presented.


Crackong said:


> I suggest you check #87 to see how to calculate "Who is faster" which is a "bigger is better" scale.


....did you miss the part where I quoted that post directly? Also, that post entirely fails to explain this supposed point.


Crackong said:


> The thing is
> You need to calculate the "Speed" of the process.


You're confusing speed with rate. In this case, speed is time per job, rate is jobs per time.


Crackong said:


> Dividing both time doesn't give you the speed
> "Distance / time" gives you the speed.


... there is no "distance" here, except metaphorically. But let's go with that metaphor: the "distance" is a single Blender render. That makes _speed_ "how quickly do you finish one render?", not "how many renders/time are you capable of". The latter question asks for a rate, not a speed.


Crackong said:


> It is simple maths.


Except that your maths fail to understand the questions being asked, and are thus being misapplied.



oxrufiioxo said:


> I did find her wording on AM4 during the presentation interesting.
> 
> 
> 
> ...


Hm, that's indeed interesting. Though most likely she's just referring to the fact that the platform will be supported for quite a while yet - i.e. CPUs aren't being discontinued immediately, nor will AM4 be aimed at a full-stack replacement any time soon. I could also see OEMs and business partners continue making AM4-based products for low cost markets, entry business PCs, etc. that don't need the fast I/O or extreme performance of AM5 - especially given how AMD doesn't have Intel's massive chipset tier list with delineations of PCIe generations, DDR support, etc. Still, there's always the potential of that meaning 6nm AM4 refreshes (even if only for OEM markets) down the line, as that should be pretty cheap and easy for them to make.


----------



## Crackong (May 23, 2022)

Valantar said:


> Uh ... percentages are relative. Relative to something else. If you're talking time to finish a job, and your baseline is 100%, then a result of 69% is indeed 31% faster. If, on the other hand, you redefine the baseline to be your new result, then that old 100% result becomes 145% of that, making it 45% slower.
> 
> What you're doing here is attempting to redefine the base variable from "time elapsed" to "jobs performed". This is an explicit reversal of what was presented.
> 
> ...




Here is a really, really simple example:
Person A uses 100s to finish a job
Person B uses 200s

With YOUR equation:
200/100, so B is 100% slower than A
100/200, so A is 50% faster than B

Your equation is fundamentally flawed because, in your equation, A will NEVER be 100% faster than B, since it would have to finish in 0 seconds to do that.
Please, use your common sense.
If a person finishes his job in 10s when the other guy needs 200s, he is 20x faster than the other guy, but in YOUR equation, he is just 95% faster.

Even if the first person only needs 1s to finish the job, in YOUR equation he is just 99.5% faster.
In reality he is 200x faster.

C'mon, it is simple maths.


----------



## ARF (May 23, 2022)

Tigger said:


> Was considering jumping to AM5 but might wait and see what raptor is like as it will fit in my 690 board.



Zen 4 on AM5 and Raptor Lake on Z790 will launch around the same time, plus minus a few weeks.


----------



## Drash (May 23, 2022)

Crackong said:


> Here is a really really simple example
> Person A uses 100s to finish a job
> Person B uses 200s
> 
> ...


Perspective. change/original x 100 is the formula, where change = new - original (or the reverse, it's just a sign). Which one is the original is the point. "Faster" implies the slower one is the original, "slower" ...

e.g. I take a work cut (reduce to 4 days from 5) = a 20% cut, but getting them back (4 days to 5) is a 25% increase!


----------



## Crackong (May 23, 2022)

Drash said:


> Perspective. change/original x 100 is the formula, where change = new - original (or the reverse, it's just a sign). Which is the original is the point. Faster implies slower is original, slower ...
> 
> eg I take a work cut  - reduce to 4 da


Put your formula into the example I've mentioned above and tell me: "a job finished in 1s" is how many % faster than "a job finished in 200s"?


----------



## wheresmycar (May 23, 2022)

I'm not blown away by the 15% IPC increase... I was hoping for something closer to double to consider a platform swap with DDR5. Hopefully TPU gaming benchmarks show more significant performance gains to sway the opinion.

What happened to 3D V-Cache... not incorporated in Zen 4? I didn't see any mention, or am I missing something? Or is AMD going to use this feature on one-off refresh novelty chips?

For me, the graphics integration is a BIG +1. On my personal gaming/work builds I've never opted for anything otherwise... I like the idea of having a trouble-shooting iGPU, needed more than ever with these crazy power-consuming modern graphics cards and their higher-than-usual fail rate.


----------



## Drash (May 23, 2022)

Crackong said:


> Put your formula into the example I 've mentioned above and tell me "a job finished in 1s" is how many % faster than "a job finished in 200s"


No, it isn't "my" formula, it is "the" formula. You do it, what is the answer? What do you learn from that.


----------



## Crackong (May 23, 2022)

Drash said:


> No, it isn't "my" formula, it is "the" formula. You do it, what is the answer? What do you learn from that.


The answer is
"a job finished in 1s" is 20000% faster than "a job finished in 200s"

Now did you learn anything from that?


----------



## R0H1T (May 23, 2022)

Valantar said:


> You clock them lower because that allows a* higher number of cores within a given power limit*


Which gets wasted with an IGP, try again 

I'm guesstimating the bigger cache variants would likely ditch the IGP with massive (L3?) caches near the cores or on the IoD, maybe even an L4 cache.


----------



## Drash (May 23, 2022)

Crackong said:


> The answer is
> "a job finished in 1s" is 20000% faster than "a job finished in 200s"
> 
> Now did you learn anything from that?





> Valantar said:
> 
> 
> 204s is 31% faster than 297s; 297s is 45% slower than 204s.


>>No
>>It is the other way around

Apologies, apparently I cannot type and nest quotes properly. Valantar was right.


----------



## roberto888 (May 23, 2022)

Drash said:


> >>No
> >>It is the other way around
> 
> Apologies, apparently I cannot type and nest quotes properly. Valantar was right.


The difference is 93s, which is 45% of 204s and 31% of 297s. Thus 297s is 45% slower than 204s, and 204s is 31% faster than 297s.
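The same numbers fall out of the time-based arithmetic in a few lines (a quick sketch using the 204 s / 297 s figures from the demo):

```python
# Time-based comparison: the 93 s gap measured against each baseline.
t_fast, t_slow = 204.0, 297.0  # render times in seconds
diff = t_slow - t_fast         # 93 s

pct_faster = diff / t_slow * 100  # 204 s is ~31.3% faster than 297 s
pct_slower = diff / t_fast * 100  # 297 s is ~45.6% slower than 204 s

print(f"204s is {pct_faster:.1f}% faster than 297s")
print(f"297s is {pct_slower:.1f}% slower than 204s")
```

Both percentages describe the same 93 s gap; only the baseline changes.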


----------



## defaultluser (May 23, 2022)

I know you folks keep cracking the "Raptor Lake will destroy this" whip like it's going out of style, but I think you're missing the bigger part:

*Raptor Lake just doubles the E-cores, so as most real-world loads hit a scaling wall, Raptor Lake will also hit that same scaling wall earlier than Zen 4 (8P + 16E versus 16P!)*

It's going to take a perfectly-scaling application for Raptor Lake to beat the 7950!


----------



## Crackong (May 23, 2022)

Drash said:


> >>No
> >>It is the other way around
> 
> Apologies, apparently I cannot type and nest quotes properly. Valantar was right.



Can you just answer my question?
"a job finished in 1s" is how many % faster than "a job finished in 200s"  ?

Answer that with common sense and compare with Valantar's formula.

I am tired of explaining primary-school maths online.



roberto888 said:


> The difference is 93s, which is 45% of 204s and 31% of 297s. Thus 297s is 45% slower than 204s, and 204s is 31% faster than 297s.


Same


----------



## ArcanisGK507 (May 23, 2022)

it will remain smoke until it comes to light...


----------



## Deleted member 24505 (May 23, 2022)

defaultluser said:


> I know you folks keep cracking the "Raptor Lake will destroy this" whip like it's going out of style, but I think you're missing the bigger part:
> 
> *Raptor Lake just doubles the E-cores, so as most real-world loads hit a scaling wall, Raptor Lake will also hit that same scaling wall earlier than Zen 4 (8P + 16E versus 16P!)*
> 
> It's going to take a perfectly-scaling application for Raptor Lake to beat the 7950!


Raptor Lake processors will offer performance increases of 30-40% in multi-threaded workloads, compared to Alder Lake. This double digit increase extends to single-threaded tasks too, with a reported 8-15% increase, which should boost fps in the best PC games.
https://www.pcgamesn.com/intel/raptor-lake-40-percent-faster

With Raptor Lake, Intel is said to be improving the performance of these cores, and the leaked road map published by Videocardz suggests that we should see “new hybrid CPU core changes for improved performance” as well as “improved CPU cache for gaming” improvements for desktop Raptor Lake. It’s unclear what these changes will be at this time, however.
https://www.digitaltrends.com/computing/intel-raptor-lake-cpu-rumors-news-specs/

Pretty sure they are not "just" adding more E cores
https://wccftech.com/roundup/intel-13th-gen-raptor-lake-cpus/


----------



## Dimitriman (May 23, 2022)

Pumper said:


> Disappointing. Only 15% up when the CPU is boosting 400-500MHz higher is really not impressive.
> 
> And what is up with that comparison to 12900K? In 5950x vs Zen4 they are using 3600CL16 vs 6000CL30, but vs Intel they use the same 6000CL30 for Intel and 6400CL32 for Zen4, for whatever reason - why not use the same RAM, especially when you already have the Zen4 +6000CL30 RAM system at hand?
> 
> ...


All of your questions and comments can be answered by this: it was not a CPU launch. This was a launch of the AM5 platform. I don't think AMD will reveal (and neither will Intel for that matter) anything detailed about their CPU before launch as it would give their competitor the opportunity to fine tune their product and better compete.


----------



## Tomgang (May 23, 2022)

5.5 GHz in gaming is a little better than I hoped for, and it means the rumors of around 5.4 to 5.6 GHz are actually true.

With that said, my own 5950X can boost up to 5.05 GHz at stock on the best cores. With some tinkering in BIOS, the 3 best cores can go to 5.25 GHz, and all cores are capable of reaching 5 GHz or above at light load. Clocks will be lower at high load, and anything above 5.25 GHz crashes at even the slightest load; it can boot but can't handle much stress over 5.25 GHz. By the way, my CPU is air-cooled, so it doesn't need exotic cooling to reach these clocks, just a good motherboard.







But give me a 5950X3D and I would be more than happy, and it wouldn't need a new socket. Then again, I guess a 3D version doesn't make sense, as V-Cache mostly benefits games.


----------



## R0H1T (May 23, 2022)

Tigger said:


> This double digit increase extends to single-threaded tasks too, with a reported 8-15% increase


And AMD can do SMT4 with Zen 5; kind of a chicken-and-egg thing there. But AMD, with their chiplet approach, will have the core-count advantage going into the future, till Intel copies them.


----------



## Richards (May 23, 2022)

Valantar said:


> Zen3 was a ground-up redesign, not "a few design changes".


Even Apple got 20% from TSMC 5 nm... is the node a flop, or does AMD's Zen 4 architecture have weaknesses in it?


----------



## MarsM4N (May 23, 2022)

Is this just a show prop she's holding up, or are chip dies now *golden*? I remember them being only silver-ish.

Guess it's for some thermal advantages, right?


----------



## Drash (May 23, 2022)

Crackong said:


> Can you just answer my question?
> "a job finished in 1s" is how many % faster than "a job finished in 200s"  ?
> 
> Answer that with common sense and compare with Valantar's formula.
> ...


Can you not work it out? Are you not confident in your own reasoning? What's your PhD in? Mine is in Electronic Engineering, with some novel maths I invented to prove 802.11 is a "pile of wank" and Hiperlan was actually on the right track. Then I moved on.

TL;DR: Valantar was right. Go away and think about this.

PS: I also like beer. Try it, it helps.


----------



## Valantar (May 23, 2022)

Crackong said:


> Here is a really really simple example
> Person A uses 100s to finish a job
> Person B uses 200s
> 
> ...


Yes, that is exactly correct. When you say "slower than A", that wording explicitly takes the time spent by A to finish the job as the point of reference. Working with percentages, that would then be 100%. Similarly, "faster than B" means the point of reference is B, setting the time spent by B as 100%. The difference is thus relative to the base number, whether that is higher or lower. Wherever you set the baseline, the comparisons follow from that.


Crackong said:


> Your equation is fundamentally flawed because in your equation, A will NEVER be 100% faster than B becasue it has to be finished with 0 second to do that, in YOUR equation.


... is that a problem? It is literally impossible for something to be infinitely fast, that's just how nature works, so ... yes? Is there some fundamental problem with the impossibility of a 100% increase in a relative percentage measured towards zero? You can't do the task in zero time, and you can't have a 100% increase, because 100% is then the span between a theoretical zero time expenditure and the real time expenditure. This is literally the only common-sense approach to comparing time expenditures trending towards zero - the only method that takes into account that zero will never be reached, and that doesn't exaggerate the difference between minute real-world changes.

I mean, this is even included in your hackneyed reformulation, which tries to avoid this by reformulating the variables in question to "units of work per time" (which might be zero, but only at zero work done) rather than "time spent per one unit of work", which is what the slide here (and nearly all such benchmarks) presents.

You're arguing as if it's _better_ if a change between, say, 20s and 10s compared to a 200s baseline were presented as "10x faster" and "20x faster", despite the fact that this _grossly_ exaggerates the difference between the two. You see that, right? Presenting those two as 90% and 95% faster is a far more accurate representation of their absolute time expenditure.


Crackong said:


> Please, use your common sense.
> If a person finishes his job in 10s when the other guy needs 200s, he is 20x faster than the other guy, but in YOUR equation, he is just 95% faster.
> 
> Even if the first person only needs 1s to finish the job, in YOUR equation he is just 99.5% faster.
> In reality he is 200x faster


But all of those are still true. Your "in reality" statements, which seem to be meant as rebuttals, are literally the same ratio. _They're saying the same thing_. _And all are equally valid_ - but which is more useful or appropriate depends on context, of course. And the context is what you're misapplying here. The context is not a question looking for a _rate of work_, but a _time till completion of work_. And in terms of marketing, the application you're arguing for is one that exaggerates the actual improvements. When you're comparing two things to see how fast they can finish a task, it's the reduction in task completion time that matters, not the fact that a 100% reduction is impossible. That's just how the world works.


----------



## Dyatlov A (May 23, 2022)

Can it reach a similarly high single-core score as Alder Lake, like 800+ points in CPU-Z? I don't care about having very many cores; I want fast cores.


----------



## Assimilator (May 23, 2022)

"the company claiming a 15% single-threaded uplift over "Zen 3"" - 15% is really not much to write home about for a generational change, especially considering there's the transition to faster memory too. Is the Zen design running out of steam finally?

"AI compute acceleration" is vague enough to mean precisely nothing - why would consumers care?

"up to 24 PCI-Express 5.0 lanes from the processor" - that's not nearly as many as I was hoping, it's the same number as ADL (I'm including the latter's chipset DMI link here). Granted, 8 PCIe 5.0 lanes are superior to 8 4.0 lanes, but if AMD has to spend 4 lanes on the chipset(s) then you're back to parity with ADL. Which means it should be easy for RKL to match or even exceed this count.

"the AM5 Socket retains cooler compatibility with AM4" - I wonder how many idiots are going to reuse their shitty $20 tower coolers on Zen 4 CPUs, then complain the CPUs are slow because they throttle.

"up to 14 USB 20 Gbps ports" - lovely marketing weasel-words, you still need an entire lane of PCIe 5.0 or 2 lanes of 4.0 to reach 20Gbps. Unless the chipset(s) themselves are 5.0-capable, which I strongly doubt due to cost implications, they will need to have a shitton of 4.0 lanes to be able to provide that level of USB connectivity. I'm expecting the same thing that we saw on X370, namely one or two USB-C ports at the highest speed and the rest still being ye olde 3.1 gen 1 type-A.

No explicit mention of USB4 anywhere, which is ominous. I can't imagine AMD would be stupid enough to launch a platform that lacks USB4, but also... AMD.

"will also standardize Wi-Fi 6E + Bluetooth WLAN solutions it co-developed with MediaTek, weaning motherboard designers away from Intel-made WLAN solutions" - that MediaTek-branded solution will have to be incredibly good to pull board manufacturers away from Intel's tried-and-true WiFi hardware. My fear is that it'll instead be incredibly cheap and instead of having decent Intel WiFi on everything, we'll get crappy MediaTek on lower-end SKUs and need to pay more for Intel on the better ones.

"AMD is betting big on next-generation M.2 NVMe SSDs with PCI-Express Gen 5" - nobody cares, PCIe 4.0 SSDs are stupidly fast already, no ordinary consumer wants or needs 5.0 SSDs, what they want and need are cheaper and more energy-efficient SSDs. Console tards will lap this up though.

"The new AMD Smart Access Storage technology builds on Microsoft DirectStorage" - something else nobody cares about.


----------



## Yraggul666 (May 23, 2022)

Yeeeaaah... NO!
Don't get me wrong, I'm ECSTATIC about AMD being in THE GAME again and insanely happy about the competition;
also LOVING the new AM5/DDR5/PCIe 5.0 stuff.
Now I'll just enjoy this AM4/DDR4/PCIe 4.0 system for the next 5-6 years while watching the AMD/Intel/nGreedia wars.
Next upgrade is bound for 2027/2028, unless something goes poof outside the warranty period, but even then I'm not switching platform.


----------



## Tech Ninja (May 23, 2022)

Seems this is a lot worse than fanboys predicted.

They need a 3D cache version ASAP or AMD is AMDead.

The 7000 series is gonna lose to Lovelace as well


----------



## dont whant to set it"' (May 23, 2022)

@Tech Ninja 
Your point being?

We( consumers) still win, or should win.

I'm still gaming at 1080p 144Hz. I had an RX 5700 XT, $385 in 2020 including shipping from 2000 miles away; I underran it on both power (sometimes at 50% for the kicks of it) and GPU clocks / GPU supply voltage, and the card mustered through, exceeding expectations and then some.

A pretty similar story (mind the price) after upgrading to an RX 6900 XT: heavily underrun, not much heat output, fan speed set to minimum (only because I currently find it better than the fan-stop feature).

My last two might as well have been a couple of Nvidia-based graphics cards, but they weren't.

Notable mention: PSU rated 550 Watts.


----------



## Jism (May 23, 2022)

A 15% ST increase should yield higher overall MT performance, right?

I don't see the issue(s) here. New platform, with future CPU releases.

They need some time, but they will come.

PCIe 5.0 isn't a requirement either... there's no card taxing even PCIe 4.0.


----------



## 529th (May 23, 2022)

USB4 is connected to the CPU. It's shown on a diagram.


----------



## Valantar (May 23, 2022)

R0H1T said:


> Which gets wasted with an IGP, try again
> 
> I'm guesstimating the bigger cache variants would likely ditch the IGP with massive (L3?) caches near the cores or on the IoD, maybe even an L4 cache.


What, you think that 3-4CU iGPU is going to consume a noticeable amount of power? Yeah, no, sorry. Considering AMD's iGPUs run fine with 3-4x the CUs in 15W U-series APUs, I really don't think that cut-down variant will make even a dent in the power consumption of their desktop chips.

It's quite likely that Zen4c has less L3 cache, yes - that's one of the easiest ways of cutting down on area. But it's also likely tweaked in other ways - just like Zen2 was significantly smaller than Zen3 on the same process, Zen4 is another die size increase, so Zen4c might be closer to Zen2/3 in various areas to keep it slim. It's meant for applications where the sheer number of threads matters far more than their absolute peak performance after all, so some concessions are expected.


Assimilator said:


> "the company claiming a 15% single-threaded uplift over "Zen 3"" - 15% is really not much to write home about for a generational change, especially considering there's the transition to faster memory too. Is Zen running out of steam finally?


We'll see. If that's 15% IPC, that's okay, if that's 15% including the clock speed increase it's a let-down for sure. Acceptable overall performance boost, barely, but only through pushing clocks ridiculously high, which kills efficiency.


Assimilator said:


> "AI compute acceleration" is vague enough to mean precisely nothing - why would consumers care?


Yep. Guess they're matching Intel there though; they've been advertising the same for at least the past generation.


Assimilator said:


> "up to 24 PCI-Express 5.0 lanes from the processor" - that's not nearly as many as I was hoping, it's the same number as ADL (I'm including the latter's chipset DMI link here). Granted, 8 PCIe 5.0 lanes are superior to 8 4.0 lanes, but if AMD has to spend 4 lanes on the chipset(s) then you're back to parity with ADL. Which means it should be easy for RKL to match or even exceed this count.


It's not the same as ADL - ADL has 5.0 x16 PEG and 5.0 for the chipset (IIRC), but no 5.0 m.2. Not that 4 lanes less matters much, but ADL prioritizing 5.0 for GPUs rather than storage never made sense in the first place - it's doubtful any GPU in the next half decade will be meaningfully limited by PCIe 4.0 x16.


Assimilator said:


> "the AM5 Socket retains cooler compatibility with AM4" - I wonder how many idiots are going to reuse their shitty $20 tower coolers on Zen 4 CPUs, then complain the CPUs are slow because they throttle.


... is that any more likely than them buying a shitty $20 AM5 tower cooler? There are plenty of great AM4 coolers out there after all. Retaining compatibility reduces waste in a meaningful and impactful way. You don't fix people being stupid by forcing obsolescence onto fully functional parts.


Assimilator said:


> "up to 14 USB 20 Gbps ports" - lovely marketing weasel-words, you still need an entire lane of PCIe 5.0 or 2 lanes of 4.0 to reach 20Gbps. Unless the chipset(s) themselves are 5.0-capable, which I strongly doubt due to cost implications, they will need to have a shitton of 4.0 lanes to be able to provide that level of USB connectivity. I'm expecting the same thing that we saw on X370, namely one or two USB-C ports at the highest speed and the rest still being ye olde 3.1 gen 1 type-A.


X670E is literally marketed as "PCIe 5.0 everywhere", providing 24 more lanes of 5.0 (of which, presumably, another 4 go to the CPU interconnect, leaving a total of 40). X670 most likely retains the 5.0 chipset uplink even if it runs its PCIe at 4.0 speeds. The main limitation is still the cost of physically implementing this amount of high-speed IO on the motherboard, as that takes a lot of layers and possibly higher-quality PCB materials.


Assimilator said:


> No explicit mention of USB4 anywhere, which is ominous. I can't imagine AMD would be stupid enough to launch a platform that lacks USB4, but also... AMD.


Several announced motherboards mention it explicitly, so no need to worry on that front. The only unknown is whether it's integrated into the CPU/chipset or not. Support is there.


Assimilator said:


> "AMD is betting big on next-generation M.2 NVMe SSDs with PCI-Express Gen 5" - nobody cares, PCIe 4.0 SSDs are stupidly fast already, no ordinary consumer wants or needs 5.0 SSDs, what they want and need are cheaper and more efficient SSDs.


This is mostly true, and I agree that PCIe 5.0 SSDs are pretty dumb, but that's how competition works in tech - if your competitor has a feature, you need another feature on top of that again.


Assimilator said:


> "The new AMD Smart Access Storage technology builds on Microsoft DirectStorage" - something else nobody cares about.


On this I'd have to disagree with you. DS has _a lot_ of potential - current software just can't make use of our blazing fast storage, and DS goes a long way towards fixing that issue. It just needs a) to be fully implemented, with GPU decompression support, and b) be adopted by developers. The latter is pretty much a given for big name titles given that it's an Xbox platform feature though.




Jism said:


> A 15% ST increase should yield higher overall MT performance, right?


Depends how that increase is reached, and whether the same thing is maintainable in MT. If it's only from pushing clocks and that means increasing power, it might not. If it's from improved efficiency and IPC, most likely yes. But there's tons of gray area and nuance.


----------



## tfdsaf (May 23, 2022)

oxrufiioxo said:


> I also feel it won't be enough especially if the rumored 10-15% ST performance improvements for Raptor Lake are true... In some applications Zen 3 is 30% behind in ST performance vs Alderlake.


What? Which applications are those? Even the top-end Intel processor, the 12900KS, delivers essentially the same performance as the 5950X according to the TechPowerUp database and other websites. Applications that still run on only 1 core don't really exist anymore; you'd have to dig up apps from 2019 and earlier to test, but not many reviewers do, as they are usually obscure and most have updated versions that use multiple cores.

In benchmarking apps that only test a single core there can be a difference of 20 percent, but with the Intel processor using faster DDR5 memory!


----------



## Oasis (May 23, 2022)

That AM5 CPU is going to be hard to get thermal paste off with all those gaps on the outside


----------



## MarsM4N (May 23, 2022)

Oasis said:


> That AM5 CPU is going to be hard to get thermal paste off with all those gaps on the outside



Yep. And all the tiny transistors lurking between the kerbs. You need a steam cleaner to get it out there.


----------



## Oasis (May 23, 2022)

MarsM4N said:


> Yep. And all the tiny transistors lurking between the kerbs. You need a steam cleaner to get it out there.


I don't see why they did it. Maybe for easier RMA testing?


----------



## oxrufiioxo (May 23, 2022)

tfdsaf said:


> What? Which applications are those? Even the top-end Intel processor, the 12900KS, delivers essentially the same performance as the 5950X according to the TechPowerUp database and other websites. Applications that still run on only 1 core don't really exist anymore; you'd have to dig up apps from 2019 and earlier to test, but not many reviewers do, as they are usually obscure and most have updated versions that use multiple cores.
> 
> In benchmarking apps that only test a single core there can be a difference of 20 percent, but with the Intel processor using faster DDR5 memory!







I'm just not a huge fan of AMD being so much later and, at best, just matching Alder Lake ST performance on what seems like a much more advanced node. I gave Intel crap and even ditched their platform entirely for taking so long to release something worthwhile over 9th gen. MT performance is just as important, but ignoring ST performance is also foolish. Again, I'm just mildly disappointed and expected more, especially after Zen 2 and Zen 3. Looking forward to seeing actual reviews of these, and hopefully AMD is being conservative. There are a lot of fanboys who just want to see AMD or Intel die depending on what team they imagine they're on, but we as consumers need both chip vendors to be competitive, because that will mean much better products for us, and not just quad cores for 7 generations...


Good times; there's already an installation video out...


----------



## Crackong (May 24, 2022)

Valantar said:


> Yes, that is exactly correct. When you say "slower than A" that wording explicitly takes the time spent by A to finish the job as the point of reference. Working with percentages, that would then be 100%. Similarly, "faster than B" means the point of reference is B, setting the time spent by B as 100%. The difference is thus relative to the base number, whether that is higher or lower. Whereever you set the baseline, the comparisons follow from that.
> 
> ... is that a problem? It is literally impossible for something to be infinitely fast, that's just how nature works, so ... yes? Is there some fundamental problem with the impossibility of a 100% increase in a relative percentage measuring towards zero? You can't do the task in zero time, and you can't have a 100% increase, because 100% is then the span between a theoretical zero time expenditure and the real time expenditure. This is literally the only common sense approach of comparing time expenditures trending towards zero - the only method that takes into account that zero will never be reached, and that doesn't exaggerate the difference bewteen minute real-world changes.
> 
> ...



Again, No.

Again, using the same example as above
Comparing 1s and 200s jobs
Using YOUR equations
1s is 99.5% faster than 200s and 200s is 19900% slower than 1s
So your statement "Presenting those two as 90% and 95% faster is a far more accurate representation of their absolute time expenditure." is totally flawed because the 19900% is STILL THERE.

The thing is, "faster" and "slower" don't apply to "Time" alone.
"Time" is just "no. of seconds", and there is no "Speed" there.
"Speed" only happens when something is divided by time.
You cannot describe something as "faster" or "slower" without a "Speed" element.
Therefore, in the equation you must calculate the "Speed" first.
And that's the fundamental flaw in your statements/equations.

"No. of seconds" alone has no meaning
"Work done within No. of seconds" is what we needed.
1s and 200s has no meaning if the amount of work done is unknown.
The person could have spent 1s and get 1 work done and stop there while the other guy spent 200s and did 300 work done.
Therefore the "Amount of work done" must be put into the equation to calculate "Speed" before anyone could describe who is "Faster".

Comparing 204s vs 297s alone,
you can say 204s is a 31% "reduction" versus 297s, or "smaller" or "shorter" or "less", but you should never, never say it is "faster" than 297s without work done in the equation.

In this CPU case it is "1 test case done in 204 seconds" vs "1 test case done in 297 seconds"
So the "1" must be put into the equation
1/204 = 0.00490
1/297 = 0.00337
(0.00490 - 0.00337) / 0.00490 = 31% slower
(0.00490 - 0.00337) / 0.00337 = 45% faster
Without the "1", your equation does not represent any "Speed" element.

Please do realize we are comparing "how quickly the CPU works".
Your explanation only represents "time reductions in 2 tests".

Replace the word "faster" with "shorter" and your statements are totally fine.
But if you need to use the word "faster", please include "Speed" in your equations.

Enough said


----------



## ModEl4 (May 24, 2022)

I was here to write about how AVX-512 (or SMT improvements) might have played a role in the blender test, but seeing the slower/faster arguments I was exhausted.
It's the 7950X being 45% faster than the 12900K, or the 12900K being 31% slower than the 7950X (not the 7950X being 31% faster than the 12900K). It's so simple; basic logic stuff, really. AMD's marketing guys are probably from the financial sector, measuring everything in margins (even when we are talking about mark-ups); it's better for profits for sure, lol.


----------



## LabRat 891 (May 24, 2022)

What's with the lack of expansion? Hopefully lane bifurcation is fully supported; I'm only seeing 3 PCIe slots *at most*. It's a huge waste of lanes to put an x1-x8 NIC, video capture card, storage controller, Thunderbolt/USB4 card, etc. in an x16 slot. Are bifurcated risers (popular with cryptominers) going to come to the enthusiast and prosumer space? Or do the big boys think they've included everything we could possibly want on the mobo or SoC?


----------



## Metroid (May 24, 2022)

I'm a lot more optimistic than AMD itself. I mean, core clock alone increased 10%, which means 10% more performance even if it's only one or two cores; 5 nm means more transistors, plus DDR5, plus some minor fixes and tweaks, and that's another 5%. So 15% is likely the minimum to expect. I don't think AMD tried much here; they could have increased IPC a lot more. Anyway, I'm not sure I'll upgrade from my 5900X: DDR5 is still very expensive and I'd want at least 64 GB. I'm currently using 32 GB, which has been enough so far, with only a few minor issues with many programs open at once. I will likely wait for the CPU after the 7xxx series.


----------



## ir_cow (May 24, 2022)

Oasis said:


> That AM5 CPU is going to be hard to get thermal paste off with all those gaps on the outside


I'm planning on just leaving it. I swap CPUs daily for testing and that would become a part-time job removing it all haha.


Personally, I am mostly invested in the FCLK. Hopefully it is 3000 MHz (DDR5-6000) to start; just speculation of course, before the NDAs steal my soul and I become aware of the truth. Besides the AM5 memory, I like the idea of more PCIe 5.0 lanes. The dual-chipset approach is an interesting solution. Will the consumer market need Gen 5 NVMe drives? No way! But it does make sense for content creators.

AMD iGPU??? I want to see an option without it; a waste of silicon in my opinion. The only times an iGPU is useful are video encoding (Intel Quick Sync), laptops, and low-end pre-builts. Give me a CPU at a lower price point and gut the iGPU for my desktop!


----------



## Minus Infinity (May 24, 2022)

dgianstefani said:


> 15% isn't enough, that brings ST up to Alder Lake, while this will compete with raptor lake.


Ha ha, yeah, sure, AMD is going to tell Intel right up front what the performance improvements from Zen 4 really are, 4-5 months ahead of release, and, even worse, admit they aren't great. This is a classic misdirect. You really think AMD would basically say "we can't even match Alder Lake with Zen 4", with Raptor Lake still to come this year? That would be a total failure from them. The internal leaks from AMD are saying the ST uplift will be well above 15%, and the MT uplift above the ST one. That would be a bigger uplift than Zen 3 over Zen 2, if the leaks are to be believed.

I would not worry at all at this stage. Now, Intel may have a real surprise with RPL, and Zen 4 may well find itself in trouble again at the end of the year, but only time will tell. The one thing Intel has going for it is MT performance, which will be greatly enhanced by the doubled E-cores; I doubt AMD can get that crown back before Zen 5 at best.


----------



## Nephilim666 (May 24, 2022)

ir_cow said:


> I'm planning on just leaving it. I swap CPUs daily for testing and that would become a part-time job removing it all haha.


Just caulk the gaps


----------



## sweet (May 24, 2022)

Minus Infinity said:


> Ha ha, yeah, sure, AMD is going to tell Intel right up front what the performance improvements from Zen 4 really are, 4-5 months ahead of release, and, even worse, admit they aren't great. This is a classic misdirect. You really think AMD would basically say "we can't even match Alder Lake with Zen 4", with Raptor Lake still to come this year? That would be a total failure from them. The internal leaks from AMD are saying the ST uplift will be well above 15%, and the MT uplift above the ST one. That would be a bigger uplift than Zen 3 over Zen 2, if the leaks are to be believed.
> 
> I would not worry at all at this stage. Now, Intel may have a real surprise with RPL, and Zen 4 may well find itself in trouble again at the end of the year, but only time will tell. The one thing Intel has going for it is MT performance, which will be greatly enhanced by the doubled E-cores; I doubt AMD can get that crown back before Zen 5 at best.


If AMD can keep the power efficiency of Zen 3, then even 15% is way better than Intel. Those Alder Lake chips are hot and consume tons of power; not recommended.


----------



## ModEl4 (May 24, 2022)

Zen 4 should be a little bit better in gaming than Raptor Lake, but Alder Lake was such a great improvement over the previous architecture in gaming; that's why Intel's official slides had a reference to Alder Lake's gaming performance:





Now seriously, the only thing I would bet on is this: since a 7 nm CPU chiplet + 14 nm I/O die was better in performance/Watt than Intel's 12th gen attempt, I can't find a reason why a 5 nm CPU chiplet + 6 nm I/O die wouldn't be better than Intel's 13th gen attempt, especially since on Intel's side we would also have double the efficiency cores, much more cache, and higher frequencies, according to rumors.


----------



## AlwaysHope (May 24, 2022)

ir_cow said:


> ...
> Personally I am mostly invested in the FLCK. Hopefully it is 3000MHz (DDR5-6000) for starting. Just speculation of course before the NDAs steal my soul and I become aware of the truth. Beside the AM5 memory, I like the idea of more PCIe 5.0 Lanes. Dual chipset is a interesting approach as a solution. Will the consumer market need Gen5 NVMe drives? No way! but, it does make sense for content creators.
> 
> AMD iGPU??? I want to see a option without it. A waste of silicon in my opinion. Only time iGPU is useful is video encoding (Intel QUICKSYNC), laptops and low-end pre-builts. Give me a CPU at a lower price point and gut the iGPU for my desktop!


From a gamer's point of view, devs still haven't optimised for PCIe 4.0 yet, let alone 5.0; 3.0 is still enough for about 99% of games atm, as summarized from TPU's storage reviews this year too.
As has been mentioned before in this thread, an iGPU is very handy indeed in case of borked driver upgrades or the early failing stages of a dGPU. I would expect the option of turning it off completely in BIOS, as is already the case with Rocket Lake K CPUs.


----------



## InVasMani (May 24, 2022)

Oasis said:


> That AM5 CPU is going to be hard to get thermal paste off with all those gaps on the outside


You could just use a thermal pad (e.g. Thermal Grizzly) to avoid that issue, not to mention they don't dry out, even if they are a touch worse than paste at thermal conductivity. They're probably better than dried-out paste that hasn't been reapplied in a while, I imagine. It would be really interesting to compare one against paste that's been in use for a year.


----------



## Why_Me (May 24, 2022)

Pricing and availability will play a big part imo.


----------



## usiname (May 24, 2022)

Stop this dispute and just calculate the work rendered in a given time.
If the 12900K renders 1 image in 297s and the 7950X renders the same image in 204s, then let the Ryzen keep rendering a second image for the remaining 93s. When both CPUs have run for 297s, the 12900K has 1 full rendered image, while the 7950X has rendered 1.45 images, i.e. it is 45% faster: the 7950X finishes 45% more work per unit of time.

Example for 45% faster rendering


----------



## Richards (May 24, 2022)

This is amd's rocket lake lol


----------



## TiN (May 24, 2022)

Interesting how the CCX dies seem to have gold plating, most likely for soldering to the IHS, while the chipset die is just the usual silicon color, perhaps just for the usual PCM paste


----------



## ratirt (May 24, 2022)

Valantar said:


> That ... is a very poor way of measuring IPC. Fixed frequency is ... fine, though not ideal (as IPC is highly dependent on caches, interconnects and RAM, locking clocks can present an unrealistic image of actual real-world IPC as you're changing the relative speeds of those separate clock domains), but a single application is not an IPC benchmark. If you're going to talk about real-world IPC (and not architectural-level execution ports etc.), you need a broad range of applications to give any kind of representative overview. A single application just isn't enough.


I get your point, but still. Depending on which apps you use and how your system is configured, the IPC number will be different. Especially with rumors and other simple benchmarks on the internet, it is really hard to determine exactly what the performance will be. Since it can change and there are so many other things in the equation, it is not possible to say.
I'm not saying this is the right way to do it; I'm saying it gives some sort of information about the performance. I think you meant real-world performance, not IPC.


----------



## Frick (May 24, 2022)

Crackong said:


> Again, No.
> 
> Again, using the same example as above
> Comparing 1s and 200s jobs
> ...



But we know the amount of work already, from context. The primary school version of this would be something like "Kim finished juggling melons in 204 seconds. Zim finished juggling melons in 297 seconds. How much faster than Zim was Kim, expressed as percent?"


----------



## Valantar (May 24, 2022)

ratirt said:


> I get your point but still. Depending on which apps you use and how your system is configured the IPC number will be different. Especially with rumors and other simple benchmarks on the internet it is really hard to determine what the performance will be exactly. Since it can change and there are so many other things in the equation, it is not possible to say.
> Not saying this is right way to do it, I'm saying it gives some sort of information about the performance. I think you meant, real-world performance not IPC.


No, I meant IPC. What that graph shows isn't IPC, it's clock-normalized real world (Cinebench) performance. As for the variability you mention: that's why you make a decision on how to configure test systems - say, whether to stick to the fastest supported JEDEC RAM spec, to go for "reasonably attainable XMP" etc. Either way, you make a decision and stick to it. And while motherboard choice does affect performance in some ways, most of those are down to power delivery and boosting - i.e. again something you can control for. And then you run a representative suite of benchmarks, not just one.

Using a single benchmark to indicate IPC is just as invalid as using a single benchmark to indicate the overall performance of a product. Or, arguably, even more invalid, as using the term IPC purports to speak to more fundamental architectural characteristics, which is then undermined all the more by using a single benchmark with its specific characteristics, quirks and specific performance requirements. IPC as a high-level description of real world performance per clock _must_ be calculated across a wide range of tests in order to have any validity whatsoever.



Crackong said:


> Again, No.
> 
> Again, using the same example as above
> Comparing 1s and 200s jobs
> ...


Again: these are saying the same thing. What does "the 19900% is still there" even mean? You understand that these numbers are relative representations of difference, right? That they don't exist in an absolute sense, but only exist as comparisons relative to a baseline? There is no contradiction here.


Crackong said:


> The only thing there is "faster" and "slower" doesn't apply to "Time" alone.
> "Time" is just "No. of seconds" and there is no "Speed" there.
> "Speed" only happens when "Something divided by Time"
> You cannot describe something is "faster" or "slower" without a "Speed" element
> ...


....This _is_ accounted for in my equations, as "297" or "204" isn't _seconds_, it's _seconds to finish the work_. The explicit context here is the question of "how much time does it take each of the CPUs to finish this task", not "which duration is longest". We wouldn't be talking about these numbers if they weren't the time to finish a workload, thus they can only be understood as seconds/workload, not seconds. If I was speaking of time alone, as you say I wouldn't be using terms like "faster" or "slower", I would be talking about "less" or "more" time. But I'm not. I'm talking about time to finish the work.


Crackong said:


> In this CPU case it is "1 test case done in 204 seconds" vs "1 test case done in 297 seconds"
> So the "1" must be put into the equation
> 1/204 = 0.00490
> 1/297 = 0.00337
> ...


But this is where you're reversing things. Again: you're calculating a _rate_: work-per-second, not seconds-per-workload. The numbers given are seconds-per-workload. You are transforming this into something that the data given is not - a rate of fractional units of work per second. The calculation for seconds per workloads - the speed in this context - is 204/1 and 297/1, which means the division by one is omitted as it is entirely redundant. You don't need to write out 204/1=204. Nobody before you here, and certainly not AMD's marketing team, has said anything about how many units of a given workload the CPU can complete per second. They compared the time spent to finish a specific workload. That's the opposite equation of what you're drawing up here.

Your base equation above is the following:
1 / (X seconds per workload) = Y workloads per second.
You're calculating percentages from this unit you're producing: workloads per second.
I don't have an equivalent equation, as my percentages are calculated from the unit given: seconds per workload. You are explicitly transforming the data given into a different unit; I am not. This is where your confusion stems from. No such transformation is necessary in order to compare the speed of these processors, as we're not talking about their rate of work, but their relative speed in completing a given task.


Crackong said:


> Please do realize we are comparing "How quickly the CPU works"
> Your explanations only represents "Time reductions in 2 tests"


... but that's what we're comparing: the time difference between two CPUs finishing a single workload. We are _not_ comparing "how quickly the CPU works". Not at all. If, for example, we were talking clock speeds (which are a rate), then you would be correct. But we are talking a comparison of the duration for a single workload.


----------



## Deleted member 24505 (May 24, 2022)

Until Zen 4 appears it is all piss in the wind as nobody really knows


----------



## spnidel (May 24, 2022)

now that's a new generation right there


----------



## R0H1T (May 24, 2022)

Valantar said:


> What, you think that 3-4CU iGPU is going to consume a noticeable amount of power? Yeah, no, sorry. Considering AMD's iGPUs run fine with 3-4x the CUs in 15W U-series APUs, I really don't think that cut-down variant will make even a dent in the power consumption of their desktop chips


You said power limit or budget; do you think the IGP on a desktop chip has the same limits as on ULV notebook ones?


ModEl4 said:


> It's 7950X being 45% faster than 12900K


Maybe I missed it, but where do you see this chip being the flagship(?) 7950X; no model numbers were revealed *IIRC*.


TiN said:


> *Interesting how CCX dies seem to have gold plating* , most likely for soldering to IHS, while chipset die just usual silicon color, perhaps just for usual PCM paste


Interesting observation, but I'm thinking they're hiding the (maximum) core count here! Almost certainly nothing to do with paste or gold plating.


----------



## dont whant to set it"' (May 24, 2022)

So:

1) obtain the render scene program;
2) set up a similar 12900K with as close as possible memory type, size and timings;
3) run the program with core affinity set for E-cores only;
4) go to the BIOS and disable E-cores, or set core affinity for P-cores only;

would solve it?


----------



## mahirzukic2 (May 24, 2022)

Tigger said:


> Until Zen 4 appears it is all piss in the wind as nobody really knows


But we are kinda looking forward to it.
I might finally upgrade my i7 2600K processor with either Raptor Lake or Zen 4, whichever turns out to be better, and be done for the next 10 years.


----------



## ratirt (May 24, 2022)

Valantar said:


> No, I meant IPC. What that graph shows isn't IPC, it's clock-normalized real world (Cinebench) performance. As for the variability you mention: that's why you make a decision on how to configure test systems - say, whether to stick to the fastest supported JEDEC RAM spec, to go for "reasonably attainable XMP" etc. Either way, you make a decision and stick to it. And while motherboard choice does affect performance in some ways, most of those are down to power delivery and boosting - i.e. again something you can control for. And then you run a representative suite of benchmarks, not just one.
> 
> Using a single benchmark to indicate IPC is just as invalid as using a single benchmark to indicate the overall performance of a product. Or, arguably, even more invalid, as using the term IPC purports to speak to more fundamental architectural characteristics, which is then undermined all the more by using a single benchmark with its specific characteristics, quirks and specific performance requirements. IPC as a high-level description of real world performance per clock _must_ be calculated across a wide range of tests in order to have any validity whatsoever.


What do you mean it is not an IPC metric? It clearly shows what the score is in a controlled environment when CPUs are locked to a certain frequency to estimate their instructions per clock cycle. I think that is as valid as any other. Maybe it has not been done on many benchmarks, but it is a valid IPC metric. If you want to test IPC on a CPU, you must have a controlled environment, thus the frequency cap. Obviously, testing with a broader range of apps would have given different results for the IPC, but it is still valid and uses the most common benchmark considered adequate for that type of measurement. Anyway, what you are talking about, measuring IPC with more benchmarks, is rather general performance than an IPC indicator.


----------



## AusWolf (May 24, 2022)

I hope the new heat spreader eliminates the heat dissipation issues of the current gen - that is, that I'll be able to cool something more powerful than a Ryzen 3 in an SFF build.


----------



## Valantar (May 24, 2022)

R0H1T said:


> You said power limit or budget, you think the IGP on a desktop chip has the same limits like on ULV notebook ones


No, I'm just giving an illustrative example of how little power an iGPU _needs_, compared to your assertion that it _will_ meaningfully affect overall power draws. And remember: this is a tiny, low power iGPU, not one tuned for performance. This is not an APU, which is the term AMD uses for all their hardware with performance-oriented iGPUs. It's meant to give you a display output without a dGPU, not to run complex 3D scenes at high performance. Could it clock very high and consume some power? Sure! Will it at stock? Not likely. The drastically reduced CU count compared to even the mobile iGPUs will reduce base power consumption for base desktop rendering _and_ peak power draws. And it certainly won't eat up a meaningful amount of the CPU's power budget when running a CPU-heavy compute workload - the power required for a modern iGPU displaying a simple desktop is a few watts. What I'm saying is that the effect of this will be negligible in this context, which directly contradicts your argument that the iGPU would somehow draw so much power that these might be Zen4c cores, that the iGPU power draw is equivalent to the per-core power draw of going from 96 to 128 cores.


TiN said:


> Interesting how CCX dies seem to have gold plating , most likely for soldering to IHS, while chipset die just usual silicon color, perhaps just for usual PCM paste


It's very unlikely to be gold plating; rather, it's just the color of the diffusion barrier material used for TSMC's 5nm process. Plenty of dice throughout the ages have had a golden sheen to their top surface - Intel's Sapphire Rapids and Ponte Vecchio are a good example, but there are plenty more. AMD CPUs (and APUs) have been soldered for several generations already, after all; you don't need to gold plate the die for that to work. (And there's no way they're combining different TIMs under the same IHS - the chance of contamination between the two would be far too high, and the high temperatures for soldering would likely harm the paste.)



ratirt said:


> What do you mean it is not an IPC metric? It clearly shows what the score is in a controlled environment when CPUs are locked to a certain frequency to estimate their instructions per clock cycle. I think that is as valid as any other. Maybe it has not been done on many benchmarks, but it is a valid IPC metric. If you want to test IPC on a CPU, you must have a controlled environment, thus the frequency cap. Obviously, testing with a broader range of apps would have given different results for the IPC, but it is still valid and uses the most common benchmark considered adequate for that type of measurement. Anyway, what you are talking about, measuring IPC with more benchmarks, is rather general performance than an IPC indicator.


It's an IPC metric for a single workload, which is fundamentally unrepresentative, and thus fails to meaningfully represent IPC in any general sense of the term. That is literally what the second paragraph you quoted says. The term IPC inherently makes a claim of broadly describing the per-clock performance of a given implementation of a given architecture - "instructions" is pretty general, after all. Attempting to extrapolate this from a single workload is essentially impossible, as that workload will have highly specific traits in how it loads the different parts of the core design, potentially/likely introducing significant bias, and thus failing to actually represent the architecture's ability to execute instructions generally. That's why you need some sort of representative sample of benchmarks to talk about IPC in any meaningful sense. It can still be a somewhat interesting comparison, but using it as the basis on which to say "X has A% higher IPC than Y" is very, very flawed.
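As a sketch of the aggregation being described here, one common way to summarize per-clock performance across a suite is the geometric mean of clock-normalized scores. The benchmark names and numbers below are invented purely for illustration:

```python
import math

# Hypothetical clock-normalized scores (score per GHz) for two CPUs
# across a small benchmark suite; all numbers are made up for illustration.
scores_a = {"render": 110, "compress": 95, "compile": 130, "physics": 105}
scores_b = {"render": 100, "compress": 100, "compile": 100, "physics": 100}

def geomean_ratio(a, b):
    """Geometric mean of per-benchmark ratios a/b (order-insensitive aggregate)."""
    ratios = [a[k] / b[k] for k in a]
    return math.prod(ratios) ** (1 / len(ratios))

# Aggregate "IPC" uplift of CPU A over CPU B across the whole suite (~9%),
# even though individual benchmarks range from -5% to +30%.
uplift = geomean_ratio(scores_a, scores_b)
print(f"aggregate uplift: {uplift - 1:.1%}")
```

The point of the geometric mean is that no single outlier benchmark dominates the aggregate, which is exactly why a one-benchmark "IPC" number can mislead.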


----------



## ratirt (May 24, 2022)

Valantar said:


> It's an IPC metric for a single workload, which is fundamentally unrepresentative, and thus fails to meaningfully represent IPC in any general sense of the term. That is literally what the second paragraph you quoted says. The term IPC inherently makes a claim of broadly describing the per-clock performance of a given implementation of a given architecture - "instructions" is pretty general, after all. Attempting to extrapolate this from a single workload is essentially impossible, as that workload will have highly specific traits in how it loads the different parts of the core design, potentially/likely introducing significant bias, and thus failing to actually represent the architecture's ability to execute instructions generally. That's why you need some sort of representative sample of benchmarks to talk about IPC in any meaningful sense. It can still be a somewhat interesting comparison, but using it as the basis on which to say "X has A% higher IPC than Y" is very, very flawed.


Well, I disagree with you, and I can say that extrapolating this (like you said) from a variety of applications which behave differently, of which there is such a vast number, is impossible as well.
For example, one CPU is better than another in one application, and the other CPU is better than the first in a different application. If IPC is a metric describing instructions per second which are a constant, the outcome should be the same for every app, but it is not. So performance does not always equal IPC.
For instance:
5800X and 5800X3D in games. Normally these are the same processors, but they behave differently in gaming and differently in office apps. So out of curiosity, am I talking here about IPC or performance? Somehow, you say that IPC has to be measured across a variety of benchmarks to be valid. I thought that was the general performance of a CPU across the most-used applications.


----------



## ModEl4 (May 24, 2022)

R0H1T said:


> Maybe I missed it but where do you see this chip being the flagship(?) 7950x no model numbers were revealed *IIRC*.


Just being brief I guess, it could be a lower TDP 16 core model.
In that case how do you know 7950X is the flagship and not the lower one?
It could be 7950X and 7950XT!jk


----------



## TheoneandonlyMrK (May 24, 2022)

defaultluser said:


> I know you folks keep  cracking  the ""Raptor Lake ill destroy this" whip like it's gong-out-of-style, but i think you're missing the large part:
> 
> *Raptor lake just doubles the e-cores (so as most real-world loads hit a scaling wall, Raptor lake will also hit that same scaling wall earlier than Zen 4 (8P + 16e versus 16 P!)*
> 
> it's going to take a perfectly-scaling application for Raptor Lake to rape 7950!


Indeed, unless they rein in power use, their desktop designs will follow their laptops, i.e. underutilized because POWER.
An i5 can beat an i7 in laptop land.

Go see.

As for this 15% ST /30% MT and pciex 5 all round, sounds good can't wait for the competition.

I wouldn't be buying gen 1 straight away though.

I do like the Intel fanbois' declaration of failure without the adequate facts available, or tests to validate their concerns.

Plus Raptor Lake could be late; Intel likes late these days, so much is still to be resolved.


----------



## R0H1T (May 24, 2022)

Valantar said:


> And remember: this is a tiny, low power iGPU, not one tuned for performance.


It can easily consume 5-15 W of power, more if it's overclocked! Fact is, it's hogging the "TDP" of at least 1-2 cores in there; everything else is irrelevant.


ModEl4 said:


> It could be 7950X and 7950XT!jk


Yes, and it could be the *7970X* (6)GHz edition; it's been what, 11 years since that infamous slip-up against Kepler


ModEl4 said:


> In that case how do you know 7950X is the flagship and not the lower one?



Hence the question mark.


----------



## ModEl4 (May 24, 2022)

R0H1T said:


> Yes and it could be *7970x* (6)Ghz edition, it's been what 11 years since that infamous slip up against Kepler


Those were the days; it lost in performance/W, but it had a more forward-looking architecture design vs Kepler (plus 3GB instead of 2GB).


----------



## btk2k2 (May 24, 2022)

Valantar said:


> Those two numbers are literally the same thing. 204s is 31% faster than 297s; 297s is 45% slower than 204s. They chose the more conservative wording, which uses the existing product as the baseline for comparison. That's the only sensible, good-faith comparison to make - especially as a "slower than" wording in marketing is _guaranteed_ to be flipped into a "faster than" wording by readers who don't consider how this changes the percentage. And that would be a shitshow for AMD.



No. That is wrong.

Completing a workload in 31% less time means the rate of work done is 45% higher.

faster / slower refers to a comparison of value / time (like frames per second; for example, 145fps is 45% faster than 100fps). Now AMD did not use faster / slower in the slide; they said it took 31% less time, which is the correct wording because they are doing a seconds / workload comparison, and the seconds for the Zen 4 rig were 31% less than for the 12900K rig. (297 * 0.69)

If you want to use faster / slower you need to calculate the rate which is easy enough, just do 1/204 to get the renders / s which is 0.0049. Do the same for the 12900K and you get 1/297 which is 0.0034

0.0049 is a 45% faster rate than 0.0034. 0.0034 is 31% slower than 0.0049.

On a TPU graph of rate with 12900K at 100% Zen 4 would be 145%. If Zen 4 was at 100% the 12900K would be at 69%. In both cases 12900K * 1.45 = zen 4 (100*1.45 = 145 and 69*1.45 = 100)

If you don't want to use rate you need to avoid faster / slower wording and stick to less time / more time wording where you can say that Zen 4 took 31% less time or the 12900K took 45% more time. These are simple calculations though so re-arranging them is pretty trivial.
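The two framings in this post can be checked with a few lines of Python. The times are the ones quoted in this thread; the variable names are just for illustration:

```python
# Seconds to finish the same render, as quoted in the thread.
t_zen4 = 204.0
t_12900k = 297.0

# "Less time" framing: Zen 4 needs ~31% less time than the 12900K.
time_saved = 1 - t_zen4 / t_12900k       # ~0.313

# "Rate" framing: Zen 4 completes renders at a ~45% higher rate.
rate_zen4 = 1 / t_zen4                   # renders per second, ~0.0049
rate_12900k = 1 / t_12900k               # renders per second, ~0.0034
rate_gain = rate_zen4 / rate_12900k - 1  # ~0.456

# Both figures describe the same ratio, just from opposite baselines:
assert abs((1 - time_saved) * (1 + rate_gain) - 1) < 1e-9

print(f"{time_saved:.1%} less time == {rate_gain:.1%} higher rate")
```

Which percentage you quote depends only on which product you pick as the baseline, which is the crux of this whole argument.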


----------



## ModEl4 (May 24, 2022)

btk2k2 said:


> Now AMD did not use faster / slower in the slide; they said it took 31% less time, which is the correct wording because they are doing a seconds / workload comparison, and the seconds for the Zen 4 rig were 31% less than for the 12900K rig. (297 * 0.69)


The problem is that they said it:


----------



## Wirko (May 24, 2022)

ratirt said:


> If IPC is a metric describing instructions per second which are a constant


I think you meant instructions per clock here, is that right?

However, the number of instructions that a given CPU core executes in one clock cycle is most certainly NOT a constant. Rather, it varies over a very wide range.


----------



## Punkenjoy (May 24, 2022)

The performance number of 15% in Cinebench R23 is underwhelming, as it does not allow AMD to catch up with Intel. But that is not the thing that disappoints me the most.

For me it's the Fall announcement. That is quite late in my opinion; it should have been released in early summer. Fall puts it very close to Raptor Lake, and AMD will have to truly deliver.

We still don't know what AMD has made, and many assumed that AMD went the Intel way with wider cores. They may have not. Cinebench R23 is not really cache/memory sensitive, so if they went to improve cache bandwidth and latency + increased the size + reworked the memory subsystem, their gains wouldn't really show up in CB R23. But they will show up in many other applications.

We will see; a reworked and improved memory subsystem will improve multithreaded scores and gaming.

But it's way too early to tell. I am not sure that AMD sandbagged that much. I think they went to design a CPU that will rock where they have the highest margin: the EPYC lineup. People say AMD is dead, but if AMD sucks for a gen or two on desktop while still destroying everything on server, the company will still thrive. They make so much more money on a chiplet in an EPYC CPU than in a Ryzen.

I will wait to see the review numbers, but right now I am neutral on the product. Not really hyped, but not really disappointed.


----------



## ModEl4 (May 24, 2022)

Punkenjoy said:


> But it's way too early to tell. I am not sure that AMD sandbagged that much.


I'm not sure either, but sandbagging + 6400 CL32 is an odd combination


----------



## HD64G (May 24, 2022)

ModEl4 said:


> The problem is that they said it:


That analogy means that Zen4 would need to become 31% slower to fall back to 12900K performance, BUT the 12900K needs to become 45% faster to match Zen4. So, will Raptor Lake make that jump? And is Zen4 as small a leap as many seem to believe? Btw, >15% single-core performance improvement vs Zen3 could mean that across 20 apps the minimum increase is 15%, not the average. Sandbagging for sure there imho.


----------



## Makaveli (May 24, 2022)

So much guessing and speculation in this thread. Wait until this is out and reviewed by TPU, people; calm down lol


----------



## Punkenjoy (May 24, 2022)

ModEl4 said:


> I'm not sure also, but sandbagging+6400 CL32 is an odd combination


Sandbagging by using an application that doesn't really benefit from the Zen 4 improvements. But yeah, I know it's still fishy. Maybe it's how AMD likes to win mindshare.

That is a huge departure from the era of finely crafted benchmarks that show the new product in its best light. AMD sandbagged Zen 3 a bit, but that much? I don't know.


----------



## btk2k2 (May 24, 2022)

ModEl4 said:


> The problem is that they said it:


AMD needs better proofreaders then. I thought it said 31% less time, but yeah, can't argue with a picture (well, you can, but it's stupid).


HD64G said:


> That analogy means that Zen4 would need to become 31% slower to fall back to 12900K performance, BUT the 12900K needs to become 45% faster to match Zen4. So, will Raptor Lake make that jump? And is Zen4 as small a leap as many seem to believe? Btw, >15% single-core performance improvement vs Zen3 could mean that across 20 apps the minimum increase is 15%, not the average. Sandbagging for sure there imho.


>15% was just in CB R23 ST. Zen 3 was +13% over Zen 2 in that same test scenario.

AMD are keeping true performance close to their chest.


----------



## ModEl4 (May 24, 2022)

HD64G said:


> That analogy means that Zen4 would need to become 31% slower to fall back to 12900K performance, BUT the 12900K needs to become 45% faster to match Zen4. So, will Raptor Lake make that jump? And is Zen4 as small a leap as many seem to believe? Btw, >15% single-core performance improvement vs Zen3 could mean that across 20 apps the minimum increase is 15%, not the average. Sandbagging for sure there imho.


Your guess is as good as mine, I don't know. The 12900K Blender comparison can have many interpretations; the ST Cinebench score, on the other hand, not so many. That the preproduction sample could hit 5.5GHz during actual gameplay but had trouble reaching even Zen 3 clocks in the ST Cinebench test? Not so likely. That it was more than 30% or 25% or 20% (perfectly fine round numbers) but AMD decided to just tease with a >15%? Also seems unlikely.
With nearly +10% frequency in 1T (and much more in nT with 170W), IPC would be just 5%; deduct 2-5% or whatever due to the fast memory they used, and we are talking a Zen->Zen+ IPC difference, which I refuse to believe.
For the sake of competition they had better deliver. I just want the pricing to be competitive (by that I mean: if in 1080p gaming the 7800X/7600X is similar to the 13700K/13600K in performance (+3% is similar imo) while losing by much higher margins in multithreaded tests like Cinebench, V-Ray, transcoding etc., they had better be cheaper than Raptor Lake...)


----------



## Dr_b_ (May 24, 2022)

Moving in the wrong direction with the MediaTek Wi-Fi chip


----------



## Bomby569 (May 24, 2022)

I've read they will all include an iGPU; that seems like a waste of die area and money for the consumer. I would prefer two versions, with and without, like Intel is doing.


----------



## Punkenjoy (May 24, 2022)

Bomby569 said:


> I've read they will all include an iGPU; that seems like a waste of die area and money for the consumer. I would prefer two versions, with and without, like Intel is doing.


Well, to please you they can always make SKUs where they disable the iGPU in the I/O die. That is what Intel does. Not sure what the real benefit of that is, to be honest. I prefer having it in case I need it (a GPU problem, for example) and just deactivating it in the BIOS.


----------



## ModEl4 (May 24, 2022)

Bomby569 said:


> I've read they will all include an iGPU; that seems like a waste of die area and money for the consumer. I would prefer two versions, with and without, like Intel is doing.


A while ago there were some rumors that it could be RDNA3 based. At the time I thought that if RDNA3 inherits the MI250's BF16/Int8/Int4 capabilities, then for AI applications (there was an AI mention from Lisa Su also) it would be suitable but weak, based on the 4 CU rumor; but the slide below suggests that all the AI acceleration comes from the Zen 4 core itself:





So probably RDNA2 based?
But on 6nm it wouldn't be an insignificant die-size addition, because it doesn't matter that it's just 4 CUs; all the unslice (ACEs, HWS etc.) + media engine + display engine + etc. is a lot of space.
I'm curious, if it's only 256 SP, how much faster it would be vs Raptor Lake at 1080p (if Raptor Lake is 256 EU, isn't it time to upgrade the damn thing? Since it isn't going to be ARC based, Intel should at least have made it a 1.6GHz 384EU design, since with DDR4 we had a 1.3GHz 256EU design with 14nm Rocket Lake)


----------



## Blaeza (May 24, 2022)

I shall be sticking with AM4 after an upgrade to a 5700X and a better GPU for many a year. You scallywags and your newfangled £500 DDR5 can test it out for me first, and I'll be getting on board at AM4 and DDR4 prices.


----------



## Valantar (May 24, 2022)

ratirt said:


> Well, I disagree with you, and I can say that extrapolating this (like you said) from a variety of applications which behave differently, of which there is such a vast number, is impossible as well.
> For example, one CPU is better than another in one application, and the other CPU is better than the first in a different application. If IPC is a metric describing instructions per second which are a constant, the outcome should be the same for every app, but it is not. So performance does not always equal IPC.
> For instance:
> 5800X and 5800X3D in games. Normally these are the same processors, but they behave differently in gaming and differently in office apps. So out of curiosity, am I talking here about IPC or performance? Somehow, you say that IPC has to be measured across a variety of benchmarks to be valid. I thought that was the general performance of a CPU across the most-used applications.


But that's the thing: IPC in current complex CPU architectures is not a constant. It is a constant in very simple, in-order designs. In any out-of-order design with complex instruction queuing, branch prediction, instruction packing and more, the meaning of "IPC" shifts from "count the execution ports" to "across a representative selection of diverse workloads, how many instructions can the core process per clock". The literal, constant meaning of "instructions per clock" is irrelevant in any reasonably modern core design, as a) they can process _tons_ at once, and b) the question shifts from base hardware execution capabilities to queuing and instruction handling, keeping the core fed.

That is also why caches and even RAM have significant effects on anything meaningfully described as "IPC" in a modern system, as cache misses, RAM speed, and all other factors relevant to keeping the core fed play into the end result. That is why you need a representative selection of benchmarks to measure IPC - because _there is no such thing_ as constant IPC in modern CPUs, nor is there any such thing as an application that linearly loads every execution port in a way that demonstrates IPC in an absolute sense.


R0H1T said:


> It can easily consume 5-15W power, more if it's overclocked! Fact is it's hogging "TDP" of at least 1-2 cores in there, everything else is irrelevant.


... but you're arguing that this power usage is sufficient to make it likely that these are Zen4c cores and not Zen4 - and those cores, for reference, allow for a 33% increase in core counts in the same power envelope for servers (or, more usefully for this comparison: Zen4 allows for 25% fewer cores per power envelope versus 4c). So, if the iGPU were to cause MSDT CPUs to move to Zen4c, it would need to consume _far_ more than the power you're mentioning here - these are likely 105W (or higher!) CPUs, after all. For your logic to apply, the iGPU would need to consume as much power as ~4 Zen4 cores, not 1-2. It just doesn't add up. There is literally zero reason to assume these are Zen4c cores.


btk2k2 said:


> No. That is wrong.
> 
> Completing a workload in 31% less time means the rate of work done is 45% higher.


.... yes, but we're not talking about _rate of work_, we're talking about _time to finish_. Completing a task in 31% less time means you finish 31% faster. Thus you are 31% faster. Right?


btk2k2 said:


> faster / slower refers to a comparison of value / time (like Frames Per Second for example 145fps is 45% faster than 100fps). Now AMD did not use faster / slower in the slide they said it took 31% less time which is the correct wording because they are doing a seconds / workload comparison and the seconds for the Zen 4 rig was 31% less than the 12900K rig. (297 * 0.69)


They did say "faster", which is a perfectly valid wording for this comparison. There is absolutely nothing explicitly and exclusively linking the word "faster" only to a rate. If a sprinter finishes the 100m dash in 9 seconds and another in 10 seconds, will you be comparing their rate of movement? No, you compare their time to finish. And the one finishing in 9 seconds is then 1 second faster than the other, or 10% faster if we for some reason insist on using percentages.


btk2k2 said:


> If you want to use faster / slower you need to calculate the rate which is easy enough, just do 1/204 to get the renders / s which is 0.0049. Do the same for the 12900K and you get 1/297 which is 0.0034


This is an utterly arbitrary delineation with no root in the meaning of the word "faster". These words apply to literally any measure of speed you want, in any comparison you want. In this case, the use case is "time to finish a given workload", in which lower time expenditure thus is faster.


btk2k2 said:


> 0.0049 is a 45% faster rate than 0.0034. 0.0034 is 31% slower than 0.0049.


... again with the rates. There is no rate being discussed here. Check the damn slide. It's _comparing time to finish a workload_, _not workload processing per unit of time_. These are two different things that can be calculated from the same data, but only the former is what AMD used in their marketing, and transforming that to a rate to prove a point is an immense exercise in pedantic bad-faith arguing and goal post shifting.
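The two readings of the same slide data reduce to a few lines of arithmetic (a sketch using the 204 s and 297 s render times quoted in this thread):

```python
# Render times from AMD's Blender demo (seconds per render)
zen4_s, i12900k_s = 204.0, 297.0

less_time = 1 - zen4_s / i12900k_s   # time-to-finish view
rate_gain = i12900k_s / zen4_s - 1   # renders-per-second view

print(f"{less_time:.1%} less time")    # 31.3% less time
print(f"{rate_gain:.1%} higher rate")  # 45.6% higher rate
```

Same two numbers, two different (and individually correct) percentages, which is the entire disagreement in a nutshell.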


btk2k2 said:


> If you don't want to use rate you need to avoid faster / slower wording and stick to less time / more time wording where you can say that Zen 4 took 31% less time or the 12900K took 45% more time. These are simple calculations though so re-arranging them is pretty trivial.


Sorry, but what you're saying here is utter nonsense. There is absolutely nothing in the word "faster" that says it only applies to a rate. Please stop this absurd exercise in arbitrarily delimiting the meaning of words. You're welcome to have your own private definition, but you can't force that onto the world - that's not how language works.


----------



## R0H1T (May 24, 2022)

Valantar said:


> but you're arguing that this power usage is sufficient to make it likely that these are Zen4c cores and not Zen4


No I'm arguing that if these are the big cores, zen4 (with more cache) or zen4D whatever they'd call them, then pairing them with an IGP makes no sense. Because these will probably replace the 16c/32t 5950x at the top end with a 24c/48t(?) chip given Genoa is already 96 cores? So I expect the flagship Ryzen MSDT to not have an IGP ~ that's it, I'm totally guesstimating in this regard & so are you btw 


> Which gets wasted with an IGP, try again
> 
> I'm guesstimating the bigger cache variants would likely ditch the IGP with massive (L3?) caches near the cores or on the IoD, maybe even an L4 cache.


While you were saying there'd be thinner(lighter?) cores with even less cache & higher density?

Now admittedly there could be 3 variants, with x3d versions also thrown in, but that'd be even more bizarre as far as I'm concerned!


----------



## Valantar (May 24, 2022)

R0H1T said:


> No I'm arguing that if these are the big cores, zen4 (with more cache) or zen4D whatever they'd call them, then pairing them with an IGP makes no sense. Because these will probably replace the 16c/32t 5950x at the top end with a 24c/48t(?) chip given Genoa is already 96 cores? So I expect the flagship Ryzen MSDT to not have an IGP ~ that's it, I'm totally guesstimating in this regard & so are you btw


Yes, we're all speculating, but you brought Zen4c into this as a counterargument to MSDT getting an iGPU, which ... sorry, I just don't see the connection. The iGPU is a low core count offering that they're adding because the massive node improvement for the IOD lets them implement it relatively cheaply, and it adds a hugely requested feature that will also make these chips _much_ more palatable to the very lucrative, high volume OEM market. OEMs want the option for dGPU-less builds, and this will open up a whole new market for AMD: enterprise PCs/workstations that don't come with a dGPU in their base configuration. And of course consumers have also been saying how nice it would be for a barebones iGPU in their CPUs for troubleshooting or basic system configs ever since Ryzen came out. They're just responding to that. And I would be shocked if they spun out a new, smaller IOD without the iGPU for their high end MSDT chips, as that would be extremely expensive for a very limited market.


----------



## MarsM4N (May 24, 2022)

ratirt said:


> Well i disagree with you and I can say that extrapolating this (like you said) from a variety of application which behave differently and there is such a vast number of them is impossible as well.
> For example, 1 cpu is better than another in one application and the another cpu is better than the first one in a different application. If IPC is a metric describing instructions per second which are a constant, the outcome should be the same for every app but it is not. So performance does not always equal IPC.
> For instance.
> 5800x and 5800x3d in games. Normally these are the same processors but they behave differently in gaming and differently in office apps. So out of curiosity, am I talking here about IPC or a performance? Somehow, you say that IPC has to be measured across variety of benchmarks to be valid. I thought that is general performance of a CPU across mostly used applications.



Could even be that they bring out *"3D"* versions for gamers & *"non-3D"* versions for prosumers. It would make a lot of sense.



Bomby569 said:


> I've read they will all inclued igpu, that seems like a waste of die and money for the consumer. I would prefer 2 versions with and without like Intel is doing.



An iGPU doesn't increase the chip price that much. On the plus side you always have a _"Backup GPU"_ on hand, and the resale value will be better (e.g. it can be used for HTPCs).

Would be great if they one day included an *"Automatic Toggle Mode"*, so that the APU runs the desktop/video applications & the (completely shut down) GPU only turns on for gaming.
Now that would be a real killer feature.


----------



## btk2k2 (May 24, 2022)

Valantar said:


> .... yes, but we're not talking about _rate of work_, we're talking about _time to finish_. Completing a task in 31% less time means you finish 31% faster. Thus you are 31% faster. Right?
> 
> They did say "faster", which is a perfectly valid wording for this comparison. There is absolutely nothing explicitly and exclusively linking the word "faster" only to a rate. If a sprinter finishes the 100m dash in 9 seconds and another in 10 seconds, will you be comparing their rate of movement? No, you compare their time to finish. And the one finishing in 9 seconds is then 1 second faster than the other, or 10% faster if we for some reason insist on using percentages.
> 
> ...



Nope, pretty much all wrong.

Frame Times. Is 8.33ms 50% faster than 16.67ms? No, it is 100% faster because 16.67ms is 60FPS and 8.33ms is 120FPS.

AMD gave us the data in the form of a render time.

You do correctly state.



> These words apply to literally any measure of speed you want



Do you want the definition of speed? It has been given before but here it is again. Speed = Distance / Time. With frame times you get Speed = Distance (1 frame) / Time (8.33ms) = 120FPS.
With render times you get Speed = Distance (1 render) / Time (204s) = 0.0049 RPS.
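The frame-time conversion can be sketched in a couple of lines (8.33 ms taken as the exact frame time for 120 FPS):

```python
# Frame times in milliseconds and their FPS equivalents
slow_ms, fast_ms = 16.67, 8.33
slow_fps = 1000 / slow_ms   # ~60 FPS
fast_fps = 1000 / fast_ms   # ~120 FPS

print(f"{fast_fps / slow_fps - 1:.0%} higher frame rate")  # 100%
print(f"{1 - fast_ms / slow_ms:.0%} less frame time")      # 50%
```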

As for the definition of faster.



> adjective
> comparative adjective: faster
> 
> 1.
> ...



For A to be faster than B it needs to have a higher speed and from the data AMD gave you get speed by doing what I have shown above.

It is impossible to not have Rate / Speed involved because otherwise the equation breaks.

AMD also got it wrong. This is why many places will convert 'lower is better' times to 'higher is better' speeds and then compare because the math is more intuitive when doing your ratios.


----------



## Bomby569 (May 24, 2022)

MarsM4N said:


> Would be great if they one day included an *"Automatic Toggle Mode"*, so that the APU runs the desktop/video applications & the (completely shut down) GPU only turns on for gaming.
> Now that would be a real killer feature.



That is a source of problems on laptops. And I don't know if you'd save anything; GPUs are very frugal these days, and if you have a 0 dB mode it isn't much of a difference. Some cents in electricity.


----------



## Valantar (May 24, 2022)

btk2k2 said:


> Nope, pretty much all wrong.
> 
> Frame Times. Is 8.33ms 50% faster than 16.67ms? No, it is 100% faster because 16.67ms is 60FPS and 8.33ms is 120FPS.


Again: just because the same information presented one way has one relative percentage difference does not make that relative percentage difference transferable to other functions of the same data. When you perform calculations on the base data, you change the basis for comparison fundamentally, as you are no longer working with the same representation of the data. This is so fundamentally basic that I'm frankly shocked you keep harping on it.


btk2k2 said:


> AMD gave us the data in the form of a render time.
> 
> You do correctly state.
> 
> ...


This is an impressive amount of nonsensical pedantry, I have to say. "Faster" with a given, fixed workload - such as one render of the model in question - means _takes shorter time to finish_. That meaning is fully compatible with a dictionary definition of "faster". Oh, and I find it _highly_ interesting that you're clearly quoting _one_ listing of _several numbered meanings_ from a dictionary definition to prove your argument. I wonder what those subsequent entries might say? Also, isn't the core of your argument that there is only one acceptable understanding of the word? Come on, you could at least try to not be that blatant in twisting things. I mean, this is quite simple:




Is your definition valid? Yes. Is mine? Again: yes. Both are equally true. There is literally nothing that says "faster" - or "speed" - can only mean a rate of movement/change. That is pure nonsense.

This is a question of different tools for different uses. Rates are widely comparable and generalizable - 120 km/h is the same whether it's on an F1 track or in your grandma's Corolla. Rates are great when the workload is either unknown or the point is comparisons across workloads. Rates are useless and overly complicated when the workload is clearly defined and delineated - the time it takes your grandma's Corolla to get to the grocery store vs. the time it would take an F1 driver is best presented as _time to completion_, not as their average speed during that drive. Time to completion is a much better and more intuitive representation of the relative difference within the same workload than adding a layer of abstraction through converting the concrete data into a rate.


btk2k2 said:


> It is impossible to not have Rate / Speed involved because otherwise the equation breaks.


A rate is one way of presenting a speed - a broad and general one, either an average or a momentary measurement. Time spent to finish a fixed task is another method of presenting a speed, which presents a broad representation that isn't an average but rather a representation of the response to that specific task. If an F1 racer wins a race, what were they, compared to the competition? _Faster_. Yet results are presented in time to finish, not average speed (rate, km/h or mph) for the race. _Time to finish a given task is just as valid a measure of speed as any rate is_.


btk2k2 said:


> AMD also got it wrong. This is why many places will convert 'lower is better' times to 'higher is better' speeds and then compare because the math is more intuitive when doing your ratios.


It might be more intuitive, but it isn't any more correct or true. Also, converting "lower is better" data into the opposite is a question of _readability_, not about math. It's about data presentation, not accuracy or truth. Please stop projecting your own biases onto others - just because you have a strong preference for speed presented as rates doesn't mean the world has to conform to that, nor that others have to agree. IMO, for a fixed workload, presenting it as a rate is confusing and misleading verging on the nonsensical. A rate is only meaningful if the unit measured is clearly defined, easily understood, and makes sense in the overall context - km/h, m/s, sausages per 30 minutes in a sausage eating contest, whatever. If it isn't - such as presenting fractions of an arbitrary workload as the unit, as in this case - presenting it as a rate becomes utterly meaningless. It's the completion of the full task that matters, not the rate of fractional work performed per second. You're welcome to disagree with that sentiment, but please stop presenting your opinion as if it were somehow a universal truth.



MarsM4N said:


> Would be great if they one day included an *"Automatic Toggle Mode"*, so that the APU runs the desktop/video applications & the (completely shut down) GPU only turns on for gaming.
> Now that would be a real killer feature.


AFAIK W11 has this feature already - it has a toggle for setting a "high performance" and "[something, can't remember, don't think it's "low performance"] GPU, which should allocate the render workload to the appropriate GPU depending on the task at hand. It might not correctly recognize and categorize all applications, but you can override that manually.



Bomby569 said:


> That is a source of problems on the laptops. And i don't know if you saved anything, gpu's are very frugal this days, and if you have a 0db it isn't much of a difference. Some cents in electricity.


An iGPU consuming anything from a fraction of a watt to a handful of watts will always be more efficient than powering up a dGPU - even if current dGPUs can get very efficient at idle, you still need to power a whole other piece of silicon, its VRAM, the VRMs, and so on. It won't be many watts, but the difference will be real. And why waste energy when you can just _not_ waste energy?


----------



## MarsM4N (May 25, 2022)

Valantar said:


> AFAIK W11 has this feature already - it has a toggle for setting a "high performance" and "[something, can't remember, don't think it's "low performance"] GPU, which should allocate the render workload to the appropriate GPU depending on the task at hand. It might not correctly recognize and categorize all applications, but you can override that manually.



Ohh, really? Great! Now that's finally a feature worth upgrading to W11 for.


----------



## Wirko (May 25, 2022)

Valantar said:


> But that's the thing: IPC in current complex CPU architectures is not a constant. It is a constant in very simple, in-order designs. In any out-of-order design with complex instruction queuing, branch prediction, instruction packing and and more, the meaning of "IPC" shifts from "count the execution ports" to "across a representative selection of diverse workloads, how many instructions can the core process per clock". The literal, constant meaning of "instructions per second" is irrelevant in any reasonably modern core design as a) they can process _tons_ at once, but b) the question shifts from base hardware execution capabilities to queuing and instruction handling, keeping the core fed.
> 
> That is also why caches and even RAM has significant effects on anything meaningfully described as "IPC" in a modern system, as cache misses, RAM speed, and all other factors relevant to keeping the core fed play into the end result. That is why you need a representative selection of benchmarks to measure IPC - because _there is no such thing _as constant IPC in modern CPUs, nor is there any such thing as an application that linearly loads every execution port in a way that demonstrates IPC in an absolute sense.


That's a nice explanation. May I add that IPC is not constant even in very simple microprocessors. The Zilog Z80 and the Motorola 6800 do not have a constant execution time for all instructions. In the 80386, IPC also becomes unpredictable: 32-bit integer multiplication takes 9-38 clock cycles, depending on the actual data being multiplied, and many simpler instructions take two cycles.


----------



## ModEl4 (May 25, 2022)

Although I really didn't want to get involved, here are my 2 cents:

The common use of faster is to denote that A has a higher speed than B,
while the common use of quicker is to denote that A completes something in a shorter time than B.
If AMD had used quicker it would be fine;
since they used faster, they involve speed, which means they involve rate (speed: the rate at which someone or something moves or operates or is able to move or operate) [or more strictly, in classical kinematics: speed is the ratio of the distance traveled by an object to the time required to travel that distance]. So you see, fast/faster essentially refers to a fraction, with time being just one of the two numbers; it's always the denominator, and being the denominator it gives 45%, not 31%, hence the logic gap in AMD's wording.
Edit:
I just saw the last update regarding 170W being the absolute max limit (probably 125W typical vs 105W); that's great news and a clear advantage vs Intel!
I'm more curious about the performance/W comparison between the 13400 and 7600(X?) 65W parts, which will probably be much closer.


----------



## mechtech (May 25, 2022)

Well, I kind of knew the 29% rumour was to be taken with a dash of salt: https://www.techpowerup.com/278321/amd-zen-4-reportedly-features-a-29-ipc-boost-over-zen-3

15% (well, "up to") isn't too bad, but as always, wait for the reviews.

Actually, still waiting for W1zz to review the 5500, 5600, and 5700X CPUs.


----------



## btk2k2 (May 25, 2022)

Valantar said:


> Again: just because the same information presented one way has one relative percentage difference does not make that relative percentage different transferable to other functions of the same data. When you perform calculations on the base data, you change the basis for comparison fundamentally, as you are no longer working with the same representation of the data. This is so fundamentally basic I'm frankly shocked that you keep harping on this.



Incorrect.

Speed is a fundamental property of this kind of data, just like area is a fundamental property of an object. In this case, just because we were not given speed does not mean we cannot calculate it, because we do have the number of workloads completed and the time to complete them (1, and 204 or 297 respectively). Just like if we have a rectangle and are given the length and width, we can calculate the area. Just because we do a calculation does not change whether it is a fundamental property or not.



Valantar said:


> This is an impressive amount of nonsensical pedantry, I have to say. "Faster" with a given, fixed workload - such as one render of the model in question - means _takes shorter time to finish_. That meaning is fully compatible with a dictionary definition of "faster". Oh, and I find it _highly_ interesting that you're clearly quoting _one_ listing of _several numbered meanings_ from a dictionary definition to prove your argument. I wonder what those subsequent entries might say? Also, isn't the core of your argument that there is only one acceptable understanding of the word? Come on, you could at least try to not be that blatant in twisting things. I mean, this is quite simple:
> 
> 
> 
> ...



Your definition is valid in the context of X is 10s faster than Y. We were not given X is 10s faster than Y though. We were given Zen 4 is 31% faster than 12900K but we were also given the time to complete 1 render and when you work out the speed you realise that actually Zen 4 is 45% faster than the 12900K.

The definition I provided is the correct one for describing A is x% faster than B.



Valantar said:


> This is a question of different tools for different uses. Rates are widely comparable and generalizeable - 120 km/h is the same whether if it's on an F1 track or in your grandma's Corolla. Rates are great when the workload is either unknown or the point is comparisons across workloads. Rates are useless and overly complicated when the workload is clearly defined and delineated - the time it takes your grandma's Corolla to get to the grocery store vs. the time it would take an F1 driver is best presented as _time to completion_, not as their average speed during that drive. Time to completion is a much better and more intuitive representation of the relative difference within the same workload than adding a layer of abstraction through converting the concrete data into a rate.
> 
> A rate is one way of presenting a speed - a broad and general one, either an average or a momentary measurement. Time spent to finish a fixed task is another method of presenting a speed, which presents a broad representation that isn't an average but rather a representation of the response to that specific task. If an F1 racer wins a race, what were they, compared to the competition? _Faster_. Yet results are presented in time to finish, not average speed (rate, km/h or mph) for the race. _Time to finish a given task is just as valid a measure of speed as any rate is_.



They are indeed presented in time to finish, as in "Max was 13.072s faster than Perez". We do not get "Max was x% faster than Perez", but when you do work out average speed and do the comparison, you find out that Max was actually Y% faster than Perez. Also, a 13s delta over a 5840s time scale is a fractional %, so displaying the information that way would be unusable.
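As a sanity check on that F1 example (the 13.072 s margin over a roughly 5,840 s race, figures as given in the post):

```python
# Winning margin expressed as a relative time difference
delta = 13.072   # winning margin in seconds
total = 5840.0   # approximate total race time in seconds

pct_faster = delta / total
print(f"{pct_faster:.2%}")  # a tiny fraction of a percent
```

Which backs up the point that a 13 s delta is a fractional percentage, and why race results are reported in seconds rather than in %.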



Valantar said:


> It might be more intuitive, but it isn't any more correct or true. Also, converting "lower is better" data into the opposite is a question of _readability_, not about math. It's about data presentation, not accuracy or truth. Please stop projecting your own biases onto others - just because you have a strong preference for speed presented as rates doesn't mean the world has to conform to that, nor that others have to agree. IMO, for a fixed workload, presenting it as a rate is confusing and misleading verging on the nonsensical. A rate is only meaningful if the unit measured is clearly defined, easily understood, and makes sense in the overall context - km/h, m/s, sausages per 30 minutes in a sausage eating contest, whatever. If it isn't - such as presenting fractions of an arbitrary workload as the unit, as in this case - presenting it as a rate becomes utterly meaningless. It's the completion of the full task that matters, not the rate of fractional work performed per second. You're welcome to disagree with that sentiment, but please stop presenting your opinion as if it is somehow an universal truth.



The display of it is correct either way.

The description of a performance delta in terms of X% faster is easier to understand with bigger = better charts but regardless of what kind of charts are being used if using A is x% faster than B the relative % difference needs to be correct. AMD did not get it correct for the formulation of the sentence on their slide.

The universal truth here is that speed = distance / time. The English language truth here is that faster, when referring to relative % differences, means speed, and then you can refer to the universal truth to calculate it. From there you can compare the speed of 2 or more things.


----------



## MarsM4N (May 25, 2022)

ModEl4 said:


> Although I really didn't want to get involved, below my 2 cents:
> 
> The common use for faster is to denote that A has higher speed than B
> While the common use of Quicker is to denote that A completes something at a shorter time than B
> ...



*Faster* vs. *Quicker*, terms drag racers are very familiar with. 

The winner is not who's faster, but the one who gets from point A to point B the quickest.




ModEl4 said:


> I just saw the last update regarding  170W being the absolute max limit (probably 125W typical vs 105W), that's great news and clear advantage vs Intel!
> I'm curious more for the performance/W comparison between 13400/7600(X?) 65W parts that will be much closer probably.



Good point. Intel might still hold the productivity crown with its brute-force power boost, but for the average Joe, gaming performance & performance/W are more important.

Especially now with the rising energy costs.


----------



## Mussels (May 25, 2022)

TheLostSwede said:


> Hopefully not, as if the stability is as bad as it was for both X370 and X570, AMD is going to get a lot of unhappy customers.


99% of people had no issues, with the exception of the funky PCI-E/USB 3 related reset bugs that took time to diagnose (But also took some uncommon setups to trigger, like PCI-E risers and high power draw USB 3.x devices in the same system)


----------



## TheLostSwede (May 25, 2022)

Mussels said:


> 99% of people had no issues, with the exception of the funky PCI-E/USB 3 related reset bugs that took time to diagnose (But also took some uncommon setups to trigger, like PCI-E risers and high power draw USB 3.x devices in the same system)


That's simply not true. There were a lot of UEFI/AGESA issues early on, on both platforms, some took longer to solve than others. Much of which was memory related, but X570 had the boost issues and a lot of people had USB 2.0 problems as well. 

As I said, it mostly got resolved after a few months, but some things took quite a while for AMD to figure out.


----------



## ratirt (May 25, 2022)

Valantar said:


> But that's the thing: IPC in current complex CPU architectures is not a constant. It is a constant in very simple, in-order designs. In any out-of-order design with complex instruction queuing, branch prediction, instruction packing and and more, the meaning of "IPC" shifts from "count the execution ports" to "across a representative selection of diverse workloads, how many instructions can the core process per clock". The literal, constant meaning of "instructions per second" is irrelevant in any reasonably modern core design as a) they can process _tons_ at once, but b) the question shifts from base hardware execution capabilities to queuing and instruction handling, keeping the core fed.
> 
> That is also why caches and even RAM has significant effects on anything meaningfully described as "IPC" in a modern system, as cache misses, RAM speed, and all other factors relevant to keeping the core fed play into the end result. That is why you need a representative selection of benchmarks to measure IPC - because _there is no such thing _as constant IPC in modern CPUs, nor is there any such thing as an application that linearly loads every execution port in a way that demonstrates IPC in an absolute sense.


Of course it is not a constant. This is supposedly the outcome. The instructions per second are not a constant either. How can you measure something that changes depending on the environment or use case? Imagine the speed of light or the electric charge not being a constant. That is why all measurements are wrong no matter how you measure it, since you can't measure it correctly either way. So all are wrong, but at the same time they are some sort of indication. You can't say this one is wrong and this one is correct. IPC is some sort of enigma that people cling to, like dark matter. What we were discussing earlier, and what you have been trying to explain, is not IPC but general performance across the board: a variety of benchmarks perceived as common or as a standard to showcase the workload and performance of a processor.


----------



## DeathtoGnomes (May 25, 2022)

W1zzard said:


> Just rechecked the video. Highest was 5520 MHz, news post has been updated


This claims AMD did no overclocking. Maybe doable on a non-X chip? Can't wait to see reviews.









AMD confirms Ryzen 7000 "5.5 GHz demo" did not involve overclocking - VideoCardz.com
AMD clarifies their 5.5 GHz demo was running stock settings. AMD's Robert Hallock and Frank Azor took part in PCWorld's 'The Full Nerd' interview where they answered some burning questions regarding the Computex keynote with Ryzen 7000 CPUs showcase. Among the disclosure of new AM5 platform and...
(videocardz.com)


----------



## Valantar (May 25, 2022)

MarsM4N said:


> *Faster* vs. *Quicker*, terms drag racers are very familiar with.
> 
> The winner is not who's faster, but the one who get's the quickest from point A to point B.


That's an excellent illustration of exactly what we're discussing here: that specific contexts engender specific meanings of words, often in order to highlight specific differences. What this discussion misses is that such specific meanings do not invalidate the more general meanings of the same words, especially not outside of those contexts - and that in this context, there is no directly applicable sub-meaning that differentiates "faster" from other terms. Which feeds back into your example: the quicker car _is_ still faster in a general sense, after all - it reaches the finish line first; it finishes the task first. The specific definition you're referring to is meant to highlight that if what you mean by "faster" is "reaches the highest top speed", that might not be the same as "finishes the race first". This again illustrates a similar issue to what we're discussing here: that a general measure of a rate - such as mph / km/h - might not give an accurate representation of overall performance in a given workload - such as racing down a quarter mile stretch of road.



btk2k2 said:


> Incorrect.
> 
> Speed is a fundamental property of this kind of data. Just like area is a fundamental property of an object. In this case just because we were not given speed does not mean we cannot calculate it because we do have the number of workloads completed and the time to complete them. (1 and 204 or 297 respectively). Just like if we have a rectangle and are given the length and width we can calculate the area. Just because we do a calculation does not change weather it is a fundamental property or not.


But ... the workload here is essentially arbitrary. Which means that your "fundamental property" of speed is _arbitrarily defined_. That in and of itself makes your strict definition meaningless. On top of that, we are not operating within the realm of physics here. We are operating within the broader world, and general, everyday language - which does not conform to such strict definitions. Ever. "Faster" in everyday language, as the dictionary definition quoted above demonstrated (and as I suspect the one you yourself quoted very selectively also showed), has _many possible meanings_.


btk2k2 said:


> Your definition is valid in the context of X is 10s faster than Y. We were not given X is 10s faster than Y though. We were given Zen 4 is 31% faster than 12900K but we were also given the time to complete 1 render and when you work out the speed you realise that actually Zen 4 is 45% faster than the 12900K.
> 
> The definition I provided is the correct one for describing A is x% faster than B.


No, it is correct if that percentage relates to a rate. It does not - an arbitrarily defined rate can be calculated from the data given, but the rate was not the data given, nor was the percentage based on a rate. The percentage was based on improvement (towards zero, which is obviously unreachable) compared to a known comparison, for which both data points were in seconds to complete one unit of work.

It doesn't matter that you can calculate a rate from this: the rate wasn't in the data presented, nor was a rate discussed by anyone involved. The use of the word "faster" here was clearly indicative of an improvement in time to finish a single workload, and not an increased rate of work per second.


btk2k2 said:


> They are indeed presented in time to finish. As in Max was 13.072s faster than Perez. We do not get Max was x% faster than Perez but when you do work out average speed and do the comparison find out that actually Max was Y% faster than Perez. Also a 13s delta over a 5840s time scale is a fractional % so displaying such information in that way would be unusable.


Again: for your definition to be true, a significant effort in manipulating the data is required in order to change it into a unit of measure that is not represented in the base data. Yes, you get time to finish for the winner + additional time for those following (which then adds up to their total time through simple addition). Can you calculate their average rate of movement from that? Absolutely! Just like you can calculate a whole host of other things. None of that would invalidate anyone starting with the base data and saying "Max finished ~0.22% faster than Perez" - that would be an entirely accurate statement. The only reason why this isn't done in such situations is that the percentage difference would be minuscule and thus meaningless in terms of effectively communicating the difference. The base unit of this data is not velocity, it is time to finish a known workload. You can _produce_ an average velocity from that, but that is fundamentally irrelevant to the argument of whether "finishing X seconds earlier" or transforming that directly into a percentage are valid applications of the word "faster" or not. They are. Unless you are a physicist writing an academic paper, "faster" is perfectly applicable to "finished X% or Y seconds earlier".


btk2k2 said:


> The display of it is correct either way.


... so why on earth have you been spending several pages arguing that AMD's application of it is wrong?


btk2k2 said:


> The description of a performance delta in terms of X% faster is easier to understand with bigger = better charts but regardless of what kind of charts are being used if using A is x% faster than B the relative % difference needs to be correct. AMD did not get it correct for the formulation of the sentence on their slide.


But they did. "Faster" perfectly encapsulates what they presented. And due to the presentation _not_ being a chart or a graph, but a written sentence accompanied by two illustrated data points, the confusion you're referring to _just doesn't exist_. The problem you're bringing up has some validity, but it isn't applicable to this situation.


btk2k2 said:


> The universal truth here is that speed = distance / time. The English language truth here is that "faster", when referring to relative % differences, means speed, and then you can refer to the universal truth to calculate it. From there you can compare the speed of 2 or more things.


This is, once again, just not true. "Faster" in everyday language doesn't have a single universal meaning. It has many different meanings - which you yourself have illustrated. That you insist on the primacy of one of those meanings regardless of context doesn't say anything meaningful about the application of the word here, but rather it says something about an inflexible and unrealistic approach to the use of language. Exceptionally few words have singular, fixed meanings, and while many do so in specific contexts (lord knows I use _a lot_ of terms in my work that mean _entirely_ different things in my application than in colloquial language), you cannot argue that such contextual meanings are universal and overrule all other possible meanings. That isn't how language works.


----------



## ModEl4 (May 25, 2022)

PCWorld had a nice interview with Robert Hallock and Frank Azor, interesting questions from PCWorld and sensible answers from AMD team, good stuff.


----------



## DeathtoGnomes (May 25, 2022)

Valantar said:


> That's an excellent illustration of exactly what we're discussing here: that specific contexts engender specific meanings of words, often in order to highlight specific differences. What this discussion misses is that such specific meanings do not invalidate the more general meanings of the same words, especially not outside of those contexts - and that in this context, there is no directly applicable sub-meaning that differentiates "faster" from other terms. Which feeds back into your example: the quicker car _is_ still faster in a general sense, after all - it reaches the finish line first; it finishes the task first. The specific definition you're referring to is meant to highlight that if what you mean by "faster" is "reaches the highest top speed", that might not be the same as "finishes the race first". This again illustrates a similar issue to what we're discussing here: that a general measure of a rate - such as mph / km/h - might not give an accurate representation of overall performance in a given workload - such as racing down a quarter mile stretch of road.


I got a headache reading this...
So, by this logic, Intel being quicker than AMD in the past should not have meant losing the race in Blender or any other application that Intel lost in.



Valantar said:


> The only reason why this isn't done in such situations is that the percentage difference would be minuscule and thus meaningless in terms of effectively communicating the difference.


I believe that's how you use that; it's called the Margin of Error.



Valantar said:


> you cannot argue that such contextual meanings are universal and overrule all other possible meanings. That isn't how language works.


I agree. Words really have no _universal_ meaning; they have accepted meanings. Webster saw to that: the first dictionaries had a significant amount of slang definitions, which were later changed.

Using this logic, fanboi definitions say:
AMD good
Intel bad
Intel quicker
AMD faster

Simple!


----------



## Assimilator (May 25, 2022)

@Valantar please, please, _please_ stop wasting your time on feeding the trolls. For your own sanity, I beg you.



Valantar said:


> It's not the same as ADL - ADL has 5.0 x16 PEG and 5.0 for the chipset (IIRC), but no 5.0 m.2. Not that 4 lanes less matters much, but ADL prioritizing 5.0 for GPUs rather than storage never made sense in the first place - it's doubtful any GPU in the next half decade will be meaningfully limited by PCIe 4.0 x16.


ADL is 16x 5.0 lanes for GPU + 4x 4.0 lanes dedicated to M.2 + an effective additional 4x 4.0 lanes that are dedicated to the chipset via the proprietary DMI link. So it's effectively 24 lanes of PCIe from the CPU, which matches Zen 4. Yes, I agree that in terms of *bandwidth* Zen 4 is far ahead, but lane count is more important than bandwidth IMO.


Valantar said:


> ... is that any more likely than them buying a shitty $20 AM5 tower cooler? There are plenty of great AM4 coolers out there after all. Retaining compatibility reduces waste in a meaningful and impactful way. You don't fix people being stupid by forcing obsolescence onto fully functional parts.


I'm not arguing that allowing people to reuse existing coolers is a bad thing, I'm merely noting that there will inevitably be those who try to use coolers rated for 65W on 170W parts and blame AMD as a result. Intel's approach has its own downsides, although I imagine the cooler manufacturers like Intel a bit more.

I'm also a little sceptical of the claimed compatibility; surely the dimensions (particularly Z-height) of the new socket and chip are different enough to make a meaningful difference?


Valantar said:


> X670E is literally marketed as "PCIe 5.0 everywhere", providing 24 more lanes of 5.0 (and, presumably, another 4 of which go to the CPU interconnect, leaving a total of 40). X670 most likely retains the 5.0 chipset uplink even if it runs its PCIe at 4.0 speeds. The main limitation to this is still the cost of physically implementing this amount of high speed IO on the motherboard, as that takes a lot of layers and possibly higher quality PCB materials.


I'm aware that HSIO is expensive, especially PCIe 5.0, which is why I was hoping the CPU and chipsets would be putting out more lanes. My main concern is that the lowest-end chipset will, as usual, get the lowest PCIe version and number of lanes, and manufacturers will thus not bother with USB4 or USB-C in SKUs using said chipset. Given that I've already seen a few boards and not even the highest-end of them have more than 2 type-C ports on the rear panel, I'll withhold judgement until actual reviews drop.


Valantar said:


> Several announced motherboards mention it explicitly, so no need to worry on that front. The only unknown is whether it's integrated into the CPU/chipset or not. Support is there.


Thanks, although I'd much prefer for it to be platform-native as opposed to relying on third-party controllers. Experience has shown that those are generally, to put it bluntly, shit (I'm looking at you VIA). To be fair, ASMedia has been pretty good.


Valantar said:


> On this I'd have to disagree with you. DS has _a lot_ of potential - current software just can't make use of our blazing fast storage, and DS goes a long way towards fixing that issue. It just needs a) to be fully implemented, with GPU decompression support, and b) be adopted by developers. The latter is pretty much a given for big name titles given that it's an Xbox platform feature though.


Sure it has potential, but I don't believe that it's been a game-changer (pardon the pun) for anything more than a handful of console titles. If it was so great I'd expect its adoption to be much higher in console land, which would push much higher adoption for PCs to allow ports, but I'm just not seeing it.


----------



## btk2k2 (May 25, 2022)

Valantar said:


> But ... the workload here is essentially arbitrary. Which means that your "fundamental property" of speed is _arbitrarily defined_. That in and of itself makes your strict definition meaningless. On top of that, we are not operating within the realm of physics here. We are operating within the broader world, and general, everyday language - which does not conform to such strict definitions. Ever. As the dictionary definition quoted above demonstrated (and as I suspect the one you yourself quoted very selectively also showed), "faster" in everyday language has _many possible meanings_.



It is a bog-standard triangle equation. You can re-arrange the terms as you need. If you have two values of the triangle, you don't need to be given the third; you can calculate it, and it is trivial.

Your PC has a power supply. If you look at the sticker it will usually give you the max current on the 12v rail. From that you can calculate the resistance because V = IR and we have Voltage and we have Current so to get resistance you re-arrange and get R = V/I and boom. The alternative here is you can grab a multimeter, load up the 12v rail to max load and you can measure the resistance, you will get the same answer +/- the accuracy of the meter.

The fact that you need to calculate the resistance does not stop it from existing because it is inextricably linked to the other values and is required for it to work.

Same for speed = distance / time or the more apt but still actually the same aside from semantics: rate = work done / time. We have the work done (1 render) we have the time (204s for Zen 4, 297s for 12900K) ergo by definition we have the rate as well. You can't not have the rate when given the other two pillars of the equation.
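For what it's worth, the calculation being described here is trivial to reproduce. A minimal sketch in Python, using the 204 s / 297 s figures quoted in this thread (variable names are mine, and the figures are AMD's demo numbers as discussed above):

```python
# One render in 204 s (Zen 4 prototype) vs. one render in 297 s (12900K),
# per the numbers quoted in this thread. Illustrative only.
work_done = 1            # renders completed
t_zen4 = 204.0           # seconds
t_12900k = 297.0         # seconds

rate_zen4 = work_done / t_zen4        # renders per second
rate_12900k = work_done / t_12900k

# Rate-based comparison (the "~45% faster" reading):
pct_higher_rate = (rate_zen4 / rate_12900k - 1) * 100

# Time-based comparison (the "31% less time" reading on AMD's slide):
pct_less_time = (1 - t_zen4 / t_12900k) * 100

print(f"{pct_higher_rate:.1f}% higher rate, {pct_less_time:.1f}% less time")
```

Both numbers come from the same two data points; the whole disagreement in this thread is about which of them the word "faster" should attach to.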



Valantar said:


> No, it is correct if that percentage relates to a rate. It does not - an arbitrarily defined rate can be calculated from the data given, but the rate was not the data given, nor was the percentage based on a rate. The percentage was based on improvement (towards zero, which is obviously unreachable) compared to a known comparison, for which both data points were in seconds to complete one unit of work.
> 
> It doesn't matter that you can calculate a rate from this: the rate wasn't in the data presented, nor was a rate discussed by anyone involved. The use of the word "faster" here was clearly indicative of an improvement in time to finish a single workload, and not an increased rate of work per second.



The rate is in the data presented, because it has to be when giving a number of pieces of work done and a time to complete the work. It would be like a business giving you their revenue and their expenses, and then you saying 'the profit is not in the data presented, and calculating it takes significant effort in manipulating the data to come to that figure'. It is total nonsense.



Valantar said:


> Again: for your definition to be true, a significant effort in manipulating the data is required in order to change it into a unit of measure that is not represented in the base data. Yes, you get time to finish for the winner + additional time for those following (which then adds up to their total time through simple addition). Can you calculate their average rate of movement from that? Absolutely! Just like you can calculate a whole host of other things. None of that would invalidate anyone starting with the base data and saying "Max finished ~0.22% faster than Perez" - that would be an entirely accurate statement. The only reason why this isn't done in such situations is that the percentage difference would be minuscule and thus meaningless in terms of effectively communicating the difference. The base unit of this data is not velocity, it is time to finish a known workload. You can _produce_ an average velocity from that, but that is fundamentally irrelevant to the argument of whether "finishing X seconds earlier" or transforming that directly into a percentage are valid applications of the word "faster" or not. They are. Unless you are a physicist writing an academic paper, "faster" is perfectly applicable to "finished X% or Y seconds earlier".



If calculating speed when presented with a number of units of work done and a time to do it in is 'a significant effort in manipulating the data', it might explain why you are not getting it.

This base unit of data you are harping on about is a fiction you have invented. The units are swappable if you do the maths correctly, because we have been given enough information with which to do so.

Further, we are in the arena of comparative benchmarks, which is ideally done with a certain amount of rigor. That makes it scientific in nature, so sticking to the scientific / mathematical definition of words is the correct call. AMD did not do that in this case.



Valantar said:


> ... so why on earth have you been spending several pages arguing that AMD's application of it is wrong?



Presenting time to completion or presenting work done / s on a chart or as raw numbers are perfectly valid ways to present the data. Comparing them is where AMD went wrong because they had a smaller is better measure and did the comparison backwards.



Valantar said:


> But they did. "Faster" perfectly encapsulates what they presented. And due to the presentation _not_ being a chart or a graph, but a written sentence accompanied by two illustrated data points, the confusion you're referring to _just doesn't exist_. The problem you're bringing up has some validity, but it isn't applicable to this situation.



If AMD were answering a GCSE maths or physics exam and gave that result they would lose marks for an incorrect answer.



Valantar said:


> This is, once again, just not true. "Faster" in everyday language doesn't have a single universal meaning. It has many different meanings - which you yourself have illustrated. That you insist on the primacy of one of those meanings regardless of context doesn't say anything meaningful about the application of the word here, but rather it says something about an inflexible and unrealistic approach to the use of language. Exceptionally few words have singular, fixed meanings, and while many do so in specific contexts (lord knows I use _a lot_ of terms in my work that mean _entirely_ different things in my application than in colloquial language), you cannot argue that such contextual meanings are universal and overrule all other possible meanings. That isn't how language works.



Well we certainly don't mean fast as in to not eat and we don't mean fast as in stuck fast but why not use those definitions in this context as well, oh wait because they are the wrong definitions for this use case.

EDIT:

Fast Quick or Quickly - Cambridge Dictionary.


----------



## Valantar (May 25, 2022)

DeathtoGnomes said:


> I got a headache reading this...
> So, by this logic Intel being quicker than AMD in the past, should not have lost the race in blender or any other application that Intel lost to.


Lol. No, just trying to exemplify that within any single test, different measurements can tell us different things, even if some measurements might seem to contradict others (like the quicker/faster distinction) - and that applying a definition of a word from a different context might then cause you to misunderstand things quite severely.


DeathtoGnomes said:


> I belive thats how to use that, its called the Margin of Error.


I don't think margin of error is generally discussed with these types of measurements? It's relevant, but that's per result, not in the comparisons between them. My point was that you don't see percentage comparisons of something like the results of a race because the differences would be minuscule - say, a 10 second win in a 30-minute race. 10 seconds describes that far better than whatever percentage or speed difference that would equate to.


Assimilator said:


> @Valantar please, please, _please_ stop wasting your time on feeding the trolls. For your own sanity, I beg you.


Heh, I guess it's a hobby of mine? I can generally tire them out, and at times that can actually make a meaningful difference in the end. We'll see how this plays out.


Assimilator said:


> ADL is 16x 5.0 lanes for GPU + 4x 4.0 lanes dedicated to M.2 + an effective additional 4x 4.0 lanes that are dedicated to the chipset via the proprietary DMI link. So it's effectively 24 lanes of PCIe from the CPU, which matches Zen 4. Yes, I agree that in terms of *bandwidth* Zen 4 is far ahead, but lane count is more important than bandwidth IMO.


I agree on that - IIRC I was just pointing out that AMD has more 5.0 lanes, even if the lane count is the same.


Assimilator said:


> I'm not arguing that allowing people to reuse existing coolers is a bad thing, I'm merely noting that there will inevitably be those who try to use coolers rated for 65W on 170W parts and blame AMD as a result. Intel's approach has its own downsides, although I imagine the cooler manufacturers like Intel a bit more.
> 
> I'm also a little sceptical of the claimed compatibility; surely the dimensions (particularly Z-height) of the new socket and chip are different enough to make a meaningful difference?


They seem to be claiming no change, though that would surprise me a bit. Guess we'll see - we might get a similar situation to "compatible" ADL coolers, or it might be perfectly fine.


Assimilator said:


> I'm aware that HSIO is expensive, especially PCIe 5.0, which is why I was hoping the CPU and chipsets would be putting out more lanes. My main concern is that the lowest-end chipset will, as usual, get the lowest PCIe version and number of lanes, and manufacturers will thus not bother with USB4 or USB-C in SKUs using said chipset. Given that I've already seen a few boards and not even the highest-end of them have more than 2 type-C ports on the rear panel, I'll withhold judgement until actual reviews drop.


I think we share that concern - quite frankly I don't care much about PCIe 4.0 or 5.0 for my use cases, and care more about having enough m.2 and rear connectivity. Possibly the worst part of specs-based marketing is that anyone trying to build a feature-rich midrange product gets shit on for not having the newest, fanciest stuff, rather than being lauded for providing a broad range of useful midrange features. That essentially means nobody ever makes those products - instead it's everything including the kitchen sink at wild prices, or stripped to the bone, with very little in between.


Assimilator said:


> Thanks, although I'd much prefer for it to be platform-native as opposed to relying on third-party controllers. Experience has shown that those are generally, to put it bluntly, shit (I'm looking at you VIA). To be fair, ASMedia has been pretty good.


Yeah, that would be nice, though I doubt we'll see that on socketed CPUs any time soon - the pin count would likely be difficult to defend in terms of engineering. I hope AMD gets this into their mobile APUs though.


Assimilator said:


> Sure it has potential, but I don't believe that it's been a game-changer (pardon the pun) for anything more than a handful of console titles. If it was so great I'd expect its adoption to be much higher in console land, which would push much higher adoption for PCs to allow ports, but I'm just not seeing it.


AFAIK all titles developed only for Xbox Series X/S use it, but most titles seem to be cross-compatible still, and might thus leave it out (unless you want reliance on it to absolutely murder performance on older HDD-based consoles). I think we'll see far, far more of it in the coming years, as these older consoles get left behind. I'm frankly surprised that PC adoption hasn't been faster given that SSD storage has been a requirement for quite a few games for years now. Still, as with all new APIs it's pretty much random whether it gains traction or not.




btk2k2 said:


> It is a bog-standard triangle equation. You can re-arrange the terms as you need. If you have two values of the triangle, you don't need to be given the third; you can calculate it, and it is trivial.


I never said it wasn't. I said you're not basing your percentage on the data presented, but on a transformation of said data, which invalidates you comparing it to percentages based on that data.


btk2k2 said:


> Your PC has a power supply. If you look at the sticker it will usually give you the max current on the 12v rail. From that you can calculate the resistance because V = IR and we have Voltage and we have Current so to get resistance you re-arrange and get R = V/I and boom. The alternative here is you can grab a multimeter, load up the 12v rail to max load and you can measure the resistance, you will get the same answer +/- the accuracy of the meter.
> 
> The fact that you need to calculate the resistance does not stop it from existing because it is inextricably linked to the other values and is required for it to work.


If it weren't for the fact that your PC is not a resistive load, you would be right. But... why on earth are you going on about this irrelevant nonsense?


btk2k2 said:


> Same for speed = distance / time or the more apt but still actually the same aside from semantics: rate = work done / time. We have the work done (1 render) we have the time (204s for Zen 4, 297s for 12900K) ergo by definition we have the rate as well. You can't not have the rate when given the other two pillars of the equation.


Again: I never said it couldn't be calculated from the data provided; I said it _wasn't_ the data provided. In order to get a rate, you must first perform a calculation. That's it. The rate is inherent to the data provided, but the data provided isn't the rate, nor is the percentage presented a percentage that relates directly to the rate of work - it relates to the time to completion. This is literally the entire dumb misunderstanding that you've been harping on this entire time.


btk2k2 said:


> The rate is in the data presented, because it has to be when giving a number of pieces of work done and a time to complete the work. It would be like a business giving you their revenue and their expenses, and then you saying 'the profit is not in the data presented, and calculating it takes significant effort in manipulating the data to come to that figure'. It is total nonsense.


Performing a calculation on data in order to transform its unit is ... transforming the data. It is now different data, in a different format. Is this difficult to grasp?


btk2k2 said:


> This base unit of data you are harping on about is a fiction you have invented.


The base unit of data is literally the unit in which the data was provided. AMD provided data in the format of time to complete one render, and a percentage difference between said times.


btk2k2 said:


> The units are swappable if you do the maths correctly, because we have been given enough information with which to do so.


I have never said anything to contradict this, and your apparent belief that I have is rather crucial to the problem here.


btk2k2 said:


> Further, we are in the arena of comparative benchmarks, which is ideally done with a certain amount of rigor. That makes it scientific in nature, so sticking to the scientific / mathematical definition of words is the correct call. AMD did not do that in this case.


There is no "mathematical" definition of "faster", as _speed_ isn't a _mathematical_ concept, even if the strict physical definition of it is described using math as a tool (as physics generally does). Also: if computer benchmarks belong to a scientific discipline, it is computer science, which is distinct from math, physics, etc. even if it builds on a complex combination of those and other fields. Within that context, and especially within this not being a scientific endeavor but a PR event - one focused on communication! - using strict scientific definitions of words that differ from colloquial meanings would be _really dumb_. That's how you get people misunderstanding you.


btk2k2 said:


> Presenting time to completion or presenting work done / s on a chart or as raw numbers are perfectly valid ways to present the data.


... did I say that it wasn't? I said that that wasn't what AMD did here, that it wouldn't be useful to make a chart with just two data points, and that their presentation was clearer than such a chart would have been for the purpose it served here.


btk2k2 said:


> Comparing them is where AMD went wrong because they had a smaller is better measure and did the comparison backwards.


It isn't backwards - the measure _is_ "smaller is better". _Your opinion_ is that they should have converted it to a rate, which would have been "higher is better". You're welcome to that opinion, but you don't have the right to force that on anyone else, nor can you make any valid claim towards it being the only correct one.


btk2k2 said:


> If AMD were answering a GCSE maths or physics exam and gave that result they would lose marks for an incorrect answer.


I guess it's a good thing marketing and holding a presentation for press and the public isn't a part of GCSE math or physics exams then ... almost as if, oh, I don't know, _this is a different context where other terms are better descriptors_?


btk2k2 said:


> Well we certainly don't mean fast as in to not eat and we don't mean fast as in stuck fast but why not use those definitions in this context as well, oh wait because they are the wrong definitions for this use case.


Correct! But it would seem that you are implying that because _those_ meanings are wrong for this use case, all meanings beyond yours are also wrong? 'Cause the data doesn't support your conclusions in that case; you're making inferences not supported by evidence. Please stop doing that. You're allowed to have an opinion that converting "lower is better" data to "higher is better" equivalents is more clear, easier to read, etc. You can argue for that. What you can't do is what this started out with: you arguing that because this data _can_ be converted this way, that makes the numbers as presented _wrong_. This is even abundantly clear from your own arguments - that these numbers can be transformed into other configurations that represent the same things differently. On that basis, arguing that AMD's percentage is the wrong way around is plain-faced absurdity. Arguing for your preferred presentation being inherently superior is directly contradicted by saying that all conversions of the same data are equally valid. Pick one or the other, please.



ratirt said:


> Of course it is not a constant. This is supposedly the outcome. The instructions per second are not a constant either. How can you measure something that is changing depending on the environment or case of use? Imagine light-speed "r" or electric charge not being a constant. That is why all measurements are wrong no matter how you measure it since you can't measure it correctly either way. So all are wrong but at the same time are some sort of indication. You cant say this is wrong and this is correct. IPC is some sort of enigma that people cling to like dark matter. What we were discussing earlier and what you been trying to explain is not an IPC but general performance across the board. Variety of benchmark perceived as common or a standard to showcase the workload and performance of a processor.


The way IPC is used in the industry today, it essentially means generalizable performance per clock for the architecture - which is the only reasonable meaning it can have given the variability of current architectures across workloads. That is why you need a broad range of tests: because no single test can provide a generalizable representation of per-clock performance of an architecture. The result of any single benchmark will never be broadly representative. Which, when purportedly doing comparative measurements of something characteristic of the architecture, is then methodologically flawed to such a degree that the result is essentially rendered invalid. You're not then measuring generalizable performance per clock, you're measuring performance per clock in that specific workload and nothing else. And that's a major difference.
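As an aside, when a single "IPC" figure is distilled from a broad suite, the conventional tool is a geometric mean of per-clock ratios, since it treats relative gains and losses symmetrically. A rough sketch under that assumption (the workload names and scores below are invented for illustration):

```python
import math

# Hypothetical per-clock performance ratios vs. a baseline architecture
# (baseline = 1.0). All names and numbers are invented for illustration.
per_clock_ratios = {
    "render": 1.18,
    "compile": 1.10,
    "compress": 1.25,
    "simulate": 1.08,
}

# Geometric mean: the usual way to aggregate ratios across a suite,
# so that e.g. a 2x gain in one test and a 0.5x loss in another cancel out.
geo_mean = math.prod(per_clock_ratios.values()) ** (1 / len(per_clock_ratios))
print(f"Aggregate per-clock uplift: {(geo_mean - 1) * 100:.1f}%")
```

A single benchmark is just one entry in that dictionary, which is exactly why it can't stand in for the aggregate.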


----------



## btk2k2 (May 25, 2022)

Valantar said:


> I never said it wasn't. I said you're not basing your percentage on the data presented, but on a transformation of said data, which invalidates you comparing it to percentages based on that data.



Rate is inversely proportional to time taken; they are linked, so if you have one, you have the other by default.

So it means 31% less time == 0.69 time == 1/0.69 rate == 1.45 rate == 45% faster. These are all different ways of saying the same thing.

If we say 31% faster == 1.31 rate == 1/1.31 time == 0.76 time == 24% less time. It does not work when starting from a position of 31% faster.

Now I know you are going to say 'but in the context of time, 31% faster == 0.69 time ...'. Sure, you can believe that, but it is an incorrect use of the word faster, and is where quicker might be used. Hence the prior reference to drag racing, where quicker refers to acceleration (so 0-60 times) and fast refers to speed (I topped out at 150 MPH on the quarter mile run).

I am sure you will argue 'but your definition is not supreme over other definitions', and sure, it is not. The use of faster and quicker in these contexts has convention though, and if you want to stick to the convention you don't do 31% faster == 0.69 time. AMD broke the convention of the use of the term faster, which is why it is considered incorrect.



Valantar said:


> Again: I never said it couldn't be calculated from the data provided; I said it _wasn't_ the data provided. In order to get a rate, you must first perform a calculation. That's it. The rate is inherent to the data provided, but the data provided isn't the rate, nor is the percentage presented a percentage that relates directly to the rate of work - it relates to the time to completion. This is literally the entire dumb misunderstanding that you've been harping on this entire time.
> 
> Performing a calculation on data in order to transform its unit is ... transforming the data. It is now different data, in a different format. Is this difficult to grasp?



The use of the word faster ties the % to rate. The only correct way to tie it to time in the context of a benchmark is to give a discrete time saving, like '93 s faster'.

Fast Quick or Quickly - Cambridge Dictionary.



Valantar said:


> The base unit of data is literally the unit in which the data was provided. AMD provided data in the format of time to complete one render, and a percentage difference between said times.



They got the percentage difference wrong when combining it with the term faster. There are words that are perfectly fine for describing a 31% reduction in time taken; faster is not one of them.



Valantar said:


> There is no "mathematical" definition of "faster", as _speed_ isn't a _mathematical_ concept, even if the strict physical definition of it is described using math as a tool (as physics generally does). Also: if computer benchmarks belong to a scientific discipline, it is computer science, which is distinct from math, physics, etc. even if it builds on a complex combination of those and other fields. Within that context, and especially within this not being a scientific endeavor but a PR event - one focused on communication! - using strict scientific definitions of words that differ from colloquial meanings would be _really dumb_. That's how you get people misunderstanding you.



The reason there is an issue is that AMD used a term with a well-understood colloquial meaning in an unconventional way, which is not a great idea if you are meant to be focusing on communication.



Valantar said:


> It isn't backwards - the measure _is_ "smaller is better". _Your opinion_ is that they should have converted it to a rate, which would have been "higher is better". You're welcome to that opinion, but you don't have the right to force that on anyone else, nor can you make any valid claim towards it being the only correct one.



They can display it as lower-is-better without issue. The problem comes when the blurb for it is written as 'A is x% faster than Y' and the relative % is incorrect. If AMD wanted to highlight the reduction in time rather than the increase in computational performance, they needed to use a different term than faster. That is it. That is the issue, nothing more than that.



Valantar said:


> I guess it's a good thing marketing and holding a presentation for press and the public isn't a part of GCSE math or physics exams then ... almost as if, oh, I don't know_, this is a different context where other terms are better descriptors_?



Yes, there are better descriptors than faster for describing a reduction in time taken as a relative % value.



Valantar said:


> Correct! But it would seem that you are implying that because _those_ meanings are wrong for this use case, all meanings beyond yours are also wrong? 'Cause the data doesn't support your conclusions in that case; you're making inferences not supported by evidence. Please stop doing that. You're allowed to have an opinion that converting "lower is better" data to "higher is better" equivalents is more clear, easier to read, etc. You can argue for that. What you can't do is what this started out with: you arguing that because this data _can_ be converted this way, that makes the numbers as presented _wrong_. This is even abundantly clear from your own arguments - that these numbers can be transformed into other configurations that represent the same things differently. On that basis, arguing that AMD's percentage is the wrong way around is plain-faced absurdity. Arguing for your preferred presentation being inherently superior is directly contradicted by saying that all conversions of the same data are equally valid. Pick one or the other, please.



'Fast, quick or quickly' - Cambridge Dictionary. This is the evidence for the delineation between fast, quick and quickly.

The numbers as presented are wrong when combined with the term faster, because the conventional use of faster with a relative % value refers to speed.


----------



## DeathtoGnomes (May 25, 2022)

Valantar said:


> That is why you need a broad range of tests: because no single test can provide a generalizeable representation of per-clock performance of an architecture.


Let's make this broader; it still applies:


> That is why you need a broad range of tests: because no single test can provide a ~~generalizeable~~ adequate representation of performance.


This is why reviews of, let's say, video cards use multiple games to compare performance. However, there are not a lot of different IPC tests to use (as I understand this conversation...).


----------



## CSG (May 25, 2022)

Valantar said:


> Performing a calculation on data in order to transform its unit is ... transforming the data. It is now different data, in a different format.


Why would one perform a transformation when computing a relative performance? Let us define performance as p=w/t, where w stands for work and t for time, and suppose that computers 1 and 2 perform the same task in times t1 and t2, respectively. Then, the ratio of their performances is p1/p2 = (w/t1) / (w/t2) = t2/t1.
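CSG's derivation can be shown with numbers. A small Python sketch (the 204 s and 297 s render times are illustrative values I chose, not figures from AMD):

```python
def relative_performance(t1, t2):
    """p = w/t for identical work w, so p1/p2 = (w/t1) / (w/t2) = t2/t1."""
    return t2 / t1

# Computer 1 finishes the render in 204 s, computer 2 in 297 s.
# t1/t2 ~= 0.69 (about 31% less time), while p1/p2 = t2/t1 ~= 1.46.
print(round(relative_performance(204, 297), 2))  # 1.46
```

No unit transformation is involved: the work term cancels, and the performance ratio falls straight out of the two measured times.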


----------



## the54thvoid (May 26, 2022)

No more arguments or epic posts about semantics please. It's not fair to derail threads with such long and arduously off-topic posts.


----------



## TheoneandonlyMrK (May 26, 2022)

So this didn't get locked. Yeah, 15% still, and upwards of that since it's an initial rumour. All good.


----------



## ModEl4 (May 26, 2022)

TheoneandonlyMrK said:


> So this didn't get locked, yeah 15% still and upwards of that since it's a initial rumour, all good.


From the PCWorld interview (although it's not 100% clear), it seems the single-threaded performance figure communicated includes IPC, and depending on which models you compare across the stack vs Zen 3, it's 15% and higher (ST performance).
Another speculation that comes to mind regarding the Blender score, based on the answer about multithreaded performance, is that it's mainly clock-driven with a possible SMT uplift.
I don't know what average all-core frequency a 5950X would hit in a similar Blender test, but the difference between the 7950X and 5950X seemingly is 1.45X × 1.05X ≈ 1.52X.
If the 5950X's SMT implementation has a 10% lower uplift than the 7950X's, then, assuming the 7950X was hitting 5.2 GHz on all cores with the AIO liquid cooler they used, to produce a +52% uplift over the 5950X, the 5950X would be running at 5.2 GHz / 1.52 / 0.9 ≈ 3.8 GHz.
3.8 GHz seems a little low even for Blender; TPU members that have a 5950X would know more.
Edit: I forgot the IPC difference, so with 5% instead of 3.8 GHz for the 5950X we would get around 4 GHz, with 10% around 4.2 GHz, and so on. Depending on the IPC difference we may not even have SMT improvements at all (not likely, because that would mean around 11% IPC improvement with the 5950X at 3.8 GHz all-core in a similar Blender test).
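The back-of-the-envelope estimate in this post can be reproduced in a few lines. All inputs below are the post's own speculative figures, not confirmed specs:

```python
# Claimed Blender uplift (~45% faster) combined with an assumed ~5% extra factor
uplift = 1.45 * 1.05            # ~1.52x overall

assumed_7950x_clock = 5.2       # GHz all-core, speculative
smt_penalty = 0.9               # 5950X SMT uplift assumed 10% lower than 7950X

implied_5950x_clock = assumed_7950x_clock / uplift / smt_penalty
print(round(implied_5950x_clock, 1))  # 3.8 (GHz)
```

This just inverts the uplift chain: if every assumed factor is right, the 5950X would have had to run at roughly 3.8 GHz all-core, which is the figure the post questions as low.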


----------



## Mussels (May 28, 2022)

TheLostSwede said:


> That's simply not true. There were a lot of UEFI/AGESA issues early on, on both platforms, some took longer to solve than others. Much of which was memory related, but X570 had the boost issues and a lot of people had USB 2.0 problems as well.
> 
> As I said, it mostly got resolved after a few months, but some things took quite a while for AMD to figure out.


I've had so many Ryzen systems here for myself, and then all the sales builds - the only issue that ever turned out to be actual AGESA/AMD and not shitty manufacturers was the early RAM incompatibility with Zen 1 and 300-series chipsets hating odd-numbered latencies (which was mostly blown out of proportion by people ignoring that more ranks of RAM = lower max clock speeds: coming from an Intel board that said 2133! no more! stay!, they'd move to an AMD board that said up to 4000 or whatever, and assume that 4000 MUST. WORK. NOW.)

Of the four original boards I had, I've still got three working perfectly fine. The only ones with unsolvable issues were the budget MSI 300- and 450-series boards.
The X370 setup had lingering memory issues I could never resolve, until I moved that RAM over to an Intel system... and the issues moved with it. Faulty Corsair RAM that got unstable above 45 C, so that issue went away every winter and came back every summer to drive me mad.


----------



## TheLostSwede (May 28, 2022)

Mussels said:


> I've had so many ryzen systems here for myself, and then all the sales builds - the only issue that ever turned out to be actual AGESA/AMD and not shitty manufacturers, was the RAM incompatibility early on with Zen 1 and 300 chipsets hating odd numbered latencies (which was mostly blown out of proportion by people ignoring that more ranks of RAM = slower max clock speeds. When comparing to intel at the time that said 2133! no more! stay! they'd move to an AMD board that said upto 4000 or whatever, and assume that 4000 MUST. WORK. NOW.)
> 
> Of the four original boards i had, i've still got three working perfectly fine. The only ones with unsolveable issues were the budget MSI 300 and 450 boards.
> The x370 setup had lingering memory issues i could never resolve, until i moved that RAM over to an intel system... and the issues moved with it. Faulty corsair RAM that got unstable above 45C, so that issue went away every winter and came back every summer to drive me mad.


I could never get my Asus Prime X370 board and Ryzen 7 1700 to work with my Corsair LPX 3200 memory properly. Got up to 2933 at best, as 3000 was never properly stable.
That RAM worked perfectly fine at its rated speed in my previous Intel system.
The first couple of months there were a lot of other weird little issues too; it's in a thread here somewhere...

X570: lots of weird issues again early on, and some that took much longer to solve; again, plenty of posts about it here in the forums. The biggest blunder was of course the boost speeds that were promised but took them 3-4 months to deliver after launch.

I never said they didn't solve the issues; my point was simply that I'm sick and tired of being a beta tester for these companies. Spend an extra six months working on these platforms and make them stable before launch, instead of rushing them out so you can launch before your one single competitor. Obviously this doesn't just apply to AMD and Intel, but also to a lot of other companies that have more competition; even so, the same applies: stop launching beta products, or sometimes worse.


----------



## Mussels (May 29, 2022)

TheLostSwede said:


> I could never get my Asus Prime X370 board and Ryzen 7 1700 to work with my Corsair LPX 3200 memory properly. Got up to 2933 at best, as 3000 was never properly stable.
> That RAM worked perfectly fine at its rated speed in my previous Intel system.
> The first couple of months there were a lot of other weird little issue too, it's in a thread here somewhere...
> 
> ...


LPX was the literal worst for ryzen. The LITERAL worst.

I wrote a whole-ass paragraph here and gave up. How about just a single image showing that, hey, you were overclocking past officially supported speeds?






Intel dodged the issue by locking their cheap boards to 2133 MHz; AMD just gave people headroom to overclock and then realised trusting consumers was a terrible idea.


----------



## TheLostSwede (May 29, 2022)

Mussels said:


> LPX was the literal worst for ryzen. The LITERAL worst.
> 
> I wrote a whole ass paragraph here and gave up, how about just a single image showing that hey - you were overclocking past officially supported speeds?
> 
> ...


Well, most people managed 3000 just fine, many 3200, and the lucky few 3466. So only getting to 2933 was disappointing.

You know as well as I do that the official memory speeds mean very little in reality.

As I said, no issue with the same RAM at 3200 on Intel, which was the rated speed of those modules. But as you say, LPX and AMD was a match made in a septic tank. That RAM didn't work any better with my 3700X either...


----------



## Bronan (Oct 2, 2022)

I really do not get why people want a so-called 32-core CPU when for games it does almost nothing, as most games do not even use 2 to 4 cores.
Sure, some newer games use 2% to 4% of the other cores, but if you turn all those extra cores off you will not see any difference.
If you're doing CAD/CAM or similar work you actually use them, but most people never need them.
Funnily enough, those are the biggest whiners over this nonsense.

Anyway, my point is that I do not get why new games are so darn slow and have endless load times.
I was watching a friend play RDR2 and was not really impressed, especially by how slow the loading times were for each simple new challenge in the game.
Overall it ran like a snail in my eyes on the latest Xbox.

For me, having more than 8 cores seems more than enough, as I will really never need 32 cores.
It makes me sad that the CPUs with the most cores these days get the highest clocks.
For me that is counterproductive; I never need more than 8 cores, really.

Regarding memory, do not get me started: all the brands promise a set of 4 will work on your motherboard, yet after 6 new sets Corsair gave up and removed the whole set from sale.
I do not even want to begin on the LPX disaster; for me that is still the worst ever in my long experience of PC building.
Especially because you needed them to be able to install the massive coolers needed to OC your system.

But I still want to see the 2 chip makers come up with a new socket design so you can never mount watercoolers/coolers tighter than the socket can handle,
as I admit to apparently being a brute who does not really feel how much force he uses on any tool.
As an example, I helped a truck driver mount his spare wheel; he said pull it tight, so I did... result: I broke those very thick bolts off.
The man could not believe a human was able to do that; he sent the video he made with his phone to his boss to prove it really happened and that it was not his fault... LOL
He kept feeling my arms and kept saying: what, I can't believe it, how is this possible, it is impossible.
Now you might think that was a fluke, but it happened to me often when changing tires on cars in the past as well.

So when I mount a cooler on my PC there is a high chance I kill the socket again. I have even asked others to do it for me, which helped a bit, but you do not ask friends 80 to 180 miles away to come mount your cooler.
They will do that once, but not when you have issues and need to mount/dismount it constantly.


----------



## Mussels (Oct 3, 2022)

Bronan said:


> I really do not get why people want a so called 32 cores cpu while for games it does almost nothing as most games do not even use 2 to 4 cores
> Sure some newer games use 2% to 4% of the other cores but if you turn all that extra cores off, you will not see any difference
> If your doing CAD/CAM or similar stuff you actually use them but most never ever need them
> But funny enough those are the biggest whiners over this nonsense
> ...


RDR2 is a console game. That's why it has slow loads.

I'm not as familiar with AM5 as I am with AM4, but AM4 was great in that the 5600X had 95% (or more) of the gaming performance of a 5950X - you never needed more than midrange for top-tier AM4 gaming in the Zen 3 lineup.

No one should want a high-core-count CPU for gaming; I still advise people to get 6-8 cores at most. The higher-core-count, higher-TDP parts are workstation CPUs and 100% not worth it - only a small, loud minority of people would add 150 W for 1% higher FPS, just to have "the best".


----------

