
AMD Unveils 5 nm Ryzen 7000 "Zen 4" Desktop Processors & AM5 DDR5 Platform

Joined
Oct 27, 2020
Messages
791 (0.54/day)
I've read they will all include an iGPU; that seems like a waste of die area and money for the consumer. I would prefer two versions, with and without, like Intel is doing.
A while ago there were some rumors that it could be RDNA3 based. At the time I thought that if RDNA3 inherits the MI250's BF16/Int8/Int4 capabilities, it would be suitable (if weak, based on the 4 CU rumor) for AI applications (there was an AI mention from Lisa Su as well), but the slide below suggests that all the AI acceleration comes from the Zen 4 core itself:

So probably RDNA2 based?
But even on 6nm it wouldn't be an insignificant die-size addition: it doesn't matter that it's just 4 CU, since all the unslice (ACEs, HWS, etc.) + media engine + display engine and so on add up to a lot of space.
If it's only 256SP, I'm curious to see how much faster it would be vs. Raptor Lake at 1080p (and if Raptor Lake is 256EU, isn't it time to upgrade the damn thing? Since it isn't going to be Arc based, Intel should at least have made it a 1.6GHz 384EU design, given that with DDR4 we already had a 1.3GHz 256EU design in 14nm Rocket Lake).
 
Joined
May 2, 2022
Messages
1,621 (1.75/day)
Location
G-City, UK
System Name AMDWeapon
Processor Ryzen 7 7800X3D
Motherboard X670E MSI Tomahawk WiFi
Cooling Thermalright Peerless Assassin 120 ARGB with Silverstone Air Blazer 2200rpm fans
Memory G-Skill Trident Z Neo RGB 6000 CL30 32GB@EXPO
Video Card(s) Powercolor 7900 GRE Red Devil
Storage Samsung 870 QVO 1TB x 2, Lexar 256 GB, TeamGroup MP44L 2TB, Crucial T700 1TB, Seagate Firecuda 2TB
Display(s) 32" LG UltraGear GN600-B
Case Montech 903 MAX AIR
Audio Device(s) Corsair void wireless/Sennheiser EPOS 670
Power Supply MSI MPG AGF 850 watt gold
Mouse Glorious Model D l Pad GameSir G7 SE
Keyboard Redragon Vara K551P
Software Windows 11 Pro 24H2
Benchmark Scores Fast Enough.
I shall be sticking with AM4 after an upgrade to a 5700X and a better GPU for many a year. You scallywags and your new fangled £500 DDR5 can test it out for me first and I'll be getting on board at AM4 and DDR4 prices.
 
Joined
May 2, 2017
Messages
7,762 (2.82/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Well, I disagree with you, and I can say that extrapolating this (like you said) from a variety of applications which behave differently, and of which there is such a vast number, is impossible as well.
For example, one CPU is better than another in one application, and the other CPU is better than the first one in a different application. If IPC is a metric describing instructions per clock, which is a constant, the outcome should be the same for every app, but it is not. So performance does not always equal IPC.
For instance:
the 5800X and 5800X3D in games. Normally these are the same processors, but they behave differently in gaming and differently in office apps. So out of curiosity, am I talking here about IPC or performance? Somehow, you say that IPC has to be measured across a variety of benchmarks to be valid. I thought that was the general performance of a CPU across commonly used applications.
But that's the thing: IPC in current complex CPU architectures is not a constant. It is a constant in very simple, in-order designs. In any out-of-order design with complex instruction queuing, branch prediction, instruction packing and more, the meaning of "IPC" shifts from "count the execution ports" to "across a representative selection of diverse workloads, how many instructions can the core process per clock". The literal, constant meaning of "instructions per clock" is irrelevant in any reasonably modern core design, as a) they can process tons at once, but b) the question shifts from base hardware execution capabilities to queuing and instruction handling, keeping the core fed.

That is also why caches and even RAM have significant effects on anything meaningfully described as "IPC" in a modern system, as cache misses, RAM speed, and all other factors relevant to keeping the core fed play into the end result. That is why you need a representative selection of benchmarks to measure IPC - because there is no such thing as constant IPC in modern CPUs, nor is there any such thing as an application that linearly loads every execution port in a way that demonstrates IPC in an absolute sense.
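For concreteness, here is a minimal sketch of how a per-workload "measured IPC" is usually derived (in Python, with made-up instruction/cycle counts standing in for hardware performance-counter readings), and why any single IPC figure is really an aggregate:

```python
from statistics import geometric_mean

# Hypothetical per-workload counters (instructions retired, core cycles).
# Real numbers would come from hardware performance counters.
workloads = {
    "cache-friendly loop": (9.0e9, 2.0e9),   # mostly L1 hits, wide issue
    "pointer chasing":     (1.2e9, 2.0e9),   # stalls on memory latency
    "branchy parser":      (3.0e9, 2.0e9),   # mispredicts flush the pipeline
    "SIMD kernel":         (7.5e9, 2.0e9),
}

ipc = {name: instr / cycles for name, (instr, cycles) in workloads.items()}

for name, value in ipc.items():
    print(f"{name:22s} IPC = {value:.2f}")

# No single workload's IPC describes the core; a summary statistic over a
# representative suite (geometric mean here) is the usual compromise.
print(f"aggregate (geomean)    IPC = {geometric_mean(ipc.values()):.2f}")
```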
It can easily consume 5-15W of power, more if it's overclocked! Fact is, it's hogging the "TDP" of at least 1-2 cores in there; everything else is irrelevant.
... but you're arguing that this power usage is sufficient to make it likely that these are Zen4c cores and not Zen4 - and those cores, for reference, allow for a 33% increase in core counts in the same power envelope for servers (or, more usefully for this comparison: Zen4 allows for 25% fewer cores per power envelope versus 4c). So, if the iGPU were to cause MSDT CPUs to move to Zen4c, it would need to consume far more than the power you're mentioning here - these are likely 105W (or higher!) CPUs, after all. For your logic to apply, the iGPU would need to consume as much power as ~4 Zen4 cores, not 1-2. It just doesn't add up. There is literally zero reason to assume these are Zen4c cores.
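A rough back-of-the-envelope check of that core-count argument, using only the figures quoted above (Zen4c: ~33% more cores in the same envelope; a 16-core flagship assumed, matching the 5950X-successor comparison in this thread):

```latex
% Per-core power under the quoted "33% more cores in the same envelope" claim:
\[
  P_{\text{Zen4c}} \approx \frac{P_{\text{Zen4}}}{1.33} \approx 0.75\, P_{\text{Zen4}}
\]
% So converting a 16-core flagship to Zen4c frees roughly
\[
  16 \times (1 - 0.75) \times P_{\text{Zen4}} = 4\, P_{\text{Zen4}},
\]
% i.e. the iGPU would have to draw ~4 cores' worth of power, not 1-2,
% before such a swap bought anything.
```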
No. That is wrong.

Completing a workload in 31% less time means the rate of work done is 45% higher.
.... yes, but we're not talking about rate of work, we're talking about time to finish. Completing a task in 31% less time means you finish 31% faster. Thus you are 31% faster. Right?
Faster / slower refers to a comparison of value / time (like frames per second; for example, 145fps is 45% faster than 100fps). Now, AMD did not use faster / slower in the slide; they said it took 31% less time, which is the correct wording, because they are doing a seconds / workload comparison and the seconds for the Zen 4 rig were 31% less than for the 12900K rig (297 × 0.69 ≈ 205).
They did say "faster", which is a perfectly valid wording for this comparison. There is absolutely nothing explicitly and exclusively linking the word "faster" only to a rate. If a sprinter finishes the 100m dash in 9 seconds and another in 10 seconds, will you be comparing their rate of movement? No, you compare their time to finish. And the one finishing in 9 seconds is then 1 second faster than the other, or 10% faster if we for some reason insist on using percentages.
If you want to use faster / slower, you need to calculate the rate, which is easy enough: just do 1/204 to get the renders/s, which is 0.0049. Do the same for the 12900K and you get 1/297, which is 0.0034.
This is an utterly arbitrary delineation with no root in the meaning of the word "faster". These words apply to literally any measure of speed you want, in any comparison you want. In this case, the use case is "time to finish a given workload", in which lower time expenditure thus is faster.
0.0049 is a 45% faster rate than 0.0034. 0.0034 is 31% slower than 0.0049.
... again with the rates. There is no rate being discussed here. Check the damn slide. It's comparing time to finish a workload, not workload processing per unit of time. These are two different things that can be calculated from the same data, but only the former is what AMD used in their marketing, and transforming that to a rate to prove a point is an immense exercise in pedantic bad-faith arguing and goal post shifting.
If you don't want to use rate, you need to avoid faster / slower wording and stick to less time / more time wording, where you can say that Zen 4 took 31% less time or the 12900K took 45% more time. These are simple calculations, though, so re-arranging them is pretty trivial.
Sorry, but what you're saying here is utter nonsense. There is absolutely nothing in the word "faster" that says it only applies to a rate. Please stop this absurd exercise in arbitrarily delimiting the meaning of words. You're welcome to have your own private definition, but you can't force that onto the world - that's not how language works.
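For what it's worth, both readings fall out of the same two numbers from the slide (204s and 297s); a quick sketch:

```python
# Render times from the slide quoted above (seconds per render).
zen4_s, adl_s = 204.0, 297.0

time_saved = 1 - zen4_s / adl_s   # "takes X% less time" framing
rate_gain = adl_s / zen4_s - 1    # "does X% more work per second" framing

print(f"Zen 4 takes {time_saved:.1%} less time")   # ~31.3%
print(f"Zen 4's rate is {rate_gain:.1%} higher")   # ~45.6%
```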
 
Joined
Apr 12, 2013
Messages
7,494 (1.77/day)
but you're arguing that this power usage is sufficient to make it likely that these are Zen4c cores and not Zen4
No, I'm arguing that if these are the big cores, Zen4 (with more cache) or Zen4D or whatever they'd call them, then pairing them with an IGP makes no sense. These will probably replace the 16c/32t 5950X at the top end with a 24c/48t(?) chip, given Genoa is already 96 cores. So I expect the flagship Ryzen MSDT not to have an IGP ~ that's it; I'm totally guesstimating in this regard & so are you btw o_O
Which gets wasted with an IGP, try again :rolleyes:

I'm guesstimating the bigger cache variants would likely ditch the IGP with massive (L3?) caches near the cores or on the IoD, maybe even an L4 cache.
While you were saying there'd be thinner(lighter?) cores with even less cache & higher density?

Now admittedly there could be 3 variants, with x3d versions also thrown in, but that'd be even more bizarre as far as I'm concerned!
 
Joined
May 2, 2017
Messages
7,762 (2.82/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
No, I'm arguing that if these are the big cores, Zen4 (with more cache) or Zen4D or whatever they'd call them, then pairing them with an IGP makes no sense. These will probably replace the 16c/32t 5950X at the top end with a 24c/48t(?) chip, given Genoa is already 96 cores. So I expect the flagship Ryzen MSDT not to have an IGP ~ that's it; I'm totally guesstimating in this regard & so are you btw o_O
Yes, we're all speculating, but you brought Zen4c into this as a counterargument to MSDT getting an iGPU, which ... sorry, I just don't see the connection. The iGPU is a low-CU-count offering that they're adding because the massive node improvement for the IOD lets them implement it relatively cheaply, and it adds a hugely requested feature that will also make these chips much more palatable to the very lucrative, high-volume OEM market. OEMs want the option for dGPU-less builds, and this will open up a whole new market for AMD: enterprise PCs/workstations that don't come with a dGPU in their base configuration. And of course consumers have also been saying how nice it would be to have a barebones iGPU in their CPUs for troubleshooting or basic system configs ever since Ryzen came out. They're just responding to that. And I would be shocked if they spun out a new, smaller IOD without the iGPU for their high-end MSDT chips, as that would be extremely expensive for a very limited market.
 
Joined
Apr 6, 2021
Messages
1,131 (0.86/day)
Location
Bavaria ⌬ Germany
System Name ✨ Lenovo M700 [Tiny]
Cooling ⚠️ 78,08% N² ⌬ 20,95% O² ⌬ 0,93% Ar ⌬ 0,04% CO²
Audio Device(s) ◐◑ AKG K702 ⌬ FiiO E10K Olympus 2
Mouse ✌️ Corsair M65 RGB Elite [Black] ⌬ Endgame Gear MPC-890 Cordura
Keyboard ⌨ Turtle Beach Impact 500
Well, I disagree with you, and I can say that extrapolating this (like you said) from a variety of applications which behave differently, and of which there is such a vast number, is impossible as well.
For example, one CPU is better than another in one application, and the other CPU is better than the first one in a different application. If IPC is a metric describing instructions per clock, which is a constant, the outcome should be the same for every app, but it is not. So performance does not always equal IPC.
For instance:
the 5800X and 5800X3D in games. Normally these are the same processors, but they behave differently in gaming and differently in office apps. So out of curiosity, am I talking here about IPC or performance? Somehow, you say that IPC has to be measured across a variety of benchmarks to be valid. I thought that was the general performance of a CPU across commonly used applications.

Could even be that they bring out "3D" versions for gamers & "non 3D" versions for pro/consumers. :confused: It would make a lot of sense.

I've read they will all include an iGPU; that seems like a waste of die area and money for the consumer. I would prefer two versions, with and without, like Intel is doing.

An iGPU doesn't increase the chip price that much. On the plus side you always have a "backup GPU" on hand, and the resale value will be better (e.g. it can be used for HTPCs).

Would :love: it if they one day included an "Automatic Toggle Mode", so that the APU runs the desktop/video applications & the (completely shut down) GPU turns on only for gaming.
Now that would be a real killer feature. :rockout:
 
Joined
Apr 21, 2005
Messages
184 (0.03/day)
.... yes, but we're not talking about rate of work, we're talking about time to finish. Completing a task in 31% less time means you finish 31% faster. Thus you are 31% faster. Right?

They did say "faster", which is a perfectly valid wording for this comparison. There is absolutely nothing explicitly and exclusively linking the word "faster" only to a rate. If a sprinter finishes the 100m dash in 9 seconds and another in 10 seconds, will you be comparing their rate of movement? No, you compare their time to finish. And the one finishing in 9 seconds is then 1 second faster than the other, or 10% faster if we for some reason insist on using percentages.

This is an utterly arbitrary delineation with no root in the meaning of the word "faster". These words apply to literally any measure of speed you want, in any comparison you want. In this case, the use case is "time to finish a given workload", in which lower time expenditure thus is faster.

... again with the rates. There is no rate being discussed here. Check the damn slide. It's comparing time to finish a workload, not workload processing per unit of time. These are two different things that can be calculated from the same data, but only the former is what AMD used in their marketing, and transforming that to a rate to prove a point is an immense exercise in pedantic bad-faith arguing and goal post shifting.

Sorry, but what you're saying here is utter nonsense. There is absolutely nothing in the word "faster" that says it only applies to a rate. Please stop this absurd exercise in arbitrarily delimiting the meaning of words. You're welcome to have your own private definition, but you can't force that onto the world - that's not how language works.

Nope, pretty much all wrong.

Frame times. Is 8.33ms 50% faster than 16.67ms? No, it is 100% faster, because 16.67ms is 60FPS and 8.33ms is 120FPS.

AMD gave us the data in the form of a render time.

You do correctly state.

These words apply to literally any measure of speed you want

Do you want the definition of speed? It has been given before, but here it is again: Speed = Distance / Time. With frame times you get Speed = Distance (1 frame) / Time (8.33ms) = 120FPS.
With Render times you get Speed = Distance (1 render) / Time (204s) = 0.0049 RPS.

As for the definition of faster.

adjective
comparative adjective: faster

1.
moving or capable of moving at high speed.
"a fast and powerful car"

For A to be faster than B it needs to have a higher speed and from the data AMD gave you get speed by doing what I have shown above.

It is impossible to not have Rate / Speed involved because otherwise the equation breaks.

AMD also got it wrong. This is why many places will convert 'lower is better' times to 'higher is better' speeds and then compare, because the math is more intuitive when doing your ratios.
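The conversions being argued over, as a small sketch (times taken from the posts above):

```python
# Time-per-unit-of-work to rate conversion used in the post above.
def to_rate(seconds_per_unit: float) -> float:
    """Units of work per second, given seconds per unit of work."""
    return 1.0 / seconds_per_unit

print(f"{to_rate(16.67e-3):.1f} FPS")      # ~60.0  (16.67 ms/frame)
print(f"{to_rate(8.33e-3):.1f} FPS")       # ~120.0 (8.33 ms/frame)
print(f"{to_rate(204.0):.4f} renders/s")   # ~0.0049 (Zen 4)
print(f"{to_rate(297.0):.4f} renders/s")   # ~0.0034 (12900K)

# Halving the time per frame doubles the rate: "50% less time" and
# "100% faster rate" describe the same measurement.
```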
 
Joined
May 17, 2021
Messages
3,005 (2.36/day)
Processor Ryzen 5 5700x
Motherboard B550 Elite
Cooling Thermalright Peerless Assassin 120 SE
Memory 32GB Fury Beast DDR4 3200Mhz
Video Card(s) Gigabyte 3060 ti gaming oc pro
Storage Samsung 970 Evo 1TB, WD SN850x 1TB, plus some random HDDs
Display(s) LG 27gp850 1440p 165Hz 27''
Case Lian Li Lancool II performance
Power Supply MSI 750w
Mouse G502
Would :love: it if they one day included an "Automatic Toggle Mode", so that the APU runs the desktop/video applications & the (completely shut down) GPU turns on only for gaming.
Now that would be a real killer feature. :rockout:

That is a source of problems on laptops. And I don't know if you'd save anything; GPUs are very frugal these days, and if you have a 0dB mode it isn't much of a difference. Some cents in electricity.
 
Joined
May 2, 2017
Messages
7,762 (2.82/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Nope, pretty much all wrong.

Frame times. Is 8.33ms 50% faster than 16.67ms? No, it is 100% faster, because 16.67ms is 60FPS and 8.33ms is 120FPS.
Again: just because the same information presented one way has one relative percentage difference does not make that relative percentage difference transferable to other functions of the same data. When you perform calculations on the base data, you change the basis for comparison fundamentally, as you are no longer working with the same representation of the data. This is so fundamentally basic I'm frankly shocked that you keep harping on this.
AMD gave us the data in the form of a render time.

You do correctly state.



Do you want the definition of speed? It has been given before, but here it is again: Speed = Distance / Time. With frame times you get Speed = Distance (1 frame) / Time (8.33ms) = 120FPS.
With Render times you get Speed = Distance (1 render) / Time (204s) = 0.0049 RPS.

As for the definition of faster.



For A to be faster than B it needs to have a higher speed and from the data AMD gave you get speed by doing what I have shown above.
This is an impressive amount of nonsensical pedantry, I have to say. "Faster" with a given, fixed workload - such as one render of the model in question - means it takes a shorter time to finish. That meaning is fully compatible with a dictionary definition of "faster". Oh, and I find it highly interesting that you're clearly quoting one listing of several numbered meanings from a dictionary definition to prove your argument. I wonder what those subsequent entries might say? Also, isn't the core of your argument that there is only one acceptable understanding of the word? Come on, you could at least try to not be that blatant in twisting things. I mean, this is quite simple:

Is your definition valid? Yes. Is mine? Again: yes. Both are equally true. There is literally nothing that says "faster" - or "speed" - can only mean a rate of movement/change. That is pure nonsense.

This is a question of different tools for different uses. Rates are widely comparable and generalizable - 120 km/h is the same whether it's on an F1 track or in your grandma's Corolla. Rates are great when the workload is either unknown or the point is comparisons across workloads. Rates are useless and overly complicated when the workload is clearly defined and delineated - the time it takes your grandma's Corolla to get to the grocery store vs. the time it would take an F1 driver is best presented as time to completion, not as their average speed during that drive. Time to completion is a much better and more intuitive representation of the relative difference within the same workload than adding a layer of abstraction through converting the concrete data into a rate.
It is impossible to not have Rate / Speed involved because otherwise the equation breaks.
A rate is one way of presenting a speed - a broad and general one, either an average or a momentary measurement. Time spent to finish a fixed task is another method of presenting a speed, which presents a broad representation that isn't an average but rather a representation of the response to that specific task. If an F1 racer wins a race, what were they, compared to the competition? Faster. Yet results are presented in time to finish, not average speed (rate, km/h or mph) for the race. Time to finish a given task is just as valid a measure of speed as any rate is.
AMD also got it wrong. This is why many places will convert 'lower is better' times to 'higher is better' speeds and then compare because the math is more intuitive when doing your ratios.
It might be more intuitive, but it isn't any more correct or true. Also, converting "lower is better" data into the opposite is a question of readability, not about math. It's about data presentation, not accuracy or truth. Please stop projecting your own biases onto others - just because you have a strong preference for speed presented as rates doesn't mean the world has to conform to that, nor that others have to agree. IMO, for a fixed workload, presenting it as a rate is confusing and misleading verging on the nonsensical. A rate is only meaningful if the unit measured is clearly defined, easily understood, and makes sense in the overall context - km/h, m/s, sausages per 30 minutes in a sausage eating contest, whatever. If it isn't - such as presenting fractions of an arbitrary workload as the unit, as in this case - presenting it as a rate becomes utterly meaningless. It's the completion of the full task that matters, not the rate of fractional work performed per second. You're welcome to disagree with that sentiment, but please stop presenting your opinion as if it is somehow a universal truth.

Would :love: it if they one day included an "Automatic Toggle Mode", so that the APU runs the desktop/video applications & the (completely shut down) GPU turns on only for gaming.
Now that would be a real killer feature. :rockout:
AFAIK W11 has this feature already - it has a toggle for setting a "high performance" and "[something, can't remember, don't think it's "low performance"] GPU, which should allocate the render workload to the appropriate GPU depending on the task at hand. It might not correctly recognize and categorize all applications, but you can override that manually.

That is a source of problems on laptops. And I don't know if you'd save anything; GPUs are very frugal these days, and if you have a 0dB mode it isn't much of a difference. Some cents in electricity.
An iGPU consuming anything from a fraction of a watt to a handful of watts will always be more efficient than powering up a dGPU - even if current dGPUs can get very efficient at idle, you still need to power a whole other piece of silicon, its VRAM, the VRMs, and so on. It won't be many watts, but the difference will be real. And why waste energy when you can just not waste energy?
 
Joined
Apr 6, 2021
Messages
1,131 (0.86/day)
Location
Bavaria ⌬ Germany
System Name ✨ Lenovo M700 [Tiny]
Cooling ⚠️ 78,08% N² ⌬ 20,95% O² ⌬ 0,93% Ar ⌬ 0,04% CO²
Audio Device(s) ◐◑ AKG K702 ⌬ FiiO E10K Olympus 2
Mouse ✌️ Corsair M65 RGB Elite [Black] ⌬ Endgame Gear MPC-890 Cordura
Keyboard ⌨ Turtle Beach Impact 500
AFAIK W11 has this feature already - it has a toggle for setting a "high performance" and "[something, can't remember, don't think it's "low performance"] GPU, which should allocate the render workload to the appropriate GPU depending on the task at hand. It might not correctly recognize and categorize all applications, but you can override that manually.

Ohh, really? Great! Now that's finally a feature worth upgrading to W11 for. :D
 
Joined
Jan 3, 2021
Messages
3,454 (2.45/day)
Location
Slovenia
Processor i5-6600K
Motherboard Asus Z170A
Cooling some cheap Cooler Master Hyper 103 or similar
Memory 16GB DDR4-2400
Video Card(s) IGP
Storage Samsung 850 EVO 250GB
Display(s) 2x Oldell 24" 1920x1200
Case Bitfenix Nova white windowless non-mesh
Audio Device(s) E-mu 1212m PCI
Power Supply Seasonic G-360
Mouse Logitech Marble trackball, never had a mouse
Keyboard Key Tronic KT2000, no Win key because 1994
Software Oldwin
But that's the thing: IPC in current complex CPU architectures is not a constant. It is a constant in very simple, in-order designs. In any out-of-order design with complex instruction queuing, branch prediction, instruction packing and more, the meaning of "IPC" shifts from "count the execution ports" to "across a representative selection of diverse workloads, how many instructions can the core process per clock". The literal, constant meaning of "instructions per clock" is irrelevant in any reasonably modern core design, as a) they can process tons at once, but b) the question shifts from base hardware execution capabilities to queuing and instruction handling, keeping the core fed.

That is also why caches and even RAM have significant effects on anything meaningfully described as "IPC" in a modern system, as cache misses, RAM speed, and all other factors relevant to keeping the core fed play into the end result. That is why you need a representative selection of benchmarks to measure IPC - because there is no such thing as constant IPC in modern CPUs, nor is there any such thing as an application that linearly loads every execution port in a way that demonstrates IPC in an absolute sense.
That's a nice explanation. May I add that IPC is not constant even in very simple microprocessors. The Zilog Z80 and the Motorola 6800 do not have a constant execution time for all instructions. In the 80386, IPC also becomes unpredictable: 32-bit integer multiplication takes 9-38 clock cycles, depending on the actual data being multiplied, and many simpler instructions take two cycles.
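To put rough numbers on that (the 9-38 cycle IMUL range is from the post above; the other cycle counts and the instruction mix are hypothetical):

```python
# Average IPC of a hypothetical simple in-order core depends on the
# instruction mix: cycles-per-instruction varies per instruction (and,
# for 386-style IMUL, with the operand data itself: 9-38 cycles).
mix = [
    # (fraction of instructions, cycles each)
    (0.70, 2),    # simple ALU ops
    (0.20, 4),    # loads/stores
    (0.10, 24),   # 32-bit IMUL, midpoint of the 9-38 cycle range
]

cpi = sum(frac * cycles for frac, cycles in mix)  # weighted cycles/instruction
print(f"CPI = {cpi:.2f}, IPC = {1 / cpi:.2f}")

# Shift the mix (more multiplies, different operand values) and "the"
# IPC of the very same core changes.
```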
 
Joined
Oct 27, 2020
Messages
791 (0.54/day)
Although I really didn't want to get involved, below my 2 cents:

The common use of "faster" is to denote that A has a higher speed than B,
while the common use of "quicker" is to denote that A completes something in a shorter time than B.
If AMD had used "quicker" it would be fine;
since they used "faster" they involve speed, which means they involve a rate (speed: the rate at which someone or something moves or operates, or is able to move or operate) [or, more strictly, in Euclidean physics: speed is the ratio of the distance traveled by an object to the time required to travel that distance]. So you see, fast/faster essentially refers to a fraction, with time being just one of the two numbers, and it's always the denominator; being the denominator, it gives 45%, not 31%, hence the logic gap in AMD's wording.
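Written out with the slide's numbers (204s and 297s), the two percentages the thread keeps colliding over are:

```latex
% Same two data points, two different (both internally consistent) framings:
\[
  \text{rate framing:}\quad \frac{1/204}{1/297} = \frac{297}{204} \approx 1.456
  \;\Rightarrow\; \text{45.6\% higher speed}
\]
\[
  \text{time framing:}\quad 1 - \frac{204}{297} \approx 0.313
  \;\Rightarrow\; \text{31.3\% less time}
\]
```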
Edit:
I just saw the last update regarding 170W being the absolute max limit (probably 125W typical vs 105W); that's great news and a clear advantage vs Intel!
I'm more curious about the performance/W comparison between the 13400 and 7600(X?) 65W parts, which will probably be much closer.
 
Joined
Dec 26, 2006
Messages
3,812 (0.58/day)
Location
Northern Ontario Canada
Processor Ryzen 5700x
Motherboard Gigabyte X570S Aero G R1.1 BiosF5g
Cooling Noctua NH-C12P SE14 w/ NF-A15 HS-PWM Fan 1500rpm
Memory Micron DDR4-3200 2x32GB D.S. D.R. (CT2K32G4DFD832A)
Video Card(s) AMD RX 6800 - Asus Tuf
Storage Kingston KC3000 1TB & 2TB & 4TB Corsair MP600 Pro LPX
Display(s) LG 27UL550-W (27" 4k)
Case Be Quiet Pure Base 600 (no window)
Audio Device(s) Realtek ALC1220-VB
Power Supply SuperFlower Leadex V Gold Pro 850W ATX Ver2.52
Mouse Mionix Naos Pro
Keyboard Corsair Strafe with browns
Software W10 22H2 Pro x64
Joined
Apr 21, 2005
Messages
184 (0.03/day)
Again: just because the same information presented one way has one relative percentage difference does not make that relative percentage difference transferable to other functions of the same data. When you perform calculations on the base data, you change the basis for comparison fundamentally, as you are no longer working with the same representation of the data. This is so fundamentally basic I'm frankly shocked that you keep harping on this.

Incorrect.

Speed is a fundamental property of this kind of data, just like area is a fundamental property of an object. In this case, just because we were not given speed does not mean we cannot calculate it, because we do have the number of workloads completed and the time to complete them (1, and 204 or 297 respectively). Just like if we have a rectangle and are given the length and width, we can calculate the area. Just because we do a calculation does not change whether it is a fundamental property or not.

This is an impressive amount of nonsensical pedantry, I have to say. "Faster" with a given, fixed workload - such as one render of the model in question - means it takes a shorter time to finish. That meaning is fully compatible with a dictionary definition of "faster". Oh, and I find it highly interesting that you're clearly quoting one listing of several numbered meanings from a dictionary definition to prove your argument. I wonder what those subsequent entries might say? Also, isn't the core of your argument that there is only one acceptable understanding of the word? Come on, you could at least try to not be that blatant in twisting things. I mean, this is quite simple:

Is your definition valid? Yes. Is mine? Again: yes. Both are equally true. There is literally nothing that says "faster" - or "speed" - can only mean a rate of movement/change. That is pure nonsense.

Your definition is valid in the context of "X is 10s faster than Y". We were not given "X is 10s faster than Y", though. We were given "Zen 4 is 31% faster than the 12900K", but we were also given the time to complete one render, and when you work out the speed you realise that actually Zen 4 is 45% faster than the 12900K.

The definition I provided is the correct one for describing "A is x% faster than B".

This is a question of different tools for different uses. Rates are widely comparable and generalizable - 120 km/h is the same whether it's on an F1 track or in your grandma's Corolla. Rates are great when the workload is either unknown or the point is comparisons across workloads. Rates are useless and overly complicated when the workload is clearly defined and delineated - the time it takes your grandma's Corolla to get to the grocery store vs. the time it would take an F1 driver is best presented as time to completion, not as their average speed during that drive. Time to completion is a much better and more intuitive representation of the relative difference within the same workload than adding a layer of abstraction through converting the concrete data into a rate.

A rate is one way of presenting a speed - a broad and general one, either an average or a momentary measurement. Time spent to finish a fixed task is another method of presenting a speed, which presents a broad representation that isn't an average but rather a representation of the response to that specific task. If an F1 racer wins a race, what were they, compared to the competition? Faster. Yet results are presented in time to finish, not average speed (rate, km/h or mph) for the race. Time to finish a given task is just as valid a measure of speed as any rate is.

They are indeed presented in time to finish, as in "Max was 13.072s faster than Perez". We do not get "Max was x% faster than Perez", but when you do work out average speed and do the comparison, you find that Max was actually Y% faster than Perez. Also, a 13s delta over a 5840s time scale is a fractional percentage, so displaying such information in that way would be unusable.

It might be more intuitive, but it isn't any more correct or true. Also, converting "lower is better" data into the opposite is a question of readability, not about math. It's about data presentation, not accuracy or truth. Please stop projecting your own biases onto others - just because you have a strong preference for speed presented as rates doesn't mean the world has to conform to that, nor that others have to agree. IMO, for a fixed workload, presenting it as a rate is confusing and misleading verging on the nonsensical. A rate is only meaningful if the unit measured is clearly defined, easily understood, and makes sense in the overall context - km/h, m/s, sausages per 30 minutes in a sausage eating contest, whatever. If it isn't - such as presenting fractions of an arbitrary workload as the unit, as in this case - presenting it as a rate becomes utterly meaningless. It's the completion of the full task that matters, not the rate of fractional work performed per second. You're welcome to disagree with that sentiment, but please stop presenting your opinion as if it is somehow a universal truth.

The display of it is correct either way.

The description of a performance delta in terms of "X% faster" is easier to understand with bigger-is-better charts, but regardless of what kind of chart is being used, if you say "A is x% faster than B", the relative % difference needs to be correct. AMD did not get it correct in the formulation of the sentence on their slide.

The universal truth here is that speed = distance / time. The English-language truth here is that "faster", when referring to relative % differences, means speed, and you can then refer to the universal truth to calculate it. From there you can compare the speeds of two or more things.
 
Joined
Apr 6, 2021
Messages
1,131 (0.86/day)
Location
Bavaria ⌬ Germany
System Name ✨ Lenovo M700 [Tiny]
Cooling ⚠️ 78,08% N² ⌬ 20,95% O² ⌬ 0,93% Ar ⌬ 0,04% CO²
Audio Device(s) ◐◑ AKG K702 ⌬ FiiO E10K Olympus 2
Mouse ✌️ Corsair M65 RGB Elite [Black] ⌬ Endgame Gear MPC-890 Cordura
Keyboard ⌨ Turtle Beach Impact 500
Although I really didn't want to get involved, below my 2 cents:

The common use of "faster" is to denote that A has a higher speed than B,
while the common use of "quicker" is to denote that A completes something in a shorter time than B.
If AMD had used "quicker" it would be fine;
since they used "faster" they involve speed, which means they involve a rate (speed: the rate at which someone or something moves or operates, or is able to move or operate) [or, more strictly, in Euclidean physics: speed is the ratio of the distance traveled by an object to the time required to travel that distance]. So you see, fast/faster essentially refers to a fraction, with time being just one of the two numbers, and it's always the denominator; being the denominator, it gives 45%, not 31%, hence the logic gap in AMD's wording.

Faster vs. quicker, terms drag racers are very familiar with. :cool:

The winner is not whoever's faster, but the one who gets from point A to point B the quickest.


I just saw the last update regarding 170W being the absolute max limit (probably 125W typical vs 105W); that's great news and a clear advantage vs Intel!
I'm more curious about the performance/W comparison between the 13400 and 7600(X?) 65W parts, which will probably be much closer.

Good point. :) Intel might still hold the productivity crown with its brute-force power boost, but for the average Joe, gaming performance & performance/W are more important.

Especially now with the rising energy costs.
 

Mussels

Freshwater Moderator
Joined
Oct 6, 2004
Messages
58,413 (7.96/day)
Location
Oystralia
System Name Rainbow Sparkles (Power efficient, <350W gaming load)
Processor Ryzen R7 5800x3D (Undervolted, 4.45GHz all core)
Motherboard Asus x570-F (BIOS Modded)
Cooling Alphacool Apex UV - Alphacool Eisblock XPX Aurora + EK Quantum ARGB 3090 w/ active backplate
Memory 2x32GB DDR4 3600 Corsair Vengeance RGB @3866 C18-22-22-22-42 TRFC704 (1.4V Hynix MJR - SoC 1.15V)
Video Card(s) Galax RTX 3090 SG 24GB: Underclocked to 1700Mhz 0.750v (375W down to 250W))
Storage 2TB WD SN850 NVME + 1TB Samsung 970 Pro NVME + 1TB Intel 6000P NVME USB 3.2
Display(s) Phillips 32 32M1N5800A (4k144), LG 32" (4K60) | Gigabyte G32QC (2k165) | Phillips 328m6fjrmb (2K144)
Case Fractal Design R6
Audio Device(s) Logitech G560 | Corsair Void pro RGB |Blue Yeti mic
Power Supply Fractal Ion+ 2 860W (Platinum) (This thing is God-tier. Silent and TINY)
Mouse Logitech G Pro wireless + Steelseries Prisma XL
Keyboard Razer Huntsman TE ( Sexy white keycaps)
VR HMD Oculus Rift S + Quest 2
Software Windows 11 pro x64 (Yes, it's genuinely a good OS) OpenRGB - ditch the branded bloatware!
Benchmark Scores Nyooom.
Hopefully not, as if the stability is as bad as it was for both X370 and X570, AMD is going to get a lot of unhappy customers.
99% of people had no issues, with the exception of the funky PCI-E/USB 3 related reset bugs that took time to diagnose (but those also took some uncommon setups to trigger, like PCI-E risers and high-power-draw USB 3.x devices in the same system)
 

TheLostSwede

News Editor
Joined
Nov 11, 2004
Messages
17,523 (2.40/day)
Location
Sweden
System Name Overlord Mk MLI
Processor AMD Ryzen 7 7800X3D
Motherboard Gigabyte X670E Aorus Master
Cooling Noctua NH-D15 SE with offsets
Memory 32GB Team T-Create Expert DDR5 6000 MHz @ CL30-34-34-68
Video Card(s) Gainward GeForce RTX 4080 Phantom GS
Storage 1TB Solidigm P44 Pro, 2 TB Corsair MP600 Pro, 2TB Kingston KC3000
Display(s) Acer XV272K LVbmiipruzx 4K@160Hz
Case Fractal Design Torrent Compact
Audio Device(s) Corsair Virtuoso SE
Power Supply be quiet! Pure Power 12 M 850 W
Mouse Logitech G502 Lightspeed
Keyboard Corsair K70 Max
Software Windows 10 Pro
Benchmark Scores https://valid.x86.fr/yfsd9w
99% of people had no issues, with the exception of the funky PCI-E/USB 3 related reset bugs that took time to diagnose (but those also took some uncommon setups to trigger, like PCI-E risers and high-power-draw USB 3.x devices in the same system)
That's simply not true. There were a lot of UEFI/AGESA issues early on, on both platforms, some took longer to solve than others. Much of which was memory related, but X570 had the boost issues and a lot of people had USB 2.0 problems as well.

As I said, it mostly got resolved after a few months, but some things took quite a while for AMD to figure out.
 
Joined
May 31, 2016
Messages
4,437 (1.44/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800Mhz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1v@2400Mhz
Storage M.2 Samsung 970 Evo Plus 500MB/ Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtec 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
But that's the thing: IPC in current complex CPU architectures is not a constant. It is a constant in very simple, in-order designs. In any out-of-order design with complex instruction queuing, branch prediction, instruction packing and more, the meaning of "IPC" shifts from "count the execution ports" to "across a representative selection of diverse workloads, how many instructions can the core process per clock". The literal, constant meaning of "instructions per clock" is irrelevant in any reasonably modern core design, as a) they can process tons at once, but b) the question shifts from base hardware execution capabilities to queuing and instruction handling, keeping the core fed.

That is also why caches and even RAM have significant effects on anything meaningfully described as "IPC" in a modern system, as cache misses, RAM speed, and all other factors relevant to keeping the core fed play into the end result. That is why you need a representative selection of benchmarks to measure IPC - because there is no such thing as constant IPC in modern CPUs, nor is there any such thing as an application that linearly loads every execution port in a way that demonstrates IPC in an absolute sense.
Of course it is not a constant. That is supposedly the outcome. The instructions per clock are not a constant either. How can you measure something that changes depending on the environment or use case? Imagine the speed of light "c" or the electric charge not being a constant. That is why all measurements are wrong no matter how you measure it, since you can't measure it correctly either way. So all are wrong, but at the same time they are all some sort of indication. You can't say this one is wrong and that one is correct. IPC is some sort of enigma that people cling to like dark matter. What we were discussing earlier, and what you have been trying to explain, is not IPC but general performance across the board: a variety of benchmarks perceived as a common standard to showcase the workload and performance of a processor.
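That "general performance across the board" is typically computed as exactly such an aggregate; a minimal sketch of the usual approach (scores are hypothetical, higher is better):

```python
from statistics import geometric_mean

# Hypothetical benchmark scores for two CPUs (higher is better).
cpu_a = {"render": 120, "compile": 95, "game_avg_fps": 144, "zip": 80}
cpu_b = {"render": 100, "compile": 100, "game_avg_fps": 120, "zip": 100}

# Normalize per benchmark, then aggregate with a geometric mean so no
# single benchmark's scale dominates the summary.
ratios = [cpu_a[k] / cpu_b[k] for k in cpu_a]
print(f"A vs B, geomean: {geometric_mean(ratios):.3f}x")
```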
 
Joined
Jul 16, 2014
Messages
8,197 (2.17/day)
Location
SE Michigan
System Name Dumbass
Processor AMD Ryzen 7800X3D
Motherboard ASUS TUF gaming B650
Cooling Arctic Liquid Freezer 2 - 420mm
Memory G.Skill Sniper 32gb DDR5 6000
Video Card(s) GreenTeam 4070 ti super 16gb
Storage Samsung EVO 500gb & 1Tb, 2tb HDD, 500gb WD Black
Display(s) 1x Nixeus NX_EDG27, 2x Dell S2440L (16:9)
Case Phanteks Enthoo Primo w/8 140mm SP Fans
Audio Device(s) onboard (realtek?) - SPKRS:Logitech Z623 200w 2.1
Power Supply Corsair HX1000i
Mouse SteelSeries Esports Wireless
Keyboard Corsair K100
Software windows 10 H
Benchmark Scores https://i.imgur.com/aoz3vWY.jpg?2
Just rechecked the video. Highest was 5520 MHz, news post has been updated
This claims AMD did no overclocking. Maybe doable on a non-X chip? Can't wait to see reviews.

 
Joined
May 2, 2017
Messages
7,762 (2.82/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Faster vs. quicker, terms drag racers are very familiar with. :cool:

The winner is not whoever's faster, but the one who gets from point A to point B the quickest.
That's an excellent illustration of exactly what we're discussing here: that specific contexts engender specific meanings of words, often in order to highlight specific differences. What this discussion misses is that such specific meanings do not invalidate the more general meanings of the same words, especially not outside of those contexts - and that in this context, there is no directly applicable sub-meaning that differentiates "faster" from other terms. Which feeds back into your example: the quicker car is still faster in a general sense, after all - it reaches the finish line first; it finishes the task first. The specific definition you're referring to is meant to highlight that if what you mean by "faster" is "reaches the highest top speed", that might not be the same as "finishes the race first". This again illustrates a similar issue to what we're discussing here: that a general measure of a rate - such as mph / km/h - might not give an accurate representation of overall performance in a given workload - such as racing down a quarter mile stretch of road.

Incorrect.

Speed is a fundamental property of this kind of data, just like area is a fundamental property of an object. In this case, just because we were not given speed does not mean we cannot calculate it, because we do have the number of workloads completed and the time to complete them (1, and 204 or 297 respectively). Just like if we have a rectangle and are given the length and width, we can calculate the area. Just because we do a calculation does not change whether it is a fundamental property or not.
But ... the workload here is essentially arbitrary. Which means that your "fundamental property" of speed is arbitrarily defined. That in and of itself makes your strict definition meaningless. On top of that, we are not operating within the realm of physics here. We are operating within the broader world, and general, everyday language - which does not conform to such strict definitions. Ever. As the dictionary definition quoted above demonstrates (and as I suspect the one you yourself quoted very selectively also shows), "faster" in everyday language has many possible meanings.
Your definition is valid in the context of "X is 10s faster than Y". We were not given "X is 10s faster than Y", though. We were given "Zen 4 is 31% faster than the 12900K", but we were also given the time to complete one render, and when you work out the speed you realise that actually Zen 4 is 45% faster than the 12900K.

The definition I provided is the correct one for describing "A is x% faster than B".
No, it is correct if that percentage relates to a rate. It does not - an arbitrarily defined rate can be calculated from the data given, but the rate was not the data given, nor was the percentage based on a rate. The percentage was based on improvement (towards zero, which is obviously unreachable) compared to a known comparison, for which both data points were in seconds to complete one unit of work.

It doesn't matter that you can calculate a rate from this: the rate wasn't in the data presented, nor was a rate discussed by anyone involved. The use of the word "faster" here was clearly indicative of an improvement in time to finish a single workload, and not an increased rate of work per second.
They are indeed presented in time to finish, as in "Max was 13.072s faster than Perez". We do not get "Max was x% faster than Perez", but when you do work out average speed and do the comparison, you find that Max was actually Y% faster than Perez. Also, a 13s delta over a 5840s time scale is a fractional percentage, so displaying such information in that way would be unusable.
Again: for your definition to be true, a significant effort in manipulating the data is required in order to change it into a unit of measure that is not represented in the base data. Yes, you get time to finish for the winner + additional time for those following (which then adds up to their total time through simple addition). Can you calculate their average rate of movement from that? Absolutely! Just like you can calculate a whole host of other things. None of that would invalidate anyone starting with the base data and saying "Max finished ~0.22% faster than Perez" - that would be an entirely accurate statement. The only reason why this isn't done in such situations is that the percentage difference would be minuscule and thus meaningless in terms of effectively communicating the difference. The base unit of this data is not velocity, it is time to finish a known workload. You can produce an average velocity from that, but that is fundamentally irrelevant to the argument of whether "finishing X seconds earlier" or transforming that directly into a percentage are valid applications of the word "faster" or not. They are. Unless you are a physicist writing an academic paper, "faster" is perfectly applicable to "finished X% or Y seconds earlier".
The display of it is correct either way.
... so why on earth have you been spending several pages arguing that AMD's application of it is wrong?
The description of a performance delta in terms of "X% faster" is easier to understand with bigger-is-better charts, but regardless of what kind of chart is being used, if you say "A is x% faster than B", the relative % difference needs to be correct. AMD did not get it correct in the formulation of the sentence on their slide.
But they did. "Faster" perfectly encapsulates what they presented. And due to the presentation not being a chart or a graph, but a written sentence accompanied by two illustrated data points, the confusion you're referring to just doesn't exist. The problem you're bringing up has some validity, but it isn't applicable to this situation.
The universal truth here is that speed = distance / time. The English-language truth here is that "faster", when referring to relative % differences, means speed, and you can then refer to the universal truth to calculate it. From there you can compare the speeds of two or more things.
This is, once again, just not true. "Faster" in everyday language doesn't have a single universal meaning. It has many different meanings - which you yourself have illustrated. That you insist on the primacy of one of those meanings regardless of context doesn't say anything meaningful about the application of the word here, but rather it says something about an inflexible and unrealistic approach to the use of language. Exceptionally few words have singular, fixed meanings, and while many do so in specific contexts (lord knows I use a lot of terms in my work that mean entirely different things in my application than in colloquial language), you cannot argue that such contextual meanings are universal and overrule all other possible meanings. That isn't how language works.
 
Joined
Oct 27, 2020
Messages
791 (0.54/day)
PCWorld had a nice interview with Robert Hallock and Frank Azor, interesting questions from PCWorld and sensible answers from AMD team, good stuff:

 
Joined
Jul 16, 2014
Messages
8,197 (2.17/day)
Location
SE Michigan
System Name Dumbass
Processor AMD Ryzen 7800X3D
Motherboard ASUS TUF gaming B650
Cooling Arctic Liquid Freezer 2 - 420mm
Memory G.Skill Sniper 32gb DDR5 6000
Video Card(s) GreenTeam 4070 ti super 16gb
Storage Samsung EVO 500gb & 1Tb, 2tb HDD, 500gb WD Black
Display(s) 1x Nixeus NX_EDG27, 2x Dell S2440L (16:9)
Case Phanteks Enthoo Primo w/8 140mm SP Fans
Audio Device(s) onboard (realtek?) - SPKRS:Logitech Z623 200w 2.1
Power Supply Corsair HX1000i
Mouse SteelSeries Esports Wireless
Keyboard Corsair K100
Software windows 10 H
Benchmark Scores https://i.imgur.com/aoz3vWY.jpg?2
That's an excellent illustration of exactly what we're discussing here: that specific contexts engender specific meanings of words, often in order to highlight specific differences. What this discussion misses is that such specific meanings do not invalidate the more general meanings of the same words, especially not outside of those contexts - and that in this context, there is no directly applicable sub-meaning that differentiates "faster" from other terms. Which feeds back into your example: the quicker car is still faster in a general sense, after all - it reaches the finish line first; it finishes the task first. The specific definition you're referring to is meant to highlight that if what you mean by "faster" is "reaches the highest top speed", that might not be the same as "finishes the race first". This again illustrates a similar issue to what we're discussing here: that a general measure of a rate - such as mph / km/h - might not give an accurate representation of overall performance in a given workload - such as racing down a quarter mile stretch of road.
I got a headache reading this... :pimp:
So, by this logic, Intel, being quicker than AMD in the past, should not have lost the race in Blender or any other application that it lost. :p:D

The only reason why this isn't done in such situations is that the percentage difference would be minuscule and thus meaningless in terms of effectively communicating the difference.
I believe that's how to use that; it's called the margin of error.

you cannot argue that such contextual meanings are universal and overrule all other possible meanings. That isn't how language works.
I agree. Words really have no universal meaning; they have accepted meanings. Webster saw to that: the first dictionaries had a significant amount of slang definitions, later changed.

Using this logic fanboi definitions say:
AMD good
Intel bad
Intel quicker
AMD faster

Simple! :twitch:
 
Joined
Feb 18, 2005
Messages
5,800 (0.80/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) 3x AOC Q32E2N (32" 2560x1440 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G602
Keyboard Razer Pro Type Ultra
Software Windows 10 Professional x64
@Valantar please, please, please stop wasting your time on feeding the trolls. For your own sanity, I beg you.

It's not the same as ADL - ADL has 5.0 x16 PEG and 5.0 for the chipset (IIRC), but no 5.0 M.2. Not that four fewer lanes matter much, but ADL prioritizing 5.0 for GPUs rather than storage never made sense in the first place - it's doubtful any GPU in the next half decade will be meaningfully limited by PCIe 4.0 x16.
ADL is 16x 5.0 lanes for GPU + 4x 4.0 lanes dedicated to M.2 + an effective additional 4x 4.0 lanes that are dedicated to the chipset via the proprietary DMI link. So it's effectively 24 lanes of PCIe from the CPU, which matches Zen 4. Yes, I agree that in terms of *bandwidth* Zen 4 is far ahead, but lane count is more important than bandwidth IMO.
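For the bandwidth side of that comparison, a rough tally using the lane counts as claimed in these posts and approximate per-lane throughput (~2 GB/s for PCIe 4.0, ~4 GB/s for 5.0, ignoring encoding overhead):

```python
# (generation, lane count) pairs as described in the posts above.
GBps_per_lane = {4: 2.0, 5: 4.0}   # approximate usable throughput

adl = [(5, 16), (4, 4), (4, 4)]    # x16 PEG 5.0, x4 M.2 4.0, DMI-equivalent 4.0
zen4 = [(5, 24)]                   # 24 general-purpose 5.0 lanes from the CPU

def total_bw(links):
    """Sum per-link bandwidth in GB/s."""
    return sum(GBps_per_lane[gen] * lanes for gen, lanes in links)

print(f"ADL:   {sum(l for _, l in adl)} lanes, ~{total_bw(adl):.0f} GB/s")
print(f"Zen 4: {sum(l for _, l in zen4)} lanes, ~{total_bw(zen4):.0f} GB/s")
```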
... is that any more likely than them buying a shitty $20 AM5 tower cooler? There are plenty of great AM4 coolers out there after all. Retaining compatibility reduces waste in a meaningful and impactful way. You don't fix people being stupid by forcing obsolescence onto fully functional parts.
I'm not arguing that allowing people to reuse existing coolers is a bad thing, I'm merely noting that there will inevitably be those who try to use coolers rated for 65W on 170W parts and blame AMD as a result. Intel's approach has its own downsides, although I imagine the cooler manufacturers like Intel a bit more.

I'm also a little sceptical of the claimed compatibility; surely the dimensions (particularly Z-height) of the new socket and chip are different enough to make a meaningful difference?
X670E is literally marketed as "PCIe 5.0 everywhere", providing 24 more lanes of 5.0 (and, presumably, another 4 of which go to the CPU interconnect, leaving a total of 40). X670 most likely retains the 5.0 chipset uplink even if it runs its PCIe at 4.0 speeds. The main limitation to this is still the cost of physically implementing this amount of high speed IO on the motherboard, as that takes a lot of layers and possibly higher quality PCB materials.
I'm aware that HSIO is expensive, especially PCIe 5.0, which is why I was hoping the CPU and chipsets would be putting out more lanes. My main concern is that the lowest-end chipset will, as usual, get the lowest PCIe version and number of lanes, and manufacturers will thus not bother with USB4 or USB-C in SKUs using said chipset. Given that I've already seen a few boards and not even the highest-end of them have more than 2 type-C ports on the rear panel, I'll withhold judgement until actual reviews drop.
Several announced motherboards mention it explicitly, so no need to worry on that front. The only unknown is whether it's integrated into the CPU/chipset or not. Support is there.
Thanks, although I'd much prefer for it to be platform-native as opposed to relying on third-party controllers. Experience has shown that those are generally, to put it bluntly, shit (I'm looking at you VIA). To be fair, ASMedia has been pretty good.
On this I'd have to disagree with you. DS has a lot of potential - current software just can't make use of our blazing fast storage, and DS goes a long way towards fixing that issue. It just needs a) to be fully implemented, with GPU decompression support, and b) be adopted by developers. The latter is pretty much a given for big name titles given that it's an Xbox platform feature though.
Sure it has potential, but I don't believe that it's been a game-changer (pardon the pun) for anything more than a handful of console titles. If it was so great I'd expect its adoption to be much higher in console land, which would push much higher adoption for PCs to allow ports, but I'm just not seeing it.
 
Low quality post by btk2k2
Joined
Apr 21, 2005
Messages
184 (0.03/day)
But ... the workload here is essentially arbitrary. Which means that your "fundamental property" of speed is arbitrarily defined. That in and of itself makes your strict definition meaningless. On top of that, we are not operating within the realm of physics here. We are operating within the broader world, and general, everyday language - which does not conform to such strict definitions. Ever. As the dictionary definition quoted above demonstrated (and as I suspect the one you yourself quoted very selectively also showed), "faster" in everyday language has many possible meanings.

It is a bog-standard triangle equation. You can re-arrange the terms as you need. If you have two values of the triangle, you don't need to be given the third; you can calculate it, and it is trivial.

Your PC has a power supply. If you look at the sticker, it will usually give you the max current on the 12 V rail. From that you can calculate the resistance, because V = IR: we have the voltage and we have the current, so to get resistance you re-arrange and get R = V/I, and boom. The alternative is to grab a multimeter, load up the 12 V rail to max load, and measure the resistance; you will get the same answer +/- the accuracy of the meter.

The fact that you need to calculate the resistance does not stop it from existing because it is inextricably linked to the other values and is required for it to work.
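For concreteness, that rearrangement as a quick Python sketch (the sticker values here are made up for illustration, and this sets aside whether a PC is really a simple resistive load):

```python
# Illustrative only: Ohm's law rearranged, using hypothetical PSU sticker values.
V = 12.0       # volts on the 12 V rail
I_max = 62.5   # hypothetical max rated current in amps

# V = I * R, rearranged to solve for resistance:
R = V / I_max
print(f"Effective load resistance at full rated load: {R:.3f} ohms")  # ~0.192
```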

The same goes for speed = distance / time, or the more apt but (semantics aside) identical rate = work done / time. We have the work done (1 render) and we have the time (204 s for Zen 4, 297 s for the 12900K); ergo, by definition, we have the rate as well. You can't not have the rate when given the other two pillars of the equation.
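To make the two framings concrete, here is a minimal Python sketch using the quoted render times (one render is taken as the unit of work; the variable names are mine):

```python
# Render times quoted above, in seconds per render.
t_zen4, t_12900k = 204.0, 297.0

# rate = work done / time, with work done = 1 render:
r_zen4 = 1 / t_zen4       # ~0.00490 renders/s
r_12900k = 1 / t_12900k   # ~0.00337 renders/s

# The same comparison expressed two ways:
time_saved = (t_12900k - t_zen4) / t_12900k  # ~0.313 -> "31% less time"
rate_gain = r_zen4 / r_12900k - 1            # ~0.456 -> "46% higher rate"
print(f"{time_saved:.1%} less time, {rate_gain:.1%} more work per second")
```

Both figures describe the same data; the disagreement is over which one the word "faster" should attach to.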

No, it is correct only if that percentage relates to a rate. It does not - an arbitrarily defined rate can be calculated from the data given, but the rate was not the data given, nor was the percentage based on a rate. The percentage was based on improvement (towards zero, which is obviously unreachable) compared to a known comparison point, for which both data points were in seconds to complete one unit of work.

It doesn't matter that you can calculate a rate from this: the rate wasn't in the data presented, nor was a rate discussed by anyone involved. The use of the word "faster" here was clearly indicative of an improvement in time to finish a single workload, and not an increased rate of work per second.

The rate is in the data presented, because it has to be when you are given a number of pieces of work done and a time to complete that work. It would be like a business giving you their revenue and their expenses, and you then saying 'the profit is not in the data presented, and calculating it takes significant effort in manipulating the data to come to that figure' - it is total nonsense.

Again: for your definition to be true, a significant effort in manipulating the data is required in order to change it into a unit of measure that is not represented in the base data. Yes, you get time to finish for the winner + additional time for those following (which then adds up to their total time through simple addition). Can you calculate their average rate of movement from that? Absolutely! Just like you can calculate a whole host of other things. None of that would invalidate anyone starting with the base data and saying "Max finished .0032% faster than Perez" - that would be an entirely accurate statement. The only reason why this isn't done in such situations is that the percentage difference would be minuscule and thus meaningless in terms of effectively communicating the difference. The base unit of this data is not velocity, it is time to finish a known workload. You can produce an average velocity from that, but that is fundamentally irrelevant to the argument of whether "finishing X seconds earlier" or transforming that directly into a percentage are valid applications of the word "faster" or not. They are. Unless you are a physicist writing an academic paper, "faster" is perfectly applicable to "finished X% or Y seconds earlier".

If calculating speed when presented with a number of units of work done and a time to do it in is 'a significant effort in manipulating the data', it might explain why you are not getting it.

This base unit of data you are harping on about is a fiction you have invented. The units are swappable if you do the maths correctly, because we have been given enough information with which to do so.

Further, we are in the arena of comparative benchmarks, which are ideally done with a certain amount of rigor. That makes them scientific in nature, so sticking to the scientific/mathematical definitions of words is the correct call. AMD did not do that in this case.

... so why on earth have you been spending several pages arguing that AMD's application of it is wrong?

Presenting time to completion or presenting work done / s on a chart or as raw numbers are perfectly valid ways to present the data. Comparing them is where AMD went wrong because they had a smaller is better measure and did the comparison backwards.

But they did. "Faster" perfectly encapsulates what they presented. And due to the presentation not being a chart or a graph, but a written sentence accompanied by two illustrated data points, the confusion you're referring to just doesn't exist. The problem you're bringing up has some validity, but it isn't applicable to this situation.

If AMD were answering a GCSE maths or physics exam and gave that result they would lose marks for an incorrect answer.

This is, once again, just not true. "Faster" in everyday language doesn't have a single universal meaning. It has many different meanings - which you yourself have illustrated. That you insist on the primacy of one of those meanings regardless of context doesn't say anything meaningful about the application of the word here, but rather it says something about an inflexible and unrealistic approach to the use of language. Exceptionally few words have singular, fixed meanings, and while many do so in specific contexts (lord knows I use a lot of terms in my work that mean entirely different things in my application than in colloquial language), you cannot argue that such contextual meanings are universal and overrule all other possible meanings. That isn't how language works.

Well, we certainly don't mean fast as in to not eat, and we don't mean fast as in stuck fast - but why not use those definitions in this context as well? Oh wait, because they are the wrong definitions for this use case.

EDIT:

Fast: quick or quickly - Cambridge Dictionary.
 
Last edited:
Low quality post by Valantar
Joined
May 2, 2017
Messages
7,762 (2.82/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I got a headache reading this... :pimp:
So, by this logic, Intel being quicker than AMD in the past should not have lost the race in Blender or any other application that Intel lost in. :p:D
Lol :p No, just trying to exemplify that within any single test, different measurements can tell us different things, even if some measurements might seem to contradict others (like the quicker/faster distinction) - and that applying a definition of a word from a different context might then cause you to misunderstand things quite severely.
I believe that's how to use that; it's called the margin of error.
I don't think margin of error is generally discussed with these types of measurements? It's relevant, but that's per result, not in the comparisons between them. My point was that you don't see percentage comparisons of something like the results of a race because the differences would be minuscule - say, a 10 second win in a 30-minute race. 10 seconds describes that far better than whatever percentage or speed difference that would equate to.
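The arithmetic behind that, as a tiny sketch (the 10-second win and 30-minute race are just the hypothetical figures above):

```python
# A hypothetical 10 s win in a 30-minute race, expressed as a percentage.
win_margin_s = 10.0
race_time_s = 30 * 60  # 1800 s

pct = win_margin_s / race_time_s
print(f"A {win_margin_s:.0f} s win is {pct:.2%} of the race time")  # ~0.56%
```

Which is why "won by 10 seconds" communicates the gap far better than "0.56% faster" would.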
@Valantar please, please, please stop wasting your time on feeding the trolls. For your own sanity, I beg you.
Heh, I guess it's a hobby of mine? I can generally tire them out, and at times that can actually make a meaningful difference in the end. We'll see how this plays out.
ADL is 16x 5.0 lanes for GPU + 4x 4.0 lanes dedicated to M.2 + an effective additional 4x 4.0 lanes that are dedicated to the chipset via the proprietary DMI link. So it's effectively 24 lanes of PCIe from the CPU, which matches Zen 4. Yes, I agree that in terms of *bandwidth* Zen 4 is far ahead, but lane count is more important than bandwidth IMO.
I agree on that - IIRC I was just pointing out that AMD has more 5.0 lanes, even if the lane count is the same.
I'm not arguing that allowing people to reuse existing coolers is a bad thing, I'm merely noting that there will inevitably be those who try to use coolers rated for 65W on 170W parts and blame AMD as a result. Intel's approach has its own downsides, although I imagine the cooler manufacturers like Intel a bit more.

I'm also a little sceptical of the claimed compatibility; surely the dimensions (particularly Z-height) of the new socket and chip are different enough to make a meaningful difference?
They seem to be claiming no change, though that would surprise me a bit. Guess we'll see - we might get a similar situation to "compatible" ADL coolers, or it might be perfectly fine.
I'm aware that HSIO is expensive, especially PCIe 5.0, which is why I was hoping the CPU and chipsets would be putting out more lanes. My main concern is that the lowest-end chipset will, as usual, get the lowest PCIe version and number of lanes, and manufacturers will thus not bother with USB4 or USB-C in SKUs using said chipset. Given that, of the few boards I've already seen, not even the highest-end one has more than 2 Type-C ports on the rear panel, I'll withhold judgement until actual reviews drop.
I think we share that concern - quite frankly I don't care much about PCIe 4.0 or 5.0 for my use cases, and care more about having enough M.2 and rear connectivity. Possibly the worst part of specs-based marketing is that anyone trying to build a feature-rich midrange product gets shit on for their product not having the newest, fanciest stuff, rather than being lauded for providing a broad range of useful midrange features. That essentially means nobody ever makes those products - instead it's everything including the kitchen sink at wild prices, or stripped to the bone, with very little in between.
Thanks, although I'd much prefer for it to be platform-native as opposed to relying on third-party controllers. Experience has shown that those are generally, to put it bluntly, shit (I'm looking at you, VIA). To be fair, ASMedia has been pretty good.
Yeah, that would be nice, though I doubt we'll see that on socketed CPUs any time soon - the pin count would likely be difficult to defend in terms of engineering. I hope AMD gets this into their mobile APUs though.
Sure it has potential, but I don't believe that it's been a game-changer (pardon the pun) for anything more than a handful of console titles. If it was so great I'd expect its adoption to be much higher in console land, which would push much higher adoption for PCs to allow ports, but I'm just not seeing it.
AFAIK all titles developed only for Xbox Series X/S use it, but most titles seem to be cross-compatible still, and might thus leave it out (unless you want reliance on it to absolutely murder performance on older HDD-based consoles). I think we'll see far, far more of it in the coming years, as these older consoles get left behind. I'm frankly surprised that PC adoption hasn't been faster given that SSD storage has been a requirement for quite a few games for years now. Still, as with all new APIs it's pretty much random whether it gains traction or not.


It is a bog-standard triangle equation. You can re-arrange the terms as you need. If you have two values of the triangle, you don't need to be given the third; you can calculate it, and it is trivial.
I never said it wasn't. I said you're not basing your percentage on the data presented, but on a transformation of said data, which invalidates you comparing it to percentages based on that data.
Your PC has a power supply. If you look at the sticker, it will usually give you the max current on the 12 V rail. From that you can calculate the resistance, because V = IR: we have the voltage and we have the current, so to get resistance you re-arrange and get R = V/I, and boom. The alternative is to grab a multimeter, load up the 12 V rail to max load, and measure the resistance; you will get the same answer +/- the accuracy of the meter.

The fact that you need to calculate the resistance does not stop it from existing because it is inextricably linked to the other values and is required for it to work.
Except that your PC is not a resistive load - otherwise you would be right. But... why on earth are you going on about this irrelevant nonsense?
The same goes for speed = distance / time, or the more apt but (semantics aside) identical rate = work done / time. We have the work done (1 render) and we have the time (204 s for Zen 4, 297 s for the 12900K); ergo, by definition, we have the rate as well. You can't not have the rate when given the other two pillars of the equation.
Again: I never said it couldn't be calculated from the data provided; I said it wasn't the data provided. In order to get a rate, you must first perform a calculation. That's it. The rate is inherent to the data provided, but the data provided isn't the rate, nor is the percentage presented a percentage that relates directly to the rate of work - it relates to the time to completion. This is literally the entire dumb misunderstanding that you've been harping on this entire time.
The rate is in the data presented, because it has to be when you are given a number of pieces of work done and a time to complete that work. It would be like a business giving you their revenue and their expenses, and you then saying 'the profit is not in the data presented, and calculating it takes significant effort in manipulating the data to come to that figure' - it is total nonsense.
Performing a calculation on data in order to transform its unit is ... transforming the data. It is now different data, in a different format. Is this difficult to grasp?
This base unit of data you are harping on about is a fiction you have invented.
The base unit of data is literally the unit in which the data was provided. AMD provided data in the format of time to complete one render, and a percentage difference between said times.
The units are swappable if you do the maths correctly, because we have been given enough information with which to do so.
I have never said anything to contradict this, and your apparent belief that I have is rather crucial to the problem here.
Further, we are in the arena of comparative benchmarks, which are ideally done with a certain amount of rigor. That makes them scientific in nature, so sticking to the scientific/mathematical definitions of words is the correct call. AMD did not do that in this case.
There is no "mathematical" definition of "faster", as speed isn't a mathematical concept, even if the strict physical definition of it is described using math as a tool (as physics generally does). Also: if computer benchmarks belong to a scientific discipline, it is computer science, which is distinct from math, physics, etc. even if it builds on a complex combination of those and other fields. Within that context, and especially within this not being a scientific endeavor but a PR event - one focused on communication! - using strict scientific definitions of words that differ from colloquial meanings would be really dumb. That's how you get people misunderstanding you.
Presenting time to completion or presenting work done / s on a chart or as raw numbers are perfectly valid ways to present the data.
... did I say that it wasn't? I said that that wasn't what AMD did here, that it wouldn't be useful to make a chart with just two data points, and that their presentation was clearer than such a chart would have been for the purpose it served here.
Comparing them is where AMD went wrong because they had a smaller is better measure and did the comparison backwards.
It isn't backwards - the measure is "smaller is better". Your opinion is that they should have converted it to a rate, which would have been "higher is better". You're welcome to that opinion, but you don't have the right to force that on anyone else, nor can you make any valid claim towards it being the only correct one.
If AMD were answering a GCSE maths or physics exam and gave that result they would lose marks for an incorrect answer.
I guess it's a good thing marketing and holding a presentation for press and the public isn't a part of GCSE math or physics exams then ... almost as if, oh, I don't know, this is a different context where other terms are better descriptors?
Well, we certainly don't mean fast as in to not eat, and we don't mean fast as in stuck fast - but why not use those definitions in this context as well? Oh wait, because they are the wrong definitions for this use case.
Correct! But it would seem that you are implying that because those meanings are wrong for this use case, all meanings beyond yours are also wrong? 'Cause the data doesn't support your conclusions in that case; you're making inferences not supported by evidence. Please stop doing that.

You're allowed to have an opinion that converting "lower is better" data to "higher is better" equivalents is clearer, easier to read, etc. You can argue for that. What you can't do is what this started out with: arguing that because this data can be converted this way, the numbers as presented are wrong.

This is even abundantly clear from your own arguments - these numbers can be transformed into other configurations that represent the same things differently. On that basis, arguing that AMD's percentage is the wrong way around is plain-faced absurdity. Arguing that your preferred presentation is inherently superior is directly contradicted by saying that all conversions of the same data are equally valid. Pick one or the other, please.

Of course it is not a constant. This is supposedly the outcome. The instructions per second are not a constant either. How can you measure something that changes depending on the environment or use case? Imagine light speed "c" or the electric charge not being a constant. That is why all measurements are wrong no matter how you measure it, since you can't measure it correctly either way. So all are wrong, but at the same time all are some sort of indication. You can't say this one is wrong and this one is correct. IPC is some sort of enigma that people cling to like dark matter. What we were discussing earlier, and what you've been trying to explain, is not IPC but general performance across the board: a variety of benchmarks perceived as common, or a standard to showcase the workload and performance of a processor.
The way IPC is used in the industry today, it essentially means generalizable performance per clock for the architecture - which is the only reasonable meaning it can have given the variability of current architectures across workloads. That is why you need a broad range of tests: because no single test can provide a generalizable representation of per-clock performance of an architecture. The result of any single benchmark will never be broadly representative. Which, when purportedly doing comparative measurements of something characteristic of the architecture, is then methodologically flawed to such a degree that the result is essentially rendered invalid. You're not then measuring generalizable performance per clock, you're measuring performance per clock in that specific workload and nothing else. And that's a major difference.
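As a sketch of what that looks like in practice, one common approach is to normalize each result to a baseline at a fixed clock and take the geometric mean across the suite (all numbers below are invented purely to illustrate the method):

```python
from math import prod

# Hypothetical per-clock uplifts versus a baseline architecture (1.00 = parity),
# measured at a fixed clock across a small benchmark suite.
uplift_per_test = {"render": 1.18, "compress": 1.09, "physics": 1.22, "compile": 1.11}

# Per-workload ratios vary - exactly why no single test is representative.
ratios = list(uplift_per_test.values())

# The geometric mean condenses the suite into one generalizable "IPC uplift" figure.
geomean = prod(ratios) ** (1 / len(ratios))
print(f"Average per-clock uplift across the suite: {geomean - 1:.1%}")  # ~14.9%
```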
 