I got a headache reading this...
So, by this logic, Intel, being quicker than AMD in the past, should not have lost the race in Blender or any other application that Intel lost in.
Lol
No, just trying to exemplify that within any single test, different measurements can tell us different things, even if some measurements might seem to contradict others (like the quicker/faster distinction) - and that applying a definition of a word from a different context might then cause you to misunderstand things quite severely.
I believe that's how you use that; it's called the margin of error.
I don't think margin of error is generally discussed with these types of measurements? It's relevant, but per result, not to the comparisons between them. My point was that you don't see percentage comparisons of something like the results of a race because the differences would be minuscule - say, a 10-second win in a 30-minute race. 10 seconds describes that far better than whatever percentage or speed difference it would equate to.
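Just to illustrate how small that percentage would be (hypothetical numbers, obviously):

```python
# Purely illustrative: a 10-second win in a 30-minute race, as a percentage.
winner_s = 30 * 60              # winner's time: 30 minutes, in seconds
loser_s = winner_s + 10         # loser finishes 10 seconds later

pct_less_time = (1 - winner_s / loser_s) * 100
print(f"10 s margin -> {pct_less_time:.2f}% less time")   # ~0.55%
```

"10 seconds" tells you something; "0.55% faster" tells you basically nothing.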
@Valantar please, please,
please stop wasting your time on feeding the trolls. For your own sanity, I beg you.
Heh, I guess it's a hobby of mine? I can generally tire them out, and at times that can actually make a meaningful difference in the end. We'll see how this plays out.
ADL is 16x 5.0 lanes for GPU + 4x 4.0 lanes dedicated to M.2 + an effective additional 4x 4.0 lanes that are dedicated to the chipset via the proprietary DMI link. So it's effectively 24 lanes of PCIe from the CPU, which matches Zen 4. Yes, I agree that in terms of *bandwidth* Zen 4 is far ahead, but lane count is more important than bandwidth IMO.
I agree on that - IIRC I was just pointing out that AMD has more 5.0 lanes, even if the lane count is the same.
I'm not arguing that allowing people to reuse existing coolers is a bad thing, I'm merely noting that there will inevitably be those who try to use coolers rated for 65W on 170W parts and blame AMD as a result. Intel's approach has its own downsides, although I imagine the cooler manufacturers like Intel a bit more.
I'm also a little sceptical of the claimed compatibility; surely the dimensions (particularly Z-height) of the new socket and chip are different enough to make a meaningful difference?
They seem to be claiming no change, though that would surprise me a bit. Guess we'll see - we might get a similar situation to "compatible" ADL coolers, or it might be perfectly fine.
I'm aware that HSIO is expensive, especially PCIe 5.0, which is why I was hoping the CPU and chipsets would be putting out more lanes. My main concern is that the lowest-end chipset will, as usual, get the lowest PCIe version and number of lanes, and manufacturers will thus not bother with USB4 or USB-C in SKUs using said chipset. Given that I've already seen a few boards and not even the highest-end of them has more than two Type-C ports on the rear panel, I'll withhold judgement until actual reviews drop.
I think we share that concern - quite frankly I don't care much about PCIe 4.0 or 5.0 for my use cases, and care more about having enough M.2 and rear connectivity. Possibly the worst part of specs-based marketing is that anyone trying to build a feature-rich midrange product gets shit on for that product not having the newest, fanciest stuff, rather than being lauded for providing a broad range of useful midrange features. Which essentially means nobody ever makes those products - instead it's everything including the kitchen sink at wild prices, or stripped to the bone, with very little in between.
Thanks, although I'd much prefer for it to be platform-native as opposed to relying on third-party controllers. Experience has shown that those are generally, to put it bluntly, shit (I'm looking at you, VIA). To be fair, ASMedia has been pretty good.
Yeah, that would be nice, though I doubt we'll see that on socketed CPUs any time soon - the pin count would likely be difficult to defend in terms of engineering. I hope AMD gets this into their mobile APUs though.
Sure it has potential, but I don't believe that it's been a game-changer (pardon the pun) for anything more than a handful of console titles. If it was so great I'd expect its adoption to be much higher in console land, which would push much higher adoption for PCs to allow ports, but I'm just not seeing it.
AFAIK all titles developed only for Xbox Series X/S use it, but most titles seem to be cross-compatible still, and might thus leave it out (unless you want reliance on it to absolutely murder performance on older HDD-based consoles). I think we'll see far, far more of it in the coming years, as these older consoles get left behind. I'm frankly surprised that PC adoption hasn't been faster given that SSD storage has been a requirement for quite a few games for years now. Still, as with all new APIs it's pretty much random whether it gains traction or not.
It is a bog-standard triangle equation. You can rearrange the terms as you need. If you have two values of the triangle, you don't need to be given the third; you can calculate it, and it is trivial.
I never said it wasn't. I said you're not basing your percentage on the data presented, but on a transformation of said data, which invalidates you comparing it to percentages based on that data.
Your PC has a power supply. If you look at the sticker, it will usually give you the max current on the 12V rail. From that you can calculate the resistance, because V = IR; we have voltage and we have current, so to get resistance you rearrange and get R = V/I, and boom. The alternative is to grab a multimeter, load up the 12V rail to max load, and measure the resistance - you will get the same answer +/- the accuracy of the meter.
The fact that you need to calculate the resistance does not stop it from existing because it is inextricably linked to the other values and is required for it to work.
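To be fair, the rearrangement itself is trivially true - here's a quick sketch with made-up sticker values, which only holds for an ideal, purely resistive load at exactly the rated current:

```python
# Hypothetical PSU sticker values - made up for illustration, and only
# meaningful for an ideal resistive load drawing the full rated current.
v_rail = 12.0                 # 12 V rail
i_max = 62.5                  # rated max current in amps (hypothetical)

r_equiv = v_rail / i_max      # R = V / I
print(f"R = {r_equiv:.3f} ohm at the rated max load")   # ~0.192 ohm
```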
If it weren't for the fact that your PC is not a resistive load, you would be right, but... why on earth are you going on about this irrelevant nonsense?
Same for speed = distance / time, or the more apt but still actually the same aside from semantics: rate = work done / time. We have the work done (1 render), we have the time (204s for Zen 4, 297s for the 12900K); ergo, by definition, we have the rate as well. You can't not have the rate when given the other two pillars of the equation.
Again: I never said it couldn't be calculated from the data provided; I said it wasn't the data provided. In order to get a rate, you must first perform a calculation. That's it. The rate is inherent to the data provided, but the data provided isn't the rate, nor is the percentage presented a percentage that relates directly to the rate of work - it relates to the time to completion. That is literally the dumb misunderstanding you've been harping on this entire time.
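To make that distinction concrete, here's a quick sketch using the two render times quoted above - same data, two different comparisons:

```python
# The two render times quoted above, in seconds for one render.
t_zen4 = 204.0
t_12900k = 297.0

# Comparison of time to completion ("lower is better"), which is what the
# presented percentage relates to:
pct_less_time = (1 - t_zen4 / t_12900k) * 100                 # ~31% less time

# Comparison of rate of work (renders per second, "higher is better"):
pct_higher_rate = ((1 / t_zen4) / (1 / t_12900k) - 1) * 100   # ~46% higher rate

print(f"{pct_less_time:.1f}% less time vs {pct_higher_rate:.1f}% higher rate")
```

Both are valid descriptions of the same two numbers; they just answer different questions.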
The rate is in the data presented, because it has to be when you are given a number of pieces of work done and a time to complete that work. It would be like a business giving you their revenue and their expenses and you then saying 'the profit is not in the data presented, and calculating it takes significant effort in manipulating the data to come to that figure' - it is total nonsense.
Performing a calculation on data in order to transform its unit is ... transforming the data. It is now different data, in a different format. Is this difficult to grasp?
This base unit of data you are harping on about is a fiction you have invented.
The base unit of data is literally the unit in which the data was provided. AMD provided data in the format of time to complete one render, and a percentage difference between said times.
The units are swappable if you do the maths correctly, because we have been given enough information with which to do so.
I have never said anything to contradict this, and your apparent belief that I have is rather crucial to the problem here.
Further, we are in the arena of comparative benchmarks, which is ideally done with a certain amount of rigor. That makes it scientific in nature, so sticking to the scientific/mathematical definition of words is the correct call. AMD did not do that in this case.
There is no "mathematical" definition of "faster", as
speed isn't a
mathematical concept, even if the strict physical definition of it is described using math as a tool (as physics generally does). Also: if computer benchmarks belong to a scientific discipline, it is computer science, which is distinct from math, physics, etc. even if it builds on a complex combination of those and other fields. Within that context, and especially within this not being a scientific endeavor but a PR event - one focused on communication! - using strict scientific definitions of words that differ from colloquial meanings would be
really dumb. That's how you get people misunderstanding you.
Presenting time to completion or presenting work done / s on a chart or as raw numbers are perfectly valid ways to present the data.
... did I say that it wasn't? I said that that wasn't what AMD did here, that it wouldn't be useful to make a chart with just two data points, and that their presentation was clearer than such a chart would have been for the purpose it served here.
Comparing them is where AMD went wrong, because they had a smaller-is-better measure and did the comparison backwards.
It isn't backwards - the measure is "smaller is better".
Your opinion is that they should have converted it to a rate, which would have been "higher is better". You're welcome to that opinion, but you don't have the right to force that on anyone else, nor can you make any valid claim towards it being the only correct one.
If AMD were answering a GCSE maths or physics exam and gave that result, they would lose marks for an incorrect answer.
I guess it's a good thing marketing and holding a presentation for press and the public isn't part of GCSE math or physics exams, then ... almost as if, oh, I don't know, this is a different context where other terms are better descriptors?
Well, we certainly don't mean fast as in to not eat, and we don't mean fast as in stuck fast, but why not use those definitions in this context as well? Oh wait, because they are the wrong definitions for this use case.
Correct! But it would seem that you are implying that because those meanings are wrong for this use case, all meanings beyond yours are also wrong? 'Cause the data doesn't support your conclusions in that case; you're making inferences not supported by evidence. Please stop doing that. You're allowed to have an opinion that converting "lower is better" data to "higher is better" equivalents is clearer, easier to read, etc. You can argue for that. What you can't do is what this started out with: arguing that because this data can be converted this way, the numbers as presented are wrong. This is abundantly clear from your own arguments - that these numbers can be transformed into other configurations that represent the same things differently. On that basis, arguing that AMD's percentage is the wrong way around is plainly absurd. Arguing for your preferred presentation being inherently superior is directly contradicted by saying that all conversions of the same data are equally valid. Pick one or the other, please.
Of course it is not a constant. This is supposedly the outcome. The instructions per second are not a constant either. How can you measure something that changes depending on the environment or the use case? Imagine light speed or electric charge not being a constant. That is why all measurements are wrong no matter how you measure it, since you can't measure it correctly either way. So all are wrong, but at the same time all are some sort of indication. You can't say this is wrong and this is correct. IPC is some sort of enigma that people cling to like dark matter. What we were discussing earlier, and what you have been trying to explain, is not IPC but general performance across the board: a variety of benchmarks perceived as common or as a standard to showcase the workload and performance of a processor.
The way IPC is used in the industry today, it essentially means generalizable performance per clock for the architecture - which is the only reasonable meaning it can have given how much current architectures vary across workloads. That is why you need a broad range of tests: because no single test can provide a generalizable representation of the per-clock performance of an architecture. The result of any single benchmark will never be broadly representative, which, when you're purportedly doing comparative measurements of something characteristic of the architecture, is methodologically flawed to such a degree that the result is essentially rendered invalid. You're not then measuring generalizable performance per clock, you're measuring performance per clock in that specific workload and nothing else. And that's a major difference.
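As a rough sketch of what I mean (entirely made-up scores and architecture names, and the geometric mean is just one common way of aggregating):

```python
from statistics import geometric_mean

# Made-up per-clock scores (score per GHz) for two hypothetical architectures
# across a small benchmark suite - the numbers are purely illustrative.
arch_a = {"render": 10.2, "compress": 8.7, "compile": 12.1, "encode": 9.4}
arch_b = {"render": 9.1, "compress": 9.3, "compile": 10.5, "encode": 9.9}

# Per-workload ratios: how much faster per clock A is than B in each test.
ratios = {test: arch_a[test] / arch_b[test] for test in arch_a}

# Any single test can tell you almost anything...
for test, ratio in ratios.items():
    print(f"{test:>9}: {(ratio - 1) * 100:+.1f}%")

# ...while the aggregate across the whole suite is what "IPC" is actually
# being used to mean: generalizable per-clock performance.
overall = geometric_mean(ratios.values())
print(f"  overall: {(overall - 1) * 100:+.1f}%")
```

Pick only "render" or only "compress" from that table and you come away with opposite conclusions about the same pair of made-up architectures - which is exactly the methodological problem with single-benchmark "IPC" claims.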