
AMD Unveils 5 nm Ryzen 7000 "Zen 4" Desktop Processors & AM5 DDR5 Platform

I never said it wasn't. I said you're not basing your percentage on the data presented, but on a transformation of said data, which invalidates you comparing it to percentages based on that data.

Rate is inversely proportional to time taken; they are linked, so if you have one you have the other by default.

So it means 31% less time == 0.69 time == 1/0.69 rate == 1.45 rate == 45% faster. These are all different ways of saying the same thing.

If we say 31% faster == 1.31 rate == 1/1.31 time ≈ 0.76 time == 24% less time. It does not get you to 0.69 time when starting from a position of 31% faster.
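For what it's worth, here is a minimal sketch of those two conversions in Python (my own illustration of the arithmetic above, nothing from AMD's slide):

```python
# Convert a "% less time" figure into the equivalent "% faster" (rate) figure,
# and vice versa. Time factor and rate factor are reciprocals of each other.

def less_time_to_faster(pct_less_time: float) -> float:
    """31% less time -> 0.69x time -> 1/0.69x rate -> ~45% faster."""
    time_factor = 1 - pct_less_time / 100   # 0.69
    rate_factor = 1 / time_factor           # ~1.45
    return (rate_factor - 1) * 100          # ~45 (% faster)

def faster_to_less_time(pct_faster: float) -> float:
    """31% faster -> 1.31x rate -> ~0.76x time -> ~24% less time."""
    rate_factor = 1 + pct_faster / 100      # 1.31
    time_factor = 1 / rate_factor           # ~0.76
    return (1 - time_factor) * 100          # ~24 (% less time)

print(less_time_to_faster(31))   # ~44.9 -> "45% faster"
print(faster_to_less_time(31))   # ~23.7 -> "24% less time"
```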

Now I know you are going to say 'but in the context of time 31% faster == 0.69 time ...'. Sure, you can believe that, but it is an incorrect use of the word faster and is where quicker might be used. Hence the prior reference to drag racing, where quicker refers to acceleration (so 0-60 times) and fast refers to speed (I topped out at 150 MPH on the quarter-mile run).

I am sure you will argue 'but your definition is not supreme over other definitions', and sure, it is not. The use of faster and quicker in these contexts has convention, though, and if you want to stick to the convention you don't do 31% faster == 0.69 time. AMD broke the convention of the use of the term faster, which is why it is considered incorrect.

Again: I never said it couldn't be calculated from the data provided; I said it wasn't the data provided. In order to get a rate, you must first perform a calculation. That's it. The rate is inherent to the data provided, but the data provided isn't the rate, nor is the percentage presented a percentage that relates directly to the rate of work - it relates to the time to completion. This is literally the entire dumb misunderstanding that you've been harping on this entire time.

Performing a calculation on data in order to transform its unit is ... transforming the data. It is now different data, in a different format. Is this difficult to grasp?

The use of the word faster ties the % to rate. The only correct way to tie it to time in the context of a benchmark is to give a discrete time saving like 93s faster.

Fast Quick or Quickly - Cambridge Dictionary.

The base unit of data is literally the unit in which the data was provided. AMD provided data in the format of time to complete one render, and a percentage difference between said times.

They got the percentage difference wrong when combining it with the term faster. There are words that are perfectly fine for describing a 31% reduction in time taken, faster is not one of them.

There is no "mathematical" definition of "faster", as speed isn't a mathematical concept, even if the strict physical definition of it is described using math as a tool (as physics generally does). Also: if computer benchmarks belong to a scientific discipline, it is computer science, which is distinct from math, physics, etc. even if it builds on a complex combination of those and other fields. Within that context, and especially within this not being a scientific endeavor but a PR event - one focused on communication! - using strict scientific definitions of words that differ from colloquial meanings would be really dumb. That's how you get people misunderstanding you.

The reason there is an issue is because AMD used a term with a well understood colloquial meaning in a non-conventional way; not that great an idea if you are meant to be focusing on communication.

It isn't backwards - the measure is "smaller is better". Your opinion is that they should have converted it to a rate, which would have been "higher is better". You're welcome to that opinion, but you don't have the right to force that on anyone else, nor can you make any valid claim towards it being the only correct one.

They can display it as lower is better without issue. The problem comes when you write the blurb for that in terms of 'A is x% faster than B' and get the relative % incorrect. If AMD wanted to highlight the reduction in time over the increase in computational performance, they needed to use a different term from faster. That is it. That is the issue, nothing more than that.

I guess it's a good thing marketing and holding a presentation for press and the public isn't a part of GCSE math or physics exams then ... almost as if, oh, I don't know, this is a different context where other terms are better descriptors?

Yes there are better descriptors you can use when trying to describe a reduction in time taken as a relative % value than faster.

Correct! But it would seem that you are implying that because those meanings are wrong for this use case, all meanings beyond yours are also wrong? 'Cause the data doesn't support your conclusions in that case; you're making inferences not supported by evidence. Please stop doing that. You're allowed to have an opinion that converting "lower is better" data to "higher is better" equivalents is more clear, easier to read, etc. You can argue for that. What you can't do is what this started out with: you arguing that because this data can be converted this way, that makes the numbers as presented wrong. This is even abundantly clear from your own arguments - that these numbers can be transformed into other configurations that represent the same things differently. On that basis, arguing that AMD's percentage is the wrong way around is plain-faced absurdity. Arguing for your preferred presentation being inherently superior is directly contradicted by saying that all conversions of the same data are equally valid. Pick one or the other, please.

Fast Quick or Quickly - Cambridge Dictionary. This is the evidence of the delineation between fast, quick and quickly.

The numbers as presented are wrong when combined with the term faster, because the conventional use of the term faster with a relative % value refers to speed.
 
That is why you need a broad range of tests: because no single test can provide a generalizeable representation of per-clock performance of an architecture.
Let's make this broader; it still applies:
That is why you need a broad range of tests: because no single test can provide an adequate, generalizable representation of performance.
This is why reviews of, let's say, video cards use multiple games to compare performance. However, there are not a lot of different IPC tests to use (as I understand this conversation... :shadedshu:)
 
Performing a calculation on data in order to transform its unit is ... transforming the data. It is now different data, in a different format.
Why would one perform a transformation when computing a relative performance? Let us define performance as p=w/t, where w stands for work and t for time, and suppose that computers 1 and 2 perform the same task in times t1 and t2, respectively. Then, the ratio of their performances is p1/p2 = (w/t1) / (w/t2) = t2/t1.
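A quick numerical check of that identity in Python, using made-up example times (the 204 s and 297 s below are hypothetical, not AMD's figures):

```python
# Verify p1/p2 == t2/t1 for the same amount of work w done in times t1 and t2.
w = 1.0                  # one render (arbitrary unit of work)
t1, t2 = 204.0, 297.0    # hypothetical completion times in seconds

p1, p2 = w / t1, w / t2  # performance = work / time
print(p1 / p2)           # ratio of performances (~1.46)
print(t2 / t1)           # equals the inverse ratio of times (~1.46)
```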
 
No more arguments or epic posts about semantics please. It's not fair to derail threads with such long and arduously off-topic posts.
 
So this didn't get locked, yeah 15% still and upwards of that since it's a initial rumour, all good.
 
From the PCWorld interview (although not 100% clear), it seems that the single-thread performance communicated includes IPC, and depending on which of the models you compare across the stack vs Zen 3, it's 15% and higher (ST performance).
Also, another speculation that comes to mind regarding the Blender score, based on the answer regarding multithreading performance: it seems to be mainly clock driven, with a possible SMT uplift.
I don't know what average all-core frequency a 5950X would be hitting in a similar Blender test, but the difference between the 7950X and 5950X seemingly is 1.45X × 1.05X ≈ 1.52X.
If the 5950X SMT implementation has a 10% lower uplift vs the 7950X SMT implementation, then if the 7950X was hitting 5.2GHz on all cores with the AIO liquid cooler that they used, to hit a +52% uplift over the 5950X, the 5950X would be running at 5.2GHz/1.52/0.9 ≈ 3.8GHz.
3.8GHz seems a little low even for Blender; TPU members that have a 5950X would know more.
Edit: I forgot the IPC difference, so with 5% on top, instead of 3.8GHz for the 5950X we would have around 4GHz, with 10% around 4.2GHz, and so on. So depending on the IPC difference we may not even have SMT improvements at all (not likely, because we are talking around 11% IPC improvement and the 5950X at around 3.8GHz all-core frequency in a similar Blender test).
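A rough sketch of that back-of-the-envelope estimate in Python; every input below is one of the speculated figures from this post (the assumed 5.2GHz all-core clock, the assumed 10% SMT difference), not measured data:

```python
# Speculative back-calculation of the implied 5950X all-core clock,
# using the figures from this post (nothing here is measured data).

uplift = 1.45 * 1.05    # ~1.52x overall 7950X-vs-5950X gap (speculated)
f_7950x = 5.2           # GHz, assumed 7950X all-core clock on the AIO cooler

# Case 1: gap explained by clocks plus a 10% better SMT uplift on Zen 4
print(f_7950x / uplift / 0.9)              # ~3.8 GHz implied 5950X clock

# Case 2: add a 5% or 10% IPC gain on top of the SMT difference
for ipc_gain in (1.05, 1.10):
    print(f_7950x / uplift / 0.9 * ipc_gain)   # ~4.0 GHz, ~4.2 GHz

# Case 3: ~11% IPC gain and no SMT difference at all
print(f_7950x / uplift * 1.11)             # ~3.8 GHz again
```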
 
That's simply not true. There were a lot of UEFI/AGESA issues early on, on both platforms; some took longer to solve than others. Much of it was memory related, but X570 had the boost issues and a lot of people had USB 2.0 problems as well.

As I said, it mostly got resolved after a few months, but some things took quite a while for AMD to figure out.
I've had so many Ryzen systems here for myself, and then all the sales builds - the only issue that ever turned out to be actual AGESA/AMD and not shitty manufacturers was the early RAM incompatibility with Zen 1 and 300-series chipsets hating odd-numbered latencies (which was mostly blown out of proportion by people ignoring that more ranks of RAM = lower max clock speeds. Coming from an Intel board of the time that said 2133, no more, stay!, they'd move to an AMD board that said up to 4000 or whatever and assume that 4000 MUST. WORK. NOW.)

Of the four original boards I had, I've still got three working perfectly fine. The only ones with unsolvable issues were the budget MSI 300 and 450 boards.
The X370 setup had lingering memory issues I could never resolve, until I moved that RAM over to an Intel system... and the issues moved with it. Faulty Corsair RAM that got unstable above 45C, so that issue went away every winter and came back every summer to drive me mad.
 
I could never get my Asus Prime X370 board and Ryzen 7 1700 to work with my Corsair LPX 3200 memory properly. Got up to 2933 at best, as 3000 was never properly stable.
That RAM worked perfectly fine at its rated speed in my previous Intel system.
The first couple of months there were a lot of other weird little issues too, it's in a thread here somewhere...

X570, lots of weird issues again early on and some that took much longer to solve; again, plenty of posts about it here in the forums. The biggest blunder was of course the boost speeds that were promised but took them 3-4 months to deliver after launch.

I never said they didn't solve the issues; my point was simply that I'm sick and tired of being a beta tester for these companies. Spend an extra six months working on these platforms and make them stable before launch, instead of rushing them out so you can launch before your one single competitor. Obviously this doesn't just apply to AMD and Intel, but also to a lot of other companies that have more competition, but even so, the same applies: stop launching products that are beta quality or worse.
 
LPX was the literal worst for ryzen. The LITERAL worst.

I wrote a whole ass paragraph here and gave up, how about just a single image showing that hey - you were overclocking past officially supported speeds?

[image: screenshot of the officially supported memory speeds]


Intel dodged the issue by locking their cheap boards to 2133MHz; AMD just gave people headroom to overclock and then realised trusting consumers was a terrible idea.
 
Well, most people managed 3000 just fine, many 3200 and the lucky few 3466. So only getting to 2933 was :(

You know as well as I do that the official memory speeds mean very little in reality.

As I said, no issue with the same RAM at 3200 on Intel, which was the rated speed of those modules. But as you say, LPX and AMD was a match made in a septic tank. That RAM didn't work any better with my 3700X either...
 
I really do not get why people want a so-called 32-core CPU, while for games it does almost nothing, as most games do not even use 2 to 4 cores.
Sure, some newer games use 2% to 4% of the other cores, but if you turn all those extra cores off, you will not see any difference.
If you're doing CAD/CAM or similar stuff you actually use them, but most people never ever need them.
But funny enough, those are the biggest whiners over this nonsense.

Anyway, my point is that I do not get why new games are so darn slow and have endless load times.
I was watching a friend play RDR2 and was not really impressed, especially with how slow the loading times were for each simple new challenge in the game.
Overall it ran like a snail in my eyes on the latest Xbox.

For me, having more than 8 cores seems more than enough, as I really do not need 32 cores ever.
It makes me sad that the CPU with the most cores these days gets the highest clocks.
For me that is counterproductive; I never need more than 8 cores for real.

Regarding memory, do not get me started: all the brands promise a set of 4 will work on your motherboard; well, after 6 new sets Corsair gave up and removed the whole set from sale.
I do not even want to begin on the LPX disaster; for me that is still the worst ever in my long experience with PC building.
Especially because you needed them to be able to install the massive coolers you needed to OC your system.

But I still want to see the two chip makers come up with a new socket design so you can never mount the water coolers/coolers tighter than the socket can handle.
I admit I am apparently a brute and do not really feel how much force I use on any tool.
As an example, I helped a truck driver mount his spare wheel and he said pull it tight, so I did... result: I broke those very thick bolts off :D
The man could not believe a human was able to do that; he sent the video he made with his phone to his boss to prove it really happened and it was not his fault :) ... LOL
He kept feeling my arms and kept saying what, I can't believe it, how is this possible, it is impossible :D
Now you might think that was a fluke, but it has happened to me often when changing tires on cars in the past as well.

So when I mount a cooler on my PC I have a high chance of killing the socket again. I have even asked others to do it for me, which helped a bit, but you do not ask friends 80 to 180 miles away to come mount your cooler :D
They will do that once, but not when you have issues and need to mount/dismount it constantly.
 
RDR2 is a console game. That's why it has slow loads.

I'm not as familiar with AM5 as I am with AM4, but AM4 was great in that the 5600X had 95% (or more) of the gaming performance of a 5950X - you never needed more than midrange for top-tier AM4 gaming in the Zen 3 lineup.

No one should want a high core count CPU for gaming; I still advise people to get 6-8 cores at most. The higher core count, higher TDP parts are workstation CPUs and 100% not worth it - only a small, loud minority of people would add 150W for 1% higher FPS just to have "the best".
 