
AMD Reportedly Preparing B2 Stepping of Ryzen 5000 Series "Vermeer" Processors, Boost Speeds to Reach 5.0 GHz

I know this is the wrong topic and all, but what the hell is Nvidia manufacturing on 7nm this year?
Both are only using a small share of 5nm, but on 7nm AMD continues with Zen3 (and possibly Zen3+) and RDNA2 at 27% of 7nm capacity, while Nvidia uses 21% for... something?

Good question. Maybe that is really just all other nodes?
 
I know this is the wrong topic and all, but what the hell is Nvidia manufacturing on 7nm this year?
Both are only using a small share of 5nm, but on 7nm AMD continues with Zen3 (and possibly Zen3+) and RDNA2 at 27% of 7nm capacity, while Nvidia uses 21% for... something?
Umm what, did you forget this Godzilla o_O
The single biggest GPU out there!

Nvidia probably makes more $$$ off this than all of their other chips combined :pimp:
 

Of course the all-core boost could be interesting, but I think I'll keep my 5950X.
 
Wrong use of margin of error. If you can consistently show that one product is even 0.05% better than the other, then that's not margin of error. For Intel SKUs it's certainly not margin of error, as they have different clocks on the same CPU, so it's very easy to say that the higher-clocked CPUs will always perform x% better.

No, it's the right use of margin of error. Just go ask GamersNexus or HWUB, who frequently use margin of error despite all their results being the culmination of many runs.

"For Intel SKUs it's certainly not margin of error as they have different clocks on the same CPU so it's very easy to say that the higher clocked CPUs will always perform x% better."

This is factually incorrect, as performance on Intel's latest 11000 series CPUs can vary by as much as 45% based simply on motherboard selection, as HardwareUnboxed recently demonstrated. And mind you, clock speed isn't the only factor, as the Intel 5775C has proven. Even when reviewers minimize variables and do multiple runs, there is certainly room for margin of error. If you still question that fact, I suggest you try to seriously benchmark some games following industry protocol: setting uniform game settings, plotting an in-game benchmark route, ensuring your software environment is correct, and ensuring your data is valid. I know personally that even with all those steps taken there is still variance, and other reviewers like HWUB frequently express this as well.

This is precisely why margin of error exists. Regardless of how many times you run the test, every game is going to have some level of variance in its results, every CPU will perform a bit differently, and the test itself is limited in its resolution.
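To put a rough number on that, here's a minimal sketch (Python, with made-up FPS figures; none of these numbers come from any actual review) of how the margin of error for one repeated test can be quantified:

Code:
import math
import statistics

# Hypothetical FPS results from repeating the same game benchmark five times
runs_fps = [143.2, 141.8, 144.5, 142.1, 143.9]

mean = statistics.mean(runs_fps)
stdev = statistics.stdev(runs_fps)          # sample standard deviation (run-to-run noise)
sem = stdev / math.sqrt(len(runs_fps))      # standard error of the mean
ci95 = 1.96 * sem                           # ~95% confidence interval (normal approximation)

print(f"mean = {mean:.1f} FPS, +/- {ci95:.1f} FPS ({100 * ci95 / mean:.2f}%)")

Any gap between two products smaller than that interval is exactly what reviewers mean by "within margin of error".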
 
Of course the all-core boost could be interesting, but I think I'll keep my 5950X.

This refresh product isn't really for someone already on a 5950X, so there would be no point in you looking at it.

The refresh chips make sense for someone on a Zen 2 or older chip. Same with the Zen 2 XT chips: they made no sense to anyone already on a 3700X or 3800X.

Your next upgrade is Zen 4, and to be honest you can probably skip the first gen of the new stuff and then leap on the 2nd-gen Zen 4 stuff.
 
I guess this means no 5600 non-X at $180? OK, I'll go back to hibernating.
 
Well, if this means the non-XT chips are going to sell at a lower base price, then bring it on. However, this reinforces the view that Zen3+ is now dead.
 
SOME PEOPLE NEED TO STOP COMPLAINING AND SAYING "WHY EVEN BOTHER"..... US X570 AND B550 USERS NEED AT LEAST 3 GENERATIONS OF CPU SUPPORT, AND WE ARE HAPPY THAT AMD IS DOING THIS!!! IF THE 5950X IS COMING OUT I WILL UPGRADE FROM THE 3950X I'M CURRENTLY RUNNING NOW!
 
I'm probably the only one who is going to mention this, but the XT SKUs might be an awful idea. Stock Ryzen CPUs are already difficult to cool well even with good cooling, or are heavily PPT, EDC, or TDC restricted. When AMD once had to step boost down on all chips by 150 MHz for durability reasons, people went apeshit. Adding more boost clock is kinda pointless without also increasing wattage, and if they increase wattage, heat output will increase, which potentially causes long-term durability problems.

Another thing is that the timing is awful. The Ryzen 5000 series' lifespan as the current product line is ending; next year we are going to have Ryzen 6000, which comes with a new socket, a new memory type, and likely improved chips. A 5600XT will do nothing other than earn AMD some negative perception, as reviewers will point out that it's a poor-value chip and that everyone should just buy the 5600X instead. It seems that AMD doesn't learn that people don't care about late refreshes of soon-to-be-obsolete products: the RX 590 and A10-7890K didn't go so well, and the 3600XT gained some bad rap. Perhaps it would be better to sell those better dies as the 5600X, just with a new stepping that lets them hold all-core boost at higher frequencies for longer, and stop making pointless products that nobody should buy.

That's even more true in a chip-shortage era. And yet AMD didn't have any true value chips this gen, which the 5600 should have been. AMD lost quite a few sales to the i5 11400F (partly because Intel has their own fabs and seemingly isn't affected as badly as AMD when it comes to supplying the required quantities).

This is factually incorrect, as performance on Intel's latest 11000 series CPUs can vary by as much as 45% based simply on motherboard selection, as HardwareUnboxed recently demonstrated. And mind you, clock speed isn't the only factor, as the Intel 5775C has proven. Even when reviewers minimize variables and do multiple runs, there is certainly room for margin of error. If you still question that fact, I suggest you try to seriously benchmark some games following industry protocol: setting uniform game settings, plotting an in-game benchmark route, ensuring your software environment is correct, and ensuring your data is valid. I know personally that even with all those steps taken there is still variance, and other reviewers like HWUB frequently express this as well.

This is precisely why margin of error exists. Regardless of how many times you run the test, every game is going to have some level of variance in its results, every CPU will perform a bit differently, and the test itself is limited in its resolution.
You are wrong; all those differences existed because variables weren't reduced and many chips ran "out of spec". Once you set the same PL and Tau values, they perform pretty much the same with minimal variation. If you control variables well and ensure that you only test exactly what you want, the margin of error will be small and the results will be logical. Higher clock speed on the same architecture will always mean higher performance (unless you test high TDPs and the CPU has already run out of additional clock speed steps to boost to; then there will be zero performance scaling, but that won't mean that PL values are generally meaningless). Due to Windows background tasks and benchmarks not starting at identical times, thermals and power budget could be affected and slightly skew benchmark results. Still, you are looking at up to 5% variance, not 45% variance. Also, the margin of error gets slimmer if you run the same benchmark more times; then you can reliably spot even slight differences in clock speed.
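Just as a back-of-the-envelope sketch of that last point (the ~1.5% single-run noise figure is purely an assumption for illustration):

Code:
import math

stdev_pct = 1.5  # assumed run-to-run noise of a single benchmark pass, in percent

for n_runs in (1, 3, 5, 10, 20):
    sem_pct = stdev_pct / math.sqrt(n_runs)   # standard error of the mean shrinks with sqrt(n)
    print(f"{n_runs:2d} runs -> roughly +/-{1.96 * sem_pct:.2f}% at 95% confidence")

So averaging more runs really does let you resolve smaller clock-speed differences.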

I guess this means no 5600 non-X at $180? OK, I'll go back to hibernating.
The i5 10400F was a great seller and the i5 11400F will be too. Weird that AMD doesn't care about a very profitable CPU tier; more sales for Intel.
 
SOME PEOPLE NEED TO STOP COMPLAINING AND SAYING "WHY EVEN BOTHER"..... US X570 AND B550 USERS NEED AT LEAST 3 GENERATIONS OF CPU SUPPORT, AND WE ARE HAPPY THAT AMD IS DOING THIS!!! IF THE 5950X IS COMING OUT I WILL UPGRADE FROM THE 3950X I'M CURRENTLY RUNNING NOW!
Why are you shouting?
Are you deaf?
 
You need your first post to be heard :rockout:
 
So what is the advantage of the 5600XT if it has the same frequency as the 5600X? Did I miss something here? Maybe higher memory clocks?
Really don't know. So far this is the only information we have.
 
You are wrong; all those differences existed because variables weren't reduced and many chips ran "out of spec". Once you set the same PL and Tau values, they perform pretty much the same with minimal variation. If you control variables well and ensure that you only test exactly what you want, the margin of error will be small and the results will be logical.

No, that difference in performance was a result of testing methodology.

Typically, review outlets chart out their testing methodology with the highest performance in mind. In the case of HWUB's recent video, though, the goal was to see performance with mid-range B560 motherboards. This wasn't a failure on their end to isolate variables; it was a very valid change in methodology. Objectively, neither method (testing with the best vs. testing the reasonable) is invalid, and the comparison shows how much variation you can get by changing a single variable.

Heck, I've done plenty of benchmark runs myself, and I can say as a matter of fact that you'll still see runs with abnormal variation that need to be investigated and potentially re-done. You clearly have never done serious benchmarking yourself, as you have no idea of the variance that can exist even when all variables are accounted for.


Higher clock speed on the same architecture will always mean higher performance (unless you test high TDPs and the CPU has already run out of additional clock speed steps to boost to; then there will be zero performance scaling, but that won't mean that PL values are generally meaningless). Due to Windows background tasks and benchmarks not starting at identical times, thermals and power budget could be affected and slightly skew benchmark results. Still, you are looking at up to 5% variance, not 45% variance. Also, the margin of error gets slimmer if you run the same benchmark more times; then you can reliably spot even slight differences in clock speed.

The original argument being made was that higher clock speed equals more performance. No one said anything about clock speed across the same architecture.

5% is the number I originally stated for margin of error, as that's the limit of modern testing methodology. My 45% example was in response to another point posited by a prior poster; it was not to say that all benchmarks have that level of variance.
 
This just in! 5950X already does 5GHz!!

:kookoo:
 
No, that difference in performance was a result of testing methodology.

Typically, review outlets chart out their testing methodology with the highest performance in mind. In the case of HWUB's recent video, though, the goal was to see performance with mid-range B560 motherboards. This wasn't a failure on their end to isolate variables; it was a very valid change in methodology. Objectively, neither method (testing with the best vs. testing the reasonable) is invalid, and the comparison shows how much variation you can get by changing a single variable.

Heck, I've done plenty of benchmark runs myself, and I can say as a matter of fact that you'll still see runs with abnormal variation that need to be investigated and potentially re-done. You clearly have never done serious benchmarking yourself, as you have no idea of the variance that can exist even when all variables are accounted for.
They were testing motherboard default settings, not CPUs. And no, benchmarks shouldn't have abnormal variation beyond minuscule differences that don't matter.


The original argument being made was that higher clock speed equals more performance. No one said anything about clock speed across the same architecture.

5% is the number I originally stated for margin of error, as that's the limit of modern testing methodology. My 45% example was in response to another point posited by a prior poster; it was not to say that all benchmarks have that level of variance.
I doubt it. 5% is still a high variation; I would say 2-3% is closer to an acceptable variation. And even then, you can notice clear patterns.
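For what it's worth, whether a 2-3% gap is a clear pattern or just noise can be checked directly. A hedged sketch (hypothetical FPS numbers, Welch's t-test via SciPy; nothing here comes from a real review):

Code:
import statistics

from scipy import stats

# Hypothetical repeated runs for two CPUs; CPU B averages roughly 3% higher
cpu_a = [142.8, 143.5, 141.9, 143.1, 142.6]
cpu_b = [146.9, 147.4, 146.1, 147.8, 146.5]

t_stat, p_value = stats.ttest_ind(cpu_a, cpu_b, equal_var=False)  # Welch's t-test
diff_pct = 100 * (statistics.mean(cpu_b) / statistics.mean(cpu_a) - 1)

print(f"difference: {diff_pct:.1f}%, p = {p_value:.4f}")
# A tiny p-value means the gap is bigger than run-to-run noise would explain;
# a large one means the "win" is inside the margin of error for this sample size.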
 