
New AMD Chipset Drivers Tested on Ryzen 9 3900X

It's been less than a month since Zen2 and X570 launched. Do you think the amount of "crap" that happened is reasonable? :)

Sure, I know there are people (a lot here) that have spent this month updating, tweaking and benchmarking their shiny 3700X. And they loved it.
But wasn't AMD hoping for a broader audience? Weren't Zen CPUs supposed to be robust and mainstream one day?
Does AMD expect the 90% of PC users they can't reach to take up PC tweaking as a hobby? Seriously?

And as usual: no love from OEMs at launch, but maybe in a few months... ;-)
Yes, I think it's reasonable. Not perfect, but reasonable.
 
Yeah, I think they fully realize that when desktop Ice Lake strikes, the game will be pretty much over, so they are trying to get as much of a head start as they possibly can, screw optimisation and compatibility :rolleyes:

Desktop Ice Lake is not 'striking' until sometime in 2021. By that time it'll be more of a soft tap on the shoulder of Ryzen 5000, which is what it will be competing against.

For now, we can enjoy Ice Lake mobile and its 3.5% performance improvement over 14nm Whiskey Lake.
 
That's your conjecture, but several rumors point more towards sometime around Q2 next year, and I think we can all see what an 18% IPC uplift will mean, even if frequencies are slightly lower :cool:
 
That's your conjecture, but several rumors point more towards sometime around Q2 next year, and I think we can all see what an 18% IPC uplift will mean, even if frequencies are slightly lower :cool:

No, you're misinformed. What's coming Q2 next year (which will have to take on Ryzen 4000 with +2-5% IPC and +5% frequency) is yet another 14nm desktop release. So basically Skylake again, very likely with zero IPC gains because of that fact.

And it's not my conjecture; it's on the leaked roadmap.
 
No, you're misinformed. What's coming Q2 next year (which will have to take on Ryzen 4000 with +2-5% IPC and +5% frequency) is yet another 14nm desktop release. So basically Skylake again, very likely with zero IPC gains because of that fact.

And it's not my conjecture; it's on the leaked roadmap.
Your conjecture is in the Ryzen 4000 release and any IPC increases it may have...

I'd be surprised if there was any frequency increase considering these are maxed out...

Yes, I think it's reasonable. Not perfect, but reasonable.
Damn... and I thought I let things slide... lol
 
Your conjecture is in the Ryzen 4000 release and any IPC increases it may have...

I'd be surprised if there was any frequency increase considering these are maxed out...

Damn... and I thought I let things slide... lol

So you think Ryzen 4000 on the 7nm+ process node will net zero frequency gains over Ryzen 3000 on 7nm?? Wishful thinking. AMD will get a couple hundred more MHz on 7nm just as that process matures. The 3950X is already being binned with another 100 MHz to get to 4.7 GHz.
 
So you think Ryzen 4000 on the 7nm+ process node will net zero frequency gains over Ryzen 3000 on 7nm?? Wishful thinking. AMD will get a couple hundred more MHz on 7nm just as that process matures. The 3950X is already being binned with another 100 MHz to get to 4.7 GHz.
Not much, yeah. These (each bin) are topped out, my homie. Some aren't even reaching stock boost currently (too many... forums are littered with complaints). Why do you think we're waiting so long for the 3950X? It isn't because they have tens of thousands to sell 7/7... they need to filter through and bin the best of the best chiplets to put in it (among other reasons).

Anyway, conjecture is conjecture... it goes both ways. ;)
 
"I guess the people complaining on social media about losing 100 points in Cinebench nT "

With that quote you summarized the basis of far too many folks' decision-making processes. I don't like to say that anyone is wrong; they just a) have been misinformed, or b) didn't take the time or bother to get informed. All too often I see recommendations saying, "You will want this CPU because, for example, it has better multicore performance." And while this is certainly true for PCs focused on Blender, benchmarks, or esoteric scientific apps, it is rarely given in response to a build that will actually be using them. The great majority will not. But the issue here is not that one or the other is faster... but why it is even in the conversation?

The folks who built PCs so they can run Cinebench and post their scores on forums must feel they lost some of their thunder. The folks who were apologizing for lower-thread-count everyday app performance can now puff their chests a bit. But again...

1. Is the metric relevant to my application needs? If not, who cares?

2. Is the result relevant? If a script of 10 things completes in 0.75 seconds and another completes in 0.65, is that relevant? You would have had to press at least 1 key between each step, so is it still relevant? (Rough numbers on this below.)


To my eyes, for the first time in a long, long time, there's actually a market niche where it's hard to argue against the 3900X or the 9900KF if the application mix favors one or the other.
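To put rough numbers on point 2, here's a quick back-of-the-envelope sketch; the 0.75 s / 0.65 s totals are the hypothetical figures from above, and the 0.2 s per keypress is just an assumed human-interaction time, not a measurement:

```python
# Rough arithmetic on the "script of 10 things" example above.
steps = 10
total_slow = 0.75    # seconds, slower CPU (hypothetical figure from the post)
total_fast = 0.65    # seconds, faster CPU (hypothetical figure from the post)
keypress = 0.20      # seconds of human time per step (assumed)

saving_per_step = (total_slow - total_fast) / steps   # 0.01 s = 10 ms per step
human_time = keypress * steps                          # ~2.0 s of human time either way

print(f"Saving per step: {saving_per_step * 1000:.0f} ms")
print(f"Human time spent pressing keys: {human_time:.1f} s")
print(f"Total saving vs. total interaction time: "
      f"{(total_slow - total_fast) / (total_fast + human_time):.1%}")
```

The saving per step is buried well under the time it takes to press the key, which is the point: the metric has to be relevant to how the machine is actually used.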
 
Regardless, the gaming 'gap' between a 3900X and a 9900K is now merely 3.8% using a 2080 Ti @ 1080p after these chipset drivers. If it was negligible before, it certainly is now :cool:

This proves wrong the naysayers who claimed there wouldn't be any nice performance maturing of these CPUs.
 
It's weird how a reviewer calls a 1% or so difference in gaming tangible/an improvement/better, while others find a 3.8% gap 'negligible'.

Feels like those descriptions should be switched. lol!
 
1% or so difference in gaming tangible
1% across the board in gaming, for free, just weeks after launch, especially at higher resolution, is a big improvement, if you consider that generation-to-generation IPC gains are sometimes in that range.

You are of course free to have a different opinion; I actually like that you thought about this yourself.
 
1% across the board in gaming, for free, just weeks after launch, especially at higher resolution, is a big improvement, if you consider that generation-to-generation IPC gains are sometimes in that range.

You are of course free to have a different opinion; I actually like that you thought about this yourself.
I guess in my mind... where does one draw the line for what can be attributed to run variance? That is a pretty pliable number, lol. I guess where I land is that 1% is run variance. Once you get over that, 2%+, then it gets the nod.
 
Sorry, not sure I follow here.

Look at the graph:

[games-1080p.jpg: relative 1080p gaming performance, old vs. new chipset drivers]


At 1080p, of the 10 games re-tested, 7 posted improved performance and none saw any negative performance numbers. Two of the games saw a 3% uplift and two saw around 2%.

I'm not 100% sure what you mean by runtime, but if the positive results were just margin-of-error variance you'd expect to see some negative results as well. Instead he found 7/10 with improved performance, hence we can conclude a real 1.2% average performance lift under these tests.
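To illustrate, here's a rough sketch with hypothetical per-game deltas that match the description above (two at ~3%, two at ~2%, a few small gains, the rest flat, none negative); the individual numbers are mine, not from the review:

```python
# Hypothetical per-game uplifts (%) roughly matching the re-test summary:
# two games ~3%, two ~2%, three small gains, three flat, none negative.
deltas = [3.0, 3.0, 2.0, 2.0, 1.0, 0.6, 0.4, 0.0, 0.0, 0.0]

improved = sum(d > 0 for d in deltas)
regressed = sum(d < 0 for d in deltas)
average = sum(deltas) / len(deltas)

print(f"Improved: {improved}/10, regressed: {regressed}/10")
print(f"Average uplift across all 10 games: {average:.1f}%")
# If the gains were pure margin-of-error noise, regressions should show up
# roughly as often as gains; several gains and no regressions points to a real shift.
```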
 
At 1080p, of the 10 games re-tested, 7 posted improved performance and none saw any negative performance numbers. Two of the games saw a 3% uplift and two saw around 2%.

I'm not 100% sure what you mean by runtime, but if the positive results were just margin-of-error variance you'd expect to see some negative results as well. Instead he found 7/10 with improved performance.
Literally four games of 10 showed improvements that were not within margin of error/run variance (~1%).

Who said runtime? Run variance. :)

Just because there are positive results doesn't mean there have to be negative results either. If I ran a test 5 times and saw 100/100/100/100/101, and then 100/100/100/101/101, are we really calling that 1 FPS an improvement, or does it fall within typical run variance? The difference is so small. There is also more run variance in games without canned benchmarks (not sure if these are those, offhand). We can run a test 5 times and get 5 different results. Just because one test popped an FPS higher doesn't necessarily mean it showed improvement... but that one result could then bump an average up a couple/few tenths.

All I'm saying is that with a result like this, a 1% difference can be attributed to margin of error/run variance in the testing. 4 games showed improvement (with two not even reaching 2%), 6 did not. Also, didn't The Witcher 3 show a negative result?

hence we can conclude under the tests a real 1.8% average performance lift.
You've just cherry-picked the positive results to formulate an average? Can't say I math like that...

Anyway, thanks for the crack at it... I'll wait for W1z. ;)
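Here's what I mean with the 100/100/100/100/101 example, as a quick sketch (the run numbers are the made-up ones from above, not real benchmark data):

```python
# The hypothetical 5-run example from above: is the tiny average difference
# an improvement, or just run-to-run variance?
runs_before = [100, 100, 100, 100, 101]   # FPS per run, driver A (made-up numbers)
runs_after  = [100, 100, 100, 101, 101]   # FPS per run, driver B (made-up numbers)

mean_before = sum(runs_before) / len(runs_before)   # 100.2
mean_after  = sum(runs_after) / len(runs_after)     # 100.4
delta = mean_after - mean_before                    # +0.2 FPS (~0.2%)

spread = max(runs_before) - min(runs_before)        # 1 FPS (~1%) within one config

print(f"Average delta: {delta:+.2f} FPS")
print(f"Run-to-run spread within a single config: {spread} FPS")
# The measured delta is smaller than the spread between repeated runs of the
# same setup, so I wouldn't call it an improvement.
```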
 
Literally four games of 10 showed improvements that were not within margin of error/run variance.
Give it a rest. AMD is the darling of the Internet these days, you're supposed to always say something good about them, otherwise you don't get the page clicks ;)
 
Literally four games of 10 showed improvements that were not within margin of error/run variance (~1%).

Who said runtime? Run variance. :)

Just because there are positive results doesn't mean there have to be negative results either. If I ran a test 5 times and saw 100/100/100/100/101, and then 100/100/100/101/101, are we really calling that 1 FPS an improvement, or does it fall within typical run variance? The difference is so small. There is also more run variance in games without canned benchmarks (not sure if these are those, offhand). We can run a test 5 times and get 5 different results. Just because one test popped an FPS higher doesn't necessarily mean it showed improvement... but that one result could then bump an average up a couple/few tenths.

All I'm saying is that with a result like this, a 1% difference can be attributed to margin of error/run variance in the testing. 4 games showed improvement (with two not even reaching 2%), 6 did not. Also, didn't The Witcher 3 show a negative result?

You've just cherry-picked the positive results to formulate an average? Can't say I math like that...

Anyway, thanks for the crack at it... I'll wait for W1z. ;)

I explained it very simply but you still don't seem to understand, so not sure W1zzard will be able to help you understand something so straightforward ;)
 
I explained it very simply but you still don't seem to understand, so not sure W1zzard will be able to help you understand something so straightforward ;)
And I countered. What you said doesn't make much sense and I explained why I felt that way. Simply dismissing a counterpoint isn't much of a discussion/learning experience. :(

It's tough to take much from it because of the weird runtime comment, and because we did see the negative result you think one needs to have. I thought I proved you didn't need one with the 100/100... etc. example above, and if you were talking at the per-game level, you missed The Witcher 3 going negative in the first place.

You also cherry-pick results to make a point... You can't discount the other titles and call it an average.

I just don't call 1% an improvement due to run variances in game testing and benchmarks. When you can run it multiple times and see variances over 1%... I don't understand how it can be called an improvement when the result is within a run variance of the benchmark/game.

...so... again, thanks, but.. maybe W1z will say it differently and I will see the light. Who knows.
 
I think what this comes down to is that we can see the improvement in Ryzen chips since launch, and that they are "already" catching the 9900K, a CPU with no future performance upgrade path.
 
The problem is that Ryzen is not as consistent as Intel. The 9900K will consistently hit its max boost speed on 99% of CPUs. You can even easily set it to keep its boost speed on all cores. The boost speed is so much higher. My 3900X has once in a while boosted up to 4550 MHz, but only at idle. During benchmarks or games I average about 4200 MHz. It is great that AMD has seen some improvements since launch, but the product consistency is horrible.
 
The problem is that Ryzen is not as consistent as Intel. The 9900K will consistently hit its max boost speed on 99% of CPUs. You can even easily set it to keep its boost speed on all cores. The boost speed is so much higher. My 3900X has once in a while boosted up to 4550 MHz, but only at idle. During benchmarks or games I average about 4200 MHz. It is great that AMD has seen some improvements since launch, but the product consistency is horrible.
I don't think it's inconsistency as much as Intel having spent years perfecting SpeedStep/SpeedShift vs. AMD's more recent habit of binning their chips just out of their sweet spot.
 
Whatever you want to call it, Intel has it and AMD doesn't.
 
Man, that "boost clocks" graph looks like it never ends :D
That's what I was thinking too. 12 cores / 24 threads, all running at 4.2 GHz... on a stock air cooler...
So much computing power makes me forget what Intel has that AMD doesn't. Heat maybe?
 
It runs at 4.2 GHz with better-than-stock cooling too! It's very fast for 4.2 GHz. Better IPC than Intel.
 