# Possible Listings of AMD Ryzen 9 3800X, Ryzen 7 3700X, Ryzen 5 3600X Surface in Online Stores



## Raevenlord (May 3, 2019)

Remember to bring your osmosis process to the table here, as a good deal of salt is present in this story's environment. Some online stores from Vietnam and Turkey have started listing AMD's 3000-series CPUs based on the Zen 2 architecture. The present company consists of a Ryzen 9 3800X, Ryzen 7 3700X, and Ryzen 5 3600X, and the specs on these are... incredible, to say the least.

The Ryzen 9 3800X is being listed with 32 threads, meaning a 16-core processor. Clock speeds are being reported as 3.9 GHz base with up to 4.7 GHz Turbo on both a Turkish and a Vietnamese e-tailer's webpages. The Turkish store then stands alone in listing AMD's Ryzen 7 3700X CPU, which is reported as having 12 cores, 24 threads, and operating at an extremely impressive 4.2 GHz base and 5.0 GHz Boost clocks. Another listing by the same website, in the form of the Ryzen 5 3600X, details the processor as having 8 physical cores and running at 4.0 GHz base and 4.8 GHz Boost clocks.




*View at TechPowerUp Main Site*


----------



## Mussels (May 3, 2019)

If those specs are real, Ryzen's going to destroy Intel


----------



## Valantar (May 3, 2019)

*speaking from inside of a large pile of salt*

This looks amazing. Fingers crossed that real-world specs look anything like this; if so, my 1600X might be looking at early retirement.

I see the "3800X" listed with a 125W TDP (which is definitely high for MSDT, but would be perfectly fine for those specs). Any similar specs listed for the other two chips?

Multi-core turbo scaling would be _very_ interesting for these chips given their high core counts. Still, the base clocks are high enough that I wouldn't worry too much (again, if any of this is true). If the 3700X can sustain something like 4.8GHz at 4 cores, it would likely be the only CPU 99% of people need for the next 5 years or so, at least if the rumored 15% IPC increase over Zen+ rings true.


----------



## TheLostSwede (May 3, 2019)

Well, it looks like the AdoredTV numbers, plus cache configuration. Could be real, could be made up...
Not long to go now by the looks of it though.


----------



## RH92 (May 3, 2019)

Let's wait for the actual launch, but if those clock speeds happen to be confirmed, the 3000 series will sell like crazy!


----------



## Valantar (May 3, 2019)

This just makes me all the more depressed that I'll have to wait until 2020 to see Zen 2 in an MCM APU with a kick-ass GPU. Maybe I'll just get a 5XX-series motherboard and a used 2200G or something for my planned HTPC upgrade, and replace the CPU a bit down the line.

Still, the wait until the 27th seems _very_ long right now.


----------



## londiste (May 3, 2019)

The differences between X and non-X models are pretty stark; they are pushing the process to its limits.
3600 vs 3600X (8c/16t) - 3.6/4.0GHz vs 4.0/4.8GHz (400MHz) - 55W vs 95W (72%)
3300 vs 3300X (6c/12t) - 3.2/4.0GHz vs 3.5/4.3GHz (300MHz) - 50W vs 65W (30%)

On the other hand, compared to the ones above, the bigger models make little sense, unless the variance in chip quality is very large and these are heavily binned:
3700 vs 3700X (12c/24t) - 3.8/4.6GHz vs 4.2/5.0GHz (400MHz) - 95W vs 105W (10%)
3800X vs 3850X (16c/32t) - 3.9/4.7GHz vs 4.3/5.1GHz (400MHz) - 125W vs 125W
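The TDP percentages quoted above can be sanity-checked straight from the rumored wattages. This is a throwaway calculation over the listing numbers, nothing authoritative:

```python
# Rumored non-X vs X pairs from the listings: (pair, non-X TDP, X TDP) in watts.
pairs = [
    ("3600 vs 3600X", 55, 95),
    ("3300 vs 3300X", 50, 65),
    ("3700 vs 3700X", 95, 105),
    ("3800X vs 3850X", 125, 125),
]

for name, low, high in pairs:
    increase = (high - low) / low * 100  # percent TDP increase for the X part
    print(f"{name}: {low}W -> {high}W (+{increase:.1f}%)")
```

This gives +72.7%, +30.0%, +10.5%, and +0.0%, lining up with the rounded figures in the post.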


----------



## Tsukiyomi91 (May 3, 2019)

the R5 3600X looks like a damn fine upgrade. IF all the specs are true, then Intel are gonna either start lowering their processor prices (which is unlikely, I know) OR we'll see them rushing on more variations of the Core Series SKUs down the line while getting more flak from everyone else.


----------



## Deleted member 172152 (May 3, 2019)

3700x pls because 4 extra cores will do me just fine and I get the best gaming performance! Pls be real specs...


----------



## The Quim Reaper (May 3, 2019)

One of those 3700X's will be mine....12c 24t, 4.2Ghz/5Ghz...Damn!

...Not a bad little upgrade from my i5 4690K


----------



## Mussels (May 3, 2019)

I plan to upgrade so that my 2700X can feed my VR rig, so I don't need a massive upgrade - anything that matches my core count (or beats it) and has a high boost for gaming is on the cards... and these all look worthy


----------



## Vya Domus (May 3, 2019)

londiste said:


> The differences between X and non-X models are pretty stark, they are pushing the process to its limits.
> 
> ...
> 
> Unless the variance in chip quality is very large and these are heavily binned:



It's all mostly due to the chiplet design, easier binning and higher chances of having chips that clock high.


----------



## NdMk2o1o (May 3, 2019)

3600 all core oc 4.6+ would be very nice...


----------



## kings (May 3, 2019)

That store probably knows nothing... or nothing more than we all know at this point based on rumors.

In the 3600X specs for example, they even put in the description "possible cores: 8"... Yeah, I can also make guesses...

I would not give too much credit to this.


----------



## medi01 (May 3, 2019)

Fake based on AdoredTV speculations/leaks.


----------



## oxidized (May 3, 2019)

I don't understand why a lower-thread-count part should have lower frequencies, both base and turbo, than a higher-thread-count one, according to these "possible listings"


----------



## TheLostSwede (May 3, 2019)

kings said:


> That store probably knows nothing... or nothing more than we all know at this point based on rumors.
> 
> In the 3600X specs for example, they even put in the discription "possible cores: 8"... Yeah, I can also make guesses...
> 
> I would not give too much credit to this.



Those two stores you mean?

Then again, the motherboard makers haven't even been told if 12 or 16 cores will be the top core count at launch so...


----------



## Crackong (May 3, 2019)

If these specs are real then Intel is simply doomed in DIY market LUL


----------



## Valantar (May 3, 2019)

oxidized said:


> I don't understand how lower thread count should have lower frequency both at stock and turbo compared to higher thread count according to these "possible listings"


Remember that SKUs are created based on market segmentation, not silicon. In other words, boosting clocks on lower core count SKUs would either require them to be priced too similarly to higher core count SKUs with lower clocks (meaning needless internal competition) or cannibalize sales of the higher-end SKU. This likely means that the lower core count SKUs have more OC headroom.


----------



## HwGeek (May 3, 2019)

The Ryzen 9 box is fake, since you can see the Ryzen 3 box cooler.
Can't wait to see the reviews. Also, this year I hope more reviews will cover passively cooled PCs, since we can get really low-power 8C CPUs plus maybe a nice mid-tier GPU.


----------



## chaosmassive (May 3, 2019)

if this turned out to be real, I take one ryzen 5 3600 please
thank you


----------



## oxidized (May 3, 2019)

Valantar said:


> Remember that SKUs are created based on market segmentation, not silicon. In other words, boosting clocks on lower core count SKUs would either require them to be priced too similarly to higher core count SKUs with lower clocks (meaning needless internal competition) or cannibalize sales of the higher-end SKU. This likely means that the lower core count SKUs have more OC headroom.



Still, that didn't always happen with older CPUs; at the very least the base clock was the same, and then the boost was perhaps lower, which makes sense.


----------



## Kucuboy (May 3, 2019)

My real concern is that, if demand is so great, when will we be able to purchase these processors at their MSRP? I'd hate it if the price stays inflated for a long time because of limited availability.


----------



## Shatun_Bear (May 3, 2019)

TheLostSwede said:


> Well, it looks like the AdoredTV numbers, plus cache configuration. Could be real, could be made up...
> Not long to go now by the looks of it though.



They are not. They're just using the same made-up numbers from that AdoredTV video in December. I called him out on that video, as a 4.2GHz base clock on 16 cores is laughable, and whoever made it up doesn't know much about CPUs.


----------



## R0H1T (May 3, 2019)

Well I was hoping this wouldn't end up on the *FP*


Shatun_Bear said:


> They are not. They're just using the same made-up numbers from that AdoredTV video in December. I called him out on that video, as a *4.2GHz base clock on 16 cores is laughable*, and whoever made it up doesn't know much about CPUs.


As fantastical as the claims are, this isn't the reason to diss them.


----------



## Shatun_Bear (May 3, 2019)

The problem with AdoredTV is he's such an AMD fanboy he does AMD more harm than good. This is like the 3rd time his made-up numbers have been used by retailers or websites, and the effect will be disappointment when the real base/boost clocks are revealed closer to launch.

These numbers were made up in this 'leak' extravaganza video he made in December in an attempt to increase his Patreon subscribers. And it worked, it was one of his most popular videos ever. But he fabricated that whole chart. Come on lads; he claimed his 'source' gave him the prices of every single Ryzen 3000 CPU...in DECEMBER 2018. Laughable.


----------



## Daven (May 3, 2019)

oxidized said:


> I don't understand how lower thread count should have lower frequency both at stock and turbo compared to higher thread count according to these "possible listings"



It’s just the TDP. Higher core counts require a higher TDP (125W or more). When not all cores are in use, the higher TDP allows higher clocks.

Edit: with respect to your other comment, older CPUs had fewer cores, and most of them were in use at all times.
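That TDP argument can be sketched with a toy power model. Everything below is made up for illustration (the constant `k` and the boost cap are arbitrary); real boost behavior depends on firmware, temperature, and current limits, not a simple cubic law:

```python
# Toy model: dynamic power scales roughly with cores * f * V^2, and voltage
# rises roughly linearly with frequency near the top of the curve, so
# power ~ k * cores * f^3 (a common back-of-envelope approximation).

def max_freq(active_cores: int, tdp_w: float,
             k: float = 0.2, boost_cap: float = 4.7) -> float:
    """Highest frequency (GHz) keeping k * cores * f^3 under the TDP, capped at max boost."""
    return min(boost_cap, (tdp_w / (active_cores * k)) ** (1 / 3))

for cores in (1, 4, 8, 16):
    print(f"{cores:2d} active cores in a 125W budget: ~{max_freq(cores, 125):.2f} GHz")
```

Under this toy budget, a handful of active cores can sit at the boost cap while an all-core load is forced several hundred MHz lower, which is all the post above is claiming.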


----------



## Shatun_Bear (May 3, 2019)

R0H1T said:


> Well I was hoping this wouldn't end up on the *FP *
> As fantastical as the claims are, this isn't the reason to diss them.



Um, yes it is. He's claiming the base clock on a Ryzen 3000 CPU will be almost as high as the BOOST clock on the previous generation. To make it even more fantastical, this 4.2Ghz base clock is apparently for the 16-core SKU!

So I can guarantee right now you won't see a 4.2Ghz base clock on any 3000-series CPU. We won't even see a 4.1Ghz or 4Ghz base clock. That's not how modern CPUs operate. If you think we will, let's make a bet.


----------



## R0H1T (May 3, 2019)

Shatun_Bear said:


> The problem with AdoredTV is he's such an AMD fanboy he does AMD more harm than good. This is like the 3rd time his made-up numbers have been used by retailers or websites, and the effect will be disappointment when the real base/boost clocks are revealed closer to launch.


I get that & you remember the claims about Zen before launch - it could never get close to Intel's IPC, if it does the clocks will be low. It exceeded both expectations, so it is possible that AMD is now trying to get them to clock high. At this point AMD is limited by the TSMC 7nm node, more than anything else. You could say that it's a matter of when, not if because I doubt even Intel matches their 14nm++ desktop clocks on smaller nodes.


Shatun_Bear said:


> So I can guarantee right now you won't see a 4.2Ghz base clock on any 3000-series CPU. We won't even see a 4.1Ghz or 4Ghz base clock. That's not how modern CPUs operate. If you think we will, let's make a bet.


Alright, let's do.


----------



## MDDB (May 3, 2019)

Shatun_Bear said:


> The problem with AdoredTV is he's such an AMD fanboy he does AMD more harm than good.



I guess Lisa Su is stupid, then, as she personally thanked Jim on twitter for his videos that are doing AMD more harm than good.


----------



## Shatun_Bear (May 3, 2019)

MDDB said:


> I guess Lisa Su is stupid, then, as she personally thanked Jim on twitter for his videos that are doing AMD more harm than good.



You on first name terms with him?! lol. And I did see that, but that wasn't in response to that fake Ryzen specs and prices reveal video. It was a more recent one where he was speculating about the I/O.


----------



## Vya Domus (May 3, 2019)

Shatun_Bear said:


> The problem with AdoredTV is he's such an AMD fanboy he does AMD more harm than good. This is like the 3rd time his made-up numbers have been used by retailers or websites, and the effect will be disappointment when the real base/boost clocks are revealed closer to launch.
> 
> These numbers were made up in this 'leak' extravaganza video he made in December in an attempt to increase his Patreon subscribers. And it worked, it was one of his most popular videos ever. But he fabricated that whole chart. Come on lads; he claimed his 'source' gave him the prices of every single Ryzen 3000 CPU...in DECEMBER 2018. Laughable.



You people just can't contain yourselves. Why does someone always have to show up and stir the shit with this fanboy crap? Stop with this garbage, we don't need it here.

Nice profile picture by the way.


----------



## Shatun_Bear (May 3, 2019)

R0H1T said:


> Alright, let's do.



There will be no Ryzen 3000 desktop CPU with a 4.2Ghz base clock or within 200Mhz of that frequency, that's the bet. £10 worth of Bitcoins do you?


----------



## Enterprise24 (May 3, 2019)

The Ryzen 9 gets a Wraith Spire?! Enough salt.


----------



## NdMk2o1o (May 3, 2019)

Shatun_Bear said:


> There will be no Ryzen 3000 desktop CPU with a 4.2Ghz base clock or within 200Mhz of that frequency, that's the bet. £10 worth of Bitcoins do you?


I guarantee there will be; what would be the point otherwise, if they had the same base and boost as the 2*** series?


----------



## notb (May 3, 2019)

The rumored CPU specs mentioned here have been around since January. I don't know why people are so excited now.



Mussels said:


> If those specs are real, ryzens going to destroy intel


I still think we should expect more from moderators. But maybe I'm old.


----------



## gasolina (May 3, 2019)

The 3200G and 3400G are still 4c/4t and 4c/8t, so I highly doubt that the 3700/3800 will be 16 cores. The maximum, I guess, is around 10 to 12 cores with higher clock speeds.


----------



## Shatun_Bear (May 3, 2019)

NdMk2o1o said:


> I guarantee there will be, what would be the point otherwise if they had the same base and boost as the 2*** series



The boost, yes; we can expect 4.5-4.7GHz, but you don't just jack up the base clock when it's totally unnecessary.


----------



## R0H1T (May 3, 2019)

Shatun_Bear said:


> There will be no Ryzen 3000 desktop CPU with a 4.2Ghz base clock or within 200Mhz of that frequency, that's the bet. £10 worth of Bitcoins do you?


No bitcoins, Paypal? Just to confirm ~ no 3xxx SKU will have a *base clock of 4GHz* or above


----------



## kastriot (May 3, 2019)

Lol, so many Ryzen/Ryzen+ chips are gonna be sold on eBay soon.


----------



## Shatun_Bear (May 3, 2019)

R0H1T said:


> No bitcoins, Paypal? Just to confirm ~ no 3xxx SKU will have a *base clock of 4GHz* or above



Cool I'll send you a PM. Easy money


----------



## M2B (May 3, 2019)

A minimum of ~15% per-core improvement (better IPC + higher clocks) across all different workloads seems fine to me and is what I'm expecting, 5GHz or not.


----------



## Caring1 (May 3, 2019)

The numbers don't add up to me: the 3700X has a 50% increase in core count over the 2700X for less than a 20% increase in TDP.
If anything, AMD should retain the current core count and increase clocks, using the improved efficiency to keep the TDP where it is now.


----------



## oxidized (May 3, 2019)

Mark Little said:


> It’s just the TDP. Higher core counts require higher TDP (125W or more). When not using all the cores, the higher TDP allows higher clocks.
> 
> Edit: in respect to your other comment, older CPUs had less cores and most of them were in use at all times.



That's not my point tho.


----------



## Vya Domus (May 3, 2019)

Caring1 said:


> The numbers don't add up to me, 3700X has a 50% increase in core count OVER THE 2700x for less than 20% increase in TDP.



The 8-core sample shown by AMD seems to be a 65W part or thereabouts, and it matched the 9900K, meaning the clock speeds couldn't have been anemic. The TDP headroom is there, and let's not forget how liberal AMD/Intel/Nvidia have been with their TDP ratings in the past either.


----------



## chaosmassive (May 3, 2019)

No need to diss AdoredTV; if you don't trust/like him, there's no need to turn this thread into an attack on other people.
That said, whether this picture is real or not, we do not know; it's only a few months away, and one can speculate/analyze anything they want.
One can simply look back at history, at the leaks alleged before previous product launches, and see whether they were indeed accurate.


----------



## Wilson (May 3, 2019)

Literally the same info from December "leak", nothing new still


----------



## Vayra86 (May 3, 2019)

This is what we've been waiting for.

IF AMD can push out boost with these clocks, Intel is done for a good while in the consumer desktop segment. From top to bottom. They won't have anything in the entire stack that is better. And they can't surpass it either because they've already capped out on clocks too.

AMD now gets a potential CPU with much better boost tech at a much lower power ceiling and peak power draw, while being an efficient baseclock CPU at the same time. Sprinkle extra cores/threads on top plus all the other minor perks they have... yep. Time to switch, at last.



notb said:


> The rumored CPU specs mentioned here have been around since January. I don't know why people are so excited now.



Because rumors could be true.


----------



## 0x6A7232 (May 3, 2019)

Shatun_Bear said:


> They are not. They're just using the same made-up numbers from that AdoredTV video in December. I called him out on that video, as a 4.2GHz base clock on 16 cores is laughable, and whoever made it up doesn't know much about CPUs.


You, my friend, are looking to have some *DELECTABLE* tears to harvest at launch, because ending your statement with "doesn't know much about CPUs" is a very _sharp_ two-edged sword: it means that if you are wrong, *you* know very little about CPUs.
So, if these numbers are right, will you eat your hat? Supposing they are proved accurate, what would be your response?


----------



## Caring1 (May 3, 2019)

0x6A7232 said:


> ….is a very _sharp_ two-edged sword, as it means if you are wrong, *you* know very little about CPUs.


Or his knowledge is based on current standards, which can and will change in the future. That does not mean he is wrong.


----------



## krykry (May 3, 2019)

I'm curious about one thing. The prices and availability of high-performance high-core count CPUs will definitely improve...in which case, how will game developers react to it? How will they utilize the additional power they will be given?


----------



## Vayra86 (May 3, 2019)

krykry said:


> I'm curious about one thing. The prices and availability of high-performance high-core count CPUs will definitely improve...in which case, how will game developers react to it? How will they utilize the additional power they will be given?



They will optimize around the mainstream. We're already seeing much better use of higher-core-count CPUs; they already pay off up to 8 cores. That coincides with the slow adoption of new APIs that are also better at threading.


----------



## 0x6A7232 (May 3, 2019)

krykry said:


> I'm curious about one thing. The prices and availability of high-performance high-core count CPUs will definitely improve...in which case, how will game developers react to it? How will they utilize the additional power they will be given?



Consoles have been 8-core since the OG PS4 and XB1, so they should optimize for that at least. And there's tech, hardware- and software-based, that can split single-threaded or poorly optimized workloads between threads, for benefits up to, I think, 32 threads? Check at about the 8 minute mark here:


----------



## M2B (May 3, 2019)

Caring1 said:


> The numbers don't add up to me, 3700X has a 50% increase in core count OVER THE 2700x for less than 20% increase in TDP.
> If anything, AMD should retain current core count and increase clocks using increased efficiency to retain the TDP as it is now.



Adding more cores is a less expensive way of improving performance and is way better in terms of marketing.
They can significantly improve the clocks and add more cores at the same time, why not?


----------



## Vya Domus (May 3, 2019)

M2B said:


> Adding more cores is a less expensive way of improving performance



Nope, *it's the only way*. You will likely never see again any major increase in single thread performance on silicon.


----------



## M2B (May 3, 2019)

Vya Domus said:


> Nope, *it's the only way*



Nope, it's not the only way, it however might be the only worthwhile way.


----------



## 0x6A7232 (May 3, 2019)

If it's not worthwhile then it is the only way unless you're selling to the US government something they want (F-35, anyone?).


----------



## Mindweaver (May 3, 2019)

Tsukiyomi91 said:


> the R5 3600X looks like a damn fine upgrade. IF all the specs are true, then *Intel are gonna either start lowering their processor prices* (which is unlikely, I know) OR we'll see them rushing on more variations of the Core Series SKUs down the line while getting more flak from everyone else.



Intel has lowered their prices in the past, with the Core 2 Duo, when AMD was on top. It just wouldn't be profitable to prematurely lower their prices until they have to. Trust me, if AMD could sell at a higher price, they would as well. AMD's dual-core processors were priced very high when Intel announced the C2D. I'm excited to see the price war for processor supremacy. hehe. I need a new system yesterday, so the cheaper the better, or more bang for the buck.


----------



## notb (May 3, 2019)

Vayra86 said:


> IF AMD can push out boost with these clocks, Intel is done for a good while in the consumer desktop segment. From top to bottom. They won't have anything in the entire stack that is better. And they can't surpass it either because they've already capped out on clocks too.


The situation is very similar to what we had in 2017. AMD leaps ahead in core count. Nothing more.
Mainstream Intel's 8C were announced 2 years after Ryzen (but not available yet).
"Intel is done"?
"End of Intel"?
"No reason to buy Intel anymore"?

There's no reason why Intel wouldn't make a 16C competitor until 2021.



> Because rumors could be true.


They could have been true back then as well. I don't understand the excitement.

Also, people are drooling over a leak of 12 and 16C Ryzen as if this wasn't expected literally since AMD showed the chiplet idea.

Nothing here is shocking. Even if the specs are true, pricing will decide how these CPUs line up against Intel's.


----------



## efikkan (May 3, 2019)

These specs are purely speculative, no vendor knows the final details yet.

And they are on the optimistic end of the scale…



Shatun_Bear said:


> These numbers were made up in this 'leak' extravaganza video he made in December in an attempt to increase his Patreon subscribers. And it worked, it was one of his most popular videos ever. But he fabricated that whole chart. Come on lads; he claimed his 'source' gave him the prices of every single Ryzen 3000 CPU...in DECEMBER 2018. Laughable.


The final clocks and pricing are always the last step of the qualification process. People should know this by now. Since these facts didn't even exist back in December 2018, anyone who claimed to know them is either actually psychic, lying, or delusional, and I don't know which is worse…

There is nothing wrong in speculation, but what is wrong is calling your own speculation a "leak" of facts, facts that nobody can even know yet. And it doesn't matter if it turns out to be 70% accurate or 80% accurate, it's still fake news to attract traffic to their Youtube channels, blogs, webpages, or whatever.


----------



## Berfs1 (May 3, 2019)

Mussels said:


> If those specs are real, ryzens going to destroy intel


Just now realized that? Lmao, Intel won't win for the next three years... I have attached a single-thread comparison on CB, and I also attached a future prediction for how they will fare, and it seems that AMD has room for improvement.


----------



## ZoneDymo (May 3, 2019)

Again all, please don't just believe this; I feel these are put out deliberately just to make people disappointed with the actual products, no matter how good they are.


----------



## Divide Overflow (May 3, 2019)

You're responsible for managing your own expectations!


----------



## ZhangirDuyseke (May 3, 2019)

Total bullcrap! AMD will never achieve these clock speeds, lmao.


----------



## HwGeek (May 3, 2019)

Why not? Remember that now the CPU cores are standalone, on a die separated from everything else.


----------



## eidairaman1 (May 3, 2019)

ZhangirDuyseke said:


> Total bullcrap! AMD will never achieve this clock speeds, lmao.



I did in 2014 with my system specs.



0x6A7232 said:


> If it's not worthwhile then it is the only way unless you're selling to the US government something they want (F-35, anyone?).



F-35 is fine.


----------



## HD64G (May 3, 2019)

High-binned models will have a sustainable all-core turbo close to 4.5GHz and a single-threaded one close to 5GHz, and imho 7nm can provide those clocks. If IPC is +10% better than Zen+'s, we are talking about matching Intel's gaming performance and surpassing them in heavily-threaded apps. Simple as that. Pricing and availability are the big questions.


----------



## Manu_PT (May 3, 2019)

HD64G said:


> High-binned models will have a sustainable all-core turbo close to 4.5GHz and a single-threaded one close to 5GHz, and imho 7nm can provide those clocks. If IPC is +10% better than Zen+'s, we are talking about matching Intel's gaming performance and surpassing them in heavily-threaded apps. Simple as that. Pricing and availability are the big questions.



Nope, IMC and CCX latencies are the big questions. Ryzen performance metrics aren't all about IPC and clocks like Intel's.


----------



## eidairaman1 (May 3, 2019)

Manu_PT said:


> Nop, imc and ccx latencies are the big questions. Ryzen performance metrics arent all about IPC and clocks like Intel.



Each revision it improves, it's not staying stagnant.


----------



## TheLostSwede (May 3, 2019)

Now, now, children, no need to fight over this. We should all have a much better idea of what AMD has in store for us in a few weeks' time.
I don't understand the aggressiveness between people here over this. Do all of you live such boring lives that the only place you can make a point of some kind is here?


----------



## eidairaman1 (May 3, 2019)

I look forward to what is being brought out. I just know if Asus were serious with TUF like they were in 2012 I would pick up a Sabertooth X570 with a 3700X.


----------



## Turmania (May 3, 2019)

I did make a purchase from the mentioned online shop in Turkey, but it was a couple of years ago. They are reliable as far as I know. The Ryzen 5 3600X seems like it will be a top seller if this is correct. But I will not raise my expectations; we'll just wait a couple of weeks and probably know more in June.


----------



## Manu_PT (May 3, 2019)

eidairaman1 said:


> Each revision it improves, it's not staying stagnant.



Unless they improved it by 100%, which won't happen, a 4.8GHz Zen 2 chip with a 15% IPC increase won't be enough to beat Intel at a lot of tasks - gaming, for example.

Honestly, the bigger you dream, the harder it will be to face the truth. Let's not get too excited, because that only harms AMD itself.


----------



## eidairaman1 (May 3, 2019)

Manu_PT said:


> Unless they improved it by 100%, wich won´t happen, a 4,8ghz Zen 2 chip with 15% increase IPC won´t be enough to beat Intel on a lot of tasks, like gaming, for example.
> 
> Honestly, the bigger you dream, the harder it will be to face the truth. Let´s not get too excited because that only harms AMD itself.



Not dreaming fyi. Improvement is good.


----------



## R0H1T (May 3, 2019)

*Third hand* confirmation via *reddit* ~


----------



## r9 (May 3, 2019)

I just hope they didn't sacrifice IPC to achieve those high clocks.
Actually, I'm hoping for that promised 5-15% IPC improvement.


----------



## NdMk2o1o (May 3, 2019)

Manu_PT said:


> Unless they improved it by 100%, wich won´t happen, a 4,8ghz Zen 2 chip with 15% increase IPC won´t be enough to beat Intel on a lot of tasks, like gaming, for example.
> 
> Honestly, the bigger you dream, the harder it will be to face the truth. Let´s not get too excited because that only harms AMD itself.


They're less than 10% behind Intel in gaming now, so yes, it will beat or at least match Intel


----------



## Manu_PT (May 3, 2019)

NdMk2o1o said:


> They're less then 10% behind Intel in gaming now so yes it will beat or at least match Intel



Less than 10%? How delusional... A 2700X at 4.2GHz in Battlefield V multiplayer can't even sustain a locked 144fps, while the 9700K/9900K fly at 180-200... 10%, yeah right...


----------



## notb (May 3, 2019)

eidairaman1 said:


> Each revision it improves, it's not staying stagnant.


Each revision as in Zen+ improved over Zen. So it happened once? 
Also, can you link a test that confirms that latency is actually lower in Zen+?

This issue is crucial for servers and has been very meticulously tested for EPYC, showing that because of latency they fall behind Xeon in particular (but popular) scenarios.
Problem is: AMD didn't launch a Zen+ EPYC.

2990WX was tested as the most powerful Zen+ CPU available and it turned out it's just as bad, maybe worse. Although the high core count surely contributed as well.

Problem with Zen2 is that it's a new architecture. AMD goes even further with cost cutting by using a separate I/O die. We'll see how this ends up.


HD64G said:


> High binned models will have sustainable all-core turbo close to 4.5GHz and single-threaded one close to 5GHz. And imho 7nm can provide those clocks.


I'm always slightly anxious when I see statements like this one.
In your honest opinion 7nm can provide these clocks... because we've seen countless 5 GHz chips made with TSMC 7nm? Because you work for TSMC? Because you're a quantum physicist working on semiconductors? Because you had a vision at your AMD altar?

Jokes aside, I'm really curious where people get this kind of knowledge.


----------



## R0H1T (May 3, 2019)

Manu_PT said:


> Less than 10%? How delusional..... 2700x at 4,2ghz on Battlefield V multiplayer can´t even sustain 144fps locked, while 9700k/9900k fly at 180-200... 10% yes right...


Have you seen any tests with fixed clocks? If not, go check them out; Zen is indeed only about 5~10% behind Intel clock for clock. So your assumption that AMD can't match Intel @4.8 GHz is BS. I bet you didn't even count the impact of smeltdown ~ hint: it's non-zero.


----------



## NdMk2o1o (May 3, 2019)

Manu_PT said:


> Less than 10%? How delusional..... 2700x at 4,2ghz on Battlefield V multiplayer can´t even sustain 144fps locked, while 9700k/9900k fly at 180-200... 10% yes right...


On average though if you want to cherry pick we can all do that


----------



## notb (May 3, 2019)

Berfs1 said:


> Just now realized that? Lmao intel won’t win for the next three years... I have attached an single thread comparison on CB, and I also attached a future prediction for how they will fare, and it seems that AMD has room for improvement.


One could think that people on a "computer enthusiast forum" would know how to make a screenshot.


----------



## Vayra86 (May 3, 2019)

Manu_PT said:


> Less than 10%? How delusional..... 2700x at 4,2ghz on Battlefield V multiplayer can´t even sustain 144fps locked, while 9700k/9900k fly at 180-200... 10% yes right...



10% is the IPC gap, give or take.
The _clock_ gap is higher, so if AMD can tackle both, they're basically on par stock vs. stock.

The only space left for Intel is the overclocked K parts that can run all-core boost at the single-core turbo frequency. There isn't much more left otherwise, and 100-200 MHz on top of 4.8 or higher is not even worth mentioning. Beyond that, we've already seen that even first-gen Ryzen loses most of its latency issues with a decent kit of RAM. Consider that 'Ryzen overclocking' versus Intel's hot mess at high clocks and they're even again, both in additional cost and additional performance. We already know that XFR is pretty damn good at maximizing potential on its own - a perk Intel's chips don't have.

Besides, beyond 120~160 FPS, who cares? There are far bigger influences in that region than the CPU, most of them network/engine/game related anyway. The only bonus you might be left with on Intel is that a specific set of engines/games excels on it, while others excel on a Ryzen CPU. It's going to be a similar game to the GPU comparison: choose your poison, either will do fine. That is what AMD needs, and that is what we consider 'equal' in hardware performance.


----------



## 0x6A7232 (May 3, 2019)

eidairaman1 said:


> I did in 2014 with my system specs.
> 
> 
> 
> F-35 is fine.



...do you honestly believe it would have been approved if the ACTUAL costs (before overruns) were known?


----------



## Manu_PT (May 3, 2019)

R0H1T said:


> Have you seen any tests with fixed clocks? If not then go check them out, Zen is indeed only about 5~10% behind Intel clock for clock. So your assumption that AMD can't match Intel @4.8 GHz is BS, I bet you didn't even count the impact of smeltdown ~ hint it's non zero



Another delusional one. Even at the same clocks (which makes no sense, as Intel easily clocks at 5.2 GHz), Intel still beats the crap out of Ryzen due to CCX latencies. You could have a 5.5 GHz Zen 2, but if the CCX latencies and IMC still suck, it won't beat Intel. If IPC were the only difference I would have got a Ryzen; 10% wouldn't bother me. But there's a lot more to it than that:


----------



## R0H1T (May 3, 2019)

I said fixed (same) clocks. If you don't have the numbers for that, then *don't bother making that claim*! As for delusional ~ you seem to be high or something atm


----------



## Shatun_Bear (May 3, 2019)

Manu_PT said:


> Another delusional one. Even at same clocks (wich makes no sense as Intel clocks easily at 5,2ghz), Intel still beats the crap out of Ryzen due to CCX latencies. You can even have a Zen 2 5,5ghz, if CCX latencies and IMC still suck, it won´t beat Intel. If IPC was the only difference I would have got a Ryzen, 10% wouldn´t bother me. But there´s a lot more than that:



I haven't read as much nonsense coming from one person in a while. Well done.


----------



## ironcerealbox (May 3, 2019)

It would be nice if true. Could we, perhaps, have a repeat of 2005/2006?

Could this happen?


----------



## Manu_PT (May 3, 2019)

Shatun_Bear said:


> I haven't read as much nonsense coming from one person in a while. Well done.



Care to elaborate, post videos, tests, etc.? Or will you just use the invalid argument "I haven't read as much nonsense"? Mr. AMD bot. You guys are the ones who ruin AMD; I bet you're also subscribed to AdoredTV. Then when the products are finally released, the whole internet gets disappointed because you create false expectations. Let AMD do their job and wait for the full reviews. Never forget the "poor Volta" ad.


----------



## Vayra86 (May 3, 2019)

Manu_PT said:


> Another delusional one. Even at same clocks (wich makes no sense as Intel clocks easily at 5,2ghz), Intel still beats the crap out of Ryzen due to CCX latencies. You can even have a Zen 2 5,5ghz, if CCX latencies and IMC still suck, it won´t beat Intel. If IPC was the only difference I would have got a Ryzen, 10% wouldn´t bother me. But there´s a lot more than that:



You need to learn to interpret numbers then, because what I see here is a Ryzen CPU vs. a *much higher clocked* Intel CPU missing out on a mere 7-15 FPS, with both in comfortably playable ranges. In fact, what we often see is the opposite: Ryzen at lower clocks comes far closer to the Intel CPU than you'd expect. Second gen solved most of the negative outliers we saw in the first.

Do the math yourself: pick any moment in that video, compare the two games' FPS, and calculate the % gap. It's not much over 10% most of the time, and quite a few times it's even under 10%. That is _despite a clock difference in favor of Intel._
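For anyone following along, the percentage-gap arithmetic being argued about here is just this (the FPS numbers below are made up for illustration, not taken from the video):

```python
# Hypothetical FPS readings, illustrating the percentage-gap math
# described above. Numbers are invented, not from any benchmark.

def fps_gap_percent(intel_fps: float, ryzen_fps: float) -> float:
    """Return Intel's lead over Ryzen as a percentage of Ryzen's FPS."""
    return (intel_fps - ryzen_fps) / ryzen_fps * 100.0

# A 7-15 FPS gap in comfortably playable ranges stays near or
# just above 10%:
print(round(fps_gap_percent(150, 140), 1))  # 7 FPS gap -> ~7%
print(round(fps_gap_percent(115, 100), 1))  # 15 FPS gap -> 15%
```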



Manu_PT said:


> Care to elaborate, post videos, tests etc? Or will you just use the non valid argument "I havent read as much nonsense". Mr Amd bot. You guys are the ones that ruin AMD, I bet you are also subscribed to adoredtv. Then when the products are finally released the whole internet gets disapointed because you retards create false expectations



You already posted the perfect evidence yourself, you just don't see it.


----------



## 0x6A7232 (May 3, 2019)

With Ryzen, if your RAM is clocked under about 3000 MHz, your CPU performance suffers, don't forget.



Manu_PT said:


> Care to elaborate, post videos, tests etc? Or will you just use the non valid argument "I havent read as much nonsense". Mr Amd bot. You guys are the ones that ruin AMD, I bet you are also subscribed to adoredtv. Then when the products are finally released the whole internet gets disapointed because you retards create false expectations. Let amd do their job and wait for the full reviews. Never forget the "poor volta" ad.



Dude he's not an AMD bot, read the freaking thread.  Your tears will be the most delicious of all.


----------



## Manu_PT (May 3, 2019)

Vayra86 said:


> You need to learn to interpret numbers then, because what I see here is a Ryzen CPU vs a *much higher clocked* Intel CPU missing out on a mere 7-15 FPS with both in comfortably playable ranges. In fact what we see is often the opposite, Ryzen with lower clocks seems to hit far closer to the Intel CPU than you'd expect.
> 
> Do the math yourself, pick any moment in that video between the two games' FPS and calculate the % gap. Its not much over 10% most of the time, and quite a few times its even under 10%. That is _despite a clock difference in favor of Intel._
> 
> ...



Watch the video again then, pause it at many points, and then tell me it's only 10%. There are 50 fps differences in some situations. You think that would be solved with a mere clock increase? LeL

Full of AMD bots everywhere. Don't cry when the CPUs are finally released.


----------



## Vayra86 (May 3, 2019)

Manu_PT said:


> Watch the video again then and pause it many times, and then tell me is only 10%. There are some 50fps differences in some situations. You think that would be solved with a mere clock increase? LeL
> 
> Full of AMD bots everywhere, don´t cry when the cpus are finally released.



50? Give me the timestamps in that video and I'll believe you. I'm not going to sit there staring at it for 10 minutes to prove _your_ point.

Also, a momentary gap, last I checked, is not what determines the overall performance gap between two CPUs. You base that on _average FPS_. And for that, my 7-15 FPS number is pretty accurate.

Bottom line: stop grasping at straws and admit you made a BS comment. It's no biggie; then we can move on. AMD bot... lol. You're one click away from ignore if you take that route with me. You know better.


----------



## Manu_PT (May 3, 2019)

Vayra86 said:


> 50? Give me the times in that video and I'll believe you. I'm not going to sit there staring at it for 10 minutes to prove _your _point.
> 
> Also, a momentary gap, last I checked is not what determines the overall performance gap between two CPUs. You base that on _average FPS._ And for that, my 7-15 FPS number is pretty accurate.
> 
> Bottom line, stop grasping at straws and admit you made a BS comment. Its no biggie, then we can move on. AMD bot... lol. You're one click away from ignore if you take that route with me. You know better.



Even the 9400F, clocked lower than the 2600X, wins in almost all games; in PUBG there was even a 30% difference (100 fps vs. 130 fps). GTFO please. Ignore me, you'll do me a favour:


----------



## R0H1T (May 3, 2019)

You know I haven't used the ignore function on any tech forum in over half a decade, till now that is


----------



## Manu_PT (May 3, 2019)

R0H1T said:


> You know I haven't used the ignore function on any tech forum in over half a decade, till now that is



Glad you did it, Mr. AMD Bot. AMD sucks for high-refresh gaming right now due to CCX and IMC latencies. Deal with it. Zen 2 won't beat Intel if that's not fixed, no matter the higher IPC. Keep believing AdoredTV. Ignore me.


----------



## r9 (May 3, 2019)

Manu_PT said:


> Less than 10%? How delusional..... 2700x at 4,2ghz on Battlefield V multiplayer can´t even sustain 144fps locked, while 9700k/9900k fly at 180-200... 10% yes right...



The average is actually 10%.
Yes, if you play at 720p and go into the hundreds of FPS, then it's more than 10%.
But then again, if you game at 720p you are too dumb to own a PC, so it shouldn't be an issue.
And Ryzen has lower utilization, so if you game and stream, for example, Ryzen comes out on top.
And it's cheaper.
So you can take the money you save, put it towards the GPU, and get more FPS.


----------



## Manu_PT (May 3, 2019)

r9 said:


> The average it's actually %10.
> Yes if you play at 720p and goes into hundreds of FPS than it's more than 10%.
> But than again if you game at 720p you are too dumb to own a PC so it's should not be an issue.
> And Ryzen has lower utilization so if you game and stream for example the Ryzen comes on top.
> ...



I don't play at 720p, and Ryzen with my GTX 1080 Ti wasn't even fully using the GPU in most games. Get the 720p excuse out of here. If you want a CPU for 60 Hz, then save money and get a 1300X or an i3-8100.

As for streaming, nothing beats a dual-PC setup; every CPU takes a big hit once you stream, be it a 2700X or an 8700K - GamersNexus did a review on that. A dual setup FTW if you take streaming seriously. If you don't, you can get away with 720p 30 fps streaming on an Intel chip anyway.



Vayra86 said:


> Don't worry, its just the reality in his little niche of the eternal quest for a CPU that'll feed his 240hz TN monitor properly.
> 
> Tiny little secret here: he'll never find it




I found it. It's called a 9700K, which overclocks to 5.2 GHz with the cooler bundled with my MSI motherboard, paired with 4000 MHz CL18 RAM that Ryzen can only dream of achieving. I'm rocking 200-240 fps in every multiplayer game at 1080p and 144 fps in every single-player game. Good luck with Ryzen. BTW, didn't you ignore me yet? :O


----------



## Nkd (May 3, 2019)

Shatun_Bear said:


> The problem with AdoredTV is he's such an AMD fanboy he does AMD more harm than good. This is like the 3rd time his made-up numbers have been used by retailers or websites, and the effect will be disappointment when the real base/boost clocks are revealed closer to launch.
> 
> These numbers were made up in this 'leak' extravaganza video he made in December in an attempt to increase his Patreon subscribers. And it worked, it was one of his most popular videos ever. But he fabricated that whole chart. Come on lads; he claimed his 'source' gave him the prices of every single Ryzen 3000 CPU...in DECEMBER 2018. Laughable.



He is anything but an AMD fanboy. Go watch all his videos. lol


----------



## M2B (May 3, 2019)

What games?
My 8600K at 4.9 GHz, paired with a GTX 1080 pushed to its limits, barely keeps the framerate above 60 FPS in AC Odyssey. How do you get 200 FPS in every game?


----------



## Manu_PT (May 3, 2019)

M2B said:


> What games?
> My 8600K at 4.9GHz paired with a GTX 1080 that is pushed to its limits barely keeps the framerate above 60FPS in AC Odyssey, how do you get 200FPS on every game?



Go read the post, please. Also, your 8600K is not a 9700K. You have 6 threads, which I bet are pegged at 100% in that game.

These people....

The simple fact that you wasted money on an 8600K with 6 threads for almost €300 makes no sense. I would stick to an 8400 or 9400F, or just go the i7-8700 non-K route. The 8600K has no place in the market. This is the kind of people I'm arguing with... geez.


----------



## M2B (May 3, 2019)

Manu_PT said:


> Go read the post please. Also, your 8600k is not a 9700k, You have 6 threads with I bet are spanked on that game at 100%.
> 
> This people....



Your 9700K is 3x as fast as my CPU? I'm gonna kill myself.


----------



## Deleted member 178884 (May 3, 2019)

Manu_PT said:


> I found it, is called 9700k wich overclocks to 5,2ghz


That's called *pre-binned*. Stop talking smack unless you can find proof that 100% of 9700Ks do 5.2 GHz without shoving in absurd voltages that will degrade the CPU within a year. Second, not everyone is lucky in the silicon lottery; not all CPUs will do that, only a rather low percentage, and a pre-binned CPU would cost much more than a regular one.


Manu_PT said:


> paired with 4000mhz CL18


More proof you overpaid on a crappy bundle.


Manu_PT said:


> I´m rocking 200fps-240fps on every multiplayer game at 1080p and 144fps on every single player game.


Proof or get lost, and what settings? Low? Lol


----------



## M2B (May 3, 2019)

Manu_PT said:


> Simple fact you wasted money on a 8600k with 6 threads for almost 300€, makes no sense. I would stick to a 8400 or 9400f or just go i7 8700 non K route. 8600k has no place in the market. This is the kind of people I´m arguing with... geez.



€300?
What are you talking about? I'm not living in your country.
An 8600K can be clocked 1.2 GHz higher than an 8400, and 1.2 GHz is a lot of MHz, you know.


----------



## Manu_PT (May 3, 2019)

Xx Tek Tip xX said:


> That's called *pre binned. *Unless you stop talking smack and find proof 100% of 9700k's do 5.2ghz without shoving absurd voltages which will degrade the CPU within a year, second off not everyone is lucky on the silicon lottery not all CPUs do that and a rather low percentage will and that pre binned CPU would cost more than a regular one, much more.
> 
> More proof you overpaid on a crappy bundle.
> 
> Proof or get lost, and what settings? Low? Lol



1. Almost every 9700K does 5.1/5.2 GHz without a problem and under 1.35 V, which is more than safe. Go to the overclock.net 9700K thread and surprise yourself. While you bots think Coffee Lake Refresh is just Skylake + higher clocks, it isn't; Intel kept refining, and that's why you don't need voltages as high as on the 8700K, plus no toothpaste TIM.

2. I didn't overpay for anything. I got a good Samsung B-die 3200 CL14 TeamGroup kit and overclocked it to 4000 MHz - you know, because Z390 allows you to, unlike AM4.

3. It's everywhere on YouTube; go look for it. And who cares about ultra graphics in multiplayer? Black Ops 4, Quake Champions, Apex Legends, Overwatch, World War Z, etc. - I get more than 200 fps with a 1080 Ti at 1080p in all of them, and not even on low settings. Do the research.


----------



## HD64G (May 3, 2019)

notb said:


> I'm always slightly anxious when I see statements like this one.
> In your honest opinion 7nm can provide these clocks... because we've seen countless 5 GHz chips made with TSMC 7nm? Because you work for TSMC? Because you're a quantum physicist working on semiconductors? Because you had a vision on your AMD altar?
> 
> Jokes aside, I'm really curious where do people get this kind of knowledge.



Since Radeon VII is the first product on this new 7nm process and it already goes close to 2.15 GHz on water vs. the 1.85 GHz of the Vega 64 LC, we can safely assume Zen 2 can gain at least the same ~15% of headroom for its single-threaded clocks, so from Zen+'s 4.35 GHz we get to roughly 5 GHz. Personally, I can see us later getting 5.1-5.2 GHz turbo on the best-binned Zen 2 chips. As anyone can see in this post, all those numbers are based on simple math, not predictions based on personal preference. Any more meaningful questions?
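The ratio argument above works out like this (a back-of-the-envelope sketch of the poster's reasoning, not a real frequency model - process headroom does not actually transfer between GPU and CPU designs like this):

```python
# Scale Zen+'s top boost clock by the headroom Radeon VII showed
# over the Vega 64 LC on the same 7 nm process. Purely the poster's
# simple-ratio logic, reproduced for clarity.

vega64_lc_ghz = 1.85
radeon_vii_ghz = 2.15
zen_plus_boost_ghz = 4.35

headroom = radeon_vii_ghz / vega64_lc_ghz          # ~1.16x
projected_zen2_ghz = zen_plus_boost_ghz * headroom  # ~5.06 GHz

print(f"headroom: {headroom:.2%}")
print(f"projected Zen 2 boost: {projected_zen2_ghz:.2f} GHz")
```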


----------



## Deleted member 178884 (May 3, 2019)

Manu_PT said:


> 1- Almost every 9700k does 5,1ghz/5,2ghz without a problem and under 1,35v wich is more than safe. Go to overclock.net 9700k thread and surprise yourself. While you bots think that Coffee Lake refresh is just equal to Skylakle + higher clocks, It isn´t, Intel kept refining and that´s why you don´t need as high voltages as 8700k, plus no tooth paste.


"Almost every"
>11% do it at a greater voltage than you specified.


----------



## eidairaman1 (May 3, 2019)

0x6A7232 said:


> ...do you honestly believe it would have been approved if the ACTUAL costs (before overruns) were known?



You could say the same about the F-22, F-15, F-16, C-17, and C-5, then.

And just so you know, the operational cost of the F-35 isn't nearly as much as the E-3's.
Pilots are loving it. It's also easier to maintain, and just like the F-4 in the past, most parts are interchangeable between service branches and global operators.


----------



## londiste (May 3, 2019)

Vayra86 said:


> Beyond that we've already seen that even first gen Ryzen loses most of its latency issues with a decent kit of RAM. Consider that 'Ryzen overclocking' compared to Intel's hot mess at high clocks and they're even again, both in additional cost and additional performance.


That is actually one of the open questions about Zen 2: the I/O die should introduce additional latency. Exactly how much, and how it is handled, we will have to wait and see.


----------



## FlanK3r (May 3, 2019)

*All these specs are based on an old AdoredTV video and are fake! Please update the news.*


----------



## notb (May 3, 2019)

HD64G said:


> Since Radeon 7 is the 1st product of this new 7nm process and it already goes close to 2.15GHz on water vs the 1.85GHz of Vega64 LC, we can safely assume that the Zen2 can gain at least the same 15% of headroom for their single-threaded clocks, so from the 4.35GHz of Zen+ we go exactly to 5GHz.


That's not how semiconductors work. There's no magical scaling of frequency limits.

But whatever. This is not a physics forum.
In a few weeks all will be revealed. I'm sure you'll be happy with (and able to defend) whatever frequencies Zen 2 ends up with.


----------



## dicktracy (May 3, 2019)

Uber fake


----------



## windwhirl (May 3, 2019)

HwGeek said:


> Ryzen 9 Box is fake since you see Ryzen 3 box cooler .



And the typeface for the "9" is different.

Besides, isn't "Ryzen 9" actually where current Threadripper stands?


----------



## Gasaraki (May 3, 2019)

HD64G said:


> Since Radeon 7 is the 1st product of this new 7nm process and it already goes close to 2.15GHz on water vs the 1.85GHz of Vega64 LC, we can safely assume that the Zen2 can gain at least the same 15% of headroom for their single-threaded clocks, so from the 4.35GHz of Zen+ we go exactly to 5GHz. Personally, I can see that later on we will see 5.1-5.2GHz turbo on the best binned Zen2 chips. As anyone can see in this post, all those numbers are based on simple math logic and not prediction based on personal preference. Any more meaninful questions?



LOL. I love it. I've been burned by the AMD hype train before, with first-gen Ryzen; I thought it was going to be the second coming of AMD.


----------



## mahoney (May 3, 2019)

R0H1T said:


> Have you seen any tests with fixed clocks? If not then go check them out, Zen is indeed only about 5~10% behind Intel clock for clock. So your assumption that AMD can't match Intel @4.8 GHz is BS, I bet you didn't even count the impact of smeltdown ~ hint it's non zero


In synthetic benches they're very close, but in real-world use like gaming, at the same clock speed Intel is at least 15% better - in some cases even more.


Just look at how the 2700X drops frames while the 9900K is almost consistent. Now add in Intel's higher clocks and you get over a 20% difference. It's also the reason the 2700X bottlenecks the high-end RTX cards.


----------



## aQi (May 3, 2019)

Where are the other specs?
Instruction set extensions?
PCI Express support?
RAM channels and speeds?
Voltage?


----------



## EarthDog (May 3, 2019)

Not known.

PCIe 4.0.

Dual channel - no idea on JEDEC speeds.

That will vary with every chip, like all other processors.


----------



## efikkan (May 3, 2019)

NdMk2o1o said:


> They're less then 10% behind Intel in gaming now so yes it will beat or at least match Intel


I just want to point out that gaming is one of those workloads where the delivered performance is not proportional to CPU performance, since it depends on how bottlenecked the GPU is.

While IPC is absolutely very important for gaming up to a point, gaming performance (which in reality is GPU performance) is not a benchmark of the CPU. Even a 10% IPC gain will not yield a 10% improvement in gaming performance unless the GPU is extremely bottlenecked. While most CPU-bound tasks will scale "forever", rendering performance will not: it scales until your GPU is no longer bottlenecked. Intel is basically already there for current games, and Zen(1) is close but not quite there. Even with more modest improvements in Zen 2 than most of you expect, Zen 2 should still come _close enough_ in gaming that it's probably good enough, and Intel will probably only retain a marginal, symbolic lead (~2-3%).
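The bottleneck argument above can be sketched as a toy frame-time model: each frame effectively takes max(CPU time, GPU time), so once the CPU is faster than the GPU, further IPC gains stop adding FPS. The millisecond figures below are illustrative, not measurements:

```python
# Toy model: CPU and GPU work on frames in a pipeline, so the
# slower of the two stages sets the frame rate.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    """Frames per second when the slower stage limits throughput."""
    return 1000.0 / max(cpu_ms, gpu_ms)

GPU_MS = 8.0  # GPU alone could render 125 FPS

print(fps(10.0, GPU_MS))         # CPU-bound at 100 FPS
print(fps(10.0 / 1.10, GPU_MS))  # +10% CPU speed -> ~110 FPS, full gain
print(fps(7.0, GPU_MS))          # GPU-capped at 125 FPS; extra CPU wasted
```

Note how the 10% CPU improvement only shows up in full while the CPU is the limiter; past that, the GPU cap hides any further CPU gains, which is exactly why gaming benchmarks understate IPC differences.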


----------



## AnarchoPrimitiv (May 3, 2019)

Valantar said:


> This just makes me all the more depressed that I'll have to wait until 2020 to see Zen 2 in an MCM APU with a kick-ass GPU. Maybe I'll just get a 5XX-series motherboard and a used 2200G or something for my planned HTPC upgrade, and replace the CPU a bit down the line.
> 
> Still, the wait until the 27th seems _very _long right now.



Yeah, I really wish AMD would make a powerhouse APU... the finished product having similar package dimensions to a Threadripper CPU (hell, they could even use the same socket). Basically, I'd like at least 8 Zen 2 cores, and enough GPU CUs to have no problem running AAA titles at 1080p and hitting 60 fps all day... it'd be a "guerrilla" means of threatening Nvidia's lower tiers too. I don't want to get greedy, but 4-8 GB of on-package DRAM would be nice as well, giving it a pretty big buffer before it has to start tapping into system memory.

I've got all brand-new hardware waiting for a new build - everything except a video card - but I just can't reward Nvidia's avarice, so I've held off... guess that's why I'm daydreaming about an APU that'd let me nix the video card altogether.


----------



## Valantar (May 3, 2019)

AnarchoPrimitiv said:


> Yeah, I really wish AMD would make a powerhouse APU....the finished product having similar package dimensions to a Threadripper CPU (hell, they can even use the same socket).  Basically, I'd like at least 8 zen 2 cores, and enough GPU CUs to have no problem running AAA titles at 1080p and hit 60fps all day....itdbe a "guerrilla" means of threatening nvidia's lower tiers too.  Don't want to get greedy, but 4-8GB of on package DRAM would be nice too, giving it a pretty big buffer before it has to start tapping into system memory.
> 
> Ive got all brand new hardware waiting to do a new build, have everythig except a videocard, but I just can't reward nvidia's avarice so I've held off...guess that's why I'm daydreaming about an APU that'd allow me to nix the videocard altogether


An APU on TR4 would be rather meaningless considering that it lacks display outputs - you'd need some sort of PCIe card to provide the ports anyhow.

Other than that, I don't think you need TR4 for what you describe once we get DDR5. Ryzen 3000 already fits two CPU chiplets, just replace one with a GPU die and call it a day. If you want on-package HBM that's another matter.


----------



## Aquinus (May 3, 2019)

I've been wanting to get a 16c machine. If this is true, I'll wait and not even consider Threadripper.


----------



## RH92 (May 3, 2019)

Shatun_Bear said:


> The problem with AdoredTV is he's such an AMD fanboy he does AMD more harm than good. .
> 
> These numbers were made up in this 'leak'



Stop spreading BS, please!

1) AdoredTV is far from being an AMD fanboy. He is an AMD fan, indeed (he has the right to be), BUT when it comes to his job he always makes very detailed and well-thought-out analyses, and he never hesitates to call out and criticize AMD on their bad products (GPUs lately).

2) AdoredTV doesn't make up numbers! He is well respected within the PC hardware community (he earned that respect); for example, Hardware Unboxed gave him credit in their latest video, and if that's not enough, he got credit from AMD's CEO herself! Weird for someone who makes up numbers, don't you think? Yeah, like it or not, he has real sources!

Should his word be taken as gospel? Of course NOT, but the fact is AdoredTV always tries to verify and publish only the leaks that come from trusted sources. Sometimes leaks are spot on, sometimes they aren't... saying AdoredTV is an AMD fanboy or that he makes up numbers shows that you know nothing about the guy and his work, so yeah...


----------



## NdMk2o1o (May 3, 2019)

Xx Tek Tip xX said:


> "Almost every"
> >11% do it at a greater voltage than you specified.


I wouldn't waste your time.


AnarchoPrimitiv said:


> Yeah, I really wish AMD would make a powerhouse APU....the finished product having similar package dimensions to a Threadripper CPU (hell, they can even use the same socket).  Basically, I'd like at least 8 zen 2 cores, and enough GPU CUs to have no problem running AAA titles at 1080p and hit 60fps all day....itdbe a "guerrilla" means of threatening nvidia's lower tiers too.  Don't want to get greedy, but 4-8GB of on package DRAM would be nice too, giving it a pretty big buffer before it has to start tapping into system memory.
> 
> Ive got all brand new hardware waiting to do a new build, have everythig except a videocard, but I just can't reward nvidia's avarice so I've held off...guess that's why I'm daydreaming about an APU that'd allow me to nix the videocard altogether


Just one little problem: for that kind of GPU power you are looking at probably 150 W at the least, with AMD anyway. Add another 95 W for the CPU/IO/interconnects etc. and you have a 250 W behemoth APU that would need some serious cooling, not to mention you couldn't fit such a GPU die onto even a TR-size package - if you could, we would have GPUs on PCIe cards the size of a small PCIe Wi-Fi/Ethernet card.


----------



## Manu_PT (May 4, 2019)

My 9700K does 5.2 at 1.31 V, fully stable, too. So there are two chips already doing it, yet you say I have no credibility? Also, I said 5.1/5.2. I don't know of a single 9700K that doesn't do at least 5.1 GHz at 1.35 V. Not my fault that you can't read properly.

The internet is full of AMD bots everywhere, because that's trendy. People like to bash Intel and Nvidia, yet they still deliver the best products. Facts. Deal with it.

I will believe in AMD Zen 2 performance when I see it, as always. I don't feed on dumb rumours spread by the same guy who said Radeon VII would obliterate every Nvidia card (AdoredTV). Believe what you want. Let's wait for the full release and then we can comment on the performance.



RH92 said:


> Stop spreading BS please  !
> 
> 1) Adored TV is far from being an AMD fanboy . He is an AMD fan indeed ( he has the right to ) BUT  when it comes to his job he always makes very detailed and  well though out analyses + he never hesitates to call out and criticize AMD on their bad  product ( GPU's lately ) .
> 
> ...



He is indeed a fanboy. The guy said Zen+ would obliterate Intel in gaming because the IPC was up to 20% higher, and said Zen+ would reach 4.5 GHz clocks. Short story: it reaches 4.2 GHz, and that's already a stretch, and the IPC wasn't improved that much. He also said Radeon VII would obliterate the RTX 2080 Ti, but then he proceeds to delete the videos where he "predicts" stuff... ironic.


----------



## ch3w2oy (May 4, 2019)

Manu_PT said:


> My 9700k does 5,2 at 1,31v full stable too. So there are 2 chips already doing it, yet you say I have no credibility? Also I said 5,1/5,2. I don´t know about a single 9700k that doesn´t do at least 5,1ghz with 1,35v. Not my fault that you can´t read properly.
> 
> Internet is full of AMD bots everywhere, because that´s trendy. People like to bash Intel and Nvidia, yet they still deliver the best products. Facts. Deal with it.
> 
> ...



Okay, so possibly more chips can hit 5 GHz than I originally thought. If you take a look at Siliconlottery.com, they tell you how likely each chip is to reach that speed. They just updated it; the numbers were lower when I looked a while back.

I don't agree that if you need high-end performance you buy Intel and Nvidia. Maybe if you need "the best of the best," and even then, at 4K, Ryzen works just as well as Intel. I sold my entire 9700K @5.2 setup with an FTW3 2080 for a Ryzen build with a Radeon VII. I also went custom loop after I legitimately considered a 2080 Ti. I don't just game, though, and Zen WILL BE the better buy, 2% slower or not. Once I can get Zen 2, it's a wrap. If you don't consider that high-end, then I just don't know what to tell you.

This isn't Bulldozer at 4.8 GHz.

Not everyone is playing at 1080p needing the most FPS they can get; everyone knows the CPU matters less as you go up in resolution. The only GPU I would get from Nvidia at this point in time is a 2080 Ti. AMD as of now just plain offers a better experience with their software and drivers, and on top of that they keep getting better. And this is coming from someone who has had a Vega 64, Radeon VII, 1080 Ti, and 2080 - all of those being the highest of the high end - unless you specifically need Intel for a certain task.

Zen 2 may not match Intel's performance, but if you really think it's not going to close the gap enough to not make a damn difference, then you're just delusional. Also, even if AMD remains 5% behind Intel in performance, guess what? The only thing Intel will be good for is gaming. That's it. Of course there are a few things/programs that benefit from each platform, but even at 5% behind Intel, it would be stupid to buy a more expensive processor with fewer cores for a 5% boost in gaming only.


----------



## Metroid (May 4, 2019)

Pownage countdown, here we go - and this won't be pretty for Intel!!!

Just like the Core Duo was the best thing to happen in 2006, Ryzen 3000 is the best thing to happen in 2019 for the PC community as a whole.


----------



## EarthDog (May 4, 2019)

ch3w2oy said:


> but even at 5% behind Intel it would be stupid to buy a more expensive processor with less cores for a 5% boost in only gaming


When IPC and clock speeds are there (very close on the first count, in the ballpark on the second), I agree 100% with your statement.

I'm really interested to see what this does for core counts people can actually UTILIZE (not use - there is a difference). Someone buying a system today, for example, can easily max out any game title with a 6c/12t or 8c/8t CPU for the next few years. Unless the cores are utilized, or hell, even used, what is the point of more cores? Bragging rights used to be megahertz and IPC... but now it's core count, which few people can use beyond what already exists on the mainstream... still waiting for software to catch up.

I really can't wait to see what Zen 2 brings to the table. I hope it beats Intel in IPC, reaches the same clock speeds, and still keeps pricing in the ballpark. That should force Intel's hand a bit on prices...


----------



## Fatalfury (May 4, 2019)

Curiosity kills the cat (hype) [AMD fans]


----------



## R0H1T (May 4, 2019)

mahoney said:


> In synthetic benches they're very close but in real world like gaming at same clock speed intel is at least 15% better in some cases even more.
> 
> 
> 
> ...


In the real world they're pretty close as well. Now, the numbers I have are over 2 years old & since then things have changed; Zen is handicapped by 2400MHz RAM, but so is Intel ~ https://www.hardware.fr/articles/956-6/piledriver-zen-broadwell-e-3-ghz.html

The results vary wildly with different applications or games & in some cases, like Komodo & x264, AMD beats Intel. Keep in mind this is zen 1 vs BDW-E  w/2400MHz RAM, pre smeltdown patched OS & older games. The 9900k today will come out tops in 9 out of 10 tests but I still believe the results will be in the region of ~10% on avg with certain outliers.

Sadly no major outlet does this test anymore ~ at fixed clocks we'd get better info about the actual (IPC) difference between various uarches.


----------



## MT66 (May 4, 2019)

Hopefully we can learn about Ryzen 3000 SKUs at Computex, I want to hear it from Lisa and see the big Graphics on the screen. Until then there is nothing major to be hyped about.


----------



## eraser666 (May 4, 2019)

Complete fake news. It's very funny.


----------



## robot zombie (May 4, 2019)

EarthDog said:


> I'm really interested to see what this does to CPUs people can actually UTILIZE (not use - there is a difference). Since, someone buying a system today for example, can easily max out any game title with a 6c/12t CPU or 8c/8t CPU for the next few years, unless the cores are utilized or hell, even used, what is the point in more cores? Bragging rights used to be Megahertzzzzzzz and IPC.... but now its core count which few people can use more than what already exists on the mainstream... still waiting for software to catch up.


For most people, I kinda get that. Like, if all you wanna do is play vidya games and faff on the net, it really is just for the sake of knowing you have a friggen McLaren... even if you only drive to work with it to show off. Plenty of people get nice cars they almost never even take out of the garage.

Personally I would never go that far, which is why for my build I stuck with the 2600, because it works great for my needs and nothing about its capabilities goes to waste between my gaming, wantonly disorganized multitasking, and music production (yes, it does come into play with some DAWs - some use every thread they can grab onto... handy when you've got a bunch of tracks with HQ emulation or synths/instrument sims with huge sample banks shuffling double-digit gigs of data around in memory simultaneously.) I was super happy to have a CPU geared for that at well under $200. Can't see myself needing more.

But I also get the appeal of more cores... or really any piece of hardware with extraneous capabilities. I like tech, in general. If it's intricate/complicated and possesses capabilities that I can tap into by exploring those nuances, I want one. I'll figure out what to do with it just as an excuse to have it and motivate myself to figure out how it works... just to see for myself what it can do. I will pick something up just to play around with it, even if I didn't know much/hadn't cared what it's meant for before it caught my interest. I'll adopt the relevant activities just to get another piece of technology in my hands. For instance, I've picked up photography out of an infatuation with DSLRs and optics. A lot of people are like that with tech, I think. Actually I think most people invested in anything tech-related are, even if a large number of them don't realize. We convince ourselves to find uses for kit that we find interesting just because it's interesting and we see possibilities to explore or learn about.

Some people are into people, and things to do with people. Other people are perhaps a bit more into things. Tech nerds tend to like things more than people. Maybe that's why we argue unproductively so much 

All I know is that if you put in front of me (and probably a lot of people here) a CPU that's very fast and has an absurd amount of cores/threads at a price that I can swing, I am going to want to buy it in order to discover my own ways of putting it to use, even if I don't really *need* those capabilities. Basically I'll figure out what I 'need' it for to justify buying it - I will find something beyond just wanting some fancy new high technology in my life, simply through acquiring it and messing around with it. When you're an enthusiast, pragmatism isn't the be-all, end-all to getting the most out of your interests. So much of it is entwined in discovery. I am pragmatic when budgeting and constructing builds for other people, because to them, it's only a tool. So paying more than you need to in order to get the job done to your liking is indeed wasteful. I recognize that not everybody will appreciate the nuances of premium tech. Even within the circles of interests, one can't expect everyone to appreciate the same things in the same tech. We're nerds. Our interests are weirdly obsessive and almost arbitrarily specific.

For me, personally, it's about so much more than just how useful it is to me - so it's not a waste at all. Being able to do what I need is only the *minimum* requirement. Sometimes going over the top furthers your love for the stuff you seek better understanding of - brings you inherently closer to it by imparting personal meaning onto it. I am the sort of person to occasionally take my fancy technological wonders out just to inspect and marvel at them while I ponder their inner workings and dream-up scenarios. That makes me happier than a lot of things in life.

Going back to cars... a lot of people love fast cars with all of these crazy "bragging rights" features even though in their daily lives, they're really no more useful than a typical sedan - everything that makes it good on a track does you no good on a city roadway and actually is less optimal than a humble commuter in terms of maintenance and mileage. But then, when you do take the car out and _really_ open it up, or even if you're just in the garage meticulously caring for, tuning, and modding it, you remember what it's all for - there's a simple wholesomeness to those moments. A man and his machine.


Now... sorry, I gotta rant a little. I'm super-excited about Ryzen 3, but stuff like this... honestly I don't know what to think. Speculating so passionately just seems like a waste of time. You can try to make it this or that and argue over it with people, but really... why bother? People are gonna see what they wanna see. You can pretend it's whatever you want. I'd rather be drooling over something that actually exists... something that I can actually get. The other sort of hype, to me, is about as unproductive as dismissing a figment over this or that assumption about what it actually is before anyone even knows. Personally I'm just excited to see what actually drops. I don't care whether it can do this or is better than that. I just want to see what those limits and abilities actually are... see whats new and then maybe decide if I want to delve further into it. I don't really care who or what and I really don't get the obsession there.

Why does it have to be so damned personal? Why does it have to come down to measuring a person by their interests and purchasing decisions? I'll never understand what gets people so invested in shutting other people down and hoping they wind up disappointed. How do you like it when something you're excited about lets you down? Why would anybody ever want that for someone they share interests with? In what way is it good if anything new sucks? Because it makes you right? Or maybe because it puts down someone you think is wrong...? Doesn't really matter what faction you belong to... if you even see yourself that way. Why can't people just like what they like and stick to that?

I mean, people bicker over this shit like there's real, life altering shit at stake for them. It kinda blows my mind... just the level people will take things to in order to either put something up on a pedestal or bury it in the ground. Everyone can and should form their own opinions, but attaching yourself to them so voraciously really does everyone a disservice, whether you're wary, optimistic, or both. The person you are attacking and putting down because they are more or less excited than you about a certain thing is EXACTLY like you. You are both in it for the same things. We all share a common goal in seeing, acquiring, and learning about the latest and greatest. It's petty to bring baggage and insecurities into the conversation. If you know that you are right, then you have nothing to prove about yourself. The truth will back you up. Let it be.

Besides... if you're wrong, who cares? A few scenarios... you think Ryzen 3 is gonna suck and it winds up being awesome. Just means it was better than expected - any rational person who cares about the technology as a whole ought to be happy about that. It's an advancement! It's like being wrong about a hurricane wiping out a town. Or maybe you think Ryzen 3 is your Moses and it's hot garbage. Oh well, for it to be considered shitty means there is something else that must be awesome, so you can just shift your focus to that - there is more for you to discover that you may have missed in your previous fixation.

When nobody knows the truth, all a lot of people can seem to do is go in circles, getting madder and madder at each other with each pass. Now instead of being happy to be personally involved and invested in the pursuit of all of the incredible technology in the world TODAY, people are getting upset and frustrated with trying to justify their enthusiasm (or lack thereof) for what may come TOMORROW. To me, it's just ego games. There's absolutely no excuse for it. It's not fun or useful. Nobody is really learning or appreciating anything - only growing bitter and building walls between their own brethren. And maybe missing out on stuff they might otherwise pick up and find some real merit and satisfaction in.

In reality, there is going to be truth in both sides of the speculation. In some way, everybody is probably going to be as right as they are wrong! So maybe don't sweat it and just be happy with what you have and the things that are available to us all, should we want them. Tech is so vast, nobody can be ahead of the curve for more than 5 minutes. Being the guy who's right about something before anyone else in these pockets of humanity is overrated. It's fun to speculate and all... some of it gets pretty interesting to think about. But it gets ridiculous when people get all black and white about it simply for the sake of going at each other.

I hope Ryzen 3 is as awesome as people hope it will be. Just like I hope Intel comes up with an equally or more awesome answer to it. Anything that's not awesome, I don't really bother to engage with. I leave that to people for whom it is awesome.


----------



## Manu_PT (May 4, 2019)

ch3w2oy said:


> Okay, so possibly more chips can hit 5ghz than I originally thought.. If you take a look at Siliconlottery.com they tell you how likely each chip is to reach that speed.. They just updated it because the numbers were lower when I looked a while back..
> 
> I don't agree with that if you need high end performance you buy Intel and Nvidia. Maybe if you need "the best of the best." and even then, at 4k, Ryzen works just as good as Intel. I sold my entire 9700k @5.2 with a ftw3 2080 for a Ryzen build with a Radeon VII. I also went custom loop after I legit considered a 2080 ti.. I don't just game, though, and Zen WILL BE the better buy. 2% slower or not. Once I can get zen 2, it's a wrap. If you don't consider that high end then I just don't know what to tell you..
> 
> ...



You sold a 9700K @ 5.2GHz + RTX 2080 and bought a Ryzen + Vega VII? I have no words. Imagine paying money to downgrade, and on top of that drawing more power from the hardware while getting less performance. No comments.

And btw, 60Hz, doesn't matter if it's 720p or 8K, is not high-end to me. If you want a CPU for 60Hz you grab an i3 8100 or a Ryzen 1300X. High-end to me is 1080p 240Hz and 1440p 165Hz. Ryzen can't even sustain 130fps LOCKED on most engines. Fact.

With the money you spent on the downgrade process, you could have gotten an i9 9900K + RTX 2080 Ti, and it would obliterate Ryzen in every possible scenario, from gaming to productivity.


----------



## Solaris17 (May 4, 2019)

Lets try not to lash out. Please keep it civil so I don't need to dole more points. Thanks a bunch.


----------



## Melvis (May 4, 2019)

Still relevant  I think lol


----------



## InVasMani (May 4, 2019)

NdMk2o1o said:


> I guarantee there will be, what would be the point otherwise if they had the same base and boost as the 2*** series


 To be fair, that might not be untrue, and with how Precision Boost works it might not be a bad thing either. They don't need to run all cores at the highest frequency at all times to get the best performance. In applications that benefit more from higher single-core performance, perhaps Precision Boost will recognize that and adjust accordingly. If that were the case, downclocking the unneeded cores further to keep heat lower, so a single core or a few cores can scale higher, might make more sense. Application-adaptive core clock boost scaling, like Radeon Chill.



Metroid said:


> Pownage countdown here we go and this won't be pretty for Intel!!!
> 
> Just like core duo was the best thing to ever happen in 2006, Ryzen 3000 is the best thing to ever happen in 2019 for the pc comunity as a whole.


 The Athlon 64 led to Intel's best CPU lineup ever, C2D/C2Q. I can see Ryzen doing the same, especially at 7nm, and with Intel's fumbling it should really light a fire under them, which is great for consumers. Either they bounce back with a vengeance, or we get a new competitive CPU war between them; either way it's good to see, and needed, since CPUs stayed stagnant for too long.


----------



## R0H1T (May 4, 2019)

Melvis said:


> Still relevant  I think lol


Can't be any worse than GoT season 8 , surely 

Absolutely butchered that *Night King* story line


----------



## Caring1 (May 4, 2019)

Melvis said:


> Still relevant  I think lol


Look like Corsairs to me


----------



## Shatun_Bear (May 4, 2019)

Gasaraki said:


> LOL. I love it. i've been burned by the AMD hype train for the first gen Ryzen. I thought it was going to be the second coming of AMD.



I know you didn't mean this, but Ryzen actually has been 'the second coming of AMD', as they went from a 5-10% sales ratio and irrelevance to +50% compared to Intel in some markets. We don't need reminding how disastrous Bulldozer was for them.


----------



## NdMk2o1o (May 4, 2019)

Manu_PT said:


> You sold a 9700k @ 5,2ghz + RTX 2080 and bought a Ryzen + VEGA VII? I have no words. Imagine paying money to downgrade, and on top of that use even more power from the hardware while having less performance. No comments.
> 
> And btw, 60hz, doesn´t matter if it´s 720p or 8k, is not high-end to me. If you want a CPU for 60hz you grab an i3 8100 or a Ryzen 1300x. High-End to me is 1080p 240hz and 1440p 165hz. Ryzen can´t eve sustain 130fps LOCKED on most engines. Fact.
> 
> With the money you spent with the downgrade process, you would have got an i9 9900k + RTX 2080ti, and it would obliterate Ryzen in every possible scenario, from gaming to productivity.


Go troll elsewhere. Here's a better idea: why don't you go take a picture of your beautiful Intel rig and have some alone time with it in the privacy of your bathroom, though you probably already do by the sound of you


----------



## Vayra86 (May 4, 2019)

robot zombie said:


> For most people, I kinda get that. Like, if all you wanna do is play vidya games and faff on the net, it really is just for the sake of knowing you have a friggen McLaren... even if you only drive to work with it to show off. Plenty of people get nice cars they almost never even take out of the garage.
> 
> Personally I would never go that far, which is why for my build I stuck with the 2600, because it works great for my needs and nothing about its capabilities goes to waste between my gaming, wantonly disorganized multitasking, and music production (yes, it does come into play with some DAWs - some use every thread they can grab onto... handy when you've got a bunch of tracks with HQ emulation or synths/instrument sims with huge sample banks shuffling double-digit gigs of data around in memory simultaneously.) I was super happy to have a CPU geared for that at well under $200. Can't see myself needing more.
> 
> ...



Yep I read it all. This one stuck with me

"People are gonna see what they wanna see "

/thread


----------



## Chomiq (May 4, 2019)

I'm willing to bet that the 16/32 part will be a Threadripper, not an actual desktop CPU.


----------



## Joss (May 4, 2019)

ch3w2oy said:


> Not everyone is playing 1080p and need the most FPS they can get. Everyone knows the CPU matters less when going up in resolution..


This.
I don't know the percentage of 60Hz monitors among gamers but I suspect it's well above 50%, and at that refresh rate it doesn't matter a fart if the CPU has higher IPC or whatever.
Same when you go up in resolution: the GPU is so busy dealing with all those pixels that it can't produce many FPS, and again it doesn't matter a fart if the CPU has higher IPC or whatever.
The discussion about Intel vs AMD is meaningless to the majority of gamers and users in general.


----------



## Anymal (May 4, 2019)

Yes, true, Stock status: Sold out, ahahahaha


----------



## Aquinus (May 4, 2019)

Chomiq said:


> I'm willing to bet that 16/32 part will be a threadripper, not an actual desktop cpu.


Threadripper already has a 16c variant, the 2950X. Considering that AMD is doing the chiplet thing, I don't think it's out of the realm of possibility for this to be the higher end of the mainstream platform's lineup. The only thing I'd be skeptical about is dual channel memory being enough to feed 16 cores. We'll just have to wait and see.


----------



## efikkan (May 4, 2019)

While a 16-core AM4 part is certainly theoretically possible, a 16-core part with decent clocks will require the node to be extremely good. I'm afraid that a launch of such a product may end up as a "paper launch", if launched before the node is able to produce enough good chips. But at some point later, it can be quite possible.

But what would be the market for such a chip? As *Aquinus* touched upon, many heavily threaded workloads, like video encoding, require good memory bandwidth to go along with the cores. And the heavy workloads that are not memory bound are usually synchronous workloads, which scale better on fewer faster cores than on many slower ones, so many real-world usages would suffer if a 16-core AM4 part cannot retain high enough clock speeds.
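The fewer-faster vs many-slower trade-off described above can be sketched with Amdahl's law. A toy model follows; the parallel fractions, core counts, and clock ratios are made-up illustrative numbers, not measurements of any real chip:

```python
def speedup(parallel_fraction, n_cores, clock_ratio=1.0):
    # Amdahl's law, scaled by per-core clock speed:
    # speedup = clock_ratio / (serial part + parallel part / cores)
    serial = 1.0 - parallel_fraction
    return clock_ratio / (serial + parallel_fraction / n_cores)

# Mostly-parallel workload (e.g. rendering): wide-and-slow wins.
wide = speedup(0.95, 16, 1.0)    # ~9.1x
fast = speedup(0.95, 8, 1.15)    # ~6.8x, despite a 15% clock advantage

# Half-serial workload: fewer, faster cores win.
wide2 = speedup(0.5, 16, 1.0)    # ~1.9x
fast2 = speedup(0.5, 8, 1.15)    # ~2.0x
```

The crossover point depends entirely on how parallel the workload is, which is the crux of the argument: 16 cores only pay off if both the software and the memory subsystem can keep them busy.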

But then again, marketing and hype is everything these days, so perhaps they'll do it just for that…


----------



## metalkhor (May 4, 2019)

Although the specs are the same as in the previous leaks, the box images are extremely fake and unreal.


----------



## Aquinus (May 4, 2019)

efikkan said:


> While a 16-core AM4 part is certainly theoretically possible, a 16-core part with decent clocks will require the node to be extremely good.


I think maintaining clocks is less of a big deal if it's two 8c chiplets. Yields for smaller dies tend to be better than for bigger monolithic ones, and we already know AMD is doing the chiplet thing with the separate I/O die. I would expect the same kind of clocks as the rest of the lineup, as they're really pretty consistent, even throughout Threadripper. Everything else you said I completely agree with though.
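The yield argument above can be illustrated with the classic Poisson yield model; the defect density and die areas below are made-up illustrative numbers, not AMD's actual figures:

```python
import math

def poisson_yield(defects_per_mm2, area_mm2):
    # Poisson yield model: fraction of defect-free dies, Y = exp(-D * A)
    return math.exp(-defects_per_mm2 * area_mm2)

D = 0.003                       # hypothetical defect density for a young node
y_mono = poisson_yield(D, 150)  # one monolithic 16c die (assumed 150 mm^2)
y_chip = poisson_yield(D, 75)   # one 8c chiplet (assumed 75 mm^2)

# Silicon spent per working 16-core CPU (lower is better):
cost_mono = 150 / y_mono        # ~235 mm^2 of wafer per good monolithic part
cost_chip = 2 * 75 / y_chip     # ~188 mm^2 -- a defect only kills half the part
```

Because a defect in one chiplet doesn't scrap the other, the two-chiplet package wastes less silicon per working part, and the best-binning chiplets can be paired for the high-clock SKUs.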


----------



## notb (May 4, 2019)

Aquinus said:


> I think maintaining clocks is less of a big deal if its two 8c chiplets. Yields for smaller dies tend to be better than bigger monolithic ones and we already know AMD is doing the chiplet thing with the separate I/O chip. I would expect the same kind of clocks as the rest of the lineup as they're really pretty consistent, even throughout threadripper. Everything else you said I completely agree with though.


Yes, there's no reason why a 16C CPU wouldn't be able to hit clocks as high as other models using the same dies (putting aside power draw and heat, obviously).
But should we really be amazed by this?

Intel does the exact same thing - with monolithic chips. More expensive CPUs have the same or higher clocks than cheaper ones. The idea behind Intel's lineup is that the more you spend, the faster the CPU should be - no matter what type of load it has to take care of.

As for the rumored specs: you mentioned the 2950X, thanks for that.
16 cores, 3.5/4.4GHz, 180W.
Assuming 7nm lowers power draw by about 1/3 (based on Radeon VII vs Vega 64), we get pretty much to the 125W TDP. I think it's totally believable.
Where they'll get those extra 10% clocks, I have no idea. But let's say they do.
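Back-of-the-envelope, that scaling estimate works out as follows, assuming the ~1/3 power reduction seen in the GPU comparison carries over to CPUs (a big assumption):

```python
# 2950X reference point: 16 cores, 3.5/4.4 GHz, 180 W on 12nm.
tdp_12nm = 180.0
scaling = 2.0 / 3.0            # assumed: 7nm cuts power by ~1/3 (Radeon VII vs Vega 64)
est_7nm = tdp_12nm * scaling   # 120 W, right around the rumored 125 W TDP
```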

The 2950X has been tested by multiple reviewers, in a wide range of scenarios (from gaming and web browsing to rendering and scientific computing).
Everyone can open these tests and check if 2950X performance vs 2700X or 8700K is something that would change their lives.
Some 9900K reviews included a 2950X as well:
https://www.guru3d.com/articles_pages/intel_core_i9_9900k_processor_review,14.html

3800X may have a frequency advantage, but a lot of it will be consumed by dual channel RAM limitation.


----------



## mahoney (May 4, 2019)

Shatun_Bear said:


> I know you didn't mean this but Ryzen actually has been 'the second coming of AMD' as they went from 5-10% sales ratio and irrelevence to +50% compared to Intel in some markets. We dont need reminding how disaterous Bulldozer was for them.


With all the hype, most people were expecting an Athlon 64 - if you don't remember, that CPU was destroying Intel in games and synthetic benches despite having 1GHz lower clocks. Though it didn't last long. Ryzen, by contrast, sucks/is decent-ish at gaming and is ridiculously good at productivity. That's why I'm hoping Ryzen 2 is the real deal.


----------



## Aquinus (May 4, 2019)

notb said:


> a lot of it will be consumed by dual channel RAM limitation.


I think that remains to be seen since it really depends on the workload(s) that would cause the CPU to run at full tilt because two different tasks can have very different demands on system memory and cache. Also, even if memory bandwidth does become more of a bottleneck, that also just means that memory speed matters. I don't necessarily think that's a bad thing... but that's all running under one big assumption: performance is the only thing that's important.

Consider for a moment that the speed of the CPU could be tuned to the amount of memory performance you're expecting to have, so even if there isn't enough memory bandwidth to drive all the cores at max clocks, the CPU could distribute parallel load to more cores at lower clocks. That very well might be more efficient in terms of power draw than using fewer cores at a higher frequency.
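A toy model of that efficiency point, assuming dynamic power scales roughly with f³ (voltage tracking frequency) and perfect parallel scaling; both are idealizations, not measured behavior:

```python
def relative_power(cores, freq_ghz):
    # Toy dynamic-power model: P ∝ cores * V^2 * f, and with V ∝ f,
    # that gives P ∝ cores * f^3 (in arbitrary relative units).
    return cores * freq_ghz ** 3

# Same aggregate throughput either way: cores x GHz = 32 "core-GHz".
wide_slow = relative_power(16, 2.0)   # 16 * 8  = 128
narrow_fast = relative_power(8, 4.0)  # 8  * 64 = 512
# The wider, slower config does the same work at ~1/4 the dynamic power.
```

This is why boost algorithms that spread load across more cores at lower clocks can win on efficiency, as long as the workload actually parallelizes.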


----------



## ch3w2oy (May 4, 2019)

Manu_PT said:


> You sold a 9700K @ 5.2GHz + RTX 2080 and bought a Ryzen + Vega VII? I have no words. Imagine paying money to downgrade, and on top of that drawing more power from the hardware while getting less performance. No comments.
> 
> And btw, 60Hz, doesn't matter if it's 720p or 8K, is not high-end to me. If you want a CPU for 60Hz you grab an i3 8100 or a Ryzen 1300X. High-end to me is 1080p 240Hz and 1440p 165Hz. Ryzen can't even sustain 130fps LOCKED on most engines. Fact.
> 
> With the money you spent on the downgrade process, you could have gotten an i9 9900K + RTX 2080 Ti, and it would obliterate Ryzen in every possible scenario, from gaming to productivity.



Edit: 
I also want to let you know that I get over 140 fps in BF5 (since that's your game of choice) in multiplayer.. Very similar to the exact performance I got with my 2080 and 9700k. I actually feel I'm getting better performance. Both at 1440p with the same settings.. Joke's on you bud. Stop watching YouTube videos for performance metrics.. 

Some people understand that you don't need a 9900k to get things done. I wanted a nice quiet computer. I also stated that the build is being updated to Zen 2 right away.. So it might just be better than a 9900k, we don't know yet.. But even if it's not, who cares.. And who cares about power draw when you have a Prime Ultra Titanium 1000w PSU. My entire point was that you don't need Intel and Nvidia for a high end PC.

You sound petty thinking the 9900k is the only way to go when spending lots of money. It's also apparent you haven't used an AMD card otherwise you would realize the Wattman settings are superior to Nvidias offerings. Who spends a lot of money and then complains about power draw..

I can honestly say for a fact, 100%, that I am so much happier with my AMD build with Ryzen (new gen soon), Radeon VII, 1tb SX8200 Pro, 3600 c16 Samsung B Die Trident Z RGB and Titanium PSU, all in a Custom loop.. Than I was with my 9700k and FTW3 2080. It's not even a comparison in my book. When I had the Intel PC I just felt like a little bitch that followed the crowd and constantly felt I was missing something.. My PC is now the exact way that I wanted it because I didn't have to waste money on Intel and Nvidia tax..

Now, I never said that if you need absolute best don't go Intel or Nvidia.. By all means, do what makes you happy. I'm way happier with my quiet ass water cooled build than I was with my Intel + 2080.. And I guarantee you I get similar performance too. My Firestrike graphics score is almost 35k. My superposition score is higher than any other Reason VII score that you can find online. Userbenchmark doesn't mean much but my VII does %173, my 1080 ti did %170 and my 2080 did %178. Ya my 2080 pulled ahead a little further so what.. Doesn't mean the VII isn't high end.. My Vega 64 LC had amazing performance as well.. Better than any 1080 lol. People need to stop watching reviews on YouTube because most of them put AMD in a bad light. Gamers Nexus is like the only one that a actually tries.

Can't believe people care about power consumption, like it matters.. Are you one of those people that think the TDP on the 9900k is really 95w hahaha lol rofl lmao. Try over 200w to reach peak turbo on all cores.

New build with Radeon VII.. https://pcpartpicker.com/b/ZcsZxr





Old build with 9700k and 2080.. Lame homie.. https://pcpartpicker.com/b/BsFtt6


----------



## Vayra86 (May 4, 2019)

ch3w2oy said:


> Some people understand that you don't need a 9900k to get things done. I wanted a nice quiet computer. I also stated that the build is being updated to Zen 2 right away.. So it might just be better than a 9900k, we don't know yet.. But even if it's not, who cares.. And who cares about power draw when you have a Prime Ultra Titanium 1000w PSU. My entire point was that you don't need Intel and Nvidia for a high end PC.
> 
> You sound petty thinking the 9900k is the only way to go when spending lots of money. It's also apparent you haven't used an AMD card otherwise you would realize the Wattman settings are superior to Nvidias offerings. Who spends a lot of money and then complains about power draw..
> 
> ...



Mouth watering build and lighting there bud. Sweet. You seriously nailed it on that top pic.


----------



## ch3w2oy (May 4, 2019)

Vayra86 said:


> Mouth watering build and lighting there bud. Sweet. You seriously nailed it on that top pic.



Thank you!


----------



## kings (May 4, 2019)

Metroid said:


> Just like core duo was the best thing to ever happen in 2006, Ryzen 3000 is the best thing to ever happen in 2019 for the pc comunity as a whole.



I wouldn't go that far. Yes, it's cool to have CPUs with 12, 16 cores and all, but for maybe 95% (if not more) of people, the current CPUs, whether from Intel or AMD, are already overkill.

Some people may need many cores for some specific tasks, but that is a niche market. For the majority, having a Zen with 6~8 cores or a Zen 2 with 12~16 cores will be the same in the end.


----------



## notb (May 4, 2019)

Aquinus said:


> I think that remains to be seen since it really depends on the workload(s) that would cause the CPU to run at full tilt because two different tasks can have very different demands on system memory and cache. Also, even if memory bandwidth does become more of a bottleneck, that also just means that memory speed matters. I don't necessarily think that's a bad thing... but that's all running under one big assumption: performance is the only thing that's important.


Look at the 2990WX. 32 cores, 4 channels. Awful results.
Zen 2 brings the I/O die and chiplets. It's even more complicated and theoretically even more prone to the latency issues that plagued Zen 1.


> Consider for a moment that the speed of the CPU could be tuned for the amount of memory performance you're expecting to have, so even if the there isn't enough memory bandwidth to drive all the cores at max clocks, it would allow the CPU to distribute parallel load to more cores at lower clocks. That very well might be more efficient than using fewer cores at a higher frequency when it comes to power draw.


But with ideas like that, your CPU moves from being a system-on-chip to a cluster-on-chip. It spends a lot of time thinking what to do with instructions (checking cores/nodes, queuing). The result of this could mean more efficient and cheaper CPUs, but it also makes them less responsive. All this is visible in Zen and will be even more pronounced in Zen2.

This is not the future of computing that I'd like to see. I'd rather see a single, fast (like 50GHz) core with lots of cache and a rich instruction set. And I hope at least some of Intel's and AMD's R&D money is spent on that, not just on finding new ways to connect more and more cores.


----------



## londiste (May 4, 2019)

R0H1T said:


> Sadly no major outlet does this test anymore ~ at fixed clocks we'd get better info about the actual (IPC) difference between various uarches.


Sweclockers does it sometimes:
https://www.sweclockers.com/test/25500-amd-ryzen-7-2700x-och-ryzen-5-2600x-pinnacle-ridge/28#content


----------



## ch3w2oy (May 4, 2019)

Manu_PT said:


> Less than 10%? How delusional..... A 2700X at 4.2GHz on Battlefield V multiplayer can't even sustain 144fps locked, while the 9700K/9900K fly at 180-200... 10% yes right...



Hey, smarty pants, I get 140+ fps consistently at 1440p with my Radeon VII on my R5 2600, often above 160 as well. About the same, if not better, performance than my 9700k and FTW3 2080. Keep eating what the media keeps feeding little kids like you. I bet you think Apple also makes the best products in the whole world.. 

You're literally one of those people that fall for YouTube reviews. My Vega 64 and Radeon VII both do better than any review you can find on the internet. You need a break from life. Go on vacation or something.. Learn something.. Because as it stands, you know nothing about hardware.


----------



## Metroid (May 4, 2019)

kings said:


> I wouldn´t go that far. Yes, it's cool to have CPUs with 12, 16 cores and all, but for maybe 95% (if not more) of the people the current CPUs, whether from Intel or AMD, are already overkill.
> 
> Some people may have the necessity of many cores for some specific tasks, but that is a niche market. For the majority, having a Zen with 6~8 cores or a Zen 2 with 12~16 cores it will be the same in the end.



I've already discussed this many times: game and general developers are making use of the cores that used to sit idle. Like many, you may not know about it; I'm a developer myself and even I'm surprised by how fast multi-threading is being adopted. I thought it would take longer, but look at how things are. To date, Resident Evil 2 Remake is unplayable on 4 cores or fewer, which surprised me; I could not believe my quad-core couldn't handle it well. I needed more than 4 cores to make it playable, and that is today. Cities: Skylines is laggy if you use fewer than 8 cores with a population of 400k, and if you make use of all 20 tiles, 16 cores is not enough; SMT helps a lot there, and 32 threads may handle it well. So it's more and more common for games and everything else to use more and more cores, because that's the cheapest way to get performance out of them, and devs have been using this strategy for some time. If a normal game like Resident Evil 2 Remake needs 6 or more cores to render properly today, imagine in 2 years or so. So the Ryzen 3000 3800X with 16 cores, 32 threads is not unrealistic, as it can be used today and will probably still be fine in 5 years or so.

The other thing is that people assume a person will not use all the cores. That may have been true in the past; nowadays it is not. Like I said with Resident Evil 2 Remake: anybody can buy and play that game, and without the cores it needs, the game will simply be unplayable. So there is nothing wrong with buying a quad core or even a hexa core, but be realistic about the limitations you are facing, or will face in a year or two, if you decide to use your computer for something other than browsing the internet, listening to music, or watching movies and videos.


----------



## Aquinus (May 4, 2019)

notb said:


> But with ideas like that, your CPU moves from being a system-on-chip to a cluster-on-chip. It spends a lot of time thinking what to do with instructions (checking cores/nodes, queuing). The result of this could mean more efficient and cheaper CPUs, but it also makes them less responsive. All this is visible in Zen and will be even more pronounced in Zen2.


The problem is that a lot of this isn't on AMD to get right in hardware; it's on Microsoft to get right in the Windows CPU scheduler and the NT kernel. The 2990WX actually performs fairly well in Linux, and a lot of people think the 2950X is the sweet spot in Windows because Windows doesn't seem very competent at handling the additional cores. But you're right: it's not really any different from what we're seeing with Threadripper, other than the memory controllers not being spread around the CPU. I do think that unifying memory access and having a common last-level cache will make a difference.

Either way, you're not getting one of these CPUs if the only thing you care about is single threaded performance.


notb said:


> But with ideas like that, your CPU moves from being a system-on-chip to a cluster-on-chip. It spends a lot of time thinking what to do with instructions (checking cores/nodes, queuing). The result of this could mean more efficient and cheaper CPUs, but it also makes them less responsive. All this is visible in Zen and will be even more pronounced in Zen2.


It's really not any different from what we have already seen with Threadripper, but I do think that moving the I/O into a dedicated chip, instead of spreading I/O resources across dies, will definitely make a difference.


notb said:


> This is not the future of computing that I'd like to see. I'd rather see a single, fast (like 50GHz) core with lots of cache and a rich instruction set. And I hope at least some of Intel's and AMD's R&D money is spent on that, not just on finding new ways to connect more and more cores.


Even CPU design has to be measured against reality. If it were that easy to crank out really high clocks without problems, it would have been done already. I honestly think that what we're seeing is a logical evolution of CPU design. Hoping for a single fast core "like 50GHz", as you suggest, is wishful thinking and isn't really grounded in reality.

Edit: Honestly, we're already seeing this in mobile CPUs, where single-threaded boost clocks are pretty high and multi-core boost clocks are much lower to keep the CPU within power limits. This is just taking that to another level.


----------



## efikkan (May 4, 2019)

Aquinus said:


> The problem is that a lot of this isn't on AMD to get right with the hardware, it's on Microsoft to get right with the CPU scheduler in Windows and the NT kernel. The 2990WX actually performs fairly well in Linux and a lot of people think that the 2950X is a sweet spot in Windows because it doesn't seem very competent at handling the additional cores, but you're right.


Well, the NT kernel is "ancient" and has fallen behind, but that's a whole other discussion. The Threadripper 2990WX can perform better in Linux, since scheduling in Linux is better and allows some tweaking, but it's still really hit and miss.

When things get so bad, as with the 2990WX having to boot in different modes depending on the application you want to run, it's unsuitable as a workstation CPU. We shouldn't, and can't, start redesigning kernels and applications around the "design flaws" of specific CPUs; it should be the CPU maker's responsibility to design products that work well. And even if a CPU needs adjustments, they should be limited to parameters for the scheduler (as already happens with core configs, boost ranges, etc.), not a redesigned scheduler for Threadripper 3, another for Ryzen 3, another for Threadripper 4, and so on.


----------



## TheMadDutchDude (May 5, 2019)

I just wanted to chime in here with gaming results...

I have a high refresh rate and often see my monitor at 1440P being maxed at 144 Hz. It is obviously game dependent, but BFV regularly sits above 120 with all things cranked up.

Whoever was claiming that a 5.2 GHz Intel chip is faster than an AMD chip clocked 1 GHz or more lower... no shit! Clock speed can absolutely make up for a massive FPS difference.

This is a thread about AMD, and we all love these systems. Stop running them into the ground because your beliefs are different. I've had all-Intel systems up until Kaby Lake, and I can honestly tell you that there is next to zero difference if you don't look at the numbers.

Such a fool... honestly. If you believe that AMD is more than 10% behind Intel in IPC because of this, that, or the other, then you've no idea what "IPC" even means. Go and educate yourself, young padawan, and then come back later. If you think they aren't even that close, then explain how the upcoming 8c/16t Ryzen chips are ahead of Intel in compute workloads like Cinebench R15 while drawing 30% less power. That was compared to a 9900K running at the same clocks; you don't compare IPC at different clock speeds.
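
Since "IPC" is doing a lot of work in this thread: comparing it fairly means normalizing a benchmark score by clock speed, which is why same-clock comparisons matter. A rough sketch with made-up numbers (not real Ryzen or 9900K results):

```python
def relative_ipc(score_a, clock_a_ghz, score_b, clock_b_ghz):
    """Per-clock performance of chip A relative to chip B.
    A value above 1.0 means A does more work per cycle (higher IPC)."""
    return (score_a / clock_a_ghz) / (score_b / clock_b_ghz)

# Hypothetical: chip A scores 2000 at 4.0 GHz; chip B scores 2100 at 5.0 GHz.
# B wins the raw benchmark, yet A does roughly 19% more work per clock.
ratio = relative_ipc(2000, 4.0, 2100, 5.0)
```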

Enough said. Go and educate yourself before belittling more people.


----------



## TheGuruStud (May 5, 2019)

Gotta love blaming AMD instead of the incompetent idiots at Microsoft. So if microshit hadn't jumped on x64, you'd blame AMD for not making hardware that works? This is so goddamn laughable. Stop talking. Just go buy an Intel quad core and be happy lol. That's all you deserve.

Design flaws? Hmmm, it works just fine on Linux. Again, the design flaw is WINDOWS (or the app that can't handle high core counts)!
JFC, every day is like I'm taking crazy pills.

What's your excuse gonna be when Intel releases chiplets? Exactly: you'll be mum or praise it (like Apple diehards do with every iPhone shipping 5-year-old hardware).


----------



## notb (May 5, 2019)

Aquinus said:


> The problem is that a lot of this isn't on AMD to get right with the hardware, it's on Microsoft to get right with the CPU scheduler in Windows and the NT kernel.


Disagree. Computers are about software. Hardware is just a tool - it should follow needs, not force changes.
Why do you expect Microsoft to adjust? Why can't AMD make a PC CPU that works well with the dominant PC OS?

And it doesn't end there, right? Literally since Zen came out with 8 slow cores, AMD fans have been arguing that all software in the world is written wrong.

Imagine AMD were making screwdrivers and one day decided they can somehow save a lot of money by making Zen octo keys instead of hex keys. And AMD fans, instead of having a laugh, said "yay! So innovative! 8 is more! All screws are wrong!"


> The 2990WX actually performs fairly well in Linux


Linux is made with high-core multi-node systems in mind. It isn't surprising that it works better with a CPU like that one. It's quite a bit better on 2P machines as well.


> Either way, you're not getting one of these CPUs if the only thing you care about is single threaded performance.


Why not?
I'm not getting a TR4 platform - that's for sure (I think it's pointless - just like Intel HEDT).
But I am very interested in server CPU performance and EPYC has the exact same problems Threadripper has.


> I honestly think that what we seeing is a logical evolution of CPU design.


Just how exactly is making more cores more logical than making faster cores?

Also, computing is fundamentally single-threaded. There are relatively few situations where you really need many independent cores (to run programs at exactly the same time).
Most software, even software that seems to utilize many cores perfectly well, has to be forced to work that way. And it doesn't benefit in any way; e.g., two slow cores could be replaced with one core twice as fast and it would work equally well.
The opposite is rarely true. And making a program parallel - assuming it's possible in a particular case - always greatly complicates both design and coding.

I'm pretty positive that if we asked every programmer, algorithm scientist, and system architect in the world how much money could be saved by making everything single-threaded, it could easily fund an R&D budget for GaN. It's just that the world doesn't work that way; we have to get there in a more self-organized, evolutionary way.
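
The two-slow-cores versus one-twice-as-fast-core trade above is just Amdahl's law. A minimal sketch (the standard textbook formula, nothing specific to Zen or any benchmark) of why a serial fraction caps what extra cores can do:

```python
def speedup(p, n):
    """Amdahl's law: overall speedup when a fraction p of the work
    is parallelizable and runs on n equally fast cores."""
    return 1.0 / ((1.0 - p) + p / n)

# A workload that is 50% parallelizable:
two_slow_cores = speedup(0.5, 2)   # ~1.33x
one_fast_core = 2.0                # a 2x-faster core speeds up *everything*
core_flood = speedup(0.5, 1_000)   # ~1.998x; capped at 1/(1-p) = 2x forever
```

So for any workload with a meaningful serial fraction, one faster core beats two slower ones, which is the point being made.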


> Hoping for a single fast core "like 50Ghz" as you suggest is wishful thinking and isn't really grounded in reality.


Of course it is. But we stick to silicon and invest in 16 core gaming CPUs.
I'm sure that as we hit a node wall on silicon, PC CPUs will try different routes.
GaN CPUs are pretty much possible - just expensive.
Graphene CPUs are being developed. Expect hundreds of GHz.


----------



## R0H1T (May 5, 2019)

notb said:


> Disagree. Computers are about software. *Hardware is just a tool - it should follow needs, not force changes.*
> Why do you expect Microsoft to adjust? *Why can't AMD make a PC CPU that work well with the dominant PC OS?*
> 
> And it doesn't end there, right? *Literally since Zen came out with 8 slow cores*, AMD fans have been arguing that all software in the world is written wrong.
> ...


And there you go on with your inane rant mode!

Software changes according to the hardware prevalent at the time - it's called progress. Without AMD you'd probably be gluing 10 GHz NetBursts together right now - thanks, AMD!
And that dominant PC OS has also had to change - *DX12*, for instance - thanks again, AMD!

Slow 8 cores, WoW 

Linux is made by some of the brightest minds in the world; it has nothing to do with multi-node systems! If it were limited to just that, you wouldn't have the world's most dominant OS running on the Linux kernel. But hey, rant away! Windows caters to the lowest common denominator; that's their problem.

Yes, that must be the reason why Intel glued 56 cores together instead of releasing their 5 GHz, 28-core, 1 (2?) kW chilled monstrosity.

No it's not.

That's like - I'm not sure what to say here, did you sniff Intel's glue?

You know what's cheaper: improving the Windows scheduler.

Way more expensive than your last proposition, but probably the future.


----------



## efikkan (May 5, 2019)

TheGuruStud said:


> Gotta love blaming AMD instead of the incompetent idiots at microsoft. So, if microshit didn't jump on x64, you'd blame AMD for not making hardware that works? This is so goddamn laughable. Stop talking. Just go buy an Intel quad core and be happy lol That's all you deserve.
> 
> Design flaws? Hmmm, works just fine on linux. Again, the design flaw is WINDOWS (or the app that can't handle high core count)!


This is the same excuse that was used back in the Bulldozer days. For years, AMD fans claimed Bulldozer was superior and that the problem was just bad OS kernels and applications.

But that's where you are totally wrong; while we certainly should focus on efficient multithreading, SIMD, and cache optimizations in software, there is a *huge difference between optimizing for a good design and optimizing around "design flaws" in hardware*.

Zen's problems are luckily small compared to Bulldozer's fundamental design issues, but claiming that Threadripper's problem is a lack of proper multicore scaling in software is 100% wrong. The problems Threadripper has are tied to its own self-inflicted design limitations, causing issues with latency and memory operations; they have nothing to do with core count, as evidenced by Intel not having these issues.




TheGuruStud said:


> What's your excuse gonna be when Intel releases chiplets? Exactly, you'll be mum or praise it (like apple tards do to every iPhone with 5 yr old hardware).


As always, what matters is real-world performance; how it's achieved is less important.
Intel is working on chip stacking, and how they choose to interconnect these chips will determine how they perform, not whether it's one chip or several.



notb said:


> Hardware is just a tool - it should follow needs, not force changes.
> …
> And it doesn't end there, right? Literally since Zen came out with 8 slow cores, AMD fans have been arguing that all software in the world is written wrong.


Exactly.
It reminds me of how many engineers approach a task: redesigning the problem to fit the solution instead of designing the solution to match the problem.


----------



## Vayra86 (May 5, 2019)

notb said:


> Disagree. Computers are about software. Hardware is just a tool - it should follow needs, not force changes.
> Why do you expect Microsoft to adjust? Why can't AMD make a PC CPU that work well with the dominant PC OS?
> 
> And it doesn't end there, right? Literally since Zen came out with 8 slow cores, AMD fans have been arguing that all software in the world is written wrong.
> ...



Did Intel offer you that CPU engineering job yet?


----------



## notb (May 5, 2019)

R0H1T said:


> And there you go on with your inane rant mode!
> 
> Software changes according to the hardware prevalent at that time


No, it doesn't. Performance changes according to the hardware.

Software is designed to do the tasks we need it for. Sometimes it ends up being slow because that's how computers are at a given moment. We can't help that; we still need to get the job done.


> Linux is made by some of the brightest minds in the world, nothing to do with multi node systems!


LOL at the "brightest minds in the world". They're just programmers. Good ones, but let's not get overexcited.
The time of the "brightest minds in the world" is a bit too valuable for writing code.
Even within a single company or software team, the best or most experienced people usually spend relatively little time coding.

And yes, Linux development today is driven by enterprises that need it for high-performance systems (from big SAP servers to supercomputers). Companies like Intel, Red Hat, IBM, SUSE, Oracle, AMD, Nvidia and Mellanox are among the top contributors. The rest is focused mostly on smartphones/embedded.

The importance of Linux in PCs is very small. It's really not that hard to understand why a rebranded EPYC works better with Linux than with Windows.
But in the end you need PCs for people to actually benefit from what these powerful servers provide. And PCs need purpose-built hardware and software (including OS).


> Windows caters to the lowest multiple, that's their problem.


Windows caters to a normal user and aims at easy and smooth operation (like Mac OS). It's a different target than that of most Linux distros.


> Yes must be the reason why Intel glued 56 cores instead of releasing their 5GHz 28 core 1(2?) KW chilled monstrosity.


Intel glued 56 cores together because they could. Because why not? Because it's an attractive, cost-efficient product.

As for high-core models, Intel offers a very wide choice of CPUs boosting upwards of 3.5 GHz, and a few Xeons are past the 4.0 GHz barrier already.
It's becoming a standard today.
In the newly announced Cascade Lake-SP, the majority of CPUs will be able to boost to 3.9 or 4.0 GHz.
Sadly, this is not the case with the 56-core monster; it can go "just" to 3.8 GHz.

For continuous high load there are also CPUs with high base clock, like Xeon 8168 (24C, 3.4/3.7) or 6154 (18C, 3.7/3.7).

But you would have to know how server CPUs are used to understand why this is important. 

AMD's EPYCs are very slow by comparison (left to rot by AMD), but they will most likely catch up next year.


----------



## Aquinus (May 5, 2019)

notb said:


> Disagree. Computers are about software. Hardware is just a tool - it should follow needs, not force changes.
> Why do you expect Microsoft to adjust? Why can't AMD make a PC CPU that work well with the dominant PC OS?


I expect a CPU scheduler that isn't garbage, and if you're buying 32c/64t but don't need them (like with gaming), then you're just an idiot who likes to piss away money. It's like buying a 20-core Xeon and then whining about single-threaded performance when you opted for more cores. It's laughable.


notb said:


> Linux is made with high-core multi-node systems in mind. It isn't surprising that it works better with a CPU like that one. It's quite a bit better on 2P machines as well.


Hence why the NT kernel's scheduler is shit, but that's not a problem with the hardware if the OS can't effectively use it. AMD can't fix poor design decisions in the OS, and it's even more laughable to think that they can, or that they should bend over backwards for it. This kind of mentality would have said we should never have gotten NT and should still be using DOS-based Windows.


notb said:


> Why not?
> I'm not getting a TR4 platform - that's for sure (I think it's pointless - just like Intel HEDT).
> But I am very interested in server CPU performance and EPYC has the exact same problems Threadripper has.


Because normally one buys more cores to... you know... get more cores? If you're only interested in single threaded performance, you're not interested in TR4 chips, you're interested in burning a hole in your pocket. 


notb said:


> Just how exactly is making more cores more logical than making faster cores?
> 
> Also, computing is fundamentally single-threaded. There are relatively few situations when you really need many independent cores (to run programs at exactly same time).
> Most software, even that which seems to utilize many cores perfectly well, has to be forced to work like that. And it doesn't benefit in any way, i.e. 2 slow cores could be replaced with one 2x faster and it would work equally well.
> ...


Of course it's easier to write single-threaded code. You have fewer issues to deal with, but that doesn't mean it's the right decision given the workload. I write multithreaded code all the time in my day job, and let me tell you something: I don't write any data processing job that uses a single core. I use stream abstractions and pipelines all over the place, because changing a single argument to a function call can change the amount of parallelism I get at any stage in the pipeline.

It also helps when you use a language that's conducive to writing multi-threaded code. Take my main language of choice, Clojure: it's a Lisp-1 with immutability through and through, plus a bunch of mechanisms for controlled behavior around mutable state. It's a very different animal from writing multi-threaded code in, say, Java or C#, and it's really not that difficult.
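
The "one argument changes the parallelism" pattern isn't specific to Clojure. A rough Python analogue (a hypothetical `process_stage` helper sketched for illustration, not Aquinus's actual pipeline code):

```python
from concurrent.futures import ThreadPoolExecutor

def process_stage(items, fn, parallelism=1):
    """Run one pipeline stage: apply fn to every item, with the degree
    of concurrency controlled by a single argument."""
    if parallelism <= 1:
        return [fn(x) for x in items]         # sequential path
    with ThreadPoolExecutor(max_workers=parallelism) as pool:
        return list(pool.map(fn, items))      # order-preserving parallel map

# Same call site; only the argument changes to tune the stage.
sequential = process_stage(range(6), lambda x: x * x)
parallel = process_stage(range(6), lambda x: x * x, parallelism=4)
```

Both calls return the same result; only the number of worker threads differs.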

More cores make sense because they can distribute load more effectively without running at higher clocks, and for workloads where you already have a bunch of threads running (even if they're not fully taxing the system) there is an efficiency benefit. But we have boost clocks because we still care about single-threaded performance.

Also, you're running on the assumption that the time to write the application is the only cost. What about the time it takes for that application to run? Time is money. My ETL jobs would be practically useless if they took a full day to run, which is why they're set up so that concurrency is tunable in terms of both parallelism and batch size.
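
The batch-size half of that tuning can be sketched with a generic chunking helper (my illustration of the idea, not the actual ETL code):

```python
def batches(items, batch_size):
    """Yield fixed-size chunks so a job can trade memory for throughput
    by tuning batch_size alone; the last chunk may be smaller."""
    batch = []
    for item in items:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

# Seven records in batches of three: [[0, 1, 2], [3, 4, 5], [6]]
chunks = list(batches(range(7), batch_size=3))
```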



notb said:


> Of course it is. But we stick to silicon and invest in 16 core gaming CPUs.
> I'm sure that as we hit a node wall on silicon, PC CPUs will try different routes.
> GaN CPUs are pretty much possible - just expensive.
> Graphene CPUs are being developed. Expect hundreds of GHz.


You don't need a 16c CPU for a gaming machine, which is sort of my point. Also, graphene is vaporware until we actually see it in production at a non-outlandish price; otherwise it's just a pipe dream. We can make CPUs out of a number of different materials, but that doesn't mean they're viable options. Once again, all of this needs to be measured against reality.


efikkan said:


> Exactly.
> It reminds me how many engineers approaches a task; redesigning the problem to fit the solution instead of designing the solution to match the problem.


That's bullshit and a terrible answer to the problem. Go back to using MS-DOS if that's how you feel. Oh wait, you like multi-core scheduling. This is like saying that everything should be built around the crappiest part of the product, which makes no sense. You fix the shitty part; you don't build around it.


notb said:


> And yes, Linux development today is driven by enterprises that need it for high-performance systems (from big SAP servers to supercomputers). Companies like Intel, Red Hat, IBM, SUSE, Oracle, AMD, Nvidia and Mellanox are among the top contributors. The rest is focused mostly on smartphones/embedded.
> 
> The importance of Linux in PCs is very small. It's really not that hard to understand why a rebranded EPYC works better with Linux than with Windows.
> But in the end you need PCs for people to actually benefit from what these powerful servers provide. And PCs need purpose-built hardware and software (including OS).


You mean like how the importance for these kinds of chips for gaming is really small? 


notb said:


> Windows caters to a normal user and aims at easy and smooth operation (like Mac OS). It's a different target than that of most Linux distros.


...and normal users don't need a Threadripper, right? 


notb said:


> For continuous high load there are also CPUs with high base clock, like Xeon 8168 (24C, 3.4/3.7) or 6154 (18C, 3.7/3.7).


If you have a workload that can saturate one of those CPUs, then you're going to benefit from more cores, so I don't really see what your problem is. It's almost like you want a server CPU and a CPU that's good for gaming at the same time. Yet another pipe dream.

I'd love to get some of that stuff you're smoking, though. It's gotta be a hell of a drug.


----------



## EarthDog (May 5, 2019)

Aquinus said:


> and normal users don't need a Threadripper, right?


Normal users don't need more than mainstream had to offer two generations ago as far as c/t count, and won't need more for another few years at least. But due to the limitations of silicon it seems we can't get much faster clocks, and IPC gains have been a joke for the most part from both camps (outside of Zen, after nearly a decade of incremental trash from both sides).

At least in the past, clock speed and IPC increases _improved everyone's computing experience_. Today, buying more cores yields nothing unless you actually have software that can utilize (not just use) them, or you have a use case and are close to being maxed out. Maybe heavy multithreading is gaining traction now on the software side; it sure hadn't before, even though we had octo-cores on the cheap from AMD for several years already, so I can't say I have any buy-in. IMO, it won't be this generation where the switch flips, either.

The real deal here for "normal users" is buying an appropriately sized (c/t) CPU for your needs for the next few years, on the cheap. And for 95% of people, even here at a so-called enthusiast site, that is still no more than a 6c/12t or 8c/16t CPU. More than 8c/16t on the mainstream platform, right now, is absolutely ridiculous: a ploy for those not in the know to buy simply because there are more cores.


----------



## Aquinus (May 5, 2019)

EarthDog said:


> The real deals here for "normal users" is buying an appropriately sized (c/t) CPU for you needs for the next few years on the cheap. And for 95% of people, even here at a so called enthusiast site, that is still no more than a 6c/12t or 8c/t CPU. More than 8c/16t on the mainstream platform, right now, is absolutely ridiculous and a ploy for those not in the know to buy simply because there are more cores.


People's ignorance doesn't make these CPUs useless, though. It's like buying a huge vehicle and then being taken aback by the terrible gas mileage of its huge V8. That's not the vehicle's fault; it's the owner's fault for not understanding what they bought. Also, software for the run-of-the-mill consumer is going to be built for what's expected to be in a mainline system. Now that quad cores are pervasive, a lot more software can take advantage of those cores. This is exactly why building hardware around arguably garbage software is dumb: hardware advances on the platforms normal people own are what influence software design. Otherwise we'd still be using single-core CPUs and MS-DOS, because that's all DOS supported.



EarthDog said:


> And for 95% of people, even here at a so called enthusiast site, that is still no more than a 6c/12t or 8c/t CPU.


Enthusiast sometimes means people who actually use computers to do useful things, like software engineers, DBAs, or people who work in genomics. Other times it's people who want the best hardware for gaming. Other times it's people who just have more money than brains. The reality is that an "enthusiast", when it comes to computers, isn't likely the kind of person who actually needs this kind of compute power. Buying a CPU just for cores when you're a run-of-the-mill enthusiast is like buying a huge truck with a huge diesel engine because it's got a lot of displacement. It's a poor decision on the part of the enthusiast.

The reality is that most people here at TPU who call themselves enthusiasts tend to buy machines for gaming or for bragging rights (benchmarking), not because they actually need those cores.


----------



## Metroid (May 5, 2019)

Aquinus said:


> The reality is that most people here at TPU who calls themself an enthusiast tends to buy machines for gaming or for bragging rights (benchmarking,) not because they actually need those cores.



I would not blame desktop users for wanting more cores. Even if they don't use them now, they might use them later, and more cores will always cost more now than later, since core counts keep getting cheaper as we progress. So let's say it is kind of wasted money, but like I said many times, the owner nowadays might be using the cores without even knowing they are in use. Right now, the bragging rights I see worldwide have a name: iPhone. Too expensive. People usually buy an iPhone or one of those expensive phones for WhatsApp, Chrome, etc. (although Chrome might use those cores if many tabs are open). What I see is a waste of money; they could buy an octa-core phone for $150, but they prefer to buy an iPhone. Probably they are used to Apple services and products. The brainwashing is strong when something that costs 10 times less would be just as useful as an iPhone.


----------



## Aquinus (May 5, 2019)

Metroid said:


> The brainwash is strong to see something that might cost 10 times and the usefulness will be the same as if it was an iphone.


That's probably a wee-bit of an exaggeration, but I'd agree with the overall sentiment.


----------



## TheoneandonlyMrK (May 5, 2019)

notb said:


> Look at 2990WX. 32 cores, 4 channels. Awful results.
> Zen2 brings the I/O die and chiplets. It's even more complicated and theoretically even more prone to latency issues that plagued Zen 1.
> 
> But with ideas like that, your CPU moves from being a system-on-chip to a cluster-on-chip. It spends a lot of time thinking what to do with instructions (checking cores/nodes, queuing). The result of this could mean more efficient and cheaper CPUs, but it also makes them less responsive. All this is visible in Zen and will be even more pronounced in Zen2.
> ...


A fast single 50 GHz core? You're so far from possible that you're into dreamland. We are nowhere near buying optical transistors or any graphene version of a transistor, and I doubt they would be sold as single units/cores; that makes for a binning nightmare (it either works or it's literally in the bin). Shows what you know.

Future nodes clearly dictate that it's too expensive to make the whole chip on the cutting-edge node, so you will see Intel follow suit with chiplets; they have already stated they will and are busy on that now. See Foveros and Intel's many statements about a modular future with EMIB interconnects and 3D stacking.

@Aquinus I agree; more people should join WCG. Let's get some research done and out of the way.


----------



## EarthDog (May 5, 2019)

I guess that's what an enthusiast is. It's too bad the desktop market is supported not by enthusiasts, but by the mainstream. It's THOSE people, the overwhelming majority, who tend to lose out in the premature core wars.

Don't get me wrong, I understand the hardware needs to be here, but we've had hex and octo cores for 8/6 years already and we haven't really seen a momentum shift yet. I think it's a lot closer to reality, but still a generation or two away from really making a difference for the majority. The 'use it later' argument is something of a given, as we've been hearing it for years. It just depends on the use models of the PC/user.


----------



## Aquinus (May 5, 2019)

EarthDog said:


> Dont get me wrong, i understand the hardware needs to be here, but weve had hex and octo cores for 8/6 years already and we havent really seen a momentum shift yet. I think it's a lot closer to reality, but still a generation or two away from really making a difference for the majority. The 'use it later' argument is something of a given as weve been hearing that argument for years. It just depends on use models for the pc/user.


Sure, but you have to consider what that hardware has shipped in and what typical consumers are buying. The reality is that it hasn't been in laptops, and the market is hungry for mobile devices. We're only now starting to see laptops with 6c/12t.


----------



## bpgt64 (May 5, 2019)

I am really hopeful AMD pulls out a winner(s) here.  Competition is always good for consumers.


----------



## R0H1T (May 5, 2019)

*notb* For the 2990WX specifically, an argument can be made that it's not the best TR chip out there; in fact, I said at launch that the 2970WX is better. The reason is clear ~ AMD disabled 4 memory channels, and the (dis)connected dies need an additional hop to access memory. Having said that, Windows should still be blamed for the awful performance we see on that platform with these high-core-count CPUs, especially relative to Linux. So coming from a 2950X or 2920X, the WX variants aren't great VFM. However, if your software isn't memory bound, chances are you'll make good use of the additional cores, on Linux!

You also seem to think that Zen suffers from high latencies across the board, which is absolutely wrong. AFAIK the *IF* itself is (arguably) the biggest bottleneck in their memory subsystem; in fact, Zen+ beats Intel in L2/L3 cache latencies ~ https://www.anandtech.com/show/12625/amd-second-generation-ryzen-7-2700x-2700-ryzen-5-2600x-2600/3

Intel is much better with memory latency; that may change somewhat with *IF2* and *Zen 2*, however.


----------



## notb (May 5, 2019)

Aquinus said:


> Enthusiast sometimes means people who actually use computers to do useful things, like software engineers, DBAs, or people who do things genomics.


By all means, *no*.
An enthusiast is someone who is enthusiastic about PCs. He likes to talk about them, read reviews, and spend more than needed.
It has absolutely nothing to do with how you use a PC.


Aquinus said:


> It's like buying a 20 core Xeon or something and then whining about single threaded performance when you opt'ed for more cores. It's laughable.


You don't know much about how servers are used, do you - mister "enthusiast"? ;-)


> Because normally one buys more cores to... you know... get more cores?


Nope. Single-thread performance is improving slowly and will hit a wall soon. If one wants more processing power, he is forced to buy more cores.
But having more cores doesn't automatically mean software will run faster: someone has to write it to use them (assuming that's possible in the first place).
That's the main advantage of increasing single-core performance: everything gets faster for free.


> Of course it's easier to write single-threaded code. You have fewer issues to deal with, but that doesn't mean it's the right decision given the workload. Also, I write multithreaded code all the time and I do it in the day job and let me tell you something, I don't write any data processing job that uses a single core. I use stream abstractions and pipelines all over the place because changing a single argument to a function call can change the amount of parallelism I get at any stage in the pipeline. It also helps when you use a language that's conducive to writing multi-threaded code. Take my main language of choice, Clojure, it's a Lisp-1 with immutability through and through with a bunch of mechanisms to have controlled behavior around mutable state. It's a very different animal than writing multi-threaded code in say, Java or C# and it's really not that difficult.


Not everyone is a programmer and not everyone has the comfort you do. Data processing is, by definition, the best case possible for multi-threaded computing.

I know it may be difficult, but you have to consider that coding today is also done by analysts, and it has to happen fast. I also write code as a day job (and as a hobby). But I also train analysts to use stuff like R or VBA. They need high single-core performance. And yes, they work on Xeons.

And as usual, I have to repeat the fundamental fact: some programs are sequential, no matter how well you code and what language you use. It can't be helped.
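For what it's worth, the "parallelism as a single argument" idea from the quote above maps onto other languages too. A rough Python sketch (hypothetical illustration, not Aquinus's actual Clojure code) where one parameter decides whether a pipeline stage runs sequentially or in parallel:

```python
from concurrent.futures import ThreadPoolExecutor

def transform(items, func, workers=1):
    """Apply func to every item; `workers` is the only knob that
    switches this stage between sequential and parallel execution."""
    if workers <= 1:
        return [func(x) for x in items]          # plain sequential map
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(func, items))       # same semantics, N threads

# Same call site, different degree of parallelism:
sequential = transform(range(8), lambda x: x * x)            # 1 worker
parallel = transform(range(8), lambda x: x * x, workers=4)   # 4 workers
```

Whether the parallel version is actually faster depends on the workload; the point is only that the shape of the code doesn't change.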


> Also, you're running on the assumption that time to write the application is the only cost. What about the time it takes for that application to run? Time is money. My ETL jobs would be practically useless if they take a full day to run which is why they're setup in a way where concurrency is tunable in terms of both parallelism and batch size.


Well, exactly! The balance between writing and running a program is the key issue.
Some people write ETLs and spend hours optimizing code. It's their job.
But other people have other needs. You have to open up a bit and try to understand them as well.
I was doing physics once; now I'm an analyst. A common fact: a lot of programs are going to be used a few times at best, often just once.
There's really no point in spending a lot of time optimizing (and often you don't have the time to do it).



> Also Graphene is vaporware until we actually see it in production at a price that's not outlandish, otherwise it's just a pipe dream.


I don't know what you mean. Graphene exists and PoC transistors have been made a while ago.
Yes, it is a distant future for personal computers, but that's the whole point of technological advancement - we have to plan many years ahead. And that's what makes science and engineering interesting.
When graphene CPUs arrive in laptops, they'll be very well controlled and boring.


> We can make CPUs out of a number of different materials, but that doesn't mean it's a viable option. Once again, all of this needs to be measure in reality.


And the reality is that some applications would benefit from very fast cores, not dozens of them. That's all I'm saying.



theoneandonlymrk said:


> A fast single 50Ghz core, Your so far away from possible your into dream land , we are no where near buying optic based transistors or any grephene version of transistor but i doubt they would be sold in single units / cores , that makes for a binning nightmare ,it works or its actually in the bin, shows what you knows.


To be honest, I don't really care when such CPUs will be available for PCs. I'm interested in when they'll arrive in datacenters. I always hoped it would happen before quantum processors, but who knows?
50GHz will produce a lot of heat. In fact, the whole idea of GaN in processors is that it can sustain much higher temperatures than silicon.
Quantum computers need extreme cooling solutions just to actually work.

It's always good to look back at the progress computers have made in the last decade. Just how much faster have cores become? And why stop now?

Moreover, can you imagine the discussions people had in the early '50s? Transistors already existed. PoC microprocessors as well. And still, many didn't believe microprocessors would be stable enough to make the idea feasible. So it wasn't very different from the situation we have with GaN and graphene in 2019.
A few years later, microprocessors were already mass produced. When I was born ~30 years later, computers were as normal and omnipresent as meat mincers.


> The nodes in the future clearly pre dictate that its too expensive to make all the chip on the cutting edge node so you Will see intel follow suit with chiplets and they have already stated they will , they're busy on that now, see Foveros and intels many statements towards a modular future with Emib connects and 3d stacking.


I have absolutely nothing against MCM. It's a very good idea.



R0H1T said:


> *notb* Specifically for 2990WX an argument can be made that it's not the best TR chip out there


Well, I'm precisely mentioning 2990WX because it represents a similar scenario to how Zen2 will work.
And not just the 16-core variant. Every Zen2 processor will have to make an additional "hop" because the whole communication with memory will be done via the I/O die. No direct wires.
Some of this will be mitigated by huge cache, but in many loads the disadvantage will be obvious. You'll see soon enough.

Also, I understand many people are waiting for Zen2 APUs (8 cores + Navi or whatever). Graphics memory will also have to be accessed via the I/O die. Good luck with that.


> Intel's much better with mem latency, that may change slightly with *IF2* & *zen2* however.


Just don't bet your life savings on that. ;-)


----------



## Aquinus (May 5, 2019)

notb said:


> By all means, *no*.
> Enthusiast is someone who is enthusiastic about PCs. He likes to talk about them, he likes to read reviews, he likes to spend more than needed.
> It has absolutely nothing to do with how you use a PC.


If you read the two sentences that followed, you would realize that's what I was saying. 


notb said:


> You don't know much about how servers are used, do you - mister "enthusiast"? ;-)


Yeah, I do. Servers have more cores and lower clocks instead of higher clocks and fewer cores for a reason... Mister "enthusiast". A person buying it for gaming will be thoroughly disappointed.


notb said:


> Nope. Single-thread performance is improving slowly and will hit a wall soon. If one wants more processing power, he is forced to buy more cores.
> But having more cores doesn't automatically mean software will run faster. Someone has to write them to do so (assuming it's possible in the first place).
> That's the main advantage of increasing single-core performance.


...and reality suggests that single threaded performance has already hit a wall which is why we're seeing more cores. None of that invalidates what I'm saying.


notb said:


> Not everyone is a programmer and not everyone has the comfort you do. Data processing is by definition the best case possible for multi-threaded computing.


Data processing is literally what 90% of programs do. You might have stateful portions of your application, but most of it is data processing most of the time. Existing code is hard to make multi-threaded because, a lot of the time, it's written in a language or with technologies that have poor constructs for concurrent workloads, so it was already built not to be. I can even use your own statement as an example:


notb said:


> I know it may be difficult, but you have to consider that coding today is also done by analyst and it has to happen fast. I also write code as a day job (and as a hobby). But I also train analysts to use stuff like R or VBA. They need high single-core performance. And yes, they work on Xeons.


They need high single-core performance because the software is the limitation. The simple fact is that more cores is easier than higher clocks. It also scales better. Another fun fact: R and VBA are *archaic*. That's another excellent example of why we shouldn't make architectural decisions for hardware based on old designs built for older machines. When VBA and R were released, computers had 1 core, so you know what they were designed for? 1 core.


notb said:


> And as usual, I have to repeat the fundamental fact: some *percentage of* programs are sequential - no matter how well you code and what language you use. It can't be helped.


Fixed that for you. Most applications aren't purely sequential in nature. An entire workload doesn't need to be made to run in parallel so long as there are parts of it that can be.
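That shape - parallelizing only the part that can be - looks something like this in a minimal Python sketch (illustrative names only, not any real pipeline): the setup and the aggregation stay sequential, and just the expensive middle stage fans out.

```python
from concurrent.futures import ThreadPoolExecutor

def expensive(x):
    return x ** 2                                  # stand-in for the hot inner work

def run_job(data):
    cleaned = [x for x in data if x >= 0]          # sequential prep
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(expensive, cleaned))  # only this stage is parallel
    return sum(results)                            # sequential reduce

total = run_job([3, -1, 4, 1, -5, 9])
```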


notb said:


> Well exactly! The balance between writing and running a program is the key issue.
> Some people write ETLs and spend hours optimizing code. It's their job.


Writing ETL jobs is hardly the entirety of my job, and making them multithreaded wasn't a substantial cost for me. That's my point, but you seem to like making a lot of assumptions about what using different technologies that aren't garbage gets you in these situations.


notb said:


> I don't know what you mean. Graphene exists and PoC transistors have been made a while ago.
> Yes, it is a distant future for personal computers, but that's the whole point of technological advancement - we have to plan many years ahead. And that's what makes science and engineering interesting.
> When graphene CPUs arrive in laptops, they'll be very well controlled and boring.


Can I buy it and run my software on it? How about in 5 years? 10? Yeah, definitely a pipe dream. Maybe one day, but that gets us nowhere right now or even in the foreseeable long term. What we have right now is more cores: a real, tangible thing that can be bought and used.


notb said:


> Well, I'm precisely mentioning 2990WX because it represents a similar scenario to how Zen2 will work.


Performance of the 2950X isn't too shabby for what it is and it's got the same design. Also, having I/O resources spread out instead of having them centralized is a big difference. We should be careful about equating the two because they definitely are apples and oranges with different benefits and shortcomings.


----------



## drayzen (May 6, 2019)

The image of the 9 box is fake.
The perspective of the number is wrong.


----------



## Caring1 (May 6, 2019)

drayzen said:


> The image of the 9 box is fake.
> The perspective of the number is wrong.


Some people fail to realise that it hasn't been released yet and that a placeholder has been mocked up.
It has also been mentioned by a couple of others who fail to grasp the concept of a placeholder.


----------



## drayzen (May 6, 2019)

Caring1 said:


> Some people fail to realise that it hasn't been released yet, and a placeholder has been mocked up
> it has also been mentioned by a couple of others that fail to grasp the concept of a placeholder.


Sheesh, no need to get bitchy.
I've spent many years working in online retail/WS, so I'm well aware of what placeholders are. It's simply another indicator that the entire thing could be fake. Given some of the name formatting, it's even more likely.
I didn't see any other examples posted so I put it up. Relax huh...


----------



## InVasMani (May 6, 2019)

IPC gains will narrow the single-threaded gap and widen the multi-threaded advantage, so it's in AMD's interest to put a mix of emphasis on IPC as well as additional cores. It's also important for efficiency, so it'll help them compete better in the SFF and mobile market segments as well. Ryzen is actually better suited to a stronger IPC emphasis than Intel's current chip designs, and better IPC also means Precision Boost should work even better in turn. I don't have much doubt about the 15% IPC gains on the new 7nm Ryzen chips; I think it was Lisa Su who alluded to it in the first place, in "certain" workloads, compared to 14nm Ryzen. It's really hard to say what the average IPC gain will be, but I'd suspect around 10-12.5%. I don't think Intel's IPC gains over the last decade reflect the potential for larger gains from AMD at all. For starters, AMD being further behind in IPC just means they've got a wider gap to improve upon; it would be a lot harder for Intel, in its current position, to have gains that large, comparatively speaking. It's also readily obvious from Intel's own designs that there's little reason AMD can't follow suit and improve some of the weaker differences between the two companies' designs.

The bottom line is that people who are hard-headed about it because it's AMD rather than Intel are being foolish. It shouldn't be any harder for AMD to improve its IPC than it is for Intel to copy its chiplet approach, as I see it. The one clear difference is that Intel obviously has a larger budget to work from, but the AMD of today isn't the same cash-strapped, mismanaged company of a decade ago, which also had to contend with Intel's anti-competitive behavior on top of all that. I've got a lot of faith that 7nm Ryzen will be great overall and the closest thing to AMD64-era performance and competitiveness from AMD on the CPU side since that point in time. I'm sure Intel will bounce back, and aggressively, but we could see a good boxing match between the two companies over the next 5-6 years if I had to guess.


----------



## R0H1T (May 6, 2019)

notb said:


> Well, I'm precisely mentioning 2990WX because it represents a similar scenario to how Zen2 will work.
> And not just the 16-core variant. *Every Zen2 processor will have to make an additional "hop" because the whole communication with memory will be done via the I/O die*. No direct wires.
> Some of this will be mitigated by huge cache, but in many loads the disadvantage will be obvious. You'll see soon enough.
> 
> Also, I understand many people are waiting for Zen2 APUs (8 cores + Navi or whatever). Graphics memory will also have to be accessed via the I/O die. Good luck with that.


That's not true either & I suspect you know it. This is Zen 2 ~

*(Zen 2 package diagram)*

This is TR 2 ~

*(Threadripper 2 package diagram)*

The IO die is strategically placed between the Zen 2 dies & there is no additional hop. Admittedly, we don't know how TR3 will look, but I'd be seriously disappointed if AMD redid this *disable entire memory channels* trick for a couple of dies!


----------



## ratirt (May 6, 2019)

This Zen 2 looks amazing. If this is true, then Intel is in trouble. Maybe Jim from AdoredTV was right. Reaching 5 GHz on Ryzen is outstanding. I might be changing my CPU soon for that 12-core monster.


----------



## Vayra86 (May 6, 2019)

notb said:


> Well, I'm precisely mentioning 2990WX because it represents a similar scenario to how Zen2 will work.
> 
> 
> Also, I understand many people are waiting for Zen2 APUs (8 cores + Navi or whatever). Graphics memory will also have to be accessed via the I/O die. Good luck with that.



I'm astounded by your logic sometimes. The 2990WX was the worst, most situational-performing TR part of the whole lineup, and you think they'll straight up copy-paste that design across a whole CPU stack to make sure it sucks just as hard.

Yep. AMD offer you that engineering job yet?


----------



## notb (May 6, 2019)

Vayra86 said:


> I'm astounded by your logic sometimes. 2990WX was the worst, most situational performing TR part of the whole line up, and you think they straight up copy paste that design to a whole CPU stack to make sure it sucks just as hard.


Kind of. TR chips are limited by memory access: too few channels for so many cores.
To use all cores, you have to configure it as NUMA, basically adding a layer (a "hop") that centralizes memory access.

To limit latency and get better performance in interactive software (like games), you could run it in "game mode", which uses just 8 cores.

Zen2 Ryzen may be subject to similar treatment.

And another thing is the ratio of cores to memory channels, which could give the 16-core Ryzen problems similar to those the 32-core Threadripper had.



> Yep. AMD offer you that engineering job yet?


I don't know why you keep writing this (and why AMD this time? I preferred Intel!). What's the point?


----------



## Vayra86 (May 6, 2019)

notb said:


> Kind of. TR are limited by memory access. Too few channels for so many cores.
> To use all cores, you have to configure it as NUMA, basically adding a layer (a "hop") that centralizes memory access.
> 
> To limit latency and get better performance in interactive software (like games), you could have run it in "game mode", which uses just 8 cores.
> ...



It's tongue-in-cheek, really, because of the things you write. Half of it is absolutely true, and then the other half makes zero sense.


----------



## InVasMani (May 6, 2019)

AMD should make a TR-based APU setup: two APU dies flanked by an I/O hub, with HBM on opposite sides of them both, and twin NVMe M.2 drives hardwired to it on the reverse side of the CPU socket on the motherboard. That setup would probably be a screamer. The NVMe devices would be for HBCC, and along with the quad-channel memory and HBM, they could have tiered storage managed by the I/O die itself between the two APU dies. Hopefully AMD does something along those lines for Zen 3, among other things.

I'd be shocked if AMD doesn't make a post-process die for scaling/denoise and other stuff at some point. The RTX/tensor cores in Turing are essentially those two things. It wouldn't be a bad idea for AMD to have a die that can do those things on the fly, quickly and efficiently, for its APUs - and even for TR/Epyc, which are actually pretty quick at ray tracing and could be quicker with specialized instruction sets or dies to do some of those things better.


----------



## efikkan (May 6, 2019)

R0H1T said:


> The IO die is strategically placed between zen 2 dies & there is no additional hop, though admittedly we don't know how TR3 will look but I'd be seriously disappointed if AMD redid this disable entire memory channels for a couple of dies!


Technically there is an additional "hop" in Zen 2:
Zen(1): Die -> Memory (best case or single die)
Zen(1): Die -> Die -> Memory (worst case)
Zen 2: Die -> IO controller -> Memory

Zen 2 should at least be more consistent, and benchmarks will reveal the actual latencies and performance penalties. But thinking that Zen 2 will have no such issues is naive.


----------



## lexluthermiester (May 7, 2019)

Mussels said:


> If those specs are real, ryzens going to destroy intel


I don't know that I would say "destroy", but definitely kick some more ass!


----------



## 0x6A7232 (May 7, 2019)

What are your thoughts on this? (software/hardware taking single-threaded code and distributing it using AI)
At 8 minutes in:


----------



## efikkan (May 7, 2019)

0x6A7232 said:


> What are your thoughts on this? (software/hardware taking single-threaded code and distributing it using AI)
> At 8 minutes in:


Aaah, Amdahl's law. I always cringe when I hear people talking about it.
In most cases what matters isn't how much of the code is parallel, but how much of the execution time is spent in which part of the code - in some cases 99% of the execution time is spent in 1% of the code.
A much better way of thinking about it (even for non-coders) is how many tasks/subtasks/work chunks can be done independently, because you can scale to hundreds if not thousands of threads as long as the threads don't need to be synchronized - it's the synchronization between cores that kills your performance. Good examples of workloads that scale this way are a web server that spawns a thread per request, or a software renderer that splits parts of the scene into separate worker threads. Workloads like this can scale well to very high core counts, but they do so because each thread essentially works on its own subtask, which is also why Amdahl's law is irrelevant - if anything, it has to be applied at this level, not the application level.
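To put that distinction in numbers: Amdahl's law bounds speedup by the serial fraction of *execution time*, not of the code. A quick Python sketch with made-up fractions shows why the serial share dominates:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Upper bound on speedup when `parallel_fraction` of the
    execution time (not of the code!) can use all `cores`."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even with 95% of the runtime parallel, 32 cores give roughly 12.5x, not 32x.
bound = amdahl_speedup(0.95, 32)
```

The "thread per independent request" case corresponds to `parallel_fraction` approaching 1, which is exactly when the bound stops mattering.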

Most real-world applications are highly synchronized, and that has little to do with the skills or willingness of the developers; it's the nature of the task the application solves and, as I'll get back to, the overall structure of the codebase. The "heavy" part of most applications is usually an algorithm the application loops in before it proceeds, but most applications are written in overly complex, abstracted codebases that make it nearly impossible to separate out algorithms, and the CPU also spends most cycles idling due to the bloat. The first step in optimizing the code is always to make it denser: remove all possible abstractions and make it cache-friendly. Then it usually becomes obvious at which level the task can be split into subtasks, and potentially even into multiple threads. I usually refer to this tendency among developers and "software architects" to overcomplicate and abstract things as a "disease".
-
Back to the video you referred to:
Well, it will be "impossible" to take an existing thread and split it up across multiple cores at the OS scheduling level - by "impossible" I mean impossible in real time and without a slowdown of 10,000x or more.
What this video appears to be showing is just "smarter" OS scheduling. Even if you write your glorious single-threaded application, it probably relies on system calls or libraries that span multiple threads. If your application relies heavily on this kind of interaction, your slowdown may actually be OS scheduling overhead, not overhead within your application. Most desktops and even mobile devices these days run hundreds of tiny background threads that constantly "disturb" the scheduler with "unnecessary" scheduling overhead. If the OS could better prioritize which threads are waiting for each other, etc., you could get a huge performance improvement for certain use cases. But this, as with anything, is just removing a bottleneck, not actually making code more parallel, so the scaling here will also decline with core count.

But tweaking kernel scheduling is not new; it's well known in the industry. In Linux you can choose between various schedulers with their own pros/cons depending on workload. One of them is the "low latency" kernel, optional in some Linux distributions, which shortens the scheduling interval and is more aggressive about prioritizing threads - which has a huge impact on latencies in some thread-heavy workloads. There's probably more potential in smarter schedulers that use more statistics for thread allocation, or "AI" as they call it these days.
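For the curious, the scheduling policy of a process is visible from userspace. A minimal, Linux-only Python sketch using the stdlib `os.sched_*` wrappers (availability depends on the platform; changing the policy usually needs root):

```python
import os

# Policy of the current process; on Linux the default is SCHED_OTHER (CFS).
policy = os.sched_getscheduler(0)   # 0 == calling process

# Real-time classes like SCHED_FIFO expose a priority range;
# the default time-sharing class doesn't use these priorities.
rt_max = os.sched_get_priority_max(os.SCHED_FIFO)
rt_min = os.sched_get_priority_min(os.SCHED_FIFO)
```

Switching a latency-sensitive thread to `SCHED_FIFO` (via `os.sched_setscheduler` or the `chrt` tool) is exactly the kind of prioritization tweak described above - it removes scheduling jitter without making the code any more parallel.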

As for optimizations in hardware, CPUs already do instruction-level parallelism, and Intel CPUs since the original Pentium have been superscalar. Today's automatic optimizations are, however, very limited by branching in code. Even with branch prediction, CPUs are pretty much guaranteed a stall after just a few branching instructions, which is why most applications are actually stalled 95-99% of the time. If, however, the CPU were given more context and able to distinguish branching that only affects the "local scope(s)" (which is probably what they mean by "threadlets" in the video) from branching that affects the control flow of the program, then we could see huge performance improvements - 2-3x is quite possible in the long term.


----------



## F-man4 (May 8, 2019)

The Ryzen 3800X will not only fuck up all the Xeon E CPUs but also hurt the Xeon D & Xeon W market.
With a Ryzen 3800X & ECC UDIMMs we could easily build a 16C/32T workstation superior to everything I mentioned above.


----------



## notb (May 8, 2019)

F-man4 said:


> Ryzen 3800X will not only fuck up all Xeon E CPUs but also affect Xeon D & Xeon W market.
> With Ryzen 3800X & ECC UDIMM we can easily build a 16C32T workstation superior to all I mentioned above.


AFAIK no AM4 Ryzen to date has had ECC certification. Grow up.

Not to mention these are very different CPUs and different platforms. No unintentional forced sex is going to happen.


----------



## Vlada011 (May 8, 2019)

Intel is finding it harder and harder to compete with each new version of AMD's Zen core.
When you add price on top, it really becomes ugly to invest in, say, a 10-core Intel chip, or even the i9-9900K.


----------



## TheLostSwede (May 8, 2019)

Shatun_Bear said:


> Cool I'll send you a PM. Easy money



Sorry, you lost.


----------



## HwGeek (May 8, 2019)

notb said:


> AFAIK no AM4 Ryzen to date had ECC certification. Grow up.
> 
> Not to mention these are very different CPUs and different platforms. No unintentional forced sex is going to happen.


https://www.asrockrack.com/general/productdetail.asp?Model=X470D4U#Specifications 
More will pop out I am sure.


----------



## notb (May 8, 2019)

HwGeek said:


> https://www.asrockrack.com/general/productdetail.asp?Model=X470D4U#Specifications
> More will pop out I am sure.


Oh. This is interesting.
Nevertheless, no AM4 Zen CPU to date is ECC-certified.

IMO we'll see AM4 EPYC CPUs - competitors for Xeon E (LGA1151). There was some talk about EPYC APU as well.
That would be rather nice.


----------



## InVasMani (May 8, 2019)

notb said:


> Oh. This is interesting.
> Nevertheless, no AM4 Zen CPU to date is ECC-certified.
> 
> IMO we'll see AM4 EPYC CPUs - competitors for Xeon E (LGA1151). There was some talk about EPYC APU as well.
> That would be rather nice.


An Epyc APU would be very interesting. They could couple several APU dies with octa-channel bandwidth, CF the APU dies together, and go a step further: use HBCC with 1-4 NVMe M.2 devices on the rear of the motherboard behind the CPU, with short traces; the icing on the cake would be a bit of HBM. It would be great with TR as well, but with Epyc you'd get that extra bandwidth.


----------



## ratirt (May 8, 2019)

notb said:


> Oh. This is interesting.
> Nevertheless, no AM4 Zen CPU to date is ECC-certified.
> 
> IMO we'll see AM4 EPYC CPUs - competitors for Xeon E (LGA1151). There was some talk about EPYC APU as well.
> That would be rather nice.


Actually, all Ryzens support ECC. The only difference is that Ryzen Pro is validated for ECC (certified, if you wish) and is OEM-only. Regular Ryzen CPUs aren't validated but still support ECC.


----------



## efikkan (May 8, 2019)

Vlada011 said:


> Intel harder and harder compete with new versions of AMD Zen Core.
> 
> When you add price on top really become ugly investing in Intel 10 core example, or even i9-9900K.


Why so gloomy? This is when the fun starts, when Intel and AMD are close enough to create real competition.
When Zen 2 launches, Intel will adjust their prices and will soon replace Coffee Lake Refresh with Comet Lake, which may not be a shiny new architecture but is still a very good contender in the market. Intel has really good margins on the i7-9700K/i9-9900K, and their 14nm production capacity is the highest yet, so we can look forward to an autumn with price drops on good 8-cores and good supply.


----------



## notb (May 8, 2019)

ratirt said:


> Actually all Ryzens support ECC. The only difference is, Ryzen Pro is validated for ECC (certified if you wish) and is an OEM. Regular Ryzen CPUs aren't validated but still support ECC.


Pretty much all x86 CPUs support ECC, i.e. you can force them to work in ECC mode. People have successfully booted Pentiums with ECC (despite it being officially locked).

ECC has to be validated to make sense for a corporate user. Ryzens aren't validated (nor is PRO - someone lied to you).


----------



## R0H1T (May 8, 2019)

The fact is Ryzen & TR work with ECC RAM, with certain boards. You can't dance around it. Heck, Intel disables many features in their unlocked CPUs - how cheap is that?


----------



## efikkan (May 8, 2019)

You might be able to get a memory controller to run a configuration it's not certified for, but that doesn't mean you're guaranteed to make it work, or that it will keep working over time. There are even examples of motherboards that let Skylake-X CPUs run registered memory. When a CPU has a feature that is not enabled, it doesn't mean it wouldn't work; it means it's not tested, so you don't know whether it will work reliably throughout the lifetime of the product. ECC is a feature to increase reliability, and it also requires more expensive memory. Running ECC memory on an "unsupported" system makes no sense, even if you can sort of get it working.


----------



## lexluthermiester (May 9, 2019)

efikkan said:


> but that doesn't mean you are guaranteed to make it work or that it will remain working over time.


Rubbish. When Intel/AMD/ARM/etc. state that a memory type will work, it's because they have internally certified that it will work with 100% reliability. Otherwise they physically remove the ability to run in ECC mode.


----------



## ratirt (May 9, 2019)

notb said:


> Pretty much all x86 CPUs support ECC, i.e. you can force them to work in ECC mode. People successfully booted Pentiums in ECC (despite them being officially locked).
> 
> ECC has to be verified to make sense for a corporate user. Ryzens aren't verified (PRO neither - someone lied to you ).


Intel has ECC disabled on desktop CPUs. AMD never did this. For Intel, ECC is exclusive to server CPUs, for which you have to pay a lot more than for a desktop CPU.

here's some stuff you may want to read.

*ECC Memory & AMD's Ryzen - A Deep Dive* - www.hardwarecanucks.com
*AMD confirms that Ryzen supports ECC memory* - www.overclock3d.net

efikkan said:


> You might be able to get a memory controller to run a configuration it's not certified for, but that doesn't mean you are guaranteed to make it work or that it will remain working over time. There are even examples of motherboards which lets Skylake-X CPUs run registered memory. When a CPU have a feature that is not enabled, it doesn't mean it wouldn't work, it means it's not tested, so you don't know if it will work and work reliably throughout the lifetime of the product. ECC is a feature to increase reliability, and also requires more expensive memory. Running ECC memory on an "unsupported" system makes no sense, even if you can sort-of get it working.


Skylake-X doesn't support ECC; only Xeons do.

*Intel Skylake-X vs Skylake-W* - www.pugetsystems.com

----------



## efikkan (May 9, 2019)

lexluthermiester said:


> Rubbish. When Intel/AMD/ARM/Etc. state that a memory type will work it's because they have internally certified that will work with 100% reliability. Otherwise they physically remove the ability for it to run in ECC mode.


You're wrong. While the design supports it, they haven't validated the sample.
It may still fully work, but it's not guaranteed.



ratirt said:


> Skylake-X doesn't support ECC only Xeons do.


And I never said they did.
ECC and registered memory are two different things.
Who knows, perhaps even ECC can be enabled if the BIOS wants to.


----------



## Berfs1 (May 9, 2019)

notb said:


> One could think that people on a "computer enthusiast forum" would know how to make a screenshot.


Not sure if you are talking to me about that, but if you are, then clearly you can’t read and understand a graph


----------



## EarthDog (May 9, 2019)

Berfs1 said:


> Not sure if you are talking to me about that, but if you are, then clearly you can’t read and understand a graph


He was poking at the circa-1990s-quality photo you posted, as opposed to a print-screen which, unlike that photo, would be clear and easier to read.


----------



## Shatun_Bear (May 9, 2019)

R0H1T said:


> No bitcoins, Paypal? Just to confirm ~ no 3xxx SKU will have a *base clock of 4GHz* or above



Here we go:









AMD Ryzen 3000 Series 16 Core, 32 Thread CPU Details Leaked - Early Sample With 7nm Zen 2 Cores Clocks In At 3.3 GHz Base and 4.2 GHz Boost
The AMD Ryzen 3000 series 16 core and 32 thread early CPU sample has leaked out which uses the 7nm Zen 2 cores and clocks up to 4.2 GHz.
wccftech.com
				




Ryzen eng. sample, 16-core, *3.3 GHz BASE CLOCK*. The leaker is legit, so the eng. sample is real. Now, remember the numpty Adored was claiming the 16-core Ryzen 3000 has a BASE CLOCK of 4.3 GHz.

So according to him, from this eng. sample, another ~1 GHz of frequency is going to be added to the retail chips, right!??! Madness. Now, at best, we'll get another 300 MHz on top of this sample's base clock. Please have your £10 PayPal payment ready to send me once this is confirmed. Cheers


----------



## efikkan (May 9, 2019)

I really wouldn't read too much into the clock speeds of engineering samples, not unless we have some additional context about their quality; engineering samples can be all over the place, ranging from very low clocks to golden samples, all depending on their intended usage.
When they demonstrate benchmarks in public, like at CES, it usually represents their target performance (in the cherry-picked benchmark), but final clocks are only set after the final stepping arrives, so there can be some deviation in either direction.


----------



## R0H1T (May 9, 2019)

Shatun_Bear said:


> Here we go:
> 
> 
> 
> ...


That's just one chip, I thought we had a bet on all 3xxx SKUs 


efikkan said:


> You're wrong. While the design supports it, they haven't validated the sample.
> It may still fully work, but it's not guaranteed.


Validation & certification are costly; going without them doesn't mean that ECC (memory) won't work, or that it will stop working randomly ~








OnePlus 7 and 7 Pro won't have an IP rating for water and dust resistance because it's expensive
The company chose to buy a bucket instead, and dunk one of its new phones in it.
www.gsmarena.com
				




Also AMD isn't known to block out unlocked features, unlike their two competitors ~








AMD Athlon 220GE and 240GE review
Looking for an affordable processor that will cover all your browsing or media center needs including an integrated graphics unit? Hey, AMD might just have the perfect value proc available. The Athlo... Overclocking an Athlon 240GE
www.guru3d.com


----------



## Shatun_Bear (May 9, 2019)

R0H1T said:


> That's just one chip, I thought we had a bet on all 3xxx SKUs
> Validation & certification are costly; going without them doesn't mean that ECC (memory) won't work, or that it will stop working randomly ~
> 
> 
> ...



Yes, none of them will have a base clock of 4 GHz or above. You need to realise eng. samples are indicative of retail silicon, and tend to deviate by only about +300 MHz from final. Release of this chip is only about 4-5 months away.

This real leak here is important, as the frequencies in this TPU news story come from that fraudster Adored. If he was wrong about this chip, then the rest of that horse-crap SKU list with all the clocks is wrong, and by extension hopes of base clocks over 4 GHz are wrong. Anyway, this will be revealed soon.


----------



## R0H1T (May 9, 2019)

You remember the earliest Zen ES leaks? The 1800X exceeded the ES boost clocks by 700~800 MHz & the base clocks by a similar margin, IIRC. You might well be right, but let's wait for the entire lineup to be revealed first.


----------



## HwGeek (May 9, 2019)

Also, it could be the low-clocked 16C and not the top SKU.


----------



## Shatun_Bear (May 9, 2019)

R0H1T said:


> You remember the earliest Zen ES leaks? The 1800X exceeded the ES boost clocks by 700~800 MHz & the base clocks by a similar margin, IIRC. You might well be right, but let's wait for the entire lineup to be revealed first.



I don't remember there being such a gap. And we're too close to release for anything more than 400 MHz to be added. The 16-core releases in October at the latest.



HwGeek said:


> Also, it could be the low-clocked 16C and not the top SKU.



Unlikely. These are about the numbers you'd expect. But to be clear, my issue all along has been the fake base clock numbers. I expect boost up to 4.7 GHz.


----------



## R0H1T (May 9, 2019)

Here's one from TPU, you'll find more if you look closer ~ AMD "Summit Ridge" ZEN CPU at 2.80 GHz Beats 3.40 GHz Core i5-4670K


----------



## lexluthermiester (May 9, 2019)

efikkan said:


> You're wrong. While the design supports it, they haven't validated the sample.
> It may still fully work, but it's not guaranteed.


Incorrect. When a manufacturer makes a product with support for a certain function and that function is made available to the public, you can bet your life on the fact that they have tested it completely.


----------



## 0x6A7232 (May 10, 2019)

*cough, cough*








AMD Ryzen 9 3000 is a 16-core Socket AM4 Beast
AMD is giving finishing touches to its 3rd generation Ryzen socket AM4 processor family which is slated for a Computex 2019 unveiling, followed by a possible E3 market availability. Based on the "Matisse" multi-chip module that combines up to two 8-core "Zen 2" chiplets with a 14 nm I/O...
www.techpowerup.com


----------



## notb (May 10, 2019)

lexluthermiester said:


> Rubbish. When Intel/AMD/ARM/etc. state that a memory type will work, it's because they have internally certified that it will work with 100% reliability. Otherwise they physically remove the ability for it to run.


And AMD *does not* state Ryzens support ECC, does it?


ratirt said:


> Intel has ECC disabled for desktop CPUs. AMD never did this. For Intel, ECC is exclusive to server CPUs, for example, for which you have to pay a lot more than for a desktop CPU.
> 
> here's some stuff you may want to read.


AMD doesn't disable ECC, but it doesn't mean it works as it should. It's just there.


lexluthermiester said:


> Incorrect. When a manufacturer makes a product with support for a certain function and that function is made available to the public, you can bet your life on the fact that they have tested it completely.


Like the RNG in Excavator?

You don't understand enterprise computing - I've told you that many times.
ECC has to be validated to make sense. Just like helmets and ropes are certified/attested/rated to be used in a construction zone. It doesn't mean non-rated helmets don't protect your head.
What's the point of an untested security feature?


R0H1T said:


> That's just one chip, I thought we had a bet on all 3xxx SKUs
> Validation & certification are costly; going without them doesn't mean that ECC (memory) won't work, or that it will stop working randomly ~


Without them, it means you can't really expect ECC to work. And no one is liable when it stops.
That's the whole point of certification. It is important not in the 99% of the time a feature works, but in the 1% of the time it doesn't.
Certified ECC systems sometimes don't work properly, just like a certified airbag doesn't always save your life in a crash. But until someone gives you a guarantee that an airbag will work in a particular way, it's just a small bomb with a balloon. WTF would you willingly put a bomb in your car?


Berfs1 said:


> Not sure if you are talking to me about that, but if you are, then clearly you can’t read and understand a graph


First of all: they aren't graphs. A graph is a graphical representation of information; for that you need things like properly described axes. There's no horizontal axis in your case - maybe you assumed/checked that it is time - the viewer doesn't know. In one of the photos you've missed the vertical axis as well.
Second: I was talking about the way you've shared this - as photos taken with a smartphone. Why?
Also this: 








And that's all pretty objective and obvious. I could now start making fun of your forecasting, but as you see - I don't. I'm in "nice mode" today. But it may change tomorrow, so weigh your words carefully. ;-)


R0H1T said:


> The fact is Ryzen & TR work with ECC ram, with certain boards. You can't dance around it, heck Intel disables many feature in their unlocked CPUs - how cheap is that?


Intel has a very enterprise-oriented approach, with all features being pretty well documented. They ride on an image of being a solid enterprise partner. They can't afford to put ECC in CPUs that may not support it properly.
AMD has a different target client structure, and they can afford not to properly describe what features a product has. We had a nice discussion about this lately in the NVENC thread (with AMD you don't know which features are supported by the GPU; there's no documentation).

Honestly, I think you guys even like it. I think you like being forced to test and search and ask on forums instead of just checking in the datasheet.
But assuming AMD is hoping for a larger share of business clients, they'll have to really focus on more than just performance.


----------



## ratirt (May 10, 2019)

lexluthermiester said:


> Incorrect. When a manufacturer makes a product with support for a certain function and that function is made available to the public, you can bet your life on the fact that they have tested it completely.


I think the same. Besides, Ryzen TRs are validated; it is even mentioned on AMD's webpage. So if TR is validated, then sure as hell the other chips will work, as they are created to work with ECC. Tested for sure; validated, not necessarily.



notb said:


> AMD doesn't disable ECC, but it doesn't mean it works as it should. It's just there.


It does support ECC and it works; the problem is finding a motherboard which will allow using ECC RAM modules.
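As a side note, one quick way to see what a given board/CPU combo actually reports is the "Error Correction Type" field of the DMI tables. A minimal sketch, assuming a Linux box where `dmidecode -t memory` produces its usual output (the sample text below is hardcoded for illustration; on a real system you'd capture the command's output):

```python
import re

# Sample `dmidecode -t memory` fragment, hardcoded for illustration.
# On a platform with ECC active this field reads e.g. "Single-bit ECC";
# on a non-ECC setup it reads "None".
SAMPLE = """\
Physical Memory Array
        Location: System Board Or Motherboard
        Use: System Memory
        Error Correction Type: Single-bit ECC
        Maximum Capacity: 128 GB
"""

def ecc_mode(dmidecode_text: str) -> str:
    """Return the reported error-correction type, or 'Unknown' if absent."""
    m = re.search(r"Error Correction Type:\s*(.+)", dmidecode_text)
    return m.group(1).strip() if m else "Unknown"

print(ecc_mode(SAMPLE))  # -> Single-bit ECC
```

Whether the firmware reports this field truthfully is, of course, exactly the validation question being argued about here.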



notb said:


> And AMD *does not* state Ryzens support ECC, does it?


Actually, AMD stated that all ECC features on Ryzen CPUs are working fine and have never been disabled.








AMD confirms that Ryzen supports ECC memory
www.overclock3d.net

ECC Memory & AMD's Ryzen - A Deep Dive
One of the more interesting aspects of the pre- and post-Ryzen launch was how many people were wondering about support for ECC memory, otherwise known as error-correcting code memory. Every forum had at least one thread and most articles with a comments section had people talking about this...
www.hardwarecanucks.com


----------



## lexluthermiester (May 11, 2019)

notb said:


> You don't understand enterprise computing - I've told you that many times.


You are welcome to your opinion. Doesn't mean you are correct.


----------



## 0x6A7232 (May 11, 2019)

notb said:


> What's the point of an untested security feature?
> 
> Without them it means you can't really expect ECC to work. And no one is liable when it stops.
> That's the whole point of certification. It is important not in the 99% of time a feature works, but in the 1% time it doesn't.
> Certified ECC systems sometimes don't work properly, just like a certified airbag doesn't always save your life in a crash. But until someone gives you a guarantee that an airbag should work in a particular way, it's just a small bomb with a baloon. WTF would you willingly put a bomb in your car?



First off, ECC isn't a security feature, it's a data integrity feature, unless everything I've ever read about it is wrong.  It is for when the electronics make a mistake and mangle a bit, it can then be corrected instead of possibly corrupting something.  Unless you mean security from hardware failure??

Second off, comparing it with a safety device (airbags, also helmets) makes absolutely no sense.  That's not even apples and oranges.  It's apples and mutton.  An airbag protects you in an accident, ECC corrects hardware data errors should they occur. 

I'm pretty sure ECC will work properly on Ryzen if a motherboard supports it, and not if it doesn't.  I'm guessing AMD didn't spec Ryzen as supporting ECC to avoid the inevitable lawsuit when someone snags a non-ECC or no-brand "ECC" mobo and it doesn't work right, even though the Ryzen would operate according to ECC spec (that's just my guess, maybe it actually has a flaw or something?).  Pretty sure you could find out by checking what the specs are on the Ryzen memory controller.  *shrug*

ECC is normally only used in applications where downtime is unacceptable, like data centers, mission critical (like maybe military, medical, perhaps ATC and possibly AI driving?) applications, as it's more expensive than non-ECC, even though once upon a time, ECC was the norm (if I remember my computer history right).









ECC memory - Wikipedia
en.wikipedia.org
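Since several posts here hinge on what ECC actually does, here is a minimal sketch of the single-bit correction it performs, using a toy Hamming(7,4) code. (Real ECC DIMMs use a wider SECDED code, commonly 64 data bits plus 8 check bits per word, but the principle is the same: redundant parity bits pinpoint and flip back a single corrupted bit.)

```python
# Toy Hamming(7,4) code: 4 data bits gain 3 parity bits, and any single
# flipped bit in the 7-bit codeword can be located and corrected.

def encode(data4):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit Hamming codeword."""
    d1, d2, d3, d4 = data4
    p1 = d1 ^ d2 ^ d4           # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4           # parity over codeword positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4           # parity over codeword positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(code7):
    """Return (data_bits, error_position) after fixing any single-bit error.

    error_position is 0 when no error was detected, else the 1-based
    position of the corrected bit.
    """
    c = list(code7)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # encodes the position of the flip
    if syndrome:
        c[syndrome - 1] ^= 1          # correct the single-bit error
    return [c[2], c[4], c[5], c[6]], syndrome

word = [1, 0, 1, 1]
sent = encode(word)
sent[5] ^= 1                          # simulate a cosmic-ray bit flip
fixed, pos = decode(sent)
assert fixed == word and pos == 6     # data recovered, flip located
```

The "orders of magnitude" argument above is about exactly this: without the parity bits, that flipped bit silently corrupts the data; with them, it is detected and undone.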


----------



## notb (May 11, 2019)

0x6A7232 said:


> First off, ECC isn't a security feature, it's a data integrity feature, unless everything I've ever read about it is wrong.


Well, maybe it's a broader definition of "security" than people here may be used to. Sorry - the risk-management vocabulary comes out once in a while.


> Unless you mean security from hardware failure??


Yeah, I meant "security" as the general idea of mitigating risk.


> An airbag protects you in an accident, ECC corrects hardware data errors should they occur.


What if airbags are designed on a computer without ECC and we're so unlucky that the simulation data got corrupted or lost?
I know this may sound funny, but that's what risk management is usually about: mitigating the risk of very rare but significant events.

Yes, corruption/instability stemming from the class of RAM errors that ECC targets is very rare. But it can happen, and we have a technology that makes it a few orders of magnitude less likely. So this technology became a standard in production systems.


> I'm pretty sure ECC will work properly on Ryzen if a motherboard supports it, and not if it doesn't.


OK, you may be sure. I'm pretty sure it won't. These are just opinions.

I was speaking from a PoV of an enterprise, so the party actually interested in ECC memory.
The system has to officially support ECC, i.e. someone has to take responsibility. And that's the whole point: responsibility.

Maybe Ryzen today can work in ECC mode, we don't know that. And honestly, do we really know whether Threadripper, EPYC or Xeon support ECC properly? No, we don't.
But one CPU has an "ECC validated" sticker and one doesn't. And that sticker changes everything.


> ECC is normally only used in applications where downtime is unacceptable, like data centers, mission critical (like maybe military, medical, perhaps ATC and possibly AI driving?) applications, as it's more expensive than non-ECC, even though once upon a time, ECC was the norm (if I remember my computer history right).


No, ECC is required in virtually all production systems in large enterprises.

Also, you have a very military understanding of something being "mission critical" (gaming much? ;-)).
A mission critical system is any system essential for an organization to perform its core tasks.
For example the system responsible for selling products is "mission critical", because selling is the most important activity in a company. The database that holds client or sales data is critical as well.
If your company designs fans on CAD workstations, they may also be considered "production" and "mission critical", i.e. it's very unlikely this job will be given to ordinary office desktops - even if they're fast enough.


----------



## lexluthermiester (May 11, 2019)

notb said:


> Well, maybe it's a broader definition of "security" than people here may be used to. Sorry.


There's a difference between data integrity/redundancy/reliability and data security. You clearly do not work in business-level IT or you would understand that distinction.


----------



## ratirt (May 13, 2019)

notb said:


> OK, you may be sure. I'm pretty sure it won't. These are just opinions.
> 
> I was speaking from a PoV of an enterprise, so the party actually interested in ECC memory.
> The system has to officially support ECC, i.e. someone has to take responsibility. And that's the whole point: responsibility.
> ...


I don't understand why you keep arguing about this. Threadripper is validated and supports ECC, and you can see it on the AMD webpage. So whenever a company decides they need ECC, they can go with that. Xeon and TR support ECC correctly for sure. So does Ryzen; it has been stated and confirmed by AMD, and yet you keep saying we don't know, or that it doesn't.


----------



## notb (May 13, 2019)

ratirt said:


> So does Ryzen; it has been stated and confirmed by AMD, and yet you keep saying we don't know, or that it doesn't.


So show me the document from AMD that says Ryzen supports ECC. I'm sure this is not a problem since you're so certain.


----------



## ratirt (May 13, 2019)

notb said:


> So show me the document from AMD that says Ryzen supports ECC. I'm sure this is not a problem since you're so certain.


I don't know if there is a document. I'm referring to Papermaster's and Lisa Su's statements about Ryzen CPUs and ECC support. Ryzen TR has this information on AMD's webpage, that it supports ECC. I don't think two head members of AMD would lie about ECC support for Ryzen processors.


----------



## notb (May 13, 2019)

ratirt said:


> I don't know if there is a document. I'm referring to Papermaster's and Lisa Su's statements about Ryzen CPUs and ECC support. Ryzen TR has this information on AMD's webpage, that it supports ECC. I don't think two head members of AMD would lie about ECC support for Ryzen processors.


OK. So give a link to an interview or a slideshow. That will be enough.


----------



## ratirt (May 13, 2019)

notb said:


> OK. So give a link to an interview or a slideshow. That will be enough.


I already did, previously, but I guess you didn't bother to read it. 
For starters, read about TR on AMD's webpage. It clearly states that Ryzen TR supports ECC memory as a feature.


----------



## Redwoodz (May 13, 2019)

notb said:


> OK. So give a link to an interview or a slideshow. That will be enough.



Do you really believe AMD does not support ECC, or are you just trolling? Either way, you are done.


----------



## 0x6A7232 (May 14, 2019)

Predicted response: that's AsRock's website, not AMD's, therefore not valid.


----------



## londiste (May 14, 2019)

That is Ryzen Pro, competing with (workstation) Xeons which also have ECC support.
It is a separate product segment.


----------



## R0H1T (May 14, 2019)

Pro APUs, i.e. Raven Ridge; for the regular non-IGP variants, ASRock supports ECC according to that list.

Another set of results from an ES chip ~ https://cpu.userbenchmark.com/SpeedTest/697865/AMD-Eng-Sample--2D3212BGMCWH2-3734-N


----------



## 0x6A7232 (May 17, 2019)

Looks like the APU specs are in (supposedly) - the Ryzen 3000 APU is Zen+, a 12 nm optical shrink of the 14 nm design - the "Ryzen 3 3200G comes with 3.60 GHz nominal clock-speed and 4.00 GHz maximum Precision Boost frequency; while the Ryzen 5(?) 3400G ships with 3.70 GHz clock speeds along with 4.20 GHz max Precision Boost". 









AMD Ryzen "Picasso" APU Clock Speeds Revealed
AMD is giving finishing touches to its Ryzen 3000 "Picasso" family of APUs, and Thai PC enthusiast TUM_APISAK has details on their CPU clock speeds. The Ryzen 3 3200G comes with 3.60 GHz nominal clock-speed and 4.00 GHz maximum Precision Boost frequency; while the Ryzen 5(?) 3400G ships with...
www.techpowerup.com


----------



## EarthDog (May 17, 2019)

0x6A7232 said:


> Looks like the APU specs are in (supposedly) - the Ryzen 3000 APU is a Zen+ 12nm optical shrink of 14nm - the "Ryzen 3 3200G comes with 3.60 GHz nominal clock-speed and 4.00 GHz maximum Precision Boost frequency; while the Ryzen 5(?) 3400G ships with 3.70 GHz clock speeds along with 4.20 GHz max Precision Boost".
> 
> 
> 
> ...


What does that have to do with this? It's not even Zen 2...


----------



## notb (May 17, 2019)

Yeah, I forgot about this lovely discussion. :/


ratirt said:


> I already did. Previously but I guess you didn't bother to read it.
> For starters read about TR on the AMD's webpage. It clearly states that Ryzen TR supports ECC memory as a feature.


OK. You gave a link to a Reddit discussion.
ECC support is not mentioned in specs. So is the whole "Ryzen supports ECC" internet gag based on this?




Call me an Intel fanboy or whatever you want. This is not how a major listed CPU manufacturer should do business.


R0H1T said:


> Pro APUs i.e. Raven Ridge, for the regular non IGP variants ASrock supports ECC according to that list.


Quite a few Pentium, Celeron and Atom processors support ECC (officially, i.e. are validated), so we shouldn't be shocked that APUs do as well. These CPUs are running many enterprise products.


londiste said:


> That is Ryzen Pro, competing with (workstation) Xeons which also have ECC support.
> It is a separate product segment.


This is all quite weird.
AMD doesn't call any Ryzen ECC-validated (Pro or not). Suddenly some mobo makers list Ryzen Pro as supporting ECC.

Would it be possible that AMD made a mistake on their website? They paid for validation and forgot to tell us?
Mess. :/


----------



## ratirt (May 18, 2019)

notb said:


> Yeah, I forgot about this lovely discussion. :/
> 
> OK. You gave a link to a Reddit discussion.
> ECC support is not mentioned in specs. So is the whole "Ryzen supports ECC" internet gag based on this?
> View attachment 123179


It's not this one, but whatever suits you, bro 
Anyway, it is supported and it works. Deal with it


----------



## 0x6A7232 (May 21, 2019)

It would be helpful to know what exactly the procedure is for being officially ECC validated, and WHO does the validation. If it is, for example, done by a body that was founded or is controlled by Intel, this would explain AMD not bothering. Sort of like how nVidia came up with their own requirements for "3D accelerator" and they just so conveniently matched them being the first (forget about all the others that came before). Does anyone know who validates ECC, and what the requirements are? Or is it just as meaningless as the company doing internal testing and slapping a label on, which can mean different things depending on which company it is?

Also, if you have a mobo claiming ECC support, validated or not, you have a case for a lawsuit if it doesn't actually work. This leans in favor of ECC being supported for all intents and purposes, as long as the mobo OEM clearly states support.

If you guys haven't seen this, watch it (among other goodies: 4,278 in Cinebench on a 16-core @ 4.2 GHz, and a 12-core boosting to 5 GHz, so a 16-core 5 GHz boost part is likely)


----------



## Berfs1 (Jun 7, 2019)

notb said:


> And AMD *does not* state Ryzens support ECC, does it?
> 
> AMD doesn't disable ECC, but it doesn't mean it works as it should. It's just there.
> 
> ...


Wanna tell me my prediction was wrong now?


----------



## gronetwork (Jun 19, 2019)

The AMD Ryzen 7 3800X has been benchmarked on Geekbench 4 under 64-bit Windows, and it seems it could easily beat Intel's beast, i.e. the Intel Core i7-9700K processor. For the moment, the chip scores about 17% higher. However, there is only one result currently. We will have to be patient to see whether the difference is confirmed over the next weeks.






AMD Ryzen 7 3800X vs Intel Core i7-9700K - GadgetVersus
Comparison between AMD Ryzen 7 3800X and Intel Core i7-9700K with the specifications of the processors, the number of cores, threads, cache memory, also the performance in benchmark platforms such as Geekbench 4, Passmark, Cinebench or AnTuTu.
gadgetversus.com


----------

