# AMD Readying a 10-core AM4 Processor to Thwart Core i9-9900K?



## btarunr (Sep 18, 2018)

To sustain its meteoric rise on the stock market, AMD needs to keep investors convinced it has a competitive edge over Intel, even if that means investing heavily in short-term roadmap changes. According to an Elchapuzas Informatico article, AMD could be working on a new 10-core/20-thread processor for the AM4 platform to compete with Intel's upcoming 8-core/16-thread Core i9-9900K. The processor is labeled "Ryzen 7 2800X" and plastered over Cinebench nT screenshots, where, thanks to the sheer weight of its 10 cores, it tops the nT test against Intel's mainstream-desktop processors and even a 2P Xeon X5650 setup (12 cores/24 threads in total). 

The Forbes article that cites Elchapuzas Informatico, however, is skeptical that AMD would make such a short-sighted product investment. It believes that developing a 10-core die on the existing "Zen+" architecture would warrant a massive redesign of the CCX (Zen Compute Complex), and AMD would only get an opportunity to do so when working on "Zen 2," which it still expects to debut by late 2018 on its EPYC product line. We, however, don't discount the possibility of a 10-core "Zen+" silicon just yet. GlobalFoundries, AMD's principal foundry partner for CPUs, has given up on 7 nm, forcing the company to fall back on TSMC to meet its 7 nm roadmap commitments. TSMC already has a long list of 7 nm clientele, including high-volume contracts from Apple, Qualcomm, and NVIDIA. This could push AMD to bolster its existing lineup as a contingency against delays in 7 nm volume production. 





*View at TechPowerUp Main Site*


----------



## londiste (Sep 18, 2018)

The X5650 is not mainstream-desktop, and it is from 2010.
From the leaks so far, a 2100 nT Cinebench score is roughly on par with the i9-9900K, not convincingly higher.

The silicon is a big question though. Zen2 this early?


----------



## hat (Sep 18, 2018)

Begun, the core war has.


----------



## Japie073 (Sep 18, 2018)

The Xeon X5650 is 6C/12T; they used a dual-CPU setup. Never have I ever seen a 12-core 1366 CPU.


----------



## R0H1T (Sep 18, 2018)

The alleged deca-core supposedly isn't possible with the current 4-cores-per-CCX arrangement, though of course AMD might have an ace or two up its many sleeves.
We'll see, but I have serious doubts about this rumor!


----------



## notb (Sep 18, 2018)

londiste said:


> The X5650 is not mainstream-desktop, and it is from 2010.
> From the leaks so far, a 2100 nT Cinebench score is roughly on par with the i9-9900K, not convincingly higher.


Exactly. Ryzen 8C/16T competes with i7 6C/12T.
10C Ryzen might not be enough for 9900K. They should go with 12 already.

Of course we would be once more flooded with simple multi-core benchmarks like rendering. 


> The silicon is a big question though. Zen2 this early?


Surely not Zen1 on the current node, since that's basically against the whole architecture.
We've known from the start what can be done with the CCX and IF.
It's very limiting for AMD. For example, they can't even make an 8-core APU (even though there's a lot of free space in the package):
the iGPU takes up the space of one CCX.

Of course AMD could design a 10-core CPU from scratch if they really wanted. Or 11 cores. Or whatever they fit.
But that would mean the whole brilliant Zen idea lasted just for 2 years.

The other issue is the socket. Would >8 cores work on AM4? How? AMD never told us.

This is also a big question in case of 7nm. People believe AMD will manage to squeeze a scaled down Threadripper into AM4, but how would it work?
In fact, have they ever said that AM4 will get Zen2? Maybe it's time for AM4+?


----------



## The Quim Reaper (Sep 18, 2018)

Not going to happen.

The best we can expect are some highly binned 2700X CPUs which can reach, and sustain, a 4.5 GHz boost.

Personally, I think AMD should just ignore the 9900K; it will be too expensive for most and won't take (many) sales away from the 2700X. They should just carry on getting the mainstream Zen 2 ready for a Q1 2019 launch.


----------



## R0H1T (Sep 18, 2018)

notb said:


> Exactly. Ryzen 8C/16T competes with i7 6C/12T.
> 10C Ryzen might not be enough for 9900K. They should go with 12 already.
> 
> Of course we would be once more flooded with simple multi-core benchmarks like rendering.
> ...


Why would they tell you? AMD never said anything about mainstream Ryzen topping out at just 8 cores, did they? Do you also think Zen 2/3 will only have 4 cores per CCX? If not, then the next Ryzen could bring 12-16 cores to the mainstream, but that's still at least half a year away.


----------



## silentbogo (Sep 18, 2018)

notb said:


> Surely not Zen1 on the current node, since that's basically against the whole architecture.


You've answered all of your following questions right here. It's an early Zen 2 prototype, since Zen 1 only allows for 4 cores per module, and the package can only fit 2 modules.
Zen 2 (according to AMD slides) is projected to have 6-8 cores per CCX module.
With this in mind, we may expect something ridiculous like a 16c/32t CPU in the consumer segment next year. 

Maybe I should wait for an upgrade one more year... this is getting quite interesting.


----------



## kastriot (Sep 18, 2018)

Price/perf is the winner here, so all this is shooting the breeze. ATM AMD is the winner for the next 2-3 years; Intel has got what it deserves for not pulling its head out of the sand and for milking its loyal customers.


----------



## Prima.Vera (Sep 18, 2018)

hat said:


> Begun, the core war has.


We are on Episode XXL already...


----------



## MCJeeba (Sep 18, 2018)

$499


----------



## notb (Sep 18, 2018)

R0H1T said:


> Why would they tell you?


Rather: why would they tell us?
Because they told us that AM4 will be in use until 2020, and people took the bait. Platform longevity and an upgrade path are still among the most popular arguments people mention when recommending Ryzen.
But at the same time, people expect 7nm, Zen 2, and 16 cores in the consumer segment before 2020.
Wouldn't these people want some sort of confirmation? Or at least clear info that it will still work on the same AM4 motherboards?


> AMD never said anything about mainstream Ryzen topping out at just 8 cores, did they?


AFAIK they never said AM4 will go beyond 8 cores, either.


> Do you also think Zen 2/3 will only have 4 cores per CCX? If not, then the next Ryzen could bring 12-16 cores to the mainstream, but that's still at least half a year away.


It doesn't matter what I think. What matters is the "could" in your sentence. 


silentbogo said:


> Zen 2 (according to AMD slides) is projected to have 6-8 cores per CCX module.


These slides are exactly what I'm missing. Could you give a link? I can't find anything.


----------



## R0H1T (Sep 18, 2018)

notb said:


> Rather: why would they tell us?
> Because they told us that AM4 will be in use until 2020, and people took the bait. Platform longevity and an upgrade path are still among the most popular arguments people mention when recommending Ryzen.
> But at the same time, people expect 7nm, Zen 2, and 16 cores in the consumer segment before 2020.
> Wouldn't these people want some sort of confirmation? Or at least clear info that it will still work on the same AM4 motherboards?
> ...


You won't find anything because Zen 2 isn't official yet; maybe we'll get a preview from AMD before the end of the year. AMD would also be wary of leaking too much _unnecessary_ info before Zen 2 lands, especially as EPYC gains traction ~ *Intel is Serving Major Xeon Discounts to Combat AMD EPYC*

We're just looking at a tree in the forest; the actual forest is the bigger deal.


vprem said:


> “Intellect Running-Amok” and “others” might call this “Chase-Me”, Competition, but it could also be caused by “Half-Baked” CPUs. You know, those “Brain-Deadened” CPUs that did not make it to being Da Bigge-Vun but are otherwise “bigger”/more-muskular that Dem Weeny-Vuns”Cheapos”.
> 
> When Sell, Sell, & Sell is Running Wild-&-Wanton, “Relatively”/Materially it could be seen as Buy, Buy, & Buy but Realistically/”Spiritually”, it could also be due to “Thought-Processing”  needing “Its-Walkies”/to-Run-Amok. Especially when The Kitchen is getting Too-Hot due to all that Wild & Wanton Baking. You know, when “Baking” is Running Amok. When “Parts” are being “Binned”/Brain-Deadened/Financially-Justified.


Is this what you call *Klingon*?


----------



## silentbogo (Sep 18, 2018)

notb said:


> These slides are exactly what I'm missing. Could you give a link? I can't find anything.


Lol. This is gonna be embarrassing. I decided to find those old slides for you, and out of all the sources, the first one that came up on Google was a WCCFTech article with the informative title "Fake AMD Ryzen 2800X 12 Core 5.1GHz Slide Sends Media Into Frenzy" ))))
So much for keeping up with the news.... 

So all we have to go on is a since-taken-down MSI promotional video for a B450 motherboard that claimed "8-core and up CPU" support... All clues and hints have been meticulously erased.




----------



## Mr.Origami696 (Sep 18, 2018)

I think everyone is missing an important point here.

AMD isn't interested in trivial competitions against Intel but in is own way to conceive the CPU technology keeping a solid path.

Why the logic cores number are higher, compared to the physical ones? Well, one reason might be...better compute performances and lower power consumption at a final cost market always competitive and catching for the customers (any category). It is almost obvious that GPU card are the most expansive compared on the many things the CPU's have to do on the office side. So, TDP reasonably increase on gaming rigs where a mid-higher GPU (mostly Nvidia) is the ideal...but AMD just should be keeping their excellent part, providing the best multi thread performances at lower power consumption (and prices).

That's AMD philosophy, after all. Just think about it.


----------



## Vya Domus (Sep 18, 2018)

Intel is the one late to the party, not AMD. With Zen 2 less than a year away, they can simply wait this one out; this 10-core part seems unlikely.



silentbogo said:


> So all we have to go on is a since-taken-down MSI promotional video for a B450 motherboard that claimed "8-core and up CPU" support... All clues and hints have been meticulously erased.



I don't know why it is that hard for people to believe core count can increase without a change in socket/package.

Platform longevity is indeed something that people don't believe in thanks to Intel *but it is in fact a thing.*


----------



## medi01 (Sep 18, 2018)

Intel has enjoyed (and still enjoys) margins AMD could never dream of.
Intel's fab advantage has vanished; thank you, TSMC.
Intel is years behind on the GPU front.
nVidia won't even be allowed to join the x86 party (so let's just skip the "how many decades would it take them to catch up" question).
In the "moar coars" war, AMD has the architectural edge (CCX + Infinity Fabric); a 16-core CPU for $699, anyone?

So, where was I: *who says that AMD even needs to have symmetric or even "faster" answers to whatever Intel has?*


----------



## R0H1T (Sep 18, 2018)

medi01 said:


> Intel has enjoyed (and still enjoys) margins AMD could never dream of.
> Intel's fab advantage has vanished; thank you, TSMC.
> Intel is years behind on the GPU front.
> nVidia won't even be allowed to join the x86 party (so let's just skip the "how many decades would it take them to catch up" question).
> ...


AMD needs to cover all their bases, or at least put viable alternatives in segments where Intel enjoys a near monopoly. At this point in time AMD's probably doing the best they can, but they can do even better.


----------



## notb (Sep 18, 2018)

Vya Domus said:


> I don't know why it is that hard for people to believe core count can increase without a change in socket/package.


There's a thing called physics. And another thing called architecture limits.

I'm not saying AMD can't make a 10-core CPU for AM4. But I am skeptical about it being compatible with current AM4 mobos. If it is - great for Ryzen owners. But why hasn't AMD told us that? It's not a technological secret or anything.
Wouldn't current AM4 owners like to know that "AM4 supported until 2020" means something more than just current Zen+ CPUs being in production for another 2 years?
Because if they switch to a new socket for new CCX modules, what happens to the mystical "upgrade path"?

And there's another issue as well. Next year we might see DDR5 in servers, but DDR4 will remain in consumer segment for another 1-2 years. How will AMD cover this? Will they still be able to build both consumer and server CPUs out of the same parts? 


medi01 said:


> So, where was I: *who says that AMD even needs to have symmetric or even "faster" answers to whatever Intel has?*


Because their market share is still around 10% of mobile+desktop market and 1% of servers. And they need more.


Mr.Origami696 said:


> I think everyone is missing an important point here. [cut]


Please read that post once again. Or maybe show it to your life partner or a friend. There's no shame in having a text edited before posting...
I've read it twice and I don't know what you wanted to say. :-/


----------



## R0H1T (Sep 18, 2018)

notb said:


> *There's a thing called physics. And another thing called architecture limits.*
> 
> I'm not saying AMD can't make a 10-core CPU for AM4. But I am skeptical about it being compatible with current AM4 mobos. If it is - great for Ryzen owners. But why hasn't AMD told us that? It's not a technological secret or anything.
> Wouldn't current AM4 owners like to know that "AM4 supported until 2020" means something more than just current Zen+ CPUs being in production for another 2 years?
> ...


There's this thing called speculation; what you're saying is beyond that. Where did physics come into all of this? Do 16 cores @ *7nm* bore a quantum tunnel that 8 cores @ *14nm* cannot?


----------



## kings (Sep 18, 2018)

Given that most 9900K buyers will probably be gamers (as with the 8700K), an AMD 10-core, most likely forced to lower clock speeds than the 2700X, would bring nothing new!

Furthermore, Intel is advertising this new CPU mostly for gaming, so it makes even less sense for AMD to go that way, when they already know that in pure gaming, adding more cores doesn't solve anything.

In most games the 10-core would be worse than the 2700X, unless they discover some magic to increase clocks significantly. So I think it would be a release that makes no sense!


----------



## hat (Sep 18, 2018)

notb said:


> I'm not saying AMD can't make a 10-core CPU for AM4. But I am skeptical about it being compatible with current AM4 mobos. If it is - great for Ryzen owners. But why hasn't AMD told us that? It's not a technological secret or anything.
> Wouldn't current AM4 owners like to know that "AM4 supported until 2020" means something more than just current Zen+ CPUs being in production for another 2 years?
> Because if they switch to a new socket for new CCX modules, what happens to the mystical "upgrade path"?



Were you aware of AM3 processors that supported both DDR2 and DDR3? This meant that you could stick an AM3 chip in an AM2/AM2+ board. You could also use an AM2/AM2+ chip in an AM3 board, provided it had DDR2 slots. That's the kind of forwards (and backwards) compatibility AMD users have enjoyed previously, and there's no reason to expect this will come to an end with AM4. However, probably not all boards will do this. Support varied from board to board; it was up to the board makers to provide BIOS updates for support. I've taken advantage of this feature myself a number of times in the past.

These features exist for a reason. It's the same reason we had motherboards like the Asrock 775Dual-VSTA. That was a weird (but useful) board for sure. It had DDR and DDR2 slots, and it had an AGP slot as well as a PCI-E slot. Some of us only want to (or can only afford to) upgrade one component at a time once in a while, so these things really come in handy in such situations. How nice would it be if I could put one of the upcoming Whiskey Lake chips in my old, but still functioning socket 1155 board? I wouldn't need to buy a new motherboard and RAM just to upgrade the CPU.


----------



## techy1 (Sep 18, 2018)

This (rumored) 10-core is as possible as an 11-core - yeah, anything is possible, but it's not gonna happen. And I sure do hope AMD is not preparing anything at all (vs. next month's Intel moves). I do not want AMD to spend time/money on Zen+ when Zen 2 is near (approx. 6+ months out) and Zen 2 will be the fatality move vs. Intel's current and next few years' lineup.


----------



## Mr.Origami696 (Sep 18, 2018)

R0H1T said:


> There's this thing called speculation; what you're saying is beyond that. Where did physics come into all of this? Do 16 cores @ *7nm* bore a quantum tunnel that 8 cores @ *14nm* cannot?


Hi R0H1T, you're perfectly right. Anyway, quantum physics isn't something that's really light to digest for everyone, but it is astonishing to see (not only here of course, I'm new here) slightly useless conspiracy theories... even inside the IT environment! :/


----------



## dj-electric (Sep 18, 2018)

There comes a point where adding more cores isn't the solution to the problem, because the problem wasn't "not enough cores".


----------



## Valantar (Sep 18, 2018)

Don't remember where I saw it, but this has been debunked. It's fake. Why?

- Cinebench scores can be manipulated just by editing a text file.
- The description of the CPU doesn't match AMD naming conventions in Cinebench (among other things, it says "10-core", while AMD CPUs are described with words, not numbers: "six-core", "eight-core").
- And last but certainly not least: *there's no way to get a 10-core AM4 chip without new silicon, which would mean this is 7nm. There's no way they'd launch their first 7nm CPUs as an afterthought like this.*

Please stop treating this like it's real.


----------



## nemesis.ie (Sep 18, 2018)

I don't put much stock in the validity of this "leak".

However, just for fun, imagine for a moment:

1. 7nm samples are being tested for Epyc right now.
2. They have run through "quite a few" wafers testing 7nm.
3. The yields are pretty terrible, but they have managed to get some working cores on some dies - the CCX is now 8 cores, meaning 16 cores on a fully working die.
4. They put out "2800X" as a stepping stone to the full Ryzen 3/Zen 2 release using up these not 100% working dies giving anything from 2 to 14 cores (assuming ones with 16 would go to Epyc or an early TR release).

Pure speculation/wishful thinking here of course, but is it possible that there are usable, fully tested 7nm dice with 8 or more cores working available in "reasonable quantities" that could be packaged and sold?

Obviously, if they did have such a product and wanted to sell it at a 9900K-competitive price, they would probably need a lot of them available, as they would likely go out of stock very quickly.

They could of course make it a "halo product" for this generation and charge extra.

/wishful thinking
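Just to extend the wishful thinking: the binning idea in points 3-4 above can be played with as a toy model. All numbers here are made up for illustration (die size, CCX width, and per-core yield are assumptions, not AMD data):

```python
# Toy per-core yield model: a 16-core die (2 CCXs x 8 cores, per the
# speculation above) where each core works independently with
# probability P_CORE_GOOD, and dies are binned by good-core count.
from collections import Counter
import random

random.seed(42)
CORES_PER_DIE = 16
P_CORE_GOOD = 0.85        # assumed per-core yield on an immature process
N_DIES = 100_000

def good_cores() -> int:
    """Count working cores on one simulated die."""
    return sum(random.random() < P_CORE_GOOD for _ in range(CORES_PER_DIE))

bins = Counter(good_cores() for _ in range(N_DIES))
fully_working = bins[CORES_PER_DIE] / N_DIES
ten_or_more = sum(n for k, n in bins.items() if k >= 10) / N_DIES
print(f"fully working dies:        {fully_working:.1%}")
print(f"dies with >=10 good cores: {ten_or_more:.1%}")
```

Under these made-up numbers, fully working 16-core dies are rare while dies with 10+ good cores are plentiful - exactly the kind of distribution a salvage SKU like a hypothetical "2800X" would feed on.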


----------



## jpvalverde85 (Sep 18, 2018)

Too much R&D for a 10-core. I don't think Ms. Su would approve it, and if it's Zen 2, AMD will not release it as the 2000 series.


----------



## Space Lynx (Sep 18, 2018)

Hmm I did not realize that Apple, nvidia, etc lots of big names are in line for 7nm production before AMD... that has to mean Nvidia will have 7nm gpu's next year, I am sure they are throwing their big wallet around the TSMC building to make sure AMD is behind them on the production schedule... this does not bode well for AMD, people are already sick of waiting so many years for competitive GPU's, heh.


----------



## HTC (Sep 18, 2018)

This is highly unlikely *@ this time. *Perhaps after gen 2 is released (as in *not the 2000 series*) but not in this generation.

Cinebench scores can be faked rather easily, apparently:




Saw the above pic in a similar topic @ AnandTech a few days ago...


----------



## Caring1 (Sep 18, 2018)

HTC said:


> This is highly unlikely *@ this time. *Perhaps after gen 2 is released (as in *not the 2000 series*) but not in this generation.
> 
> Cinebench scores can be faked rather easily, apparently:
> 
> ...


If a soggy loaf of bread can do that, I would love to know how much better a potato is.


----------



## HTC (Sep 18, 2018)

Caring1 said:


> If a soggy loaf of bread can do that, I would love to know how much better a potato is.



That makes 3 of us: you, me... and the potato...


----------



## Valantar (Sep 18, 2018)

Yeah, that wasn't difficult.


----------



## Gasaraki (Sep 18, 2018)

That leak was so fake 2 weeks ago. AMD can't make a 10 core Ryzen right now due to the way their architecture works.



lynx29 said:


> Hmm I did not realize that Apple, nvidia, etc lots of big names are in line for 7nm production before AMD... that has to mean Nvidia will have 7nm gpu's next year, I am sure they are throwing their big wallet around the TSMC building to make sure AMD is behind them on the production schedule... this does not bode well for AMD, people are already sick of waiting so many years for competitive GPU's, heh.




He's just throwing shit out there. He doesn't really know that Apple, nVidia, are in line for 7nm before AMD. Think about it: the RTX 20x0 is not 7nm, so why would nVidia be in line now? More likely the end of next year, in preparation for the RTX 21x0 or a Titan RTX.


----------



## Vya Domus (Sep 18, 2018)

lynx29 said:


> that has to mean Nvidia will have 7nm gpu's next year, I am sure they are throwing their big wallet around the TSMC building to make sure AMD is behind them on the production schedule



Thankfully TSMC isn't run by morons, they don't give a shit about any of that. Their goal is to sell as many wafers as possible.



Gasaraki said:


> He doesn't really know that Apple, nVidia, are in line for 7nm before AMD.



Apple is already about to ship 7nm chips inside its new phones. No one is in line before anyone; whoever has the designs ready and meets their yield targets gets the chips. Given those chips are in the region of 100 mm^2 or less, it's no wonder they were the first to get them.


----------



## newtekie1 (Sep 18, 2018)

Valantar said:


> There's no way to get a 10-core AM4 chip without new silicon, which would mean this is 7nm. There's _no_ way they'd launch their first 7nm CPUs as an afterthought like this.



The only thing I disagree with is that new silicon would require 7nm.  There is no reason, that I see, that they couldn't make changes to the CCX and leave it on 12nm.  I mean, they already have the 12nm design down, so adding a few more cores to it shouldn't be a big task.

That said, I still don't think this is real.


----------



## GorbazTheDragon (Sep 18, 2018)

Yep, called bullshit on this as soon as I saw 10... They're going to have to give me something more convincing.

It would be much more believable if it was 12 cores, too.


----------



## Valantar (Sep 18, 2018)

newtekie1 said:


> The only thing I disagree with is that new silicon would require 7nm.  There is no reason, that I see, that they couldn't make changes to the CCX and leave it on 12nm.  I mean, they already have the 12nm design down, so adding a few more cores to it shouldn't be a big task.
> 
> That said, I still don't think this is real.


I didn't say it would _require _7nm. If it wasn't clear, I simply meant that launching a new piece of silicon at this time on 12nm makes no sense whatsoever. Why? Because what constitutes a "big task" is mighty relative.

First, let's ignore how exactly they're getting to 10 cores, and focus on die production. Taping out and ramping a production line for a brand-new large-size silicon design, even on a well-known process, is a multi-million-dollar investment at the very least. Also, the process from initial tape-out to volume production is at least 6 months. Which means they'd need to sell _a lot_ of these to cover those costs alone. Is it possible? Absolutely. But considering their 7nm Zen2 design has been sampling to server customers for several months now, a die like this would be obsolete before it was packaged, and if it existed it's rather likely that AMD would have hinted to it. Even without that, there's no way they'd recoup the R&D costs.

Then, of course, there's changing the CCX design, which in and of itself would not be a small undertaking. Either they'd need to fundamentally redesign the core building block of their entire Zen lineup until now, or they'd need to figure out how to connect three CCXes on a single die. Both of these would be a "big task", even with the inherent modularity of AMD's designs.


----------



## newtekie1 (Sep 18, 2018)

I don't agree that it would be nearly as big of an undertaking as you suggest.  They are already working on a CCX design with more cores, we know that, it is slated for 7nm.  However, adapting it to 12nm for an accelerated release would not be that difficult.  In fact, it has been done many times in the past when processing nodes were not up to the task of meeting production goals.  I mean, when 14nm wasn't able to produce enough A9 processors for Apple, they reworked the processor in a matter of weeks to get it into production on 16nm.

So, if they already have the reworked CCX, which we are pretty sure they do, adapting it to 12nm shouldn't be a major undertaking.  Plus, now that the only 7nm producer will likely be TSMC, there is a question on if they can keep up with demand. Having the option to fall back on Globalfoundries 12nm for the desktop chips if needed wouldn't necessarily be a bad thing.


----------



## mohammed2006 (Sep 18, 2018)

2019 we will see
16 core AM4
64 core TR4


----------



## HTC (Sep 18, 2018)

mohammed2006 said:


> 2019
> 16 core AM4
> 64 core TR4



I can totally picture it with the *Zen 2 arch* but not the *Zen+ arch*, which is why no 2800X should be released.


----------



## dwade (Sep 18, 2018)

No thanks. We’re all interested in an Intel 8 cores instead. Fast 8 cores > slower 10 cores. Zero compromises for the target audience which is gamers.


----------



## Valantar (Sep 18, 2018)

newtekie1 said:


> I don't agree that it would be nearly as big of an undertaking as you suggest.  They are already working on a CCX design with more cores, we know that, it is slated for 7nm.  However, adapting it to 12nm for an accelerated release would not be that difficult.  In fact, it has been done many times in the past when processing nodes were not up to the task of meeting production goals.  I mean, when 14nm wasn't able to produce enough A9 processors for Apple, they reworked the processor in a matter of weeks to get it into production on 16nm.
> 
> So, if they already have the reworked CCX, which we are pretty sure they do, adapting it to 12nm shouldn't be a major undertaking.  Plus, now that the only 7nm producer will likely be TSMC, there is a question on if they can keep up with demand. Having the option to fall back on Globalfoundries 12nm for the desktop chips if needed wouldn't necessarily be a bad thing.


Apple sells tens of millions of each model of the iPhone, and has the largest cash reserves of any company on the planet. They could likely afford to hire every engineer available from TSMC to make that work. Also, TSMC 16nm and Samsung 14nm are nowhere near as different as Samsung/GloFo 12nm (refined 14nm) and TSMC 7nm. While the redesign of the A9 was no doubt a reasonably-sized undertaking, this would be quite a lot larger. Also, do you have any sources documenting how they did this "in a matter of weeks"? I'd love to read about how they managed to pull that off. Considering the lead times on phone designs (SoCs enter volume production around half a year before launch), it's likely they had more than a few weeks for this.

There is absolutely a question of whether TSMC can produce enough Zen2 dice for AMD, at least in the short term. This is not an argument against the major undertaking back-porting the design to 12nm would be, though. With the dramatically different density and power attributes of these process nodes, it'd likely require quite a lot of tuning to get right.


----------



## HimymCZe (Sep 18, 2018)

WTF? We already established that the 9900K is no more than *8%* stronger than the 2700X.
AND you don't even need a decent water cooler to OC a 2700X *way beyond* 9900K performance.
(SOURCE).
THIS is just another nail in Intel's coffin.


----------



## Slizzo (Sep 18, 2018)

lynx29 said:


> Hmm I did not realize that Apple, nvidia, etc lots of big names are in line for 7nm production before AMD... that has to mean Nvidia will have 7nm gpu's next year, I am sure they are throwing their big wallet around the TSMC building to make sure AMD is behind them on the production schedule... this does not bode well for AMD, people are already sick of waiting so many years for competitive GPU's, heh.





Gasaraki said:


> He's just throwing shit out there. He doesn't really know that Apple, nVidia, are in line for 7nm before AMD.



As pointed out above, Apple's new iPhones are using TSMC-built 7nm A12 chips. They're being delivered into customers' hands in a couple of days now, so they likely shipped last month.


----------



## nemesis.ie (Sep 18, 2018)

@dwade, no, not all of us are interested in an Intel 8 core instead.


----------



## dwade (Sep 18, 2018)

nemesis.ie said:


> @dwade, no, not all of us are interested in an Intel 8 core instead.


Most of us then. The world’s first gaming 8 core CPU would look glorious. Intel is smart to release the beast next to Turing. New GPU means more CPU bottleneck for AMD. Smart move is smart.


----------



## TheLaughingMan (Sep 18, 2018)

This is easily possible. The highest-tier Threadripper dies have a max of 8 cores, so you just need two chips with 3 disabled/defective cores each. Put two on the package like you would for the Threadripper 2950X, but with only two chips instead of 4. There you have 10 to 16 cores. There would be some heat concerns, mind you, with two dies that close together. Plus, what would be the point if you are just trying to maintain your multi-threaded crown? You have an entire product SKU for that already.


----------



## notb (Sep 18, 2018)

TheLaughingMan said:


> This is easily possible. The highest-tier Threadripper dies have a max of 8 cores, so you just need two chips with 3 disabled/defective cores each. Put two on the package like you would for the Threadripper 2950X, but with only two chips instead of 4. There you have 10 to 16 cores.


No, you don't. Have you ever read anything about how your Ryzen works and looks inside? Aside from benchmarks, obviously. ;-)

All currently available Ryzen and EPYC CPUs are made using the same 4-core CCX.
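That constraint can be sketched in a few lines (assuming, as stated above, two 4-core CCXs per die, plus AMD's practice so far of keeping the same number of active cores in each CCX, as on the 2700X/2600X):

```python
# Possible total core counts for a Zen/Zen+ die, assuming two 4-core
# CCXs and symmetric configurations (equal active cores per CCX).
CCX_PER_DIE = 2
CORES_PER_CCX = 4

possible_totals = sorted(
    active * CCX_PER_DIE for active in range(1, CORES_PER_CCX + 1)
)
print(possible_totals)        # [2, 4, 6, 8]
print(10 in possible_totals)  # False: 10 needs a wider CCX or a third one
```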


hat said:


> Were you aware of AM3 processors that supported both DDR2 and DDR3?


Not really a huge feat, let's be honest.


> However, probably not all boards will do this. Support varied from board to board; it was up to the board makers to provide BIOS updates for support. I've taken advantage of this feature myself a number of times in the past.


Probably. That makes the whole "upgrade path" argument make even less sense.


> It's the same reason we had motherboards like the Asrock 775Dual-VSTA. That was a weird (but useful) board for sure. It had DDR and DDR2 slots, and it had an AGP slot as well as a PCI-E slot.


We've seen motherboards with dual-DIMM support fairly recently (Skylake). BTW: anything similar in the AMD camp? 


> Some of us only want to (or can only afford to) upgrade one component at a time once in a while, so these things really come in handy in such situations.


But, as you said, future AM4 CPU support will depend on the mobo, and you don't know which ones will be updated. Doesn't this make the "upgrade path" argument a bit... poor?


> How nice would it be if I could put one of the upcoming Whiskey Lake chips in my old, but still functioning socket 1155 board? I wouldn't need to buy a new motherboard and RAM just to upgrade the CPU.


But it would be a worse CPU, so maybe you wouldn't be tempted to upgrade at all?
Intel's strategy is based around building very precise products. They make only what addresses current demand - hence, it sells well. They control waste, they minimize costs. That's how you make money in this business.



HimymCZe said:


> WTF? We already established that the 9900K is no more than *8%* stronger than the 2700X.


Weeks before the launch? Man, you should try the lottery.


> AND you don't even need a decent water cooler to OC the 2700X *way beyond* 9900K performance.
> (SOURCE).


Well... I don't care that much about OC in general, but something tells me you'll be able to OC that 9900K as well.


----------



## efikkan (Sep 18, 2018)

dj-electric said:


> There comes a point where adding more cores isn't the solution to the problem, because the problem wasn't "not enough cores".


Yes. We needed more cores in the mainstream, but now that we've got them, we need faster cores.
The i7-8700 already beats the 2700/X overall with two fewer cores, and where Ryzen doesn't scale as well as Intel, adding two more cores is no help.

But AMD knows very well that core count sells, and that many reviews have some sort of weighted score. Zen does very well in a few benchmarks but falls behind Intel in many others. AMD will probably continue to push core count, hoping the good scores in select benchmarks and the hype will keep sales up.



lynx29 said:


> Hmm I did not realize that Apple, nvidia, etc lots of big names are in line for 7nm production before AMD... that has to mean Nvidia will have 7nm gpu's next year, I am sure they are throwing their big wallet around the TSMC building to make sure AMD is behind them on the production schedule... this does not bode well for AMD, people are already sick of waiting so many years for competitive GPU's, heh.


Apple is using a different variant of the node. I don't know the share of wafers for each vendor on 7nm, but in the past Nvidia has greatly outnumbered AMD.

Nvidia will probably release their first 7nm for the professional market.


----------



## Hood (Sep 18, 2018)

dj-electric said:


> There comes a point where adding more cores isn't the solution to the problem, because the problem wasn't "not enough cores".


Too right, AMD fans have all the hollow "bragging rights" they're going to get in this pointless moar cores sales tactic.  They have their useless 24 core Threadrumper to brag on while checking their email, so now they're going to vomit cores all over the mainstream?  It's no secret that AMD sales are dropping along with all their prices (in a futile attempt to suck in more people who think with their dicks instead of their brains).


----------



## TheLaughingMan (Sep 18, 2018)

notb said:


> No, you don't. Have you ever read anything about how your Ryzen works and looks inside? Aside from benchmarks, obviously. ;-)
> 
> All currently available Ryzen and EPYC CPUs are made using the same 4-core CCX.



Yes, I have read pretty much everything. I have a Ryzen 1800X. So let me help you out, since you are confused.

A single CCX is indeed enough space to fit either 4 cores or a Vega GPU. Each die has 2 CCXs connected via Infinity Fabric, for various configurations that top out at either 8 cores/16 threads (Ryzen 7) or 4 cores/8 threads + Vega 11 GCN on the package. A Threadripper chip is built like EPYC, which has a total of 4 dies = 8 total CCXs. 8 x 4 = 32 cores total, as in the EPYC server chips and Threadripper 2990WX.

They have already created lower-tier Threadripper chips where 2 of the dies (4 possible CCXs, or 16 maximum cores) are dummies. That leaves 2 active dies, 4 CCXs, and a maximum of 16 cores.

So yes, they could create a chip at the Ryzen package size for AM4 with two dies, 4 CCXs, and a maximum of 16 cores by dropping the two dummy chips.

My point was that while it is possible, the tooling rework and possible massive heat increase are not really worth it to me.
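The die math above, in rough numbers (a toy model of the publicly described Zen hierarchy - 4 cores per CCX, 2 CCXs per die - not anything from AMD's own documentation):

```python
# Toy model of first/second-gen Zen package topology:
# each die ("Zeppelin") holds 2 CCXs of up to 4 cores each.
CORES_PER_CCX = 4
CCX_PER_DIE = 2

def max_cores(dies: int) -> int:
    """Maximum core count for a package with the given number of active dies."""
    return dies * CCX_PER_DIE * CORES_PER_CCX

print(max_cores(1))  # 8  - Ryzen 7 on AM4
print(max_cores(2))  # 16 - Threadripper 2950X class
print(max_cores(4))  # 32 - EPYC / Threadripper 2990WX
```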


----------



## dwade (Sep 18, 2018)

Hood said:


> Too right, AMD fans have all the hollow "bragging rights" they're going to get in this pointless moar cores sales tactic.  They have their useless 24 core Threadrumper to brag on while checking their email, so now they're going to vomit cores all over the mainstream?  It's no secret that AMD sales are dropping along with all their prices (in a futile attempt to suck in more people who think with their dicks instead of their brains).


AMD ended their own hypothetical "core wars" with the mess that is 2990WX.


----------



## notb (Sep 18, 2018)

TheLaughingMan said:


> A single CCX is indeed enough space to fit either 4 cores or a Vega GPU. Each die has 2 CCXs connected via Infinity Fabric, for various configurations that top out at either 8 cores/16 threads (Ryzen 7) or 4 cores/8 threads + Vega 11 GCN on the package. A Threadripper chip is built like EPYC, which has a total of 4 dies = 8 total CCXs. 8 x 4 = 32 cores total, as in the EPYC server chips and Threadripper 2990WX.


I don't see the relevance of everything above. Earlier you said AMD makes 8-core CCX for TR. Now you say it's 4. So?


> So yes, they could create a chip at the Ryzen package size for AM4 with two dies, 4 CCXs, and a maximum of 16 cores by dropping the two dummy chips.


Now this is a different theory. You would like to put 4 CCX into an AM4 package. This seems very unlikely, but I won't say "impossible". Look for a size comparison of AM4 and TR4 packages. AM4 is much shorter ("vertical" dimension).
https://adrenaline.uol.com.br/admin...Z0A2b8y59/amd_ryzen_threadripper_1950x_44.jpg
The dies would literally have to touch each other.


> My point was that while it is possible, the tooling rework and possible massive heat increase are not really worth it to me.


Yeah... 200W in a package smaller than Intel's LGA 2066. Unless they make it from the low-voltage "U" cores, but that would be a weird CPU...
I don't think the wiring is a big problem. They've already done that with the 2990WX (32 cores, but only half the dies with direct RAM wiring).


----------



## mohammed2006 (Sep 18, 2018)

dwade said:


> AMD ended their own hypothetical "core wars" with the mess that is 2990WX.
> View attachment 107066



I think it is an NVIDIA driver problem with the 32-core CPU, and they fixed it, but no one did another review with the new driver because the 2950X performs much better.

*Fixed Issues*


[3D games]: Game performance drops in half when moving from 16 core/32 thread CPU to 32 core/64 thread CPU. [2334312]


----------



## Valantar (Sep 18, 2018)

Hood said:


> Too right, AMD fans have all the hollow "bragging rights" they're going to get in this pointless moar cores sales tactic.  They have their useless 24 core Threadrumper to brag on while checking their email, so now they're going to vomit cores all over the mainstream?  It's no secret that AMD sales are dropping along with all their prices (in a futile attempt to suck in more people who think with their dicks instead of their brains).





dwade said:


> AMD ended their own hypothetical "core wars" with the mess that is 2990WX.
> View attachment 107066


Why are you talking about high-end workstation chips for gaming? Has anyone said the >16-core TRs are good for gaming? 'Cause they aren't. That doesn't mean they don't have a use (particularly anything involving virtualization, but also various forms of rendering, software compilation, and so on). TR is excellent for what it's made for, but gaming isn't among it. AMD likes to push the gaming+streaming angle, which is somewhat valid, but with that kind of budget you'd be better off getting a secondary streaming PC anyhow, so that's kind of moot. The 18-core X299 Intel chip doesn't exactly game well either...

For the foreseeable future, 8c16t will be plenty for gaming (even 8c8t will likely last for years and years), and we definitely need an increase in per-core performance. Nobody is saying >8c CPUs are the next big thing for gaming. But there's not really any reason _not_ to expect that from 7nm Zen2, is there? Given that AMD has a core count advantage, they know they need to work on IPC and clock speed, and they've said there's plenty of low-hanging fruit to improve the former, while the latter should improve with the new node.



notb said:


> I don't see the relevance of everything above. Earlier you said AMD makes 8-core CCX for TR. Now you say it's 4. So?
> 
> Now this is a different theory. You would like to put 4 CCX into an AM4 package. This seems very unlikely, but I won't say "impossible". Look for a size comparison of AM4 and TR4 packages. AM4 is much shorter ("vertical" dimension).
> https://adrenaline.uol.com.br/admin...Z0A2b8y59/amd_ryzen_threadripper_1950x_44.jpg
> ...


The packaging is the main issue here. As you say, the dice would have to touch, the power density would be ridiculous, and you'd end up with a Zeppelin with neither direct memory access nor PCIe connected to it (as both of those would need to be wired to a single die) with all the related latency issues and whatnot.

Then there's the issue of actually getting a second zeppelin to work in that small a package. So in addition to all the traces already connected to the primary die, you'd need an internal IF link embedded into the substrate. That would at the very least require additional layers, which would make the substrate too thick to fit the AM4 platform specs. Your coolers might not fit any more - fun! Anyone who's seen a Threadripper package knows how ridiculously thick the substrate is - and that's with 4x the area.

Then you'd need to update the AM4 platform to suddenly become NUMA aware and essentially transfer every function of Threadripper except the quad memory channels over.

Would this be possible? Sure, probably. Feasible? No. Smart? Don't make me laugh.


----------



## R0H1T (Sep 18, 2018)

notb said:


> I don't see the relevance of everything above. Earlier you said AMD makes *8-core CCX for TR*. Now you say it's 4. So?
> 
> Now this is a different theory. You would like to put 4 CCX into an AM4 package. This seems very unlikely, but I won't say "impossible". Look for a size comparison of AM4 and TR4 packages. AM4 is much shorter ("vertical" dimension).
> https://adrenaline.uol.com.br/admin...Z0A2b8y59/amd_ryzen_threadripper_1950x_44.jpg
> ...


It's 8 cores per die, 4 per CCX, in an MCM package.

Because who'd buy EPYC then?


----------



## hat (Sep 18, 2018)

notb said:


> Not really a huge feat, let's be honest.



Maybe not, but you seemed to be worried about it back on page 1 when you mentioned DDR5...



notb said:


> Probably. That makes the whole "upgrade path" argument making even less sense.



Because it was nonsensical before? Users don't like to upgrade a single part without "upgrading" other parts to support it? Anyways... if you are someone who is looking for this upgrade path, I would recommend doing your research before you buy as opposed to just buying a random product and hoping it might get support. I would expect the one who releases a product and promises future support would see increased sales from savvy buyers looking for just that.



notb said:


> We seen motherboards with dual-DIMM support fairly recently (Skylake). BTW: anything similar in the AMD camp?



Hmm... there was that whole AM2/AM3 thing...



notb said:


> But, as you said, future AM4 CPU support will depend on mobo. You don't know which one would be updated. Doesn't this make the "upgrade path" argument a bit... poor?



It's also a poor decision to haphazardly walk into Home Depot and buy a drill press when you need a circular saw. I agree it would be better if these things were supported across the board, but they're not, so the one looking to buy would benefit from some research.



notb said:


> But it would be a worse CPU, so maybe you wouldn't be tempted to upgrade at all?
> Intel's strategy is based around building very precise products. They make only what addresses current demand - hence, it sells well. They control waste, they minimize costs. That's how you make money in this business.



Whiskey Lake is a worse CPU? Than what? Certainly not my aging, locked Sandy chip. I'm not saying AMD is better than Intel here (performance wise) either. The 9600k would likely be a very good chip for me with very high single thread performance and more cores than I need right now, except in the odd game that benefits from >4.

As far as very precise products... yeah, if you can call it that. Everyone makes stuff that addresses demand; that's why 775 had a shitload of different chipsets. You could get a low-end crap board with an awful chipset that didn't offer much, just as well as you could get a high-end X38 board that had more to offer. Lately Intel likes to force motherboard upgrades for no reason. I was surprised to see "counterfeit" socket 2011 boards... with an H61 chipset. Didn't think it was possible for that chipset to work with those CPUs, but lo and behold, they do (even if the board itself is kinda crap).


----------



## TheLaughingMan (Sep 18, 2018)

Hood said:


> Too right, AMD fans have all the hollow "bragging rights" they're going to get in this pointless moar cores sales tactic.  They have their useless 24 core Threadrumper to brag on while checking their email, so now they're going to vomit cores all over the mainstream?  It's no secret that AMD sales are dropping along with all their prices (in a futile attempt to suck in more people who think with their dicks instead of their brains).



AMD stock value has gone up from $10/share to $32/share as of writing this. Sales of their CPUs have been on a steady rise since Ryzen's release to market. They are continuing to push the server side because that is where the real money is, which is why EPYC will get Zen 2 first. I am not sure what you are confused about, as competition is what drives market price. AMD doing well keeps Intel from charging a premium for being the only one in a market segment, and vice versa.

So once again, this is a bad move and AMD should not pursue a 10-core 2800X.



dwade said:


> AMD ended their own hypothetical "core wars" with the mess that is 2990WX.
> View attachment 107066



I could state the driver issue was fixed or that it runs better in Linux, but the simple truth is this is not what you buy a $1600 chip for. AMD did not release any Threadripper to try and "brute force" gaming performance. I feel sorry for anyone who bought a HEDT chip and spent all that extra money if all they are going to do is game.


----------



## Captain_Tom (Sep 18, 2018)

The Quim Reaper said:


> Not  going  to  happen.
> 
> The best we can expect are some highly binned 2700x CPU's which can reach, and sustain, a 4.5Ghz boost.
> 
> Personally, I think AMD should just ignore the 9900K, it will be too expensive for most and won't take away (many) sales from the 2700X, and just carry on getting the mainstream Zen 2 ready for launch in Q1 2019.



If AMD could _highly_ bin to release a 125w chip at say... 4.6GHz+ Boost, 4.2GHz+ base - they could sell that sucker for $350-$400 and completely fend off the i7-9700K (and likely dissuade most from caring about the i9).  Anything less than that though and I agree it is a waste of time.


----------



## Valantar (Sep 18, 2018)

Captain_Tom said:


> If AMD could _highly_ bin to release a 125w chip at say... 4.6GHz+ Boost, 4.2GHz+ base - they could sell that sucker for $350-$400 and completely fend off the i7-9700K (and likely dissuade most from caring about the i9).  Anything less than that though and I agree it is a waste of time.


They probably could, but it would be quite limited in terms of availability, not to mention a PR disaster ("MAD returning to their space-heater roots" and so on). They're better off holding off until the next generation. It's not like they're struggling to sell their current chips, after all.


----------



## Captain_Tom (Sep 18, 2018)

nemesis.ie said:


> I don't put much stock in the validity of this "leak".
> 
> However, just for fun, imagine for a moment:
> 
> ...



Hmmm, interesting idea haha.  This actually is a more reasonable hypothesis than highly binned 8-cores imo, and that is because _it makes logical sense_.  Releasing junk yields early, but still better due to being on 7nm, would allow AMD to 100% convince investors that their current stock valuation is justified.  They could simply "glue" (lol) two defective 7nm CCX's together and release a 5+5 4.5GHz Zen 2 2800X.  7nm in 2018 while Intel won't have 10nm till 2020... That would look _really_ bad.

However, 10 cores only makes sense imo if Zen 2 has 6-core CCXs (so the 2800X would be 2 partially disabled Zen 2 dies).



Valantar said:


> They probably could, but it would be quite limited in terms of availability, not to mention a PR disaster ("MAD returning to their space-heater roots" and so on). They're better off holding off until the next generation. It's not like they're struggling to sell their current chips, after all.



I am not so sure about that.  It would likely meet the gaming performance of the i7-9700K (go look at IPC tests; in some games Ryzen is ahead of Intel at the same core count and clocks), and frankly anything to steal Intel's thunder at this point could (literally!) pay dividends.  Oh, and it would not use more energy than Intel's newest gen of space heaters.  I can assure you those 8-cores from Intel will use 200W+ if you clock all cores to 5GHz, and 5GHz _will be required_ to beat a 4.5GHz+ Ryzen in gaming.

AMD is no longer a $15 stock; they are worth $32 - and that is because the perception is that AMD is in full control of the market.  Heck, on Newegg the 2700X is advertised as "Tom's Hardware's 2018 Best Overall Gaming Chip of the Year."  I wouldn't want to lose that if I could avoid it.  Even if it needed to be priced at $400, it would crush the 9900K in value and availability, and sandwich the 9700K between two better choices.  But remember that this might be a pipe dream, and it might not be worth it if Ryzen 3 is ahead of schedule.


----------



## notb (Sep 19, 2018)

R0H1T said:


> It's 8 cores per die, 4 per CCX, in an MCM package


Yeah. Keep defending a Ryzen fanboy who doesn't know what a CCX is. ;-)
"The highest tier Threadripper CCX have a max of 8 cores"


> Because who'd buy EPYC then?


I could say: no one - just like currently. ;-)

But here's the gentle variant: the people who these chips are designed for.
You think way too much about core count and performance. 4-core Xeon CPUs sell beautifully despite consumer CPUs going past 6 cores and the pointless HEDT closing in on 20.


hat said:


> Maybe not, but you seemed to be worried about it back on page 1 when you mentioned DDR5...


Because IF is very RAM-dependent. Based on how many RAM compatibility issues we've seen, I bet it will have to be reprogrammed for DDR5.
So yeah... unless AMD manages to do some microcode magic, I highly doubt any of currently available Ryzen CPUs will work with a future DDR5 motherboard.


> Users don't like to upgrade a single part without "upgrading" other parts to support it?


Yup. Most don't care. I don't know if you've heard, but there's a phenomenon called "laptops" and it squashed desktops so much that most people only see desktops at work. And these office desktops are not AMD-powered, because, frankly, AMD seems not to care much about their PRO lineup.

AMD is making a big compromise here to get a "like" from PC tinkerers. IMO it's a waste of time and opportunity.


----------



## hat (Sep 19, 2018)

Dude, your whole post is wat

Again with the "AMD has no sales" comment, and now upgradeable platforms are pointless because nobody even sees desktops outside of work environments...

Yeah, Zen had a lot of unfortunate compatibility issues when it first released. How quick we are to forget all those AGESA updates, BIOS updates, etc that helped make the platform more compatible, and of course Zen+ which doesn't have so many issues. Zen was a radical new architecture, and early adopters paid the price they're always at risk of paying. This is one positive thing about Intel's old architecture. While we haven't had any radical changes since Sandy (with the possible exception of higher core count CPUs with Coffee and Whiskey Lake, if you call it that), what we got instead were minor performance uplifts, tweaks and refinements. We would hear of the odd issue with some Intel platform every now and again, but it wasn't as big of a deal as the way Zen was when it first released... though that's history now, and again, nothing a little research wouldn't guard against.

AMD isn't really catering to PC tinkerers. They do give us things that we want, but most of it doesn't require much effort. I don't think there were any business meetings at AMD where some guy stood up, slammed his fist on the table and said "but what about the ENTHUSIAST market?!"... We got unlocked processors because that's likely a simple change in microcode that allows us to change the multiplier. We likely got solder because it was important for their product in some other way than just to appease the enthusiasts, because they keep slamming Intel for using mayonnaise paste (they too used paste, see the Athlon II series). Having a more compatible product (like AM3 chips with compatibility for the older DDR2) means a better product, and is better for business.


----------



## FlanK3r (Sep 19, 2018)

It's fake news.
You will see in a few weeks; a new Pinnacle Ridge is not necessary. And if AMD already has 7nm samples in the testing phase... they're ready to launch in H1 2019.


----------



## ratirt (Sep 19, 2018)

My comment will be as simple as it is.
I'd rather have one more core every year than +100-200MHz per year.
I'd grab that 2800X with 10 cores. I'm currently thinking about buying new stuff and have been considering the 2700 or the "X" version. Wonder if I should wait longer and get the 2800X. 10 cores are sexy.


----------



## Valantar (Sep 19, 2018)

Captain_Tom said:


> Hmmm, interesting idea haha.  This actually is a more reasonable hypothesis than highly binned 8-cores imo, and that is because _it makes logical sense_.  Releasing junk yields early, but still better due to being on 7nm, would allow AMD to 100% convince investors that their current stock valuation is justified.  They could simply "glue" (lol) two defective 7nm CCX's together and release a 5+5 4.5GHz Zen 2 2800X.  7nm in 2018 while Intel won't have 10nm till 2020... That would look _really_ bad.
> 
> However, 10 cores only makes sense imo if Zen 2 has 6-core CCXs (so the 2800X would be 2 partially disabled Zen 2 dies).


Getting 10 cores isn't actually possible with AMD's current architecture - the CCXes in each Zeppelin need to be balanced, i.e. you can disable either 2, 4 or 6 cores per die, but not 1, 3, 5 or 7. AFAIK, connected Zeppelins also need to be balanced, though I'm not 100% sure about that. At the very least, AMD hasn't yet released anything with a lopsided MCM setup. With an MCM 2-die solution, you could then have 4 (3 disabled per CCX), 8 (2 disabled per CCX), 12 (1 disabled per CCX) or 16 cores, but nothing else. From what I've read on the topic, this seems like a fundamental trait of the design, and not something easily overcome, at least on the Zeppelin level.
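The balanced-disable rule can be enumerated in a couple of lines (a sketch assuming the symmetry constraint holds exactly as described above):

```python
# Core counts reachable on a 2-die MCM if every 4-core CCX keeps the
# same number of active cores (disable 0-3 per CCX, mirrored across
# both CCXs of a die and across both dies).
achievable = sorted({2 * 2 * active for active in range(1, 5)})
print(achievable)  # [4, 8, 12, 16] -> 10 cores isn't reachable
```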

Regardless of that, there's then the issue of having enough "junk" that still manages to clock high enough to be useful to sell as a new high-end SKU. Given that maximum clock speeds and power draw are the most common points of failure in a new node, this seems unlikely.




Captain_Tom said:


> I am not so sure about that.  It would likely meet the gaming performance of the i7-9700K (go look at IPC tests; in some games Ryzen is ahead of Intel at the same core count and clocks), and frankly anything to steal Intel's thunder at this point could (literally!) pay dividends.  Oh, and it would not use more energy than Intel's newest gen of space heaters.  I can assure you those 8-cores from Intel will use 200W+ if you clock all cores to 5GHz, and 5GHz _will be required_ to beat a 4.5GHz+ Ryzen in gaming.


Oh, you're absolutely right about the power draw of Coffee (and in all likelihood Whiskey) Lake; even die-hard Intel fans admit that they need ~1.35-1.45V to reach 5GHz on average, which means anything from ~170W to ~220W. Space heaters indeed. Heck, Intel doesn't even guarantee their own turbo speeds within TDP. My issue, though, is with what's marketed - which has an image-building effect on the public. Intel still sells their space heater-grade hardware at 95W (although it pulls far more on any motherboard that has MCE/MCT enabled by default). For uninformed users, comparing this to a 125W Ryzen would mean that AMD is "less efficient", even if that's entirely BS. It's likely that a highly binned 125W Ryzen would be very competitive against an intel 8c8t (and possibly 8c16t) CPU, but the marketing effect could just as well end up being negative for AMD. AMD has a history of selling clocked-to-the-maximum SKUs that disregard power draw for performance (both in the CPU and GPU space) and releasing a product like this would hew too close to this history.

This, of course, presupposes that it'd be possible to fit two dice in an AM4 package at all, which is ... unlikely, even with the area savings of 7nm. See my previous posts for clarification. The package, substrate and platform isn't designed to accommodate the traces required for this.



notb said:


> I could say: no one - just like currently. ;-)
> 
> But here's the gentle variant: the people who these chips are designed for.
> You think way too much about core count and performance. 4-core Xeon CPUs sell beautifully despite consumer CPUs going past 6 and the pointless HEDT closing 20.


EPYC is gaining market share rapidly. Also, even though low core count server hardware still sells a lot, high core count hardware is gaining momentum rapidly. Virtualization and the increasing parallelization of software makes this a given. Then again, it's natural that they sell less, given that you'd need 2-3 4-core chips to do the job of one 12-core, and so on.



notb said:


> Because IF is very RAM-dependent. Based on how many RAM compatibility issues we've seen, I bet it will have to be reprogrammed for DDR5.
> So yeah... unless AMD manages to do some microcode magic, I highly doubt any of currently available Ryzen CPUs will work with a future DDR5 motherboard.


Considering that there isn't a single finished integrated DDR5 controller in existence, it's obvious that DDR5 and current-gen Ryzen won't be compatible. I seriously doubt they'd be able to hack DDR5 support into their DDR4 controller - RAM isn't that simple. While IF (and particularly APUs) stands to gain a lot from faster RAM, we're still a few years out from consumer adoption of DDR5. 2020 seems like a well thought-out timeline.



notb said:


> Yup. Most don't care. I don't know if you've heard, but there's a phenomenon called "laptops" and it squashed desktops so much that most people only see desktops at work. And these office desktops are not AMD-powered, because, frankly, AMD seems not to care much about their PRO lineup.


The PRO lineup is gaining momentum, but for markets like this, turnaround is slow. There's _a lot_ of validation and testing required, not to mention far more competitive volume licensing prices in the business desktop market. Still, they're arriving, slowly but steadily. Same goes for laptops. Raven Ridge is showing up in ever more designs, including premium ones. Convincing OEMs to switch takes time, but AMD is gaining. And that's good for everyone.


----------



## hat (Sep 19, 2018)

Valantar said:


> Oh, you're absolutely right about the power draw of Coffee (and in all likelihood Whiskey) Lake; even die-hard Intel fans admit that they need ~1.35-1.45V to reach 5GHz on average, which means anything from ~170W to ~220W. Space heaters indeed. Heck, Intel doesn't even guarantee their own turbo speeds within TDP. My issue, though, is with what's marketed - which has an image-building effect on the public. Intel still sells their space heater-grade hardware at 95W (although it pulls far more on any motherboard that has MCE/MCT enabled by default). For uninformed users, comparing this to a 125W Ryzen would mean that AMD is "less efficient", even if that's entirely BS. It's likely that a highly binned 125W Ryzen would be very competitive against an intel 8c8t (and possibly 8c16t) CPU, but the marketing effect could just as well end up being negative for AMD. AMD has a history of selling clocked-to-the-maximum SKUs that disregard power draw for performance (both in the CPU and GPU space) and releasing a product like this would hew too close to this history.



That bothers me a little. My i5 2400, while being a 95W chip, still only pulls <75W even with OCCT's AVX Linpack test (according to Core Temp, anyway). Why should even the 9900K pull greater than TDP at stock settings? Though MCE isn't really "stock"... all that does is force the highest turbo multiplier across all cores, which should be a negligible few hundred MHz...


----------



## Valantar (Sep 19, 2018)

hat said:


> That bothers me a little. My i5 2400, while being a 95w chip, still only pulls <75w even with OCCT's AVX Linpack test (according to Coretemp, anyway). Why should even the 9900k pull greater than TDP at stock settings? Though MCE isn't really "stock"... all that does is force the highest turbo multiplier across all cores, which should be a few hundred negligible MHz...


That's exactly it - MCE isn't stock. Intel rates TDP for base clocks only (though there's often room for some turbo above this even within TDP, there's still zero guarantee that your CPU will turbo under sustained loads). MCE removes any power limits imposed by the CPU or BIOS, letting the CPU "run free". This just means that the upcoming 8-cores from Intel will have lower base clocks than their 6-core 8700K (which already has a .5GHz deficiency in base clock vs. the 7700K). Rumors say the reduction is minimal - from 3.7GHz in the 8700K to 3.6 in the 9700K and 9900K. Still, the 8700K actually undercuts its TDP at stock by a bit, and Intel should have been able to eke out some more efficiency in that frequency band with their fourth iteration of the 14nm process. So there should be some wriggle room there for fitting two more cores within TDP at base clocks - but the chance of them turboing at all over time is ever smaller.

The point here is: Intel's PL2 limit allows above-TDP power draw for short durations even on bone-stock setups, but PL2 has a time limit, and PL1 kicks in after a short period of time. This is a hard power limit that the system can't exceed unless de-limited in BIOS, but Turbo Boost will try to run as close to this as possible while maintaining temperatures regardless of base clocks.
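That two-tier behaviour can be sketched roughly like this (the numbers are purely illustrative - Intel's actual PL2 value and time window vary by SKU and motherboard):

```python
def allowed_package_power(t_seconds, pl1=95.0, pl2=118.0, tau=28.0):
    """Package power limit (W) in effect t seconds after a sustained
    load begins: PL2 applies for the first tau seconds, then PL1.
    All values here are illustrative, not Intel's spec for any SKU."""
    return pl2 if t_seconds < tau else pl1

print(allowed_package_power(5))   # 118.0 - boosting above TDP
print(allowed_package_power(60))  # 95.0  - clamped back to PL1
```

Real firmware actually tracks an exponentially weighted moving average of power rather than a hard cutoff, but the two-tier limit is the gist.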

Your i5 2400 pulling less than 95W is due to nothing more complex than it being a lower-tier CPU that happens to pull more than the 2nd-tier TDP available. Intel (and their platform partners) doesn't want to deal with infinite TDP levels, so if it exceeds 65W (or whatever the 2nd tier was in the Sandy Bridge era), it's "95W", regardless of actual power draw at base clocks. With modern, smart boost algorithms, this would allow the chip to boost higher at stock, though I don't know if this was the case back then. The i5 8400 only draws ~50W under load according to AnandTech.

I never said a 9900K would draw more than 95W at stock (though this isn't unheard-of; we've seen ~95W Intel chips pulling 100-110W at stock), I just said that it won't be running at its 4.6GHz all-core turbo speed in a 95W power envelope. Which it won't. And MCE isn't stock, as it disables any and all power limits, so it clearly doesn't count. The thing muddying the waters here are motherboards that have MCE enabled by default - which isn't Intel's fault, but which makes the 95W rating misleading at best (particularly when some publications review chips "at stock" with MCE enabled).

Edit: forgot to say, those "few hundred ... MHz" are definitely not negligible in terms of power draw. Power does not scale linearly with clock speed, so there is likely to be a dramatic (or at least very noticeable) difference between a core running at 3.6GHz and one running at 4.6GHz - I wouldn't be surprised if power draw increased by more than 1.5x for that 1.27x increase in clock speed. And the higher the clocks (or more correctly: the further past the design's efficiency sweet spot), the more dramatic the increase in power draw per clock.
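
A back-of-the-envelope check using the standard dynamic-power relation P ≈ C·V²·f bears this out. The voltage figures below are hypothetical round numbers chosen for illustration, not measurements from any real chip:

```python
# Dynamic power scales roughly with V^2 * f; higher clocks also need
# higher voltage, which is what makes the relationship super-linear.

def rel_power(f_ghz, v_core):
    """Power in arbitrary units (capacitance factored out)."""
    return v_core ** 2 * f_ghz

base = rel_power(3.6, 1.00)   # base clock at a modest voltage
turbo = rel_power(4.6, 1.25)  # ~1.27x the clock, assumed +25% voltage

print(round(turbo / base, 2))  # roughly 2x the power for 1.27x the clock
```

Even with these made-up voltages, a 27% clock bump costs roughly double the power, which is why the gap between base and all-core turbo matters so much for TDP.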


----------



## mastrdrver (Sep 19, 2018)

This is a fake because in Cinebench Ryzen is never listed as "R7".

This was known as a fake when it showed up about 2 weeks ago: 




		https://www.reddit.com/r/Amd/comments/9e4jm2/_/e5m40k3


----------



## ratirt (Sep 19, 2018)

mastrdrver said:


> This is a fake because in Cinebench Ryzen is never listed as "R7".
> 
> This was known as a fake when it showed up about 2 weeks ago:
> 
> ...


Are you saying that there's no 2800X planned? I heard rumors that there will be a 2800X; it's just that nobody knows when, or what the CPU will actually have.


----------



## Valantar (Sep 19, 2018)

ratirt said:


> Are you saying that there's no 2800X planned? I heard rumors that there will be a 2800X; it's just that nobody knows when, or what the CPU will actually have.


It seems reasonable that AMD would skip this, yes. I have no sources for this, but neither do the rumors of a 2800X coming. Given that they had to push the TDP even for the 2700X, there's little reason to suspect they'd be able to release something noticeably faster in decent quantities. It'd need a _minimum_ of a 200MHz clock speed bump (and even that would be rather sad), and the 2700X is already pushing the limits of the 12nm node. And given how close 7nm Zen2 seems to be, I'm betting on AMD focusing on high-margin enterprise segments until that arrives. It doesn't matter to them whether the Intel competition has 6 or 8 cores - even the 8700K is slightly better in most games than the 2700X, so why play catch-up when you can wait a bit and deliver a far heavier blow? Ryzen 2000 will keep selling - and selling well - even if Intel matches their core count. There's no reason for AMD to act desperate and push out a hot-running, clocked-to-the-max SKU just to say "hey, we're here too!". They already are.


----------



## ratirt (Sep 19, 2018)

Valantar said:


> It seems reasonable that AMD would skip this, yes. I have no sources for this, but neither do the rumors of a 2800X coming. Given that they had to push the TDP even for the 2700X, there's little reason to suspect they'd be able to release something noticeably faster in decent quantities. It'd need a _minimum_ of a 200MHz clock speed bump (and even that would be rather sad), and the 2700X is already pushing the limits of the 12nm node. And given how close 7nm Zen2 seems to be, I'm betting on AMD focusing on high-margin enterprise segments until that arrives. It doesn't matter to them whether the Intel competition has 6 or 8 cores - even the 8700K is slightly better in most games than the 2700X, so why play catch-up when you can wait a bit and deliver a far heavier blow? Ryzen 2000 will keep selling - and selling well - even if Intel matches their core count. There's no reason for AMD to act desperate and push out a hot-running, clocked-to-the-max SKU just to say "hey, we're here too!". They already are.


I just really wanna try this 2700 or 2800 (the latter if it gets released, though you're saying it won't). I could wait for Zen2, but I'm tempted to go with the 2700 or the "X" version. Zen2 will probably be released in Q1 2019, if I remember correctly.


----------



## Valantar (Sep 19, 2018)

ratirt said:


> I just really wanna try this 2700 or 2800 (the latter if it gets released, though you're saying it won't). I could wait for Zen2, but I'm tempted to go with the 2700 or the "X" version. Zen2 will probably be released in Q1 2019, if I remember correctly.


Yeah, at the earliest. I wouldn't be surprised if they tried to uphold a 1-year release cadence, which would mean a launch around March/April - but I'd love for it to arrive earlier. I understand the desire to get the fastest possible thing as soon as possible, though. Isn't that always the chief frustration for PC enthusiasts?  On the other hand, I bought a 1600X when it launched, and even if the 2600X is measurably faster across the board, I'm still very happy with mine. The 2700X is a crazy powerful processor, and even if it's bound to be eclipsed by 3000-series hardware in 6-8 months, that doesn't mean it will stop being good at that point. It's still going to be great.


----------



## lexluthermiester (Sep 19, 2018)

hat said:


> Begun, the core war has.


That started about 13 years ago with the Athlon 64 X2 and Pentium D.
(Props on channeling Master Yoda..)



Valantar said:


> Don't remember where I saw it, but this has been debunked. It's fake. Why?


Please, do tell us..


Valantar said:


> CineBench scores can be manipulated just by editing a text file


Yawn..


Valantar said:


> The description of the CPU doesn't match AMD naming conventions in Cinebench (among other things, it says "10-core", while AMD CPUs are described with words, not numbers ("six-core", "eight-core")


Can you prove this with screen-shots of other high core-count CPU's from either Intel or AMD? (Hint, go looking..)


Valantar said:


> And last but certainly not least: *There's no way to get a 10-core AM4 chip without new silicon, which would mean this is 7nm. There's no way they'd launch their first 7nm CPUs as an afterthought like this.*


Let's leave the engineering to the experts, ok? Alrighty then..


Valantar said:


> Please stop treating this like it's real.


That is an opinion, and not a very good one.


----------



## Valantar (Sep 19, 2018)

lexluthermiester said:


> please, do tell us..
> 
> Yawn..


You know, lex, I've come to appreciate your style of argument. Sometimes there's just too much on-topic discussion and too many actual arguments being presented, and your derogatory tone and dismissiveness without any backing or substance can be quite refreshing.



lexluthermiester said:


> Can you prove this with screen-shots of other high core-count CPU's from either Intel or AMD? (Hint, go looking..)


Why high core count? This is supposed to be a Ryzen, not a Threadripper. My 1600X is listed as "AMD Ryzen 5 1600X Six-Core Processor". 1800Xes are "... Eight-Core". One would expect naming to be consistent within the same product family, no? This, of course, is in addition to the other naming error: the use of "Ryzen R7" in the "leak", which isn't a naming style AMD has ever used (which makes sense, as "R7" is supposed to be short for "Ryzen 7"). While the use of numbers or words for the core count might vary, it makes no sense for "R7" to appear in any context.


lexluthermiester said:


> Let's leave the engineering to the experts, ok? Alrighty then..


Is this an engineering question? No, it's an engineering and sales/marketing question. Engineering doesn't happen in a vacuum, and the engineers aren't going to make anything that they can't sell enough of to recoup the development and production costs - they wouldn't be given the resources to do so. Besides, it's not like you've shown anything engineering-related that might counter my arguments.


lexluthermiester said:


> That is an opinion and not very good one.


Welcome to the opinion club, I guess? This is a forum where, among other things, we discuss opinions - particularly when we don't have facts to base anything on. You're very welcome to present a counterargument if you like, and perhaps clarify why you think this is "not a very good" opinion? 'Cause so far, your opinion seems limited to mine being wrong. IMO, that doesn't bring much value or substance to the discussion.


----------



## lexluthermiester (Sep 19, 2018)

Valantar said:


> You know, lex, I've come to appreciate your style of argument. Some times, there's just too much on-topic discussion and actual arguments being presented, and your derogatory tone and dismissiveness without any backing or substance is quite refreshing at times.


You know Val, it's nice to be appreciated once in a while. For example you appreciating my "derogatory" "dismissiveness" to the misguided information you tried, and failed, to offer as convincing.


Valantar said:


> Why high core count? This is supposed to be a Ryzen, not a Threadripper.


If you had actually looked it up, you would know why. Clearly you did not, thus you're missing something that made your argument invalid.


Valantar said:


> IMO, that doesn't bring much of value or substance to the discussion.


Ah, but the contribution being made is to discredit your opinions and notions as invalid on merit. For example, to say that AMD lacks the resources to render a new product line that is effectively a mere extension of an existing, proven technology already in mass production is *completely without merit and logic*. Furthermore, this new CPU line might be a rebadge and retooling of Threadripper dies that have a mix of faulty and good cores, with the faulty cores disabled - something that has been done many times in the past by many companies.

Years ago, when AMD released 3-core CPUs, people said it wasn't possible, or that AMD couldn't do it because of this, that or the other thing. They did it anyway. When Intel released the Core series of CPUs and claimed it was a massive leap forward, and that the bottom-tier Core CPUs could stand their ground against the top-tier Pentium series, people said it wasn't possible, and yet it happened.

@Valantar, for someone like you to say a company like AMD can't do something they have literally already done is so absurd that it is almost beyond belief. It is preposterous for you, or anyone else, to claim with any credibility that a 10-core, or even an 11-core, CPU cannot be produced by a company that already mass-produces CPUs with over triple that core count. So when I said that your opinions are not very good, it is because they lack logic, reason and merit, and stand in the face of known technological, scientific and historical facts. Those arguments are as implausible and vapid as they are illogical.


----------



## Valantar (Sep 19, 2018)

lexluthermiester said:


> You know Val, it's nice to be appreciated once in a while. For example you appreciating my "derogatory" "dismissiveness" to the misguided information you tried, and failed, to offer as convincing.
> 
> If you had actually looked it up, you would know why. Clearly you did not, thus you're missing something that made your argument invalid.
> 
> ...


I have no idea where you're getting these claims from, but they're definitely not from my post. Let's see:

- I didn't say AMD didn't have the resources to do this; I said it wouldn't make sense to waste resources on a one-off product added to an existing product line that wouldn't stand a chance of recouping its R&D costs.
- "A rebadge of Threadripper dies" makes no sense, given that all Ryzen dies on the same process are the same die, just different bins. Ryzen 1000, TR 1000 and 1st-gen Epyc are all the same die. Ryzen 2000 and TR 2000 are also the same die. Ryzen 3000 (and likely TR 3000) will in all likelihood be the same (Zen2) die as 7nm Epyc.
- Whether these were "Threadripper dies" or not, it doesn't take away from the inherent issues with cramming two dice within the confines of an AM4 package. This _would_ require a thicker/more complex substrate, which would mess with cooler compatibility at the very least. Nor would getting the IF traces implemented be an easy task in a package that small.
- The need for symmetric CCXes within a die is a documented feature of the Ryzen architecture. Of course AMD isn't flaunting this, but I've seen it reported from reputable sources that asymmetrical disabling of cores is impossible with the current design. This might change with Zen2, or it might not - we don't know.
- Previous 3-core AMD chips were all based on a fundamentally different design from Zen, so I don't see how they're applicable. Sure, the concept has been realized before, but on an entirely different basis.
- Beyond this, nothing you've said really applies. TR and Epyc are MCM products, while AM4 has no such thing. Making one wouldn't be _impossible_, but it would be complicated and expensive, and would introduce the same issues we see with TR in consumer-facing applications (NUMA awareness, latency, etc.). It doesn't seem reasonable whatsoever that AMD would spend a significant amount of money on this when it would be obsolete in 6-8 months when 7nm Zen2 Ryzen CPUs arrive.

Of course, a 10-core could be a 7nm chip based on a 2x8-core Zeppelin (with three cores disabled per CCX) - but that would mean launching their first 7nm Ryzen as an afterthought product on an existing product line. Sure, the standalone launch would garner attention, but wouldn't it then make more sense for this to kickstart the 3000-series, rather than call it "2800X" and make it seem like a minor upgrade from the 2700X when it's in fact based on a significantly updated architecture?

Again: neither of these theories makes sense in terms of business, economics, or engineering once you factor in running the business. Can it be done? Sure, probably. Would it be smart? No. And if there's one thing we can say about AMD's strategies over the last few years, it's that they're damn smart.


----------



## mastrdrver (Sep 20, 2018)

lexluthermiester said:


> That started about 13 years ago with the Athlon 64 X2 and Pentium D.
> (Props on channeling Master Yoda..)....Let's leave the engineering to the experts, ok? Alrighty then........



You don't have to leave it to engineering. If you want a 10-core where you have a mask for 8 cores, you have to make a new mask. Just because you have an 8-core mask does not mean that adding 2 more cores to it will work. You have to validate the chips to make sure the mask is working. And that's not even mentioning the metal layers, and the vias connecting them, that have to be done too. With around 10 metal layers alone, there's a lot that can go wrong.

Even if they went this route, I'd expect it for Zen2. Between the time needed to get silicon back from the fab, making fixes, and re-spinning, you're already looking at close to a year out - and that's hoping you only need one change in those 10+ metal layers.


----------



## lexluthermiester (Sep 20, 2018)

Valantar said:


> I didn't say AMD didn't have the resources to do this


You directly implied it by saying..


Valantar said:


> And last but certainly not least: *There's no way to get a 10-core AM4 chip without new silicon, which would mean this is 7nm. There's no way they'd launch their first 7nm CPUs as an afterthought like this.*


.. which is patently incorrect.


Valantar said:


> Of course, a 10-core could be a 7nm chip based on a 2x8-core Zeppelin (with three cores disabled per CCX) - but that would mean launching their first 7nm Ryzen as an afterthought product on an existing product line. Sure, the standalone launch would garner attention, but wouldn't it then make more sense for this to kickstart the 3000-series, rather than call it "2800X" and make it seem like a minor upgrade from the 2700X when it's in fact based on a significantly updated architecture?


You're thinking too small. They could take two 6-core dies, each with one of its cores disabled. What makes sense is that AMD likely has a ton of dies with minor defects that they want to put to use, as they have done in the past.


Valantar said:


> Again: neither of these theories make sense in terms of business, economics, or engineering when factoring in the running of the business. Can it be done? Sure, probably. Would it be smart? No.


Maybe it doesn't make sense to you. But neither you nor anyone else outside AMD can say definitively what actually makes sense for AMD.


Valantar said:


> And if there's one thing we can say about AMDs strategies over the last few years, they're damn smart.


At least we can agree on *something*.


mastrdrver said:


> You don't have to leave it to engineering. If you want a 10-core where you have a mask for 8 cores, you have to make a new mask. Just because you have an 8-core mask does not mean that adding 2 more cores to it will work. You have to validate the chips to make sure the mask is working. And that's not even mentioning the metal layers, and the vias connecting them, that have to be done too. With around 10 metal layers alone, there's a lot that can go wrong.
> 
> Even if they went this route, I'd expect it for Zen2. Between the time needed to get silicon back from the fab, making fixes, and re-spinning, you're already looking at close to a year out - and that's hoping you only need one change in those 10+ metal layers.


You really need to read the rest of the conversation.


----------



## Valantar (Sep 20, 2018)

lexluthermiester said:


> You directly implied it by saying..
> 
> .. which it patently incorrect.


You're misreading me there: there is no implication that AMD doesn't have the resources to do this. Saying it's too expensive doesn't mean they couldn't afford it if they wanted to - strategically smart use of money and the capability to spend money are not the same thing. What I've said all along is that two of the three possible ways of achieving this (new 12nm silicon or an MCM package for AM4) don't make sense for economic reasons. No implication that AMD can't do it, just that doing so - particularly for a single SKU that will be eclipsed by a new product series in 6-8 months - would essentially be throwing R&D money out the window with no chance whatsoever of recouping the costs (high-end, >$300 SKUs represent a quite small portion of sales). _Strategically_, this would be really dumb.

The third option, using defective 7nm dies, doesn't make sense for strategic and marketing reasons - while launching a retail 7nm CPU in 2018 would be a triumph, it'd be a short-lived, low-volume, very expensive product that would essentially undermine the entire 3000-series when it launches next spring. And the converse option, kick-starting the 3000-series with a high-end 7nm part in late 2018, would be downright dumb in terms of marketing - the vast majority of customers would want to wait for more reasonably priced 6-8-core parts, which would then arrive _more than half a year_ later. That's more than long enough for people to lose interest and buy whatever's available. Of course, this also presupposes that they can't use defective 7nm dice (once these enter volume production) for Epyc CPUs, which have far higher margins and would make far more sense for a low-volume part (especially as there'd be no extra R&D cost).



lexluthermiester said:


> You're thinking too small. They could take two 6-core dies, each with one of its cores disabled. What makes sense is that AMD likely has a ton of dies with minor defects that they want to put to use, as they have done in the past.


... but they already harvest defective dice for <8-core SKUs (and 12c/24t Threadripper). There's no need to put them to use in any other way unless the defect rate on 12nm is _far_ higher than what's reasonable at this stage of production - in all likelihood they're already disabling fully functional silicon to fill demand for low-end parts. And why would they design a 6-core die? Unless you're expecting next-gen APUs to have 6 cores in a single CCX, this doesn't align with AMD's stated strategies either. And given that Epyc is confirmed to launch with 64 cores, the next-gen larger CCX is 8 cores, not 6. While it does make sense to separate consumer and enterprise silicon as sales volumes grow, it makes more sense for them to launch single-CCX 8-core-and-below CPUs (essentially the entire consumer lineup in a single-CCX design) than having up to 6 cores in one CCX and 8-12 in two.
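
The harvesting economics here can be sketched with a first-order Poisson yield model, a common industry approximation. The die area and defect density below are illustrative guesses, not actual GlobalFoundries 12nm figures:

```python
# Poisson yield model: P(k defects) = lam^k * e^-lam / k!,
# with lam = die area * defect density. All numbers are hypothetical.
from math import exp

def die_split(area_cm2, defects_per_cm2):
    """Return (fraction of fully good dice, fraction with exactly one defect)."""
    lam = area_cm2 * defects_per_cm2
    fully_good = exp(-lam)        # P(0 defects): sellable as 8-core
    one_defect = lam * exp(-lam)  # P(1 defect): often salvageable as 6-core
    return fully_good, one_defect

good, salvage = die_split(2.13, 0.2)  # ~213 mm^2, Zeppelin-sized die
print(f"fully good: {good:.0%}, single-defect: {salvage:.0%}")
```

With numbers in this ballpark, the existing lower-core SKUs already soak up most imperfect dice - which is the point above: a dedicated salvage part isn't needed.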



lexluthermiester said:


> Maybe it doesn't make sense to you. But neither you nor anyone else outside AMD can say definitively what actually makes sense for AMD.


That is of course true. But then again, neither can you. Unless you're working for AMD developing their CPU strategy?


----------



## lexluthermiester (Sep 20, 2018)

Valantar said:


> blah blah blah


Yawn


Valantar said:


> That is of course true.


Gee, thanks for the unneeded and unnecessary validation. Of course it's true; it doesn't take a genius to figure that out.

If AMD thinks they can profit from a 10-core product, then they are going to do it regardless.


----------



## Valantar (Sep 20, 2018)

lexluthermiester said:


> If AMD thinks they can profit from a 10-core product, then they are going to do it regardless.


Well, duh. That's a gross oversimplification, though. Not to mention that my entire line of argumentation revolves around how likely it is that they could profit from this or not. You don't seem to want to discuss that, though, nor to discuss the validity of the leak in question here. If you're so strongly opposed to discussing speculation and opinion, why are you even here? The "leak" itself is unverified and there are strong indications that it's fake. There are no facts to discuss here.


----------



## mastrdrver (Sep 21, 2018)

lexluthermiester said:


> You really need to read the rest of the conversation.



There is nothing else to read, because it's all senseless babble. Nothing you've stated addresses the root of the issue I pointed out.


----------



## lexluthermiester (Sep 21, 2018)

Valantar said:


> Well, duh. That's a gross oversimplification, though.


Not really.


Valantar said:


> You don't seem to want to discuss that, though, nor to discuss the validity of the leak in question here.


Hmm. Interesting that.


Valantar said:


> If you're so strongly opposed to discussing speculation and opinion, why are you even here?


Discussing speculation is very different from sharing opinion. I'm not opposed to either. What I am opposed to is needless, meritless AMD/Intel/"Company name here" bashing which is very much what you seemed to be doing here.


Valantar said:


> The "leak" itself is unverified and there are strong indications that it's fake.


That is possible. It's also possible that it is real. We won't know until AMD goes public with a statement one way or the other.


Valantar said:


> There are no facts to discuss here.


Oh? How can that be correct? There are plenty of facts to speak of.

Fact: If AMD thinks they can benefit from releasing a 10-core CPU line, and that it is worth their time to do so, they will without fail.
Fact: As AMD has already released CPU lines with core counts greater than 10, making a 10-core line is entirely possible and equally plausible.
Fact: AMD has historically been known for taking advantage of any and every opportunity available, including reworking existing designs to maximize capitalization.


mastrdrver said:


> There is nothing else to read, because it's all senseless babble.


Yeah you seem to do that quite often..


mastrdrver said:


> Nothing you've stated addresses the root of the issue of which I pointed out.


And you think that because you failed to read through everything.


----------



## mtcn77 (Sep 21, 2018)

Everyone seems to discount that there is money in server chips, and that Intel has an asymmetrically arranged mesh topology - _but AMD isn't going to counter that?_ Where do you get that from, the parrot magazine?
Fact: Raven Ridge comes with a similar half-CCX variant. One only needs to add it to 'Summit Ridge' - potentially 4+4+2.





Raven Ridge vs Zeppelin (image)

CCX Variants (WikiChip) (image)


----------



## Valantar (Sep 21, 2018)

lexluthermiester said:


> What I am opposed to is needless, meritless AMD/Intel/"Company name here" bashing which is very much what you seemed to be doing here.


Sorry, what? Seriously? Wow. I don't even mean this as snark: you really need to work on your reading comprehension. What have I said here that can be seen as "bashing" any company, and how exactly? This is ridiculous. Oh, and not that it matters, but all three desktop PCs in my household are AMD-based, two of them built/upgraded within the last year and a half... 

I have been discussing the merits of this leak, as well as the engineering/marketing/sales/strategy aspects of going forward with a design like this. I think it's a genuinely bad idea. You don't. We disagree. Deal with it. This is not bashing AMD; it's the complete opposite: not wanting AMD to waste their relatively limited resources on a silly product that won't do them any good in either the short or the long term. If you read this as bashing AMD, there is something very much wrong with how you're reading this.

Now, either please reread my posts here and try to figure out what you've misunderstood, or stop replying to me. I really can't be bothered dealing with this any more, as the misunderstanding is entirely on you.


----------



## lexluthermiester (Sep 21, 2018)

Valantar said:


> Sorry, what? Seriously? Wow. I don't even mean this as snark: you really need to work on your reading comprehension.


Comprehension is not the problem.


Valantar said:


> I have been discussing the merits of this leak, as well as the engineering/marketing/sales/strategy aspects of going forward with a design like this. I think it's a genuinely bad idea. You don't. We disagree.


Clearly


Valantar said:


> This is not bashing AMD


It comes off that way, whether you realize it or not. I'm leaning towards not..


Valantar said:


> it's the complete opposite: not wanting AMD to waste their relatively limited resources


That's just it, it wouldn't be a waste of resources. It would in fact be a very wise use of them. And because of the success of Ryzen, AMD's resources are not "relatively limited". For you to state that AMD's resources are limited, and that making such a product would be outside the scope of those resources, is bashing by implication.


Valantar said:


> If you read this as bashing AMD, there is something very much wrong with how you're reading this.


Or it could be the way you're stating things.


Valantar said:


> Now, either please reread my posts here and try to figure out what you've misunderstood, or stop replying to me. I really can't be bothered dealing with this any more, as the misunderstanding is entirely on you.


Ok, I'll yield on this one. I did reread your posts and still conclude that you seemed to be bashing AMD. If that is not your intent then I apologize.

I'm not a fan of company bashing in general, unless the company in question has few or no redeeming qualities. In the case of AMD, they currently hold two very important crowns in the x86 CPU market: the highest-performing single-socket CPU crown, and the best value-for-money crown. Intel holds the best gaming performance crown, but not by much, and they may lose it. In the last 18-ish months AMD has forged ahead, and they show no signs of slowing down. This is good for the entire industry, as it is forcing Intel to actually compete with someone other than themselves. AMD releasing a 10-core CPU not only makes sense from a logistics perspective, it makes sense from a competition perspective. A 10-core offering is good for the market even if the benefit isn't large, as it will show everyone that AMD doesn't just have bite, but also the ability to sustain.


----------



## Valantar (Sep 21, 2018)

lexluthermiester said:


> That's just it, it wouldn't be a waste of resources. It would in fact be a very wise use of them. And because of the success of Ryzen, AMD's resources are not "relatively limited". For you to state that AMD's resources are limited, and that making such a product would be outside the scope of those resources, is bashing by implication.
> 
> Or it could be the way you're stating things.
> 
> Ok, I'll yield on this one. I did reread your posts and still conclude that you seemed to be bashing AMD. If that is not your intent then I apologize.


I think I've identified the source of the misunderstanding, but feel free to correct me if I'm wrong. In post #63, I said


Valantar said:


> They probably could, but it would be quite limited in terms of availability, not to mention a PR disaster ("MAD [sic - my phone's autocorrect never seems to accept that I type "AMD" quite often] returning to their space-heater roots" and so on).


I'm guessing you misread the part in quotes as something _I_ was saying - which it wasn't. While that _could_ have been made more obvious, the context and quotation marks really ought to be enough to make it obvious that I was showing an example of the classic AMD-bashing that they would open themselves up to from Intel fans if they were to launch a new AM4 part with a TDP even higher than the 105W of the 2700X. It _would_ be a PR disaster, as it would make it far too easy for both Intel and their fans to bash AMD over their history of inefficient chips - even if Zen is arguably _more_ efficient than CFL. AMD has a branding and mindshare disadvantage, and playing into overblown public perceptions of their previous weaknesses would be too dumb a strategy for me to believe AMD would even consider it. (This, of course, ignores the fact that current Intel offerings consume far more power than their TDP at the frequencies people applaud them for reaching (including their rated boost clocks), something that I've addressed earlier in this thread.)

As for AMD having limited resources: _of course_ they do. Arguing anything else is kind of absurd, really. They've just recently gone from losing money over multiple years to making a profit (an achievement that deserves a lot of praise, but nonetheless hasn't lasted long enough for them to have a lot of cash on hand). In terms of revenue, they're less than 1/10 the size of Intel. They're half the size of Nvidia by the same measure, and Nvidia only competes with them in half their product stack. There is _zero _question that AMD's R&D and silicon development budgets are _far_ smaller than those of their competitors. What does this mean? That relative to their competition (which is the only reasonable metric), they have limited resources.

This is actually one of the absolutely brilliant things about the base Zen design and the MCM approach: they turned an economic disadvantage into a technological advantage. AMD has also clearly stated that they went for the "one die to rule them all" design approach (plus APUs, of course) for cost reasons - which, given their size and resources, was _very_ smart. I'm quite convinced they will add another design in relatively short order (for a product stack of three designs, not two), but not before 7nm is here, and to me it seems most likely for that design to be an 8-core single-CCX (given the 8-core CCXes shown by 64-core Epyc), possibly with an iGPU.

Their current silicon product stack consists of two parts: 4c+GPU and 4c+4c. If the leaks of 64-core Epycs are true, that means an 8c+8c die is incoming. That leaves a significant gap between it and the (inevitable, as more doesn't make sense for ULV mobile) 4c+GPU refresh. 10 cores in silicon would strike a weird balance here - it would require two CCXes (unless you're implying they design yet another CCX, with 10 cores, which would be more expensive), and is a core count that makes sense for low-end harvested dice from the 16-core die. Designing an in-between 8c-single-CCX (either with or without an iGPU, based on the same CCX design as the 8c+8c Zeppelin) die to fill this gap makes a lot more sense. The 8c+8c die seen in current 7nm Epyc leaks would make a single-die 10-core AM4 design quite easy (and I don't really doubt we'll see it once 7nm Ryzen launches), but launching it at this time makes no sense for either marketing or sales reasons, as I've said before. There's more money to be had by selling the same harvested dice as lower-end Epycs.

As for my posts being "bashing": in my understanding of that term, that would require me to be unequivocally and universally critical of AMD at every level; _presenting them_ as if they had few or no redeeming qualities. It should be plenty clear from my posts that I haven't done that whatsoever. Nor have I actually criticized AMD's current products or strategies - quite the opposite! What I've said is that this leak, which aligns with neither AMD's public roadmaps, their stated strategies, their publicized silicon development, nor their product segmentation strategy, would be a strategically bad move - in particular as any Ryzen 3000-series based off a 16-core Zeppelin would render it obsolete when it launches in 6-8 months.

Let's see:


Valantar said:


> TR is excellent for what it's made for, but gaming isn't among that. AMD likes to push the gaming+streaming angle, which is somewhat valid, but with that kind of budget you'd be better off getting a secondary streaming PC anyhow, so that's kind of moot. The 18-core X299 Intel chip doesn't exactly game well either...


Is this bashing?


Valantar said:


> I bought a 1600X when it launched, and even if the 2600x is measurably faster across the board, I'm still very happy with mine. The 2700X is a crazy powerful processor


Or this? Just based on these two quotes alone, you should have been entirely able to tell that I haven't been bashing - or even criticizing! - AMD. Again: this _has_ to boil down to reading comprehension. I'm leaning towards you wanting to antagonize me, or to read what I'm saying in an oppositional light due to the other thread where we've been arguing, but that might be going a bit far. Still, your reading of my posts makes no sense.



lexluthermiester said:


> I'm not a fan of company bashing in general unless the company in question has few or no redeeming qualities. In the case of AMD, they currently hold two very important crowns in the x86 CPU market: they have the highest-performing single-socket CPU crown, and they have the best value-for-money crown. In Intel's case, they hold the best gaming performance crown, but not by much, and they may lose it. In the last 18-ish months AMD has forged ahead, and they show no signs of slowing down. This is good for the entire industry, as it is forcing Intel to actually compete with someone other than themselves. AMD releasing a 10-core CPU not only makes sense from a logistics perspective, but it makes sense from a competition perspective. A 10-core offering is good for the market even if the benefit isn't large, as it will show everyone that AMD doesn't just have bite, but also the ability to sustain.


You're preaching to the choir here, man. This right here is exactly why it makes no sense for AMD to go off-roadmap and spend tens of millions of dollars on a 10-core piece of silicon on 12nm that will inevitably be short-lived and still won't recapture the gaming crown from Intel. To quote myself yet again:


Valantar said:


> For the foreseeable future, 8c16t will be plenty for gaming (even 8c8t will likely last for years and years), and we definitely need an increase in per-core performance. [...] But there's not really any reason _not_ to expect that from 7nm Zen2, is there? Given that AMD has a core count advantage, they know they need to work on IPC and clock speed, and they've said there's plenty of low-hanging fruit to improve the former, while the latter should improve with the new node.


I really don't know who it is you're arguing against - it certainly isn't me.




mtcn77 said:


> Everyone seems to discount there is money in server chips and Intel has asymmetrically arranged mesh topology, _but AMD is not to counter that?_ Where do you get that from, the parrot magazine?
> Fact, Raven Ridge comes with a similar half-CCX variant. One only needs to add it to the 'Summit Ridge', potentially 4+4+


AMD _has_ countered Intel's mesh topology. Their counter is the IF-linked MCM design. It works wonderfully as long as your workload doesn't exceed 8 cores/16 threads per task (which _very_ few workloads, even for servers, do, and the upcoming 16c/32t Zeppelins will make up for any deficiencies here). AMD's response has a latency disadvantage in some scenarios, but a whole host of advantages that more than make up for this.

What you're effectively saying here is that AMD should ditch their current (known-working, performant, efficient, easy-to-produce) IF-connected MCM approach and dedicate another on-die IF link to hooking up _two_ measly cores? Why? Those cores would still not have direct access to a memory controller (unless they make this a three-channel design, requiring a non-AM4 platform) or a PCIe controller, both of which are attached to the two current CCXes.

As for your half-CCX example, care to share a link? I can't find that image on any AMD/Zen-related WikiChip page that I've looked at. And that's really beside the point: the 2-core image shows a Raven Ridge chip with two cores disabled after production, such as the Athlon 200GE. AMD hasn't (yet, at least) made available any in-silicon CCX with more or fewer than 4 physical cores. Given how small 4 cores will be on 7nm, there's little reason to believe a piece of silicon like that will ever be made.


----------



## lexluthermiester (Sep 21, 2018)

Valantar said:


> I think I've identified the source of the misunderstanding, but feel free to correct me if I'm wrong.


There were others. Like I said, it's all good. You're not being like the standard "fanboy". I still contend it would not take much engineering (which might have been done ahead of time) to link two flawed 6-core Ryzens to make a 10-core. It's been done before, and it really wouldn't take much. I did the math on the physical dimensions, and two dies would fit, with a bit of room to spare, in an AM4 package.
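The footprint math is easy to sanity-check. A rough sketch, where the die size (~22 x 9.7 mm for Zeppelin) and the nominal 40 x 40 mm AM4 package are my own ballpark figures, not official dimensions:

```python
# Back-of-envelope check of whether two Zeppelin-sized dies would fit on
# an AM4 package, stacked side by side. All dimensions are rough public
# estimates used purely for illustration.

DIE_W_MM, DIE_H_MM = 22.0, 9.7    # approx. Zeppelin die footprint
PKG_W_MM, PKG_H_MM = 40.0, 40.0   # AM4 package is nominally 40 x 40 mm
MARGIN_MM = 2.0                   # assumed keep-out margin around each die

def two_dies_fit(die_w, die_h, pkg_w, pkg_h, margin):
    """Check the simplest placement: two dies stacked along one axis."""
    needed_w = die_w + 2 * margin
    needed_h = 2 * (die_h + 2 * margin)
    return needed_w <= pkg_w and needed_h <= pkg_h

print(two_dies_fit(DIE_W_MM, DIE_H_MM, PKG_W_MM, PKG_H_MM, MARGIN_MM))  # True
```

So in pure x/y area the claim holds; the later posts argue the real constraint is elsewhere (substrate thickness and IF routing).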


----------



## londiste (Sep 21, 2018)

lexluthermiester said:


> I still contend it would not take much engineering(which might have been done ahead of time) to link two flawed 6core Ryzens to make a 10core. It's been done and it really wouldn't take much. Did the math on the physical dimensions and two dies would fit, with a bit of room to spare, in an AM4 package.


As @Valantar mentioned, would the vertical dimensions fit in the AM4 package?
Adding another die would bring the same latency/NUMA issues Threadripper has (in gaming and other desktop use cases) to the desktop. It would basically only compete better than the 2700X in workstation applications, and maybe not even too well at that. Add to that the extra cost, complexity, and power management/usage (two extra cores, additional IF links) on top of all this.
While it could be done I cannot see how that would be worth the effort.

At the same time, Zen 2 is reportedly planned for H1 2019.


----------



## Slizzo (Sep 21, 2018)

lexluthermiester said:


> There were others. Like I said it's all good. You're not being like the standard "fanboy". I still contend it would not take much engineering(which might have been done ahead of time) to link two flawed 6core Ryzens to make a 10core. It's been done and it really wouldn't take much. Did the math on the physical dimensions and two dies would fit, with a bit of room to spare, in an AM4 package.



As stated before, Ryzen currently requires symmetrical dies, even when pairing them together (otherwise first-gen Threadripper would not have been limited to 12- and 16-core configurations).

Whether or not your theoretical core count comes to pass, who knows? I don't see them breaking the need for symmetry in the current line of Zen considering their cadence (yearly releases). They don't have a lot of time to course-correct on Zen itself with the cash they have on hand (not much) and the engineering team they have (small compared to Intel's).


----------



## Valantar (Sep 21, 2018)

lexluthermiester said:


> There were others. Like I said it's all good. You're not being like the standard "fanboy". I still contend it would not take much engineering(which might have been done ahead of time) to link two flawed 6core Ryzens to make a 10core. It's been done and it really wouldn't take much. Did the math on the physical dimensions and two dies would fit, with a bit of room to spare, in an AM4 package.


To me, even implying that I might be in the same spectrum as an Intel fanboy is downright insulting - I'm quite an ardent AMD supporter, for quite a few reasons. You really wouldn't have to look very far for confirmation of this, including in this thread (not wanting AMD to make dumb-a**ed decisions and waste R&D money on short-lived one-off products is _supporting_ them, not criticism. I _would_ criticize them if they did what this rumor says, but I don't see it as likely whatsoever). Outside of work-mandated laptops, I haven't bought/owned an Intel-based PC since my ThinkPad X201 back in ... 2010? Something like that. I've never bought an Nvidia GPU either, for that matter (well, I frankly don't remember what GPU was in my first two PCs, but considering I was 10-15 years old (and consequently dirt poor), I doubt I could have afforded anything by Nvidia then either, even if it was the early 2000s and I was an idiot. I love both my Fury X and my 1600X).

Anyhow, as @londiste reiterated above, the x and y measurements are not the main limitation for adding a second die to AM4. Adding an internal IF link between two dice would necessitate adding layers to the substrate, making it thicker than it is today. That would make the package incompatible with the AM4 spec, meaning that a lot of coolers would no longer fit (particularly the ones with clip-on mounting, though also likely quite a few with short mounting screws). If you doubt this - have you seen a Threadripper CPU in person? If not, let me tell you from experience: the substrate is _insanely_ thick. I took this picture when I was installing the 1920X in my GF's video editing workstation:




That substrate alone is thicker than an entire mobile CPU package (I googled: CFL-H is 1.49mm, and that substrate is thicker than that), and comes close to some desktop packages. Sure, the TR substrate is designed for EPYC, and thus has room for even more IF links, two extra memory channels and extra PCIe - but it's also around 4x the area of AM4. Less area means more layers for fitting the same number of traces. In other words, fitting a 2-die package with the requisite IF link inside of the restrictions of the AM4 package dimensions would be quite the challenge, if possible at all.
For reference: while not as good a picture, the AM4 package is clearly _a lot _thinner than the TR4 package.





Then, again, there are issues like NUMA awareness (or the lack of it), latency between the dice, leaving one die without directly connected PCIe (at the very least) and RAM (unless you run one channel off each die, which would bring its own performance issues), and so on and so forth. A package like this would probably be a decent workstation chip (even with two fewer memory channels than TR it would be a beast for multi-threaded workloads), but it wouldn't make sense as a high-end consumer chip.

Saying this isn't bashing AMD. It's providing sound reasoning why this rumor doesn't ring true in terms of the technology behind AMD's current offerings (and what they have publicized about their future plans). It doesn't align with what they've said, and it would only barely make sense as a product. It definitely wouldn't be a suitable "answer" to the 9700K/9900K for those who need such an answer (Intel fans or people who only play games), as it wouldn't outperform the 2700X in those applications. In short: it makes _far_ more sense for AMD to have a bit of patience, wait until 7nm Zen2 is available in sufficient quantities for a consumer launch (which I'm willing to bet will be around April, as a yearly cadence makes sense), and blow people's minds with how good it is. Zen2 should bring higher clocks, even better efficiency (I'm _really_ looking forward to 7nm APUs!), and improved IPC - a real triple threat. There's no reason for AMD to rush out some oddball in-between "answer" to Intel's rather desperate attempt at matching AMD's core-count advantage before that.


----------



## mtcn77 (Sep 21, 2018)

Valantar said:


> AMD _has_ countered Intel's mesh topology. Their counter is the IF-linked MCM design. It works wonderfully as long as your workload doesn't exceed 8 cores/16 threads per task (which _very _few workloads even for servers do, and the upcoming 16c/32t Zeppelins will make up for any deficiencies here). AMDs response has a latency disadvantage in some scenarios, but a whole host of advantages that more than make up for this.
> 
> What you're effectively saying here is that AMD should ditch their current (known working, performant, efficient, easy to produce) IF-connected MCM approach and spend another on-die IF link on hooking up _two_ measly cores? Why? Those cores would still not have direct access to a memory controller (unless they make this a three-channel design, making it require a non-AM4 platform) or PCIe controller, both of which are attached to the two current CCXes.
> 
> As for your half-CCX example, care to share a link? I can't find that image in any AMD/Zen-related WikiChip page that I've looked at. And that's really besides the point: the 2-core image is showing a Raven Ridge chip with two cores disabled after production, such as the Athlon 200GE. AMD hasn't (yet, at least) made available any in-silicon CCX with any more or less the 4 physical cores. Given how small 4 cores will be on 7nm, there's little reason to believe a piece of silicon like that will ever be made.


These bullet points cannot be appreciated enough. When Zen v1 took charge, the first reviews were mixed between it and the 5960X; since then, however, Intel has had to resort to bigger dies (a 10-core entrant to the mainstream) and split the HEDT line altogether, because the 6950X just didn't live up to the hype due to overheating and nasty overclocking instability.




I know about the presence of CCX variants, but that is about it. Nothing solid other than the topology clues. It would actually be very easy, since the chip is an SoC, and adding more CCX islands would simply slot into the existing 'SoC' layout. The core matrices are asymmetrical anyway; this whole 'symmetrical die' rumour couldn't end soon enough. Infinity Fabric is outside the respective scope.


----------



## Valantar (Sep 21, 2018)

mtcn77 said:


> These bulletpoints cannot be appreciated enough. When Zen v.1 took charge, the first reviews were mixed between that and 5960x, however since then Intel had to resort to bigger dies(10-core entrant to the mainstream) and split the HEDT line altogether because 6950x just didn't live up to hype due to overheating and nasty overclocking instability.
> 
> 
> 
> ...


You seem to have misunderstood a few fundamentals here:

- CCXes on the same die are also connected through Infinity Fabric. As such, IF is never beyond the scope of any multi-CCX Zen design.
- You can't "just add" parts without accounting for I/O, even in an SoC. Your reasoning doesn't account for how the parts of the SoC actually communicate or work together.
- The part about Zen requiring symmetrical CCXes is not a rumor - it has been confirmed by AMD in interviews. As for requiring symmetrical dice in MCM setups, I haven't seen that part confirmed, but it makes sense from a system-architecture/resource-allocation perspective. An asymmetrical MCM setup might be possible, but it would be far more challenging to implement and balance/tune in terms of cache migration, PCIe, RAM, thread scheduling, and so on. Possibly doable, but impractical.
Now, you say "I know about the presence of CCX variants". What does that mean? Where do you know of this from? Do you know of anything that isn't simply a version with disabled silicon, like <8-core Ryzen or <4-core APUs? If so, from where? I would be very interested in reading further about this, as it would be a very unexpected turn in AMD's strategy. The only variants I've seen are the CCXes with different cache layouts between the CPU and APU CCXes. Do you _know_ this (as in: have evidence) or is it something deduced from looking at core layouts? I don't mean to come off as too critical here, it's just that I have never seen this reported anywhere by anyone, so any new info is very interesting to me.


----------



## mastrdrver (Sep 22, 2018)

lexluthermiester said:


> There were others. Like I said it's all good. You're not being like the standard "fanboy". I still contend it would not take much engineering(which might have been done ahead of time) to link two flawed 6core Ryzens to make a 10core. It's been done and it really wouldn't take much. Did the math on the physical dimensions and two dies would fit, with a bit of room to spare, in an AM4 package.



Your ability to understand how a silicon die is made is quite laughable.

If it were really this simple, what are you doing on here? You should be telling AMD how their leaders are idiots, and you should be running the company.


----------



## lexluthermiester (Sep 22, 2018)

Valantar said:


> To me, even implying that I might be in the same spectrum as an Intel fanboy is downright insulting


No insult was intended.


mastrdrver said:


> Your ability to understand how to make a silicon die is quite the laughter.
> If it was really this simple, what are you doing on here? You should be telling AMD how they're leaders are idiots and you should be running the company.


Go troll someone else.


----------



## ypsylon (Sep 22, 2018)

Just going back to the topic a bit: what are all these cores for when the CPU provides only a paltry 16 PCIe lanes anyway? That's pathetic on a - possible - 10-core chip, while TR4 gets 60 lanes on 12 cores. How about something in between, AMD?


----------



## Valantar (Sep 22, 2018)

ypsylon said:


> Just going back to topic a bit. What for are all these cores when CPU provides only paltry 16 PCIe lanes anyway? That's pathetic on a -possible- 10 core chip while TR4 gets 60 on 12. How about something in between AMD?


That would require a new platform, as AM4 is limited to 16+4+4. What they could do is integrate a PCIe 3.0 switch into upcoming chipsets, though. With one SSD connected directly to the CPU, there wouldn't be much chance of bottlenecking the switch. More PCIe for anything outside of SSDs is meaningless for mainstream desktop usage, as multi-GPU gaming is dead and only a few non-prosumers want/need 10GbE. As such, adding 8-12 PCIe 3.0 lanes off the chipset would make the platform pretty much perfect.


----------



## lexluthermiester (Sep 23, 2018)

Valantar said:


> AM4 is limited to 16+4+4


For most users that's actually enough.


Valantar said:


> as multi-GPU gaming is dead


LOL! No it isn't. A solid 20% of my clients run either CrossFire or SLI (mostly SLI). Multi-GPU gaming has maintained a steady level of popularity for more than 10 years. It's not going anywhere.


----------



## Valantar (Sep 23, 2018)

lexluthermiester said:


> For most users that's actually enough.


I don't disagree, but with the increasing popularity of NVMe drives another few lanes would be a nice form of future-proofing. 



lexluthermiester said:


> LOL! No it isn't. A solid 20% of my clients do either crossfire or SLI(mostly SLI). Multi GPU gaming has maintained a steady level of popularity for more than 10 years. It's not going anywhere.


Well, that would place them solidly in the "more money than sense" category. I suppose you can't stop people from wasting money. The RTX 2070 doesn't support SLI, meaning that the minimum price for a dual-GPU setup going forward will be ~$1400. According to GamersNexus' recent test of RTX SLI, scaling is actually slightly better than it has been (probably thanks to NVLink), but it is still unpredictable and requires per-game profiles, limiting support to a handful of titles per year. In other words, in most games your $1400 SLI setup will perform no better than a $699 single card, let alone a $1200 2080 Ti - which works in every game. That's dead enough for me.


----------



## hat (Sep 23, 2018)

Valantar said:


> I don't disagree, but with the increasing popularity of NVMe drives another few lanes would be a nice form of future-proofing.



Supposedly Ryzen (at least the 2600X) has 20 lanes: 16 for the video card and 4 for... whatever else, usually those NVMe drives you speak of. Beyond that, the chipset provides more. Not sure what else you would be looking for.



Valantar said:


> Well, that would place them solidly in the "more money than sense" category. I suppose you can't stop people from wasting money. The RTX 2070 doesn't support SLI, meaning that the minimum price for a dual GPU setup in the future will be ~$1400. According to GamersNexus' recent test of RTX SLI, scaling is actually slightly better than it has been (probably thanks to NVLink), but still unpredictable and requires per-game profiles, limiting support to a handful of titles per year. In other words, in most games your $1400 SLI setup will perform no better than a $699 single card, let alone a $1200 Ti - which works in every game. That's dead enough for me.



This I agree with. SLI is plagued with problems and meh performance gains even in titles that support it. Even if someone handed me $10,000 and told me I MUST use it to buy myself a computer, SLI (or xfire) would still not be on the list. I have two 1070s right now only because of mining. If it weren't for that, I would still more than likely be rocking my old 660 Ti.


----------



## lexluthermiester (Sep 23, 2018)

Valantar said:


> I don't disagree, but with the increasing popularity of NVMe drives another few lanes would be a nice form of future-proofing.


But..


hat said:


> Supposedly Ryzen (at least the 2600x) has 20 lanes, 16 for video card and 4 for... whatever else, usually those NVMe drives you speak of. Beyond that the chipset provides more. Not sure what else you would be looking for.


This.


Valantar said:


> Well, that would place them solidly in the "more money than sense" category.


Or people that have the money and can, comfortably or not, afford the setup and want the extra performance.


Valantar said:


> limiting support to a handful of titles per year.


Rubbish, SLI/Crossfire support is driver-centric. All games will run fine in a dual GPU config.


hat said:


> SLI is plagued with problems and meh performance gains even in titles that support it.


Also rubbish. I haven't seen a show-stopping bug/glitch/problem in over three years and the last one had an easy workaround until AMD fixed the driver. Haven't seen an SLI related problem, that affected the systems I've built, in over five years.


----------



## hat (Sep 23, 2018)

lexluthermiester said:


> Also rubbish. I haven't seen a show-stopping bug/glitch/problem in over three years and the last one had an easy workaround until AMD fixed the driver. Haven't seen an SLI related problem, that affected the systems I've built, in over five years.



I've seen problems with SLI personally (in my uncle's system), but granted that was ages ago with two 8800GTS 320MB cards. Between that and the nonstop lamenting over SLI/xFire before and after, to this day, all over the net is more than enough to put me off of it.


----------



## lexluthermiester (Sep 23, 2018)

hat said:


> I've seen problems with SLI personally (in my uncle's system), but granted that was ages ago with two 8800GTS 320MB cards. Between that and the nonstop lamenting over SLI/xFire before and after, to this day, all over the net is more than enough to put me off of it.


I'm not saying dual-GPU systems don't have glitches and issues once in a while, but these problems, like many others, get blown *way* out of proportion. I've been building gaming PCs since the original Voodoo SLI and have never seen the kind of problems a lot of people lament over. The worst problem with multi-GPU setups I ever encountered was with the Voodoo2's, and even that was just a matter of figuring out what the problem was.

With the new RTX series cards, SLI seems like an attractive prospect for those who can afford it. However..


Valantar said:


> The RTX 2070 doesn't support SLI


I had to look this up. The 2070 and below will not have *NVlink*. That does not mean they will not still have the standard SLI bridge. NVidia has not stated that it will not be available.


----------



## Valantar (Sep 23, 2018)

hat said:


> Supposedly Ryzen (at least the 2600x) has 20 lanes, 16 for video card and 4 for... whatever else, usually those NVMe drives you speak of. Beyond that the chipset provides more. Not sure what else you would be looking for.


That's the 16+4+4 I mentioned above: 16 for graphics (and general usage, really), 4 for NVMe (or, again, anything, really), and 4 for the chipset link. The chipsets only provide PCIe 2.0 (8 lanes for the 70-series, 6 for the 50-series). So if you want/need more than one full-speed NVMe SSD (which is growing more likely as time passes), you need to eat into the 16 GPU lanes - which means that few if any motherboards will provide more than one NVMe port from the CPU-connected lanes; they'll use the chipset's 2.0 lanes instead. Of course, running your GPU at x8 isn't actually a problem, but this requires a riser card for the SSD.
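The lane budget above can be tallied in a few lines. A sketch using the commonly cited figures for Ryzen 2000 on X470 (the numbers are the ones from this discussion, used purely for illustration):

```python
# Rough tally of the AM4 PCIe budget described above.

cpu_lanes = {"gpu_slot": 16, "nvme": 4, "chipset_link": 4}  # PCIe 3.0 from CPU
chipset_lanes_x470 = 8                                      # PCIe 2.0 from chipset

# Lanes actually available to devices (the chipset link is an uplink):
usable_cpu_lanes = cpu_lanes["gpu_slot"] + cpu_lanes["nvme"]
print(usable_cpu_lanes)  # 20: one x16 GPU plus one x4 NVMe SSD

# A second full-speed (x4) NVMe drive has to eat into the GPU's lanes:
second_ssd_from_gpu = cpu_lanes["gpu_slot"] - 8  # drop the GPU to x8
print(second_ssd_from_gpu >= 4)  # True: x8/x8 bifurcation would cover it
```

This is why the post suggests the only escape hatches are the chipset's slower 2.0 lanes or an x8/x8 split via a riser.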



lexluthermiester said:


> Rubbish, SLI/Crossfire support is driver-centric. All games will run fine in a dual GPU config.


Yes, it depends on drivers - SLI profiles in the drivers, specifically. SLI has zero effect without a bespoke profile for the game in question (activating it in a game without a profile usually leads to a tiny but measurable performance drop, bugginess, or nothing at all happening). For some games, modders even make their own profiles, with varying success. The only difference between SLI and DX12 multi-GPU in this regard is that the effort lies with Nvidia and not the developer. The statement that "all games will run fine in a dual GPU config" is thus either false (no performance scaling without a profile) or meaningless (defining "running fine" as not requiring performance scaling, invalidating the point of SLI). 

As for the 2070 having SLI: there are no SLI fingers visible on the back of the board (scroll down for a picture of the back). For previous cards, the SLI fingers needed a cut-out in the backplate, so unless they've redesigned the entire SLI interface, it doesn't have it. There isn't room to fit the bridge connector between the backplate and the PCB, so a cutout or fingers sticking up past the backplate would be necessary. The NVLink slot is also visible from the back on the 2080 and 2080 Ti. Nvidia previously cut SLI from the third-largest die (then the 60-series), so it's no surprise if they keep to this line now that the third-largest die is in the 70-series.

SLI gives you the ultimate performance in the (relatively few) games that support it for the people who can afford it, but given the cost and what you gain back, it's an utter waste of money.


----------



## lexluthermiester (Sep 23, 2018)

Valantar said:


> As for the 2070 having SLI, there are no SLI fingers visible on the back of the board (scroll down for a picture of the back).


That's a CGI mock-up, not an actual photograph. And the FE RTX cards and many of the AIB cards have a removable cover for the NVLink connector; the FE RTX 2070 cards likely have the same.


Valantar said:


> The statement that "all games will run fine in a dual GPU config" is thus either false (no performance scaling without a profile) or meaningless (defining "running fine" as not requiring performance scaling, invalidating the point of SLI).


What I meant was that all games will benefit from SLI/CF. I have yet to see a game that doesn't get at least some performance increase from a multiGPU setup, *when properly configured*.


Valantar said:


> SLI gives you the ultimate performance in the (relatively few) games that support it for the people who can afford it, but given the cost and what you gain back, it's an utter waste of money.


That is entirely your opinion, not shared by all.


----------



## Valantar (Sep 23, 2018)

lexluthermiester said:


> That's a CGI mock-up not an actual photograph. And the FE RTX cards and many of the AIB cards have a removable cover for the NVlink, the FE GTX 2070 cards likely have the same.


The same mock-ups of the 2080 and 2080 Ti have the NVLink connector (with its cover) very clearly visible (it protrudes slightly from the edge of the backplate). Official renders of the 970 and 1070 also clearly showed the SLI fingers. The renders of the 2070 show nothing but a straight edge there - clearly no NVLink connector, and also no SLI finger cutout. Official product renders for Founders Edition cards also tend to match the final product quite exactly.



lexluthermiester said:


> What I meant was that all games will benefit from SLI/CF. I have yet to see a game that doesn't get at least some performance increase from a multiGPU setup, *when properly configured*.


I think what you mean is that all games _can_ benefit from it. _Will_ implies that it'll happen in time, which it won't - even Nvidia doesn't have the resources to do all that development. The problem is that >95% of games never come close to "properly configured" for SLI. The vast majority never even get profiles, and many of those that do never see more than 30-40% scaling (there are exceptions - in the GamersNexus 2080 Ti SLI scaling review I referenced above they had a title with >90% scaling!). Of course, some people don't mind paying 2x the price for 1.4x the performance in <5% of titles, and that's of course their right - but that won't make me stop calling it dumb, bad value, poorly implemented, and generally problematic. 'Cause it is.
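The value argument above can be made explicit. A sketch using the thread's own ballpark numbers (prices, scaling, and the share of supported titles are all assumptions from this discussion, not benchmark data):

```python
# Illustrative expected-value comparison for an SLI setup across a mixed
# game library, using the rough figures from the discussion above.

single_price, single_perf = 699.0, 1.0  # one RTX 2080 at launch MSRP
sli_price = 2 * single_price            # second card; bridge cost ignored
typical_scaling = 1.35                  # ~30-40% gain in supported titles
supported_fraction = 0.05               # rough share of games with profiles

# Expected relative performance averaged over supported and unsupported games:
sli_avg_perf = (supported_fraction * typical_scaling
                + (1 - supported_fraction) * single_perf)

print(sli_price / single_price)  # 2.0x the cost...
print(sli_avg_perf)              # ...for only ~1.02x the average performance
```

Twice the money for a low-single-digit average uplift is the "bad value" claim in numeric form; in the handful of supported titles the uplift is of course the full ~1.35x.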

I'm quite a fan of the concept behind SLI/CF (I even ran CF 4850s back in the day, which worked decently in supported titles up until their 512MB of RAM started being an issue). The problem is that until multi-GPU can be implemented universally and transparently at a general driver level (meaning no developer effort required, unlike DX12 multi-GPU, and much less driver development required, unlike SLI/CF), it's going to be a niche solution with disappointing results and egregious value.


----------



## lexluthermiester (Sep 23, 2018)

Valantar said:


> The same mock-ups of the 2080 and 2080 Ti have the NVLink connector (with its cover) very clearly visible (it protrudes slightly from the edge of the backplate). Official renders of the 970 and 1070 also clearly showed the SLI fingers. The renders of the 2070 show nothing but a straight edge there - clearly no nvlink adapter, and also no SLI finger cutout. Official product renders for Founders Edition cards also tend to match the final product quite exactly.


Until they actually officially announce that they will not have SLI, I'm not willing to accept such. They would be shooting themselves in the foot and handing AMD a whole class of customers if they didn't continue SLI on mid-range cards.


Valantar said:


> I think what you mean is that all games _can_ benefit from it. _Will_ implies that it'll happen in time


Right, bad choice of vocabulary.

We are way off topic here, let's rope it in..


----------



## Valantar (Sep 23, 2018)

lexluthermiester said:


> Until they actually officially announce that they will not have SLI, I'm not willing to accept such. They would be shooting themselves in the foot and handing AMD a whole class of customers if they didn't continue SLI on mid-range cards.


Don't disagree here (I'm generally not a fan of making features exclusive to high-end SKUs), but seeing how they cut it from the XX06 cards last generation, it'd be a bit strange for them to bring support back to this chip tier this generation. Of course, the separation of the three topmost SKUs onto three separate silicon dice is itself unprecedented, so who knows what they'll end up doing?


----------



## ratirt (Sep 24, 2018)

Here are some SLI tests for the 2080 Ti. They look pretty good.


----------



## Earlzmoade (Sep 24, 2018)

silentbogo said:


> Lol. This is gonna be embarrassing. I've decided to find those old slides for you, and out of all the sources the first one that came up in google was an article from WCFTech with an informative title "Fake AMD Ryzen 2800X 12 Core 5.1GHz Slide Sends Media Into Frenzy" ))))
> So much for keeping up with news....
> 
> So, all we have to go on, is a now-taken-down and non-existent MSI promotional video for B450 motherboard that claimed "8-core and up CPU" support... All clues and hints have been meticulously erased.
> ...




Not just MSI.

If you look at the manual for the ASRock X370 Pro BTC+, you can see in the BIOS that it has CPU overclocking settings for 16 cores.


----------



## Valantar (Sep 24, 2018)

Earlzmoade said:


> Not just MSI.
> 
> If you look at the manual for the ASRock X370 Pro BTC+, you can see in the BIOS that it has CPU overclocking settings for 16 cores.


... Which fits perfectly with AMD moving their top-end consumer parts to the same silicon as the currently sampling 7nm EPYC with the 3000-series (with two 8-core CCXes per die for a maximum of 64 cores in 4-die EPYC). There's no reason to suspect this is relevant before then.

This aligns with AMD's current strategy, as well as reasonable expectations of its extension into the future. As such, it is the answer that requires the fewest new assumptions (no deviations from current strategy, no unknown silicon, no reconfiguration of the architecture that we don't know of) and is thus the best hypothesis according to Occam's razor.
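For what it's worth, the core-count arithmetic behind that hypothesis is easy to check. A minimal sketch, assuming the rumored (unconfirmed) Zen 2 topology of two 8-core CCXes per die:

```python
# Rumored (unconfirmed) Zen 2 topology: two 8-core CCXes per die.
CORES_PER_CCX = 8
CCX_PER_DIE = 2

def max_cores(dies: int) -> int:
    """Maximum core count for a package built from `dies` such dies."""
    return dies * CCX_PER_DIE * CORES_PER_CCX

print(max_cores(1))  # single-die AM4 package: 16 cores
print(max_cores(4))  # four-die EPYC package: 64 cores
```

A single such die would already cover 16-core BIOS entries on AM4, with no exotic Zen+ silicon required.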


----------



## Caring1 (Sep 25, 2018)

ratirt said:


> Here are some SLI tests for the 2080 Ti. They look pretty good.


Wrong thread.


----------



## Valantar (Sep 25, 2018)

Caring1 said:


> Wrong thread.


Not really - we went off on a bit of a tangent for the past couple of pages. Still, not really on-topic, but neither were the posts preceding it.


----------



## quadibloc (Sep 26, 2018)

With four cores per CCX, 10 cores aren't possible? Well then, why not go to 12 cores. And some rejects with two failed cores turned off would be the 10 core parts.


----------



## lexluthermiester (Sep 26, 2018)

quadibloc said:


> With four cores per CCX, 10 cores aren't possible? Well then, why not go to 12 cores. And some rejects with two failed cores turned off would be the 10 core parts.


That concept has already been covered/suggested. I agree that it's possible with some sort of combination. We will see what happens.
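As a purely illustrative sketch of the binning combinatorics (assuming the known Zen/Zen+ layout of two 4-core CCXes per die, and a hypothetical two-die package): if AMD sticks to its practice of keeping every CCX symmetric, 10 cores is unreachable; it only appears once asymmetric CCXes are allowed.

```python
from itertools import combinations_with_replacement

CCX_COUNT = 4       # hypothetical two-die package, two CCXes per die
CORES_PER_CCX = 4   # Zen/Zen+ CCX size

# Totals reachable when every CCX has the same number of active cores
# (AMD's practice on Ryzen so far, e.g. 3+3 for six-core parts):
symmetric = {k * CCX_COUNT for k in range(1, CORES_PER_CCX + 1)}
print(sorted(symmetric))   # [4, 8, 12, 16] -- no 10-core option

# Totals reachable if CCXes were allowed to differ (e.g. 3+3+2+2 = 10):
asymmetric = {sum(cfg) for cfg in
              combinations_with_replacement(range(1, CORES_PER_CCX + 1), CCX_COUNT)}
print(10 in asymmetric)    # True
```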


----------



## TheLaughingMan (Sep 28, 2018)

quadibloc said:


> With four cores per CCX, 10 cores aren't possible? Well then, why not go to 12 cores. And some rejects with two failed cores turned off would be the 10 core parts.



They will not do that for the same reason they will not make a 16-core Ryzen chip, and why I think a 10-core Ryzen 2800X is likely just a rumor. It doesn't make any sense to cannibalize your own market segments. We already have a 12-core/24-thread Threadripper chip in the 1920X, with a possible 2920X on the way. There is no reason to shoot yourself in the foot by offering a 10- or 12-core Ryzen chip.


----------



## lexluthermiester (Sep 29, 2018)

TheLaughingMan said:


> There is no reason to shoot yourself in the foot by offering a 10 or 12 core Ryzen chip.


But that isn't what would happen. They would be maximizing inventory that would otherwise sit unused. That's not shooting oneself in the foot, it's being smart. Shooting themselves in the foot would be wasting those unused dies.


----------



## R0H1T (Sep 29, 2018)

Earlzmoade said:


> Not just MSI.
> 
> If you look at the manual for the ASRock X370 Pro BTC+, you can see in the BIOS that it has CPU overclocking settings for 16 cores.


Does the BIOS show SMT as additional cores? If not then 16 core AM4 might not be too far away, though I have to question the 1.3375V~1.400V for 7nm & just 3.6GHz~3.8GHz core clocks.
http://asrock.pc.cdn.bitgravity.com/Manual/X370 Pro BTC+.pdf


----------



## os2wiz (Oct 1, 2018)

Not based on a shred of evidence, and it would likely massively increase the current Zen+ die size. It is an idiotic article by people who need to generate a buzz when there is nothing out there at all to substantiate it. Fake news, in this case.


----------



## Earlzmoade (Oct 1, 2018)

R0H1T said:


> Does the BIOS show SMT as additional cores? If not then 16 core AM4 might not be too far away, though I have to question the 1.3375V~1.400V for 7nm & just 3.6GHz~3.8GHz core clocks.
> http://asrock.pc.cdn.bitgravity.com/Manual/X370 Pro BTC+.pdf




Yeah, I agree. The voltages seem sky-high.

Hoping we get some more cores with Zen 2. Only time will tell, I guess.


----------



## os2wiz (Oct 2, 2018)

Valantar said:


> That's the 16+4+4 I mentioned above. 16 for graphics (and general usage, really), 4 for NVMe (or, again, anything, really), and 4 for the chipset link. The chipsets only provide PCIe 2.0 (8 lanes for 70-series, 6 for 50). So if you want/need more than one full-speed NVMe SSD (which is growing more likely as time passes), you need to eat into the 16 GPU lanes, which means that no/few motherboards will provide more than one NVMe port from the CPU-connected lanes - they'll use the chipset 2.0 lanes instead. Of course, running your GPU at x8 isn't actually a problem, but this requires a riser card for the SSD.
> 
> 
> Yes, it depends on drivers - SLI profiles in the drivers, specifically. SLI has zero effect without a bespoke profile for the game in question (activating it in a game without a profile usually leads to a tiny but measurable performance drop, bugginess, or nothing at all happening). For some games, modders even make their own profiles, with varying success. The only difference between SLI and DX12 multi-GPU in this regard is that the effort lies with Nvidia and not the developer. The statement that "all games will run fine in a dual GPU config" is thus either false (no performance scaling without a profile) or meaningless (defining "running fine" as not requiring performance scaling, invalidating the point of SLI).
> ...



The X470 chipset did away with PCIe 2.0 lanes; all of its lanes are PCIe 3.0.


----------



## Valantar (Oct 2, 2018)

os2wiz said:


> The X470 chipset did away with PCIe 2.0 lanes; all of its lanes are PCIe 3.0.


No.


			
AMD said:

> *PCI EXPRESS® GP**
> x8 Gen2 (plus x2 PCIe® Gen3 when no x4 NVMe)


Link (scroll down the page).
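To put numbers on why the Gen2 chipset lanes matter: rough per-lane throughput makes the x8 Gen2 general-purpose link about half the bandwidth of a hypothetical Gen3 equivalent. A back-of-the-envelope sketch (the ~0.5 and ~0.985 GB/s per-lane figures are approximations after encoding overhead):

```python
# Approximate usable one-direction bandwidth per PCIe lane, in GB/s
# (after 8b/10b and 128b/130b encoding overhead respectively).
PER_LANE_GBPS = {"gen2": 0.5, "gen3": 0.985}

def link_bandwidth(gen: str, lanes: int) -> float:
    """Aggregate one-direction bandwidth of a PCIe link, in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

print(link_bandwidth("gen2", 8))   # X470's x8 Gen2 GP lanes: ~4.0 GB/s
print(link_bandwidth("gen3", 8))   # hypothetical x8 Gen3: ~7.9 GB/s
```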


----------



## TheLaughingMan (Oct 2, 2018)

lexluthermiester said:


> But that isn't what would happen. They would be maximizing inventory that would otherwise sit unused. That's not shooting oneself in the foot, it's being smart. Shooting themselves in the foot would be wasting those unused dies.



This is how. Those dies would not sit around. They would be used for other chips, like the Pro-series Ryzen 2500X or 2600, depending on which parts came out defective, as they have been doing for the last 12 years. This would help with the stock they will need for OEM partners, especially if the rumors about Intel's supply problems are true. It would be wasteful to use two of those dies to build a 10-core chip that would likely run very hot, still would not compete with high-end gaming CPUs from Intel, and would hold no gaming advantage over the Ryzen 2700X or 2600X. HEDT shoppers would be better off going with Ryzen Threadripper if they really wanted a two-die chip, where they would get at least 2 extra cores and 4 extra threads for likely the same price.

So please explain how it makes sense to cut into the dies available for OEM chips, use two of them to make one chip instead of two, and then fail to sell it to anyone because the price would land it in a market segment with significantly better options all around.


----------



## lexluthermiester (Oct 3, 2018)

TheLaughingMan said:


> This is how. Those dies would not sit around. They would be used for other chips, like the Pro-series Ryzen 2500X or 2600, depending on which parts came out defective, as they have been doing for the last 12 years. This would help with the stock they will need for OEM partners, especially if the rumors about Intel's supply problems are true. It would be wasteful to use two of those dies to build a 10-core chip that would likely run very hot, still would not compete with high-end gaming CPUs from Intel, and would hold no gaming advantage over the Ryzen 2700X or 2600X. HEDT shoppers would be better off going with Ryzen Threadripper if they really wanted a two-die chip, where they would get at least 2 extra cores and 4 extra threads for likely the same price.
> 
> So please explain how it makes sense to cut into the dies available for OEM chips, use two of them to make one chip instead of two, and then fail to sell it to anyone because the price would land it in a market segment with significantly better options all around.


Those are some interesting points. All I'm saying is that if AMD thinks they have a market for a product and they can make money, they're going to do it regardless of whether or not certain people think they can't.


----------



## os2wiz (Oct 5, 2018)

lexluthermiester said:


> Those are some interesting points. All I'm saying is that if AMD thinks they have a market for a product and they can make money, they're going to do it regardless of whether or not certain people think they can't.


I never said they can't. I said they won't, and they won't. It is a waste of time for a card with only an 8-month sales window before their 7 nm gaming cards become available. A waste of effort and money.


----------



## lexluthermiester (Oct 5, 2018)

os2wiz said:


> I never said they can't. I said they won't, and they won't. It is a waste of time for a card with only an 8-month sales window before their 7 nm gaming cards become available. A waste of effort and money.


That's an opinion.


----------



## Valantar (Oct 5, 2018)

lexluthermiester said:


> That's an opinion..


And that isn't an argument. We've been through this before.


----------



## lexluthermiester (Oct 5, 2018)

Thus my redundant response to a redundant comment.


----------



## John Naylor (Oct 7, 2018)

Maybe GM should start building 6-, 8-, and 10-wheeled cars, and then car manufacturers could drive up their stocks by outdoing each other in number of wheels.


----------



## Vlada011 (Oct 8, 2018)

I definitely will not buy a processor before this agony stops, this circus named "who is better in Cinebench multi-core for a lower price."
AMD will not tell customers that the 2800X performs like an R5 2600X in games and everyday applications. Same with the 2990WX: you pay for a 2990WX and get a 2600X for everyday use. Adobe Premiere sees half the cores... Who buys those processors, gamers?
People thought that because AMD pitched them at the gaming segment, they are really gaming processors.

Intel is a completely different story; they are lost. One day the news is they are back on 22 nm, the next it's 560 euro for 8 cores. Every day one step closer to Nehalem.
And when AMD shows a higher Cinebench score because of more cores, people will think: that's it, it's better, look...
All of this accompanied by motherboards with a Gen 4 socket worth $400 and 32 GB of RAM worth $300-400.
And after people buy them, six months later reviewers will get a better motherboard from vendors with the story "this is the best"...

I'm going to build 1:48 scale models of NATO planes and vehicles, and watercooling for my four-year-old X99.
Geekbench score 5300/28000 overclocked to 4.5 GHz core / 4.0 GHz cache; the i7-9700K makes 6200/30000, and I will upgrade when DDR5 shows up.
In the meantime: speakers worth $300-400, a 1 TB M.2 drive, etc... A GPU of course, but a GTX 1080 Ti, or an RTX 2080 Ti when competition shows up.


----------



## RichF (Oct 13, 2018)

Vlada011 said:


> AMD will not tell customers that the 2800X performs like an R5 2600X in games and everyday applications. Same with the 2990WX: you pay for a 2990WX and get a 2600X for everyday use. Adobe Premiere sees half the cores... Who buys those processors, gamers?


I think holding the 2990WX against AMD by citing its gaming performance should earn a Monty Python foot.

As for Adobe... That company is so lazy it almost defies belief.


----------

