Wednesday, October 31st 2018

AMD Could Solve Memory Bottlenecks of its MCM CPUs by Disintegrating the Northbridge

AMD sprang back to competitiveness in the datacenter market with its EPYC enterprise processors, which are multi-chip modules of up to four 8-core dies. Each die has its own integrated northbridge, which controls two DDR4 memory channels and a 32-lane PCI-Express gen 3.0 root complex. In applications that not only scale across many cores but are also memory-bandwidth intensive, this non-localized approach to memory presents design bottlenecks. The Ryzen Threadripper WX family highlights many of them: memory-intensive video encoding benchmarks see performance drops as dies without direct access to I/O are starved of memory bandwidth. AMD's solution to this problem is to design CPU dies in which the northbridge (the part of the die with the memory controllers and PCIe root complex) is disabled. This solution could be implemented in its upcoming 2nd-generation EPYC processors, codenamed "Rome."
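
To put the Threadripper WX situation in perspective, here is a rough back-of-the-envelope sketch of peak DRAM bandwidth when only some dies in a package have local memory channels. The DDR4-2666 speed is an assumption for illustration, not a confirmed specification; only the channel counts come from the article.

```python
# Back-of-the-envelope sketch: peak DRAM bandwidth of a 4-die MCM when only
# some dies have local memory channels. DDR4-2666 is an illustrative
# assumption; 2 channels per die matches the article.
PER_CHANNEL_GBS = 8 * 2666 / 1000  # 64-bit DDR4-2666 channel ~= 21.3 GB/s

def package_bandwidth(dies_with_local_dram, channels_per_die=2,
                      per_channel=PER_CHANNEL_GBS):
    """Aggregate peak DRAM bandwidth available to the whole package (GB/s)."""
    return dies_with_local_dram * channels_per_die * per_channel

epyc = package_bandwidth(4)  # EPYC: all four dies have their own channels
wx   = package_bandwidth(2)  # TR WX: two "compute dies" have no local DRAM

print(f"EPYC  (8 channels): ~{epyc:.0f} GB/s")
print(f"TR WX (4 channels): ~{wx:.0f} GB/s, reached over InfinityFabric by dies without local DRAM")
```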

With its "Zen 2" generation, AMD could develop CPU dies in which the integrated northrbidge can be completely disabled (just like the "compute dies" on Threadripper WX processors, which don't have direct memory/PCIe access relying entirely on InfinityFabric). These dies talk to an external die called "System Controller" over a broader InfinityFabric interface. AMD's next-generation MCMs could see a centralized System Controller die that's surrounded by CPU dies, which could all be sitting on a silicon interposer, the same kind found on "Vega 10" and "Fiji" GPUs. An interposer is a silicon die that facilitates high-density microscopic wiring between dies in an MCM. These explosive speculative details and more were put out by Singapore-based @chiakokhua, aka The Retired Engineer, a retired VLSI engineer, who drew block diagrams himself.
The System Controller die serves as the town square for the entire processor, and packs a monolithic 8-channel DDR4 memory controller that can address up to 2 TB of ECC memory. Unlike current-generation EPYC processors, this memory interface is truly monolithic, much like Intel's implementation. The System Controller also features a PCI-Express gen 4.0 x96 root complex, which can drive up to six graphics cards at x16 bandwidth, or up to twelve at x8. The die also integrates the southbridge, known as the Server Controller Hub, which puts out common I/O interfaces such as SATA, USB, and other legacy low-bandwidth I/O, in addition to some more PCIe lanes. There could still be an external "chipset" on the platform that puts out more connectivity.
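
The lane math behind the "six at x16 or twelve at x8" claim is straightforward; the quick sketch below works it out, using the commonly cited ~1.97 GB/s per PCIe 4.0 lane per direction as an assumed ballpark figure, not something quoted by the source.

```python
# Quick arithmetic on the rumored 96-lane PCIe 4.0 root complex. The per-lane
# throughput (~1.97 GB/s per direction) is an assumed ballpark, not a figure
# from the article.
GEN4_GBS_PER_LANE = 1.97
TOTAL_LANES = 96

for width in (16, 8):
    slots = TOTAL_LANES // width
    print(f"x{width:<2}: {slots:2d} devices, ~{width * GEN4_GBS_PER_LANE:.0f} GB/s each per direction")
```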
The Retired Engineer goes on to speculate that AMD could even design its socket AM4 products as MCMs of two CPU dies sharing a System Controller die, but cautioned to take it with "a bowl of salt." This is unlikely, given that the client segment has wafer-thin margins compared to enterprise, and AMD would want to build single-die products - ones in which the integrated northbridge isn't disabled. Still, that doesn't completely discount the possibility of a 2-die MCM for "high-margin" SKUs that AMD can sell for around $500. In such cases, the System Controller die could be leaner, with fewer InfinityFabric links, 2-channel memory I/O, and a 32-lane PCIe gen 4.0 root complex.
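
For what it's worth, the two System Controller variants the article speculates about can be summed up side by side. The figures below come straight from the text; fields the source does not specify (such as InfinityFabric link counts) are left unknown rather than invented.

```python
# Side-by-side summary of the speculated System Controller variants.
# Values are taken from the article; unspecified fields are left as None.
system_controllers = {
    "server (EPYC 'Rome')":  {"ddr4_channels": 8, "max_ecc_memory_tb": 2,
                              "pcie_gen4_lanes": 96, "if_links": None},
    "client (AM4, rumored)": {"ddr4_channels": 2, "max_ecc_memory_tb": None,
                              "pcie_gen4_lanes": 32, "if_links": None},
}

for name, spec in system_controllers.items():
    print(name, spec)
```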

AMD will debut the "Rome" MCM within 2018.
Source: The Retired Engineer

60 Comments on AMD Could Solve Memory Bottlenecks of its MCM CPUs by Disintegrating the Northbridge

#26
champsilva
Vayra86: So, all roads truly do lead to Rome, then.
But don't forget about the great fire of Rome.
#27
Aomine_Law
Wouldn't it be much better to just make the memory controller modular? Just thinking out loud.

I'm just saying this because I'm not sure if more than one memory controller is beneficial at all when you have a multi-CPU setup...

I know... it's a bit out of the box, but yeah.
#28
bug
Aomine_Law: Wouldn't it be much better to just make the memory controller modular? Just thinking out loud.

I'm just saying this because I'm not sure if more than one memory controller is beneficial at all when you have a multi-CPU setup...
What do you mean by "modular"?
For reference, check out the last paragraph here for an overview of the current implementation: en.wikichip.org/wiki/amd/infinity_fabric
#29
Aomine_Law
bug: What do you mean by "modular"?
For reference, check out the last paragraph here for an overview of the current implementation: en.wikichip.org/wiki/amd/infinity_fabric
Ah... so memory controllers stack their performance/bandwidth.
Well... I thought it may be a better idea to just combine the memory controllers into one big die. One you can upgrade, the same way as you can with CPUs.
#30
Zubasa
Aomine_Law: Ah... so memory controllers stack their performance/bandwidth.
Well... I thought it may be a better idea to just combine the memory controllers into one big die. One you can upgrade, the same way as you can with CPUs.
Once you run the traces out to the board / another socket, the latency goes through the roof.
#31
bug
Aomine_Law: Ah... so memory controllers stack their performance/bandwidth.
Well... I thought it may be a better idea to just combine the memory controllers into one big die. One you can upgrade, the same way as you can with CPUs.
Well, you're on to something. That's how things worked before Athlon64 and Core: the memory controller was in the so-called northbridge - a standalone chip sitting on the motherboard. While obviously a more flexible design, it turns out it doesn't cut it anymore in modern systems.

Btw, welcome to TPU ;)
#32
Aomine_Law
Zubasa: Once you run the traces out to the board / another socket, the latency goes through the roof.
bug: Well, you're on to something. That's how things worked before Athlon64 and Core: the memory controller was in the so-called northbridge - a standalone chip sitting on the motherboard. While obviously a more flexible design, it turns out it doesn't cut it anymore in modern systems.

Btw, welcome to TPU ;)
Yeah, I know... but wouldn't it be much easier to reserve PCIe lanes this way?
I'm not saying that this is the solution. It's just that by thinking out of the box one might find new ways to improve the product.

And thanks, bug.
#33
RH92
For everyone here saying this is a bad solution or that it will create more problems, here is some educational material for you:
I mean, do you think the people working on these designs are ignorant or something? Obviously this new design will resolve many problems!
#34
HD64G
Imho, this type of connectivity between CCXs is only meant for the next EPYC and Threadripper, and for this type of usage it is excellent and ingenious indeed. For desktop Ryzens, my opinion is that they will just improve the already existing connectivity; it is more than enough. And with an 8C/16T CCX, most Ryzens will have just one CCX, which means no added latency from the IF.
#35
bug
HD64G: Imho, this type of connectivity between CCXs is only meant for the next EPYC and Threadripper, and for this type of usage it is excellent and ingenious indeed. For desktop Ryzens, my opinion is that they will just improve the already existing connectivity; it is more than enough. And with an 8C/16T CCX, most Ryzens will have just one CCX, which means no added latency from the IF.
Ideally, AMD will want a design that scales across product lines. Otherwise they have to keep redesigning the CCX. But there's no telling which solution they'll choose.
#36
HD64G
bug: Ideally, AMD will want a design that scales across product lines. Otherwise they have to keep redesigning the CCX. But there's no telling which solution they'll choose.
Since EPYC and TR are already made separately from desktop Ryzen CPUs, and they are making money from that, it is very viable to continue doing that, especially when they raise the game by adding many more cores and decreasing latency for the market sections where those are needed most.
#37
bug
HD64G: Since EPYC and TR are already made separately from desktop Ryzen CPUs, and they are making money from that, it is very viable to continue doing that, especially when they raise the game by adding many more cores and decreasing latency for the market sections where those are needed most.
What do you mean "separately"? Aren't they all just the same CCXs in different layouts?
#38
Aldain
Hmmmmm, maybe this gives some MERIT to the HARDOCP forum post which details that Zen 2 has some "newish" IF implementation...
#39
bug
Aldain: Hmmmmm, maybe this gives some MERIT to the HARDOCP forum post which details that Zen 2 has some "newish" IF implementation...
Believe it or not, IF is the Achilles' heel for Zen. It was bound to be reworked in future incarnations.
#40
Steevo
Fabric solutions always create more problems than they solve once they become this complex; the ring-bus approach may be simpler and offer more throughput and lower latency if they can get it wide or fast enough.

AMD brought most of this on themselves: technical issues with Zen, Bulldozer, and other designs meant latency to cache and memory was never truly solved for years, and "add more cores" has always been the answer. They need to build a memory controller for an 8-core that can be expanded to these insane core and thread counts, where a little added latency in a server workload can be masked by software that handles the threads with awareness of the penalties.
#41
efikkan
sergionography: But how does this affect minimum latency? Right now with the current approach there is a somewhat wide delta between min and max latency depending on which core is communicating with what. When an app is running locally on a CCX the latency is excellent, when both CCXs are needed the latency slightly increases, and lastly when one workload needs to reach other chips on the module, latency maxes out. This central northbridge might lower that max latency and make the gap between min and max much smaller; however, from a high level one can expect min latency to take a big hit and increase drastically.
I don't think core-to-core communication between threads is the problem, but rather memory and cache accesses. The impact is greater than just taking the extra jump through the other CCX; it also "borrows" memory bandwidth from that CCX, which can lead to additional bottlenecks.

Most applications are very sensitive to memory latency, so redesigning this approach in future Zen iterations seems like a very good idea. Keeping cache and memory controllers as efficient and low latency as possible is one of the keys to increasing IPC.
#42
HD64G
bug: What do you mean "separately"? Aren't they all just the same CCXs in different layouts?
By separately, I mean they maybe have different packaging and layout. And that's exactly the difference between the supposed new layout of the upcoming EPYC and a normal Ryzen, if the latter stays the same in its layout.
#43
bug
HD64G: By separately, I mean they maybe have different packaging and layout. And that's exactly the difference between the supposed new layout of the upcoming EPYC and a normal Ryzen, if the latter stays the same in its layout.
But you were suggesting different IF implementations between Ryzen and Epyc. That would not mean simply a different layout, but also different CCXs. Which, as I said, would add to the costs. Unless I misunderstood something.
#44
WikiFM
Vya Domus: The cache needs to be low latency, therefore it has to be on the same die.

It's going to be less actually, on average.

If the communication between the cores is hampered as you say, how would that affect the single-thread performance? It's the exact opposite of what you are describing: leaving only the cores and cache on each die would allow for higher clocks and therefore higher single-thread performance and higher performance in general.
There are two different situations. First, inter-core communication between cores on different dies will require a third die in between. Second, single-threaded performance would be lower because the memory controller won't be on-die; that is why AMD implemented the new Dynamic Local Mode.
#45
GorbazTheDragon
Separated I/O for Zen 2 has been in the leaks for at least a month already...

Even in May there were already rumours of a similar idea being passed around at Intel; however, as we all know, Intel is very far behind on the whole MCM architecture, and as such it will be at least a year before any of their offerings are even doing the rounds being sampled before their retail release.

This is the way forward for the high-end CPU market and anyone who says it isn't is just impossibly deluded...

I hope to see AMD continue their competitive streak in the high end; they have set high targets, but I am pretty sure they will be achieved. I also hope they put a bit more time into refining the 8-core and lower chips to be more competitive on the gaming side.
#46
Vya Domus
WikiFM: Second, single-threaded performance would be lower because the memory controller won't be on-die.
Just a blanket statement; nobody has a clue if that's going to have any impact whatsoever. Chances are it won't, if the leaks are true.
#49
WikiFM
R0H1T: So you've passed judgement on an undisclosed (publicly) design based on some of your assumptions & TR2; what about waiting for evidence or results?
I agree, but that applies both ways.