Wednesday, June 12th 2019

AMD Ryzen 3000 "Matisse" I/O Controller Die 12nm, Not 14nm

AMD Ryzen 3000 "Matisse" processors are multi-chip modules made up of two kinds of dies: one or two 7 nm 8-core "Zen 2" CPU chiplets, and an I/O controller die that packs the processor's dual-channel DDR4 memory controller, PCI-Express gen 4.0 root complex, and an integrated southbridge that puts out some SoC I/O, such as two SATA 6 Gbps ports, four USB 3.1 Gen 2 ports, LPC (legacy ISA), and SPI (for the UEFI BIOS ROM chip). It was earlier reported that while the "Zen 2" CPU chiplets are built on a 7 nm process, the I/O controller die is 14 nm. We now have confirmation that the I/O controller die is built on the more advanced 12 nm process, likely GlobalFoundries 12LP. This is the same process on which AMD builds its "Pinnacle Ridge" and "Polaris 30" chips. The 7 nm "Zen 2" CPU chiplets are made at TSMC.

AMD also provided a fascinating technical insight into the making of the "Matisse" MCM, particularly getting three highly complex dies under the IHS of a mainstream-desktop processor package, and perfectly aligning the three for pin-compatibility with older generations of Ryzen AM4 processors that use monolithic dies, such as "Pinnacle Ridge" and "Raven Ridge." AMD developed new 50 µm copper-pillar bumps for the 8-core CPU chiplets, while leaving the I/O controller die with conventional 75 µm solder bumps. Unlike with its GPUs, which need high-density wiring between the GPU die and HBM stacks, AMD could make do without a silicon interposer or TSVs (through-silicon vias) to connect the three dies on "Matisse." The fiberglass substrate is "fattened up" to 12 layers to facilitate the inter-die wiring, as well as to make sure every connection reaches the correct pin on the µPGA.
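As a quick summary of the configuration described above, here is a minimal sketch in Python; the field names and groupings are our own illustrative shorthand, not AMD nomenclature, and the 50 µm/75 µm values are the bump sizes as reported (pitch vs. diameter unspecified).

```python
# Illustrative summary of the "Matisse" MCM as reported; field names are
# our own shorthand, not AMD nomenclature.
from dataclasses import dataclass

@dataclass
class Die:
    name: str
    process: str    # foundry process node
    attach: str     # die-to-substrate bump type
    bump_um: int    # reported bump size (pitch vs. diameter unspecified)

matisse = [
    Die("Zen 2 CPU chiplet (one or two per package)", "TSMC 7 nm",
        "copper pillar", 50),
    Die("I/O controller die", "GlobalFoundries 12LP (likely)",
        "solder bump", 75),
]

# All dies sit on a 12-layer fiberglass substrate (no silicon interposer
# or TSVs), routed to the AM4 µPGA pin-out.
for d in matisse:
    print(f"{d.name}: {d.process}, {d.attach} bumps @ {d.bump_um} µm")
```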

44 Comments on AMD Ryzen 3000 "Matisse" I/O Controller Die 12nm, Not 14nm

#26
olymind1
Wavetrex: Yes, it does:

- 2 SATA ports OR 4 PCI-e lanes (a single NVMe drive), so no SATA at all if you use a single NVMe drive
- 2 USB 3.1
- 16 PCIe lanes for GPU
- Audio chip link.

+ 4 free PCI-e lanes, which work together (as a single x4 link).

So what are you going to do with those ?
- Network ? Then you won't have any extra USBs
- USBs? Then you won't have any SATA... or network, or wifi, or anything else
- SATA? Then give up on USBs and the rest...

Obviously all these options are very bad, so you need a device that gets 4 lanes in and a whole bunch of lanes out + other ports (USB, SATA, etc.)
That's what the chipset/southbridge does... and in the case of AMD's X370, X470, X570, that's A LOT of stuff on offer, even more than Intel's.

And it works because it's extremely rare for all of the devices to communicate at once, so the 4 lanes between CPU and SB are more than enough.
I did, however, manage to overload them by copying from 4 SATA SSDs to an NVMe drive and from the network to USB 3.1 external drives all at once (intentionally overloading the southbridge). But that's an extremely rare use case.
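As a rough sanity check on that overload scenario, here's a back-of-the-envelope sketch in Python; the per-device throughput figures are assumed typical values, not measurements from that experiment.

```python
# Back-of-the-envelope check of the chipset-uplink overload described above.
# Per-device throughputs (GB/s) are assumed typical values, not measurements.

PCIE3_PER_LANE = 0.985                    # PCIe 3.0, ~1 GB/s per lane usable
uplink = 4 * PCIE3_PER_LANE               # x4 CPU-to-chipset link, ~3.94 GB/s

demand = {
    "4x SATA SSD reads": 4 * 0.55,        # ~550 MB/s each
    "Gigabit Ethernet": 0.125,
    "USB 3.1 Gen 2 external drives": 1.0, # practical ceiling of a 10 Gbps port
}

total = sum(demand.values())
print(f"uplink capacity: {uplink:.2f} GB/s, aggregate demand: {total:.2f} GB/s")
# ~3.3 GB/s of demand against a ~3.9 GB/s link: close to saturation, which is
# why this only shows up in deliberately constructed corner cases.
```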
I see your point. Maybe in a few generations they will increase the number of USB controllers/channels, and SATA too, inside the CPU.

For me though, 4 USB ports are enough. If there were a LAN and a SATA controller chip onboard, there would be 2 more PCI-E x1 lanes free: 1 for a sound card and 1 for another SATA or USB 3.0 card.

But true, it is easier if the Bx50 handles those I/O and storage devices.
btarunr: Contact also says that the B450 successor, which will come out late 2019 or early 2020, will be a new ASMedia chip with PCIe gen 4 support (and possibly Chimera hardening). It will be as feature-rich as X470. If you want Ryzen 3000, don't need SLI/CFX support, and don't mind waiting till Xmas, I highly recommend waiting for that B450-successor chipset.
It's a little disappointing; it's as if they don't want me or others to buy their new Ryzen 3600 CPU. :(
#28
Jism
HwGeek: Do we know if this 1.6 V [LN2 OC] goes into the CPU chiplets, or if the I/O chiplet runs at 1.6 V and the CPU chiplets run at a lower voltage? [Maybe we should stop thinking in a monolithic way?]
Ryzen has very specific voltages for the CPU; almost every aspect has been covered for first- and second-generation Ryzen. I don't think the I/O die will run at the same voltage as the CPU.
Mats: Some of Thermalright's old chipset coolers gave you the opportunity to mount them at any desired angle, possibly making installation of expansion cards easier.

There aren't many heatpipe coolers around these days; there's only one here. That'll change soon though.
geizhals.eu/?cat=coolchip&xf=10809_Chipsatz-K%FChler
I think any good board will have a silent / normal / turbo mode for the chipset fan. They would shoot themselves in the foot if the chipset fan were fixed, i.e. at 100% duty cycle all the time.
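For illustration, a silent/normal/turbo setting only has to pick between temperature-to-duty-cycle curves like the hypothetical ones below; the breakpoints are invented, not taken from any real board's firmware.

```python
# Hypothetical chipset fan curves; breakpoints are invented for illustration
# and not taken from any real board's firmware.

FAN_CURVES = {
    # profile: (temperature °C, PWM duty %) breakpoints
    "silent": [(0, 0), (60, 20), (75, 50), (90, 100)],
    "normal": [(0, 20), (50, 40), (70, 70), (85, 100)],
    "turbo":  [(0, 50), (40, 70), (60, 100)],
}

def duty_cycle(profile: str, temp_c: float) -> float:
    """Linearly interpolate PWM duty from the selected profile's breakpoints."""
    points = FAN_CURVES[profile]
    if temp_c <= points[0][0]:
        return points[0][1]
    for (t0, d0), (t1, d1) in zip(points, points[1:]):
        if temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return points[-1][1]  # clamp at the top of the curve

print(duty_cycle("silent", 65))  # 30.0 -> quiet under light chipset load
```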
#29
TheLostSwede
News Editor
olymind1: I see your point. Maybe in a few generations they will increase the number of USB controllers/channels, and SATA too, inside the CPU.

For me though, 4 USB ports are enough. If there were a LAN and a SATA controller chip onboard, there would be 2 more PCI-E x1 lanes free: 1 for a sound card and 1 for another SATA or USB 3.0 card.

But true, it is easier if the Bx50 handles those I/O and storage devices.
That's not how it's set up today though, and AMD doesn't seem to have either the interest or the IP to add a built-in Ethernet controller. It's also missing a few other key features that are unlikely to be integrated into the CPU package, as they have never been part of the chipset to date, such as the "super" I/O controllers.

If you look at the A300/X300 boards, they technically don't have a chipset, just an I/O controller for legacy I/O, GPIO, I2C and a few things like that. So AMD has already done something along the lines of what you're asking for; it just didn't turn out to be very popular with the board makers, as their value-add is close to zero.
Mats: Some of Thermalright's old chipset coolers gave you the opportunity to mount them at any desired angle, possibly making installation of expansion cards easier.

There aren't many heatpipe coolers around these days; there's only one here. That'll change soon though.
geizhals.eu/?cat=coolchip&xf=10809_Chipsatz-K%FChler
Well, maybe we'll see a resurgence of this kind of product...
#30
Casecutter
Now we see where AMD/RTG was putting that "pittance" of R&D funds they had... to use! And man, they're achieving things neither competitor thought they could execute on. Jen-Hsun has to pull his Super costume out to make himself feel relevant, while Intel is looking up after getting bowled over on the floor.
MatsThere aren't many heatpipe coolers around these days,there's only one here.
I've got the HR-55 still in my collection.
#32
Mephis
Casecutter: Now we see where AMD/RTG was putting that "pittance" of R&D funds they had... to use! And man, they're achieving things neither competitor thought they could execute on. Jen-Hsun has to pull his Super costume out to make himself feel relevant, while Intel is looking up after getting bowled over on the floor.

I've got the HR-55 still in my collection.
I hate to break it to you, but if you think Nvidia is scared by what we saw at E3, you are crazy. AMD has the node advantage, and the best they could do is match a GPU that was released eight months ago.
#33
Octopuss
I bought my share of northbridge heatsinks in my life.
I'm sad I will have to again, despite it bringing back memories of my youth.
#34
Casecutter
Mephis: I hate to break it to you, but if you think Nvidia is scared by what we saw at E3...
Jen-Hsun, or what he got out of E3? Probably not... but given the pittance-of-a-pittance of engineering resources RTG has gotten compared to what Nvidia has poured in over the last five years, that says more about Nvidia. RTG has a GPU with, what, 15% lower CU count, and actually less die area if both were built on a corresponding node process, even if you pull out the RT cores. So I won't count out RTG. Forget who's "winning"; all that matters is being able to sell every 7 nm chip they can get their hands on, and I don't think that's going to be any problem.
#35
Vayra86
TheLostSwede: That's what Intel said:
www.notebookcheck.net/Intel-doesn-t-think-PCI-Express-4-0-is-a-big-deal-and-has-the-numbers-to-prove-it.423772.0.html
Better, most likely, but not as wacky or weird...
Given the time frame in which it will be relevant, I think they're accurate. PCIe 4.0 will be eclipsed sooner than 3.0 was.

Turn it around: define the use cases for that extra bandwidth right now. GPU? None. There is only storage, and any sane person has long since concluded that with simple SATA you've already eliminated most of the storage 'bottleneck' for, I reckon, over 90% of users.

What's really been missing all these years was lane count, never bandwidth. And even for that, only a small group would feel inclined to get HEDT. It's not like that reality has suddenly changed. Even with much faster storage available, speed is only useful if you can actually use it.
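To put numbers on that, here's a small Python sketch comparing x4 link bandwidth per PCIe generation against SATA III; the per-lane figures are the standard published rates after encoding overhead.

```python
# Usable bandwidth per lane in GB/s after encoding overhead
# (8b/10b for gen 1/2, 128b/130b for gen 3/4); standard published figures.
PCIE_PER_LANE = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969}
SATA3 = 0.6  # 6 Gbps line rate, ~550 MB/s in practice

for gen, per_lane in PCIE_PER_LANE.items():
    x4 = 4 * per_lane
    print(f"PCIe {gen}.0 x4: {x4:5.2f} GB/s ({x4 / SATA3:4.1f}x SATA III)")
# Even PCIe 3.0 x4 is ~6.6x SATA III, which is why the SATA interface, not
# the PCIe generation, is the ceiling most users actually hit.
```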

Regardless, PCIe 4.0 is irrelevant in the whole circus. AMD is killing it, and it just keeps looking better for Zen. They have by far the most efficient use of chips, and a product with immense margin thanks to Intel's pricing strategy of the past decade.
#36
Mephis
Casecutter: Jen-Hsun, or what he got out of E3? Probably not... but given the pittance-of-a-pittance of engineering resources RTG has gotten compared to what Nvidia has poured in over the last five years, that says more about Nvidia. RTG has a GPU with, what, 15% lower CU count, and actually less die area if both were built on a corresponding node process, even if you pull out the RT cores. So I won't count out RTG. Forget who's "winning"; all that matters is being able to sell every 7 nm chip they can get their hands on, and I don't think that's going to be any problem.
We have no clue which GPU would be bigger if they were on equal nodes. But considering that Nvidia has dominated AMD in efficiency and performance in recent years, I would bet on them.

In an ideal world, process names would actually tell you the size of the smallest feature on a chip, but in the real world they don't. In that ideal world, a 7 nm GPU with identical features to a 12 nm one would be about 35% the size of the 12 nm one, since (7/12)² ≈ 0.34. That would mean a 7 nm 2070 would be roughly 156 mm². The 5700 XT is 251 mm², or 56% the size of the 2070. Again, we don't have all the info we need to know what kind of scaling we should be seeing, but I would bet that on equal footing the 5700 XT would be the bigger chip.
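For the record, here's the arithmetic behind those figures as a minimal Python sketch; the TU106 (RTX 2070) die area of ~445 mm² is the commonly cited figure and is our assumption, since the post only implies it.

```python
# Ideal-scaling arithmetic behind the die-size comparison above.
# Assumes TU106 (RTX 2070) is ~445 mm²; the post only implies this figure.
tu106_mm2 = 445.0
navi10_mm2 = 251.0  # Navi 10 (RX 5700 XT)

scale = (7 / 12) ** 2             # ideal area scaling from 12 nm to 7 nm
ideal_7nm_2070 = tu106_mm2 * scale

print(f"ideal area scale: {scale:.2f}")                    # ~0.34; the post rounds to 35%
print(f"hypothetical 7 nm 2070: {ideal_7nm_2070:.0f} mm²") # ~151; the post says ~156
print(f"Navi 10 vs TU106: {navi10_mm2 / tu106_mm2:.0%}")   # 56%
# Real designs never scale ideally (SRAM and analog shrink less than logic),
# so the ideal figure is a floor, not an expectation.
```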

And "winning" should mean growing market share and making more money than Nvidia. AMD may grow market share, but I'm willing to bet that Nvidia will still have higher profits and a higher gross margin on each of there gpus sold.
#37
lexluthermiester
xkm1948: This is giving me some '80s retro-style vibe

'80s & '90s.
#38
Casecutter
Mephis: But considering that Nvidia has dominated AMD in efficiency and performance in recent years, I would bet on them.
Well, I've basically understood that CUDA cores are bigger/more complex, while AMD's stream processors are smaller/simpler and work at lower frequency, whereas CUDA cores are expected to run at higher frequency (while still being more efficient). From that we'd conclude that 2944 CUDA cores use more "real estate" than 2560 GCN shaders. As we're talking about die area, even a small percentage pays dividends. Please, others, school me if I'm incorrect...

www.techconsumerguide.com/nvidia-cuda-cores-vs-amd-stream-processor/
Mephis: I'm willing to bet that Nvidia will still have higher profits and a higher gross margin on each of their GPUs sold.
Sure they will; they don't have the challenge of deciding which products/wafers should go ahead of which. I'm sure AMD/RTG is scheduling all the 7 nm starts it can get.
I guess I'm less attached to "greed is good".
#39
InVasMani
Jism: This has been going on for many years already. Look at phones and their SoCs: almost everything is housed inside one chip, and it offers all possible functionality. AMD has done it in a very clever way to maximize performance and keep costs low. A 10 W chipset isn't the end of the world. I'm sure users will be able to cool it passively, and that the fan is temperature-controlled and not fixed (as we had in the old days).

PCI-E 4.0 doesn't offer that much of an "extra" gain compared to 3.0; AMD said this in their own presentation. No need to jump to PCI-E 4.0 and put in a 4.0-capable card, as it won't do much compared to PCI-E 3.0. What's more interesting is bumping the default PCI-E clock up from 100 to 120 MHz, for example.
The increased R&D budget should help bolster AMD's graphics division for what comes after Navi. The transition down to 7 nm or 7 nm+ will be a nice jump in performance for Nvidia at the same time, though. What AMD has planned to follow Navi is somewhat critical: they can't let their foot off the gas, and they need to accelerate their plans a bit and be more aggressive.

AMD should probably aim for:
  • 3× the instruction rate of Navi for its successor
  • 3× to 4× further lossless compression
  • an increase in ROPs from 64 to 80
  • improving the texture filter units by 0.5×
  • improving the texture mapping units by 0.5× to 1.5× (allowing for a better ratio of TFUs to TMUs)
  • 3-CU resource pooling
  • 7 nm+ or a node shrink
  • more GDDR capacity; hopefully by the time a successor arrives we could see more per-chip GDDR6 capacity or a price reduction
  • higher-clocked GDDR
Bottom line: I think AMD should really try to be more aggressive, further optimize the efficiency of its design, and hopefully bump up frequency a bit as well. I don't think they need more stream processors right now; rather, they need to improve overall efficiency to get more out of the ones they have. They should also aim to offer a few more GPU SKUs to consumers at different price targets. I tend to think that if they do that, they might even be able to cut down chips to offer some good 2× or even 3× dual/triple GPUs based on PCIe 4.0, which could be good. If they could make the ROPs scale 44/64/80, it would work well for just that type of thing, allowing better yields and binning options for AMD to offer to consumers.

Those are my optimistic, aggressive expectations of what AMD should aim for with Navi's successor, if the R&D budget allows for it at least. They should really make some attempt to leapfrog ahead a bit further, especially as Nvidia will be shrinking down to a lower node for whatever comes after Turing anyway; "SUPER" sounds like more of a simple refresh and rebadge, with a new, bigger high-end Super Titan SKU added, because what else would they name it, a 2080 Ti Super? Why?!
Casecutter: Well, I've basically understood that CUDA cores are bigger/more complex, while AMD's stream processors are smaller/simpler and work at lower frequency, whereas CUDA cores are expected to run at higher frequency (while still being more efficient). From that we'd conclude that 2944 CUDA cores use more "real estate" than 2560 GCN shaders. As we're talking about die area, even a small percentage pays dividends. Please, others, school me if I'm incorrect...

www.techconsumerguide.com/nvidia-cuda-cores-vs-amd-stream-processor/


Sure they will; they don't have the challenge of deciding which products/wafers should go ahead of which. I'm sure AMD/RTG is scheduling all the 7 nm starts it can get.
I guess I'm less attached to "greed is good".
Nvidia's GPUs are in general more granular in terms of workload management, and thus power and efficiency. AMD needs to step it up more. It's not that AMD GPUs can't be efficient, but for a GPU like Vega 56/64 to compete with Nvidia's higher-end and more diverse offerings, they have to stray outside their power/efficiency sweet spot, so they end up looking less efficient and more power-hungry than they could be under more ideal circumstances, with a better budget to design GPUs as complex and granular as Nvidia's. It boils down to price segments and where each company positions its products, but it's more of an uphill battle for AMD given the R&D budget. The transition to 7 nm was a smart call for AMD: it'll get cheaper over time, along with yield and binning improvements, and it should make for an easier transition to 7 nm+ as well. Finer power gating would probably also help AMD a fair amount in improving load and idle TDP; it will become more important anyway at lower node sizes, to reduce voltages and waste heat, and it matters for mobile, which is an area with big room for growth for the company.
#40
Mephis
Casecutter: Sure they will; they don't have the challenge of deciding which products/wafers should go ahead of which. I'm sure AMD/RTG is scheduling all the 7 nm starts it can get.
I guess I'm less attached to "greed is good".
It has nothing to do with greed. The point of any corporation is to maximize profits. You can do that in two ways: either sell a ton of items at low margin (Ford, GM, etc.) or sell fewer items at high margin (Ferrari, Bentley, etc.).

P.S. - I guess there is a third option: sell a lot of products with very high margins, the Apple way.
#41
Zubasa
metalfiber: It was the northbridges on mobos back in the day that needed the extra cooling.
An interesting tidbit: the SoC/I/O die of Ryzen is functionally a northbridge.
It serves more or less the same function, connecting the CPU to the memory and the fastest buses.
It also connects to the southbridge for slower I/O.
So in this case, if the X570 really is just the Zen 2 I/O die, it is a "northbridge" chip serving the function of a PCH / "southbridge".
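As a minimal sketch of that mapping (purely illustrative, a simplification rather than an exhaustive feature list):

```python
# Illustrative mapping of the classic northbridge/southbridge split onto
# "Matisse" and X570; a simplification, not an exhaustive feature list.
classic_to_matisse = {
    "northbridge (memory controller, PCIe root, link to southbridge)":
        "Matisse I/O die (DDR4 controller, PCIe 4.0 root complex, "
        "chipset link, some SoC I/O)",
    "southbridge (SATA, USB, LPC, other slow I/O)":
        "X570 chipset (additional SATA, USB and PCIe 4.0 lanes)",
}

for old, new in classic_to_matisse.items():
    print(f"{old}\n  -> {new}")
```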
#42
Casecutter
InVasMani: Nvidia's GPUs are in general more granular in terms of workload management, and thus power and efficiency.
Well, as I've understood it, RDNA advances most of what you cover above as the nucleotide idea (building blocks), or as you indicate, "granular". Most of the texture, compression and instruction resources should, in theory, be adjustable to the card's placement in the market (entry, mid-range, enthusiast), or tailored for the workload (gaming/professional/HPC/AI), instead of fixed modules that might contain too much or not enough of the specific parts for a workload.

My take on capitalism:
I say a true business isn't beholden to the stock price... They deliver a quality product at a price that enables them to continuously move forward on products, quality and markets, to maintain/secure their position and that of the employees who make them what they are.

Once you've worked at that kind of company, you'll get it.
#43
heky
Casecutter: I say a true business isn't beholden to the stock price... They deliver a quality product at a price that enables them to continuously move forward on products, quality and markets, to maintain/secure their position and that of the employees who make them what they are.
That, my man, is exactly how I think every business should be run!
#44
Roddey
turbogear: The only problem I see here is on mainboards where the X570 chip sits behind the GPU PCIe slot. :(
In that case it would be hard to use a tall heatsink like we used to 15 years ago, due to the lengthy GPUs nowadays.

Back in those days I hated those noisy small fans.
Zalman used to offer nice aftermarket chipset heatsinks, but they were quite tall. I used to own one like this back then: :p

I just noticed this at a retailer and thought it could work as a solution for some people. It looks like it would work in my case if the chipset fan failed and I couldn't find a replacement. Might work with a short finned heatsink? Just a guess. Kinda expensive though.
www.in-win.com/en/fans/mars/#product_download