Wednesday, March 2nd 2022

Intel, AMD, Arm, and Others Collaborate on UCIe (Universal Chiplet Interconnect Express)

Intel, along with Advanced Semiconductor Engineering Inc. (ASE), AMD, Arm, Google Cloud, Meta, Microsoft Corp., Qualcomm Inc., Samsung, and Taiwan Semiconductor Manufacturing Co., has announced the establishment of an industry consortium to promote an open die-to-die interconnect standard called Universal Chiplet Interconnect Express (UCIe). Building on its work on the open Advanced Interface Bus (AIB), Intel developed the UCIe standard and donated it to the group of founding members as an open specification that defines the interconnect between chiplets within a package, enabling an open chiplet ecosystem and ubiquitous interconnect at the package level.

"Integrating multiple chiplets in a package to deliver product innovation across market segments is the future of the semiconductor industry and a pillar of Intel's IDM 2.0 strategy," said Sandra Rivera, executive vice president and general manager of the Datacenter and Artificial Intelligence Group at Intel. "Critical to this future is an open chiplet ecosystem with key industry partners working together under the UCIe Consortium toward a common goal of transforming the way the industry delivers new products and continues to deliver on the promise of Moore's Law."
The founding companies, representing a wide range of industry expertise across cloud service providers, foundries, system OEMs, silicon IP providers and chip designers, are finalizing incorporation as an open standards body. Upon incorporation of the new UCIe industry organization this year, member companies will begin work on the next generation of UCIe technology, including defining the chiplet form factor, management, enhanced security and other essential protocols.

The chiplet ecosystem created by UCIe is a critical step in the creation of unified standards for interoperable chiplets, which will ultimately allow for the next generation of technological innovations.

For more information, visit this page.

19 Comments on Intel, AMD, Arm, and Others Collaborate on UCIe (Universal Chiplet Interconnect Express)

#1
Nanochip
As much hate as Intel gets, we should give them credit for open standards like USB, PCIe, CXL, the donation of Thunderbolt to the USB-IF (which enabled USB4), and now UCIe. They also facilitated the initial bring-up of kernel support for USB4 on Linux. Kudos to them.
#2
lexluthermiester
I would love to see universal CPU socket re-adoption. Back in the early days of PCs, CPU sockets were designed to handle a CPU made by anyone. This made upgrades and swap-outs a breeze.

This proprietary socket nonsense has always sucked.
#3
bobsled
What on earth are Meta doing getting involved in something that could benefit society? Why start now? They clearly attended the wrong consortium…
#4
AnarchoPrimitiv
I'm surprised Intel didn't call this consortium "Glue"... it's funny how they went from bashing chiplets to fully embracing them. If there were a way to ban Intel from using chiplets and make them eat their words on "glue", I'd be all for it... in addition to not allowing them to use any foundry but their own.
#5
TheLostSwede
News Editor
lexluthermiester: I would love to see universal CPU socket re-adoption. Back in the early days of PCs, CPU sockets were designed to handle a CPU made by anyone. This made upgrades and swap-outs a breeze.

This proprietary socket nonsense has always sucked.
Actually, it was the other way around: everyone adopted what Intel had already made. Most of it was simply because companies like IBM demanded a second source for CPUs. I mean, that's pretty much how AMD became a CPU manufacturer.
#6
TechLurker
Kind of surprised, but also not, to NOT see NVIDIA on the party list. One would have expected them to be all for it since they're considering chiplet GPUs, but given that they like pursuing proprietary hardware, I guess they're not immediately interested.
#7
lexluthermiester
TheLostSwede: Actually, it was the other way around: everyone adopted what Intel had already made. Most of it was simply because companies like IBM demanded a second source for CPUs. I mean, that's pretty much how AMD became a CPU manufacturer.
The details of how that dynamic came to be are a history lesson in themselves. I just would love to see it happen again. It would solve SOOOOO many problems!
#8
TheLostSwede
News Editor
lexluthermiester: The details of how that dynamic came to be are a history lesson in themselves. I just would love to see it happen again. It would solve SOOOOO many problems!
What problems exactly? And who decides when the shift to a new socket happens? AMD and Intel clearly have very different ideas on when to transition not only sockets, but also things like PCIe generations, RAM types and whatnot.
Unfortunately, I think it would stifle innovation in some ways, but it might also lead to less electronic waste and slower upgrade cycles.
#9
DeathtoGnomes
Nanochip: As much hate as Intel gets, we should give them credit for open standards like USB, PCIe, CXL, the donation of Thunderbolt to the USB-IF (which enabled USB4), and now UCIe. They also facilitated the initial bring-up of kernel support for USB4 on Linux. Kudos to them.
The hate towards Intel and others like Nvidia is due to them releasing/promoting "new standards" and saying "this is the best, use it or else be left sitting on the curb".
#10
lexluthermiester
TheLostSwede: What problems exactly?
Whole-platform upgrades when replacing a CPU, for starters. Replacing a mobo and often heatsink + RAM just to change a CPU is irritating and wasteful, even if you can resell the leftovers. That's one problem that didn't exist until after Socket 7.
TheLostSwede: And who decides when the shift to a new socket happens?
Well, in a consortium, the group would develop a replacement standard that could be adopted when the CPU/chipset makers were ready. Everyone would have input.
TheLostSwede: AMD and Intel clearly have very different ideas on when to transition not only sockets, but also things like PCIe generations, RAM types and whatnot.
True. But with a common motherboard, the CPUs would add to the base features, and the user would decide what feature set is to their liking and choose a motherboard accordingly. The user would not need to replace the CPU to upgrade a feature set. Likewise, a user would not need to replace the motherboard to change/upgrade a CPU. BITD, CPUs would plug into and use whatever features a motherboard provided.

It was simple and easy. The previous way of doing things was FAR more flexible and much more environmentally friendly.
#11
Wirko
lexluthermiester: It would solve SOOOOO many problems!
Like, a certain company could start making those nForce chips again.

But seriously, it doesn't sound like science fiction any longer. CPU/system-on-chip development by Intel and AMD has basically converged: the level of integration is very similar in both, and even the number of pins is suspiciously similar, at least on consumer platforms. One serious issue of course remains: how would an Intel CPU on a non-Intel board (and non-Intel chipset) know whether it's allowed to run overclocked?
bobsled: What on earth are Meta doing getting involved in something that could benefit society? Why start now? They clearly attended the wrong consortium…
They need a technology to build foot-tall stacks of chiplets full of user data, metadata, Metadata, and other user data.
#12
Valantar
lexluthermiester: True. But with a common motherboard, the CPUs would add to the base features, and the user would decide what feature set is to their liking and choose a motherboard accordingly. The user would not need to replace the CPU to upgrade a feature set. Likewise, a user would not need to replace the motherboard to change/upgrade a CPU. BITD, CPUs would plug into and use whatever features a motherboard provided.

It was simple and easy. The previous way of doing things was FAR more flexible and much more environmentally friendly.
Except that back then most controllers were off the CPU and thus independent of CPU upgrades. For what you're saying here to work, you'd need PCIe and other IO controllers to physically be on the motherboard, not on the CPU. If not, then your scenario of "the user would not need to replace the CPU to upgrade a feature set" just doesn't work. And if anything, controllers have been migrating onto CPUs, not off of them, due to this being far more power efficient, more performant, and more flexible.

I would also love if Intel and AMD shared a socket, but ... well, that sounds utopian. Not only would it be a "consortium" of two members (unless motherboard makers get to join, which doesn't really make sense - they would literally always be motivated to not want to make a new model as that's quite expensive), and two members with a ~4-to-1 power balance, at least in market share; you would make BIOS development massively more complicated (including managing diverging featuresets across not only CPU lines but vendors); you would need a socket/platform design that attempts to be unrealistically future-proof (at least accounting for one generation ahead in any relevant I/O standards for your desires to be possible), and a bunch of other issues.


As for this consortium, my first thought was "this sounds great". Second thought: "Coming from Intel though ... wonder if they've purposely designed this to be inferior to EMIB and its derivatives?" I certainly wouldn't put it past them in how they tend to conduct business, though they might not have done anything like that simply because engineering something to be slightly worse than something else is incredibly difficult - and it couldn't be too much worse, as it wouldn't see adoption.

Still, broader adoption of chiplet architectures is a great thing, as are open standards. Curious why Nvidia is nowhere to be seen though - but then given that both TSMC and Samsung are in, I guess they'd have access anyhow.
#13
ghazi
I'm actually very surprised Intel would do this. Their next-gen interconnect tech is supposed to give them a big advantage over AMD's current chiplet architecture thanks to much lower latency. Are they really going to give it all up to AMD, even if AMD/TSMC already have similar tech in their pipeline? It makes sense that their marketing is about making chiplet design available to their foundry customers (all 0 of them), but that doesn't totally add up; there must be something more to it.
TechLurker: Kind of surprised, but also not, to NOT see NVIDIA on the party list. One would have expected them to be all for it since they're considering chiplet GPUs, but given that they like pursuing proprietary hardware, I guess they're not immediately interested.
Intel and NVIDIA really do not like each other, for a wide variety of reasons, including the fact that Intel sees NVIDIA CPUs as a more existential threat than AMD's. Without even going into finances, ecosystem, or anything else, AMD is just another supplier of x86 CPUs; Intel would rather lose share to them than lose the captive market for x86. Relations between Intel and AMD are much more friendly in general, from what I've seen. I'm sure NVIDIA has their reasons for not wanting to be part of this as well.
TheLostSwede: What problems exactly? And who decides when the shift to a new socket happens? AMD and Intel clearly have very different ideas on when to transition not only sockets, but also things like PCIe generations, RAM types and whatnot.
Unfortunately, I think it would stifle innovation in some ways, but it might also lead to less electronic waste and slower upgrade cycles.
Well, as you probably know, how it worked in the old days was that someone (Intel) would put out a new socket, and then all the underdogs would make chips for the same socket so they were compatible with the dominant platform and common motherboards. So you could upgrade your 66 MHz POS to a 233 MHz AMD copycat on the cheap, for example. I don't think you'll ever see such a thing actually planned by a consortium; as you said, it would stifle innovation and competition, and who would agree to it?
#14
Wirko
Didn't Intel avoid the word "chiplet" until now and use "tile" instead?
#15
ghazi
Valantar: As for this consortium, my first thought was "this sounds great". Second thought: "Coming from Intel though ... wonder if they've purposely designed this to be inferior to EMIB and its derivatives?" I certainly wouldn't put it past them in how they tend to conduct business, though they might not have done anything like that simply because engineering something to be slightly worse than something else is incredibly difficult - and it couldn't be too much worse, as it wouldn't see adoption.

Still, broader adoption of chiplet architectures is a great thing, as are open standards. Curious why Nvidia is nowhere to be seen though - but then given that both TSMC and Samsung are in, I guess they'd have access anyhow.
I thought that as well initially, but it wouldn't make any sense. Hard to say. I also wondered if the spec might be 'missing something', but it seems everything is there; the only possible thing I can imagine is that the spec runs over the CXL/PCIe protocol, whereas Intel could use something else with the same physical layer.

Bear in mind TSMC and AMD also already had EMIB-like tech in their pipeline. For all we know, TSMC might have been ahead of Intel on future development in this area and everyone decided to settle down and level the playing field. It does lower the risk and the investment cost for all parties as far as chiplet interconnects go and that is a big deal. With an open standard AMD and Intel can focus their attention elsewhere and not have to worry that chiplet interconnects will ruin their next generation's competitive standing.

As I look deeper into it though, I find it very interesting how the materials from this consortium repeatedly refer to building SoCs with IP/chiplets from different suppliers. That is an interesting focus. Is that just for the foundry customers, or do the big guys have plans of their own? Interesting times ahead.
Wirko: Didn't Intel avoid the word "chiplet" until now and use "tile" instead?
Chiplets are for the open standard plebs. Only THEIR chiplets are tiles :laugh:
#16
Valantar
ghazi: I thought that as well initially, but it wouldn't make any sense. Hard to say. I also wondered if the spec might be 'missing something', but it seems everything is there; the only possible thing I can imagine is that the spec runs over the CXL/PCIe protocol, whereas Intel could use something else with the same physical layer.

Bear in mind TSMC and AMD also already had EMIB-like tech in their pipeline. For all we know, TSMC might have been ahead of Intel on future development in this area and everyone decided to settle down and level the playing field. It does lower the risk and the investment cost for all parties as far as chiplet interconnects go and that is a big deal. With an open standard AMD and Intel can focus their attention elsewhere and not have to worry that chiplet interconnects will ruin their next generation's competitive standing.
My guess: Intel Foundry Services is realizing that even if they offer a competitive node and a bunch of interesting interconnects, in a supply-constrained future they will struggle to attract high-budget, large volume customers if the chips of those customers are primarily designed for competing nodes with fundamentally incompatible interconnects, as that would (likely) significantly increase the costs of porting over any design, potentially involving shifting around parts of the die to fit the interconnects etc. If Intel can make everyone adopt the same interconnect standard, that's one part of the design that will be quite simple to port, at least.
ghazi: As I look deeper into it though, I find it very interesting how the materials from this consortium repeatedly refer to building SoCs with IP/chiplets from different suppliers. That is an interesting focus. Is that just for the foundry customers, or do the big guys have plans of their own? Interesting times ahead.
I wouldn't be surprised if Intel is going that route - they seem to be gobbling up chips from wherever they can get them. This would definitely ease the workload there as well, potentially allowing for SKUs spanning multiple fabs at once (though I shudder to think what power management will be like for those chips!). They're probably banking on Intel's massive R&D budgets giving them an inherent advantage here even if they share the interconnect tech with competitors.
#17
Chrispy_
Wirko: Didn't Intel avoid the word "chiplet" until now and use "tile" instead?
Tiles are logic blocks in their silicon that can be scaled up or down in various combinations for different die designs, but the end result is always a singular, monolithic die.

The last time they made a chiplet in earnest for the consumer market was the early Core 2 Quad, where they 'glued' two Core 2 Duo dies onto the same substrate and had the motherboard chipset deal with most of the mess that caused. Those were fun days too; easy 50% overclocks and good old FSB shenanigans...
#18
Valantar
Chrispy_: Tiles are logic blocks in their silicon that can be scaled up or down in various combinations for different die designs, but the end result is always a singular, monolithic die.

The last time they made a chiplet in earnest for the consumer market was the early Core 2 Quad, where they 'glued' two Core 2 Duo dies onto the same substrate and had the motherboard chipset deal with most of the mess that caused. Those were fun days too; easy 50% overclocks and good old FSB shenanigans...
No, tiles have consistently been chiplets in Intel's communications for several years. AnandTech's recent article on Sapphire Rapids was titled "how to go monolithic with tiles" precisely to point out how they're using a heap of interconnects to connect four tiles so tightly that they emulate a monolithic CPU. For Meteor Lake they speak of (discrete, separate silicon) graphics and compute tiles, which again are chiplets. Their Xe-HPC GPUs are "multi-tile" when they have more than one die.
#19
TheoneandonlyMrK
lexluthermiester: I would love to see universal CPU socket re-adoption. Back in the early days of PCs, CPU sockets were designed to handle a CPU made by anyone. This made upgrades and swap-outs a breeze.

This proprietary socket nonsense has always sucked.
You know you're old when you get dragged back that far; you second-guess how old you were.

Loved those days :)

I so saw this happening; I said only the other day that I thought Intel would/could dish out EMIB on license. Totally wrong on the license part, but who the f£#@ saw that team-up coming without licences?!

I think anything that could mean less e-waste, more cooperation and innovation, and easier lives could only be win-win for all.