Diversifying. That is good for AMD and hopefully for customers. I'm not sure how advanced Samsung's 3nm is or how it compares directly to TSMC, but from what I read here, Samsung's 3nm Gate-All-Around FET is nothing to scoff at.
Even if Samsung's 3nm GAAFET is slightly inferior, it may still be cheaper than TSMC's extortionate prices, which only behemoths like Apple and nVidia can pay by throwing billions at allocation. That's still better than sitting and waiting for spare foundry time, with no products to bring to market. There's nothing wrong with making a significant amount of "lesser" products, even on a somewhat inferior node, e.g. some low-end GPUs at more appealing prices.
Don't get me wrong, though! I'm all for progress, and especially for power efficiency, which is crucial for personal reasons. But when the supply of cards is non-existent and the rival outsells you 9:1, then getting some alternative allocation elsewhere, even as an experiment, is not a bad idea.
The problem is stagnation: AMD isn't delivering any technological progress. Hopefully this move will help break that stagnation and get new products out the door.
Radeon RX 7900 XTX is slow, Ryzen 9 7950X is slow. We need new products.
You're kidding, aren't you? The 7950X is an astonishing chip, and the 7900 XTX is much the same. They're both incredibly good at the tasks they're built for.
Always good to have choices and competitors..
but Samsung's foundry needs to prove that their node can be efficient as well as deliver better performance per watt.
For example: RTX 3090 (Samsung 8nm) vs RTX 4090 (TSMC 5nm), where the latter made a huge difference in performance and efficiency.. back when everyone was guessing the 4090 would consume 800W for 30% more performance.
You're missing the fact that the dog-sh*t node didn't prevent nVidia from making a huge fortune on their absolute furnaces of cards. They sold them by the tens of millions, like hot cakes. And that was the sole reason they could move to the much more expensive 5nm-class N4 node later.
AMD doesn't deliver technological progress?
Weren't they the first to use HBM? First to use chiplets in a CPU, something Intel eventually copied? Aren't they the first to use chiplets in a consumer GPU? The first to have an 8-core mainstream desktop chip? The first to have a 16-core consumer chip? The first to have 3D V-Cache?... I could go on. It's arguable that AMD has done more to shape the x86 industry, and certainly the consumer DIY market, than anybody else since the release of Ryzen.
...oh, and AMD has done all that while having an R&D budget that is currently over 3x smaller than Intel's, and in the past was over 7x smaller.
Some people either don't get it, or are deliberate Wintelvidia trolls.
This is not progress, this is a method.
When? 2018? It's already 6 years later, and there is nothing new. And no new manufacturing process. No 3nm, no 2nm, only the 10-year-old 7nm relabeled as 7+nm, and 5nm from 2019.
And Intel has a ring bus connecting its monolithic core arrangement and has always been faster in gaming, hence AMD needed to do something to close the performance gap.
Intel's monolithic chips are long in the past. They've joined the "glued-together" chip manufacturing they once mocked, for almost all their products. The glorified "superior" Intel 12th, 13th and 14th gen is nowhere near a single solid piece of silicon.
The sole reason the Wintel cartel still exists, along with their "collab" failure called W11, is that Intel is incapable of making 16 performance cores within the same power envelope as AMD. If they could, they wouldn't put a bunch of e-waste cores under the same lid, because if the efficiency were there, the P cores could run with the efficiency of E cores while keeping their high performance.
Node superiority is sometimes overrated, especially for mass-market lower-end products, which are the bread and butter, like the 7600-7700 XT. Efficiency isn't the problem there, as these consume little power and don't have the muscle to set benchmark records anyway. Many would swap their old Polaris/Pascal cards in the blink of an eye if the price/specs were there. Unfortunately, paying about $400 for an entry-level GPU is absurd. Had those been made on something less expensive than TSMC 5nm, while keeping a wider bus, they would have become bestsellers overnight.
And nVidia keeps their monolithic design because they can throw literally any amount of money at it while keeping their margins astronomically high.
Well, hopefully newer processes will go better for Samsung than what we’ve seen before. Most issues with Ampere were caused by their 8nm, if we’re being realistic.
Intel is done with monolithic CPUs. It's a pointless talking point at this stage. For all its drawbacks, AMD's chiplet approach proved superior in the long run. Not to mention that, optics aside, gaming performance is really, REALLY not what either company cares about presently. The real battle is for the server/datacenter/supercomputer market. And that one is all about how many cores you can fit on a single socket.
I mean, the mobile GPU in Samsung SoCs for smartphones/tablets might still resemble the chiplet design after all. That could still be useful overall, as it provides better scalability in the long run, since an MCM GPU design can be used in a much wider range of products, from phones and handhelds to premium dGPUs.