TheLostSwede
News Editor
Did you read the article on Anandtech? It talks about AMD using the 5 nm node, then questions why AMD isn't using a more cutting-edge node, when the writer knows full well that those nodes aren't optimised for the kind of chips AMD makes; the reasoning seems to be that since Apple and MediaTek are using them, why can't AMD. That's the issue I'm having with their article, as Ian Cutress should know better. On top of that, it's not normal to make huge node jumps, bypassing the node that's the next step from the one you're on, presumably because it can lead to more problems than it's worth. I have zero issues with the question he put to AMD, as it was a sensible one; it's the bit before it that doesn't make sense.

Even if you, me and Anandtech know the reason why, there will be people asking the question. So it's better to get an official answer straight from AMD, because people will ask anyway. By getting the answer from AMD, people get an official answer that should satisfy most, if not all, of them, rather than relying on guesses and speculation like yours. Unless you get an answer to your question from the source, you're just speculating. Speculation doesn't mean you're right. You may be wrong, or maybe besides yours there's more than one answer, or there's more to it than just your speculation.
I guess the Lost in your name means you are truly lost, thinking of yourself as a know-it-all.
If you read the comments on Anandtech, their own readers are pretty much saying the exact same thing.
You're having a go at me without knowing anything about me. I've been writing about this stuff for over 20 years, I've been to Intel's fabs, I've been to GloFo fab conferences and I've met the people who started a lot of the tech sites that carry their names to this day, but who are no longer working there themselves. But yeah, I'm the one who's lost and a know-it-all, because I'm just some random person on a forum... Honestly dude, maybe at least check up on who you're having a go at first.
You don't have to agree with my thoughts on what Ian wrote, but as I said, he should really know better in this case, as low-power nodes aren't suitable for making desktop CPUs and GPUs, and that's common industry knowledge that he has as well.
Most important, maybe not, but it's obviously a top-three factor. The most important one is that the foundry has a suitable node for your chip design, since you always have a design target, and changing that design target is apparently a 6-12 month job in most cases. Then there's allocation, as if you get none, you're not making any chips. After that, cost, I would say. Qualcomm is actually quite far behind on nodes, as they went with some Samsung node at 8 nm or below, whereas Apple is on TSMC's 4 nm and supposedly on whatever node TSMC is working on next, since Apple is pretty much paying for TSMC's push towards smaller and smaller nodes. Samsung is going to be behind MediaTek if they really are going to be on the 3 nm node this year.

I think the price is the most important factor; smartphone dies are much smaller than GPUs, for example. Phones are also being sold at crazy prices, and Qualcomm/Apple are enjoying some very fat margins which don't exist in the computer space (I mean, not in CPUs anyway, GPUs are all over the place). So for Apple/Samsung/Qualcomm, jumping on the new flashy node is great for marketing even if yields are still on the low side, while AMD on the other hand has to wait for yields to stabilise; a rough sketch of the yield maths follows below.
At least that's my theory anyway.
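To put some back-of-the-envelope numbers behind the die-size point above: the standard Poisson yield model (yield ≈ exp(-die area × defect density) is a textbook approximation, but the specific areas and defect densities below are purely illustrative assumptions, not anything AMD or TSMC has published) shows why a big GPU-class die is hit much harder by an immature node than a small smartphone SoC.

```python
# Rough sketch with made-up numbers: how die size amplifies the pain of an
# immature node's defect density (D0), using the simple Poisson yield model.
import math

def poisson_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    """Fraction of good dies: Y = exp(-A * D0), with A converted to cm^2."""
    return math.exp(-(die_area_mm2 / 100.0) * defects_per_cm2)

# Illustrative sizes only: ~100 mm^2 phone SoC vs ~600 mm^2 big GPU die.
for d0 in (0.5, 0.2, 0.1):  # defects/cm^2, falling as the node matures
    phone = poisson_yield(100, d0)
    gpu = poisson_yield(600, d0)
    print(f"D0={d0:>4}: phone SoC yield {phone:.0%}, big GPU yield {gpu:.0%}")
```

With these assumed numbers, a small die already yields around 60% on a fresh node while a big die is down in the single digits, and the gap only closes once defect density drops, which is consistent with the "wait for yields to stabilise" argument.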
It's also not really about a "flashy node"; most of these companies can't advance their products without a node shrink. That's actually what was quite impressive about Nvidia: they managed to squeeze out a lot of extra performance while being stuck on the same node for three generations (Fermi, Kepler and Maxwell), something that is quite rare. It goes to show that some companies are capable of making do with what's available. Obviously Intel was stuck for a very long time, but then again, we didn't see nearly as good performance advances from them as Nvidia managed.
As mentioned above, AMD has to wait because they need to be on what used to be called a high-power version of the node, as the low-power versions used for MCUs and ARM/RISC-V/MIPS-based SoCs are not suitable for desktop CPUs and GPUs. This has been the case for as long as I've been writing about this stuff, which, as I mentioned, is over 20 years by now.
Will we even get quad core Zen 4 based CPUs?

I reckon we'll see Zen 4c without 3D cache and higher-end Zen 4 with it. We might not see it in the 7600X, but Zen 4 has twice the L2 cache of Zen 3.