Thursday, February 18th 2021
Intel Rocket Lake-S Lands on March 15th, Alder Lake-S Uses Enhanced 10 nm SuperFin Process
In the latest round of rumors, we have today received some really interesting news regarding Intel's upcoming lineup of desktop processors. Thanks to HKEPC media, we have information about the launch date of Intel's Rocket Lake-S processor lineup, along with Alder Lake-S details. Starting with Rocket Lake, Intel did not unveil the exact availability date for these processors. However, thanks to HKEPC, we have information that Rocket Lake is landing in our hands on March 15th. With 500 series chipsets already launched, consumers are now waiting for the processors to arrive as well, so they can pair their new PCIe 4.0 NVMe SSDs with the latest processor generation.
When it comes to the next-generation Alder Lake-S design, Intel is reportedly using its enhanced 10 nm SuperFin process to manufacture these processors. This would mean that the node is more efficient than the regular 10 nm SuperFin present on Tiger Lake processors, and some improvements like better frequencies are expected. Alder Lake is expected to make use of a big.LITTLE core configuration, with the small cores being Gracemont designs and the big cores being Golden Cove designs. Golden Cove is expected to deliver a 20% IPC improvement over Willow Cove, which exists today in Tiger Lake designs. Paired with PCIe 5.0 and DDR5 technology, Alder Lake is looking like a compelling upgrade that is arriving in December of this year. Pictured below is an LGA1700 engineering sample of an Alder Lake-S processor.
Sources:
HKEPC, via VideoCardz
82 Comments on Intel Rocket Lake-S Lands on March 15th, Alder Lake-S Uses Enhanced 10 nm SuperFin Process
I don't disagree that Intel's nodes tend to be better than competitors', just like you mentioned, but I think you've highlighted the issue yourself here. Intel is pitting their 14 nm against TSMC's 7 nm, which puts them at a disadvantage. What is important is performance, and we have already witnessed how Zen 3 beats Intel in most metrics while using around 60% of the power, and beats them soundly in multicore scenarios. These are cold, hard facts. The reality is Intel took it too easy as they grew in dominance and got caught with their pants down when AMD and ARM caught up. While I don't disagree that it is a great engineering feat to squeeze so much out of 14 nm, for which I give credit to the engineers, I won't give Intel the credit, because this is a band-aid solution.
And Ryzen happens to provide similar or better performance depending on task type anyways. "Identical" performance between two different chips doesn't exist in the first place.
Yeah, amazing benchmark numbers until you get to the nitty-gritty and realise the AGESA is literally a beta for about a year after every new AMD CPU release, and AMD seems to find this acceptable so long as they do slightly better than Intel in whatever hot-topic measure is currently fashionable, and push out cutting-edge, improperly tested products (a boon or a curse to enthusiasts?). Read my specs, I use a 5950X. The performance is great. It's not, however, a shining example of a 100% stable platform; even at stock, I get the odd WHEA error even after 3 months of BIOS updates. Software and correctional routines are good enough these days to absorb mild instability or poor firmware implementation without hard crashes, like bluescreens or data corruption, so for most people, 99.9% stability is fine. AMD and even Linus Torvalds recently shit on Intel for "holding back ECC implementation by making you pay more for it", which is a fair point. However, AMD isn't exactly a shining example of producing products that don't ever corrupt bits lmao.
This is the same line AMD apologists in the early 2010s used to excuse Bulldozer's atrocious power draw and lack of performance: "but but but muh overclock". You're still getting whipped in performance, and you have to resort to ever more exotic methods of cooling, trying desperately to squeeze another 1% out of your dinosaur of an architecture.
Don't discount something just because it's not your preferred option. By the same scaling argument, you can make the case that the introduction of 12 and 16 core SMT Ryzen chips on mainstream platforms did little for actual performance, because, as you said, "scaling begins to fall apart". That applies to high numbers of threads, not just high frequencies, don't forget that.
AMD has had a much easier time of it because all they have to do is design the chip; the actual manufacturing is outsourced to TSMC, so all AMD has to focus on is the design.
Intel is one of the last companies that owns the full chain from design to delivery of the product, and yes, that can make them slower to innovate (not always; they have introduced many game-changing concepts and designs to the world), but it also results in a very well integrated product. Even if you discount the technical advantages of this approach, which obviously has drawbacks too, you have to respect its benefits, both economically and practically.
If you think you can somehow enforce that, you're wrong, full stop. No. It's called "it's their own money and they can spend it as they see fit". You don't get to dictate that either. Again, you can believe and attempt to, but nobody is obliged to give a crap, and they can spend it as they originally intended regardless. And once more, complaining about "virtue signalling" isn't going to change that either.
And no one really argues that the 12 and 16 cores did anything for gaming, as it is generally agreed that past 6 cores scaling begins to fall off, and REALLY falls off past 8 cores (see the rough scaling sketch below). However, it is accurate to say that AMD introducing 8 core mainstream CPUs kicked Intel in the pants and finally forced them to increase core counts. It is no coincidence that Intel went from pushing quad cores to 6, 8, and 10 cores as a response to the Ryzen 1700. AMD has to design their hardware on a shoestring budget. Intel's single-quarter R&D budget was higher than AMD's entire yearly revenue stream. They hardly have it "easier", as the manufacturing and design elements of Intel are entirely different departments with different management.
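As a rough illustration of why thread scaling falls off, here is a minimal Amdahl's-law sketch. The 85% parallel fraction is an assumed number purely for illustration, not a measurement of any particular game or CPU.

```python
# Minimal Amdahl's-law sketch: theoretical speedup vs. core count for a
# workload that is only partially parallel. The parallel fraction is assumed.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Theoretical speedup over a single core."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

if __name__ == "__main__":
    p = 0.85  # assumed parallel fraction, for illustration only
    for cores in (1, 2, 4, 6, 8, 12, 16):
        print(f"{cores:2d} cores -> {amdahl_speedup(p, cores):.2f}x")
    # Shows diminishing returns: roughly 3.4x at 6 cores, 3.9x at 8 cores,
    # and still under 5x even at 16 cores.
```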
You keep talking about gaming, but the example I used was that the vast majority of people who use computers are not gamers, and the vast majority of software those people use is not heavily multithreaded. That is true - it's simply a fact. I make the observations I do not based on assumption. I slag everyone; my point here is that it's better to do so from a balanced point of view with a strong foundation in fact. You keep bringing up gaming. It's something less than 25% of the world spends any real time doing on their computers, and of those, less than 10% game as the primary function. Maybe stop bringing up debates that aren't being argued. There's nothing wrong with high core counts as long as they don't come at the sacrifice of single-core speed, and as long as the software being used actually takes advantage of more than, for a typical example, 6 or 8 cores.

Intel is more ambitious with their product stack than AMD. With Zen, AMD focused on delivering an efficient, scalable and competitive architecture performance-wise. They achieved this; good for them and good for consumers. What they did not magically do was develop, in the same time, a rock-solid firmware implementation. As much shit as Intel gets for their security vulnerabilities, they're the biggest player and are subject to the most attacks. Apple discovered the same thing when they actually started to get some market share: Macs weren't "immune to computer viruses", they were actually rather vulnerable, as security hadn't been a focus. As I've stated before, the AGESA firmware for Zen chips, and also the Radeon drivers for GPUs, are nowhere near the feature set or maturity of Intel/NVIDIA platforms. There is some good progress being made on the enterprise side of things for last-generation chips (Zen 2 EPYC), but that architecture is almost two years old at this point and still has some issues. There is a reason why EPYC and Threadripper chips lag behind the consumer-grade Zen chips by 6 months or even more than a year.
No idea why you need to exaggerate. AMD's yearly revenue is roughly 2.7 times Intel's single-quarter R&D budget (quick check with the figures below).
Intel's R&D spending in 2020 - $13.5B ($3.6B in Q4)
AMD's revenue in 2020 - $9.7B
For reference:
AMD R&D spending in 2020 - $2B
TSMC R&D spending in 2019 - $3.3B (somewhat more in 2020)
Samsung R&D spending for chip-related stuff in 2020 was $5.6B
Nvidia R&D spending in 2020 - $2.8B
Again, why all the exaggerations?
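As a quick sanity check of the "2.7x" figure, using only the dollar values quoted in this post (taken from the post itself, not independently verified):

```python
# Sanity check of the ratio claimed above, using the figures quoted
# in this post (billions of USD).
amd_revenue_2020 = 9.7       # AMD full-year 2020 revenue, per the post
intel_rnd_q4_2020 = 3.6      # Intel R&D spend in Q4 2020 alone, per the post

ratio = amd_revenue_2020 / intel_rnd_q4_2020
print(f"AMD yearly revenue / Intel single-quarter R&D ~ {ratio:.1f}x")
# Prints roughly 2.7x, consistent with the claim above.
```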
Years ago? July 2019 is 1 year and 7 months back.
When you say AMD's IPC and Intel's, you mean desktop (and server). Intel does have Ice Lake and Tiger Lake in mobile, which are much closer in IPC, and Zen 2-based Ryzen 4000 is less than a year old.
Getting slaughtered? In power efficiency, yes. Otherwise, not really. Lower IPC can be, and has been, largely compensated for by higher clock speeds. Price? That was kind of true until Ryzen 3000. Today Intel tends to be the cheaper option.
Fraction of the power? Depends on how you want to define fraction, but the difference even in well-threaded tests is often not as pronounced as the maximum numbers that tend to be shown in reviews. For example, TPU's 5800X review shows the power usage difference between the 5800X and 10700K in multi-threaded tests was 32 W.
Performance is not the same as IPC. RISC chips, for example, were originally meant to have lower IPC (it took more instructions to do the same thing vs. CISC) but be capable of higher clocks, and hence higher performance.
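A minimal sketch of the point that per-core throughput is roughly IPC times clock speed; the numbers are hypothetical and for illustration only, not measurements of any real chips:

```python
# Toy illustration: per-core throughput scales roughly as IPC * clock.
# All figures below are hypothetical, not benchmarks of real CPUs.
def relative_performance(ipc: float, clock_ghz: float) -> float:
    return ipc * clock_ghz  # instructions per second, arbitrary units

chip_a = relative_performance(ipc=1.00, clock_ghz=5.1)  # lower IPC, higher clock
chip_b = relative_performance(ipc=1.15, clock_ghz=4.5)  # higher IPC, lower clock

print(f"chip A: {chip_a:.2f}, chip B: {chip_b:.2f}")
# chip A ~ 5.10, chip B ~ 5.18 -- a ~15% IPC lead can be almost entirely
# offset by a ~13% clock-speed advantage.
```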
As far as Intel vs AMD goes, you stated: "Intel lost the higher performance per core years ago.....in production workloads or ANYTHING that isnt vidya intel has been getting slaughtered by AMD"
This is demonstrably false. Now I fully expect you to start goalpost moving but this is a simple fact - AMD never had a clean sweep and far from it with Zen 1, 1+, and 2. In fact, the only time they had an advantage was in highly intense multi-core workloads, and even then it was not a clean sweep.
See below:
The 8-core 9900K and 10-core 10900K solidly defeat the 12-core 3900X:
The 10-core 10900K defeats the 12-core 3900X:
MS Office, the most used productivity application on the planet - it's lightly threaded depending on use, and a 6-core 10600K beats the 12-core 3900X:
Photoshop, probably the single biggest image editor on the planet:
Premiere Pro, the #1 video editor:
OCR, a very common use in office productivity environments:
I have finally been itching to totally replace my 12-year-old Dell XPS 730x, but not with AMD - with Alder Lake and what it will offer for the first time out of the box. I will also splurge on a much-wanted all-NVMe storage arrangement, as I expect the new mobo generation will be fantastic. I may even go as far as carefully watching the next few Intel earnings reports (reading between the lines) and consider taking some AMD stock cash and putting it towards the Alder Lake Wall Street action in July/August this year. 2021 has already been a very good year for me. But then we all know that everyone makes their own luck. Just perhaps, when Intel reports their full 2021 earnings in January 2022, it will be another good year for luck and smiles?
And hey, if you want to pay a premium for a chip that does statistically increase performance but makes no difference in the real world (as you are focused on gaming and not the ego-inflating FPS numbers at the top left), to replace your current, perfectly fine chip, go for it. This is why I don't care that Intel had/will have better gaming chips.
I suppose if it makes like 10 seconds difference in productivity, that's something else.
Look at the 11900K for example. Sacrificing productivity by shaving cores off the 10900K for le gaming performance that literally nobody notices. But that's just me, I guess. Carry on.
Your real world is as limited as your opinion. Let people enjoy things.