Friday, February 18th 2022

Intel Raptor Lake with 24 Cores and 32 Threads Demoed
When Intel announced its first hybrid design, codenamed Alder Lake, we expected to see more of this design philosophy in future products. During Intel's 2022 investor meeting, the company provided insights into future developments, and the successor to Alder Lake is no exception. Codenamed "Raptor Lake," it features a new Raptor Cove P-core design that is expected to bring a significant IPC uplift over the previous generation of processors. Built on the Intel 7 process node, Raptor Lake brings an ecosystem of features similar to Alder Lake's, albeit with improved performance across the board.
Perhaps the most exciting thing to note about Raptor Lake is the advancement in core count, specifically the increase in E-cores. Instead of the eight P-cores and eight E-cores of Alder Lake, the Raptor Lake design retains eight P-cores and doubles the E-core count to 16. It is an unusual choice on Intel's end, but by no means a bad one. The total number of cores jumps to 24, and the total number of threads reaches 32. Additionally, Raptor Lake will bring further overclocking features and retain socket compatibility with Alder Lake motherboards, meaning that, at worst, you would need a BIOS update to get your existing system ready for the new hardware. We assume that Intel has been working with software vendors and its own engineering teams to optimize core utilization for this next-generation processor, even with more E-cores present. Below, we can see Intel's demonstration of Raptor Lake running Blender and Adobe Premiere, along with the CPU core utilization.
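As a quick sanity check on those figures, here is a minimal Python sketch of the core/thread arithmetic (assuming the commonly reported topology: two threads per P-core via Hyper-Threading, one per E-core):

# Core/thread totals for a hybrid CPU, assuming each P-core runs
# two threads (Hyper-Threading) and each E-core runs one.
def hybrid_totals(p_cores: int, e_cores: int) -> tuple[int, int]:
    cores = p_cores + e_cores
    threads = 2 * p_cores + e_cores
    return cores, threads

print(hybrid_totals(8, 8))   # Alder Lake (8P + 8E): (16, 24)
print(hybrid_totals(8, 16))  # Raptor Lake (8P + 16E): (24, 32)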
Source: via VideoCardz
153 Comments on Intel Raptor Lake with 24 Cores and 32 Threads Demoed
But much more importantly, we might find the value of 'lesser' cores quite substantial if and when the technology matures, or when the market 'catches up' to the new hardware: this was even the case with the FX processors, albeit extremely late.
If I weren't being honest here, I would be cheering Intel's hybrid design, because I own both Intel and AMD shares. Anything good for the shareholder is good for me, and this hybrid design is the best thing that could happen to the shareholder. I gain more as a shareholder if desktop/mobile x86 CPUs follow the hybrid design than I lose as a consumer. But I like posting on forums as a consumer, because people here are also consumers.
I look at shops and I see "12700K, 12-core processor", not an 8+4 processor. If YOU want to be honest, at least acknowledge that 90%+ of consumers out there wouldn't be able to, wouldn't care to, or would never even consider asking what an 8+4-core hybrid design is. They will just go by what the shop, the salesperson, or the Intel box tells them: "12-core CPU" (see the sketch after this post). You doubt it? Look at the Android phone market. NO ONE markets their phones as having a 4+4 SoC. OCTA-CORES, they say.
As for my subject here, it was the "excitement" in the news article. You misunderstood my post, or you are not being honest and are distorting it. Should I repeat your advice to remove those "rose-tinted glasses" and read it again? I said that a hybrid future where P-cores get stuck at 8 and E-cores increase in number to create an illusion of progress IS NOT something that should make us feel excited. Do you understand it now, or is my English so bad that I should start posting in Greek and let Google Translate do the translation?
It's also convenient to throw all the blame on consumers for not RTFM or not spending enough time to understand CPU specs. I am pretty sure that when you go out and buy, for example, a refrigerator, you read reviews about its parts to see if everything in it, even the fan or the light bulb, is the best model. Or when buying a washing machine, you try to find out whether the steel used comes from country X, which produces A-quality steel, and not from country Y, which produces A- quality steel. People have better things to do with their free time than reading specs. You, me, everyone.
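(A minimal Python sketch of the flat-count point above; the figures assume the 12700K's 8P+4E, 20-thread layout. Standard OS queries expose one undifferentiated number, not the P/E split:)

import os

# Most tools surface one flat count of logical processors.
# On a 12700K (8 P-cores with Hyper-Threading + 4 E-cores)
# this prints 20 - nothing here distinguishes P-cores from E-cores.
print(os.cpu_count())

# The marketed "12 cores" is simply the sum of both core types:
print(8 + 4)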
FX processors were AMD's attempt at the Pentium 4 idea, and also a way to negate Intel's thread advantage from Hyper-Threading. So they created a marketing CPU that gave them the chance to advertise high frequencies and a higher core count than Intel. Well, as with the Pentium 4, it didn't work out the way they were hoping.
Intel's HEDT line started in the upper $300s until 9th/10th gen, when it started in the upper $500s. Threadripper currently starts at $1,400 and carries much higher motherboard costs (though who knows what a hypothetical new Intel HEDT motherboard might cost).
If Intel can match AMD's 16c/32t in MT with 8P/8E/24T, why should anyone care what type of cores they are? You, much like everyone else complaining in this thread, haven't given a single actual reason why you're upset over the core configuration. By your own logic, what would you possibly use 16 lower-clocked P-cores for (which they would have to be, due to power consumption) that would be better than 8P/8E? If your argument is that prices going up without real benefit is bad, then sure, I think we'd all agree with that, but nobody knows what pricing will look like.
I am not an expert on the HEDT platform, but I am pretty sure that Intel offering a 4-core 7th-gen HEDT processor for under $300, with cut-down memory features, PCIe lane count, cache, and probably other areas, was not met with enthusiasm. Nor was it a valid option for anyone needing an HEDT processor, other than those hoping to replace that 4-core HEDT joke in the future with something that would feel like a real HEDT CPU.
Motherboard costs were ALWAYS higher on EVERY Intel platform. Stating AMD CPU costs and AMD motherboard costs as negative facts and then merely speculating about Intel costs doesn't make your remarks more objective.
Why am I seeing the hybrid design as something negative for the consumer? If this is not understood by now, what can I do? In Alder Lake, the high IPC of the P-cores is used both to give the CPU the single-thread crown and to cover up the performance deficiency of the E-cores. Intel is trying to find a way to remain competitive while on a less dense manufacturing node than AMD, the same way AMD was trying to counter Intel's manufacturing advantage 10 years ago by offering dual-core modules instead of real dual cores, which would have taken up more die space. If AMD's FX line had offered excellent IPC, today's CPUs would mostly have been dual- or quad-core modules. While we would be happy with the performance compared to older CPU models, we would be losing, without knowing it, the advantage of having full cores instead of modules in our CPUs. I really can't write a book to explain it further. I feel I am stating the obvious to people who don't want to listen.
Personally, I can't wait to see what AM5 brings; if it is good enough, I will happily ditch ADL for it, as I have absolutely no loyalty to any camp. At least my ADL platform is good enough for at least Raptor Lake, if that turns out to be worth buying.
90% of consumers are NEVER going to understand a thing, despite the absolute requirement for 'customer due diligence'. It's just straight-up laziness, or information overload, lack of time, or lack of attention; no amount of 'extra info' is going to fix this for people. It's up to the people themselves at some point. You can't cater to the bottom end of the (social) ladder. That's what commerce has been doing quite precisely, and it doesn't help us in the slightest. They have as little understanding of the meaning of 12 cores versus 8+4 as they do of gigabytes of RAM, megapixels, speaker watts RMS, etc. You will never, EVER, fix stupid. The way to do that is not better commerce, it's better education - something we seem to forget a lot lately. Education as in school, not reading 'best answer' posts on forums and Google ;) Real education is aimed at making people capable of teaching themselves, of filtering through the bullshit and distilling the correct information. And of filtering, to begin with.
Lacking that, in the end, what matters for consumers is the real world performance/characteristics of a product.
And in that sense, we agree on many points, don't get me wrong - maybe I misjudged your stance on 'lesser cores' being just an Intel thing. I think we both recognize it happens everywhere. I, too, only consider tangible progress actual progress. Surely you've read my criticism elsewhere about products of either camp/color.
The point I'm trying to get across, though, is exactly what you called the 'excitement in the news post': we're in early-adopter land when it comes to ADL and the purpose of big.LITTLE on the desktop. Some say 'just give me P-cores'. Others see the benefit of the higher core count, as it also frees up TDP budget for performance in single-thread-oriented workloads. Newer iterations of Zen are likely to follow up on the approach too. DDR5 is going to open up additional bandwidth, bringing MSDT closer to HEDT performance (even if the bar moves up in that segment too, you do get access to more workstation-like capability in MSDT), and that also opens up ways for applications to use higher core counts. Similarly, Zen also exceeds the 8-core 'limit' we long considered enough on MSDT. How is this not progress?
So the question I presented against your stance is that maybe it's a bit too early to judge E-cores as half-functional cores or 'lesser cores' altogether. You have no way of telling right now that this approach is not progress. And that's where the rose-tinted glasses are: you pre-empted big.LITTLE maturing on desktop x86 and becoming an actual advantage, but the performance numbers so far don't support that argument at all. ADL performs quite well with or without them, even with a much lower TDP ceiling than that horrible 241 W PL2 limit. And that's in (small) part due to the presence of E-cores that increase efficiency opportunistically. The low-hanging fruit in CPU land is gone. These are the baby steps we're going to be looking at, apart from just 'more silicon', sadly, to make progress at all.
The other aspect you mention, the commercial approach, is another matter altogether, and in that one we consumers do have the power. We just don't buy. If there is no progress, there is absolutely no need to upgrade, so that problem fixes itself sooner rather than later. If the price of upgrading is not proportional to the gain, a similar situation exists. Look at GPUs right now. You can buy them at 2-3x the normal price. But people simply won't; there are limits. Have faith in those limits, is my message on that, and also apply them to your own situation - something you also seem to do, very wisely.
I do see your points in the rest of your post, but I do not agree about Alder Lake. Alder Lake only seems to be progress when running at 5+ GHz with spikes of over 200 W power consumption, and compared to a 1.5-year-old CPU. I don't see progress in that. I only see an opportunity for companies to start selling E-cores at P-core prices in the future, and the mainstream platform going from 16 P-cores back to 8-12 P-cores max. Intel has already done it, going from 10 P-cores to 8 P-cores; AMD will just follow later.
We consumers don't have a choice. That's a fantasy. We get an option when a company needs us. AMD needed us, so they gave us HEDT-class CPUs, like the Ryzen 7 1700, on a mainstream platform. Intel might need us to beta test their first series of discrete GPUs, so they might offer us some GPUs at really good value. In the majority of cases, companies will avoid giving us a very good alternative, just to avoid a price war with a bigger competitor. As for no progress = no reason to upgrade: excuse me, but Intel was selling 4 cores for a decade, and I never saw their sales going down. The GPU example is flawed. A valid GPU example is the RX 480, which was followed by the RX 580, then replaced by the RX 5500 and then the RX 6500. Not just a lack of progress, but outright regression.
Hence 3 PCs and rising.
We as consumers might need to reflect a little bit on our own behaviour in that sense. We love to be misled, it seems, and we love to think there is always something better around the corner.
You made some great points, but I'm also seeing a constant misplaced drive to keep buying. Not out of necessity, but 'because we can'.
It is our excessive consumerism that created what we have: mountains of e-waste. If we cannot collectively change that, we won't change the nonsensical 'upgrades', and we open the door to profit maximizing by companies.
Note that all of these consumer traits are researched things. We are being played. And we build a narrative for ourselves, telling each other that the companies are to blame, so we can stop looking in the mirror and facing the truth.
As for ADL... I'm not convinced yet myself and won't buy into it, even though I also see the CPUs work fine at TDP limits similar to Zen's. But I simply don't have the workloads yet to go up in core count. That's a clear no-buy situation for me. The only moment I upgrade is when I see the performance is required across the board. And I'm seriously questioning whether that isn't exactly the same for almost everyone on this forum, yes, even those who might get a 10% advantage from the connected GPUs.
But when you get years of baby steps, that 10% in our heads might create the silly narrative we call the 'upgrade itch' - that nagging feeling of money burning a hole in your pocket for no reason at all but emotion. Exactly this. Things have limits, even if we think otherwise.
Can't count defective cores as real cores, though - they'd get backlash.
14 cores, 28 threads: "wow, I have one of those, with a terrible mesh/cache design". But these 12-series chips are where you have only 8 decent OC cores and 6 crappy ones.
The only perks are the awesome cache OC, and DDR5 if you want to count that too.
Intel just came up with a gimmick to use cores they used to disable through binning, or toss because they were space heaters at the same voltage as the good cores.
But they make okay limited threads.
What I mean is that this is NOT a case of "people just think they need 16+ core CPUs for home use but wouldn't even use them fully". If that were the case, an Android tablet would do most jobs nicely. A 10-year-old quad-core system with an SSD would also do most jobs nicely. No one looks at Apple's M1 and says "too few cores".
But we are not talking about what the average Joe needs. We are talking about progress, and from the beginning my whole argument has been that a hybrid design is not something to make us feel excited. It's a necessity for Intel to keep up in this core-count war. Until Ryzen 3000, Intel was fine. They had the IPC advantage, and 8-10 faster cores were sometimes better than 16 fast cores. They lost that advantage with the Ryzen 5000 series, and they can't be certain they won't lose it again with Zen 4. So they went the hybrid way to at least keep up in core count, without having to build a line of massive chips that need 300 W of power to do so. It's an excellent strategy from Intel, not something to make us consumers "excited", especially when it is already obvious that the "exciting" thing of the future is "more E-cores". I believe I was answering you while you were posting. That's what Intel's strategy is based on: "How many cores do consumers need? 8 max. Why? Because most software can't use or doesn't need more, and modern consoles come with 8c/16t CPUs. What do we do to keep up with AMD? We throw little cores into the mixture. We write 8+8 cores on the box to avoid negative press and lawsuits, but we know they will be sold to the average consumer as 16-core CPUs."
The question is: "We are being played," as you say. Are we going to not just accept that fact, but also support it? Like it? Even argue in favor of the company that tries to play us the most?
I am really curious about the 6500 XT. If I spent time here reading comments about that card, would I read positive comments from the same people who say this hybrid design is good? I can think of a few:
4 GB RAM: bad for miners, positive for gamers.
4 GB RAM: who needs more in a low-end card? It keeps the price down. Better than having 8 GB and being more expensive.
A mobile chip: good job, AMD, offering us this solution. Better than offering nothing.
Who cares about PCIe 3.0 performance because of that PCIe x4 limitation? People should move to PCIe 4.0 anyway. It probably cuts the cost of the card too. So it's good!
A 64-bit data bus? Bad for miners, so it is good. Who needs a 128-bit data bus? We gamers have that huge 16 MB of Infinity Cache anyway.
There's probably more I could think of, but I'd better stop here. I don't like writing things I don't believe.
Yes, the E-cores are slower, and yes, it was a great thing that AMD ""democratized"" higher core counts. But the E-core focus isn't hampering your ST performance. One user said something along the lines of "why can't we have 32 P-cores". You could have bought a 3970X with 32 cores two years ago, but you didn't - why? Unless you do 3D modeling for a living or some other heavy workstation task, the MSDT CPUs were the same if not better for you in terms of real performance. And to a much greater degree, these P-cores are infernos; you're not stitching a ton of them together without paying a price.
Intel's big.LITTLE, for all its flaws and issues, solves one problem really well: it allows Intel to have leadership in ST and MT at the same time. The 8 P-cores are faster than AMD's fast cores, and none of us "enthusiasts" really care to have more, as far as workloads that are highly sensitive to ST performance go. Some do need more MT performance, but when you're talking about a program using 16-32+ cores effectively, what difference does it make to you how fast the individual cores are, so long as, summed up, they're faster than the competition's CPU (a toy model of this follows the post)? You get something better all around, from games to Blender, and then you complain because the cores aren't all the same. Granted, I am looking forward to Zen 4 and hoping AMD can regain ST leadership, in which case Intel banking on its 16 extra weak cores making up for an ST deficit wouldn't be a winning proposition, when AMD already gives you 16 big cores to play with and no weird compatibility issues, etc.
Now, I still think big.LITTLE is far from perfect, and it's more beneficial to Intel than it would be to AMD at the moment, since Intel is still using monolithic designs, where die area is a bigger concern, and the E-cores do really well there; conversely, their benefit for power consumption is very much debatable, and AMD mitigates the cost of increased silicon area through its tiny CCDs. Just my two cents - probably obvious to many, but people will always argue.
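(A back-of-the-envelope Python illustration of the "summed throughput" argument above; the per-core rates here are made-up placeholders, not benchmarks:)

# Toy model: aggregate throughput of an embarrassingly parallel
# workload, as the sum of core count times per-core rate.
def mt_throughput(groups: list[tuple[int, float]]) -> float:
    return sum(count * rate for count, rate in groups)

hybrid = mt_throughput([(8, 1.0), (16, 0.55)])   # 8 P + 16 E cores
uniform = mt_throughput([(16, 1.0)])             # 16 big cores
print(hybrid, uniform)  # 16.8 vs 16.0 in this toy example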
I don't think they did it to keep up with Ryzen; I think it was to test big.LITTLE. They have shown that their future chips will follow this design, and I am sure they would not have done that just to keep up. IMO, ADL was a test of it. All next-gen CPUs will be big.LITTLE, even AMD's, so maybe Intel got the jump on them by testing it with ADL?
You think Intel manufactured ADL just to keep up on core count? Maybe that is what they are also doing with Raptor Lake and Meteor Lake too? Or just maybe ADL was a test of big.LITTLE, and they have decided it is a good idea.
As already shown, Intel is going to be using tiles instead of monolithic dies, which will give them much better options regarding P-core count, etc. They can just add more tiles, as AMD does with their chiplets, for more P-cores.
It's not even about acceptance of the fact, is it? It's about us just not knowing any better. We must buy. We shall buy. So what if the card has 4 GB. So what if the CPU has cores that hardly do a thing in practice. Money must roll; we worked hard for it; we want our dopamine shot.
Reflect. That is what I'm saying. When you start going into the detail of P/E cores and whether or not those are good, we've already gone way too deep down the rabbit hole. The real question is: do you need them? The marketing is of a similar nature: when we're down to that level of detail, are we not skipping past far more important indicators of performance? You said it right when you pointed out that it's impossible to know everything about everything.
As for these 4 GB cards and hardly-progressing CPUs, they exist for exactly that reason: buy, buy, buy. You can upgrade your card again to a newer generation. Whoop-de-doo! Finally we can buy a GPU. Only idiots, and those who have (or see) no other options, downgrade to upgrade, let's be honest here. The specs are irrelevant for that group. Anyone else with a lick of sense will skip past something that will never hold value beyond this dank pit of product scarcity, which will eventually end, as all things do.
Markets usually self-correct. It's a mistake to use a limited scope on the market to determine what's what, which is my main (and only) concern with your stance on ADL right now. You could be right in the end, but you're likely not. That applies to both ADL and this 6500 XT example. Isn't this all a big pile of 'who the hell cares, and why would we even care'? You just skip the product, and in doing so, you voice the fact that you're not interested in it. That's how the market works. Similarly, with regard to those Intel quads we had for ten years: apparently none of these consumers felt any need to buy into Extreme CPUs that offered 6 cores or more. That literally spells out that there was no market for it in the regular consumer space. Fast-forward to 2022, and I dare say Intel had a very good view of the demand in the market at the time and made sound business decisions up to and including Skylake. The kicker here is that people (gamers! the consumer segment!) ARE in fact spending over $1K on board and CPU right now. That's the price point Extreme was at...
Why would there suddenly be a market now for infinitely scaling core counts? That's literally the salesman's pitch: creating demand where there is none. Like I pointed out earlier: we are fooling ourselves with supposed demand that is not 'necessity'. And depending on the glasses you wear, you could quite simply say the same of AMD, who was first to push the core-count 'war' that is remarkably similar to a megapixel race, and all the other 'look at my numbers' races between companies. A neutral pair of glasses, IMHO, should value all of these activities exactly the same: companies that want to move product, and silly customers buying the fantasy.