Monday, February 1st 2021

Intel "Alder Lake-P" Mobile Processor with 14 Cores (6 Big + 8 Little) Geekbenched

An Intel 12th Gen Core "Alder Lake-P" sample surfaced on the Geekbench online results database. The "Alder Lake" microarchitecture introduces heterogeneous multi-core to the desktop platform, following its long march from Arm big.LITTLE in 2013, through to laptops with Intel's "Lakefield" in 2019. Intel will build both desktop and mobile processors using the microarchitecture. The concept is unchanged from big.LITTLE: a processor has two kinds of cores, performance and low-power. Under lower processing loads, the low-power cores are engaged, and the performance cores are only woken up as needed. In theory, this brings about tremendous energy-efficiency gains, as the low-power cores operate within a much higher performance/Watt band than the high-performance cores.
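The dispatch logic described above can be sketched in a few lines. This is a toy illustration of the big.LITTLE idea, not Intel's actual hardware or OS scheduler; the threshold and function names are invented for the example:

```python
# Toy sketch of heterogeneous dispatch: light work stays on the
# low-power cores, and the performance cores are woken only as needed.
def pick_core_type(load_pct, perf_cores_idle):
    """Return which core type a hypothetical scheduler would use."""
    # Background-level load stays on the low-power cores...
    if load_pct < 30:
        return "low-power"
    # ...heavy load wakes a performance core, if one is available.
    return "performance" if perf_cores_idle else "low-power"

print(pick_core_type(10, True))   # light load -> "low-power"
print(pick_core_type(80, True))   # heavy load -> "performance"
```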

The "Alder Lake" silicon features two kinds of cores: eight "Golden Cove" performance cores, and eight "Gracemont" low-power cores. The "Golden Cove" cores can be configured with HyperThreading (2 logical processors per core). Intel's product managers can create multiple combinations of performance and low-power cores, to achieve total core counts of up to 16 and logical processor counts of up to 24. This warrants close attention to the composition of the core types, beyond an abstract core count. A 14-core processor with 6 performance and 8 low-power cores will perform vastly differently from a 14-core processor with 8 performance and 6 low-power cores. One way to derive the core composition is by paying attention to the logical processor (thread) count, as only the performance "Golden Cove" cores support HTT.
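That thread-count arithmetic is easy to check. A quick sketch, using the configurations mentioned in the article (only the "Golden Cove" cores contribute two threads each):

```python
# Logical processor count for an "Alder Lake" configuration: the
# "Golden Cove" performance cores have HyperThreading (2 threads each),
# while the "Gracemont" low-power cores contribute 1 thread each.
def logical_processors(perf_cores, lowpower_cores):
    return perf_cores * 2 + lowpower_cores

print(logical_processors(6, 8))   # this sample: 6P + 8E -> 20 threads
print(logical_processors(8, 8))   # full silicon: 8P + 8E -> 24 threads
print(logical_processors(8, 6))   # 8P + 6E -> 22 threads
```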
Back to the Geekbench v5.31 database entry: we see a 14-core/20-thread "Alder Lake-P" processor. This chip features 6 performance "Golden Cove" cores and 8 low-power "Gracemont" ones. As a mobile chip, it's paired with LPDDR4X memory, and its clock speed ranges between 800 MHz at idle and 4.70 GHz at max Turbo Boost. The chip yields an OpenCL compute score of 13,438 points.
Source: HotHardware

13 Comments on Intel "Alder Lake-P" Mobile Processor with 14 Cores (6 Big + 8 Little) Geekbenched

#1
DeathtoGnomes
So they glued a Hare to a Turtle's back? :kookoo: :rolleyes::D
#2
InVasMani
DeathtoGnomesSo they glued a Hare to a Turtle's back? :kookoo: :rolleyes::D
They glued 6 hares to 8 turtles, actually! The way to look at it is that the faster cores are wider while the slower cores are narrower, and the faster cores draw more power while the slower cores draw less. It's also only a start, and for mobile the way they have it set up is, in practice, the best of both worlds: plugged in, or when the battery is well charged, you get good performance across the 6 performance cores, and as the battery drains, work can gradually shift to the other cores to preserve battery life. Eventually it's safe to say Intel will have enough space for another chiplet and another tier of cores with middle-ground performance that's even more balanced. That, or each package will pack in more cores and achieve the same type of thing. In either case I see it as a positive, but I think a 3rd and/or 4th chiplet will have lots of additional upside; as things get revised, Intel can shuffle certain instruction sets to each chiplet to maximize performance and efficiency.
#3
crimsontape
You know, this is pretty cool. Part of me wants to cry about it, because it seems awkward and bizarre. But I know there's part of the market that's been working on the implementation of big-little for a while. I think Intel is going to ride that wave with the general efforts to standardize that approach.

Meanwhile, I think AMD and their FPGA strategy will end up being a real banger in the long term. Imagine a CPU no longer a CPU, but more like a GPU that's more like a CPU. And load in different micro instructions as needed... Just musing...
#4
ratirt
I don't know what to think about it. I'm not gonna say cool or awesome, nor crap and shit. I'm just gonna wait and see what it really offers. For me, this Big.Little is more of a power efficiency improvement than a performance improvement.
#5
8BitZ80
ratirtI don't know what to think about it. I'm not gonna say cool or awesome, nor crap and shit. I'm just gonna wait and see what it really offers. For me, this Big.Little is more of a power efficiency improvement than a performance improvement.
I would wager that power efficiency won't be the only benefit. More of a flow-on effect. Over time, each type of core architecture will become more refined and focused on performing specific tasks. My speculation is that we'll reach a point down the road where the big cores are simply not designed to do the same tasks that the small cores can, but the latency and task scheduling efficiency will be far improved.

The main hurdle with the Big.Little concept will be marketing. I can just imagine the forum posts now: "I bought a 14 core CPU but it turned out to be a shitty 6+8 thing WTFFFF". There's really no limit to how dumb consumers can be, and preventing backlash will be a major hurdle.
#6
ratirt
PooPipeBoyI would wager that power efficiency won't be the only benefit. More of a flow-on effect. Over time, each type of core architecture will become more refined and focused on performing specific tasks. My speculation is that we'll reach a point down the road where the big cores are simply not designed to do the same tasks that the small cores can, but the latency and task scheduling efficiency will be far improved.

The main hurdle with the Big.Little concept will be marketing. I can just imagine the forum posts now: "I bought a 14 core CPU but it turned out to be a shitty 6+8 thing WTFFFF" There's really no limit to how dumb consumers can be and it will be a major hurdle to prevent backlash.
Well, the little cores for light workloads, in my opinion, only matter for power efficiency; why else would they be put there? Over time, sure, but we are talking about this product, not what's going to happen in 10 years' time. You can't put in little cores and expect to get more performance. So the only advantage of these little cores is power efficiency. You get bigger cores to punch through heavy workloads if needed. That's how I see it.
Big.Little changes things, like you say. A 14-core big-little mix is not the same as, let's say, a 12-core big-only part. If consumers are dumb enough to buy first and then think about what they have bought, then sure, there will be backlash.
#7
InVasMani
crimsontapeYou know, this is pretty cool. Part of me wants to cry about it, because it seems awkward and bizarre. But I know there's part of the market that's been working on the implementation of big-little for a while. I think Intel is going to ride that wave with the general efforts to standardize that approach.

Mean while, I think AMD and their FPGA strategy mind will end up being a real banger in the long term. Imagine a CPU no longer CPU, but more like a GPU that's more like a CPU. And load in different micro instructions as needed... Just musing...
Honestly, this is part of why Intel's mobile efforts failed; if they'd had this tech, it would've done much better on battery life and, in turn, mindshare. As for AMD and FPGAs, let's not forget Intel also has FPGA tech, and they've had it longer than AMD. I'd anticipate that whatever AMD does with FPGAs, Intel will closely follow suit, and vice versa. I've long felt big.LITTLE and FPGAs are two key areas where tech innovation will see big growth, with big.LITTLE being more fixed-function and FPGA tech being more adaptive, like DSP hardware and effects. You can apply it to much more than that, but that's an area where FPGAs have been utilized heavily. I think we'll find in the future, as FPGA tech becomes more powerful, that its reach and scope will be much wider in terms of upside. I can see it replacing a motherboard chipset in large part: just reprogram unused portions of the chipset for other purposes. The potential is enormous because it's super flexible by nature. I think FPGA tech will be used a lot to micro-tune and optimize fixed hardware designs around their shortcomings and negative aspects.
ratirtI don't know what to think about it. I'm not gonna say cool or awesome nor crap and shit. I'm just gonna wait and see what it really offers. For me, thins Big.Little is more of a power efficiency improvement than a performance improvement.
That's not true; it's both. It can be used for efficiency in order to extract more performance, given how heat and voltage work. It can be used with different chipset structures to maximize efficiency. It's similar in concept to variable rate shading: spare some performance overhead on portions of the scene that aren't so vital, and use the saved overhead for parts that are more critical, or for frame rates that are more critical due to frame-rate dips. By being more efficient with the hardware you have, through optimization in either hardware or software, you can maximize the benefits of the hardware available and extract more performance. Look at the CCX situation with latency: a minor adjustment to that made quite a difference. Performance is really a delicate flux and synergy of heat and voltage, and of how to excite electrons in the right ways for maximum efficiency relative to the hardware constraints.
ratirtWell the little cores for light workloads in my opinion have only the meaning of power efficiency why else would they put there? Overtime sure but we are talking about this product not what's going to happen in 10 years time. You can't put little cores and expect to get more performance. So the only advantage about this little cores is power efficiency. You get bigger cores to punch through heavy workload if needed. That's how I see it.
The Big.Little changes stuff like you say. 14 core big little mix is not the same as 12 core Big only lets say. If consumers are dumb enough to buy first and then thing about what they have bought then sure, there will be backlash.
If you conserve heat in areas that aren't critical, you'll be able to push heat and performance higher for tasks that are. Do certain programs need to run on the faster cores? No, obviously not; they can be offloaded. In fact you can already offload a bunch to an Android TV box or a smartphone, though much of what you realistically would, you probably don't run anyway while playing a CPU-intensive program like a game; if you're very CPU-limited, you'd try to avoid that naturally. That said, there are a lot of background tasks. Just offloading DWM to the weaker cores is a perk in itself, and then you've got the networking services, virus scanners, updates, sound, and an array of other stuff. Basically, anything that isn't a game or another performance-sensitive program goes to the weaker cores. You offload all the wasted overhead, along with all the waste heat that gets in the way of the primary cores running at higher turbo boosts for longer periods across more cores. It's not hard to grasp at all, and Intel will improve big.LITTLE in follow-ups from one generation to the next, the same way RTRT, chiplets, and 3D-stacked NAND/DRAM are evolving.
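A rough sketch of that offloading idea, using Linux's standard-library affinity call. The E-core CPU indices here are purely an assumption for illustration; the real logical-CPU numbering depends on the OS and firmware, and the call itself is Linux-only:

```python
# Sketch: pin a background process to an assumed set of low-power
# cores so it stays off the performance cores' thermal budget.
import os

# Hypothetical mapping: "Gracemont" cores exposed as logical CPUs 12-19.
E_CORE_CPUS = set(range(12, 20))

def pin_background_task(pid):
    """Restrict a background process (virus scanner, updater, sound
    service, etc.) to the assumed E-core set, leaving turbo headroom
    for the game or other hot task on the performance cores."""
    os.sched_setaffinity(pid, E_CORE_CPUS)  # Linux-only API
```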
#8
Wirko
DeathtoGnomesSo they glued a Hare to a Turtles back? :kookoo: :rolleyes::D
Certainly not. They (will) put six hares on a cart drawn by eight turtles, with peacocks (GPU) walking behind.
PooPipeBoyI would wager that power efficiency won't be the only benefit. More of a flow-on effect. Over time, each type of core architecture will become more refined and focused on performing specific tasks. My speculation is that we'll reach a point down the road where the big cores are simply not designed to do the same tasks that the small cores can, but the latency and task scheduling efficiency will be far improved.
Come to think of it, the compiler could tag executable code with some metadata to indicate to the task scheduler which core would execute it best/fastest/most efficiently.
I'd do that if I were Intel, if only to make life harder for the competition.

It might be a stupid idea but stupid ideas get patented all the time. Intel will have a hard time trying to file a patent now, well, if they haven't done that already.
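In miniature, that tagging idea could look something like this. It's a toy illustration: no real compiler or OS exposes this API, and the decorator and core-type names are invented:

```python
# Toy version of compiler-emitted scheduling hints: tag functions with
# metadata a hypothetical scheduler could read to pick a core type.
def prefers(core_type):
    def tag(fn):
        fn.preferred_core = core_type  # attach the hint as metadata
        return fn
    return tag

@prefers("performance")
def physics_step():
    ...  # hot, latency-sensitive work

@prefers("low-power")
def poll_mail():
    ...  # background housekeeping

print(physics_step.preferred_core)  # -> "performance"
```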
#9
1d10t
A 14-core processor with 6 performance- and 8 low-power cores will perform vastly different from a 14-core processor with 8 performance- and 6 low-power cores. One way to derive core counts is by paying attention to the logical processor (thread) counts, as only the performance "Golden Cove" cores support HTT.
14 cores 20 threads is a slowpoke and 14 cores 22 threads means blazing fast. Gotcha, Intel.
#10
Vayra86
So.... on Alder Lake, Intel isn't stuck to 10 cores max, but 6, effectively.

Awesome. I can smell the progress.
InVasManiThey glued 6 hares to 8 turtles actually! The way to look at though is the slower cores are wider while the faster cores are more narrow and additionally the faster cores draw more power while the slower wider cores draw less power. It's also only a start and for mobile the way they have it setup is good in practice the best of both worlds. Plugged in and when battery life is well charged you have good performance across the 6 of the cores while when battery life decreases performance can slowly shift to the other other cores to preserve battery life. Eventually it's safe to say Intel will have enough space for another chiplet and another tier of cores with middle ground performance that's even more balanced. That or each package will pack in more cores and achieve the same type of thing. In either case I see it as a positive, but I think a 3rd and/or 4th chiplet will have lots of additional upside as things get revised Intel can shuffle certain instruction sets to each chiplet package to maximize performance and efficiency.
That's two bloodied turtles and two halved hares then. Brutal.
PooPipeBoyThe main hurdle with the Big.Little concept will be marketing. I can just imagine the forum posts now: "I bought a 14 core CPU but it turned out to be a shitty 6+8 thing WTFFFF" There's really no limit to how dumb consumers can be and it will be a major hurdle to prevent backlash.
More importantly, there is no limit to the number of types and conventions Intel can mix and match now. Imagine: an i7 could be anything from a dual-core to an octa-core now; it's a marketing dream. Everything is premium. Didn't they get new stickers for Alder Lake too?
#11
crimsontape
InVasManiHonestly this is part of why Intel's mobile efforts failed if they'd had this tech it would've done much better on battery life and in turn mindset. As for AMD and FPGA's let's not forget Intel also has FPGA tech and they have had it longer than AMD. I'd anticipate whatever AMD does with FPGA's Intel will closely follow suit and vice versa. I've long felt big.LITTLE and FPGA are two key area's where tech innovation will have big growth with big.LITTLE being more fixed function FPGA tech being more adaptive like DSP hardware and effects though you can apply it to much more than that however that's a area use case where FPGA has been utilized heavily. I think we'll find in the future as FPGA tech becomes more powerful though that it's reach and scope will be much wider in terms of it's upside. I can see it replacing a motherboard chipset in large part. Just reprogrammed unused portions of the chipset for other purposes. The potential for it is enormous because it's super flexible by nature. I think FPGA tech will be used a lot to micro tune and optimized fixed hardware designs of short comings and negative aspects.
I like the way you put that. Like DSP hardware... Or, imagine it as a reverse extrapolation, maybe an AI-based approach to branch prediction along with that fancy-nAMD'ed "APD"? NOW we're getting somewhere...
#12
InVasMani
Intel's big.LITTLE isn't a bad design; it just needs more maturity. It's strikingly similar to Apple's M1 in scope, really, so I can't see why people are complaining so much about it. You don't like it? Don't buy it; get an AMD and vote with your wallet. There are plenty of people who will see the upside to Intel's approach, and that's particularly true of mobile, where both heat and battery life are much more important. It'll get better in design over time with adjustments to the approach; arguing otherwise is completely idiotic. I see it like synthesizer chips myself: they have evolved more and more over the years in the scope of things they can do and how they do them. A lot of the design is a continuation of earlier design, but they are way more versatile today, and it's gotten to the point where a lot of VA synths aren't terribly far off from legitimate analog synths. There are differences, yet there is a fusing and blending of designs, and it is becoming harder and harder to differentiate as time moves on and synth makers keep innovating.

I have no doubt at all in my mind that a company like Intel will do similarly with CPUs. At the end of the day, price relative to performance is also a major part of what's important, along with actual yields. You can have a great chip and virtually no supply, and that's kind of a big problem to have; just look at the GPU market. The high end of the spectrum for GPU design spans too wide a gap of price and performance, and it's destroying the DIY PC market and the gaming industry in general in a lot of ways. For every AAA developer, there are a lot more lesser-known indie developers being hurt by the push towards higher-end, more expensive GPUs and less progress at the low end and mid-range. The mining situation doesn't help either, but it's exaggerated by high-end GPUs in a sense, because of the supply volume issue. They'd have an easier time ensuring more GPUs get into the hands of more people rather than mining bot farms.
#13
ratirt
InVasManiThat's not true it's both it can be used for efficiency in order to extract more performance given how heat and voltage works. It can be used with different chipset structures to maximize efficiency. It's similar in context to variable rate shading spare some performance overhead with portions of the scene that aren't so vital and use the saved overhead for parts that are more critical or frame rates that are more critical due to frame rate dips. By being more efficient with the hardware you have through optimization either in hardware or software you can maximize the benefits to the hardware available and extract more performance. Look at the CCX situation with latency a minor adjustment to that made quite a difference. Performance is a delicate flux and synergy of heat and voltage really and how to excite electrons in the right ways to maximum efficiency relative to the hardware constraints.
I think you are not correct. It is more about efficiency with the little cores, or, you could say, a higher performance-to-power ratio. The big cores do the heavy lifting, and their power consumption and efficiency stay the way they always have been.
It is obvious Intel will improve it, but I look at it as it is now. There's a reason for Intel to move to big.little, and I'm 100% sure it was efficiency and power usage, definitely not higher performance.
