Tuesday, November 24th 2020

Alleged Apple M1X Processor Specifications Surface

Apple's silicon design team recently launched its "fastest" CPU core ever, found inside the company's M1 processor designed for laptops and mini PCs. Featuring an eight-core CPU with four small, low-power cores and four big, high-performance cores, the M1 proved to be extremely fast. However, the Apple Silicon processor doesn't yet appear in anything above the 13-inch MacBook Pro, and that is about to change. Higher-end models like the 16-inch MacBook Pro provide more cooling area, so it is logical that the processor for those designs would be a higher-performance part.

Enter the world of the Apple M1X processor. Designed for high-end laptops and the most demanding workloads, the new processor aims to set a new performance level. Featuring a 12-core CPU with eight big and four small cores, the M1X is expected to deliver much better performance than the M1. The graphics and memory configurations are currently unknown, so we will have to wait and see what the full chip looks like. The M1X is set to arrive sometime in Q1 2021, according to the source of the leak, so be patient and remember to take this information with a grain of salt.
Source: LeaksApplePro (Twitter)

45 Comments on Alleged Apple M1X Processor Specifications Surface

#26
Aquinus
Resident Wat-man
FouquinThe higher you push the performance the lower the efficiency? Somebody better tell AMD! Zen 3 must be the least efficient chip they've ever made, it's way higher performance than Zen 2! :kookoo:
Clocking chips higher requires more power because dynamic power primarily comes from transistors switching states: between the on and off states the gate conducts briefly, since the switch doesn't turn off instantaneously (although it happens pretty fast). Because of that, switching transistors faster results in more dissipation and higher power consumption. Zen 3 is more efficient than earlier revisions because of architectural changes and smaller manufacturing nodes, but a Zen 3 core will still run far more efficiently at lower clock speeds than at higher ones. Since you're not switching the transistors as quickly, you can lower the operating voltage, both of which reduce power draw. Just because the overall efficiency of chips improves over time doesn't mean that running a chip slower, or leaning out the core (fewer transistors) on the same node, won't result in power consumption improvements. If Apple adds more high-power cores than low-power ones, power consumption is going to go up by a lot. If they scale it out, it might increase in a more (though not quite) linear fashion.
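To put rough numbers on that relationship: dynamic switching power scales roughly as P ≈ α·C·V²·f, so dropping the clock and the voltage together cuts power superlinearly. A minimal sketch (the capacitance, voltage, and frequency values below are made up for illustration):

# Rough model of dynamic CMOS power: P = alpha * C * V^2 * f.
# All constants are hypothetical; real V/f curves are chip-specific.

def dynamic_power_w(c_eff_f, voltage_v, freq_hz, activity=1.0):
    """Dynamic switching power in watts."""
    return activity * c_eff_f * voltage_v**2 * freq_hz

C_EFF = 1e-9  # effective switched capacitance in farads (made up)

fast = dynamic_power_w(C_EFF, voltage_v=1.2, freq_hz=4.0e9)  # high clock, high V
slow = dynamic_power_w(C_EFF, voltage_v=0.8, freq_hz=2.0e9)  # half clock, lower V

print(f"4.0 GHz @ 1.2 V: {fast:.2f} W")  # 5.76 W
print(f"2.0 GHz @ 0.8 V: {slow:.2f} W")  # 1.28 W -- ~4.5x less for 2x less clock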

@Vya Domus is right though; historically, when Apple releases an X chip, it's usually a beefier chip based on the same design.
Posted on Reply
#27
ratirt
FouquinAre we just going to forget IPC, ILP and clock rate? On 28nm, GM204 with 2B fewer transistors and a 200 mm² smaller die overtook GK110B in performance. On 7nm, RDNA took 3B fewer transistors on an 80 mm² smaller die to match and sometimes outperform the 13B transistors in GCN 5.1. Architecture. It matters a lot.
No, we are not, and I have mentioned it in earlier posts, but it is still irrelevant since the argument is about big vs. small die size. I suppose it is about die size for you too, though you haven't specified it. OK.
If you consider the same node and the same arch, you must increase the die size and pack in more transistors to make it faster, so more power will be required. If you have moved to a smaller node, you can pack more transistors into the same die size to get more performance and bump clocks; depending on how you balance the GPU, it may use more power for better performance, or the same power with a slight increase in performance due to the transistor increase. This all depends on so many things.
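As a rough back-of-the-envelope of that node-shrink point (the density and die-size figures below are purely illustrative, not vendor specs):

# Transistor budget at a fixed die size before and after a node shrink.

def transistors_bn(density_mtr_per_mm2, die_mm2):
    """Transistor count in billions: density (MTr/mm^2) times area (mm^2)."""
    return density_mtr_per_mm2 * die_mm2 / 1000

old_node = transistors_bn(25, die_mm2=400)  # ~10 B transistors
new_node = transistors_bn(50, die_mm2=400)  # ~20 B in the same area

print(old_node, new_node)  # the extra budget can go to more units, clocks, or both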
Posted on Reply
#28
Fouquin
Vya DomusNo we are not, you are. No one uses a PC in the way you describe; I feel like I am repeating myself indefinitely. The bigger the chip, the higher the power consumption and the worse the battery life gets, even if performance/watt has increased. The only situation where that's not true is if the chip is used at maximum load until the battery dies, and almost no one uses their device like that.
Not. If. The. Time. It. Takes. To. Complete. The. Tasks. Is. Shorter. If the chip wakes from idle, completes the task, and goes back to idle in under half the time of the lower-power chip, it has consumed less power. Welcome to 20 years of CPU clock gating! This is the most basic mathematics; I'm not sure why you're being so obtuse.
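Back-of-the-envelope, with two hypothetical chips, since what drains the battery is energy, i.e. power times time:

# E = P * t: a higher-power chip that finishes much sooner can use less energy.

def task_energy_j(watts, seconds):
    """Energy in joules to complete a fixed task."""
    return watts * seconds

slow_chip = task_energy_j(watts=10, seconds=100)  # 1000 J
fast_chip = task_energy_j(watts=18, seconds=45)   # 810 J: more power, less energy

print(slow_chip, fast_chip)  # the faster chip wins, provided it idles once done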
Vya DomusA first gen Zen 1 core topped out at around 8-9W, a Zen 3 core tops out at 20W.


[chart: per-core power peaking at 22.62 W]


We're done here.
Posted on Reply
#29
Aquinus
Resident Wat-man
FouquinNot. If. The. Time. It. Takes. To. Complete. The. Tasks. Is. Shorter. If the chip wakes from idle, completes the task, and goes back to idle in under half the time of the lower-power chip, it has consumed less power. Welcome to 20 years of CPU clock gating! This is the most basic mathematics; I'm not sure why you're being so obtuse.
I'm sure that's exactly why mobile chips consume less power and produce less heat at lower speeds and operating voltages. CPUs have a lot of area between "fully on" and "idle". A core only exits C0 if there is nothing to do for a certain duration of time. If the load is relatively low, the core will operate at a lower clock speed and a lower voltage, because that's more efficient. It's not going to go idle, then run at full tilt, then go idle, then go full tilt again. That's not how CPUs work unless your load comes in sporadically, in which case that will happen on any CPU because it's dependent on the load.
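A toy sketch of that middle ground (the operating points are invented; real DVFS tables are chip- and firmware-specific):

# Toy DVFS governor: pick the slowest operating point that still covers the
# offered load, instead of bouncing between idle and full tilt.

OPERATING_POINTS = [(1.0, 0.70), (2.0, 0.85), (3.0, 1.00), (4.0, 1.20)]  # (GHz, V)

def pick_point(load_ghz):
    """Return the lowest (freq, volt) pair whose frequency covers the load."""
    for freq, volt in OPERATING_POINTS:
        if freq >= load_ghz:
            return freq, volt
    return OPERATING_POINTS[-1]  # saturated: run flat out

print(pick_point(1.7))  # (2.0, 0.85) -- no need to burst to 4 GHz for a light load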
Posted on Reply
#30
Vya Domus
Fouquin

[chart: per-core power peaking at 22.62 W]
Those are power figures for two threads per core, hence the "2T"; that chart is also in the context of XFR. It took me a while to find it because I knew something was off. Zen 3 uses 20W for one thread per core, but nice try.

FouquinIf the chip wakes from idle, completes the task, and goes back to idle in under half the time of the lower-power chip, it has consumed less power.
What are you talking about, dude? That's literally incomprehensible. It consumed less power than the chip with lower power... the one that's lower power, right? What does that even mean?

For a chip to consume less energy in half the time, it has to be at the very least more than twice as fast while using the same power. The usage pattern is, however, still absurd.
Posted on Reply
#31
TheoneandonlyMrK
ratirtSure, but here you have RDNA vs RDNA2, like the 5700 XT and 6800. You have the 2080 Ti and 3080. It can also go the other way around. Besides, there are more things that come into play.
Just to be clear: are you talking about die size or transistor count? These two can be misleading. The 2080 Ti and 3080 are on different nodes, so the 2080 Ti is bigger in die size but the 3080 packs more transistors. Just like Kepler vs Maxwell. RDNA and RDNA2 are a better comparison since they are on the same node; the 6800 is almost twice as big as the 5700 XT.
You're forgetting Apple's talent; they can make twice the performance in a quarter of the die. That's sarcasm: physics is physics.
No one gets performance for naught.
Posted on Reply
#32
0xCats
"Apple bad, but apple made a fast CPU that beats the CPU's I like. This cannot be!"
The cognitive dissonance is strong here
Posted on Reply
#33
okbuddy
Where is the M2 with 24 cores plus a 3070-level integrated GPU?
Posted on Reply
#34
Searing
People need to calm down around here. I already play a ton of games on my M1 Mac: all Blizzard games except Overwatch, and a huge library of Steam games that work on macOS. You can buy them cheaply on the PC and then automatically get the Mac version too. Rosetta works so well I can play all of them.

If Apple doubles the M1 GPU, it will move from PS4 level (30 fps) to PS4 Pro plus a powerful CPU (60 fps); all modern games, including Cyberpunk, would be possible. I hope they do it. The integrated performance is already much higher than my Tiger Lake laptop's, and double the M1 would be perfect.

As for the pro models? I'd like to see 4x the M1 and really crush the competition in the laptop form factor. Maybe next year.

As it is, the M1 is equal to a PS4 in a Switch power envelope. It is a massive achievement. Now let's get some higher-wattage parts!
Posted on Reply
#35
TheoneandonlyMrK
SearingPeople need to calm down around here. I already play a ton of games on my M1 Mac: all Blizzard games except Overwatch, and a huge library of Steam games that work on macOS. You can buy them cheaply on the PC and then automatically get the Mac version too. Rosetta works so well I can play all of them.

If Apple doubles the M1 GPU, it will move from PS4 level (30 fps) to PS4 Pro plus a powerful CPU (60 fps); all modern games, including Cyberpunk, would be possible. I hope they do it. The integrated performance is already much higher than my Tiger Lake laptop's, and double the M1 would be perfect.

As for the pro models? I'd like to see 4x the M1 and really crush the competition in the laptop form factor. Maybe next year.

As it is, the M1 is equal to a PS4 in a Switch power envelope. It is a massive achievement. Now let's get some higher-wattage parts!
The node precludes crazy wattages; they're best off going chiplet, but these chips are not small, though I suppose we have seen some massive packages.
They're wide, high-transistor-count cores, plus a lot of accelerators, which makes for a comparatively large die. Like NVIDIA, this could cause problems; it has for Intel (different density, same issue).
Posted on Reply
#36
Aquinus
Resident Wat-man
SearingPeople need to calm down around here. I already play a ton of games on my M1 Mac: all Blizzard games except Overwatch, and a huge library of Steam games that work on macOS. You can buy them cheaply on the PC and then automatically get the Mac version too. Rosetta works so well I can play all of them.

If Apple doubles the M1 GPU, it will move from PS4 level (30 fps) to PS4 Pro plus a powerful CPU (60 fps); all modern games, including Cyberpunk, would be possible. I hope they do it. The integrated performance is already much higher than my Tiger Lake laptop's, and double the M1 would be perfect.

As for the pro models? I'd like to see 4x the M1 and really crush the competition in the laptop form factor. Maybe next year.

As it is, the M1 is equal to a PS4 in a Switch power envelope. It is a massive achievement. Now let's get some higher-wattage parts!
Out of curiosity, what do you play and at what resolution? I've been playing WoW on my 5Ks with my MacBook Pro at a rendering resolution of 4K with the quality bar set to 7, and I'll usually get 40-50 FPS most of the time with the Radeon Pro 5600M. I'm just trying to gauge the relative performance compared to what I have now. I've read that the CPU is quick but the GPU has a ways to go. Being faster than Intel is a relatively low bar, and I've been pretty impressed with my 5600M. I think AMD has set the bar high for Apple, more so than Intel.
Posted on Reply
#37
Houd.ini
AquinusOut of curiosity, what do you play and at what resolution? I've been playing WoW on my 5Ks with my MacBook Pro at a rendering resolution of 4K with the quality bar set to 7, and I'll usually get 40-50 FPS most of the time with the Radeon Pro 5600M. I'm just trying to gauge the relative performance compared to what I have now. I've read that the CPU is quick but the GPU has a ways to go. Being faster than Intel is a relatively low bar, and I've been pretty impressed with my 5600M. I think AMD has set the bar high for Apple, more so than Intel.
It’s pretty potent, and already beats earlier 15” MBPs with discrete graphics, even through Rosetta 2: Anandtech quick review
Posted on Reply
#38
Mussels
Freshwater Moderator
To the guys arguing about power consumption:

It depends on the tasks you're doing. If you're just gaming for 30 minutes, then the max power draw is what matters.
If you're doing any kind of productivity (or using a battery) you REALLY want the CPU to complete tasks in as short a time as possible, and idle as fast as possible.

Desktop CPUs tend to complete tasks in fast bursts then idle, resulting in much lower energy used for the task as a whole - AMD Threadripper smashed that metric apart in the tech world recently.

Apple can release low-wattage CPUs all they like, but they need to balance getting the task done fast against consuming as little power as possible for that task. They can tweak this more than other companies because they're designing all the hardware and the OS to work exclusively together, like a... well, a portable game console really. This makes me think of the new Macs as kin to the Nintendo Switch now.
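To illustrate with made-up numbers, counting idle power over the same one-hour window:

# Race-to-idle vs. slow-and-steady for the same job in the same hour.

IDLE_W = 1.0  # hypothetical idle draw

def window_energy_wh(active_w, active_h, window_h=1.0):
    """Energy over the window: active draw plus idle draw for the remainder."""
    return active_w * active_h + IDLE_W * (window_h - active_h)

burst  = window_energy_wh(active_w=20, active_h=0.5)  # done in 30 min, then idles
steady = window_energy_wh(active_w=12, active_h=1.0)  # grinds the whole hour

print(f"burst: {burst:.1f} Wh, steady: {steady:.1f} Wh")  # 10.5 vs 12.0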
Posted on Reply
#39
Vya Domus
MusselsDesktop CPUs tend to complete tasks in fast bursts then idle, resulting in much lower energy used for the task as a whole - AMD Threadripper smashed that metric apart in the tech world recently.
Burst workloads are actually the least efficient, because the processor goes into its highest power state, then idles, then goes back again, and so on. The bigger the changes in voltage and frequency, the more energy is wasted.
Posted on Reply
#40
Mussels
Freshwater Moderator
Vya DomusBurst workloads are actually the least efficient, because the processor goes into its highest power state, then idles, then goes back again, and so on. The bigger the changes in voltage and frequency, the more energy is wasted.
Not really, no. Those transitions happen in milliseconds, and the higher energy efficiency comes into play massively. These aren't a large vehicle that has to slow down its mass; when they finish a task they instantly stop.
Posted on Reply
#41
Vya Domus
MusselsThese aren't a large vehicle that has to slow down its mass; when they finish a task they instantly stop.
It's the exact same principle, and no, they don't stop instantaneously. It takes a lot of energy to power up portions of the chip that are power gated.
Posted on Reply
#42
Mussels
Freshwater Moderator
Vya DomusIt's the exact same principle, and no, they don't stop instantaneously. It takes a lot of energy to power up portions of the chip that are power gated.
... no? No, it does not. Are you one of the people who thinks a light bulb takes so much wattage to power on that it's best left on alllllll the time too?
Posted on Reply
#43
Vya Domus
Mussels... no? No, it does not. Are you one of the people who thinks a light bulb takes so much wattage to power on that it's best left on alllllll the time too?
I don't know why you are arguing over this. www.microsoft.com/en-us/research/wp-content/uploads/2016/02/pcyc-web.pdf
Many modern microprocessors have low-power states, in which they consume little or no power. To take advantage of such low-power states, the operating system needs to direct the processor to turn off (or down) when it is predicted that the consequent savings in power will be worth the time and energy overhead of turning off and restarting.
It's one of the reasons operating systems schedule as many threads as possible on as few cores as possible; that's why you always see one core with a lot of load on it: to increase the chance that the other cores stay in their low-power states for as long as possible.
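The break-even logic from the paper, sketched with hypothetical power and transition figures:

# Entering a deep sleep state only pays off if the predicted idle period
# saves more energy than the power-gate/wake transition costs.

SHALLOW_IDLE_W = 2.0   # staying in a light idle state
DEEP_SLEEP_W   = 0.1   # deep low-power state
TRANSITION_J   = 0.05  # energy to power down and wake back up

def worth_sleeping(predicted_idle_s):
    """True if deep sleep saves more energy than the transition overhead."""
    saved_j = (SHALLOW_IDLE_W - DEEP_SLEEP_W) * predicted_idle_s
    return saved_j > TRANSITION_J

print(worth_sleeping(0.010))  # False: a 10 ms gap doesn't pay back the overhead
print(worth_sleeping(0.100))  # True: a 100 ms gap does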
Posted on Reply
#44
q_ex
Vya DomusNo we are not, you are. No one uses a PC in the way you describe; I feel like I am repeating myself indefinitely. The bigger the chip, the higher the power consumption and the worse the battery life gets, even if performance/watt has increased. The only situation where that's not true is if the chip is used at maximum load until the battery dies, and almost no one uses their device like that.
Kinda the opposite. Say you have a 100 watt-hour battery (yes, they are rated in Wh, not W) and an SoC/CPU with a max power draw of 20W. If you run that SoC/CPU at full tilt the whole time, you always get 5 hours of battery life. It doesn’t matter what the perf/watt is. Sure, there’s the user experience... higher perf/watt maybe gets you snappier response, more data crunching, more FPS, whatever have you, in that same 5 hours. But as you say, no one uses their device that way.

Now say you have a short-ish workload that lets you clock-gate so you don’t consume the full 20W all the time, then perf/watt matters again. Clearly, if you have SoC #1 that chews through the job at 20W for 1 hour then idles vs a higher perf/watt (but same max 20W) SoC #2 that gets done in 45 minutes (0.75 hours), then #1 just ate 20% battery whereas #2 ate only 15%.

Now, all things being equal, @Fouquin’s point is that a 1-hour job @20W and a 2-hour job @10W would chew through the same amount of battery. I’m guessing what you’re arguing when you say “efficiency” is that power scaling isn’t linear, so with the same micro-arch etc., @10W it’s really like a 1:45 job (17.5% batt) and not 2 hours (20% batt). So the lower-TDP system won. Point taken.

But to that, @Fouquin’s counter-argument is that if you have different micro-archs with maybe better perf/watt, then @20W you really could be looking at a 45-minute job (15% batt), swinging the balance back in favor of the higher-TDP SoC for the same job.
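A quick check of the arithmetic above (same 100 Wh battery and wattages as in the examples):

# Battery consumed as a percentage of a 100 Wh pack.

BATTERY_WH = 100.0

def battery_used_pct(watts, hours):
    return watts * hours / BATTERY_WH * 100

print(battery_used_pct(20, 1.00))  # 20.0% -- SoC #1: 1 hour at 20 W
print(battery_used_pct(20, 0.75))  # 15.0% -- SoC #2: 45 minutes at 20 W
print(battery_used_pct(10, 1.75))  # 17.5% -- same arch at 10 W, ~1:45 job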
Posted on Reply
#45
Aquinus
Resident Wat-man
MusselsTo the guys arguing about power consumption:

It depends on the tasks you're doing. If you're just gaming for 30 minutes, then the max power draw is what matters.
If you're doing any kind of productivity (or using a battery) you REALLY want the CPU to complete tasks in as short a time as possible, and idle as fast as possible.

Desktop CPUs tend to complete tasks in fast bursts then idle, resulting in much lower energy used for the task as a whole - AMD Threadripper smashed that metric apart in the tech world recently.

Apple can release low-wattage CPUs all they like, but they need to balance getting the task done fast against consuming as little power as possible for that task. They can tweak this more than other companies because they're designing all the hardware and the OS to work exclusively together, like a... well, a portable game console really. This makes me think of the new Macs as kin to the Nintendo Switch now.
Isn't this literally the reason why there are four Firestorm cores and four Icestorm cores? It's almost like Apple thought of this ahead of time.
Posted on Reply