Tuesday, November 24th 2020
Alleged Apple M1X Processor Specifications Surface
Apple's silicon design team has recently launched its "fastest" CPU core ever, found inside the company's M1 processor designed for laptops and the Mac mini. The M1 is an eight-core processor with four small, low-power cores and four big, high-performance cores, and it has proven to be extremely fast. However, Apple Silicon doesn't yet reach anything above the 13-inch MacBook Pro, and that is about to change. Higher-end models like the 16-inch MacBook Pro provide more cooling area, so it is logical that they would get a higher-performance processor.
Enter the Apple M1X processor. Designed for high-end laptops and the most demanding workloads, the new chip aims to set a new performance level. Featuring a 12-core CPU with eight big and four small cores, the M1X is expected to deliver much better performance than the M1. The graphics and memory configurations are currently unknown, so we will have to wait and see what it looks like. According to the source of the leak, the M1X is set to arrive sometime in Q1 2021, so be patient and remember to take this information with a grain of salt.
Source:
LeaksApplePro (Twitter)
45 Comments on Alleged Apple M1X Processor Specifications Surface
@Vya Domus is right, though; historically, when Apple releases an "X" chip, it's usually a beefier chip based on the same design.
If you stay on the same node and the same architecture, you have to increase the die size and pack in more transistors to make it faster, so more power will be required. If you have moved to a smaller node, you can pack more transistors into the same die size to get more performance and bump clocks; depending on how you balance the GPU, it may use more power for better performance, or the same power for a slight performance increase from the extra transistors. This all depends on so many things.
22.62W
We're done here.
What are you talking about, dude? That's literally incomprehensible. It consumed less power than the chip with lower power... the one that's lower power, right? What does that even mean?
For a chip to consume less energy while finishing in half the time, it has to be at least twice as fast while drawing no more than the same power. The pattern of usage is, however, still absurd.
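The underlying arithmetic, as a quick sketch (treating power draw as constant over the run): energy is power times time,

$$E = P \cdot t \quad\Rightarrow\quad E_{\text{half}} = P \cdot \frac{t}{2} = \frac{E}{2}$$

so at equal power, a chip that finishes in half the time uses half the energy; consuming less than that also requires it to draw less power while active.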
No one gets performance for naught.
The cognitive dissonance is strong here
If Apple doubles the M1's GPU, it will move from PS4-level (30 fps) to PS4 Pro-level graphics plus a powerful CPU, and 60 fps in all modern games, including Cyberpunk, would be possible. I hope they do. The integrated performance is already much higher than my Tiger Lake laptop's, and double the M1 would be perfect.
As for the pro models? I'd like to see 4x M1 and really crush the competition in the laptop form factor. Maybe next year.
As it is, the M1 is equal to a PS4 in a Switch power envelope. It is a massive achievement. Now let's get some higher wattage parts!
They're wide, high-transistor-count cores, and along with a lot of accelerators they make for a comparatively large die. As with NVIDIA, this could cause problems; it has for Intel (at a different density, but the same issue).
It depends on the tasks you're doing. If you're just gaming for 30 minutes, then the max power draw is what matters.
If you're doing any kind of productivity work (or running on battery), you REALLY want the CPU to complete tasks in as short a time as possible, and to return to idle as quickly as possible.
Desktop CPUs tend to complete tasks in fast bursts and then idle, resulting in much lower energy used for the task as a whole; AMD's Threadripper smashed that metric apart in the tech world recently.
Apple can release low-wattage CPUs all they like, but they need to balance getting the task done fast against consuming as little power as possible for that task. They can tune this more than other companies because they design all the hardware and the OS to work exclusively together, like a... well, a portable game console, really. That makes me think of the new Macs as kin to the Nintendo Switch.
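A minimal sketch of that race-to-idle trade-off (every wattage, duration, and the idle floor here is a made-up illustrative number, not a measurement of any real chip):

```python
# Energy for a fixed job over a fixed window: burst at full power, then idle.
# All figures below are hypothetical, chosen only to illustrate race-to-idle.

def window_energy_wh(active_w, active_h, idle_w, window_h):
    """Total Wh over the window: active burst plus idle for the remainder."""
    return active_w * active_h + idle_w * (window_h - active_h)

# Same job, 2-hour window, 1 W idle floor:
fast = window_energy_wh(active_w=20, active_h=0.5, idle_w=1, window_h=2)
slow = window_energy_wh(active_w=10, active_h=2.0, idle_w=1, window_h=2)

print(f"fast chip (20 W burst, then idle): {fast:.1f} Wh")  # 11.5 Wh
print(f"slow chip (10 W the whole time):   {slow:.1f} Wh")  # 20.0 Wh
```

The fast chip only wins here because it's assumed to be 4x as fast at 2x the power; flip those ratios and the sustained low-power chip wins, which is exactly the balance being described.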
Now say you have a short-ish workload that lets you clock-gate so you don’t consume the full 20W all the time, then perf/watt matters again. Clearly, if you have SoC #1 that chews through the job at 20W for 1 hour then idles vs a higher perf/watt (but same max 20W) SoC #2 that gets done in 45 minutes (0.75 hours), then #1 just ate 20% battery whereas #2 ate only 15%.
Now all things being equal, @Fouquin's point is that a 1-hour job @20W and a 2-hour job @10W would chew through the same amount of battery. I'm guessing what you're arguing when you say "efficiency" is that power scaling isn't linear, so with the same micro-arch etc., @10W it's really like a 1:45 job (17.5% batt) and not 2 hours (20% batt). So the lower TDP system won. Point taken.
But to that, @Fouquin's counter-argument is that if you have different micro-archs with maybe better perf/watt, then @20W you really could be looking at a 45 minute job (15% batt), swinging the balance back in favor of the higher TDP SoC for the same job.
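For concreteness, the same arithmetic as a small Python sketch (the 100 Wh battery capacity is an assumption; it's what makes 20 Wh come out to 20%, and the SoC figures are the thread's hypotheticals):

```python
# Battery drain for the thread's hypothetical jobs, assuming a 100 Wh battery.

BATTERY_WH = 100  # assumed capacity; the thread's 20 Wh -> 20% implies this

def battery_pct(power_w, hours):
    """Percent of battery a job consumes at a given average power draw."""
    return power_w * hours / BATTERY_WH * 100

print(battery_pct(20, 1.00))  # SoC #1: 20 W for 1 h         -> 20.0
print(battery_pct(10, 2.00))  # 10 W for 2 h (linear case)   -> 20.0
print(battery_pct(10, 1.75))  # 10 W for 1:45 (sub-linear)   -> 17.5
print(battery_pct(20, 0.75))  # better uarch: 20 W for 45 m  -> 15.0
```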