Monday, November 4th 2024

Apple M4 Max CPU Faster Than Intel and AMD in 1T/nT Benchmarks

Early benchmark results have revealed Apple's newest M4 Max processor to be a serious competitor to Arm-based CPUs from Qualcomm and even the best x86 parts from Intel and AMD. Recent Geekbench 6 tests conducted on the latest 16-inch MacBook Pro show considerable improvements over both its predecessor and rival chips from major competitors. The M4 Max achieved an impressive single-core score of 4,060 points and a multicore score of 26,675 points, marking significant advancements in processing capability. These results represent approximately 30% and 27% improvements in single-core and multicore performance, respectively, compared to the previous M3 Max. They are also well ahead of chips like the Snapdragon X Elite, which tops out at twelve cores per SoC. When measured against x86 competitors, the M4 Max likewise demonstrates substantial advantages.

The chip outperforms Intel's Core Ultra 9 285K by 19% in single-core and 16% in multicore tests, and surpasses AMD's Ryzen 9 9950X by 18% in single-core and 25% in multicore performance. Notably, these results come at significantly lower power consumption than traditional x86 processors. The flagship system-on-chip features a 16-core CPU configuration, combining twelve performance and four efficiency cores. Additionally, it integrates a 40-core GPU and supports up to 128 GB of unified memory, shared between CPU and GPU operations. The new MacBook Pro line also introduces Thunderbolt 5 connectivity, enabling data transfer speeds of up to 120 Gb/s. While the M4 Max presents an impressive response to the current market, we have yet to see its capabilities in real-world benchmarks, as synthetic runs like these are only part of the performance story. We need productivity, content creation, and even gaming benchmarks before fully crowning it the king of performance. Below is a table comparing Geekbench v6 scores, courtesy of Tom's Hardware, alongside a Snapdragon X Elite (X1E-00-1DE) run in its top configuration.
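As a quick sanity check on the percentages above, the implied competitor single-core scores can be back-computed from the M4 Max's result. This is a rough sketch only; the derived scores are implied by the article's deltas, not measured, and individual Geekbench submissions vary:

```python
# Back-of-the-envelope check of the reported Geekbench 6 single-core deltas.
# Only the M4 Max score is taken from the article; the rest are implied.
m4_max_1t = 4060

core_ultra_9_285k_1t = m4_max_1t / 1.19  # "19% faster" implies ~3412
ryzen_9_9950x_1t = m4_max_1t / 1.18      # "18% faster" implies ~3441
m3_max_1t = m4_max_1t / 1.30             # "~30% faster" implies ~3123

print(round(core_ultra_9_285k_1t), round(ryzen_9_9950x_1t), round(m3_max_1t))
```

Those implied figures are in the right ballpark for typical Geekbench 6 results posted for those chips, which is a useful plausibility check on the claimed percentages.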
Source: Tom's Hardware

21 Comments on Apple M4 Max CPU Faster Than Intel and AMD in 1T/nT Benchmarks

#1
Onasi
“Synthetic” is the keyword here. As impressive as Apple Silicon is, you won’t see that level of performance in all real world tasks. That’s not even to mention that cross-platform benches are notoriously iffy.
Also, Geekbench, lmao.
#2
Luminescent
All good, but let's see some benchmarks on real-world projects in Adobe Premiere Pro, DaVinci Resolve (which relies heavily on GPU power), Lightroom, and Photoshop.
And of course, not the usual "it can decode some 4K or 8K format": real-world projects mean heavy color grading with masks, multi-camera editing, and, in Photoshop, real-world editing with filters that usually have no idea what to do with the M3 Max or M4, filters that use the GPU to detect faces and objects, denoise, etc.
You probably won't see that.
#3
Sunlight91
Hopefully this encourages AMD to release a CPU with more cores, for example an 8-core X3D + 16x Zen 5c model.
#4
arbiter
Is this one of those benchmarks like Apple used back in the day to make PowerPC processors look like gods, using highly optimized benchmarks for its own chips but a random one from download.com for the PC CPU?
#5
Redwoodz
LOL. So much fake news. Geekbench results are notoriously unreliable. Search the results browser yourself: the M4 comes in around page 2 of the top single-core results.
Also, a full SoC at $2,600. Nice try.
#6
dyonoctis
LuminescentAll good, but let's see some benchmarks on real-world projects in Adobe Premiere Pro, DaVinci Resolve (which relies heavily on GPU power), Lightroom, and Photoshop.
And of course, not the usual "it can decode some 4K or 8K format": real-world projects mean heavy color grading with masks, multi-camera editing, and, in Photoshop, real-world editing with filters that usually have no idea what to do with the M3 Max or M4, filters that use the GPU to detect faces and objects, denoise, etc.
You probably won't see that.
You might be surprised...Sadly there's not enough quality data vs a desktop PC

#7
zigzag
Onasi“Synthetic” is the keyword here. As impressive as Apple Silicon is, you won’t see that level of performance in all real world tasks. That’s not even to mention that cross-platform benches are notoriously iffy.
Also, Geekbench, lmao.
It's true that synthetic benchmarks don't always correlate with real-world tasks, but they still measure some workload. Geekbench generally correlates well with SPEC, which is the standard synthetic benchmark for comparing processors across architectures. A 30% higher single-core Geekbench score tells us that the performance of some part(s) of the CPU core increased by a good amount, which looks promising. But we'll only see gains in real-world apps and games if the improved parts line up with those workloads' bottlenecks. Like you said, we will need to wait for real-world benchmarks to get a clearer picture.

I would really like benchmarks to include all possible platforms, even if the results can be problematic. Sure, it's not fair if a single platform gets a lower score because the code is not optimized for that platform. But if that is the case across 10 benchmarks, that is valuable information to me as a user. It's also fun to know whether a new phone can match the performance of a five-year-old desktop PC. How long before we get docking stations for phones, and phones that can run desktop OSes?

The 546 GB/s memory bandwidth of the M4 Max is impressive. Apple is using 8 memory channels while desktop PCs are stuck with 2. Big LLMs need very high bandwidth: in simplified terms, for each word an LLM generates, it needs to read the whole model. So if the LLM is 50 GB, 500 GB/s of memory bandwidth allows it to generate up to ~10 words/s, provided you also have enough compute power.
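That rule of thumb (generation speed is capped by bandwidth divided by model size, since a memory-bound model streams its weights once per token) can be sketched as follows. The figures are the ones from this comment, not measured numbers:

```python
def max_tokens_per_second(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper bound on generation speed for a memory-bound LLM:
    each token requires streaming roughly all weights from memory once."""
    return bandwidth_gb_s / model_size_gb

# The 50 GB model / 500 GB/s example above:
print(max_tokens_per_second(500, 50))            # 10.0 tokens/s ceiling
# Against the M4 Max's quoted 546 GB/s:
print(round(max_tokens_per_second(546, 50), 1))  # ~10.9 tokens/s ceiling
```

Real throughput lands below this ceiling once compute, KV-cache traffic, and software overhead are accounted for.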
#8
Luminescent
dyonoctisYou might be surprised...Sadly there's not enough quality data vs a desktop PC
Professionals don't work on laptops; how are you going to judge the image you edit on a tiny laptop screen?
From real-world experience in DaVinci Resolve, I can tell you nothing touches an RTX 4090 coupled with a decent CPU, and the CPU isn't that important since the GPU does most of the work.
#9
TumbleGeorge
However, it turns out there will be an M4 Ultra, probably in Q2 2025, in the Mac Studio. So, will it outperform all competitors in all work tasks, and significantly so?
#10
dyonoctis
LuminescentProfessionals don't work on laptops; how are you going to judge the image you edit on a tiny laptop screen?
From real-world experience in DaVinci Resolve, I can tell you nothing touches an RTX 4090 coupled with a decent CPU, and the CPU isn't that important since the GPU does most of the work.
You might be surprised again... you can be an editor for the BBC and use a MacBook. And if you look at other posts on the Avid Media Composer page, you'll see that a fair number of big movies and shows are edited on a Mac rather than a big PC with a Threadripper and a 4090. Avid Media Composer is also very popular for big-budget productions over Premiere or DaVinci; movies like The Avengers were edited with that software.

On-set editing is also a thing
#11
unwind-protect
To be honest, I don't understand how Geekbench can be as bad at predicting real-world performance as it is. It is made up of multiple real-world apps. Are they just not using large enough data sets with those apps?
#12
Prima.Vera
Geekbench... lol

You have to love the callous marketing.
zigzagApple is using 8 memory channels while desktop PCs are stuck with 2 memory channels.
One can argue that 2 DDR5 modules in a PC are in fact quad-channel, since each module is dual-channel (two 32-bit subchannels) internally...
#13
zigzag
Prima.VeraOne can argue that 2 DDR5 modules in a PC are in fact quad-channel, since each module is dual-channel (two 32-bit subchannels) internally...
By your definition, Apple is using 16 channels.
#14
Prima.Vera
zigzagBy your definition, Apple is using 16 channels.
No. LPDDR5 and DDR5 are NOT the same thing.
#15
JohH
  1. It's >500mm² 3nm N3E vs AMD's 140mm² 4nm N4P + 100mm² 6nm N6.
  2. It has 512-bit memory interface on 3nm with 546GB/s of bandwidth vs 128-bit memory interface on 6nm with 96GB/s.
  3. Geekbench 6.3 added niche ML SME extensions which slightly inflates the score relative to its lead in SPECint 2017 1T since no compiler will emit these instructions for SPECint tests. Compare, for example, Geekbench 5.5 where Zen 5's performance is overstated such that it matches some of the M4 series in 1T composite score.
  4. M4 Max cannot be overclocked. A memory-tuned 9950X can easily score >3600 on GB6.3 1T composite, since it's rather memory speed sensitive.
As a consequence of #1 and #2 the cheapest M4 Max machine is $3700. A price where you can buy an RTX 4090 and 9950X PC.
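The bandwidth figures in point 2 fall straight out of bus width times transfer rate. A small sketch; the 8533 MT/s and 6000 MT/s rates are assumptions on my part, matching LPDDR5X-8533 and a common DDR5-6000 desktop configuration:

```python
def peak_bandwidth_gb_s(bus_width_bits: int, mega_transfers_s: int) -> float:
    # bytes per transfer (bus width / 8) times transfers per second,
    # reported in GB/s (10^9 bytes)
    return bus_width_bits / 8 * mega_transfers_s * 1e6 / 1e9

print(round(peak_bandwidth_gb_s(512, 8533)))  # ~546 GB/s (512-bit, M4 Max)
print(round(peak_bandwidth_gb_s(128, 6000)))  # 96 GB/s (128-bit desktop DDR5)
```

Both outputs reproduce the quoted 546 GB/s and 96 GB/s figures, so the comparison in point 2 is internally consistent.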
#16
zigzag
Prima.VeraNo. LPDDR5 and DDR5 are NOT the same thing.
So LPDDR5 is wider than DDR5? I've never checked LPDDR5 specs.

The point I was trying to make is that the M4 Max has roughly 5-6x the memory bandwidth of the latest AMD- or Intel-based desktop PCs.
#17
Scrizz
dyonoctis[quoted image]

My guy, that editing is clearly happening on a Windows PC. Just look at the taskbar. :laugh:
So you just proved the other guy's point. ;):D:toast:
#18
Luminescent
dyonoctisYou might be surprised again...you can be an editor for the BBC and use a MacBook. And if you look at other post on the avidmedia media composer page, you'll see that a fair amount of big movies and shows are edited on a Mac, rather than a big PC with Threadripper and a 4090. Avid media composer is also very popular for big budget stuff over premiere or DaVinci. Stuff like the avengers were edited with that software.

On set editing is also a thing
I didn't know Avid still exists.
I guess for cutting, any NLE will do, but when you get to serious color grading, studios buy those very expensive Nvidia Quadros, RTX 4090s, and Threadrippers even if they don't fully utilize them, just because they have a ton of money and don't have the time to mess around with Arm CPUs.
I was curious why Apple is so energy efficient, so I browsed the web for information. It's not because of the 3 nm node or whatever exclusive deal they have to be first on TSMC's latest, or the all-in-one chip design; it's mostly because of the Arm instruction set, and while it is very efficient, not everything works on it.
So x86 carries a large instruction set and everything works from the "beginning of time", while Arm can't do that unless someone adapts the software for Arm, if it can be done at all.
#19
dyonoctis
Scrizz[quoted image]
My guy, that editing is clearly happening on a Windows PC. Just look at the taskbar. :laugh:
So you just proved the other guy's point. ;):D:toast:
I chose the wrong picture, but my argument is still correct if I use other evidence :D Saturday Night was edited on a MacBook by the guy who edited John Wick, and The Orville is edited on a Mac

LuminescentI didn't know Avid still exists.
I guess for cutting, any NLE will do, but when you get to serious color grading, studios buy those very expensive Nvidia Quadros, RTX 4090s, and Threadrippers even if they don't fully utilize them, just because they have a ton of money and don't have the time to mess around with Arm CPUs.
I was curious why Apple is so energy efficient, so I browsed the web for information. It's not because of the 3 nm node or whatever exclusive deal they have to be first on TSMC's latest, or the all-in-one chip design; it's mostly because of the Arm instruction set, and while it is very efficient, not everything works on it.
So x86 carries a large instruction set and everything works from the "beginning of time", while Arm can't do that unless someone adapts the software for Arm, if it can be done at all.
Honestly, from what I've seen in the field, "professionals" are not always super tech-literate people; some of them don't even own a personal workstation if their main job is in a studio, and just use a laptop for their personal projects.

I even know a VFX studio that stuck with the Mac when everyone moved to Nvidia, and somehow made it work. "Professional" is such a loose term; depending on your field and the level at which one works, you don't need a "pro" beefy computer. Where I'm working we have a Mac Studio to handle heavy graphic design tasks, but we also have bottom-of-the-barrel Lenovo PCs plugged into €200,000 worth of printing gear, because the software isn't available for macOS and you don't need that much power just to prep a file to be printed on something not that big.
#20
Scrizz
dyonoctisI chose the wrong picture, but my argument is still correct if I use other evidence :D Saturday Night was edited on a MacBook by the guy who edited John Wick, and The Orville is edited on a Mac

Honestly, from what I've seen in the field, "professionals" are not always super tech-literate people; some of them don't even own a personal workstation if their main job is in a studio, and just use a laptop for their personal projects.

I even know a VFX studio that stuck with the Mac when everyone moved to Nvidia, and somehow made it work. "Professional" is such a loose term; depending on your field and the level at which one works, you don't need a "pro" beefy computer. Where I'm working we have a Mac Studio to handle heavy graphic design tasks, but we also have bottom-of-the-barrel Lenovo PCs plugged into €200,000 worth of printing gear, because the software isn't available for macOS and you don't need that much power just to prep a file to be printed on something not that big.
For NLE stuff you don't really need the most powerful computer. What you really need is fast storage and lots of it.
#21
Guwapo77
ScrizzFor NLE stuff you don't really need the most powerful computer. What you really need is fast storage and lots of it.
In short, some professionals do choose to use Apple laptops.