
Apple's Graphics Performance Claims Proven Exaggerated by Mac Studio Reviews

Joined
May 2, 2017
Messages
7,762 (2.80/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I don't know what the heck Anand is doing wrong, but PL2 includes everything: cores, cache, and the rest of the chip.
Running everything as stock, as configured by Intel and the motherboard maker. They've clarified their testing methodologies at length previously, and reiterated this at the launch of ADL due to its abandonment of tau and boost power limits for K SKUs.
I don't think I said lying, but yes I called it dishonest. In my book misleading makes you dishonest.
Then all marketing is dishonest - which isn't wrong IMO, but then you're missing the point. There's nothing especially dishonest about this marketing in comparison to the competition - it's pretty much on par with everyone else. As such, calling them out on it specifically is selective.
About the 3090 vs 3070, I'm not sure it's so clear cut. It always depends on the workload and whether or not they can feed those cores. I'm pretty sure that a power-limited 3090 at 720p gaming will perform worse than a 3070.
"Whether or not they can feed those cores" =
unless specifically bottlenecked elsewhere
So yes, you're right on that point, but at that point the GPU isn't the determinant of efficiency, the external bottleneck is. Which renders talking about GPU efficiency meaningless.

And no, a 3090 power limited to the same level as a 3070 will not in general perform worse at 720p. It might in some games - certain applications don't scale equally well in parallel, or hit execution ceilings that need higher clocks or IPC to be overcome - but in general, the 3090 will always be faster in GPU-bound workloads. It could theoretically be VRAM limited, but with 24GB that is quite unlikely, and no games come close to saturating a PCIe 4.0 x16 connection, so that isn't a differentiating factor either. And given how voltage/frequency curves work - the power cost for increasing clocks increases as you push clocks higher - a lower clocked, wider GPU will almost always be more efficient than a smaller, higher clocked GPU at the same power levels.
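To put rough numbers on that, here's a minimal sketch of the dynamic-power argument (P ≈ C·V²·f). All figures are illustrative assumptions, not measurements of any real GPU - the point is only the shape of the trade-off:

```python
# Minimal sketch of the dynamic-power argument: P ~ C * V^2 * f.
# All numbers below are illustrative assumptions, not measurements of any real GPU.

def dynamic_power(cap, volts, freq_ghz):
    """Rough dynamic power model: P = C * V^2 * f (arbitrary capacitance units)."""
    return cap * volts**2 * freq_ghz

# Hypothetical "wide" GPU: 2x the execution units (2x effective capacitance),
# run at low clocks and therefore a low voltage point on the V/f curve.
wide = dynamic_power(cap=2.0, volts=0.80, freq_ghz=1.4)

# Hypothetical "narrow" GPU: half the units, pushed to high clocks,
# which requires a noticeably higher voltage.
narrow = dynamic_power(cap=1.0, volts=1.05, freq_ghz=2.0)

# In a GPU-bound workload, throughput scales roughly with units * frequency.
wide_throughput = 2.0 * 1.4    # 2.8
narrow_throughput = 1.0 * 2.0  # 2.0

print(f"wide:   power={wide:.2f}  perf={wide_throughput:.1f}  perf/W={wide_throughput / wide:.2f}")
print(f"narrow: power={narrow:.2f}  perf={narrow_throughput:.1f}  perf/W={narrow_throughput / narrow:.2f}")
# The wide, low-clocked part ends up both faster and more efficient at similar power,
# which is the point being made about a power-limited 3090 vs a 3070.
```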

I'm sure the 3970X is over twice as fast as the M1 Ultra. Going by CBR23 results, it scores 47k+. So what do you mean, same ballpark?
Cinebench seems to be somewhat of an outlier in terms of M1 performance (which might be down to how well it scales with SMT - that's a big part of AMD's historical advantage over Intel in that comparison, at least, as their SMT is better than Intel's). Not that GeekBench is a representative benchmark either, but in that, the M1U scores ~24000 compared to 26500 for a 3975WX system. That 3975WX system is running quite slow RAM (DDR4-2400) but with 2x the channels of a 3970X it should still overall be faster in anything affected by memory bandwidth. Still, that's a ballpark result IMO, even if it is still a clear victory for the TR chip. "A bit faster", like I said above, fits pretty well.
 
Joined
Apr 17, 2021
Messages
564 (0.43/day)
System Name Jedi Survivor Gaming PC
Processor AMD Ryzen 7800X3D
Motherboard Asus TUF B650M Plus Wifi
Cooling ThermalRight CPU Cooler
Memory G.Skill 32GB DDR5-5600 CL28
Video Card(s) MSI RTX 3080 10GB
Storage 2TB Samsung 990 Pro SSD
Display(s) MSI 32" 4K OLED 240hz Monitor
Case Asus Prime AP201
Power Supply FSP 1000W Platinum PSU
Mouse Logitech G403
Keyboard Asus Mechanical Keyboard
I don't know what the heck Anand is doing wrong, but PL2 includes everything: cores, cache, and the rest of the chip.

Here is mine at 35W, beating the M1 at both efficiency and performance.
View attachment 240407
You mean beating the M1 Pro?
The M1 Ultra is a 60W CPU that scores 25k.
The M1 Pro uses 30W and scores 15k. (edit: apparently most people get 13k or a bit less)
So your 30W i9-12900k is slower than both the M1 Pro and M1 Ultra. The M1 only uses 10W.
 
Joined
Jun 14, 2020
Messages
3,474 (2.13/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
Running everything as stock, as configured by Intel and the motherboard maker. They've clarified their testing methodologies at length previously, and reiterated this at the launch of ADL due to its abandonment of tau and boost power limits for K SKUs.

Then all marketing is dishonest - which isn't wrong IMO, but then you're missing the point. There's nothing especially dishonest about this marketing in comparison to the competition - it's pretty much on par with everyone else. As such, calling them out on it specifically is selective.

"Whether or not they can feed those cores" =

So yes, you're right on that point, but at that point the GPU isn't the determinant of efficiency, the external bottleneck is. Which renders talking about GPU efficiency meaningless.

And no, a 3090 power limited to the same level as a 3070 will not in general perform worse at 720p. It might in some games - certain applications don't scale equally well in parallel, or hit execution ceilings that need higher clocks or IPC to be overcome - but in general, the 3090 will always be faster in GPU-bound workloads. It could theoretically be VRAM limited, but with 24GB that is quite unlikely, and no games come close to saturating a PCIe 4.0 x16 connection, so that isn't a differentiating factor either. And given how voltage/frequency curves work - the power cost for increasing clocks increases as you push clocks higher - a lower clocked, wider GPU will almost always be more efficient than a smaller, higher clocked GPU at the same power levels.
I don't think we fundamentally disagree about anything, but I still insist that it is incredibly misleading - dishonest, even - to compare to a 3090. It's like Intel comparing a 10900K to a 5950X with the x-axis being cores and cutting the chart off at the 10-core mark. What's the point, when they could just compare it to a different product in the stack, like the 5800X? Well, the point is that people will remember the comparison to a 3090 and think they are comparable, which they are not.

Anyway, there is a fundamental duckup in The Verge's chart: in SOTR my 3090 gets 196 fps at highest settings at 1440p, yet they only get 140 fps at 1080p! That's what my five-year-old 1080 Ti was getting. Something is very wrong there.
 
Joined
Jun 14, 2020
Messages
3,474 (2.13/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
You mean beating the M1 Pro? The M1 Ultra is a 60W CPU that scores 25k. The M1 Pro scores 15k. So your 30W i9-12900k is slower than both the M1 Pro and M1 Ultra. The M1 only uses 10W.
Sorry, I was talking about the Max. And no, it's not a 10W chip. AnandTech (you know, the same website you showed) has it at 35W, scoring 12375 during Cinebench. Here ya go


Cinebench seems to be somewhat of an outlier in terms of M1 performance (which might be down to how well it scales with SMT - that's a big part of AMD's historical advantage over Intel in that comparison, at least, as their SMT is better than Intel's). Not that GeekBench is a representative benchmark either, but in that, the M1U scores ~24000 compared to 26500 for a 3975WX system. That 3975WX system is running quite slow RAM (DDR4-2400) but with 2x the channels of a 3970X it should still overall be faster in anything affected by memory bandwidth. Still, that's a ballpark result IMO, even if it is still a clear victory for the TR chip. "A bit faster", like I said above, fits pretty well.
Geekbench is a joke; it measures cache and memory latency first and foremost. I managed a 40% higher score just by clocking my DDR4 from stock (no XMP) to 4400C16 on a 10900K! I don't know why, but it loves memory.
 
Joined
Apr 17, 2021
Messages
564 (0.43/day)
System Name Jedi Survivor Gaming PC
Processor AMD Ryzen 7800X3D
Motherboard Asus TUF B650M Plus Wifi
Cooling ThermalRight CPU Cooler
Memory G.Skill 32GB DDR5-5600 CL28
Video Card(s) MSI RTX 3080 10GB
Storage 2TB Samsung 990 Pro SSD
Display(s) MSI 32" 4K OLED 240hz Monitor
Case Asus Prime AP201
Power Supply FSP 1000W Platinum PSU
Mouse Logitech G403
Keyboard Asus Mechanical Keyboard
Sorry, I was talking about the Max. And no, it's not a 10W chip. AnandTech (you know, the same website you showed) has it at 35W, scoring 12375 during Cinebench. Here ya go

I said the M1 uses 10W. The M1 Max uses 30W. The M1 Ultra uses 60W. Approximately. I never said the Max was 10W. Let's settle on 12.5k for the Max. The 12900K is a very good chip at Cinebench. Remember, it has more cores though; the M1 Ultra, if you could reduce its power draw to 30W, would obviously beat it.
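For what it's worth, here's the points-per-watt arithmetic using the rough numbers quoted in this thread (12.5k at ~35W for the M1 Max, ~12.4k at ~35W for the undervolted 12900K, ~24.5k at ~60W for the Ultra). Treat these as forum-grade estimates, not lab measurements:

```python
# Rough Cinebench R23 points-per-watt comparison using the approximate numbers
# quoted in this thread. These are forum-grade estimates, not lab measurements.

chips = {
    "M1 Max (~35W)":              (12500, 35),
    "M1 Ultra (~60W)":            (24500, 60),
    "12900K undervolted (~35W)":  (12400, 35),
}

for name, (score, watts) in chips.items():
    print(f"{name:28s} {score:6d} pts  {watts:3d} W  {score / watts:6.0f} pts/W")
```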

Part of the problem is that you're undervolting, not running stock voltages. The 11th gen took 82.6 watts. Do you think a stock 12900K, limited to 30W without an undervolt, would score that high? I don't think the 12900K improved from 82W to 30W, but maybe; it is a large chip, and it has DDR5 too.

I looked up AnandTech's latest review of the laptop 12900HK, but they used Cinebench R20 instead of R23. Ugh. Anyway, it doesn't matter ;)
 
Joined
May 2, 2017
Messages
7,762 (2.80/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I don't think we fundamentally disagree about anything, but I still insist that it is incredibly misleading - dishonest, even - to compare to a 3090. It's like Intel comparing a 10900K to a 5950X with the x-axis being cores and cutting the chart off at the 10-core mark. What's the point, when they could just compare it to a different product in the stack, like the 5800X? Well, the point is that people will remember the comparison to a 3090 and think they are comparable, which they are not.
Sorry, but just ... no. This is no more misleading than Intel comparing 12900K performance to a 5950X without mentioning that it consumes 100W more power while doing so. It is exactly the same thing: on a graph of power and performance, pick a comparison along one axis and fail to mention that this is not the full picture.
Anyway, there is a fundamental duckup in The Verge's chart: in SOTR my 3090 gets 196 fps at highest settings at 1440p, yet they only get 140 fps at 1080p! That's what my five-year-old 1080 Ti was getting. Something is very wrong there.
Gaming has never been a priority for Apple, and they never even alluded to it in their marketing materials. People reading "GPU performance" as "gaming performance" are as such fundamentally misunderstanding what Apple is saying. Neither are incorrect, but the statements exist in such fundamentally different contexts that they are inherently incomparable despite using similar words. But beyond that, I never trust any benchmarks posted by The Verge - they just don't operate on that level, and aren't warranted that level of trust.
Sorry, I was talking about the Max. And no, it's not a 10W chip. AnandTech (you know, the same website you showed) has it at 35W, scoring 12375 during Cinebench. Here ya go

It's worth mentioning that that 'package power' also includes the LPDDR5, while yours doesn't include your DDR4. Not a massive difference, but an inherent Apple advantage (a single stick of DDR4 consumes ~2.5-5W).
Geekbench is a joke; it measures cache and memory latency first and foremost. I managed a 40% higher score just by clocking my DDR4 from stock (no XMP) to 4400C16 on a 10900K! I don't know why, but it loves memory.
That's because it has several highly memory dependent workloads. This is quite unrealistic for consumer use cases (essentially no consumer applications are memory bound in any meaningful way), but for pro applications it does have its applicability. There's a reason why TR WX CPUs have 2x the memory channels of their non-W counterparts. But it's another example of knowing your workload: if this applies to you, you're likely to know it (and if it doesn't, you're unlikely to know).
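As a rough illustration of what "memory dependent" means here (a sketch with arbitrary sizes, not an actual Geekbench subtest): a streaming pass over an array far larger than the caches is limited mostly by DRAM bandwidth, while repeated work on a cache-resident buffer barely notices RAM speed:

```python
# Crude illustration of memory-bound vs cache/compute-bound behaviour.
# Array sizes are arbitrary; this is not a Geekbench subtest, just a sketch of why
# faster RAM moves some benchmark scores a lot and others barely at all.
import time
import numpy as np

big = np.random.rand(50_000_000)   # ~400 MB of doubles: streaming this is DRAM-bandwidth bound
small = np.random.rand(100_000)    # ~0.8 MB: fits in cache, mostly compute bound

def avg_time(fn, reps):
    t0 = time.perf_counter()
    for _ in range(reps):
        fn()
    return (time.perf_counter() - t0) / reps

stream = avg_time(lambda: big.sum(), reps=5)               # touches every byte once per pass
cached = avg_time(lambda: np.sin(small).sum(), reps=2000)  # reuses a tiny working set

print(f"streaming ~400 MB: {stream * 1e3:8.2f} ms/iter (scales with memory bandwidth)")
print(f"cache-resident:    {cached * 1e3:8.3f} ms/iter (mostly insensitive to RAM speed)")
```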

Anandtech's SPEC testing also illustrates some of the nuances here: The M1 Max (8c+2ec) just about matches a 5800X (8c16t) in integer workloads, but trounces even the 5950X (16c32t) at floating point workloads. It's quite possible the massive memory bandwidth affects this too, but this is clearly a choice Apple made for a reason (i.e. it mattering to the workloads they care about). Different architectures lend themselves to different jobs in different ways. This makes like-for-like comparisons difficult, but it also means we need to be nuanced when discussing such comparisons.


Oh, and another value comparison worth making: since this is a pre-built kinda-workstation, let's look at a Lenovo Threadripper Pro comparison. Even with a 42% (!) rebate counted in, a ThinkStation P620 with a TR 3975WX, 64GB of RAM, a (no-longer-called-Quadro) RTX A4000 (close to a 3070), and a 1TB PCIe 4.0 m.2 SSD costs ... nearly $6500.

Edit: lol, I misunderstood the memory configurator and accidentally gave that config a single 64GB DIMM. With 4x16GB (sadly there's no way of filling all 8 channels at 64GB) it's $6790. Going top-of-the-line on the P620 (same CPU, an RTX A6000, 128GB RAM) puts you at about $12,000 - about $3,000 more than a specced-out Mac Studio with 8TB of storage. Of course, one advantage of the P620 is that you can fit a whackton of GPUs, SSDs and accelerators if you want to - but it will cost you.
 
Joined
Jun 14, 2020
Messages
3,474 (2.13/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
I said the M1 uses 10W. The M1 Max uses 30W. The M1 Ultra uses 60W. Approximately. I never said the Max was 10W. Let's settle on 12.5k for the Max. The 12900K is a very good chip at Cinebench. Remember, it has more cores though; the M1 Ultra, if you could reduce its power draw to 30W, would obviously beat it.

Part of the problem is that you're undervolting, not running stock voltages. The 11th gen took 82.6 watts. Do you think a stock 12900K, limited without an undervolt, only uses 30W? I don't think the 12900K improved from 82W to 30W, but maybe; it is a large chip, and it has DDR5 too.
According to AnandTech (sorry, I could use another source if you have any), the M1 Max uses 34W and scores 12375. So it's basically a tie with my 12900K.

The 11th gen sucks, no question about it.


No, a stock 12900K wouldn't hit those numbers. BUT, the thing is, architecturally it can achieve them. It all comes down to what Intel is willing to call a 12900K. That's a marketing decision, not a product decision. Intel decided that every chip that manages 4.9GHz at 240W is going to be named 12900K. They could just as well have chosen only the chips that could do that at 190W or whatever. It all comes down to how many chips Intel wants to sell as a full-priced 12900K. Don't forget they are selling them for €600; if they were selling them for 1.5k, then yeah, every single 12900K would be binned to the highest degree.

Sorry, but just ... no. This is no more misleading than Intel comparing 12900K performance to a 5950X without mentioning that it consumes 100W more power while doing so. It is exactly the same thing: on a graph of power and performance, pick a comparison along one axis and fail to mention that this is not the full picture.

Gaming has never been a priority for Apple, and they never even alluded to it in their marketing materials. People reading "GPU performance" as "gaming performance" are as such fundamentally misunderstanding what Apple is saying. Neither are incorrect, but the statements exist in such fundamentally different contexts that they are inherently incomparable despite using similar words. But beyond that, I never trust any benchmarks posted by The Verge - they just don't operate on that level, and aren't warranted that level of trust.

It's worth mentioning that that 'package power' also includes the LPDDR5, while yours doesn't include your DDR4. Not a massive difference, but an inherent Apple advantage (a single stick of DDR4 consumes ~2.5-5W).

That's because it has several highly memory dependent workloads. This is quite unrealistic for consumer use cases (essentially no consumer applications are memory bound in any meaningful way), but for pro applications it does have its applicability. There's a reason why TR WX CPUs have 2x the memory channels of their non-W counterparts. But it's another example of knowing your workload: if this applies to you, you're likely to know it (and if it doesn't, you're unlikely to know).

Anandtech's SPEC testing also illustrates some of the nuances here: The M1 Max (8c+2ec) just about matches a 5800X (8c16t) in integer workloads, but trounces even the 5950X (16c32t) at floating point workloads. It's quite possible the massive memory bandwidth affects this too, but this is clearly a choice Apple made for a reason (i.e. it mattering to the workloads they care about). Different architectures lend themselves to different jobs in different ways. This makes like-for-like comparisons difficult, but it also means we need to be nuanced when discussing such comparisons.
Yeah, I was wondering whether Geekbench results have any practical applicability in the real world, so thanks for explaining, 'cause I'm not afraid to admit I was clueless.
 
Joined
May 10, 2020
Messages
738 (0.44/day)
Processor Intel i7 13900K
Motherboard Asus ROG Strix Z690-E Gaming
Cooling Arctic Freezer II 360
Memory 32 Gb Kingston Fury Renegade 6400 C32
Video Card(s) PNY RTX 4080 XLR8 OC
Storage 1 TB Samsung 970 EVO + 1 TB Samsung 970 EVO Plus + 2 TB Samsung 870
Display(s) Asus TUF Gaming VG27AQL1A + Samsung C24RG50
Case Corsair 5000D Airflow
Power Supply EVGA G6 850W
Mouse Razer Basilisk
Keyboard Razer Huntsman Elite
Benchmark Scores 3dMark TimeSpy - 26698 Cinebench R23 2258/40751
I also wonder how long Apple is going to be able to keep making giant leaps in its speeds without astronomical R&D costs. Apple doesn't sell its silicon to anyone except through its own products. Intel and AMD keep selling to game console makers and license their GPU technology for monitors and other products. They have multiple revenue streams and licensing deals. They make server and workstation parts as well as consumer parts. I'm not saying Apple can't be successful, but they neglected a lot of things for a long time and had turned into a cell phone manufacturer. How many companies are going to trust them enough to switch to their hardware and expect Apple to keep making large leaps and to guarantee that the software always runs best on a Mac?

I don’t get this part.
Apple is bigger than Intel and AMD (waaaay bigger than AMD), and they can spend whatever they want on R&D. Their revenue stream is unmatched.

There is no way the 12900K pulls 272W at stock, since the PL2 limit is 240W. It cannot draw more than 240W. Mine caps at around 190W during Cinebench R23.
Well, I don't know which motherboard you are using, but definitely no: basically every Z690 board has no PL2 limit at all (it is set to 4096, which means "unlimited").
How much it will draw depends on various factors: silicon lottery (I saw 30+ W difference between “identical” CPUs), motherboard (MSI Z690 seems to be very generous with stock voltage) and so on.
But I do agree with you: 272 W at stock is unreasonable.
By the way, Cinebench R23 is not very "power hungry"; with Prime95 or OCCT I can draw way more watts.
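If anyone wants to check what limits their board actually programmed instead of guessing, the Linux intel_rapl powercap interface exposes PL1/PL2 directly (a sketch; the sysfs path can vary by kernel and platform, and on Windows tools like HWiNFO or ThrottleStop show the same MSR-backed values):

```python
# Sketch: read the PL1/PL2 package power limits the firmware actually set,
# via the Linux intel_rapl powercap interface. The path may differ per system,
# and some kernels restrict these files to root.
from pathlib import Path

pkg = Path("/sys/class/powercap/intel-rapl:0")  # package-0 RAPL domain

def read(name: str) -> str:
    return (pkg / name).read_text().strip()

# Constraint 0 is the long-term limit (PL1), constraint 1 the short-term limit (PL2).
for i in (0, 1):
    label = read(f"constraint_{i}_name")                         # "long_term" / "short_term"
    limit_w = int(read(f"constraint_{i}_power_limit_uw")) / 1e6
    window_s = int(read(f"constraint_{i}_time_window_us")) / 1e6
    print(f"{label:>10}: {limit_w:6.1f} W over a {window_s:.3f} s window")
```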
 
Joined
May 2, 2017
Messages
7,762 (2.80/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Yeah, I was wondering whether Geekbench results have any practical applicability in the real world, so thanks for explaining, 'cause I'm not afraid to admit I was clueless.
GB is still pretty problematic - especially in cross-platform comparisons, which are subject to a non-transparent heap of optimization and compilation differences - but it's still okay within a single platform. But it also skews towards pro applications (in part because you have to do that if you want a wide variety of workloads). SPEC isn't representative of consumer workloads either, but it's as industry-standard as you can get (the key word kind of being "industry" - i.e. pro applications), and unlike GB it is very transparent, including the ability for testers to compile their own version of the tests. That's why I trust AT's SPEC testing more than any GB testing, despite knowing that it still skews towards memory intensive and latency insensitive applications (in other words: it does not paint an accurate picture for gaming, for example, which is typically latency sensitive). But AFAIK there exists nothing even remotely close to an industry-standard, open, game-like benchmark that could fit into SPEC or a similar test suite. There are tons of canned gaming benchmarks, but none are actual games, and they're all locked down and opaque, so ... it is what it is.
 
Joined
Jun 14, 2020
Messages
3,474 (2.13/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
Well, I don't know which motherboard you are using, but definitely no: basically every Z690 board has no PL2 limit at all (it is set to 4096, which means "unlimited").
How much it will draw depends on various factors: silicon lottery (I saw 30+ W difference between “identical” CPUs), motherboard (MSI Z690 seems to be very generous with stock voltage) and so on.
But I do agree with you: 272 W at stock is unreasonable.
By the way, Cinebench R23 is not very "power hungry"; with Prime95 or OCCT I can draw way more watts.
I'm using the Apex; at stock it has a PL2 limit of 240W, and during CBR23 it draws around 190W at 1.18 volts.
 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,171 (2.81/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
How long has the M1 Ultra been out again? I suspect we're going to see improvements as they fine-tune the drivers for this chip. It's a bit different from the M1, Pro, and Max in the sense that it has Apple's "glue," which nothing else has used yet. I wouldn't at all be surprised if the current driver isn't optimized as well as it could be for this new GPU configuration that no other Mac has. With that said, I never buy the first generation of a product for this very reason.
 
Joined
May 10, 2020
Messages
738 (0.44/day)
Processor Intel i7 13900K
Motherboard Asus ROG Strix Z690-E Gaming
Cooling Arctic Freezer II 360
Memory 32 Gb Kingston Fury Renegade 6400 C32
Video Card(s) PNY RTX 4080 XLR8 OC
Storage 1 TB Samsung 970 EVO + 1 TB Samsung 970 EVO Plus + 2 TB Samsung 870
Display(s) Asus TUF Gaming VG27AQL1A + Samsung C24RG50
Case Corsair 5000D Airflow
Power Supply EVGA G6 850W
Mouse Razer Basilisk
Keyboard Razer Huntsman Elite
Benchmark Scores 3dMark TimeSpy - 26698 Cinebench R23 2258/40751
I'm using the Apex; at stock it has a PL2 limit of 240W, and during CBR23 it draws around 190W at 1.18 volts.
That’s quite strange.
I'm using the Strix A Gaming, so an Asus board, and I have no power limit at all (PL1 = PL2 = 4096).
Same with the few MSI boards I used before.
 
Joined
Jun 14, 2020
Messages
3,474 (2.13/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
That’s quite strange.
I'm using the Strix A Gaming, so an Asus board, and I have no power limit at all (PL1 = PL2 = 4096).
Same with the few MSI boards I used before.
If I remember correctly, at first boot it asks what kind of cooler you have and sets the power limits accordingly. I think I chose air cooler, since I actually have an air cooler, and so it settled at 240W. Of course, I removed the power limits manually afterwards.
 
Joined
May 10, 2020
Messages
738 (0.44/day)
Processor Intel i7 13900K
Motherboard Asus ROG Strix Z690-E Gaming
Cooling Arctic Freezer II 360
Memory 32 Gb Kingston Fury Renegade 6400 C32
Video Card(s) PNY RTX 4080 XLR8 OC
Storage 1 TB Samsung 970 EVO + 1 TB Samsung 970 EVO Plus + 2 TB Samsung 870
Display(s) Asus TUF Gaming VG27AQL1A + Samsung C24RG50
Case Corsair 5000D Airflow
Power Supply EVGA G6 850W
Mouse Razer Basilisk
Keyboard Razer Huntsman Elite
Benchmark Scores 3dMark TimeSpy - 26698 Cinebench R23 2258/40751
If I remember correctly, at first boot it asks what kind of cooler you have and sets the power limits accordingly. I think I chose air cooler, since I actually have an air cooler, and so it settled at 240W. Of course, I removed the power limits manually afterwards.
That's the case for MSI for sure.
But I don't remember any such question from the Asus :confused:
By the way… a 12900K on air cooling, you are courageous :D
 
Joined
Jun 14, 2020
Messages
3,474 (2.13/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
That's the case for MSI for sure.
But I don't remember any such question from the Asus :confused:
By the way… a 12900K on air cooling, you are courageous :D
It's a small single-tower cooler, the U12A. It does insanely well actually: 75°C in CBR23 completely stock, and it peaked at 65°C with the undervolt. Right now I'm running 5.6GHz single-core and 5.4 to 5.2GHz all-core with TVB (it clocks down based on temperature).
 

studentrights

New Member
Joined
Mar 19, 2022
Messages
11 (0.01/day)
Wow, the ignorance on this thread is amazing. So easily fooled.

"Civilization 6" runs in Rosetta translation from x86 based using OpenGL 4.1 that Apple depreciated ages ago.

Of course it runs poorly! These "translated" NON-NATIVE games in no way measure the power of Apple's M1, M1 Pro, M1 Max or M1 Ultra GPUs.

Are any of the games tested NATIVE on the Mac M1 series of chips?

When looking at productivity tests, the M1 GPU chip kicks ass.
 

Joined
May 2, 2017
Messages
7,762 (2.80/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Wow, the ignorance on this thread is amazing. So easily fooled.

"Civilization 6" runs in Rosetta translation from x86 based using OpenGL 4.1 that Apple depreciated ages ago.

Of course it runs poorly! These "translated" NON-NATIVE games in no way measure the power of Apple's M1, M1 Pro, M1 Max or M1 Ultra GPUs.

Are any of the games tested NATIVE on the Mac M1 series of chips?

When looking at productivity tests, the M1 GPU chip kicks ass.
That's the problem with game benchmarks - there are essentially zero high profile, performance-intensive games coded natively for Apple's ARM chips. You mostly either have a choice between iOS/iPad OS games (mostly exclusive and relatively lightweight) or emulated x86 titles. Apple's x86 emulation is great for what it is, but it's still significantly slower than native. But again, Apple didn't mention gaming performance whatsoever, so going that route is quite misleading in the first place.
 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,171 (2.81/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
Are any of the games tested NATIVE on the Mac M1 series of chips?
...and of those games, how many are using Metal? OpenGL over Metal isn't nearly as fast as natively using Metal, even if natively compiled. I've had fairly good experiences with the performance of games that have native Metal support on my MBP though, so definitely something to consider.
 
Joined
Apr 1, 2009
Messages
60 (0.01/day)
Sorry, but just ... no. This is no more misleading than Intel comparing 12900K performance to a 5950X without mentioning that it consumes 100W more power while doing so. It is exactly the same thing: on a graph of power and performance, pick a comparison along one axis and fail to mention that this is not the full picture.

Gaming has never been a priority for Apple, and they never even alluded to it in their marketing materials. People reading "GPU performance" as "gaming performance" are as such fundamentally misunderstanding what Apple is saying. Neither are incorrect, but the statements exist in such fundamentally different contexts that they are inherently incomparable despite using similar words. But beyond that, I never trust any benchmarks posted by The Verge - they just don't operate on that level, and aren't warranted that level of trust.

It's worth mentioning that that 'package power' also includes the LPDDR5, while yours doesn't include your DDR4. Not a massive difference, but an inherent Apple advantage (a single stick of DDR4 consumes ~2.5-5W).

That's because it has several highly memory dependent workloads. This is quite unrealistic for consumer use cases (essentially no consumer applications are memory bound in any meaningful way), but for pro applications it does have its applicability. There's a reason why TR WX CPUs have 2x the memory channels of their non-W counterparts. But it's another example of knowing your workload: if this applies to you, you're likely to know it (and if it doesn't, you're unlikely to know).

Anandtech's SPEC testing also illustrates some of the nuances here: The M1 Max (8c+2ec) just about matches a 5800X (8c16t) in integer workloads, but trounces even the 5950X (16c32t) at floating point workloads. It's quite possible the massive memory bandwidth affects this too, but this is clearly a choice Apple made for a reason (i.e. it mattering to the workloads they care about). Different architectures lend themselves to different jobs in different ways. This makes like-for-like comparisons difficult, but it also means we need to be nuanced when discussing such comparisons.


Oh, and another value comparison worth making: since this is a pre-built kinda-workstation, let's look at a Lenovo Threadripper Pro comparison. Even with a 42% (!) rebate counted in, a ThinkStation P620 with a TR 3975WX, 64GB of RAM, a (no-longer-called-Quadro) RTX A4000 (close to a 3070), and a 1TB PCIe 4.0 m.2 SSD costs ... nearly $6500.

Edit: lol, I misunderstood the memory configurator and accidentally gave that config a single 64GB DIMM. With 4x16GB (sadly there's no way of filling all 8 channels at 64GB) it's $6790. Going top-of-the-line on the P620 (same CPU, an RTX A6000, 128GB RAM) puts you at about $12,000 - about $3,000 more than a specced-out Mac Studio with 8TB of storage. Of course, one advantage of the P620 is that you can fit a whackton of GPUs, SSDs and accelerators if you want to - but it will cost you.

I mean, on Geekbench the Threadripper did OK. Its single-core isn't as good as what, say, Tom's Hardware got on Geekbench 5.4: the best M1 Ultra config gave 1,792 single-core and 23,931 multi-core.

System: Lambda Lambda Vector, AMD Ryzen Threadripper PRO 3975WX @ 4368 MHz (32 cores)
Uploaded: December 20th, 2021
Platform: Linux
Single-Core Score: 1355
Multi-Core Score: 34962

I would say 11,000 more in the multicore is pretty significant.

I mean, I think AMD and Intel were caught a bit off guard, but even the merging of the two dies in the higher-end Mac Studio seems to maybe be Apple's version of what AMD started with Infinity Fabric. AMD says its new 3D V-Cache stacking technology adds up to a 15 percent advantage in gaming for the 5800X3D compared to the 5900X, and at lower voltage.

So I think it will be interesting: Intel is already doing lower- and higher-clocked cores, and now AMD is pushing other stuff, so it's driving innovation. Apple will continue to be able to do it in a more power-friendly manner. Yet look at the heatsink required to keep it cool even at like 60 watts - half the build is heatsink and fans.

...and of those games, how many are using Metal? OpenGL over Metal isn't nearly as fast as natively using Metal, even if natively compiled. I've had fairly good experiences with the performance of games that have native Metal support on my MBP though, so definitely something to consider.

I mean, OpenGL hasn't been updated in nearly 5 years; Vulkan replaced it. Even if you used OpenGL for anything, I believe it doesn't support HDR or anything like DLSS, etc. It is just old. I don't see how any comparisons with OpenGL even make sense, because you can't get, pardon the pun, an apples-to-apples comparison.
 

studentrights

New Member
Joined
Mar 19, 2022
Messages
11 (0.01/day)
Maybe, but that still does NOT make it the "World's most powerful chip for a PC".
The M1 Ultra basically matches a 28-core Intel chip, and we know Apple is going to double that in the Mac Pro chip. Of course, at the same time it's matching an Nvidia chip, but all on the same die, which neither Intel, AMD, nor Nvidia can claim.
 
D

Deleted member 24505

Guest
The Ultra scores 24k MC; my 12700K scored 22k MC stock with no OC - not far off.
 
Joined
Jun 14, 2020
Messages
3,474 (2.13/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
The M1 Ultra basically matches a 28-core Intel chip, and we know Apple is going to double that in the Mac Pro chip. Of course, at the same time it's matching an Nvidia chip, but all on the same die, which neither Intel, AMD, nor Nvidia can claim.
Uhm, a 28-core from a few years ago. Intel's new 12900K beats them both handily.
 
Joined
May 2, 2017
Messages
7,762 (2.80/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I mean, on Geekbench the Threadripper did OK. Its single-core isn't as good as what, say, Tom's Hardware got on Geekbench 5.4: the best M1 Ultra config gave 1,792 single-core and 23,931 multi-core.

System: Lambda Lambda Vector, AMD Ryzen Threadripper PRO 3975WX @ 4368 MHz (32 cores)
Uploaded: December 20th, 2021
Platform: Linux
Single-Core Score: 1355
Multi-Core Score: 34962

I would say 11,000 more in the multicore is pretty significant.
... except for the fact that that is clearly a heavily overclocked CPU? The Threadripper Pro 3975WX has a base clock of 3.5GHz and a boost clock of 4.2GHz. It might, depending on the workload, maintain an all-core clock higher than 3.5GHz, but it definitely doesn't have a 4.37GHz base clock like that result. So maybe be a bit more careful with your examples? That chip is likely pulling 400+ watts. Geekbench correctly reports the base clock as 3.5GHz when not overclocked. That example is a random one from the browser, but it seems mostly representative of scores from a quick look - there are lots of OC'd results and a lot of weird outliers with very low MT scores, but other than that they seem to land in the high 20,000s MT and 1200-1400 ST.
I mean, I think AMD and Intel were caught a bit off guard, but even the merging of the two dies in the higher-end Mac Studio seems to maybe be Apple's version of what AMD started with Infinity Fabric. AMD says its new 3D V-Cache stacking technology adds up to a 15 percent advantage in gaming for the 5800X3D compared to the 5900X, and at lower voltage.
Yes, and? None of this changes the fact that the M1 series has shockingly high IPC (nearly matching the ST performance of high-clocked MSDT chips at 2/3rds the frequency at the time the architecture launched). And while AMD's achievement in bringing MCM CPUs to the mass market shouldn't be undersold, calling this chip "Apple's version of what AMD started with IF" is ... weird. The entire chip industry has been moving towards MCM packaging for half a decade if not more, in various forms. The bridging tech used by the M1 Ultra is reportedly TSMC's CoWoS-S ("chip on wafer on substrate - silicon interposer", AFAIK), which is similar but not quite the same as the tech AMD and Nvidia have used for HBM memory on GPUs with interposers. For reference, TSMC reportedly launched their 6th generation of CoWoS in 2020. However, nobody has used this tech for an MCM CPU or SoC yet. Intel is the closest (and arguably ahead of Apple), using EMIB bridges for their 2-die 56-core server chips (though those are terrible in many ways, for very different reasons).

I frankly don't see how 3D cache is relevant here outside of "other tech vendors are also doing exotic packaging", which, as seen above, shouldn't be news to anyone. It's an IPC (and likely efficiency) boost for AMD, but ... other than that it's not directly relevant here - it's not like the 5800X3D will be particularly more competitive with the M1 Ultra than the 5800X - that's not what it's for, and the 5950X already exists.
So I think it will be interesting: Intel is already doing lower- and higher-clocked cores, and now AMD is pushing other stuff, so it's driving innovation. Apple will continue to be able to do it in a more power-friendly manner. Yet look at the heatsink required to keep it cool even at like 60 watts - half the build is heatsink and fans.
The CPU pulls 60 watts. The full package under heavy load likely exceeds that significantly. This thing comes with a 400W PSU, after all, and it doesn't have much else consuming power. 6x15W from the TB ports + 2x10W from the USB ports + however much the motherboard and fans consume still leaves a power budget of >250W for the SoC. And given that the M1 Max pulls up to 92W under a combined CPU+GPU load, it's not unreasonable to expect the full M1 Ultra package to come close to doubling that. Remember, the total chip size here is huge - ~864mm², making the 628mm² GA102 powering the RTX 3090 look rather puny in comparison. And it's made on a much denser production node in addition to this - these two chips have 114 billion transistors, compared to the 28.3 billion of the GA102. It has a lot of resources to work with, and it can certainly use power when it needs to. Just because the CPU cores max out at around ~60W doesn't make that the package power limit - there's also the GPU, NPU, accelerators and RAM on that package. Also, have you picked up on how quiet that thing is under load? That's also a huge reason for a huge heatsink - more surface area = less need for crazy airflow to dissipate heat. It definitely doesn't have an unreasonable heatsink for its power level when taking these things into consideration.
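Writing out that back-of-the-envelope budget (the per-port figures are the same assumptions as above, not published Apple specs):

```python
# Back-of-the-envelope SoC power budget for the Mac Studio, writing out the
# arithmetic from the paragraph above. Per-port numbers are assumptions about
# USB/Thunderbolt power provisioning, not published Apple specifications.
psu_w         = 400       # rated PSU
thunderbolt_w = 6 * 15    # six Thunderbolt ports provisioned at ~15 W each
usb_a_w       = 2 * 10    # two USB-A ports at ~10 W each
board_misc_w  = 20        # rough allowance for motherboard, fans, SSD

soc_budget = psu_w - thunderbolt_w - usb_a_w - board_misc_w
print(f"remaining budget for the SoC package: ~{soc_budget} W")  # ~270 W, i.e. ">250 W"
```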
I mean, OpenGL hasn't been updated in nearly 5 years; Vulkan replaced it. Even if you used OpenGL for anything, I believe it doesn't support HDR or anything like DLSS, etc. It is just old. I don't see how any comparisons with OpenGL even make sense, because you can't get, pardon the pun, an apples-to-apples comparison.
Yet a lot of MacOS game ports still run OpenGL - in part because Vulkan AFAIK isn't supported on MacOS at all. Hence why non-Metal games are problematic as benchmarks.
Maybe, but that still does NOT make it the "World's most powerful chip for a PC".
You keep insisting on reading "most powerful chip" as "most powerful CPU". I mean, this is a slightly underclocked TR 3970X and an RTX 3070 Ti-ish on the same chip. With unheard of video playback acceleration, and a rather powerful ML accelerator. Of course that is the most powerful chip for a PC. It's not even close. It doesn't need any of its constituent parts to be the fastest for this to be true, it only needs the sum of those parts to be true. And it is - the closest chip in existence would either be the Xbox Series X APU or some of Nvidia's SoCs for self-driving cars, but even those are quite far behind in total performance compared to this.
 