
Apple's Graphics Performance Claims Proven Exaggerated by Mac Studio Reviews


Deleted member 24505

Guest
I don't really see what all the hee-haw about this Apple M1 is for. It does not run, and never will run, Windows. If you have a Windows PC and do not use Apple, it is irrelevant. So what if it is quick at running stuff on its own OS? Comparison with Windows is pointless, apart from benches that are, IMO, pointless beyond e-peen.

Most powerful chip/CPU in an Apple machine, not in a PC.

Some Apple users look down on PCs like some sort of lower life form. Why do we care what they think or use, or how powerful or not this Apple-only chip is?
 
Joined
Apr 1, 2009
Messages
60 (0.01/day)
... except for the fact that that is clearly a heavily overclocked CPU? The Threadripper Pro 3975WX has a base clock of 3.5GHz and a boost clock of 4.2GHz. It might, depending on the workload, maintain an all-core clock higher than 3.5GHz, but it definitely doesn't have a 4.37GHz base clock like that result. So, maybe be a bit more careful about your examples? That chip is likely pulling 400+ watts. Geekbench correctly reports the base clock as 3.5GHz when not overclocked. That example is a random one from the browser, but seems mostly representative of scores from a quick look - there are lots of OC'd results, and a lot of weird outliers with very low MT scores, but other than that they seem to land in the high 20,000s MT and 1200-1400 ST.

Yes, and? None of this changes the fact that the M1 series has shockingly high IPC (nearly matching the ST performance of high-clocked MSDT chips at 2/3rds the frequency at the time the architecture launched). And while AMD's achievement in bringing MCM CPUs to the mass market shouldn't be undersold, calling this chip "Apple's version of what AMD started with IF" is ... weird. The entire chip industry has been moving towards MCM packaging for half a decade if not more, in various forms. The bridging tech used by the M1 Ultra is reportedly TSMC's CoWoS-S ("chip on wafer on substrate - silicon interposer", AFAIK), which is similar but not quite the same as the tech AMD and Nvidia have used for HBM memory on GPUs with interposers. For reference, TSMC reportedly launched their 6th generation of CoWoS in 2020. However, nobody has used this tech for an MCM CPU or SoC yet. Intel is the closest (and arguably ahead of Apple), using EMIB bridges for their 2-die 56-core server chips (though those are terrible in many ways, but for very different reasons).

I frankly don't see how 3D cache is relevant here outside of "other tech vendors are also doing exotic packaging", which, as seen above, shouldn't be news to anyone. It's an IPC (and likely efficiency) boost for AMD, but ... other than that it's not directly relevant here - it's not like the 5800X3D will be particularly more competitive with the M1 Ultra than the 5800X - that's not what it's for, and the 5950X already exists.

The CPU pulls 60 watts. The full package under heavy load likely exceeds that significantly. This thing comes with a 400W PSU, after all, and it doesn't have much else consuming power. 6x15W from the TB ports + 2x10W from the USB ports + however much the motherboard and fans consume still leaves a power budget of >250W for the SoC. And given that the M1 Max pulls up to 92W under a combined CPU+GPU load, it's not unreasonable to expect the full M1 Ultra package to come close to doubling that. Remember, the total chip size here is huge - ~864mm², making the 628mm² GA102 powering the RTX 3090 look rather puny in comparison. And it's made on a much denser production node in addition to this - these two chips have 114 billion transistors, compared to the 28.3 billion of the GA102. It has a lot of resources to work with, and it can certainly use power when it needs to. Just because the CPU cores max out at around ~60W doesn't make that the package power limit - there's also the GPU, NPU, accelerators and RAM on that package. Also, have you picked up on how quiet that thing is under load? That's also a huge reason for a huge heatsink - more surface area = less need for crazy airflow to dissipate heat. It definitely doesn't have an unreasonable heatsink for its power level when taking these things into consideration.
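Back-of-the-envelope, here's how that power budget works out (the per-port numbers and the 30W board/fan figure are assumptions from the reasoning above, not measurements):

```python
# Rough Mac Studio power-budget estimate, using the figures quoted above.
# All inputs are assumptions from this discussion, not measured values.

psu_capacity_w = 400          # rated PSU capacity
thunderbolt_w = 6 * 15        # six TB ports at up to 15 W each
usb_w = 2 * 10                # two USB-A ports at up to 10 W each
board_fans_overhead_w = 30    # guess for motherboard, fans, misc.

soc_budget_w = psu_capacity_w - thunderbolt_w - usb_w - board_fans_overhead_w
print(f"Remaining budget for the SoC package: ~{soc_budget_w} W")
# -> ~260 W, i.e. well above the ~60 W the CPU cluster alone draws,
#    and roughly double the ~92 W combined CPU+GPU load seen on the M1 Max.
```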

Yet a lot of MacOS game ports still run OpenGL - in part because Vulkan AFAIK isn't supported on MacOS at all. Hence why non-Metal games are problematic as benchmarks.

You keep insisting on reading "most powerful chip" as "most powerful CPU". I mean, this is a slightly underclocked TR 3970X and an RTX 3070 Ti-ish on the same chip. With unheard of video playback acceleration, and a rather powerful ML accelerator. Of course that is the most powerful chip for a PC. It's not even close. It doesn't need any of its constituent parts to be the fastest for this to be true, it only needs the sum of those parts to be true. And it is - the closest chip in existence would either be the Xbox Series X APU or some of Nvidia's SoCs for self-driving cars, but even those are quite far behind in total performance compared to this.

Thing is, if Nvidia, Intel, or AMD could focus all of the horsepower of their chips on basically one OS with a single API and whatnot, and not worry about instruction sets for certain tech etc., they could, or probably would, be significantly more powerful like Apple, even using x86. They develop their own things, be it FreeSync or G-Sync, TressFX, their versions of HDR, ray tracing, DLSS, etc. Plus support on the GPU side for DirectX 10, 11, 12 and Vulkan, etc.

Then for some reason the SSD storage in new Macs doesn't seem to have controllers or anything on it. Seems like Apple is just shortcutting things on the storage side, which seems kind of important if you care about your data.


"Hector Martin works on Asahi Linux for AppleSilicon Macs:

"Well, this is unfortunate. It turns out Apple's custom NVMe drives are amazingly fast - if you don't care about data integrity. If you do, they drop down to HDD performance. Thread."

"For a while, we've noticed that random write performance with fsync (data integrity) on Asahi Linux (and also on Linux on T2 Macs) was terrible. As in 46 IOPS terrible. That's slower than many modern HDDs. We thought we were missing something, since this didn't happen on macOS.""

APIs and things like that matter. It is like when Doom came out and supported Vulkan: the FPS jumped up drastically, especially for AMD. That is why context matters. Hardware is made to run certain APIs etc., and Apple shows their hardware in the best light. If you are going to do that, you have to take synthetic benchmarks out and then run the hardware you are testing against in its best-case scenarios.

Like here for instance is an Nvidia graphic:

[Image: Nvidia_CES_2022.png]
 
Joined
Apr 17, 2021
Messages
564 (0.43/day)
System Name Jedi Survivor Gaming PC
Processor AMD Ryzen 7800X3D
Motherboard Asus TUF B650M Plus Wifi
Cooling ThermalRight CPU Cooler
Memory G.Skill 32GB DDR5-5600 CL28
Video Card(s) MSI RTX 3080 10GB
Storage 2TB Samsung 990 Pro SSD
Display(s) MSI 32" 4K OLED 240hz Monitor
Case Asus Prime AP201
Power Supply FSP 1000W Platinum PSU
Mouse Logitech G403
Keyboard Asus Mechanical Keyboard
Don't forget that Apple is sandbagging. They want an easy way to release an M2 chip that is faster and causes more upgrades. MaxTech is the only one to do anything decent review-wise. They also know basic things, like that Geekbench doesn't even work for the GPU result - the Geekbench creators told people that, and yet people use it anyway. WATCH THIS VIDEO:

M1 Ultra Mac Studio - Benchmarks & Thermals (The TRUTH!) - YouTube

The Mac Studio is not what many people wanted: Apple going all out. It's just a doubled-up M1 Max, a laptop chip. Because they are so far ahead of the competition they are holding back. Where is the 4GHz+ CPU chip? The one that actually uses 200W? And you can also see anomalies that show that macOS software hasn't been written for the new chip yet, such as a video export being the same speed as with the Max chip. But you buy a product the way it works today, not tomorrow. Apple doesn't want leaks, so they didn't seed the product to developers in advance. Wait a month and look at it again. Also, Anandtech and MaxTech and perhaps Hardware Unboxed are the only ones doing decent work. You can watch 100 videos on YouTube; most of those guys know nothing about computers.

In his tests, look at the actual CPU and GPU power usage - not even a fraction of what the M1 Ultra is capable of using. Software issues. And even at full load it is only a fraction of what the chip could do if we had unlocked power, voltage and clock speeds. It's running at only 3.0 GHz, not even 3.2 GHz.
 
Joined
Apr 1, 2009
Messages
60 (0.01/day)
Don't forget that Apple is sandbagging. They want an easy way to release an M2 chip that is faster and causes more upgrades. MaxTech is the only one to do anything decent review-wise. They also know basic things, like that Geekbench doesn't even work for the GPU result - the Geekbench creators told people that, and yet people use it anyway. WATCH THIS VIDEO:

M1 Ultra Mac Studio - Benchmarks & Thermals (The TRUTH!) - YouTube

The Mac Studio is not what many people wanted: Apple going all out. It's just a doubled-up M1 Max, a laptop chip. Because they are so far ahead of the competition they are holding back. Where is the 4GHz+ CPU chip? The one that actually uses 200W? And you can also see anomalies that show that macOS software hasn't been written for the new chip yet, such as a video export being the same speed as with the Max chip. But you buy a product the way it works today, not tomorrow. Apple doesn't want leaks, so they didn't seed the product to developers in advance. Wait a month and look at it again. Also, Anandtech and MaxTech and perhaps Hardware Unboxed are the only ones doing decent work. You can watch 100 videos on YouTube; most of those guys know nothing about computers.

In his tests, look at the actual CPU and GPU power usage - not even a fraction of what the M1 Ultra is capable of using. Software issues. And even at full load it is only a fraction of what the chip could do if we had unlocked power, voltage and clock speeds. It's running at only 3.0 GHz, not even 3.2 GHz.

Well, that is what R&D is going to have to work on. ARM was made to be low power - originally like under 5 watts. So Apple is pushing the bounds of ARM by going higher than ARM was originally intended. ARM was about low power and low heat; now they are creating chips that push both of those well past what ARM was created to do. I'm sure as Apple adds instruction sets and whatnot we won't really be able to say they are even using ARM-based chips as they morph into something else, but still, Apple is taking ARM into a world it hasn't been used in before, which means it is always going to be taking the first step into the next push of what ARM is doing or can do. Which means most of the R&D will be on their engineers and 100 percent of the cost of it on them.

I'm not sure it is safe to say we are just seeing a fraction of what the M1 Ultra can do, because they wouldn't have added the cost of a second chip if they could just up the power and speed of one chip. That wouldn't make sense for them financially. So there definitely is a limitation they have run into there, at least at the moment. No company spends extra money when it doesn't have to.
 
Joined
Mar 10, 2010
Messages
11,878 (2.21/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
Well, that is what R&D is going to have to work on. ARM was made to be low power - originally like under 5 watts. So Apple is pushing the bounds of ARM by going higher than ARM was originally intended. ARM was about low power and low heat; now they are creating chips that push both of those well past what ARM was created to do. I'm sure as Apple adds instruction sets and whatnot we won't really be able to say they are even using ARM-based chips as they morph into something else, but still, Apple is taking ARM into a world it hasn't been used in before, which means it is always going to be taking the first step into the next push of what ARM is doing or can do. Which means most of the R&D will be on their engineers and 100 percent of the cost of it on them.

I'm not sure it is safe to say we are just seeing a fraction of what the M1 Ultra can do, because they wouldn't have added the cost of a second chip if they could just up the power and speed of one chip. That wouldn't make sense for them financially. So there definitely is a limitation they have run into there, at least at the moment. No company spends extra money when it doesn't have to.
They're also using the most expensive, leading node, beyond any other competitor, and still ending up with way bigger chips too.
It all really makes sense if taken as a whole.
 
Joined
Apr 17, 2021
Messages
564 (0.43/day)
System Name Jedi Survivor Gaming PC
Processor AMD Ryzen 7800X3D
Motherboard Asus TUF B650M Plus Wifi
Cooling ThermalRight CPU Cooler
Memory G.Skill 32GB DDR5-5600 CL28
Video Card(s) MSI RTX 3080 10GB
Storage 2TB Samsung 990 Pro SSD
Display(s) MSI 32" 4K OLED 240hz Monitor
Case Asus Prime AP201
Power Supply FSP 1000W Platinum PSU
Mouse Logitech G403
Keyboard Asus Mechanical Keyboard
They're also using the most expensive, leading node, beyond any other competitor, and still ending up with way bigger chips too.
It all really makes sense if taken as a whole.

The size of the chip is not what people think it is. For example, one stick of 16GB DDR5 has more transistors than the entire Apple M1 Ultra chip. You can't compare an SoC with cache and other things to a GPU or CPU transistor-count-wise. We don't have accurate transistor counts, GPU versus GPU within the SoC.
 

studentrights

New Member
Joined
Mar 19, 2022
Messages
11 (0.01/day)
They're also using the most expensive, leading node, beyond any other competitor, and still ending up with way bigger chips too.
It all really makes sense if taken as a whole.

The continued intellectual dishonesty in this thread is amazing. Any comparison to the M1 Ultra requires TWO, not one, PC chips from INTEL, AMD, and/or NVIDIA, not to mention the missing RAM chips.

To compare the PERFORMANCE, SIZE and COST of the M1 Ultra you need:

1) A CPU from INTEL or AMD
2) A GPU from AMD or NVIDIA
3) 128GB of RAM

So now go back to that photo and add in all of the missing components, a GPU and RAM. Love to see the total COST when that's done. Throw in a motherboard to socket the RAM too.
 
Joined
Apr 1, 2009
Messages
60 (0.01/day)
Exactly. No one other than Apple can claim this on an SoC, not even close. It also goes to show that integrated graphics can easily compete with discrete graphics using far less power and physical space.


Not in graphics power. It takes two different PC processors to beat the M1 in both CPU and GPU power and that combination is far more costly and consumes a ridiculous amount of power by comparison.

Apple still has one more chip to go beyond the Ultra for the Mac Pro, which will be double the power - well beyond the capability of the new 12900K, which doesn't even have a graphics chip to match. This chip is likely just months away. I'd guess June for obvious reasons.



The continued intellectual dishonesty in this thread is amazing. Any comparison to the M1 Ultra requires TWO, not one, PC chips from INTEL, AMD, and/or NVIDIA, not to mention the missing RAM chips.

To compare the PERFORMANCE, SIZE and COST of the M1 Ultra you need:

1) A CPU from INTEL or AMD
2) A GPU from AMD or NVIDIA
3) 128GB of RAM

So now go back to that photo and add in all of the missing components, a GPU and RAM. Love to see the total COST when that's done. Throw in a motherboard to socket the RAM too.


I didn't try to deal-hunt, just threw this together real fast. Sure, prices could be found cheaper, other than the GPU. GPUs have been available close to MSRP the last week or so. The estimate for the 3080 Ti is from EVGA's 3080 Ti, available direct 4 days ago.

128gb memory $600
3970x threadripper $2500
sTRX4 motherboard $500
1000w psu gold rated $180
3080ti $1300
atx case $200
1tb nvme $100

That comes to about $5,380 (compared to the Mac Studio with 128GB of memory at $5,800). But technically you at this point have more memory, because the memory from Apple is unified. So in the PC build you would have 128GB on the motherboard plus 12GB dedicated to the GPU, for a total of 140GB. The Threadripper is 32 cores / 64 threads, so you can go one step down to the 3960X with 24 cores / 48 threads for $700 less and get the price down to about $4,680.

It might not be as energy efficient or run as cool, but that would be a pretty powerful machine. The 3970X gets you pretty close to the Geekbench score of the M1 Ultra - about 100 points lower. Not sure how the Threadripper overclocks.

The PC build also has the ability to add larger NVMe drives down the road, more memory, etc., because it is upgradeable.

If you dropped the memory down to 64GB or 32GB, though, there would be massive savings on the PC side, because you can get 32GB for under $150. Apple has huge price increases for memory: going from 32GB to 64GB is a $400 increase, and going to 128GB the price goes up $1,200.
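Summing up that parts list (same rough prices as above, obviously point-in-time estimates):

```python
# Totals for the parts list above; all prices are the rough figures quoted, not current pricing.
parts = {
    "128GB memory": 600,
    "3970X Threadripper": 2500,
    "sTRX4 motherboard": 500,
    "1000W gold PSU": 180,
    "RTX 3080 Ti": 1300,
    "ATX case": 200,
    "1TB NVMe": 100,
}
total = sum(parts.values())
print(f"PC build total: ${total}")                       # ~$5,380 vs. the $5,800 Mac Studio config
print(f"With 3960X instead (-$700): ${total - 700}")      # ~$4,680
print(f"Total RAM incl. 12GB on the GPU: {128 + 12} GB")  # the unified vs. discrete memory caveat
```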
 
Joined
Aug 14, 2013
Messages
2,373 (0.58/day)
System Name boomer--->zoomer not your typical millenial build
Processor i5-760 @ 3.8ghz + turbo ~goes wayyyyyyyyy fast cuz turboooooz~
Motherboard P55-GD80 ~best motherboard ever designed~
Cooling NH-D15 ~double stack thot twerk all day~
Memory 16GB Crucial Ballistix LP ~memory gone AWOL~
Video Card(s) MSI GTX 970 ~*~GOLDEN EDITION~*~ RAWRRRRRR
Storage 500GB Samsung 850 Evo (OS X, *nix), 128GB Samsung 840 Pro (W10 Pro), 1TB SpinPoint F3 ~best in class
Display(s) ASUS VW246H ~best 24" you've seen *FULL HD* *1O80PP* *SLAPS*~
Case FT02-W ~the W stands for white but it's brushed aluminum except for the disgusting ODD bays; *cries*
Audio Device(s) A LOT
Power Supply 850W EVGA SuperNova G2 ~hot fire like champagne~
Mouse CM Spawn ~cmcz R c00l seth mcfarlane darawss~
Keyboard CM QF Rapid - Browns ~fastrrr kees for fstr teens~
Software integrated into the chassis
Benchmark Scores 9999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999
Well, that is what R&D is going to have to work on. ARM was made to be low power - originally like under 5 watts. So Apple is pushing the bounds of ARM by going higher than ARM was originally intended. ARM was about low power and low heat; now they are creating chips that push both of those well past what ARM was created to do. I'm sure as Apple adds instruction sets and whatnot we won't really be able to say they are even using ARM-based chips as they morph into something else, but still, Apple is taking ARM into a world it hasn't been used in before, which means it is always going to be taking the first step into the next push of what ARM is doing or can do. Which means most of the R&D will be on their engineers and 100 percent of the cost of it on them.

I'm not sure it is safe to say we are just seeing a fraction of what the M1 Ultra can do, because they wouldn't have added the cost of a second chip if they could just up the power and speed of one chip. That wouldn't make sense for them financially. So there definitely is a limitation they have run into there, at least at the moment. No company spends extra money when it doesn't have to.
Amazon, Fujitsu, Nvidia, Alibaba, Marvell and Intel have been producing ARM server chips for a few years now. I think there are a few other players but I don't remember. They also do supercomputers, IIRC.
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Thing is, if Nvidia, Intel, or AMD could focus all of the horsepower of their chips on basically one OS with a single API and whatnot, and not worry about instruction sets for certain tech etc., they could, or probably would, be significantly more powerful like Apple, even using x86. They develop their own things, be it FreeSync or G-Sync, TressFX, their versions of HDR, ray tracing, DLSS, etc. Plus support on the GPU side for DirectX 10, 11, 12 and Vulkan, etc.
This is rather inaccurate. While it's absolutely true that Apple has an inherent advantage through the API designer and hardware maker being the same company, ultimately Nvidia and AMD don't have meaningfully more APIs to deal with or a much higher degree of complexity - they just have a much less direct and likely more rigid structure through which to affect the design of APIs to fit their hardware. But it's not like designing DX12U GPUs has been a challenge for either company. And backwards compatibility is the exact same too - if Apple iterates Metal with new features that aren't supported by older hardware, developers will then need to account for that, just like they need to account for differences between DX10, 11 and 12 hardware.

Also, if MacOS was based around "a single API" as you say ... that wouldn't help much. The API would still need the same breadth of features and functionality of whatever combination of APIs competitors deal with. Integrating those features into a single API doesn't necessarily simplify things all that much. What matters is how well things are designed to interface with each other and how easy they are to learn and to program for. There is also nothing stopping AMD designing a Metal implementation of TressFX, for example - APIs stack on top of each other.
Then for some reason the SSD storage in new Macs doesn't seem to have controllers or anything on it. Seems like Apple is just shortcutting things on the storage side, which seems kind of important if you care about your data.
Yeah, I'm well aware of this. If you've paid attention, you'd also notice that I have at no point extolled Apple's storage as anything special - in part because of this. Though saying "their SSDs don't seem to have controllers on them" is a misunderstanding - Apple has a very different storage hardware architecture than PCs, with the SSD controller integrated into the T2 system controller (which also handles security, encryption, etc.). In other words, the SSD controller doesn't live on the SSD. This is a really weird choice IMO, and while I see its benefits (lower cost, less duplicate hardware in the system), I think it overall harms the system's performance.
APIs and things like that matter. It is like when Doom came out and supported Vulkan: the FPS jumped up drastically, especially for AMD. That is why context matters. Hardware is made to run certain APIs etc., and Apple shows their hardware in the best light. If you are going to do that, you have to take synthetic benchmarks out and then run the hardware you are testing against in its best-case scenarios.

Like here for instance is an Nvidia graphic:

[Image: Nvidia_CES_2022.png]
Yes, it's obvious that APIs matter a lot, though your representation of this is a bit weird. It's not like AMD's hardware is inherently designed to be faster in Vulkan than DX12, it's just a quirk of their architecture compared to Nvidia. It wouldn't make sense for them to make such a design consciously, as Vulkan is far rarer than DX11/12, and isn't used at all in consoles, which is a massive focus for AMD GPU architectures. Still, it is indeed important to keep in mind that different architectures interface differently with different APIs and thus perform differently - and that any given piece of software, especially across platforms, can use many different APIs to do "the same" work. That Nvidia graph is indeed a good example, as part of the background behind that is that most of those apps are not Metal native - Blender Cycles, for example, got its Metal render backend 11 days ago, while it has had support for highly accelerated workloads on Nvidia with first CUDA, then the Optix renderer, for several years.

This just illustrates the difficulty of doing reasonable cross-platform benchmarks, and the very real discussion of whether one should focus on real-world applications with all their specific quirks (the Nvidia graph above was true at the time, after all, with those applications), or if the goal is to demonstrate the actual capabilities of the hardware on as level a playing field as possible. Platform-exclusive software makes this even more complicated, obviously.
Well, that is what R&D is going to have to work on. ARM was made to be low power - originally like under 5 watts. So Apple is pushing the bounds of ARM by going higher than ARM was originally intended. ARM was about low power and low heat; now they are creating chips that push both of those well past what ARM was created to do. I'm sure as Apple adds instruction sets and whatnot we won't really be able to say they are even using ARM-based chips as they morph into something else, but still, Apple is taking ARM into a world it hasn't been used in before, which means it is always going to be taking the first step into the next push of what ARM is doing or can do. Which means most of the R&D will be on their engineers and 100 percent of the cost of it on them.
This is rather inaccurate. While early ARM designs were indeed focused only on low power, there has been a concerted effort for high-performance ARM chips for more than half a decade, including server and datacenter chips from a variety of vendors. Apple is by no means alone in this - though their R&D resources are clearly second to none. Also, AFAIK, Apple can't add instruction sets - their chips are based on the ARMv8.4-A instruction set. And while I'm reasonably sure there are architectural reasons for the M1 not scaling past ~3.2 GHz, it's impossible from an end user point of view to separate those from other hardware properties: the specific production node (Apple uses low-power, high density mobile-oriented nodes for all chips, with competitors like AMD and Intel using high-clocking nodes instead); their chips are designed with extremely wide execution pipelines, far beyond any other ARM (or x86) design, which makes clocking them high potentially problematic. You can't unilaterally peg the performance ceiling of the M1 family on it being ARM-based.

As for the R&D and attributed costs: Apple has been developing their own core and interconnect designs since the A6 in 2012. Their ARM licence is for fully custom designed cores, meaning they do not base their designs on ARM designs at all, and have not done so for a decade now.
The size of the chip is not what people think it is. For example, one stick of 16GB DDR5 has more transistors than the entire Apple M1 Ultra chip. You can't compare an SoC with cache and other things to a GPU or CPU transistor-count-wise. We don't have accurate transistor counts, GPU versus GPU within the SoC.
This is problematic on many, many levels. First off: All modern CPUs have cache. GPUs also have cache, though in very varying amounts. Bringing that up as a difference between an SoC and a CPU is ... meaningless. (As is the distinction between SoC and CPU today - all modern CPUs from major vendors are SoCs, but with differing featuresets.) As for one stick of DDR5 having more transistors, that may be true, but that stick also has 8 or more dice on board, and those consist nearly exclusively of a single type of high density transistor made on a bespoke, specific-purpose node. Logic transistors are far more complex and take much more space than memory transistors, and logic nodes are inherently less dense than memory and cache nodes (illustrated by how the cache die on the Ryzen 7 5800X3D fits 64MB of cache into the same area as the 32MB of cache on the CCD). If comparing transistor counts, the relevant comparisons must be of reasonably similar silicon, i.e. complex and advanced logic chips.
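The DDR5 claim is easy to sanity-check with a rough count - assuming one transistor per DRAM bit cell and ignoring ECC, peripheral logic and the stick's power-management silicon:

```python
# Rough transistor count for a 16 GB DDR5 stick vs. the M1 Ultra (order-of-magnitude sketch).
bits_per_stick = 16 * 2**30 * 8      # 16 GiB of capacity, in bits
dram_transistors = bits_per_stick    # 1T1C DRAM: roughly one transistor per bit cell
m1_ultra_transistors = 114e9         # Apple's quoted figure

print(f"DDR5 stick: ~{dram_transistors / 1e9:.0f} billion transistors")   # ~137 billion
print(f"M1 Ultra:   ~{m1_ultra_transistors / 1e9:.0f} billion transistors")
# The stick "wins" on raw count, but those are nearly all identical high-density
# memory cells on a bespoke DRAM node - not comparable to complex logic transistors.
```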

And no, we don't have feature-by-feature transistor counts. But that doesn't ultimately matter, as we can still combine the transistor counts of competing solutions for a reasonable approximate comparison - and Apple is way above anything else even remotely comparable. This illustrates that they are achieving the combination of performance and efficiency they are by going all-in on a "wide-and-slow" design at every level (as Anandtech notes in their M1 coverage, the execution pipeline of the M1 architecture is also unprecedentedly wide for a consumer chip). This is clearly a conscious choice on their part, and one that can likely in part be attributed to their vertical integration - they don't need to consider per-chip pricing or profit margins (which for Intel and AMD tend to be ~40%), but can have a more holistic view, integrating R&D costs and production costs into the costs of the final hardware. It is far less feasible for a competitor to produce anything comparable just because of the sheer impossibility of getting any OEM to buy the chip to make a product with it.
The continued intellectual dishonesty in this thread is amazing. Any comparison to the M1 Ultra requires TWO, not one, PC chips from INTEL, AMD, and/or NVIDIA, not to mention the missing RAM chips.
I mean, I mostly agree with you, but ... no, you don't need that if you're talking about performance or efficiency. There's no logical or technical requirement that because the M1U has two (joined) chips, any valid comparison must also be two chips. What matters is how it performs in the real world. Heck, if that's the case, you'd need four M1 Ultras to compare to an EPYC 3rd gen, as those have eight CCDs onboard. What matters is comparing the relative performance and combined featuresets of the systems in question - i.e. their capabilities. How their makers arrived at those capabilities is an interesting part of the background for such a discussion (and a separate, interesting discussion can be had as to the reasoning and circumstances behind those choices and their pros and cons), but you can't mandate that any comparison must match some arbitrary design feature of any one system (as the 8-CCD EPYC example illustrates).
To compare the PERFORMANCE, SIZE and COST of the M1 Ultra you need:

1) A CPU from INTEL or AMD
2) A GPU from AMD or NVIDIA
3) 128GB of RAM

So now go back to that photo and add in all of the missing components, a GPU and RAM. Love to see the total COST when that's done. Throw in a motherboard to socket the RAM too.
You're not wrong, but I think you're taking this argument in the wrong direction. First off, this has been acknowledged and discussed earlier in this thread, in detail. Secondly, it's valid to discuss the tradeoffs and choices made by different firms designing different chips - even if I think the "and [they're] still ending up with way bigger chips too" angle from the post you quoted is a bad take in desperate need of perspective and nuance.
 

studentrights

New Member
Joined
Mar 19, 2022
Messages
11 (0.01/day)
I get it has 128GB of RAM in there too, but it is still a stupid-ass chip size.
A CPU, a GPU and 128GB of RAM - and on a PC you'll also need a motherboard to socket the RAM. Now compare the size again.
I didn't try to deal-hunt, just threw this together real fast. Sure, prices could be found cheaper, other than the GPU. GPUs have been available close to MSRP the last week or so. The estimate for the 3080 Ti is from EVGA's 3080 Ti, available direct 4 days ago.

128gb memory $600
3970x threadripper $2500
sTRX4 motherboard $500
1000w psu gold rated $180
3080ti $1300
atx case $200
1tb nvme $100

That comes to about $5,380 (compared to the Mac Studio with 128GB of memory at $5,800). But technically you at this point have more memory, because the memory from Apple is unified. So in the PC build you would have 128GB on the motherboard plus 12GB dedicated to the GPU, for a total of 140GB. The Threadripper is 32 cores / 64 threads, so you can go one step down to the 3960X with 24 cores / 48 threads for $700 less and get the price down to about $4,680.

It might not be as energy efficient or run as cool, but that would be a pretty powerful machine. The 3970X gets you pretty close to the Geekbench score of the M1 Ultra - about 100 points lower. Not sure how the Threadripper overclocks.

The PC build also has the ability to add larger NVMe drives down the road, more memory, etc., because it is upgradeable.

If you dropped the memory down to 64GB or 32GB, though, there would be massive savings on the PC side, because you can get 32GB for under $150. Apple has huge price increases for memory: going from 32GB to 64GB is a $400 increase, and going to 128GB the price goes up $1,200.
Now factor in the electrical cost savings of the M1 Ultra if you were doing full-tilt high-end work year-round.
 
Joined
Mar 10, 2010
Messages
11,878 (2.21/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
The continued intellectual dishonesty in this thread is amazing. Any comparison to the M1 Ultra requires TWO, not one, PC chips from INTEL, AMD, and/or NVIDIA, not to mention the missing RAM chips.

To compare the PERFORMANCE, SIZE and COST of the M1 Ultra you need:

1) A CPU from INTEL or AMD
2) A GPU from AMD or NVIDIA
3) 128GB of RAM

So now go back to that photo and add in all of the missing components, a GPU and RAM. Love to see the total COST when that's done. Throw in a motherboard to socket the RAM too.
Exactly my point - Apple uses a node way beyond everyone else's, so it's not comparable.

They also don't need that much space for actual memory; this isn't 2010, it's 2022.

Your comparison metrics are ass.

I'm not the one getting excited about this apple chip.

And as I said they still used a lot of transistors and die space to get that performance.


I wasn't doing a direct comparison - I'm not a deluded tool; you can't compare disparate hardware and software ecosystems and architectures - but observational comparisons of nodes and sizes, so there's that too.


That's what you're doing

"Continued intellectual dishonesty"
 
Joined
Jun 14, 2020
Messages
3,460 (2.13/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
A CPU, a GPU and 128GB of RAM - and on a PC you'll also need a motherboard to socket the RAM. Now compare the size again.

Now factor in the electrical cost savings of the M1 Ultra if you were doing full-tilt high-end work year-round.
You realize that the 3970X is not comparable to the M1 Ultra though, right? For example, in Corona render the 3970X is twice as fast as the 12900K, and the 12900K is way faster than the M1 Ultra. So the 3970X is over twice as fast as the M1 Ultra. The same applies to, for example, Cinebench. Now factor in the increased income from being able to finish more workloads, right?

I don't think anyone will ever be choosing between an M1 Ultra and a 3970X. They are on completely different levels; the 3970X offers a level of total performance that is absolutely untouchable by the M1 Ultra.
 
Joined
Apr 1, 2009
Messages
60 (0.01/day)
You realize that the 3970X is not comparable to the M1 Ultra though, right? For example, in Corona render the 3970X is twice as fast as the 12900K, and the 12900K is way faster than the M1 Ultra. So the 3970X is over twice as fast as the M1 Ultra. The same applies to, for example, Cinebench. Now factor in the increased income from being able to finish more workloads, right?

I don't think anyone will ever be choosing between an M1 Ultra and a 3970X. They are on completely different levels; the 3970X offers a level of total performance that is absolutely untouchable by the M1 Ultra.
That's the thing though: Apple is building a device that is in the workstation space, so it needs to be compared to workstation parts - workstations that can do a bunch of things that the M1 Ultra can't compete with. If you are dropping $6,000 on a computer you aren't going to be buying all consumer-grade parts at that point, but getting the best you can. People who are dropping $6k on a computer aren't really going to be concerned about heat and power consumption. Again, Apple is waving one hand to distract from the other, because it pretends sustainability is super important to them by changing packaging etc., yet they keep creating more and more locked-down devices with no upgradeability, which have no sustained life once their specs are superseded - nothing to keep the device relevant with a single upgrade. And because the parts aren't really replaceable, the kind of thing you see with people restoring broken PCs for schools and underprivileged kids isn't possible either.
 

Deleted member 24505

Guest
That's the thing though: Apple is building a device that is in the workstation space, so it needs to be compared to workstation parts - workstations that can do a bunch of things that the M1 Ultra can't compete with. If you are dropping $6,000 on a computer you aren't going to be buying all consumer-grade parts at that point, but getting the best you can. People who are dropping $6k on a computer aren't really going to be concerned about heat and power consumption. Again, Apple is waving one hand to distract from the other, because it pretends sustainability is super important to them by changing packaging etc., yet they keep creating more and more locked-down devices with no upgradeability, which have no sustained life once their specs are superseded - nothing to keep the device relevant with a single upgrade. And because the parts aren't really replaceable, the kind of thing you see with people restoring broken PCs for schools and underprivileged kids isn't possible either.
Apple, don't repair that device at our extortionate cost, buy a new one at our extortionate cost.
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
That's the thing though: Apple is building a device that is in the workstation space, so it needs to be compared to workstation parts - workstations that can do a bunch of things that the M1 Ultra can't compete with. If you are dropping $6,000 on a computer you aren't going to be buying all consumer-grade parts at that point, but getting the best you can. People who are dropping $6k on a computer aren't really going to be concerned about heat and power consumption. Again, Apple is waving one hand to distract from the other, because it pretends sustainability is super important to them by changing packaging etc., yet they keep creating more and more locked-down devices with no upgradeability, which have no sustained life once their specs are superseded - nothing to keep the device relevant with a single upgrade. And because the parts aren't really replaceable, the kind of thing you see with people restoring broken PCs for schools and underprivileged kids isn't possible either.
You're not entirely wrong, but there are a few issues here. First off, workstation buyers are likely to know their relevant workloads, and thus choose hardware that suits those. Those workloads can also vary quite a lot, from parsing massive databases to AI and ML to 3D rendering to video and/or photo editing to tons of other tasks of entirely different natures. Apple has carved themselves pretty specific niches within certain select workstation segments, aided by exclusive or highly optimized MacOS software. For these workloads, a Mac is often many times faster than a PC that might again be many times faster in other workloads. We can't expect all hardware to perform all tasks equally, so that's just how things work. So "getting the best you can" will be extremely application-dependent.

As for whether these customers are concerned about power: you're just wrong. If you're buying a hundred render boxes that will spend the vast majority of each day churning out renders? Power consumption starts mattering quite a lot, especially when you start talking about 2-300W power reductions for the same workload (assuming performance parity) (and a lot more than this if the lower power option has hardware accelerators that increase performance and lower power consumption). Assuming you're rendering 20h a day (and idling at near zero power the rest of the time, which is of course a simplification), and one render box consumes 200W with the other consuming 400W, assuming an electricity cost of US$0.112/kWh, over a year that's a savings of $163/year per render box, not accounting for the lower costs for ventilation/AC that come with dumping less heat into the room, which are also notable in warm locations. For a single box, that isn't a lot, but if you have ten, twenty, a hundred? It starts adding up. And no, this isn't an edge case - there is a relatively sizeable market for Mac-based server hardware, mostly using Mac Minis, and these will undoubtedly be migrating to Mac Studios to some extent. This is clearly not a viable competitor to the overall datacenter market, nor is it a major threat to the workstation/render farm market as a whole, but given Apple's grasp on certain parts of the software/hardware ecosystem, it is still notable. And for these people, reductions in power usage means either more money to spend on hardware for more output, or operating cost reductions. Also, an increasing amount of people and companies just genuinely want to cut their emissions. That buying new hardware is inherently contrary to that is obviously a relevant objection, but they will always need to do so at some point, and at that point it does matter which new hardware consumes the least power, as long as the difference is notable.
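For what it's worth, the savings figure above works out like this (same assumptions as stated: 20 h/day of rendering, a 200 W difference between the two boxes, US$0.112/kWh):

```python
# Annual electricity savings per render box, using the assumptions stated above.
hours_per_day = 20
days_per_year = 365
power_delta_kw = (400 - 200) / 1000          # 200 W difference between the two boxes
price_per_kwh = 0.112                        # US$/kWh

kwh_saved = power_delta_kw * hours_per_day * days_per_year
print(f"Energy saved: {kwh_saved:.0f} kWh/year")                         # 1460 kWh
print(f"Money saved:  ${kwh_saved * price_per_kwh:.0f}/year per box")    # ~$163
```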

Also, don't discount the value of UX in hardware purchasing decisions. If you're doing CAD work or similar consistently demanding work and can't offload the heavy parts to a render box somewhere, the difference between a 600+W Threadripper Pro/Xeon+1-2-3GPU workstation on the floor next to you and this sitting on your desk? That's pretty significant in terms of user comfort, even if pro workstations also tend to be pretty well designed thermally and thus not server loud.

I definitely agree that Apple is mainly greenwashing with their talk of sustainability - but at the same time, this is also genuinely important, so as a major actor, their actions actually matter somewhat even if they're hollow and shallow in many ways. Upgradeability and ease of maintenance would be fantastic, but for the market they're targeting here, that essentially doesn't happen - professionals don't have the time for downtime and troubleshooting surrounding a hardware upgrade. They buy hardware with an on-site service plan to ensure it stays working, then buy an upgrade after X years and sell/trash (hopefully the former) the old hardware. Upgradeability matters a lot more in lower end market segments, simply because those don't come with the disincentive of working towards 24/7 uptime. But then Apple has managed the really weird feat of making regular users desire used workstation hardware, which to a large extent ensures that their old hardware gets re-sold and re-used for a long time. It is really weird that this actually works, but in this way, Apple's cult-like following actually benefits their environmental impact. I would still want them to provide in-house and third-party repair and upgrade services for at least a decade for all products - but we'll need legislation for that to happen, as Apple (as with all tech corporations) makes more money by selling you a new device. Soldered-down storage is essentially unforgivable IMO - there's no valid reason for it - but CPU and RAM? It would be fine if there was access to replacement parts and repair/upgrade services, given the efficiency and performance benefits.
 
Joined
Jul 5, 2013
Messages
27,781 (6.67/day)
The M1 Ultra basically matches a 28-core Intel chip, and we know Apple is going to double that in the Mac Pro chip. Of course, at the same time it's matching against an Nvidia chip, but all on the same die, which neither Intel, AMD nor NVIDIA can claim.
That statement clearly displays a misunderstanding due to a lack of context.

The Ultra scored 24K MC; my 12700K scored 22K MC stock with no OC - not far off.
Which demonstrates further that Apple's claim is false.
 

studentrights

New Member
Joined
Mar 19, 2022
Messages
11 (0.01/day)
You realize that the 3970X is not comparable to the M1 Ultra though, right? For example, in Corona render the 3970X is twice as fast as the 12900K, and the 12900K is way faster than the M1 Ultra. So the 3970X is over twice as fast as the M1 Ultra. The same applies to, for example, Cinebench. Now factor in the increased income from being able to finish more workloads, right?

I don't think anyone will ever be choosing between an M1 Ultra and a 3970X. They are on completely different levels; the 3970X offers a level of total performance that is absolutely untouchable by the M1 Ultra.

Feel free to show me those Cinebench numbers. Feel free to provide a link. I think you're confused.
 
Joined
Jun 14, 2020
Messages
3,460 (2.13/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
Feel free to show me those Cinebench numbers. Feel free to provide a link. I think you're confused.
What do you mean? There are reviews of the M1 Ultra; it hits around a 24K CBR23 score. It's not a secret.
 

studentrights

New Member
Joined
Mar 19, 2022
Messages
11 (0.01/day)
That's the thing though: Apple is building a device that is in the workstation space, so it needs to be compared to workstation parts - workstations that can do a bunch of things that the M1 Ultra can't compete with. If you are dropping $6,000 on a computer you aren't going to be buying all consumer-grade parts at that point, but getting the best you can. People who are dropping $6k on a computer aren't really going to be concerned about heat and power consumption. Again, Apple is waving one hand to distract from the other, because it pretends sustainability is super important to them by changing packaging etc., yet they keep creating more and more locked-down devices with no upgradeability, which have no sustained life once their specs are superseded - nothing to keep the device relevant with a single upgrade. And because the parts aren't really replaceable, the kind of thing you see with people restoring broken PCs for schools and underprivileged kids isn't possible either.
Which is why we need to wait and see what Apple offers in the Mac Pro, which is likely to be twice as powerful and geared towards the upper end of the market. At this point, we're just comparing the M1 Ultra, which is clearly middle of the road, not their top-of-the-line system. It's not far away - June 2022 is coming quickly.
 
Joined
Sep 18, 2017
Messages
198 (0.08/day)
I am curious because I don't know anything about gaming on a Mac, but does macOS impact gaming performance? Apple is not known for gaming, and I would think that Microsoft has spent a lot more effort on improving gaming performance through the OS.

Since the low overhead of a console OS drastically improves performance over gaming on Windows, is there a similar performance gap going from Windows to macOS?
 

studentrights

New Member
Joined
Mar 19, 2022
Messages
11 (0.01/day)
I am curious because I don't know anything about gaming on a Mac, but does macOS impact gaming performance? Apple is not known for gaming, and I would think that Microsoft has spent a lot more effort on improving gaming performance through the OS.

Since the low overhead of a console OS drastically improves performance over gaming on Windows, is there a similar performance gap going from Windows to macOS?
The problem is that the games they are testing don't run natively on the Mac; this was pointed out when the M1 debuted two years ago, and again with the Pro and Max last year, but the testers don't bother to tell anyone this. They are x86 games for Intel Macs running in translation on the M1; some also use OpenGL, which is deprecated on the Mac and stuck at an older version, 4.1. Of course they're going to run slowly. No one can accurately measure the M1's GPU performance compared to a PC using translated software written for another platform; it's absurd.
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
That statement clearly displays a misunderstanding due to a lack of context.


Which demonstrates further that Apple's claim is false.
It's funny how you keep harping on CPU benchmarks alone, insisting on others "lacking context", while simultaneously absolutely ignoring any mention of the indisputable fact that "most powerful chip" is not synonymous with "most powerful CPU" (or GPU, or NPU, or whatever), but that the first statement must reasonably be understood as encompassing the combination of features present on said chip. Apple never claimed to have the most powerful CPU, nor the most powerful GPU, but the most powerful combined package of those + accelerators. And they do. It's also a sort of meaningless thing to argue because it implies a non-neutral weighting of the performance of components - it's like arguing you have "the most powerful car", without detailing whether that means drag race times, 0-60 times, the ability to carry passengers, luggage space, towing capabilities, whether it can carry a pallet of goods or a stack of plywood sheeting, or any random combination of the former. But as long as one makes a good-faith assumption of a somewhat neutral balance of performance between the features present, their statement is absolutely true. No other PC chip in existence comes close to that level of combined CPU, GPU, NPU and accelerator performance.



After watching that teardown video linked earlier, I'm actually more positive than I thought I would be in terms of repairability and upgradeability for the Mac Studio. Just the fact that every single port is modular and relatively easily replaceable? That is huge for repairability and the overall longevity and usability of a product like this. Dead ports will kill your PC far more easily than its components becoming outdated, so making them fully modular is brilliant. PC OEMs, please take note. This ought to be universally adopted.

This also just puts more weight behind my desire for Apple to implement a proper repair and upgrade service, where the motherboard is replaced. Heck, that thing has PCB area and on-board components akin to a GPU, so the embodied energy outside of the SoC itself isn't much at all. The internal design means that replacing the motherboard is essentially just replacing the SoC, as everything else is modular and can be re-used (though there are of course some onboard controllers, like 10GbE, but most of this is also integrated into the SoC). The SoC comes attached to a rather big PCB, but ports, PSU, and everything else can be kept. If they just allowed for an upgrade and trade-in programme for this, that would be a pretty decent solution for keeping these in service (of course including repair, refurbishing and re-use/re-sale of traded in parts). I'm also happy to see the storage isn't soldered down! The proprietary interface sucks, but anything else would frankly have been shocking. I didn't know they had started selling SSD upgrade kits for the Mac Pro, but having the same type of solution here speaks to at least some upgradeability (though it will no doubt cost you a ridiculous amount of money). But at least the flash modules are modular and can be upgraded. Hopefully Apple also has some sort of backdoor into their "secure" bond between flash and T2 chip so that data can be rescued off drives with dead motherboards, of course.

Also, I'm a bit surprised to see that the M1 Ultra package is the size it is - is it just me, or is that smaller than a Threadripper package? It looks a tad wider than the short side of a TR, but is definitely shorter than its long side.
 
Joined
Jun 14, 2020
Messages
3,460 (2.13/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
He's saying the 3970X is twice as fast as the M1 Ultra in Cinebench. Is that true? I'm not familiar with Corona render; is there even a native test for it on the M1?
I'm the one that's saying it, and yes, it's true. It gets around 44K stock and 49K overclocked. It's on a whole different level.
 