
Apple's Graphics Performance Claims Proven Exaggerated by Mac Studio Reviews

Joined
Feb 18, 2005
Messages
5,847 (0.81/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) 3x AOC Q32E2N (32" 2560x1440 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G602
Keyboard Razer Pro Type Ultra
Software Windows 10 Professional x64
This is why there's no point in talking to anyone on the internet. I gave you useful data showing the M1 Ultra crushing the RTX 3070, which means it is only a bit slower (5-15 percent) than the RTX 3090. It was meant to counteract the one-sided interpretation you get by only comparing five-year-old software ported for x86 Macs.

Ultimately, though, it is Apple's fault for setting incorrect expectations. Nothing is stopping them from providing a reviewer's guide with the benchmarks for people to independently verify. They didn't. That's not good. Clearly Apple is doing some bad marketing here. But it is also not true that the M1 Ultra is somehow outside the RTX 3080's ballpark.

Apple's original slides were against a TDP-limited RTX 3080 (300W or so) and a TDP-limited 12900K (160W or so). In that context they might be fairly accurate with properly written software. It is up to Apple to build a gaming console, or through some other means get properly written software onto the Mac.
I don't fucking care that the M1U is faster than an RTX 3070 or "a bit slower" than the RTX 3090.

I care that Apple said it's faster than a 3090, when it absolutely is not in any performance comparison that anyone has been able to show.

That's not an exaggeration, nor an error by omission, nor a mistake - it's an absolutely shameless bald-faced lie. It doesn't surprise me, because that's the kind of company Apple is and the kind of morons they sell their overpriced garbage to, but this is a tech forum and I expected better from its members than apologism for such a company.

I have no problem with talking with people on the internet. I have serious problems with those who have drunk the kool-aid.
 
Joined
Apr 17, 2021
Messages
564 (0.43/day)
System Name Jedi Survivor Gaming PC
Processor AMD Ryzen 7800X3D
Motherboard Asus TUF B650M Plus Wifi
Cooling ThermalRight CPU Cooler
Memory G.Skill 32GB DDR5-5600 CL28
Video Card(s) MSI RTX 3080 10GB
Storage 2TB Samsung 990 Pro SSD
Display(s) MSI 32" 4K OLED 240hz Monitor
Case Asus Prime AP201
Power Supply FSP 1000W Platinum PSU
Mouse Logitech G403
Keyboard Asus Mechanical Keyboard
Wow, the bias with this one. Your graph shows the M1 Ultra beating a 3070 by 15%, and you called that crushing. Yet when the 3090 is 15% ahead, you called it "only a bit slower". Make up your mind: is the 3090 crushing the M1 Ultra, or is the 3070 just a bit slower? Both can't be true.
OK, now we're heading into super juvenile territory here. First of all, the graph I posted showed a 21 percent advantage for the Apple silicon. Secondly, as has been explained many times, the Apple graph was against a TDP-limited 3090, around 300W. They did the same against the 12900K limited to 160W. Thirdly, Apple was looking at professional use cases. Limit your 3090 to 300W and do the same, and you'll get the same results (a rough recipe is sketched at the end of this post). I'm trying to explain Mac benchmarking to people who don't know anything. This entire article is based on Geekbench numbers, for example, even though the developers of Geekbench have explicitly said the results are not valid. This has been well known since the M1 Pro launched.
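A side note on the 15-versus-21-percent squabble: relative gaps are not symmetric, because the percentage depends on which part you take as the baseline. A minimal sketch in Python, using placeholder scores rather than real benchmark results:

```python
# Relative performance depends on the baseline: "A is 21% faster than B"
# does not mean "B is 21% slower than A". Scores below are placeholders.
def pct_faster(a: float, b: float) -> float:
    """How much faster a is than b, in percent, with b as the baseline."""
    return (a / b - 1) * 100

m1u, rtx3070, rtx3090 = 121.0, 100.0, 135.0  # hypothetical relative scores

print(f"M1 Ultra vs 3070: {pct_faster(m1u, rtx3070):+.1f}%")  # +21.0%
print(f"3070 vs M1 Ultra: {pct_faster(rtx3070, m1u):+.1f}%")  # -17.4%, not -21%
print(f"3090 vs M1 Ultra: {pct_faster(rtx3090, m1u):+.1f}%")  # +11.6%
```

So the same data can honestly read as a 21 percent lead over a 3070 and a roughly 12 percent deficit to a 3090 at once; which of those sounds like "crushing" is pure framing.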

Did any of our supposed "tech" press run comparisons with the list of games I suggested? The native games? Did anyone here look at industry benchmarks against 300W 3090s? No. They are not interested. They want to push some narrative that the M1 Ultra is slow, for some reason. Meanwhile I don't even expect Intel's fastest GPU to touch the Mac silicon. And yes, the CPU in the Mac silicon slows things down in single-core at only 3.2GHz, but you can always test at 4K resolution if you want a more GPU-limited result. I guess Hardware Unboxed will do the work, since Apple fans don't bother and PC users don't care about the truth anyway.

Nobody is suggesting that Apple products are reasonably priced, but the Mac Studio is a good product. That horrible abomination called the Studio Display? That's another matter.
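For anyone who actually wants to try the "limit your 3090 to 300W" experiment from the post above, here is a minimal sketch of how the power caps could be pinned before benchmarking. It assumes Linux, root privileges, a working nvidia-smi, and the intel_rapl powercap driver; treat it as a starting point, not a definitive recipe.

```python
# Hedged sketch: clamp GPU board power and CPU package power (PL1) so a
# benchmark run approximates the "300W RTX 3090 / 160W 12900K" conditions.
# Linux-only, needs root; sysfs paths and GPU indices vary per system.
import subprocess
from pathlib import Path

def limit_gpu_watts(watts: int, gpu: int = 0) -> None:
    """Set the driver-enforced board power limit via nvidia-smi."""
    subprocess.run(["nvidia-smi", "-i", str(gpu), "-pm", "1"], check=True)  # persistence mode
    subprocess.run(["nvidia-smi", "-i", str(gpu), "-pl", str(watts)], check=True)

def limit_cpu_watts(watts: int) -> None:
    """Set the package PL1 limit through the intel_rapl powercap interface."""
    pl1 = Path("/sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw")
    pl1.write_text(str(watts * 1_000_000))  # powercap expects microwatts

if __name__ == "__main__":
    limit_gpu_watts(300)  # the "300W RTX 3090" scenario
    limit_cpu_watts(160)  # the "160W 12900K" scenario
```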

I don't fucking care that the M1U is faster than an RTX 3070 or "a bit slower" than the RTX 3090.

I care that Apple said it's faster than a 3090, when it absolutely is not in any performance comparison that anyone has been able to show.

That's not an exaggeration, nor an error by omission, nor a mistake - it's an absolutely shameless bald-faced lie. It doesn't surprise me, because that's the kind of company Apple is and the kind of morons they sell their overpriced garbage to, but this is a tech forum and I expected better from its members than apologism for such a company.

I have no problem with talking with people on the internet. I have serious problems with those who have drunk the kool-aid.

Right so you have a selection of industry benchmarks against a 300W RTX 3090 to share? Waiting.

Do you have benchmarks against a 160W 12900k? Because I do.

Cinebench: the 12900K at ~160W TDP: ~19k.
The M1 Ultra, at roughly 100W less than even that limited TDP: ~24k.
Apple was more than 20 percent faster, CPU-wise.

That's what Apple "said" through their efficiency-curve graph, and they did not lie. The M1 Ultra was also dramatically faster at Firefox compile tests and code compiling in general. And thanks to its media encoding engines, we don't even want to compare media encoding against the RTX or Intel competition; Apple has dedicated hardware for that.

As for the GPU, I'm still waiting for a scientifically literate person to run a comparison of industry benchmarks at 300W. As I've said, though, it is Apple's fault for not specifying the benchmarks and conditions clearly, especially since most people can't even read a perf/watt graph properly, (x,y) coordinates being too much for most people.
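To make "pick x, follow y" concrete, here is a small sketch of pulling an iso-performance point off an efficiency curve numerically. The sampled points are invented for illustration, not Apple's actual data:

```python
# Read an efficiency curve the way Apple's slide implies: find the power a
# competitor needs to match your score. All numbers are illustrative only.
import numpy as np

# hypothetical (watts, normalized score) samples for a power-limited competitor
watts = np.array([100, 150, 200, 250, 300, 350, 450])
score = np.array([ 55,  70,  82,  90,  96, 100, 104])

my_power, my_score = 100, 96  # hypothetical Apple-style operating point

# "pick y, follow x": interpolate the competitor's power at the same score
iso_power = np.interp(my_score, score, watts)
print(f"competitor needs ~{iso_power:.0f} W to match a score of {my_score}")
print(f"headline claim: same performance at {iso_power - my_power:.0f} W less power")
```

Note that stopping at the iso-performance point is exactly what makes such a chart selective: the same data also shows the competitor pulling ahead once it is allowed more than 300W.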

In no way did Apple say their chip was better at gaming. As for that, I expect we'll find that the Apple GPU is 20 percent faster than an RTX 3070, and 10-15 percent slower than 300W TDP-limited 3080/3090s. And faster than a 300W-limited 3090 in productivity benchmarks, as Apple said.
 
Joined
Mar 29, 2021
Messages
96 (0.07/day)
OK, now we're heading into super juvenile territory here. First of all, the graph I posted showed a 21 percent advantage for the Apple silicon. Secondly, as has been explained many times, the Apple graph was against a TDP-limited 3090, around 300W. They did the same against the 12900K limited to 160W. Thirdly, Apple was looking at professional use cases. Limit your 3090 to 300W and do the same, and you'll get the same results. I'm not here to argue. I'm trying to explain Mac benchmarking to people who don't know anything. This entire article is based on Geekbench numbers, for example, even though the developers of Geekbench have explicitly said the results are not valid. This has been well known since the M1 Pro launched.

Did any of our supposed "tech" press run comparisons with the list of games I suggested? The native games? Did anyone here look at industry benchmarks against 300W 3090s? No. They are not interested. They want to push some narrative that the M1 Ultra is slow, for some reason. Meanwhile I don't even expect Intel's fastest GPU to touch the Mac silicon. And yes, the CPU in the Mac silicon slows things down too, but you can always test at 4K resolution if you want a more GPU-limited result. I'm still waiting for our tech press to actually do real work. I guess Hardware Unboxed will do it, since Apple fans don't bother and PC users don't care about the truth anyway.

Nobody is suggesting that Apple products are reasonably priced, but the Mac Studio is a good product. That horrible abomination called the Studio Display? That's another matter.



Right so you have a selection of industry benchmarks against a 300W RTX 3090 to share? Waiting.

Do you have benchmarks against a 160W 12900k? Because I do.

Cinebench: the 12900K at ~160W TDP: ~19k.
The M1 Ultra, at roughly 100W less than even that limited TDP: ~24k.
Apple was more than 20 percent faster, CPU-wise.

That's what Apple "said" through their efficiency-curve graph, and they did not lie. The M1 Ultra was also dramatically faster at Firefox compile tests and code compiling in general. And thanks to its media encoding engines, we don't even want to compare media encoding against the RTX or Intel competition; Apple has dedicated hardware for that.

As for the GPU, I'm still waiting for a scientifically literate person to run a comparison of industry benchmarks at 300W. As I've said, though, it is Apple's fault for not specifying the benchmarks and conditions clearly, especially since most people can't even read a perf/watt graph properly, (x,y) coordinates being too much for most people.

In no way did Apple say their chip was better at gaming. As for that, I expect we'll find that the Apple GPU is 20 percent faster than an RTX 3070, and 10-15 percent slower than 300W TDP-limited 3080/3090s. And faster than a 300W-limited 3090 in productivity benchmarks, as Apple said.
There is no such thing as a "300W" RTX 3090 sold by nVidia or AIBs.

Apple can take any piece of hardware and put power restrictions on it, then claim "Mac is faster!" In fact, that is what they are doing to make the M1 look like a "high performance" chip.

BTW, I can run faster than Usain Bolt.*


*When Usain Bolt is 75 years old, overweight, and has had one foot amputated.


The difference between my claim and Apple's is that I added a disclaimer.
 
Joined
Jun 14, 2020
Messages
3,458 (2.13/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
Do you have benchmarks against a 160W 12900k? Because I do.

Cinebench: the 12900K at ~160W TDP: ~19k.
The M1 Ultra, at roughly 100W less than even that limited TDP: ~24k.
Apple was more than 20 percent faster, CPU-wise.
I do. My 12900K @ 35W gets 12,600 in CBR23. I can also hit 24k at 90 watts.

I also doubt that the M1 will beat a 300W power-limited 3090. I mean, I have a 3090; do you want me to run something specific?

Also, just a side note: the SotTR (Shadow of the Tomb Raider) numbers are way off. My 3090 gets 196 fps at 1440p highest, yet in The Verge review it hovers around 110? Hmmm.
 
Joined
Apr 17, 2021
Messages
564 (0.43/day)
System Name Jedi Survivor Gaming PC
Processor AMD Ryzen 7800X3D
Motherboard Asus TUF B650M Plus Wifi
Cooling ThermalRight CPU Cooler
Memory G.Skill 32GB DDR5-5600 CL28
Video Card(s) MSI RTX 3080 10GB
Storage 2TB Samsung 990 Pro SSD
Display(s) MSI 32" 4K OLED 240hz Monitor
Case Asus Prime AP201
Power Supply FSP 1000W Platinum PSU
Mouse Logitech G403
Keyboard Asus Mechanical Keyboard
There is no such thing as a "300W" RTX 3090 sold by nVidia or AIBs.

Apple can take any piece of hardware and put power restrictions on it, then claim "Mac is faster!" In fact, that is what they are doing to make the M1 look like a "high performance" chip.

BTW, I can run faster than Usain Bolt.*


*When Usain Bolt is 75 years old, overweight, and has had one foot amputated.


The difference between my claim and Apple's is that I added a disclaimer.

Apple never suggested there was a 300W 3090. They posted a perf/watt curve, and they did have a disclaimer, in their footnotes. They never "claimed" anything except that performance/watt curve. Stupid media that can't understand a curve made the claims, not Apple.

I do. My 12900K @ 35W gets 12,600 in CBR23. I can also hit 24k at 90 watts.
Are you sure you get 24k at 90 watts? Cinebench uses AVX and is not very power-efficient. I took that result from a review: around 19k at 160W, but it does depend on your voltage settings, undervolted versus stock voltage curves. Ars Technica's review only posted Handbrake efficiency, and the 12900K there used 130W locked to 301W unlocked (because Intel TDP is almost meaningless, speaking of "misleading" claims).

I expect Apple used stock voltage curves for their Intel and nVidia power consumption results; they would have just applied wattage limits. For sure you can undervolt the 12900K manually a lot, if you get a golden sample. My 12700K is a golden sample.
 
Joined
Jun 14, 2020
Messages
3,458 (2.13/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
I'm talking about undervolted. At around 160-170W I can probably hit 29k. Of course that's bin-specific, but I don't have a particularly good chip either way.
 
Joined
Apr 17, 2021
Messages
564 (0.43/day)
System Name Jedi Survivor Gaming PC
Processor AMD Ryzen 7800X3D
Motherboard Asus TUF B650M Plus Wifi
Cooling ThermalRight CPU Cooler
Memory G.Skill 32GB DDR5-5600 CL28
Video Card(s) MSI RTX 3080 10GB
Storage 2TB Samsung 990 Pro SSD
Display(s) MSI 32" 4K OLED 240hz Monitor
Case Asus Prime AP201
Power Supply FSP 1000W Platinum PSU
Mouse Logitech G403
Keyboard Asus Mechanical Keyboard
I'm talking about undervolted. At around 160-170W I can probably hit 29k. Of course that's bin-specific, but I don't have a particularly good chip either way.
I've found my 12700K to be an undervolting champ. I can actually keep much of my performance and drop it all the way down to 120W. But not nearly that much. I think you're wrong about that.

I did find one reviewer and one YouTuber who talked intelligently about what Apple was claiming: The Verge and MKBHD (though I still think Ars Technica did a better job overall with their review). Here's their graph pointing out that the 3090 does scale to over 400W. They get it. You can be faster at 200W less power, and slower overall at unlimited wattage. We'll probably get the most benchmarks from MaxTech (they do a lot of that) and Hardware Unboxed (they are the king at that) eventually, if they get a Studio in hand one day. They'd have to buy it, so I'm not sure that will happen.

[Image: The Verge's graph of RTX 3090 performance scaling up to roughly 500W]


I'm talking about undervolted. At around 160-170W I can probably hit 29k. Of course that's bin-specific, but I don't have a particularly good chip either way.
I just looked it up, by the way: at stock settings the 12900K scores 27k in Cinebench with a peak wattage of 272 watts. I highly doubt you can get 29k at 160W.
 
Joined
Apr 1, 2009
Messages
60 (0.01/day)
One thing I don't understand, and no one is doing benchmarks on, is the fact that when you drop $4,000 to $5,000 you aren't competing with desktop processors and GPUs for the type of work these people are doing. At that point you can buy a Threadripper in the $2,500 range and a $300-$500 motherboard and put together a machine that, spec-wise, will kill the Mac. Sure, you have to stick with at least an ATX form factor, and sure, it's more power hungry, but if it is for productivity I have never seen those be deciding factors.
 
Joined
Apr 17, 2021
Messages
564 (0.43/day)
System Name Jedi Survivor Gaming PC
Processor AMD Ryzen 7800X3D
Motherboard Asus TUF B650M Plus Wifi
Cooling ThermalRight CPU Cooler
Memory G.Skill 32GB DDR5-5600 CL28
Video Card(s) MSI RTX 3080 10GB
Storage 2TB Samsung 990 Pro SSD
Display(s) MSI 32" 4K OLED 240hz Monitor
Case Asus Prime AP201
Power Supply FSP 1000W Platinum PSU
Mouse Logitech G403
Keyboard Asus Mechanical Keyboard
One thing I don't understand, and no one is doing benchmarks on, is the fact that when you drop $4,000 to $5,000 you aren't competing with desktop processors and GPUs for the type of work these people are doing. At that point you can buy a Threadripper in the $2,500 range and a $300-$500 motherboard and put together a machine that, spec-wise, will kill the Mac. Sure, you have to stick with at least an ATX form factor, and sure, it's more power hungry, but if it is for productivity I have never seen those be deciding factors.
You don't get a small and quiet machine that uses 1/3 the power. I mean, imagine if Intel used 1/3 the power of AMD; would we be arguing about who is superior? Anyway, I've made my point. Now we have false claims of 29k Cinebench at 160W and all the rest of the stuff happening.

It's worth remembering Anandtech's review: "I’m not entirely sure what that says about Intel’s 7 manufacturing process compared to the 10SF used before. A lot of the performance gains here appear to come from IPC and DDR5, and that doesn’t seem to have come with performance per watt gains on the P-cores. It means that Intel is still losing on power efficiency at load compared to the competition."

Apple has got the future with their current products. Perf/watt is king in gaming consoles, in laptops, in mobile, in everything except the giant 500W workstations.

Cheers :)
 
Joined
Jul 16, 2014
Messages
8,198 (2.17/day)
Location
SE Michigan
System Name Dumbass
Processor AMD Ryzen 7800X3D
Motherboard ASUS TUF gaming B650
Cooling Artic Liquid Freezer 2 - 420mm
Memory G.Skill Sniper 32gb DDR5 6000
Video Card(s) GreenTeam 4070 ti super 16gb
Storage Samsung EVO 500gb & 1Tb, 2tb HDD, 500gb WD Black
Display(s) 1x Nixeus NX_EDG27, 2x Dell S2440L (16:9)
Case Phanteks Enthoo Primo w/8 140mm SP Fans
Audio Device(s) onboard (realtek?) - SPKRS:Logitech Z623 200w 2.1
Power Supply Corsair HX1000i
Mouse Steeseries Esports Wireless
Keyboard Corsair K100
Software windows 10 H
Benchmark Scores https://i.imgur.com/aoz3vWY.jpg?2
I took that result from a review.
Without a source, a link or otherwise, you could be using fudged numbers and the rest of us reading wouldn't know the difference. So now I'll have to take anything you say with a grain of salt.

Reviews are at the mercy of the interpreter; they can be read in various ways, and someone fanboi enough can twist them any way they want. PR should never be taken as fact without a way to back up the numbers; this is why we wait for reviews. One review doesn't say a lot, the sample size is too small, and if problems show up they could be one-offs or something more serious, so multiple reviews become a necessity.

Using Geekbench for factual numbers after everyone admits it's useless is completely stupid. Corporate PR departments don't care whether what they put out is the truth or an exaggeration of said truth.

I find most of the discussion here goes well with the popcorn. :D
 
Joined
Apr 17, 2021
Messages
564 (0.43/day)
System Name Jedi Survivor Gaming PC
Processor AMD Ryzen 7800X3D
Motherboard Asus TUF B650M Plus Wifi
Cooling ThermalRight CPU Cooler
Memory G.Skill 32GB DDR5-5600 CL28
Video Card(s) MSI RTX 3080 10GB
Storage 2TB Samsung 990 Pro SSD
Display(s) MSI 32" 4K OLED 240hz Monitor
Case Asus Prime AP201
Power Supply FSP 1000W Platinum PSU
Mouse Logitech G403
Keyboard Asus Mechanical Keyboard
Without a source, a link or otherwise, you could be using fudged numbers and the rest of us reading wouldn't know the difference. So now I'll have to take anything you say with a grain of salt.

Reviews are at the mercy of the interpreter; they can be read in various ways, and someone fanboi enough can twist them any way they want. PR should never be taken as fact without a way to back up the numbers; this is why we wait for reviews. One review doesn't say a lot, the sample size is too small, and if problems show up they could be one-offs or something more serious, so multiple reviews become necessary.

Using Geekbench for factual numbers after everyone admits it's useless is completely stupid. Corporate PR departments don't care whether what they put out is the truth or an exaggeration of said truth.

I find most of the discussion here goes well with the popcorn. :D
Apparently I have to write a scientific paper to explain everything. Meanwhile a random guy posts double the power efficiency of Anandtech's review and I'm on the defensive? I've given plenty of sources. It's up to you to find a good source. Have fun! If you like popcorn, I think MaxTech does the best in-depth Mac coverage. I personally use a 12700K.

Mac Studio FULL Teardown - M1 Ultra chip REVEALED! - YouTube

Try that youtube channel :)
 
Joined
Apr 1, 2009
Messages
60 (0.01/day)
You don't get a small and quiet machine that uses 1/3 the power. I mean, imagine if Intel used 1/3 the power of AMD; would we be arguing about who is superior? Anyway, I've made my point. Now we have false claims of 29k Cinebench at 160W and all the rest of the stuff happening.

It's worth remembering Anandtech's review: "I’m not entirely sure what that says about Intel’s 7 manufacturing process compared to the 10SF used before. A lot of the performance gains here appear to come from IPC and DDR5, and that doesn’t seem to have come with performance per watt gains on the P-cores. It means that Intel is still losing on power efficiency at load compared to the competition."

Apple has got the future with their current products. Perf/watt is king in gaming consoles, in laptops, in mobile, in everything except the giant 500W workstations.

Cheers :)

The thing is, each device, like a gaming console for instance, also uses different technologies, from how it handles HDR to its integration with TV and monitor technology, for instance FreeSync and G-Sync. Ray tracing is something else. I haven't paid attention, but can Apple hardware do ray tracing for gaming? It isn't just that Arm works differently than x86, plus the different technologies used on GPUs and whatnot. If Apple were to enter other spaces, they would have to focus on those things as well, which I have a feeling would slow down their other progress, since then you have to make it all work together, both software and hardware.

The thing is, you can't compare a $5,000 Mac to consumer parts, because you are competing with workstations at that point: workstations where, if something goes wrong with a single part, it can be replaced. In most instances you can also upgrade them down the road with more RAM, multiple NVMe drives, SSDs, HDDs, etc. Macs at this point are one and done. You drop $5,000 and that Mac will never be upgraded. Once a single part of it isn't usable anymore, or is outdated, the whole thing is along with it.

I also wonder how long Apple is going to be able to keep making giant leaps in speed without astronomical R&D costs. Apple doesn't sell to anyone except through their own products. Intel and AMD keep selling to game console makers and have licensing deals putting their GPU technology in monitors and other products. They have multiple revenue streams and licensing deals. They make server and workstation parts as well as consumer parts. I'm not saying Apple can't be successful, but they neglected a lot of things for a long time and had turned into a cell phone manufacturer. How many companies are going to trust them enough to switch to their hardware, expecting Apple to keep making large leaps and to guarantee that the software always runs best on a Mac?
 
Joined
Jul 16, 2014
Messages
8,198 (2.17/day)
Location
SE Michigan
System Name Dumbass
Processor AMD Ryzen 7800X3D
Motherboard ASUS TUF gaming B650
Cooling Artic Liquid Freezer 2 - 420mm
Memory G.Skill Sniper 32gb DDR5 6000
Video Card(s) GreenTeam 4070 ti super 16gb
Storage Samsung EVO 500gb & 1Tb, 2tb HDD, 500gb WD Black
Display(s) 1x Nixeus NX_EDG27, 2x Dell S2440L (16:9)
Case Phanteks Enthoo Primo w/8 140mm SP Fans
Audio Device(s) onboard (realtek?) - SPKRS:Logitech Z623 200w 2.1
Power Supply Corsair HX1000i
Mouse Steeseries Esports Wireless
Keyboard Corsair K100
Software windows 10 H
Benchmark Scores https://i.imgur.com/aoz3vWY.jpg?2
Apparently I have to write a scientific paper to explain everything. Meanwhile a random guy posts double the power efficiency of Anandtech's review and I'm on the defensive? I've given plenty of sources. It's up to you to find a good source. Have fun! If you like popcorn, I think MaxTech does the best in-depth Mac coverage. I personally use a 12700K.

Mac Studio FULL Teardown - M1 Ultra chip REVEALED! - YouTube

Try that youtube channel :)
Not entirely. As the poster using specific facts, it's up to you to provide sources for your material; obviously you're too lazy, and likely afraid of being proven wrong.

I'm not really into "coverage" videos; those kinds of videos are more hype than fact. But I still looked over that YT channel, and the guy appears to be an Apple shill, looking at his content history. That video is a breakdown of the Ultra, not benchmarks. Until more reviews are out, this back and forth is moot.

And with that I'll step out of this thread for fear of being called names.
 
Joined
Jun 14, 2020
Messages
3,458 (2.13/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
I've found my 12700K to be an undervolting champ. I can actually keep much of my performance and drop it all the way down to 120W. But not nearly that much. I think you're wrong about that.

I did find one reviewer and one YouTuber who talked intelligently about what Apple was claiming: The Verge and MKBHD (though I still think Ars Technica did a better job overall with their review). Here's their graph pointing out that the 3090 does scale to over 400W. They get it. You can be faster at 200W less power, and slower overall at unlimited wattage. We'll probably get the most benchmarks from MaxTech (they do a lot of that) and Hardware Unboxed (they are the king at that) eventually, if they get a Studio in hand one day. They'd have to buy it, so I'm not sure that will happen.

[Image: The Verge's RTX 3090 power-scaling graph]


I just looked it up, by the way: at stock settings the 12900K scores 27k in Cinebench with a peak wattage of 272 watts. I highly doubt you can get 29k at 160W.
There is no way the 12900K pulls 272W at stock, since the PL2 limit is 240W. It cannot draw more than 240W. Mine caps at around 190W during Cinebench R23.

I've found my 12700K to be an undervolting champ. I can actually keep much of my performance and drop it all the way down to 120W. But not nearly that much. I think you're wrong about that.

I did find one reviewer and one YouTuber who talked intelligently about what Apple was claiming: The Verge and MKBHD (though I still think Ars Technica did a better job overall with their review). Here's their graph pointing out that the 3090 does scale to over 400W. They get it. You can be faster at 200W less power, and slower overall at unlimited wattage. We'll probably get the most benchmarks from MaxTech (they do a lot of that) and Hardware Unboxed (they are the king at that) eventually, if they get a Studio in hand one day. They'd have to buy it, so I'm not sure that
That comparison is still stupid, to be fair. Let's assume that a 3090 at 200 watts performs worse than a 3070. What is the freaking point, then, of comparing the M1 with a handicapped 3090 rather than a 3070? So even if their comparison is true, it's still dishonest. That's like saying my Ford is as fast as a Lamborghini (*when both cars have no wheels). Sure, my statement is true, but I'm still trying to deceive you.
 
Joined
Apr 17, 2021
Messages
564 (0.43/day)
System Name Jedi Survivor Gaming PC
Processor AMD Ryzen 7800X3D
Motherboard Asus TUF B650M Plus Wifi
Cooling ThermalRight CPU Cooler
Memory G.Skill 32GB DDR5-5600 CL28
Video Card(s) MSI RTX 3080 10GB
Storage 2TB Samsung 990 Pro SSD
Display(s) MSI 32" 4K OLED 240hz Monitor
Case Asus Prime AP201
Power Supply FSP 1000W Platinum PSU
Mouse Logitech G403
Keyboard Asus Mechanical Keyboard
Not entirely. As the poster using specific facts, it's up to you to provide sources for your material; obviously you're too lazy, and likely afraid of being proven wrong.

I'm not really into "coverage" videos; those kinds of videos are more hype than fact. But I still looked over that YT channel, and the guy appears to be an Apple shill, looking at his content history. That video is a breakdown of the Ultra, not benchmarks. Until more reviews are out, this back and forth is moot.

And with that I'll step out of this thread for fear of being called names.
No, that video is just a teardown. I just mentioned him because his channel is probably one of the only ones that actually puts any detail into benchmarks at all. Coming soon. I don't know of any other one. Most YouTubers don't do a lot of work there. Even in the PC space we have Hardware Unboxed and Gamer's Nexus, and sometimes it feels like that's about it. And TechPowerUp.

There is no way the 12900K pulls 272W at stock, since the PL2 limit is 240W. It cannot draw more than 240W. Mine caps at around 190W during Cinebench R23.


That comparison is still stupid, to be fair. Let's assume that a 3090 at 200 watts performs worse than a 3070. What is the freaking point, then, of comparing the M1 with a handicapped 3090 rather than a 3070? So even if their comparison is true, it's still dishonest. That's like saying my Ford is as fast as a Lamborghini (*when both cars have no wheels). Sure, my statement is true, but I'm still trying to deceive you.

Peak power draw according to Anandtech. Remember that is package power; I'm not sure whether Intel's PL2 limit covers the entire CPU package (including cache, the ring, the DRAM controller, etc.) or just the CPU cores by themselves.

[Image: AnandTech chart of 12900K peak package power in POV-Ray (Windows 11, DDR5)]
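One way to settle what the package actually draws, instead of arguing over charts, is to sample the CPU package energy counter while the benchmark runs. A minimal sketch, assuming Linux and the intel_rapl powercap interface (the sysfs path can differ per system; on Windows a tool like HWiNFO serves the same purpose):

```python
# Hedged sketch: sample the intel_rapl package energy counter to measure
# real CPU package power while a benchmark (e.g. Cinebench) runs elsewhere.
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package-0 energy, microjoules

def package_watts(interval_s: float = 1.0) -> float:
    with open(RAPL) as f:
        e0 = int(f.read())
    time.sleep(interval_s)
    with open(RAPL) as f:
        e1 = int(f.read())
    if e1 < e0:            # the counter wrapped around; discard this sample
        return float("nan")
    return (e1 - e0) / 1e6 / interval_s  # microjoules -> joules, per second = watts

if __name__ == "__main__":
    while True:  # run alongside the benchmark; Ctrl+C to stop
        print(f"package power: {package_watts():6.1f} W")
```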

Apple was not dishonest (outside of not defining their productivity benchmark; the curve is not a lie, it is a focus on their efficiency). They posted perf/watt curves. Then, since their own product only uses 100 watts, they moved across the graph to see how many watts a 3090 used to get the same performance, and it was 300W, and they stopped there. I think Apple overestimated the average journalist's ability to understand basic mathematical graphs. They never claimed "we are faster than a 3090". If you read that somewhere, a journalist wrote it, not Apple. I am a mathematician and frequently look at graphs like that. Pick x, follow y. Pick y, follow x. That's how you get reference points for comparisons on a curve.

Not entirely. As the poster using specific facts, it's up to you to provide sources for your material; obviously you're too lazy, and likely afraid of being proven wrong.

I'm not really into "coverage" videos; those kinds of videos are more hype than fact. But I still looked over that YT channel, and the guy appears to be an Apple shill, looking at his content history. That video is a breakdown of the Ultra, not benchmarks. Until more reviews are out, this back and forth is moot.

And with that I'll step out of this thread for fear of being called names.
I did provide sources for almost everything I said. You picked one comment, complained that I didn't provide a citation, and now you're trying to use that to discredit everything I said? That's not in good faith.
 
Joined
May 10, 2020
Messages
738 (0.44/day)
Processor Intel i7 13900K
Motherboard Asus ROG Strix Z690-E Gaming
Cooling Arctic Freezer II 360
Memory 32 Gb Kingston Fury Renegade 6400 C32
Video Card(s) PNY RTX 4080 XLR8 OC
Storage 1 TB Samsung 970 EVO + 1 TB Samsung 970 EVO Plus + 2 TB Samsung 870
Display(s) Asus TUF Gaming VG27AQL1A + Samsung C24RG50
Case Corsair 5000D Airflow
Power Supply EVGA G6 850W
Mouse Razer Basilisk
Keyboard Razer Huntsman Elite
Benchmark Scores 3dMark TimeSpy - 26698 Cinebench R23 2258/40751
Wow… when Apple is involved, the number of haters is even higher than the number of supporters… impressive
 
Joined
Apr 1, 2009
Messages
60 (0.01/day)
No, that video is just a teardown. I just mentioned him because his channel is probably one of the only ones that actually puts any detail into benchmarks at all. Coming soon. I don't know of any other one. Most YouTubers don't do a lot of work there. Even in the PC space we have Hardware Unboxed and Gamer's Nexus, and sometimes it feels like that's about it. And TechPowerUp.

Luke Miani says he is going to build a system with all the parts Apple used in their graphs and run a bunch of benchmarks. I'm not sure how thorough he will be, as he is more of a Mac guy than a PC guy, so I imagine they will mostly be synthetic benchmarks rather than anything pertaining to gaming. In his review of the M1 Ultra version today, he did say Apple was pretty dishonest in some ways about how they presented things, just based on the 48-core M1 Ultra version for $2,000. His $4,000 Mac Studio review is supposed to be out tomorrow.
 
Joined
Apr 17, 2021
Messages
564 (0.43/day)
System Name Jedi Survivor Gaming PC
Processor AMD Ryzen 7800X3D
Motherboard Asus TUF B650M Plus Wifi
Cooling ThermalRight CPU Cooler
Memory G.Skill 32GB DDR5-5600 CL28
Video Card(s) MSI RTX 3080 10GB
Storage 2TB Samsung 990 Pro SSD
Display(s) MSI 32" 4K OLED 240hz Monitor
Case Asus Prime AP201
Power Supply FSP 1000W Platinum PSU
Mouse Logitech G403
Keyboard Asus Mechanical Keyboard
Luke Miani says he is going to build a system with all the parts Apple used in their graphs and run a bunch of benchmarks. I'm not sure how thorough he will be, as he is more of a Mac guy than a PC guy, so I imagine they will mostly be synthetic benchmarks rather than anything pertaining to gaming. In his review of the M1 Ultra version today, he did say Apple was pretty dishonest in some ways about how they presented things, just based on the 48-core M1 Ultra version for $2,000. His $4,000 Mac Studio review is supposed to be out tomorrow.
Words have meaning, though. "Lie", "dishonest": I don't agree with those words. You could say the graphs were misleading, as they require math skills to understand clearly and imply to laypeople something that isn't true. All they did was show that the CPU part of the system is more powerful than a 12900K at 160W, which is true. They showed that they can equal a 3090 in productivity applications at 100W versus 300W. That's true.

They did not say "we can play games at a higher fps"; if they wanted to claim that, they would have shown FPS data. 3DMark Wild Life shows the GPU is about 21 percent faster than an RTX 3070 and probably ~10 percent slower than an RTX 3090 at 300W. But of course, as The Verge said, the chart was not good and is very misleading, as Apple cropped the graph and didn't show where the 3090 goes at 450W, for example. The 3090 is probably at least 30 percent faster than the Apple silicon with unlocked power. But the Mac Studio has no option to unlock clock speeds or power, so of course they didn't want that comparison.

I still think Apple isn't being dishonest. If everyone reads Apple's charts and concludes the wrong things, you can't say Apple is dishonest because those conclusions are not true; Apple didn't make those claims. What we should do is heavily criticize Apple for misleading marketing. But guess which companies are the most famous for misleading claims? nVidia and Intel are the worst. Won't go into that.

Apple has problems. Mac OS is a mess. Gaming is a mess. They need a turbo option for their CPUs so we can get higher clock speeds at lower usage. And they charge too much. They make luxury products, not value for the dollar. They are also making the most exciting silicon in the business. Would that I could install Windows on the Mac Studio.

I hope software improves over time to work on two GPU dies stitched together. We'll see. Intel and AMD have big plans here. nVidia probably also. Apple did it first.
 
Joined
Apr 1, 2009
Messages
60 (0.01/day)
Words have meaning, though. "Lie", "dishonest": I don't agree with those words. You could say the graphs were misleading, as they require math skills to understand clearly and imply to laypeople something that isn't true. All they did was show that the CPU part of the system is more powerful than a 12900K at 160W, which is true. They showed that they can equal a 3090 in productivity applications at 100W versus 300W. That's true.

They did not say "we can play games at a higher fps"; if they wanted to claim that, they would have shown FPS data. 3DMark Wild Life shows the GPU is about 21 percent faster than an RTX 3070 and probably ~10 percent slower than an RTX 3090 at 300W. But of course, as The Verge said, the chart was not good and is very misleading, as Apple cropped the graph and didn't show where the 3090 goes at 450W, for example. The 3090 is probably at least 30 percent faster than the Apple silicon with unlocked power. But the Mac Studio has no option to unlock clock speeds or power, so of course they didn't want that comparison.

I still think Apple isn't being dishonest. If everyone reads Apple's charts and concludes the wrong things, you can't say Apple is dishonest because those conclusions are not true; Apple didn't make those claims. What we should do is heavily criticize Apple for misleading marketing. But guess which companies are the most famous for misleading claims? nVidia and Intel are the worst. Won't go into that.

Apple has problems. Mac OS is a mess. Gaming is a mess. They need a turbo option for their CPUs so we can get higher clock speeds at lower usage. And they charge too much. They make luxury products, not value for the dollar. They are also making the most exciting silicon in the business. Would that I could install Windows on the Mac Studio.

I hope software improves over time to work on two GPU dies stitched together. We'll see. Intel and AMD have big plans here. nVidia probably also. Apple did it first.

Dude, be honest with yourself. Apple has been more vague with their graphs ever since they released the M1 hardware. They are going out of their way to be dishonest. Just like pushing all this stuff about sustainability in their packaging and products, yet creating products with zero upgradeability, whose life you can't prolong down the road. It's like the magician waving one hand so you don't see what he is doing with the other one.
 
Joined
Dec 10, 2011
Messages
432 (0.09/day)
Truly shocking. Apple again lied in benchmarks.

Never happened before...

But I love the Studio, make no mistake about it.
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
One thing i don't understand and no one is doing benchmarks on is the fact when you drop $4000 to $5000 you aren't competing with desktop processors and gpus for the type of work these people are doing. At this point you can buy a threadripper in the $2500 range and a $300-$500 motherboard and put together a machine that spec wise will kill the MAC. Sure you have to stick with at least ATX form factor and sure more power hungry but if it is for productivity I have never seen those be deciding factors.
Would you, though? Let's see (using either MSRP or real-world prices, based on what's reasonable or sensible, using PCPartpicker), comparing to a base M1 Ultra (20c/48CU/64GB/1TB) Mac Studio:
Threadripper 3970X: MSRP $1999 (PCPartpicker says $3100 real-world)
Suitable motherboard: Cheapest is ASRock TRX40 Creator at $516
RAM: 4x16GB of DDR4-3600 is ~$260 at the minimum
Storage: ~$150 for 1TB of high speed PCIe 4.0 NVMe (WD SN850 or Samsung 980 Pro)
Cooler: Noctua NH-U14S TR4-SP3 $90
GPU: RTX 3070 or RX 6700 XT - $600-800 for either (subtract ~$200-300 for prices in a saner world)
PSU: >650W 80+ Platinum or Titanium with a good design, ~$130
Case: Whatever you want, but ideally a well ventilated one, likely >$100
Windows licence: $100+

That leaves us with a total price of ~$3950-$4200 depending mostly on GPU choice (or ~$3750-3900 if GPU prices were less stupid), with performance very much in the same ballpark, power consumption easily 3x higher under heavy loads, a case many times the size, but far more upgradeability and expandability.
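The arithmetic is easy to check; a quick sanity pass over the list above (MSRP for the Threadripper, GPU entered as the quoted low-high range):

```python
# Reproduce the build total quoted above. Prices are the ones listed in the
# post (MSRPs / typical prices at the time), not authoritative market data.
parts = {
    "Threadripper 3970X": 1999,
    "TRX40 motherboard":   516,
    "4x16GB DDR4-3600":    260,
    "1TB PCIe 4.0 NVMe":   150,
    "Noctua NH-U14S TR4":   90,
    "PSU, 80+ Platinum":   130,
    "Case":                100,
    "Windows licence":     100,
}
gpu_low, gpu_high = 600, 800  # RX 6700 XT / RTX 3070 range from the list

base = sum(parts.values())
print(f"total: ${base + gpu_low:,} to ${base + gpu_high:,}")  # $3,945 to $4,145
```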

So, will this spec wise "kill the Mac"? No. Is it a tad more powerful? Sure - the TR 3970 is a bit faster in many workloads, and the GPUs used in the example are a bit faster than the 48-core M1 GPU. It's still quite comparable overall. You can beat it by going for a higher end CPU or GPU, but that will also (drastically) increase your price - another $2000 for a TR 3990X, another $1000 for an RTX 3090 or RX 6900 XT. RAM and storage upgrades are of course much cheaper, and the advantage from expandability shouldn't be undersold - it's important both for utility and longevity. But neither should the M1 advantage from its NPU (RTX cards can do some similar work with tensor cores, but far less efficiently, and otherwise the PC has no neural accelerators at all unless you add an accelerator card for even more cost), nor its massive potential for VRAM - if you have workloads that can make use of this. You might of course not - but that doesn't invalidate those examples where the advantage applies.

So, is the M1 Ultra better or worse than a HEDT PC? It's mainly different. It's still massively powerful, but not the most powerful; it's massively efficient, but not great at all workloads; it's extremely fast for certain specific workloads, but those are quite specific.
That comparison is still stupid, to be fair. Let's assume that a 3090 at 200 watts performs worse than a 3070. What is the freaking point, then, of comparing the M1 with a handicapped 3090 rather than a 3070? So even if their comparison is true, it's still dishonest. That's like saying my Ford is as fast as a Lamborghini (*when both cars have no wheels). Sure, my statement is true, but I'm still trying to deceive you.
When you're comparing efficiency curves it's a reasonably valid comparison. If you're comparing your part, which peaks at X power, it's reasonable to compare that to competitors also at X power, even if they scale much higher than that. It's not the full picture, and a fair comparison would also include competitor performance at Y (their peak) power, but that doesn't invalidate the X-X comparison or make it dishonest - it's just selective. Selective argumentation can indeed be misleading and can hide the truth, but it's quite a stretch to call this example lying or outright dishonest.

Also, no, a 3090 at a 3070's power level would not be slower; a wide-and-slow design is nearly always much more efficient than a smaller, higher-clocked one. That's a huge part of how Apple is managing their massive efficiency: they're just making huge cores and huge chips and running them at relatively modest clocks. But unless specifically bottlenecked elsewhere, and unless you go really unreasonably low in power, a low-clocked large GPU will always be faster than a smaller GPU at the same power level. There's a reason Nvidia's mobile SKUs nearly always have more CUDA cores but lower clocks than their desktop counterparts; that's how you improve efficiency in a given thermal envelope. It's just expensive to do so.
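To put the wide-and-slow point in toy-model terms: dynamic power scales roughly with cores * V^2 * f, while throughput on a parallel workload scales with cores * f, so a bigger chip at lower voltage and clocks can match a smaller chip's throughput at far less power. The core counts below loosely mirror a 3070 (46 SMs) and a 3090 (82 SMs); the voltages and clocks are invented for illustration:

```python
# Toy model only: P ~ cores * V^2 * f (dynamic power), T ~ cores * f
# (idealized, perfectly parallel throughput). Not real GPU specifications.

def dynamic_power(cores: int, volts: float, ghz: float) -> float:
    return cores * volts**2 * ghz

def throughput(cores: int, ghz: float) -> float:
    return cores * ghz

configs = {
    "narrow + fast (3070-like)":           dict(cores=46, volts=1.05, ghz=1.9),
    "wide + slow (downclocked 3090-like)": dict(cores=82, volts=0.80, ghz=1.1),
}

for name, c in configs.items():
    p = dynamic_power(c["cores"], c["volts"], c["ghz"])
    t = throughput(c["cores"], c["ghz"])
    print(f"{name}: throughput {t:.0f}, power {p:.0f}, perf/W {t / p:.2f}")
```

Similar throughput, roughly 40 percent less power for the wide configuration; that is the whole trick, and the price you pay for it is silicon area.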
 
Joined
Jun 14, 2020
Messages
3,458 (2.13/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
Peak power draw according to Anandtech. Remember that is package power; I'm not sure whether Intel's PL2 limit covers the entire CPU package (including cache, the ring, the DRAM controller, etc.) or just the CPU cores by themselves.

[Image: AnandTech's 12900K peak package power chart]
I don't know what the heck Anand is doing wrong, but PL2 includes everything: cores, cache, and the rest of the chip.

Here is mine at 35W, beating the M1 at both efficiency and performance:
[Image: Cinebench R23 score of 12,630 at 35W]

When you're comparing efficiency curves it's a reasonably valid comparison. If you're comparing your part, which peaks at X power, it's reasonable to compare that to competitors also at X power, even if they scale much higher than that. It's not the full picture, and a fair comparison would also include competitor performance at Y (their peak) power, but that doesn't invalidate the X-X comparison or make it dishonest - it's just selective. Selective argumentation can indeed be misleading and can hide the truth, but it's quite a stretch to call this example lying or outright dishonest.

Also, no, a 3090 at a 3070's power level would not be slower; a wide-and-slow design is nearly always much more efficient than a smaller, higher-clocked one. That's a huge part of how Apple is managing their massive efficiency: they're just making huge cores and huge chips and running them at relatively modest clocks. But unless specifically bottlenecked elsewhere, and unless you go really unreasonably low in power, a low-clocked large GPU will always be faster than a smaller GPU at the same power level. There's a reason Nvidia's mobile SKUs nearly always have more CUDA cores but lower clocks than their desktop counterparts; that's how you improve efficiency in a given thermal envelope. It's just expensive to do so.
I don't think I said lying, but yes I called it dishonest. In my book misleading makes you dishonest.

About the 3090 vs 3070, I'm not sure it's so clear-cut. It always depends on the workload and whether or not they can feed those cores. I'm pretty sure that a power-limited 3090 in 720p gaming will perform worse than a 3070.

Would you, though? Let's see (using either MSRP or real-world prices, based on what's reasonable or sensible, using PCPartpicker), comparing to a base M1 Ultra (20c/48CU/64GB/1TB) Mac Studio:
Threadripper 3970X: MSRP $1999 (PCPartpicker says $3100 real-world)
Suitable motherboard: Cheapest is ASRock TRX40 Creator at $516
RAM: 4x16GB of DDR4-3600 is ~$260 at the minimum
Storage: ~$150 for 1TB of high speed PCIe 4.0 NVMe (WD SN850 or Samsung 980 Pro)
Cooler: Noctua NH-U14S TR4-SP3 $90
GPU: RTX 3070 or RX 6700 XT - $600-800 for either (subtract ~$200-300 for prices in a saner world)
PSU: >650W 80+ Platinum or Titanium with a good design, ~$130
Case: Whatever you want, but ideally a well ventilated one, likely >$100
Windows licence: $100+

That leaves us with a total price of ~$3950-$4200 depending mostly on GPU choice (or ~$3750-3900 if GPU prices were less stupid), with performance very much in the same ballpark, power consumption easily 3x higher under heavy loads, a case many times the size, but far more upgradeability and expandability.
I'm sure the 3970X is over twice as fast as the M1 Ultra. Going by CBR23 results, it scores 47k+. So what do you mean, same ballpark?
 