Monday, October 18th 2021

Apple Introduces M1 Pro and M1 Max: the Most Powerful Chips Apple Has Ever Built

Apple today announced M1 Pro and M1 Max, the next breakthrough chips for the Mac. Scaling up M1's transformational architecture, M1 Pro offers amazing performance with industry-leading power efficiency, while M1 Max takes these capabilities to new heights. The CPU in M1 Pro and M1 Max delivers up to 70 percent faster CPU performance than M1, so tasks like compiling projects in Xcode are faster than ever. The GPU in M1 Pro is up to 2x faster than M1, while M1 Max is up to an astonishing 4x faster than M1, allowing pro users to fly through the most demanding graphics workflows.

M1 Pro and M1 Max introduce a system-on-a-chip (SoC) architecture to pro systems for the first time. The chips feature fast unified memory, industry-leading performance per watt, and incredible power efficiency, along with increased memory bandwidth and capacity. M1 Pro offers up to 200 GB/s of memory bandwidth with support for up to 32 GB of unified memory. M1 Max delivers up to 400 GB/s of memory bandwidth—2x that of M1 Pro and nearly 6x that of M1—and support for up to 64 GB of unified memory. And while the latest PC laptops top out at 16 GB of graphics memory, this huge amount of memory enables graphics-intensive workflows previously unimaginable on a notebook. The efficient architecture of M1 Pro and M1 Max means they deliver the same level of performance whether MacBook Pro is plugged in or using the battery. M1 Pro and M1 Max also feature enhanced media engines with dedicated ProRes accelerators specifically for pro video processing. M1 Pro and M1 Max are by far the most powerful chips Apple has ever built.
"M1 has transformed our most popular systems with incredible performance, custom technologies, and industry-leading power efficiency. No one has ever applied a system-on-a-chip design to a pro system until today with M1 Pro and M1 Max," said Johny Srouji, Apple's senior vice president of Hardware Technologies. "With massive gains in CPU and GPU performance, up to six times the memory bandwidth, a new media engine with ProRes accelerators, and other advanced technologies, M1 Pro and M1 Max take Apple silicon even further, and are unlike anything else in a pro notebook."

M1 Pro: A Whole New Level of Performance and Capability
Utilizing the industry-leading 5-nanometer process technology, M1 Pro packs in 33.7 billion transistors, more than 2x the amount in M1. A new 10-core CPU, including eight high-performance cores and two high-efficiency cores, is up to 70 percent faster than M1, resulting in unbelievable pro CPU performance. Compared with the latest 8-core PC laptop chip, M1 Pro delivers up to 1.7x more CPU performance at the same power level and achieves the PC chip's peak performance using up to 70 percent less power. Even the most demanding tasks, like high-resolution photo editing, are handled with ease by M1 Pro.
M1 Pro has an up-to-16-core GPU that is up to 2x faster than M1 and up to 7x faster than the integrated graphics on the latest 8-core PC laptop chip. Compared to a powerful discrete GPU for PC notebooks, M1 Pro delivers more performance while using up to 70 percent less power. And M1 Pro can be configured with up to 32 GB of fast unified memory, with up to 200 GB/s of memory bandwidth, enabling creatives like 3D artists and game developers to do more on the go than ever before.
M1 Max: The World's Most Powerful Chip for a Pro Notebook
M1 Max features the same powerful 10-core CPU as M1 Pro and adds a massive 32-core GPU for up to 4x faster graphics performance than M1. With 57 billion transistors—70 percent more than M1 Pro and 3.5x more than M1—M1 Max is the largest chip Apple has ever built. In addition, the GPU delivers performance comparable to a high-end GPU in a compact pro PC laptop while consuming up to 40 percent less power, and performance similar to that of the highest-end GPU in the largest PC laptops while using up to 100 watts less power. This means less heat is generated, fans run quietly and less often, and battery life is amazing in the new MacBook Pro. M1 Max transforms graphics-intensive workflows, including up to 13x faster complex timeline rendering in Final Cut Pro compared to the previous-generation 13-inch MacBook Pro.
M1 Max also offers a higher-bandwidth on-chip fabric, and doubles the memory interface compared with M1 Pro for up to 400 GB/s, or nearly 6x the memory bandwidth of M1. This allows M1 Max to be configured with up to 64 GB of fast unified memory. With its unparalleled performance, M1 Max is the most powerful chip ever built for a pro notebook.

Fast, Efficient Media Engine, Now with ProRes
M1 Pro and M1 Max include an Apple-designed media engine that accelerates video processing while maximizing battery life. M1 Pro also includes dedicated acceleration for the ProRes professional video codec, allowing playback of multiple streams of high-quality 4K and 8K ProRes video while using very little power. M1 Max goes even further, delivering up to 2x faster video encoding than M1 Pro, and features two ProRes accelerators. With M1 Max, the new MacBook Pro can transcode ProRes video in Compressor up to a remarkable 10x faster compared with the previous-generation 16-inch MacBook Pro.
Advanced Technologies for a Complete Pro System
Both M1 Pro and M1 Max are loaded with advanced custom technologies that help push pro workflows to the next level:
  • A 16-core Neural Engine for on-device machine learning acceleration and improved camera performance.
  • A new display engine drives multiple external displays.
  • Additional integrated Thunderbolt 4 controllers provide even more I/O bandwidth.
  • Apple's custom image signal processor, along with the Neural Engine, uses computational video to enhance image quality for sharper video and more natural-looking skin tones on the built-in camera.
  • Best-in-class security, including Apple's latest Secure Enclave, hardware-verified secure boot, and runtime anti-exploitation technologies.
A Huge Step in the Transition to Apple Silicon
The Mac is now one year into its two-year transition to Apple silicon, and M1 Pro and M1 Max represent another huge step forward. These are the most powerful and capable chips Apple has ever created, and together with M1, they form a family of chips that lead the industry in performance, custom technologies, and power efficiency.
macOS and Apps Unleash the Capabilities of M1 Pro and M1 Max
macOS Monterey is engineered to unleash the power of M1 Pro and M1 Max, delivering breakthrough performance, phenomenal pro capabilities, and incredible battery life. By designing Monterey for Apple silicon, the Mac wakes instantly from sleep, and the entire system is fast and incredibly responsive. Developer technologies like Metal let apps take full advantage of the new chips, and optimizations in Core ML utilize the powerful Neural Engine so machine learning models can run even faster. Pro app workload data is used to help optimize how macOS assigns multi-threaded tasks to the CPU cores for maximum performance, and advanced power management features intelligently allocate tasks between the performance and efficiency cores for both incredible speed and battery life.

The combination of macOS with M1, M1 Pro, or M1 Max also delivers industry-leading security protections, including hardware-verified secure boot, runtime anti-exploitation technologies, and fast, in-line encryption for files. All of Apple's Mac apps are optimized for—and run natively on—Apple silicon, and there are over 10,000 Universal apps and plug-ins available. Existing Mac apps that have not yet been updated to Universal will run seamlessly with Apple's Rosetta 2 technology, and users can also run iPhone and iPad apps directly on the Mac, opening a huge new universe of possibilities.
Apple's Commitment to the Environment
Today, Apple is carbon neutral for global corporate operations, and by 2030, plans to have net-zero climate impact across the entire business, which includes manufacturing supply chains and all product life cycles. This also means that every chip Apple creates, from design to manufacturing, will be 100 percent carbon neutral.

156 Comments on Apple Introduces M1 Pro and M1 Max: the Most Powerful Chips Apple Has Ever Built

#126
Vya Domus
DrediIt definitely is. Try measuring IPC on your PC without a motherboard, or main memory. I’m not holding my breath.
What a terribly unintelligent response. Was that supposed to be a gotcha moment or something?

A processor executes instructions, not the motherboard, not the memory. IPC is a CPU metric.
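For what it's worth, this is also how the number is obtained in practice: you read the CPU's own hardware counters for retired instructions and core cycles around a workload, then divide. A minimal Linux sketch using perf_event_open (Linux-specific; the loop is just a placeholder workload, and error handling is omitted):

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

/* Open one hardware counter; the first one opened becomes the group leader. */
static int open_counter(uint64_t config, int group_fd)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof attr);
    attr.type = PERF_TYPE_HARDWARE;
    attr.size = sizeof attr;
    attr.config = config;
    attr.disabled = (group_fd == -1);  /* leader starts disabled, members follow it */
    attr.exclude_kernel = 1;
    attr.exclude_hv = 1;
    return (int)syscall(__NR_perf_event_open, &attr, 0, -1, group_fd, 0);
}

int main(void)
{
    int cycles = open_counter(PERF_COUNT_HW_CPU_CYCLES, -1);
    int instrs = open_counter(PERF_COUNT_HW_INSTRUCTIONS, cycles);

    ioctl(cycles, PERF_EVENT_IOC_RESET, PERF_IOC_FLAG_GROUP);
    ioctl(cycles, PERF_EVENT_IOC_ENABLE, PERF_IOC_FLAG_GROUP);

    /* Placeholder workload: a dependent chain of multiplies. */
    volatile double x = 1.0;
    for (long i = 0; i < 100000000L; i++)
        x *= 1.0000001;

    ioctl(cycles, PERF_EVENT_IOC_DISABLE, PERF_IOC_FLAG_GROUP);

    uint64_t c = 0, n = 0;
    read(cycles, &c, sizeof c);
    read(instrs, &n, sizeof n);
    printf("instructions=%llu cycles=%llu IPC=%.2f\n",
           (unsigned long long)n, (unsigned long long)c,
           c ? (double)n / (double)c : 0.0);
    return 0;
}

Swap in a different workload and the printed figure changes.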
Posted on Reply
#127
Turmania
Love 'em or hate 'em, you have to respect Apple and what they are doing.
Posted on Reply
#128
Dredi
Vya DomusWhat a terribly unintelligent response. Was that supposed to be a gotcha moment or something?

A processor executes instructions, not the motherboard, not the memory. IPC is a CPU metric.
But the memory speed specifically has a high impact on how quickly the CPU manages to execute instructions, for some software.

If you measure the IPC of some software with DDR5 and separately with DDR4 memory, which measurement is ”the IPC”, if the processor stays the same?

And please, do explain how you measure the generic IPC you keep touting.
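To make that concrete, here is a sketch of the kind of experiment this implies (buffer size and trip counts are made-up illustrative values): build with cc -O2, then run each mode under perf stat -e instructions,cycles. Same CPU, two programs, and the measured IPC differs by an order of magnitude:

/* Build: cc -O2 -o ipcdemo ipcdemo.c
 * Run:   perf stat -e instructions,cycles ./ipcdemo       (compute-bound)
 *        perf stat -e instructions,cycles ./ipcdemo mem   (memory-bound) */
#include <stdio.h>
#include <stdlib.h>

#define N (64UL * 1024 * 1024)   /* 64M pointers = 512 MB, far beyond any L3 */

int main(int argc, char **argv)
{
    if (argc == 1) {
        /* Dependent chain of cheap register ops: no memory traffic,
         * IPC close to what the core itself can sustain. */
        unsigned long acc = 1;
        for (unsigned long i = 0; i < 2000000000UL; i++)
            acc = acc * 3 + 1;
        printf("%lu\n", acc);
    } else {
        /* Pointer chase through one big random cycle (Sattolo's algorithm),
         * so nearly every load misses cache and the prefetcher can't help:
         * IPC collapses on the very same CPU. */
        size_t *next = malloc(N * sizeof *next);
        if (!next) return 1;
        for (size_t i = 0; i < N; i++) next[i] = i;
        for (size_t i = N - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i;
            size_t t = next[i]; next[i] = next[j]; next[j] = t;
        }
        size_t p = 0;
        for (unsigned long i = 0; i < 200000000UL; i++) p = next[p];
        printf("%zu\n", p);
        free(next);
    }
    return 0;
}

The first loop never leaves the core's registers; the second defeats the caches and prefetcher on purpose, so nearly every iteration stalls on DRAM.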
Posted on Reply
#129
defaultluser
I applaud Apple for pushing the boundaries, but their target audience for these machines is tiny - there will be no Linux or Windows support on these things!

The uptick in sales from the first-generation ARM devices will die off pretty quickly, and Apple will become yet another Sun.
Posted on Reply
#130
R0H1T
defaultluserAnd Apple will become yet another Sun.
With 1000-10000x the lifetime profits ~ yeah, not happening :twitch:
Posted on Reply
#131
Vya Domus
DrediBut the memory speed specifically has a high impact on how quickly the CPU manages to execute instructions, for some software.
Then you're just measuring performance for that specific piece of software; the CPU is still a constant that always behaves the same given the same input. If, however, you still decide to divide the number of instructions executed by the number of clock cycles elapsed while running that program and call that figure "the IPC", fine, but it's completely useless because it will vary for literally every single program for a million reasons.

Assume you run a program and you're trying to measure "the IPC"; at some point that piece of software has to communicate over the network, which obviously takes some time. At the end you draw the line and you say: well, this CPU can do X instructions per clock. Do you see nothing stupid about that? What did you actually measure? The IPC of that specific CPU, or system, or whatever? Really?

Ultimately, if you want to measure IPC as precisely as possible you have to isolate the CPU from everything else, including memory speed. I don't know why that isn't obvious.
DrediIf you measure the IPC of some software with DDR5 and separately with DDR4 memory, which measurement is ”the IPC”, if the processor stays the same?
You tell me; according to you, the IPC of a processor is an ever-changing value that depends on everything, and all the values you get are somehow valid.
Posted on Reply
#132
Dredi
Vya DomusThen you're just measuring performance for that specific piece of software; the CPU is still a constant that always behaves the same given the same input. If, however, you still decide to divide the number of instructions executed by the number of clock cycles elapsed while running that program and call that figure "the IPC", fine, but it's completely useless because it will vary for literally every single program for a million reasons.

Assume you run a program and you're trying to measure "the IPC"; at some point that piece of software has to communicate over the network, which obviously takes some time. At the end you draw the line and you say: well, this CPU can do X instructions per clock. Do you see nothing stupid about that? What did you actually measure? The IPC of that specific CPU, or system, or whatever? Really?

Ultimately, if you want to measure IPC as precisely as possible you have to isolate the CPU from everything else, including memory speed. I don't know why that isn't obvious.

Vya DomusYou tell me; according to you, the IPC of a processor is an ever-changing value that depends on everything, and all the values you get are somehow valid.
Just describe to me how IPC should be measured, as you would do it.

My method is both correct and produces results that can be used for something.

IPC that isn't affected by memory speed requires it to be measured running software that does not utilize the memory controller. How relevant are such performance metrics?

And btw, I have for the entirety of this discussion said that IPC is application-specific. Your post makes no sense.

Next you'll probably state that IPC is only for single-threaded software. :)
Posted on Reply
#133
defaultluser
R0H1TWith 1000-10000x the lifetime profits ~ yeah, not happening :twitch:
Profits didn't matter for IBM, either - they still sailed their ship straight into insignificance.

I'm just saying that Sun is the closest thing to Apple's corporate direction (a lot more profitable, but every corp finds its own unique ways to become irrelevant).
Posted on Reply
#134
TheoneandonlyMrK
DrediBut the memory speed specifically has a high impact on how quickly the CPU manages to execute instructions, for some software.

If you measure the IPC of some software with DDR5 and separately with DDR4 memory, which measurement is ”the IPC”, if the processor stays the same?

And please, do explain how you measure the generic IPC you keep touting.
You're on about application-specific performance of a system; that is not IPC.
How a system leverages its intrinsic ability (IPC) is system-level performance.

IPC, instructions per clock, is a hypothetical unbending max of the hardware...

It's not a constantly varying parameter depending on the surrounding system or the software running.

SiSoft Sandra.
CPU-Z.
Etc.

How relevant is your version of IPC when everyone buys a different RAM, SSD, and motherboard, then runs different games and apps?!
Posted on Reply
#135
Dredi
TheoneandonlyMrKIPC, instructions per clock, is a hypothetical unbending max of the hardware...
No, it is not. You are mixing up IPC with theoretical throughput (usually given as FLOPS/IOPS). What is the point of your idea of IPC? What use is it to anyone? Can you link some text where it is used in that manner?
TheoneandonlyMrKHow relevant is your version of IPC when everyone buys a different RAM, SSD, and motherboard, then runs different games and apps?!
It is not relevant. IPC is a metric one can use in the quest of understanding architectural differences, nothing more. It has little to do with actual real-life performance.
Posted on Reply
#136
Vya Domus
DrediJust describe to me how IPC should be measured, as you would do it.
I already said how: you try to remove as many variables as possible that don't have to do with the CPU. Otherwise you are not measuring IPC, you are measuring something else - memory performance or I/O or whatever else.

Say you had two CPUs, one a lot faster than the other but paired with memory that's a lot slower, and you tried to calculate IPC your way by running a memory-intensive program. Say that, for argument's sake, you find out both give you extremely similar figures. Then, according to you, it would be totally valid to claim that both CPUs have the same IPC, which would be obviously wrong and extremely idiotic.
Posted on Reply
#137
Darmok N Jalad
ValantarI was responding to you stating that you don't think anyone buys $3000+ laptops for "office work", and your arguments against Apple knowing their audience. It's pretty clear that they do (in part because they've been pissing off their core creative audience for years, and are now finally delivering something they would want).
What often gets lost when looking at the pricing of Apple products is that a strict comparison of performance vs. PC internals doesn't consider the rest of the machine. In the case of the MBP, they also include very, very good displays (high-refresh, mini-LED, ultra-wide gamut, and very color accurate) and really good input components. Once you start shopping PC equivalents with these factors (display especially), not only does the cost difference start to dwindle, but so does the list of options. For example, I tried an XPS 17 9700, and the display and keyboard were really good, but the trackpad was total crap. I mean really bad, mushy, ruined the entire experience. To your point specifically, input quality is a major expectation of Apple customers, and was why the butterfly-keyboard models were widely hated by Apple customers. Add the lack of ports, the dreaded Touch Bar, and the trash can Mac Pro to the list of big misses. What Apple announced yesterday has got to be making their customers very happy. While they don't really just come out and admit that they were missing the mark for years, they are at least back to delivering on customer expectations again. That era of portless, overly thin, "this is what you should want" product launches just might be over.

In the end, Apple doesn't care if everyone buys into their benchmarks, as long as people buy their products. When I look at the MBPs as the sum of their parts, I see them being incredibly successful and way better than what they replaced. Apple has never wanted to own all the market share in the desktop space, but rather to continue to sell in their existing high-end market.
defaultluserI applaud Apple for pushing the boundaries, but their target audience for these machines is tiny - there will be no Linux or Windows support on these things!

The uptick in sales from the first-generation ARM devices will die off pretty quickly, and Apple will become yet another Sun.
When M1 came out, work on porting Linux began. They have already got it booting to the GUI, though GPU acceleration is not there yet and is a bigger hurdle. Apple said running Windows on M1 is in MS's court, but MS has shown no interest in licensing their ARM version of Windows, especially now that you can just license a hosted version of Windows. Parallels works on M1 Macs.
Posted on Reply
#138
Dredi
Vya DomusSay you had two CPUs, one a lot faster than the other but paired with memory that's a lot slower, and you tried to calculate IPC your way by running a memory-intensive program. Say that, for argument's sake, you find out both give you extremely similar figures. Then, according to you, it would be totally valid to claim that both CPUs have the same IPC, which would be obviously wrong and extremely idiotic.
But it would be the same! Specifically for that application!
Vya DomusI already said how: you try to remove as many variables as possible that don't have to do with the CPU.
Can you link to some examples where your ideas are applied, preferably for PC/Mac processors?
Posted on Reply
#139
dragontamer5788
IPC includes memory performance.

Heck, you can almost always tell when memory performance is the limiter: your IPC figures are 0.1 or so (yes, one instruction every 10 clock ticks). Actually, that's something like L3 cache; main memory is roughly one instruction every 200 clock ticks or something, because the CPU is way, way faster than RAM these days.

In a minority of cases, people can write highly optimized code and have very good data organization (so it takes a bit of luck: not all problems can be optimized this way) to be CPU-limited instead. In these cases, you see IPC reach 2, 3, or 4.
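A back-of-the-envelope model reproduces those figures. Assume a core that sustains 4 IPC when everything hits in cache, and charge an assumed ~200-cycle penalty per DRAM miss; both are illustrative round numbers, not measurements:

/* Toy CPI model: effective IPC = 1 / (base CPI + misses-per-instruction * miss penalty).
 * All constants are assumed round numbers for illustration. */
#include <stdio.h>

int main(void)
{
    const double base_cpi = 0.25;   /* a 4-wide core sustaining 4 IPC on cache hits */
    const double penalty  = 200.0;  /* assumed cycles lost per DRAM miss */
    const double misses_per_instr[] = { 0.0, 0.001, 0.01, 0.05 };

    for (int i = 0; i < 4; i++) {
        double cpi = base_cpi + misses_per_instr[i] * penalty;
        printf("misses/instr %.3f -> effective IPC %.2f\n",
               misses_per_instr[i], 1.0 / cpi);
    }
    return 0;
}

One DRAM miss every 20 instructions is already enough to drag a nominally 4-wide core down to roughly 0.1 IPC, which is the regime described above.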
Posted on Reply
#140
Richards
Vya DomusIt's absolutely important to know the lower bounds because that tells you what the worst-case scenario is.

Huh? You always aim to get the most out of a processor given whatever the time constraints are; I don't know what you are talking about.

That's such a bizarre thing to say. OK, you find out that a CPU can achieve X IPC in a certain application. What can you do with that information? Absolutely nothing; people measure IPC to generalize what the performance characteristics are. If you are only interested in an application, as you say, then IPC measurements are pointless: you're actually just interested in the performance of that application.

I guess? It's still stupid, and that doesn't tell you anything about how relevant the hardware is if you argue that people who didn't actually need one would buy it anyway.

I have no idea, but I fail to see why it contradicts anything that I said. I just said that more L3 cache doesn't always translate to much-improved performance. If your code and data mostly reside in L1 cache, then messing around with the L3 cache won't do anything. Obviously real-world workloads are a mixture of stuff that benefits more or less from different levels of cache, or from none of them at all.

I read many of his articles, and while they're very good I can't help but notice he has a particular affinity for everything Apple does.

A wide core with huge caches and, most importantly, a very conservative clock speed. That's why I am not impressed; trust me, if their chips ran at much higher clocks, comparable to Intel's and AMD's, while retaining the same characteristics, then I'd be really impressed. But I know that's not possible; ultimately, all they did is a simple clock-for-area/transistor-budget tradeoff, because that way efficiency increases more than linearly. I just cannot give them much credit when they outperform Intel and AMD in some metrics while using, who knows, maybe several times more transistors per core?
Glad you noticed that Andrei from AnandTech is a big Apple fan... plus AnandTech's founder works for Apple.
Posted on Reply
#141
defaultluser
Darmok N JaladWhen M1 came out, work on porting Linux began. They have already got it booting to the GUI, though GPU acceleration is not there yet and is a bigger hurdle. Apple said running Windows on M1 is in MS's court, but MS has shown no interest in licensing their ARM version of Windows, especially now that you can just license a hosted version of Windows. Parallels works on M1 Macs.
I'm sorry man - forgot that several million people ran Linux GPU-less on their PS3.

Oh wait, no they didn't :rolleyes:

If you have no GPU support, then you have no public interest in the project (why would I spend gargantuan amounts of money on one of these things and shove Linux on it when you can't use the proprietary GPU or tensor system for anything?).

The PS3 worked better than Mac ARM Linux (because it had an official port), but at the end of the day nobody actually used it for home use! Mac ARM Linux, by comparison, is going to remain a hack job for years (therefore, DOA!).
Posted on Reply
#142
windwhirl
defaultluserI applaud Apple for pushing the boundaries, but their target audience for these machines is tiny - there will be no Linux or Windows support on these things!

The uptick in sales from the first-generation ARM devices will die off pretty quickly, and Apple will become yet another Sun.
Apple's market is for the most part separated from the rest of brands. It's not just the device's brand itself, but also the ecosystem around it.

Apple will likely remain relevant for years if not decades to come.
Posted on Reply
#143
Vya Domus
DrediBut it would be the same!
You know what, you're totally right.
Posted on Reply
#144
Darmok N Jalad
defaultluserI'm sorry man - forgot that several million people ran Linux GPU-less on their PS3.

Oh wait, no they didn't :rolleyes:

If you have no GPU support, then you have no public interest in the project (why would I spend gargantuan amounts of money on one of these things and shove Linux on it when you can't use the proprietary GPU or tensor system for anything?).

The PS3 worked better than Mac ARM Linux (because it had an official port), but at the end of the day nobody actually used it for home use! Mac ARM Linux, by comparison, is going to remain a hack job for years (therefore, DOA!).
You said there was no support. I merely replied that it is being worked on. I wouldn’t expect it to “just work” even after a year. It’s a work in progress. No one is asking you to spend your money on it. I wouldn’t buy a Mac with the intent to run another OS on it as the primary OS.

The parallel with PlayStation is a red herring. Very few would elect to use a game console as a desktop machine beyond the novelty of it, as it's not exactly great hardware for the job in the first place. However, people just might pick a Mac to be a desktop machine to run an alternate OS on, as has been the case for a decade or so.
Posted on Reply
#145
TheoneandonlyMrK
DrediNo, it is not. You are mixing up IPC with theoretical throughput (usually given as FLOPS/IOPS). What is the point of your idea of IPC? What use is it to anyone? Can you link some text where it is used in that manner?

DrediIt is not relevant. IPC is a metric one can use in the quest of understanding architectural differences, nothing more. It has little to do with actual real-life performance.
Who said it was useful to anyone?

It's in the abbreviation:

Instructions per clock.

It's not my fault if you and others are using it wrong, out of context, etc.

You're on about application performance in an application.

I'm on about the actual term IPC.

It's been misused for years.

First note on the wiki:
"In computer architecture, instructions per cycle (IPC), commonly called instructions per clock is one aspect of a processor's performance: the average number of instructions executed for each clock cycle. It is the multiplicative inverse of cycles per instruction."

en.m.wikipedia.org/wiki/Instructions_per_cycle

I'm not at work now, do go on. :)

And as a SYSTEM test engineer, your reasoning is unsound: you use known values and standards to ascertain or benchmark the performance of a system, not any or every application one might use.

You use designed, mathematically provable, defined criteria to classify the spec of a part.
Posted on Reply
#146
Dredi
TheoneandonlyMrKWho said it was useful to anyone?

It's in the abbreviation:

Instructions per clock.

It's not my fault if you and others are using it wrong, out of context, etc.

You're on about application performance in an application.

I'm on about the actual term IPC.

It's been misused for years.

First note on the wiki:
"In computer architecture, instructions per cycle (IPC), commonly called instructions per clock is one aspect of a processor's performance: the average number of instructions executed for each clock cycle. It is the multiplicative inverse of cycles per instruction."

en.m.wikipedia.org/wiki/Instructions_per_cycle

I'm not at work now, do go on. :)

And as a SYSTEM test engineer, your reasoning is unsound: you use known values and standards to ascertain or benchmark the performance of a system, not any or every application one might use.

You use designed, mathematically provable, defined criteria to classify the spec of a part.
Oh, you managed to read the wiki page finally. Too bad you didn’t understand the contents.

As you quoted, IPC is the average instructions per cycle of a given processor (for a given piece of software). It has nothing to do with the maximum instructions per cycle of a given processor, which is what you previously wrote.
TheoneandonlyMrKIPC, instructions per clock, is a hypothetical unbending max of the hardware...
Posted on Reply
#147
TheoneandonlyMrK
DrediOh, you managed to read the wiki page finally. Too bad you didn't understand the contents.

As you quoted, IPC is the average instructions per cycle of a given processor (for a given piece of software). It has nothing to do with the maximum instructions per cycle of a given processor, which is what you previously wrote.
I'd re-read that? That's average IPC.
The technical use of the term has changed over the years, but it is what it is.

I clearly disagree with how you, and everyone else like you, use it.

In the time before FP units it was easier to define IPC; it's a shit show now.
Posted on Reply
#148
dragontamer5788
TheoneandonlyMrKIPC, instructions per clock, is a hypothetical unbending max of the hardware...
That value is 6+ for Intel/AMD, depending on some details of the uop caches I don't quite remember. If the uop cache misses, it drops to 4 (the fastest the decoder can operate from L1). Furthermore, Zen, Zen 2, and Zen 3 all have the same value.

Which is nonsense. We can clearly see that "average IPC" in video game applications goes up from Zen -> Zen 2 -> Zen 3. IPC is... this vague, wishy-washy term that people use to sound technical but is really extremely poorly defined. We want to calculate IPC so that we can factor out GHz between processors and come up with an idea of which processor is faster. But it turns out that reality isn't very kind to us, and these CPUs are horribly, horribly complicated beasts.

No practical computer program sits in the uop cache of Skylake/Zen and reaches 6 IPC. None. Maybe micro-benchmark programs like SuperPi get close, but that's the kind of program you'd need to get anywhere close to the max IPC on today's systems. Very, very few programs are written like SuperPi / HyperPi.
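(Spelling out the arithmetic implied there: execution time = instruction count / (IPC x clock). A chip averaging 2.0 IPC at 3.2 GHz retires 6.4 billion instructions per second; one averaging 1.2 IPC at 5.0 GHz retires 6.0 billion, so the lower-clocked chip wins. The catch, as the post above says, is that the IPC term is not a stable property of the chip.)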
Posted on Reply
#149
TheoneandonlyMrK
dragontamer5788That value is 6+ for Intel/AMD, depending on some details of the uop caches I don't quite remember. If the uop cache misses, it drops to 4 (the fastest the decoder can operate from L1). Furthermore, Zen, Zen 2, and Zen 3 all have the same value.

Which is nonsense. We can clearly see that "average IPC" in video game applications goes up from Zen -> Zen 2 -> Zen 3. IPC is... this vague, wishy-washy term that people use to sound technical but is really extremely poorly defined. We want to calculate IPC so that we can factor out GHz between processors and come up with an idea of which processor is faster. But it turns out that reality isn't very kind to us, and these CPUs are horribly, horribly complicated beasts.

No practical computer program sits in the uop cache of Skylake/Zen and reaches 6 IPC. None. Maybe micro-benchmark programs like SuperPi get close, but that's the kind of program you'd need to get anywhere close to the max IPC on today's systems. Very, very few programs are written like SuperPi / HyperPi.
I agree; IPC should not be a term used in the discussion of modern processors the way people use it, but there are benchmark programs out there, like Pi, that can measure average IPC with a modicum of logic to the end result.
Comparable to another chip on the same ISA only.
It's not something that translates to a useful performance metric of a chip or core anymore.
Posted on Reply
#150
Semel
It's marketing bullshit.

1) You can't compare TFLOPS directly across different architectures. It's nonsensical.
3090: 35.6 TFLOPS; 6900 XT: 20.6 TFLOPS. Almost a 40% difference, yet the real performance difference is 3-10% depending on the game/task.

2) They didn't even show what exactly was tested. Show us some games running with the same settings and the same resolution! Oh wait, there are no games. /sarcasm

3) Forget about AAA games on M1. Metal API + ARM => no proper gaming. The Metal API essentially killed Mac gaming even on the x86 architecture (Boot Camp excluded).
Going the Metal route instead of Vulkan was a huge mistake.

I have no doubt the M1 Max will make a great laptop for video editing and stuff... but if you think about getting it in hopes of running proper non-mobile games on it with good graphics settings, resolution, and performance, then think twice...
Posted on Reply