| System Name | daily driver Mac mini M2 Pro |
| --- | --- |
| Processor | Apple proprietary M2 Pro (6 P-cores, 4 E-cores) |
| Motherboard | Apple proprietary |
| Cooling | Apple proprietary |
| Memory | Apple proprietary 16 GB LPDDR5 unified memory |
| Video Card(s) | Apple proprietary M2 Pro (16-core GPU) |
| Storage | Apple proprietary onboard 512 GB SSD + various external HDDs |
| Display(s) | LG UltraFine 27UL850W (4K@60Hz IPS) |
| Case | Apple proprietary |
| Audio Device(s) | Apple proprietary |
| Power Supply | Apple proprietary |
| Mouse | Apple Magic Trackpad 2 |
| Keyboard | Keychron K1 tenkeyless (Gateron Reds) |
| VR HMD | Oculus Rift S (hosted on a different PC) |
| Software | macOS Sonoma 14.7 |
| Benchmark Scores | (My Windows daily driver is a Beelink Mini S12 Pro. I'm not interested in benchmarking.) |
The operating system will choose for you sooner or later, and it will likely do so with 95%+ accuracy by 2026, just like today's task schedulers assigning jobs to performance or efficiency CPU cores.

Alternate poll idea: since both your GPU and your CPU will have AI capabilities, which one do you plan on using?
My guess is that the OS will run AI functions on whichever subsystem has more free RAM available to the model. The OS may also weight a given type of silicon when its performance-per-watt is well characterized, and perhaps factor in memory bandwidth. Given AMD's relative newness to dedicated AI/ML silicon, I expect modern OSes to assign AI/ML tasks to Nvidia discrete GPUs by default; a lot of today's AI code is already optimized for CUDA.
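If I had to guess at the shape of that heuristic, it would look something like this toy Python. Every name, field, and weight below is invented for illustration; this is not any real operating-system API:

```python
# Toy sketch of an OS-style placement heuristic for AI/ML jobs.
# All names, fields, and weights are made up for illustration.
from dataclasses import dataclass

@dataclass
class Accelerator:
    name: str
    free_memory_gb: float    # memory currently available to hold the model
    bandwidth_gbps: float    # peak memory bandwidth
    perf_per_watt: float     # estimated TOPS/W; 0.0 if not well characterized
    supports_cuda: bool = False

def place_job(devices: list[Accelerator], model_size_gb: float) -> Accelerator | None:
    """First require that the model fits in free memory; then rank by
    bandwidth and efficiency, with a CUDA bonus reflecting today's
    software reality."""
    fits = [d for d in devices if d.free_memory_gb >= model_size_gb]
    if not fits:
        return None  # nothing can hold the model; would need offloading
    return max(fits, key=lambda d: d.bandwidth_gbps
                                   + 50.0 * d.perf_per_watt
                                   + (100.0 if d.supports_cuda else 0.0))

devices = [
    Accelerator("npu",  free_memory_gb=8.0,  bandwidth_gbps=120.0, perf_per_watt=5.0),
    Accelerator("dgpu", free_memory_gb=12.0, bandwidth_gbps=500.0, perf_per_watt=1.5,
                supports_cuda=True),
]
print(place_job(devices, model_size_gb=7.0).name)  # -> dgpu (fits, fast, CUDA)
```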
We already know how this will play out on smartphones: premium models (Samsung Galaxy, iPhones, etc.) will get SoCs with AI/ML cores first, followed by trickle-down deployment to mid-range and entry-level SoCs a few years later. One should expect the same from the tablet market.
For mobile PCs, I expect AI-enabled SoCs to show up on entry-level "business" ultrabooks first. High-end gaming models with discrete GPUs may end up being the last to receive AI silicon, but only the AMD Radeon-equipped notebooks; there aren't many of those today, since Nvidia has pretty much cornered the notebook discrete-GPU market.
Apple has it easy: it simply excluded Intel-based Macs from Apple Intelligence, and every Apple Silicon system features UMA (Unified Memory Architecture). This includes the iPhones, iPads, and other Apple devices that gain Apple Intelligence functionality later on. That means a single architectural approach, which makes their engineers' jobs way easier: they never have to decide whether to assign AI/ML jobs to a discrete GPU (and its separate GPU memory), because there isn't one.
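A rough sketch of the contrast, in the same invented style as above (definitely not Apple's actual code): with unified memory there is one pool shared by CPU, GPU, and NPU, so "does the model fit?" is asked exactly once, while a discrete-GPU box forces a genuine placement decision.

```python
# Why UMA collapses the placement question. On a discrete-GPU machine
# the scheduler juggles two memory pools plus PCIe transfer costs.
def fits_on_uma(unified_free_gb: float, model_size_gb: float) -> bool:
    return unified_free_gb >= model_size_gb  # one check, any compute block

def fits_on_discrete(system_free_gb: float, vram_free_gb: float,
                     model_size_gb: float) -> dict[str, bool]:
    return {
        "cpu/system RAM": system_free_gb >= model_size_gb,
        "dgpu/VRAM": vram_free_gb >= model_size_gb,
    }

print(fits_on_uma(16.0, 7.0))             # True
print(fits_on_discrete(32.0, 8.0, 12.0))  # {'cpu/system RAM': True, 'dgpu/VRAM': False}
```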
We know smartphone manufacturers won't give consumers a choice, at least in the premium models. They will all have AI/ML cores whether Ned Nerdwad wants them or not. Joe Consumer won't even know the difference unless some AI-assisted feature is unavailable to him because his SoC is excluded from the supported systems.

Not true. With enough public pushback, companies will not include it if people will not buy it. Some are unwilling to pay extra; others will not buy anything that includes it.
Remember that smartphones with AI/ML cores have been around for years.
The discussion will end up restricted to a small subset of AMD Radeon discrete-GPU users on the PC. And that discussion will only last about a year or so before the smart ones realize that running an old non-AI GPU delivers worse performance-per-watt than a modern GPU (with AI silicon) running with its AI features turned off.
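With made-up numbers, the arithmetic looks like this (real figures vary wildly by workload and card):

```python
# Toy perf-per-watt comparison; both cards' numbers are hypothetical.
old_gpu = {"fps": 60, "watts": 250}   # older card, no AI silicon
new_gpu = {"fps": 90, "watts": 220}   # newer card, AI features switched off

for label, gpu in (("old non-AI GPU", old_gpu), ("modern GPU, AI off", new_gpu)):
    print(f"{label}: {gpu['fps'] / gpu['watts']:.3f} FPS/W")
# old non-AI GPU: 0.240 FPS/W
# modern GPU, AI off: 0.409 FPS/W
```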
It's not like Radeon GPUs are particularly energy efficient to begin with anyhow.