Friday, April 12th 2024
Apple Preparing M4 Chips with AI Capabilities to Fight Declining Mac Sales
While seemingly every tech company has rushed to ship an AI-enhanced product recently, one giant appeared unbothered: Apple. However, according to Bloomberg's Mark Gurman, Apple is readying an overhaul of its Apple Silicon M-series chips to embed AI processing capabilities at the processor level. The report indicates that Apple is preparing an update for late 2024 and early 2025 with the M4 series of chips, which will reportedly feature AI processing units similar to those found in other commercial chips. The M4 series should come in three tiers: the entry-level M4, codenamed Donan; the mid-level chip, codenamed Brava; and the high-end chip, codenamed Hydra.
Sales of Apple Macs peaked in 2022; the following year saw a sharp decline, and sales have remained flat since. AI PCs for Windows-based systems have been generating hype, with all major vendors hoping to introduce AI features to end users. Apple wants to be part of that revolution, and the company has already scheduled its Worldwide Developers Conference (WWDC) for June 10th. At this year's WWDC, Apple is expected to show a suite of AI-powered features aimed at improving user experience and productivity. With the M4 chips gaining AI enhancements, those WWDC announcements would get extra hardware acceleration. However, we must wait for the exact announcements before making further assumptions.
Source:
Bloomberg
53 Comments on Apple Preparing M4 Chips with AI Capabilities to Fight Declining Mac Sales
From what I know it's just calculations, just the stuff Google has been doing for years already, plus simulation programs.
What the heck is different about the hardware... what does it even mean? Is it like an ASIC situation?
I know Apple still has the edge in photo editing and video editing, but the switch to Apple silicon means that their performance advantage (for the eye-watering price tags) is limited to an even narrower range of applications than the already restrictive walled garden.
Three of the most vocal Apple fans here defend MacBooks because they're beautiful hardware they can use to access a bash prompt, and I can't argue with that - in fact I strongly agree with them.
Compare a Mac Pro to a Dell Precision that has upgradeability and repairability as well, which is way more important for longevity in actual businesses...
Or the XPS for those who want something slick in a more Mac-like form factor, where the price difference is intense!
Apple's had a stellar last quarter & their popularity among the sheep's never been better :laugh:
For me the 7,1 Mac Pro was the final "FU" moment.
A $1,200 machine with a $4,500 chassis (not including the $400 non-locking wheels). Every single subsystem was obsolete the day it was released. Two years later, Apple cut sling load on every Intel Mac user.
Still rocking a Mini 2009 as a backup machine and showing my young kids my Macintosh SE HD (2.5 MB RAM, 40 MB SCSI HD) which still works!!! and some of the best games ever - Tetris and Milia Miglia.
NVIDIA has structured sparse matrix support (if half the values in every small block are 0, the tensor cores execute the multiply a bit faster).
AMD called dot products an AI routine a few years ago.
Intel calls its int8 matrix-multiplication AVX-512 instructions "AI" instructions.
Various ASICs with matrix sizes from 8x8 to 256x256 are also sold as "AI capabilities," and they are more efficient in theory. In practice, though, NVIDIA / AMD / Intel just build on a more advanced node (e.g. 3 nm), which more than wipes out the advantage of larger matrix instructions. Some companies call bfloat16 AI, others use half-float (another 16-bit format), and some support both 16-bit formats. Some companies (e.g. Intel) like to call 8-bit integer matrix multiplications AI.
So there's no standardization at all here.
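All of that vendor terminology boils down to a handful of low-precision arithmetic tricks. As a minimal sketch, assuming NumPy is installed and using purely illustrative values (this is not vendor code, just an emulation of the formats being discussed), the snippet below shows an int8 dot product with an int32 accumulator, a bfloat16-style truncation compared with an FP16 round trip, and a check for a 2:4 structured-sparsity pattern.

```python
import numpy as np

# int8 dot product accumulated into int32: the basic operation behind the
# "AI" dot-product / matrix-multiply instructions mentioned above.
a = np.array([12, -7, 33, 90], dtype=np.int8)
b = np.array([5, 19, -4, 100], dtype=np.int8)
dot = np.dot(a.astype(np.int32), b.astype(np.int32))
print("int8 dot product (int32 accumulator):", dot)

# bfloat16 keeps float32's 8-bit exponent but only 7 mantissa bits; a quick
# (truncating, not correctly rounded) way to emulate it is to zero the low
# 16 bits of the float32 encoding.
x = np.array([3.14159265, 0.001234, 12345.678], dtype=np.float32)
bf16 = (x.view(np.uint32) & 0xFFFF0000).view(np.float32)
print("float32:            ", x)
print("bfloat16-truncated: ", bf16)

# half-float (IEEE FP16) instead trades exponent range for mantissa precision.
fp16 = x.astype(np.float16).astype(np.float32)
print("fp16 round-trip:    ", fp16)

# 2:4 structured sparsity: at most two non-zero values in every group of four,
# which is what lets sparse matrix units skip half of the multiplications.
w = np.array([0.9, 0.0, -0.4, 0.0, 0.0, 1.2, 0.0, -0.7], dtype=np.float32)
groups = w.reshape(-1, 4)
print("2:4 pattern per group:", np.count_nonzero(groups, axis=1) <= 2)
```

The point of the sketch is that these "AI capabilities" are just narrow numeric formats and dot-product accumulation; which ones a vendor brands as AI varies, which is exactly the lack of standardization the comment describes.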