This demand for machine learning hardware should not come as a surprise to anyone who follows semiconductor technology.
Nvidia has enjoyed a flourishing, fast-growing AI business for several years. The enterprise-focused sections of Nvidia's website describe how various companies and industries use machine learning, such as grocery chains doing inventory analysis.
Also, Nvidia's datacenter business eclipsed their gaming business a while ago (a year ago? maybe more?).
From a consumer angle, Nvidia's most prominent machine learning feature so far has been DLSS, but other applications have been in use for a while; I've used both Nvidia Broadcast and Nvidia Canvas for a couple of years.
Apple started shipping dedicated machine learning cores (their "Neural Engine") with the A11 SoC back in 2017, and opened up Neural Engine access to third-party developers the following year with the A12 SoC.
Without a doubt Nvidia will continue servicing the graphics industry rather than becoming an AI pure play.
Perhaps the most interesting thing about Jensen's SIGGRAPH keynote was the heavy emphasis on the Nvidia Omniverse platform as a cloud technology. You can do basic prototyping on a workstation equipped with AI hardware but offload the analysis and other heavy lifting to datacenter-hosted cloud systems.
That means Nvidia isn't planning on selling every GPU chip it makes. They rent out GPUs for AI, just like Amazon AWS has been renting out excess computing resources for over twenty years.
In the long run, a big corporation will find it more cost effective to buy their own GPU hardware and integrate it into their data centers. But for smaller companies, startups, and newcomers to machine learning, renting GPU cycles from Nvidia is a possibility, just like using Amazon EC2 instances for small computing projects.
CAGR for the machine learning market blows the doors off the consumer graphics business, which plods along. Nothing new there; it has been in Nvidia's quarterly earnings statements and the slide decks they post on their site. Nvidia is focusing on the market with the most upside over the next ten years. They would be stupid to prioritize churning out graphics cards for the DIY consumer PC market when margins are so much higher elsewhere. Nvidia is a publicly owned company, and their number one priority is increasing shareholder value.