> That still doesn't explain why huge companies like Google or Intel build their own hardware for AI and ML. Do we, simple forum users, understand reality better than them?

There are reasons. Nvidia's hardware is general-purpose; many of the things Google does (e.g. ads and targeting) are not. Or they may simply want to own the whole solution.
Anyway, you're looking at this the wrong way. The fact that CUDA doesn't command 100% market share is no guarantee that ROCm is just as serviceable. Case in point: this year OpenAI announced they will buy Nvidia hardware en masse. Have they, or any of their competitors, announced anything similar for AMD hardware? Another case in point: open up a cloud admin interface and try to create an OpenCL/ROCm instance. Everybody offers CUDA, but I can't name a single provider that also offers ROCm (I'm sure some exist, I just can't recall who).
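The software side tells the same story, for what it's worth: CUDA is so entrenched that even PyTorch's ROCm builds expose AMD GPUs through the `torch.cuda` namespace. A minimal sketch (assuming a PyTorch install; not tied to any specific cloud provider) showing how you'd tell the two backends apart:

```python
# Minimal sketch: on ROCm builds of PyTorch, torch.version.hip is set and
# torch.version.cuda is None, but the torch.cuda API works on both stacks.
import torch

if torch.cuda.is_available():
    backend = "CUDA" if torch.version.cuda else "ROCm/HIP"
    print(f"GPU backend: {backend}, device: {torch.cuda.get_device_name(0)}")
else:
    print("No supported GPU backend found")
```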