CUDA, like most of Nvidia's tech, is closed and designed to force you onto their hardware. CUDA cores suck compared to AMD's compute units, but NV tries its hardest to kill or stall OpenCL adoption: since NV is the only company that can make CUDA cores, they cripple OpenCL on their hardware to make CUDA look that much better.
CUDA is free to use (though not open-source) and interoperates with existing OpenCL code quite well. Even on its own, it is more fleshed out than OpenCL. Most machine learning libraries with a CUDA backend can still run the same code on an x86 CPU or another GPGPU without major issues.
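The CPU-fallback behavior mentioned above can be sketched in plain Python. This is a hypothetical illustration, not any real library's API: the names `cuda_available`, `matmul`, and `matmul_cpu` are made up, and a real framework would probe the driver instead of returning a constant.

```python
# Hypothetical sketch of how a library with a CUDA backend can still run
# on a CPU: probe for the GPU backend at dispatch time, else fall back.
def cuda_available():
    # Stand-in probe; a real library would query the CUDA driver here.
    return False

def matmul_cpu(a, b):
    # Plain-Python matrix multiply serving as the CPU fallback path.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def matmul(a, b):
    # Dispatch: prefer the GPU kernel, fall back to the CPU implementation.
    if cuda_available():
        raise NotImplementedError("dispatch to the CUDA kernel here")
    return matmul_cpu(a, b)

print(matmul([[1, 0], [0, 1]], [[2, 3], [4, 5]]))  # → [[2, 3], [4, 5]]
```

Frameworks like PyTorch and TensorFlow follow this same pattern: the model code is device-agnostic, and the backend is chosen at runtime.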
AMD doesn't care about the machine learning market as much as NVIDIA does. If it did, it would have continued with GCN (the Vega 56, Vega 64, and Radeon VII are great cards for the enthusiast researcher) rather than stepping backwards with RDNA 1.0 (though they say 2.0 will be more robust for developers).
ROCm (which has its own port of Google's TensorFlow), thankfully still alive, is still primitive compared to NVIDIA's cuDNN. Meanwhile MIOpen (AMD's deep learning math library) works extremely well on NVIDIA's own hardware, which is not surprising since it's built on HIP.
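The reason HIP code moves between vendors so easily is that the HIP API mirrors CUDA's almost one-to-one, so porting is largely a renaming exercise (AMD ships real tools, the `hipify` scripts, that do this over whole codebases). A toy sketch of the idea, with a deliberately tiny mapping table:

```python
# Sketch of CUDA-to-HIP porting: the HIP runtime API mirrors CUDA's,
# so translation is mostly mechanical renaming. This table is a small
# illustrative subset; AMD's hipify tools cover far more of the API.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
}

def hipify(source: str) -> str:
    # Rename each CUDA runtime call to its HIP equivalent.
    for cuda_name, hip_name in CUDA_TO_HIP.items():
        source = source.replace(cuda_name, hip_name)
    return source

print(hipify("cudaMalloc(&p, n); cudaFree(p);"))
# → hipMalloc(&p, n); hipFree(p);
```

The resulting HIP source can then be compiled for AMD hardware through ROCm or for NVIDIA hardware through the CUDA toolchain, which is why HIP-based libraries run on both.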
NVIDIA's current CUDA cores are fundamentally the same kind of hardware as AMD's GCN/RDNA shader processors. It all comes down to the libraries used; it just so happens that NVIDIA's ecosystem is the most compatible at the moment.