Actually, nope. Throughput on paper is one thing; actual throughput in a real-world workload is another (cache misses, warp latency, etc.). Run a dgemm test on both cards and you will see. Not to mention CUDA and its toolchain are much easier to use. Where AMD cards would probably gain is in compute-heavy tasks, which they tend to favor.
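To see the gap between paper FLOPS and delivered FLOPS, you can time a gemm yourself and convert to GFLOPS. A minimal CPU-side sketch using numpy (on a GPU you'd do the same thing through cuBLAS or rocBLAS dgemm; the method, not the backend, is the point here):

```python
import time
import numpy as np

def gemm_gflops(n=1024, dtype=np.float64, reps=3):
    """Measure effective GEMM throughput in GFLOPS for n x n matrices.

    Compare the result against the hardware's theoretical peak to see
    how much real-world factors (memory traffic, scheduling) cost you.
    """
    a = np.random.rand(n, n).astype(dtype)
    b = np.random.rand(n, n).astype(dtype)
    a @ b  # warm-up run so timing excludes first-call overhead
    t0 = time.perf_counter()
    for _ in range(reps):
        a @ b
    elapsed = (time.perf_counter() - t0) / reps
    flops = 2.0 * n ** 3  # an n x n GEMM does ~2n^3 floating-point ops
    return flops / elapsed / 1e9

print(f"Measured dgemm throughput: {gemm_gflops():.1f} GFLOPS")
```

Run it with FP64 (dgemm) and FP32 (sgemm, via `dtype=np.float32`) and the measured numbers will typically land well below the spec-sheet peak on either vendor's hardware.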
The only problem is that an NV compute card with proper double-precision capability is so much more expensive. But for deep-learning uses, which only require FP32 or lower precision, I haven't seen a single lab that uses AMD cards. In the enterprise segment, AMD's MI25 hasn't found a single customer yet.