Turing is nearly twice as efficient per watt. Vega 64 (4096 cores, 10.2-12.7 TFLOPS) is beaten by the RTX 2060 (1920 cores, 5.2-6.5 TFLOPS). It should be obvious to anyone how inefficient GCN really is at this point. Even the advantage of 7nm will not make up for that.
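To put rough numbers on the "nearly twice as efficient per watt" claim, here is a minimal back-of-the-envelope sketch. It assumes the two cards land at comparable gaming performance (the premise above) and uses approximate reference board-power figures (~295 W for Vega 64, ~160 W for the RTX 2060); the exact wattages are assumptions, not measurements from this thread.

```python
# Rough perf/W comparison under the assumption of roughly equal gaming performance.
# Board-power figures are approximate reference values, not measured numbers.
vega64 = {"cores": 4096, "tflops": 12.7, "board_power_w": 295}
rtx2060 = {"cores": 1920, "tflops": 6.5, "board_power_w": 160}

# If performance is roughly equal, efficiency scales inversely with power draw.
efficiency_ratio = vega64["board_power_w"] / rtx2060["board_power_w"]
print(f"Turing perf/W advantage: ~{efficiency_ratio:.2f}x")  # ~1.84x, i.e. "nearly twice"

# GCN also needs roughly twice the raw throughput to reach the same result.
throughput_ratio = vega64["tflops"] / rtx2060["tflops"]
print(f"Peak TFLOPS spent for similar performance: ~{throughput_ratio:.2f}x")  # ~1.95x
```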
Fwiw, Vega is still ahead in compute. But still, these aren't compute cards, at least not primarily. And, of course, even there Turing wins when it can flex its tensor muscles.
AMD wouldn't need to win in one round. Matching Nvidia will allow them to reap some cash that can go towards a future, better iteration of their architecture. At the same time, it will prevent Nvidia from charging inflated prices that go towards their war chest, to be used against AMD in the future.

I know all of those things; my point is that even if AMD is competitive on 7nm against Turing, they're not going to win anything.
I'm pretty sure Nvidia would like to move to 7nm as soon as possible; rumors say they will use Samsung's EUV process, which should reduce manufacturing costs.
Theory is easy.