No. Simply, no. GPUs are not so complicated that you can't make a rough performance estimate.
The 6900 XT vs. the 3090 were roughly equal (SKU jockeying aside, where AMD seems to have reacted to Nvidia with the 6900 XT):
- 80 CUs vs. 82 SMs, roughly the same number of transistors and shader units. Nvidia had a slight disadvantage from being half a node behind.
- AMD bet on a large LLC (Infinity Cache) to make up for its 256-bit memory bus vs. the 384-bit bus on the 3090. A successful bet, in hindsight; rough numbers on that bandwidth gap below.
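A quick back-of-the-envelope on that bandwidth bet (a minimal sketch; the 16 Gbps GDDR6 and 19.5 Gbps GDDR6X figures are the reference memory speeds for each card):

```python
# Raw DRAM bandwidth each card has to feed its shaders with.
# Memory data rates assumed: 16 Gbps GDDR6 (6900 XT), 19.5 Gbps GDDR6X (3090).

def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Bus width in bits x per-pin data rate, converted to GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

rx_6900_xt = bandwidth_gb_s(256, 16.0)   # ~512 GB/s
rtx_3090   = bandwidth_gb_s(384, 19.5)   # ~936 GB/s

print(f"6900 XT: {rx_6900_xt:.0f} GB/s vs. 3090: {rtx_3090:.0f} GB/s -> "
      f"~{rtx_3090 / rx_6900_xt:.1f}x gap the 128 MB Infinity Cache had to paper over")
```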
This is simply not the case for the 4090 vs. the 7900 XTX: 128 SMs vs. 96 CUs on the same process node, the same memory bus width, and a similar enough LLC.
There are definitely cases where the 7900 XTX can get close, mostly when power or memory becomes the limiting factor.
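For scale, the same kind of napkin math on raw shader throughput (a sketch; the per-SM/CU ALU counts and RDNA 3's dual-issue FP32 are the published architectural figures, and the clocks are approximate reference boost clocks):

```python
# Theoretical peak FP32: ALUs x 2 ops per clock (FMA) x boost clock.
# Reference boost clocks assumed: 2.52 GHz (4090), 2.50 GHz (7900 XTX).

def fp32_tflops(alus: int, boost_ghz: float) -> float:
    return alus * 2 * boost_ghz / 1000

rtx_4090    = fp32_tflops(128 * 128,   2.52)  # 128 SMs x 128 FP32 lanes    -> ~82.6 TFLOPS
rx_7900_xtx = fp32_tflops(96 * 64 * 2, 2.50)  # 96 CUs x 64 SPs, dual-issue -> ~61.4 TFLOPS

print(f"RTX 4090: ~{rtx_4090:.0f} TFLOPS vs. RX 7900 XTX: ~{rx_7900_xtx:.0f} TFLOPS on paper")
```

Paper FLOPS aren't a benchmark, but the raw-resource gap here is a very different story from the 6900 XT/3090 matchup.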
It's an invalid comparison because they aren't the same architecture and don't work in a similar way. Remember back in the Fermi vs. TeraScale days: the GF100/GTX 480 GPU had 480 shaders (512 really, but that config never shipped) while a Cypress XT/HD 5870 had 1600. Nor can you go by transistor count, because the Nvidia chip has several features that consume die area which the Navi 31 design does not, such as tensor cores, an integrated memory controller and on-die cache (the L3 and IMCs are offloaded onto the MCDs, with the GCD focusing strictly on graphics and the other SIP blocks). It's a radically different approach to GPU design that each company has taken this time around, so I don't think it's "excusable" that the Radeon has fewer compute units, because that's an arbitrary number (to some extent).
If you ask me, I would make a case for the N31 GCD being technically a more complex design than the portion responsible for graphics in AD102. And of course, the 7900 XTX can never get close unless you pump double the wattage into it.
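To put rough numbers on that die split (a sketch using the publicly quoted transistor counts; how much of AD102's budget goes to graphics alone is not something Nvidia breaks out):

```python
# Publicly quoted transistor counts, in billions (approximate).
AD102_TOTAL  = 76.3                         # monolithic: SMs, tensor/RT cores, L2, IMCs, SIP blocks
NAVI31_GCD   = 45.4                         # graphics die: shader engines + SIP blocks, no L3/IMCs
NAVI31_MCD   = 2.05                         # one MCD: 16 MB of L3 plus a 64-bit GDDR6 PHY
NAVI31_TOTAL = NAVI31_GCD + 6 * NAVI31_MCD  # ~57.7 billion across the whole package

print(f"Navi 31 package: ~{NAVI31_TOTAL:.1f}B vs. AD102: ~{AD102_TOTAL:.1f}B transistors; "
      f"only ~{NAVI31_GCD:.1f}B of the Radeon's budget lives in the graphics die")
```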