System Name | "Icy Resurrection"
---|---
Processor | 13th Gen Intel Core i9-13900KS |
Motherboard | ASUS ROG Maximus Z790 Apex Encore |
Cooling | Noctua NH-D15S upgraded with 2x NF-F12 iPPC-3000 fans and Honeywell PTM7950 TIM |
Memory | 32 GB G.SKILL Trident Z5 RGB F5-6800J3445G16GX2-TZ5RK @ 7600 MT/s 36-44-44-52-96 1.4V |
Video Card(s) | NVIDIA RTX A2000 |
Storage | 500 GB WD Black SN750 SE NVMe SSD + 4 TB WD Red Plus WD40EFPX HDD |
Display(s) | 55-inch LG G3 OLED |
Case | Pichau Mancer CV500 White Edition |
Power Supply | EVGA 1300 G2 1.3kW 80+ Gold |
Mouse | Microsoft Classic IntelliMouse (2017) |
Keyboard | IBM Model M type 1391405 |
Software | Windows 10 Pro 22H2 |
Benchmark Scores | I pulled a Qiqi~ |
AMD doesn't believe in ray-tracing. They said it would be fully supported and available only in the cloud.
So, not mainstream and not for you.
[Attachment 270730: old AMD slide on cloud ray tracing]
You're taking this (very old, pre-RDNA2) slide quite out of context. AMD isn't going to be running ray-tracing server farms for Radeon owners; that cloud-computing pitch is aimed at the application-specific market.
And not for me? Not sure what you mean by that. I mean, I know I'm only a hobo who still has an RTX 3090 (smh, I don't have a 4090 yet, what am I, poor?), but... I dunno, I enjoy ray-traced gaming, even if my wood GPU only gets 100 fps or whatever. Wtf, how am I so poor, playing at 1080p and not even using frame generation?
I see people talking about cost per mm² for production costs. Yep, sure, that's increased. But did you even look at the die area (in mm²) used by each GPU?
RTX 3080: 628.4 mm²
RTX 4080: 379 mm²
Even if the cost per mm² has increased, the die size has decreased drastically, by about 40%. That definitely does NOT justify the massive price increase on these cards. Nvidia is being greedy; it's a corporation, after all, and we expect that. The problem is that AMD isn't being competitive, and neither is Intel (in this high-end space). Nvidia has the market by the balls: you don't buy AMD because it isn't very future-proof, and you don't buy Nvidia (but you will) because it's too expensive.
To those who say they don't believe in ray tracing: go live on Intel integrated graphics and tell me you're still fine with it for gaming. Graphics moves forward. Ray tracing solves problems that typical shader-based raster techniques struggle to scale, problems we've been finding difficult to remedy for a decade without dedicated hardware. Traditional triangle-based rasterization is at the limit of what it can do efficiently, and you might not think it, but taking the ray-tracing route is about making certain effects MORE efficient, because otherwise you have to brute-force them with traditional shader programs, which end up slower (grab a GTX 1080 and run Quake 2 RTX on it; the entire ray-tracing stack runs in shaders there).
The explanation for the die sizes is quite straightforward: the RTX 4080 is built on a much more advanced lithography node and uses a lower-segment ASIC (AD103) rather than the top-tier die (AD102), while the RTX 3080 had a seriously cut-down GA102; only 68 of the 84 compute units present in GA102 are enabled on an RTX 3080. That number rises to 70 on the 3080 12GB, 80 on the 3080 Ti and 82 on the 3090, with the 3090 Ti getting the fully enabled processor. Using yield-harvested (lower-quality!) large dies with several disabled units tends to hurt power efficiency. Add first-generation GDDR6X memory and it's no wonder the original Ampere has seen better days.
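To put some rough numbers on the die-area argument, here's a quick back-of-the-envelope sketch in Python. The die areas are the ones quoted above; the per-wafer prices are hypothetical placeholders I picked purely for illustration (not actual Samsung/TSMC pricing), and the dies-per-wafer estimate ignores edge loss and defect yield entirely.

```python
import math

# Die areas quoted above (mm^2)
GA102_AREA = 628.4   # RTX 3080 die
AD103_AREA = 379.0   # RTX 4080 die

# Hypothetical 300 mm wafer prices, for illustration only -- NOT real figures.
WAFER_PRICE_OLD_NODE = 6_000    # assumed cost for the older node (USD)
WAFER_PRICE_NEW_NODE = 17_000   # assumed cost for the leading-edge node (USD)

WAFER_AREA = math.pi * (300 / 2) ** 2   # ~70,686 mm^2 of usable-ish area

def naive_cost_per_die(die_area_mm2: float, wafer_price: float) -> float:
    """Silicon cost per candidate die, ignoring edge loss and defect yield."""
    dies_per_wafer = WAFER_AREA / die_area_mm2
    return wafer_price / dies_per_wafer

shrink = 1 - AD103_AREA / GA102_AREA
print(f"Die area reduction: {shrink:.1%}")   # ~39.7%, the '40%' mentioned above
print(f"GA102 silicon cost/die (assumed): ${naive_cost_per_die(GA102_AREA, WAFER_PRICE_OLD_NODE):.0f}")
print(f"AD103 silicon cost/die (assumed): ${naive_cost_per_die(AD103_AREA, WAFER_PRICE_NEW_NODE):.0f}")
```

Under these made-up wafer prices the smaller AD103 still works out to more dollars of silicon per die than GA102, so the shrink alone doesn't prove the card got cheaper to build; but silicon is only one line item, so die area alone doesn't settle the pricing argument either way.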
But indeed, I agree: Jensen has raised prices quite a bit this generation. Not only that, they've also left an insane amount of headroom for refresh SKUs, and even a comfortable lead above the RTX 4090 for an eventual 4090 Ti or a potential 30th-anniversary Titan Ada or something, as the RTX 4090 has only 128 of the 144 units in the AD102 processor enabled.
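For reference, the same quick arithmetic applied to the enabled-unit counts discussed above (these are the figures from the posts, not a spec-sheet dump):

```python
# Enabled compute units vs. full die, per the numbers discussed above.
skus = {
    "RTX 3080 10GB (GA102)": (68, 84),
    "RTX 3080 12GB (GA102)": (70, 84),
    "RTX 3080 Ti (GA102)":   (80, 84),
    "RTX 3090 (GA102)":      (82, 84),
    "RTX 3090 Ti (GA102)":   (84, 84),
    "RTX 4090 (AD102)":      (128, 144),
}

for name, (enabled, full) in skus.items():
    print(f"{name}: {enabled}/{full} units enabled ({enabled / full:.0%})")
```

The 4090 leaves roughly 11% of AD102 dark, which is exactly the headroom being described for a 4090 Ti or Titan-class refresh.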