
The future of RDNA on Desktop.

Joined
Jan 14, 2019
Messages
15,033 (6.68/day)
Location
Midlands, UK
System Name My second and third PCs are Intel + Nvidia
Processor AMD Ryzen 7 7800X3D
Motherboard MSi Pro B650M-A Wifi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance EXPO DDR5-6000 CL36
Video Card(s) PowerColor Reaper Radeon RX 9070 XT
Storage 2 TB Corsair MP600 GS, 4 TB Seagate Barracuda
Display(s) Dell S3422DWG 34" 1440 UW 144 Hz
Case Kolink Citadel Mesh
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply 750 W Seasonic Prime GX
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE Plasma
I mean, of course. But that's also a matter of common sense, a 1080p gamer with a 600W power supply doesn't have the budget or the need for a product of that price. I'll go a step beyond... that's probably the kind of gamer that the RX 9060 and 5060 "vanilla" versions are gonna target. That, indeed, would be the equivalent of installing a Ferrari engine onto a VW bug.
Who says they don't have the budget? I have friends who earn more than I do, but are on 1080p out of choice. You don't need a 30-something inch monster screen on a small desk, and there's no need to go higher than 1080p at 24" and below.

One of those friends has a 1000 W PSU and a 7800 XT inside an old-style, no-name grey sleeper chassis that he found next to a recycling container. Why? Because he likes it.

If your computer is of a higher-end variety and can support a 7900 XTX, that is probably still what you should buy, at least for now. Although I expect many of the RX 9070 XT's improvements to be relevant enough to make it the more desirable option: for example, the new Radeon Image Sharpening 2.0 feature they announced last week will not be available on RDNA 3; the driver release notes state it is exclusive to RDNA 4. I know the technical reason why, but I'm not sure it has been divulged publicly anywhere just yet, so I'll give you a rain check on that one.
That's what I mean. What's "the best" isn't all black and white.
 
Joined
May 13, 2008
Messages
1,063 (0.17/day)
System Name HTPC whhaaaat?
Processor 2600k @ 4500mhz
Motherboard Asus Maximus IV gene-z gen3
Cooling Noctua NH-C14
Memory Gskill Ripjaw 2x4gb
Video Card(s) EVGA 1080 FTW @ 2037/11016
Storage 2x512GB MX100/1x Agility 3 128gb ssds, Seagate 3TB HDD
Display(s) Vizio P 65'' 4k tv
Case Lian Li pc-c50b
Audio Device(s) Denon 3311
Power Supply Corsair 620HX
(screenshot: 3919mhz)

Oh, ok, I see what they did there. Got it. That's pretty cool. I mean, theoretically, right, that would only take ~26.1xxgbps ram to run full out w/ rt (~5% more than raster)?

Probably hot as hell, though, if it's even possible with other stuff running. NGL, I'm surprised that clock is even possible at all on 4nm. That actually could use more than 16GB of ram.
I wonder if that's truly even possible in any practical 3D application, or if it's rather prepping the design for 3nm. Perhaps both. That could actually be pretty interesting.
What I'm curious about is...Are they running split raster/shader clocks again? Which one is getting reported in GPU-Z? Are they doing something weird like using a similar clock domain as like Zen 5c?

Side note, I was always trying to figure out why the 5080 was limited to ~3154mhz avg. As I've said before, a 5080FE at stock (2640mhz stable) requires 22gbps of bandwidth. At 3154mhz it would require ~26.3gbps (~26,284mhz effective)?
Which is a weird place to put a general cut-off. 22Gbps is nice and obvious; beat a 20gbps GDDR6 design. Which they kinda-sorta didn't, but it makes sense bc maybe they didn't account for the cache increase.
Yeah, that'll happen. That fake advertised clock of 2.97ghz, insinuating the old L2, will getcha every time when it's actually 3.15ghz (which is the difference the cache makes).
Sometimes you gotta rush an article nobody will ever read onto a forum somewhere as soon as you realize why it matters, even though nobody else probably cares...until they do (but still don't understand).
You'll get why nVIDIA probably set their clocks the way they did in a second, but now the clock limit/weird bandwidth requirement makes even more sense if that's the actual capability of N48.
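If you want to check the arithmetic behind those numbers, here's a rough back-of-the-envelope sketch (my own illustration, nothing official): it assumes bandwidth demand scales linearly with SP count × core clock, anchored on the 22gbps @ 2640mhz 5080FE figure above, with the ~5% RT premium and the 8192-SP reading of N48 taken as this thread's working assumptions.

```python
# Back-of-the-envelope bandwidth scaling, per the reasoning above.
# Assumption (for illustration only): the memory speed a GPU needs scales
# linearly with compute (SP count x core clock), anchored on the 5080 FE
# figure quoted in this post: 10752 SP @ 2640 MHz needing 22 Gbps.
# The ~5% RT premium and treating N48 as 8192 SP-equivalent are also this
# thread's working numbers, not vendor specs.

BASE_SP, BASE_CLOCK_MHZ, BASE_GBPS = 10752, 2640, 22.0

def required_gbps(sp: int, clock_mhz: float, rt_premium: float = 0.0) -> float:
    """Estimate the per-pin memory speed (Gbps) needed to feed a config."""
    scale = (sp * clock_mhz) / (BASE_SP * BASE_CLOCK_MHZ)
    return BASE_GBPS * scale * (1.0 + rt_premium)

# 5080 at the ~3154 MHz average clock limit mentioned above:
print(f"5080 @ 3154 MHz: ~{required_gbps(10752, 3154):.1f} Gbps")               # ~26.3

# N48 at the 3919 MHz screenshot clock, raster vs. the assumed ~5% RT bump:
print(f"N48 @ 3919 MHz (raster): ~{required_gbps(8192, 3919):.1f} Gbps")        # ~24.9
print(f"N48 @ 3919 MHz (w/ RT):  ~{required_gbps(8192, 3919, 0.05):.1f} Gbps")  # ~26.1
```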

And then ofc for excess bw it's about 16% perf on average for a doubling over what's required...blahblahblah...it decreases if they actually use it for real compute performance but excessive still helps.
Which they will sell to everyone, as they have, as the second coming of the flying spaghet monster, when it really isn't that big of a deal (<6% extra perf currently on 5080).
But you guys don't care about that part.
I do, so you know it doesn't actually matter much, but could have.

To actually *use* 30gbps on GB203, they would need a 3600mhz core clock w/ 10752sp.
Gives you an idea of how these designs *could* have gone. You know, that way on N4P...or 12288sp @ 3150mhz even if current '4NP'...exactly...which they also didn't give us, but could have.
Because, well, greed. At some point nVIDIA fans really should be sad when they realize the very obvious designs they've tested...and then decided "No, fuck it, sell it about six more times until they get that".
Ofc, if they *had* made that design, the replacement Rubin would be 9216sp @ 4200/40000. But no, they'll sell each step (36000/40000) as different gens, probably, perhaps using a denser process at first...because cheaper (especially w/Micron ram), and even then the small boost later as a freakin' 6070 Super or some shit. Maybe 7070. Each as a boost. You want MORE fuckery?
You may ask yourself, why not just 12288sp @ 3360/32000, the actual speed of the fucking ram on 4nm and the capability of even the dense process? Answer: Bc nVIDIA does as little as it can, to sell it again.
Because then the 12288sp part on 3nm with higher clocks would be the replacement. Not the 9216sp part. Potentially twice. Are you following?
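Same scaling rule applied to the what-if configurations listed above, again just as a sanity check on the arithmetic. All of these are hypothetical designs from this post, not announced products:

```python
# Reusing the linear compute-to-bandwidth scaling from the sketch above to
# sanity-check the hypothetical configurations discussed in this post.
# None of these are real products; the SP/clock pairs are the poster's
# what-if designs.

BASE_SP, BASE_CLOCK_MHZ, BASE_GBPS = 10752, 2640, 22.0

def required_gbps(sp: int, clock_mhz: float) -> float:
    return BASE_GBPS * (sp * clock_mhz) / (BASE_SP * BASE_CLOCK_MHZ)

hypothetical = [
    ("GB203, 10752 SP @ 3600 MHz", 10752, 3600),  # -> ~30 Gbps (the '30gbps' case)
    ("GB203, 12288 SP @ 3150 MHz", 12288, 3150),  # -> ~30 Gbps as well
    ("12288 SP @ 3360 MHz",        12288, 3360),  # -> ~32 Gbps (the '3360/32000' case)
    ("Rubin, 9216 SP @ 4200 MHz",   9216, 4200),  # -> ~30 Gbps by this scaling; pairing it
                                                  #    with 36-40 Gbps ram is excess bandwidth
]

for name, sp, clk in hypothetical:
    print(f"{name}: ~{required_gbps(sp, clk):.1f} Gbps")
```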

Kinda just got interesting, though, imo. I still think nVIDIA needs to give people a guaranteed ~3.23-3.24ghz/24GB clock to make that product, especially if AMD pull this off, which would be nice, or else the difference between that and even the next smaller Rubin (9216, low 3nm clocks) will do the exact same shit they keep doing to outdate GPUs (unlike AMD). They'll still probably do it again when they eventually use 40gbps ram (and high clocks), but if they do it two times even after using a massively cut down design on Blackwell already, only to replace it with an even smaller Rubin as an upgrade (twice)...That'd be funny.
That really is selling a possible design like 6 times, and cheap as possible for them each time.
While looking like they're improving things. And it will work. Because most people probably will not understand what I just wrote.

Don't forget, the quality of DLSS will likely magically improve as a 'feature', absolutely not an obsolescence technique, by the compute difference between the GPUs each time, relegating the former far enough under 60fps in newer titles (absolutely sponsored by nVIDIA) that the new one makes the cut each time.
Because they're just that much better, guys. Shit, I left my italics on. Pretend the sentence two sentences ago was even more italics to symbolize sarcasm. Bold or underline just doesn't do it justice.


You know, just thought I'd throw it out there for the about 407th time in case nVIDIA doesn't realize they're not going to get away with doing that if I can help it.
Different designs are one thing; the DLSS thing, not unlike never giving enough buffer, is fuckin' bullshit. Because they *know* people won't understand.

Will they do even 3.24+ with a 24GB model, or just try to beat AMD using similar clocks to the current 5080, which may be enough for some current games at 1440p when not limited by ram?
Probably the latter, bc nVIDIA.
I have no doubt they tested 24gbps (and the N4P process) for the practical limit, as the 3600mhz ideal for 30gbps ram insinuates that, if not something similar or even AMD's design itself.

Either practically or 'with the power of [5070 is a 4090] AI'. On the "so we don't make Fermi again" supercomputer. That made Blackwell. The greatest GPU family in existence, some people say. At least one guy.

That guy wasn't me.

This is what I mean, though. I don't know how they know, but they ALWAYS KNOW what AMD is capable of doing. The popcorn moment is whether nVIDIA will give people that GPU clock, which might actually force them to sell *slightly* better GPUs next generation. If they don't, they're still doing even more of the same shit, even after doing the same shit most of you don't even know they already did, which likely made their products worse three times over before you even knew they existed.

What do you guys think...do you think it's possible it might actually happen with these two GPUs? Any hope of getting a decent stock 1440p RT card?

The real truth is that you *know* AMD is trying to get there...and you *know* nVIDIA really hopes they won't have to make it happen.




 
Joined
Jan 2, 2024
Messages
816 (1.87/day)
Location
Seattle
System Name DevKit
Processor AMD Ryzen 5 3600 ↗4.0GHz
Motherboard Asus TUF Gaming X570-Plus WiFi
Cooling Koolance CPU-300-H06, Koolance GPU-180-L06, SC800 Pump
Memory 4x16GB Ballistix 3200MT/s ↗3800
Video Card(s) PowerColor RX 580 Red Devil 8GB ↗1380MHz ↘1105mV, PowerColor RX 7900 XT Hellhound 20GB
Storage 240GB Corsair MP510, 120GB KingDian S280
Display(s) Nixeus VUE-24 (1080p144)
Case Koolance PC2-601BLW + Koolance EHX1020CUV Radiator Kit
Audio Device(s) Oculus CV-1
Power Supply Antec Earthwatts EA-750 Semi-Modular
Mouse Easterntimes Tech X-08, Zelotes C-12
Keyboard Logitech 106-key, Romoral 15-Key Macro, Royal Kludge RK84
VR HMD Oculus CV-1
Software Windows 10 Pro Workstation, VMware Workstation 16 Pro, MS SQL Server 2016, Fan Control v120, Blender
Benchmark Scores Cinebench R15: 1590cb Cinebench R20: 3530cb (7.83x451cb) CPU-Z 17.01.64: 481.2/3896.8 VRMark: 8009
It's no mystery that RDNA4 could have utilized more memory but that's not the point of the card.
There's no guarantee of what is to come but it would be a hilarious gut punch if the next AMD model is a 9080XT.
Preserves the 90 stack name to keep the sauce confusing to everybody, development kept very hush and made in limited units...
Ships with 20-24GB, 3.3GHz core clocks and completely edges out nvidia's 5080 refresh by high single digit % right before it drops.
First to shelf = first to sale. That's the drum beat that AMD needs to fully understand and I think they're finally starting to get it.
 
Joined
May 13, 2008
Messages
1,063 (0.17/day)
System Name HTPC whhaaaat?
Processor 2600k @ 4500mhz
Motherboard Asus Maximus IV gene-z gen3
Cooling Noctua NH-C14
Memory Gskill Ripjaw 2x4gb
Video Card(s) EVGA 1080 FTW @ 2037/11016
Storage 2x512GB MX100/1x Agility 3 128gb ssds, Seagate 3TB HDD
Display(s) Vizio P 65'' 4k tv
Case Lian Li pc-c50b
Audio Device(s) Denon 3311
Power Supply Corsair 620HX
It's no mystery that RDNA4 could have utilized more memory but that's not the point of the card.
There's no guarantee of what is to come but it would be a hilarious gut punch if the next AMD model is a 9080XT.
Preserves the 90 stack name to keep the sauce confusing to everybody, development kept very hush and made in limited units...
Ships with 20-24GB, 3.3GHz core clocks and completely edges out nvidia's 5080 refresh by high single digit % right before it drops.
First to shelf = first to sale. That's the drum beat that AMD needs to fully understand and I think they're finally starting to get it.
That clock is 3919mhz. Potentially (with overclocked 24gbps ram) they could literally do that in RT (if somehow they have it set up that way). The bandwidth is there, unlike on the 9070 XT, which is bw-limited.

Think about that. According to W1z, the current RT clock is 3ghz. That is not a *small* increase, and *could* actually be a 1440p card.

5080 always could have been, nVIDIA decided to not let it be one. If they will with 24GB...Unknown. I would hope so, but you truly never know with them....they truly do as little as they have to in order to win.
So they can make the cheapest design the next time to sell as an improvement. I know some people think I'm being hyperbolic, but I'm truly not. Read what I wrote about other potential designs.
And why they did what they did. They are literally going to replace the current 5080 with a chip a whole tier down from what they *might* have used. I mean, honestly, they *could* have made the 4090/'6080' a 4080. Instead...
Gotta sell it 6 times. 4080. 4080S. 5080. 5080 24GB. 6070. 6070S. That is amazing. It will be interesting to see the 6 generations of progress.
Of damn near nothing in reality.

N48 would actually need more ram (unlike anything under 60TF; 9070 xt) if it is like this...and would actually be extremely neat. If it didn't have more ram, it would have the same problem as the current 5080.

I may be interpreting what they're doing incorrectly.

But I don't think I am.
 
Joined
Oct 5, 2024
Messages
230 (1.45/day)
Location
United States of America
The reason the RTX 4090 outperforms the 5080 is because its core is (sometimes significantly) more powerful, not because the 5080 is memory capacity starved. To run into the limitations of 16 GB, you currently have to go all-out, with the most extreme scenarios (and it would still fit into memory by a hair) - not to mention W1zz tested this on the 5090, where this would be about 49.5% of its capacity. On a 16 GB card it would preallocate less, and use a bit less as a result.
Yes exactly. AMD usually does add just enough VRAM to be usable by the GPU at its intended resolution. If you have to resort to 4K to get to the VRAM cap for a 1440p-intended card, maybe consider getting a 4K-intended GPU instead.
 
Joined
Jan 14, 2019
Messages
15,033 (6.68/day)
Location
Midlands, UK
System Name My second and third PCs are Intel + Nvidia
Processor AMD Ryzen 7 7800X3D
Motherboard MSi Pro B650M-A Wifi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance EXPO DDR5-6000 CL36
Video Card(s) PowerColor Reaper Radeon RX 9070 XT
Storage 2 TB Corsair MP600 GS, 4 TB Seagate Barracuda
Display(s) Dell S3422DWG 34" 1440 UW 144 Hz
Case Kolink Citadel Mesh
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply 750 W Seasonic Prime GX
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE Plasma
Yes exactly. AMD usually does add just enough VRAM to be usable by the GPU at its intended resolution. If you have to resort to 4K to get to the VRAM cap for a 1440p-intended card, maybe consider getting a 4K-intended GPU instead.
I recommend watching the 5070 and 9070 reviews on Gamers Nexus, with special attention to Cyberpunk with RT. It's unplayable on the 5070, but runs fine on similarly specced 16 GB cards.

I'm not saying that Cyberpunk with RT is a must-play, but it's a good indication that VRAM capacity is not to be underestimated in every case.
 
Joined
Mar 7, 2023
Messages
1,032 (1.40/day)
Processor 14700KF/12100
Motherboard Gigabyte B760 Aorus Elite Ax DDR5
Cooling ARCTIC Liquid Freezer II 240 + P12 Max Fans
Memory 32GB Kingston Fury Beast @ 6000
Video Card(s) Asus Tuf 4090 24GB
Storage 4TB sn850x, 2TB sn850x, 2TB Netac Nv7000 + 2TB p5 plus, 4TB MX500 * 2 = 18TB. Plus dvd burner.
Display(s) Dell 23.5" 1440P IPS panel
Case Lian Li LANCOOL II MESH Performance Mid-Tower
Audio Device(s) Logitech Z623
Power Supply Gigabyte ud850gm pg5
Keyboard msi gk30
My 4090 has never exceeded 16GB vram allocation. For gaming, 32GB is totally unnecessary and, at least for now, 16GB is enough for anything (well, maybe there are some very unique edge cases). Can't say for how long that will be the case. But for now it is. I wish the 5090 had less vram, to be honest. It would be less desirable for non-gamers and therefore demand would be lowered. Obviously that's not in Nvidia's best interest, but imo, it's in gamers' best interest.

Besides, there's already an enterprise line; if you need super high amounts of vram, go through those channels. But I am guessing this is for the productivity individual and small business that can't afford an enterprise card. But still, I personally do not like GeForce cards with specs clearly aimed at non-gaming uses. GeForce is supposed to be the gaming line of cards.

Anyway, sorry, I guess that's off topic. To bring this back to AMD: I think 16GB is fine, developers are still trying to consider how to have 8 and 12gb cards be compatible with their games. I think 16GB will be safe for a while longer. And these cards are supposed to be mid-range. The complete vacuum of cards from everybody right now is obviously going to jack up prices beyond what was intended, but still... they said they are not going halo, and 16GB is enough for midrange now and presumably a while into the future as well.

And yeah, I guess for the moment, midrange is now 700+ unless you get lucky and live by a Micro Center or something, where I hear they have stock. Nowhere in my area has stock. One place offers a queue for the non-XT..... that's it.

And it makes me nervous for what I'd do if my 4090 bites the dust....
 
Joined
Oct 5, 2024
Messages
230 (1.45/day)
Location
United States of America
(snip) I think 16GB is fine, developers are still trying to consider how to have 8 and 12gb cards be compatible with their games. I think 16GB will be safe for a while longer. (snip)
I would not go so far as to say that 8 GB cards are still being figured out by developers. Quite a few games I have easily use more than 8 GB, and even go right up to the 12 GB limit. 12 GB is the new 8 GB and 16 GB is the new sweet spot that 12 GB used to occupy.

I just don't think you need more than 16 GB in March 2025 for 1440p, which is what I consider to be the intended resolution for the 9070 cards. The bottleneck is elsewhere in the design, not VRAM.
 
Joined
Mar 7, 2023
Messages
1,032 (1.40/day)
Processor 14700KF/12100
Motherboard Gigabyte B760 Aorus Elite Ax DDR5
Cooling ARCTIC Liquid Freezer II 240 + P12 Max Fans
Memory 32GB Kingston Fury Beast @ 6000
Video Card(s) Asus Tuf 4090 24GB
Storage 4TB sn850x, 2TB sn850x, 2TB Netac Nv7000 + 2TB p5 plus, 4TB MX500 * 2 = 18TB. Plus dvd burner.
Display(s) Dell 23.5" 1440P IPS panel
Case Lian Li LANCOOL II MESH Performance Mid-Tower
Audio Device(s) Logitech Z623
Power Supply Gigabyte ud850gm pg5
Keyboard msi gk30
I would not go so far as to say that 8 GB cards are still being figured out by developers. Several games I have easily use more than 8 GB. 12 GB is the new 8 GB and 16 GB is the new sweet spot that 12 GB used to occupy.

I just don't think you need more than 16 GB in March 2025 for 1440p, which is what I consider to be the intended resolution for the 9070 cards.

I know 8gb is a problem, has been for a couple years now, and I would not recommend them for anybody wanting to play new games, but there' still a lot of people with them, so not all new games are completely disregarding them, thats what I meant. If you look at system requirements you will often see 8gb in the minimum area, with big sacrifices having to be made in some cases. So they are clearly still considered, even if in a diminished way.
 
Joined
Oct 19, 2022
Messages
375 (0.43/day)
Location
Los Angeles, CA
Processor AMD Ryzen 7 9800X3D (+PBO 5.4GHz)
Motherboard MSI MPG X870E Carbon Wifi
Cooling ARCTIC Liquid Freezer II 280 A-RGB
Memory 2x32GB (64GB) G.Skill Trident Z Royal @ 6200MHz 1:1 (30-38-38-30)
Video Card(s) MSI GeForce RTX 4090 SUPRIM Liquid X
Storage Crucial T705 4TB (PCIe 5.0) w/ Heatsink + Samsung 990 PRO 2TB (PCIe 4.0) w/ Heatsink
Display(s) AORUS FO32U2P 4K QD-OLED 240Hz (DP 2.1 UHBR20 80Gbps)
Case CoolerMaster H500M (Mesh)
Audio Device(s) AKG N90Q w/ AudioQuest DragonFly Red (USB DAC)
Power Supply Seasonic Prime TX-1600 Noctua Edition (1600W 80Plus Titanium) ATX 3.1 & PCIe 5.1
Mouse Logitech G PRO X SUPERLIGHT
Keyboard Razer BlackWidow V3 Pro
Software Windows 10 64-bit
To be completely fair, that was a flagship product. Don't know what people were expecting. But the rate of generation uplift NVIDIA provides is decreasing, the 40 series (pre super) and 50 series are good examples of that, whereas the gen uplift on AMD is inconsistent rather than a noticeable decline.

The next generation of GPU's will probably have either a super big generational uplift akin to what NVIDIA used to pump out, or very little. And that's just the GPU side of things, AAA developers woefully don't optimize their games till after launch 99% of the time it seems.

It is decreasing because Nvidia are cheaping out... Blackwell was supposed to be made on TSMC 3nm and with a better design. But NVIDIA decided to change their plans when AMD cancelled Navi 41 chips. They knew they would have no competitor, so they took it easy. The 5090 would have been around 50% more powerful on average if it was made on 3nm.
 
Joined
Sep 17, 2014
Messages
23,495 (6.13/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000

The point is 4090 won't be a 4090 on 3nm. It will be a 6080 (and likely faster, so 1440p->4k up-scaling is more consistent). Again, I think the 'whatever they call the 5080 replacement w/ 18GB' will be 1440p.
Because again, 5080 is 10752sp @ 2640mhz. 9216sp @ 3780mhz is ~22% faster in RT/raster, and 18GB of RAM is 12.5% more than 16GB. How far is a 5080 away from 60?
What if you turn on FG (native framerate)? OH, that's right...nVIDIA literally HIDES IT FROM YOU BC OF THIS REASON.
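The arithmetic behind that comparison, laid out explicitly (the 9216 SP / 3780 MHz part is speculation from this post, not an announced SKU; the TFLOPS lines assume the usual 2 FP32 ops per SP per clock):

```python
# Relative compute (SP x clock) of the speculated 9216 SP @ 3780 MHz
# "5080 replacement" versus the 5080's 10752 SP @ 2640 MHz, plus the
# 16 GB -> 18 GB memory step. The hypothetical part is this post's
# speculation, not an announced SKU.

sp_5080, clk_5080 = 10752, 2640
sp_hypo, clk_hypo = 9216, 3780

uplift = (sp_hypo * clk_hypo) / (sp_5080 * clk_5080) - 1
print(f"compute uplift: {uplift:.1%}")          # ~22.7%
print(f"VRAM uplift:    {18 / 16 - 1:.1%}")     # 12.5%

# FP32 TFLOPS, assuming the usual 2 ops per SP per clock -- relevant to
# the "how far is a 5080 away from 60" line above:
print(f"5080: {sp_5080 * 2 * clk_5080 / 1e6:.1f} TF")   # ~56.8 TF
print(f"hypo: {sp_hypo * 2 * clk_hypo / 1e6:.1f} TF")   # ~69.7 TF
```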

Again, then 9070 xt / 128-bit cards will be 1080p. No up-scaling (in a situation like this; yes more demanding situations exist and hence why higher-end cards exist).

I don't get how other people don't see this? I think it's clear as day.

I just do not agree, and it really goes to show that a lot of people have not used RT and/or up-scaling. It's very normal. Asking for 4K RT is absurd (this gen). Pick a game and look at even a 5090.
Also, 1440p->4k up-scaling looks pretty good, even with FSR. Now 960->1440p will look good (always has been okay with DLSS, but now will with FSR4, I think), which again, is the point of 9070 xt.
1440p isn't good-enough (in my view, especially with any longevity and/or using more features like FG) on 5080 bc it doesn't have to be...yet. It could be, but it isn't bc there is no competition.
Now, up-scaling is even important for 1080p->4k, which is *literally the point of DLSS4*. Even for a 5080 (because of the situation above).
'5070 is a 4090' is because 1080p up-scaling IQ has improved to the point they think they can compare it (along with adding FG) to a 4090 running native 4k (in raster). That is the point of that.
Up-scaling is super important. On consoles, they have (and continue to use) DRS. This is no different than that, really. Asking for consistent native frames at high-rez using RT is just not realistic for most budgets.
This is the whooollleee point of why they're improving up-scaling. RT will exist, in some cases in a mandatory way. You will use up-scaling, and you will prefer it looks ok. OR, you will not play those games.
OR, you will spend a fortune on a GPU. Or you will lower other settings (conceivably quite a bit as time goes on). That's just reality.

Look, I get that some people still get hung up on things like "but gddr7" and such. GUYS, a 5080 FE needs 22gbps ram to run at stock (to saturate compute at 2640mhz). Do you know why those speeds?
Think of what AMD is putting out, and where that ram clocks. That is what you do (when you can). You put out the slowest thing you can to win; nothing more. Give nothing away you can sell as an upgrade.
Especially, as I'm showing you above, when it can be tangible. Save it for next-gen and sell it then. nVIDIA truly could give you 24GB and 3.23ghz clocks. They didn't...but they could.

This will bear out when people overclock 9070 XT's (somehow often just short of a stock 5080FE or similar to 5070ti OC) and/or there is a 24gbps card.
Somehow magically similar in many circumstances, especially if 3.47ghz or higher.
Because it's really, honestly, just MATH. It's not opinion. It's MATH. Yes, some units/ways of doing things differ; yes excess bw helps some (perhaps ~6% stock in this case?), but so do extra ROPs on N48.
I don't have the math on the ROPs; I'm sure it differs by resolution. I've never looked into it. But compute is similar; I don't know how much the TMUs help RT (yet). But the main point remains.
Compute is compute (in which 10752 @ 2640 = 8192 @ 3465). Bandwidth is bandwidth. Buffer is buffer. It's all solvable. None of this is magic, but they will try to sell you bullshit, which I am not.
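To make the "it's just math" point concrete, here's the equivalence check as a tiny sketch (arithmetic only, deliberately ignoring the ROP/TMU/cache differences flagged above):

```python
# "Compute is compute": shader count x clock for a stock 5080 FE versus
# an 8192-SP N48 config, per the equivalence stated above. Arithmetic
# only; ROPs, TMUs and cache behaviour are separate factors, as the post
# itself notes.

fe_5080 = 10752 * 2640   # 28,385,280
n48_oc  = 8192 * 3465    # 28,385,280

print(fe_5080 == n48_oc)            # True: identical SP x MHz product
print(f"{fe_5080 / 8192:.0f} MHz")  # 3465 -> the clock an 8192-SP part needs
                                    #          to match a stock 5080 FE
```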


SMH.

Sometimes I just want to dip until Rubin comes out. We'll just see...won't we. Period not question mark.
The 5080... you mean that 999,- MSRP GPU that you can't buy, and if you do, you're still going to fiddle with adapters on your current PSU, or have to upgrade it anyway alongside it... and you might be missing 12% perf due to 8 missing ROPs? That one?

Oh man, where can I sign up! Yeah, RT is really here with THAT GPU. The 6080! That'll be the day! All yours for the low-low price of 1299,-, coming soon TM with DLSS 19

Come on. I'm never gonna be buying into that bullshit "enthusiast" n00b trap. No matter how many influencers say I need to. It's just a terrible deal, and gen-to-gen progress has come to a complete and utter standstill because of 'Nvidia' pushing RT 'forward'. You have to be blind to not see this, and still keep promoting the tech. Not even Nvidia wants it to succeed; they're busy selling AI, and RT is just a fun side project. So the only way RT will really gain traction is if we do a lot less of it, so a shitty x60 that is in fact an x50 in disguise can still run it somehow. Otherwise it's dead in the water, forever the dream that will never materialize.

Even RDNA4 didn't move the needle forward; the performance level on offer was already in the market for years. RT perf has fully stagnated - only fake frames can move it ahead now. Telling, indeed.
 
Joined
Jun 26, 2023
Messages
88 (0.14/day)
Processor 7800X3D @ Curve Optimizer: All Core: -25
Motherboard TUF Gaming B650-Plus
Memory 2xKSM48E40BD8KM-32HM ECC RAM (ECC enabled in BIOS)
Video Card(s) 4070 @ 110W
Display(s) SAMSUNG S95B 55" QD-OLED TV
Power Supply RM850x
[..] The next node for Desktop GPUs will be UDNA. [..]
If true, yes, you hit the nail on the head: one could argue why get RDNA4 now if a major architecture overhaul will supersede it and it will be the last RDNA architecture. If one doesn't really need a new GPU, one may wait. CUDA-like support on consumer hardware in UDNA1, let's goo. With UDNA, the same team can develop for both HPC and consumer, and it should save AMD money; money is the only reason why any company would do it, not out of the goodness of their hearts (e.g. losing market share to CUDA). Supposedly PlayStation 6 may also use UDNA1, another reason to wait.
It is nice that AMD tries different things, like the chiplet-based design in RDNA3 or the architecture split of CDNA and RDNA, although they are reverting both: back to UDNA (CDNA + RDNA) and back to a monolithic chip design in RDNA4 (at least so far with the 9070 (XT)).
 
Joined
Apr 23, 2020
Messages
70 (0.04/day)
My 4090 has never exceeded 16GB vram allocation. For gaming, 32GB is totally unnecessary and, at least for now, 16GB is enough for anything (well, maybe there are some very unique edge cases). Can't say for how long that will be the case. But for now it is. I wish the 5090 had less vram, to be honest. It would be less desirable for non-gamers and therefore demand would be lowered. Obviously that's not in Nvidia's best interest, but imo, it's in gamers' best interest.

Besides, there's already an enterprise line; if you need super high amounts of vram, go through those channels. But I am guessing this is for the productivity individual and small business that can't afford an enterprise card. But still, I personally do not like GeForce cards with specs clearly aimed at non-gaming uses. GeForce is supposed to be the gaming line of cards.

Anyway, sorry, I guess that's off topic. To bring this back to AMD: I think 16GB is fine, developers are still trying to consider how to have 8 and 12gb cards be compatible with their games. I think 16GB will be safe for a while longer. And these cards are supposed to be mid-range. The complete vacuum of cards from everybody right now is obviously going to jack up prices beyond what was intended, but still... they said they are not going halo, and 16GB is enough for midrange now and presumably a while into the future as well.

And yeah, I guess for the moment, midrange is now 700+ unless you get lucky and live by a Micro Center or something, where I hear they have stock. Nowhere in my area has stock. One place offers a queue for the non-XT..... that's it.

And it makes me nervous for what I'd do if my 4090 bites the dust....
No game has ever exceeded 16GB, because there has never been a GPU for games above 16GB.

When a developer is going to make a game, they purposely limit the use of VRAM.

Imagine if they were to make a game and didn't set a limit on the number of objects and textures in the game? They could easily use 50GB and the game would constantly crash.

If the Game Director wants the game to use a maximum of 12GB, then no developer will be able to exceed that limit.

That's the basics of the basics.
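A tiny illustration of that budgeting idea (entirely made-up numbers and category names, just to show the concept of a hard VRAM ceiling that content decisions are checked against):

```python
# Illustrative only: a project-wide VRAM ceiling that asset and streaming
# decisions are checked against, as described above. All figures and
# category names here are hypothetical.

VRAM_BUDGET_GB = 12.0   # e.g. a director-mandated ceiling for the target hardware tier

planned_usage_gb = {
    "texture pool":   6.5,
    "geometry":       2.0,
    "render targets": 1.8,
    "misc buffers":   0.9,
}

total = sum(planned_usage_gb.values())
print(f"planned usage: {total:.1f} / {VRAM_BUDGET_GB:.1f} GB")

if total > VRAM_BUDGET_GB:
    print("over budget -> shrink the texture pool or stream more aggressively")
else:
    print(f"headroom: {VRAM_BUDGET_GB - total:.1f} GB")
```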
 

wolf

Better Than Native
Joined
May 7, 2007
Messages
8,556 (1.31/day)
System Name MightyX
Processor Ryzen 9800X3D
Motherboard Gigabyte B650I AX
Cooling Scythe Fuma 2
Memory 32GB DDR5 6000 CL30 tuned
Video Card(s) Palit Gamerock RTX 5080 oc
Storage WD Black SN850X 2TB
Display(s) LG 42C2 4K OLED
Case Coolermaster NR200P
Audio Device(s) LG SN5Y / Focal Clear
Power Supply Corsair SF750 Platinum
Mouse Corsair Dark Core RBG Pro SE
Keyboard Glorious GMMK Compact w/pudding
VR HMD Meta Quest 3
Software case populated with Artic P12's
Benchmark Scores 4k120 OLED Gsync bliss
Which goes down into semantics, if it's the most you can have due to whatever conditions (market, technology limitations, choice of the vendor not to release anything better, etc.) it's also the best you can have.
Yeah I mean it's undeniably the fastest gaming graphics card on the market, but people could have (or invent) their own reasons why it wouldn't suit them I guess. It's certainly not for everyone, but I'd agree the word best fits here. Semantics perhaps.
 
Joined
Oct 27, 2022
Messages
150 (0.17/day)
Location
Fort Worth, Texas
System Name The TUF machine shh...
Processor Ryzen 7 5800X3D (4.5Ghz)
Motherboard TUF GAMING X570-PRO (WI-FI) BIOS 5021
Cooling EK 360mm AIO Elite, D-RGB, 2x240mm rads
Memory G.SKILL RIPJAWS V 32GB (2 x 16GB) DDR4 3866mhz cl18-16-21-20-29
Video Card(s) ASUS TUF RX 7900 XTX OC/PTM 7950 (Liquid Cooled)
Storage T-FORCE CARDEA ZERO Z330 1TB , Crucial P3 Plus 1TB
Display(s) Gigabyte G27FC A, G32QC A
Case Thermaltake Tower 500
Audio Device(s) R-120SW, R-100SW, Logitech X-240 2.1 Speakers, Skullcandy PLYR
Power Supply Corsair RM850x
Mouse Logitech G502 Hero
Keyboard Logitech G815
Software Windows 11x64 Home
Benchmark Scores Time Spy 21,266 6700XT(2x)/5800X3D 12,569 6700XT/5800X3D
No game has ever exceeded 16Gb, because there has never been a GPU for games above 16Gb.
Let's not forget about the system resource hog "Star Citizen".
 