(3919mhz)
Oh, ok, I see what they did there. Got it. That's pretty cool. I mean, theoretically, right, that would only take ~26.1xx gbps ram to run full-out w/ RT (~5% more than raster)?
Probably hot as hell, though, if it's even possible with other stuff running. NGL, I'm surprised that clock is even possible at all on 4nm. That actually could use more than 16GB of ram.
I wonder if that's truly even possible in any practical 3D application, or rather prepping the design for 3nm. Perhaps both. That could actually be pretty interesting.
What I'm curious about is...are they running split raster/shader clocks again? Which one is getting reported in GPU-Z? Are they doing something weird like a clock-domain setup similar to Zen 5c?
Side note: I was always trying to figure out why the 5080 was limited to ~3154mhz avg. As I've said before, the 5080FE at stock (2640mhz stable) requires 22gbps of bw. At 3154mhz it would need ~26.28gbps.
Which is a weird place to put a general cut-off. 22gbps is nice and obvious; beat a 20gbps GDDR6 design. Which they kinda-sorta didn't, but it makes sense bc maybe they didn't account for the cache increase.
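If you want the napkin math behind that requirement number, here it is. This is just my linear-scaling rule of thumb (required memory speed scales with core clock for a fixed chip), not anything official, and the little helper function is mine:

```python
# Napkin math: assume required memory speed scales linearly with core clock
# for a fixed chip config (my assumption). Baseline is the 5080FE figure above.
BASE_CLOCK_MHZ = 2640   # stock-stable 5080FE clock
BASE_GBPS = 22.0        # bandwidth that clock needs (per my earlier write-up)

def required_gbps(clock_mhz):
    """Memory speed (Gbps/pin) needed to keep GB203 fed at a given core clock."""
    return BASE_GBPS * clock_mhz / BASE_CLOCK_MHZ

print(round(required_gbps(3154), 2))  # 26.28 -> the weird ~3154mhz cut-off
print(round(required_gbps(2970), 2))  # 24.75 -> the 'advertised' 2.97ghz clock, for comparison
```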
Yeah, that'll happen. That fake advertised clock of 2.97ghz, insinuating the old L2, will getcha every time when the card actually runs 3.15ghz (which is the difference the cache makes).
Sometimes you gotta rush an article nobody will ever read onto a forum somewhere as soon as you realize why it matters, even though nobody else probably cares...until they do (but still don't understand).
You'll get why nVIDIA probably set their clocks the way they did in a second, but now the clock limit/weird bandwidth requirement makes even more sense if that's the actual capability of N48.
And then ofc for excess bw it's about 16% perf on average for a doubling over what's required...blahblahblah...the gain decreases if they actually use the bandwidth for real compute, but having excess still helps.
Which they will sell to everyone, as they have, as the second coming of the flying spaghetti monster, when it really isn't that big of a deal (<6% extra perf currently on the 5080).
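In case anyone wants to check that <6% figure: here's how I'm eyeballing it, assuming the gain scales roughly linearly up to that doubling (that interpolation is my assumption, not a benchmark, and the function is just for illustration):

```python
# My rough "excess bandwidth" rule: ~16% average perf for doubling the bandwidth
# over what's required, interpolated linearly below a full doubling (my assumption).
def excess_perf_pct(available_gbps, required_gbps, per_doubling=16.0):
    """Approximate average perf gain (%) from bandwidth beyond the requirement."""
    ratio = available_gbps / required_gbps
    return per_doubling * min(ratio - 1.0, 1.0)  # cap the simple rule at one doubling

print(round(excess_perf_pct(30.0, 22.0), 1))  # 5.8 -> i.e. the '<6% extra on the 5080' above
```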
But you guys don't care about that part.
I do, so you know it doesn't actually matter much, but could have.
To actually *use* 30gbps on GB203, they would need a 3600mhz core clock w/ 10752sp.
Gives you an idea of how these designs *could* have gone. You know, that way on N4P...or 12288sp @ 3150mhz even on the current '4NP'...exactly...which they also didn't give us, but could have.
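Same napkin model, just with the shader count in it, pinned to the GB203 numbers above. This is my working assumption (obviously nVIDIA doesn't publish a formula like this), and the function name is mine:

```python
# Required memory speed treated as proportional to (sp count x core clock),
# pinned to GB203 needing 30gbps at 10752sp / 3600mhz (from above). My assumption.
REF_SP, REF_CLOCK_MHZ, REF_GBPS = 10752, 3600, 30.0

def required_gbps(sp, clock_mhz):
    """Memory speed (Gbps/pin) needed for a given shader count and core clock."""
    return REF_GBPS * (sp * clock_mhz) / (REF_SP * REF_CLOCK_MHZ)

print(round(required_gbps(10752, 3600), 1))  # 30.0 -> actually using 30gbps on GB203
print(round(required_gbps(12288, 3150), 1))  # 30.0 -> the wider/slower way to get there
```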
Because, well, greed. At some point nVIDIA fans really should be sad when they realize the very obvious designs nVIDIA has tested...and then decided "No, fuck it, sell it about six more times until they get that".
Ofc, if they *had* made that design, the replacement Rubin would be 9216sp @ 4200/40000. But no, they'll probably sell each step (36000/40000) as a different gen, perhaps using a denser process at first...because it's cheaper (especially w/ Micron ram), and even then the small boost later as a freakin' 6070 Super or some shit. Maybe 7070. Each step sold as a boost. You want MORE fuckery?
You may ask yourself: why not just 12288sp @ 3360/32000, the actual speed of the fucking ram, on 4nm and within the capability of even the dense process? Answer: because nVIDIA does as little as it can, so it can sell it again.
Because then the 12288sp part on 3nm with higher clocks would be the replacement. Not the 9216sp part. Potentially twice. Are you following?
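Plug the configs I just listed into that same napkin formula and you can see why they all land where they do (same assumption as before, not nVIDIA's spec sheet):

```python
# Same napkin formula as above: required Gbps ~= 30 * sp * mhz / (10752 * 3600).
req = lambda sp, mhz: 30.0 * sp * mhz / (10752 * 3600)

print(round(req(9216, 4200), 1))   # 30.0 -> the 'Rubin replacement' needs the exact same feed
print(round(req(12288, 3360), 1))  # 32.0 -> i.e. what 32gbps ram can actually do, right now
```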
Kinda just got interesting, though, imo. I still think nVIDIA needs to give people a guaranteed ~3.23-3.24ghz clock on a 24GB model to make that product, especially if AMD pulls this off (which would be nice), or else the gap between that and even the next smaller Rubin (9216sp, low 3nm clocks) will do the exact same shit they keep doing to outdate GPUs (unlike AMD). They'll still probably do it again when they eventually use 40gbps ram (and high clocks), but if they do it twice even after already using a massively cut-down design on Blackwell, only for an even smaller Rubin to replace it as an 'upgrade' (twice)...that'd be funny.
That really is selling one possible design like 6 times, as cheaply as possible for them each time.
While looking like they're improving things. And it will work.
Because most people probably will not understand what I just wrote.
Don't forget, the quality of DLSS will likely magically improve as a 'feature' (absolutely not an obsolescence technique) by the compute difference between the GPUs each time, relegating the older card under 60fps in newer titles (absolutely sponsored by nVIDIA) while the new one makes the cut each time.
Because they're just that much better, guys. Shit, I left my italics on. Pretend the sentence two sentences ago was in even more italics to symbolize sarcasm. Bold or underline just doesn't do it justice.
You know, just thought I'd throw it out there for about the 407th time, in case nVIDIA doesn't realize they're not going to get away with doing that if I can help it.
Different designs are one thing; the DLSS thing, not unlike never giving enough buffer, is fuckin' bullshit. Because they *know* people won't understand.
Will they even do 3.24+ with a 24GB model, or just try to beat AMD using clocks similar to the current 5080, which may be enough for some current games at 1440p when not limited by ram?
Probably the latter, bc nVIDIA.
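For what it's worth, the ~3.23-3.24ghz clock I keep asking for runs through the same napkin formula just fine (again, my assumption, not anything nVIDIA has said):

```python
# Same napkin formula again: required Gbps ~= 30 * sp * mhz / (10752 * 3600).
req = lambda sp, mhz: 30.0 * sp * mhz / (10752 * 3600)
print(round(req(10752, 3240), 1))  # 27.0 -> still under the 30gbps they already ship on GB203
```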
I have no doubt they tested 24gbps (and the N4P process) for the practical limit; the 3600mhz ideal for 30gbps ram insinuates that, if not something similar or even AMD's design itself.
Either practically or 'with the power of [5070 is a 4090] AI'. On the "so we don't make Fermi again" supercomputer. That made Blackwell. The greatest GPU family in existence, some people say. At least one guy.
That guy wasn't me.
This is what I mean, though. I don't know how they know, but they ALWAYS KNOW what AMD is capable of doing. The popcorn moment is whether nVIDIA will give people that GPU clock, which might actually force them to sell *slightly* better GPUs next generation. If they don't, they're still doing even more of the same shit, even after doing the same shit most of you don't even know they already did, which likely made their products worse three times over before you even knew they existed.
What do you guys think...do you think it's possible it might actually happen with these two GPUs? Any hope of getting decent stock 1440p RT?
The real truth is that you *know* AMD is trying to get there...and you *know* nVIDIA really hopes they won't have to make it happen.