Saturday, February 1st 2025

Edward Snowden Lashes Out at NVIDIA Over GeForce RTX 50 Pricing And Value

It's not every day that we witness a famous NSA whistleblower voice their disappointment over modern gaming hardware. Edward Snowden, who likely needs no introduction, did not hold back his disapproval of NVIDIA's recently launched RTX 5090, RTX 5080, and RTX 5070 gaming GPUs. Reviews for the RTX 5090 have been mostly positive, although the same cannot be said for its more affordable sibling, the RTX 5080. Voicing his thoughts on Twitter, Snowden claimed that NVIDIA is selling "F-tier value for S-tier prices".

Needless to say, the RTX 5090's pricing is exorbitant no matter how one puts it. Snowden was particularly displeased with the amount of VRAM on offer, which is also hard to argue against. The RTX 5080 ships with "only" 16 GB of VRAM, whereas Snowden believes it should have shipped with at least 24 GB, or even 32 GB. He further added that the RTX 5090, which ships with a whopping 32 GB of VRAM, should also have been offered in a 48 GB variant. As for the RTX 5070, the security consultant would have liked to see at least 16 GB of VRAM instead of 12 GB.
But that is not all Snowden had to say. He equated selling $1,000+ GPUs with 16 GB of VRAM to a "monopolistic crime against consumers," further accusing NVIDIA of "endless next-quarter" thinking. That point is debatable: NVIDIA is a publicly traded company, and, like it or not, whether it stays afloat does boil down to its quarterly results. There is no denying that NVIDIA is in desperate need of true competition in the high-end segment, which appears to be the only way to get the Green Camp to price its hardware appropriately. AMD's UDNA GPUs may do just that in a year or two. The rest, of course, remains to be seen.
Source: @Snowden

243 Comments on Edward Snowden Lashes Out at NVIDIA Over GeForce RTX 50 Pricing And Value

#226
10tothemin9volts
Edward Snowden from twitter: [..] 5070 should have had 16GB VRAM minimum, 5080 w 24/32 SKUs, [..]
That's basically what I have been saying. 16 GB on the 5070 would force NV to use a 256-bit chip, which is more expensive than the current 192-bit one (of course, I'd have nothing against a 256-bit chip if it weren't more expensive). So I'd also be OK with using 3 GB VRAM modules instead of the current 2 GB ones (= 18 GB VRAM).
For local LLM self-hosting/inference, we need at least 4 GB VRAM modules and at least 48 GB of VRAM per cheap consumer GPU (possible with 192-bit, 256-bit, and 384-bit buses).
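For the curious, the arithmetic behind these figures is straightforward: each GDDR module has a 32-bit interface, so capacity is (bus width ÷ 32) × module density, doubled in a clamshell layout that mounts two modules per channel. A minimal sketch of that arithmetic (the function name and the example configurations are purely illustrative):

```python
# Back-of-the-envelope VRAM sizing from memory bus width and module density.
# Assumes standard GDDR modules with a 32-bit interface each; a "clamshell"
# layout mounts two modules per 32-bit channel, doubling total capacity.

MODULE_BUS_WIDTH = 32  # bits per GDDR module

def vram_gb(bus_width_bits: int, module_gb: int, clamshell: bool = False) -> int:
    """Total VRAM in GB for a given bus width and per-module density."""
    modules = bus_width_bits // MODULE_BUS_WIDTH
    return modules * module_gb * (2 if clamshell else 1)

print(vram_gb(192, 2))                  # 12 GB -- 192-bit bus, 2 GB modules
print(vram_gb(192, 3))                  # 18 GB -- same bus, 3 GB modules
print(vram_gb(256, 2))                  # 16 GB -- 256-bit bus, 2 GB modules
print(vram_gb(192, 4, clamshell=True))  # 48 GB -- 4 GB modules, clamshell layout
```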
#227
Assimilator
Contra: Especially because all these RT effects can be calculated on standard "raster" shader units without hardware RT cores
Then you don't have real-time ray-tracing. You have ray-tracing at 1 frame every 10 seconds.
10tothemin9volts: That's basically what I have been saying. 16 GB on the 5070 would force NV to use a 256-bit chip, which is more expensive than the current 192-bit one (of course, I'd have nothing against a 256-bit chip if it weren't more expensive). So I'd also be OK with using 3 GB VRAM modules instead of the current 2 GB ones (= 18 GB VRAM).
For local LLM self-hosting/inference, we need at least 4 GB VRAM modules and at least 48 GB of VRAM per cheap consumer GPU (possible with 192-bit, 256-bit, and 384-bit buses).
LLMs are a professional workload. If you want to run those, buy a professional-grade GPU with sufficient VRAM.
#228
Solid State Brain
Assimilator: LLMs are a professional workload.
Far from it; most people on /r/LocalLlama on Reddit are using them for entertainment, believe it or not.
The recent DeepSeek R1 release has also made a lot of new people interested in running LLMs locally beyond strictly professional uses.
#229
Assimilator
Solid State Brain: Far from it; most people on /r/LocalLlama on Reddit are using them for entertainment, believe it or not.
The recent DeepSeek R1 release has also made a lot of new people interested in running LLMs locally beyond strictly professional uses.
I wish the mods would ban people who used reddit as any sort of proof for anything. Since they won't, welcome to my ignore list.
#230
Legacy-ZA
Assimilator: I wish the mods would ban people who used reddit as any sort of proof for anything. Since they won't, welcome to my ignore list.
Oh, I had a good chuckle, thank you. Yes, Reddit... that place... urgh.

It does make for an interesting study on human behaviour though. :roll:

#231
Solid State Brain
For those who weren't aware, /r/LocalLlama is probably the largest single community of local LLM users on the Internet. Some professionals and individuals from the industry write there, but it's mostly amateurs, definitely not mostly professionals. Calling LLMs in general a "professional workload" is laughable, considering that they range from small enough to run on a smartphone to large enough to need a GPU farm.
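To put rough numbers on that range: a model's weight footprint is roughly parameter count × bytes per parameter, so quantization matters as much as raw size. A minimal sketch of that estimate (the example model sizes are illustrative; real usage adds KV cache and runtime overhead on top):

```python
# Rough memory footprint of LLM weights: parameters x bytes per parameter.
# Excludes KV cache, activations, and runtime overhead, which add more.

def weights_gb(params_billions: float, bits_per_param: float) -> float:
    """Approximate weight footprint in GB at a given quantization level."""
    return params_billions * (bits_per_param / 8)

print(f"{weights_gb(3, 4):.1f} GB")    # ~1.5 GB -- 3B model at 4-bit: phone territory
print(f"{weights_gb(70, 4):.1f} GB")   # ~35 GB  -- 70B at 4-bit: 48 GB-class VRAM
print(f"{weights_gb(671, 8):.1f} GB")  # ~671 GB -- DeepSeek R1 at 8-bit: a GPU farm
```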
#232
AusWolf
Assimilator: When I say raster is a hack, I'm not being pejorative per se, I am simply referring to how it is implemented WRT light as opposed to RT. Because the human eye, which is what we use to perceive visuals, begins and ends with light - and so does RT, whereas in raster, light is an afterthought that has to be simulated - poorly. To quote Matt Pharr from the excellent link posted by @dyonoctis, with an additional bolded word inserted by me for clarity:
Ah, I see your point.
Assimilator: Raster exists only because we couldn't do real-time RT until recently, similarly to how we only used horse-drawn carriages until we were able to produce internal combustion engines small and light enough to move those same carriages. It's not bad, it's just had its time, and that time is now over; we need to stop trying to make horse-drawn carriages better when we can instead make better cars.
But the thing is that those cars had to be cheap enough for people using horse carriages to consider switching. It took a long time, just like RT seems to be taking an awfully long time to run properly on midrange hardware.
Assimilator"Perfect is the enemy of good enough", and as an engineer I agree completely. But here's the thing, nobody - and I mean nobody - who works in graphics rendering (I'm talking people like Pharr, and computer scientists) wants to use rasterisation, because it is so god-awfully complex, and therefore brittle and imperfect, compared to RT.
Then they should focus on making RT run on less-than-high-end hardware so that a larger customer base can enjoy it.
Assimilator: It's always been a pejorative term; the fact that it got repurposed to be a synonym for "tip" is one of those particularly American desecrations of English that I refuse to acknowledge.
That's a brilliant way to put it, and I agree completely! :)
#233
mkppo
Roman's posted his take on stock levels, and I agree with him. Now where are the defenders of the faith springing to NVIDIA's defense with statements like 'Newegg had pallets of 50-series GPUs, so stock levels were fine' or similarly shocking stuff along those lines? Oh wait, they went quiet a few pages ago when they realised the error of their ways, but it's funny seeing those posts. Do watch it though, in case you somehow still have doubts.

Funny thing is, it seems like the 9070 XT and the 50 series ramped at the same time; that part is pretty clear. One decided to launch a few hundred cards worldwide for jokes, and the other decided to make an absolute mess of the situation.

All AMD had to do was avoid this whole back-and-forth mess and say 'we're not going to do a terribly shitty launch with a few hundred cards worldwide; we'll have a proper launch when enough cards are available.' Based on the current launch, that would be 100% plausible.
#234
b1k3rdude
ZoneDymo: They have changed the title to "What Snowden just said is WILD (shocked face) read till the end (crying laughing emoji)".
I have never understood the American use of 'wild' in sentences; they appear to be trying to make it mean something it doesn't. Trying to be too polite and failing...?
#235
Caring1
OK, we have Edward Snowden's take on this, but what really matters is what important people who are experts on everything, like Greta and Angelina Jolie, think. What's their take on it? ;)
#236
zenlaserman
Blows my mind how large these 5K series are! Thas a lotta transistors tho! Prolly more than all 20+ Radeons I've owned since 2003 combined.

I saw in another thread how the 5090 is larger than a full 104-key keyboard. Yeah, like, that's effing gigantic, but there sure are a lot of transistors there!

I still remember fondly my first GPU with 1 billion transistors (the HD 4850). That is but a fraction of the madness now! Also, Snowden can suck a beet. One with a red head.
#237
Visible Noise
zenlaserman: Blows my mind how large these 5K series are! Thas a lotta transistors tho! Prolly more than my first 12 Radeons combined.
The 5090 is small compared to a 4090.

A 4090 is larger than an Xbox.
#238
AusWolf
zenlaserman: Blows my mind how large these 5K series are! Thas a lotta transistors tho! Prolly more than all 20+ Radeons I've owned since 2003 combined.

I saw in another thread how the 5090 is larger than a full 104-key keyboard. Yeah, like, that's effing gigantic, but there sure are a lot of transistors there!

I still remember fondly my first GPU with 1 billion transistors (the HD 4850). That is but a fraction of the madness now! Also, Snowden can suck a beet. One with a red head.
Since when is a graphics card being big (and thus, needing a bigger chassis to fit into) a good thing?
#239
chrcoluk
dyonoctis: It's not an NVIDIA buzzword; they were the first to market it, but not the only ones who've been looking into it. Intel, AMD, game studios, and academics are actively researching it. The goal is both to go beyond what rasterization can do (like good subsurface scattering, or complex shaders to accurately represent some materials) and to make path tracing more efficient rather than relying on brute force. Raster graphics is all about using tricks and other fakery for more efficient rendering. Neural rendering is the same principle, but it leverages machine learning to render the visuals.
Intel Is Working on Real-Time Neural Rendering
AMD's 'Neural Supersampling' seeks to close the gap with Nvidia's DLSS - NotebookCheck.net News

There's an interesting read about how the CG industry would rather use additional power to enable more visual effects than do the same thing faster, which is an issue with 4K and high refresh rates becoming a thing. The CEO of Epic says that you could give them a GPU 10x faster than what we have now, and they would still find a way to bring that hardware to its knees. But hardware isn't the only limitation. We've probably reached a point where CG artists are looking to fix problems that are glaring to their eyes but that most gamers don't see. Hence, any talk about why they are looking beyond raster graphics doesn't really land with people. A bit like how a seasoned painter would see areas of improvement in his craft where someone with untrained eyes might think he has already reached his peak.
I agree.

In the DF interview with the PS5's Mark Cerny, a game dev boss was also present. As noted in an above post, the PS5 Pro is basically prototyping stuff for the PS6; they also confirmed that the new features are in partnership with AMD, meaning it is probably an early UDNA chip. On your point about spare GPU performance, the dev in the interview confirmed that instead of using the extra ms available on each frame for performance, they improved visuals to fill the budget.
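The "extra ms" framing is just frame-time budget arithmetic: a target frame rate fixes the milliseconds available per frame, and whatever a faster GPU saves can be spent on visuals instead of fps. A minimal sketch, with illustrative numbers:

```python
# Frame-time budget: a target frame rate fixes the milliseconds per frame.
# Headroom left after rendering can fund extra visual effects instead of fps.

def frame_budget_ms(target_fps: float) -> float:
    """Milliseconds available per frame at a given target frame rate."""
    return 1000.0 / target_fps

budget = frame_budget_ms(60)     # 16.67 ms per frame at 60 fps
render_time = 10.0               # illustrative: current frame takes 10 ms
headroom = budget - render_time  # 6.67 ms a dev can spend on better visuals
print(f"{budget:.2f} ms budget, {headroom:.2f} ms headroom")
```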

Also of interest: in a stream I watched a few weeks back, there was a bug and the player fell partially through the floor; under the floor, rocks and other geometry were still being rendered. Think back to the days when tessellation was new and it hit the press that tessellated geometry was being rendered out of view, slowing games down.
#240
MentalAcetylide
Visible Noise: The 5090 is small compared to a 4090.

A 4090 is larger than an Xbox.
FFS, at what point do we say "f#%$ computer cases, let's just have the graphics card house everything!" :roll:
#241
Contra
MentalAcetylide: FFS, at what point do we say "f#%$ computer cases, let's just have the graphics card house everything!" :roll:
See the NVIDIA BlueField DPU. :)
#243
MentalAcetylide
Contra: See the NVIDIA BlueField DPU. :)
That's technically not a GPU, or at least not something you can game/render graphics with.