Saturday, February 1st 2025

Edward Snowden Lashes Out at NVIDIA Over GeForce RTX 50 Pricing And Value
It's not every day that we witness a famous NSA whistleblower voice their disappointment over modern gaming hardware. Edward Snowden, who likely needs no introduction, did not hold back his disapproval of NVIDIA's recently launched RTX 5090, RTX 5080, and RTX 5070 gaming GPUs. Reviews of the RTX 5090 have been mostly positive, although the same cannot be said for its more affordable sibling, the RTX 5080. Snowden, voicing his thoughts on Twitter, claimed that NVIDIA is selling "F-tier value for S-tier prices".
Needless to say, the RTX 5090's pricing is quite exorbitant, regardless of how anyone puts it. Snowden was particularly displeased with the amount of VRAM on offer, which is also hard to argue against. The RTX 5080 ships with "only" 16 GB of VRAM, whereas Snowden believes it should have shipped with at least 24 GB, or even 32 GB. He further added that the RTX 5090, which ships with a whopping 32 GB of VRAM, should have been available in a 48 GB variant. As for the RTX 5070, the security consultant expressed a desire for at least 16 GB of VRAM (instead of 12 GB).

But that is not all Snowden had to say. He equated selling $1,000+ GPUs with 16 GB of VRAM to a "monopolistic crime against consumers," further accusing NVIDIA of "endless next-quarter" thinking. This is debatable, considering that NVIDIA is a publicly traded company, and whether it stays afloat does boil down to its quarterly results, whether we like it or not. There is no denying that NVIDIA is in desperate need of some true competition in the high-end segment, which appears to be the only way to get the Green Camp to price its hardware appropriately. AMD's UDNA GPUs are likely set to do just that in a year or two. The rest, of course, remains to be seen.
Source: @Snowden
243 Comments on Edward Snowden Lashes Out at NVIDIA Over GeForce RTX 50 Pricing And Value
Give us a break with the exaggerated title. He was just voicing a detailed opinion on a consumer product. Get over it.
You can either have enough raw power to use high-polycount assets everywhere, or you can make smart use of LOD: swap in low-resolution versions of assets, or remove them entirely, in the distance, because you won't notice the lost detail on things that far away anyway. Done right, it's the right approach; done wrong, you get pop-in or a very obvious switch to a higher-quality asset that feels unnatural. LOD and culling (not rendering things that aren't visible) are techniques that are still being improved to this day. It would have been easier to just brute-force high-quality assets and textures, but developers would rather spend that extra power on something else and keep refining the trick until they consistently get it right.
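To make the idea concrete, here's a minimal sketch (my own, not taken from any particular engine; the distance thresholds are made up) of distance-based LOD selection with a cull cutoff:

```cpp
#include <array>
#include <cstddef>
#include <cstdio>

// Hypothetical distance thresholds (world units) for switching to a simpler mesh.
// Real engines tune these per asset and often key off projected screen size instead.
constexpr std::array<float, 3> kLodDistances = {25.0f, 75.0f, 200.0f};

// Pick a level of detail: 0 = full-detail mesh, higher = progressively simplified.
// Returns -1 for objects beyond the last threshold, i.e. cull (don't draw at all).
int selectLod(float distanceToCamera) {
    for (std::size_t i = 0; i < kLodDistances.size(); ++i) {
        if (distanceToCamera < kLodDistances[i]) {
            return static_cast<int>(i);
        }
    }
    return -1; // too far away to matter: skip rendering entirely
}

int main() {
    for (float d : {10.0f, 60.0f, 150.0f, 500.0f}) {
        std::printf("distance %.0f -> LOD %d\n", d, selectLod(d));
    }
    return 0;
}
```

Real engines add smoothing on top of this, for example hysteresis or dithered transitions so an object doesn't flicker between levels, but the trade-off is the same: spend detail only where the eye can actually see it.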
CG is like magic: when you can't see the tricks, or aren't aware of them, it's wonderful. There's something to be said about how some console gamers think their consoles give them the best experience, when they have been using a less elaborate upscaling solution than the PC for over a decade... and consoles used to render natively (and that's with the hardware still being sold at a loss at launch).
And the big irony is that the platform that could benefit the most from upscaling had to be prodded into it by the appearance of upscaling on the PC, where it's treated as a plague that needs to be stamped out (and I'm barely being hyperbolic here).
And that's a 2003 book
(Not that I necessarily disagree with some of his points, mind you; I'm just confused.)
I blame all of you.
Was 4th-gen Intel Core around at that time? If you could do the same 2014 vs. 2024 perf./price math on that, it would be cool.
Competition, competition, competition = good for consumer/market efficiency.
And Lumen calculates RT programmatically on regular "raster" shader units, quite successfully, rather than on dedicated hardware RT cores. Lumen considers itself RT, BUT reviewers, including TPU, for some reason don't count it, or the games using it, as RT! Strange, isn't it?
Would their opinion change if the HW-RT calculation path were enabled? And if it were enabled for one GPU and not for another, what effect should that have on anything? Should the calculation result change in any way?
Stop drinking their Kool-Aid and get help; Stockholm syndrome can be healed.