
Edward Snowden Lashes Out at NVIDIA Over GeForce RTX 50 Pricing And Value

Joined
Feb 8, 2017
Messages
272 (0.09/day)
Dude, stop defending your abuser (Nvidia); get help, you have Stockholm syndrome. The fact is, Nvidia is making 60% gross profit on their gaming GPUs. Clearly there is a huge amount they could shave off the prices and still be profitable, or at the very least provide good-value GPUs. The thing is, they don't want to provide decent value; they want to rip you off and screw you in every possible way.
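Just to illustrate what that margin figure implies (the MSRP below is my own assumption for the sake of example, and gross margin ignores R&D and operating costs):

```python
# Illustrative sketch of what a 60% gross margin implies for pricing.
msrp = 549.0                        # assumed card price, 5070-class MSRP
gross_margin = 0.60                 # the margin figure cited above
cogs = msrp * (1 - gross_margin)    # cost of goods sold ~= $219.60
headroom = msrp - cogs              # room to cut before selling at cost ~= $329.40
print(f"COGS ~ ${cogs:.2f}, up to ${headroom:.2f} of price headroom")
```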

Stop drinking their Kool-Aid and get help; Stockholm syndrome can be healed.
 
Joined
Jun 26, 2023
Messages
58 (0.10/day)
Processor 7800X3D @ Curve Optimizer: All Core: -25
Motherboard TUF Gaming B650-Plus
Memory 2xKSM48E40BD8KM-32HM ECC RAM (ECC enabled in BIOS)
Video Card(s) 4070 @ 110W
Display(s) SAMSUNG S95B 55" QD-OLED TV
Power Supply RM850x
Edward Snowden from twitter said:
[..] 5070 should have had 16GB VRAM minimum, 5080 w 24/32 SKUs, [..]
That's basically what I have been saying. 16GB on the 5070 would force NV to use a 256-bit chip, which is more expensive than the current 192-bit one (of course, I'd have nothing against a 256-bit chip if it weren't more expensive). So I'd also be OK with 3GB VRAM modules instead of the current 2GB ones (= 18GB VRAM on the same 192-bit bus).
For local LLM self-hosting/inference, we need at least 4GB VRAM modules and at least 48GB of VRAM per cheap consumer GPU (possible with 192-bit, 256-bit, and 384-bit chips).
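A minimal sketch of that bus-width arithmetic (the function is mine, purely illustrative; it assumes each GDDR module occupies a 32-bit slice of the memory bus):

```python
# Sketch: VRAM capacity from memory bus width and per-module density.
# Each GDDR6/GDDR7 module connects to a 32-bit slice of the bus,
# so module count = bus width / 32.

def vram_capacity_gb(bus_width_bits: int, module_gb: int) -> int:
    return (bus_width_bits // 32) * module_gb

print(vram_capacity_gb(192, 2))  # 12 GB -> the current 5070 configuration
print(vram_capacity_gb(192, 3))  # 18 GB -> same 192-bit bus, 3GB modules
print(vram_capacity_gb(256, 2))  # 16 GB -> needs the pricier 256-bit chip
print(vram_capacity_gb(384, 4))  # 48 GB -> hypothetical 4GB modules
```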
 
Joined
Feb 18, 2005
Messages
5,986 (0.82/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) Dell S3221QS(A) (32" 38x21 60Hz) + 2x AOC Q32E2N (32" 25x14 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G604
Keyboard Razer Pro Type Ultra
Software Windows 10 Professional x64
Especially because all these RT effects can be calculated on standard "raster" shader units without HW RT cores
Then you don't have real-time ray-tracing. You have ray-tracing at 1 frame every 10 seconds.
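The order-of-magnitude gap is easy to sketch; every figure below is an assumption for illustration, not a measurement:

```python
# Illustrative: frame time for tracing a 4K frame on "raster" shader
# cores vs. dedicated RT cores (all throughput rates assumed).
pixels = 3840 * 2160                # ~8.3M pixels at 4K
rays_per_pixel = 20                 # samples x bounces x shadow rays (assumed)
rays = pixels * rays_per_pixel      # ~166M rays per frame

software_rate = 2e8                 # assumed rays/s via compute shaders
hardware_rate = 1e10                # assumed rays/s with RT cores

print(f"software: {rays / software_rate:.2f} s/frame")         # ~0.83 s/frame
print(f"hardware: {rays / hardware_rate * 1e3:.1f} ms/frame")  # ~16.6 ms/frame
```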

That's basically what I have been saying. 16GB on the 5070 would force NV to use a 256-bit chip, which is more expensive than the current 192-bit one (of course, I'd have nothing against a 256-bit chip if it weren't more expensive). So I'd also be OK with 3GB VRAM modules instead of the current 2GB ones (= 18GB VRAM on the same 192-bit bus).
For local LLM self-hosting/inference, we need at least 4GB VRAM modules and at least 48GB of VRAM per cheap consumer GPU (possible with 192-bit, 256-bit, and 384-bit chips).
LLMs are a professional workload. If you want to run those, buy a professional-grade GPU that has sufficient VRAM.
 
Joined
Jun 22, 2012
Messages
316 (0.07/day)
Processor Intel i7-12700K
Motherboard MSI PRO Z690-A WIFI
Cooling Noctua NH-D15S
Memory Corsair Vengeance 4x16 GB (64GB) DDR4-3600 C18
Video Card(s) MSI GeForce RTX 3090 GAMING X TRIO 24G
Storage Samsung 980 Pro 1TB, SK hynix Platinum P41 2TB
Case Fractal Define C
Power Supply Corsair RM850x
Mouse Logitech G203
Software openSUSE Tumbleweed
LLMs are a professional workload.
Far from it; most people on /r/LocalLlama on Reddit are using them for entertainment, believe it or not.
The recent DeepSeek R1 release has also made a lot of new people interested in running LLMs locally beyond strictly professional uses.
 
Joined
Feb 18, 2005
Messages
5,986 (0.82/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) Dell S3221QS(A) (32" 38x21 60Hz) + 2x AOC Q32E2N (32" 25x14 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G604
Keyboard Razer Pro Type Ultra
Software Windows 10 Professional x64
Far from it; most people on /r/LocalLlama on Reddit are using them for entertainment, believe it or not.
The recent DeepSeek R1 release has also made a lot of new people interested in running LLMs locally beyond strictly professional uses.
I wish the mods would ban people who use Reddit as any sort of proof for anything. Since they won't, welcome to my ignore list.
 
Joined
Dec 14, 2011
Messages
1,233 (0.26/day)
Location
South-Africa
Processor AMD Ryzen 9 5900X
Motherboard ASUS ROG STRIX B550-F GAMING (WI-FI)
Cooling Noctua NH-D15 G2
Memory 32GB G.Skill DDR4 3600Mhz CL18
Video Card(s) ASUS GTX 1650 TUF
Storage SAMSUNG 990 PRO 2TB
Display(s) Dell S3220DGF
Case Corsair iCUE 4000X
Audio Device(s) ASUS Xonar D2X
Power Supply Corsair AX760 Platinum
Mouse Razer DeathAdder V2 - Wireless
Keyboard Corsair K70 PRO - OPX Linear Switches
Software Microsoft Windows 11 - Enterprise (64-bit)
I wish the mods would ban people who use Reddit as any sort of proof for anything. Since they won't, welcome to my ignore list.

Oh, I had a good chuckle, thank you. Yes, Reddit... that place... urgh.

It does make for an interesting study on human behaviour though. :roll:

 
Joined
Jun 22, 2012
Messages
316 (0.07/day)
Processor Intel i7-12700K
Motherboard MSI PRO Z690-A WIFI
Cooling Noctua NH-D15S
Memory Corsair Vengeance 4x16 GB (64GB) DDR4-3600 C18
Video Card(s) MSI GeForce RTX 3090 GAMING X TRIO 24G
Storage Samsung 980 Pro 1TB, SK hynix Platinum P41 2TB
Case Fractal Define C
Power Supply Corsair RM850x
Mouse Logitech G203
Software openSUSE Tumbleweed

For those who weren't aware, /r/LocalLlama is probably the largest single local LLM user community on the Internet. Some professionals and people from the industry write there, but it's mostly amateurs, definitely not mostly professionals. Calling LLMs in general a "professional workload" is laughable, considering that their size ranges from small enough to run on a smartphone to large enough that you need a GPU farm.
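To put rough numbers on that range, here is a minimal sketch that only counts the memory needed to hold the weights (it ignores KV cache and runtime overhead; the function and figures are illustrative):

```python
# Sketch: VRAM needed just to store model weights at a given quantization.
def weight_vram_gb(params_billion: float, bits_per_weight: float) -> float:
    # params * bits / 8 bits-per-byte; billions of params cancel into GB
    return params_billion * bits_per_weight / 8

print(f"{weight_vram_gb(3, 4):.1f} GB")    # ~1.5 GB: 3B model at 4-bit, phone-class
print(f"{weight_vram_gb(70, 4):.1f} GB")   # ~35 GB: 70B at 4-bit, 48GB-class GPU
print(f"{weight_vram_gb(671, 8):.1f} GB")  # ~671 GB: DeepSeek R1 scale, GPU farm
```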
 
Joined
Jan 14, 2019
Messages
14,068 (6.36/day)
Location
Midlands, UK
Processor Various Intel and AMD CPUs
Motherboard Micro-ATX and mini-ITX
Cooling Yes
Memory Overclocking is overrated
Video Card(s) Various Nvidia and AMD GPUs
Storage A lot
Display(s) Monitors and TVs
Case The smaller the better
Audio Device(s) Speakers and headphones
Power Supply 300 to 750 W, bronze to gold
Mouse Wireless
Keyboard Mechanic
VR HMD Not yet
Software Linux gaming master race
When I say raster is a hack, I'm not being pejorative per se; I am simply referring to how it handles light, as opposed to RT. The human eye, which is what we use to perceive visuals, begins and ends with light, and so does RT, whereas in raster, light is an afterthought that has to be simulated, poorly. To quote Matt Pharr from the excellent link posted by @dyonoctis, with an additional bolded word inserted by me for clarity:
Ah, I see your point.

Raster exists only because we couldn't do real-time RT until recently, much like how we only used horse-drawn carriages until we were able to produce internal combustion engines small and light enough to move those same carriages. It's not bad; it's just had its time, and that time is now over. We need to stop trying to make horse-drawn carriages better when we can instead make better cars.
But the thing is, those cars had to be cheap enough for people using horse carriages to consider switching. It took a long time, just as RT seems to be taking an awfully long time to run properly on midrange hardware.

"Perfect is the enemy of good enough", and as an engineer I agree completely. But here's the thing, nobody - and I mean nobody - who works in graphics rendering (I'm talking people like Pharr, and computer scientists) wants to use rasterisation, because it is so god-awfully complex, and therefore brittle and imperfect, compared to RT.
Then they should focus on making RT run on hardware below the high end, so that a larger customer base can enjoy it.

It's always been a pejorative term; the fact that it got repurposed as a synonym for "tip" is one of those particularly American desecrations of English that I refuse to acknowledge.
That's a brilliant way to put it, and I agree completely! :)
 
Joined
Oct 30, 2020
Messages
396 (0.25/day)
Location
Toronto
System Name GraniteXT
Processor Ryzen 9950X
Motherboard ASRock B650M-HDV
Cooling 2x360mm custom loop
Memory 2x24GB Team Xtreem DDR5-8000 [M die]
Video Card(s) RTX 3090 FE underwater
Storage Intel P5800X 800GB + Samsung 980 Pro 2TB
Display(s) MSI 342C 34" OLED
Case O11D Evo RGB
Audio Device(s) DCA Aeon 2 w/ SMSL M200/SP200
Power Supply Superflower Leadex VII XG 1300W
Mouse Razer Basilisk V3
Keyboard Steelseries Apex Pro V2 TKL
Roman's posted his take on stock levels and I agree with him. Now where are the defenders of the faith springing to Nvidia's defense with statements such as 'Newegg had pallets of 50-series GPUs, so stock levels were fine' or similarly shocking stuff along those lines? Oh wait, they went quiet a few pages ago when they realised the error of their ways, but it's funny seeing those posts. Do watch it though, in case you somehow still have doubts.

Funny thing is, it seems like the 9070 XT and the 50 series ramped at the same time; that part is pretty clear. One decided to launch a few hundred cards worldwide for jokes, and the other decided to make an absolute mess of this situation.

All AMD had to do was avoid this whole back-and-forth mess and say 'we're not going to do a terribly shitty launch with a few hundred cards worldwide; we'll have a proper launch when enough cards are available'. Based on the current launch, that would have been 100% plausible.
 