Monday, January 6th 2025

NVIDIA 2025 International CES Keynote: Liveblog

NVIDIA kicks off the 2025 International CES with a bang. The company is expected to debut its new GeForce "Blackwell" RTX 5000 generation of gaming graphics cards, and to launch new technology such as neural rendering and DLSS 4. It is also expected to highlight a new piece of silicon for Windows on Arm laptops, showcase the next in its Drive PX FSD hardware, and probably even talk about its next-generation "Blackwell Ultra" AI GPU and, if we're lucky, namedrop "Rubin." Join us as we liveblog CEO Jensen Huang's keynote address.

02:22 UTC: The show is finally underway!
02:35 UTC: CTA president Gary Shapiro kicks off the show, introduces Jensen Huang.
02:46 UTC: "Tokens are the building blocks of AI"

02:46 UTC: "Do you like my jacket?"
02:47 UTC: NVIDIA recounts its progress all the way back to NV1 and UDA.
02:48 UTC: "CUDA was difficult to explain, it took 6 years to get the industry to like it"
02:50 UTC: "AI is coming home to GeForce". NVIDIA teases neural material and neural rendering. Rendered on "Blackwell"
02:55 UTC: Every single pixel is ray traced, thanks to AI rendering.
02:55 UTC: Here it is, the GeForce RTX 5090.
03:20 UTC: At least someone is pushing the limits for GPUs.
03:22 UTC: Incredible board design.
03:22 UTC: RTX 5070 matches RTX 4090 at $550.
03:24 UTC: Here's the lineup, available from January.
03:24 UTC: RTX 5070 Laptop starts at $1299.
03:24 UTC: "The future of computer graphics is neural rendering"
03:25 UTC: Laptops powered by RTX Blackwell: starting prices:
03:26 UTC: AI has come back to power GeForce.
03:28 UTC: Supposedly the Grace Blackwell NVLink72.
03:28 UTC: 1.4 ExaFLOPS.
03:32 UTC: NVIDIA very sneakily teased a Windows AI PC chip.

03:35 UTC: NVIDIA is teaching generative AI basic physics. NVIDIA Cosmos, a world foundation model.
03:41 UTC: NVIDIA Cosmos is trained on 20 million hours of video.

03:43 UTC: Cosmos is open-licensed on GitHub.

03:52 UTC: NVIDIA onboards Toyota for full self-driving in its next-generation EVs.

03:53 UTC: NVIDIA unveils Thor Blackwell robotics processor.
03:53 UTC: Thor offers 20x the processing capability of Orin.

03:54 UTC: CUDA is now a functionally safe computer, thanks to its automotive certifications.
04:01 UTC: NVIDIA brought a dozen humanoid robots to the stage.

04:07 UTC: Project DIGITS is a shrunk-down AI supercomputer.
04:08 UTC: NVIDIA GB110 "Grace-Blackwell" chip powers DIGITS.

470 Comments on NVIDIA 2025 International CES Keynote: Liveblog

#426
ModEl4
Actually, I find the statement in NVIDIA's slide that the 5070 has 4090 performance plausible (albeit misleading).
Before the slide, Jensen was talking about ray tracing and AI-generated frames, saying that for every 33 million pixels produced with MFG, only 2 million are calculated through traditional rendering.
So the comparison NVIDIA seemingly wants to make for the 5070 vs. the 4090 is FHD native resolution with ray tracing applied, then upscaled to 4K with DLSS (Performance), with MFG in the 5070's case and FG in the 4090's case.
Apply DLSS to the results below (the 4090 is 145 and the 4070S is 92, for example), then multiply with MFG for the 5070 and just FG for the 4090; it seems perfectly doable. The experience will also not be far off for many games that aren't fast-paced, since the 5070 will have, for example, a 30 fps base with up to 120 fps with MFG, while the 4090 will have a 60 fps base with up to 120 fps with FG.
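The frame-multiplication arithmetic above can be sketched quickly (a back-of-the-envelope check using the 30/60 fps base rates assumed in my example, not measured figures):

```python
def effective_fps(base_fps: float, generated_per_rendered: int) -> float:
    """Displayed frame rate when frame generation inserts N generated
    frames for every traditionally rendered frame."""
    return base_fps * (1 + generated_per_rendered)

# Assumed example figures: 5070 with MFG (3 generated frames per
# rendered frame) vs. 4090 with classic FG (1 generated frame).
print(effective_fps(30, 3))  # 120.0
print(effective_fps(60, 1))  # 120.0
```

Same displayed number, but half the base frame rate on the 5070, which is exactly why the comparison is misleading.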
Posted on Reply
#427
remekra
ModEl4Actually I find the statement in Nvidia slide that 5070 has 4090 performance plausible (except from misleading)
Before the slide Jensen was talking about raytracing and AI generated frames saying that for every 33 million pixels calculated with MFG, only 2 million is calculated through traditional rendering.
So the comparison that Nvidia seemingly wants to do for 5070 vs 4090 is FHD native res with raytracing applied and then upscaled to 4K with DLSS (performance) with MFG in 5070's case and FG in 4090's case.
Apply DLSS in the below results (4090 is 145 and 4070S is 92 for example) then multiple with MFG for 5070 and just FG for 4090, it seems perfectly doable and also the experience will not be far off for many games that aren't fast paced since 5070 will have as an example 30fps base with up to 120fps with MFG and 4090 will have 60 fps base with up to 120fps with FG.
If I apply Lossless Scaling 4X frame gen on my 7900 XTX, or better yet FSR FG with AFMF2, which gives me 2 interpolated frames, is my GPU suddenly faster than a 4090? And can that be put into benchmarks? If not, then the 5070 is not equal to the 4090.
Don't get me wrong, I might jump on a 5080 myself, just for the RT perf and tbh just because, but bullshit marketing is bullshit marketing.
Posted on Reply
#428
gffermari
Even if a GPU could create a million fake frames in a second, we should never accept that as a performance metric.
Yes, it helps and is even necessary for some PT games, but it's not a deal-breaking feature.
DLSS is the most important asset NVIDIA has. Not FG.
Posted on Reply
#429
AusWolf
ModEl4Actually I find the statement in Nvidia slide that 5070 has 4090 performance plausible (except from misleading)
Before the slide Jensen was talking about raytracing and AI generated frames saying that for every 33 million pixels calculated with MFG, only 2 million is calculated through traditional rendering.
So the comparison that Nvidia seemingly wants to do for 5070 vs 4090 is FHD native res with raytracing applied and then upscaled to 4K with DLSS (performance) with MFG in 5070's case and FG in 4090's case.
Apply DLSS in the below results (4090 is 145 and 4070S is 92 for example) then multiple with MFG for 5070 and just FG for 4090, it seems perfectly doable and also the experience will not be far off for many games that aren't fast paced since 5070 will have as an example 30fps base with up to 120fps with MFG and 4090 will have 60 fps base with up to 120fps with FG.
That's exactly what's happening. The thing is that without knowing how MFG affects your gameplay experience, this isn't valid information. It's almost like saying that the 5070 runs games faster at 720p low than the 4090 does at 4K ultra. Of course it does, duh. ;)

First we had fake resolutions, then fake frames, now we have multiple fake frames, all introducing different sorts of graphical and latency issues, and people are pissing their pants in joy because it gives them MOAR POWAH!!! What happened to just enjoying games? :(
Posted on Reply
#430
Visible Noise
Vya DomusOutputs from FP4 and FP8 models are not equivalent, quit thinking as an AI tourist. People supposedly using these for work would know this is a false comparison.
Your beloved is adding FP4.
  • The first product in the AMD Instinct MI350 Series, the AMD Instinct MI350X accelerator, is based on the AMD CDNA 4 architecture and is expected to be available in 2025. It will use the same industry standard Universal Baseboard server design as other MI300 Series accelerators and will be built using advanced 3nm process technology, support the FP4 and FP6 AI datatypes and have up to 288 GB of HBM3E memory.
ir.amd.com/news-events/press-releases/detail/1201/amd-accelerates-pace-of-data-center-ai-innovation-and
Posted on Reply
#431
AusWolf
Visible NoiseYour beloved is adding FP4.
  • The first product in the AMD Instinct MI350 Series, the AMD Instinct MI350X accelerator, is based on the AMD CDNA 4 architecture and is expected to be available in 2025. It will use the same industry standard Universal Baseboard server design as other MI300 Series accelerators and will be built using advanced 3nm process technology, support the FP4 and FP6 AI datatypes and have up to 288 GB of HBM3E memory.
ir.amd.com/news-events/press-releases/detail/1201/amd-accelerates-pace-of-data-center-ai-innovation-and
You're missing the point. Vya wasn't discussing FP4 support, but the fact that it was compared to FP8 performance in the presentation as if they were equal.
Posted on Reply
#432
igormp
AusWolfYou're missing the point. Vya wasn't discussing FP4 support, but the fact that it was compared to FP8 performance in the presentation as if they were equal.
They did that to showcase a new feature. Anyone who's actually in the field would find this a nice thing.
Anyone in the field would also know that they could have used 1-bit weights and the graph would still look pretty similar; the data type for that model is almost negligible, since it's mostly memory-bound, and that's where the perf uplift came from.
Posted on Reply
#433
Vya Domus
igormpthey could have been using 1-bit weights and the graph would still be pretty similar, the data type for that model is almost negligible since
100% false. I don't know why you insist the data type is irrelevant and totally interchangeable, but it's total nonsense.
Posted on Reply
#434
dragontamer5788
FP4 has a place, mostly because 80B-parameter models perform better (even heavily compressed/quantized to FP4) than 11B-parameter models at FP16.

So running an 80B-parameter model at 4-bit is better than running an 11B-parameter model at 16-bit. But everyone would prefer to run the 80B-parameter model at 16-bit if at all possible. Alas, the 80B-parameter model needs too much RAM.

Comparing an FP8 benchmark on the old card against an FP4 benchmark on the new cards is 100% shenanigans. It's just false marketing, dare I say. In fact, because the FP4 model uses half the RAM, I bet that running FP4 on old hardware would still be a dramatic speedup (it cuts memory bandwidth by 50%!). Even without any FP4 tensor units, you can just do the FP8 or FP16 multiply-and-accumulate, then downsample to FP4 storage.
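To put rough numbers on the RAM argument above (a sketch counting weights only, ignoring KV cache and activation overhead):

```python
def weights_vram_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate memory footprint of the model weights alone, in GB.

    params_billion * 1e9 weights, each bits_per_weight/8 bytes,
    divided by 1e9 bytes per GB.
    """
    return params_billion * bits_per_weight / 8

for bits in (16, 8, 4):
    print(f"80B model at FP{bits}: {weights_vram_gb(80, bits):.0f} GB")
# An 80B model needs ~160 GB at FP16, ~80 GB at FP8, ~40 GB at FP4;
# an 11B model at FP16 (~22 GB) fits where the 80B model only fits
# once quantized.
```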
Posted on Reply
#435
JustBenching
ModEl4Actually I find the statement in Nvidia slide that 5070 has 4090 performance plausible (except from misleading)
Before the slide Jensen was talking about raytracing and AI generated frames saying that for every 33 million pixels calculated with MFG, only 2 million is calculated through traditional rendering.
So the comparison that Nvidia seemingly wants to do for 5070 vs 4090 is FHD native res with raytracing applied and then upscaled to 4K with DLSS (performance) with MFG in 5070's case and FG in 4090's case.
Apply DLSS in the below results (4090 is 145 and 4070S is 92 for example) then multiple with MFG for 5070 and just FG for 4090, it seems perfectly doable and also the experience will not be far off for many games that aren't fast paced since 5070 will have as an example 30fps base with up to 120fps with MFG and 4090 will have 60 fps base with up to 120fps with FG.
Of course it's plausible when you use MFG, but that's the point: if the 5070 with MFG matches the 4090 with FG, that literally makes the 4090 twice as fast in raw horsepower.

Something I'm puzzled by, both reading the comments here and on other platforms, and using the data from TPU's latest GPU testing (www.techpowerup.com/review/gpu-test-system-update-for-2025/2.html): it looks like NVIDIA will have the 5 (maybe 6, we have to see where the 5070 Ti lands) fastest cards for pure raster. Again, just raster, not even touching RT. Looking at RT, it has the top 9-11 fastest cards depending on resolution (assuming the 5070 Ti will be faster than the XTX, which is likely the case). And yet we are complaining that they are ignoring raw performance for AI...? They have cards from 2020 (LOL) that are faster in pure RT performance than AMD's latest and greatest.

So, do they have to have the top 50 fastest cards in both RT and raster for the complaining to stop, or what am I missing?
Posted on Reply
#436
LittleBro
igormpTo be honest, even though the 4090 had almost 70% more cores, this doesn't mean that it had 70% more performance in games, in the same way the 5090 won't have 100% higher perf than the 5080 in this scenario.
The 4090 was really bottlenecked by memory bandwidth for games, and the 5080 has a bandwidth pretty similar to it, so the gap between those two may not be as big as the difference in SMs.
Will it be faster or equal in games? I don't know, reviews should reveal that once they're available, but I wouldn't be surprised if it does (in the same sense I wouldn't be in case it doesn't). Game perf is not really linear with either memory bandwidth nor compute units, so it's hard to estimate anything.
The 5080 has 75% of the 4090's memory bandwidth. I wouldn't call that "pretty similar". [EDIT: the 5080 has 95% of the 4090's memory bandwidth. My bad.]
Even though you made a valid point, this is (IMHO) still not enough for the 5080 to beat the 4090 in native (raster) performance.
AusWolfThat's exactly what's happening. The thing is that without knowing how MFG affects your gameplay experience, this isn't valid information. It's almost like saying that the 5070 runs games faster at 720p low than the 4090 does at 4K ultra. Of course it does, duh. ;)

First we had fake resolutions, then fake frames, now we have multiple fake frames, all introducing different sorts of graphical and latency issues, and people are pissing their pants in joy because it gives them MOAR POWAH!!! What happened to just enjoying games? :(
Next up is fake games! I've already mentioned that before, if you recall (AI-generated graphics, sounds, even scripts).
The gaming industry will face tremendous difficulties once gamers get their hands on tools to create AI-generated games for free.
I have no problem playing older games as long as there is multiplayer/co-op support (servers still running).
I doubt that the quality of games such as L4D2, Diablo, Borderlands, Red Alert, StarCraft 2, Battlefield BC2, Jagged Alliance 2, etc. will ever be beaten in the future. I lost thousands of hours of my life to these games.
I'm not trying to be pessimistic here, just realistic, if you look at the quality of games today...
JustBenchingOf course it's plausible when you use MFG, but that's the point, if the 5070 with MFG matches the 4090 with FG, that literally makes the 4090 twice as fast in raw horsepower.
Exactly.
Posted on Reply
#437
Vya Domus
LittleBroonce gamers get their hands on tools to create AI generated games for free.
That stuff is still ages away from being anywhere close to usable, if it ever will be. The concern is misguided; it's these corporations who will try to fill games with AI-generated trash the moment it becomes feasible.
Posted on Reply
#438
AusWolf
LittleBroNext is fake games! I've already mentioned that before if you recall (AI generated graphics, sounds, even script).
Gaming industry will face tremendous difficulties once gamers get their hands on tools to create AI generated games for free.
I have no problem playing older games as long as there is multi-player/co-op support (servers still running).
I doubt that quality of games such as L4D2, Diablo, Borderlands, Red Alert, Starcraft 2, Battlefield BC2, Jagged Alliance 2, etc. will ever get beaten in future. I lost thousands of my life hours with these games.
I'm not trying to be pessimistic here, but realistic. If you look at what is quality of games today ...
Oh there are lots of amazing games out there, believe me! Just look further than the overhyped, mass produced usual EA / Ubisoft AAA crap. The future is in indie and lesser known titles, I've been saying this for years. Stray and Hellblade: Senua's Sacrifice both made me cry. Abzu is also a great one to recommend. They run great on a Steam Deck, you don't even need high-end hardware for them. I have a few more on my list that I've yet to play (Lost Ember, Star Trucker, Bramble: The Mountain King just to name a few).
Posted on Reply
#439
JustBenching
AusWolfOh there are lots of amazing games out there, believe me! Just look further than the overhyped, mass produced usual EA / Ubisoft AAA crap. The future is in indie and lesser known titles, I've been saying this for years. Stray and Hellblade: Senua's Sacrifice both made me cry. Abzu is also a great one to recommend. They run great on a Steam Deck, you don't even need high-end hardware for them. I have a few more on my list that I've yet to play (Lost Ember, Star Trucker, Bramble: The Mountain King just to name a few).
KENA bridge of spirits is pretty damn good too.
Posted on Reply
#440
AusWolf
JustBenchingKENA bridge of spirits is pretty damn good too.
How did I forget, that's on my list of games to play, too! I just installed it on my Deck a few days ago. :)
Posted on Reply
#441
10tothemin9volts
ModEl4Actually I find the statement in Nvidia slide that 5070 has 4090 performance plausible (except from misleading)
Before the slide Jensen was talking about raytracing and AI generated frames saying that for every 33 million pixels calculated with MFG, only 2 million is calculated through traditional rendering.
So the comparison that Nvidia seemingly wants to do for 5070 vs 4090 is FHD native res with raytracing applied and then upscaled to 4K with DLSS (performance) with MFG in 5070's case and FG in 4090's case.
Apply DLSS in the below results (4090 is 145 and 4070S is 92 for example) then multiple with MFG for 5070 and just FG for 4090, it seems perfectly doable and also the experience will not be far off for many games that aren't fast paced since 5070 will have as an example 30fps base with up to 120fps with MFG and 4090 will have 60 fps base with up to 120fps with FG.
[..]
The problem is that pathtracing/raytracing requires a lot of VRAM and the 5070 only has 12 GB of it. The 4070 with its 12GB runs out of VRAM when enabling even the Medium Path Tracing (Full Ray Tracing) setting in Indiana Jones and the Great Circle.
LittleBro5080 has 75% memory bandwidth of 4090.
The 5080 has 960 GB/s and the 4090 has 1008 GB/s; that's 95%. But the 5080's mere 16 GB of VRAM is a problem. It feels like planned obsolescence, especially because path tracing/ray tracing requires so much VRAM, and I would have to look up how long 16 GB is going to be enough for pure raster.
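Sanity-checking the bandwidth figures in this sub-thread (numbers as quoted above):

```python
# Memory bandwidth as quoted in the thread (GB/s).
bandwidth = {"RTX 5080": 960, "RTX 4090": 1008}
ratio = bandwidth["RTX 5080"] / bandwidth["RTX 4090"]
print(f"{ratio:.0%}")  # 95%
```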
Posted on Reply
#442
LittleBro
10tothemin9volts5080 has 960 GB/s and 4090 has 1008 GB/s, that's 95%. But the only 16GB VRAM of the 5080 is a problem. Feels like planned obsolescence especially because pathtracing/raytracing requires so much VRAM and I would have to look up how much 16GB are (going to be) enough for pure raster.
Thanks for pointing that out. My bad, I must have been looking at another specs tab, most probably the 4080S's.
Posted on Reply
#443
Vayra86
LittleBroNext is fake games! I've already mentioned that before if you recall (AI generated graphics, sounds, even script).
Gaming industry will face tremendous difficulties once gamers get their hands on tools to create AI generated games for free.
I have no problem playing older games as long as there is multi-player/co-op support (servers still running).
I doubt that quality of games such as L4D2, Diablo, Borderlands, Red Alert, Starcraft 2, Battlefield BC2, Jagged Alliance 2, etc. will ever get beaten in future. I lost thousands of my life hours with these games.
I'm not trying to be pessimistic here, but realistic. If you look at what is quality of games today ...
On that one I'm totally not worried at all. Gaming will survive. Compare it to young kids you give a piece of paper and a pencil: something nice will come out of it, sooner or later. Imagination never ends. It's also because of that that PC gaming has never died, and never will. If it's not happening on Windows, it happens on Linux, and if GPUs are too expensive, we'll code for iGPUs. Look at the Deck and the numerous indies: it's that reality happening as we speak.

You see this even today, between the oceans of salted plastic AAA soup you can find sweetwater rivers of indie games that show the world what gaming was all about to begin with: plain fun, immersion in a set of mechanics and systems and worlds, and taking you deep into it. Escapism also happens at that point, and not when you're playing the umpteenth triple A with the same mechanics ad infinitum. That's just braindead entertainment, like watching TV. Also fine, but not what gaming is really about - just watch TV then, so you can actually do nothing. Gaming is, after all, engagement, being active, not being passive. And that is also the biggest issue AI-driven gaming is going to face: how much is generated, and how much is left to player agency? How real and how far can it go?

It's paradoxical in a way, the same way AI-driven opponents are: the AI will always have a responsiveness and data advantage over the player, because the player responds to the end output while the AI has all the steps before it. So how do you fix this imbalance of power? By coding the AI, giving it limitations, making it slower... effectively negating or notably reducing its advantages. Is the end result going to be better than a scripted NPC? Doubtful; at best it is going to be different and perhaps more dynamic. But not too dynamic, because then how, as a player, can you ever get good, or better than the AI? Skill caps are not infinite. A good example of this problem has been out in the wild for a long time: automated matchmaking based on skill rankings. If you want a dynamic AI, you will want a similar ranking system to match player skill to an appropriate AI skill. But it's not fun, and never surprising, unless you as a player are actively trying to get better. How much effort are you willing to put into that? Weren't you just trying to have fun?
Posted on Reply
#444
ModEl4
remekraIf on my 7900XTX I will apply Lossless Scaling 4X frame gen or even better FSR FG with AFMF2 which will give me 2 interpolated frames is my GPU suddenly faster than 4090? And can it be put into benchmarks? If not then 5070 is not equal to 4090.
Don't get me wrong I might jump on 5080 myself, just for the RT perf and tbh just because, but bullshit marketing is bullshit marketing.
I agree; that's why I mentioned that it's misleading. With my post I just wanted to clarify on what terms NVIDIA (probably) makes the comparison...
JustBenchingOf course it's plausible when you use MFG, but that's the point, if the 5070 with MFG matches the 4090 with FG, that literally makes the 4090 twice as fast in raw horsepower.

Something im puzzled with, both with reading the comments here and on other platforms - and using the data from TPU's latest GPU testing (www.techpowerup.com/review/gpu-test-system-update-for-2025/2.html), looks like Nvidia will have the 5 (maybe 6, we have to see where the 5070ti lands) fastest cards for pure raster. Again, just raster, not even touching RT. Looking at RT, it has the top 9-11 fastest cards depending on resolution (assuming the 5070ti will be faster than the XTX, which is likely the case) And yet we are complaining that they are ignoring raw performance for AI....? The have cards from 2020 (LOL) that are faster in pure RT performance than amd's latest and greatest..

So, do they have to have the 50 top fastest cards in both RT and raster to stop complaining, or what am I missing?
I agree; that's why I mentioned that it's misleading. With my post I just wanted to clarify on what terms NVIDIA (probably) makes the comparison.
Regarding raster: only the 5090/4090/5080 are clearly faster than the RX 7900 XTX. The 4080S/4080/5070 Ti (probably) are similar, and in reality a little bit worse than the RX 7900 XTX in 4K raster. Don't go by the latest Wizzard data, because I'm not convinced the latest game selection is exactly representative. The difference from what I consider representative is minor, of course, but enough for the RX 7900 XT not to lose vs. the 4080S at 4K, which is the intended resolution for these VGAs. And I don't consider Unreal Lumen pure raster (although, with the success and adoption of Unreal 5, maybe counting it is fair in general).
10tothemin9voltsThe problem is that pathtracing/raytracing requires a lot of VRAM and the 5070 only has 12 GB of it. The 4070 with its 12GB runs out of VRAM when enabling even the Medium Path Tracing (Full Ray Tracing) setting in Indiana Jones and the Great Circle.
I agree that 12 GB is a problem. I haven't seen VRAM usage in Indiana Jones with path tracing at FHD base resolution to check it, and the 5070 will also support neural texture compression, enabling similar-quality textures with a smaller memory footprint (or higher-quality textures within a given memory budget). But even if we suppose what you said isn't true today (at FHD base...), it certainly will be in the near future.
Posted on Reply
#445
Dawora
10tothemin9voltsThe problem is that pathtracing/raytracing requires a lot of VRAM and the 5070 only has 12 GB of it. The 4070 with its 12GB runs out of VRAM when enabling even the Medium Path Tracing (Full Ray Tracing) setting in Indiana Jones and the Great Circle.

5080 has 960 GB/s and 4090 has 1008 GB/s, that's 95%. But the only 16GB VRAM of the 5080 is a problem. Feels like planned obsolescence especially because pathtracing/raytracing requires so much VRAM and I would have to look up how much 16GB are (going to be) enough for pure raster.
Maybe it will use less VRAM.
12 GB is still OK at that price; maybe not a good deal, but you can play just fine for a couple of years, or until the Super series comes.
There are still a lot of people using 8/10 GB cards with no issues.
Posted on Reply
#446
chrcoluk
Happy I bit the bullet on the 4080 Super; the 5000 series seems 'meh', as I gambled it would be. The rumoured VRAM boost never really surfaced, and most of the gains come down to better AI (multi frame gen).

It will still get the better DLSS/DLAA, so I'm happy.

For what it's worth, my prediction is we're going to see another delayed price drop; I don't think these will sell that well compared to older gens. So I wouldn't buy now; hold off if your plan is to get a 5000 series card.
Posted on Reply
#447
SRS
NVIDIA isn't calling the 5090 a Titan card, even though the pricing and tech gap between it and the 5080 is much larger than it was for the Titan cards, from my recollection. I could be wrong, but I do not remember a Titan being 50% more powerful than the next step down. I vaguely recall the Titan also being more appealing because of its prosumer use cases.

Why would NVIDIA do this? It has most likely decided that it will make more money by having reviewers (perhaps unwittingly) pressure buyers into the 5090 purchase (because it's not an extravagant Titan... it's simply the best of the regular consumer cards and is therefore the standard for most every review's benchmarks). It seems to be mainly a matter of optics.

It also exposes the problem of NVIDIA's monopoly over higher-end/enthusiast consumer GPUs. NVIDIA would never be able to get away with such a large gap if anything even approaching adequate competition were in place. (I really loathe the minute-difference-equals-new-product-tier strategy, but the fact is that consumers have shown they are willing to tolerate having so many products with tiny differences. A 50% gap is excessive, however, even for me.)

I find it rather amusing to read so many "Oh... so inexpensive!" comments in the first two pages or so of this thread. $2500–$3000 is inexpensive? We all know how the game works by now, right? Only a tiny number of people will get the Founders card (or whatever they're calling it), and there will be Reddit pages with people hoping, tracking, and bragging about their Best Buy escapades. Everyone else will deal with scalpers, shortages, and third-party cards with minuscule overclocks and a higher price. We've also lived through at least two cycles of mining-driven shortages, which compounded the problem of AMD's higher-end cards that seemed more designed for mining than for gaming. (Now we don't even have those to be disappointed with.)

Instead of hoping for things to be different this time... what I'd like to see is competition. Not a giant void where even a duopolist could be half-heartedly competing with cards that have too-small dies at too-high clocks with too-small coolers, like the vaunted Radeon VII.

Most everything I've written above is debatable to some degree. However, the fact that the higher-end consumer GPU space (both for gaming and home AI, such as text-to-image) is occupied by a single corporate entity is not. That's not capitalism.
Posted on Reply
#448
igormp
SRSNvidia isn't calling the 5090 a Titan card, even though the pricing and tech gap between it and the 5080 is much larger than Titan cards were, from my recollection. I could be wrong but I do not remember Titan being 50% more powerful than the next step down. I vaguely recall Titan also being more appealing because of its prosumer use cases.
Titans never had much of a gap, if any, to begin with. I posted this in another thread:
igormpI did a graph some time ago comparing the different generations and their % to the top die, similar to the one that has been floating around here, but that compares an assumed "top product" for each generation instead of the actual die nvidia uses:

I decided to give it a go and update it today with the known values of the blackwell gen:

Conclusions are up to each one.
Not only is the gap between the highest card and the second one bigger, it seems we have a new trend where consumers don't even get the full die (or close to it) anymore.
SRS$2500–$3000 is inexpensive?
For games it's pretty expensive. But for productivity? It's still quite cheap.
I had heard a rumor that about 80% of 4090s are being used in AI farms. I don't have anything to back this up, but it would mean those cards aren't really being used for games anymore, and NVIDIA is well aware of that.
SRSMost everything I've written above is debatable to some degree. However, the fact that the higher-end consumer GPU space (both for gaming and home AI, such as text-to-image) is occupied by a single corporate entity is not. That's not capitalism.
It is awful, for sure. But at the same time, I don't feel like NVIDIA has been bribing companies not to use its competitors. They have just been doing a good job, and the others didn't step up to match it, which is bad, but I'm not sure we could fine NVIDIA for that.
IIRC, France did try to investigate Nvidia on monopolistic practices and didn't come up with anything.
Posted on Reply
#449
Dawora
This can be very good for the 5070 12 GB:
  • RTX Neural Texture Compression
    uses AI to compress thousands of textures in less than a minute. Their neural representations are stored or accessed in real time or loaded directly into memory without further modification.
    The neurally compressed textures save up to 7x more VRAM or system memory than traditional block compressed textures at the same visual quality.
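Taking the quoted "up to 7x" figure at face value, the effect on an assumed texture budget would look something like this (hypothetical numbers, not a benchmark):

```python
def effective_texture_capacity_gb(texture_budget_gb: float,
                                  ratio: float) -> float:
    """Block-compressed-equivalent texture capacity when the same
    VRAM budget holds textures compressed at `ratio`:1."""
    return texture_budget_gb * ratio

# Hypothetical: 8 GB of a 12 GB card devoted to textures, at the
# quoted best-case 7x ratio.
print(effective_texture_capacity_gb(8, 7))  # 56
```

In other words, if the marketing number held in practice, a 12 GB card could behave like one with far more texture memory; the open question is what the real-world ratio and performance cost turn out to be.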
Posted on Reply
#450
AusWolf
DaworaCan be very good for 5070 12GB
  • RTX Neural Texture Compression
    uses AI to compress thousands of textures in less than a minute. Their neural representations are stored or accessed in real time or loaded directly into memory without further modification.
    The neurally compressed textures save up to 7x more VRAM or system memory than traditional block compressed textures at the same visual quality.
That's marketing. Marketing always sounds good. Let's see how it works in action.
Posted on Reply