Monday, January 6th 2025
NVIDIA 2025 International CES Keynote: Liveblog
NVIDIA kicks off the 2025 International CES with a bang. The company is expected to debut its new GeForce "Blackwell" RTX 5000 generation of gaming graphics cards. It is also expected to launch new technologies such as neural rendering and DLSS 4. The company is also expected to highlight a new piece of silicon for Windows on Arm laptops, showcase the next step in its Drive PX FSD hardware, and probably even talk about its next-generation "Blackwell Ultra" AI GPU, and if we're lucky, even namedrop "Rubin." Join us as we liveblog CEO Jensen Huang's keynote address.

02:22 UTC: The show is finally underway!
02:35 UTC: CTA president Gary Shapiro kicks off the show and introduces Jensen Huang.
02:46 UTC: "Tokens are the building blocks of AI"
02:46 UTC: "Do you like my jacket?"02:47 UTC: NVIDIA recounts progress all the way till NV1 and UDA.02:48 UTC: "CUDA was difficult to explain, it took 6 years to get the industry to like it"02:50 UTC: "AI is coming home to GeForce". NVIDIA teases neural material and neural rendering. Rendered on "Blackwell"02:55 UTC: Every single pixel is ray traced, thanks to AI rendering.02:55 UTC: Here it is, the GeForce RTX 5090.03:20 UTC: At least someone is pushing the limits for GPUs.03:22 UTC: Incredible board design.03:22 UTC: RTX 5070 matches RTX 4090 at $550.03:24 UTC: Here's the lineup, available from January.03:24 UTC: RTX 5070 Laptop starts at $1299.03:24 UTC: "The future of computer graphics is neural rendering"03:25 UTC: Laptops powered by RTX Blackwell: staring prices:03:26 UTC: AI has come back to power GeForce.03:28 UTC: Supposedly the Grace Blackwell NVLink72.03:28 UTC: 1.4 ExaFLOPS.03:32 UTC: NVIDIA very sneakily teased a Windows AI PC chip.
03:35 UTC: NVIDIA is teaching generative AI basic physics. NVIDIA Cosmos, a world foundation model.
03:41 UTC: NVIDIA Cosmos is trained on 20 million hours of video.
03:43 UTC: Cosmos is open-licensed on GitHub.
03:52 UTC: NVIDIA onboards Toyota for its next-generation EVs with full self-driving.
03:53 UTC: NVIDIA unveils the Thor Blackwell robotics processor.
03:53 UTC: Thor has 20x the processing capability of Orin.
03:54 UTC: CUDA is now a functionally safe computing platform, thanks to its automotive certifications.
04:01 UTC: NVIDIA brought a dozen humanoid robots to the stage.
04:07 UTC: Project DIGITS is a shrunk-down AI supercomputer.
04:08 UTC: The NVIDIA GB110 "Grace Blackwell" chip powers DIGITS.
02:46 UTC: "Do you like my jacket?"02:47 UTC: NVIDIA recounts progress all the way till NV1 and UDA.02:48 UTC: "CUDA was difficult to explain, it took 6 years to get the industry to like it"02:50 UTC: "AI is coming home to GeForce". NVIDIA teases neural material and neural rendering. Rendered on "Blackwell"02:55 UTC: Every single pixel is ray traced, thanks to AI rendering.02:55 UTC: Here it is, the GeForce RTX 5090.03:20 UTC: At least someone is pushing the limits for GPUs.03:22 UTC: Incredible board design.03:22 UTC: RTX 5070 matches RTX 4090 at $550.03:24 UTC: Here's the lineup, available from January.03:24 UTC: RTX 5070 Laptop starts at $1299.03:24 UTC: "The future of computer graphics is neural rendering"03:25 UTC: Laptops powered by RTX Blackwell: staring prices:03:26 UTC: AI has come back to power GeForce.03:28 UTC: Supposedly the Grace Blackwell NVLink72.03:28 UTC: 1.4 ExaFLOPS.03:32 UTC: NVIDIA very sneakily teased a Windows AI PC chip.
03:35 UTC: NVIDIA is teaching generative AI basic physics. NVIDIA Cosmos, a world foundation model.03:41 UTC: NVIDIA Cosmos is trained on 20 million hours of video.
03:43 UTC: Cosmos is open-licensed on GitHub.
03:52 UTC: NVIDIA onboards Toyota for its next generation EV for full-self driving.
03:53 UTC: NVIDIA unveils Thor Blackwell robotics processor.03:53 UTC: Thor is 20x the processing capability of Orin.
03:54 UTC: CUDA is now a functional safe computer thanks to its automobile certifications.04:01 UTC: NVIDIA brought a dozen humanoid robots to the stage.
04:07 UTC: Project DIGITS, is a shrunk down AI supercomputer.04:08 UTC: NVIDIA GB110 "Grace-Blackwell" chip powers DIGITS.
446 Comments on NVIDIA 2025 International CES Keynote: Liveblog
Before that slide, Jensen was talking about ray tracing and AI-generated frames, saying that for every 33 million pixels output with MFG, only 2 million are calculated through traditional rendering.
So the comparison Nvidia seemingly wants to make for the 5070 vs. the 4090 is FHD native resolution with ray tracing applied, upscaled to 4K with DLSS (Performance mode), with MFG in the 5070's case and FG in the 4090's.
Apply DLSS to the results below (the 4090 is at 145 and the 4070S at 92, for example), then multiply by MFG for the 5070 and just FG for the 4090. It seems perfectly doable, and the experience won't be far off in many games that aren't fast-paced: the 5070 will have, say, a 30 fps base reaching up to 120 fps with MFG, while the 4090 will have a 60 fps base reaching up to 120 fps with FG.
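For what it's worth, the claimed numbers are easy to sanity-check. A minimal sketch, assuming 4K output, DLSS Performance mode (4x upscale from 1080p), 4x MFG on the 5070, 2x FG on the 4090, and the 30/60 fps base rates above; none of these are measured figures:

```python
# Back-of-the-envelope check of the "33 million pixels, 2 million rendered"
# claim, under assumed settings (4K output, DLSS Performance, 4x MFG).

RENDER_RES = 1920 * 1080   # internal render resolution with DLSS Performance
OUTPUT_RES = 3840 * 2160   # 4K output resolution
MFG_FACTOR = 4             # 1 rendered frame + 3 AI-generated frames

rendered = RENDER_RES                  # ~2.07M traditionally rendered pixels
displayed = OUTPUT_RES * MFG_FACTOR    # ~33.18M pixels put on screen
print(f"rendered {rendered/1e6:.2f}M of {displayed/1e6:.2f}M displayed "
      f"({rendered/displayed:.1%})")   # ~6.3%, which matches the claim

# The frame-rate side of the comparison, with assumed base rates:
print(f"5070: 30 fps base -> {30 * 4} fps with 4x MFG")
print(f"4090: 60 fps base -> {60 * 2} fps with 2x FG")
```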
Don't get me wrong, I might jump on the 5080 myself, just for the RT performance and, tbh, just because; but bullshit marketing is bullshit marketing.
Yes, it helps (and is arguably necessary) for some path-traced games, but it's not a deal-breaking feature.
DLSS is the most important asset NVIDIA has, not FG.
First we had fake resolutions, then fake frames, now we have multiple fake frames, all introducing different sorts of graphical and latency issues, and people are pissing their pants in joy because it gives them MOAR POWAH!!! What happened to just enjoying games? :(
- The first product in the AMD Instinct MI350 Series, the AMD Instinct MI350X accelerator, is based on the AMD CDNA 4 architecture and is expected to be available in 2025. It will use the same industry standard Universal Baseboard server design as other MI300 Series accelerators and will be built using advanced 3nm process technology, support the FP4 and FP6 AI datatypes and have up to 288 GB of HBM3E memory.
ir.amd.com/news-events/press-releases/detail/1201/amd-accelerates-pace-of-data-center-ai-innovation-and

Anyone in the field would also know that they could have been using 1-bit weights and the graph would still look pretty similar; the datatype is almost irrelevant for that model since it's mostly memory-bound, and that's where the perf uplift came from.
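To make the memory-bound point concrete: in batch-1 decoding, every generated token has to stream the full weight set through memory once, so throughput is roughly bandwidth divided by weight size. A rough roofline sketch with illustrative numbers (the ~1 TB/s bandwidth and 70B model are assumptions, not vendor specs):

```python
# Rough roofline for batch-1 LLM decoding: each token reads all weights once,
# so tokens/s ~ memory bandwidth / weight bytes. All numbers illustrative.

def tokens_per_sec(bandwidth_gbs: float, params_billion: float, bits: int) -> float:
    weight_gb = params_billion * bits / 8   # weight footprint in GB
    return bandwidth_gbs / weight_gb        # one full weight pass per token

BANDWIDTH = 1000.0   # assumed ~1 TB/s of memory bandwidth
for bits in (16, 8, 4, 1):
    print(f"{bits:>2}-bit: ~{tokens_per_sec(BANDWIDTH, 70, bits):.0f} tok/s")
# Halving weight precision roughly doubles throughput, whether or not the
# math units natively support the datatype; hence the 1-bit remark above.
```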
So running an 80B-parameter model at 4-bit is better than running an 11B-parameter model at 16-bit. But everyone would prefer to run the 80B-parameter model at 16-bit if at all possible. Alas, the 80B-parameter model needs too much RAM.
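And the RAM constraint is just weights times bits; a quick footprint check for the sizes above (weights only, ignoring activations and KV cache):

```python
# Weight footprint for the model sizes mentioned above (weights only).
def weight_gb(params_billion: float, bits: int) -> float:
    return params_billion * bits / 8   # 1B params at 8-bit = 1 GB

print(f"80B @ 4-bit : {weight_gb(80, 4):.0f} GB")   # ~40 GB, fits one big GPU
print(f"80B @ 16-bit: {weight_gb(80, 16):.0f} GB")  # ~160 GB, needs several GPUs
print(f"11B @ 16-bit: {weight_gb(11, 16):.0f} GB")  # ~22 GB, fits easily
```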
Comparing an FP8 benchmark on the old card and an FP4 benchmark on the new cards is 100% shenanigans. It's just false marketing, dare I say. In fact, because the FP4 model uses half the RAM, I bet that running FP4 on the old hardware would still be a dramatic speedup (it cuts memory bandwidth requirements by 50%!). Even without any FP4 tensor units, you can just do the FP8 or FP16 multiply-and-accumulate, then downsample to FP4 storage.
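That compute-high, store-low trick is easy to emulate in software. A rough NumPy sketch that rounds FP16 results onto an E2M1-style FP4 grid; the level table and per-tensor scaling here are illustrative assumptions, not NVIDIA's actual FP4 format:

```python
import numpy as np

# Representable magnitudes of an E2M1-style FP4 format (8 levels plus a sign bit).
FP4_LEVELS = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_fp4(x: np.ndarray, scale: float) -> np.ndarray:
    """Round each value to the nearest representable FP4 magnitude."""
    mags = np.abs(x) / scale
    idx = np.abs(mags[..., None] - FP4_LEVELS).argmin(axis=-1)
    return np.sign(x) * FP4_LEVELS[idx] * scale

x = np.random.randn(8).astype(np.float16)           # FP16 multiply-accumulate output
x4 = quantize_fp4(x, scale=np.abs(x).max() / 6.0)   # downsample to FP4 storage
print(np.abs(x - x4).max())                         # quantization error introduced
```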
Something I'm puzzled by, both reading the comments here and on other platforms: using the data from TPU's latest GPU testing (www.techpowerup.com/review/gpu-test-system-update-for-2025/2.html), it looks like Nvidia will have the 5 (maybe 6, we have to see where the 5070 Ti lands) fastest cards for pure raster. Again, just raster, not even touching RT. Looking at RT, it has the top 9-11 fastest cards depending on resolution (assuming the 5070 Ti will be faster than the XTX, which is likely the case). And yet we are complaining that they are ignoring raw performance for AI...? They have cards from 2020 (LOL) that are faster in pure RT performance than AMD's latest and greatest.
So, do they have to have the 50 fastest cards in both RT and raster for the complaining to stop, or what am I missing?
The 5080 has 75% of the 4090's memory bandwidth. I wouldn't call that "pretty similar". [EDIT: the 5080 has 95% of the 4090's memory bandwidth. My bad.] Even though you made a valid point, this is (IMHO) still not enough for the 5080 to beat the 4090 in native raster performance. Next up: fake games! I've already mentioned that before, if you recall (AI-generated graphics, sounds, even scripts).
The gaming industry will face tremendous difficulties once gamers get their hands on tools to create AI-generated games for free.
I have no problem playing older games as long as there is multi-player/co-op support (servers still running).
I doubt the quality of games such as L4D2, Diablo, Borderlands, Red Alert, StarCraft 2, Battlefield BC2, Jagged Alliance 2, etc. will ever be beaten in the future. I've lost thousands of hours of my life to these games.
I'm not trying to be pessimistic here, just realistic. Look at the quality of games coming out today... exactly.
You see this even today: among the oceans of salted plastic AAA soup, you can find sweetwater rivers of indie games that show the world what gaming was all about to begin with: plain fun, immersion in a set of mechanics and systems and worlds that take you deep into them. Escapism also happens at that point, not when you're playing the umpteenth triple-A with the same mechanics ad infinitum. That's just braindead entertainment, like watching TV. Also fine, but not what gaming is really about; just watch TV then, so you can actually do nothing. Gaming is, after all, engagement, being active, not passive. And that is also the biggest issue AI-driven gaming is going to face: how much is generated, and how much is left to player agency? How real and how far can it go?
It's paradoxical in a way, the same way AI-driven opponents are: the AI will always have a responsiveness and data advantage over the player, because the player responds to end output while the AI has access to every step before it. So how do you fix this imbalance of power? By coding the AI with limitations, making it slower... effectively negating or notably reducing its advantages. Is the end result going to be better than a scripted NPC? Doubtful; at best it will be different, and perhaps more dynamic. But not too dynamic, because how then, as a player, can you ever get good, or better than the AI? Skill caps are not infinite. A good example of this problem has been out in the wild for a long time: automated matchmaking based on skill rankings. If you want a dynamic AI, you will want a similar ranking system to categorize player skill and set an appropriate AI skill, as sketched below. But it's not fun, and never surprising, unless you as a player are actively trying to get better. How much effort are you willing to put into that? Weren't you just trying to have fun?
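For reference, the ranking loop described above is usually some Elo-style update. A minimal sketch, assuming the standard K-factor of 32; reusing the player's rating directly as the AI's difficulty knob is my illustration, not any shipping game's system:

```python
# Elo-style rating update, the usual core of skill-based matchmaking.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update(rating: float, opponent: float, won: bool, k: float = 32.0) -> float:
    """Shift the rating toward the observed result."""
    return rating + k * ((1.0 if won else 0.0) - expected_score(rating, opponent))

player, ai = 1200.0, 1200.0             # AI difficulty expressed as a rating
player = update(player, ai, won=True)   # player wins, so their rating rises
ai = player                             # scale AI strength to the new rating
print(player, ai)                       # both climb: the "never surprising" loop
```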