Monday, January 6th 2025
NVIDIA 2025 International CES Keynote: Liveblog
NVIDIA kicks off the 2025 International CES with a bang. The company is expected to debut its new GeForce "Blackwell" RTX 5000 generation of gaming graphics cards. It is also expected to launch new technology such as neural rendering and DLSS 4. The company is further expected to highlight a new piece of silicon for Windows on Arm laptops, showcase the next generation of its Drive PX FSD hardware, probably talk about its next-generation "Blackwell Ultra" AI GPU, and, if we're lucky, even namedrop "Rubin." Join us as we liveblog CEO Jensen Huang's keynote address.

02:22 UTC: The show is finally underway!
02:35 UTC: CTA president Gary Shapiro kicks off the show and introduces Jensen Huang.
02:46 UTC: "Tokens are the building blocks of AI."
02:46 UTC: "Do you like my jacket?"02:47 UTC: NVIDIA recounts progress all the way till NV1 and UDA.02:48 UTC: "CUDA was difficult to explain, it took 6 years to get the industry to like it"02:50 UTC: "AI is coming home to GeForce". NVIDIA teases neural material and neural rendering. Rendered on "Blackwell"02:55 UTC: Every single pixel is ray traced, thanks to AI rendering.02:55 UTC: Here it is, the GeForce RTX 5090.03:20 UTC: At least someone is pushing the limits for GPUs.03:22 UTC: Incredible board design.03:22 UTC: RTX 5070 matches RTX 4090 at $550.03:24 UTC: Here's the lineup, available from January.03:24 UTC: RTX 5070 Laptop starts at $1299.03:24 UTC: "The future of computer graphics is neural rendering"03:25 UTC: Laptops powered by RTX Blackwell: staring prices:03:26 UTC: AI has come back to power GeForce.03:28 UTC: Supposedly the Grace Blackwell NVLink72.03:28 UTC: 1.4 ExaFLOPS.03:32 UTC: NVIDIA very sneakily teased a Windows AI PC chip.
03:35 UTC: NVIDIA is teaching generative AI basic physics. NVIDIA Cosmos, a world foundation model.
03:41 UTC: NVIDIA Cosmos is trained on 20 million hours of video.
03:43 UTC: Cosmos is open-licensed on GitHub.
03:52 UTC: NVIDIA onboards Toyota for full self-driving in its next-generation EVs.
03:53 UTC: NVIDIA unveils the Thor Blackwell robotics processor.
03:53 UTC: Thor has 20x the processing capability of Orin.
03:54 UTC: CUDA is now a functionally safe computer, thanks to its automotive certifications.
04:01 UTC: NVIDIA brought a dozen humanoid robots to the stage.
04:07 UTC: Project DIGITS is a shrunk-down AI supercomputer.
04:08 UTC: The NVIDIA GB110 "Grace-Blackwell" chip powers DIGITS.
02:46 UTC: "Do you like my jacket?"02:47 UTC: NVIDIA recounts progress all the way till NV1 and UDA.02:48 UTC: "CUDA was difficult to explain, it took 6 years to get the industry to like it"02:50 UTC: "AI is coming home to GeForce". NVIDIA teases neural material and neural rendering. Rendered on "Blackwell"02:55 UTC: Every single pixel is ray traced, thanks to AI rendering.02:55 UTC: Here it is, the GeForce RTX 5090.03:20 UTC: At least someone is pushing the limits for GPUs.03:22 UTC: Incredible board design.03:22 UTC: RTX 5070 matches RTX 4090 at $550.03:24 UTC: Here's the lineup, available from January.03:24 UTC: RTX 5070 Laptop starts at $1299.03:24 UTC: "The future of computer graphics is neural rendering"03:25 UTC: Laptops powered by RTX Blackwell: staring prices:03:26 UTC: AI has come back to power GeForce.03:28 UTC: Supposedly the Grace Blackwell NVLink72.03:28 UTC: 1.4 ExaFLOPS.03:32 UTC: NVIDIA very sneakily teased a Windows AI PC chip.
03:35 UTC: NVIDIA is teaching generative AI basic physics. NVIDIA Cosmos, a world foundation model.03:41 UTC: NVIDIA Cosmos is trained on 20 million hours of video.
03:43 UTC: Cosmos is open-licensed on GitHub.
03:52 UTC: NVIDIA onboards Toyota for its next generation EV for full-self driving.
03:53 UTC: NVIDIA unveils Thor Blackwell robotics processor.03:53 UTC: Thor is 20x the processing capability of Orin.
03:54 UTC: CUDA is now a functional safe computer thanks to its automobile certifications.04:01 UTC: NVIDIA brought a dozen humanoid robots to the stage.
04:07 UTC: Project DIGITS, is a shrunk down AI supercomputer.04:08 UTC: NVIDIA GB110 "Grace-Blackwell" chip powers DIGITS.
446 Comments on NVIDIA 2025 International CES Keynote: Liveblog
Again, to clarify: I know nothing about AI; I just know the trend is to move to lower-precision calculations, for whatever reason.
You can find the AMD slide if you want, but I don't think you need to. I already agreed with you that it's a bullshit comparison just the same (and I'm not in the market for a Ryzen Pro Plus Uber Ultra Super AI Bla Bla Bla 390 XXXTXT AIAI Whateveritscalled anyway).
I don't side with any company. I side with fairness and honesty. :)
I don't think I interpreted that chart incorrectly. The three games that show double the performance are comparing 4x framegen on the 5080 to 2x framegen on the 4080. That's not an apples to apples comparison, IMO.
The two games on the left are apples-to-apples. FC6 is running with no DLSS at all, so there are no fake frames. Plague Tale is running on the old DLSS 3, putting the 4080 and 5080 on equal ground when it comes to the number of faked frames produced by framegen.
The reason I think you misinterpreted this chart is that you might think there is actual raster / raw performance on tap here based on the left-most bar that only says RT... but this performance increase could also just be coming from improved RT handling in the game(s) in question. It does not necessarily speak to raster performance, which is the basis for all performance anyway - DLSS included.
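To make the skew concrete, here is a minimal sketch with made-up frame rates (none of these numbers are NVIDIA's): with 4x multi-frame generation on one card and 2x frame generation on the other, the displayed-FPS bar can roughly double even when the underlying rendered frame rate barely moves.

```python
# Illustrative only: the base frame rates below are made up, not NVIDIA's numbers.
# Shows how 4x multi-frame generation vs. 2x frame generation can double the
# displayed-FPS bar even when the underlying rendered frame rate barely moves.

def displayed_fps(rendered_fps: float, framegen_factor: int) -> float:
    """Displayed FPS when every rendered frame becomes `framegen_factor`
    presented frames (1 real + framegen_factor - 1 generated)."""
    return rendered_fps * framegen_factor

rtx4080_rendered = 40.0  # hypothetical real rendered FPS on the 4080
rtx5080_rendered = 44.0  # hypothetical ~10% faster real rendering on the 5080

print(displayed_fps(rtx4080_rendered, 2))  # 80.0  (DLSS 3, 2x framegen)
print(displayed_fps(rtx5080_rendered, 4))  # 176.0 (DLSS 4, 4x framegen)
# The chart bar reads as ~2.2x, but the real rendering gain is only ~10%.
```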
Nvidia widened the gap between SKUs. The RTX 4080 has only 60% of the 4090's compute units.
With the RTX 5000 series, the gap widens even further. The RTX 5080 will have only 50% of the RTX 5090's compute units and just 65% of the RTX 4090's.
www.techpowerup.com/gpu-specs/geforce-rtx-4080-super.c4182
www.techpowerup.com/gpu-specs/geforce-rtx-5080.c4217
The RTX 5080 is basically an RTX 4080 Super with 5% more compute units and slightly higher clocks.
There's no room for a significant performance boost this time (no significant node change).
Seriously, do the math, people. I'll say it once more: the RTX 5080 won't beat the RTX 4090 in native rendering (no DLSS, no FG) because it lacks the hardware resources.
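As a quick sanity check on those ratios, a minimal sketch using the CUDA core counts as listed in the spec database (the 5090 and 4090 figures are quoted from memory here, so verify them against their own spec pages):

```python
# CUDA core counts as listed in the GPU spec database; verify against the
# linked pages before leaning on them.
cores = {
    "RTX 5090": 21760,
    "RTX 5080": 10752,
    "RTX 4090": 16384,
    "RTX 4080 Super": 10240,
}

print(cores["RTX 5080"] / cores["RTX 5090"])        # ~0.49 -> ~50% of the 5090
print(cores["RTX 5080"] / cores["RTX 4090"])        # ~0.66 -> ~65% of the 4090
print(cores["RTX 5080"] / cores["RTX 4080 Super"])  # ~1.05 -> ~5% more than the 4080S
```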
We should come back to this discussion once reviews of the 5080 are up. I have no problem admitting I was wrong when I'm wrong.
As for the RX 9070 XT, no one really expects it to go toe-to-toe with the RX 7900 XTX; with the RX 7900 XT, maybe. (I'm not taking RT into account here.)
AMD clearly stated that they want to focus on making a mainstream card for the masses with vastly improved RT performance over the previous generation.
I personally estimate the RX 9070 XT will land about 5% below the RX 7900 XT, while beating the RX 7900 XT in RT. Unfortunately, I don't give a f* about RT right now.
RT may become a reasonable thing when it becomes less burdensome on hardware, meaning that turning RT up degrades performance by no more than 15-20%.
Anything above that is just too much of a penalty. My personal expectation is that AMD will move with RDNA 4 from a ~60% RT performance hit to "just" a 30-35% hit.
EDIT: Some fragments about RX 9070XT performance here and here. If this comes true, then ... holy shit ... I guess.
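For a sense of what those RT hit percentages above mean in frame rates, a small hypothetical example (the 100 FPS baseline is made up):

```python
# Hypothetical illustration: what an RT "performance hit" percentage means in FPS.
raster_fps = 100.0  # assumed baseline frame rate with RT off

for hit in (0.60, 0.35, 0.30, 0.20, 0.15):
    print(f"{hit:.0%} hit -> {raster_fps * (1 - hit):.0f} FPS with RT on")
# 60% -> 40 FPS, 35% -> 65 FPS, 30% -> 70 FPS, 20% -> 80 FPS, 15% -> 85 FPS
```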
Nope, you won't; you'll just look at another car you like more.
You can buy the 5070, but you don't have to.
Nvidia has been trying to sweep raster performance under the rug for two whole years already, hiding raw performance stagnation behind software tricks. Remember, the only numbers we have right now are Nvidia's, so raster performance numbers are intentionally absent.
The 4060 is barely more performant than the 3060 in raw raster performance if you disable all of the RT/DLSS-focused features, and the same is true of the 4060 Ti vs. the 3060 Ti. Actual raw raster performance gains this generation are likely to come from simply having more cores and higher clocks at higher power consumption, and that's only obviously true for the 5090. The 5080 gets roughly 10% more cores than the 4080, plus more power to clock them higher, so if there's any raw raster performance gain once you take away all the RT/DLSS improvements, it's only going to be a direct result of those extra cores and clocks, IMO.
I just saw one game, then another with DLSS stuff on top, and another like that, and stopped looking because I don't even care about games, and the comparison between the different DLSS setups is kinda moot, as everyone here has already agreed.
I only noticed it when someone else brought up the FP4 vs. FP8 stuff earlier; I was confused about what they were talking about until I noticed Flux was in there (and only for the 4090 vs. 5090 comparison, not the others), so I was already aware of it by the time I saw it. 5070s, I don't think so, but 5080s would still be useful: if your model fits within 16 GB, you can use these as cheap-yet-powerful inference stations.
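As a rough, hedged rule of thumb for the "fits within 16 GB" part, weight memory is roughly parameter count times bytes per weight; the model sizes below are arbitrary examples, and the estimate ignores activations, KV cache, and framework overhead:

```python
# Rough rule of thumb for whether a model's weights fit in 16 GB of VRAM.
# Ignores activations, KV cache, and framework overhead, so treat the result
# as an optimistic lower bound; the model sizes below are arbitrary examples.

BYTES_PER_PARAM = {"fp16": 2.0, "fp8": 1.0, "fp4": 0.5}

def weights_gib(params_billion: float, dtype: str) -> float:
    """Approximate weight footprint in GiB for a model of the given size."""
    return params_billion * 1e9 * BYTES_PER_PARAM[dtype] / 2**30

for size in (8, 13, 34, 70):
    row = ", ".join(f"{d}: {weights_gib(size, d):5.1f} GiB" for d in BYTES_PER_PARAM)
    print(f"{size:>3}B -> {row}")
# A ~13B model fits comfortably at FP8; a 34B model only squeezes in around FP4.
```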
Not a big company doing colocation in a big DC, but plenty of startups do something like that. Just take a look at vast.ai and you'll notice systems like this:
8x 4070S Ti on an Epyc node with 258 GB of RAM. B200s would be for big tech; most others either buy smaller models (or even consumer GPUs) or just subscribe to some cloud provider.

They are more than fine; it means lots of VRAM savings and more perf.

That was AMD doing comparisons of Strix Halo vs. Lunar Lake (and it was the shitty 30 W Lunar Lake model, instead of the regular 17 W).

To be honest, even though the 4090 had almost 70% more cores than the 4080, that doesn't mean it had 70% more performance in games, in the same way the 5090 won't have 100% higher perf than the 5080 in this scenario.
The 4090 was really bottlenecked by memory bandwidth for games, and the 5080 has a bandwidth pretty similar to it, so the gap between those two may not be as big as the difference in SMs.
Will it be faster or equal in games? I don't know; reviews should reveal that once they're available, but I wouldn't be surprised if it is (in the same sense that I wouldn't be surprised if it isn't). Game performance doesn't scale linearly with either memory bandwidth or compute units, so it's hard to estimate anything.
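For reference, a minimal sketch of where the "pretty similar bandwidth" claim comes from, using the commonly listed bus widths and per-pin data rates (treat these figures as assumptions to verify against the spec pages):

```python
# Theoretical peak memory bandwidth from bus width and per-pin data rate.
# Bus widths and data rates below are the commonly listed figures
# (RTX 4090: 384-bit GDDR6X @ 21 Gbps; RTX 5080: 256-bit GDDR7 @ 30 Gbps);
# verify against the spec pages before relying on them.

def bandwidth_gbs(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s: (bus width in bits / 8) * per-pin data rate in Gbps."""
    return bus_width_bits / 8 * data_rate_gbps_per_pin

print(bandwidth_gbs(384, 21.0))  # ~1008 GB/s for the RTX 4090
print(bandwidth_gbs(256, 30.0))  # ~960 GB/s for the RTX 5080
# Within ~5% of each other, which is why "pretty similar bandwidth" holds.
```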