Monday, January 6th 2025
NVIDIA 2025 International CES Keynote: Liveblog
NVIDIA kicks off the 2025 International CES with a bang. The company is expected to debut its new GeForce "Blackwell" RTX 5000 generation of gaming graphics cards, and to launch new technology such as neural rendering and DLSS 4. It is also expected to highlight a new piece of silicon for Windows on Arm laptops, showcase the next generation of its Drive PX self-driving hardware, probably talk about its next-generation "Blackwell Ultra" AI GPU, and, if we're lucky, even namedrop "Rubin." Join us as we liveblog CEO Jensen Huang's keynote address.

02:22 UTC: The show is finally underway!
02:35 UTC: CTA president Gary Shapiro kicks off the show, introduces Jensen Huang.
02:46 UTC: "Tokens are the building blocks of AI"
02:46 UTC: "Do you like my jacket?"
02:47 UTC: NVIDIA recounts its progress all the way back to NV1 and UDA.
02:48 UTC: "CUDA was difficult to explain, it took 6 years to get the industry to like it"
02:50 UTC: "AI is coming home to GeForce". NVIDIA teases neural materials and neural rendering, rendered on "Blackwell".
02:55 UTC: Every single pixel is ray traced, thanks to AI rendering.
02:55 UTC: Here it is, the GeForce RTX 5090.
03:20 UTC: At least someone is pushing the limits for GPUs.
03:22 UTC: Incredible board design.
03:22 UTC: RTX 5070 matches RTX 4090 at $550.
03:24 UTC: Here's the lineup, available from January.
03:24 UTC: RTX 5070 Laptop starts at $1,299.
03:24 UTC: "The future of computer graphics is neural rendering"
03:25 UTC: Laptops powered by RTX Blackwell: starting prices:
03:26 UTC: AI has come back to power GeForce.
03:28 UTC: Supposedly the Grace Blackwell NVLink72.
03:28 UTC: 1.4 ExaFLOPS.
03:32 UTC: NVIDIA very sneakily teased a Windows AI PC chip.
03:35 UTC: NVIDIA is teaching generative AI basic physics. NVIDIA Cosmos, a world foundation model.
03:41 UTC: NVIDIA Cosmos is trained on 20 million hours of video.
03:43 UTC: Cosmos is open-licensed on GitHub.
03:52 UTC: NVIDIA onboards Toyota for its next-generation EV full self-driving.
03:53 UTC: NVIDIA unveils Thor Blackwell robotics processor.
03:53 UTC: Thor has 20x the processing capability of Orin.
03:54 UTC: CUDA is now a functionally safe computer, thanks to its automotive certifications.
04:01 UTC: NVIDIA brought a dozen humanoid robots to the stage.
04:07 UTC: Project DIGITS is a shrunk-down AI supercomputer.
04:08 UTC: NVIDIA GB110 "Grace-Blackwell" chip powers DIGITS.
446 Comments on NVIDIA 2025 International CES Keynote: Liveblog
I can't say I share the optimism here in any way, shape, or form. I'm actually more optimistic that enough idiots will throw their Ada cards on the second-hand market because they fell for the empty marketing just like you, because we are in complete stagnation territory between Ada and Blackwell. It's crystal clear; even Huang confirmed it by talking almost exclusively about DLSS 4. I'll happily pick up a 4080 or a 4070 Ti Super at $650-700 sooner than I would even consider a poorly balanced Blackwell at $550.
That's the real upgrade path here. 2nd hand last gen as all the n00bs upgrade to the latest greatest that didn't gain them anything.
Why did I assume that every midrange card anno 2025 has 16 GB? :confused: I guess I'm tired after work.
They are also both 30% faster in Far Cry, so even if they are half that in a large selection of games, they'll be fine considering the competition.
RTX 3070 Ti vs. RTX 5070
But yeah, of course the clock speeds, cache, and so on will make the RTX 5070 faster, but by just how much is what I am wondering. The 5070 will obliterate it in AI workloads, that is for sure.
When I owned my 3070 Ti, I used "Lossless Scaling", which you can buy from Steam; you could also get impressive FPS numbers if you enabled frame generation, but the 8 GB VRAM never allowed it to be used for long periods. Anyway, I am glad I got rid of my RTX 3070 Ti, and hell no, I would not buy an RTX 5070, especially with 12 GB VRAM. Why? Because if you start using those AI features, you will run into that VRAM limit very fast, and you will wish you still had a GTX 1070, because performance will drop way below that when you hit the VRAM ceiling (at 1440p and above).
I don't think I would want one for free; it would be just too much of a pain in the ass to manage in many scenarios, instead of just enjoying playing games.
4080s would probably barely scrape by.
Everyone games differently though.
I'm on a 6750 XT and only now starting to find it slightly lacking at maximum detail. I know my expectations aren't the greatest in the gaming world, but yours must be through the roof.
AMD is well aware that 8 GB GPUs have been criticised by reviewers and developers alike for the last 2 years now. One of their only selling points during the RDNA 2/3 generations was that they weren't as stingy with VRAM as NVIDIA, so the cards would be viable for longer despite a deficit in features, power efficiency, and RT performance.
I personally got rid of a 2060 6 GB, a 3060 laptop (6 GB), and a 3070 8 GB long before they were too weak to run games, simply because they ran out of VRAM and I had to severely compromise on graphics settings at the relatively modest 1440p resolution. I'm not buying midrange $350+ GPUs to run at low settings. A PS5 was running those same games at pseudo-4K at better settings, and that whole console costs less than some of those GPUs alone.
According to "nvidia-ada-gpu-architecture.pdf", the 4090 is: So it's either 1321 INT8 sparse or 1321 INT4 dense? Anyway, what matters more is that it's an apples-to-apples comparison.
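The "1321 either way" ambiguity is consistent under the convention NVIDIA's Ada whitepaper appears to use, where 2:4 structured sparsity doubles peak tensor throughput and halving precision from INT8 to INT4 doubles it again. A minimal sketch of that arithmetic (the 660.6 dense INT8 TOPS starting figure is taken from the whitepaper's RTX 4090 table; verify against the PDF before relying on it):

```python
# Assumed convention from the Ada whitepaper's throughput tables:
# sparsity doubles peak TOPS, and INT4 runs at twice the INT8 rate.
int8_dense_tops = 660.6  # RTX 4090 peak INT8 tensor TOPS (dense)

int8_sparse_tops = int8_dense_tops * 2  # with 2:4 structured sparsity
int4_dense_tops = int8_dense_tops * 2   # INT4 at double the INT8 rate

# Both come out to ~1321.2, which is why the same number can be quoted
# as "INT8 sparse" or "INT4 dense".
print(int8_sparse_tops, int4_dense_tops)
```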
The 8800 XT / 9070 XT offered RT performance of a 4080 at some point? I'm not sure what the source was for that rumour, and like all info from AMD, you don't trust it until it has been independently verified. I won't forget that AMD gave us FSR without AI, something I think was extremely awesome of them to do; sure, it doesn't look as good in some scenarios, but it is a great option for non-RTX users. :love:
You can watch the whole presentation, Mark explains it pretty well.
RDNA 3 didn't increase the intersection performance. I looked everywhere for that info some time ago and didn't find a single mention of it.
EDIT
Here you can find a good, deeper explanation of how RDNA 3 and 2 RT works:
chipsandcheese.com/p/raytracing-on-amds-rdna-2-3-and-nvidias-turing-and-pascal
And even from AMD's own slides:
No mention of an improvement in intersection. So basically the PS5 Pro has better RT than RDNA 3. But still, even if they doubled the intersection rate, it doesn't mean RT performance would double from that alone; there are other improvements too, so who knows, we will see.
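The point that doubling intersection throughput alone won't double RT performance is just Amdahl's law: the gain is capped by whatever fraction of frame time intersection actually occupies. A tiny illustrative sketch (the 40% fraction is made up for the example, not an AMD figure):

```python
# Amdahl's law: speeding up only one part of the work by `local_speedup`,
# where that part is fraction `f` of total time, gives a smaller overall gain.
def overall_speedup(f: float, local_speedup: float) -> float:
    return 1.0 / ((1.0 - f) + f / local_speedup)

# Hypothetical: if ray intersection were 40% of RT frame time, doubling
# intersection hardware throughput yields only a 1.25x overall speedup.
print(round(overall_speedup(0.4, 2.0), 2))  # -> 1.25
```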