Monday, January 6th 2025

NVIDIA 2025 International CES Keynote: Liveblog

NVIDIA kicks off the 2025 International CES with a bang. The company is expected to debut its new GeForce "Blackwell" RTX 5000 generation of gaming graphics cards, and to launch new technology such as neural rendering and DLSS 4. It is also expected to highlight a new piece of silicon for Windows on Arm laptops, showcase the next generation of its Drive PX FSD hardware, probably talk about its next-generation "Blackwell Ultra" AI GPU, and, if we're lucky, even namedrop "Rubin." Join us as we liveblog CEO Jensen Huang's keynote address.

02:22 UTC: The show is finally underway!
02:35 UTC: CTA president Gary Shapiro kicks off the show and introduces Jensen Huang.
02:46 UTC: "Tokens are the building blocks of AI"

02:46 UTC: "Do you like my jacket?"
02:47 UTC: NVIDIA recounts its progress all the way back to NV1 and UDA.
02:48 UTC: "CUDA was difficult to explain, it took 6 years to get the industry to like it"
02:50 UTC: "AI is coming home to GeForce". NVIDIA teases neural materials and neural rendering, rendered on "Blackwell".
02:55 UTC: Every single pixel is ray traced, thanks to AI rendering.
02:55 UTC: Here it is, the GeForce RTX 5090.
03:20 UTC: At least someone is pushing the limits for GPUs.
03:22 UTC: Incredible board design.
03:22 UTC: RTX 5070 matches RTX 4090 at $550.
03:24 UTC: Here's the lineup, available from January.
03:24 UTC: RTX 5070 Laptop starts at $1299.
03:24 UTC: "The future of computer graphics is neural rendering"
03:25 UTC: Laptops powered by RTX Blackwell: starting prices:
03:26 UTC: AI has come back to power GeForce.
03:28 UTC: Supposedly the Grace Blackwell NVLink72.
03:28 UTC: 1.4 ExaFLOPS.
03:32 UTC: NVIDIA very sneakily teased a Windows AI PC chip.

03:35 UTC: NVIDIA is teaching generative AI basic physics. NVIDIA Cosmos, a world foundation model.
03:41 UTC: NVIDIA Cosmos is trained on 20 million hours of video.

03:43 UTC: Cosmos is open-licensed on GitHub.

03:52 UTC: NVIDIA onboards Toyota for full self-driving in its next-generation EVs.

03:53 UTC: NVIDIA unveils Thor Blackwell robotics processor.
03:53 UTC: Thor has 20x the processing capability of Orin.

03:54 UTC: CUDA is now functionally safe, thanks to its automotive certifications.
04:01 UTC: NVIDIA brought a dozen humanoid robots to the stage.

04:07 UTC: Project DIGITS is a shrunk-down AI supercomputer.
04:08 UTC: NVIDIA GB110 "Grace-Blackwell" chip powers DIGITS.

446 Comments on NVIDIA 2025 International CES Keynote: Liveblog

#351
Vayra86
AusWolf: I'm cautiously optimistic about the 5070. It caught my attention with its price. But yeah, reviews are key; we learned that with Ampere, Ada and RDNA 3.
550,- for a 12GB GPU with lots of core power in 2025? I've got a fire you can burn that money on, too. Both approaches will keep you warm for a short while.

I can't say I share the optimism here in any way, shape or form. I'm actually more optimistic that enough idiots will throw their Ada cards on the 2nd-hand market because they fell for the empty marketing, just like you, because we are in complete stagnation territory between Ada and Blackwell. It's crystal clear; even Huang confirmed it by talking about DLSS 4 exclusively. I'll happily pick up a 4080 or a 4070 Ti Super at 650~700 sooner than I would even consider a poorly balanced Blackwell at 550,-.

That's the real upgrade path here: 2nd-hand last gen, as all the n00bs upgrade to the latest and greatest that didn't gain them anything.
Posted on Reply
#352
AusWolf
Vayra86: 550,- for a 12GB GPU with lots of core power in 2025? I've got a fire you can burn that money on, too. Both approaches will keep you warm for a short while.
You got a point there, I somehow missed this "small" detail. :ohwell:

Why did I assume that every midrange card anno 2025 has 16 GB? :confused: I guess I'm tired after work.
Posted on Reply
#353
oxrufiioxo
Vayra86: 550,- for a 12GB GPU with lots of core power in 2025? I've got a fire you can burn that money on, too. Both approaches will keep you warm for a short while.
I mean, the 12GB card from Nvidia is still beating the 20GB one from AMD at 4K with RT, and while I don't necessarily disagree with you, if the 9070 sucks at RT and FSR4 is a bust, that 5070 is gonna start looking pretty good.
AusWolf: You got a point there, I somehow missed this "small" detail. :ohwell:

Why did I assume that every midrange card anno 2025 has 16 GB? :confused: I guess I'm tired after work.
Yeah, the 12GB is a huge bummer. I'd wait and see if they refresh it with an 18GB model...
Posted on Reply
#354
Vayra86
oxrufiioxo: I mean, the 12GB card from Nvidia is still beating the 20GB one from AMD at 4K with RT, and while I don't necessarily disagree with you, if the 9070 sucks at RT and FSR4 is a bust, that 5070 is gonna start looking pretty good.
Both cards will get stuck at RT anyway; but a few years down the line, the 20GB card will run ultra textures and max detail while the 5070 will not, at similar FPS. The 'today performance' is pretty irrelevant; all cards are fast enough for virtually anything, even in this perf segment.
Posted on Reply
#355
Legacy-ZA
oxrufiioxo: We have no idea. He did the same thing at the 40-series launch, showing every card being 2x-4x; they all ended up with decent gains, just with price hikes.


It definitely will beat a 3070 Ti, it just might not beat a 4070 by much...
Of course it will, but not by much; that is, if it's not A.I work related, meaning MFG, DLSS and so on. Watch and see. :)
Posted on Reply
#356
oxrufiioxo
Vayra86: Both cards will get stuck at RT anyway; but a few years down the line, the 20GB card will run ultra textures and max detail while the 5070 will not, at similar FPS. The 'today performance' is pretty irrelevant, all cards are fast enough.
They're both bad options for different reasons. I'd wait for the 3GB GDDR7 to release first.
Legacy-ZA: Of course it will, but not by much; that is, if it's not A.I work related, meaning MFG, DLSS and so on. Watch and see. :)
It would be pretty funny if it lost on raster; I just don't see it happening lol.
Posted on Reply
#357
Legacy-ZA
oxrufiioxo: They're both bad options for different reasons. I'd wait for the 3GB GDDR7 to release first.



It would be pretty funny if it lost on raster; I just don't see it happening lol.
As I said, it won't, but it's going to be closer than people expect. ^_^
Posted on Reply
#358
oxrufiioxo
Legacy-ZA: As I said, it won't, but it's going to be closer than people expect. ^_^
I think there is a real chance the 5070/5070 Ti are not very impressive vs. the Super cards, but let's be real: if that were the case, they could have just kept selling Ada (which is likely cheaper to make) at a slight price cut and just released the 5090 for content creators.

They are also both 30% faster in Far Cry, so even if they are half that in a large selection of games, they'll be fine considering the competition.
Posted on Reply
#359
Legacy-ZA
oxrufiioxo: I think there is a real chance the 5070/5070 Ti are not very impressive vs. the Super cards, but let's be real: if that were the case, they could have just kept selling Ada (which is likely cheaper to make) at a slight price cut and just released the 5090 for content creators.

They are also both 30% faster in Far Cry, so even if they are half that in a large selection of games, they'll be fine considering the competition.
All I am saying is, this is comical:


(image: RTX 3070 Ti vs. RTX 5070 spec comparison)


But yeah, of course the clock speeds/cache and so on will make the RTX 5070 faster, but by just how much is what I am wondering. The 5070 will obliterate it in A.I workloads, that is for sure.

When I owned my 3070 Ti, I used "Lossless Scaling", which you can buy from Steam. You could get impressive FPS numbers if you enabled frame generation, but the 8GB VRAM never allowed it to be used for long periods. Anyways, erm, I am glad I got rid of my RTX 3070 Ti, and hell no, I would not buy an RTX 5070, especially with 12GB VRAM. Why? Because if you start using those A.I features, you will run into that VRAM limit very fast, and you will wish you still had a GTX 1070, because performance will drop way below that when you hit that VRAM ceiling (at 1440p and above).

I don't think I would take one for free; it would be just too much of a pain in the ass to manage in many scenarios, instead of just enjoying playing games.
Posted on Reply
#360
oxrufiioxo
Legacy-ZA: All I am saying is, this is comical:

(image: RTX 3070 Ti vs. RTX 5070 spec comparison)

But yeah, of course the clock speeds/cache and so on will make the RTX 5070 faster, but by just how much is what I am wondering. The 5070 will obliterate it in A.I workloads, that is for sure.

When I owned my 3070 Ti, I used "Lossless Scaling", which you can buy from Steam. You could get impressive FPS numbers if you enabled frame generation, but the 8GB VRAM never allowed it to be used for long periods. Anyways, erm, I am glad I got rid of my RTX 3070 Ti, and hell no, I would not buy an RTX 5070, especially with 12GB VRAM. Why? Because if you start using those A.I features, you will run into that VRAM limit very fast, and you will wish you still had a GTX 1070, because performance will drop way below that when you hit that VRAM ceiling (at 1440p and above).

I don't think I would take one for free; it would be just too much of a pain in the ass to manage in many scenarios, instead of just enjoying playing games.
I view the $500-600 cards as 1080p cards as it is, so 12GB is fine, but I agree people should be buying 16GB cards in 2025, regardless of how fast this 12GB card is or isn't.
Posted on Reply
#361
AusWolf
oxrufiioxo: I view the $500-600 cards as 1080p cards as it is
I find that a very strange statement, considering that the market still calls this tier the "performance segment". 1080p isn't where performance is at, or needed, these days.
Posted on Reply
#362
Legacy-ZA
oxrufiioxo: I view the $500-600 cards as 1080p cards as it is, so 12GB is fine, but I agree people should be buying 16GB cards in 2025, regardless of how fast this 12GB card is or isn't.
No, 1440p+ cards are xx70 and up; the xx60 and below is for 1080p and below. I will not accept it any other way, no matter the marketing or brainwashing spewed by that snake Jensen.
Posted on Reply
#363
oxrufiioxo
Legacy-ZA: No, 1440p+ cards are xx70 and up; the xx60 and below is for 1080p and below. I will not accept it any other way, no matter the marketing or brainwashing spewed by that snake Jensen.
AusWolf: I find that a very strange statement, considering that the market still calls this tier the "performance segment". 1080p isn't where performance is at, or needed, these days.
I use a 4090 for 1440p; that's my minimum OK level of performance, just to give you context. A 4070 would be half that performance.

4080s would probably barely scrape by.

Everyone games differently though.
Posted on Reply
#364
AusWolf
oxrufiioxo: I use a 4090 for 1440p; that's my minimum OK level of performance, just to give you context.
How the hell is that even possible? :eek:

I'm on a 6750 XT and only now starting to find it slightly lacking at maximum detail. I know my expectations aren't the greatest in the gaming world, but yours must be through the roof.
Posted on Reply
#365
oxrufiioxo
AusWolf: How the hell is that even possible? :eek:

I'm on a 6750 XT and only now starting to find it slightly lacking at maximum detail. I know my expectations aren't the greatest in the gaming world, but yours must be through the roof.
There is no right or wrong way to game; if you're happy with the performance, that's all that matters... I might have to drop down a tier for my next upgrade and just be happy with it lol.
Posted on Reply
#366
chstamos
Maybe it's high time for another Intel-AMD cross-licensing agreement, this time in graphics. Intel could bring XeSS and ray tracing technologies to the table, seeing how useless AMD is at those, and the Radeon group could license some raster technologies back to Intel :)
Posted on Reply
#367
AusWolf
chstamos: Maybe it's high time for another Intel-AMD cross-licensing agreement, this time in graphics. Intel could bring XeSS and ray tracing technologies to the table, seeing how useless AMD is at those, and the Radeon group could license some raster technologies back to Intel :)
That's actually not a bad idea! :) (but psst... you're in the wrong thread) :ohwell:
Posted on Reply
#368
Chrispy_
JustBenching: Isn't the 9060 rumored to be an 8GB card? That will show them (Nvidia ain't any better in this department, but at least they have a reason to be stingy with VRAM)
Depends on pricing, but if it's over $300 it will likely use double-density GDDR6 to get capacity up to 16GB.

AMD is well aware that 8GB GPUs have been criticised by reviewers and developers alike for the last 2 years now. One of their only selling points during the RDNA 2/3 generations was that they weren't as stingy with VRAM as Nvidia, so the cards would be viable for longer despite a deficit in features, power efficiency, and RT performance.

I personally got rid of a 2060 6GB, a 3060 laptop (6GB), and a 3070 8GB long before they were too weak to run games, simply because they ran out of VRAM and I had to severely compromise on graphics settings at the relatively modest 1440p resolution. I'm not buying midrange $350+ GPUs to run at low settings. A PS5 was running those same games at pseudo-4K at better settings, and that whole console costs less than some of those GPUs alone.
Posted on Reply
#369
10tothemin9volts
igormp: No, TOPS is INT4 for both products.
You mean NV is comparing apples to apples in this one? That would be nice (I guess since there is no fine print, it might be so). Then the AI TOPS would indeed be massively improved: +70% when adjusting for the power increase of +25% for the 5070 vs. the 4070: 988/(466*1.25).
According to "nvidia-ada-gpu-architecture.pdf", the 4090 is:
Peak INT8 Tensor TOPS: 660.6 dense / 1321.2 sparse
Peak INT4 Tensor TOPS: 1321.2 dense / 2642.4 sparse
(the second figure in each pair is effective TOPS using the new Sparsity feature)
So it's either 1321 INT8 sparse or 1321 INT4 dense? Anyway, what matters more is that it's an apples-to-apples comparison.
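
For anyone who wants to check the arithmetic, here is a minimal sketch in Python of the power-adjusted comparison above, assuming the figures quoted in this thread (988 AI TOPS for the 5070, 466 for the 4070, and a +25% board-power increase; none of these are independently verified specs):

# Power-adjusted TOPS gain, using this thread's (unverified) figures.
tops_5070 = 988      # claimed AI TOPS (INT4) for the RTX 5070
tops_4070 = 466      # claimed AI TOPS (INT4) for the RTX 4070
power_ratio = 1.25   # assumed +25% board power for the 5070 vs. the 4070

raw_gain = tops_5070 / tops_4070 - 1                       # ~ +112% raw TOPS
adjusted_gain = tops_5070 / (tops_4070 * power_ratio) - 1  # ~ +70% per watt

print(f"raw TOPS gain: {raw_gain:+.0%}")
print(f"power-adjusted gain: {adjusted_gain:+.0%}")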
Posted on Reply
#370
Chrispy_
AusWolf: That's actually not a bad idea! :) (but psst... you're in the wrong thread) :ohwell:
Didn't AMD suggest that the 8800 XT (now 9070 XT) offered the RT performance of a 4080 at some point? I'm not sure what the source was for that rumour, and like all info from AMD, you don't trust it until it has been independently verified.
Posted on Reply
#371
AusWolf
Chrispy_: Didn't AMD suggest that the 8800 XT (now 9070 XT) offered the RT performance of a 4080 at some point? I'm not sure what the source was for that rumour, and like all info from AMD, you don't trust it until it has been independently verified.
I can't remember if it was RT or raster, but something like that. They did say that RDNA 4 has a totally new RT engine, though.
Posted on Reply
#372
Legacy-ZA
AusWolf: I can't remember if it was RT or raster, but something like that. They did say that RDNA 4 has a totally new RT engine, though.
Yes, that's what I heard too; perhaps their version of 2nd-generation RT cores? FSR 4.0 will also utilize A.I similarly to DLSS, according to the rumours, which will finally give great uplifts to image quality.

I won't forget that AMD gave us FSR without A.I, something I think was extremely awesome of them to do. Sure, it doesn't look as good in some scenarios, but it is a great option for non-RTX users. :love:
Posted on Reply
#373
remekra
We can safely assume that it's going to be similar to what the PS5 Pro has: double the intersection performance and better BVH acceleration, plus better handling of divergent rays, so increased performance in harder RT workloads like GI and PT. Plus any other tech that Sony didn't disclose in detail.

Posted on Reply
#374
AusWolf
remekra: We can safely assume that it's going to be similar to what the PS5 Pro has: double the intersection performance and better BVH acceleration, plus better handling of divergent rays, so increased performance in harder RT workloads like GI and PT. Plus any other tech that Sony didn't disclose in detail.

The PS5 Pro has an RDNA 3(.5?) iGPU.
Posted on Reply
#375
remekra
AusWolf: The PS5 Pro has an RDNA 3(.5?) iGPU.
It doesn't have RDNA 3 or 3.5. The shaders are the same as the PS5's, so RDNA 2, plus RT from RDNA 4 and ML hardware that comes from Sony itself.
You can watch the whole presentation; Mark explains it pretty well.
RDNA 3 didn't increase intersection performance. I looked everywhere for that info some time ago and didn't find a single mention of it.

EDIT

Here you can find a good, deeper explanation of how RDNA 3 and RDNA 2 RT works:
chipsandcheese.com/p/raytracing-on-amds-rdna-2-3-and-nvidias-turing-and-pascal

And even from AMD's own slides (image not shown):

No mention of improvement on intersection. So basically the PS5 Pro has better RT than RDNA 3. But still, even if they doubled the intersection rate, it doesn't mean RT perf will double from that alone; there are other improvements too, so who knows, we will see.
Posted on Reply