Tuesday, September 20th 2022

NVIDIA Project Beyond GTC Keynote Address: Expect the Expected (RTX 4090)

NVIDIA just kicked off the GTC Autumn 2022 Keynote address that culminates in Project Beyond, the company's launch vehicle for its next-generation GeForce RTX 40-series graphics cards based on the "Ada" architecture. These are expected to nearly double the performance over the present generation, ushering in a new era of photo-real graphics as we inch closer to the metaverse. NVIDIA CEO Jensen Huang is expected to take center-stage to launch these cards.

15:00 UTC: The show is on the road.
15:00 UTC: AI remains the center focus, including how it plays with gaming.

15:01 UTC: Racer X is a real-time interactive tech demo. Coming soon.
15:02 UTC: "Future games will be simulations, not pre-baked," says Jensen Huang.
15:03 UTC: This is seriously good stuff (Racer X). It runs on a single GPU, in real-time, and uses RTX Neural Rendering.
15:05 UTC: Ada Lovelace is a huge GPU
15:06 UTC: 76 billion transistors, over 18,000 shaders, Micron GDDR6X memory. Shader Execution Reordering is a major innovation, as big as out-of-order execution was for CPUs, with gains of up to 25% in-game performance. Ada is built on TSMC 4 nm using 4N, a custom process co-designed with NVIDIA.

There's a new streaming multiprocessor design, with a total of 90 TFLOPS. Power efficiency is doubled over Ampere.
Ray Tracing is on the third generation now, with 200 RT TFLOPS and twice the triangle intersection speed.
Deep learning AI uses 4th-gen Tensor Cores (1,400 TFLOPS) and an "Optical Flow Accelerator".
15:07 UTC: Shader Execution Reordering is similar to what we saw with Intel Xe-HPG.
15:08 UTC: Several new hardware-accelerated ray tracing innovations with 3rd gen RTX.
15:09 UTC: DLSS 3 is announced. It brings with it several new innovations, including temporal components, and Reflex latency optimizations. Generates new frames without involving the graphics pipeline.
15:11 UTC: Cyberpunk 2077 to get DLSS 3 and SER; a 16x increase in effective performance using DLSS 3 vs. DLSS 1. MS Flight Simulator to get DLSS 3 support.
15:13 UTC: Portal RTX, a remaster just like Quake II RTX, available from November, created with Omniverse RTX Remix.
15:14 UTC: Ada offers a giant leap in total performance. Everything has been increased: shader 40 -> 90 TFLOPS, RT 78 -> 200 TFLOPS, OFA 126 -> 300 TFLOPS, Tensor 320 -> 1,400 TFLOPS.
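As an aside, the per-unit uplift implied by those on-stage figures is easy to sanity-check with a few lines of Python (the dictionary keys are just shorthand labels for this sketch, not NVIDIA terminology):

```python
# Generational throughput figures quoted in the keynote (Ampere -> Ada), in TFLOPS.
ampere = {"shader": 40, "rt": 78, "ofa": 126, "tensor": 320}
ada = {"shader": 90, "rt": 200, "ofa": 300, "tensor": 1400}

for unit in ampere:
    ratio = ada[unit] / ampere[unit]
    print(f"{unit}: {ampere[unit]} -> {ada[unit]} TFLOPS ({ratio:.2f}x)")
```

That works out to roughly 2.25x for shader, about 2.6x for RT, about 2.4x for OFA, and about 4.4x for Tensor throughput, which lines up with the "2-4x faster" claims made later in the keynote.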
15:17 UTC: Power efficiency is more than doubled, but power goes up to 450 W now.
15:18 UTC: GeForce RTX 4090 will be available on October 12, priced at $1600. It comes with 24 GB GDDR6X and is 2-4x faster than RTX 3090 Ti.
15:18 UTC: RTX 4080 is available in two versions, 16 GB and 12 GB. The 16 GB version starts at $1200, the 12 GB at $900. 2-4x faster than RTX 3080 Ti.
15:19 UTC: New pricing for RTX 30-series, "for mainstream gamers", RTX 40-series "for enthusiasts".
15:19 UTC: "Ada is a quantum leap for gamers"—improved ray tracing, shader execution reordering, DLSS 3.
15:20 UTC: Updates to Omniverse

15:26 UTC: Racer X demo was built by a few dozen artists in just 3 months.
15:31 UTC: Digital twins will play a vital role in product development and lifecycle maintenance.
15:31 UTC: Over 150 connectors to Omniverse.
15:33 UTC: GDN (graphics delivery network) is the new CDN. Graphics rendering over the Internet will be as big in the future as streaming video is today.
15:37 UTC: Omniverse Cloud, a planetary-scale GDN
15:37 UTC: THOR SuperChip for automotive applications.

15:41 UTC: NVIDIA next-generation Drive

333 Comments on NVIDIA Project Beyond GTC Keynote Address: Expect the Expected (RTX 4090)

#326
THU31
When did TDP actually become a relevant thing? I do not remember at all.

When I look at Wikipedia, the first mention of TDP is with the GeForce 8000 series. Was this the first time they went drastically above 100 W, with the 8800 GTX, which was an unbelievable advancement over the 7000 series?
The TPU database does mention TDP for the FX and 6000/7000 cards, but they all seem to be under 100 W. I do not think people paid any attention to power consumption back then.
#327
dogwitch
THU31: When did TDP actually become a relevant thing? I do not remember at all.

When I look at Wikipedia, the first mention of TDP is with the GeForce 8000 series. Was this the first time they went drastically above 100 W, with the 8800 GTX, which was an unbelievable advancement over the 7000 series?
The TPU database does mention TDP for the FX and 6000/7000 cards, but they all seem to be under 100 W. I do not think people paid any attention to power consumption back then.
They did, there was just far less tolerance hardware-wise compared to now.
Now a GPU can toast a PSU pretty easily.
#328
Valantar
THU31: When did TDP actually become a relevant thing? I do not remember at all.

When I look at Wikipedia, the first mention of TDP is with the GeForce 8000 series. Was this the first time they went drastically above 100 W, with the 8800 GTX, which was an unbelievable advancement over the 7000 series?
The TPU database does mention TDP for the FX and 6000/7000 cards, but they all seem to be under 100 W. I do not think people paid any attention to power consumption back then.
They did pay attention to it, but mainly in terms of noise, thermals and the need for auxiliary power connectors. In the early (AGP/PCI) days there was no standard for auxiliary power, so you saw all kinds of weird solutions, from external power bricks with inputs on the card's I/O to more ordinary Molex power. Crucially, PSUs didn't have those power connectors, or the output ratings to support them, either. Still, coolers were much less advanced back then, and TDPs were much, much lower. I don't know when the term came to prominence for GPUs, but it was probably as cards grew ever more power hungry and started needing the power of 6- and 8-pin PCIe connectors, so around the late 2000s maybe? It takes a long time for terminology like that to move out of "for people who read datasheets" territory and into common (even enthusiast) parlance, though.
#329
Bzuco
Sisyphus: Raw compute performance 3090 30 TFLOPS, 4090 90 TFLOPS.
and 12x larger L2 cache, 2.32x higher fillrates. Every game will benefit from those numbers in terms of higher fps or lower power consumption.
I hope lower power states will have reasonably set memory frequencies and not locked GPU frequencies.
ARF: No, ray-tracing is niche, expensive and not worthwhile as of today. It will always be niche because there is no manufacturing method to produce that many transistors to make it work.

Also, ray-tracing as done by NVIDIA is a gimmick. They remove the traditional lighting that is as good or even better, and push down your throat something that you never asked for.
Thanks to temporal filtering, RT games do not need hundreds of rays fired for every pixel. That is why we can use the technique in games these days to compute GI and long-distance shadows more precisely in real time. Until we reach technological progress a la Star Trek, NVIDIA RT is still pushing us forward.
THU31: I feel like rasterization possibilities have been maxed out.
We still have not reached real-world visual realism, so rasterization can still bring more detail close to the camera... it just does not make sense to spend hardware performance on certain rasterization algorithms if RT algorithms can use the chip's performance better. As you said, "Ray tracing is the future and it will continue to evolve".

Mixing raster techniques with RT is a good and necessary step in the right direction.
#333
Haile Selassie
What good is a "simulation, not pre-baked" when the game itself is just a bland, boring rehash of an existing idea?