Tuesday, September 20th 2022

NVIDIA Delivers Quantum Leap in Performance, Introduces New Era of Neural Rendering With GeForce RTX 40 Series

NVIDIA today unveiled the GeForce RTX 40 Series of GPUs, designed to deliver revolutionary performance for gamers and creators, led by its new flagship, the RTX 4090 GPU, with up to 4x the performance of its predecessor. The world's first GPUs based on the new NVIDIA Ada Lovelace architecture, the RTX 40 Series delivers massive generational leaps in performance and efficiency, and represents a new era of real-time ray tracing and neural rendering, which uses AI to generate pixels.

"The age of RTX ray tracing and neural rendering is in full steam, and our new Ada Lovelace architecture takes it to the next level," said Jensen Huang, NVIDIA's founder and CEO, at the GeForce Beyond: Special Broadcast at GTC. "Ada provides a quantum leap for gamers and paves the way for creators of fully simulated worlds. With up to 4x the performance of the previous generation, Ada is setting a new standard for the industry," he said.
DLSS 3 Generates Entire Frames for Faster Game Play
Huang also announced NVIDIA DLSS 3—the next revolution in the company's Deep Learning Super Sampling neural-graphics technology for games and creative apps. The AI-powered technology can generate entire frames for massively faster game play. It can overcome CPU performance limitations in games by allowing the GPU to generate entire frames independently.
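
Conceptually, frame generation is motion-guided interpolation: given two rendered frames and a per-pixel motion field (from the Optical Flow Accelerator plus game motion vectors), the network synthesizes the frame in between. The NumPy sketch below is a toy stand-in for that idea, not NVIDIA's actual network; all names and data shapes are illustrative.

```python
import numpy as np

def synthesize_intermediate_frame(prev_frame, next_frame, flow, t=0.5):
    """Toy motion-guided interpolation: warp both rendered frames toward
    time t along a per-pixel flow field, then blend. DLSS 3's network is
    far more sophisticated; this only shows the concept."""
    h, w, _ = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]

    # Sample the previous frame partway along the motion vectors.
    py = np.clip(ys - t * flow[..., 1], 0, h - 1).astype(int)
    px = np.clip(xs - t * flow[..., 0], 0, w - 1).astype(int)

    # Sample the next frame the remaining distance back along the flow.
    ny = np.clip(ys + (1 - t) * flow[..., 1], 0, h - 1).astype(int)
    nx = np.clip(xs + (1 - t) * flow[..., 0], 0, w - 1).astype(int)

    # Linear blend; the real network weighs per-pixel confidence instead.
    return (1 - t) * prev_frame[py, px] + t * next_frame[ny, nx]

# Tiny usage example: a static scene reduces to a plain cross-fade.
prev_f, next_f = np.zeros((4, 4, 3)), np.ones((4, 4, 3))
mid = synthesize_intermediate_frame(prev_f, next_f, np.zeros((4, 4, 2)))
print(mid[0, 0])  # [0.5 0.5 0.5]
```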

The technology is coming to the world's most popular game engines, such as Unity and Unreal Engine, and has received support from many of the world's leading game developers, with more than 35 games and apps coming soon.

Additionally, the RTX 40 Series GPUs feature a range of new technological innovations, including:
  • Streaming multiprocessors with up to 83 teraflops of shader power—2x over the previous generation.
  • Third-generation RT Cores with up to 191 effective ray tracing teraflops—2.8x over the previous generation.
  • Fourth-generation Tensor Cores with up to 1.32 Tensor petaflops—5x over the previous generation using FP8 acceleration.
  • Shader Execution Reordering (SER), which improves execution efficiency by rescheduling shading workloads on the fly to better utilize the GPU's resources. As significant an innovation as out-of-order execution was for CPUs, SER improves ray tracing performance up to 3x and in-game frame rates by up to 25%; see the sketch after this list.
  • Ada Optical Flow Accelerator with 2x faster performance allows DLSS 3 to predict movement in a scene, enabling the neural network to boost frame rates while maintaining image quality.
  • Architectural improvements tightly coupled with custom TSMC 4N process technology result in an up to 2x leap in power efficiency.
  • Dual NVIDIA Encoders (NVENC) cut export times by up to half and feature AV1 support. The NVENC AV1 encode is being adopted by OBS, Blackmagic Design DaVinci Resolve, Discord and more.
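
To make the SER bullet above concrete: neighboring rays often hit different materials, so shading them in ray order leaves the GPU executing divergent code. The sketch below regroups hits by material before shading. It is purely illustrative Python over assumed data shapes; real SER reorders threads inside the GPU via hardware and driver, not in application code.

```python
from collections import defaultdict

def shade_with_reordering(ray_hits, shaders):
    """Toy Shader Execution Reordering: bucket ray hits by material so
    each shader runs over one coherent batch instead of interleaved,
    divergent work."""
    buckets = defaultdict(list)
    for hit in ray_hits:
        buckets[hit["material"]].append(hit)   # regroup work by shader

    results = {}
    for material, hits in buckets.items():
        shade = shaders[material]
        for hit in hits:                       # coherent batch, one shader
            results[hit["ray_id"]] = shade(hit)
    return results

# Hypothetical usage with two "shaders":
shaders = {"glass": lambda h: "refract", "metal": lambda h: "reflect"}
hits = [{"ray_id": i, "material": m}
        for i, m in enumerate(["glass", "metal", "glass", "metal"])]
print(shade_with_reordering(hits, shaders))
```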
New Ray Tracing Tech for Even More Immersive Games
For decades, rendering ray-traced scenes with physically correct lighting in real time has been considered the holy grail of graphics. At the same time, geometric complexity of environments and objects has continued to increase as 3D games and graphics strive to provide the most accurate representations of the real world.

Achieving physically accurate graphics requires tremendous computational horsepower. Modern ray-traced games like Cyberpunk 2077 run over 600 ray tracing calculations for each pixel just to determine lighting—a 16x increase from the first ray-traced games introduced four years ago.

The new third-generation RT Cores have been enhanced to deliver 2x faster ray-triangle intersection testing and include two important new hardware units. An Opacity Micromap Engine speeds up ray tracing of alpha-test geometry by a factor of 2x, and a Micro-Mesh Engine generates micro-meshes on the fly to generate additional geometry. The Micro-Mesh Engine provides the benefits of increased geometric complexity without the traditional performance and storage costs of complex geometries.
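
As a rough sketch of the Opacity Micromap idea (illustrative only, not the actual hardware interface): alpha-tested geometry such as foliage normally forces an any-hit shader invocation on every ray intersection just to check whether the texel is see-through. Pre-classifying micro-triangle regions lets most hits resolve without calling the shader at all.

```python
# Each micro-triangle region is pre-classified into one of three states.
OPAQUE, TRANSPARENT, UNKNOWN = range(3)

def resolve_hit(state, run_any_hit_shader):
    """Toy Opacity Micromap lookup: only ambiguous regions fall back to
    the (expensive) any-hit shader."""
    if state == OPAQUE:
        return True                  # accept the hit, no shader call
    if state == TRANSPARENT:
        return False                 # ray passes through, no shader call
    return run_any_hit_shader()      # UNKNOWN: let the shader decide

# Hypothetical leaf texture: four of five hits skip the shader entirely.
states = [OPAQUE, TRANSPARENT, TRANSPARENT, UNKNOWN, OPAQUE]
print([resolve_hit(s, lambda: True) for s in states])
```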

Creativity Redefined With RTX Remix, New AV1 Encoders
The RTX 40 Series GPUs and DLSS 3 deliver advancements for NVIDIA Studio creators. 3D artists can render fully ray-traced environments with accurate physics and realistic materials, and view the changes in real time, without proxies. Video editing and live streaming also get a boost from improved GPU performance and the inclusion of new dual, eighth-generation AV1 encoders. The NVIDIA Broadcast software development kit has three updates, now available for partners, including Face Expression Estimation, Eye Contact and quality improvements to Virtual Background.

NVIDIA Omniverse—included in the NVIDIA Studio suite of software—will soon add NVIDIA RTX Remix, a modding platform to create stunning RTX remasters of classic games. RTX Remix allows modders to easily capture game assets, automatically enhance materials with powerful AI tools, and quickly enable RTX with ray tracing and DLSS.

Portal Is RTX ON!
RTX Remix has been used by NVIDIA Lightspeed Studios to reimagine Valve's iconic video game Portal, regarded as one of the best video games of all time. Advanced graphics features such as full ray tracing and DLSS 3 give the game a striking new look and feel. Portal with RTX will be released as free, official downloadable content for the classic platformer with RTX graphics in November, just in time for Portal's 15th anniversary.

The GeForce RTX 4090 and 4080: The New Ultimate GPUs
The RTX 4090 is the world's fastest gaming GPU with astonishing power, acoustics and temperature characteristics. In full ray-traced games, the RTX 4090 with DLSS 3 is up to 4x faster compared to last generation's RTX 3090 Ti with DLSS 2. It is also up to 2x faster in today's games while maintaining the same 450 W power consumption. It features 76 billion transistors, 16,384 CUDA cores and 24 GB of high-speed Micron GDDR6X memory, and consistently delivers over 100 frames per second at 4K-resolution gaming. The RTX 4090 will be available on Wednesday, Oct. 12, starting at $1,599.
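
The headline shader figure is easy to sanity-check: peak FP32 throughput is CUDA cores × 2 FLOPs per clock (one fused multiply-add) × clock speed. The core count comes from the paragraph above; the ~2.52 GHz boost clock is NVIDIA's published RTX 4090 spec, not stated in this article.

```python
# Peak FP32 throughput = cores x 2 FLOPs per clock (FMA) x boost clock.
cuda_cores = 16_384          # RTX 4090, from the article above
boost_clock_hz = 2.52e9      # published boost clock (assumption here)
tflops = cuda_cores * 2 * boost_clock_hz / 1e12
print(f"{tflops:.1f} TFLOPS")  # ~82.6, in line with "up to 83 teraflops"
```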

The company also announced the RTX 4080, launching in two configurations. The RTX 4080 16 GB has 9,728 CUDA cores and 16 GB of high-speed Micron GDDR6X memory, and with DLSS 3 is 2x as fast in today's games as the GeForce RTX 3080 Ti and more powerful than the GeForce RTX 3090 Ti at lower power. The RTX 4080 12 GB has 7,680 CUDA cores and 12 GB of Micron GDDR6X memory, and with DLSS 3 is faster than the RTX 3090 Ti, the previous-generation flagship GPU.

Both RTX 4080 configurations will be available in November, with prices starting at $1,199 and $899, respectively.

The GeForce RTX 4090 and 4080 GPUs will be available as custom boards, including stock-clocked and factory-overclocked models, from top add-in card providers such as ASUS, Colorful, Gainward, Galaxy, GIGABYTE, Innovision 3D, MSI, Palit, PNY and Zotac. The RTX 4090 and RTX 4080 (16 GB) are also produced directly by NVIDIA in limited Founders Editions for fans wanting the NVIDIA in-house design. Look for the GeForce RTX 40 Series GPUs in gaming systems built by Acer, Alienware, ASUS, Dell, HP, Lenovo and MSI, leading system builders worldwide, and many more.

31 Comments on NVIDIA Delivers Quantum Leap in Performance, Introduces New Era of Neural Rendering With GeForce RTX 40 Series

#1
BorisDG
Time for 4090... finally.
#2
GhostRyder
The two 4080s is what's got me, not a big fan of releasing two memory variants that also have different core counts, because it comes off as just a memory difference. But I will be interested to see these in the wild and some comparisons!
#3
cvaldes
GhostRyder said:
The two 4080s is what's got me, not a big fan of releasing two memory variants that also have different core counts, because it comes off as just a memory difference. But I will be interested to see these in the wild and some comparisons!

A wiser approach is to ignore the 4080 model number designations and just assess the cards by actual performance.

In addition to the VRAM size and core count differences, they also have different memory bus sizes, clock speeds, and power requirements.

It is noteworthy that all three cards announced today are based on different GPUs: AD102-300, AD103-300, and AD104-400.

In the same way, many people erroneously compared the 3080 Ti to the 3080 because of the model numbers. The 3080 Ti actually shared the same GPU as the 3090, so the better comparison would have been with the 3090 (basically, the 3080 Ti was a binned 3090 with half the VRAM).
#4
birdie
Key points:
  • DLSS 3.0 is fantastic though proprietary.
  • Pricing is just bad.
  • Two 4080 SKUs with different numbers of shaders? Looks like NVIDIA decided to charge top dollar for what should have been the RTX 4070 Ti. Let's see what RDNA 3.0 will bring, because this is just ugly.
  • I expect RDNA 3.0 to reach the RTRT performance of the RTX 30 series, which again means NVIDIA will take the performance crown for heavy RT games for the next two years.
  • Looks like we've reached the point where the laws of physics no longer allow us to get more performance in the same power envelope, which is really, really sad.
#5
The_Enigma
Wish they would have announced a new Shield TV. The current one could really use a CPU and GPU upgrade: hardware decoding of newer formats, DLSS tech for better 4K content upscaling, and potentially DLSS 3 tech for frame interpolation to 60-120 fps that is actually good and doesn't add much latency. And of course, the current one is too slow to run emulators on, so that would be a nice upgrade too. Guess we will have to wait till probably next year for that.


Is it confirmed that the RTX 4080 16GB is using the AD103 core with 9,728 cores while the RTX 4080 12GB uses the AD104 core with 7,680 cores? I didn't see core counts listed in the presentation, but he went pretty fast through that part (probably to try and pull a fast one on as many people as he could).
So effectively the 4080 12GB is a renamed 4070 Ti, and Nvidia has once again shifted the product stack's pricing up a tier?
#6
Arkz
Saying how much more powerful they are when using DLSS 3 compared to the last-gen cards... I bet those last-gen cards aren't even using DLSS 2.3 or anything; it will be native rendering on them vs DLSS 3 on the new cards to make the jump seem bigger.
#7
ZoneDymo
"a quantum leap" but they can just throw out anything without consequences cant they?
#8
Upgrayedd
cvaldes said:
A wiser approach is to ignore the 4080 model number designations and just assess the cards by actual performance.

In addition to the VRAM size and core count differences, they also have different memory bus sizes, clock speeds, and power requirements.

It is noteworthy that all three cards announced today are based on different GPUs: AD102-300, AD103-300, and AD104-400.

In the same way, many people erroneously compared the 3080 Ti to the 3080 because of the model numbers. The 3080 Ti actually shared the same GPU as the 3090, so the better comparison would have been with the 3090 (basically, the 3080 Ti was a binned 3090 with half the VRAM).

The 3080 was also cut from the same die as the 90-class cards.

While ignoring the model number is okay for some, at the store the only difference a regular person will see is the VRAM, which isn't cool if that's not the only difference.
#9
cvaldes
Well, the price is going to be a difference.

Discrete graphics cards have become an increasingly niche product, especially in the upper tier. For sure, NVIDIA's choice of product model numbers may confuse a handful of people, but not those who do their homework.

Joe Consumer in the USA is going to buy whatever's cheaper anyhow.
#10
Hyderz
So is Nvidia holding back the RTX 4070 to counter AMD's price-to-performance GPUs?
Or do you guys think the RTX 4080 12GB is the new 4070...
There's a massive gap between the 4080 16GB and the 4090, so I guess that leaves room for an RTX 4080 Ti to counter AMD.
Later on, will we see an RTX 4090 Ti?
#11
AnotherReader
This will be the biggest difference between the flagship and the next fastest GPU in terms of SMX count that I can remember. The previous generations were like this:

Generation | Flagship SMX count | 2nd tier GPU SMX count | Ratio | Comments
Kepler | 15 | 12 | 1.25 | GTX 780 Ti vs GTX 780
Maxwell | 24 | 22 | 1.09 | Titan X vs GTX 980 Ti
Pascal | 30 | 28 | 1.07 | Titan Xp vs GTX 1080 Ti
Turing | 72 | 68 | 1.06 | RTX Titan vs RTX 2080 Ti
Ampere | 84 | 68 | 1.24 | RTX 3090 Ti vs RTX 3080 10 GB
Ada | 128 | 76 | 1.68 | RTX 4090 vs RTX 4080 16 GB


One can easily see how the 4080 16 GB stands out as the runt and poor value.

Even if we go by die sizes for the actual 2nd tier full die, this generation is an outlier.

Generation | Flagship SMX count | Flagship price | 2nd tier SMX count | 2nd tier price | SMX ratio | Price ratio | Comments
Kepler | 15 | $699 | 8 | $330 | 1.88 | 2.12 | GTX 780 Ti vs GTX 680
Maxwell | 24 | $649 | 16 | $499 | 1.50 | 1.30 | GTX 980 Ti vs GTX 980
Pascal | 30 | $699 | 20 | $499 | 1.50 | 1.40 | GTX 1080 Ti vs GTX 1080
Turing | 72 | $1,200 | 48 | $699 | 1.50 | 1.72 | RTX 2080 Ti vs RTX 2080 Super
Ampere | 84 | $1,999 | 48 | $599 | 1.75 | 3.34 | RTX 3090 Ti vs RTX 3070 Ti
Ada | 128 | $1,599 | 76 | $1,199 | 1.68 | 1.33 | RTX 4090 vs RTX 4080 16 GB


Now it looks better for the 4080 16 GB, until you consider the price, which is outside the historical norm for the lower-tier GPU. Only the GTX 980 was priced this close to the flagship, and that was an atypical generation in many ways.
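
For anyone who wants to sanity-check the ratio columns, a few lines of Python reproduce them from the SMX counts and launch prices listed above:

```python
# (generation, flagship SMs, flagship price, 2nd-tier SMs, 2nd-tier price)
rows = [
    ("Kepler",   15,  699,  8,  330),
    ("Maxwell",  24,  649, 16,  499),
    ("Pascal",   30,  699, 20,  499),
    ("Turing",   72, 1200, 48,  699),
    ("Ampere",   84, 1999, 48,  599),
    ("Ada",     128, 1599, 76, 1199),
]
for gen, sm_f, usd_f, sm_2, usd_2 in rows:
    print(f"{gen:<8} SM ratio {sm_f / sm_2:.2f}   price ratio {usd_f / usd_2:.2f}")
```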
#12
RandallFlagg
Upgrayedd said:
The 3080 was also cut from the same die as the 90-class cards.

While ignoring the model number is okay for some, at the store the only difference a regular person will see is the VRAM, which isn't cool if that's not the only difference.

Agreed, I was just looking at the specs at videocardz and that is very deceptive. They even use different dies, it seems. This is more like the difference I'd expect between a 4080 and a 4080 Ti, or perhaps even a 4080 and a 4090. Overall the 16GB seems to have 20-25% more SMs, CUDA cores, and memory bandwidth. That's normally an entire tier of performance.

#13
cvaldes
Hyderz said:
So is Nvidia holding back the RTX 4070 to counter AMD's price-to-performance GPUs?
Or do you guys think the RTX 4080 12GB is the new 4070...
There's a massive gap between the 4080 16GB and the 4090, so I guess that leaves room for an RTX 4080 Ti to counter AMD.
Later on, will we see an RTX 4090 Ti?

I don't think it's wise to simply rely on the NVIDIA marketing department's model numbers. They have a habit of changing their model numbering from generation to generation. Hell, even the x90 cards are really Titans in sheep's clothing these days.

My guess is that we will see a 4090 Ti someday: the full-fat AD102 GPU from binned silicon. Why not? NVIDIA can set aside some better samples and charge more money for them. The cost to NVIDIA is the same; they are all coming off the same wafers.
#14
Fleurious
Any idea when the NDA lifts on reviews?
#15
Upgrayedd
RandallFlagg said:
Agreed, I was just looking at the specs at videocardz and that is very deceptive. They even use different dies, it seems. This is more like the difference I'd expect between a 4080 and a 4080 Ti, or perhaps even a 4080 and a 4090. Overall the 16GB seems to have 20-25% more SMs, CUDA cores, and memory bandwidth. That's normally an entire tier of performance.

Uhhh, there's a 192-bit 4080? Wtf
#16
cvaldes
Fleurious said:
Any idea when the NDA lifts on reviews?

My guess is that W1zzard knows. Maybe some other TPU staffers as well.

Part of the NDA might be to not talk about the NDA until it is lifted. Any date at this point would just be speculation unless you've actually read the NDA itself.
#17
The_Enigma
Fleurious said:
Any idea when the NDA lifts on reviews?

Safest bet is October 12th, the same day the card goes on sale. That way everyone has to run out and buy one first, knowing stock will sell out, and then sit down and read the review after they've already purchased it.
#18
steen
cvaldes said:
I don't think it's wise to simply rely on the NVIDIA marketing department's model numbers. They have a habit of changing their model numbering from generation to generation. Hell, even the x90 cards are really Titans in sheep's clothing these days.

My guess is that we will see a 4090 Ti someday: the full-fat AD102 GPU from binned silicon. Why not? NVIDIA can set aside some better samples and charge more money for them. The cost to NVIDIA is the same; they are all coming off the same wafers.

About the Titan or 4090tie branding... it depends on the competitiveness of AMD. If the performance difference is significant enough, it'll be a Titan at over $2k; otherwise, a 4090tie under $2k.
#19
Bwaze
"Computing is getting more expensive at incredible speeds!" - Jensen Huang, probably.

RTX 3080 was $700.

RTX 4080 12GB is $900, 16GB is $1200.

I just hate it when I'm right. I kind of feared such a price increase, and I think this is just the beginning; everything coming out this fall will have similarly perverse price increases. New CPUs, motherboards, even new PSUs are reported to be much more expensive.

On the other hand, even though inflation is more than 10% here and the cost of living is skyrocketing, salaries mostly remain the same, because companies are reportedly struggling and higher salaries would be a breaking point.
#20
thestryker6
The debacle with EVGA and the reporting from JPR show full well that Nvidia has been abusing its place in the market to raise prices for greater margins while squeezing AIBs. I wish AMD weren't willing to go along with it, and/or that Intel would come out swinging on price. As it stands, I can't imagine buying another video card in this market, as it's apparent that it is up to the customers to stop it.
#21
MikeMurphy
I wonder if EVGA exited because it didn't see much demand at these MSRP prices, given the flood of used cards hitting the market and far less crypto demand.
#22
cvaldes
MikeMurphy said:
I wonder if EVGA exited because it didn't see much demand at these MSRP prices, given the flood of used cards hitting the market and far less crypto demand.

Not a likely factor. One of EVGA's clearly stated gripes was that NVIDIA would not reveal MSRPs to its AIB partners until the very last moment, so they were basically flying blind on gross-margin forecasting.

EVGA has two decades of 20/20 hindsight on how gross margins ended up, and apparently they did not like how GM was trending.
#23
Crackong
That is clearly a supposed-to-be 4070 rebranded as the 4080 12GB to sell for double the bucks.
#24
Easo
Those prices... Worse than I feared, that's the best I can say without swearing.
#25
Bwaze


Prices in Germany:

- GeForce RTX 4090: €1,949
- GeForce RTX 4080 16 GB: €1,469
- GeForce RTX 4080 12 GB: €1,099

Nvidia marketing will have a hard time selling these price increases.

Especially if they come with just the normal generational performance increase. There's something very funny going on with the rare benchmarks and performance numbers Nvidia is showing. For instance, they brag about being 2x faster in Microsoft Flight Simulator, and while showing DLSS 3.0 they also brag about it being 2x faster than DLSS off, but the performance of the DLSS-off video isn't really that great. It should be twice the RTX 3090, right?
