Positioning & Architecture
Just a week after the debut of the GeForce RTX 5090, we have the GeForce RTX 5080. Today the embargo lifts for all remaining GeForce RTX 5080 models; yesterday, reviews of the NVIDIA Founders Edition and all custom designs priced at the $1000 MSRP were allowed. This brings our total review coverage for the RTX 5080 to ten reviews:
ASUS RTX 5080 Astral OC, Colorful RTX 5080 Vulcan OC, Gainward RTX 5080 Phoenix GS, Galax RTX 5080 1-Click OC, Gigabyte RTX 5080 Gaming OC, MSI RTX 5080 Suprim SOC, MSI RTX 5080 Vanguard SOC, the Palit RTX 5080 GameRock OC (the card in this review), the Zotac RTX 5080 Amp Extreme and, of course, last but not least, the NVIDIA RTX 5080 Founders Edition.
The Blackwell architecture introduces several improvements under the hood, such as giving all shaders the ability to run FP32 or INT32 instructions; on Ada, only half the cores had that ability. The Tensor Cores are now accessible from the shaders through a new Microsoft DirectX API, and they support FP4 and INT4 instructions, which run at lower precision, but much faster and with less memory usage. There are numerous additional architecture improvements; we talked about all of them on the first pages of this review.
The GeForce RTX 5080 increases the number of GPU cores to 10,752, up from 10,240 on the RTX 4080 Super (+5%) and 9,728 on the RTX 4080 non-Super (+11%). Other unit counts increase accordingly, though the ROPs remain at 112. The memory capacity stays at 16 GB on a 256-bit bus, but NVIDIA has upgraded to brand-new GDDR7 graphics memory chips.
From a fabrication perspective nothing has changed though—Blackwell is fabricated on the same 5 nanometer "NVIDIA 4N" TSMC node as last generation's Ada. NVIDIA claims this is a "4 nanometer process," but during Ada it was confirmed that NVIDIA 4N is actually not TSMC N4 (note the order of N and 4), but 5 nanometer. At the end of the day the actual number doesn't matter much; what's important is that NVIDIA is using the same process node.
Palit's GeForce RTX 5080 GameRock OC is the company's premium custom-design model. It comes with a large triple-fan cooling solution, and clock speeds have been raised to a 2730 MHz rated boost, or +4.3% over the NVIDIA baseline of 2617 MHz.
Performance
We upgraded our test system last month; it is now built on AMD technology with the outstanding Ryzen 7 9800X3D. We've updated to Windows 11 24H2, complete with the newest patches and updates, and have added a selection of new games. For the RTX 5080 Founders Edition, at 4K resolution, with pure rasterization, without ray tracing or DLSS, we measured a 16% performance uplift over the RTX 4080 Super and 19% over the RTX 4080 non-Super. This is definitely MUCH less than expected, and not nearly as much as what we saw last week from the RTX 5090, which beat the RTX 4090 by 35%. Compared to the GeForce RTX 3080, the performance increase is 82%, which means NVIDIA missed the "twice the performance every second generation" rule. Last generation's flagship, the RTX 4090, is 9% faster than the RTX 5080, and the new RTX 5090 flagship is 48% faster, but twice as expensive.
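Percentage comparisons like these flip asymmetrically when you reverse the direction of the comparison. As a quick sanity check, here is a small Python sketch using illustrative index values normalized to RTX 5080 = 100 (the percentages are the ones quoted above, not raw FPS data):

```python
# Illustrative sketch of relative-performance math; index values are
# normalized, not measured FPS.
def pct_faster(a, b):
    """How many percent faster card a is than card b."""
    return round((a / b - 1) * 100)

rtx_5080 = 100
rtx_4080_super = rtx_5080 / 1.16  # 5080 is 16% faster than the 4080 Super
rtx_5090 = rtx_5080 * 1.48        # 5090 is 48% faster than the 5080

print(pct_faster(rtx_5080, rtx_4080_super))  # 16
# Note the asymmetry: the 5090 being 48% faster means the 5080 is
# only about 32% slower, because 1 - 1/1.48 is roughly 0.32.
print(round((1 - rtx_5080 / rtx_5090) * 100))  # 32
```

This is why "X% faster" and "X% slower" never describe the same gap between two cards.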
The GeForce RTX 5080 is still faster than the AMD Radeon RX 7900 XTX, Team Red's best GPU, by 19% in a pure raster scenario, and by much more with RT. AMD has confirmed that they are not going for the high end with RDNA 4, and it's expected that the RX 9070 Series will end up somewhere between the RX 7900 XT and RX 7900 GRE. This means that AMD's new cards don't pose a threat to the RTX 5080, which might explain why we're not getting bigger performance improvements.
Thanks to its factory overclock, the GameRock achieves a 5% performance uplift over the baseline RTX 5080, comparable to the other custom designs we've tested today, most of which land between +4% and +5%. Subjectively, it will be hard to notice the difference in games, and even when looking at the FPS counter, spotting a 5% difference is not easy.
Ray Tracing & Neural Rendering
NVIDIA is betting on ray tracing, and Blackwell comes with several hardware improvements here. Interestingly, the RTX 5080 FE runs only 11% faster than the RTX 4080 Super with RT—remember, we got +16% without RT. It looks like this is partly due to the game selection: the games that show the biggest gains in our non-RT test suite do not support RT. Still, compared to AMD's Radeon RX 7900 XTX, the difference is massive—the RTX 5080 is 61% (!) faster than the RX 7900 XTX. On top of that, NVIDIA is introducing several new optimization techniques that game developers can adopt. The most interesting one is Neural Rendering, which is exposed through a Microsoft DirectX API (Cooperative Vectors). This ensures that the feature is universally available for all GPU vendors to implement, so game developers should be highly motivated to pick it up. AMD has confirmed that for RDNA 4 they have put in some extra love for the RT cores, so hopefully they can catch up a bit.
VRAM
While the RTX 5090 went overboard with 32 GB of VRAM, the RTX 5080 comes with 16 GB, which is a reasonable choice for this segment. Of course 24 GB would be nicer, also to achieve parity with AMD. On the other hand, this advantage is only psychological, because the 24 GB RX 7900 XTX still can't beat the RTX 5080. Switching the RTX 5080 to 24 GB would require not only additional memory chips; the memory bus in the PCB layout would have to be widened, as would the memory controller inside the GPU, all of which would significantly increase the manufacturing cost of the whole card. NVIDIA's new DLSS 4 algorithms are less wasteful when it comes to VRAM usage, though.
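To illustrate the capacity math, here's a tiny Python sketch. It assumes today's common 2 GB (16 Gbit) GDDR7 chips, each on a 32-bit interface; the chip density is my assumption for illustration, since higher-density 3 GB modules would change the picture:

```python
# Assumption: 2 GB GDDR7 chips, each with a 32-bit interface
# (the typical configuration at the time of writing).
CHIP_CAPACITY_GB = 2
CHIP_BUS_BITS = 32

def vram_config(bus_width_bits):
    """Return (chip count, total VRAM in GB) for a given memory bus width."""
    chips = bus_width_bits // CHIP_BUS_BITS
    return chips, chips * CHIP_CAPACITY_GB

print(vram_config(256))  # (8, 16)  -> the RTX 5080 as shipped
print(vram_config(384))  # (12, 24) -> more chips, wider PCB traces, bigger memory controller
```

With 2 GB chips, 24 GB forces a 384-bit bus, which is exactly the PCB and GPU-side cost the paragraph above describes.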
Maybe 16 GB is even a good thing, because it will make the card less attractive for creator and AI use—the same people who will probably buy up all supply of RTX 5090 tomorrow, which could mean RTX 5080 stock is under less pressure.
DLSS 4 Upscaling & Frame Generation
NVIDIA made a big marketing push to tell everyone how awesome DLSS 4 is, and they are not wrong. First of all, DLSS 4 Multi-Frame-Generation: while DLSS 3 doubled framerates by generating a single new frame, DLSS 4 can now triple or quadruple the frame count. In our testing this worked very well and delivered the expected FPS rates. Using FG, gaming latency does NOT scale linearly with FPS, but given a base of 40 or 50 FPS, DLSS x4 works great to achieve the smoothness of over 150 FPS, with latency similar to what you started out with. Image quality is good; if you know what to look for you can see some halos around the player, but that's nothing you'd notice in actual gameplay.
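To make the frame generation math concrete, here's a rough Python sketch. The 15% generation overhead is my own illustrative assumption; real numbers vary per game:

```python
# Rough sketch of multi-frame-generation math. The overhead factor is an
# assumption for illustration, not a measured value.
def fg_output_fps(base_fps, factor, overhead=0.85):
    """Approximate displayed FPS with frame generation enabled."""
    return base_fps * factor * overhead

base = 45  # FPS actually rendered by the game before frame generation
print(round(fg_output_fps(base, 4)))  # ~153 displayed FPS with DLSS x4

# Latency, however, still tracks the *rendered* frame time (plus some
# queuing), so the input feel stays close to the 45 FPS baseline:
print(round(1000 / base, 1))  # ~22.2 ms per rendered frame
```

This is why the smoothness quadruples while the latency does not: the generated frames are interpolated for display, not rendered in response to input.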
Want lower latency? Then turn on DLSS 4 Upscaling, which lowers the render resolution and scales the frame up to native. In the past there were a lot of debates about whether DLSS upscaling image quality is good enough; some people even claimed "better than native"—I strongly disagree with that—I'm one of the people who are allergic to DLSS 3 upscaling, even at "Quality." With Blackwell, NVIDIA is introducing a "Transformer" upscaling model for DLSS, which is a major improvement over the previous "CNN" model. I tested Transformer and I'm in love. The image quality is so good that "Quality" looks like native, sometimes better. There is no more flickering or low-res smeared-out textures on the horizon. Thin wires are crystal clear, even at sub-4K resolution! You really have to see it for yourself to appreciate it; it's almost like magic. The best thing? DLSS Transformer is available not only on GeForce 50, but on all GeForce RTX cards with Tensor Cores! While it comes with a roughly 10% performance hit compared to CNN, I would never go back. While our press driver was limited to a handful of games with DLSS 4 support, NVIDIA will have around 75 games supporting it at launch, most through NVIDIA App overrides, which are individually tested to ensure best results. NVIDIA is putting extra focus on ensuring that there will be no anti-cheat drama when using the overrides.
Physical Design, Heat & Noise
Palit's new GameRock design focuses on bling, and it achieves that with a minimal amount of RGB hardware, which helps keep cost down. The smooth corners definitely look good on the card, too. We measured the large triple-fan cooler at 64°C under load with 37.3 dBA, similar to the NVIDIA Founders Edition: slightly cooler, slightly louder. When you enable the optional "quiet" BIOS, temperatures increase marginally, by 3°C, and noise levels go down to 33 dBA; just a small change, but it helps bring the card into "quiet" territory. I would have wished for a much lower fan speed, especially on the quiet BIOS. Except for the number in monitoring software there is no difference between 60°C, 65°C, 70°C and 75°C, and it does not put more heat into your case/room either. Lower noise levels, on the other hand, are immediately noticeable, all the time, especially while gaming. Also, the Founders Edition cards are fairly loud this time, so a lot of gamers are looking for quieter options.
Our apples-to-apples cooler comparison test reveals that the GameRock cooler sits roughly in the middle of the 10 RTX 5080 cards that we've tested, around 8°C cooler than the NVIDIA Founders Edition at the same heat load and noise level.
PCI-Express 5.0
NVIDIA's GeForce Blackwell graphics cards are the first high-end consumer models to support PCI-Express 5.0. This increases the available PCIe bandwidth to the GPU, yielding a small performance benefit. Of course PCIe Gen 5 is backwards compatible with older versions, so you'll be able to run the RTX 5080 even in an older computer.
Just like we've done over the years, we took a detailed look at PCI-Express scaling in a separate article today. Testing includes x8 Gen 5, for instances when an SSD is eating some lanes, and the popular x16 Gen 4, which is common on many older CPUs and entry-level motherboards. Finally, some additional combinations were run, down to PCIe x16 1.1. The results confirm that unless you are on an ancient machine, PCIe bandwidth won't be a problem at all.
Power Consumption
While gaming power consumption of the GameRock is very similar to the NVIDIA Founders Edition (good), the non-gaming power draw is considerably higher than the NVIDIA card's—no idea why. At 30 W in idle, the card draws 10 W more than the FE, or +50%! This can push up your power bill when your system is running for many hours each day, even when not gaming. We saw similar idle numbers on the RTX 5090; NVIDIA gave us a long presentation about their Max-Q power management and all the efficiency gizmos they have, so I suspect this is some kind of bug and that NVIDIA will fix it—something similar happened a few years ago (RTX 2080 Ti).
Overclocking
Overclocking the RTX 5080 GameRock worked very well; we gained +11% in real-life performance on top of the factory OC, which is much more than what we usually see on modern graphics cards. Unfortunately, NVIDIA is limiting the maximum overclock for the GDDR7 memory chips to +375 MHz—usually NVIDIA doesn't impose any OC limits. At first, I was wondering why NVIDIA left so much performance on the table, especially when the card's gen-over-gen gains are so small, but then I realized that they might want to build an RTX 5080 Super next year. The problem is that the RTX 5080 already maxes out the GB203 GPU, so additional units can't be enabled, and they'll have to rely on clock speed increases only. Looking at our numbers, higher core and memory clocks, some firmware optimizations, maybe even faster GDDR7 chips could definitely yield +10% in mass production—RTX 5080 Super spotted.
Pricing & Alternatives
Priced at $1000, the GeForce RTX 5080 actually releases cheaper than the RTX 4080, which launched at a $1200 MSRP; the refreshed GeForce RTX 4080 Super brought that down to $1000, though. Either way you look at it, NVIDIA hasn't increased their pricing, but is giving us (relatively small) performance gains and new software features like DLSS 4. Should this have been called RTX 4080 Super Ti? Sure, but it's Blackwell, so no GeForce 40 naming. While the performance gains are certainly less than expected, to me they look sufficient to continue NVIDIA's dominance over this segment of the market, hence our "Recommended" award. No doubt, the RTX 5080 could be cheaper, or faster, but there is no incentive for NVIDIA to do that; they will sell everything they have at $1000, too.
We asked Palit for pricing, but they were unable to provide anything, so we have to estimate it. Based on other announced price points I'd say the card will sell for around $1200, which is a pretty big increase over the NVIDIA MSRP, or +20%. Fingers crossed that Palit won't make it that expensive, I'll update the review once prices are live.
For $1000, there is no reason to buy an RTX 4080 or RTX 4080 Super now. AMD's Radeon RX 7900 XTX is $820, or 18% cheaper, but it's also 15% slower in raster and 38% slower in RT. NVIDIA is also very strong in software features; the new DLSS Transformer model is a game-changer, and DLSS 4 multi-frame-generation is a notable selling point, too. No way I would buy an RX 7900 XTX at that price instead of the RTX 5080—maybe if AMD drops the price considerably. Also, the way AMD is handling Radeon lately makes me wonder if their discrete GPU brand will still be around in two or three years. The upcoming RDNA 4 lineup will not target the top end of the market, so unless a miracle happens, the RX 9070 XT won't be able to compete with the RTX 5080—maybe with the RTX 5070 Ti, which is coming out soon.
If you already have a high-end GeForce RTX 40 Series card, then there is no reason to upgrade; you're only missing out on multi-frame-generation, as the DLSS Transformer model is supported on all older cards, too. On the other hand, if you're coming from GeForce 30, then you'll suddenly get to experience frame generation, which will make a huge difference for your gaming experience.