Positioning & Architecture
With the review embargo for the NVIDIA RTX 5090 Founders Edition ending yesterday, we can now share reviews of custom designs from NVIDIA's board partners. We have tested four custom designs:
ASUS RTX 5090 Astral,
MSI RTX 5090 Suprim SOC,
MSI RTX 5090 Suprim Liquid SOC, this card, and of course the
NVIDIA RTX 5090 Founders Edition. Unfortunately, many other partners weren't able to provide samples in time; it seems the reason is that NVIDIA made some last-minute changes to the VBIOS and other things. First announced at CES in Las Vegas, the new GeForce RTX 5090 is the flagship of the RTX "Blackwell" lineup. Three other cards (RTX 5080, RTX 5070 Ti and RTX 5070) were announced at the Vegas event, too; these will be launched in the coming weeks.
The Blackwell architecture introduces several improvements under the hood, such as giving all shaders the ability to run both FP32 and INT32 instructions; on Ada, only half the cores could do that. The Tensor Cores are now accessible from the shaders through a new Microsoft DirectX API, and they now support FP4 and INT4 instructions, which run at lower precision but much faster and with less memory usage. There are numerous additional architectural improvements; we covered all of them on the first pages of this review.
Compared to last generation's flagship, the RTX 4090, today's RTX 5090 increases the number of GPU cores to 21,760, up from 16,384. Other unit counts are increased, too. One of the highlights is the switch to brand-new GDDR7 memory, which further increases bandwidth per pin, and NVIDIA widened the memory bus from 384-bit to 512-bit, which results in a staggering 1.8 TB/s of memory bandwidth. From a fabrication perspective, nothing has changed though: Blackwell is built on the same 5-nanometer "NVIDIA 4N" TSMC node as last generation's Ada. NVIDIA claims this is a "4 nanometer process," but during Ada it was confirmed that NVIDIA 4N is actually not TSMC N4 (note the order of N and 4), but a 5-nanometer node. At the end of the day the actual number doesn't matter much; what's important is that NVIDIA is using the same process node as before.
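The bandwidth figure is easy to sanity-check with back-of-the-envelope arithmetic, assuming a 28 Gbps effective per-pin data rate (the rate implied by the 1.8 TB/s figure, not stated explicitly in this paragraph):

```python
# Sanity check of the quoted memory bandwidth.
# Assumption: 28 Gbps effective data rate per pin (implied by 1.8 TB/s).
bus_width_bits = 512   # RTX 5090 memory bus
data_rate_gbps = 28    # per pin, assumed

bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8  # bits -> bytes
print(bandwidth_gb_s)  # 1792.0 GB/s, i.e. ~1.8 TB/s
```

The same formula with the RTX 4090's 384-bit bus and 21 Gbps GDDR6X gives roughly 1 TB/s, which shows how much of the uplift comes from the wider bus alone.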
The Palit GeForce RTX 5090 GameRock is the company's premium custom-design version of the RTX 5090. Unlike the other cards tested today, it doesn't come with a factory OC. Palit uses the same 575 W default power limit as the FE, but removed the ability to manually increase the power limit to 600 W. Unlike the FE, the card features a massive quad-slot cooler with three fans, plus dual BIOS support.
Performance
We upgraded our test system last month; it is now built on AMD technology with the outstanding Ryzen 7 9800X3D. We've updated to Windows 11 24H2, complete with the newest patches and updates, and have added a selection of new games. At 4K resolution, with pure rasterization, without ray tracing or DLSS, we measured a 39% performance uplift over the RTX 4090. While this is certainly impressive, it is considerably less than what we got going from RTX 3090 Ti to RTX 4090 (+51%). NVIDIA still achieves its "twice the performance every second generation" rule: the RTX 5090 is twice as fast as the RTX 3090 Ti. There really isn't much on the market that the RTX 5090 can be compared to; it's 75% faster than AMD's flagship, the RX 7900 XTX. AMD has confirmed that it is not going for the high end with RDNA 4, and it's expected that the RX 9070 Series will end up somewhere between the RX 7900 XT and RX 7900 GRE. This means the RTX 5090 will be at least twice as fast as AMD's fastest next-generation card. Compared to the second-fastest Ada card, the RTX 4080 Super, the performance increase is 79%—wow!
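The "twice every second generation" claim follows directly from chaining the two uplifts quoted above:

```python
# Chaining the generational uplifts measured in our reviews.
uplift_5090_over_4090 = 1.39    # +39%, this review
uplift_4090_over_3090ti = 1.51  # +51%, previous-generation review

total = uplift_5090_over_4090 * uplift_4090_over_3090ti
print(round(total, 2))  # ~2.1x over the RTX 3090 Ti
```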
Even though there is no factory overclock advertised on the Palit GameRock, it consistently runs a little bit faster than the NVIDIA Founders Edition. Our clock/voltage monitoring confirms this: the Palit averages 2741 MHz, the FE 2684 MHz, which matches the roughly 2% performance difference we measured. No doubt there are performance variations between GPUs, even from the same batch, due to voltage leakage and similar effects; it looks like our FE sample performs a little bit lower, or the Palit is a lucky bin. The factory-overclocked cards don't offer that much extra performance anyway: a few percent, up to 5%, at a considerably higher cost.
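The clock delta lines up with the performance delta, as a quick calculation shows:

```python
# Average boost clocks from our monitoring logs.
palit_mhz = 2741
fe_mhz = 2684

delta_pct = (palit_mhz / fe_mhz - 1) * 100
print(round(delta_pct, 1))  # ~2.1%, matching the measured performance gap
```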
There really is no question: the RTX 5090 is the card you want for 4K gaming at maximum settings with all RT eye candy enabled. I guess you could run the card at 1440p at insanely high FPS, but considering that DLSS 4 will give you those FPS even at 4K, the only reason to do that is if you really want the lowest latency at the highest FPS.
Ray Tracing & Neural Rendering
NVIDIA is betting on ray tracing, and Blackwell comes with several improvements here. Our results show a several-percent-smaller performance loss from enabling RT, which definitely helps. On top of that, the company is introducing several new optimization techniques that game developers can adopt. The most interesting one is Neural Rendering, which is exposed through a Microsoft DirectX API (Cooperative Vectors). This ensures that the feature is universally available for all GPU vendors to implement, so game developers should be highly motivated to pick it up. Performance in RT is good; at 4K we see a +32% improvement over the RTX 4090. AMD's fastest card, the RX 7900 XTX, is way behind: the RTX 5090 is 2.5x (!) faster in this scenario. AMD has confirmed that for RDNA 4 it has put in some extra love for the RT cores, so hopefully it can catch up a bit.
VRAM
The RTX 5090 comes with a staggering 32 GB of VRAM, a first for any consumer card. Modern games can't even make use of the 24 GB on the RTX 4090, so this increase isn't something that benefits gamers today. I have serious doubts that we'll see 32 GB of VRAM usage in any game in the next few years, but you never know; maybe next-gen consoles will bump their memory to 32 GB, which could motivate game studios to use that much memory. For creators and AI, this capacity is useful though, as it lets them run larger workloads, but that also means these people, with deep pockets, will snatch cards away from gamers. Looked at from a different angle, 32 GB makes sense though. If you want to build a graphics card with a 512-bit memory bus, you will end up with 16 memory chips, which means your choices are either 16 GB or 32 GB. 16 GB wouldn't make sense for the RTX 5090, so 32 GB is the only logical choice, despite the increase in cost and design complexity.
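The 16-chips-or-nothing constraint follows from the bus math, assuming the 32-bit per-chip interface GDDR7 uses and the 2 GB (16 Gbit) densities currently shipping:

```python
# Why a 512-bit bus forces 16 GB or 32 GB.
bus_width_bits = 512
bits_per_chip = 32            # each GDDR7 chip has a 32-bit interface
chips = bus_width_bits // bits_per_chip
print(chips)                  # 16 memory chips

# With 1 GB (8 Gbit) or 2 GB (16 Gbit) chips, the only capacities are:
for gb_per_chip in (1, 2):
    print(chips * gb_per_chip)  # 16 GB or 32 GB
```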
DLSS 4 Upscaling & Frame Generation
NVIDIA made a big marketing push to tell everyone how awesome DLSS 4 is, and they are not wrong. First of all, there's DLSS 4 Multi-Frame-Generation. While DLSS 3 doubled frame rates by generating a single new frame, DLSS 4 can now triple or quadruple the frame count. In our testing this worked very well and delivered the expected FPS rates. With FG, gaming latency does NOT scale linearly with FPS, but given a base FPS of around 40 or 50, DLSS x4 works great to achieve the smoothness of over 150 FPS, with latency similar to what you started out with. Image quality is good; if you know what to look for you can see some halos around the player, but that's nothing you'd notice in actual gameplay.
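A simplified model makes the FPS-vs-latency distinction concrete (this ignores upscaling and generation overhead, so treat it as an idealization):

```python
# Idealized multi-frame-generation math: display rate quadruples,
# but input is still sampled only once per *rendered* frame.
base_fps = 40          # natively rendered frames per second
mfg_factor = 4         # DLSS 4 x4: 1 rendered frame + 3 generated frames

displayed_fps = base_fps * mfg_factor
render_frametime_ms = 1000 / base_fps

print(displayed_fps)          # 160 FPS on screen
print(render_frametime_ms)    # 25.0 ms between rendered frames -> latency floor
```

This is why a reasonable base frame rate matters: quadrupling 15 FPS still leaves you with ~67 ms between input samples, no matter how smooth the output looks.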
Want lower latency? Then turn on DLSS 4 Upscaling, which lowers the render resolution and scales the frame up to native. In the past there were a lot of debates about whether DLSS upscaling image quality is good enough; some people even claimed it's "better than native." I strongly disagree with that; I'm one of the people who are allergic to DLSS 3 upscaling, even at "Quality." With Blackwell, NVIDIA is introducing a "Transformers" upscaling model for DLSS, which is a major improvement over the previous "CNN" model. I tested Transformers and I'm in love. The image quality is so good that "Quality" looks like native, sometimes better. There is no more flickering or low-res smeared-out textures on the horizon. Thin wires are crystal clear, even at sub-4K resolution! You really have to see it for yourself to appreciate it; it's almost like magic. The best thing? DLSS Transformers is available not only on GeForce 50: it makes use of the Tensor Cores on all GeForce RTX cards! While it comes with a roughly 10% performance hit compared to CNN, I would never go back. While our press driver was limited to a handful of games with DLSS 4 support, NVIDIA will have around 75 games supporting it at launch, most through NVIDIA App overrides, which are individually tested to ensure best results. NVIDIA is putting extra focus on ensuring that there will be no anti-cheat drama when using the overrides.
Physical Design, Heat & Noise
Palit's new GameRock design focuses on bling, and it achieves that with a minimal amount of RGB hardware, which helps keep cost down. The smooth corners definitely look good on the card, too. We measured the large triple-fan cooler at 74°C under load with 39.8 dBA, both slightly better than the NVIDIA Founders Edition. This means that at full load the card is "not quiet"; other custom designs tested today achieve much lower noise levels, but they are more expensive, too. While Palit offers a dual BIOS feature with a "quiet" mode, the differences between the two BIOSes are minimal; during testing I even suspected at first that both BIOSes run the same thermal settings. 1894 RPM vs 1817 RPM under load really isn't enough of a difference to justify a dual BIOS. I think a better solution would have been to allow temperatures well over 80°C on the quiet BIOS in exchange for much lower fan noise, which is the whole point of a "quiet BIOS." The thermal cutoff point on Blackwell has been increased to over 90°C, so this shouldn't be a problem, especially for users who manually activate the quiet BIOS.
Our apples-to-apples cooler comparison confirms that the Palit cooling solution is the weakest of the cards tested today. It is 9°C cooler than the Founders Edition at the same heat load and noise level, while the other cards achieve 20+°C improvements. But again, this should be reflected in the card's pricing.
PCI-Express 5.0
NVIDIA's GeForce Blackwell graphics cards are the first high-end consumer models to support PCI-Express 5.0. This increases the available PCIe bandwidth to the GPU, yielding a small performance benefit. Of course PCIe Gen 5 is backwards compatible with older versions, so you'll be able to run the RTX 5090 even in an older computer.
Just like we've done over the years, we took a detailed look at PCI-Express scaling in a separate article. Testing includes x8 Gen 5, for cases where an SSD eats some of the lanes. We also tested the popular x16 Gen 4 configuration, which is common with many older CPUs and entry-level motherboards. Finally, additional combinations were run, down to PCIe x16 1.1. The results confirm that unless you are on an ancient machine, PCIe bandwidth won't be a problem at all.
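The configurations tested can be put in perspective with theoretical per-direction link bandwidth. The per-lane rates below account for encoding overhead (8b/10b for Gen 1/2, 128b/130b from Gen 3 onward) and are approximate:

```python
# Approximate usable bandwidth per lane, in GB/s, per direction.
PER_LANE_GB_S = {"1.1": 0.25, "2.0": 0.5, "3.0": 0.985, "4.0": 1.969, "5.0": 3.938}

def link_bandwidth(gen: str, lanes: int) -> float:
    """Theoretical per-direction bandwidth of a PCIe link."""
    return PER_LANE_GB_S[gen] * lanes

print(link_bandwidth("5.0", 8))   # ~31.5 GB/s: x8 Gen 5 matches x16 Gen 4
print(link_bandwidth("4.0", 16))  # ~31.5 GB/s
print(link_bandwidth("1.1", 16))  # 4.0 GB/s: the oldest config we tested
```

This is why x8 Gen 5 behaves like full x16 Gen 4 in practice: the link delivers the same bandwidth with half the lanes.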
Power Consumption
Palit's card runs the same 575 W power limit out of the box as the Founders Edition, but there is no way to manually increase the power limit to 600 W—no idea why. The card uses the same VRM design as other cards that allow 600 W, and the cooler should be able to handle it. Despite the high absolute power number, the efficiency (with FPS results taken into account) is good—similar to the RTX 4090. Interestingly, we measured a significant increase in non-gaming power consumption, even higher than the Founders Edition, possibly due to the added RGB lighting circuitry.
Overclocking
Overclocking the RTX 5090 worked fairly well, and we gained +6% in real-life performance, which is in line with expectations for a high-end graphics card in 2025. GPU vendors have simply gotten very good at eking the maximum performance out of their designs, even at stock settings. Unfortunately, NVIDIA limits overclocking of the GDDR7 memory chips to +375 MHz; usually NVIDIA doesn't impose any OC limits. I guess that's just another bug that will be fixed eventually—we really want to push these memory chips to the max!
Pricing & Alternatives
We asked Palit about pricing, but they only told us that they can't provide it at this time. We're estimating a price of $2,200 for the card, given current market conditions and the added RGB features. On the other hand, the cooler performs roughly on par with the FE and there's no factory OC, so the right thing would be to price this card at the $2,000 MSRP. Generally, the RTX 5090 is very expensive, but it is the best, and everyone will want one. AMD, the next-closest competitor, lacks the raw performance, and it doesn't have anything to counter software features like DLSS multi-frame generation and the new Transformers upscaler model. NVIDIA's dominance with game studios will ensure that DLSS 4 support comes to virtually all new releases—no doubt. According to some rumors, availability of the RTX 5090 will be problematic, with only a small volume in stock at retailers when cards go on sale on Jan 30th. Previous-generation RTX 40 cards aren't obsolete at all, though. Thanks to DLSS Transformers you're getting a free image quality upgrade on all RTX cards. Also, isn't gameplay the most important thing, rather than graphics? Are games fun at 1080p with medium settings? Absolutely, as long as the game is good.

