Positioning & Architecture
NVIDIA has been busy—this week brings their fourth graphics card launch this year: the GeForce RTX 5070. With Blackwell, NVIDIA is releasing their product stack from the top: first the RTX 5090 flagship, then the RTX 5080 and RTX 5070 Ti, and now the 5070 non-Ti. While the RTX 5070 Ti and RTX 5080 were based on the NVIDIA GB203 silicon, the RTX 5070 introduces the GB205 to NVIDIA's lineup. The 5070 comes with 6,144 cores enabled, vs 8,960 on the RTX 5070 Ti; that's 46% more cores on the Ti. Other unit counts have been scaled accordingly: you get 80 ROPs (and yes, really, we checked), along with 192 TMUs and 48 RT cores. The memory subsystem uses GDDR7, like the other RTX 50 cards, but you only get 12 GB VRAM on a 192-bit wide memory bus, clocked at 28 Gbps.
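The headline figures above can be sanity-checked with some quick arithmetic; the sketch below uses only the numbers quoted in this paragraph, nothing else is assumed.

```python
# Quick sanity check of the RTX 5070's headline specs (numbers from this review)
bus_width_bits = 192        # memory bus width
data_rate_gbps = 28         # GDDR7 effective data rate per pin

# bandwidth (GB/s) = bus width in bytes * per-pin data rate
bandwidth_gbs = bus_width_bits / 8 * data_rate_gbps
print(f"Memory bandwidth: {bandwidth_gbs:.0f} GB/s")            # 672 GB/s

cores_5070, cores_5070_ti = 6144, 8960
extra = cores_5070_ti / cores_5070 - 1
print(f"RTX 5070 Ti has {extra:.0%} more cores than the 5070")  # 46%
```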
Today the review embargo lifted for all RTX 5070 cards priced at the $550 MSRP; tomorrow, reviews for the other, more expensive cards can be published. Due to lack of time, and the fact that AMD is launching both the RX 9070 and RX 9070 XT tomorrow, we will only publish the RTX 5070 FE review for now and follow up with the other reviews in the coming days—we will have five reviews in total.
The Blackwell architecture introduces several improvements under the hood, such as giving all shader cores the ability to run FP32 or INT32 instructions; on Ada, only half the cores had that ability. The Tensor Cores are now accessible from the shaders through a new Microsoft DirectX API, and they now support FP4 and INT4 instructions, which run at lower precision but much faster and with less memory usage. There are numerous additional architecture improvements; we talked about all of them on the first pages of this review.
From a fabrication perspective nothing has changed though—Blackwell is built on the same 5 nanometer "NVIDIA 4N" TSMC node as last generation's Ada. NVIDIA claims this is a "4 nanometer process," but during the Ada generation it was confirmed that NVIDIA 4N is actually not TSMC N4 (note the order of the N and the 4), but a 5 nanometer process. At the end of the day the actual number doesn't matter much; what's important is that NVIDIA is using the same process node.
Performance
We upgraded our test system in preparation for this wave of GPU launches; it is now built on AMD technology with the outstanding Ryzen 7 9800X3D. We've updated to Windows 11 24H2, complete with the newest patches and updates, and have added a selection of new games. At 1440p, with pure rasterization, without ray tracing or DLSS, we measured a 22% performance uplift over the RTX 4070, which is a bit lower than expected, but alright I'd say. At 4K, the increase is 25%, which is certainly better than the meager 15% that we got on the RTX 5080; the RTX 5090 gained +36%, and the RTX 5070 Ti was 27% faster. Just like on the RTX 5080, NVIDIA is unable to achieve their "twice the performance every second generation" rule with the RTX 5070, which is only 59% faster at 4K, and 56% faster at 1440p, when going back two generations. Overall performance is roughly similar to the RTX 4070 Ti, 10% behind the RTX 4070 Ti Super. Compared to the RTX 4070 Super, the performance uplift is 5%. When compared against AMD's offerings, the 5070 sits roughly in the middle between the RX 7900 XT and RX 7900 GRE. AMD's own performance projections put the RX 9070 XT a bit slower than the RTX 5070 Ti, which means it should beat the RTX 5070, at least in pure raster. The RX 9070 non-XT should end up at roughly similar performance to the RTX 5070, which means competition will be fierce in this segment (hopefully).
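The "twice the performance every second generation" rule is easy to check with compounding arithmetic: doubling over two generations requires roughly +41% per generation, so the uplifts measured above fall well short. A small sketch using the 4K figures from this review:

```python
# Doubling performance over two generations requires each single generation
# to deliver 2**0.5 - 1 ≈ +41.4% on its own.
required_per_gen = 2 ** 0.5 - 1
print(f"Required per generation: {required_per_gen:+.1%}")  # +41.4%

measured_per_gen = 0.25    # RTX 5070 vs RTX 4070, 4K raster (this review)
measured_two_gen = 0.59    # RTX 5070 uplift over two generations, 4K
print(f"Measured: {measured_per_gen:+.0%} per gen, "
      f"{measured_two_gen:+.0%} over two (target: +100%)")
```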
While the RTX 5070 is definitely a great card for 1440p, it's a little bit weak for 4K gaming without upscaling, which is why we recommend it for a resolution of 1440p, especially when you consider that future games will have higher hardware requirements.
Ray Tracing & Neural Rendering
NVIDIA is betting on ray tracing, and Blackwell comes with several improvements here. Interestingly, when comparing the RTX 4070 with the RTX 5070, the performance gain with RT enabled is smaller than with raster: +15% vs +22% at 1440p. We saw the RTX 5070 trading blows with the RTX 4070 Ti in raster; with RT it's clearly behind and only able to match the RTX 4070 Super. Still, RT performance is good: it's faster than AMD's Radeon RX 7900 XTX. Whether it will be faster than the upcoming RX 9070 Series remains to be seen; AMD has given their RT cores some extra love with RDNA 4. With Blackwell, NVIDIA is introducing several new technologies. The most interesting one is Neural Rendering, which is exposed through a Microsoft DirectX API (Cooperative Vectors). This ensures that the feature is universally available for all GPU vendors to implement, so game developers should be highly motivated to pick it up.
VRAM
The RTX 5070 comes with 12 GB VRAM, just like the RTX 4070, RTX 4070 Super and RTX 4070 Ti—so no gen-over-gen increase here, except for the bandwidth, which is 33% higher thanks to 28 Gbps GDDR7. The RTX 4070 Ti Super and RTX 5070 Ti come with 16 GB, and AMD offers 16 GB on both the RX 9070 non-XT and RX 9070 XT. While our data clearly shows that 12 GB VRAM is sufficient for gaming at 1440p, even with RT enabled, it won't be enough for RT at 4K with Frame Generation in several titles. I'm sure that eventually there will be games that hit 12 GB at 1440p, too, so the RTX 5070 is not fully future-proof for 1440p, unless you are willing to dial down the settings slightly in those cases, which isn't an unreasonable ask I'd say. From an engineering perspective, 12 GB does make a lot of sense though. Increasing the memory size further would have required a wider memory bus and a GPU design with support for the extra bus width, more pins, etc. All these changes would make the card more expensive, for relatively small gains. More memory does not automagically turn into additional performance; you would see scaling only in those rare cases that go beyond 12 GB. Still, AMD offers 16 GB on the RX 9070 Series, and that will be a selling point for them.
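The 33% bandwidth figure checks out arithmetically. Note that the 21 Gbps GDDR6X data rate used for the RTX 4070 below is that card's public spec, not a number stated in this review:

```python
# Both the RTX 4070 and RTX 5070 use a 192-bit memory bus, so the bandwidth
# gain comes entirely from faster memory. The 21 Gbps GDDR6X figure for the
# RTX 4070 is the card's public spec, not from this review.
def bandwidth_gbs(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps   # bits -> bytes, times per-pin data rate

bw_4070 = bandwidth_gbs(192, 21)   # 504 GB/s
bw_5070 = bandwidth_gbs(192, 28)   # 672 GB/s
print(f"{bw_4070:.0f} GB/s -> {bw_5070:.0f} GB/s, "
      f"+{bw_5070 / bw_4070 - 1:.0%}")   # +33%
```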
DLSS 4 Upscaling & Frame Generation
NVIDIA made a big marketing push to tell everyone how awesome DLSS 4 is, and they are not wrong. First of all, DLSS 4 Multi-Frame-Generation. While DLSS 3 doubled framerates by generating a single new frame, DLSS 4 can now triple or quadruple the frame count. In our testing this worked very well and delivered the expected FPS rates. When using FG, gaming latency does NOT scale linearly with FPS, but given a base FPS of around 40 or 50, DLSS 4x works great to achieve the smoothness of over 150 FPS, with latency similar to what you started out with. Image quality is good; if you know what to look for you can see some halos around the player, but that's nothing you'd notice in actual gameplay.
Just to clarify, even multi-frame-generation will not turn 15 base FPS into 60 "playable" FPS. Sure, the movements will be very smooth, just like at 60 FPS, but the game would react to your inputs at a rate of 15 FPS, which is noticeably sluggish. That's why the "base" FPS value is so important: as mentioned before, 40+ works very well, but it also depends on the type of game. Some slower titles, not shooters, will run great even with a base FPS of 30.
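The point about base FPS can be made concrete with a toy model: frame generation multiplies the displayed frame rate, but the game still samples your input once per rendered frame. This is a deliberate simplification; real latency also depends on the render queue and technologies like Reflex.

```python
def mfg_model(base_fps: float, factor: int) -> tuple[float, float]:
    """Toy model: displayed FPS scales with the frame-gen factor, but input
    is still sampled once per *rendered* frame, so responsiveness stays put."""
    displayed_fps = base_fps * factor
    input_interval_ms = 1000 / base_fps
    return displayed_fps, input_interval_ms

for base in (15, 40):
    fps, ms = mfg_model(base, factor=4)
    print(f"base {base:>2} FPS -> {fps:3.0f} FPS shown, "
          f"~{ms:.0f} ms between input samples")
```

Both rows display "smooth" frame rates, but the 15 FPS case still reacts to input only every ~67 ms, which is exactly the sluggishness described above.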
Want lower latency? Then activate DLSS 4 Upscaling, which lowers the render resolution and upscales the result to your display resolution. In the past there were a lot of debates over whether DLSS upscaling image quality is good enough; some people even claimed "better than native"—I strongly disagree with that—I'm one of the people who are allergic to DLSS 3 upscaling, even at "Quality." With Blackwell, NVIDIA is introducing a "Transformer" upscaling model for DLSS, which is a major improvement over the previous "CNN" model. I tested Transformer and I'm in love. The image quality is so good that "Quality" looks like native, sometimes better. There is no more flickering or low-res smeared-out textures on the horizon. Thin wires are crystal clear, even at sub-4K resolutions! You really have to see it for yourself to appreciate it; it's almost like magic. The best thing? DLSS Transformer is available not only on GeForce 50; it runs on the Tensor Cores of all GeForce RTX cards! While it comes with a roughly 10% performance hit compared to CNN, I would never go back to CNN. NVIDIA's driver with Transformer model support has been public for a few weeks now, and people are slowly becoming aware that there's a per-game DLSS override in the NVIDIA App now—with very good results, even on older GPUs.
Physical Design, Heat & Noise
The GeForce RTX 5070 Founders Edition is a work of art. It looks incredible, thanks to an impressive industrial design with intricate metal construction and a refreshed color theme. Visually the 5070 looks identical to the 5080 and 5090, just physically smaller, and it retains the dual-slot design. The packaging is a let-down though: it's some "eco-friendly" paper, very disappointing for collectors. Given the low volume of FE sales this looks like some greenwashing campaign that will actually make people sad. Disassembly is very similar to the RTX 5080 and RTX 5090—complicated but definitely not impossible, and I never felt like NVIDIA intentionally made the card difficult to take apart. In our thermal testing the card reached a peak temperature of 77°C, which is very reasonable. Unfortunately fan noise is quite high. With 38 dBA, the card is definitely "not quiet," but not "loud" either. Custom designs will certainly do better here; for example the Zotac Solid, which sells at MSRP, too, runs at a quiet 29 dBA, and it's still a dual-slot card.
PCI-Express 5.0
NVIDIA's GeForce Blackwell graphics cards are the first high-end consumer models to support PCI-Express 5.0. This increases the available PCIe bandwidth to the GPU, yielding a small performance benefit. Of course PCIe Gen 5 is backwards compatible with older versions, so you'll be able to run the RTX 5070 even in an older computer.
Just like we've done over the years, we took a detailed look at PCI-Express scaling in a separate article. Testing includes x8 Gen 5, for instances where an SSD is eating some lanes. The popular x16 4.0 was tested, too, which is common on many older CPUs and entry-level motherboards. Finally, some additional combinations were run, down to PCIe x16 1.1. The results confirm that unless you are on an ancient machine, PCIe bandwidth won't be a problem at all.
Power Consumption
While we saw quite high power consumption in idle, multi-monitor and media playback scenarios on the other Blackwell cards, this isn't a problem on the RTX 5070. Gaming power draw is very reasonable, too, at around 230 W, a small improvement even over the RTX 4070 Ti (252 W). Gen-over-gen, however, power draw has increased by 19%, which is pretty close to the performance gains. Not surprising, considering that NVIDIA is using the same process node on these GPUs. Overall gaming efficiency is really good, near the top of our charts, and considerably better than anything AMD offers, at least with RDNA 3.
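Putting the two gen-over-gen numbers together shows how small the efficiency step really is. A quick sketch using the figures from this review:

```python
# +22% performance (1440p raster) at +19% power draw works out to only a
# small perf/W improvement, as expected on the same process node.
perf_gain, power_gain = 0.22, 0.19
efficiency_gain = (1 + perf_gain) / (1 + power_gain) - 1
print(f"Perf/W vs RTX 4070: {efficiency_gain:+.1%}")   # about +2.5%
```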
Overclocking
Overclocking worked extremely well on the 5070, reaching +13% in real-life performance. This is an exceptional OC result, much higher than what we usually see on modern GPUs. No idea why NVIDIA didn't clock it higher; they are masters at eking out the last bits of performance from their GPUs, even at stock. Going from a real measured clock frequency of ~2800 MHz to ~3150 MHz is massive. Why didn't they clock these cards higher? Maybe to keep the gap to the 5070 Ti bigger, for the upsell? Or to improve harvesting, so that the best GPUs can go to higher-margin products? Either way it's a lost opportunity, but excellent potential for people who are willing to do some manual overclocking. Memory overclocking is equally impressive and topped out at NVIDIA's artificial cap of +375 MHz / +3000 MT/s—the chips could certainly take more.
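The measured clocks line up almost perfectly with the real-life gain, which suggests the card scales nearly 1:1 with core clock:

```python
# Core clock gain from the real measured frequencies quoted above
stock_mhz, oc_mhz = 2800, 3150
clock_gain = oc_mhz / stock_mhz - 1
print(f"Core clock gain: {clock_gain:+.1%}")  # +12.5%, vs +13% measured
# performance, i.e. near-perfect scaling with core clock on this card
```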
Pricing & Alternatives
NVIDIA's MSRP for the RTX 5070 is set at $550, which is very reasonable for what's offered. That price is actually $50 lower than the MSRP of the RTX 4070, which launched at $600. If you've followed the tech news in recent weeks, you'll surely be aware of all the drama surrounding the MSRP. Right now not a single GeForce 50 card is in stock anywhere in the world, and scalpers are selling them at hugely inflated prices. Custom designs from the various board partners are 20%, 30%, 40% more expensive than the baseline price, just for some bigger coolers and RGB bling. Another annoyance is that AICs are introducing cards at the baseline price in minimal numbers, which quickly sell out with no plans to replenish at that price, while suggesting that customers opt for the $100 more expensive version. I'm not sure why the MSRPs need to be faked. Just to build hype, get some positive reviews, and disappoint customers when they want to buy the product? If supply is so low, why is NVIDIA even launching so many products in just a few weeks? Hopefully AMD does better tomorrow, but their pricing seems suspiciously low, too.
At $550, there aren't many alternatives to the RTX 5070. The closest option is the Radeon RX 7900 GRE for $530. It's slightly slower in raster, much slower in RT, and draws a bit more power. The RTX 4070 Ti sells for $750, offering slightly higher RT performance, but without multi-frame-generation. The RTX 4070 Ti Super is much more interesting, because it offers 16 GB VRAM with significantly higher overall performance, but it's $800. If ray tracing or upscaling isn't your top priority and you want to focus on pure raster performance, the Radeon RX 7900 XT could be an option at $640. Thanks to DLSS 3 and DLSS 4, the better choice for most gamers will still be the RTX 5070.
Officially, RTX 5070 Ti should be available for $750, but it's scalped to $1000 right now. I guess this means that realistically, RTX 5070 will end up selling for around $700, unless there's a monumental amount of supply available that keeps pricing down.
As mentioned before, AMD is launching their Radeon RX 9070 and RX 9070 XT tomorrow, at $550 and $600, which could introduce strong alternatives in this segment. If you are interested in an RTX 5070, definitely wait for the AMD reviews tomorrow. Looking beyond that, we have the RTX 5060 series and RX 9060 series on the horizon; I doubt these will make any difference for buyers interested in strong 1440p cards. Maybe Intel has an ace up their sleeve: their bigger Arc Battlemage cards could add more competition, but the release date is completely unknown.
Awards will be added after the cards go on sale, once we know more about pricing and the supply situation.