
AMD Radeon VII 16 GB Review


Value and Conclusion

  • AMD's Radeon VII will be available starting today, from all major board vendors, for $699.

Pros:
  • First 7 nanometer GPU in the world
  • 16 GB HBM2 memory
  • Solid performance increase over RX Vega 64
  • Power efficiency improved
  • Better overclocking potential than earlier Vega cards
  • Additional temperature sensors
  • Backplate included
  • Three AAA game bundle included
  • FreeSync/VESA Adaptive-Sync, HDMI 2.0, DisplayPort 1.4, 8K support

Cons:
  • Average performance not even close to RTX 2080
  • Pricing seems high when compared to RTX 2080
  • Very noisy cooler in gaming
  • No idle fan stop, but very quiet in idle
  • Power efficiency still far worse than NVIDIA's
  • No support for DirectX Raytracing
Radeon VII ("seven") is the name of AMD's new flagship graphics card. It is the first gaming card in the world that uses a graphics chip that's produced on a 7 nanometer production process, which promises improvements to clock frequencies, power consumption, and die size. AMD did go beyond just the die shrink and doubled memory bandwidth by using a 4096-bit wide memory interface instead of 2048-bit like on first-generation Vega. This move is also the reason why VRAM size is now at a whopping 16 GB—any less wouldn't be possible with four HBM2 memory stacks because the smallest stack capacity on the market is 4 GB, or AMD would have had to reduce the memory interface width, which would have resulted in lower memory bandwidth, too.

When averaged over all our benchmarks at 1440p resolution, the Radeon VII is 25% faster than the Radeon RX Vega 64 despite a 256-shader (or 7%) deficit. Compared to Vega 56, the performance increase is around 40%. However, in their initial announcement, AMD marketed Radeon VII as delivering performance similar to the RTX 2080, which isn't the case (at least with our suite of benchmarks). The RTX 2080 is still 14% faster than the Radeon VII; even the aging GTX 1080 Ti is 5% faster. That difference does get considerably smaller at 4K (-10% vs. RTX 2080), but since Radeon VII's performance is targeted at fluid 1440p gaming, we chose that resolution for comparison. You are of course free to look at whatever resolution you prefer; the data is in the review. NVIDIA's fastest card, the RTX 2080 Ti (which is much more expensive, of course), is around 40% faster. If you look at individual benchmarks, you can see that in some of them the Radeon VII does very well against the RTX 2080, even beating it in a few, but overall, without any cherry-picking, it's just not close enough. We would recommend the Radeon VII for full-details gaming at 1440p, or for 4K if you are willing to substantially reduce quality settings to achieve 60 FPS at that resolution. The closest NVIDIA GPU you can compare it to performance-wise is the GTX 1080 Ti, last generation's flagship.
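For readers wondering how a single summary figure like "25% faster" comes about, the sketch below shows one plausible way to build such an index: normalize each game's FPS to a baseline card, then average the ratios. The FPS values are made-up placeholders, and the averaging method is our assumption, not necessarily the exact methodology behind our Performance Summary.

```python
# Minimal sketch of a relative-performance summary. FPS values are
# made-up placeholders, not our measured results.

fps = {
    "Game A": {"Radeon VII": 90, "RTX 2080": 104},
    "Game B": {"Radeon VII": 75, "RTX 2080": 82},
    "Game C": {"Radeon VII": 120, "RTX 2080": 131},
}

def relative_perf(data: dict, card: str, baseline: str) -> float:
    """Average of per-game FPS ratios of `card` vs. `baseline`."""
    ratios = [g[card] / g[baseline] for g in data.values()]
    return sum(ratios) / len(ratios)

r = relative_perf(fps, "RTX 2080", "Radeon VII")
print(f"RTX 2080 is {100 * (r - 1):.0f}% faster on this toy data")
```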

Our Performance Summary numbers shouldn't be seen in isolation. There are games in which the Radeon VII does indeed compete head-on with the RTX 2080, but there are many games in which a lack of optimization makes it significantly slower, playing in the league of the GTX 1080 Ti or even the RTX 2070. "Strange Brigade" leverages DirectX 12 and asynchronous compute in the best possible way for this chip, and it ends up performing close to the RTX 2080 Ti at 4K, which is extremely impressive. "Far Cry 5" is another game where the Radeon VII lives up to its promise, staying consistently ahead of the RTX 2080. The DirectX 12 title "Deus Ex: Mankind Divided" also shows good numbers for this card. It's the titles not using DX12 or the latest rendering technologies that drag down the average. Hitman was supposed to be a poster boy for new-generation graphics technology, but with Hitman 2, DirectX 12 support has been dropped and the graphics tech dialed down in favor of content; the game hence ends up performing subpar across AMD hardware. If you've jumped straight to the Relative Performance page, we would advise you to go through our individual game test results to see if this card suits your specific use case. Hardcore Battlefield V players, for example, can expect slightly better performance than from an RTX 2080.

Just like NVIDIA does with their Founders Edition cards, AMD has given their card a premium, high-end look and feel, even though the two companies' design languages differ. This is the first generation in which AMD has chosen a triple-fan design; the card still occupies only two slots and should fit into most cases. The three fans are paired with a highly capable vapor-chamber heatsink and a metal baseplate that takes care of cooling the VRM circuitry. Power regulation uses a 10+2+2 phase VRM design built around the best components currently available on the market, which no doubt drives up cost. We'd estimate the cost of the VRM circuitry alone to be at least $70.

Just like NVIDIA's Founders Edition cards, the Radeon VII does not include the highly popular idle-fan-stop feature, which completely shuts off the fans during idle, Internet browsing, and light gaming, eliminating all fan noise. To me, this looks like a missed opportunity, as it could have provided a unique selling point over NVIDIA's offerings. In idle, the card is whisper quiet though, thanks to good fan settings for that scenario. When gaming, fan noise is very high at 43 dBA, sitting between the Vega 64 and Vega 56 reference cards, making the Radeon VII one of the loudest graphics cards we have tested. Competing cards with NVIDIA GPUs do MUCH better here, including the Founders Editions, which typically emit more noise than custom designs from NVIDIA's board partners.

It seems the underlying reason for the excessive fan noise is that AMD tied fan control to a new sensor called "Junction Temperature", which RX Vega owners have seen before in GPU-Z, named "Hot Spot". This sensor reports the highest temperature of a 64-sensor thermal network that constantly monitors heat levels in various strategic areas on the GPU. We always like the idea of having more sensors as it gives users a more complete picture of the state of their hardware. One problem is that the reported numbers look alarmingly high to inexperienced users, reaching more than 100°C most of the time. Besides the obvious implications for public perception and user confidence, the bigger issue is that the card will start throttling at 115°C Junction Temperature even though "classic" GPU temperature is well below 80°C. As Junction Temperature goes up, and it goes up much faster than normal temperature, the fans ramp up quickly, frantically trying to keep the card below the throttle point. As far as I'm aware, no data for "hottest temperature" exists for competing GPUs. Assuming that temperature gradients are similar (75°C GPU = 110°C Junction), we should see much more widespread throttling for other GPUs running well above 80°C, but that's not happening. Maybe 115°C is too conservative, or the thermal gradient is higher on Vega 20 specifically; I don't have the answers yet.
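To illustrate the behavior we observed, here is a hypothetical sketch of a fan controller keyed to junction temperature with a 115°C throttle point. The curve shape, temperature band, and duty-cycle limits are illustrative assumptions on our part, not AMD's actual firmware logic.

```python
# Hypothetical sketch of junction-temperature-driven fan control with
# a 115 deg C throttle point. The control constants are illustrative
# assumptions, not AMD's actual firmware parameters.

THROTTLE_C = 115.0              # clocks start dropping at this junction temp
FAN_MIN, FAN_MAX = 20.0, 100.0  # fan duty cycle range in percent

def fan_duty(junction_c: float) -> float:
    """Ramp fan speed linearly between 75 deg C and the throttle point."""
    if junction_c <= 75.0:
        return FAN_MIN
    # Junction temperature rises much faster than "classic" GPU
    # temperature, so the duty cycle climbs steeply in this band.
    t = (junction_c - 75.0) / (THROTTLE_C - 75.0)
    return min(FAN_MAX, FAN_MIN + t * (FAN_MAX - FAN_MIN))

def should_throttle(junction_c: float) -> bool:
    return junction_c >= THROTTLE_C

for temp in (70, 90, 105, 114, 116):
    print(temp, f"{fan_duty(temp):.0f}%", should_throttle(temp))
```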

Power efficiency of Vega 20 is improved, making up lost ground against NVIDIA, but not by enough to even match their last-generation Pascal architecture; Turing, even though it's on 12 nm, is still much more efficient. It looks like AMD will have to come up with a completely new architecture if they want to compete with NVIDIA in that metric. Power efficiency doesn't just mean "power bill"; it affects thermals, too, because all the energy a card draws is converted into heat, which drives up temperatures. Those temperatures in turn dictate how big, noisy, and expensive the cooler has to be, and how fast you can run the card with a given cooler, because more performance generally means more power draw. Last but not least, running more power through the card requires more complex and expensive voltage regulation circuitry.
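Power efficiency in this context simply means performance per watt. A quick worked example with placeholder numbers (not our exact measurements) shows how a card that is faster while drawing less power wins this metric twice over:

```python
# Performance per watt, the metric behind "power efficiency". The
# numbers below are illustrative placeholders, not our measurements.

cards = {
    "Radeon VII": {"avg_fps": 100.0, "gaming_power_w": 270.0},
    "RTX 2080":   {"avg_fps": 114.0, "gaming_power_w": 225.0},
}

for name, c in cards.items():
    efficiency = c["avg_fps"] / c["gaming_power_w"]
    print(f"{name}: {efficiency:.3f} FPS/W")
# Higher FPS at lower power means Turing wins this metric comfortably.
```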

During testing, we noticed that clocks on Vega 20 are much more stable. First-generation Vega would sustain high clock speeds for only a few seconds while the card was cool, and clocks would drop significantly as the card heated up. Radeon VII, on the other hand, delivers a much smoother set of clock frequencies, which keeps performance stable even through long gaming sessions.
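The difference is easy to quantify if you log clock speed over a long session and compare the spread. The sketch below does exactly that with made-up clock samples that mimic the behavior described above:

```python
# Toy comparison of clock stability over a long session. The clock
# samples are made-up placeholders, not logged data from our testing.
import statistics

vega64_mhz  = [1630, 1590, 1480, 1410, 1380, 1370, 1365]  # sags as it heats up
radeon7_mhz = [1760, 1755, 1750, 1748, 1745, 1747, 1746]  # stays flat

for name, samples in [("Vega 64", vega64_mhz), ("Radeon VII", radeon7_mhz)]:
    print(name,
          f"mean={statistics.mean(samples):.0f} MHz",
          f"stdev={statistics.stdev(samples):.0f} MHz")
```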

With Vega 20, AMD made some changes to how overclocking works. Instead of adjusting clocks and voltages individually in separate sections, each with several control points, you now have a combined voltage-frequency curve with just three control points. Besides some easy-to-fix usability issues, the interface feels straightforward and more efficient, even though it theoretically takes away a level of control probably nobody ever really used. Overvolting and undervolting come naturally to the new interface, and if you so desire, you can greatly reduce the range of clock frequencies the GPU picks by merging all control points into a single one, effectively telling the GPU, "hey, run only this frequency and voltage." With +8% on the GPU, we saw manual overclocking potential slightly better than on previous AMD GPUs; the HBM2 memory overclocked well, too, at +12%. Overall real-life performance gained from overclocking was +8%, a percentage increase comparable to what we're seeing on NVIDIA Turing.
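Conceptually, the new interface describes the voltage-frequency relationship with three (frequency, voltage) control points and interpolates between them; collapsing all points onto one value pins the GPU to a single state. Here is a minimal sketch of that idea; the point values are illustrative, not AMD's actual defaults.

```python
# Conceptual sketch of a three-point voltage-frequency curve, as exposed
# by the new overclocking interface. Point values are illustrative.

# (frequency in MHz, voltage in mV), sorted by frequency
vf_points = [(800, 700), (1400, 900), (1800, 1080)]

def voltage_for(freq_mhz: float, points=vf_points) -> float:
    """Linearly interpolate voltage between the user-set control points."""
    lo = points[0]
    for hi in points[1:]:
        if freq_mhz <= hi[0]:
            span = hi[0] - lo[0]
            t = (freq_mhz - lo[0]) / span if span else 0.0
            return lo[1] + t * (hi[1] - lo[1])
        lo = hi
    return points[-1][1]  # clamp above the top point

print(voltage_for(1600))  # ~990 mV on this toy curve

# Merging all control points into one locks the GPU to a single
# frequency/voltage state, as described above:
locked = [(1700, 1000)] * 3
print(voltage_for(1500, locked), voltage_for(1800, locked))  # always 1000 mV
```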

With 16 GB of VRAM, the Radeon VII has more memory than any other graphics card below $1,000, twice that of the RTX 2080, yet this makes no measurable difference in any of our tests. Of course, if you look long and hard, you will be able to dig up cases where more memory helps. For example, AMD presents the case of frametime spikes in Far Cry 5 running at 4K with HDR10 and downscaled to 1440p, with dynamic resolution to achieve 60 FPS... not the most likely configuration. We're not saying that 16 GB is useless; it's definitely useful for niche markets like content creation, machine learning with large data sets, and, of course, everybody's darling, GPU crypto-mining with new memory-intensive algorithms. The problem for AMD's Radeon VII specifically is that it competes with the RTX 2080, which gets by with 8 GB of cheap GDDR6, whereas AMD has to use 16 GB of expensive HBM2 memory, which definitely eats into AMD's margins. As mentioned earlier in the review, HBM2 stacks don't come in odd sizes or smaller than 4 GB, so with four stacks (needed to achieve the 4096-bit bus), AMD has no option other than to make a 16 GB card.

16 GB could provide some future-proofing if VRAM requirements of games do indeed keep rising, but I think that progress is stalled for now by the memory sizes of consoles. If next-generation consoles come out soon and ship with 16 GB, we could see games using more VRAM, but developers will probably be careful not to drive requirements so high that nobody ends up buying their titles on PC for failing to meet the hardware requirements. Also consider that the Radeon VII doesn't have the shading power for 4K 60 FPS whether or not extra VRAM is available, so an upgrade to next-generation hardware might not be that many years away.

Before this launch, even NVIDIA's fourth-fastest RTX 20-series product, the $350 RTX 2060, was beating AMD's fastest card in performance. That changes today. Sold at $699, a price matching the RTX 2080, the Radeon VII seems pretty expensive, especially when you consider its shortcomings. In Europe, the Radeon VII will retail for around €729 including taxes, while the cheapest RTX 2080 can be found for €649. On the other hand, AMD includes a top-notch game bundle with the card: Resident Evil 2, Devil May Cry 5, and The Division 2 all come with the purchase, which certainly sweetens the deal. NVIDIA offers two titles, Battlefield V and Anthem. In our opinion, a price point of $599 would be more appropriate, as it would encourage potential owners to overlook the lower performance, higher noise levels, and other caveats. Another factor is the lack of DirectX Raytracing support, which could become an important feature going forward if developers adopt it more widely. At this time, only a single title, Battlefield V, supports DXR, but I have no doubt NVIDIA is putting all its weight behind raytracing to push this new technology and (at this time) unique selling point. DLSS is an NVIDIA-exclusive anti-aliasing technology that can significantly improve performance at a small image-quality cost, making the RTX 2080 fit for 4K 60 FPS. Microsoft's DirectML machine-learning framework could bring wider support for this concept, but it seems to be in its infancy (a technology preview in the Windows 10 Spring 2019 Update), and DirectML only provides the basic framework for machine learning, not the full, ready-to-ship solution DLSS offers, so we once more have to pray that developers are willing to invest the resources to add support.
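Put into numbers, the pricing argument looks like this (performance normalized to Radeon VII = 100, using the 14% gap from our 1440p summary):

```python
# Price-to-performance using this review's 1440p summary: the RTX 2080
# is 14% faster, and both cards carry a $699 MSRP.

def perf_per_dollar(perf: float, price_usd: float) -> float:
    return perf / price_usd

print(f"Radeon VII @ $699: {perf_per_dollar(100, 699):.3f}")  # ~0.143
print(f"RTX 2080   @ $699: {perf_per_dollar(114, 699):.3f}")  # ~0.163
print(f"Radeon VII @ $599: {perf_per_dollar(100, 599):.3f}")  # ~0.167
```

At $599, the Radeon VII would actually lead on pure price-to-performance, which is exactly the kind of headline number it needs to offset its other caveats.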

We would have loved for Radeon VII to be a success, but looking at our numbers, it seems NVIDIA will still get away with controlling high-end graphics card pricing even though it might not be able to justify it with RTX alone (as evidenced by their quarterly financials showing a weak response to the RTX 20-series). At a better price, such as $599, the Radeon VII, despite its shortcomings, could have forced NVIDIA to trim pricing of the RTX 2080 and RTX 2070, which would have spurred the upgrade itch in everyone and benefited the PC gaming market as a whole. AMD also needs to fill the vast price-to-performance gorge between the RX 590 and the Radeon VII with a real successor to the RX Vega 56. One way to do so would be a cut-down "Vega 20" GPU die mated to just two 4 GB HBM2 stacks at 512 GB/s, for performance rivaling the RTX 2070. Those with a Pascal (or even first-generation Vega) graphics card are yet to be given one good reason to upgrade.