
GPU Test System Update March 2021

So dirty GeForce FE cards. Are those used for mining?
Happens if you throw your cards around like this:
[attached photo]
 
Just curious: when you benchmark, do you set the pump and fan speed to 100%, or do you aim for a balance of cooling and noise? I know e.g. Gamers Nexus sets pump and fan speed to 100% in order not to be limited by cooling capacity. Personally I don't like that approach, but I get the idea.

By the way, would it be possible for you to do a quick Cinebench R20 test? I began to make a comparison between reviewers when Zen 3 was released, and I found quite a big discrepancy:

Especially when looking at the 5900X reviews:

Just curious where your system scores.
 
DLSS - which option was tested? Quality, Balanced, Performance?
 
All this resizable BAR talk makes me wonder. When AMD enables support for Ryzen 3000 CPUs to use it in AGESA 1.2.0.2, would I be able to enable it on my 3900X with my RTX 3070 when its resizable bar vbios is released?

Or is it only for RX 6000 cards?
 
would I be able to enable it on my 3900X with my RTX 3070 when its resizable bar vbios is released?
Yes, it works on any platform. You need to boot in UEFI mode, enable the Resizable BAR option in the motherboard BIOS, have the NVIDIA VBIOS update, and have the new NVIDIA driver.
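On Linux, one rough way to check whether Resizable BAR actually took effect after all those steps is to look at the GPU's BAR sizes in `lspci -vv` output. A minimal sketch; the sample text and the >256 MB heuristic are my assumptions, not an official check:

```python
import re

# Abbreviated, hypothetical `lspci -vv` output for an NVIDIA GPU;
# real output varies by kernel and pciutils version.
SAMPLE_LSPCI = """\
01:00.0 VGA compatible controller: NVIDIA Corporation GA104 [GeForce RTX 3070]
	Region 1: Memory at 6000000000 (64-bit, prefetchable) [size=8G]
	Capabilities: [bb0 v1] Physical Resizable BAR
		BAR 1: current size: 8GB, supported: 64MB 128MB 256MB 512MB 1GB 2GB 4GB 8GB
"""

def rebar_active(lspci_text: str) -> bool:
    """Heuristic: Resizable BAR is in effect when the prefetchable BAR
    covers (roughly) all of VRAM instead of the legacy 256 MB window."""
    m = re.search(r"BAR 1: current size: (\d+)(MB|GB)", lspci_text)
    if not m:
        return False
    size_mb = int(m.group(1)) * (1024 if m.group(2) == "GB" else 1)
    return size_mb > 256

print(rebar_active(SAMPLE_LSPCI))
```

In practice you would feed it the output of `lspci -vvs <your GPU's PCI address>` run as root.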
 
Yes, it works on any platform. You need to boot with UEFI, enable the resizable BAR option in mobo BIOS, have the NVIDIA VBIOS update and have the new NVIDIA driver
Nice, will be patiently waiting for my card's new VBIOS to release.
 
Multi-GPU has been the subject of several previous special reviews, but nowadays it's simply not worth it; it's a huge waste of money due to the lack of game support.
The last review I can find is from 2.5 years ago, with the 2080 Ti and NVLink. The last AMD one was CrossFire RX 480, almost 5 years ago. Buying a GPU right now isn't worth the money anyway; only something like 0.2% of people are getting them.

I'm only curious because this would shift these cards from a GPU bottleneck to a CPU bottleneck, where driver overhead currently looks like a problem mostly on NVIDIA.

Dual 6800 XT test (I don't think they used ray tracing in any test)

Nvidia Has a Driver Overhead Problem, GeForce vs Radeon on Low-End CPUs

AMD vs NV Drivers: A Brief History and Understanding Scheduling & CPU Overhead

The lack of games with both ray tracing and multi-GPU support is annoying; there are only two or three I know of: Rise of the Tomb Raider, Shadow of the Tomb Raider, and Deus Ex: Mankind Divided.

I think DX12 multi-GPU with ray tracing is a complete failure because of this.
 
Impressive job! But your Relative Performance charts on the GPU spec pages differ from these current numbers. Is there a plan to update those too with the new data? And would there be a better way to display Relative Performance than today's, given that most GPUs sit in the 1080p category while the better ones are in the 4K category? What if one wants to buy a top card for 1080p and not 4K? Finally, thanks for these; general charts are always a good start for measuring performance across product stacks.
 
But your Relative Performance charts on the GPU spec pages differ from these current numbers. Is there a plan to update those too with the new data?
Yeah that is planned.

And would there be a better way to display Relative Performance than today's, given that most GPUs sit in the 1080p category while the better ones are in the 4K category? What if one wants to buy a top card for 1080p and not 4K?
Any suggestions? Obviously the mix of 1080p and 4K performance data is not perfect, but it's the best we came up with. The primary objective for the GPU Database is to keep the chart simple and give a basic understanding of performance. There's no way one number can be 100% accurate. If that were possible, GPU reviews would be one chart, not 20+ games :)
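For what it's worth, the usual way to boil many games down into one relative-performance number is a geometric mean of per-game ratios against a reference card. A minimal sketch with made-up FPS figures; this is an illustration of the technique, not TPU's actual data or formula:

```python
from math import prod

# Hypothetical per-game FPS results; both the game count and
# the numbers are invented for illustration.
fps = {
    "card_a": [120.0, 88.0, 143.0],
    "card_b": [100.0, 80.0, 110.0],  # reference card
}

def relative_performance(card: str, reference: str) -> float:
    """Geometric mean of per-game FPS ratios vs. a reference card.
    Unlike an arithmetic mean of raw FPS, one outlier game
    cannot dominate the final score."""
    ratios = [a / b for a, b in zip(fps[card], fps[reference])]
    return prod(ratios) ** (1 / len(ratios))

print(f"{relative_performance('card_a', 'card_b'):.0%}")
```

The resolution mix issue remains, of course: whichever per-game numbers go into the ratios (1080p, 4K, or a blend) determines what the single number means.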
 
Yeah that is planned.


Any suggestions? Obviously the mix of 1080p and 4K performance data is not perfect, but it's the best we came up with. The primary objective for the GPU Database is to keep the chart simple and give a basic understanding of performance. There's no way one number can be 100% accurate. If that were possible, GPU reviews would be one chart, not 20+ games :)
I do have a suggestion, though I'm not sure how easy it would be to implement: add three clickable resolution pickers to the Relative Performance tab, for 1080p, 1440p, and 4K, where single-clicking each one dynamically shows the dataset for that resolution. That would also make the list a lot fancier function-wise. I guess that's not something you can easily do, but it would be awesome :D
 
Good idea! I always suspected this NVIDIA overhead would become serious in the long term. Since the Kepler architecture their software stack has become bloated, and while they have made good strides with software scheduling, the constant change from 192 CUDA cores per SM to 128, then to 64, and now back to 128 suggests something fishy is going on.
 
holy shit at AMD and the Radeon VII. wtf

Comedy gold.
" Luckily, I found the updated BIOS in our VGA BIOS collection, and the card is working perfectly now."
comedy gold indeed
 
I think it would have been prudent to wait for Rocket Lake, especially less than a month before its release, and considering that the jump from the 9900K to the 5800X brings maybe a 2-3% difference. Honestly it would have been fine just sticking with the 9900K (you could probably even get away with an 8700K still, delidded and really well overclocked, of course) until Alder Lake (and I guess maybe also Zen 4) lands. When the differences are so small, it makes more sense to keep the same testing platform for better consistency and easier comparison between several dozen GPUs across several generations.
 