
Is Radeon REALLY better for old PCs? | Driver Overhead

Joined
Sep 10, 2018
Messages
6,321 (2.91/day)
Location
California
System Name His & Hers
Processor R7 5800X/ R7 7950X3D Stock
Motherboard X670E Aorus Pro X/ROG Crosshair VIII Hero
Cooling Corsair h150 elite/ Corsair h115i Platinum
Memory Trident Z5 Neo 6000/ 32 GB 3200 CL14 @3800 CL16 Team T Force Nighthawk
Video Card(s) Evga FTW 3 Ultra 3080ti/ Gigabyte Gaming OC 4090
Storage lots of SSD.
Display(s) A whole bunch OLED, VA, IPS.....
Case 011 Dynamic XL/ Phanteks Evolv X
Audio Device(s) Arctis Pro + gaming Dac/ Corsair sp 2500/ Logitech G560/Samsung Q990B
Power Supply Seasonic Ultra Prime Titanium 1000w/850w
Mouse Logitech G502 Lightspeed/ Logitech G Pro Hero.
Keyboard Logitech - G915 LIGHTSPEED / Logitech G Pro
I'd say yes and no. While pairing an older CPU with a 6000/7000-series AMD card might give you a better experience in some games versus its direct Nvidia competitor, you'll likely be bottlenecked regardless of which card you use, so a full system upgrade is probably the better option; then just get whatever GPU you want.


Bottom line: I wouldn't buy an AMD card for this reason alone. It would have to tick all the other performance boxes I'd be looking for in a GPU before I'd want to buy it.
 
Joined
Dec 25, 2020
Messages
5,774 (4.33/day)
Location
São Paulo, Brazil
System Name "Icy Resurrection"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard ASUS ROG MAXIMUS Z790 APEX ENCORE
Cooling Noctua NH-D15S upgraded with 2x NF-F12 iPPC-3000 fans and Honeywell PTM7950 TIM
Memory 32 GB G.SKILL Trident Z5 RGB F5-6800J3445G16GX2-TZ5RK @ 7600 MT/s 36-44-44-52-96 1.4V
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 500 GB WD Black SN750 SE NVMe SSD + 4 TB WD Red Plus WD40EFPX HDD
Display(s) 55-inch LG G3 OLED
Case Pichau Mancer CV500 White Edition
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Classic Intellimouse
Keyboard Galax Stealth STL-03
Software Windows 11 IoT Enterprise LTSC 24H2
Benchmark Scores I pulled a Qiqi~
The situation is always evolving, but in short, yes: Radeon's design is friendlier towards machines with limited CPU power, with the tradeoff of having less potential when ample CPU power is available. Like I mentioned on the Valley thread, AMD's DirectX 11 driver operates in an immediate context. This means that it issues commands through a limited number of CPU threads, primarily just one. These commands are then processed by the GPU's hardware command processor itself. It is highly optimized, and thus quite fast, but there are situations where this approach doesn't work as well, notably in Bethesda's Creation Engine games.

Nvidia's driver, on the other hand, is capable of detecting when an immediate context is created, generating driver command lists, and deferring those commands across multiple CPU threads. This results in increased CPU load, but significantly higher potential, as it can handle a much, much, *much* higher number of draw calls than an immediate context could. While a little older now, this is a good technical explanation by Intel; I had been talking about this with another forum member shortly before you joined:


Support for driver command lists is detected by GPU-Z and shown in the Advanced > DirectX 11 tab:



3DMark's old API Overhead test is a useful tool to show the result of this behavior: you will find that NVIDIA GPUs score substantially higher in that benchmark, despite that not reflecting in real-world applications. That's why the UL guys dropped support for it; it's a very technical tool which doesn't really serve to compare different setups. The DirectX SDK samples also contain examples where this is quite apparent.

In DirectX 12 or Vulkan, this is irrelevant: the two should be significantly closer together, with NVIDIA's driver again using a little more CPU in exchange for slightly stronger performance where possible, but nowhere near the drastic difference between vendors in DirectX 11. This is because they are low-level APIs and this abstraction is handled in a completely different manner. Also, for the record: Intel does not support driver command lists in their Gen 12.2 architecture integrated graphics (such as UHD 770). Unsure if Arc does, but I would guess it does not either. Would appreciate it if someone could clarify.
 
Joined
May 23, 2023
Messages
66 (0.15/day)
Thank you for this information, very helpful!
 
Joined
Aug 20, 2007
Messages
21,093 (3.40/day)
System Name Pioneer
Processor Ryzen R9 7950X
Motherboard GIGABYTE Aorus Elite X670 AX
Cooling Noctua NH-D15 + A whole lotta Sunon and Corsair Maglev blower fans...
Memory 64GB (4x 16GB) G.Skill Flare X5 @ DDR5-6000 CL30
Video Card(s) XFX RX 7900 XTX Speedster Merc 310
Storage Intel 905p Optane 960GB boot, +2x Crucial P5 Plus 2TB PCIe 4.0 NVMe SSDs
Display(s) 55" LG 55" B9 OLED 4K Display
Case Thermaltake Core X31
Audio Device(s) TOSLINK->Schiit Modi MB->Asgard 2 DAC Amp->AKG Pro K712 Headphones or HDMI->B9 OLED
Power Supply FSP Hydro Ti Pro 850W
Mouse Logitech G305 Lightspeed Wireless
Keyboard WASD Code v3 with Cherry Green keyswitches + PBT DS keycaps
Software Gentoo Linux x64 / Windows 11 Enterprise IoT 2024
Just don't pair an old PC with an old, unsupported Radeon, as until about this year, both DX11 and OpenGL overhead was absurdly bad.
 
Joined
Sep 21, 2020
Messages
1,555 (1.09/day)
Processor 5800X3D -30 CO
Motherboard MSI B550 Tomahawk
Cooling DeepCool Assassin III
Memory 32GB G.SKILL Ripjaws V @ 3800 CL14
Video Card(s) ASRock MBA 7900XTX
Storage 1TB WD SN850X + 1TB ADATA SX8200 Pro
Display(s) Dell S2721QS 4K60
Case Cooler Master CM690 II Advanced USB 3.0
Audio Device(s) Audiotrak Prodigy Cube Black (JRC MUSES 8820D) + CAL (recabled)
Power Supply Seasonic Prime TX-750
Mouse Logitech Cordless Desktop Wave
Keyboard Logitech Cordless Desktop Wave
Software Windows 10 Pro
Joined
Dec 10, 2022
Messages
481 (0.78/day)
System Name The Phantom in the Black Tower
Processor AMD Ryzen 7 5800X3D
Motherboard ASRock X570 Pro4 AM4
Cooling AMD Wraith Prism, 5 x Cooler Master Sickleflow 120mm
Memory 64GB Team Vulcan DDR4-3600 CL18 (4×16GB)
Video Card(s) ASRock Radeon RX 7900 XTX Phantom Gaming OC 24GB
Storage WDS500G3X0E (OS), WDS100T2B0C, TM8FP6002T0C101 (x2) and ~40TB of total HDD space
Display(s) Haier 55E5500U 55" 2160p60Hz
Case Ultra U12-40670 Super Tower
Audio Device(s) Logitech Z200
Power Supply EVGA 1000 G2 Supernova 1kW 80+Gold-Certified
Mouse Logitech MK320
Keyboard Logitech MK320
VR HMD None
Software Windows 10 Professional
Benchmark Scores Fire Strike Ultra: 19484 Time Spy Extreme: 11006 Port Royal: 16545 SuperPosition 4K Optimised: 23439
Adrenalin drivers do have less CPU overhead but not all Radeons are good for older PCs. For example, the RX 6500 XT has a PCI-Express 4.0 x4 interface which means that motherboards with PCI-Express v3.0 or older will choke the card's performance. If you're comparing a GeForce card with a Radeon and they both have x16 interfaces, then yes, the Radeon is absolutely the better choice.
 
Joined
Jan 27, 2020
Messages
15 (0.01/day)

You seem to be quite right! I upgraded my GPU from a 4 GB Nvidia GTX 960 to a 12 GB AMD 6700 XT on an i5 4690K at 4.3 GHz (4 cores without hyperthreading, so 4 threads total), with 2x8 GB of 2400 MHz (or should I say 2400 MT/s) CL11 RAM and a 960 Evo NVMe drive, and the AMD driver seems to significantly reduce stuttering in Assetto Corsa Competizione. The extra GPU power and video memory probably help too, as the game usually consumes about 7 GB with mixed settings and 1440p internal resolution (133% resolution scale); with the GTX 960 I was using similar if not identical settings at 1080p internal resolution, and the video RAM usage isn't much lower that way, still well above 4 GB. My target is 60 fps, as I'm aware of the CPU/system RAM limitations and my monitor (a TV, really) is 60 Hz. A few months back I upgraded from 2x8 GB of 1600 CL10 RAM after one of the sticks started failing; that upgrade was noticeable, and this one was too, probably more so.

Unfortunately, the ideal would be a toggle in the driver settings to choose either the Nvidia approach for older, poorly threaded (and hopefully not very CPU-demanding) games, or the less CPU-hungry AMD approach, but I don't see that happening anytime soon. It may be placebo, but I feel like I lost a bit of performance in a very old DirectX 9 game, NFS: Most Wanted from 2005, probably because there the Nvidia driver is capable of squeezing more draw calls out of it.

The performance loss at the lower end of the CPU spectrum due to the Nvidia drivers can be very noticeable in more or less recent, demanding games. Look at this comparison in Battlefield V, particularly the frametime graph:


PS: It's clearly not ideal to pair the 6700 XT with an i5 4690K, but I couldn't afford a whole PC upgrade; still, the upgrade was noticeable.
 

Space Lynx

Astronaut
Joined
Oct 17, 2014
Messages
16,671 (4.64/day)
Location
Kepler-186f
this is interesting, never even heard of any of this until I just stumbled on this thread.

i doubt it matters all that much, but it is nice to know. i like my all AMD setup, everything seems very fluid so far regardless of what I throw at it
 
Joined
Nov 13, 2007
Messages
10,451 (1.71/day)
Location
Austin Texas
Processor 13700KF Undervolted @ 5.4, 4.8Ghz Ring 190W PL1
Motherboard MSI 690-I PRO
Cooling Thermalright Peerless Assassin 120 w/ Arctic P12 Fans
Memory 48 GB DDR5 7600 MHZ CL36
Video Card(s) RTX 4090 FE
Storage 2x 2TB WDC SN850, 1TB Samsung 960 prr
Display(s) Alienware 32" 4k 240hz OLED
Case SLIGER S620
Audio Device(s) Yes
Power Supply Corsair SF750
Mouse Xlite V2
Keyboard RoyalAxe
Software Windows 11
Benchmark Scores They're pretty good, nothing crazy.
Low-end Radeon gaming boxes are very strong; the 6700 XT rigs that I built felt snappy as hell.
 
Joined
Dec 25, 2020
Messages
5,774 (4.33/day)


An option to disable threaded optimization in the Nvidia drivers seems to exist, but I've never tried it.
 

Space Lynx

Astronaut
Joined
Oct 17, 2014
Messages
16,671 (4.64/day)
Location
Kepler-186f
An option to disable threaded optimization in the Nvidia drivers seems to exist, but I've never tried it.

interesting, this is an option in Sid Meier's Civilization: Beyond Earth, i was playing it the other day and noticed it said "threaded optimization" in settings, and i was like wth is that lol
 
Joined
May 12, 2017
Messages
2,207 (0.83/day)
My current old-PC pairing is a Phenom II X6 1100T with a Vega 56 Nano.

Because I'm a hardware modder, I can extract more performance than most users from old hardware. It's built to be extraordinarily snappy.
 
Joined
Feb 18, 2005
Messages
5,581 (0.78/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) 3x AOC Q32E2N (32" 2560x1440 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G602
Keyboard Logitech G613
Software Windows 10 Professional x64

Why on God's green Earth would you pair an ancient, underpowered CPU like that with a far faster GPU? That 6700 XT is probably spending 50% or more of its time idle, waiting for your CPU to catch up.

My current old-PC pairing is a Phenom II X6 1100T with a Vega 56 Nano.

Because I'm a hardware modder, I can extract more performance than most users from old hardware. It's built to be extraordinarily snappy.
Cool story bro!
 
Joined
Jan 8, 2017
Messages
9,270 (3.33/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
This is wild, because it used to be that AMD had way bigger driver overhead. I wonder how Nvidia screwed this up.
 
Joined
Aug 29, 2005
Messages
7,176 (1.04/day)
Location
Stuck somewhere in the 80's Jpop era....
System Name Lynni PS \ Lenowo TwinkPad L14 G2
Processor AMD Ryzen 7 7700 Raphael \ i5-1135G7 Tiger Lake-U
Motherboard ASRock B650M PG Riptide Bios v. 2.02 AMD AGESA 1.1.0.0 \ Lenowo BDPLANAR Bios 1.68
Cooling Noctua NH-D15 Chromax.Black (Only middle fan) \ Lenowo C-267C-2
Memory G.Skill Flare X5 2x16GB DDR5 6000MHZ CL36-36-36-96 AMD EXPO \ Willk Elektronik 2x16GB 2666MHZ CL17
Video Card(s) Asus GeForce RTX™ 4070 Dual OC GPU: 2325-2355 MEM: 1462| Intel® Iris® Xe Graphics
Storage Gigabyte M30 1TB|Sabrent Rocket 2TB| HDD: 10TB|1TB \ WD RED SN700 1TB
Display(s) LG UltraGear 27GP850-B 1440p@165Hz | LG 48CX OLED 4K HDR | Innolux 14" 1080p
Case Asus Prime AP201 White Mesh | Lenowo L14 G2 chassis
Audio Device(s) Steelseries Arctis Pro Wireless
Power Supply Be Quiet! Pure Power 12 M 750W Goldie | 65W
Mouse Logitech G305 Lightspeedy Wireless | Lenowo TouchPad & Logitech G305
Keyboard Ducky One 3 Daybreak Fullsize | L14 G2 UK Lumi
Software Win11 Pro 23H2 UK | Arch (Fan)
Benchmark Scores 3DMARK: https://www.3dmark.com/3dm/89434432? GPU-Z: https://www.techpowerup.com/gpuz/details/v3zbr
It depends. On my Windows XP system I'm using an MSI GTX 750 Ti TwinFrozr (I only wish it had a DP port) with the latest XP-supported driver, and it basically breaks no sweat in any of the games I play on that machine. It gives a good experience in both older and much older titles, compared to the issues AMD cards from that era can have.
 

ARF

Joined
Jan 28, 2020
Messages
4,568 (2.74/day)
Location
Ex-usa | slava the trolls
This is wild, because it used to be that AMD had way bigger driver overhead. I wonder how Nvidia screwed this up.

It's weird when at 4K the CPU sits almost idle at ~10% utilisation. That's just a waste of resources. In this sense, the bigger the driver overhead, the better.
 
Joined
Jan 27, 2020
Messages
15 (0.01/day)
Why on God's green Earth would you pair an ancient, underpowered CPU like that with a far faster GPU? That 6700 XT is probably spending 50% or more of its time idle, waiting for your CPU to catch up.

I couldn't afford a whole PC upgrade, and having seen this driver performance gap when CPU-limited, plus the fact that I wasn't far from reaching a locked 60 fps in the most CPU-demanding game I was playing, I just bought it second-hand with warranty, and the performance difference is noticeable. On top of that, I should now have enough GPU power for a good amount of time, so I'm glad I bought it.

One thing that's also noticeable is that when I tested uncapping the fps, the game didn't suddenly turn into an unplayable stuttery mess, something I would have expected with the Nvidia driver and the GTX 960. It runs at about 75-90 fps when I'm alone or without many cars around me, and about 60-70 fps when I'm surrounded by several other cars. It's not a very even experience, but the fact that it wasn't straight-up unplayable was a pleasant surprise. IIRC I tested this uncapped with the in-game time progression at 1x (real-time progression) on Zandvoort with 24 AI cars in a dry race, which isn't the most demanding scenario the game can produce, but it still was surprising.

And I tried disabling threaded optimization with the GTX 960 too, @Dr. Dro. It seemed to me the performance was a little improved, but I wasn't sure; maybe it was a placebo effect. Still, it's a setting Nvidia users should try if they're CPU-limited in a certain scenario. I could have tested it further, but I guess I wanted to upgrade anyway. :)
 
Joined
Aug 10, 2023
Messages
263 (0.70/day)
Processor Ryzen 5800X3D (undervolted)
Memory 2x16 GB G.Skill DDR4 3600 CL16
Video Card(s) RTX 4090 Phantom (stock)
Display(s) 27" 1440p 144Hz VA main (HDR600), 55" LG OLED Evo secondary for controller based games
Generally:

Radeon has less driver overhead because DX12 / Vulkan (low-level API) work is handled directly in hardware to a greater degree than on Nvidia, which takes a more software-based approach to it.

This means weaker CPUs run DX12 / VK games better on Radeon. But generally, DX12 / VK will work better for weaker CPUs anyway, as those APIs are designed to reduce driver overhead or eliminate it altogether.
 
Joined
Jul 30, 2019
Messages
2,848 (1.54/day)
System Name Not a thread ripper but pretty good.
Processor Ryzen 9 5950x
Motherboard ASRock X570 Taichi (revision 1.06, BIOS/UEFI version P5.50)
Cooling EK-Quantum Velocity, EK-Quantum Reflection PC-O11, EK-CoolStream PE 360, XSPC TX360
Memory Micron DDR4-3200 ECC Unbuffered Memory (4 sticks, 128GB, 18ASF4G72AZ-3G2F1)
Video Card(s) XFX Radeon RX 5700 & EK-Quantum Vector Radeon RX 5700 +XT & Backplate
Storage Samsung 2TB & 4TB 980 PRO, 2TB 970 EVO Plus, 2 x Optane 905p 1.5TB (striped), AMD Radeon RAMDisk
Display(s) 2 x 4K LG 27UL600-W (and HUANUO Dual Monitor Mount)
Case Lian Li PC-O11 Dynamic Black (original model)
Power Supply Corsair RM750x
Mouse Logitech M575
Keyboard Corsair Strafe RGB MK.2
Software Windows 10 Professional (64bit)
Benchmark Scores Typical for non-overclocked CPU.
This is wild, because it used to be that AMD had way bigger driver overhead. I wonder how Nvidia screwed this up.
Perhaps it's by design since you can throw a little more power at the Intel CPU and it won't matter?
 
Joined
Jan 27, 2020
Messages
15 (0.01/day)
Perhaps it's by design since you can throw a little more power at the Intel CPU and it won't matter?

The thing is, it seems this performance delta doesn't only show up when low-end CPUs are maxed out (or very close to it) on all threads, as I've come across a video in which an RX 480 beats an RTX 3060 in average and min* framerate in Hogwarts Legacy and in The Witcher 3 remaster (in the latter case, also in the 1% lows) with a Ryzen 5 3600X:


*PS: I still find it weird that the min figure > the 1% low; maybe "low average", "sustained min" or something like that would be a better label for what it measures, because when I read a minimum figure I expect the absolute lowest frametime, translated into frames per second.
 

Ruru

S.T.A.R.S.
Joined
Dec 16, 2012
Messages
11,903 (2.79/day)
Location
Jyväskylä, Finland
System Name 4K-gaming
Processor AMD Ryzen 7 5800X
Motherboard Gigabyte B550M Aorus Elite
Cooling Arctic Freezer 50
Memory 48GB Kingston DDR4-3200C16
Video Card(s) Asus GeForce RTX 3080 TUF OC 10GB
Storage ~3TB SSDs + 6TB external HDDs
Display(s) Acer 27" 4K120 IPS + Lenovo 32" 4K60 IPS
Case Corsair 4000D Airflow White
Audio Device(s) Asus TUF H3 Wireless
Power Supply EVGA Supernova G2 750W
Mouse Logitech MX518 + Asus TUF P1 mousepad
Keyboard Roccat Vulcan 121 AIMO
VR HMD Oculus Rift CV1
Software Windows 11 Pro
Benchmark Scores It runs Crysis
Damn, I totally forgot this fact. I have an XP rig with a Phenom 9750 and a GTX 660 3GB; I have a spare HD 7970 which would be a better choice, I guess.
 
Joined
Aug 20, 2007
Messages
21,093 (3.40/day)
This is wild, because it used to be that AMD had way bigger driver overhead. I wonder how Nvidia screwed this up.
They didn't. AMD just really got its act together only recently, but they did so with such advancements that it left Nvidia behind.

Nvidia is still as good as it's always been, overhead-wise. But AMD is now slightly better.
 
Joined
Aug 10, 2023
Messages
263 (0.70/day)
They didn't. AMD just really got its act together only recently, but they did so with such advancements that it left Nvidia behind.

Nvidia is still as good as it's always been, overhead-wise. But AMD is now slightly better.
With DX11 etc. they did; with low-level APIs they already had it. They basically introduced that stuff with Mantle. DX11 etc. is much more driver-dependent, so it took them more time.
 
Joined
Dec 25, 2020
Messages
5,774 (4.33/day)
This is wild, because it used to be that AMD had way bigger driver overhead. I wonder how Nvidia screwed this up.

It isn't a screw-up, it's a tradeoff that Nvidia opted for, (wisely) deducing that in the future, CPUs with an ample number of threads and high clock frequencies would be available. This means that DirectX 11 games are all but guaranteed to scale better on Nvidia hardware as long as there is CPU power for the driver to batch and defer commands to the GPU.

On AMD, the driver interfaces directly with the GPU, which has a dedicated command scheduler. The new DirectX and OpenGL-on-PAL drivers that were released are highly optimized, but the same limitations apply: single-threaded immediate context only, maximum number of draw calls before the frame rate tanks highly dependent on the GPU's command processor, and so on.

With DX11 etc. they did; with low-level APIs they already had it. They basically introduced that stuff with Mantle. DX11 etc. is much more driver-dependent, so it took them more time.

AMD still doesn't, and likely never will, support the features I mentioned in my post above. Check the GitHub link in my reply to Vya; you'll be able to grasp a little of what makes AMD's newer driver releases tick. This is what's important, anyway:

[image: PAL driver stack diagram]
 